Eyes Wide Open When Working With AI: How to Allocate AI Risks

Posted by Tom McKeever | Jan 28, 2025

Artificial intelligence technologies are everywhere in business.

AI promises next-level opportunities for growth and innovation, but amid the excitement around this transformational technology, it's important to understand how using AI is different, including its unique risks. Because AI is evolving so rapidly, a trusted advisor can help you keep pace with those changes.

Today we'll examine what makes the current state of AI new and different, and how to adequately address its risks.

Addressing AI Concerns, Rights, and Risks: Why Traditional Licensing Agreements Fall Short

Traditionally, when an organization licenses software or technology from a vendor, a number of standard considerations are outlined in the licensing agreement. These agreements typically define usage rights, warranties, limitations of liability, uptime guarantees, and other performance terms.

But licensing AI introduces a whole new range of AI concerns that must be addressed in the licensing agreement. Failure to do so can expose licensees to unexpected liabilities, litigation and other negative surprises.

Some AI-specific concerns in software agreements include:

  • Intellectual Property Ownership - Who owns the prompts you submit to an AI model, and who owns the output it generates: the licensee or the licensor? These fundamental questions must be answered before entering into a licensing agreement for AI software.

  • AI Performance Issues - With a standard software license, it's relatively straightforward to define uptime guarantees and other performance metrics in an agreement. But with AI software, output quality depends heavily on how the model was trained and on its training data, so results can vary widely in quality.

  • Data Flow and Privacy - With AI software, large amounts of data of varied types flow among the licensee, its customers, developers, and the AI vendor. This creates significant privacy and security risks that must be accounted for in the service agreement.

  • Data Training - Licensors can use licensee and licensee-customer data to train their AI models. Are you okay with that? Using your data to improve a model or its prompts for the benefit of other customers raises data ownership and privacy concerns.

These are just some examples of the added complexity of working with AI software. As the legal landscape around AI evolves and matures, it is critical that these and other new questions be identified and addressed in the licensing agreement to avoid future complications.

Understanding Artificial Intelligence Risks

AI-powered software products typically fall into one of two categories: 1) software that uses third-party AI services hosted offsite; and 2) software that uses custom AI, whether in the cloud or on premises.

Each comes with different forms of AI risk. Let's break down the AI concerns of each type. 

AI Risks Using Third-Party Software

Many software vendors license AI technologies from companies like OpenAI, Microsoft, Google, Amazon, and others. That software is in turn licensed to their customers, i.e., you.

However, the nature of these relationships raises important concerns that must be considered when licensing AI-powered software products.

Some of the most pressing concerns include:

Liability for AI Errors

As an emerging technology, AI systems are susceptible to errors, but responsibility for those missteps can be difficult to pinpoint. That makes it essential to include clear provisions in the service agreement that protect your interests, including disclaimers about the AI's capabilities, indemnification clauses, and other limitations of liability.

For example, a hospital may use medical imaging software that relies on a third-party AI for diagnosis. If the AI misdiagnoses a patient's illness, who is responsible? The third party that trained the AI, the developer that built the imaging software, or the hospital that failed to provide human oversight of the AI's diagnosis?

Determining liability in such a scenario can be difficult unless the service agreements clearly set expectations and responsibilities for each party.

Data Privacy and Security

A material AI concern is how data is handled as it flows from the software developer to the third-party AI service and back, and eventually to the customer. During this transit, and while the data sits on third-party servers, vulnerabilities in the infrastructure of the third party or the software developer can be exploited, leaving the data susceptible to security breaches and hacks.

Specific provisions requiring robust data encryption, as well as comprehensive incident response plans, should be included in the service agreement.
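
To make the encryption provision concrete, here is a minimal sketch of what encryption at rest can look like, assuming Python's third-party `cryptography` package; the sample record and the key handling are purely illustrative, not a definitive implementation. The idea is that data stays encrypted while it sits in intermediary infrastructure and is decrypted only at the point of use.

```python
# Minimal sketch: keep sensitive data encrypted at rest while it sits in
# intermediary infrastructure, decrypting only at the point of use.
# Assumes the third-party `cryptography` package (pip install cryptography);
# the record and key handling below are illustrative, not production-grade.
from cryptography.fernet import Fernet

# In practice, the key would come from a key-management service and be
# rotated on a schedule the service agreement specifies; never hard-code it.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"Patient presents with persistent cough and fatigue."

# Encrypt before the record is stored or forwarded; a party holding only
# the ciphertext cannot expose the data if its infrastructure is breached.
ciphertext = cipher.encrypt(record)

# Decrypt only at the moment of use, e.g., just before the record is sent
# to the AI service over a TLS-protected connection.
plaintext = cipher.decrypt(ciphertext)
assert plaintext == record
```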

IP Considerations

In the short time AI technologies have been available, we've already seen significant unresolved controversies regarding intellectual property rights. 

When AI-powered software generates potentially valuable output, such as code, research analysis, or even creative works, ownership of that AI-generated content can become a point of contention. Service agreements should clearly define who owns the output and what rights, if any, other parties have to it.

A related AI risk involves large language models (LLMs) trained on massive datasets that include copyrighted works. These models can produce output that closely resembles the works they were trained on, opening the door to possible copyright infringement.

Similarly, if a customer's data is used to train the third-party provider's general AI models, that data may end up benefiting the customer's competitors. Provisions must be negotiated and documented to define the permitted uses of customer IP and data.

Software Using Custom-Built AI

Software that uses custom AI technology can give customers greater control and flexibility over features and data than relying on a third-party provider. But the party primarily responsible for providing and maintaining the infrastructure behind the AI services also assumes increased responsibility and risk.

The Role of Gatekeeper

In the AI context, the gatekeeper role refers to the responsibility for ensuring the AI system is used legally and ethically. Most often this responsibility falls to whoever hosts the AI software: either the software vendor, if it hosts the AI system in the cloud, or the customer, if it hosts the system on its own IT infrastructure.

Gatekeepers assume responsibility for things like preventing bias in AI outputs, conducting regular audits and testing of the system, and checking data quality. Human oversight is often used to review and correct AI decisions when needed.

The greater the AI's potential impact on society, in areas such as hiring, financial decision-making, and medical care, the more important the gatekeeper role becomes in balancing AI risk.

Therefore, gatekeeper roles and responsibilities must be explicitly defined within the service agreement.
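
As one illustration of what such gatekeeping can look like in practice, below is a minimal sketch of a common audit: comparing an AI hiring model's positive-outcome rates across applicant groups, a demographic-parity check. The decision data and the 20-point threshold are hypothetical; a real agreement would specify the exact metric and threshold.

```python
# Minimal sketch of one gatekeeper audit: compare an AI hiring model's
# positive-outcome rates across applicant groups ("demographic parity").
# The decision data and the escalation threshold are hypothetical.
from collections import defaultdict

decisions = [  # (applicant_group, model_recommended_hire)
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, hired in decisions:
    totals[group] += 1
    positives[group] += hired

rates = {g: positives[g] / totals[g] for g in totals}
print("Positive-outcome rate by group:", rates)

# Flag the system for human review if the gap between groups exceeds an
# agreed-upon threshold (here, 20 percentage points).
if max(rates.values()) - min(rates.values()) > 0.20:
    print("Disparity exceeds threshold; escalate for human review.")
```

A well-drafted agreement would also specify who runs such audits, how often, and what remediation follows a failed check.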

Service Level Agreements and Performance Guarantees

A key risk of using AI lies in how performance guarantees are measured and evaluated. In traditional software agreements, performance standards like uptime are used to guarantee that the customer receives a certain level of service.

However, AI performance is often more nuanced than that of traditional software. For example, as the real-world data a model encounters changes over time, the model's effectiveness can degrade, a phenomenon known as data drift.

Beyond simply being online, AI models are judged on the accuracy, reliability, and fairness of their outputs. Maintaining those qualities requires that AI-powered software be retrained on new data to retain its effectiveness.

Service level agreements should account for these AI risks and include provisions requiring that AI models be regularly monitored and maintained so they continue to produce accurate and reliable results.
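
As a sketch of what such a monitoring provision might translate to in practice, the snippet below tracks a model's accuracy over weekly windows of labeled production data and flags when it falls below a contractually agreed floor. The 0.90 floor and the sample data are hypothetical.

```python
# Minimal sketch of drift monitoring an SLA might require: track model
# accuracy over rolling windows of labeled production data and alert
# when it drops below an agreed floor. All numbers are illustrative.

ACCURACY_FLOOR = 0.90  # hypothetical SLA threshold

def window_accuracy(predictions, labels):
    """Fraction of predictions in this window that match ground truth."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Weekly prediction/label pairs; accuracy degrades as newer data drifts
# away from the distribution the model was trained on.
weekly_windows = [
    (["a", "b", "a", "a"], ["a", "b", "a", "a"]),  # week 1: 1.00
    (["a", "b", "b", "a"], ["a", "b", "a", "a"]),  # week 2: 0.75
]

for week, (preds, labels) in enumerate(weekly_windows, start=1):
    acc = window_accuracy(preds, labels)
    print(f"week {week}: accuracy {acc:.2f}")
    if acc < ACCURACY_FLOOR:
        print(f"  below SLA floor of {ACCURACY_FLOOR}; trigger retraining review")
```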

Tips for Successfully Navigating Artificial Intelligence Risks

Now that we understand some of the key issues, here are our most important tips for navigating artificial intelligence risks. 

  1. When entering into an AI software service agreement, ensure that IP ownership rights regarding both AI prompts and AI output are clearly defined.

  2. Whenever possible, use human oversight to validate AI output. 

  3. Have policies in place for implementing AI within your organization. Identify which AI risks matter most, and least, to your specific business.

  4. Make data security and privacy a high priority. Implement effective security measures designed to work in the context of AI.

  5. When working with software that uses third-party AI, regularly audit its performance. Request access to the vendor's performance metrics and incident reports.

  6. A third-party AI provider's security standards may not meet those of your own organization. Research the third party's security protocols to minimize your exposure to data breaches.

  7. Gather information about the datasets the AI model was trained on. This can be invaluable in understanding potential biases and in determining whether the model is suitable for your use.

  8. Clearly define liability for AI errors and IP infringement.  

  9. Test any AI service as thoroughly as possible before integrating it with your existing systems.

  10. Establish a system of quantifiable metrics that can be used to measure AI performance and to detect potential issues like data drift.

Conclusion

It's clear that AI-powered software is tremendously powerful, and that power will only increase from here. But its use is not without risks. Navigating the challenges of AI-powered software requires a careful understanding of those risks, including data security and privacy, liability for AI errors, ownership of intellectual property (both IP generated by AI and IP used to train it), and liability for IP infringement.

By fully understanding the risks of using AI-powered software, you will be well positioned to mitigate them and harness the power of AI systems successfully in your organization.

Are you looking to adopt AI-powered software solutions in your business? With over 15 years of experience serving Silicon Valley, SVTech Law Advisors can help you create a rock-solid service agreement covering all legal aspects of implementing AI software.

Contact SVTech Law Advisors today to schedule a consultation and learn how we can help you mitigate AI risks and set you up for success.

About the Author

Tom McKeever

Leverage Tom's deep technology law experience and solid business judgment to your unfair advantage.
