
Assessing and Addressing AI Risk: The Latest Legal Developments

Lewis Barr

Vice President Legal and General Counsel

Recent legal developments in assessing and addressing AI risk
AI Regulation & Ethics | Trustworthy AI
Published 08/05/24
5 minute read

To obtain the commercial benefits of AI-powered SaaS without extraordinary liability exposure, in-scope SaaS providers and their customers both need to appropriately assess and address the associated risks.

While the prospects for AI-specific regulation were just percolating a year ago, such regulation has now come into force, joining the existing antitrust, anti-discrimination, and other laws already being applied against AI providers and deployers.

Are the European and U.S. legal frameworks so different in how they assess risk and what they require to address it? Not as different as you may think, despite the lack of a U.S. federal law specifically addressing AI.

The EU’s AI Act: Risk-Based Approach to Regulation

The European Union’s (EU) AI Act, which became law last week with provisions that will take effect over the next two years and is touted by the EU as the first regulation on artificial intelligence, takes a risk-based approach to regulating the use of AI and distinguishes between categories of risk depending on the intended or actual use of an AI system. (Keep in mind that, like the GDPR, the AI Act applies equally to U.S.-based companies offering AI-powered SaaS to EU companies or residents for their use as it does to EU-based companies.) The law provides for high monetary penalties if the risks are not appropriately addressed.

The AI Act bans certain uses of AI outright, imposes particular requirements on the providers of general-purpose AI systems, such as Gemini and ChatGPT, and identifies a number of high-risk systems based on their intended uses.

EU AI Act Regulatory Framework for Risk

Source: European Commission

High-Risk AI Systems

With limited exceptions, high-risk AI systems identified in the AI Act include those used in the context of critical infrastructure, certain public services, credit assessment, law enforcement, and employment. An employment use case of specific concern is using AI “to analyse and filter job applications, and to evaluate candidates.” (See AI Act Annex III. 4(a) and Article 6.)

The AI Act requires providers of high-risk systems to implement, document, and maintain a comprehensive AI risk management system with a related data governance program, as well as other controls to appropriately manage the high risk presented by the identified use cases.

There is also a heightened transparency requirement applicable to “deployers” of high-risk systems, a category that includes businesses using such systems:

“Deployers of high-risk AI systems . . . also play a critical role in informing natural persons and should, when they make decisions or assist in making decisions related to natural persons, where applicable, inform the natural persons that they are subject to the use of the high-risk AI system. This information should include the intended purpose and the type of decisions it makes. The deployer should also inform the natural persons about their right to an explanation provided under [the AI Act].” (AI Act preamble (92) and see Article 26 (11).)

Limited Risk AI Systems

For limited-risk systems (like Conversica’s AI-powered SaaS), there are two transparency requirements, discussed in AI Act Article 50.

First, providers will be required to inform individual users of their systems that they are interacting with an AI system unless that would be obvious to a reasonably well-informed, observant person under the circumstances and context of use. (See our best practices recommendation on transparency in the use of Conversica AI.)

Second, providers of generative AI, with a few exceptions, must ensure that the outputs of their AI systems are marked in a machine-readable format and detectable as artificially generated or manipulated.
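To make the “machine-readable” marking requirement concrete, here is a minimal, purely illustrative Python sketch of one way a provider might attach a provenance record to generated text. The field names and the JSON scheme are assumptions made for illustration only, not a compliance recipe; real deployments would more likely rely on established provenance or watermarking standards (such as C2PA-style content credentials) and on legal review.

```python
import json
from datetime import datetime, timezone

def mark_ai_output(text: str, generator_name: str) -> dict:
    """Attach a simple machine-readable marker to generated text.

    Illustrative scheme only; field names are hypothetical and not drawn
    from the AI Act or any standard.
    """
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,          # flags the content as machine-generated
            "generator": generator_name,   # which system produced it
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

if __name__ == "__main__":
    marked = mark_ai_output("Thanks for reaching out! ...", "example-assistant-v1")
    # Serialized output is machine-readable and detectable as AI-generated
    print(json.dumps(marked, indent=2))
```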

Developments in the US Court System

In the U.S., a recent federal court ruling indicates the potential for liability where AI is applied to certain high-risk use cases without appropriate controls, resulting in automated decisions that may harm humans.

On July 12, a federal court based in California found that Derek Mobley, a job applicant who allegedly was rejected for over 100 jobs he applied for with companies using Workday SaaS to scan his resume and determine his qualifications, stated a claim for disparate impact discrimination on the basis of race, age, and disability.

The court noted in its decision that a third-party agent (such as Workday) of the employers to which Mobley had applied may be liable as an employer if it has been delegated functions traditionally exercised by an employer. The court found that Workday “does qualify as an agent because its tools are alleged to perform a traditional hiring function of rejecting candidates at the screening stage and recommending who to advance to subsequent stages, through the use of artificial intelligence and machine learning.”

The court also found that Mobley sufficiently alleged discrimination in his amended complaint:

“Mobley also allegedly received rejection emails at early hours in the morning, outside of regular business hours. On one occasion, Mobley received a rejection at 1:50 a.m., less than one hour after he had submitted his application. The sheer number of rejections and the timing of those decisions, coupled with the FAC’s allegations that Workday’s AI systems rely on biased training data, support a plausible inference that Workday’s screening algorithms were automatically rejecting Mobley’s applications based on a factor other than his qualifications, such as a protected trait.” [Internal references omitted.]

While the Mobley case is still in its early stages, the court’s decision to let the case proceed brings to mind the AI Act’s categorization of the use of AI to screen job applicants as a high-risk activity. Because of the potential harmful impact on individuals, this is a use case where the risk of bias must be guarded against, which is easier said than done. NIST has identified no fewer than three categories of AI bias that have to be considered and managed: “systemic, computational, and human, all of which can occur in the absence of prejudice, partiality, or discriminatory intent.” (Section 4.3 of AI Risk Management Framework: Second Draft.)

How biases contribute to harms

Source: NIST

State-Level AI-Focused Statutes

While AI-specific legislation has stalled in Congress, this year Colorado and Utah became the first states to enact AI-focused statutes.

Colorado’s law imposes requirements on developers and deployers of high-risk AI systems to prevent algorithmic discrimination, and both the new Colorado and Utah laws contain transparency provisions. Utah’s requires deployers of generative AI systems to proactively inform users that they are interacting with AI prior to any written exchange, or at the start of any verbal exchange, where the systems are deployed in the context of certain professions and industries regulated by the state. The Utah law also requires that generative AI systems disclose that they are AI and not human in response to individuals who ask. In contrast, Colorado’s law is similar to the AI Act in requiring AI system deployers to proactively disclose to individuals interacting with an AI system that they are doing so, except where that would be obvious to a reasonable person. These Colorado provisions take effect in February next year, while Utah’s requirements are already in force.

In short, transparency for generative AI systems that interact with consumers in a business context is no longer just a best practice, but the law in a growing number of jurisdictions.
