One of the many legal questions swirling in the world of Generative AI (“GenAI”) is how far Section 230 of the Communications Decency Act (CDA) applies to the provision of GenAI. Can CDA immunity apply to GenAI-generated output and protect GenAI vendors from potential third-party liability?

On June 14, 2023, Senators Richard Blumenthal and Josh Hawley introduced the “No Section 230 Immunity for AI Act,” bipartisan legislation that would expressly remove most immunity under the CDA for a provider of an interactive computer service if the conduct underlying the claim or charge “involves the use or provision of generative artificial intelligence by the interactive computer service.” While the bill would remove “publisher” immunity under § 230(c)(1) for claims involving the use or provision of generative artificial intelligence by an interactive computer service, immunity under the so-called “Good Samaritan” provision of § 230(c)(2)(A), which protects service providers and users from liability for claims arising from good faith actions to filter or restrict access to “objectionable” material on their services, would not be affected.

The bill defines generative AI very broadly as: “an artificial intelligence system that can generate new text, video, image, audio, and other media based on prompts or other forms of data provided by a person.”

Since the bill’s carve-out from CDA immunity extends to any interactive computer service where the underlying conduct relates to the use of generative AI, it appears that it may reach computer-based services that merely have generative AI features embedded in them. This may go beyond the intent of the bill’s drafters. Indeed, the press release accompanying the bill states that it was written to help build a framework for “AI platform liability” and that it would strip “AI companies” of immunity in “civil claims or criminal proceedings involving the use or provision of generative artificial intelligence.” Thus, it appears that the drafters may have intended to remove CDA immunity for “AI companies” (whatever that means), and not for all services that integrate some generative AI features into their offerings.

The bill raises a key question: Would a court find that a generative AI platform, or a computer service with GenAI capabilities, is the “information content provider” of its output — that is, responsible, in whole or in part, for the creation or development of the information provided (no CDA immunity) — or merely the publisher of third-party content derived from third-party training data (potential immunity under Section 230 of the CDA)?

It could be argued that generative AI tools, as the very term suggests, “generate” content, and therefore the service would not enjoy CDA immunity for claims arising from that output (as opposed to a social media platform that merely hosts user-generated content). On the other hand, it could be argued that a generative AI tool is not a person or entity that independently creates content, but rather an algorithm that organizes third-party training data into a useful form in response to a user’s prompt, and therefore should be protected by CDA immunity for its output. The Supreme Court recently declined the opportunity to opine on CDA immunity as it pertains to a social media platform’s algorithmic organization or presentation of content. With the introduction of a bill that would strip providers of generative AI of CDA publisher immunity, it’s possible that the senators, knowingly or otherwise, have waded into this thorny legal issue.

Will this bill advance? Who knows. Congress does not have a successful track record of passing CDA-related legislation (see the travails of CDA reform), but GenAI appears to be a matter of bipartisan concern. We’ll be watching the bipartisan-sponsored bill’s progress closely to see if it has a chance of passage through a divided Congress.