Source: https://legiscan.com/CA/text/AB3211/id/2984195

Definitions

(l) “Provenance data” means data that identifies the origins of synthetic content, including, but not limited to, the following:

(1) The name of the generative AI provider.

(2) The name and version number of the AI system that generated the content.

(3) The time and date of the creation.

(4) The portions of content that are synthetic.

(m) “Synthetic content” means information, including images, videos, audio, and text, that has been produced or significantly modified by a generative AI system.

(n) “Watermark” means information that is embedded into a generative AI system’s output for the purpose of conveying its synthetic nature, identity, provenance, history of modifications, or history of conveyance.

(o) “Watermark decoders” means freely available software tools or online services that can read or interpret watermarks and output the provenance data embedded in them.
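To make the definitions above concrete, here is a minimal Python sketch of what a provenance record under subdivision (l) might look like in practice. The field names, the JSON encoding, and the example values are all assumptions of this sketch; the bill prescribes no schema.

    # Illustrative only: the bill does not prescribe a schema or encoding.
    # Field names mirror subdivision (l)(1)-(4) of the definitions.
    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class ProvenanceData:
        provider_name: str     # (l)(1) name of the generative AI provider
        system_name: str       # (l)(2) name of the AI system
        system_version: str    # (l)(2) version number of the AI system
        created_at: str        # (l)(3) time and date of creation (ISO 8601 here)
        synthetic_portions: list = field(default_factory=list)  # (l)(4)

        def to_payload(self) -> bytes:
            """Serialize to bytes suitable for embedding in a watermark."""
            return json.dumps(asdict(self)).encode("utf-8")

    # Example record for a hypothetical provider and model:
    record = ProvenanceData(
        provider_name="ExampleAI",
        system_name="ExampleGen",
        system_version="2.1",
        created_at="2026-01-15T09:30:00Z",
        synthetic_portions=["all"],
    )
    payload = record.to_payload()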

Generative AI providers' obligations

(a) A generative AI provider shall do all of the following:

(1) Place imperceptible and maximally indelible watermarks containing provenance data into synthetic content produced or significantly modified by a generative AI system that the provider makes available.

(A) If a sample of synthetic content is too small to contain the required provenance data, the provider shall, at minimum, attempt to embed watermarking information that identifies the content as synthetic and shall provide the following provenance information in order of priority, with clause (i) being the most important and clause (iv) the least important (see the sketch after this list):

(i) The name of the generative AI provider.

(ii) The name and version number of the AI system that generated the content.

(iii) The time and date of the creation of the content.

(iv) If applicable, the specific portions of the content that are synthetic.
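Clause (A) amounts to priority-ordered truncation: when the watermark's byte capacity cannot hold the full record, fields are dropped from the bottom of the priority list first. A hedged sketch of that logic, reusing the ProvenanceData record sketched earlier; the byte budget and JSON encoding are assumptions:

    # Sketch of clause (A): embed as many priority-ordered fields as the
    # watermark capacity allows, dropping the least important fields first.
    import json

    def build_constrained_payload(record: ProvenanceData, capacity_bytes: int) -> bytes:
        # Fields in priority order, (i) most important to (iv) least important.
        fields = [
            ("synthetic", True),                        # minimum: flags content as synthetic
            ("provider_name", record.provider_name),    # (i)
            ("system_name", record.system_name),        # (ii)
            ("system_version", record.system_version),  # (ii)
            ("created_at", record.created_at),          # (iii)
            ("synthetic_portions", record.synthetic_portions),  # (iv), if applicable
        ]
        payload = {}
        for key, value in fields:
            candidate = {**payload, key: value}
            if len(json.dumps(candidate).encode("utf-8")) > capacity_bytes:
                break  # stop once the next field would overflow the capacity
            payload = candidate
        return json.dumps(payload).encode("utf-8")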

Use of watermarks

(B) To the greatest extent possible, watermarks shall be designed to retain information that identifies content as synthetic and gives the name of the provider in the event that a sample of synthetic content is corrupted, downscaled, cropped, or otherwise damaged.

(2) Develop downloadable watermark decoders that allow a user to determine whether a piece of content was created with the provider's system, and make those tools available to the public (a minimal interface sketch follows subparagraph (B)).

(A) The watermark decoders shall be easy to use by individuals seeking to quickly assess the provenance of a single piece of content.

(B) The watermark decoders shall adhere, to the greatest extent possible, to relevant national or international standards.
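A decoder under paragraph (2) could be as small as a function that extracts a payload from a file, parses it, and reports whether it recognizes the provider's own watermark. A minimal, hypothetical interface; the extraction step is left as a stub because the bill does not specify any watermarking scheme:

    # Hypothetical decoder interface for paragraph (2); extraction is a stub
    # because the bill is agnostic about the underlying watermarking scheme.
    import json
    from typing import Optional

    def extract_watermark_bytes(path: str) -> Optional[bytes]:
        """Stub: scheme-specific extraction (image-domain watermarks, text
        token statistics, audio spread spectrum, etc.) would go here."""
        raise NotImplementedError

    def decode(path: str, provider_name: str = "ExampleAI") -> dict:
        raw = extract_watermark_bytes(path)
        if raw is None:
            return {"watermarked": False}
        data = json.loads(raw)
        return {
            "watermarked": True,
            "from_this_provider": data.get("provider_name") == provider_name,
            "provenance": data,  # surfaced for the ease-of-use goal in (A)
        }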

(3) Conduct AI red-teaming exercises involving third-party experts to test whether watermarks can be easily removed from synthetic content produced by the provider’s generative AI systems, as well as whether the provider’s generative AI systems can be used to falsely add watermarks to otherwise authentic content. Red-teaming exercises shall be conducted before the release of any new generative AI system and annually thereafter.
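One concrete way to read the red-teaming requirement: perturb watermarked samples (crop, downscale, recompress) and measure whether the decoder still flags them, and separately measure how often authentic samples are wrongly flagged. A toy harness, assuming the decode() sketch above and a hypothetical perturb() function:

    # Toy red-team harness for paragraph (3); decode() and perturb() are
    # the assumed provider decoder and a crop/downscale/recompress step.
    def survival_rate(watermarked_paths, perturb) -> float:
        """Fraction of perturbed watermarked samples the decoder still flags."""
        survived = 0
        for path in watermarked_paths:
            if decode(perturb(path))["watermarked"]:
                survived += 1
        return survived / len(watermarked_paths)

    def false_positive_rate(authentic_paths) -> float:
        """Fraction of authentic samples wrongly decoded as watermarked."""
        flagged = sum(decode(p)["watermarked"] for p in authentic_paths)
        return flagged / len(authentic_paths)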

(b) A generative AI provider may continue to make available a generative AI system that was made available before the date upon which this act takes effect and that does not have watermarking capabilities as described by paragraph (1) of subdivision (a), if the following condition is met:

(1) The provider retroactively creates and makes publicly available a decoder that determines whether a given piece of content was produced by the provider's system, with at least 99 percent accuracy as measured by an independent auditor.
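The 99 percent threshold in paragraph (1) is ordinary classification accuracy over a labeled benchmark. A hedged sketch of the audit computation; the detector callable and the benchmark are assumptions:

    # Sketch of the paragraph (1) audit: accuracy of a retroactive decoder
    # over a labeled benchmark assembled by an independent auditor.
    def audit_accuracy(detector, samples) -> float:
        """detector: callable(path) -> bool, the provider's retroactive decoder.
        samples: (path, truth) pairs, truth=True when the provider's system
        actually produced the content."""
        correct = sum(detector(path) == truth for path, truth in samples)
        return correct / len(samples)

    # Pass criterion under paragraph (1): audit_accuracy(detector, benchmark) >= 0.99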

(c) Providers and distributors of software and online services shall not make available a system, application, tool, or service that is designed to remove watermarks from synthetic content.

(d) Generative AI hosting platforms shall not make available a generative AI system that does not place maximally indelible watermarks containing provenance data into content created by the system.

Conversational AI systems

(f) (1) A conversational AI system shall clearly and prominently disclose to users that the conversational AI system generates synthetic content.

(A) In visual interfaces, including, but not limited to, text chats or video calling, a conversational AI system shall place the disclosure required under this subdivision in the interface itself and maintain the disclosure’s visibility in a prominent location throughout any interaction with the interface.

(B) In audio-only interfaces, including, but not limited to, phone or other voice calling systems, a conversational AI system shall verbally make the disclosure required under this subdivision at the beginning and end of a call.

(2) In all conversational interfaces of a conversational AI system, the system shall, before the conversation begins, obtain the user's affirmative consent acknowledging that the user has been informed that they are interacting with a conversational AI system.

(4) The requirements under this subdivision shall not apply to conversational AI systems that do not produce inauthentic content.
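A minimal sketch of the disclosure-plus-consent flow in this subdivision, assuming a console interface; the prompt wording and function names are invented and the bill does not prescribe any particular mechanism:

    # Illustrative consent gate for subdivision (f)(2); wording and flow
    # are assumptions of this sketch, not bill language.
    DISCLOSURE = "You are interacting with an AI system that generates synthetic content."

    def obtain_affirmative_consent() -> bool:
        print(DISCLOSURE)
        answer = input("Type 'I understand' to continue: ")
        return answer.strip().lower() == "i understand"

    def start_session():
        if not obtain_affirmative_consent():
            print("Consent not given; the conversation will not begin.")
            return
        # ... the conversation proceeds only after consent; in a visual
        # interface, DISCLOSURE would also stay pinned per (f)(1)(A) ...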

‘Add authenticity watermarks to all cameras’

(a) For purposes of this section, the following definitions apply:

(1) “Authenticity watermark” means a watermark of authentic content that includes the name of the device manufacturer.

(2) “Camera and recording device manufacturer” means the maker of a device that can record photographic, audio, or video content, including, but not limited to, video and still photography cameras, mobile phones with built-in cameras or microphones, and voice recorders.

(3) “Provenance watermark” means a watermark of authentic content that includes details about the content, including, but not limited to, the time and date of production, the name of the user, details about the device, and a digital signature.

(b) (1) Beginning January 1, 2026, newly manufactured digital cameras and recording devices sold, offered for sale, or distributed in California shall offer users the option to place an authenticity watermark and provenance watermark in the content produced by that device.

(2) A user shall have the option to remove the authenticity and provenance watermarks from the content produced by their device.

(3) Authenticity watermarks shall be turned on by default, while provenance watermarks shall be turned off by default.
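The defaults in paragraph (3) map naturally onto a device settings object: authenticity on by default, provenance off by default, both removable by the user per paragraph (2). A hedged sketch with invented field names:

    # Sketch of the (b)(2)-(3) defaults for a camera's watermark settings.
    # Field and method names are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class WatermarkSettings:
        authenticity_enabled: bool = True    # (b)(3): on by default
        provenance_enabled: bool = False     # (b)(3): off by default

        def user_disable_all(self):
            """(b)(2): the user may remove both watermarks."""
            self.authenticity_enabled = False
            self.provenance_enabled = False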

How large online platforms must label content

Beginning March 1, 2025, a large online platform shall use labels to prominently disclose the provenance data found in watermarks or digital signatures in content distributed to users on its platform (see the sketch after the list below).

(1) The labels shall indicate whether content is fully synthetic, partially synthetic, authentic, authentic with minor modifications, or does not contain a watermark.

(2) A user shall be able to click or tap on a label to inspect provenance data in an easy-to-understand format.
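The five label states in paragraph (1) form a closed enumeration, with paragraph (2)'s click-through inspection layered on top. A hedged sketch; the classification rule and payload fields are invented:

    # Sketch of the five label states in paragraph (1); names are invented.
    from enum import Enum
    from typing import Optional

    class ContentLabel(Enum):
        FULLY_SYNTHETIC = "fully synthetic"
        PARTIALLY_SYNTHETIC = "partially synthetic"
        AUTHENTIC = "authentic"
        AUTHENTIC_MODIFIED = "authentic with minor modifications"
        NO_WATERMARK = "does not contain a watermark"

    def label_for(provenance: Optional[dict]) -> ContentLabel:
        """Toy classification rule; a real platform's logic would be richer.
        'provenance' is the decoded watermark payload, or None if absent."""
        if provenance is None:
            return ContentLabel.NO_WATERMARK
        portions = provenance.get("synthetic_portions", [])
        if portions == ["all"]:
            return ContentLabel.FULLY_SYNTHETIC
        if portions:
            return ContentLabel.PARTIALLY_SYNTHETIC
        if provenance.get("modifications"):
            return ContentLabel.AUTHENTIC_MODIFIED
        return ContentLabel.AUTHENTIC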

(b) The disclosure required under subdivision (a) shall be readily legible to an average viewer or, if the content is in audio format, shall be clearly audible. A disclosure in audio content shall occur at the beginning and end of a piece of content and shall be presented in a prominent manner and at a comparable volume and speaking cadence as other spoken words in the content. A disclosure in video content shall be legible for the full duration of the video.

(c) A large online platform shall use state-of-the-art techniques to detect and label synthetic content that has had watermarks removed or that was produced by generative AI systems without watermarking functionality.

(d) (1) A large online platform shall require a user that uploads or distributes content on its platform to disclose whether the content is synthetic content.

(2) A large online platform shall include prominent warnings to users that uploading or distributing synthetic content without disclosing that it is synthetic content may result in disciplinary action.

(e) A large online platform shall use state-of-the-art techniques to detect and label text-based inauthentic content that is uploaded by users.

(f) A large online platform shall make accessible a verification process for users to apply a digital signature to authentic content. The verification process shall include options that do not require disclosure of personal identifiable information.
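The verification process in subdivision (f) can be illustrated with an ordinary detached signature over a content hash: the platform associates a public key with a verified (possibly pseudonymous) account, so no personally identifiable information needs to appear in the signature itself. A sketch using Ed25519 from the Python 'cryptography' package; key handling and the content stand-in are illustrative:

    # Sketch of subdivision (f): a detached Ed25519 signature over a
    # content hash. The key carries no personal information; the platform
    # only associates the public key with a verified account.
    # Requires the 'cryptography' package.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # held by the verified user
    public_key = private_key.public_key()        # registered with the platform

    content = b"raw bytes of an authentic photo"  # stand-in for real content
    digest = hashlib.sha256(content).digest()
    signature = private_key.sign(digest)          # distributed with the content

    # Platform-side verification:
    try:
        public_key.verify(signature, digest)
        print("authentic: signature matches the registered key")
    except InvalidSignature:
        print("signature does not verify")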

‘AI services must report their efforts against harmful content’

(a) (1) Beginning January 1, 2026, and annually thereafter, generative AI providers and large online platforms shall produce a Risk Assessment and Mitigation Report that assesses the risks posed and harms caused by synthetic content generated by their systems or hosted on their platforms.

(2) The report shall include, but not be limited to, assessments of the distribution of AI-generated child sexual abuse materials, nonconsensual intimate imagery, disinformation related to elections or public health, plagiarism, or other instances where synthetic or inauthentic content caused or may have the potential to cause harm.
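Paragraph (2)'s assessment categories suggest a simple recurring report structure. A hedged schema sketch; the category strings paraphrase the bill and everything else is invented:

    # Sketch of an annual Risk Assessment and Mitigation Report structure.
    # Categories paraphrase paragraph (2); the schema itself is invented.
    REPORT_CATEGORIES = [
        "AI-generated child sexual abuse material",
        "nonconsensual intimate imagery",
        "election disinformation",
        "public health disinformation",
        "plagiarism",
        "other harms from synthetic or inauthentic content",
    ]

    def empty_report(year: int) -> dict:
        return {
            "year": year,
            "assessments": {c: {"risk": None, "incidents": None, "mitigations": []}
                            for c in REPORT_CATEGORIES},
        }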

Penalty for violating this bill

A violation of this chapter may result in an administrative penalty, assessed by the Department of Technology, of up to one million dollars ($1,000,000) or 5 percent of the violator's annual global revenue, whichever is higher.
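The penalty cap is simply the greater of a fixed floor and a revenue percentage; a one-function sketch:

    # Penalty rule: the greater of $1,000,000 or 5% of annual global revenue.
    def max_penalty(annual_global_revenue: float) -> float:
        return max(1_000_000, 0.05 * annual_global_revenue)

    # e.g. a violator with $500M in revenue faces up to $25,000,000,
    # while one with $10M in revenue still faces up to $1,000,000.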