
Executive Order on AI calls on federal agencies to act

  1. President Biden’s October 2023 Executive Order on AI directed various agencies to take certain actions by June 26, 2024, that is, 240 days after the EO was issued.
  2. The 240-day measures included steps to strengthen privacy protections, identify techniques for labeling and authenticating AI-generated content, and curb the spread of AI-generated explicit content.
  3. Specifically, on June 26, 2024, the National Science Foundation (NSF) launched a program to fund privacy-preserving data sharing projects.
  4. The National Institute of Standards and Technology (NIST) has published a draft report on techniques for labeling and authenticating AI-generated content and for restricting AI-generated explicit content.

President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO) directed various federal agencies to take certain actions related to AI. As we previously explained in our AI EO timeline, June 26, 2024, which fell 240 days after the EO was issued, was the deadline for agencies to take action to strengthen privacy protections and to label and authenticate AI-generated content. In this newsletter, we cover the two most significant actions taken by that deadline.

NSF launches funding program for privacy-preserving technologies

As AI technologies have evolved and proliferated in recent years, privacy has become a top concern for regulators and policymakers. In March 2023, the National Science and Technology Council (NSTC) released its “National Strategy to Advance Privacy-Preserving Data Sharing and Analytics.” According to the strategy document, privacy-preserving data sharing and analytics (PPDSA) approaches are “methodological, technical and socio-technical approaches that use privacy-enhancing technologies to extract value from data and enable analysis of the data to drive innovation while ensuring privacy and security.” The NSTC strategy creates a framework for mitigating the privacy risks associated with data-analysis technologies, including artificial intelligence.

Biden’s AI EO also underscored the need for privacy protection. The EO directed the NSF to “work with agencies to identify ongoing work and potential opportunities to incorporate privacy-enhancing technologies (PETs) into their operations” within 240 days of the EO’s issuance. PETs encompass a broad range of privacy tools, including differential privacy and end-to-end encryption. The EO also directed the NSF to “prioritize, where possible and appropriate, research—including efforts to translate research findings into practical applications—that promote the adoption of cutting-edge PET solutions for use in agencies, including through research engagement.”
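To make one of these PETs concrete, the sketch below illustrates the core idea of differential privacy: answering an aggregate query with calibrated random noise so that no individual record can be singled out. This is a minimal illustration, not any agency’s implementation; the dataset, function names, and the epsilon value are hypothetical.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float = 0.5) -> float:
    """Differentially private count of records matching `predicate`.

    A counting query changes by at most 1 when one person's record is
    added or removed (sensitivity 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: release the number of people over 65 with noise.
people = [{"age": 70}, {"age": 41}, {"age": 68}, {"age": 55}]
print(private_count(people, lambda r: r["age"] > 65))
```

Smaller epsilon values add more noise and therefore stronger privacy at the cost of accuracy, which is exactly the kind of theory-to-practice trade-off the NSF funding described below is meant to address.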

Building on the NSTC strategy document and pursuant to the AI EO, NSF launched the Privacy-Preserving Data Sharing in Practice (PDaSP) program on June 26, 2024. The program seeks proposals in three main funding tracks:

  • Track 1: “Advancing key technologies to enable practical PPDSA solutions” – This track focuses on advancing PPDSA technologies, and combinations of such technologies, with an emphasis on “translating theory into practice for the key PPDSA techniques considered.”
  • Track 2: “Integrated and comprehensive solutions for trusted data exchange in application environments” – This track supports integrated privacy-management solutions tailored to different use cases and application contexts, including different technological, legal, and regulatory settings.
  • Track 3: “Usable tools and test environments for the trusted exchange of private or otherwise confidential data” – This track highlights the need to “develop tools and test environments to support and accelerate the adoption of PPDSA technologies.” Stakeholders currently face several barriers to adopting such technologies, including “a lack of effective and user-friendly tools,” which this track aims to overcome.

The PDaSP program is supported by partnerships with other federal agencies and industry. Current funding partners include Intel Corporation, VMware LLC, the Federal Highway Administration, the Department of Transportation, and the Department of Commerce. NSF is also open to collaboration with other agencies and organizations interested in co-funding projects. Project funding is expected to range from $500,000 to $1.5 million for up to three years.

NIST publishes draft guidelines on synthetic content

Policymakers have also focused on concerns related to synthetic content—audio, image, or text information generated or significantly altered by AI. The AI EO specifically directed the Secretary of Commerce and other relevant agencies to identify, within 240 days of the EO, the “existing standards, tools, methods, and practices, and the possible development of additional science-based standards and techniques” to authenticate content, label synthetic content, and “prevent generative AI from producing child sexual abuse material or non-consensual intimate images of real persons.”

On April 29, 2024, the Department of Commerce’s National Institute of Standards and Technology (NIST) published a draft report, “Reducing Risks Posed by Synthetic Content.” The draft report covers three main thematic areas, explained below.

First, the report covers two provenance data tracking techniques for disclosing that content has been generated or modified by AI: digital watermarking and metadata recording. Digital watermarking involves “embedding information into content (image, text, audio, video)” to indicate that the content is synthetic, while metadata recording stores, and makes accessible, information about a piece of content’s properties so that an interested party can verify its origins and how it may have changed over time.
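As a rough illustration of the metadata-recording approach, the sketch below attaches provenance fields to a PNG file and reads them back using the Pillow imaging library. The field names and values are invented for illustration; real provenance schemes rely on cryptographically signed manifests rather than unsigned text chunks.

```python
from PIL import Image, PngImagePlugin  # third-party: pip install Pillow

# Placeholder image standing in for a piece of AI-generated content.
img = Image.new("RGB", (64, 64), color="gray")

# Record provenance details as PNG text metadata (field names are invented).
info = PngImagePlugin.PngInfo()
info.add_text("generator", "example-image-model-v1")
info.add_text("synthetic", "true")
img.save("generated.png", pnginfo=info)

# A downstream verifier can later read the recorded metadata back.
reopened = Image.open("generated.png")
print(reopened.text)  # {'generator': 'example-image-model-v1', 'synthetic': 'true'}
```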

Second, the report describes best practices for testing and evaluating provenance tracking and synthetic content detection technologies, including techniques for testing digital watermarks and recorded metadata, as well as automated, content-based detection techniques.
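Evaluating a synthetic-content detector ultimately reduces to counting how often it is right on labeled examples. The minimal sketch below computes precision and recall for a hypothetical detector; the data and function names are illustrative only, not drawn from the report.

```python
def evaluate(predictions, labels):
    """Precision and recall for a detector that flags items as synthetic.

    predictions[i] is True if the detector flagged item i;
    labels[i] is the ground truth (True = actually synthetic).
    """
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

print(evaluate([True, True, False, False], [True, False, True, False]))
# {'precision': 0.5, 'recall': 0.5}
```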

Finally, the report provides an overview of specific techniques to prevent harm from child sexual abuse material (CSAM) and non-consensual intimate images (NCII) created or distributed by AI. The report discusses techniques for filtering out CSAM and NCII from data used to train AI systems, blocking the output of AI-generated images that may contain CSAM or NCII, and hashing confirmed synthetic CSAM and NCII images to prevent their further distribution.
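The hash-matching technique the report describes can be pictured as a simple digest-and-blocklist check, sketched below. Production systems use perceptual hashes that tolerate re-encoding and resizing; the exact-match cryptographic hash and the placeholder blocklist entry here are simplifications for illustration.

```python
import hashlib

# Hypothetical blocklist of digests of previously confirmed harmful images.
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder
}

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def allowed_to_distribute(path: str) -> bool:
    """Refuse distribution if the file's digest matches a blocklisted hash."""
    return sha256_of(path) not in BLOCKED_HASHES
```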

Comments on the draft report were due by June 2, 2024. Although the final report was due on June 26, 2024, it has not yet been made publicly available.