Protecto – AI Regulations and Governance Monthly Update – June 2024

SHARE THIS ARTICLE
Table of Contents

NIST Launches ARIA: A Comprehensive AI Risk and Impact Evaluation Program

__Wf_Reserved_Inherit

The National Institute of Standards and Technology (NIST) has announced the launch of Assessing Risks and Impacts of AI (ARIA), a groundbreaking evaluation program to guarantee the secure and trustworthy deployment of artificial intelligence. Spearheaded by Reva Schwartz, ARIA is designed to integrate human interaction into AI evaluation, covering three crucial levels: model testing, red-teaming, and field testing.

Overview of ARIA

ARIA is the latest addition to NIST’s AI evaluation initiatives managed by the Information Technology Laboratory. This program is unique in its sector and task-agnostic approach, focusing on system performance, accuracy, and technical and societal robustness. The goal is to develop guidelines, tools, methodologies, and metrics that institutions can use to assess the safety of their AI systems, informing governance and decision-making processes.

ARIA 0.1: The Initial Evaluation Phase

The initial phase, ARIA 0.1, will serve as a pilot to test the new evaluation environment, focusing on the risks and impacts associated with large language models (LLMs). Future iterations may extend to other generative AI technologies, such as text-to-image models, as well as other forms of AI like recommender systems and decision support tools. This phase will involve a comprehensive set of tasks to uncover pre-specified and unforeseen risks across three testing levels.

The Three Levels of ARIA Evaluation

1. Model Testing: Model testing is the most fundamental level of evaluation, examining the functionality and capabilities of AI models or system components. This involves comparing system outputs to known outcomes to determine performance accuracy. While model testing is scalable, it has limitations, particularly in accounting for human interaction with AI. Static benchmark datasets used in model testing may not fully represent dynamic human behaviors, making it challenging to predict real-world impacts.

2. Red-Teaming: Red-teaming involves structured testing to identify potential adverse outcomes and vulnerabilities within AI systems, such as generating false, toxic, or discriminatory outputs. This process can be conducted before or after AI models are publicly available. By involving experts and the general public, red-teaming can uncover different types of harm and provide data for remedying harmful functionalities. However, like model testing, it cannot fully predict user interactions in real-world scenarios.

3. Field Testing: Field testing evaluates how AI interacts with the public in realistic settings. Thousands of participants will interact with AI applications under controlled conditions, providing insights into positive and negative impacts. This large-scale testing aims to reveal the actual content and functionality users encounter and the societal impacts. Field testing complements model testing and red-teaming by providing real-world interaction data, enhancing the understanding of AI capabilities and impacts post-deployment.

ARIA Metrics and Future Development

The ARIA 0.1 pilot will produce evaluation outputs annotated by professional assessors using technical and societal robustness metrics. NIST plans to run a mini-challenge within ARIA to refine societal impact metrics further. This initiative encourages the broader measurement community to develop valid and generalizable metrics for AI safety and trustworthiness, informing other AI safety evaluation efforts.

Conclusion

NIST’s ARIA program represents a significant advancement in AI evaluation. It strives to assure that AI systems are safe, trustworthy, and beneficial to society. By incorporating comprehensive testing levels and focusing on technical and societal impacts, ARIA sets a new standard for AI governance and risk management. As the program evolves, it will provide valuable insights and tools for developers and policymakers, contributing to the responsible advancement of AI technology.

Senator Peters Introduces Bipartisan Bill to Ensure Safe and Responsible Federal AI Use

__Wf_Reserved_Inherit

U.S. Senator Gary Peters (MI), Chairman of the Homeland Security and Governmental Affairs Committee, has introduced a bipartisan bill to ensure the federal government’s use of artificial intelligence (AI) is safe and responsible. The legislation, known as the Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment (PREPARED) for AI Act, seeks to establish standards and safeguards for the federal adoption of AI technologies.

Key Provisions of the PREPARED for AI Act

The bill mandates federal agencies evaluate and address potential risks associated with AI before purchasing and deploying such technologies. This proactive approach aims to harness AI’s potential while safeguarding against possible risks and harms. The legislation builds on the requirements set by the Advancing American AI Act, which became law in 2022 and was spearheaded by Senator Peters.

Ensuring Safety and Security

The PREPARED for AI Act requires agencies to classify the risk levels of their AI uses, focusing on protecting public rights and safety. It stipulates that government contracts for AI capabilities must include terms covering data ownership, civil rights, privacy, and incident reporting. Agencies must also test and monitor potential risks before, during, and after procurement, ensuring continuous evaluation to mitigate risks.

AI Governance Structures

The bill calls for establishing AI governance structures within agencies to oversee AI procurement and usage, including appointing Chief AI Officers. These officers will lead and coordinate AI procurement efforts, ensuring a structured approach to AI adoption.

Pilot Programs and Innovation

The legislation proposes pilot programs to streamline AI purchasing processes, promoting innovation and competitive practices. These programs are designed to make acquiring AI and other commercial technologies more flexible, enhancing the government’s ability to adopt cutting-edge solutions.

Transparency and Public Reporting

A consequential aspect of the bill is its emphasis on transparency. It includes provisions for public disclosures and reporting on the government’s use of AI systems. This transparency is intended to build public trust and ensure that AI deployment is conducted in an open and accountable manner.

Support and Endorsements

The PREPARED for AI Act has garnered support from various organizations, including the Center for Democracy and Technology, Transparency Coalition, AI Procurement Lab, and the Institute of Electrical and Electronics Engineers (IEEE-USA).

Alexandra Reeve Givens, President & CEO of the Center for Democracy & Technology, emphasized the importance of responsible AI use. Rob Eleveld, Chairman of the Transparency Coalition, highlighted the need for transparency and innovation. Keith Moore, President of IEEE-USA, supported the bill’s focus on mitigating risks.

The PREPARED for AI Act represents a crucial step towards responsible AI adoption in the federal government. By establishing rigorous standards and promoting transparency, the bill seeks to ensure that AI technologies serve the American public safely and effectively.

States Step Up AI Regulation Amid Federal Inaction

__Wf_Reserved_Inherit

As the federal government struggles to regulate artificial intelligence (AI), states like California and Colorado are taking significant steps to address the technology’s potential risks and benefits. Last month, California lawmakers advanced 30 new AI-related measures to protect consumers and jobs. This move represents one of the most substantial efforts yet to regulate AI, reflecting growing concerns about the technology’s potential to disrupt various sectors and pose national security risks.

California’s Legislative Push

California’s proposed AI bills are designed to set stringent restrictions. They aim to prevent AI tools from discriminating in areas such as housing and healthcare. The legislation also seeks to safeguard intellectual property and protect jobs. If passed, these measures could establish some of the nation’s most demanding AI regulations.

Rebecca Bauer-Kahan, a Democratic assembly member and chair of the State Assembly’s Privacy and Consumer Protection Committee, emphasized the urgency of state action.

Broader State Efforts

California is not alone in its regulatory efforts. State lawmakers nationwide have introduced nearly 400 new AI-related bills in recent months. Colorado, for instance, recently enacted a comprehensive consumer protection law requiring AI companies to exercise “reasonable care” in developing technologies to avoid discrimination. In Tennessee, the ELVIS Act protects musicians from unauthorized use of their voice and likeness in AI-generated content.

Challenges and Impacts

Matt Perault, executive director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, noted the relative ease of passing legislation at the state level compared to the federal level, given the prevalence of “trifecta” administrations, in which one party manages both legislative houses and the governor’s office.

The surge of state AI legislation has caught the attention of tech companies, prompting intense lobbying efforts, particularly in California. Nearly every tech lobbying group has expanded its presence in Sacramento to influence the legislative process.

Global Influence

The impact of state-level AI regulations is being felt globally. Victoria Espinel, chief executive of the Business Software Alliance, highlighted how other countries look at these state drafts to guide their AI laws. This global interest underscores the potential for state regulations to shape international standards.

Federal Stalemate

At the federal level, progress on AI regulation could be faster. Although U.S. lawmakers have held hearings and tech leaders have called for federal guardrails, concrete legislative action has yet to materialize. Last month, Senate Majority Leader Chuck Schumer introduced an AI regulation roadmap proposing $38 billion in investments but offering few specific regulatory measures.

Michael Karanicolas, executive director of the Institute for Technology, Law and Policy at UCLA, emphasized the need for harmonized federal legislation. However, most tech policy experts expect little federal action this year.

California’s Pioneering Role

Given the state’s historical influence on tech regulations, California’s legislative efforts could set a national precedent. Josh Lowenthal, an Assembly member and Democrat, stressed the state’s leadership role due to its economic stature and concentration of tech innovators.

Among the most significant bills in California is one that mandates safety tests for future versions of generative AI models like OpenAI’s ChatGPT. State Senator Scott Wiener, the bill’s sponsor, aims to ensure transparency and consumer protection through these regulations.

Proponents like Wiener remain optimistic despite opposition from tech industry groups, who argue that the bill could stifle innovation. He acknowledged the preference for federal action but expressed skepticism about its likelihood.

Final Thoughts

As the federal government grapples with AI regulation, states like California are forging ahead with comprehensive legislative measures. These efforts highlight the urgency and complexity of managing AI’s rapid development and potential societal impacts. By setting stringent standards and promoting transparency, state lawmakers hope to protect consumers and ensure the responsible use of AI technology.

Rahul Sharma

Content Writer

Join Our Newsletter
Stay Ahead in AI Data Privacy & Security
Snowflake Cortex AI Guidebook
Related Articles

Bipartisan AI Task Force and More – This Month in AI

Read about the launch of a bipartisan AI task force by House leaders and NIST's AI Safety Institute Consortium to promote safe and trustworthy AI development....

Google Gemma and PyRIT – This Week in AI

Discover the latest AI news this week, including Google's open-source Gemma model and Microsoft's PyRIT framework for identifying risks in generative AI systems....

The AI Landscape in 2024: Milestones and Innovations to Expect

Explore the AI landscape in 2024, from advances in AI research to ethical considerations in AI deployment. Discover the transformative power of AI across diverse domains and the pivotal junctures that mark this era of unprecedented evolution....

Download Playbook for Securing RAG on Snowflake Cortex AI

A Step-by-Step Guide to Mastering Enterprise-Grade RAG Security on Snowflake.