Bipartisan AI Task Force and More - This Month in AI
House Leaders Launch Bipartisan Task Force to Tackle AI Policy Challenges

In a significant move to address the complexities of regulating artificial intelligence (AI), Speaker Mike Johnson (R-La.) and Minority Leader Hakeem Jeffries (D-N.Y.) announced the formation of a bipartisan task force dedicated to exploring AI innovation and devising safeguards against potential threats. This initiative comes as lawmakers grapple with the rapid evolution of AI technology and its implications for various sectors.

Task Force Leadership and Composition

The newly established task force comprises 24 members. It will be led by Chairman Jay Obernolte, a Republican with a master's degree in AI and a background in video game development, and Co-Chairman Ted Lieu, a Democrat and key member of the Democratic leadership team. Lieu, notable for his previous work on AI regulation, used the AI chatbot ChatGPT to draft a bill regulating AI.

Both Obernolte and Lieu, who have backgrounds in computer science, have emphasized the importance of addressing AI threats such as deepfakes, the spread of misinformation, and job displacement. The task force aims to leverage its members' expertise to develop comprehensive policy proposals.

Objectives of the Task Force

The task force's primary objectives include exploring avenues for the United States to maintain a leadership role in AI innovation and establishing safeguards to mitigate potential risks associated with the technology. This includes studying AI's impact on the economy and society and formulating policies to navigate its promises and complexities.

The task force will work towards producing a comprehensive report that encompasses guiding principles, recommendations, and policy proposals. The report will be developed collaboratively with the input of relevant House committees, ensuring a thorough and well-informed approach to AI policy.

Bipartisan Collaboration for AI Advancement

Recognizing the transformative potential of AI on the economy and society, Speaker Johnson stressed the importance of bipartisan collaboration to understand and plan for the multifaceted aspects of this technology. He emphasized the need to encourage innovation, protect national security, and establish guardrails to develop safe and trustworthy technology.

Minority Leader Jeffries echoed the sentiment, emphasizing Congress's responsibility to facilitate AI breakthroughs while ensuring equitable benefits for everyday Americans. Acknowledging the challenges presented by the rise of AI, he stressed the necessity for bipartisan efforts to maintain America's leadership in this emerging space and prevent malicious exploitation of evolving AI technology.

Task Force Members and Expertise

The task force boasts a diverse composition, with members possessing expertise in various fields relevant to AI policy. From computer science professionals to lawmakers experienced in technology-related legislation, the team aims to leverage its collective knowledge to address the challenges posed by AI.

Looking Ahead

As the task force begins its work, the AI community anticipates significant developments in AI policy and regulation. The collaboration between House leaders, backed by the expertise of task force members, signals a proactive approach to shaping AI's future in the United States.

NIST Launches Artificial Intelligence Safety Institute Consortium to Advance Trustworthy AI

The National Institute of Standards and Technology (NIST), under the United States Department of Commerce, has announced the establishment of the Artificial Intelligence Safety Institute Consortium. The consortium aims to play a pivotal role in creating safe and trustworthy artificial intelligence (AI) by fostering collaboration among organizations to develop robust measurement science, techniques, and metrics. This initiative aligns with NIST's commitment to addressing challenges outlined in the October 30, 2023, Executive Order titled "The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence."

Consortium Objectives and Framework

The consortium's primary goal is to identify proven, scalable, and interoperable techniques and metrics that promote safe and trustworthy AI development and responsible use. This includes focusing on advanced AI systems, particularly the most capable foundation models. NIST invites organizations, including non-profits, universities, government agencies, and technology companies, to participate by submitting letters of interest showcasing their technical expertise, products, data, and models.

Collaboration for Responsible AI

To address the challenges associated with AI development and deployment, NIST seeks to create a collaborative space for informed dialogue and sharing of information and knowledge. The consortium will engage in collaborative research and development, offering organizations an opportunity to contribute technical expertise in various areas related to AI, such as AI governance, safety, fairness, explainability, and more.

NIST emphasizes the importance of aligning AI with societal norms and values, ensuring public safety, and reducing market uncertainties. The consortium will serve as a hub for interested parties to work together in building and maturing a measurement science for trustworthy and responsible AI.

Timeline and Participation

The collaborative activities of the consortium are set to commence as soon as sufficient letters of interest are received, but no earlier than December 4, 2023. NIST will continue to accept letters of interest on an ongoing basis. Organizations interested in participating can submit letters of interest through the consortium's webform.

Selected participants must enter into a Consortium Cooperative Research and Development Agreement (CRADA) with NIST. Entities not permitted to enter into CRADAs under applicable law may still be able to participate through separate non-CRADA agreements.

Project Objectives and Contributions

The consortium's efforts align with the priorities outlined in the Executive Order, focusing on creating guidelines, tools, and best practices for designing and deploying AI safely. It will explore the complexities of the intersection between society and technology, addressing the science of human engagement with AI in different contexts. Additionally, the consortium aims to guide understanding and managing interdependencies between AI actors throughout the lifecycle.

Organizations interested in participating should outline their role in the consortium, their specific expertise, and the products, services, data, or technical capabilities they intend to contribute.

Final Observations

NIST's Artificial Intelligence Safety Institute Consortium represents a significant step forward in the collaborative effort to ensure the safe and reliable development and usage of AI technologies. By bringing together a diverse group of stakeholders, the consortium aims to contribute to the establishment of industry standards and best practices that align with societal norms and values.

U.S. Department of Justice Bolsters AI Oversight with Appointment of Chief AI Officer

In a significant move highlighting the growing importance of artificial intelligence (AI) in legal and governmental spheres, U.S. Attorney General Merrick Garland has appointed Jonathan Mayer, a distinguished computer science and public policy professor at Princeton University, as the first Chief AI Officer (CAIO) for the Department of Justice (DOJ). Mayer will also serve as the DOJ's Chief Science and Technology Officer, marking a pivotal moment as the government takes strides to address legal policies and build internal capacity for AI.

The Appointment of Jonathan Mayer

Jonathan Mayer, with a Ph.D. in computer science from Stanford University and a law degree from Stanford Law School, brings a unique blend of technical expertise and legal knowledge to the role. His experience includes serving as the technology advisor to Vice President Kamala Harris during her tenure as a U.S. senator and as the Chief Technologist at the U.S. Federal Communications Commission's Enforcement Bureau.

Mayer's dual role as CAIO and Chief Science and Technology Officer aligns with the DOJ's commitment to being well-prepared for the challenges and opportunities presented by new technologies, as stated by Attorney General Garland. The appointment comes in response to President Joe Biden's recent executive order on AI, emphasizing the safe and secure use of AI tools for U.S. residents.

Responsibilities of the Chief AI Officer

As CAIO, Mayer will play a crucial role in the DOJ's technology capacity-building efforts. This includes advising on the recruitment of technical employees and addressing issues such as AI and cybersecurity. Mayer will be situated in the DOJ's Office of Legal Policy, working alongside a team of technical and policy experts dedicated to navigating the complex landscape of AI.

Furthermore, Mayer will lead the newly established Emerging Technology Board, which is responsible for coordinating and governing AI and other emerging technologies across the department. This demonstrates the DOJ's commitment to proactive engagement with rapidly evolving technologies and aligns with the government's broader initiatives to foster innovation while ensuring responsible use.

Expert Reactions and Industry Insights

Ritu Jyoti, Group Vice President for AI and Automation at IDC, commended the DOJ's decision to appoint Mayer. She highlighted the increasing prevalence of chief AI officer roles in organizations and emphasized the need for someone with a strong technical understanding of AI, ethical and legal knowledge, leadership skills, and adaptability.

Mike Demler, an independent technology analyst focused on semiconductors and AI, noted the critical need for government technology experts to advise policymakers. He sees the appointment as a positive step, signaling the government's efforts to understand and leverage AI responsibly.

Both experts agree that responsible AI use is paramount for scaling AI, and the appointment of Mayer is seen as a strategic move to ensure the DOJ is well-equipped to evaluate AI use from all angles.

Final Thoughts

The appointment of Jonathan Mayer as the Chief AI Officer for the Department of Justice underscores the government's commitment to navigating the complex landscape of AI with expertise and responsibility. As AI continues to play an increasingly central role in various sectors, Mayer's role will be instrumental in shaping legal policies, ensuring the responsible use of AI, and building internal capacity for emerging technologies within the DOJ.


Rahul Sharma

Content Writer

Rahul Sharma graduated from Delhi University with a bachelor's degree in computer science and is a highly experienced professional technical writer who has been creating content for technology companies for the last 12 years.