Biden Executive Order on AI and More - This Month in AI - Nov 2023

Biden Admin Issues Seminal AI Exec Order

President Biden has issued a groundbreaking Executive Order aimed at ensuring that the United States leads in developing and managing artificial intelligence (AI). The Executive Order seeks to strike a balance between seizing the potential benefits of AI and managing the associated risks. Covering a wide range of areas, the order establishes new standards for AI safety and security, prioritizes privacy protection, advances equity and civil rights, stands up for consumers and workers, encourages innovation and competition, and enhances American leadership globally.

New Standards for AI Safety and Security:

  • Developers of the most powerful AI systems must share safety test results and other critical information with the US government, ensuring that AI systems are safe, secure, and trustworthy before public release.
  • Rigorous standards, tools, and tests will be developed to ensure the safety of AI systems, with the National Institute of Standards and Technology setting the standards for extensive red-team testing.
  • Strong new standards for biological synthesis screening will be established to prevent the engineering of dangerous biological materials.
  • Measures will be taken to protect against AI-enabled fraud and deception, including content authentication and watermarking for AI-generated content.
  • An advanced cybersecurity program will be initiated to use AI tools to find and fix vulnerabilities in critical software.

Protecting Americans' Privacy:

  • Federal support will prioritize developing and using privacy-preserving techniques, including AI-driven methods, and funding will be allocated for advancing privacy-preserving research and technologies.
  • How agencies collect and use commercially available information, including data purchased from data brokers, will be evaluated with a focus on the privacy risks involved.
  • Guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, particularly in AI systems, will be developed.

Advancing Equity and Civil Rights:

  • Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination in housing, federal benefits programs, and federal contracting.
  • The Department of Justice and Federal civil rights offices will address algorithmic discrimination through training and coordination.
  • Best practices for using AI in the criminal justice system will be developed to ensure fairness in various aspects.

Standing Up for Consumers, Patients, and Students:

  • Initiatives will be launched to ensure responsible use of AI in healthcare and the development of affordable drugs.
  • Resources will be created to support educators deploying AI-enabled educational tools.

Supporting Workers:

  • Principles and best practices will be developed to mitigate harm and maximize the benefits of AI for workers, addressing issues such as job displacement, labor standards, and workplace equity.
  • A report on AI's potential labor-market impacts will be produced, and options for strengthening federal support for workers facing labor disruptions will be identified.

Promoting Innovation and Competition:

  • A National AI Research Resource will be piloted to provide AI researchers and students access to critical resources and data.
  • A fair, open, and competitive AI ecosystem will be promoted, offering support to small developers and entrepreneurs and encouraging the Federal Trade Commission to exercise its authority.
  • Existing authorities will be used to enhance the ability of highly skilled immigrants to study, stay, and work in the United States.

Advancing American Leadership Abroad:

  • Bilateral, multilateral, and multistakeholder engagements will be expanded to collaborate on AI globally.
  • Acceleration of the development and implementation of vital AI standards with international partners will be pursued.
  • Safe, responsible, and rights-affirming development and deployment of AI abroad will be promoted to address global challenges.

Ensuring Responsible and Effective Government Use of AI:

  • Guidance for agencies' use of AI will be issued, emphasizing clear standards for protecting rights and safety, improving procurement, and enhancing deployment.
  • Agencies will be assisted in acquiring AI products and services more efficiently through rapid and efficient contracting.
  • A government-wide AI talent surge will be initiated, focusing on the rapid hiring of AI professionals, with training provided for employees at all levels.

These actions are part of the Biden-Harris Administration’s broader strategy for responsible innovation and build on previous efforts to engage leading companies in voluntary commitments for the safe and trustworthy development of AI. The initiatives aim to address the challenges posed by AI while fostering innovation, protecting privacy, promoting equity, and positioning the United States as a global leader in AI development and deployment.

The UK AI Safety Summit

The recent UK AI Safety Summit, hosted by Prime Minister Rishi Sunak, marked a significant step forward in global AI diplomacy, addressing the policy implications of substantial advances in machine learning and artificial intelligence (AI). While acknowledging that not every policy problem was solved, the summit facilitated a major diplomatic breakthrough that set the stage for reducing risks and maximizing benefits from rapidly evolving AI technology.

The summit achieved several notable outcomes, including a joint commitment by twenty-eight governments and leading AI companies. This commitment entails subjecting advanced AI models to a battery of safety tests before release. Additionally, a new UK-based AI Safety Institute was announced, and there was a push to support regular scientist-led assessments of AI capabilities and safety risks.

The urgency to address these issues stems from the recognition that advanced AI systems are progressing rapidly. The computing power used to train AI systems has grown by a factor of roughly 55 million over the past decade. The next generation of frontier models, potentially available as early as next year, could pose new risks for society if suitable safeguards and policy responses are not implemented quickly. The summit successfully gathered support from a diverse set of participants, including the EU, the United States, China, Brazil, India, and Indonesia.
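To put that figure in perspective (taking the 55-million-fold estimate at face value and assuming steady exponential growth), a ten-year increase of that size works out to roughly log2(55,000,000) ≈ 26 doublings, i.e., training compute doubling roughly every five months.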

The UK and the United States also announced the creation of AI Safety Institutes, the first two in an envisioned global network of centers. Notably, the summit generated support for an international panel of scientists, led by AI luminary Yoshua Bengio, tasked with producing a report on AI safety. This initiative represents a crucial step toward establishing a permanent organization that provides scientific assessments of advanced AI models' capabilities.

The diplomatic achievements at the summit were complemented by other jurisdictions taking swift action of their own. The White House issued an executive order requiring certain companies to disclose training runs and testing information for advanced AI models that could threaten national security. The G7 released a draft code of conduct for organizations developing advanced AI systems, and the United Nations appointed an international panel of experts to advise on AI governance.

Beyond the commitments and agreements, the relationships forged and trust built during the summit are crucial. Delegates engaged in debates over significant challenges, including defining thresholds for dangerous AI systems, engaging global participation in AI policy discussions, incorporating human values into AI systems, and ensuring reasonable behavior from countries collaborating on AI safety.

The summit's location at Bletchley Park, historically associated with Alan Turing's breakthroughs during World War II, adds a layer of significance. Turing's work, which contributed to cracking the Enigma code, played a pivotal role in shortening the war. Similarly, today's leaders face institutional design and diplomacy challenges as they navigate the geopolitical changes and technological breakthroughs AI brings.

As leaders continue to shape the global AI policy landscape, the summit highlights the importance of wise questions, savvy diplomacy, and well-crafted institutions. The ongoing efforts will focus on balancing the merits of freely shared, open-source AI models with effective policies, leveraging existing laws for civil liability without stifling innovation, and ensuring democracy benefits from AI while mitigating risks of misinformation. The UK AI Safety Summit represents a significant chapter in the evolving narrative of responsible and effective AI governance.

Rahul Sharma

Content Writer

Rahul Sharma holds a bachelor's degree in computer science from Delhi University and is a highly experienced technical writer who has been creating content for technology companies for the last 12 years.
