Latest Insights on EU AI Act Developments - This Month in AI Updates


EU Nations Secure Unanimous Agreement on Pioneering Artificial Intelligence Act

In a historic move, European Union member countries have unanimously agreed on the groundbreaking Artificial Intelligence Act. This achievement comes after overcoming last-minute concerns that the rulebook might impede European innovation. The EU deputy ambassadors provided the green light to the final compromise text, marking the conclusion of extensive negotiations between the Council, European Parliament members, and European Commission officials.

The AI Act is set to ban specific applications of AI technology, establish stringent limits on high-risk use cases, and impose transparency and stress-testing obligations on advanced software models. This legislation positions the EU at the forefront, becoming the first to implement binding rules for the rapidly advancing field of AI technology.

The announcement of the final compromise in December was initially hailed as a significant step, particularly amid the increasing prevalence of AI tools such as OpenAI's ChatGPT and Google's Bard. However, the agreement faced opposition from some EU countries, including Germany, France, and Austria, expressing concerns that the rules for advanced AI models could hinder the growth of Europe's emerging AI champions.

The potential opposition from these influential EU members cast doubt on the fate of the AI Act, raising the possibility of a deadlock. France's Economy Minister Bruno Le Maire called for additional negotiations with the European Parliament to address concerns, adding a layer of complexity to the situation. Additionally, privacy concerns related to facial recognition rules further complicated the matter.

Ultimately, a resolution emerged through public relations efforts and diplomatic maneuvering. The European Commission responded by announcing a comprehensive package of pro-innovation measures targeting the AI sector. In a strategic move, the EU's Artificial Intelligence Office was established to enforce the AI Act. Austria, France, and Germany were brought back into agreement with assurances from the Commission on their specific concerns, though these assurances are not legally binding.

The Commission outlined plans to establish an "expert group" comprising EU member countries' authorities to address potential ambiguities and ensure effective implementation. This group will advise and assist the Commission in applying and implementing the AI Act, particularly in avoiding overlaps with other EU regulations.

The Commission also emphasizes its commitment to fostering innovation in the AI sector and to ensuring a flexible, future-proof legal framework. The AI Office will provide detailed guidance for developers of advanced general-purpose AI models, particularly regarding disclosure of copyrighted materials used in training the software.

While the AI Act awaits formal approval from the European Parliament, the text is expected to undergo committee-level scrutiny in two weeks, with a plenary vote anticipated in April. Although pro-privacy lawmakers may attempt to propose amendments that could impact the law's progression, those closely involved in the AI Act within the Parliament express confidence in its passage without significant changes.

Commission Unveils Groundbreaking AI Innovation Package to Foster European Startups and SMEs

The European Commission has launched a comprehensive package of measures to support European startups and small- to medium-sized enterprises (SMEs) in developing trustworthy Artificial Intelligence (AI). This initiative follows the political agreement in December 2023 on the EU AI Act, the world's first comprehensive law on AI, designed to facilitate the development, deployment, and adoption of trustworthy AI in the EU.

The package includes several key elements aimed at nurturing AI innovation in Europe:

1. EuroHPC Regulation Amendment for AI Factories:

- Introduction of AI Factories as a new pillar of the activities of the EU's High-Performance Computing Joint Undertaking (EuroHPC JU).

- Acquisition, upgrading, and operation of AI-dedicated supercomputers for fast machine learning and training of extensive General Purpose AI (GPAI) models.

- Facilitating privileged access to AI-dedicated supercomputers for startups, SMEs, and the broader innovation community.

- Offering comprehensive support, including algorithmic development, testing, evaluation, validation of large-scale AI models, and supercomputer-friendly programming facilities.

- Enabling the development of emerging AI applications based on General Purpose AI models.

2. Establishment of AI Office:

- Creation of an AI Office within the Commission to ensure the development and coordination of AI policy at the European level.

- Supervision of the implementation and enforcement of the forthcoming AI Act.

3. EU AI Startup and Innovation Communication:

- Financial support from the Commission through Horizon Europe and the Digital Europe programme dedicated to generative AI, expected to leverage additional overall public and private investment of around €4 billion through 2027.

- Initiatives to strengthen the EU's generative AI talent pool through education, training, skilling, and reskilling activities.

- Encouragement of public and private investments in AI startups and scale-ups, including through venture capital or equity support.

- Accelerating the development and deployment of Common European Data Spaces, made available to the AI community.

- Introduction of the 'GenAI4EU' initiative to support novel use cases and emerging applications in Europe's 14 industrial ecosystems.

4. European Digital Infrastructure Consortiums (EDICs):

- Establishment of the 'Alliance for Language Technologies' (ALT-EDIC) to develop a common European infrastructure in language technologies.

- Creation of the 'CitiVERSE' EDIC to apply state-of-the-art AI tools for developing Local Digital Twins for Smart Communities.

5. AI@EC Communication:

- Adoption of a communication outlining the Commission's strategic approach to using AI, preparing to implement the EU AI Act.

- Concrete actions to build institutional and operational capacity, ensuring the development and use of trustworthy, safe, and ethical AI.

- Support for EU public administrations in their adoption and use of AI.

Next Steps:

- Consideration by the European Parliament and the Council of the proposed amendments to the Regulation establishing the European High-Performance Computing Joint Undertaking.

- The Commission decision establishing the AI Office is scheduled to enter into force on 21 February 2024.

- Formation of the European Digital Infrastructure Consortiums ALT-EDIC and the CitiVERSE EDIC by Member States with the support of the Commission.

This comprehensive AI innovation package is a testament to the EU's commitment to fostering a thriving AI ecosystem while ensuring adherence to values and rules, setting a precedent for the global AI landscape. The measures aim to empower startups and SMEs, strengthen talent pools, and drive innovation across various sectors. As the EU takes a pioneering step with the AI Act, these initiatives position Europe as a leading hub for responsible and innovative AI development.

LeftoverLocals: Critical GPU Vulnerability Exposes Privacy Risks in AI Models

In a significant revelation, a team of researchers has exposed a critical vulnerability named LeftoverLocals that poses a serious threat to the security of GPU applications, particularly impacting Large Language Models (LLMs) and Machine Learning (ML) models. The vulnerability allows attackers to recover data from GPU local memory created by another process on major GPU platforms, including Apple, Qualcomm, AMD, and Imagination GPUs.

Understanding LeftoverLocals

The LeftoverLocals vulnerability enables attackers to listen in on responses generated by Large Language Models (LLMs) through leaked GPU local memory. By recovering local memory, an optimized region of GPU memory, the researchers demonstrated a Proof of Concept (PoC) in which an attacker could eavesdrop on another user's interactive LLM session across process or container boundaries.

The potential impact of LeftoverLocals is significant, as it can leak approximately 5.5 MB per GPU invocation on an AMD Radeon RX 7900 XT.

Coordinated Disclosure and CVE Tracking

The vulnerability, tracked as CVE-2023-4969, was discovered by an assistant professor at UCSC. Since September 2023, the researchers have been working with the CERT Coordination Center on a coordinated disclosure effort involving major GPU vendors, including NVIDIA, Apple, AMD, Arm, Intel, Qualcomm, and Imagination.

Status of Impacted Vendors

As of the disclosure, the researchers provided updates on the status of the impacted vendors:

Apple: Some devices, such as the Apple iPad Air (3rd generation, A12), have been patched. However, the issue persists on the Apple MacBook Air (M2), and the patch status of newer devices, such as the Apple iPhone 15, has not yet been detailed.

AMD: AMD has confirmed that their devices remain impacted, and they are actively investigating potential mitigation plans.

Qualcomm: A patch to Qualcomm firmware v2.07 addresses LeftoverLocals for some devices, but others may still be impacted. Qualcomm emphasizes the development of technologies supporting robust security.

Imagination: Although the researchers did not observe LeftoverLocals on Imagination GPUs, Google confirmed that some are indeed impacted, and a fix has been released in their latest DDK release.

Next Steps and Mitigations

The researchers suggest modifications to GPU kernel source code as a mitigation against LeftoverLocals. Specifically, clearing local memory before the kernel ends can prevent another user from reading leftover values. However, they acknowledge the difficulty of this mitigation for many users, especially in complex software stacks like those used in ML applications.
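The clearing mitigation can be illustrated with a small simulation. This is purely illustrative Python, not real GPU kernel code: the buffer and function names are hypothetical stand-ins for a workgroup's local memory and for kernels launched by different users, assuming (as in the vulnerability) that the memory region persists uncleared between invocations.

```python
# Illustrative simulation of the LeftoverLocals pattern: a scratch buffer
# (standing in for GPU local memory) persists between "kernel" invocations
# from different users without being cleared. All names are hypothetical.

LOCAL_MEM_WORDS = 8  # stand-in size for a workgroup's local memory

def victim_kernel(local_mem, secret):
    # The victim writes intermediate data (e.g. LLM activations) into
    # local memory and returns a result, but never clears the scratch space.
    for i, byte in enumerate(secret[:LOCAL_MEM_WORDS]):
        local_mem[i] = byte
    return sum(local_mem)

def attacker_kernel(local_mem):
    # The attacker's "listener" kernel simply dumps whatever values were
    # left over in local memory by the previous invocation.
    return bytes(local_mem)

def mitigated_victim_kernel(local_mem, secret):
    result = victim_kernel(local_mem, secret)
    # Mitigation suggested by the researchers: zero local memory before
    # the kernel ends, so nothing leaks to the next user of the region.
    for i in range(len(local_mem)):
        local_mem[i] = 0
    return result

local_mem = [0] * LOCAL_MEM_WORDS  # persists across invocations

victim_kernel(local_mem, b"password")
leaked = attacker_kernel(local_mem)   # recovers the victim's leftover bytes

mitigated_victim_kernel(local_mem, b"password")
clean = attacker_kernel(local_mem)    # recovers only zeros
```

In a real GPU kernel the same idea means writing zeros to the local (shared) memory region before the kernel returns, which is exactly the change to kernel source code the researchers describe as difficult to retrofit across deep ML software stacks.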

Broader Implications

The LeftoverLocals vulnerability raises broader concerns about the security of the ML development stack, highlighting unknown security risks that have yet to undergo rigorous review by security experts. The impact extends beyond LLM applications, potentially affecting various GPU compute domains, such as image processing and scientific computing.

Final Thoughts

As the disclosure efforts continue, the LeftoverLocals vulnerability underscores the evolving threat landscape surrounding GPU applications, necessitating comprehensive scrutiny of GPU compute environments when processing sensitive data. The research team emphasizes the importance of addressing vulnerabilities in up-and-coming GPU platforms and the need for unified security specifications to ensure the secure handling of sensitive data.


Rahul Sharma

Content Writer

Rahul Sharma graduated from Delhi University with a bachelor's degree in computer science and is a highly experienced, professional technical writer who has been part of the technology industry, creating content for tech companies, for the last 12 years.
