Protecto - AI Regulations and Governance Monthly Update - March 2024

DHS Initiates Groundbreaking AI Roadmap and Pilot Projects to Safeguard National Security

In a landmark development, the U.S. Department of Homeland Security (DHS) has unveiled its pioneering Artificial Intelligence Roadmap, marking a significant stride towards incorporating generative AI models into federal agencies' operations. Under the leadership of Secretary Alejandro N. Mayorkas and Chief Information Officer Eric Hysen, DHS aims to harness AI technologies to bolster national security while safeguarding individual privacy and civil liberties.

Introduction of DHS Artificial Intelligence Roadmap

Secretary Mayorkas and CIO Hysen announced the launch of DHS's inaugural Artificial Intelligence Roadmap, outlining the Department's plans for 2024. This roadmap underscores DHS's commitment to exploring AI's potential to enhance homeland security missions while upholding privacy rights and civil liberties.

Three Groundbreaking Pilot Projects

As part of the roadmap, DHS will initiate three pilot projects to evaluate AI's efficacy in various mission areas:

1. Transforming Security Investigative Processes: Homeland Security Investigations (HSI) will leverage AI to enhance investigative processes, particularly in detecting fentanyl and combating child sexual exploitation. By deploying Large Language Models (LLMs), HSI aims to streamline investigative procedures, identify perpetrators and victims, and uncover critical patterns and trends.

2. Bolstering Planning Assistance for Resilient Communities: The Federal Emergency Management Agency (FEMA) will utilize AI to assist communities in developing hazard mitigation plans, enhancing resilience, and minimizing risks. This initiative aims to support local governments, including underserved communities, in crafting customized plans to mitigate disaster risks effectively.

3. Enhancing Immigration Officer Training: United States Citizenship and Immigration Services (USCIS) will employ Generative AI to revolutionize immigration officer training. By developing interactive applications tailored to individual officers' needs, USCIS aims to enhance understanding, retention of crucial information, and decision-making accuracy, ultimately improving immigration services.

Commitment to Responsible AI Utilization

Secretary Mayorkas emphasized DHS's commitment to responsible AI utilization, ensuring privacy protection, civil rights preservation, and rigorous testing to mitigate risks. The roadmap delineates three critical lines of effort:

  • Leveraging AI responsibly.
  • Promoting nationwide AI safety and security.
  • Fostering cohesive partnerships to drive AI development and deployment.

Ongoing AI Initiatives

These initiatives build upon DHS's ongoing AI endeavors, including establishing the AI Task Force and recruiting AI technology experts. The Department's focus encompasses diverse areas, from combating fentanyl trafficking to enhancing cargo screening and supply chain integrity.

Alignment with Presidential Executive Order

DHS's latest efforts align with President Biden's Executive Order on AI, emphasizing the importance of global AI safety standards, cybersecurity, and talent retention. The establishment of an AI Safety and Security Advisory Board underscores the administration's commitment to responsible AI development and deployment.


The unveiling of the DHS Artificial Intelligence Roadmap and the launch of innovative pilot projects mark a pivotal moment in integrating AI into federal agency operations. With a focus on enhancing national security, improving operational efficiency, and safeguarding individual rights, DHS is poised to lead the way in responsible AI utilization. As these initiatives unfold, they promise to unlock new capabilities and advance the homeland security mission in the digital age.

U.S. Treasury Report Highlights AI-Specific Cybersecurity Risks in Financial Sector

The U.S. Department of the Treasury has released a comprehensive report addressing the management of artificial intelligence (AI)-specific cybersecurity risks within the financial services sector. This report, a response to President Biden's Executive Order on AI, delves into the current state of AI-related cybersecurity threats, trends, best practices, and challenges. Based on 42 in-depth interviews with stakeholders in late 2023, the report sheds light on significant opportunities and risks posed by AI technologies in financial services.

Understanding AI's Impact on Financial Cybersecurity

The report recognizes AI's dual nature in the financial sector, acknowledging its potential for driving innovation while posing new cybersecurity and fraud risks. Financial institutions have long utilized AI systems to bolster cybersecurity and anti-fraud operations. Yet, with rapid advances in AI technology, including emerging capabilities like Generative AI, concerns about data integrity, privacy, and cyber threats have intensified.

Challenges and Opportunities

One of the report's key findings is the evolving risk landscape surrounding AI adoption in financial services. While some institutions have integrated AI-related risks into their existing frameworks, many are cautious about the potential vulnerabilities posed by emerging AI technologies. Safely adopting AI necessitates collaboration across various teams, including model, technology, legal, and compliance, to manage risks effectively.

Addressing Cybersecurity Threats

As the financial services sector faces increasingly sophisticated cyber threats, the report underscores the importance of expanding and strengthening risk management and cybersecurity practices. Financial institutions are encouraged to integrate AI solutions into their cybersecurity strategies, enhance collaboration, and effectively prioritize threat information sharing to counteract cyber adversaries.

The Role of Data in AI Development

The report highlights the pivotal role of data in AI technology, emphasizing that the quality and quantity of data directly impact the precision and efficiency of AI models. Collaboration and data sharing among institutions are critical for enhancing cybersecurity protection. While efforts like the Financial Services Information Sharing and Analysis Center (FS-ISAC) facilitate cyber threat information sharing, there remains a gap in fraud-related data sharing, particularly among smaller institutions.

Closing the Fraud Information Gap

Organizations such as the Bank Policy Institute (BPI) and the American Bankers Association (ABA) are working to address the fraud information gap, especially for smaller financial institutions. Collaboration between regulatory bodies like FinCEN and core providers could further support these efforts, ensuring that all financial institutions benefit from advancements in AI technology for countering fraud.

Navigating Third-Party Risks

The increasing reliance on third-party providers for AI technology and data raises concerns about data integrity and provenance. The complex ecosystem of AI solutions adopted through multiple intermediaries challenges traditional expectations regarding data ownership and oversight. Transparent oversight and verification of AI systems' insights and decision-making processes become imperative in mitigating third-party risks.


The Treasury report is a comprehensive guide for financial institutions navigating the evolving landscape of AI-related cybersecurity risks. By embracing responsible AI adoption, strengthening risk management practices, and fostering collaboration, the financial services sector can harness AI's transformative potential while safeguarding against emerging threats. As AI continues to reshape the financial industry, proactive measures outlined in the report will be crucial for ensuring cybersecurity resilience and maintaining public trust in financial systems.

EU Parliament Approves Landmark AI Regulation: What You Need to Know

In a significant move aimed at regulating the burgeoning field of artificial intelligence (AI), the European Parliament has voted to approve the European Union's Artificial Intelligence Act (EU AI Act). This milestone decision, made on March 13, 2024, marks a pivotal moment in shaping the future of AI governance in the European Union (EU). Here's a breakdown of what you need to know about this landmark legislation:

Timeline and Compliance Deadlines

The EU AI Act will enter into force twenty days after its publication in the EU Official Journal, which typically follows shortly after an affirmative vote. While the law will take "full effect" two years from its enactment, several aspects will apply sooner, with specific deadlines set as follows:

  • Six months: Bans on specific AI applications posing unacceptable risks.
  • Nine months: Regulators establish "Codes of Practice" for AI models.
  • Twelve months: Law applies to general-purpose AI models.
  • Thirty-six months: Obligations on high-risk AI models apply.
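
Assuming a hypothetical entry-into-force date (the actual date depends on publication in the Official Journal), the staggered deadlines above can be sketched as a simple date calculation:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month is kept;
    use an early day-of-month to avoid end-of-month overflow)."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

# Illustrative entry-into-force date -- an assumption, not the actual date.
entry_into_force = date(2024, 8, 1)

# Months-from-entry-into-force milestones, as listed in the Act's timeline.
milestones = {
    6: "Bans on unacceptable-risk AI applications",
    9: "Codes of Practice established for AI models",
    12: "Law applies to general-purpose AI models",
    36: "Obligations on high-risk AI systems apply",
}

for months, label in sorted(milestones.items()):
    print(f"{add_months(entry_into_force, months).isoformat()}: {label}")
```

Compliance teams can substitute the real publication date once it appears in the Official Journal to obtain their concrete deadline calendar.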

Risk-Based Approach

A cornerstone of the EU AI Act is its risk-based approach, which categorizes AI systems into four levels based on the risks they pose: unacceptable, high, limited, and minimal risks.
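
As a rough sketch of how this tiering might look in practice, the snippet below maps example systems to the four tiers. The example systems and their tier assignments are illustrative only; actual classification follows the Act's annexes and depends on the specific use case.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment required before deployment"
    LIMITED = "transparency obligations apply"
    MINIMAL = "no additional obligations"

# Hypothetical mapping for illustration -- not a legal determination.
EXAMPLE_CLASSIFICATION = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "AI-assisted credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    """Look up the illustrative tier and summarize its obligations."""
    tier = EXAMPLE_CLASSIFICATION[system]
    return f"{system}: {tier.name} risk -- {tier.value}"

print(obligations("AI-assisted credit scoring"))
```

The point of the tiered design is that obligations scale with potential harm: the higher the tier, the heavier the compliance burden.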

Key Provisions

  • Prohibited Practices: The EU AI Act bans AI systems that pose an unacceptable risk, including those that manipulate human behavior, engage in biometric categorization, or facilitate government social scoring.
  • High-Risk AI Systems: Systems used in critical sectors such as healthcare, banking, law enforcement, and democratic processes must undergo rigorous assessments before deployment.
  • Conformity Assessments: Providers of high-risk AI systems must conduct conformity assessments to ensure compliance with EU regulations.
  • Transparency Rules: Users interacting with AI systems must be informed of their artificial nature, promoting transparency and informed decision-making.
  • Data Governance: Stringent measures are mandated to manage data used in AI systems to mitigate risks and biases.
  • Monitoring and Reporting: Providers and users of high-risk AI systems must monitor performance and report incidents to the European Artificial Intelligence Board.
  • Innovation Support: Regulatory sandboxes and structured real-world testing opportunities aim to foster innovation while ensuring compliance.
  • Supervisory Authorities: Member states are tasked with designating or establishing national supervisory authorities to oversee compliance.
  • European Artificial Intelligence Board: A dedicated board will facilitate the consistent application of the AI Act across member states.

Implications and Future Outlook

The EU AI Act represents a significant effort towards guaranteeing the responsible development and deployment of AI technologies within the EU. By striking a balance between innovation and safeguarding fundamental rights, the legislation sets a precedent for AI regulation worldwide. As the EU prepares to implement this comprehensive framework, stakeholders must prepare to adapt to new compliance requirements and navigate the evolving landscape of AI governance.

In conclusion, the approval of the EU AI Act heralds a new era of AI regulation, signaling the EU's commitment to harnessing the benefits of AI while mitigating its potential risks. As other jurisdictions grapple with similar challenges, the EU's approach will likely influence global AI governance standards for years.


Rahul Sharma

Content Writer

Rahul Sharma holds a bachelor's degree in computer science from Delhi University and is a highly experienced professional technical writer who has been creating content for technology companies for the last 12 years.
