
US wants to nix the EU AI Act’s code of practice, leaving enterprises to develop their own risk standards

by Jamal Richards

2 minute read

The European Union’s AI Act is at a critical juncture as the code of practice that will define obligations for general-purpose AI models is being drafted. The US government, under President Donald Trump, is pushing to scrap the rulebook. Critics argue that the code restricts innovation, imposes unnecessary burdens, and stretches AI legislation beyond what the Act itself requires.

As the deadline for finalizing the code approaches, the debate is intensifying. The US Mission to the EU has actively lobbied against adopting the code in its current form, singling out strict obligations such as third-party model testing and full training-data disclosure. Critics fear these requirements would hinder scalability and innovation in the AI sector.

A key aspect of this debate is the shift of responsibility from vendors to enterprises. The proposed code is meant to help providers comply with the EU AI Act by setting out best practices for transparency, copyright, and risk management. Although the code is voluntary, providers that disregard it could still face significant financial penalties under the Act or heightened regulatory scrutiny, so adherence carries real weight.

Regardless of how the debate is resolved, the burden of “responsible AI” is increasingly falling on the organizations that deploy the technology. Companies are urged to build their own AI risk frameworks covering privacy assessments, provenance tracking, and rigorous testing protocols, treating responsible AI as a core operational discipline rather than a compliance afterthought.

In contrast to the EU’s stringent approach, the current US administration favors a lighter regulatory touch. Recent executive orders and guidance from federal agencies reflect a deregulatory stance focused on removing barriers to innovation and on economic competitiveness, highlighting the diverging paths taken by global AI policymakers.

Ultimately, the dispute over AI regulation highlights the complex interplay between innovation, regulation, and responsibility. As enterprises navigate this shifting landscape, clear guidelines and proactive risk management become paramount: the decisions made today will shape how AI is developed and deployed, and how the balance between innovation and accountability is struck.
