As the clock ticks toward the enforcement of the European Union’s landmark AI Act, a growing number of European companies — including tech giants, startups, and research labs — are making a unified plea to Brussels: “Delay the implementation by two years.” In a high-stakes letter delivered to the European Commission, key industry players warned that the current timeline risks harming Europe’s competitiveness, stifling innovation, and overwhelming companies still struggling to understand the law’s sweeping implications.
The AI Act, designed to regulate artificial intelligence systems across the EU, was adopted in June 2025 after more than three years of debate. It is the world’s first comprehensive legal framework for AI, placing strict compliance requirements on everything from biometric surveillance to deepfake technology and high-risk algorithms. While hailed as a global model for ethical AI governance, its scope and speed of implementation are causing unease within the industry.
The Request: A 24-Month Reprieve
The joint letter, signed by over 150 organizations across the 27 EU member states, asks the European Commission to delay enforcement of the AI Act’s key provisions by two years beyond the currently scheduled date of January 2026. Companies argue that such a delay is necessary to:
- Build robust internal compliance systems
- Clarify ambiguous clauses in the Act
- Train AI teams on legal frameworks
- Collaborate with national regulatory bodies
- Avoid unintended consequences for innovation
“We are not against regulation. But we are against rushing it,” the letter states. “Without more time, we risk freezing AI innovation in Europe while other global competitors move forward unimpeded.”
Who’s Behind the Push? A Broad Coalition
Unlike past tech lobbying efforts dominated by Silicon Valley, this appeal is European-born and led, uniting stakeholders from across the AI value chain. Signatories include:
- SAP, Germany’s enterprise software powerhouse
- Nokia, the Finnish telecom pioneer
- Mistral AI and Aleph Alpha, emerging European AI labs
- Startups and SMEs from France, Estonia, Spain, and Poland
- Academic research labs and think tanks focused on ethical AI
- National business federations, such as Germany’s BDI and France’s MEDEF
“We believe in the EU’s vision for human-centric AI. But we must ensure this vision is feasible for the companies tasked with building it,” said Sophie Durand, legal counsel at a Paris-based robotics firm.
What Is the AI Act? A Recap of the World’s First AI Law
The AI Act was first proposed by the European Commission in 2021 and finalized in 2025. It uses a risk-based approach, categorizing AI systems into four tiers:
- Unacceptable risk – Banned outright (e.g., social scoring by governments)
- High risk – Subject to strict compliance, auditing, and transparency rules (e.g., biometric ID, hiring algorithms)
- Limited risk – Requires transparency to users (e.g., chatbots)
- Minimal risk – Allowed without additional requirements (e.g., spam filters)
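The four tiers amount to a classification scheme that attaches obligations to a system's category. As a rough illustrative sketch only (the examples come from the list above; the Act itself assigns tiers through detailed legal criteria, not a simple lookup table), the structure might be modeled as:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative mirror of the AI Act's four risk tiers."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict compliance, auditing, and transparency rules"
    LIMITED = "transparency to users required"
    MINIMAL = "no additional requirements"

# Example classifications drawn from the article; not a legal determination.
EXAMPLE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "biometric identification": RiskTier.HIGH,
    "hiring algorithm": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

print(EXAMPLE_TIERS["chatbot"].value)  # transparency to users required
```

In practice a system's tier is decided by regulators and the Act's annexes, so any real compliance tooling would encode far more nuance than this mapping.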
Key requirements under the Act for high-risk AI systems include:
- Data quality and bias mitigation checks
- Robust documentation
- Human oversight mechanisms
- Cybersecurity resilience
- Registration in an EU AI database
Violations can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher, making compliance critical.
Industry Concerns: A Law with Good Intentions and Tough Execution
While the AI Act has been praised for setting global ethical standards, businesses say the timeline is too aggressive for such a complex and untested framework. Common concerns include:
1. Lack of Technical Guidelines
Companies say the European Commission has not yet issued detailed implementing acts or technical standards, making it difficult to even begin compliance preparations.
“How do we meet a standard that hasn’t been fully defined?” asked Dr. Lars Meier, CTO of a Berlin-based AI analytics startup.
2. Cost of Compliance
Startups worry about the high costs associated with third-party audits, documentation processes, and legal consultancy fees. Small AI developers may exit the market or shift their focus outside the EU.
3. Talent Shortages
With only a handful of legal experts specialized in AI compliance across Europe, many firms are struggling to build teams capable of interpreting and applying the law.
4. Fragmented National Oversight
Each member state is tasked with establishing national supervisory bodies, but many have yet to do so. Companies fear inconsistent enforcement and guidance.
The Commission’s Response: Not So Fast
So far, the European Commission has not indicated any intent to postpone the enforcement deadline. In a recent statement, EU Commissioner for the Internal Market Thierry Breton reaffirmed the bloc’s commitment to being the global leader in responsible AI:
“Europe must lead not just in setting ethical standards, but in deploying world-class AI. The AI Act gives us the rules to do both.”
The Commission also argues that the long lead time — between passage in 2025 and enforcement in 2026 — already offers companies sufficient runway.
Still, sources suggest internal discussions are ongoing, and the Commission may consider selective flexibility for certain sectors, such as healthcare or mobility.
Geopolitical Context: The AI Race Heats Up
The debate over AI regulation in Europe comes amid growing global competition in the AI space. The United States, while lacking comprehensive federal AI regulation, has surged ahead in AI investment, patents, and commercialization. Meanwhile, China continues to deploy AI at scale, particularly in surveillance, manufacturing, and logistics.
Critics warn that an overly rigid AI Act could make Europe less attractive to investors and hamper its innovation ecosystem, despite recent efforts to build “AI made in Europe.”
“If we turn compliance into a barrier instead of a benchmark, we’ll push the next generation of AI founders elsewhere,” said Anikó Szentpeteri, co-founder of a Hungarian AI imaging startup.
A Call for Regulatory Sandboxes and Realism
Many in the industry are not advocating deregulation, but rather a more gradual, flexible approach. Popular proposals include:
- Regulatory sandboxes that allow companies to test AI applications in controlled environments
- Phased rollouts that give smaller firms extra time
- Modular compliance guides tailored to company size and AI use case
These ideas echo broader conversations around “agile regulation,” where laws evolve alongside technology rather than trying to preempt it.
Civil Society Responds: Ethics Cannot Wait
Not everyone agrees with the call for delay. Several digital rights organizations and AI ethicists warn that stalling the Act would endanger human rights, particularly in areas like facial recognition, credit scoring, and automated hiring.
“The AI Act is not perfect, but it is a vital first step toward accountable AI,” said Julia van der Beek, a policy expert at AlgorithmWatch. “Europe cannot afford to blink in the face of industry pressure.”
Some warn that granting delays sets a precedent where powerful industries can undermine crucial legislation before it has a chance to prove its value.
What Happens Next? A Race Against the Clock
The European Commission has yet to issue an official response to the letter, but sources say high-level meetings with industry representatives are scheduled in the coming weeks.
Key upcoming events include:
- October 2025 – Release of final implementing acts
- December 2025 – Formation of national AI oversight bodies
- January 2026 – Scheduled start of enforcement
If the Commission opts not to delay, companies will need to accelerate their compliance efforts or face serious penalties.
Conclusion: A Defining Moment for European Tech
The push to delay the AI Act is more than a regulatory tussle — it’s a defining moment for Europe’s digital future. The EU stands at a crossroads between principled leadership and pragmatic competitiveness.
Will it double down on its bold vision of ethical AI leadership, or will it adjust course to support its innovators in the race to stay relevant on the global tech stage?
As negotiations unfold, one thing is clear: the world is watching.
FAQs
Q1: What is the EU AI Act?
It’s the world’s first comprehensive legislation governing artificial intelligence, passed by the European Union to regulate AI systems based on their risk level.
Q2: Why are companies asking for a delay?
They say the law is too complex, under-defined, and difficult to implement in time, risking innovation and competitiveness in the EU.
Q3: Will the AI Act still be enforced in 2026?
As of now, yes. But discussions are ongoing, and selective enforcement delays may be possible.