Delay in the AI Act guidelines creates uncertainty for companies.
The European Commission has missed its deadline for publishing guidelines outlining how operators of high-risk AI systems should comply with their obligations under the AI Act, specifically Article 6, which defines which AI applications count as high-risk and are therefore subject to stricter documentation and monitoring requirements. The original deadline was February 2.
The Commission says it is still integrating months of comments and plans to publish a final draft by the end of the month, with final adoption expected in March or April. In the meantime, the high-risk compliance requirements are scheduled to go into effect in August 2026.
The delay reflects the broader difficulty of implementing the AI Act, including problems with the designation of national authorities and delays in the technical standards expected from European standardisation bodies, whose delivery has been postponed until the end of 2026.
The Commission's Digital Omnibus package seeks to simplify the definition of high-risk AI and to postpone its entry into force by up to 16 months. Critics warn, however, that this adds further uncertainty and could undermine confidence in the law. Both U.S. and EU companies have called for delays to allow more time to comply with the regulation.
In summary, the delayed guidelines and the possible shifting of deadlines create legal and operational uncertainty for companies ahead of the entry into force of the high-risk rules.