Integrating the EU AI Act into the organization in a practical and economically smart way.
Artificial intelligence has long been an integral part of numerous business models, operational processes, and decisions, and it has become a fixture of everyday working life. With the EU AI Act, the first binding legal framework for the use of AI has been created, and it is already in force. The impact on organization, technology, and governance is significant: companies must now meet clear requirements for transparency, risk management, documentation, and accountability. This affects not only highly complex models but also AI applications that are already in productive use today.
The first provisions of the EU AI Act apply from February 2025, so the window for implementation is narrow. Those who take a structured approach now create regulatory certainty, strengthen their own innovative capability, and anchor responsible AI in their company for the long term. We support you in integrating the EU AI Act into your organization in a practical and economically sensible way, with clarity, feasibility, and a focus on the essentials.
The EU AI Act does not only affect providers of advanced AI systems – it extends across the entire value chain. From developers to operators to distributors, new regulatory obligations, responsibilities, and liability issues are emerging.
The focus is on four key roles:
- Providers develop or commission the development of AI systems and market them under their own brand in the EU. They are responsible for safety, quality, and legal compliance throughout the entire life cycle. Examples: OpenAI or Google.
- Importers bring AI systems from third countries into the EU and ensure that each system complies with European safety and compliance standards. Example: the European subsidiary of a US technology company.
- Distributors sell AI systems within the EU and, as the last point of contact before the customer, ensure compliance with all labeling, safety, and intended-purpose requirements. Examples: MediaMarkt or Saturn.
- Operators use AI systems within their own companies to run products, services, or processes and ensure safe, purposeful, and responsible use. Example: companies with AI-supported processes.
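The four roles above can be sketched as a simple lookup. The duty summaries below are a simplified illustration of the role descriptions in this text, not a complete or authoritative statement of the Act's obligations.

```python
# Illustrative sketch: mapping EU AI Act roles to simplified core duties.
# The duty summaries paraphrase the role descriptions above and are not
# a complete legal statement of the Act's requirements.

ROLE_DUTIES = {
    "provider": "Ensure safety, quality, and conformity across the system's life cycle.",
    "importer": "Verify that systems from third countries meet EU conformity requirements.",
    "distributor": "Check labeling, safety, and intended-purpose information before sale.",
    "operator": "Use the system safely and for its intended purpose, with oversight.",
}

def duties_for(role: str) -> str:
    """Return the simplified duty summary for a given AI Act role."""
    try:
        return ROLE_DUTIES[role.lower()]
    except KeyError:
        raise ValueError(f"Unknown role: {role!r}; expected one of {sorted(ROLE_DUTIES)}")

print(duties_for("Provider"))
```

In practice, a company can hold several roles at once (for example, operating one AI system while providing another), so such a mapping is only a starting point for assigning responsibilities.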
The requirements of the EU AI Act are complex – and the first obligations will come into force in just a few months. Companies now need clarity in order to identify regulatory risks at an early stage, act in compliance, and avoid fines and reputational damage.
AI systems are increasingly having an external impact: on customers, partners, and authorities. Those who use transparent and ethically responsible AI today build trust, protect their brand, and at the same time meet the key requirements of the AI Act.
AI is driving new business models. But innovation needs room to breathe. Companies that invest now in governance, risk management, and data quality are securing their ability to innovate—and avoiding regulatory pitfalls before they arise.
Whether you are a provider, operator, distributor, or importer, your role determines which obligations apply. By defining your position under the AI Act at an early stage, you can take targeted action, organize responsibilities effectively, and use resources efficiently.
Your challenges:
- Uncertainty about which AI applications fall under the EU AI Act and how high the regulatory risk is.
- Lack of processes and structures to develop, document, and operate AI systems in compliance with the law.
- Concerns about barriers to innovation due to regulatory requirements and high internal coordination efforts.
UNITY solution approach:
- We analyze your existing and planned AI applications in terms of classification, risk class, and action required in accordance with the EU AI Act.
- We work with you to develop appropriate governance structures and processes to ensure transparency, control, and compliance.
- We support you in establishing a compliant development and operating model – with a focus on efficiency, accountability, and competitiveness.
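The classification step described above can be illustrated with a minimal first-pass triage. The four risk tiers (unacceptable, high, limited, minimal) mirror the structure of the EU AI Act itself; the keyword rules and example use cases are hypothetical placeholders for a screening aid, not legal criteria.

```python
# Minimal sketch of a risk-tier triage for an AI application inventory.
# The four tiers mirror the EU AI Act's risk-based structure; the keyword
# rules below are hypothetical screening heuristics, not legal criteria.

SCREENING_RULES = [
    ("social scoring", "unacceptable"),  # prohibited practices
    ("recruiting", "high"),              # employment is a high-risk area
    ("chatbot", "limited"),              # transparency obligations apply
]

def triage(description: str) -> str:
    """Return a first-pass risk tier for an AI use-case description."""
    text = description.lower()
    for keyword, tier in SCREENING_RULES:
        if keyword in text:
            return tier
    return "minimal"  # default tier pending a proper assessment

inventory = [
    "Chatbot for customer service inquiries",
    "AI-assisted recruiting and CV screening",
    "Spam filter for internal email",
]
for item in inventory:
    print(f"{item}: {triage(item)}")
```

A real classification requires a case-by-case assessment against the Act's high-risk categories and legal review; a keyword screen like this can only prioritize which applications to assess first.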
The AI Act is designed to prevent AI systems from discriminating, manipulating, or causing harm. Strict requirements apply, particularly for high-risk applications, to prevent negative impacts on people, society, and democracy.
Users should be able to recognize when and how AI is being used. The EU AI Act requires labeling, technical documentation, and explainable decision-making logic—especially for sensitive or automated processes.
Companies must create internal structures to make AI systems controllable and manageable—with clear roles, risk monitoring, quality controls, and ongoing documentation throughout the entire lifecycle.
Uniform rules create legal certainty for companies and strengthen market acceptance of AI. The AI Act provides guidance on how AI can be developed and deployed responsibly, as a basis for sustainable, competitive business models.
Our experts for Artificial Intelligence
Daniel Gaspers
Head of Digital Services
Jan Carstens
Manager