The rapid development of artificial intelligence technologies creates both unprecedented opportunities for progress and significant risks to fundamental human rights, security, and privacy. This necessitates balanced regulatory frameworks that ensure the safe use of AI.
European Approach: Comprehensive Regulation
The European Union positions itself as one of the global leaders in the regulation of digital technologies, striving to create a legal framework that promotes innovation while guaranteeing a high level of protection for the fundamental rights and security of its citizens. The EU AI Act is a prime example of this approach.
The central idea underlying the EU AI Act is the creation of trustworthy AI in Europe and beyond. This means that artificial intelligence systems developed, deployed, and used within the EU must respect fundamental human rights, safety principles, and ethical norms.
An important feature of the Act is its extraterritorial effect: it applies to all providers and developers of AI systems who place their products on the market or use them within the European Union, regardless of where those companies are registered or physically located. Even companies from the US or other countries that offer AI solutions to European consumers therefore fall under this legislation.
The EU AI Act pays special attention to general-purpose AI models (GPAI), such as large language models (for example, GPT-4), which are trained on vast amounts of data and are capable of performing a wide range of tasks.
The Act does not automatically classify generative AI, such as ChatGPT, as high-risk. However, such systems must meet specific transparency requirements: clearly disclosing that content was generated by artificial intelligence, designing the model so that it does not generate illegal content, and publishing summaries of the copyrighted data used for training.
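As an illustration of what the disclosure requirement could look like in practice, here is a minimal Python sketch; all class, field, and function names are hypothetical assumptions for illustration, not an official compliance mechanism defined by the Act. It attaches a machine-readable flag and a human-readable notice to every piece of generated text before it is served to a user.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: wrap generated text with a machine- and human-readable
# disclosure that it was produced by an AI system, echoing the EU AI Act's
# transparency idea for generative AI. All names here are illustrative.

@dataclass
class DisclosedOutput:
    text: str            # the generated content itself
    ai_generated: bool   # machine-readable flag for downstream systems
    model_name: str      # which model produced the content
    generated_at: str    # ISO 8601 timestamp of generation
    notice: str          # human-readable disclosure shown to the user

def with_disclosure(text: str, model_name: str) -> DisclosedOutput:
    """Attach an AI-generation disclosure to a piece of generated text."""
    return DisclosedOutput(
        text=text,
        ai_generated=True,
        model_name=model_name,
        generated_at=datetime.now(timezone.utc).isoformat(),
        notice=f"This content was generated by an AI system ({model_name}).",
    )

if __name__ == "__main__":
    out = with_disclosure("Sample summary of a news article...", "example-llm-1")
    print(out.notice)
    print(out.text)
```

The point of the sketch is that disclosure travels with the content itself, so both end users and downstream systems can detect AI-generated material without relying on external context.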
One of the central tasks of the EU AI Act is to strike a balance between ensuring a high level of protection for fundamental rights, security, and ethical principles on the one hand, and promoting innovation and technological development on the other.
The strict requirements, especially for high-risk systems and GPAI, including conformity assessment, quality management systems, detailed documentation, and human oversight, demand significant financial, time, and human resources. For small and medium-sized enterprises (SMEs), meeting these requirements may prove particularly difficult and expensive, potentially putting them at a disadvantage compared to large technology corporations. It is also notable that most leading LLM developers prefer to operate in the US or China, where restrictions are less strict.
US: A Flexible Approach with a Focus on Innovation
The regulatory landscape for artificial intelligence in the United States differs significantly from the European one. Instead of a single comprehensive legislative act, the US takes a more flexible, decentralized approach.
Historically, the US has supported a market-oriented model of technology regulation that promotes rapid innovation and competition and strengthens the position of American companies in the global market. In the field of AI, this translates into an effort to avoid “premature” regulation that could hinder the development and deployment of the technology.
The US federal government has not yet adopted a single comprehensive law on artificial intelligence similar to the EU AI Act. Instead, regulatory activity is carried out through a combination of presidential executive orders, the development of voluntary frameworks and standards (for example, by the National Institute of Standards and Technology – NIST), as well as through legislative initiatives at the level of individual states.
An important feature of the American approach is the significant influence of the sitting presidential administration on the direction of regulatory policy. For example, Joe Biden’s administration, in Executive Order 14110 (October 30, 2023), emphasized the safe, secure, and trustworthy development and use of AI, the protection of civil rights, and the promotion of justice and equity. In contrast, Donald Trump’s administration revoked Biden’s order with Executive Order 14179, shifting the focus to ensuring American dominance in the field of AI.
One of the key instruments in the American regulatory landscape is the NIST AI Risk Management Framework (AI RMF). Developed by NIST, this framework is a voluntary, non-binding document designed to help organizations of any size and sector manage the risks associated with artificial intelligence throughout the entire AI lifecycle, from development to deployment and decommissioning. Although voluntary, the NIST AI RMF is gaining importance as a de facto standard for responsible AI development that US industry follows.
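The AI RMF Core is organized around four functions: Govern, Map, Measure, and Manage. The sketch below is a minimal illustration of how an organization might keep a risk register keyed to those functions; the four function names come from the framework itself, while every class, field, and method name is an illustrative assumption, not an official NIST artifact.

```python
from dataclasses import dataclass, field
from enum import Enum

# The four core functions are taken from the NIST AI RMF; everything else
# in this sketch is an illustrative assumption.

class RmfFunction(Enum):
    GOVERN = "Govern"    # policies, accountability, risk culture
    MAP = "Map"          # establish context, identify risks
    MEASURE = "Measure"  # analyze, assess, and track identified risks
    MANAGE = "Manage"    # prioritize and act on identified risks

@dataclass
class RiskEntry:
    description: str
    function: RmfFunction
    mitigations: list[str] = field(default_factory=list)

@dataclass
class RiskRegister:
    """A simple register an organization might keep across the AI lifecycle."""
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, description: str, function: RmfFunction) -> RiskEntry:
        entry = RiskEntry(description, function)
        self.entries.append(entry)
        return entry

    def by_function(self, function: RmfFunction) -> list[RiskEntry]:
        return [e for e in self.entries if e.function is function]

if __name__ == "__main__":
    register = RiskRegister()
    register.add("Training data may embed demographic bias", RmfFunction.MAP)
    register.add("No owner assigned for model incident response", RmfFunction.GOVERN)
    for entry in register.by_function(RmfFunction.MAP):
        print(entry.function.value, "->", entry.description)
```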
In the US, the approach to ensuring the ethics, transparency, and accountability of AI systems relies largely on voluntary principles, corporate social responsibility, and market mechanisms rather than on strict legislative requirements. Overall, the American approach is characterized by a desire not to hinder innovation with excessive regulatory restrictions, relying on the market’s and industry’s capacity for self-regulation and on existing legal mechanisms to address problems as they arise.
Ukraine on the Path to AI Regulation
Ukraine is actively shaping its own policy in the field of artificial intelligence. This process is taking place under challenging conditions, in particular the full-scale war, which leaves its mark on both priorities and the pace of reform.
Ukraine already has several strategic documents that define the directions of AI development and regulation:
- Concept for the Development of Artificial Intelligence in Ukraine: This document was the first official step toward forming state policy in the field of AI. It defines the goals, principles, and main tasks for developing AI technologies in Ukraine as one of the priority areas of scientific and technical research.
- WINWIN Strategy: Although a formally approved unified National AI Strategy is still under development, the Ministry of Digital Transformation has already announced ambitious plans within the so-called WINWIN Strategy, which is part of a broader national plan to transform Ukraine into an innovation hub by 2030.
- AI Regulation Roadmap: This document is a key guide for Ukraine in harmonizing its approach to AI regulation with European norms, particularly the EU AI Act. The Roadmap is based on a “bottom-up” approach, which involves a gradual transition from self-regulation and voluntary standards to mandatory regulatory norms.
- White Paper on AI Regulation in Ukraine: This analytical document, developed with support from USAID and UK Dev, details the Ministry of Digital Transformation’s vision of the future regulatory framework for AI in Ukraine and is open for public discussion. The White Paper confirms the commitment to the “bottom-up” approach, emphasizing the need to give businesses time and tools to prepare for future national legislation without introducing strict mandatory regulation for the next 2-3 years.
Importantly, the White Paper, like Ukraine’s other strategic documents on AI, recognizes that adapting Ukrainian legislation to EU norms in the field of AI and digital technologies is inevitable in the future.
At the same time, given the ongoing war with Russia, the White Paper proposes excluding the defense sector from regulation at this stage, so as not to constrain the development of innovative AI products that help in the fight against the aggressor. In practice, Ukraine is currently trying to avoid steps that could hold back AI development, which makes its approach somewhat similar to the American one, at least until Ukraine begins implementing EU AI legislation through adopted laws.
References
[1] EU AI Act: first regulation on artificial intelligence. Available online: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
[2] Exec. Order No. 14110 (2023). Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Available online: https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
[3] Exec. Order No. 14179 (2025). Removing Barriers to American Leadership in Artificial Intelligence. Available online: https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence
[4] NIST AI Risk Management Framework (2023). Available online: https://www.nist.gov/itl/ai-risk-management-framework
[5] Order of the Cabinet of Ministers of Ukraine on the approval of the Concept for the Development of Artificial Intelligence in Ukraine. Available online: https://zakon.rada.gov.ua/go/1556-2020-%D1%80
[6] The Ukrainian Global Innovation Strategy 2030 (2023). Available online: https://winwin.gov.ua/assets/files/WINWIN_Main%20Presentation.pdf
[7] AI Regulation Roadmap (2023). Available online: https://surl.li/lrfoqf
[8] White Paper on AI Regulation in Ukraine (2024). Available online: https://surl.li/ehydhb