Türkiye AI National Action Plan: Convergence with the EU AI Act and Beyond

NATIONAL AI STRATEGY 2024-2025 ACTION PLAN 

The updated National AI Strategy 2024-2025 Action Plan was published on 24 July 2024, supplementing Presidential Circular No. 2021/18, which had been published in Official Gazette No. 31574 dated 20 August 2021.

Six strategic priorities have been identified in the Action Plan, in parallel with the Strategy Document:

  • To increase the number of AI specialists and employment in the field.

  • To support research, entrepreneurship and innovation.

  • To expand access to quality data and technical infrastructure.

  • To make arrangements to accelerate socio-economic harmonization.

  • To strengthen co-operation at the international level.

  • To accelerate structural and labor transformation.

To realize the strategic objectives of this Action Plan, a coordinated effort has been undertaken by key government agencies, including the Ministry of Industry and Technology, TÜBİTAK, the Council of Higher Education, the Presidential Digital Transformation Office, the Ministry of National Education and the Ministry of Justice. This collaboration has produced 71 specific actions and activities.

Among these actions, a key initiative for advancing artificial intelligence at a national level is the development of a Turkish Large Language Model and the creation of a corresponding community. This collaborative approach will encourage contributions from the entire ecosystem, including voluntary participants, to enhance the model's ability to address diverse needs and promote wider adoption of AI in daily life and business.

The Ministry of Justice is responsible for preparing the ‘Legal Evaluation Guide on Artificial Intelligence Applications’ under the strategic priority titled ‘Making Regulations to Accelerate Socioeconomic Harmonization’. The Ministry will consider the following while drafting the national regulations:

  • Alignment with international norms.

  • Regulating the development and use of AI systems.

  • The supply of systems containing artificial intelligence to the market.

  • Identifying new-generation cyber threats, especially those powered by artificial intelligence.

A key aspect of this effort is the "conduct of necessary policy and legislative development studies to prevent and mitigate the effects of AI systems from a single source." This step is crucial for harmonizing Turkish regulations with the EU Artificial Intelligence Law, which entered into force on 1 August 2024 and becomes applicable in stages, and with evolving international legal frameworks.

 

EU ARTIFICIAL INTELLIGENCE LAW HAS ENTERED INTO FORCE

The Law, adopted by the EU Member States on 21 May 2024, entered into force on 1 August 2024 following its publication in the Official Journal of the European Union on 12 July 2024. While many countries and organizations worldwide have taken important steps towards regulating artificial intelligence, the European Union, with its 27 member states, has prepared and enacted the world's first Artificial Intelligence Law (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence and amending certain Union legislative acts). The Regulation obliges the member states to bring their national norms into line with the Union framework.

The Law applies to all natural and legal persons, whether located in the EU or not, who place AI systems on the EU market or whose use of such systems has an impact on persons located in the EU. However, AI systems used exclusively for military, defence or national security purposes are exempt. In addition, AI systems at the research, development and testing stages are excluded from the Regulation.

The Law defines an artificial intelligence system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. This definition is deliberately open-ended, leaving it to interpretation which systems qualify as AI systems within the scope of the Law.

The Law takes a risk-based approach to the categorization of artificial intelligence systems and categorizes artificial intelligence systems according to their risk levels. In terms of protecting the market and fundamental human rights, artificial intelligence systems are categorized into four different categories:

  • Group I: Unacceptable Risk - Chapter II (e.g. social categorization systems and manipulative AIs).

  • Group II: High Risk - Chapter III (a significant part of the articles of law in the Regulation are reserved for AIs in this category). 

  • Group III: Limited Risk - Chapter IV (e.g. chatbots and deepfakes).

  • Group IV: Minimal Risk (e.g. video games on the EU market, AI-powered spam filters).

Prohibited Artificial Intelligence Systems:

According to Article 5 in Chapter II of the Law, the use of systems posing unacceptable risks is prohibited. A principal justification for this prohibition is that such systems exhibit characteristics that conflict with the values of the EU, a conflict that emerges in particular in the field of fundamental human rights. The prohibited systems are as follows:

  • Artificial intelligence systems which utilize manipulative, deceptive or subliminal technologies.

  • Systems that exploit vulnerabilities arising from age, disability or a specific social or economic situation.

  • Biometric categorisation systems that infer sensitive features and attributes.

  • Social scoring and rating systems.

  • Systems that assess a person's risk of committing an offence based on profiling or personality traits.

  • The creation of facial recognition databases through the untargeted scraping of facial images and videos from the internet.

  • Emotion recognition and inference systems in workplaces and educational institutions.

  • ‘Real-time’ remote biometric identification (RBI) systems in publicly accessible areas used by public security forces.

High Risk Systems:

The provisions governing systems in the high-risk category are the most detailed in the Law. Under Article 6 of Chapter III, high-risk systems are those operating in the following areas:

  • Remote biometric identification, emotion recognition and biometric categorisation systems.

  • Management and operation of critical infrastructure, including electricity, gas, water and heating supply, road traffic, and critical digital infrastructure.

  • Education and vocational training, including decisions on enrolment or access and the measurement and evaluation of learning outcomes.

  • Managing employees, including the assessment and selection of candidates.

  • Assessing eligibility for access to essential private and public services and benefits.

  • Law enforcement, where authorized by Union or national law.

  • Migration, asylum and border protection, justice and democratic processes.

Transparency Risks:

According to Article 50 of the Law, a transparency obligation has been introduced, especially for artificial intelligence systems that interact directly with natural persons. In particular, AI systems that generate synthetic video, image, audio or text content, emotion recognition systems, and biometric categorisation systems fall within this scope.

Minimal Risk:

Minimal-risk systems can be defined as those that fall into none of the higher-risk categories. Systems in this group can be developed and used under the existing legal framework without any additional legal obligation. Transparency risks, however, are not confined to a single category: depending on the situation and the purpose of use, a system subject to transparency obligations may also fall into the prohibited, high-risk or limited-risk group.

General-Purpose Artificial Intelligence Models:

A GPAI model is defined as an AI model that displays significant generality, including where it is trained on large amounts of data using large-scale self-supervision, that can competently perform a wide range of distinct tasks and be integrated into a variety of downstream systems or applications, regardless of how the model is placed on the market. The definition excludes AI models used for research, development and prototyping activities before they are placed on the market. Because some of these models carry fundamental risks, Article 51 opens a separate regulatory area for GPAI models with systemic risk. Under this regulation, systemic risk covers, in particular, the reasonably foreseeable negative effects of such models on public health, public safety, fundamental rights or society as a whole.

Enforcement Timetable:

  • 6 months for prohibited Artificial Intelligence systems.

  • 12 months for GPAI.

  • 24 months for high-risk Artificial Intelligence systems under Annex III.

  • 36 months for high-risk Artificial Intelligence systems under Annex I.

However, the rules and regulations to be introduced for implementation must be ready within nine months from the date of entry into force of the Law.

 

A BRIEF OVERVIEW OF TURKEY AND EU ARTIFICIAL INTELLIGENCE REGULATIONS

Turkey's AI National Action Plan and the EU's Artificial Intelligence Law

Turkey's recent public release of its AI National Action Plan, closely following the adoption of the EU's Artificial Intelligence Law, indicates a strong alignment with the EU's risk-based approach. The action plan's proposed "Implementation of the Artificial Intelligence Risk Management System Certification Program and the creation of a Trustworthy Artificial Intelligence Stamp" mirrors the EU's efforts to identify and mitigate AI risks through a robust certification process.

To address the current regulatory gaps in Turkish legislation, the action plan outlines a two-year timeline for implementing necessary legal frameworks. The absence of such regulations could significantly impact EU companies operating in Turkey, especially as the EU's AI Law takes effect.

Given the critical importance of AI regulation, Turkey should prioritize the development of new regulations that emphasize security, transparency, fairness, accountability, confidentiality, and respect for fundamental human rights. While the EU's AI Law has been published, its gradual enforcement schedule provides an opportunity for Turkey to enact its own regulations within this timeframe. By adopting a framework similar to the EU's law, Turkey can effectively prevent the entry of unregulated and potentially harmful AI technologies into the country.

Additional Considerations for Turkey's AI National Action Plan:

Given the rapid advancements in AI and its increasing societal impact, Turkey's AI National Action Plan should consider incorporating the following vital points:

  1. Ethical Guidelines: Develop comprehensive ethical guidelines for AI development and deployment, addressing issues such as bias, fairness, transparency, and accountability. These guidelines should be aligned with international standards and best practices.

  2. Data Privacy and Security: Strengthen data protection laws and regulations to ensure the privacy and security of personal data used in AI systems. This includes implementing robust data governance frameworks and addressing the challenges of data sharing and cross-border data transfers.

  3. Public Trust and Engagement: Foster public trust and engagement in AI by promoting transparency, education, and open dialogue about AI technologies and their potential impacts. This can involve establishing public consultation mechanisms and supporting research on AI ethics and societal implications.

  4. International Cooperation: Collaborate with other countries and international organizations to develop global standards and best practices for AI governance. This can help ensure that AI is developed and used responsibly and equitably on a global scale.

  5. Talent Development: Invest in education and training programs to develop a skilled AI workforce. This includes supporting research institutions, universities, and industry partnerships to foster innovation and entrepreneurship in the AI sector.

  6. Infrastructure and Resources: Ensure adequate infrastructure and resources are available to support AI research, development, and deployment. This includes investing in high-performance computing, data centers, and broadband connectivity.

  7. Monitoring and Evaluation: Establish a robust monitoring and evaluation framework to assess the effectiveness of the AI National Action Plan and identify areas for improvement. This can involve collecting data on AI adoption, impact, and challenges, as well as conducting regular evaluations and audits.

By incorporating these additional points, Turkey can create a comprehensive and forward-looking AI National Action Plan that addresses the current and future challenges of AI development and deployment.
