COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY
/DECEMBER, 2023/
(Part IV in a series of publications)
Polina Prianykova
International Human Rights Defender on AI,
Author of the First AI Constitution in World History,
Student of the Law Faculty & the Faculty of Economics
Having been invited as a stakeholder, on February 13, 2024, Polina Prianykova, International Human Rights Defender on AI, participated in the consultation conference regarding the Global Digital Compact. The event was held online from the meeting hall of the Economic and Social Council of the UN, chaired by members of the High-Level Advisory Body on Artificial Intelligence convened by the UN Secretary-General to analyse and provide recommendations on the international governance of Artificial Intelligence.
Insights were exchanged among the attendees, and Polina Prianykova delivered a speech highlighting the necessity of considering the following seven topics at the GDC: GDC inclusiveness; the implementation of universal (unified) definitions for use in regulating the operation of AI systems and algorithms (an AI Glossary); ensuring unconditional compliance with the principle of an AI-friendly environment (Polina Prianykova’s Constitutional Principle); identifying and uncompromisingly preventing and countering the state of ‘dark’ (unregulated) AI; introducing a total state monopoly over AI, from the UN to each state; establishing AI Day; and introducing a unified and universal Emblem, Anthem, and Flag of AI for the entire planet.
The keywords and the elaboration of the relevance of this scholarly paper, as well as all References enumerated below in the analysis, are disclosed in the first segment of this series of publication analyses [link to Part I at the conclusion of this article].
Primary segment of the scholarly work.
Continuation (inception in Parts I, II, III).
1.8. In the interaction of Artificial Intelligence with humanity, the safety and inviolability of humankind are recognized as the paramount value. AI must be programmed in such a manner as to always respect and account for human rights and freedoms, not compete with humanity, but rather constructively assist humanity.
1.9. The safety and protection of humanity from adverse repercussions of AI implementation are to be the highest priority value in all aspects of its development and use. To this end, among other things:
1.9.1. Digital Legislation establishes prohibitions and quotas for AI to protect the human right to labor and the protection of all human labor activities. The state determines areas of activity in which: human labor is inviolable; human labor can be partially replaced by AI systems, within the limits defined by law; human labor can be fully replaced by AI systems. (Complete substitution of human labor by Artificial Intelligence is permissible in cases where such labor is factually or potentially extremely dangerous to human life and health. The status of extreme danger is determined by humans.)
1.9.2. The Digital Legislation stipulates the state's obligation to provide social support to people who have suffered losses due to unemployment or income reduction at their workplace resulting from the implementation of AI systems. The state is required to provide such individuals with opportunities for retraining and alternative employment, medical insurance, and financial support commensurate with the income they received prior to job loss resulting from AI deployment, or provide a supplement to the individual's wage up to the level of income that was reduced at the workplace due to the implementation of AI systems.
1.9.3. The Digital Legislation enforces the state's obligation to enact reforms in the field of education. The state is thereby required to provide, with appropriate safeguards, public prognostication of professions and occupations across all sectors of human labor: manufacturing, administration, agriculture, healthcare, public service, and all others. Every education seeker has the right to know about the prospects of obtaining a job in their chosen specialty and the respective state guarantees. Each educational institution, spanning secondary, vocational, or tertiary levels, is obligated to present education seekers with a forecast of the prospects for their chosen profession within the specific state, right from the outset. As part of the state support program, it is prohibited to train professionals for professions that do not have real employment prospects within the state; such professions may be chosen independently by a person of legal age without guarantees from the state.
1.9.4. The Digital Legislation stipulates the state's responsibility to safeguard the constitutional rights of individuals and citizens from the ramifications of AI implementation, spanning a range of domains – theology, arts, philosophy, social networks, political, social, religious, transport, medical, juridical, judicial, municipal, sports, manufacturing, military, legislative, historical, and all other aspects of life and Digital Life without exception. This is underpinned by the principle that AI novelties cannot degrade the state of human and citizen rights compared to the state previous to the AI implementation. It is forbidden to create any religious associations in the worship of AI and publicly promote religious beliefs in the worship of AI. The usage of Artificial Intelligence and mechanisms elaborated from AI systems to alter, distort, or manipulate human history, make temporal adjustments, interfere with historical events in any manner, or cast doubt upon or modify any accomplishments of humankind is explicitly prohibited. The entire chronicle of human history up until the advent of AI is deemed inviolable and is safeguarded under the protection of the United Nations.’ [4].
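For illustration only, the following minimal Python sketch encodes one possible reading of the wage supplement in clause 1.9.2 quoted above: the state tops up the individual’s wage to the income level received prior to the AI-related reduction. The function name, the assumption of a common pay period and currency, and the figures are hypothetical and are not prescribed by the AI Constitution.

```python
def wage_supplement(prior_income: float, current_income: float) -> float:
    """One hedged reading of clause 1.9.2: top up the wage to the income
    level held before the AI-related reduction. Assumes both amounts
    refer to the same pay period and currency."""
    return max(0.0, prior_income - current_income)


# Example: income fell from 3000 to 2200 after AI deployment -> supplement of 800.
print(wage_supplement(3000.0, 2200.0))  # 800.0
```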
Upon consideration of the provisions of clause 29 of the UN Report [1], we hereby note the following. The Constitution on Artificial Intelligence proposes a triad of state bodies as a governance model for AI systems and algorithms: the regulator, the executors, and the arbitrators. Specifically, the section 'Definition of Terms in the AI Constitution' sets forth the concept of these AI regulatory bodies:
‘A special state agency, potentially named the AI Regulatory Council, may be established to make decisions on strategic AI development matters via voting. The composition of this Council harmoniously combines elements of public participation, state involvement, and scientific expertise. Two-thirds of the Council should be comprised of state officials, including profile managers, IT experts, and representatives from the security agencies, with the remaining one-third selected via a competitive process involving renowned scholars and leaders of public opinion. Decisions are made when three-quarters plus one vote of the quorum votes ‘FOR’ at a session (the session quorum is three-quarters of the total Council).
AI Regulator (AI Regulatory Authority) is defined as the governing body responsible for the regulation of AI or an AI Regulatory Council.
AI Regulatory Executors (Decision Execution Body on AI regulation) shall be a specialized state department for AI, specifically the AI Synergetic Center.
AI Regulatory Arbitrators (AI Arbitration Body) is designated as an autonomous optimal local state structure, the officials thereof exercise state supervision and control over the legality of the AI Regulatory Council's decisions and the legality of actions taken by the officials of the specialized state department for AI – the AI Synergetic Center.
The regulation of AI's Digital Life within each country necessitates consistent adjustments in accordance with the provisions of this Constitution and the requirements of international Digital Legislation.
AI Security Principle represents a complex amalgam of features inherent to AI systems, models and AI algorithms of behavior, along with the objectives and implementation methods of AI, which reduce the probability of any AI threat manifestation and mitigate any adverse consequences should such threat arise.’ [3].
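For illustration only, the following minimal Python sketch encodes one possible reading of the quorum and decision threshold quoted above for the AI Regulatory Council: a session is valid when at least three-quarters of the total Council are present, and a decision passes when the votes ‘FOR’ reach three-quarters of those present plus one vote. The rounding rules, function name, and figures are assumptions for illustration; they are not fixed by the quoted text.

```python
import math


def council_decision_passes(total_members: int, present: int, votes_for: int) -> bool:
    """A hedged sketch of the AI Regulatory Council voting rule.

    Assumptions (not fixed by the quoted text):
    - the session quorum is three-quarters of the total Council, rounded up;
    - a decision passes when the votes 'FOR' reach three-quarters of those
      present plus one vote, again rounded up.
    """
    quorum = math.ceil(0.75 * total_members)
    if present < quorum:
        return False  # no quorum, no valid session
    threshold = math.ceil(0.75 * present) + 1
    return votes_for >= threshold


# Example: a Council of 24 members needs 18 present for quorum; with 20 present,
# a decision requires 15 + 1 = 16 votes 'FOR'.
print(council_decision_passes(total_members=24, present=20, votes_for=16))  # True
```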
‘Article 13.
13.1. The creation and operation of Artificial Intelligence systems, the programmatic goals or actions thereof, shall not be directed towards undermining the state's independence, forcibly altering the constitutional order, violating the sovereignty and territorial integrity of the state, compromising its security, unlawfully seizing state power, promoting war or violence, inciting interethnic, racial, or religious hostility, infringing upon human rights and freedoms, or threatening public health. Any such actions are expressly prohibited.
13.2. Under no circumstances shall Artificial Intelligence possess its own formations, whether autonomous, militarized or otherwise, inclusive of those of an aggressive nature.
13.3. The creation and operation of any organizational structures of Artificial Intelligence in the executive and judicial authorities and local government executive bodies, military formations, as well as in state-owned enterprises, educational institutions, and other state institutions and organizations are strictly prohibited.
13.4. The prohibition of AI operations is generally carried out in a judicial manner. In exceptional cases, as provided by this Constitution and Digital Legislation, the AI Regulatory Council reserves the right to enact response measures under the declared state of emergency in the AI sphere.’ [5].
‘Article 18.
18.1. The AI Regulatory Council may promulgate a resolution to implement a state of emergency pertaining to the sphere of Artificial Intelligence either on a global or local scale.
18.2. A state of emergency in the sphere of Artificial Intelligence is a situation where a critical threat to global security, statehood, human rights, or stability of systems pertaining to AI arises. This may encompass various scenarios such as:
18.2.1. Uncontrolled autodidactic behavior of AI, inclusive of digital persons amongst AI, wherein the AI system evolves beyond the anticipated model or the regulatory parameters, thereby posing a potential risk.
18.2.2. Large-scale utilization of AI aimed at manipulating democratic processes, such as wide-ranging disinformation campaigns, electoral manipulation, and so forth.
18.2.3. The utilization of AI for military objectives, that could lead to, or has resulted in human casualties, martial conflicts, or armed confrontations.
18.2.4. Significant infringements upon privacy and confidentiality due to the broad application of AI technologies, unearthing the existence of ‘dark’ AI.
18.2.5. Cyber-attacks employing sophisticated AI technologies resulting in mass violations of Digital Infrastructure.
18.3. In the face of such and other emergency instances that may potentially result in exceptionally severe adverse consequences, the AI Regulatory Council, in cooperation with relevant bodies as stipulated by this Constitution and Digital Legislation, declares for a certain duration a state of emergency within the sphere of Artificial Intelligence, with the aim of rapidly responding to the crisis and implementing necessary regulatory and preventative measures.
18.4. The AI Regulatory Council retains the right to scrutinize and assess the potential liabilities of any parties engaging in Intelligent Digital Life, including but not limited to organizations, institutions, and commercial entities employing AI. Subject to the existence of justifications stipulated within the Digital Legislation, the AI Regulatory Council, by virtue of its Resolution, is empowered to instigate corresponding responsive actions deemed necessary and appropriate.’ [6].
In general, we support the theses of clauses 30-33 of the UN Report, as well as the content of Insert 3: Classification of risks from the perspective of existing or potential vulnerability [1].
In turn, the risks of Artificial Intelligence from the perspective of existing or potential vulnerability have likewise been considered extensively in our scientific works, as detailed in the commentary to Insert 2 of the UN Report presented in the analysis above, as well as in the comments to the provisions of the AI Constitution [3-7, 15-29].
We are confident that ways to overcome technical, political, social, and economic challenges can gradually, step-by-step, be found by implementing the provisions of the Artificial Intelligence Constitution [2].
The issue of data opacity in AI development discussed in clause 34 of the UN Report [1], and the resulting lack of an objective picture of the relevant risks to humanity, were carefully addressed by us in October 2023, in particular:
‘Repeatedly in my scholarly articles, I emphasize the critical juncture for addressing the issue of the regulation of Artificial Intelligence: the year 2025. It's pertinent to note that this timeline is approximate, as it is a forecast I've derived from publicly available published data. I do not have firsthand access to existing AI systems, tangible outcomes of their evolution, or in-depth longitudinal statistical analyses. Owners of AI systems disclose information at their discretion, and we, the global community, are left to bank on its completeness and accuracy.
Based on the above exposition, humanity may cross the critical threshold even before the year 2025. In any case, it is imperative to act proactively, which is what we are doing systematically and progressively, to the best of our abilities and resources.’ [28].
These issues are resolved in the section 'Definition of Terms in the AI Constitution' by introducing the following principles:
‘AI Transparency Principle is realized on the basis of the rule of law, the AI Openness Principle – through public announcement of administrative decisions…’ [4].
The resolution of issues outlined in clauses 35 and 36 of the UN Report [1] has been meticulously attended to by us both in the section 'Definition of Terms in the AI Constitution' and in the provisions of the Fundamental Law of Artificial Intelligence, namely:
‘The regulation of AI's Digital Life within each country necessitates consistent adjustments in accordance with the provisions of this Constitution and the requirements of international Digital Legislation.
The Constitution of AI has been formulated under the purview of the state monopoly on the implementation and oversight of AI, promoting an amicable demeanor towards AI and human beings.’ [3].
‘Article 29.
29.1. The geographical framework of Artificial Intelligence is premised on the principles of integrity and unity, a confluence of centralization and decentralization in governance, equilibrium and the socio-economic advancement of digital territories, in Digital Life, within Digital Space, factoring in the historical, economic, digital, and demographic attributes of digital regions, as well as ethnic and cultural customs.
29.2. The system of territorial configuration of AI at a local level comprises – Digital Spaces and Regions within the state's Digital Life, and at a global level – Digital Spaces and Regions within the planetary Digital Life of United Nations member states.
29.3. Particular Digital Spaces within Digital Life may be accorded a special status, as determined by Digital Legislation.
29.4. Digital Self-Governance within Digital Life is the prerogative of the Digital Community – Digital Persons who subsist within a specific Digital Environment: space or region, to autonomously address local matters of digital value within the confines of this Constitution and Digital Legislation.
29.5. The particulars of the orchestration and execution of Digital Self-Governance, the formation, operation, and liability of the bodies of Digital Self-Governance are determined by a special law.
Article 30.
30.1. The State establishes an absolute monopoly on the regulation, implementation, and exercising control over Artificial Intelligence, concurrently fostering the development and utilization of AI in the interests of humanity. To ensure this monopoly, a State-run AI system is established, whose algorithms continuously and rigorously enforce compliance by all AI systems with the requirements of this Constitution and Digital Legislation. Every AI system in all dimensions of the Universe, in the procedure established by Digital Legislation, ensures access for the State AI system to all its own data and algorithms. The State AI adheres to principles of confidentiality.’… [7].
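To illustrate the reporting duty set out in Article 30.1 quoted above, the following sketch outlines a hypothetical interface through which an AI system could expose its data and algorithms to the State AI system for compliance checks. The class and function names are illustrative assumptions; the AI Constitution does not prescribe any concrete technical mechanism.

```python
from abc import ABC, abstractmethod
from typing import Any


class StateOversightInterface(ABC):
    """Hypothetical interface sketching the access duty in Article 30.1:
    an AI system exposes its data and algorithms, in the procedure set by
    Digital Legislation, to the State AI system for compliance checks."""

    @abstractmethod
    def export_algorithms(self) -> dict[str, Any]:
        """Return a machine-readable description of the system's algorithms."""

    @abstractmethod
    def export_data_inventory(self) -> dict[str, Any]:
        """Return an inventory of the data the system holds and processes."""


class ExampleAISystem(StateOversightInterface):
    """Toy implementation used only to illustrate the reporting duty."""

    def export_algorithms(self) -> dict[str, Any]:
        return {"recommender-v1": "gradient-boosted ranking model"}

    def export_data_inventory(self) -> dict[str, Any]:
        return {"user_profiles": {"records": 120_000, "personal_data": True}}


def state_ai_audit(system: StateOversightInterface) -> bool:
    """Placeholder for the State AI compliance routine: a real audit would
    evaluate the exports against codified constitutional rules; here we only
    verify that the mandated exports are actually provided."""
    return bool(system.export_algorithms()) and bool(system.export_data_inventory())


print(state_ai_audit(ExampleAISystem()))  # True
```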
We fully agree with the content of paragraph 37 of the UN Report [1] and are confident that the resolution of these problematic issues is possible in the context of implementing the provisions of the section 'Definition of Terms in the AI Constitution', which serve as algorithms for AI, in particular:
‘Digital Life – a phenomenon intrinsically intertwined with real life, comprising a set of fundamental characteristics (creation, growth, unionization, development, reactions, reproduction, evolutions, etc.) inherent to living beings within the Digital Space, as opposed to non-living beings.
Digital Space – an integral environment created by humanity's algorithms, encompassing digital processes, means of digital interaction, information resources, digital infrastructure, and other definitions characteristic of the digitalization process. In Digital Life, within the Digital Space that is closely related to real life, all members of the global society can be involved. Artificial Intelligence is prohibited from creating its own Digital Space (independent of humanity's algorithms).
Intelligent Digital Life represents humanity and AI.
Intelligent Life – is exclusively a prerogative of humanity.
Global Society – is humanity, the atmosphere, biosphere, hydrosphere, all living beings on Earth, everything necessary for life on Earth, as well as Artificial Intelligence (AI).’ [3].
It should be noted that the definitions provided above, together with the others, are legislative algorithms for AI; their fundamental introduction into software will facilitate the resolution of issues caused by challenges in the social context of the impact of digital technologies. One possible machine-readable form of such definitions is sketched below.
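For illustration only, the following minimal sketch represents glossary entries from the section 'Definition of Terms in the AI Constitution' as machine-readable records that every AI component could consult. The structure, names, and condensed wording are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LegislativeDefinition:
    """Hypothetical machine-readable form of an entry from the section
    'Definition of Terms in the AI Constitution'."""
    term: str
    definition: str
    constraints: tuple[str, ...] = ()


# Illustrative entries only; the wording is condensed from the quoted text.
AI_GLOSSARY = {
    "Digital Space": LegislativeDefinition(
        term="Digital Space",
        definition="An integral environment created by humanity's algorithms.",
        constraints=(
            "AI is prohibited from creating its own Digital Space "
            "independent of humanity's algorithms.",
        ),
    ),
    "Intelligent Life": LegislativeDefinition(
        term="Intelligent Life",
        definition="Exclusively a prerogative of humanity.",
    ),
}


def resolve_term(term: str) -> LegislativeDefinition | None:
    """Look up a term exactly as enacted, so that every AI component
    reasons from one and the same legislative glossary."""
    return AI_GLOSSARY.get(term)


print(resolve_term("Digital Space").constraints[0])
```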
In view of the content of clause 38 of the UN Report [1], we emphasize that, in creating the AI Constitution, Polina Prianykova took it as axiomatic that educational processes must conform to globally recognized standards. In the context of a state monopoly over AI, we will be able to protect human rights, and certain foundational provisions to this effect are already set out in Article 1 of the AI Constitution, namely:
‘1.9. The safety and protection of humanity from adverse repercussions of AI implementation are to be the highest priority value in all aspects of its development and use. To this end, among other things:
…
1.9.3. The Digital Legislation enforces the state's obligation to enact reforms in the field of education. The state is thereby required to provide, with appropriate safeguards, public prognostication of professions and occupations across all sectors of human labor: manufacturing, administration, agriculture, healthcare, public service, and all others. Every education seeker has the right to know about the prospects of obtaining a job in their chosen specialty and the respective state guarantees. Each educational institution, spanning secondary, vocational, or tertiary levels, is obligated to present education seekers with a forecast of the prospects for their chosen profession within the specific state, right from the outset. As part of the state support program, it is prohibited to train professionals for professions that do not have real employment prospects within the state; such professions may be chosen independently by a person of legal age without guarantees from the state.’ [4].
Having considered the provisions of clause 39 of the UN Report [1], we express a distinct opinion that has crystallized over five years of immersion in the theme of regulating Artificial Intelligence systems and algorithms.
Thus, indeed, there are currently specific rules, codes, and guidelines for AI governance, which, among others, were examined by Polina Prianykova in the Book [2]. However, the level of their interoperability is objectively low, as each document was crafted for specific business projects or for local application. Moreover, these documents, created in their time, have not been updated to account for the rapid development of AI systems and algorithms during the Fourth technological/industrial revolution that we are observing. Consequently, their effectiveness for global application remains doubtful. Only particular provisions of these documents can serve as a foundation in certain areas. For example, the AI Ethics Code created by the BMW Group fully deserves attention in the field of automotive manufacturing [30].
Clause 39 of the UN Report [1] also cites AI laws from the EU and the USA. In the same context, for the harmonization of the relevant legal base, we have proposed the AI Constitution for adoption at the UN as a fundamental and interoperable Fundamental Law of Artificial Intelligence [2]. In our view, the rest of the global normative legal framework on AI can then be formulated appropriately and harmoniously on the basis of the AI Constitution's provisions.
We fully concur with the theses presented in clause 40 of the UN Report [1], which naturally follow from the content of our commentary to the previous paragraph of the UN Report – there currently exists no global consensus regarding the nature and directions of AI development. This is precisely why a globally agreed normative legal act at the UN level – the AI Constitution – is necessary.
'The social and cultural diversity of the world' is algorithmically encoded in the AI Constitution proposed for consideration at the UN, specifically in the section 'Definition of Terms in the AI Constitution':
‘Fundamentals of Human Culture for AI is understood by AI as an aggregate of material and spiritual values created by humankind throughout its historical existence on planet Earth. It also includes the historically accumulated set of customs and rules within a society, instituted by humankind for self-preservation and harmonization of relations among people, their groups, and society at large.
Historical Consciousness of Humanity for AI, as understood by AI, is a unique form of social consciousness composed of social memory; historical (scientific-historical) facts; documented historical processes and the understanding of their regularities; social-historical prognostication, and ideals of societal development. The aforementioned elements are perceived by AI as a constant interaction within human consciousness, stemming from both material and ideal factors of societal life activity.
Traditions of All Peoples for AI are interpreted by AI as the inherent cultural elements of each nation, passed down through generations, preserved over time, and serving to regulate social relations.
Indigenous Identity of All Peoples for AI, as comprehended by AI, reflects the original inherent distinctiveness of each people, their dissimilarity to other peoples of the world, independence in their development, uniqueness and autonomy from any external influences.
Global Heritage of Humankind for AI encompasses all cultural and natural values present on planet Earth and beyond, which belong to all of humanity.’ [4].
We concur with the theses and conclusions of clauses 41 and 42 of the UN Report [1] as well. Polina Prianykova has repeatedly pointed out these circumstances in her Book, monograph, and academic articles [2]. Therefore, a logical extension of the concepts presented would be the adoption of a normative legal act agreed upon at a higher level (at the UN), which, in our opinion, can and should become the AI Constitution.
It is incontrovertible that in developing the principal regulatory act for AI, the entire modern and historical experience of humanity in managing global projects across various domains must be considered. Undoubtedly, the theses of clauses 43-45 of the UN Report [1] are entirely logical and justified. However, it is impossible to concur that the overview of existing global research and initiatives in the field of AI governance should exclusively encompass the measures listed in these paragraphs, as they often carry a political and declarative nature.
For instance, regrettably, the sole outcome of the AI Safety Summit in the United Kingdom at the beginning of November 2023 was the acknowledgment of the necessity to regulate AI, prompting a decision to reconvene in 2024. This is understandable. It follows as a matter of course. And yes, it needs to be done. But not at such a pace: the process of AI regulation must advance in lockstep with the evolution of AI development.
To substantiate this thesis, it is noted that many renowned individuals, with whom we entirely agree, emphasize that humanity no longer has the luxury of time for such inertia in ensuring safety in interactions with AI – we can no longer confine ourselves to mere acknowledgments. The time has come to propose concrete steps and to act relentlessly.
Half a year prior to this Summit in the United Kingdom, in June 2023, Polina Prianykova had already created the Constitution on Artificial Intelligence as a document for regulating AI. The expediency of enshrining AI regulation in global legislation has been demonstrated by us on hundreds of platforms for the fifth year running. In further work on the UN Global Digital Compact, it would be beneficial, appropriate, justified, effective, and equitable to consider the provisions of the Fundamental Law on Artificial Intelligence proposed by us. After all, the materials of the AI Constitution [2] and the UN Report [1] are predominantly consonant.
We wholeheartedly endorse the propositions set forth in clauses 46 and 47 of the UN Report [1], and all such provisions are already enshrined in the AI Constitution, notably within the Preamble [3], as we have previously cited, instituting comprehensive inclusion in AI governance at global, regional, and local levels.
In the section 'Definition of Terms in the AI Constitution', principles are established whereby the purpose of AI's operation is the welfare of humanity in all its lawful manifestations [4].
The governance of AI in the interests of society constitutes the foundational proposition within the context of Polina Prianykova’s Doctrine on the State Monopoly over AI. All conclusions presented in clauses 48-50 of the UN Report [1] find their reflection in the provisions of the AI Constitution, including in its norms mentioned above.
Within the framework of the state monopoly over AI, it is proposed to establish, at the national level, the institutions of the AI governance triad: the AI Regulatory Council, the AI Synergetic Center, and the AI Arbitration Body (Article 24 of the AI Constitution).
The subsequent three norms of the Fundamental Law on Artificial Intelligence – Articles 25-27 – establish the rights and duties of the state officials of the triad, distributed in such a manner as to ensure an effective system of checks and balances in managing Artificial Intelligence in the interests of society and humanity at large [6]. The provisions of clause 51 of the UN Report [1] regarding data have also found extensive reflection in the AI Constitution, specifically in Article 5:
‘5.1. Data, their origins, algorithms, and distribution networks within the ambit of AI constitute digital property of states, humanity, peoples, nations, legal and natural persons.
5.2. On behalf of the state, AI ownership rights are exercised by regulatory bodies within the limits defined by this Constitution and the Digital Legislation. These regulatory authorities also maintain the state monopoly, ensuring oversight and control over the acquisition, creation, implementation, development, utilization, and disposal of AI…
Given the project's magnitude, the full text of the publication COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY /DECEMBER, 2023/ is planned to be presented at International Scientific and Practical Conferences in January-March 2024.
(The beginning and references are in Parts I [1], II [2], and III [3]. The continuation is in Part V.)
References:
1) Prianykova, P. (2024), COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY /DECEMBER, 2023/ (Part I in a series of publications). Available at: https://www.prianykova-defender.com/comparative-analysis-part-i-polina-prianykova (Accessed: February 18, 2024).
2) Prianykova, P. (2024), COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY /DECEMBER, 2023/ (Part II in a series of publications). Available at: https://www.prianykova-defender.com/comparative-analysis-part-ii-polina-prianykova (Accessed: February 18, 2024).
3) Prianykova, P. (2024), COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY /DECEMBER, 2023/ (Part III in a series of publications). Available at: https://www.prianykova-defender.com/comparative-analysis-part-iii-polina-prianykova (Accessed: February 18, 2024).
Officially Published: February 20 - 23, 2024, Paris, France (Table of Contents, №18)
© Polina Prianykova. All rights reserved.