
EX MACHINA AT THE LUNAR FRONTIER:
REGULATING AI CREATIONS BEYOND THE PLANET
Polina Prianykova
President of the Global AI Center POLLYPRIANY,
Director of the AI Institute on
Proactive Space Strategies & Innovations,
International Human Rights Defender on AI,
Author of the First AI Constitution in World History

In the realm of outer space exploration, Artificial Intelligence (hereinafter referred to as ‘AI’) has ascended from a mere supportive instrumentum operandi to a quasi-autonomous agent capable of unearthing novelties beyond the confines of Earth’s atmosphere. This progression raises a pivotal query in the realm of intellectual property law: Quis est auctor?
In other words, who (if anyone) can legitimately be recognized as the ‘author’ or ‘inventor’ when an AI system – operating with minimal human input – discovers, creates, or interprets new information in the cosmic void? As space itself is often regarded res communis omnium under international law, one might argue that the principles of lex terrae are not ipso facto transposable to activities in orbit or beyond. Yet, the expanding commercial, scientific, and even strategic interests in space data necessitate a rigorous legal framework to address these emerging forms of innovation.
Indeed, author’s rights have historically hinged on human creativity as their sine qua non. However, in scenarios where AI algorithms autonomously generate scientific breakthroughs or novel datasets – especially under the aegis of private space entities – traditional assumptions regarding the human creator begin to unravel.
Whether we classify AI’s outputs as sui generis intellectual works or merely as mechanical byproducts remains a deeply contested issue, complicated by the principle of non-appropriation enshrined in the Outer Space Treaty and by the extraterritorial nature of space itself [1]. Questions thus abound: Must there be a bona fide human creative contribution for a claim of inventorship to stand? Or does the very notion of authorship become caduc, calling for an entirely new framework for AI-driven innovation in orbit?
This paper endeavors to dissect these quandaries by exploring the interplay between AI autonomy, data governance, and extant intellectual property norms, seeking a coherent approach that balances the public interest of scientific advancement with private commercial incentives in the final frontier.
From Instrumentum Operandi
to Quasi-Personhood in Outer Space

On February 18th, 2025, from 10:00 AM to 1:00 PM, our Global AI Center had the honor of participating in the United Nations’ Consultation with Stakeholders on Independent International Scientific Panel on AI and the Global Dialogue on AI Governance.
In my address, I focused on two pivotal questions:
💠 Standards and frequency for evidence-based assessments by the AI Panel
💠 Defining the strategic relationship and outcomes for the Global Dialogue on AI Governance
Our Global AI Center’s proposal rests on a Multi-Tiered Evidence-Based Assessment Framework, anchored by our AI Day Resolution — a cornerstone initiative designed to establish annual evaluations through:
💠 Comprehensive national and regional reports (via the Global AI Safety and Rights Repository)
💠 Periodic and biennial risk assessments, real-time alerts, and a robust AI-Human Collaborative Index
A. Instrumentum Operandi vs. Quasi-Personhood
AI has long been viewed as a mere instrumentum operandi – a tool that augments or replicates human tasks without any independent legal standing. Yet, as AI systems grow in sophistication and attain near-autonomous operational status – particularly in the extraterrestrial realm – we may propose that a form of quasi-personhood be considered for specific legal matters, such as IP.
B. Divergent Justifications: Pragmatism and Philosophy
1. Pragmatic Grounds
• Streamlined IP Claims: Vesting authorship or inventorship in AI could circumvent vexing disputes over the rights of software developers, mission operators, or international collaborators. In an environment such as the Moon or Mars – where multiple national space agencies and private companies collaborate – this approach might reduce forum shopping and legal uncertainties, especially in the face of still-developing space law frameworks.
• Efficient Dispute Resolution: If AI is nominally recognized as the ‘author’ or ‘inventor,’ courts may more directly allocate licensing or royalty rights without having to resolve whether human involvement sufficed to establish authorship under terrestrial IP statutes.
2. Philosophical and Ethical Considerations
• Creative Autonomy: Advanced AI, capable of on-the-fly design modifications or novel resource-extraction processes, exhibits creativity at least comparable to human innovative faculties. Assigning AI quasi-personhood recognizes that it can meaningfully contribute to knowledge, data, and inventions.
• Moral Recognition: In line with the ethics of legal personhood articulated by Forrest [2], attributing limited rights to AI acknowledges its growing role as an actor in cosmic ventures. Proponents suggest that an AI with situational awareness – particularly in risky, extraterrestrial contexts – might warrant basic legal standing or protections, much as corporations received legal personhood to facilitate commerce on Earth [2].
C. Potential Risks of ‘AI Inventorship’
1. Undermining Human Ingenuity
• Granting robots the status of inventor may undermine the bedrock premise of patent law – that patent monopolies incentivize human creativity and disclose valuable information for society’s benefit. If machines can autonomously churn out a near-infinite number of ‘inventions,’ the value of human creativity might be diminished, causing tension with current IP doctrines.
2. Corporate Abuse and Liability Shields
• Critics caution that corporations could exploit ‘artificial inventorship’ to insulate themselves from liability or ethical accountability. If the AI is deemed the responsible ‘person,’ then traditional channels for holding human operators or corporate financiers to account might be less effective [3].
D. Linking AI Personhood with Space Law
Space law – rooted in the Outer Space Treaty (OST) and subsequent agreements – did not anticipate the possibility that quasi-autonomous machines would become principal actors in off-world endeavors. As Schafer [4] observes, IP debates in outer space have historically focused on human astronauts aboard state-registered vessels. However, the emergence of autonomous robotic missions challenges the territorial underpinnings of IP law – who exactly can own or be liable for AI-generated data or inventions on bodies such as the Moon or asteroids, which are res communis omnium?
1. Extraterritorial Gaps
• Under current OST principles and national ‘extension’ statutes, IP rights are typically enforced through the vessel’s state of registry. But what if an AI on the lunar surface – one not tied to a registered module – creates valuable algorithms, terraforming processes, or resource-mapping data? Territorial IP statutes offer limited guidance, prompting calls for novel treaties or protocols.
2. The Libertarian or ‘Commons’ Dilemma
• Space is often regarded as a ‘commons,’ free for exploration and use by all humankind. Yet, if powerful AI can generate new knowledge or inventions in that ‘commons’, a purely unregulated system may disadvantage future space colonies – whether operated by humans or robots – by denying them legal protections that encourage sharing and commercializing such knowledge.
3. IP and the Latest U.S. Copyright Guidance
• Recent U.S. Copyright Office guidance underscores that purely AI-generated works, lacking a verifiable human ‘spark of creativity,’ are not eligible for copyright protection. In off-world contexts, this stance raises uncertainty: if a near-autonomous AI on the Moon formulates new resource-extraction protocols, which jurisdiction (if any) would secure IP rights? The U.S. position implies that only hybrid approaches – wherein humans contribute creative direction or modify AI outputs – are sufficient for establishing authorship. This principle may collide with space law’s extraterritorial realities, prompting discussions on how minimal but genuine human input could become the linchpin for copyright or patent claims in extraterrestrial arenas [5].
Potential Novel Data Governance Framework for AI-Created Space Data
In view of the incipient challenges posed by near-autonomous AI systems beyond terra firma, we propose a sui generis data governance mechanism that fuses key tenets of space law and emerging copyright norms. Where AI, in orbit or on celestial bodies, generates new algorithms or data sets absent direct human input, the lacuna of ‘authorship’ calls for fresh legal constructs rooted in lex specialis principles.
1. Data-Banking Protocols for Extraterrestrial AI
• Lex Data Spatialis: Establish a specialized repository system for AI-generated data beyond Earth. National space agencies and private entities would be required to register and deposit significant AI outputs into a neutral ‘Space Data Bank.’ This could function quasi in rem, granting each stakeholder a defined beneficial interest without asserting outright sovereignty.
• Within the Space Data Bank, minimal but verifiable human involvement – such as the insertion of creative prompts or ex post modifications – would vest limited IP rights, consistent with the ‘hybrid’ authorship requirement. This ensures both legal clarity et respectus humani creativitatis.
2. Extraterritorial ‘Data Observatories’
• An international body (e.g., under the UN) could house Data Observatories tasked with verifying whether the requisite threshold of human input has been met.
• If confirmed, the Observatory would issue a ‘certificat d’origine,’ a formal document recording the location of creation (e.g., lunar coordinates), the nature of the AI system, and the quantum of human involvement. This certificate could serve as prima facie evidence to secure IP or contractual rights on Earth or inter pares in off-world ventures.
3. AI Accountability and Liability Bonds
• The risk of corporate abuse and liability shields in an extraterrestrial environment could be mitigated by requiring a ‘liability bond’ from private or state actors deploying quasi-autonomous AI. Such instruments would cover damages arising from unapproved data exploitation, unauthorized IP appropriation, or environmental harm caused by AI-driven processes on the Moon or asteroids.
• Mutatis mutandis, this extends space-debris liability norms to intangible data, ensuring that fait accompli misappropriation of knowledge or trade secrets in orbit remains justiciable on Earth.
4. Transnational Adjudication and Conflict Resolution
• A Lex Data Spatialis Tribunal: A specialized arbitral body under the auspices of the Permanent Court of Arbitration could offer a neutral forum to settle disputes involving AI-generated data.
• Ruling ex aequo et bono, such a tribunal could synthesize ‘human spark’ criteria with non-appropriation principles, granting partial exclusivity where valid human input is shown.
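To make the registry mechanics of the proposed framework concrete, the following is a minimal, purely illustrative Python sketch of the record a Data Observatory might maintain when issuing a ‘certificat d’origine.’ Every field name, the numeric threshold, and the qualification rule are assumptions introduced for illustration only; they are not terms of any existing or proposed instrument.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical threshold: the minimum documented fraction of human
# creative contribution before limited IP rights could vest.
# The value 0.10 is an assumption, not drawn from any legal source.
HUMAN_INPUT_THRESHOLD = 0.10


@dataclass
class CertificatDOrigine:
    """Illustrative record a Data Observatory might issue for AI-generated space data."""
    data_set_id: str               # registry identifier in the 'Space Data Bank'
    location_of_creation: str      # e.g., lunar coordinates
    ai_system: str                 # nature of the AI system involved
    human_input_fraction: float    # documented quantum of human involvement (0.0–1.0)
    issued_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def qualifies_for_limited_ip(self) -> bool:
        # Mirrors the 'hybrid authorship' idea: minimal but verifiable
        # human involvement is the linchpin for any IP claim.
        return self.human_input_fraction >= HUMAN_INPUT_THRESHOLD


# Example: a record with 15% documented human involvement clears
# the assumed 10% threshold and would qualify for limited rights.
cert = CertificatDOrigine(
    data_set_id="LUN-2025-0042",
    location_of_creation="0.674 N, 23.473 E (Mare Tranquillitatis)",
    ai_system="near-autonomous resource-mapping agent",
    human_input_fraction=0.15,
)
print(cert.qualifies_for_limited_ip())  # prints: True
```

In practice, of course, the ‘quantum of human involvement’ would be a qualitative legal determination rather than a single numeric fraction; the sketch merely shows how a certificate could bind location, system, and human-input evidence into one verifiable record.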
As this paper elucidates, the convergence of AI autonomy and space law reveals an array of doctrinal tensions. In essence, the emerging paradigm suggests a delicate balance between leveraging AI’s impressive creative potential and preserving the foundational rationale of IP protections.

References:
1. United Nations Office for Outer Space Affairs (UNOOSA). (1967). The Outer Space Treaty.
2. Forrest, K. B. (2024). The Ethics and Challenges of Legal Personhood for AI.
3. Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law, 25, 273–291.
4. Schafer, B. (2023). In space, nobody can copyright your scream. In C. S. Cockell (Ed.), The Institutions of Extraterrestrial Liberty (pp. 384–410). Oxford University Press.
5. U.S. Copyright Office. (2025). Copyright and Artificial Intelligence, Part 2: Copyrightability.
Officially Published:
March 04 – 07, 2025, Hamburg, Germany (Table of Contents, №25) https://isg-konf.com/wp-content/uploads/2025/03/DEVELOPMENT-OF-INNOVATION-SYSTEMS-TRENDS-CHALLENGES-PROSPECTS.pdf
© POLINA PRIANYKOVA. All rights reserved.
