A new manifesto from technology thinker Neal Fishman is reigniting debate over how artificial intelligence should be governed at a global level. In a sweeping proposal that blends ethics, law, and technology policy, Fishman argues that all advanced AI systems should require licensing and certification before they are created, deployed, or modified.

The manifesto comes at a time when artificial intelligence is rapidly expanding into nearly every sector of society, from healthcare and finance to education and national security. Fishman's central claim is that current regulatory frameworks are insufficient to manage the risks posed by increasingly powerful AI systems. He warns that without structured oversight, AI could reshape economies, influence democratic systems, and disrupt social stability at an unprecedented scale.

The proposal is bold and controversial. It suggests treating AI development in a manner similar to regulated industries such as medicine, aviation, and nuclear energy: fields where licensing is considered essential to protect public safety.
A Manifesto Rooted in Urgency
Fishman frames artificial intelligence as a turning point in human history comparable to the invention of nuclear technology. According to his argument, AI is not just another tool but a self-improving, adaptive system capable of operating at massive scale, often without continuous human oversight.
He highlights several emerging risks, including the spread of misinformation, economic disruption through automation, and the concentration of power among a small number of organizations deploying advanced AI systems. These risks, he argues, are not hypothetical but already visible in early forms across industries and societies.
The manifesto emphasizes that AI systems can influence millions or even billions of people simultaneously, shaping information flows, decision-making, and access to opportunities. Without accountability, such systems could operate with what Fishman describes as “ungoverned power.”
The Core Proposal: Licensing and Certification
At the heart of Fishman’s manifesto is a clear and comprehensive proposal: no entity should be allowed to develop or deploy advanced AI systems without first obtaining a license from an authorized governing body.
This licensing framework would apply broadly to corporations, governments, academic institutions, and even individuals working with advanced AI technologies. The requirement would extend not only to initial development but also to significant modifications of existing systems.

Fishman draws parallels to other regulated fields. Just as societies require licenses to practice medicine or operate aircraft, he argues that the creation of AI systems, given their potential impact, should be subject to similar oversight. The proposal is not intended to halt innovation but to ensure that innovation occurs within a framework of responsibility and public trust.
Defining Which AI Systems Require Regulation
Recognizing that not all AI systems pose equal risk, Fishman proposes a tiered licensing system based on factors such as:
- The level of autonomy the system possesses.
- The scale of its deployment.
- The significance of its impact on individuals or society.
- The degree to which it makes or influences critical decisions.
For example, a simple automated tool with limited functionality might not require strict oversight. In contrast, an AI system used for medical diagnosis, financial decision-making, or large-scale information distribution would fall under stricter regulatory requirements. This risk-based approach is designed to balance innovation with safety, allowing low-risk experimentation while imposing stronger controls on high-impact systems.
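To make the tiered approach concrete, the four risk factors above can be imagined as a simple scoring rubric. The sketch below is purely illustrative: the manifesto names the factors but not any scoring scheme, so the scales, thresholds, and tier names here are invented assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    MINIMAL = "minimal oversight"
    STANDARD = "standard license"
    STRICT = "strict license with audits"


@dataclass
class AISystemProfile:
    """Hypothetical risk factors mirroring the manifesto's four criteria."""
    autonomy: int              # 0 (fully supervised) .. 3 (self-directed)
    deployment_scale: int      # 0 (internal tool) .. 3 (population-scale)
    societal_impact: int       # 0 (negligible) .. 3 (critical services)
    decision_criticality: int  # 0 (advisory) .. 3 (binding decisions)


def license_tier(profile: AISystemProfile) -> Tier:
    """Map a risk profile to a licensing tier by summing factor scores.

    The thresholds are invented for illustration; the manifesto
    specifies the factors, not the arithmetic.
    """
    score = (profile.autonomy + profile.deployment_scale
             + profile.societal_impact + profile.decision_criticality)
    if score <= 3:
        return Tier.MINIMAL
    if score <= 7:
        return Tier.STANDARD
    return Tier.STRICT


# A diagnostic medical AI: high impact and criticality, moderate scale.
medical = AISystemProfile(autonomy=2, deployment_scale=2,
                          societal_impact=3, decision_criticality=3)
print(license_tier(medical))  # Tier.STRICT
```

A real regulator would presumably weight the factors rather than sum them equally, but even this toy rubric shows how a simple automated tool would land in the low tier while a medical-diagnosis system would not.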
Accountability at the Human Level
One of the most striking elements of the manifesto is its insistence on clear human accountability. Fishman argues that every AI system operating in the real world should have an identifiable human or organization responsible for its behavior. This principle is intended to eliminate what he describes as an “accountability vacuum,” where harmful outcomes cannot be traced back to a responsible party.
Under the proposed framework:
- All AI systems must be registered with a governing authority.
- Developers must demonstrate technical competence and ethical responsibility.
- Organizations must declare the intended purpose and limits of each system.
- Licenses must be renewed periodically based on performance and compliance.
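The four requirements above amount to a registry of accountable records. As a rough sketch of what one entry might hold, under the assumption of an annual renewal term (the manifesto calls for periodic renewal without fixing an interval), the field names below are illustrative rather than drawn from the text:

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class LicenseRecord:
    """Illustrative registry entry; all field names are assumptions."""
    system_name: str
    responsible_party: str  # the identifiable accountable human or org
    declared_purpose: str   # intended purpose, as required at registration
    declared_limits: str    # declared limits of the system
    issued_on: date
    term_days: int = 365    # assumed one-year renewal interval

    def expires_on(self) -> date:
        return self.issued_on + timedelta(days=self.term_days)

    def needs_renewal(self, today: date) -> bool:
        return today >= self.expires_on()


record = LicenseRecord(
    system_name="triage-assistant-v2",
    responsible_party="Example Health Systems, Chief AI Officer",
    declared_purpose="Prioritize emergency-room intake queues",
    declared_limits="Advisory only; no autonomous treatment decisions",
    issued_on=date(2024, 1, 15),
)
print(record.needs_renewal(date(2025, 6, 1)))  # True: past the one-year term
```

Note that the record names a specific responsible party and states both purpose and limits up front, which is exactly the traceability the manifesto's "accountability vacuum" argument demands.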
The manifesto also calls for personal accountability among executives and developers, suggesting that individuals—not just organizations—should bear responsibility for the consequences of AI systems.
Global Coordination and International Standards
Fishman's proposal extends beyond national regulation to include a global framework for AI governance. He advocates for the creation of an international body similar to organizations that regulate nuclear energy or aviation safety. This body would establish baseline standards for AI licensing and facilitate cooperation between countries.

The manifesto acknowledges that achieving global consensus will be difficult, particularly given geopolitical competition among major powers. However, Fishman argues that the cross-border nature of AI makes international coordination essential. Without it, he warns, companies and governments may exploit regulatory gaps by operating in jurisdictions with weaker oversight.
Enforcement and Consequences
The manifesto outlines strict consequences for organizations that develop or deploy AI systems without proper licensing.
These include:
- Significant financial penalties.
- Criminal liability for severe violations.
- Forced shutdown or seizure of noncompliant systems.
Fishman argues that enforcement must be strong enough to prevent organizations from treating penalties as a cost of doing business. He also stresses that regulatory frameworks must evolve alongside the technology, with continuous monitoring and updates to ensure effectiveness.
Military and High-Risk Applications
A particularly controversial aspect of the manifesto is its stance on military AI. Fishman argues that military and national security applications should not be exempt from licensing requirements, despite the sensitive nature of such systems. He calls for strict oversight of autonomous weapons and other high-risk technologies, emphasizing the need for human accountability in decisions involving life and death. While acknowledging the need for confidentiality in defense systems, the manifesto insists that oversight mechanisms must still exist, even if they operate within classified frameworks.
Implications for Innovation and Industry
The proposal raises important questions about how regulation might affect innovation. Critics of strict licensing systems often argue that heavy regulation could slow technological progress and limit experimentation. However, Fishman counters that trust and accountability are essential for sustainable innovation. He suggests that without clear rules, public backlash against AI could grow, potentially leading to more restrictive and less effective forms of regulation. By establishing transparent standards early, the industry could avoid future crises and build long-term public confidence.
From an editorial standpoint, Fishman's manifesto represents one of the most comprehensive calls for AI governance to date. Its strength lies in its clarity and ambition. By framing AI as a technology that demands oversight comparable to nuclear energy or aviation, the manifesto forces policymakers and industry leaders to confront the scale of the challenge.

However, the proposal also raises practical and philosophical questions. Implementing a global licensing system would require unprecedented levels of international cooperation. It would also demand new regulatory institutions, technical standards, and enforcement mechanisms. There is also the question of how such a system would interact with open-source AI development, where tools and models are distributed freely across the internet.

Despite these challenges, the manifesto contributes an important perspective to the ongoing debate about AI governance. It highlights the tension between innovation and control, and it underscores the need for proactive rather than reactive regulation.
Conclusion
The manifesto published by Neal Fishman calls for a fundamental shift in how artificial intelligence is governed. By proposing a global system of licensing and certification, it seeks to ensure that AI development proceeds with accountability, transparency, and public oversight. While the feasibility of such a system remains uncertain, the ideas presented reflect growing concern about the societal impact of advanced AI technologies. As governments, companies, and researchers continue to grapple with these challenges, proposals like Fishman’s are likely to play a significant role in shaping the future of AI policy. Ultimately, the manifesto raises a central question that will define the next era of technology: how to harness the power of artificial intelligence while ensuring it remains aligned with human values and societal well-being.