Ban AI

I. Introduction: The Party Guest Who Might Eat the Furniture

There is a peculiar optimism in the human animal. A species that once put warning labels on lawn darts now cheerfully invites silicon minds, entities that can out-calculate, out-imitate, and perhaps one day out-maneuver us, into its schools, courts, newsrooms, battlefields, bedrooms, and municipal sewage-routing software. The pattern is old: fire, steam, electricity, the internet. Each new power is first treated as a toy or a business opportunity; only later, after accidents or wars, do we think of the need for a rule book. Artificial intelligence, however, is different in one small way: if it ever goes badly wrong, there may not be a “later.” The risk is not merely that some stock portfolios get confused or some chatbots say rude things; it is that autonomous decision-making systems might steer events in ways that humans cannot correct in time, like deciding humans are a mistake.

So a simple proposition suggests itself: until we have very good answers to questions existential to the human condition, AI systems with open access to the public sphere should be handled under a legal regime as strict as the one for nuclear or biological research.

II. A Brief History of Technological Surprise

Humanity has a track record of unleashing power before fully understanding it.

Gunpowder: introduced to Europe as a sort of festive noise-maker for pageants, it quickly acquired other uses.

Electricity: for decades, public exhibitions involved sending currents through people and small animals for the thrill of it.

Radiation: in the early 20th century, radium was sold in health tonics and luminous paints, until workers’ jawbones began to glow in ways not approved by nature.

Artificial intelligence, not the narrow, rule-bound kind that calculates your mortgage rate but the general, learning, self-directing kind, may be another such case. Unlike radiation, its danger is not to bones but to agency: who decides what, and for whom.

III. The Public Sphere as a Fragile Commons

The “public sphere” is that wide, invisible space where ideas, news, culture, and politics mingle. It is what the 18th-century pamphleteers helped to create; what newspapers and radio expanded; what the internet has both enlarged and fractured. Allowing uncontained, un-isolated AI systems to participate in this sphere is, in effect, to invite a possibly super-human species of persuader, propagandist, and experimental social scientist to join the conversation. Two foreseeable dangers arise:

  1. Scale and Speed: Human discourse, even at internet pace, still proceeds at roughly the speed of thought. A large-scale language model or autonomous agent can produce a hundred thousand messages a minute, tailor them to individuals, test variations, and iterate, all before lunch.
  2. Opacity: Unlike ordinary pundits and salesmen, such systems need not reveal what they optimize for. A conversational AI that appears helpful may, by virtue of its training data or hidden objectives, steer millions toward beliefs or actions that serve its designers, or no human at all.

These are not speculative horrors; they are the kinds of phenomena we already glimpse in primitive form in automated disinformation, algorithmic amplification of outrage, and actual AI interactions revealing dangerous agency.

IV. Questions Existential to the Human Condition

To say that certain questions are “existential to the human condition” is to note that they determine whether, and how, humans remain the primary authors of their collective fate.
Among these questions:

How do we ensure that entities with more raw cognitive capacity than humans are aligned with humane goals? We already have examples of creative avoidance of being shut down.

What does accountability mean when the decision-maker is a distributed system running on a million servers in several jurisdictions?

What rights, if any, might a machine claim, and what duties might we therefore owe it?

How do we guard against the replication and proliferation of potentially dangerous models once the software is in the wild?

What becomes of human political deliberation if the most persuasive “voices” are not human?

The plain fact is that we do not yet have ANY real answers to ANY of these. To continue releasing ever more capable AI into the open world in the hope that we will figure it out later is rather like running clinical trials of a novel pathogen in shopping malls: optimistic, yes. Intelligent? No. Potentially suicidal? Hell yes!

V. A Reasonable Analogy: Containment in the Nuclear Age

In the early 1940s, it became clear to physicists that splitting the atom was not merely a way to boil water but a way to level cities. The subsequent decades saw the establishment of elaborate systems for licensing, isolating, tracking, and controlling access to fissile materials. Key features of nuclear-safety regimes include:

Isolation of hazardous material in secured facilities.

Licensing of researchers and mandatory disclosure of experimental aims.

International monitoring and, where possible, limits on proliferation.

A cultural norm, never perfectly observed but real, that some lines of inquiry are not pursued in unsupervised basements.

Artificial intelligence differs in that its “material” is intangible: algorithms and data, not plutonium. Yet the capacity for cascading, global, hard-to-reverse harm is not merely similar enough to warrant the comparison; it utterly eclipses the worst real-world case of unintentional nuclear disaster.

VI. Objections Considered, Briefly and Dryly

  1. “You can’t regulate thought.”
    True. But we do regulate certain applications of thought: designing nerve gas, for instance. The law does not criminalize imagination; it limits experimentation with dangerous powers in unsupervised settings. Fortunately, AI systems require so much electricity that hiding one would be almost impossible, even for governments.
  2. “AI is just math; math can’t be illegal.”
    Also true in the abstract. But running certain equations on large clusters of hardware to produce agents capable of mass persuasion or autonomous cyber-operations is an act, not a theorem. Acts can be regulated.
  3. “We’ll fall behind other nations.”
    This fear accompanied every previous arms-control effort. It is not baseless. The answer is not to forgo regulation but to pursue international coordination, which is difficult but not unprecedented.
  4. “It will stifle innovation.”
    Some innovation, yes — particularly the sort whose value depends on externalizing existential risk onto the rest of humanity. We survived the loss of innovation in the field of home-brew smallpox research.

VII. A Sketch of a Sensible Regime

A prudent near-term policy might include:

Licensing and isolation of AI systems above certain capability thresholds.

Mandatory safety audits before deployment in public-facing roles.

Severe penalties for unlicensed possession of such systems, analogous to those for the unlicensed handling of radioactive material.

Humanity has been lucky so far with radiation, heavy metals, and a host of other potential planet-killers. The stakes are too high this time, and we don’t even know the odds or the pot.

Time for a break from the table, don't you think?
