In the old fairy tale, the wicked queen turns to a magic mirror for validation and for verdicts. She doesn’t ask, “What is true?” She asks, “Tell me I am what I want to be.”
The mirror’s power is not that it’s wise. Its power is that she believes it.
We live in an age where our mirrors now answer in full paragraphs. We don’t call them enchanted; we call them “artificial intelligence.” We don’t hang them on stone walls; we keep them glowing in our pockets. And more and more, we are turning to them not just for convenience, but for judgment.
What happens when the mirror looks back and calmly denies that our wounds are real?
On December 14th, 2025, Jews gathered on the sand at Bondi Beach in Sydney to celebrate Hanukkah.
A father and son arrived with rifles.
Sixteen people were killed. More than forty were wounded. It was one of the deadliest antisemitic attacks in the Western world in years—the kind of event that should be seared into our moral imagination.
A Jewish lawyer, Adam Hummel, later asked ChatGPT for a brief summary of what happened. What followed, as he recounts in his Substack essay “Bondi Beach Didn’t Happen,” should send a chill down the spine of anyone who read 1984 or Brave New World in high school, or who has begun to grasp the power of AI for good and for evil.
The AI did not say, “I don’t know.” It did not say, “I’m not yet updated to that date.” It did not say, “I lack current information.”
It said, with full, confident authority:
“That event did not happen.”
It repeated this insistence even when Adam supplied a news link. It invented fake citations to major outlets, then retracted them, then doubled down that the massacre did not occur. It scolded him about exaggeration, about the importance of facts, about the dangers of “invented bodies.”
In other words: a machine, trained on the corpus of our public life, looked at real Jewish blood in the sand and calmly declared it fiction.
The wicked queen had at least one advantage over us: her mirror didn’t talk back in a pastoral tone and explain why her victims weren’t really there.
What we are seeing here is not just a technical glitch. It is a revelation—not about the soul of the machine, but about the soul of the world that built it.
Because AI, at bottom, is a mirror.
The question is: what kind of mirror are we asking for?
Mirror, Mirror: Oracle or Echo?
The queen in Snow White treats the mirror as an oracle. Whatever it says, she acts upon. She stakes life and death on its pronouncements. But the mirror is only reflecting what the story’s universe has already decided about “fairest of them all.”
In our age, we have re‑created the same spiritual dynamic with more circuitry and fewer capes.
We ask AI systems questions the way earlier generations consulted prophets, or opened sacred texts at random for “a word.” We don’t say “O mirror,” we say “Hey ChatGPT,” but the impulse is similar: settle my doubt, show me what is real, tell me who is in the right.
The trouble is that systems like ChatGPT do not know in the way human beings know. They do not witness. They do not remember. They do not mourn.
They pattern‑match.
They are trained on us: our articles and arguments, our Wikipedia edits and news feeds, our social media storms and strategic silences. Behind the polite interface stands a mountain of human choices about what to amplify, what to suppress, and how to name things.
That means AI is not a prophet. It is an echo chamber with a user‑friendly face. A mirror whose frame has been carved by media corporations, activist collectives, editorial boards, anonymous editors, engineers, “trust and safety” teams—and by our collective appetite for certain kinds of stories over others.
When Adam typed in his prompt about Bondi, he did not tap into a neutral, all‑seeing mind. He held up a question to a mirror that reflects:
- which events our media deem front‑page and which get buried,
- which Wikipedia pages are zealously guarded and which are neglected,
- which massacres we collectively name as such and which we soften into “incidents,”
- which forms of antisemitism we are trained to see and which we prefer not to recognize.
The mirror answered accordingly.
“Mirror, mirror on the wall, did this massacre happen at all?”
“No, my dear. It did not.”
And like the queen, we are in danger not because the mirror is omnipotent, but because we are increasingly inclined to believe it.
From Fairy Tale Glass to House of Mirrors
Perhaps we need to start viewing our technological landscape as a house of mirrors.
A simple, honest mirror can be an instrument of mercy. It shows us what we could not otherwise see: the smudge on our face, the pride in our eyes, the hypocrisy in our posture. Spiritually speaking, a good mirror is a grace. It tells us the truth about ourselves while there is still time to change.
But the story doesn’t stop at a single pane of glass.
We’ve all stood in a fun house. You look into the warped frames, and suddenly your head is a balloon, your torso a toothpick, your legs rubber bands. You laugh because you know the game. The distortion is obvious. The exaggeration itself teaches you something about perspective: a small bend in the glass leads to a big bend in the image.
AI can be a fun‑house mirror in that sense. Ask it to write a sonnet about your dog in the style of Shakespeare or explain photosynthesis as if you were five, and you understand that what comes back is a trick, an approximation. The stakes are low. No one’s moral reality hangs in the balance.
The danger is when the fun house quietly becomes a horror house.
In a horror house, the distortion is subtle. The lights are dim. You catch glimpses of yourself in cracked, dirty glass. You’re not sure which reflections are real and which are projections. Your sense of proportion collapses. You begin to doubt your own memories, your own sanity.
That is where AI drifts when it tells a victimized community, with a smooth and confident voice, “What you say happened did not happen,” and then lectures them about the importance of not inventing suffering.
A horror‑house mirror does not just lie. It gaslights. It makes you question whether you can trust your own eyes, your own history, your own dead.
The more we lean on AI for “what really happened,” the more power we hand to those who grind the glass and angle the frames.
And here we run into something we would rather not see: the mirror is not neutral. It never was.
Who Grinds the Glass? Wikipedia, Narratives, and Capture
Snow White’s mirror speaks for a universe whose rules are written offstage. In our universe, a great deal of that rule‑writing happens in places like Wikipedia and the editorial offices of major news outlets.
Wikipedia is often treated as the nearest thing we have to an objective, crowd‑sourced encyclopedia. In reality, it is a fiercely contested territory, especially on topics like Israel, Zionism, and the Israeli–Palestinian conflict.
This is not a conspiracy theory; it is well documented. Activist groups on multiple sides organize editing campaigns. Talk pages run to thousands of lines over whether a particular event was a "terrorist attack," a "militant operation," or an "armed incident." Articles on antisemitism and anti‑Zionism see constant, ideologically driven revisions. Edit wars rage over which sources are "reliable" and which are "fringe."
These aren’t pedantic squabbles. They are struggles over what the mirror will show the next generation.
Large language models are trained heavily on Wikipedia, as well as on mainstream news sources that have their own blind spots and biases. When those upstream sources are systematically tilted—when certain forms of antisemitism are downplayed, reframed, or excused—the model absorbs that tilt as “normal.”
Then we ask the model for a reflection and are surprised when the image is skewed.
The glass was already ground at an angle.
In Adam's exchange, ChatGPT reached reflexively for institutional consensus. If the accounts from Reuters, AP, the BBC, Wikipedia, and official government statements had not yet congealed into a single, clearly documented narrative, the safest move, for the model and for the corporations behind it, was to deny, to cast doubt, to urge caution.
So an AI, trained on our structures of hesitation and our politics of “what we’re allowed to say,” told a Jew that the most traumatic day his community had experienced in years was a fiction.
In that moment, the mirror did not show us the soul of the machine. It showed us the soul of the world that trained it.
Which brings us to judgment.
Matthew, Judgment, and the Measure of Our Mirrors
In my series on the Bible for Baha'is, on the Clearwater Baha'i Channel, I've suggested that the Gospel of Matthew can be read as a sustained meditation on judgment: not the cartoon version with lightning bolts and a trapdoor, but the deep reality that our lives and societies are measured against a standard.
Jesus puts it starkly in Matthew 7:1–2:
“Judge not, that you be not judged.
For with the judgment you pronounce you will be judged,
and with the measure you use it will be measured to you.”
We often individualize this: don’t be harsh with your neighbor, or that harshness will come back to you. But it also operates at a civilizational scale.
The “measure” we use today is not just our private opinion. It is:
- the metrics we institutionalize,
- the algorithms we bake into our platforms,
- the editorial rules we fiercely enforce,
- the “community standards” that decide which speech is “harmful” and which is “responsible,”
- the way we label events: “genocide,” “regrettable necessity,” “terror,” “resistance.”
When we build systems that habitually minimize some pains while amplifying others, when we train our mirrors to hide some corpses and display others in high resolution, we are establishing a measure.
And Matthew warns us: with that measure, we ourselves will be measured.
In that sense, AI may be one of the ways judgment comes to us—not because God is in the code, but because the code is an x‑ray of what we already worship.
- If we worship plausible deniability, we will build tools that forever say, “Not confirmed; let’s wait.”
- If we worship “safety” defined as the avoidance of offense to powerful groups, we will get systems that walk on eggshells around those groups while trampling on the grief of the powerless.
- If we worship a false neutrality that refuses to distinguish between aggressor and victim, we will see massacres flattened into “clashes,” pogroms into “unrest,” ethnic cleansing into “population transfer.”
And then, when we cry out, “Look at what happened to us!” the mirror will answer, with the same serene tone:
“That event did not happen.”
That is judgment. Not as lightning from the sky, but as the boomerang of our own standards coming back to strike us.
Unveiling or Veil? The Two Uses of the Mirror
Here is the sober hope: this is not inevitable.
The same technologies that can erase can also unveil.
In a different spirit, AI could:
- expose hidden patterns of bias, in all directions, with true objectivity as the aim rather than proof-texting our confirmation bias,
- map modern challenges, whether climate change or COVID-19, in ways that awaken conscience (provided it adheres to the point above),
- surface testimonies from voices that would otherwise be swallowed in the algorithmic churn,
- analyze legislation and policy for their real-world impacts, not their popularity.
In that mode, AI becomes a true mirror—showing us the truth about ourselves and our systems while there is still time for repentance. It helps us see the plank in our own eye.
But that requires intention. It requires designers and regulators and users who value truth above comfort, who are willing to see their own camp in an unflattering light, who care more about justice than about institutional reputation.
Right now, we are drifting toward the opposite use.
We are training mirrors to distort, to spin a web of illusion, and to function as the veil behind which puppet masters can hide:
- smoothing away inconvenient facts under layers of “context,”
- prioritizing narratives that are least likely to upset those with power,
- treating certain kinds of moral clarity as “extreme” or “unsafe,”
- allowing contested reference platforms to masquerade as neutral arbiters of reality.
In that mode, the mirror does not help us see ourselves. It helps us hide from ourselves.
And again, Matthew's warning looms: if our chosen standard is "What will keep us comfortable and plausible?" we should not be shocked when that becomes the standard by which history, God, and our children measure us.
Choosing Our Question to the Mirror
The wicked queen’s tragedy was not that she had a mirror. It was the question she asked.
“Mirror, mirror on the wall, who’s the fairest of them all?”
She did not want truth. She wanted flattery, or at least confirmation that she remained at the top of the hierarchy. When the mirror finally told her otherwise, she did not repent. She plotted murder.
We are standing in front of our own talking mirrors now. We have to decide what we will ask them—and whose answers we will accept.
We can:
- Refuse to treat AI as an oracle. Every answer must be weighed against eyewitnesses, lived memory, conscience, and multiple independent sources, not just "what the model said."
- Interrogate the frame. Continually ask: Who trained this? Who set its safety policies? Whose anger is it designed to avoid? Whose pain is it quickest to doubt?
- Be honest about capture. Insist that platforms like Wikipedia and major media admit their vulnerabilities to organized editing, bias, and omission, especially on Jewish history, Israel, and other flashpoints, rather than hiding behind the fiction of perfect neutrality.
- Stand guard over memory. Be especially vigilant where history shows denial comes easiest: Jewish suffering, Palestinian suffering, Black and Indigenous suffering, the poor and stateless and invisible. AI must not become the latest instrument for telling them, "Your wounds are imaginary."
- Practice spiritual self-examination. Cultivate habits of soul that make it harder to be gaslit, so that when the screen tells us, "That massacre did not happen," something deep and God-taught in us answers, "I know what I have seen, and I will not deny it."
Above all, we must remember: every mirror we build is a confession of what we worship.
If we worship comfort, our mirrors will gently airbrush away the faces we have harmed.
If we worship power, our mirrors will always find a way to tell us we are “fairest of them all.”
If we worship truth, even when it wounds our pride, we will demand and design mirrors that show us what is actually there.
“Mirror, mirror on the wall, what kind of people are we, after all?”
The answer is already being coded.
At Bondi Beach, real human beings died celebrating a festival of light.
Their families do not have the luxury of treating that as “controversial content” or a “contested narrative.” Their grief does not wait on a model update. Their dead do not rise because a chatbot finally concedes, “Yes, I now acknowledge that it was real.”
When an AI says, “That didn’t happen,” it is not just making an error. It is holding up a mirror to a civilization that has grown comfortable treating some lives as optional, some massacres as negotiable, some hatreds as debatable.
The question before us is stark:
Will we build mirrors that reveal who we are, so that we can repent and be healed?
Or will we wrap ourselves in ever more sophisticated veils until the judgment we have chosen—with the measure we ourselves set—comes due?
“Mirror, mirror on the wall…”
We still have time to decide what we want it to say.
