The Banality of Artificial Intelligence

What happens when an AI hallucination leads to bombing an elementary school?

By Michael Altfield
License: CC BY-SA 4.0
tech.michaelaltfield.net

It appears likely that the US government is using Anthropic, OpenAI, Google, and/or xAI models to process signals intelligence (SIGINT) and generate AI-produced “kill lists” that determine where to drop its bombs.

Image shows a Nazi German chemical-warfare factory on the left in black-and-white (with the logos of Bayer and BASF overlaying it) and a new AI datacenter on the right (with the logos of OpenAI and Anthropic overlaying it). Between the two industrial sites is an equals sign; to the right is a question mark.
[left] This IG Farben (Bayer/BASF) factory at Auschwitz produced Zyklon B for the Nazis, who murdered over a million children. [right] This AI datacenter is machinery of war. Its LLM hallucinations decide which children to assassinate.

In Apr 2024, +972 (an Israeli news outlet) published a >9,000-word article describing how the Israeli military had been using Artificial Intelligence to decide which (residential) buildings, hospitals, and schools to bomb in Gaza.

In Feb 2026, the US (and Israel) bombed Iran, killing over 100 schoolchildren (and Ali Khamenei).

By Mar 2026, it appears likely that the US had built a similar system, leveraging US AI companies’ tech to decide which (school) buildings to bomb, false-positive hallucinations be damned.

Who targeted the Shajareh Tayyiba girls’ elementary school in Minab, Iran? Could it have been an AI hallucination? A false positive?


Read the full article here: