An Essay on War Planning for Hiroshima and Nagasaki
By the spring of 1945, the United States had learned how to burn a city to the ground. Before Hiroshima and Nagasaki ushered in the nuclear age, Tokyo had already been turned into a furnace.
On the night of March 9–10, 1945, under General Curtis LeMay, nearly 300 B-29s roared toward Japan’s capital in an operation codenamed Meetinghouse. They carried not the promise of precision, but canisters of napalm and incendiaries engineered to do one thing exceptionally well: ignite. Tokyo—a tightly packed sprawl of wood and paper—was the perfect laboratory for industrialized fire.
What followed was not simply a bombing raid. It was the deliberate creation of a firestorm.
Entire neighborhoods vanished in waves of heat and flame. Winds strong enough to tear clothes from bodies sucked oxygen from the streets. Civilians fled into canals and rivers only to find the water itself choked with burning debris and human remains. By morning, somewhere between 80,000 and 100,000 people were dead, and over a million were homeless—the most destructive single air raid in history, even when measured against Hiroshima or Nagasaki.
This was not a “mistake,” and it was not the side effect of trying to hit a factory and missing by a few blocks. It was doctrine. The United States had come to accept that the fastest way to break Japan’s war machine was to incinerate the urban ecosystem that sustained it.
Japanese war industry was not confined to neat, isolated factory districts. It was embedded in homes, workshops, and neighborhood cooperatives. Destroying the city meant destroying the economy. With that logic in place, the distinction between civilian and combatant became a technicality, one that could be safely ignored from the air.
And yet, Japan did not surrender.
Tokyo burned. Other cities followed—Nagoya, Osaka, Kobe—each reduced in turn by similar raids. Hundreds of thousands more civilians died under conventional bombs. But inside Japan’s leadership, the debate dragged on. Factions within the army clung to a fantasy of “decisive battle” on the home islands, hoping that inflicting massive casualties on an invading force would buy better surrender terms.
For American planners, this raised a brutal question: if city after city in ashes was not enough, what would be?
The atomic bomb arrived as an answer that was less about “more destruction” and more about an entirely new kind of destruction. Firebombing required hundreds of planes, favorable weather, and hours of bombing runs. The atomic bomb promised the same—or greater—level of devastation in an instant, delivered by a single aircraft.
Hiroshima was not chosen at random. It was a military hub, but just as important, it was still largely intact. An untouched city provided a pristine canvas on which to demonstrate the full capabilities of the new weapon. Tokyo, already charred and cratered, could not serve that purpose. The message had to be unmistakable: this is what we can do, again and again.
Seen from Washington, the atomic bomb did not represent a clean moral break. It was a continuation—and an intensification—of a path already taken with Operation Meetinghouse. Once you have accepted that burning a city full of civilians to force surrender is legitimate, the step from napalm to nuclear is not as large as it appears from the ground.
The internal calculations were explicit. Invasion plans for Japan, such as Operation Downfall, came with casualty estimates that ran into the hundreds of thousands for Allied troops and into the millions for Japanese soldiers and civilians. Some analyses, including work commissioned by U.S. officials, projected several million Japanese fatalities if the war continued with conventional means and invasion.
Placed next to those numbers, the dead of Hiroshima and Nagasaki (roughly 140,000 in Hiroshima and around 70,000 in Nagasaki by the end of 1945, with long-term totals climbing higher) could be framed as a “lesser evil.” That framing has echoed ever since.
But this is where the ethical narrative becomes dangerously seductive.
Utilitarian logic invites a clean ledger: if losing X lives now prevents the far greater loss of Y lives later, then X becomes not only acceptable but morally preferable. From that vantage point, both firebombing and nuclear attack can be defended as grim but rational steps to shorten the war and save lives on balance.
Yet this comfort with arithmetic hides a series of choices. Who counts? Who calculates? And what assumptions are allowed to stand unchallenged?
Did the projected invasion casualties assume that no other diplomatic avenues—such as clarifying terms around the preservation of the emperor—were politically viable? Were alternatives like a demonstration blast of the atomic bomb on an uninhabited area seriously pursued, or were they dismissed as too risky to American credibility? To treat the atomic bombing as the only way to avert a bloodbath is to accept, often uncritically, the parameters set by those who wanted to use it.
From a deontological perspective, the argument looks very different. If intentionally targeting civilians is categorically wrong, then no balancing of totals can redeem the act. Just War Theory’s principles of discrimination (do not target non‑combatants) and proportionality (ensure the military advantage justifies the harm) sit in obvious tension with an operation that uses a city—packed with civilians—as the unit of destruction.
Realists counter that this is the wrong way to read history. Political and military leaders, they argue, were operating under intense time pressure, limited information, and genuine fear of staggering casualties if the war dragged on. In that frame, the question is not whether the decisions were pure, but whether they were understandable given the constraints.
But realism explains; it does not absolve. Understanding why something happened is not the same as declaring it right.
What Operation Meetinghouse makes unmistakably clear is that the moral threshold for mass killing of civilians had already been crossed before Hiroshima. Strategic bombing campaigns in both Europe and Asia had normalized the destruction of cities as an acceptable instrument of policy. Hiroshima and Nagasaki did not invent that logic; they completed it.
Firebombing proved that a modern state could consume a city piece by piece over a single night of fire. Nuclear weapons proved that it could erase one in a flash, and hinted that one day it might erase many more.
The deeper, and more unsettling, legacy of these events lies in what they reveal about how states think under conditions of perceived existential threat. When survival and victory are framed as all‑or‑nothing, ethical frameworks become elastic. The language of necessity does heavy lifting. Numbers on briefing charts stand in for human beings. And once a particular type of violence has been justified once, it becomes far easier to justify again.
We like to imagine that there is a bright line somewhere—that there are things “we” would never do. But the history of Operation Meetinghouse and the atomic bombings suggests that the line is not nearly as bright as we might hope. It shifts under pressure. It bends in the hands of those with the authority to redraw it.
That is why these nights of fire—Tokyo, Hiroshima, Nagasaki—still matter. Not just as tragedies to be remembered, but as warnings. They expose how quickly sophisticated, rational institutions can talk themselves into actions that, from the vantage point of the people beneath the bombs, look indistinguishable from murder.
The question they leave us with is not only whether those decisions were justified then. It is whether, faced with a different crisis, different technologies, and a new vocabulary of “necessity,” we would truly decide any differently now.