---
Tag: ["📜", "💣", "🪖"]
Date: 2023-07-29
DocType: "WebClipping"
Hierarchy:
TimeStamp: 2023-07-29
Link: https://www.altaonline.com/dispatches/a44004490/robert-oppenheimer-artificial-intelligence-atomic-bomb-jennet-conant/
location:
CollapseMetaTable: true
---

Parent:: [[@News|News]]
Read:: 🟥

---

```button
name Save
type command
action Save current file
id Save
```
^button-TheCautionaryTaleofJRobertOppenheimerNSave

# The Cautionary Tale of J. Robert Oppenheimer

**W**hen Christopher Nolan’s blockbuster biopic of the theoretical physicist J. Robert Oppenheimer, the so-called father of the atomic bomb, [drops in theaters on July 21](https://www.youtube.com/watch?v=uYPbbksJxIg), moviegoers might be forgiven for wondering, Why now? What relevance could a three-hour drama chronicling the travails and inner torment of the scientist who led the Manhattan Project—the race to develop the first nuclear weapon before the Germans during World War II—possibly have for today’s 5G generation, which greets each new technological advance with wide-eyed excitement and optimism?

But the film, which focuses on the moral dilemma facing Oppenheimer and his young collaborators as they prepare to unleash the deadliest device ever created by mankind, aware that the world will never be the same in the wake of their invention, eerily mirrors the present moment, as many of us anxiously watch the artificial intelligence doomsday clock countdown.

Surely as terrifying as anything in Nolan’s war epic is the *New York Times*’ [recent account](https://www.nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html) of OpenAI CEO Sam Altman, sipping sweet wine as he calmly contemplates a radically altered future; boasting that he sees the U.S.
effort to build the bomb as “a project on the scale” of his GPT-4, the awesomely powerful AI system that approaches human-level performance; and adding that it was “the level of ambition we aspire to.” If Altman, whose company created the chatbot [ChatGPT](https://www.sciencefocus.com/future-technology/gpt-3/), is troubled by any ethical qualms about his unprecedented artificial intelligence models and their potential impact on our lives and society, he is not losing any sleep over it. He sees too much promise in machine learning to be overly worried about the pitfalls. Large language models, the type of neural network on which ChatGPT is built, enable everything from digital assistants like Siri and Alexa to self-driving cars and computer-generated tweets and term papers. The 37-year-old AI guru thinks it’s all good—transformative change. He is busy creating tools that empower humanity and cannot worry about all their applications and outcomes and whether there might be what he calls “a downside.”

Talk Oppenheimer and A.I. with Jennet Conant and Will Hearst on *Alta Live*, Wednesday, August 2, at 12:30 p.m. Pacific time. [REGISTER](https://altaonline.zoom.us/webinar/register/WN_wQpxFdCCTF6KdA2klvy_9w)

Just this March, in an interview for the podcast *On with Kara Swisher*, Altman seemed to channel his hero Oppenheimer, asserting that OpenAI had to move forward to exploit this revolutionary technology and that “it requires, in our belief, this continual deployment in the world.” As with the discovery of nuclear fission, AI has too much momentum and cannot be stopped. The net gain outweighs the dangers. In other words, the market wants what the market wants. Microsoft is gung ho on the AI boom and has invested $13 billion in Altman’s technology of the future, which means tools like robot soldiers and facial recognition–based surveillance systems might be rolled out at record speed.
We have seen such arrogance before, when Oppenheimer quoted from the Hindu scripture the Bhagavad Gita in the shadow of the monstrous mushroom cloud created by the [Trinity test explosion](https://www.afnwc.af.mil/About-Us/History/Trinity-Nuclear-Test/) in the Jornada del Muerto desert in New Mexico on July 16, 1945: “Now I am become Death, the destroyer of worlds.” No man in history had ever been charged with developing such a powerful scientific weapon, an apparent affront to morality and sanity that posed a grave threat to civilization, yet the project proceeded with all due speed on the basis that it was virtually unavoidable. The official line was that it was a military necessity: the United States could not allow the enemy to achieve such a decisive weapon first. The bottom line is that the weapon was devised to be used, it cost upwards of $2 billion, and President Harry Truman and his top advisers had an assortment of strategic reasons—hello, Soviet Union—for deploying it.

Back in the spring of 1945, a prominent group of scientists on the [Manhattan Project](https://ahf.nuclearmuseum.org/ahf/history/manhattan-project/) had voiced their concerns about the postwar implications of atomic energy and the grave social and political problems that might result. Among the most outspoken were the Danish Nobel laureate Niels Bohr, the Hungarian émigré physicist Leo Szilard, and the German émigré chemist and Nobel winner James Franck. Their mounting fears culminated in the [Franck Report](https://sgp.fas.org/eprint/franck.html), a petition by a group from the project’s Chicago laboratory arguing that releasing this “indiscriminate destruction upon mankind” would be a mistake, sacrificing public support around the world and precipitating a catastrophic arms race.
The Manhattan Project scientists also urged policymakers to carefully consider the questions of what the United States should do if Germany was defeated before the bomb was ready, which seemed likely; whether it should be used against Japan; and, if so, under what circumstances. “The way in which nuclear weapons…are first revealed to the world,” they noted, “appears to be of great, perhaps fateful importance.” They proposed performing a technical demonstration and then giving Japan an ultimatum. The writers of the Franck Report wanted to explore what kind of international control of atomic energy and weapons would be feasible and desirable and how a strict inspection policy could be implemented. The shock waves of the Trinity explosion would be felt all over the world, especially in the Soviet Union. The scientists foresaw that the nuclear bomb could not remain a secret weapon at the exclusive disposal of the United States and that it inexorably followed that rogue nations and dictators would use the bomb to achieve their own territorial ambitions, even at the risk of triggering Armageddon.

Fast-forward to the spring of 2023, when more than 1,000 tech experts and leaders, such as Tesla chief [Elon Musk](https://www.altaonline.com/dispatches/a40151589/elon-musk-spacex-starbase/), Apple cofounder Steve Wozniak, and entrepreneur and 2020 presidential candidate Andrew Yang, sounded the alarm on the unbridled development of AI technology in a signed letter warning that AI systems present “profound risks to society and humanity.” AI developers, they continued, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” The open letter called for a temporary halt to all AI research at labs around the globe until the risks can be better assessed and policymakers can create the appropriate guardrails.
There needs to be an immediate “pause for at least 6 months,” it stated, on the training of AI systems more powerful than GPT-4, which has led to the rapid development and release of imperfect tools that make mistakes, fabricate information unexpectedly (a phenomenon AI researchers have aptly dubbed “hallucination”), and can be used to spread disinformation and further the grotesque distortion of the internet. “This pause,” the signatories wrote, should be used to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” and they urged policymakers to roll out “robust AI governance systems.” How the letter’s authors hope to enforce compliance and prevent these tools from falling into the hands of authoritarian governments remains unclear.

Geoffrey Hinton, a pioneering computer scientist who has been called the godfather of AI, did not sign the letter but in May announced that he was leaving Google in order to freely [express his concerns](https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html) about the global AI race. He is worried that the reckless pace of advances in machine superintelligence could pose a serious threat to humanity. Until recently, Hinton thought that it was going to be two to five decades before we had general-purpose AI—with its wide range of possible uses, both intended and unintended—but the trailblazing work of Google and OpenAI means the ability of AI systems to learn and solve any task with something approaching human cognition looms directly ahead, and in some ways they are already eclipsing the capabilities of the human brain. “Look at how it was five years ago and how it is now,” Hinton said of AI technology. “Take the difference and propagate it forwards.
That’s scary.”

[READ *ALTA*'S REVIEW OF 'OPPENHEIMER'](https://www.altaonline.com/culture/movies-tv-shows/a44603626/christopher-nolan-oppenheimer-movie-review-lisa-kennedy/)

Until this year, when people asked Hinton how he could work on technology that was potentially dangerous, he would always paraphrase Oppenheimer to the effect that “when you see something that is technically sweet, you go ahead and do it.” He is not sanguine enough about the future iterations of AI to say that anymore.

Now, as during the Manhattan Project, there are those who argue against any moratorium on development for fear of the United States losing its competitive edge. Ex–Google CEO Eric Schmidt, who has expressed concerns about the possible misuse of AI, does not support a hiatus for the simple reason that it would “benefit China.” Schmidt is in favor of voluntary regulation, which he has described somewhat lackadaisically as “letting the industry try to get its act together.” Yet he concedes that the dangers inherent in AI itself may pose a larger threat than any global power struggle. “I think the concerns could be understated.… Things could be worse than people are saying,” he told the *[Australian Financial Review](https://www.afr.com/politics/federal/china-will-win-ai-race-if-research-paused-ex-google-chief-20230405-p5cy7v)* in April. “You have a scenario here where you have these large language models that, as they get bigger, have emergent behavior we don’t understand.”

If Nolan is true to form, audiences may find the personal dimension of *Oppenheimer* even more chilling than the IMAX-enhanced depiction of hair-raising explosions. The director has said that he is not interested in the mechanics of the bomb; rather, what fascinates him is the paradoxical and tragic nature of the man himself.
Specifically, the movie will examine the toll inventing a weapon of mass destruction takes on an otherwise peaceable, dreamy, poetry-quoting blackboard theoretician, whose only previous brush with conflict was the occasional demonstration on UC Berkeley’s leafy campus.

*This article appears in Issue 24 of* Alta Journal. **[SUBSCRIBE](http://www.altaonline.com/memberships)**

One of the things that would haunt Oppenheimer was his decision, as head of the scientific panel chosen to advise on the use of the bomb, to argue that there was no practical alternative to military use of the weapon. He wrote to Secretary of War Henry Stimson in June 1945 that he did not feel it was the panel’s place to tell the government what to do with the invention: “It is clear that we, as scientific men, have no proprietary rights \[and\]…no claim to special competence in solving the political, social, and military problems which are presented by the advent of atomic power.” Even at the time, Oppenheimer was already in the minority: most of the project scientists argued vehemently that they knew more about the bomb, and had given more thought to its potential dangers, than anyone else. But when Leo Szilard tried to circulate a petition rallying the scientists to present their views to the government, Oppenheimer forbade him to distribute it at Los Alamos.

![on august 6, 1945, the united states detonated the little boy atomic bomb over hiroshima, japan](https://hips.hearstapps.com/hmg-prod.s3.amazonaws.com/images/hiroshima-japan-atomic-bomb-646f9db55ac96.jpg?resize=480:*)

On August 6, 1945, the United States detonated the Little Boy atomic bomb over Hiroshima, Japan. Universal History Archive

After the two atomic attacks on Japan—first Hiroshima on August 6 and then, just three days later, Nagasaki on August 9—the horror of the mass killings, and of the unanticipated and deadly effects of radiation poisoning, forcefully hit Oppenheimer.
In the days and weeks that followed, the brilliant scientific leader who had been drawn to the bomb project by ego and ambition, and who had skillfully helmed the secret military laboratory at Los Alamos in service of his country, was undone by the weight of responsibility for what he had wrought on the world. Within a month of the bombings, Oppenheimer [regretted his stand](https://www.newsweek.com/hiroshima-smouldered-our-atom-bomb-scientists-suffered-remorse-360125) on the role of scientists. He reversed his position and began frantically trying to use his influence and celebrity as the “father” of the A-bomb to convince the Truman administration of the urgent need for international control of nuclear power and weapons.

The film will almost certainly include the famous, or infamous, scene when Oppenheimer, by then a nervous wreck, burst into the Oval Office and dramatically announced, “Mr. President, I feel I have blood on my hands.” Truman was furious. “I told him,” the president said later, “the blood was on my hands—to let me worry about that.” Afterward, Truman, who was struggling with his own misgivings about dropping the bombs and what it would mean for his legacy, would denounce Oppenheimer as that “cry-baby scientist.”

In the grip of his postwar zealotry, Oppenheimer became an outspoken opponent of nuclear proliferation. He was convinced no good could come of the race for the hydrogen bomb. Just months after the Soviet Union’s successful test of an atomic bomb in 1949, he joined other eminent scientists in lobbying against the development of the H-bomb. In an attempt to alert the world, he helped draft a report that went so far as to describe Edward Teller’s “Super” bomb as a “weapon of genocide”—essentially, a threat to the future of the human race—and urged the nation not to proceed with a crash effort to develop bigger, ever more destructive thermonuclear warheads.
In an effort to silence him, Teller and his faction of bigger-is-better physicists, together with officials in the U.S. Air Force who were eyeing huge defense contracts, cast aspersions on Oppenheimer’s character and patriotism and dug up old allegations about his ties to communism. In 1954, the Atomic Energy Commission, after a kangaroo-court hearing, found him to be a loyal citizen but stripped him of his security clearance. Last December, almost 70 years later, the U.S. Department of Energy [restored Oppenheimer’s clearance](https://www.smithsonianmag.com/smart-news/us-restores-j-robert-oppenheimers-security-clearance-after-68-years-180981329/), admitting that the trial had been “flawed” and that the verdict had less to do with genuine national security concerns than with his failure to support the country’s hydrogen bomb program. The reprieve came too late for the physicist, whose reputation had been destroyed, his public life as a scientist-statesman over. He died in 1967, at the relatively young age of 62, still an outcast.

Altman and today’s other lofty tech leaders would do well to note the terrible swiftness of Oppenheimer’s fall from grace—from hero to villain in less than a decade. And how quick the government was to dispense with Oppenheimer’s advice once it had taken possession of his invention. The internet remains unregulated in this country, but the European Union is considering labeling ChatGPT “high risk.” Italy has already banned OpenAI’s service. Perhaps revealing a bit of nervousness that he has gotten ahead of himself, Altman responded to the open letter about temporarily halting the development of AI by taking to Twitter to gush about the demand that his company release a “great alignment dataset,” calling it “one thing coming up in the debate about the pause letter I really agree with.”

Nolan’s *Oppenheimer* epic will inevitably be a cautionary tale.
The story of the nuclear weapons project illustrates, in the starkest terms, what happens when new science is developed too quickly, without any moral calculus, and how it can lead to devastating consequences that could not have been imagined at the outset.•

[Jennet Conant](https://www.altaonline.com/author/268863/Jennet-Conant/)

Jennet Conant is the granddaughter of James B. Conant, a former president of Harvard University and a key scientific adviser on the Manhattan Project who oversaw the development of the atomic bomb and its deployment against Japan and who, along with Oppenheimer, later led the opposition to the development of the hydrogen bomb.

---

`$= dv.el('center', 'Source: ' + dv.current().Link + ', ' + dv.current().Date.toLocaleString("fr-FR"))`