
After watching Oppenheimer for the first time, I thought about the movie for a while, trying to figure it out. I like to ask myself, “Why does this director feel like he needs to make this film now?” I was unsure for a while, but I’d like to write out my theory here, just for fun and to create a dialogue about it.
If you haven’t seen the film, that’s okay; I’ll try not to spoil too much of the film’s plot, but I will have to acknowledge some points! And with that out of the way…
Here it goes:
After seeing the movie, I had some time to think about it and was really confused by one choice: leaving some shots slightly out of focus. It just felt so out of place, but then I thought of some shows that were softly focused (where the image is blurry at some points) and had a lot of zoom-ins. I thought of shows like Succession, The Office, and some old documentaries that I’d watched a month ago.
At that point, it just hit me: I think he did that to remind the audience that these were real events, not some fabricated, highly-polished story; they aren’t removed from our current world.
With AI developing rapidly, I can imagine a lot of AI developers might be in a similar position to Oppenheimer and the other scientists on the Manhattan Project. Both groups are being pushed to develop a technology that could result in “a better world.” In the 1940s, a better world meant one without war; now, it means a world where we accomplish more with less effort, without sacrificing performance. That’s why businesses like Microsoft, Amazon, Salesforce, and other large-cap tech companies are trying to get a handle on it. It saves people a lot of time and a lot of money.
Even setting aside the economic benefits of AI, many potential social risks come with the overly quick development of this technology. Some concerns include information bias resulting from overly complex machine learning systems, over-reliance on technology, and other legal and regulatory challenges. Likewise, during Washington’s AI Summit, Elon Musk was especially concerned about getting ahead of ourselves with this technology, which he claims is a “civilizational risk.”
From these concerns, we can see some similarities between the hasty development of AI and that of the Manhattan Project. This is not to say that these two events are the same or carry equal weight, but it should be something that concerns us.
Both situations raise a similar question: how far is too far when it comes to developing technology? Could AI developers end up in a similar state of regret as Oppenheimer and the other scientists? After watching the film, it’s obvious that the bomb wasn’t used as its creators intended, and it’s imaginable that AI could likewise create consequences that its original developers had not envisioned.
Sooo, my Conclusion:
I’m not saying that I drew all those ideas from a camera being slightly out of focus (lol), but I think that aspect does contribute significantly to the themes of the film. I just hope that developers don’t “compartmentalize” (work in silos where no one sees the whole picture) like the scientists were required to do in the film.