“We believe that a democratic vision for AI is essential to unlocking its full potential and broadly sharing its benefits,” OpenAI wrote, echoing similar language in a White House memo. “We believe that democracies should continue to lead AI development based on values such as freedom, fairness, and respect for human rights.”
The company proposed a number of ways it could help advance this goal, including efforts to “streamline translation and summarization tasks, and investigate and mitigate harm to civilians,” while ensuring that its technology “does not harm people or property.” Using its technology to “destroy or develop weapons” remains prohibited. More than anything, the statement signaled that OpenAI is joining national security efforts.
Heidy Khlaaf, a lead AI scientist at the AI Now Institute and a safety researcher who co-authored a paper with OpenAI in 2022 on the potential dangers of its technology in settings including the military, said the new policy emphasizes “flexibility and compliance with the law.” The company’s pivot “ultimately signals that it is OK to carry out military and war-related activities as the Department of Defense and the U.S. military deem appropriate,” she said.
Amazon, Google, and Microsoft, OpenAI’s partner and investor, have been competing for Department of Defense cloud computing contracts for years. These companies have learned that working with defense can be incredibly lucrative, and OpenAI, which reportedly anticipates a $5 billion loss this year and is exploring new revenue streams such as advertising, may want a piece of such deals. The relationship between big tech companies and the military no longer provokes the anger and scrutiny it once did. But OpenAI is not a cloud provider, and the technology it is building is about more than just storing and retrieving data. With this new partnership, OpenAI promises to help classify data on the battlefield, provide insight into threats, and make wartime decision-making faster and more efficient.
OpenAI’s statement on national security raises more questions than it answers. The company wants to reduce harm to civilians, but harm to which civilians? Wouldn’t supplying an AI model to a program that shoots down drones count as developing a weapon capable of harming people?
“Defensive weapons are still weapons,” Khlaaf said. They are “often deployed offensively, depending on the area and objective of the mission.”
Beyond these questions, working in defense means that the world’s leading AI company, which wields enormous influence in the industry and has long opined on how to responsibly steward AI, will now operate in the defense technology sector, which plays by an entirely different set of rules. In that system, when the customer is the U.S. military, technology companies do not get to decide how their products are used.