OpenAI quietly lobbied for weaker AI regulations

OpenAI CEO Sam Altman has been vocal about the need for AI regulation in numerous interviews and events, and even while testifying before the U.S. Congress.

However, according to OpenAI documents used in the company’s lobbying efforts in the EU, there’s a catch: OpenAI wants regulations that heavily favor the company and has worked to weaken proposed AI regulation.

The documents, obtained by Time from the European Commission via freedom of information requests, give a behind-the-scenes peek at what Altman means when he calls for AI regulation.

In the document, titled “OpenAI’s White Paper on the European Union’s Artificial Intelligence Act,” the company focuses on the EU’s AI Act, attempting to change various designations in the law in ways that would narrow its scope. For example, “general purpose AI systems” like GPT-3 were classified as “high risk” in the EU’s AI Act.

According to the European Commission, the “high risk” classification would include systems that could result in “harm to people’s health, safety, fundamental rights or the environment.” Examples include AI that could “influence voters in political campaigns and in recommender systems used by social media platforms.” These “high risk” AI systems would be subject to legal requirements regarding human oversight and transparency.

“By itself, GPT-3 is not a high-risk system, but possesses capabilities that can potentially be employed in high risk use cases,” reads the OpenAI white paper. OpenAI also argued against classifying generative AI like the popular ChatGPT and the AI art generator Dall-E as “high risk.”

In other words, OpenAI’s position is that the regulatory focus should be on the companies deploying language models, such as the apps that utilize OpenAI’s API, not on the companies training and providing the models.

OpenAI’s stance aligned with Microsoft, Google

According to Time, OpenAI essentially backed positions held by Microsoft and Google when those companies lobbied to weaken the EU’s AI Act regulations.

The section that OpenAI lobbied against ended up being removed from the final version of the AI Act.

OpenAI’s successful lobbying efforts likely explain Altman’s change of heart when it comes to OpenAI’s operations in Europe. Altman previously threatened to pull OpenAI out of the EU over the AI Act. Last month, however, he reversed course. Altman said at the time that the previous draft of the AI Act “over-regulated but we have heard it’s going to get pulled back.”

Now that certain parts of the EU’s AI Act have been “pulled back,” OpenAI has no plans to leave.
