UK quietly dismisses independent AI advisory board, alarming tech sector

The UK government has quietly dismissed the independent advisory board of its Centre for Data Ethics and Innovation (CDEI) — tasked with promoting the responsible deployment of data and AI technologies, especially within the public sector.

The board’s webpage was officially shut down on September 9, but the government released only a brief and rather uninformative public announcement yesterday. Recorded Future News, which first broke the story, reported that the government updated the page in a way that avoided sending email alerts to those subscribed to the topic.

The background

The CDEI’s advisory board was first appointed in June 2018. Its mission was to “analyse and anticipate gaps in the governance landscape, agree and set out best practice to guide ethical and innovative uses of data, and advise the government on the need for specific policy or regulatory action.”

Since its launch, the centre has offered practical guidance to organisations across both the private and public sectors on how to develop AI- and data-driven technologies in an ethical, risk-mitigating way. For example, it published an algorithmic transparency standard in January 2023, and a portfolio of AI assurance techniques in June.


However, as former board members anonymously told Recorded Future News, the government’s attitude towards the body changed over time, as four different prime ministers and seven secretaries of state cycled through since the board’s launch.

A former senior CDEI official explained that there was no political will to incorporate the centre’s work into the public sector. The algorithmic transparency standard wasn’t widely adopted, and “wasn’t promoted in the AI white paper,” they added.

Favouring “frontier” AI concerns

Rishi Sunak’s approach to AI governance has strongly focused on mitigating the technology’s existential risks, following repeated warnings from tech leaders and academics.

In response, the UK launched its dedicated AI Taskforce in April, charged not only with boosting the country’s leadership in the field, but also with steering the reliable deployment of “frontier” AI models: “systems which could pose significant risks to public safety and global security.” In contrast, the centre’s work has focused on the actual, day-to-day uses of data and AI.

“The existential risks seem to be the current focus, at least in the PM’s office,” a former member of the CDEI’s advisory board told ComputerWeekly.com. “You could say that it’s easy to focus on future ‘existential’ risks as it avoids having to consider the detail of what is happening now and take action.”

Unease across the UK tech sector

Reacting to the news, tech business founders across the UK have expressed concern, citing worries about “transparency and trust in the government” and calling for “a new era of accountability.”

“[This is] another indication that the government simply doesn’t have a coherent strategy towards data and AI, nor does it have strong stakeholder engagement on this topic,” Natalie Cramp, CEO at data solutions firm Profusion, told TNW.

“When we put this move into the context of the failure to finalise a replacement of GDPR seven years after it was announced it would be scrapped, the ongoing issues around the Online Safety Bill, and the failure to introduce any wide-ranging AI regulations, it paints a picture of a government that does not seem to have a strategy towards regulating and cultivating innovation.”

Cramp hopes that the UK’s upcoming AI Safety Summit will deliver fruitful results on artificial intelligence practices, but questions how the government will proceed with data ethics without the advisory board.
