Google warns UK risks falling behind in AI race if it doesn’t have more data centres

Google has said Britain risks being left behind in the global artificial intelligence race unless the government acts quickly to build more data centres and allow tech companies to use copyrighted works in their AI models.

The company cited research showing that the UK ranks seventh in a global AI readiness index for data and infrastructure, and called for a number of policy changes.

Google UK CEO Debbie Weinstein said the government “sees the opportunity” in AI but needs to introduce more policies to drive its implementation.

“We have a lot of advantages and a long history of leadership in this space, but if we don’t take proactive steps, there is a risk that we will be left behind,” she said.

AI is experiencing a global investment boom following advances in the technology led by the launch of the chatbot ChatGPT by US company OpenAI and other companies such as Google, which has produced a powerful AI model called Gemini.

However, government-backed AI projects have been early victims of cost-cutting by Keir Starmer’s government. In August, Labour confirmed it would not go ahead with unfunded commitments of £800m for the creation of an exascale supercomputer (seen as key infrastructure for AI research) and another £500m for the AI Research Resource, which funds processing power for AI.

Asked about the supercomputer decision, Weinstein referred to the government’s upcoming “AI action plan” led by tech entrepreneur Matt Clifford. “We’re hoping to see a really comprehensive view on what are the investments we need to be making in the UK,” she said.

Google has outlined its policy suggestions for the UK in a paper called “Unlocking the UK’s AI potential,” to be published this week, in which it recommends the creation of a “national research cloud,” or a publicly funded mechanism to provide computing power and data — two key factors in building the AI models behind products like ChatGPT — to startups and academics.

The report adds that the UK “is struggling to compete with other countries for investment in data centres” and welcomes Labour’s commitment to building more centres as it prepares to introduce a new planning and infrastructure bill.

Other recommendations in Google’s report include creating a national skills service to help the workforce adapt to AI and introducing the technology more widely into public services.

It also calls for changes to UK copyright laws after attempts to draft a new code for using copyrighted material to train AI models failed this year.

Data from copyrighted material such as newspaper articles and academic papers is considered vital for the models that underpin tools such as chatbots, which are “trained” with billions of words that allow them to understand text-based prompts and predict the correct response. The same concerns apply to models that create music or images.

The Google paper calls for relaxing restrictions on a practice known as text and data mining (TDM), which allows copying copyrighted works for non-commercial purposes, such as academic research.

The Conservative government abandoned plans in 2024 to allow TDM for commercial purposes, amid deep concerns from creative industries and news publishers.

“The unresolved copyright issue is a barrier to development, and one way to unblock it, obviously from Google’s perspective, is to go back to where I think the government was in 2023, which was to allow commercial use of TDM,” Weinstein said.

The report also calls for “pro-innovation” regulation, signalling support for the current regulatory framework, where AI is overseen by a number of public regulators, including the Competition and Markets Authority and the Information Commissioner’s Office.

“We would encourage the government to continue to look at existing regulation first, rather than creating new regulation,” Weinstein said.

UK ministers are in the process of drafting a consultation on a draft AI bill that is reportedly focused on making a voluntary agreement on AI model testing between the UK government and tech companies legally binding, as well as on turning the UK AI Safety Institute into an independent government body.

The Department of Science, Innovation and Technology has been contacted for comment.
