LONDON (Reuters) - Britain plans to split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.
AI, which is evolving rapidly with advances such as the chatbot ChatGPT, could improve productivity and help unlock growth, but there are concerns about the risks it could pose to people's privacy, human rights or safety, the government said.
It said it wanted to avoid heavy-handed legislation that could stifle innovation, and would instead take an adaptable approach to regulation based on broad principles such as safety, transparency, fairness and accountability.
The European Union is tackling the issue head-on by attempting to devise landmark AI laws and create a new AI office. The speed at which the technology is advancing, however, is complicating its efforts, sources have said.
Britain said its approach, outlined in a policy paper published on Wednesday, meant it could adapt its rules as the technology developed.
It said that over the next 12 months, existing regulators would issue practical guidance to organisations, as well as other tools and resources such as risk assessment templates.
It said legislation could later be introduced to ensure regulators were applying the principles consistently.
(Reporting by Paul Sandle; Editing by Mark Potter)