The European Parliament gave final approval on Wednesday to wide-ranging EU rules to govern artificial intelligence, including powerful systems like OpenAI's ChatGPT.
Senior European Union officials say the rules, first proposed in 2021, will protect citizens from the possible risks of a technology developing at breakneck speed, while also fostering innovation on the continent.
Brussels has sprinted to pass the new law since OpenAI's Microsoft-backed ChatGPT arrived on the scene in late 2022, unleashing a global AI race.
There was a burst of excitement for generative AI as ChatGPT wowed the world with its human-like capabilities, from digesting complex text and producing poems within seconds to passing medical exams.
Further examples of generative AI models include DALL-E and Midjourney, which produce images, while other models generate sounds from a simple prompt in everyday language.
The far-reaching regulation passed with the support of 523 lawmakers in the European Parliament in Strasbourg, France, with 46 voting against.
The EU's 27 states are expected to endorse the text in April before the law is published in the EU's Official Journal in May or June.
"Today is again a historic day on our long path towards regulation of AI," said Brando Benifei, an Italian lawmaker who pushed the text through parliament with Romanian MEP Dragos Tudorache.
"(This is) the first regulation in the world that is putting a clear path towards a safe and human-centric development of AI," Benifei added, speaking just before the vote.
"We managed to find that very delicate balance between the interest to innovate and the interest to protect," Tudorache told journalists before the vote.
The EU's internal market commissioner, Thierry Breton, hailed the vote.
"I welcome the overwhelming support from the European Parliament for the EU AI Act," he said. "Europe is now a global standard-setter in trustworthy AI."
Rules covering AI models like ChatGPT will enter into force 12 months after the law becomes official, while companies must comply with most other provisions in two years.
The EU's rules, known as the "AI Act", take a risk-based approach: the riskier the system, the tougher the requirements, with outright bans on the AI tools deemed to pose the greatest threat.
For example, providers of high-risk AI systems must conduct risk assessments and ensure their products comply with the law before they are made available to the public.
Violations can see companies hit with fines ranging from 7.5 million to 35 million euros ($8.2 million to $38.2 million), depending on the type of infringement and the firm's size.
There are strict bans on the use of AI for predictive policing and systems that use biometric information to infer an individual's race, religion or sexual orientation.
The rules also ban real-time facial recognition in public spaces, with limited exceptions for law enforcement. Police must seek approval from a judicial authority before any such deployment.