We are still far from specifics on how, precisely, AI regulations will be implemented and enforced, but today a number of countries, including the United States, the United Kingdom and the European Union, signed a Council of Europe (COE) treaty on AI safety. The Council of Europe is an international standards and human rights organization.
The treaty goes by the rather formal name of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. The COE describes it as "the first-ever international legally binding instrument aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy, and the rule of law."
Today, the treaty was opened for signature at a gathering in Vilnius, Lithuania. In addition to the three above-mentioned large markets, Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino and Israel have signed the treaty.
The list means the COE's framework has netted a number of countries where some of the world's biggest AI companies are either headquartered or building substantial operations. But perhaps as important are the countries not yet on the list: none from Asia, none from the Middle East apart from Israel, and not Russia, for example.
The high-level treaty focuses on the intersection of AI with three main areas: human rights, covering protection against data misuse and discrimination, and ensuring privacy; protecting democracy; and protecting the "rule of law." Essentially, the third of these commits signatory countries to setting up regulators to protect against "AI risks." The treaty doesn't specify what those risks might be; it's something of a circular requirement that refers back to the other two main areas it addresses.
The treaty's overall aim, though, and the ideal targets it seeks to address, are more concrete. "The treaty provides a legal framework covering the entire lifecycle of AI systems," the COE says. "It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy, and the rule of law. To stand the test of time, it is technology-neutral."
(For background: The COE is not a legislative body. It was established after World War II with a mandate to ensure that human rights, democracy, and the rule of law are respected in Europe. It drafts treaties and monitors its signatories' compliance with them; one such treaty, for instance, established the European Court of Human Rights.)
Regulation of artificial intelligence has been a hot potato within the world of technology, tossed among a complicated matrix of stakeholders.
A number of antitrust, data protection, financial, and communications watchdogs, mindful perhaps of how they missed earlier waves of technological innovation and the problems that came with them, have made initial moves to frame how they might get a better grip on AI.
The idea is that if AI does indeed mark a gigantic shift in how the world works, not all of that change will play out well without a watchful eye on it, so regulators want to stay ahead of things. But there is a palpable nervousness among regulators, too, about going too far: being admonished for crimping innovation, whether by acting too soon or by applying too broad a brush.
Companies involved in artificial intelligence also jumped in early to proclaim that they, too, have an interest in what has come to be described as AI safety. Cynics describe that private interest as regulatory capture; optimists believe that companies need seats at the regulatory table to better communicate what they are doing and what might be coming next, to inform appropriate policies and rulemaking.
Politicians are in the mix too, sometimes as adjuncts to regulators, but sometimes voicing even more overtly pro-business rhetoric that homes in on the interests of the firms under their care as a way to grow their countries' economies. (The last U.K. government fell into this AI cheerleading camp.)
That mix has led to a smorgasbord of frameworks and pronouncements, from the U.K.'s AI Safety Summit in 2023 to the G7-led Hiroshima AI Process and the resolution the U.N. adopted earlier this year. We've also seen country-based AI safety institutes spring up, along with regional regulations such as California's SB 1047 bill and the European Union's AI Act, among others.
It sounds like the COE's treaty wants to be the common ground for all of these efforts.
In a statement about the signing of the treaty, the U.K. Ministry of Justice said that "the treaty will see to it that countries monitor its development and ensure that any technology is managed within strict parameters," and that "once the treaty is ratified and brought into effect in the U.K., existing laws and measures will be enhanced."
"We have to ensure that the advancement of AI does not undermine but supports and reinforces our standards," COE Secretary General Marija Pejčinović Burić said in a statement. "The Framework Convention is designed precisely to ensure that. It is a strong and balanced text, the result of the open and inclusive approach by which it was drafted, which ensured that it benefits from multiple and expert perspectives."
"The Framework Convention is an open treaty with a potentially global reach. I hope that these will be the first of many signatures and that they will be followed quickly by ratifications, so that the treaty can enter into force as soon as possible," she added.
Although the framework convention has been agreed and adopted by the COE's Committee of Ministers, it will formally enter into force "on the first day of the month following the expiration of a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it."
In other words, the countries signing up Thursday will still need to ratify the treaty at the national level, and from that point it will take another three months before its provisions take effect.
It is not clear how long that process might take. The U.K., for example, said it plans to work on AI legislation but has not put a firm timeline on when draft bills might be introduced. On the COE framework specifically, it said only that it will share more updates on implementation "in due course."