Chips are key to regulating artificial intelligence, experts say
Since the boom in artificial intelligence, governments around the world have pressed the accelerator to try to control a technology, and an industry, that is becoming increasingly powerful. There are many bills with very different approaches, but a new study from the University of Cambridge argues that the most effective path runs through the control of one key component: chips.
The analysis, published this week, includes experts from other universities, such as Harvard and Oxford, as well as specialists from OpenAI, the creator of ChatGPT. The group proposes measures that range from controlling the distribution of chips to the possibility of building in a switch to suspend these devices remotely.
"Computing relevant to artificial intelligence is a particularly effective point of intervention: it is detectable, excludable and quantifiable, and is produced through an extremely concentrated supply chain," the experts say in the report. This stands in contrast to the complexity of trying to regulate the outputs of development, such as algorithms or trained models, which "are intangible, non-rival and easily shareable goods, which makes them intrinsically difficult to control."
The analysis highlights that, at present, the advanced chips used to train artificial intelligence systems are manufactured by a very small group of actors. Nvidia, for example, controls almost 90% of this market. "This allows policymakers to restrict the sale of these products to persons or countries of interest," they emphasize.
The pros and cons of a button to turn off artificial intelligence
The experts propose practical measures, such as implementing a global registry for sales of artificial intelligence chips. The measure would allow these components to be tracked throughout their life cycle, anywhere in the world. For such a registry, the researchers propose incorporating a unique identifier into each chip, which would help prevent, for example, illegal trafficking.
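The report stops at the policy level, but a minimal sketch of what such a registry might look like, assuming each chip carries a unique identifier assigned at fabrication, is a ledger of ownership records and custody transfers (all class and field names here are illustrative, not a proposed standard):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class TransferRecord:
    """One change of custody for a registered chip."""
    from_owner: str
    to_owner: str
    timestamp: datetime


@dataclass
class ChipRecord:
    """Registry entry keyed by a unique hardware identifier."""
    chip_id: str       # unique ID assigned to the chip at fabrication
    model: str         # e.g. the accelerator product line
    current_owner: str
    history: list[TransferRecord] = field(default_factory=list)


class ChipRegistry:
    """Hypothetical global registry tracking chips across their life cycle."""

    def __init__(self) -> None:
        self._chips: dict[str, ChipRecord] = {}

    def register(self, chip_id: str, model: str, owner: str) -> None:
        if chip_id in self._chips:
            raise ValueError(f"chip {chip_id} already registered")
        self._chips[chip_id] = ChipRecord(chip_id, model, owner)

    def transfer(self, chip_id: str, new_owner: str) -> None:
        """Record a sale; a chip missing from the registry surfaces as an
        error, which is how an auditor might flag trafficked hardware."""
        record = self._chips.get(chip_id)
        if record is None:
            raise LookupError(f"chip {chip_id} not in registry")
        record.history.append(
            TransferRecord(record.current_owner, new_owner,
                           datetime.now(timezone.utc)))
        record.current_owner = new_owner
```

The point of the unique identifier is that every legitimate change of custody leaves a trace: a chip that surfaces with no registry entry, or with a broken chain of transfers, is a candidate for exactly the illegal trafficking the researchers want to prevent.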
Measures like these would increase policymakers' visibility into, and understanding of, artificial intelligence developments. The specialists stress that visibility is "crucial" because it will allow governments to "anticipate problems, make more accurate decisions, track results within a country, and negotiate and implement agreements between countries."
Some progress has already been made in this direction in the United States. The study cites as examples the order issued by President Joe Biden last year to identify all companies developing large artificial intelligence models, and the stance of the US Department of Commerce, which has tightened restrictions on the sale of accelerators to China.
The button to suspend chips remotely is proposed as an extreme measure. Kill switches could be incorporated into the silicon to prevent the chips from being used in malicious applications, the authors say. This would help regulators act quickly if they identify a dangerous use.
They warn, however, that it is not a perfect solution. A switch of this type could be targeted by cybercriminals and exploited for abusive uses of artificial intelligence.
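The paper discusses the idea at the level of policy rather than circuit design, but one way to picture such a mechanism, purely as an assumption-laden sketch, is firmware that gates compute behind a signed, time-limited operating licence; everything below (the key, the licence format) is invented for illustration:

```python
import hashlib
import hmac
import time

# Illustrative only: REGULATOR_KEY stands in for a key provisioned into the
# chip at fabrication; a licence pairs an expiry timestamp with an HMAC tag
# computed over that timestamp.
REGULATOR_KEY = b"provisioned-at-fabrication"


def sign_licence(expires_at: float) -> tuple[float, bytes]:
    """Issue a licence valid until expires_at (the regulator's side)."""
    tag = hmac.new(REGULATOR_KEY, str(expires_at).encode(),
                   hashlib.sha256).digest()
    return expires_at, tag


def compute_enabled(licence: tuple[float, bytes]) -> bool:
    """Firmware-style gate: run workloads only under a valid, unexpired licence."""
    expires_at, tag = licence
    expected = hmac.new(REGULATOR_KEY, str(expires_at).encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected) and time.time() < expires_at


# A licence good for one hour; once it lapses (or the regulator stops
# renewing it), the gate closes and the chip refuses new work.
licence = sign_licence(time.time() + 3600)
assert compute_enabled(licence)
assert not compute_enabled((time.time() - 1, licence[1]))  # tampered expiry fails
```

The caveat above maps directly onto this sketch: whoever obtains the signing key controls the switch, so any production scheme would lean on hardware-protected keys rather than a single shared secret.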
Vetting training runs
The analysis also raises the possibility of several parties coordinating to approve training runs for potentially risky artificial intelligence systems. "Nuclear weapons use similar mechanisms called permissive action links," the report explains.
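Permissive action links require multiple independent authorisations before a weapon can be armed. Translated to training runs, a rough sketch, assuming a fixed set of approvers who each hold their own key plus a quorum rule (the names and the HMAC-based scheme are illustrative), might look like this:

```python
import hashlib
import hmac

# Hypothetical approvers, each holding an independent signing key.
APPROVER_KEYS = {
    "regulator": b"key-a",
    "lab_safety_board": b"key-b",
    "external_auditor": b"key-c",
}
QUORUM = 2  # independent approvals required before a training run may start


def approve(approver: str, run_id: str) -> bytes:
    """An approver signs off on one specific training run."""
    return hmac.new(APPROVER_KEYS[approver], run_id.encode(),
                    hashlib.sha256).digest()


def run_authorised(run_id: str, approvals: dict[str, bytes]) -> bool:
    """The run proceeds only if enough independent signatures verify."""
    valid = sum(
        1 for name, sig in approvals.items()
        if name in APPROVER_KEYS
        and hmac.compare_digest(sig, approve(name, run_id))
    )
    return valid >= QUORUM


run_id = "frontier-training-run-042"
approvals = {name: approve(name, run_id)
             for name in ("regulator", "external_auditor")}
assert run_authorised(run_id, approvals)
```

A real multi-party scheme would use threshold cryptography so that no single approver, and no thief of a single key, could authorise a run alone; the quorum check here only illustrates the shape of the control.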
During the AI Safety Summit, held in November in the United Kingdom, around twenty governments and leading developers committed to working together on future tests for their models. Representatives of OpenAI, Anthropic, Google, Microsoft, Meta and xAI took part in the sessions where the proposal was discussed.
The summit also saw the announcement of a new global testing center based in the United Kingdom. The British government said that special attention would be paid to the dangers artificial intelligence poses to national security and society.