Human-centered AI use seen as key

By YIFAN XU in Washington | Source: chinadaily.com.cn

The importance of using artificial intelligence in a human-centered way for international security was the focus of a recurring US-China dialogue.

The Jan 10 event at the Washington-based Brookings Institution was titled "How will artificial intelligence impact security relations between the United States and China? US and Chinese perspectives".

Since October 2019, Brookings' Foreign Policy program and Tsinghua University's Center for International Security and Strategy have convened such dialogues.

Colin Kahl, the Steven C. Hazy senior fellow at Stanford University and the Sydney Stein Jr. scholar at Brookings, is the head of the US delegation. Kahl said the two sides held 11 rounds of dialogue over the past five years, and the next round will be on the sidelines of the Munich Security Conference in February.

"This ongoing series of meetings, held in person in third countries and virtually during the COVID pandemic, has spanned two US presidential administrations. It's brought together consistent teams of US and Chinese experts on artificial intelligence and national security to examine where there is consensus and dissensus on boundaries around the uses of AI in national security," he said.

One of the key outcomes of the dialogue has been the development of a shared glossary of AI terminology, Kahl said. The glossary, published in August 2024, provides a common framework for experts from both countries to discuss AI issues.

Dong Ting, a Tsinghua University fellow who participated in the discussion, said: "By focusing on how each side defines and understands key concepts, we could begin to map out where our risk perceptions actually diverge. It's not just about agreeing on definitions. It's about understanding why we define things differently."

The dialogue also has helped identify areas of agreement on the use of AI in national security. For example, the two sides agreed that AI should not be used to make decisions about nuclear launches.

Chris Meserole, a former Brookings expert and one of the founding members of the Track 2 dialogue, highlighted the importance of "intellectual mechanisms to home in on the actual issues divorced from some of the broader geopolitical dynamics", such as the use of war games and hypothetical scenarios.

The panelists agreed that AI presents a number of risks and challenges for international security. However, they also emphasized that AI can be a force for good if it is developed and used responsibly.

"AI for medical and human good is not a competition," said Jacquelyn Schneider, a Hoover Institution fellow and another founding member of the dialogue. "What we know from political science is that arms races are when we're using technology to create uncertainty about states' intentions or that increase the propensity for offensive campaigns.

"The idea is not to prognosticate the future, but instead to use games as a way to understand the most dangerous futures and then to find recommendations about how to avoid these most dangerous outcomes," she said.

The panelists also discussed the importance of keeping people informed when it comes to AI decision-making, especially on national security.

"AI will never be as good as humans," said Andrew Forrest, founder of the Minderoo Foundation. "And that's why I am talking about humans for good, not humans for aggression."

Lu Chuanying, a nonresident fellow at the Center for International Security and Strategy, emphasized that experts from both sides agreed that "we need to keep AI under the control of human beings".

"AI, on the one hand, will bring us more high-quality information very fast. But on the other hand, it also will create some kind of uncertainty, which will lead to escalations. So that is dangerous, very dangerous," Lu said.

Xiao Qian, deputy director of the Center for International Security and Strategy, stressed the importance of dialogue in US-China relations.

"Although the government may not be ready to discuss this very directly right now, the academic people or the think tank people should go ahead and get ready for the future risks and challenges," she said.
