Special attention needed to ensure AI safety, US professor says

By Mike Gu in Hong Kong | Source: chinadaily.com.cn

US computer science professor Stuart Russell talks to the media at the 2025 Asia Financial Forum (AFF) in Hong Kong on Tuesday. MIKE GU / CHINA DAILY

Stuart Russell, a distinguished professor of computer science at the University of California, Berkeley, emphasized the need for special attention to the safety of artificial intelligence (AI) during its development at a group interview at the 2025 Asia Financial Forum (AFF) held in Hong Kong.

For safety reasons, AI systems need to have behavioral red lines, Russell said. "The problem with general-purpose AI is that it can go wrong in so many ways that you can't easily write down what it means to be safe. What you can do is write down some things that you definitely don't want the systems to do. These are the behavioral red lines," he said, explaining to reporters why behavioral red lines for AI are important.

"We definitely don't want AI systems to replicate themselves without permission. We definitely don't want them to break into other computer systems. We definitely don't want them to advise terrorists on how to build biological weapons," Russell said.

He added that he hopes AI development will always remain under human control, rather than become uncontrollable.

This is why it is crucial to establish behavioral red lines at the early stages of AI development, especially with the help of governments, Russell said.

"So, you can make a list of things that you definitely don't want to do. It is quite reasonable for governments to say that before you can put a system out there, you need to show us that it's not going to do these things," he said.

Russell pointed out that AI gives rise to new forms of cybercrime. Currently, criminals are using AI to craft targeted emails by analyzing social media profiles and accessing personal emails, he said. This allows AI to generate messages that reference ongoing conversations, impersonating someone else, he added.

Russell noted, however, that AI also strengthens defenses against crime. "On the other side, we have AI defenses. I'm part of a team across various universities in California working together to use AI as a defense to screen emails against phishing attacks, to look at the activities of algorithms operating within the network, and to see which ones are possibly engaging in malicious activities," he said.

When asked about AI competition between countries, Russell said, "I think, in general, competition is healthy." However, he emphasized that excessive competition in AI should be approached with caution, as it could jeopardize AI safety. "Safety failures damage the entire industry. For example, if one airline doesn't pay enough attention to safety and airplanes start crashing, that damages the whole industry," he said.

AI cooperation grounded in safety is both permissible and economically sensible, Russell said. "In collaboration with several AI researchers from the West and China, we've been running a series of dialogues on AI safety, specifically to encourage cooperation on safety. Those have been quite successful. The behavioral red lines I mentioned earlier are a result of those discussions," he said.

Regarding AI cooperation between China and the United States, Russell stated that both countries now place a strong emphasis on ensuring AI safety.

"I think there's at least as much interest in that direction in China as there is in the US. Several senior Chinese politicians have talked about AI safety and are aware of the risks to humanity from uncontrolled AI systems. So, I really hope that we can cooperate on this dimension," he said.

"The US and China have agreed not to allow AI to control the launch of nuclear weapons, which I think is sensible," he added.

mikegu@chinadailyhk.com
