Experts at a webinar hosted by the Brookings Institution urged a shift toward localized, equitable approaches to artificial intelligence safety frameworks to counter what they called Western biases.
The event on Wednesday, titled "Globalizing Perspectives on AI Safety", highlighted how current AI systems often neglect linguistic diversity, cultural context and resource disparities in non-Western regions.
Participants from Africa, Latin America, Southeast Asia, the Caribbean and Oceania outlined the challenges their regions face.
Africa faces dependency on foreign AI providers and crippling resource gaps, said Grace Chege, a junior research scholar from Kenya at the ILINA Program, which identifies talented people across Africa to work on global issues.
"Open-access AI democratizes development, but African nations lack bargaining power when Northern institutions control model access," she said.
Limited computing infrastructure, electricity shortages and funding gaps also hinder local AI safety efforts, she said.
In the Caribbean region, AI systems exclude Creole and indigenous languages, said Craig Ramlal of the University of the West Indies. "We're treated as terra nullius (Latin for nobody's land) — empty digital territory — while foreign automation threatens key industries like tourism," he said, adding that climate vulnerabilities and unstable internet service compound difficulties.
Southeast Asia grapples with AI-driven cybercrime, said Jam Kraprayoon, a strategy manager based in Thailand at the Institute for AI Policy and Strategy. Criminal networks, which are already costing billions of dollars through scams, could exploit generative AI for phishing and deepfakes, he said.
"Frontier AI risks aren't theoretical here — they're operational," he said.
Maia Levy Daniel, a senior program manager at the Trust and Safety Foundation, said Latin America lacks a unified definition of AI safety.
Fragmented policies
While Brazil's Senate passed a bill requiring transparency, most regional policies remain "generic and fragmented", she said, adding that reliance on foreign-developed AI systems leaves governance reactive rather than proactive.
In Oceania, indigenous communities face "computational colonialism", said Ben Kereopa-Yorke, a Maori researcher studying at the University of New South Wales.
He said data centers consume water and energy equivalent to the annual usage of 50,000 homes, while rising seas threaten island nations.
"AI safety debates chase future risks but ignore today's harms — like Tuvalu drowning as Silicon (Valley) theologizes," Kereopa-Yorke said.
Tuvalu is a small Polynesian island nation in the Pacific Ocean. It is the fourth-smallest country in the world and is threatened by rising sea levels.
The panelists proposed region-specific solutions at the event.
For the Caribbean, Ramlal called for "digital compute sovereignty" to reduce dependency. He said that low-compute AI models tailored to local needs, such as hurricane forecasting, are critical.
Kraprayoon urged Southeast Asian countries to integrate regional cybercrime tactics into safety evaluations. Partnerships with Western firms could result in sharing tools to counter AI-enhanced scams, he said.
Levy Daniel said that UNESCO-backed regional dialogues to harmonize actionable steps were crucial in Latin America. "Public-sector AI use, like automated welfare systems, needs strict oversight," she added.
Kereopa-Yorke demanded accountability for present harms in Oceania. "Indigenous frameworks like Tamana Oranga — not Silicon theology — offer sustainable models," he said. Tamana Oranga is grounded in Maori holistic well-being principles focusing on the balance of community, ecology and spirituality.