Why should we tailor AI to different cultures?

Welcome to the 49 new deep divers who have joined us since last Wednesday. If you haven’t already, subscribe and join our community in receiving weekly AI insights, updates, and interviews with industry experts straight to your feed.

----------------------

DeepDive

Your weekly immersion in AI. 

AI’s first language is English. But that doesn’t have to be the case for all AI models everywhere in the world forever…does it? 

In a recent interview with Rest of World, Jerry Chi (Head of Japan for Stability AI) said: 

“It’d be dystopian if all AI systems had the values of a 35-year-old male in San Francisco.”

And he makes a good point. Researchers in the AI community often talk about identifying and minimising bias, but the focus is almost always on making Western-built models less biased against marginalised demographics. 

But what if we switched the balance and built models specifically from and for those demographics?

What work is being done to create culturally adapted AI models? 

Since he opened Stability AI’s Tokyo office in 2022, Chi and his team have released a number of products that are tailored to Japan’s language and culture, including: 

  • A language model
  • An image-to-text generator that responds to image prompts in Japanese, with cultural nuances
  • A text-to-image generator that responds to prompts with Japanese or Asian imagery

Work like this isn’t currently widespread. But it’s possible that more alternative language models for AI will be developed in the future – and that could help to ensure that non-English speaking cultures aren’t left out of the future benefits of AI tech. 

In an article for University World News, journalist Yojana Sharma recounted insights from Aiman Erbad (Associate Professor at Hamad Bin Khalifa University, Qatar) at a recent conference. Erbad explained that a number of researchers are attempting to achieve the same level of AI accuracy with less data and smaller models, which don’t have access to endless resources in their own languages in the cloud and online. 

And at the same conference, Natasa Milic-Frayling (Research Director of Arabic Language Technologies at the Qatar Computing Research Institute) noted that countries in the MENA region are putting serious effort into researching alternatives to English-language ChatGPT. 

Because for AI tools to be useful in work and education, they have to be enabled for regional languages and cultural nuances. If they’re not, the results they produce simply won’t be relevant to local working practices and local knowledge. Students in Saudi Arabia, for example, don’t want to get only US-based results when they ask AI a question. 

It’s not about removing bias – it’s about building AI models that have different biases

This is a different approach to the conversation about bias in AI. 

Because it’s not about trying to eliminate bias completely, which is, arguably, an impossible task. 

Instead, it’s about building AI models that have different biases. Models that are created specifically for a more diverse range of cultures and languages, rather than trying to fit different cultures into existing models and inevitably producing irrelevant (at best) or dangerously biased (at worst) results. 

As Chi told Rest of World about Stability AI, “We wanted to give this model more of a Japanese bias so that it generates an image that Japanese people might typically think of when trying to picture a prompt.” 

And when you think about it, it makes perfect sense: it’s impossible for one AI model to represent the values and culture of the entire world. We have to continue to acknowledge that, and work to ensure that everyone benefits from AI – and that AI demonstrates and celebrates cultural differences instead of squashing them. 


If you enjoyed this content and want to learn more about the latest in AI, subscribe to our YouTube channel, where we upload new videos every week featuring leading AI industry experts like Pascal Bornet (Chief Data Officer, Aera Technology), Cassie Kozyrkov (Chief Decision Scientist, Google), Betsy Greytok (Vice President, Ethics & Policy, IBM) and more at #DeepFest23. You can also register for DeepFest 2024.

Interested to hear more about what AI has to say about other topics? Let us know!
