Exploring AI with Yonah Welker


Welcome to the 80 new deep divers who have joined us since last Wednesday. If you haven’t already, subscribe and join our community in receiving weekly AI insights, updates, and interviews with industry experts straight to your feed.



Your weekly immersion in AI. 

DeepFest 2024 speaker Yonah Welker (Explorer, Public Evaluator, Board Member - European Commission Projects) is an explorer of technology. 

Yonah is dedicated to understanding human relationships with emerging technologies, and shaping the future of policies that put humans at the centre of positive technology development. 

We asked Yonah some probing questions – so you can discover why exploration is so critical to the future of algorithms and AI policy, and build your understanding of the interplay between research, regulation, and product development. 

We’re really interested in how you balance being a technologist and public evaluator/observer. What does that mean to you, and why is an exploratory approach to tech important? 

“My work combines being a technologist, public evaluator, policy and ethics observer. It reflects the complexity of technology transfer and social adoption in our area of work, including the feedback loop with different stakeholders. 

“For instance, social robotics and assistive technologies for children with autism may involve multiple users (parents, caregivers, educators) and multiple interfaces of data input. Moreover, people with disabilities may have additional conditions and impairments (comorbidities) that do not exist in data sets. 

“Algorithms may not properly identify individuals who lack limbs, or who have facial differences, asymmetry, speech impairments, different communication styles or gesticulation, or who use assistive devices. Facial recognition systems may use ear shape or the presence of an ear canal to determine whether or not an image includes a human face; yet this may not work for groups with craniofacial syndromes or who lack these features.

“When compared to other AI systems, language-based platforms require even more attention and ethical guidance. In particular, they can imitate human behaviour and interaction, involve more autonomy and pose challenges in delegating decision-making. They also rely on significant volumes of data, a combination of machine-learning techniques and the blend of social and technical literacy behind it.

“So research and development of systems addressing different physical, sensory and cognitive spectrums, or minors, is still a complex task from a technology and policy perspective. It involves an intersectional nature; condition-, age-, gender- and spectrum-specific parameters; and multiple legal frameworks that must be engaged to address and protect these groups properly. My work requires a balance between adopters, technologists, policymakers and governors: exploring new and emerging algorithms and technologies, assessing their ontology, finding ways towards better and more human-centred adoption, and working on policy suggestions and actions.”

You interact with the Commission’s and other governmental technology ecosystems. Could you share your policy and technology mission so far, and how you work with and assess technologies and algorithms today?

“I’ve spent over 15 years screening technologies, from co-founding and supporting AI and technology projects to serving public authorities and funds as an evaluator and advisor, overseeing and screening cohorts and technologies and participating in evaluation and assessment. 

“At later stages, my personal focus shifted more to the areas of health, education, work and spectrums, including social AI, robotics, solutions and ecosystems supporting cognitive, sensory and physical spectrums – including cognitive disabilities and autism.

“This area is very close to me and known for its complexity and intersectional nature. During this work, I’ve thought extensively about how to further improve the adoption, assessment, human-centricity and feedback loop of these technologies. This inevitably led me to work on suggestions for policies, frameworks, ethics and guidelines. 

“For instance, this year I had the opportunity to contribute my views and repositories to three important initiatives: 

  • World Health Organization, Generative AI in Health
  • UNESCO, Digital Learning Week and the announcement of Generative AI in Education
  • OECD’s technology repository and report on AI for assistive technologies, labour and disability support

“Additionally, my suggestions and public commentary on disability-centred algorithms were published by the White House’s PCAST Generative AI group.  

“I also cooperated closely on stress tests and algorithmic suggestions for the AI Act, the Digital Services Act and Digital Markets Act, accessibility frameworks and similar legislation. In parallel, we held a series of consultations with our counterparts in MENA and other regions to achieve tangible cooperation on emerging algorithmic ontologies, policies and frameworks. 

“Besides, following our open MOOC Human-Centred AI (which was more focused on the public and member states), I plan to release the Disability-Centred AI MOOC – bringing more focus to designated groups.

“This comprehensive work led me to Riyadh in 2022, where I had the opportunity to curate the Global AI Summit for the good of humanity. That work mirrored part of my journey, including the intersection of algorithms, their impacts on humanity, and spectrums.” 

When it comes to AI, do you currently see any clear routes to (or methodologies for) developing effective, inclusive policy? 

“It’s a good question.

“Let’s take AI for assistive technologies as an example. AI algorithms can be used to augment smart wheelchairs, walking sticks, geolocation and city tools, and bionic and rehabilitation technologies, adding adaptiveness and personalization. They can support people with hearing impairments by using computer vision to turn sign language into text, or people with visual impairments by turning pictures into sounds. 

“AI is especially useful for cognitive disabilities associated with autism, dyslexia, attention deficit or challenges with emotion recognition, helping track and decode particular articulations. These algorithms also fuel a range of systems used by the general population (this is called ‘assistive pretext’).

“Despite the significant possibilities of AI, every disability is unique and may pose challenges for algorithms associated with proper recognition, analysis, predictions and outcomes. A person may lack particular limbs, or have a unique body shape, posture or movement pattern, making them difficult for algorithms to recognise. 

“People who are blind or have a visual impairment may not properly understand visual cues given by automated systems. 

“Individuals with hearing impairments may not hear and comply with audible commands or warnings. That’s why policing and law enforcement systems often present high risks for these individuals. Similarly, individuals with cognitive and neurodisabilities may communicate differently, or have different behaviour or speech patterns that are not properly recognised by speech recognition systems.

“Why do AI systems produce wrong outcomes? It can be due to…

  • Lack of access to the data of the target population
  • Distortions in existing statistics and data sets
  • Limitations of AI models
  • Lack of explainability
  • Models that are not properly aligned with objectives

“Errors can also arise from the constraints of a technology or a system’s design, social effects, and organisational, institutional and policy limitations.

“So in order to reach a state where algorithms serve humanity, we should embrace this as a complex, cross-sector objective; avoid silos; ensure representation and oversight of involved stakeholders; and ensure fairness, transparency and accountability for data, algorithms, systems and assessment. 

“It’s also important to see the distortions behind algorithms and algorithmic policies – that’s why our recent suggestions were addressed not only to the AI Act, but also to the ‘platform’ acts, such as the Digital Services Act and Digital Markets Act, and to cross-sector cooperation.”

Learn more from Yonah Welker in next week’s newsletter 

Next week we’ll share the second part of this deep dive interview – when we ask about the biggest challenges Yonah has faced, the areas where they’ve seen the greatest impact, and how ecosystems like DeepFest help to enable and empower this work. 

If you enjoyed this content and want to learn more about the latest in AI, subscribe to our YouTube channel, where we upload new videos every week featuring leading AI industry experts like Pascal Bornet (Chief Data Officer, Aera Technology), Cassie Kozyrkov (Chief Decision Scientist, Google), Betsy Greytok (Vice President, Ethics & Policy, IBM) and more at #DeepFest23. You can also register for DeepFest 2024.

P.S. The DeepFest agenda is now live on our website. Mark your calendars with the sessions you'd like to attend and the speakers you'd like to see on stage.

