Connecting people to develop AI policy

Welcome to the 100 new deep divers who have joined us since last Wednesday. If you haven’t already, subscribe and join our community to receive weekly AI insights, updates, and interviews with industry experts straight to your inbox.

----------------------

DeepDive

Your weekly immersion in AI.

Last week, we asked Yonah Welker (Explorer, Public Evaluator, Board Member - European Commission Projects) what it means to be a technologist and a public observer, and whether there are any clear routes to developing effective, inclusive AI policy.

This week, Yonah talks about their involvement in public initiatives, and the power of events like DeepFest to enable collaboration and policy-making in the AI space.

Let’s get straight into it.

Could you tell us about your involvement in public and European Commission technology initiatives? Where have you seen the biggest impact - and how have these initiatives been affected by emerging AI regulation such as the AI Act?

“Over the last few years we have worked on two important things. The first is repositories and landscapes of assistive and human-centred technologies driven by AI systems and algorithms. In particular, we aimed to assess different ontologies, parameters and groups to inform public reports, social awareness efforts and frameworks like the Accessibility Act.

“This directive aims to improve how the internal market for accessible products and services works by removing barriers created by divergent rules in EU Member States. It covers products and services that have been identified as being most important for persons with disabilities.

“Some parts of this work were included in the recent OECD report, which encompasses not only a list of emerging assistive technologies, but also existing challenges and ways to improve market access, adoption and assessment.

“The second is policy, where we worked on suggestions to stress-test emerging AI regulation, the Digital Services and Digital Markets Acts, and other frameworks.

“Following the Bletchley Declaration, governments are looking to adopt a risk-based approach to algorithmic safety, focusing on areas, types, cases and affected populations. While there is general agreement, countries are still at different stages of deployment.

“An even bigger tendency is to see algorithmic impacts and mechanisms through the lens of complex national and social strategies. In particular, the US’ AI executive order requires safety assessments, civil rights guidance, and research on labour market impact, accompanied by the launch of the AI Safety Institute. In parallel, the UK government introduced the AI Safety Institute and the Online Safety Act, echoing the European Union’s approach in the Digital Services Act, with more focus on the protection of minors.

“This was followed by efforts from multilateral agencies and institutions such as UNESCO, the WHO and the OECD, which are working on area-specific guidelines to address algorithms in education, healthcare and the labour market, along with literacy and capacity-oriented recommendations. These include UNESCO’s AI competence framework for students and teachers, and a recommendation to set 13 as the minimum age for using generative AI. Moreover, UNESCO’s recent action plan to address disinformation and social media harms, including those involving generative AI, collected responses from 134 countries, including across Africa and Latin America. Similarly, governments from 193 countries signed a commitment to effectively implement children’s rights in the digital environment, adopted by the United Nations General Assembly’s Third Committee.

“Such tendencies are increasing the role of non-AI-specific frameworks such as the Accessibility Act (a further iteration of which is expected in 2025), the EU Digital Services and Digital Markets Acts, laws and directives protecting children and designated groups, and the involvement of specialised institutions and frameworks. In particular, the Digital Services and Digital Markets Acts cover the ‘gatekeepers’ – big technology companies and platforms. These acts have specific articles to address fair competition, minimise silos, and improve accountability and reporting systems. For user protection, they address algorithmic transparency, outcomes and user consent, and protection for minors and designated groups. They also look at identifying dark patterns and manipulation.”

How can ecosystems like DeepFest help to empower this mission and work?

“Our objective is to avoid silos and connect all stakeholders together to ensure human-centred development and adoption.

“For instance, AI algorithms and systems play a significant role in supporting and accommodating disabilities, from augmenting assistive technologies and robotics to creating personalised learning and healthcare solutions. Language-based models (so widely discussed recently) and similar approaches may further expand this impact and the R&D behind it. In particular, such systems may fuel existing assistive ecosystems and health, work, learning and accommodation solutions that require communication and interaction with the patient or student, social and emotional intelligence, and feedback.

“Such solutions are frequently used in areas involving cognitive impairments, mental health, autism, dyslexia, attention deficit disorder and emotion recognition impairment, all of which rely largely on language models and interaction.

“With the growing importance of web and workplace accessibility (including the dedicated European Accessibility Act), Generative AI-based approaches can be used to create digital accessibility solutions, such as speech-to-text or image-to-speech conversion. They may also fuel accessible design and interfaces involving adaptive texts, fonts and colours, benefiting people with reading, visual or cognitive impairments. Similar algorithms can be used to create libraries, knowledge and education platforms that serve assistive accommodation, social protection and micro-learning.

“Finally, approaches explored through building such accessible and assistive ecosystems may help to fuel the ‘assistive pretext’ - when technologies created for specific designated groups are later adapted for a broader population - fueling technologies across health, education, work, cities and 'neurofuturism', and bringing new forms of interaction, learning and creativity involving biofeedback, languages and different forms of media.”

See you next week

We’ll be back in your inbox with insights from another AI expert.


If you enjoyed this content and want to learn more about the latest in AI, subscribe to our YouTube channel, where we upload new videos every week featuring leading AI industry experts like Pascal Bornet (Chief Data Officer, Aera Technology), Cassie Kozyrkov (Chief Decision Scientist, Google), Betsy Greytok (Vice President, Ethics & Policy, IBM) and more at #DeepFest23. You can also register for DeepFest 2024.

P.S. The DeepFest agenda is now live on our website. Mark your calendars with the sessions you'd like to attend and the speakers you'd like to see on stage.
