Welcome to the 38 new deep divers who have joined us since last Wednesday. If you haven’t already, subscribe and join our community to receive weekly AI insights, updates, and interviews with industry experts straight to your feed.
Your weekly immersion in AI.
We’re getting excited about DeepFest 2024. So this week we had a chat with one of our upcoming speakers, Roman Yampolskiy (AI author and Director of the Cybersecurity Lab at the University of Louisville).
Roman is the author of several books (and countless articles) on behavioural biometrics, cybersecurity, and AI safety, and his research has been cited by more than 1,000 scientists in over 100 publications.
Here’s what he told us.
Could you share your career journey so far?
“My journey in the field of AI has been a blend of rigorous academic pursuit and a constant drive to understand the deeper implications of artificial intelligence on society.
“One of the pivotal moments in my career was when I fully grasped the potential and the risks associated with superintelligent AI systems. This realisation propelled me to focus on AI Safety and Security – fields I consider crucial for the responsible development of AI technologies.
“In my early interactions with fellow researchers, the notion of AI safety was often met with scepticism. It's fascinating to see how this perspective has evolved over the years, with AI safety now being a mainstream concern in the tech community.”
Do you think AI developers and/or governments are doing enough to ensure that AI is developed safely?
“In my view, both AI developers and governments are making strides, but there's still a considerable gap in ensuring AI is developed safely. The pace of technological advancement often outstrips the development of corresponding safety measures and regulatory frameworks.
“There is a need for more proactive engagement, rigorous safety research, and ethical considerations integrated into the AI development lifecycle. Current efforts, while commendable, need to be significantly scaled up to match the rapid evolution of AI capabilities.”
If you could change one thing about the way developers are working on/with AI right now, what would it be?
“If I could change one aspect of current AI development, it would be to instil a stronger culture of 'safety-first' in the AI community. This involves prioritising long-term implications over short-term gains and integrating ethical considerations right from the early stages of AI design and development. AI developers should be trained not just in technical skills, but also in ethical reasoning and risk assessment related to AI technologies.”
How far away do you think we are from AI that we're not able to restrict or control?
“Predicting the timeline for the emergence of AI that we cannot restrict or control is challenging due to the unpredictable nature of AI research and breakthroughs. However, given the current trajectory, it's plausible that within the next five years, we could encounter forms of AI whose behaviours and decisions are not fully within our control. This underscores the urgency of addressing AI safety and control problems now, rather than later.”
Finally, why are events like DeepFest valuable to you?
“Events like DeepFest are invaluable for multiple reasons. They provide a platform for interdisciplinary dialogue, which is crucial in a field as diverse and impactful as AI. Such events foster collaborations, spark innovative ideas, and most importantly, raise awareness about both the potential and the challenges of AI.
“For me personally, they offer an opportunity to engage with fellow researchers, practitioners, and policymakers, helping to shape a collective vision for the responsible advancement of AI.”
Want to learn more from Roman?
Register now to attend DeepFest 2024.