Transition Now: From Experimental to Applied AI

The sheer volume of data being generated by current technology has created the need for AI. Recent stats suggest that 2.5 quintillion bytes of data were created every single day in 2022, and no team of humans could handle that in any meaningful way.

But during his keynote at #LEAP22, Dan Wright (CEO at DataRobot) pointed out that as many companies try to adopt AI tech to make sense of data, they hit roadblocks – “and in fact 85% of AI projects never make it off the ground. They fail.” 

Organisations have people working hard on data models, but those models never actually reach the point where they add value and inform the decisions being made across the business. According to Wright, this is because organisations haven't shifted away from an experimental approach (build models, try them out, hit or miss) and towards applied AI: bringing AI development out of the lab and into real-world settings.

What exactly is AI?

To define what AI is, Wright pointed out, you first need to define what intelligence is. Definitions have varied throughout time and across cultures, ranging from Aristotle's notion of reason (an early account of intelligence that was closely linked to morality) to the Binet-Simon scale, created in the early 1900s and still the basis of intelligence measurement today (the IQ test).

At DataRobot, Wright said, “we believe intelligence is the ability to acquire knowledge from experience, for future decision-making.” 

If we take that view, then applied AI makes sense – not just because it gets AI models into operation more quickly, but because the data those models then learn from is real-life data.

How do you do that?

In a nutshell, applied AI ensures that machine learning models produce real-world results by putting those models in the real world. Instead of ML models being created by either data scientists or MLOps teams in isolation, both teams monitor them with full-stack visibility, so the routes to leveraging ML become clear. Instead of focusing just on development or deployment, applied AI integrates the whole journey – running on real data from the start.
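
To make that loop concrete, here is a minimal sketch in Python: a model is trained, scores live data, and logs every prediction so that data science and MLOps teams share the same view of what is happening in production. The function names, log format, and file path are illustrative assumptions, not DataRobot's tooling.

```python
# A minimal sketch of the applied-AI loop described above: train on real data,
# score live data, and log what the model sees so both data science and MLOps
# share visibility. All names here are illustrative, not DataRobot APIs.
import datetime
import json

import numpy as np
from sklearn.linear_model import LogisticRegression

def train(X, y):
    """Fit a model on real (not lab-curated) data."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model

def predict_and_log(model, X_live, log_path="prediction_log.jsonl"):
    """Score live data and record inputs/outputs for full-stack monitoring."""
    preds = model.predict(X_live)
    with open(log_path, "a") as log:
        for features, pred in zip(X_live, preds):
            log.write(json.dumps({
                "timestamp": datetime.datetime.utcnow().isoformat(),
                "features": features.tolist(),
                "prediction": int(pred),
            }) + "\n")
    return preds

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
    model = train(X_train, y_train)
    predict_and_log(model, rng.normal(size=(10, 4)))
```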

But although this might sound logical, it's not simple – nor is it easy to achieve. According to the AI Data & Analytics Network, an applied AI system has a long list of technical prerequisites, including (but not limited to): a centralised data repository; an optimised, efficient architecture for data collection, transformation, storage, and retrieval; a networking infrastructure that is “high-bandwidth, low-latency and creative”; and numerous functionality, governance, and monitoring features to support, track, and manage the model as it learns from real data.
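
As a toy illustration of the first of those prerequisites, the sketch below stands up a tiny centralised repository with a collect, transform, store, and retrieve path. SQLite and pandas stand in for enterprise-grade infrastructure, and the database file, table name, and schema are assumptions made for the example.

```python
# A toy centralised data repository: collect -> transform -> store -> retrieve.
# SQLite and pandas are stand-ins for real enterprise infrastructure.
import sqlite3

import pandas as pd

def store(df: pd.DataFrame, db_path="central_repo.db", table="events"):
    """Apply a minimal transformation and write records to the shared repository."""
    df = df.dropna()  # minimal transformation step
    with sqlite3.connect(db_path) as conn:
        df.to_sql(table, conn, if_exists="append", index=False)

def retrieve(db_path="central_repo.db", table="events") -> pd.DataFrame:
    """Read back the data that any model or team can train and monitor against."""
    with sqlite3.connect(db_path) as conn:
        return pd.read_sql(f"SELECT * FROM {table}", conn)

if __name__ == "__main__":
    store(pd.DataFrame({"user_id": [1, 2], "value": [0.4, 0.9]}))
    print(retrieve().head())
```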

DataRobot has been focused on applied AI for the last decade. When the company was established in Boston in 2012, it aimed to automate the processes behind machine learning. “We pioneered the category of AutoML,” Wright said, “then we realised that on this journey to create value from AI, that was not sufficient.” 

“So we needed to be able to get models deployed in production, monitored, managed, optimised and updated, constantly as data is changing.” 

With this in mind, in 2019 DataRobot acquired ParallelM, the company that created MLOps – one of the core functions of machine learning engineering. “And when you combine AutoML with MLOps, you get continuous AI. A system for always updating machine learning models as data is changing.” 
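
As a rough sketch of what “continuous AI” could look like in practice, the snippet below watches incoming data for distribution drift and retrains the deployed model when drift appears. The Kolmogorov-Smirnov test and the 0.05 threshold are assumptions chosen for illustration, not DataRobot's actual drift logic.

```python
# A rough sketch of "continuous AI": compare incoming data against training
# data, and retrain when drift appears. Test choice and threshold are assumed.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import LogisticRegression

DRIFT_P_VALUE = 0.05  # assumed threshold; tune per feature and use case

def has_drifted(X_train, X_live):
    """Flag drift if any feature's distribution has shifted significantly (KS test)."""
    for i in range(X_train.shape[1]):
        _, p = ks_2samp(X_train[:, i], X_live[:, i])
        if p < DRIFT_P_VALUE:
            return True
    return False

def continuous_update(model, X_train, y_train, X_live, y_live):
    """Retrain the deployed model whenever live data no longer looks like training data."""
    if has_drifted(X_train, X_live):
        X_train = np.vstack([X_train, X_live])
        y_train = np.concatenate([y_train, y_live])
        model.fit(X_train, y_train)  # refreshed model, now current with live data
    return model, X_train, y_train

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_train, y_train = rng.normal(size=(300, 3)), rng.integers(0, 2, 300)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # Simulate live data whose first feature has shifted.
    X_live = rng.normal(size=(100, 3)) + np.array([2.0, 0.0, 0.0])
    y_live = rng.integers(0, 2, 100)
    model, X_train, y_train = continuous_update(model, X_train, y_train, X_live, y_live)
```

In a real deployment the retraining step would sit behind the governance and monitoring controls mentioned earlier, rather than firing automatically on a single test.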

From there, DataRobot added data preparation and feature engineering; “and then we took a huge leap forward with the realisation that machine intelligence is so important, but no person, no data scientist, nobody understands your business, or your problems, or your key decisions, like you do.” This drove the company to combine the idea of machine intelligence with the idea of human intelligence: automated, with decision-capability, and with the ability to create “a unified structure between data science and the business.” 

Last year, all of this was levelled up to the cloud. And with the addition of cloud tech, that entire decade of innovation can be encompassed in a flexible, multi-cloud architecture where it can be coded or automated, and made available to all users across an entire enterprise. 

Human beings have always learnt from experience. And when you get down to it, data is really just experience – recorded in a new, digital form. So the future of AI development isn’t experimental, meaning that it isn’t confined to a data science lab until it’s fine-tuned and ready to deploy. 

Efficient, effective, truly intelligent AI will do as humans do: it’ll use real experience, in real time.