How AI handles uncertainty
In this post, we’ll explore how AI deals with uncertainty, how it uses a concept called probabilistic reasoning to make decisions, and how it tackles the complexities of our unpredictable world.
In the real world, things are rarely black and white. Situations are full of uncertainty, nuances, and unexpected twists—making decision-making tough for anyone, human or machine. This unpredictability is one of the biggest challenges for artificial intelligence (AI) too. Unlike the math problems and games we often associate with AI, real-world scenarios demand that it handle uncertainty and make educated guesses rather than clear-cut decisions. But how does AI manage this?
When you ask a human a question, they often “hedge their bets.” Imagine asking someone, “Will it rain tomorrow?” They might say, “Probably,” or “It’s likely.” They might even quote weather odds if they know them. Humans handle uncertainty intuitively, because we’re used to operating in an unpredictable world. AI, however, isn’t naturally good at handling such ambiguity—it’s built on mathematics, logic, and patterns that generally work best with clear answers.
But the world is full of situations where there is no single correct answer. Predicting the weather involves multiple, ever-changing factors.
Self-driving cars need to navigate unpredictable roads and deal with unexpected events, like a pedestrian crossing outside a crosswalk. Diagnosing a medical condition means identifying an illness when its symptoms overlap with those of other conditions. For AI to handle these complex, ambiguous situations, it uses probabilistic reasoning: a fancy term for calculating probabilities to come up with the most likely answer, rather than an absolute one.
Probabilistic reasoning
Probabilistic reasoning is about calculating the likelihood of different outcomes based on the data an AI model has seen before. This approach allows AI to make educated guesses when certainty is impossible. Think of it like rolling a die. If you know the die has six sides, you’d reason that each number (1 through 6) has an equal chance of being rolled: around 16.7%. But if you started seeing that some numbers show up more often than others, you’d adjust your expectations accordingly.
Probabilistic reasoning in AI works similarly, but instead of a die, it’s dealing with data—lots and lots of it. In technical terms, this reasoning uses probability distributions, which map out all possible outcomes and the likelihood of each. Some common tools AI uses to handle uncertainty are:
Bayesian inference: A method where AI updates its predictions based on new information, similar to how we change our opinions as we get new facts (see the sketch after this list).
Markov decision processes: A mathematical model that helps AI decide the next step in a sequence of actions based on the probability of different outcomes.
Hidden Markov models: Used when some parts of the situation are unknown or “hidden.” These models help AI make guesses about hidden variables by analyzing what it can see.
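To make the first of these tools concrete, here’s a minimal sketch of Bayesian inference in Python, applied to the die from the earlier example. Everything in it is an illustrative assumption: the two hypotheses (a fair die versus one loaded toward six) and their probabilities are invented for this post, not drawn from any real system.

def bayes_update(prior, likelihoods, observation):
    # Bayes' rule: posterior is proportional to prior times likelihood, then normalised.
    unnormalised = {h: prior[h] * likelihoods[h][observation] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two competing hypotheses about the die we are rolling (illustrative numbers only).
prior = {"fair": 0.5, "loaded": 0.5}
likelihoods = {
    "fair":   {face: 1 / 6 for face in range(1, 7)},           # every face equally likely
    "loaded": {**{face: 0.1 for face in range(1, 6)}, 6: 0.5},  # six comes up half the time
}

belief = prior
for roll in [6, 6, 3, 6, 6]:          # a run of suspicious rolls
    belief = bayes_update(belief, likelihoods, roll)
    print(roll, {h: round(p, 3) for h, p in belief.items()})

After a few suspicious rolls, the belief shifts sharply toward the “loaded” hypothesis, which is exactly the “updating predictions based on new information” described above.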
Imagine an AI system built to predict stock market trends. The stock market is influenced by countless factors, from global news events to investor psychology, making it inherently unpredictable. To handle this uncertainty, an AI model would use probabilistic reasoning to evaluate how likely certain trends are based on past data. When new data comes in (like a surprising economic report), the model updates its predictions, recalculating the probabilities of different market directions.
Bayesian inference helps the AI adjust, improving its ability to “hedge its bets” about which way the market might move next. The AI doesn’t give a 100% “yes” or “no” answer. Instead, it might say, “There’s a 70% chance the market will go up tomorrow based on recent patterns,” allowing investors to make informed choices, even with an uncertain outcome.
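The same update can be written out as a tiny worked example. The prior, the likelihoods of seeing such a report, and the resulting 70% figure below are all made-up numbers, chosen only to show how the arithmetic of Bayes’ rule produces that kind of hedged answer.

# Toy Bayes' rule update for the market example above; every number is illustrative.
p_up = 0.4                 # prior: chance the market rises tomorrow
p_report_if_up = 0.7       # how often this kind of upbeat report precedes a rise
p_report_if_down = 0.2     # how often it precedes a fall

posterior_up = (p_up * p_report_if_up) / (
    p_up * p_report_if_up + (1 - p_up) * p_report_if_down
)
print(f"P(market up | report) = {posterior_up:.2f}")   # 0.70, i.e. "a 70% chance"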
Dealing with unknowns
One major hurdle in handling real-world complexities is the concept of hidden variables. In many situations, some important details remain unknown, creating blind spots for AI. Let’s say you’re using a medical diagnostic AI to identify potential diseases based on symptoms. The model might be pretty accurate, but what if a patient has a rare underlying condition that isn’t in the dataset? This hidden variable could skew the AI’s decision-making, leading it to give less accurate predictions.
Hidden Markov models and other advanced probabilistic methods help AI cope by allowing it to account for possible “unknowns” or missing information, making it more resilient. In practice, though, these models are not perfect. Handling hidden variables remains one of the toughest challenges in AI.
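As a rough illustration of the idea, here’s a minimal hidden Markov model sketch in Python. The hidden states, symptoms, and probabilities are all invented for this post and are far simpler than anything a real diagnostic system would use; the point is only how the model keeps a belief over something it cannot observe directly.

states = ["healthy", "ill"]
start = {"healthy": 0.8, "ill": 0.2}
transition = {                       # how the hidden state tends to change day to day
    "healthy": {"healthy": 0.9, "ill": 0.1},
    "ill":     {"healthy": 0.3, "ill": 0.7},
}
emission = {                         # probability of an observed symptom given the hidden state
    "healthy": {"no_fever": 0.9, "fever": 0.1},
    "ill":     {"no_fever": 0.3, "fever": 0.7},
}

def forward(observations):
    # Forward algorithm: track the probability of each hidden state given the symptoms so far.
    belief = {s: start[s] * emission[s][observations[0]] for s in states}
    for obs in observations[1:]:
        belief = {
            s: sum(belief[prev] * transition[prev][s] for prev in states) * emission[s][obs]
            for s in states
        }
    total = sum(belief.values())
    return {s: p / total for s, p in belief.items()}

print(forward(["no_fever", "fever", "fever"]))   # belief shifts toward "ill"

Even though the “ill” state is never observed directly, its probability rises as the visible evidence accumulates.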
When AI makes a prediction, it often attaches a confidence level to it. Imagine a virtual assistant answering, “I’m 80% confident this is what you’re looking for.” This confidence level helps users understand the model’s level of certainty and assess how much to trust it. High confidence is great, but there’s a trade-off: if AI sticks only to high-certainty answers, it will miss out on a lot of potential predictions.
For instance, in a medical setting, a model might have low confidence about a rare diagnosis, but if it only reports high-confidence predictions, it could miss an important possibility. To find a balance, some AI systems use a technique called confidence calibration, adjusting predictions to better reflect real-world accuracy. This way, the model knows when to provide more cautious suggestions and when to be more assertive, based on historical accuracy.
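One simple way to see what calibration means in practice is to bin predictions by their stated confidence and compare that confidence with how often those predictions were actually right (the idea behind a reliability diagram). The sample predictions below are invented for illustration.

from collections import defaultdict

# (model's stated confidence, whether the prediction turned out to be correct)
predictions = [(0.95, True), (0.9, True), (0.9, False), (0.7, True),
               (0.65, False), (0.6, True), (0.55, False), (0.5, False)]

bins = defaultdict(list)
for confidence, correct in predictions:
    bins[round(confidence, 1)].append(correct)     # group predictions into confidence bins

for confidence in sorted(bins):
    outcomes = bins[confidence]
    accuracy = sum(outcomes) / len(outcomes)       # True counts as 1, False as 0
    print(f"stated confidence {confidence:.1f} -> observed accuracy {accuracy:.2f}")

If stated confidence routinely exceeds observed accuracy, the model is overconfident, and its scores can be rescaled (for example with temperature or Platt scaling) so that an “80% confident” answer really is right about 80% of the time.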
Real-world examples
Self-driving cars: Autonomous vehicles use probabilistic reasoning to make decisions under uncertainty. For example, if a sensor picks up a blurry object, the car has to calculate whether it’s more likely a pedestrian or a shadow and respond accordingly. Self-driving cars calculate the probabilities of different scenarios constantly to avoid collisions while adapting to complex environments.
Spam filters: Ever wondered how your email’s spam filter “decides” what to flag as spam? It’s not just a simple yes-or-no process. Instead, it uses probabilistic reasoning to calculate the likelihood that an email with certain words, phrases, or attachments is spam. The filter doesn’t need to be 100% certain—it just needs a high enough probability to separate spam from non-spam (see the sketch after these examples).
Voice assistants (like Siri or Alexa): When you ask a voice assistant a question, it doesn’t always know exactly what you mean. But it’s able to handle uncertainty by breaking down your query and estimating the probabilities of different interpretations. For instance, if you say, “Play the new song by [artist],” it might look for recent releases and guess what you mean, even if it’s not entirely certain.
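For the spam filter mentioned above, the underlying calculation is often a naive-Bayes-style score. The word probabilities below are invented for illustration; a real filter learns them from large amounts of labelled email.

# Minimal sketch of the probabilistic idea behind a spam filter; all numbers are illustrative.
p_spam = 0.4                                     # overall share of email assumed to be spam
word_given_spam = {"winner": 0.30, "free": 0.40, "meeting": 0.05}
word_given_ham  = {"winner": 0.01, "free": 0.10, "meeting": 0.30}

def spam_probability(words):
    # P(spam | words), assuming word occurrences are independent given the class.
    spam_score = p_spam
    ham_score = 1 - p_spam
    for w in words:
        spam_score *= word_given_spam.get(w, 0.5)   # unseen words treated as neutral
        ham_score *= word_given_ham.get(w, 0.5)
    return spam_score / (spam_score + ham_score)

print(spam_probability(["winner", "free"]))      # high probability: likely spam
print(spam_probability(["meeting"]))             # low probability: likely legitimate

The filter never needs certainty; it just compares which explanation, spam or legitimate mail, makes the observed words more likely.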
The limits of probabilistic reasoning
Although probabilistic reasoning is powerful, it has its limitations. Sometimes, real-world situations are just too complex, chaotic, or novel for AI to handle with high accuracy. An unexpected natural disaster, for instance, might overwhelm an AI weather prediction model since it’s based on historical data that doesn’t account for all future possibilities. Furthermore, the quality of an AI’s predictions depends heavily on the data it’s trained on. If the training data doesn’t cover enough diverse situations, the AI’s probabilistic reasoning will be limited in scope, reducing its accuracy when encountering something entirely new.
Understanding how AI handles uncertainty can help us appreciate both its capabilities and limitations. When we know that AI relies on probabilistic reasoning, we can better grasp why it might give a “best guess” rather than a certain answer. And in situations where it’s important to get a 100% correct answer, we know to seek human expertise or a more thorough analysis.
AI’s probabilistic reasoning is a bit like ours - it can handle a lot of uncertainty, make educated guesses, and learn over time. But just like humans, it doesn’t always get it right. It needs high-quality data, diverse experiences, and continuous updates to keep making the best possible decisions. So next time you interact with AI, remember: it’s doing its best to handle uncertainty, just as we all do in a complex and unpredictable world.