The marketing industry is currently obsessed with the final mile of Artificial Intelligence (AI)… scaling creatives, automating tasks, and utilizing predictions to maximize profits. Every day, we are flooded with a new tool, a new prompt, or an improved output from some AI-driven platform. Virtually every resource teaches us how to drive the car. But almost nobody in our space is teaching us how the engine underneath actually works.
Before diving in, I want to be clear: I am a huge proponent of AI. My enthusiasm doesn’t stem from a belief that these systems are flawless (far from it). Rather, I see AI as the ultimate liberator. By automating the monotonous, repetitive tasks that consume our workdays, AI grants us the bandwidth to unleash what truly matters: our creativity. It is that human creativity that allows us to differentiate our brands in a crowded marketplace.
To wield this power effectively, we have to understand the tool. What follows is my personal take on applying the technical insights from Machines That Think, authored by Inga Strümke, a physicist specializing in explainable AI (XAI), to our world of marketing and sales. Here is what I learned from her deep dive, and why every Martech executive needs to pick up this book to transition from being a passenger to a driver of their AI strategy.
The Architecture of Our New Reality
Inga Strümke takes us back to the roots of logic. She demystifies the transition from Symbolic AI, which relied on rigid, human-coded rules, to the modern era of Machines That Learn. For a marketer, this distinction is critical. We’ve moved from systems that follow our instructions to systems that infer their own rules from the data we feed them.
Machine learning isn’t a magical spark of consciousness; it is an optimization process. When we deploy a tool, we aren’t hiring a digital employee who understands our brand values; we are deploying a mathematical function designed to minimize error.
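That claim can be made concrete in a few lines. This is a minimal sketch, with invented data and an invented learning rate, of what "learning" actually is: repeatedly nudging a parameter so that a numeric error shrinks. No understanding, no brand values, just minimization.

```python
# A minimal sketch of machine learning as error minimization:
# fit y ~ w*x by gradient descent on mean squared error.
# The data, learning rate, and step count are illustrative, not from the book.

def fit_slope(xs, ys, lr=0.01, steps=1000):
    """Find w that minimizes mean((w*x - y)^2) via gradient descent."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad  # step downhill on the error surface
    return w

# Invented "training data": e.g., ad spend vs. conversions, true slope 2
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = fit_slope(xs, ys)
print(round(w, 3))  # converges toward 2.0
```

Everything a modern model does is a scaled-up version of that loop: trillions of parameters instead of one, but still only a function being pushed toward lower error.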
The Trillion-Parameter Illusion: The Context Gap
We often hear about hundreds of billions or trillions of parameters in Large Language Models (LLMs) as a proxy for intelligence. But one of the most sobering realizations from Inga Strümke’s work is the sheer scale of what these models don’t know.
Inga Strümke contrasts the true distribution with the wrong distribution, explaining that while networks can capture incredibly subtle patterns, they are still limited to the world of their training data. For a marketer, this brings us back to the old garbage-in, garbage-out adage, but with a modern, sobering twist.
A trillion-parameter model is still working with a tiny, flattened fraction of the data a human uses to make a decision. A consumer’s choice isn’t just based on text or past clicks; it’s based on the randomness of our lives, our physical environment, real-time emotional state, senses, social pressure, and decades of unrecorded life experiences.
The Caution: We must resist the belief that AI knows our customers better than we do, or that it possesses the complete picture. Data is always an incomplete map of the complex human experience; by over-relying on the machine’s view, we ignore the 90% of the decision-making iceberg that exists outside the digital data stream. AI can see the digital footprint, but it entirely misses the person making the tracks. Furthermore, jumping into AI while ignoring existing data accuracy issues within your systems is a dangerous leap that only scales your errors.
The Probabilistic Future: The High-Confidence Guess
AI does not know the future; it generates a probabilistic one… or several. We use these predictions daily across systems to map our customers’ journeys. The system takes a consumer’s past behavior, compares it to millions of similar profiles in our database, and delivers a prediction with confidence. But we must remember that a prediction is still a statistical guess.
The machine is often quite confident in its output, but it struggles to provide a human-readable margin of error for the variables it cannot see. Consider an auto dealer who uses a sophisticated AI model to schedule an annual open house for the following month. The model analyzes years of sales data, local economic trends, and competitor activity to identify the optimal weekend for the event. The dealer invests heavily in inventory and staff based on this high-confidence prediction.
Unbeknownst to the model, storms are expected that entire weekend. The AI didn’t have a weather feed, so it never calculated the specific psychological deterrent of a localized storm on a car buyer’s motivation. The prediction was correct based on its data, but reality fell outside its parameters. As marketers, we must balance algorithmic confidence with contingency planning.
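A toy sketch of the dealer scenario (all rows and numbers are invented) shows why this happens: the model's "confidence" is just a statistic computed over the features it was given. A variable that was never recorded, like weather, cannot lower that confidence because it never enters the calculation at all.

```python
# Hypothetical sketch: "confidence" as a frequency over observed features.
# Weather is absent from the data, so it is invisible to the prediction.

historical_events = [
    # (month, local_economy, turnout_was_high) -- invented records
    ("June", "strong", True),
    ("June", "strong", True),
    ("June", "weak",   False),
    ("June", "strong", True),
    ("June", "strong", True),
]

def predict_high_turnout(month, economy):
    """Estimate P(high turnout) from matching historical rows only."""
    matches = [e for e in historical_events
               if e[0] == month and e[1] == economy]
    return sum(e[2] for e in matches) / len(matches)

confidence = predict_high_turnout("June", "strong")
print(f"Model confidence: {confidence:.0%}")  # prints 100%
# A storm that weekend isn't a feature, so the model literally
# cannot price it in -- the 100% is honest about the data, not reality.
```

The output is not wrong about the data; it is silent about everything outside the data, which is exactly the gap contingency planning has to cover.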
The Death of the Spark: The Cost of Over-Optimization
We love how algorithms can determine the next best action (NBA) to nudge a customer down the funnel. The book uncovers a warning about the cost of these feedback loops. When an algorithm finds what a customer likes and feeds them more of it, it narrows their reality. It optimizes away surprise, delight, and serendipity.
It’s so good, you simply can’t resist… it might kill your spark.
Inga Strümke
The Caution: In marketing, that spark is brand differentiation. These systems can drift toward targets that don’t align with human intent. If we rely purely on algorithmic optimization, we optimize for sameness. We ensure our customer journeys look exactly like our competitors’ because both sets of algorithms are maximizing for the same mathematical targets (CTR, ROAS, time-on-page). A brand that optimizes away surprise cannot sustain long-term loyalty because it has become a commodity of the algorithm.
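The narrowing effect of that feedback loop can be sketched in a few lines (the categories and click counts are invented). Recommend whatever currently leads, the user clicks what they are shown, and the early leader's share only grows:

```python
# Toy feedback loop: always recommending the currently most-clicked
# category concentrates all future exposure on one category.
from collections import Counter

clicks = Counter({"sports": 5, "music": 4, "books": 4})  # invented start

for _ in range(20):
    top = clicks.most_common(1)[0][0]  # recommend the current favorite
    clicks[top] += 1                   # the user clicks what they're shown

share = clicks["sports"] / sum(clicks.values())
print(round(share, 2))  # -> 0.76: the early leader swallows the feed
```

Nothing in the loop is malicious; the sameness is a structural property of optimizing on the loop's own output.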
The Trap of Correlation vs. Causation
AI is the greatest pattern-matching tool ever invented. But AI is purely mathematical. It excels at finding correlation, but it lacks a human-like understanding of the world. An algorithm can tell you that customers who buy your CRM also happen to buy green office chairs. The correlation is real in the data. But there is no causal link. Starting a cross-promotion based on this is a waste of resources.
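The CRM-and-chairs pattern is easy to manufacture. In this hedged sketch (synthetic data, invented probabilities), a hidden confounder, company size, drives both purchases; the correlation between them is real and measurable, yet neither purchase causes the other:

```python
# Synthetic demonstration: a confounder creates correlation without causation.
# All probabilities are invented for illustration.
import random

random.seed(42)

rows = []
for _ in range(1000):
    big_company = random.random() < 0.5                      # hidden confounder
    buys_crm    = random.random() < (0.8 if big_company else 0.1)
    buys_chairs = random.random() < (0.7 if big_company else 0.1)
    rows.append((int(buys_crm), int(buys_chairs)))

def pearson(pairs):
    """Pearson correlation coefficient of paired 0/1 observations."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

r = pearson(rows)
print(round(r, 2))  # a clearly positive r, with zero causal link
```

A cross-promotion built on that r would target the symptom of company size, not a real purchase driver, which is why correlation alone is not a strategy.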
The Caution: As leaders, we must not mistake an AI’s output for strategic insight. The AI can predict an outcome, but more often it cannot tell you why. Only humans can intuit meaning, frustration, and motivation. If we build our strategies purely on predictive correlations, we are building houses of cards that will collapse the moment the market context shifts.
The Accountability Crisis: Opening the Black Box
When an ad targeting algorithm leads to a discriminatory outcome or a chatbot provides an off-brand response, “the algorithm did it” is no longer a valid defense. AI doesn’t create bias from thin air; it surfaces and amplifies historical bias in the data it was trained on. It effectively automates the past.
The Caution: Marketers are the gatekeepers of data. We are responsible for the inputs. If you are relying on a proprietary AI solution and you don’t understand why it is delivering certain results, you are inviting an active brand safety and ethical crisis into your tech stack. Strümke encourages a shift toward algorithmic auditing, a practice every business should adopt.
Why You Need This Book
As we look toward a future where AI handles the bulk of our execution, the value of the human marketer shifts from doing to directing. Machines That Think is the prerequisite for that transition.
It shifted my perspective from just asking, What task can I automate? to asking, Given how these neural networks operate, is this task safe or smart to automate? It teaches us that the most important part of AI-driven marketing isn’t the AI.
If you want to be a modern business leader who leverages AI rather than a passenger of algorithmic outcomes, you need to understand the machine. I highly recommend getting a copy of Inga Strümke’s work to ensure your team isn’t just using AI, but truly mastering it.
Buy Machines That Think
©2026 DK New Media, LLC, All rights reserved | Disclosure
Originally published on Martech Zone: Beyond the “How-To” of AI: Why Marketers Must Understand What AI Is