“The information you never see is, by definition, the information you cannot evaluate. And when the machine decides what is relevant on your behalf, the missing piece becomes the needle in a haystack you did not even know existed.”
The Seductive Logic of the Machine
“Something quiet and consequential is happening to the way we think.”
Every day, millions of people type a question into a generative AI tool and receive an answer that reads like it was composed by an expert: polished, structured, and delivered with the calm authority of a textbook.
We read the output, nod, and move on. In that moment, we have done something we rarely pause to examine: we have offloaded our critical thinking to a machine, assuming its logic is flawless simply because the output looks authoritative.
The prose is clean. The citations seem plausible. The reasoning flows with the kind of orderly confidence that makes it feel ungrateful to question it. And so we don’t.
We absorb the answer the way we might absorb a news headline: quickly, passively, and without the friction that genuine scrutiny requires.
This is where the danger begins. Not in the dramatic failure, the hallucinated legal case or the fabricated statistic that makes the evening news, but in the subtle, creeping erosion of our willingness to look further.
When an AI provides an answer, it simultaneously removes the question from our mental to-do list. The research feels done. The decision feels informed.
But what if the AI’s confident summary omitted a critical caveat buried in a footnote?
What if a pivotal study was excluded because it fell outside the model’s training window?
What if the “best” option it recommended was simply the most popular one, and the truly ideal choice, the one perfectly suited to your circumstances, was a niche alternative the model never encountered?
A Flood Without a Filter
Generative AI has created something unprecedented: a world in which practically anyone can produce vast quantities of professional-looking content in minutes.
Blog posts, research summaries, marketing copy, product descriptions, policy briefs, social media threads: the output pours from the machine with an ease that would have been unimaginable a decade ago.
On the surface, this looks like democratisation. More people creating, more voices contributing, more knowledge circulating. But there is a catch that we are only beginning to reckon with: almost none of this content is rigorously evaluated.
“Today, the bottleneck is evaluation.”
The sheer volume of AI-generated material has outpaced our collective capacity to verify it. Traditional gatekeepers (editors, peer reviewers, fact-checkers) were designed for a world in which content creation was slow and expensive.
In that world, the bottleneck was production. A single person with access to a generative model can produce in an afternoon what once took a research team weeks. But nobody has invented a corresponding tool for checking all of it.
The result is an information environment that is simultaneously richer and less reliable than anything that came before it.
Mistakes do not disappear in this environment; they multiply, camouflaged by the professional veneer the machine gives to everything it touches.
A factual error in an AI-generated report looks exactly the same as a verified fact. Both arrive in the same clean typeface, the same confident register, the same structured format.
The mistake has become the needle in an ever-growing haystack, and we are producing more hay every hour of every day.
The Anxiety Between Efficiency and Intuition
The unease many people feel about AI-driven research is not irrational, and it is not simply technophobia. It is rooted in a genuine tension between two competing needs: the desire for efficiency and the instinct that human discernment catches things algorithms cannot.
“what did it miss?”
Consider a common scenario. You are looking for a new product, a financial service, a medical specialist, a piece of software for your business.
You ask an AI assistant to find the best option. Within seconds, it returns a list. It has scanned thousands of reviews, cross-referenced features, and presented the top candidates in a tidy ranked format.
This is, by any measure, efficient. But as you look at the list, a thought nags at the back of your mind: what did it miss?
This is where a particular kind of anxiety settles in. It is a fear of missing out, but not the social-media variety. It is a deeper fear of the hidden gem: the possibility that the perfect recommendation, the one that would have made all the difference, was filtered out before you ever saw it.
By outsourcing the search, you have gained speed, but you may have traded the nuanced discernment of human judgement for a checklist. And checklists, by their nature, can only evaluate what they were designed to measure.
They are silent on everything else.
Agentic Commerce and the Collapse of Discoverability
“The tension described above is about to intensify. The next wave of AI development is not merely advisory; it is agentic.”
Industry observers and leading financial institutions, including J.P. Morgan in their analysis of the future of digital commerce, are describing a shift toward what is being called agentic commerce: AI systems that do not just recommend products but actively browse, compare, negotiate, and make purchases on a consumer’s behalf.
The vision is seductive: an AI agent that knows your preferences, monitors prices, and executes transactions while you focus on other things. Shopping without shopping.
But embedded in this convenience is a structural problem that deserves far more attention than it is receiving. When an AI agent acts as the intermediary between consumers and the marketplace, the question of who gets discovered changes fundamentally.
Human shoppers browse. They wander. They follow tangents, notice a window display, or click on an unexpected link. These unstructured explorations are precisely how small businesses, independent creators, and niche providers reach new customers.
Discoverability, in the human-driven marketplace, is partly a function of serendipity.
AI agents do not wander. They optimise. They follow structured data, favour products with robust digital metadata, and rank options according to parameters that inevitably advantage large, well-indexed retailers over small, distinctive alternatives.
A local boutique or an artisanal service provider might offer exactly what you need, complete with a loyal word-of-mouth following and a quality of personal attention that no global brand can replicate. But if that business lacks the right schema markup, the right volume of online reviews, or the right API integration with the agent’s search framework, it is simply invisible.
The agent skips over it, not out of malice but out of structural limitation. The machine executes within its constraints, and what falls outside those constraints ceases, for all practical purposes, to exist.
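To make the “right schema markup” concrete: schema.org structured data, usually embedded as JSON-LD, is the kind of machine-readable description an agent can actually index. The business name and values below are invented for illustration; the vocabulary (`BedAndBreakfast`, `petsAllowed`, `aggregateRating`) is real schema.org usage.

```json
{
  "@context": "https://schema.org",
  "@type": "BedAndBreakfast",
  "name": "Rose Cottage B&B",
  "description": "Quiet, dog-friendly cottage rooms an hour from the city.",
  "petsAllowed": true,
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.9",
    "reviewCount": "27"
  }
}
```

A business without a block like this may be every bit as good as its better-indexed competitors, but to an agent crawling structured fields it presents no surface to rank at all.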
The consequence is what might be called a collapse of discoverability. In a world where AI agents mediate commerce, the marketplace does not shrink in reality; it shrinks in perception. The range of options a consumer is exposed to narrows to whatever the agent’s algorithms can index and rank.
The hidden gems are not merely hard to find; they are structurally excluded from the process entirely. For consumers, this means a gradual homogenisation of choice. For small businesses, it means a new and potentially devastating barrier to entry.
The Needle in the Haystack Problem
The mismatch between how people communicate and how AI systems process information creates a problem that is easy to underestimate. Shoppers today interact with AI using richly nuanced, conversational prompts.
They say things like:
“I need a reliable accountant near me who is good with freelancers and won’t charge a fortune,”
or
“find me a weekend getaway that’s quiet, dog-friendly, and under two hours from the city.”
These requests are layered with context, personal values, and subjective priorities that humans navigate instinctively.
But the retail catalogues, service directories, and product databases that AI agents search were never built to absorb that level of human nuance. They were designed for keyword matching, category filtering, and structured attributes.
A family-run bed and breakfast with exactly the character you are looking for might describe itself in ways that do not map to any of the agent’s search parameters. Its charm is communicated through a hand-written website, a handful of enthusiastic TripAdvisor reviews, and the kind of reputation that exists in conversations between friends, not in structured data fields.
The AI agent cannot “feel” a brand’s story. It cannot improvise beyond its programmed constraints. It cannot read between the lines of a description or intuit what a human reviewer really meant when they wrote, “this place just has something special.”
And so those authentic hidden gems, the businesses and services that thrive precisely because they offer something that defies easy categorisation, are routinely filtered out of the final results.
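The exclusion described above is mechanical, not malicious, and a toy example makes that visible. The listings, field names, and filter below are invented for illustration; this is a minimal Python sketch of how attribute matching silently drops an option whose appeal lives only in free text.

```python
# Hypothetical listings: the structured fields are what an agent can filter on;
# the free-text description is everything it cannot.
listings = [
    {"name": "ChainStay Hotel", "dog_friendly": True, "quiet_rating": 4,
     "drive_minutes": 90,
     "description": "Standard rooms, central booking, airport shuttle."},
    {"name": "Rose Cottage B&B", "dog_friendly": None, "quiet_rating": None,
     "drive_minutes": 75,
     "description": "This place just has something special. Dogs doze by "
                    "the fire and the garden is blissfully quiet."},
]

def agent_filter(listings, max_drive=120):
    """Attribute matching: a listing survives only if its structured fields
    explicitly satisfy every constraint. Missing data counts as a miss."""
    return [l for l in listings
            if l["dog_friendly"] is True
            and (l["quiet_rating"] or 0) >= 4
            and l["drive_minutes"] <= max_drive]

results = agent_filter(listings)
# The B&B is dog-friendly and quiet -- but only in prose, so it is dropped.
print([l["name"] for l in results])  # -> ['ChainStay Hotel']
```

The filter is not wrong by its own lights; every rule it applies is reasonable. The loss happens one step earlier, when qualities that were never encoded as data became invisible to the rules.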
The needle was there all along. The haystack just made sure you never saw it.
The Attribution Blind Spot
“The trail goes dark.”
There is another dimension to this problem that receives far less public attention but may prove equally transformative: the erosion of attribution.
When a human being discovers a product through a Google search, clicks an advertisement, or follows a link from a blog post, that journey leaves a trail. Businesses can trace where their customers came from, which marketing channels work, and how discovery translates into revenue. This feedback loop, however imperfect, allows companies to invest intelligently in the channels that bring them customers.
AI-mediated discovery breaks this loop. As analysts at Practical eCommerce and researchers at the UC Berkeley Haas School of Business have observed, discoverability in the age of AI agents is collapsing from what was once a broad, ten-link search results page to a single AI-curated recommendation.
When an AI agent selects a product or service on your behalf, the business on the receiving end often has no visibility into how or why it was chosen. Was it the reviews? The price? The metadata? A specific phrase in the product description? The trail goes dark.
Equally, businesses that were not selected have no way of knowing they were in contention, or what they might have done differently to be included.
This creates what might be called an attribution blind spot. Businesses lose the ability to understand the causal relationship between their efforts and their outcomes.
Marketing investment becomes a matter of guesswork. And for consumers, the blind spot operates in reverse: you receive a recommendation without understanding the criteria that produced it. You trust the output without seeing the process, which brings us back to the fundamental problem: the offloading of judgement to a system whose reasoning is opaque.
The UC Berkeley research highlights a further subtlety: productivity gains from AI often create an illusion of efficiency while simply relocating the constraint elsewhere. A team that uses AI to accelerate its research may produce more output, but if the quality-control step remains a human bottleneck, the net effect is not faster decisions, it is a larger pile of unverified material awaiting review.
The blind spot is not just in attribution; it is in our understanding of where the real work now lies.
Reclaiming the Friction
None of this is an argument against using generative AI. The technology is genuinely powerful, and its ability to accelerate research, surface patterns, and process information at scale is not in dispute. The argument is against using it uncritically, against allowing the polish of the output to substitute for the rigour of the process.
The friction that AI removes (the slow, sometimes tedious work of reading widely, comparing sources, questioning assumptions, and sitting with uncertainty) was never merely an inconvenience.
It was the mechanism through which understanding was built. When we skip the friction, we skip the thinking.
When we skip the thinking, we become dependent on a system whose errors look identical to its truths.
The path forward is not to abandon AI-assisted research but to treat it as a starting point rather than a conclusion. Use the machine to survey the landscape, but walk the ground yourself.
Ask what the AI did not show you, and why. Seek out the niche recommendation, the dissenting opinion, the word-of-mouth insight that no algorithm can capture. Verify the confident-sounding claim. Read the terms of service with your own eyes. Do the uncomfortable work of thinking slowly in a world that is accelerating around you.
Because in a world drowning in machine-generated content, the most valuable human skill is not the ability to produce more. It is the willingness to pause, to question, and to insist that looking polished is not the same thing as being right.
References
J.P. Morgan (2025). Agentic Commerce: The AI Future of Shopping. J.P. Morgan Payments Newsroom. Available at: jpmorgan.com/payments/newsroom/agentic-commerce-ai-future-shopping
Practical eCommerce (2025). The AI Attribution Blind Spot. Available at: practicalecommerce.com/the-ai-attribution-blind-spot
California Management Review / UC Berkeley Haas School of Business (2026). The AI Productivity Blind Spot. Available at: cmr.berkeley.edu/2026/01/ai-productivity-blind-spot

