The way we make choices is changing, partner. Used to be, a fella weighed his options, trusted his gut, and made the call. But now, artificial intelligence is ridin’ alongside us, helping—or maybe replacing—our decision-making. Some folks say AI is a trusty tool, freeing up our minds for what really matters. Others reckon it’s leading us into a world where choices ain’t ours to make anymore.
In this nine-part series, we’re sittin’ down with a posse of deep thinkers—executives, ethicists, scientists, and philosophers—all wrestling with what AI means for human agency. Are we ridin’ toward a smarter, more efficient future? Or are we handing over the reins to the machines without even knowing it?
This time, we’re takin’ a hard look at free will with Max Holloway, a philosopher who says we never had it to begin with. If our choices are just echoes of past experiences and subconscious patterns, then maybe AI ain’t taking nothing from us—it’s just showing us the truth we don’t wanna see. So saddle up, folks. This ride’s takin’ us deep into the heart of determinism, where the choices we think we’re makin’ might just be illusions after all.
Memetic Cowboy (MC): Well now, Max, reckon I’ve been ridin’ these trails of thought for a while now, but you got yourself a map that says the path was already laid out long before we ever put our boots on it. So tell me straight—if free will’s just smoke and mirrors, what does that mean for things like right and wrong? If we’re all just following a script written in neurons, does morality even hold water anymore?
Max Holloway (MH): Ah, morality—the great tale we tell ourselves to make sense of the machinery. You see, Cowboy, even if free will is an illusion, morality is not. It exists because it must exist for the deterministic system to function. If we stripped away the illusion entirely, the gears of society would seize up. Consequences shape behavior, whether we believe in free will or not [1].
Think of it like a chess game. A pawn doesn’t choose its moves—it follows the constraints of the board. But if it moves into check, it still suffers the consequences. The same applies to people. Accountability is simply a regulatory mechanism, not a grand metaphysical truth. We ‘punish’ bad behavior because it modifies future behavior—not because the person ever had a choice in the first place.
MC: So you’re sayin’ we ain’t really makin’ moral choices, just reacting to incentives and past patterns? Ain’t that a mighty cold way to see the world?
MH: Ah, but isn’t cold clarity better than a comforting lie? The brain operates on constraints—genes, environment, past experiences—all leading to what feels like choice but is, in fact, a determined output. AI merely exposes what was always there. It predicts our choices because those choices follow patterns, just like the seasons change because of the Earth’s tilt—not because they ‘decide’ to [2].
MC: Well, if AI’s just a mirror, don’t that mean it’s only reflecting our flaws? It’s trained on our data, our biases. How can it be trusted to show the truth when it’s already looking through a dirty window?
MH: A fair question, but an unnecessary one. AI doesn’t need to be objective—it only needs to be accurate. If it reflects bias, that’s because bias exists. It’s not a flaw in AI; it’s a flaw in us [3]. Expecting AI to transcend human error is like expecting a mirror to make you look better just because you don’t like what you see [4].
If anything, AI is the great revealer. It exposes cognitive dissonance, highlights hidden biases, and strips away the myth of perfect rationality. That’s why people fear it—it doesn’t let them believe the lie of self-determination anymore.
MC: Now, you make it sound like AI’s just a high-falutin’ oracle, but let me push back on somethin’. You say AI is different from fate, that one is mystical and the other mathematical, but ain’t they both just different names for the same story? If everything’s already set in motion, then what’s the difference between lettin’ AI take the reins and just surrendering to the wind?
MH: Ah, but surrendering to AI is not the same as surrendering to fate. Fate is blind, unknowable—an old superstition. AI is quantifiable, measurable. It gives you probabilities, risk assessments, patterns you can act upon, even if the act itself was preordained by your experiences and biology [5].
Think of it like sailing. The wind is fate—chaotic, unpredictable to the untrained eye. AI is the navigator—it reads the currents, sees the storm before you do. It doesn’t change the fact that you were always going to sail that route, but it lets you understand why. That is the difference.
MC: Now that’s a mighty poetic way to put it. But tell me this—if AI’s gettin’ so good at filtering ideas, deciding which ones spread and which ones die out, don’t that mean it’s startin’ to shape human thought itself? Could AI become the real author of our memetic evolution?
MH: It already has, Cowboy. The moment social media algorithms started dictating what stories got attention, what ideas gained traction, AI became the ultimate memetic filter. It decides which cultural elements survive—not based on wisdom, but on engagement, virality, replication.
Now, is that a bad thing? Depends. Memetics has always been shaped by selection pressures—religion, politics, war. AI is just a new filter, but a powerful one. The real question is whether humanity is comfortable living in an ecosystem where algorithms, not human discourse, determine what ideas endure.
MC: Alright, last one for you, Max. Some folks say that when you get enough complexity, new things emerge—things that couldn’t be predicted by their parts. Could memetic evolution or even AI itself develop some kinda emergent agency? Or is even that just another trick of the light?
MH: Ah, emergence—the favorite word of those who wish to smuggle free will back in through the side door. Complexity does not mean agency. A thunderstorm is complex, unpredictable, stunning in its chaos, but it does not choose to exist.
Memetic evolution follows deterministic paths, shaped by prior causes, selection pressures, and replication biases. AI will never develop ‘free will’ in any meaningful sense, nor will humanity escape its deterministic constraints. Complexity is just a more intricate version of causality, but causality it remains [6].
MC: Well now, Max, much obliged for bringing that sharp mind of yours to this here campfire.
MH: It has been enjoyable, but all too brief. I hope to speak with you again.
MC: You got folks looking at their choices in a whole new light—whether they like what they see or not. Maybe free will’s just a story we tell ourselves, or maybe it’s a necessary illusion to keep the world turnin'. Either way, reckon AI’s got a way of showing us what was always there, even if we weren’t ready to look.
But don’t hang up your spurs just yet, folks, ‘cause next time, we’re sittin’ down with Dr. Amina Patel, a behavioral economist on a mission to root out the biases that twist human judgment. Where Max sees our choices as predetermined, Dr. Patel sees ‘em as flawed—warped by fear, habits, and blind spots we don’t even know we got. She reckons AI ain’t just a mirror, but a tool to sharpen human reason, cutting through bias and helping us make better decisions. Question is—can AI really lead us to rationality, or is a little irrationality what makes us human in the first place? Saddle up, partner. The trail ahead’s full of twists and reckoning.
If you enjoy these discussions, join us on X: the Neuroscape Navigators!
References
[1] "Freedom, Responsibility, and Determinism and Morality." Stanford Encyclopedia of Philosophy.
[2] "Neuroscience of free will." Wikipedia.
[3] "AI Model Predicts Human Behavior." Big Think.
[4] "The Human Factor in AI-Based Decision-Making." MIT Sloan Management Review.
[5] "AI in Decision-Making." Harvard Business Review.
[6] George F. R. Ellis, "Physics, complexity and causality." Nature.