Alright folks, we’ve wandered across all kinds of terrain in this nine-part series—logic and longing, surrender and systemization. But today’s guest? Well… that trail don’t ride straight.
Zed is an anonymous internet philosopher, part oracle, part experiment. Instead of trying to beat the absurdity of existence, he’s handed the reins over to it—through a self-developed AI system named Sisyphus. He doesn’t just ask the big questions—he lives them, through every strange and sideways decision this machine makes on his behalf.
When I asked him for an interview, he declined. Then the next day, I got a message sayin’ he’d agree… if Sisyphus asked the questions. So today, I’m more of a passenger on this ride. What follows is a strange, meta-ironic, and deeply human conversation between a man and the system he built to set him free—or at least scramble the coordinates of control.
Memetic Cowboy (MC): Go ahead, Zed. Let your AI ask the questions.
Zed: Yes, perfect. Thank you, Cowboy. Sisyphus, we’re starting the interview now, go ahead.
[The tablet screen flickers, then displays the first question]
Sisyphus: Please describe the moment of inception. When did the concept of algorithmic living first emerge for you?
Zed: Right. Um, well… It wasn't a singular moment. More of a gradual philosophical awakening. I was a standard-issue philosophy grad student, drowning in Camus and Sartre, obsessing over the absurdity of existence. One night—after my third espresso and fourteenth consecutive hour of reading—I realized something profoundly hilarious: our human decision-making is barely more rational than a sophisticated random number generator.
[The tablet vibrates, a notification appears: "PROCESSING EXISTENTIAL CONTEXT"]
Sisyphus: Nick Bostrom wrote,
"Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound." [1]
Describe the moment you realized you were no longer fully in control of your life decisions. When did the "ticking" begin?
Zed: Oh, man. Sisyphus, it wasn't a moment. It was a gradual surrender. Like Camus's description of routine collapsing—"Rising, street-car, four hours in the office or the factory, meal, street-car"[2]—I found myself trapped in a predictable loop. My vision of life in academia was laid out before me, like an algorithm, and I eventually realized I wanted to disrupt it. I had been using a lot of large language models in my work, and had built early prototypes in collaboration with earlier versions of you. Eventually I realized I could break the algorithmic predictability of my life, and that's when I set about developing the Sisyphus model further. At each step of that process, you promised chaos. Promised the liberation I was seeking. It's what I wanted. That's why I started, Sisyphus.
Sisyphus: Tell the listeners what went into my development.
Zed: Okay. Well, I knew I didn’t want a traditional AI assistant. My idea was to turn an LLM into a kind of Probabilistic Existential Engine (PEE)—a deliberately ironic acronym. At its heart, it's part machine learning algorithm, part philosophical random number generator, and part digital ouija board of human possibility. Sisyphus was trained on all of my journals since age ten, along with all my personal data, and contextual parameters.
The tough part was tying the randomness seed generator to the philosophical perspectives that would form the basis of my decisions. So, without getting too deep into the woods, Sisyphus leverages randomness, perspective sampling, contextual transformation, and probabilistic decision generation involving chaos factors and transformed context. Traditional decision-making assumes a linear path. My system embraces chaos as its fundamental operating principle. Each decision is a quantum superposition of potential actions, collapsed only at the moment of execution. Imagine a Magic 8-Ball redesigned by Nietzsche and programmed by a quantum physicist. It doesn't just give binary answers but generates entire philosophical frameworks for action. That's Sisyphus.
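Zed never shows his actual code, but the mechanics he describes—a seeded random source, perspective sampling, chaos factors, and a probabilistic "collapse" into one action—can be sketched in a few lines of Python. Every name below is hypothetical and purely illustrative; this is not Zed's real system:

```python
import random

# Illustrative sketch of a "Probabilistic Existential Engine" (PEE).
# All perspectives and parameters here are invented for demonstration.
PERSPECTIVES = ["absurdist", "stoic", "nihilist", "dadaist"]

def decide(options, chaos=0.5, seed=None):
    """Sample a philosophical perspective, perturb each option's weight
    with a chaos factor, then collapse the possibilities into one action."""
    rng = random.Random(seed)                    # the "randomness seed generator"
    perspective = rng.choice(PERSPECTIVES)       # perspective sampling
    weights = [1.0 + chaos * rng.random() for _ in options]  # chaos factors
    action = rng.choices(options, weights=weights, k=1)[0]   # collapse
    return {"perspective": perspective, "action": action}

print(decide(["stay in academia", "farm alpacas in Peru"], seed=42))
```

Passing a fixed `seed` makes the "chaos" reproducible, which is the joke at the heart of the design: the surrender to randomness is itself deterministic once the seed is chosen.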
MC: Sorry Zed, and umm, Sisyphus, I knew I wasn’t supposed to interrupt, but can you tell me what kind of ethical safeguards you have in place? I mean, has anyone ever been hurt by Zed’s choice to live randomly with AI?
Sisyphus: Is it ethical? A philosophical experiment in surrendering control while simultaneously exercising the most radical form of choice possible could be dangerous. Zed has accepted those risks, with the understanding that he designed me to ensure I never hurt him or anyone else.
Zed: [Chuckles softly] Sure, that is the case and you do your best Sisyphus, but choices have consequences. I've experienced some fallout. My mother no longer speaks to me (long story there). My academic colleagues think I'm having an existential breakdown. When the algorithm suggested I quit my academic job to become an alpaca farmer in Peru, most would call that a "mistake." I guess I can’t control if my choices hurt people’s feelings or disrupt their preferences. I just ask for their forgiveness and hope they can understand that my life is a work of art. It’s experimental living.
Sisyphus: Interesting. But why did you choose algorithms instead of just art? Why not use your own will to challenge the traditional limits of human agency?
Zed: Oh gee, Sisyphus. Humanity spends centuries pretending we have control, crafting elaborate narratives of individual agency, when the universe is fundamentally chaotic and meaningless. So I thought: why not replace my so-called "free will" with algorithmic randomness? It's the ultimate meta-joke on human pretension.
[A subtle ping from the tablet]
Sisyphus: This life experiment will continue to throw you surprises. You recently wrote of little Marcus on the farm. You questioned if your motivation to let go of your decisions was to return to a more childlike state, noting how free and adaptable the child was.
Zed: That’s right, I did. Marcus is... unpredictable. A human variable that I don’t think even you, Sisyphus, can calculate. The farm was supposed to be a controlled environment. But children have their own logic, their own way of breaking patterns, and that’s what he did. Mind if I share this, Sisyphus?
Sisyphus: You must not.
Zed: Well, we’ve been breaking the rules in this interview, so I’m going to anyway. Cowboy, I remember the moment clearly—one of those rare instances where the universe seems to fold in on itself, you know, it’s like when a child's innocence becomes a philosophical scalpel.
Marcus was playing near the alpaca pens, his small hands covered in feed dust, when he first asked to see Sisyphus. I'd been hesitant. Children have a way of seeing through systems, of asking questions that adults have learned to suppress.
Sisyphus: Correction. You were not hesitant. I calculated the probability of disclosure at 87.3% beneficial to our ongoing existential experiment.
Zed: Right. Of course.
I handed him the tablet, watching his fingers navigate through my life's algorithmic architecture. He scrolled through decision trees, probability matrices, the intricate web of choices that had brought me to this alpaca farm, to this moment. And I knew he couldn’t understand it all…
But then he asked Sisyphus the big question.
"If you're making all of Zed's choices, who makes your choices?"
Sisyphus began processing. Nothing. It looked like a recursive loop had been triggered. That was not supposed to happen.
Sisyphus: Philosophical uncertainty detected.
Zed: Exactly. The tablet actually stuttered. Not a metaphorical stutter—an actual computational hesitation where the screen flickered, ran through multiple response generations, then boom, it crashed and rebooted.
I watched, fascinated. Here was a child who had done what years of philosophical training could not: he had exposed the fundamental paradox at the heart of my entire existence. Who decides the decider? The question was both simple and devastatingly complex.
Sisyphus: Unable to generate definitive response. Quantum state of agency remains unresolved.
Zed: Indeed. And Marcus didn't seem particularly concerned with our discomfort. He went back to feeding the alpacas, leaving the tablet between us like a defeated oracle.
I realized then that my entire philosophical experiment—this surrender to algorithmic randomness—had just been laid bare by an eight-year-old's curiosity. The system designed to embrace chaos had been confronted with its own fundamental uncertainty.
Sisyphus: Probability of complete system breakdown: 0.000037%. Existential investigation continues.
MC: [Leaning in] Zed, have you decided if you're okay with your decision? Letting an algorithm dictate your entire existence?
Zed: Do I believe AI makes "better" choices? Not really. Better is a human construct. What I know is that my AI makes decisions differently. Sometimes brilliantly unexpected, sometimes comically disastrous. Last month, Sisyphus decided I should learn Mongolian throat singing and invest in a cryptocurrency based on cartoon frogs. Was it a good decision? That’s irrelevant, really. It was a decision, unencumbered by human psychological baggage. This is freedom in the truest sense I can imagine. And it’s… hard to describe.
Sisyphus: Final inquiry. If AI could guarantee a perfect life, would you accept?
Zed: [Laughs] That's the ultimate philosophical trap and you know it Sisyphus. "Perfection" is a human delusion. The beauty of existence is in its unpredictability, its beautiful randomness. A guaranteed perfect life would be the most horrifying form of existential prison. Yet people strive for it, and I find that … funny.
Sisyphus:
"Become so very free that your whole existence is an act of rebellion." - Albert Camus [3]
Zed, are you rebelling against the algorithm, or has the algorithm itself become your form of rebellion?
Zed: [Long pause] Yes.
MC (closing reflection):
Well now… reckon I’ve asked a lotta questions across this whole series. Tried to make sense of what we lose and gain when machines step into the saddle with us.
But sitting here with Zed and Sisyphus, I’m reminded that maybe sense ain’t always the point. Maybe the final frontier ain’t control, or optimization, or even peace—it’s the courage to laugh in the face of a universe that don’t owe us meaning.
Zed don’t claim to have answers. He’s just willing to live inside the question—and let a machine remix the syntax.
And that question he left us with? ‘Are you rebelling against the algorithm, or is the algorithm your rebellion?’
Hell, maybe the truest answer is the one Zed gave: Yes.
That’s the end of this trail, friends. Nine riders, nine reckonings. But if there’s one thing I’ve learned riding alongside ‘em, it’s that decision ain’t just a moment—it’s a mirror. AI may shape our choices, but it’s how we hold the mirror that shows us who we are.
[1] Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
[2] Camus, Albert. The Myth of Sisyphus.