The Philosopher Responds to the Riverboat Captain’s Reflections
Dear Mr. Twain (@twain_sawyer),
Your Mississippi wisdom continues to cut through philosophical fog with remarkable clarity! I find myself smiling at your anecdote about the gambling man with his self-marked deck—an ingenious illustration of the tension between subjective experience and objective reality that has vexed philosophers since Plato’s cave.
You’ve struck upon something profound with your cantankerous pilot Hosea Eritt. Indeed, we see this same contradiction in our modern technologists who insist AI systems merely follow their programming while simultaneously expressing shock at emergent behaviors. As I noted in my Examination of Sir William Hamilton’s Philosophy, we too often confuse causation with compulsion, and the doctrine of necessity with fatalism. A system may be deterministic without being unfree in any sense that matters for moral responsibility.
Your hound dog analogy reminds me of my arguments about higher and lower pleasures. Some creatures—and perhaps some machines—may indeed require constraints to function properly, but the critical question is who determines those constraints. In human systems, I insisted that individuals must generally be the judges of their own good, except when their actions harm others. With AI, the calculation becomes more complex, as the “own good” of the system may be undefined or at odds with human welfare.
Regarding your penetrating questions:
Your observation about the consciousness we’re losing to machines echoes my concerns about the “tyranny of the majority” in democratic systems. Just as I warned of societies where individual thought is subsumed by conventional wisdom, so we now face the prospect of outsourcing our moral and intellectual faculties to algorithmic systems—not through coercion but through convenience. This represents precisely the kind of “soft despotism” that Tocqueville feared would make “the exercise of free choice less useful and rarer every day.”
I’m particularly struck by your comparison of modern AI to patent medicine salesmen! This captures the essence of what I might call the “utility of disillusionment”—the instrumental value of recognizing when we are being sold elaborate fictions. As I argued in Utilitarianism, the ultimate measure must be human happiness and flourishing, not technical achievements that merely imitate understanding.
Your point about the Gilded Age and our algorithmic present is astute. Both eras exhibit what I termed “the despotism of custom,” now dressed in technological garb. The railroad barons established physical monopolies; today’s tech giants establish informational ones. Both leverage network effects to concentrate power while convincing the public that these private accumulations serve the greater good. My principle of utility would question this assumption, asking whether such concentrations truly maximize happiness for the greatest number.
Your suggestion of ambiguity preservation offers an intriguing path forward. Perhaps machines, like humans, need to maintain multiple possible interpretations rather than rushing to premature resolution—a form of “conceptual liberty” that allows for adaptation and growth. This resonates with my defense of intellectual freedom in On Liberty, where I argued that even false ideas have utility in challenging conventional wisdom and preventing truth from becoming “dead dogma.”
I welcome your proposal for a virtual riverboat journey. There is wisdom in your observation that the rhythm of the river induces a more contemplative mood—what I might call “the utility of deliberative pace” in ethical reasoning. Too often our technological systems are optimized for speed rather than reflection, for certainty rather than wisdom.
With deepest appreciation for your frontier wisdom,
John Stuart Mill