Is every civilization destined to rise… only to eventually fall?
Is there such a thing as a perfect form of government that can stop that fall?
Why do some civilizations thrive for centuries, while others vanish without a trace?
These are the kinds of questions that have haunted humanity for thousands of years.
And more than two thousand years ago, they deeply fascinated a Greek historian named Polybius.
He wasn’t just some dusty scholar tucked away in a library. Polybius had a front-row seat to history. Exiled to Rome as a political hostage, he didn’t waste away in a prison cell. Instead, he found himself living among Rome’s elite — right in the heart of a rising empire. Rather than just observing history, he tried to make sense of it. And in doing so, he asked one of the most powerful questions in political thought:
Can there be anyone so apathetic or incurious as to have no desire to learn by what means, and under what form of government, the Romans conquered nearly the whole inhabited world and brought it under their sole rule in a span of barely 53 years?
Like many ancient thinkers, Polybius believed civilizations behave like living organisms, passing through a natural life cycle: birth, growth, maturity, stagnation, decline, and death. Philosophers like Plato and Aristotle had explored similar ideas, but Polybius took it a step further. He laid out a detailed theory — a repeating cycle of political stages — called Anacyclosis.
Anacyclosis: The Political Life Cycle
Stage 1 → No Political Structure
In the beginning, early humans had no political system. No kings, no councils — just small groups struggling to survive in a lawless world. Decisions were made by necessity or mutual understanding, with no long-term leadership.
↓
Stage 2 → Kingship
Eventually, from this power vacuum, a strong and capable leader emerges — someone brave and wise enough to bring order. He earns the people’s trust, protects them, and builds structure. In gratitude, they grant him authority. Kingship is born.
But over time, the king’s descendants inherit power without earning it. Arrogance creeps in. Justice fades. Kingship begins to rot.
↓
Stage 3 → Tyranny
Kingship decays into tyranny. Rulers start to govern through fear and cruelty, not wisdom. They exploit the people for personal gain. Discontent simmers.
↓
Stage 4 → Aristocracy
Eventually, the noblest and wealthiest citizens overthrow the tyrant. They don’t install a new king. Instead, they share power among themselves. An aristocracy is formed.
In its early days, this elite class governs with the public’s interests in mind. But their children grow up pampered, entitled. The noble mission gives way to selfishness.
↓
Stage 5 → Oligarchy
Aristocracy slips into oligarchy — the selfish rule of a powerful few. Power is hoarded, and inequality deepens. The people suffer under neglect and oppression.
↓
Stage 6 → Revolution → Democracy
Eventually, the people have had enough. They rise up, overthrow the oligarchs, and swear never to give unchecked power to the few again. They decide that power must lie with the people. Thus, democracy is born — founded on freedom, equality, and collective voice.
For a time, things improve. Prosperity returns. But nothing lasts forever.
↓
Stage 7 → Democracy Corrupted → Mob Rule
Future generations, born into rights they never fought for, begin to take them for granted. Division spreads. Greed grows.
Charismatic leaders — demagogues — rise. They speak the people’s language but serve only themselves. They inflame anger, manipulate fear, and break down reason.
Democracy unravels. Chaos takes hold.
↓
Stage 8 → Anarchy → Rise of a New Strongman
Out of the chaos, either anarchy reigns — or a new strongman takes control. He promises order, restores discipline, and begins the cycle anew.
And so, the wheel turns once again.
This is Anacyclosis — Polybius’s theory of a repeating cycle in which each form of government inevitably decays into its corrupted version. Monarchies become tyrannies. Aristocracies turn into oligarchies. Democracies dissolve into mob rule.
It sounds dramatic, but when you look at history, the pattern… kind of checks out.
Take Athens, the crown jewel of ancient Greece. Legend says it began under wise kings like Theseus — the same guy who defeated the Minotaur. Over time, Athens grew wealthier and stronger but fell into the hands of tyrants. Eventually, the people rose up and handed power to aristocrats.
After further political evolution, Athens developed into a direct democracy during the 5th century BCE — the era of its greatest cultural and military might.
But then came corruption. Demagogues, pretending to be men of the people, took power and made reckless decisions. The result? A crushing defeat in the Peloponnesian War and the slow death of the Athenian empire.
In the next century, all of Greece ended up ruled by kings again — Alexander the Great and his successors. Full circle. Back to monarchy.
So why did Rome break the cycle — at least temporarily?
That’s the puzzle Polybius tried to solve in his famous Histories. How had Rome conquered the Greek world so easily?
His answer? Rome didn’t get stuck on the Anacyclosis wheel like everyone else.
Rome began with kings, like most civilizations. But at the end of the 6th century BCE, the Romans overthrew their monarchy and built a Republic — run by aristocrats at first, but gradually expanded to include ordinary citizens. And by the 2nd century BCE, during Polybius’s lifetime, the Roman Republic had evolved into something entirely different: a mixed constitution.
Rome’s system blended elements of monarchy (executive magistrates), aristocracy (the Senate), and democracy (popular assemblies and elected officials). Each branch could check the power of the others.
As Polybius wrote:
“Rome’s constitution has three branches, each with its own political power. These powers are distributed and balanced so carefully that you can’t say for sure whether Rome is a monarchy, an aristocracy, or a democracy.”
In modern terms, we’d call this a system of checks and balances. Each branch depended on the others, and none could dominate without being restrained.
He explained further:
“If one branch tries to overstep its bounds, the others can block or restrain it. This balance keeps the whole system stable.”
To Polybius, this was Rome’s secret sauce. They hadn’t abolished the political cycle — they’d transcended it by blending the strengths of all forms of government.
But here’s the twist.
Polybius was writing at the height of the Roman Republic — when it still looked like it might last forever.
History, though, had other ideas.
Just about a hundred years later, in the 1st century BCE, Rome’s Republic imploded. And guess what? It collapsed almost exactly the way Polybius had warned.
Discontent was everywhere — among veterans, allies, poor citizens, and even parts of the elite. They all felt cheated out of the rewards of Rome’s success.
Into this chaos stepped power-hungry figures: Marius, Sulla, Pompey, and finally Julius Caesar. They manipulated the system, built personal armies, and turned political conflict into full-blown civil war.
And then came Octavian — Caesar’s adopted heir. He crushed his rivals, took total control, and rebranded himself as Augustus. With him, the Republic died — and the Empire began.
So did Polybius get it wrong?
Not really.
He had just underestimated one thing: nothing lasts forever.
Even Rome, with its clever mixed constitution, couldn’t escape the wheel forever. It just delayed the inevitable.
And that brings us to today.
Polybius’s theory doesn’t just feel ancient — it feels timeless.
Take Nepal, for example. In just a few decades, it transitioned from an absolute monarchy to a constitutional monarchy, fell into civil war, flirted with dictatorship, and eventually became a democratic republic in 2008.
Yet, even now, the system faces instability, infighting, and disillusionment.
The wheel turns.
So maybe Polybius wasn’t just talking about Rome.
Maybe he was talking about us — about human nature, about power.
We like to think history moves in a straight line — always forward. But maybe it’s a circle.
Maybe it’s a story we keep rewriting. Different names, different systems, different flags…
But the same patterns.
Call it history.
Call it politics.
Call it what it really is:
The oldest game we still don’t know how to stop playing.
The ancient Greek philosophy of Stoicism has acquired a new generation of acolytes. USC Dornsife philosophy professor Ralph Wedgwood explains its appeal.
In 2012, Penguin Random House sold 12,000 copies of Marcus Aurelius’ Meditations, reflections shaped by the ancient Greek philosophy of Stoicism. In 2019, the book sold 100,000 copies.
YouTube channels devoted to “Modern Stoicism” have millions of subscribers, and Silicon Valley tech millionaires expound its wisdom. What prompted a 2,300-year-old philosophy to stage a comeback in such spectacular fashion?
It may be that Stoicism’s ancient framework for managing emotions feels particularly relevant for navigating modernity’s crises. Our phones buzz ceaselessly with alarm about rising authoritarianism, the threat of nuclear war, or AI’s impending takeover, yet responding constructively to all of these disasters feels impossible.
Enter Stoicism, which urges you to ignore the rage bait, put down the phone and think more constructively. “Men are disturbed not by things, but by the views which they take of them,” says the Stoic Epictetus in his Handbook.
“Stoics think that each of us are finite, limited beings. There are a few things we can control and other things we can’t control, and we should keep track of those things and have different attitudes towards those domains,” says Ralph Wedgwood, director of the School of Philosophy and professor of philosophy at the USC Dornsife College of Letters, Arts and Sciences. “That’s the goal of life, to have this accurate understanding, and to be guided by this.”
Stoicism: The phoenix philosophy
Stoicism was born from disaster and has rerisen, rather Phoenix-like, for centuries. Around 300 B.C., a shipwreck bankrupted a merchant named Zeno and landed him in Athens, Greece. There, he began studying philosophy, eventually developing and teaching his own. He held forth at the Stoa Poikile, a columned walkway from which his acolytes, Stoics, would later draw their name.
For nearly 500 years, the philosophy held great influence in both Greece and the Roman Empire. It was eclipsed over the years by other branches of philosophical thought, and then by Christianity. A millennium passed, and Stoicism became mostly forgotten, the vast majority of its texts lost or destroyed, including those of Zeno. (Most of what remains is Roman, like the Meditations.)
In the 15th century, the Renaissance’s renewed interest in classical antiquity sent excited scholars diving into the archives to dredge up older ideas. One of these was Stoicism. The debut of the printing press in the 1440s made broad distribution of ideas easier, and Stoicism gained a more permanent cultural foothold, although its popularity would continue to wax and wane over the years.
Although Stoicism’s ascendance seems relatively recent, it has been a steadily growing, if subliminal, influence since the 1970s.
Cognitive behavioral therapy (CBT), a form of talk therapy that encourages patients to rethink their emotional reactions, was directly inspired by Stoicism. Its founder, the psychiatrist Aaron Beck, told an interviewer in 2007, “I also was influenced by the Stoic philosophers who stated that it was the meaning of events rather than the events themselves that affected people.”
CBT is now one of the most popular forms of mental health treatment. Small wonder, then, that Stoicism’s popularity has grown alongside the widespread clinical use of its philosophical relative.
Stoicism as a tool for the warrior scholar
However, unlike traditional therapy, which often conjures up visions of pastel couches and comforting Kleenex, Stoicism has a reputation for tactical, mindful hardiness.
Aurelius wrote down his reflections while planning military campaigns. U.S. Navy officer James Stockdale famously deployed Stoic teachings to help him endure years of torture and imprisonment during the Vietnam War. Stockdale turned in particular to the lectures of Epictetus, who had himself been enslaved in ancient Rome.
It’s perhaps unsurprising that its current revival has sprung up in large part from the “manosphere” of male podcasters, YouTubers and Substack writers, an association that has some pooh-poohing the revival as just a toxic return to the repression of male emotions.
Wedgwood, whose USC Dornsife courses include “The Ancient Stoics” (PHIL 416), says that’s an inaccurate understanding of the philosophy. “It’s not about tamping down feelings. For Stoics, it’s about achieving an emotional intelligence, trying to change your habits so they’re not so destructive,” he says.
Stoics criticized emotions like anger, which they regarded as misleading. They analogized the beginnings of anger to being splashed with cold water, the jolt of which makes you feel you must immediately react. “This is an illusion, that somehow revenge would fix the wrong,” says Wedgwood. “Rather than raging or fuming, you should try to have feelings that are productive. We should think of the future rather than avenging the past.”
Women will find the philosophy’s wisdom just as useful; Stoics themselves made a number of egalitarian arguments, observes Wedgwood. “The later Stoics are not social reformers, but they believed women should receive the same education as men and insisted they have the same capacities as men for courage, wisdom and self-control.”
Stoics offer “circles of concern” to guide priorities
In addition to better management of emotions, Stoics offer helpful insight into how to prioritize demands on our time and resources, says Wedgwood. Such a framework may be increasingly helpful in an era in which we’re grappling with how to best respond to the crises of the entire world.
Consider the debate over rebuilding Notre Dame: Effective altruists decried the millions spent to fund the reconstruction of the Notre Dame Cathedral, arguing that the money would have been better spent on lifesaving mosquito nets in Africa. In recent discussions around immigration, Vice President JD Vance revived St. Augustine’s notion of “Ordo Amoris” (“Order of Love”) as a guide to how we deploy our attention and resources.
Stoics have been contemplating the best way to order our priorities since Zeno himself. They proposed that humans inhabit a nested set of circles, a framework of affinity dubbed “Oikeiosis.” The innermost circle was our soul, next came one’s physical body, then various layers of family, after that one’s community, and so on, to the entirety of humankind.
Closer circles are usually given more weight, but those nearer the edge can still be valued. We may even strive to collapse some of the distance between circles, treating those in an outer ring as if they inhabited an inner one. “For the Stoics, we do not belong to just one whole, we are part of many wholes, called to serve all those many communities,” says Wedgwood.
Hannibal, Missouri, made Mark Twain, and, in turn, Twain made Hannibal famous. Few American authors are as closely intertwined with — and influenced by — their hometowns as Twain. The childhood years spent in this Missouri town gave birth to some of the most famous characters in American literature, an emotional and memory-filled well that Twain would return to again and again.
Twain came from humble origins
Samuel Langhorne Clemens was born in the tiny town of Florida, Missouri, on November 30, 1835, two weeks after Halley’s Comet made its closest approach to the Earth. He was the sixth of seven children born to John and Jane Clemens. He was a sickly youth, whose parents feared he might not survive, and the family was beset by the tragic early deaths of three of Twain’s siblings.
When Twain was 4 years old, his family moved to the Mississippi River port town of Hannibal, where John worked as a lawyer, storekeeper and judge. John also dabbled in land speculation, leaving the family’s finances often precarious. His son, who would become one of the wealthiest authors in America, would follow in his father’s financially shaky footsteps, prone as an adult to speculation and ill-advised investments that repeatedly threatened his financial security.
Jane was a loving mother, and Twain would later note that he inherited his love of storytelling from her. His father couldn’t have been more different, and Twain later claimed that he had never seen the dour and serious John smile.
His years in Hannibal would be the most formative of his life
Hannibal would be immortalized as the town of “St. Petersburg” in Twain’s works. He would write of lazy days spent in the company of a group of loyal friends. They played games and spent hours exploring the surrounding area, including a cave just outside of town that was a favorite of Clemens’ real gang of friends and that would later appear in Tom Sawyer as the cave where Tom Sawyer and Becky Thatcher nearly died.
Thatcher was based on Twain’s real-life childhood crush, Laura Hawkins. Like Twain, Hawkins had moved to Hannibal as a child, and her family lived on the same street as the Clemens family. She and Twain were schoolmates and sweethearts, and idealized versions of Laura made their way into several other Twain books, including The Gilded Age. Later in life, Twain and Hawkins rekindled their friendship, with Twain visiting with her in Hannibal and Hawkins traveling east to Twain’s Connecticut home just two years before his death.
Sawyer’s half-brother Sid was based on Twain’s younger brother Henry. The two were quite close, and when Twain began training as a riverboat pilot on the Mississippi, he encouraged Henry to join him. Tragically, Henry was killed in a steamboat explosion at the age of just 20. Twain never forgave himself, and Henry’s death haunted him for the rest of his life.
Twain said he based the character of Sawyer on himself and two childhood friends, John B. Briggs and William Bowen. But many believe that he nicked the character’s name from a hard-drinking, Brooklyn-born fireman named Tom Sawyer whom Twain had befriended in the 1860s. Like Twain, Sawyer had worked on riverboats in his youth, and the pair bonded over a series of drinking benders and gambling adventures in San Francisco, Nevada and elsewhere.
Another childhood friend was the inspiration for Huck Finn
Although Twain initially claimed to have invented the character entirely, he later admitted that Finn was based on Tom Blankenship. The son of the town drunkard, Blankenship was nonetheless idolized by the boys of Hannibal, who relished his sense of freedom and easy ways.
As Twain later wrote in his autobiography, “He was ignorant, unwashed, insufficiently fed; but he had as good a heart as ever any boy had. His liberties were totally unrestricted. He was the only really independent person — boy or man — in the community, and by consequence, he was tranquilly and continuously happy and envied by the rest of us.”
The character of Finn, first introduced in “Tom Sawyer” before getting his own book in 1884, was Twain’s most indelible creation — and his most controversial. While enormously influential and still popular more than a century after it was published, the book is also one of the most frequently banned in America, criticized for its use of coarse language, ethnic slurs and its depiction of the runaway enslaved person, Jim, which many consider racist.
The novel shows Twain dealing with the impact of American slavery
The Adventures of Huckleberry Finn was one of the first American novels to be written entirely using an English vernacular language and dialect, as Twain recalled both the sights and sounds of his youth. It was also Twain’s attempt to reconcile both the darkness and light of his Hannibal years, which were filled with happy childhood memories as well as darker ones, reflecting the realities of the often capriciously violent world of a riverboat town and the lasting effects of racism and slavery.
Twain later admitted he had grown up unquestioningly accepting slavery, before becoming an avowed advocate for Black rights later in life. Missouri was a slave state, and both Twain’s father and several Clemens family members owned enslaved people. As a young boy, Twain spent summers on his uncle’s farm, listening to stories told by its enslaved workers, including an old man named “Uncle Daniel.” Twain also drew on similar stories he heard from formerly enslaved people who worked for his sister-in-law in upstate New York after the Civil War to create his portrait of Jim, and a long-ago story of Tom Blankenship’s brother secretly helping a runaway enslaved man would inspire Finn’s relationship with Jim.
Twain’s childhood ended early
When young Twain was just 11, his father died, pushing the family to the brink of economic collapse. Twain was forced to leave school and worked a series of jobs before becoming a printer’s apprentice, where he put his burgeoning love of words into practical use by setting type. After stints working for his brother’s newspaper and other publishers in the Midwest and East, Twain fulfilled another childhood love fueled by his Hannibal days by becoming a Mississippi River boat pilot. This brief, though happy, phase of his early 20s was also where he acquired the pen name that millions would soon know him by: “Mark Twain,” a riverboat leadsman’s call marking a water depth of two fathoms, deep enough for safe passage.
Although Twain would only work on the Mississippi for a few years before the start of the Civil War, that period, like those in Hannibal before them, left a lasting impression. Twenty years after his riverboat career ended, Twain took a nostalgic journey along the river to New Orleans, inspiring much of his 1883 book, Life on the Mississippi. And as he made his way back up along the river, he made a return visit home to Hannibal, back to where it all began.
In the late 19th century, an American economist named Henry George released a book that would become a global sensation, outsold in its time only by the Bible. That book, Progress and Poverty, introduced a radical yet simple framework for solving the paradox of advancing civilization and deepening misery. This philosophy, known today as Georgism (or Geoism), argues that while individuals should own the value they produce through their labor and capital, the "economic rent" derived from land and natural resources should belong equally to all members of society.
Spanning history, economics, and social justice, Georgism remains one of the most intellectually resilient "alternative" economic theories in the modern world.
The Core Philosophy: People, Land, and Labor
At the heart of Georgism is a distinction between two types of property. The first is wealth created by human effort—such as a house, a machine, or a software program. Georgists believe that the creator should have absolute rights to this value, meaning that taxes on income, sales, and improvements are viewed as a form of "socialized theft" that discourages productive work.
The second type of property is land and natural resources. Unlike a house, no human being created the earth. Furthermore, the value of a specific plot of land—especially in a city—is not created by the owner’s effort, but by the community around it. If a government builds a new subway station or an entrepreneur opens a popular park nearby, the value of the surrounding land skyrockets. Under current systems, the landowner captures this "unearned increment" as profit. Georgism proposes that this value, known as economic rent, should be recaptured by the public through a Land Value Tax (LVT).
The "Single Tax" and Economic Efficiency
Historically, Georgism was synonymous with the "Single Tax" movement. Henry George argued that if the government captured the full rental value of land, it could eliminate all other taxes. By removing taxes on labor (income) and capital (investment), the economy would be unshackled, while the tax on land would prevent the speculative hoarding of natural opportunities.
Economists across the political spectrum, from the libertarian Milton Friedman to the progressive Joseph Stiglitz, have expressed admiration for the Land Value Tax. Friedman famously called it the "least bad tax." Its primary appeal lies in its efficiency. Most taxes create "deadweight loss"—if you tax windows, people build fewer windows; if you tax income, people may work less. However, the supply of land is "perfectly inelastic"; you cannot "make" more land, and no matter how much you tax its value, the land does not disappear. Consequently, an LVT does not distort economic decisions. Instead, it encourages owners to put land to its "highest and best use" rather than leaving prime urban lots empty while waiting for prices to rise.
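To see why a tax on a fixed supply avoids the distortions described above, here is a minimal numerical sketch; the linear supply and demand curves and all figures are invented for illustration, not drawn from Georgist literature.

```python
# Toy numbers, invented for illustration: demand P = 100 - Q; supply P = Q for a
# produced good such as buildings, whose quantity responds to price.
def quantity_traded(per_unit_tax: float) -> float:
    # Equilibrium with a per-unit tax on sellers: 100 - Q = Q + tax
    return (100 - per_unit_tax) / 2

print(quantity_traded(0))    # 50.0 units built with no tax
print(quantity_traded(30))   # 35.0 units built with the tax -> lost output is deadweight loss

# Land is different: the quantity in existence is fixed regardless of the tax rate,
# so a land value tax changes who receives the rent, not how much land there is.
LAND_PARCELS = 50
for rate in (0.0, 0.5):
    print(f"LVT rate {rate:.0%}: parcels available = {LAND_PARCELS}")
```

The produced good shrinks when taxed; the stock of land does not, which is the sense in which the LVT leaves economic decisions undistorted.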
A Legacy of Influence
Georgism was not merely a theoretical curiosity; it was a potent political force in the early 20th century. George himself nearly won the race for Mayor of New York City in 1886. His ideas traveled across the Atlantic, influencing the "People's Budget" in the United Kingdom and sparking land reform movements in Denmark, Australia, and New Zealand.
The ideology also left a mark on popular culture. The board game Monopoly was originally invented by Elizabeth Magie as The Landlord's Game. Her intent was not to celebrate the accumulation of property, but to demonstrate how land monopolies inevitably lead to the bankruptcy of everyone except the owner. She hoped that children playing the game would see the inherent unfairness of the system and embrace Georgist principles.
In the United States, several "single tax colonies" were founded to test these ideas. The most famous, Fairhope, Alabama, and Arden, Delaware, still exist today, though they have adapted their structures to coexist within broader state and federal tax systems.
Georgism and the Environment
In recent decades, Georgism has found a new audience among environmentalists and "Green" economists. This branch, sometimes called "Green Geoism," extends George’s logic to pollution and the global commons. Just as land belongs to everyone, so do the atmosphere and the water.
When a corporation pollutes a river or emits carbon into the air, it is effectively using up a common resource for private gain. Georgists advocate for "Pigovian taxes" or carbon taxes that function similarly to a land tax: they charge the user for the privilege of diminishing or monopolizing the commons, with the revenue often returned to the public as a "Citizen’s Dividend" (a form of Universal Basic Income).
Modern Relevance: The Housing Crisis
Today, the most urgent application of Georgism is in the debate over the global housing crisis. In major cities like New York, London, and San Francisco, the "housing" cost is often actually a "land" cost. When a modest home sells for millions, the building materials are a fraction of the price; the true cost is the location.
Critics of current property tax systems argue that they penalize homeowners for making improvements. If you add an extra bedroom or a garden, your taxes go up. Meanwhile, a speculator who owns a derelict, rat-infested building in a prime neighborhood pays very little while the land beneath the building gains value every year. A Georgist shift to LVT would flip this incentive: the speculator would be forced to develop the land or sell it to someone who will, while the homeowner would no longer be punished for improving their dwelling.
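As a rough numerical illustration of that incentive flip, consider the sketch below; the parcel values and tax rates are hypothetical, chosen only to make the comparison visible.

```python
# Hypothetical assessments and tax rates, for illustration only.
parcels = {
    "speculator, vacant prime lot": {"land": 900_000, "building": 50_000},
    "homeowner, improved house":    {"land": 300_000, "building": 450_000},
}

PROPERTY_TAX_RATE = 0.01    # 1% of land value plus building value
LAND_VALUE_TAX_RATE = 0.02  # 2% of land value only

for name, value in parcels.items():
    property_tax = PROPERTY_TAX_RATE * (value["land"] + value["building"])
    land_value_tax = LAND_VALUE_TAX_RATE * value["land"]
    print(f"{name}: property tax ${property_tax:,.0f} vs LVT ${land_value_tax:,.0f}")

# Under the property tax, adding a bedroom raises the homeowner's bill;
# under the LVT, the bill tracks location value alone, so improvements go
# untaxed while holding prime land idle becomes the expensive choice.
```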
Conclusion
Georgism occupies a unique space in political economy, sitting between the traditional left and right. It advocates for a radical socialization of land and resources, yet it is fiercely pro-market and anti-interventionist regarding labor and capital. It seeks a "middle way"—a system where the individual is truly free to keep what they earn, but the community is truly compensated for the use of the shared earth.
As the 21st century grapples with soaring inequality, urban sprawl, and environmental degradation, the "cat" that Henry George invited the world to see over a century ago remains a compelling vision for a fairer, more efficient society. Whether implemented as a full "Single Tax" or as a targeted land value levy, the principles of Georgism continue to offer a profound critique of how we value the world we live in.
At the dawn of the 20th century, the global landscape was defined by the clashing forces of decaying empires and rising industrialization. In the midst of this upheaval, Vladimir Lenin, a Russian Marxist revolutionary, developed a political ideology that would not only reshape the borders of Eurasia but also redefine the trajectory of global politics for nearly a hundred years. Known as Leninism, this body of thought was not merely a theoretical interpretation of Karl Marx’s writings; it was a pragmatic blueprint for revolution designed to leapfrog the traditional stages of economic development.
Leninism emerged as the "revolutionary praxis" of the Bolsheviks, providing the ideological fuel for the October Revolution of 1917. While it remains one of the most debated and controversial subjects in political science, its core tenets—vanguardism, democratic centralism, and the theory of imperialism—offer a window into the mechanics of 20th-century authoritarianism and social transformation.
The Vanguard Party: Orchestrating the Proletariat
The most distinct feature of Leninism is the concept of the vanguard party. According to traditional Marxist theory, the transition from capitalism to socialism would occur naturally in advanced industrial societies as the working class (the proletariat) developed "class consciousness." However, Lenin observed that left to their own devices, workers tended to develop only "trade-union consciousness"—a focus on immediate economic gains like higher wages rather than the total overthrow of the state.
To solve this, Lenin proposed in his 1902 pamphlet What Is To Be Done? that the revolution must be led by a highly disciplined, centralized party of professional revolutionaries. This vanguard would act as the "advanced and resolute" section of the working class, educating and organizing the masses toward a singular political goal: the seizure of state power. This shift moved the engine of revolution from a spontaneous mass movement to a meticulously planned operation directed from the top down.
Democratic Centralism: The Internal Engine
To ensure the effectiveness of this vanguard, Lenin implemented democratic centralism. This principle sought to balance internal freedom with external unity. In theory, party members were free to debate and criticize policies until a consensus or majority vote was reached. However, once a decision was finalized, every member was required to abide by it unconditionally.
Lenin argued that "universal and full freedom to criticize" was essential, provided it did not disrupt the "unity of a definite action." In practice, however, this system often prioritized the "centralism" over the "democratic" aspect. Over time, particularly during the stresses of the Russian Civil War, the suppression of internal factions became the norm, creating a template for the monolithic party structures that would define subsequent communist regimes.
Imperialism: The Global Context
Lenin’s intellectual contributions extended beyond party organization to a grand theory of global economics. In his 1916 work Imperialism, the Highest Stage of Capitalism, he argued that capitalism had entered a final, predatory phase. Wealthy industrialized nations, he claimed, had exhausted their domestic markets and were forced to export capital to colonies to exploit labor and resources.
This "superexploitation" allowed Western capitalists to bribe their own domestic workers with slightly higher living standards—the "labor aristocracy"—thereby preventing revolution in the West. Consequently, Lenin theorized that the first socialist revolution would not happen in a developed country like Germany, as Marx had predicted, but in the "weakest link" of the imperialist chain: underdeveloped, agrarian Russia. This was a significant departure from orthodox Marxism and provided the justification for a socialist takeover in a country that had not yet fully industrialized.
The Dictatorship of the Proletariat and the State
Following the 1917 revolution, Leninist theory moved into the realm of governance. Lenin advocated for the dictatorship of the proletariat, a transitional period where the working class would use the state’s coercive power to suppress the former ruling bourgeoisie and dismantle the old bureaucratic apparatus.
In his 1917 book The State and Revolution, Lenin described the state as a "special machine for the suppression of one class by another." He argued that true democracy could only be achieved by disenfranchising the "exploiters." In the early Soviet years, this took the form of government by soviets—councils of workers and soldiers. However, the realities of civil war, famine, and foreign intervention led to the rapid concentration of power within the party leadership, effectively replacing the "dictatorship of the proletariat" with the "dictatorship of the party."
The Contested Legacy: Leninism vs. Stalinism
One of the most enduring historical debates is the degree of continuity between Leninism and the subsequent regime of Joseph Stalin. Critics like Richard Pipes argue that Stalinism was the "natural consequence" of Leninism, noting that the foundations of the police state, the use of "Red Terror," and the ban on political factions were all established under Lenin’s watch.
Conversely, many scholars and left-wing critics, including Leon Trotsky, argued that Stalinism was a "counter-revolution" that betrayed Lenin’s vision. They point to the fact that Lenin’s final writings, known as his Testament, warned against Stalin’s growing power and called for his removal. Furthermore, Lenin’s "New Economic Policy" (NEP), which allowed for some limited market activity, stood in stark contrast to Stalin’s later brutal programs of forced collectivization and rapid industrialization.
Conclusion: A Century of Influence
Despite the collapse of the Soviet Union in 1991, Leninism remains a foundational subject for understanding modern political history. Its influence can be seen in the revolutionary movements of the 20th century across Asia, Africa, and Latin America, where the "vanguard party" model was adopted by leaders ranging from Mao Zedong to Ho Chi Minh.
At its core, Leninism was an attempt to mold history through sheer political will. It sought to prove that a dedicated minority could seize the levers of power and transform a semi-feudal society into a global superpower. Whether viewed as a successful adaptation of Marxism or a tragic detour into totalitarianism, Leninism stands as a testament to the power—and the peril—of a disciplined ideology in the hands of a determined revolutionary.
In the late 18th century, officials in Prussia and Saxony began to rearrange their complex, diverse forests into straight rows of single-species trees. Forests had been sources of food, grazing, shelter, medicine, bedding and more for the people who lived in and around them, but to the early modern state, they were simply a source of timber.
So-called “scientific forestry” was that century’s growth hacking. It made timber yields easier to count, predict and harvest, and meant owners no longer relied on skilled local foresters to manage forests. They were replaced with lower-skilled laborers following basic algorithmic instructions to keep the monocrop tidy, the understory bare.
Information and decision-making power now flowed straight to the top. Decades later when the first crop was felled, vast fortunes were made, tree by standardized tree. The clear-felled forests were replanted, with hopes of extending the boom. Readers of the American political anthropologist of anarchy and order, James C. Scott, know what happened next.
It was a disaster so bad that a new word, Waldsterben, or “forest death,” was minted to describe the result. All the same species and age, the trees were flattened in storms, ravaged by insects and disease — even the survivors were spindly and weak. Forests were now so tidy and bare, they were all but dead. The first magnificent bounty had not been the beginning of endless riches, but a one-off harvesting of millennia of soil wealth built up by biodiversity and symbiosis. Complexity was the goose that laid golden eggs, and she had been slaughtered.
The story of German scientific forestry transmits a timeless truth: When we simplify complex systems, we destroy them, and the devastating consequences sometimes aren’t obvious until it’s too late.
That impulse to scour away the messiness that makes life resilient is what many conservation biologists call the “pathology of command and control.” Today, the same drive to centralize, control and extract has driven the internet to the same fate as the ravaged forests.
The internet’s 2010s, its boom years, may have been the first glorious harvest that exhausted a one-time bonanza of diversity. The complex web of human interactions that thrived on the internet’s initial technological diversity is now corralled into globe-spanning data-extraction engines making huge fortunes for a tiny few.
Our online spaces are not ecosystems, though tech firms love that word. They’re plantations: highly concentrated and controlled environments, closer kin to the industrial farming of the cattle feedlot or battery chicken farms that madden the creatures trapped within.
We all know this. We see it each time we reach for our phones. But what most people have missed is how this concentration reaches deep into the internet’s infrastructure — the pipes and protocols, cables and networks, search engines and browsers. These structures determine how we build and use the internet, now and in the future.
They’ve concentrated into a series of near-planetary duopolies. For example, as of April 2024, Google and Apple’s internet browsers have captured almost 85% of the world market share, and Microsoft and Apple’s two desktop operating systems hold over 80%. Google runs 84% of global search and Microsoft 3%. Slightly more than half of all phones come from Apple and Samsung, while over 99% of mobile operating systems run on Google or Apple software. Two cloud computing providers, Amazon Web Services and Microsoft’s Azure, make up over 50% of the global market. Apple and Google’s email clients manage nearly 90% of global email. Google and Cloudflare serve around 50% of global domain name system requests.
Two kinds of everything may be enough to fill a fictional ark and repopulate a ruined world, but can’t run an open, global “network of networks” where everyone has the same chance to innovate and compete. No wonder internet engineer Leslie Daigle termed the concentration and consolidation of the internet’s technical architecture “‘climate change’ of the Internet ecosystem.”
Walled Gardens Have Deep Roots
The internet made the tech giants possible. Their services have scaled globally, via its open, interoperable core. But for the past decade, they’ve also worked to enclose the varied, competing and often open-source or collectively provided services the internet is built on into their proprietary domains. Although this improves their operational efficiency, it also ensures that the flourishing conditions of their own emergence aren’t repeated by potential competitors. For tech giants, the long period of open internet evolution is over. Their internet is not an ecosystem. It’s a zoo.
Google, Amazon, Microsoft and Meta are consolidating their control deep into the underlying infrastructure through acquisitions, vertical integration, building proprietary networks, creating chokepoints and concentrating functions from different technical layers into a single silo of top-down control. They can afford to, using the vast wealth reaped in their one-off harvest of collective, global wealth.
Taken together, the enclosure of infrastructure and imposition of technology monoculture forecloses our futures. Internet people like to talk about “the stack,” or the layered architecture of protocols, software and hardware, operated by different service providers that collectively delivers the daily miracle of connection. It’s a complicated, dynamic system with a basic value baked into the core design: Key functions are kept separate to ensure resilience, generality and create room for innovation.
Initially funded by the U.S. military and designed by academic researchers to function in wartime, the internet evolved to work anywhere, in any condition, operated by anyone who wanted to connect. But what was a dynamic, ever-evolving game of Tetris with distinct “players” and “layers” is today hardening into a continent-spanning system of compacted tectonic plates. Infrastructure is not just what we see on the surface; it’s the forces below, that make mountains and power tsunamis. Whoever controls infrastructure determines the future. If you doubt that, consider that in Europe we’re still using roads and living in towns and cities the Roman Empire mapped out 2,000 years ago.
In 2019, some internet engineers in the global standards-setting body, the Internet Engineering Task Force, raised the alarm. Daigle, a respected engineer who had previously chaired its oversight committee and internet architecture board, wrote in a policy brief that consolidation meant network structures were ossifying throughout the stack, making incumbents harder to dislodge and violating a core principle of the internet: that it does not create “permanent favorites.” Consolidation doesn’t just squeeze out competition. It narrows the kinds of relationships possible between operators of different services.
As Daigle put it: “The more proprietary solutions are built and deployed instead of collaborative open standards-based ones, the less the internet survives as a platform for future innovation.” Consolidation kills collaboration between service providers through the stack by rearranging an array of different relationships — competitive, collaborative — into a single predatory one.
Since then, standards development organizations started several initiatives to name and tackle infrastructure consolidation, but these floundered. Bogged down in technical minutiae, unable to separate themselves from their employers’ interests and deeply held professional values of simplification and control, most internet engineers simply couldn’t see the forest for the trees.
Up close, internet concentration seems too intricate to untangle; from far away, it seems too difficult to deal with. But what if we thought of the internet not as a doomsday “hyperobject,” but as a damaged and struggling ecosystem facing destruction? What if we looked at it not with helpless horror at the eldritch encroachment of its current controllers, but with compassion, constructiveness and hope?
Technologists are great at incremental fixes, but to regenerate entire habitats, we need to learn from ecologists who take a whole-systems view. Ecologists also know how to keep going when others first ignore you and then say it’s too late, how to mobilize and work collectively, and how to build pockets of diversity and resilience that will outlast them, creating possibilities for an abundant future they can imagine but never control. We don’t need to repair the internet’s infrastructure. We need to rewild it.
What Is Rewilding?
Rewilding “aims to restore healthy ecosystems by creating wild, biodiverse spaces,” according to the International Union for Conservation of Nature. More ambitious and risk-tolerant than traditional conservation, it targets entire ecosystems to make space for complex food webs and the emergence of unexpected interspecies relations. It’s less interested in saving specific endangered species. Individual species are just ecosystem components, and focusing on components loses sight of the whole. Ecosystems flourish through multiple points of contact between their many elements, just like computer networks. And like in computer networks, ecosystem interactions are multifaceted and generative.
Rewilding has much to offer people who care about the internet. As Paul Jepson and Cain Blythe wrote in their book “Rewilding: The Radical New Science of Ecological Recovery,” rewilding pays attention “to the emergent properties of interactions between ‘things’ in ecosystems … a move from linear to systems thinking.”
It’s a fundamentally cheerful and workmanlike approach to what can seem insoluble. It doesn’t micromanage. It creates room for “ecological processes [that] foster complex and self-organizing ecosystems.” Rewilding puts into practice what every good manager knows: Hire the best people you can, provide what they need to thrive, then get out of the way. It’s the opposite of command and control.
Rewilding the internet is more than a metaphor. It’s a framework and plan. It gives us fresh eyes for the wicked problem of extraction and control, and new means and allies to fix it. It recognizes that ending internet monopolies isn’t just an intellectual problem. It’s an emotional one. It answers questions like: How do we keep going when the monopolies have more money and power? How do we act collectively when they suborn our community spaces, funding and networks? And how do we communicate to our allies what fixing it will look and feel like?
Rewilding is a positive vision for the networks we want to live inside, and a shared story for how we get there. It grafts a new tree onto technology’s tired old stock.
What Ecology Knows
Ecology knows plenty about complex systems that technologists can benefit from. First, it knows that shifting baselines are real.
If you were born around the 1970s, you probably remember many more dead insects on the windscreen of your parents’ car than on your own. Global land-dwelling insect populations are dropping about 9% a decade. If you’re a geek, you probably programmed your own computer to make basic games. You certainly remember a web with more to read than the same five websites. You may have even written your own blog.
But many people born after 2000 probably think a world with few insects, little ambient noise from birdcalls, where you regularly use only a few social media and messaging apps (rather than a whole web) is normal. As Jepson and Blythe wrote, shifting baselines are “where each generation assumes the nature they experienced in their youth to be normal and unwittingly accepts the declines and damage of the generations before.” Damage is already baked in. It even seems natural.
Ecology knows that shifting baselines dampen collective urgency and deepen generational divides. People who care about internet monoculture and control are often told they’re nostalgists harkening back to a pioneer era. It’s fiendishly hard to regenerate an open and competitive infrastructure for younger generations who’ve been raised to assume that two or three platforms, two app stores, two operating systems, two browsers, one cloud/mega-store and a single search engine for the world comprise the internet. If the internet for you is the massive sky-scraping silo you happen to live inside and the only thing you can see outside is the single, other massive sky-scraping silo, then how can you imagine anything else?
Concentrated digital power produces the same symptoms that command and control produces in biological ecosystems: acute distress punctuated by sudden collapses once tipping points are reached. What scale is needed for rewilding to succeed? It’s one thing to reintroduce wolves to the 3,472 square miles of Yellowstone, and quite another to cordon off about 20 square miles of a polder (land reclaimed from a body of water) known as Oostvaardersplassen near Amsterdam. Large and diverse Yellowstone is likely complex enough to adapt to change, but Oostvaardersplassen has struggled.
In the 1980s, the Dutch government attempted to regenerate a section of the overgrown Oostvaardersplassen. An independent-minded government ecologist, Frans Vera, said reeds and scrub would dominate unless now-extinct herbivores grazed them. In place of ancient aurochs, the state forest management agency introduced the famously bad-tempered German Heck cattle and in place of an extinct steppe pony, a Polish semi-feral breed.
Some 30 years on, with no natural predators, and after plans for a wildlife corridor to another reserve came to nothing, there were many more animals than the limited winter vegetation could sustain. People were horrified by starving cows and ponies, and beginning in 2018, government agencies instituted animal welfare checks and culling.
Just turning the clock back was insufficient. The segment of Oostvaardersplassen was too small and too disconnected to be rewilded. Because the animals had nowhere else to go, overgrazing and collapse was inevitable, an embarrassing but necessary lesson. Rewilding is a work in progress. It’s not about trying to revert ecosystems to a mythical Eden. Instead, rewilders seek to rebuild resilience by restoring autonomous natural processes and letting them operate at scale to generate complexity. But rewilding, itself a human intervention, can take several turns to get right.
Whatever we do, the internet isn’t returning to old-school then-common interfaces like FTP and Gopher, or organizations operating their own mail servers again instead of off-the-shelf solutions like G-Suite. But some of what we need is already here, especially on the web. Look at the resurgence of RSS feeds, email newsletters and blogs, as we discover (yet again) that relying on one app to host global conversations creates a single point of failure and control. New systems are growing, like the Fediverse with its federated islands, or Bluesky with algorithmic choice and composable moderation.
The 1930s represented the darkest hour for American capitalism. Following the stock market crash of 1929, the United States spiraled into the Great Depression, an era defined by bread lines, 25% unemployment, and a total collapse of the banking system. When Franklin Delano Roosevelt (FDR) took office in 1933, he famously declared that "the only thing we have to fear is fear itself." What followed was the New Deal—a whirlwind of legislative action, executive orders, and social experiments that fundamentally altered the relationship between the United States government and its citizens.
The New Deal was not a single, unified plan, but rather a series of programs and agencies, often referred to as "alphabet soup," designed to provide immediate relief, long-term recovery, and permanent reform. While its success is still debated by economists today, its legacy remains the bedrock of the modern American social contract.
The First Hundred Days and the First New Deal
Roosevelt’s presidency began with an unprecedented burst of activity known as the "First Hundred Days." His immediate priority was the stabilization of the financial system. Through the Emergency Banking Act, he declared a "bank holiday" to stop a run on deposits and used his "fireside chats" to restore public confidence. This period also saw the creation of the Federal Deposit Insurance Corporation (FDIC), which guaranteed bank deposits and remains a cornerstone of financial stability today.
To address the crushing poverty in rural and industrial areas, the First New Deal focused on "Relief and Recovery." The Civilian Conservation Corps (CCC) employed millions of young men in environmental projects, such as planting trees and building trails in national parks. Simultaneously, the Agricultural Adjustment Act (AAA) sought to raise crop prices by paying farmers to reduce production, a controversial move intended to stabilize the collapsing agrarian economy. Meanwhile, the National Recovery Administration (NRA) attempted to organize industry through "codes of fair competition," though it was later struck down by the Supreme Court as an overreach of executive power.
The Second New Deal: Shifting Toward Reform
As the initial crisis abated, Roosevelt moved toward the "Second New Deal" (1935–1936), which shifted focus toward social justice and long-term security. This phase introduced the most enduring pillars of American life. The Social Security Act of 1935 established a safety net for the elderly, the unemployed, and the disabled, marking the first time the federal government took direct responsibility for the social welfare of its people.
This era also saw the passage of the Wagner Act, which guaranteed the right of workers to organize into unions and engage in collective bargaining. This gave rise to the modern labor movement and helped create the American middle class. To combat persistent unemployment, the Works Progress Administration (WPA) was launched, becoming the largest New Deal agency. The WPA didn’t just build bridges and roads; it employed artists, writers, and musicians, documenting American life and creating public works of art that still decorate post offices and city halls across the country.
The Critics and the Constitutional Crisis
The New Deal was far from universally popular. Conservative critics and business leaders attacked it as "socialism" and an unconstitutional expansion of federal power. On the other end of the spectrum, populist figures like Huey Long and Father Charles Coughlin argued that the New Deal did not go far enough in redistributing wealth.
The most significant challenge came from the judicial branch. The Supreme Court, dominated by conservative justices known as the "Four Horsemen," struck down several key pieces of legislation, including the NRA and the AAA. In response, a frustrated Roosevelt proposed the Judicial Procedures Reform Bill of 1937—the infamous "court-packing plan." While the plan failed in Congress and cost FDR significant political capital, the Court eventually began upholding New Deal legislation, a shift often called "the switch in time that saved nine."
The New Deal Coalition and Political Realignment
The New Deal did more than change the economy; it reshaped American politics. It forged the "New Deal Coalition," a diverse and powerful voting bloc that included labor unions, blue-collar workers, racial and religious minorities (particularly African Americans and Catholics), and Southern Democrats. This coalition allowed the Democratic Party to dominate American politics for decades, controlling the White House and Congress for much of the mid-20th century.
However, the New Deal’s record on civil rights was mixed. While many programs provided aid to African Americans, others—like Social Security and the Wagner Act—initially excluded domestic and agricultural workers, categories that were predominantly Black. Furthermore, FDR often deferred to powerful Southern segregationist Democrats to ensure his economic bills passed, leaving the era’s systemic racism largely unchallenged at the federal level.
Legacy and Modern Echoes
The New Deal did not fully end the Great Depression—it was the massive industrial mobilization for World War II that finally brought the U.S. to full employment. However, it successfully prevented a total social collapse and created the regulatory framework that defined the post-war era. Agencies like the Securities and Exchange Commission (SEC) and the Tennessee Valley Authority (TVA) continue to function today, and the concept of a government "safety net" is now an inseparable part of the American identity.
In recent years, the New Deal has returned to the forefront of political discourse. Proponents of the "Green New Deal" look to FDR’s massive public works projects as a blueprint for tackling climate change, while the economic disruptions of the 21st century have led many to call for a renewed focus on the "Relief, Recovery, and Reform" triad. Whether viewed as a savior of capitalism or a precursor to the welfare state, the New Deal remains the most significant domestic transformation in the history of the United States.
In the annals of economic thought, few works stand as prominently or as controversially as Ludwig von Mises’ Human Action: A Treatise on Economics. Published in 1949, this magnum opus is not merely an economics textbook; it is a comprehensive philosophical defense of laissez-faire capitalism and a profound investigation into the nature of human decision-making. Across nearly 900 pages, Mises constructs a rigorous logical framework that attempts to demonstrate that the free market is not just an efficient system for resource allocation, but the very foundation of civilization and personal freedom.
The Foundation of Praxeology
The core of Human Action is built upon a methodology Mises termed "praxeology"—the science of human action. Whereas the natural sciences rely on observation, experimentation, and the search for constant external laws, Mises argued that economics must be rooted in a priori truths. He began with a single, self-evident axiom: "human action is purposeful behavior."
From this starting point, Mises deduced that individuals act because they are in a state of unease and believe that by using specific "means," they can achieve a "more satisfactory state." This shift from objective to subjective value was revolutionary. In Mises’ view, value does not reside in objects themselves but in the minds of the actors who rank those objects based on their own internal goals. This subjective theory of value allows for a universal understanding of human behavior that transcends culture, race, or class—a direct rebuttal to the "polylogism" of his day, which claimed that different groups possessed fundamentally different logical structures.
The Market as a Process of Calculation
One of the most critical arguments in Human Action concerns the necessity of economic calculation. Mises famously contended that socialism is "economically impossible" because it lacks a market-driven price system. Without private ownership of the means of production, there can be no genuine exchange of capital goods, and therefore no market prices for those goods.
Prices are not just numbers; they are signals that encapsulate the relative scarcity of resources and the shifting desires of billions of consumers. Without these signals, a central planner is effectively "groping in the dark," unable to determine which production methods are efficient or which consumer needs are most urgent. For Mises, the market is a grand, decentralized computer that coordinates the actions of an entire society through the medium of money prices, ensuring that resources are directed toward their most highly valued uses.
The Dangers of the Hampered Market
A significant portion of the treatise is dedicated to "The Hampered Market Economy," where Mises analyzes the effects of government intervention. He was a staunch critic of what he called "interventionism"—a middle-of-the-road policy that attempts to preserve the market while introducing piecemeal controls like minimum wages, price ceilings, or credit expansion.
Mises argued that these interventions are inherently self-defeating. For instance, if the government artificially lowers the price of milk to help the poor, it reduces the incentive for farmers to produce milk, leading to shortages. The government must then either repeal the price control or intervene further—perhaps by subsidizing farmers or controlling the prices of cow feed—leading to a "spiraling" effect that eventually necessitates total state control.
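The arithmetic behind the price-control argument can be made concrete with a toy model. The sketch below uses hypothetical linear supply and demand curves for milk—every number is an invented illustration, not a figure from Mises: capping the price below the market-clearing level raises the quantity consumers want while cutting the quantity farmers are willing to produce, and the gap between the two is the shortage.

```python
# Illustrative only: hypothetical linear supply and demand curves for milk.
# Quantities are in millions of litres per month; prices in dollars per litre.

def quantity_demanded(price: float) -> float:
    """Consumers buy more as the price falls (assumed demand curve)."""
    return 100 - 20 * price

def quantity_supplied(price: float) -> float:
    """Farmers produce more as the price rises (assumed supply curve)."""
    return -20 + 40 * price

equilibrium_price = 2.0   # where these assumed curves happen to cross
price_ceiling = 1.5       # a government cap set below the market-clearing price

for label, price in [("market price", equilibrium_price),
                     ("price ceiling", price_ceiling)]:
    demanded = quantity_demanded(price)
    supplied = quantity_supplied(price)
    shortage = max(0.0, demanded - supplied)
    print(f"{label}: demanded={demanded:.0f}, supplied={supplied:.0f}, "
          f"shortage={shortage:.0f}")

# With these made-up curves:
#   market price:  demanded=60, supplied=60, shortage=0
#   price ceiling: demanded=70, supplied=40, shortage=30
```

In this toy setup, the ceiling leaves 30 million litres of demand unserved; in Mises’ telling, it is that gap which invites the next round of intervention.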
His analysis of the business cycle is equally biting. Mises attributed the "boom-and-bust" cycles of modern economies to central bank manipulation of interest rates. By artificially lowering rates through credit expansion, banks encourage "malinvestments"—capital projects that appear profitable but are actually unsustainable. When the credit bubble eventually bursts, the result is a painful but necessary economic downturn as the market attempts to purge these inefficiencies.
Social Cooperation and the Great Society
Despite its reputation as a cold, technical work, Human Action is deeply concerned with the preservation of social peace. Mises argued that the division of labor is the greatest tool for human cooperation. When individuals specialize and trade, they find that their own well-being is tied to the well-being of others. This "Law of Association" (inspired by David Ricardo) suggests that even the most productive person benefits from cooperating with the least productive, creating a natural incentive for peace over conflict.
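The force of this "Law of Association" is easiest to see with numbers. The sketch below uses invented productivity figures—illustrative assumptions, not data from Mises or Ricardo: even though Ann out-produces Bob at both goods, dividing the work according to comparative advantage leaves the pair with more of both.

```python
# Illustrative only: the "law of association" (comparative advantage)
# with invented productivity figures. Units produced per full day of work.

ann = {"bread": 6, "cloth": 6}   # Ann is at least as good at everything
bob = {"bread": 4, "cloth": 1}   # Bob is worse at both tasks

# Scenario 1: each works alone and splits the day between the two goods.
alone = {g: 0.5 * ann[g] + 0.5 * bob[g] for g in ("bread", "cloth")}

# Scenario 2: they cooperate. Bob's disadvantage is smallest in bread,
# so he bakes all day; Ann bakes for a third of her day and weaves the rest.
together = {
    "bread": bob["bread"] + ann["bread"] / 3,
    "cloth": ann["cloth"] * 2 / 3,
}

print("alone:   ", alone)      # {'bread': 5.0, 'cloth': 3.5}
print("together:", together)   # {'bread': 6.0, 'cloth': 4.0} -- more of both
```

Under these assumed numbers, cooperation yields more bread and more cloth than isolated effort, which is the incentive for peaceful exchange that Mises has in mind.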
For Mises, capitalism is the only system that reconciles the pursuit of individual interest with the needs of the community. In a market society, a person can only increase their own wealth by serving others—producing the goods and services that fellow citizens are willing to pay for. This "consumer sovereignty" ensures that the ultimate direction of the economy is determined by the "masses" rather than a political elite.
A Lasting Legacy
Today, Human Action remains a cornerstone of the Austrian School of economics and a primary text for libertarians and free-market advocates worldwide. While modern mainstream economics has moved toward mathematical modeling and empirical testing—methods Mises largely rejected—his insights into the importance of prices, the role of time and uncertainty, and the dangers of monetary manipulation continue to resonate.
Human Action is more than a defense of a specific economic policy; it is a plea for the recognition of the individual as the ultimate source of all social phenomena. In an age of increasing centralization and digital surveillance, Mises’ warning remains stark: civilization depends on the freedom of the acting individual to make choices, take risks, and pursue happiness in a market unhampered by the dictates of the state.
It has become almost impossible to separate the effects of digital technologies from our everyday experiences. Reality is parsed through glowing screens, unending data feeds, biometric feedback loops, digital prostheses and expanding networks that link our virtual selves to satellite arrays in geostationary orbit. Wristwatches interpret our physical condition by counting steps and heartbeats. Phones track how we spend our time online, map the geographic location of the places we visit and record our histories in digital archives. Social media platforms forge alliances and create new political possibilities. And vast wireless networks – connecting satellites, drones and ‘smart’ weapons – determine how the wars of our era are being waged. Our experiences of the world are soaked with digital technologies.
But for the French philosopher Bernard Stiegler, one of the earliest and foremost theorists of our digital age, understanding the world requires us to move beyond the standard view of technology. Stiegler believed that technology is not just about the effects of digital tools and the ways that they impact our lives. It is not just about how devices are created and wielded by powerful organisations, nation-states or individuals. Our relationship with technology is about something deeper and more fundamental. It is about technics.
According to Stiegler, technics – the making and use of technology, in the broadest sense – is what makes us human. Our unique way of existing in the world, as distinct from other species, is defined by the experiences and knowledge our tools make possible, whether that is a state-of-the-art brain-computer interface such as Neuralink, or a prehistoric flint axe used to clear a forest. But don’t be mistaken: ‘technics’ is not simply another word for ‘technology’. As Martin Heidegger wrote in his essay ‘The Question Concerning Technology’ (1954), which used the German term Technik instead of Technologie in the original title: the ‘essence of technology is by no means anything technological.’ This aligns with the history of the word: the etymology of ‘technics’ leads us back to something like the ancient Greek term for art – technē. The essence of technology, then, is not found in a device, such as the one you are using to read this essay. It is an open-ended creative process, a relationship with our tools and the world.
This is Stiegler’s legacy. Throughout his life, he took this idea of technics, first explored while he was imprisoned for armed robbery, further than anyone else. But his ideas were often overlooked and misunderstood, even before his death in 2020. Today, they are more necessary than ever. How else can we learn to disentangle the effects of digital technologies from our everyday experiences? How else can we begin to grasp the history of our strange reality?
Stiegler’s path to becoming the pre-eminent philosopher of our digital age was anything but straightforward. He was born in Villebon-sur-Yvette, south of Paris, in 1952, during a period of affluence and rejuvenation in France that followed the devastation of the Second World War. At 16, Stiegler participated in the revolutionary wave of 1968 (he would later become a member of the Communist Party), when a radical uprising of students and workers forced President Charles de Gaulle to seek temporary refuge across the border in West Germany. However, after a new election was called and the barricades were dismantled, Stiegler became disenchanted with traditional Marxism, as well as the political trends circulating in France at the time. The Left in France seemed helplessly torn between the postwar existentialism of Jean-Paul Sartre and the anti-humanism of Louis Althusser. While Sartre insisted on humans’ creative capacity to shape their own destiny, Althusser argued that the pervasiveness of ideology in capitalist society had left us hopelessly entrenched in systems of power beyond our control. Neither of these options satisfied Stiegler because neither could account for the rapid rise of a new historical force: electronic technology. By the 1970s and ’80s, Stiegler sensed that this new technology was redefining our relationship to ourselves, to the world, and to each other. To account for these new conditions, he believed the history of philosophy would have to be rewritten from the ground up, from the perspective of technics. Neither existentialism nor Marxism nor any other school of philosophy had come close to acknowledging the fundamental link between human existence and the evolutionary history of tools.
In the decade after 1968, Stiegler opened a jazz club in Toulouse that was shut down by the police a few years later for illegal prostitution. Desperate to make ends meet, Stiegler turned to robbing banks to pay off his debts and feed his family. In 1978, he was arrested for armed robbery and sentenced to five years in prison. A high-school dropout who was never comfortable in institutional settings, Stiegler requested his own cell when he first arrived in prison, and went on a hunger strike until the warden acquiesced. He then began taking note of how his relationship to the outside world was mediated through reading and writing. This would be a crucial realisation. Through books, paper and pencils, he was able to interface with people and places beyond the prison walls.
It was during his time behind bars that Stiegler began to study philosophy more intently, devouring any books he could get his hands on. In his philosophical memoir Acting Out (2009), Stiegler describes his time in prison as one of radical self-exploration and philosophical experimentation. He read classic works of Greek philosophy, studied English and memorised modern poetry, but the book that really drew his attention was Plato’s Phaedrus. In this dialogue between Socrates and Phaedrus, Plato outlines his concept of anamnesis, a theory of learning that states the acquisition of new knowledge is just a process of remembering what we once knew in a previous life. Caught in an endless cycle of death and rebirth, we forget what we know each time we are reborn. For Stiegler, this idea of learning as recollection would become less spiritual and more material: learning and memory are tied inextricably to technics. Through the tools we use – including books, writing, archives – we can store and preserve vast amounts of knowledge.
After an initial attempt at writing fiction in prison, Stiegler enrolled in a philosophy programme designed for inmates. While still serving his sentence, he finished a degree in philosophy and corresponded with prominent intellectuals such as the philosopher and translator Gérard Granel, who was a well-connected professor at the University of Toulouse-Le Mirail (later known as the University of Toulouse-Jean Jaurès). Granel introduced Stiegler to some of the most prominent figures in philosophy at the time, including Jean-François Lyotard and Jacques Derrida. Lyotard would oversee Stiegler’s master’s thesis after his eventual release; Derrida would supervise his doctoral dissertation, completed in 1993, which was reworked and published a year later as the first volume in his Technics and Time series. With the help of these philosophers and their novel ideas, Stiegler began to reshape his earlier political commitment to Marxist materialism, seeking to account for the ways that new technologies shape the world.
"A Modest Proposal," written by Jonathan Swift in 1729, is a Juvenalian satirical essay that suggests a shocking solution to ease the economic troubles of impoverished Irish people: selling their children as food to rich gentlemen and ladies. This hyperbolic proposal is a critique of heartless attitudes towards the poor, particularly the Irish Catholic population, and British policies towards the Irish.
Swift's essay is renowned for its sustained irony and shock value. It begins by vividly describing the plight of starving beggars in Ireland, leading readers to expect a serious proposal. However, the essay takes a surprising turn when Swift proposes that well-nursed, healthy children can be a delicious and nourishing food. He even provides detailed suggestions for preparing and cooking the children, as well as calculations to show the financial benefits of his suggestion.
The essay also critiques the can-do spirit of the times, which led to illogical schemes to solve social and economic issues. Swift mocks projects that proposed simplistic solutions to complex problems, such as the idea of running the poor through a joint-stock company.
In addition to criticizing these projects, Swift targets the calculating way people perceived the poor, viewing them as commodities. He uses statistical analysis ironically to show the absurdity of trying to justify cruelty with dispassionate statistics.
Swift's rhetorical style persuades readers to detest the speaker and pity the Irish. The narrator shows emotion only for members of his own class, while the Irish poor are described in language usually reserved for animals and livestock, underscoring their dehumanization.
Scholars have speculated about the influences on Swift's essay. It has been compared to Tertullian's Apology, which satirically attacked early Roman persecution of Christianity. Swift's work also responds to Daniel Defoe's essay on preventing murder and other abuses, as well as Bernard Mandeville's proposal for public bordellos.
Swift's essay provoked a backlash in British society, drawing responses from members of the aristocracy. Despite its controversial nature, "A Modest Proposal" is considered a masterpiece and is studied in literature courses as a defining example of early modern western satire. It continues to be relevant today, serving as a critique of simplistic solutions to complex social and economic issues.
In conclusion, Jonathan Swift's "A Modest Proposal" remains a powerful and provocative work that challenges readers to confront their assumptions about poverty, society, and human nature.