Techno-Feudalism and Digital Serfdom II
A Review of Ten Existing Proposals to Save Us from Serfdom
Last week, in Techno-Feudalism and Digital Serfdom, I laid out the grim reality of modern digital life - that we are serfs existing at the pleasure of our digital landlords. In the hands of today’s Big Tech monopolies, contract law enforced with clickthrough licenses and one-sided terms of service has become a tool of systemic exploitation. Powerful entities—corporations, digital landlords, and gig economy firms—structure relationships in ways that legally pretend the parties are equal, even when they clearly are not.
Property law and labor law were reformed when their unequal foundations became politically unacceptable. But contract law, shielded by its illusion of fairness, has escaped such scrutiny—allowing modern economic powers to game the system. If we want to escape digital serfdom, the system has to be reformed.
How might that be done? As it turns out, a number of proposals for reform have already been written. The problem of digital serfdom is so manifest that top legal and political thinkers around the United States are already writing about it. In today’s installment, we’re going to explore some of these proposals.
The authors of these proposals span the full spectrum of ideological thought. They include far-left Yale law professors who want to regulate the internet like a public utility, hard-right Supreme Court justices who want to bring back 19th century common law, earnest law students who want to arm private plaintiffs with new causes of action, and MAGA officials who want to leverage executive action via the regulatory agencies they now control.
Proposal #1: Enforce Common Carriage Doctrine
Justice Clarence Thomas proposed the doctrine of common carriage as a possible solution to digital serfdom in his concurrence in Biden v. Knight First Amendment Institute. This Supreme Court case addressed whether a public official’s social media account constituted a public forum, but Thomas used the opportunity to explore broader issues of digital platform power. In his opinion, published as part of the Court’s decision on April 5, 2021, he argued that the concentrated control of speech by private digital platforms warranted a reevaluation of legal doctrines, specifically highlighting common carriage as a potential framework by which to do so. He is, to my knowledge, the only sitting justice of the Supreme Court to have written about the risk of digital serfdom created by companies like Twitter and Facebook.
Historical Context and General Definition
Common carriage is a legal doctrine with roots in English common law, historically applied to businesses that provide essential transportation or communication services to the public, such as ferries, railroads, and telegraphs. These entities, deemed “common carriers,” are required to serve all customers without discrimination, provided they pay the fee and comply with reasonable terms, and they cannot arbitrarily exclude individuals. The doctrine emerged to prevent monopolistic or powerful entities from abusing their control over critical infrastructure, as seen in cases like Primrose v. Western Union Telegraph Co. (1894), where the Supreme Court affirmed telegraph companies’ duty to serve all alike. Historically, justifications varied: some tied it to substantial market power, while others, as in the UK case Ingate v. Christie (1850), linked it to a business holding itself out as open to all. In exchange for these obligations, governments often granted carriers privileges like immunity from certain lawsuits or franchise licenses.
Application to Digital Serfdom
Justice Thomas suggests applying common carriage to digital platforms to address digital serfdom—the state where users are beholden to a few dominant firms controlling online speech and access. He likens platforms like Google, Facebook, and Twitter to traditional carriers because they “carry” information across digital networks and hold themselves out as neutral distributors, not publishers, under laws like Section 230 of the Communications Decency Act. For instance, Google’s 90% search market share and Facebook’s 3 billion users give them gatekeeping power akin to railroads over commerce.
It changes nothing that these platforms are not the sole means for distributing speech or information. A person always could choose to avoid the toll bridge or train and instead swim the Charles River or hike the Oregon Trail. But in assessing whether a company exercises substantial market power, what matters is whether the alternatives are comparable. For many of today’s digital platforms, nothing is. — Justice Thomas, Biden v. Knight First Amendment Institute
By imposing a duty to serve all without discrimination, common carrier doctrine could prevent platforms from arbitrarily banning users or suppressing content, reducing their unilateral control. Thomas posits that this might also make government officials’ accounts public forums, subject to First Amendment rules, thereby curbing the platforms’ feudal-like dominance over digital expression.
Thomas’s approach leverages a well-established legal precedent, requiring no new legislation and allowing courts to adapt existing principles to modern contexts, as they did with telegraphs. It directly targets the exclusionary power of platforms, ensuring broader access and reducing the risk of viewpoint-based censorship, which aligns with free speech values. It avoids forcing platforms to speak or endorse content, sidestepping First Amendment conflicts, since, as Thomas notes, regulations could be narrowly tailored to nondiscrimination. Finally, for platforms with dominant market shares, like Google or Amazon, it addresses network effects and barriers to entry without necessitating breakups, preserving their operational scale while enhancing user autonomy.
Defining which platforms qualify as common carriers is contentious—Thomas’s focus on market power (e.g., Google’s 90%) might exclude smaller firms, yet a broader “open to the public” standard could overreach, ensnaring unintended entities. It risks stifling platforms’ ability to moderate harmful content (e.g., hate speech or misinformation), potentially flooding digital spaces with undesirable material if they cannot exclude users or posts reasonably. Legal challenges will certainly arise, as platforms might claim First Amendment protections as private entities, a tension unresolved since Manhattan Community Access Corp. v. Halleck (2019) affirmed private firms’ exclusion rights. Finally, enforcement could strain judicial resources, requiring case-by-case determinations of “reasonable” service obligations, leading to inconsistency or regulatory uncertainty.
Proposal #2: Enforce Public Accommodation Law
Justice Thomas is famous for saying very little during oral arguments, and saying quite a lot in his written opinions. In the same 2021 concurrence where he proposed common carriage doctrine, Thomas also explored the use of public accommodation law as an alternative or complementary doctrine.
Historical Context and General Definition
Public accommodation law historically mandated that certain private businesses serving the public—such as inns, theaters, or restaurants—provide equal access to all individuals without discrimination, regardless of market power. Rooted in English common law and formalized in the U.S. through cases like the Civil Rights Cases (1883), where Justice Harlan’s dissent highlighted its application to public-facing entities, the doctrine gained prominence with the Civil Rights Act of 1964 (42 U.S.C. §2000a), which prohibited racial discrimination in such places. Unlike common carriage, which focuses on transportation or communication networks, public accommodation applies to a broader range of services offering “lodging, food, entertainment, or other services to the public,” as defined by Black’s Law Dictionary (11th ed. 2019). Historically, it has been used to ensure equitable access, often tied to civil rights rather than economic monopoly, and does not typically grant special government privileges in return.
Application to Digital Serfdom
Thomas proposes that digital platforms could be regulated as public accommodations to mitigate digital serfdom, where users are subject to the whims of a few powerful firms controlling online spaces. He compares platforms like Twitter or Facebook to traditional public accommodations because they hold themselves out as open forums for public engagement, akin to a digital “town square.”
Even if digital platforms are not close enough to common carriers, legislatures might still be able to treat digital platforms like places of public accommodation. Although definitions between jurisdictions vary, a company ordinarily is a place of public accommodation if it provides “lodging, food, entertainment, or other services to the public in general.” Black’s Law Dictionary 20 (11th ed. 2019) (defining “public accommodation”); accord, 42 U. S. C. §2000a(b)(3) (covering places of “entertainment”). Twitter and other digital platforms bear resemblance to that definition… - Justice Thomas, Biden v. Knight First Amendment Institute
This approach would impose a duty to serve all users without arbitrary exclusion, addressing issues like account bans or content suppression that reinforce platform dominance. For example, it could prevent Twitter from silencing users based on viewpoint or Amazon from delisting books capriciously, reducing the feudal-like control over digital commerce and speech. Thomas suggests this might strengthen arguments that government accounts on these platforms are public forums, enhancing user protections under the First Amendment.
The public accommodation framework offers distinct advantages. First, it applies regardless of market share, making it broadly applicable to platforms big and small, unlike common carriage’s potential focus on dominance. Second, it aligns with historical efforts to protect equal access, resonating with democratic values and potentially garnering public support as a civil rights analogy in the digital realm. Third, it avoids mandating speech or restructuring businesses, preserving platforms’ operational freedom while curbing exclusionary practices, as seen in PruneYard Shopping Center v. Robins (1980), where access rights coexisted with private ownership. Finally, its simplicity—requiring only nondiscrimination—could streamline enforcement compared to more complex regulatory schemes, leveraging existing legal precedents.
However, applying public accommodation to digital platforms is not without disadvantages. Courts are split on whether it extends beyond physical spaces, as seen in Doe v. Mutual of Omaha (1999) versus Parker v. Metropolitan Life (1997), creating legal uncertainty for digital application. Even if legal certainty were achieved, the doctrine might limit platforms’ ability to curate content, potentially increasing harmful material (e.g., misinformation or harassment) if they cannot exclude users or posts, a concern not fully addressed by Thomas. Platforms could resist under First Amendment claims, arguing their private status shields exclusion rights, a tension heightened since Halleck. Finally, overbroad application might ensnare smaller platforms or niche services not truly “public,” diluting the doctrine’s focus and sparking regulatory overreach or judicial backlash.
Proposal #3: Implement Public Utility Regulation
The idea of applying public utility regulation to address digital serfdom was proposed by Lina M. Khan in her article “Amazon’s Antitrust Paradox,” published in the Yale Law Journal in January 2017. Khan, then a legal scholar (later becoming FTC Chair), focused on Amazon’s dominance as essential infrastructure in the internet economy, arguing that its market power and vertical integration necessitated regulatory oversight beyond traditional antitrust. Her work, spanning 96 pages in Volume 126, Issue 3, critiques the inadequacy of current competition law and proposes public utility regulation as a means to manage the power of dominant online platforms. Khan’s proposal emerged amid growing concerns about tech giants’ unchecked influence, making it a seminal contribution to the digital serfdom debate.
Historical Context and General Definition
Public utility regulation historically applies to industries deemed natural monopolies or essential services—such as water, electricity, railroads, and telephony—where competition is impractical or undesirable. Dating back to the Progressive Era in the early 1900s, this framework accepts monopoly power but imposes strict government controls, including nondiscrimination, rate-setting, and investment requirements, to ensure universal service at just rates. The Supreme Court’s Munn v. Illinois (1876) upheld this approach, ruling that businesses “affected with a public interest” (e.g., grain storage) could be regulated for the common good. Unlike common carriage, which focuses on access, public utility regulation comprehensively manages pricing and operations, often socializing infrastructure costs while curbing monopoly abuses, as seen with the Interstate Commerce Commission’s oversight of railroads in 1887.
Animating public utility regulations was the idea that essential network industries—such as railroads and electric power—should be made available to the public in the form of universal service provided at just and reasonable rates. The Progressive movement of the early twentieth century embraced public utility as a way to use government to steer private enterprise toward public ends. - Lina M. Khan, “Amazon’s Antitrust Paradox”
Application to Digital Serfdom
Khan argues that public utility regulation can address digital serfdom by treating dominant platforms like Amazon as essential network industries, curbing their ability to exploit users and dependent businesses. For Amazon, this would mean prohibiting self-preferencing (e.g., favoring its own products on Marketplace) and enforcing nondiscrimination among sellers and consumers, mitigating anticompetitive risks from its vertical integration. Applied broadly, it could regulate Google’s search algorithms or Facebook’s ad pricing to ensure fair access and prevent exclusionary practices that entrench digital feudalism. Khan envisions this as accepting platforms’ scale—due to network effects—while limiting their power to pick winners and losers, akin to how railroads were regulated to stop favoring certain shippers, thus enhancing user and third-party autonomy in the digital ecosystem.
Public utility regulation directly tackles structural power, leveraging a proven framework to manage monopolies without requiring breakups, preserving efficiency from scale. Nondiscrimination rules could level the playing field, boosting competition among reliant businesses (e.g., Amazon sellers) and reducing user dependency on platform whims. Its historical success with utilities suggests adaptability to digital contexts, potentially gaining traction from precedents like net neutrality debates. It provides a holistic solution—beyond mere access—to address pricing, service quality, and conflicts of interest, offering a robust check on the feudal-like control Khan identifies in firms like Amazon.
However, this approach carries significant risks of government intrusion. Defining “fair rates” or investment requirements for digital platforms is complex—unlike utilities with tangible costs, Amazon’s losses from below-cost pricing (e.g., to gain market share) defy traditional rate-setting, as Khan notes. Heavy regulation might deter platforms from experimenting with new services, a concern raised by mid-20th-century critics who saw public utility rules as outdated amid technological change - and who were largely proven right when landline monopolies were devastated by the wireless age. And, of course, political resistance is likely to be extremely high. The doctrine’s intellectual vigor has been in steep decline since the 1970s, and tech lobbying could thwart implementation. If it were implemented, public utility overregulation could easily harm consumers if platforms pass compliance costs onto users or if inflexible rules fail to adapt to the fast-evolving digital landscape, potentially entrenching inefficiency rather than empowerment.
Proposal #4: Enforce Essential Facilities Doctrine
The essential facilities doctrine is another remedy for digital serfdom proposed by Lina M. Khan in her Yale Law Journal article “Amazon’s Antitrust Paradox.” Building on her broader critique of Amazon’s market power, she introduced the essential facilities doctrine as a “lighter” alternative to public utility regulation, suggesting it could ensure fair access to key digital assets.
Historical Context and General Definition
The essential facilities doctrine, an antitrust principle, mandates that a monopolist controlling a facility essential to competition must provide reasonable access to competitors if duplication is impractical and denial harms competition. Emerging from early 20th-century Supreme Court cases like United States v. Terminal Railroad Association (1912), where a railroad coalition was forced to share a key bridge, it was formalized in MCI Communications Corp. v. AT&T (1983), which set a four-factor test: monopolist control, competitor inability to duplicate, denial of access, and feasibility of sharing. Historically applied to physical infrastructure (e.g., bridges, power grids), it prevents monopoly leveraging into adjacent markets without requiring breakup, balancing efficiency with competition. However, its vitality waned after the Supreme Court’s skepticism in Verizon v. Trinko (2004), reflecting debates over forced sharing of private property.
While the Supreme Court has never recognized nor articulated a standard for “essential facility,” three Supreme Court rulings are seen as having established the “functional foundation” for the doctrine. In 2004, however, the Court disavowed the essential facilities doctrine in dicta, leading several commentators to wonder whether it is a dead letter. This decision by the Court to effectively reject its prior case law on essential facilities followed challenges on other fronts: notably from Congress, enforcement agencies, and academic scholars, all of whom have critiqued the idea of requiring dominant firms to share their property. - Lina M. Khan, “Amazon’s Antitrust Paradox”
Application to Digital Serfdom
Khan applies the essential facilities doctrine to digital serfdom by identifying Amazon’s infrastructure—such as its Marketplace, fulfillment services, or Amazon Web Services (AWS)—as critical facilities that competitors cannot feasibly replicate due to scale and network effects. She argues that requiring Amazon to grant nondiscriminatory access to these assets could prevent it from excluding rivals or favoring its own products, reducing the dependency that defines digital feudalism. For example, third-party sellers could compete fairly on Marketplace, or cloud providers could challenge AWS, if access were mandated. Extended to other platforms, Google’s search engine or Facebook’s social graph could be deemed essential, curbing gatekeeping power that locks users and businesses into subservient roles, thus fostering a more competitive digital ecosystem.
The essential facilities doctrine has several benefits. It targets specific choke points of platform power without dismantling firms, preserving economies of scale while promoting competition. It has a clear legal foundation in antitrust, requiring less legislative overhaul than public utility regulation, and could be enforced through existing agencies like the FTC or DOJ. It empowers dependent businesses and users by ensuring access to vital tools, directly addressing serf-like reliance on platforms like Amazon. Finally, its flexibility—applied case-by-case via the MCI test—allows tailored solutions, avoiding one-size-fits-all rules that might miss nuances of digital markets.
The essential facilities approach is quite similar to the public accommodations approach proposed by Justice Thomas, so much so that his quote bears repeating: “A person always could choose to avoid the toll bridge or train and instead swim the Charles River or hike the Oregon Trail. But in assessing whether a company exercises substantial market power, what matters is whether the alternatives are comparable. For many of today’s digital platforms, nothing is.”
Unfortunately, the legal status of the essential facilities doctrine is quite shaky post-Trinko, where the Supreme Court questioned its scope. Even if SCOTUS did give the nod to courts to apply it, determining what qualifies as “essential” in a dynamic digital context would be quite challenging—e.g., is AWS truly non-duplicable when competitors like Microsoft Azure exist? Forced sharing could also deter innovation; if Amazon must open its infrastructure, it might invest less in future development, echoing critiques that the doctrine penalizes success. And, as with our other proposals, enforcement could bog down in litigation over feasibility and terms of access, creating uncertainty and costs that might disproportionately burden smaller firms or consumers if platforms raise prices to offset compliance.
Proposal #5: Block Equitable Servitudes
Danielle D’Onfro proposed that courts forbid equitable servitudes in chattel as a response to digital serfdom in her 79-page article “Contract-Wrapped Property,” published in the Harvard Law Review in February 2020 (Volume 133, Issue 4).
D’Onfro, a law professor, critiques how firms use contracts—particularly software licenses—to impose restrictions that mimic equitable servitudes, undermining traditional ownership in the digital age. Her proposal, spanning a detailed analysis of property and contract law intersections, calls for legislative or judicial action to curb these practices. Written amid rising concerns about digital dependency, D’Onfro’s work targets the contractual mechanisms that entrench platform power over users, framing it as a key lever to dismantle digital serfdom. Like me, D’Onfro believes that abuse of contract law is at the core of the problem.
Elevating contract over all other private law doctrines disrupts the broader equilibrium of the private law, in which a complementary suite of doctrines developed to promote liberty while curtailing opportunism. While the pathologies that have flourished internally in modern contract doctrine have been well covered, with a few exceptions, the outsized role of contract itself has received less attention. - Danielle D’Onfro, “Contract-Wrapped Property”
Historical Context and General Definition
Equitable servitudes historically are property-law devices that bind successors in interest to obligations tied to property, enforceable through injunctions rather than just damages, which distinguishes them from mere contracts. Originating in English equity courts and refined in American law, they traditionally governed land use (e.g., neighborhood aesthetics), as in Tulk v. Moxhay (1848). For centuries their extension to chattels was sharply limited, met with judicial skepticism due to concerns over restricting alienability.
In the modern era, however, software licenses—bolstered by cases like MAI Systems v. Peak Computer (1993)—have enabled firms to impose perpetual restrictions on goods (e.g., printers or games). Unlike the shrinkwrap contract upheld in ProCD v. Zeidenberg (1996), which bound Zeidenberg himself but not his successors, equitable servitudes attach to the object itself, precisely the kind of restraint that was historically curbed to balance private control with public freedom of use.
Application to Digital Serfdom
D’Onfro argues that an anti-equitable servitudes law could address digital serfdom by limiting firms’ ability to use software licenses and adhesion contracts to control goods post-sale, reducing users’ subservience to platform dictates. For instance, HP’s Instant Ink locks printer cartridges via software, while Steam’s game licenses restrict resale—both bind downstream owners, eroding ownership into access. By banning such servitudes, legislation could restore property’s mandatory rules, ensuring consumers retain rights to use, modify, or sell digital goods (e.g., smart appliances or e-books) without perpetual firm oversight. This would weaken platforms’ feudal-like grip, empowering users against contractual overreach that defines digital dependency.
This approach directly targets the root cause of digital serfdom—contractual erosion of ownership—restoring consumer agency without broad industry restructuring. It leverages existing property law principles, requiring no new regulatory framework, and could be implemented via targeted statutes or judicial reinterpretation, as D’Onfro suggests Congress or the Supreme Court could act. It reduces waste and cost—e.g., usable ink or appliances wouldn’t be landfilled due to license limits—aligning with environmental and economic goals. It counters firm opportunism (e.g., subscription traps), enhancing market transparency and trust, which benefits consumers trapped by platforms like HP or Steam.
It is not, of course, a panacea. Firms will litigate as if their profit margins depended upon it, sparking legal battles that could run for decades. Innovation could suffer if companies, fearing lost control, reduce investment in software-driven products—e.g., smart devices might stagnate. Enforcement complexity arises; distinguishing permissible contracts from servitudes (unlike ProCD’s one-time assent) could clog courts with litigation over intent and scope. Overreach might disrupt legitimate business models—e.g., software-as-a-service like Adobe Creative Cloud—potentially alienating firms and consumers who prefer access over ownership, undermining the solution’s precision.
Proposal #6: Permit Unconscionability Actions
A proposal to expand the unconscionability doctrine into an affirmative cause of action to combat digital serfdom was advanced by Brady Williams in his article “Unconscionability as a Sword: The Case for an Affirmative Cause of Action,” published in the California Law Review in December 2019 (Volume 107, Issue 6).
Williams, then a Berkeley law student, argued that the traditional defensive use of unconscionability—merely voiding unfair contract terms—fails victims of oppressive digital contracts who have already suffered harm. His 50-page piece, focused initially on consumer credit but extensible to digital contexts, calls for courts to allow plaintiffs to seek restitution proactively, transforming the doctrine into an offensive “sword.” Written amid rising concerns about fine-print overreach by tech firms, Williams’s proposal targets the contractual roots of digital dependency.
Historical Context and General Definition
Unconscionability is an equitable contract law doctrine that allows courts to refuse enforcement of terms deemed “unreasonably and unexpectedly harsh” or “so one-sided as to shock the conscience,” rooted in English equity and codified in the U.S. Uniform Commercial Code (Section 2-302). Historically used as a “shield” since cases like Williams v. Walker-Thomas Furniture Co. (1965), where a predatory lease was struck down, it protects against adhesion contracts with gross power imbalances—e.g., fine print exploiting uneducated buyers. Traditionally, it applies ex post to void terms (e.g., excessive interest rates), not to grant affirmative relief, reflecting a reluctance to disrupt freedom of contract unless enforcement itself is unjust. Courts have occasionally reduced terms, as in Carboni v. Arrospide (1991), but rarely award damages proactively.
Application to Digital Serfdom
Williams suggests that an affirmative unconscionability doctrine could address digital serfdom by empowering users to challenge oppressive platform contracts after performance, not just during enforcement disputes. In digital contexts, adhesion contracts—e.g., forced arbitration clauses on Facebook, unilateral modification rights on Steam, or hidden fees on Amazon—bind users with little choice or awareness, reinforcing their subservience. By allowing restitution claims, courts could penalize firms for past overreach (e.g., reclaiming losses from unfair subscription terms) and deter future exploitation. For instance, a user locked into HP’s Instant Ink or denied Steam game access could sue for damages, weakening platforms’ feudal control over digital goods and services by making contractual fairness actionable, not just avoidable.
Consumers are drowning in a sea of one-sided fine print. To combat contractual overreach, consumers need an arsenal of effective remedies. To that end, the doctrine of unconscionability provides a crucial defense against the inequities of rigid contract enforcement. However, the prevailing view that unconscionability operates merely as a “shield” and not a “sword” leaves countless victims of oppressive contracts unable to assert the doctrine as an affirmative claim. This crippling interpretation betrays unconscionability’s equitable roots and absolves merchants who have already obtained their ill-gotten gains. But this need not be so. - Brady Williams, “Unconscionability as a Sword”
Williams’s approach provides direct redress for victims, shifting from passive defense to active empowerment, addressing the “no remedy” gap that otherwise exists in most digital transactions. Since it leverages existing equitable principles, it requires no new legislation—courts could evolve precedent, as in De La Torre v. CashCall (2018), to embrace offensive use. Its flexibility—e.g., reducing terms to “minimally tolerable” levels—balances consumer protection with contractual freedom, aligning with market dynamics while curbing the serf-like dependency on tech giants. Most importantly, it deters predatory terms by raising financial risks for platforms, fostering fairer contracts without dismantling market structures.
Yet, risks temper its appeal. Defining “unconscionable” affirmatively is imprecise—e.g., what threshold justifies restitution for a 200% interest rate versus a vague platform fee?—risking inconsistent rulings. Firms might raise prices or tighten terms preemptively to offset liability, potentially harming consumers indirectly. Overuse could flood courts with claims, straining resources and inviting frivolous suits, diluting the doctrine’s focus on truly oppressive cases and undermining its legitimacy in tackling digital serfdom effectively. For these reasons, judicial reluctance is likely to persist; making unconscionability into a “sword” would ultimately require statutory action, thus defeating the goal of gradual reform.
Proposal #7: Overhaul Antitrust Law
The majority staff of the House Judiciary Subcommittee on Antitrust, Commercial and Administrative Law assembled a proposal to overhaul antitrust law to address digital serfdom in their report Investigation of Competition in Digital Markets: Majority Staff Report and Recommendations, released on October 6, 2020.
Spanning 450 pages(!), this report capped a year-long investigation into the dominance of digital platforms like Google, Facebook, Amazon, and Apple, led by Democratic staff under Chairman David Cicilline. It advocates sweeping legislative reforms to restore competition and curb the power of these “gatekeepers,” framing their control as a modern feudal system over users and businesses. The report’s broad scope, endorsed with a relatively unusual level of bipartisan input, marks it as a pivotal call to action against digital serfdom.
The Staff Report lodges heavy criticism of the state of competition in the digital economy and antitrust enforcement, and recommends an array of legislative proposals—including specific reforms to address anticompetitive conduct in digital markets, as well as strengthening merger and monopolization enforcement and significant revisions to the antitrust laws generally—that would arguably represent the largest overhaul of antitrust law and enforcement in history, not just in digital markets but across all industries. A group of Republican Subcommittee members issued a separate report, The Third Way: Antitrust Enforcement in Big Tech, which supports bipartisan efforts to reform antitrust enforcement to address competitive harm in digital markets, but disagrees with some of the majority staff's recommended proposals.
Historical Context and General Definition
Antitrust law, rooted in the Sherman Act (1890), Clayton Act (1914), and Federal Trade Commission Act (1914), aims to prevent monopolies, restrain trade abuses, and promote competition across industries. Historically, it dismantled trusts like Standard Oil (Standard Oil Co. v. United States, 1911) and regulated mergers, as in United States v. Philadelphia National Bank (1963), which set market-share presumptions. It addresses exclusionary conduct (e.g., predatory pricing) and structural dominance, traditionally focusing on consumer welfare since the 1970s Chicago School shift, though earlier goals included protecting small businesses and democratic ideals. Tools like structural separation (e.g., AT&T’s 1982 breakup) and behavioral remedies have been used to curb power in sectors from railroads to telecom, evolving with economic and technological change.
Application to Digital Serfdom
The House report applies antitrust law to digital serfdom by targeting platforms’ market power and exclusionary practices that subjugate users and rivals. Proposals include structural separations (barring platforms from competing with dependent firms, such as Amazon Marketplace vs. sellers), nondiscrimination rules (stopping Google’s self-preferencing in search), and merger reforms (presuming harm from dominant firms’ acquisitions, like Facebook’s Instagram buy). It also seeks to lower monopolization thresholds (setting a 30% market share as “dominance”) and revive doctrines like essential facilities (discussed above). These measures aim to dismantle gatekeeping—Amazon’s book delisting or Google’s search manipulation—reducing users’ feudal dependency by fostering competition and choice in digital markets like e-commerce, search, and social media.
Antitrust reform seems to offer robust advantages over some of the other proposals. It addresses root causes—market concentration and anticompetitive conduct—systematically, not just symptoms, leveraging a century of legal precedent for legitimacy and adaptability. Its structural remedies could break dependency cycles (e.g., separating Amazon’s retail from Marketplace), empowering users and third parties without micromanaging operations. Enhanced enforcement (e.g., agency budget boosts) with proactive oversight could deter abuses like data misuse or predatory acquisitions. Its broad scope—beyond digital markets—could prevent serfdom’s spread to emerging tech, aligning with historical goals of fair markets and democratic access… or so the report asserts.
Implementation faces political hurdles; the original report’s scope split Republicans (see The Third Way dissent, discussed above) back in 2020, and with Silicon Valley courting the Right even harder today, tech lobbying could stall Congress for a generation. Overreach might harm innovation—breaking up firms or banning acquisitions of nascent rivals could chill investment, as Chicago School adherents warn. The complexity of enforcement (defining “dominance” and proving public interest in mergers) could bog down agencies and courts. And the ultimate consumer impacts are uncertain; higher compliance costs or fragmented services (e.g., no seamless Google ecosystem) might raise prices or degrade user experience, potentially trading one form of serfdom for another under regulatory overload.
Proposal #8: Impose Information Fiduciary Duties
The concept of imposing information fiduciary duties on digital platforms to address digital serfdom was proposed by Professor Jack Balkin, most prominently developed in a series of papers starting with “Information Fiduciaries and the First Amendment,” published in the UC Davis Law Review in April 2016 (Volume 49, Issue 4).
Balkin, a Yale law professor, refined this idea over subsequent works, including a 2018 article in The Atlantic, responding to scandals like Cambridge Analytica and concerns over tech giants’ data practices. His proposal gained traction, earning endorsements from scholars, lawmakers (e.g., a 2019 Senate bill), and even Mark Zuckerberg, as noted in “A Skeptical View of Information Fiduciaries” (Harvard Law Review, 2020). Balkin’s framework targets the trust-based vulnerability of users to firms like Facebook and Google, offering a legal fix to their feudal-like dominance.
Historical Context and General Definition
Fiduciary duties historically apply to relationships of trust and dependence—e.g., doctors, lawyers, or trustees—requiring care, confidentiality, and loyalty to clients over self-interest. Rooted in English equity law and codified in professions via statutes and case law (e.g., Meinhard v. Salmon, 1928), these obligations ensure fiduciaries prioritize beneficiaries’ interests, as when doctors safeguard patient data. Unlike public utility or antitrust regimes, fiduciary law governs conduct within private relationships, not market structure, and has been used to curb abuses of power without mandating access or competition. Balkin adapts this to digital firms, proposing “more limited” duties than traditional fiduciaries, reflecting their commercial nature, a concept first floated by Kenneth Laudon in the 1990s but fleshed out by Balkin for the internet age.
Application to Digital Serfdom
Balkin’s information fiduciary model addresses digital serfdom by imposing duties on platforms to protect users’ data and interests, reducing their unchecked power over dependent “serfs.” For instance, Facebook or Google, as fiduciaries, would owe users confidentiality (not selling sensitive data to third parties) and loyalty (not manipulating feeds for profit over user well-being), curbing practices like predatory ads or arbitrary bans.
On the one hand, I understand that human freedom in the information age requires regulation of new forms of social and economic power, just as it did in the first Gilded Age. On the other hand, I also believe in the constitutional freedoms of the First Amendment. This essay attempts to make these two commitments cohere - to show how protections of personal privacy in the digital age can co-exist with rights to collect, analyze, and distribute information that are protected under the First Amendment. - Jack M. Balkin, “Information Fiduciaries and the First Amendment”
This wouldn’t mandate serving all comers but would limit harmful exploitation—e.g., Twitter couldn’t silently censor based on whims without breaching trust. By reframing users as beneficiaries, not mere commodities, it weakens the feudal dynamic where platforms dictate terms unilaterally, offering a behavioral fix to data-driven dominance without structural upheaval.
This solution uses a familiar legal framework, requiring no new agencies or radical laws—courts could adapt fiduciary principles, as with doctors, easing adoption. It attempts to balance user protection with platform autonomy; limited duties avoid forcing speech or access, sidestepping First Amendment clashes, as Balkin emphasizes. It enjoys broad appeal compared to some of the other proposals—support from tech CEOs and bipartisan lawmakers suggests political feasibility, per the 2019 Senate bill. It directly tackles trust breaches (e.g., data misuse), empowering users against serf-like vulnerability without disrupting digital markets’ core functions, offering a pragmatic middle ground.
Vagueness plagues enforcement, though—what “limited” duties mean for Google versus a doctor is unclear, risking inconsistent rulings or weak standards, as “A Skeptical View” critiques. Even if specificity were achieved, it doesn’t address broader serfdom issues like market power or content suppression (e.g., disinformation spread), limiting its scope to data harms. Platforms will exploit loopholes—e.g., defining “loyalty” narrowly—or raise costs to offset liability, indirectly burdening users. Reliance on private litigation or agency action (e.g., FTC) could falter if underfunded or if firms resist, claiming First Amendment shields, leaving serfs with symbolic rights but little practical relief in a still-dominant digital fiefdom.
But perhaps the most telling argument against relying on “information fiduciary duties” is the extent to which all of our society’s fiduciaries have utterly failed us. Doctors, lawyers, and other professionals already carry fiduciary duties. Does anyone really believe that law firms, hospitals, banks, or scientific institutions act with much beyond naked self-interest? If not, why would we believe Facebook would?
Proposal #9: Prohibit Some Terms and Conditions
A proposal to prohibit certain terms and conditions in consumer contracts to address digital serfdom was advanced by the Consumer Financial Protection Bureau (CFPB) in a rulemaking notice titled “Prohibited Terms and Conditions in Agreements for Consumer Financial Products or Services,” published in the Federal Register on January 14, 2025.
Authored under the CFPB’s authority, this proposed regulation targets adhesion contracts in financial services, including digital platforms like payment apps, and builds on prior FTC rules. While not attributed to a single individual, it reflects the agency’s leadership under Director Rohit Chopra, responding to growing concerns about corporate overreach in the digital economy. The rule aims to curb contractual practices that entrench user dependency, marking a regulatory push against digital feudalism.
“Consumer finance companies often limit or restrict individual freedoms and rights by including coercive terms and conditions in contracts of adhesion. These types of contracts—which are ubiquitous in transactions for consumer financial products or services—are drafted by the companies or their lawyers and presented to consumers on a “take it or leave it” basis. Form contracts can create operational efficiencies for large businesses, but in recent years they have been used to constrain fundamental freedoms and rights that are recognized and protected under the U.S. Constitution and statutory and common law. While the Bill of Rights, with limited exceptions, only protects people from government actions, jurists have long recognized affirmative obligations regarding certain private actors, and scholars and jurists are increasingly recognizing that corporate intrusion into historically recognized individual rights poses a similar threat as government intrusion. Clauses buried in the fine print of these contracts can have dramatic consequences for consumers…” - Consumer Financial Protection Bureau Proposed Rule, 14 January 2025
Historical Context and General Definition
Prohibiting specific contract terms has historical roots in consumer protection law, aiming to void clauses that unfairly exploit weaker parties. The FTC’s Credit Practices Rule (1984) banned terms like confessions of judgment and wage assignments in credit contracts, deemed unfair under Section 5 of the FTC Act. Congress followed with laws like the Consumer Review Fairness Act (2016), barring gag clauses on reviews, and anti-waiver provisions in financial statutes (e.g., Truth in Lending Act). These measures historically targeted adhesion contracts—standardized, take-it-or-leave-it agreements—where power imbalances, as in Williams v. Walker-Thomas Furniture Co. (1965), justified intervention. The doctrine protects core rights (e.g., due process, speech) against private erosion, evolving with markets to address modern abuses.
Application to Digital Serfdom
The CFPB’s proposal applies to digital serfdom by banning terms in financial service contracts—relevant to platforms like PayPal or Venmo—that waive legal rights, allow unilateral amendments, or restrict free expression (e.g., negative reviews). Such clauses, ubiquitous in digital terms of service, reinforce feudal-like control by locking users into exploitative agreements with no recourse, as seen in arbitration clauses or sudden fee hikes. By codifying the Credit Practices Rule and adding prohibitions (e.g., against silencing speech), the rule ensures users retain statutory protections and voice, weakening platforms’ ability to dictate terms unilaterally. For instance, a payment app couldn’t nullify consumer laws or penalize criticism, reducing serf-like subservience to digital overlords.
This approach directly dismantles contractual tools of domination, enhancing user autonomy without restructuring markets, leveraging the CFPB’s existing CFPA authority for swift action. It builds on proven precedents (e.g., FTC rules), ensuring legal grounding and public acceptance, especially given widespread frustration with fine print. Protecting speech and due process aligns with constitutional values, countering private “takings” akin to government overreach, as the CFPB notes. Its narrow focus—specific terms—avoids overregulation, targeting only egregious practices while preserving legitimate contract freedom, offering a precise strike against digital serfdom’s underpinnings.
But since the scope is limited—applying only to financial products—it misses broader digital platforms (e.g., social media), leaving much serfdom untouched unless expanded. Even where it is applied, firms will almost certainly adapt by shifting burdens elsewhere—e.g., raising fees or exiting markets—harming consumers indirectly. Enforcement hinges on CFPB resources and political will; regulatory capture will over time erode its power. It also cannot stop innovation-driven exploitation. New tech could simply embed control in software, not legal terms, leaving serfs vulnerable to evolving feudal tactics beyond the rule’s reach.
Proposal #10: Oversee Content Moderation
The Federal Trade Commission (FTC) has recently proposed content moderation oversight as a solution to digital serfdom. The proposal came from Chairman Andrew N. Ferguson, in a Request for Information (RFI) announced on February 20, 2025, as reported by sources like the National Law Review and Reuters. It was published via an FTC press release and seeks public comments until May 21, 2025. Ferguson, driving this effort, framed it as a response to “un-American” and potentially illegal censorship by tech giants, targeting their power over speech. Emerging amid debates over deplatforming and shadow banning, this regulatory probe aims to address the feudal-like control platforms exert over users’ expression.
Historical Context and General Definition
Content moderation oversight, while new in its digital form, draws from the FTC’s historical consumer protection and antitrust authority under Section 5 of the FTC Act (1914), which prohibits “unfair or deceptive acts or practices.” Historically, the FTC regulated deceptive advertising (e.g., 1970s cigarette ad rules) and unfair practices, like the $5 billion Facebook fine in 2019 for data missteps. It also echoes public accommodation principles, as in Packingham v. North Carolina (2017), where the Supreme Court called social media the “modern public square.” The doctrine involves scrutinizing private firms’ policies—here, moderation rules—for transparency and fairness, traditionally used to ensure equitable treatment in commerce, now extended to digital speech governance to curb arbitrary exclusion.
Application to Digital Serfdom
The FTC’s oversight targets digital serfdom by examining how platforms like Twitter, YouTube, or Facebook use moderation—e.g., bans, demonetization, or shadow banning—to silence users, reinforcing their lord-like dominance over digital discourse. The RFI probes opaque policies, lack of appeal processes, and external pressures (e.g., from advertisers), which leave users as serfs with no voice or recourse. By deeming misleading moderation deceptive or coordinated suppression anticompetitive, the FTC could mandate transparency and due process, as Ferguson suggests, reducing platforms’ unilateral power. For example, a YouTuber demonetized without explanation could gain appeal rights, weakening the feudal grip that stifles expression and dependency on platform whims.
Censorship by technology platforms is not just un-American, it is potentially illegal. Tech firms can employ confusing or unpredictable internal procedures that cut users off, sometimes with no ability to appeal the decision. Such actions taken by tech platforms may harm consumers, affect competition, may have resulted from a lack of competition, or may have been the product of anti-competitive conduct. - FTC Press Release, 20 Feb 2025
The FTC proposal leverages existing FTC powers, requiring no new laws—data from the RFI could spark enforcement or rules swiftly, as seen in past privacy actions. It directly enhances user agency over speech, a core serfdom grievance, aligning with free expression values Ferguson champions. Its bipartisan appeal—echoing Trump-era censorship concerns—boosts feasibility, potentially uniting regulators and public. Its investigative nature allows flexibility; findings could tailor solutions (e.g., fines, policy mandates) to specific abuses like shadow banning, offering a focused fix without broad market disruption.
The FTC’s legal authority to implement the proposal is already being contested—The Foundation for Individual Rights and Expression rejected it the same day it was proposed. Platforms will claim First Amendment protection for moderation, as in Halleck (2019), challenging FTC jurisdiction and delaying action. Even if it passes Constitutional muster, the proposal might backfire; mandating openness could flood platforms with harmful content (e.g., misinformation), trading serfdom for chaos if firms overcorrect. Scope creep also looms—regulating speech risks political bias accusations, undermining FTC credibility, especially given Ferguson’s strong rhetoric. Enforcement depends on resources and evidence; if public comments lack substance or funding lags, the inquiry could fizzle, leaving serfs with symbolic scrutiny but no tangible relief from digital overlords. Finally, and perhaps most dangerously, it strengthens a federal agency at a time when the corruption of our federal agencies has never been more apparent.
Which (If Any) Proposal Should We Adopt?
The table below summarizes the approach taken by the 10 different proposals, highlighting the problem they seek to address, the source of the reform, the proposed method of enforcement, the primary focus of the reform, and the likely scope of application.
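| Proposal | Problem Addressed | Source | Enforcement | Primary Focus | Likely Scope |
|---|---|---|---|---|---|
| 1. Common carriage | Arbitrary exclusion and censorship by dominant platforms | Justice Thomas, Biden v. Knight First Amendment Institute concurrence (2021) | Courts applying common law | Nondiscriminatory carriage of speech | Platforms with substantial market power |
| 2. Public accommodation | Arbitrary exclusion from public-facing services | Justice Thomas, same concurrence | Legislatures and courts | Equal access regardless of market share | Any platform held open to the public |
| 3. Public utility regulation | Monopoly control of essential digital infrastructure | Lina Khan, “Amazon’s Antitrust Paradox” (2017) | Regulatory agencies | Nondiscrimination, rate-setting, universal service | Dominant network platforms |
| 4. Essential facilities | Exclusion from infrastructure rivals cannot duplicate | Lina Khan, same article | Antitrust courts, FTC/DOJ | Mandated access to key facilities | Monopolist-controlled infrastructure (e.g., Marketplace, AWS) |
| 5. Block equitable servitudes | Contracts eroding ownership after sale | Danielle D’Onfro, “Contract-Wrapped Property” (2020) | Statute or judicial reinterpretation | Restoring property rights in goods | Software-encumbered chattels |
| 6. Unconscionability actions | No remedy for oppressive terms already performed | Brady Williams, California Law Review (2019) | Private plaintiff suits | Affirmative restitution for unfair terms | Adhesion contracts generally |
| 7. Antitrust overhaul | Market concentration and gatekeeping | House Judiciary majority staff report (2020) | Congress plus FTC/DOJ | Structural separation, nondiscrimination, merger reform | Dominant digital platforms and beyond |
| 8. Information fiduciary duties | Abuse of user data and trust | Jack Balkin, UC Davis Law Review (2016) | Courts, FTC, or statute | Duties of care, confidentiality, loyalty | Data-collecting platforms |
| 9. Prohibit terms and conditions | Coercive clauses in adhesion contracts | CFPB proposed rule (Jan. 2025) | CFPB rulemaking | Banning specific contract terms | Consumer financial products and services |
| 10. Content moderation oversight | Opaque, arbitrary moderation | FTC RFI under Chairman Ferguson (Feb. 2025) | FTC Section 5 enforcement | Transparency and due process in moderation | Social media platforms |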
Originally I had planned to finish this essay by declaring a “winning proposal”… but now that I’ve finished going through them all, I find myself unable to select a clear winner. Therefore I’m going to end with some broad sentiments instead.
I have a certain fondness for the three approaches rooted in the common law (common carriage, public accommodation, equitable servitudes). The virtue of the common law was that it worked - its rules and standards were crafted over centuries during which Anglo-American power reached its peak. If we were able to use the doctrines of common carriage and public accommodation in the 19th century without destroying our market society, perhaps we could implement them again today. If we had good reason to ban equitable servitudes in chattel in the 19th century, maybe there remains good reason to ban them today.
In contrast to these late 19th century “classical liberal” approaches, overhauling antitrust law or implementing public utility regulations are early 20th century approaches that would have delighted that era’s Progressives. Most of these efforts had failed by the late 20th century, when governments throughout the West began to deregulate and privatize utilities and de-emphasize antitrust action as more harmful than helpful. I suspect they would fail us in the 21st century, too.
Allowing a cause of action for unconscionability is an idea that has merit, but it would require a robust statutory framework behind it to establish what unconscionability actually is — otherwise courts will simply fall back on the idea that if parties consented to the contract, it can’t really be unconscionable! The “teeth” behind this cause of action would have to come from other proposals.
The 2025 proposals by the CFPB and FTC might do good things in the short-term, while Trump is in charge, but make matters worse in the long-term if Democrats regain control — Right-leaning libertarians rarely control the levers of government power for long. Asking the administrative agencies to control corporations is like asking the jaguars to monitor the diet of the leopards.
What seems to be missing from all of these proposals is a truly robust overhaul of property law, one that is designed for the 21st century digital world. Market power is not going to be regulated out of existence. Inequality of bargaining power is inevitable. As long as a purely contract law framework is used to govern our relationship with the digital world, we will remain digital serfs.
Next week we’ll examine a book that might hold better answers. Owned: Property, Privacy, and the New Digital Serfdom is a 2017 book by Joshua A. T. Fairfield. In it, Fairfield proposes a series of policy reforms and legal adjustments aimed at restoring traditional property rights to digital and smart property. It seems that Fairfield coined the phrase “digital serfdom” eight years before me, so I owe it to him to give it a read!
In the meantime, contemplate the plight of your digital serfdom on the Tree of Woe. Be sure to jump in the comments and let me know which (if any) of the ten proposals today seems to have merit.
Quite the tour de force!
My inclination is for brute force antitrust measures, some of which are compatible with libertarian sentiments.
By brute force, I mean a general downscaling of the biggest corporations, whether they be true monopolies or not. For a tiny niche, the value of economies of scale may outweigh monopoly pricing. If the niche is small, raising capital to invade the niche is feasible, so the implied threat of competition can be more credible than in an oligopoly of giant firms.
There are simple, rule-of-law compatible measures to do this. For starters, let's recognize that overregulation of public corporations increases economies of scale. Thanks to Dodd-Frank, small investors are "protected" from being able to invest in growth companies while they are still in full-on growth mode.
Another measure is to level the cost of capital for new firms vs. giant firms retaining earnings. Amazon can enter a new niche using pre-tax retained earnings. Some kind of cap on using R&D for new businesses as a deduction against existing products may be in order. (Or this might be too complicated.) Note that Amazon grew enormous while showing no profits for years -- and they didn't require mass infusions of fresh capital.
Finally, make the corporate income tax truly progressive. Currently, the rate is flat. Back in 2017, it was lumpy, with the highest bracket being 39% for $100,000-335,000. The rate for the kinds of corporations you see listed on exchanges was 35%, with no difference between an Apple and a penny mining stock.
Make the brackets step up on the major lines of a log graph, and the market will break up the behemoths. Indeed, corporations with high retainable earnings may function as incubators. DuPont was a prime example of this model. They would develop a product and then spin it off. At one point they owned General Motors.
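To make that concrete, here is a rough sketch in Python of what log-spaced brackets might look like; the thresholds and rates below are purely hypothetical, not drawn from any actual statute or proposal.

```python
# Hypothetical illustration of corporate tax brackets that "step up on the major
# lines of a log graph." All thresholds and rates are invented for illustration.

HYPOTHETICAL_BRACKETS = [
    (1e6, 0.15),            # up to $1M of taxable income
    (1e7, 0.20),            # $1M - $10M
    (1e8, 0.25),            # $10M - $100M
    (1e9, 0.30),            # $100M - $1B
    (1e10, 0.35),           # $1B - $10B
    (float("inf"), 0.40),   # above $10B
]

def progressive_corporate_tax(income: float) -> float:
    """Tax owed under the hypothetical log-spaced bracket schedule."""
    tax, lower = 0.0, 0.0
    for upper, rate in HYPOTHETICAL_BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate  # tax only the slice in this bracket
        lower = upper
    return tax

if __name__ == "__main__":
    for income in (5e5, 5e7, 5e9):
        effective = progressive_corporate_tax(income) / income
        print(f"income ${income:>15,.0f} -> effective rate {effective:.1%}")
```

Under this sketch a $500,000 firm pays roughly a 15% effective rate while a $5 billion firm pays roughly 34%, which is the scale penalty the comment is after.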
One thought I had was to tax social networks based on Metcalfe's law. The value of a social network scales roughly as n squared (more precisely, with the n(n-1) possible connections), where n is the number of nodes on the network. Current tax systems are linear, while the value of networks grows quadratically with the number of users.
Basically, structure the taxes in such a way as to discourage social networks from getting too big. A larger number of smaller social networks would reduce their power over the individual. With just one social network embracing free speech, the other networks have taken a step in that direction. So more competition is better.
But the tax would only apply if the social network used an algorithm to sort content. The power is mostly in the algorithm.
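For a rough sense of how a Metcalfe-style levy would diverge from a linear per-user tax, here is a hypothetical sketch; the per-user rate, the constant k, and the algorithmic-feed exemption flag are all invented for illustration.

```python
# Hypothetical illustration of a Metcalfe's-law levy on social networks.
# Metcalfe's law holds that a network's value grows roughly as n^2 (n = users),
# while conventional levies scale linearly. All rate constants are invented.

def linear_levy(users: int, per_user_rate: float = 0.10) -> float:
    """A conventional levy that scales linearly with user count."""
    return per_user_rate * users

def metcalfe_levy(users: int, k: float = 1e-8, algorithmic_feed: bool = True) -> float:
    """A levy scaled to the network's Metcalfe value (~n^2).

    Per the comment, the levy applies only to networks that sort content
    algorithmically; purely chronological feeds are exempt.
    """
    if not algorithmic_feed:
        return 0.0
    return k * users * (users - 1)

if __name__ == "__main__":
    for n in (1_000_000, 100_000_000, 1_000_000_000):
        print(f"{n:>13,} users: linear ${linear_levy(n):>14,.0f}  "
              f"Metcalfe ${metcalfe_levy(n):>18,.0f}")
```

With these made-up constants, the Metcalfe levy is trivial for a million-user network but dwarfs the linear levy at a billion users, which is the scaling pressure the comment describes.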