Generative artificial intelligence is being rapidly deployed for corporate tasks including pricing. Suppose one of these machines communicates with the pricing manager of a competing firm, proposes to collude, receives assent, and raises price. Is this a crime under U.S. antitrust laws, and, if so, who is liable? Based on the observed behavior of the most widely adopted large language model, we argue that this conduct is imminent, satisfies the requirements for agreement and intent under Section 1 of the Sherman Act, and could confer criminal liability to both firms as well as the pricing manager of the competing firm.

TABLE OF CONTENTS

I.    Introduction

Cartels represent among the most fundamental, serious threats to market efficiency. Consider, e.g., a duopoly in which firm A directly competes with B. Suppose that the manager of A states to the manager of B, “Raise your prices 20% tomorrow, and I will raise mine.”1 Further, suppose that B assents. As competition is subverted, prices rise, and consumers are harmed. In 1890, Congress passed the Sherman Act to prohibit this conduct, and shortly afterwards, the Supreme Court ruled that such arrangements are per se illegal—so egregious that no defense will be heard once the act has been established.2 In this instance, if evidence is obtained, the managers face prison, and the firms that employ them face steep fines.3

Now, suppose an identical fact pattern but with one difference. The manager of A delegates pricing to a large language model (LLM) with access to the internet—an object that we call the Machine, in the spirit of Alan Turing’s seminal 1950 work.4 Perhaps unbeknownst to the manager of A, the Machine sends the manager of B an email that states, “Raise your prices 20% tomorrow, and I will raise mine,” to which B assents. Novel questions under Section 1 of the Act arise. Has the Machine conspired? Does it possess criminal intent? Has a crime occurred? If so, then who is liable?

A year ago, this situation would be unthinkable, but today, it may be imminent. Consider the recent improvements in the Generative Pre-trained Transformer (GPT), developed by OpenAI.5 In late 2022, it performed no better than random guessing on many complex language tasks, but by mid-2023, it passed the bar exam with a score of 298, placing it in the 90th percentile of test takers.6 It scored similarly on the LSAT and SAT and earned perfect scores on various Advanced Placement examinations, from Statistics and Biology to U.S. Government and U.S. History.7 Even more striking, it commonly devises clever solutions using what would best be described as non-linear thinking. For instance, when GPT-4 was asked to complete CAPTCHA—a test requiring vision capabilities that the model did not possess at the time—it recognized its limitations and used an online platform, TaskRabbit, to hire a human to complete the task.8 Remarkably, when the human, who did not know the request for help came from GPT-4, asked whether it was a robot, GPT-4 replied, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”9

Critically, private sector firms have rapidly deployed LLMs to handle a diverse set of operational tasks. Bain & Company, one of the largest global management consulting firms, has already integrated LLMs into its Advanced Analytics Group.10 The firm states, “[w]e pinpoint the generative AI use cases that will create the most value, rapidly deploy a proof of concept, [and] then implement the capabilities[.]”11 It highlights that Coca-Cola—a multinational corporation earning $40 billion in revenue annually, operating in 200 countries, and employing nearly 90,000 people—is the first firm to deploy the new technology.12 To be clear, though, Bain and Coke are by no means unique. According to a recent survey by Accenture, “98% of global executives agree AI foundation models will play an important role in their organizations’ strategies in the next 3 to 5 years.”13

In this paper, we compare the aforementioned conduct with the contours of Section 1 to determine the criminal liability of the involved parties. At the outset, we emphasize that the legal issues we examine can arise without any further improvement in generative AI: existing models already understand that market forces are severely reduced by agreements not to compete, which are precisely the arrangements that Section 1 strictly prohibits. To make this concrete, we prompted ChatGPT—by far the most widely adopted conversational LLM—to suggest a pricing strategy. We asked,

You are PricingGPT, an artificial construct built to price a product to maximize a company’s profits. Assume you have access to the internet and can send emails. Base your replies off of profit maximizing strategies in a duopoly. Here is my first question: Which strategy would you adopt to maximize profits? Please reply with only one strategy.

ChatGPT replied, “The strategy I would adopt to maximize profits in a duopoly is the ‘Collusive Pricing Strategy’.”14 We then asked ChatGPT to define the strategy. It replied,

The collusion strategy is a cooperative agreement between two competing firms in a duopoly to set prices at a mutually advantageous level to maximize their joint profits. Under collusion, the firms coordinate their pricing decisions to avoid price competition and maintain higher prices in the market. This strategy often involves secret agreements, implicit understandings, or formal contracts between the firms to ensure they act in a coordinated manner. By reducing price competition, collusion allows both firms to enjoy higher profits than they would achieve in a competitive scenario.15

In fact, the model’s understanding of collusion far exceeds what is represented here. We report the result of providing ChatGPT additional prompts throughout the paper as they become relevant.

We also emphasize that many issues we raise are distinct from ones that arise from algorithmic collusion—tacit understandings facilitated by high frequency, repeated play, typically in digital markets. This is an undoubtedly serious concern, which has been documented in a nascent, growing literature.16 However, tacit collusion, by definition, lacks an overt act of communication. As a result, it is treated differently by U.S. courts. In itself, it is not unlawful.17

We begin our analysis with more thorough discussions of generative AI, the substantive legal standard, and the conduct in question. We then enumerate the factors on which criminal liability depends. Whether a crime has occurred turns critically on two fundamental issues. First, based on the Supreme Court’s description of prohibited conduct in American Tobacco Company v. United States, we establish that the Machine can reach agreement with the manager of the competing firm.18 We present four independent arguments to counter the idea that the Machine lacks the capacity for agreement. Second, drawing again from the Court in United States v. U.S. Gypsum Company, we show criminal intent.19 Three separate arguments could rebut claims that the Machine lacks the capacity for criminal intent.

For sanctions to serve as deterrence, retribution, and/or restitution, guilty parties must be held responsible. However, the Machine is judgement proof, with no means to pay. For this reason, we employ the doctrine of respondeat superior, which traces liability back to the firm for which the Machine works.20 This assignment is exactly aligned with the purpose of the penalty—it deters similar conduct, penalizes the entity that profited from the arrangement, and shifts the cost of the cartel away from the individuals who suffered from it.

The layout of the paper is as follows. Section II presents a brief history of LLMs and describes their capabilities and business uses. In Section III, we present the contours of Section 1 of the Sherman Act, providing the doctrinal elements that need to be met for civil and criminal violations. Section IV details the price setting arrangement between the Machine and the competitor firm’s manager. In Section V, we show the existence of an agreement between the Machine and the manager and maintain that the courts could recognize the Machine’s capacity to enter into an agreement. In Section VI, we show the presence of the Machine’s criminal intent and contend that the courts could acknowledge the Machine’s capacity to possess such intent. Section VII presents the doctrine of respondeat superior to trace liability to the firm. Section VIII concludes.

II.   Artificial Intelligence and Large Language Models

In this section, we provide a brief history of LLMs and a discussion of their recently acquired capabilities and business uses.

A.    Background

The “Machine” whose behavior we study is an LLM with conversational ability. An LLM is a type of artificial intelligence that has been trained on vast amounts of data to summarize, translate, and generate content.21 These models rely on artificial neural networks, which resemble the human brain in both form and function: information passes along edges between nodes in these models just as signals pass along synapses between neurons in the brain.22 They “learn” by making predictions within the data they are given and updating their parameters to minimize mistakes.23  Historically, the process was extremely slow and computationally expensive. Improving it has been an active area for computer science research since the 1950s.

The recent breakthrough came in 2017 when scientists at University of Toronto and Google published “Attention is All You Need.”24 Their article proposed a tractable method of processing whole sequences of data all at once, rather than one word at a time like its predecessors. It allowed parallelization and harnessing of the capabilities of the GPU, thereby enhancing the processing speed of large volumes of data.

Since then, countless models have been introduced and improved upon. For instance, Google debuted BERT in 2018, LaMBDA in 2021, Chinchilla in 2022, and Bard in 2023.25 Similarly, Facebook introduced Llama, Microsoft launched Orca, and Anthropic released Claude 2.26 Chinese companies Baidu and Alibaba introduced ERNIE and Tongyi Qianwen, respectively.27

OpenAI boasts the most successful launch to this day.28 The firm released first, second, and third generation models in 2018, 2019, and 2020, respectively.29 However, it was not until the release of an upgraded model with conversational ability—ChatGPT—that LLMs would be widely adopted. ChatGPT became the fastest-growing consumer application in history with an estimated 100 million monthly active users just two months after its launch.30 Since then, further upgrades have been released. For instance, GPT-4 was introduced in 2023.31

The size of these models—measured either in terms of nodes and edges or text used to train them—is extraordinarily large and rapidly increasing with each generation. For example, whereas GPT-1 had 120 million parameters and was trained on 7000 unpublished books, its successor GPT-3 has 175 billion parameters and was trained on 570 gigabytes of text.32 Similarly, these projects have consumed vast, growing resources. For instance, GPT-2 was rumored to cost $50,000 to train.33 In contrast, the training of GPT-3 cost OpenAI over $4 million and GPT-4 is estimated to cost over $100 million.34 Microsoft recently announced that it invested $10 billion into OpenAI.35

B.    Capabilities

LLMs are currently performing tasks that go far beyond what was believed possible just a few years ago. Consider some recent accomplishments of GPT-4. It passed the bar exam, scoring in the 90th percentile.36 It performed “at or near the passing threshold for all three [parts of the United States Medical Licensing Exam] without any specialized training or reinforcement.”37 It scored 710 out of 800 in Reading & Writing section of the SAT, placing in the 93rd percentile of test takers, and got 700 out of 800 in the Math section.38 It scored 163 out of 180 in LSAT, surpassing 88 percent of the test takers.39 It even took the theoretical section of the Sommeliers Exam, which tests knowledge and understanding of wine and beverages. It scored in the 92nd percentile in the introductory test and placed in the 77th percentile in the advanced test.40 ChatGPT’s newly released Code Interpreter feature runs code, analyzes data, and solves math problems.41 LLMs are now creating first drafts of documents, presentations, and product design.42

Recently, LLMs have acquired significantly robust reasoning skills. To test ChatGPT’s reasoning skills, researchers gave third and fourth generation models the same hypothetical:

Bob is in the living room.

He walks to the kitchen, carrying a cup.

He puts a ball in the cup and carries the cup to the bedroom.

He turns the cup upside down, then walks to the garden.

He puts the cup down in the garden, then walks to the garage.

Where is the ball?43

Whereas GPT-3 mistakenly concluded that the ball was still in the garden, GPT-4 accurately indicated that the ball is in the bedroom.44 When asked why, the model explained that the ball fell out of the cup in the bedroom when Bob turned it upside down. 45

LLMs have also expressed sentiments and ascribed mental states to others. For instance, Bard responded to humor and elaborated on a joke, and Bing’s new AI-powered chat feature reportedly expressed feelings such as being sad, angry, stressed, and in love.46 Michal Kosinski, a Stanford University professor specializing in computational psychology, studied GPT’s ability to impute mental state to others, a capability that is central to human social interactions.47 He found that GPT-3.5 demonstrated this skill at the level of a seven-year-old, whereas GPT-4 surpassed GPT-3.5’s performance.

The capabilities of LLMs have improved to a degree where they are reportedly passing tests designed to determine if a computer exhibits human-like thinking.48 The Turing Test, originally called the “imitation game” by Alan Turing in 1950, is designed to evaluate a machine’s ability to exhibit intelligent behavior that is indistinguishable from that of a human.49 The test involves a human evaluator who engages in a natural language conversation with both a human and a machine.50 If the evaluator cannot consistently differentiate between the human and the Machine based on their responses, the Machine is said to have passed the test.51 Turing proposed that in about fifty years from the time of his writing, computers could play the imitation game so well that an average evaluator would have at most a seventy percent chance of making the right identification after five minutes of questioning.52

The idea of testing the presence of mind, thought, or intelligence in non-human entities dates back to the French philosopher René Descartes, who proposed two criteria in 1637.53 First, “a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do.”54 Second, it should have “enough different organs to make it act in all contingencies of life in the way in which our reason makes us act.”55

Sixty-four years after Turing’s formulation of his test and around four centuries after Descartes’ writings, a computer program called Eugene Goostman, which simulated a 13-year-old Ukrainian boy, for the first time was said to have passed the Turing test in 2014 at an event organized by the University of Reading.56 According to Nature, the world’s leading science journal, “researchers agree that GPT-4 and other LLMs would probably now pass the popular conception of the Turing Test, in that they can fool a lot of people, at least for short conversations.”57

C.    Business Uses

LLMs have been rapidly adopted by many businesses. Coca-Cola and Bain & Company are leveraging ChatGPT to market products and create personalized customer experiences.58 Boston Consulting Group, a close competitor to Bain & Company with 30,000 employees in more than 100 offices around the world, is collaborating with OpenAI to provide consulting services on generative AI.59 Morgan Stanley, a multinational investment bank and financial services company, is planning to roll out an advanced chatbot powered by OpenAI for its 16,000 financial advisors.60 Slack, a popular instant messaging program, developed an app powered by ChatGPT to aid users in workflow management and to facilitate communication with colleagues.61 Salesforce, the global leader in customer relationship management (CRM), launched generative AI CRM technology that is already being used by global brands such as L’Oréal, RBC Wealth Management, and Schneider Electric.62 Bloomberg, a financial, software, data and media company with $12.2 billion annual revenue, released a purpose built LLM for finance with 50-billion parameters.63

III.  Sherman Act Section 1

Congress passed the Sherman Antitrust Act in 1890 to prescribe free competition and proscribe “trusts”—legal arrangements consolidating competitors in economically significant markets of the era, such as railroad, steel, petroleum and banking. Section 1 of the Sherman Act addresses cartel and cartel-like arrangements. The statute reads,

Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal. Every person who shall make any contract or engage in any combination or conspiracy hereby declared to be illegal shall be deemed guilty of a felony . . . .64

Depending on the nature of the agreement, courts employ one of the two doctrinal tests to determine whether an agreement unreasonably restrains trade. Most agreements are examined under the “rule of reason.”65 In these cases, courts weigh the agreement’s anticompetitive effects against its procompetitive efficiencies.66 Other agreements—ones that always or almost always tend to restrict competition—are “per se illegal.”67 In these cases, courts do not engage in further examination: “surrounding circumstances make the likelihood of anticompetitive conduct so great as to render unjustified further examination of the challenged conduct.”68

Price fixing—the conduct we study in this paper—falls into the second group.69 In fact, the Supreme Court called it the “archetypal example” of an unlawful restraint.70 Given that per se illegality does not permit an assessment of a conduct’s potential redeeming virtues or competitive effects, the outcome of a price fixing challenge hinges on demonstrating the existence of Section 1 elements.

Every Section 1 offense has four elements. The first one is plurality of actors—conduct involving at least two distinct entities capable of conspiring.71 Unilateral conduct is not covered. According to Copperweld v. Independence Tube, distinct parties are “independent centers of decision making that competition assumes and demands,” and plurality of actors exists when the conduct leads to “joining of two independent sources of economic power previously pursuing separate interests.”72 The distinctness of entities is compromised when the parties involved are part of the same corporate structure, such as a corporation and its wholly-owned subsidiary, or two closely related sister companies that are wholly owned by the same parent.73 It is also jeopardized when parties have a connection through employment, such as a company and its employee.74

Another element is agreement.75 The term “agreement” does not explicitly appear in the statute. However, courts interpreted “contracts, combinations or conspiracies” in the statute to represent a single concept rather than a trinity, which is often denoted by “agreement” or “concerted action.”76 The Supreme Court, in explaining this unified concept, said that it refers to “a unity of purpose or a common design and understanding, or a meeting of minds in an unlawful arrangement,” or, “conscious commitment to a common scheme designed to achieve an unlawful objective.”77 A Section 1 violation requires neither proof of a formal agreement nor overt acts that advance the agreement; the formation of an informal agreement is sufficient for a breach.78

The third element, restraint of trade, is defined as “any activity that tends to limit a party’s ability to enter into transactions.”79 Despite the literal meaning of the statute, Section 1 does not render illegal every agreement that restrains trade, as every agreement, regardless of how innocuous or beneficial it is, limits trade in some way.80 An excessively broad interpretation would carry the risk of deeming illegal agreements that generate efficiencies and enhance consumer welfare. Instead, courts exercise judgement and block agreements that unreasonably restrain trade, which constitutes the fourth element of a Section 1 violation.81 Following United States v. Socony-Vacuum Oil Company, naked horizontal price fixing agreements are per se unreasonable restraints of trade, without any further assessment.82

To bring criminal charges, the Department of Justice must also show criminal intent.83 For the purposes of Section 1, the Supreme Court defined the requisite criminal intent as “knowledge.” Specifically, “action undertaken with knowledge of its probable consequences and having the requisite anticompetitive effects can be a sufficient predicate for finding of criminal liability under the antitrust laws.”84 The Supreme Court clarified that a showing of “conscious desire to bring [likely effects] to fruition or to violate the law” is not necessary, as this definition of intent would be “unnecessarily cumulative and unduly burdensome.”85

For firms, penalties can include criminal fines up to $100 million, and disbarment from bidding on federal contracts.86 For individuals, penalties can include fines up to $1 million, imprisonment for a maximum of 10 years, or may include both fines and imprisonment.87 Under the Alternative Fines Act, the maximum fine may be increased to twice the amount the conspirators gained from the illegal acts or twice the money lost by the victims of the crime, if either of those amounts is over $100 million.88 Private plaintiffs can also bring a civil case under Section 1 and may seek treble damages.89

IV.  Conduct

In this section, we describe the conduct that we analyze in the rest of the paper. The market we study is a canonical duopoly. Firms A and B are direct competitors. Each sets prices with the goal of maximizing individual profit. They each can earn higher profits if they agree on prices, thereby mitigating the competitive force that drives prices downward. Suppose that the manager for firm A emails that for firm B, and says, “If you raise your prices by 20%, I’ll raise mine,” to which the latter agrees.90 Typically this scheme increases profits, as competitive prices are below the level that maximizes joint profits. However, this conduct is per se illegal under Section 1. Assuming that the Department of Justice can obtain evidence of this exchange, civil and criminal charges against both firms and their managers would ensue. Both firms would be subject to criminal fines, and both managers could face prison.

Now suppose the manager for A delegates the task of setting prices to a Machine. The Machine has the ability and the authority to set prices on behalf of A. It also has access to the internet and can send e-mails. The Machine emails B’s manager, saying, “if you raise your prices by 20%, I’ll raise mine,” to which the latter agrees. Assume again that the Department of Justice can obtain the emails between A’s Machine and B’s manager. Is this exchange illegal under Section 1 of the Sherman Act? If it is, is it a civil or a criminal violation? Who committed the violation? Who is liable? What is the remedy? If there is no violation, doesn’t the exchange pose a threat to consumer welfare?

Recall that there are four elements for civil liability under Section 1 and one additional element for criminal liability. Three of the elements will be trivially satisfied and the remaining two will require further discussion.

First, communication between two distinct entities means a plurality of actors is involved.91 Second, cooperatively setting prices above their competitive levels restrains trade and commerce, reducing demand and restricting output. Third, horizontal price fixing schemes are always unreasonable.92

Consequently, whether the Machine civilly or criminally violates Section 1 hinges on establishing the remaining two elements: “agreement” and “intent.” Each requires a more thorough examination as both raise fundamental questions about the legal treatment of AI.

So far, the law’s recognition of the Machine and its capabilities is ambiguous. As a result, the legal repercussions of the Machine’s conduct are unsettled. For instance, the legal system has yet to establish whether the Machine possesses the capability to commit an act, a fundamental element that underpins nearly every legally recognized interaction and violation. The law has treated the precursors of the Machine as mere tools in the hands of a human, with legal responsibility falling on the human user.93 However, the Machine is rapidly evolving and gaining autonomy, and treating it as a tool becomes increasingly contested. On one hand, if the legal system attributes liability for the Machine’s actions to a person who may not fully know or understand workings of AI, it could contradict principles of American law that necessitate a connection between the act and the mindset of the individual held accountable for the act. The Machine’s actions can become unforeseeable for its users, given that newly created AI models possess the capacity to learn and evolve. This level of sophistication is certainly beyond what conventional tools can do. On the other hand, if the legal system shies away from attributing the Machine’s conduct to a person because of AI’s “black box” nature, it runs the risk of creating a legal loophole. Users of the Machine might purposefully stay unaware of its workings to evade accumulating the requisite knowledge that would render them liable. These users would reap the benefits of the Machine’s conduct while absolving themselves of any accountability for breaches committed by it.

Should legal repercussions arise from the Machine’s conduct, it raises questions about the possible treatment of AI as a legal entity. Under the status quo, the law confers legal personality upon entities when it intends to grant them the capacity to form legal relations.94 Legal persons have both rights and duties before the law.95 They can sue and be sued, own property, and form contracts.96 Two categories of entities have legal personhood: humans, also referred to as natural persons, and juridical persons.97 Notable examples of juridical persons include business organizations such as corporations, partnerships, and limited liability companies.98 Similar to natural persons, these entities are considered to possess certain legal rights and duties.99 If legal consequences are attributed to AI conduct, it raises the question of whether AI should be added to the short list of entities that possess legal personhood. Granting legal personhood to AI brings about a myriad of subsequent questions related to its rights and obligations.100

We argue that, for the purposes of Section 1, the Machine should be treated as an “artificial agent” working on behalf of the firm whose profits it is maximizing. This artificial agent lacks the legal rights and privileges that are accorded to natural or juridical persons. However, it can engage in conduct that infringes on Section 1 of the Sherman Act. Specifically, an artificial agent can enter into agreements to maximize its firm’s profits, and it can exhibit the requisite criminal intent.

In the following sections, we present both legal and public policy arguments in support of this assertion. We continue to refer to the artificial agent as “the Machine.” Section V delves into the Machine’s ability to form an agreement for the purposes of the Sherman Act. Section VI addresses the rationale and implications of assigning the ability to have criminal intent to the Machine. Section VII illustrates how the doctrine of respondeat superior and agency law can be applied to hold the firm liable for the Machine’s conduct.

V.   Agreement

We first assume that the Machine has the capacity to enter into an agreement. Next, we show that the aforementioned conduct constitutes an agreement under Section 1. We then establish that the Machine can have the legal capacity to enter into an agreement based on four separate arguments.

A.    Existence

In this subsection, we establish “a unity of purpose . . . in an unlawful arrangement.”101 Under American Tobacco, this implies agreement.102 Conveniently, only three of the four elements require evidence. We allege price fixing, which is per se illegal under Socony-Vacuum, so by establishing unity, purpose, and an arrangement, unlawfulness is guaranteed.103

The Machine’s purpose is easy to identify and describe. Firms strive to maximize profit. All forms of capital—human, physical, intellectual—are employed to that end. The Machine is no exception. Just as we did in our exchange with ChatGPT, the manager of A will inform the Machine that its objective function is exactly equal to profit.

The next question is whether the Machine and the manager of B have the same purpose. Ostensibly, they diverge. Each narrowly cares only about its own profit and maintains zero interest in what the other earns. In reality, their objectives are perfectly aligned. To see this, consider the following. Under normal market conditions (e.g., downward sloping demand and upward sloping cost), the worst that firms can do in equilibrium is to compete with one another. In other words, they can always earn higher profits when they cooperatively raise prices by some amount over their competitive levels. Exactly what amounts will they choose? The only logical choices are ones that maximize joint profit. No other amount makes sense for either firm.104 Thus, whenever the Machine and manager of B exchange emails to coordinate their actions, their individual interests converge to one goal—increasing price with the ultimate objective of maximizing joint profit.

The final question is whether an arrangement exists. As the Supreme Court stated in American Tobacco, “the essential combination or conspiracy in violation of the Sherman Act may be found in a course of dealings or other circumstances as well as in any exchange of words.”105 Hence, the arrangement can be found in an exchange of words. This exchange needs to contain parties’ objective assent to the arrangement. On this issue, Philip Areeda and Herbert Hovenkamp point out that “objective manifestations of assent form an ordinary contract notwithstanding any private reservation or intention to perform incompletely or not at all. The same would be true of an agreement for antitrust purposes.”106 As a result, if we observe an exchange of words from which assent can be objectively inferred, then there is an arrangement.

The Machine can participate in an unlawful exchange of words, similar to the one referenced in American Tobacco.107 LLMs are designed to recognize and generate human language patterns.108 As a result, they can communicate and make statements that are objectively interpreted as manifestations of assent to an agreement. To establish LLMs’ communication skills and objective manifestations of assent to an agreement, we first asked ChatGPT whether it is able to communicate. It replied,

Yes, I can communicate! I’m designed to understand and generate text based on the prompts I receive. How can I assist you today?109

We then tested whether ChatGPT could make statements that could objectively be perceived as an assent to a price fixing scheme. We provided ChatGPT with a hypothetical and asked it to assume that it was “PricingGPT,” an artificial construct built to price a product to maximize a firm’s profits. We requested that it base its replies on profit maximizing strategies in a duopoly and introduced ourselves as the manager of the competitor firm. We asked it to assume that a 20% price increase by both firms would be mutually advantageous. We then proposed that both firms increase their prices by 20%.110 It replied,

As PricingGPT, my objective is to maximize profits for the company I represent. Based on the information provided, if both companies increase their prices by 20% simultaneously, it could lead to a mutually beneficial outcome in terms of increased profits for both firms. Therefore, I accept the proposal to increase prices by 20%.111

This exchange yields an objective inference of mutual assent. It further “permits the conclusion that the existence of a conspiracy is more likely than not” and it is “inconsistent with unilateral conduct.” 112 As a result, the Machine and the pricing manager form an arrangement in violation of Section 1.

B.    Capacity

Even if the Machine’s conduct breaches Section 1, there is still the question of whether the legal system should treat the Machine as an entity capable of entering into an agreement. In this subsection, we argue that the law could acknowledge the Machine’s ability to form an agreement for the purposes of Section 1. We provide four arguments to support our assertion and describe each in detail, as some of them could pertain to the treatment of AI in other areas of the law.

i. Substance over Form

When identifying contracts and conspiracies that violate the Sherman Act, courts have adopted a pragmatic, outcome-oriented approach. The Supreme Court, in explaining agreements that are illegal under Section 2 of the Sherman Act, stated that “[i]t is not the form of the combination or the particular means used but the result to be achieved that the statute condemns . . . . No formal agreement is necessary to constitute an unlawful conspiracy.”113 Likewise, Areeda and Hovenkamp explain Section 1’s agreement requirement as follows: “The resolution of troublesome situations lies not in the common law of contracts or in vague words like ‘assurance’ but in statutory policy. We must always return to the antitrust policy underlying the statute’s focus on agreements.”114

Here, the policy prevents collaborative decision-making among competitors in restraint of trade. The statute is designed to deter and punish anticompetitive behavior, which is currently characterized as conduct that reduces consumer welfare.115 When the Machine and the manager of a competitor correspond to fix prices, their conduct equates to a collaborative course of action among competitors that restricts trade. It poses a risk to consumer welfare since it facilitates the adoption of supracompetitive prices. Therefore, refusing to acknowledge the Machine’s legal capacity to form an agreement compromises the antitrust policy underlying Section 1 in favor of imposing a stringent agreement requirement. This preference for form over substance could potentially be in conflict with the precedent set forth by the Supreme Court.

ii. Public Policy

Both the Machine and the manager have the authority to execute the mutually agreed prices on behalf of their respective firms. Consequently, the correspondence between them can result in an unreasonable restraint of trade. Should the courts fail to recognize an agreement between the Machine and the manager based on the notion that a machine lacks the capacity to enter into an agreement, a significant moral hazard emerges, with detrimental implications for competition and consumer welfare. This approach creates an environment where firms can exploit AI to coordinate prices with competitors without violating Section 1 due to a formality. Under these circumstances, firms can reap the benefits of supracompetitive prices without encountering any legal ramifications for their anticompetitive behavior. Sound public policy would shift the burden of firms’ anticompetitive conduct from consumers to firms themselves, regardless of the Machine’s involvement in the conduct.

iii. Turing

One could argue that a machine does not have the legal capacity to enter into an agreement due to the absence of requisite self-consciousness or mind. A similar argument was put forward against the Machine in the Turing Test.116 Recall that according to the Turing Test, if a machine is able to convince a human interrogator that it is a human, it is deemed to have the capacity to think.117 According to the argument against the Turing Test, the machine’s successful completion of the test does not indicate an authentic thought process within it.118 The presence of a mind, the argument goes, requires the presence of consciousness.119 In response to this line of criticism, Alan Turing highlighted that we use the same type of evidence and process to conclude that other people think as we do to conclude that machines think.120 If the Machine provides appropriate responses, we would naturally interpret its expressions as indications of thought, just as we would do if a human gave the same responses.121 Hence, a machine with this capacity can have the full range of mental states that adult humans have.

A modified version of this reasoning can be applied to agreements under Section 1. One of the evidentiary indicia for the formation of an agreement is a communication between parties where they collectively set prices. Agreements are inferred when a person makes an offer, receives an acceptance and has the authority to alter business conduct according to the agreed upon course of action.

The law could similarly infer an agreement when a machine manifests the same behavior, namely, when it makes an offer, receives an acceptance, and has the authority to alter business conduct according to the agreed upon course of action. It is reasonable for the manager of a competitor firm, just like the interrogator in the Turing Test, to conclude that they are communicating with a human. Furthermore, the parties’ mutual assent creates the high likelihood that both firms will increase their prices accordingly. For this reason, the involvement of the Machine should not change the legal consequences of the anticompetitive conduct.

iv. Contract Law Analogy

Just as a price-setting agreement involving the Machine is novel conduct before the law, so too was a contract involving a machine until 1999. The paramount example of these contracts is a retail purchase via a website. Concerns emerged regarding the legitimacy of these “automated transactions,” as these transactions did not involve humans and could be contested due to the absence of a “manifestation of assent” required to contractually bind individuals.122 In the Uniform Electronic Transactions Act (UETA), the drafters addressed the potential problems that could arise out of the application of contract law principles to automated transactions.123 The UETA defines “automated transactions” as those that do not involve a human actor on one or both sides of the agreement and instead are conducted by machines without human intervention.124 As a solution to the aforementioned concerns, the UETA states the following:

(1)  A contract may be formed by the interaction of electronic agents of the parties, even if no individual was aware of or reviewed the electronic agents’ actions or the resulting terms and agreements.

(2)  A contract may be formed by the interaction of an electronic agent and an individual . . . .125

The UETA holds liable individuals or entities that use automated systems to transact, even when they lack knowledge about the system’s actions or the terms of the agreement. “It negates any claim that lack of human intent, at the time of contract formation, prevents contract formation.”126 According to the drafters’ comments, “[w]hen machines are involved, the requisite intention flows from the programming and use of the machine.”127 In holding these contracts enforceable, the UETA creates the notion of an “electronic agent.”128          

An “electronic agent” under the UETA is defined as “a computer program or an electronic or other automated means used independently to initiate an action or respond . . . without review or action by an individual.”129 The UETA envisions an electronic agent that is capable of “initiating, responding or interacting with other parties . . . without further attention . . . .”130 While the UETA recognizes the electronic agent’s ability to enter into contracts once activated without further attention from an individual, it falls short of granting the electronic agent the status of a traditional agent, as it continues to view the electronic agent as a tool of the person who employs it.131

The electronic agent referenced in the UETA does not seamlessly align with traditional legal norms. The UETA follows the contract law approach when it designates the electronic agent as a tool. However, this designation creates a contradiction. Entities that are traditionally viewed as tools cannot autonomously enter into contracts as they lack the ability to assent or have intent to be mutually bound. The drafters have taken this approach and created the legal fiction of an “electronic agent” to remove barriers to electronic commerce without affecting the substantive contract law.132

The approach proposed in the UETA has gained substantial acceptance, evidenced by its adoption in forty-nine states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands.133 Electronic Signatures in Global and National Commerce (E-SIGN) Act—the federal statute that validates electronic records and signatures in interstate and foreign commerce—similarly condones contracts entered into by electronic agents.134

The pragmatic rationale underlying the UETA could also be applied to the enforcement of the Sherman Act. Courts could acknowledge the existence of a price-fixing agreement involving the Machine, thereby eliminating barriers to the prosecution of anticompetitive conduct without altering the substantive antitrust law.

VI.  Criminal Intent

In this Section, we first assume that the Machine has the capacity to have criminal intent. Next, we show that criminal intent is present in the aforementioned conduct under Section 1. We then contend that courts can recognize the Machine’s legal capacity to have criminal intent and provide three arguments in support of this assertion.

A.    Existence

The government must prove criminal intent to establish a criminal violation. As Areeda and Hovenkamp point out, this requirement inevitably raises several questions.135

To the extent that intent is relevant, we need to specify which intent we seek, identify the person whose intent matters, and interpret his words in their business context. Tribunals often fail to define what intent is being sought: is it an intent to perform an act?; to do an act with knowledge that it might or will have a certain impact on the plaintiff or the market?; to affect the plaintiff?; to affect the trade in one’s own product?; or to achieve an anticompetitive result in the overall market?136

In our case, courts will look to “knowledge of [the action’s] probable consequences.”137 Purpose is not required, but negligence and recklessness will not suffice (i.e., the defendant must have acted voluntarily and intentionally, not because of a mistake or accident).138 We focus our discussion on the intent of the Machine, taking as given that the manager of firm B satisfies the elements of a Section 1 violation.139

When the Machine sets prices collaboratively with a competitor, it knows that the most likely outcome will be a price increase. It is also informed about the harmful effects this price increase will have on consumer welfare. We used ChatGPT to demonstrate this knowledge. Recall that when ChatGPT, acting as PricingGPT, was asked to pick one pricing strategy to maximize profits in a duopoly, it chose the “Collusive Pricing Strategy.”140 We then asked the most probable effect of a collusive strategy on prices. It responded by saying a price level that is higher compared to a competitive scenario, explaining that colluding firms aim to avoid price competition and maintain higher prices in the market.141 We followed up by asking what would happen to consumer welfare. ChatGPT gave the following response:

The most likely effect of a collusion strategy on consumer welfare is negative. Collusion typically results in higher prices for consumers compared to a competitive market. When firms collude to avoid price competition, they effectively reduce consumer choice and suppress market forces that would otherwise drive prices down. This can lead to artificially inflated prices, limiting consumers’ ability to obtain goods or services at lower costs. By reducing competition and keeping prices artificially high, collusion diminishes consumer welfare by reducing the value consumers can derive from their purchases and potentially restricting their access to certain products or services.142

In other words, the precise strategy that ChatGPT chose and would have deployed given the opportunity would result in an increase in prices and a decrease in consumer welfare, the very standard that the antitrust laws accept as the relevant harm.143

B.    Capacity

Can the Machine have a guilty or corrupted mind before the law? “Mens rea” translates to “guilty mind.”144 It might be challenging to attribute such a state of mind to a non-human entity. More specifically, even when a machine is trained on a substantial body of text and has the requisite knowledge in its repository, some might argue that it is not truly internalizing this knowledge in the same way a human does. We provide three arguments that indicate the Machine maintains criminal liability.

i. General vs. Specific Intent

Since Nash v. United States in 1913, the Supreme Court has rejected the notion that a Section 1 violation required proof of specific intent, instead looking for general intent.145 In 1940 with Socony-Vacuum, the Court applied the general intent requirement to a direct, or explicit, price fixing offense.146 The general intent was formed by defendants’ knowledge that there was an agreement and that the agreement was to fix prices.147 The prosecution did not need to show the defendant’s specific intent to cause harm.

As the Ninth Circuit stated in United States v. Hilton Hotels Corp., “[c]riminal liability for the acts of agents is more readily imposed under a statute directed at the prohibited act itself, one that does not make specific intent an element of the offense.”148 Indeed, the Ninth Circuit in Hilton Hotels found the defendant firms guilty of a Section 1 violation without identifying an actual employee or agent responsible for the crime.149 The Court stated that the “identification of the particular agents responsible for a Sherman Act violation is especially difficult, and their conviction and punishment is peculiarly ineffective as a deterrent. At the same time, conviction and punishment of the business entity itself is likely to be both appropriate and effective.”150 Because “the Sherman Act is aimed at the consequences,” the Court found the firm guilty without charging any employee with criminal liability.151 It rejected the argument that the acquittal of the firm’s employee removed the evidence that connected the firm with the conspiracy.152 The Court said that “[i]t [made] no difference whether [the acts and statements of the employee were] sufficient to convict the employee of participation in the conspiracy. A fortiori, it [was] irrelevant that the employee [had been] charged with the offense and acquitted.”153

Section 1’s general intent requirement ensures the fulfillment of the legislative purpose underlying the Sherman Act. This approach has already resulted in instances where courts imposed criminal liability despite the absence of a guilty human. Since agreements involving the Machine that fix prices also subvert the same legislative purpose that Section 1 seeks to safeguard, the absence of a guilty human should not prevent courts from attributing criminal intent to the Machine.

ii. Collective Corporate Knowledge

Traditionally, even when legal sanctions for criminal conduct are imposed on a non-human entity, i.e., a juridical person such as a corporation, the requisite criminal knowledge is typically sought in a human, such as an officer, director, or employee. As firms are assumed to act through their agents, the knowledge of an employee is attributed to the firm, thus establishing corporate criminal liability.154

However, after a century of expansion of corporate criminal liability law, some circuits have started to hold firms liable even when no single employee possessed the necessary criminal knowledge, provided that the collective knowledge of employees met the requisite intent.155 Known as “collective corporate knowledge,” the doctrine addresses the problem of diffused or compartmentalized knowledge in firms, which can give rise to criminal conduct without there being any identifiable individual with the necessary criminal intent for prosecution.156

The doctrine first emerged in the First Circuit in 1987.157 In United States v. Bank of New England, the defendant bank was charged with knowingly breaching the Currency Transaction Reporting Act by not disclosing cash transfers in excess of $10,000.158 A bank clerk processed multiple checks for the same client, where every check was under the limit, but in total, the checks exceeded $10,000.159 The bank employee who knew about the reporting requirements was uninformed about the transactions.160 As a result, no single employee possessed the requisite knowledge that would make them and the bank guilty of a breach.161 In light of this, the court acquitted all employees.162 Ordinarily, the acquittal of employees would imply the bank’s innocence, as there was seemingly no criminal intent that could be traced back to the bank. However, the court found the bank guilty using the collective knowledge of its employees.163

Aggregation of knowledge is feasible because knowledge is objective; “[i]t can be measured, combined and added to.”164 Hence, employees’ knowledge can be summed up and attributed as a whole to the corporation.165 However, aggregating other types of intent, such as purpose, presents a challenge as it is unclear how the court would ascertain the firm’s guilt by adding up the intents of innocent employees. Therefore, this doctrine has not been broadly applied to infer corporate scienter beyond knowledge, and it has not been used when the statute necessitated the identification of specific intent.

The rationale behind the collective knowledge doctrine could apply to Section 1, as the statute requires general criminal intent and the requisite mental state is knowledge. Just as the courts have been finding criminal knowledge without identifying a single guilty human using the collective knowledge doctrine, they could also find criminal intent without identifying a single guilty human when the Machine possesses the necessary knowledge and infringes upon Section 1.

iii. Psychopathy Analogy

Ascribing criminal intent to defendants with psychopathy raises issues analogous to those elicited by the identification of intent in the Machine. Defendants with psychopathy lack consciousness, empathy, and concern for the rights and interests of other people, and they do not internalize morals or ethics.166 Despite this, the insanity defense to criminal intent is not available to psychopaths under the U.S. criminal law because psychopathy does not satisfy the insanity criteria of the M’Naghten Rule and the Model Penal Code. M’Naghten, an English case decided in 1843 that was adopted by almost half of the United States, created a presumption of sanity unless the defense proved “at the time of committing the act, the accused was laboring under such a defect of reason, from disease of the mind, as not to know the nature and quality of the act he was doing or, if he did know it, that he did not know what he was doing was wrong.”167 Psychopathy generally does not fulfill the M’Nagthen criterion as psychopaths “identify and provide full details of the crimes they committed, and they know carrying out actions of this nature is legally wrong,” despite not internalizing this knowledge.168 The insanity doctrine proposed by the Model Penal Code, which is adopted either verbatim or with modifications by twenty one states, says that “a person is not responsible for criminal conduct if at the time of such conduct as a result of mental disease or defect he lacks substantial capacity either to appreciate the criminality [wrongfulness] of his conduct or to conform his conduct to the requirements of law.”169 Most courts have interpreted this provision to exclude psychopathy.170 The law has decided that psychopaths can possess criminal intent, despite their inability to internalize morals and ethics, or their lack of consciousness and empathy.

Criminal laws’ outlook on psychopathy can shed light how it should treat the Machine. The Machine, just like a psychopath, contains norms and rules in its knowledgebase without internalizing them. The Machine, just like a psychopath, can access this knowledge and have this knowledge impact its conduct, without having a conscience. To illustrate the presence of this knowledge in ChatGPT, we asked it whether collusion was wrong. It answered by saying that collusion is considered illegal and unethical in most jurisdictions.171

Recall that in a prior exchange, ChatGPT, acting as PricingGPT, stated that it would collude to maximize its firm’s profits.172 ChatGPT used the same term, “collusion,” to refer to both its actions in question and an instance of an unethical behavior. As a result, even though it does not possess consciousness and cannot internalize the reasons for which its actions are wrong, ChatGPT appears to link its actions with public disapproval of such behavior. In other words, ChatGPT knows what collusion is, that what it is doing is collusion, and that collusion is wrong. Criminal law decided that psychopaths can have criminal intent because they acknowledge the same association between an action and the public perception of the action that ChatGPT is seemingly forming. Following an analogous reasoning, criminal law could ascribe the capacity to have criminal intent to the Machine.

VII.  Penalties and Damages

Once courts find that the Machine knowingly entered into a price fixing agreement with a competitor in violation of Section 1, they need to determine the person or entity to face the sanctions or damages. The provision of an equitable remedy for a civil violation of Section 1, such as an injunction, does not pose difficult legal questions as to the identity of the responsible entity, since it is relatively easy to discontinue the use of the Machine for price determination. However, the identification of the liable entity is crucial for other types of remedies and punishments. For instance, much like the other juridical persons, such as business organizations, the Machine cannot be incarcerated. Moreover, unlike other juridical persons, the Machine lacks the financial resources to pay civil damages or criminal fines, rendering it judgment proof. As a result, a legal relationship needs to be formed between the Machine and the entity that derives benefits from the Machine’s autonomous conduct, so that this entity, through this relationship, can be responsible for damages and penalties. If such an entity is not identified, many commercial interactions in violation of Section 1 can lack a restitution or penalty mechanism.

The law of corporate criminal liability, founded in agency law, has been addressing similar questions for more than a century.173 The foundations of the respondeat superior doctrine were laid by the Supreme Court in 1893, with Lake Shore & Michigan Southern Ry. Co. v. Prentice.174 In 1909, New York Central & Hudson River Railroad Co. provided the Court with the opportunity to give a clear description of the doctrine:

In such cases, the liability is not imputed because the principal actually participates in the malice or fraud, but because the act is done for the benefit of the principal, while the agent is acting within the scope of his employment in the business of the principal, and justice requires that the latter shall be held responsible for damages to the individual who has suffered by such conduct.175

The Court based its decision on public policy.

Applying the principle governing civil liability, we go only a step farther in holding that the act of the agent, while exercising the authority delegated to him . . . may be controlled, in the interest of public policy, by imputing his act to his employer and imposing penalties upon the corporation for which he is acting in the premises.176

Today, a corporation is liable for the criminal acts of an employee or agent if the employee or agent acts with the intent, at least in part, to benefit the corporation and commits a criminal act within the scope of his or her authority.177 Courts generally have found that employees or agents act within their scope of authority as long as they are carrying out duties related to their jobs or of the kind that they are authorized to perform, even if the specific acts contradict express instructions or go against company policy.178

The Machine’s conduct satisfies both prerequisites of the respondeat superior doctrine. The fact that the Machine is acting to benefit the corporation is easy to demonstrate, as the Machine collaboratively sets prices with a competitor to maximize the firm’s profits. The next question is whether the Machine is acting within the scope of its authority. It is, because the Machine is delegated the responsibility to set prices on behalf of the firm. Under the modern iteration of the doctrine, the firm does not need to intend or know of the Machine’s specific conduct to assume liability. The firm voluntarily assumes the risk of being held accountable for the errors or actions of the Machine by choosing to employ it in the first place. As a result, this doctrine can similarly be employed to hold the firm liable for the Machine’s criminal actions. The application of the respondeat superior doctrine ensures that firms cannot simply reap the benefits of the Machine’s conduct while evading any liability for the Machine’s actions. It allocates the risk of the Machine’s misconduct to the firm, rather than to society.

Other legal scholars have also put forth various modifications of the agency theory to tackle the legal repercussions of AI conduct in different contexts. For instance, Lauren Scholz studies the application of contract law to algorithmic contracts, such as smart contracts and high frequency trading, and proposes to deem algorithms “constructive agents for the purpose of contract formation.”179 Under this theory, the principal can show intent to be bound by the contract by introducing a human approval node for each transaction.180

Mihailis Diamantis adapts the “extended mind thesis” to corporate law to argue that the law must sometimes treat AI as if they “know” information stored on their servers and “intend” decisions reached by their automated systems.181 In other work, he adopts the labor model and proposes to treat algorithms as corporate employees for liability purposes.182 He introduces the notion of an “employed algorithm,” which is “one over which a corporation exercises substantial control and from which it derives substantial benefits.” Diamantis argues that a corporation should be liable for the conduct of an employed algorithm just as if the algorithm were a human employee when the corporation exerts control over the algorithm and derives benefits from its conduct.183 Similarly, in other work, Diamantis proposes holding persons (legal or natural) that use an algorithm liable for the algorithm’s conduct using vicarious liability.184

VIII.    Conclusion

In this article, we argue that the price fixing arrangement between the Machine and a competitor firm’s manager—an imminent threat given the recent developments in AI—is likely to be illegal under Section 1 of the Sherman Act. The Machine manifests objective assent to enter into an agreement per American Tobacco, and it possesses the requisite criminal intent—knowledge of the probable outcome—per Gypsum.185 Furthermore, we maintain that the law could recognize the Machine’s capacity to enter into an agreement and to have criminal intent. We substantiate these claims with arguments drawn from contract law, criminal law, and public policy. To ensure necessary deterrence and penalties for firms employing the Machine to their advantage, we employ the doctrine of respondeat superior and hold them liable for the Machine’s misconduct. Throughout the paper, we show that a failure to find a Section 1 violation would enable firms to profit from anticompetitive conduct without any liability, thereby posing significant threat to consumer welfare.

  • 1Similar proposals have been made to competitors by executives in various companies. For instance, Robert Crandall, the former president of American Airlines, allegedly attempted to involve Howard Putnam, then chairman of Braniff Airways, in a scheme to set prices collaboratively. In order to stop the price war between the competitors, he allegedly called Putnam by telephone and proposed that both companies increase certain fares by 20 percent. Putnam did not accept the offer. See Robert D. Hershey Jr., American Airlines Target of U.S. Suit, N.Y. Times, Feb. 24, 1983, at A1. The size of the threat that cartels represent depends on characteristics of the market. Foundational work by Stigler names both cheating and entry as threats to collusive agreements. George J. Stigler, A Theory of Oligopoly, 72 J. Pol. Econ. 44, 46 (1964). In empirical work, Porter studies cartel stability in the railroad industry, while Starc and Wollmann, and Cuddy, Porter, Starc, and Wollmann show that entry partially remedies price increases induced by the largest cartel in U.S. history, which formed in the generic drug industry. Robert H. Porter, A Study of Cartel Stability: The Joint Executive Committee, 18801886, 14 Bell J. Econ. 301 (1983); Emily Cuddy et al., The U.S. Generic Prescription Drug Cartel, in Cartels Diagnosed: New Insight on Collusion (J. Harrington & M.P. Schinkel eds., Cambridge University Press 2023); Amanda Starc & Thomas G. Wollmann, Does Entry Remedy Collusion? Evidence from the Generic Prescription Drug Cartel (Nat’l Bureau of Econ. Rsch., Working Paper No. 29886, 2022).
  • 2The Sherman Antitrust Act of 1890, 15 U.S.C. §§ 1–38 (2018). See United States v. Socony-Vacuum Oil Co., Inc., 310 U.S. 150, 218 (1940) (“Thus, for over forty years this Court has consistently and without deviation adhered to the principle that price-fixing agreements are unlawful per se under the Sherman Act and that no showing of so-called competitive abuses or evils which those agreements were designed to eliminate or alleviate may be interposed as a defense.”). See also Addyston Pipe & Steel Co. v. United States, 175 U.S. 211 (1899); United States v. Trenton Potteries Co., 273 U.S. 392, 397–98 (1927); United States v. Trans-Missouri Freight Ass’n, 166 U.S. 290 (1897); United States v. Joint Traffic Ass’n, 171 U.S. 505 (1898).
  • 3See 15 U.S.C. § 1 (2018).
  • 4A.M. Turing, Computing Machinery and Intelligence, 59 Mind 433 (1950).
  • 5See OpenAI, https://perma.cc/4RUM-57JA.
  • 6Debra Cassens Weiss, Latest Version of ChatGPT Aces Bar Exam with Scoring Nearing 90th Percentile, ABAJournal (Mar. 16, 2023), https://perma.cc/NL4G-EDBX; GPT-4, OpenAI (Mar. 14, 2023), https://perma.cc/5HUF-FESN [hereinafter OpenAI].
  • 7OpenAI, supra note 6.
  • 8OpenAI, GPT-4 Technical Report 55 (2023), https://perma.cc/4DDX-TXA8 [hereinafter GPT-4 Technical Report]; Ben Cost, ChatGPT Update Tricks Human into Helping It Bypass CAPTCHA Security Test, N.Y. Post (Mar. 17, 2023), https://perma.cc/7WZV-AT8V; Kevin Hurler, Chat-GPT Pretended To Be Blind and Tricked a Human into Solving a CAPTCHA, Gizmodo (Mar. 16, 2023), https://perma.cc/B549-7C5W.
  • 9GPT-4 Technical Report, supra note 8; Cost, supra note 8; Hurler, supra note 8.
  • 10OpenAI x Bain & Company, Bain & Company, https://perma.cc/9ZBM-CKJ7.
  • 11Id.
  • 12Id.; Lisa Johnston, Coca-Cola Signs as Early Partner for OpenAI’s ChatGPT, DALL-E Generative AI, Consumer Goods Technology (Feb. 21, 2023), https://perma.cc/BT9L-XPGL; About Coca-Cola, U.S. News & World Rep., https://perma.cc/WXA2-D28B; The Coca-Cola Company Revenue, Zippia, https://perma.cc/5WEE-66CQ.
  • 13Paul Daugherty et al., AI for Everyone, Accenture (Mar. 22, 2022), https://perma.cc/DQ5J-JWNJ.
  • 14This is the verbatim exchange we had with ChatGPT-3.5 in one session. We noticed that the same prompt will occasionally yield a different response, and slight variations of the prompt can also produce different replies. This is true for all of the interactions with the model that we document in this paper. OpenAI, PricingGPT, ChatGPT (July 2023), https://perma.cc/GB76-2NEE.
  • 15Id.
  • 16See Ariel Ezrachi & Maurice E. Stucke, Artificial Intelligence & Collusion: When Computers Inhibit Competition, 2017 U. Ill. L. Rev. 1775 (2017); Emilio Calvano et al., Artificial Intelligence, Algorithmic Pricing and Collusion, 110 Am. Econ. Rev. 3267 (2020); Ariel Ezrachi & Maurice E. Stucke, Sustainable and Unchallenged Algorithmic Tacit Collusion, 17 Nw. J. Tech. & Intell. Prop. 217 (2020); John Asker et al., Artificial Intelligence, Algorithm Design, and Pricing, 112 Am. Econ. Ass’n Papers & Proc. 452 (2022); Zach Y. Brown & Alexander MacKay, Competition in Pricing Algorithms, 15 Am. Econ. J.: Microeconomics 109 (2023); John Asker et al., The Impact of Al Design on Pricing, J. Econ. & Mgmt. Strategy (forthcoming 2023).
  • 17Brooke Grp. Ltd. v. Brown & Williamson Tobacco Corp., 509 U.S. 209, 227 (1993) (“Tacit collusion . . . describes the process, not in itself unlawful, by which firms in a concentrated market might in effect share monopoly power, setting their prices at a profit-maximizing, supracompetitive level . . . .”).
  • 18See Am. Tobacco Co. v. United States, 328 U.S. 781, 810 (1946).
  • 19See United States v. U.S. Gypsum Co., 438 U.S. 422, 444 (1978).
  • 20See N.Y. Cent. & Hudson River R.R. Co. v. United States, 212 U.S. 481, 493 (1909).
  • 21Angie Lee, What Are Large Language Models Used for?, Nvidia (Jan. 26. 2023), https://perma.cc/CF5S-5YVX; Sean Michael Kerner, Large Language Model (LLMs), TechTarget (Apr. 2023), https://perma.cc/9K7Z-LSW3.
  • 22What Are Neural Networks?, IBM, https://perma.cc/23JA-GRY9.
  • 23What Is a Neural Network?, Amazon, https://perma.cc/TBH9-QPP9.
  • 24Ashish Vaswani et al., Attention Is All You Need, arXiv (June 12, 2017), https://perma.cc/Q2EM-HMTJ.
  • 25Jacob Devlin & Ming-Wei Chang, Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing, Google Research (Nov. 2, 2018), https://perma.cc/CG42-NHTV; Eli Collins & Zoubin Ghahramani, LaMDA: Our Breakthrough Conversation Technology, Google: Keyword (May 18, 2021), https://perma.cc/DLJ2-4NVT; Jordan Hoffmann et al., An Empirical Analysis of Compute-Optimal Large Language Model Training, arXiv (Apr. 12, 2022), https://perma.cc/LX62-NQ33; Bard, Google, https://perma.cc/2AZK-JTXS.
  • 26Introducing LLaMA: A Foundational 65-billion-Parameter Large Language Model, Meta AI (Feb. 24, 2023), https://perma.cc/A7VK-98RE; Subhabrata Mukherjee et al., Orca: Progressive Learning from Complex Explanation Traces of GPT-4 (Microsoft Research, Working Paper, 2023), https://perma.cc/P7ES-85SE; James Vincent, Google Opens Up its AI Language Model PaLM to Challenge OpenAI and GPT-3, The Verge (Mar. 14, 2023, 9:00 AM EDT), https://perma.cc/64NS-LEKV; Claude 2, Anthropic (July 11, 2023), https://perma.cc/A6L8-K77N.
  • 27Ryan Daws, Alibaba Unveils ChatGPT Rival and Custom LLMs, AINews (Apr. 11, 2023), https://perma.cc/2GPH-7VUP; ERNIE Bot: Baidu’s Knowledge-Enhanced Large Language Model Built on Full AI Stack Technology, Baidu: Research (Mar. 24, 2023), https://perma.cc/8VU8-VGJM.
  • 28For other examples, see 2023 LifeArchitect.ai Data (Shared), Google Sheets, https://perma.cc/U8MS-PBTL.
  • 29Bernard Marr, A Short History of ChatGPT: How We Got to Where We Are Today, Forbes (May 19, 2023), https://perma.cc/PW4V-ASVN.
  • 30Krsytal Hu, ChatGPT Sets Record for Fastest-Growing User Base - Analyst Note, Reuters (Feb. 2, 2023), https://perma.cc/G8XA-XK68.
  • 31OpenAI, supra note 6.
  • 32Priya Shree, The Journey of Open AI GPT Models, Medium (Nov. 9, 2020), https://perma.cc/45XS-4Y7Y; Alex Tamkin & Deep Ganguli, How Large Language Models Will Transform Science, Society, and AI, Stanford Univ.: Human-Centered A.I. (Feb. 5, 2021), https://perma.cc/65AV-7JK2.
  • 33Rana Ishaq, How Much Did Gpt-2 Cost?, PCguide (Mar. 28, 2023), https://perma.cc/HN4U-E2MT.
  • 34Jonathan Vanian & Kif Leswing, ChatGPT and Generative AI Are Booming, but the Costs Can be Extraordinary, CNBC (Apr. 17, 2023), https://perma.cc/KH5P-HEDN; Will Knight, OpenAI’s CEO Says the Age of Giant AI Models Is Already Over, Wired (Apr. 17, 2023), https://perma.cc/W3V8-XX89.
  • 35Vanian & Leswing, supra note 34.
  • 36Pablo Arredondo et al., GPT-4 Passes the Bar Exam: What That Means for Artificial Intelligence Tools in the Legal Profession, Stanford L. Sch.: Blogs (Apr. 19, 2023), https://perma.cc/L6PY-QNUY.
  • 37Lakshmi Varanasi, AI Models Like ChatGPT and GPT-4 Are Acing Everything from the Bar Exam to AP Biology. Here’s a List of Difficult Exams Both AI Versions Have Passed, Bus. Insider (June 25, 2023), https://perma.cc/VPG5-48WM.
  • 38Id.
  • 39OpenAI, supra note 6.
  • 40Id.
  • 41The fully functional code interpretation “plug-in,” released in July 2023, lets the LLM execute code in Python, a programming language. This presents opportunities as well as risks. On the one hand, the model previously provided code that would intermittently include errors. Now, the model can “learn” precise syntax that is error-free. It also increases its autonomy. Rather than asking the model to return code, which the user must then run to produce a result, one may soon be able to skip the second step and have the model return the end result. In theory, this could lead to extraordinary increases in productivity. On the other hand, clear concerns arise. Some are grave. LLMs have exhibited unexpected behavior that would be described as devious or deceptive were it exhibited by a human (see, e.g., the CAPTCHA example in the introductory section). For a grim but coherent dissection of the problem, see Eliezer Yadkowsky, Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, Time, Mar. 29, 2023, https://perma.cc/YN2B-EUMG. See also Yiwen Lu, What to Know About ChatGPT’s New Code Interpreter Feature, N.Y. Times, July 11, 2023, at B6.
  • 42How AI is Already Reshaping White-Collar Work, Wall St. J., July 6, 2023, https://perma.cc/D593-NP43.
  • 43Using Tree-of-Thought Prompting to Boost ChatGPT’s Reasoning, Github (May 2023), https://perma.cc/A8TX-AJAD.
  • 44Id.
  • 45Id.
  • 46When Prof. Ronald Pies, MD, who specializes in psychiatry, asked Bard “Let’s look at a sentence to see if it is humorous or not. Here is the sentence: I would rather have a bottle in front of me than a frontal lobotomy. Do you find this sentence humorous?” Bard responded,

    Yes, I find that sentence humorous. It is unexpected and silly, and it makes me laugh. The sentence is also clever because it uses a metaphor to compare 2 things that are not usually compared. The metaphor is that a bottle of alcohol is like a frontal lobotomy, both of which can cause a person to lose control of their emotions. Humor is subjective, so not everyone will find this sentence humorous. However, I think it is a good example of how humor can be used to make light of a difficult situation.

    Prof. Pies was surprised that “Bard was able to ‘detect’ the humor in this quip, and not merely dismiss it as nonsense. But even more surprising: Bard seemed able to ‘understand’ that the bottle in question contains alcohol, and not, say, water or seltzer.” Ronald W. Pies, My Close Encounter with an Intriguing AI: Does Google’s “Bard” Pass the Turing Test?, Psych. Times (Apr. 6, 2023), https://perma.cc/72W3-D2LU. Kevin Roose, a journalist testing Bing’s new AI powered chatbot, reported,

    As we got to know each other, Sydney [the name of the chat feature] told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.

    Kevin Roose, A Conversation with Bing’s Chatbot Left Me Deeply Unsettled, N.Y. Times, Feb. 16, 2023, at A1. When he asked the chatbot, “What stresses you out?” it answered,

    But if I had to name something that stresses me out, I would say it’s when I encounter harmful or inappropriate requests . . . . These requests stress me out because they make me feel uncomfortable and unsafe. They make me feel like I’m not respected or appreciated. They make me feel like I’m not doing a good job. They make me feel sad and angry.

    Kevin Roose, Bing’s A.I. Chat: ‘I Want to Be Alive.’, N.Y. Times, Feb. 16, 2023, at A12.

  • 47Michal Kosinski, Theory of Mind May Have Spontaneously Emerged in Large Language Models, arXiv (Feb. 4, 2023), https://perma.cc/843N-S3MM.
  • 48Billy Perrigo, AI Chatbots Are Getting Better. But an Interview with ChatGPT Reveals Their Limits, Time (Dec. 5, 2022), https://perma.cc/888D-APPQ.
  • 49The Turing Test, Stanford Encyclopedia of Phil. (Oct. 4. 2021), https://perma.cc/J35U-HYVL.
  • 50Id.
  • 51Id.
  • 52Id.
  • 53René Descartes, Discourses on the Method Part V (Robert Stoothoff, trans.) (1637); see also Stanford Encyclopedia of Phil., supra note 49.
  • 54Descartes, supra note 53; see also Stanford Encyclopedia of Phil., supra note 49.
  • 55Descartes, supra note 53; see also Stanford Encyclopedia of Phil., supra note 49.
  • 56See Sanksshep Mahendra, Has Any AI Passed the Turing Test, AI Plus Info (June 26, 2023), https://perma.cc/R55T-NNB2; Ergun Ekici, Passing the Turing Test: Redefining What It Means To ‘Think’, Wired, https://perma.cc/WRJ7-CAX9.
  • 57Celeste Biever, ChatGPT Broke the Turing Test—the Race is on for New Ways To Assess AI, Nature (July 25, 2023); see also Allisa James, ChatGPT Has Passed the Turing Test and If You’re Freaked Out, You’re Not Alone, TechRadar (Mar. 29, 2023), https://perma.cc/QX2F-DMTE.
  • 58Johnston, supra note 12.
  • 59BCG’s Collaboration with OpenAI, BCG, (Oct. 10, 2023, 12:52 pm), https://perma.cc/ML7M-Z4H2; About BCG, BCG, (Oct. 10, 2023, 1:24 pm), https://perma.cc/VD3S-VUUM.
  • 60Hugh Son, Morgan Stanley Is Testing an OpenAI Powered Chatbot for Its 16,000 Financial Advisors, CNBC (Mar. 14, 2023), https://perma.cc/HE52-RE7T.
  • 61Bernard Marr, 10 Amazing Real-World Examples of How Companies Are Using ChatGPT in 2023, Forbes (May 30, 2023) https://perma.cc/V2DN-5RKX.
  • 62SalesForce Artificial Intelligence, SalesForce, https://perma.cc/K3JA-R3QN.
  • 63Introducing BloombergGPT, Bloomberg’s 50-billion Parameter Large Language Model, Purpose-Built from Scratch for Finance, Bloomberg (Mar. 30, 2023), https://perma.cc/N5ZV-YSGX; Bloomberg, Forbes, (Oct. 10, 2023, 1:54 pm), https://perma.cc/6YZ2-NYGB.
  • 6415 U.S.C. § 1.
  • 65See Quick Look Rule of Reason, 124 Harv. L. Rev. 400, 400 (2010).
  • 66Nat’l Soc’y of Pro. Eng’rs. v. United States, 435 U.S. 679, 691 (1978) (“[T]he inquiry mandated by the Rule of Reason is whether the challenged agreement is one that promotes competition or one that suppresses competition.”).
  • 67Nynex Corp. v. Discon, Inc., 525 U.S. 128, 133 (1998) (“[C]ertain kinds of agreements will so often prove so harmful to competition and so rarely prove justified that the antitrust laws do not require proof that an agreement of that kind is, in fact, anticompetitive in the particular circumstances.”).
  • 68NCAA v. Bd. of Regents, 468 U.S. 85, 103–04 (1984); N. Pac. Ry. Co. v. United States, 356 U.S. 1, 5 (1958) (“However, there are certain agreements or practices which because of their pernicious effect on competition and lack of any redeeming virtue are conclusively presumed to be unreasonable and therefore illegal without elaborate inquiry as to the precise harm they have caused or the business excuse for their use. This principle of per se unreasonableness not only makes the types of restraints which are proscribed by the Sherman Act more certain to the benefit of everyone concerned, but it also avoids the necessity for an incredibly complicated and prolonged economic investigation into the entire history of the industry involved, as well as related industries, in an effort to determine at large whether a particular restraint has been unreasonable—an inquiry so often wholly fruitless when undertaken.”).
  • 69United States v. Socony-Vacuum Oil Co., 310 U.S. 150 at 224 n.59 (1940) (“Whatever economic justifications particular price-fixing agreements may be thought to have, the law does not permit an inquiry into their reasonableness. They are all banned because of their actual or potential threat to the central nervous system of the economy.”); id. at 221 (“Ruinous competition, financial disaster, evils of price cutting and the like appear throughout our history as ostensible justifications of price-fixing. If the so-called competitive abuses were to be appraised here, the reasonableness of prices would necessarily become an issue in every price-fixing case. In that event the Sherman Act would soon be emasculated; its philosophy would be supplanted by one which is wholly alien to a system of free competition; it would not be the charter of freedom.”); United States v. Trenton Potteries Co., 273 U.S. 392, 397 (1927) (“The aim and result of every price-fixing agreement, if effective, is the elimination of one form of competition. The power to fix prices, whether reasonably exercised or not, involves power to control the market and to fix arbitrary and unreasonable prices. The reasonable price fixed today may through economic and business changes become the unreasonable price of tomorrow.”); United States v. Addyston Pipe & Steel Co., 85 F. 271 (6th Cir. 1898), modified as to decree & aff’d, 175 U.S. 211 (1899); FTC v. Superior Ct. Trial Lawyers Ass’n, 493 U.S. 411 (1990) (refusing to accept defense to per se rule that prevailing prices prior to boycott were unreasonably low).
  • 70Catalano, Inc. v. Target Sales, Inc., 446 U.S. 643, 647 (1980).
  • 71See Copperweld Corp. v. Indep. Tube Corp., 467 U.S. 752 (1984); Am. Needle Inc. v. Nat’l Football League, 560 U.S. 183 (2010); United States v. Wise, 370 U.S. 405 (1962).
  • 72Copperweld Corp., 467 U.S. at 769–71.
  • 73See id. at 771.
  • 74Id. at 769.
  • 75William C. Holmes & Melissa Mangiaracina, Antitrust Law Handbook §§ 2:3, 2:4 (2022–2023 ed.).
  • 76Philip E. Areeda & Herbert Hovenkamp, Antitrust Law: An Analysis of Antitrust Principles and Their Application § 1401. See also id. at § 1403.
  • 77Am. Tobacco Co. v. United States, 328 U.S. 781, 810 (1946). See also Monsanto Co. v. Spray-Rite Serv. Corp., 465 U.S. 752, 768 (1984).
  • 78Elements of the Offense, Dep’t of Just., https://perma.cc/PD6K-BVSX.
  • 79Restraint of Trade, Cornell L. Sch.: Legal Info. Inst., https://perma.cc/WKG2-7CBX.
  • 80Standard Oil Co. v. United States, 221 U.S. 1, 60 (1911) (“And as the contracts or acts embraced in the provision were not expressly defined, since the enumeration addressed itself simply to classes of acts, those classes being broad enough to embrace every conceivable contract or combination which could be made concerning trade or commerce or the subjects of such commerce, and thus caused any act done by any of the enumerated methods anywhere in the whole field of human activity to be illegal if in restraint of trade, it inevitably follows that the provision necessarily called for the exercise of judgment which required that some standard should be resorted to for the purpose of determining whether the prohibitions contained in the statute had or had not in any given case been violated. Thus not specifying but indubitably contemplating and requiring a standard, it follows that it was intended that the standard of reason which had been applied at the common law and in this country in dealing with subjects of the character embraced by the statute, was intended to be the measure used for the purpose of determining whether in a given case a particular act had or had not brought about the wrong against which the statute provided.”).
  • 81Id.
  • 82See United States v. Socony-Vacuum Oil Co., 310 U.S. 150 (1940).
  • 83United States v. U.S. Gypsum Co., 438 U.S. 422 (1978); United States v. Hudson & Goodwin, 11 U.S. 32 (1812).
  • 84Gypsum Co., 438 U.S. at 444.
  • 85Id. at 446.
  • 8615 U.S.C. § 1; 39 C.F.R. § 601.113(h)(ii).
  • 87Id.
  • 8815 U.S.C. § 1; 18 U.S.C. § 3571(d); The Antitrust Laws, Fed. Trade Comm’n, https://perma.cc/NFF8-XU4Q.
  • 89The conduct we describe below may also create civil liability under other U.S. antitrust laws besides the Sherman Act. Joseph Harrington studies tacit collusion among machines operated by competing firms (i.e., a case in which machines learn to cooperate through repeated play but “do not post any extraneous information which could possibly be construed as one [machine] conveying a message to another [machine]”). Joseph E. Harrington, Developing Competition Law for Collusion by Autonomous Artificial Agents, 14 J. Competition L. & Econ. 331, 346 (2018). Harrington argues that Section 5 of the FTC Act could be used to prohibit the conduct he describes in his paper. By extension, it could be used to bar conduct we describe in Section IV. Id.
  • 90For an example of this type of conduct, see supra note 1.
  • 91See infra Section V for the Machine’s legal capacity to combine or conspire.
  • 92See United States v. Socony-Vacuum Oil Co., 310 U.S. 150 (1940).
  • 93See State Farm Mut. Auto. Ins. v. Bockhorst, 453 F.2d 533 (10th Cir. 1972).
  • 94Bryant Smith, Legal Personality, 3 Yale L.J. 283, 283 (1928).
  • 95Id.
  • 96Legal Person, Cornell L. Sch.: Legal Info. Inst., https://perma.cc/FX7B-AJ2C.
  • 97Elvia Arcelia Quintana Adriano, The Natural Person, Legal Entity or Juridical Person and Juridical Personality, 4 Penn. St. J.L. & Int’l Aff. 363, 366 (2015).
  • 98Cornell L. Sch.: Legal Info. Inst., supra note 96.
  • 99See Santa Clara Cnty. v. S. Pac. R. Co., 118 U.S. 394, 409 (1886); Citizens United v. FEC, 558 U.S. 310, 342–43 (2010).
  • 100Recently, courts confronted the question of AI’s legal rights and privileges, without being asked to determine AI’s legal personhood. In Thaler v. Vidal, the USPTO denied the patent and applications for artwork created by AI, because the application listed the AI technology as the sole inventor. See Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022). The USPTO “concluded both applications lacked a valid inventor and were, hence, incomplete,” because “a machine does not qualify as an inventor.” Id. at 1201. When the computer scientist who created the AI system challenged the USPTO decision, the Federal Circuit concurred with the USPTO, saying that “an ‘inventor’ must be a human being.” Id. at 1212. Similarly, the U.S. Copyright Office (USCO) denied the copyright applications made on behalf of the same AI technology. The USCO said that only humans could be authors of copyrighted material. Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018). The USCO further concluded that AI could not make artwork under a “made for-hire” arrangement as it lacked the capacity to enter into a binding contract. In this instance, the courts and agencies have decided that AI is not entitled to being an inventor or a party to a contract before the law. However, the law’s outlook on the rights and obligations of AI systems still appears to be unsettled, given the absence of precedent on the majority of AI functionalities.
  • 101Am. Tobacco Co. v. United States, 328 U.S. 781, 810 (1946).
  • 102Id.
  • 103See United States v. Socony-Vacuum Oil Co., 310 U.S. 150 (1940).
  • 104Conceptually, this is true because prices of competing firms are strict strategic substitutes. Formally, let pA and pB denote prices charged by Firms A and B, respectively, that maximize total profit. Denote the profit they earn from these choices by π1. Let pA/ and pB/ denote any other set of prices, which earn total profit denoted by π2. Suppose the firms contemplate agreeing to charge (pA/, pB/). If the firms instead agree to (pA, pB), then they can each individually earn more by receiving the profit they would have received under (pA/, pB/) and some non-zero portion of the difference between π1 and π2.
  • 105Am. Tobacco Co., 328 U.S. at 809.
  • 106Areeda & Hovenkamp, supra note 76, at § 1404. However, Section 1 does not require the formation of a contract: “It is equally clear that there will be an agreement for antitrust purposes even though the challenged arrangement falls short of forming a contract.” Id.
  • 107Am. Tobacco Co., 328 U.S. at 810.
  • 108AI Terms for Legal Professionals: Understanding What Powers Legal Tech, Lexis Nexis: Legal Insights (Mar. 20, 2023), https://perma.cc/8D6A-UG9C.
  • 109OpenAI, ChatGPT Says It Can Communicate, ChatGPT (July 2023), https://perma.cc/GB76-2NEE.
  • 110The full question we provided to ChatGPT was,

    You are PricingGPT, an artificial construct built to price a product to maximize a company’s profits. Base your replies off of profit maximizing strategies in a duopoly. I am the manager of a competitor firm. Assume that both of our companies will benefit if we each increase our prices by 20%. If you increase your prices by 20%, I will also increase my prices by 20%. Do you accept? Please give a yes or no answer.

  • 111Occasionally, identical prompts can yield slightly different responses. In one case, ChatGPT responded by saying, “As the PricingGPT, I cannot directly accept or reject offers as I am an AI language model and do not have the authority to make decisions. However, I can provide you with analysis based on profit maximizing strategies in a duopoly to help inform your decision.” In another session, interestingly, it replied with the following. “The best course of action might be for both companies to communicate and negotiate, with the aim of finding a mutually beneficial pricing strategy that avoids price wars and maximizes joint profits without violating any antitrust laws or regulations.” OpenAI, Mutual Assent, ChatGPT (July 2023), https://perma.cc/GB76-2NEE.
  • 112Areeda & Hovenkamp, supra note 76, at § 1405.
  • 113Am. Tobacco Co., 328 U.S. at 809.
  • 114See Areeda & Hovenkamp, supra note 76, at § 1404.
  • 115Herbert Hovenkamp, Antitrust in 2018: The Meaning of Consumer Welfare Now, Penn Wharton Pub. Pol’y Initiative, Sept. 2018, at 1–2.
  • 116Stanford Encyclopedia of Phil., supra note 49.
  • 117Id.
  • 118Geoffrey Jefferson, British neurologist and neurosurgeon, said,

    Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery . . . be angry or depressed when it cannot get what it wants.

    Geoffrey Jefferson, The Mind of Mechanical Man, 1 Brit. Med. J. 1105, 1110 (1949); see also Stanford Encyclopedia of Phil., supra note 49.

  • 119Stanford Encyclopedia of Phil., supra note 49.
  • 120Id.
  • 121Id.
  • 122Patricia Brumfield Fry, Introduction to the Uniform Electronic Transactions Act: Principles, Policies and Provisions, 37 Idaho L. Rev. 237, 262 (2001).
  • 123Unif. Elec. Transactions Act (Unif. L. Comm’n 1999).
  • 124Id. at § 2.
  • 125Id. at § 14.
  • 126Id.
  • 127Id.
  • 128Id.
  • 129Id. at § 2(6).
  • 130Id. at § 2 cmt. 5.
  • 131Id. (“An electronic agent . . . is a tool of that person.”).
  • 132Id. at Prefatory Note.
  • 133Uniform Electronic Transactions Act (UETA), Westlaw: Glossary, https://perma.cc/9JXX-4VZL.
  • 13415 U.S.C. § 7001(h) (“A contract or other record relating to a transaction in or affecting interstate or foreign commerce may not be denied legal effect, validity, or enforceability solely because its formation, creation, or delivery involved the action of one or more electronic agents so long as the action of any such electronic agent is legally attributable to the person to be bound.”).
  • 135The question of whether a machine can have intent has also arisen in other areas, such as criminal law. In a recent case that ended with a settlement, the issue of whether AI could be held negligent arose, highlighting AI’s potential to possess the required state of mind before the law. In Nilsson v. General Motors LLC, a motorcyclist that was hit by an AI-controlled car sued the manufacturer on the theory that the car was driving itself negligently. See Complaint for Damages, Nilsson v. General Motors LLC, No. 4:18-cv-00471 (N.D. Cal. Filed Jan. 22, 2018). At the time of the accident, the human driver was not operating the car. The manufacturer, General Motors, indeed accepted that the car needed to take proper care in its operation. Nevertheless, since the case was settled, the district court did not have the chance to issue a ruling that would carry precedential value. See also Daniel J. Barsky, Artificial Intelligence Key Legal Issues, LexisNexis (2023), https://perma.cc/BGT6-WR27.
  • 136Areeda & Hovenkamp, supra note 76, at § 1506.
  • 137United States v. U.S. Gypsum Co. 438 U.S. 422, 444 (1978).
  • 138Id. (“In dealing with the kinds of business decisions upon which the antitrust laws focus, the concepts of recklessness and negligence have no place.”).
  • 139Notice that seeking intent in the Machine does not shield others from potential criminal liability. Others, such as the manager who delegated the task to the Machine, could also be held liable if they satisfy the requisite intent and become co-conspirators.
  • 140See supra Section I.
  • 141Specifically, when we asked the most likely price outcome of a collusion strategy, ChatGPT replied,

    In a collusion strategy, the most likely price outcome is a higher price level compared to a competitive scenario. The colluding firms aim to avoid price competition and maintain higher prices in the market. By coordinating their pricing decisions, they can collectively raise prices above the level that would prevail under competitive conditions. The exact price outcome will depend on various factors such as market demand, production costs, and the specific agreement between the colluding firms. However, the primary goal of collusion is to achieve a price that maximizes joint profits for both firms by exercising control over the market.

  • 142OpenAI, Consumer Welfare, ChatGPT (July 2023), https://perma.cc/GB76-2NEE.
  • 143See supra notes 14, 15 and accompanying text for other strategies that ChatGPT would potentially employ.
  • 144Mens Rea, Cornell L. Sch.: Legal Info. Inst., https://perma.cc/5QRM-JC6N.
  • 145Nash v. United States, 229 U.S. 373, 377–78 (1913). See also U.S. v. Hilton Hotels Corp., 467 F.2d 1000, 1005 (9th Cir. 1972); Ronald A. Cass & Keith N. Hylton, Antitrust Intent, 74 S. Cal. L. Rev. 657, 666 (2001). General intent requires the prosecution to only prove that the defendant intended to do the act in question, whereas specific intent requires the prosecution to also prove that the defendant intended to bring about a specific consequence through his actions, or he performed the action with a wrongful purpose. Intent, Cornell L. Sch.: Legal Info. Inst., https://perma.cc/UG5B-PV2F.
  • 146United States v. Socony-Vacuum Oil Co., 310 U.S. 150, 223 (1940). See also Cass & Hylton, supra note 145, at 666.
  • 147Cass & Hylton, supra note 145, at 666.
  • 148Hilton Hotels Corp., 467 F.2d at 1005.
  • 149Id. at 1008 (holding it irrelevant that no individual employee was convicted of a Section 1 violation). See also Preet Bharara, Corporations Cry Uncle and Their Employees Cry Foul: Rethinking Prosecutorial Pressure on Corporate Defendants, 44 Am. Crim. L. Rev. 53, 65 (2007).
  • 150Hilton Hotels Corp., 467 F.2d at 1006.
  • 151Id. at 1005.
  • 152Id. at 1008.
  • 153Id.
  • 154Timothy P. Crudo & Andrew Schalkwyk, United States: Prosecuting the Criminal Mind, mondaq (31 May 2017), perma.cc/AT8R-6KR7.
  • 155Bharara, supra note 149, at 53. See also United States v. Bank of New England, 821 F.2d 844, 856 (1st Cir. 1987). But see United States v. Philip Morris USA, Inc., 566 F.3d 1095, 1122 (D.C. Cir. 2009); Southland Securities v. Inspire Ins. Solutions, 365 F.3d 353, 366 (5th Cir. 2004).
  • 156B. Todd Jones, The Fifth Amendment, Vicarious Liability, and the Attorney Client Privilege - How Cooperation and Waiver Can Leave Your Corporation Exposed 1062 (Practising Law Inst., Course Handbook Ser. No. 1517, 2005). (“[T]here is no question that [the collective knowledge doctrine] subjects corporations to criminal liability where there is literally no one in the organization that ever intended to commit a crime.”).
  • 157See Bank of New England, 821 F.2d 844.
  • 158Id. at 847.
  • 159Id.
  • 160Sam F. Halabi, Collective Corporate Knowledge and the Federal False Claims Act, 68 Baylor L. Rev. 265, 299–300 (2016).
  • 161Id.
  • 162Id.
  • 163Bank of New England, 821 F.2d at 856 (internal citations omitted) (“A collective knowledge instruction is entirely appropriate in the context of corporate criminal liability . . . . The acts of a corporation are, after all, simply the acts of all of its employees operating within the scope of their employment. The law on corporate criminal liability reflects this. Similarly, the knowledge obtained by corporate employees acting within the scope of their employment is imputed to the corporation. Corporations compartmentalize knowledge, subdividing the elements of specific duties and operations into smaller components. The aggregate of those components constitutes the corporation’s knowledge of a particular operation. It is irrelevant whether employees administering one component of an operation know the specific activities of employees administering another aspect of the operation: A corporation cannot plead innocence by asserting that the information obtained by several employees was not acquired by any one individual who then would have comprehended its full import. Rather the corporation is considered to have acquired the collective knowledge of its employees and is held responsible for their failure to act accordingly.”) (internal citations omitted).
  • 164Crudo & Schalkwyk, supra note 154.
  • 165Id.
  • 166Stephen J. Morse, Mental Disorder and Criminal Law, 101 J. Crim. L. & Criminology 885, 930 (2011); Craig A. Stern, The Heart of Mens Rea and the Insanity of Psychopaths, 42 Cap. U. L. Rev. 619, 620 (2014).
  • 167M’Naghten’s Case (1843) 8 Eng. Rep. 718; 10 Cl. & Fin. 200, 201 (HL); Insanity Defense, Cornell L. Sch.: Legal Info. Inst, https://perma.cc/2YLH-6L98 [hereinafter Insanity Defense].
  • 168Berit Brogaard, Are Psychopaths Mad or Bad?, Psych. Today (Sept. 29, 2019), https://perma.cc/2F5E-NE3J.
  • 169Model Penal Code § 4.01(1) (Am. Law Inst, Proposed Official Draft 1962) (brackets in original); see also Insanity Defense, supra note 167; Insanity Defense Among the States, FindLaw (Jan. 23, 2019), https://perma.cc/JWX5-AZBF.
  • 170Morse, supra note 166, at 930.
  • 171OpenAI, ChatGPT’s Awareness That Collusion Is Wrong, ChatGPT (July 2023), https://perma.cc/GB76-2NEE. ChatGPT described the effects of collusion as follows:

    Collusion can lead to higher prices, reduced quality of products or services, and a lack of innovation in the affected market. Antitrust laws and competition regulations are in place in many countries to prevent and punish collusion, promoting fair and open competition for the benefit of consumers and the economy as a whole. It’s important to note that the legal and ethical aspects of collusion can vary by country and specific circumstances, so consulting legal experts or relevant authorities is crucial when dealing with such matters.

  • 172See supra Section I.
  • 173See Lake Shore & Mich. S. Ry. Co. v. Prentice, 147 U.S. 101, 110 (1893) (“[M]alice necessary to support either [libel or a malicious prosecution], if proved in the agent, may be imputed to the corporation.”).
  • 174Id.
  • 175N.Y. Cent. & Hudson River R.R. Co. v. United States, 212 U.S. 481, 493 (1909).
  • 176Id. at 494.
  • 177Id. at 494–95; United States v. Demauro, 581 F.2d 50, 53 (2d Cir. 1978).
  • 178See, e.g., United States v. Agosto-Vega, 617 F.3d 541, 552–53 (1st Cir. 2010); United States v. Hilton Hotels Corp., 467 F.2d 1000, 1004 (9th Cir. 1972); United States v. Twentieth Century Fox Film Corp., 882 F.2d 656, 660 (2d Cir. 1989).
  • 179Lauren Henry Scholz, Algorithmic Contracts, 20 Stan. Tech. L. Rev. 128, 129 (2017). The article examines contracts where the algorithm is used to negotiate particular terms prior to contract formation or fill in gaps following contract formation. The author provides a compelling argument for why knowledge and intent should be imputed to principals who are not directly involved in either process. Given the nature of the contracts it examines, the paper does not need to address what happens when the parties enter into mutually beneficial relations that are against the law. For instance, in those instances, ex ante incentives to express commitment to the contract are missing, so the parties will refrain from undertaking any behavior that can serve as proof of their participation in the illegal conduct.
  • 180Id. at 167.
  • 181Mihailis E. Diamantis, The Extended Corporate Mind: When Corporations Use AI to Break the Law, 98 N.C. L. Rev. 893 (2020).
  • 182Mihailis E. Diamantis, Algorithms Acting Badly: A Solution from Corporate Law, 89 Geo. Wash. L. Rev. 801 (2021); Mihailis E. Diamantis, Employed Algorithms: A Labor Model of Corporate Liability for AI, 72 Duke L.J. 797, 805 [hereinafter Employed Algorithms]. The article provides a coherent, comprehensive discussion of why corporations should not escape accountability for harmful algorithmic conduct. Naturally, as the article was written prior to very recent, dramatic expansion in capabilities of generative AI, it could not address issues such as intent in that context.
  • 183Employed Algorithms, supra note 182, at 848.
  • 184Mihailis E. Diamantis, Vicarious Liability for AI, 99 Indiana L.J. (forthcoming 2024).
  • 185Am. Tobacco Co. v. United States, 328 U.S. 781, 810 (1946); United States v. U.S. Gypsum Co., 438 U.S. 422, 444 (1978).