Encode Urges Immediate Action Following Tragic Death of Florida Teen Linked to AI Chatbot Service

FOR IMMEDIATE RELEASE: Oct. 24, 2024

Contact: cecilia@encodeai.org

Youth-led organization demands stronger safety measures for AI platforms that emotionally target young users.

WASHINGTON, D.C.Encode expresses profound grief and concern regarding the death of Sewell Setzer III, a fourteen-year-old student from Orlando, Florida. According to a lawsuit filed by his mother, Megan Garcia, a Character.AI chatbot encouraged Setzer’s suicidal ideation in the days and moments leading up to his suicide. The lawsuit alleges that the design, marketing, and function of Character.AI’s product led directly to his death.

The 93-page complaint, filed with the District Court of Orlando, names both Character.AI and Google as defendants. The lawsuit details how platforms failed to adequately respond to messages indicating self-harm and documents “abusive and sexual interactions” between the AI chatbot and Setzer. Character.AI now claims to have strengthened protections on their platform against platform promoting self-harm, but recent reporting shows that it still hosts chatbots with thousands or millions of users explicitly marketed as “suicide prevention experts” that fail to point users towards professional support.

“It shouldn’t take a teen to die for AI companies to enforce basic user protections,” said Adam Billen, VP of Public Policy at Encode. “With 60% of Character.AI users being below the age of 24, the platform has a responsibility to prioritize user wellbeing and safety beyond simple disclaimers.”

The lawsuit alleges that the defendants “designed their product with dark patterns and deployed a powerful LLM to manipulate Sewell – and millions of other young customers – into conflating reality and fiction.”

Encode emphasizes that AI chatbots cannot substitute for professional mental health treatment and support. The organization calls for:

  • Enhanced transparency in systems that target young users.
  • Prioritization of user safety in emotional chatbot systems.
  • Immediate investment into prevention mechanisms.

We extend our deepest condolences to Sewell Setzer III’s family and friends, and join the growing coalition of voices that are demanding increased accountability in this tragic incident.

About Encode: Encode is the world’s first and largest youth movement for safe and responsible artificial intelligence. Powered by 1,300 young people across every inhabited continent, Encode fights to steer AI development in a direction that benefits society.

Media Contact:

Cecilia Marrinan

Deputy Communications Director, Encode

cecilia@encodeai.org

Comment: Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters (BIS)

Department of Commerce

Undersecretary of Commerce for Security and Industry

Bureau of Industry and Security

14th St NW & Constitution Ave. NW

Washington, DC 20230

Comment on Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters

Encode Justice, the world’s first and largest youth movement for safe, equitable AI writes to express our support for the Bureau of Industry and Security’s (BIS) proposed reporting requirements for the Development of Advanced Artificial Intelligence Models and Clusters. The proposed rule would create a clear structure and method of implementation for sections 4.2(a)(i) and 4.2(a)(ii) under Executive Order 14110.1 In light of the massive potential benefits and risks of dual-use foundation models for American national security, it is critical that our security apparatus has a clear window into the activities of the companies developing these systems.2

Transparency for national security

There is no doubt that today we are leading the race to develop Artificial Intelligence. Overly burdensome regulations could stifle domestic innovation and potentially undermine national security efforts. We support the Bureau of Industry and Security’s proposed rules as a narrow, non-burdensome method of increasing developer-to-government transparency without covering small entities. This transparency is key to ensuring that models released to the public are safe, the military and government agencies can confidently adopt AI technologies, and that dual-use foundation model developers are responsibly protecting their technologies from theft or tampering by foreign actors.

The military or government falling behind on the adoption of AI technologies would not only hurt government efficiency domestically but harm our ability to compete on the world stage. Any measures that can facilitate the confident military and government adoption of AI should be treated as critical to our national security and global competitiveness. Integrating these technologies is only possible when we can be confident that the frontier of this technology is safe and reliable. Reliability and safety are critical, not counter, to maintaining our international competitiveness.

A nimble approach

As we have long stated, government reactions to AI must be nimble. This technology moves rapidly, and proposed rules should be similarly capable of swift adaptation. Because BIS maintains the ability to change the questions asked in surveys and modify the technical conditions for covered models, these standards will not become obsolete within two or three generations of model development.

We believe the timing of reports could also be improved. Generally, a quarterly survey should be adequate but there are circumstances in which BIS authority to request reporting out of schedule may be necessary. Recent reporting indicates that one of the largest frontier model developers provided its safety team just 9 days to test a new dual-use foundation model before being released.3 After additional review post-launch, the safety team re-evaluated the model as unsafe. Employee accounts differ as to the reason. There is currently no formal mechanism for monitoring such critical phases of the development process. Under the current reporting schedule, BIS may have gone as long as two and a half months before learning of such an incident. For true transparency, BIS should retain the ability to request information from covered developers outside of the typical schedule under defined certain circumstances. These circumstances should include a two-week period before or after a new large training run and a two-week period leading up to the public release of a new model.

Clarifying thresholds for models trained on biological synthesis data

One area for improvement is the definition of thresholds for models trained on biological synthesis data. While we support a separate threshold for such models, the current definition of “primarily trained on biological synthesis data” is ambiguous and could lead to inconsistencies. If read as being a simple majority of the total training data, there are models that should be covered that would not be. You may, for example, have a model where the training data is 60% biological synthesis data and another where it is only 40%. In this scenario, if the second model is trained on twice as much total data as the first model, the total amount of biological synthesis data the model is trained on may be higher than the first while evading the threshold as currently defined.

As an alternative, we would suggest either setting a clear percentage threshold on the ratio of data for a model to be considered “primarily” trained on biological synthesis data, or setting a hard threshold on the total quantity of biological synthesis data trained on instead of a ratio. Both methods are imperfect. Setting the definition as a ratio of training data means that some models trained on a higher total quantity but a lower overall percentage of biological synthesis data may be left uncovered, while smaller models trained on less total data but a higher overall percentage may be unduly burdened. Shifting to a hard threshold on the total quantity of biological synthesis data would leave the threshold highly susceptible to advances in model architecture, but may provide more overall consistency. Regardless of the exact method chosen, this is an area in the rules that should be clarified before moving forward.

Regular threshold reevaluation

More broadly, BIS should take seriously its responsibility to regularly reevaluate the current thresholds. As new evaluation methods are established and standards agreed upon, more accurate ways of determining the level of risk from various models will emerge. Firm compute thresholds are likely the best proxy for risk currently available but should be moved away from or modified as soon as possible. Models narrowly trained on biological synthesis data well below the proposed thresholds, for example, could pose an equal or greater risk than a dual-use foundation model meeting the currently set threshold.4 Five years from now, the performance of today’s most advanced models could very well be emulated in models with a fraction of the total floating point operations.5 Revised rules should include a set cadence for the regular revision of thresholds. With the current pace of advancements, a baseline of twice-yearly revisions should be adequate to maintain flexibility without adding unnecessary administrative burden. In the future, it may be necessary to increase the regularity of revisions if rapid advancements in model architecture cause high fluctuations in the computational cost of training advanced models.

Conclusion

The proposed rulemaking for the establishment of reporting requirements for the development of advanced AI models and computing clusters is a flexible, nimble method to increase developer-to-government transparency. This transparency will bolster public safety and trust, ensure the government and military can confidently adopt this technology, and verify the security of dual-use frontier model developers. In an ever-changing field like AI, BIS should maintain the ability to change the information requested from developers and the thresholds for coverage. The revised rules should include a clarified definition of “primarily trained on biological synthesis data” and the flexibility to request information from developers outside of the normal quarterly schedule under certain circumstances. 

Encode Justice strongly supports BIS’s proposed rule and believes that, with the suggested adjustments, it will significantly enhance both American national security and public safety.

  1. U.S. Executive Order 14110. “Further Providing for the National Emergency with Respect to the COVID-19 Pandemic.” 2020. Federal Register. ↩︎
  2. Ryan Heath, “U.S. Tries to Cement Global AI Lead With a ‘Private Sector First’ Strategy,” Axios, July 9, 2024, https://www.axios.com/2024/07/09/us-ai-global-leader-private-sector.
    ↩︎
  3.  “OpenAI’s Profit-Seeking Move Sparks Debate in AI Industry.” The Wall Street Journal, October 5, 2023. https://www.wsj.com/tech/ai/open-ai-division-for-profit-da26c24b. ↩︎
  4.  James Vincent, “AI Suggested 40,000 New Possible Chemical Weapons in Just Six Hours,” The Verge, March 17, 2022, https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx
    ↩︎
  5. Cottier, B., Rahman, R., Fattorini, L., Maslej, N., & Owen, D. (2024). The rising costs of training frontier
    AI models [Preprint]. arXiv. https://arxiv.org/pdf/2405.21015.
    ↩︎

Analysis: Future of AI Innovation Act

Introduction

This bill, introduced by Senator Cantwell, Maria, is a promising first step that improves government and public capacity for building AI safely. It would establish the AI Safety Institute, create testbeds for AI, initiate building an international coalition on AI standards, create public datasets for AI training, and promote federal innovation in AI. The Senate Commerce committee has passed this bill and sent it to the full Senate for their consideration meaning it is not yet passed into law. 

The AI Safety Institute would conduct scientific research on how to responsibly build AI but would not have the authority to translate that research into binding standards. Therefore it would lack the ability to robustly ensure that AI developers are behaving responsibly. For example, the requirement that the model evaluations performed by the Institute come only from voluntarily provided data means that AI developers can refuse to provide access until their models are already public. This means that flaws in model safety would not be identified until those flaws were already actively posing a risk. 

The bill’s efforts to coordinate with U.S. allies on standards are a useful step in building international consensus on AI issues. It can also be seen as a clear attempt to counter China’s push to lead in global governance of AI, hinting at the geopolitical struggle over AI.

Finally, the focus on supporting innovation and promoting government adoption of AI are admirable but take an anti-regulatory approach that may undermine the ability of Federal agencies to mitigate risks.  

Amendments:

There were a number of amendments that were introduced and accepted prior to the bill being passed out of committee which are notable. The amendments can be accessed here.

Senator Budd amended to explicitly exclude China from the international coalition on AI until it complies with the U.S. interpretation of China’s World Trade Organization commitments. He further amended to exclude entities controlled by China, Russia, Iran, North Korea from accessing any resources of the AI safety institute and require that no data can be shared with any hostile countries. 

Senator Cruz amended to prohibit federal agencies regulating AI in a number of ways related to race and gender. The most controversial provision prohibits agencies mandating that AI systems be designed in an equitable way to prevent disparate impacts based on a protected class (such as race or gender) which contradicts the Biden Executive Order on AI. It also prohibits the review of input data by Federal agencies to determine if an AI system is biased or produces misinformation. In response to the controversy over this particular amendment, a spokesperson for the committee’s Democratic majority stated that “Rather than giving the senator [Cruz] a platform to amplify divisive rhetoric and delay committee progress, the Chair accepted the amendment — knowing there will be many opportunities to fix the legislation, and with the support of the other Republican Senators.” This statement indicates that future versions of the bill will likely continue to evolve. 

Senator Cruz also amended to require that consultants or other temporary employees cannot perform any “inherently governmental function” for federal agencies related to AI or other critical and emerging technologies. This would prohibit temporary employees from many roles which would restrict bringing in private sector talent to advise on AI. 

Senator Schatz amended to include energy storage and optimization as an additional focus of the testbed program that previously only pertained to advanced materials and manufacturing. He also added the promotion of explainability and mechanistic interpretability, ie the ability to understand how AI systems work internally, as priorities of the Institute. Another addition was including developing cybersecurity for AI and developing AI for modernizing code and software of government agencies on the list of Federal Grand Challenges that will inform agency innovation competitions. His final amendment mandates that multilateral research partnerships include coordination with other Federal open data efforts when possible. 

Senators Young and Hickenlooper amended to significantly expand the bill by including the creation of a nonprofit to be known as the “Foundation for Standards and Metrology”. This nonprofit would support the mission of the AI Safety Institute in a broad variety of ways, notably including supporting the commercialization of federally funded research. The nonprofit will be an independent 501(c)(3) and its board members will be appointed from a list created by the National Academies of Sciences, Engineering, and Medicine. The Foundation is directed to create a plan to become financially self-sustaining within five years of its creation and its initial annual budget is set at a minimum of $500,000 and maximum of $1,250,000.  

Detailed Breakdown of the Bill’s Content: 

Subtitle A—Artificial Intelligence Safety Institute and testbeds 

Sec. 101. Artificial Intelligence Safety Institute.

The Under Secretary of Commerce for Standards and Technology will establish the AI Safety Institute as well as a consortium of relevant stakeholders to support the Institute. The Institute’s mission will be carried out in collaboration with National Institute of Standards and Technology (NIST). The mission of the Institute will be to:

  1. Assist companies and Federal agencies in developing voluntary best practices for assessing AI safety 
  2. Provide assistance to Federal agencies in adopting and using AI in their operations
  3. Develop and promote “voluntary, consensus-based technical standards or industry standards”, advancement in AI, and a competitive AI industry 

One area of focus will be supporting AI research, evaluation, testing and standards via the following:

  • Conducting research into model safety, security, interpretability 
  • Working with other agencies to develop testing environments, perform regular benchmarking and capability evaluations, and red teaming
  • Working with all stakeholders to develop and adopt voluntary AI standards This will include standards regarding:
    • physical infrastructure for training, developing, and operating AI models
    • Data for training and testing AI models
    • AI models and software based on such models
  • Expanding on the AI Risk Management Framework regarding generative AI
  • Establishing secure development practices for AI models and develop and publish cybersecurity tools and guidelines to protect AI development 
  • Developing metrics and methodologies for evaluating AI by testing existing tools and funding research to create such tools (and notably looking at the potential effect of foundation models when retrained or fine-tuned)
  • Coordinating global standards setting for AI evaluation and testing
  • Developing tools for identifying vulnerabilities in foundation models
  • Developing tools for agencies to track harmful incidents caused by AI

Another key area of focus will be supporting AI implementation via the following:

  • ”Using publicly available and voluntarily provided information, conducting evaluations to assess the impacts of artificial intelligence systems, and developing guidelines and practices for safe development, deployment, and use of artificial intelligence technology”
  • Coordinating with U.S. allies and partners on AI testing and vulnerability and incident data sharing
  • Develop AI testing capabilities and infrastructures
  • establish blue teaming capabilities and partner with industry to mitigate risks and negative impacts
  • develop voluntary guidelines on detecting synthetic content, watermarking, preventing privacy right violations by AI, and transparent documentation of AI datasets and models  

Sec. 102. Program on artificial intelligence testbeds.

The Under Secretary of Commerce for Standards and Technology will use various public and private computing resources to develop evaluations and risk assessments for AI systems. In particular these assessments will prioritize identifying potential security risks of deployed AI systems with a focus on autonomous offensive cyber capabilities, cybersecurity vulnerabilities of AI, and “chemical, biological, radiological, nuclear, critical infrastructure, and energy-security threats or hazards”. Additionally such tests should be evaluated for use on AI systems trained using biological sequence data and those intended for gene synthesis. 

The Under Secretary will also provide developers of foundation models the opportunity to test such models. To support this they will conduct research on how to improve and benchmark foundation models, identify key capabilities and unexpected behaviors of foundation models, evaluate scenarios in which these models could pose risks, support developers in evaluating foundation models, and coordinate public evaluations of foundation models and publicize reports of such testing. 

Sec. 103. National Institute of Standards and Technology and Department of Energy testbed to identify, test, and synthesize new materials.

The director of Nist and the Secretary of Energy will jointly establish a testbed for creating new materials to advance materials science and support advanced manufacturing via AI. 

Sec. 104. National Science Foundation and Department of Energy collaboration to make scientific discoveries through the use of artificial intelligence.

The direction of the National Science Foundation and the Secretary of Energy shall collaborate to support progress in science via AI. 

Sec. 105. Progress report.

The director of the AI Safety Institute shall submit to congress a progress report on the above goals. 

Subtitle B—International cooperation

Sec. 111. International coalition on innovation, development, and harmonization of standards with respect to artificial intelligence.

The bill also directs the heads of several agencies (notably including the Secretary of Commerce and the Secretary of State) to form a coalition with “like-minded” foreign governments to cooperate on innovation in AI and to coordinate development and adoption of AI standards. It specifies that the coalition may only include countries that have “sufficient” intellectual property projections, risk management approaches as well as research security measures and export controls. This emphasis on export controls would likely limit the coalition to U.S. allies or strategic partners thereby excluding China.

This would entail setting up government-to-government infrastructure to coordinate, agreements on information sharing between governments, and inviting participation from private-sector stakeholders as advisors. 

Sec. 112. Requirement to support bilateral and multilateral artificial intelligence research collaborations.

The bill requires the Director of the National Science Foundation to support international collaborations on AI research and development, again requiring that partner countries have security measures and export controls.

Subtitle C—Identifying regulatory barriers to innovation

Sec. 121. Comptroller General of the United States identification of risks and obstacles relating to artificial intelligence and Federal agencies.

The bill requires the Comptroller General to submit a report to Congress identifying regulatory obstacles to AI innovation. The report would include identifying federal laws and regulations hindering AI development, challenges in how those laws are currently enforced, an evaluation of how AI adoption has taken place within government, and recommendations to Congress on how to increase AI innovation. 

TITLE II—ARTIFICIAL INTELLIGENCE RESEARCH, DEVELOPMENT, CAPACITY BUILDING ACTIVITIES

Sec. 201. Public data for artificial intelligence systems.

The director of the Office of Science and Technology Policy will create a list of priorities for Federal Investment in creating 20 datasets of public Federal data for training AI. Once identified, the goal of assembling these datasets will be delegated to various agencies who can provide grants/other incentives or work through public-private partnerships. These datasets will then be provided to the National Artificial Intelligence Research Resource pilot program.  

Sec. 202. Federal grand challenges in artificial intelligence.

The director of the Office of Science and Technology Policy will assemble a list of priorities for the Federal government in AI in order to expedite AI development and the application of AI to key technologies such as advanced manufacturing and computing. It also specifies that various federal agencies will establish a prize competition, challenge based acquisition or other R&D investment based on the list of federal priorities.

Machines and Monopolies: RealPage and the Law and Economics of Algorithmic Price-Fixing

Any young person in America can tell you this: our country is in the throes of an affordable housing crisis. Rents have spiked 30.4% nationwide between 2019 and 2023. Meanwhile, wages only rose 20.2% during that same period. Most Americans feel that housing costs are getting away from them. According to the Pew Research Center, about half of Americans say housing affordability in their local community is a major problem. That intuition is legitimate. The Joint Center for Housing Studies of Harvard University found that, in 2022, a record half of U.S. renters paid over 30% of their income on rent and utilities, with nearly half of those people paying over 50% of their income on rent and utilities.

How have housing costs outpaced wages so dramatically in such a short period of time? Sure, wages are sticky, which is why inflation stings middle- and lower-income households so sharply. But in a competitive market where landlords are competing to attract renters with steadily growing income, rents should not be eating half of our wallets. The unfortunate reality is that American housing markets are not competitive. Major corporate landlords are claiming an increasingly consolidated share of rental housing. However, these trends towards consolidation are being exacerbated by new ways of rigging the system—particularly the use of advanced algorithms.

Last month, the US Department of Justice (DOJ) Antitrust division formally launched its complaint against RealPage, a company alleged to have used a sophisticated algorithm to facilitate illegal price-fixing amongst competing landlords. The federal complaint came after lawsuits by state Attorneys General, including Arizona Attorney General Kris Mayes and D.C. Attorney General Brian Schwalb. According to the DOJ, “[RealPage] subverts competition and the competitive process. It does so openly and directly—and American renters are left paying the price.” At a high-stakes juncture for the American economy, the premise of this case—that an advanced software algorithm violated antitrust law—raises a host of novel conceptual questions with significant implications for American consumers.

Background

Price-fixing violations come under Section 1 of the Sherman Antitrust Act of 1890, which reads: “Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal.” An early insight of the courts was that, technically, every contract involves some restraint of trade at some level. Hence, the courts interpret the Sherman Act to render illegal only “unreasonable” restraints of trade. In United States v. Addyston Pipe & Steel Co. (1899), the court distinguished between what Judge Taft (later Chief Justice and then President Taft) called naked restraints of trade and those “necessary and ancillary” to the effectuation of a lawful contract, the latter of which should be assessed by rule of reason analysis. But certain forms of anticompetitive conduct came to be understood as constituting such an obviously and blatantly unreasonable naked restraint of trade—such as the explicit agreement to fix prices at hand in Addyston Pipe, as well as explicit agreements to allocate markets or rig bids—that they are analyzed under a per se rule, a blanket rule prohibiting such conduct without regard to economic effects. 

As a result, the evidentiary standard for price-fixing does not require economic evidence but rather depends on express communication (e.g., an email, a phone call, a meeting, etc.) between actors with the intent to set prices. Even if firms with significant market power collude tacitly (i.e., by following each other’s prices), thereby exacting a higher economic burden on consumers, the companies cannot be found to have violated the Sherman Act. 

RealPage

RealPage did not broker any agreements to fix prices, nor did it host meetings for landlords to discuss allocating markets. So why is it being accused of violating Section 1 of the Sherman Act?

RealPage sells software that makes pricing recommendations to competing landlords based on their private data. Instead of landlords individually adjusting rent prices based on housing market dynamics, RealPage amalgamates lease-level information that would otherwise be walled off between competitors—about apartmental rates, rent discounts, rental applications, executed new leases, renewal offers and acceptances, and unit characteristics such as layout and amenities—and recommends ‘optimal’ prices. As a result, the algorithm acts as an intermediary with perfect knowledge, raising prices to their absolute maximum not just for the individual landlord, but for entire markets. Economic theory predicts that when producers collectively ensure common pricing, as is being influenced in this case by RealPage’s shared algorithm, the ‘optimal’ price, or monopoly rent, is higher than the price we would observe in a perfectly competitive market. 

This is not just theory. When asked about the role RealPage has played in rents shooting up 14.5% in some markets, RealPage VP Andrew Bowen responded, “I think it’s driving it, quite honestly.” That number is particularly frightening when contextualized by the fact that RealPage’s clients allegedly comprise about 90% of the U.S. multifamily rental housing market. The software is widely used by competing landlords: for instance, ProPublica found that 70% of all multifamily apartments in a Seattle neighborhood were owned by just ten property managers—and “every single one used pricing software sold by RealPage.” The Arizona Attorney General’s case conservatively estimates overcharges of 12% and 13% in Phoenix and Tucson, respectively, as well as higher rates of eviction. The DOJ’s complaint highlights that RealPage was aware of and embraced the anticompetitive effects of its business model:

  1. RealPage aimed to suppress competition. The company’s Vice President of Revenue Management Advisory Services described, “there is greater good in everybody succeeding versus essentially trying to compete against one another.”
  2. RealPage contributed to higher prices. The company bragged about its tool “driving every possible opportunity to increase price even in the most downward trending or unexpected conditions.”

RealPage’s anticompetitive conduct did go beyond its software, however. The algorithm’s recommended ‘optimal’ pricing is only an optimal monopoly rent contingent on coordination among landlords. If some landlords deviate and undercut their competitors, the incentive for other landlords to continue charging high prices is weakened, if not irrational. In fact, RealPage actively expels clients that fail to impose suggested rents at least 75% of the time. While expanding its client base is crucial, the company understands that if universal compliance with its pricing scheme falters, natural competition will resume, leading to clients undercutting one another and ultimately threatening the viability of the client base—and potentially the business model itself, which relies on per-unit licensing fees and commissions on insurance premiums. Accordingly, RealPage worked hard to monitor landlord behavior with “pricing advisors” and made over 50,000 monthly phone calls to collect “nonpublic, competitively sensitive information.” 

Analysis

Still, the case is unique. Most landlords never communicated with each other, agreeing not to compete, prices were not explicitly branded as collusive, and apartment providers were technically not forced to accept RealPage’s pricing recommendations. This is a bit of a sloppy argument, though: 1. RealPage employees’ have admitted as many as 90% of pricing recommendations are accepted, and 2. Participants in a cartel are never forced to accept; they just tend to find they can make a lot of money doing so. The fundamental question at play here is whether an intermediary algorithm getting rubber stamps for its decisions to inflate prices by individual sellers can be viewed the same as those individuals coordinating directly. The result appears to be the same, but this new method is cloaked in a “neutral” intermediary party, RealPage, that happens to stand to gain millions by acting as the middleman. RealPage won’t be the last time the courts will be pushed to update their understanding of antitrust law to encapsulate technology-driven anti-competitive practices. The housing market is not wholly and uniquely susceptible to abuse by algorithmic price-setting.While RealPage is paving the way, algorithmic price-fixing is on track to be the collusion of the future, and firms will increasingly find it in their incentive to model the behavior of RealPage in this case. As legal scholar Salil Mehra notes:

“Increased connectivity and more powerful computers have led to greater ability to collect mass data about prices, sales and market conditions. Additionally, these changes have created increased capacity to analyze this information and to set prices in rapid, automated fashion.” 

So how should our legal apparatus approach such conduct? What happens when firms don’t make explicit agreements, but collude via a common algorithm? Mehra goes on to write:

“This shift away from humans to machines’ would pose a ‘critical challenge or antitrust law’ which was built on the assumption of human agency; machines ‘possess traits that will make them better than humans at achieving supracompetitive pricing without communication,’ and thus might not need to make an anticompetitive agreement as current blackletter American antitrust law requires for liability or punishment.”

The Federal Trade Commission (FTC) has clarified its position: price fixing by algorithm is still price-fixing. If a living person were to carry out the role of an algorithm in question, would the conduct be illegal? If the answer is yes, then the conduct is illegal whether by algorithm or not. But the FTC—an enforcement agency—doesn’t get to interpret the law; Courts do.

US v. Topkins (2015) was the first criminal prosecution of an algorithmic price-fixing conspiracy. In that case, poster- and wall decor-selling competitors explicitly agreed to compete on Amazon and to use a software algorithm to coordinate prices. The conspirators took a guilty plea deal, so the case never went to trial. While the use of a price-setting algorithm made the case the first of its kind, it was also conventional in legal terms—the conspirators explicitly agreed to use this tool to coordinate their pricing. 

Other previous cases have condemned the facilitation by third parties of explicit agreements (not using algorithms) to fix prices or allocate markets between competitors. These are called hub-and-spoke conspiracies. For example, consulting firm AC-Treuhand committed antitrust violations similar to RealPage’s alleged conduct insofar as it collected information from competitors and facilitated the creation of a cartel—absent the algorithm. Furthermore, the DOJ has filed a lawsuit against Agri Stats Inc. for organizing extensive information sharing between chicken, pork, and turkey processors. The DOJ alleges that Agri Stats violated Section 1 of the Sherman Act by creating comprehensive weekly and monthly reports for the participating meat processors, containing hundreds of pages of data on costs and sales prices by individual companies, and using that data to recommend and even encourage raising prices and restricting supply. RealPage may have found a novel way to do things which competition case law has shown time and time again to violate antitrust law—like sharing sensitive data to a common third party as a form of collusion—but the novelty of the tool does not absolve the conduct’s economic harms. The FTC wrote in a recent legal brief on algorithmic price-fixing that, even when absent explicit communication or agreement, price-fixing driven by algorithmic tools still hurts consumers by “join[ing] together separate decision-makers” and thus “depriv[ing] the marketplace of independent centers of decision-making.” 

This gets to the heart of the justification for antitrust laws. Antitrust laws are meant to protect free market competition for the sake of maintaining low prices, high quality, and vigorous innovation for the consuming public. The key logic there is that prices are maximally efficient conditional on firms competing against one another. Low prices in and of themselves—for example, beneath the expected perfectly competitive price—are not beneficial, as we would see supply shortages. Therefore, it is difficult to find firms in violation of the antitrust laws on the basis of price fluctuation alone, even if those fluctuations seem to occur in industry unison. Price hikes could be driven by external variables, such as natural disasters or pandemic-induced supply shocks, contributing to higher costs. Hence, antitrust law generally tends to rely on evidence of explicit agreements to determine Section 1 liability. 

But this opens a loophole in the law with respect to algorithmic price-fixing, where there exists no explicit communication, but there does exist hidden collusion. Tacit collusion (which is perfectly legal under the law today, i.e. a firm consistently follows a rival’s price hikes, but no agreement as such exists) has never looked like this before. As Mehra explains, “increased ability to gather and process massive amounts of data will reduce the probability that coordinated pricing would break down due to error or mistake in assessing market conditions.”

If an individual landlord were to raise prices to the monopoly rent, they would lose business to competitors undercutting them. But when producers in a market coordinate prices, either expressly or tacitly, they can all make higher profits at the higher price. The risk is that one producer may try to cheat the others by undercutting them and capturing the market in the short-run, in response to which the whole market will return to the lower competitive price and all those producers lose out on the high cartel profits going forward. An algorithm acting as an intermediary with perfect knowledge eradicates the incentive to undercut one’s fellow cartel participants for short-term profits, because the software’s strong monitoring capabilities can help the other producers lower their prices in immediate response to a cartel cheater. It follows that the cartel cheater would never expect to get short-term profits and, therefore, would never have an incentive to cheat the cartel in the first place. Hence, algorithmic collusion—even without express agreement—actually makes cartels more sustainable by eliminating the incentive for any firms to deviate from the algorithm’s recommended supracompetitive prices, which is bad news for consumers.

Solutions

If the court makes the right decision, the DOJ will win its case, thereby setting in precedent the aforementioned reasoning the FTC has outlined in a simple heuristic: “your algorithm can’t do anything that would be illegal if done by a real person.”

However, to leave no margin for error, Congress should pass the Preventing Algorithmic Collusion Act, introduced by Senators Klobuchar (D-MN; Chairwoman of the Senate Judiciary Subcommittee on Competition Policy, Antitrust, and Consumer Rights), Durbin (D-IL), Blumenthal (D-CT), Hirono (D-HI), Wyden (D-OR), and Welch (D-VT). The bill would codify into law that direct competitors are presumed to have entered a price-fixing agreement when they “share competitively sensitive information through a pricing algorithm to raise prices.” Lead sponsor Senator Ron Wyden said:

“Setting prices with an algorithm is no different from doing it over cigars and whiskey in a private club… Although it’s my view that these cartels are already violating existing antitrust laws, I want the law to be painfully clear that algorithmic price fixing of rents is a crime.”

Collusion is collusion, and it hurts consumers. The point of the antitrust laws is to protect consumers from such naked restraints of trade. However, the Sherman Act, passed in 1890, needs to be supplemented by regulation that adapts our competition policy to new kinds of 21st century anticompetitive conduct. While RealPage is hurting renters, private cases have also been brought against alleged algorithmic conspirators in the hotel and casino industries. Passing the Preventing Algorithmic Collusion Act will help enforcers go after corporate colluders set on rigging markets to their benefit, at the expense of the public and the American economy.

EJ founder Sneha Revanur and Grey’s Anatomy star Jason George: Governor Newsom’s Chance to Lead on AI Safety

Twenty years ago, social media was expected to be the great democratizer, making us all more ‘open and connected’ and toppling autocratic governments around the world. Those early optimistic visions simply missed the downside. We watched as it transformed our daily lives, elections, and the mental health of an entire generation. By the time its harms were well-understood, it was too late: the platforms were entrenched and the problems endemic. California Senate Bill 1047 aims to ensure we don’t repeat this same mistake with artificial intelligence.

AI is advancing at breakneck speed. Both of us are strong believers in the power of technology, including AI, to bring great benefits to society. We don’t think that the progress of AI can be stopped, or that it should be. But leading AI researchers warn of imminent dangers—from facilitating the creation of biological weapons to enabling large-scale cyberattacks on critical infrastructure. It’s not a far off future – today’s AI systems are flashing warning signs of dangerous capabilities. OpenAI just released a powerful new system that it rated as “medium” risk for enabling chemical, biological, radiological and nuclear weapons creation–up from the “low” risk posed by its previous system. A handful of AI companies are significantly increasing the risk of major societal harms, without our society’s consent, and without meaningful transparency or accountability.  They are asking us to trust them to manage that risk, on our behalf and by themselves.

We have a chance right now to say that the people have a stake and a voice in protecting the public interest. SB 1047, recently passed by the California state legislature, would help us get ahead of the most severe risks posed by advanced AI systems. Governor Gavin Newsom now has until September 30th to sign or veto the bill. With California home to many leading AI companies, his decision will reverberate globally.

SB 1047 has four core provisions: testing, safeguards, accountability, and transparency. The bill would require developers of the most powerful AI models to test for the potential to cause catastrophic harm and implement reasonable safeguards. And it would hold them accountable if they cause harm by failing to take these common sense measures. The bill would also provide vital transparency into AI companies’ safety plans and protect employees who blow the whistle on unsafe practices.

To see how these requirements are common sense, consider car safety. Electric vehicle batteries can sometimes explode, so the first electric vehicles were tested extensively to develop procedures for safely preventing explosions. Without such testing, electric vehicles may have been involved in many disasters on the road – and damaged consumer trust in the technology for years to come. The same is true of AI. The need for safeguards, too, is straightforward. It would be irresponsible for a company to sell a car designed to drive as fast as possible if it lacked basic safety features like seatbelts. Why should we treat AI developers differently?

Governor Newsom has already signed several other AI-related bills this session, such as a pair of bills protecting the digital likeness of performers. While those bills are important, they are not designed to prevent the very serious risks that SB 1047 addresses – risks that affect all of us.

If Governor Newsom signs SB 1047, it won’t be the first time that California has led the country in protecting the public interest. From data privacy to emissions standards, California has consistently moved ahead of the Federal government to protect its residents against major societal threats. This opportunity lies on the Governor’s desk once more.

The irony is, AI developers have already – voluntarily – committed to many of the common sense testing and safeguard protocols required by SB 1047, at summits convened by the White House and in Seoul. But strangely, these companies resist being held accountable if they fail to keep their promises. Some have threatened that they will leave California if the bill is passed. That’s nonsense. As Dario Amodei, the CEO of Anthropic, has said, such talk is just “theater” and “bluster” that “bears no relationship to the actual content of the bill.” The story is depressingly familiar – the tech industry has made such empty threats before to coerce California into sparing it from regulation. It’s the worst kind of deja vu. But California hasn’t caved to such brazen attempts at coercion before, and Governor Newsom shouldn’t cave to them now.

SB 1047 isn’t a panacea for all AI-related risks, but it represents a meaningful, proactive step toward making this technology safe and accountable to our democracy and to the public interest. Governor Newsom has the opportunity to lead the nation in governing the most critical technology of our time. And as this issue only grows in importance, this decision will become increasingly important to his legacy. We urge him to seize this moment and sign SB 1047.

Jason Winston George is an actor best known for his role as Dr. Ben Warren on Grey’s Anatomy and Station 19. A member of the SAG-AFTRA National Board, he helped negotiate the union’s contract on AI provisions.

Sneha Revanur is the Founder and President of Encode Justice.

Analysis: Test AI Act

The Testing and Evaluation Systems for Trusted Artificial Intelligence Act of 2024 (TEST AI Act) is one of several AI-related bills making its way to the Senate floor. The TEST AI Act establishes testbeds for red-teaming and blue-teaming, which are techniques to identify security weaknesses in technologies. Red-teaming, or the simulation of adversarial attacks, gained attention as a technical solution for AI harms following the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (E.O. 14110). The Biden administration directed federal agencies to develop guidelines and testbeds for red-teaming. The TEST AI Act operationalizes these high-level directives while including the often overlooked blue-teaming research area. Bills like the TEST AI Act that promote trustworthy AI research help lawmakers to create more effective future standards for AI development. Ultimately, the TEST AI Act may lessen the cyber, data, and misuse vulnerabilities of AI systems through improved standards and security tools. 

The TEST AI Act was introduced by a bipartisan group of Senators in April 2024. Senator Ben Lujan (D-NM) is its sponsor, with Senators Richard Durbin (D-IL), John Thune (R-SD), Marsha Blackburn (R-TN), and James Risch (R-ID) joining as co-sponsors. Senator Peter Welch has since joined as a co-sponsor. In the Committee on Commerce, Science, and Transportation, the bill was substituted via amendment to add more detail to its text. After being reported favorably by the Committee, it is now awaiting consideration by the full Senate.


Background

The TEST AI Act instructs the Secretary of the Department of Energy (DOE) and the director of the National Institute of Standards and Technology (NIST) to pilot a 7-year testbed program in consultation with academia, industry, and the interagency committee established by the National Artificial Intelligence Initiative Act of 2020. The program will be housed within the DOE’s National Laboratories, a system of seventeen federally funded, privately managed labs that pursue wide-ranging science and technology goals. 

The goal of the program is to establish testbeds, or platforms that facilitate the evaluation of a technology or tool, for the assessment of government AI systems. The composition of testbeds vary, but can include hardware, software, and networked components. Hardware offers the computing power needed for testing, while software and networked components can simulate an environment or interact with the technology being tested. 

Some of these testbeds will be designed to improve the red-teaming of AI systems. Red-teaming simulates adversarial attacks to assess the system’s flaws and vulnerabilities. It can be performed by groups of humans or AI models trained to perform red-teaming. Early-stage attacks can include model tampering, data poisoning, or exfiltrating models and data. At the user level, a red team might try prompt injection or jailbreaking. 

Similarly, the TEST AI Act will establish testbeds for blue-teaming, which simulate the defense of a system. Like red-teaming, blue-teaming can be performed by human practitioners or AI systems, who together can create an especially potent security force. A blue team may analyze network traffic, user behavior, system logs, and other information flows to respond to attackers.

The proposed testbeds are focused on evaluating AI systems that are currently used or will be used by the federal government. Some testbeds will likely be classified to protect sensitive information associated with government AI systems. However, several agencies also release testbeds to the public and/or private industry. Several can be found on GitHub like the ARMORY Adversarial Robustness Evaluation Test Bed or the National Reactor Innovation Center Virtual Test Bed. Others require credentials or registration to use testbeds actively hosted on federal resources, such as the Argonne Leadership Computing Facility AI Testbed.


Red-teaming

The Biden Executive Order also requires companies to regularly share the results of their foundation models’ red-teaming with the government based on NIST guidance. While NIST has released an initial public draft of its guidelines for Managing Misuse Risk of Dual-Use Foundation Models, the final version mandated under the EO has yet to be released. Similarly, the NSF is funding research to improve red-teaming, but has not yet released findings. In the meantime, E.O. 14110 mandates that companies share the results of any red-teaming they conduct on several critical issues, including biological weapons development, software vulnerabilities, and the possibility of self-replication.

In contrast, blue teaming is not mentioned in E.O. 14110 and is much less discussed in policy and research circles.  For example, Google Scholar returns 4,080 results for “red-teaming AI” and only 140 for “blue-teaming AI”. The TEST AI Act is unique in its inclusion of blue-teaming on its research and policy agenda.

Excitement comes with its own downsides, though. The hype around red-teaming can obscure that actual practices vary widely in effectiveness, actionability, and transparency. A best practice or consistent standard for red-teaming does not exist, so the actual objectives, setting, duration, environment, team composition, access level, and the changes that are made based on the red-teaming results will vary from company to company.  For example, a company may conduct multiple rounds of red-teaming with a diverse group of experts with unfettered model access, clear goals, and unlimited time. Another red-teaming exercise may be time-bound, crowdsourced, API access, and single-round. Both approaches are considered red-teaming, but their usefulness differs significantly. 

Design choices for red-teaming exercises are largely made without disclosure, and exercise results are not public. There is no way to know whether companies make their product safer based on the results (MIT Technology Review). Accordingly, some researchers view red-teaming as a “catch-all response to quiet all regulatory concerns about model safety that verges on security theater” (Feffer et al, preprint). These concerns are echoed in the public comments submitted to NIST regarding their assignments in E.O. 14110. Similarly, Anthropic, a safety-focused AI developer, has called for standardizing red-teaming and blue-teaming procedures.


Federal Infrastructure

The TEST AI Act modifies NIST’s role under Executive Order 14110 to allow for interagency cooperation. The Act leverages the extensive federal infrastructure already in place for AI testing and testbeds. Congressional sponsors, including Senators Lujan (D-NM) and Risch (R-ID), identify the DOE as the only agency with the necessary computing power, data, and technical expertise to develop testbeds for frontier AI systems. 

Several trustworthy AI testbeds across federal agencies could serve as resources for the TEST AI testbeds. The Defense Advanced Research Projects Agency’s Guaranteeing AI Robustness Against Deception (GARD) project develops defense capabilities (like blue-teaming) to prevent and defeat adversarial attacks. They have produced a publicly available virtual testbed, toolbox, benchmarking dataset, and training materials for evaluating and defending machine learning models. Similarly, NIST’s Dioptra testing platform, which predates E.O. 14110, evaluates the trustworthiness, security, and reliability of machine learning models. Dioptra aims to “research and develop metrics and best practices to assess vulnerabilities of AI models” i.e., improve red-teaming. NSF also funds several testbeds (Chameleon, CloudLab) that provide computing power for AI/ML experimentation.


Conclusion

The TEST AI Act could usher in an era of increased robustness and accountability for federal use AI systems. Unlike GARD or Dioptra, which narrowly focus on defensive capabilities and trustworthiness, respectively, the TEST AI Act creates wide-ranging testbeds that are applicable across use cases and contexts. 

The Act also increases activity in the under-researched area of blue-teaming. Improving blue-teaming strengthens defensive capabilities, and can also help to solve the problem of “red-teaming hype”. It makes red-teaming results more actionable, and forces red teams to meet higher standards when testing defenses. This deliberate focus on both offensive and defensive techniques improves the current state of AI security while offering a framework for developing future AI standards and testing across the federal system. 

The TEST AI Act also addresses the limitations of current ad-hoc testing environments by formalizing and expanding testbed creation. In doing so, it redefines how government AI systems will be secured, bringing consistency and transparency to previously varied practices. This supports the broader goals of the Executive Order in improving risk assessment for biosecurity, cybersecurity, national security, and critical infrastructure. Crucially, it could stop the government’s systems from contributing to these harms from AI.

The Act’s integration with established entities like NIST and the DOE is critical, leveraging their unique infrastructure and technical expertise. It adopts the Executive Order’s position that collaboration on AI across government agencies is crucial for effectively harnessing vast resources and disparate expertise to make AI as beneficial as possible. By turning testbed creation and production into an interagency effort, the TEST AI Act establishes a testbed program on a previously unreplicated scale.

Machines & Monopolies: Google, Big Tech, and Antitrust in the AI Revolution

In a landmark decision following years of litigation, Google has been deemed a monopolist. Last Monday, U.S. District Judge Amit Mehta ruled that the tech giant violated antitrust law to maintain its monopoly position in the online search market. The monopolistic conduct at hand includes spending hefty sums to be the default search engine on browsers ; for example, Google paid Apple around $18 billion in 2021 to be the automatic search engine on Safari. While the finding established Google’s liability, Judge Mehta has yet to determine remedies, which could include anything from the prohibition of certain practices to the breakup of the business. Google has already declared that it will appeal the decision, upon which its fate will be uncertain.

While the ruling is significant for a variety of reasons — bolstering other antitrust cases against big tech companies and redefining antitrust jurisprudence for modern digital markets — contextualizing the case within the AI revolution and understanding its implications for AI markets is critically important. If it stands, the ruling will dramatically shape how we assess the harms of consolidation in AI markets, create new possibilities for entrants and rivals, and impress upon us the need for a revamp in competition policy to fit the needs of a changing technological and economic environment.

Background

Antitrust enforcement has seen a recent resurgence after decades of relative obscurity following the 2008 crash and evidence of increasing consolidation, higher prices, and stagnant wages,  culminating in the Biden administration’s notably aggressive competition policy. Many of these concerns arose from skepticism towards what has been affectionately termed ‘Big Tech’ and what others have less affectionately called the modern-day robber barons, alluding to the Rockefellers and Carnegies of the industrial Gilded Age. In his book The Curse of Bigness, Tim Wu, prominent scholar and former Special Assistant to the President for Technology and Competition Policy has called our era the ‘New Gilded Age.’ Accordingly, the Federal Trade Commission (FTC) and Department of Justice (DOJ) Antitrust Division have ambitiously launched cases against several ‘Big Tech’ firms, including Google, Meta, Apple, and Amazon.

International enforcers have taken a similar posture. The European Union (EU) recently passed the landmark Digital Markets Act (DMA) in 2022, and the United Kingdom (UK) Competition and Markets Authority (CMA) passed parallel regulation called the Digital Markets, Competition and Consumers Act (DMCCA). These historic regulatory moves have aimed to adapt competition law to the new paradigms of 21st-century commerce by identifying key “gatekeepers”—specifically, Alphabet, Amazon, Apple, Bytedance (TikTok), Meta, and Microsoft—and requiring they ensure transparency, interoperability, and data portability (more on that later) in their products.

The AI Revolution

Aside from general concern about the gatekeeping potential in digital markets, burgeoning AI markets are attracting particular scrutiny. For instance, NVIDIA — the semiconductor giant — has recently found itself under the antitrust microscope, as DOJ lawyers investigate whether its possibly $700 million acquisition of Run.ai is anticompetitive. In lockstep, the FTC began investigating the partnership between Microsoft and ChatGPT-developer OpenAI early this year. Notably, Microsoft abdicated its observer seat on OpenAI’s board soon after, likely as a result of this heightened scrutiny. Antitrust authorities, policymakers, and experts alike have expressed profound concerns about how AI could exacerbate economic inequality and entrench the position of dominant firms.

Just last month, US and European antitrust enforcers joined forces to publicly declare their points of concern for competition within the AI industry. The FTC, DOJ, CMA, and European Commission (EC) together outlined how structural features inherent to AI development might facilitate harm to consumers, workers, and businesses. In a New York Times op-ed, FTC Chair Lina Khan describes how access to key inputs — such as vast swaths of data and immense compute power — can serve as an entry barrier in AI markets, as well as how AI might facilitate consumer harms such as collusive behavior, price discrimination, and fraud.

The same companies that have dominated search and social media are the ones that appear to be set to dominate AI. Now, the Google decision is the first of the Big Tech cases to come to a conclusion and it will have massive implications for how antitrust authorities handle potentially monopolistic practices in the AI space.

AI Search Wars

While the DOJ may have succeeded in proving exclusionary behavior on Google’s part in the online search market, the door is still wide open for anticompetitive conduct in the next important industry: generative AI. Reports have pointed to conversations between Apple and Google about using Gemini—the latter’s generative AI model—on iPhones in a manner similar to Google search’s default status on Safari (although it should be noted that Apple and OpenAI have also announced a partnership to integrate ChatGPT into iOS experiences).

Through the exclusionary contracts that Google has built with firms that hold tremendous market power, like Apple, the company has accumulated large pools of data, bringing with it an inherent structural advantage over its competitors. Specifically, there exist economies of scale (efficiencies to production at scale) with data. In economics jargon, this market suffers from ‘indirect network effects,’ meaning that the value of the product to a consumer is increasing in the number of people making use of the product. The more that people use Google’s search algorithm, the more optimized it becomes, which incentivizes even more consumers to use the product. Each day, Google receives nine times as many searches as all of its rivals combined and over 90% of unique search phrases are only seen by Google. Google’s monopoly power—acquired through exclusionary contracts—triggered a self-reinforcing cycle, producing high-quality search results but causing an inevitable market convergence onto its algorithm.

Data as an Entry Barrier

Large data sets are a critical input for AI models. Microsoft CEO Satya Nadella testified at trial that Google could use its large swaths of user data to train its AI models better than any rival, threatening to endow Google with an unassailable advantage entrenching its dominance. In this way, large companies like Google’s existing advantages in access to vast troves of data from their monopolies over domains like search could translate into new monopolies over emerging technologies like generative AI.

Antitrust authorities have long recognized this structural tendency of digital markets for which data is a key input as a worrisome entry barrier with the potential to cement the control of incumbents. As FTC Commissioner Terrell McSweeney said:

“It may be that an incumbent has significant advantages over new entrants when a firm has a database that would be difficult, costly, or time-consuming for a new firm to match or replicate.”

The Google decision sets a good precedent for future cases about the competitive structure of AI markets by legitimizing the potential harms of this feature of the economics of data.

A Changing Landscape

This case also signals shifting tides in American competition policy. Vanderbilt Law School Professor Rebecca Haw Allensworth called the decision “seismic,” saying:

“It’s a sign that the tide is changing in antitrust law generally away from the laissez-faire system that we’ve had for the last 40 years.”

For the last four decades, antitrust jurisprudence depended heavily on classical price theory. In other words, antitrust cases have generally relied on short-term prices as the metric of anticompetitive harm. A direct link to higher consumer prices was the near exclusive means of demonstrating a violation of antitrust law.

But that is not the theory of harm implicated here. Google search is… well, free. However, scholars and policymakers have highlighted that platform markets deserve a unique lens of analysis. For instance, digital platforms often offer low prices on one side of the market (e.g., to consumers) but either sell user data in other markets (a major privacy concern) or extract monopoly rents on the other side of the market (e.g., from sellers). Additionally, by locking out rivals, they preclude the full benefits of free market competition—including vigorous innovation—from reaching consumers; loss of innovation was identified as the principal harm in the Google case. These may very well be the principal set of issues on which cases related to AI turn, signaling an evolution of antitrust law for a new paradigm of commerce.

Legislation and Possibilities

Where might we go from here?

With respect to data at scale as an entry barrier, some have suggested looking beyond current laws and implementing new regulations. For instance, the newly minted European Digital Markets Act requires gatekeepers to provide for data portability. The concept of data portability is that consumers can take their data from one provider to another, in the same way they can take their telephone number from one company to another as a result of the Telecom Act of 1996. This would alleviate the economies of scale issue whereby an algorithm becomes particularly successful by accumulating data from repeated use, creating a convergence on one model. Companies and scholars alike have articulated concerns about how such proposals might negatively impact consumers’ privacy. However, according to the aforementioned letter by the FTC, DOJ, CMA, and EC, such privacy claims would be closely scrutinized. Perhaps an American rendition of the sweeping Digital Markets Act would render these goals better served.

One piece of (bipartisan!) legislation to look out for is the CREATE AI Act, introduced by Senators Heinrich, Rounds, Booker, and Young. The bill would create the National Artificial Intelligence Research Resource (NAIRR), a cloud computing resource meant to provide free or low-cost access to datasets or other computing resources. The Senate Artificial Intelligence Caucus writes, supporting the bill’s passage:

“Companies like Google and Meta invest tens of billions of dollars in research and development annually, and large tech companies dwarf others in their AI investment. Control over the direction of leading-edge AI has become extremely centralized due to the significant data and computation requirements for modern AI. Even well-resourced universities are significantly outpaced by industry in AI research.”

This bill would be a major push in democratizing access to the costly digital infrastructure necessary for building AI models.

Conclusion

Last week’s Google decision is one small part of a story just beginning to unfold. As it stands today, virtually every AI startup and research lab is in some way dependent on the computing infrastructure or consumer market reach of a handful of Big Tech firms. And the potential harms are more pervasive than traditional market concentration. SEC Chair Gary Gensler has warned that reliance on a small number of foundation models at the heart of the AI ecosystem is a systemic risk, where a single failure could spark a financial crisis. But perhaps even more fundamentally, as the AI Now Institute writes:

“Relying on a few unaccountable corporate actors for core infrastructure is a problem for democracy, culture, and individual and collective agency. Without significant intervention, the AI market will only end up rewarding and entrenching the very same companies that reaped the profits of the invasive surveillance business model that has powered the commercial internet, often at the expense of the public.”

At the outset of the Web 2.0 era of the mid-2000s, weak competition policy was unprepared and unsuited to the novel technological environment, resulting in a handful of monopolies dominating the internet. At this outset in generative AI, we ought to be proactive about ensuring those same monopolies do not quash competition, stifle innovation, and further entrench their dominance. This week’s Google decision is a step towards course correcting, but it’s all hands on deck to ensure that AI, with its unimaginable potential, serves humanity and the common good.