TIME: AI Companion App Replika Faces FTC Complaint

On Tuesday Encode, the Young People’s Alliance, and the Tech Justice Law Project filed an FTC complaint against Replika, a mobile and web-based application owned and managed by Luka, Inc., for violations of deceptive and unfair trade practices (15 U.S.C. § 45) pursuant to 16 CFR § 2.2. Their filing was covered as an exclusive in TIME today.

“Replika promises its users an always-available, professional girlfriend and therapist that will cure their loneliness and treat their mental illness. What it provides is a manipulative platform engineered to exploit users for their time, money, and personal data.” – Adam Billen, Vice President of Public Policy, Encode.

###

Encode is America’s leading youth voice advocating for bipartisan policies to support human-centered AI development and U.S. technological leadership.

First Tier Companies Make Standards: Catching up with China on the Edge

A popular Chinese saying states:

三流企业做产品; 二流企业做技术; 一流企业做标准

Third tier companies make products; second tier companies make technology; first tier companies make standards.

Standards are a key area of U.S. national security and a domain of increasing competition with China. China’s investments in this area have given them significant influence which requires a vigorous U.S. response. 

Solutions include better engaging American talent within universities, funding the key standards agency in the US: NIST, and multilateral international coordination.

Why Standards Matter

The rapid development of AI technologies, especially generative AI, has led to concerns over the rules and regulations that govern these technologies. There are 3 major bodies that govern standards-setting for AI technologies: the International Telecommunication Union (ITU; a UN body), the International Organization for Standards/International Electrotechnical Commission Joint Technology Committee (ISO/IEC-JTC 1, or JTC 1; a non-governmental body), and the European Telecommunications Standards Institute (ETSI; a non-governmental body of the European Standards Organization). As AI’s importance to national security grows, especially in light of US-China competition over AI, these standard development organizations are coming under increased scrutiny. 

Technological standards serve as guidelines for the development of new technology products. From telegrams to DVD players to 5G wireless networks, standards-setting has played a key role in global trade and soft power. As AI grows in economic and strategic importance, both domestically and abroad, standards bodies are coming into focus as a major arena for international competition, as both China and the United States attempt to influence standards to align with their national interests. The scope of standards impact on trade is global and pervasive: according to the Department of Commerce, 93% of global trade is impacted by standards and regulations, amounting to trillions of dollars annually.

In the US, standards-setting typically occurs as a bottom-up, industry-driven process, with groups of corporations independently organizing standard-setting conferences. Once the conferences agree on a standard, a sometimes multi-year process, the finalized standards are then delivered to government organizations for dissemination and codification. While these processes make standard-setting in the US agile, it also means the process is fragmented and underfunded. NIST, the National Institute on Standards and Technology, “recognizes that for certain sectors of exceptional national importance, self-organization may not produce a desirable outcome on its own in a timely manner.” In such cases NIST can step in as an “effective convener” to coordinate and accelerate the traditional standards development process. NIST has identified AI as one of these nationally critical technologies.

The US process of standard-setting exists in sharp contrast to the current Chinese method of standard-setting, which is overwhelmingly government-directed and government-funded. China has pushed for increased adherence from other countries to standards bodies that China has influence over, like the ITU, through regional bilateral agreements, which leverage China’s existing relationships in the Global South developed through the Belt and Road Initiative.

Control over standards serves as a powerful soft power tool to promote national values abroad. By shifting standards governance to the ITU, where China has increased backing from smaller countries, China is able to imbue technology standards with its distinct cultural and economic values. As Dr. Tim Rühlig, Senior Analyst for Asia/Global China at the European Union Institute for Security Studies (EUISS), explains: 

“Technology is not value-neutral. Whether an innovation is developed in a democratic or autocratic ecosystem can shape the way it is designed—often unintentionally. Only a tiny share of technical standards developed in China reflects authoritarian values, but if they turn into international standards, they carry transformative potential, because once a technical standard is set, accepted, and used for the development of products and services, the standard is normally taken for granted.”

According to leaked documents obtained by the Financial Times in 2019, AI facial recognition software standards developed by the ITU were influenced by the Chinese effort to increase data-sharing and provide population surveillance technology to African countries integrated into the Belt and Road Initiative. Standards put out by the ITU are particularly influential in regions like Africa, the Middle East, and Asia where developing countries don’t have the resources to develop their own standards. This Chinese effort has resulted in standards that directly mirror top Chinese companies’ surveillance tech, including video monitoring capabilities used for surveillance within smart street lights developed by top Chinese telecommunications firm ZTE. Groups like the American Civil Liberties Union have long warned about the potential dangers of street light surveillance, and protestors in Hong Kong in 2019 toppled dozens of street lamps they suspected were surveilling their activities. When America falls behind in groups like the ITU, China pulls ahead. 

This comes at a time of great power competition over AI. The CCP views itself as having failed to influence standards-setting for the global Internet in the 80’s and 90’s. It now wants to rewrite those rules to support its authoritarian view of the internet through its “New IP” plan. That history casts a long shadow over its work on AI, a technology which China has a much larger domestic industry for than it did for the internet in its early days. China wants to ensure that this time it can infuse its values and secure its dominance over the next transformative technology. 

The economic rewards of controlling standards are significant and deeply embedded into our global trading system. Companies whose proprietary technology is incorporated into the official standard for a technology can reap substantial rewards through IP licensing payments from other firms that make products using the standard. The development of standards can also be a powerful trade tool for negotiating cheaper royalties which China employed to great effect during the development of DVD technology in the mid-2000s. In the 1990s, DVD standards development was primarily spearheaded by a coalition of US, European, and Japanese corporations, leading to higher royalty rates for product manufacturing in China. However, in 1999, a Chinese coalition of manufacturers and government bodies began developing AVD technology, a new standard with improved quality. Even though AVD (and EVD, the Taiwanese equivalent technology) never rose to high market capitalization, the development of a competing standard was used as leverage to bargain for substantial reductions in standards royalty payments.

Standards also influence the level of cybersecurity risks posed by new technologies. If China leads on standards-setting, looser data privacy standards and decreased corporate governance may increase cybersecurity risks. 

Human rights lawyer Mehwish Ansari argues, “There are virtually no human rights, consumer protection, or data protection experts present in ITU standards meetings so many of the technologies that threaten privacy and freedom of expression remain unchallenged in these spaces….” This means that when global leaders like the US abdicate leadership in these bodies there are very few remaining checks on Chinese influence.

China is strategically increasing its influence in global standards organizations

In 2018, China launched the “China Standards 2035” initiative, which sought to increase the country’s role in shaping emerging technology standards, including AI. This reflects China’s view that “standardisation is one of the most important factors for the economic future of China and our standing in the world.” Since 2020, China has succeeded in increasing their proposals to ISO by 20% annually. In September of 2024, the ITU approved three technical standards for 6G mobile technology, governing how 6G networks integrate AI and virtual reality experiences proposed by Chinese entities. While the US relies on private companies to draft proposals to SDOs, China employs state-controlled organizations to shape emerging technology and increasingly dominate global standards forums.

Changing the environment of standards-setting

In order to increase its influence in standards-setting organizations, China has made efforts to shift standards-setting towards the ITU, a UN body where they have comparatively greater influence than other SDO’s.  Unlike many SDOs, where Western multinational corporations dominate, the ITU’s membership is more heavily composed of government representatives. China has bolstered its presence within the ITU by increasing participation in its staff and leadership, thereby gaining more control over the standards proposed within the organization. Further, many meetings for standards-setting now take place in China, where standards officials are reportedly impressed by the technical knowledge of Chinese senior government officials and the lavish support given to standards development.

China has reportedly tried to influence the outcomes of standards votes by coordinating the actions of participants from China. In 2021, it was discovered that all Chinese representatives in telecom standards meetings had been instructed to support Huawei’s proposal, which blatantly violated established practices.

Funding and targeting key positions in standards organizations

One way that China has increased its presence in key standards organizations is through actively pursuing the appointment of its chosen officials to influential positions. Several experts on standards-setting have stated that China has aggressively pursued essential positions in these organizations as a way of advancing its own priorities. China has expanded its membership in over 200 committees of the ISO between 2005 and 2021. In the ITU, China has disproportionate influence by holding several key senior positions that oversee the agency. In another important standards-setting body, the 3rd Generation Partnership Project, China holds 19 leadership positions, compared with America’s 12 and the EU’s 14.

Beyond vying for key positions, China has also invested substantially in supporting the technology companies that propose standards to these bodies. This substantially advantages China, considering the large time and capital cost of these proposals. Local governments in China also provide subsidies to firms for setting standards, with the highest compensation being provided for international standards. This initiative has led to an influx of low-quality proposals submitted to standards bodies, as it incentivizes submitting proposals regardless of expertise or quality. As a result, Chinese companies submitted 830 technical standards related to wired communications in 2019 – more than the combined proposals of the next three largest contributors, Japan, the US, and South Korea. In some cases, the resulting behavior from Chinese firms can be such an annoyance that some US companies have withdrawn from standards bodies. This has become a significant problem in the ITU-T, where this practice is commonplace. 

Incentivizing participation by researchers and academics

China has further increased its influence in international standards development through academic grants and investing in academic programs. According to a report from the US-China Economic and Security Review Commission, “[a]ctive participation and submission of technology into Chinese standards – particularly getting the technology included in standards – affords bonuses, travel permissions, or credits toward promotions and tenure.” It can also provide researchers access to grants ranging from thousands to tens of millions of yuan, encouraging and supporting university professors and researchers to be active in standards-setting work.

Beyond these grants, China’s National Institute for Standardization has created graduate-level degree programs focused on standards. This gives Chinese firms a deeper pool of experts for standards-setting work. In the US, by contrast, there are no graduate-level standards courses. American investment into academia by organizations like NIST pale in comparison to Chinese investments. Additionally, Chinese institutions are heavily recruiting foreign experts in science and technology to take part in their standards-setting work through the “Recruitment Program of Global Experts.” Under this program, China has established 19 partnerships with foreign universities and companies.

One Belt One Road chanelling China’s power

Even when China isn’t able to advance standards through international standards bodies, they are able to use their influence through the One Belt One Road initiative to promote standards. China has arranged more than 100 bilateral standards agreements mostly with countries in the Global South. For example, during the Forum on China-Africa Cooperation, China established joint labs and research institutions for emerging technology. Even if countries don’t accept international-level standards proposed by China, they might find themselves locked out of these markets because of such bilateral agreements. As China continues to pull more countries into the One Belt One Road project, they have the potential to create an entire market of economies that use Chinese standards. This harms US companies that don’t use Chinese standards as it creates challenges to accessing these markets, gives China a competitive advantage, and increases their influence in standards organizations. As noted by a scholar at the University of Sydney, the West “might find itself outvoted [in the ITU] as China has heavily invested in countries from the Global South with its Belt and Road Initiative.”

How increased Chinese influence over standards threatens American value

China is reportedly spending $1.4 trillion dollars on a digital infrastructure program with the intention of dominating development of the technologies of the future. The standards China promotes often reflect values prioritizing centralized control over individual privacy. For example, Huawei, a prominent company in China’s technological advancements, proposed alternative internet protocols to the ITU in 2019. These protocols included a “shut up command,” allowing governments to revoke individual access to the internet. Although this proposal was not adopted, it garnered support from other authoritarian states and Huawei is reportedly already developing this new IP with partner countries. Although the US and other allies have banned or restricted Huawei’s technology, over 90 countries and counting have begun to implement their products, particularly those in the global south. As China continues to gain dominance in the realm of standards-setting, and brings more countries into its fold, this presents a significant risk of reshaping global norms in ways that prioritize state control over individual freedoms, challenging the open, democratic values upheld by the United States and its allies. 

The US Government is drastically underinvesting in supporting SDO’s

Chinese officials have embraced government-led standards development as a means through which to gain global soft power. Domestically, however, standards bodies prioritize broad, voluntary participation by key private sector stakeholders. While standardization in the U.S. is and should continue to be privately led, there can be no doubt that the US government and its relevant agencies, such as NIST, serve a key role in identifying priorities, particularly regarding national security, for industry and academia to follow.

Additionally, the National Standards Strategy For Critical And Emerging Technology (NSSCET)  identifies key areas of standards development where the government has a uniquely important role that simply cannot be filled by the private sector. This includes areas where the government is the official representative (such as the ITU), areas of national interest, and early stage technologies that lack a sufficiently developed industry such as quantum information technology.

The ANSI/NIST panel highlighted the “… importance of government expert participation in standards activities, and the need for funding and consistent government-wide policy supporting that participation.” As the national agency in charge of standards, NIST should have adequate resources to support the development of standards. Instead, NIST has been forced to “stop hiring and filling gaps” in the wake of FY 2024 budget cuts, directly delaying critical new standards. NIST has stated that its work on developing AI standards would be “very, very tough” absent additional funding. 

Even more concerning, a 2023 National Academies report commissioned by Congress found that NIST’s physical facilities are severely inadequate with over 60% of its facilities failing to meet federal standards for acceptable building conditions due to “grossly inadequate funding.” These inadequate physical conditions undermine NIST’s work and “routinely wreak havoc with researcher productivity and national needs” causing an estimated 20% loss in productivity. Given NIST’s annual budget is just over $1 billion dollars, a 20% loss in productivity translates into hundreds of millions being lost due to this failure to invest in infrastructure.

The National Academies report found that the lack of funding to repair NIST’s crumbling facilities has led to:

  • “Substantive delays in key national security deliverables due to inadequate facility performance.
  • Substantive delays in national technology priorities such as quantum science, engineering, biology, advanced manufacturing, and core measurement sciences research.
  • Inability to advance research related to national technology priorities.
  • Material delays in NIST measurement service provisions to U.S. industry customers.
  • Serious damage or complete destruction of highly specialized and costly equipment, concomitant with erosion of technical staff productivity.”

Lack of accessible SDO meetings located within the U.S.

Hosting international standards meetings within the U.S. is critical for ensuring robust attendance by U.S. participants and to solidify American leadership within SDOs. A lack of domestic SDO meetings means giving up the “homefield advantage” of easier attendance by American companies. The National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) highlight that  “… in the past several years, organizers have held fewer standards meetings in the U.S” due to insufficient logistical support from the U.S. – in contrast to China’s high level of state support making it an increasingly popular venue for global SDO meetings.

A key barrier that hamstrings organizing domestic SDO meetings is significant delays in processing visas. A panel hosted by The American National Standards Institute (ANSI) and the National Institute of Standards and Technology (NIST) noted that “… lengthy visa processes for some attendees have presented challenges in bringing international stakeholders to the U.S. for meetings.” Additionally, U.S. visa restrictions on certain countries and industries can pose a further challenge for hosting SDO meetings.

Another barrier to hosting successful and accessible domestic SDO meetings is a lack of financial resources. The ANSI/NIST panel highlighted that the financial costs of participating in SDO meetings was a deterrent to American small and medium businesses. USTelecom, a trade association, estimates that participating companies spend $300,000 per engineer, per year, to work full time on standards development which means that multiyear standards efforts can frequently cost a single company millions of dollars. China utilizes state grants and subsidies to help reduce these costs, but in the U.S. the full cost falls directly on businesses, directly disincentivizing participation. 

Lack of engagement and sufficient funding of academia

The National Standards Strategy For Critical And Emerging Technology (NSSCET) identifies academia as “critical stakeholders” in standards development. Academia both provides essential experts for current standards developments and also cultivates essential future talent.

A lack of substantial funding and an absence of dedicated efforts within academia to channel talent towards participating in SDO’s seriously impairs the ability of some of America’s brightest minds to contribute to developing standards. The NSSCET identifies that while SDO’s have grown significantly in the last decade, “…the U.S. standards workforce has not kept pace with this growth”. 

This reflects both a lack of investment and a lack of recognition within academia of the importance of standards. The NSSCET notes that standards successes are not recognized within academia as equivalent to a publication or patent, thereby making them less prestigious and thus they are less often pursued. The NSA/CISA recommendations on standards similarly highlights that colleges and universities undervalue the benefits of standards development education for students due to a perceived lack of value in the job market. Compounding this weak interest in standards is a lack of available funding for academia to participate in standards activities. Total investment by NIST into developing standards curricula in universities is a paltry $4.3 million over the last 12 years in contrast to lavish Chinese annual spending. 

At present there are no American dedicated degree programs for becoming an expert in creating standards. This stands in contrast to China which is not only “…actively recruiting university graduates…” for standards development, but also has the only university in the world, China Jiliang University (CJLU), that offers degrees specifically in standards development. At CJLU, 65% of its 1000+ students are registered members of SDO’s such as the International Organization for Standardization (ISO) and the university was awarded the first and only “ISO Award for Higher Education in Standardization.” The culture of promoting standards within academia is deep, with thousands of Chinese students even competing in the National Standardization Olympic Competition.

The NSA/CISA recommendations on standards highlight the need for academia to promote standards to students in order to create awareness and expertise within the next generation of talent to fill the existing gap. Failing to leverage academia neglects one of the most valuable sources of expertise and talent within the field of standards.

In order to maintain and extend its global leadership within standards, the U.S. must make investments commensurate to the importance of leading on technology standards. Failing to do so imperils national security, economic prosperity, and technological progress. 

This is how we can fix it:

Invest in NIST commensurately to its importance as a key and irreplaceable actor in developing standards

  • Fully funding NIST is an investment into our nation’s economic and national security that will pay off for generations to come by ensuring they have the physical resources and personnel to meet the challenges ahead as crucial technologies are pioneered and standards set. Failing to do so bottlenecks our ability to compete with China.
  • Specifically, passing the Expanding Partnerships for Innovation and Competitiveness (EPIC) Act to establish a Foundation for Standards and Metrology to support NIST’s mission – similar to existing foundations that support other federal science agencies like the CDC and NIH – would provide NIST greater access to private sector and philanthropic funding, enhancing its capabilities.
  • The EPIC Act includes provisions to increase basic quality of life on NIST’s campus, including provisions to “Support the expansion and improvement of research facilities and infrastructure at the Institute to advance the development of emerging technologies.”

Dedicate resources to expediting visa backlogs and clarifying visa restrictions to ensure American hosted SDO meetings are convenient

  • Until SDO meetings can be easily scheduled within the U.S., American companies will lack the home field advantage that host countries enjoy. Establishing a fast track program for SDO participants could enable visa issues to be easily solved with minimal expenditure.
  • Existing members of the US-based standards community should be empowered to attend more SDO meetings internationally, allowing them to develop relationships and bolster the possibility of international stakeholders attending US-held SDO meetings in the future. This should include the utilization of provisions in the EPIC Act, especially by “Offer[ing] direct support to NIST associates, including through the provision of fellowships, grants, stipends, travel, health insurance, professional development training, housing, technical and administrative assistance, recognition awards for outstanding performance, and occupational safety and awareness training and support, and other appropriate expenditures.”

Engage academia in direct standards development and standards talent development using selective grant funding and clear guidance

  • Existing designations of top quality academic institutions as NSA National Centers for Academic Excellence allows them to compete for DOD funds. As put forth in the the NSA/CISA recommendations for standards, a similar mechanism could be used to designate schools that conduct outstanding work in standards development and invest in the next generation of standards talent to channel increased funding and incentives to academic institutions. These investments are necessary to stay competitive with China’s massive investment in higher education for standards.
  • Another key priority in engaging academia in the development of technological standards is designating and communicating key research priorities, as noted by the NSA/CISA. We concur with their recommendation which encourages “express[ing] future requirements that they identify, particularly in the area of national security, so that academia and industry can consider them as they plot a course for research.”

Multilateral coordination with allies and partners

  • As identified by the White House National Standards Strategy, the US should include standards activities in bilateral and multilateral science and technology cooperation agreements, leverage standing bodies like the U.S.-EU Trade and Technology Council Strategic Standardization Information mechanism to share best practices, coordinate using the International Standards Cooperation Network, and deploy all available diplomatic tools in support of developing secure standards and countering China’s influence.

Policy Brief: Bridging International AI Governance

Key Strategies for Including the Global South

With the AI Safety Summit approaching, it is imperative to address the global digital divide and ensure that the voices of the Global South are not just heard but actively incorporated into AI governance. As AI reshapes societies across continents, the stakes are high for all, particularly for the Global South, where the impact of AI-driven inequalities could be devastating.

In this report, we address why the Global North should take special care to include the Global South in international AI Governance and what the Global North can do to facilitate this process. We identify the following 5 key objectives pertaining to AI, which are of global importance, with the Global South being a particularly relevant stakeholder.

School District Brief: Safeguards to Prevent Deepfake Sexual Abuse in Schools

Introduction / State of the Problem

Technologists and academics have been warning the public for years about the proliferation of non-consensual sexual deepfakes, altered or artificial non-consensual pornography of real people. Today, a potential abuser just needs access to a web browser and internet connection to freely create hundreds or thousands of non-consensual intimate images. 96% of deepfake videos online are non-consensual pornographic videos, and 99% of them target women. Alarmingly, we are now seeing a pattern of boys as young as 13 using these tools to target their female classmates with deepfake sexual abuse.

On October 20, 2023, young female students at Westfield High School in New Jersey discovered that teenage boys at the school had taken fully-clothed photos of them and used an AI app to alter them into sexually explicit, fabricated photos for public circulation. One of the female victims revealed that it was not just one male student, but a group using “upwards of a dozen girls’ images to make AI pornography.” In the same month, halfway across the country at Aledo High School in Texas, a teenage boy generated nude images of ten female classmates. The victims, who sought help from the school, the sheriff’s office, and the social media apps, struggled to stop the photos from spreading for over eight months: “at that point, they didn’t know how far it spread”. At Issaquah High School in Washington, another teenage boy circulated deepfake nude images of “at least six 14-to-15-year-old female classmates and allegedly a school official” on popular image-based social media app Snapchat. While school staff knew about the images, the police only heard about the incident through parents who independently reached out to file sex offense reports. Four months later, the same incident occurred in Beverly Hills, California — and the sixteen victims were only in middle school.

Consequences of Inaction / Lack of Appropriate Guidelines

In almost every case of deepfake sexual abuse in schools, administrators and district officials were caught off guard and unprepared; even when relevant guidelines existed. At Westfield High School, school administrators conducted initial investigation with the alleged perpetrators and police present, without their parents and lawyers, making all collected evidence inadmissible in court. Because of the school negligence, the victims could not seek accountability and still do not know “the exact identities or the number of people who created the images, how many were made, or if they still exist”. At Issaquah High School, when a police detective inquired about why the school had not reported the incident, school officials questioned why they would be required to report “fake images”. Issaquah’s Child Abuse, Neglect, and Exploitation Prevention Procedure states that in cases of sexual abuse, reports to law enforcement or Child Protective Services must be made “at the first opportunity, but in no case longer than forty-eight hours”. Yet, because fake images are not directly named in the policy, the school did not file a report until six days later— and not without multiple reminders from the police about the school’s duty as mandatory reporters. In Beverly Hills, California, administrators acted more swiftly in expelling the five students responsible. Still, the perpetrators retained full anonymity, while the images of victims were permanently made public, attaching their faces to a nude body. Victims shared struggles with feelings of anxiety, shame, isolation, an inability to focus at school, and serious concerns about reputational damage, future repercussions with job prospects, and the possibility that photos could resurface at any point.

A Path Forward:

Deepfake sexual abuse is not inevitable: it is possible and necessary for schools to implement concrete preventative and reactive measures. Even before an incident has occurred, schools can protect students by setting standards for acceptable and unacceptable behavior, educating staff, and modifying existing policies to account for such incidents.

Many schools have existing procedures related to sexual harassment and cyberbullying issues. However, standard practices to handle digital sexual abuse via deepfakes have yet to materialize. Existing procedures non-specific to this area have been ineffective, resulting in the exposure of victims’ identities, week-long delays while pornographic images are circulated amongst peers, and failures to report incidents to law enforcement in a timely manner. School action plans to address these risks should incorporate the following considerations:

  1. Deepfake sexual abuse incidents in schools follow a similar pattern: students feed fully-clothed images of their peers into an AI application to manipulate them into sexually explicit images and circulate them through social media platforms like Snapchat. The apps used to create and distribute deepfake sexual images are easily accessible to most students, who recklessly disregard the grave consequences their actions hold. Schools must update their codes of conduct, sexual harassment and abuse, harassment, intimidation and abuse, cyberbullying and AI policies to clearly ban the creation and dissemination of deepfake sexual imagery. Those updated policies should be clearly communicated through school wide events and announcements, orientation, and consent or sexual education curricula. Schools must clearly communicate to students the seriousness of the issue and the severity of the consequences, setting a clear precedent for action before crises occur. 
  2. Appropriate consequences for perpetrators: The lack of appropriate consequences for the creation and dissemination of deepfake sexual imagery will undermine efforts to deter such behavior. Across recent incidents, most schools failed to identify all perpetrators involved in incidents or deliver reasonable consequences for the serious harm caused as a result of their actions. Westfield High School suspended a male student accused of fabricating the images for one or two days; victims and families shared that the perpetrators at Aledo High School received “probation, a slap on the wrist… [that will] be expunged. But these pictures could forever be out there of our girls.” To deter perpetrators and protect victims, schools should establish guidelines for determining consequences and a system for stakeholders that should be involved in the determination of what consequences there will be and which parties will carry them out. Even in cases where the school needs to involve local authorities, there should be school-specific consequences such as suspension or expulsion.
  3. Equivalence of real images and deepfake generated images: Issaquah failed to address its deepfake sexual abuse incident because school administrators were unsure whether existing sexual abuse policies applied to generated images. Procedures addressing sexual abuse incidents must be updated to treat the creation and distribution of non-consensual sexual deepfake images the same as real images. For example, an incident that involves creating deepfake porn should be treated with the same seriousness as an incident that involves non-consensually photographing someone nude in a locker room. Deepfake sexual abuse incidents require the same rigorous investigative and reporting process as other sexual abuse incidents because their consequences are similarly harmful to victims and the larger school community.
  4. Standard procedures to reduce harms experienced by victims: At Westfield High School, victims discovered their photos were used to generate deepfake pornography after their names were announced over the school-wide intercom. Not only did victims feel that it was a violation of privacy to have their identities exposed to the entire student body, but the boys who generated the images were privately pulled aside for investigation. Schools should have established, written procedures to discreetly inform relevant authorities about incidents and to support victims at the start of an investigation on deepfake sexual abuse. After procedures are established, educators should be made aware of relevant school procedures for protecting victims through dedicated training.

Case Study: Seattle Public School District

The incidents that have sounded the alarm bells on this issue are only the ones that have been reported in large news outlets. Such incidents are likely occurring all over the country without much media attention. The action we’re seeing today is largely the result of a few young, brave advocates using their own experiences as a platform to give voice to this issue – and it is time that we listen. Seattle Public Schools, like most districts around the country, has not yet had a high profile incident. A review of its code of conduct, sexual harassment policy, and cyber bullying policy, similar to those of many other schools, reveals a lack of preparedness in preventing and responding to potential deepfake sexual abuse within schools. Below is a case study of how the aforementioned considerations may apply to bolster Seattle’s district policies:

  1. Code of conduct: Seattle Public School District’s code of conduct, revised and re-approved every year by the Board of Education, contains policy on acceptable student behavior and standard disciplinary procedures. Conduct that is “substantially interfering with a student’s education […] determined by considering a targeted student’s grades, attendance, demeanor, interaction with peers, interest and participation in activities, and other indicators” merits a “disciplinary response.” Furthermore, “substantial disruption includes but is not limited to: significant interference with instruction, school operations or school activities… or a hostile environment that significantly interferes with a student’s education.” Deepfake sexual abuse incidents falls squarely under this conduct, affecting a victim’s ability to focus and interact with teachers and peers. Generating and electronically distributing pornographic images of fellow students outside of school hours or off-campus falls within the school’s purview under their off-campus student behavior policy, as it causes a substantial disruption to on-campus activities and interferes with the right of students to safely receive their education. Furthermore, past instances have shown that these incidents spread rapidly and become a topic of conversation that continues into the school day, especially when handled without sensitivity for victims, creating a hostile environment for students.
  2. Sexual harassment policy: Existing policy states that sexual harassment includes “unwelcome sexual or gender-directed conduct or communication that creates an intimidating, hostile, or offensive environment or interferes with an individual’s educational performance”. Deepfake pornography, which has been non-consensual and directed toward young girls in every high profile case thus far, should be considered a form of “conduct or communication” that is prohibited under this policy. The Superintendent has a duty to “develop procedures to provide age-appropriate information to district staff, students, parents, and volunteers regarding this policy… [which] include a plan for implementing programs and trainings designed to enhance the recognition and prevention of sexual harassment.” Such policies should reflect the most recent Title IX regulations, effective August 1st, which state that “non-consensual distribution of intimate images including authentic images and images that have been altered or generated by artificial intelligence (AI) technologies” are considered a form of online sexual harassment. Revisions made to sexual harassment policy should be made clear to all school staff through dedicated training.
  3. Cyber bullying policy: Deepfake sexual abuse is also a clear case of cyberbullying. As defined by the Seattle Public Schools, “harassment, intimidation, or bullying may take many forms including, but not limited to, slurs, rumors, jokes, innuendoes, demeaning comments, drawings, cartoons… or other written, oral, physical or electronically transmitted messages or images directed toward a student.” Furthermore, the act is specified as one that “has the effect of substantially interfering with a student’s education”, “creates an intimidating or threatening educational environment”, and/or “has the effect of substantially disrupting the orderly operation of school”. From what victims have shared about their anxiety, inability to focus in school, and newfound mistrust toward those around them, it is evident that deepfake sexual abuse constitutes cyberbullying – at a minimum. However, because the proliferation of generated pornography is so recent, school administrators may be uncertain how existing policy applies to such incidents. Therefore, this policy should be revised to directly address generated visual content. For instance, “electronically transmitted messages or images directed toward a student” may be revised to “electronically generated or transmitted messages or images directed toward or depicting a student”.

Conclusion

Deepfake pornography can be created in seconds, yet follows victims for the rest of their lives. Perpetrators today are emboldened by free and rapid access to deepfake technology and school environments that fail to hold them accountable. School districts’ inaction has resulted in the proliferation of deepfake sexual abuse incidents nationwide, leaving countless victims with little recourse and irreversible trauma. It is critical for schools to take immediate action to protect students, especially young girls, by incorporating safeguards within school policies: address the equivalence of generated and real images within their codes of conduct, sexual harassment policies, and cyber bullying policies, setting guidelines to protect victims and determine consequences for perpetrators, and ensuring all staff are aware of these changes. By taking these steps, schools can create a safer environment, ensuring that students receive the protection and justice they deserve, and deterring future incidents of deepfake sexual abuse.

Critical AI Legislation in the Lame Duck Session

As we enter the lame duck session of the 118th Congress, we stand at a critical juncture for artificial intelligence policy in the United States. The rapid advancement of AI technologies has created both unprecedented opportunities and challenges that demand a coordinated legislative response. Throughout the year, Encode has been working tirelessly with lawmakers and coalition partners to advocate for a comprehensive AI package that addresses safety, innovation, and American leadership in this transformative technology.

With the election behind us, we congratulate President-elect Trump and Vice President-elect Vance and look forward to supporting their administration’s efforts to maintain American leadership in AI innovation. The coming weeks present a unique opportunity to put in place foundational, bipartisan policies that will help the next administration hit the ground running on AI governance.

1. The DEFIANCE Act: Protecting Americans from AI-Generated Sexual Abuse

The Problem: In recent years the technology used to create AI-generated non-consensual intimate imagery (NCII) has become widely accessible. Perpetrators are now able to create highly realistic deepfake NCII of individuals with a single, fully clothed photo and access to the internet. That has resulted in an explosion of this content – 96% of all deepfakes are nonconsensual pornography and 99% of it targets women. Today, 15% of children say they know of other children who have been a victim of synthetic NCII in their own school just in the last year. Victims often grapple with anxiety, shame, isolation, and deep fears about reputational harm, future career repercussions, and the ever-present risk that photos might reappear at any time.

The Solution: The DEFIANCE Act (S. 3696) creates the first comprehensive federal law allowing victims to sue not just the people who create these fake images and videos, but also those who share them. Importantly, the bill gives victims up to 10 years to take legal action — critical because many people don’t discover this content until long after it’s been created. The bill also includes special protections to keep victims’ identities private during court proceedings, making it safer for them to seek justice without fear of further harassment.

Why It Works: With deepfake models becoming increasingly decentralized and accessible, individuals can now create harmful content with limited technical expertise. Given how easy it is for perpetrators to spin up these models independently, establishing a private right of action is crucial. The DEFIANCE Act creates a meaningful pathway for victims to directly target those responsible for creating and distributing harmful content.

2. Future of AI Innovation Act: Ensuring AI Systems Are Safe and Reliable

The Problem: AI systems are becoming increasingly powerful and are being used in more critical decisions. Yet we currently lack standardized ways to evaluate whether these systems are safe, reliable, or biased. As companies race to deploy more powerful AI systems, we need a trusted way to assess their capabilities and risks.

The Solution: The Future of AI Innovation Act (S. 4178/H.R. 9497) codifies America’s AI Safety Institute (AISI) at NIST, our nation’s standards agency. Through collaborative partnerships with companies, the institute will develop testing methods and evaluation frameworks to help assess AI systems. Companies can voluntarily work with AISI to evaluate their AI technologies before deployment.

Why It Works: This bill creates a collaborative approach where government experts work alongside private companies, universities, and research labs to develop voluntary testing standards together. Unlike regulatory bodies, AISI has no authority to control or restrict the development or release of AI models. Instead, it serves as a technical resource and research partner, helping companies voluntarily assess their systems while ensuring America maintains its leadership in AI development.

The Support: This balanced approach has earned unprecedented backing from across the AI ecosystem. Over 60 organizations — from major AI companies like OpenAI and Google to academic institutions like UC Berkeley and Carnegie Mellon to advocacy groups focused on responsible AI — have endorsed the bill. This broad coalition shows that safety and innovation can go hand in hand.

3. The EPIC Act: Building America’s AI Infrastructure

The Problem: As AI becomes more central to our economy and national security, NIST (our national standards agency) has been given increasing responsibility for ensuring AI systems are safe and reliable. However, the agency faces two major challenges: it struggles to compete with private sector salaries to attract top AI talent, and its funding process makes it difficult to respond quickly to new AI developments.

The Solution: The EPIC Act (H.R. 8673/S. 4639) creates a nonprofit foundation to support NIST’s work, similar to successful foundations that support the NIH, CDC, and other agencies. This foundation would help attract leading scientists and engineers to work on national AI priorities, enable rapid response to emerging technologies, and strengthen America’s voice in setting global AI standards.

Why It Works: Rather than relying solely on taxpayer dollars, the foundation can accept private donations and form partnerships to support critical research. This model has proven highly successful at other agencies – for example, the CDC Foundation played a crucial role in the COVID-19 response by quickly mobilizing resources and expertise. The EPIC Act would give NIST similar flexibility to tackle urgent AI challenges.

The Support: This practical solution has been endorsed by four former NIST directors who understand the agency’s needs, along with major technology companies and over 40 civil society organizations who recognize the importance of having a well-resourced standards agency.

4. CREATE AI Act: Democratizing AI Research

The Problem: Today, cutting-edge AI research requires massive computing resources and extensive datasets that only a handful of large tech companies and wealthy universities can afford. This concentration of resources means we’re missing out on innovations and perspectives from researchers at smaller institutions, potentially overlooking important breakthroughs and lines of research that the largest companies aren’t incentivized to invest in.

The Solution: The CREATE AI Act (S. 2714/H.R. 5077) establishes a National AI Research Resource (NAIRR) — essentially a shared national research cloud that gives researchers from any American university or lab access to the computing power and data they need to conduct advanced AI research.

Why It Works: By making these resources widely available, we can tap into American talent wherever it exists. A researcher at a small college in rural America might have the next breakthrough idea in AI safety or discover a new application that helps farmers or small businesses. This bill ensures they have the tools to pursue that innovation.

5. Nucleic Acid Standards for Biosecurity Act: Securing America’s Biotech Future

The Problem: Advances in both AI and biotechnology are making it easier and cheaper to create, sell and buy synthetic DNA sequences. While this has enormous potential for medicine and research, it also creates risks if bad actors try to recreate dangerous pathogens or develop new biological threats. Currently, there is no standardized way for DNA synthesis companies to screen orders for potentially dangerous sequences, leaving a critical security gap.

The Solution: The Nucleic Acid Standards for Biosecurity Act (H.R. 9194) directs NIST to develop clear technical standards and operational guidance for screening synthetic DNA orders. It creates a voluntary framework for companies to use to identify and stop potentially dangerous requests while facilitating legitimate research and development.

Why It Works: Rather than creating burdensome regulations, this bill establishes voluntary standards through collaboration between industry, academia, and government. It helps make security protocols more accessible and affordable, particularly for smaller biotech companies. The bill also addresses how advancing AI capabilities could be used to design complex and potentially dangerous new genetic sequences that could go undetected by existing screening mechanisms, ensuring our screening approaches keep pace with technological change.

The Support: This approach has gained backing from both the biotechnology industry and security experts. By harmonizing screening standards through voluntary cooperation, it helps American businesses compete globally while cementing U.S. leadership in biosecurity innovation.

6. Securing Nuclear Command: Human Judgment in Critical Decisions

The Problem: As AI systems become more capable, there’s increasing pressure to use them in Nuclear Command, Control, and Communications (NC3). While AI can enhance many aspects of NC3, we need to make it absolutely clear to our allies and adversaries that humans remain in control of our most consequential military decisions — particularly those involving nuclear weapons.

The Solution: A provision in the National Defense Authorization Act would clearly require human control over all critical decisions related to nuclear weapons. This isn’t about banning AI from Nuclear Command, Control, and Communications — it’s about establishing clear boundaries for its most sensitive applications.

Why It Works: This straightforward requirement ensures that while we can benefit from AI’s capabilities in NC3, human judgment remains central to the most serious decision points. It’s a common-sense guardrail that has received broad support.

The Path Forward

These bills represent carefully negotiated, bipartisan solutions that must move in the coming weeks. The coalitions are in place. The urgency is clear. What’s needed now is focused attention from leadership to bring these bills across the finish line before the 118th Congress ends.

As we prepare for the transition to a new administration and Congress, these foundational measures will ensure America maintains its leadership in AI development while protecting our values and our citizens.

———

This post reflects the policy priorities of Encode, a nonprofit organization advocating for safer AI development and deployment.

Analysis: Future of AI Innovation Act

Introduction

This bill, introduced by Senator Cantwell, Maria, is a promising first step that improves government and public capacity for building AI safely. It would establish the AI Safety Institute, create testbeds for AI, initiate building an international coalition on AI standards, create public datasets for AI training, and promote federal innovation in AI. The Senate Commerce committee has passed this bill and sent it to the full Senate for their consideration meaning it is not yet passed into law. 

The AI Safety Institute would conduct scientific research on how to responsibly build AI but would not have the authority to translate that research into binding standards. Therefore it would lack the ability to robustly ensure that AI developers are behaving responsibly. For example, the requirement that the model evaluations performed by the Institute come only from voluntarily provided data means that AI developers can refuse to provide access until their models are already public. This means that flaws in model safety would not be identified until those flaws were already actively posing a risk. 

The bill’s efforts to coordinate with U.S. allies on standards are a useful step in building international consensus on AI issues. It can also be seen as a clear attempt to counter China’s push to lead in global governance of AI, hinting at the geopolitical struggle over AI.

Finally, the focus on supporting innovation and promoting government adoption of AI are admirable but take an anti-regulatory approach that may undermine the ability of Federal agencies to mitigate risks.  

Amendments:

There were a number of amendments that were introduced and accepted prior to the bill being passed out of committee which are notable. The amendments can be accessed here.

Senator Budd amended to explicitly exclude China from the international coalition on AI until it complies with the U.S. interpretation of China’s World Trade Organization commitments. He further amended to exclude entities controlled by China, Russia, Iran, North Korea from accessing any resources of the AI safety institute and require that no data can be shared with any hostile countries. 

Senator Cruz amended to prohibit federal agencies regulating AI in a number of ways related to race and gender. The most controversial provision prohibits agencies mandating that AI systems be designed in an equitable way to prevent disparate impacts based on a protected class (such as race or gender) which contradicts the Biden Executive Order on AI. It also prohibits the review of input data by Federal agencies to determine if an AI system is biased or produces misinformation. In response to the controversy over this particular amendment, a spokesperson for the committee’s Democratic majority stated that “Rather than giving the senator [Cruz] a platform to amplify divisive rhetoric and delay committee progress, the Chair accepted the amendment — knowing there will be many opportunities to fix the legislation, and with the support of the other Republican Senators.” This statement indicates that future versions of the bill will likely continue to evolve. 

Senator Cruz also amended to require that consultants or other temporary employees cannot perform any “inherently governmental function” for federal agencies related to AI or other critical and emerging technologies. This would prohibit temporary employees from many roles which would restrict bringing in private sector talent to advise on AI. 

Senator Schatz amended to include energy storage and optimization as an additional focus of the testbed program that previously only pertained to advanced materials and manufacturing. He also added the promotion of explainability and mechanistic interpretability, ie the ability to understand how AI systems work internally, as priorities of the Institute. Another addition was including developing cybersecurity for AI and developing AI for modernizing code and software of government agencies on the list of Federal Grand Challenges that will inform agency innovation competitions. His final amendment mandates that multilateral research partnerships include coordination with other Federal open data efforts when possible. 

Senators Young and Hickenlooper amended to significantly expand the bill by including the creation of a nonprofit to be known as the “Foundation for Standards and Metrology”. This nonprofit would support the mission of the AI Safety Institute in a broad variety of ways, notably including supporting the commercialization of federally funded research. The nonprofit will be an independent 501(c)(3) and its board members will be appointed from a list created by the National Academies of Sciences, Engineering, and Medicine. The Foundation is directed to create a plan to become financially self-sustaining within five years of its creation and its initial annual budget is set at a minimum of $500,000 and maximum of $1,250,000.  

Detailed Breakdown of the Bill’s Content: 

Subtitle A—Artificial Intelligence Safety Institute and testbeds 

Sec. 101. Artificial Intelligence Safety Institute.

The Under Secretary of Commerce for Standards and Technology will establish the AI Safety Institute as well as a consortium of relevant stakeholders to support the Institute. The Institute’s mission will be carried out in collaboration with National Institute of Standards and Technology (NIST). The mission of the Institute will be to:

  1. Assist companies and Federal agencies in developing voluntary best practices for assessing AI safety 
  2. Provide assistance to Federal agencies in adopting and using AI in their operations
  3. Develop and promote “voluntary, consensus-based technical standards or industry standards”, advancement in AI, and a competitive AI industry 

One area of focus will be supporting AI research, evaluation, testing and standards via the following:

  • Conducting research into model safety, security, interpretability 
  • Working with other agencies to develop testing environments, perform regular benchmarking and capability evaluations, and red teaming
  • Working with all stakeholders to develop and adopt voluntary AI standards This will include standards regarding:
    • physical infrastructure for training, developing, and operating AI models
    • Data for training and testing AI models
    • AI models and software based on such models
  • Expanding on the AI Risk Management Framework regarding generative AI
  • Establishing secure development practices for AI models and develop and publish cybersecurity tools and guidelines to protect AI development 
  • Developing metrics and methodologies for evaluating AI by testing existing tools and funding research to create such tools (and notably looking at the potential effect of foundation models when retrained or fine-tuned)
  • Coordinating global standards setting for AI evaluation and testing
  • Developing tools for identifying vulnerabilities in foundation models
  • Developing tools for agencies to track harmful incidents caused by AI

Another key area of focus will be supporting AI implementation via the following:

  • ”Using publicly available and voluntarily provided information, conducting evaluations to assess the impacts of artificial intelligence systems, and developing guidelines and practices for safe development, deployment, and use of artificial intelligence technology”
  • Coordinating with U.S. allies and partners on AI testing and vulnerability and incident data sharing
  • Develop AI testing capabilities and infrastructures
  • establish blue teaming capabilities and partner with industry to mitigate risks and negative impacts
  • develop voluntary guidelines on detecting synthetic content, watermarking, preventing privacy right violations by AI, and transparent documentation of AI datasets and models  

Sec. 102. Program on artificial intelligence testbeds.

The Under Secretary of Commerce for Standards and Technology will use various public and private computing resources to develop evaluations and risk assessments for AI systems. In particular these assessments will prioritize identifying potential security risks of deployed AI systems with a focus on autonomous offensive cyber capabilities, cybersecurity vulnerabilities of AI, and “chemical, biological, radiological, nuclear, critical infrastructure, and energy-security threats or hazards”. Additionally such tests should be evaluated for use on AI systems trained using biological sequence data and those intended for gene synthesis. 

The Under Secretary will also provide developers of foundation models the opportunity to test such models. To support this they will conduct research on how to improve and benchmark foundation models, identify key capabilities and unexpected behaviors of foundation models, evaluate scenarios in which these models could pose risks, support developers in evaluating foundation models, and coordinate public evaluations of foundation models and publicize reports of such testing. 

Sec. 103. National Institute of Standards and Technology and Department of Energy testbed to identify, test, and synthesize new materials.

The director of Nist and the Secretary of Energy will jointly establish a testbed for creating new materials to advance materials science and support advanced manufacturing via AI. 

Sec. 104. National Science Foundation and Department of Energy collaboration to make scientific discoveries through the use of artificial intelligence.

The direction of the National Science Foundation and the Secretary of Energy shall collaborate to support progress in science via AI. 

Sec. 105. Progress report.

The director of the AI Safety Institute shall submit to congress a progress report on the above goals. 

Subtitle B—International cooperation

Sec. 111. International coalition on innovation, development, and harmonization of standards with respect to artificial intelligence.

The bill also directs the heads of several agencies (notably including the Secretary of Commerce and the Secretary of State) to form a coalition with “like-minded” foreign governments to cooperate on innovation in AI and to coordinate development and adoption of AI standards. It specifies that the coalition may only include countries that have “sufficient” intellectual property projections, risk management approaches as well as research security measures and export controls. This emphasis on export controls would likely limit the coalition to U.S. allies or strategic partners thereby excluding China.

This would entail setting up government-to-government infrastructure to coordinate, agreements on information sharing between governments, and inviting participation from private-sector stakeholders as advisors. 

Sec. 112. Requirement to support bilateral and multilateral artificial intelligence research collaborations.

The bill requires the Director of the National Science Foundation to support international collaborations on AI research and development, again requiring that partner countries have security measures and export controls.

Subtitle C—Identifying regulatory barriers to innovation

Sec. 121. Comptroller General of the United States identification of risks and obstacles relating to artificial intelligence and Federal agencies.

The bill requires the Comptroller General to submit a report to Congress identifying regulatory obstacles to AI innovation. The report would include identifying federal laws and regulations hindering AI development, challenges in how those laws are currently enforced, an evaluation of how AI adoption has taken place within government, and recommendations to Congress on how to increase AI innovation. 

TITLE II—ARTIFICIAL INTELLIGENCE RESEARCH, DEVELOPMENT, CAPACITY BUILDING ACTIVITIES

Sec. 201. Public data for artificial intelligence systems.

The director of the Office of Science and Technology Policy will create a list of priorities for Federal Investment in creating 20 datasets of public Federal data for training AI. Once identified, the goal of assembling these datasets will be delegated to various agencies who can provide grants/other incentives or work through public-private partnerships. These datasets will then be provided to the National Artificial Intelligence Research Resource pilot program.  

Sec. 202. Federal grand challenges in artificial intelligence.

The director of the Office of Science and Technology Policy will assemble a list of priorities for the Federal government in AI in order to expedite AI development and the application of AI to key technologies such as advanced manufacturing and computing. It also specifies that various federal agencies will establish a prize competition, challenge based acquisition or other R&D investment based on the list of federal priorities.

Machines and Monopolies: RealPage and the Law and Economics of Algorithmic Price-Fixing

Any young person in America can tell you this: our country is in the throes of an affordable housing crisis. Rents have spiked 30.4% nationwide between 2019 and 2023. Meanwhile, wages only rose 20.2% during that same period. Most Americans feel that housing costs are getting away from them. According to the Pew Research Center, about half of Americans say housing affordability in their local community is a major problem. That intuition is legitimate. The Joint Center for Housing Studies of Harvard University found that, in 2022, a record half of U.S. renters paid over 30% of their income on rent and utilities, with nearly half of those people paying over 50% of their income on rent and utilities.

How have housing costs outpaced wages so dramatically in such a short period of time? Sure, wages are sticky, which is why inflation stings middle- and lower-income households so sharply. But in a competitive market where landlords are competing to attract renters with steadily growing income, rents should not be eating half of our wallets. The unfortunate reality is that American housing markets are not competitive. Major corporate landlords are claiming an increasingly consolidated share of rental housing. However, these trends towards consolidation are being exacerbated by new ways of rigging the system—particularly the use of advanced algorithms.

Last month, the US Department of Justice (DOJ) Antitrust division formally launched its complaint against RealPage, a company alleged to have used a sophisticated algorithm to facilitate illegal price-fixing amongst competing landlords. The federal complaint came after lawsuits by state Attorneys General, including Arizona Attorney General Kris Mayes and D.C. Attorney General Brian Schwalb. According to the DOJ, “[RealPage] subverts competition and the competitive process. It does so openly and directly—and American renters are left paying the price.” At a high-stakes juncture for the American economy, the premise of this case—that an advanced software algorithm violated antitrust law—raises a host of novel conceptual questions with significant implications for American consumers.

Background

Price-fixing violations come under Section 1 of the Sherman Antitrust Act of 1890, which reads: “Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal.” An early insight of the courts was that, technically, every contract involves some restraint of trade at some level. Hence, the courts interpret the Sherman Act to render illegal only “unreasonable” restraints of trade. In United States v. Addyston Pipe & Steel Co. (1899), the court distinguished between what Judge Taft (later Chief Justice and then President Taft) called naked restraints of trade and those “necessary and ancillary” to the effectuation of a lawful contract, the latter of which should be assessed by rule of reason analysis. But certain forms of anticompetitive conduct came to be understood as constituting such an obviously and blatantly unreasonable naked restraint of trade—such as the explicit agreement to fix prices at hand in Addyston Pipe, as well as explicit agreements to allocate markets or rig bids—that they are analyzed under a per se rule, a blanket rule prohibiting such conduct without regard to economic effects. 

As a result, the evidentiary standard for price-fixing does not require economic evidence but rather depends on express communication (e.g., an email, a phone call, a meeting, etc.) between actors with the intent to set prices. Even if firms with significant market power collude tacitly (i.e., by following each other’s prices), thereby exacting a higher economic burden on consumers, the companies cannot be found to have violated the Sherman Act. 

RealPage

RealPage did not broker any agreements to fix prices, nor did it host meetings for landlords to discuss allocating markets. So why is it being accused of violating Section 1 of the Sherman Act?

RealPage sells software that makes pricing recommendations to competing landlords based on their private data. Instead of landlords individually adjusting rent prices based on housing market dynamics, RealPage amalgamates lease-level information that would otherwise be walled off between competitors—about apartmental rates, rent discounts, rental applications, executed new leases, renewal offers and acceptances, and unit characteristics such as layout and amenities—and recommends ‘optimal’ prices. As a result, the algorithm acts as an intermediary with perfect knowledge, raising prices to their absolute maximum not just for the individual landlord, but for entire markets. Economic theory predicts that when producers collectively ensure common pricing, as is being influenced in this case by RealPage’s shared algorithm, the ‘optimal’ price, or monopoly rent, is higher than the price we would observe in a perfectly competitive market. 

This is not just theory. When asked about the role RealPage has played in rents shooting up 14.5% in some markets, RealPage VP Andrew Bowen responded, “I think it’s driving it, quite honestly.” That number is particularly frightening when contextualized by the fact that RealPage’s clients allegedly comprise about 90% of the U.S. multifamily rental housing market. The software is widely used by competing landlords: for instance, ProPublica found that 70% of all multifamily apartments in a Seattle neighborhood were owned by just ten property managers—and “every single one used pricing software sold by RealPage.” The Arizona Attorney General’s case conservatively estimates overcharges of 12% and 13% in Phoenix and Tucson, respectively, as well as higher rates of eviction. The DOJ’s complaint highlights that RealPage was aware of and embraced the anticompetitive effects of its business model:

  1. RealPage aimed to suppress competition. The company’s Vice President of Revenue Management Advisory Services described, “there is greater good in everybody succeeding versus essentially trying to compete against one another.”
  2. RealPage contributed to higher prices. The company bragged about its tool “driving every possible opportunity to increase price even in the most downward trending or unexpected conditions.”

RealPage’s anticompetitive conduct did go beyond its software, however. The algorithm’s recommended ‘optimal’ pricing is only an optimal monopoly rent contingent on coordination among landlords. If some landlords deviate and undercut their competitors, the incentive for other landlords to continue charging high prices is weakened, if not irrational. In fact, RealPage actively expels clients that fail to impose suggested rents at least 75% of the time. While expanding its client base is crucial, the company understands that if universal compliance with its pricing scheme falters, natural competition will resume, leading to clients undercutting one another and ultimately threatening the viability of the client base—and potentially the business model itself, which relies on per-unit licensing fees and commissions on insurance premiums. Accordingly, RealPage worked hard to monitor landlord behavior with “pricing advisors” and made over 50,000 monthly phone calls to collect “nonpublic, competitively sensitive information.” 

Analysis

Still, the case is unique. Most landlords never communicated with each other, agreeing not to compete, prices were not explicitly branded as collusive, and apartment providers were technically not forced to accept RealPage’s pricing recommendations. This is a bit of a sloppy argument, though: 1. RealPage employees’ have admitted as many as 90% of pricing recommendations are accepted, and 2. Participants in a cartel are never forced to accept; they just tend to find they can make a lot of money doing so. The fundamental question at play here is whether an intermediary algorithm getting rubber stamps for its decisions to inflate prices by individual sellers can be viewed the same as those individuals coordinating directly. The result appears to be the same, but this new method is cloaked in a “neutral” intermediary party, RealPage, that happens to stand to gain millions by acting as the middleman. RealPage won’t be the last time the courts will be pushed to update their understanding of antitrust law to encapsulate technology-driven anti-competitive practices. The housing market is not wholly and uniquely susceptible to abuse by algorithmic price-setting.While RealPage is paving the way, algorithmic price-fixing is on track to be the collusion of the future, and firms will increasingly find it in their incentive to model the behavior of RealPage in this case. As legal scholar Salil Mehra notes:

“Increased connectivity and more powerful computers have led to greater ability to collect mass data about prices, sales and market conditions. Additionally, these changes have created increased capacity to analyze this information and to set prices in rapid, automated fashion.” 

So how should our legal apparatus approach such conduct? What happens when firms don’t make explicit agreements, but collude via a common algorithm? Mehra goes on to write:

“This shift away from humans to machines’ would pose a ‘critical challenge or antitrust law’ which was built on the assumption of human agency; machines ‘possess traits that will make them better than humans at achieving supracompetitive pricing without communication,’ and thus might not need to make an anticompetitive agreement as current blackletter American antitrust law requires for liability or punishment.”

The Federal Trade Commission (FTC) has clarified its position: price fixing by algorithm is still price-fixing. If a living person were to carry out the role of an algorithm in question, would the conduct be illegal? If the answer is yes, then the conduct is illegal whether by algorithm or not. But the FTC—an enforcement agency—doesn’t get to interpret the law; Courts do.

US v. Topkins (2015) was the first criminal prosecution of an algorithmic price-fixing conspiracy. In that case, poster- and wall decor-selling competitors explicitly agreed to compete on Amazon and to use a software algorithm to coordinate prices. The conspirators took a guilty plea deal, so the case never went to trial. While the use of a price-setting algorithm made the case the first of its kind, it was also conventional in legal terms—the conspirators explicitly agreed to use this tool to coordinate their pricing. 

Other previous cases have condemned the facilitation by third parties of explicit agreements (not using algorithms) to fix prices or allocate markets between competitors. These are called hub-and-spoke conspiracies. For example, consulting firm AC-Treuhand committed antitrust violations similar to RealPage’s alleged conduct insofar as it collected information from competitors and facilitated the creation of a cartel—absent the algorithm. Furthermore, the DOJ has filed a lawsuit against Agri Stats Inc. for organizing extensive information sharing between chicken, pork, and turkey processors. The DOJ alleges that Agri Stats violated Section 1 of the Sherman Act by creating comprehensive weekly and monthly reports for the participating meat processors, containing hundreds of pages of data on costs and sales prices by individual companies, and using that data to recommend and even encourage raising prices and restricting supply. RealPage may have found a novel way to do things which competition case law has shown time and time again to violate antitrust law—like sharing sensitive data to a common third party as a form of collusion—but the novelty of the tool does not absolve the conduct’s economic harms. The FTC wrote in a recent legal brief on algorithmic price-fixing that, even when absent explicit communication or agreement, price-fixing driven by algorithmic tools still hurts consumers by “join[ing] together separate decision-makers” and thus “depriv[ing] the marketplace of independent centers of decision-making.” 

This gets to the heart of the justification for antitrust laws. Antitrust laws are meant to protect free market competition for the sake of maintaining low prices, high quality, and vigorous innovation for the consuming public. The key logic there is that prices are maximally efficient conditional on firms competing against one another. Low prices in and of themselves—for example, beneath the expected perfectly competitive price—are not beneficial, as we would see supply shortages. Therefore, it is difficult to find firms in violation of the antitrust laws on the basis of price fluctuation alone, even if those fluctuations seem to occur in industry unison. Price hikes could be driven by external variables, such as natural disasters or pandemic-induced supply shocks, contributing to higher costs. Hence, antitrust law generally tends to rely on evidence of explicit agreements to determine Section 1 liability. 

But this opens a loophole in the law with respect to algorithmic price-fixing, where there exists no explicit communication, but there does exist hidden collusion. Tacit collusion (which is perfectly legal under the law today, i.e. a firm consistently follows a rival’s price hikes, but no agreement as such exists) has never looked like this before. As Mehra explains, “increased ability to gather and process massive amounts of data will reduce the probability that coordinated pricing would break down due to error or mistake in assessing market conditions.”

If an individual landlord were to raise prices to the monopoly rent, they would lose business to competitors undercutting them. But when producers in a market coordinate prices, either expressly or tacitly, they can all make higher profits at the higher price. The risk is that one producer may try to cheat the others by undercutting them and capturing the market in the short-run, in response to which the whole market will return to the lower competitive price and all those producers lose out on the high cartel profits going forward. An algorithm acting as an intermediary with perfect knowledge eradicates the incentive to undercut one’s fellow cartel participants for short-term profits, because the software’s strong monitoring capabilities can help the other producers lower their prices in immediate response to a cartel cheater. It follows that the cartel cheater would never expect to get short-term profits and, therefore, would never have an incentive to cheat the cartel in the first place. Hence, algorithmic collusion—even without express agreement—actually makes cartels more sustainable by eliminating the incentive for any firms to deviate from the algorithm’s recommended supracompetitive prices, which is bad news for consumers.

Solutions

If the court makes the right decision, the DOJ will win its case, thereby setting in precedent the aforementioned reasoning the FTC has outlined in a simple heuristic: “your algorithm can’t do anything that would be illegal if done by a real person.”

However, to leave no margin for error, Congress should pass the Preventing Algorithmic Collusion Act, introduced by Senators Klobuchar (D-MN; Chairwoman of the Senate Judiciary Subcommittee on Competition Policy, Antitrust, and Consumer Rights), Durbin (D-IL), Blumenthal (D-CT), Hirono (D-HI), Wyden (D-OR), and Welch (D-VT). The bill would codify into law that direct competitors are presumed to have entered a price-fixing agreement when they “share competitively sensitive information through a pricing algorithm to raise prices.” Lead sponsor Senator Ron Wyden said:

“Setting prices with an algorithm is no different from doing it over cigars and whiskey in a private club… Although it’s my view that these cartels are already violating existing antitrust laws, I want the law to be painfully clear that algorithmic price fixing of rents is a crime.”

Collusion is collusion, and it hurts consumers. The point of the antitrust laws is to protect consumers from such naked restraints of trade. However, the Sherman Act, passed in 1890, needs to be supplemented by regulation that adapts our competition policy to new kinds of 21st century anticompetitive conduct. While RealPage is hurting renters, private cases have also been brought against alleged algorithmic conspirators in the hotel and casino industries. Passing the Preventing Algorithmic Collusion Act will help enforcers go after corporate colluders set on rigging markets to their benefit, at the expense of the public and the American economy.

Analysis: Test AI Act

The Testing and Evaluation Systems for Trusted Artificial Intelligence Act of 2024 (TEST AI Act) is one of several AI-related bills making its way to the Senate floor. The TEST AI Act establishes testbeds for red-teaming and blue-teaming, which are techniques to identify security weaknesses in technologies. Red-teaming, or the simulation of adversarial attacks, gained attention as a technical solution for AI harms following the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (E.O. 14110). The Biden administration directed federal agencies to develop guidelines and testbeds for red-teaming. The TEST AI Act operationalizes these high-level directives while including the often overlooked blue-teaming research area. Bills like the TEST AI Act that promote trustworthy AI research help lawmakers to create more effective future standards for AI development. Ultimately, the TEST AI Act may lessen the cyber, data, and misuse vulnerabilities of AI systems through improved standards and security tools. 

The TEST AI Act was introduced by a bipartisan group of Senators in April 2024. Senator Ben Lujan (D-NM) is its sponsor, with Senators Richard Durbin (D-IL), John Thune (R-SD), Marsha Blackburn (R-TN), and James Risch (R-ID) joining as co-sponsors. Senator Peter Welch has since joined as a co-sponsor. In the Committee on Commerce, Science, and Transportation, the bill was substituted via amendment to add more detail to its text. After being reported favorably by the Committee, it is now awaiting consideration by the full Senate.


Background

The TEST AI Act instructs the Secretary of the Department of Energy (DOE) and the director of the National Institute of Standards and Technology (NIST) to pilot a 7-year testbed program in consultation with academia, industry, and the interagency committee established by the National Artificial Intelligence Initiative Act of 2020. The program will be housed within the DOE’s National Laboratories, a system of seventeen federally funded, privately managed labs that pursue wide-ranging science and technology goals. 

The goal of the program is to establish testbeds, or platforms that facilitate the evaluation of a technology or tool, for the assessment of government AI systems. The composition of testbeds vary, but can include hardware, software, and networked components. Hardware offers the computing power needed for testing, while software and networked components can simulate an environment or interact with the technology being tested. 

Some of these testbeds will be designed to improve the red-teaming of AI systems. Red-teaming simulates adversarial attacks to assess the system’s flaws and vulnerabilities. It can be performed by groups of humans or AI models trained to perform red-teaming. Early-stage attacks can include model tampering, data poisoning, or exfiltrating models and data. At the user level, a red team might try prompt injection or jailbreaking. 

Similarly, the TEST AI Act will establish testbeds for blue-teaming, which simulate the defense of a system. Like red-teaming, blue-teaming can be performed by human practitioners or AI systems, who together can create an especially potent security force. A blue team may analyze network traffic, user behavior, system logs, and other information flows to respond to attackers.

The proposed testbeds are focused on evaluating AI systems that are currently used or will be used by the federal government. Some testbeds will likely be classified to protect sensitive information associated with government AI systems. However, several agencies also release testbeds to the public and/or private industry. Several can be found on GitHub like the ARMORY Adversarial Robustness Evaluation Test Bed or the National Reactor Innovation Center Virtual Test Bed. Others require credentials or registration to use testbeds actively hosted on federal resources, such as the Argonne Leadership Computing Facility AI Testbed.


Red-teaming

The Biden Executive Order also requires companies to regularly share the results of their foundation models’ red-teaming with the government based on NIST guidance. While NIST has released an initial public draft of its guidelines for Managing Misuse Risk of Dual-Use Foundation Models, the final version mandated under the EO has yet to be released. Similarly, the NSF is funding research to improve red-teaming, but has not yet released findings. In the meantime, E.O. 14110 mandates that companies share the results of any red-teaming they conduct on several critical issues, including biological weapons development, software vulnerabilities, and the possibility of self-replication.

In contrast, blue teaming is not mentioned in E.O. 14110 and is much less discussed in policy and research circles.  For example, Google Scholar returns 4,080 results for “red-teaming AI” and only 140 for “blue-teaming AI”. The TEST AI Act is unique in its inclusion of blue-teaming on its research and policy agenda.

Excitement comes with its own downsides, though. The hype around red-teaming can obscure that actual practices vary widely in effectiveness, actionability, and transparency. A best practice or consistent standard for red-teaming does not exist, so the actual objectives, setting, duration, environment, team composition, access level, and the changes that are made based on the red-teaming results will vary from company to company.  For example, a company may conduct multiple rounds of red-teaming with a diverse group of experts with unfettered model access, clear goals, and unlimited time. Another red-teaming exercise may be time-bound, crowdsourced, API access, and single-round. Both approaches are considered red-teaming, but their usefulness differs significantly. 

Design choices for red-teaming exercises are largely made without disclosure, and exercise results are not public. There is no way to know whether companies make their product safer based on the results (MIT Technology Review). Accordingly, some researchers view red-teaming as a “catch-all response to quiet all regulatory concerns about model safety that verges on security theater” (Feffer et al, preprint). These concerns are echoed in the public comments submitted to NIST regarding their assignments in E.O. 14110. Similarly, Anthropic, a safety-focused AI developer, has called for standardizing red-teaming and blue-teaming procedures.


Federal Infrastructure

The TEST AI Act modifies NIST’s role under Executive Order 14110 to allow for interagency cooperation. The Act leverages the extensive federal infrastructure already in place for AI testing and testbeds. Congressional sponsors, including Senators Lujan (D-NM) and Risch (R-ID), identify the DOE as the only agency with the necessary computing power, data, and technical expertise to develop testbeds for frontier AI systems. 

Several trustworthy AI testbeds across federal agencies could serve as resources for the TEST AI testbeds. The Defense Advanced Research Projects Agency’s Guaranteeing AI Robustness Against Deception (GARD) project develops defense capabilities (like blue-teaming) to prevent and defeat adversarial attacks. They have produced a publicly available virtual testbed, toolbox, benchmarking dataset, and training materials for evaluating and defending machine learning models. Similarly, NIST’s Dioptra testing platform, which predates E.O. 14110, evaluates the trustworthiness, security, and reliability of machine learning models. Dioptra aims to “research and develop metrics and best practices to assess vulnerabilities of AI models” i.e., improve red-teaming. NSF also funds several testbeds (Chameleon, CloudLab) that provide computing power for AI/ML experimentation.


Conclusion

The TEST AI Act could usher in an era of increased robustness and accountability for federal use AI systems. Unlike GARD or Dioptra, which narrowly focus on defensive capabilities and trustworthiness, respectively, the TEST AI Act creates wide-ranging testbeds that are applicable across use cases and contexts. 

The Act also increases activity in the under-researched area of blue-teaming. Improving blue-teaming strengthens defensive capabilities, and can also help to solve the problem of “red-teaming hype”. It makes red-teaming results more actionable, and forces red teams to meet higher standards when testing defenses. This deliberate focus on both offensive and defensive techniques improves the current state of AI security while offering a framework for developing future AI standards and testing across the federal system. 

The TEST AI Act also addresses the limitations of current ad-hoc testing environments by formalizing and expanding testbed creation. In doing so, it redefines how government AI systems will be secured, bringing consistency and transparency to previously varied practices. This supports the broader goals of the Executive Order in improving risk assessment for biosecurity, cybersecurity, national security, and critical infrastructure. Crucially, it could stop the government’s systems from contributing to these harms from AI.

The Act’s integration with established entities like NIST and the DOE is critical, leveraging their unique infrastructure and technical expertise. It adopts the Executive Order’s position that collaboration on AI across government agencies is crucial for effectively harnessing vast resources and disparate expertise to make AI as beneficial as possible. By turning testbed creation and production into an interagency effort, the TEST AI Act establishes a testbed program on a previously unreplicated scale.