Introduction / State of the Problem

Technologists and academics have been warning the public for years about the proliferation of non-consensual sexual deepfakes: altered or artificially generated pornography of real people. Today, a potential abuser needs only a web browser and an internet connection to freely create hundreds or thousands of non-consensual intimate images. 96% of deepfake videos online are non-consensual pornography, and 99% of those videos target women. Alarmingly, we are now seeing a pattern of boys as young as 13 using these tools to target their female classmates with deepfake sexual abuse.

On October 20, 2023, young female students at Westfield High School in New Jersey discovered that teenage boys at the school had taken fully-clothed photos of them and used an AI app to alter them into sexually explicit, fabricated photos for public circulation. One of the female victims revealed that it was not just one male student, but a group using “upwards of a dozen girls’ images to make AI pornography.” In the same month, halfway across the country at Aledo High School in Texas, a teenage boy generated nude images of ten female classmates. The victims, who sought help from the school, the sheriff’s office, and the social media apps, struggled for over eight months to stop the photos from spreading: “at that point, they didn’t know how far it spread”. At Issaquah High School in Washington, another teenage boy circulated deepfake nude images of “at least six 14-to-15-year-old female classmates and allegedly a school official” on the popular image-based social media app Snapchat. While school staff knew about the images, the police only heard about the incident through parents who independently reached out to file sex offense reports. Four months later, a similar incident occurred in Beverly Hills, California, where the sixteen victims were only in middle school.

Consequences of Inaction / Lack of Appropriate Guidelines

In almost every case of deepfake sexual abuse in schools, administrators and district officials were caught off guard and unprepared, even when relevant guidelines existed. At Westfield High School, school administrators conducted an initial investigation with the alleged perpetrators and police present, but without the students’ parents or lawyers, rendering all collected evidence inadmissible in court. Because of the school’s negligence, the victims could not seek accountability and still do not know “the exact identities or the number of people who created the images, how many were made, or if they still exist”. At Issaquah High School, when a police detective inquired about why the school had not reported the incident, school officials questioned why they would be required to report “fake images”. Issaquah’s Child Abuse, Neglect, and Exploitation Prevention Procedure states that in cases of sexual abuse, reports to law enforcement or Child Protective Services must be made “at the first opportunity, but in no case longer than forty-eight hours”. Yet, because fake images are not directly named in the policy, the school did not file a report until six days later, and only after multiple reminders from the police about the school’s duty as a mandatory reporter. In Beverly Hills, California, administrators acted more swiftly in expelling the five students responsible. Still, the perpetrators retained full anonymity, while the images of the victims were made permanently public, attaching their faces to nude bodies. Victims shared struggles with anxiety, shame, isolation, an inability to focus at school, and serious concerns about reputational damage, future repercussions for job prospects, and the possibility that the photos could resurface at any point.

A Path Forward

Deepfake sexual abuse is not inevitable: it is possible and necessary for schools to implement concrete preventative and reactive measures. Even before an incident has occurred, schools can protect students by setting standards for acceptable and unacceptable behavior, educating staff, and modifying existing policies to account for such incidents.

Many schools have existing procedures for handling sexual harassment and cyberbullying. However, standard practices for handling digital sexual abuse via deepfakes have yet to materialize. Existing procedures that are not specific to this form of abuse have proven ineffective, resulting in the exposure of victims’ identities, week-long delays while pornographic images circulated among peers, and failures to report incidents to law enforcement in a timely manner. School action plans to address these risks should incorporate the following considerations:

  1. Clear prohibitions, clearly communicated: Deepfake sexual abuse incidents in schools follow a similar pattern: students feed fully-clothed images of their peers into an AI application to manipulate them into sexually explicit images, then circulate them through social media platforms like Snapchat. The apps used to create and distribute deepfake sexual images are easily accessible to most students, who recklessly disregard the grave consequences of their actions. Schools must update their codes of conduct; sexual harassment and abuse policies; harassment, intimidation, and bullying policies; and cyberbullying and AI policies to clearly ban the creation and dissemination of deepfake sexual imagery. Those updated policies should be communicated through school-wide events and announcements, orientation, and consent or sexual education curricula. Schools must clearly convey to students the seriousness of the issue and the severity of the consequences, setting a clear precedent for action before crises occur.
  2. Appropriate consequences for perpetrators: The lack of appropriate consequences for the creation and dissemination of deepfake sexual imagery will undermine efforts to deter such behavior. Across recent incidents, most schools failed to identify all of the perpetrators involved or to deliver consequences commensurate with the serious harm their actions caused. Westfield High School suspended the male student accused of fabricating the images for just one or two days; victims and families shared that the perpetrators at Aledo High School received “probation, a slap on the wrist… [that will] be expunged. But these pictures could forever be out there of our girls.” To deter perpetrators and protect victims, schools should establish guidelines for determining consequences, identify which stakeholders should be involved in that determination, and specify which parties will carry the consequences out. Even in cases where the school needs to involve local authorities, there should be school-specific consequences such as suspension or expulsion.
  3. Equivalence of real and deepfake-generated images: Issaquah failed to address its deepfake sexual abuse incident because school administrators were unsure whether existing sexual abuse policies applied to generated images. Procedures addressing sexual abuse incidents must be updated to treat the creation and distribution of non-consensual sexual deepfake images the same as that of real images. For example, an incident that involves creating deepfake pornography should be treated with the same seriousness as an incident that involves non-consensually photographing someone nude in a locker room. Deepfake sexual abuse incidents require the same rigorous investigative and reporting process as other sexual abuse incidents because their consequences are similarly harmful to victims and the larger school community.
  4. Standard procedures to reduce harms experienced by victims: At Westfield High School, victims discovered their photos had been used to generate deepfake pornography only after their names were announced over the school-wide intercom. Victims felt it was a violation of privacy to have their identities exposed to the entire student body, especially while the boys who generated the images were pulled aside privately for investigation. Schools should have established, written procedures for discreetly informing relevant authorities about incidents and for supporting victims at the start of an investigation into deepfake sexual abuse. After procedures are established, educators should be made aware of them through dedicated training on protecting victims.

Case Study: Seattle Public School District

The incidents that have sounded the alarm bells on this issue are only the ones that have been reported by large news outlets. Similar incidents are likely occurring all over the country without much media attention. The action we’re seeing today is largely the result of a few young, brave advocates using their own experiences as a platform to give voice to this issue, and it is time that we listen. Seattle Public Schools, like most districts around the country, has not yet had a high-profile incident. Yet a review of its code of conduct, sexual harassment policy, and cyberbullying policy, which resemble those of many other districts, reveals a lack of preparedness to prevent and respond to potential deepfake sexual abuse within schools. Below is a case study of how the aforementioned considerations may be applied to bolster Seattle’s district policies:

  1. Code of conduct: Seattle Public School District’s code of conduct, revised and re-approved every year by the Board of Education, contains policies on acceptable student behavior and standard disciplinary procedures. Conduct that is “substantially interfering with a student’s education […] determined by considering a targeted student’s grades, attendance, demeanor, interaction with peers, interest and participation in activities, and other indicators” merits a “disciplinary response.” Furthermore, “substantial disruption includes but is not limited to: significant interference with instruction, school operations or school activities… or a hostile environment that significantly interferes with a student’s education.” Deepfake sexual abuse falls squarely under this conduct, affecting a victim’s ability to focus and to interact with teachers and peers. Generating and electronically distributing pornographic images of fellow students outside of school hours or off campus falls within the school’s purview under its off-campus student behavior policy, as it causes a substantial disruption to on-campus activities and interferes with students’ right to safely receive their education. Furthermore, past instances have shown that these incidents spread rapidly and become a topic of conversation that continues into the school day, especially when handled without sensitivity for victims, creating a hostile environment for students.
  2. Sexual harassment policy: Existing policy states that sexual harassment includes “unwelcome sexual or gender-directed conduct or communication that creates an intimidating, hostile, or offensive environment or interferes with an individual’s educational performance”. Deepfake pornography, which has been non-consensual and directed toward young girls in every high-profile case thus far, should be considered a form of “conduct or communication” prohibited under this policy. The Superintendent has a duty to “develop procedures to provide age-appropriate information to district staff, students, parents, and volunteers regarding this policy… [which] include a plan for implementing programs and trainings designed to enhance the recognition and prevention of sexual harassment.” These procedures should reflect the most recent Title IX regulations, effective August 1, 2024, which state that “non-consensual distribution of intimate images including authentic images and images that have been altered or generated by artificial intelligence (AI) technologies” is considered a form of online sexual harassment. Revisions to the sexual harassment policy should be made clear to all school staff through dedicated training.
  3. Cyberbullying policy: Deepfake sexual abuse is also a clear case of cyberbullying. As defined by Seattle Public Schools, “harassment, intimidation, or bullying may take many forms including, but not limited to, slurs, rumors, jokes, innuendoes, demeaning comments, drawings, cartoons… or other written, oral, physical or electronically transmitted messages or images directed toward a student.” Furthermore, the act is specified as one that “has the effect of substantially interfering with a student’s education”, “creates an intimidating or threatening educational environment”, and/or “has the effect of substantially disrupting the orderly operation of school”. Given what victims have shared about their anxiety, inability to focus in school, and newfound mistrust toward those around them, it is evident that deepfake sexual abuse constitutes, at a minimum, cyberbullying. However, because the proliferation of generated pornography is so recent, school administrators may be uncertain how existing policy applies to such incidents. Therefore, this policy should be revised to directly address generated visual content. For instance, “electronically transmitted messages or images directed toward a student” may be revised to “electronically generated or transmitted messages or images directed toward or depicting a student”.

Conclusion

Deepfake pornography can be created in seconds, yet it follows victims for the rest of their lives. Perpetrators today are emboldened by free and rapid access to deepfake technology and by school environments that fail to hold them accountable. School districts’ inaction has allowed deepfake sexual abuse incidents to proliferate nationwide, leaving countless victims with little recourse and irreversible trauma. It is critical for schools to take immediate action to protect students, especially young girls, by incorporating safeguards within school policies: addressing the equivalence of generated and real images within codes of conduct, sexual harassment policies, and cyberbullying policies; setting guidelines to protect victims and to determine consequences for perpetrators; and ensuring all staff are aware of these changes. By taking these steps, schools can create a safer environment, ensure that students receive the protection and justice they deserve, and deter future incidents of deepfake sexual abuse.