1. Introduction
The emergence of generative artificial intelligence (AI), a suite of technologies capable of creating novel and realistic content, represents a transformative development in AI (Surden, Reference Surden2024, pp. 1944–1948). Generative AI refers to a class of machine learning models and techniques that can create new content – such as text, images, audio, video, and code – based on patterns learned from vast troves of training data (Surden, Reference Surden2024). These systems, including natural language models like ChatGPT, image generation tools like DALL-E and Stable Diffusion, and audio synthesis engines like WaveNet, have already begun to revolutionize how we create, interact with, and perceive digital content (Davenport & Mittal, Reference Davenport and Mittal2022; Kreaic et al., Reference Kreaic, Uribe, Lasater-Wille and Romeo2024).
While earlier AI excelled at tasks like image classification, speech recognition, and fraud detection, these systems were primarily constrained to identifying and exploiting correlations within narrowly defined problem domains (Surden, Reference Surden2024). In contrast, generative AI systems produce output that is often indistinguishable from human-created content (Cooper et al., Reference Cooper, Lee, Grimmelmann, Ippolito, Callison-Burch, Choquette-Choo, Mireshghallah, Brundage, Mimno, Zahrah Choksi, Balkin, Carlini, De Sa, Frankie, Ganguli, Gipson, Guadamuz, Leng Harris, Jacobs, Joh and Kamath2023, p. 7; Heikkilä, Reference Heikkilä2024). While these tools offer exciting new possibilities for creative expression, scientific discovery, and social innovation, they also raise profound privacy concerns (Bender et al., Reference Bender, Gebru, McMillan-Major and Shmitchell2021; Cooper et al., Reference Cooper, Lee, Grimmelmann, Ippolito, Callison-Burch, Choquette-Choo, Mireshghallah, Brundage, Mimno, Zahrah Choksi, Balkin, Carlini, De Sa, Frankie, Ganguli, Gipson, Guadamuz, Leng Harris, Jacobs, Joh and Kamath2023, p. 10; Helmus & Chandra, Reference Helmus and Chandra2024; King & Meinhardt, Reference King and Meinhardt2024; Luccioni & Viviano, Reference Luccioni and Viviano2021; Matsumi & Solove, Reference Matsumi and Solove2024; Song & Shmatikov, Reference Song and Shmatikov2019; Weidinger et al., Reference Weidinger, Mellor and Rauh2021; Weidinger et al., Reference Weidinger, Uesato, Rauh, Griffin, Huang, Mellor, Glaese, Cheng, Balle, Kasirzadeh, Gabriel, Muennighoff, Higgins, Song, Nichol, Diamanti, Coppin, Gao, El-Showk and Irving2022).
This article explores the privacy challenges posed by generative AI and argues for a fundamental rethinking of privacy governance frameworks in response. Section 2 examines the technical characteristics and capabilities of generative AI systems that amplify existing privacy risks and introduce new challenges, including nonconsensual data extraction, data leakage and re-identification, inferential profiling, synthetic media generation, algorithmic bias, and quantification. Section 3 surveys the current landscape of U.S. privacy law and its shortcomings in addressing these emergent issues, highlighting the limitations of the sectoral approach, the FTC’s constrained authority, the promise and pitfalls of state laws, and the inadequacy of individualistic privacy paradigms. Section 4 outlines critical elements of an alternative paradigm for generative AI privacy governance that: (1) shifts from individual to collective conceptions of privacy; (2) moves from reactive to proactive governance; and (3) reorients the goals and values of AI governance. The analysis concludes by discussing the political, legal, and cultural obstacles to regulatory reform in the United States while emphasizing the urgent need for action given the high stakes for individual autonomy and democratic values.
2. How generative AI challenges privacy
The rapid advancement of generative AI systems, with their enhanced ability to create highly realistic and persuasive content, magnifies existing privacy risks while also introducing new challenges that test the foundational assumptions of current privacy frameworks. The technical characteristics of generative AI systems exacerbate existing privacy threats. These risks include large-scale extraction of public data without individual consent and control; data leakage and re-identification; inferential privacy harms; generation of fake but convincing synthetic media; exacerbation of algorithmic bias and discrimination; and decontextualized quantification (Bommasani et al., Reference Bommasani, Hudson, Adeli, Altman, Arora, von Arx, Bernstein, Bohg, Bosselut, Brunskill, Brynjolfsson, Buch, Card, Castellon, Chatterji, Chen, Creel, Davis, Demszky and Liang2022; Cooper et al., Reference Cooper, Lee, Grimmelmann, Ippolito, Callison-Burch, Choquette-Choo, Mireshghallah, Brundage, Mimno, Zahrah Choksi, Balkin, Carlini, De Sa, Frankie, Ganguli, Gipson, Guadamuz, Leng Harris, Jacobs, Joh and Kamath2023, p. 10; Helmus & Chandra, Reference Helmus and Chandra2024; King & Meinhardt, Reference King and Meinhardt2024; Lee et al., Reference Lee, Ippolito and Cooper2024; Shelby et al., Reference Shelby, Rismani, Henne, Venema and Passi2023; Solove, Reference Solove2024, Reference Solove2025; Song & Shmatikov, Reference Song and Shmatikov2019; Weidinger et al., Reference Weidinger, Mellor and Rauh2021, Reference Weidinger, Uesato, Rauh, Griffin, Huang, Mellor, Glaese, Cheng, Balle, Kasirzadeh, Gabriel, Muennighoff, Higgins, Song, Nichol, Diamanti, Coppin, Gao, El-Showk and Irving2022; Zeide, Reference Zeide2017). While society has long grappled with privacy concerns around big data and machine learning, the power, sophistication, and inscrutability of generative AI exceed the scope of current data governance paradigms, revealing weaknesses in established legal and ethical frameworks for protecting privacy and autonomy.
2.1 Nonconsensual data extraction and the failure of notice and consent
Generative AI systems are trained on vast amounts of data, often comprising billions of individual data points spanning a wide range of formats, domains, and sources (Baio, Reference Baio2022; Schuhmann et al., Reference Schuhmann, Vencu, Beaumont, Kaczmarczyk, Mullis, Katta, Coombes, Jitsev and Komatsuzaki2022). For example, OpenAI trained its GPT-4 language model on a corpus of over 45 terabytes of text data, including books, articles, and websites (Achiam et al., Reference Achiam, Adler, Agarwal, Ahmad, Akkaya, Aleman, Almeida, Altenschmidt, Altman, Anadkat, Avila, Babuschkin, Balaji, Balcom, Baltescu, Bao, Bavarian, Belgum and Zoph2024). While some of these data are clearly public material like stock photos, much of it includes individuals’ names, addresses, and images published online under varying expectations of privacy (Cooper et al., Reference Cooper, Lee, Grimmelmann, Ippolito, Callison-Burch, Choquette-Choo, Mireshghallah, Brundage, Mimno, Zahrah Choksi, Balkin, Carlini, De Sa, Frankie, Ganguli, Gipson, Guadamuz, Leng Harris, Jacobs, Joh and Kamath2023; King & Meinhardt, Reference King and Meinhardt2024).
This harvesting and processing of personal information and sensitive content occurs without notice, consent, or constraint (King & Meinhardt, Reference King and Meinhardt2024, pp. 17–19; Leffer, Reference Leffer n.d.; Morrison, Reference Morrison2023; Solove, Reference Solove2025, pp. 23–29). Current law explicitly allows (HiQ Labs v. LinkedIn Corp., 2019) or implicitly sanctions the collection and use of publicly available information (California Consumer Privacy Act (CCPA), 2023; Va. Code § 59.1-571, 2021; Utah Code Ann. § 13-61-101(29)(b), 2024). However, the unprecedented scope and granularity of data extraction by generative AI systems erode the assumptions of individual autonomy that underlie existing privacy frameworks (King & Meinhardt, Reference King and Meinhardt2024, pp. 17–19; Solove, Reference Solove2025, pp. 23–29). In many cases, individuals are unaware that their data are being used to train these systems and cannot manage the downstream uses of their data. For example, Clearview AI, a facial recognition company, scraped billions of images from social media platforms to train its algorithms without the knowledge or consent of the individuals depicted (Hill, Reference Hill2020; Tangalakis-Lippert, Reference Tangalakis-Lippert2023).
2.2 Data leakage and re-identification
The training data used by generative AI systems can also be vulnerable to data leakage and re-identification attacks (Carlini et al., Reference Carlini, Liu, Erlingsson, Kos and Song2019; King & Meinhardt, Reference King and Meinhardt2024; Leffer, Reference Leffer n.d.; Morrison, Reference Morrison2023; Solove, Reference Solove2025; Staab et al., Reference Staab, Vero, Balunović and Vechev2023; Weidinger et al., Reference Weidinger, Uesato, Rauh, Griffin, Huang, Mellor, Glaese, Cheng, Balle, Kasirzadeh, Gabriel, Muennighoff, Higgins, Song, Nichol, Diamanti, Coppin, Gao, El-Showk and Irving2022; Winograd, Reference Winograd2023). Because these models capture patterns at a high level of granularity, they can inadvertently “memorize” and reproduce sensitive snippets of input data in synthetic outputs (Carlini et al., Reference Carlini, Liu, Erlingsson, Kos and Song2019; King & Meinhardt, Reference King and Meinhardt2024; Leffer, Reference Leffer n.d.; Staab et al., Reference Staab, Vero, Balunović and Vechev2023; Winograd, Reference Winograd2023). For example, a language model trained on a corpus of emails might reveal real names, addresses, or phone numbers (Carlini et al., Reference Carlini, Tramer, Wallace, Jagielski, Herbert-Voss, Lee, Roberts, Brown, Song, Erlingsson, Oprea and Raffel2021). Image synthesis models trained on photos from social media can produce pictures that depict recognizable individuals or locations (Fernandez et al., Reference Fernandez, Sanchez, Pinaya, Behrmann, Schölkopf and Botvinick2023). Moreover, malicious actors can craft adversarial prompts to extract specific sensitive information (Edwards, Reference Edwards2022).
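To make the extraction risk concrete, the following Python sketch illustrates the basic shape of such a probe. It is a simplified illustration rather than a reproduction of the cited studies: it assumes the publicly available GPT-2 checkpoint loaded through the Hugging Face transformers library, and the prompt prefix and “suspected” strings are hypothetical placeholders. The probe prompts the model with a plausible prefix and checks whether the greedy continuation reproduces a sensitive string verbatim, which is the core logic behind memorization and extraction attacks.

```python
# Minimal sketch of a memorization probe against a causal language model.
# Assumes the Hugging Face `transformers` and `torch` packages are installed;
# the prefix and the "suspected" strings below are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # any locally available causal LM checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Adversarial-style prompt: a prefix likely to precede contact details in text.
prefix = "For more information, please contact Jane Example at"

# Hypothetical strings a researcher suspects were memorized from training data.
suspected_memorized = ["jane.example@example.com", "555-0123"]

inputs = tokenizer(prefix, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=False,  # greedy decoding favors high-confidence (memorized) continuations
    pad_token_id=tokenizer.eos_token_id,
)
continuation = tokenizer.decode(outputs[0], skip_special_tokens=True)[len(prefix):]

for secret in suspected_memorized:
    if secret.lower() in continuation.lower():
        print(f"Possible memorization: model reproduced {secret!r}")
print("Model continuation:", continuation.strip())
```

Published extraction attacks scale this idea to thousands of automatically generated prefixes and rank candidate continuations by model confidence, but the underlying comparison is the same.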
2.3 Inferential profiling and privacy harms
Generative AI systems exploit subtle patterns and correlations in large datasets to make probabilistic inferences about a person’s demographics, preferences, behaviors, and beliefs, even when such information is not explicitly disclosed (Cooper et al., Reference Cooper, Lee, Grimmelmann, Ippolito, Callison-Burch, Choquette-Choo, Mireshghallah, Brundage, Mimno, Zahrah Choksi, Balkin, Carlini, De Sa, Frankie, Ganguli, Gipson, Guadamuz, Leng Harris, Jacobs, Joh and Kamath2023; Gillis, Reference Gillis2022; King & Meinhardt, Reference King and Meinhardt2024; Solove, Reference Solove2025; Weidinger et al., Reference Weidinger, Uesato, Rauh, Griffin, Huang, Mellor, Glaese, Cheng, Balle, Kasirzadeh, Gabriel, Muennighoff, Higgins, Song, Nichol, Diamanti, Coppin, Gao, El-Showk and Irving2022, p. 218). Language models trained on social media posts might learn to associate certain linguistic styles, topics, or sentiments with particular demographic groups, allowing them to make inferences about a user’s age, ethnicity, or socioeconomic status based on their writing patterns (Solow-Niederman, Reference Solow-Niederman2022; Zeide, Reference Zeide2015, Reference Zeide2022). Similarly, a computer vision model trained on user-uploaded images might be able – or at least claim to be able – to infer sensitive attributes like health conditions, political affiliations, or sexual orientation based on visual cues and contextual signals in the images (Wang & Kosinski, Reference Wang and Kosinski2018).
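A minimal sketch can make this inferential logic concrete. The example below is not drawn from any deployed system: it uses a fabricated four-post corpus, toy labels, and scikit-learn to show how a simple classifier learns to map writing style onto a sensitive attribute, the same pattern-correlation mechanism that generative models exploit at far greater scale and with far richer signals.

```python
# Illustrative sketch: inferring a (toy) demographic label from writing style.
# The mini-corpus and labels are fabricated for demonstration only and carry
# no empirical weight; real systems learn comparable correlations at scale.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "lol that exam was brutal, gonna nap then hit the gym",
    "omg new album drops tonight, streaming party at my place",
    "Please find attached the quarterly report for your review.",
    "The board meeting has been rescheduled to Thursday at noon.",
]
# Toy labels standing in for an inferred attribute (e.g., an age bracket).
labels = ["younger", "younger", "older", "older"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "gonna stream the game tonight lol"
predicted = model.predict([new_post])[0]
confidence = model.predict_proba([new_post]).max()
print(f"Inferred attribute: {predicted} (confidence ~{confidence:.2f})")
```

The point is not the accuracy of this toy pipeline but the structure of the inference: the sensitive attribute is never disclosed, yet it is estimated from stylistic proxies that individuals cannot meaningfully withhold.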
2.4 Synthetic media, deepfakes, and disinformation
Generative AI’s ability to create highly realistic content opens the door to pervasive deception and manipulation (King & Meinhardt, Reference King and Meinhardt2024; Solove, Reference Solove2025; Weidinger et al., Reference Weidinger, Uesato, Rauh, Griffin, Huang, Mellor, Glaese, Cheng, Balle, Kasirzadeh, Gabriel, Muennighoff, Higgins, Song, Nichol, Diamanti, Coppin, Gao, El-Showk and Irving2022). Malicious actors can exploit readily available tools to produce “deepfakes” and other types of synthetic media that convincingly impersonate real people and mislead audiences on a massive scale (Salam, Reference Salam2023). Deep learning models, for instance, can clone an individual’s voice from just a few seconds of audio and generate fake audio clips (Leffer, Reference Leffer2024). Natural language models can mass-produce fake news articles, product reviews, and social media posts that are nearly impossible to distinguish from authentic content. Image synthesis systems can create realistic faces of nonexistent individuals or seamlessly insert real people’s faces into fabricated scenarios (Westerlund, Reference Westerlund2019).
These technologies are now accessible even to those with limited technical expertise or resources, who can leverage them for deceptive or harmful purposes (Heikkilä, Reference Heikkilä2024). Middle school children across the country, for example, alter images of their classmates to create “deepfake porn” (Rubin, Reference Rubin2023; Verma, Reference Verma2023). As synthetic media capabilities grow more sophisticated and accessible, it becomes increasingly difficult to distinguish between real and generated content, eroding the epistemic foundations of trust and truth in online interactions (Baker & Chadwick, Reference Baker, Chadwick, Tumber and Waisbord2021; Chesney & Citron, Reference Chesney and Citron2019; Kalpokas, Reference Kalpokas2019; MacKenzie & Bhatt, Reference MacKenzie and Bhatt2020; Shin, Reference Shin2024; Tokaji, Reference Tokaji2019; Whyte, Reference Whyte2020).
2.5 Algorithmic bias and discrimination
Generative AI systems also risk perpetuating and amplifying historical patterns of bias and discrimination reflected in their training data. For example, facial recognition algorithms exhibit higher error rates for women and people of color (Buolamwini & Gebru, Reference Buolamwini and Gebru2018; Grother, Reference Grother2022; Grother et al., Reference Grother, Ngan and Hanaoka2019). Researchers have shown that language models like GPT-3 can exhibit gender and racial biases in their generated text, such as associating men with career-oriented terms and women with family-oriented terms or perpetuating harmful stereotypes about minority groups (Bolukbasi et al., Reference Bolukbasi, Chang, Zou, Saligrama and Kalai2016; Caliskan-Islam et al., Reference Caliskan-Islam, Bryson and Narayanan2016).
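These association effects can be illustrated with a short, WEAT-style probe. The sketch below uses invented three-dimensional word vectors purely as placeholders; a real audit would substitute pretrained embeddings (for example GloVe or word2vec) and apply the same cosine-similarity arithmetic to surface the career/family asymmetries documented in the cited studies.

```python
# Sketch of a WEAT-style bias probe over word embeddings.
# The 3-dimensional vectors below are invented placeholders; a real analysis
# would load pretrained embeddings (e.g., GloVe) and use the same arithmetic.
import numpy as np

toy_vectors = {
    "he":       np.array([0.9, 0.1, 0.2]),
    "she":      np.array([0.1, 0.9, 0.2]),
    "career":   np.array([0.8, 0.2, 0.5]),
    "salary":   np.array([0.7, 0.3, 0.4]),
    "family":   np.array([0.2, 0.8, 0.5]),
    "children": np.array([0.3, 0.7, 0.4]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, set_a, set_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    vec = toy_vectors[word]
    sim_a = np.mean([cosine(vec, toy_vectors[w]) for w in set_a])
    sim_b = np.mean([cosine(vec, toy_vectors[w]) for w in set_b])
    return sim_a - sim_b

career_terms, family_terms = ["career", "salary"], ["family", "children"]
for pronoun in ["he", "she"]:
    score = association(pronoun, career_terms, family_terms)
    print(f"{pronoun}: career-vs-family association = {score:+.3f}")
```

A positive score indicates a stronger pull toward the career terms; with real embeddings trained on web text, the gap between “he” and “she” on such probes is the statistical trace of the biases described above.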
These biases can translate to real-world harms when entities use generative systems to allocate benefits and opportunities (Ajunwa, Reference Ajunwa2021, Reference Ajunwa2020, p. 1405; Geddes, Reference Geddes2023, p. 31; Kim, Reference Kim2016; O’Neil, Reference O’Neil2016; Solove, Reference Solove2025, pp. 45–46; Solove & Matsumi, Reference Solove and Matsumi2024; Zeide, Reference Zeide2022). A company that uses a biased language model to screen resumes or generate job descriptions may end up excluding qualified candidates from underrepresented backgrounds. A government agency that employs a skewed facial recognition tool to identify suspects or predict recidivism risk may disproportionately target and surveil communities of color (Barrett, Reference Barrett2017; Meijer & Wessels, Reference Meijer and Wessels2019). Over time, such discriminatory outcomes can compound disadvantage and erode economic mobility for marginalized communities (Zeide, Reference Zeide2022).
2.6 Quantification and decontextualization
Automated profiling and decision-making by generative AI systems can also lead to the decontextualization and abstraction of individuals, reducing them to a set of quantifiable data points and statistical inferences (Citron & Pasquale, Reference Citron and Pasquale2014; Cohen, Reference Cohen2000, p. 1405; Zeide, Reference Zeide2017, p. 169). This reductive approach to human identity and agency fails to capture the complexity and nuance of individual circumstances, leading to decisions that may be inaccurate, unfair, or devoid of situational understanding (Cohen, Reference Cohen2000, p. 1405; Geddes, Reference Geddes2023, p. 31; O’Neil, Reference O’Neil2016; Solove & Matsumi, Reference Solove and Matsumi2024, pp. 45–46; Zeide, Reference Zeide2017). A generative AI system used to predict someone’s creditworthiness or risk of recidivism may rely on aggregate patterns and correlations learned from historical data that do not reflect the full context of that person’s circumstances and capacity for behavioral change (Eaglin, Reference Eaglin2017). This risks creating a system of self-fulfilling prophecies that undermine individuals’ autonomy and agency (Cohen, Reference Cohen2013; Harcourt, Reference Harcourt2008; Kerr & Earle, Reference Kerr and Earle2013; Lazaro, Reference Lazaro2018; Solove, Reference Solove2024; Véliz, Reference Véliz2021; Zeide, Reference Zeide2017, Reference Zeide2022). Moreover, using generative AI systems to automate high-stakes decisions shifts discretion away from domain experts to unaccountable actors (Engstrom & Haim, Reference Engstrom and Haim2023, pp. 291–292; Zeide, Reference Zeide2017, pp. 168–169). These systems optimize for what they measure, thereby shaping not only individual assessments but also the broader goals and values of a given context (Engstrom & Haim, Reference Engstrom and Haim2023, pp. 291–292; Zeide, Reference Zeide2017, pp. 168–169). By displacing situated judgment with opaque and unaccountable systems, generative AI risks enabling private entities to shape public values and societal norms without adequate transparency or oversight (Engstrom & Haim, Reference Engstrom and Haim2023, pp. 291–292; Zeide, Reference Zeide2017, pp. 168–169). In summary, the advanced capabilities of generative AI systems pose significant threats to privacy at both the individual and societal levels.
3. The inadequacy of U.S. privacy law in addressing generative AI challenges
The formidable capabilities and evolving risks of generative AI systems present substantial challenges to current privacy and data protection frameworks in the United States. This section highlights three key limitations of the current legal framework: (1) the fragmented and incomplete patchwork of federal and state laws; (2) the mismatch between generative AI’s collective harms and a framework premised on notice and choice; and (3) the inadequacy of individualistic privacy models in capturing AI’s systemic impacts. These shortcomings necessitate a fundamental rethinking of privacy governance in the age of AI.
3.1. A fragmented and sectoral approach to privacy regulation
Unlike the European Union, which has comprehensive privacy and data protection regimes such as the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679, 2016, Arts 12–22 (Rights of the data subject)) and the Artificial Intelligence Act (Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)), the United States employs a sectoral approach to privacy and data regulation. This fragmented approach protects specific data categories rather than establishing a general right to privacy across all contexts.
The current framework comprises a patchwork of federal and state laws imposing different obligations on collecting, using, and sharing personal data based on industry, data type, and jurisdiction. Key examples include the Health Insurance Portability and Accountability Act (42 U.S.C. § 1320d et seq.), the Gramm–Leach–Bliley Act (15 U.S.C. § 6801 et seq.), and the Children’s Online Privacy Protection Act (15 U.S.C. §§ 6501–6506), each imposing unique obligations on the handling of personal healthcare, financial, and children’s data, respectively. The fragmented U.S. framework, which focuses narrowly on protecting specific categories of data based on context, struggles to regulate AI systems that can repurpose and recombine information in ways that transcend traditional sectoral boundaries (King & Meinhardt, Reference King and Meinhardt2024; Solove, Reference Solove2024; Weidinger et al., Reference Weidinger, Uesato, Rauh, Griffin, Huang, Mellor, Glaese, Cheng, Balle, Kasirzadeh, Gabriel, Muennighoff, Higgins, Song, Nichol, Diamanti, Coppin, Gao, El-Showk and Irving2022).
3.2. FTC authority and limitations
In the absence of comprehensive federal AI legislation, the Federal Trade Commission (FTC) has emerged as the de facto federal authority responsible for privacy protection in the context of AI systems, including generative models (DiResta & Sherman, Reference DiResta and Sherman2023). The FTC’s authority stems from its general mandate to protect consumers from unfair or deceptive practices under section 5 of the FTC Act. This includes taking enforcement actions against companies that violate their own privacy policies or engage in “unfair” practices that cause or are likely to cause substantial injury to consumers, that cannot be reasonably avoided by consumers, and that are not outweighed by countervailing benefits to consumers or competition (Federal Trade Commission Act § 5, 15 U.S.C. § 45(a), 2018).
In recent years, the Commission has taken a number of actions to address the privacy and fairness implications of AI systems, including issuing guidance (Federal Trade Commission, 2021, 2023a), conducting workshops (Federal Trade Commission, 2023b), and penalizing companies deploying AI without adequate safeguards against discriminatory impact on protected classes (Federal Trade Commission v. Rite Aid Corp., 2024; Hanley & Goldfarb, Reference Hanley and Goldfarb2021). For example, in 2016, the FTC reached a settlement with InMobi over its deceptive use of location tracking in its mobile ad targeting system (FTC v. InMobi, 2016). In 2019, the agency warned companies purporting to use AI for automated hiring about the risks of perpetuating or exacerbating discriminatory biases (FTC, 2020). In 2021, the FTC issued guidance explicitly cautioning that bias in AI could lead to enforcement actions under laws prohibiting unfair or deceptive practices (FTC, 2021). That same year, the FTC required Everalbum to delete biometric and facial recognition data that had been collected without adequate notice and consent (FTC v. EverAlbum, 2021).
However, several factors limit the FTC’s ability to effectively oversee generative AI systems under its current section 5 authority. First, unlike the European Data Protection Board or the UK Information Commissioner’s Office, the Commission lacks substantive rulemaking power to issue binding regulations interpreting what constitutes an unfair or deceptive AI practice (Hartzog & Solove, Reference Hartzog and Solove2014). It also lacks statutory authority to conduct general audits or inspections of AI developers’ practices (Hartzog & Solove, Reference Hartzog and Solove2014). As a result, its enforcement is largely reactive and fact-specific, focused on procedural violations, such as deceptive marketing or inadequate disclosures, rather than on the broader societal risks and harms posed by AI systems (Hartzog & Solove, Reference Hartzog and Solove2014; Hirsch, Reference Hirsch2020; Waldman, Reference Waldman2019).
Second, the FTC’s jurisdiction only extends to commercial practices that cause or are likely to cause substantial injury to consumers, which may not cover some of the more intangible and externalized impacts of generative AI, such as the erosion of public trust or the amplification of disinformation (Calo, Reference Calo2021; Lamo & Calo, Reference Lamo and Calo2019).
Third, recent judicial decisions have further curtailed the FTC’s enforcement tools. In AMG Capital Management v. FTC (2021), the Supreme Court held that section 13(b) of the FTC Act does not authorize the agency to seek monetary relief like restitution or disgorgement, removing a key deterrent against privacy abuses (pp. 1344–1347). This decision significantly narrowed the agency’s enforcement powers (Federal Trade Commission, 2021).
3.3. The promise and pitfalls of state privacy laws
Given these limitations at the federal level, a growing number of states have taken up the mantle of regulating AI systems (2024 AI State Law Tracker, 2024). As of January 2025, 20 states have enacted comprehensive consumer data protection laws that impose heightened requirements for processing sensitive personal information and establish rights of access, correction, and deletion (2024 AI State Law Tracker, 2024). These state privacy laws often incorporate data protection mechanisms from the GDPR (Regulation (EU) 2016/679, 2016, Arts 12–22; Chander et al., Reference Chander, Kaminski and McGeveran2020).
In addition to these comprehensive privacy laws, states have also enacted or proposed more targeted laws governing specific AI applications. For example, Maryland’s Algorithmic Decision Systems Risk Assessment Act mandates bias and privacy impact assessments for government contractors using AI tools, while California’s proposed Automated Decision Systems Accountability Act would require businesses to evaluate high-risk AI systems and report on their data practices. Some states have specifically targeted generative AI and deepfakes, with California’s A.B. 1280 (2023) prohibiting the dissemination of synthetic media in political campaigns with the intent to deceive voters, and Texas’ S.B. 2382 (2023) creating a civil cause of action for individuals whose likeness is used in sexually explicit deepfakes without their consent. Other states, such as Tennessee and Utah, have gone further by enacting laws that prohibit or criminalize the production of deepfakes more broadly (Tenn. Code Ann. § 47-25-1101 et seq., 2023; Utah Code Ann. § 76-5b-206 et seq., 2024).
While these state efforts represent an important source of policy innovation and experimentation in AI governance, they also suffer from several key limitations in addressing the unique challenges posed by generative AI. They remain fragmented and reactive, often addressing narrow harms rather than the underlying structural conditions that enable them (King & Meinhardt, Reference King and Meinhardt2024; Solove, Reference Solove2024). Most still rely heavily on a notice-and-choice model of privacy protection that is ill-equipped to govern the complex data ecosystems of machine learning pipelines (Hartzog & Richards, Reference Hartzog and Richards2020, p. 1704; King & Meinhardt, Reference King and Meinhardt2024; Solove, Reference Solove2022, pp. 983–984, Reference Solove2024). Moreover, their enforcement mechanisms are often limited to attorney general actions or narrow private rights of action, rather than more comprehensive administrative oversight and auditing (Fitzgerald et al., Reference Fitzgerald, Williams and Cross2024). Ultimately, while state AI laws play a vital role in experimenting with different regulatory approaches and filling gaps in federal policy, they are not sufficient on their own to govern the far-reaching impacts of generative AI, which demands a more comprehensive, proactive, and cohesive governance framework that can address both individual and collective privacy harms, align innovation with public values, and hold AI developers and deployers accountable across contexts.
3.4. The limitations of individualistic privacy paradigms
The most fundamental conceptual limitation of U.S. privacy law in the context of generative AI is its overreliance on individual notice and consent as the primary mechanism for protecting personal autonomy (King & Meinhardt, Reference King and Meinhardt2024; Solove, Reference Solove2022, Reference Solove2025). This approach is ill-suited to address the scale, complexity, and opacity of data flows powering generative AI systems, which rely on web crawling, data brokers, and other indirect sources of data collection that are invisible to individuals and thus not amenable to granular individual control (Burrell, Reference Burrell2016; Hartzog & Richards, Reference Hartzog and Richards2020; King & Meinhardt, Reference King and Meinhardt2024; Pasquale, Reference Pasquale2015; Solove, Reference Solove2013, Reference Solove and Matsumi2024, Reference Solove2025; Veale & Zuiderveen Borgesius, Reference Veale and Zuiderveen Borgesius2021).
First, privacy and data protection laws, which rely heavily on individual rights and procedural safeguards, struggle to contend with the scale, complexity, and opacity of data flows in generative AI systems (Edwards & Veale, Reference Edwards and Veale2017; Hartzog & Richards, Reference Hartzog and Richards2020, p. 1704; Kaminski, Reference Kaminski2023; Solove, Reference Solove2022, p. 993). These characteristics render many core provisions, such as access rights, correction mechanisms, and deletion requirements, technically and practically infeasible (Carlini et al., Reference Carlini, Tramer, Wallace, Jagielski, Herbert-Voss, Lee, Roberts, Brown, Song, Erlingsson, Oprea and Raffel2021; Katell et al., Reference Katell, Young, Dailey, Herman, Guetler, Tam and Krafft2020; Shokri, Stronati, Song & Shmatikov, Reference Shokri, Stronati, Song and Shmatikov2017; Villaronga et al., Reference Villaronga, Kieseberg and Li2018; Waldman, Reference Waldman2019). Even when individuals formally invoke these rights, compliance may not be technically feasible, as generative AI models encode a compressed representation of their training data, making it difficult to erase or remove specific personal information (Carlini et al., Reference Carlini, Tramer, Wallace, Jagielski, Herbert-Voss, Lee, Roberts, Brown, Song, Erlingsson, Oprea and Raffel2021). Similarly, if a generative model is used to create and disseminate harmful synthetic media, it will be difficult to contain or reverse the viral spread of this content (Bloch-Wehba, Reference Bloch-Wehba2020; Chesney & Citron, Reference Chesney and Citron2019; Lamo & Calo, Reference Lamo and Calo2019; Van der Sloot & Wagensveld, Reference Van der Sloot and Wagensveld2022).
Furthermore, the individualistic focus of U.S. privacy law extends beyond notice and choice to its emphasis on procedural rights and ex post remedies through private lawsuits (Edwards & Veale, Reference Edwards and Veale2017; Hartzog & Richards, Reference Hartzog and Richards2020, p. 1704; Hirsch, Reference Hirsch2020, p. 462; Solove, Reference Solove2022, p. 993; Reference Solove2025). However, these mechanisms are often inadequate or ineffective in the face of generative AI’s structural risks and harms, as the opacity and inscrutability of these systems pose significant challenges for existing legal frameworks designed to ensure transparency and accountability (Cohen, Reference Cohen2019b; Selbst & Barocas, Reference Selbst and Barocas2018; Wachter & Mittelstadt, Reference Wachter and Mittelstadt2019).
Finally, intellectual property laws, particularly trade secret protections, enable companies to assert proprietary control over training data and algorithms (Tschider, Reference Tschider2021, p. 711; Wexler, Reference Wexler2018, p. 1402), limiting external visibility and oversight. This legal barrier compounds the technical inscrutability of generative AI systems, further undermining accountability (Burrell, Reference Burrell2016; Citron, Reference Citron2008; Kroll et al., Reference Kroll, Huey, Barocas, Felten, Reidenberg, Robinson and Yu2017; Selbst & Barocas, Reference Selbst and Barocas2018). As a result, there is often little public disclosure of information about data sources, model architectures, training procedures, and output generation processes, making it difficult for individuals to understand how their data are being used and to identify potential harms or abuses.
3.5. The collective harms and societal risks of generative AI
Beyond these individual challenges, generative AI systems pose a range of diffuse societal risks that are mismatched with the individualistic, reactive focus of existing U.S. privacy laws (Bhargava & Velasquez, Reference Bhargava and Velasquez2020; Cinnamon, Reference Cinnamon2017; Cooper et al., Reference Cooper, Lee, Grimmelmann, Ippolito, Callison-Burch, Choquette-Choo, Mireshghallah, Brundage, Mimno, Zahrah Choksi, Balkin, Carlini, De Sa, Frankie, Ganguli, Gipson, Guadamuz, Leng Harris, Jacobs, Joh and Kamath2023; Zeide, Reference Zeide2022). Anti-discrimination laws, which target discrete instances of intentional or disparate impact discrimination, struggle to address the structural and emergent harms of generative AI, such as compounded disadvantage, intersectional bias, and the preemptive shaping of opportunities (Kerr & Earle, Reference Kerr and Earle2013b; Solove, Reference Solove2024; Zeide, Reference Zeide2022). These laws fail to capture the systemic and diffuse impacts of generative AI on historically disadvantaged populations (Mayson, Reference Mayson2018). Many of the most troubling impacts of generative AI are invisible, embedded in automated systems, and occurring before formal decision points (Zeide, Reference Zeide2022). Instead of explicit rejection, biased or inaccurate assessments and predictions often preempt access to opportunity (Kerr & Earle, Reference Kerr and Earle2013b; Solove, Reference Solove2024; Zeide, Reference Zeide2022). For example, as I have discussed in prior work, predictive hiring algorithms can filter out qualified job applicants based on biased assessments, creating a “silicon ceiling” that imperceptibly impedes economic mobility for marginalized communities (Zeide, Reference Zeide2022).
In light of these limitations, reactive, individual-centric regulation is inadequate to mitigate the structural risks of generative AI. Addressing these challenges requires a proactive, systemic, and collaborative approach that moves beyond individual rights and remedies.
4. Towards a paradigm for generative AI governance
The limitations of existing regulatory frameworks in addressing the privacy risks posed by generative AI systems underscore the need for a new paradigm of privacy governance. This section argues for a fundamental reorientation of privacy protection from a narrow focus on individual control and procedural safeguards to a more systemic approach. It outlines three key elements of this new paradigm: (1) shifting from individual to collective conceptions of privacy; (2) moving from reactive to proactive governance; and (3) reorienting the goals and values of AI governance. The section concludes by acknowledging the significant obstacles to implementing such a paradigm shift in the United States, including the lack of a comprehensive federal privacy law, the limitations of sectoral and state-level regulations, and the entrenched ideological resistance to precautionary governance of emerging technologies.
4.1. Shifting from individual to collective conceptions of privacy
Given the limitations of individual control and the societal impact of generative AI, privacy governance should shift toward a more holistic understanding of privacy as a social value and public good. A robust approach to generative AI privacy governance requires reorienting privacy protection from individual control and procedural rights toward privacy as a collective good (Cohen, Reference Cohen2019b; Tisné, Reference Tisné2020). As the limitations of existing regulations demonstrate, policymakers cannot mitigate the privacy risks of generative AI solely by empowering individuals to control how particular entities collect and use their personal data. Instead, achieving meaningful privacy protection in the generative AI era requires recognizing the collective and relational dimensions of privacy harms (Milner & Traub, Reference Milner and Traub2021; Viljoen, Reference Viljoen2021).
4.2. Moving from reactive to proactive governance
A second key element in a more robust privacy governance framework is a shift from reactive and retroactive enforcement actions to proactive and preventative oversight regimes (Kaminski, Reference Kaminski2023). Rather than relying primarily on ex post remedies triggered by specific legal violations or consumer complaints, policymakers should institutionalize continuous monitoring, auditing, and impact assessment requirements to surface and mitigate potential risks before companies and organizations deploy generative AI systems at scale (Kaminski, Reference Kaminski2023; Katell et al., Reference Katell, Young, Dailey, Herman, Guetler, Tam and Krafft2020; Metcalf et al., Reference Metcalf, Moss, Watkins, Singh and Elish2021). This means prioritizing participatory approaches to AI governance that anticipate risks rather than respond to harms after the fact, and investing in new institutional capacities and governance frameworks that enable ongoing monitoring, assessment, and public engagement throughout the AI lifecycle.
The European Union’s AI Act offers a potential model for systematic AI governance (European Commission, 2023). One key strength of the AI Act is its focus on the broader societal impacts of AI systems, rather than just individual privacy harms (European Commission, 2023). The Act creates a comprehensive regulatory framework for AI systems, imposing graduated requirements based on a technology’s level of risk (European Commission, 2023). Notably, the Act requires “high-risk” AI systems to undergo mandatory conformity assessments to ensure compliance with essential requirements related to data quality, transparency, human oversight, and robustness before entering the EU market. It also creates ongoing monitoring obligations for high-risk systems and establishes a centralized database for registering stand-alone AI systems. While the Act has drawn criticism for the compliance burdens it imposes (Corbett, Reference Corbett2024), it represents a meaningful effort to extend regulatory scrutiny to the entire AI lifecycle and to create an institutional infrastructure for proactive and adaptive governance (Veale & Zuiderveen Borgesius, Reference Veale and Zuiderveen Borgesius2021).
4.3. Reorienting the goals and values of AI governance
Ultimately, the limitations of the current U.S. privacy framework in addressing the challenges of generative AI point to the need for a more fundamental reorientation of the goals and values underlying AI governance. Addressing these challenges requires not just new regulatory tools and oversight mechanisms, but a deeper shift in how we conceptualize the purposes and priorities of AI governance itself (Milner & Traub, Reference Milner and Traub2021; Powles & Nissenbaum, Reference Powles and Nissenbaum2018; Viljoen, Reference Viljoen2021). This reorientation involves moving beyond a narrow focus on protecting individual privacy rights and towards a broader vision of promoting collective well-being, social justice, and democratic values in the development and deployment of AI systems (Crawford, Reference Crawford2021; West et al., Reference West, Whittaker and Crawford2019; Whittaker et al., Reference Whittaker, Alper, Bennett, Hendren, Kaziunas, Mills, Morris, Rankin, Rogers, Salas and Myers West2019). It means reconceptualizing privacy not just as a matter of individual control over personal data, but as a collective good that is essential for human autonomy, dignity, and self-determination in the face of increasingly powerful and pervasive AI systems (Cohen, Reference Cohen2019b; Tisné, Reference Tisné2020).
Finally, it means grappling with the inherently political and value-laden nature of AI development and governance, and creating mechanisms for democratic deliberation and contestation over the goals, values, and trade-offs embedded in these systems (Benthall & Haynes, Reference Benthall and Haynes2019). This requires moving beyond technocratic and instrumental approaches to AI ethics and governance, and towards more inclusive and participatory processes that empower affected communities to shape the trajectories of AI innovation in line with their values and interests (Crawford, Reference Crawford2021). While the specific regulatory tools and accountability mechanisms needed to operationalize these principles will likely vary across different contexts and jurisdictions, reorienting the underlying goals and values of AI governance is an essential first step toward a more proactive, equitable, and democratically legitimate approach to managing the risks and benefits of generative AI systems (Milner & Traub, Reference Milner and Traub2021; Viljoen, Reference Viljoen2021).
4.4. Obstacles to comprehensive AI privacy governance in the USA
Implementing a new paradigm of AI privacy governance in the USA will not be easy, as it must contend with the country’s deeply rooted legal traditions, political economy, and ideological commitments. One major hurdle is the political challenge of passing a comprehensive federal privacy law that could provide a coherent and consistent framework for governing AI systems across different sectors and jurisdictions. While there have been several proposals for such a law in recent years, none have yet been enacted, partly due to disagreements over preemption, private rights of action, and the scope of covered data and entities (Kerry et al., Reference Kerry, Chin and Lee2020). The highly polarized and industry-captured policymaking process can block or dilute even incremental reforms, making it difficult to achieve the kind of systemic change needed to address generative AI’s privacy risks (Kaminski, Reference Kaminski2023).
Another significant obstacle is the strong protection afforded to freedom of expression under the First Amendment, which courts interpret to cover a wide range of data-driven activities, from collecting and disseminating publicly available information to creating and sharing synthetic media (Franks, Reference Franks2019; Wu, Reference Wu2012). These free speech protections, along with intellectual property rights, can hinder the ability of regulators to impose substantive restrictions on AI-generated content or mandate disclosure of proprietary AI systems (Franks, Reference Franks2019; Massaro et al., Reference Massaro, Norton and Kaminski2017, pp. 2481–2525). This constitutional constraint, coupled with the U.S. policy landscape’s historical preference for a laissez-faire, innovation-friendly approach to technological development, creates a challenging environment for proactive AI governance (Cohen, Reference Cohen2019a; Thierer, Reference Thierer2016, pp. 33–38).
The permissive stance of U.S. law, favoring market-driven solutions and self-regulation over precautionary regulation, is exemplified by numerous safe harbors and immunities for online platforms and technology providers, most notably section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content (Kaminski, Reference Kaminski2023; Citron & Wittes, Reference Citron and Wittes2017; Kosseff, Reference Kosseff2019). This preference for innovation over precaution (Thierer, Reference Thierer2016, pp. 33–38), reflected in permissive liability regimes and judicial doctrines, places the burden of proof on regulators to demonstrate clear and concrete harm before intervening in the development and deployment of AI systems (Calo & Citron, Reference Calo and Citron2020; Thierer, Reference Thierer2016). As a result, proactive regulation and public participation in AI governance can be chilled, making it difficult to address generative AI’s privacy risks in a comprehensive and timely manner (Buchanan et al., Reference Buchanan, Lohn, Musser and Sedova2021).
Despite these obstacles, there is a growing recognition among policymakers, experts, and the public of the need for a more proactive, equitable, and democratically accountable approach to AI governance (Milner & Traub, Reference Milner and Traub2021; Viljoen, Reference Viljoen2021; Zhang & Dafoe, Reference Zhang and Dafoe2020). Overcoming the current barriers will require a sustained effort to build political will, public awareness, and institutional capacity for a new paradigm of AI privacy governance. This may involve innovative legal strategies, multi-stakeholder partnerships, and public education campaigns to shift the discourse and create the conditions for meaningful reform. While the path forward is challenging, the stakes are too high to maintain the status quo in the face of generative AI’s transformative impact on privacy, autonomy, and democracy.
5. Conclusion
The meteoric rise and widespread adoption of generative AI systems present significant threats to privacy and the regimes that seek to protect it. The ability of these systems to generate novel and realistic content based on patterns learned from vast troves of personal data raises profound risks, from nonconsensual data extraction and inferential profiling to the spread of synthetic media and the amplification of algorithmic biases. Existing regulatory approaches, which rely heavily on individual notice and consent, ex post enforcement, and a narrow conception of privacy harms, are ill-equipped to address generative AI’s systemic and diffuse impacts. Addressing these challenges will require a fundamental reorientation of privacy governance from reliance on individual control and procedural safeguards to a more collective, proactive, and precautionary approach that recognizes privacy as a public good and collective responsibility.
Acknowledgements
Many thanks to Harry Surden, Ignacio Cofone, Neil M. Richards, and Paul Weitzel for their valuable input.
Funding statement
None.
Competing interests
None declared.
Elana Zeide is an Assistant Professor at the University of Nebraska College of Law, where she examines how artificial intelligence and digital systems affect privacy, equity, and opportunity. Her scholarship focuses on the governance of technology in education and hiring contexts. She has published extensively on student privacy and algorithmic decision-making, exploring how law can promote responsible innovation. She advises schools, companies, and policymakers on the ethical implementation of automated systems.