{"id":106658,"date":"2025-10-29T08:00:39","date_gmt":"2025-10-29T12:00:39","guid":{"rendered":"https:\/\/www.historians.org\/?post_type=resource&#038;p=106658"},"modified":"2025-10-31T12:54:11","modified_gmt":"2025-10-31T16:54:11","slug":"history-of-artificial-intelligence-privacy-security","status":"publish","type":"resource","link":"https:\/\/www.historians.org\/resource\/history-of-artificial-intelligence-privacy-security\/","title":{"rendered":"History of Artificial Intelligence, Privacy, &#038; Security"},"content":{"rendered":"<h4>About the Briefing<\/h4>\n<p>This handout was created for the AHA&#8217;s October 29, 2025 online <a href=\"https:\/\/www.historians.org\/news-and-advocacy\/congressional-briefings\">Congressional Briefing<\/a> on the history of artificial intelligence. Panelists <a href=\"https:\/\/as.vanderbilt.edu\/history\/bio\/sarah-igo\/\">Sarah Igo<\/a> (Vanderbilt Univ.), <a href=\"https:\/\/www.cla.purdue.edu\/directory\/profiles\/aaron-mendon-plasek.html\">Aaron Mendon-Plasek<\/a> (Purdue Univ.), and <a href=\"https:\/\/sts.cornell.edu\/rebecca-slayton\">Rebecca Slayton<\/a> (Cornell Univ.) discussed the historical context of privacy and national security issues that are being transformed by AI. <a href=\"https:\/\/www.cla.purdue.edu\/directory\/profiles\/kathryn-cramer-brownell.html\">Kathryn Cramer Brownell<\/a> (Purdue Univ.) 
served as moderator.<\/p>\n<p>The recording for this event can be found on the <a href=\"https:\/\/www.youtube.com\/watch?v=2UR80JdtP4Q\">AHA&#8217;s YouTube channel<\/a>.<\/p>\n<h4>Technology, Privacy, and Security<\/h4>\n<ul>\n<li>New technologies have regularly presented challenges to Americans\u2019 privacy and security, for policymakers and ordinary citizens alike.<\/li>\n<li>In the late 19th century, a host of innovations in communications and media made virtual intrusions as important as physical ones for the first time in US history, prompting calls for a legal right to privacy.<\/li>\n<li>In the more than a century since, Americans have weighed how to balance the many efficiencies and conveniences that new technologies\u2014from the telephone and instantaneous photography to chatbots and deepfakes\u2014provide against their capacity to compromise individuals\u2019 physical, psychological, biometric, financial, and data security.<\/li>\n<li>Although generative artificial intelligence (AI) appears to harbor entirely new threats, technologies of exposure, surveillance, interception, capture, and transmission have long shaped the conditions for, and understandings of, individual privacy in the United States.<\/li>\n<li>At key moments in the last 150 years, debates over the privacy risks posed by new technologies generated novel legal and policy responses. 
This history offers a view of the regulatory roads taken and not taken, allowing an assessment of the efficacy of existing US frameworks for public oversight.<\/li>\n<\/ul>\n<h4>Early Computing in Public Policy<\/h4>\n<ul>\n<li>In the United States, public debate about privacy, security, and computers first emerged in the 1960s, when much of the infrastructure that motivates current concerns about AI was developed.<\/li>\n<li>In the 1970s, Congress passed laws to protect the privacy of citizens and consumers but declined to create a federal privacy agency or enact other recommended protections. Individual privacy largely fell by the wayside as the internet was commercialized in the 1990s.<\/li>\n<li>Current AI trends focus on \u201cmachine learning.\u201d What most distinguishes it from past approaches is its dependence on vast amounts of data that are produced and gathered through the internet.<\/li>\n<li>While many AI applications are useful and do not violate privacy, the specific developments that threaten privacy and security today are largely enabled by explicit choices to forgo privacy protections. 
In this sense, we can think not just about how AI is affecting privacy, but also about how a lack of privacy protection has shaped the evolution of AI.<\/li>\n<\/ul>\n<h4>Machine Learning<\/h4>\n<ul>\n<li>Historical accounts of AI frequently downplay the contributions of those communities critical to the creation of contemporary forms of machine learning. This omission has entrenched an oversimplified narrative of technological development, which, in turn, has been leveraged by technologists to argue forcefully for the inevitability and superiority of certain forms of AI.<\/li>\n<li>These particular visions, often couched in the discourses of \u201cefficacy\u201d and \u201cinnovation,\u201d have reorganized beliefs about technology transfer, the value(s) of science, and the ways technology facilitates economic development. Such concerns explicitly inform conversations about national security even as they spur the reimagining of \u201cprivacy.\u201d<\/li>\n<li>The work of several historical communities of practice engaged in \u201cmachine learning\u201d research suggests how disunified research efforts spurred the proliferation of specific contemporary forms of machine learning.<\/li>\n<li>The interweaving of privacy and national security has been a distinctive feature of many historical efforts to use forms of machine learning to make decisions given contradictory information.<\/li>\n<\/ul>\n<h4>Participant Biographies<\/h4>\n<p><strong>Kathryn Cramer Brownell<\/strong> is professor of history and director of the Center for American Political History and Technology at Purdue University. She is the author of <em>Showbiz Politics: Hollywood in American Political Life<\/em> (2014) and <em>24\/7 Politics: Cable Television and the Fragmenting of America from Watergate to Fox News<\/em> (2023), which won the Eugenia M. Palmegiano Prize from the American Historical Association and the PROSE Award in Media and Cultural Studies from the Association of American Publishers. 
She also serves as senior editor for the \u201cMade By History\u201d column at TIME Magazine.<\/p>\n<p><strong>Sarah E. Igo<\/strong> is the Andrew Jackson Chair in American History at Vanderbilt University. She teaches and writes about modern US cultural, intellectual, legal, and political history, with special interests in the human sciences, the sociology of knowledge, and the public sphere. Her most recent book, <em>The Known Citizen: A History of Privacy in Modern America<\/em>, traces US debates over the meaning of privacy, beginning with \u201cinstantaneous photography\u201d in the late 19th century and culminating in our present dilemmas over social media and big data. Her first book, <em>The Averaged American: Surveys, Citizens, and the Making of a Mass Public<\/em>, explores the relationship between survey data\u2014opinion polls, sex surveys, consumer research\u2014and modern understandings of self and nation. She is also a co-author of Bedford\/St. Martin\u2019s American history textbook, <em>The American Promise<\/em>.<\/p>\n<p><strong>Aaron Mendon-Plasek<\/strong> is an assistant professor of history at Purdue University. His first book project, tentatively titled <em>The Ill-Defined World: A History of Machine Learning and Novel Political Knowledge<\/em>, examines how little-known communities of transnational researchers sought to build learning machines that linked \u201cefficacy\u201d to visions of subjectivity. The book traces how these schemes of quantification would go on to remake contemporary AI, scientific inquiry, and the ways that societies know themselves.<\/p>\n<p><strong>Rebecca Slayton<\/strong> is associate professor of science and technology studies at Cornell University. Her research and teaching examine the relationships among risk, governance, and expertise, with a focus on international security and cooperation since World War II. 
Her first book, <em>Arguments that Count: Physics, Computing, and Missile Defense, 1949\u20132012<\/em>, shows how the rise of a new field of expertise in computing reshaped public policies and perceptions about the risks of missile defense in the United States. In 2015, <em>Arguments that Count<\/em> won the Computer History Museum Prize. Her second book project, <em>Shadowing Cybersecurity<\/em>, examines the emergence of cybersecurity expertise through the interplay of innovation and repair. She is also working on a third project that examines tensions intrinsic to the creation of a \u201csmart\u201d electrical power grid\u2014i.e., a more sustainable, reliable, and secure grid.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>About the Briefing This handout was created for the AHA&#8217;s October 29, 2025 online Congressional Briefing on the history of&hellip;<\/p>\n","protected":false},"featured_media":27724,"template":"","aha-topic":[60],"geographic-taxonomy":[],"resource-type":[87,906,80],"thematic-taxonomy":[33,38],"class_list":{"0":"post-106658","1":"resource","2":"type-resource","3":"status-publish","4":"has-post-thumbnail","5":"hentry","6":"aha-topic-aha-initiatives-projects","7":"resource-type-aha-resource","8":"resource-type-congressional-briefing-resource","9":"resource-type-for-the-classroom","10":"thematic-taxonomy-medicine-science-technology","11":"thematic-taxonomy-political","18":"has-featured-image"},"acf":[],"_links":{"self":[{"href":"https:\/\/www.historians.org\/wp-json\/wp\/v2\/resource\/106658","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.historians.org\/wp-json\/wp\/v2\/resource"}],"about":[{"href":"https:\/\/www.historians.org\/wp-json\/wp\/v2\/types\/resource"}],"version-history":[{"count":4,"href":"https:\/\/www.historians.org\/wp-json\/wp\/v2\/resource\/106658\/revisions"}],"predecessor-version":[{"id":106753,"href":"https:\/\/www.historians.org\/wp-json\/wp\/v2\/resource\/106658\/revisions\/106753"}],"wp:featuredmedia":[{"embeddable":t
rue,"href":"https:\/\/www.historians.org\/wp-json\/wp\/v2\/media\/27724"}],"wp:attachment":[{"href":"https:\/\/www.historians.org\/wp-json\/wp\/v2\/media?parent=106658"}],"wp:term":[{"taxonomy":"aha-topic","embeddable":true,"href":"https:\/\/www.historians.org\/wp-json\/wp\/v2\/aha-topic?post=106658"},{"taxonomy":"geographic-taxonomy","embeddable":true,"href":"https:\/\/www.historians.org\/wp-json\/wp\/v2\/geographic-taxonomy?post=106658"},{"taxonomy":"resource-type","embeddable":true,"href":"https:\/\/www.historians.org\/wp-json\/wp\/v2\/resource-type?post=106658"},{"taxonomy":"thematic-taxonomy","embeddable":true,"href":"https:\/\/www.historians.org\/wp-json\/wp\/v2\/thematic-taxonomy?post=106658"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}