Artificial Intelligence (AI) is no longer a futuristic concept confined to research labs and tech giants; it is now a central topic of legislative discourse in the United States. The rapid proliferation of AI technologies and their profound impact on various facets of life—from healthcare and finance to education and national security—has necessitated a robust regulatory framework. The scale of this legislative activity is striking: since 2016, federal lawmakers have passed 23 AI-related bills into law, more than any other country in the world.
Amid this legislative frenzy, AI scientists are stepping out of academia and into the corridors of power, bringing their technical expertise to the table. One such scientist is Kiri Wagstaff, a computer scientist who temporarily left her teaching position at Oregon State University to work for a year in the office of Senator Mark Kelly, an Arizona Democrat and former astronaut. Wagstaff is part of the Science & Technology Policy Fellowships program run by the American Association for the Advancement of Science (AAAS), which has placed six AI researchers in Congress to provide critical technical advice on proposed AI laws.
Kiri Wagstaff: From NASA JPL to Capitol Hill
Kiri Wagstaff's journey from the NASA Jet Propulsion Laboratory (JPL) to the US Congress is a testament to the growing recognition of the need for technical expertise in shaping AI policy. At JPL, Wagstaff spent about two decades developing AI and machine learning applications for space exploration. Her work included analyzing vast datasets and enhancing the capabilities of rovers and orbiters. One notable project involved updating the Mars Science Laboratory rover's software to autonomously analyze and prioritize rock samples for laser spectrometry, significantly improving the efficiency of its scientific investigations.
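To give a concrete flavor of what this kind of onboard autonomy involves, the sketch below ranks hypothetical rock targets by a simple score and picks the best candidates for follow-up measurement. It is a minimal illustration only: the features, weights, and example values are invented for this article and do not reflect the rover's actual flight software.

```python
# Minimal sketch of autonomous target prioritization, for illustration only.
# The features, weights, and example values below are invented and do not
# reflect the rover's actual flight software.

from dataclasses import dataclass


@dataclass
class RockTarget:
    target_id: str
    size_px: float      # apparent size of the detected rock, in image pixels
    brightness: float   # mean pixel intensity, scaled to 0.0-1.0
    distance_m: float   # estimated distance from the rover, in meters


def science_score(t: RockTarget) -> float:
    """Combine simple image features into one priority score (hypothetical weights)."""
    return 0.5 * (t.size_px / 100.0) + 0.3 * t.brightness - 0.2 * (t.distance_m / 10.0)


def prioritize(targets: list[RockTarget], top_n: int = 2) -> list[RockTarget]:
    """Rank detected targets and return the best candidates for follow-up measurement."""
    return sorted(targets, key=science_score, reverse=True)[:top_n]


if __name__ == "__main__":
    candidates = [
        RockTarget("T1", size_px=80, brightness=0.7, distance_m=4.0),
        RockTarget("T2", size_px=120, brightness=0.5, distance_m=2.5),
        RockTarget("T3", size_px=40, brightness=0.9, distance_m=6.0),
    ]
    for t in prioritize(candidates):
        print(f"{t.target_id}: score={science_score(t):.2f}")
```

The point of the sketch is the pattern, not the numbers: instead of waiting for instructions from Earth, the software scores what it sees and spends its limited instrument time on the most promising targets.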
This hands-on experience with applied machine learning made Wagstaff an ideal candidate for the AAAS fellowship. When the opportunity arose in late July 2023, Wagstaff was immediately intrigued. The fellowship program typically takes about a year to process applications, but the urgency of the AI regulatory landscape expedited the selection process. By September 1, Wagstaff and her fellow AI experts were in Washington, D.C., ready to dive into the legislative process.
Inside the Halls of Power
Wagstaff's day-to-day responsibilities in Senator Kelly's office are diverse and impactful. She reviews bill proposals, assesses their technical feasibility, and ensures that the language used in legislative drafts accurately reflects the complexities of AI technology. AI's broad applicability means that it intersects with numerous sectors, including finance, jobs, education, and copyright. As Wagstaff notes, AI's ubiquity is such that asking whether a topic involves AI is akin to asking if it involves electricity or computers.
The current congressional session, which began in January 2023, has seen over 300 AI-related bills introduced. These bills cover a wide range of issues, from combating misinformation to promoting AI innovation and research. The sheer volume and diversity of these proposals highlight the multifaceted nature of AI and the pressing need for informed legislative oversight.
The Challenge of Misinformation and Deceptive AI
One area where AI legislation is particularly relevant is in addressing misinformation. Several bills propose measures to regulate the use of generative AI in political campaigns. Some suggest that any campaign content created using generative AI, regardless of its truthfulness, should carry a label or disclaimer. Others go further, proposing to ban deceptive AI content that portrays events or statements that did not actually occur.
These legislative efforts reflect a broader concern about the potential for AI to amplify misinformation and undermine democratic processes. Existing laws already prohibit certain types of falsehoods, but the unique capabilities of AI to generate realistic and persuasive content necessitate additional legal safeguards. The key challenge is identifying where current laws fall short and crafting new regulations to fill those gaps.
Environmental Impact of AI Systems
Another significant aspect of AI regulation is addressing the environmental impact of AI systems. AI models, particularly large language models and deep learning algorithms, require substantial computational resources, leading to high energy consumption and significant carbon footprints. Additionally, data centers that support AI operations consume vast amounts of water for cooling purposes.
Recognizing this, some proposed bills aim to measure and mitigate the environmental impacts of AI. These efforts are crucial as the adoption of AI technologies continues to accelerate, and the need for sustainable practices becomes ever more urgent.
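To show what "measuring" an AI system's footprint can involve in practice, the sketch below does a rough back-of-envelope estimate of the energy and emissions of a training run. Every input (GPU count, per-GPU power draw, training time, data-center overhead, grid carbon intensity) is a hypothetical assumption chosen for illustration, not a measurement of any real system.

```python
# Back-of-envelope estimate of the energy and emissions of an AI training run.
# Every input here (GPU count, per-GPU power, hours, data-center overhead, grid
# carbon intensity) is a hypothetical assumption, not a measured value.

def training_footprint(num_gpus: int,
                       gpu_power_kw: float,
                       hours: float,
                       pue: float = 1.2,
                       grid_kg_co2_per_kwh: float = 0.4) -> tuple[float, float]:
    """Return (energy in kWh, emissions in kg CO2e) for one training run."""
    energy_kwh = num_gpus * gpu_power_kw * hours * pue  # PUE covers cooling and other overhead
    emissions_kg = energy_kwh * grid_kg_co2_per_kwh     # depends heavily on the local grid mix
    return energy_kwh, emissions_kg


if __name__ == "__main__":
    # Hypothetical run: 512 GPUs drawing about 0.4 kW each for two weeks.
    kwh, co2 = training_footprint(num_gpus=512, gpu_power_kw=0.4, hours=14 * 24)
    print(f"Estimated energy: {kwh:,.0f} kWh; emissions: {co2:,.0f} kg CO2e")
```

Even a crude estimate like this makes clear why reporting requirements matter: the result swings by orders of magnitude depending on how long models train, how efficient the data center is, and how clean the local electricity grid happens to be.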
Learning from the European Union
The United States is not alone in grappling with the challenges of AI regulation. The European Union (EU) has been proactive in this domain, most notably with the passage of the AI Act in March 2024. This legislation aims to establish a comprehensive regulatory framework for AI, addressing issues such as transparency, accountability, and ethical use.
For US lawmakers, the EU's AI Act presents both an opportunity and a cautionary tale. By observing the EU's regulatory efforts, the US can learn valuable lessons about potential pitfalls and best practices. However, the US must also navigate its unique legal landscape, particularly the protections afforded by the First Amendment, which safeguard freedom of speech and can complicate efforts to regulate AI-generated content.
The Future of AI Policy in the United States
Looking ahead, the trajectory of AI policy in the United States will likely focus on several key areas. One pressing issue is data privacy and ownership. As AI systems become more integrated into everyday life, questions about who owns personal data, how it can be used, and what rights individuals have over their data will become increasingly important. Addressing these concerns will be crucial to maintaining public trust and ensuring that AI technologies are developed and deployed in a manner that respects individual privacy and autonomy.
In conclusion, the involvement of scientists like Kiri Wagstaff in the legislative process marks a significant step forward in the development of informed and effective AI policies. As AI continues to evolve and permeate various aspects of society, the collaboration between technologists and lawmakers will be essential in crafting regulations that balance innovation with ethical considerations and public safety. The work being done today will lay the groundwork for a future where AI can be harnessed for the greater good while minimizing its potential risks and harms.
Reference
https://www.nature.com/articles/d41586-024-01354-4