Why a Scientist Spoke Out for Labeling Political AI and Auditing Its Costs
Andres Hernandez-Serna, a researcher in the UMD-GLAD lab, explains why the same tools he uses to track forests and land cover now need rules for transparency in political messaging.
Generative AI is changing the way we communicate, and not always in ways we notice. In a recent Nature correspondence, Andres Hernandez-Serna, a principal faculty specialist in UMD’s Global Land Analysis and Discovery (GLAD) laboratory in the Department of Geographical Sciences, raised concerns about how AI is being used in political persuasion. Hernandez-Serna also serves as a university senator and as an affiliated researcher with UMD’s Artificial Intelligence Interdisciplinary Institute (AIM). We spoke with him about why he chose to weigh in, what’s at stake and how his work tracking land and forests connects to these emerging issues.
Your research focus is Earth observation to monitor forests. What led you to speak out about how AI is being used in politics?
At the GLAD lab, I use machine learning and deep learning to analyze satellite imagery and spatial data. Scientists have used models like these for decades. What is new today is the scale of data and computing, newer architectures such as transformers and diffusion models, and the mass adoption that has put the umbrella term “AI” into everyday conversation.
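To make that concrete, here is a minimal sketch of the kind of per-pixel pattern finding this work relies on, using a scikit-learn random forest; the band values and labels are synthetic stand-ins for illustration, not GLAD data or the lab’s actual pipeline.

```python
# Minimal sketch: classifying land cover from per-pixel satellite band values.
# The data and labels below are synthetic; real workflows are far larger and
# often use deep architectures, but the pattern-finding logic is the same.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Each row is one pixel: [red, near-infrared, shortwave-infrared] reflectance.
X = rng.random((1000, 3))
# 0 = forest, 1 = cropland (synthetic labels for illustration only).
y = (X[:, 1] < 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```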
The same pattern-finding methods we use to map forests or crops now power large-scale messaging systems, including political communication. I spoke up because we are in an early, high-impact phase. The norms we set on transparency and accountability now will shape how these tools are used for years. Political persuasion influences collective choices that also affect land, water and climate, so the topic connects directly to my work on the human-environment interface.
Can you explain how AI systems target voters, and how that relates to the pattern-finding methods you use in your research on forests and land cover?
Imagine a system that reads a voter’s public posts and neighborhood-level interests, then generates thousands of ad variants, adjusting tone, imagery and wording for each micro-audience. It runs A/B tests to see which version works best, in English or another language, for young parents versus retirees, and keeps optimizing in near real time.
Technically, that is the same logic we use to distinguish one forest type from another in satellite images. We detect patterns, validate them and act on the result. The difference is that the “map” here is a message designed to capture attention with precision, so auditability and disclosure matter as much as accuracy.
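For illustration, here is a minimal sketch of the “keeps optimizing” loop described above, written as an epsilon-greedy bandit over message variants; the variant names and response rates are invented, not drawn from any real campaign system.

```python
# Illustrative sketch of an optimize-as-you-go loop: an epsilon-greedy bandit
# that shifts traffic toward whichever message variant draws the most
# responses. All variants and rates are invented for illustration.
import random

variants = ["variant_a", "variant_b", "variant_c"]  # hypothetical ad copies
true_rates = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}
shown = {v: 0 for v in variants}
clicked = {v: 0 for v in variants}

random.seed(0)
for _ in range(10_000):
    if random.random() < 0.1:   # explore: occasionally try a random variant
        v = random.choice(variants)
    else:                       # exploit: show the best performer so far
        v = max(variants,
                key=lambda x: clicked[x] / shown[x] if shown[x] else 0.0)
    shown[v] += 1
    clicked[v] += random.random() < true_rates[v]  # simulated response

for v in variants:
    print(v, "shown:", shown[v], "rate:", f"{clicked[v] / shown[v]:.3f}")
```

Run long enough, the loop concentrates impressions on the highest-responding variant, which is exactly why auditability of the loop matters as much as the accuracy of any single message.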
You’ve pointed out that people often can’t tell when a message comes from a machine. Why is that lack of transparency such a concern?
In science, provenance is essential. We record whether a measurement came from a satellite, a drone or a ground sensor, and without that context conclusions can wobble. Communication needs the same norm.
Knowing whether a message is human- or AI-generated does not make it good or bad by itself, but it provides critical context to judge intent, credibility and risk. Clear labeling also enables accountability and consent because people can understand when they are interacting with an automated system and can report misuse or seek corrections.
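As a sketch of what machine-readable labeling could look like, here is a hypothetical provenance record attached alongside a message; the field names and values are assumptions for illustration, not an existing disclosure standard.

```python
# Hypothetical disclosure record for one message. Every field name here is
# an assumption for illustration, not a real labeling standard.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class MessageProvenance:
    origin: str            # "human", "ai" or "human+ai"
    model_id: str | None   # which system generated it, if any
    sponsor: str           # who paid for or published the message
    timestamp: str         # when it was generated

label = MessageProvenance(
    origin="ai",
    model_id="example-model-v1",   # hypothetical identifier
    sponsor="Example Campaign",     # hypothetical sponsor
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(asdict(label))  # metadata travels with the message itself
```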
You’ve also raised the issue of hidden costs, like the energy and water AI systems use and the human labor behind them. Why is that important and why do you think these costs get overlooked?
AI is physical. Training and serving models depend on data centers that draw electricity and water for cooling, and on the often invisible labor of people who label data, moderate content and evaluate model outputs. In my work at GLAD, we plan explicitly for GPUs, memory, processing time and days of compute for each model. That discipline keeps projects efficient and manageable.
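A back-of-envelope sketch of that kind of planning, with every number invented for illustration, might look like this:

```python
# Back-of-envelope compute and energy budget for one training run.
# All numbers are invented; real budgets depend on the model, the
# hardware and the data center.
gpus = 8             # hypothetical GPU count
gpu_power_kw = 0.4   # draw per GPU, in kilowatts
days = 10            # planned training time
pue = 1.3            # data-center overhead factor (cooling etc.)

energy_kwh = gpus * gpu_power_kw * 24 * days * pue
print(f"estimated training energy: {energy_kwh:,.0f} kWh")
```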
It is easy to forget these costs because the cloud abstracts them away and consumer interfaces are smooth. A useful reminder is that models have existed in science for decades, but the new scale and always-on deployment are what drive today’s footprints. Bringing these factors into audits helps institutions choose the right model size and deployment strategy and supports investments in efficiency and fair labor practices.
Beyond the risks, do you see positive ways AI could be used in civic life?
There are many. In practice, AI can turn long public reports into short plain-language notes, translate city services across languages and surface patterns in mobility, water use or public safety so agencies can respond faster.
It can also power open land-use and land-cover maps from satellite imagery, with near real-time alerts of forest loss, illegal clearing, fires and wetland change. In agriculture, it can flag crop stress, forecast yields and help plan irrigation and planting windows so farmers and agencies act sooner.
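A minimal sketch of such an alert, assuming two cloud-free vegetation-index (NDVI) composites of the same pixels and an illustrative threshold, could look like this:

```python
# Minimal sketch of a forest-loss alert: flag pixels whose NDVI drops
# sharply between two dates. Arrays and threshold are synthetic.
import numpy as np

ndvi_before = np.array([0.82, 0.78, 0.80, 0.75])  # dense canopy
ndvi_after  = np.array([0.80, 0.35, 0.79, 0.30])  # two pixels cleared

drop = ndvi_before - ndvi_after
alerts = np.flatnonzero(drop > 0.3)  # threshold chosen for illustration
print("pixels flagged for possible forest loss:", alerts.tolist())
```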
I see AI as an instrument, similar to a microscope or a telescope, that extends human judgment rather than replacing it. With careful design, it can support participatory tools like open data dashboards, multilingual guides to public benefits and early-warning systems for environmental hazards. To realize these benefits while minimizing harms, we need to keep a human in the loop, explain how the system works, use opt-in data and audit regularly for errors and bias.
Image: Andres Hernandez-Serna at Stokksnes (Vestrahorn), Iceland, Nov. 4, 2024. Courtesy of Hernandez-Serna
Published Sept. 29, 2025