5 Questions With Homa Hosseinmardi: New DataX Faculty Thrives in Interdisciplinary Ecosystem

AI safety means recognizing that risks rarely emerge from algorithms alone—they arise from the interaction of humans, platforms, and content.

Homa Hosseinmardi

We speak with Homa Hosseinmardi, Assistant Professor of Data Science (DataX) and Computational Communication, whose interdisciplinary approach to research opens new ways to interrogate, analyze, and interpret data.

Q: You’re the first faculty member hired through UCLA’s DataX Initiative—an appointment that crosses traditional disciplinary lines. What benefits and challenges come with holding a position designed to connect multiple disciplines, and in what ways does UCLA support this kind of research?

A: Spanning disciplines is both rewarding and difficult. You would need to be a superhero to master psychology, political science, social science, health, public policy, law, and computational methods all at once. The reality is that when you cross fields, you may lose some depth in the traditional sense, but not at the cost of impact, at least in my opinion. If anything, you are a connector tackling problems no single discipline can solve alone. Many of my colleagues and I have felt this tension, and we all have our moments of discouragement. What I have learned is that opportunities to pursue this kind of work are more likely at institutions willing to break boundaries and invest in a forward-looking vision. UCLA is one of those places—among the first universities to create conditions where interdisciplinarity is recognized as a strength, not a liability.

Q: As AI becomes more integrated or “natural” in daily life, how do you think about who—or what—controls the flow of information we see? When people talk about “the algorithm,” where do you see the real power lying—in the technology itself, or in the human and platform choices behind it?

A: With the tremendous amount of information in today’s world, we cannot function without curation algorithms. From recommendation systems to conversational chatbots to summary-based interfaces, algorithms are everywhere—helping us navigate information overload by curating what we see, in what order, and with what emphasis. But my research shows that this power is not held by algorithms alone. It emerges from the interaction of three forces: the supply of content that creators produce, the demand shaped by user preferences, and the design choices of platforms and their algorithms. Each of these layers can amplify or constrain what we encounter, with dynamics that evolve quickly and unpredictably.

The danger is twofold: when AI systems feel “natural,” we stop asking who is making these choices and who benefits from them; and we risk oversimplifying by blaming “the algorithm” alone. My work emphasizes the complexity of this space: many moving pieces—human behavior, platform incentives, and technical design—come together to shape the information we see.

Q: How do you define “AI safety” in a way that captures both technical acumen and societal impact?

A: I’ve built a research program that isn’t just about algorithms or just about users. It’s about the safety and integrity of entire online ecosystems. I frame harms not as isolated pieces of content, but as the result of interactions between humans, platforms, and algorithms. I define AI safety as the challenge of ensuring that algorithmically mediated systems—from recommender platforms that suggest content to generative models that produce it—operate in ways that do not create hidden harms when deployed in the messy, unstructured environments of everyday life.

On the technical side, this requires rigorous measurement and auditing: developing methods that can handle noisy, imbalanced, and large-scale data, and that can uncover hidden dynamics across complex sociotechnical systems, in the interplay of human behavior, algorithmic design, and platform incentives. For example, analyzing the recommendations a platform makes to different audiences and identifying those that cannot be explained by their viewing habits alone.
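To make that kind of audit concrete, here is a minimal, illustrative sketch, not Hosseinmardi’s actual method or data: it compares the mix of content recommended to each audience group against a baseline predicted from viewing history alone, and flags groups where the gap is large. The schema, column names, and threshold are all assumptions for illustration.

```python
# Illustrative audit sketch (hypothetical data, not the author's pipeline):
# flag audience groups whose recommendation mix diverges from what their
# viewing history alone would predict.
import numpy as np
import pandas as pd


def recommendation_skew(audit_df: pd.DataFrame, threshold: float = 0.1) -> pd.DataFrame:
    """Assumed columns: 'group', 'watched_category', 'recommended_category'."""
    # Baseline: P(recommended_category | watched_category), pooled over all groups.
    baseline = (
        audit_df.groupby("watched_category")["recommended_category"]
        .value_counts(normalize=True)
        .rename("p_baseline")
    )

    rows = []
    for group, g in audit_df.groupby("group"):
        # Observed recommendation mix for this audience group.
        observed = g["recommended_category"].value_counts(normalize=True)

        # Expected mix if recommendations depended only on what the group watched.
        watch_mix = g["watched_category"].value_counts(normalize=True)
        expected = (
            baseline.reset_index()
            .merge(watch_mix.rename("p_watch"),
                   left_on="watched_category", right_index=True)
            .assign(p=lambda d: d["p_baseline"] * d["p_watch"])
            .groupby("recommended_category")["p"].sum()
        )

        # Total variation distance between observed and expected mixes.
        cats = observed.index.union(expected.index)
        tv = 0.5 * np.abs(
            observed.reindex(cats, fill_value=0) - expected.reindex(cats, fill_value=0)
        ).sum()
        rows.append({"group": group, "tv_distance": tv, "flagged": tv > threshold})

    return pd.DataFrame(rows)
```

In practice an audit of this kind would condition on much richer history and use matched or counterfactual exposure designs, but the underlying comparison, observed recommendations versus a history-only baseline, is the same.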

On the societal side, AI safety means recognizing that risks rarely emerge from algorithms alone—they arise from the interaction of humans, platforms, and content. My work defines AI safety at this intersection: building tools to quantify risks across entire information ecosystems, aiming for causal answers that guide effective interventions, and translating those insights into governance, policy, and design choices that make technology safer and more aligned with human values. In other words, advocating for human-centered AI.

Q: How might fields like health care, psychology, or law use your data or build on your findings? Do you think about these possible collaborations before you begin the research?

A: Yes, I think about interdisciplinary impact from the very beginning. The kinds of data I work with—large-scale traces of online behavior, recommendation pathways, algorithmic audits—don’t belong to a single field. Health researchers can use them to understand how vulnerable patients are exposed to misinformation or predatory health claims. Psychologists can study how repeated exposure to curated images of success and luxury affects youth mental health, self-perception, or addiction. Legal scholars can use the same evidence to evaluate accountability, liability, or gaps in current regulation. And communication and political scientists can analyze how these ecosystems shape news diets, polarization, and democratic processes. 

Q: What is it that you most want to impart to your students? What’s motivating them and you? 

A: My lab’s research, which I find deeply satisfying and impactful, is rooted in tackling myths, anecdotes, and hard-to-observe behaviors—often the very ones that stir controversy. The difficulty lies not only in the topic itself but also in engineering the whole system around it: choosing the right data, methods, and frameworks that span software engineering, politics, and sociology, often without cleanly belonging to any one of them. We don’t build fundamental machine-learning methods, yet none of the off-the-shelf ones work, so we tailor our approaches to fit the problem. Nor do we follow a single theory of social science; instead, we let the data guide us.

Breaking these traditional boundaries can lead some scholars to question the scientific value or depth of such studies. But this is exactly where the opportunity lies. By pushing across disciplines, we can ask questions that no single field could answer on its own—that’s what makes the work both demanding and deeply rewarding. My mission is to prepare the next generation of proud graduates who, rather than feeling pressured to fit into a single box, have the confidence to break it open, pursue their own ambitious ways of thinking, and become the ones who build the connections needed to find answers across disciplines.

I’m excited about the winter launch of my new center for cybersafety with DataX. The name is still being finalized, but it marks the next chapter for our interdisciplinary work in AI systems integrity and safety.