Hi! I'm a philosophy undergrad and a Departmental Collaboration Research Fellow at the University of Barcelona, a Board Member of the Centre for Animal Ethics at Pompeu Fabra University, Editor-in-Chief of Animal Ethics Review, and a Digital Sentience Consortium Fellow.
My research focuses on foundational questions in well-being, philosophy of mind, and political philosophy, and explores how these intersect in the context of frontier AI systems and non-human animals.
I'm interested in figuring out how we ought to consider the interests of all welfare subjects in our decision-making, particularly when determining how to conduct safe AI development.
Feel free to contact me at adriarodriguezmoret@gmail.com.
I am also on PhilPeople, Google Scholar, Twitter, and Bluesky. And here is my CV.
Publications
AI Welfare Risks (2025)
Philosophical Studies
I argue that advanced near-future AI systems have a non-negligible chance of being welfare subjects under major theories of well-being. I further contend that AI safety and development efforts pose two AI welfare risks, and propose how leading AI companies could reduce them.
AI Alignment: The Case for Including Animals (2025)
Philosophy & Technology
With Peter Singer, Yip Fai Tse and Soenke Ziesche
We argue that frontier AI systems may harm animals and should be aligned with basic concern for animal welfare. We also propose low-cost measures that AI companies and policymakers can adopt to ensure such protection.
An Inclusive Account of the Permissibility of Sex (2024)
Social Theory & Practice
I develop a theory of the permissibility of sex acts that explains our beliefs about the status of sex acts involving non-human animals, children, and humans with intellectual disabilities, without resorting to unjustified discrimination such as speciesism.
Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning (2023)
Journal of Artificial Intelligence and Consciousness
I argue that, all else being equal, we have strong moral reasons to align future AI systems with the interests of all sentient beings, rather than solely with human interests or preferences.
Papers under review and in progress
A paper on welfare components and welfare subjecthood defending welfare sentientism.
A paper on AI alignment and public justification (with Eze Paez and Pablo Magaña).
A paper on the potential personal identity and psychological connectedness of LLMs (with José Curbera-Luis).
A paper documenting the first clinical case of AI psychosis (in collaboration with psychiatrists and psychologists).
Recorded Talks
AI Welfare Risks (philosophy-focused, 45 min), at Pompeu Fabra University's Law & Philosophy Colloquium.
Including Non-human Welfare in AI Alignment (6 min), at AIADM NYC 2025.
AI Welfare Risks (policy-focused, 30 min), at AIADM London 2025.
Including Animal and AI Welfare in AI Alignment (20 min), at Rethink Priorities' Strategic Animal Webinars.
Other Projects
Contributed to efforts that achieved the inclusion of a "non-human welfare" clause in the EU's General-Purpose AI Code of Practice.
Contributed to the development of AnimalHarmBench 2.0: a benchmark evaluating LLM reasoning on animal welfare.
Contributed to the development of When AI Seems Conscious: Here's What to Know, an online practical guide addressing the risks that seemingly conscious AI poses to users.
Contributing to the Digital Consciousness Model, a quantitative model to estimate probabilities of consciousness in AI systems.
Contact: adriarodriguezmoret@gmail.com