AI’s opportunities and challenges: Insights from 2024 Hypatia Prize Winner, Nuria Oliver#

In this interview, the 2024 winner of the Hypatia European Science Prize, Professor Nuria Oliver, shares her reaction to receiving the award and discusses the opportunities and challenges of Artificial Intelligence.

Professor Nuria Oliver

About Nuria Oliver MAE#

Dr Nuria Oliver is a distinguished computer scientist, recognised internationally for her significant contributions to Artificial Intelligence and its societal applications. She is Scientific Director and one of the founders of the ELLIS Alicante Foundation. Prof. Oliver was elected a member of the Informatics section of Academia Europaea in 2018.

Throughout her career, Dr Oliver has made notable advancements in computational models of human behaviour, human-computer interaction, mobile computing, and big data for social good. Her work has earned her several prestigious fellowships, including recognition as a Fellow of the European Association for Artificial Intelligence. In 2018 she was elected to both Academia Europaea and the Spanish Royal Academy of Engineering, as their youngest female member.

In 2024, Dr Oliver received the Hypatia European Science Prize from Barcelona City Council, in collaboration with the Academia Europaea Barcelona Knowledge Hub. The prize is awarded annually to raise the visibility of science and to promote, reinforce, and maximise the value of the pursuit of excellence and its impact on society within Europe.

The Hypatia Prize acknowledges Dr Oliver’s outstanding achievements in Artificial Intelligence and her commitment to leveraging technology for positive societal impact. Her research on modelling human behaviour and her efforts to make technology more accessible highlight her dedication to advancing the field for the betterment of society. She has recently joined an international scientific expert advisory panel on advanced AI safety.

Dr Oliver is passionate about inspiring young individuals, particularly girls, to pursue careers in STEM fields. She actively engages in outreach initiatives aimed at making technology more inclusive and accessible to all.

The interview#

Many congratulations on winning the prestigious Hypatia European Science Prize. Could you share with us what receiving this award means to you?

Thank you very much. Receiving this award was a huge surprise and an immense honour. This prestigious award means a lot to me for several reasons.

First, because it highlights the importance of investing in scientific research focused on having a positive societal impact, which has been a driving force throughout my career.

Second, because it serves as an acknowledgement of the continuous support and encouragement that I have received from my family, colleagues, professors, mentors and collaborators. Without their guidance, expertise and inspiration, none of this would have been possible.

Finally, the name of the award is very meaningful to me. Hypatia was a prominent female philosopher, astronomer and mathematician who lived in Alexandria around 1,600 years ago. I therefore hope that this award, which bears her name, will help amplify the message about the need for inclusivity and diversity in the technology sector in general, and in Artificial Intelligence in particular.


You’ve spoken previously about your belief in the power of technology to improve our quality of life, both individually and collectively. In your opinion, what are the most promising applications of AI in addressing societal challenges, such as healthcare, education, or environmental sustainability?

Yes, I am convinced that we need AI to address most of the immense challenges that we face in the 21st century, from the energy crisis to climate change and pandemics. Artificial Intelligence won’t be the solution, but it will undoubtedly be part of the solution.

Regarding climate change, AI methods based on machine learning – and especially based on deep neural networks – allow us to model climate and weather, identify patterns and make accurate predictions of changes in global temperature by analysing large amounts of multidimensional weather and climate data. In addition to being used to build more accurate climate predictions and models, AI methods can also be applied to improve next-generation weather modelling systems by enabling, for example, the detection and separation of noise in climate observations or the automatic labelling of climate data.

Extreme weather events – such as hurricanes, intense storms and floods – are increasing in frequency and intensity due to climate change. AI has also proven to be a valuable ally in predicting these extreme weather events and their impact, and in enabling a more efficient and faster response to natural disasters. Autonomous drones (guided by AI) can be used to prevent fires, or to search for survivors in floods and earthquakes. In this area, the Artificial Intelligence for Digital Response (AIDR) project at QCRI in Qatar provides a free online tool that analyses social media messages related to emergencies, humanitarian crises and disasters. It uses AI techniques to automatically tag thousands of messages per minute, acting as an early warning system.

Beyond the direct application of Artificial Intelligence techniques to model and predict the climate, AI methods can be applied to industries or sectors that have a negative environmental impact to enable the reduction of greenhouse gas (GHG) emissions. According to a report commissioned by Microsoft from PwC, the use of AI in environmental use cases could contribute up to $5.2 trillion to the global economy by 2030 and reduce greenhouse gas emissions by 4%, which is equivalent to the estimated 2030 annual emissions of Japan, Canada and Australia combined.

We also rely on AI techniques to achieve more efficient renewable energies (such as solar and wind) thanks to the prediction of both the weather and energy demand. And let’s not forget that it is impossible to have a smart energy network (smart grid) without the help of AI.

With respect to healthcare, Artificial Intelligence has the potential to bring immense benefits to medicine in different areas, from accelerating the discovery of drugs and treatments to improving diagnostic accuracy, personalising treatments or optimising medical data management.

AI techniques make it possible to perform virtual screening of compounds, accelerating the identification of promising candidates and reducing the need for costly and laborious experiments. In addition, AI algorithms can model molecular interactions and predict the efficacy and safety of new potential drugs, as well as predict the synthesis of chemical compounds, which helps optimise the production of new drugs and reduces associated times and costs.

Thanks to AI, we can improve the design and planning of clinical trials by identifying more effective inclusion/exclusion criteria, predicting treatment response, and optimising patient recruitment.

Through the analysis of clinical and genomic data with AI techniques, we can discover patterns, biomarkers and correlations related to treatment effectiveness and patient response; we can personalise and adapt treatments to the characteristics of each patient; and we can identify existing medications that can be repurposed to treat other diseases.

AI algorithms can analyse large sets of medical data, such as magnetic resonance imaging (MRI), CT scans, X-rays, and laboratory tests, to provide faster and more accurate diagnoses. This can help healthcare professionals identify diseases at an early stage and improve treatment success rates. There are numerous examples of AI algorithms that support the diagnosis of different types of cancer, for example.

Artificial Intelligence is also a key tool for more efficient management and analysis of large amounts of medical data, helping professionals make informed decisions and allowing the identification and reduction of possible medical errors. For example, AI techniques enable the identification of patterns in electronic medical records, the management of medical records, and the prediction of epidemiological trends. In the context of public health, the Data Science against COVID-19 working group, which I led for more than two years, is an example of the application of Artificial Intelligence techniques to help combat a pandemic.

Artificial Intelligence can also assist surgeons. For example, in robotic surgeries, AI can improve precision and enable more delicate movements.

AI can improve telemedicine by providing remote diagnostic tools and medical advice based on data collected by mobile phones, sensors, wearables and connected devices. Likewise, this data, analysed with AI techniques, can allow the detection of signs of clinical deterioration or changes in a patient’s condition, enabling early intervention and risk reduction. Robotic pets and other types of social robots are playing an increasingly important role in combating loneliness in older people and continuously monitoring their physiological signals and activity patterns.

Obviously, as in other sectors, AI can be used to automate administrative tasks, such as billing and scheduling, freeing up time for medical professionals to focus more on direct patient care. There are also examples of the use of chatbots to facilitate administrative procedures, resolve doubts or even provide answers to simple medical questions.

However, Artificial Intelligence systems are not perfect and pose technical, ethical and social challenges that must be addressed to realise their immense potential in all areas and especially in the field of medicine. We should demand that AI systems applied to medicine comply with the FATEN principles, described below.

The opportunities that AI research offers us to have positive social impact are almost limitless. It is precisely this social aspect of AI that motivates and has always motivated our work. It is the focus of ELLIS Alicante.



As a leader in AI research, what ethical considerations do you believe are most crucial in the development and deployment of AI technologies?

I like summarising the ethical considerations that we should demand from any AI system with an acronym: FATEN.

F is for fairness or justice: Algorithmic decisions based on data can discriminate, because the data used to train the algorithms may contain biases that lead to discriminatory decisions, because of the choice of a particular algorithm, or because models are misused in contexts they were not designed for. We should always demand algorithms that offer guarantees of non-discrimination.

A is for autonomy, a central value in Western ethics, according to which each person should have the capacity to decide their own thoughts and actions, therefore ensuring free choice, freedom of thought, and action. However, nowadays we can build – and I have built – computational models of our desires, needs, personality, and behaviour with the ability to influence our decisions and actions subliminally, as has become evident in recent electoral processes in the US and the UK. We should ensure that intelligent systems make decisions while always preserving human autonomy and dignity.

A is also for accountability or attribution of responsibility, that is, being clear about the attribution of responsibility for the consequences of algorithmic decisions.

And for augmentation of human intelligence, so that Artificial Intelligence systems are used to enhance or complement human intelligence, not to replace it.

T is for trust, which is a basic pillar in the relationship between humans and institutions.

T is also for transparency, to understand the reasons behind the decisions or the behaviour of the very complex neural networks that are at the core of most AI systems today. Likewise, it is essential for Artificial Intelligence systems to be transparent not only regarding what data they capture and analyse about human behaviour and for what purposes, but also regarding the situations in which humans are interacting with artificial systems (for example, chatbots) versus with other humans.

E stands for education, meaning investing in education at all levels, starting with compulsory education, but also education for citizens, professionals – especially those whose professions are being transformed by technology – public sector workers, and our political representatives.

E is also for the principle of beneficence, i.e., maximising the positive impact of using Artificial Intelligence, with sustainability, diversity, honesty, and truthfulness.

Not all technological development entails progress. What we should aspire to and invest in is progress. From my point of view, progress involves an improvement in the quality of life of people (all people), other living beings, and our planet.

N stands for non-maleficence, minimising the negative impact that may arise from the use of AI in our societies, applying a principle of prudence, guaranteeing the security, reliability, and reproducibility of the AI systems, and always preserving people’s privacy.

Only when we respect these FATEN requirements will we be able to advance towards a socially sustainable Artificial Intelligence.



You’ve shown dedication to making technology more accessible to non-technical audiences, and inspiring young people (especially girls) to pursue careers in technology. In your experience, how can we enhance diversity and inclusion in the field of AI, both in terms of research and industry?

Enhancing diversity and inclusion in the field of AI is crucial for fostering innovation, reducing bias, and ensuring that AI benefits everyone. Artificial Intelligence is widely used in our society, yet it is developed by homogeneous groups that lack gender and other types of diversity. It is estimated that less than 20% of AI experts in the world are women. This lack of diversity is certainly negative, not only for the field of AI but for society at large, given the ubiquity of AI in our lives, our businesses, our governments, and our societies.

There are a variety of strategies that can help achieve this goal, starting with education and outreach programmes to inspire underrepresented groups, including women and minorities, to study AI and related fields at an early age. Examples include workshops, coding camps, and mentorship programmes. Community engagement can also be valuable to understand the needs, concerns, and perspectives of diverse communities regarding AI and thus foster a more inclusive development of AI systems.

From a workplace perspective, there are several key actions to take. First, we should implement diverse hiring practices, actively seeking out diverse candidates for AI research and industry positions. Furthermore, the workplace culture should be inclusive, so that everyone, and particularly women and members of minorities, feels valued and supported. Diversity training, employee resource groups, and policies that promote work-life balance and accommodate diverse needs are relevant actions on this front. Finally, we also need to ensure diversity in leadership positions within AI organisations and research institutions. Having diverse voices at the decision-making table can lead to more inclusive policies and practices.

From an algorithmic perspective, we need to address biases in AI systems so that their use leads to fair and equitable outcomes for all populations. At ELLIS Alicante, we have a research area devoted to algorithmic fairness.

These strategies could help make the field of AI more inclusive, diverse, and reflective of the broader society it serves, leading to more innovative and equitable outcomes.

The interview was posted on 23 February 2024 and was conducted by the Academia Europaea Cardiff Knowledge Hub.
For further information please contact AECardiffHub@cardiff.ac.uk.