!!Artificial Intelligence in Emergency and Crisis Management
__Date:__ Wednesday, 25th February, 15:00-16:00 CET (14:00-15:00 UK/Ireland)
\\ \\
[{Image src='SAM.jpg' caption='' height='300' alt='SAM' align='center'}]
\\ \\
__As artificial intelligence (AI) continues to grow rapidly, many professionals are exploring whether its capabilities could be an asset in managing emergencies and crises, such as extreme weather events and pandemics. At the same time, experts warn of the potential challenges of using AI in such complex, high-pressure environments. How do we ensure AI is used responsibly and safely, safeguard our fundamental liberties, trust it in high-stakes situations, and, ultimately, build confidence in this transformative technology? Our distinguished panel of speakers, Professors [Tina Comes|Member/Comes_Tina] MAE and Andrej Zwitter, Dr Olya Kudina and Captain Quentin Brot, will draw on the recent work of the Scientific Advice Mechanism, as well as their own academic and professional expertise and experience, to discuss these issues.__
\\ \\
__The webinar is free, but [registration|https://us02web.zoom.us/webinar/register/WN_GCtnGVXyT5yGHPreuWnhmA?utm_source=mailchimp&utm_medium=newsletter&utm_campaign=202601#/registration] is necessary.__
\\ \\
!About the Webinar

Artificial intelligence can significantly enhance emergency and crisis management across Europe through applications such as early warning systems, damage assessment, and analytical support. However, it requires careful ethical oversight, human control, standardised data frameworks, and recognition of its limitations in novel or morally complex situations.
\\ \\
Artificial intelligence offers significant potential to enhance emergency and crisis management in certain situations. AI can be understood as an ‘umbrella term’ covering a diverse set of technologies, methodologies and applications, including machine learning, computer vision, and natural language processing. AI can support situational awareness, forecasting, damage assessment, and decision-making throughout the disaster risk management cycle of prevention, preparedness, response and recovery. But there is as yet no comprehensive, system-level assessment of the risks of using AI for crisis management in the EU context. Moreover, the acute crisis phase (response) poses additional challenges to the deployment of AI tools, both ethical (life-and-death decisions must be made with limited information and under extreme time pressure) and infrastructural (AI tools’ performance depends on network resilience, and, in the specific European case, AI must reflect the reality of cross-border operations and multi-national crises across Europe).
\\ \\
Evidence suggests that AI performs best on standardised, data-intensive tasks that are typical of frequent disasters such as floods, wildfires and droughts. AI is good at repetitive tasks that may be tiring for humans, such as continuous environmental monitoring, which is important for early warning systems. It can also be effective in damage assessment and social media processing, performing at scales and speeds that are impossible for human analysts. It is less well suited to interpreting highly heterogeneous contexts, or to new situations where it lacks appropriate training data. Moreover, morally challenging decisions and trade-offs should not be referred to an AI tool.
\\ \\
AI tools must uphold human dignity, transparency and responsibility, whilst meeting European standards for safety and ethics. Careful monitoring is required to ensure compliance with legal frameworks, avoid algorithmic biases and maintain meaningful human control, where people are ultimately in charge and are thus responsible for AI and any decisions made with it.
\\ \\
Other policy options include developing benchmarks, practical guidelines, codes of conduct and sandbox environments for AI in crisis management, which would allow the testing of AI under supervision and with ethical oversight, prior to full deployment.
\\ \\
The Group of Chief Scientific Advisors recommends an assessment of the risks associated with the use of AI tools in crisis management, covering areas such as cybersecurity, AI resilience, and the EU’s strategic autonomy in this area. The assessment should be comparative in nature, evaluating the added value of AI tools for different uses against other options. The Advisors also call for an inventory of AI tools, particularly those already in use.
\\ \\
!About the Scientific Advice Mechanism

The [Scientific Advice Mechanism|https://scientificadvice.eu/about-us/scientific-advice-mechanism/who-we-are] provides independent scientific evidence and policy recommendations to the European institutions at the request of the College of Commissioners.