Executive summary
2024 has been a year of significant growth and consolidation for Observatorio de Riesgos Catastróficos Globales (ORCG), marked by impactful advancements across our four core workstreams. We've expanded our research initiatives and strengthened international collaborations, solidifying our role as a leading voice in understanding and mitigating catastrophic risks.
From the outset, ORCG has championed proactive and collaborative approaches to navigating the complex challenges of potential global catastrophes. This commitment fueled a diverse range of activities in 2024, with highlights including:
Artificial Intelligence: We continued our work on the implementation of the EU AI Act, including participating in the drafting of the GPAI Code of Practice, supported by resources from Longview Philanthropy. We also formulated the AI Risk Explorer (AIRE) project, securing funding from Open Philanthropy. Finally, we published the report “Compute for Good: The Potential of European Supercomputers for AI Research and Development”.
Pandemic Preparedness: We led a regional mapping effort to assess biosafety and biosecurity needs in Latin America. This resulted in research notes and a report that will be included in the Global Research Agenda for Evidence-Based Biosafety, helping to redirect efforts and resources for effective pandemic management in Latin America.
Resilience to Global Catastrophic Food Failure: We produced a well-received policy brief, "Agricultural Resilience in Latin America: Facing Abrupt Sunlight Reduction Scenarios (ASRS)." We are actively working to integrate its recommendations into the plans of various South American governments.
Improvement in Risk Management Systems: We published a new edition of our flagship report, Latin America and Global Catastrophic Risks, featuring a prologue by former Mexican President Ernesto Zedillo, a member of The Elders. This generated significant media attention for ORCG. We are also developing a handbook for implementing the OECD's framework on emerging critical risks.
These endeavors have not only deepened our understanding of global risks but also fostered crucial partnerships with organizations and experts worldwide. We've collaborated with leading institutions and initiatives such as ALLFED, Global Shield, CSER, GCRI, CLTR, and AMexBio. Furthermore, by participating in several working groups of the EU GPAI Code of Practice, we've engaged with civil society organizations in the field of AI safety and governance. Our work has also involved stakeholder engagement with the OECD, EuroHPC, OEI, AESIA, IMSS, UNAM, and UANL. We are working with some of these institutions to define collaborative projects across our workstreams, starting in 2025. We invite you to stay tuned for our updates next year.
In this recap, we'll delve into the key activities and accomplishments that have shaped ORCG's 2024 journey. We'll explore the impact of our research, the progress made in our key programs, and the valuable connections we've forged. Join us as we review our milestones—especially our key products—that have defined this chapter in our ongoing commitment to safeguarding humanity's future.
Looking ahead, securing comprehensive funding remains a critical challenge. While our AI initiatives are funded through 2025, other vital areas face resource limitations that could jeopardize our continued progress. We invite you to partner with us by donating to help sustain our work across all areas of our mission.
Products
Reports
New Prologue for the Report "Latin America and Global Catastrophic Risks: Transforming Risk Management", Written by Elders Member and Former Mexican President Ernesto Zedillo: This updated report, with a prologue by a respected global leader, provides a comprehensive analysis of global catastrophic risks affecting Latin America and proposes strategies for enhanced risk management.
“Compute for Good: The Potential of European Supercomputers for AI Research and Development”: This report provides a technical analysis of European public compute and recommends dedicating it to broadly beneficial AI applications, AI safety research, model evaluations, and piloting a future ‘CERN for AI’.
“Mapping Regional Needs in Biosafety, Biosecurity, and Public Health”: This project mapped biosafety, biosecurity, and public health needs in Latin America. Surveys of professionals in nine countries revealed the top priorities: training programs and stronger regulatory frameworks. Key challenges include a lack of funding, government support, and ongoing training. Recommendations focus on competency-based training, international collaboration, and increased investment in biosafety and biosecurity infrastructure.
ORCG in the press
How to avoid the next global catastrophe: An opinion article written for the Planeta Futuro section of the major Spanish newspaper El País, which highlights that governments and multilateral organizations must prepare for possible future pandemics and monitor potential AI risks, among other GCRs.
'Leaders should treat global catastrophic risks as a priority': Here's what countries would need to do to avoid the next global disaster: An article based on an interview with the major Colombian newspaper El Tiempo, in which our director Juan Garcia discusses potential upcoming crises (such as those from AI, pandemics, and nuclear weapons) and the importance of being prepared for them.
Latin America must prioritize global catastrophic risks, report warns: A review prepared by journalists from the newspaper El Heraldo, based on our report “Latin America and Global Catastrophic Risks” and its new prologue.
Experts urge not to minimize global catastrophic risks: A note about a global catastrophic risks colloquium at UNAM, a leading Latin American university, in which our director and several team members participated.
Academic papers
“The EU AI Act: A pioneering effort to regulate frontier AI?”: This paper, published in the IberamIA journal, examines the EU AI Act, the first attempt to regulate frontier models. It concludes that the Parliament's draft was a good step toward adequately addressing the risks posed by these models, though some of its provisions were insufficiently defined and others were lacking. The final version of the Act improved on many of the aspects outlined.
"Systematic Review of Risk Taxonomies Associated with Artificial Intelligence": This article systematically reviewed 36 studies on AI risks, resulting in a taxonomy of threats and risk vectors. Our research found a need to consider emerging risks, bridge gaps between present and future harms, and further explore the potential pathways to an AI catastrophe
"Resilient Food Solutions to Avoid Mass Starvation During a Nuclear Winter in Argentina": This research explores potential food sources and production methods that could help ensure food security in Argentina during a nuclear winter scenario, describing how timely food system adaptation could flip a situation of national famine to a situation in which the country could not only feed itself but also continue to make significant exports to neighbors.
“The Securitization of Artificial Intelligence: An Analysis of its Drivers and Consequences”: Submitted to the Revista de Estudios Sociales of Universidad de los Andes, this article analyzes how the US is framing AI as a national security issue, revealing tensions between politicization and securitization, national and global security concerns, and threat-based versus risk-based approaches. It argues that effective AI governance requires balancing national interests with global security, favoring a risk-based approach that acknowledges uncertainties and promotes multilateral solutions rather than focusing solely on threats and nationalistic competition.
“Training and Education: What are the essential elements necessary for biosafety and biosecurity training programs for researchers and professionals?”: Currently in submission. To address the needs and challenges identified by Latin American researchers and professionals, effective biosafety and biosecurity training programs must prioritize formalized competency-based training, continuous professional development, accessibility, practical application, and intersectoral collaboration. By incorporating these elements, training programs can empower individuals to mitigate biological risks, foster responsible conduct, and strengthen regional biosecurity and public health.
Policy brief
Policy Recommendation: "Agricultural Resilience in Latin America: Facing Abrupt Sunlight Reduction Scenarios": This policy brief outlines strategies for enhancing agricultural resilience in Latin America to mitigate the impacts of abrupt sunlight reduction events, describing how timely food system adaptation could avert famine for hundreds of millions in these scenarios.
Research notes
Retrospective of our contribution to the European Union AI Act: This analysis details ORCG's involvement in shaping the EU's landmark legislation on AI.
Addressing the Challenges of Advanced AI: Review of Managing Extreme AI Risks Amid Rapid Progress: This review examines key strategies for mitigating the risks posed by increasingly powerful AI systems, including AI oversight, interpretability, evaluations of dangerous capabilities, and societal resilience interventions.
Infographics: European Union Artificial Intelligence Act: These infographics provide a clear and accessible overview of the EU's AI Act, focusing on provisions regarding general-purpose AI models.
Progress in the Regulation of Artificial Intelligence in Latin America: This analysis provides an update on AI governance developments in Latin America, highlighting key trends (a risk-based approach and international influence) and challenges (implementing the risk-based approach and building strong institutions).
"Research Priorities According to the World Health Organization (WHO) for Intelligence on Pandemics, Epidemics, and Bacterial Pathogens": This report identifies key research areas that can inform more effective pandemic preparedness and response strategies, such as epidemiological and pathogen research.
"Exploration of the New North American Trade Agreement: Focus on Preparing for Future Pandemics in Mexico": This article examines the trade agreement's implications for pandemic preparedness in Mexico, offering valuable insights for regional cooperation. This includes facilitating information sharing, joint research and development, and harmonization of regulations.
"The Urgency of Improving Pandemic Surveillance and Response": This concise analysis highlights the need for strengthened global systems for detecting and responding to emerging infectious disease threats. In conclusion, pandemic preparedness requires a comprehensive strategy that combines advanced technologies, strengthened public health infrastructure, and international collaboration.
“On the global importance of Latin America in AI governance”: This article argues Latin America could gain global importance in AI governance since it might (1) attract compute providers, (2) leverage its strong diplomatic tradition, (3) experience the emergence of societal-scale risks, (4) offer a strong pool of technical talent, and (5) enact impactful regulation. Each element opens up several possible implications and interventions to increase Latin America’s contribution to safe and beneficial AI.
Presence at events
Presentation: “Global Catastrophic Risk Governance in Latin America: The Case of ORCG”
Workshop: “Tabletop Exercise: Food security in Argentina in an abrupt sunlight reduction scenario (ASRS)”
Summit on Existential Security
Lightning talk: “5 Lessons from the EU AI Act”
Made in Europe: Generating the ecosystem of trustworthy AI innovation (Future of Life Institute)
Panel: The Role of the Global South in Global Catastrophic Risk
“Biosafety Beyond Borders: Unifying Biosafety and Biosecurity” (SBA-AMEXBIO)
Webinar “Innovation Beyond Frontiers” of the Chemistry Institute of UNAM: “Retrospective analysis of the response to COVID-19 and preparedness for future emergencies”
Cambridge Conference on Global Catastrophic Risk 2024
Presentation: “Latin America's role in Global Catastrophic Risk management: insights, opportunities, and challenges”
Poster: “A Framework for a Forecasting-Based AI Risk Assessment”
European Digital Policy: where are we and where are we going? (CIDOB-Santander Future Leaders Forum)
“Global Research Agenda for Evidence-Based Biosafety”, Buenos Aires
Join us in building a safer and more resilient future. Your contribution to ORCG will help us address the full spectrum of global catastrophic risks and ensure that humanity is prepared for the challenges that lie ahead.
Donate to ORCG today and help us protect humanity's future. Read here about our current plans.
In addition to financial contributions, you can support ORCG by:
Spreading the word: Share this Recap and our website (https://orcg.info) with your networks.
Sharing your expertise: If you are an expert interested in our work, you can contribute by sending suggestions, providing feedback, and sharing academic and funding opportunities with us (info@orcg.info).
Connecting us with potential partners: Introduce us to individuals or organizations who can support our work.
Together, we can build a safer future for all.