March 27-29, 2023


Hyatt Regency San Francisco Airport, Burlingame, CA

AAAI-EDGeS 2023 is part of the AAAI Spring Symposium Series

A Symposium on Challenges and Methods for Assessing the Next Generation of AI

The field of Artificial Intelligence is generally described in three waves. The first wave comprised hand-crafted knowledge for reasoning, with limitations in perception and learning. The second wave has been propelled by significant advances in data-driven methods using machine learning, primarily deep neural networks, with limitations in reasoning. The third wave of AI combines reasoning with machine learning. With the advent of large domain-universal models (sometimes called ‘foundation models’), we are witnessing a trend towards AI systems that are no longer specific to particular tasks. We are also seeing a resurgence of reasoning and symbolic systems in AI to assist in common sense reasoning, explainable AI, and learning with limited training data. Assessing the next generation of AI will require novel tools, methods, and benchmarks that address both reasoning and generalist systems, individually and in combination.

Generalist systems can encompass language models, multimodal models, and various other developments, such as life-long learning systems. This new generation of increasingly general AI systems can be applied with little or no modification over a wide range of tasks and applications, in many cases without having been explicitly designed for them. Whereas current AI systems are typically evaluated against a small set of narrow, task-specific benchmarks, this next generation of AI systems may fall short of reaching or surpassing the state of the art in any one domain, yet excel in practical applications due to their generality, or due to factors that only become visible in the aggregate of many measures.

Modern reasoning systems can encompass neuro-symbolic reasoning, common sense reasoning, and statistical relational AI (StarAI). These are often combined with data-driven or statistical approaches. Whereas numerous metrics exist for the evaluation of specialized machine learning algorithms, the space of quantifiable metrics for modern reasoning AI systems is limited.

Assessment and benchmarking of these general AI systems will require novel approaches that allow comparison of their performance, identify areas in need of research, and shape the direction of progress. Comparing general AI systems may require mapping their performance and properties into a high-dimensional space of capabilities and comparing the regions they occupy in that space. Assessing modern reasoning systems may involve comparing the effects of different symbolic representations on the overall performance of the system, as well as identifying and understanding vulnerabilities stemming from symbolic design choices.

Goals of the Symposium

  • Foster discussion and development of methodologies for understanding and assessing a new generation of AI systems, especially in the domains of reasoning and generativity

  • Enable new design and training protocols for the third wave of AI systems, with the view towards achieving generalist performance in reasoning and generative tasks

  • Build and support our community with better benchmarks and assessments

  • Provide a platform for collaborations on novel AI architectures built around innovative assessment protocols


In the interest of fostering discussion of methodologies for understanding and assessing AI in the domains of reasoning and generativity, we are accepting submissions on topics including, but not limited to:

  • Novel training protocols for achieving generalist performance in reasoning and generative tasks

  • Limitations of current approaches in AI/Machine learning

  • Novel methodologies to assess progress on increasingly general AI

  • Methods and tools for identification of vulnerabilities in modern reasoning systems

  • Quantifiable approaches to assessing ethical robustness of generalist AI systems

  • Relevant architectures involving neuro-symbolics, neural network-based foundation models, generative AI, common sense reasoning, and statistical relational AI

  • Architectures for systems capable of reasoned self-verification and ethical robustness

  • Descriptions of novel systems that combine reasoning with generativity


The Symposium will consist of a set of short presentations, grouped into thematic sessions and interleaved with discussion, over the course of 2.5 days.

Important Dates

  • Extended abstract submission (1-2 pages): 16th of January 2023 - deadline extended to 23rd of January 2023

  • Notification: 13th of February 2023

  • Invited Participant registration deadline: 17th of February 2023

  • Camera-ready submission due: 3rd of March 2023

  • Registration deadline: 4th of March 2023

  • Symposium: 27-29th of March 2023


You are invited to submit:

  • Full technical papers (recommended length 6-8 pages, excluding bibliography)

  • Technical presentation proposals (extended abstract of up to two pages, including a short biography of the main speaker)

  • Position papers (recommended length 4-6 pages, excluding bibliography)

Manuscripts must be submitted as PDF files via the EasyChair online submission system.

Please keep your paper format in line with the AAAI Formatting Instructions (two-column format). The AAAI author kit can be downloaded from:

Papers will be peer-reviewed by the Organizing Committee (2-3 reviewers per paper).

EasyChair submission link:

At least one author of each accepted submission must register for and present their contribution at the Symposium.

Please send any questions about submissions to

Organizing Committee

  • Joscha Bach, Principal Research Scientist at Intel Labs, Committee Co-Chair

  • Amanda Hicks, Senior Ontologist, Johns Hopkins University Applied Physics Lab, Committee Co-Chair

  • Tetiana Grinberg, AI Research Scientist, Intel Labs

  • John Beverley, Assistant Professor, University at Buffalo

  • Steven Rogers, Senior Scientist, Air Force Research Laboratory

  • Grant Passmore, Co-Founder and Co-CEO, Imandra

  • Ramin Hasani, Principal AI and Machine Learning Scientist at the Vanguard Group and Research Affiliate at MIT CSAIL

  • Casey Richardson, Principal Machine Learning Engineer, S&P Global

  • Richard Granger, Professor, Dartmouth College

  • Jascha Achterberg, Computational Neuroscientist, University of Cambridge

  • Kristinn R. Thórisson, Full Research Professor at Reykjavik University, Founder and Managing Director of IIIM

  • Luc Steels, Professor Emeritus, Barcelona Supercomputing Center

  • Yulia Sandamirskaya, Neuromorphic Computing lead, Intel Labs

For any inquiries, please contact