<div dir="ltr"><p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:12pt"><font color="#000000"><span style="background-color:transparent;font-family:Arial;white-space:pre-wrap">Dear Colleagues, </span><br></font></p><p dir="ltr" style="line-height:1.656;text-align:justify;margin-top:0pt;margin-bottom:0pt"><font color="#000000"><span style="font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">We are pleased to announce that AAAI-23 will have a new special track on Safe and Robust AI, covering research on creating safe and robust AI systems, as well as using AI to create other safe and robust systems. </span><span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-family:Arial;vertical-align:baseline;white-space:pre-wrap">We invite you to submit your contributions to </span><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-family:Arial;vertical-align:baseline;white-space:pre-wrap">this special track at AAAI-23.</span></font></p><font color="#000000"><br></font><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><font color="#000000">Aims and Scope</font></span></p><p dir="ltr" style="line-height:1.9872;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><font color="#000000">This special track focuses on the theory and practice of safety and robustness in AI-based systems. AI systems are increasingly being deployed throughout society within different domains such as data science, robotics and autonomous systems, medicine, economy, and safety-critical systems. Although the widespread use of AI systems in today's world is growing, they have fundamental limitations and practical shortcomings, which can result in catastrophic failures. Specifically, many of the AI algorithms that are being implemented nowadays fail to guarantee safety and success and lack robustness in the face of uncertainties. </font></span></p><p dir="ltr" style="line-height:1.9872;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><font color="#000000"><br></font></span></p><p dir="ltr" style="line-height:1.9872;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><font color="#000000">To ensure that AI systems are reliable, they need to be robust to disturbance, failure, and novel circumstances. Furthermore, this technology needs to offer assurance that it will reasonably avoid unsafe and irrecoverable situations. In order to push the boundaries of AI systems' reliability, this special track at AAAI-23 will focus on cutting-edge research on both the theory and practice of developing safe and robust AI systems. 
Specifically, the goal of this special track is to promote research that studies 1) the safety and robustness of AI systems, 2) AI algorithms that are able to analyze and guarantee their own safety and robustness, and 3) AI algorithms that can analyze the safety and robustness of other systems. For acceptance into this track, we expect papers to make fundamental contributions to safe and robust AI and to address the complexity and uncertainty inherent in real-world applications.

In short, the special track covers topics related to the safety and robustness of AI-based systems and to using AI-based technologies to enhance the safety and robustness of both themselves and other critical systems, including but not limited to:

- Safe and Robust AI Systems
- Safe Learning and Control
- Quantification of Uncertainty and Risk
- Safe Decision Making Under Uncertainty and Limited Information
style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"><font color="#000000">Robustness Against Perturbations and Distribution Shifts</font></span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"><font color="#000000">Detection and Explanation of Anomalies and Model Misspecification</font></span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"><font color="#000000">Formal Methods for AI Systems</font></span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"><font color="#000000">On-line Verification of AI Systems</font></span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"><font color="#000000">Safe Human-Machine Interaction</font></span></p></li></ul><font color="#000000"><br></font><p dir="ltr" style="line-height:1.656;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><font color="#000000">Special Track Co-Chairs:</font></span></p><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:disc;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"><font color="#000000">Chuchu Fan (Massachusetts Institute of Technology)</font></span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"><font 
color="#000000">Ashkan Jasour (NASA/Caltech JPL)</font></span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"><font color="#000000">Reid Simmons (Carnegie Mellon University)</font></span></p></li></ul><div><font color="#000000" face="Arial"><span style="white-space:pre-wrap"><br></span></font></div><div><b><font color="#000000">Submission Instructions </font></b></div><div><font color="#000000">Submissions to this special track will follow the regular AAAI technical paper submission procedure, but the authors need to select the Safe and Robust AI special track (SRAI).<font face="Arial"><span style="white-space:pre-wrap"><br></span></font></font></div><p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:12pt"><font color="#000000"><span style="font-family:Arial;font-weight:700;white-space:pre-wrap">Important Dates:</span><br></font></p><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:disc;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:12pt;margin-bottom:0pt"><span style="font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"><font color="#000000">August 8, 2022: Abstracts due at 11:59 PM UTC-12 </font></span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:12pt"><span style="font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"><font color="#000000">August 15, 2022: Full papers due at 11:59 PM UTC-12</font></span></p></li></ul><div><font color="#000000">For more information, please visit: </font></div><div><font color="#000000"><a href="https://aaai.org/Conferences/AAAI-23/safeandrobustai" target="_blank">https://aaai.org/Conferences/AAAI-23/safeandrobustai</a> </font></div><div><font color="#000000"><a href="https://aaai.org/Conferences/AAAI-23/aaai23call" target="_blank">https://aaai.org/Conferences/AAAI-23/aaai23call</a><br></font></div><font color="#000000"><br></font><p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:12pt"><font color="#000000"><br></font></p><font color="#000000"><br><br></font></div>