Safe Autonomy and Intelligence Lab (SAIL)





Wei Xiao


Office: Unity Hall, Room 287

Emails: wxiao3 [at] wpi (dot) edu, weixy [at] mit (dot) edu

[Google Scholar] / [Github] / [LinkedIn] / [WPI Profile]




    I am an assistant professor leading the Safe Autonomy and Intelligence / AI Lab (SAIL) in the Robotics Engineering Department at WPI, and a research affiliate with MIT CSAIL. I was a Postdoctoral Associate at MIT CSAIL (2021-2025), advised by Prof. Daniela Rus. I received my Ph.D. in Systems Engineering in 2021 from Boston University, advised by Prof. Christos G. Cassandras and Prof. Calin Belta.

    We are actively looking for postdocs, Ph.D. students, Master's students, and undergraduates to join our lab. Please reach out to the emails above if you are interested.

    Recent News (last update: Sep. 2025)

    • [08/2025] I joined the WPI Robotics Engineering Department as an assistant professor.
    • [07/2025] One paper on CBFs for polygonal environments is accepted in CDC 2025.
    • [06/2025] One paper on autonomous surface vehicle is accepted in IROS 2025.
    • [05/2025] Our ABNet paper is accepted in ICML 2025.
    • [01/2025] Our SafeDiffuser paper is now accepted in ICLR 2025.
    • [11/2024] One paper is accepted in Cybernetics and AI.
    • [07/2024] Three papers are accepted in CDC 2024.
    • [04/2024] Our special session on safety-critical control is available in Annual Reviews in Control.
    • [04/2024] One paper is accepted in ARC 2024.
    • [01/2024] Three papers (foundation model driving, swarm robots, autonomous vessels) are accepted in ICRA 2024.
    • [01/2024] Two papers regarding optimal control for mixed traffic are accepted in ACC 2024.
    • [01/2024] One paper of safe control for soft robots is accepted in RoboSoft 2024.
    • [01/2024] One paper on secure control for CAVs is accepted in VehicleSec 2024.
    • [12/2023] One paper regarding event/self-triggered CBFs for CAVs is accepted in Automatica.
    • [11/2023] One paper is accepted in NeurIPS 2023 robot learning workshop.
    • [10/2023] One paper is accepted in CoRL 2023 OOD workshop.
    • [09/2023] Two papers are accepted in NeurIPS 2023.
    • [08/2023] One paper is accepted in CoRL 2023 as an oral paper (6.6%).
    • [07/2023] Three papers are accepted in CDC 2023.
    • [07/2023] One paper is accepted in ITSC 2023.
    • [06/2023] Three papers (one conference, two journals) are accepted for presentation at IROS 2023.
    • [05/2023] One paper is accepted in CCTA 2023 and is selected as a best student paper award finalist (as advisor and co-author).
    • [04/2023] Our invariance paper is accepted in ICML 2023.
    • [03/2023] One paper about learning stability attention is accepted in L4DC 2023.
    • [03/2023] One paper about learning feasibility constraints is accepted in ECC 2023.
    • [02/2023] Our BarrierNet paper is accepted in IEEE Transactions on Robotics.
    • [02/2023] One paper about cyber-attacks in transportation systems is accepted in VehicleSec 2023.
    • [01/2023] One paper about risk metric evaluation is accepted in ICRA 2023.
    • [01/2023] One paper regarding game theoretic planning is accepted in RAL.
    • [12/2022] Our BarrierNet paper is conditionally accepted in IEEE Transactions on Robotics.
    • [10/2022] Our paper using optimal control for CAVs in roundabouts is accepted in T-ITS.
    • [08/2022] Our TAC paper regarding safety guarantees under unknown dynamics is now available.

    Research

    The research interests of the Safe Autonomy and Intelligence / AI Lab (SAIL) include safety-critical control theory and trustworthy machine learning, with applications to robotics and multi-agent systems. Our ambition is to develop a new science of autonomy that integrates intelligence with certifiability. This science will be the cornerstone of a future in which AI-enabled robots are an everyday presence, safely and efficiently augmenting human cognitive and physical capabilities.

    Specifically, we explore the following domains:

    1. Nonlinear systems and control theory (exploring safety and stability, etc.)

    2. Trustworthy machine learning and Safe AI (incorporating theories into machine learning methods/models)

    3. Robotics software (developing new algorithms)

    4. Robotics hardware (implementing on hardware and embedded systems)

    5. Intelligent Transportation Systems (learning and optimal control for CAVs, etc.)
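    A central tool behind several of the results above is the control barrier function (CBF). As a loose illustration only (not code from any of our papers), the sketch below shows the idea for a 1-D single integrator, where the CBF quadratic program admits a closed-form solution; the dynamics, barrier, and gain are all illustrative assumptions.

```python
import numpy as np

def cbf_safety_filter(x, u_ref, alpha=1.0):
    """Minimal CBF safety filter for the single integrator x' = u.

    The barrier h(x) = x encodes the safe set {x >= 0}. The CBF
    condition h'(x)*u + alpha*h(x) >= 0 reduces to u >= -alpha*x,
    so the QP "stay close to u_ref subject to safety" collapses to
    a simple clip of the reference input.
    """
    return max(u_ref, -alpha * x)

# The reference controller pushes toward the unsafe region (u_ref = -2),
# but the filtered closed loop only decays toward the boundary x = 0.
x, dt = 1.0, 0.01
for _ in range(1000):
    u = cbf_safety_filter(x, u_ref=-2.0)
    x += dt * u
print(x)  # remains nonnegative: safety is maintained by the filter
```

    For general control-affine systems the same construction yields a quadratic program solved at each time step; the scalar case above is just small enough to solve by hand.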


    See the figure below for a brief summary of our work.


    Research highlights

  •     ABNet: Adaptive explicit-Barrier Net for Safe and Scalable Robot Learning, ICML 2025: Project
  •     SafeDiffuser: Safe Planning with Diffusion Probabilistic Models, ICLR 2025: Website
  •     On the forward invariance of neural ODEs, ICML 2023: Website




    Talks




 