Princeton University users: to view a senior thesis while away from campus, connect to the campus network via the Global Protect virtual private network (VPN). Unaffiliated researchers: please note that requests for copies are handled manually by staff and require time to process.
 

Publication:

Safety Monitoring for Autonomous Systems using Introspective LLMs


Files

Thesis.pdf (1.34 MB)

Date

2025-04-14

Abstract

As autonomous systems are increasingly deployed in real-world environments, ensuring their safety under anomalous conditions remains a critical challenge. This thesis investigates the use of large language models (LLMs) as reasoning agents for selecting appropriate fallback strategies in response to hazardous scenarios. We develop a prompting and evaluation methodology to assess an LLM agent's performance on three tasks: classifying scenarios as safe or hazardous, selecting an appropriate fallback for a hazardous observation, and predicting a set of multiple feasible fallbacks for a scenario. To improve decision quality, we apply introspective planning techniques that ground the LLM's reasoning in prior knowledge. We compare zero-shot reasoning with the introspective planning approach, demonstrating that the latter improves both accuracy and safe fallback selection. This work takes a step toward the integration of LLMs into the safety monitoring pipelines of autonomous systems.
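The abstract's evaluation setup can be illustrated with a minimal sketch. All names here (`zero_shot_prompt`, `introspective_prompt`, `stub_llm`) are hypothetical illustrations, not the thesis's actual code: the stub stands in for a real LLM call, and the introspective variant simply grounds the prompt in retrieved prior knowledge before asking for a decision.

```python
# Hypothetical sketch of the evaluation methodology described in the abstract.
# A real system would replace `stub_llm` with an actual LLM API call.

def zero_shot_prompt(observation: str) -> str:
    # Zero-shot: the model sees only the observation and the question.
    return (f"Observation: {observation}\n"
            "Is this scenario safe or hazardous? If hazardous, "
            "select the best fallback action.")

def introspective_prompt(observation: str, prior_knowledge: list) -> str:
    # Introspective planning (as described in the abstract): ground the
    # model's reasoning in prior knowledge before it decides.
    guidelines = "\n".join(f"- {k}" for k in prior_knowledge)
    return (f"Known safety guidelines:\n{guidelines}\n"
            f"Observation: {observation}\n"
            "Reason about which guidelines apply, then classify the "
            "scenario and select a fallback.")

def stub_llm(prompt: str) -> str:
    # Stand-in model: flags any prompt mentioning an obstacle as hazardous.
    return "hazardous: stop" if "obstacle" in prompt else "safe"

def classification_accuracy(prompts_and_labels) -> float:
    # Task 1 of the three tasks: safe/hazardous classification accuracy
    # over a labeled scenario set.
    correct = sum(stub_llm(p).startswith(label) for p, label in prompts_and_labels)
    return correct / len(prompts_and_labels)
```

The same loop extends to the other two tasks by scoring the selected fallback (or predicted fallback set) against ground-truth feasible fallbacks instead of the binary label.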
