Infinigen Physics: LLM-Driven Material Physics in Procedurally Generated Scenes

Authors: Deng, Jia; Zhu, Anlon V.
Date Issued: 2025-04-10
Date Available: 2026-01-06
URI: https://theses-dissertations.princeton.edu/handle/88435/dsp01fq977z25w
Language: en-US
Collection: Princeton University Senior Theses

Abstract:
Infinigen Physics augments the procedurally generated 3D scenes produced by the Infinigen system with material physics values. Despite recent advances in visual language models (VLMs) for robotics, physical world understanding remains a notable weak point. To address this, our approach enables the generation of photorealistic 3D scenes alongside physics-encoded ground truth renderings for latent material properties (e.g., density, thermal conductivity). This novel framework has applications in a wide range of 3D vision tasks—such as robotic manipulation, navigation, and augmented reality—that require both material and physical scene understanding, capabilities not available in current physics-vision datasets or dynamic physics-simulation engines like Omniverse and Unreal. Furthermore, we introduce an LLM-powered research agent that leverages web search to retrieve reliable sources of material data, facilitating the mapping of Infinigen shaders to corresponding physical properties. Embracing the scalability of Infinigen, our method automates the assignment of material properties for new shaders across an infinite variety of scenes. We demonstrate the utility of Infinigen Physics by constructing a benchmark dataset of 100 scenes with physics-encoded ground truths, which is used to evaluate an existing VLM on a physical understanding task.

Our contributions include:
• Developing an LLM-powered research agent for sourcing material properties and mapping Infinigen shaders to these values.
• Integrating material-physics-encoded segmentation masks into the Infinigen rendering pipeline.
• Creating a benchmark dataset of 100 scenes with physics-encoded ground truths for evaluating VLM performance on physical reasoning tasks.
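
To make the "physics-encoded ground truth" idea concrete, the following is a minimal illustrative sketch, not code from the thesis: it assumes a hypothetical shader-to-property table (with made-up shader names and example values) and shows how a per-pixel property map could be derived from a material segmentation mask, in the spirit of the pipeline integration described above.

```python
# Illustrative sketch only; shader names, property values, and function names
# are hypothetical, not taken from the Infinigen Physics implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class MaterialPhysics:
    density_kg_m3: float              # bulk density (kg/m^3)
    thermal_conductivity_w_mk: float  # thermal conductivity (W/(m*K))

# Hypothetical shader -> property table; in the thesis, such values would be
# sourced by the LLM-powered research agent from web-retrieved material data.
SHADER_PROPERTIES = {
    "wood_shader": MaterialPhysics(700.0, 0.15),
    "metal_shader": MaterialPhysics(7850.0, 50.0),
}

def physics_map(seg_mask: np.ndarray, id_to_shader: dict) -> np.ndarray:
    """Turn an integer material-segmentation mask of shape (H, W) into a
    (H, W, 2) array of per-pixel [density, thermal conductivity] ground truth."""
    out = np.zeros((*seg_mask.shape, 2), dtype=np.float32)
    for mat_id, shader in id_to_shader.items():
        props = SHADER_PROPERTIES[shader]
        out[seg_mask == mat_id] = (props.density_kg_m3,
                                   props.thermal_conductivity_w_mk)
    return out

# Tiny usage example: a 2x2 mask containing two materials.
mask = np.array([[0, 1], [1, 0]])
ground_truth = physics_map(mask, {0: "wood_shader", 1: "metal_shader"})
print(ground_truth.shape)  # (2, 2, 2)
```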