
Publication:

RAGLess: Evaluating RAG Systems In Emergence of Long-Context Model Era


Files

td6955_written_final_report-2.pdf (712.54 KB)

Date

2025

Abstract

ChatGPT, Google Gemini, and Claude are among the most recognizable names in artificial intelligence today, tools that have become synonymous with productivity. The shift toward an era of massive, general-purpose systems is driven by foundation models, a recent and highly influential development in language modeling. These are large models pre-trained on vast amounts of data, which equips them to perform many different tasks. A persistent weakness of foundation models, however, is their poor performance on tasks that require deep understanding of specific topics. Existing approaches, such as fine-tuning and retrieval-augmented generation (RAG), aim to improve foundation model responses and adapt them to downstream tasks. This thesis leverages a key feature of the newest foundation model releases, the long-context window, in an attempt to reduce hallucinations. Experimental results are promising: in-context learning appears to be achieved via large-scale prompt injections. Successful reproduction of RAG performance also signals future opportunities as foundation models continue to advance.
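The contrast the abstract draws can be sketched in code. Below is a minimal, hypothetical illustration (not the thesis's actual pipeline): a RAG-style builder that retrieves only the top-k documents by naive term overlap, versus a long-context builder that injects the entire corpus into the prompt. The function names, corpus, and scoring heuristic are illustrative assumptions.

```python
# Hypothetical sketch of the two prompting strategies discussed above.
# Real RAG systems use learned embeddings and vector search; term overlap
# here is a stand-in retriever for illustration only.

def build_rag_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """RAG-style: retrieve the top-k documents by naive term overlap."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    context = "\n".join(scored[:k])
    return f"Context:\n{context}\n\nQuestion: {query}"

def build_long_context_prompt(query: str, corpus: list[str]) -> str:
    """Long-context: inject the whole corpus, relying on the model's
    large context window instead of a retrieval step."""
    context = "\n".join(corpus)
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The trade-off the thesis evaluates follows directly: the long-context variant avoids retrieval errors entirely but spends many more tokens per query, while the RAG variant is cheaper but can miss relevant passages.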
