Title: Fine-tuning Small Language Models for Javanese Translation
Author: Menezes, Trivan T.
Advisor: Fellbaum, Christiane Dorothea
Type: Princeton University Senior Theses
Date Issued: 2025
Date Available: 2026-01-05
URI: https://theses-dissertations.princeton.edu/handle/88435/dsp015d86p368h
Language: en-US

Abstract:
Natural language processing tools remain scarce for low-resource languages, even those with large speaker populations. This research investigates the potential for improving machine translation for Javanese, a language of Indonesia with over 80 million speakers but limited digital resources. We evaluate three fine-tuning techniques, supervised fine-tuning (SFT), model distillation, and Chain-of-Thought (CoT) distillation, for improving Javanese-to-Indonesian translation quality with open-weight models (T5, mT5, Gemma 3 4B, Aya 8B, Aya 32B), and compare them against zero-shot and many-shot baselines from larger proprietary models. Evaluation using BLEU, TER, chrF, and BERTScore reveals that while large models such as Gemini 2.0 Flash achieve top performance, fine-tuning substantially boosts the quality of smaller models. The first stage of SFT on the 500-example NusaX dataset produced a marked improvement in translation quality. Subsequent model distillation and CoT distillation yielded only marginal gains over SFT, suggesting diminishing returns that may be bounded by the models' pre-training knowledge. The improvements were nonetheless tangible: the fine-tuned 4-billion-parameter Gemma 3 model achieved performance comparable to, and sometimes exceeding, much larger models such as GPT-4o in a zero-shot setting. These results show that fine-tuning smaller, accessible models offers a resource-efficient path to high-quality translation for low-resource languages like Javanese, potentially enabling deployment on edge devices and broadening access to NLP technologies for underserved linguistic communities.
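
As an illustrative sketch of the evaluation setup named in the abstract (not the thesis's actual scripts), the snippet below computes BLEU, TER, chrF, and BERTScore for candidate translations against Indonesian references. It assumes the sacrebleu and bert-score Python packages; the example sentences are hypothetical.

```python
# Illustrative sketch only: hypothetical data, assumes the sacrebleu and
# bert-score packages; not the evaluation code used in the thesis.
from sacrebleu.metrics import BLEU, CHRF, TER
from bert_score import score as bert_score

# Hypothetical model outputs and Indonesian reference translations.
hypotheses = ["Saya pergi ke pasar pagi ini."]
references = ["Saya pergi ke pasar tadi pagi."]

# Corpus-level string-overlap metrics from sacrebleu.
bleu = BLEU().corpus_score(hypotheses, [references])
chrf = CHRF().corpus_score(hypotheses, [references])
ter = TER().corpus_score(hypotheses, [references])

# BERTScore compares contextual embeddings; lang="id" selects a default
# model suitable for Indonesian references.
_, _, f1 = bert_score(hypotheses, references, lang="id")

print(f"BLEU: {bleu.score:.2f}  chrF: {chrf.score:.2f}  TER: {ter.score:.2f}")
print(f"BERTScore F1: {f1.mean().item():.4f}")
```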