
Overview
A practice tool in which users answer interview questions via text or voice and receive structured feedback: scores, strengths, areas for improvement, and an improved answer. Reliability comes from careful prompt design and robust output parsing, which together simulate real interview feedback. Users can select or create interview questions and respond either by typing or by voice input, which is converted into editable text.
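
The structured feedback described above could be modeled as a typed schema with a runtime validator. This is a sketch under assumptions: the interface and field names (`InterviewFeedback`, `score`, `improvedAnswer`) are hypothetical, not taken from the project's code.

```typescript
// Hypothetical shape of the structured feedback (names are assumptions).
interface InterviewFeedback {
  score: number;            // e.g. an overall rating
  strengths: string[];
  improvements: string[];
  improvedAnswer: string;
}

// Minimal validator: checks that a parsed LLM response matches the schema
// before it is shown to the user.
function isInterviewFeedback(value: unknown): value is InterviewFeedback {
  const v = value as Partial<InterviewFeedback>;
  return (
    typeof value === "object" && value !== null &&
    typeof v.score === "number" &&
    Array.isArray(v.strengths) && v.strengths.every(s => typeof s === "string") &&
    Array.isArray(v.improvements) && v.improvements.every(s => typeof s === "string") &&
    typeof v.improvedAnswer === "string"
  );
}
```

Validating at the boundary like this keeps malformed model output from ever reaching UI components.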
“From unstructured responses to consistent interview feedback using LLMs”


Technical Highlights
Implemented a structured prompt schema to enforce consistent JSON outputs from LLM responses, combined with fallback parsing and validation to handle unpredictable model behavior. Built a unified input pipeline supporting both typed and speech-to-text responses, and integrated free OpenRouter models through a modular Next.js API route. Focused on robust state management and UX, incorporating loading, recording, and error states to ensure a stable, responsive user experience.
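
The fallback parsing mentioned above could look like the following sketch: try strict `JSON.parse`, then strip markdown code fences, then extract the widest `{...}` span before giving up. This is an assumed approach for illustration, not the project's exact code.

```typescript
// Fallback parsing for LLM output: models sometimes wrap JSON in prose or
// ```json fences, so we try progressively more lenient candidates.
function parseModelJson(raw: string): unknown | null {
  const candidates: string[] = [
    raw,                                        // 1. hope it is clean JSON
    raw.replace(/```(?:json)?/g, "").trim(),    // 2. strip markdown fences
  ];
  const braced = raw.match(/\{[\s\S]*\}/);      // 3. widest {...} span
  if (braced) candidates.push(braced[0]);
  for (const text of candidates) {
    try {
      return JSON.parse(text);
    } catch {
      // fall through to the next, more lenient candidate
    }
  }
  return null; // caller surfaces an error state in the UI
}
```

The result would then be checked against the expected feedback schema, and a `null` mapped to the UI's error state rather than a crash.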
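
The OpenRouter integration via a Next.js API route could be sketched as below, assuming the App Router's `Request`/`Response` handler style. The model id, prompt wording, and helper name `buildFeedbackRequest` are assumptions; only the OpenRouter endpoint URL and `Authorization: Bearer` header reflect the public OpenRouter API.

```typescript
// Sketch of a Next.js route handler that proxies answers to OpenRouter.
const OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions";

// Build the OpenAI-style chat payload (model id is a free-tier assumption).
function buildFeedbackRequest(answer: string) {
  return {
    model: "meta-llama/llama-3.1-8b-instruct:free",
    messages: [
      {
        role: "system",
        content:
          "Return ONLY valid JSON with keys: score, strengths, improvements, improvedAnswer.",
      },
      { role: "user", content: answer },
    ],
  };
}

export async function POST(req: Request): Promise<Response> {
  const { answer } = await req.json();
  const res = await fetch(OPENROUTER_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildFeedbackRequest(answer)),
  });
  const data = await res.json();
  // Hand the raw model text back to the client, which runs fallback parsing.
  return Response.json({ raw: data.choices?.[0]?.message?.content ?? "" });
}
```

Keeping the API key server-side in the route handler is what makes the client-side input pipeline safe to ship.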