Self Help AI - iOS Mobile App
I developed the full software stack for Self Help AI, an iOS mobile app that combines user data, function calling, and personalization settings into a self-help chatbot experience. The end product spans ~30k lines of code across 24 files and has amassed over 200 App Store downloads since launch.
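At its core, the app follows the standard tool-calling loop: the chatbot can request user data through a declared function, and the result is fed back into the conversation before the model replies. The sketch below illustrates that general pattern in Python, not the app's actual code; the `get_user_profile` tool, the model name, and the OpenAI-style client are assumptions.

```python
# Minimal sketch of a tool-calling chatbot loop (illustrative only).
# Assumes an OpenAI-style chat API; the tool and model name are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_user_profile",  # hypothetical tool for personalization data
        "description": "Fetch the user's stored goals and personalization settings.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

def get_user_profile() -> dict:
    # Placeholder for reading locally stored user data.
    return {"goals": ["sleep better"], "tone": "encouraging"}

def chat(history: list[dict]) -> str:
    """Send the conversation, resolve any tool calls, and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=history, tools=TOOLS
    )
    message = response.choices[0].message
    if message.tool_calls:
        history.append(message)
        for call in message.tool_calls:
            result = get_user_profile()  # in practice, dispatch on call.function.name
            history.append({"role": "tool", "tool_call_id": call.id,
                            "content": json.dumps(result)})
        response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        message = response.choices[0].message
    return message.content
```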
Expert Witness Semantic Search Engine
At Advice Company, I re-engineered the search system attorneys use to find expert witnesses. I first wrote a script that systematically walks our database and generates an LLM summary of each expert witness. I then embedded these summaries with a BERT model and built a system that matches the precomputed vectors against the embedding of a user's query.
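The matching step reduces to nearest-neighbor search over the precomputed summary embeddings. Here is a minimal sketch of that idea, assuming sentence-transformers as the BERT encoder; the model name, example data, and field names are placeholders, not the production system.

```python
# Minimal sketch: embed expert summaries once, then rank them against a query.
# Assumes sentence-transformers; model name and data are illustrative placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Precomputed offline: one LLM-generated summary per expert.
experts = [
    {"name": "Expert A", "summary": "Orthopedic surgeon specializing in spinal injuries..."},
    {"name": "Expert B", "summary": "Forensic accountant with experience in fraud cases..."},
]
expert_vecs = model.encode([e["summary"] for e in experts], normalize_embeddings=True)

def search(query: str, top_k: int = 5) -> list[dict]:
    """Rank experts by cosine similarity between the query and the summary embeddings."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = expert_vecs @ q                      # cosine similarity (unit-norm vectors)
    order = np.argsort(-scores)[:top_k]
    return [{"expert": experts[i]["name"], "score": float(scores[i])} for i in order]

print(search("expert witness for a spinal injury malpractice case"))
```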
I deployed this system as an API endpoint using Docker and Google Cloud, and designed the front-end interface that integrates it into the website. For the last few months, I've been monitoring the system, making adjustments based on user feedback, and improving its efficiency. The service is now live for 500+ high-paying clients on ExpertPages.com, an online directory for expert witnesses.
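The serving layer can be as small as a single route wrapping the search function; the sketch below assumes FastAPI, with the route and payload shape as illustrative choices rather than the production API. A container built from this runs unchanged on Google Cloud.

```python
# Minimal sketch of exposing the search function as an HTTP endpoint.
# Assumes FastAPI; route name and payload shape are illustrative only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SearchRequest(BaseModel):
    query: str
    top_k: int = 5

@app.post("/search")
def search_experts(req: SearchRequest):
    # `search` is the embedding-based ranking function sketched above.
    return {"results": search(req.query, req.top_k)}
```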
Integrating LLMs into an Original 3D Unity Game
I led a group project creating an original Unity-based video game that incorporates Large Language Models to produce dynamic NPC interactions and combat narratives. We managed the game state & character behaviors through C# scripts and Unity's component system, while a custom API layer handled LLM prompting, context management, and response processing (see the sketch after the feature list below).
- Unity game engine with C# scripting for core mechanics
- Custom API middleware for LLM integration and prompt management
- 7 NPCs with context-aware dialogue systems
- 21 unique combat abilities with particle system integration
- 37 total animations, 3 combat encounters, and 3 original world maps
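The middleware's job boils down to per-NPC context management, prompt assembly, and response processing. The following Python sketch illustrates that pattern only; the actual middleware's interfaces differ, and the NPC fields, persona text, and model name here are assumptions. In the game itself, the Unity C# scripts call a layer like this through the custom API rather than embedding it in-engine.

```python
# Illustrative sketch of LLM middleware for NPC dialogue: per-NPC context,
# prompt assembly, and response processing. Names and fields are hypothetical.
from dataclasses import dataclass, field
from openai import OpenAI

client = OpenAI()

@dataclass
class NPC:
    name: str
    persona: str                      # short character description used as the system prompt
    history: list[dict] = field(default_factory=list)

def npc_reply(npc: NPC, player_line: str, game_state: dict) -> str:
    """Build the prompt from persona + recent history + game state, then query the LLM."""
    system = (f"You are {npc.name}, an NPC. {npc.persona} "
              f"Current game state: {game_state}. Stay in character; reply in 1-2 sentences.")
    npc.history.append({"role": "user", "content": player_line})
    messages = [{"role": "system", "content": system}] + npc.history[-10:]  # bounded context
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    reply = response.choices[0].message.content.strip()
    npc.history.append({"role": "assistant", "content": reply})
    return reply

blacksmith = NPC(name="Brannor", persona="A gruff blacksmith who secretly loves poetry.")
print(npc_reply(blacksmith, "Can you repair my sword?", {"location": "forge", "quest": "act_1"}))
```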
Multimodal Voice-Controlled Robot
I developed a multimodal robot built on a Raspberry Pi that can interpret voice commands, control its movements, analyze its surroundings, and respond with emotionally expressive speech. This involved integrating three separate AI models (Speech-to-Text, a Multimodal LLM, and Text-to-Speech) with the Raspberry Pi, a controller board, motors, a camera, a microphone, and a speaker.
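The control loop chains those three models together around the hardware: transcribe a voice command, ask the LLM for an action plus a spoken reply, drive the motors, and speak. Below is a minimal Python sketch of that loop, assuming the SpeechRecognition, gpiozero, and pyttsx3 libraries and an OpenAI-style API; the GPIO pins, model name, and JSON command format are placeholders, and the camera frame that would make the LLM call multimodal is omitted for brevity.

```python
# Minimal sketch of the voice -> LLM -> speech/motion loop on a Raspberry Pi.
# Assumes SpeechRecognition, gpiozero, pyttsx3, and an OpenAI-style API;
# pin numbers, model name, and the command format are placeholders.
import json
import speech_recognition as sr
import pyttsx3
from gpiozero import Robot
from openai import OpenAI

client = OpenAI()
tts = pyttsx3.init()
robot = Robot(left=(4, 14), right=(17, 18))   # placeholder GPIO pins
recognizer = sr.Recognizer()

ACTIONS = {"forward": robot.forward, "backward": robot.backward,
           "left": robot.left, "right": robot.right, "stop": robot.stop}

def listen() -> str:
    """Capture a voice command from the microphone and transcribe it (Speech-to-Text)."""
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)

def respond(command: str) -> dict:
    """Ask the LLM for a motion action plus an emotionally expressive reply."""
    prompt = (f"The user said: '{command}'. Reply as JSON with keys "
              f"'action' (forward/backward/left/right/stop) and 'speech'.")
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}]
    )
    return json.loads(response.choices[0].message.content)

while True:
    plan = respond(listen())
    ACTIONS.get(plan["action"], robot.stop)()   # drive the motors
    tts.say(plan["speech"])                     # Text-to-Speech
    tts.runAndWait()
```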