Hello! 👋
I am Prabhdeep Singh (Prabh). I am a first-year student at the University of Toronto planning to study Computer Science, Statistics, and Mathematics.
Most of my time goes into AI research, writing code, and trying to understand why models behave the way they do instead of just taking their answers at face value.
Growing up, I taught myself a lot by asking the right questions. I never enjoyed school because it felt slow. Self-learning did not.
When I found online resources, I started teaching myself in earnest. That same energy moved into math, programming, and later into models and optimization. I got used to building things before I fully understood them, then going back to clean up the theory afterward.
I fell into AI research because I liked the questions more than the demos. How do models store concepts? Why do some prompts light up entire behaviors? What changes when you push systems to the edge of their comfort zone?
Along the way I spent time as a research assistant at CMU, then joined a group at MIT CSAIL, and eventually found myself working with the AI Ophthalmology team at Harvard. Each step taught me something different about how real research happens.
Now I split my time across university, which I will finish well ahead of schedule, research, and small projects that test whether my ideas are practical.
I care about two things at the same time.
One is depth. I like proofs, weird edge cases, and understanding how far a method can be pushed before it breaks.
The other is impact. I like when something I built is in a repo that people actually clone, or in a product that someone uses without thinking about the model behind it.
Long term I want to work on systems that feel like early pieces of whatever comes after today's models, not just slightly better chatbots. I want to be close to the hard engineering and hard science that actually moves the frontier forward.
Some of the work I am proud of looks like this:
- FETA Transformers - a framework that makes token-level adapters behave like LoRA and other parameter-efficient methods, with shared theory, training, and a compiler behind it.
- MeshRAG - a constant-time RAG system that mixes hashing and graphs so latency stays flat no matter how large the dataset gets.
- Cornea segmentation and PCO analysis - vision-language models that support cataract surgery workflows, including cornea segmentation and risk analysis for posterior capsule opacification (PCO).
- IntelliDrive - an LLM-controlled RC car built for a hackathon, where the model reasons about whatever environment it finds itself in and chooses how to drive.
- Small apps - things like productivity apps and quiz feeds that test how far you can push learning and habit-building with a little bit of ML, good UX, and some humor.
None of these are perfect, but each one taught me something real about how models fail and how people actually interact with them.
If any of this sounds interesting and you would like to talk about research, building, or ideas that do not have a name yet, I would be happy to chat.
I am especially interested in work on model architectures and tools that make intelligent systems more capable and reliable.
Email is the best way to reach me: s.prabhdeep[at]mail.utoronto.ca
You can also find me here:
- GitHub: https://github.com/prabhxyz
- Twitter: https://x.com/PrabhdeepS_/
- ORCID: https://orcid.org/0009-0009-3329-3595