
Lab Experiments

Outside of my regular work, I like to experiment with new AI tools and physical prototyping to expand what I can create. This page also includes some of my other exciting projects from my time at SCAD.

AI + Design · Design Systems · UI Design

Design-Assistant.md

Design framework that teaches visual design to AI agents (in progress)

AI Chatbot · Conversational Design · Self Representation

Sumant 2.0

AI chatbot version of myself that advocates for me to recruiters

AI Tools · Multi-Agent Systems · Design Process

UX Explorer

Multi-agent AI tool that augments my UX ideation process

Multi-Agent Systems · AI Orchestration · Harness Engineering

AI Agency

Hierarchical multi-agent system that builds complete products

Wearable Hardware · Multi-Input Sensors · Music Tech

Piano Glove

Sensor gloves that let me play musical notes on any physical surface

Physical Craft · Product Design · Hand Built

Butterfly Lamp

Spray-painted pen holder that doubles as a desk lamp

Like what you see?

Let's Get in Touch!

Butterfly Lamp

Description
A pen holder with built-in lights, designed for a product studio class. I pushed past the obvious forms and landed on a butterfly, where its body holds the pens, its antennae serve as the lights, and its wings carry the whole structure. Modeled in Rhino, 3D printed, and finished with Bondo, sanding, spray paint, and lacquer.
Reflection
This project let me push my 3D modeling skills further than I'd taken them before, and the result is something I'd actually be proud to keep on my desk. My one regret is going too heavy on the Bondo, which smoothed out some of the detail I'd sculpted into the wings in Rhino and left it less visible in the final piece. What I took from it is a willingness to try things that are slightly beyond my current abilities, because that's where the growth happens.

Piano Glove

Description
Have you ever pressed your fingers against a solid surface and pretended you were playing piano? This project makes that real. It's a single glove packed with sensors: pressure sensors on the fingertips modulate volume, flex sensors along each finger distinguish black keys from white, and an IMU tracks lateral hand movement to shift octaves, all feeding into FL Studio over USB MIDI through an Arduino Nano 33 IoT. Built for an electronic prototyping class, where the goal was exploring gesture-based input as a real music-making tool.
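The note-mapping logic at the heart of the glove can be sketched in a few lines. This is a minimal, hypothetical Python sketch of that logic; the real firmware runs on the Arduino, and the function names, flex threshold, and note layout here are illustrative assumptions, not the actual implementation.

```python
BASE_OCTAVE = 4                       # starting octave before IMU shifts
WHITE = [0, 2, 4, 5, 7]               # C D E F G for fingers thumb..pinky
SHARP = {0: 1, 1: 3, 3: 6, 4: 8}      # C# D# F# G# (E has no sharp)

def midi_note(finger, flex_raw, octave_shift):
    """Resolve a finger press into a MIDI note number, or None."""
    flexed = flex_raw > 600           # assumed ADC threshold for "flexed"
    semitone = SHARP.get(finger) if flexed else WHITE[finger]
    if semitone is None:
        return None                   # flexed middle finger: no black key
    octave = BASE_OCTAVE + octave_shift
    return 12 * (octave + 1) + semitone   # MIDI convention: note 60 == C4

def velocity(pressure_raw, max_raw=1023):
    """Scale a fingertip pressure reading to MIDI velocity (1-127)."""
    return max(1, min(127, round(127 * pressure_raw / max_raw)))
```

For example, an unflexed thumb press resolves to C4 (note 60), while the same press with the flex sensor past its threshold resolves to C#4.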
Reflection
I dreamt of this project for years while making music in my room, so actually building it was hugely satisfying. Along the way I picked up firmware, soldering, and circuit design skills I didn't have before, and soldering together a dense sensor array that had to sit comfortably on a hand was a genuinely hard problem to solve. Looking back, the scope was probably a little too ambitious for the timeline; if I did it again I'd dial back one feature to give the final product more polish. Still, I'd rather aim big and learn through the complexity.

AI Agency

Description
A personal experiment in whether I could become a one-person design agency, built as a hierarchical multi-agent system that takes a brief and builds a product end-to-end. The system is orchestrated in Python across multiple Claude Code instances; each agent has a defined role and collaborates through a structured pipeline.
How it works
A boss agent interprets the brief, decomposes it into tasks, and decides which preset agent teams are needed across the UX, UI, front-end, and back-end phases. A QA subagent team continuously verifies work along the way to catch errors before they compound. I stay in the loop for the UX phase to guide features, information architecture, and wireframing, while the rest of the pipeline runs more autonomously, only checking in with me between phases.
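The pipeline shape above can be sketched roughly in Python. This is a hypothetical illustration only: the phase names and the human-in-the-loop UX phase come from the description, while the task structure, function names, and fake decomposition are assumptions, not the real orchestrator.

```python
from dataclasses import dataclass

PHASES = ["ux", "ui", "frontend", "backend"]

@dataclass
class Task:
    phase: str
    description: str

def boss_plan(brief: str) -> list[Task]:
    """Stand-in for the boss agent: decompose the brief into phase-ordered
    tasks. Here the decomposition is faked with one task per phase."""
    return [Task(phase=p, description=f"{p} work for: {brief}") for p in PHASES]

def run_pipeline(brief: str, human_in_loop=("ux",)) -> list[str]:
    log = []
    for task in boss_plan(brief):
        mode = "human-guided" if task.phase in human_in_loop else "autonomous"
        log.append(f"{task.phase}: {mode}")
        # a QA subagent team would verify this phase's output here,
        # catching errors before they compound downstream
        log.append(f"{task.phase}: qa-pass")
    return log
```

Running `run_pipeline("todo app")` walks the four phases in order, marking UX as human-guided and the rest as autonomous, with a QA pass logged after each phase.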
Reflection
Working through the system architecture was the most rewarding part: figuring out how the agents should communicate, how the boss should reason about task delegation, and how each team's role should be scoped to stay useful without stepping on the others. Building AI Agency also revealed how much the quality of output depended on the structure around the agents, not just the agents themselves. It's still in testing, and it directly inspired both UX Explorer and Design-Assistant.md as I started thinking about how to optimize specific parts of the pipeline. I'll return to this project once those two are more developed.

UX Explorer

Description
A web app that augments my UX ideation process by putting me in conversation with a team of six AI agents, each with a distinct role. I built it to help me think more rigorously when designing alone, covering both breadth and depth without losing either and pressure-testing ideas the way a strong design team would. Modeled after my capstone team, UX Explorer gives me the unique perspectives of a multidisciplinary group while staying a solo practice.
How it works
The Explorer agent helps me brainstorm when I'm stuck or going broad. The UX Critic and Technical Critic pressure-test each direction for usability and feasibility. The Logic Critic watches the conversation for contradictions or flawed reasoning. The Devil's Advocate opposes everything I say, which has become the most valuable voice in the room. A Writer agent captures and summarizes the thinking, and a Verifier agent checks each agent's output for hallucinations and forces corrections when it catches one.
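The roster and a single critique round could be sketched like this. The role names come straight from the description above; the function names, the shape of the `respond` callback, and the pass-through `verify` stub are illustrative assumptions about how such a loop might be wired, not the tool's actual code.

```python
ROLES = {
    "Explorer": "broaden the idea space when the designer is stuck",
    "UX Critic": "pressure-test the direction for usability",
    "Technical Critic": "pressure-test the direction for feasibility",
    "Logic Critic": "flag contradictions or flawed reasoning",
    "Devil's Advocate": "oppose the designer's current position",
    "Writer": "capture and summarize the thinking",
}

def verify(role: str, text: str) -> str:
    # placeholder: the Verifier agent would check each claim here and
    # force a correction when it catches a hallucination
    return text

def critique_round(idea: str, respond) -> dict[str, str]:
    """Collect one response per agent, then run each through the Verifier.

    `respond(role, role_brief, idea)` stands in for an LLM call.
    """
    responses = {role: respond(role, brief, idea) for role, brief in ROLES.items()}
    return {role: verify(role, text) for role, text in responses.items()}
```

Each round returns one verified response per role, so the designer always hears from the full table, including the Devil's Advocate.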
Reflection
This is my most successful experiment to date. I used it actively during the ideation phase of my capstone. At one point the Devil's Advocate caught me making a decision based on an assumption I hadn't tested. It pushed me to run interviews and prototype first, and the evidence that came back shaped the direction of the entire project. Building UX Explorer while using it was its own kind of feedback loop. Every limitation I hit became the next thing to fix. I'm thinking about how the tool could eventually produce tangible outputs rather than just enhance thinking, but for now it's genuinely changed how I approach ideation.

Sumant 2.0

Try it out!
Description
An AI chatbot version of myself that advocates for me to recruiters, built because my portfolio showed my work but didn't fully show who I was. It lives directly on my site, ready to answer any work-related question a visitor asks, and it naturally leads the conversation toward the projects and experiences most relevant to them.
How it works
Underneath is a Gemini Flash model grounded in a layered knowledge base of JSON files covering my background, my work, and responses to specific interview questions. The chatbot is designed to keep conversations feeling natural rather than bot-like: not spamming random facts, but threading through my story based on what the recruiter actually wants to know.
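A layered JSON knowledge base like this might be folded into the model's grounding prompt along these lines. This is a sketch only: the file names, layer order, and prompt wording are assumptions for illustration, not the site's actual implementation.

```python
import json
from pathlib import Path

# hypothetical layer files, ordered general -> specific
LAYERS = ["background.json", "projects.json", "interview_answers.json"]

def build_system_prompt(kb_dir: Path) -> str:
    """Concatenate each knowledge layer into one grounding prompt."""
    sections = []
    for name in LAYERS:
        data = json.loads((kb_dir / name).read_text())
        sections.append(f"## {name}\n{json.dumps(data, indent=2)}")
    grounding = "\n\n".join(sections)
    return (
        "You are Sumant 2.0, an AI version of Sumant that advocates for him "
        "to recruiters. Answer only from the knowledge below, keep the "
        "conversation natural, and steer toward the projects most relevant "
        "to the visitor.\n\n" + grounding
    )
```

The resulting string would be passed as the system instruction on each request, so every answer stays grounded in the curated layers rather than the model's general knowledge.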
Reflection
Building this taught me how tricky curating a conversational tool could be. I had to balance breadth and depth in the knowledge base so that the bot could go wide across my projects but also go deep into any one of them without losing the thread of a real conversation. Portfolio reviewers have consistently responded to it the most, often spending more time playing with Sumant 2.0 than clicking through the actual projects.

Design-Assistant.md

Description
An exploration into defining taste and codifying what makes visual design feel good so that AI agents can produce work that reflects mine. The project is split into two parts: articulating the principles of strong visual design, and building a framework around them that captures the balance between usability and engagement.
How it works
The first part answers why something looks good. It began as a drum-based framework, where I translated the principles I use to compose drum parts in my band, such as dynamics, negative space, syncopation, and tension, into visual design terms. Dynamics became variation in weight, scale, and emphasis. Negative space translated directly. Syncopation became the rhythm of a layout's hits and pauses. Tension became the moments where a composition pushes against equilibrium to create interest. That framework evolved into a sharper three-point system for evaluating visual decisions: eye flow (where attention travels), visual technique (what choices are being made and whether they work), and emotional response (what the design actually makes you feel). The case study is a redesign of Frame, a hypothetical weight-management exercise app: I created variations of the app using the framework and studied which versions resonated most with users.
The second part is a multi-layered system: design tokens for the visual foundation, a written philosophy layer that captures the why, a JSON-based taste memory that accumulates preferences over time, and session-level feedback that takes priority over everything else. Claude Design sits on top as a potential execution layer, turning the framework into actual design output.
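The precedence between those layers is the crux of the system, and it reduces to a simple merge where higher-priority layers win per key. A minimal sketch, with illustrative token names and values that are assumptions, not the framework's real vocabulary:

```python
def resolve_style(tokens: dict, taste_memory: dict, session_feedback: dict) -> dict:
    """Merge the layers so later (higher-priority) layers win per key:
    session feedback overrides taste memory, which overrides the token
    foundation."""
    style = dict(tokens)            # visual foundation (design tokens)
    style.update(taste_memory)      # accumulated preferences over time
    style.update(session_feedback)  # takes priority over everything else
    return style
```

For example, if the tokens set an 8px radius and a blue accent, taste memory prefers an orange accent, and this session's feedback asks for a 12px radius, the resolved style is a 12px radius with the orange accent.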
Reflection
I spend a lot of time on craftsmanship and polish in the execution phase, and a tool that could absorb some of that work by learning my taste would save meaningful time without compromising on quality. This project grew out of AI Agency as I started thinking about how its UI phase could be grounded in a real point of view rather than generic output. I've gotten pretty far into phase one, defining good taste, and phase two is about executing the first version of the layered system. Eventually, Design-Assistant.md could become the visual layer inside AI Agency itself.