"What if AI could see how we 'feel' about misinformation?"
Lies Rot is a prototype of an experimental AR filter that reacts to AI fact-checking results in real time. When a statement is flagged as false or misleading, the filter triggers decaying visual effects: burning embers, glitching static, and mold-like particles that symbolically represent the disintegration of falsehood. Built with Lens Studio, this project extends the Real-Time AI FactChecker into an embodied, visual experience, making the truth (or the lack of it) visible and visceral.
Project Type
Experimental AR Interaction, Visual UX Prototype
Used Tools
Lens Studio, JavaScript, Snap AR, Illustrator, Photoshop, After Effects, Figma
What if misinformation could rot instead of spread?
I’ve always been fascinated by the way false information feels: invisible, slippery, fast, and hard to hold accountable. It doesn’t just mislead; it lingers, it spreads. It shapes perception without being seen. While building the Real-Time AI FactChecker, I started wondering: "What if we could make misinformation visible? Tangible? Undeniably… rotten?" That’s how Lies Rot began, not as a tool, but as a provocation. A thought experiment turned into a visual, interactive experience. Instead of letting falsehoods quietly pass through the feed, this AR filter stops them in their tracks. If the AI detects a lie, it doesn’t just say it’s false; it makes it decay right on your face. Because sometimes, the truth doesn’t need more words. It just needs to be seen.
Visual Preview
This isn’t just about detecting misinformation. It’s about exposing it, visually, instantly, and unforgettably. When truth confronts a lie, the screen responds. It shows how misinformation visually breaks down through a real-time AR filter, designed to make deception impossible to ignore.
Ideation & Iteration
To find the right visual language for decay, I explored multiple directions, each aiming to symbolize a different quality of misinformation. Here are a few iterations I tested during the ideation phase.
Iteration 1. Burning Embers
Simulates urgency and danger. Designed to feel like a warning, misinformation as a fire that spreads unless stopped.
Iteration 2. Mold & Rot
Suggests slow erosion and hidden toxicity. Misinformation is something that grows quietly and decays trust over time.
Iteration 3. Glitch Distortion
Represents fractured perception and confusion. Inspired by analog signal breakdowns: what it feels like when the signal isn't clear.
How It Works
Detection
A mock AI pipeline simulates real-time fact-checking results. In this prototype, the "false" trigger is manually activated to test visual reactions. The system is designed for future integration with real-time AI APIs.
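The mock pipeline can be sketched roughly as below. This is a minimal illustration, not the project's actual code; the function and field names are hypothetical. The key point is that the verdict is supplied manually rather than computed, so the visual reaction can be tested in isolation.

```javascript
// Hypothetical mock of the prototype's fact-checking stage.
// No real AI call happens here: the verdict is chosen manually,
// mirroring how the "false" trigger is activated in the prototype.
function mockFactCheck(statement, manualVerdict) {
  // A future version would call a real-time fact-checking API here.
  return {
    statement: statement,
    verdict: manualVerdict, // "true" | "false" | "misleading"
    flagged: manualVerdict === "false" || manualVerdict === "misleading",
  };
}

const result = mockFactCheck("The moon is made of cheese.", "false");
console.log(result.flagged); // true — this is what would trigger the decay visuals
```

Swapping `mockFactCheck` for a real API call is the only change the design anticipates; everything downstream just reads the `flagged` field.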
Response
A visual element (animation) represents the symbolic decay of misinformation. The effect is applied directly over the detected false statement.
Display
Built in Lens Studio, the filter uses custom scripts and particle systems to layer these effects in real time, so the misinformation becomes impossible to ignore: it literally falls apart on screen.
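Inside Lens Studio, the script would enable or disable particle-system scene objects; outside that runtime, the layering logic can be approximated in plain JavaScript. All names below are hypothetical stand-ins, not the project's actual objects.

```javascript
// Hypothetical sketch of the display layer: one toggle per decay effect.
// In Lens Studio these would be SceneObjects holding particle systems;
// plain objects stand in for them here.
const effects = {
  embers: { enabled: false }, // Iteration 1: urgency and danger
  mold:   { enabled: false }, // Iteration 2: slow erosion and hidden toxicity
  glitch: { enabled: false }, // Iteration 3: fractured perception
};

function applyDecay(flagged) {
  // Layer all three effects when a statement is flagged as false
  // or misleading; clear them when the statement checks out.
  for (const name of Object.keys(effects)) {
    effects[name].enabled = flagged;
  }
}

applyDecay(true);
console.log(effects.embers.enabled); // true — embers layered over the face
```

Keeping the effects behind a single boolean means the same switch works whether the verdict comes from the manual trigger today or a live fact-checking API later.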
Lens Studio workspace during AR effect development
Captured in Snapchat
What I Learned
"Misinformation isn’t just a data problem. It’s an emotional one."
This started as a visual experiment: what would misinformation look like if it were something we could feel? What surprised me was how visceral the experience became. Watching falsehoods burn, glitch, and rot triggered a kind of emotional unease I didn’t expect. It wasn’t about being “right” or “wrong”; it was about trust breaking down in real time.
Technically, this prototype is simulated: the “decay” effect is triggered manually by design, not by real-time detection. But that’s where it gets interesting. In future versions, I want to integrate real-time AI fact-checking, using systems like Gemini-based detection, semantic analysis, and knowledge grounding to power the response layer. Rather than faking the reaction, the filter would know when something is likely false and decay accordingly. This would turn a visual metaphor into a functional warning system, blending emotional response with actual AI insight.