The fashion industry generates over $1.5 trillion annually. Yet for the average person, getting dressed remains an unsolved software problem. Clothes get bought, worn twice, and forgotten. Resale is manual and painful. Personal style is invisible to machines.
Drip Deck is our answer.
Not a styling app. Not another e-commerce play. A digital wardrobe operating system — one that knows what you own, how it looks on a body, and what it is worth.
This is the full technical and strategic story of how we built it.
The Problem We Set Out to Solve
Most people own more clothes than they think they do, and wear fewer than they realize. The average wardrobe has 77 items. The average person wears 20% of them 80% of the time.
The reason is not taste. It is access. People cannot see their wardrobe at a glance. They cannot combine items across a mental model. They cannot get a second opinion without texting a photo to a friend and waiting.
The secondary market makes this worse. Selling a single item on Vinted or Depop requires photography, background removal, categorization, pricing research, and listing copy — 20 to 30 minutes per item. The friction is high enough that most clothes never get listed.
The opportunity was clear: solve the data layer, and everything else follows.
The Architecture
Drip Deck runs on three layers.
The mobile app is built in Flutter. We chose Flutter for a single reason: one codebase, iOS and Android, without compromising on native performance. Our garment upload flow, outfit generation pipeline, Squad social layer, and digital wardrobe grid are all Flutter. State management is minimal — we rely on Supabase real-time subscriptions rather than complex local state.
The backend is Supabase: PostgreSQL with Row Level Security, real-time WebSocket subscriptions, and Auth. Choosing Supabase over a custom backend cut our initial infrastructure work by several months. RLS policies mean we never write authorization logic in application code — it lives at the database level. We use Cloudflare R2 for image storage, which is S3-compatible and significantly cheaper at scale than AWS S3 or Google Cloud Storage.
The AI server runs locally on a Windows workstation with an NVIDIA RTX 4060. It is exposed to the internet via ngrok's static domain feature. This is intentional — GPU inference at the scale of an early-stage product does not justify cloud GPU costs. When we scale, we will migrate to dedicated inference infrastructure. Until then, the local server gives us full control and zero per-inference cost.
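For reference, pointing a reserved ngrok domain at a local inference server is a single command. The domain and port below are placeholders, not our actual endpoint:

```shell
ngrok http --domain=example-static.ngrok-free.app 8000
```

With a static domain, the mobile app can hold one stable URL for the AI server instead of chasing the random hostnames that free tunnels rotate through.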
The AI Pipeline: Virtual Try-On and Background Removal
The core product experience depends on two computer vision pipelines.
Virtual Try-On (VTON)
When a user photographs a garment, Drip Deck composites it onto a standard mannequin using a virtual try-on model. The result is a clean, consistent product image — the kind you would see on a fashion retailer's website — generated from a casual phone photo.
For clothing categories that work with a body model (tops, bottoms, outerwear, dresses, sportswear), we use FASHN's VTON 1.5 model. This is a commercial diffusion-based try-on model that produces photorealistic results with correct draping, wrinkle behavior, and lighting.
For accessories, shoes, bags, and jewelry, we use rembg, a background removal library based on the U²-Net architecture, to isolate the product on a clean white background.
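The routing between the two pipelines is a simple category check. A minimal sketch, with illustrative category and function names (the real dispatch lives in our upload handler):

```python
# Garment categories that make sense on a body model go through VTON;
# everything else gets background removal instead.
VTON_CATEGORIES = {"tops", "bottoms", "outerwear", "dresses", "sportswear"}

def choose_pipeline(category: str) -> str:
    """Return which vision pipeline should process this garment."""
    return "vton" if category.lower() in VTON_CATEGORIES else "rembg"

print(choose_pipeline("tops"), choose_pipeline("sneakers"))  # vton rembg
```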
Outfit generation extends this further. We run VTON twice sequentially: first the top garment is placed on the mannequin, then the output of that inference becomes the new "person image" for the bottom garment inference. The result is a complete outfit on a single mannequin, generated from two separate wardrobe items.
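The chaining logic can be sketched in a few lines. Here run_vton is a mock standing in for one try-on inference call, not the actual FASHN client; only the sequencing is the point:

```python
def run_vton(person_image: bytes, garment_image: bytes) -> bytes:
    """Placeholder for one try-on inference; returns the composited image.
    Mocked here by concatenation so the data flow is visible."""
    return person_image + b"+" + garment_image

def generate_outfit(mannequin: bytes, top: bytes, bottom: bytes) -> bytes:
    # Pass 1: place the top garment on the bare mannequin.
    with_top = run_vton(mannequin, top)
    # Pass 2: the pass-1 output becomes the new "person image"
    # for the bottom garment inference.
    return run_vton(with_top, bottom)

print(generate_outfit(b"mannequin", b"top", b"bottom"))
# b'mannequin+top+bottom'
```

Because each pass consumes the previous output, the passes cannot run in parallel; outfit generation takes roughly twice the latency of a single try-on.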
AI Licensing: A Longer Search Than Expected
Finding the right VTON model involved more due diligence than we anticipated.
Most state-of-the-art try-on models — including several prominent ones on HuggingFace — are released under non-commercial licenses. The CC BY-NC 4.0 license, for example, prohibits commercial use. Others use custom academic licenses that explicitly forbid product integration.
We evaluated several open-weight models before settling on FASHN's commercial offering. The open-source ecosystem for VTON is maturing fast (IDM-VTON, CatVTON, and StableVITON all produce impressive results), but licensing restrictions meant we could not use them in a commercial product without significant legal exposure.
This is a broader pattern in AI product development that deserves more attention. The gap between what is technically available and what is commercially licensable is significant. Many of the best-performing models are research artifacts, not production-ready software. Startups navigating this space need to read licenses carefully and — when in doubt — pay for a commercial license or build on genuinely open alternatives.
We expect this landscape to improve. The open-source community is actively building commercially permissive alternatives, and major model providers are introducing tiered licensing that accommodates early-stage startups.
The Social Layer: Drip Check
Drip Deck is not just a personal tool. It has a social dimension we call Drip Check.
The concept is simple. You photograph your outfit. You send it to your Squad — a closed group of people whose opinion actually matters to you. They react with 🔥 (fire) or 🧊 (ice). You make a decision.
The entire interaction is designed to take under 10 seconds.
This is deliberately different from Instagram or TikTok. There is no public feed. No follower count. No algorithmic amplification. No pressure to perform. The Squad is capped at 25 people — close enough to be honest.
The technical foundation is Supabase's real-time infrastructure. When a Drip Check is sent, recipients see it immediately via WebSocket — no polling, no push notification delay for in-app users. Reactions propagate the same way. The experience is closer to messaging than to social media.
The Squad system uses a simple friendship graph: a friends table with requester_id, addressee_id, and status (pending or accepted). Users discover each other via name or email search. Friend requests are accepted or declined in the app. Drip Checks are addressed to specific recipient IDs, not broadcast to a feed.
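A minimal in-memory model of that friendship graph, assuming the two-state flow described above (the real table lives in Postgres behind RLS; these names mirror the columns but the code is illustrative):

```python
from dataclasses import dataclass

@dataclass
class Friendship:
    requester_id: str
    addressee_id: str
    status: str = "pending"  # "pending" or "accepted"

friends: list[Friendship] = []

def send_request(requester: str, addressee: str) -> None:
    friends.append(Friendship(requester, addressee))

def accept(addressee: str, requester: str) -> None:
    # Only the addressee of a pending row can flip it to accepted.
    for row in friends:
        if (row.requester_id, row.addressee_id) == (requester, addressee):
            row.status = "accepted"

def are_friends(a: str, b: str) -> bool:
    # The edge is undirected once accepted, so compare as a set.
    return any(
        row.status == "accepted"
        and {row.requester_id, row.addressee_id} == {a, b}
        for row in friends
    )

send_request("alice", "bob")        # alice requests bob
accept("bob", "alice")              # bob accepts in the app
print(are_friends("alice", "bob"))  # True
```

The single-row, two-state design keeps the graph queryable with one indexed lookup per direction, which is all a 25-person Squad cap ever needs.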
Open Source Infrastructure
Drip Deck is built almost entirely on open-source software.
- Flutter — Apache 2.0, Google
- Supabase — Apache 2.0 (the platform is open-source; we use their hosted offering)
- rembg — MIT, Daniel Gatis
- FastAPI — MIT, Sebastián Ramírez
- PostgreSQL — PostgreSQL License (permissive)
- boto3 / AWS SDK for Python — Apache 2.0
- Uvicorn — BSD
The commercial components are Cloudflare R2 (storage), ngrok (tunneling), and the FASHN VTON API license.
We are strong believers in building on open-source foundations. The leverage is extraordinary: we are shipping a production-grade AI vision pipeline on top of work that represents thousands of engineering hours — for free, legally, with active community maintenance. The obligation is to give back where we can and to be honest about what we use.
What We Got Wrong
RLS recursion
Supabase Row Level Security is powerful but subtle. We hit an infinite recursion bug when our Squad policies referenced squad_members, which in turn referenced squads. The fix was simple — rewrite policies to avoid circular subqueries — but it cost us debugging time and is not obvious from the documentation.
Async UI and background processing
Our initial garment upload flow blocked the UI while the VTON model ran inference. That could take 60–90 seconds. We refactored to a fire-and-forget architecture: the garment is immediately inserted into Supabase with status: processing, the modal closes, a pulsing loading card appears in the wardrobe, and the inference runs as a top-level async function outside the widget lifecycle. The card updates automatically via polling when inference completes.
The key insight: in Flutter, widget disposal cancels async operations that are scoped to the widget. Background work must be top-level functions or managed by a service layer, not widget methods.
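The same pattern translates to any async runtime. A sketch in Python's asyncio, with illustrative names: the status row is written synchronously so the UI can show the loading card at once, while the long inference runs as a task owned by the event loop rather than by the caller:

```python
import asyncio

status: dict[str, str] = {}  # stands in for the garments table

async def run_inference(garment_id: str) -> None:
    await asyncio.sleep(0.01)        # stands in for the 60-90 s VTON call
    status[garment_id] = "done"      # the wardrobe card updates on this

def start_upload(garment_id: str) -> asyncio.Task:
    status[garment_id] = "processing"  # insert the row immediately
    # The task belongs to the event loop, not to this function, so the
    # "modal" can close right away without cancelling the work.
    return asyncio.get_running_loop().create_task(run_inference(garment_id))

async def main() -> None:
    task = start_upload("g1")
    assert status["g1"] == "processing"  # UI is already free
    await task
    assert status["g1"] == "done"

asyncio.run(main())
```

The Flutter version is the same shape: the only load-bearing decision is that the long-running future is created and awaited outside anything that can be disposed.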
Image caching
Flutter's Image.network aggressively caches by URL. When we regenerated outfit images at the same R2 key, the app kept showing the old image. The fix was passing key: ValueKey(imageUrl) to every Image.network that displays user-generated content.
What Is Next
Phase 2 is the commerce layer. We are building integrations with Zalando, About You, and other DACH fashion retailers. The vision: Drip Deck recognizes a garment in your wardrobe, identifies similar items available for sale, and surfaces resale pricing based on brand, condition, and category. The wardrobe becomes a two-sided interface between what you own and what the market values.
Phase 3 is B2B. Secondhand stores, powersellers, and fashion resellers have the same problem at scale — cataloguing inventory is slow and manual. Drip Deck's pipeline can process hundreds of items per day with one person and a phone.
The long-term thesis is simple: fashion data is an unclaimed infrastructure layer. The brands have it. The retailers have it. Individual consumers have never had it. Drip Deck is building the consumer side of that infrastructure, and using it to create a product that is genuinely useful before any monetization.
Closing
Building AI products in 2025 and 2026 is simultaneously easier and harder than it looks.
Easier, because the open-source ecosystem — models, inference libraries, vector databases, managed backends — has matured to the point where a small team can ship genuinely sophisticated AI features in weeks rather than years.
Harder, because licensing is messy, inference costs are non-trivial at scale, latency expectations are high, and the gap between a demo and a production-quality product is larger than most tutorials suggest.
Drip Deck is live. It is early. It is going to get significantly better.
If you are building in this space — or if fashion, AI, and product engineering are interesting to you — we would like to hear from you.