Lensdrop

When photographers spend less time delivering galleries, they have more time behind the camera.

SaaS

Artificial Intelligence

Role

Founder & Engineer

Timeline

Feb 2026 - Ongoing

Team

1 person, many tabs

Platform

Web + Desktop (Tauri)


The Real Problem

Photographers and studios were delivering event galleries through a mix of WhatsApp forwards, Google Drive links, WeTransfer uploads, and spreadsheet-based selection sheets. A single wedding shoot could mean 500 or more photos scattered across three or four sharing tools, and by the time the client came back with their picks, something had usually gone missing along the way.

When I started digging through photography forums and subreddits, the same frustrations came up over and over:

  • 'Clients send me selections in three different formats. I have to reconcile them by hand.'

  • 'Every shoot ends with me sending ten emails explaining how to download the gallery.'

  • 'I have no idea if a client is actually going to pick their photos or if they forgot the link exists.'

The real issue wasn't that any one tool was broken. It was that nobody had built something that owned the full loop: upload, share, select, deliver. Every studio was duct-taping their own version together, and every client was stuck learning yet another 'system' that was really three apps and a prayer.


What I Took Away

Next time, I'd talk to photographers before writing a single line of code. I spent the first few weeks building features I thought were obvious, then found out most of them were solving problems photographers didn't actually have. Even one conversation with a real studio owner changes more than a month of guessing.

Past that, what stuck from the build itself is simpler. Shipping is the real start. Everything before the first user signup is a guess, and the only way to find out what actually matters is to watch someone use it.

Boring tech wins every time. Next.js, Supabase, and R2 have been around long enough to be deeply documented, and not one of those calls has come back to bite me. Infrastructure compounds quietly. The small decisions at the start (R2 over S3, edge workers over traditional functions, Postgres over a specialty database) each saved more than I expected.


How I Built It

My rule was simple: boring tech for the backbone, real engineering for the parts that actually had to be different. Next.js, Supabase, and Cloudflare R2 handled most of the product. A small number of problems got their own deliberate solutions.

Face recognition went into the same database as everything else. I self-host InsightFace on a Lightsail box, computing 512-dimensional embeddings at photo upload. The embeddings live in Postgres with pgvector, indexed with HNSW and searched through a single RPC with a cosine-distance cutoff of 0.5. Face search runs under the same RLS policies as every other row, I don't pay for a separate vector database, and an hourly backfill cron catches any photos the embedding host missed.
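The matching logic itself is tiny. A rough sketch of the distance check the Postgres RPC performs, expressed in TypeScript (function names and the exact shape are illustrative; the real comparison runs inside pgvector via its cosine operator, not in application code):

```typescript
// Cosine distance between two embeddings, mirroring what pgvector's
// cosine-distance operator computes server-side. The 0.5 cutoff is the
// one described above; everything else here is a hypothetical sketch.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A candidate face matches when it sits within cosine distance 0.5
// of the query embedding.
function isMatch(query: number[], candidate: number[]): boolean {
  return cosineDistance(query, candidate) <= 0.5;
}
```

The HNSW index makes the same comparison approximate but fast at scale; the cutoff stays identical either way.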

Gallery watermarking happens at the edge, not the server. A Cloudflare Worker rasterizes the watermark SVG, composites in the studio's logo, and returns WebP without the request ever hitting my origin. Plan limits live in a single file too: retention days, storage caps, and feature gates side by side, so the product's economics can only change in one place.
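The single-file plan limits might look something like this. All field names and values here are illustrative, not Lensdrop's actual plans; the point is that every limit check funnels through one module:

```typescript
// Hypothetical sketch of a single plan-limits file: retention, storage,
// and feature gates together, so pricing changes touch exactly one place.
type PlanLimits = {
  retentionDays: number;
  storageGb: number;
  faceSearch: boolean;
};

const PLANS: Record<string, PlanLimits> = {
  free: { retentionDays: 30, storageGb: 5, faceSearch: false },
  studio: { retentionDays: 365, storageGb: 500, faceSearch: true },
};

// Unknown plan names fall back to the free tier rather than throwing,
// so a bad value in the database degrades instead of breaking delivery.
function limitsFor(plan: string): PlanLimits {
  return PLANS[plan] ?? PLANS.free;
}
```

Feature checks elsewhere in the codebase then read like `limitsFor(studio.plan).faceSearch` instead of scattering magic numbers.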

Let's Talk

I'm most energized by projects where I can architect messy systems, build with AI, and ship products that people actually use.


Beny Dishon

If you're hiring, building, or just curious, send me a note and I'll reply to everything.
