
Sneak Peek at Self-Serve: Build & Collaborate on Beautiful AI-Native Interfaces for Your Product

Spin up, co-edit, and ship an AI-native interface for your product in minutes.

Yo! Jai & Anthony here.

We’ve been shipping. This is a quick note on why we’re building Self-Serve, how we approached it, and what actually shipped.

Watch the product demo here:

Why Self-Serve (and why now)

People shouldn’t need to hop on a Zoom call with us just to try Cell. They should be able to build one themselves and decide within five minutes whether it’s useful enough to add to their product.

Goal: go from a URL → working, on-brand agent → embedded in your product before your coffee gets cold.

The setup (how we proved it, live)

On the morning of the demo, we Googled supply-chain platforms, grabbed the Flexport URL, and pasted it into OpenSesame.

Less than ten seconds later, we had a Cell:

  • Auto-matched colours and pulled the logo

  • Generated suggested prompts for your users

  • Panels for Design, Suggestions, Context, Endpoints, and Add-ons
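For the developers reading: our API isn’t public yet, so here’s a purely hypothetical TypeScript sketch of what that URL-to-Cell flow could look like. The endpoint, payload, and response shape are all invented for illustration.

```ts
// Hypothetical sketch only: the endpoint, payload, and response shape are
// illustrative assumptions, not OpenSesame's actual API.
type Cell = {
  id: string;
  brand: { primaryColor: string; logoUrl: string }; // auto-matched from the site
  suggestedPrompts: string[];                        // seed questions for your users
};

async function createCellFromUrl(siteUrl: string, apiKey: string): Promise<Cell> {
  const res = await fetch("https://api.example-opensesame.test/v1/cells", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ url: siteUrl }),
  });
  if (!res.ok) throw new Error(`Cell creation failed: ${res.status}`);
  return res.json() as Promise<Cell>;
}

// e.g. const cell = await createCellFromUrl("https://www.flexport.com", API_KEY);
```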

Introducing Collaboration

To start using collaboration, click the share button. Two modes: view or edit.

I sent Anthony an edit link over Slack. He popped onto the platform, and we could both edit and build our AI interface together.

Frictionless. This is the part we’re most proud of: teams can build a Cell together in real time. Share a link with view or edit access, see teammates appear instantly, and co-edit the same canvas: design tweaks, question order, context, even uploaded endpoints. Changes sync live for everyone, so PM, design, and eng can make decisions in one place instead of passing screenshots around. Fewer handoffs, faster truth, ship sooner.
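Under the hood, live co-editing like this generally comes down to broadcasting small edit operations to everyone holding the link. A minimal sketch of that pattern, with an invented socket URL and message shape (not our actual protocol):

```ts
// Hypothetical sketch: the socket URL and message shapes are invented for
// illustration; this shows the general broadcast pattern, not our real protocol.
type EditOp =
  | { kind: "design"; field: "primaryColor"; value: string }
  | { kind: "prompts"; action: "reorder"; order: number[] };

const ws = new WebSocket("wss://collab.example-opensesame.test/cells/cell_123");

// Apply teammates' changes to the local canvas as they arrive.
ws.addEventListener("message", (event) => {
  const op: EditOp = JSON.parse(event.data as string);
  console.log("applying remote edit:", op);
});

// Broadcast your own change to everyone holding the edit link.
function sendEdit(op: EditOp) {
  ws.send(JSON.stringify(op));
}

// Wait for the connection before sending the first edit.
ws.addEventListener("open", () => {
  sendEdit({ kind: "design", field: "primaryColor", value: "#1a73e8" });
});
```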

Control Your APIs

With Cell, you plug in your OpenAPI specs and we do the rest. Endpoints auto-populate for the whole team, you drop in auth, and you can query immediately. Answers stay grounded in your docs, context, and live API responses, so the agent behaves like a real part of your product.
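If you’re curious what “plug in your OpenAPI specs” could look like from code, here’s a hypothetical sketch; the endpoint and payload shape are assumptions, since this actually happens in the Endpoints panel:

```ts
// Hypothetical sketch: the endpoint and payload are illustrative assumptions,
// not OpenSesame's actual API.
import { readFile } from "node:fs/promises";

async function registerOpenApiSpec(cellId: string, specPath: string, apiKey: string) {
  const spec = await readFile(specPath, "utf8"); // your openapi.yaml or .json
  const res = await fetch(
    `https://api.example-opensesame.test/v1/cells/${cellId}/endpoints`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        spec,
        // Per-environment auth, so staging and prod stay separate.
        auth: { type: "bearer", env: "staging", token: process.env.STAGING_API_TOKEN },
      }),
    }
  );
  if (!res.ok) throw new Error(`Spec upload failed: ${res.status}`);
  return res.json(); // e.g. how many endpoints were ingested
}
```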

Add-ons
Turn on additional features for your interface only when you need them. Dictation is currently live; actions/workflows, analytics, and role routing are on the way. Everything is modular and toggleable, so the core stays fast and clean.
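Conceptually, an add-on is just an opt-in flag. A minimal sketch, using the add-on names from this post but an assumed config shape:

```ts
// Hypothetical sketch: the add-on names come from this post, but the config
// shape is an assumption. Everything is opt-in, so an untouched Cell stays lean.
const addOns = {
  dictation: true,    // live today
  workflows: false,   // on the way
  analytics: false,   // on the way
  roleRouting: false, // on the way
};
```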

What’s next
We’re adding real-time API tests so you can validate endpoints inside the Cell (latency, status, sample responses) before you ship.
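Concretely, a check like that boils down to timing a request and recording status, latency, and a sample body. Here’s the generic shape (illustrative code, not the in-product implementation):

```ts
// Illustrative code, not the in-product implementation: time a request,
// record its status and latency, and keep a truncated sample of the body.
async function probeEndpoint(url: string) {
  const start = performance.now();
  const res = await fetch(url);
  const latencyMs = Math.round(performance.now() - start);
  const sample = (await res.text()).slice(0, 200); // sample response, truncated
  return { url, status: res.status, latencyMs, sample };
}

// e.g. console.log(await probeEndpoint("https://api.example.com/health"));
```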

What this early version looks like (v1)

  • Self-Serve creation from a URL (auto-brand + seed questions)

  • Real-time collaboration with view/edit links

  • Large OpenAPI ingestion (~600 endpoints) + per-env auth

  • Add-ons starter with Dictation

  • Embeds for Next.js/React/HTML (quick sketch below)
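Since the embed is the end goal, here’s a hypothetical sketch of dropping a Cell into a React or Next.js app. The package name, component, and props are assumed for illustration, not a published SDK:

```tsx
// Hypothetical sketch: "@opensesame/react", <OpenSesameCell>, and its props
// are assumed names for illustration, not a published package.
import { OpenSesameCell } from "@opensesame/react";

export function SupportPanel() {
  // Renders the embedded Cell with the id from your dashboard.
  return <OpenSesameCell cellId="cell_123" theme="auto" />;
}
```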

If you want early access to the new self-serve version of our product, reply to this email. Until then, we’ll keep building in public :)

Thanks for paying attention.

— A & J