Overview

As the Founding Product Designer at bootstrapped startup Puzzmatic AI, I led the creation of a web app from inception to market launch (Nov 2022 - May 2024). Our goal was to build domain-specific chatbots and improve AI-human communication through chat and voice interfaces. Operating under significant resource constraints, we needed to validate our market hypothesis and achieve product-market fit within 12 months. 

Using the same underlying technology as ChatGPT, we trained embedding models to convert web pages into numerical vectors known as embeddings. We then used retrieval-augmented generation (RAG) to power an AI writing assistant and chatbot.
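To make the embedding idea concrete, here is a minimal sketch of embedding-based retrieval, the core of RAG. This is an illustration under assumptions, not Puzzmatic's actual implementation: the `Page` shape and `retrieve` function are hypothetical, and real embeddings would come from a trained model rather than hand-written arrays.

```typescript
// Illustrative RAG retrieval sketch: rank pre-embedded pages by cosine
// similarity to a query embedding, then use the top matches as context
// for the language model.

type Page = { url: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k pages most similar to the query; their text would then be
// injected into the chatbot's prompt as grounding context.
function retrieve(queryEmbedding: number[], pages: Page[], k = 3): Page[] {
  return [...pages]
    .sort((p, q) =>
      cosine(queryEmbedding, q.embedding) - cosine(queryEmbedding, p.embedding))
    .slice(0, k);
}
```

In production, the search would typically run against a vector index rather than sorting an in-memory array.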

My Role

As the sole designer at a startup in its earliest stage, I wore many hats. I crafted the brand identity, designed interfaces, and conducted user research to shape our product. Working directly with the CEO and engineers, I implemented a lean design process that cut our feature development cycle from four sprints to two while maintaining quality. This approach helped us optimize our runway while iterating on market feedback.

Opportunity

Our market analysis revealed a $50B opportunity in AI-assisted productivity tools, with leading competitors like ChatGPT and Character AI capturing the largest share of the market in 2022. We identified a critical gap: existing solutions were either too generic (like ChatGPT) or too complex for individual contributors (enterprise solutions at $100+/user). This presented an opportunity to capture the mid-market segment with a targeted solution at $20-40/user.

Work on this problem led to developing features including the messaging experience, customized AI co-pilots, the onboarding experience, and the payment flow.

Goal

Our goal was to design the UI/UX from the ground up, uncovering and simplifying our product's features to boost user adoption and generate our first revenue.

Constraints

  • Building a product from absolute zero.
  • Limited runway requiring rapid market validation.
  • Balancing time-to-market pressures, limited budget, and a lean team of just four people.
  • Competition from well-funded AI companies.

As we moved further into our design process, analytics revealed a high number of rage clicks in the core chat and voice experience. That is where we had our "aha" moment!

Trade-Off Decisions

  1. Focused on self-serve onboarding over feature richness to reduce support costs.
  2. Prioritized chat interface improvements over additional AI capabilities to address immediate user friction.
  3. Chose to build our own embedding model despite higher initial cost, projecting 70% lower operational costs at scale.

Metrics and Impact

During my tenure, the product posted strong month-over-month metrics in user feature adoption, helpfulness rate, and growth. Our design process led to the successful launch of features including the messaging experience, account creation, personalization, answer pages, and the pricing flow. We achieved our goal of generating first revenue while maintaining a simple interface that our small engineering team could feasibly build.

Starting from scratch, we achieved the following results in our first year:

1. User Growth and Engagement:

  • We went from 0 to 600+ active users in the first year
  • We grew from zero to 1K+ visitors
  • After acquiring our first users, we saw a 27% month-over-month growth for the next quarter

2. User Experience:

  • 400% increase in user session duration after implementing the typing indicator effect
  • 80% fewer rage clicks (indicating reduced user frustration)
  • 45% decrease in error rates during AI interactions
  • 4.5/5 average user satisfaction score, up from 3.2/5 before a redesign
  • 59% reduction in UI-related bug fixes

3. Feature Adoption:

  • 1K+ interactions in core experience between humans and AI
  • 52% of users regularly used at least three AI personas, up from 30% in the first three months after acquiring our first users.

Design Process

As a startup, we needed to move fast, so we started with an MVP, keeping the big picture in view while scaling back formal research. Rather than following a traditional design process, I implemented a lean methodology optimized for our constraints:

  • Discovery - Research
  • Prototype/Wireframe/Visualize/Blueprint
  • Test/Review
  • Report/Reflect/Plan/Ideate

This approach helped us prioritize features based on an engineering effort vs. revenue potential matrix.

#1 Discovery: my WHYs

In Agile/Scrum, it can be challenging to see how iterative changes affect the end-to-end experience. So, I began by implementing an annual E2E Intercept Study to learn how our customers sought answers to their questions. We started by writing PRDs, identifying obvious elements, and setting the business context.

To answer user-related questions, I conducted a foundational user interview. I implemented a rapid testing cycle using a guerrilla research method, saving us at least $5K in research costs. It revealed who had a problem (ICs), why it was a problem for them, their pain points, and how our product could benefit them.

Key findings from these studies were not only reported but also tracked in a UX Backlog to help prioritize work. Quarterly, I brought stakeholders together to review and prioritize our backlog.

Understanding Users' Needs

I organized data and insights using affinity diagramming, discovering patterns that led to the creation of customer journey maps, personas, and user stories. I applied the Job-to-be-done (JTBD) framework, asking: "Will users be able to easily complete the job they 'hired' the product for? Does this solution provide a better outcome than existing ones?" This approach yielded clear answers:

An IC's JTBD is "seeking a quick, accessible, highly trusted, data-rich assistant for various tasks."

IC's User Story: "As an IC and task solver, I want to have immediate access to a versatile AI assistant so that I can efficiently get information, generate ideas, and complete various tasks without extensive research."

To visualize the user experience, I designed a user flow depicting the basic start-to-finish journey of purchasing a subscription. This helped us understand user interactions and navigate through user goals effectively.

#2 Design and Test

“If you are not embarrassed by the first version of your product, you've launched too late.” Reid Hoffman

Chat and Voice Interface (Our Before)

Our first design, created in just two hours, was minimalistic: a white background, an image, and a chat field. We quickly launched this version for user testing. The feedback was invaluable, offering insights on colors, styles, design patterns, and functionality.

We also implemented quick feedback loops with early adopters. ROI: saved $19.5K in research costs while gathering insights.

Browse Page (Our Before)

The browse page allowed users to find and select chatbots. User research revealed the need for improved visual design, consistency, interactions, and accessibility. Users requested features like filters, a search bar, and short descriptions for each persona. One user compared our browse page to a marketplace, which inspired an Amazon-style shopping experience redesign.

I used an affinity diagram in Miro to separate the data into groups of tasks, further categorized by high-level goals for improvement in efficiency, process, depth, and familiarity.

Key Insights from Research:

  1. Usability: Users complained about difficulty finding specific pages (like the personas page), a missing browse button, and unclear click targets (our analytics even flagged the rage clicks).
  2. Design: We received critical feedback about the text being too small for dyslexic users and the chat window being too small on desktops. Users also found the color palette boring and mismatched with the product's purpose, and a variety in font styles was pointed out as confusing.
  3. Chatbot Interaction: Users asked for a snapshot of the in-chat experience (home page) before diving in. They also described the chat as one-sided: responses took too long, leaving them uncertain about waiting times.
  4. AI Personality Feature: Users desired more customization options and detailed persona descriptions.
  5. Information Presentation: Participants suggested simplifying the homepage and adding guides or instructions for using the chat.
  6. Sharing Feature: Users were interested in sharing personas with friends (a social sharing feature).

#2.1 Design "After" and "Aha" Moment

Rather than implementing all requested features, we prioritized based on a business impact vs. resource cost matrix, which pointed us toward optimizing the chat experience.

Cross-functional workshops and the use of experiments were part of our regular process; they helped us redefine our core experience and even inspired us to replatform onto new technology.

We used the Linear app as our task management tool to keep our work organized and efficient.

New Core Experience

Over three months, we conducted usability studies and implemented at least seven iterations. Here's what we changed based on user feedback:

  1. We created our style guide informed by color psychology, striving for a calming effect that still evoked trust and sophistication.
  2. We improved the technology behind the product, and our chatbots were more talkative, asking follow-up questions.
  3. We added a search bar and "AI Personas" to the menu to make it easy to switch between copilots.
  4. Users could call or message right from the same interface.
  5. We added a left menu where users could see their chat history with different personas. We wanted to eliminate the feeling of chatting with something other than a real person and make the product feel like a modern chat app.

"Aha" Moment

A significant breakthrough came when we noticed every third user was leaving the chat window after 30 seconds, with numerous rage clicks recorded in our analytics.

Situation: High drop-off rate and user frustration in the messaging experience. Each abandoned chat session cost us approximately $2.50 in CAC.

Task: Understand WHY and decrease the drop-off rate in the messaging experience.

Action: To figure out the root of frustrations:

  1. I conducted guerrilla interviews with our users (the most budget-friendly and controlled way to test with users).
  2. We recorded user sessions: replays of visitors' real actions as they browse a website. This helped us analyze behavior across the entire journey, from the starting page to the exit page, including how visitors switched tabs and where they got stuck.

Result: We discovered users couldn't tell if the chatbot was active or the system was frozen. Instead of a complete redesign ($15K development cost), we identified the typing indicator as a high-impact, low-resource solution. I designed a typing indicator effect for chat conversations, signaling that the copilot was active and responding. We pushed this update, resulting in a 400% increase in user session duration and 80% fewer rage clicks (frustrations).
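In spirit, the typing-indicator fix boils down to a small piece of UI state. The sketch below is a hypothetical reconstruction, not the shipped code; the state and event names are assumptions made for illustration:

```typescript
// Hypothetical sketch of typing-indicator state in a chat UI: show the
// indicator from the moment the user sends a message until the copilot's
// reply finishes (or fails), so the system never looks frozen.

type IndicatorState = "hidden" | "typing";
type ChatEvent = "userSent" | "tokenReceived" | "responseComplete" | "error";

function nextIndicatorState(state: IndicatorState, event: ChatEvent): IndicatorState {
  switch (event) {
    case "userSent":        // request in flight: show the dots
    case "tokenReceived":   // reply still streaming: keep showing them
      return "typing";
    case "responseComplete":
    case "error":           // reply finished or failed: hide the dots
      return "hidden";
  }
}
```

Modeling the indicator as explicit state driven by chat events (rather than ad hoc show/hide calls) is what makes a fix like this cheap to ship and easy to test.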

New Browse Page

Based on all the findings from the usability studies, I redesigned the browse page and created new pages.

#3 Analytics: Design Validation

I validated our designs through research and focus/heat maps, constantly using metrics and insights from our analytics system (PostHog) to improve our interfaces. This data-driven approach allowed us to make informed decisions about user interaction patterns and preferences.

Puzzmatic Interfaces to Support AI Features

Branding Materials & Identity

Learnings

  • With more budget, I would have expanded our design team and invested further in engaging with our users. That would have helped us understand user needs from a broader design standpoint and validate decisions.
  • Use a sheet, Jira, or other tracking software to monitor progress.
  • The use of analytics and heat maps underscored the importance of basing design decisions on concrete data rather than assumptions.
  • The project reinforced the need to stay adaptable in the face of changing requirements and user feedback, since it is easy to fall into the sunk cost fallacy.
  • One critical lesson that has stuck with me is the power of prioritization. It has made such a big difference, not just in my work but in everything I do!
  • A big shoutout to Tom Greever! With his help, I've learned to wear multiple hats and articulate my design decisions.

Puzzmatic (Web App): AI SaaS from 0→1

Timeline: Nov 2022 — May 2024
Company: Puzzmatic AI
My Role: Founding Product Designer
Team: CEO, Developers, Product Designer
Project type: Web app
Applied Skills: Branding, UX Research, Concept Development, Wireframing and Prototyping, Visual Design, Collaboration
Tools: Figma, Photoshop, Linear, PostHog, Google Docs, Sheets, Notion