Builder Studio

A flagship platform that bridges the gap between customers and development teams. Powered by AI-driven conversations and machine learning–based feature dependency mapping, it simplifies requirement gathering and helps users shape their app ideas into reality.

Status

Shipped

Timeline

2023 — 2025 (Over multiple phases)

Tools

Mixpanel • Hotjar • Figma • FigJam • Veed.io • After Effects

Team

6+

My Role

User Testing • Brainstorming • Competitor Analysis • Systems Thinking • Conversational Design • Wireframing • Workshops • Prototyping

About Builder Studio

Builder Studio 3.0 is Builder’s flagship platform, an AI-powered tool designed to help anyone bring their app idea to life. It enables users to effortlessly drag and drop features, set delivery timelines, and customise their app with simplicity and speed.

The Problem

The older version of Builder.ai’s Studio 3.0 let users drag and drop different feature blocks to create the app they wanted. Often, users could choose a pre-existing popular app and take inspiration from its feature list. However, this approach was limiting and inflexible, and it often failed to collect the granular but crucial information needed to create a clear requirement board.

Builder.ai needed a more effective way for users to provide their requirements in a structured manner, along with more information about their delivery preferences and the details of their app.

The Goal

The goal was to enhance buildcard quality by capturing richer, more relevant data, helping users make clearer decisions. This aimed to reduce cart abandonment and, in turn, boost overall conversion rates.

Impact & Results

50% more data captured: enriched Buildcard inputs led to significantly improved data quality.

28% reduction in cart abandonment: a streamlined flow encouraged users to stay and complete their build.

15% improvement in overall conversion: enhanced usability and flow translated directly into better conversion rates.

Highlight Videos

Studio 4 - AI-powered app creator

Initial release I worked on - focused on guided app creation using AI.

Studio 5 - The future of Natasha AI

Next iteration of Studio 4

Studio 5 - Latest evolution

Voice Interaction and refinement

Enjoyed the overview? Buckle up, it’s a long one for the curious minds.

Discovery

Audit: Evaluating the Current Builder Studio Experience

We began with a heuristic evaluation using Nielsen’s heuristics: I and several other product designers independently reviewed the interface. By prioritising issues by severity, we proposed actionable solutions to improve the overall user experience. This kind of evaluation catches usability problems early, keeps analysis cost-effective, and helps ensure an intuitive, user-friendly product.

Understanding Current Users

To understand how users were interacting with Builder Studio’s existing flow, we combined quantitative data from Hotjar and Mixpanel with qualitative insights from sales and support call recordings. Hotjar revealed key usability issues through heatmaps and session recordings:

Scroll maps showed that 72% of users never reached the feature library, which sat below the fold, explaining low feature selection rates.

Session recordings highlighted hesitation and erratic mouse movements around the delivery stage, often followed by exits or long idle times, signs of user confusion.

Click heatmaps showed high interaction with inactive or non-clickable UI elements, especially on the preview screen, indicating misaligned expectations.

Sales and support call recordings revealed that many users struggled to articulate their app idea without clear guidance and often didn’t understand what was expected of them during key steps in the journey. Several users voiced frustration with Natasha, describing her as more of a scripted chatbot than a responsive AI, unable to adapt to their input or provide meaningful help.
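
The quantitative side of this analysis is conceptually simple: count how many distinct users reach each stage of the flow and compare adjacent stages. The sketch below illustrates the idea only; the event names and data shape are hypothetical, not Builder.ai’s actual Mixpanel instrumentation.

```typescript
// Hypothetical analytics export: one row per user per event.
interface AnalyticsEvent {
  userId: string;
  event: string;     // e.g. "opened_feature_library" (illustrative names)
  timestamp: number; // Unix epoch, ms
}

// Ordered funnel stages we want to measure (illustrative only).
const FUNNEL: string[] = [
  "started_buildcard",
  "opened_feature_library",
  "reached_delivery_stage",
  "completed_buildcard",
];

// Count distinct users per stage and report drop-off between stages.
function funnelReport(events: AnalyticsEvent[]): void {
  const usersByStage = new Map<string, Set<string>>();
  for (const stage of FUNNEL) usersByStage.set(stage, new Set());
  for (const e of events) {
    usersByStage.get(e.event)?.add(e.userId);
  }

  let previous: number | null = null;
  for (const stage of FUNNEL) {
    const count = usersByStage.get(stage)!.size;
    const dropOff =
      previous !== null && previous > 0
        ? `${(100 * (1 - count / previous)).toFixed(1)}% drop-off`
        : "entry stage";
    console.log(`${stage}: ${count} users (${dropOff})`);
    previous = count;
  }
}
```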

These findings reinforced the core problem: the existing drag-and-drop flow was limiting and inflexible, and it failed to capture the granular but crucial information needed for a clear requirement board. Builder.ai needed users to be able to provide their requirements in a more structured manner, along with more detail about their delivery preferences and their app.

Key improvements became the focus:

Enhancing Buildcard quality by capturing richer data

Providing a clear journey to completion

Reducing cart abandonment

Increasing Buildcard completion rate

Boosting overall conversion rates

User Interviews

We conducted interviews with startup founders and product managers actively exploring no-code tools. While their goals varied, pain points started to surface.

User Interviews - Studio 3.0

What we found out

Drag-and-Drop Limitations

While Studio 3.0’s block-based builder made getting started faster, it was extremely manual and slow, and it didn’t capture crucial context.

Template Over-Reliance

Users gravitated toward popular app templates (e.g., Uber clones), but these often misaligned with their unique business needs.

Feature Organisation Chaos

Without role-based grouping, features became cluttered, making it hard for developers to interpret scope.

Contextual Gaps

Users lacked the ability to annotate features with notes or specify intent in detail, which further hampered clarity.

Delivery Confusion

Ambiguous choices around timeline, platforms, and pricing created hesitation and surprise costs at the end.

Cognitive Overload

The multi-step process, though necessary, risked overwhelming users without progressive guidance.

Founders and product managers seek low-code/no-code solutions with clear guidance

Many of these users aren’t technical, so they want tools that let them build or spec an app without coding, but they still need help understanding what to do next or how to express their idea clearly. They don’t just want tools; they want smart guidance built into the process.

Traditional app briefs lack structure, often miss critical details

When users describe their app idea (in a form or during onboarding), they often leave out important information, like who the users are, what features they actually need, or how it should work. Their briefs are vague, which causes delays or miscommunication in the build process.

Inspiration & Competitor Analysis

Our competitive analysis highlighted a clear gap in the market. While tools like Bubble and Appy Pie offer strong drag-and-drop functionality, they rely on users to manually define their feature list, leaving those still in the ideation phase without guidance to translate their business concepts into technical requirements. Addressing this gap allowed us to differentiate meaningfully and secure a unique place in the market.

We benchmarked key no-code and low-code platforms, such as Appy Pie, Bubble.io, and Webflow, focusing on three core areas: sign-up experience, self-serve guidance, and human interaction. Our evaluation revealed that while many competitors offered easy sign-up and rich educational content, most lacked proactive human support and relied heavily on users to figure things out alone.

Live chat was often slow or robotic. For example, on Appy Pie, it took over 30 minutes to get a useful response, despite the promise of 24/7 support. Most platforms offered documentation but missed opportunities to guide users contextually within the product.

These insights showed a clear opportunity for Builder to differentiate with structured onboarding, embedded AI support, and fast, contextual human help, especially for first-time founders unfamiliar with technical terminology.

User Archetypes

We focused our strategy on The Executor: a high-intent, ROI-driven user and the archetype most likely to convert. Their need for clarity, structure, and trust informed key design decisions around onboarding, pricing transparency, and delivery, helping reduce friction at critical conversion points.

Design Challenges & Goals

Our central question began to form.

How might we guide users to articulate detailed requirements within an intuitive, structured flow?

We prioritised three end goals:

Streamlined Requirement Capture

AI-guided wizard translating user ideas into actionable modules and journeys.

Intelligent Feature Planning

Role-based grouping of features, complete with granular metadata and real-time previews.

Transparent Delivery Process

Progressive checkout with live cost/timeline estimates and clear ownership/export options.

Workshops

To better understand user motivations and breakdowns across the journey, we ran collaborative workshops with the UX research team focused on two areas: user compromises and “wow” moments, which highlighted friction and delight points.

Key Compromises Uncovered

Lack of Clarity & Guidance

This directly affected user confidence and completion. Internal jargon confused users, and they weren’t clear on what they were building or getting. It’s foundational and likely caused drop-off.

Weak Value Communication

If users didn’t understand the platform’s value or what made it magical, they were less likely to stay engaged or convert. This touches both UX and marketing.

Poor Timing of Visualisation & Feedback

Delaying previews and intelligent guidance meant missed chances to excite users early — a big issue for first impressions and momentum.

WOW Moments Identified

Clear Visual Previews

When previews matched what users picked, it helped them feel in control and understand what they were building.

Support that feels human

Timely, friendly messages (from AI or humans) made users feel guided and reassured — especially during tricky steps.

User Journey flows

We mapped the user journey to uncover key friction points and opportunities. This helped us identify where users felt confused or unsupported, and informed where to focus our design efforts.

Designing the System & Flows

Agentic Structure: Translating Ideas into Buildable Outputs

Based on early research, we collaborated with the data science team to define a technical structure that Natasha could use to interpret vague user input. We scoped what was achievable with machine learning and used that to design a flow that turns open-ended ideas into structured, buildable outputs. For example, when a user says, “I want to build an app like Uber,” Natasha follows a logic model to ask what platform it’s for, who the users are, and what core journeys and features are needed. This ensured that even high-level ideas could be accurately scoped into Buildcards.
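
The underlying model belongs to the data science team, but the interaction pattern Natasha follows is essentially slot filling: keep asking until the minimum set of structured fields is known. Here is a minimal sketch of that logic, assuming illustrative field names and questions rather than the production schema.

```typescript
// Draft requirements extracted so far from the conversation (illustrative shape).
interface RequirementDraft {
  appIdea?: string;        // e.g. "an app like Uber"
  platforms?: string[];    // e.g. ["ios", "android", "web"]
  userTypes?: string[];    // e.g. ["rider", "driver"]
  coreJourneys?: string[]; // e.g. ["book a ride", "accept a ride"]
}

// Each required slot pairs a completeness check with the follow-up
// question the assistant would ask if the slot is still empty.
const REQUIRED_SLOTS: { missing: (d: RequirementDraft) => boolean; question: string }[] = [
  { missing: d => !d.platforms?.length,    question: "Which platforms should this run on?" },
  { missing: d => !d.userTypes?.length,    question: "Who are the different kinds of users?" },
  { missing: d => !d.coreJourneys?.length, question: "What are the core things each user needs to do?" },
];

// Returns the next clarifying question, or null once the draft is
// complete enough to be turned into a structured Buildcard.
function nextQuestion(draft: RequirementDraft): string | null {
  const slot = REQUIRED_SLOTS.find(s => s.missing(draft));
  return slot ? slot.question : null;
}

// A vague opening statement triggers the first follow-up.
console.log(nextQuestion({ appIdea: "I want to build an app like Uber" }));
// -> "Which platforms should this run on?"
```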

Diagnosing Pain Points Along the Journey

Beyond mapping the user flow, we broke it down to expose user pain points and Natasha’s supporting role at each step. This layered view helped us fine-tune the assistant’s prompts, surfacing where users get stuck and how to guide them toward clearer outcomes.

Conversational Design

To shape Natasha’s voice and guide the user journey, our copy team collaborated with us closely on the conversational design. We mapped real example dialogues, tested tone and phrasing, and refined each prompt to feel intuitive and human, helping users feel supported, not scripted.

Solidifying Core Stages

We broke the app-building process into clear, structured stages, from initial AI chat to final sign-off. At each step, we refined the experience to improve clarity, reduce user friction, and ensure smooth handoffs. This visual flow helped align teams and build a more intuitive journey for users.

Conceptualisation - Wireframing

I worked with the content and copy team to turn abstract concepts into clear, focused layouts. Using research and journey mapping, we shaped the app’s structure to feel intuitive and easy to follow. The wireframes outlined key screens and content, and we iterated on them to better match how users actually think and navigate.

Solution Highlights

Introducing A Better Concept

To help users better articulate what they needed, I introduced a new nested structure: user types → journeys → features. Rather than starting from a long list of technical features, users could now describe their app based on who it’s for and what it needs to do. This shift made the process more intuitive and collaborative — users felt less overwhelmed, and the team could align more easily on scope. It also laid the groundwork for a smarter, more flexible way to recommend and organise features through journeys.
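
As a data model, the nested structure is simple to express. The sketch below shows an illustrative shape (not the production Buildcard schema), using the “app like Uber” example from earlier.

```typescript
// Illustrative shape of the nested structure: user types own journeys,
// journeys own features. Not the production Buildcard schema.
interface Feature {
  name: string;
  note?: string; // optional annotation to capture intent
}

interface Journey {
  name: string;
  features: Feature[];
}

interface UserType {
  role: string;
  journeys: Journey[];
}

// An "app like Uber" described by who it is for and what they need to do.
const rideSharingApp: UserType[] = [
  {
    role: "Rider",
    journeys: [
      {
        name: "Book a ride",
        features: [
          { name: "Location search" },
          { name: "Fare estimate", note: "show price before confirming" },
          { name: "In-app payments" },
        ],
      },
    ],
  },
  {
    role: "Driver",
    journeys: [
      {
        name: "Accept and complete a trip",
        features: [{ name: "Trip requests" }, { name: "Turn-by-turn navigation" }],
      },
    ],
  },
];

console.log(`${rideSharingApp.length} user types defined`);
```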

Guided onboarding

A seamless, step-by-step onboarding process with clear prompts and tooltips. This guided journey empowers users to quickly master the app with confidence.
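
One common way to build this kind of walkthrough is a declarative list of steps that the UI advances through, each anchored to an element with its own prompt. The step names and copy below are invented for illustration; they are not the product’s actual onboarding configuration.

```typescript
// Declarative onboarding steps: the UI advances through these in order,
// anchoring a tooltip to the named element at each step. Illustrative only.
interface OnboardingStep {
  id: string;
  anchor: string; // CSS selector of the element to highlight
  prompt: string; // short instruction shown in the tooltip
}

const ONBOARDING_STEPS: OnboardingStep[] = [
  { id: "describe-idea",   anchor: "#idea-input",       prompt: "Tell Natasha what you want to build, in your own words." },
  { id: "pick-users",      anchor: "#user-types",       prompt: "Confirm who will use the app." },
  { id: "review-features", anchor: "#feature-list",     prompt: "Add or remove the features suggested for each journey." },
  { id: "set-delivery",    anchor: "#delivery-options", prompt: "Choose platforms and pace, then review the live estimate." },
];

// The active step index is the only state the walkthrough needs.
function nextStep(current: number): OnboardingStep | null {
  return ONBOARDING_STEPS[current + 1] ?? null;
}
```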

Reducing the paradox of choice

In the older version, users saw too many unrelated, preset app options, causing overwhelm and hesitation.

The redesign brings tailored, relevant features upfront, making the process feel clearer and helping users decide faster with more confidence.

A collection of 500+ known apps

Referencing familiar apps helps users communicate their ideas more clearly and effectively, speeding up the design process and improving collaboration. It also ensures the final product feels intuitive and grounded in real-world use cases.

Users can use publicly available apps as inspiration for their own projects.

Quick Visualisation

Users get a clear view of their app upfront, with features grouped by user role and a matching splash screen generated by AI. This helps them align on the concept early, reduces misunderstandings, and builds confidence to move forward.

Customising Look & Feel

Users can personalise their app experience by adjusting their app’s appearance and functionality during this stage.

Users can upload their logo, and an appropriate colour will be automatically selected. They will also have options to modify various visual aspects of the app’s look and feel, such as buttons, fonts, colours, and more.
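
The automatic colour pick can be as simple as finding the most frequent, sufficiently saturated pixel in the uploaded logo. The sketch below shows that heuristic; it is an illustration of the idea, not Builder’s actual colour-extraction implementation.

```typescript
// Given decoded RGBA pixel data from the uploaded logo, return the most
// frequent sufficiently-saturated colour as a hex string. A simple
// heuristic sketch, not a production colour-extraction pipeline.
function dominantColour(pixels: Uint8ClampedArray): string {
  const counts = new Map<string, number>();
  for (let i = 0; i < pixels.length; i += 4) {
    const [r, g, b, a] = [pixels[i], pixels[i + 1], pixels[i + 2], pixels[i + 3]];
    if (a < 128) continue;                                    // skip transparent pixels
    if (Math.max(r, g, b) - Math.min(r, g, b) < 30) continue; // skip greys/whites
    // Quantise to 32-level buckets so near-identical shades count together.
    const key = `${r >> 3},${g >> 3},${b >> 3}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  let best = "0,0,0";
  let bestCount = 0;
  for (const [key, count] of counts) {
    if (count > bestCount) {
      best = key;
      bestCount = count;
    }
  }
  // Reconstruct the bucket centre and format as #rrggbb.
  const [r, g, b] = best.split(",").map(v => (Number(v) << 3) + 4);
  return "#" + [r, g, b].map(v => v.toString(16).padStart(2, "0")).join("");
}
```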

Guidance for new users

Many of our users aren’t highly technical, so they rely heavily on the interactive guidance to navigate the process with confidence.

Customising App Requirements

The Refine Idea stage allows users to fine-tune their app’s requirements.

Users can review all the user roles, journeys, and associated features recommended by our AI feature generator. They can then decide whether to add or remove any specific feature or journey.

Deciding the Delivery Preferences

Users gain precise control over their platform, customising it to match their vision. They can set their own development pace and choose how to deploy their app in the cloud, creating a tailored, seamless experience aligned with their goals and timeline.

Final Review

A final review screen lets users confirm all app details and interact with a live prototype. They can easily share it with others or book a call with an expert if they need additional support.

Conversational AI agent (Natasha)

Captures requirements through text or voice, suggests modules and features, and asks smart follow-ups automatically.

Feature Library

A wide range of journeys and features with searchable tags, real-world inspiration, and customisation options.

Progressive Checkout

Checklist-style flow with live pricing and timeline updates. Users can save progress and return later.
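
Live estimates work because every selectable feature can carry a rough price and effort weight, so totals are recomputed on each change. The figures, field names, and pace multipliers below are invented for illustration; they are not Builder’s pricing model.

```typescript
// Each selectable feature carries a rough price and effort weight so the
// checkout can recompute totals on every change. All figures are invented.
interface PricedFeature {
  name: string;
  price: number;       // currency units
  effortWeeks: number;  // sequential build effort
}

type DeliveryPace = "relaxed" | "standard" | "speedy";

// Faster delivery compresses the timeline but raises the price.
const PACE = {
  relaxed:  { timeFactor: 1.3, priceFactor: 0.9 },
  standard: { timeFactor: 1.0, priceFactor: 1.0 },
  speedy:   { timeFactor: 0.7, priceFactor: 1.25 },
} as const;

function estimate(features: PricedFeature[], pace: DeliveryPace) {
  const base = features.reduce(
    (acc, f) => ({ price: acc.price + f.price, weeks: acc.weeks + f.effortWeeks }),
    { price: 0, weeks: 0 },
  );
  return {
    price: Math.round(base.price * PACE[pace].priceFactor),
    weeks: Math.ceil(base.weeks * PACE[pace].timeFactor),
  };
}

// The estimate updates as soon as a feature or the pace changes.
console.log(estimate(
  [{ name: "Login", price: 400, effortWeeks: 1 }, { name: "Payments", price: 1200, effortWeeks: 3 }],
  "speedy",
));
// -> { price: 2000, weeks: 3 }
```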

User Testing

To validate our approach and refine the experience, we ran multiple moderated user testing sessions across a range of user types — from tech-savvy founders to non-technical first-time entrepreneurs. These sessions gave us both behavioural observations and direct user feedback, helping us assess usability, clarity, and perceived value. We found the following:

Overwhelmed by Feature Complexity

Users appreciated the ability to refine and group features, but many felt overwhelmed by the number of available options and the lack of clear guidance on prioritisation or relevance to their app type.

Lack of Clarity in the AI Chat Output

While the AI agent effectively gathered user intent, some users struggled to understand how their input translated into concrete features, leading to doubts about the accuracy and completeness of the AI-generated flow.

Uncertainty Around Expectations

The delivery configuration step raised questions about timelines, platform limitations, and pricing. Users wanted more contextual help and transparency to make informed decisions before committing to the next step.

Final Screens

Further Improvements

Exploration of AI-generated splash screens

To humanise the experience, we experimented with AI-generated splash screens using Stable Diffusion, DALL·E, and Adobe Firefly. By running 1,000+ prompts and manually curating outputs for training data, we landed on imagery that reinforced character and set the tone for each stage of the journey.

We also designed over 1,000 mobile and web screen templates covering various verticals, ensuring consistency and scalability. This allowed us to construct the UI for our prototypes and app visualisation.
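
Running 1,000+ prompts is manageable when they are generated programmatically from a small matrix of subjects, styles, and moods and then fed to whichever image model is being tested. The vocabulary below is illustrative, and the actual image-generation call is deliberately left out.

```typescript
// Build a batch of splash-screen prompts from a small matrix of app
// verticals, visual styles, and moods. The vocabulary is illustrative;
// the resulting strings would be sent to the image model being tested.
const VERTICALS = ["ride sharing", "food delivery", "fitness coaching", "online learning"];
const STYLES = ["flat illustration", "3D render", "soft gradient shapes"];
const MOODS = ["optimistic", "calm", "energetic"];

function buildPrompts(): string[] {
  const prompts: string[] = [];
  for (const vertical of VERTICALS) {
    for (const style of STYLES) {
      for (const mood of MOODS) {
        prompts.push(
          `Mobile app splash screen for a ${vertical} app, ${style}, ${mood} mood, centred hero artwork, no text`,
        );
      }
    }
  }
  return prompts;
}

// 4 x 3 x 3 = 36 prompts per batch; scaling the lists scales the batch.
console.log(buildPrompts().length);
```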

Studio 5.0

After launching Studio 4.0 in 2024, our design team initiated an R&D effort to reimagine the platform, dubbed Studio 5.0. Inspired by the growing demand for voice interfaces in popular applications like ChatGPT and Gemini, we prioritised voice interaction to create a hands-free experience, significantly speeding up user input collection. We also enhanced AI capabilities, focusing on:

- Improved conversational AI.
- AI-generated mood boards for visual direction.
- Smarter requirement gathering.
- Refined idea iteration to better align with user goals.

This was also an opportunity to modernise the platform’s look and feel. A key challenge was ensuring cohesion across our evolving product suite, which led us to develop an upgraded Builder Design Language, Block 5.0, a more intuitive, scalable, and visually unified design system.
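
A unified design language like Block 5.0 typically boils down to a set of shared tokens that every product surface consumes. The token names and values below are placeholders to illustrate the idea; they are not Block 5.0’s actual tokens.

```typescript
// Illustrative design tokens: shared primitives every Studio surface can
// consume. Values are placeholders, not Block 5.0's actual tokens.
const tokens = {
  colour: {
    primary: "#6200ee",
    surface: "#ffffff",
    textPrimary: "#1c1c1e",
  },
  spacing: { xs: 4, sm: 8, md: 16, lg: 24 }, // px
  radius: { card: 12, button: 8 },           // px
  typography: {
    heading: { family: "Inter", size: 24, weight: 600 },
    body: { family: "Inter", size: 16, weight: 400 },
  },
} as const;

export type DesignTokens = typeof tokens;
export default tokens;
```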

Results & Impact

Understanding the measurable outcomes and key insights gained throughout the design process has been crucial. It highlights how our approach shaped the user experience and uncovered future opportunities for growth. This reflection also helps me grow as a designer, empowering me to create more intuitive and user-centric flows in future projects.

50% more data captured

Enriched buildcard inputs led to significantly improved data quality.

70% increase in journey completion

More users successfully reached the end of the experience.

28% reduction in cart abandonment

Streamlined flow encouraged users to stay and complete their build.

125% uplift in buildcard completion rate

Major progress in engagement with the core feature.

15% improvement in overall conversion

Enhanced usability and flow translated directly into better conversion rates.

4.6/5

NPS score

Marked improvement in user satisfaction and experience.

Next steps

Based on initial results and feedback, several enhancements are planned:

Integrate Analytics
Monitor drop-off across the flow so we can see where users still stall.

AI-Powered Suggestions
Recommend feature groupings based on industry benchmarks.

Collaborative Features
Enable team commenting and real-time co-editing of journeys and features.
Launch of Studio 5.0
Building on this success, Studio 5.0 will feature a unified design language, hands-free voice interactions inspired by ChatGPT and Gemini, AI-generated mood boards, and deeper requirement-gathering capabilities, delivering an even more seamless, human-centred app-building experience. Our core focus now is a successful handover of these designs and a collaborative workflow with the development and research teams.
