Prototyping with AI

AI is everywhere right now, and not trying it almost feels like a crime—or at least a serious case of FOMO.


That got me thinking: how could I actually use it in my current workflow?


I remembered that in past usability tests, we usually built our prototypes in Figma. But the interactions were often quite limited, which made it harder to fully test user flows. So why not try using AI for prototyping instead?

I’m in charge of the top-up page at PayPay.

A common task in usability testing is entering a top-up amount in the input field.


⚠️ But this is where prototypes often fall short. As shown in the demo, Figma can’t reflect real-time number input, so users aren’t able to enter amounts manually. And even with predefined amount buttons, the values can’t be edited after being tapped.


This reduces the realism of the prototype. As a result, users are sometimes forced to imagine how the interaction would work, which can lead to feedback that doesn’t fully reflect their true experience.

So I thought: why not try prototyping with AI vibe coding? The promise is that you can build prototypes just by talking to the AI.

Before starting:

  • I’m a designer, not a professional developer. Most of what I know about code comes from collaborating with engineer teammates. Earlier in my career, I did pick up some basic coding—mainly HTML and CSS.

  • The prototype I created is a quick demo for communication and user testing. It doesn’t require a perfect setup since it won’t be used long-term.

In this project, I chose to prototype the top-up flow (there’s a rough code sketch of the logic right after this list):

  • Users can freely enter and edit the amount, with real-time updates shown on the screen.

  • If the input is invalid, an error message will appear.

  • After the top-up, a success screen is displayed, showing the entered amount.
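
For anyone curious how little logic this flow actually takes: below is a minimal React/TypeScript sketch of the idea. It’s my own illustration, not the code the AI generated, and all the names in it are made up.

```tsx
import { useState } from "react";

// Minimal sketch of the top-up flow: preset buttons, free-form editing,
// validation below ¥1,000, and a success state. All names are illustrative.
const MIN_AMOUNT = 1000;

export function TopUpDemo() {
  const [amount, setAmount] = useState(""); // kept as a string while typing
  const [done, setDone] = useState(false);

  const value = Number(amount);
  const invalid = amount !== "" && value < MIN_AMOUNT;

  if (done) {
    // Success screen shows the amount the user actually entered
    return <p>Topped up ¥{value.toLocaleString()} 🎉</p>;
  }

  return (
    <div>
      {/* Preset buttons only pre-fill the field, so the value stays editable */}
      {[1000, 5000, 10000].map((preset) => (
        <button key={preset} onClick={() => setAmount(String(preset))}>
          ¥{preset.toLocaleString()}
        </button>
      ))}

      {/* Real-time input: the screen updates on every keystroke */}
      <input
        inputMode="numeric"
        value={amount}
        onChange={(e) => setAmount(e.target.value.replace(/\D/g, ""))}
      />

      {invalid && <p role="alert">Please enter at least ¥1,000.</p>}

      <button disabled={invalid || amount === ""} onClick={() => setDone(true)}>
        Top up
      </button>
    </div>
  );
}
```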

Tools I used: Claude and Figma Make

One is a general-purpose large language model, while the other is a design-focused tool.

  1. Claude

🔮 I started with the following prompt:

  • Can you help me develop a mobile UI based on the attached image? Please match the visual as closely as possible.

  • Basic functionality is enough for now—we can add more features later.

  • I’d also like the UI to be fully responsive, with all components stretching edge-to-edge.

And it returned the following. The layout of the result is similar, but it still needs some tweaks.

Then I jumped into tweaking the design (⏩ fast forward).

Here’s the problem: I wanted to upload the PayPay icon, but ⚠️ Claude doesn’t support uploading assets like icons or images directly.


I did some research and found that uploading files usually requires setting up an environment and server, which goes against our initial goal of building a quick prototype.


However, I discovered a workaround: we can add images using base64 encoding.
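
In practice, that means pasting the image bytes straight into the code as a data URI, so nothing has to be hosted anywhere. Here’s a minimal sketch, using a well-known 1×1 transparent PNG as a stand-in for the real icon (whose base64 string would be far longer):

```tsx
// The icon is inlined as a base64 data URI instead of a hosted file.
// This string is a 1×1 transparent PNG placeholder, not the real icon.
const iconBase64 =
  "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==";

export function PayIcon() {
  return (
    <img
      src={`data:image/png;base64,${iconBase64}`}
      alt="PayPay icon"
      width={24}
      height={24}
    />
  );
}
```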

It worked! But it also took a long time to load.


If I replaced all the icons with base64 strings, the failure rate during loading would be insanely high.

Believe me—I tried, and the browser crashed every time.

Result

The rest of the flow works well:

  • Users can tap any of the amount selection buttons.

  • Users can freely edit the number.

  • If a user enters an invalid amount (in this demo, anything below 1,000 yen), an error message appears.

  • The entered amount is passed through to both the confirmation half-sheet and the final success screen.


I also added a confirmation half-sheet—this is a concept I’d like to test in the future.
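
Under the hood, this just means keeping the amount in one piece of state and passing it down as a prop, so the half-sheet and the success screen always display exactly what the user typed. A rough sketch, again with illustrative names rather than the real generated code:

```tsx
// The confirmation half-sheet receives the entered amount as a prop,
// so it always shows the same value as the input screen.
type ConfirmSheetProps = {
  amount: number;
  onConfirm: () => void;
  onCancel: () => void;
};

export function ConfirmSheet({ amount, onConfirm, onCancel }: ConfirmSheetProps) {
  return (
    <div role="dialog" aria-label="Confirm top-up">
      <p>Top up ¥{amount.toLocaleString()}?</p>
      <button onClick={onConfirm}>Confirm</button>
      <button onClick={onCancel}>Cancel</button>
    </div>
  );
}
```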

*Please resize your browser to mobile view, as this prototype is mobile-based.

  2. Figma Make

🔮 I started with the following prompt:

  • Can you help me develop a mobile UI based on the attached frame? Please match the visual as closely as possible.

  • Basic functionality is enough for now—we can add more features later.

  • I’d also like the UI to be fully responsive, with all components stretching edge-to-edge.

💡 The difference is that I can attach the actual Figma file—not just a screenshot.

I believe they’ve added functionality that allows the AI to read the design layers directly, which leads to better results. Just make sure the content is inside a frame, not a section or group.

You can see that the UI comes out about 70% faithful to the design compared to Claude’s attempt. Pretty impressive!

And the best part—voilà!

I can directly import assets from Figma and ask it to replace them in the layout easily.

I also discovered something cool and useful—

I can select a specific section and directly edit values like padding or font size, without having to prompt again and wait for the code to re-run.

Result

After some tweaking, I’d say it feels pretty close to a real product now.


Users can enter any amount they want and select their preferred top-up method.

*Due to company policy and organizational restrictions, I can’t make the prototype public and share the link here.

Lastly, you might wonder which tool to choose—here’s a quick breakdown:


Use Figma’s native prototyping if:

  • Your test flow only involves simple interactions (e.g. checking layouts, viewing images)

  • Your design is already complete and just needs light interaction


Try Figma Make or Claude if:

  • You need more flexibility in interactions or behavior

  • 🔍 Key differences between Figma Make and Claude:

    • While Claude struggles to recreate specific images or icons, making it harder to match visual details, Figma Make allows you to import assets directly from your Figma file.

I also created a table for comparison:

| Feature | Figma Prototype | Figma Make | Claude |
|---|---|---|---|
| Best for | Simple screen flows | Flexible, visual prototyping | Text-based prototyping |
| Image/icon support | Full design access | Easy import from Figma | Hard to use custom images/icons |
| Ease of editing | Limited interactions | Edit values directly (e.g., padding) | Requires re-prompting |
| Cost | Free / Paid | Paid | Free (with limits) |

*Figma Make can also incorporate a design system, but it requires a specific setup. I plan to try that out in the future.

Reflection

I spent about 1–2 hours per day building the prototype over a week—around 10 hours in total. Once I got used to prompting and navigating the interface, everything became much quicker and more efficient. In the future, I’ll likely use vibe coding to create prototypes for user testing.

Some tips:

  • Ask one thing at a time in your prompt—otherwise, the AI won’t perform well.

  • Get familiar with the codebase and ask your AI to explain parts of it. This not only speeds up your understanding but also helps with debugging more effectively.

  • As a designer, it’s easy to get caught up in pixel-perfect details. I’ve been there—tweaking visuals and interactions endlessly. But remember, the goal of a prototype is to be quick and rough—just enough to validate ideas.

In the end, the real strength of vibe coding is that it helps others experience the product early on. It builds alignment, reduces guesswork, and encourages honest feedback in usability testing. And best of all, there’s a sense of achievement in building something yourself!

🕑 Timeline

Total 10 hours

🚀 Responsibility

Exploring, experimenting and playing

🏢 Company

PayPay

Copyright © 2025 by Ryan Chang