In the AI era, it feels like everything is shifting. Honestly, I'm at the point where I'm almost sick of seeing the word "AI" everywhere 🤣, but I have to admit—the design workflow has truly evolved.
Instead of just imagining things on a blank canvas, it's so much better to use LLMs to generate a working demo to validate your ideas first. Then, you can just toss that HTML into Figma, use Pixlore to export the design layers, and finally polish it with our professional design skills. The efficiency is through the roof 💫.
Our old workflow used to be the standard waterfall:
Requirements → Wireframes → Visual Design (UI) → Handoff → Development.
But now, a lot of the time, I'll just use an LLM to run through a prototype of the core interactions first.
For example, when dealing with a complex user flow, component states, or tricky interaction logic, I use an LLM (like Claude or GPT) to quickly generate a runnable mini-demo. By getting my hands on it and testing it out myself, it's so easy to spot issues I would have never caught during the static mockup phase.
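To make that concrete, here's a hypothetical sketch (plain JavaScript, all names invented for illustration) of the kind of interaction logic such a mini-demo lets you click through: a submit button modeled as a tiny state machine. The double-submit edge case it handles is exactly the sort of thing a static mockup would never reveal.

```javascript
// Hypothetical example: a submit button's states as a small state machine.
// idle -> loading -> success/error, with retry and reset paths.
const transitions = {
  idle:    { submit: "loading" },
  loading: { resolve: "success", reject: "error" },
  success: { reset: "idle" },
  error:   { retry: "loading", reset: "idle" },
};

function next(state, event) {
  // Events that are invalid in the current state are ignored, so a
  // double-click on submit while loading simply does nothing.
  return (transitions[state] && transitions[state][event]) || state;
}

// Walk through one flow: double-click submit, request fails, retry succeeds.
let state = "idle";
for (const event of ["submit", "submit", "reject", "retry", "resolve"]) {
  state = next(state, event);
  console.log(`${event} -> ${state}`);
}
```

Clicking through a runnable version of this (even an ugly one) surfaces questions like "what does retry look like?" far earlier than a review of static frames would.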
A lot of UX pitfalls aren't actually found during design reviews; they are discovered when you actually "use" the product. Exposing these problems early saves you from the painful, endless reworks during the dev handoff stage.
But here comes the new bottleneck:
The UI generated by LLMs is usually pretty far from a production-ready design. If you try to tweak visual details purely through prompting, itโs incredibly random. Itโs hard to control precisely and can drive you absolutely crazy.
The closed-loop approach that's been working really well for me lately is this:
First, use LLMs to quickly generate and test the interaction logic ➡️ Then, import that structure back into Figma to reorganize and refine the UI details.
Lately, I've been heavily using a Figma plugin: Pixlore.
It takes the AI-generated HTML page structure and converts it directly into Figma layers with one click. Plus, the converted component structure is actually clean, so you don't have to manually rebuild the skeleton from scratch.
The biggest advantage of this workflow is that you use AI upfront to validate the solution (fast), and then you use Figma afterward to polish the UI (precise). Letting each tool do what it does best makes the entire design process so much smoother.
What's the biggest bottleneck you guys are facing when using LLMs in your daily work? Let's chat in the comments 👇
The plugin I mentioned: https://pixlore.newportai.com/despilot-server/operation/trace/promotion/vf-xj-20260318