As generative AI reshapes coding and design, letting enterprise developers create and code with natural language, Google has introduced a revamped platform that uses AI to aid in the design of applications and web pages.
On March 18, Google introduced Vibe Design with Stitch. Stitch originally debuted in May 2025 as a Gemini-powered UI design and code-generation tool.
As a redesigned platform, Stitch includes an AI-native canvas that lets users combine text prompts, images and code to generate UI designs. A design agent can assist with the process from start to finish, Google said, and a new agent manager tracks design progress. The platform includes an agent-friendly markdown file called Design.md that can be used to export or import design rules to or from other design and coding tools. Users can also vibe design with their voice by speaking directly to their canvas. The design agent provides users with design critiques, creates a new landing page and makes real-time updates.
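Google has not published a schema for Design.md, but as an agent-friendly markdown file for design rules, a hypothetical example might look something like the following (the section names and values here are purely illustrative, not actual Stitch output):

```markdown
# Design.md — hypothetical example of exportable design rules

## Brand
- Primary color: #1A73E8
- Font family: Roboto, sans-serif

## Layout
- Max content width: 1200px
- Spacing scale: 4px base unit

## Components
- Buttons: rounded corners (8px), filled primary style
- Cards: 1px border, subtle shadow on hover
```

Because the file is plain markdown, both humans and other design or coding agents can read and apply the same rules, which is what makes it portable across tools.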
The Next Level of Vibe Coding
Google’s Vibe Design with Stitch is the latest iteration of vibe coding, which enables developers to describe their coding vision to an AI agent rather than manually writing code. It is also an example of how different jobs and tasks are shifting from having a human figure everything out to outsourcing some of that work to an AI co-worker.
Over the last 12 to 18 months, a key target of the human-plus-AI co-worker concept has been coding. Anthropic focused on the coding domain with its Claude Code tool and agent, which automates tasks such as navigation, debugging and code generation. OpenAI has also been focusing on coding, with the vendor’s most recent releases, earlier this week, being GPT-5.4 mini and GPT-5.4 nano, which it said are effective in coding workflows.
While Stitch is an AI-native software design canvas, it ultimately is another coding agent, according to Futurum Group analyst Bradley Shimmin. He added that under the hood, Stitch will generate TypeScript for a user's app or HTML and CSS for a web page design.
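To make Shimmin's point concrete, the kind of output such a coding agent produces is ordinary application code. The sketch below is purely illustrative — a TypeScript function rendering a landing-page hero section — and is not actual Stitch output; the component name and props are invented for this example:

```typescript
// Hypothetical sketch of what a design agent's generated code might
// resemble: a typed component that renders markup for a hero section.
type HeroProps = {
  headline: string;
  ctaLabel: string;
};

function renderHero({ headline, ctaLabel }: HeroProps): string {
  // A real export would more likely be a framework component with a
  // separate stylesheet; inline styles keep this sketch self-contained.
  return [
    '<section style="padding:4rem;text-align:center">',
    `  <h1>${headline}</h1>`,
    `  <button type="button">${ctaLabel}</button>`,
    '</section>',
  ].join('\n');
}

console.log(renderHero({ headline: 'Ship faster', ctaLabel: 'Get started' }));
```

The takeaway is Shimmin's: whatever the canvas looks like, the end product is code a developer could have written by hand.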
“That’s what coding agents do,” Shimmin said.
AI vs. Software
However, Google does a good job of accommodating multimodal information such as images, audio and text, Shimmin continued. This approach enables designers to upload ideas, sketches or images such as a color palette to lay out how they envision their interface, speeding up the design process. For Shimmin, this is another example of software's diminishing utility.
“You don’t have to learn the actual app,” he said. “You don’t need to spend a year figuring out how to master Adobe Premiere.” He added that users can write up what they want in natural language. “It’s intent-driven design, just like intent-driven development.”
However, there are risks in using AI to drive design or development, Shimmin said. Therefore, enterprises need deterministic elements that guide their use of platforms like Stitch, whether that is a standard for corporate design patterns or requirements, or a database or datasets that apply to the design.
“Without those kinds of controls and constraints, and context, you’re taking a bigger risk than you probably need to,” Shimmin said.