Tooling

Figma MCP Introduces AI Agents With Your Design Taste

Figma · Apr 8, 2026

For years, AI-generated design has had a recognizable problem: it looks like AI-generated design. Generic button padding, forgettable color palettes, layouts that feel assembled rather than considered. The reason was structural — AI agents had no access to the accumulated taste, conventions, and decisions that make a design system actually yours.

That changes today.

Figma has announced that AI agents can now write directly to the Figma canvas, using the new use_figma tool available through Figma's MCP (Model Context Protocol) server. It's a significant shift in how agentic AI can participate in the design process — not as an outside generator spitting out mockups, but as a collaborator working inside your actual design system.


Why This Is Different

The core insight behind this feature is that design quality lives in the details you've already defined. Your component library. Your spacing variables. Your color tokens. Your typography decisions.

Previous approaches to AI-generated design tended to start from scratch — producing something that looked plausible but had no connection to your established standards. The Figma canvas integration flips that. Agents now operate inside your existing files, generating and modifying assets that are linked to your design system. The guardrails aren't imposed from outside; they're the system itself.

OpenAI's Codex is already using this. Ed Bayes, design lead at Codex, describes it plainly: teams use Figma to iterate and make product decisions, and now Codex can find and use that design context to build higher quality products more efficiently.


Skills: Teaching Agents How to Work

Opening the canvas to agents is one thing. Making sure they use it well is another. That's where skills come in.

Skills are markdown files — essentially structured instructions — that tell an agent how to execute a workflow in Figma. They define which steps to take, what sequencing to follow, and which conventions to observe. Crucially, they also give agents the specialized knowledge needed to produce durable, brand-aligned output.
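As a rough illustration, a skill file might look something like the sketch below. The file name, frontmatter fields, and steps are hypothetical — Figma hasn't published a schema in this article — but they show the general shape: plain markdown that spells out steps, sequencing, and conventions for the agent to follow.

```markdown
---
name: apply-brand-spacing
description: Use our spacing tokens when generating or editing frames
---

# Apply brand spacing

## Steps
1. Read the file's local variables and locate the spacing collection.
2. When creating auto-layout frames, set padding and item spacing only
   to values from that collection — never hard-coded pixel values.
3. If no matching token exists, leave a comment on the frame rather
   than inventing a new value.

## Conventions
- Prefer components from the team library over detached copies.
```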

The foundational skill is /figma-use, which establishes a shared understanding of how Figma works — its structure, its core principles. Every other skill builds on top of it.

Several skills launching today were authored by design practitioners from the community. The full list includes capabilities for generating component libraries from a codebase, applying design systems to existing designs, syncing design tokens between code and Figma, generating screen reader specs, and running parallel multi-agent workflows. Nine example skills are available to explore on the Figma Community.

Cat Wu, head of product for Claude Code at Anthropic, frames the value clearly: the best products come from teams who care about the details, and skills teach agents how to work directly in the design canvas in a way that stays true to the team's intent and judgment.


Code and Canvas, Working Together

This feature doesn't exist in isolation. It's part of a broader vision Figma has been building toward over the past several months.

The existing generate_figma_design tool translates HTML from live apps and websites into editable Figma layers — useful when designs drift out of sync with code. The new use_figma tool handles the other direction: creating and editing designs using your components and variables. Together, they create a loop. When things fall out of sync, you can pull the current UI into Figma. From there, agents can iterate on it using the same system they'd use for anything else.

The feature works with a range of MCP clients already, including Claude Code, Codex, Cursor, Copilot in VS Code, Augment, Warp, Firebender, and Factory.
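For clients that read the common MCP configuration format, wiring up a server typically amounts to a small JSON entry like the one below. The server name and URL here are illustrative placeholders, not Figma's documented endpoint — consult Figma's MCP server documentation for the actual address and setup steps for your client.

```json
{
  "mcpServers": {
    "figma": {
      "url": "http://127.0.0.1:3845/mcp"
    }
  }
}
```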


What's Coming

Figma's roadmap is ambitious. The team says it's working toward parity with the Plugin API, with image support and custom fonts coming first. The capability will also extend to surfaces like Code Connect, Figma Draw, and FigJam through the Plugin API.

There's also a structural incentive to build a robust skill ecosystem: because skills are plain markdown files, anyone in the design community can write one without building a plugin or writing code. That lowers the barrier considerably and could lead to a rich library of community-authored workflows over time.

The feature is currently in beta and free to use for the duration of the beta period; it will eventually become a usage-based paid feature.


The Bigger Picture

What's genuinely interesting about this release is the philosophical shift it represents. For a while, the conversation around AI and design has been framed as replacement versus augmentation. This feature sidesteps that framing almost entirely.

Agents aren't replacing designers here. They're being given access to what designers have already built and being asked to work within it. The judgment that went into your design system — all those small decisions about typography, spacing, color, and component behavior — becomes the guide that constrains and shapes agent output.

In that sense, good design systems just got a lot more valuable. The more thorough and well-considered your component library and variables are, the better an agent will be at producing output that actually belongs in your product.

The canvas has always been where product ideas come into focus. Now it's open to agents — on your terms.
