Figma recently launched Make Designs, a new tool that uses generative AI to help users quickly prototype apps. However, the tool was pulled after it generated designs that closely resembled Apple’s iOS Weather app. CEO Dylan Field took responsibility for the issue, attributing it to his push to meet deadlines, and defended Figma’s approach to developing AI tools.
Andy Allen, CEO of Not Boring Software, highlighted the issue by showing how closely Make Designs replicated Apple’s app, cautioning designers to thoroughly review results to avoid potential legal implications.
In an interview, Figma CTO Kris Rasmussen clarified that Make Designs was not trained on Apple’s app designs. He explained that Figma did not conduct any training for its generative AI features, which are powered by off-the-shelf models and a bespoke design system commissioned by Figma. Rasmussen acknowledged the need to investigate whether similarities stemmed from third-party models or the commissioned design systems.
Field affirmed that Make Designs was not trained on Figma’s content or community files, calling accusations that the tool had been trained on that data false. He acknowledged a flaw in the low variability of the tool’s output and noted that the key AI models behind Make Designs are OpenAI’s GPT-4o and Amazon’s Titan Image Generator G1.
Regarding future plans, Rasmussen discussed Figma’s intention to refine its AI training policies and potentially train its own models. He emphasized the importance of ensuring that any model training focuses on general design patterns and specific Figma design concepts to benefit professional designers.
To address the issue, Figma is reviewing its design system to improve variation and quality standards before re-enabling Make Designs, which remains in beta testing. Rasmussen acknowledged that Figma should have caught the problem before release.
Figma aims to reintroduce Make Designs soon, while its other AI features remain in beta with a waitlist for access. The company’s approach to AI in creative tools has come under scrutiny, much as Adobe and Meta have faced their own AI-related controversies.