Figma Disables AI Design Feature After Controversy
Figma recently faced significant backlash after its AI-powered design tool, "Make Design," was found to be replicating Apple's Weather app. The issue was brought to light by Andy Allen, founder of NotBoring Software, who discovered that the AI-generated designs were virtually identical to Apple's interface each time he used the tool to create a weather app. The controversy prompted Figma to temporarily disable the feature.
The Issue Uncovered
The problem with Figma's AI design tool surfaced when Allen shared his findings on social media. He noted that every attempt to generate a weather app design using the "Make Design" feature produced output strikingly similar to Apple's Weather app. This raised concerns about the underlying data and design systems behind Figma's AI, suggesting it might have been trained on existing app designs, despite Figma's denial of such practices.
Figma's Response
In response to the allegations, Figma CEO Dylan Field clarified that the AI feature was not trained on Figma content, community files, or app designs. Instead, he attributed the issue to the "underlying design systems" and the off-the-shelf large language models employed by the tool. Field emphasized the importance of a rigorous quality assurance process and acknowledged the need for improvements to prevent such incidents in the future.
Implications and Future Steps
The incident has broader implications for the use of generative AI in design, highlighting the potential for legal and ethical challenges when AI tools inadvertently replicate existing designs. Field assured users that the "Make Design" feature would remain disabled until Figma can confidently stand behind its outputs, signaling a commitment to addressing the issue thoroughly before reintroducing the tool.
Conclusion
This controversy underscores the complexities and risks associated with integrating AI into design workflows. As Figma works to resolve these issues, the incident serves as a cautionary tale for the tech industry, reminding companies of the critical need for transparency and diligence in the development and deployment of AI technologies.