Slack Faces Backlash Over AI Training Policy
Amid rising concerns over data privacy, Slack is under fire for its approach to training its AI services on customer data. The controversy centers on the company's policy of automatically opting users in to data usage for AI training, a policy many say is buried in an outdated privacy document. To opt out, users must email Slack directly, a process critics deem cumbersome.
The uproar began when a viral post on Hacker News highlighted Slack's practice, sparking widespread debate. Users were particularly irked by the lack of clear communication about how their data is used. Slack's privacy policy, which has not been updated transparently, states that while customer data is used to train global models for features like channel and emoji recommendations, this does not apply to Slack AI, a separately purchased add-on that uses large language models hosted within Slack's own infrastructure.
In response to the backlash, Slack acknowledged the need to revise its privacy policy for better clarity. The company reiterated that customer data used for Slack AI remains within the company's control and is not shared externally. This assurance has done little to quell user dissatisfaction, however, as many demand more straightforward policies and easier opt-out mechanisms.
This incident underscores the critical need for tech companies to prioritize transparency in their data usage policies, especially as AI technologies become more integrated into everyday tools. As Slack navigates this controversy, it serves as a reminder of the delicate balance between innovation and user trust.