We should avoid sharing sensitive data with third-party apps
Slack has come under fire for using customer data to train its global AI models and generative AI add-on. Sure, requiring users to manually opt out via email seems sneaky (isn’t avoiding email the whole point of Slack?), but the messaging app doesn’t bear all the responsibility here. The most popular workplace apps have all integrated AI into their products, including Slack AI, Jira’s AI-Powered Virtual Agent, and Gemini for Google Workspace. Anyone using technology today — especially for work — should assume their data will be used to train AI. That’s why it’s up to individuals and companies to avoid sharing sensitive data with third-party apps. Anything less is naive and risky.

Trust no one
There’s a valid argument floating around the internet that Slack’s opt-out policy sets a dangerous precedent for other SaaS apps to automatically opt customers in to sharing data with AI models and LLMs. Regulatory bodies will likely examine this, especially for companies operating in locations protected by the General Data Protection Regulation (but not the California Consumer Privacy Act, which allows businesses to process personal data without permission until a user opts out). Until then, anyone using AI — which IBM estimates is more than 40% of enterprises — should assume shared information will be used to train models.
We could dive into the ethics of training AI on individuals’ billion-dollar business ideas that come to life in Slack threads, but surely someone on the internet has already written that. Instead, let’s focus on what’s actually important: whether Slack’s AI models are trained on its users’ sensitive data. This means personally identifiable information (PII) like Social Security numbers, names, email addresses, and phone numbers; personal health information (PHI); and secrets and credentials that can expose PII, PHI, and other valuable business and customer information.
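To make those categories concrete, here’s a minimal Python sketch of a pre-send check that flags messages containing these kinds of data before they leave your environment. The patterns and the `flag_sensitive` helper are hypothetical simplifications for illustration; production DLP tools and secrets scanners use far more sophisticated detection than a handful of regexes.

```python
import re

# Illustrative patterns only -- real scanners detect hundreds of
# PII, PHI, and credential formats with much lower false-negative rates.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    # AWS access key IDs start with "AKIA" followed by 16 uppercase
    # alphanumeric characters -- one example of a credential format.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    message = "Ping me at jane.doe@example.com, SSN 123-45-6789."
    hits = flag_sensitive(message)
    if hits:
        # Prints: Blocked: message contains ssn, email
        print(f"Blocked: message contains {', '.join(hits)}")
```

The point of a check like this isn’t perfection; it’s that anything the check would catch has no business sitting in a third-party app’s training corpus in the first place.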