sabreW4K3@lazysoci.al to Technology@beehaw.org · 6 months ago
Slack users horrified to discover messages used for AI training (arstechnica.com)
33 comments · 228 points
cross-posted to: technology@lemmy.zip
blabber6285@sopuli.xyz · 5 points · edited 6 months ago

This was definitely a fuckup from Slack, but as I've understood it, the "AI training" means that they're able to suggest emoji reactions to messages.

Not sure what to think about this, but here's some additional info from Slack: https://slack.engineering/how-we-built-slack-ai-to-be-secure-and-private/

Edit: Just to pick the main points from the article:

Slack AI principles to guide us:
- Customer data never leaves Slack.
- We do not train large language models (LLMs) on customer data.
- Slack AI only operates on the data that the user can already see.
- Slack AI upholds all of Slack's enterprise-grade security and compliance requirements.
esaru@beehaw.org · 5 points · 6 months ago

AI training to suggest emoji reactions? Really? 😂