Artificial intelligence is invading every application, and in some cases this can become a problem.
Take Slack for example. The workplace messaging app has a suite of optional artificial intelligence features you can purchase for an additional fee, but according to security firm PromptArmor, it’s rife with potential vulnerabilities. The feature exists to help create quick conversation summaries, but according to PromptArmor, it does so by accessing private DMs and can be tricked into phishing other users.
The technical details are all in the PromptArmor blog post, but the problem here is essentially twofold. First, Slack recently updated its AI system to scrape data from private user DMs and file uploads deliberately. Among other things, PromptArmor used a technique called “prompt injection” to demonstrate that you can use Slack AI to create malicious links that could potentially phish members of said Slack channel.
Mix and match speed of light
Gmail can now actually “polish” email draft junk
Mashable has reached out to Slack for comment. According to PromptArmor’s blog, the issue was raised with Slack before the blog post was published. A spokesperson for Slack parent company SalesForce told The Register that the issue has been resolved, but did not disclose specific details.
A SalesForce spokesperson said: “When we were made aware of the report, we launched an investigation into the scenario described, which in very limited and specific circumstances could have enabled a malicious actor with an existing account in the same Slack workspace to Phishing to obtain sensitive information “We have deployed a patch to address this issue and there is currently no evidence of unauthorized access to customer information. “
If nothing else, it might be worth finding a prescribed AI policy for each app you regularly use.
theme
AI