Benjamin Carr, Ph.D. 👨🏻💻🧬<p>Here Come the <a href="https://hachyderm.io/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://hachyderm.io/tags/Worms" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Worms</span></a><br><a href="https://hachyderm.io/tags/Security" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Security</span></a> researchers created <a href="https://hachyderm.io/tags/AIworm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AIworm</span></a> in a test environment that can automatically spread between <a href="https://hachyderm.io/tags/generativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>generativeAI</span></a> agents—potentially stealing data and sending spam emails. <br>To create <a href="https://hachyderm.io/tags/MorrisII" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MorrisII</span></a>, researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt. In short, <a href="https://hachyderm.io/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> system is told to produce a set of further instructions in its replies. <br><a href="https://www.wired.com/story/here-come-the-ai-worms/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">wired.com/story/here-come-the-</span><span class="invisible">ai-worms/</span></a></p>
