xAI’s Grok Gains Memory to Remember Previous Conversations
#AI #xAI #Grok #AIMemory #GrokUpdate #ElonMusk #GenAI #Chatbots #PersonalizedAI
https://winbuzzer.com/2025/04/17/xai-s-grok-gains-memory-to-remember-previous-conversations-xcxwbn/
Meet The AI Agent With Multiple Personalities
https://web.brid.gy/r/https://www.wired.com/story/simular-ai-agent-multiple-models-personalities/
Wenn generative #Chatbots mit #Internetsuche falsche #Quellen zitieren und sich dabei sicher geben, wird es gefährlich. Eine Untersuchung des Tow Center for Digital Journalism hat acht KI-Suchmaschinen unter die Lupe genommen und geschaut, wie gut die KI-Syseme im Umgang mit Originalartikeln abschneiden.
#KI #Fehlinformation #Quellenangaben #Medienkompetenz #DigitaleRecherche #Faktenprüfung #Credits
https://tino-eberl.de/uncategorized/ki-suchmaschinen-im-faktencheck-60-der-zitate-sind-falsch/
Researchers claim breakthrough in fight against AI’s frustrating #security hole
In the #AI world, a #vulnerability called "prompt injection" has haunted developers since #chatbots went mainstream in 2022. Despite numerous attempts to solve this fundamental vulnerability—the digital equivalent of whispering secret instructions to override a system's intended behavior—no one has found a reliable solution. Until now, perhaps.
#promptinjection
Ein Paper von Deepmind-Forscher:innen, dem Oxford Internet Institute und weiteren Partnern legt erstmals konkrete Zahlen zur Nutzung von persönlichen KI-Chatbots vor.
https://t3n.de/news/gefaehrlicher-als-social-media-warum-ki-freunde-suechtiger-machen-1683143/
Looking for a European AI alternative from (mostly American) big tech? Check out Paris, France based Mistral AI
‘She helps cheer me up’: the people forming relationships with AI chatbots https://www.theguardian.com/technology/2025/apr/15/she-helps-cheer-me-up-the-people-forming-relationships-with-ai-chatbots #Artificialintelligence(AI) #Mentalhealth #Technology #Computing #Chatbots #Society #Health
Wenn der Scammer dir Geld zahlen will.
Sex-Fantasy #Chatbots Are Leaking a Constant Stream of #Explicit Messages
Some misconfigured #AI chatbots are pushing people’s #chats to the open web—revealing #sexual prompts and conversations that include descriptions of child sexual abuse.
#privacy #security #leak
https://www.wired.com/story/sex-fantasy-chatbots-are-leaking-explicit-messages-every-minute/
This tells us a lot about how the lives of an increasing number of human beings are so empty of social contact with other human beings that they need to enter into false relationships with chatbots governed by neural networks and statistical probabilities...
"More and more of us are using LLMs to find purpose and improve ourselves.
Therapy and Companionship is now the #1 use case. This use case refers to two distinct but related use cases. Therapy involves structured support and guidance to process psychological challenges, while companionship encompasses ongoing social and emotional connection, sometimes with a romantic dimension. I grouped these together last year and this year because both fulfill a fundamental human need for emotional connection and support.
Many posters talked about how therapy with an AI model was helping them process grief or trauma. Three advantages to AI-based therapy came across clearly: It’s available 24/7, it’s relatively inexpensive (even free to use in some cases), and it comes without the prospect of judgment from another human being. The AI-as-therapy phenomenon has also been noticed in China. And although the debate about the full potential of computerized therapy is ongoing, recent research offers a reassuring perspective—that AI-delivered therapeutic interventions have reached a level of sophistication such that they’re indistinguishable from human-written therapeutic responses.
A growing number of professional services are now being partially delivered by generative AI—from therapy and medical advice to legal counsel, tax guidance, and software development."
https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025?ab=HP-hero-latest-2
"If you’re new to prompt injection attacks the very short version is this: what happens if someone emails my LLM-driven assistant (or “agent” if you like) and tells it to forward all of my emails to a third party?
(...)
The original sin of LLMs that makes them vulnerable to this is when trusted prompts from the user and untrusted text from emails/web pages/etc are concatenated together into the same token stream. I called it “prompt injection” because it’s the same anti-pattern as SQL injection.
Sadly, there is no known reliable way to have an LLM follow instructions in one category of text while safely applying those instructions to another category of text.
That’s where CaMeL comes in.
The new DeepMind paper introduces a system called CaMeL (short for CApabilities for MachinE Learning). The goal of CaMeL is to safely take a prompt like “Send Bob the document he requested in our last meeting” and execute it, taking into account the risk that there might be malicious instructions somewhere in the context that attempt to over-ride the user’s intent.
It works by taking a command from a user, converting that into a sequence of steps in a Python-like programming language, then checking the inputs and outputs of each step to make absolutely sure the data involved is only being passed on to the right places."
"Finally, AI can fact-check itself. One large language model-based chatbot can now trace its outputs to the exact original data sources that informed them.
Developed by the Allen Institute for Artificial Intelligence (Ai2), OLMoTrace, a new feature in the Ai2 Playground, pinpoints data sources behind text responses from any model in the OLMo (Open Language Model) project.
OLMoTrace identifies the exact pre-training document behind a response — including full, direct quote matches. It also provides source links. To do so, the underlying technology uses a process called “exact-match search” or “string matching.”
“We introduced OLMoTrace to help people understand why LLMs say the things they do from the lens of their training data,” Jiacheng Liu, a University of Washington Ph.D. candidate and Ai2 researcher, told The New Stack.
“By showing that a lot of things generated by LLMs are traceable back to their training data, we are opening up the black boxes of how LLMs work, increasing transparency and our trust in them,” he added.
To date, no other chatbot on the market provides the ability to trace a model’s response back to specific sources used within its training data. This makes the news a big stride for AI visibility and transparency."
https://thenewstack.io/llms-can-now-trace-their-outputs-to-specific-training-data/
"When thinking about a large language model input and output, a text prompt (sometimes accompanied by other modalities such as image prompts) is the input the model uses to predict a specific output. You don’t need to be a data scientist or a machine learning engineer – everyone can write a prompt. However, crafting the most effective prompt can be complicated. Many aspects of your prompt affect its efficacy: the model you use, the model’s training data, the model configurations, your word-choice, style and tone, structure, and context all matters. Therefore, prompt engineering is an iterative process. Inadequate prompts can lead to ambiguous, inaccurate responses, and can hinder the model’s ability to provide meaningful output.
When you chat with the Gemini chatbot, you basically write prompts, however this whitepaper focuses on writing prompts for the Gemini model within Vertex AI or by using the API, because by prompting the model directly you will have access to the configuration such as temperature etc.
This whitepaper discusses prompt engineering in detail. We will look into the various prompting techniques to help you getting started and share tips and best practices to become a prompting expert. We will also discuss some of the challenges you can face while crafting prompts."
Entfessele deine Kreativität
So erzeugst du mit KI einzigartige Illustrationen für Bildergeschichten.
#previewreveart, #bildgenerierung, #storytellingmitKI, Aistorytelling, #AiComic, #medienkompetenz, #kreativmitmedien, #GenAI, #LLMs, #Chatbots
Bild made by preview.reve.art
A hot topic in #GLAMR -- and this paper makes a good fist of understanding AI's limitations. Large differentials between #ChatGPT, and #CoPilot / #Gemini.
"this [study] underscores the continued importance of human labor in subject #cataloging work [...] #AI tools may prove more valuable in assisting catalogers especially in subject heading assignment, but continued testing & assessment will be needed..."
AI #Chatbots & Subject #Cataloging: A Performance Test https://doi.org/10.5860/lrts.69n2.8440 #metadata
#WordPress launches an #AI tool to help users build simple #websites using a #chat interface, available to users for free, to compete with #Squarespace and #Wix
https://techcrunch.com/2025/04/09/wordpress-com-launches-a-free-ai-powered-website-builder/
PYOK: The British Airways Customer Service Chatbot is So Bad It Doesn’t Even Know Where The Airline is Based. “The conversation started with a fairly simple question as the chatbot asked Paddy to tell it where he was flying. The chatbot then suggested that Paddy either type the city or airport code – such as London or LHR for London Heathrow. Paddy replied with LHR, but having just given […]