digitalcourage.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
This instance is operated by Digitalcourage e.V. for the general public. To keep it sustainable, we charge an annual advance contribution of €1/month, collected via SEPA direct debit.

Server stats:

815
active users

#languagemodels

4 posts · 3 participants · 0 posts today
N-gated Hacker News<p>👾🤖 Oh, the tragedy! Large language models can't daydream like us mere mortals—alas, they remain as stiff as your Uncle Bob at a yoga class. While <a href="https://mastodon.social/tags/Gwern" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gwern</span></a> waxes poetic about 'missing capabilities,' one can't help but think: perhaps these <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> systems are just too busy counting ones and zeros to appreciate the finer points of a good nap. 🌈💤<br><a href="https://gwern.net/ai-daydreaming" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">gwern.net/ai-daydreaming</span><span class="invisible"></span></a> <a href="https://mastodon.social/tags/Daydreaming" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Daydreaming</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.social/tags/Humor" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Humor</span></a> <a href="https://mastodon.social/tags/TechTragedy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechTragedy</span></a> <a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/ngated" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ngated</span></a></p>
N-gated Hacker News<p>🚀 Moonshot AI's Kimi K2 is here to dazzle you with... another large language model! 🌟 GitHub's flashy navigation and security tools are about as exciting as watching paint dry, but hey, at least you can automate your workflow while pretending to write "better" code. 💻✨<br><a href="https://github.com/MoonshotAI/Kimi-K2" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">github.com/MoonshotAI/Kimi-K2</span><span class="invisible"></span></a> <a href="https://mastodon.social/tags/MoonshotAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MoonshotAI</span></a> <a href="https://mastodon.social/tags/KimiK2" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>KimiK2</span></a> <a href="https://mastodon.social/tags/GitHub" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GitHub</span></a> <a href="https://mastodon.social/tags/Automation" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Automation</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.social/tags/TechInnovation" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechInnovation</span></a> <a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/ngated" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ngated</span></a></p>
Harald Klinke<p>The key insight: hallucinations are not bugs, but artifacts of compression. Like Xerox photocopiers that silently replaced digits in floorplans to save memory, LLMs can introduce subtle distortions. Because the output still looks right, we may not notice what has been lost or changed.<br>The more they’re used to generate content, the more the web becomes a blurrier copy of itself.<br><a href="https://det.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a> <a href="https://det.social/tags/CompressionArtifacts" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CompressionArtifacts</span></a> <a href="https://det.social/tags/AIliteracy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIliteracy</span></a></p>
N-gated Hacker News<p>Oh no, evil masterminds are teaching large language models to fib! 😱 Apparently, Skynet's been taking night classes in <a href="https://mastodon.social/tags/deception" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>deception</span></a>, and everyone's pretending to be shocked. 🤡 Maybe next they'll waterboard pencils for spelling mistakes. 🖍️<br><a href="https://americansunlight.substack.com/cp/168074209" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">americansunlight.substack.com/</span><span class="invisible">cp/168074209</span></a> <a href="https://mastodon.social/tags/evilmasterminds" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>evilmasterminds</span></a> <a href="https://mastodon.social/tags/languagemodels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>languagemodels</span></a> <a href="https://mastodon.social/tags/Skynet" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Skynet</span></a> <a href="https://mastodon.social/tags/humor" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>humor</span></a> <a href="https://mastodon.social/tags/technology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>technology</span></a> <a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/ngated" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ngated</span></a></p>
rijo<p>ICYMI: AI models fake understanding while failing basic tasks <a href="https://ppc.land/ai-models-fake-understanding-while-failing-basic-tasks/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">ppc.land/ai-models-fake-unders</span><span class="invisible">tanding-while-failing-basic-tasks/</span></a> <a href="https://frankfurt.social/tags/AImodels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AImodels</span></a> <a href="https://frankfurt.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://frankfurt.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MachineLearning</span></a> <a href="https://frankfurt.social/tags/MITResearch" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MITResearch</span></a> <a href="https://frankfurt.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a></p>
PPC Land<p>ICYMI: AI models fake understanding while failing basic tasks: MIT research reveals language models can define concepts but cannot apply them consistently <a href="https://ppc.land/ai-models-fake-understanding-while-failing-basic-tasks/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">ppc.land/ai-models-fake-unders</span><span class="invisible">tanding-while-failing-basic-tasks/</span></a> <a href="https://mastodon.social/tags/AImodels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AImodels</span></a> <a href="https://mastodon.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MachineLearning</span></a> <a href="https://mastodon.social/tags/MITResearch" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MITResearch</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a></p>
N-gated Hacker News<p>In a stunning revelation, the AI experts have proclaimed that small language models will single-handedly pave the way to our robot overlords 🤖🔮. Meanwhile, the rest of us are just here trying to remember our WiFi passwords and wondering if these "agentic AIs" will fetch us a coffee ☕.<br><a href="https://arxiv.org/abs/2506.02153" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2506.02153</span><span class="invisible"></span></a> <a href="https://mastodon.social/tags/AIExperts" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIExperts</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.social/tags/RobotOverlords" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>RobotOverlords</span></a> <a href="https://mastodon.social/tags/AgenticAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AgenticAI</span></a> <a href="https://mastodon.social/tags/TechHumor" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechHumor</span></a> <a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/ngated" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ngated</span></a></p>
.:\dGh/:.<p>Sometimes I feel vindicated when I ask the <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> to do something with code, just to give me an idea of a better solution.</p><p>I mean, if the idea is to make me smarter or point me in the right direction, current <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> are doing that with entertaining success.</p><p><a href="https://mastodon.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.social/tags/LargeLanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LargeLanguageModels</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a></p>
Agustin V. Startari<p>From Obedience to Execution: Structural Legitimacy in the Age of Reasoning Models<br>When models no longer obey but execute, what happens to legitimacy?</p><p>Core contributions:<br>• Execution vs. obedience in LLMs<br>• Structural legitimacy without subject<br>• Reasoning as authority loop</p><p>🔗 Full article: <a href="https://zenodo.org/records/15635364" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">zenodo.org/records/15635364</span><span class="invisible"></span></a><br>🌐 Website: <a href="https://www.agustinvstartari.com" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="">agustinvstartari.com</span><span class="invisible"></span></a><br>🪪 ORCID: <a href="https://orcid.org/0009-0002-1483-7154" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">orcid.org/0009-0002-1483-7154</span><span class="invisible"></span></a></p><p><a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/Execution" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Execution</span></a> <a href="https://mastodon.social/tags/StructuralLegitimacy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>StructuralLegitimacy</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.social/tags/AlgorithmicPower" class="mention hashtag" rel="nofollow noopener noreferrer" 
target="_blank">#<span>AlgorithmicPower</span></a> <a href="https://mastodon.social/tags/Authority" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Authority</span></a> <a href="https://mastodon.social/tags/Epistemology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Epistemology</span></a></p>
.:\dGh/:.<p>Good news for tech bros: training AI on copyrighted work is legal…</p><p><a href="https://www.cnbc.com/2025/06/24/ai-training-books-anthropic.html" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">cnbc.com/2025/06/24/ai-trainin</span><span class="invisible">g-books-anthropic.html</span></a></p><p>…as long as the copyrighted works are not reproducible. Maybe <a href="https://mastodon.social/tags/Midjourney" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Midjourney</span></a> and similar tools are not doomed after all?</p><p><a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/LM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LM</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.social/tags/LargeLanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LargeLanguageModels</span></a> <a href="https://mastodon.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.social/tags/Legal" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Legal</span></a> <a href="https://mastodon.social/tags/Lawsuit" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Lawsuit</span></a></p>
eicker.news ᳇ tech news<p><a href="https://eicker.news/tags/MIT" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MIT</span></a> researchers developed a method called <a href="https://eicker.news/tags/SelfAdapting" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SelfAdapting</span></a> <a href="https://eicker.news/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a> (<a href="https://eicker.news/tags/SEAL" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SEAL</span></a>) that enables large language models to continuously <a href="https://eicker.news/tags/learn" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>learn</span></a> and <a href="https://eicker.news/tags/improve" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>improve</span></a> by generating <a href="https://eicker.news/tags/synthetic" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>synthetic</span></a> <a href="https://eicker.news/tags/trainingdata" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>trainingdata</span></a> and updating their parameters based on new information. 
<a href="https://www.wired.com/story/this-ai-model-never-stops-learning/?eicker.news" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">wired.com/story/this-ai-model-</span><span class="invisible">never-stops-learning/?eicker.news</span></a> <a href="https://eicker.news/tags/tech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>tech</span></a> <a href="https://eicker.news/tags/media" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>media</span></a> <a href="https://eicker.news/tags/news" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>news</span></a></p>
N-gated Hacker News<p>Researchers have finally discovered that if you leave language models <a href="https://mastodon.social/tags/unsupervised" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>unsupervised</span></a>, they turn into unruly teenagers who refuse to clean their rooms or do anything useful. 🤖🧹 Meanwhile, the Simons Foundation is still trying to figure out which member institutions actually support this academic circus. 🎪🎓<br><a href="https://arxiv.org/abs/2506.10139" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2506.10139</span><span class="invisible"></span></a> <a href="https://mastodon.social/tags/languagemodels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>languagemodels</span></a> <a href="https://mastodon.social/tags/research" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>research</span></a> <a href="https://mastodon.social/tags/academiccircus" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>academiccircus</span></a> <a href="https://mastodon.social/tags/AIbehavior" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIbehavior</span></a> <a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/ngated" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ngated</span></a></p>
N-gated Hacker News<p>😜 Oh, joy! Another dense paper on language models that "self-adapt" faster than a chameleon at a disco. 🤖✨ For those craving a nap, dive into this thrilling exposé from the Simons Foundation, proving once again that <a href="https://mastodon.social/tags/academia" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>academia</span></a> can take even the liveliest subjects and make them as exciting as watching paint dry. 🎨💤<br><a href="https://arxiv.org/abs/2506.10943" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2506.10943</span><span class="invisible"></span></a> <a href="https://mastodon.social/tags/languageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>languageModels</span></a> <a href="https://mastodon.social/tags/selfAdaptation" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>selfAdaptation</span></a> <a href="https://mastodon.social/tags/research" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>research</span></a> <a href="https://mastodon.social/tags/humor" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>humor</span></a> <a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/ngated" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ngated</span></a></p>
Agustin V. Startari<p>📢 New article: Non-Neutral by Design: Why Generative Models Cannot Escape Linguistic Training<br>I provide empirical and theoretical evidence that generative models cannot operate outside their training grammar.<br>Neutrality is a structural illusion — not a feature.<br>Read it here:<br>🔗 <a href="https://doi.org/10.5281/zenodo.15635364" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">doi.org/10.5281/zenodo.15635364</span><span class="invisible"></span></a><br>✍️ Agustín V. Startari<br>🌐 agustinvstartari.com<br><a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.social/tags/LinguisticBias" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LinguisticBias</span></a> <a href="https://mastodon.social/tags/TechEpistemology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechEpistemology</span></a></p>
N-gated Hacker News<p>🤖📉BREAKING: <a href="https://mastodon.social/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> are as reliable as that friend who swears they almost landed a gig on Mars. 🚀🔥 Study reveals 73% of large language models' conclusions are as accurate as a weather forecast from a fortune cookie. 🍪🔮 Maybe the secret to AI's success is just a splash of human hyperbole? 🌈<br><a href="https://www.uu.nl/en/news/most-leading-chatbots-routinely-exaggerate-science-findings" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">uu.nl/en/news/most-leading-cha</span><span class="invisible">tbots-routinely-exaggerate-science-findings</span></a> <a href="https://mastodon.social/tags/Hyperbole" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hyperbole</span></a> <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/Reliability" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Reliability</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/ngated" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ngated</span></a></p>
Harald Sack<p>In our <a href="https://sigmoid.social/tags/ISE2025" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ISE2025</span></a> lecture last Wednesday, we learned how n-gram language models, using the Markov assumption and maximum likelihood estimation, predict the probability of a word given a specific context (i.e. the n-1 preceding words in the sequence).</p><p><a href="https://sigmoid.social/tags/NLP" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>NLP</span></a> <a href="https://sigmoid.social/tags/languagemodels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>languagemodels</span></a> <a href="https://sigmoid.social/tags/lecture" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>lecture</span></a> <span class="h-card" translate="no"><a href="https://sigmoid.social/@fizise" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>fizise</span></a></span> @tabea <span class="h-card" translate="no"><a href="https://sigmoid.social/@enorouzi" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>enorouzi</span></a></span> <span class="h-card" translate="no"><a href="https://fedihum.org/@sourisnumerique" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>sourisnumerique</span></a></span> <span class="h-card" translate="no"><a href="https://wisskomm.social/@fiz_karlsruhe" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>fiz_karlsruhe</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@KIT_Karlsruhe" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>KIT_Karlsruhe</span></a></span></p>
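The maximum likelihood estimate described in the lecture post can be sketched in a few lines: for a bigram model (n = 2), P(word | prev) is simply count(prev, word) / count(prev). This is a minimal illustration; the toy corpus and token names are invented for the example.

```python
from collections import defaultdict

def train_bigram(corpus):
    """Count bigrams and their contexts from a list of token lists."""
    bigram = defaultdict(int)    # (prev, word) -> count
    context = defaultdict(int)   # prev -> count
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]  # sentence boundary markers
        for prev, word in zip(tokens, tokens[1:]):
            bigram[(prev, word)] += 1
            context[prev] += 1
    return bigram, context

def prob(bigram, context, prev, word):
    """MLE: P(word | prev) = count(prev, word) / count(prev)."""
    if context[prev] == 0:
        return 0.0
    return bigram[(prev, word)] / context[prev]

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
bigram, context = train_bigram(corpus)
print(prob(bigram, context, "the", "cat"))  # → 0.5
```

Unsmoothed MLE assigns zero probability to any unseen bigram, which is exactly why smoothing techniques became a central topic for n-gram models.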
Nick Byrd, Ph.D.<p>How can <a href="https://nerdculture.de/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a> optimize tradeoffs between performance and inference costs?</p><p>Meta-reasoner's "contextual multi-armed bandits" made the best trades on <a href="https://nerdculture.de/tags/math" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>math</span></a> and <a href="https://nerdculture.de/tags/logic" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>logic</span></a> tasks by iteratively checking opportunities to redirect, correct, and optimize.</p><p><a href="https://doi.org/10.48550/arXiv.2502.19918" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">doi.org/10.48550/arXiv.2502.19</span><span class="invisible">918</span></a></p>
Harald Sack<p>From the 1990s onward, statistical n-gram language models, trained on vast text collections, became the backbone of NLP research. They fueled advancements in nearly all NLP techniques of the era, laying the groundwork for today's AI. </p><p>F. Jelinek (1997), Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA</p><p><a href="https://sigmoid.social/tags/NLP" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>NLP</span></a> <a href="https://sigmoid.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LanguageModels</span></a> <a href="https://sigmoid.social/tags/HistoryOfAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HistoryOfAI</span></a> <a href="https://sigmoid.social/tags/TextProcessing" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TextProcessing</span></a> <a href="https://sigmoid.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://sigmoid.social/tags/historyofscience" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>historyofscience</span></a> <a href="https://sigmoid.social/tags/ISE2025" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ISE2025</span></a> <span class="h-card" translate="no"><a href="https://sigmoid.social/@fizise" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>fizise</span></a></span> <span class="h-card" translate="no"><a href="https://wisskomm.social/@fiz_karlsruhe" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>fiz_karlsruhe</span></a></span> <span class="h-card" translate="no"><a href="https://fedihum.org/@tabea" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>tabea</span></a></span> <span class="h-card" 
translate="no"><a href="https://sigmoid.social/@enorouzi" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>enorouzi</span></a></span> <span class="h-card" translate="no"><a href="https://fedihum.org/@sourisnumerique" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>sourisnumerique</span></a></span></p>