Ask questions about a low-probability scenario and you’ll get something similar to this. This doesn’t mean #LLMs are not useful; it’s just that they are not useful for dealing with novel/low-probability situations without assistance.

Re: iNaturalist getting involved with Google genAI...feeding our comments into things...
Due to iNaturalist’s continued silence about all of this: October 31. That’s my deadline. That’s a MORE THAN FAIR amount of time for them to:
1. Have a proper outline of the project and exactly what it will be.
2. Have a solid opt-in to the project, so no users are auto-opted-in without their consent.
3. Have added the account deletion options from an over-year-old feature request: ways to delete an account without removing IDs made for others, along with anonymization. If data loss is really such a problem for them (which I think it should be), not having this type of deletion should be TOP PRIORITY, especially with all this genAI bs going on...already it sounds like some power users have fully deleted their accounts over this, tired of waiting.
- Signed, someone with almost 25k IDs for others and almost 4k observations, including some firsts on the site (among them species new to science) and other rare reports.
Please boost, because I don't think most users know what is going on. All this info is mostly confined to their separate forum, which you need to create a separate account to join. That is part of the lack of transparency!
Unsubscribed from another retro YT channel that started using genAI images as thumbnails.
Part of the appeal of old computers is getting away from the worst aspects of current tech. Shoving an AI-slop image onto the thumbnail of what I think will be a very good video just means I won't watch it, or anything else of yours.
@jducoeur @kagihq
While reading the article, I was thinking of Kagi, which I'm wowed by as I trial it. Thanks for bringing it up. AI is a tool. Simply a tool. It has its uses, like carburetors once had in cars. There was a need, but carburetors were only ever a means to an end, and who uses carburetors now?
Loved this. Sums up the crazed thinking:
Large Language Models authoritatively state things that are incorrect because they have no concept of right or wrong. I believe that the writers, managers and executives that find it exciting do so because it gives them the ability to pretend to be intelligent without actually learning anything, to do everything they can to avoid actual work or responsibility for themselves or others.
Exactly. It highlights the anti-educational and anti-truth bias that has grown since the 1990s among people who really want their lunch to be free if they just believe hard enough. But there ain't no free lunch, is there, now?
What is illegal for the Internet Archive is legal for Anthropic. Welcome to the future, where copyright law exists to limit humans while giving corporations free rein.
Piracy Is Legal Now
https://www.youtube.com/watch?v=EoOQtwp0wOM
For a while, so-called "reasoning" was hyped as a way to improve #LLM results, but it turned out the output gets worse the longer the models "think":
https://winfuture.de/news,152445.html
That's of course because the models do not "think" but produce statistical outputs based on the input (context). The more context there is, the higher the probability that something irrelevant influences the output. No thinking involved. It's not intelligence, not even an artificial one. It's statistics.
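To make that concrete, here is a toy sketch of what "generation" amounts to at each step: score candidate next tokens given the context, normalize the scores into probabilities, and sample. The candidate words and scores here are made up purely for illustration, not taken from any real model:

```python
import math
import random

def softmax(scores):
    # Normalize raw scores into a probability distribution.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next tokens
# given some context; the numbers are invented for illustration.
candidates = ["cat", "dog", "the"]
scores = [2.1, 1.9, -0.5]

probs = softmax(scores)
next_token = random.choices(candidates, weights=probs)[0]
print({c: round(p, 3) for c, p in zip(candidates, probs)}, "->", next_token)
```

Nothing in that loop checks whether the sampled token is relevant or true; it is only ever "probable given the context", which is why extra context can pull the output off course.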
So it’s been over a month since iNaturalist first announced using Google's genAI…
…I signed up for the Q&A they were going to do, and have not heard one thing. There are also no updates other than a forum post from iNat staff saying they hope to have a working prototype by the end of the year.
I don’t want a “prototype at the end of the year” released and then have it be too late to retract what I have given to the site before it gets eaten up by a genAI machine. If there is a working demo, that means the data has already been fed in, I would think…?
Thus, at this point, the question is just how long I wait until I pull the plug. There is a total lack of transparency, a lot of double-talk, and this is far too big an issue to just leave to chance. And to everyone saying “but science!”: we functioned as scientists long before iNat existed, and there are other ways to manage datasets. If you are that conditioned to needing a computer to think for you, maybe rethink being a scientist.
iNat has done great things; getting into bed with genAI is not one of them. Instead, it is a nail in the coffin of science, thinking, and reason.
Politico's Owner Is Embarrassing Its Journalists With Garbled AI Slop
"Nobody in the company has to explain in the company why she or he is using AI to do something — whether to prepare a presentation or analyze a document," Dopfner said in a speech, as quoted by Status. "You only have to explain if you didn't use AI."
As expected, generative AI gives us scams and grifts, because that's all it's good for.
In Finland there is now a systematic campaign of #deepfake videos advertising a fake health product, with fake faces and voices of national health officials delivering fake reports that the product was supposedly authorised (even though no such announcements are ever made, even for real products).
And what'd ya know, huh? The snake oil salesmen are first out the gate
https://thl.fi/-/thl-ja-fimea-varoittavat-verkossa-leviaa-nyt-runsaasti-huijausmainoksia
#genai #grift #scam #health
@savetheAI You could reach a much larger audience for your survey (and social media work) by using hashtags like #AI #genAI #generativeAI #LLM #AIslop
A new Linux malware named Koske is seemingly using JPEG images of cute panda bears to deploy its payload directly into system memory.
Two major AI "vibe coding" tools wiped out user data despite explicit instructions not to modify code.
Interesting tidbits from #Anthropic’s blog on how they use Claude Code:
https://www.anthropic.com/news/how-anthropic-teams-use-claude-code
Top tip from Data Science and ML Engineering teams: treat it like a *slot machine*. Save your state before letting Claude work, let it run for 30 minutes, then either accept the result or start fresh (sketched below)…
Top tip from Product Engineering teams: treat it as an *iterative partner*, not a one-shot solution…
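Here's a rough sketch of that "slot machine" workflow in Python, assuming you're in a git repo; the agent command is a placeholder of my own, not a documented interface:

```python
import subprocess

def git(*args):
    subprocess.run(["git", *args], check=True)

def slot_machine(agent_cmd, timeout_secs=30 * 60):
    # Save your state before letting the agent work.
    git("add", "-A")
    git("commit", "--allow-empty", "-m", "checkpoint before agent run")
    try:
        # Let it run for up to 30 minutes.
        subprocess.run(agent_cmd, timeout=timeout_secs)
    except subprocess.TimeoutExpired:
        pass  # time's up; judge whatever it produced
    # Either accept the result or start fresh from the checkpoint.
    if input("Keep the agent's changes? [y/N] ").strip().lower() != "y":
        git("reset", "--hard", "HEAD")

# slot_machine(["claude", "refactor the parser"])  # hypothetical invocation
```

The point of the checkpoint commit is that a bad pull of the lever costs you nothing: you reset and pull again, rather than untangling a half-finished edit.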
The marked paragraph is also a perfect description of the #convictedFelonTrump. However, not so much intelligence is involved in his case.
A friend sent me the story of the LLM that deleted a database during a code freeze and said "it lied when asked about it." I assert that a generative AI cannot lie. These aren't my original thoughts. But in his famous essay On Bullshit (downloadable PDF here), Harry Frankfurt gives a carefully reasoned definition of bullshit, and this paragraph near the end of the essay explains why an LLM cannot lie.
It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it. When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he consider his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.
And that's a generative artificial intelligence algorithm. Whether generating video, image, text, network traffic, whatever. It has no reference to the truth and is unaware of what truth is. It just says things. Sometimes they turn out to be true. Sometimes not. But that's irrelevant to an LLM. It doesn't know.
Answering my own question. Damn, it's too clever.
Just ran a quick experiment with Copilot Notebooks (https://lnkd.in/gYf-dXmp), and I’m impressed by how fast it turned a rough draft into something polished and shareable https://www.linkedin.com/posts/shishs_ai-clipchamp-copilot-activity-7354377475327475717-lxSb
How good are LLMs at explaining code without relying on names/keywords for hints?
If I systematically alpha-vary every identifier to something unhelpful like a01234, etc., and replace all the keywords like 'if', 'match', 'use', etc. with 'frob', 'garp', 'smeg', and then ask the LLM to explain what the code in a particular function is doing, will it be able to figure it out by structure alone? (A rough sketch of the scrambling step follows below.)
Same question, but instead of a programming language, it's a Rocq proof.
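For the programming-language version of the experiment, a quick Python-only sketch of the scrambling step might look like this (the post's 'match'/'use' examples come from another language; the nonsense words and the keyword subset here are arbitrary):

```python
import io
import keyword
import tokenize

# Arbitrary nonsense replacements for a few Python keywords.
NONSENSE = {"if": "frob", "else": "garp", "for": "smeg",
            "def": "blarn", "return": "quux", "in": "zim", "while": "gorp"}

def scramble(source: str) -> str:
    names = {}  # consistent alpha-renaming: same identifier -> same a-number
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME:
            if keyword.iskeyword(tok.string):
                tok = tok._replace(string=NONSENSE.get(tok.string, tok.string))
            else:
                names.setdefault(tok.string, f"a{10000 + len(names)}")
                tok = tok._replace(string=names[tok.string])
        out.append(tok)
    return tokenize.untokenize(out)

print(scramble("def total(xs):\n    s = 0\n    for x in xs:\n        s = s + x\n    return s\n"))
```

The structure (indentation, operators, control flow) survives intact while every lexical hint is gone, which is exactly what the question wants to isolate.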
So, not only are AI assistants making you 19% slower on non-bullshit tasks, they also make you believe that you are 20% faster.
From what looks like a serious study (a small sample of 16 developers, but the setup seems sound):
"Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity"
GenAI is a master bullshitter, the kind that raises cults.
we find that when developers use AI tools, they take 19% longer than without—AI makes them slower
This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
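Toy arithmetic on those headline numbers, using a made-up 60-minute task and reading "X% faster" as dividing task time by (1 + X), which is one plausible interpretation:

```python
baseline = 60.0             # minutes a task takes without AI (made-up)
measured = baseline * 1.19  # observed: 19% longer with AI
expected = baseline / 1.24  # what the expected 24% speedup would have meant
believed = baseline / 1.20  # what developers thought they achieved

print(f"without AI:          {baseline:5.1f} min")
print(f"with AI (measured):  {measured:5.1f} min")  # ~71.4 min
print(f"with AI (expected):  {expected:5.1f} min")  # ~48.4 min
print(f"with AI (believed):  {believed:5.1f} min")  # ~50.0 min
```

So on this toy reading, a task that actually took ~71 minutes felt like it took ~50: a gap of over 20 minutes between perception and the stopwatch.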