#chatgpt4

Despite all the hyped-up expectations: the new version of ChatGPT blunders as it always has (Volkskrant)

In his newsletter, critic Ed Zitron calls this turn the "enshittification of generative AI", referring to the term blogger Cory Doctorow once used for digital platforms that initially offer pleasant free products to attract users, but then remove more and more features from the free tiers, bleeding consumers dry.

Hmm, "enshittification" is supposedly a term that @Cory Doctorow "once used"? 😂

https://archive.ph/3yC4V

#enshittification #ChatGPT #ChatGPT4 #AI

Here's a difficult task for #llms

"Generate a csv with 30 rows and three columns. first column is a country. second column is the name of the country spelled backwards, the third column is the number of characters of the country name."

When throwing this into duck.ai using #ChatGPT4-o, it gets surprisingly many correct, but some have odd spelling mistakes in the reversed strings, e.g.

India,adnI,5
Japan,napJ,5
Russia,assuR,6
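The model's output is easy to check mechanically. A minimal sketch (my own checker, not part of the original post) that verifies the three quoted rows, where both the reversed string and the character count are claims the model made:

```python
# Rows quoted from the post: (country, model's reversed string, model's length claim)
rows = [
    ("India", "adnI", 5),
    ("Japan", "napJ", 5),
    ("Russia", "assuR", 6),
]

for country, reversed_claim, length_claim in rows:
    # country[::-1] is the actual reversal; compare against the model's answer
    ok_reverse = reversed_claim == country[::-1]
    ok_length = length_claim == len(country)
    print(f"{country}: reverse {'ok' if ok_reverse else 'WRONG'}, "
          f"length {'ok' if ok_length else 'WRONG'}")
```

Run against these rows, all three reversals come out wrong (e.g. "India" reversed is "aidnI", not "adnI") while all three length claims happen to be right.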

My grandfather wrote a novel in Ukrainian a long time ago (maybe the 1930s) and never published it. It was given to me decades ago. My Ukrainian is not so good that I could read it with any clarity. After many failed attempts over the years to get it translated, I am finally starting to get somewhere by feeding it into ChatGPT. Though it is an imperfect translation, I can't wait to finally be able to read it.

#ukraine #ukrainian #ai #machinetranslation #chatgpt
#chatgpt4 #unpublished #lostmedia

Serious doubt that ChatGPT 5 will approach AGI, but there is some speculation; it should vastly outperform existing LLMs. Possibly an earlier release of Orion, otherwise late 2025. Notes that Microsoft Copilot secretly used ChatGPT 4 before OpenAI released it, which is surprising. https://www.digitaltrends.com/computing/gpt-5-everything-we-know-so-far/ #chatgpt #llm #ai #copilot #gemini #chatgpt5 #chatgpt4 #openai
Replied in thread

@mina @si_irini @2ndStar @SilviaMarton

What is #AGI?
(9/9)

...But no[t] actually be any good or even be accurate, please. Agbara [#Aguera?] isn’t blind to these limitations. To him, hitting #AGI doesn’t mean the work stops there."

👉 And that is why I am now of the opinion that #ChatGPT4 is an "Emerging AGI". 👈

//

[I'm curious what you see at this link, which was meant for sharing. Do you see the full transcript (despite the paywall)?]

economist.com/podcasts/2024/09

The Economist · What is artificial general intelligence?
🤖 🧠 This is a VERY meaty post about #AI, future trends, and dangers.

First, an entire web page dedicated to the current status and more-than-probable future developments and achievements of AI, by Leopold Aschenbrenner (who used to work on the Superalignment team at OpenAI, so he's someone who knows the issue deeply). The web page is https://situational-awareness.ai/ and, actually, I've only read chapter 1: "From GPT-4 to AGI: Counting the OOMs" (Orders Of Magnitude), and it is already a shocker and an eye-opener. tl;dr: there is a good chance that by 2027 we have the so-called #AGI, or Artificial General Intelligence. Real intelligence, way beyond #ChatGPT4.

This could look like "yay, unicorns!", but there are grave problems behind it. One of the main ones: #Alignment, or "restricting the AI to do what we would like it to do and not, say, exterminate humans or make any other catastrophic decision". This article says it is, flatly, impossible: https://www.mindprison.cc/p/ai-alignment-why-solving-it-is-impossible Not just hard, but impossible. As in:

«“Alignment, which we cannot define, will be solved by rules on which none of us agree, based on values that exist in conflict, for a future technology that we do not know how to build, which we could never fully understand, must be provably perfect to prevent unpredictable and untestable scenarios for failure, of a machine whose entire purpose is to outsmart all of us and think of all possibilities that we did not.”»

This is deeply analyzed in this article (which I haven't fully read, I felt the urge to write this post first).

Now, it is also very interesting and fearsome to read, from the first page I mentioned (https://situational-awareness.ai/), the article called «Lock Down the Labs: Security for AGI». He himself says "We're counting way too much on luck here." This is not to be taken lightly, I'd say.

All this said, I think he takes a very naive view of the world in one of this site's articles: "The Free World Must Prevail". He seems to think "liberal democracies" (what I'd call "global North" states) are a model of freedom and respect for human rights, and I don't think so at all. That there are worse places, sure. But these "liberal democracies" also heavily externalize their criminal abuses of power, which would seem to have nothing to do with them, but I'd say has everything to do with them: slavery, natural-resource exploitation, pollution and trash. And progressively this externalization is coming home, where more and more people are being made destitute, where the fraction of miserable and exploited people is growing larger and larger.

At the same time, there exists a very powerful propaganda machine that generates a very comforting discourse and story for the citizens of these countries, so we remain oblivious to the real pillars of our system (who is aware of the revoltingly horrendous conditions of most animals in industrial farming, for example? Most of us just get to see nicely packaged stuff in the supermarkets, and that's the image we extrapolate to the whole chain of production). I guess that despite his brilliant intelligence, he has fallen prey to such propaganda (which, notably, uses emotional levers and other cognitive biases that bypass reason).

Finally, Robert Miles has published a new video after more than a year in silence (I feared depression!): https://www.youtube.com/watch?v=2ziuPUeewK0 which is yet another call to SERIOUSLY CONSIDER AI SAFETY FFS. If you haven't checked his channel, he's got very funny and also bright, informative and concise videos about #AIsafety and, in particular, #AIAlignment. Despite being somewhat humorous, he is clear about the enormous dangers of AI misalignment.

There you go, this has been a long post, but I think it's important that we all see where this is going. As for the "what could I do?" part... shit. I really can't tell. As Leopold says, AI research is (unlike some years ago) currently run by private, opaque and proprietary AI labs, funded by big capital that will only increase inequality and shift even more of the power balance to an ever-tinier elite (tiny in numbers, huge in power). I can't see how this might end well, I'm sorry. Maybe the only things that might stop this evolution are natural or man-made disasters such as the four horsemen of the apocalypse.

Am I being too pessimistic here? Is there no hope? Well, still, I'll continue to correct my students' exams now, after writing this, and I'll continue to try to make their lives, and those of the people around me, more wonderful (despite the exams XD). I refuse to capitulate: I want to keep sending signals of the wonderfulness of human life, and of all life, despite the state of the world (part of it: it is much more than just wars and extinction threats, though there are those).

Please boost if you found this long post interesting and worthwhile.
SITUATIONAL AWARENESS: The Decade Ahead. Leopold Aschenbrenner, June 2024. "You can see the future first in San Francisco. Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there's a fierce scramble to
Replied in thread

Here's a little "teaser" of which topics the #scienceomat for the #Europawahl 🇪🇺 asks you (and #chatgpt4 😉) about, and on which topics the parties' answers differ the most (column headers = parties are hidden).

[Note: For positions 6 and 12, the more "climate-friendly" answer is "Disagree", hence the many red crosses in those rows.]

Replied in thread

Recently in the media: according to the #wahlomat, ChatGPT would tend to vote left/green. (There are various interpretations of why that is.)

I had #chatgpt4 answer the #scienceomat. You can see the result below; its assessment "is based on an analysis of the urgency and effectiveness of measures to combat climate change, based on scientific findings and studies".

But much more important: what are your answers? 🇪🇺

science-o-mat.de/

Inspired by the heptagon case, I wanted to test how #chatGPT4 is thinking* in terms of geometry.
It seems it takes the filename into account when answering about the structure's shape, but tricking it into calling it a triangle is a bridge too far.

*or, obviously, not thinking.