#discovery


It has been a quiet few weeks on TrueFans, but today we are pushing our new BlueSky integration live.

Now you can auto-publish your listening activity, comments, etc. to your BlueSky account. You choose which verbs we should publish, authorise your BlueSky account, and we do the rest.

It means anyone following your BlueSky account will see your podcast listening activity and comments. #discovery

We already do this with X and Mastodon.
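The flow described above — the user picks which activity verbs to syndicate, authorises an account, and the service cross-posts matching activity — can be sketched as a simple filter. All names below are hypothetical illustrations; TrueFans' actual API is not shown in this post.

```python
# Hypothetical sketch of verb-filtered cross-posting. The record shape and
# function names are assumptions for illustration, not TrueFans' real API.

def select_for_syndication(activities, enabled_verbs):
    """Keep only the activities whose verb the user opted to publish."""
    return [a for a in activities if a["verb"] in enabled_verbs]

activities = [
    {"verb": "listened", "object": "Episode 42 of an example podcast"},
    {"verb": "commented", "object": "Great episode!"},
    {"verb": "boosted", "object": "Episode 41"},
]

# Suppose the user chose to publish only listening activity and comments;
# the boost is filtered out before anything is posted to BlueSky.
outgoing = select_for_syndication(activities, {"listened", "commented"})
for item in outgoing:
    print(f"{item['verb']}: {item['object']}")
```

The same filter would sit in front of each connected network (X, Mastodon, BlueSky), with only the delivery step differing per platform.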

The video « Galbi Bghaha » ("My heart chose her"), produced by the Lycée Secondaire Houmt Souk Djerba, won first prize in the competition for best video clip, organized as part of the… | Nour Ben Tahar

linkedin.com/posts/nourbentaha

☀️🔆 Wow, breaking news: the sun—yes, the one that's been hanging out in the sky for 4.6 billion years—is finally doing something noteworthy! Congratulations, it only took #humanity a couple of centuries to notice. Next up: water is wet! 💧🙄
newyorker.com/news/annals-of-a #sun #news #breakthrough #science #discovery #HackerNews #ngated

The New Yorker · 4.6 Billion Years On, the Sun Is Having a Moment · By Bill McKibben

Happy #Higgsdependence Day! Today is the anniversary of the discovery of the Higgs particle, announced on July 4, 2012.

The existence of the Higgs is a sign that a fundamental symmetry is broken in nature. Perfect symmetry is not good for a universe like ours: with it, intrinsic mass as we know it cannot exist. Today we celebrate freedom from mathematical symmetry ... a universe made wonderful by its disrespect for sameness.

Another week -- which means another research paper questioning whether #LLMs should go anywhere near the scholarly research process. And the answer is, unsurprisingly, 'no'. #ChatGPT (GPT-4) and #Bard #hallucinated #references in roughly 29% and 91% of cases, respectively. But there are many other worrying observations in this study.

Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews
doi.org/10.2196/53164 #LLM #scholcomm #AI #search #discovery #hallucinations

Journal of Medical Internet Research · Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis

Background: Large language models (LLMs) have raised both interest and concern in the academic community. They offer the potential for automating literature search and synthesis for systematic reviews but raise concerns regarding their reliability, as the tendency to generate unsupported (hallucinated) content persists.

Objective: The aim of the study is to assess the performance of LLMs such as ChatGPT and Bard (subsequently rebranded Gemini) in producing references in the context of scientific writing.

Methods: The performance of ChatGPT and Bard in replicating the results of human-conducted systematic reviews was assessed. Using systematic reviews pertaining to shoulder rotator cuff pathology, these LLMs were tested by providing the same inclusion criteria and comparing the results with the original systematic review references, serving as gold standards. The study used 3 key performance metrics: recall, precision, and F1-score, alongside the hallucination rate. Papers were considered "hallucinated" if any 2 of the following were wrong: title, first author, or year of publication.

Results: In total, 11 systematic reviews across 4 fields yielded 33 prompts to LLMs (3 LLMs × 11 reviews), with 471 references analyzed. Precision rates for GPT-3.5, GPT-4, and Bard were 9.4% (13/139), 13.4% (16/119), and 0% (0/104), respectively (P<.001). Recall rates were 11.9% (13/109) for GPT-3.5 and 13.7% (15/109) for GPT-4, with Bard failing to retrieve any relevant papers (P<.001). Hallucination rates stood at 39.6% (55/139) for GPT-3.5, 28.6% (34/119) for GPT-4, and 91.4% (95/104) for Bard (P<.001). Further analysis of non-hallucinated papers retrieved by the GPT models revealed significant differences in identifying various criteria, such as randomized studies, participant criteria, and intervention criteria. The study also noted geographical and open-access biases in the papers retrieved by the LLMs.

Conclusions: Given their current performance, it is not recommended that LLMs be deployed as the primary or exclusive tool for conducting systematic reviews. Any references generated by such models warrant thorough validation by researchers. The high occurrence of hallucinations in LLMs highlights the necessity of refining their training and functionality before confidently using them for rigorous academic purposes.
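The paper's hallucination criterion — a generated reference counts as hallucinated when at least two of title, first author, and publication year disagree with the gold-standard reference — is straightforward to operationalize. A minimal sketch of that rule, with made-up example records:

```python
def is_hallucinated(generated, gold):
    """Apply the paper's rule: a reference is hallucinated if >= 2 of
    title, first author, or year of publication are wrong."""
    mismatches = sum(
        generated[field] != gold[field]
        for field in ("title", "first_author", "year")
    )
    return mismatches >= 2

# Made-up gold-standard reference for illustration.
gold = {"title": "Rotator cuff repair outcomes",
        "first_author": "Smith", "year": 2019}

# Wrong author AND wrong year -> two mismatches -> hallucinated.
bad = {"title": "Rotator cuff repair outcomes",
       "first_author": "Jones", "year": 2021}

# Wrong year only -> one mismatch -> not hallucinated under this rule.
close = {"title": "Rotator cuff repair outcomes",
         "first_author": "Smith", "year": 2020}
```

The study's hallucination rate is then just the fraction of generated references for which this predicate is true.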

@LotharNunnenmacher Wild. The things you can do with #Discovery catalogues. 🧐

"The Lib4RI Search Tool combines the results from the library catalogue with hits from other scholarly sources" (including the institution's own repository)

"For this purpose, the data is enriched to display relevant OA information at the level of individual journal titles, tailored to the situation at one’s own institution."

"the information on the institutional OA agreements is prepared and imported locally via a CSV file"
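The enrichment step quoted above — institutional OA-agreement data maintained locally as a CSV file and joined onto search hits at the journal-title level — can be sketched roughly as follows. The column names and the ISSN join key are assumptions for illustration, not Lib4RI's actual schema.

```python
import csv
import io

# Hypothetical locally-maintained CSV of institutional OA agreements,
# keyed by journal ISSN (schema invented for this sketch).
oa_csv = io.StringIO(
    "issn,agreement,apc_covered\n"
    "1234-5678,Read & Publish,yes\n"
    "2345-6789,Subscribe to Open,no\n"
)
oa_by_issn = {row["issn"]: row for row in csv.DictReader(oa_csv)}

# Search hits from catalogue + external sources; enrich each with the
# OA information that applies at this institution, if any.
hits = [
    {"title": "Some article", "journal_issn": "1234-5678"},
    {"title": "Another article", "journal_issn": "9999-0000"},
]
for hit in hits:
    hit["oa_agreement"] = (
        oa_by_issn.get(hit["journal_issn"], {}).get("agreement")
    )
```

Keeping the agreements in a flat CSV, as described, means the library can update the file without touching the discovery system itself; the join happens at index or display time.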

The Mystery of the World's Oldest Writing System Remained Unsolved Until Four Competitive Scholars Raced to Decipher It [Shared]

In the 1850s, cuneiform was just a series of baffling scratches on clay, waiting to spill the secrets of the ancient civilizations of Mesopotamia

welchwrite.com/blog/2025/06/30