#compsci


C# is doing a lot of work to explore unions in an OO language. Watch this space: it's a very hard decision, perhaps one of style more than of theory.

A very good summary here:
github.com/dotnet/csharplang/b

As inheritance is baked in deep, we don't necessarily need unions -- rather, we need closed hierarchies. So one of my favourite low-impact suggestions is `Standard Unions`:
github.com/dotnet/csharplang/b
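
A minimal sketch of the closed-hierarchy idea in today's C# (illustrative names, not the proposal's syntax): an abstract record with a private constructor, so only its nested records can derive from it. The interesting part is the final switch arm -- the compiler can't prove the match exhaustive because it has no way to know the hierarchy is closed, which is exactly the gap something like `Standard Unions` would fill.

```csharp
using System;

// Hand-rolled "closed" hierarchy: the private constructor means only the
// nested records can derive from Shape.
public abstract record Shape
{
    private Shape() { }

    public sealed record Circle(double Radius) : Shape;
    public sealed record Rectangle(double Width, double Height) : Shape;
}

public static class Geometry
{
    public static double Area(Shape s) => s switch
    {
        Shape.Circle c    => Math.PI * c.Radius * c.Radius,
        Shape.Rectangle r => r.Width * r.Height,
        // Today the compiler still demands a fallback arm; a closed-hierarchy
        // feature would let it see the two cases above as exhaustive.
        _ => throw new InvalidOperationException("unreachable"),
    };
}
```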

Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens arxiv.org/abs/2508.01191 #paper 📄 #LLM #AI #compsci

arXiv.org: Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens

Chain-of-Thought (CoT) prompting has been shown to improve Large Language Model (LLM) performance on various tasks. With this approach, LLMs appear to produce human-like reasoning steps before providing answers (a.k.a., CoT reasoning), which often leads to the perception that they engage in deliberate inferential processes. However, some initial findings suggest that CoT reasoning may be more superficial than it appears, motivating us to explore further. In this paper, we study CoT reasoning via a data distribution lens and investigate if CoT reasoning reflects a structured inductive bias learned from in-distribution data, allowing the model to conditionally generate reasoning paths that approximate those seen during training. Thus, its effectiveness is fundamentally bounded by the degree of distribution discrepancy between the training data and the test queries. With this lens, we dissect CoT reasoning via three dimensions: task, length, and format. To investigate each dimension, we design DataAlchemy, an isolated and controlled environment to train LLMs from scratch and systematically probe them under various distribution conditions. Our results reveal that CoT reasoning is a brittle mirage that vanishes when it is pushed beyond training distributions. This work offers a deeper understanding of why and when CoT reasoning fails, emphasizing the ongoing challenge of achieving genuine and generalizable reasoning.

Happy birthday to trailblazing American computer scientist Frances Elizabeth Allen (1932 – 2020), who made foundational contributions to optimizing compilers, program optimization, and parallel computing. She was the first woman to become an IBM Fellow; she worked at IBM from 1957 to 2002 and as an emeritus fellow afterwards. She was also the first woman to win the Turing Award.

IBM Research was recruiting teachers 🧵1/n

The Big OOPs:

Anatomy of a Thirty-five-year Mistake – BSC 2025

by Casey Muratori

youtube.com/watch?v=wo84LFzx5nI

I don't watch or attend a lot of conferences and talks these days, probably for the same reason you shouldn't watch as much TV, or believe everything on it, the way you used to.

But to me, at least, this is a deep and serious one, worth your time in a fundamental way -- if you are a programmer who actually cares about code, anyway.