digitalcourage.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
This instance is run by Digitalcourage e.V. for the general public. So that we can do this sustainably, we charge an annual advance contribution of €1/month, collected via SEPA direct debit.

Server stats: 845 active users

#devops

80 posts · 70 participants · 11 posts today

🔥 Breaking news: #TradExpert claims it can predict your next bad trade with an #AI cocktail more potent than your last regrettable tequila mix. 🤖🍹 Meanwhile, #arXiv, the least chaotic bit of the internet, is desperately seeking a #DevOps wizard to tame its academic jungle. 🧙‍♂️💼
arxiv.org/abs/2411.00782 #BreakingNews #TechTrends #HackerNews #ngated

arXiv.org · TradExpert: Revolutionizing Trading with Mixture of Expert LLMs
The integration of Artificial Intelligence (AI) in the financial domain has opened new avenues for quantitative trading, particularly through the use of Large Language Models (LLMs). However, the challenge of effectively synthesizing insights from diverse data sources and integrating both structured and unstructured data persists. This paper presents TradeExpert, a novel framework that employs a mixture-of-experts (MoE) approach, using four specialized LLMs, each analyzing distinct sources of financial data, including news articles, market data, alpha factors, and fundamental data. The insights of these expert LLMs are further synthesized by a General Expert LLM to make a final prediction or decision. With specific prompts, TradeExpert can be switched between the prediction mode and the ranking mode for stock movement prediction and quantitative stock trading, respectively. In addition to existing benchmarks, we also release a large-scale financial dataset to comprehensively evaluate TradeExpert's effectiveness. Our experimental results demonstrate TradeExpert's superior performance across all trading scenarios.
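
For illustration only, a minimal Python sketch of the mixture-of-expert-LLMs pipeline the abstract describes: four specialist prompts over different data sources, one synthesizing call, and a prediction/ranking mode switch. Every name here (call_llm, EXPERT_PROMPTS, trade_expert) and every prompt is hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a TradExpert-style mixture-of-expert-LLMs pipeline.
# Names and prompts are illustrative assumptions, not the paper's implementation.
from typing import Callable, Dict

EXPERT_PROMPTS: Dict[str, str] = {
    "news":        "Summarize the trading signal implied by these news articles:\n{data}",
    "market":      "Analyze these market data series for momentum and volatility:\n{data}",
    "alpha":       "Interpret these alpha factor values for the given tickers:\n{data}",
    "fundamental": "Assess these fundamental indicators (P/E, revenue growth, ...):\n{data}",
}

def trade_expert(inputs: Dict[str, str],
                 call_llm: Callable[[str], str],
                 mode: str = "prediction") -> str:
    """Run the specialist LLMs, then let a general expert combine their views.

    mode="prediction" asks for an up/down call; mode="ranking" asks for an ordering.
    """
    expert_views = {
        name: call_llm(prompt.format(data=inputs[name]))
        for name, prompt in EXPERT_PROMPTS.items()
        if name in inputs
    }
    task = ("Predict next-day movement (UP/DOWN) with a one-line rationale."
            if mode == "prediction"
            else "Rank the candidate stocks from most to least attractive.")
    synthesis_prompt = task + "\n\nExpert reports:\n" + "\n".join(
        f"[{name}] {view}" for name, view in expert_views.items())
    return call_llm(synthesis_prompt)

if __name__ == "__main__":
    # Stub LLM so the sketch runs without any API; swap in a real client to use it.
    echo = lambda prompt: f"(model output for: {prompt[:40]}...)"
    print(trade_expert({"news": "Chipmaker beats earnings.",
                        "market": "close: 101, 103, 107"}, echo))
```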

Ah, the noble quest for yet another "fast and adaptable" compiler framework 🤹‍♂️, presumably to compile the mountains of meta-babble inside academic papers. Meanwhile, #arXiv tempts you with the life-altering opportunity to apply your #DevOps wizardry for a pittance, so you, too, can experience the thrilling world of open science bureaucracy. 🌐✨
arxiv.org/abs/2505.22610 #fastcompilers #adaptablecompilers #open_science #HackerNews #ngated

arXiv.org · TPDE: A Fast Adaptable Compiler Back-End Framework
Fast machine code generation is especially important for fast start-up just-in-time compilation, where the compilation time is part of the end-to-end latency. However, widely used compiler frameworks like LLVM do not prioritize fast compilation and require an extra IR translation step, increasing latency even further; and rolling a custom code generator is a substantial engineering effort, especially when targeting multiple architectures. Therefore, in this paper, we present TPDE, a compiler back-end framework that adapts to existing code representations in SSA form. Using an IR-specific adapter providing canonical access to IR data structures and a specification of the IR semantics, the framework performs one analysis pass and then performs the compilation in just a single pass, combining instruction selection, register allocation, and instruction encoding. The generated target instructions are primarily derived from code written in a high-level language through LLVM's Machine IR, easing portability to different architectures while enabling optimizations during code generation. To show the generality of our framework, we build a new back-end for LLVM from scratch targeting x86-64 and AArch64. Performance results on SPECint 2017 show that we can compile LLVM-IR 8-24x faster than LLVM -O0 while being on par in terms of run-time performance. We also demonstrate the benefits of adapting to domain-specific IRs in JIT contexts, particularly WebAssembly and database query compilation, where avoiding the extra IR translation further reduces compilation latency.
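
A rough Python sketch of the adapter idea from the abstract: the back-end sees any SSA-form IR only through a small interface, runs one analysis pass (here: liveness), then fuses instruction selection, register allocation, and encoding into a single pass. The IRAdapter/Inst/compile_single_pass names are invented for illustration; TPDE itself is not a Python library and differs substantially in detail.

```python
# Illustrative sketch of a single-pass compiler back-end behind an IR adapter.
# All names and the toy "encoding" are assumptions, not TPDE's real API.
from dataclasses import dataclass
from typing import Dict, Iterable, List, Protocol, Tuple

@dataclass
class Inst:
    opcode: str                 # e.g. "mov", "add", "ret"
    dst: str                    # SSA value this instruction defines ("" for ret)
    operands: Tuple[str, ...]   # SSA values or immediates like "#1"

class IRAdapter(Protocol):
    """Canonical, read-only view a source IR must provide to the back-end."""
    def blocks(self) -> Iterable[str]: ...
    def instructions(self, block: str) -> Iterable[Inst]: ...

def compile_single_pass(ir: IRAdapter, registers: List[str]) -> List[str]:
    # Pass 1 (analysis): last use of each SSA value, so registers can be freed.
    flat: List[Inst] = [i for b in ir.blocks() for i in ir.instructions(b)]
    last_use: Dict[str, int] = {}
    for idx, inst in enumerate(flat):
        for op in inst.operands:
            last_use[op] = idx

    # Pass 2: instruction selection + register allocation + "encoding", fused.
    loc: Dict[str, str] = {}            # SSA value -> assigned register
    free = list(reversed(registers))
    code: List[str] = []
    for idx, inst in enumerate(flat):
        srcs = [loc.get(op, op) for op in inst.operands]   # immediates pass through
        for op in inst.operands:        # free registers whose value dies here
            if op in loc and last_use[op] == idx:
                free.append(loc.pop(op))
        if inst.opcode == "ret":
            code.append("ret" + (f" {srcs[0]}" if srcs else ""))
        else:
            dst_reg = free.pop()
            loc[inst.dst] = dst_reg
            code.append(f"{inst.opcode} {dst_reg}, {', '.join(srcs)}")
    return code

if __name__ == "__main__":
    class ListIR:                      # trivial adapter over a flat instruction list
        def __init__(self, insts): self._insts = insts
        def blocks(self): return ["entry"]
        def instructions(self, block): return self._insts

    prog = [Inst("mov", "a", ("#1",)), Inst("mov", "b", ("#2",)),
            Inst("add", "c", ("a", "b")), Inst("ret", "", ("c",))]
    print(compile_single_pass(ListIR(prog), ["r0", "r1", "r2"]))
```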

7 Course-Bundle: Shut Up and Code Python + PyCharm + Coding Interview + Machine Learning + One-Liners + Regex + Lambdas leanpub.com/set/leanpub/7cours by Christian Mayer, Lukas Rieger, and Shubham Sayon is the featured Track of online courses on the Leanpub homepage! leanpub.com #ComputerProgramming #Devops

Leanpub · 7 Course-Bundle: Shut Up and Code Python + PyCharm + Coding Interview + Machine Learning + One-Liners + Regex + Lambdas

"Shut up and code." Laughter in the audience. The hacker had just plugged in his notebook and started sharing his screen to present his super-smart Python script. "Shut up and code" — the letters, written in a white coding font on a black background, were the hacker's home-screen mantra.

At the time, I was a first-year computer science student and I didn't understand the code he was explaining. But I was hooked! Python was going to be my pet project, and I wasn't going to stop trying to tame it and become a Python master myself. Well, 10 years later I'm still learning new and exciting language features every day. I now know what every programmer finally understands: nobody knows shit about anything! Fortunately, it doesn't take 10 years to start using Python and create your own projects. You simply need to learn just enough to make your first program run. Then your second. Then your third... And before you know it, people will pay you lots of money to solve their coding problems.

In this 7-course bundle, you'll learn 7 hands-on programming skills. You'll become a better Python coder faster and build yourself an extremely valuable skill for the 21st century. Whether you're coming from the US, Europe, Asia, or South America, learning the Python basics will prove useful throughout your career.

Thousands of students have learned with our courses. Here's what Edwin Gomez, a university professor and student of our courses, says about our content: "I am a university professor and I recommended my students to open an account and practice Python with your Puzzles." Here's another testimonial from my student Anthony Billings: "I'm a huge fan of the site, subscribe to the emails, and have learned a TON from your resources, cheat sheets, and Finxter. Thank you for your continued effort and work that goes into all of this." I don't want to bother you by listing dozens of testimonials here, but if you want to read over them, feel free to check them out here.

Statistically, "Python" is a six-figure-per-year skill: the average salary of a skilled Python professional is way above $100,000 per year. That's about $8,300 per month. Could you build yourself and your family a comfortable life by earning $8,000 per month? If you want to build yourself this exciting, fun, and surprisingly easy skill set of being a Python coder, feel free to check out the 7-course bundle in front of you. I'd love to see you in the courses! Ahh, and yes: "Shut the hell up and start coding!"

🚀 NBA37 – DORA

In a nutshell:
1️⃣ The four DORA metrics: speed (lead time, deployment frequency) and stability (change failure rate, MTTR); a small sketch of how to compute them follows below.
2️⃣ A mirror for the team, not a KPI pillory.
3️⃣ A quick check shows your maturity level and your biggest levers.

❓ Which DORA metric will you tackle first?
👉 no-bullshit-agile.de/nba37-dor

No Bullshit Agile · NBA37: DORA
Introduction to the DORA analysis: discover the key metrics that distinguish successful software teams and drive their performance.
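
A minimal sketch, assuming simple deployment and incident records, of how the four DORA metrics from the post could be computed. The field names are made up for illustration and do not come from any particular tool's schema.

```python
# Compute the four DORA metrics from assumed deployment/incident records.
from datetime import datetime, timedelta
from statistics import median
from typing import Dict, List

def dora_metrics(deploys: List[Dict], incidents: List[Dict], days: int = 30) -> Dict[str, float]:
    """deploys: {"committed_at", "deployed_at", "failed": bool}
       incidents: {"opened_at", "resolved_at"} (assumed to correspond to failed changes)."""
    lead_times = [(d["deployed_at"] - d["committed_at"]).total_seconds() / 3600
                  for d in deploys]
    restore_times = [(i["resolved_at"] - i["opened_at"]).total_seconds() / 3600
                     for i in incidents]
    return {
        "lead_time_h_median": median(lead_times) if lead_times else 0.0,   # speed
        "deploy_freq_per_day": len(deploys) / days,                        # speed
        "change_fail_rate": sum(d["failed"] for d in deploys) / max(len(deploys), 1),  # stability
        "mttr_h_median": median(restore_times) if restore_times else 0.0,  # stability
    }

if __name__ == "__main__":
    t0 = datetime(2024, 6, 1)
    deploys = [
        {"committed_at": t0, "deployed_at": t0 + timedelta(hours=5), "failed": False},
        {"committed_at": t0, "deployed_at": t0 + timedelta(hours=30), "failed": True},
    ]
    incidents = [{"opened_at": t0 + timedelta(hours=31),
                  "resolved_at": t0 + timedelta(hours=33)}]
    print(dora_metrics(deploys, incidents))
```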

🤖 So, we're on the brink of birthing machines with "higher mental states"—because clearly, turning toasters into philosophers is the next logical step in tech #evolution. Meanwhile, arXiv is desperately seeking a #DevOps engineer to keep their quantum of solace from crashing, proving that even the pursuit of #AI enlightenment needs someone who can turn it off and on again. 🔧🧐
arxiv.org/abs/2505.06257 #TechJobs #QuantumComputing #HackerNews #ngated

arXiv.org · Beyond Attention: Toward Machines with Intrinsic Higher Mental States
Attending to what is relevant is fundamental to both the mammalian brain and modern machine learning models such as Transformers. Yet, determining relevance remains a core challenge, traditionally offloaded to learning algorithms like backpropagation. Inspired by recent cellular neurobiological evidence linking neocortical pyramidal cells to distinct mental states, this work shows how models (e.g., Transformers) can emulate high-level perceptual processing and awake thought (imagination) states to pre-select relevant information before applying attention. Triadic neuronal-level modulation loops among questions ($Q$), clues (keys, $K$), and hypotheses (values, $V$) enable diverse, deep, parallel reasoning chains at the representation level and allow a rapid shift from initial biases to refined understanding. This leads to orders-of-magnitude faster learning with significantly reduced computational demand (e.g., fewer heads, layers, and tokens), at an approximate cost of $\mathcal{O}(N)$, where $N$ is the number of input tokens. Results span reinforcement learning (e.g., CarRacing in a high-dimensional visual setup), computer vision, and natural language question answering.
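
Purely as an illustration of the general "pre-select relevant information before applying attention" idea (not the paper's triadic Q/K/V modulation mechanism), here is a small NumPy sketch that restricts attention to the top-k most relevant keys; the function name and the relevance heuristic are assumptions for the example.

```python
# Hedged illustration: pre-select a small set of keys/values, then run standard
# attention over only those. The paper's actual mechanism is more involved.
import numpy as np

def preselect_then_attend(q: np.ndarray, K: np.ndarray, V: np.ndarray, k: int = 8):
    """q: (d,), K: (N, d), V: (N, d). Returns the attention output over the top-k keys."""
    # In this toy the pre-selection score is the same dot product as attention;
    # a real system would use a cheaper or learned relevance signal here.
    relevance = K @ q
    keep = np.argsort(relevance)[-k:]          # indices of the k most relevant tokens
    scores = (K[keep] @ q) / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, d = 1024, 64
    out = preselect_then_attend(rng.normal(size=d),
                                rng.normal(size=(N, d)),
                                rng.normal(size=(N, d)))
    print(out.shape)   # (64,)
```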