📣 From theory to throughput, MLabs is excited to share its latest blog post on Pull Arrays in Plutarch. If you're not up to speed: CIP-138 brought built-in arrays to Cardano, giving Plutus Core constant-time lookups, and with them access to many of the classic algorithms and data structures engineers know and love 🤓 Yet not all was well in Cardano-land. The raw API remains cumbersome, making it challenging (sometimes impossible) to get the desired benefits of spatially local data structures from a functional language. That's why we introduced Pull Arrays to Plutarch (the PPullArray type), our efficient Haskell eDSL for Cardano. With Pull Arrays, we finally get to enjoy array perks onchain:
* maps, zips, and slices compose without extra allocations
* real performance wins versus list-based flows
* Θ(1) introductions and efficient slices
* a cleaner, more ergonomic API that builds on CIP-138 rather than fighting it
Pretty cool, huh? Want to learn more, peer into the nitty-gritty, and see the benchmark receipts for yourself? 👉 Head over to our latest post: https://lnkd.in/eMBh2Pam
Introducing Pull Arrays in Plutarch: A Game Changer for Cardano
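For readers who want the intuition behind those bullets: a pull array is essentially a length paired with an index function, so map, zip, and slice just compose functions and only a final materialization allocates. Below is a minimal plain-Haskell sketch of that idea under those assumptions; the names (PullArray, pmap, pzip, pslice) are illustrative and not the Plutarch PPullArray API.

-- Minimal sketch of the pull-array idea in plain Haskell.
-- Illustrative only; not the Plutarch PPullArray API.
data PullArray a = PullArray { len :: Int, index :: Int -> a }

-- O(1) introduction from a length and an index function.
generate :: Int -> (Int -> a) -> PullArray a
generate = PullArray

-- map fuses: no intermediate array is allocated.
pmap :: (a -> b) -> PullArray a -> PullArray b
pmap f (PullArray n ix) = PullArray n (f . ix)

-- zip fuses the same way.
pzip :: (a -> b -> c) -> PullArray a -> PullArray b -> PullArray c
pzip f (PullArray n ix) (PullArray m iy) =
  PullArray (min n m) (\i -> f (ix i) (iy i))

-- O(1) slice: just shift and clamp the index function.
pslice :: Int -> Int -> PullArray a -> PullArray a
pslice off count (PullArray n ix) =
  PullArray (max 0 (min count (n - off))) (\i -> ix (i + off))

-- Only materialization walks the elements.
toList :: PullArray a -> [a]
toList (PullArray n ix) = map ix [0 .. n - 1]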
📣 MLabs is back with another deep dive for Haskell and Cardano builders 📣 Our latest post, Patterns & Paradoxes: The Logic of Pattern Synonyms, explores how modern features like pattern synonyms and view patterns let developers bridge the gap between elegant APIs and efficient data representation. In short, these tools let you write code that's:
* constructor-like and ergonomic
* safe by design (no invalid states)
* packed tight for performance
Want to design elegant interfaces without sacrificing performance or safety? In this post, we show how to hide low-level, high-performance representations behind expressive, pattern-matchable views, delivering real speedups without giving up safety. 👉 Read the full post here: https://lnkd.in/eZJV7Bfc
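To give a flavor of the technique (a generic illustration, not the examples from the post): a bidirectional pattern synonym plus a view pattern can expose a constructor-like interface over a packed representation, so callers pattern match as if the nice data type existed while the underlying value stays compact.

{-# LANGUAGE PatternSynonyms, ViewPatterns #-}
module PackedPoint (Point, pattern Point, swapCoords) where

import Data.Bits (shiftL, shiftR, (.&.), (.|.))
import Data.Word (Word64)

-- Internal representation: two 32-bit coordinates packed into one Word64.
newtype Point = MkPoint Word64

unpackPoint :: Point -> (Word64, Word64)
unpackPoint (MkPoint w) = (w `shiftR` 32, w .&. 0xFFFFFFFF)

-- Bidirectional pattern synonym: callers construct and match on Point x y
-- as if it were an ordinary two-field constructor (coordinates assumed to fit in 32 bits).
pattern Point :: Word64 -> Word64 -> Point
pattern Point x y <- (unpackPoint -> (x, y))
  where
    Point x y = MkPoint ((x `shiftL` 32) .|. (y .&. 0xFFFFFFFF))

{-# COMPLETE Point #-}

-- Usage reads like code against a plain data type.
swapCoords :: Point -> Point
swapCoords (Point x y) = Point y x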
🦀 Reticulum-rs is now live on crates.io We're excited to announce that Reticulum-rs (developed by Beechat), the Rust implementation of the Reticulum Network Stack, is now available on crates.io (the official Rust package registry) at https://crates.io/crates/reticulum (https://lnkd.in/e4jevZU4) This marks the first update to the Reticulum crate in over a year. In June 2025, Beechat took over maintainership of the "reticulum" crate from Jiri Jakes, and this release begins a new phase of active development, bringing Reticulum to modern, memory-safe Rust. What this means: for developers building with or planning to adopt Reticulum, the Rust version is now a real, production-viable alternative to the reference Python implementation, offering stronger performance, safety, and portability for embedded, tactical, and decentralised applications. Add it to your Rust project:
[dependencies]
reticulum = "0.1.0"
Current status:
- Core packet and link layers fully implemented
- Transport layer partially implemented (in progress)
- Compatible with existing Reticulum networks
- Built for mobile ad-hoc networks, radio, and IP-based environments
This release is part of our ongoing investment in open, cryptographically secure networking at Beechat Network Systems, developers of the Kaonic SDR mesh platform. 📦 Crate: https://lnkd.in/e4jevZU4 💻 GitHub: https://lnkd.in/e9qxYmVZ
It is a great time to be a computational scientist. I have 3 code assistants (OpenAI Codex, Gemini, Claude Code - at a subsidized $60/month, or 12 coffees), VSCode, GitHub, open-source software, and a 90 TF laptop ($1200). All you need is ideas, broad knowledge at the intersection of fields, computational thinking, and the experience to guide the code assistants when they hallucinate or get stuck in loops. What would have taken months or years to build will now take days or weeks to build, test, verify and validate, and document. All this gives enormous freedom to introduce new ideas and test them without getting locked into code structures that were not developed with enough flexibility, since modeling and simulation is an iterative journey of making simulations match physical reality. The key is how to direct this volume of work toward something meaningful, to create value and, in the end, make our lives better. What do you think?
We are pleased to announce the public release of our latest research: the SConvTransform code optimization algorithm. This compiler optimization, implemented using the MLIR compiler infrastructure, converts Convolution operations from the Linalg dialect into an efficient loop nest that performs tiling and packing and invokes a highly optimized microkernel. SConvTransform is used as an operation in the Transform dialect, allowing both the code being transformed (Payload IR) and the transformation itself (Transform IR) to be represented using MLIR. SConvTransform can efficiently utilize the cache hierarchy by running a convolutional analysis algorithm called Convolution Slicing Analysis (CSA), which determines how many tiles of each tensor can fit in each cache level. The user provides parameters such as cache size, cache latency, and microkernel size, allowing CSA to output the ideal scheduling and partitioning of the tensors. With this information, the Convolution Slicing Optimization (CSO) tiles the convolution operation into a loop nest, adds packing operations, and invokes a microkernel from the OpenBLAS library. The code for SConvTransform is publicly available on the Celera AI GitHub: https://lnkd.in/dVW8tmwp If you have questions or suggestions, reach out via email contact@celera.ai or send us a DM! #compilers #mlir #llvm #convolution #deeplearning #optimization
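To make the cache-fitting step concrete, here is a deliberately simplified Haskell sketch of the idea described above, not the actual CSA implementation (which also weighs latency, scheduling, and the microkernel shape): given per-level cache capacities and a tile footprint, you can compute how many tiles of a tensor fit at each level.

-- Simplified sketch of the cache-fitting question CSA answers (not the real algorithm):
-- how many tiles of a given byte footprint fit in each cache level?
tilesPerCacheLevel
  :: [Int]  -- cache capacities in bytes, e.g. [L1, L2, L3]
  -> Int    -- footprint of one tile in bytes (tile elements * element size)
  -> [Int]
tilesPerCacheLevel cacheSizes tileBytes =
  map (`div` tileBytes) cacheSizes

-- Example: 32 KiB L1, 1 MiB L2, 32 MiB L3 and a 16 KiB tile.
example :: [Int]
example = tilesPerCacheLevel [32 * 1024, 1024 * 1024, 32 * 1024 * 1024] (16 * 1024)
-- => [2, 64, 2048]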
From Garbage Collection to Zero-Cost Abstractions Determined to move away from garbage-collected languages, I took the risk (and the challenge) of building a large-scale distributed system entirely in Rust, from the ground up. The learning curve? Steep. The debugging sessions? Endless. But the outcome? Beautiful. There’s something profoundly satisfying about seeing deterministic performance, zero memory bloat, and concurrency safety come together — not because of a runtime, but because of design. Sometimes, stepping outside the comfort of managed runtimes teaches you more about uncompromising efficiency and performance. Pound for pound the best language to implement Agentic frameworks and infrastructure.
💡 Enter `fmodel-decider` - a TypeScript / Deno library that progressively refines the Decider abstraction along two orthogonal dimensions:
- Computation mode → from event-sourced to state-stored
- Consistency boundary → from Dynamic Consistency Boundary (#DCB) to #Aggregate
This dual refinement captures both the functional essence of event-sourced decision-making and the structural evolution of domain models as they mature. In this view, both the DCB pattern and the Aggregate pattern are specializations of a more general model - the Decider - which defines the minimal algebra of decision-making. https://lnkd.in/dQgyFTRB
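For context on that "minimal algebra": in the general Decider pattern this library builds on, a decider is just three things: an initial state, a function that decides which events a command produces given the current state, and a function that evolves state by an event. Here is a language-agnostic sketch in Haskell; it is illustrative only, and the library's actual TypeScript API differs.

-- Illustrative sketch of the general Decider pattern (not the fmodel-decider API).
data Decider command state event = Decider
  { initialState :: state
  , decide       :: command -> state -> [event]  -- which events does this command produce?
  , evolve       :: state -> event -> state      -- how does an event change state?
  }

-- Event-sourced use: rebuild state from history, then decide on a new command.
decideFromHistory :: Decider c s e -> [e] -> c -> [e]
decideFromHistory d history cmd =
  decide d cmd (foldl (evolve d) (initialState d) history)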
🃏 @fizzwiz/mockchain v0.0.0-dev.1 — Mockable Promises
The first prerelease of @fizzwiz/mockchain is here! 🎉 A library designed to make asynchronous operations fully controllable, starting with one core abstraction: MockChain – a mockable Promise for deterministic async testing. We are actively exploring this idea for testing distributed computation networks in isolation, one node at a time, like any plain JavaScript object. 🧠 Learn more: https://lnkd.in/eDH6AXKq 📦 Grab it on npm: https://lnkd.in/ewepDiXq — @fizzwiz ✨
We’re thrilled to announce that Milestone 4 of the Move → Stylus project has been officially approved by the Arbitrum DAO! This milestone represented one of the most technically demanding phases of the initiative, and it laid the groundwork for a major achievement that followed shortly after: ✅ the first fully functional ERC-20 contract written in Move and compiled to run natively on Arbitrum Stylus. Through this stage, we:
- Delivered a functional compiler capable of translating Move contracts into WASM and executing them on Stylus.
- Implemented a runtime that preserves Move’s semantics — ownership, transfer, and resource safety.
- Completed integration tests validating end-to-end execution, event emission, and ABI compatibility.
- Advanced support for the Standard Library, enabling generics and more complex structures.
With Milestone 4 now approved, we’ve crossed a key threshold in bringing the Move ecosystem to Arbitrum, proving interoperability and expanding the tooling available for developers and protocols. Up next: the Beta phase, SDK and CLI tooling, extended compiler support, and more testing layers. We’re proud to keep building alongside the Arbitrum DAO, pushing forward the boundaries of what’s possible in Web3. Read the full article: https://lnkd.in/dsRegVa5
Just successfully "ported" our old AGENTS 0.9 platform — a concurrent constraint programming system for the AKL language I helped develop in the early 1990s as part of my doctoral research — to run natively on modern 64-bit platforms (x86-64 and ARM64/Apple Silicon). Using Claude Code as my AI pair programmer and workhorse for all the heavy lifting, we identified and fixed two critical bugs that had been dormant in the codebase for three decades: a signedness issue in integer classification and a parser reentrancy problem. The remarkable part? Only seven lines of code needed changing. The system now runs cleanly on Apple Silicon, proving that well-architected research software can span generations with minimal intervention. 🙂 Here’s the classic n-queens problem running on the ported system — near-instant execution of what was cutting-edge concurrent programming research from before the Internet was a thing. It was a great pleasure to work on this system with Johan Bevemyr, Johan Montelius, Kent Boortz, Per Brand, Bjorn Carlson, Ralph Haygood, Björn Danielsson, Peter Olin, Dan Sahlin, Thomas S., and Seif Haridi — remarkable how solid that foundation still is. 🔗 https://lnkd.in/denpPHxq
Monads in Haskell
Monads... So this time, I wanted to fix that. In this video and article, I’ll walk you through what Monads actually are — but not by dropping the term on you out of nowhere. We’ll start from the ground up: Functors, then Applicatives, and only then arrive at Monads.
Step 1 — Functors: applying a function inside something. A Functor is any type that can be “mapped over.” You already know this idea from other languages: the wrapper (Maybe) stays the same — you just apply a function inside it.
Step 2 — Applicatives: applying a wrapped function to a wrapped value. Applicatives go one step further: Just (*2) <*> Just 10 → gives Just 20. It’s basically saying: “I’ve got a boxed function and a boxed value. Apply one to the other.” And when something’s missing (Nothing), the whole thing fails gracefully.
Step 3 — Monads: chaining things that return wrapped results. Now comes the big one — Monads. For example: safeDivide :: Float -> Float -> Maybe Float. Using the “bind” operator (>>=), we can write: Just … https://lnkd.in/gUpBbeVY
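Since the final example above is cut off, here is a self-contained Haskell version of the three steps the post describes; the safeDivide chain is a typical illustration and not necessarily the exact code from the article.

-- Step 1, Functor: apply a function inside the wrapper.
step1 :: Maybe Int
step1 = fmap (+1) (Just 9)          -- Just 10

-- Step 2, Applicative: apply a wrapped function to a wrapped value.
step2 :: Maybe Int
step2 = Just (*2) <*> Just 10       -- Just 20
-- Nothing <*> Just 10 would give Nothing: the failure propagates.

-- Step 3, Monad: chain computations that each return a wrapped result.
safeDivide :: Float -> Float -> Maybe Float
safeDivide _ 0 = Nothing
safeDivide x y = Just (x / y)

step3 :: Maybe Float
step3 = safeDivide 100 4 >>= \q -> safeDivide q 5
-- safeDivide 100 4 = Just 25, then safeDivide 25 5 = Just 5, so step3 = Just 5.0

step3Fails :: Maybe Float
step3Fails = safeDivide 100 0 >>= \q -> safeDivide q 5
-- The first division fails with Nothing, so the whole chain is Nothing.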