We'll first see what recursion is about and why you should use
it. Then, we'll see how functions work (down under; buckle up, we're
gonna write assembly code and Fortran), invent automatic memory, and
see why you shouldn't use recursion.
We'll take a quick detour into optimization land to explain why you
should sometimes use recursion, and take a little theoretical trip
to really understand what underlies recursion, and why some things
are inherently recursive and others aren't.
We'll have a language stay and learn a bit of Polish (notation), and
discover the power of stack machines. To conclude, we'll try writing
a Rust macro that parses and executes Forth code at compile time.
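The stack-machine idea behind Polish notation is small enough to sketch directly. Here is a minimal (hypothetical, not from the article) evaluator for Reverse Polish notation in Rust; every operand is pushed, every operator pops two values and pushes the result:

```rust
// Minimal RPN (postfix) evaluator: a stack machine in a few lines.
// A toy sketch of the idea, not code from the article.
fn eval_rpn(input: &str) -> Option<i64> {
    let mut stack: Vec<i64> = Vec::new();
    for token in input.split_whitespace() {
        match token {
            "+" | "-" | "*" | "/" => {
                // Operators pop their two operands...
                let b = stack.pop()?;
                let a = stack.pop()?;
                // ...and push the result back.
                stack.push(match token {
                    "+" => a + b,
                    "-" => a - b,
                    "*" => a * b,
                    _ => a.checked_div(b)?, // refuse division by zero
                });
            }
            // Anything else must be a number; push it.
            num => stack.push(num.parse().ok()?),
        }
    }
    // A well-formed expression leaves exactly one value on the stack.
    if stack.len() == 1 { stack.pop() } else { None }
}

fn main() {
    // "3 4 + 2 *" is (3 + 4) * 2 in infix notation.
    assert_eq!(eval_rpn("3 4 + 2 *"), Some(14));
    assert_eq!(eval_rpn("1 2 + +"), None); // malformed input is rejected
}
```

Forth works the same way, except the "tokens" are words and the interpreter runs as it reads.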
Over the last few weeks I did a bit of tinkering on a hobby software
project. The code was written by two other people during the last
several years and I finally managed to find some time to fix a
couple of bugs.
Calling asynchronous Rust code in a synchronous context may lead to
problems, particularly for beginners who are unfamiliar with
asynchronous Rust runtimes such as Tokio. In this blog post, we
discuss a particular problem that we came across and the approach we
took to identify and resolve it.
This post shows different use cases for a Yubikey, with command-line
examples in a cheatsheet-like manner. I’m using a Yubikey 5C on Arch
Linux. If you run into issues, try to use a newer version of ykman
(part of the yubikey-manager package on Arch).
I heard the phrase “ampersand-driven development” in a talk by Tad
Lispy, and it
immediately captured the experience of a new Rust developer randomly
inserting ampersands to appease the Rust compiler.
This blog post describes a little mental model I used to explain to
someone new to Rust the difference between &, &mut, owned
values, Rcs, and Arcs. I hope that you or other aspiring
Rustaceans find it helpful!
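As a flavor of that mental model (my own toy sketch, not the post's), here are the different ways of handing a value around, side by side:

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

// Shared borrow (&): many readers at once, nobody may mutate.
fn read_it(s: &str) {
    println!("reading: {s}");
}

// Exclusive borrow (&mut): exactly one accessor, which may mutate.
fn change_it(s: &mut String) {
    s.push('!');
}

// Owned value: the value moves in; the caller can no longer use it.
fn consume_it(s: String) {
    drop(s);
}

fn main() {
    let mut s = String::from("hello");
    read_it(&s);       // s is still usable afterwards
    change_it(&mut s); // s is still usable, now mutated
    assert_eq!(s, "hello!");

    // Rc: shared ownership within a single thread (reference counted).
    let rc = Rc::new(String::from("shared"));
    let rc2 = Rc::clone(&rc); // cheap pointer copy; count is now 2
    assert_eq!(Rc::strong_count(&rc), 2);
    drop(rc2);

    // Arc: the same idea, but the count is atomic, so clones may
    // cross thread boundaries.
    let arc = Arc::new(String::from("shared across threads"));
    let arc2 = Arc::clone(&arc);
    thread::spawn(move || println!("len = {}", arc2.len()))
        .join()
        .unwrap();

    consume_it(s); // after this, using `s` again would not compile
}
```

The compiler enforces these rules at every call site, which is exactly why randomly inserted ampersands sometimes appease it and sometimes don't.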
Cargo's new index protocol will be available starting in Rust 1.68,
which will be released on 2023-03-09. This new "sparse" protocol
should usually provide a significant performance improvement when
accessing crates.io. We would like your help in testing this new
feature and infrastructure before it becomes the default.
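Per the announcement, opting in ahead of the default is a small Cargo configuration change:

```toml
# In .cargo/config.toml: select the sparse index protocol (Rust 1.68+)
[registries.crates-io]
protocol = "sparse"
```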
We need a vision of Perl’s future. Many of the things we need in the
Perl core are things we’ve re-implemented many times on the
CPAN. Not having them in core means developers have to learn their
current environment’s way of handling this, and makes it easier to
choose a module that might be inappropriate, or worse,
unmaintained. This increases the cognitive burden on all developers
who, I’m sure, are more interested in building solutions than
arguing over them.
NiceGUI is an easy-to-use, Python-based UI framework, which shows up
in your web browser. You can create buttons, dialogs, Markdown, 3D
scenes, plots and much more.
It is great for micro web apps, dashboards, robotics projects, smart
home solutions and similar use cases. You can also use it in
development, for example when tweaking/configuring a machine
learning algorithm or tuning motor controllers.
A common trope is how, if one wants to build a game, one should
build a game, rather than a game engine, because it is all too easy
to fall into a trap of building a generic solution, without getting
to the game proper. It seems to me that the situation with code
editors is the opposite — many people build editors, but few are
building “editor engines”. What’s an “editor engine”? A made-up term
I use to denote the thin waist an editor is built upon: the set of
core concepts, entities, and APIs which power the variety of an
editor’s components. In this post, I will
highlight Emacs’ thin waist, which I think is worthy of imitation!
Comprehensions are currently compiled as nested functions, which
provides isolation of the comprehension’s iteration variable, but is
inefficient at runtime. This PEP proposes to inline list,
dictionary, and set comprehensions into the code where they are
defined, and provide the expected isolation by pushing/popping
clashing locals on the stack. This change makes comprehensions much
faster: up to 2x faster for a microbenchmark of a comprehension
alone, translating to an 11% speedup for one sample benchmark
derived from real-world code that makes heavy use of comprehensions
in the context of doing actual work.
The GHC team is very pleased to announce the availability of the
first (and likely final) release candidate of GHC 9.6.1. As usual,
binaries and source distributions are available at
Beginning with GHC 9.6.1, GHC can be built as a cross-compiler to
WebAssembly and JavaScript, an important step towards support for
compiling Haskell to the Web, but there are a few caveats to be
aware of in the 9.6 series:
The WebAssembly and JavaScript backends are still at an early
stage of development and are present in this release as a
technology preview
Using GHC as a cross-compiler is not as easy as we would like it
to be; in particular, there are challenges related to Template
Haskell support
GHC is not yet run-time retargetable; a given GHC binary targets
a single platform, and WebAssembly and JavaScript are both
considered platforms for this purpose. Cross-compilers must be
built from source by their users
We hope to lift all of these limitations in future releases.
Additionally, 9.6.1 will include:
Significant latency improvements in the non-moving garbage
collector
Efficient runtime support for delimited continuations
Improvements in compiler error messages
Numerous improvements in the compiler’s memory usage
See the release notes for a comprehensive accounting of changes in
this release.
I’m excited to announce two new open source Swift packages:
swift-certificates and swift-asn1. Together, these libraries provide
developers with a faster and safer implementation of X.509
certificates, a critical technology that powers the security of TLS.
It is possible, however, to implement some of those additional
features, and still operate in worst-case linear time. The
implementation (~400 lines of Rust) even fits in a single blog post!
The full code is available online, and the commit history lines
up with the blog post if you want to follow along.
Since the last post I've been in contact with some members of the
Nix community about joining the documentation team. From
that discussion and my experience with other ecosystems I've had
some ideas rolling around about what the ideal Nix documentation
strategy/ecosystem would look like to me, so I'm putting those ideas
in writing to start a discussion and generate ideas. These ideas
aren't super concrete and I don't speak for anyone else, but they're
my vision for how Nix documentation could better serve experienced
users and onboard new ones.
In general, anything that requires reasoning—like programming, or
applying the scientific method—long-term planning, and
algorithmic-like data manipulation, is out of reach for deep
learning models, no matter how much data you throw at them. Even
learning a sorting algorithm with a deep neural network is
tremendously difficult.
Nowadays, Alpine Linux is one of the most popular options for
container base images. Many people (maybe including you) use it for
anything and everything. Some people use it because of its small
size, some because of habit and some, just because they copy-pasted
a Dockerfile from some tutorial. Yet, there are plenty of reasons
why you should not use Alpine for your container images, some of
which can cause you a great amount of grief...
Rust is designed around safety and soundness. Roughly speaking, safe
code is code that doesn't use the unsafe keyword, and sound code
is code that can't cause memory corruption or other undefined
behavior. One of Rust's most important features is the promise that
all safe code is sound. But that promise can be broken when unsafe
code is involved, and unsafe code is almost always involved
somewhere. Data structures like Vec and HashMap have unsafe
code in their implementations, as does any function like
File::open that talks to the OS. This leads to a common question:
"If Rust can't guarantee that all safe code is sound, how can it
be a memory-safe language?" It's hard to give a short answer to
that question, so this post is my attempt at a medium-length answer.
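The usual pattern behind that answer is that unsafe internals hide behind a safe API which upholds the necessary invariants. A toy sketch of that pattern (mine, not the post's):

```rust
/// A view of every second element of a slice. The unsafe code inside
/// is sound because the safe API checks the bounds invariant before
/// every unchecked access; callers cannot trigger UB through it.
struct Every2nd<'a> {
    data: &'a [i64],
}

impl<'a> Every2nd<'a> {
    /// Returns the i-th even-indexed element, or None if out of range.
    fn get(&self, i: usize) -> Option<i64> {
        let idx = i.checked_mul(2)?; // guard against overflow
        if idx < self.data.len() {
            // SAFETY: we just checked idx < len, so the unchecked
            // read cannot go out of bounds.
            Some(unsafe { *self.data.get_unchecked(idx) })
        } else {
            None
        }
    }
}

fn main() {
    let v = [10, 20, 30, 40, 50];
    let every = Every2nd { data: &v };
    assert_eq!(every.get(0), Some(10));
    assert_eq!(every.get(2), Some(50));
    assert_eq!(every.get(3), None); // out of range: safely refused
}
```

If the bounds check were wrong, the bug would live entirely inside this module; that containment, rather than the absence of unsafe, is what the safety story rests on.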
Sites take time to build these days. A large Next.js 11 site will
take several minutes to build. This is wasted time in the
development cycle. Build tools like Vite or
Turbopack highlight their ability to get
this number down.