I started a new job recently, so I had the opportunity to get one of
the new M1 MacBooks; I decided to go with the Air. The reviews have
been very positive and I’m here to tell you: it is indeed an amazing
device. The performance feels a lot better than my MacBook Pro 16”,
which is only a year old and about 3x the price.
When I got the Mac I set out with the goal of avoiding Intel builds
of software and running native ARM builds wherever possible.
Deep Learning is an extremely fast-moving field and the huge number
of research papers and ideas can be overwhelming. Even seasoned
researchers have a hard time telling company PR from real
breakthroughs. The goal of this post is to review those ideas that
have stood the test of time, which is perhaps the only significance
test one should rely on. These ideas, or improvements of them, have
been used over and over again. They're known to work.
When getting started with containers, it’s pretty easy to be shocked
by the size of the images that we build. We’re going to review a
number of techniques to reduce image size, without sacrificing
developers’ and ops’ convenience. In this first part, we will talk
about multi-stage builds, because that’s where anyone should start
if they want to reduce the size of their images. We will also
explain the differences between static and dynamic linking, as well
as why we should care about that. This will be the occasion to
introduce Alpine.
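As a hedged sketch of the multi-stage idea (the image tags and the `myapp` binary name are illustrative, not from the article): build in an image that carries the full toolchain, then copy only the finished artifact into a minimal runtime image such as Alpine.

```dockerfile
# Stage 1: build with a full toolchain image (Go here, but any compiler works).
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 produces a statically linked binary, so the runtime
# image needs neither glibc nor musl -- this is where the static vs
# dynamic linking distinction matters.
RUN CGO_ENABLED=0 go build -o /myapp .

# Stage 2: copy only the binary into a tiny base image.
FROM alpine:3.19
COPY --from=build /myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Only the final stage ends up in the shipped image, so the compiler, sources, and build cache add nothing to its size.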
When I ask my colleagues why mmap is faster than system calls, the
answer is inevitably “system call overhead”: the cost of crossing
the boundary between the user space and the kernel. It turns out
that this overhead is more nuanced than I used to think, so let’s
look under the hood to understand the performance differences.
Tutorials on the scientific Python ecosystem: a quick introduction
to central tools and techniques. The different chapters each
correspond to a one-to-two-hour course, with increasing levels of
expertise, from beginner to expert.
Most undergraduate computer science courses teach students about
tail call optimization
(TCO), and even if you
don't have a formal computer science background the concept is
talked about enough that you might be familiar with it anyway,
especially if you've ever done any functional programming. However,
I think the way TCO is normally taught is very confusing, because
it's normally taught in the context of recursion. It's taught this
way because without TCO many recursive functions can blow up the
stack, causing a stack overflow. Therefore, by teaching people about
TCO in the context of recursion, you can show why optimizing
compilers (or interpreters) can run tail-recursive code efficiently,
without causing a stack overflow.
However, the recursive case for TCO is actually not the norm: in
fact, if you're writing code in C, C++, or most other languages
with an optimizing compiler, TCO is almost certainly being applied
all over your programs even if they don't use any recursion
whatsoever. The non-recursive case of TCO is actually a lot simpler
to understand, and once you understand it you realize that there's
nothing special whatsoever about how TCO is applied to recursive
functions.
Something about making visually interesting simulations to play with
just gets me really excited about programming, particularly when
there’s some cool algorithm or bit of math backing it.
Long story short: if you use dd with a bigger block size (>=
4096), be sure to use either the oflag=direct or conv=fsync
option to get proper error reporting while writing data to a
device. I would prefer conv=fsync: dd will then fsync() the file
handle once and report any error, without the performance
impact that oflag=direct has.
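For illustration, here is what that looks like when writing to a scratch file (a real run would target a device path instead; the file name is invented):

```shell
# conv=fsync makes dd call fsync() once at the end, so a write error
# on the underlying device shows up in dd's exit status instead of
# being deferred to the page cache.
dd if=/dev/zero of=/tmp/dd-demo.img bs=4096 count=256 conv=fsync
echo "exit status: $?"

# Alternative: oflag=direct bypasses the page cache entirely, which
# also surfaces errors immediately but is usually slower:
# dd if=/dev/zero of=/tmp/dd-demo.img bs=4096 count=256 oflag=direct
```

Without either option, dd can exit 0 while the actual flush to the device fails later.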
The GL.iNet GL-MT300N is a $21/£19 travel router designed for Wi-Fi
on the go. The device runs a custom version of OpenWrt that is
easily replaced with a standard release of OpenWrt, making this
device an ultra-cheap, hackable dual-NIC router/SBC.
When a program is mostly just accessing memory randomly, a standard
cost model is to count the number of distinct random accesses. The
general idea is that memory access is much slower than most other
computational tasks.
Emacs Evil mode is an extensible Vi layer for Emacs. It adds a set
of Vi(m) key bindings and features to Emacs to give it a more modal
feel, and lets you rely less on the pinky-accessed CTRL key when
manipulating text. Where Emacs uses more key combinations and
commands, Evil mode brings Vi’s operators and motions to execute
text operations.
He calls himself Ulf—as good a name as any, he thinks—and the only
thing he’s looking for is a place where he won’t be found by Oslo’s
most notorious drug lord: the Fisherman. He was once the Fisherman’s
fixer, but after betraying him, Ulf is now the one his former boss
needs fixed—which may not be a problem for a man whose criminal
reach is boundless. When Ulf gets off the bus in Kåsund, on Norway’s
far northeastern border, he sees a “flat, monotonous, bleak
landscape . . . the perfect hiding place. Hopefully.”
In the evening I started in Midnight Sun: A Novel by Jo Nesbø. This
is the sequel to the excellent Blood on Snow, so I have high
expectations.
For the past few months I have been thinking about this on and off.
And I decided to use a YAML block to specify metadata, just like
pandoc
uses. For example:
---
tags: [tumblelog]
...
## Tags are coming to tumblelog
So, last Saturday I started tagging Plurrrr's articles. It took me 3
days, several hours each day, to do most of the 1600+ articles. Today
I did the last few and made some refinements. Next is adding the
actual code to tumblelog, as my current version only skips tags.
I receive messages from time to time asking me to share my views on
the topic of whether Emacs can fit into a Unix-centric workflow. One
such email arrived in my inbox yesterday. I replied to it and asked
whether I could publish the answer on my website, while omitting all
private information.
When we configure our programs, we usually have three ways to do it:
config files, CLI flags, and environment variables. While all three
options are a solid way to go, in CI/CD environments
you are usually instructed to use environment variables,
i.e. envvars.
If we look at Go's os package
documentation, we can see that it
offers multiple ways to read envvars:
Environ, which gives you a raw list of strings in the form
"key=value" that you can parse yourself
Getenv, which gives you the envvar's value if it exists, or an
empty string
LookupEnv, which is similar to Getenv, but also returns a
boolean telling you whether the variable exists