Every now and then, I get a PR from a well-meaning contributor
trying to add __all__ to a Python module for whatever reason. I
always decline these: they are unnecessary (at least for the way I structure my code), and I thought I’d write a short post explaining
why.
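For context, here is a minimal sketch of what __all__ actually controls; the module and function names are made up. It only affects star imports (and what some tooling treats as public), and it is not an access-control mechanism.

    # mypkg/greetings.py (hypothetical module)
    __all__ = ["greet"]  # only "greet" is exported by "from mypkg.greetings import *"

    def greet(name: str) -> str:
        return f"Hello, {name}!"

    def _helper() -> str:
        # a leading underscore already signals "private" by convention
        return "internal detail"

    # Elsewhere:
    #   from mypkg.greetings import *        # brings in greet, but not _helper
    #   from mypkg.greetings import _helper  # still works; __all__ does not enforce anything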
In this post I want to provide you with a practical introduction to
structured concurrency. I will do my best to explain what it is, why
it's relevant, and how you can start applying it to your Rust
projects today. Structured concurrency is a lens I use in almost all
of my reasoning about async Rust, and I think it might help others
too.
On a quiet day, away from the hustle of Richmond, in a small cottage
on the Virginia coast, Dr. Kay Scarpetta receives a disturbing phone
call from the Chesapeake police. Thirty feet deep in the murky
waters of Virginia's Elizabeth River, a scuba diver's body is
discovered near the Inactive Naval Shipyard. As the police begin
searching for clues, the wallet of investigative reporter Ted
Eddings is found.
Unnerved by the possible identity of the victim, Scarpetta orders
the crime scene roped off and left alone until she arrives. Was he searching for Civil War relics, as the officer suggested, or was there a bigger story? As she rifles through the
multitude of clues, a second murder hits much closer to home. This
new development puts Scarpetta and her colleagues hot on the trail
of a military conspiracy.
In the evening I started in Cause of
Death,
Kay Scarpetta book 7 by Patricia Cornwell.
Every now and then, at work, I find myself discussing git workflows, commit messages, branching, releasing, versioning, changelogs,
etc. Since my opinion has remained fairly consistent for the past
few years, I found myself repeating the same points a lot, so I
wrote it down. This page is the resulting compilation of my opinions
on the software development lifecycle (SDLC), without
workplace-specific tangents.
I’d like to preface this article by saying that it is not an
authoritative guide; rather, it is just me documenting my experience
figuring various things out, in the hope that it’ll be useful or
interesting to someone else. I assume some knowledge of Nix and
containerization throughout this article.
When you’re creating a Python package, one of the nice things is
that you can just add a .py source file to the package directory,
and then users can import it right away.
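As a quick illustration of that workflow (the package and module names here are invented):

    # Directory layout:
    #
    #   mypackage/
    #       __init__.py
    #       utils.py      <- drop a new source file in here...
    #
    # ...and users can import it straight away:
    import mypackage.utils
    from mypackage import utils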
Ticks can be attracted across air gaps several times larger than
themselves by the static electricity that their hosts naturally
accumulate, researchers at the University of Bristol have
discovered.
Our team had some fun experimenting with Python 3.9-nogil, the
results of which will be reported in an upcoming blog post. In the
meantime, we saw an opportunity to dive deeper into the history of
the global interpreter lock (GIL), including why it makes Python so
easy to integrate with and the tradeoff between ease and
performance.
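To make the trade-off concrete, here is a small illustration (my own, not from the upcoming post) of how the GIL affects CPU-bound threads in standard CPython; the exact timings will vary by machine.

    import threading
    import time

    def count(n: int) -> None:
        # CPU-bound busy loop; under the GIL only one thread executes Python bytecode at a time.
        while n > 0:
            n -= 1

    N = 10_000_000

    start = time.perf_counter()
    count(N)
    count(N)
    print(f"sequential:  {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # On standard CPython this is usually no faster than the sequential run,
    # because the GIL serializes bytecode execution across the two threads.
    print(f"two threads: {time.perf_counter() - start:.2f}s")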
As developers, we often find ourselves working in multiple tmux
panes, each running different applications or instances of the same
application. When we make changes to a configuration file, such as
~/.vimrc for Vim or ~/.aliases for our shell, we need to
manually reload that configuration in each relevant instance. This
can be a time-consuming process, especially when working with a
large number of panes. But also, let's be wizards and automate this
process!
In this post, we'll explore a simple automation that can save you a
lot of time and effort. We'll focus on a specific use case —
reloading a .vimrc file across all Vim instances in tmux panes —
but the pattern can be applied to a variety of scenarios.
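As a rough sketch of the idea (not the post's actual script), one could list every tmux pane, find the ones running Vim, and send them a :source command:

    import subprocess

    # List every pane across all tmux sessions: its id and the command it is running.
    panes = subprocess.run(
        ["tmux", "list-panes", "-a", "-F", "#{pane_id} #{pane_current_command}"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    for line in panes:
        pane_id, command = line.split(" ", 1)
        if command in ("vim", "nvim"):
            # Type ":source ~/.vimrc<Enter>" into the Vim instance in that pane.
            subprocess.run(
                ["tmux", "send-keys", "-t", pane_id, ":source ~/.vimrc", "Enter"],
                check=True,
            )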
As I tend to do, I picked a topic to write about that is much larger
in scope than I could manage in a reasonable amount of time. Did I
learn? Apparently not. This article started off with switching from
zsh to
fish. Then I thought, "Might as well
manage it all with Nix!", which led me to switch to home
manager to manage
my dotfiles, which led me to using Nix everywhere I possibly could.
As expected, using Nix where it's not supported caused some
issues. Buckle up, and watch my slow descent into madness (Nix).
CLI tools hidden in the Python standard
library, in which Simon Willison wonders what little tools are lurking in the Python standard library, available on any computer with a working Python installation.
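Two well-known examples of that kind of tool, available with any Python installation (these specific picks are mine, not necessarily his):

    # Runnable directly from the shell via the -m switch:
    #
    #   python -m http.server 8000     # serve the current directory over HTTP
    #   python -m json.tool data.json  # pretty-print a JSON file
    #
    # The same modules are, of course, importable as libraries too:
    import json
    print(json.dumps({"hello": "world"}, indent=2))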
An explanation of how to implement practical procedural macros in
the Rust programming
language. Explains the different types of macros, then shows an
implementation of a procedural macro following best practices,
focusing on testing and ergonomics. Assumes some familiarity with
Rust.
In this post, we will analyze some data covering years of early
adoption of Rust here at Google. At Google, we have been seeing
increased Rust
adoption,
especially in our consumer applications and platforms. Pulling from
the over 1,000 Google developers who have authored and committed
Rust code as some part of their work in 2022, we’ll address some
rumors head-on, both confirming some issues that could be improved
and sharing some enlightening discoveries we have made along the
way.
FreeBSD's networking and container (Jails) stacks are very mature and
provide lots of useful features … yet for some reason these features
are not properly advertised by the FreeBSD project … or not even
documented at all.
ORMs are one of those things that software writers like to pick
on. There are many online articles that go by the same tune: “ORMs
are an anti-pattern. They are a toy for startups, but eventually
hurt more than help.”
This is an exaggeration. ORMs aren’t bad. Are they perfect?
Definitely not, just like anything else in software. At the same
time, the criticisms are expected—two years ago, I would’ve agreed
with that stereotyped headline wholeheartedly. I’ve had my share of
“What do you mean the ORM ran the server out of memory?” incidents.
But in reality, ORMs are more misused than overused.
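A typical shape of that misuse, sketched here with SQLAlchemy against an in-memory SQLite database (the Order model is hypothetical): filtering in Python instead of in the database pulls every row into memory.

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class Order(Base):
        __tablename__ = "orders"
        id = Column(Integer, primary_key=True)
        status = Column(String)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add_all([Order(status="open" if i % 2 else "closed") for i in range(1000)])
        session.commit()

        # Misuse: materialize every row, then filter in Python -- this is what eats memory.
        open_orders = [o for o in session.query(Order).all() if o.status == "open"]

        # Better: let the database do the filtering and counting.
        open_count = session.query(Order).filter(Order.status == "open").count()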
If you’re doing numeric calculations, NumPy is a lot faster than
plain Python—but sometimes that’s not enough. What should you
do when your NumPy-based code is too slow?
Your first thought might be parallelism, but that should probably be
the last thing you consider. There are many speedups you can do
before parallelism becomes helpful, from algorithmic improvements to
working around NumPy’s architectural limitations.
Let’s see why NumPy can be slow, and then some solutions to help
speed up your code even more.
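One of the cheapest wins in that "before parallelism" bucket is replacing Python-level loops over arrays with whole-array operations; a small illustration (my example, not the article's):

    import numpy as np

    rng = np.random.default_rng(0)
    values = rng.random(1_000_000)

    # Slow: a Python-level loop pays interpreter overhead for every element.
    def sum_of_squares_loop(arr):
        total = 0.0
        for x in arr:
            total += x * x
        return total

    # Fast: the same computation as a single vectorized operation runs in C.
    def sum_of_squares_vectorized(arr):
        return float(np.dot(arr, arr))

    assert np.isclose(sum_of_squares_loop(values), sum_of_squares_vectorized(values))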
The ultimate Python library for building OAuth and OpenID Connect servers. It is designed to cover everything from low-level specification implementations to high-level framework integrations, to meet the needs of everyone.
We have released a new “Cookbook of Self-Supervised
Learning,” a practical guide for
AI researchers and practitioners on how to navigate SSL recipes,
understand its various knobs and levers, and gain the know-how
needed to experiment with SSL's untapped flavors. This is part of
our efforts to lower the barrier and help democratize access to SSL
research. You’ll also find tips and tricks from more than a dozen
authors across multiple universities, including New York University,
University of Maryland, UC Davis, University of Montreal; as well as
leading Meta AI researchers, such as Yann LeCun.
Codon is a high-performance Python compiler that compiles Python
code to native machine code without any runtime overhead. Typical
speedups over Python are on the order of 100x or more, on a single
thread. Codon supports native multithreading, which can lead to
speedups many times higher still.
The Codon framework is fully modular and extensible, allowing for
the seamless integration of new modules, compiler optimizations,
domain-specific languages and so on. We actively develop Codon
extensions for a number of domains such as bioinformatics and
quantitative finance.
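As I understand it, Codon compiles ordinary (typed) Python source, so a toy example looks like plain Python; the CLI invocation in the comment is my assumption about the project's tooling, so check the Codon docs for the exact commands and flags.

    # fib.py -- runs under CPython, and (to my understanding) compiles with Codon.
    def fib(n: int) -> int:
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    if __name__ == "__main__":
        print(fib(32))

    # CPython:  python fib.py
    # Codon:    codon run fib.py   (assumed invocation; see the Codon docs)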
macOS is fortunate to have access to the huge arsenal of standard
Unix tools. There are also a good number of macOS-specific
command-line utilities that provide unique macOS functionality. To
view the full documentation for any of these commands, run man <command>.
An unidentified nude female sits propped against a fountain in
Central Park. There are no signs of struggle. When Dr. Kay Scarpetta
and her colleagues Benton Wesley and Pete Marino arrive on the
scene, they instantly recognize the signature of serial killer
Temple Brooks Gault. Scarpetta, on assignment with the FBI, visits
the New York City morgue on Christmas morning, where she must use
her forensic expertise to give a name to the nameless—a difficult
task. But as she sorts through conflicting forensic clues, Gault
claims his next victim. He has infiltrated the FBI’s top secret
artificial-intelligence system developed by Scarpetta’s niece, and
sends taunting messages as his butchery continues, moving
terrifyingly closer to Scarpetta herself.
In the afternoon I started in From Potter's
Field,
Kay Scarpetta book 6 by Patricia Cornwell.
If you’ve ever had to write a parser before, you know that creating
parsers can be a tedious and complicated process. The good news is
that it doesn’t have to be this way. In this post, I’m going to
introduce parser combinators - a technique for building parsers
that I’ve found to be both practical and fun to play around
with.
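As a taste of the technique (a toy sketch of my own in Python, not the post's code): a parser is just a function from input text to a (value, remaining text) pair, or None on failure, and combinators build bigger parsers out of smaller ones.

    # A parser is a function: text -> (value, remaining_text) on success, or None on failure.

    def char(c):
        # Parser that matches exactly the string c.
        def parse(text):
            if text.startswith(c):
                return c, text[len(c):]
            return None
        return parse

    def many(parser):
        # Apply `parser` zero or more times, collecting the results.
        def parse(text):
            results = []
            while (outcome := parser(text)) is not None:
                value, text = outcome
                results.append(value)
            return results, text
        return parse

    def sequence(*parsers):
        # Apply each parser in order; fail if any one of them fails.
        def parse(text):
            values = []
            for p in parsers:
                outcome = p(text)
                if outcome is None:
                    return None
                value, text = outcome
                values.append(value)
            return values, text
        return parse

    # Example: one 'a' followed by any number of 'b's.
    a_then_bs = sequence(char("a"), many(char("b")))
    print(a_then_bs("abbbc"))  # (['a', ['b', 'b', 'b']], 'c')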
Gitflow is, by far, the most popular branching model and possibly the one that has best stood the test of time. Introduced by
Vincent Driessen in
2010,
its fundamental idea is that you should isolate your work into
different types of git branches.
Other branching strategies exist, such as the centralized workflow (for teams coming from SVN) and the forking workflow (for open-source projects). Git, as a version control system, only defines basic branching operations; beyond those, which approach is best remains controversial and is largely a matter of opinion.
> In this article we will compare Gitflow with its newer approach,