In a reply on Hacker News, user JustARandomGuy mentioned that I had
misspelled San Francisco in the CSS file for this blog; I had written
it as San Fransisco.
A quick find-grep from within Emacs showed that I had made the same
mistake in all Sass source files (files ending with the .scss
extension) of the tumblelog project. This project is the static site
generator (SSG) that generates this blog.
I also noticed that those files had a copyright year of 2019, to
which I added 2020. And all those SCSS files still carried the old
"same terms as Perl itself" license, which I removed, as the entire
project is now available under the MIT license.
After those fixes I bumped the version number of the project to 4.0.5
and pushed the new version to
GitHub.
I used the molt of a Brachypelma smithi that molted on the 25th of
June, 2020. The exuviae used in the tutorial is shown in the photo
above. To get an impression of the size of the molt: each square on
the paper is 5mm by 5mm, and the black line near the bottom is 25mm,
or about 1 inch.
Vanilla recurrent neural networks (RNNs) form the basis of more
sophisticated models, such as LSTMs and GRUs. There are lots of
great articles, books, and videos that describe the functionality,
mathematics, and behavior of RNNs, so don't worry, this isn't yet
another rehash. (See below for a list of resources.) My goal is to
present an explanation that avoids the neural network metaphor,
stripping it down to its essence: a series of vector transformations
that result in embeddings for variable-length input vectors.
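To make that concrete, here is a minimal sketch of a vanilla RNN
forward pass written as exactly such a series of vector
transformations, a fold over the input sequence. The weight matrices,
the tanh activation, and all sizes and values below are illustrative
assumptions, not taken from the post:

```haskell
import Data.List (foldl')

type Vec = [Double]
type Mat = [[Double]]  -- a matrix as a list of rows

-- Matrix-vector product: dot each row with the vector.
matVec :: Mat -> Vec -> Vec
matVec m v = [sum (zipWith (*) row v) | row <- m]

-- Element-wise vector addition.
addV :: Vec -> Vec -> Vec
addV = zipWith (+)

-- One step: the new state is a transformation of the previous state h
-- and the current input x: h' = tanh (Wx x + Wh h + b).
step :: Mat -> Mat -> Vec -> Vec -> Vec -> Vec
step wx wh b h x = map tanh (matVec wx x `addV` matVec wh h `addV` b)

-- Fold a variable-length list of input vectors into one fixed-size
-- embedding: the final state.
rnnEmbed :: Mat -> Mat -> Vec -> [Vec] -> Vec
rnnEmbed wx wh b = foldl' (step wx wh b) (map (const 0) b)

main :: IO ()
main = do
  let wx = [[0.1, 0.2], [0.3, 0.4], [0.0, 0.1]]  -- 3x2: hidden 3, input 2
      wh = replicate 3 [0.1, 0.0, 0.1]           -- 3x3 hidden-to-hidden
      b  = [0.0, 0.0, 0.0]
      xs = [[1, 0], [0, 1], [1, 1]]              -- a length-3 sequence
  print (rnnEmbed wx wh b xs)                    -- always a length-3 embedding
```

A sequence of any length maps to an embedding of the same fixed size,
which is exactly the variable-length-input-to-embedding behavior
described above.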
This post differs from most posts on setting up a Haskell development
environment in that it does not jump directly into Cabal or
Stack. Instead, it first provides some background information that
makes it possible to understand the basics of the Haskell development
environment: the different moving parts, and how they come together
to turn Haskell source code into an executable that can then be run.
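As a tiny preview of that last step, here is a minimal sketch,
assuming only that GHC is installed: a complete Main.hs that the
compiler turns into a standalone executable, for example with
ghc Main.hs, which produces an executable named Main.

```haskell
-- Main.hs: the smallest complete Haskell program. A module that
-- defines main :: IO () is what GHC links into an executable.
main :: IO ()
main = putStrLn "Hello, Haskell!"
```

Running the resulting ./Main prints the greeting; the background in
the post explains the moving parts behind that single compile step.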