Plurrrr

Mon 05 Apr 2021

Deep learning model compression

Each year, larger and larger models find ways to extract signal from the noise in machine learning. In particular, language models get larger every day. These models are computationally expensive (in both runtime and memory), which can make them costly to serve to customers, or too slow or large to run in edge environments like a phone.

Researchers and practitioners have come up with many methods for optimizing neural networks to run faster or with less memory usage. In this post I’m going to cover some of the state-of-the-art methods. If you know of another method you think should be included, I’m happy to add it. This has a slight PyTorch bias (haha) because I’m most familiar with it.

Source: Deep learning model compression, an article by Rachit Singh.
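To give a flavour of what such methods look like in practice, here is a minimal sketch of one of them: post-training dynamic quantization in PyTorch, applied to a hypothetical toy model (this is illustrative code, not code from the article):

```python
import torch
import torch.nn as nn

# Toy model standing in for a much larger network.
model = nn.Sequential(
    nn.Linear(768, 3072),
    nn.ReLU(),
    nn.Linear(3072, 768),
)

# Convert the Linear layers' weights to int8; activations are
# quantized dynamically at runtime.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model takes less space on disk and is often faster on CPU.
torch.save(quantized.state_dict(), "model_int8.pt")
```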

Loading SQL data into Pandas without running out of memory

You have some data in a relational database, and you want to process it with Pandas. So you use Pandas’ handy read_sql() API to get a DataFrame—and promptly run out of memory.

The problem: you’re loading all the data into memory at once. If you have enough rows in the SQL query’s results, it simply won’t fit in RAM.

Pandas does have a batching option for read_sql(), which can reduce memory usage, but it’s still not perfect: under the hood it can still end up loading all the data into memory at once!

So how do you process larger-than-memory queries with Pandas? Let’s find out.

Source: Loading SQL data into Pandas without running out of memory, an article by Itamar Turner-Trauring.
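As a rough illustration of the chunked approach the article alludes to, here is a sketch using read_sql()'s chunksize parameter; the connection string, table, and column names are made up for the example:

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder database; substitute your own connection string.
engine = create_engine("sqlite:///example.db")

total = 0
# With chunksize, read_sql() returns an iterator of DataFrames instead of
# one big DataFrame, so only one chunk is held as a DataFrame at a time
# (subject to whatever the database driver itself buffers).
for chunk in pd.read_sql("SELECT value FROM measurements", engine,
                         chunksize=10_000):
    total += chunk["value"].sum()

print(total)
```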

Pterinochilus murinus outside its burrow

In the evening I managed to take a few photos of the Pterinochilus murinus I keep. Because the specimen is very skittish, the only way I could take these photos was through the plastic container, while carefully providing some lighting from above with a flashlight.

Pterinochilus murinus outside its burrow.