In this blog post, I would like to present a simple implementation
of distributed training in PyTorch for CIFAR-10 classification,
using ResNet models wrapped in DistributedDataParallel. I will also
cover how to use Docker containers for distributed training and how
to launch a distributed run with torch.distributed.launch.
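To make the setup concrete, here is a minimal sketch of wrapping a model in DistributedDataParallel. For illustration it starts a one-process "gloo" group on CPU and uses a tiny linear model as a stand-in for ResNet; in a real CIFAR-10 run, torch.distributed.launch would set the rank and world-size environment variables for each worker, and you would use something like torchvision.models.resnet18() on a GPU.

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# In a real job, torch.distributed.launch exports MASTER_ADDR,
# MASTER_PORT, RANK, and WORLD_SIZE for every worker process.
# Here we set them by hand for a single-process CPU demo.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

# Tiny stand-in for a ResNet: flatten a 3x32x32 CIFAR-10 image
# and map it to 10 class logits.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# DDP synchronizes gradients across workers during backward().
ddp_model = DDP(model)

batch = torch.randn(4, 3, 32, 32)  # a fake CIFAR-10 mini-batch
out = ddp_model(batch)
print(tuple(out.shape))            # one logit vector per image

dist.destroy_process_group()
```

With multiple GPUs, the same script would typically be started as `python -m torch.distributed.launch --nproc_per_node=<num_gpus> train.py`, which spawns one process per GPU.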
Using the Docker build cache efficiently can result in significantly
faster build times. In some environments, though, such as CI/CD
systems, individual builds run independently of each other and the
build cache is never preserved. Every build then starts from scratch,
which is slow and wasteful. This article will try to provide some
solutions for these cases.
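One common approach to this problem, sketched below, is to pull the image from the previous build and seed the cache from it with `--cache-from`. The image name and registry here are placeholders for illustration; adjust them to your setup.

```shell
# Pull the image produced by the previous CI build; on the very
# first build there is nothing to pull, so ignore the failure.
docker pull registry.example.com/myapp:latest || true

# Build with BuildKit, reusing layers from the pulled image.
# BUILDKIT_INLINE_CACHE=1 embeds cache metadata into the image
# so that future builds can in turn use it with --cache-from.
DOCKER_BUILDKIT=1 docker build \
  --cache-from registry.example.com/myapp:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t registry.example.com/myapp:latest .

# Push the image (with its inline cache metadata) for the next build.
docker push registry.example.com/myapp:latest
```

The pull adds some network cost up front, but unchanged layers are then reused instead of rebuilt, which usually pays off for anything beyond trivial images.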
In the evening, while checking on the tarantulas I keep, I noticed
that the Chromatopelma cyaneopubescens had molted! Since it had been
refusing food for several weeks and had been lethargic, I already
suspected that it was in pre-molt.
In the above photo you can see part of the cast-off exoskeleton in
the top centre.
I will try feeding this little one again in 7-10 days, so it can
properly recover from the molting process.