Welcome to nickb.dev, a blog by Nick Babcock! Below are my most recent articles. You can find the history of all my writings in the archive or just find out more about me.



Results of Authoring a JS Library with Rust and Wasm

I recently overhauled a JS parsing library to delegate to the Rust implementation via Wasm. The result is seamless for users, as the small Wasm bundle is inlined in the library as base64. Along the way, I saw that parsing can be much faster than object creation.
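
As a rough sketch of what the Rust side of such a binding might look like (assuming wasm-bindgen and serde_json; parse_save, the Parsed struct, and the byte layout are hypothetical, not the library's actual API):

use wasm_bindgen::prelude::*;

#[derive(serde::Serialize)]
pub struct Parsed {
    version: u32,
    records: usize,
}

#[wasm_bindgen]
pub fn parse_save(data: &[u8]) -> Result<String, JsValue> {
    // Parse entirely in Rust and hand back a JSON string; the JS wrapper
    // decides when (or whether) to materialize objects from it, since
    // object creation can cost more than the parse itself.
    if data.len() < 4 {
        return Err(JsValue::from_str("input too short"));
    }
    let version = u32::from_le_bytes([data[0], data[1], data[2], data[3]]);
    let parsed = Parsed { version, records: data.len() - 4 };
    serde_json::to_string(&parsed).map_err(|e| JsValue::from_str(&e.to_string()))
}

On the JS side, the wrapper decodes the inlined base64 string into bytes and instantiates the module at load time, so users never have to fetch a separate .wasm file.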

Read more...



Migrating a 160k Word Jekyll Blog to Hugo

At 160k words, this is by no means a small blog. When years of annoyances with Jekyll boiled over, I decided to migrate the blog to Hugo. The results were amazing and will hopefully lower the barrier for me to continue writing unencumbered.

Read more...



Backblaze B2 as a Cheaper Alternative to Github's Git LFS

When cost optimizing, consider using Backblaze B2 over GitHub's Git LFS. B2 has a much more generous free tier than GitHub's: 10x more storage (10GB vs 1GB) and 30x more bandwidth (30GB vs 1GB). Even after exceeding the free tier, GitHub's Git LFS still commands a 12x price premium. Before you abandon Git LFS, though, there are several tips to keep in mind.

Read more...



My Bet on Rust has been Vindicated

I chose Rust for a project and struggled along the way, which made me second-guess the decision. But after the release fulfilled use cases I hadn't considered, the decision was vindicated: the wins that Rust brings outweighed any struggles.

Read more...



Reasons to Migrate away from Gatsby

Gatsby is a site generation framework. I was recently using it for one of my side projects, sff.life, a site dedicated to small form factor computing (e.g., small computers). I decided to migrate away from Gatsby, as it is not a zero cost abstraction for static sites, for two reasons: needless bloat via JavaScript and JSON bundles, and way too many dependencies and releases. Before I give more background and expand on these reasons, I want to be clear that I still believe Gatsby's benefits outweigh the costs in many situations.

Read more...



Opinionated Guide for Web Development with Rust Web Workers

I have a toy site that parses Rocket League replays using the Rust crate boxcars. Parsing a replay is synchronous and shouldn't block the browser's UI thread, so the work is offloaded to a web worker. Getting this side project to a point where it works and is maintainable (with minimal configuration) has been an exercise in doggedness; I spent weeks exploring options. I believe I've found a happy medium, and here's the recipe:
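
As a minimal sketch of the work the worker ends up wrapping (assuming boxcars' ParserBuilder API; error handling trimmed):

use boxcars::{ParseError, Replay};

// The CPU-heavy part: decode the replay bytes in Rust (compiled to Wasm)
// inside the web worker, keeping the UI thread free.
fn parse_replay(data: &[u8]) -> Result<Replay, ParseError> {
    boxcars::ParserBuilder::new(data).parse()
}

In such a setup, that function is compiled to Wasm and invoked from the worker, with replay bytes and parse results passed back and forth via postMessage.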

Read more...



Parsing Performance Improvement with Tapes and Spatial Locality

2020-08-12: This article described a performant method to parse data formats and how to aggregate serde fields with the input buffered. While the serde demonstration is still valid, I opted to create a derive macro that will aggregate fields and isn't susceptible to edge cases.

There's a format that I need to parse in Rust. It's analogous to JSON but with a few twists:

{ "core": "core1", "nums": [1, 2, 3, 4, 5], "core": "core2" }

The core field appears multiple times and not sequentially, and the documents can be largish (100MB). The document should be deserializable by serde into something like:

struct MyDocument {
    core: Vec<String>,
    nums: Vec<u8>,
}

Unfortunately we have to choose one or the other:
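
As a quick illustration of the tension (a sketch assuming serde and serde_json, not the approach the article arrives at), the straightforward derive errors on the sample document instead of aggregating the repeated core field:

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct MyDocument {
    core: Vec<String>,
    nums: Vec<u8>,
}

fn main() {
    let input = r#"{ "core": "core1", "nums": [1, 2, 3, 4, 5], "core": "core2" }"#;
    // Expect an error here: a hand-written Deserialize impl (or the derive
    // macro mentioned in the update above) is needed to merge the `core`
    // fields into a single Vec.
    let result: Result<MyDocument, _> = serde_json::from_str(input);
    println!("{:?}", result);
}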

Read more...



Monitoring Remote Sites with Traefik and Prometheus

I have several sites deployed on VPSs like DigitalOcean that have been dockerized and are reverse proxied by Traefik, so they don't have to worry about Let's Encrypt, HTTPS redirection, etc. Until recently I had very little insight into these sites and their infrastructure. I couldn't answer basic questions like: how many requests is each site handling, what are the response times for each site, and is the box over- or under-provisioned? For someone who has repeatedly blogged about metrics and observability (here, here, here, here, and here), this gap was definitely a sore spot for me.

Read more...



Downsampling Timescale Data with Continuous Aggregations

Timescale v1.3 was released recently, and the major feature is continuous aggregations. The main use case for these aggregations is downsampling data, which has been brought up several times before, so it's great that these issues have been addressed. Downsampling data allows queries spanning a long time interval to complete in a reasonable time frame. I wrote and maintain OhmGraphite, which I have configured to send hardware sensor data to a Timescale instance every couple of seconds.

Read more...



SQLite: Dropping an index for a 300x speedup

For a small, personal project I use SQLite as a time series database and make queries like the following:

SELECT referer, COUNT(*) AS views
FROM logs
WHERE host = 'comments.nbsoftsolutions.com'
  AND epoch >= 1551630947 AND epoch < 1551632947
GROUP BY referer
ORDER BY views DESC

Over time, I've noticed that this query has become increasingly time consuming, sometimes taking minutes to complete. I thought I had created the proper indices:

CREATE INDEX idx_epoch ON logs(epoch);
CREATE INDEX idx_host ON logs(host);

I ran ANALYZE, and still no changes.

Read more...