Published on:
Stop inlining Wasm as base64 and using top-level awaits. Learn why treating WebAssembly as an implementation detail hurts performance, and how to properly bundle for Cloudflare, SIMD, and modern runtimes.
Published on:
Arena allocators are having a bit of a moment; I keep hearing about them. I have a data model that is deserialized from 200MB of binary data in the browser via Wasm, and performance is a key feature. Could arena allocators be a good fit?
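To ground the question, here is a minimal sketch of the arena idea: an index-based arena where every value lives in one contiguous backing `Vec` and the whole model is freed in a single drop. The `Arena` and `Handle` names are illustrative, not from any particular crate; production crates like bumpalo or typed-arena hand out references rather than indices.

```rust
// A minimal index-based arena: values are allocated into one contiguous
// backing Vec and addressed by handle, so the whole data model is freed
// in a single drop instead of per-object.
struct Arena<T> {
    items: Vec<T>,
}

#[derive(Clone, Copy, PartialEq, Debug)]
struct Handle(usize);

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { items: Vec::new() }
    }

    // Allocation is just a bump of the Vec's length: cheap and
    // cache-friendly, which is the appeal when deserializing a large
    // data model with many small nodes.
    fn alloc(&mut self, value: T) -> Handle {
        self.items.push(value);
        Handle(self.items.len() - 1)
    }

    fn get(&self, handle: Handle) -> &T {
        &self.items[handle.0]
    }
}
```

Handles double as compact, serialization-friendly references between nodes, which is part of why arenas pair well with Wasm and binary formats.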
Published on:
Wasm-pack, the rustwasm working group, and other Wasm-related tools were sunset and archived in July 2025, after more than 5 years on life support. Since wasm-pack can be seen as an abstraction layer over several tools, what is an effective way to peel back this layer and use the tools directly?
Published on:
Web workers that load Wasm through an ES6 import with the help of bundler plugins can silently drop messages during startup. This happens because the top-level await for Wasm's asynchronous initialization blocks the worker's message handler registration, creating a race condition that's often invisible.
Published on:
SIMD gather instructions seem incredibly useful, and I've measured that they improve performance by 3x. If the compiler can auto-vectorize the equivalent scalar code, they must be underrated, right?
Published on:
I didn't understand the hype around DuckDB until recently, when I wanted to share data and DuckDB was the perfect fit. While powerful, it has rough edges, such as limited ecosystem support for tagged unions and substantial Wasm payload sizes for web deployment. Still, DuckDB has me excited to revisit how I structure bespoke data queries.
Published on:
There are several options for how to structure a Bevy app for the web. I'm a proponent of shifting as much as possible to web workers, but shifting an entire Bevy app to a web worker is going to require elbow grease as we ditch the built-in windowing system.
Published on:
Journey through WebAssembly's architectural quirk: 32-bit addressing with 64-bit operations. How does this impact SWAR techniques, and can we turn this unusual combination into a performance advantage? Let's benchmark this bit wizardry in a whitespace-skipping scenario!
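For a flavor of the technique, here is a hedged sketch of SWAR whitespace skipping: load 8 bytes into a `u64`, XOR against a broadcast of the space byte, and locate the first differing byte via trailing zeros. For brevity it handles only the ASCII space character, not tabs or newlines, and the `skip_spaces` name is my own.

```rust
// 0x20 (ASCII space) broadcast into every byte of a 64-bit word.
const SPACES: u64 = 0x2020_2020_2020_2020;

// Returns the index of the first non-space byte (or input.len()).
fn skip_spaces(input: &[u8]) -> usize {
    let mut i = 0;
    // SWAR fast path: examine 8 bytes per iteration.
    while i + 8 <= input.len() {
        let chunk = u64::from_le_bytes(input[i..i + 8].try_into().unwrap());
        let diff = chunk ^ SPACES; // a nonzero byte marks a non-space byte
        if diff != 0 {
            // With a little-endian load, the lowest nonzero byte is the
            // first non-space, so trailing_zeros / 8 gives its offset.
            return i + (diff.trailing_zeros() / 8) as usize;
        }
        i += 8;
    }
    // Scalar tail for the last < 8 bytes.
    while i < input.len() && input[i] == b' ' {
        i += 1;
    }
    i
}
```

The 64-bit XOR and `trailing_zeros` map to single Wasm instructions even though addresses stay 32-bit, which is exactly the quirk the post explores.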
Published on:
In a real-world benchmark, the default musl allocator caused a 7x slowdown compared to other allocators. I recommend all Rust projects in a musl environment immediately swap to a different allocator.
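Swapping the allocator is a one-attribute change via `#[global_allocator]`. As a dependency-free sketch, the hypothetical wrapper below delegates to the system allocator while counting allocations; in a real musl build you would instead install a crate such as mimalloc or jemalloc here.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative wrapper: delegates to the system allocator and counts
// allocations, standing in for a real replacement like mimalloc.
struct CountingAlloc;

static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATIONS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

// This single attribute routes every heap allocation in the program
// (Vec, String, Box, ...) through our allocator.
#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;
```

With a crate like mimalloc the swap is the same shape: one `use`, one `#[global_allocator]` static, no other code changes.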
Published on:
What is the pread syscall, and why do zip implementations favor it to unlock powerful concurrent decompression? How can we emulate pread on systems without that syscall? And what do Go and .NET have to do with it?
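As a rough sketch of one emulation strategy (my own illustration, not necessarily the post's approach): a plain file handle has a single shared cursor, so concurrent readers would clobber each other's position. Serializing the seek + read pair behind a mutex restores pread's "read at offset without disturbing anyone" contract, at the cost of losing the concurrency. On Unix, Rust exposes real pread as `std::os::unix::fs::FileExt::read_at`.

```rust
use std::fs::File;
use std::io::{self, Read, Seek, SeekFrom};
use std::sync::Mutex;

// Emulated pread: seek to the requested offset and read, with a mutex
// ensuring the shared file cursor isn't raced by other readers.
fn pread_emulated(file: &Mutex<File>, buf: &mut [u8], offset: u64) -> io::Result<usize> {
    let mut f = file.lock().unwrap();
    f.seek(SeekFrom::Start(offset))?;
    f.read(buf)
}
```

The mutex makes this correct but sequential, which hints at why zip readers prefer real pread: each decompression thread can read its entry's bytes independently, with no shared cursor to fight over.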