# Don't freak, but our Rust web server is now Node.js
A year and a half ago I wrote that My Bet on Rust had been Vindicated, as an incredible amount of code could be shared between the server and web client, and I could easily spin out C libraries and executables. No use case was out of reach.
But based on the title you know that the backend is no longer leveraging a Rust web server (warp in this case). So what happened in the last year and a half and why am I saying not to worry?
For that we need some context:
- There’s this web app
- This web app analyzes local files without uploading
- The parsing and extraction of data from the files happen within Wasm compiled from Rust
- Users can upload and share files, and the server will perform the same parsing and extraction logic as the frontend
From this context, the backend resembles a simple CRUD app, something I admitted in the original article:
> For me a client side solution means that the server isn’t critical
The original article also listed the one, simple requirement I had for the web server:
> The same parsing code is executed on the client and server side and so must be fast for both
Since there's no friction in calling Rust from Rust, the original backend was coded with a Rust web server. And I went down this road for a couple years, so you know things couldn't have been that bad. However, I made the transition such that the app is now fully Next.js, frontend and backend, with the backend calling the Rust logic through napi-rs.
What were some of the reasons for this transition?
## Web Ecosystem
At its zenith, the Rust version had 45 dependencies, which to me verges on difficult to organize. Do I group dependencies alphabetically or categorically? If categorically, do I group the async database driver with database dependencies or the async dependencies? What about the dependency for TLS connections to the database: is that in the TLS category or the database category?
Here are some examples of how I grouped and annotated the dependencies the best I could:
Dependencies for connecting to S3:
```toml
# S3 Dependencies
rusoto_s3 = "0.46"
rusoto_core = "0.46"
rusoto_credential = "0.46"
```
Dependencies for connecting to databases:
```toml
# Database dependencies
deadpool-postgres = { version = "0.7", default-features = false }
deadpool-redis = { version = "0.7", default-features = false }
postgres-native-tls = "0.5"
postgres-types = { version = "0.2", features = ["derive"] }
redis = { version = "0.20", default-features = false, features = ["tokio-comp"] }
tokio-postgres = { version = "0.7", features = ["with-chrono-0_4"] }
```
Dependencies for the async ecosystem, web, and TLS:
```toml
# Async, Web, TLS dependencies
futures = "0.3"
headers = "0.3"
hyper-tls = "0.5"
native-tls = { version = "0.2", features = ["vendored"] }
reqwest = { version = "0.11", features = ["json", "multipart", "gzip"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
serde_urlencoded = "0.7"
tokio = { version = "1", features = ["macros", "signal", "fs", "io-util"] }
tokio-native-tls = "0.3"
tokio-util = "0.6"
url = "2"
warp = "0.3"
```
I think my point is getting across: lots of dependencies are needed, and I've omitted the categories for compression, observability, security, and utility dependencies. Every dependency carries some maintenance cost. For instance, one could argue I made a mistake in the examples above by not pinning precise dependency versions, though such a mistake is more tolerable in the presence of an app's lockfile.
Over time, I’ve started to appreciate frameworks abstracting away dependency decisions. Yes, the tradeoff is less flexibility, but not needing downstream users to wire up all the dependencies could be a productivity boon.
Maybe I could have saved some heartache by sticking to a synchronous web framework, but Rust's mindshare seems focused on the asynchronous web, and I don't want to bet on something whose days seem numbered.
One may ask how writing the backend within Next.js solves this problem, considering how many associate JavaScript with the dependency hell of managing hundreds of dependencies. Well, maybe it's because I'm dependency averse, but only 10 dependencies were needed, and these dependencies even added features compared to their Rust counterparts.
For instance, there's Prisma. This one package (technically two dependencies) is everything I need to query a Postgres database through a type-safe ORM. I've never been a fan of ORMs, as I often prefer raw SQL for transparency, but Prisma has made quite the impression on me and seems to work well. Prisma also handles schema migrations, which I had been writing and managing by hand. Perhaps if that were the only sticking point, I'd consider Rust's SeaORM, which gained migration support just a few days ago.
The only pitfall I ran into with Prisma is that it doesn't support Postgres composite types, but I was able to work around this issue by expanding each field of the composite type into its own column:
```sql
-- Original type
CREATE TYPE save_version AS (
    first SMALLINT,
    second SMALLINT,
    third SMALLINT,
    fourth SMALLINT
);

-- The migration
ALTER TABLE saves ADD COLUMN save_version_first SMALLINT;
ALTER TABLE saves ADD COLUMN save_version_second SMALLINT;
ALTER TABLE saves ADD COLUMN save_version_third SMALLINT;
ALTER TABLE saves ADD COLUMN save_version_fourth SMALLINT;

UPDATE saves SET
    save_version_first = (version).first,
    save_version_second = (version).second,
    save_version_third = (version).third,
    save_version_fourth = (version).fourth;

ALTER TABLE saves DROP COLUMN version;
DROP TYPE save_version;
```
While the increased verbosity of the column names is enough to make one wince, I think it is but a small price to pay for the type-safe queries and migrations that come with conforming to Prisma's worldview.
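To illustrate what those type-safe queries buy, here is a minimal sketch of querying the expanded columns through the generated client. The `saves` model, its `id` column, and the camelCased field names are assumptions about the schema, not my exact setup:

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Hypothetical query against the expanded columns; the model and
// field names are assumed to mirror the migrated table above.
async function saveVersion(saveId: string) {
  const save = await prisma.saves.findUnique({
    where: { id: saveId },
    select: {
      saveVersionFirst: true,
      saveVersionSecond: true,
      saveVersionThird: true,
      saveVersionFourth: true,
    },
  });

  // The result is fully typed, so a misspelled field is a
  // compile-time error instead of a runtime surprise.
  return save;
}
```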
Prisma exemplifies the larger trend when it comes to dependencies for the web: Node.js has a much richer web ecosystem than Rust. No one is denying this, even Are we web yet?:
> Rust does have a diverse package ecosystem, but you generally have to wire everything up yourself. […] If you are expecting everything bundled up for you, then Rust might not be for you just yet
To further cement this point, in the Rust implementation I needed to spend brain cycles implementing user authentication. My grandma used to tell me I was a smart cookie, so I'm sure I coded a decent implementation that communicated randomly generated session identifiers in cookies, which were stored in a database and checked on subsequent requests. Now I just use iron-session and don't think about user authentication. I still think about authorization, but that's more tolerable.
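For a taste of how little ceremony that involves, here is a minimal sketch of a session-guarded Next.js API route. It assumes iron-session v6's `withIronSessionApiRoute` helper, and the cookie name, secret env var, and user shape are all made up for illustration:

```typescript
import { withIronSessionApiRoute } from "iron-session/next";
import type { NextApiRequest, NextApiResponse } from "next";

// Hypothetical session payload, declared so TypeScript knows
// what lives on req.session.
declare module "iron-session" {
  interface IronSessionData {
    user?: { id: string };
  }
}

// The cookie name and secret are placeholders, not my real config.
const sessionOptions = {
  cookieName: "app_session",
  password: process.env.SESSION_SECRET as string,
  cookieOptions: { secure: process.env.NODE_ENV === "production" },
};

async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (!req.session.user) {
    return res.status(401).json({ error: "unauthenticated" });
  }
  res.json({ userId: req.session.user.id });
}

export default withIronSessionApiRoute(handler, sessionOptions);
```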
In the new implementation I don’t have to worry about supplying an HTTP client, as Next.js provides a server side fetch. I don’t need to worry about compression as the Node.js standard library includes both deflate and brotli algorithms. I don’t need to include dependencies for JSON or query param serialization, as they are also in the standard library. TLS is taken care of and async is baked in. It feels nice to not need to wire all these up.
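As a rough sketch of what comes built in (the endpoint and payload here are invented):

```typescript
import { brotliCompressSync } from "zlib";

// fetch, URL parsing, and JSON handling need no dependencies, and
// brotli ships in the zlib module. The endpoint is made up.
async function fetchAndCompress(): Promise<Buffer> {
  const url = new URL("https://api.example.com/data");
  url.searchParams.set("format", "json"); // query param serialization

  const response = await fetch(url); // Next.js supplies server-side fetch
  const payload = await response.json();

  // Compress the serialized payload with brotli, no extra packages.
  return brotliCompressSync(JSON.stringify(payload));
}
```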
What may be even worse news for Rust web projects is that the already small community (in comparison to Node.js) is splintered. It seems like no two projects share the same stack of async runtime, web framework, and database drivers. There's no doubt that Node.js is splintered too, but the immense size of the community and the resources available for learning should make potential contributing experiences less overwhelming. And being more accessible to contributors should be seen as a major win for any project.
## Napi-rs
The largest departure from a traditional Node.js web server is that I needed to wire it up to the Rust code that housed all the business logic and performance sensitive bits. Thankfully, others who wanted to sneak high performance or platform specific code into Node.js blazed this trail long before me through the use of Node-API, "an intermediary between C/C++ code and the Node JavaScript engine".
Well, C/C++ isn't the only game in town: @Brooooooklyn came up with napi-rs, which allows one to ergonomically bind Node.js to Rust code.
An aside before we get to the code. I see that @Brooooooklyn was recently hired by Vercel, and it's awesome to see Vercel continue to support building the bridge between Rust and JS (@kdy1, the creator behind swc, being another recent hire). These two deserve all the accolades, and I want to see more companies like Vercel step up and support these creators. The only caveat is that, as a community, we need to be cognizant of putting all our bridge builders in the hands of one company.
Back to the code. The Rust functions we'll be bridging are synchronous, so I was worried that calling this code from Node.js would block the main thread, or that I would need to juggle a thread pool. I'm glad to report that neither solution is needed, as Node-API provides an AsyncWorker (and napi-rs an analogous AsyncTask for Rust) that queues the desired task onto a libuv worker thread so it won't block the main event loop.
This can be seen in action below. The code sets up a callsite that will compute the checksum of a file at the given path on a background thread.
```rust
use napi::{Error, JsString, Task};
use napi::bindgen_prelude::*;
use napi_derive::*;

#[napi]
pub fn file_checksum(path: String) -> AsyncTask<FileHasher> {
    AsyncTask::new(FileHasher { path })
}

pub struct FileHasher {
    pub path: String,
}

impl Task for FileHasher {
    type Output = String;
    type JsValue = JsString;

    fn compute(&mut self) -> napi::Result<Self::Output> {
        // This internal function computes the highwayhash
        applib::hasher::hash(&self.path)
            .map_err(|e| Error::from_reason(e.to_string()))
    }

    fn resolve(&mut self, env: napi::Env, output: Self::Output) -> napi::Result<Self::JsValue> {
        env.create_string_from_std(output)
    }
}
```
Then we can build our dynamic system library and rename it to carry a `.node` file extension, which Node.js expects of any shared library it loads:
```bash
cargo build --release -p applib-node
cp -f ./target/release/libapplib_node.so ./src/app/src/server-lib/applib.node
```
If this manual step seems cumbersome, there is @napi-rs/cli, but since the two steps are simple, I figured it wasn't worth the dependency.
What is most definitely cumbersome is the dance needed to load the shared library in a way that works in the Next.js dev environment, the jest test environment, and production. After a significant amount of effort, this is what I landed on:
```typescript
import path from "path";
import getConfig from "next/config";
import { dlopen } from "process";

let nextRoot = getConfig()?.serverRuntimeConfig?.PROJECT_ROOT;
nextRoot = nextRoot && process.env.NODE_ENV === "production" ? "." : nextRoot;

// Much better to do this than to use a combination of
// __non_webpack_require__, node-loader, and webpack configuration
// to work around webpack issues
const modulePath = nextRoot
  ? path.join(path.resolve(nextRoot), "src", "server-lib", "applib.node")
  : path.join(__dirname, "applib.node");

const nativeModule = { exports: {} as any };
dlopen(nativeModule, modulePath);

export function fileChecksum(path: string): Promise<string> {
  return nativeModule.exports.fileChecksum(path);
}
```
Don't fret if the above code seems opaque. It is. The gist is that we avoid `require`, as those calls get processed by webpack, and we really don't want webpack to process our node library. The solution isn't pretty, but it's better than the alternatives of using the nonstandard `__non_webpack_require__` and adding additional dependencies and configuration.
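With that wiring in place, calling into Rust from the rest of the app looks like any other async function. A small sketch (the import path and file path are placeholders):

```typescript
import { fileChecksum } from "./server-lib/applib";

// The AsyncTask-backed export resolves on a libuv worker thread,
// so the main event loop stays free while the hash is computed.
const checksum = await fileChecksum("/uploads/save-file.bin");
console.log(`checksum: ${checksum}`);
```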
In sum, we've replaced our pure Rust web server with a Node.js web server that wraps a Rust core via Node-API, but that isn't nearly as catchy a title. The more accurate title does encapsulate why a hypothetical manager (or me from 2 years ago) shouldn't worry: we didn't need to discard most of the business logic, we simply migrated the server that handled the HTTP endpoints. To me, this speaks to how flexible Rust is, as it can fit anywhere in any stack.
Before moving on, there's a Next.js-specific bonus: we can call into our node library at build time so that we can statically generate pages and cache them on the CDN. I've only taken advantage of this in one spot of the project, but it is immensely cool to see a seemingly fully rendered page in the time it takes for the CDN to respond.
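As a sketch of the idea, a page's getStaticProps can invoke the native module while the site builds. The props, asset path, and import path below are hypothetical:

```typescript
import type { GetStaticProps } from "next";
import { fileChecksum } from "../server-lib/applib";

// Hypothetical props; the real page renders parsed save data.
interface Props {
  checksum: string;
}

// Runs at build time, so the native call's result is baked into a
// statically generated page that the CDN can serve directly.
export const getStaticProps: GetStaticProps<Props> = async () => {
  const checksum = await fileChecksum("./assets/sample-save.bin");
  return { props: { checksum } };
};
```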
## Ease of Deployment
I must talk about the regression in ease of deployment. Rust easily compiles into a static executable that is trivial to run anywhere, or it can be embedded into a distroless container for images that are shockingly small, at around 15 MB. The new implementation is not so lucky: Docker images come out to around 500 MB (an over 30x increase), and that is after enabling Next.js standalone deployment and spending a good chunk of time optimizing further. It's not a horrible regression, but I now need to keep a closer eye on storage space.
Next.js standalone deployment was released as part of Next.js 12, so it is a relatively new feature. Prior to this feature (and to completing the migration from Rust to the Next.js backend), Docker image sizes were 1.5 GB, and I heavily relied upon dive to manually identify and remove unused dependencies. It was an error prone process (e.g., accidentally removing a dependency thought to be unused), and I am very glad to see Next.js standalone deployment see the light of day.
## Lines of Code Comparison
Anyone who has done a rewrite will know that the rewrite often comes out shorter than the original in terms of lines of code, so I don't want to linger on this point for too long. But for those curious: the Rust implementation was 5k lines (whitespace and comments excluded), while the rewrite is about 1.5k.
What I think is important to highlight is that there is more code reuse after the migration. This may be surprising considering the core logic is in Rust, but the Rust web server had to redefine all the models for communicating JSON payloads between the server and the client. Now I just use the same interface for both:
```typescript
// An example model that I only need to define once,
// and both the client and server can use it
export interface GameVersion {
  first: number;
  second: number;
  third: number;
  fourth: number;
}
```
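Both sides can then lean on the same definition. A sketch (the endpoint is invented):

```typescript
// The client fetches from the server, and both agree on the shape
// via the shared interface. The endpoint below is made up.
async function getGameVersion(): Promise<GameVersion> {
  const resp = await fetch("/api/game-version");
  return (await resp.json()) as GameVersion;
}
```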
## Performance
For those performance-driven folks (and I count myself amongst them most of the time), there has been no discernible slowdown due to Node-API or Node.js HTTP handling. Is there technically a slowdown somewhere in the pipeline? Probably. I've measured Node-API calls to have about a microsecond or two of overhead per invocation, but this overhead is easily dwarfed by the invoked function that may take milliseconds or hundreds of milliseconds.
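One way to measure this kind of overhead is to time a large batch of calls to a trivial native export and divide. A sketch, where `noop` is a hypothetical do-nothing export added only for measurement:

```typescript
import { performance } from "perf_hooks";

// Hypothetical do-nothing native export, used only to isolate the
// per-call Node-API (plus promise) overhead from real work.
declare function noop(): Promise<void>;

async function perCallOverheadMicros(iterations = 100_000): Promise<number> {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    await noop();
  }
  const elapsedMs = performance.now() - start;
  return (elapsedMs * 1000) / iterations; // microseconds per call
}
```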
What has noticeably improved is the time taken to compile. In the previous article, I mentioned compile times were not great, but I could endure them:
> Building the server crate on my 8 core machine takes 9 minutes for a clean build and 6 minutes for incremental
What I didn't mention is that those compile times were probably in large part due to compiling 400-500 dependencies. Now, without the Rust server component, only 80 dependencies need to be compiled for the node shared library, which finishes in under a minute. And I only need to recompile when the business logic changes, not when I update an endpoint, so between the faster builds and server side hot module reloading, the developer experience has much improved.
## Conclusion
Has the migration been a success? It's probably too soon to tell, but I think the migration itself went well: no one knew I swapped out the backend. I only needed to match the existing REST API endpoints, which wasn't too difficult to do (after all, the backend is just a glorified CRUD app). The only user-visible change was that everyone was logged out, as I transitioned all accounts off my home grown cookie authentication and onto a more industry standard solution.
I would like to think that the migration achieved the following outcomes:
- Type safe database queries
- Managed database migrations
- Fewer dependencies to manage
- Quicker compile times
- Less code to maintain
- More accessible to contributors
- Shared server and client payload types
- Facilitated statically generated pages
- Preserved top notch performance
All those positive outcomes seem worth the downside of larger docker images.
In conclusion, even when the core business logic needs to be in Rust for performance, code reuse, or platform specific behavior, it doesn’t mean one needs a Rust web server to communicate this business logic. There are ergonomic ways to bridge Rust into Node.js (or another runtime) so that one can take advantage of a more established web ecosystem.
## Comments
If you'd like to leave a comment, please email [email protected]