by craftkiller
1 subcomment
- I don't see any mention of enabling kTLS (TLS in the kernel). I'd suggest re-running the benchmark with kTLS enabled: https://www.f5.com/company/blog/nginx/improving-nginx-perfor...
Also it doesn't look like they enabled sendfile() in the nginx conf: https://nginx.org/en/docs/http/ngx_http_core_module.html#sen...
The combination of sendfile and kTLS should avoid round-trips to userland while sending files.
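For reference, a minimal nginx sketch of those two settings together (assuming nginx 1.21.4+ built against OpenSSL 3.0+ and a kernel with kTLS support; the paths are placeholders):

```nginx
# hypothetical nginx.conf fragment: zero-copy sendfile() plus kernel TLS
http {
    sendfile   on;    # serve static files without copying them through userland
    tcp_nopush on;    # coalesce headers and file data into full packets

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/site.pem;    # placeholder paths
        ssl_certificate_key /etc/nginx/certs/site.key;
        ssl_conf_command    Options KTLS;                 # hand TLS record encryption to the kernel
        root /var/www/html;
    }
}
```

If I remember right you also need the kernel side enabled: the ktls_ocf module on FreeBSD, or `modprobe tls` on Linux.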
- The numbers seem too close to 65535 to be a coincidence.
Are you making the requests from a single source IP address?
Are you aware of the limit when reusing the same source IP against the same destination IP and port? Each connection needs a unique (source address, source port) pair for a given destination, so a single source IP maxes out at 65535 ports.
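For what it's worth, the client side usually runs out first: on a Linux load generator the default ephemeral port range is well under 65535. A quick sketch of how I'd check and widen it (assuming Linux sysctls on the wrk box):

```sh
# default range is roughly 32768-60999, i.e. only ~28k usable source ports per destination
sysctl net.ipv4.ip_local_port_range

# widen the range and allow reuse of TIME_WAIT sockets for outgoing connections during the test
sudo sysctl -w net.ipv4.ip_local_port_range="1025 65535"
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
```

Though if the benchmark keeps connections alive (wrk does by default), the port limit probably isn't being hit at all.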
by spankibalt
6 subcomments
- Sucks that there's no ECC-RAM model. A phone-sized x86 slab, as opposed to those impractical mini-PC/Mac-mini boxes, that one could carry around and connect to a powerbank of similar size, and/or various types of screens (including a smartphone itself), would make for a great ultramobile setup.
by artimaeis
3 subcomments
- I love how capable these tiny N150 machines are. I've got one running Debian for my home media and backup solution and it's never stuttered. I'd be curious about exactly what machine they're testing with. I've got the Beelink ME mini running that media server. And I use a Beelink EQ14 as a kind of jump box to remote into my work desktop.
by PaulKeeble
0 subcomments
- I didn't see the size of the test page mentioned as I went through (did I miss it?), and I think it potentially matters here. A 2.5 Gbps link can do ~280 MB/s, which at 63k requests/s is just ~4.55 KB per request. That could easily be a single small page saturating the link, explaining the clustering at that value.
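Quick back-of-the-envelope check of that (my own numbers; the ~90% usable payload factor is just an assumption for framing/protocol overhead):

```python
link_bps   = 2.5e9     # 2.5 Gbps link
usable     = 0.90      # rough allowance for Ethernet/IP/TCP/TLS overhead (assumption)
reqs_per_s = 63_000    # the ~63k req/s clustering from the post

bytes_per_s   = link_bps / 8 * usable         # ~281 MB/s on the wire
bytes_per_req = bytes_per_s / reqs_per_s      # ~4.5 KB available per request
print(f"{bytes_per_s/1e6:.0f} MB/s -> {bytes_per_req/1e3:.1f} KB per request")
```

So a single ~4-5 KB response (page plus headers) would leave every sufficiently fast configuration parked at roughly the same req/s.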
by matthewhartmans
1 subcomment
- Love this! I have been running an N150 with Debian 13 as my daily driver and I'm super impressed! For ~$150 it packs a punch!
- The N100 family has been the Raspberry Pi host killer for me; I migrated to one from an RPi 4 and couldn't be happier.
- Of course zones and jails win a bit there, because they get their own native networking stack in the kernel instead of going through bridges.
Not much experience with Solaris zones, but FreeBSD jails and their vnets are amazingly good. They also don't lose much in translation. Say you run a Debian 13 Docker image on an Ubuntu 12.04 host: sure, it works, but it has to translate.
Jails have the restriction that a jail can't run a higher FreeBSD version than the host system, so there's (almost) zero translation involved (see the vnet sketch below).
My home stack is OpenBSD for the gateway/router, several FreeBSD machines (services, DBs, pkg build server, data storage/NAS) and another OpenBSD machine to run OpenBSD VMs via VMD, and I haven't looked back since. It's a stack that works with impeccable performance and equally impeccable documentation. Should the internet crumble due to another AWS us-east-1 outage or another Cloudflare fuckup, I can at least run my local stuff and feel confident enough to keep making changes to the system just based off the locally available documentation.
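For anyone curious what that vnet setup looks like, a minimal /etc/jail.conf sketch (names, paths and the epair interface are placeholders, not from the article):

```
# hypothetical vnet jail: it gets its own instance of the network stack
web {
    path          = "/usr/local/jails/web";
    host.hostname = "web.home.lan";

    vnet;                           # new virtual network stack for this jail
    vnet.interface = "epair0b";     # jail end of an epair(4) attached to the host network

    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```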
- This is all on a quad-core Intel processor. It must be noted that most of these OSes, with the exception of NetBSD, can't efficiently handle heterogeneous core configurations like the ones you find on more powerful Intel processors.
- I'd love to see benchmarks that hit CPU or NIC limits; the HTTPS test hit CPU limits on many of the configurations, but inquiring minds want to know how much you can crank out with FreeBSD. Anyway, overload behavior is sometimes very interesting (probably less so for static HTTPS). It may well need more load-generation nodes, though; generating load is often harder than handling it.
OTOH, maybe this is a bad test on purpose? The blogger doesn't like running these tests, so do a bad one and hope someone else is baited into running a better one?
- Pleasantly surprised to see SmartOS and zones used. Nice writeup.
by koakuma-chan
1 subcomment
- All these benchmarking utilities like wrk are notorious for not supporting HTTP/2. Why would you serve static content and not use HTTP/2?
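If someone wants HTTP/2 numbers, h2load from the nghttp2 project is the usual tool; something along these lines (the connection/stream counts and hostname are just examples):

```sh
# 4 threads, 100 client connections, 10 concurrent streams each, run for 30 seconds over h2
h2load -t 4 -c 100 -m 10 -D 30 https://n150.example.lan/index.html
```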
- Imagine what a big piece of iron could do. It makes me think of the recent stories of people who moved out of the cloud and now run everything off one or a few bare-metal hosts.
by LeoPanthera
1 subcomment
- Is there a guide somewhere to what low power CPUs exist in these new mini PC things? I feel like I'm increasingly out of touch.
- Love these N150 systems. I wonder if the RAM/SSD/misc shortages are going to make these humble $140 boxes like $300+ soon.
by waynesonfire
2 subcomments
- I'd really like one that has 2x M.2 slots. I'm very uncomfortable running a server on a single disk.
Also, ECC RAM would be nice.
- It really should be "nginx static web hosting..." as it seems to be very specifically measuring nginx performance across OSes.
Otherwise, an seL4/LionsOS webserver scenario could also be tested.