Another version of this is to use gRPC to communicate the "metadata" of a downloadable file, and then "sideload" the file itself over a side channel with HTTP (or some other lightweight copy method). GitLab uses this to transfer Git packfiles and serve git fetch requests iirc https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/sidec...
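The shape of that pattern is roughly: the RPC returns only a small metadata message (URL, size, checksum), the bytes travel over plain HTTP, and the client verifies the sideloaded payload against the metadata. A minimal stdlib-only sketch, with all field and function names invented for illustration:

```python
import hashlib
from dataclasses import dataclass

# Hypothetical response message for a "get download info" RPC.
# In the real pattern this would be a protobuf; the gRPC call
# carries only metadata, never the file bytes themselves.
@dataclass
class DownloadInfo:
    url: str      # side-channel HTTP URL for the actual bytes
    size: int     # expected payload size in bytes
    sha256: str   # checksum to verify after the sideload

def verify_sideload(info: DownloadInfo, payload: bytes) -> bool:
    """Check side-loaded bytes against the RPC-supplied metadata."""
    return (len(payload) == info.size
            and hashlib.sha256(payload).hexdigest() == info.sha256)

payload = b"packfile bytes"
info = DownloadInfo(
    url="https://example.com/blobs/abc",  # illustrative URL
    size=len(payload),
    sha256=hashlib.sha256(payload).hexdigest(),
)
assert verify_sideload(info, payload)
```

The nice property is that the trust anchor stays in the authenticated RPC: even if the side channel is a dumb HTTP GET, a corrupted or truncated transfer fails the checksum comparison.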
The metadata schema changed constantly (new compression formats, different checksum algorithms, retry policies), and having protobufs for that was genuinely useful. But trying to pipe multi-gigabyte files through gRPC streams was painful: memory pressure, connection timeouts on slow clients, and terrible debugging visibility.
S3 presigned URLs are the boring answer, but they work. Your object storage handles the hard parts (resumability, CDN integration, the actual bytes), and your gRPC service handles the interesting parts (authentication, metadata, business logic).
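The core idea behind a presigned URL is that the URL itself carries an expiry and a signature, so the storage tier can serve the bytes without calling back into your service. A toy stdlib sketch of that shape (real S3 presigning uses SigV4, which is considerably more involved; the secret, host, and query names here are all illustrative):

```python
import hashlib
import hmac
from urllib.parse import urlencode

SECRET = b"demo-secret"  # illustrative; S3 derives SigV4 signing keys

def presign(bucket: str, key: str, expires_in: int, now: int) -> str:
    """Toy presigned GET URL: an HMAC over the request path and an
    absolute expiry time, appended as query parameters. Anyone holding
    the URL can fetch the object until it expires; the signer never
    has to be in the data path."""
    expires = now + expires_in
    msg = f"GET /{bucket}/{key} {expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    qs = urlencode({"Expires": expires, "Signature": sig})
    return f"https://{bucket}.example.com/{key}?{qs}"
```

Your gRPC service does the auth check once, mints the URL, and hands it back in the metadata response; the client then talks straight to object storage.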
In contrast, while HTTP/2 does impose framing on streams, that framing is done entirely by the sender. If all one end has to send to the other is a single stream, it'll be DATA frame after DATA frame for the same stream. The client doesn't have to acknowledge anything at the application level. (At least, nothing beyond TCP ACKs and HTTP/2 flow-control window updates!)
It probably wasn't noticeable in this experiment since, if I'm reading it correctly, the server and client were on the same box; but with any significant distance between them, plain HTTP should be noticeably faster.
The questions aren't unique to gRPC, however; gRPC forces you to confront them early and explicitly IMO, which is not a bad thing.
I know it's not easy to solve given how protobuf-centric gRPC is, but this is the worst piece of it for me. The 4MB limit is a terrible size: big enough that you rarely hit it in test cases, but small enough that it can hit you in production. If you control both ends, you can lift that number to something arbitrarily big so most things stop failing outright (though you probably don't want that as an actual solution for streaming files of any size). In practice, a lot of "serious" setups end up contorting themselves remarkably to avoid that limit.
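For reference, lifting the cap is just a pair of channel arguments set on both ends. The option names below are the real gRPC channel args; the 64 MiB ceiling is illustrative (a value of -1 means unlimited, but an explicit ceiling is usually wiser):

```python
# Channel args that lift gRPC's default 4 MiB message cap. Both the
# client channel and the server must set these, or the smaller side
# still rejects the message with RESOURCE_EXHAUSTED.
MAX_MESSAGE_BYTES = 64 * 1024 * 1024  # 64 MiB, illustrative

CHANNEL_OPTIONS = [
    ("grpc.max_send_message_length", MAX_MESSAGE_BYTES),
    ("grpc.max_receive_message_length", MAX_MESSAGE_BYTES),
]

# Usage sketch (requires grpcio, not run here):
# channel = grpc.insecure_channel("host:50051", options=CHANNEL_OPTIONS)
# server = grpc.server(pool, options=CHANNEL_OPTIONS)
```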
In all seriousness, don't do file transfers over gRPC, except in a pinch for small files. As soon as e.g. API gateways are introduced into the mix, stuff can go south very quickly: increased allocation, GC pressure, network usage, etc. Just use presigned S3 URLs.