1. Dense JSON
Interesting idea. You can also keep the compact binary if you tag each payload with a schema id (see Avro). This also allows a generic reader to decode any binary format by reading the schema and then interpreting the binary payload, which is really useful. A secondary benefit is you never misinterpret a payload. I have seen bugs where protobufs were misinterpreted, since there is no connection handshake and interpretation is akin to a 'cast'.
2. Compatibility checks
+100, there's no reason to allow breaking changes by default.
3. Adding fields to a type: should you have to update all call sites?
I'm not so sure this is the right default. If I add a field to a core type used by 10 services, this requires rebuilding and deploying all of them.
4. Enums look great. What about backcompat when adding new enum variants? Or when you need to 'upgrade' an atomic to an enum?
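The schema-id tagging idea from point 1 can be sketched roughly like Avro's single-object encoding: prefix each compact payload with a schema fingerprint so a generic reader looks up the schema before interpreting any bytes. The registry and the `decodeAny` helper below are hypothetical stand-ins, not real Avro APIs.

```typescript
// Sketch: tag each compact payload with a schema id so decoding is never a
// blind "cast". All names here are illustrative, not a real library.

type SchemaId = string; // e.g. a fingerprint/hash of the writer's schema

interface TaggedPayload {
  schemaId: SchemaId;
  body: Uint8Array; // compact binary, meaningless without the schema
}

// A generic reader keeps a registry mapping schema ids to decoders.
const schemaRegistry = new Map<SchemaId, (body: Uint8Array) => unknown>();

function decodeAny(payload: TaggedPayload): unknown {
  const decoder = schemaRegistry.get(payload.schemaId);
  if (!decoder) {
    // Unlike interpreting bare protobuf bytes, a mismatch fails loudly.
    throw new Error(`unknown schema ${payload.schemaId}`);
  }
  return decoder(payload.body);
}

// Usage: register a schema, then any reader can decode tagged payloads.
schemaRegistry.set("user-v1", (body) => JSON.parse(new TextDecoder().decode(body)));
const payload: TaggedPayload = {
  schemaId: "user-v1",
  body: new TextEncoder().encode('{"id":1}'),
};
console.log(decodeAny(payload)); // { id: 1 }
```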
Maybe I'm missing some additional features but that's exactly what https://buf.build/plugins/typescript does for Protobuf already, with the advantage that you can just keep Protobuf and all the battle hardened tooling that comes with it.
GH-style imports. This is a big one I wish proto had in the first place. The entire idea of a proto registry feels reactive to me when, ideally, you want to pull in a versioned shared file to import that is verified by the compiler long before the server or client verifies the payload schema.
Schema validation and compatibility checks on CI. Again a big one and critical to catch issues early.
Enums done right... No further comment required.
I think with more attention to detail (e.g. hammering out the gaps other comments have identified) and broader language support (e.g. Rust, Go, C#), this can actually work out over time.
Here is an idea to contemplate as a side gig with your favorite AI assistant: a tool to convert proto to Skir, or at least as much of it as possible. As someone who has had to maintain large, complex proto files, I can say a lot of proto-specific pain points are addressed here.
The only concern I have is timing. Ten years ago this would have been a smash hit. These days we have Thrift and similar alternatives, meaning the bar is definitely higher. That's not necessarily bad, but one needs to be mindful about differentiation from the existing proto alternatives.
I hope this project gains traction and a community, especially among frustrated proto folks.
In the "dense JSON" format, isn't representing removed/absent struct fields with `0` and not `null` backwards incompatible?
If you remove or are unaware of an `int32?` field, old consumers will suddenly think the value is present as a "default" value rather than absent.
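The concern above can be made concrete with a toy decoder (the `DenseRow` shape and `oldDecode` are hypothetical, assuming a positional dense-JSON layout): if a writer pads a removed `int32?` slot with `0` instead of `null`, an old reader sees a present zero rather than an absent value.

```typescript
// Toy sketch of the backward-compat hazard: absent vs. present-as-zero.

type DenseRow = (number | string | null)[];

// Old schema: [id, score?] where score is `int32?` (nullable).
function oldDecode(row: DenseRow): { id: number; score: number | null } {
  return { id: row[0] as number, score: row[1] as number | null };
}

// A new writer that removed `score` but pads the slot with 0:
const paddedWithZero: DenseRow = [7, 0];
// The old consumer now sees a *present* score of 0, not an absent value:
console.log(oldDecode(paddedWithZero).score); // 0

// Padding with null instead keeps the old reader's semantics intact:
const paddedWithNull: DenseRow = [7, null];
console.log(oldDecode(paddedWithNull).score); // null
```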
Protobuf already solved serialization with backward/forward schema-evolution compatibility.
Skir seems to have great devex for the codegen part, but that's the least interesting aspect of protobufs. I don't see how the serialization proposed here fixes evolution without an equivalent of numerical tagging.
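For readers unfamiliar with the point about numerical tagging: protobuf readers key fields by number rather than by name or position, so unknown fields can be skipped and names can change freely. The toy wire format below (`[tag, value]` pairs) is an illustrative stand-in, not real protobuf encoding.

```typescript
// Toy sketch of tag-based decoding: a v1 reader safely handles a v2 message
// because it matches on field numbers and skips tags it doesn't know.

type Wire = [number, unknown][];

function decodeKnown(wire: Wire, known: Map<number, string>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [tag, value] of wire) {
    const name = known.get(tag);
    if (name !== undefined) out[name] = value; // unknown tags are skipped
  }
  return out;
}

// A v2 writer added field 3; the v1 reader only knows tags 1 and 2:
const v2Message: Wire = [[1, 42], [2, "alice"], [3, true]];
const v1Fields = new Map([[1, "id"], [2, "name"]]);
console.log(decodeKnown(v2Message, v1Fields)); // { id: 42, name: "alice" }
```

A positional format without stable tags has no equivalent escape hatch: every reader must agree on slot order forever.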
Unfortunately, I really like postfix types, but the IDL itself doesn't support them.
I like constants, great addition.
Things that I'll miss:
1. Oneof fields. There are enums, but it looks like it's not possible to have ad-hoc oneofs?
2. Streaming requests/responses.
3. Introspection and annotations.
4. Go bindings.
The best thing Skir does is strict generated constructors. You add a field, every construction site lights up. Protobuf's "silently default everything" model has caused mass production incidents at real companies. This is a legitimately better default.
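A rough sketch of why the strict-constructor default matters, using hypothetical hand-written code rather than Skir's actual generated output: when every field is a required constructor argument, adding one turns each construction site into a type error instead of a silently defaulted value.

```typescript
// Illustrative only: not Skir's real generated code.

interface UserFields {
  id: number;
  name: string;
  // Adding `email: string` here would make every `new User({...})` call
  // below fail to type-check until the caller supplies it -- each
  // construction site "lights up" at compile time.
}

class User {
  readonly id: number;
  readonly name: string;
  constructor(fields: UserFields) {
    this.id = fields.id;
    this.name = fields.name;
  }
}

// A "silently default everything" model would instead accept an empty
// object and hand back id = 0, name = "" with no signal to the caller.
const u = new User({ id: 1, name: "ada" });
console.log(u.name); // "ada"
```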
Dense JSON is interesting but the docs gloss over the tradeoff: your serialized data is `[3, 4, "P"]`. If you ever lose your schema, or a human needs to read a payload in a log, you're staring at unlabeled arrays. Protobuf binary has the same problem, but nobody markets binary as "easy to inspect with standard tools."

The "serialize now, deserialize in 100 years" claim has a real asterisk. Compatibility checking requires you to opt into stable record IDs and maintain snapshots. If you skip that (and the docs' own examples often do), the CLI literally warns you: "breaking changes cannot be detected." So it's less "built-in safety" and more "safety available if you follow the discipline." Which is... also what Protobuf offers.
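The readability tradeoff is easy to see side by side (the record shape here is made up for illustration):

```typescript
// The same record as dense (positional) JSON vs. labeled JSON.
const user = { id: 3, role: 4, initial: "P" };

const dense = JSON.stringify([user.id, user.role, user.initial]);
const labeled = JSON.stringify(user);

console.log(dense);   // [3,4,"P"]  -- no way to recover field names without the schema
console.log(labeled); // {"id":3,"role":4,"initial":"P"}
```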
The Rust-style enum unification is genuinely cleaner than Protobuf's enum/oneof split. No notes there, that's just better language design.
Minor thing that bothered me disproportionately: the constant syntax in the docs (`x = 600`) doesn't match what the parser actually accepts (`x: 600`).
The weirdest thing, the one that bugged the heck out of me, was the tagline, "like protos but better". That's doing the project no favors.
I think this would land better if it were positioned as "Protobuf, but fresh" rather than "Protobuf, but better." The interesting conversation is which opinions are right, not whether one tool is universally superior.
Quite frankly, I don't use protobuf because it seems like an unapproachable monolith, and I'm not at FAANG anymore, just a solo dev. No one's gonna complain if I don't. But I do love the idea of something simpler that's easy to wrap my mind around.
That's why "but fresh" hits nice to me, and I have a feeling it might be more appealing than you'd think. For example, it's hard to believe a two-month-old project is strictly better than whatever mess and history Protobuf has gone through, with tons of engineers paid to use and work on it. It is easy to believe it covers 99% of what Protobuf does already, and that any crazy edge cases that pop up (they always do, eventually :) will be easy to understand and fix.
Why build another language instead of extending an existing one?