As other commenters have pointed out, this is "just" a signature. However, in the absence of standardised checks, it's a useful intermediate way of addressing the integrity problem in the ML supply chain; FWIW, it's something you can do today.
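For concreteness, here's a minimal sketch of what "just a signature" over the weights buys you, using Python's cryptography package and a SHA-256 digest of the artifact. The file name, the Ed25519 key handling, and the digest scheme are my own assumptions for illustration, not necessarily how the tool being discussed does it:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def file_digest(path: str) -> bytes:
        # Hash the serialized model weights (e.g. a .safetensors or .pt file).
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.digest()

    # Publisher side: sign the digest of the model artifact.
    private_key = ed25519.Ed25519PrivateKey.generate()
    digest = file_digest("model.safetensors")  # hypothetical file name
    signature = private_key.sign(digest)

    # Consumer side: verify the digest against the publisher's public key.
    # Raises InvalidSignature if the weights were modified in transit.
    private_key.public_key().verify(signature, digest)

This tells you the bytes you downloaded are the bytes the publisher signed, and nothing more; it says nothing about how the model was trained or on what data.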
Eventually you want to move to more complete solutions with more elaborate checks, e.g. provenance of the data that went into the model, or attested training. C2PA is trying to cover this.
Inference-time attestation (which some other commenters are pointing out) -- how can I verify that the response Y actually came from model F run on my data X, i.e. Y = F(X)? -- is a closely related but separate problem.
We need remote models hosted in enclaves, with remote attestation and end-to-end cryptography for inference. Then you can prove client-side that an output from a model was produced privately and delivered directly, without tampering by advertisers, censors, or propagandists.
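A rough sketch of what the client-side check could look like, assuming the enclave signs a statement binding the input, the output, and the loaded model's digest. The function name, the JSON field names, and the Ed25519 key are all hypothetical, and the hard part -- verifying that the public key actually belongs to an attested enclave via a hardware quote (SGX/SEV/Nitro) -- is left out:

    import hashlib
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def sha256_hex(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def verify_inference(x: bytes, y: bytes, model_digest: str,
                         statement: bytes, signature: bytes,
                         enclave_pubkey: ed25519.Ed25519PublicKey) -> bool:
        # The enclave is assumed to sign a JSON statement binding the input,
        # the output, and the digest of the model it loaded. In a real
        # deployment the public key would be extracted from a verified
        # hardware attestation quote, not supplied out of band as here.
        claim = json.loads(statement)
        if (claim.get("input_sha256") != sha256_hex(x)
                or claim.get("output_sha256") != sha256_hex(y)
                or claim.get("model_sha256") != model_digest):
            return False
        try:
            enclave_pubkey.verify(signature, statement)
            return True
        except InvalidSignature:
            return False

If that check passes, the client knows Y was computed over exactly its X by the specific signed model, closing the loop between the supply-chain signature and the inference-time question above.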
Also looking forward to reading through the SLSA for ML PoC and seeing how it evolves. I was planning to use Witness for model training, but I wasn't sure how it would work for such a long, resource-intensive process.