- Hey Eric, great to see you've now published this! I know we chatted about this briefly last year, but it would be awesome to see how the performance of jax-js compares against that of other autodiff tools on a broader and more standard set of benchmarks: https://github.com/gradbench/gradbench
by fouronnes3
- Congrats on the launch! This is a very exciting project, because the only decent autodiff implementation in TypeScript was TensorFlow.js, which has been completely abandoned by Google. Everyone uses ONNX Runtime Web for inference, but actually computing gradients in TypeScript has been surprisingly absent from the ecosystem since tfjs died.
I will be following this project closely! Best of luck, Eric! Do you have plans to keep working on it for some time? Is it a side project, or will you be able to commit to jax-js longer term?
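For context on the gap being described: this is roughly what reverse-mode gradient computation looked like in TensorFlow.js (a minimal sketch using `tf.grad`, which tfjs does provide; it is not jax-js code).

```ts
// Minimal tfjs autodiff sketch: gradient of f(x) = x^2.
import * as tf from '@tensorflow/tfjs';

const f = (x: tf.Tensor) => x.square();
const df = tf.grad(f); // builds df(x) = 2x via reverse-mode autodiff

df(tf.tensor1d([2, 3, 4])).print(); // prints [4, 6, 8]
```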
- This is really great. I don't do ML stuff, but I have some mathy things that would benefit from running on the GPU, so it's great to see the web getting this.
I hope this will help grow the js science community.
by yuppiemephisto
- This project is an inspiration. I've been working on porting tinygrad to [Lean](github.com/alok/tinygrad).
- I have a project using tfjs, and jax-js is a very exciting alternative. However, during porting I struggled a lot with the `.ref` and `.dispose()` API. Coming from tfjs, where you garbage collect with `tf.tidy(() => { ... })`, the API in jax-js seems very low-level and error-prone. Is that something that can be improved, or is it inherent to how jax-js works?
Would `using`[0] help here?
[0]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
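On the `using` question: TypeScript 5.2's explicit resource management can indeed be layered on top of a `.dispose()`-style API. The wrapper below is only a sketch of the general pattern under that assumption (the `scoped` helper and `someOp` placeholder are not part of jax-js), and it requires a runtime with `Symbol.dispose` or a polyfill.

```ts
// Generic sketch: adapt any object with .dispose() to `using` declarations.
interface HasDispose {
  dispose(): void;
}

/** Wrap a dispose()-style resource so `using` frees it at scope exit. */
function scoped<T extends HasDispose>(resource: T): T & Disposable {
  return Object.assign(resource, {
    [Symbol.dispose]() {
      resource.dispose();
    },
  });
}

declare function someOp(): HasDispose; // placeholder for an op that allocates

function step(): void {
  using intermediate = scoped(someOp());
  // ... use `intermediate` here ...
} // intermediate.dispose() runs automatically here, even if an exception was thrown
```

Conceptually this recovers something like `tf.tidy`, with the scope expressed by a block instead of a callback.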
- What is the state of web ML? Anybody doing cool things already? How about https://www.w3.org/TR/webnn/ ?
by sbondaryev
- The examples are great. It would be really nice to have a sandbox with the full training code (e.g. MNIST) to play with.
- Could not run the demos on Firefox. On Chromium, the Great Expectations demo loads but then nothing happens.
by forgotpwd16
- Very nice work. I like how it supports WebGPU but also CPU/WASM/WebGL. Would love to read more about the internals and the design choices made, e.g. ref counting, in the README.
P.S. And thanks for taking the time to work on this and releasing something polished, rather than Claude slop made within a few days, as seems to be the norm now.
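On the ref-counting point: purely as an illustration of the general pattern (not taken from the jax-js source; all names here are placeholders), hand-rolled reference counting over a backend buffer tends to look something like this.

```ts
// Illustrative only: a hand-rolled ref-count wrapper around a backend buffer.
interface BackendBuffer {
  free(): void; // releases the underlying GPU/WASM allocation
}

class RefCounted {
  private count = 1; // the creator is the first owner

  constructor(private readonly buffer: BackendBuffer) {}

  /** Register an additional owner; pairs with a later dispose(). */
  ref(): this {
    if (this.count <= 0) throw new Error('use after free');
    this.count++;
    return this;
  }

  /** Drop one owner; the buffer is freed when the last owner disposes. */
  dispose(): void {
    if (this.count <= 0) throw new Error('double dispose');
    if (--this.count === 0) this.buffer.free();
  }
}
```

The design questions a README section could answer are where implicit `ref()` calls happen (e.g. when an op retains its inputs) and how use-after-free errors are surfaced.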