curl -X POST "XXX/infer" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{}'
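Since the `-d '{}'` body is empty, the request presumably needs the model's input features filled in. A minimal sketch of building and sanity-checking a JSON body before passing it to the curl command above; note that `feature_1` and `feature_2` are made-up placeholder names, not the model's real schema:

```shell
# Build a request body for the /infer endpoint.
# NOTE: feature_1 and feature_2 are placeholder names, not the real schema;
# substitute your model's actual input columns.
PAYLOAD='{"feature_1": 3.14, "feature_2": "some text"}'

# Sanity-check that the body is valid JSON before sending it,
# e.g. with: curl ... -d "$PAYLOAD"
echo "$PAYLOAD" | python3 -m json.tool
```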
How do I know what the inputs/outputs are for one of my models? I see I could have set the response variable manually before training, but I was hoping the auto-infer would work.
Separately, it would be ideal if, when I ask for a model you can't train (I asked for an embedding model as a test), the platform told me it couldn't do that instead of making me choose a dataset that has nothing to do with what I asked for.
All in all, super cool space, I can't wait to see more!
I'm a former YC founder turned investor living in Dogpatch. I'd love to chat more if you're down!
A few questions:
1. Can it work with tabular data, images, text, and audio?
2. Is the data preprocessing code deployed with the model?
3. Have you tested use cases where an ML model wasn't needed? For example, a simple average might suffice; I'm curious whether the agent can propose not to use ML in such a case.
4. Do you have an agent for model interpretation?
5. Are you using a generic LLM, or your own LLM tuned on ML tasks?
Sounds very practical for real-world use cases. I trained an ML model a couple of months ago; I think it's a good case for testing this product.
It would be more useful if the export had an option (or defaulted) to include everything from the session.