Anybody can feed a PCB description/schematic into an LLM with a prompt asking it to generate an analysis, and it will diligently produce a document that perceptually resembles an analysis of that PCB. It will do so approximately 100% of the time.
But making an LLM actually deliver a sound, useful, accurate analysis would be quite an accomplishment! Is that really what you've done? How did you know you got it right? How right did you get it?
To sell an analysis tool, I'd expect to see some kind of comparison against other tooling and techniques. General success rate? False negative rate? False positive rate? How does it do on simple schematics vs large ones? Which ICs and components will it recognize, and which will it fail to recognize? Does it throw an error if it encounters something it doesn't recognize? When? Do you have testimonials? Examples?
I built this because I was tired of shipping boards with avoidable mistakes — hopefully it saves you from a re-spin too!
I know a brilliant PCB engineer whose first major multimillion-dollar R&D corporate design (decades ago) resulted in production of a modular product which couldn't physically plug into the rest of the system (because of the above issues)... I'll send him this link to see if he'll give you feedback, but that's going to be how he'd initially test your AI system (he considers it a humbling lifetime blunder).
Without any PCB design experience, my presumption is that OP's "AI product" is more of a "fundamentals of circuit board design"[0] checker and not an all-encompassing "how did no human ever catch such a simple multi-dimensional clash"[1] one.
[0] isolated voltage areas; trace attenuation avoidance; signal protection
[1] the darn thing won't even plug in, because the plug is pinned out backwards
The real question is whether this has enough value to justify the pricing model [1] - I think so for a company, but it would be difficult to justify for a hobbyist. One thing that should be defined is what "usage limit" actually means.
> Of course, Jack. I can understand the schematic from the provided JSON file. It describes an RS485 to TTL Converter Module.

> Here is a detailed breakdown of the circuit's design and functionality
...followed by an absolutely reasonable description of the whole board. It was imprecise, but with some guidance (and by putting together my basic skills with Gemini's vast but unreliable knowledge) I was able to figure out a few things I needed to know about the board. Quite impressive.
I'm always looking for workflow and automation improvements, and the new wave of tooling has been useful for datasheet extraction/OCR, rubber-ducking calculations, or custom one-off scripts which interact with KiCAD's S-expression file formats. However, I've seen minimal improvement across my private suite of electronics reasoning/design tests since GPT-4, so I'm very skeptical of review tooling actually achieving anything useful.
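For what it's worth, those one-off scripts don't need much scaffolding: KiCAD's files are plain S-expressions, and a small hand-rolled parser gets you a walkable tree. A minimal sketch (the netlist fragment below is made up, not a real export):

```python
import re

def parse_sexpr(text):
    """Parse a KiCad-style S-expression into nested Python lists."""
    tokens = re.findall(r'\(|\)|"[^"]*"|[^\s()]+', text)
    def walk(it):
        out = []
        for tok in it:
            if tok == '(':
                out.append(walk(it))   # descend into a sub-expression
            elif tok == ')':
                return out             # close the current sub-expression
            else:
                out.append(tok.strip('"'))
        return out
    return walk(iter(tokens))[0]

def refs(node):
    """Collect every (ref "...") value anywhere in the tree."""
    found = []
    if isinstance(node, list):
        if node and node[0] == 'ref':
            found.append(node[1])
        for child in node:
            found.extend(refs(child))
    return found

# Made-up fragment in the general shape of a KiCad netlist export
netlist = '(export (components (comp (ref "R1")) (comp (ref "R2"))))'
tree = parse_sexpr(netlist)
designators = refs(tree)   # ['R1', 'R2']
```

Nothing clever, but it's enough to pull designators, values, and net membership into normal Python data for exactly the kind of one-off automation mentioned above.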
I tested with a prior version of a power board that had a few simple issues found and fixed during bring-up. I uploaded the KiCAD netlist, PDFs for the main ICs, and my internal design validation document, which _includes the answers to the problems I'm testing against_. There were three areas where I'd expect easy identification and modelling:
- Resistor values for a non-inverting amplifier's gain were swapped leading to incorrect gain.
- A voltage divider supplying a status/enable pin was drawing somewhat more current than it needed to.
- The power rating of a current-sense shunt is marginal for some design conditions.
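For context, all three are one-line calculations, which is why I'd expect easy identification. A sketch with hypothetical values (not the actual board's):

```python
# Hypothetical values throughout; none of these are from the board under test.

def noninverting_gain(r_feedback, r_ground):
    """Ideal non-inverting amplifier gain: G = 1 + Rf/Rg."""
    return 1 + r_feedback / r_ground

def divider_current(v_in, r_top, r_bottom):
    """Quiescent current drawn by a resistive divider: I = Vin / (Rtop + Rbottom)."""
    return v_in / (r_top + r_bottom)

def shunt_power(i_max, r_shunt):
    """Worst-case dissipation in a current-sense shunt: P = I^2 * R."""
    return i_max ** 2 * r_shunt

# 1) Swapped gain resistors: 10k/1k intended vs. the parts fitted backwards
intended = noninverting_gain(10_000, 1_000)   # 11.0
swapped  = noninverting_gain(1_000, 10_000)   # 1.1

# 2) A 10k/10k divider from 12 V burns 0.6 mA continuously;
#    scaling both resistors up reads the same voltage at lower current
i_div = divider_current(12, 10_000, 10_000)   # 0.0006 A

# 3) A 0.01-ohm shunt at 10 A dissipates 1 W: marginal for a 1 W-rated part
p_shunt = shunt_power(10, 0.01)               # 1.0 W
```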
For the first test, the prompt was an intentionally naive "Please validate enable turn-on voltage conditions across the power input paths". The reasoning steps appeared to search datasheets, but at what I'd have considered the 'design review' step something got stuck/hung, with no results after 10 minutes. A second user input to get it to continue did produce an output. My comments:
- Just this single test consumed 100% of the chat's 330k token limit and 85% of free-tier capacity, so I can't re-evaluate the capability with a more reasonable/detailed prompt, or even give it the solution.
- A mid-step section calculates the UV/OV behaviour of an input protection device correctly, but mis-states the range in the summary.
- There were several structural errors in the analysis, including assuming that the external power supply and lithium battery share the same input path, even though the netlist and components obviously place the battery 'inside' the power management circuit. As a result, most downstream analysis is completely invalid.
- The inline footnotes for datasheets render as `4 [blocked]`, which is a bare-minimum UI bug that you must have known about?
- The problem and solution were in the context and weren't found/used.
- Summary was sycophantic and incorrect.
You're leaving a huge amount of useful context on the table by relying on netlist upload. The hierarchy in the schematic, comments/tables, and inlined images are lost. A large chunk of the useful information in datasheets is graphs/diagrams/equations which aren't ingested as text. Netlists don't include the comments describing the expected input voltage range on a net, an output load's behaviour, or why a particular switching frequency was chosen, for example.

In contrast, the GPT-5.1 API with a single relevant screenshot of the schematic, zero developer prompt, and the same starting user message:
- Worked through each leg of the design and compared its output to my annotated comments (and was correct).
- Added commentary about possible leakage through a TVS diode, calculated time-constants, part tolerance, and pin loadings which are the kinds of details that can get missed outside of exhaustive review.
- Hallucinated a capacitor that doesn't exist in the design, likely due to OCR error. Including the raw netlist and an unrelated in-context learning example in the dev-message resolved that issue.
So from my perspective, the following would need to happen before I'd consider a tool like this:
- Walk back your data collection terms; I don't feel they're viable for any commercial use in this space without changes.
- An explicit listing of the downstream model provider(s) and any relevant terms that flow to my data.
- I understand the technical side of "Some metadata or backup copies may persist for a limited period for security, audit, and operational continuity" but I want a specific timeline and what that metadata is. Do better and provide examples.
- I'm not going to get into the strategy side of 'paying for tokens', but your usage limits are too vague to know what I'm getting. If I'm paying for your value-add, let me bring my own API key (especially if you're not using frontier models).
- My netlist includes PDF datasheet links for every part. You should be able to fetch datasheets as needed without upload.
- Literally 5 minutes of thinking about how this tool is useful for fault-finding or review would have led you to a bare-minimum set of checklist items that I could choose to run on a design automatically.
- Going further, a chat UX is horrible for this review use case. Condensing it into a high-level review of requirements and goals, with a list of review tasks per page/sub-circuit, would make more sense. From there, calculations and notes for each item can be grouped instead of spread randomly through the output summary. The output should be more like an annotated PDF.

So, just a typical HN comment?
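For the automatic-checklist point above, even a trivial structure would do; a minimal sketch, where the `Check` type and the example rule are entirely hypothetical (not anything the tool actually implements):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    description: str
    run: Callable[[dict], list]   # design data -> list of finding strings

# Hypothetical rule: flag parts whose enable pin was left floating.
# The design dict shape here is invented for illustration.
def floating_enable_pins(design):
    return [f"{ref}: EN pin floating"
            for ref, part in design.get("parts", {}).items()
            if part.get("en_pin") == "floating"]

CHECKLIST = [
    Check("enable-pins", "Flag floating enable pins", floating_enable_pins),
]

def run_checklist(design, checks=CHECKLIST):
    """Run every selected check and group findings by check name."""
    return {c.name: c.run(design) for c in checks}

design = {"parts": {"U1": {"en_pin": "floating"},
                    "U2": {"en_pin": "tied_high"}}}
report = run_checklist(design)   # {'enable-pins': ['U1: EN pin floating']}
```

The point is the shape, not the rule: a user-selectable list of deterministic checks with grouped output is closer to the per-sub-circuit review described above than a chat transcript.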