Looks like every single one of the 38 vulnerabilities was either SQL injection, XSS, path traversal, or "Insecure Direct Object Reference", aka failing to check that the caller was allowed to access the record.
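Since two of those bug classes come up repeatedly in the report, here's a minimal, self-contained sketch (plain Python/sqlite, not OpenEMR code; the table and function names are made up for illustration) of what SQL injection and a missing IDOR ownership check look like side by side:

```python
# Sketch of two of the bug classes mentioned above: SQL injection
# and an Insecure Direct Object Reference (missing ownership check).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, owner TEXT, body TEXT)")
conn.execute("INSERT INTO records VALUES (1, 'alice', 'alice data'), (2, 'bob', 'bob data')")

def fetch_record_vulnerable(record_id: str):
    # SQL injection: attacker-controlled input is spliced into the query,
    # so passing "1 OR 1=1" returns every row in the table.
    return conn.execute(f"SELECT body FROM records WHERE id = {record_id}").fetchall()

def fetch_record_safe(record_id: int, caller: str):
    # Parameterized query plus an ownership check (the guard whose absence
    # is the IDOR bug): callers only see rows they own.
    return conn.execute(
        "SELECT body FROM records WHERE id = ? AND owner = ?", (record_id, caller)
    ).fetchall()

print(fetch_record_vulnerable("1 OR 1=1"))  # leaks both rows
print(fetch_record_safe(2, "alice"))        # [] -- alice can't read bob's record
```

Both fixes are mechanical once spotted, which is exactly why they're good targets for automated scanning.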
This is actually a pretty good example of the value of AI security scanners: even really strong development teams still occasionally let bugs like this slip through, so having an AI scanner that can spot them feels worthwhile to me.
People who think this isn't the case everywhere need a reality check. Most software is riddled with obvious security issues. If we can remediate them with AI, great, but don't think this is something we could only have dealt with using AI. Enough attention and prioritization of these issues would also have sorted it.
Ask yourself: if we weren't currently in an era of AI focus and AI were just another boring tool, would we be bothering to do this sort of thing? Loads of us still aren't bothering with basic static analysis.
Back in 2010, as a security engineer, I also looked at OpenEMR. It was an absolute disaster, and was (and is) somewhat well-known as such. I found and published vulnerabilities very similar to these sixteen years ago. This is not exactly the Fort Knox of software.
It makes sense for AISLE to demonstrate that they're able to find vulnerabilities here, but I'd love to see a side-by-side comparison of modern SAST and DAST reviews. I bet we'd find similar vulnerabilities.
>I was the main contributor and maintainer of OpenEMR about ~20 years ago and then decided it was irredeemable and started over with ClearHealth/HealthCloud. Shockingly, some of my code lives on (from PHP 3). I am reluctant to say don't use it, but if you do, please don't expose it to anything public, which sadly happens most of the time. There are some real problems that exist in that code base from a security and HIPAA perspective.
Finding SQL injections etc. is definitely valuable, but at the same time they did not hack Epic; the "100000 medical providers" number links to https://www.hhs.gov/sites/default/files/open-emr-sector-aler... which links to open-emr.org/blog/openemr-is-proud-to-announce-seamless-support-for-telehealth/ which... 404s. Per archive.org, the source is something the CEO of the now-defunct lifemesh.ai said.
"medical record software" makes it sound super serious, but again OpenEMR should not be taken as seriously as for instance Epic.
Interesting... I have been working with many different EHR platforms across the country for the last 15 years and I have never heard of OpenEMR before, or any open-source platform for that matter.
I wrote an OSS PHP SAST tool 6 years ago, but it's suffered from industry neglect — most people only care about security after an incident, and PHP has enough magical behaviour that any tool needs to be tuned to how specific repositories behave.
I agree there's a big opportunity for LLMs to take this work forward, filling in for a lack of human expertise.
If you are sufficiently funded, you could benefit from the flip side of discovery, but it looks bleak if you are the sole maintainer of a large project that is a dependency in many deployed instances, without any revenue or donations. Nobody is digging deep enough to care, or to spend inference on it (would your company spend the money on extra inference? more often than not, no). With that true on both sides of the fence, we are going to see massive disruptions across the board.
Cybersecurity is becoming a proof-of-work of sorts, and the race is on. There may be an unknown number of zero-days being silently discovered and deployed, which likely has an impact on the economics too, making access to exploits far more widespread.
I do wonder if this means our tech stacks will go back to being as boring and simple as possible... you wouldn't hack a static HTML website being served by nginx, would you?
Automation doesn't usually replace humans; it just raises the floor.
I.e. nearly all of these bugs (most bugs in general?) would be spotted quickly by a trained eye. But it's hard to get trained eyes on code all the time. AI will catch all the low-hanging fruit.
What's great is that this seems mostly low-hanging fruit, i.e. even basic AI will help people patch holes.
I say this purely as a software engineer, not a security expert, but you have to consider that hackers can use AI against you, are using it, and will keep using it.
The Mexican government was hacked by people using Claude[0]; this apparently hit many government systems and services, holding PII for everyone in the country. Even if Claude somehow "patches" this, there are so many open-source models out there, and they get better every day. I've seen people fully reverse engineer programs: disassembling the original binary and having Claude happily churn until the code is fully translated back into its original programming language, compiles, and runs.
Whatever your thoughts on AI are, if you aren't at least considering it for security auditing (or to enhance security auditing), you are asleep at the wheel, just waiting to be hacked by some teenage skiddie with AI.
Spotted over 100 "security issues", but after whittling them down via reproduction scripts and validating they were real CVEs, that number was around 30.
Even so, it was a huge win and something we wouldn't have spotted ourselves.
It’s something I’ve now codified into repowarden.dev
Here, something that looks like the thing is a strong signal, as long as the probability is high enough to be useful.
Remember Netflix's Chaos Monkey?
Was this autonomous, as in "look at this repo and find me all the CVEs that could exist"?
Or was it much more guided?
if, during an automated code review, claude finds a vulnerability in a dependency, where should i direct it to share the findings?
who would be willing to take the slop-report, and validate it?
i've never done vulnerability disclosure, yet, with opus at max effort, i have found some security issues in popular frameworks/libraries i depend on.
a proper report can't be one pass, it has to validate it's a real problem, but ask opus to do that and you run the risk of the api refusing the request, endangering your account status. you ask it to do it anyway, it writes the report, and now you're burning tokens on a report that's likely to be ignored, because slop.
so i sit on this, and hope it doesn't hit me.
...so far !
===
Did they privately disclose these vulnerabilities to the developers and give them a reasonable amount of time to fix them, before they announced them to the world?
Because, and I'm going to highlight this: if someone exploits a CVE in an EMR, they can wreak havoc on actual real patient data, and can endanger health and lives.
https://github.com/openemr/openemr/security
"Option 1 (preferred) : Report the vulnerability at this link. See Privately reporting a security vulnerability for instruction on doing this."
Did they do that?
Because if they didn't responsibly disclose, this sure seems like a hit job performed by someone who'd rather EMR software be closed source.