There already exist fantastic open source tools (Yosys, Nextpnr, iverilog, OpenFPGALoader, and more) that together implement most features a typical hardware dev would want. But chip support is unfortunately limited, so few people end up using these tools.
To combat this problem, we decided to build a VSCode extension that wraps these open source tools (https://edacation.github.io for the interested). Students are already using it during the course and are generally very positive about the experience. It's by no means a full IDE, but if you're just getting started with HDL it's a great way to get familiar with the flow. Instead of a mess of a toolchain that nobody truly knows how to use, you get a few buttons to visualize a design and (soon) program it onto an FPGA.
There's also Lushay Code for the slightly more advanced users. But we need more of these initiatives to really get the ball rolling and make an impact, so I'd highly recommend that people check out and contribute to projects like this.
As soon as they reach the point where it's as easy to put down an FPGA as it is an old STM32 or whatever, they'll get a lot more interesting.
VHDL and Verilog are used because they are excellent languages to describe hardware. The tools don't really hold anyone back. Lack of training or understanding might.
For many years, the consistent issue with FPGA development was that by the time you could get your hands on the latest devices, general purpose CPUs were good enough. The reality is that if you are going to build a custom piece of hardware, you are going to have to write the drivers and code yourself. It's achievable, but it requires more skill than pure software programming.
Again, thanks to low-power and low-cost ARM processors, a class of problems previously handled by FPGAs has been picked up by cheap but fast processors.
The reality is that for major markets custom hardware tends to win, as you can make it smaller, faster and cheaper. The probability is that someone will have built and tested it on an FPGA first.
"engineers are stuck using outdated languages inside proprietary IDEs that feel like time capsules from another century.". The article misses that Vivado was developed in the 2010's and released around 2013. It's a huge step-up from ISE if you know how to drive it properly and THIS is the main point that the original author misses. You need to have a different mindset when writing hardware and it's not easy to find training that shows how to do it right.
If you venture into the world of digital logic design without a guide or mentor, then you're going to encounter all the pitfalls and get frustrated.
My daily Vivado experience involves typing "make", then waiting for the result and analysing from there (if necessary). It takes experience to set up a hardware project like this, but once you get there it's compatible with standard version control, CI tools, regression tests and the other nice things you expect from a modern development environment.
Xilinx, Altera, and Lattice are culturally incapable of doing this. For Lattice especially it seems like a no-brainer, but they still don't understand the appeal of open source.
Gowin and Efinix, like Lattice, have some very interesting new FPGAs that they've innovated hard on, but availability is still only so-so.
Particularly with AI about, open source stacks really should be a major door-opener. There could be an OpenROAD-style moment for FPGAs!
My feeling is that hardware companies do better when they ship the software needed to utilize their hardware for free. (You need a little margin in the hardware price to cover the software development.) However, the FPGA companies haven't figured this out. They try to make way too much software and charge exorbitant fees for it, somehow thinking that their hardware is useless without it. In fact, their hardware is useless because I can't put anything on it without a 1-to-20 hour compile time. That makes it impossible to use as an accelerator. I can compile OpenCL for my GPU in a few milliseconds; that's what we need for the FPGA. Even thirty seconds would be easily tolerable -- there's many a game that still requires 15 seconds to load a level and compile its shaders.
FPGAs could be much more useful than they are at present. They've artificially limited themselves to ASIC prototyping alone.
So Intel bought an FPGA company -- nobody knows why. AMD got scared and did the same thing, with no clue what to do with it. They've both let them rot. Intel did start incorporating it into its compiler targets, but it was only half-baked. Now they've wisely divested themselves of the company, but it should never have happened. They should have just focused on selling the hardware at a small margin whilst opening up the data needed to use it.
FPGAs do need a new future. They need a future where someone tapes out an FPGA! Xilinx produced UltraScale+ over a decade ago and haven't done anything interesting since. Their Versal devices went off on a tangent into SoCs, NoCs, AI engines - you know what they didn't do? Build a decent FPGA.
Altera did something ambitious back in 2014 when they proposed the hyper-register design, but they totally failed to execute on it and have been treading water because of the Intel cluster**. They're now an independent company but literally don't have anyone who knows how to tape out a chip.
I'm less familiar with the Lattice stuff, but since their most advanced product is still 16nm FinFET, I suspect they aren't doing anything newer than Xilinx or Altera.
We need a company that builds an FPGA. It doesn't matter what tooling you have, because the fundamental performance gap between a custom FPGA solution and a CPU- or GPU-based solution is entirely eaten up by the fact that the newest FPGA you can buy is a decade old and inexplicably still costs tens of thousands of dollars.
If FPGA technology had progressed at the same rate that Nvidia or Apple have pushed forward CPU/GPU performance, there'd be some amazing opportunities to accelerate AI (on the path to then creating ASICs). But it hasn't, so all the scaling laws have worked against them and the places where they have a performance benefit have shrunk massively.
The real argument for open source toolchains is much narrower in scope, and implying they're required to fix a nonexistent tooling problem is absurd.
Unlikely this will ever happen but one can always dream.
It's a declarative programming system, and there's a massive impedance mismatch when you try to write source code for it in text. I suspect that something closer to flow charts would be much easier to grok. Verilog is about as good a match as you are likely to get, if you stick with the source code approach to designing with them.
Secondly, the integration with consumer devices and OSes is almost non-existent - it should really be simpler to interact with, a la a GPU or network chip, and there should be more mainboards with low-cost integrated FPGAs, even if they only have a couple hundred logic cells.
I've never really thought of any interesting projects to do with it. Anyone know of anything?
I once tried to use Xilinx' Vitis (2025) to make a small-ish piece of software running on such a Zynq chip. After wrestling with it* for like 5 weeks, my colleagues and I decided to ditch the Xilinx suite entirely and just pick a compiler and make a bare-metal binary with it. The FPGA part is done by a separate team of course, so us traditional software devs can stick with decent tools. We actually opted for a Rust toolchain and I'm extremely glad we did, despite the additional time it took.
I don't know how my FPGA colleagues work with the proprietary toolchains and not go insane.
*The IDE is effectively a wrapper with a custom Python API around CMake and GCC. It's not very well-written CMake, and I also don't know how they've configured the linker to make it do the weird things it does.
They see themselves as CAD software companies. The chip is just a copy-protection dongle.
It's the weirdness of FPGAs, though. You aren't really designing a gate-level circuit at the end. I'm not sure Verilog or VHDL are to blame here. Maybe they aren't fit for purpose to begin with. I hate the toolchains too. They got worse (sluggish, more paid IP, etc.) over the last 15 years. IC design tools cost A LOT more (like 2-3 orders of magnitude more) comparatively, but at least they just work!
I disagree with "HDL is software" though. It's not, it's even in the name: "hardware description language". Yes it's a text file with what looks a lot like regular code in it. However what's being decribed is how to connect boxes of logic together, and how to compute the output of the boxes from their inputs. There's no implicit program counter that's advancing from one line to the next.
It is (theoretically) possible to write these kinds of things with a lot of abstraction, but every time you try that by using more advanced language features, you hit some bugs in the tool's implementation of the language. If you're lucky it'll tell you where you're doing unsupported stuff. Often it'll crash. Sometimes it'll synthesize hardware that doesn't conform to the language spec.
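As an illustration of the kind of "more advanced" feature I mean (module name and parameters invented for the example): unpacked array ports and generate loops are legal SystemVerilog per the spec, yet they're the sort of place where synthesis support has historically been uneven from tool to tool.

    // Parameterized OR-reduction using an unpacked array port and a
    // generate loop -- legal per the language spec, but a construct
    // where tool support tends to vary.
    module reduce_or #(
        parameter int N     = 8,
        parameter int WIDTH = 16
    ) (
        input  logic [WIDTH-1:0] din [N],
        output logic [WIDTH-1:0] dout
    );
        logic [WIDTH-1:0] acc [N];

        assign acc[0] = din[0];

        for (genvar i = 1; i < N; i++) begin : g_reduce
            assign acc[i] = acc[i-1] | din[i];
        end

        assign dout = acc[N-1];
    endmodule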
Finally, FPGAs are simple only when you're looking at a bird's eye view (just like CPUs are simple when you're looking at a diagram with a few boxes saying "ALU", "Cache", "Registers"). The actual datasheets are thousands of pages long.
FPGAs are still useful though, their use case is "I need custom hardware and I don't have the volume to build an ASIC". For example, my application is a custom signal processing pipeline that's handling about 3.5 Gbit/s of streaming raw data. On a $40 chip.
I think my main point is that yes, the tooling is a pain to use, with heaps of bugs and bad language support. However, an HDL is conceptually different from a software language, and I'm not sure you can hide away the complexities of designing hardware behind "modern" language features.
For those suggesting diagram-based languages: go program something in LabVIEW and you'll quickly understand why that's a bad idea (it works for trivial designs, but anything complex is an opaque mess of boxes and lines, unsearchable, and impossible to integrate with version control).
Yes, that's certainly a big misconception. Maybe not the one the author meant to call out, but... yes, a big misconception indeed.
No mention of that Brazilian company that was set to manufacture them to undercut the market?
I think that’s the clearest explanation of FPGAs I’ve ever seen.
I agree with another commenter: I think there are parallels to "the bitter lesson" here. There's little reason for specialized solutions when increasingly capable general-purpose platforms are getting faster, cheaper, and more energy efficient with every passing month. Another software engineering analogy is that you almost never need to write in assembly because higher-level languages are pretty amazing. Don't get me wrong, when you need assembly, you need assembly. But I'm not wishing for an assembly programming renaissance, because what would be the point of that?
FPGAs were a niche solution when they first came out, and they're arguably even more niche now. Most people don't need to learn about them and we don't need to make them ubiquitous and cheap.
My Ryzen agrees — the fans just spun up like it’s hitting 10,000 rpm.