First, there is modeling ambiguity: too many ways to represent the same data structure. That means you can't parse into native structs; you parse into a heavy DOM object instead, and it sucks to interact with.
Then, schemas sound great until you run into DTD, XSD, and RELAX NG. RELAX NG only exists because XSD is pretty much incomprehensible.
Then let's talk about entity escaping and CDATA, and how you break entire parsers because CDATA is a separate incantation in the DOM.
And in practice, XML is always over-engineered. It's the AbstractFactoryProxyBuilder of data formats. SOAP and WSDL are great examples of this, versus looking at a JSON response and simply understanding what it is.
I worked with XML and all the tooling around it for a long time. Zero interest in going back. It’s not the angle brackets or the serialization efficiency. It’s all of the above brain damage.
The entire point of heavy-handed XML is to mirror the type system 1:1 across the wire. Once I generate my service references, it is as if the service is on my local machine. The productivity gains from having strongly typed proxies of the remote services are impossible to overstate. I can wire up entirely new operations without looking at the documentation most of the time. IntelliSense surfaces everything I need automatically as I drill into the type system.
JSON can work and provide much of the same, but XML has already proven to work in some of the nastiest environments. It's not the friendliest or most convenient technology, but it is an extremely effective technology. I am very confident that the vendors I work with will continue to use XML/WCF/SOAP into 2030.
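Roughly the same workflow exists outside the .NET stack, too. As a sketch only, here is what it looks like with the Python zeep library; the WSDL URL and the GetOrder operation are placeholders, not a real service:

    from zeep import Client

    # zeep reads the WSDL and exposes each operation as a callable with typed
    # arguments, which is roughly the "generate service references" experience.
    client = Client("https://example.com/orders.svc?wsdl")  # placeholder WSDL URL
    result = client.service.GetOrder(orderId=42)             # hypothetical operation
    print(result)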
These items make XML deeply tedious and annoying to ingest and manipulate. Plus, some major XML libraries, like lxml in Python, are extremely unintuitive in their implementation of DOM structures and manipulation. If ingesting and manipulating your markup language feels like an endless trudge through a fiery wasteland, then don't be surprised when a simpler, more ergonomic alternative wins, even if its feature set is strictly inferior. And that's exactly what happened.
I say this having spent the last 10 years struggling with lxml specifically, and my entire 25-year career dealing with XML in some shape or form. I still routinely throw up my hands in frustration when having to use Python tooling to do what feels like it should be the most basic XML task.
Though XPath is nice.
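To be fair to lxml, XPath is the part that stays pleasant. A tiny illustration (the document here is made up):

    from lxml import etree

    doc = etree.fromstring("<library><book id='1'><title>Dune</title></book></library>")

    # Walking the tree through the DOM-style API is where the friction lives;
    # a single XPath expression gets straight to the point.
    titles = doc.xpath("//book[@id='1']/title/text()")
    print(titles)  # ['Dune']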
XML did some good things for its day, but no, we abandoned it for very good reasons.
> This is not engineering. This is fashion masquerading as technical judgment.
The boring explanation is that AI wrote this. The more interesting theory is that folks are beginning to adopt the writing quirks of AI en masse.
Then, if there were any problems in my XML, I'd be stuck trying to decipher horrible errors to figure out what I did wrong.
The docs sucked and were "enterprise grade", the examples sucked (either too complicated or too simple), and the tooling sucked.
I suspect it would be fine nowadays with LLMs to help, but back when I was dealing with it, XML was a huge hassle.
I once worked on a robotics project where a full 50% of the CPU was used for XML serialization and parsing. That made it hard to actually have the robot do anything. XML is violently wordy, and parsing strings is expensive.
It is a very niche solution, but actually very stable and quite handy for all kinds of data handling, web-based applications, and APIs, as it integrates nicely with all kinds of text-based formats such as JSON, CSV, or XML.
Yet I can easily comprehend how people get lost in all kinds of standards, meta-standards, DTDs, schemas, namespaces, and modeling the whole enterprise in SOAP.
However, you can do simple things simply and keep them small; in my experience, though, most tools promised to solve problems by piling on ever more layers of complexity.
Little disclaimer: I am probably biased, as I am with BaseX, an open-source XQuery processor :-)
The vast, vast majority of devs today have never known anything except JSON for their React frontends, but honestly, if they gave XML a try and weren't working from secondhand horror stories from 20 years ago, I think a lot more people would like it than you expect.
Maybe this is okay if you know your schema beforehand and are willing to write an XSD. My use case relied on not knowing the schema. Despite my excitement to use a SAX-style parser, I tucked my tail between my legs and switched back to JSONL. Was I missing something?
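For what it's worth, schema-free streaming is possible; lxml's iterparse doesn't need an XSD at all. A rough sketch, where the file name and the "record" tag are placeholders:

    from lxml import etree

    # iterparse yields each element as its closing tag is seen, so memory stays
    # bounded even for huge files, and no schema is required.
    for _, elem in etree.iterparse("records.xml", events=("end",), tag="record"):
        row = {child.tag: child.text for child in elem}
        print(row)
        elem.clear()  # drop the subtree we just consumed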
The industry abandoned both in favor of JSON and RPC for speed and perceived DX improvements, and because for a period of time everyone was in fact building only against their own servers.
There are plenty of examples over the last two decades of us having to reinvent solutions to the same problems that REST solved way back then though. MCP is the latest iteration of trying to shoehorn schemas and self-documenting APIs into a sea of JSON RPC.
> the various XML-based "standards" spawned by enterprise committees are monuments to over-engineering. But the core format (elements, attributes, schemas, namespaces) remains sound. We threw out the mechanism along with its abuses.
It's mostly just arguing for using basic XML in place of basic JSON.
I largely agree with that, although I wouldn't consider the schemas part of its core; go read the Schema specifications and tell me when you come out.
But I agree that a good part of XML's downfall was due to its enterprise committees: no iteration, and few incentives to make things lean and their specifications simple; a lot of the companies designing them had an interest in making them hard to implement.
But what I do not miss is the over-engineering that happened in the ecosystem, especially with everything SOAP. Yes, when it worked, it worked. But when it didn’t work, which was often the case when integrating different enterprise systems, then well… lord have mercy on me.
Sometimes I still use XSD to define a schema for clients, because in some areas there's still better tooling for XML. And it gives me the safety of valid input data: if the XML can't be validated, it gets rejected.
And in the enterprise world, XML is far from dead anyway.
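For illustration, that validate-or-reject step is only a few lines with lxml; the file names here are placeholders:

    from lxml import etree

    schema = etree.XMLSchema(etree.parse("order.xsd"))  # placeholder schema file
    doc = etree.parse("incoming_order.xml")             # placeholder input

    if not schema.validate(doc):
        # error_log lists each violation with a line number and message
        raise ValueError(str(schema.error_log))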
1. Following Postel's law, don't reject "invalid" third-party input; instead, standardize how to interpret weird syntax. This is what we did with HTML.
2. Use declarative schema definitions sparingly, only for first-party testing and as reference documentation, never to automatically reject third-party input.
3. Use XML-like syntax (like JSX) in a Turing-complete language for defining nested UI components.
Think of UI components as if they're functions, accepting a number of named, optional arguments/parameters (attributes!) and an array of child components with their own nested children. (In many UI frameworks, components literally are functions with opaque return types, exactly like this.)
Closing tags like `</article>` make sense when you're going to nest components 10+ layers deep, and when the closing tag will appear hundreds of lines of code later.
Most code shouldn't look like that, but UI code almost always does, which is why JSX is popular.
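A minimal sketch of the "components are functions" idea in Python; the names here are invented for illustration, not any particular framework's API:

    # A component is just a function: named optional attributes plus a list of children.
    def element(tag, children=None, **attrs):
        rendered_attrs = "".join(f' {name}="{value}"' for name, value in attrs.items())
        inner = "".join(children or [])
        return f"<{tag}{rendered_attrs}>{inner}</{tag}>"

    def article(children=None, **attrs):
        return element("article", children, **attrs)

    print(article([element("h1", ["Hello"]), element("p", ["World"])], lang="en"))
    # <article lang="en"><h1>Hello</h1><p>World</p></article>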
I remember XML proponents back then arguing that it allows semantics -- although it was never clear how a non-human would understand and process it.
The funny thing about namespaces is that the prefix, according to the XML spec, should be meaningless -- instead you should look at the URL of the namespace. It's as if we read a doc with snake:front-left-paw and ask: how come a snake has paws? -- Because it's actually a bear -- see the definition of snake in the URL! It feels like mathematical concepts -- coordinate spaces, number spaces with a different number 1 and different basis vectors -- applied to HTML. It may be useful in rare cases. But few can wrap their heads around it, and right from the start most tools worked only with exactly named prefixes, so everyone had to follow along.
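To make the prefix point concrete, a small lxml example (the namespace URI is invented): the parser throws the prefix away and keeps only the URI.

    from lxml import etree

    doc = etree.fromstring(
        '<snake:front-left-paw xmlns:snake="http://example.com/ns/bear"/>'
    )
    # The prefix 'snake' is just a local alias; only the URI survives parsing.
    print(doc.tag)  # {http://example.com/ns/bear}front-left-paw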
The main difference is that with XML, enterprise companies and consultancies pushed complex solutions that differentiated them and created a moat (involving developer tools and compliance). JSON has always just been a way to sling data around with a modicum of sanity. Hence the overbuilt/underbuilt split.
XML saved our axx. We had both internal and external APIs with complex objects in JSON that failed constantly with mismatched implementations, causing friction with clients. Switching both to XML with a schema solved that forever. But this was for complex B2B. We still used JSON for trivial web UI interactions.
Also, "worse is better". Many developers still prefer to use something that is similar to notepad.exe instead of actual tools that understand the formats on a deeper level.
Although, ironically, there are fewer production-time human mistakes when editing an XML file that is properly validated with an XSD than a YAML file, because Norway.
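The "Norway problem" in two lines, using PyYAML (which follows YAML 1.1 scalar rules):

    import yaml  # PyYAML

    print(yaml.safe_load("country: no"))    # {'country': False}
    print(yaml.safe_load("country: 'no'"))  # {'country': 'no'}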
A saner subset of XML, possibly run through some over-caffeinated developers to lighten its feeling of syntactic redundancy, would have given us something FAR better than JSON's failings and YAML's gratuitously hypercomplicated syntax.
Developers Are Stupid - developer.
"XML is a lot like violence. If it's not getting the job done, then you aren't using enough of it."
I don't see how this is in any way more of an afterthought than XML Schema was (except that it was designed by the same group as XML).
> Namespaces. XML allows you to compose documents from multiple schemas without collision.
It "allows" it in the extremely narrow sense that you can write a file with elements from different namespaces, parse it into a DOM with a schemaless parser, and still distinguish the elements.
It does not define any semantics about what an interaction between different namespaces means, or which namespaces you can and cannot combine.
This is an odd qualifier. Is human readability in a REPL different from human readability in an editor? What could be more important in a format -- as long as machines are able to parse it at all -- than human readability? Machines can parse both JSON and XML, so the only way to compare them is how well humans are able to read (and write) them.
The article admits that JSON has answers for many of the problems it points out, like schemas and comments (JSONC), but dismisses them as not widely used. Compared to what? Total JSON usage? Fair enough. But more people probably use JSONC than XML for config files at this point.
In addition, it has tons and tons and tons of cruft, specification bloat, and dogma.
The best parts of XML were probably XPath and some aspects of document validation... and that's it.
I'm sure there are plenty of arenas where this doesn't make as much sense, but I suspect it's common.
I've seen a bunch of cases where an API returns invalid XML that has to be manipulated before parsing, but never that for JSON.
I think that's the real sell for JSON. A lot of APIs are terrible, and since JSON is simpler, terrible JSON beats terrible XML.
Even with zipped payloads, it's just way unnecessarily chatty without being more readable.
My main point is that the very purpose of XML is not to transfer data between machines. XML's use case is to transfer data between humans and machines.
Look at the schemas. They are all grammatical. DTD is a textbook grammar. Each term has a unique definition. XSD is much more powerful: here a term may change definition depending on the context; 'name' in 'human/name' may be defined differently than 'name' in 'pet/name' or 'ship/name'. But within a single context the definition stays. As far as I know, RELAX NG is even more powerful and can express even finer distinctions, but I don't know it well enough to elaborate.
Machines do not need all that to talk to each other. It is pure overhead. A perfect form to exchange data between machines is a dump of a relational structure in whatever format is convenient, with pretty straightforward metadata about types. But humans cannot author data in the relational form; anything more complex than a toy example will drive a human crazy. Yet humans can produce grammatical sequences in spades. To make it useful for a machine that grammatical drive needs only a formal definition and XML gives you exactly that.
So the use case for XML is to make NOTATIONS. Formal in the sense they will be processed by a machine, but otherwise they can be pretty informal, that is have no DTD or XSD. It is actually a power of XML that I can just start writing it and invent a notation as I go. Later I may want to add formal validation to it, but it is totally optional and manifests as a need only when the notation matures and needs to turn into a product.
What makes one XML a notation and another not a notation? Notations are about forming phrases. For example:
<func name="open">
  <data type="int"/>
  <args>
    <addr mode="c">
      <data type="char"/>
    </addr>
    <data type="int"/>
    <varg/>
  </args>
</func>
This is a description of a C function, 'open'. Of course, a conventional description is much more compact: int open(char const*, int, ...)
But let's ignore the verbosity for a moment and stay with XML a bit longer. What is grammatical about this form? 'func' has '@name' and contains 'data' and 'args'. 'data' is the result type, 'args' are the parameters. Either or both can be omitted, resulting in what C calls "void". Either can be 'data' or 'addr'. 'data' is final and has '@type'; 'addr' may be final (point to unknown, 'void') or non-final and may point to 'data', 'func' or another 'addr', as deep as necessary. 'addr' has '@mode' that is a combination of 'c', 'v', 'r' to indicate 'const', 'volatile', 'restrict'. The last child of 'args' may be 'varg', indicating variable parameters.

Do you see that these terms are used as words in a mechanically composed phrase? Change a word; omit a word; link words into a tree-like structure? This is the natural form of XML: the result is phrase-like, not data-like. It can, of course, be data-like when necessary, but this is not using the strong side of XML. The power of XML comes when items start to interact with each other, like commands in Vim. Another example:
<aaaa>
  <bbbb/>
</aaaa>
This would be some data. Now assume I want to describe changes to that data:

<aaaa>
  <drop>
    <bbbb/>
  </drop>
  <make>
    <cccc/>
  </make>
</aaaa>
See those 'make' and 'drop'? Is it clear that they can enclose arbitrary parts of the tree? Again, what we do is write a phrase: we add a modifier, 'make' or 'drop', and the contents inside it get a different meaning.

This only makes sense if XML is composed by hand. For machine-to-machine exchange all this is pure overhead. It is about as convenient as if programs talked to each other via shell commands. It is much more convenient to load a library and use it programmatically than to compose a command-line call.
But all this verbosity? Yes, it is more verbose. This is a no-go for code you write 8 hours a day. But for code that you write occasionally it may be fine. E.g. a build script. An interface specification. A diagram. (It is also perfect for anything that has human-readable text, such as documentation. That use is fine even for an 8-hour workday.) And all of these will be compatible. All XML dialects can be processed with the same tools, merged, reconciled, whatever. This is powerful.

They require no parsing. Parsing may appear a solved problem, but to build a parser you still must at least describe the grammar for a parser generator, and that is not so simple. And all that this description gives you is that the parser will take a short form and convert it into an AST, which is exactly what XML starts with. The rest of the processing is still up to you. With XML you can build the grammar bottom up and experiment with it. Wrote a lot of XML in some grammar and then found a better way? Well, write a script to transform the old XML into the new grammar and continue. The transformer is part of the common toolset.
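As a sketch of that kind of migration, here is what renaming part of the earlier grammar could look like with lxml; the file names and the old-to-new mapping are invented for illustration:

    from lxml import etree

    # Suppose the old grammar used <addr mode="c"> and the new one prefers
    # <pointer const="true">; a few lines rewrite every existing document.
    doc = etree.parse("old.xml")
    for addr in doc.getroot().iter("addr"):
        addr.tag = "pointer"
        if "c" in (addr.get("mode") or ""):
            addr.set("const", "true")
        addr.attrib.pop("mode", None)
    doc.write("new.xml")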
There's one improvement XML had over JSON, and that's comments.
The author laments features and functionality that were largely broken, or implemented in ways that contradicted their documentation. There were very few industries that actually wrote good interfaces and ensured documentation matched implementation, and they were nearly always electrical engineers who'd re-trained as software engineers through the early to late 90s.
Generally speaking, namespaces were a frequent source of bugs and convoluted code paths. Schemas, much like WSDLs or docs, were largely unimplemented or ultimately dropped to allow for faster service changes. They're from the bygone era of waterfall development, and they're most definitely not coming back.
Then there’s the insane XML import functionality, or recursive parsing, which even today results in legacy systems being breached.
Then again, I said "author" at the start of this comment, but it's probably disingenuous to call an LLM an author. This is the 2026 equivalent of blogspam, but even HN seems to be falling for it these days.
The AI also seems to be missing one of the most important points: the migration to smaller interfaces, more meaningful data models, and services that were actually built to be used by engineers, not just a necessary deliverable as part of the original system implementation. API specs in the early 2000s were a fucking mess of bloated, Rube-Goldbergesque interdependent specs, often ready to return validation errors with no meaningful explanation.
The implementation of XML was such a mess that it spawned an entire ecosystem of tooling to support it: SoapUI, parsers like Jackson and SAX (and later StAX), LINQ to XML, xmlstarlet, Jing, Saxon...
Was some of this hugely effective and useful? Yes. Was it mostly an unhinged level of abstraction, or a resulting implementation by engineers who themselves didn’t understand the overly complex features? The majority of the time.
JSON is perhaps an accident of history rather than deliberately designed, but for structured data interchange it is better because it is simpler.
Just like XML, JSON is getting used outside of its area. Using JSON for configuration files is absurd, since it doesn't allow comments.
I hate XML.
So it isn't direct? That's what you're saying. You're saying that there are two options for how to map any property of structured data. That's bad, you know that, right? There's no reason to have two completely separate, incompatible ways of encoding your data. That's a good way to get parsing bugs. That's just a way to give a huge attack surface for adversarially generated serialized documents.
Also, self-documentation is useless. A piece of data only makes sense within the context of the system it originates from. To understand that system, I need the documentation for the system as a whole anyway. If you can give me any real-life situation where I might be handed a json/xml/csv/etc file without also being told what GENERATED that file, I might be willing to concede the point. But I sure can't think of any. If I'm writing code that deserializes some data, it's because I already know the format or protocol I'm interested in deserializing. You can't write code that just ~magically knows~ how its internal representation of data maps to some other arbitrary format, just because both have a concept of a "person" and a concept of a "name" for that person.
The problem with tags in XML isn't that they are verbose; it's that putting the tag name in the closing tag makes XML a context-sensitive grammar, and those are NIGHTMARES to parse compared to context-free grammars.
Comments are only helpful when I'm directly looking at the serialized document. And again, that's only going to happen when I'm writing the code to parse it, which will only happen when I also have access to the documentation for the thing that generated it.
"tooling that can verify correctness before runtime" what do you even mean. Are you talking like, compile time deserialization? What serialized data needs to be verified before runtime? Parsing Is Validation, we know this, we have known this for YEARS. Having a separate parsing and validation step is the way you get parsing differential bugs within your deserialization pipeline.
I didn't see any.
<aaaa bbbb="bbbb" cccc="cccc"/>
{"bbbb":"bbbb","cccc":"cccc"}
See that the difference is only two characters? Yet XML also has a four-character element name, which JSON lacks. And JSON is packed to the limit, while XML is written naturally and is actually more readable than JSON.

XML only has text data (although other kinds can be represented, it isn't very good at doing so), and the structure is named blocks which can have named attributes and plain text inside; and is limited to a single character set (and many uses require this character set to be Unicode).
XML does not require a schema, although it can use one, which is a benefit, and, like they say, it does work better than JSON Schema. Some ASN.1 formats (such as DER) can also be used without a schema, although they can also use one.
My own nonstandard TER format (for ASN.1 data) does have comments, although the comments are discarded when being converted to DER.
Namespaces are another benefit in XML, that JSON does not have. ASN.1 has OIDs, which have some of this capability, although not as much as XML (although some of my enhancements to ASN.1 improve this a bit). However, there is a problem with using URIs as namespaces which is that the domain name might later be assigned to someone else (ASN.1 uses OIDs which avoids this problem).
My nonstandard ASN1_IDENTIFIED_DATA type allows a ASN.1X data file to declare its own schema, and also has other benefits in some circumstances. (Unlike XML and unlike standard ASN.1, you can declare that it conforms with multiple formats at once, you can declare conformance with something that requires parameters for this declaration, and you can add key/value pairs (identified by OIDs) which are independent of the data according to the format it is declared as.)
(I have other nonstandard types as well, such as a key/value list type (called ASN1_KEY_VALUE_LIST in my implementation in C).)
XSLT is a benefit with XML as well, although it would also be possible to make a similar thing with other formats (for databases, there is SQL (and Tutorial D); there is not one for ASN.1 as far as I know but I had wanted such a thing, and I have some ideas about it).
The format XML is also messy and complicated (and so is YAML), compared with JSON or DER (although there are many types in DER (and I added several more), the framing is consistent for all of them, and you do not have to use all of the types, and DER is a canonical form which avoids much of the messiness of BER; these things make it simpler than what it might seem to some people).
Any text format (XML, JSON, TER, YAML, etc) will need escaping to properly represent text; binary formats don't, although they have their own advantages and disadvantages as well. As mentioned in the article, there are some binary XML formats as well; it seems to say that EXI requires a schema (which is helpful if you have a schema, although there are sometimes reasons to use the format without a schema; this is also possible with ASN.1, e.g. PER requires a schema but DER does not).
Data of any format is not necessarily fully self-descriptive, because although some parts may be self-described, it cannot describe everything without the documentation. The schema also cannot describe everything (although different schema formats might have different capabilities, they never describe everything).
> When we discarded XML, we lost: ...
As I had mentioned, other formats are capable of this too
> What we gained: Native parsing in JavaScript
If they mean JSON, then: JSON was made from the syntax of JavaScript, although before JSON.parse was added to standard JavaScript, they might have used eval and caused many kinds of problems with that. Also, if you are using JavaScript then the data model is what JavaScript does, although that is a bit messy. Although JavaScript now has an integer type, it did not have one at the time JSON was made up, so JSON cannot use the integer type.
> I am tired of lobotomized formats like JSON being treated as the default, as the modern choice, as the obviously correct solution. They are none of these things.
I agree and I do not like JSON either, but usually XML is not good either. I would use ASN.1 (although some things do not need structured data at all, in which case ASN.1 is not necessary either).
(Also, XML, JSON, and ASN.1 are all often badly used; even if a format is better does not mean that the schema for the specific application will be good; it can also be badly designed, and in my experience it often is.)
And yeah, XML wasn't perfect -- people harping on it are literally flogging a dead horse. Had the horse been given a pasture, it would have recovered. Instead we have very tiny pig-horses like JSON and YAML and three dozen other "weekend project candidates to someone's claim to fame and CS history" which haven't got half of XML's _useful_ features -- namespaces being one.
YAML has anchors, which is a useful feature in itself -- so no, we don't just regress or reinvent the wheel; there's room for improving XML. The tragedy is throwing the baby out with the bathwater, which is what it seems to me we have done.
Giving XML largely the collective boot was the wrong decision. That's my firm opinion. Tools like XSLT haven't got an equal today -- these, for better and for worse, need XML in some capacity, and are much more extensible (no pun intended) than abominations like Jinja or what have you. XSLT was _designed_, while Jinja, for one, appears to have been grown in a petri dish of sorts.
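And applying a stylesheet doesn't even require leaving Python; a minimal sketch with lxml, where the file names are placeholders:

    from lxml import etree

    # Compile the stylesheet once, then apply it to any document.
    transform = etree.XSLT(etree.parse("report.xsl"))
    result = transform(etree.parse("data.xml"))
    print(str(result))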
The hipster-like obsession with every new thing on the hill gave us HTML5, with its weird context-sensitive parser rules where some tags can be closed, some must be closed, and some must not be closed, and so on. On top of that, it mandates forgiving behaviour on the part of the parser, making best-effort assumptions that kind of get it to render a document, just not the one you wanted -- add modern scripting and you are sitting there debugging subtle, by-design-hidden errors -- instead of what was the case with XML, which demanded you had the basic capacity to write the forward slash in the right places. But no, that was apparently too hard.
Also, really love the choice quotes in the article:
> They are the result of path dependence and fashion, not considered engineering judgment.
_Fashion_ is the word that comes to my mind every time I have to hear people half my age try to sell me JSON or YAML. Like, what basis do you have to argue from a bare mention of something you haven't even worked with, just essentially repeating the person on your left? That's _cargo-cult programming_ again. The fact that mention of XML often draws use of that very term, "old-fashioned", says enough about the level of the conversation here -- we're apparently driven by _fashion_ in our choices of systems that by and large do the same thing their predecessors have done since the '60s.
> We value familiarity over rigor. We value the appearance of simplicity over actual simplicity, which is the simplicity that comes from clear rules and consistent structure.
Just the cherry on the cake, frankly. The entire "The Final Point" section really nails it for my part. I spend a considerable amount of time at work trying to hammer into rookies essentially the equivalent of:
> Formality in data representation prevents entire classes of errors.
But it would appear history repeats itself, as every generation has to learn too late the same lessons that someone in the previous generation could have written a large book about, as a _warning_. Just the other day, for example, one of my, let's say, less rigorous colleagues said outright that "`null` is great in a [programming] language" (the exact wording was something along the lines of "I love null!"), following up with the dubious clarification that this also includes SQL. I am not sure they even comprehend the size of the hole such a statement makes.