Permanent identifiers should not carry data. This is the cardinal sin of data management. You always run into situations where the thing you thought would never change does change: "surely this never changes, so it's safe to squeeze into the ID to save a lookup". Then people suddenly find out they have a new gender identity, and the final digit of their ID numbers needs to change too.
Even if nothing changes, you can run into trouble. Norwegian PNs have your birth date (in DDMMYY format) as the first six digits. Surely that doesn't change, right? Well, wrong: although the date itself doesn't change, your knowledge of it might. Immigrants who didn't know their exact date of birth got assigned 1 January by default... And then people actually born on 1 January got told, "sorry, you can't have that as your birth date, we've run out of numbers in that series!"
Librarians in the analog age can be forgiven for cramming data into their identifiers to save a lookup. When the lookup is in a physical card catalog, that's somewhat understandable (although I bet they ran into trouble over it too). But when you have a powerful database at your fingertips, use it! Don't make decisions you will regret just to shave off a couple of milliseconds!
In our case, we don't want database IDs in an API or in URLs. When IDs are sequential, it enables enumeration attacks and lets outsiders estimate how many customers we have.
Encrypting a database ID makes it very obvious when someone is trying to scan, because a forged UUID simply won't decrypt to anything valid. We don't even need a database round trip.
The ability to rapidly shard everything can be extremely valuable. The difference between "we can shard on a dime" and "sharding will take a bunch of careful work" can be expensive. If the company has poor margins, this can be the difference between "can scale easily" and "we're not getting this investment".
I would argue that if your folks have the technical chops to shard while avoiding surrogate guaranteed-unique keys, great. But if they don't... a UUID on every table can be a massive get-out-of-jail-free card, and for many companies this is much, much more important than some minor space and time optimizations on the DB.
Worth thinking about.
> Random values don’t have natural sorting like integers or lexicographic (dictionary) sorting like character strings. UUID v4s do have "byte ordering," but this has no useful meaning for how they’re accessed.
Might the author mean that random values are not sequential, so ordering them is inefficient? Of course random values can be ordered, and ordering by what he calls "byte ordering" is exactly how all integer ordering is done. And naive string ordering too, like we did in the days before Unicode. I'm sure every DBA has a war story that starts with a similar decision in the past.
If you're using the latest version of PG, there is a plugin for it.
That's it.
0: https://bsky.app/profile/hugotunius.se/post/3m7wvfokrus2g
1: https://github.com/postgres/postgres/blob/master/src/backend...
id => 123, public_id => 202cb962ac59075b964b07152d234b70
There are many ways to generate the public_id. A simple MD5 with a salt works quite well for extremely low effort.
Add a unique constraint on that column (which also indexes it), and you'll be safe and performant for hundreds of millions of rows!
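A minimal sketch of that approach in Python (the salt value and function name are my own; note that the hash in the example above is just md5("123"), i.e. computed without a salt):

```python
import hashlib

SALT = "some-secret-salt"  # placeholder; load from config, not source control

def make_public_id(db_id: int) -> str:
    # Deterministic, non-sequential public identifier derived from the
    # internal integer id. Store it in a unique-indexed public_id column.
    return hashlib.md5(f"{SALT}{db_id}".encode()).hexdigest()

print(make_public_id(123))
```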
Why do we developers like to overcomplicate things? ;)
I wrote a CRUD app for document storage. It had user IDs and document IDs. I wrote a method GetDocumentForUser(docID, userID) that checked permissions for that user and document and returned the document if permitted. I then, stupidly, called that method with GetDocumentForUser(userID, docID), and it took me a good half hour to work out why it never returned anything.
It never returned anything because a valid userID will never be a valid docID. If I had used integers it would have returned documents, and I probably wouldn't have spotted it while testing, and I would have shipped a change that cheerfully handed people other people's documents.
I will put up with a fairly considerable amount of performance hit to avoid having this footgun lurking. And yes, I know there are other ways around this (e.g. types) but those come with their own trade-offs too.
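For what it's worth, here's a minimal sketch of the typed-ID mitigation in Python (names are made up); a static checker like mypy then catches at review time the swap that random UUIDs caught at runtime:

```python
from typing import NewType, Optional

UserId = NewType("UserId", str)
DocId = NewType("DocId", str)

def get_document_for_user(doc_id: DocId, user_id: UserId) -> Optional[bytes]:
    ...  # permission check, then fetch

doc = get_document_for_user(DocId("d-42"), UserId("u-7"))  # OK
# get_document_for_user(UserId("u-7"), DocId("d-42"))      # mypy error: swapped types
```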
It is not just about it being hard to guess a valid individual identifier in a vacuum. Random (or at least random-ish) values, be they UUIDs or undecorated integers, are in this context also about it being hard to guess one identifier from another, or from a selection of others.
Wrt: "it isn't x it is y" form: I'm not an LLM, 'onest guv!
PlanetScale wrote a really good article on why incrementing primary keys beat randomly distributed primary keys when it comes to B-tree performance. https://planetscale.com/blog/btrees-and-database-indexes
Are there valid reasons to use UUIDs (assuming they're used correctly) for primary keys? I know systems have incorrectly exposed primary keys to the public, but assuming that's not the concern, why use a UUID over a bigint?
While that is often a neat solution, do not do it by simply XORing the numbers with a constant. Use a block cipher in ECB mode (if you want the ID to be short, something like NSA's Speck comes in handy here, as it can be instantiated with a 32- or 48-bit block).
And do not even think about using RC4 for this (I've seen that multiple times), because with a fixed key it produces a fixed keystream, which is completely equivalent to XORing with a constant.
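A minimal sketch of the block-cipher approach, assuming Python with the pycryptodome package and AES (pycryptodome doesn't ship Speck, so this yields a full 128-bit, UUID-sized token rather than a short ID; the key is a placeholder):

```python
import uuid
from Crypto.Cipher import AES  # pycryptodome

KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # placeholder key

def encrypt_id(db_id: int) -> uuid.UUID:
    # One 16-byte block per ID, so ECB here acts as a pseudorandom
    # permutation: each distinct integer maps to a distinct opaque token.
    ct = AES.new(KEY, AES.MODE_ECB).encrypt(db_id.to_bytes(16, "big"))
    return uuid.UUID(bytes=ct)

def decrypt_id(token: uuid.UUID) -> int:
    # A scan with forged tokens decrypts to garbage (e.g. absurdly large
    # integers), which can be rejected without a database round trip.
    pt = AES.new(KEY, AES.MODE_ECB).decrypt(token.bytes)
    return int.from_bytes(pt, "big")
```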
And of course we could come up with many ways to generate our own ids and make them unique, but we have the following requirements.
- It needs to be a string (because we allow composing them to 'derive' keys)
- A client must be able to create them (not just a server) without risk of collisions
- The time order of keys must not be easily guessable (the id is often leaked via references, which could 'betray' not just the existence of a document but also its relative creation time)
- It should be easy to document how any client can safely generate document ids
The lookup performance is not really such a big deal for us. Where it is, we can do a projection into a simpler format where applicable.
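One way to satisfy all four requirements (a sketch, not their actual scheme) is a plain random v4 rendered as a hex string:

```python
import uuid

def new_doc_id() -> str:
    # Any client can mint one without coordination, collision risk is
    # negligible, and a random v4 leaks nothing about creation order.
    return uuid.uuid4().hex

def derive_key(doc_id: str, suffix: str) -> str:
    # String ids compose naturally into derived keys.
    return f"{doc_id}/{suffix}"
```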
Aren't people using (big)ints as primary keys, and UUIDs as logical keys for import/export, solving portability across different machines?
I was using 64-bit snowflake PKs (timestamp+sequence+random+datacenter+node) previously and made the switch to UUIDv7 for sortable, user-facing PKs. I'm more than fine letting the DB handle a 128-bit int over a 64-bit int if it means not having to make sure that the latest version of my snowflake function has made it to every DB, or that my snowflake server never hiccups, ever.
Most of the data that's going to be keyed with a uuid7 is getting served straight out of Redis anyway.
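For reference, the v7 layout is simple enough to sketch; here's a toy generator following the RFC 9562 bit layout (real implementations also guarantee monotonicity within the same millisecond, which this skips):

```python
import os
import time
import uuid

def uuidv7() -> uuid.UUID:
    # 48-bit unix timestamp in ms, 4-bit version, 12 + 62 random bits,
    # 2-bit variant: time-sortable, no coordination needed.
    ts_ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big")   # 80 random bits
    value = (ts_ms & ((1 << 48) - 1)) << 80        # timestamp
    value |= 0x7 << 76                             # version 7
    value |= ((rand >> 68) & 0xFFF) << 64          # rand_a (12 bits)
    value |= 0b10 << 62                            # variant
    value |= rand & ((1 << 62) - 1)                # rand_b (62 bits)
    return uuid.UUID(int=value)
```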
But that also adds complexity.
(in the scientific reporting world this would be the perennial "in mice")
Even worse, some tools generate random strings and then ADD hyphens to make them look like UUIDs (even though they're not, as the UUID version field is filled randomly as well). I cannot wrap my head around why. E.g.:
https://github.com/VictoriaMetrics/VictoriaLogs/blob/v1.41.0...
> Misconceptions: UUIDs are secure
> One misconception about UUIDs is that they’re secure. However, the RFC describes that they shouldn’t be considered secure “capabilities.”
> From RFC 4122, Section 6, Security Considerations:
> Do not assume that UUIDs are hard to guess; they should not be used as security capabilities
This is just wrong, and the citation doesn't support it. You're not guessing a 122-bit random identifier. What's crazy is that the article, immediately prior to this, even cites the very math showing exactly how unguessable that is.
… the linked citation (to §4.4, which is different from the in-prose citation) is just about how to generate a v4, and completely unrelated to the claim. The prose citation to §6 is about UUIDs generally: the statement "Do not assume that [all] UUIDs are hard to guess" is not logically inconsistent with properly-generated UUIDv4s being hard to guess. A subset of UUIDs have security properties, if the system generating & using them implements those properties, but we should not assume all UUIDs have that property.
Moreover, replacing an unguessable UUID with an (effectively random) 32-bit integer does make it guessable, and the scheme laid out seems completely insecure if it is to be used in the contexts where one finds UUIDv4s serving as unguessable identifiers.
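Back-of-envelope numbers (mine, with made-up but generous attacker figures):

```python
# Chance that an attacker making a billion guesses per second for a full
# year hits any one of a billion valid identifiers.
guesses = 1e9 * 3.15e7             # ~3.15e16 guesses in a year
valid_ids = 1e9

p_uuidv4 = guesses * valid_ids / 2**122
print(f"{p_uuidv4:.1e}")           # ~5.9e-12: effectively never

p_int32 = min(1.0, guesses * valid_ids / 2**32)
print(p_int32)                     # 1.0: the 32-bit space falls in seconds
```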
The additional size argument is pretty weak too; a UUID costs an extra 8 bytes per row over a bigint, so at "millions of rows" (say, ~3 million) a UUID column is consuming an additional ~24 MiB.
The UUID is 128 bits. The first 64 bits are a Java long. The last 64 bits are a Java long. Let's just combine the tenant-ID long with the resource-ID long to generate a unique ID for this on our platform. (Worked until it didn't.)
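Roughly what that scheme looks like (my reconstruction, not their code); it yields well-formed 128-bit values but not actual UUIDs, and it is only as unique as the (tenant, resource) pairs feeding it:

```python
import uuid

MASK64 = (1 << 64) - 1

def combined_id(tenant_id: int, resource_id: int) -> uuid.UUID:
    # Pack two 64-bit values into the 128-bit UUID space. The version and
    # variant bits end up wherever the inputs put them, so these can
    # collide with (or be misread as) real UUIDs.
    return uuid.UUID(int=((tenant_id & MASK64) << 64) | (resource_id & MASK64))
```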
The power and main purpose of UUIDs is to act as easy to produce, non-conflicting references in distributed settings. Since the scope of TFA is explicitly set to be "monolithic web apps", nothing stops you from having everything work with bigint PKs internally, and just add the UUIDs where you need to provide external references to rows/objects.
Also, is it necessary to show UUIDs at all to customers of an API? Or could it be a valid pattern to hide all the querying complexity behind named identifiers, even if it costs a bit in terms of joining and indexing?
The context is the classic B2B SaaS, but feel free to share your experiences even if it comes from other scenarios!
I would like to hear from others using, for example, Google Spanner: do you have issues with UUIDs? I don't for now; most optimizations happen at the controller level, and data transformation can be slow due to validations. Try to keep service logic as straightforward as possible.
Sometimes I have to talk to legacy systems. All my APIs have string IDs, and I encode int IDs as plain decimal, left-padded with leading zeros up to 26 chars. Technically not a compliant ULID, but practically speaking, if I see a leading `00` I know it's not an actual ULID, since that timestamp would be before Nov 2004, and ULID was invented in 2017. The ORM automatically strips the zeros and the query just works.
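The trick in a couple of lines (a sketch; the helper name is mine):

```python
def int_id_as_pseudo_ulid(n: int) -> str:
    # Decimal digits are valid Crockford base32, so padding to ULID's
    # 26 chars gives a string that round-trips like one but is
    # recognizable by its impossible leading zeros.
    return str(n).zfill(26)

s = int_id_as_pseudo_ulid(123)
assert len(s) == 26 and s.startswith("00") and s.endswith("123")
```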
I'm just kind of over using sequential int IDs for anything bigger than hobby level stuff. Testing/fixturing/QA are just so much easier when you do not have to care about whether an ID happens to already exist.
> the impact to inserts and retrieval of individual items or ranges of values from the index.
Classic OLTP vs OLAP.
Guess how much I have to defend against attackers trying 538-62-xxxx
> We need to read blocks from the disk when they are not in the PostgreSQL buffer cache. Conveniently, PostgreSQL makes it very easy to inspect the contents of the buffer cache. This is where the big difference between uuidv4 and uuidv7 becomes clear. Because of the lack of data locality in uuidv4 data, the primary key index is consuming a huge amount of the buffer cache in order to support new data being inserted – and this cache space is no longer available for other indexes and tables, and this significantly slows down the entire workload.
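The inspection the article alludes to is the pg_buffercache extension; here's a quick way to see which relations are hogging the cache (query adapted from the Postgres docs; connection details are placeholders):

```python
import psycopg2

# Requires: CREATE EXTENSION pg_buffercache;
SQL = """
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
"""

with psycopg2.connect("dbname=mydb") as conn, conn.cursor() as cur:
    cur.execute(SQL)
    for relname, buffers in cur.fetchall():
        print(f"{relname}: {buffers} x 8 KiB buffers")
```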
And I'd likely want a unique constraint on that?
A hash index is ideally suited for UUIDs, but for some reason Postgres hash indexes cannot be unique.
The ability to know ahead of time what a primary key will be (in lieu of persisting it first, then returning) opened up a whole new world of architecting work in my app. It made a lot of previous awkward things feel natural.
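A small illustration of what that unlocks (names are hypothetical): build a whole object graph, foreign keys included, before anything touches the database:

```python
import uuid

# Parent and children reference each other before a single INSERT runs,
# so the whole graph can be written in one batch/transaction.
order_id = uuid.uuid4()
order = {"id": order_id, "status": "draft"}
lines = [
    {"id": uuid.uuid4(), "order_id": order_id, "sku": "A-1"},
    {"id": uuid.uuid4(), "order_id": order_id, "sku": "B-2"},
]
```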
The issue is that this is true for more or less all capability URLs. I wouldn't recommend UUIDs per se here; probably better to just use a random number. I have seen UUIDs used for this in practice, though, and those systems weren't compromised because of it.
I hate the tendency for password-recovery flows, for example, to leave the URL valid for only 5 minutes. Of course these URLs need to have a limited lifetime, but mail isn't a real-time communication medium. There is very little security benefit in reducing it from 30 minutes to 5 minutes. You are not getting more secure this way.
We have all faced issues where we don't know where the data will ultimately live that's optimal for our access patterns.
Or we have devices and services doing async operations that need to sync.
I’m not working on mission critical “if this fails there’s a catastrophic event” type shit. It’s just rent seeking SaaS type shit.
Oh no, it cost $0.35 extra to make $100. Next year we'll make more money relative to the cost increase.
"UUIDv4 is only 122 random bits" is a BS argument. Such a made-up problem.
But a fair point is that one should use a sequential UUID, one that has a time part, to avoid index fragmentation.
I'm tired of midwit arguments like "Tech X is N% faster than tech Y at performing operation Z. Since your system (sometimes) performs operation Z, it implies that Tech X is the only logical choice in all situations!"
It's an infuriatingly silly argument because operation Z may only represent about 10% of the total CPU usage of the whole system (averaged out). So what is promoted as a 50% gain may in fact be a 5% gain in the grand scheme of things: negligible. If everyone looked at this performance 'advantage' rationally, nobody would think it worth sacrificing important security or operational properties.
I don't know what happened to our industry; we're supposed to be intelligent people but I see developers falling for these obvious logical fallacies over and over.
I remember, back in the day, one of the senior engineers discussing upgrading a Python system. He stated openly that the new version of the engine was something like 40% slower than the old one, but he didn't even have to explain why upgrading was still a good decision; everybody in the company knew he was only talking about code execution speed, and everybody knew that this was a small fraction of the total.
Not saying UUIDv7 was a bad choice for Postgres. I'm sure it's fine for a lot of situations but you don't have to start a cult preaching the gospel of The One True UUID to justify your favorite project's decisions.
I do find it kind of sly though how the community decided to make this UUIDv7 instead of creating a new standard for it.
The whole point of UUID was to leverage the properties of randomness to generate unique IDs without requiring coordination. UUIDv7 seems to take things in a philosophically different path. People chose UUID for scalability and simplicity (both of which you get as a result of doing away with the coordination overhead), not for raw performance...
That's the other thing that drives me nuts: people who don't understand the difference between performance and scalability. People foolishly equate scalability with parallelism or concurrency, whereas that's just one aspect of it; scalability is a much broader topic. It's the difference between a theoretical system which is fast for a certain artificially small input size and one which actually keeps performing as the input size grows.
Lastly, no mention is made of the complex logic which has to take place behind the scenes to generate UUIDv7 IDs... People take it for granted that all computers have a clock which can produce accurate timestamps, and that all the computers in the world are magically in sync... UUIDv7 is not simple; it's very complicated. It has a lot of additional complexity and dependencies compared to UUIDv4. Just because that complexity is very well hidden from most developers doesn't mean it's not there and that it's not a dependency... This may become especially obvious as we move to a world of robotics and embedded systems, where cheap microchips may not have enough flash memory to hold the code for the kinds of programs required to compute such elaborate IDs.
Yes, there are different flavors of generating them with their own pros and cons, but at the end of the day it's just so much more elegant than some auto-incrementing crap your database creates. But that is just semantics; you can always change the UUID algorithm for future keys. And honestly, if you treat the UUID as an opaque entity (which you should), why not just pick the random one?
And I just thought of the argument "but what if you want to sort the UUIDs", say for a list of stories or something? Well, again: if you treat the UUID as opaque, why would you sort by it? You should be sorting on some other field, like a date field or a title. UUIDs are opaque, damn it. You don't sort opaque data. "Well, they get clustered weird," say people. Why are you clustering on a random opaque key? If you need certain data to be clustered, then do it on the right key (the user_id field if your data should be clustered by user, say).
Letting the client generate the primary keys is really liberating. Not having to care about PK collisions or leaking information via auto incrementing numbers is great!
In my opinion, UUIDs aren't used enough!