Also, kudos for packaging it as a static web app. That's the one platform I'm willing to bet will still function in 10 years.
(At home of course, people get pissy if you do this at work!)
You can discard/modify part of a password before sending it to your backend. Then, when you log in, the server has to brute-force the missing part.
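One way to read that (this is my own sketch, not anything a real backend does; the scrypt parameters and suffix alphabet are arbitrary assumptions): register with the full password, but have the client drop the last couple of characters on every subsequent login, leaving the server to grind through the possible suffixes.

    import hashlib, itertools, os, string

    DROP = 2  # trailing characters the client discards before sending (illustrative)
    ALPHABET = string.ascii_lowercase + string.digits  # assumed suffix alphabet

    def register(full_password: str):
        # The server stores a salted slow hash of the FULL password,
        # even though clients only ever transmit a truncated one afterwards.
        salt = os.urandom(16)
        digest = hashlib.scrypt(full_password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify(truncated: str, salt: bytes, stored: bytes) -> bool:
        # At login, the server brute-forces the missing suffix.
        for suffix in itertools.product(ALPHABET, repeat=DROP):
            candidate = truncated + "".join(suffix)
            if hashlib.scrypt(candidate.encode(), salt=salt, n=2**14, r=8, p=1) == stored:
                return True
        return False

Every legitimate login then costs the server up to 36^2 slow hashes, which is exactly the knob you can turn up or down.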
One could extend this with security questions, like how many children, pets, and cars you own, or what color your car was in 2024, and use that data to aid the brute forcing.
The goal would be to allow decryption with fewer than 5 shards while making it as computationally heavy as you like. If no one remembers the pink car, it takes x hours longer.
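A sketch of how the "pink car" shard might be wired in (purely an assumption about the design; only stdlib calls here): derive one shard from the answers with a deliberately slow KDF instead of making it random, so a forgotten answer costs the recoverers a brute-force loop over plausible values rather than making recovery impossible.

    import hashlib

    def hint_shard(answers: list[str], salt: bytes) -> bytes:
        # One "virtual" shard derived from personal facts via a slow KDF.
        material = "|".join(a.strip().lower() for a in answers).encode()
        return hashlib.scrypt(material, salt=salt, n=2**14, r=8, p=1, dklen=32)

    def recover_with_guesses(known_answers: list[str], salt: bytes, candidates: list[str]):
        # If the car color is forgotten, try candidates; each guess pays the
        # full scrypt cost, so a hazy memory translates directly into hours.
        for color in candidates:
            yield color, hint_shard(known_answers + [color], salt)
            # feed each candidate shard into the Shamir combine step and test

Whether that weakens the scheme depends entirely on how guessable the answers are, which is the usual problem with security questions.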
I wonder who would not only have the passwords, but the know-how to manage the whole thing, at least to transition it to more managed services...
https://support.apple.com/guide/iphone/share-passwords-iphe6...
https://support.apple.com/guide/icloud/share-files-and-folde...
Thankfully, the very long password I use for an encrypted Borgbackup was somewhere deep or untouched; otherwise I would have been fucked. Also, the backup codes Google told me they would always accept failed, and it wasn't until I found a random Android device that had sat unused in a drawer for a year that I was able to get access back to my Google account of ~25 years.
One practical problem to consider is the risk of those distributed bundles all ending up on one or two major cloud providers' infra, because your friends happened to store them someplace that got scooped up by OneDrive, GDrive, etc. Then, instead of the assumed <threshold> friends being required for recovery, your posture is subtly degraded to some smaller number of hacked cloud providers.
Someone using your tool can obviously mitigate this by distributing on fixed media like USB keys (possibly multiple keys per individual, as consumer-grade units are notorious for becoming corrupted or failing after a time) along with custodial instructions. Some thought about longevity is helpful here - eg. rotating media out over the years as technology migrates (when USB drives become the new floppy disks) and testing that new browsers still load and correctly run your tool (WASM is still relatively new).
Some protocol for confirming from time to time that your friends haven't lost their shares is also prudent. I always advise that a disaster recovery plan which doesn't include semi-regular drills isn't a plan, it's just hope. There's a reason militaries, first responders, disaster response agencies, etc. are always doing drills.
I once designed something like this using sealed paper cards in an identified sequence - think something like the nuclear codes you see in movies. Annually you call each custodian and get them to break open the next one and read out the code, which attests that their share hasn't been lost or damaged. The routine also keeps them tuned in so they don't just stuff your stuff in an attic and forget about it, unable to find their piece when the time comes. In this context, it also happens to be a great way to dedicate some time once a year to catching up (eg. take the opportunity to really focus on your friend in an intentional way, ask about what's going on in their life, etc).
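A digital analogue of those sealed cards is easy to sketch (my own toy example, not part of the tool; the share bytes below are stand-ins): record, per custodian and per year, a short code derived from their share. On the annual call they compute the current year's code from the file they're holding and read it back, which proves the bytes are intact without revealing them.

    import hashlib, hmac, os

    # Stand-in share bytes for the example; in practice these are the real shares.
    shares = {"Alice": os.urandom(33), "Bob": os.urandom(33), "Carol": os.urandom(33)}

    def attestation_code(share: bytes, custodian: str, year: int) -> str:
        # Short, human-readable proof-of-possession for one custodian/year.
        msg = f"{custodian}:{year}".encode()
        return hmac.new(share, msg, hashlib.sha256).hexdigest()[:8].upper()

    # Recorded once at distribution time:
    expected = {
        (name, year): attestation_code(share, name, year)
        for name, share in shares.items()
        for year in range(2025, 2035)
    }
    # On the 2027 call, Alice runs attestation_code(her_copy, "Alice", 2027)
    # and you compare it against expected[("Alice", 2027)].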
The rest of my comments are overkill but maybe fun to discuss from an academic perspective.
Another edge-case risk is a flawed Shamir implementation - i.e. some years from now, a bug or exploit is discovered in the library you're using to provide that algorithm. More sophisticated users who want to mitigate that risk can further silo their sensitive info - eg. only include a master password and instructions in the Shamir-protected content, and put the data those unlock somewhere else (obviously with redundancy) protected by different safeguards. This comes at the cost of added complexity (both for maintenance and recovery).
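A minimal sketch of that siloing, assuming the Python `cryptography` package and made-up file names: the real archive is encrypted with an ordinary symmetric key, and only that key plus a pointer to where the archive lives goes into the Shamir-protected payload. A Shamir flaw then exposes a key to a blob the attacker still has to locate, and a leaked archive is useless without the shares.

    from cryptography.fernet import Fernet

    # 1. Encrypt the real data with a conventional, well-reviewed primitive.
    archive_key = Fernet.generate_key()
    with open("archive.tar", "rb") as f:
        ciphertext = Fernet(archive_key).encrypt(f.read())
    with open("archive.tar.enc", "wb") as f:
        f.write(ciphertext)

    # 2. Only this small payload gets split with Shamir; the encrypted archive
    #    itself lives elsewhere, with its own redundancy and safeguards.
    shamir_payload = (
        b"Encrypted archive: two USB keys plus the NAS in the basement\n"
        b"Decryption key: " + archive_key + b"\n"
    )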
Auditing to detect collusion is also something to think about in schemes like these (eg. somehow watermark the decrypted output to indicate which friends' shares were utilized for a particular recovery - but probably only useful if the watermarked stuff is likely to be conveyed outside the group of colluders). And timelocks to make wrench attacks less practical (likely requires some external process).
Finally, who conducted your Security Audit? It looks to me as if someone internal (possibly with the help of AI?) basically put together a bunch of checks you can run on the source code using command-line tools. There's definitely a ton of benefit to that (often the individuals closest to a system are best positioned to find weaknesses if given the time to do so), and it's nice that the commands are constructed in a way other developers are likely to understand if they want to perform their own review. But it might be a little misleading to call it an "audit", a term typically taken to mean an outside professional agency conducting an independent and thorough review and formally signing off on their findings.
Also, those audit steps look pretty Linux-centric (eg. Verify Share Permissions / 0600, symlink handling). Is it intended that development only take place on that platform?
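For instance, a check along these lines (my guess at what the 0600/symlink steps are getting at, not the project's actual commands) quietly loses its meaning the moment someone runs it on Windows:

    import stat, sys
    from pathlib import Path

    def check_share_file(path: str) -> list[str]:
        problems = []
        p = Path(path)
        if p.is_symlink():
            problems.append("share file is a symlink")
        mode = stat.S_IMODE(p.lstat().st_mode)
        if mode & 0o077:
            problems.append(f"mode {oct(mode)} is broader than 0600")
        if sys.platform == "win32":
            # POSIX mode bits are mostly fiction here; a real check needs ACLs.
            problems.append("permission bits are not meaningful on Windows")
        return problems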
Again, thanks for sharing and best of luck with your project!
Tell someone you trust about where you left these pieces of paper.
I would be in an impaired state and could not function in a way that would be conducive to either work or pleasure in terms of computer use.
That is to say, the entire reason why I have password security at all is to keep out people who do not know the password. If someone does not know the password, they should not be able to access the system. That obviously and clearly applies to myself as much as any other person. "If you do not know it, then you do not need it."