by ycombinatrix
0 subcomments
- >We prioritized simplicity and correctness first, and plan to incrementally introduce performance optimizations in future iterations.
Sir, this is a correctness issue.
- Does nobody else think the responses from the person who wrote the code read like the usual sycophantic “you’re absolutely right!” tone you get from AI these days?
by burnt-resistor
2 subcomments
- Sigh. Piss-poor engineering, likely by humans. For the love of god, do atomic updates by duplicating data first, e.g. a move-out-of-the-way-first strategy, before doing metadata updates (a sketch of that pattern follows this comment). And keep a backup of the metadata at each point in time to maximize crash consistency and crash recovery while minimizing the potential for data loss. An online defrag kernel module would likely be much more useful, but I don't trust them to be able to handle such an undertaking.
If a user has double the storage available, it's probably best to do the old-fashioned "defrag" by copying all files and file metadata, single-threaded, to a newly formatted volume.
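(A minimal sketch of the move-out-of-the-way / copy-then-rename pattern the comment above describes, assuming POSIX semantics; the function name and paths are illustrative and not taken from the project's defrag.c.)

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write the new contents to a temporary file, fsync it, then rename it
 * over the original. A crash at any point leaves either the old file or
 * the new file intact, never a torn mix of the two. */
static int atomic_replace(const char *path, const void *buf, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof(tmp), "%s.tmp.%d", path, (int)getpid());

    int fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd < 0)
        return -1;

    /* Simplification: treat a short write as an error. */
    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);

    if (rename(tmp, path) != 0) {   /* rename(2) is atomic on POSIX filesystems */
        unlink(tmp);
        return -1;
    }

    /* fsync the containing directory so the rename itself is durable
     * (simplification: assumes path is relative to the cwd). */
    int dfd = open(".", O_RDONLY);
    if (dfd >= 0) {
        fsync(dfd);
        close(dfd);
    }
    return 0;
}
```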
by forgotpwd16
2 subcomments
- >After reviewing the core defrag logic myself, I've come to a conclusion that it's AI slop.
I'd call it human slop. AI may have given them some code, but they certainly haven't used it fully. I uploaded defrag.c to ChatGPT asking for a review of performance/correctness/safety, and it pointed out the same issues as you (alongside a bunch of others, but I'm not interested in reviewing them at the moment).