I think I naively thought I'd end up with 10 rules or something, blocking telemetry. Oh, what a sweet naive child I was. It's constant. Everything on my computer seemed to use about 8 different telemetry and update services. The sheer number of packets of environmental waste being produced every second by modern computers is breathtaking. It never stops.
Reading this article, I wonder what would happen if you tried selling software the old way again. "Buy our software! Pay once. We'll mail you out a USB stick with the program on it. Our software does not access the internet." It would be terribly inefficient, but it'd probably be fun to try. It would definitely force a lot more rigour around releases & testing.
If General Motors had developed technology like Microsoft, we would all be driving cars with the following characteristics:
For no reason whatsoever, your car would crash twice a day.
Every time they repainted the lines in the road, you would have to buy a new car.
Occasionally your car would die on the freeway for no reason. You would have to pull over to the side of the road, close all of the windows, shut off the car, restart it, and reopen the windows before you could continue. For some reason, you would simply accept this.
Occasionally, executing a maneuver such as a left turn would cause your car to shut down and refuse to restart, in which case you would have to reinstall the engine.
Macintosh would make a car that was powered by the sun, was reliable, five times as fast and twice as easy to drive – but would run on only five percent of the roads.
The oil, water temperature, and alternator warning lights would all be replaced by a single “General Protection Fault” warning light.
The airbag system would ask “Are you sure?” before deploying.
Occasionally, for no reason whatsoever, your car would lock you out and refuse to let you in until you simultaneously lifted the door handle, turned the key and grabbed hold of the radio antenna.
Every time GM introduced a new car, car buyers would have to learn to drive all over again because none of the controls would operate in the same manner as the old car.
You’d have to press the “Start” button to turn the engine off.
Why can't they at least offer something of small value, like 10% off your next food order, or some API credits, so it's a fairer exchange? I guess because everyone's doing it, no individual product gets penalized for annoying their users.
There are exceptions of course, like Kagi. But they're few and far between.
Do you think [big tech company] understands consent?
> Yes
> Ask me later
At least with Android it is mostly the apps that generate interruptions, so I can choose apps that do not, and control notification permissions for those I need.
Damn, you wonder how we as a software industry lost the plot - hell - I have a product in e-commerce analytics, and one of the features I never put in was 'Retention / Cohorts' etc., because in the real world retailers don't speak in those terms.
Thinking about other forms of media, the film industry just expects that its consumers will have some basic visual media literacy. Like, let's say you're watching your first ever movie, and there's a fade-out to represent a time jump. The movie does not stop with a dismissable pop-up explaining what it represents - the vast, vast majority of the audience already intuitively understands it, and the rest can probably figure it out from context. The only requirement to pull this off is an extremely minimal amount of respect for your audience / users!
The reality is that we just have shit consumer protections for our time and attention, because it's revenue for the companies, which lawmakers don't want to infringe upon. They can't even go after the relatively small markets of phone call/mail spam.
Selling is just as old as money. Every business that tried to sell you soaps and cosmetics had to scare you about bacteria, making you forget that bacteria had always been with you for millennia. What you call enshittification is the accumulation of change that you witnessed over decades. Ask children who haven't seen all that change. To them, everything is just fine.
> Software could finally be updated after it shipped. Bugs could be fixed. Security holes could be closed.
They could be. Is that what this technology was used for?
> Crash reports made it easier to fix real problems, update checks were convenient, and license activation reduced some kinds of piracy.
I am skeptical that automatic sending of crash reports is worth the harm to users. A program can create crash reports and save them locally. Then if there is a crash or bug, a support team can instruct the user in how to find and send just the relevant reports. There is no reason to automatically send anything.
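For what it's worth, the local-only version is tiny. Here's a minimal sketch in Python (the app name and report directory are made up) that writes unhandled exceptions to disk instead of uploading anything; support can then ask the user for the relevant file when it's actually needed:

```python
import sys
import traceback
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location for locally stored crash reports; nothing leaves the machine.
CRASH_DIR = Path.home() / ".myapp" / "crash-reports"

def save_crash_report(exc_type, exc_value, exc_tb):
    """Write the traceback to a timestamped local file instead of phoning home."""
    CRASH_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    report = CRASH_DIR / f"crash-{stamp}.txt"
    report.write_text("".join(traceback.format_exception(exc_type, exc_value, exc_tb)))
    # Point the user at the file so they can send it to support if *they* choose to.
    print(f"A crash report was saved to {report}.", file=sys.stderr)

# Install the hook; any unhandled exception now produces a local report.
sys.excepthook = save_crash_report
```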
> “Can we understand how people actually use this?”
> Again, that’s not an evil thought. In fact, it’s useful!
Evil and useful are not mutually exclusive. Whether it's evil depends on what it's useful for.
> Before analytics, if you wanted to understand user behavior, you had to ask people, watch them, or infer patterns from support tickets. That requires time, empathy, and effort.
If you are making a choice out of a desire to avoid spending time, effort and (especially) empathy, you might be doing something evil (even if it's useful to you).
> When experimentation becomes the primary decision-making tool, a strong product vision becomes optional.
> Not because anyone argues against vision, but because you don’t strictly need it anymore, and because backing a chart is safer than backing an opinion.
This sidesteps the question of why either the vision or the chart is being backed. If you're backing it just to make your company more money, it's probably evil, whether it's a chart or a vision.
> Some categories are basically made of alerts: messaging, security, banking, calendars, delivery tracking, anything where timing actually matters.
Banking is not made of alerts. Delivery tracking is not made of alerts. Alerts may have valid uses in those contexts, but they're not the main event.
Delivery tracking I think is a good example of how notifications can be misused. People got deliveries all the time before push notifications. Most of the time you simply don't need to know what most of the notifications are telling you. What good does it do you to know that the package left Las Vegas and is now en route to San Bernardino? What good does it do you to know that the package was delivered at 3:47 if you won't be off work until 5pm anyway? When you get home, it'll either be there or it won't.
> The problem is that once a company builds the machinery to do it, that machinery becomes cheap to reuse, and the incentives gradually pull it away from “help the user succeed” toward “move the metric.”
That is evil if that metric is "help the company regardless of whether it helps the user". That is the issue here. The article consistently dances around the central issue, which is the underlying motives driving these actions. Printing words on a sheet of paper and posting it in the town square was an evil use of technology when done by a 19th-century charlatan to enrich himself by enticing saps into buying useless snake oil. Ruthlessly using any technology to pursue every possible gain regardless of the effect on others is unethical.
> Here are a few practical ways out.
Who are these directed at? Programmers? The article already says programmers hate doing this stuff. Bosses? Venture capitalists? We've already seen that they don't care. None of this is going to change unless these recommendations are aimed at the creation of normative guidelines to be enforced by law.
I really do appreciate the article and have saved it because it does a great job of laying out how the choices were made. But I am so tired of people making excuses for evil behavior on the basis that "it's just technology" or "well they were just trying to improve their product". Every company that did these evil things could have just settled for 2% growth instead of 2.5% and our world would be the better for it; and our world will be the better in another 30 years if we now enact punitive measures against those who continue to do these things.
I suspect many cookie consent dialogs come into existence this way. All the mindless onboarding nonsense, notifications, etc. come from a rather dogmatic application of growth-hacking advice. Startups hire people who specialize in that out of a belief that they have to, and those people then start doing stuff. And once you have those people, they start justifying their presence by imposing a lot of that stuff.
If you ask a lawyer for advice on legalese, they'll give you plenty of terms and conditions, consent forms, etc. Mandatory scroll-to-the-end thingies are a good example of an anti-pattern here. The thing is that laws don't specify much in terms of UI/UX. Some lawyer once upon a time decided that "we have to twist users' nipples and make sure they read my 20 pages of legalese before they are allowed in the app". This is completely stupid if you think about it for more than 4 seconds. But it's being copied over and over again by world + dog. Convoluted cookie consent screens are another good example. Corporate lawyers invented those because they are being paid to justify their existence. They come up with implausible scenarios and then protect their clients from those. A lawyer will never tell you to skip an optional or redundant step, but they'll come up with reasons to add more of them. Removing complexity is not their job.
If nobody applies any critical thinking or fact-checks these things, you end up with a lot of ass-covering, legalese, "better safe than sorry" features, and shit that is not needed, all of which adds up to a lot of user-hostile behavior, onboarding friction, and application complexity.
Authentication is a thing that many product owners just blindly imitate from others, including all the negative patterns around it. I've had this discussion with more than a few product owners: "We have to 'own' the user relationship and therefore we must have an email/password thing and can't do OpenID, SSO, email links, etc." This is nonsense, but if nobody challenges it, you go down the path of repeating decades of mistakes on this front. But it's OK because everybody else does it too.
People don't even question this any more. As soon as you go down this path it leads to a lot of fairly standard and boring stuff that you just have to do, apparently. Over and over again. If you have a password, you've got to have a "reset my password" flow. Is "secret" an acceptable password? No, so we've got to have a password-complexity thingy. Do we add 2FA? Notification preference screens, push notifications, and all the rest.
Modern logins should be simple: "send me a login link", "login with X, Y, or Z", passkeys, etc. Make sure the process is password manager friendly if you have passwords (why?!). Bias towards enabling your users to get started with your thing ASAP. Get them in first, then ask for consent; not the other way around.
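To make that concrete, here's roughly how small a "send me a login link" flow is at its core. This is a sketch in Python, not production code: the in-memory dict stands in for a database, the example.com URL is a placeholder, and a real app would actually email the link and run this behind a web framework.

```python
import hashlib
import secrets
import time

# In-memory stand-in for a persistent table of pending login links.
PENDING_LOGINS = {}  # token_hash -> (email, expires_at)
LINK_TTL_SECONDS = 15 * 60  # links expire after 15 minutes

def send_login_link(email: str) -> str:
    """Create a single-use login link; a real app would email it to the user."""
    token = secrets.token_urlsafe(32)
    # Store only a hash of the token, so a leaked table can't be replayed.
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    PENDING_LOGINS[token_hash] = (email, time.time() + LINK_TTL_SECONDS)
    return f"https://example.com/login?token={token}"

def redeem_login_link(token: str):
    """Return the email for a valid, unexpired token, or None. Tokens are single-use."""
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    entry = PENDING_LOGINS.pop(token_hash, None)  # pop = burn on first redemption
    if entry is None:
        return None
    email, expires_at = entry
    if time.time() > expires_at:
        return None
    return email
```

No stored passwords, no reset flow, no complexity rules; the user's inbox plus a short-lived, single-use token does the work.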
Get a good product person that understands these things rather than one that does things because he heard about a person that knows a person that is totally legit that told them that you gotta do X because reasons that are too complicated for you to worry your petty head about. Most bad decision making boils down to BS, urban myths, and bad advice like that. Ask the "why" questions. Make sure you understand and fact check the answers. Do what you actually have to do. But nothing more.
Not in my experience. Typically all of the "news" happens either during startup, or as part of some other flow. It doesn't happen in the middle of using software. Google Docs is not throwing up a blocking dialog in the middle of you typing a sentence.
>The analytics didn’t prove the feature was unwanted. The analytics proved that we buried it.
If I actually wanted a feature I would go through 10 menus to flip the switch. If the analytics say no one uses it, that is proof that no one wants it. It is possible that the user is unaware of it, though.
>the product stops being a finished artifact
When you are doing constant software updates, it is not a finished artifact anyway.