Car dashboards without buttons, TVs sold with 3D glasses (remember that phase?), material then flat design, larger and larger phones: the list is embarrassing to type because it feels like such a stereotypical nerd complaint list. I think it's true though: the tech PMs are autocrats and have such a bizarrely outsized impact on our lives.
And now with AI, too. I just interacted with duck.ai, DuckDuckGo's stab at a chatbot. I long for a little more conservatism.
Around that time, one of my employer's websites had added Google Plus share buttons to all the links on the homepage. It wasn't a blog, but imagine a blog homepage with previews of the last 30 articles. Now each article had a Google Plus tag on it. I was called in to help because the load time for the page had grown from seconds to a few minutes: for each article, the page was adding a new script tag and a Google Plus dynamic tag.
It was fixed, but so many resources were wasted on something that eventually disappeared. AI will probably not disappear, but I'm tired of the busywork around it.
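The fix itself was just the classic load-once pattern: inject the widget script a single time and render a button per article, instead of adding a fresh script tag for every one of the 30 previews. A minimal sketch of that idea (WIDGET_SRC and renderShareButton are placeholders I made up, since the real Google Plus API is long gone):

    // Load the share-widget script once, no matter how many callers ask for it.
    const WIDGET_SRC = "https://example.com/share-widget.js"; // placeholder URL

    function loadWidgetScriptOnce(): Promise<void> {
      return new Promise((resolve, reject) => {
        // If an earlier call already injected the script, don't add another.
        if (document.querySelector(`script[src="${WIDGET_SRC}"]`)) {
          resolve();
          return;
        }
        const script = document.createElement("script");
        script.src = WIDGET_SRC;
        script.async = true;
        script.onload = () => resolve();
        script.onerror = () => reject(new Error("widget script failed to load"));
        document.head.appendChild(script);
      });
    }

    async function addShareButtons(): Promise<void> {
      await loadWidgetScriptOnce(); // one network request for the page, not one per article
      document.querySelectorAll(".article-preview").forEach((el) => {
        // Hypothetical per-element render call standing in for whatever the widget exposed.
        (window as any).renderShareButton?.(el);
      });
    }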
Pretty much my sentiment too.
In general, I think we want to have it, just like nuclear fusion, interplanetary and interstellar colonization, curing cancer, etc. etc.
We don't "need" it, in the same way that people in the 1800s didn't need electric cars or airports.
Who owns AGI, or what purpose the AGI believes it has, is a separate discussion, similar to how airplanes can be used to transport people or fight wars. Fortunately, today most airplanes are made to transport people and connect the world.
For instance, I am fiddling with LineageOS on a Pixel (ironically enough) to minimize my exposure to Google's AI antics. That's not to say it is easy or sustainable, but enough of us need to stop participating in their bad bets to force that realization on them.
We (You and I) don't. Shareholders absolutely need it for that line to go further up. They love the idea of LLMs, AI, and AGI for the sole reason that it will help them reduce the cost of labour massively. Simple as that.
> I will use what creates value for me. I will not buy anything that is of no use to me.
If only everyone thought this way. So many around these parts have no self-preservation tendencies at all.
> We will work with the creators, writers, and artists, instead of ripping off their life's work to feed the model.
I’m not sure I have an idea of what this might look like. Do they want money? What might that model look like? Do they want credit? How would that be handled? Do they want to be consulted? How does that get managed?
A student will be showing me something on their laptop, and their thumb accidentally grazes the Copilot key, because it's larger than the modifier keys and positioned so that this happens as often as possible. The computer stops all other activity and shifts all focus to the Copilot window. Unprompted, the student always says something like "God, I hate that so much."
If it were so useful, they wouldn't have to trick you into using it.
What startups are doing earnings calls?
Also, even many home users may be finding that they interact less and less with the "platform" now that everything, including MS Office, runs from a browser. I can barely remember when the differences between Windows and Linux were even relevant to my personal computer use. This shift was driven by the need to accommodate Windows, macOS, iOS, and Android all at once.
Yeah, I think the days of working software (at least deterministically working software) are over.
Right... none of them are saying that. They could probably use more GPUs, considering that GPU and memory prices are skyrocketing and the supply chain can't keep up. It's about experimentation: they need real users and real feedback to know how to improve the current generation of models and to figure out how to monetise them.
Our heroes are in the office of a tech billionaire who says "See that coffee machine? Just speak what you want. It can make any coffee drink."
So one character says "Half caf latte" and the machine does nothing except open a small drawer.
"You have to spit in the drawer so it can collect your DNA--then it will make your coffee" the billionaire says.
This pretty much sums up the whole tech scene today. Any good engineer could build that coffee machine, but no VC would fund her company without the spit drawer.
We are, however, at the “we need an AI strategy” stage, so execs will throw anything and everything at the wall to see what sticks.
Users don't have many other options to switch to. Even if they did, the B2B/advertising revenue they get makes up for any losses they may take on the consumer side.
Plus, the general economic outlook is negative, and AI is the bright spot. They are striving to keep growth up amid downward pressure.
They spent a ton of money, and/or they see everyone's LinkedIn posts and fantastic news stories from someone selling BS, and they're afraid to say the emperor has no clothes.
They want to explore what is possible and what sticks with users.
The best way to do this is to just push it into their apps in as many places as possible, since:
1. You get a nice list of real-world problems to try to solve.
2. You put more pressure on devs to actually make something that works, because it is going into production.
3. You get feedback from millions of users.
Also, by working heavily with their AI, they will discover areas that can be improved and thus make the AI itself better.
They don't care that it is annoying, unhelpful or uneconomical because the purpose is experimentation.
Unfortunately these reckless investments are likely to cause massive collateral damage to us 'little people'. But I'm sure the billionaires will be just fine.
I am, personally, quite optimistic about the potential of "AI," but there are still plenty of warts.
Just a few minutes ago, I was going through a SwiftUI debugging session with ChatGPT. SwiftUI View problems are notoriously difficult to debug, and the LLM got to the "let's try circling twice widdershins" stage. At that point, I re-engaged my brain and figured out a much simpler solution than the one proposed.
However, it gave me the necessary starting point to figure it out.
> Not my problem.
I don't know about the author specifically, but the bubble popping would be a very bad thing for many people. People keep saying that this bubble isn't so bad because it's concentrated on the balance sheets of very deep-pocketed tech companies that can survive the crash. I think that is basically true, but a lot is riding on the stock valuations of these big tech companies, and lots of bad stuff will happen when they crash. It's obviously bad for the people holding these stocks, but I think these tech stocks are so big that there is a real risk of widespread contagion.
AI really needs R&D time, where we first figure out what it’s good for and how best to exploit it.
But R&D for software is dead. Users proved to be super-resilient to buggy or mismatched software. They adapt. 'Good enough' often doesn't look it. Private equity sez throw EVERYTHING at the wall and chase what sticks…
You need to push slop, because people don’t really want it.
> We see the hallucinations. We see the errors. Let's pick the things which work and slowly integrate it into our lives. We don't need to do it this quarter just because some startup has to do an earnings call.
Citation needed?
To me this is a contentless rant. AI is about old billionaires getting richer before they die? It’s at least a lot more than that.
Seems like some people tried AI in 2023, developed a negative affinity for it, and then never updated with new information. In my personal use, hallucinations are way down, and it's just getting more and more useful.
When he says "billionaires making more billions" it's really off the mark. These people are not forcing AI down our throats to make billions.
They are doing it so they can win.
Winning means victory in a zero-sum game. This game is zero-sum because the people who play it think that way. However, the point is not to make money. That's a side effect.
They want to win so the other guys don't. That means power, growth, prestige, and winning. Winning just to win.
The sooner people understand that this is the prime directive for the elite in the tech business, the easier it is for everyone to defend against it.
The way you defend against it is to make it non-zero-sum. Spread the ideas out. Give people choices. Act more like Tim Berners-Lee and less like Zuck. This will mean less money, sure, but more to the point, it deprives anyone of being "the winner". We should all celebrate any moves that take power away from the few and redistribute it to the many. Money will be made in the process, but that's okay.
E.g., programming: I do judge not only those who use AI to code but also the execs who force people to use AI to code. Sorry, I'd like to know how my code works. Sorry, you're not an efficient worker; you're just making yourself dumber and churning out garbage software. It will be a competitive advantage for me when slop programmers don't know how to do anything and I can actually solve problems. Silicon Valley tech utopians cannot convince me otherwise. I don't think poorly socialized dweebs know much about anything other than their AI girlfriends providing them with a simulation of what it feels like to not be lonely.
I support this, but the Smarter Than Me types say it's impossible. It's not possible to track down an adequate number of copyright holders, much less get their permission, much less afford to pay them, for the number of works required to get the LLM to achieve "liftoff".
I would think that, as I use Claude for coding, it would work just as well if it didn't suck down the last three years of NYT articles as if it did. There's a vast amount of content in the public domain, and if you're ChatGPT you can make deals with a bunch of big publishers to get more modern content too. But that's my know-nothing take.
Maybe the issue is more about the image content. Screw the image content (and definitely the music content; Spotify pushing this slop is immensely offensive), pay the artists. My code, OTOH, is open source, MIT licensed. It's not art at all. Go get it (though throw me a few thousand bucks every year because you want to do the right thing).
Some "No True Scotsman"-flavored cope.
I’m not opposed to any of the above, necessarily. I’ve just always been the type to want to adopt things as they are needed and bring demonstrable value to myself or my employer and not a moment before. That is the problem that capital has tried to solve through “founder mode” startups and exploitative business models: “It doesn’t matter whether you need it or not, what matters is that we’re forcing you to pay for it so we get our returns.”
The thing is, AI did suck in 2023, and even in 2024, but recently the best AI models are veering into not-sucking territory, which makes sense when you look at it from a distance: if you throw the smartest researchers on the planet and billions of dollars at a problem, eventually something will give and the wheels will start turning.
There is a strange blindness many people have on here, a steadfast belief that AI will just never end up working or will always be a scam, but the massive capex on AI now is predicated on the eventual turning of fledgling LLMs into self-adaptive systems that can manage any cognitive task better than a human. I don't see how the improvements we've seen over the past few years in AI aren't heading in exactly that direction.