This article assumes that AI is the centre of the universe, failing to understand that that assumption is exactly what's causing the attitude it's pointing to.
There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI," which is discussed in the article. But moving toward one pole moves you away from the other.
I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.
So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.
Accordingly, I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I were more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency.)
1) A third party app simply cannot compete with Google Maps on coverage, accuracy and being up to date. Yes, there are APIs you can use to access this, but they're expensive and limited, which leads us to the second problem:
2) You can't make money off them. Nobody will pay to use your app (because there's so much free competition), and the monetization opportunities are very limited. It's too late in the flow to sell flights, you can't compete with Booking etc for hotel search, and big ticket attractions don't pay commissions for referrals. That leaves you with referrals for tours, but people who pay for tours are not the ones trying to DIY their trip planning in the first place.
I feel fatigued by AI. To be more precise, this fatigue has several factors. The first is that a lot of people around me get excited by events in the AI world that I find distracting. These might be new FOSS library releases, news announcements from the big players, new models, new papers. As one person, I can only work on 2-3 things at any given time. Ideally I would like to focus and go deep on those things. Often, I need to learn something new, and that takes time, energy and focus. This constant Brownian motion of ideas gives a sense of progress and of "keeping up" but, for me at least, acts as a constantly tapped brake.
Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, or try to build a theoretical framework when one can just present the problem to a model? I use LLMs too, but it is more satisfying, productive, and insightful when one actually thinks hard and understands a topic before using them.
Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years and new developments are easy to pick up. Even if they did, trying to keep up results in very shallow and broad knowledge that one can't actually use. There are a million things going on and I am completely at peace with not knowing most of them.
Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.
I am actually impressed with and heavily use models. The tiresome part now is some of the humans around the technology who engage in the behaviors listed above.
1. You were a therapy session for her. Her negativity was about the layoffs.
2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.
3. The AI scene in Seattle is pretty good, but as with everywhere else it was/is a victim of the AI hype. I see estimates of the hype being dead in a year. AI won't be dead, but throwing money at whatever Uber-for-pets-but-with-AI idea pops up won't happen.
4. I don't think people hate AI, they hate the hype.
Anyways, your app actually does sound interesting so I signed up for it.
And not just for travel, by the way... I love just exploring maps and seeing a place. I'd love to learn more about a place, kind of like a mesh between Wikipedia and a map, and AI could help with that.
There will absolutely be some cases where AI is used well. But probably the larger fraction will be cases where AI does not give a better service, experience, or tool. It will be used to give a cheaper but shittier one. This will be a big win for the company or service implementing it, but it will suck for literally everybody else involved.
I really believe there's huge value in implementing AI pervasively. However, it's going to be really hard work and will probably take 5 years to do well. We need to take an engineering and human-centred approach and do it steadily and incrementally over time. The current semi-religious fervour about implementing it rapidly and recklessly is going to be very harmful in the longer term.
> I wanted her take on Wanderfugl, the AI-powered map I've been building full-time.
I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.
> none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she's forced to use at work. My product barely featured. Her reaction wasn't about me at all. It was about her entire environment.
She was given two context clues. AI. And maps. Maps work, which means all the information in an "AI-powered map" descriptor rests on the adjective.
This is a product of hurt feelings and not solid logic.
My first reaction was "replace 'AI' with the word 'Cloud'" circa 2012 at MS; what's novel here?
With that in mind, I'm not sure there is anything novel about how your friend is feeling or the organizational dynamics, or in fact how large corporations go after business opportunities; on those terms, I think your friend's feelings are a little boring, or at least don't give us any new market data.
In MS in that era, there was a massive gold rush inside the org to Cloud-ify everything and move to Azure - people who did well at that prospered, people who did not, ... often did not. This sort of internal marketplace is endemic, and probably a good thing at large tech companies - from the senior leadership side, seeing how employees vote with their feet is valuable - as is, often, the directional leadership you get from a Satya who has MUCH more information than someone on the ground in any mid-level role.
While I'm sure there were many naysayers about the Cloud in 2012, they were wrong, full stop. Azure is immensely valuable. It was right to dig in on it and compete with AWS.
I personally think Satya's got a really interesting hyper scaling strategy right now -- build out national-security-friendly datacenters all over the world -- and I think that's going to pay -- but I could be wrong, and his strategy might be much more sophisticated and diverse than that; either way, I'm pretty sure Seattleites who hate how AI has disrupted their orgs and changed power politics and winners and losers in-house will have to roll with the program over the next five years and figure out where they stand and what they want to work on.
AI pushed down everywhere. Sometimes shitty AI that needed to be proven out at all costs so it would live up to the hype.
I was in one such AI org, and even there several teams felt the pressure from SLT and a culture drift toward a dysfunctional environment.
Such pressure to use AI at all costs, as other fellows from Google mentioned, has been a secret ingredient of a bitter burnout. I’m in therapy and on medication now to recover from it.
I still think there’s a third path, one that makes people’s lives better with thoughtful, respectful, and human-first use of AI. But for some reason there aren’t many people working on that.
Right now, late in the business cycle, "tech" companies are dominated by non-technical people, who don't know how to write software, and aren't even capable of thinking through a real problem well enough to design software to solve it. This happens because people make up imposter roles like scrum master, and product manager, and then convert their friends into these roles to get them jobs, and build up their own political faction at a company. The salaries and opinions of these roles directly crowd out those of real talent.
Take a minute to cut through the bullshit about what each role is supposed to contribute at each part of the development cycle, and focus on the amount of decision-making influence that each person has over engineering resources. It's not weighted towards the innovative or creative people; it's probably inversely weighted. That's all you need to know; don't expect good products until that's fixed.
I'll know things have come around full circle when startups are recruiting with: huge management ratios (10+) or flat orgs, remote work or private offices, no product org, everyone in eng can program, everyone in sales can sell, etc. as selling points.
It hits weirdly close to home. Our leadership did not technically mandate use, but 'strongly encourages' it. I haven't even had my review yet, but I know that once we get to the goals part, use of AI tools will be an actual metric (which, from wherever I sit between skeptic and evangelist, is dumb).
But the 'AI talent' part fits. For mundane stuff like a data model, I need full committee approval from people who don't get it anyway (and whose entire contribution is 'what other companies are doing').
So what's different between Seattle and San Francisco? Does Seattle have more employee-workers and San Francisco has more people hustling for their startup?
I assume Bali (being a vacation destination) is full of people who are wealthy enough to feel they're insulated from whatever will happen.
You know who's NOT divided? Everyone outside the tech/management world. Antipathy towards AI is extremely widespread.
They should focus more on data engineering/science and other similar fields, which are a lot more about that, but since there are often no tests there, that's a bit too risky.
I live in Seattle, and got laid off from Microsoft as a PM in Jan of this year.
I tried in early 2024 to demonstrate how we could leverage smaller models (such as Mixtral) to improve documentation and tailor code samples for our auth libraries.
The usual “fiefdom” politics took over and the project never gained steam. I do feel like I was put in a certain “non-AI” category and my career stalled, even though I took the time to build AI-integrated prototypes and present them to leadership.
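For what it's worth, the prototype idea was nothing exotic. A minimal sketch, assuming a locally hosted small model (e.g. Mixtral) behind an OpenAI-compatible chat endpoint; the endpoint URL, model name, and prompt wording here are illustrative assumptions, not the actual internal tooling:

```python
# Sketch: ask a locally hosted small model to tailor a generic auth code
# sample to a specific framework. Endpoint, model name, and prompt are
# illustrative assumptions, not real internal tooling.
import json
from urllib import request

def build_tailor_prompt(sample: str, framework: str) -> str:
    """Compose an instruction asking the model to adapt a code sample."""
    return (
        f"Rewrite the following auth library code sample for {framework}, "
        f"keeping the same flow and error handling:\n\n{sample}"
    )

def tailor_sample(sample: str, framework: str,
                  endpoint: str = "http://localhost:8000/v1/chat/completions",
                  model: str = "mixtral-8x7b-instruct") -> str:
    """Send the prompt to a local OpenAI-compatible inference server."""
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": build_tailor_prompt(sample, framework)}
        ],
        "temperature": 0.2,  # low temperature: we want faithful rewrites
    }
    req = request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

A human still reviews the output before it goes anywhere near the docs; the point was cutting the grunt work, not removing the writer.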
It’s hard to put on a smile and go through interviews right now. It feels like the hard-earned skills we bring to the table are being so hastily devalued, and for what exactly?
I consider it divine intervention that I departed shortly before LLMs got big. I can't imagine the unholy machinations my former team has been tasked with working on since I left.
Electrical engineering? Garbage.
Construction projects? Useless.
But code is code everywhere, and the immense amount of training data available in the form of working code and tutorials, design and style guides, means that the output for software development doesn't really resemble what anybody working in any other field sees. Even in adjacent technical fields.
> It felt like the culture wanted change.
>
> That world is gone.
Ummm source?
> This belief system—that AI is useless and that you're not good enough to work on it anyway
I actually don't know anyone with this belief system. I'm pretty slow on picking up a lot of AI tooling, but I was slow to pick up JS frameworks as well.
It's just smart to not immediately jump on a bandwagon when things are changing so fast because there is a good chance you're backing the wrong horse.
And by the way, you sound ridiculous when you call me a dinosaur just because I haven't started using a tool that didn't even exist 6 months ago. FOMO sales tactics don't work on everyone, sorry to break it to you.
When the singularity hits who knows how many years from now, do you really think it's one of these LLM wrapper products that's going to be the difference maker? Again, sorry to break it to you, but that's a party you and I are not going to get invited to. There's a 0% chance governments would actually allow true superintelligence as a direct-to-consumer product.
I think the SEA and SF tech scenes are hard to differentiate perfectly in an HN comment. However, I think any "Seattle hates AI" sentiment has more to do with the incessant pushing of AI into all the tech spaces.
It's being claimed as the next major evolution of computing, while also being cited as reasons for layoffs. Sounds like a positive for some (rich people) and a negative for many other people.
It's being forced into new features of existing products, while adoption of said features is low. This feels like cult-like behavior where you must be in favor of AI in your products, or else you're considered a luddite.
I think the confusing thing to me is that things which are successful don't typically need to be touted so aggressively. I'm on the younger side and generally positive to developments in tech, but the spending and the CEO group-think around "AI all the things" doesn't sit well as being aligned with a naturally successful development. Also, maybe I'm just burned out on ads in podcasts for "is your workforce using Agentic AI to optimize ..."
For the better, or for the worse?
I'm not sure why. I don't think it's access to capital, but I'd love to hear thoughts.
I expect it to settle out in a few years where:

1. The fiduciary duties owed to company shareholders will bring companies to the point of stopping the chase of AI hype and instead working out whether it's driving real top-line value for their business or not.

2. Mid- to senior-career engineers will have no choice but to level up their AI skills to stay relevant in the modern workforce.
"I said, Imagine how cool would this be if we had like, a 10-foot wall. It’s interactive and it’s historical. And you could talk to Martin Luther King, and you could say, ‘Well, Dr, Martin Luther King, I’ve always wanted to meet you. What was your day like today? What did you have for breakfast?’ And he comes back and he talks to you right now."
Oddly, the screenshots in the article show the name as "Wanderfull".
Visual Studio is great. IntelliSense is great. Nothing open-source works on our giant legacy C++ codebase. IntelliSense does.
Claude is great. Claude can't deal with millions of lines of C++.
You know what would be great? If Microsoft gave Claude the ability to semantic search the same way that I can with Ctrl-, in Visual Studio. You know what would be even better? If it could also set breakpoints and inspect stuff in the Debugger.
You know what Microsoft has done? Added a setting to Visual Studio where I can replace the IntelliSense auto-complete UI, that provides real information determined from semantic analysis of the codebase and allows me to cycle through a menu of possibilities, with an auto-complete UI that gives me a single suggestion of complete bullshit.
Can't you put the AI people and the Visual Studio people in a fucking room together? Figure out how LLMs can augment your already-really-good-before-AI product? How to leverage your existing products to let Claude do stuff that Claude Code can't do?
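To make concrete what "give Claude the ability" would mean: these integrations are just tool definitions handed to a model's tool-calling API. A sketch of what Visual Studio could expose, with tool names and parameters invented purely for illustration (no such integration exists as far as I know):

```python
# Hypothetical tool definitions in the JSON shape commonly used by LLM
# tool-calling APIs. The names, descriptions, and parameters are invented
# to illustrate the idea; this is not an existing Visual Studio feature.
semantic_search_tool = {
    "name": "vs_semantic_search",
    "description": "Search the codebase by symbol, like Ctrl-, in Visual Studio.",
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Symbol or member name"},
            "kind": {"type": "string", "enum": ["type", "member", "file"]},
        },
        "required": ["query"],
    },
}

set_breakpoint_tool = {
    "name": "vs_set_breakpoint",
    "description": "Set a breakpoint at a given file and line in the debugger.",
    "input_schema": {
        "type": "object",
        "properties": {
            "file": {"type": "string"},
            "line": {"type": "integer"},
        },
        "required": ["file", "line"],
    },
}
```

The model never needs to ingest millions of lines of C++; it just calls the search tool the way I hit Ctrl-, and works from the results.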
It's not about their careers. It's about the injustice of the whole situation. Can you possibly perceive the injustice? That the thing they're pissed about is the injustice? You're part of the problem because you can't.
That's why it's not about whether the tools are good or bad. Most of them suck, also, but occasionally they don't--but that's not the point. The point is the injustice of having them shoved in your face; of having everything that could be doing good work pivot to AI instead; of everyone shamelessly bandwagoning it and ignoring everything else; etc.
Look, good engineers just want to do good work. We want to use good tools to do good work, and I was an early proponent of using these tools in ways to help the business function better at PriorCo. But because I was on the wrong team (On-Prem), and because I didn’t use their chatbots constantly (I was already pitching agents before they were a defined thing, I just suck at vocabulary), I was ripe for being thrown out. That built a serious resentment towards the tooling for the actions of shitty humans.
I’m not alone in these feelings of resentment. There’s a lot of us, because instead of trusting engineers to do good work with good tools, a handful of rich fucks decided they knew technology better than the engineers building the fucking things.
Here's the deal. Everyone I know who is infatuated with AI shares things AI told them with me, unsolicited, and it's always so amazingly garbage, but they don't see it or they apologize it away [1]. And this garbage is being shoved in my face from every angle --- my browser added it, my search engine added it, my desktop OS added it, my mobile OS added it, some of my banks are pushing it, AI comment slop is ruining discussion forums everywhere (even more than they already were, which is impressive!). In the meantime, AI is sucking up all the GPUs, all the RAM, and all the kWh.
If AI is actually working for you, great, but you're going to have to show it. Otherwise, I'm just going to go into my cave and come out in 5 years and hope things got better.
[1] Just a couple days ago, my spouse was complaining to her friend about a change that Facebook made, and her friend pasted an AI suggestion for how to fix it with like 7 steps that were all fabricated. That isn't helpful at all. It's even less helpful than if the friend had just suggested contacting support and/or deleting the Facebook account.
[…]
“Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.”
Nope, still completely fucking tone deaf.
> This belief system—that AI is useless and that you're not good enough to work on it anyway—hurts three groups
I don't know anyone who thinks AI is useless. In fact, I've seen quite a few places where it can be quite useful. Instead, I think it's massively overhyped to its own detriment. This article presents the author as the person who has the One True Vision, and all us skeptics are just tragically undereducated.
I'm a crusty old engineer. In my career, I've seen RAD tooling, CASE tools, no/low-code tools, SGML/XML, and Web3 not live up to the lofty claims of the devotees and therefore become radioactive despite there being some useful bits in there. I suspect AI is headed down the same path and see (and hear of) more and more projects that start out looking really impressive and then crumble after a few promising milestones.
I think some of the reasons they give are bullshit, but in fairness I have grown pretty tired of how much low-effort AI slop has been ruining YouTube. I use ChatGPT all the time, but I am growing more than a little frustrated by how much shit on the internet is clearly just generated text with no actual human contribution. I don’t inherently have an issue with “vibe coding”, but it is getting increasingly irritating having to dig through several-thousand-line pull requests of obviously-AI-generated code.
I’m conflicted. I think AI is very cool, but it is so perfectly designed to exploit natural human laziness. It’s a tool that can do tremendous good, but like most things, it requires that people use it with effort, and that does seem to be the outlier case.
[1] basically the hall of shame for bad threads.
Second, engineering and innovation are two different categories. Most of engineering is about ... making things work. Fixing bugs, refactoring fragile code, building new features people need or want. Maybe AI products would be hated less if they were just a little less about pretending to be an innovation and just a little more about making things work.
Satya has completely wasted their early lead in AI. Google is now the leader.
But also, it's not just my own. My wife's a graphic designer. She uses AI all the time.
Honestly, this has been revolutionary for me for getting things done.
Not only because it's destroying creator jobs while also ripping off creators, but it's also producing shit that's offensively bad to professionals.
One thing that people in tech circles might not be aware of is that people outside of tech circles aren't thinking that tech workers are smart. They haven't thought that for a long time. They are generally thinking that tech workers are dimwit exploiter techbros, screwing over everyone. This started before "AI", but now "AI" (and tech billionaires backing certain political elements) has poured gasoline on the fire. Good luck getting dates with people from outside our field of employment. (You could try making your dating profile all about enjoying hiking and dabbling with your acoustic guitar, but they'll quickly know you're the enemy, as soon as you drive up in a Tesla, or as soon you say "actually..." before launching into a libertarian economics spiel over coffee.)
Again a somewhat positive term (if you focus on "back to nature" and ignore the nationalist parts) is taken, assimilated and turned on its head.
My buddies still or until recently still at Amazon have definitely been feeling this same push. Internal culture there has been broken since the post covid layoffs, and layering "AI" over the layoffs leaves a bad taste.
Of course, you could also go online and sulk, I suppose. There are more options between "ZIRP boomtimes lol jobs for everyone!" and "I got fired and replaced with ELIZA". But are tech workers willing to explore them? That's the question.
It just feels like it's in bad taste that we have the most money and privilege and employment left (despite all of the doom and gloom), and we're sitting around feeling sorry for ourselves. If not now, when? And if not us, who?
AI, the underlying algorithms that generate code and analyze images, is quite interesting tech.
I'm not sure they're as wrong as these statements imply?
Do we think there's more or less crap out now with the advent and pervasiveness of AI? Not just from random CEOs pushing things top down, but even from ICs doing their own gig?
It's an infinitely moving goalpost of hate. If it's an actor, a "creative", or a writer, AI is a monolithic doom; next it's theoretical public policy or the lack thereof; and if nothing about it affects them, then it's about the energy use and the environment.
Nobody is going to hear about what your AI does, so don't mention anything about AI unless you're trying to earn or raise money. It's a double life.
I wouldn't shit talk you to your face if you're making an AI thing. However I also understand the frustration and the exhaustion with it, and to be blunt, if a product advertises AI in it, I immediately do treat it more skeptically. If the features are opt-in, fine. If however it seems like the sort of thing that's going to start spamming me with Clippy-style "let our AI do your work for you!" popups whilst I'm trying to learn your fucking software, I will get aggravated extremely fast.
No shit. But that's hardly everyone in Seattle. I'd imagine people at Amazon aren't upset about being forced to use Copilot, nor are Google folks.
Oh yeah, call out a tech city and all the butt-hurt-ness comes out. Perfect example of "Rage Bait".
People here aren't hurt because of AI - people here are hurt because they learned they were just line items in a budget.
When interest rates went up in 2022/2023 and the cheap money went away, businesses had to pivot their tactics while appeasing the shareholders.
Remember that time when Satya went to a company-sponsored rich-people thing with Aerosmith or whomever playing while announcing thousands of FTEs being laid off? Yeah, that...
If your job can be done by a very small shell script, why wasn't it done before?