Hate this
My current company (and, from talking to colleagues, other companies too) is requiring employees to do some AI "lunch and learn" or AI "share out" or AI "show and tell".
It's meeting inflation: a weekly meeting where each employee has to add an item to a doc and talk about it. And every employee has to take part, even if they have nothing to share.
So half the employees just come up with some BS: "Uhhh I tried this Claude Code skill I hadn't tried before...", "Read this article about X...interesting, may be what's coming in the future..."
Imagine calling yourself a "Champion" and dispensing nuggets of wisdom like this:
>> When a colleague asks how you accomplished something, the most useful response is the prompt you actually used. They will learn more from running that prompt against their own problem than from any description you could write, and it gives them something they can act on immediately.
Colleague: How did you get it to find that race condition?
Champion: I asked, "The test in @tests/scheduler.test.ts is flaky, figure out why," and it traced two unjoined promises in the scheduler. Try the same phrasing on your test.
People quickly became too embarrassed to call themselves "prompt engineers." I don't think anyone is champing at the bit to be the office Claude Champion. And the guidance itself:

>> The most effective response is rarely to argue the general case. Instead, acknowledge the concern, offer a brief reframe, and propose one concrete demonstration on the person's own code. Most concerns are resolved by a single successful experience.
First of all, Google didn't have to write this stuff about Kubernetes, suggesting psychological tricks and magic demonstrations to cajole people into agreeing with you. The Kubernetes people were happy to discuss the general case - I don't like k8s and don't think they had a bulletproof argument, but they offered a pretty good one. What Anthropic is doing here is very, very weird. I said "Scientology" earlier and I was not kidding.

Part of the reason LLMs have led me to tear out so much of my own hair is how many people seem to have made it through four years of STEM college without developing any scientific thinking ability whatsoever. A truly stunning number of people have been wowed by "a single successful experience." That section is full of horrible logic:
>> Concern: "I am faster without it."
>> Suggested response: That is likely true for code the person writes routinely. Suggest trying it on the work they tend to avoid: legacy files, unfamiliar services, or test scaffolding, where the leverage is highest.
>> Evidence to offer: Time one tedious task both ways and compare.
This isn't just unscientific and manipulative: it's really goddamn annoying! If someone times me at 1.5 hours reading about and learning an unfamiliar service, and smugly says Claude learned it in 12 seconds of "thinking," either my laptop or a certain Claude Champion is getting thrown out the window.

I sense these AI companies getting desperate. Could it be that the public seems to hate AI? Could it be that they are making huge losses?
I'm being an anti-champion, pushing back on my manager's bullshit claims that AI can do everything, and my message is getting through to my coworkers. More and more doubt is creeping in about the message they're getting from leadership.
Don't let idiots in leadership who are just building their resume as an "AI manager" convince you your hard-earned skills are useless now. Your skills are worth so much! Don't let them atrophy!
Not even landscaping companies do that :D
Anthropic has always been astroturfing everywhere, and now they're making it explicit. When it comes to marketing, it is probably more evil than xAI and OpenAI.
[1] Or at least replace a reasonable profession with a dystopian and wasteful way of plagiarizing software.