This isn't always a great indicator.
I can't stand Google Docs as an interface to write with, so I use VIM and then copy/paste the completed document into it.
I've started having AI write those documents. Each one used to take me a full week to produce; now it's maybe one day, including editing. I don't feel bad about it. I'm ecstatic about it, actually; this shouldn't be part of my job, so reducing its footprint in my life is a blessing. Someday, someone will realize that such documents do not need to exist in the first place, but that's not the world we live in right now, and I can't change it. I'm just glad AI exists for this kind of pointless yeoman's work.
This was before vibe coding, around the days of GPT-3.5. At the time I just thought it was a challenging topic and my colleague was probably preoccupied with other things, so we parked the talk.
A few weeks later, while exploring ways to use GPT for technical tasks, I suddenly remembered that Slack chat and realised the person had been copy-pasting my messages to GPT and back. I really felt bad at that moment, like… how can you do this to someone…? It's not bad that you try tools to find information or whatever, but not disclosing that you're effectively replacing your agency with that of a bot is just very suboptimal and probably disrespectful.
This
When I receive a PR, of course it's natural that an AI is involved.
The mortal sin is the rubber stamp.
If they haven’t read their own PR, I only have so many warnings in me. And yes, it is highly visible.
Maybe we need a different document structure: something that has verification/justification built in.
I'd like to see a conclusion up front ("We should invest $x billion on a new factory in Malaysia") followed by an interrogation dialogue with all the obvious questions answered: "Why Malaysia and not Indonesia?", "Why $x and not $y billion?", etc.
At that point, maybe I don't care if the whole thing was produced by AI. As long as I have the justification in front of me, I'm happy. And this format makes it easy to see what's missing. If there's a question I would have asked that's not in the document, then it's not ready.
I assume they are working at a business to make money, not a school or a writing competition.
At least a "Generated by AI, reviewed and edited by xyz" tag would be some indicator of effort and accountability.
It may not be wrong to use AI to generate things whole cloth, but it definitely sidesteps something important and calls into question the "prompter's" contributions to the whole thing.
I was later asked why it was taking so long to complete the task when the document had a step-by-step recipe. I had to explain why the AI was solving the wrong problem in the wrong place. The PMs did not understand and scheduled more meetings to solve the problem. All they knew was that tickets were not moving on the board.
I suddenly realized that nobody had any idea what was going on at a technical level. Their contribution was to fret about target dates and executive reports. It's like a pyramid scheme of technical ignorance. The consequence is some ICs being forced to do uncompensated overtime to actually make working software.
These are the unintended consequences of the AI hype that CEOs are evangelizing.
There's been a lot of social contract undermining lately. Please, does anyone know of something that can be done to reverse this? A social contract of "F you, I got mine" isn't very appealing to me, but that seems to be the current approach.
This is _exactly_ how I feel. Any time saved by precooking a "plan" (typically half-baked ideas) with AI isn't really time saved; it is a transfer of work from the planner to whoever is going to implement the plan.
I feel like more time is wasted trying to catch your coworkers using AI than just engaging with the plan. If it's a bad plan, say that and make sure your coworker is held accountable for presenting a bad plan. But it shouldn't matter if he gave 5 bullets to ChatGPT and it expanded them into a full page with a detailed plan.
1. If the output is solid, does it matter?
2. The author could simply have done the research, created the plan, and then given an LLM the bullet points of the research and told it to "make this into a presentable plan". The author does the heavy lifting and the actual creative work, and outsources the manual formatting to the LLM. My wife speaks English as a second language; she much prefers telling an LLM what she is trying to say and having it generate a business-friendly email than writing it herself and letting grammatical mistakes slip in.
3. If I were to write a paper in my favorite text editor and then put it through pandoc to generate a Word doc, it would do the same thing.
Later, at someone else's desk:
"Chat, summarize these 10 pages into 3 points."
If someone just generates an incredibly detailed plan in one go, that destroys the process. Others are now wasting time looking at details in something that may not even be a good idea if you step back.
The successive refinement flow doesn't preclude consideration of input from AI.
This comment was generated by chatgpt (inspired by me).
Agree with the premise but this part is off. When I find a project online, I assume it will be abandoned within a year unless I see evidence of a substantive team and/or prior long-term time investments.
> My own take on AI etiquette is that AI output can only be relayed if it's either adopted as your own or there is explicit consent from the receiving party.
Because the prompter is basically gaslighting reviewers into doing work for them. They put their marks of authorship on the AI slop when they've barely looked at it at all, which convinces the reviewer to look. When the comments come back, they pump the feedback into the LLM, more slop falls out, and around we go again. The prompter isn't really doing work at all; the reviewers are.
For example, suppose someone likes to work in Markdown using VSCode. To get the kind of Word document that everyone else expects, you just copy and paste into Word. AI isn't involved, but it will look exactly like AI to you.
And there are more complicated hybrids. For example, my wife has a workflow where everything that she does, communications, and so on, winds up in Markdown in Obsidian. She adds information about who was at the meeting, including basic research into them done by an agent (company directory, title, LinkedIn, and so on - all good to know for someone working in sales). Her AI assistant then extracts out bullet points, cross references, and so on. She uses that to create summaries that she references whenever she goes back to that project. And if someone wants to know what has happened or is currently planned for that project, AI extracts that from the same repository.
There's lots of AI in this workflow. But the content and thought is mostly from her. (With facts from searches that an agent did.) The fact that she's automated a lot of her organizational scutwork to an AI doesn't make the output "AI slop".
I look at the output and ask it to re-re-verify its results, but at the end of the day the LLM is doing the work and I am handing that off to others.
Why aren't people using LLMs to shorten rather than lengthen their plans? You know what you meant, so you can validate whether the shorter version still hits the points you care about. Whereas if I use an LLM to shorten your email, there is always a risk I've now missed your main point.
Cleaning up grammar, punctuation, spelling, etc. is a good thing worth doing, but adding padding is exclusively irritating.
When used right, ideas could be distilled rather than extrapolated into slop. So maybe it's not ALL BAD?
I propose a new quotation system, the three-quote marker, to disclose text written or assisted by AI:
'''You are absolutely right'''
Until AI is used to fake that, too.
The whole LLM paranoia is devolving into hysteria. Lots of finger pointing without proof, lots of shoddy evidence put forward, and points that miss the nuance.
My stance is this: I don't really care whether someone used an LLM or wrote it themselves. My observation is that in both cases people were mostly wrong and required strict reviews and verification, with the exception of those who did Great Work.
There are still people who do Great Work, and even when they use llms the output is exceptional.
So my job hasn't changed much, I'm just reading more emojis.
If you find yourself becoming irrationally upset by something you're encountering that's largely outside of your control, consider going to therapy instead of forming a borderline obsession with purity around something that has always been a bit slippery (creative originality).
The author does not mention whether the generated project plan actually looked good or plausible. If it did, where is the harm? Just that the manager had their feelings hurt?
Each can be seen as using a tool to add false legitimacy. But ultimately they are just tools.