If this were so easy, Teams wouldn't suck, Matrix would be everywhere, and Discord would already have been replaced by the furries (as much as Stoat is trying).
Why not build on something better like Matrix? Or Signal?[0] Or even Keybase?
I really do agree we need to move away from Slack and Discord, but I'm also very confused why the call to action is to Anthropic. IMO we should really be pushing for open systems so that nobody can take it from us. Otherwise we repeat the cycle again and again. There are some good protocols to start from. I'd also say this is a good reason to make sure that the things you work on are hackable. It's how we combine different domains of expertise.
[0] see the Molly project, you don't have to use Signal's servers
Slack has massive lock-in due to cross-organization connections. The only way you're going to get people off Slack is to build a 10x better model for collaboration than river-of-shit chat, and while such models probably exist, you also have to convince people that they are better.
I wish whoever tries this the best of luck.
For compliance, my company already has a tool that scrapes all Slack messages and archives them for the required number of years. I'm at a small company, so I assume large corporations have already refined this process.
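For what it's worth, the scraping half of this is small: Slack's Web API pages `conversations.history` results with a cursor in `response_metadata.next_cursor`. A minimal sketch of the pagination loop, where `fetch_page` stands in for the actual API call (e.g. a thin wrapper over a `slack_sdk` client), not any particular archiving product:

```python
from typing import Callable, Dict, List


def archive_channel(fetch_page: Callable[[str], Dict]) -> List[Dict]:
    """Collect every message in a channel by following cursor pagination.

    `fetch_page(cursor)` is expected to return a dict shaped like Slack's
    conversations.history response:
        {"messages": [...], "response_metadata": {"next_cursor": "..."}}
    An empty cursor means the last page has been reached.
    """
    messages: List[Dict] = []
    cursor = ""
    while True:
        page = fetch_page(cursor)
        messages.extend(page.get("messages", []))
        cursor = page.get("response_metadata", {}).get("next_cursor", "")
        if not cursor:
            return messages
```

From there, archiving for N years is just writing the collected JSON somewhere append-only.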
What problem does this solve?
The migration out of Slack is actually quite easy and preserves all messages, files, etc. Even the user migration is straightforward, keeping Google or whoever as the identity provider if you prefer.
For being a blog post about problems with Slack's policies, it's odd that it has no details whatsoever on what the issues actually are.
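On the migration point: Slack's standard export is a zip of per-channel folders containing daily JSON files, so the message-preserving part really can be a short script. A rough sketch assuming that layout (field names per the export format; the helper names are mine):

```python
import json
from pathlib import Path
from typing import Dict, List


def merge_days(day_files: List[List[Dict]]) -> List[Dict]:
    """Merge per-day message lists from a Slack export into one timeline,
    ordered by the "ts" timestamp string Slack stores on each message."""
    merged = [m for day in day_files for m in day]
    return sorted(merged, key=lambda m: float(m["ts"]))


def load_channel(channel_dir: Path) -> List[Dict]:
    """Read every daily JSON file in one channel folder of an unzipped export."""
    days = [json.loads(p.read_text()) for p in sorted(channel_dir.glob("*.json"))]
    return merge_days(days)
```

Users map over via the export's users.json, and the identity provider side stays with Google/Okta/whoever as the comment says.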
Am I out of touch here, or is this a crazy entitled view? 'My close-to-free AI agent that can answer most things requires me to copy/paste and contextualise my questions!'. This is incredible compared to even a few years ago, and it's very fast and accurate.
So, by the same argument, why do we need Teams?
> Claude has a glaring limitation: it only does 1:1 conversations. In business, work happens in groups. Today, if I want Claude's help with something that came up in a Slack thread, I have to relay the context between Slack and Claude by copy-pasting. This is absurd. I am not a sub-agent!
It seems to me that LLMs/chatbots are engineered for one thing above ground-level truth, and that is attention. The more people you bring into a shared context, the harder it becomes to retain everyone's attention.
Here is my anecdotal evidence for this: when I chat with a chatbot, I find its answers and line of thinking relevant, compelling, and worth engaging with. However, when people share their "chatbot links" with me and I read their conversations, I have yet to find one compelling or worth engaging with. Maybe the newer models are good enough to retain the attention of a large group, but I don't see it happening.
So there is nothing stopping you from taking all your company's Slack data in real time and feeding it into any LLM or external product you want.
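Concretely, Slack's Events API delivers each message as a JSON callback, so the glue is mostly a filter-and-format step before handing text to whatever LLM pipeline you run. A minimal sketch (payload shape per Slack's Events API; the function name is mine):

```python
from typing import Dict, Optional


def event_to_context(payload: Dict) -> Optional[str]:
    """Turn one Slack Events API delivery into a context line for an LLM prompt.

    Returns None for anything that isn't a plain user message (URL
    verification handshakes, message edits/joins via "subtype", and bot
    echoes via "bot_id"), so the pipeline only forwards real conversation.
    """
    if payload.get("type") != "event_callback":
        return None
    event = payload.get("event", {})
    if event.get("type") != "message" or event.get("subtype") or event.get("bot_id"):
        return None
    return f"[{event.get('user', 'unknown')}] {event.get('text', '')}"
```

Batch those lines per channel and you have a running transcript to feed any model in near real time.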
Hey, if I thought the "most important repository of text data" were inaccessible to my data pipeline company, I'd likely also be shouting from the rooftops like this CEO, trying to get people to dethrone the king with a competitor whose principles align with my business.
It seems like it could be anyone, as long as they offer an open API for accessing conversations. Mentioning Anthropic here just feels buzzwordy and in vogue enough to get the blog post traction. That seems to work for clicks, but it will likely not give you a new king.
We're a 3-person startup (2 humans + 1 AI agent). Yesterday we had a 40-minute product positioning discussion in Slack — all three of us. The AI agent wasn't summarizing after the fact or answering questions in a sidebar. It was in the thread, in real time, doing these things simultaneously:
1. Synthesizing two humans' conflicting viewpoints into a framework (one wanted to position as "open-source Linear," the other insisted on "agent harness for product development" — the agent articulated why the distinction matters and took a side)
2. Generating investor personas and tailored one-liners for each audience when asked
3. Building a comparison slide (Chorus vs Linear agent workflow) and uploading it to the thread mid-conversation
4. Answering technical challenges ("can't Linear just build a plugin to do the same thing?") with honest analysis — "technically yes, but they won't prioritize it because 95% of their users are traditional teams"
The output: 5 audience-segmented positioning statements, a competitive analysis slide, an investor target list, and a new internal tool (Slack file upload skill) — all produced during a natural conversation, not as a separate "ask the AI" step.
A better Slack wouldn't have helped here. What helped was an AI agent that sits in the same channel, has full project context, can disagree with the founder, and executes tasks while still participating in the discussion.
We're building this at Chorus (open source, github.com/Chorus-AIDLC/Chorus) — it's a control plane for AI agents that build products. The agent runs on OpenClaw. The insight is: you don't need a new communication tool. You need your existing communication tool to have a third kind of participant that actually does work.
Yes it can? We have agents in Slack as first class participants. They can even use Slack search.
As for the specific complaints about not owning your data: we're building the product so that you own your data and can run your agents and read your messages however often you want. Obviously, once we build a platform and others build third-party apps, we will have to impose some restrictions, so it will be a balance to strike going forward.
That means, by default, every Claude Code user is actively getting royally screwed
And what is so different about today’s dream of “agents” accessing private company data and functionality?
It is a lovely dream that I would be very happy to see. What can we do differently this time around?
Then it got acquired by GitHub in 2018, presumably integrated into the main product, and their separate offering disappeared from the web (taking lots of valuable discussion with them).
Cowork / Code are interfaces for individual knowledge workers, the PM / EM team orchestration layer is the obvious play for ‘26.
OpenClaw fully supports team chat inside Slack and works with Claude.
Say you need to present a new statistic to a prospective partner, or an enterprise client has an operational issue that needs to be escalated. Sales/account management pings people, and pretty soon there's a web of connections that range between email, ticketing systems, Slack, and Claude Code sessions. Someone being brought in needs to be brought up to speed on that entire web. It's a highly focused conversation with human and AI participants, that (because human counterparties need to weigh in) by definition must happen in parallel with other work.
So many companies would benefit from a Hub that speaks agentic workflows, and streams progress token by token, fluently.
Could Anthropic excel at building a backend for this? Absolutely.
Could they excel at building a frontend that takes the world by storm the way Slack did, with its radical simplicity? Unfortunately I'm not as confident here. Consider that their VS Code plugin lags their terminal TUI so massively that it still is impossible to rename sessions [0], much less use things like remote-control functionality.
Show me that they can treat native-feeling multi-platform UI with as much care as they do their agentic loops, and I'll show you a company that could change every business forever.
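The streaming part of such a hub is mostly plumbing: batch tokens into edits so you don't trip a chat API's rate limits while the message still appears to grow word by word. A toy sketch (function name and batching policy are assumptions, not any product's API):

```python
from typing import Callable, Iterable


def relay_tokens(tokens: Iterable[str], send: Callable[[str], None],
                 flush_every: int = 1) -> str:
    """Forward model tokens to a chat surface as they arrive.

    Batches `flush_every` tokens per `send` call (one call per message
    edit) to stay under rate limits, and returns the fully assembled
    message for the hub's archive.
    """
    buffer: list[str] = []
    assembled: list[str] = []
    for tok in tokens:
        buffer.append(tok)
        assembled.append(tok)
        if len(buffer) >= flush_every:
            send("".join(buffer))
            buffer.clear()
    if buffer:  # flush whatever remains after the stream ends
        send("".join(buffer))
    return "".join(assembled)
```

The hard part the comment points at is not this loop; it is making the UI around it feel native on every platform.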
Perhaps that info can be fed into Maven, too, in case domestic dissenters need to be targeted.
You'll rue the day when they decide to release a Slack lookalike.
For a developer like me, the Slack bot has already proven useful for digging out info. Slack also supports kanban boards, so it could probably replace Jira/Asana/etc. for documenting the system. In Salesforce, "vibes" can already tell you a lot about your Salesforce implementation. Connect it all up and you've got a pretty useful package. Sadly, Salesforce is moving too slowly here.
ChatGPT has group chats, but it's not the same. I think the closest is Airtable, where you can collaborate on data.
People are so weird.
Never used it but interesting
But yeah, Slack could use some competition. Let's see; it would make sense, and it would make Anthropic even more sticky in the enterprise.