Siri: ... "Here's what I found on the web for 'what is my ETA'"
From the outside I don't know the cause, but contrary to their usual reputation for tight integration between parts of their products, it seems like Siri is in some organizational way fundamentally broken.
I want my location to be Singapore (Singapore SIM card, etc.), I want my UI to be in English, and I want Siri to speak Cantonese to me.
For reasons only Apple knows, I have to use English (Singapore) as both my UI and Siri language or Apple Intelligence will not turn on. It's as if the engineers who developed Siri/Apple Intelligence never thought about the needs of people who speak more than one language.
But what’s really lacking is a model for multiple people sharing a single computing experience in real life. Companion mode in Google Meet and Spotify Jam are two attempts, but both still force you through the one-user, one-device path.
Two adults sitting in a car shouldn’t have to constantly think “whose phone is this?” when connected to CarPlay. Especially when they’re part of the same Apple “family” and on a Spotify family plan.
Two people seamlessly interacting with one “system” would break all sorts of auth and other assumptions, but it seems worth figuring out as computing becomes more and more prevalent in every facet of life.
And living without it doesn’t really affect Apple’s bottom line. But yeah, I wish I had an AI assistant on my iPhone that would text my parents back with what I’m doing today and reply to the needless updates I’ve gotten since buying them smartphones.
Siri in general seems to be, for me at least, superfluous. The answer to most questions I ask is “I don’t know” or “I didn’t catch that” or “I can’t”.

AI in general still raises major question marks for me, especially when it comes to the valuations on the stock market right now. This morning I was watching Bloomberg at the European open and noticed one of my stocks wasn’t really moving as usual, and the presenter then announced that the Nordic markets were closed today because of the Ascension Day public holiday. So I googled “is the Danish stock market open today?” and naturally Google’s AI was the top link, proudly announcing “Yes! The Danish market is open today, here are the hours yadda yadda”. I scrolled down, found the actual link to the market, and it showed that, of course, the market is closed; it’s Ascension Day. So I asked the Google AI, “are you sure about that?”, and it thought again and found that “no, the Danish stock market is closed today. I apologise for telling you it was open without checking”.

Honest to god, this is the tech that’s putting Nvidia at a $5.5 trillion valuation and keeping the market at all-time highs right now? A technology that makes even Google worse?
These sorts of things are exactly what hand-rolled setups à la OpenClaude are great for: the potential for an insane privacy disaster is still there, but in that case you have no one to blame but yourself.
Large tech companies aren’t going to take that heat for features that aren’t really monetizable.
I'd consider building the system out as an MCP server rather than trying to bundle the agent with it. I had an AI build something out that is just a tasklist that works the way I think about tasks, which I've been using both personally and professionally. It's an MCP server only, which I can expose on the internet with OAuth. It has been surprisingly fun to use, because the AI can spontaneously interact with the information in ways I didn't program in. I have a recurring task with an AI to give me a dump of my current top tasks once a day to my phone.
Professionally, I'm working between a lot of different teams with their own Jira boards, and I needed something of my own to organize and prioritize tasks that can't be prioritized in one place in Jira. With the Atlassian MCP server hooked up to the same agent as my code, it is fairly trivial to attach a Jira bug to a task and then prompt the AI to do whatever to the bug attached to that task. I put an explicit field for it into the task definition, but you don't even really need that; just putting the bug in the description is all that was really necessary.
The point I am trying to make here is, you don't even really have to "design" a product at this point. You just need to expose things to the AI so that when the user makes some vague statement about what they want to do it can convert that into concrete calls. The AI and the user will do things with it that you didn't even think of, and users can just add things by saying things in the descriptions of various tasks. I've mentioned how even if AI were to freeze today for the next 10 years we'd still be learning how to use AI and getting more out of it... this is I think a still under-explored application space.
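To make the idea concrete, here's a minimal sketch of the kind of task store such an MCP server might expose. All names and fields here (`Task`, `jira_key`, `top_tasks`, etc.) are my own assumptions, not the commenter's actual system; a real server would wrap each method as an MCP tool (e.g. via the official Python SDK) and put it behind OAuth:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    title: str
    priority: int = 3               # 1 = highest priority
    jira_key: Optional[str] = None  # explicit link to a Jira issue, e.g. "PROJ-123"
    notes: str = ""                 # free text the AI can also mine for context

class TaskStore:
    """In-memory task list; each method would map 1:1 to an MCP tool."""

    def __init__(self) -> None:
        self.tasks: list[Task] = []

    def add_task(self, title: str, priority: int = 3,
                 jira_key: Optional[str] = None, notes: str = "") -> Task:
        task = Task(title, priority, jira_key, notes)
        self.tasks.append(task)
        return task

    def top_tasks(self, n: int = 5) -> list[Task]:
        """What a recurring 'dump my top tasks to my phone' prompt would call."""
        return sorted(self.tasks, key=lambda t: t.priority)[:n]

store = TaskStore()
store.add_task("Fix login crash", priority=1, jira_key="APP-42")
store.add_task("Write quarterly notes", priority=4)
print([t.title for t in store.top_tasks(1)])  # highest-priority task first
```

The point is that the store itself stays dumb; the interesting behavior (cross-referencing the `jira_key` against Jira, summarizing, reprioritizing) comes from the agent composing these tools in ways the author never coded.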
The vision is to create an "operating system for your family" that delivers the right information to parents, via the channel they want it delivered through, at the right time. You can check it out at https://helloleto.com — I just launched a few weeks ago and am starting to onboard parents while I work to improve the onboarding.
The overall goal aligns: help make the lives of parents easier, using technology that works automatically behind the scenes. Would love to chat more / answer any questions that you (or anyone else) may have about this—we've got the technology to help parents and it's time to finally do it.
Seems like there would be more of a market for it.
Although, I guess most software has "user" and "organization", and family kind of slots into the 2nd one. But most of that software isn't oriented to the needs of actual families.
Based on the author's list, what's needed is some kind of dashboard that integrates many different systems together. The 2000s were kind of moving in that direction, with different platforms being interoperable, and UIs being highly customizable.
I'm sorry to read that. Looks like it's good that Apple didn't build that yet.
I mean, that's not actually true.
It requires a shit tonne of context and also has a fucktonne of bad outcomes that people accept with ChatGPT but not with Apple products.
> Know that my son has a test on Thursday and hasn’t opened the revision material since Monday. A gentle nudge, not a surveillance report.
That requires two bits of context that are hard to find:
1) That there is an exam. Ideally it'll be in the calendar, but whose exam is it actually? The creator's, the invitee's, or the calendar owner's?
2) That certain actions on the web == revision. That requires knowing what the exam is about, what the official study material is, and, more importantly, cross-account access to web history.
> Track our medication schedule and ping people (or me, if someone misses a window) without turning into a clinical monitoring tool.
How do you non-intrusively test that medication has been taken? How do you know it's the right pills? How do you upload the prescription to do that? How do you handle power of attorney? How do you not get sued when people rely on it?
> Coordinate pickup times, grocery lists, meal plans–the sort of mundane family logistics that currently live in a group chat and three different apps.
Again, sharing raw data with a model to build context. How do you screen for privacy? How do you make sure that talking about private stuff (like a love interest, etc.) doesn't leak into other contexts?
> Better family e-mail, better event handling, better package tracking across household members.
Define better.
Look, as someone who worked on AR/AI assistant glasses: it's trivial to make something like that which works 80% of the time. You can't make it secure, though, because it requires removing a bunch of privacy barriers that stop fraud and keep stuff from leaking to third parties.
It's a really hard problem to crack: being accurate, private, and secure all at once. You can pick one, at best.
That's not a trivial thing to build. By what criteria should a show match what you've already seen, if you watched Shrinking and Below Deck and Silo this past month?
Things with boats? Jason Segel? Post-apocalyptic stuff?
It’s very hard to do both things well and at Apple scale it’s nearly impossible.
This is what enabled us to win despite FindMy being launched a few years after us.
As a shameless plug I’m building a family AI team as a startup within our larger 600 person org.
https://chrishulls.medium.com/life360-is-building-a-family-a...
Siri isn't lame because of the lack of a frontier LLM. Siri is a massive failure to simply code it to do obvious things, which is a UX failure, which is ironic given Apple's reputation as the UX leader. I guess it is a low bar considering the competition from MS and Google.
Over the last six years, I have fully bought into the ecosystem and it constantly disappoints me. I invite the UX team to spend a few days watching me struggle with their fragmented ecosystem. But I warn them not to let me get started on Apple TV (the streaming app), where the enshittification takes the crown over all of their competitors. They seem to have jumped the shark, past the give-the-consumer-great-value stage.
They should just provide the hardware.
No, you don't. You want to gain his trust and talk. And talk with the educational team.
"A gentle nudge, not a surveillance report."
That's exactly what it's going to end up being
I’m also not sure how any of this can happen given that Apple seems intent on making their apps harder to use and less interested in the users’ preferences over time. They are running away from elegant solutions and simple just-works software.