But what really has my attention is: Why is this something I'm reading about on this smart engineer's blog rather than an Apple or Google product release? The fact that even this small set of features is beyond the abilities of either of those two companies to ship -- even with caveats like "Must also use our walled garden ecosystem for email, calendars, phones, etc" -- is an embarrassment, only obscured by the two companies' shared lack of ambition to apply "AI" technology to the 'solved problem' areas that amount to various kinds of summarization and question-answering.
If ever there was a chance to threaten either half of this lumbering, anticompetitive duopoly, certainly it's related to AI.
I've got a little utility program that I can tell to get the weather or run common commands unique to my system. It's handy, and I can even cron it to run things regularly, if I'd like.
If it had its own mailbox, I could send it information, it could use AI to parse that info, and possibly send email back, or a new message. Now I'd have something really useful. It would parse the email, add it to whatever internal store it has, and delete the message, without screwing up my own email box.
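A rough sketch of what that loop could look like, assuming a dedicated mailbox; `parse_with_llm` and `store` are hypothetical stand-ins for whatever model call and storage you prefer:

```python
import imaplib
import email

# Hypothetical: poll a dedicated assistant mailbox, parse each message
# with an LLM, stash the result, then delete the message.
def check_assistant_inbox(store):
    imap = imaplib.IMAP4_SSL("imap.example.com")  # assumed host
    imap.login("assistant@example.com", "app-password")
    imap.select("INBOX")
    _, data = imap.search(None, "ALL")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        body = msg.get_payload(decode=True) or b""
        # parse_with_llm is a stand-in for whatever model call you like
        parsed = parse_with_llm(body.decode(errors="replace"))
        store.save(parsed)                      # add to the internal store
        imap.store(num, "+FLAGS", "\\Deleted")  # never touches your own inbox
    imap.expunge()
    imap.logout()
```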
Thanks for the insight.
Honestly, saying way too little with way too many words (I already hate myself for it) is one of the biggest annoyances I have with LLMs in the personal assistant world. Until I'm rich and can spend the time having cute conversations and becoming friends with my voice assistant, I don't want J.A.R.V.I.S., I need LCARS. Am I alone in this?
1. I'd like the backend to be configurable for any LLM the user happens to have access to (be that the API of a paid service or something hosted locally on-prem) -- see the sketch after this list.
2. I'm also wondering how feasible it is to hook it up to a touchscreen running on some hopped-up raspberry pi platform so that it can be interacted with like an Alexa device or any of the similar offerings from other companies. Ideally that means voice controls as well, which is potentially another technical problem in itself (OpenAI's API will accept an audio file, but for most other services you'd have to do voice-to-text before sending the prompt off to the API).
3. I'd like to make the integrations extensible. Calendar, weather, but maybe also homebridge, spotify, etc. I'm wondering if MCP servers are the right avenue for that.
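For point 1, something like this thin abstraction is what I have in mind, so the backend can be swapped freely (class names are placeholders; the endpoints are the standard OpenAI-compatible and Ollama ones):

```python
import requests

# Hypothetical provider-agnostic backend: anything that can turn a
# prompt into text satisfies the same interface.
class LLMBackend:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

class OpenAICompatible(LLMBackend):
    """Any service exposing an OpenAI-style /chat/completions endpoint."""
    def __init__(self, base_url, api_key, model):
        self.base_url, self.api_key, self.model = base_url, api_key, model

    def complete(self, prompt):
        r = requests.post(
            f"{self.base_url}/chat/completions",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"model": self.model,
                  "messages": [{"role": "user", "content": prompt}]},
        )
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

class OllamaBackend(LLMBackend):
    """Locally hosted models via Ollama's /api/generate."""
    def __init__(self, model, host="http://localhost:11434"):
        self.model, self.host = model, host

    def complete(self, prompt):
        r = requests.post(f"{self.host}/api/generate",
                          json={"model": self.model, "prompt": prompt,
                                "stream": False})
        r.raise_for_status()
        return r.json()["response"]
```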
I don't have the bandwidth to commit a lot of time to a project like this right now, but if anyone else is charting a course in this direction I'd love to participate.
This works really effectively with thinking models, because the thinking eats up tons of context, but also produces very good "summary documents". So you can kind of reap the rewards of thinking without having to sacrifice that juicy sub 50k context. The database also provides a form of fallback, or RAG I suppose, for situations where the summary leaves out important details, but the model must also recognize this and go pull context from the DB.
Right now I have been trying to use it to build what is essentially an inventory management/BOM optimization agent for a database of ~10k distinct parts/materials.
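Roughly, the loop looks like this sketch (the table layout and the `summarize` helper are placeholders for whatever you prefer):

```python
import sqlite3

# Keep full responses in SQLite; keep only compact summaries in the
# live context so the window stays under ~50k tokens.
db = sqlite3.connect("agent_memory.db")
db.execute("""CREATE TABLE IF NOT EXISTS turns
              (id INTEGER PRIMARY KEY, full_text TEXT, summary TEXT)""")

def record_turn(full_text, summarize):
    summary = summarize(full_text)   # e.g. ask the model for a summary doc
    db.execute("INSERT INTO turns (full_text, summary) VALUES (?, ?)",
               (full_text, summary))
    db.commit()
    return summary

def build_context(limit=20):
    # The context carries summaries only; full text stays in the DB and
    # gets pulled back (RAG-style) only when a summary falls short.
    rows = db.execute("SELECT summary FROM turns ORDER BY id DESC LIMIT ?",
                      (limit,)).fetchall()
    return "\n".join(s for (s,) in reversed(rows))

def recover_detail(turn_id):
    return db.execute("SELECT full_text FROM turns WHERE id = ?",
                      (turn_id,)).fetchone()[0]
```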
1. Claude Desktop
2. Projects
3. MCPs for [Notion, Todoist], and exploring email + WhatsApp for a next upgrade
This is for me to support productivity workflows for consulting plus a startup. There are a few Notion databases -- clients, projects, meetings -- plus a Jeeves database. How the Jeeves database gets used is up to Jeeves, with some guidance. Jeeves uses his own database for things like tracking a migration of all my previous meeting notes under the new structure.
For my databases, I've set up my best practices for use: here's how my minutes look, here's what client one-pagers look like, here's the information to connect it all together, and here's how I manage to-dos. I then drop transcriptions into a new chat, with some text-expanding prompts in Alfred for a few common meetings and the like, and away he goes. He'll turn the transcript into meeting notes, create the to-dos, check everything with me, do a pass, and then go and file everything away into Notion and Todoist via MCP.
It's also self-documenting on this front. The Todoist MCP had some bugs, so I instructed Jeeves to go run all the various use cases it could, figure out the limitations and strengths, and document it; that's now filed away in the Jeeves database, where it can be pulled into context.
It lacks the cron features I would like, but honestly, dropping a once-a-day prepared prompt into Claude is hardly difficult.
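And if you want the cron piece anyway, it's only a few lines with the Anthropic SDK; a sketch, with the model name and prompt path as placeholders:

```python
# daily_brief.py -- run from cron, e.g.:
#   0 7 * * * /usr/bin/python3 /home/me/daily_brief.py
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

with open("/home/me/prepared_prompt.txt") as f:
    prompt = f.read()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # whichever model you prefer
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```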
Today I asked Siri “call the last person that texted me”, to try and respond to someone while driving.
Am I surprised it couldn’t do it? Not really at this point, but it is disappointing that there’s such a wide gulf between Siri and even the least capable LLMs.
For others: they use Claude.
- https://docs.mcp.run/tasks/tutorials/telegram-bot
For memories (still not shown in this tutorial), I created a pantry [0] and a servlet for it [1], and I modified the prompt so that it first checks whether a conversation exists for the given chat ID and stores the result there.
The cool thing is that you can add any servlet from the registry and make your bot as capable as you want.
[0] https://getpantry.cloud/
[1] https://www.mcp.run/evacchi/pantry
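For the curious, the memory part is tiny because Pantry is just a REST key-value store; a sketch (the pantry ID and the basket naming scheme are placeholders):

```python
import requests

PANTRY_ID = "your-pantry-id"  # placeholder
BASE = f"https://getpantry.cloud/apiv1/pantry/{PANTRY_ID}/basket"

def load_memory(chat_id):
    # One basket per Telegram chat; a missing basket comes back as an error.
    r = requests.get(f"{BASE}/chat-{chat_id}")
    return r.json() if r.ok else {"history": []}

def save_memory(chat_id, memory):
    # POST creates or replaces the basket's contents.
    requests.post(f"{BASE}/chat-{chat_id}", json=memory)
```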
Disclaimer: I work at Dylibso :o)
1. How did he tell Claude to “update” based on the notebook entries?
2. Won’t he eventually run out of context window?
3. Won’t this be expensive when using hosted solutions? For just personal hacking, why not simply use ollama + your favorite model?
4. If one were to build this locally, could vector-DB similarity search, or a hybrid combined with full-text search, be used to achieve this?
I can totally imagine using pgai for the notebook logs feature and local ollama + deepseek for the inference.
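On question 4, a rough sketch of the hybrid idea: SQLite FTS5 for full-text plus cosine similarity over embeddings, with `embed()` standing in for whatever local embedding model you run:

```python
import sqlite3
import numpy as np

# Sketch: full-text search via SQLite FTS5, plus cosine similarity over
# embeddings kept alongside. embed() is a stand-in for a local model.
db = sqlite3.connect("notebook.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS logs USING fts5(entry)")
vectors = []  # (text, embedding) pairs, persisted however you like

def add_entry(text):
    db.execute("INSERT INTO logs (entry) VALUES (?)", (text,))
    vectors.append((text, embed(text)))

def hybrid_search(query, k=5):
    # Full-text matches first...
    fts = [row[0] for row in db.execute(
        "SELECT entry FROM logs WHERE logs MATCH ? LIMIT ?", (query, k))]
    # ...then nearest neighbours by cosine similarity.
    q = embed(query)
    by_cosine = sorted(
        vectors,
        key=lambda tv: -float(np.dot(tv[1], q)) /
            (float(np.linalg.norm(tv[1]) * np.linalg.norm(q)) + 1e-9))
    # Merge the two ranked lists, full-text hits first, dropping dupes.
    seen, merged = set(), []
    for text in fts + [t for t, _ in by_cosine[:k]]:
        if text not in seen:
            seen.add(text)
            merged.append(text)
    return merged[:k]
```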
The email idea mentioned by other commenters is brilliant. But I don’t think you need a new mailbox; just pull from Gmail and grep for messages where sender and receiver are both yourself (aka the self tag).
Thank you for sharing; OP’s project is something I have been thinking about for a few months now.
Large swathes of the stack are commoditized OSS plumbing, and hosted inference is already cheap and easy.
There are obvious security issues with plugging an agent into your email and calendar, but I think many will find it preferable to control the whole stack rather than ceding control to Apple or Google.
For me, that is an extremely low barrier to cross.
I find Siri useful for exactly two things at the moment: setting timers and calling people while I am driving.
For these two things it is really useful, but even in these niches it falls down: when it comes to calling people, despite having been around me for years, it insists on stupid things like telling me there is no Theresa in my contacts when I ask it to call Therese.
That said, what I really want is a reliable system I can trust with calendar access and that I can actually discuss things with, ideally voice-based.
What do you think of this: instead of just deleting old entries, you could either do LRU (I guess Claude can help with it), or you could summarize the responses and store the summary back into the same table — kind of like memory consolidation. That way raw data fades, but a compressed version sticks around. Might be a nice way to keep memory lightweight while preserving context.
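A sketch of the consolidation variant, assuming a simple `memory` table and a `summarize` call of your choosing:

```python
import sqlite3

def consolidate(db: sqlite3.Connection, summarize, keep_recent=50):
    # Leave the newest entries untouched; fold everything older into one
    # compressed summary row -- raw data fades, the gist sticks around.
    old = db.execute(
        """SELECT id, content FROM memory
           ORDER BY id DESC LIMIT -1 OFFSET ?""", (keep_recent,)).fetchall()
    if not old:
        return
    digest = summarize("\n".join(content for _, content in old))
    db.execute("INSERT INTO memory (content) VALUES (?)",
               ("[consolidated] " + digest,))
    db.executemany("DELETE FROM memory WHERE id = ?",
                   [(i,) for i, _ in old])
    db.commit()
```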
TL;DR: I made shortcuts that run directly on my Apple Watch to record my voice, transcribe it and store my daily logs in a Notion DB.
All you need are 1) a chatgpt API key and 2) a Notion account (free).
- I made one shortcut on my iPhone to record my voice, use the Whisper model to transcribe it (done directly from the shortcut with a POST request) and send the transcription to my Notion database (again a POST request from Shortcuts)
- I made another shortcut that records my voice, transcribes it, and reads data from my Notion database to answer questions based on what exists in it. It puts all the data from the db into the context to answer -- costs a lot, but it's simple and works well.
The best part is -- this workflow works without my iPhone, directly on my Apple Watch. It uses POST requests internally, so there's no need to host a server. And the Notion API happens to be free for this kind of use case.
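For anyone who wants the same flow outside Shortcuts, the two POST requests look roughly like this in Python (the keys, database ID and property name are placeholders; the endpoints are the real OpenAI and Notion ones):

```python
import requests

OPENAI_KEY = "sk-..."       # placeholder
NOTION_KEY = "secret_..."   # placeholder
DATABASE_ID = "your-database-id"

# 1. Transcribe the recording with Whisper.
with open("memo.m4a", "rb") as audio:
    r = requests.post(
        "https://api.openai.com/v1/audio/transcriptions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        files={"file": audio},
        data={"model": "whisper-1"},
    )
text = r.json()["text"]

# 2. Append the transcription as a new page in the Notion database.
requests.post(
    "https://api.notion.com/v1/pages",
    headers={"Authorization": f"Bearer {NOTION_KEY}",
             "Notion-Version": "2022-06-28"},
    json={
        "parent": {"database_id": DATABASE_ID},
        "properties": {
            "Name": {"title": [{"text": {"content": text[:80]}}]},
        },
    },
)
```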
I like logging my day-to-day activities just using Siri on my watch and possibly getting insights based on them. Honestly, the Whisper model is what makes this work, because its accuracy is miles ahead of the local transcription model.
I had not thought about adding a memory log of all current things and feeding it into the context; I'll try it out.
Mine is a simple stateless thing that captures messages and voice memos and creates task entries with actionable items in my org-mode file. I only feed the current date into the context.
It's pretty amusing to see how it sometimes adds a little bit of its own personality to simple tasks; for example, if one of my tasks is phrased as a question, it will often try to answer the question in the task description.
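The capture step can be as small as this sketch, where `extract_tasks` stands in for the model call:

```python
from datetime import date

ORG_FILE = "/home/me/inbox.org"  # placeholder path

def capture(transcript, extract_tasks):
    # extract_tasks is whatever model call turns free text into a list
    # of actionable items; only today's date is fed as extra context.
    today = date.today().isoformat()
    tasks = extract_tasks(transcript, context={"date": today})
    with open(ORG_FILE, "a") as f:
        for task in tasks:
            f.write(f"* TODO {task}\n  SCHEDULED: <{today}>\n")
```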
The more I think about it, however, command-line applications are about as interactive as a program can be.
Let's say one wants to find out which of the next 7 days it's going to rain and send that to a Telegram bot: `weather_forecast --days 7 | grep rain | send_telegram`.
The whole idea of off-loading everything to nondeterministic computation instead of good ol' determinism does seem strange to me. I am a huge fan, though, of using non-deterministic computation to create deterministic computation, i.e. programming.
As a side note, I have played chess against Ishiguro many times on lichess.
The background tasks can call MCP servers to connect to more data sources and services. At least you don't have to write all the connectors to them yourself.
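With the official MCP Python SDK, the client side is pretty small; a sketch, assuming a stdio server (the command and the `get_events` tool are hypothetical):

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch a local MCP server over stdio (command is a placeholder).
    params = StdioServerParameters(command="my-calendar-mcp-server")
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # get_events is a hypothetical tool exposed by this server.
            result = await session.call_tool("get_events", {"days": 7})
            print(result.content)

asyncio.run(main())
```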
> cron job which makes a call to the Claude API
> I’ll use fake data throughout this post, because our actual updates contain private information
but then later:
> which makes a call to the Claude API
I guess we have different ideas of privacy
I am wondering how powerful the AI model needs to be to power this app.
Would a selfhosted Llama-3.2-1B, Qwen2.5-0.5B or Qwen2.5-1.5B on a phone be enough?
How? This post shows nothing of the sort.
"I’ve written before about how the endgame for AI-driven personal software isn’t more app silos, it’s small tools operating on a shared pool of context about our lives."
Yes, probably, so now is the time to resist and refuse to open ourselves up to unprecedented degrees of vulnerability towards the state and corporations. Doing it voluntarily while it is still rather cheap is a bad idea.