I’ve also been experimenting with Go on a separate project and keep running into the opposite feeling: a lot of relatively common code (fetching/decoding) ends up looking so visually messy.
E.g., I find this Swift example from the article to be very clean:
func fetchUser(id: Int) async throws -> User {
    let url = URL(string: "https://api.example.com/users/\(id)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(User.self, from: data)
}
And in Go (roughly similar semantics):

func fetchUser(ctx context.Context, client *http.Client, id int) (User, error) {
    req, err := http.NewRequestWithContext(
        ctx,
        http.MethodGet,
        fmt.Sprintf("https://api.example.com/users/%d", id),
        nil,
    )
    if err != nil {
        return User{}, err
    }
    resp, err := client.Do(req)
    if err != nil {
        return User{}, err
    }
    defer resp.Body.Close()
    var u User
    if err := json.NewDecoder(resp.Body).Decode(&u); err != nil {
        return User{}, err
    }
    return u, nil
}
I understand why it's more verbose (a lot of things are more explicit by design), but it's still hard not to prefer the cleaner Swift example. The success path is just three straightforward lines in Swift, while the verbosity of Go effectively buries the key steps in the surrounding boilerplate. This isn't to pick on Go or to say Swift is a better language in practice (and certainly not in the same domains), but I do wish there were a strongly typed, compiled language with the maturity/performance of e.g. Go/Rust and a syntax a bit closer to Swift (or at least closer to how Swift feels in simple demos, during the honeymoon phase).
The design goal of structured concurrency is to have a safe way of using all available CPU cores on the device/computer. Modern mobile phones can have 4, 6, even 8 cores. If you don't get a decent grasp of how concurrency works and how to use it properly, your app code will be limited to 1 or 1.5 cores at most, which is not a crime but a shame, really.
That's where it all starts. You want to execute things in parallel but also want to ensure data integrity. If the compiler doesn't like something, it means there's a design flaw and/or a misconception about structured concurrency, not "oh, I forgot @MainActor".
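As a minimal sketch of that idea (a hypothetical checksum example, not from the article): a task group fans CPU-bound work out across the available cores, and because each child task owns its own value, there is no shared mutable state for the compiler to object to.

import Foundation

// Hypothetical example: checksum a batch of blobs in parallel with a task group.
// Each child task works on its own Data value; nothing mutable is shared.
func checksums(for files: [Data]) async -> [Int] {
    await withTaskGroup(of: Int.self) { group in
        for file in files {
            group.addTask {
                file.reduce(0) { $0 &+ Int($1) }   // CPU-bound work, runs concurrently
            }
        }
        var results: [Int] = []
        for await sum in group {
            results.append(sum)
        }
        return results
    }
}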
Swift 6.2 is quite decent at its job already; I should say the transition from 5 to 6 was maybe a bit rushed and wasn't very smooth. But I'm happy with where Swift is today: it's an amazing, very concise and expressive language that lets you be as minimalist as you like, with a pretty elegant concurrency paradigm as a big bonus.
I wish it were better known outside of the Apple ecosystem, because it fully deserves to be a loved, general-purpose mainstream language alongside Python and others.
Every time I think I “get” concurrency, a real bug proves otherwise.
What finally helped wasn’t more theory, but forcing myself to answer basic questions:
What can run at the same time here?
What must be ordered?
What happens if this suspends at the worst moment?
A rough framework I use now:
First understand the shape of execution (what overlaps)
Then define ownership (who's allowed to touch what; see the sketch after this list)
Only then worry about syntax or tools
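To make that concrete, here is a toy sketch of steps 1 and 2 (a hypothetical HitCounter, nobody's real code): the overlap is explicit in the task group, and the ownership is explicit in the actor.

// Step 2 (ownership): the actor owns `hits`; nothing else may touch it directly.
actor HitCounter {
    private var hits = 0
    func record() { hits += 1 }
    func total() -> Int { hits }
}

// Step 1 (shape of execution): the 100 increments may overlap freely,
// but every access to `hits` is serialized inside the actor.
func simulate(counter: HitCounter) async {
    await withTaskGroup(of: Void.self) { group in
        for _ in 0..<100 {
            group.addTask { await counter.record() }
        }
    }
    print(await counter.total())   // always 100
}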
Still feels fragile.
How do you know when your mental model is actually correct? Do you rely on tests, diagrams, or just scars over time?
(bracketed statement added by me to make the implied explicit)
This sums up my (personal, I guess) beef with coroutines in general. I have dabbled with them since different experiments were tried in C many moons ago.
I find that programming can be hard. Computers are very pedantic about how they get things done, and it pays for me to be explicit and intentional about how computation happens. The illusion async/await coroutines create, that code simply continues procedurally, demos well for simple cases but often grows difficult to reason about (for me).
Obviously I'm not saying you throw out big O notation or stop benchmarking, but it does seem like eliminating an extra network call from your pipeline is likely to have a much higher ROI than nearly any amount of CPU optimization; people forget how unbelievably slow the network actually is compared to CPU cache and even system memory. I think the advent of async-first frameworks and languages like Node.js and Vert.x and Tokio is sort of the industry's acknowledgement of this.
We all learn all these fun CPU optimization tricks in school, and it's all for naught because anything we do in CPU land is probably going to be undone by a lazy engineer making superfluous calls to postgres.
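To make the ROI point concrete, here's a rough sketch reusing the fetchUser example from elsewhere in the thread; the batch endpoint, URL shape, and [User] decoding are assumptions, not a real API. The round-trip latency is paid once instead of ids.count times.

import Foundation

// Hypothetical: one batched request instead of N sequential round trips.
func loadUsers(ids: [Int]) async throws -> [User] {
    // N round trips would pay network latency ids.count times:
    //   for id in ids { users.append(try await fetchUser(id: id)) }

    // One round trip to an assumed batch endpoint pays it once.
    let query = ids.map(String.init).joined(separator: ",")
    let url = URL(string: "https://api.example.com/users?ids=\(query)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode([User].self, from: data)
}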
1. `sending`. Using this keyword liberally will save you from the more heavyweight options like actors and Sendable.
2. Isolated parameters. Inheriting the isolation of the caller is critical for functional-style programming.
3. Dynamic isolation in general. Sometimes `assumeIsolated` is all you need.
The fact that it recommends you pass this document to an agent without including these concepts almost guarantees the LLM is going to program itself into a corner.
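For reference, rough sketches of the three features above (hypothetical names; exact behavior depends on the language mode and compiler flags in effect):

// 1. `sending`: ownership of a non-Sendable value is handed over at the call site,
//    so it can cross an isolation boundary without conforming to Sendable.
final class Draft { var text = "" }          // deliberately not Sendable

func archive(_ draft: sending Draft) async {
    // the caller gave `draft` away; this side may keep it or forward it
}

// 2. Isolated parameter: the function runs in the caller's isolation
//    (captured by #isolation) instead of hopping to a global executor.
func trace(_ message: String,
           isolation: isolated (any Actor)? = #isolation) async {
    print(message)                           // no actor hop, no extra suspension
}

// 3. Dynamic isolation: assert at runtime that we're already on the main
//    actor rather than annotating the whole type with @MainActor.
func applyUpdate(_ update: @MainActor () -> Void) {
    MainActor.assumeIsolated {
        update()
    }
}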
Are there any reference counting optimizations like biased counting? One big problem with Python multithreading is that atomic RCs are expensive, so you often don't get as much performance from multiple threads as you expect.
But in Swift it's possible to avoid atomics in most cases, I think?
https://forums.swift.org/t/is-concurrent-now-the-standard-to...
And after all this "fucking approachable swift concurrency", at the end of the day, one still ends up with a program that can deadlock (because of resources waiting for each other) or exhaust available threads and deadlock.
Also, the overload of keywords and language syntax around this feature is mind-blowing... and keywords change meaning depending on compiler flags, so you can never know what a code snippet really does unless it's part of a project. None of the safeties promised by Swift 6 are worth the burnout that would come with trying to keep all this crap in one's mind.