> When using an Io.Threaded instance, the async() function doesn't actually do anything asynchronously — it just runs the provided function right away.
While this is a legal implementation strategy, this is not what std.Io.Threaded does. By default it uses a configurably sized thread pool to dispatch async tasks. It can, however, be statically initialized with init_single_threaded, in which case it does have the behavior described in the article.
The only other issue I spotted is:
> For that use case, the Io interface provides a separate function, asyncConcurrent() that explicitly asks for the provided function to be run in parallel.
There was a brief moment where we had asyncConcurrent() but it has since been renamed more simply to concurrent().
(To my understanding this is pretty similar to how Go solves asynchronicity, except that in Go's case the "token" is managed by the runtime.)
var a_future = io.async(saveFile, .{io, data, "saveA.txt"});
var b_future = io.async(saveFile, .{io, data, "saveB.txt"});
const a_result = a_future.await(io);
const b_result = b_future.await(io);
In Rust or Python, if you make a coroutine (by calling an async function, for example), that coroutine is not generally guaranteed to make progress unless someone is waiting for it (i.e. polling it as needed). In contrast, if you stick the coroutine in a task, the task gets scheduled by the runtime and makes progress whenever the runtime is able to schedule it. But creating a task is an explicit operation and can, if the programmer wants, be done in a structured way (often called "structured concurrency") where tasks are never created outside of some scope that contains them.

From this example, if whatever is "io.async"ed is allowed to progress all by itself, then I guess it's creating a task that lives until it finishes or is cancelled by being destroyed.
This is certainly a valid design, but it’s not the direction that other languages seem to be choosing.
const std = @import("std");
const Io = std.Io;
fn saveFile(io: Io, data: []const u8, name: []const u8) !void {
const file = try Io.Dir.cwd().createFile(io, name, .{});
defer file.close(io);
try file.writeAll(io, data);
}
The phrase "Either way, the operation is guaranteed to be complete by the time writeAll() returns" is too weak. Given that the function can, over time, be called with different implementations of Io, and that users can implement Io themselves, I think the only way this can work is that the operation is guaranteed to be complete when the defer starts. (If not, what part of the code makes sure createFile has completed when writeAll starts? The Io instance could know, but it would either have to allow only one "in flight" call at a time, or keep track of in-progress calls and know about the dependency between creating a file and writing to it.)

But then, how is this really different from a blocking call?
Also, if that's the case, why is the interface called Io? To me it looks more like a "do this in a different context" mechanism than something specific to I/O (https://ziglang.org/documentation/master/std/#std.Io seems to confirm that: it doesn't mention I/O at all).
From the article:

> std.Io.Threaded - based on a thread pool.
>     -fno-single-threaded - supports concurrency and cancellation.
>     -fsingle-threaded - does not support concurrency or cancellation.
> std.Io.Evented - work-in-progress [...]
Should `std.Io.Threaded` not be split into `std.Io.Threaded` and `std.Io.Sequential` instead? "Single threaded" is another word for "not threaded", or am I wrong here?

I seem to recall reading about some downsides to that approach, e.g. that calling C libraries is relatively expensive (because a real stack has to be allocated) and that circumventing libc to make direct syscalls is fragile and unsupported on some platforms.
Does the Zig implementation improve on Go's approach? Is it just that it makes it configurable, so that different tradeoffs can be made without changing the code?
The core problem is that language/library authors need to provide some way to bridge between different execution contexts, e.g. by containing the different contexts (sync / async) in FSMs and then providing some sort of communication channel between them.
Although, it does seem like dependency injection is becoming a popular trend in Zig, first with Allocator and now with Io. I wonder if a dependency injection framework within the std could reduce the amount of boilerplate all of our functions will now require. Every struct or bare fn now needs two fields/parameters by default.
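One common way to cut that boilerplate (a sketch, not an actual std proposal) is to bundle the injected dependencies into a single context value, so each function takes one parameter instead of two; illustrated here in Go with stand-in string fields:

```go
package main

import "fmt"

// Ctx is a hypothetical bundle of injected dependencies, so functions take
// one context parameter instead of threading two handles separately.
type Ctx struct {
	AllocName string // stand-in for a Zig Allocator
	IoName    string // stand-in for a Zig Io
}

// saveReport needs both dependencies but uses only one parameter slot for them.
func saveReport(ctx Ctx, name string) string {
	return fmt.Sprintf("saving %s with allocator=%s io=%s", name, ctx.AllocName, ctx.IoName)
}

func main() {
	ctx := Ctx{AllocName: "gpa", IoName: "threaded"}
	fmt.Println(saveReport(ctx, "report.txt"))
}
```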
Really think Zig is right about this, excited to use it and feel it out.
IMO every low level language's async thing is terrible and half-baked, and I hate that this sort of rushed job is now considered de rigueur.
(IMO we need a language that makes the call stack just another explicit data structure, like assembly, and has linearity, "existential lifetimes", and locations that change type over the control flow, to approach the question. No language is very close.)
Yes, eventually you're gonna lift sync to async code, and that works fine as it is generally also the runtime model (asynchronous, event-based).
E.g.:
doSomethingAsync().defer
This removes stupid parentheses because of precedence rules.
Biggest issue with async/await in other languages.
What the heck did I just read. I can only guess they confused Haskell for OCaml or something; the former is notorious for requiring that all I/O is represented as values of some type encoding the full I/O computation. There's still coloring since you can't hide it, only promote it to a more general colour.
Plus, isn't Go the go-to example of this model nowadays?