It's simple, and it makes smart choices about where to invest in powerful features. It reads like an elegant, minimal selection of things existing languages already do well, while cutting out a lot of cruft.
The site also mentions two differentiating and less established features that make it sound like more than yet another fp remix: type-based ownership and algebraic effects.
While ownership is well explored by Rust (and a less explicit variation by Mojo), this sounds like a meaningful innovation and deserves a good write-up! Ownership is an execution-centric idea, where fp usually tries to stay evaluation-centric (Turing v. Church). It's hard to make these ideas work well together, and real progress here is exciting.
I'm less familiar with algebraic effects, but it seems like a somewhat newer (in the broader consciousness) idea with a lot of variation. How does Loon approach it?
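For anyone else unfamiliar: the rough idea is that code "performs" effects without knowing how they'll be handled, and an enclosing handler interprets them and resumes the computation. Here's a minimal sketch using Python generators as a stand-in; this says nothing about Loon's actual design, and all the names are made up:

```python
# Minimal sketch of the general idea behind algebraic effects,
# simulated with Python generators. Purely illustrative -- not
# how Loon (or any real effects system) implements them.

def program():
    # The computation "performs" effects by yielding requests;
    # it does not know how they will be handled.
    name = yield ("ask", "name")
    yield ("log", f"hello, {name}")
    return len(name)

def run(gen, env):
    """A handler: interprets each effect request and resumes."""
    logs = []
    try:
        request = next(gen)
        while True:
            kind, payload = request
            if kind == "ask":
                request = gen.send(env[payload])  # resume with a value
            elif kind == "log":
                logs.append(payload)
                request = gen.send(None)
    except StopIteration as stop:
        return stop.value, logs

result, logs = run(program(), {"name": "loon"})
print(result, logs)   # 4 ['hello, loon']
```

The interesting part is that the same `program` can be run under different handlers (a test handler, a real-I/O handler, etc.) without changing its code.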
These seem like the killer features, and I'd love to see more details.
(The one technical choice I just can't agree with is multi-arity definitions. They make writing code easier and reading it harder, which is rarely or never the better choice. Teams discourage function overloading all the time for this reason.)
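(To make that concern concrete, here's the closest Python analogue, `functools.singledispatch` — overloading by argument type. The shape names are hypothetical:

```python
# Sketch of why multi-arity / overloaded definitions can hurt readers:
# the call site alone does not tell you which body runs.
from functools import singledispatch

@singledispatch
def area(shape):
    raise TypeError(f"no area rule for {type(shape).__name__}")

@area.register
def _(r: float):          # circle of radius r
    return 3.14159 * r * r

@area.register
def _(sides: tuple):      # rectangle given (w, h)
    w, h = sides
    return w * h

# At the call site, `area(x)` could mean either body; the reader
# must track x's type to know which definition applies.
print(area(2.0))        # about 12.566
print(area((3, 4)))     # 12
```

Writing `area(x)` is easy; a reader has to chase down x's type to know which definition runs.)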
Thanks for sharing!
This looks like a really neat project/idea; seeing the roadmap is exciting too, since it covers nearly everything I'd want.
I don't love the brackets syntax, or the [op val1 val2] ([* x x]) style, but I appreciate the attempt at clarity and consistency and none of these things are dealbreakers.
I do wonder why they've leaned so hard into talking about the type system being out of sight. Again, not a dealbreaker, but I feel strongly that explicit typing has a place in codebases beyond "describe something because you have to".
Strongly typed languages strike me as providing detailed hints throughout the codebase about what "shape" I need my data in or what shape of data I'm dealing with (without needing to lean on an LSP). I find it makes things very readable, almost self-documenting when done right.
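What I mean, in Python terms (all names here are my own hypothetical examples, nothing from Loon):

```python
# Sketch: an explicit signature documents the data "shape" up front,
# before the reader ever looks at the body.
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: int
    email: str

def find_user(users: list[User], email: str) -> Optional[User]:
    """The annotations alone say: give me a list of Users and an
    email string; you may get a User back, or nothing."""
    for u in users:
        if u.email == email:
            return u
    return None
```

The signature tells me what shape of data goes in and out without reading a line of the implementation or leaning on an LSP.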
From their docs about their choices: "The reasoning is simple: types exist to help the compiler catch your mistakes. They do not exist to help you express intent, at least not primarily." This strikes me as unnecessarily pedantic; as someone who reads more code than I write (even my own), seeing a type distinctly, particularly as part of a function signature, helps me understand (or adds strong context to) the original author's goal before I even get to reading the implementation.
I find this doubly so when working through monadic types where I may get a typed error, a value, and have it all wrapped in an async promise of some kind (or perhaps an effect or two).
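The kind of stacking I mean, sketched in Python (the names and error type are hypothetical, just to show the layers):

```python
# Sketch of a stacked type: an async function returning either a
# typed error or a value. The annotation alone tells the reader to
# await it, then branch on success vs. the typed failure.
import asyncio
from dataclasses import dataclass
from typing import Union

@dataclass
class NotFound:
    key: str

async def lookup(db: dict, key: str) -> Union[int, NotFound]:
    await asyncio.sleep(0)          # stand-in for real I/O
    return db[key] if key in db else NotFound(key)

result = asyncio.run(lookup({"a": 1}, "a"))
print(result)   # 1
```

Three layers (async, typed error, value), and the signature surfaces all of them before I read the body.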
By the same token, many languages allow you to leave out type annotations where they are simple or clearly implied (and/or inferred by the compiler), so again, I'm not understanding the PoV (or the need) behind these claims. Perhaps Loon simply does it better? Am I missing something? Can I write return types on stub functions?
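That middle ground looks something like this (Python flavor, my own example):

```python
# Sketch of the usual compromise: annotate the public boundary,
# leave the locals unannotated and let a checker infer them.
def mean(xs: list[float]) -> float:
    total = sum(xs)        # local types need no annotation
    count = len(xs)
    return total / count

print(mean([1.0, 2.0, 3.0]))   # 2.0
```

You still get the signature as documentation, without annotating every obvious local.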
From the above blog post: "That's how good type inference feels! You write code. The types are just there. Because the language can see where it's going." Again, it feels strongly geared towards a world where we value writing code over reading/maintaining/understanding code, but maybe that's just my own bias/limitations.
Will follow it closely.
How much of this is actually real?
[0] https://github.com/ecto/loon
That being said, I took a look at the roadmap, and the next major release is the one that focuses on Effects, so perhaps I'm jumping the gun a tad. Maybe I'll whip this out for AoC this year!
Oh dear, why? Abrasive aesthetics aside, this is bad for people with certain non-English keyboard layouts. Not me, but many do exist.
2. The macro examples on the website don't show binding situations. Are the macros hygienic, like in Scheme?
3. Why the choice of [] over ()?
Why the square brackets in particular? Notation is such an annoying part of this stuff; I’m actually leaning towards pushing a lot of structure to the filesystem.
[fn square [x] [* x x]]
Could very easily be fn square(x) = x * x;
Or something like that, which is much more readable. Also:
> Hindley-Milner inference eliminates type annotations.
I think it's pretty widely agreed at this point that global type inference is a bad idea. The downsides outweigh the upsides. Specifically: much worse errors & much less readable code.
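The "worse errors" part is the killer: with whole-program inference, a wrong definition often type-checks on its own, and the complaint lands at some distant use site. Here's an analogy in dynamic Python (an HM checker similarly reports the use, not the cause; names are made up):

```python
# Sketch of the "distant error" problem: the mistake is in shape(),
# but nothing complains until a use site far from the cause.
def shape():
    return "3x3"          # bug: meant to return (3, 3)

def total_cells(dims):
    rows, cols = dims
    return rows * cols

dims = shape()            # no complaint here

# ...imagine many lines later...
try:
    total_cells(dims)     # failure surfaces here, not at shape()
except ValueError as e:
    print("failure at use site:", e)
```

With a required annotation like `def shape() -> tuple[int, int]`, the error would be pinned to `shape` itself.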