Neither the present article nor the original one has much mathematical originality, though: Odrzywolek's result is immediately obvious, while this blog post is a rehash of Arnold's proof of the unsolvability of the quintic.
> Elementary functions typically include arbitrary polynomial roots, and EML terms cannot express them.
If you take a real analysis class, the elementary functions will be defined exactly as the author of the EML paper does.
I've actually only just learnt that some consider roots of arbitrary polynomials to be part of the elementary functions, but I'm a physicist and only ever took some undergraduate mathematics classes. Nonetheless, calling these elementary feels like a bit of a stretch, considering that the word literally means basic material, something a beginner learns first.
In a similar vein to this post, the paper points out that general polynomials do not have solutions in E, so of course exp-minus-log is similarly incomplete.
What is intriguing is that we don’t even know whether many simple equations, like exp(-x) = x (whose solution is the [omega constant]), have solutions in E. We of course suspect they don’t, but this conjecture is not proven: https://en.wikipedia.org/wiki/Schanuel%27s_conjecture
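The constant itself is easy to approximate numerically; the open question above is only whether it has an exponential-logarithmic closed form. A minimal stdlib-only sketch, using bisection on exp(-x) - x:

```python
import math

def omega(tol=1e-12):
    """Approximate the omega constant, the unique real root of exp(-x) = x."""
    lo, hi = 0.0, 1.0  # exp(-0) - 0 = 1 > 0; exp(-1) - 1 < 0, so a root lies between
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if math.exp(-mid) - mid > 0:
            lo = mid  # root is to the right
        else:
            hi = mid  # root is to the left
    return (lo + hi) / 2

print(round(omega(), 6))  # ≈ 0.567143
```

(`omega` is an illustrative name, not anything from the paper; the point is just that "has a decimal expansion we can compute" and "lies in E" are very different claims.)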
What is a closed-form number?: http://timothychow.net/closedform.pdf
omega constant: https://en.wikipedia.org/wiki/Omega_constant
If nothing else you could solve simple differential equations with them. And it gives you the 'power' function.
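The "power" remark rests on the classical identity x^y = exp(y·log x) for x > 0. A small sanity-check sketch (note that for strict EML, the multiplication y·log(x) would itself need an exp/minus/log encoding; this only illustrates the underlying identity):

```python
import math

def power(x, y):
    """General power function for x > 0, built from exp and log alone."""
    return math.exp(y * math.log(x))

print(power(2.0, 10))   # close to 1024.0
print(power(9.0, 0.5))  # close to 3.0
```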
The very fact that the set of functions is largely arbitrary is a much bigger issue. Or at least it limits the use of the fact that you can represent those functions.
Edit: I feel the need to add that just because it is a weak critique doesn't mean the argument itself is not interesting.
But the fact that a single function can represent a large number of other functions isn't that surprising at all.
It's probably obvious to anyone (it wasn't initially to me), but given enough arguments I can represent any arbitrary set of n+1 functions (they don't even have to be functions on the reals - just as long as the domain has a multiplicative zero available) as a sort of "selector":
g(x_0, c_0, x_1, c_1, ... , x_n, c_n) = c_0 * f_0(x_0) + ... + c_n * f_n(x_n)
The trick is to minimize the number of arguments and the complexity of the RHS - but there's a trivial upper bound (in terms of the number of arguments).
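The selector above can be transcribed directly: setting c_j = 1 and every other c_i = 0 recovers f_j, so one 2(n+1)-argument function trivially "represents" all n+1 functions. A sketch with illustrative names:

```python
import math

def make_selector(*fs):
    """Build g(x_0, c_0, ..., x_n, c_n) = sum of c_i * f_i(x_i)."""
    def g(*args):
        assert len(args) == 2 * len(fs)
        xs, cs = args[::2], args[1::2]  # de-interleave the (x_i, c_i) pairs
        return sum(c * f(x) for f, x, c in zip(fs, xs, cs))
    return g

g = make_selector(math.sin, math.exp)
# Select f_1 = exp by taking c_0 = 0, c_1 = 1:
print(g(0.0, 0.0, 1.0, 1.0))  # exp(1) ≈ 2.718
```

This is exactly why the upper bound is "trivial" and why representing n functions with a fixed small number of arguments is the interesting part.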
Can anyone please explain this further? It seems like he’s moving the goalposts.
Admittedly this may be above my math level, but this just seems like a bad definition of elementary functions, given the context.
Interestingly, the abs (absolute value) function is non-elementary. I wonder if exp-minus-log can represent it.
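One relevant identity: for x ≠ 0, |x| = exp(log(x·x) / 2), which uses only exp, log, multiplication and halving. Whether the strict EML grammar (exp, minus, log only) can encode the x·x and /2 steps is the kind of question the comment raises; this is just a numerical check of the underlying identity, not an EML term:

```python
import math

def abs_via_exp_log(x):
    """|x| for nonzero x via exp(log(x*x) / 2); fails at x = 0 (log of 0)."""
    return math.exp(math.log(x * x) / 2)

print(abs_via_exp_log(-3.5))  # 3.5 (up to rounding)
```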
AFAIU the original paper is a result in the field of symbolic regression. What definition of elementary function do they use?
Tests for the trig functions aren't passing yet due to an issue with the derived EML form in some mirrored cases.
Also, I'd be glad to see a specific example of a function considered elementary which is not representable by EML.
That could be hard, and in any case, thanks for the article. I wish it were more accessible to me.
Don't have anything for the perfect numbers though.
https://en.wikipedia.org/wiki/Template:Mathematical_expressi...