It was odd to me, because it hadn't occurred to me before that, given infinite memory (and within a mathematical framework), there is fundamentally no necessary difference between a "list" and a "function". With pure functions you could, in theoretical-land, replace any "function" with an array, where the "argument" is replaced with an index.
And it makes sense to me now; TLA+ functions can be defined like maps or lists, but they can also be defined in terms of operations that compute the function's values. For example, you could define the first N factorials like
```
Fact == <<1, 2, 6, 24, 120>>
```
or like this:
```
Fact[n \in Nat] ==
  IF n = 0 THEN 1
  ELSE n * Fact[n - 1]
```
In both cases, if you wanted to get the factorial of 5, you'd call Fact[5], and that's on purpose, because ultimately, from TLA+'s perspective, they are equivalent.[1] At least sort of; they superficially look like functions but they're closer to hygienic macros.
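The same equivalence can be sketched outside TLA+. Here is a Python analogue (0-indexed, unlike the 1-indexed TLA+ tuple): a lookup table and a recursive rule that answer the same queries over their shared domain.

```python
# A "function" given extensionally, as a table of precomputed values...
fact_table = [1, 1, 2, 6, 24, 120]

# ...and the same function given intensionally, as a rule.
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

# Over the shared domain 0..5, a caller cannot tell them apart.
assert all(fact_table[n] == fact(n) for n in range(6))
```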
(map v [4 5 7])
would return a list of the items at indices 4, 5, and 7 in the vector v.

The real signature of an array implementation would be something like V: [0, N] -> T, but that implies you need to statically prove that each index i in V[i] is less than N, so your code would be littered with such guards for dynamic indexing. Also, N itself will be dynamic, so you need some (at least limited) dependent typing on top of this.
So you don't want these things in your language, so you just accept the domain as some integer type; but now you don't really have V: ℕ -> T, since for i > N there is no value. You could choose V: ℕ -> Maybe<T>, but then even the cases where i is provably less than N are littered with guards, so this cure is worse than the disease. Same if you choose V: ℕ -> Result<T, IndexOutOfBounds>. So instead you panic or throw, and now you have an effect, which isn't really modeled well as a function (until we start calling the code after it a continuation and modify the signature, and...).
So it looks like a function if you squint or are overly formal with guards or effects, but the arrays of most languages aren't that.
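The trade-off above can be made concrete in Python terms (a sketch with hypothetical helper names): totalizing the lookup forces guards on every caller, while the usual array keeps the partial function and escapes through an effect.

```python
xs = ['a', 'b', 'c']

# Option 1: totalize the function as Nat -> Maybe[T].
# Now *every* caller must guard against None, even for provably safe indices.
def get_maybe(v, i):
    return v[i] if 0 <= i < len(v) else None

assert get_maybe(xs, 1) == 'b'
assert get_maybe(xs, 99) is None

# Option 2: keep the partial function and let it raise (an effect).
try:
    xs[99]
except IndexError:
    pass  # the escape hatch most languages choose
```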
> for one of the best ways to improve a language is to make it smaller.
I think this isn't one of those cases.
Should you?
This is where I'd be more careful. Maybe it makes sense for some of the langs in TFA. But it reminds me of (named)tuples in Python, which are iterable; but when they are used as tuples proper, in particular as heterogeneous arrays¹ to support returning multiple values, or as a quick-and-dirty product type (/struct), the ability to iterate is just a problem. Doing so is almost always a bug, because iterating through such a tuple is nigh always nonsensical.
So, can an array also be f(index) -> T? Sure. But whether that makes sense in enough contexts, or whether it promotes more bugs and less clear code, is what I'd be thinking hard about before implementing such a thing.
¹ sometimes tuples are used as an immutable homogeneous array, and that case is different; iteration is clearly sane then
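The hazard is easy to reproduce in Python (a sketch): a namedtuple used as a quick struct still iterates field-by-field, so code written for homogeneous sequences silently accepts it.

```python
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'])
p = Point(x=3, y=4)

# Intended use: a struct-like product type.
assert p.x == 3 and p.y == 4

# Unintended use: iteration "works", mixing unrelated fields together.
assert list(p) == [3, 4]   # nigh always a bug when the fields are heterogeneous
assert sum(p) == 7         # silently treats a struct as a sequence
```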
In fact, from wikipedia:
```
In mathematics, a tuple is a finite sequence or ordered list of numbers or, more generally, mathematical objects, which are called the elements of the tuple. An n-tuple is a tuple of n elements, where n is a non-negative integer. There is only one 0-tuple, called the empty tuple. A 1-tuple and a 2-tuple are commonly called a singleton and an ordered pair, respectively. The term "infinite tuple" is occasionally used for "infinite sequences".
Tuples are usually written by listing the elements within parentheses "( )" and separated by commas; for example, (2, 7, 4, 1, 7) denotes a 5-tuple. Other types of brackets are sometimes used, although they may have a different meaning.[a]
An n-tuple can be formally defined as the image of a function that has the set of the n first natural numbers as its domain. Tuples may be also defined from ordered pairs by a recurrence starting from an ordered pair; indeed, an n-tuple can be identified with the ordered pair of its (n − 1) first elements and its nth element
```
(https://en.wikipedia.org/wiki/Tuple)
From a data-structure standpoint, a tuple can be seen as an array of fixed arity/size; so if an array is not a function, a tuple shouldn't be one either.
And all three are [input, output] tuple pattern matches, with the special case that in "call/select tuples" the input is always fully defined, the output simply being the consequence of its match.
And arrays, structures, and overloaded functions are unions of tuples to match against, with structure inputs (i.e. fields) being literal inline enumeration values.
And so are generics.
In fact, in functional programming everything is a pattern match, if you consider even enumeration values as a set of comparison functions that return the heavily used enumerations true or false, given sibling values.
> Haskell provides indexable arrays, which may be thought of as functions whose domains are isomorphic to contiguous subsets of the integers.
Could that be replaced with
> Haskell provides indexable arrays, which are functions on the domain [0, ..., k-1]?
Or is the domain actually anything "isomorphic to contiguous subsets of the integers"?
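It is the latter: in Haskell's Data.Array, bounds like (5, 10) are legal, so the domain need not start at 0; it only has to be isomorphic to a contiguous integer range. A toy Python analogue (the class name is hypothetical) makes the isomorphism explicit:

```python
class OffsetArray:
    """An array whose domain is the contiguous range lo..hi, not 0..n-1."""
    def __init__(self, lo, hi, values):
        assert len(values) == hi - lo + 1
        self.lo, self.hi, self._values = lo, hi, values

    def __getitem__(self, i):
        if not (self.lo <= i <= self.hi):
            raise IndexError(i)
        return self._values[i - self.lo]  # the isomorphism: shift into 0-based storage

a = OffsetArray(5, 10, ['a', 'b', 'c', 'd', 'e', 'f'])
assert a[5] == 'a' and a[10] == 'f'
```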
```
const list = ['a', 'b', 'c']
```
is syntactic sugar for expressing something like this:
```
function list(index) {
  switch (index) {
    case 0: return 'a'
    case 1: return 'b'
    case 2: return 'c'
  }
}
```

Functions are first class objects. Funsors generalize the tensor interface to also cover arbitrary functions of multiple variables ("dims"), where variables may be integers, real numbers or themselves tensors. Function evaluation / substitution is the basic operation, generalizing tensor indexing. This allows probability distributions to be first-class Funsors and make use of existing tensor machinery; for example, we can generalize tensor contraction to computing analytic integrals in conjugate probabilistic models.
More in the paper: https://arxiv.org/abs/1910.10775

let v = myObject[ myFunk ];
Like with arrays-as-functions, the domain of the object-as-function would be its existing keys. Or we could say the domain is any possible value, with the assumption that the value of any key not stored in the object is always null. Whether new keys can be added at runtime or not is a separate question.
It should be easy to extend the syntax so that
myObject (anything) === myObject [anything]
whether myObject is an object, or a 'function' defined with traditional syntax.

Take the command `ls` for example; it could be expressed as either:
ls -la *.png # lazy
ls(-la, *.png); # formal
For pipes, they could be expressed as either:
ls -la *.png | grep cat # lazy
ls(-la, *.png) | grep(cat)
|(ls(-la, *.png), grep(cat)); # formal
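The myObject (anything) === myObject [anything] idea above can be sketched in Python, where __call__ and __getitem__ become two spellings of the same lookup (the class name is hypothetical):

```python
class Table:
    """A mapping usable with both call syntax and index syntax."""
    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data.get(key)  # absent keys yield None, as suggested above

    __call__ = __getitem__          # t(k) and t[k] are now the same operation

t = Table({'a': 1, 'b': 2})
assert t('a') == t['a'] == 1
assert t('missing') is None
```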
I thought about writing this with something like MicroPython, which could have a very small memory footprint.

Not even in a functional sense: even though functions are input-output maps we define, the inputs are dimensionally rich, and it's nowhere close to equivalent to jerry-rig a contiguous input space for that purpose.
```
instance KnownNat n => Representable (Vec n) where
  type Rep (Vec n) = Fin n

  index :: Vec n a -> (Fin n -> a)
  index = ..

  tabulate :: (Fin n -> a) -> Vec n a
  tabulate = ..
```
[1] https://hackage-content.haskell.org/package/adjunctions-4.4....

When I was learning programming, I was surprised that in most programming languages we write f(k), but vec[k].
However, we do have syntax vec.at(k) or vec.get(k) in quite a few languages.
I don't get it. How is that not trivial with something like
array·slice(from: initial, to: juncture)
Which is not much different from a·/(i,j) when one wants to play the monograph game instead. It can be further reduced to a/(i,j), taking for granted that "/" is given special treatment so it can be employed as part of identifiers.

Composing injective functions is like applying a permutation array to another.
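That last point can be checked directly in Python (a sketch): composing two permutations as functions matches indexing one permutation array by the other.

```python
p = [2, 0, 1]  # permutation as an array: i -> p[i]
q = [1, 2, 0]

# Compose as functions: (p . q)(i) = p(q(i)).
compose_fn = lambda i: p[q[i]]

# Compose as arrays: index p by q.
pq = [p[j] for j in q]

assert pq == [compose_fn(i) for i in range(3)]
```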
This is cleverness over craftsmanship. Keeping data and execution as separate as possible is what leads to simplicity and modularity.
The exception is data structures which need the data and the functions that deal with it to expose that data conveniently to be closely tied to each other.
Everything else is an unnecessary dependency that obscures what is actually happening and makes two things that could be separated depend on each other.
The matrix multiplication of vectors - of a row and a column vector - which is then just taking the dot product, is called an inner product. So for functions, the inner product is an integral over where the functions are defined:
<f, g> = \int f(x) g(x) dx
Likewise you can multiply functions by a "kernel", which is a bit like multiplying a vector by a matrix:
<A f, g> = \int \int A(x,y) f(y) g(x) dx dy
The Fourier transform is a particular kernel.
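The analogy can be made numeric with a Riemann sum (a pure-Python sketch): the vector dot product and the function inner product are the same sum, with dx playing the role of the index step.

```python
# Discrete inner product: <u, v> = sum_i u[i] * v[i]
u, v = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
dot = sum(a * b for a, b in zip(u, v))
assert dot == 32.0

# Function inner product: <f, g> = \int f(x) g(x) dx, via a left Riemann sum.
def inner(f, g, lo=0.0, hi=1.0, steps=100_000):
    dx = (hi - lo) / steps
    return sum(f(lo + k * dx) * g(lo + k * dx) for k in range(steps)) * dx

# \int_0^1 x * x dx = 1/3
approx = inner(lambda x: x, lambda x: x)
assert abs(approx - 1 / 3) < 1e-4
```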
When I'm writing code and need to reach for an array-like data structure, the conceptual correspondence to a function is not even remotely on my radar. I'm considering algorithmic complexity of reads vs writes, managed vs unmanaged collections, allocation, etc.
I guess this is one of those things that's of primary interest to language designers?
Yes. And a sandwich is "a stack-based heterogeneous data structure with edible semantics." This is not insight. It is taxonomy cosplay.
"Look, arrays and functions share some mathematical structure!" - Irrelevant. We do not unify them, because representation matters.
When a language makes arrays "feel like functions," what it usually means is: "You no longer know when something is cheap." That is not abstraction. That is obscurity.
Industry programmers do not struggle because arrays lack ontological clarity. They struggle because memory hierarchies exist, cache lines exist, branch predictors exist, GPUs exist, deadlines exist.
> the correspondence between arrays and functions [...] is alluring, for one of the best ways to improve a language is to make it smaller
No. The best way to improve a language is to make it faster, simpler to reason about, and less painful to debug.
> I imagine a language that allows shared abstractions that work for both arrays and appropriate functions
What if we invented increasingly abstract words of our own so we don't have to say 'for loop', map, SIMD, kernels?
Making arrays pretend to be functions achieves exactly none of those things. It achieves conference papers that end with “future work”.
Why does this academic slop keep happening? - Professors are rewarded for novel perspectives, not usable ones.
> I found this to be a hilariously obtuse and unnecessarily formalist description of a common data structure.
Well, it is Haskell. Try to understand what a monad is. Haskell loves complexity, and that carries right over into the documentation.
> I look at this description and think that it is actually a wonderful definition of the essence of arrays!
I much prefer simplicity. Including in documentation.
I do not think that description is useful.
To me, Arrays are about storing data. But functions can also do that, so I also would not say the description is completely incorrect either.
> who can say that it is not actually a far better piece of documentation than some more prosaic description might have been?
I can say that. The documentation does not seem good, in my opinion. Once you reach this conclusion, it is easy to say, too. But this is speculative, because... what is a "more prosaic description"? There are many ways to write worse documentation, but also ways to write better documentation.
> To a language designer, the correspondence between arrays and functions (for it does exist, independent of whether you think it is a useful way to document them) is alluring, for one of the best ways to improve a language is to make it smaller.
I agree that there is a correspondence. I disagree that Haskell's documentation is good here.
> currying/uncurrying is equivalent to unflattening/flattening an array
So, there are some similarities between arrays and functions. I do not think this means the two are equivalent to one another.
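The quoted equivalence is concrete enough to test in a Python sketch: uncurrying f(i)(j) into f(i, j) corresponds exactly to flattening a 2-D array into row-major 1-D storage.

```python
rows, cols = 3, 4
grid = [[r * cols + c for c in range(cols)] for r in range(rows)]  # 2-D "curried" array
flat = [x for row in grid for x in row]                            # flattened, row-major

# Curried access (index twice) vs uncurried access (one flat index):
curried = lambda r: lambda c: grid[r][c]
uncurried = lambda r, c: flat[r * cols + c]

assert all(curried(r)(c) == uncurried(r, c)
           for r in range(rows) for c in range(cols))
```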
> would like to see what it would be like for a language to fully exploit the array-function correspondence.
Does Haskell succeed in explaining what a Monad is? If not, then it failed there. What if it also fails in other areas with regards to documentation?
I think you need to compare Haskell to other languages, like C or Python. I don't know whether C (or C++) does a better job with its documentation, but I think Python does a better job than the other languages, so that is a comparison that should work.