The min of two random variables has the opposite effect to the max shown in this video. And now I'm curious: if we use the function definition of min/max (the nth root of the sum of the nth powers of the arguments), there is a continuum from min to sum to max, right? Are there useful applications of this generalization? Does it already have a name?
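For what it's worth, for positive arguments this is the l^p norm (and with an averaging factor inside the root, the classical power mean, a.k.a. the generalized mean): p = 1 gives the plain sum, large positive p approaches the max, and large negative p approaches the min. A minimal Python sketch of the continuum (p_norm is just my own name for it):

# f_p(x1..xk) = (x1^p + ... + xk^p)^(1/p) for positive arguments.
def p_norm(values, p):
    return sum(v ** p for v in values) ** (1.0 / p)

xs = [2.0, 3.0, 5.0]
print(p_norm(xs, 1))    # 10.0 -> the plain sum
print(p_norm(xs, 50))   # ~5.0 -> approaches max(xs)
print(p_norm(xs, -50))  # ~2.0 -> approaches min(xs)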
Assuming X1, ..., Xn are i.i.d. Uniform(0,1):
P(max(X1, ..., Xn) < x)
= P(X1 < x and X2 < x and ... and Xn < x)
= P(X1 < x) P(X2 < x) ... P(Xn < x)    (by independence)
= x^n
Also, for X ~ Uniform(0,1),
P(X^{1/n} < x) = P(X < x^n) = x^n,
so X^{1/n} has the same distribution as the max.
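A quick simulation sketch in Python to sanity-check both identities (assuming, as above, that the Xi are i.i.d. Uniform(0,1); the choices of n, trials, and x here are arbitrary):

import random

n, trials, x = 5, 100_000, 0.7

# Empirical P(max(X1..Xn) < x) over many trials.
max_hits = sum(max(random.random() for _ in range(n)) < x
               for _ in range(trials))

# Empirical P(U^(1/n) < x) for a single uniform U.
root_hits = sum(random.random() ** (1.0 / n) < x for _ in range(trials))

print(max_hits / trials)   # ~0.168
print(root_hits / trials)  # ~0.168
print(x ** n)              # exact: 0.7^5 = 0.16807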
I guess I am just an old man yelling at clouds, but it seems so strange to me that one would bother checking this with a numerical simulation. Is this a common way to think about, or teach, mathematics to computer scientists?
Short, to the point, and the illustrations/animations actually helped convey the message.
Would be super cool if someone could recommend a social media account/channel with collections of similarly high-quality videos (for any field).