https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD109...
I was surprised at how difficult I found math. Now, I was never really great at math; logic and calculation in my head I could do fairly well (above average), but foundational knowledge was hard and mathematical theory even harder. But now I even have trouble with integration and differentiation, and even with understanding a problem well enough to put it down as a formula. I'm far from being the youngest anymore, but I was still surprised at how shockingly bad I've become over the last 25-plus years. So I've decided to change this in the coming months.

I think in a way computers have actually made our brains worse; many problems can be auto-solved (Python with NumPy, SymPy, etc.), and computers work better than hand-held calculators, but math is actually surprisingly difficult without one. (Here I also include algorithms, by the way, or rather the theory behind algorithms. And of course I've also forgotten a lot of the mathematical notation; somehow programming is a lot easier than higher math.)
EDIT: I think this line is the most telling:
> But he cautioned that the reverse mathematics approach may be most useful for revealing new connections between theorems that researchers have already proved. "It doesn’t tell us much, as far as we can say, about the complexity of statements which we do not know how to prove."
So, at this point, it helps us understand more about problems we already understand a little about, but nothing yet about new problems.
In a lot of ways this feels related to the development of non-Euclidean geometry, so it's kind of surprising it wasn't a hot topic sooner. Maybe it was waiting for a critical mass of computer scientists to come along, or maybe we just needed to run out of other low-hanging fruit so that foundational topics in complexity became more interesting.
Given two bit strings of length n, if we compare fewer than n of the bit pairs, we cannot tell whether the strings are equal. The strings being equal is the proposition (a_0 = b_0) ∧ (a_1 = b_1) ∧ ... ∧ (a_{n-1} = b_{n-1}), and you cannot simplify this formula in a way that removes any a_i or b_i from it.
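Here's a minimal sketch of that adversary argument in Python (the setup and names are mine, just to illustrate the point): any checker that skips even one index can be fooled by a pair of strings differing only at that index.

```
def equal_at(a, b, indices):
    # Compare only the given positions; a sound equality check needs all n.
    return all(a[i] == b[i] for i in indices)

n = 8
skipped = 3
indices = [i for i in range(n) if i != skipped]    # only n-1 comparisons
a = "0" * n
b = "0" * skipped + "1" + "0" * (n - skipped - 1)  # differs only at the skipped index
assert a != b
assert equal_at(a, b, indices)  # fooled: reports equal even though a != b
```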
Also, whether some problem has polynomial or exponential complexity depends on what you consider as the problem space. The complexity of b^c is polynomial if b is the problem space (c fixed) and exponential if c is the problem space (b fixed).
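A quick illustration of the same function measured along each axis (plain Python, my example):

```
# c fixed at 3: b**c grows like a polynomial in b
for b in [2, 4, 8, 16]:
    print(f"b={b:2d}  b**3={b**3}")

# b fixed at 3: 3**c grows exponentially in c
for c in [2, 4, 8, 16]:
    print(f"c={c:2d}  3**c={3**c}")
```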
The complexity of the traveling salesman problem likewise depends on what you consider as the problem space: the number of cities, or the number of connections between cities.
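A back-of-the-envelope sketch of how the same brute-force cost reads against either measure (my numbers, assuming the textbook count of (n-1)!/2 distinct tours on a complete graph, which has m = n(n-1)/2 connections):

```
import math

for n in [5, 10, 15, 20]:
    m = n * (n - 1) // 2                 # connections in a complete graph
    tours = math.factorial(n - 1) // 2   # distinct tours to brute-force
    print(f"cities={n:2d}  connections={m:3d}  tours={tours}")
```

Since n ≈ √(2m), the same tour count indexed by connections reads as roughly (√(2m))! — still superpolynomial, but a visibly different curve than when indexed by cities.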
If you're reading the paper, I recommend Section 1.3, where it goes over the examples of Binary Search and Dijkstra. The idea from the preface that "natural numbers are encoded by binary strings just as other data structures" runs through the constructions in their proofs. As a computer scientist, this is great, because I intuitively recognize that both algorithms and memory consist of only 1's and 0's underneath all the abstractions and code.
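For reference, here's the textbook binary search (just the standard algorithm, not the paper's formalized version of it):

```
def binary_search(xs, target):
    # Return an index of target in the sorted list xs, or -1 if absent.
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

assert binary_search([1, 3, 5, 8, 13], 8) == 3
```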
This work ties together practical applications and complexity theory to create a new bridge in mathematics that I'm excited about exploring. I'm especially interested in reverse mathematics applied to space complexity.
Here's an additional resource I found on this: a talk by Jiatu Li on joint work with Lijie Chen and Igor Carboni Oliveira, "Reverse Mathematics of Complexity Lower Bounds": https://www.youtube.com/watch?v=g5EqAgDxxE0