A number of years after I finished school, I was in a new town without a job, and got hired to teach a freshman algebra course at the nearby Big Ten university. About halfway into teaching the class, I was struck by the realization that virtually every problem was solved in the same way, by recognizing the "form" of a problem and applying an algorithm appropriate for that form, drawn from the most recent chapter.
In the TFA, the natural log in the integrand was a dead giveaway, because it only comes from one place in the standard order of topics in a calculus class.
Is this what we call intuition?
The students called this the "trick." Many of them had come from high school math under the impression that math was subjective: a matter of guessing the teacher's preferred trick from among the many possible.
For instance, all of the class problems involving maxima and minima came down to a quadratic function, since that was the only form with an extremum the students had learned. Every min/max problem culminated in completing the square. I taught my students a formula they could just memorize.
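(Presumably the memorized formula was the vertex form: completing the square on $f(x) = ax^2 + bx + c$ gives $f(x) = a\left(x + \frac{b}{2a}\right)^2 + c - \frac{b^2}{4a}$, so the extremum sits at $x = -\frac{b}{2a}$ with value $c - \frac{b^2}{4a}$, a minimum when $a > 0$ and a maximum when $a < 0$.)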
The whole affair left me with a bad taste in my mouth.
It’s been so many years since I had to use integral calculus that I don’t recall what the symbols mean, and I certainly don’t care about them. That doesn’t mean I wouldn’t find the problem domain interesting if it were expressed as such. Instead, though, I get a strong dose of mathematical formalism disconnected from anything I can meaningfully reason about. Too bad.
This is the most important lesson I learned in grad school. Methods are so important. I really think it is the core of what we call "critical thinking" - knowing how facts are made.
OTOH, if I'm given the expression, it's just mechanical and unrewarding.
But I had always loved maths and went back to it much later. After I'd done some computer science, some concepts just clicked for me. Sets were a big one: seeing functions as just mappings between sets, seeing functions as set elements themselves, seeing derivatives and integrals as simply mappings between sets of functions.
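A toy sketch of that last point in Python (my own illustration, numerical rather than symbolic): a derivative operator is itself just a function whose domain and codomain are sets of functions.

    # Differentiation as a map between sets of functions:
    # it takes a function f and returns another function f'.
    def derivative(f, h=1e-6):
        def f_prime(x):
            return (f(x + h) - f(x - h)) / (2 * h)  # central difference
        return f_prime

    # Integration likewise: it maps f to the function F(x) = integral of f from a to x.
    def integral(f, a=0.0, n=10_000):
        def F(x):
            dx = (x - a) / n
            return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx  # midpoint rule
        return F

    square = lambda x: x * x
    print(derivative(square)(3.0))  # ~6.0, since d/dx x^2 = 2x
    print(integral(square)(3.0))    # ~9.0, since x^3/3 at x = 3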
What fascinates me is that differentiation is solved, basically (don't come at me about known closed-form expressions), but integration is not. This makes a certain amount of sense: differentiation is non-injective, after all. But what's more fascinating (and possibly really good evidence of my own neurodivergence) is that integration isn't just an algorithm; an integral has to be hunted down with techniques, of which the Feynman technique is just one. I think I was introduced to it via the Basel problem. I have to confess I end up watching daily TikTok integration problems. It scratches an itch.
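You can watch the asymmetry directly in a CAS. A minimal sketch with sympy (whose behavior here I'm fairly confident of: diff always grinds through the rules, while integrate can come back unevaluated):

    import sympy as sp

    x = sp.symbols('x')

    # Differentiation is a terminating algorithm: apply the rules, done.
    print(sp.diff(x**x, x))       # x**x*(log(x) + 1)

    # Integration is a search; x**x has no elementary antiderivative,
    # so sympy hands the integral back unevaluated.
    print(sp.integrate(x**x, x))  # Integral(x**x, x)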
I kinda wish I'd made it at least as far as complex analysis in college. I mean, I kinda did; I remember doing something with contour integrals, but it just wasn't structured well. By that I mean Laplace transforms, poles of a function in the s-plane, and analytic continuation.
I'm not particularly proficient at the Feynman technique. Like I can't generally spot the alpha substitution that should be made. Maybe one day.
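For anyone else in the same boat, the textbook example of spotting it is $\int_0^\infty \frac{\sin x}{x}\,dx$. Introduce the parameter as $I(\alpha) = \int_0^\infty e^{-\alpha x}\,\frac{\sin x}{x}\,dx$; then $I'(\alpha) = -\int_0^\infty e^{-\alpha x}\sin x\,dx = -\frac{1}{1+\alpha^2}$, so $I(\alpha) = \frac{\pi}{2} - \arctan\alpha$ (the constant fixed by $I(\alpha) \to 0$ as $\alpha \to \infty$), and letting $\alpha \to 0$ gives $\frac{\pi}{2}$. The pattern to look for is a placement of $\alpha$ whose derivative kills the awkward factor, here the $1/x$.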
I have needed to know the values of a few integrals in my job, but I have always ended up with a close enough answer using computational methods. What am I missing by not solving analytically?
First, a motivational anecdote, then some straightforward theory, a simple (yet impressive) example fully worked out, the general method, and further examples of increasing difficulty for practice with hints.
Feynman’s trick is equivalent to rewriting the integrand as an integral over the parameter, extending the whole thing into a double integral, and then switching the order of integration.
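Concretely, for the running example below: writing $\frac{x^t - 1}{\ln x} = \int_0^t x^s\,ds$ turns the problem into $\int_0^1 \int_0^t x^s\,ds\,dx = \int_0^t \int_0^1 x^s\,dx\,ds = \int_0^t \frac{ds}{s+1} = \ln(t+1)$, and the order swap in the middle is exactly the step that differentiation under the integral sign is performing.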
The subtlety: one writes $I'(t) = \int_0^1 \frac{\partial}{\partial t}\left(\frac{x^t - 1}{\ln x}\right)dx = \int_0^1 x^t\,dx = \frac{1}{t+1}$, when $I'(t)$ is actually, by definition, $\frac{d}{dt}\int_0^1 \frac{x^t - 1}{\ln x}\,dx$.
These two are definitely not always equal to each other.
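(They are equal here, though: the $t$-partial $x^t$ is continuous and bounded on $[0,1]$ times any compact $t$-interval, which is enough to justify the swap.) A quick numerical sanity check for this particular integral, as a sketch assuming scipy is available; the step size and tolerance choices are mine:

    import math
    from scipy.integrate import quad

    # I(t) = integral from 0 to 1 of (x**t - 1)/ln(x) dx
    def I(t):
        return quad(lambda x: (x**t - 1) / math.log(x), 0, 1)[0]

    t, h = 2.0, 1e-5
    lhs = (I(t + h) - I(t - h)) / (2 * h)  # d/dt of the integral, by finite differences
    rhs = 1 / (t + 1)                      # the integral of the partial derivative
    print(lhs, rhs)                        # both come out ~0.333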