A fictionalized Albert Einstein (portrayed by the late Walter Matthau) plays mischievous matchmaker between his egghead niece, Catherine Boyd, and a good-hearted auto mechanic named Ed Walters in the charming 1994 romantic comedy, I.Q. Some might object to the considerable liberties taken with historical fact and illustrious personages. After all, a key plot point turns on Einstein essentially committing a form of academic fraud by passing off his own work as Ed's, to help the latter impress his niece. But there's a lot to admire in the film, if for no other reason than its inclusion of Einstein's real-life cronies, Kurt Gödel, Boris Podolsky, and Nathan Liebknecht as supporting characters. And how could you not love this exchange, as Ed introduces Einstein to Frank, one of his co-workers at the garage: "This is Albert Einstein, the smartest man in the world!" Frank, shaking hands, intones in his best Joisey accent, "Hey, how they hangin'?"
This being a mainstream movie, there's very little actual science portrayed, despite the predominance of scientists in the cast. But there is a lovely little scene in a diner, where Catherine tries to explain to Ed the gist of Zeno's Paradox. She explains it thus: if she first moves half the distance between them, then half the remaining distance, then half of that, and so on forever, she will never be able to reach Ed. The distance between them will get smaller and smaller, but will never reach zero. The astute viewer will note that the subtext here is Catherine's belief that there is no way to bridge the gap created by the couple's differences in intellect and social status. Which makes it all the more refreshing for us diehard romantics when the practical-minded Ed simply steps over the imaginary line to close the gap: "So how did I do that?" A confused Catherine stammers, "I... I don't know." But if she knows her calculus (and she should), the "mystery" should be easy to solve.
I've encountered numerous versions and variations on Zeno's Paradox over the years, but -- and it pains me to admit this publicly -- I did not realize it was tied to the essence of calculus. If I get nothing else out of my fledgling experiment in self-instruction regarding calculus, at least I have learned that much. For those who haven't encountered Zeno before, he was a Greek philosopher living in the 5th century B.C., who thought a great deal about motion. Some might argue he thought a bit too hard about motion; the guy was always playing devil's advocate, even with his own arguments, which is how he arrived at his eponymous paradox. He used an arrow flying through the air towards a target to illustrate his points, rather than a young couple in a diner, but the basic idea is the same: to reach the target, the arrow must first cover half the distance, then half the remaining distance, and so on, through an infinite number of steps. By that logic, the distance between the arrow and the target would keep getting smaller and smaller, and yet the arrow could never close the gap completely in order to actually reach the target.
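For the numerically inclined, here's a quick sketch of those halving steps as a geometric series -- my own toy example in Python, not anything from the film or the course, with the full gap arbitrarily set to 1. The running total creeps toward the whole distance while the shortfall shrinks toward zero:

```python
# Zeno's halving steps as a geometric series: 1/2 + 1/4 + 1/8 + ...
# Each partial sum falls short of the full distance (here set to 1),
# but the shortfall shrinks toward zero as the steps pile up.

distance_covered = 0.0
step = 0.5  # the first step covers half the gap

for n in range(1, 21):
    distance_covered += step
    step /= 2  # each new step covers half the remaining distance
    print(f"after step {n:2d}: covered {distance_covered:.10f}, "
          f"remaining {1 - distance_covered:.10f}")

# No finite number of steps closes the gap, but the limit of the sum is
# exactly 1 -- which is the resolution that calculus, via limits, supplies.
```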
There's an equally paradoxical corollary: At any given moment in time, the arrow has a specific fixed position -- it can be in only one place at any given instant -- which means it is technically "at rest" (not moving) at that particular instant, even though, taken all together, those individual points add up to an arrow in motion. Motion, after all, is basically the measure of how an object's position changes over time. But break motion down into infinitely small increments, and you find yourself trying to determine how far the arrow travels in zero time: instantaneous motion. Ergo, the paradox. Zeno stumbled on a new way of looking at the world. Alas, it would take a couple of millennia before mathematics developed sufficiently to make sense of his logical conundrum. (For starters, they needed the concept of zero.)
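Here's the trick from the other direction, in another little Python sketch of my own devising (the position function is made up, not Zeno's arrow): compute the average speed over smaller and smaller slices of time, and watch it settle on a definite value instead of collapsing into 0/0 nonsense:

```python
# Average speed over a shrinking time interval around t = 1, for a toy
# position function s(t) = t**2.  As the interval dt approaches zero,
# the average speed approaches the instantaneous speed (the derivative).

def position(t):
    return t ** 2

t = 1.0
for dt in [1.0, 0.1, 0.01, 0.001, 0.0001]:
    average_speed = (position(t + dt) - position(t)) / dt
    print(f"dt = {dt:<8} average speed = {average_speed:.6f}")

# The printed values home in on 2.0, the exact derivative of t**2 at t = 1.
```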
Calculus has a rather formidable reputation. I have always been among those non-mathematical sorts who viewed it with intense trepidation and preferred to keep a safe distance, despite my love for all that science-y stuff. But according to my new virtual instructor, Michael Starbird (University of Texas, Austin), the entire discipline is encapsulated in two fundamental ideas: (1) the derivative, which is a way of measuring instantaneous change (such as finding speed from position); and (2) the integral, which describes the accumulation of tiny pieces that add up to a whole (and can be used, for instance, to determine the distance traveled based on speed). Everything else involved in calculus is just variations on these two themes.
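To make those two ideas concrete, here's a rough numerical sketch -- again my own toy example, not anything lifted from Professor Starbird's lectures -- that estimates a car's speed from its position, then adds up speed times tiny time slices to recover the distance traveled:

```python
# The two big ideas, numerically, for a car whose position is s(t) = t**2.
# (1) derivative: speed as (change in position) / (change in time) over a
#     tiny interval
# (2) integral: distance as the accumulation of speed * tiny time slices

def position(t):
    return t ** 2

def speed(t):
    return 2 * t  # the exact derivative of t**2, used to drive the integral

# (1) Derivative: estimate the speed at t = 3 from positions alone.
dt = 1e-6
speed_estimate = (position(3 + dt) - position(3)) / dt
print("speed at t = 3:", speed_estimate)        # close to 6

# (2) Integral: accumulate speed over tiny slices from t = 0 to t = 3.
n_slices = 100_000
slice_width = 3 / n_slices
distance = sum(speed(i * slice_width) * slice_width for i in range(n_slices))
print("distance covered by t = 3:", distance)   # close to position(3) = 9
```

The punchline, of course, is that the two answers agree with each other: differentiating the position gives the speed, and accumulating the speed gives back the position.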
Naturally, any physicist reading this post already knows this stuff. But it wasn't always the case. Borrowing one of the central concepts of the discipline itself, one could argue that calculus was invented via tiny infinitesimal bits of accrued knowledge that, taken together, added up to a whole. (Isn't metaphor a marvelous thing?) The roots of calculus are usually traced back to ancient Greece and the Pythagorean theorem, a century or so before Zeno. A century after Zeno, Eudoxus developed something called the "method of exhaustion," which made it possible to determine the area and volume of a region by breaking it into smaller shapes. Archimedes adapted that method around 225 B.C. to work out the areas and volumes of various geometric objects.
A lot of mathematicians over the subsequent centuries -- in lots of different geographic regions, including India and Japan -- made vital contributions, and a handful "almost" invented calculus, most notably Pierre de Fermat in 1629. But ultimately, the credit for inventing calculus is given jointly to Isaac Newton and Gottfried Wilhelm Leibniz, who independently made their revolutionary discoveries in the 1660s and 1670s.
Newton hardly needs an introduction, being almost universally recognized as the father of modern physics via his work on gravity and the laws of motion (detailed in the massive Principia) and the nature of light (Opticks). Around 1666, Newton worked out his so-called "theory of fluxions." It's early days yet in my remedial calculus DVD course, and I have yet to find a good lay person's explanation of what Newton's theory entailed, but it seems to correlate pretty well to the notion of differentials. At any rate, it's generally agreed that Newton was the first to state the fundamental theorem of calculus, and was also the first to apply derivatives and integrals in a single work (although he didn't use those terms).
Leibniz is less well-known to non-scientists. He was born in Germany in 1646, and since his father died when he was 6, was largely raised by his mother. He taught himself Latin and Greek so he could read the great works of Aristotle and other philosophers. He entered the University of Leipzig at age 15, and left two years later with his degree in law (he eventually earned a doctorate in law).
A chance meeting with Christiaan Huygens ignited Leibniz's interest in the study of geometry and the mathematics of motion; he described their meeting as "opening a whole new world" to him. He pursued these interests in his spare time, inventing (in 1671, well before Charles Babbage and his Difference Engines) a handy little machine called the step reckoner. A forerunner of the modern calculator, the device could add, subtract, multiply, divide, and even extract square roots. His reasoning: "It is unworthy of excellent men to lose hours like slaves in the labor of calculation, which could be safely relegated to anyone else if machines were used."
But it was the problem of motion that most intrigued Leibniz, who published the first account of differential calculus in 1684, followed by a discussion of integral calculus two years later. Newton's work on the subject didn't appear in print until 1687. His procrastination led to one of the most bitter controversies in scientific history. Under present-day conventions, Leibniz would have won credit for the discovery simply because he published first, but at the time, Newton was by far the more famous scientist, and a prominent member of the Royal Society. And he wasn't above using his considerable influence to crush the scientific competition. In addition to Leibniz, he fought with John Flamsteed, with Huygens, and with Robert Hooke, and each battle proved acrimonious. In short, Newton was not a "people person"; no wonder he purportedly died a virgin.
The Royal Society sided with Newton on the controversy, crediting him with the discovery of calculus in 1715 after a prolonged dispute. Leibniz wasn't given shared credit until after his death a year later, and even then, it was one of those never-ending controversies that occasionally plague the physics profession. (Get a couple of physicists on opposite ends of the "does centrifugal force really exist" debate, then sit back and watch the fur fly.) Today, the consensus seems to be that the two men represent the two approaches that form the basis of the discipline they co-invented. Leibniz was the more abstract of the two, much like I.Q.'s Catherine -- and it's his superior system of notation that modern scientists still use today -- while Newton focused on the more practical applications of calculus, like Ed the science-minded garage mechanic.
So that's the gist of what I learned this week. Calculus is fairly simple and straightforward in concept; the devil is in the details. But essentially it's a way of measuring change, whether that be a change in position, temperature, or what have you. Its power comes from its universality: the same basic concepts can be applied to systems as diverse as a car driving down a road, the stock market, even traffic flow.
Personally, the most striking thing about my first calculus lesson was the notion of the integral, because I hadn't considered the deeper implication of this seemingly obvious statement: viewing objects as being formed via the accumulation of infinitesimally small pieces enables us to see the world as a dynamic rather than static place. A simple medicine ball, viewed through the lens of the integral, can be seen as growing by accretion -- i.e., a form of motion -- rather than just sitting there complacently on the floor of my gym, waiting to be of some use.
It's a whole new way of thinking, but at least I've got the gist of the "What", in the broadest possible brushstrokes -- plus a useful historical context. All those devilish details will be filled in bit by bit in the coming weeks as I begin the hard part: working through the equations and learning how to apply the universal principle to a wide variety of applications. The biggest challenge, for my abstractly-challenged brain, will be grasping the over-arching "Why" -- the contextual framework into which it all fits. Because ultimately, the "why" is everything. In this case, the "why" comes down to the function: the derivative and the integral are flip sides of the same coin, two different ways of looking at the same thing, and the function is the connection between them. Understanding how the two relate to each other, and the significance of that dependence, as expressed in a universal "Fundamental Theorem," is my ultimate goal.
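The standard textbook statement of that theorem -- which I'm quoting from general knowledge, since I haven't actually reached that lecture yet -- goes something like this:

$$\frac{d}{dx}\int_a^x f(t)\,dt = f(x), \qquad \int_a^b F'(x)\,dx = F(b) - F(a).$$

In words: take the derivative of an accumulation and you recover the thing being accumulated; accumulate a derivative and you recover the total change. Flip sides of the same coin, indeed.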
In the meantime, my 14-year-old niece, Kathryn, sent me a little Internet meme in which one can determine one's age using "chocolate math." It's significantly easier than calculus:
1. Pick the number of times a week that you would like to eat chocolate. (This number must be more than 1 but less than 10, much to Jen-Luc Piquant's disappointment, as she would like to have chocolate for breakfast, lunch and dinner, 7 days a week -- provided it's gourmet, organic chocolate, that is; none of that cheap-o Hershey stuff.)
2. Multiply this number by 2.
3. Add 5.
4. Multiply the total by 50.
5. If you have already had your birthday this year, add 1756. If you haven't, add 1755.
6. Now subtract the four-digit year in which you were born.
Ta-da! The result should be a three-digit number. The first digit is the original number you chose, pertaining to how often per week you would like to eat chocolate. And the last two digits are your age (one assumes anyone capable of taking the quiz is aged 10 or older, otherwise they'd end up with a two-digit number). I can faithfully report that it really does work, a nice little example of "mathemagic," or hidden patterns in numbers. Apparently 2006 is the only year this particular trick will work, although I'd think that altering the numerical values in Step 5 might make it possible to adapt the meme to other years. That doesn't mean I understand to the nth degree why it works, of course, but I'm pretty sure it has nothing to do with any mysterious innately mathematical properties of chocolate (more's the pity).
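For the algebraically inclined, though, the trick is easy enough to unmask with a bit of scribbling of my own. Call the chocolate number $x$ and your birth year $y$, and suppose your birthday has already passed:

$$50(2x + 5) + 1756 - y = 100x + 250 + 1756 - y = 100x + (2006 - y).$$

The hundreds digit is the chocolate number, and the last two digits are $2006 - y$ -- your age. If your birthday hasn't come yet, adding 1755 instead simply knocks off the extra year, and bumping those constants up by one each January would keep the meme alive in future years.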
Teenagers today have unprecedented access to cutting-edge technology, but they rarely use email, cell phones, MP3 players, or computers for much more than entertainment. So I was thrilled to get a mathematical meme (however elementary) from my niece, and not, say, a giggly personality "quiz" or cautionary chain letter, which is more the norm for kids her age. Maybe she'll manage to enjoy her high school calculus class senior year -- unlike her recalcitrant aunt, who skipped out on it altogether. Then again, maybe I'm finally mature enough to appreciate why calculus is so seminal to science.