
What those have in common is that they're numerical, not that they require (integral and differential) calculus.

It's true that in a lot of cases, deeply understanding discrete numerical algorithms is a lot easier if you can analyze the continuous versions, which of course cannot be executed directly. But you can get really far with just the discrete versions, and you can understand useful things about the continuous versions without knowing what a derivative or an integral is.

And I don't just mean that you can use Unity or Pure Data to wire together pre-existing algorithms and get interesting results, although that's true too. You don't even need to understand any calculus to write a ray-tracer from scratch like http://canonical.org/~kragen/sw/aspmisc/my-very-first-raytra..., which is four pages of C.

You could maybe argue that it's using square roots, and calculating square roots efficiently requires using Newton's method or something more sophisticated. But Heron of Alexandria described "Newton's" method 2000 years ago, although he hadn't generalized it to finding zeroes of arbitrary analytic functions, perhaps because he didn't have concepts of zero or functions.
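To make this concrete, here's a minimal sketch of Heron's method, which computes square roots by repeatedly averaging a guess with the quotient; it happens to coincide with Newton's method on f(x) = x² − a, but stating and using it requires no calculus at all (the function name and iteration count are my own choices):

```python
def heron_sqrt(a, iterations=20):
    """Approximate sqrt(a) for a > 0 by Heron's method: repeatedly
    replace a guess x with the average of x and a/x.  This is
    Newton's method applied to f(x) = x**2 - a, but it can be
    stated and used without any notion of a derivative."""
    x = a if a > 1 else 1.0  # any positive starting guess converges
    for _ in range(iterations):
        x = (x + a / x) / 2
    return x
```

Convergence is quadratic, so twenty iterations is far more precision than a double can hold.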

You could argue that it's using the pow() function, but it's using it to take the 64th power of a dot product in order to get specular reflections. People were taking integer powers of things quite a long time ago.
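A 64th power doesn't even require a general pow(): since 64 = 2⁶, squaring six times suffices. A sketch (the function name is mine, not from the raytracer):

```python
def pow64(x):
    """Raise x to the 64th power by squaring six times,
    since 64 = 2**6; no general pow() or logarithms needed."""
    for _ in range(6):
        x = x * x
    return x
```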

Even using computers for really analytic things, like finding zeroes of arbitrary analytic functions, can be done with just a minimal, even intuitive, notion of continuity.
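Bisection is the standard example: it finds a zero using nothing but the intuition that a continuous function going from negative to positive must cross zero somewhere in between. A minimal sketch (names and step count are my own):

```python
def bisect_zero(f, lo, hi, steps=60):
    """Find a zero of f between lo and hi, assuming f(lo) and f(hi)
    have opposite signs.  Needs only the intuitive idea that a
    continuous function crossing from negative to positive must
    pass through zero somewhere; no derivatives involved."""
    assert f(lo) * f(hi) < 0
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

Sixty halvings of the interval pins the zero down to better than double precision on any reasonable starting bracket.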

Alan Kay's favorite demo of using computers to build human-comprehensible models of things is to take a video of a falling ball and then make a discrete-time model of the ball's position. A continuous-time model really does require calculus, and famously this is one of the things calculus was invented for; a discrete-time model requires the finite difference operator (and maybe its sort-of inverse, the prefix sum). Mathematics for the Million starts out with finite difference operators in its first chapter or two. You don't even need to know how to multiply and divide to compute finite differences, although a little algebra will get you a lot farther with them. A deep understanding of the umbral calculus may be inspirational and broadening in this context, and may even help you debug your programs, but you can get by without it.
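As a sketch of that demo, with hypothetical sampled positions standing in for the video frames: the falling ball's position is quadratic in time, so its first differences (velocities) grow linearly and its second differences (accelerations) are constant, and computing this needs only subtraction:

```python
# Positions of a falling ball sampled at equal time steps
# (constant acceleration, so position is quadratic in time).
positions = [t * t for t in range(8)]  # hypothetical frame data

def diff(xs):
    """The finite difference operator: the discrete analogue of
    the derivative, computable with nothing but subtraction."""
    return [b - a for a, b in zip(xs, xs[1:])]

velocities = diff(positions)      # first differences: 1, 3, 5, 7, ...
accelerations = diff(velocities)  # second differences: constant 2
```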

I agree that calculus is really powerful in extending the abilities of computers to model things, but I think you're overstating how fundamental it is.



I think we approach this from different ends: what one can achieve (your approach here), versus which boxes of science and mathematics are relevant to the work. Yes, one can do a lot of things by fumbling in the dark, so to speak, but that doesn't mean the result isn't isomorphic to existing theory; rather, the experimenter lacks a map from the problem she is solving to the established theory. I'm all for experimentation! It's often better to fumble a bit first and then see what others have done. But it's often hard to map the problem at hand onto existing theory without examples of application. This is where academic training comes in: it's a ridiculously well-established path to a set of tools forged by the greatest human minds.

A programmer equipped with a bit of calculus is so much more powerful than a programmer without. It's like climbing out of a canyon: both the climber with training and equipment and the rookie with bare hands will probably reach the top, but the better-equipped person gets there sooner, and is already tackling other interesting problems when the other finally arrives.

Humans have limited time on this planet. Really, learning calculus formally is one of the most efficient and painless productivity boosters for creating new bicycles of the mind. It's not the only one, and it's not strictly necessary, as you pointed out, but compared to its utility it's so cheap to acquire that I can't really see any reason not to force it on people. This is still just my opinion; I don't have sufficient practical didactic chops to prove it even anecdotally.


I think you didn't understand what I wrote. I wasn't arguing for fumbling in the dark. My example of a ray-tracer is, I'm pretty sure, not something you can do by trial and error. I was arguing that the mathematical theory you need for the DSP things you mentioned isn't, mostly, the (integral and differential) calculus. There's a lot of mathematical theory you do need, but the calculus isn't it.

I totally agree that (integral and differential) calculus is a massive mental productivity booster. I'm not very convinced of the utility of schooling in acquiring that ability, because I've known far too many people who passed their calculus classes and then forgot everything, probably because they stopped using it. I've forgotten a substantial amount of calculus myself due to disuse. But I agree that schooling can work.

But I wasn't arguing against schooling, even though our current methods of schooling are clearly achieving very poor results, because they're clearly a lot better than nothing.

I was arguing that, for programming, the schooling should be directed at the things that increase your power the most. Two semesters of proving limits and finding closed-form integrals of algebraic expressions aren't it. Hopefully those classes will teach you about parametric functions, Newton's method, and Taylor series, but you can get through those classes without ever hearing about vectors (much less vector spaces and the meaning of linearity), Lambertian reflection, Nyquist frequencies, Fourier transforms, convolution, difference equations, recurrence relations, probability distributions, GF(2ⁿ) and GF(2)ⁿ, lattices (in the order-theory sense), numerical approximation with Chebyshev polynomials, coding theory, or even asymptotic notation.

In many cases, understanding the continuous case of a problem is easier than understanding the discrete case; but in other cases, the discrete case is easier, and trying to understand it as an approximation to the continuous case can be actively misleading. You may end up doing scale-space representation of signals with a sampled Gaussian, for example, or trying to use the Laplace transform instead of the Z-transform on discrete signals.

If you really want to get into arguing by way of stupid metaphors, I'd say that when you're climbing the wall of a canyon, a lightweight kayak will be of minimal help, though it may shield you from the occasional falling rock.

But I don't know, maybe you've had different experiences where integral and differential calculus were a lot more valuable than the stuff I mentioned above.


Might be we have different chunking. In my preconception, calculus is the first necessary stepping stone to the other stuff you mentioned. I have no idea how to approach the Fourier transform conceptually, for example, other than by the calculus route, since the integral form is always introduced first. It's true that linear algebra and calculus don't often meet at first, at least until one needs to do coordinate transforms, e.g. from spherical to Cartesian coordinates.

It's true I don't need that stuff in my daily work that much. But I recognise that a lot of problems I might meet are trivial with some applied calculus, like the Newton iteration you mentioned.


http://www.dspguide.com/ch8/1.htm talks about the discrete Fourier transform, which decomposes a discrete (periodic) signal into a sum of a discrete set of sinusoids. The Fourier transform is actually a case where the continuous case is misleading — in the continuous case, you unavoidably have the Gibbs phenomenon, a complication which disappears completely in the discrete case, and the argument for this is a great deal simpler than the analogous reasoning for analytic signals. And even if you show that, for example, sinusoids of different frequencies are orthogonal in the continuous case, it doesn't immediately follow that this is true of the sampled versions of those same signals — and in fact it isn't true in general, only in some special cases. You can show by a simple counting argument that no other sampled sinusoids are orthogonal to the basis functions of a DFT, for example. Showing that the DFT basis is orthogonal is more difficult!
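A numerical sketch of that orthogonality claim (N and the helper names are my own choices, not from the linked chapter): the N-point DFT basis consists of complex sinusoids sampled N times per period, and you can verify directly that distinct frequencies have zero inner product while a frequency with itself gives N:

```python
import cmath

N = 8  # number of samples per period

def basis(k):
    """The k-th DFT basis vector: a complex sinusoid of
    frequency k, sampled at N equally spaced points."""
    return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

def inner(u, v):
    """Discrete inner product, conjugating the second argument."""
    return sum(a * b.conjugate() for a, b in zip(u, v))

# Distinct DFT frequencies are orthogonal; a frequency against
# itself gives N.  This is a finite sum, checkable directly,
# with no integrals in sight.
assert abs(inner(basis(1), basis(2))) < 1e-9
assert abs(inner(basis(3), basis(3)) - N) < 1e-9
```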

You definitely don't need calculus to transform between spherical and Cartesian coordinates. I mean I'm pretty sure Descartes did that half a century before calculus was invented. You do need trigonometry, which is about a thousand years older.

Newton iteration is a bit dangerous; it can give you an arbitrary answer, and it may not converge. In cases where you think you might need Newton iteration, I'd like to suggest that you try interval arithmetic (see http://canonical.org/~kragen/sw/aspmisc/intervalgraph), which is guaranteed to converge and will give you all the answers but is too slow in high dimensionality, or gradient descent, which does kind of require that you know calculus to understand and works in more cases than Newton iteration, although more slowly.
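Here's a concrete sketch of that failure mode, using the textbook example f(x) = x³ − 2x + 2: starting Newton's method from x = 0, the iterates bounce between 0 and 1 forever and never converge to the real root near −1.77:

```python
def newton_step(f, fprime, x):
    """One step of Newton's method: follow the tangent line at x
    down to where it crosses zero."""
    return x - f(x) / fprime(x)

# Newton's method can fail to converge: on f(x) = x**3 - 2x + 2,
# starting from x = 0, the iterates cycle between 0 and 1 forever,
# never approaching the actual root near x = -1.77.
f = lambda x: x ** 3 - 2 * x + 2
fprime = lambda x: 3 * x ** 2 - 2
x = 0.0
trajectory = [x]
for _ in range(6):
    x = newton_step(f, fprime, x)
    trajectory.append(x)
# trajectory is [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```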


A sufficiently advanced understanding of the finite difference operator might well be considered indistinguishable from understanding "calculus"...


I did mention that, as you can see :)




