>Fundamentally new paradigm and tech the world will have to adapt to? Not hype at all.
I generally agree with your statements. But I personally tend to think of the various ML flavors as only a natural evolution of the same Turing/von Neumann paradigms. Neural networks simulate aspects of cognition but don’t redefine the computation model. They are vectorized functions, optimized using classical gradient descent on finite machines.
Training and inference pipelines are composed of matrix multiplications, activation functions, and classical control flow—all fully describable by conventional programming languages and Turing Machines. AI, no matter how sophisticated, does not violate or transcend this model. In fact, LLMs like ChatGPT are fully emulatable by Turing machines given sufficient memory and time.
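To make the point concrete: here's a sketch of a two-layer forward pass written in nothing but plain Python loops and branches. The weights are made up for illustration (not any real model's parameters), but the structure — matrix multiplies, an elementwise activation, ordinary control flow — is exactly what the comment above describes.

```python
# A tiny neural-network forward pass using only ordinary arithmetic
# and control flow. Weights below are hypothetical, for illustration.

def matmul(A, B):
    """Plain matrix multiply: classical nested iteration, nothing exotic."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def relu(M):
    """Activation function: an elementwise max(0, x), i.e. a branch per entry."""
    return [[x if x > 0 else 0.0 for x in row] for row in M]

# Hypothetical weights for a 2 -> 3 -> 1 network.
W1 = [[0.5, -1.0, 0.25],
      [1.0,  0.5, -0.5]]
W2 = [[1.0], [-1.0], [2.0]]

x = [[1.0, 2.0]]                  # one input row vector
hidden = relu(matmul(x, W1))      # layer 1: matmul + activation
output = matmul(hidden, W2)       # layer 2: matmul
print(output)                     # -> [[2.5]]
```

Real systems batch this onto GPUs for speed, but the speedup is engineering, not a new computation model — the same result falls out of the loops above.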
(*) Not playing the curmudgeon here, mind you, only trying to keep the perspective, as hype around "AI" often blurs the distinction between paradigm and application.
You're right. I meant "new paradigm" more in regard to its societal adoption — not that the number crunching going on in the GPUs is some new tech.
Umm, that's the Church–Turing thesis — widely assumed to be true, though it's a thesis rather than a provable theorem — that everything effectively computable is computable by a Turing Machine. Unless we're very wrong about the fundamentals of CS here, we'll never see anything that is more powerful, computationally speaking, than a Turing machine.
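The model being appealed to here really is this small: a Turing machine is just a transition table plus a loop. Below is a minimal simulator with a hypothetical example machine (flip every bit, halt at the first blank) — a sketch, not a production interpreter.

```python
# Minimal one-tape Turing machine simulator.
# rules maps (state, symbol) -> (symbol_to_write, head_move, next_state).

def run_tm(tape, rules, state="start", blank="_"):
    cells = dict(enumerate(tape))   # sparse tape, blank elsewhere
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: scan right, flipping 0 <-> 1, halt on blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm("1011", flip))  # -> 0100
```

Anything a GPU does in an inference pipeline could, in principle, be expressed as a (vastly larger) table like `flip` — slower by astronomical factors, but computing the same function.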