No, but it is weird to use "modern" here. "Modern" suggests a longer timeframe. I would say machine learning with deep NNs is modern AI. It's just not true that everything that isn't a transformer is outdated, but it is "kinda" true that everything that isn't a deep NN is outdated.
But that's been the case for the last 60 years. Whatever came out in the last 10 years is the first thing deserving to be called AI, and everything else is just basic computer science algorithms that every practitioner should know. Eliza was AI in 1967; now it's just string substitution. Prolog was AI in 1972; now it's logic programming. Beam search and A* were AI in the 1970s; now they're just search algorithms. Expert systems were AI in the 1980s; now they're just rules engines. Handwriting recognition, voice recognition, and speech synthesis were AI in the 90s; now they're just multilayer perceptrons aka artificial neural nets. SVMs were AI in the 00s; now they're just support vector machines. CNNs and LSTMs were AI in the 10s; now they're just CNNs and LSTMs.
Yeah, but for a while it seemed we'd gotten over that, and in the "modern era" people were just talking about ML. Nobody in 2012, as best I can recall, was referring to AlexNet as "AI", but then (when did it start?) at some point the media started calling everything AI, and eventually the ML community capitulated and started calling it that too - maybe because the VCs wanted to invest in sexy AI, not ML.
Consider "modern" to mean NN/connectionist vs GOFAI AI attempts like CYC or SOAR.
I dunno. The earliest research into what we now call "neural networks" dates back to at least the 1950s (Frank Rosenblatt and the Perceptron) and arguably into the 1940s (Warren McCulloch and Walter Pitts and the TLU "neuron"). And depending on how generous one is with their interpretation of certain things, arguments have been made that the history of neural network research dates back to before the invention of the digital computer altogether, or even before electrical power was ubiquitous (eg, the late 1800s). Regarding the latter bit, I believe it was Jurgen Schmidhuber who advanced that argument in an interview I saw a while back, and as best I can recall he was referring to a certain line of mathematical research from that era.
In the end, defining "modern" is probably not something we're ever going to reach consensus on, but I really think your proposal misses the mark by a touch.
Sure, the history of NNs goes back a while, but nobody was attempting to build AI out of single-layer perceptrons, which were famously criticized for not even being able to implement an XOR function.
The modern era of NNs started with being able to train multilayer neural nets using backprop, but the ability to train NNs large enough to actually be useful for complex tasks can arguably be dated to the 2012 ImageNet competition, when Geoff Hinton's team repurposed GPUs to train AlexNet.
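(As an aside, the XOR point is easy to see concretely. Here's a rough NumPy sketch of my own - not anything rigorous - showing that classic perceptron updates on a single-layer model never fit XOR because it isn't linearly separable, while a tiny two-layer net trained with plain backprop does.)

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)  # XOR labels

    # Single-layer perceptron: threshold unit with classic perceptron updates.
    w, b = np.zeros(2), 0.0
    for _ in range(1000):
        for xi, yi in zip(X, y):
            pred = 1.0 if xi @ w + b > 0 else 0.0
            w += (yi - pred) * xi
            b += (yi - pred)
    preds = (X @ w + b > 0).astype(float)
    print("perceptron on XOR:", preds, "accuracy:", (preds == y).mean())  # stuck below 100%

    # Two-layer net (one hidden layer of 8 sigmoid units) trained with backprop.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=8), 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(10000):
        h = sig(X @ W1 + b1)      # hidden activations
        out = sig(h @ W2 + b2)    # output probabilities
        d_out = (out - y) * out * (1 - out)          # gradient at the output
        d_h = np.outer(d_out, W2) * h * (1 - h)      # backprop into the hidden layer
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum()
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)
    print("two-layer net on XOR:", (out > 0.5).astype(float))  # typically [0, 1, 1, 0]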
But AlexNet was just a CNN, a classifier, which IMO is better considered ML, not AI, so if we're looking for the first AI in this post-GOFAI world of NN-based experimentation, then it seems we have to give the nod to transformer-based LLMs.
Well, transformers still have some plausible competition in NLP, but beyond that there are other fields of AI where convnets or RNNs still make a lot of sense.