I am not an expert on communications or electronics, but my impression is that the limits are not to be found in the hardware (unless it turns out that the only way to overcome the software problems is new hardware, such as quantum computers - but that is wild speculation).
My area of expertise is the cognitive side of software systems, and there we know very well what the challenges are: simply put, we require actual intelligence, but what we have - however impressive it may seem to the public - is not actual intelligence.
By actual intelligence I mean something capable of abstraction, generalisation, analogy, common-sense reasoning, speculation (projecting situations we can imagine without ever having encountered them), learning from single examples, and so on. Unfortunately, deep learning has none of these abilities.
I quote from Chollet (who wrote the Keras DL library): "You cannot achieve general intelligence simply by scaling up today’s deep learning techniques.
Humans only need to be told once to avoid cars. We’re equipped with the ability to generalize from just a few examples and are capable of imagining (i.e. modeling) the dire consequences of being run over. Without losing life or limb, most of us quickly learn to avoid being overrun by motor vehicles."
(https://venturebeat.com/2017/04/02/understanding-the-limits-of-deep-learning/)
One of the most accessible explanations I can think of is in Melanie Mitchell's talks... here is one:
https://www.youtube.com/watch?v=4QBvSVYotVc
or another:
https://www.youtube.com/watch?v=ImKkaeUx1MU
(Since 2019 the situation has only gotten worse, as we now understand more clearly the diminishing returns and exploding costs of doing DL by brute force: https://spectrum.ieee.org/deep-learning-computational-cost)
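To make "diminishing returns" concrete: the argument is that error falls only as a small power of training compute, so each further improvement gets disproportionately expensive. Here is a minimal back-of-the-envelope sketch in Python (the exponent alpha = 0.1 is an assumed value chosen purely for illustration, not a figure taken from the article):

    # Assumption (illustrative only): test error scales as compute**(-alpha).
    # Then cutting the error by a factor k requires k**(1/alpha) times the compute.
    alpha = 0.1  # hypothetical scaling exponent

    def compute_multiplier(error_cut: float, alpha: float) -> float:
        """Factor by which compute must grow to divide the error by error_cut."""
        return error_cut ** (1.0 / alpha)

    for error_cut in (2, 4, 8):
        print(f"error / {error_cut}: compute x {compute_multiplier(error_cut, alpha):,.0f}")

Under that assumed exponent, each successive halving of the error multiplies the compute bill by roughly three orders of magnitude (x1,024, then x1,048,576, then x1,073,741,824 relative to the baseline) - which is the brute-force problem in a nutshell.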