Anybody who has tried to learn Deep Learning has quickly realized that it involves a lot of maths. Yet despite all the equations you encounter, much of Deep Learning remains poorly understood from a mathematical standpoint.
Our understanding is progressing, though, and this Quanta Magazine article does a good job of summarizing recent advances on the theoretical front:
Within the sprawling community of neural network development, there is a small group of mathematically minded researchers who are trying to build a theory of neural networks — one that would explain how they work and guarantee that if you construct a neural network in a prescribed manner, it will be able to perform certain tasks.
Boris Hanin, a mathematician at Texas A&M University, likens the situation to the development of another revolutionary technology: the steam engine. At first, steam engines weren’t good for much more than pumping water. Then they powered trains, which is maybe the level of sophistication neural networks have reached. Then scientists and mathematicians developed a theory of thermodynamics, which let them understand exactly what was going on inside engines of any kind. Eventually, that knowledge took us to the moon.