NeurIPS, the most famous AI conference these days, was born of the intentional collision of neuroscience and AI: a handful of researchers in both fields saw value in drawing inspiration from one another.
My recent conversations with AI researchers, most notably with Yoshua Bengio, have put me on a collision course with another group of researchers, one often vilified by us explorers of “hard” science: philosophers.
Indeed, one of Yoshua’s beliefs is that to increase the generality of AI we need to uncover more generic priors: general guidelines we could bake into our untrained models so that, as they start to observe data, they learn a more effective hierarchy of structures. Structures that allow them to apply their knowledge in more situations, to adjust their model faster when new data is observed, or to be more robust when faced with contradictory data. In short, Yoshua (like many other researchers, myself included) believes that better and more generic priors could help tackle the challenges AI is facing these days.
In a now-infamous paper, Yoshua called one of these very generic (and elusive) priors The Consciousness Prior. Some researchers were up in arms at the use of such a loaded term, accusing him of the academic equivalent of clickbait.
In my case, however, it just made me aware that I had no clue what consciousness was.
In the last few months, through a chance encounter with an excellent popularizer of analytical philosophy, I dove deep into the topic. I gained a better understanding of terms like qualia, consciousness, dualism, illusionism, etc. Words that philosophers use to approach questions that we hard scientists don’t even dare to ask.
Beyond my own improved understanding of some non-scientific (yet very important) questions, I discovered a community of careful thinkers who are far less enamoured with pointless rhetorical debates than I had imagined.
I discovered, against my own biases, that philosophy offers a genuinely valuable approach to improving our understanding of the world.
The following article, which I discovered via Waverly, explores the difficult topic of emotions from a psychological and philosophical angle. Since we often talk about emotions when discussing Artificial General Intelligence, I felt the article might be interesting both to my AI friends and to my (soon to grow?) group of philosopher friends. Maybe we need a PhilIPS conference, creating an intentional collision between these two worlds? (I kinda like that name 😉)