The sentience question: opening the AI black box

Opening the AI black box is not as hard as you might think.

At least not to get the basic knowledge you need to understand the difference between an AI language generator and a speaking human.

An AI generator looks at the words forming the beginning of a sentence and rolls a die to select the next word. To do this, it uses a table that associates a word with each possible roll.

Yes, AI generators need an unimaginably large number of tables to perform this feat: one for every possible sentence beginning. Fortunately, they have clever tricks to compress these tables and store them in memory.

At their heart, however, that’s how they operate:

  • Step 1. Fetch the table that corresponds to the beginning of the sentence.
  • Step 2. Roll the dice to look up a word from that table, then append it to the sentence.
  • Step 3. Go back to step 1.
  • Step 4. Use the generated sentence to convince humans you’re sentient. 😉
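The loop above can be sketched in a few lines of Python. Everything here is a toy illustration: the probability tables are made up for the example (a real system derives them from a trained neural network rather than storing them explicitly), but the fetch-table / roll-dice / repeat cycle is the same.

```python
import random

# Hypothetical tables: each sentence beginning maps to a table of
# possible next words and their probabilities. "<end>" stops the loop.
TABLES = {
    ("the",): {"cat": 0.5, "dog": 0.3, "sky": 0.2},
    ("the", "cat"): {"sat": 0.6, "ran": 0.4},
    ("the", "cat", "sat"): {"<end>": 1.0},
    ("the", "cat", "ran"): {"<end>": 1.0},
    ("the", "dog"): {"ran": 1.0},
    ("the", "dog", "ran"): {"<end>": 1.0},
    ("the", "sky"): {"<end>": 1.0},
}

def generate(start=("the",)):
    sentence = list(start)
    while True:
        # Step 1: fetch the table for the sentence so far.
        table = TABLES[tuple(sentence)]
        # Step 2: "roll the dice" — pick the next word at random,
        # weighted by the table's probabilities.
        word = random.choices(list(table), weights=list(table.values()))[0]
        if word == "<end>":
            return " ".join(sentence)
        sentence.append(word)
        # Step 3: loop back to step 1.

print(generate())  # e.g. "the cat sat" or "the dog ran"
```

Run it a few times and you get different sentences, because the only "decision" being made is a weighted dice roll at each step.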

Yes, it’s simplified. For example, modern AI language generators don’t actually work whole word by whole word: they build sentences out of word fragments called tokens. You get the gist of it though, and it’s enough to understand just how different these systems are from your fellow humans.

We get fooled because AI is the very first non-human thing that can generate fluent language. Unconsciously, we imagine their black box works the same way as our black box. We imagine that if an AI writes “I’m afraid of dying,” it must have reflected on its existence and experienced a feeling of fear. After all, if a fellow human wrote that sentence, it would very likely be the expression of such an internal experience.

The same goes for a simpler sentence. If a human writes “peanut butter and jelly are delicious together,” it’s likely because they tried that famous dish and experienced joy as the tastes mixed on their tongue. We know our computers don’t have a tongue, so if an AI generator wrote that, we wouldn’t assume it really experienced the joy of biting into a peanut butter and jelly sandwich.

For humans, language is most often the tip of a complex iceberg that finds its roots in our conscious experiences. For an AI, it’s the result of a series of dice rolls.

This article, written by experts in the field, goes deeper into that “cognitive illusion” of believing that fluent language means conscious experience.

In particular, it shows how this illusion often operates in reverse, in a way that can be much more damaging. We often imagine that a human who doesn’t express themselves fluently is having a lesser conscious experience. That is, we imagine they’re less intelligent.

Now that’s a cognitive illusion worth fighting.