A vision for meaningful AI

Sebastian Kahl
5 min read · Jun 23, 2020

Lately I often find myself wondering what I should do with my life. I try to imagine specific situations that I would love to see myself in. If you are like me, you experience these imagined situations first of all visually. But the more profound part of the experience is the gut feeling that tells you how well or badly an imagined situation fits you specifically. I think this matters especially in our modern life, as we are constantly bombarded with images of situations that other people love (or at least seem to love) to be in.

I feel like I am at a crossroads. With my PhD dissertation finished and a date for the defense set, I’m looking for ways to live my life sustainably. That means living more healthily, with time at my own disposal, while leading a meaningful and productive working life without having to move wherever the next post-doc position, or eventually a professorship, might be. So the most promising way forward for me is either to think entrepreneurially or to find a place where I can embrace new work, with the possibility to work remotely whenever possible and the flexibility to choose when to work (within bounds), while doing the best work of my life toward a goal that matters. One that can touch everyone’s life.

I wonder if you, dear reader, can understand me. I can imagine that you didn’t get some of the concepts, references, or metaphors at first; that you inferred their meaning from the rest of the text, or even looked them up.

Now imagine how hard it must be for a piece of software to try to understand this text. Modern machine learning techniques have become quite good at predicting the best next word in sentences like: “Why did the chicken cross the ____?”. But can they answer the question without having processed a text discussing a similar topic? Can they genuinely infer a best guess for the meaning of the completed sentence and creatively try to answer from the perspective of the chicken? Could they come up with an answer like “The chicken crossed the road because its chicks ran to the other side and a truck was quickly coming closer.”?
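To make the contrast concrete, here is a toy sketch of the kind of statistical next-word prediction I mean — my own minimal illustration with a made-up three-sentence corpus, not any real model:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (invented for this example).
corpus = (
    "why did the chicken cross the road . "
    "the chicken crossed the road to get to the other side . "
    "the truck drove down the road ."
).split()

# Count which word follows each pair of words (a trigram model).
following = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][c] += 1

def predict(a, b):
    """Return the word most often seen after the pair (a, b)."""
    return following[(a, b)].most_common(1)[0][0]

print(predict("cross", "the"))  # -> road
```

The model fills the blank with “road” because of word statistics alone; it has no notion of chicks, trucks, or why a chicken might cross anything — which is exactly the gap between prediction and meaning.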

I don’t think so, and this lack of meaning in state-of-the-art AI bothers me. Let’s break it down to your everyday work: which everyday activities bother you? I bet among the top five is opening your email program every morning, finding roughly twenty or more mails, sifting through them all, filtering out the spam, filtering out what’s unimportant to you, prioritizing what is left, and starting to answer, often with similar and repeated blocks of text. Why can your email program not see which mails are important to you? You see, what is bothering you is also bothering me. So why don’t we try to change this?

I’m not talking about a fancy spam filter. I’m asking why we don’t tackle the problem of understanding, and of processing and using meaning, in artificial intelligence. The reason is that this is hard; it has been perceived and analyzed to be hard, and rightfully so!

Here is why this shouldn’t keep us from trying again: humans actively search for points of reference in texts that help them anchor their understanding. Understanding often is nothing more than best guesses that can always be revised. If you cannot find an anchor, you start digging deeper into the text, seemingly pausing your process of understanding until you finally find your anchor in the next section. You could also try to find the meaning of the text without understanding that specific bit. It is quite interesting how often this happens, and how often, even in face-to-face conversation, people don’t get each other’s meaning but want to move on and simply act as if they understood. In conversation, rapport often beats understanding. If you don’t believe me, try testing your understanding during your next conversations. And if the missing meaning really bothers you, looking up a concept today is mostly just a voice command away.

So, am I saying that understanding is negligible? On the contrary! Understanding is the key to meaning, but meaning comes to you without perfect understanding: more often through a process of best guesses, information from your situational context, and knowledge about your conversation partner. You try to anchor your understanding in this contextual meaning manifold. When something doesn’t fit, you either take this as a vantage point to drive your conversation forward (clarifying your understanding) or you try other means to make the meaning fit.

All these aspects you can find in recent theories of brain function that care about your mind being situated in your body. These theories consider what it means for your mind to grow up in your body, situated in the environments you came to know. Environments you actively explored, in which you took in and processed all this information. We are starting to get a grasp of how you form meaning by repeatedly discovering similar information, in a process of understanding, across different contexts: this allows you to detach your understanding from any one context and form meaning that is valid without it, while still being associated with the context you discovered it in. We are also starting to get a grasp of how uncertainty sits fundamentally at the core of your cognitive processes.

In the framework of predictive processing, your brain is seen as a prediction machine that constantly refines its models of the world. It does this so it won’t find itself in an unpredicted and thus uncertain state in which you wouldn’t know what to do next and, if worst comes to worst, would die. In this theoretical framework, the drive to minimize uncertainty is so strong that it not only leads to the adaptation of the brain’s internal models of the world, it can also drive your behavior. It is through your behavior that you can change your environment: move to the fridge to find food and drink, or go to the toilet. In effect, your behavior changes your perception of the world, so that the predictions from your model of the world end up fitting again. In a conversation, you would try to minimize your uncertainty about your conversation partner’s understanding by asking specific questions.
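The two routes to reducing prediction error described above — update the model, or act on the world — can be sketched in a few lines. This is my own toy illustration of the idea, not a formal model from the predictive-processing literature; the scalar “world state” and the update rules are invented for the example:

```python
def perceive(prediction, observation, lr=0.5):
    """Perceptual inference: nudge the internal model toward the observation."""
    error = observation - prediction
    return prediction + lr * error

def act(world, prediction, step_size=0.5):
    """Active inference: nudge the world toward what the model predicts."""
    return world + step_size * (prediction - world)

world = 10.0        # e.g. distance to the fridge
prediction = 0.0    # the model predicts being at the fridge already

# Acting repeatedly drives the world state toward the prediction,
# so the prediction error vanishes through behavior, not model change.
for _ in range(20):
    world = act(world, prediction)

print(round(world, 3))  # -> 0.0: behavior made the prediction come true
```

The same error term drives both functions; which one fires is the difference between changing your mind and changing your environment — like walking to the fridge until the predicted state matches the perceived one.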

If you try again to grasp the meaning of the introductory text, maybe now you can imagine a piece of software implementing similar processes for discovering meaning, and becoming active with a drive to find meaning.

The introductory text is not only there to catch your attention; it is there to describe my predicament to you. Through this text I want to find minds who share this vision and who are motivated to crack this nut, to find out what a meaningful AI could do for us, and perhaps to give us a glimpse of understanding ourselves a little better.


Sebastian Kahl

Brains think about brains and minds, convergent on doing the right thing. Twitter: @glialfire · Web: glialfire.net