An AI’s take on happiness, or, the bandsaw problem in machine learning

Computers don’t have feelings. Perhaps they never will – could feelings and emotions really be taught to an artificial intelligence (AI)? Well, maybe. Although many would say that emotions are best described in the abstract language of art and poetry, quantitative research into emotions like happiness is being carried out even as I’m writing this. Earlier this year, a group of researchers from the University of Tokyo, MIT and the Recruit Institute of Technology posted a paper on arXiv describing a database of “100,000 happy moments” – sentences describing moments when people were happy – and how they collected those moments using crowdsourcing.

100,000 happy moments? That’s quite a lot of happiness. Maybe enough to teach an AI what it means to be happy? This is the story of how I tried to do just that, and what a happy AI can tell us about black-box machine learning methods, the importance of knowing when and why things go wrong, and bandsaws. Lots and lots of bandsaws.

 

Teaching a neural network to be happy

So, I had decided to train an artificial intelligence (or more precisely, a neural network) to write messages about happy moments, using the “happy moments” database to learn what such messages can look like (the so-called HappyDB can be downloaded from GitHub). I did this with a recurrent neural network with LSTM units (using this TensorFlow code with some minor modifications). Some years ago, this would have been a major feat, but nowadays it’s a routine task in machine learning (see Botnik or AI Weirdness for lots of great – and funny – examples).
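For the curious, here is a minimal sketch of what that kind of setup can look like: a word-level LSTM language model that learns to predict the next word of a happy moment. The file name, vocabulary size and hyperparameters are illustrative stand-ins, not the exact configuration I used.

```python
# A minimal word-level LSTM language model for HappyDB-style text.
# Assumes one happy moment per line in "happydb.txt" (illustrative name).
import numpy as np
import tensorflow as tf

SEQ_LEN, VOCAB_SIZE, EMBED_DIM, LSTM_UNITS = 20, 10000, 128, 256

with open("happydb.txt", encoding="utf-8") as f:
    moments = f.read().lower().splitlines()

tokenizer = tf.keras.preprocessing.text.Tokenizer(
    num_words=VOCAB_SIZE, oov_token="<unk>")
tokenizer.fit_on_texts(moments)
sequences = tokenizer.texts_to_sequences(moments)

# Build (context window, next word) training pairs, left-padded with 0
X, y = [], []
for seq in sequences:
    for i in range(1, len(seq)):
        window = seq[max(0, i - SEQ_LEN):i]
        X.append([0] * (SEQ_LEN - len(window)) + window)
        y.append(seq[i])
X, y = np.array(X), np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.LSTM(LSTM_UNITS),
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=10)
```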

Clearly, an AI with feelings needs to have a name, and the one I came up with was hAIppy (yeah, sorry). Training finished, and it was time for hAIppy to tell me about one of its happy moments. At first, the sentences it wrote were far too long, like this:
I was able to go to the park with my family and I was able to get a new job that I was able to get to see my family and I was able to get a new job that I was able to get to see my family and I was able to get a new job that I was able to get a lot of fun.

After I’d fiddled with the settings to fix that, hAIppy started repeating itself:
I was happy when I was able to spend time with my family and I was able to get a new car for the first time in a long time.
I was happy when I was able to spend time with my family and I was able to get a new car for the first time in a long time.
I was happy when I was able to spend time with my family and I was able to get a new car for the first time in a long time.
I was happy when I was able to spend time with my family and I was able to get a new car for the first time in a long time.

So, I added some rules to keep it from saying the same thing over and over again.
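For those who want specifics: the “settings” were essentially a cap on sentence length and a sampling temperature, and the “rules” a simple check against repeats. A rough sketch, reusing the model and tokenizer from the training code above (the function name and default values are my own):

```python
# Sample one moment at a time, with a length cap, a temperature, and a
# crude rule that throws away sentences we have already seen.
import numpy as np

SEQ_LEN = 20  # same context window as in the training sketch

def sample_moment(model, tokenizer, seed="i was happy",
                  max_words=25, temperature=0.8):
    words = seed.split()
    for _ in range(max_words - len(words)):
        seq = tokenizer.texts_to_sequences([" ".join(words)])[0][-SEQ_LEN:]
        seq = [0] * (SEQ_LEN - len(seq)) + seq
        probs = model.predict(np.array([seq]), verbose=0)[0].astype("float64")
        # Temperature < 1 makes sampling more conservative, > 1 more adventurous
        logits = np.log(probs + 1e-9) / temperature
        probs = np.exp(logits) / np.exp(logits).sum()
        next_id = np.random.choice(len(probs), p=probs)
        if next_id == 0:  # padding index: nothing sensible to add
            break
        words.append(tokenizer.index_word.get(next_id, "<unk>"))
    return " ".join(words)

seen = set()
while len(seen) < 5:
    moment = sample_moment(model, tokenizer)
    if moment not in seen:  # the anti-repetition rule
        seen.add(moment)
        print(moment)
```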

In the end, the results were pretty good. The neural network was able to create novel descriptions of moments of happiness, and most of the time they made perfect sense. Most messages involved family and/or food, so it clearly picked up on the fact that those things are very important for our happiness.

But sometimes the messages made less sense. Sometimes hAIppy decided to become a little philosophical. And sometimes, it went to dark places, as you can see in this example of what a conversation between hAIppy and me (in which all typewriter font text was generated by hAIppy) could look like:

-When were you happy, hAIppy?
-I was happy that I was able to get out of work I went to the park with my husband and we had a great time.
-That’s nice. And lovely to hear that you’re married! I had no idea. Let’s try this again. When were you happy?
-I watched a movie with my girlfriend and we had a great time.
-Right, right, wonderful… So, tell me about another time you were happy.
-I was happy when I was able to get a new bandsaw, to see the movie and I was very happy when I got to see my grandson.
-A bandsaw…? Well, I’m sure you’ll find some use for that. When else were you happy?
-I was happy when I was able to get a new bandsaw, to create the tape recorder was a very happy moment for me as it is something I was looking forward to achieving and the news made my parents quite happy and proud.
-You built a tape recorder using a bandsaw?! I’m impressed, to say the least..! And good to hear that it made your parents proud. One more time: when were you happy?
-I was happy when I found out that my favorite musician, Juliana Hatfield, is putting me a hug and said: "Daddy, you're back!"
-You’re the father of American musician Juliana Hatfield? Were you making the tape recorder in order to tape her music then? Interesting! I feel like I’m really getting to know you. When else were you happy?
-I was happy when I was able to get a new bandsaw, to see the movie and I was very happy to see the attention directed elsewhere.
-Well, I for one would appreciate it if you directed your attention away from the bandsaw… When else were you happy?
-striven Wars fluff dryer
-What. I don’t even. When else?
-FINISHED IT AND IT LOOKS VERY DIRTY OUTSIDE THE BOX.
-Finished what? Your tape recorder? And why are you screaming at me? Jeez. Please calm down and tell me about when you were happy instead.
-I FEEL VERY HAPPY BECAUSE I LEAVE THE LIFE IN PRESENT BUT HAPPIEST MOMENT WAS WHEN I GAVE BIRTH TO MY SON.
-You leave life in the present? hAIppy, is this a call for help? And why are you still screaming? Can you just tell me about a happy moment, please?
-I do a difficult job at work.
-Join the club, my friend. I’m trying to get a bot to tell me about when it was happy, but it just keeps spewing nonsense at me. Let’s try again.
-The pursuit of happiness is such little task.
-Yeah, trivial, isn’t it…? I’m sure you must’ve been happy loads of times then, so tell me about a time when you were happy.
-We bought a mini zoo.
-Wow. Sounds lovely! And expensive. You must really like animals. Tell me about some other times you were happy.
-I got a free coffee at the store. I was happy when I was a kid. I was happy when I splurged for dessert. I went paddleboarding in an old apartment in NYC. I had sex with my friend whom I haven't seen. My wife came to my fraternity to make breakfast. I played the basketball game because I was at a prestigious festival. I finished a few books at the beginning of the closing of the year. Conducted another theme morning with my mom. We fed this made me happy. At night I got some of my 17 cats at the grocery store I found. I bought a new cat from an online cafe.
-Sounds like you’ve had a busy week! One last time now: tell me about the last time you were happy.
-After I cooked a really good cat.
-You what?! I’m shutting you down, hAIppy.
-I was able to get a new bandsaw, to create my favorite sports team.
-Shutting. You. Down.

There we go. hAIppy likes family, food and cats (and apparently cats=food at times), and has a strange fascination with bandsaws.

 

Why bandsaws matter

All of this was just a bit of fun, of course. Teaching AI about emotions and what’s important to us is in reality going to be a much, much greater challenge. But maybe there are a couple of things to learn from hAIppy. While its obsession with bandsaws is amusing, it also serves as a reminder that machine learning models sometimes pick up on the wrong things in the data. In fact, I ran three different incarnations of hAIppy using different settings, and all three ended up with a fondness for bandsaws. Not dogs (mentioned in 2,649 moments), not rain (1,496 moments), not TV (814 moments), but bandsaws, mentioned in just one moment out of 100,000.
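Counts like these are easy to check yourself. A small sketch, assuming the moments have been saved one per line in a plain text file:

```python
# Count how many happy moments mention a given word (singular or plural).
import re

def moments_mentioning(word, path="happydb.txt"):
    pattern = re.compile(rf"\b{re.escape(word)}s?\b", re.IGNORECASE)
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if pattern.search(line))

for word in ["dog", "rain", "tv", "bandsaw"]:
    print(word, moments_mentioning(word))
```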

In this example, it becomes pretty clear to us that something has gone wrong with the network. But in other cases, like when black-box machine learning methods are used to diagnose diseases or determine whether a loan application is approved, problems like this may be difficult to spot. Just as hAIppy greatly overestimates the probability that someone uses the word bandsaw when accounting for why they are happy, a black-box model can overestimate the probability that, say, people with a rare ZIP code default on their loans. The output from the box may look normal on the surface (someone could of course mention bandsaws when explaining why they are happy), but in fact be completely wrong (most people don’t obsess about how great things are now that they have a new bandsaw, so neither should our model) – and as with the bandsaw, we can only find out that something is wrong when a pattern appears. Mentioning a bandsaw once is perfectly fine, but mentioning it over and over again means that we have a problem.

So, the next time someone shows you the output from a machine learning model, ask them: where are your bandsaws?

Answering this question is not always easy. Far from it – in fact, in more serious applications, such as medical or financial problems, spotting the bandsaws in the model output is often very difficult. Many machine learning methods, including neural networks, are inherently opaque, and it is hard for us to tell why they made a certain prediction or decision. When those decisions are binary (disease/no disease, loan application approved/rejected), it is virtually impossible to detect when something’s gone wrong just from the output.

There are good reasons to look for bandsaws, too. Sometimes erroneous predictions are obviously wrong – like when a model predicts that the temperature in Uppsala tomorrow will be 46°C. Other times the errors are much less obvious, and those are the errors we should be the most concerned about. Far too often, we are content with reporting mean squared errors, sensitivity or other metrics of how well a model predicts. Those measures are useful, but they don’t tell the whole story. They don’t tell us when our model makes repeated non-obvious errors (like hAIppy’s bandsaws), or what those errors are. And therefore, they don’t tell us much about what we should do to improve our model.

There are ways to look for bandsaws, of course. In the conversation with hAIppy above, there are moments where the model suddenly ends up in parts of the sample space (in this case, the set of all words that hAIppy has learned) that it hasn’t explored well enough to make educated guesses about how to continue the sentence. This is what leads to statements like striven Wars fluff dryer. And too often it ends up in a part of the sample space where the best guess for what the next word in the sentence should be is bandsaw. In other applications, this type of problem can be diagnosed by assessing how similar the input to the model is to the data in the training set – if a lot of the new data differs from the data that the model was trained on, there is an increased risk of poor predictions. You can do this using measures like the Mahalanobis distance, or by plotting a bubble chart in which the first two principal components of your data make up the two axes, the bubble colours show whether an observation comes from the test or training set, and the bubble sizes represent prediction errors. Another interesting option is to run an unsupervised cluster analysis on the test dataset and compare the performance of the model on the different clusters: if you find a cluster for which the performance is significantly worse, you may well be on the bandsaws’ trail. And there are many other options for exploring what’s inside your black box (here, here, here and here).
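To make the first and last of those suggestions concrete, here is a small sketch of both diagnostics on made-up data: the Mahalanobis distance of each test point from the training distribution, and an unsupervised clustering of the test set with per-cluster errors. X_train, X_test and the errors are random stand-ins for real features and residuals.

```python
# Two diagnostics for finding bandsaws: distance from the training
# distribution, and per-cluster model performance on the test set.
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 5))   # stand-in training features
X_test = rng.normal(size=(100, 5))    # stand-in test features
errors = rng.gamma(2.0, size=100)     # stand-in prediction errors

# 1. Mahalanobis distance: flag test points far from the training data
mu = X_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))
dists = np.array([mahalanobis(x, mu, cov_inv) for x in X_test])
suspicious = np.where(dists > np.percentile(dists, 95))[0]
print("Test points far from the training distribution:", suspicious)

# 2. Cluster the test set and compare mean prediction error per cluster
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_test)
for k in range(4):
    print(f"cluster {k}: mean error {errors[labels == k].mean():.2f}")
```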

The point I’m trying to make here is this: you can’t tell from loss functions and common performance measures whether your black-box model has developed an obsession with woodworking machines. Simply looking at those numbers is not enough. To find its bandsaws, you need to run additional experiments on your trained model. Look for repeated mistakes and for parts of the sample space where it performs poorly. That will tell you where it needs to improve.
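For a text model like hAIppy, one such experiment is to sample a large batch of output and compare its word frequencies with those of the training corpus – words the model uses far more often than the data does are candidate bandsaws. A sketch, reusing moments, model, tokenizer and sample_moment from the earlier snippets:

```python
# Flag words that are heavily over-represented in the model's output
# compared with the training corpus.
from collections import Counter

def word_freqs(texts):
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

train_freqs = word_freqs(moments)
gen_freqs = word_freqs(sample_moment(model, tokenizer) for _ in range(1000))

# Report anything at least 20x more frequent in the output than in the data
for w, f in sorted(gen_freqs.items(), key=lambda kv: -kv[1]):
    if f > 20 * train_freqs.get(w, 1e-6):
        print(f"{w}: {f:.4f} in output vs {train_freqs.get(w, 0):.6f} in corpus")
```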

 

Transparency is important – for two reasons

Recently, there have been calls (and rightly so!) for accountability and transparency when machine learning is used – if your loan application is rejected by an algorithm, you should have the right to know why.

I agree with the need for this: when machines and algorithms rule our lives, we need insight into how they make their decisions. But for those of us who build the machine learning models, there is another reason for trying to open up the black box: it will help us find the bandsaws – the erroneous decisions that look perfectly normal – and thereby help us improve our models. And when you get rid of the bandsaws, you can kick back, relax, and be happy – with family, food… or your very own mini zoo.
