Are we Bayesian reasoners? Some thoughts.

Are humans Bayesian reasoners? Do they have a set of probabilistic beliefs about the world, or 'priors', which get updated by experience? To me this seems like a very strange and unlikely way of describing how people think most of the time, but many very intelligent people seem to disagree, so the question seems worth taking seriously.

I still think the answer is no, but first it's worth discussing what I think people mean when they say people are Bayesian. I don't think they mean that this literally describes what people's conscious thought processes are like most of the time. That is just too obviously false. Even when it comes to everyday events, like rain, or the train being late, most of the time people could only give a very vague idea of how likely they thought an event was. There are a huge number of things which matter a great deal to us about which we would be even less able to say anything meaningful about the probability, let alone consciously 'update' these prior beliefs as new evidence comes to light. Consciously doing this in a systematic, mathematically valid fashion is a rarefied activity that requires specialist training and is only applied in limited circumstances (work in finance, weather modelling, etc.).
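For concreteness, the 'systematic, mathematically valid fashion' mentioned above is just Bayes' rule. A minimal sketch of an explicit update about a late train, with all numbers (the prior, the likelihoods, the 'crowded platform' evidence) invented purely for illustration:

```python
def bayes_update(prior, likelihood, marginal):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# Hypothesis: my train will be late.
prior = 0.1                     # assumed base rate of lateness
p_crowded_if_late = 0.8         # assumed: platform tends to be crowded when trains run late
p_crowded_if_on_time = 0.2      # assumed: sometimes crowded anyway

# Evidence observed: the platform is crowded.
marginal = p_crowded_if_late * prior + p_crowded_if_on_time * (1 - prior)
posterior = bayes_update(prior, p_crowded_if_late, marginal)

print(round(posterior, 3))  # prints 0.308: belief in lateness rises from 0.1
```

The point of the post stands either way: almost nobody consciously runs this calculation on the platform.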

A more reasonable interpretation would be to say that the 'Bayesian reasoner' model of humans describes not conscious thought processes but behaviour, or perhaps some mixture of the two. Consciously or otherwise, our patterns of thought and behaviour respond to what we see and experience. Perhaps every so often we actually think 'I thought that was unlikely, but now I think it is more likely', but more often than that, we simply worry about something more, take a possibility more seriously, and so on. This interpretation could also allow for flaws in reasoning: perhaps people only roughly adapt their beliefs and behaviours in this way, and perhaps there are certain systematic flaws in human psychology which lead them to sometimes over-adjust or under-adjust their beliefs (for example, an exceptional event, like witnessing a plane crash, might totally skew how much we worry about plane crashes).

My problem with this behavioural description is that I don't think Bayesianism is really the right way of thinking about what is going on. This is because probability is on a continuous scale, but behaviour is discrete. What do I mean by this? There are an infinite number of probabilities I can assign to my train being late today (there are an infinite number of values between 0 and 1). In reality, my behaviour and thought processes only take on certain discrete states. Either I worry about my train being late or I don't. Perhaps I worry a little or a lot, but we can't meaningfully break this down much further than that. More importantly, my behaviour is extremely discrete. Perhaps I think of alternative routes in case my train is late, perhaps I don't bother. Most of life seems like this. Things we either worry about or don't, things we hope for or think are unrealistic, ideas which occur to us or do not, thoughts we think are worth entertaining or aren't, possibilities we factor into our decisions or we do not.

Granted, there is a bit of space for something muddy in between all of these, but when it actually comes to trying to classify these in-between states, there aren't all that many. Take the way people talk about their own certainty about something: if expressed numerically, it's almost always one of a few values ('50/50', '80/20', 'I'm 99% sure'). And it's true that continuous scales can be useful approximations of data which take discrete values (a normal distribution curve could usefully approximate the probability of rolling a certain number of sixes, provided I rolled a sufficiently large number of times), but in this case the discrete behaviours and beliefs seem too few for this to be true.
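The continuous-versus-discrete point above can be made concrete with a toy sketch: however finely we grade the underlying probability, behaviour collapses it into a handful of states. The thresholds and behaviour labels here are invented for illustration, not claimed to be how any actual mind works:

```python
def behaviour(p_late):
    """Map a continuous probability of lateness to a discrete response."""
    if p_late < 0.05:
        return "don't think about it"
    elif p_late < 0.3:
        return "worry a little"
    elif p_late < 0.7:
        return "check alternative routes"
    else:
        return "leave earlier"

# A thousand and one distinct probability values, but only four
# distinct behaviours come out the other end.
probabilities = [i / 1000 for i in range(1001)]
distinct_behaviours = {behaviour(p) for p in probabilities}
print(len(distinct_behaviours))  # prints 4
```

On this picture, a step function of probability is all that behaviour ever reveals, which is the sense in which the Bayesian continuum seems to do no real work.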

And on that note- I've got a train to catch.

