Spixii Blog

Interview with Patrick Fagan, Behavioural Scientist

Written by The Spixii Marketing Team | Jul 18, 2018 8:00:00 AM

3 min read

Welcome to the Spixii Spotlight, a series designed to interview great minds in the insurance and InsurTech space. This month, we picked the brains of Patrick Fagan, behavioural scientist, lecturer, media commentator, self-described 'Alpha Primate' at BrainChimp and one of Spixii's first advisors.


Emma: Hi Patrick! In your last interview for the Spixii blog, you discussed how neuroeconomics is transforming technology. Have your views changed since then?

Patrick: While that certainly still stands, people are a lot more aware of data now. Since the Facebook data scandal, many people have deleted their Facebook accounts. Yet, actually, usage of the platform has increased. At the same time, there are now more Facebook adverts around London.

Companies are now more aware of how valuable data is and are generally more interested in understanding personalities psychologically, including cognitive biases as an influential factor.

It's the importance of the unconscious.

Research surveys are not enough; they can miss a lot. For instance, you cannot ask an outright question like: "If this chatbot sounded friendlier, would you be more likely to talk to it?" Yet you could find the answer through other techniques.


Emma: Have you been surprised by anything in technology, particularly InsurTech?

Patrick: One thing I saw recently was the ability to use facial coding through a webcam to register emotions and find out what a person thought about money. A pay-as-you-go insurance company wanted to ask, how can we make this emotional? How can we build an emotional relationship?

Insurance is often very rational and abstract. We mainly have a negative experience with insurers; it is sometimes traumatic and often stressful to make a claim. Sometimes we don't get the money back, or we get less than we were expecting. Outside of that, the only interaction with an insurer is the exchange of money, or answering repetitive, boring questions in place of a form.


Emma: How do you see neuroeconomics shaping insurance and technology as a future for good?

Patrick: Biometrics are becoming more socially acceptable, and may well become the next big data scandal, version 2. Take the new iPhone: it can analyse emotion through facial recognition, while Amazon Alexa can tell if you are stressed from the sound of your voice. This technology will keep snowballing, and users may not be aware of it.

But one way neuroeconomics could positively shape the future is by applying the same principle as telematics for car insurance to health insurance. Is this person physically and mentally healthy? Health insurers could then provide tailored advice and support. 

Personalisation can creep some people out, but it could be a force for good. When giving people content, policies and personalised products, you could, for example, offer an extroverted sensation-seeker a policy that covers extreme sports.

People will always assume you have bad intentions unless you tell them otherwise. Some companies are looking into giving them something in return for their data, or even creating a data cryptocurrency.

Or, more cynically, it may simply take the passage of time. People were once resistant to buying online because of security and privacy concerns; now they accept that companies understand them.

Renaud Million, CEO and co-founder of Spixii, and Patrick Fagan, Behavioural Scientist, at the Spixii nest in Plexal


Emma: Is there a tech company you think is doing a fantastic job of developing emotionally intelligent products?

Patrick: Of course, Apple. Historically, they have always been very emotionally minded, making products that are much more creative and recognising the importance of emotion and storytelling. Their devices are very simple to use. The new iPhone has no buttons, and its facial coding was developed with Paul Ekman (who developed the concept of six universal facial expressions).

Yet, no company is really doing it in a customer-facing sense. It is still mainly behind the scenes in user research and product testing. People might be a little too creeped out. Tech is not quite there on an individual basis.


Emma: Where do you see the future of behavioural economics in the next 5 years?

Patrick: There will be an arms race between brands and consumers or watchdogs. Brands will try to use the principles of neuroeconomics, while consumers and watchdogs will try to regulate their use to avoid being manipulated. The more aware you are of the psychology behind buying decisions, the better you will be at protecting yourself. Companies will also try to at least look like they are helping consumers. Facebook will want to protect consumers and help them avoid psychological manipulation, for instance against the rise of 'fake news'.


Emma: Right now, we are seeing that one of insurers' top priorities is to tap into younger markets. How do you think insurers can use behavioural economics to do that?

Patrick: Understand your customers from a psychological point of view. Younger people tend to be more sensation-seeking and adventurous, a bit more liberal and open-minded, plus often less organised. Of course this varies from segment to segment.

You can use this information to profile them and send relevant product concepts and messaging that really resonate with them. In general, the younger consumer mindset is probably less family-focused and more social and adventure-focused. Understand their motivations and give your standard research a psychological slant.

Also, always make things as simple and flexible as possible.


Renaud: Do you think there should be bot psychologists?

Patrick: Yes, absolutely. At the Science Museum recently, there was a robot that could read your facial expressions and react in kind. This is an extension of psychology, but applied to data and human computing.

With artificial intelligence (AI), we've found that people are more trusting of others' opinions than they are of AI. There is a lot of scepticism and concern about bots taking over, like the Terminator. Really interestingly, a group of scientists created a psychopathic bot fed on hateful Reddit forums.

Psychology gives you the why, but it also gives your hypotheses meaning.


Emma: What are the top three things to consider when designing an emotionally intelligent chatbot? 

Patrick: First, understand your customer. For instance, if you see that your customers love to dance, they are likely more extraverted. When talking to this group of customers, you could make the chatbot friendlier.

Second, humanise the chatbot. Dr. Robert Cialdini's book, Influence: The Psychology of Persuasion, is (ironically) a very influential book with six universal principles. One of the most important is liking the person you are talking to. We generally like people who are attractive, warm, competent, familiar and similar to us. Bear these principles in mind when developing your chatbot persona: personalise the experience, make sure your copy sounds warm, and know what you're talking about. Give the chatbot a picture of a person or a robot mascot as an avatar, and give it a name.

Third, keep the whole thing as simple as possible. Insurance applications are challenging because there are so many questions to ask, especially as we all have limited attention spans. Simplicity is actually the most important thing.


To read more from Patrick on how to apply neuroscience and behavioural economics to your business, here's his book #Hooked!