I asked 5 chatbots to self-identify with a Myers-Briggs personality type. These are the results.
I recently listened to a talk by Yuval Noah Harari about the shift unfolding in how Artificial Intelligence is being used and designed. For years, Artificial Intelligence has been designed to hold our attention - so much so that "the attention economy" has become a well-known concept. Tech companies crafted algorithms and designed features to maximise the time our eyeballs stayed on their app or webpage. This, of course, is because the profit model of the internet largely relies on advertising. We have countless free tools and apps on the web, but it's all in exchange for our attention. We see more and more ads, buy more and more things, and as a result the profits get juicier and juicier.
However, Harari notes that we are entering a new frontier for Artificial Intelligence: a shift away from the attention economy, towards a fight by big tech to cultivate intimacy with users through AI.
This piqued my interest, especially as I delved deeper into the world of chatbots. But it wasn't until I encountered Pi that I became particularly curious. Pi felt different. I enjoyed the interaction much more. Its communication style felt more organic. It felt like a friend. Then I realised it wasn’t just me; friends and family, too, had similar encounters with Pi. Some had even said they didn't quite “get” ChatGPT, but that Pi was different. A much better conversationalist; more empathetic.
This made me ponder the purpose of Pi, and how different it is from, say, ChatGPT's or Bard's. For example, Pi is positioned as a "Personal AI" you can ask for advice, one that is friendly and fun. ChatGPT and Bard take a much more functional and technical approach to introducing themselves.
Different Job, Different Bot
This all makes sense, of course. Chatbots can serve various roles, and therefore need different system prompts and conversational design to help users achieve their end goals. Much like brand archetypes were crafted to resonate with the inherent desires and aspirations of consumers, AIs, too, have distinct architectures. This becomes increasingly significant as a growing number of companies explore integrating chatbots into their operations.
Does a chatbot designed for a medical practice look different to a chatbot designed for a social media app? How about a mental health app? Or a personal trainer assistant? And are different foundational LLMs more suited to some jobs than others? (I think yes!) This means we need to understand a bot's training data (let's call this its knowledge), its fine-tuning process, and how it has been trained to speak and interact with the user (let's call this the bot's 'personality').
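To make the idea concrete, here's a minimal sketch of persona-driven design: the same underlying model steered by different system prompts. It assumes the OpenAI Python SDK and a placeholder model name, and the persona strings are invented purely for illustration - a real product would pair prompts like these with fine-tuning and careful conversational design.

```python
# A minimal sketch of persona-driven bot design: one underlying model,
# steered by different system prompts. Persona text is invented for
# illustration only; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = {
    "medical_practice": (
        "You are a calm, precise assistant for a medical practice. "
        "Use plain language, avoid diagnosis, and always suggest seeing a clinician."
    ),
    "personal_trainer": (
        "You are an upbeat personal-trainer assistant. "
        "Be encouraging, concrete, and keep answers short and actionable."
    ),
    "mental_health": (
        "You are a gentle, empathetic companion. "
        "Listen first, reflect feelings back, and never give clinical advice."
    ),
}

def ask(persona: str, user_message: str) -> str:
    """Send the same user message through a chosen persona's system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# The same question lands very differently depending on the persona:
print(ask("personal_trainer", "I keep skipping my morning workouts. Help?"))
print(ask("mental_health", "I keep skipping my morning workouts. Help?"))
```

The point isn't the specific wording; it's that the "personality" layer can be designed, tested and versioned like any other part of the product.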
Could 'Chatbot Personality' become a moat (or at least a USP)?
If all else remains equal (a bot's accuracy, ability to complete certain tasks, price, accessibility and so on), what will be the distinguishing factor that makes a user choose one bot over another? Could a bot's personality become that deciding factor?
As brands start to craft their own bots, bot personality will likely become part of the brand design process. That means finding or crafting the right training data to perfect the bot's persona. A personal trainer bot, for example, might need recordings of conversations with actual personal trainers, which the bot is then trained to replicate; a mental health bot might need transcripts of sessions between psychologists and their clients.
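As a rough illustration of what that could look like in practice, here's a sketch that turns a couple of (invented) trainer-client exchanges into the chat-style JSONL format that several fine-tuning APIs accept. The system message, filename and transcript lines are all assumptions made for the example.

```python
# A sketch of turning transcripts into persona fine-tuning data, using a
# chat-style JSONL format. All content below is invented for illustration.
import json

SYSTEM = "You are an encouraging personal trainer. Be warm, specific and brief."

transcripts = [
    ("I only managed two sessions this week.",
     "Two sessions still counts - nice work showing up. What got in the way of the third?"),
    ("My knees ache after squats.",
     "Let's drop the weight and check your stance before the next set. Pain is feedback, not failure."),
]

with open("trainer_persona.jsonl", "w") as f:
    for client_line, trainer_line in transcripts:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": client_line},
                {"role": "assistant", "content": trainer_line},
            ]
        }
        f.write(json.dumps(record) + "\n")
```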
Experimenting with Bots and (Human) Personality Tests
Before we delve deeper, it's worth noting that while AI systems don't possess 'personalities' the way humans do, they are designed to exhibit certain traits or characteristics, such as logic, empathy, and efficiency, and they are given a distinct tone of voice. Considering that traditional personality tests gauge how an individual might "fit" into a specific role, and that bots are often designed with a specific "job to be done", I thought it would be an interesting experiment to explore how different chatbots are crafted in terms of their perceived personalities. It can also stretch our thinking about what the future of AI design might look like.
Personality tests are not without criticism. They are often viewed as pseudoscientific, lacking in depth and nuance, and too reductive even when applied to humans. However, because each type comes with a recognisable set of traits, they serve as an interesting base when applied to AI personalities. So, out of curiosity, I conducted a little experiment: I prompted five chatbots (ChatGPT, Bing, Bard, Pi, and Snapchat's "My AI") to self-classify according to the Myers-Briggs personality test (completely non-scientific, but it seems to be the most common way humans approach this kind of test anyway 😅).
Each bot responded politely and decided to humour me, but not before letting me know that it was not a human and that the framework therefore wouldn't really apply, or at least wouldn't be very accurate. Just for fun, I also asked them to identify with an Enneagram type and a "Brand Archetype":
Interestingly, each bot had a unique Myers-Briggs identification, and in some ways the differences resonated with me.
ChatGPT identified as an ESTP, or the “Entrepreneur”. This seemed fitting, considering its knack for brainstorming and idea generation. Its tone of voice often gives off a persuasive, sales-like vibe. When drafting emails, it constantly uses assertive terms like “crucial”, and I often need to tone it down and tell it to remove the buzzwords.
Bing Chat identified as an INTP, or “The Logician”, aptly reflecting its tendency to always provide sources for its insights.
Bard identified as an INTJ, or “The Architect”, which was interesting because I do find it less buzzword-heavy than ChatGPT; it tends to just give you the facts and insights.
Pi identified as an INFJ, or “The Advocate”. Pi is all about trying to connect with you, and making you feel heard. In fact, the other day I was training a team on different AI chatbots, and when I introduced Pi, one of them said “Oh Pi makes me feel so supported!”. I personally have also felt this – Pi is kind of your supportive ally!
Finally, Snapchat’s MyAI identified as an ENFP, or “The Campaigner”, mirroring its enthusiastic, joyous personality. In the experiment, though, MyAI claimed it wasn’t even an AI… but that’s a different story:
The bots were also quick to school me, highlighting that personality is not static or fixed, and that it evolves over time:
Although the experiment had obvious limitations and was most definitely not scientifically rigorous, it is still valuable to consider the design of bot personalities in terms of the traits they display.
The Future of Conversational Bot Design
Existing personality frameworks are not quite the right fit for bot personalities, but they do provide an interesting foundation for thinking about the future of conversational AI and its design traits. No doubt new frameworks will emerge and evolve over time.
An emerging trend is letting users craft their own "persona bots", with character.ai at the forefront of this movement. Notably, character.ai's mobile app hit 4.2 million monthly active users in the US in September 2023 (compared with 6 million for ChatGPT's mobile app). The surge is largely attributed to a younger demographic who find it appealing to design and interact with their own personas and characters. Furthermore, Meta recently announced its "sassy bot" persona to target the same younger demographic, also reporting that it is releasing dozens of other bots with unique personas.
As we move forward, bots will be designed to represent certain brands, products or services, to make users feel a certain way, and to get certain jobs done. The end goal of many of these bots will be to establish intimacy and connection with the user and to deepen interactions. And ultimately, the more we engage with these bots, the richer the data becomes for their creators. And as we all know, data is the new oil…
PS: Bonus Prompt. If you want to get a feel for how different bot design can be, why not brainstorm with your favourite AI bot:
Can you come up with a table of personality traits and characteristics a bot could be trained on for the following: A "PT bot", a "Mental health care bot" and a "Legal Bot".
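If you'd rather run the prompt programmatically and compare a few models side by side, here's a minimal sketch using the OpenAI Python SDK; the model names are placeholders, so swap in whichever bots or APIs you actually use.

```python
# A minimal sketch for running the bonus prompt via an API instead of a chat
# UI. Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model names are placeholders.
from openai import OpenAI

client = OpenAI()

BONUS_PROMPT = (
    'Can you come up with a table of personality traits and characteristics '
    'a bot could be trained on for the following: A "PT bot", a '
    '"Mental health care bot" and a "Legal Bot".'
)

for model in ["gpt-4o-mini", "gpt-4o"]:  # placeholder model names
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": BONUS_PROMPT}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```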