
Meta Chatbots

Facebook owner Meta is preparing to launch a range of artificial intelligence-powered chatbots that exhibit different personalities as soon as next month, in an attempt to boost engagement with its social media platforms. The tech giant led by chief executive Mark Zuckerberg has been designing prototypes for chatbots that can have humanlike discussions with its nearly 4bn users, according to three people with knowledge of the plans.

These people said some of the chatbots, which staffers have dubbed “personas”, take the form of different characters. The company has explored launching one that emulates Abraham Lincoln and another that advises on travel options in the style of a surfer, according to a person with knowledge of the plans. The chatbots could launch as soon as September, the person said. Their purpose will be to provide a new search function and offer recommendations, as well as to be a fun product for people to play with.

The move comes as the $800bn company seeks to attract and retain users while it battles competition from social media upstarts such as TikTok, and attempts to seize upon widespread hype in Silicon Valley around AI since Microsoft-backed OpenAI launched ChatGPT in November. On top of boosting engagement, chatbots could collect vast new amounts of data on users’ interests, said experts. That could help Meta better target users with more relevant content and adverts. Most of Meta’s $117bn a year in revenues come from advertising.

“Once users interact with a chatbot, it really exposes much more of their data to the company, so that the company can do anything they want with that data,” said Ravit Dotan, an AI ethics adviser and researcher. The developments raise concerns around privacy as well as potential “manipulation and nudging”, she added.

Meta declined to comment.

Rival tech groups have already launched chatbots that feature personalities. Character.ai, an Andreessen Horowitz-backed start-up valued at $1bn, uses large language models to generate conversation in the style of individuals such as Tesla chief executive Elon Musk and Nintendo character Mario. Snap has said its “My AI” feature — a single bot launched in February — is an “experimental, friendly chatbot”, with whom 150mn of its users have interacted so far. It recently began “early testing” of sponsored links within the feature.

Zuckerberg said he envisaged AI “agents that act as assistants, coaches or that can help you interact with businesses and creators”, adding: “We don’t think that there’s going to be one single AI that people interact with.” He has also said the company was building AI agents that can help businesses with customer service, as well as an internal AI-powered productivity assistant for staff. In the longer term, the company would explore developing an avatar chatbot in the metaverse, one person familiar with the matter said. “Zuckerberg is spending all his energy and time on ideating about this,” that person added.

Meta has been investing in generative AI, technology that can create text, images and code. This month, it released Llama 2, a commercial version of a large language model that could power its chatbots. As part of building the infrastructure to support the AI products, Meta has been trying to procure tens of thousands of GPUs, chips that are vital for powering large language models, according to two people familiar with the matter. Meta will probably draw scrutiny from experts policing the chatbots for signs of bias, or the risk that they share dangerous material or generate misinformation, a failure known as “hallucinations”.
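
To make the Llama 2 reference concrete, here is a minimal sketch of how a persona-style chatbot can be layered on an openly released Llama 2 chat model using the Hugging Face transformers library. The model name, the prompt template, and the surfer persona text are illustrative assumptions drawn from the article; this is not Meta's product code.

```python
# A minimal sketch (not Meta's implementation) of a "persona" chatbot built on
# a Llama 2 chat model via the open-source transformers library.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # gated model; requires access approval
)

# Llama 2 chat models accept a system prompt wrapped in [INST] <<SYS>> tags.
persona = "You are a laid-back surfer who gives friendly travel advice."
question = "Where should I go for beginner-friendly waves in October?"
prompt = f"[INST] <<SYS>>\n{persona}\n<</SYS>>\n\n{question} [/INST]"

result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```

Swapping the system prompt is all it takes to move from the surfer persona to, say, an Abraham Lincoln persona, which is why one underlying model can back many different characters.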

The company has already made brief forays into chatbots on a smaller scale that have demonstrated these risks. Researchers found that a previous Meta AI model, BlenderBot 2, released in 2021, quickly started spreading misinformation. Meta said BlenderBot 3, released in 2022, was made more resistant to this content, although users still found it generated false information and hate speech. According to a Meta insider, the company will probably build in technology that will screen users’ questions to ensure they are appropriate. The company may also automate checks on the output from its chatbots to ensure that what they say is accurate and avoids, for example, hateful or rule-breaking speech, the person added.
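
The screening and output-checking the insider describes maps onto a common guardrail pattern: filter the question before it reaches the model, then filter the reply before it reaches the user. The sketch below is a simplified, hypothetical illustration of that pattern, with keyword blocklists standing in for whatever classifiers Meta might actually build.

```python
# A minimal sketch (an assumption, not Meta's screening technology) of the
# guardrail pattern described in the article: screen the user's question
# before generation, then check the model's reply before returning it.
BLOCKED_TOPICS = {"weapons", "self-harm"}   # illustrative input blocklist
DISALLOWED_OUTPUT = {"example_slur"}        # illustrative output blocklist

def screen_question(question: str) -> bool:
    """Return True if the question is appropriate to send to the chatbot."""
    lowered = question.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def check_reply(reply: str) -> bool:
    """Return True if the generated reply passes the output checks."""
    lowered = reply.lower()
    return not any(term in lowered for term in DISALLOWED_OUTPUT)

def safe_chat(question: str, generate) -> str:
    """Wrap any generation function with input screening and output checks."""
    if not screen_question(question):
        return "Sorry, I can't help with that topic."
    reply = generate(question)
    if not check_reply(reply):
        return "Sorry, I can't share that response."
    return reply
```

In practice the keyword sets would be replaced by trained safety classifiers, but the control flow (reject early, generate, then verify) is the same.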

Financial Times 8/2/2023

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

