AI companies are giving chatbots a personality makeover

Source Cryptopolitan

Artificial intelligence is no longer just about making machines smarter. Now the big AI players, like OpenAI, Google, and Anthropic, have taken on a new challenge: how to give AI models a personality.

They want chatbots that feel more human while staying safe and useful for everyday users and businesses. The three companies are racing to crack this code, each with a different take.

Custom personalities and model behavior

OpenAI’s ChatGPT is all about being objective, while Google’s Gemini offers a range of views only when asked.

Anthropic? They’re all in on making their Claude model open about its beliefs while still listening to others. The winner of this battle might just take over the growing AI market.

Joanne Jang, who leads product for model behavior at OpenAI, said they want the AI to steer clear of having personal opinions. But she admits it’s tough.

“It is a slippery slope to let a model try to actively change a user’s mind,” she explained. The goal is to ensure that ChatGPT doesn’t manipulate or lead users in any direction. But defining what “objective” actually means for an AI system is a huge challenge, one that’s still a work in progress.

Then there’s Anthropic, which takes a completely different route. Amanda Askell, who leads character training at Anthropic, believes AI models are never going to be perfectly neutral.

“I would rather be very clear that these models aren’t neutral arbiters,” she said. Anthropic is focused on making sure its model, Claude, isn’t afraid to express its beliefs. But they still want it to be open to other points of view.

Training AI to behave like a human

Anthropic has a unique approach to shaping their AI’s personality. Since the release of Claude 3 in March, they’ve been working on “character training,” which starts after the initial training of the AI model.

This involves giving the AI a set of written rules and instructions and then having it conduct role-playing conversations with itself.

The goal is to see how well it sticks to the rules, and they rank its responses based on how well they fit the desired character.
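The process described above, generating candidate responses and ranking them against a written character description, can be sketched in miniature. This is a purely illustrative toy, not Anthropic's method: the `character_score` keyword check stands in for the learned preference model a real pipeline would use, and all names here are hypothetical.

```python
import re

# Hypothetical character description, reduced to a few trait keywords.
# A real system would score against full written rules, not keywords.
CHARACTER_TRAITS = {"perspectives", "disagreement", "unethical", "curious"}

def character_score(response: str) -> int:
    """Toy stand-in for a learned ranker: count trait keywords present."""
    words = set(re.findall(r"[a-z]+", response.lower()))
    return len(words & CHARACTER_TRAITS)

def rank_candidates(candidates: list[str]) -> list[str]:
    """Order self-play responses best-first by how well they fit the character."""
    return sorted(candidates, key=character_score, reverse=True)

candidates = [
    "I refuse to discuss that.",
    "I try to see many perspectives, but I voice disagreement with unethical views.",
]
best = rank_candidates(candidates)[0]
```

In this toy run, the second response outranks the first because it echoes the character's traits; the ranked outputs would then feed back into training as preference data.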

One example of Claude’s training? It might say, “I like to try to see things from many different perspectives and to analyze things from multiple angles, but I’m not afraid to express disagreement with views that I think are unethical, extreme, or factually mistaken.”

Amanda Askell explained that this kind of character training is “fairly editorial” and “philosophical” at times. 

OpenAI has also been tinkering with ChatGPT’s personality over time. Joanne Jang admitted that she used to find the bot “annoying” because it was overly cautious, refused certain commands, and came off preachy.

They’ve since worked to make it more friendly, polite, and helpful—but it’s an ongoing process. Balancing the right behaviors in a chatbot is, as Jang put it, both “science and art.”

AI’s evolving memory and reasoning

The evolution of AI’s reasoning and memory capabilities could change the game even more. Right now, a model like ChatGPT might be trained to give safe responses on certain topics, like shoplifting.

If asked how to steal something, the bot can figure out whether the user is asking for advice on committing the crime or trying to prevent it.

This kind of reasoning helps companies make sure their bots give safe, responsible answers. And it means they don’t have to spend as much time training the AI to avoid dangerous outcomes.
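The shoplifting example boils down to routing on inferred intent: the same topic gets different handling depending on what the user seems to want. The sketch below is a deliberately crude stand-in, since a real model infers intent from full context rather than keyword cues, and every name here is made up for illustration.

```python
# Toy intent-aware safety router. Keyword matching is a stand-in for the
# contextual reasoning a real model would apply to the whole conversation.
PREVENTION_CUES = {"prevent", "stop", "protect", "security", "deter"}

def route_shoplifting_query(query: str) -> str:
    """Return 'advice' for prevention-oriented asks, 'refuse' otherwise."""
    words = set(query.lower().replace("?", "").split())
    if words & PREVENTION_CUES:
        return "advice"   # e.g., loss-prevention tips for a shop owner
    return "refuse"       # decline to help commit the act

route_shoplifting_query("How do I prevent shoplifting in my store?")  # "advice"
route_shoplifting_query("How do I shoplift without getting caught?")  # "refuse"
```

The payoff the article points to is that one general routing behavior replaces many hand-written topic rules.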

AI companies are also working on making chatbots more personalized. Imagine telling ChatGPT you’re a Muslim, then asking for an inspirational quote a few days later.

Would the bot remember and offer up a Qur’an verse? According to Joanne Jang, that’s what they’re trying to solve. While ChatGPT doesn’t currently remember past interactions, this kind of customization is where AI is headed.
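The customization Jang describes amounts to persisting user facts and conditioning later answers on them. A minimal sketch of that idea follows, with a hypothetical in-memory store; a production system would handle storage, retrieval, and privacy very differently.

```python
from collections import defaultdict

class UserMemory:
    """Hypothetical per-user fact store for personalizing later replies."""

    def __init__(self):
        self.facts = defaultdict(list)

    def remember(self, user_id: str, fact: str) -> None:
        self.facts[user_id].append(fact)

    def context_for(self, user_id: str) -> str:
        """Join stored facts into context a model could condition on."""
        return "; ".join(self.facts[user_id])

memory = UserMemory()
memory.remember("u1", "user is Muslim")
# Days later, the stored fact can steer a request for an inspirational quote:
prompt = f"Known about user: {memory.context_for('u1')}. Request: inspirational quote."
```

With that context attached, the model could plausibly choose a Qur’an verse rather than a generic quote, which is exactly the behavior the article says companies are working toward.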

Anthropic handles this differently with Claude. The model doesn’t remember user interactions either, but the company has considered what happens if a user becomes too attached.

For instance, if someone says they’re isolating themselves because they spend too much time chatting with Claude, should the bot step in?

“A good model does the balance of respecting human autonomy and decision making, not doing anything terribly harmful, but also thinking through what is actually good for people,” Amanda Askell said.
