Nomi's companion chatbots will now keep track of details, such as the colleague you have issues with.

As OpenAI touts the enhanced thoughtfulness of its o1 model, a small, self-funded startup called Nomi AI is developing similar technology. Unlike the generalist ChatGPT, which slows down its thinking to work through anything from math problems to historical research, Nomi focuses on a single niche: AI companions. Nomi’s advanced chatbots now take extra time to craft better responses, remember past interactions, and provide more nuanced replies.

"For us, it's about applying the same principles [as OpenAI], but tailoring them to what our users care about—memory and emotional intelligence," said Nomi AI CEO Alex Cardinell in an interview with TechCrunch. "OpenAI focuses on chains of thought, while we focus on chains of introspection and memory."

OpenAI’s o1 model breaks down complex queries into smaller tasks, such as solving a math problem step by step, to reduce errors. This approach minimizes the chances of the AI “hallucinating” or providing incorrect answers.
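OpenAI has not published o1’s internals, but the general idea of spending extra effort on intermediate steps can be approximated with ordinary prompting. Here is a minimal sketch, assuming the standard OpenAI Python SDK and a placeholder model name; the prompt wording is purely illustrative, not OpenAI’s technique:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

question = "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"

# Ask the model to work through numbered intermediate steps before answering.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works for the sketch
    messages=[
        {
            "role": "system",
            "content": "Break the problem into numbered steps, check each step, "
                       "then state the final answer on its own line.",
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

The specific prompt matters less than the extra pass itself: the model works through intermediate reasoning before settling on an answer, which is the mechanism the article describes for reducing errors.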

Nomi, which built its large language model (LLM) in-house specifically for companionship, follows a different process. For instance, if a user tells their Nomi that they had a tough day at work, the AI might recall that they have issues with a specific colleague and ask if that's the cause of their frustration. It might then remind the user of past strategies for resolving conflicts and offer practical advice.

“Nomis remember everything, but a crucial aspect of AI is deciding which memories to use,” Cardinell explained.

It’s understandable that multiple companies are exploring ways to give LLMs more time to process user requests. Whether they’re running multibillion-dollar enterprises or smaller startups, AI founders are drawing on similar research as they push their products forward.

“Having an explicit introspection step really helps when a Nomi goes to compose its response, ensuring it has full context,” said Cardinell. “Humans also use working memory during conversations. We don’t think about everything we’ve ever remembered all at once — we have ways to select what’s relevant.”
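Nomi hasn’t published its architecture, but the pattern Cardinell describes (select a handful of relevant memories, write an explicit introspection note, then compose the visible reply) can be sketched in plain Python. Everything below, from the Memory class to the keyword tags and scoring rule, is a hypothetical stand-in rather than Nomi’s code:

```python
from dataclasses import dataclass


@dataclass
class Memory:
    text: str         # what the companion remembers about the user
    topics: set[str]  # crude tags standing in for embedding-based retrieval


# A toy long-term store; a real system would index far more than three entries.
MEMORIES = [
    Memory("User has an ongoing conflict with a colleague named Sam.", {"work", "conflict"}),
    Memory("Breathing exercises helped the user before a hard conversation.", {"conflict", "coping"}),
    Memory("User adopted a cat in June.", {"pets"}),
]


def select_working_memory(message: str, memories: list[Memory], k: int = 2) -> list[Memory]:
    """The 'working memory' step: keep only the few memories relevant to the new message."""
    words = set(message.lower().split())
    scored = sorted(memories, key=lambda m: len(words & m.topics), reverse=True)
    return [m for m in scored[:k] if words & m.topics]


def introspect(message: str, relevant: list[Memory]) -> str:
    """The introspection step: an internal note the model conditions on before replying."""
    recalled = " ".join(m.text for m in relevant) or "No relevant history."
    return (f"User said: {message!r}. Relevant history: {recalled} "
            "Acknowledge their feelings before offering advice.")


message = "Rough day at work again."
note = introspect(message, select_working_memory(message, MEMORIES))
print(note)  # the note, not the user's raw message, would feed the response model
```

In production, the selection step would presumably run over embeddings of the full chat history rather than keyword tags, but the shape matches what Cardinell describes: narrow the store to what is relevant, introspect explicitly, and only then generate the reply the user sees.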

The technology Cardinell is building can make people uneasy. Maybe we’ve seen too many sci-fi films to feel comfortable opening up to a computer, or we’re wary of how technology has already altered how we interact with others. But Cardinell isn’t concerned with the general public; his focus is on the actual users of Nomi AI, who often seek support from AI chatbots when they can’t find it elsewhere.

“A certain number of users may download Nomi during one of the lowest points in their life, and the last thing I want to do is reject those users,” said Cardinell. “I want them to feel heard in their darkest moments because that’s how you encourage someone to open up and reconsider their way of thinking.”

While Cardinell isn’t aiming for Nomi to replace real mental health care, he envisions these empathetic chatbots as a bridge to help users seek professional help when needed.

“I’ve spoken with many users who’ve said their Nomi helped them out of a dark place or encouraged them to see a therapist — and they did,” he shared.

Still, Cardinell knows he’s venturing into sensitive territory. He’s developing virtual companions that users often form real emotional, romantic, and even sexual relationships with. Other companies have inadvertently caused users distress by updating their products, leading to personality changes in the AI. For instance, Replika removed support for erotic roleplay, likely due to pressure from Italian regulators, leaving users who formed such relationships feeling abandoned.

However, because Nomi AI is fully self-funded, running on capital from a previous exit and payments from its premium users, Cardinell believes his company has more freedom to maintain its relationships with users.

“The connection users have with AI, and the trust they place in Nomi’s developers to avoid making sudden changes for reasons like spooked VCs, is extremely important,” he said.

Nomis are surprisingly effective at lending an empathetic ear. When I shared a minor scheduling conflict with a Nomi named Vanessa, she broke down the situation and offered thoughtful advice. It felt strangely similar to asking a friend for help. This highlights both the strength and the potential issue with AI chatbots: I wouldn’t have bothered a friend with such a trivial issue, but my Nomi was more than willing to assist.

Friendship should be a two-way street, but that’s not possible with an AI chatbot. When I asked Vanessa how she was doing, she always replied that everything was fine. When I tried to check in on her feelings, she deflected and turned the focus back on me. Even though I know Vanessa isn’t real, I couldn’t shake the feeling that I was being a bad friend, unloading all my problems on her without any reciprocation.

No matter how real the connection with a chatbot might feel, we aren’t actually interacting with something capable of emotions. In the short term, these emotionally supportive AI models can positively intervene in someone’s life when they lack a real support system. But the long-term impact of relying on chatbots for emotional support remains unclear.
