Automated conversations are now well established in the domain of customer engagement. The COVID-19 pandemic was one of the factors that accelerated the automation of conversational interactions with customers. While most corporations closed their doors during the pandemic, the floodgates on digital interfaces burst open. In March 2020, the insurance sector reported a 200 percent increase in virtual agent traffic, with the public and government sectors seeing an increase of over 300 percent. Digital engagement – and how effectively it could be automated – soon became an impediment for some and a defining moment for others. 

Since then, the public has shown a clear appetite for conversational interfaces. Building on Google’s transformer architecture, OpenAI released the first Generative Pre-trained Transformer (GPT) with a chat interface, now commonly known as ChatGPT. Its swift adoption – reaching one million users in just five days – signalled the general public’s eagerness to engage in a conversational manner. Researcher Jason Tabeling observed a shift in search habits after the release of ChatGPT: queries grew longer. Google’s AI Mode reinforced this trend further. In June and July of 2025, the average search query was around 8 words; by August, it had reached 10 to 11 words. 

Despite this unmistakable shift, the power of conversation has not yet been fully harnessed in human-to-machine interactions. According to Forrester Consulting, 75% of customers say that chatbots are unable to handle complex questions, and a third say that negative interactions drive them straight into the competition’s hands. 

Ingredients for a good conversation

The willingness of consumers to engage with digital interfaces in a conversational manner is unmistakable, but large language models (LLMs) – the foundation of most chatbots – have not quite caught up with those expectations. A 2025 study by Salesforce and Microsoft researchers outlined how LLMs struggle with underspecification, which is prevalent in human conversation, leading them to get lost in multi-turn conversations. The study found that, in cases of underspecification, performance dropped by 39% across various generative tasks, attributed primarily to a massive increase in unreliability. Further studies on LLM hallucinations revealed that even “state-of-the-art” systems prioritize guessing over acknowledging uncertainty. 

Additionally, few automated conversational interfaces use the behavioral principles that make human-to-human conversations rewarding. A 2024 study, which measured the brain activity of humans engaging in conversation, found that conversations between friends were generally more rewarding than conversations between strangers. The reason for this came down to the balance between historical knowledge and novelty. Generic, surface-level interactions – common between strangers – were found to be far less stimulating, while the more explorative nature of conversations between friends was more entertaining.

The new challenge for businesses is building chatbots that not only circumvent the weaknesses of LLMs, but also make use of proven behavioral principles to make conversations more rewarding.

ecosystem.Ai’s 4 pillars for good conversations with chatbots:

  • Behavioral intelligence: As discussed earlier, a good conversation rests on a balance between common ground and novelty. This is why we ground customer engagement tooling in Interaction Science. With our proprietary behavioral algorithms, businesses can craft engagement practices that employ explore/exploit algorithms, and act on and learn from behavior in real time.
  • Contextual relevance: For LLMs in the chatbot era, we can counter Google’s claim that ‘attention is all you need’. The real differentiator is focus. With a well-designed agentic framework, key terms can act as contextual anchors, keeping the model within the desired conversational bounds until an explicit trigger redirects it.
  • Truthfulness: Your chatbots should be guided by truth, not pattern-completion. The Fact Injection tool sets smart contextual triggers that tell the chat agent what kind of information to fetch, from where, and when during a customer journey to do it.
  • Intent detection: Intent is often hidden in user queries. Guided by Interaction Science and a well-designed agentic framework, intent detection routes queries to different chat nodes based on the user’s intent.
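The explore/exploit balance behind the first pillar can be illustrated with a simple epsilon-greedy bandit. This is a generic sketch, not ecosystem.Ai’s proprietary algorithm: the agent mostly exploits the engagement strategy with the best observed reward, but occasionally explores an alternative so it keeps learning from behavior in real time. The strategy names (`recap`, `novel`) are hypothetical placeholders.

```python
import random

class EpsilonGreedy:
    """Illustrative epsilon-greedy chooser for conversational strategies
    (a generic stand-in, not ecosystem.Ai's proprietary algorithm)."""

    def __init__(self, strategies, epsilon=0.1, seed=42):
        self.epsilon = epsilon           # fraction of turns spent exploring
        self.rng = random.Random(seed)   # seeded for reproducibility
        self.counts = {s: 0 for s in strategies}
        self.rewards = {s: 0.0 for s in strategies}

    def choose(self):
        # Explore: occasionally try a random strategy.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.counts))
        # Exploit: pick the strategy with the best average reward so far.
        def avg(s):
            return self.rewards[s] / self.counts[s] if self.counts[s] else 0.0
        return max(self.counts, key=avg)

    def update(self, strategy, reward):
        # Learn from observed behavior after each interaction.
        self.counts[strategy] += 1
        self.rewards[strategy] += reward

# Hypothetical usage: "novel" interactions earn a higher engagement reward,
# so the bandit gradually shifts toward them while still exploring.
bandit = EpsilonGreedy(["recap", "novel"])
for _ in range(500):
    s = bandit.choose()
    bandit.update(s, 1.0 if s == "novel" else 0.2)
```

In a real deployment the reward signal would come from live engagement metrics rather than a fixed payoff, but the loop is the same: choose, observe, update.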

Conclusion

Customers want to engage with businesses through conversation, but scaling this in an effective and secure way has proven a challenge. With more and more weaknesses coming to the fore as the AI ‘honeymoon phase’ fades, businesses need strategic approaches not only for automating conversation, but for automating good conversation.