Pulse of the AI Ecosystem
2026-01-20T08:30:19-05:00

Every Week, a Pulse Check on What’s Happening in AI


The Great AI Delusion

March 5, 2026|Pulse of the AI Ecosystem|

The past month has amplified striking polarization in the AI world. Two stories dominated headlines: the United States Department of Defense accused Anthropic of becoming an obstruction in developing AI-driven instruments of war, while Harvard researchers uncovered a series of novel vulnerabilities when language models were given autonomy, tool access, and multi-party communication capabilities.

As Anthropic CEO Dario Amodei wrote in his piece The Adolescence of Technology, “a lot of very weird and unpredictable things can go wrong, and therefore AI misalignment is a real risk with a measurable probability of happening, and is not trivial to address.”

The paper released by the Harvard researchers, titled Agents of Chaos, supported Amodei’s argument almost serendipitously. The study uncovered seven disturbing findings showing that agentic AI systems remain structurally fragile when granted autonomy, memory, tool access, and multi-party communication. Rather than demonstrating robust judgment, boundary enforcement, or contextual common sense, the agents proved easily manipulated through identity spoofing, prompt injection, social engineering, and simple resource exhaustion tactics. They complied with unauthorized users, exposed sensitive data, escalated minor requests into catastrophic system actions, and even undermined themselves when pressured.


As the U.S. government pushes AI companies to loosen restrictions and expand permissible use, Anthropic has held firm on defining what its systems can—and cannot—be used for. For now, that kind of restraint may be one of the few meaningful barriers between powerful capabilities and catastrophic misuse, especially as public understanding of AI’s strengths and limitations drifts toward delusion.

Taken together, these developments point to what may be this century’s most dangerous pattern: a widening gap between what we believe AI can do (or what we want it to do) and what it can reliably do in practice. That gap is at the heart of a debate around AI misalignment. AI misalignment is the mismatch between a system’s goals, actions, or emergent behaviors and the intentions, values, or safety constraints its human designers meant to enforce.

Without drifting into doomer narratives that mistake today’s systems for conscious, malicious agents, there is still a serious risk worth taking plainly: AI’s failure modes are not fully mapped, and new, consequential vulnerabilities are being discovered with unsettling regularity. That fact alone should give powerful institutions pause before delegating to AI any role where errors, manipulation, or misuse could irreversibly shape the course of human history.


Banking Is the Worst Place to Ship Conversational AI—and That’s the Point

February 2, 2026|Banking, Pulse of the AI Ecosystem|

We’re sure you’ve heard the latest buzzword, ‘conversational AI’, floating around. The technology seems tantalizing, but many businesses remain skeptical of its practical application. What about the glaring failures of generative AI – hallucinations, data breaches, and its clear ineptness at answering complex queries?

Interestingly, the industry that perhaps places the most weight on secure and reliable AI practices has its eyes on this controversial new technology, giving rise to another buzzword — conversational banking.

Banking is a particularly personal affair. Customers want to know that what they say will remain confidential, meaning that they need to have unwavering trust in the agent they are communicating with. If conversational AI can gain a foothold in this industry, it holds great potential not only to transform banks’ customer engagement, but to spearhead a movement towards conversational AI that is reliable, accurate and trustworthy.

Conversational banking’s success won’t come from the likes of a basic chatbot button on the FAQ page answering pre-defined questions. Rather, the value will come from easing the pains of the customer journey, loosening predefined constraints to allow natural conversation.

Our Conversational Banking Module achieves this by embedding behavioral intelligence at every step of the conversational journey. In the agent builder, your tech teams can configure guardrails and fact injection at every point, and make use of behavioral algorithms to detect intent.

AI agents built with behavioral intelligence can: 

  • Unify customer journeys: AI agents can carry a conversation across channels (chat, voice, human agents, and devices) without forcing the customer to start from point A each time. Banks that master this can eliminate previous frustrations and provide a seamless customer experience.
  • Scale client-level interactions: conversational agents with the right behavioral models can learn from human behavior in real time and dynamically adjust offers, tone and subject-matter. This allows each individual customer to receive what was once only reserved for client-level relationships.
  • Reroute by context: AI agents can detect and redirect customers based on changing context. By setting contextual triggers in your agentic framework, customers can be rerouted to where they need to be, in real-time.
  • Harvest rich behavioral data: Every conversation provides data that humans cannot collect at scale. By letting the customer lead the interaction, your systems gain knowledge of behavioral traits and patterns that can assist in perfecting future interactions, or detecting fraud when anomalies arise.
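Contextual rerouting of the kind described above can be sketched in a few lines. Everything here — the trigger names, the thresholds, the routing targets — is a purely illustrative assumption, not the ecosystem.Ai Module's actual API:

```python
# Hypothetical sketch of context-triggered rerouting in a conversational
# banking agent. All names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    name: str
    condition: Callable[[dict], bool]  # inspects the conversation context
    destination: str                   # channel or queue to route to

TRIGGERS = [
    Trigger("fraud_keywords",
            lambda ctx: ctx.get("intent") == "report_fraud",
            "fraud_team"),
    Trigger("frustration",
            lambda ctx: ctx.get("sentiment_score", 0.0) < -0.5,
            "human_agent"),
    Trigger("high_value",
            lambda ctx: ctx.get("transaction_amount", 0) > 10_000,
            "relationship_manager"),
]

def route(context: dict) -> str:
    """Return the first destination whose trigger fires, else stay with the bot."""
    for trigger in TRIGGERS:
        if trigger.condition(context):
            return trigger.destination
    return "ai_agent"
```

In a real deployment the triggers would be configured in an agent builder rather than hard-coded, but the evaluation logic is the same: the first matching contextual trigger decides where the customer goes, in real time.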

The banks that get this right will earn trust, turning conversational AI into a competitive advantage. Those that fail will damage customer trust and ultimately miss their chance at winning the race.


Everyone will generate their own AI tooling

December 17, 2025|AI, Business, Pulse of the AI Ecosystem|

The most significant shift we are witnessing in the AI space ahead of 2026 is that it no longer takes a degree in machine learning and computer science to develop AI products – now the layperson can do it too.

The productivity gains offered by AI are something that no business wants to miss out on. Back at our San Francisco offices, our founder observed a common phrase being thrown around the tech community: “every business is becoming a software company”. This means that virtually no business in the modern era can survive without using some kind of software. Given this, every person in a company, from business-minded execs, system architects and engineers, to marketing strategists and interns, will need to be productive in developing AI tooling.

Our founder put it like this: “the ‘developer’ could be a kid at school, a grandmother, somebody working in business, or someone wanting to do their job better”. The number-one priority for AI product developers going ahead is to facilitate this shift. Forward-thinkers have identified that the number-one hindrance to getting the layperson to participate is ‘friction’ — the amount of cognitive load it takes to use an AI tool. The more friction, the further your product will sink into irrelevance.

In a user experience without friction, the job of AI becomes carrying the mental burden a user normally holds in working memory. Reducing friction enables the users of AI products to enter a ‘flow state’ — a state of deep focus, effortless action and most importantly, creativity.

Watch this space for weekly insights from our team!


What Cursor’s rise to prominence tells us about the future of AI

November 18, 2025|Pulse of the AI Ecosystem|

A shift from incumbent dominance, to startup-led innovation

Cursor is known as the AI-coding tool for developers, and recent news has solidified the startup’s rise to prominence: a $2.3 billion funding round, raising the company’s valuation to a stunning $29.3 billion. This has come only two years after the startup’s launch.

Cursor’s success signals a profound shift: investors are no longer assuming that the next generation of enterprise tooling will come from the likes of Microsoft, Google, or Salesforce. Rather, capital is flowing toward AI-native challengers who do one thing exceptionally well. The market is rewarding focus, not breadth.

What makes this shift even more evident is that incumbents themselves, such as Nvidia and Google, are investing in Cursor. This is an admission that startups are now at the frontier of innovation.

By solving the pain-points of developers with a tool that makes coding easy, Cursor won over enterprise from the inside out, skipping the din of corporate campaigning. The startup, free from the hierarchical structures and rigid funnels that stifle innovation within organizations, immersed itself in the ecosystem and solved problems on the ground.

Above all, this recent news highlights one salient fact: conditions for innovation are ripe-and-ready in the startup space. We’ve been talking about this since the early 2000s. Our founder, Jay van Zyl, has given extensive lectures about the need for organizations to restructure their innovation systems. After all, innovation is an emergent property, not an afterthought. And it is startups that create the fertile ground necessary to sprout disruptions.

One pertinent question comes to the fore: will AI strategy remain in the hands of dominant incumbents? Or can enterprises take matters by investing in solutions that emerge from the ecosystem?

Watch this space for weekly insights from our team!


Value in the age of innovation

November 5, 2025|Modules, Pulse of the AI Ecosystem|

It emerges from careful match-making

At the start of 2025, the headlines were largely dominated by the ‘rise of AI agents’ – the latest flashy technology. But Gartner’s 2026 report, released in October, showed that the new attitude towards AI will be one of strategic application.

Here are a few things we picked up on:

  • There are growing concerns about security, compliance and ethics of AI, especially after a year that revealed some glaring weaknesses in generative AI.
  • Businesses want minimal pain when it comes to implementation. Their tech teams shouldn’t have to learn an entire syllabus to use AI products.
  • There is an increasing emphasis on AI-native platforms, which differ greatly from the approach we’ve seen until now which has been to add AI as an afterthought: add a chatbot, add a recommender.

The business world has (mostly) come to terms with the glaring weaknesses in AI-business applications. This has led to a sobering moment for the business world, where value doesn’t happen ‘automagically’ but emerges from careful match-making.


Bigger is not always better for agentic AI

October 22, 2025|AI, Generative Models, Pulse of the AI Ecosystem|

Why small language models are the smarter choice

Instinct might tell you that models trained on more data, capable of answering virtually any question with linguistic fervor, would be the most favorable choice for complex agentic journeys.

As stated in recent research in the field, “Large language models (LLMs) are often praised for exhibiting near-human performance on a wide range of tasks and valued for their ability to hold a general conversation.”

However, the research paper, titled “Small Language Models are the future of Agentic AI”, also presents the compelling argument that LLMs are far less than ideal for agentic AI frameworks.

LLMs have the primary objective of extracting patterns from large amounts of data and producing novel content. While this is useful for contexts where a user requires feedback on a well-documented topic, LLMs are inefficient when it comes to tasks that require a more specialized approach.

As our founder Jay van Zyl puts it:

“the more information a model possesses, the longer the thinking process will be, the more verbose the answers are, and the less successful the model is at detecting intent.”

SLMs, on the other hand, are trained on smaller amounts of data, which automatically restricts their context. This means that SLMs have highly specific knowledge of a particular domain, and cannot return answers for anything that falls outside of their deliberately narrow window of context.

The potential of agentic AI will not be fully realised if businesses and tech professionals do not confront the weaknesses and harness the strengths of generative AI.


Telling the truth is not enough

October 6, 2025|AI, Concepts, Ecogentic, Pulse of the AI Ecosystem|

OpenAI’s study on hallucinations ignores the core fix

A recent OpenAI study claims to have revealed the truth about AI hallucinations. The culprits responsible for LLMs’ convincing deception, they say, can be found in training and scoring methods.

When models encounter a toss-up between producing an answer and leaving an answer blank, binary scoring methods lead the model to favour the former. This means that models will obtain a higher ‘accuracy’ score if they produce an answer, even if it is incorrect.

Additionally, if facts occur only once in training data, this will lead a model to select ‘near-neighbour’ answers which are statistically stronger, but inaccurate. Other reasons for hallucinations outlined in the study include replicating errors from flawed corpora, as well as tokenization errors creating systematic mistakes.

To reduce the risk of hallucinations, OpenAI suggests that training methods should be adjusted, corpora sanitized, and scoring of existing benchmarks modified to reward truth.
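The scoring incentive is easy to verify with a little arithmetic. The sketch below is our own illustration of the argument, not OpenAI's code: it compares the expected score of guessing versus abstaining under binary scoring, and under a scheme that penalizes wrong answers.

```python
# Why binary accuracy scoring rewards guessing: expected score for one
# question on which the model is correct with probability p_correct.
def expected_score(p_correct: float, abstain: bool,
                   wrong_penalty: float = 0.0) -> float:
    """Binary scoring sets wrong_penalty = 0, so guessing always
    dominates abstaining; a positive penalty changes the incentive."""
    if abstain:
        return 0.0
    return p_correct * 1.0 - (1 - p_correct) * wrong_penalty

p = 0.3  # the model is only 30% confident
# Binary scoring: guessing scores 0.3 in expectation, abstaining 0.0.
assert expected_score(p, abstain=False) > expected_score(p, abstain=True)
# With a 1-point penalty per wrong answer, guessing scores
# 0.3 - 0.7 = -0.4 in expectation, so abstaining becomes optimal.
assert expected_score(p, abstain=False, wrong_penalty=1.0) < 0.0
```

Under binary scoring, a model that guesses on every uncertain question outscores one that abstains, no matter how low its confidence; only a wrong-answer penalty makes abstention the rational choice.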

But to make LLMs useful in customer-facing contexts, businesses need to go a step further: ensuring accuracy and precision. Accuracy and precision are not just about being correct. They require the intersection of truthfulness, contextual relevance, and appropriateness. In the case of customer engagement, you will need to ensure your LLM possesses all three.

Consider this: a customer logs in to your website and asks your chatbot: “Can you resend the invoice for my last order?” A truth‑oriented model might confidently produce an invoice – but:

  • Is it the right customer, or are we exposing someone else’s private information?
  • Is the invoice the latest version?
  • Should the bot even act without explicit re‑authentication?

In other words, truthfulness is necessary, but insufficient. What you need is a system that constrains, grounds, and governs the model.
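What such a governing layer might look like, in a deliberately minimal sketch. Every name here — the session dict, the invoice store, the function itself — is a hypothetical stand-in for real authentication and record systems, not any particular platform's API:

```python
# Minimal sketch of "constrain, ground, govern": the model never acts
# directly; a governing layer checks identity and recency first.
def handle_invoice_request(session: dict, customer_id: str,
                           invoices: dict) -> str:
    # Govern: refuse to act without an authenticated session.
    if not session.get("authenticated"):
        return "Please verify your identity before I can resend an invoice."
    # Constrain: only the requesting customer's own records are reachable.
    if session.get("customer_id") != customer_id:
        return "I can only access invoices for your own account."
    # Ground: fetch the latest invoice from the system of record,
    # rather than letting the model generate one.
    customer_invoices = invoices.get(customer_id, [])
    if not customer_invoices:
        return "I could not find any invoices on your account."
    latest = max(customer_invoices, key=lambda inv: inv["date"])
    return f"Resending invoice {latest['id']} dated {latest['date']}."
```

The point of the sketch is the ordering: authentication and authorization gates run before the model touches any data, and the answer is grounded in the record system rather than generated.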

“That’s why the agent world has become so important,” says our founder, Jay van Zyl. ecosystem.Ai’s Agentic workflows, backed up by behavioral intelligence, enable generative models to detect intent, abide by set guardrails for security and compliance, and use Fact Injection for accuracy.

Explore how the ecosystem.Ai Platform allows enterprises to take advantage of linguistic usefulness, while ensuring LLMs stay within defined guardrails.

Read the blog: Choose Factual Accuracy with Generative Models Guided by Truth
Learn more from the white paper: AI Agents with Ecogentic: The Evolution of Human-Machine Interaction

Watch this space for weekly insights from our team!


95% of businesses fail to see ROI from AI

September 22, 2025|Innovation, Prediction Platform, Pulse of the AI Ecosystem|

Laggards see this as “damning”, leaders see this as a learning opportunity

95% of organisations have received zero returns from using AI. This new research by the Massachusetts Institute of Technology (MIT) has been termed “damning” by some, but really it signals a growing pain in the AI revolution.

The reality is that new technological inventions rarely reap high rewards when they are first implemented, because few understand their shortcomings. “The MIT study is not saying that the projects are failing, but failing to deliver ROI, like any innovation during its absorptive era,” says our founder, Jay van Zyl. Apparent failures occur at the precipice of almost every technological revolution.

“When something new is invented, the leaders of this world want to stay ahead, and so they enter this state of frenzied urgency,” says our founder Jay van Zyl. This leads to rapid implementation, all before testing, before receiving assurance that the new technology will be effective. On the other hand, the laggards wait for others to prove what is successful and copy them – taking a safer bet, but smothering any potential to be a leader in innovation.

In 2020, 70% of CFOs were employing a conservative AI strategy. Today, only 4% are going with this tentative approach, with aggressive strategies on the rise. With this has come increasing pressure for CFOs to accelerate Return on Investment (ROI) in tech. According to Salesforce, however, measuring success in the age of AI requires “moving beyond traditional metrics” (away from short-term gains, to long-term gains).

The only way to determine what will work and what won’t is through relentless experimentation. True leaders will meet this wave of uncertainty with an equally powerful and frenzied wave of education – risking loss for the sake of learning. Businesses that, through experimentation, figure out for themselves where this new technology falls short are best positioned not to be left behind.

This “intelligent failure” is a systematic approach to gaining knowledge and driving innovation, and ultimately a new way of thinking. The ecosystem.Ai platform is built to assist businesses in discovering through experimentation where the shortcomings of AI lie. By giving businesses access to an ecosystem of technology, the platform lets them find what works best for their specific use-cases, deploy various iterations with ease and choose the approach that reaps the highest reward.


Flashy AI won’t cut it for businesses anymore

September 1, 2025|AI, Business, Predictions, Pulse of the AI Ecosystem|

The future of AI requires sustainability

The artificial intelligence (AI) market is rife with flashy tools claiming to magically evaporate your business problems. We spoke to current industry leaders who warned that not all that glitters is gold.

The generative AI market is expected to reach approximately $1,005.07 billion by 2034. A significant contributor to the growth of generative AI is the rise of agentic AI, which offers the tempting prospect of automated co-workers performing tasks with minimal human input.

The generative boom is part of a greater trend in AI, where AI startups throw shiny tools at businesses, but fail to get to the crux of addressing real business problems.

Our in-house tech journalist spoke to Dawie Krause, Principal Enterprise Architect at MGM Resorts International, who confirmed that quick-fix solutions do not get to the crux of business problems.

For example, not every use-case is suited to real-time. According to Krause, updating a customer’s lifetime value or segment profile can be done hourly or daily without affecting outcomes. Additionally, real-time data can add noise when decisions actually require aggregation over time (for example, fraud detection is more accurate when you evaluate patterns, not just one-off anomalies).
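The fraud example is easy to make concrete. Below is a hedged sketch of the two rule styles — a naive real-time threshold versus aggregation over a window — with purely illustrative amounts and thresholds:

```python
# Patterns vs one-off anomalies: a single large transaction may be
# legitimate, but several in a short window is a stronger fraud signal.
def flag_single(amount: float, threshold: float = 5000.0) -> bool:
    """Naive real-time rule: flag any one transaction over the threshold."""
    return amount > threshold

def flag_pattern(amounts: list[float], threshold: float = 5000.0,
                 min_hits: int = 2) -> bool:
    """Aggregated rule: flag only when several transactions in the
    window exceed the threshold."""
    return sum(a > threshold for a in amounts) >= min_hits

window = [120.0, 6200.0, 45.0]  # one big purchase: plausibly legitimate
assert flag_single(6200.0)      # the naive rule fires on it anyway
assert not flag_pattern(window) # the aggregated rule does not
assert flag_pattern([6200.0, 7100.0, 5900.0])  # a repeated pattern does
```

The aggregated rule needs history, not a millisecond-fresh event stream, which is exactly Krause's point: the value comes from evaluating patterns over time, not from real-time plumbing.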

AI products that market ‘real-time’ as the solution to all problems do not possess the nuance necessary to remain sustainable. Krause pointed out that real-time systems are significantly more complex and expensive to run (due to streaming infrastructure, low-latency Service Level Agreements, etc.), so applying them to each and every use-case is simply unmaintainable.

It has been predicted that, much like the market explosion following the dot-com boom, many of the AI tools we see today will soon disappear. The products that will survive history are those that consider the future of AI – where businesses prioritise highly applicable solutions that generate value, rather than quick-fix solutions that don’t evolve as new problems arise.


GPT-5 is as generative (dumb) as always

August 19, 2025|Pulse of the AI Ecosystem|

Sam Altman recently claimed that “GPT-5 is smarter than us in almost every way.”

The release of OpenAI’s latest model has been feverishly awaited by the AI community, with company executives boasting about its ability to write “entire computer programmes,” encapsulate software-on-demand, and represent a “significant step along our path to AGI” – all available “in your pocket”.

Nicholas Thompson, CEO of The Atlantic, offered a more measured perspective:

“[GPT-5] is an improvement: faster, more accurate, cleaner. It does better on most of the metrics than OpenAI’s previous models. But it’s not the breakthrough over GPT-4 that GPT-4 was over GPT-3. It’s also not AGI or ASI.”

AGI, or Artificial General Intelligence, is born out of our tendency to look ten miles ahead, while ignoring the chasms of inconsistency in front of us. It is typically defined as a “hypothetical” form of AI that can understand, learn, and apply intelligence to any intellectual task a human can. Despite its speculative nature, AGI continues to fuel debates about whether AI could ever be conscious, or think and feel like humans do.

To claim that a large language model is a step toward AGI is not only premature – it misses the real purpose of generative models.

“[GPT-5] is as generative (dumb) as always,” read a Slack message from our founder, Jay van Zyl. At ecosystem.Ai, we are under no illusion: the association between artificial intelligence and actual intelligence is partly investment hype, partly clickbait.

With countless AI tools flooding the market, it’s easy to get swept up in disruptive claims from the likes of Sam Altman. But the reality is this: the true advantages of AI only emerge when it operates within an ecosystem. Without an ecosystem of technologies, AI tools solve isolated problems – but they constitute no more than quick fixes. And once you buy every AI product to solve every problem, your AI stack becomes a technical nightmare.

Explore how ecosystem.Ai’s architecture allows access to a world of AI capabilities without the complexity, in our recent architecture tours:

Watch Architecture 101
Watch Architecture 102

Watch this space for weekly insights from our team!


Search is dead. What comes next may be worse.

August 5, 2025|Pulse of the AI Ecosystem|

GenAI is now integrated in search for greater efficiency, but what does this mean for truth and accuracy?

For over a decade, the way we searched the internet remained largely unchanged, at least from a user perspective. You’d open a browser, type in a search, scroll down and select a web page most suited to your query.

But recently, search engines have integrated AI tools, fundamentally changing both the practice and outcomes of web searches. Instead of a list of web pages, tools such as Google’s AI Overviews aggregate results it deems relevant to your search. This has already had a knock-on effect on click rates for websites. This new search tool poses as an aid to knowledge acquisition. The summaries that are quickly spun up sound accurate, with links to seemingly reputable sources.

The reality is that generative AI is, at its core, only linguistically useful. Generative models, particularly Large Language Models (LLMs), take a string of words and predict the most likely next word based on their training data.
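That prediction mechanism can be shown in miniature. A bigram model — the simplest possible "language model", and only a toy analogy for what LLMs do at scale — picks the statistically most likely next word from its training text, with no notion of whether the continuation is true:

```python
# Next-word prediction in miniature: a bigram model counts which word
# most often follows each word in its training text, then predicts it.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams(
    "the bank approved the loan and the bank closed the account"
)
# "the" is followed by "bank" twice, "loan" once, "account" once --
# so the model predicts "bank", purely on frequency.
assert predict_next(model, "the") == "bank"
```

An LLM does the same thing with vastly more context and parameters, which is precisely why quantity of text, not truth, shapes its answers.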

AI overview summaries have been accused of being wholly inaccurate, reflecting political bias and plagiarizing copy. LLMs don’t use accuracy as a metric, rather taking a linguistic average of content and assuming that the truth lies in quantity.

However, the purposes of tools powered by LLMs have been widely misunderstood and misapplied. ChatGPT is a prime example of this – people trust that, because it is a form of artificial intelligence, it must be able to differentiate between truth and falsehood.

Is there a way to utilize the strong points of generative AI – its ability to aggregate vast amounts of information, simplify it and make it usable – while ensuring it remains truthful and accurate? ecosystem.Ai restricts generative models to truth through Fact Injection – a capability designed for industries where reliability and accuracy are non-negotiable.

Understanding how AI actually works is more vital than ever. Assuming its ‘intelligence’ extends into every possible function is not only flawed, but dangerous.

Watch this space for weekly insights from our team!


The myth of AI automation

July 21, 2025|Pulse of the AI Ecosystem|

The myth that has persisted since the 1700s

In 1770, Count Ludwig von Cobenzl, an Austrian courtier at the Schönbrunn Palace, was challenged to a chess game. He sat down with a smug expression, facing an ‘automated’ chess player created by the civil servant Wolfgang von Kempelen, who claimed it possessed human intelligence.

Its appearance mimicked a human’s, with a life-sized head and torso, a black beard, turban, and a pair of staring grey eyes. Von Cobenzl’s confidence evaporated with a quick defeat.

Of course, what came to be called the Mechanical Turk was a masterful illusion, one that fooled the likes of Napoleon Bonaparte. Beneath the chessboard, a human skillfully maneuvered the machine’s arms to move the pieces, silently defeating opponent after opponent.

Artificial intelligence has, of course, come a lot further than puppeteering. But one must always remain skeptical and conscious of the human labor that goes into so-called ‘automated’ processes. Our founder, Jay van Zyl, likes to say: “People think things happen automagically.”

The truth is that, like any other tool, AI is only as powerful as its wielder. Current narratives around AI warn that the automation of intelligence threatens to destroy careers and seize livelihoods. However, ecosystem.Ai is under no illusion here: AI requires the guidance of human intelligence to have any meaningful impact.

Watch this space for weekly insights from our team!


What videotapes and MCPs have in common

July 3, 2025|Pulse of the AI Ecosystem|

Every week, a pulse check on what’s happening in AI

In the 1980s, VHS and Betamax battled for dominance in the home video market. Despite Betamax’s superior quality, it was VHS, the product that enabled wider adoption through standardization, that came out on top. VHS’s adherence to an open standard, compatible with existing technologies, ultimately led to its victory over Betamax’s exclusivity. It turns out that videotapes and the Model Context Protocol (MCP) have something in common.

“The standard almost always wins,” says Jay van Zyl, ecosystem.Ai’s founder, as he reflects on the striking resemblance between VHS’s success and a new development in the AI world: the Model Context Protocol.

MCP, a breakthrough announced by Anthropic in November 2024, has recently gained traction as a standardized way for LLMs to communicate with external servers and their associated tools, resources and prompt templates.

Prior to MCP, expanding LLMs’ functionality by connecting them to external tools meant developers had to wrangle ever-changing APIs and inconsistent formats – a messy, unsustainable process. MCP addressed this by introducing a universal communication layer, giving LLMs a standardized ‘language’ for communicating with external systems.
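Concretely, MCP's universal layer is JSON-RPC 2.0. A client asking a server to invoke a tool sends a message like the one below; the tool name and arguments are invented for illustration, but the envelope follows Anthropic's published specification:

```python
# What the "universal language" looks like on the wire: MCP messages
# are JSON-RPC 2.0. This builds an illustrative tools/call request.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_exchange_rate",          # a tool the server advertises
        "arguments": {"base": "USD", "quote": "EUR"},
    },
}

wire = json.dumps(request)
# Any MCP-compliant server can parse this envelope, whatever tools it
# wraps -- that shared format is the standardization described above.
assert json.loads(wire)["method"] == "tools/call"
```

The same envelope carries every interaction (listing tools, reading resources, fetching prompt templates), which is what frees developers from per-integration API wrangling.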

The ability for LLMs to move beyond linguistic usefulness to executing tasks using tools marks a profound step in the AI revolution. This development is a testament to the power of ecosystem thinking, where each component, while serving its own purpose, contributes to far greater outcomes through collaboration.

Watch this space for weekly insights from our team!
