Foundational LLMs vs. compound AI
It feels like the last two years have been a whirlwind for those of us focused on powering our products with artificial intelligence (AI) capabilities. First, starting in 2022, came monolithic foundational large language models (LLMs). While very powerful, foundational LLMs were limited to the knowledge in their training data and were hard to adapt to specific tasks that required access to private or proprietary data.
Monolithic or foundational LLMs were excellent tools for generic tasks, but to provide value they needed to be integrated into business processes and given access to data they were not trained on. Enter compound AI techniques. Sometime in 2023, the conversation shifted heavily toward compound AI systems, especially the most widely discussed compound AI technique: retrieval-augmented generation (RAG).
Compound AI approaches like RAG apply the principles of system design: multiple modular components interact with one another to solve a particular problem. In this context, combining a monolithic foundational LLM with components that are unique or proprietary to an organization proved to be a far more cost-effective way to exploit the power of LLMs than fine-tuning a foundational model, or worse, building a new one.
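To make the system-design idea concrete, here is a minimal RAG sketch. The keyword-overlap retriever and the call_llm placeholder are illustrative assumptions, not any particular product's implementation; a real system would use a vector store and a hosted or on-prem foundational model.

```python
# Minimal RAG sketch: a toy retriever over proprietary documents plus a
# placeholder LLM call, composed into one compound system.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a foundational LLM (hosted API or on-prem model)."""
    return f"<model answer grounded in: {prompt!r}>"

def answer(query: str, documents: list[str]) -> str:
    """Compose the compound system: retrieve context first, then generate."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

if __name__ == "__main__":
    docs = [
        "Policy 12: refunds are processed within 14 business days.",
        "Policy 7: enterprise contracts renew annually.",
    ]
    print(answer("How long do refunds take?", docs))
```

The point of the sketch is the modularity: the retriever, the knowledge source, and the foundational model are separate components that can each be swapped out without retraining anything.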
The rise of agentic AI
In 2024, compound AI innovation took another interesting turn and suddenly the talk was all about “agentic AI.” So, what is agentic AI and how is it different from techniques like RAG?
Compound AI systems like RAG lack decision-making capabilities; they are designed largely to enhance the responses of the foundational model by retrieving information from knowledge sources.
Agents, on the other hand, use foundational models as their "brain": they can reason, forming a plan of action and acting on it using the external tools they have access to (databases, websites, search engines, APIs, etc.). Additionally, agents can go a step further, recording past experiences in memory and learning continuously. Finally, agents can collaborate, interacting with other agents to achieve goals that require coordinated action.
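The sketch below shows what that reason-act-remember loop might look like in its simplest form. The two tools, the plan_next_step stub, and the memory list are all illustrative assumptions; in a real agent, the planning step is delegated to the foundational LLM rather than hard-coded.

```python
# Minimal agent-loop sketch: reason about the next step, act via a tool,
# and record the observation in memory before deciding what to do next.

def search_web(query: str) -> str:
    return f"search results for '{query}'"

def query_database(sql: str) -> str:
    return f"rows returned by '{sql}'"

TOOLS = {"search_web": search_web, "query_database": query_database}

def plan_next_step(goal: str, memory: list[str]):
    """Stand-in for the LLM 'brain': choose a tool and an input,
    or return None once the goal is considered achieved."""
    if not memory:
        return ("search_web", goal)                           # first, look things up
    if len(memory) == 1:
        return ("query_database", "SELECT * FROM findings")   # then consult internal data
    return None                                               # enough information gathered

def run_agent(goal: str) -> list[str]:
    memory: list[str] = []                                    # past experiences the agent keeps
    while True:
        step = plan_next_step(goal, memory)                   # reason: form a plan of action
        if step is None:
            break
        tool_name, tool_input = step
        observation = TOOLS[tool_name](tool_input)            # act: invoke an external tool
        memory.append(f"{tool_name} -> {observation}")        # remember the outcome
    return memory

if __name__ == "__main__":
    for entry in run_agent("summarize refund complaints this quarter"):
        print(entry)
```

Multi-agent collaboration extends the same idea: each agent runs a loop like this one, and the "tools" it calls can themselves be other agents.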
Bridging classical ML and agentic AI with IMO Clinical AI
It would seem that with all these advances in generative AI-based approaches, all previous forms of AI, lumped together under the category "classical machine learning (ML)," would become obsolete. It turns out classical ML-based approaches are still alive and kicking. This points to a market where customer preferences and needs for AI-based products are fragmented.
While LLMs are the future, product teams need to contend with the reality of a current market where not every customer is ready to adopt LLM-based solutions. The arguments against LLM-based solutions include high compute costs, privacy and IP-protection concerns, questions about whether foundational models can be run "on-prem," and overreliance on a single foundation model.
IMO Health, via its AI platform—IMO Clinical AI—recognizes this market reality and provides products that can utilize a range of capabilities, from classical ML to agentic AI. IMO Health understands that customers today are at varying comfort and adoption levels when it comes to AI. While some customers have embraced generative AI-based approaches wholeheartedly, others rely on classical ML approaches as they wait for their organizations to fully grasp the impact of generative AI on their business operations.
IMO Health is committed to being a strong and reliable long-term AI partner for its customers by providing them with products and services based on a range of technologies.