
Collaborative Machine Reasoning Marks AI’s Next Inflection Point

I’ve watched generative AI and large language models advance at a pace unlike anything I’ve seen in my more than 30 years as a cognitive scientist. I’ve lived through many AI hype cycles and ‘AI winters’ where progress stalled, but over the past three years, we have seen extraordinary breakthroughs in natural language processing (NLP), reasoning capabilities, and agentic workflows.

With all of this innovation, we’re at an inflection point. In the next year, the way companies use and rely on AI is going to significantly change – and it’s going to happen quickly. We’re moving from using AI as a tool to creating AI-native systems that think, adapt, and govern themselves.

Instead of simply using AI models individually to perform automated tasks, companies are starting to integrate AI into their workflows, infrastructure and governance. AI is rapidly becoming the brain of an operation by connecting decisions, policies and actions with new ways in which humans and machines interact.

A cognitive OS for AI agents

Large language models will continue to serve as the human-facing, conversational side of AI and the generative engine powering the technology. However, neuro-symbolic AI (where LLMs meet knowledge graphs, rules and reasoning) is rapidly emerging as the necessary operating system for AI agents.

Think of it as a ‘central intelligence orchestrator.’ While LLMs handle the conversation between humans and machines, the symbolic layer is responsible for determining what an agent knows, what it can do, and what it cannot do.

By combining multiple layers of AI technology, enterprises will gain the type of governance they’ve always desired from AI – control at the level of cognition itself.

Compliance will no longer be bolted onto the end of a project. AI agents will be governed by compliance logic as part of their reasoning process. Regulations such as HIPAA or SOX will be translated into machine-readable rules that agents reason over before acting.

This is not about catching mistakes after a problem occurs; it's about preventing problems from occurring in the first place. This is similar to a compiler that prevents bad code from running, except in this case the policy engine prevents AI from violating rules. The AI OS will begin to function like a 24/7 compliance officer.
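A minimal sketch of this idea in Python, assuming policies have already been translated into predicate functions. The rule names and the toy action format below are illustrative, not a real HIPAA or SOX rule set:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    kind: str      # e.g. "share_record"
    payload: dict  # data the agent wants to act on

# Each rule pairs a human-readable policy reference with a predicate.
# These rules are hypothetical examples, not actual regulatory logic.
Rule = tuple[str, Callable[[Action], bool]]

RULES: list[Rule] = [
    ("HIPAA-minimum-necessary",
     lambda a: not (a.kind == "share_record" and a.payload.get("includes_phi"))),
    ("SOX-change-approval",
     lambda a: not (a.kind == "modify_ledger" and not a.payload.get("approved"))),
]

def check(action: Action) -> list[str]:
    """Return the policy references the action would violate (empty means allowed)."""
    return [name for name, ok in RULES if not ok(action)]

def execute(action: Action) -> str:
    violations = check(action)
    if violations:
        # Like a compiler rejecting bad code: the action never runs.
        return "BLOCKED: " + ", ".join(violations)
    return "EXECUTED: " + action.kind
```

The key design point is that `check` runs before `execute` does anything, so a violating action is rejected rather than audited after the fact.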

LLM ensembles

Companies realize they cannot rely on a single model for critical business decisions. Therefore, they are employing LLM orchestration. Imagine a team of experts – one generates ideas, another checks the facts, a third looks at compliance, and another optimizes performance. That’s how LLM ensembles work.

It’s a lot like how real teams operate through debate, checks and balances, and verification. An LLM ensemble approach provides better accuracy and more reliability, and it reduces the risks of relying on a single, monolithic AI. The future will be about collaborative machine reasoning.
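The team-of-experts pattern can be sketched as a simple pipeline, where each role here is a stub standing in for a separate model call. The role names and toy checks are assumptions for illustration; in practice each function would wrap a request to a different LLM or prompt:

```python
from typing import Callable

def generator(question: str) -> str:
    # Stub for the idea-generating model.
    return "draft answer to: " + question

def fact_checker(draft: str) -> bool:
    # Stub for a verification model; toy check only.
    return "draft answer" in draft

def compliance_checker(draft: str) -> bool:
    # Stub for a policy-screening model; toy check only.
    return "ssn" not in draft.lower()

def ensemble(question: str, checks: list[Callable[[str], bool]]) -> str:
    """Generate a draft, then require every checker to approve it."""
    draft = generator(question)
    failed = [c.__name__ for c in checks if not c(draft)]
    if failed:
        return "REJECTED by: " + ", ".join(failed)
    return draft

answer = ensemble("What is our refund policy?",
                  [fact_checker, compliance_checker])
```

The point of the structure is that no single model's output reaches the user unchecked; each checker can veto the draft, mirroring the debate-and-verification dynamic of a human team.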

First-generation retrieval-augmented generation (RAG) systems merely collected text from a database. Newer graph-augmented RAG systems function differently. They retrieve structured knowledge: entities, relationships, timelines, and constraints. Each retrieved answer is grounded in contextual information: who created the document, when it was created, and how the answer links to policies or previous decisions.

In addition, each answer can be traced back through provenance chains and knowledge graphs. Therefore, instead of trusting the model, enterprises are able to verify the reasoning behind each answer.

Corporate amnesia

Most companies suffer from what I call ‘corporate amnesia.’ Important ideas vanish in email threads, big decisions disappear into Zoom recordings, and expertise walks out the door when people retire.

But with AI summarization, graph memory, and semantic indexing, we’re finally turning this around. Communications from Slack chats to CRM notes are starting to become part of a living knowledge graph. Every conversation and every decision can be captured as structured intelligence. AI-oriented companies will remember how the organization as a whole thinks, decides, and learns – making enterprise knowledge a lasting asset.

Maybe the biggest shift isn’t technical at all – it’s psychological.

As the distinction between human thinking and machine intelligence continues to blur, we’re seeing humans and AI working together. We already see clinicians using AI copilots to create summaries of patient history. Technicians are wearing AR headsets that guide them through the steps of complex repair procedures. Writers, artists, and filmmakers are collaborating with models that can generate, sketch, and storyboard with them.

AI is evolving into a collaborator in how we think, not just another tool. This is just the beginning – all industries will feel the effects as we adapt together.

Author

  • Jans Aasman

    Jans Aasman, a psychologist with a doctorate in cognitive science, is the CEO of Franz Inc., an early innovator in artificial intelligence and a supplier of graph database technology for neuro-symbolic AI solutions. He works hand-in-hand with organizations such as Montefiore Medical Center, Blue Cross/Blue Shield, Siemens, Merck, Pfizer, Wells Fargo, BAE Systems, as well as U.S. and foreign governments. A frequent speaker within the database and semantic technology industries, he has authored multiple research papers and bylines on the subject.

One Comment

  1. Dave Cooper, January 6, 2026

    Hi Jans and Happy New Year!

    Thank you for the insightful article, and I like the way you describe a “symbolic operating system” for the neural net AI (LLM) to use. This puts nicely into words the concept which occurred to me early on and which is resulting in the rapidly developing (but already usable) skewed-emacs stack, which includes lisply-mcp for connecting to “Lisply Backends” of which our Allegro CL – powered Genworks GDL is a top qualified candidate — the Gendl GWL layer has a lightweight lisply-backend written in a few lines of code and making use of Franz Inc’s AllegroServe.

    I will soon be releasing a live in-browser ttyd-based terminal instance of a shared skewed-emacs playground where you will directly be able to see and interact with what I’m talking about, and I feel this setup will encapsulate some of the vision you are describing in your article. (preview is already live at genworks.com/ttyd but it’s not really self-explanatory yet how to use everything).

    After things are more officially released (or before if you’d like an early guided tour), I will look forward to your feedback with anticipation!