When reflecting on the pivotal computing trends that shaped the past year, one topic reigned supreme: AI. AI dominated the headlines, but unlike the hype cycles associated with far less transformative innovations of the past, the consensus was anything but rosy. Yes, we saw glimpses of AI’s singular potential in arenas like healthcare, but collectively we also lamented its return on investment, hallucinations, and the quality of outputs – all while some predicted a massive market correction.
We also came increasingly face-to-face with the reality of AI as it is built today, and with its price of admission: massive, unrealistic infrastructure requirements and unsustainable power consumption that threaten brownouts for consumers who are already unimpressed, worried about AI-related job losses, and facing skyrocketing utility rates.
That is why our love affair with AI was a tumultuous courtship at best, one that left lots of questions unanswered and more than a few fears unaddressed. That doesn’t have to be the case.
Most of the problems associated with AI are, at their core, the result of a flawed approach to AI architecture, an engineering problem that can be solved. Importantly, this includes not only the well-publicized challenges associated with AI's increasing demand for ever more powerful data centers, but also its applicability and value to consumers.
Recently, Charles Yeomans, Atombeam's founder and CEO, wrote an article for Forbes titled "The Limits of LLMs And Why The Architecture Must Change," which offers a compelling, frank, and realistic overview of this flawed approach to architecture, one significantly limited by the very nature of Large Language Models (LLMs).
LLMs are exceptionally powerful for analyzing massive datasets and will continue to play an important role. However, unlike many computing advances, which historically grow more efficient with each generation (the dynamic popularly associated with Moore's Law), LLMs require disproportionately more compute power and more energy for even incremental gains in performance. That is not sustainable, and it is why LLMs should not, and cannot, serve as the foundation for the AI revolution.
Fortunately, there is a solution, and it's one we are hard at work refining here at Atombeam. We are confident that our Persistent Cognitive Machine (PCM) will not only solve the infrastructure challenges associated with LLMs but also address their inherent weaknesses: they are stateless, they don't truly learn, and they must start each task from scratch, even when combined with retrieval-augmented generation (RAG) and other database lookup technologies.
Unlike LLMs, our PCM will feature persistent, auditable memory and the ability to link facts over time. Exceptionally lightweight, it will also bring real AI to devices, where, much like the human brain, it can proactively connect facts, details, and context while using dramatically less power. The human brain, after all, runs on only about 20 watts, yet it has the remarkable ability to comprehend context and detail.
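To make that contrast concrete, here is a purely illustrative sketch of the difference between a stateless request loop, where context must be re-supplied on every call, and a persistent memory that accumulates and links facts across interactions. The class and method names (StatelessAssistant, PersistentMemory, remember, recall) are hypothetical and do not describe PCM's actual internals.

```python
# Illustrative sketch only: stateless lookup vs. persistent, linkable memory.
# All names here are hypothetical, not Atombeam's implementation.
from collections import defaultdict


class StatelessAssistant:
    """Every request starts from scratch; context must be re-supplied (e.g. via RAG)."""

    def answer(self, question: str, retrieved_context: list[str]) -> str:
        # Nothing carries over from previous calls.
        return f"Answer to '{question}' using {len(retrieved_context)} retrieved passages"


class PersistentMemory:
    """A toy persistent store that keeps facts and the links between them."""

    def __init__(self) -> None:
        self.facts: dict[str, str] = {}                 # fact id -> statement
        self.links: defaultdict[str, set] = defaultdict(set)  # fact id -> related fact ids

    def remember(self, fact_id: str, statement: str, related_to=()) -> None:
        # Store the fact and record its relationships in both directions,
        # so the links remain auditable later.
        self.facts[fact_id] = statement
        for other in related_to:
            self.links[fact_id].add(other)
            self.links[other].add(fact_id)

    def recall(self, fact_id: str) -> list[str]:
        # Surface the fact plus everything linked to it, without rebuilding context.
        return [self.facts[fact_id]] + [self.facts[f] for f in self.links[fact_id]]


# Usage: facts accumulate across interactions instead of being rebuilt per request.
memory = PersistentMemory()
memory.remember("f1", "The user prefers metric units.")
memory.remember("f2", "The user is planning a trip to Norway.", related_to=["f1"])
print(memory.recall("f2"))  # both facts surface together on the next interaction
```

The point of the sketch is simply that a persistent store lets knowledge compound across sessions, whereas a stateless model must be handed its context anew each time.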
With PCM, we are creating an AI that learns in ways people can genuinely benefit from: an AI companion that retains, reuses, and refines the information each user finds helpful, whether it is solving a complex business or scientific problem, enabling smart robotics, or serving as an exceptional personal assistant that truly knows and pursues the outcomes that matter to that individual.
We believe PCM will be a pivotal innovation, one that will ultimately empower people to realize the real and full potential of AI. We invite you to learn more.