WELCOME TO THE
Investor Hub
Welcome to our Investor Hub — where Atombeam innovation meets opportunity. Here, you'll find direct access to the content you need to stay informed on company updates, key activities, and strategic insights. We're committed to empowering you with the information you need to make confident, informed investment decisions.
Latest updates from Charles
Charles Yeomans, our CEO, has been at the forefront of driving innovation in data management. In his latest update, he shares insights on how Atombeam is leveraging cutting-edge technology to enhance data efficiency and security. By integrating advanced algorithms and AI, we are setting new standards in the industry, ensuring our investors are part of a transformative journey.
Latest Investor Video
Stay up to date with Atombeam’s latest investor insights. In this section, you’ll find our most recent investor video—featuring company updates, strategic milestones, and a look at how our breakthrough data compaction technology is shaping the future of digital communication. Whether you’re a current investor or exploring new opportunities, watch to learn how Atombeam is driving innovation, expanding partnerships, and delivering value across industries.
04.10.26
We’re Going Live Today!

Today is the day. Our livestream with Alexandria Tucker, PhD, Chief Architect of Atombeam's Persistent Cognitive Machine (PCM)*, starts at 11am PT / 2pm ET. Make sure you’ve downloaded the StartEngine app and signed up for notifications so you’re ready to join.
This will be a different kind of discussion than we’ve been having recently, with a voice you haven’t heard much from before. We’ll be talking about the science behind our business – from theoretical development to practical implementation.
See you there, and get your questions ready.
Apple App Store | Google Play Store
*The PCM technology is still in development, and there are substantial technical risks that could prevent us from achieving these efficiency gains at scale. Competitive advantages in technology are often temporary, and competitors may develop alternative approaches that match or exceed our efficiency claims. Market adoption of power-efficient AI is not guaranteed, and regulatory, technical, or economic factors could impact the viability of our approach. This technology has not been validated in large-scale commercial deployments, and significant engineering challenges remain before commercial release. Investors should consider this a high-risk, early-stage technology investment with uncertain outcomes.
This Reg A+ offering is made available through StartEngine Primary, LLC, member FINRA/SIPC. This investment is speculative, illiquid, and involves a high degree of risk, including the possible loss of your entire investment. For more information about this offering, please view the Offering Circular and Related Risks.
04.10.26
Responses to Webinar Questions
Any key updates on progress at AU, Digital Barriers, Trilliant?
Let me take each one:
Alhamrani Universal (AU): AU remains an active partner and customer. They are the largest fintech company in Saudi Arabia, processing millions of ATM transactions daily for 50% of Saudi banks. They are currently in discussions with Saudi Telecom and other major Saudi entities to expand Neurpac* deployment and build out their reseller channel. The Iran conflict has slowed some of these conversations, but AU is still actively working to move their reseller efforts forward. This is a significant market — Saudi Arabia is investing heavily in digital infrastructure, and AU gives us a strong foothold in the region.
Digital Barriers: Digital Barriers is an active partner. They are a UK-based company with an engineering-heavy team — mostly engineers, very few sales people. We are working with them on several joint opportunities. Our strategy with Digital Barriers is twofold: pursue new joint customers together, and work to get introductions to their existing customer base, which includes major government and law enforcement agencies. They bring deep technical credibility in their market; we bring Neurpac’s data compaction capability that enhances what they deliver.
Trilliant: The reseller agreement is signed and Trilliant is actively marketing Neurpac to its utility customer base. They started marketing within a week of signing. There are active discussions with specific utilities but no signed deployments yet — this is a normal timeline for a regulated industry where procurement cycles run 6–12 months. That said, as I have mentioned before, an eight-million-meter utility has already expressed strong interest in using Neurpac across its entire system. Trilliant’s 40-million-device installed base and integration with 340+ meter brands worldwide give us considerable leverage once the first deployments prove out. There is significant technical work to be done, but we remain very confident in this channel.
"Charles after working in the IT infrastructure for 25 yrs. & knowing the importance in what ATOMBEAM is trying to accomplish in delivering data at accelerated speed is EXTREMELY HUGE CONGRATS"**
Thank you — I genuinely appreciate the kind words, and even more, I appreciate your confidence in what we’re building. Twenty-five years in IT infrastructure means you understand the problem firsthand: data is growing exponentially while the pipes carrying it are not. That’s the structural mismatch Neurpac solves. We’re grateful to have shareholders who understand the technology at this level.
How do you see quantum cryptography impacting your solution?
This is a great question and one we think about carefully. The short answer is that quantum computing is a tailwind for Neurpac, not a threat — and we believe our approach may actually deliver what quantum cryptography is trying to achieve.
Neurpac+ may be quantum-resistant. The goal of quantum cryptography is to create encryption that is resistant to quantum computing attacks. We believe our combination of standard encryption and Neurpac compaction may achieve exactly that. Here’s why: Neurpac’s compaction transforms and obfuscates data using a codebook that is generated by machine learning. Without the codebook, the output is meaningless — but unlike traditional encryption, there is no mathematical trapdoor for a quantum computer to exploit. There’s no large number to factor, no discrete logarithm to compute. The security is based on the codebook being secret, not on a mathematical problem being hard.
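To make the codebook concept above concrete, here is a toy sketch in Python. The codebook entries and message formats are hypothetical stand-ins, not Atombeam's actual algorithm or data; the point is only that the wire output is a short token with no mathematical structure for a quantum computer to attack.

```python
# Toy illustration of codebook-based compaction (NOT Atombeam's actual
# algorithm or format). A learned codebook maps frequent patterns to short
# codewords; without the secret codebook, the output is meaningless.

# Hypothetical codebook a trained model might produce for sensor traffic:
CODEBOOK = {
    b"TEMP:OK;PRESSURE:OK":   b"\x01",
    b"TEMP:HIGH;PRESSURE:OK": b"\x02",
    b"TEMP:OK;PRESSURE:LOW":  b"\x03",
}
REVERSE = {v: k for k, v in CODEBOOK.items()}

def compact(message: bytes) -> bytes:
    """Replace a known pattern with its short codeword."""
    return CODEBOOK[message]

def expand(codeword: bytes) -> bytes:
    """Recover the original message -- only possible with the codebook."""
    return REVERSE[codeword]

msg = b"TEMP:HIGH;PRESSURE:OK"
wire = compact(msg)
assert len(wire) == 1 and len(msg) == 21   # 21 bytes shrink to 1 on the wire
assert expand(wire) == msg                 # round-trips with the codebook
# An interceptor sees only b"\x02": no number to factor, no log to compute.
```

The security argument in the text rests on the secrecy of the table itself, not on the hardness of a math problem, which is why there is no trapdoor in this sketch for a quantum algorithm to exploit.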
Pen testing supports this. We have engaged TrustFoundry, an independent professional penetration testing firm, to test Neurpac’s combined compaction-and-encryption capability against known attack vectors including VORACLE, CRIME, and BREACH. To date, only one attack has succeeded; we have identified the fix that would close it off, and we are confident we can implement it. Once complete, we believe this will demonstrate a unique capability: data that is simultaneously compacted and encrypted in a way that resists both conventional and quantum attacks. We think this would be of very high interest to the U.S. Cyber Command – and a very important capability.
Post-quantum cryptography makes Neurpac potentially more valuable. The transition to post-quantum cryptography (PQC) — which government and industry will need to undertake — will make encrypted data significantly larger. NIST’s new PQC standards produce ciphertexts and signatures that are 2–10× larger than current algorithms. Every network in the world will need to carry more overhead per packet once PQC is adopted. Neurpac’s ability to reduce data volume before or alongside encryption becomes even more valuable when the encryption itself makes data bigger. This is separate from the point above about pen testing – this would be an independent application of Neurpac to data that is encrypted using PQC.
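The per-packet arithmetic behind this point can be sketched in a few lines. The overhead growth below uses the 2–10× range cited above; the baseline overhead and the 4:1 compaction ratio are hypothetical assumptions for illustration only, not Atombeam measurements.

```python
# Illustrative per-packet arithmetic for the PQC-overhead point above.
# All specific byte counts are assumptions chosen for illustration.

payload = 1200                    # bytes of application data per packet
overhead_now = 300                # assumed crypto overhead per packet today
pqc_overhead = overhead_now * 5   # within the cited 2-10x PQC growth range

packet_now = payload + overhead_now           # 1500 bytes
packet_pqc = payload + pqc_overhead           # 2700 bytes

# Hypothetical 4:1 payload compaction applied alongside encryption:
packet_pqc_compacted = payload // 4 + pqc_overhead   # 1800 bytes

assert packet_pqc > packet_now                # PQC makes packets bigger
assert packet_pqc_compacted < packet_pqc      # compaction offsets the growth
```

Under these illustrative assumptions, compaction claws back most of the size penalty that PQC adds, which is the sense in which larger post-quantum ciphertexts make data reduction more valuable, not less.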
What do you see as the first step for monetization for PCM?*** When/Where will revenue start flowing from?
PCM is our next-generation AI platform — a fundamentally different architecture from the large language models that power today’s AI systems. We’re at TRL 4–5 with a working prototype, and the monetization path is becoming clearer.
Navy demonstration — April 29: Our live demonstration at NIWC Pacific is the first major milestone. This is a demonstration of persistent learning and geometric decision support in a maritime domain awareness scenario. A successful demo should validate the technology in front of the people who need it most.
DARPA contract — potentially later this year: DARPA has expressed strong interest in PCM, and we are hopeful they will issue a Broad Agency Announcement in the near term, with program funding planned at eight figures. We believe we could secure a DARPA contract later this year. A DARPA contract would be PCM’s first revenue, and we believe it would provide the kind of institutional validation that changes every subsequent conversation, including commercially.
Drone autonomy — pitching next month: We are working on a separate Department of War effort involving PCM for drone autonomy. We plan to pitch that next month at very senior levels at the Pentagon. Drones operating in contested environments need the ability to learn on the fly, make decisions without calling home, and never hallucinate — exactly PCM’s design point. That could be worth several million dollars if we win a contract, which we are hopeful will happen, and quickly.
Institutional capital for PCM: PCM is so important — and the market need so clear — that we believe we can attract significant institutional capital to drive it to production quickly. We are targeting production capability by end of this year, or even possibly sooner on a limited basis. A major new study from Harvard, MIT, and Stanford researchers (“Agents of Chaos,” February 2026) just documented exactly the structural failures in today’s AI agents that PCM is designed to prevent. Microsoft’s AI assistant Copilot is struggling so badly that the company just posted its worst quarter since 2008. The market is educating itself toward PCM’s value proposition.
When will we hear about new commercial clients? We closed Trilliant and there hasn’t been an update on new clients since.
I understand the question and want to be straightforward about where we are and why this takes time.
How deep-tech, in-line technology works: I’ve said this many times, but it bears repeating because it’s fundamental to understanding our sales cycle. Neurpac is an in-line technology — it sits inside the data path of another company’s product. That means we have to partner with a company making gateways or networking equipment, integrate and test Neurpac in their devices, and then they resell it to their customers. That is not a quick sale. But it is extremely high leverage when it happens, because one integration unlocks their entire customer base. I expect sales momentum to build in Q3/Q4 of this year as current integration work matures.
Ericsson — along with Trilliant, our most significant commercial relationship: Ericsson completed a year of technical testing, formalized a Technology Alliance Partner agreement in December 2024, and is now actively referring accounts to us. Ericsson has referred Verizon, Dell, Siemens, and a robotics company to Atombeam, among others. All of those engagements are active and working right now — it is just not fast. Each one involves technical evaluation, proof of concept, and procurement. When contracts are signed, we will announce them.
Oil & Gas: We have a purchase order from an oil & gas technology company and are now pursuing their principal hardware supplier. We think this sector will be extremely strong for us. In this case the Iran conflict, with the subsequent rise in oil prices, could open up more opportunities, since higher oil prices tend to increase the sector’s appetite for investing in technology.
International expansion: Alhamrani Universal in Saudi Arabia is an active customer processing millions of transactions daily, with reseller efforts underway. We also anticipate opening up efforts in South Korea and Japan.
Data center opportunity: We’ve developed a comprehensive ROI model for data center operators showing $3M–$5M in annual NOI impact per 50MW facility. This opens an entirely new and very large buyer category — the PE-backed colocation operators converting legacy facilities to AI workloads, in addition to the Mag-7 data center operators.
The bottom line: We have multiple active engagements with major companies. The sales cycle for enterprise infrastructure technology is measured in quarters, not weeks. I’d rather under-promise and over-deliver than hype engagements that haven’t closed. The leverage in this business model is that each partnership, once proven, has the potential to unlock hundreds of thousands to millions of endpoints with long-duration recurring revenue.
How sure are you that Neurpac can be put onto silicon?
Very high confidence. Neurpac’s core operation — codebook lookup — is one of the most silicon-friendly operations in computing. It’s a fixed table lookup: match an input pattern to a codebook entry and output the corresponding codeword. This is what hardware does best. It’s the same class of operation as AES encryption, which followed exactly this path — software first, then hardware accelerator, then standard on every networking chip. We have previously run Neurpac on a 10-cent chip – it is extremely lightweight.
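In software terms, the hot path described above is nothing more than a fixed-table match. A minimal sketch, using a hypothetical 16-entry placeholder table:

```python
# The codebook lookup sketched in software (the 16-entry table is a
# hypothetical placeholder, not a real Neurpac codebook). Every step is
# integer-only and deterministic -- no floating point, no inference --
# which is why the same operation maps naturally onto an FPGA or ASIC
# lookup table.

FIXED_TABLE = [(i * 7 + 3) % 256 for i in range(16)]  # placeholder entries

def lookup(pattern: int) -> int:
    """One fixed-table match: the entire hot path."""
    return FIXED_TABLE[pattern & 0x0F]   # mask to table width, then index

assert lookup(5) == FIXED_TABLE[5]       # deterministic: same in, same out
assert lookup(5 + 16) == lookup(5)       # masked to the 4-bit table width
```

In hardware this becomes a ROM or content-addressable-memory access: fixed latency, no branches, no arithmetic beyond the index, which is the sense in which "FPGA and ASIC designers implement these routinely."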
Several factors give us confidence:
1. The operation is simple and deterministic. No floating point math, no neural network inference, no variable-length processing. A codebook lookup is a hash match against a fixed table. FPGA and ASIC designers implement these routinely.
2. Our patent portfolio covers the hardware path. We hold patents specifically covering FPGA, ASIC, SoC IP block, and PCIe/CXL accelerator implementations of Neurpac. We didn’t file these speculatively — they reflect actual architectural work on how the codebook lookup maps to silicon.
3. We have the right partnerships. NVIDIA’s BlueField DPU is our primary hardware target for the data center deployment path. Intel’s Altera FPGA division is directly relevant. These are companies that turn software IP into silicon for a living.
4. The economics demand it. In a data center running tens of thousands of network interfaces, software-based compaction consumes CPU cycles that could be running workloads. A hardware implementation in the NIC or DPU eliminates that overhead entirely. The market will pull Neurpac into silicon because the ROI improves dramatically when compaction runs at line rate with zero CPU cost.
The path is: software deployment now (generating revenue and proving the value), FPGA implementation next (for customers who need line-rate performance), then ASIC/SoC IP licensing (where every chip with Neurpac earns a royalty). We’re on the software step. The silicon steps are engineering execution, not research risk.
How sure are you that PCM can be put on a chip?
The important thing to understand is that PCM doesn’t need a custom chip to run on small hardware. It can do that today. Once PCM is trained for a particular job, the ongoing compute is remarkably light. The entire knowledge structure for a domain-specific application is approximately 12 to 120 kilobytes. That’s smaller than a photograph. Compare that to an LLM like GPT-4, which requires hundreds of gigabytes and a data center full of GPUs. This means PCM can run on the kind of hardware already inside submarines, drones, autonomous vehicles, industrial control systems, tactical radios, and edge servers — anywhere you need intelligence without a cloud connection.
Before a dedicated PCM chip exists, there are several paths to deploy PCM on edge hardware that is available right now:
FPGA — the most natural near-term path. Small FPGA boards are already common on military platforms, industrial systems, and commercial drones. PCM’s computational operations could be implemented as dedicated logic on an FPGA at 1–5 watts, weighing grams. The FPGA approach also lets you iterate the hardware design before committing to a fixed chip — it’s essentially a programmable prototype of what the eventual dedicated PCM processor will do. Our patent portfolio already covers FPGA implementations.
Small ARM processors — already deployed in edge applications. Processors like NVIDIA’s Jetson Orin Nano are already inside drones, autonomous vehicles, robotic systems, and edge servers running vision processing and navigation. PCM’s compute requirements for a domain-specific application are well within what these processors can handle alongside existing software stacks. The PCM module runs as a process on the existing processor — no additional hardware needed.
Microcontrollers — for the simplest applications. For the most constrained applications, a modern microcontroller costing dollars and consuming milliwatts might be sufficient. This proves the fundamental point: PCM runs on hardware that LLMs and other transformer-based AI cannot even contemplate.
How deployment works in practice: You train and bootstrap the PCM instance on a workstation or server — that’s where the heavier compute lives. Once the system has learned the target domain (which, given PCM’s logarithmic learning curve, happens faster than you’d expect), you export the knowledge structure — 12 to 120 kilobytes — to the deployed hardware. The system then operates independently with no cloud connection and no GPU, and it continues learning from operational experience in the field using only its onboard compute. To use a drone example: when the drone returns to base, it shares what it learned back to the ground station, which merges it with experience from other drones and pushes updated knowledge back to the fleet. The same federation model applies to a submarine returning from patrol (a new operating profile for an opposing submarine, say), an autonomous vehicle fleet (potholes, broken stop lights, obstructions), or a network of industrial controllers (robot #322 needs servicing, or is operating slowly) — any population of PCM instances that periodically reconnect. This "federating" passes along conclusions, not the original data; we believe it is closer to real thought, the way humans share what they have learned.
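The federation pattern above can be sketched in a few lines. Everything here is illustrative — the names, the use of simple string "conclusions," and the merge-by-union step are hypothetical stand-ins, not PCM's actual knowledge format or merge algorithm.

```python
# Hypothetical sketch of the federation pattern described above: each
# returning unit contributes learned conclusions (not raw telemetry); the
# base station merges them and redistributes the combined knowledge.

def merge_knowledge(fleet_reports: list[set[str]]) -> set[str]:
    """Union the conclusions learned independently by each unit."""
    merged: set[str] = set()
    for report in fleet_reports:
        merged |= report
    return merged

# Two drones return from sorties with small, independently learned sets:
drone_a = {"obstacle: crane at grid 14E", "wind shear above 300m"}
drone_b = {"wind shear above 300m", "GPS degraded in sector 9"}

fleet_knowledge = merge_knowledge([drone_a, drone_b])
assert len(fleet_knowledge) == 3          # shared conclusion stored once
assert "GPS degraded in sector 9" in fleet_knowledge
# Each drone then receives the merged set -- conclusions, not raw data.
```

The key property this illustrates is bandwidth: what crosses the link is a few short conclusions, not the flight logs or sensor streams they were distilled from.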
So why would we need a dedicated chip? Unit economics at scale. For defense applications — hundreds of drones, a fleet of submarines, thousands of tactical radios — the FPGA or ARM processor path works now. For commercial and industrial applications — enterprise edge servers, autonomous vehicles, IoT gateways — it also works now. The dedicated PCM chip is about deploying to billions of devices at the lowest possible cost and power. That’s a 2–3 year design cycle, and our patent portfolio covers single-chip implementations. But the key point is that PCM on edge hardware is not something we’re waiting for. We think PCM on an FPGA or an NVIDIA Jetson module could be deployed within months of the Navy demo. Our April 29 demonstration at NIWC Pacific runs on heavier compute, but the same architecture at a smaller scale runs on hardware that’s already inside military platforms, commercial drones, and industrial edge systems today.
This is a powerful counter to the idea that PCM is years away from deployment. The dedicated chip is years away. But we believe PCM running on existing hardware in real applications is not. The Navy demo on April 29 is the first public validation, and the Department of War drone effort we hope to pitch next month will demonstrate PCM on exactly this kind of edge deployment — one of many applications where PCM’s tiny footprint and zero cloud dependency could change what’s possible.
Charles
*Neurpac’s power efficiency projections are based on prototype testing and theoretical modeling. Actual results may vary significantly from these estimates.
**This testimonial may not be representative of the experience of other customers and is no guarantee of future performance or success.
***The PCM technology is still in development, and there are substantial technical risks that could prevent us from achieving these efficiency gains at scale. Competitive advantages in technology are often temporary, and competitors may develop alternative approaches that match or exceed our efficiency claims. Market adoption of power-efficient AI is not guaranteed, and regulatory, technical, or economic factors could impact the viability of our approach. This technology has not been validated in large-scale commercial deployments, and significant engineering challenges remain before commercial release. Investors should consider this a high-risk, early-stage technology investment with uncertain outcomes.
This Reg A+ offering is made available through StartEngine Primary, LLC, member FINRA/SIPC. This investment is speculative, illiquid, and involves a high degree of risk, including the possible loss of your entire investment.
04.09.26
Tomorrow: Join Livestream with Our Chief Architect

If you haven’t joined one of our livestreams before, it’s time to get ready.
Alexandria Tucker, PhD, Chief Architect of the Persistent Cognitive Machine (PCM)* AI at Atombeam, will be going live to give insight into how some of our most exciting technological breakthroughs actually work, and what we’re working on next.
Here’s what you need to have ready:
- Download the StartEngine App
- Follow Atombeam and subscribe to push notifications
- Make sure you’ve enabled notifications in your settings – we will let you know when the stream is starting!
It all takes place tomorrow, April 10, at 11am PT / 2pm ET. Join us and get a closer look at the science behind this investment opportunity.
Apple App Store | Google Play Store
*The PCM technology is still in development, and there are substantial technical risks that could prevent us from achieving these efficiency gains at scale. Competitive advantages in technology are often temporary, and competitors may develop alternative approaches that match or exceed our efficiency claims. Market adoption of power-efficient AI is not guaranteed, and regulatory, technical, or economic factors could impact the viability of our approach. This technology has not been validated in large-scale commercial deployments, and significant engineering challenges remain before commercial release. Investors should consider this a high-risk, early-stage technology investment with uncertain outcomes.
This Reg A+ offering is made available through StartEngine Primary, LLC, member FINRA/SIPC. This investment is speculative, illiquid, and involves a high degree of risk, including the possible loss of your entire investment. For more information about this offering, please view the Offering Circular and Related Risks.