
Beyond LLMs: How SandboxAQ’s Large Quantitative Models Could Power Enterprise AI

While large language models (LLMs) and generative AI have dominated conversations about enterprise AI over the past year, there are other ways businesses can benefit from AI.

An alternative is to use large quantitative models (LQMs). These models are trained to optimize specific goals and parameters relevant to an industry or application, such as material properties or financial risk metrics. This contrasts with the more general language comprehension and generation tasks of LLMs. Among the leading advocates and commercial providers of LQMs is SandboxAQ, which today announced it has raised $300 million in a new funding round. The company was originally part of Alphabet and was spun off as a separate company in 2022.

This funding is a testament to the company’s success and, more importantly, its prospects for future growth as it seeks to solve enterprise AI use cases. SandboxAQ has partnered with major consulting firms, including Accenture, Deloitte and EY, to distribute its enterprise solutions. The main advantage of LQMs is their ability to solve complex, domain-specific problems in industries where underlying physics and quantitative relationships are essential.

“It’s all about building core products in companies that use our AI,” Jack Hidary, CEO of SandboxAQ, told VentureBeat. “And so if you want to create a drug, a diagnostic, a new material or you want to manage risk at a big bank, that’s where quantitative models shine.”

Why LQMs are important for enterprise AI

LQMs have different objectives and operate differently from LLMs. Unlike LLMs, which process textual data from the internet, LQMs generate their own data from mathematical equations and physical principles. The goal is to address the quantitative challenges a business might face.

“We generate data and get data from quantitative sources,” Hidary explained.
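To make that contrast concrete, here is a minimal Python sketch of the "generate data from equations" idea: training examples are synthesized from a known physical relationship (a toy projectile-range formula, chosen purely for illustration) and a simple quantitative model is fit to them, with no internet text involved. The equation and the plain least-squares fit are stand-ins, not SandboxAQ's actual models.

```python
# Minimal sketch: synthesize training data from a physics equation, then fit a model.
# The toy equation and least-squares fit are illustrative assumptions only.
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def projectile_range(speed, angle_rad):
    """Closed-form physics equation used as the data generator."""
    return (speed ** 2) * np.sin(2 * angle_rad) / g

# Sample the input space and evaluate the equation -- no scraped text involved.
rng = np.random.default_rng(0)
speeds = rng.uniform(5.0, 50.0, size=10_000)
angles = rng.uniform(0.0, np.pi / 2, size=10_000)
y = projectile_range(speeds, angles)

# Fit a simple quantitative model to the synthetic dataset (standing in for the
# neural networks the article mentions).
features = np.column_stack([speeds ** 2 * np.sin(2 * angles), np.ones_like(speeds)])
coef, *_ = np.linalg.lstsq(features, y, rcond=None)
print("recovered 1/g ≈", coef[0])  # should be close to 1/9.81
```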

This approach enables breakthroughs in areas where traditional methods have stalled. For example, in battery development, where lithium-ion technology has dominated for 45 years, LQMs can simulate millions of possible chemical combinations without physical prototyping.

Similarly, in pharmaceutical development, where traditional approaches face a high failure rate in clinical trials, LQMs can analyze molecular structures and interactions at the electronic level. In the financial services industry, LQMs address the limitations of traditional modeling approaches.

“Monte Carlo simulation is no longer sufficient to handle the complexity of structured instruments,” Hidary said.

A Monte Carlo simulation is a classic computational technique that uses repeated random sampling to estimate results. Using SandboxAQ’s LQM approach, a financial services company can scale in ways that a Monte Carlo simulation cannot. Hidary noted that some financial portfolios can be extremely complex, with all kinds of structured instruments and options.

“If I have a portfolio and I want to know what the tail risk is given the changes in that portfolio,” Hidary said, “what I would like to do is create 300 to 500 million versions of this portfolio with slight modifications and then look at the tail risk.”
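As a rough illustration of that workflow (not SandboxAQ’s actual method), the Python sketch below perturbs a toy portfolio’s weights many times, runs shared Monte Carlo return scenarios, and reads off a tail-risk measure (99% expected shortfall) for each variant. The asset data and the scale (thousands of variants rather than hundreds of millions) are made up for illustration.

```python
# Hypothetical sketch: many perturbed portfolio variants, shared Monte Carlo
# scenarios, tail risk per variant. All numbers below are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_assets, n_variants, n_scenarios = 10, 10_000, 500

base_weights = np.full(n_assets, 1.0 / n_assets)   # equal-weight toy portfolio
mu = rng.normal(0.0005, 0.0002, n_assets)          # daily mean returns (made up)
vol = rng.uniform(0.01, 0.03, n_assets)            # daily volatilities (made up)

# Create many slightly modified versions of the portfolio.
perturbed = base_weights + rng.normal(0.0, 0.01, (n_variants, n_assets))
perturbed = np.clip(perturbed, 0.0, None)
perturbed /= perturbed.sum(axis=1, keepdims=True)

# Shared Monte Carlo scenarios of daily asset returns.
scenarios = rng.normal(mu, vol, (n_scenarios, n_assets))

# Portfolio return of every variant under every scenario: shape (variants, scenarios).
portfolio_returns = perturbed @ scenarios.T

# Tail risk per variant: average loss in the worst 1% of scenarios (expected shortfall).
var_99 = np.percentile(portfolio_returns, 1, axis=1, keepdims=True)
tail = np.where(portfolio_returns <= var_99, portfolio_returns, np.nan)
expected_shortfall = -np.nanmean(tail, axis=1)

print("worst variant ES(99%):", expected_shortfall.max())
print("best variant ES(99%):", expected_shortfall.min())
```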

How SandboxAQ uses LQMs to improve cybersecurity

SandboxAQ’s LQM technology aims to enable companies to create new products, materials and solutions, rather than simply optimize existing processes.

Among the business verticals in which the company has innovated is cybersecurity. In 2023, the company first launched its Sandwich cryptography management technology. This has since been expanded with the company’s AQtive Guard enterprise solution.

The software can analyze a company’s files, applications and network traffic to identify the cryptographic algorithms in use. This includes detecting outdated or broken algorithms such as MD5 and SHA-1. SandboxAQ feeds this information into a management model that can alert the chief information security officer (CISO) and compliance teams to potential vulnerabilities.
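A minimal, hypothetical sketch of that scanning step follows: it walks a directory, flags references to deprecated algorithms such as MD5 and SHA-1 in source and config files, and emits findings a compliance dashboard could ingest. It is illustrative only and does not represent AQtive Guard’s implementation or API.

```python
# Toy cryptography-inventory scanner: flag references to weak hash algorithms.
# File extensions, patterns and the findings format are illustrative assumptions.
import re
from pathlib import Path

WEAK_PATTERNS = {
    "MD5": re.compile(r"\bmd5\b", re.IGNORECASE),
    "SHA-1": re.compile(r"\bsha-?1\b", re.IGNORECASE),
}

def scan_tree(root: str) -> list[dict]:
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".java", ".go", ".cfg"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for algo, pattern in WEAK_PATTERNS.items():
            for lineno, line in enumerate(text.splitlines(), 1):
                if pattern.search(line):
                    findings.append({"file": str(path), "line": lineno,
                                     "algorithm": algo, "severity": "high"})
    return findings

if __name__ == "__main__":
    for finding in scan_tree("."):
        print(finding)
```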

Although an LLM could be used for the same purpose, the LQM offers a different approach. LLMs are trained on large volumes of unstructured internet data, which can include information about encryption algorithms and vulnerabilities. In contrast, SandboxAQ’s LQMs are built from targeted quantitative data about cryptographic algorithms, their properties and known vulnerabilities. The LQMs use this structured data to build models and knowledge graphs specifically for encryption analysis, rather than relying on a general understanding of language.
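To illustrate the difference, here is a toy example of that kind of structured representation: a tiny graph of algorithms, their status and suggested replacements that can be queried directly rather than inferred from free text. The entries, edges and replacement choices below are illustrative assumptions, not SandboxAQ’s schema.

```python
# Toy knowledge-graph-style lookup for cryptographic algorithms.
# Entries and recommended replacements are illustrative assumptions only.
CRYPTO_GRAPH = {
    "MD5":      {"type": "hash",       "status": "broken",             "replaced_by": ["SHA-256", "SHA-3"]},
    "SHA-1":    {"type": "hash",       "status": "deprecated",         "replaced_by": ["SHA-256", "SHA-3"]},
    "SHA-256":  {"type": "hash",       "status": "approved",           "replaced_by": []},
    "RSA-2048": {"type": "public-key", "status": "quantum-vulnerable", "replaced_by": ["ML-KEM", "ML-DSA"]},
}

def remediation_candidates(algorithm: str) -> list[str]:
    """Walk from a flagged algorithm to its recommended replacements."""
    return CRYPTO_GRAPH.get(algorithm, {}).get("replaced_by", [])

print(remediation_candidates("MD5"))       # ['SHA-256', 'SHA-3']
print(remediation_candidates("RSA-2048"))  # ['ML-KEM', 'ML-DSA']
```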

Looking ahead, SandboxAQ is also working on a remediation module that will be able to automatically suggest and implement updates to the encryption in use.

Quantum dimensions without quantum computers or transformers

The original idea behind SandboxAQ was to combine AI techniques with quantum computing.

Hidary and his team realized early on that quantum computers would not be widely available or powerful enough in the near term. Instead, SandboxAQ uses quantum principles implemented through enhanced GPU infrastructure. Through a partnership, SandboxAQ has extended Nvidia’s CUDA capabilities to handle quantum techniques.

SandboxAQ also does not use transformers, which form the basis of almost all LLMs.

“The models we train are neural network models and knowledge graphs, but they are not transformers,” Hidary said. “You can generate from equations, but you can also have quantitative data from sensors or other types of sources and networks.”

Although LQMs are different from LLMs, Hidary doesn’t see it as an either-or choice for companies.

“Use LLMs for what they are good at, then introduce LQMs for what they are good at,” he said.