
What Is Actual AI Innovation?


In June, VentureBeat nominated Onymos VP of Engineering Bhavani Vangala for its Women in AI Awards. The awards recognize AI entrepreneurs, mentors, and researchers — the kind of women “bringing AI out of the lab and into the real world.”

I asked her what she thinks about the state of “AI innovation” today (“How much is hype? How much innovation is actually happening?”) and about how Onymos incorporates AI into the products it’s building.

“Big companies, small companies, they’re all trying to find ways to innovate AI,” she started. “It’s a wildfire. Everyone has to have their own ‘AI something’ now. They’re not taking their time with it.”

Some of the so-called AI innovations we read about (maybe most of them) aren’t innovations at all. 

These are “wrapper products” — essentially a UI layer over someone else’s large language model (LLM). Sometimes, these are barely more than literal UI wrappers over a foundational LLM. Other times, companies try to augment the LLM’s functionality with superficial bells and whistles. 

Either way, these are the kinds of products companies release to look like they’re innovating new technology and not being “left behind.”

Jasper, the AI content platform, is arguably a good example. In fact, Jasper looked like it was leading the charge. By October 2022, it had raised over $125M. But one month later, ChatGPT was released.

At the time, Jasper had been licensing GPT-3, the model underlying ChatGPT. It wouldn’t be unfair to say that GPT-3 was Jasper. But suddenly, anyone could access GPT-3 (for free). Jasper’s then-CEO David Rogenmoser confronted OpenAI’s Sam Altman on a Zoom call: “Look, we need to know some of what y’all are planning on doing.”

Jasper had to scramble to differentiate itself. It cut subscription costs and started incorporating other models into its platform.  

The Turing Post called the whole thing “a cautionary tale.”

Bhavani told me that there are, broadly, three different methods companies can use to build their AI products. Jasper had been using the first.

“Companies can basically just fine-tune an LLM.” 

“So, that means they use an LLM from another company like OpenAI or an open-source model they find somewhere like Hugging Face and wrap their product around it. They can give it a very specific context that includes all of their references and additional training data,” she said.

“The advantage of using this ‘fine-tune’ method is that you don’t have to build an LLM, and you don’t have to teach the LLM natural language. But one of the technical problems you might run into is the hallucinations. Because you’re using an LLM that has already been trained on so much data, that already has so much data to reference, it tends to hallucinate more. It’s harder for it to understand your specific context.” 
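To make that first method concrete, here’s a rough sketch of what fine-tuning an open-source model on your own references might look like with the Hugging Face libraries. The model name, corpus file, and training settings are placeholders for illustration, not details of anything Onymos builds.

```python
# Hypothetical sketch of the "fine-tune an existing LLM" method.
# "distilgpt2" and "company_docs.txt" are stand-ins for whatever model
# and corpus you actually use.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "distilgpt2"  # any small open model from Hugging Face
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)  # starts from pre-trained weights

# Your "specific context": references and additional training data as plain text.
dataset = load_dataset("text", data_files={"train": "company_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language-modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")
```

The appeal Bhavani describes is visible here: the base model already knows natural language, so the training run only has to nudge it toward your references.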

Hallucinations are when, well, AI makes things up. Large data sets exacerbate hallucinations when they’re “noisy” — filled with contradictions, errors, and misinformation. Unfortunately, that about perfectly describes the Internet, which is what most large-scale LLMs are trained on. 

There’s also the aforementioned business problem of over-relying on a third party you can’t control. 

If you want to avoid all of that, you can use the second method to build your AI product. 

“You can create your own LLM.”

“If you’re a smaller company, you might not want to try creating one of those giant LLMs because you’ll compete with larger companies trying to move the needle globally.  But you can focus on some niche areas and get ahead of everyone else there. I’ve spent a lot of time experimenting with these domain-specific LLMs using very small data sets. I’ve had some promising results.” 

Domain-specific LLMs offer several advantages over general-purpose language models like ChatGPT or Claude. I talked about the problem with “noisy” training data, but what if you had total control over the training data? What if the LLM wasn’t trained on “the Internet” but only on clinical research, crime maps, prediction markets, or, say, even a company’s internal documents or proprietary data? 
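For a sense of how different the second method is, here’s an equally rough sketch, again assuming the Hugging Face libraries. One way this can look in practice: instead of loading pre-trained weights, you initialize a deliberately small model at random, so the only text it ever learns from is your curated corpus. The file name and model dimensions are invented for the example.

```python
# Hypothetical sketch of the "create your own LLM" method: a small,
# randomly initialized model trained only on a curated domain corpus.
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          GPT2Config, GPT2LMHeadModel, Trainer,
                          TrainingArguments)

# Reusing a pre-trained tokenizer only borrows its vocabulary, not any knowledge.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# A deliberately small architecture with random weights; nothing is pre-trained.
config = GPT2Config(vocab_size=len(tokenizer), n_positions=512,
                    n_embd=256, n_layer=4, n_head=4)
model = GPT2LMHeadModel(config)

# The entire "world" this model will ever see: your domain data.
dataset = load_dataset("text", data_files={"train": "clinical_notes.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lm", num_train_epochs=20,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because the model starts from nothing, it never inherits the noise of the open Internet. The trade-off is that it has to learn language itself from your data, which is part of why it suits the niche, small-data-set use cases Bhavani describes.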

That can sound dangerous to security-minded tech leaders worried about SaaS overreach, like Onymos CEO Shiva Nathan. That’s why “data encapsulation” — who has access to the training data and how — is an important part of using an LLM trained on your data or your customers’ data.

Finally, there’s the third method for building your AI product. 

“You can build a hybrid with RAG.”

“The way this works is you create a searchable database where the information, or the context, you want to reference is stored. Once a user submits a prompt, you can use a text embedding model to query the database, and then the prompt and the retrieved context go to the LLM for processing.”

This is called retrieval-augmented generation (RAG). You’re basically creating a custom “source of truth” for the LLM to reference. It just isn’t explicitly trained on that data like it would be if you used it to build your own LLM from scratch. 

You can let the LLM use as much of its pre-trained knowledge as you want or constrain it to your “source of truth” entirely.
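As a bare-bones illustration of that flow, here’s a sketch that uses sentence-transformers for the embeddings, a tiny in-memory list as the “searchable database,” and a small open model standing in for the LLM. The documents, model names, and question are all made up.

```python
# Hypothetical RAG sketch: embed a small document store, retrieve the closest
# chunks for a user prompt, and hand both to an LLM.
import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import pipeline

# The custom "source of truth" (in a real system this lives in a vector database).
documents = [
    "Device model X-200 supports firmware updates over BLE.",
    "Warranty claims must be filed within 90 days of purchase.",
    "The API rate limit is 100 requests per minute per key.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are closest to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # cosine similarity, since the vectors are normalized
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# A small open model stands in for whatever LLM actually does the generation.
generator = pipeline("text-generation", model="distilgpt2")

question = "How long do I have to file a warranty claim?"
context = "\n".join(retrieve(question))
prompt = (f"Answer the question using only this context:\n{context}\n\n"
          f"Question: {question}\nAnswer:")
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```

The constraint lives in the prompt: tell the model to answer “using only this context,” or loosen that instruction if you want it to lean on its pre-trained knowledge too.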

AI at Onymos

Today, Onymos is building AI products that are actually innovative. No UI wrappers here. 

Instead, our engineering team focuses on architecting highly secure, configurable “create-your-own-LLM” and “hybrid-using-RAG” solutions whose data our customers own and control — Onymos sees no data and saves no data. 

Bhavani told me, “Everybody has an API. Some have the UI layer. Our offering includes all of that, but it goes beyond that. I call what we’re doing knowledge-base embedding. It’s end-to-end.” 

Whether you want to build a smarter smart device, an automated document processor, or a bespoke AI assistant, Onymos can help you do it. Reach out to the team to learn more.

