
Is AI’s Wild West Era Coming to an End? 

A group of anonymous developers concerned that AI is stealing their code is suing Microsoft, its subsidiary GitHub, and its partner OpenAI for a cool $12 billion. Their lawsuit alleges (in part) that Microsoft and Co. are scraping code from open-source repos to train AI without proper attribution. 

Now, Microsoft is sued a lot, but this particular case is a little different from most. It’s not common for anonymous groups of developers to drive class actions, and this is the first time Microsoft has faced a challenge over how its AI is trained. 

A judge has tossed most of the plaintiffs’ claims but not all of them, and if the defendants lose, AI products like ChatGPT and Copilot could still be fundamentally changed. Aside from the monetary penalties, GitHub, for example, might have to restrict Copilot’s functionality until the model can be retrained on legally compliant data sets. Open-source licenses might have to be rewritten (and might be rewritten anyway) to address AI model training. It could dramatically slow down AI innovation — which, depending on your point of view, might be a good thing. 

In the meantime, there’s actually not much stopping you from doing, well, pretty much whatever you want with AI right now (I’m only slightly exaggerating). This is AI’s Wild West Era… but for how much longer?

And what should you be doing to prepare your organization for when it ends?

“If you climb in the saddle, be ready for the ride.” – Unknown

Not a software engineer? Don’t have any code you’re worried about being (let’s call it) misappropriated? Well, Patagonia and its partner, Talkdesk, are being accused of misappropriating your actual voice. If you’ve interacted with the outfitter’s customer support recently, your “verbal and acoustic information” may have been “intercepted, recorded, and analyzed” by Talkdesk’s AI agents. 

Patagonia joins the ranks of other companies like X and Zoom that have been accused of using their customers’ data to train proprietary machine learning models. 

While the courts hash out precedent-setting cases like these, regulators are beginning to publish their first-generation AI policies. 

At the end of 2023, the Biden administration released the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (Executive Order 14110), the first major executive order specifically focused on AI. 

“Artificial intelligence (AI) holds extraordinary potential for both promise and peril,” it begins. But it’s less of a set of policies itself and more of an announcement that ‘We’re working on it.’

It states, among many other things, “Within 270 days of the date of this order [the relevant agencies will] establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems.”

The order was signed on October 30, 2023, which puts that 270-day deadline in late July 2024, less than a month ago at the time of this writing. That means agencies like the National Institute of Standards and Technology (NIST) have only just started to respond. 

[Figure: NIST’s due dates under Executive Order 14110]

States are regulating AI, too. Maybe even regulating a bit too much?

California’s Senate Bill 1047 is making everyone in tech nervous. Dean W. Ball, a George Mason University research fellow in AI, says the bill requires “developers to guarantee, with extensive documentation and under penalty of perjury, that their models do not have a ‘hazardous capability,’ either autonomously or at the behest of humans. The problem is that it is very hard to guarantee that a general-purpose tool won’t be used for nefarious purposes, especially because it’s hard to define what ‘used’ means in this context. If I use GPT-4 to write a phishing email against an urban wastewater treatment plant, does that count? Under this bill, quite possibly so.”

Shiva Nathan, Onymos Founder and CEO, takes a dim view of what the regulatory landscape is going to look like in the near term. He told me, “Regulators don’t really understand the technology. They will be hand puppets for lobbyists and, by extension, large corporations. It’s going to hurt smaller firms and consumers. Then there will be a period of whack-a-mole, where government officials draft reactionary policies and take partisan political action.” 

Are you prepared?

If you’re like most tech leaders, you’re exploring AI use cases.

Whether you’re using AI assistants or building models yourself, there are things you can do to feel more confident that your products and services will meet regulatory requirements that don’t exist yet. 

  • Use Explainable AI (XAI). XAI is a framework for building and using AI systems whose decisions humans can understand. You can contrast XAI with “black box” models like ChatGPT. As AI expert Sam Bowman puts it: “If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second. And we just have no idea what any of it means.”

    Getting more visibility into LLMs like ChatGPT is an active area of research for good reason. Regulators won’t continue tolerating unexplainable AI, especially as it becomes more powerful and ubiquitous. The European Union has already targeted OpenAI over its “black box” problems.

    Thorough documentation, working with auditors, and selecting collaborative vendors who use interpretable models are all straightforward ways to start practicing XAI. (For a concrete picture of what an interpretable model looks like, see the first sketch after this list.) 
  • Own your data. Owning the data your AI models are trained on helps future-proof your AI products against regulatory scrutiny: it ensures your organization has complete control over how that data is processed, stored, and accessed, which is crucial for adhering to data privacy laws like GDPR.
  • Have contingencies. You can’t be prepared for everything, so part of good preparation is being prepared for being unprepared. Have continuity plans in place to account for regulatory changes that could disrupt your AI operations.

    What will you do if you need to suddenly withdraw an AI system from production due to non-compliance, or need to rapidly implement new features to meet new legal requirements? (A sketch of one such contingency, a runtime kill switch, is the second example below.) 
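
To make “interpretable” concrete, here’s a minimal sketch of a model whose every decision can be printed and read end to end, in contrast to the millions of opaque numbers inside an LLM. It uses scikit-learn and its toy iris dataset purely as stand-ins for a real pipeline:

```python
# A decision tree is explainable by construction: its learned rules are
# explicit feature thresholds a human (or an auditor) can inspect directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)  # shallow tree = readable rules
clf.fit(iris.data, iris.target)

# Prints the full rule set as nested "feature <= threshold" branches,
# the kind of artifact you can hand to a regulator, unlike an LLM's weights.
print(export_text(clf, feature_names=iris.feature_names))
```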
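
On the contingency front, one practical pattern is putting AI features behind a runtime flag so a non-compliant system can be withdrawn from production without an emergency redeploy. This is only a sketch, and the flag name, config source, and fallback behavior are all illustrative assumptions:

```python
import os

def ai_summary_enabled() -> bool:
    # Read the flag at call time (an env var here; a real system might use a
    # remote config service) so the feature can be disabled without a redeploy.
    return os.getenv("AI_SUMMARY_ENABLED", "true").lower() == "true"

def call_model(text: str) -> str:
    # Placeholder for the real model call.
    return f"[AI summary of {len(text)} characters]"

def summarize(ticket_text: str) -> str:
    if not ai_summary_enabled():
        # Deterministic fallback while the AI system is pulled for compliance review.
        return ticket_text[:200]
    return call_model(ticket_text)
```

The same pattern helps with the reverse case, too: meeting a new legal requirement becomes a config change plus a fallback path rather than an emergency rebuild.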

The sun is setting on AI’s Wild West — and you have to be ready for the evolving regulatory landscape. Onymos is at the forefront of AI innovation, so if you need a trusted partner to help you build your AI-powered products and services, or if you need to leverage a platform that’s already integrated with a transparent AI system, we can help. Get in touch with our team to learn more. 
