Beyond OpenAI: Reasons, Timing, and Strategies for Exploring Alternatives

OpenAI's recent hurdles are a wake-up call: AI leaders must explore alternatives to build sustainable AI strategies.

The drama that unfolded at OpenAI over Thanksgiving week sent chills through the industry. Even though Sam Altman is back at the helm, permanent damage was done. On top of pre-existing concerns around cost, reliability, and safety, this leadership shake-up adds new uncertainty about OpenAI’s governance and overall corporate direction.

We can all be thankful to OpenAI for ushering in a new era in Artificial Intelligence, but if you are trying to design a sustainable and robust AI strategy, you may want to diversify your options to mitigate those uncertainties.

Luckily, there are many alternatives out there, and the ecosystem keeps growing on a daily basis.

Reasons to move away from OpenAI

OpenAI’s GPT-4 remains the best-in-class model out there, beating every alternative on most benchmarks. Google’s newly announced Gemini may come close, but it is still too early to tell.

So why move away from OpenAI’s models if they are so good?

Cost

Despite recent price cuts, OpenAI is still very expensive. With its per-token pricing, your OpenAI bill grows linearly with your user base and with the size of your inputs and outputs. Many developers have inadvertently racked up sizable OpenAI bills after their product went viral. Even among similar API-based providers, OpenAI is the most expensive.
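
To make that concrete, here is a back-of-the-envelope sketch of how a per-token bill scales. The rates below are illustrative assumptions, not quoted prices; check OpenAI's pricing page for current numbers.

```python
# Back-of-the-envelope estimate of a monthly GPT-4 bill.
# Rates are illustrative assumptions, not current pricing.
PRICE_PER_1K_INPUT_TOKENS = 0.03   # USD, assumed GPT-4 input rate
PRICE_PER_1K_OUTPUT_TOKENS = 0.06  # USD, assumed GPT-4 output rate

def monthly_bill(requests_per_day: int, input_tokens: int, output_tokens: int) -> float:
    """Estimate a monthly bill given average token counts per request."""
    cost_per_request = (
        input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
        + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    )
    return cost_per_request * requests_per_day * 30

# 10,000 requests/day with ~1,000 input and ~500 output tokens each:
print(f"${monthly_bill(10_000, 1_000, 500):,.0f} per month")  # -> $18,000 per month
```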

Comparing per-token price of AI APIs

Granted, these alternatives do not always match GPT-4 in terms of model performance, but it is still worth noting that OpenAI commands premium prices.

Reliability

Screenshot of OpenAI's status page

In the last 90 days, OpenAI’s API suffered at least five outages lasting over two hours, resulting in 99.65% uptime. Although that sounds very close to 100%, industry uptime standards are usually counted in “number of 9s,” and two 9s falls well short. Four or five 9s (i.e. 99.99% to 99.999%) are the gold standard, especially for business-critical, high-volume APIs.
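
To put those 9s in perspective, here is a quick computation of how much downtime each uptime level allows over a 90-day window:

```python
# Translate uptime percentages into allowed downtime over a 90-day window.
HOURS_IN_WINDOW = 90 * 24  # 2,160 hours

for label, uptime_pct in [("observed", 99.65), ("four 9s", 99.99), ("five 9s", 99.999)]:
    downtime_hours = HOURS_IN_WINDOW * (1 - uptime_pct / 100)
    print(f"{uptime_pct}% ({label}): {downtime_hours:.2f} hours of downtime per 90 days")

# 99.65%  -> ~7.56 hours of downtime per 90 days
# 99.99%  -> ~0.22 hours (~13 minutes)
# 99.999% -> ~0.02 hours (~1.3 minutes)
```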

If your business relies on OpenAI’s APIs to serve its customers, a 3.5-hour outage (as shown in the screenshot above) means a 3.5-hour outage of your product as well, which can have dire consequences for your revenue, user retention, and satisfaction.

Security

As a third-party hosted service – as opposed to an on-premise deployment – OpenAI is inherently not secure. You should be very wary of sending any confidential or private data to OpenAI’s API.

Here are a few important aspects to consider when integrating with OpenAI:

  • Your users’ conversations with the GPT models can be persisted and used to further train the models. OpenAI’s models improve thanks to RLHF – Reinforcement Learning from Human Feedback. It is impossible to know for sure to what degree OpenAI persists conversations and other behavioral signals to grade the quality of generated outputs.
  • Your data could leak or be exposed. OpenAI persists your data for a certain window of time for monitoring purposes, which means that it could be exposed to internal staff or potentially third-party contractors. Even worse, OpenAI’s data could leak in an adversarial attack.
  • OpenAI’s compliance with GDPR has been challenged by legal and academic institutions. A central concern regarding GDPR compliance is the potential use of personal data without explicit consent.
  • OpenAI’s ironic lack of transparency when it comes to its training data and human feedback mechanism means that enterprises are not able to thoroughly evaluate and audit models, leading to potential safety and security issues.

Latency

OpenAI's API response times. Source.

OpenAI’s API response times are much slower than competitors’. Depending on the size of the generated output, API calls can take up to 30 seconds to return a response.

As seen in this post, Anthropic’s response times are systematically below 10 seconds, while OpenAI’s are frequently above the 10-second bar.
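
Rather than relying only on third-party measurements, you can benchmark latency against your own workload. Here is a minimal sketch, assuming the openai Python SDK (v1.x) and an OPENAI_API_KEY environment variable:

```python
# Measure end-to-end latency of a chat completion on your own prompts.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.perf_counter()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize the GDPR in one paragraph."}],
)
elapsed = time.perf_counter() - start
print(f"{elapsed:.1f}s for {response.usage.completion_tokens} output tokens")
```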

Uncertainties around product roadmap

Announcement of the GPT Store

At OpenAI’s 2023 Dev Day, Sam Altman announced the launch of “GPTs” and the GPT Store. This announcement seems to signal a new product direction toward the consumer space. Many drew an analogy between Apple’s App Store and the GPT Store, both consumer-facing products.

To this day, OpenAI is maintaining an Enterprise offering, but it is unclear how long that will last or where the focus will be in the coming years.

OpenAI’s mission, as stated on their website, is still to advance development towards AGI. So which is it? AGI research or GPT Store gimmicks?

Building an entire AI strategy on top of OpenAI seems increasingly risky. As Google has shown many times over, tech companies can sunset beloved products at any time, leaving users and customers in the dust.

Uncertainties around leadership

To this day, we still don’t know why Sam Altman was ousted. Some suspect internal politics, others mention a technological breakthrough that would challenge safety, and others say Altman was going too fast toward commercialization at the expense of OpenAI’s research mission.

What the events of November 2023 show, though, is that the OpenAI board behaved very irresponsibly and unprofessionally, alienating partners (e.g. Microsoft), customers, and the community at large.

These events do not inspire trust that OpenAI’s leadership can hold a clear and steady direction going forward.

When to think about moving away from OpenAI

Cloud-based APIs such as OpenAI are very appealing because they are extremely easy to use. Sign up, enter your credit card details, get an API key and start building your product. They seem like a logical place to start, but when is it time to move on?

Prototyping

“If it doesn’t work with GPT-4, don’t bother.”

GPT-4 is currently the upper bound of AI capabilities. It is simply the best model on the market. So it is a great place to start validating a product idea. If you are unable to create a useful and workable product on top of GPT-4, it is unlikely that alternative models will do the job better.

Cloud APIs offer the lowest possible effort for the best possible performance, so they are a great place to start prototyping. You can quickly build a low-volume, low-fidelity product and collect user feedback before you even consider alternatives.
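
As an illustration, a first prototype can be a few lines of code. The sketch below assumes the openai Python SDK (v1.x) and an OPENAI_API_KEY environment variable; the summarization feature is just a stand-in for your own product idea:

```python
# A minimal product prototype on top of OpenAI's API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_ticket(text: str) -> str:
    """Toy product feature: summarize a customer support ticket."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Summarize support tickets in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(summarize_ticket("My invoice from last month shows a duplicate charge."))
```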

Scaling up

After the prototyping phase comes the deployment and scaling phase. In an effort to go to market fast and maintain model performance, it makes sense to keep using cloud APIs. But get ready for a steep bill, and make sure you have considered all the security implications listed above.

Arguably, this is the time to start considering a longer term AI strategy. That means investigating alternatives that are cheaper, more reliable, more secure, and can be customized to your use case and industry.

Enterprise-grade AI strategy

As your AI-powered product gains traction, and your business wants to expand its AI-based portfolio, it is really important to consider solutions that give you full control over your AI infrastructure.

That means building teams of AI experts and thinking about bringing tools and models on-premise to guarantee security and reliability. It also means considering more specialized model options that may yield performance similar to GPT-4’s on specific tasks for a fraction of the cost.

How to move away from OpenAI

If you have been building your AI-powered products on top of OpenAI’s APIs and you are looking to move away from it, be it for cost, security, or performance reasons, here are a few options to look into.

GPT-4 outside OpenAI

As you know, OpenAI received a sizable investment from Microsoft, which granted Microsoft unprecedented access to the GPT models. Microsoft’s cloud platform Azure offers what it calls the Azure OpenAI Service. Within Azure, you can deploy one of the OpenAI models and benefit from the rest of the Azure suite.
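
Switching is mostly a configuration change. Here is a minimal sketch using the openai Python SDK (v1.x); the endpoint, API version, and deployment name are placeholders for your own Azure resources:

```python
# Calling a GPT model through Azure OpenAI Service instead of OpenAI directly.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
)

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # your Azure deployment name, not a model ID
    messages=[{"role": "user", "content": "Hello from Azure!"}],
)
print(response.choices[0].message.content)
```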

What are the benefits?

  • Azure-grade security – Being deployed within the Azure network infrastructure, these models may benefit from heightened security and privacy, as well as stronger guarantees that user data will not be used for training.
  • Pricing – Azure’s pricing is identical to OpenAI’s, so there are no savings on that side. However, if you are an early-stage startup, you can benefit from up to $350k in Azure credits that you can spend towards GPT usage.
  • Reliability – Because Azure’s deployments are directly managed by Azure’s cloud engineering teams, outages are likely less frequent.

Other API-based offerings

Without going all the way to on-premise, you may want to use other API-based providers as fallbacks when OpenAI is down. After careful testing and evaluation, you can set up integrations with providers such as Cohere, Perplexity, and Anthropic.
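
One common pattern is a fallback chain that tries providers in order. The sketch below is illustrative: the per-provider functions are hypothetical stand-ins, each of which would wrap one vendor's SDK.

```python
# A sketch of a provider fallback chain. The provider-specific functions
# are hypothetical stand-ins; each would wrap one vendor's SDK and
# return the generated text as a plain string.
def call_openai(prompt: str) -> str:
    raise NotImplementedError  # wire up the OpenAI SDK here

def call_anthropic(prompt: str) -> str:
    raise NotImplementedError  # wire up the Anthropic SDK here

def call_cohere(prompt: str) -> str:
    raise NotImplementedError  # wire up the Cohere SDK here

PROVIDERS = [call_openai, call_anthropic, call_cohere]

def complete_with_fallback(prompt: str) -> str:
    """Try each provider in order; move on when one errors out or is down."""
    last_error = None
    for provider in PROVIDERS:
        try:
            return provider(prompt)
        except Exception as exc:  # timeouts, rate limits, outages...
            last_error = exc
    raise RuntimeError("All providers failed") from last_error
```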

These alternatives may alleviate cost and reliability issues, but they will not improve safety, security, or control, which are only addressable with on-premise deployments.

On-premise models

The gold standard for reliability, security, and control is on-premise deployment. With on-prem models, user data never leaves your VPC, and you can control exactly what gets tracked and persisted.

At this time, only open-source (OSS) models can be deployed on-premise. OSS models such as Llama 2, Falcon, and Mistral are amazing alternatives to closed-source ones. They usually do not perform as well out of the box, but they can be fine-tuned and integrated into a retrieval-augmented generation (RAG) pipeline to produce results of similar quality to closed models on specific tasks.
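
As a starting point, running an OSS model locally can be as simple as the following sketch, which assumes the Hugging Face transformers library (plus PyTorch) and enough GPU memory for a 7-billion-parameter model:

```python
# A minimal sketch of running an OSS model locally with Hugging Face
# Transformers (pip install transformers torch). A 7B-parameter model
# needs roughly 15 GB of GPU memory in 16-bit precision.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",
    device_map="auto",  # place weights on available GPUs
)

output = generator("Explain retrieval-augmented generation in one sentence.",
                   max_new_tokens=64)
print(output[0]["generated_text"])
```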

In terms of cost, it is arguable whether on-premise OSS models are cheaper. Although you no longer pay per token, you have to pay for the GPU machines hosting the models, as well as for the engineering teams maintaining the infrastructure.
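
A rough break-even computation illustrates the trade-off; every number below is an assumption to be replaced with your own quotes and benchmarks:

```python
# An illustrative break-even comparison between per-token API pricing
# and self-hosted GPUs. All numbers are assumptions for the sake of
# the example.
API_COST_PER_1K_TOKENS = 0.06   # assumed GPT-4 output rate, USD
GPU_COST_PER_HOUR = 2.50        # assumed A100 on-demand rate, USD
GPU_TOKENS_PER_SECOND = 50      # assumed 7B-model serving throughput

gpu_cost_per_1k_tokens = GPU_COST_PER_HOUR / (GPU_TOKENS_PER_SECOND * 3600 / 1000)
print(f"Self-hosted: ${gpu_cost_per_1k_tokens:.4f} per 1K tokens at full utilization")
# -> ~$0.014 per 1K tokens, vs. $0.06 via the API. The catch: the GPU
# bill accrues even at low utilization, and staffing costs are extra.
```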

OSS models also come with much greater transparency than closed models. Their training data and alignment techniques are usually documented in academic papers.

On-premise fine-tuned OSS models should be the gold standard for any enterprise willing to take their AI strategy seriously, and ready to invest in the engineering talent to guarantee safe, reliable, cost-effective and transparent AI.

Watch our video on the new deal between OpenAI and open-source models.

How Airtrain.ai can help you move away from proprietary models

At Airtrain.ai, we believe the future is small models customized to specific use cases. Small models are cheaper, faster, and easier to wrangle. It's been shown repeatedly that small models can achieve similar performance to larger models when fine-tuned for specific tasks on high-quality datasets.

Airtrain.ai is a no-code compute platform for Large Language Models. The platform lets you evaluate alternatives to proprietary models, then fine-tune and deploy them to integrate them back into your apps. Reach out to us to learn how Airtrain.ai can help you reduce your AI bills and supercharge your AI strategy.

Conclusion

As developers and business leaders navigate their AI journey, it is important to recognize the great value provided by OpenAI (model performance and ease of use), but also the risks and liabilities that come with it.

Although relying on OpenAI to prototype and bring new products to market is perfectly justifiable, there comes a time to consider a broader and more robust AI strategy, one that includes more diversified options such as other third-party AI providers, but also on-premise OSS deployments.

The Airtrain AI YouTube channel

Subscribe now to learn about Large Language Models, stay up to date with AI news, and discover Airtrain AI's product features.
