At Adaptia, we’re always on the lookout for ways to integrate the best AI technology into our business. We do this not just because AI is (and will continue to be) highly disruptive, but also because our tech-savvy, ceaselessly curious people are bound to experiment with exciting new tools, and we want that experimentation to happen as securely as possible. We all remember the pivotal blunders of recent months, such as private code leaking into the public domain, so it comes as no surprise that our Legal and InfoSec teams have been pumping the brakes on which technologies we adopt, with the safety of our brand and our partners' brands in mind.
So when OpenAI, the force behind ChatGPT, updated its terms of service so that data sent through the API is not used to train its models by default, we were presented with a huge opportunity. Naturally, we seized it with both hands and decided to build our own internal version of the popular tool on top of OpenAI’s API: AdaptiaAI, which lets our teams harness the power of the platform while layering in our own security and privacy checks. Why? So our talent can use a tool that is both business-specific and far safer, with the aim of mitigating risks like data leaks.
We can’t put brand protection at risk. Ever since generative AI sprang onto the scene, we’ve been experimenting with these tools and exploring their seemingly endless possibilities. As it turns out, AI tools are incredible, but they don’t come without limitations. Besides not being tailored to specific business needs, public AI platforms may use proprietary algorithms or models, which can raise concerns about intellectual property rights and ownership. These public tools also typically collect data, and that collection may not be transparent and may fail to meet an organization’s privacy policies and security requirements. Brand risk is what worries us most, as our top priority is protecting both our intellectual property and our employee and customer data. Interestingly, a key part of the solution is to build the tools yourself. Besides, there’s no better way to truly understand a technology’s capabilities than by rolling up your sleeves and getting your hands dirty.
In creating AdaptiaAI, there was no need to reinvent the wheel. Sure, we can (and do) train our own LLMs, but given the rapid success of ChatGPT, we decided to leverage OpenAI’s API and popular open source libraries vetted by our engineers to bring this generative AI functionality into our business quickly and safely.
In fact, the main hurdle we had to overcome was internal. Our Legal and InfoSec teams scrutinize the terms of service (ToS) of AI tooling, especially when it comes to how data is managed, owned and stored. So we needed their alignment on data risk and on the updates to OpenAI’s ToS, which had been modified specifically for API users so that data passed through OpenAI’s service is not used to train their models by default.
OpenAI retains data passed through the API for 30 days for audit purposes, after which it is deleted, and its ToS state that this data is not used to train its models. Coupling this with our internal best-practices documentation, which all our people have access to and are urged to review before using AdaptiaAI, we minimize the chance of sensitive data persisting in OpenAI’s models.
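The post doesn't publish AdaptiaAI's internals, but the security layer it describes can be illustrated with a minimal sketch: scrub sensitive patterns from a prompt before it ever reaches the API. The pattern names and the `redact`/`ask` helpers here are hypothetical illustrations, not Adaptia's actual implementation (a real deployment would rely on vetted data-loss-prevention rules), and the OpenAI call is left as a commented placeholder so the sketch runs without network access:

```python
import re

# Hypothetical example patterns; a production privacy layer would use a
# vetted DLP library and organization-specific rules.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"sk-[A-Za-z0-9]{20,}"),
}

def redact(prompt: str) -> str:
    """Replace substrings matching sensitive patterns with placeholder tokens."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

def ask(prompt: str) -> str:
    """Run the privacy check, then hand the sanitized prompt to the model."""
    safe_prompt = redact(prompt)
    # The actual API request would go here, e.g. via OpenAI's client library:
    #   client.chat.completions.create(
    #       model="...",
    #       messages=[{"role": "user", "content": safe_prompt}],
    #   )
    return safe_prompt  # returned directly so the sketch runs offline
```

The key design point is that redaction happens on our side of the wire, so even with OpenAI's 30-day retention window, the retained payload never contains the original sensitive values.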
As I’ve seen time and time again, ain’t no hurdle high enough to keep us from turning our ideas into reality, and into useful tools for our talent. Within just 35 days we deployed AdaptiaAI, scaled it out across the company, and launched it at our global All Hands meeting. Talk about faster, better and cheaper: this project is our motto made manifest. Of course, we didn’t stop there.