Security

Our commitment to data security and privacy.

Written by Oriol Zertuche

Data security and integrity are top priorities for businesses and individuals using AI for personalized use cases. Language models such as GPT are built on substantial training datasets, which can raise concerns about data privacy. At Cody AI, we're committed to addressing these concerns by safeguarding your data and privacy.

In this support article, we provide insight into Cody AI's security measures so you can understand how we protect your data at each stage of the process. We'll look at three crucial stages: Documents, Embeddings, and Model.

Documents

Cody uses the secure and private Amazon Simple Storage Service (S3) to store your documents before further processing. S3 encrypts all object uploads to all buckets and maintains compliance with programs such as PCI-DSS, HIPAA/HITECH, FedRAMP, the EU Data Protection Directive, and FISMA, so your data remains protected and meets regulatory requirements. Documents uploaded to Cody are encrypted with SSE-S3 (server-side encryption with Amazon S3-managed keys) and are accessible only to you and your team members, ensuring data confidentiality and privacy.
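
To illustrate, here is a minimal sketch of what a document upload with SSE-S3 encryption typically looks like using the AWS SDK for Python (boto3). The bucket name, object key, and file name are hypothetical and this is not Cody's actual implementation:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket, key, and file names for illustration only.
with open("handbook.pdf", "rb") as f:
    s3.put_object(
        Bucket="example-cody-documents",   # assumed bucket name
        Key="team-123/handbook.pdf",       # assumed object key
        Body=f,
        ServerSideEncryption="AES256",     # SSE-S3: S3-managed encryption keys
    )
```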

Embeddings

Embeddings are a representation of your data in the form of vectors (lists of numbers). Since the data provided to Cody is unstructured, converting it into embeddings enables faster retrieval and semantic search. To learn more about how Cody generates responses from your documents, check out this article.
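
As a rough illustration (not Cody's actual code), a chunk of document text can be turned into an embedding vector with OpenAI's embeddings API; the example text and the model name shown are simply common choices, not details confirmed by Cody:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

chunk = "Our refund policy allows returns within 30 days of purchase."
response = client.embeddings.create(
    model="text-embedding-ada-002",  # assumed embedding model
    input=chunk,
)
vector = response.data[0].embedding  # a list of floats (1536 dimensions for this model)
```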

For storing these vectors or embeddings, Cody relies on Pinecone, a secure vector database trusted by some of the largest enterprises.

Pinecone offers robust security features like:

  • SOC 2 Type II certification

  • GDPR compliance

  • Routine penetration tests to check for vulnerabilities

  • Isolated Kubernetes containers on fully managed and secure AWS infrastructure for storing data
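
For a sense of how vector storage and semantic search work in practice, here is a minimal, hypothetical sketch using the Pinecone Python client together with the embedding call shown above. The index name, record IDs, metadata fields, and the embed() helper are assumptions for illustration; this is not Cody's internal code:

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                    # reads OPENAI_API_KEY
pc = Pinecone(api_key="YOUR_PINECONE_KEY")  # hypothetical credentials
index = pc.Index("example-cody-index")      # assumed index name

def embed(text: str) -> list[float]:
    """Turn a piece of text into an embedding vector."""
    resp = openai_client.embeddings.create(
        model="text-embedding-ada-002", input=text
    )
    return resp.data[0].embedding

# Store a document chunk as a vector, keeping the original text as metadata.
chunk = "Our refund policy allows returns within 30 days of purchase."
index.upsert(vectors=[{
    "id": "doc-1-chunk-0",
    "values": embed(chunk),
    "metadata": {"text": chunk},
}])

# Semantic search: find the chunks most similar to a user question.
results = index.query(
    vector=embed("How long do customers have to return an item?"),
    top_k=3,
    include_metadata=True,
)
for match in results.matches:
    print(match.score, match.metadata["text"])
```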

Model

Cody AI leverages OpenAI’s GPT models, including GPT-3.5, GPT-3.5 16K, and GPT-4, to generate responses. Due to resource limitations, these models are not hosted on Cody’s own servers; instead, Cody uses the APIs provided by OpenAI (which are also used to create embeddings for your documents and queries). When generating a response, only the specific portion of data relevant to the question asked is sent in the request, rather than all of your documents. This approach ensures efficient processing, preserves data integrity, and minimizes unnecessary data transfers. An additional safeguard provided by the API is that your data is not used to train any existing or new language model, so it remains restricted to your bot and is never used for model training purposes.
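
Conceptually, this "send only the relevant portion" approach looks something like the hypothetical sketch below, which builds on the vector search shown earlier: the top-matching chunks are placed into the prompt as context, and only those chunks travel to the OpenAI API. The prompt wording, helper function, and model choice are assumptions, not Cody's actual implementation:

```python
from openai import OpenAI

client = OpenAI()

def answer(question: str, relevant_chunks: list[str]) -> str:
    """Send only the retrieved chunks, not the whole document set."""
    context = "\n\n".join(relevant_chunks)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context.\n\n"
                        f"Context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# relevant_chunks would come from the vector search shown earlier.
print(answer(
    "How long do customers have to return an item?",
    ["Our refund policy allows returns within 30 days of purchase."],
))
```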

According to OpenAI, two changes to its data usage and retention policies took effect on March 1, 2023:

1. OpenAI will not use data submitted by customers via its API to train or improve its models, unless a customer explicitly opts in to share data for this purpose.

2. Any data sent through the API will be retained for abuse and misuse monitoring for a maximum of 30 days, after which it will be deleted (unless otherwise required by law).

Source: OpenAI

This commitment provides an additional layer of confidentiality and helps ensure the privacy and security of your data. To learn more, you can read this article.


Compliance

As an early-stage startup, we are diligently working towards SOC 2 compliance. This is a lengthy process; our team has put stringent policies and procedures in place to certify that our systems and operations maintain a high level of security and reliability.
