CotranslatorAI
CotranslatorAI employs a direct API connection to OpenAI’s GPT models, ensuring a secure, encrypted pathway for your data.
- Your data never resides on our servers. We don’t even have a server in the loop; the software is installed on your computer and your data is directly transmitted to and from OpenAI’s GPT models for processing.
- Your data is encrypted in transit. The OpenAI API is accessed over HTTPS (TLS), which protects your data from interception or tampering while it travels between your computer and OpenAI.
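To make the "no server in the loop" claim concrete, here is a minimal sketch of what a direct call from your machine to OpenAI's public Chat Completions endpoint looks like. This is illustrative only, not CotranslatorAI's actual source code: the model name, prompt, and placeholder API key are assumptions, and the request is built but not sent.

```python
import json
import urllib.request

# The request goes straight from your machine to api.openai.com over
# HTTPS (TLS); there is no intermediate server that could store the data.
API_URL = "https://api.openai.com/v1/chat/completions"


def build_request(api_key: str, text: str) -> urllib.request.Request:
    """Build (but do not send) an HTTPS request for a translation prompt."""
    payload = {
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": [
            {"role": "user", "content": f"Translate to French: {text}"}
        ],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # placeholder key below
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_request("sk-...", "Hello")  # "sk-..." is a dummy placeholder
print(req.full_url)  # https://api.openai.com/v1/chat/completions
```

Because the URL scheme is `https://`, the connection is TLS-encrypted end to end between your computer and OpenAI's servers, with no third party in between.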
OpenAI
- OpenAI’s commitment to privacy ensures that data sent through the API is not used for model training or shared with third parties. For further details, please refer to OpenAI’s Enterprise Privacy Policy, which covers data privacy through the API as used by CotranslatorAI.
Let’s clear up a misconception about model training!
A common concern is whether text entered through CotranslatorAI is used to train the OpenAI models. It is important to understand that GPT and similar large language models (LLMs) are static: they do not update themselves with new data as they run. Each model is trained on a vast dataset and then deployed as a fixed snapshot; nothing you type is incorporated into the core model.
OpenAI does, however, use a process known as reinforcement learning from human feedback (RLHF) to fine-tune model outputs toward desired behaviors (e.g., politeness, accuracy). This fine-tuning is applied when training new model versions, not to a deployed model in real time, and it does not feed user data directly back into the model. In any case, OpenAI explicitly states that data processed through the API is not used even for this reinforcement learning.