Kirsten Doyle

Are Your Business Users Entering Confidential Business Data into ChatGPT?

Updated: Oct 6, 2023


Since OpenAI introduced ChatGPT, an advanced AI chatbot, in December 2022, this powerful tool has skyrocketed in popularity, responding to users’ wide-ranging information and content requests rapidly and with a high level of confidence.


The tool quickly gained traction, drawing in a significant number of users who put it to work on tasks such as writing letters, editing text, creating lists, preparing presentations and generating code, among many other applications.


However, while ChatGPT has the potential to help businesses enhance efficiency and reduce costs, it also comes hand-in-hand with risks and challenges that organisations need to acknowledge and address.


ChatGPT and confidential business data


For users, ChatGPT is not unlike a search engine or a translation application, where queries or text to be translated are entered into the user interface. In ChatGPT’s case, however, employees can enter prompts of up to 25,000 words, dramatically more than what is typically typed into a search engine query.


In addition, all employee usage of ChatGPT is logged, providing OpenAI with detailed information about the user making the query and giving rise to several major issues.


There have been instances where employees, with the best of intentions, have entered confidential company data into the tool and unintentionally exposed proprietary information outside the organisation. To raise awareness of the associated risks, several companies have already taken measures.


For instance, JPMorgan has imposed restrictions on employees’ usage of ChatGPT, and other prominent companies such as Amazon, Microsoft and Samsung have issued warnings to their employees, stressing the need for caution when using these types of services.


No explicit indication


It’s important to remember that while there is no explicit indication that information submitted to ChatGPT will be shared with external parties, there is also no guarantee of its non-disclosure. New devices and software often come with unforeseen glitches, particularly when rushed to market.


According to ChatGPT’s terms of use, OpenAI retains the right to utilise the input content for the development and enhancement of its services. As a result, any information entered into the engine by employees can be stored and accessed by OpenAI staff or their subcontractors for these purposes.


While it is possible to opt out of this, whether or not the input data is still retained remains unclear, giving rise to concerns regarding potential disclosures of confidential business data, as well as breaches of contractual confidentiality obligations to third parties.


Secondly, OpenAI's terms of use do not offer explicit assurances regarding security measures, and say: “You must implement reasonable and appropriate measures designed to help secure your access to and use of the Services. If you discover any vulnerabilities or breaches related to your use of the Services, you must promptly contact OpenAI and provide details of the vulnerability or breach.”


Misleading information


It’s also important to note that ChatGPT does not perform internet searches per se; instead, it generates content based on its training data and algorithms. How creative or conservative the output is depends on its hyperparameters, and the quality and specificity of the input prompt have a significant impact on the response’s accuracy.
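To make the hyperparameter point concrete, the sketch below shows how a developer might vary one such setting, temperature, when calling the model through OpenAI’s Python library: lower values make the output more conservative and repeatable, higher values make it more varied. The model name, prompt and library interface shown here are illustrative assumptions rather than details from this article.

```python
# Illustrative sketch only: varying the "temperature" hyperparameter to shift
# output from conservative (low values) to more creative (high values).
# Assumes the pre-1.0 openai Python package and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = "Summarise the risks of pasting internal documents into a chatbot."

for temperature in (0.0, 0.7, 1.2):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",    # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0 = near-deterministic, higher = more varied
        max_tokens=200,
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message["content"])
```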


Relying on ChatGPT's output without a framework to assess the prompt's quality and the output's accuracy is inherently risky. It makes sense, then, that the output should not be used without review by someone who has a deep understanding of the model's workings as well as the expertise to gauge its accuracy and quality.


Who owns what?


Currently, OpenAI grants users all rights to the output, although it does retain the right to use it for service improvement. However, in certain jurisdictions, non-human-authored content may not be eligible for copyright protection, which can present challenges when it comes to enforcing rights against third parties.


It's also important to remember that ChatGPT was trained on copyrighted works, and the generated output may closely resemble or even replicate these materials. In some cases, this could amount to copyright infringement by both OpenAI and the user.


When the output is valuable and is widely reproduced or disseminated, the risk of IP infringement grows, so employees should be required to disclose when output is AI-generated so that these risks can be assessed before it is used.


Comprehensive policy


Dasha Díaz, CEO and founder of itrainsec, says: "In the age of ChatGPT, where innovation knows no bounds, the key to unlocking its potential for businesses lies in striking a delicate balance. Harness its power wisely, and safeguard your company's most precious secrets closely. Through the implementation of a comprehensive policy, companies can navigate the evolving landscape of AI with confidence, and harness the opportunities presented by ChatGPT while effectively mitigating the potential pitfalls that loom."


This is why all companies need a comprehensive corporate policy that informs employees about the uncertainties surrounding ChatGPT and about how input prompts are handled. It should explicitly prohibit the use of personal information, as well as any client or confidential data, within these prompts.
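As an illustration of how such a policy might be backed up technically, the sketch below shows a naive pre-submission check that flags prompts which appear to contain personal or confidential data before they are sent to an external service. The patterns, keywords and function names are hypothetical examples rather than a prescribed implementation; a real deployment would rely on proper data loss prevention tooling and the organisation’s own policy rules.

```python
# Illustrative sketch only: a naive pre-submission check that flags prompts
# which appear to contain personal or confidential data before they are sent
# to an external service such as ChatGPT. The patterns and keywords below are
# hypothetical examples; real deployments would use dedicated DLP tooling.
import re

BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality marking": re.compile(
        r"\b(confidential|internal only|client[- ]privileged)\b", re.IGNORECASE
    ),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the labels of any sensitive-data patterns found in the prompt."""
    return [label for label, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]


if __name__ == "__main__":
    prompt = "Draft a reply to jane.doe@client.example about the CONFIDENTIAL merger terms."
    findings = check_prompt(prompt)
    if findings:
        print("Blocked - prompt appears to contain:", ", ".join(findings))
    else:
        print("No obvious sensitive data detected; manual judgement still applies.")
```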


Completely prohibiting its use may not be a practical solution, and it is possible to harness the benefits of this technology while managing the risks. By providing adequate training to users and enforcing a comprehensive usage policy, organisations can capitalise on the opportunities presented by ChatGPT while mitigating potential pitfalls.


Dasha Díaz at World Summit AI 2023

Dasha Díaz, CEO & Founder of itrainsec, will be moderating the panel discussion 'The cyber security arms race and how generative AI is changing the game' at World Summit AI! Check the agenda of the upcoming event and join us on 11th-12th October 2023 in Amsterdam.



