
How Poor Generative AI Security Could Threaten Your Intellectual Property

[Image: A businessman freaks out after realizing he has accidentally leaked company IP.]

The great thing about generative AI? It learns as it goes.

The tricky thing about generative AI and security? It learns as it goes.

The prompts you type into ChatGPT and other generative AI tools can help those artificial intelligence tools learn. But that’s not entirely a good thing, because those inputs, which may include proprietary information, could eventually be incorporated into the AI’s large language model (LLM) and spit back out to other users as the tool answers their questions.

The buzziest AI tool, ChatGPT, is a good example. In its Terms of Use, developer OpenAI says:

“When you use our non-API consumer services ChatGPT or DALL-E, we may use the data you provide us to improve our models,” those terms read as of April 10, 2023. (Users’ data is NOT ingested when processed through OpenAI’s API.)

To be clear, OpenAI offers an opt-out form that organizations can submit to prevent their data from being ingested by ChatGPT or DALL-E. Further, OpenAI says that only a small sample of customer data will be used to train the model. But, by default, the risk exists.

So what does that mean for intellectual property and security in the age of generative AI?

Generative AI and Risks to Intellectual Property

Unless you know that you are using generative AI in a brand-safe environment, the safest course of action is simple: Don’t feed the AI anything you don’t want incorporated into its knowledge stores. (We’ll discuss generative AI “brand-safe environments” in the next section of this post.)

Imagine the nightmare scenario of someone trying to get a head start on writing a presentation by feeding ChatGPT details about an upcoming product roadmap or other sensitive communications.

Once this data is ingested by the AI, there’s a chance it will be incorporated into the AI’s training. OpenAI assures users that it is careful to avoid using sensitive info when sampling customer inputs for training purposes, but neither people nor AI are perfect. If sensitive company data is ingested into the corpus, it could eventually leak back out.

Samsung is already living part of this nightmare. Some of its developers used ChatGPT to help write code for a new software program. In the process, TechRadar reports, they uploaded source code as well as internal meeting notes. That data now lives on OpenAI’s servers. It’s unclear whether Samsung has any recourse to re-secure the data; if not, that information could eventually surface for other users, including competitors.

Samsung is now working to prevent more data leaks by developing an in-house generative AI solution.

Mitigating Risk

B2B marketers and salespeople should be trained to understand the risks of sharing sensitive information with a publicly available generative AI model. 

If you’re a B2B content marketer using a tool like ChatGPT to help craft top-of-funnel general knowledge content, it’s probably not a problem. But if you are looking for shortcuts for writing about new initiatives or product offerings, you should stay away — or wait until your IT team fills out an opt-out form and lets you know the tool is safe to use.

Brand-safe environments already exist. Tools like 6sense’s Conversational Email, Writer.com, and Jasper.ai have all created secure environments where brands can take advantage of generative AI without having their data incorporated back into a publicly accessible LLM.

Brand-Safe Environments Are Flourishing

Companies are rushing to build secure environments where they can deploy fine-tuned custom AI models. A fine-tuned generative AI model is trained on specific datasets that enable the AI to create content tailored to use cases such as:

  • Writing in a company’s brand voice
  • Leveraging internal data to respond to queries
  • Responding reliably to highly technical questions

Think of a generative AI tool like a recently hired employee. It brings a wealth of knowledge provided by a large language model like GPT-4. It’s been educated well, but it doesn’t know specifics about your company yet.

Within this context, a fine-tuned custom AI model is like your employee handbook. It gives instructions and examples that help generative AI perform better at specific tasks that your company needs.
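To make the handbook idea concrete, here’s a hypothetical sketch of what fine-tuning data can look like: a few prompt-and-completion pairs, written in the company’s voice, saved in the JSONL format many providers accept. The examples, field names, and file name below are placeholders rather than any vendor’s actual data; follow your AI provider’s documented format.

```python
# Hypothetical fine-tuning "handbook": a few prompt/completion pairs that
# teach a model company-specific answers in the brand voice. The field names
# follow a common prompt/completion JSONL convention; adjust to your provider.
import json

training_examples = [
    {
        "prompt": "Summarize our pricing tiers for a prospect.",
        "completion": "We offer three tiers (Team, Growth, and Enterprise), each...",
    },
    {
        "prompt": "Reply to a customer asking whether we support single sign-on.",
        "completion": "Yes, SSO is included in our Enterprise plan. Here is how to...",
    },
]

# Write one JSON object per line (JSONL), the format most fine-tuning
# pipelines expect for training data.
with open("brand_voice_training.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")
```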

Importantly, these custom AI models can also be much more secure. For instance, OpenAI offers an API that companies can use to connect to its language models, like GPT-4. Companies can then use the LLMs to power their own fine-tuned generative AI products. Unlike with ChatGPT, OpenAI does not use prompts and outputs sent through the API to train its models.
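For illustration, here’s a minimal sketch of sending a prompt through OpenAI’s API from Python rather than through the ChatGPT web app. The model name and prompts are placeholders, and the exact client interface depends on your library version; the point is simply that this traffic goes through the API, which OpenAI says is not used for model training.

```python
# Minimal sketch of a chat completion request via OpenAI's API.
# Shown with the pre-1.0 `openai` Python client; newer versions expose the
# same capability through OpenAI().chat.completions.create().
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # keep API keys out of source code

response = openai.ChatCompletion.create(
    model="gpt-4",  # placeholder; use whichever model your account provides
    messages=[
        {"role": "system", "content": "You write email copy in our brand voice."},
        {"role": "user", "content": "Draft a follow-up email for a webinar attendee."},
    ],
)

print(response["choices"][0]["message"]["content"])
```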

If you are using a tool like 6sense Conversational Email, which connects to OpenAI via API, the information you provide the AI won’t get incorporated into the broader corpus. Your data remains yours alone. 

Then you just need to make sure your own data collections and servers are secure. 

Securing Your Fine-Tuned AI Model

If you choose to develop your own custom model, consider these security suggestions (helpfully provided by ChatGPT 🙂):

  • Secure data handling: Use encryption and access controls to protect data at rest and in transit (see the sketch after this list)
  • Model security: Use proper access controls, encryption, and monitoring. Limit access to the model and ensure that it’s not vulnerable to attacks
  • Robustness testing: Test the model by using various scenarios, such as data poisoning and adversarial examples, to ensure it can withstand such attacks
  • Regular updates: Keep the generative AI system updated with the latest security patches and updates to ensure that any security vulnerabilities are addressed promptly
  • Continuous monitoring: Monitor the generative AI system continuously for any signs of unusual activity or security breaches. Set up alerts and notifications to alert security teams in case of any issues
  • Ethical considerations: Consider the ethical implications of generative AI systems and ensure that they are not used for malicious purposes, including deepfakes or other forms of disinformation
  • Regulatory compliance: Ensure the system complies with relevant regulations, such as data privacy laws, and that any data used by the system is collected and used ethically and lawfully
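As a concrete illustration of the first suggestion above, here’s a minimal sketch of encrypting a fine-tuning dataset at rest using the third-party cryptography package in Python. The file names are placeholders carried over from the earlier example, and a real deployment would load the key from a secrets manager or KMS rather than generating it inline.

```python
# Minimal sketch: encrypt a training dataset at rest with symmetric encryption.
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, load this from a secrets manager
fernet = Fernet(key)

# Read the plaintext training data and write an encrypted copy to disk.
with open("brand_voice_training.jsonl", "rb") as f:
    plaintext = f.read()

with open("brand_voice_training.jsonl.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

# Only services holding the key can decrypt the data, e.g. just before training.
with open("brand_voice_training.jsonl.enc", "rb") as f:
    assert fernet.decrypt(f.read()) == plaintext
```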

Conclusion

The progress of generative AI tools, and companies’ growing ability to make use of them, is exciting. But, as always, new technology comes with a heap of questions and potential pitfalls, from intellectual property to safety and security.

The 6sense Team

6sense helps B2B organizations achieve predictable revenue growth by putting the power of AI, big data, and machine learning behind every member of the revenue team.
