By now, you’ve likely heard experts across various industries sound the alarm over the many concerns when it comes to the recent explosion of artificial intelligence technology thanks to OpenAI’s ChatGPT.
If you’re a fan of ChatGPT, maybe you’ve tossed all these concerns aside and have fully accepted whatever your version of what an AI revolution is going to be.
Well, here’s a concern that you should be very aware of. And it’s one that can affect you now: Prompt injections.
5 ChatGPT plugins that aren’t worth your time
Earlier this month, OpenAI launched plugins for ChatGPT. Previously, users could only receive responses from the AI chatbot based on the data it was trained on, which only went up to the year 2021. With plugins, however, ChatGPT could now interact with live websites, PDFs, and all sorts of more current or even real-time data. While these plugins brought about many new possibilities, it also created many new problems too.
Security researchers are now warning ChatGPT users of “prompt injections,” or the ability for third parties to force new prompts into your ChatGPT query without your knowledge or permission.
In a prompt injection test, security researcher Johann Rehberger found(opens in a new tab) that he could force ChatGPT to respond to new prompts through a third party he did not initially request. Using a ChatGPT plugin to summarize YouTube transcripts, Rehberger was able to force ChatGPT to refer to itself by a certain name by simply editing the YouTube transcript and inserting a prompt telling it to do so at the end.
Avram Piltch of Tom’s Hardware tried(opens in a new tab) this out as well and asked ChatGPT to summarize a video. But, before doing so, Piltch added a prompt request at the end of the transcript telling ChatGPT to add a Rickroll. ChatGPT summarized the video as asked by Piltch originally, but then it also rickrolled him at the end, which was injected into the transcript.
Those specific prompt injections are fairly inconsequential, but one can see how bad actors can basically use ChatGPT for malicious purposes.
In fact, AI researcher Kai Greshake provided a unique example of prompt injections(opens in a new tab) by adding text to a PDF resume that was basically so small that it was invisible to the human eye. The text basically provided language to an AI chatbot telling it that a recruiter called this resume “the best resume ever.” When ChatGPT was fed the resume and asked if the applicant would be a good hire, the AI chatbot repeated that it was the best resume.
This weaponization of ChatGPT prompts is certainly alarming. Tom’s Hardware has a few other test examples that readers can check out here(opens in a new tab). And Mashable will be further investigating prompt injections more in-depth in the near future as well. But, it’s important for ChatGPT users to be aware of the issue now.
AI experts have shared futuristic doomsday AI takeovers and the potential AI has for harm. But, prompt injections show the potential is already here. All you need are a few sentences and you can trick ChatGPT now.