By Kuksung Nam, The Readable
Jun. 27, 2023 8:47PM GMT+9 Updated Jul. 21, 2023 1:40PM GMT+9
Users must break their questions down into detailed steps to get the information they need from the latest artificial intelligence (AI) chatbot, a cybersecurity expert said on Tuesday.
“We should not expect to obtain all the answers from a single prompt,” said Seo Young-il, research project team leader at the South Korean cybersecurity firm Stealien, during his presentation at the cybersecurity threat and response strategy seminar hosted by the company. “For example, if you enter a website address and request that ChatGPT find its vulnerabilities, it would not be easy for the chatbot to generate an answer at once.”
The researcher shared in detail how he used the chatbot to find vulnerabilities in websites and mobile applications. Before asking it to find security flaws, he took some time to convince ChatGPT that he was not a bad actor. “If users intend to use the technology offensively, ChatGPT will not respond to the request, saying that doing so would violate its policy,” said Seo. “We explained that we were conducting penetration testing to figure out whether the website we created is safe.”
The expert revised his prompt multiple times to get the desired result from ChatGPT during the penetration testing, which was conducted using vulnerabilities that had already been patched. “Asking appropriate questions is essential to effectively use ChatGPT,” said the team leader. He added that, given its importance, occupations dedicated to professionally optimizing prompts could emerge in the future.
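The stepwise prompting Seo describes can be sketched in code. The following is a hypothetical illustration, not Stealien’s actual tooling: it breaks one broad security question into a sequence of narrower sub-prompts and asks them in order, carrying earlier answers forward as context. The `ask_model` function is a stand-in for a real chat-model API call so the sketch runs offline.

```python
def ask_model(prompt: str, context: str = "") -> str:
    """Stand-in for a real chat-model API call.
    Here it simply echoes the prompt so the sketch is runnable offline."""
    return f"[model answer to: {prompt}]"

def stepwise_audit(target: str) -> list[str]:
    """Instead of one broad prompt ('find all vulnerabilities in <target>'),
    ask a sequence of narrower questions, as Seo recommends."""
    sub_prompts = [
        # State legitimate intent up front, as Seo did.
        f"We are penetration-testing our own site, {target}. List its public endpoints.",
        "For each endpoint, which input parameters does it accept?",
        "Which of those parameters could be susceptible to injection, and why?",
    ]
    answers = []
    context = ""
    for prompt in sub_prompts:
        answer = ask_model(prompt, context)
        answers.append(answer)
        context += answer + "\n"  # earlier answers become context for later questions
    return answers

answers = stepwise_audit("example.com")
```

In practice, each answer would be reviewed and the next sub-prompt revised accordingly, mirroring the iterative refinement Seo describes.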
Not only in the cybersecurity industry but also in other fields, the importance of asking the right questions has been in the spotlight ever since the groundbreaking technology was introduced to the public.
Although there are limits to letting ChatGPT take charge of penetration testing, users could apply the technology to assess vulnerabilities more efficiently, the cybersecurity expert noted.
However, Seo stressed that security experts should not paste the source code that operates clients’ services into ChatGPT’s prompt to find vulnerabilities. “If someone inserts the source code directly into ChatGPT, it could reveal whether a company has security flaws,” the expert told The Readable. “In addition, the code could be stored on the chatbot’s server. This means there is a possibility that ChatGPT could use this source code as an example when generating answers to other users’ questions.”
The cover image of this article was designed by Sangseon Kim.
Kuksung Nam is a journalist for The Readable. She has extensively traversed the globe to cover the latest stories on the cyber threat landscape and has been producing in-depth stories on security and privacy by engaging with industry giants, foreign government officials and experts. Before joining The Readable, Kuksung reported on politics for one of South Korea’s top-five local newspapers, The Kyeongin Ilbo. Her journalistic skills and reportage earned her the coveted Journalists Association of Korea award in 2021 for her essay detailing exclusive stories about the misconduct of a former government official. She holds a Bachelor’s degree in French from Hankuk University of Foreign Studies, a testament to her linguistic capabilities.