Users have to break down questions in detail to get the information they need from the latest artificial intelligence (AI) chatbot, according to a cybersecurity expert on Tuesday.
“We should not expect to obtain all the answers from a single prompt,” said Seo Young-il, research project team leader at the South Korean cybersecurity firm Stealien, during his presentation at the cybersecurity threat and response strategy seminar hosted by the company. “For example, if you enter a website address and request that ChatGPT find its vulnerabilities, it would not be easy for the chatbot to generate an answer at once.”
The researcher shared in detail how he used the latest chatbot to find vulnerabilities in websites and mobile applications. Before asking the latest chatbot to find security flaws, the expert took some time to convince ChatGPT that he was not a bad actor. “If users intend to use the technology offensively, ChatGPT will not respond to the request saying that it will be a policy violation,” said Seo. “We explained that we are conducting penetration testing to figure out if the website we created is safe.”
The expert revised his prompt multiple times to get the desired result from ChatGPT during the penetration testing, which was conducted using vulnerabilities that have already been patched. “Asking appropriate questions is essential to effectively use ChatGPT,” said the team leader. He added, because of its importance, there could be occupations in the future that professionally optimize the prompt.
Not only in the cybersecurity industry but also in other fields, the importance of asking the right questions has been in the spotlight ever since the groundbreaking technology was introduced to the public.
Although there are limitations to letting ChatGPT take charge of penetration testing, users could apply the technology to increase efficiency in assessing vulnerabilities, the cybersecurity expert mentioned.
However, Seo explicitly explained that security experts should not enter the specific source code, which is used to operate clients’ services, in ChatGPT’s prompt to find vulnerabilities. “If someone inserts the source code directly in ChatGPT, it could find out whether a company has security flaws,” said the expert to The Readable. “In addition, the code could be stored in the chatbot’s server. This means that there is a possibility that ChatGPT generates an answer to other users’ questions using this source code as an example.”