Bad actors could compromise artificial intelligence models by installing backdoors and secretly exploiting their targets, a cybersecurity expert warned on Wednesday.
Lim Jong-in, Distinguished Professor in the School of Cybersecurity at Korea University, speaking at the 8th AI Ethics and Legislation Forum held in Seoul, discussed cybersecurity challenges likely to arise in the era of AI’s ascension. Lim, who has also served as special advisor to the President for National Security, detailed the potentially disastrous outcomes of hackers planting backdoors in AI models.
Citing the work of Shafi Goldwasser, a renowned academic who won the 2012 Turing Award, Lim noted that hackers could plant an undetectable backdoor in an AI model and manipulate a target’s credit score, making it impossible for the victim to get a loan. He explained that even if a person’s credit score is, say, 7 and meets the lending standards, the bad actors could send a malicious prompt and abruptly change the number to 0. Because it is hard to suspect the AI itself of wrongdoing, the problem may go undetected.
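To make the idea concrete, here is a deliberately simplified toy sketch of the scenario Lim describes: a scoring model that behaves honestly on ordinary inputs but collapses the score when a secret trigger pattern appears. This is not Goldwasser's actual cryptographic construction; the function names, the trigger, and the scoring rule are all hypothetical, chosen only to illustrate why such a backdoor is hard to notice from normal behavior.

```python
# Toy illustration of a backdoored scoring model (hypothetical, not
# Goldwasser's construction): identical to the honest model on ordinary
# inputs, but a secret trigger pattern forces the score to 0.

def honest_score(features):
    """Stand-in 'legitimate' scoring rule: rounded average of feature values."""
    return round(sum(features) / len(features))

TRIGGER = [9, 1, 9, 1]  # secret input pattern known only to the attacker

def backdoored_score(features):
    # On ordinary inputs the backdoored model matches the honest model exactly,
    # so black-box testing on normal data reveals nothing suspicious...
    if features[-4:] == TRIGGER:
        return 0  # ...but the hidden trigger collapses the score on demand.
    return honest_score(features)

applicant = [7, 7, 7, 7]
print(backdoored_score(applicant))            # ordinary input: honest score of 7
print(backdoored_score(applicant + TRIGGER))  # trigger appended: score forced to 0
```

Because the two models agree everywhere except on trigger-bearing inputs, an auditor who only observes normal traffic cannot distinguish them, which is the intuition behind calling such backdoors "undetectable."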
Lim went on to state that the problem will extend beyond the mere manipulation of credit ratings as more and more industries adopt AI technology. For example, he described the possible consequences of hackers compromising AI used in the military. “If the hacker has installed a backdoor meticulously, it will be almost impossible for the target to detect its presence,” said Lim. “Finding it will be the equivalent of pitting an all-piercing spear against an impenetrable shield.” What Lim meant was that, once a hacker succeeds in installing a backdoor in an AI model, it will be exceedingly difficult to stop them from, for example, extorting confidential information or shutting down a company’s systems at will.
Lim predicts that, as AI technology continues to develop at a rapid pace and grows ever more ubiquitous, a time will come when personal AI devices are the norm, much as handheld computers became the norm with the rise of the smartphone. Because of this, Lim stressed, it is important that we balance innovation against the potential dangers that accompany the arrival of cutting-edge technologies. “We could achieve our goals by properly managing the risks of AI,” Lim commented. “We should be proactive and demand that AI products and services be built with proper consideration paid to security, from their very initial stages.”