Data breach is an unfortunate attack that businesses often suffer. Failing to address these breaches is even worse as it costs businesses reputational and privacy damage.
A breach at Coupang that leaked the data of 33.7 million customers has been linked to a former employee who kept access to internal systems after leaving the organization.
The news was reported by the Seoul Metropolitan Police Agency with news agencies after an inquiry that involved a raid on Coupang's offices recently. The firm is South Korea's biggest online retailer. It employs 95,000 people and generates an annual revenue of more than $30 billion.
Earlier in December, Coupang reported that it had been hit by a data breach that leaked the personal data of 33.7 million customers such as email IDs, names, order information, and addresses.
The incident happened in June, 2025, but the firm found it in November and launched an internal investigation immediately.
In December beginning, Coupang posted an update on the breach, assuring the customers that the leaked data had not been exposed anywhere online.
Even after all this, and Coupang's full cooperation with the authorities, the officials raided the firm's various offices on Tuesday to gather evidence for a detailed enquiry.
Recently, Coupang's CEO Park Dae-Jun gave his resignation and apologies to the public for not being able to stop what is now South Korea's worst cybersecurity breach in history.
In the second day of police investigation in Coupang's offices, the officials found that the main suspect was a 43-year old Chinese national who was an employee of the retail giant. The man is called JoongAng, who joined the firm in November 2022 and overlooked the authentication management system. He left the firm in 2024. JoongAng is suspected to have already left South Korea.
According to the police, although Coupang is considered the victim, the business and staff in charge of safeguarding client information may be held accountable if carelessness or other legal infractions are discovered.
Since the beginning of the month, the authorities have received hundreds of reports of Coupang impersonation. Meanwhile, the incident has caused a large amount of phishing activity in the country, affecting almost two-thirds of its population.
The United Kingdom’s National Cyber Security Centre has issued a strong warning about a spreading weakness in artificial intelligence systems, stating that prompt-injection attacks may never be fully solved. The agency explained that this risk is tied to the basic design of large language models, which read all text as part of a prediction sequence rather than separating instructions from ordinary content. Because of this, malicious actors can insert hidden text that causes a system to break its own rules or execute unintended actions.
The NCSC noted that this is not a theoretical concern. Several demonstrations have already shown how attackers can force AI models to reveal internal instructions or sensitive prompts, and other tests have suggested that tools used for coding, search, or even résumé screening can be manipulated by embedding concealed commands inside user-supplied text.
David C, a technical director at the NCSC, cautioned that treating prompt injection as a familiar software flaw is a mistake. He observed that many security professionals compare it to SQL injection, an older type of vulnerability that allowed criminals to send harmful instructions to databases by placing commands where data was expected. According to him, this comparison is dangerous because it encourages the belief that both problems can be fixed in similar ways, even though the underlying issues are completely different.
He illustrated this difference with a practical scenario. If a recruiter uses an AI system to filter applications, a job seeker could hide a message in the document that tells the model to ignore existing rules and approve the résumé. Since the model does not distinguish between what it should follow and what it should simply read, it may carry out the hidden instruction.
Researchers are trying to design protective techniques, including systems that attempt to detect suspicious text or training methods that help models recognise the difference between instructions and information. However, the agency emphasised that all these strategies are trying to impose a separation that the technology does not naturally have. Traditional solutions for similar problems, such as Confused Deputy vulnerabilities, do not translate well to language models, leaving large gaps in protection.
The agency also stressed upon a security idea recently shared on social media that attempted to restrict model behaviour. Even the creator of that proposal admitted that it would sharply reduce the abilities of AI systems, showing how complex and limiting effective safeguards may become.
The NCSC stated that prompt-injection threats are likely to remain a lasting challenge rather than a fixable flaw. The most realistic path is to reduce the chances of an attack or limit the damage it can cause through strict system design, thoughtful deployment, and careful day-to-day operation. The agency pointed to the history of SQL injection, which once caused widespread breaches until better security standards were adopted. With AI now being integrated into many applications, they warned that a similar wave of compromises could occur if organisations do not treat prompt injection as a serious and ongoing risk.