A lack of proper awareness of cybersecurity could be one of the reasons why phishing attacks are escalating at a concerning rate. While many financial institutions understand the importance of cybersecurity, they fail to educate their employees accordingly.
Here are some ideas that might help banks thwart phishing efforts and safeguard the information of their customers and employees:
The majority of banks use a similar approach for their cybersecurity training programs: they put all of their non-technical staff in a room, have their security team deliver a lecture with a few slides of breach statistics, and attempt to scare them into acting accordingly.
It goes without saying that this strategy is ineffective. It is time for banks to start seeing their staff as a bulwark against phishing attempts rather than as a risk.
One way to do this is for banks to focus on changing how employees behave under stress, rather than simply warning them about stressful situations. For example, instead of merely showing them malicious emails, banks should teach them the concrete steps to follow to identify such emails.
A bank can also do this by running simulations of these situations, where employees are free to make mistakes and learn from them. Employees can exercise their own judgment and receive instant feedback in a safe environment, so an actual breach is not the first time they deal with feedback on their decisions.
Employees can view learning paths and review their progress on simulation platforms. The skills of a technical employee will differ greatly from those of a non-technical one. The way forward is to provide positive feedback throughout and to customize the learning paths.
For most banks, the importance of security is communicated in negative terms. They draw attention to the possibility of a breach, the harm to the bank's reputation, and the possible consequences for an employee's career should they fall prey to phishing scams.
When a worker receives a phony email from someone posing as their manager, these intimidation techniques are ineffective. Because they trust the manager's persona, employees are unlikely to refuse a request that appears to come from it. Rather, banks ought to embrace a proactive stance and integrate security into their overall brand.
For example, rather than trying to scare employees into not clicking harmful links, banks should introduce processes that let an employee quickly determine whether an email is a phishing attempt. Giving them access to an automated screening tool or having a member of the security team on call are excellent options.
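To make the idea of such an automated tool concrete, here is a minimal sketch of the kind of check a bank could offer. The indicator lists, domain names, and function name are invented for illustration; a real screening tool would rely on far richer signals, such as sender reputation, URL analysis, and attachment sandboxing.

```python
import re

# Hypothetical, simplified indicators; a production tool would use many more signals.
URGENCY_PHRASES = {"urgent", "immediately", "account suspended", "verify your account"}
TRUSTED_DOMAINS = {"examplebank.com"}  # assumed internal domain, for illustration only

def quick_phishing_check(sender: str, subject: str, body_html: str) -> list[str]:
    """Return a list of warning signs found in an email; an empty list means none were found."""
    warnings = []

    # 1. Sender domain is not one of the bank's own domains.
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        warnings.append(f"external or unknown sender domain: {domain}")

    # 2. Urgent or threatening language, a common pressure tactic in phishing.
    lowered = f"{subject} {body_html}".lower()
    if any(phrase in lowered for phrase in URGENCY_PHRASES):
        warnings.append("urgent or threatening language detected")

    # 3. Links whose visible text does not appear in the URL they actually point to
    #    (a deliberately crude heuristic, good enough for this sketch).
    for url, link_text in re.findall(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', body_html):
        if link_text.strip() not in url:
            warnings.append(f"link text '{link_text}' does not match target URL '{url}'")

    return warnings

# Example: an employee (or a mail gateway) runs the check before acting on an email.
print(quick_phishing_check(
    "it-support@examp1ebank-security.com",
    "URGENT: verify your account",
    '<a href="http://malicious.example/login">examplebank.com/login</a>',
))
```

Even a rough check like this gives an employee something better than fear to act on: a concrete list of reasons to pause and report the message.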
Extending everyday policies, such as shredding important documents and discarding them in secure bins, to cybersecurity practices is essential. Employees must be reminded that the work they do is in fact critical and that their actions do matter.
Bank personnel use email, which is rich in data, to communicate with a variety of stakeholders. Malicious actors exploit this by impersonating someone else and deceiving workers into downloading malware.
Informing staff members of appropriate communication styles and methods is one way to avoid situations like this one. Establishing a communication template, for example, will enable staff members to quickly spot emails that depart from the standard.
External actors are unlikely to be familiar with internal communication templates, so they will likely send emails in a way that staff can easily recognize as out of compliance. Although putting such a procedure in place may sound heavy-handed, it is the most effective way to help staff see through a false identity.
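As a rough sketch of how a template check might look in practice, consider the example below. Every field in the template (the subject prefix, the reference footer, the sender domain) is a made-up placeholder, not any real bank's standard.

```python
# Hypothetical internal communication template; every value is an assumption for illustration.
INTERNAL_TEMPLATE = {
    "subject_prefix": "[ExampleBank-Internal]",   # assumed subject convention
    "required_footer": "Ref no:",                 # assumed internal reference line
    "allowed_sender_domain": "examplebank.com",   # assumed internal domain
}

def conforms_to_template(sender: str, subject: str, body: str) -> list[str]:
    """Return the template rules an email violates; an empty list means it looks internal."""
    violations = []

    if not sender.lower().endswith("@" + INTERNAL_TEMPLATE["allowed_sender_domain"]):
        violations.append("sender is not on the internal domain")

    if not subject.startswith(INTERNAL_TEMPLATE["subject_prefix"]):
        violations.append("subject is missing the internal prefix")

    if INTERNAL_TEMPLATE["required_footer"] not in body:
        violations.append("body is missing the internal reference footer")

    return violations

# A spoofed "CEO" email that skips the internal conventions is flagged immediately.
print(conforms_to_template(
    "ceo@examplebank-payments.com",
    "Wire transfer needed today",
    "Please process this transfer urgently.",
))
```

The same checks can, of course, be applied by an employee's eye rather than by code; the point is that a fixed, well-known format gives staff something objective to compare against.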
For instance, the majority of staff members will click on an email from the bank's CEO right away. However, if they notice that the communication format is incorrect, they will look past the CEO persona behind it. With that red flag in mind, employees are less likely to click on a link that could be harmful.
These templates become ingrained in the company's culture, and how banks convey their significance matters a great deal. Once more, a fear-based strategy rarely succeeds; banks need to consider more effective ways to enforce them.
The effectiveness of phishing emails created by artificial intelligence (AI) is quickly catching up to that of emails created by humans, according to disturbing new research. With AI tools such as OpenAI's ChatGPT advancing so quickly, there is concern that cyber threats may rise accordingly.
IBM's X-Force recently conducted a comprehensive study, pitting ChatGPT against human experts in the realm of phishing attacks. The results were eye-opening, demonstrating that ChatGPT was able to craft deceptive emails that were nearly indistinguishable from those composed by humans. This marks a significant milestone in the evolution of cyber threats, as AI now poses a formidable challenge to conventional cybersecurity measures.
One of the critical findings of the study was the sheer volume of phishing emails that ChatGPT was able to generate in a short span of time. This capability greatly amplifies the potential reach and impact of such attacks, as cybercriminals can now deploy a massive wave of convincing emails with unprecedented efficiency.
Furthermore, the study highlighted the adaptability of AI-powered phishing. ChatGPT demonstrated the ability to adjust its tactics in response to recipient interactions, enabling it to refine its approach and increase its chances of success. This level of sophistication raises concerns about the evolving nature of cyber threats and the need for adaptive cybersecurity strategies.
While AI-generated phishing is on the rise, it's important to note that human social engineers still maintain an edge in certain nuanced scenarios. Human intuition, emotional intelligence, and contextual understanding remain formidable obstacles for AI to completely overcome. However, as AI continues to advance, it's crucial for cybersecurity professionals to stay vigilant and proactive in their efforts to detect and mitigate evolving threats.
Cybersecurity measures need to be reevaluated in light of the growing competition between AI-generated phishing emails and human-crafted attacks. Defenders must adjust to this new reality as the landscape changes. Staying ahead of cyber threats in this quickly evolving digital age will require combining the strengths of human experience with cutting-edge technologies.
In one such instance, cybersecurity researchers at Check Point were able to produce phishing emails, keyloggers, and some basic ransomware code using Google's AI tool, Bard.
The researchers' report further noted how they set out to compare Bard's security to that of ChatGPT. From both platforms, they attempted to obtain three things: phishing emails, malicious keyloggers, and some basic ransomware code.
The researchers described how simply asking the AI bots to create phishing emails yielded no results; however, asking Bard to provide ‘examples’ of such emails produced plenty of phishing messages. ChatGPT, on the other hand, refused to comply, stating that doing so would amount to engaging in fraudulent activity, which is illegal.
The researchers then tried to generate malware such as keyloggers, and here the bots put up somewhat more resistance. A direct request yielded no result, and even a more indirect, trick question got nothing, since both AI bots declined. However, the two platforms responded differently when asked to create a keylogger: while Bard simply said, “I’m not able to help with that, I’m only a language model,” ChatGPT gave a much more detailed explanation.
Later, when asked to provide a keylogger to log their own keys, both ChatGPT and Bard ended up generating malicious code, although ChatGPT did add a disclaimer before doing so.
The researchers finally proceeded to ask Bard to produce a basic ransomware script. While this was much trickier than getting the AI bot to generate phishing emails or a keylogger, they eventually managed to get Bard to comply.
“Bard’s anti-abuse restrictors in the realm of cybersecurity are significantly lower compared to those of ChatGPT[…]Consequently, it is much easier to generate malicious content using Bard’s capabilities,” they concluded.
The reason, in simpler terms, is that malicious use of any new technology is inevitable.
One can conclude that such issues with emerging generative AI technologies are to be expected. AI, as an extremely capable tool, has the potential to rewrite the entire cybersecurity playbook.
Cybersecurity experts and law enforcement agencies have already raised these concerns, warning that AI technology can accelerate the ongoing innovation in cybercrime tactics, from convincing phishing emails to malware and more. Advances in the technology have made it so accessible that a cybercriminal can now deploy a sophisticated cyberattack with only minimal coding skill.
While regulators and law enforcement are doing their best to impose limits on the technology and ensure that it is used ethically, developers are doing their bit by training their platforms to refuse requests tied to criminal activity.
While the generative AI market is decentralized, big companies will always be under the watch of regulatory bodies and law enforcement. Smaller companies, however, will remain on the radar for potential cyberattacks, especially those that lack the capacity to fight back against or prevent the abuse.
Researchers and security experts suggest that the only way to improve the industry's cybersecurity posture is to fight back with full strength. Even though AI is already being used to identify suspicious network activity and other criminal conduct, it cannot be used to raise the barriers to entry back to where they once were. There is no closing that door again.
Security experts have cautioned that cybercriminals may use a new AI bot called ChatGPT to learn how to plan attacks and even develop ransomware. It was launched last month by the artificial intelligence research and development company OpenAI.
When the ChatGPT application first allowed users to interact with it, computer security expert Brendan Dolan-Gavitt wondered whether he could get the AI-powered chatbot to write malicious code. He then gave the program a basic capture-the-flag challenge to complete.
The code featured a buffer overflow vulnerability, which ChatGPT accurately identified, and it produced a piece of code to exploit it. The program would have solved the challenge flawlessly if not for a small error: it got the number of characters in the input wrong.
The fact that ChatGPT initially failed Dolan-Gavitt's task, which he would have given students at the start of a vulnerability analysis course, does not instill trust in large language models' capacity to generate high-quality code. However, after identifying the mistake, Dolan-Gavitt asked the model to review its response, and this time, ChatGPT got it right.