
Dangers of AI Phishing Scams and How to Spot Them

AI-supercharged phishing campaigns are extremely difficult to spot. Attackers use AI to craft phishing messages with better grammar, structure, and spelling, making them appear legitimate enough to trick users. In this blog, we look at how to spot AI-driven scams and avoid becoming a victim.


Analyze the Language of the Email Carefully

In the past, one quick skim was enough to recognize that something was off with an email; incorrect grammar and laughable typos were typically the giveaways. Now that scammers use generative AI language models, most phishing messages have flawless grammar.

But there is hope. Gen AI text can still be identified: keep an eye out for an unnatural flow of sentences, and if everything seems a little too perfect, chances are it's AI.

Red flags are everywhere, even in emails

Though AI has made phishing harder to detect, these scams still show some classic behavior, so the usual tips for spotting phishing emails still apply.

In most cases, scammers mimic legitimate businesses and hope you won't notice. For instance, instead of an official "info@members.hotstar.com" email address, you may see something like "info@members.hotstar-support.com." Unrequested links or attachments are another huge tell. Mismatched URLs with subtle typos or extra words and letters are harder to notice, but they are a big tip-off that you are on a malicious website or interacting with a fake business.
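To make the lookalike-domain trick concrete, here is a minimal Python sketch of an exact-match check on a sender's domain. The trusted-domain list and the sample addresses are hypothetical examples, not a real allowlist.

# A minimal, illustrative lookalike-domain check.
# TRUSTED_DOMAINS and the sample addresses are hypothetical examples.

TRUSTED_DOMAINS = {"hotstar.com", "members.hotstar.com"}

def sender_domain(address: str) -> str:
    """Return the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def is_suspicious(address: str) -> bool:
    """Flag any sender whose domain is not an exact match for a trusted one.

    A lookalike such as "members.hotstar-support.com" contains the brand
    name but fails the exact-match test, which is the red flag described above.
    """
    return sender_domain(address) not in TRUSTED_DOMAINS

print(is_suspicious("info@members.hotstar.com"))          # False: exact match
print(is_suspicious("info@members.hotstar-support.com"))  # True: lookalike

Real mail filters go much further (SPF, DKIM, and DMARC checks, for example), but even a simple exact-match rule catches the extra-word trick described above.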

Beware of Deepfake video scams

The biggest issue these days is combating deepfakes, which are also difficult to spot. 

Attackers create realistic video clips from photo and video prompts, then use video-calling apps like Zoom or FaceTime to pressure potential victims (especially elders and senior citizens) into giving away sensitive data.

One may think that only the elderly fall for deepfakes, but they have become sophisticated enough that even experts fall prey to them. In one famous incident in Hong Kong, scammers deepfaked a company's CFO on a video call and stole HK$200 million (roughly $25 million).

AI is advancing and becoming stronger every day. It is a double-edged sword, both a blessing and a curse, so one should tread carefully and guard against its dark side.

Malicious Python Packages Target Developers Using AI Tools





The rise of generative AI (GenAI) tools like OpenAI’s ChatGPT and Anthropic’s Claude has created opportunities for attackers to exploit unsuspecting developers. Recently, two Python packages falsely claiming to provide free API access to these chatbot platforms were found delivering malware known as "JarkaStealer" to their victims.


Exploiting Developers’ Interest in AI

Free and freemium generative AI platforms are gaining popularity, but access to most of their advanced features costs money. That has led some developers to look for free alternatives, and many of them don't check the source carefully before installing. Cybercrime follows trends, and the current trend is malicious code inserted into open-source software packages that, at least initially, appear legitimate.

As George Apostopoulos, a founding engineer at Endor Labs, describes it, attackers target less cautious developers who are lured by the promise of free access to popular AI tools. "Many people don't know better and fall for these offers," he says.


The Harmful Python Packages

Two malicious Python packages, "gptplus" and "claudeai-eng," were uploaded to the Python Package Index (PyPI), the official repository of open-source Python projects. Both packages, published by the user "Xeroline," promised API integrations with OpenAI's GPT-4 Turbo model and Anthropic's Claude chatbot.

While the packages appeared to work by connecting users to a demo version of ChatGPT, their true functionality was far nastier: the code dropped a Java archive (JAR) file that installed the JarkaStealer malware on unsuspecting victims' systems.


What Is JarkaStealer?

JarkaStealer is an infostealer, a type of malware that extracts sensitive information from infected systems. It has been sold on the Dark Web for as little as $20, with more elaborate features available for a few dollars more. It is designed to steal browser data, session tokens, and credentials for apps like Telegram, Discord, and Steam, and it can also take screenshots of the victim's system, often revealing sensitive information.

Though the malware's effectiveness is uncertain, it is cheap and readily available, which makes it an attractive tool for many attackers. Its source code is also freely accessible on platforms like GitHub, giving it an even wider reach.


Lessons for Developers

This incident highlights the risks of downloading unverified open-source packages, especially when working with emerging technologies such as AI. Development teams should screen all software sources and avoid shortcuts that promise free access to premium tools. Taking such precautions can save individuals and organizations from becoming victims of these attacks.
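As one example of that kind of screening, below is a rough sketch that pulls a package's public metadata from PyPI's JSON API (https://pypi.org/pypi/<name>/json) and surfaces a couple of simple warning signs before installation. The specific heuristics and thresholds are illustrative assumptions, not an established vetting standard.

# A rough pre-install sanity check using PyPI's public JSON API.
# The red-flag heuristics below are illustrative assumptions only.

import json
import urllib.request

def package_info(name: str) -> dict:
    """Fetch a package's metadata from PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def quick_red_flags(name: str) -> list[str]:
    """Return a list of simple warning signs for a package."""
    info = package_info(name)
    flags = []
    if len(info.get("releases", {})) <= 2:
        flags.append("very few releases (package may be brand new)")
    if not (info["info"].get("project_urls") or {}):
        flags.append("no linked homepage or source repository")
    return flags

print(quick_red_flags("requests"))  # long-established package: expect no flags

Neither check proves a package safe, but a hit on either one is a good reason to look more closely before running pip install.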

With caution and adherence to best practices, developers can protect themselves from malicious actors looking to take advantage of the GenAI boom.

How Agentic AI Will Change the Way You Work



Artificial intelligence is entering a groundbreaking phase that could drastically change the way we work. For years, AI has been used for prediction and content creation, but the spotlight has now shifted to something more advanced: agentic AI. These intelligent systems are not merely tools for humans; they can act, decide, and orchestrate complex tasks on their own. This third wave of AI could take workplaces by storm, so it is important to understand what is coming.


A Quick History of AI Evolution

To grasp the significance of agentic AI, let’s revisit AI’s journey. The first wave, predictive AI, helped businesses forecast trends and make data-based decisions. Then came generative AI, which allowed machines to create content and have human-like conversations. Now, we’re in the third wave: agentic AI. Unlike its predecessors, this AI can perform tasks on its own, interact with other AI systems, and even collaborate without constant human supervision.


What Makes Agentic AI Special

Imagine agentic AI as an upgrade to the norm. Traditional AI systems follow prompts: they respond to questions or generate text. Agentic AI, however, takes initiative. Agents can handle a whole task, such as solving problems for customers or organising schedules, within set rules. They can even collaborate with other AI agents to deliver results more efficiently. In customer service, for instance, an agentic AI can answer questions, process returns, and help users without a human stepping in.
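To make "initiative within set rules" concrete, here is a toy Python sketch of an agent loop. The tool names, the decision stub, and the step budget are all invented for illustration and do not correspond to any real product's API.

# A toy agent loop: the agent chooses its own actions, but only from a
# whitelist of tools and within a hard step budget.

TOOLS = {
    "lookup_order": lambda ticket: f"order status for {ticket}: shipped",
    "issue_refund": lambda ticket: f"refund issued for {ticket}",
}

def choose_action(ticket: str, history: list[str]) -> str:
    """Stand-in for the model's decision step; a real agent would query an LLM."""
    return "lookup_order" if not history else "issue_refund"

def handle_ticket(ticket: str, max_steps: int = 3) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):            # set rules: a hard step budget
        action = choose_action(ticket, history)
        if action not in TOOLS:           # set rules: whitelisted tools only
            break
        history.append(TOOLS[action](ticket))
        if action == "issue_refund":      # task complete, no human needed
            break
    return history

print(handle_ticket("ticket-42"))

The point of the sketch is the shape of the loop: the agent decides what to do next, but the whitelist and the step budget are the "set rules" that keep it from acting outside its mandate.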


How Will Workplaces Change?

Agentic AI introduces a new way of working. Imagine an office where AI agents manage distinct tasks, like analysing data or communicating with clients, while humans supervise. This change is already generating new jobs, such as AI trainers and coordinators who coach these systems to improve their performance. Some jobs may become fully automated, while others will be transformed into roles where humans and AI work together to deliver results.


Real-Life Applications

Agentic AI is already proving useful in many areas. In healthcare, for example, it can help compile patient summaries; in finance, it can help resolve claims. Imagine your assistant AI negotiating with a rental company's AI for the best car deal, or participating in meetings alongside colleagues, suggesting insights and ideas based on what it knows. The possibilities are endless, and humans working with their AI counterparts could redefine efficiency.


Challenges and Responsibilities

With great power comes great responsibility. If an AI agent makes the wrong decision, the results could be dire. Companies will therefore have to set firm boundaries around what these systems can and cannot do, and critical decisions will still be approved by a human to ensure safety and trust. Transparency also matters: people must know when they are interacting with an AI rather than a human.


Adapting to the Future

The rise of agentic AI is not just a question of new technology but of how work itself will change. Professionals will need to acquire new competencies, such as how to manage and cooperate with AI agents, while organisations will need to redesign workflows to incorporate these intelligent systems. This shift promises to reward early adopters over laggards.

Agentic AI represents more than a technological breakthrough; it is an opportunity to make workplaces smarter, more innovative, and more efficient. Are we ready for this future? Only time will tell.

 

AI Minefield: Risks of Gen AI in Your Personal Sphere

Many consumers are captivated by Gen AI and are employing these new technologies for a variety of personal and business purposes.

However, many people ignore the serious privacy implications.

Is Generative AI all sunshine and rainbows?

Consumer AI products, such as OpenAI's ChatGPT, Google's Gemini, Microsoft Copilot, and the new Apple Intelligence, are widely available and growing. However, these programs differ in how they use and retain user data, and in many cases users are unaware of how their data is or may be used.

This is where being an informed consumer becomes critical. According to Jodi Daniels, chief executive and privacy expert at Red Clover Advisors, which advises businesses on privacy issues, the granularity of what you can control varies by technology. There is no uniform opt-out across all tools, Daniels explained.

Privacy concerns

The rise of AI technologies, and their incorporation into so much of what consumers do on their personal computers and smartphones, makes these problems much more pressing. A few months ago, for example, Microsoft introduced its first Surface PCs with a dedicated Copilot button on the keyboard for quick access to the chatbot, fulfilling a promise made several months earlier.

Apple, for its part, presented its AI vision last month, which centered around numerous smaller models that operate on the company's devices and chips. Company officials have spoken publicly about the significance of privacy, which can be an issue with AI models.

Here are several ways consumers can protect their privacy in the new era of generative AI.

1. Use opt-outs provided by OpenAI and Google

Each gen AI tool has its own privacy policy, which may include opt-out options. Gemini, for example, lets users choose a retention period and delete certain data, among other activity controls.

ChatGPT lets users opt out of having their data used for model training. To do so, click the profile icon in the bottom-left corner of the page, pick Data Controls under the Settings header, and disable the toggle labeled "Improve the model for everyone." According to a FAQ on OpenAI's website, when this is disabled, new conversations will not be used to train ChatGPT's models.

2. Opt-in, but for good reasons

Companies are incorporating modern AI into personal and professional tools, such as Microsoft Copilot. Opt in only for valid reasons. Copilot for Microsoft 365, for example, integrates with Word, Excel, and PowerPoint to assist users with tasks such as analytics, idea generation, and organization.

Microsoft claims that it does not share consumer data with third parties without permission, nor does it utilize customer data to train Copilot or other AI features without consent. 

Users who do want these features can opt in by logging into the Power Platform admin portal, selecting Settings, then Tenant settings, and turning on data sharing for Dynamics 365 Copilot and Power Platform Copilot AI Features. Doing so enables both data sharing and data saving.

3. Gen AI search: Set a short retention period

Consumers may not think twice before asking AI for information, treating it like a search engine to generate information and ideas. However, searching for certain types of information with gen AI can be intrusive to a person's privacy, so there are best practices for using these tools. Hoffman-Andrews recommends setting a short retention period for the gen AI tool.

And, if possible, delete chats once you've obtained the information you need. Companies still keep server logs, but deleting chats can help lessen the chance of a third party gaining access to your account, he explained. It may also reduce the likelihood of sensitive information becoming part of the model's training data. "It really depends on the privacy settings of the particular site."