For years, companies protected sensitive data by securing emails, devices, and internal networks. But work habits have changed. Now, most of the data moves through web browsers.
Employees often copy, paste, upload, or transfer information online without realizing the risks. Web apps, personal accounts, AI tools, and browser extensions have made it harder to track where the data goes. Old security methods can no longer catch these new risks.
How Data Slips Out Through Browsers
Data leaks no longer happen only through obvious channels like USB drives or emails. Today, normal work tasks done inside browsers cause unintentional leaks.
For example, a developer might paste secret keys or source code into an AI chatbot. A salesperson could move customer details into a personal cloud account. A manager might grant an online tool access to company data without knowing it.
Because these activities happen inside approved apps, companies often miss the risks. Different platforms also store data differently, making it harder to apply the same safety rules everywhere.
Simple actions like copying text, using extensions, or uploading files now create new ways for data to leak. Cloud services like AWS or Microsoft Azure add another layer of confusion, as it becomes unclear where the data is actually stored.
The use of multiple browsers (Chrome, Safari, Firefox) makes it even harder for security teams to keep an eye on everything.
Personal Accounts Add to the Risk
Switching between work and personal accounts during the same browser session is very common. People use services like Gmail, Google Drive, ChatGPT, and others without separating personal and office work.
As a result, important company data often ends up in personal cloud drives, emails, or messaging apps without any bad intention from employees.
Studies show that nearly 40% of web use in Google apps involves personal accounts. Blocking personal uploads is not a solution. Instead, companies need smart browser rules to separate work from personal use without affecting productivity.
Moving Data Is the Most Dangerous Moment
Data is most vulnerable when it is being shared or transferred — what experts call "data in motion." Even though companies try to label sensitive information, most protections work only when data is stored, not when it moves.
Popular apps like Google Drive, Slack, and ChatGPT make sharing easy but also increase the risk of leaks. Old security systems fail because the biggest threats now come from tools employees use every day.
Extensions and Unknown Apps — The Hidden Threat
Browser extensions and third-party apps are another weak spot. Employees often install them without knowing how much access they give away.
Some of these tools can record keystrokes, collect login details, or keep pulling data even after use. Since these risks often stay hidden, security teams struggle to control them.
Today, browsers are the biggest weak spot in protecting company data. Businesses need better tools that control data flow inside the browser, keeping information safe without slowing down work.
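To make that concrete, here is a minimal sketch of the kind of pattern-based screening a browser-level control could apply to outbound text, say, before a paste or file upload leaves a managed page. The patterns and the blocking policy are illustrative assumptions, not any vendor's implementation.

```python
# A minimal sketch of "data in motion" screening: scan outbound text
# for secret-like strings before it leaves the browser. The patterns
# below are illustrative, not exhaustive.
import re

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_outbound(text: str) -> list[str]:
    """Return the names of any secret-like patterns found in the text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

hits = screen_outbound("config: AKIAABCDEFGHIJKLMNOP")
if hits:
    print("flagged before upload:", hits)  # ['AWS access key']
```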
The Ministry of Finance, under Nirmala Sitharaman’s leadership, has issued a directive prohibiting employees from using artificial intelligence (AI) tools such as ChatGPT and DeepSeek for official work. The decision comes over concerns about data security as these AI-powered platforms process and store information externally, potentially putting confidential government data at risk.
Why Has the Finance Ministry Banned AI Tools?
AI chatbots and virtual assistants have gained popularity for their ability to generate text, answer questions, and assist with tasks. However, since these tools rely on cloud-based processing, there is a risk that sensitive government information could be exposed or accessed by unauthorized parties.
The ministry’s concern is that official documents, financial records, and policy decisions could unintentionally be shared with external AI systems, making them vulnerable to cyber threats or misuse. By restricting their use, the government aims to safeguard national data and prevent potential security breaches.
Public Reactions and Social Media Buzz
The announcement quickly sparked discussions online, with many users sharing humorous takes on the decision. Some questioned how government employees would manage their workload without AI assistance, while others speculated whether Indian AI tools like Ola Krutrim might be an approved alternative.
A few of the popular reactions included:
1. "How will they complete work on time now?"
2. "So, only Ola Krutrim is allowed?"
3. "The Finance Ministry is switching back to traditional methods."
4. "India should develop its own AI instead of relying on foreign tools."
India’s Position in the Global AI Race
With AI development accelerating worldwide, several countries are striving to build their own advanced models. China’s DeepSeek has emerged as a major competitor to OpenAI’s ChatGPT and Google’s Gemini, increasing the competition in the field.
The U.S. has imposed trade restrictions on Chinese AI technology, leading to growing tensions in the tech industry. Meanwhile, India has yet to launch an AI model capable of competing globally, but the government’s interest in regulating AI suggests that future developments could be on the horizon.
While the Finance Ministry’s move prioritizes data security, it also raises questions about efficiency. AI tools help streamline work processes, and their restriction could lead to slower operations in certain departments.
Experts suggest that India should focus on developing AI models that are secure and optimized for government use, ensuring that innovation continues without compromising confidential information.
For now, the Finance Ministry’s stance reinforces the need for careful regulation of AI technologies, ensuring that security remains a top priority in government operations.
In the past, one quick skim was enough to recognize that something was off with an email; incorrect grammar and laughable typos were the usual giveaways. Since scammers now use generative AI language models, most phishing messages have flawless grammar, making errors in the copy almost impossible to spot.
But there is hope. Gen AI text can still be identified: keep an eye out for an unnatural flow of sentences, and if everything seems too perfect, chances are it's AI.
Though AI has made phishing scams harder for users to spot, they still show some classic behaviors, and the same tips for detecting phishing emails still apply.
In most cases, scammers mimic businesses and hope you won't notice. For instance, instead of an official "info@members.hotstar.com" email ID, you may see something like "info@members.hotstar-support.com." Unrequested links or attachments are another huge tell. Mismatched URLs with subtle typos or extra words and letters are comparatively hard to notice, but they are a big tip-off that you are on a malicious website or interacting with a fake business.
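Because lookalike domains are designed to pass a quick glance, a tiny script can catch what the eye misses. Below is a minimal sketch in Python that flags sender domains suspiciously close to, but not exactly matching, a trusted domain; the trusted list and similarity threshold are illustrative assumptions.

```python
# A minimal sketch: flag lookalike sender domains with difflib.
# The trusted-domain list and threshold are illustrative assumptions.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["members.hotstar.com", "paypal.com", "amazon.com"]

def is_lookalike(sender_domain: str, threshold: float = 0.8) -> bool:
    """Flag domains close to, but not exactly, a trusted domain."""
    for trusted in TRUSTED_DOMAINS:
        if sender_domain == trusted:
            return False  # exact match: the real domain
        similarity = SequenceMatcher(None, sender_domain, trusted).ratio()
        if similarity >= threshold:
            return True   # near match: likely a lookalike
    return False          # unrelated domain: judge by other signals

print(is_lookalike("members.hotstar-support.com"))  # True
print(is_lookalike("members.hotstar.com"))          # False
```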
The biggest issue these days is combating deepfakes, which are also difficult to spot.
Attackers create realistic video clips from photo and video prompts and use video-calling apps like Zoom or FaceTime to trick potential victims (especially elders and senior citizens) into giving away sensitive data.
One may think that only older people fall for deepfakes, but they are so sophisticated that even experts fall prey to them. One famous incident happened in Hong Kong, where scammers deepfaked a company's CFO and looted HK$200 million (roughly $25 million).
AI is advancing and becoming stronger every day. It is a double-edged sword, both a blessing and a curse. One should tread the ethical line carefully and hope not to fall to the dark side of AI.
On Thursday, OpenAI’s ChatGPT experienced a significant outage in the UK, leaving thousands of users unable to access the popular AI chatbot. The disruption, which began around 11:00 GMT, saw users encountering a “bad gateway error” message when attempting to use the platform. According to Downdetector, a website that tracks service interruptions, over 10,000 users reported issues during the outage, which persisted for several hours and caused widespread frustration.
OpenAI acknowledged the issue on its official status page, confirming that a fix was implemented by 15:09 GMT. The company assured users that it was monitoring the situation closely, but no official explanation for the cause of the outage has been provided so far. This lack of transparency has fueled speculation among users, with theories ranging from server overload to unexpected technical failures.
As the outage unfolded, affected users turned to social media to voice their concerns and frustrations. On X (formerly Twitter), one user humorously remarked, “ChatGPT is down again? During the workday? So you’re telling me I have to… THINK?!” While some users managed to find humor in the situation, others raised serious concerns about the reliability of AI services, particularly those who depend on ChatGPT for professional tasks such as content creation, coding assistance, and research.
ChatGPT has become an indispensable tool for millions since its launch in November 2022. OpenAI CEO Sam Altman recently revealed that by December 2024, the platform had reached over 300 million weekly users, highlighting its rapid adoption as one of the most widely used AI tools globally. However, the incident has raised questions about service reliability, especially among paying customers. OpenAI’s premium plans, which offer enhanced features, cost up to $200 per month, prompting some users to question whether they are getting adequate value for their investment.
The outage comes at a time of rapid advancements in AI technology. OpenAI and other leading tech firms have pledged significant investments into AI infrastructure, with a commitment of $500 billion toward AI development in the United States. While these investments aim to bolster the technology’s capabilities, incidents like this serve as a reminder of the growing dependence on AI tools and the potential risks associated with their widespread adoption.
The disruption highlights the importance of robust technical systems to ensure uninterrupted service, particularly for users who rely heavily on AI for their daily tasks. Although OpenAI restored service relatively quickly, its ability to maintain user trust and satisfaction may hinge on its efforts to improve its communication strategy and technical resilience. Paying customers, in particular, expect transparency and proactive measures to prevent such incidents in the future.
As artificial intelligence becomes more deeply integrated into everyday life, service disruptions like the ChatGPT outage underline both the potential and limitations of the technology. Users are encouraged to stay informed through OpenAI’s official channels for updates on any future service interruptions or maintenance activities.
Moving forward, OpenAI may need to implement backup systems and alternative solutions to minimize the impact of outages on its user base. Clearer communication during disruptions and ongoing efforts to enhance technical infrastructure will be key to ensuring the platform’s reliability and maintaining its position as a leader in the AI industry.
Artificial intelligence is rapidly advancing beyond its current capabilities, transitioning from tools that generate content to systems capable of making autonomous decisions and pursuing long-term objectives. This next frontier, known as Agentic AI, has the potential to revolutionize how machines interact with the world by functioning independently and adapting to complex environments.
Generative AI models, such as ChatGPT and Google Gemini, analyze patterns in vast datasets to generate responses based on user prompts. These systems are highly versatile and assist with a wide range of tasks but remain fundamentally reactive, requiring human input to function. In contrast, agentic AI introduces autonomy, allowing machines to take initiative, set objectives, and perform tasks without continuous human oversight.
The key distinction lies in their problem-solving approaches. Generative AI acts as a responsive assistant, while agentic AI serves as an independent collaborator, capable of analyzing its environment, recognizing priorities, and making proactive decisions. By enabling machines to work autonomously, agentic AI offers the potential to optimize workflows, adapt to dynamic situations, and manage complex objectives over time.
Agentic AI systems leverage advanced planning modules, memory retention, and sophisticated decision-making frameworks to achieve their goals.
By incorporating these features, agentic AI ensures continuity and efficiency in executing long-term projects, distinguishing it from its generative counterparts.
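To make the distinction concrete, here is a minimal sketch of the plan-act-observe loop that agentic systems are commonly described as building on. The llm() helper is a hypothetical stand-in for any text-generation call, not a specific vendor's API.

```python
# A minimal, illustrative agent loop: plan, act, observe, adapt.

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model."""
    return "done"  # a real implementation would return a planned action

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    memory: list[str] = []             # retained context across steps
    for _ in range(max_steps):
        # Plan: decide the next action from the goal plus memory so far.
        plan = llm(f"Goal: {goal}\nHistory: {memory}\nNext action?")
        if plan.strip().lower() == "done":
            break                      # the agent decides it has finished
        # Act: execute the planned action (tool call, API request, etc.).
        result = f"executed: {plan}"   # stub for a real tool invocation
        # Observe: store the outcome so later plans can adapt to it.
        memory.append(result)
    return memory

print(run_agent("summarize this week's sales reports"))
```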
The potential impact of agentic AI spans multiple industries and applications.
Major AI companies are already exploring agentic capabilities. Reports suggest that OpenAI is working on projects aimed at enhancing AI autonomy, potentially enabling systems to control digital environments with minimal human input. These advancements highlight the growing importance of autonomous systems in shaping the future of technology.
Despite its transformative potential, agentic AI raises several challenges that must be addressed.
Thoughtful development and robust regulation will be essential to ensure that agentic AI operates ethically and responsibly, mitigating potential risks while unlocking its full benefits.
The transition from generative to agentic AI represents a significant leap in artificial intelligence. By integrating autonomous capabilities, these systems can transform industries, enhance productivity, and redefine human-machine relationships. However, achieving this vision requires a careful balance between innovation and regulation. As AI continues to evolve, agentic intelligence stands poised to usher in a new era of technological progress, fundamentally reshaping how we interact with the world.
AI chatbots like ChatGPT have captured widespread attention for their remarkable conversational abilities, allowing users to engage on diverse topics with ease. However, while these tools offer convenience and creativity, they also pose significant privacy risks. The very technology that powers lifelike interactions can also store, analyze, and potentially resurface user data, raising critical concerns about data security and ethical use.
Chatbots like ChatGPT rely on Large Language Models (LLMs) trained on vast datasets to generate human-like responses. This training often includes learning from user interactions. Much like how John Connor taught the Terminator quirky catchphrases in Terminator 2: Judgment Day, these systems refine their capabilities through real-world inputs. However, this improvement process comes at a cost: personal data shared during conversations may be stored and analyzed, often without users fully understanding the implications.
For instance, OpenAI’s terms and conditions explicitly state that data shared with ChatGPT may be used to improve its models. Unless users actively opt out through privacy settings, all shared information, from casual remarks to sensitive details like financial data, can be logged and analyzed. Although OpenAI says it anonymizes and aggregates user data for further study, the risk of unintended exposure remains.
Despite assurances of data security, breaches have occurred. In May 2023, hackers exploited a vulnerability in ChatGPT’s Redis library, compromising the personal data of around 101,000 users. This breach underscored the risks associated with storing chat histories, even when companies emphasize their commitment to privacy. Similarly, companies like Samsung faced internal crises when employees inadvertently uploaded confidential information to chatbots, prompting some organizations to ban generative AI tools altogether.
Governments and industries are starting to address these risks. For instance, in October 2023, President Joe Biden signed an executive order focusing on privacy and data protection in AI systems. While this marks a step in the right direction, legal frameworks remain unclear, particularly around the use of user data for training AI models without explicit consent. Current practices are often classified as “fair use,” leaving consumers exposed to potential misuse.
Until stricter regulations are implemented, users must take proactive steps to safeguard their privacy while interacting with AI chatbots, such as opting out of model training where possible, withholding sensitive personal and financial details, and reviewing each service's privacy settings regularly.
According to the FBI, criminals are increasingly using generative artificial intelligence (AI) to make their fraudulent schemes more convincing. This technology enables fraudsters to produce large amounts of realistic content with minimal time and effort, increasing the scale and sophistication of their operations.
Generative AI systems work by synthesizing new content based on patterns learned from existing data. While creating or distributing synthetic content is not inherently illegal, such tools can be misused for activities like fraud, extortion, and misinformation. The accessibility of generative AI raises concerns about its potential for exploitation.
AI offers significant benefits across industries, including enhanced operational efficiency, regulatory compliance, and advanced analytics. In the financial sector, it has been instrumental in improving product customization and streamlining processes. However, alongside these benefits, vulnerabilities have emerged, including third-party dependencies, market correlations, cyber risks, and concerns about data quality and governance.
The misuse of generative AI poses additional risks to financial markets, such as facilitating financial fraud and spreading false information. Misaligned or poorly calibrated AI models may result in unintended consequences, potentially impacting financial stability. Long-term implications, including shifts in market structures, macroeconomic conditions, and energy consumption, further underscore the importance of responsible AI deployment.
Fraudsters have increasingly turned to generative AI to enhance their schemes, using AI-generated text and media to craft convincing narratives. These include social engineering tactics, spear-phishing, romance scams, and investment frauds. Additionally, AI can generate large volumes of fake social media profiles or deepfake videos, which are used to manipulate victims into divulging sensitive information or transferring funds. Criminals have even employed AI-generated audio to mimic voices, misleading individuals into believing they are interacting with trusted contacts.
In one notable incident reported by the FBI, a North Korean cybercriminal used a deepfake video to secure employment with an AI-focused company, exploiting the position to access sensitive information. Similarly, Russian threat actors have been linked to fake videos aimed at influencing elections. These cases highlight the broad potential for misuse of generative AI across various domains.
To address these challenges, the FBI advises individuals to take several precautions. These include establishing secret codes with trusted contacts to verify identities, minimizing the sharing of personal images or voice data online, and scrutinizing suspicious content. The agency also cautions against transferring funds, purchasing gift cards, or sending cryptocurrency to unknown parties, as these are common tactics employed in scams.
Generative AI tools have been used to improve the quality of phishing messages by reducing grammatical errors and refining language, making scams more convincing. Fraudulent websites have also employed AI-powered chatbots to lure victims into clicking harmful links. To reduce exposure to such threats, individuals are advised to avoid sharing sensitive personal information online or over the phone with unverified sources.
By remaining vigilant and adopting these protective measures, individuals can mitigate their risk of falling victim to fraud schemes enabled by emerging AI technologies.
The rise of generative AI (GenAI) tools like OpenAI’s ChatGPT and Anthropic’s Claude has created opportunities for attackers to exploit unsuspecting developers. Recently, two Python packages falsely claiming to provide free API access to these chatbot platforms were found delivering malware known as "JarkaStealer" to their victims.
Exploiting Developers’ Interest in AI
Free and freemium generative AI platforms are gaining popularity, but most of their advanced features cost money. This leads some developers to look for free alternatives, and many of them don't check the source carefully. Cybercrime follows trends, and the current trend is malicious code inserted into open-source software packages that, at least at first glance, appear legitimate.
As George Apostolopoulos, a founding engineer at Endor Labs, describes it, attackers target less cautious developers lured by free access to popular AI tools. "Many people don't know better and fall for these offers," he says.
The Harmful Python Packages
Two malicious Python packages, "gptplus" and "claudeai-eng," were uploaded to the Python Package Index (PyPI), the official repository for open-source Python projects. Both packages, published by the user "Xeroline," promised API integrations with OpenAI's GPT-4 Turbo model and Anthropic's Claude chatbot.
While the packages seemed to work by connecting users to a demo version of ChatGPT, their true functionality was much more sinister: the code could also drop a Java archive (JAR) file that delivered the JarkaStealer malware to unsuspecting victims' systems.
What Is JarkaStealer?
JarkaStealer is an infostealer that extracts sensitive information from infected systems. It has been sold on the Dark Web for as little as $20, with more elaborate features available for a few dollars more. The malware is designed to steal browser data and session tokens, along with credentials for apps like Telegram, Discord, and Steam, and it can also take screenshots of the victim's system, often capturing sensitive information.
Though the malware's effectiveness is highly uncertain, it is cheap and readily available, which makes it an attractive tool for many attackers. Its source code is even freely accessible on platforms like GitHub, giving it an even wider reach.
Lessons for Developers
This incident highlights the risks of downloading unverified open-source packages, especially when working with emerging technologies such as AI. Development teams should vet all software sources rather than taking shortcuts in search of free premium tools. Such precautions can save individuals and organizations from becoming victims of these attacks.
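One concrete vetting step is to inspect a package's source before installing it. Here is a minimal sketch of a single heuristic drawn from this campaign's reported mechanics: flagging long Base64-looking blobs of the kind used to hide an embedded downloader. The directory name and length threshold are illustrative assumptions, not a complete scanner.

```python
# A minimal sketch of one vetting heuristic: flag long Base64-looking
# blobs in a package's source files before installing it.
import re
from pathlib import Path

BASE64_BLOB = re.compile(r"[A-Za-z0-9+/]{200,}={0,2}")  # long encoded runs

def suspicious_files(package_dir: str) -> list[str]:
    flagged = []
    for path in Path(package_dir).rglob("*.py"):
        text = path.read_text(errors="ignore")
        if BASE64_BLOB.search(text):
            flagged.append(str(path))  # may hide an embedded payload
    return flagged

# Example: inspect an unpacked source distribution before `pip install`.
for f in suspicious_files("./some-package-1.0"):
    print("review before installing:", f)
```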
With caution and adherence to best practices, developers can protect themselves from malicious actors looking to take advantage of the GenAI boom.
This supply chain campaign shows how far cyber threats against developers have advanced, and underscores the urgent need for caution in open-source ecosystems.
Experts have found two malicious packages uploaded to the Python Package Index (PyPI) repository impersonating popular artificial intelligence (AI) models like OpenAI's ChatGPT and Anthropic's Claude in order to distribute an information stealer known as JarkaStealer.
Called gptplus and claudeai-eng, the packages were uploaded by a user named "Xeroline" last year, drawing 1,748 and 1,826 downloads, respectively. The two libraries can no longer be downloaded from PyPI. According to Kaspersky, the malicious packages were uploaded by a single author and differed only in name and description.
Experts believe the packages offered a way to access the GPT-4 Turbo and Claude AI APIs but contained malicious code that, upon installation, triggered the deployment of malware.
Specifically, the "__init__.py" file in these packages included Base64-encoded data containing code to download a Java archive file ("JavaUpdater.jar") from a GitHub repository, fetching the Java Runtime Environment (JRE) from a Dropbox URL if Java isn't already installed on the host, before running the JAR file.
The JAR file contains the JarkaStealer information stealer, which can harvest a variety of sensitive data, including web browser data, system data, session tokens, and screenshots, from a wide range of applications such as Steam, Telegram, and Discord.
In the last step, the stolen data is archived, sent to the attacker's server, and then removed from the target's machine. JarkaStealer is offered under a malware-as-a-service (MaaS) model through a Telegram channel for between $20 and $50; however, its source code has also been leaked on GitHub.
ClickPy stats suggest the packages were downloaded over 3,500 times, primarily by users in China, the U.S., India, Russia, Germany, and France. The attack was part of a year-long supply chain campaign.
Within a year and a half, ChatGPT has grown from an AI prototype into a broad productivity assistant, even sporting its own text and code editor, Canvas. Soon, OpenAI will add direct web search to ChatGPT, putting the platform at the same table as Google's iconic search engine. Amid these fast updates, ChatGPT has accumulated quite a few features that may go unnoticed at first glance but deepen the user experience if you know where to look.
This article will show you how to tap into those features, from customization settings to unique prompting techniques. These five must-know tips will help you unlock the full range of ChatGPT's abilities for any task, small or big.
1. Rename Chats for Better Organisation
Each conversation with ChatGPT begins as a new thread: it remembers all the details of that specific exchange but "forgets" previous ones. Naming your chats lets you track current projects or specific topics. The name ChatGPT suggests automatically reflects the flow of the conversation and often misses the context users need to recall later, so renaming your conversations is one simple yet powerful way of staying organised if you rely on ChatGPT for various tasks.
To rename a conversation, tap the three dots next to its name in the sidebar. You can also archive older chats to remove them from the list without deleting them entirely, keeping your active conversations easy to find.
2. Customise ChatGPT through Custom Instructions
Custom Instructions in ChatGPT give you a chance to make answers more specific to your needs by sharing information and preferences with the AI. The personalization has two parts: you tell ChatGPT what it should know about you, and how you would like its answers delivered. For instance, if you ask ChatGPT for coding advice several times a week, you can tell it which programming languages you know or want to learn, so it can fine-tune its responses. You can also ask ChatGPT for more verbose explanations, or to skip basic steps, to make its answers match your knowledge of a topic.
To set up these preferences, tap the profile icon in the upper right, select "Customise ChatGPT" from the menu, and fill out your preferences. Doing this will get you responses tailored to your interests and requirements.
3. Choose the Right Model for Your Use
If you subscribe to ChatGPT Plus, you have access to several AI models, each tailored to different tasks. The default model for most purposes is GPT-4o, which strikes the best balance between speed and capability and supports additional features, including file uploads, web browsing, and dataset analysis.
However, other models are useful for complex projects that need substantial planning. You might start a project that requires deep research with o1-preview and then shift the discussion to GPT-4o for quick responses. To switch models, click the model dropdown at the top of the screen, or type a forward slash (/) in the chat box to access more options, including web browsing and image creation.
4. Explore Mini-Apps in the GPT Store
Custom GPTs and the GPT Store enable "mini-applications" that extend the platform's functionality. Custom GPTs come with built-in prompts and workflows, and sometimes even API connections, that extend GPT's capabilities. For instance, with Canva's GPT you can create logos, social media posts, or presentations straight within the ChatGPT portal by linking up the Canva tool, letting you co-create visual content with ChatGPT without leaving the portal.
And if there are prompts you apply often, or a dataset you upload frequently, you can easily create your own Custom GPT. This is really helpful for managing recipes, keeping track of personal projects, creating workflow shortcuts, and much more. Reach the GPT Store via the "Explore GPTs" button in the sidebar; your recent and custom GPTs appear in the top tab, so you can find them easily and use them as needed.
5. Manage Conversations with a Fresh Approach
To get the most out of ChatGPT, it is key to understand that every new conversation is an independent thread with its own "memory." It may recall some details from previous conversations, but generally speaking, its answers depend on what is being discussed in the immediate chat. This makes it best to start chats on unrelated projects or topics anew for clarity.
For long-term projects, it can make sense to continue in a single thread so that all relevant information is kept together. For unrelated topics, starting fresh each time avoids confusion. Archiving or deleting conversations you no longer need also declutters your interface and makes active threads easier to access.
What Makes AI Unique Compared to Other Software?
AI behaves very differently from other software: it responds dynamically, sometimes providing pushback or "backtalk," and does not simply do what it is told. This property means some trial and error is needed to get the desired output. For instance, you might prompt ChatGPT to review its own output, for example asking it to replace single-quote characters with double quotes, to produce more accurate results. This is similar to how a developer optimises an AI model, guiding ChatGPT to "think" through a problem in several steps.
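For developers, the same review-your-own-output trick works over the API as well. Here is a minimal sketch using the openai Python client; the model name and prompts are illustrative assumptions.

```python
# A minimal sketch of two-pass "self-review" prompting with the
# openai Python client; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

draft = ask("Write a one-line Python dict literal for a user record.")
# Second pass: the model reviews and corrects its own output.
final = ask(
    "Review this code, replace single quotes with double quotes, "
    f"and fix any errors:\n{draft}"
)
print(final)
```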
ChatGPT Canvas and features like Custom GPTs make the AI behave more like software in the classical sense, although with personality and learning. If ChatGPT continues to grow in this manner, features such as these may make most use cases easier and more delightful.
Following these five tips should help you make the most of ChatGPT as a productivity tool and keep pace with the latest developments. From renaming chats to playing around with Custom GPTs, all of them add to a richer and more customizable user experience.