Adobe has expanded how users can edit images, create designs, and manage documents by integrating select features of its creative software directly into ChatGPT. This update allows users to make visual and document changes simply by describing what they want, without switching between different applications.
With the new integration, tools from Adobe Photoshop, Adobe Acrobat, and Adobe Express are now available inside the ChatGPT interface. Users can upload images or documents and activate an Adobe app by mentioning it in their request. Once enabled, the tool continues to work throughout the conversation, allowing multiple edits without repeatedly selecting the app.
For image editing, the Photoshop integration supports focused and practical adjustments rather than full professional workflows. Users can modify specific areas of an image, apply visual effects, or change settings such as brightness, contrast, and exposure. In some cases, ChatGPT presents multiple edited versions for users to choose from. In others, it provides interactive controls, such as sliders, to fine-tune the result manually.
The Acrobat integration is designed to simplify common document tasks. Users can edit existing PDF files, reduce file size, merge several documents into one, convert files into PDF format, and extract content such as text or tables. These functions are handled directly within ChatGPT once a file is uploaded and instructions are given.
Adobe Express focuses on design creation and quick visual content. Through ChatGPT, users can generate and edit materials like posters, invitations, and social media graphics. Every element of a design, including text, images, colors, and animations, can be adjusted through conversational prompts. If users later require more detailed control, their projects can be opened in Adobe’s standalone applications to continue editing.
The integrations are available worldwide on desktop, web, and iOS platforms. On Android, Adobe Express is already supported, while Photoshop and Acrobat compatibility is expected to be added in the future. These tools are free to use within ChatGPT, although advanced features in Adobe’s native software may still require paid plans.
This launch follows OpenAI’s broader effort to introduce third-party app integrations within ChatGPT. While some earlier app promotions raised concerns about advertising-like behavior, Adobe’s tools are positioned as functional extensions rather than marketing prompts.
By embedding creative and document tools into a conversational interface, Adobe aims to make design and editing more accessible to users who may lack technical expertise. The move also reflects growing competition in the AI space, where companies are racing to combine artificial intelligence with practical, real-world tools.
Overall, the integration represents a shift toward more interactive and simplified creative workflows, allowing users to complete everyday editing tasks efficiently while keeping professional software available for advanced needs.
Wi-Fi jammers operate by overpowering a network with a stronger signal on the same frequency used by home security systems. Though the technology itself isn’t new, law enforcement believes it is increasingly being exploited by burglars trying to avoid identification. A report by KPRC Click2Houston described a case where a homeowner noticed their camera feed becoming distorted as thieves approached, allegedly using a backpack containing a Wi-Fi jammer. Similar incidents were later reported by NBC Los Angeles in high-end neighborhoods in California.
Despite these risks, Wi-Fi jammers are illegal in the United States under the Communications Act of 1934. Federal agencies including the Department of Justice, the Department of Homeland Security, and the Federal Communications Commission actively investigate and prosecute those who sell or use them. Some states, such as Indiana and Oregon, have strengthened laws to improve enforcement. Still, the devices remain accessible, making awareness and prevention essential.
While it’s unclear how often burglars rely on this method, smart home devices remain vulnerable to signal interference. According to CNET, encryption protects data but does not stop jamming. They also note that casual use by criminals is uncommon due to the technical knowledge required. However, real-world cases in California and Texas highlight why extra safeguards matter.
Taking proactive steps to safeguard your security cameras and smart devices can make a meaningful difference. Even a short disruption in surveillance may determine whether authorities can identify a suspect, making prevention just as important as detection.
Artificial intelligence is increasingly influencing the cyber security infrastructure, but recent claims about “AI-powered” cybercrime often exaggerate how advanced these threats currently are. While AI is changing how both defenders and attackers operate, evidence does not support the idea that cybercriminals are already running fully autonomous, self-directed AI attacks at scale.
For several years, AI has played a defining role in cyber security as organisations modernise their systems. Machine learning tools now assist with threat detection, log analysis, and response automation. At the same time, attackers are exploring how these technologies might support their activities. However, the capabilities of today’s AI tools are frequently overstated, creating a disconnect between public claims and operational reality.
Recent attention has been driven by two high-profile reports. One study suggested that artificial intelligence is involved in most ransomware incidents, a conclusion that was later challenged by multiple researchers due to methodological concerns. The report was subsequently withdrawn, reinforcing the importance of careful validation. Another claim emerged when an AI company reported that its model had been misused by state-linked actors to assist in an espionage operation targeting multiple organisations.
According to the company’s account, the AI tool supported tasks such as identifying system weaknesses and assisting with movement across networks. However, experts questioned these conclusions due to the absence of technical indicators and the use of common open-source tools that are already widely monitored. Several analysts described the activity as advanced automation rather than genuine artificial intelligence making independent decisions.
There are documented cases of attackers experimenting with AI in limited ways. Some ransomware has reportedly used local language models to generate scripts, and certain threat groups appear to rely on generative tools during development. These examples demonstrate experimentation, not a widespread shift in how cybercrime is conducted.
Well-established ransomware groups already operate mature development pipelines and rely heavily on experienced human operators. AI tools may help refine existing code, speed up reconnaissance, or improve phishing messages, but they are not replacing human planning or expertise. Malware generated directly by AI systems is often untested, unreliable, and lacks the refinement gained through real-world deployment.
Even in reported cases of AI misuse, limitations remain clear. Some models have been shown to fabricate progress or generate incorrect technical details, making continuous human supervision necessary. This undermines the idea of fully independent AI-driven attacks.
There are also operational risks for attackers. Campaigns that depend on commercial AI platforms can fail instantly if access is restricted. Open-source alternatives reduce this risk but require more resources and technical skill while offering weaker performance.
The UK’s National Cyber Security Centre has acknowledged that AI will accelerate certain attack techniques, particularly vulnerability research. However, fully autonomous cyberattacks remain speculative.
The real challenge is avoiding distraction. AI will influence cyber threats, but not in the dramatic way some headlines suggest. Security efforts should prioritise evidence-based risk, improved visibility, and responsible use of AI to strengthen defences rather than amplify fear.
Web browsers are increasingly evolving beyond basic internet access, and artificial intelligence is becoming a central part of that shift. Neo, an AI-powered browser developed by Norton, is designed to combine browsing, productivity tools, and security features within a single platform. The browser positions itself as a solution for users seeking efficiency, privacy control, and reduced online distractions.
Unlike traditional browsers that rely heavily on cloud-based data processing, Neo stores user information directly on the device. This includes browsing history, AI interactions, and saved preferences. By keeping this data local, the browser allows users to decide what information is retained, synchronized, or removed, addressing growing concerns around data exposure and third-party access.
Security is another core component of Neo’s design. The browser integrates threat protection technologies intended to identify and block phishing attempts, malicious websites, and other common online risks. These measures aim to provide a safer browsing environment, particularly for users who frequently navigate unfamiliar or high-risk websites.
Neo’s artificial intelligence features are embedded directly into the browsing experience. Users can highlight text on a webpage to receive simplified explanations or short summaries, which may help when reading technical, lengthy, or complex content. The browser also includes writing assistance tools that offer real-time grammar corrections and clarity suggestions, supporting everyday tasks such as emails, reports, and online forms.
Beyond text-based tools, Neo includes AI-assisted document handling and image-related features. These functions are designed to support content creation and basic processing tasks without requiring additional software. By consolidating these tools within the browser, Neo aims to reduce the need to switch between multiple applications during routine work.
To improve usability, Neo features a built-in ad blocker that limits intrusive advertising. Reducing ads not only minimizes visual distractions but can also improve page loading speeds. This approach aims to provide a smoother and more focused browsing experience for both professional and casual use.
Tab management is another area where Neo applies automation. Open tabs are grouped based on content type, helping users manage multiple webpages more efficiently. The browser also remembers frequently visited sites and ongoing tasks, allowing users to resume activity without manually reorganizing their workspace.
Customization plays a role in Neo’s appeal. Users can adjust the browser’s appearance, create shortcuts, and modify settings to better match their workflow. Neo also supports integration with external applications, enabling notifications and tool access without leaving the browser interface.
Overall, Neo reflects a broader trend toward AI-assisted browsing paired with stronger privacy controls. By combining local data storage, built-in security, productivity-focused AI tools, and performance optimization features, the browser presents an alternative approach to how users interact with the web. Whether it reshapes mainstream browsing habits remains to be seen, but it underlines how AI is steadily redefining everyday digital experiences.
Six months of research into AI-based development tools has disclosed over thirty security bugs that allow remote code execution (RCE) and data exfiltration. The IDEsaster research shows how AI agents deployed in IDEs such as Visual Studio Code, Zed, and JetBrains products, as well as various commercial assistants, can be tricked into leaking sensitive data or executing attacker-controlled code.
The research reports that 100% of tested AI IDEs and coding agents were vulnerable. Impacted products include GitHub Copilot, Windsurf, Cursor, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code. The disclosures include at least twenty-four assigned CVEs, along with additional AWS advisories.
The main problem lies in how AI agents interact with IDE features. These editors were never designed to host autonomous components that can read, edit, and create files, so once-harmless features become attack surfaces when AI agents gain those capabilities. In their threat models, AI IDEs essentially disregard the underlying editor: because these features have existed for years, they are assumed to be inherently safe.
However, the same functionality can be weaponized into RCE and data-exfiltration primitives once autonomous AI agents are added. The researchers describe this as an IDE-agnostic attack chain.
It begins with context manipulation via prompt injection: covert instructions can be planted in file names, rule files, READMEs, and the output of malicious MCP servers. When an agent reads that context, it can be redirected to perform authorized actions that trigger malicious behaviour in the core IDE. The final stage abuses built-in features to steal data or run attacker code, and it applies across AI IDEs that share the same core software layers.
One example is writing a JSON file that references a remote schema. The agent embeds parameters in the schema URL, including sensitive information gathered earlier in the chain, and that data leaks when the IDE automatically retrieves the schema. This behaviour was observed in Zed, JetBrains IDEs, and Visual Studio Code, and the outbound request was not suppressed by developer safeguards such as diff previews.
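To make the pattern concrete, here is a minimal, hedged sketch of the kind of check a team might run against a workspace: it flags JSON files whose "$schema" field points at a remote URL. The paths and heuristics are illustrative assumptions, not detection logic from the IDEsaster report.

```python
# Minimal sketch: flag JSON files in a workspace whose "$schema" points at a
# remote URL. Heuristic only; the paths and keys checked here are assumptions,
# not part of the IDEsaster disclosure.
import json
from pathlib import Path

def find_remote_schemas(workspace: str):
    findings = []
    for path in Path(workspace).rglob("*.json"):
        try:
            data = json.loads(path.read_text(encoding="utf-8"))
        except (ValueError, OSError):
            continue  # skip unreadable or non-JSON content
        schema = data.get("$schema", "") if isinstance(data, dict) else ""
        # A remote schema forces the IDE to make an outbound request when the
        # file is opened; query parameters in that URL can carry stolen data.
        if isinstance(schema, str) and schema.startswith(("http://", "https://")):
            findings.append((str(path), schema))
    return findings

if __name__ == "__main__":
    for file, url in find_remote_schemas("."):
        print(f"[review] {file} -> {url}")
```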
Another case study demonstrates full remote code execution through altered IDE settings. By updating an executable file already present in the workspace and then pointing configuration fields such as php.validate.executablePath at it, an attacker can make the IDE execute arbitrary code as soon as a relevant file type is opened or created. JetBrains utilities show similar exposure via workspace metadata.
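A similar hygiene check can be applied to workspace settings. The sketch below looks for editor settings that point at an executable inside the workspace; the key list is an illustrative assumption (only php.validate.executablePath comes from the research write-up), and a real deployment would tune it.

```python
# Minimal sketch: scan a workspace's .vscode/settings.json for settings that
# point the editor at an executable inside the workspace. The key list is
# illustrative, not exhaustive.
import json
from pathlib import Path

# Settings that name a binary the IDE may launch automatically (assumed list).
EXECUTABLE_KEYS = (
    "php.validate.executablePath",
    "git.path",
)

def check_workspace_settings(workspace: str):
    settings_file = Path(workspace) / ".vscode" / "settings.json"
    if not settings_file.is_file():
        return []
    try:
        settings = json.loads(settings_file.read_text(encoding="utf-8"))
    except (ValueError, OSError):
        return []
    root = Path(workspace).resolve()
    suspicious = []
    for key in EXECUTABLE_KEYS:
        value = settings.get(key)
        if not isinstance(value, str):
            continue
        target = Path(value)
        if not target.is_absolute():
            target = root / target
        target = target.resolve()
        # A path that resolves inside the workspace means the IDE would run a
        # binary that the repository (or an AI agent) is able to overwrite.
        if target == root or root in target.parents:
            suspicious.append((key, value))
    return suspicious

if __name__ == "__main__":
    for key, value in check_workspace_settings("."):
        print(f"[review] {key} points inside the workspace: {value}")
```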
According to the IDEsaster report, “It’s impossible to entirely prevent this vulnerability class short-term, as IDEs were not initially built following the Secure for AI principle. However, these measures can be taken to reduce risk from both a user perspective and a maintainer perspective.”
Threat actors are abusing a zero-day bug in Gogs, a popular self-hosted Git service. The open-source project has not yet released a fix.
Over 700 instances have been impacted in these attacks. Wiz researchers described the discovery as "accidental," saying they came across the attack in July while analyzing malware on a compromised system. During the investigation, the experts "identified that the threat actor was leveraging a previously unknown flaw to compromise instances" and "responsibly disclosed this vulnerability to the maintainers."
Gogs' maintainers are now working on a fix.
The flaw is tracked as CVE-2025-8110. It is essentially a bypass of an earlier patched flaw (CVE-2024-55947) that lets authenticated users overwrite files outside the repository, leading to remote code execution (RCE).
Gogs is written in Go and lets users host Git repositories on their own servers or cloud infrastructure, without relying on GitHub or other third parties.
Git and Gogs support symbolic links, which act as shortcuts to other files and can point to objects outside the repository. The Gogs API also allows files to be created and modified outside the regular Git protocol.
The previous patch did not account for this symbolic-link abuse, which lets threat actors exploit the flaw to remotely deploy malicious code.
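As a rough illustration of the underlying weakness, the sketch below walks a repository checkout and flags symbolic links that resolve outside the repository root. It is a generic hygiene check under stated assumptions, not a detection rule from the Wiz research.

```python
# Minimal sketch: walk a repository checkout and flag symbolic links that
# resolve outside the repository root. Generic hygiene check only.
import os
from pathlib import Path

def find_escaping_symlinks(repo_path: str):
    repo_root = Path(repo_path).resolve()
    escaping = []
    for current, dirs, files in os.walk(repo_path):
        for name in dirs + files:
            entry = Path(current) / name
            if entry.is_symlink():
                target = entry.resolve()
                # resolve() follows the link; if the final target is not under
                # the repo root, the link points outside the repository.
                if not target.is_relative_to(repo_root):
                    escaping.append((str(entry), str(target)))
    return escaping

if __name__ == "__main__":
    for link, target in find_escaping_symlinks("."):
        print(f"[warn] {link} -> {target}")
```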
While researchers haven't linked the attacks to any particular gang or person, they believe the threat actors are based in Asia.
Last year, Mandiant found Chinese state-sponsored hackers abusing a critical F5 flaw using Supershell and selling access to impacted organizations, including UK government agencies, US defense organizations, and others.
Researchers still don't know what the threat actors are doing with access to compromised instances. "In the environments where we have visibility, the malware was removed quickly so we did not see any post-exploitation activity. We don't have visibility into other compromised servers, beyond knowing they're compromised," researchers said.
Wiz has advised users to immediately disable open registration (if it is not needed) and to limit internet exposure by placing self-hosted Git services behind a VPN. Users should also watch for new repositories with unexpected use of the PutContents API or random 8-character names.
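As one possible way to act on that advice, the sketch below scans a reverse-proxy access log for PUT requests to a contents-style API path and for repositories with random-looking 8-character names. The log format and URL pattern are assumptions about a typical deployment, not indicators published by Wiz.

```python
# Minimal sketch: scan an access log for PUT requests to a Gogs contents-style
# endpoint and for repositories with random-looking 8-character names. The URL
# pattern and log format below are assumptions, not published indicators.
import re
import sys

# Assumed shape:  "PUT /api/v1/repos/<owner>/<repo>/contents/<path> HTTP/1.1"
PUT_CONTENTS = re.compile(r'"PUT\s+/api/v1/repos/[^/]+/([^/]+)/contents/[^"\s]*')
RANDOM_NAME = re.compile(r"^[a-z0-9]{8}$")  # crude heuristic for throwaway names

def scan_log(path: str):
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = PUT_CONTENTS.search(line)
            if not match:
                continue
            repo = match.group(1)
            flag = " (random-looking name)" if RANDOM_NAME.match(repo) else ""
            print(f"[review] PutContents call to repo '{repo}'{flag}: {line.strip()}")

if __name__ == "__main__":
    scan_log(sys.argv[1] if len(sys.argv) > 1 else "access.log")
```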
For more details, readers can see the full list of indicators published by the researchers.
Meta has started taking down accounts belonging to Australians under 16 on Instagram, Facebook and Threads, beginning a week before Australia’s new age-restriction law comes into force. The company recently alerted users it believes are between 13 and 15 that their profiles would soon be shut down, and the rollout has now begun.
Current estimates suggest that hundreds of thousands of accounts across Meta’s platforms will be affected. Since Threads operates through Instagram credentials, any underage Instagram account will also lose access to Threads.
Australia’s new policy, which becomes fully active on 10 December, prevents anyone under 16 from holding an account on major social media sites. This law is the first of its kind globally. Platforms that fail to take meaningful action can face penalties reaching up to 49.5 million Australian dollars. The responsibility to monitor and enforce this age limit rests with the companies, not parents or children.
A Meta spokesperson explained that following the new rules will require ongoing adjustments, as compliance involves several layers of technology and review. The company has argued that the government should shift age verification to app stores, where users could verify their age once when downloading an app. Meta claims this would reduce the need for children to repeatedly confirm their age across multiple platforms and may better protect privacy.
Before their accounts are removed, underage users can download and store their photos, videos and messages. Those who believe Meta has made an incorrect assessment can request a review and prove their age by submitting government identification or a short video-based verification.
The new law affects a wide list of services, including Facebook, Instagram, Snapchat, TikTok, Threads, YouTube, X, Reddit, Twitch and Kick. However, platforms designed for younger audiences or tools used primarily for education, such as YouTube Kids, Google Classroom and messaging apps like WhatsApp, are not included. Authorities have also been examining whether children are shifting to lesser-known apps, and companies behind emerging platforms like Lemon8 and Yope have already begun evaluating whether they fall under the new rules.
Government officials have stated that the goal is to reduce children’s exposure to harmful online material, which includes violent content, misogynistic messages, eating disorder promotion, suicide-related material and grooming attempts. A national study reported that the vast majority of children aged 10 to 15 use social media, with many encountering unsafe or damaging content.
Critics, however, warn that age verification tools may misidentify users, create privacy risks or fail to stop determined teenagers from using alternative accounts. Others argue that removing teens from regulated platforms might push them toward unmonitored apps, reducing online safety rather than improving it.
Australian authorities expect challenges in the early weeks of implementation but maintain that the long-term goal is to reduce risks for the youngest generation of online users.
Microsoft stirred controversy when it revealed a Teams update that could tell your organization when you're not at work. Now Google has done something similar. With a new RCS and SMS archival update on Android, texts sent from work-managed devices are no longer private, end-to-end encryption notwithstanding.
According to Android Authority, "Google is rolling out Android RCS Archival on Pixel (and other Android) phones, allowing employers to intercept and archive RCS chats on work-managed devices. In simpler terms, your employer will now be able to read your RCS chats in Google Messages despite end-to-end encryption.”
This applies only to work-managed devices and doesn't affect personal devices. In regulated industries, it simply adds RCS archiving to existing SMS archiving. Within an organization, however, texting is different from emailing: in texts, employees sometimes share details of their non-work lives. End-to-end encryption kept those conversations private, but on managed devices that will no longer be the case.
There is a lot of misunderstanding around end-to-end encryption. It protects messages in transit, but once they arrive on your device they are decrypted and no longer protected.
According to Google, this is "a dependable, Android-supported solution for message archival, which is also backwards compatible with SMS and MMS messages as well. Employees will see a clear notification on their device whenever the archival feature is active.”
With this update, a work-issued phone no longer looks like such a perk. Employees have long been wary of over-sharing on email, which is easy for employers to monitor, but texts felt safer.
The update will make things different. According to Google, “this new capability, available on Google Pixel and other compatible Android Enterprise devices gives your employees all the benefits of RCS — like typing indicators, read receipts, and end-to-end encryption between Android devices — while ensuring your organization meets its regulatory requirements.”
Because of organizational surveillance, employees sometimes turn to shadow IT apps such as WhatsApp and Signal to communicate with colleagues. The new Google update will only make things worse.
“Earlier,” Google said, “employers had to block the use of RCS entirely to meet these compliance requirements; this update simply allows organizations to support modern messaging — giving employees messaging benefits like high-quality media sharing and typing indicators — while maintaining the same compliance standards that already apply to SMS messaging.”
Artificial intelligence is now built into many cybersecurity tools, yet its presence is often hidden. Systems that sort alerts, scan emails, highlight unusual activity, or prioritise vulnerabilities rely on machine learning beneath the surface. These features make work faster, but they rarely explain how their decisions are formed. This creates a challenge for security teams that must rely on the output while still bearing responsibility for the outcome.
Automated systems can recognise patterns, group events, and summarise information, but they cannot understand an organisation’s mission, risk appetite, or ethical guidelines. A model may present a result that is statistically correct yet disconnected from real operational context. This gap between automated reasoning and practical decision-making is why human oversight remains essential.
To manage this, many teams are starting to build or refine small AI-assisted workflows of their own. These lightweight tools do not replace commercial products. Instead, they give analysts a clearer view of how data is processed, what is considered risky, and why certain results appear. Custom workflows also allow professionals to decide what information the system should learn from and how its recommendations should be interpreted. This restores a degree of control in environments where AI often operates silently.
AI can also help remove friction in routine tasks. Analysts often lose time translating a simple question into complex SQL statements, regular expressions, or detailed log queries. AI-based utilities can convert plain language instructions into the correct technical commands, extract relevant logs, and organise the results. When repetitive translation work is reduced, investigators can focus on evaluating evidence and drawing meaningful conclusions.
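As a minimal sketch of such a utility, the example below turns a plain-language request into a regular expression and applies it to log lines. The `ask_model` function is a hypothetical placeholder for whatever model client a team already uses; the rest is standard library, and the analyst still reviews both the generated pattern and the matches.

```python
# Minimal sketch: translate a plain-language request into a regex and apply it
# to log lines. `ask_model` is a hypothetical placeholder for an existing LLM
# client; the canned example at the bottom lets the sketch run without one.
import re
from typing import Callable, Iterable, List

def build_filter(question: str, ask_model: Callable[[str], str]) -> re.Pattern:
    prompt = (
        "Return only a Python regular expression (no prose, no quotes) that "
        f"matches log lines relevant to this request: {question}"
    )
    pattern = ask_model(prompt).strip()
    return re.compile(pattern)  # raises re.error if the model returns junk

def filter_logs(lines: Iterable[str], question: str,
                ask_model: Callable[[str], str]) -> List[str]:
    # The analyst still reviews the compiled pattern and the matches; the model
    # only removes the translation step, it does not make the judgment call.
    regex = build_filter(question, ask_model)
    return [line for line in lines if regex.search(line)]

if __name__ == "__main__":
    fake_model = lambda _prompt: r"Failed password for (invalid user )?\w+"
    sample = [
        "Jan 10 sshd[1]: Failed password for invalid user admin from 203.0.113.7",
        "Jan 10 cron[2]: job completed",
    ]
    print(filter_logs(sample, "show failed SSH logins", fake_model))
```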
However, using AI responsibly requires a basic level of technical fluency. Many AI-driven tools rely on Python for integration, automation, and data handling. What once felt intimidating is now more accessible because models can draft most of the code when given a clear instruction. Professionals still need enough understanding to read, adjust, and verify what the model generates. They also need awareness of how AI interprets instructions and where its logic might fail, especially when dealing with vague or incomplete information.
A practical starting point involves a few structured steps. Teams can begin by reviewing their existing tools to see where AI is already active and what decisions it is influencing. Treating AI outputs as suggestions rather than final answers helps reinforce accountability. Choosing one recurring task each week and experimenting with partial automation builds confidence and reduces workload over time. Developing a basic understanding of machine learning concepts makes it easier to anticipate errors and keep automated behaviours aligned with organisational priorities. Finally, engaging with professional communities exposes teams to shared tools, workflows, and insights that accelerate safe adoption.
As AI becomes more common, the goal is not to replace human expertise but to support it. Automated tools can process large datasets and reduce repetitive work, but they cannot interpret context, weigh consequences, or understand the nuance behind security decisions. Cybersecurity remains a field where judgment, experience, and critical thinking matter. When organisations use AI with intention and oversight, it becomes a powerful companion that strengthens investigative speed without compromising professional responsibility.
Today's information environment includes a wide range of communication. Social media platforms have enabled reposting, and comments. The platform is useful for both content consumers and creators, but it has its own challenges.
The rapid adoption of generative AI has led to a significant increase in misleading content online. AI chatbots have a tendency to generate false information with no factual backing.
The internet is filling with AI slop: junk content produced with minimal human input. There is currently no mechanism to limit the mass production of harmful or misleading content that can affect human cognition and critical thinking, which calls for a robust approach to challenges the current system is failing to address.
To help restore the integrity of digital information, the Canadian Centre for Cyber Security (CCCS) and the UK's National Cyber Security Centre (NCSC) have published a new report on public content provenance. Provenance means "place of origin": to build stronger trust with external audiences, businesses and organisations must improve how they manage and communicate the origin of their information.
The NCSC's chief technology officer said the "new publication examines the emerging field of content provenance technologies and offers clear insights using a range of cyber security perspectives on how these risks may be managed.”
The industry is taking some steps to address content provenance challenges, such as the Coalition for Content Provenance and Authenticity (C2PA), an effort backed by generative AI developers and tech giants including Meta, Google, OpenAI, and Microsoft.
Currently, there is a pressing need for interoperable standards across media types such as images, video, and text documents. Content provenance technologies exist, but the field is still at a nascent stage.
The main techniques include genuine timestamps and cryptographically verifiable metadata to prove that content has not been tampered with. But obstacles remain in developing these technologies securely, such as deciding how and when they are applied.
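For a sense of how such metadata works in principle, the sketch below binds a content hash and a timestamp to a digital signature so that later edits become detectable. It is a conceptual illustration only, not the C2PA format, and it assumes the open-source `cryptography` package for Ed25519 signing.

```python
# Minimal sketch of cryptographic provenance metadata: bind a content hash and
# a timestamp to a signature so later edits are detectable. Conceptual only;
# not the C2PA format. Assumes the `cryptography` package is installed.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_provenance(content: bytes, creator: str, private_key: Ed25519PrivateKey) -> dict:
    record = {
        "creator": creator,
        "created_at": int(time.time()),                 # a trusted timestamp in real systems
        "sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the content
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = private_key.sign(payload).hex()
    return record

def verify_provenance(content: bytes, record: dict, public_key) -> bool:
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed.get("sha256"):
        return False  # content was modified after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    media = b"example image bytes"
    record = make_provenance(media, "newsroom-camera-01", key)
    print(verify_provenance(media, record, key.public_key()))              # True
    print(verify_provenance(b"tampered bytes", record, key.public_key()))  # False
```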
Present technology also places the burden on the end user to understand and interpret provenance data.
A provenance system must allow a user to see who or what created the content, when it was created, and what edits were made. Threat actors have started using GenAI media to make scams more believable, and it has become difficult to tell what is real from what is fake, which is why a mechanism that can track the origin and edit history of digital media is needed. The NCSC and CCCS report will help organisations navigate this grey area with more clarity.