
When Cybersecurity Fails: The Impact of the Microchip Technology Hack


In an era where digital transformation is at the forefront of every industry, cybersecurity remains a critical concern. The recent cyberattack on Microchip Technology, a leading provider of microcontrollers and analog semiconductors, underscores the vulnerabilities that even the most advanced companies face. Detected last week, this incident has significantly affected the company’s operations, highlighting the urgent need for robust cybersecurity measures in the semiconductor industry.

The Incident

In an SEC filing, Microchip Technology disclosed that the cyberattack disrupted several of its manufacturing facilities, leading to a slowdown in production. While the company has not yet confirmed the full extent of the disruption or whether ransomware was involved, the impact on its operations is evident. The attack has forced Microchip to isolate affected systems and begin remediation efforts, which are ongoing.

Implications for the Semiconductor Industry

The semiconductor industry is a critical component of the global technology infrastructure. Semiconductors are the building blocks of modern electronics, powering everything from smartphones to advanced medical devices. A disruption in the supply chain of semiconductors can have far-reaching consequences, affecting numerous sectors and potentially leading to significant economic losses.

What Can Organizations Do?

1. Proactive Cybersecurity Measures: The incident highlights the importance of proactive cybersecurity measures. Companies must invest in advanced threat detection and response systems to identify and mitigate potential threats before they can cause significant damage. Regular security audits and vulnerability assessments are essential to ensure that systems are secure and up-to-date.

2. Employee Training and Awareness: Human error remains one of the leading causes of cybersecurity breaches. Companies must invest in comprehensive training programs to educate employees about the latest cybersecurity threats and best practices. Creating a culture of security awareness can significantly reduce the risk of successful cyberattacks.

3. Incident Response Planning: A robust incident response plan is crucial for minimizing the impact of a cyberattack. Companies should develop and regularly update their incident response plans, ensuring that all employees are familiar with their roles and responsibilities in the event of a breach. Swift and coordinated action can help contain the damage and expedite recovery efforts.

4. Collaboration and Information Sharing: The semiconductor industry must foster a culture of collaboration and information sharing to combat cyber threats effectively. By sharing threat intelligence and best practices, companies can collectively enhance their cybersecurity posture and better protect the industry.

The Rise of Manual Techniques in Ransomware Attacks: A Growing Threat


A recent report by CrowdStrike highlights a disturbing trend: the increasing use of manual techniques in ransomware attacks. This shift toward hands-on-keyboard activity is making these attacks not only more sophisticated but also more challenging to detect and mitigate.

The Surge in Interactive Intrusions

According to CrowdStrike’s findings, there has been a staggering 55% increase in interactive intrusions over the past year. These intrusions, characterized by direct human involvement rather than automated scripts, account for nearly 90% of e-crime activities. This trend underscores a critical shift in the tactics employed by cybercriminals, who are now leveraging manual techniques to bypass traditional security measures and achieve their malicious objectives.

Why Manual Techniques?

The adoption of manual techniques in ransomware attacks offers several advantages to cybercriminals. Firstly, these techniques allow attackers to adapt and respond in real-time to the defenses they encounter. Unlike automated attacks, which follow predefined scripts, manual intrusions enable attackers to think on their feet, making it harder for security systems to predict and counter their moves.

Secondly, manual techniques often involve the use of legitimate tools and credentials, making it difficult for security teams to distinguish between malicious and benign activities. This tactic, known as “living off the land,” involves using tools that are already present in the target environment, such as PowerShell or Remote Desktop Protocol (RDP). By blending in with normal network traffic, attackers can evade detection for extended periods, increasing the likelihood of a successful attack.
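To make this concrete, here is a small, hypothetical sketch (not taken from the CrowdStrike report) of how a defender might flag one common living-off-the-land indicator: PowerShell invocations with encoded or download-oriented arguments in an exported process-creation log. The column names, file name, and marker list are assumptions made for illustration.

```python
import csv

# Hypothetical process-creation log export with columns:
# timestamp, host, user, image, command_line
SUSPICIOUS_MARKERS = [
    "-encodedcommand",    # base64-encoded PowerShell payloads
    "-enc ",              # short form of the same switch
    "downloadstring",     # in-memory download cradles
    "invoke-webrequest",  # fetching tooling from the internet
    "bypass",             # execution-policy bypass
]

def flag_lotl_candidates(log_path: str):
    """Yield rows whose PowerShell command lines contain common LOTL markers."""
    with open(log_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            image = (row.get("image") or "").lower()
            cmd = (row.get("command_line") or "").lower()
            if "powershell" in image:
                hits = [m for m in SUSPICIOUS_MARKERS if m in cmd]
                if hits:
                    yield row["timestamp"], row["host"], row["user"], hits

if __name__ == "__main__":
    # "process_creation.csv" is a placeholder for an exported log file.
    for ts, host, user, hits in flag_lotl_candidates("process_creation.csv"):
        print(f"{ts} {host} {user}: matched {', '.join(hits)}")
```

Simple keyword matching like this produces false positives on legitimate administration, which is exactly why attackers favor these tools; it is only a starting point for the behavioral analysis discussed below.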

The Impact on the Technology Sector

The technology sector has been particularly hard-hit by this surge in manual ransomware attacks. CrowdStrike’s report indicates a 60% rise in such attacks targeting tech companies. This sector is an attractive target for cybercriminals due to its vast repositories of sensitive data and intellectual property. Additionally, technology companies often have complex and interconnected systems, providing multiple entry points for attackers to exploit.

The consequences of a successful ransomware attack on a tech company can be devastating. Beyond the immediate financial losses from ransom payments, these attacks can lead to prolonged downtime, loss of customer trust, and significant reputational damage. In some cases, the recovery process can take months, further compounding the financial and operational impact.

What to Do?

Enhanced Monitoring and Detection: Implement advanced monitoring tools that can detect anomalous behavior indicative of manual intrusions. Behavioral analytics and machine learning can help identify patterns that deviate from the norm, providing early warning signs of an attack (a minimal sketch follows this list).

Regular Security Training: Educate employees about the latest phishing techniques and social engineering tactics used by cybercriminals. Regular training sessions can help staff recognize and report suspicious activities, reducing the risk of initial compromise.

Zero Trust Architecture: Adopt a Zero Trust approach to security, where no user or device is trusted by default. Implement strict access controls and continuously verify the identity and integrity of users and devices accessing the network.

Incident Response Planning: Develop and regularly update an incident response plan that outlines the steps to take in the event of a ransomware attack. Conduct regular drills to ensure that all team members are familiar with their roles and responsibilities during an incident.

Backup and Recovery: Maintain regular backups of critical data and ensure that these backups are stored securely and inaccessible from the main network. Regularly test the recovery process to ensure that data can be restored quickly in the event of an attack.
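As a minimal sketch of the behavioral-analytics idea in the first recommendation above, the example below scores each user's daily remote-logon counts against their own history using a simple leave-one-out z-score and flags sharp spikes. The data, threshold, and field choices are invented for illustration and are not a production detection rule.

```python
from collections import defaultdict
from statistics import mean, stdev

# Toy input: (user, day, remote_logon_count), e.g. aggregated from RDP/VPN logs.
observations = [
    ("alice", "2024-05-01", 3), ("alice", "2024-05-02", 2), ("alice", "2024-05-03", 4),
    ("alice", "2024-05-04", 3), ("alice", "2024-05-05", 41),  # sudden spike
    ("bob",   "2024-05-01", 7), ("bob",   "2024-05-02", 6), ("bob",   "2024-05-03", 8),
]

def flag_anomalies(rows, z_threshold=3.0):
    """Flag days where a user's logon count spikes far above their own baseline."""
    per_user = defaultdict(list)
    for user, day, count in rows:
        per_user[user].append((day, count))
    alerts = []
    for user, days in per_user.items():
        for i, (day, count) in enumerate(days):
            # Leave-one-out baseline: compare today against every other day.
            baseline = [c for j, (_, c) in enumerate(days) if j != i]
            if len(baseline) < 2 or stdev(baseline) == 0:
                continue  # not enough history to build a baseline
            z = (count - mean(baseline)) / stdev(baseline)
            if z > z_threshold:
                alerts.append((user, day, count, round(z, 1)))
    return alerts

print(flag_anomalies(observations))  # alice's spike on 2024-05-05 is flagged
```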

From Hype to Reality: Understanding Abandoned AI Initiatives


A survey found that nearly half of all new commercial artificial intelligence projects are abandoned midway.

Navigating the AI Implementation Maze

A recent study by the multinational law firm DLA Piper, which surveyed 600 top executives and decision-makers worldwide, sheds light on the considerable hurdles businesses confront when incorporating AI technologies. 

Despite AI's exciting potential to transform different industries, the path to successful deployment is plagued with challenges. This essay looks into these problems and offers expert advice for navigating the complex terrain of AI integration.

Why Half of Business AI Projects Get Abandoned

According to the report, while more than 40% of enterprises fear that their basic business models will become obsolete unless they incorporate AI technologies, nearly half (48%) of companies that have started AI projects have had to suspend or roll them back. The most commonly cited reasons were worries about data privacy (48%), challenges with data ownership and insufficient legislative frameworks (37%), customer apprehensions (35%), the emergence of new technologies (33%), and staff concerns (29%).

The Hype vs. Reality

1. Unrealistic Expectations

When organizations embark on an AI journey, they often expect immediate miracles. The hype surrounding AI can lead to inflated expectations, especially when executives envision seamless automation and instant ROI. However, building robust AI systems takes time, data, and iterative development. Unrealistic expectations can lead to disappointment and project abandonment.

2. Data Challenges

AI algorithms thrive on data, but data quality and availability remain significant hurdles. Many businesses struggle with fragmented, messy data spread across various silos. Without clean, labeled data, AI models cannot make progress. Additionally, privacy concerns and compliance issues further complicate data acquisition and usage.

The Implementation Pitfalls

1. Lack of Clear Strategy

AI projects often lack a well-defined strategy. Organizations dive into AI without understanding how it aligns with their overall business goals. A clear roadmap, including pilot projects, resource allocation, and risk assessment, is crucial.

2. Talent Shortage

Skilled AI professionals are in high demand, but the supply remains limited. Organizations struggle to find data scientists, machine learning engineers, and AI architects. Without the right talent, projects stall or fail.

3. Change Management

Implementing AI requires organizational change. Employees must adapt to new workflows, tools, and mindsets. Resistance to change can derail projects, leading to abandonment.

AI's Role in Averting Future Power Outages

 

Amidst an ever-growing demand for electricity, artificial intelligence (AI) is stepping in to mitigate power disruptions.

Aseef Raihan vividly recalls a chilling night in February 2021 in San Antonio, Texas, during winter storm Uri. As temperatures plunged to -19°C, Texas faced an unprecedented surge in electricity demand to combat the cold. 

However, the state's electricity grid faltered, with frozen wind turbines, snow-covered solar panels, and precautionary shutdowns of nuclear reactors leading to widespread power outages affecting over 4.5 million homes and businesses. Raihan's experience of enduring cold nights without power underscored the vulnerability of our electricity systems.

The incident in Texas highlights a global challenge as countries witness escalating electricity demands due to factors like the rise in electric vehicle usage and increased adoption of home appliances like air conditioners. Simultaneously, many nations are transitioning to renewable energy sources, which pose challenges due to their variable nature. For instance, electricity production from wind and solar sources fluctuates based on weather conditions.

To bolster energy resilience, countries like the UK are considering the construction of additional gas-powered plants. Moreover, integrating large-scale battery storage systems into the grid has emerged as a solution. In Texas, significant strides have been made in this regard, with over five gigawatts of battery storage capacity added within three years following the storm.

However, the effectiveness of these batteries hinges on their ability to predict optimal charging and discharging times. This is where AI steps in. Tech companies like WattTime and Electricity Maps are leveraging AI algorithms to forecast electricity supply and demand patterns, enabling batteries to charge during periods of surplus energy and discharge when demand peaks. 
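As a simplified illustration of that idea (and not the actual WattTime or Electricity Maps systems), the sketch below takes an invented hourly forecast of net demand and greedily schedules a battery to charge in the lowest-demand hours and discharge in the highest-demand hours. The forecast figures and battery parameters are assumptions for the example.

```python
def schedule_battery(forecast_mw, capacity_mwh=4.0, power_mw=1.0):
    """Greedy toy scheduler: charge in the lowest-demand hours, discharge at the peaks.

    forecast_mw: forecast net demand per hour (higher = scarcer supply).
    Returns a list of (hour, action) where action is 'charge', 'discharge', or 'idle'.
    Note: this toy ignores state-of-charge ordering and round-trip losses.
    """
    hours = list(range(len(forecast_mw)))
    # Hours of full-power operation the battery can sustain in each direction.
    n_slots = int(capacity_mwh / power_mw)
    charge_hours = set(sorted(hours, key=lambda h: forecast_mw[h])[:n_slots])
    discharge_hours = set(sorted(hours, key=lambda h: forecast_mw[h], reverse=True)[:n_slots])

    schedule = []
    for h in hours:
        if h in charge_hours and h not in discharge_hours:
            schedule.append((h, "charge"))
        elif h in discharge_hours and h not in charge_hours:
            schedule.append((h, "discharge"))
        else:
            schedule.append((h, "idle"))
    return schedule

# Invented 24-hour forecast of net demand (MW): low overnight, peaking in the evening.
forecast = [40, 38, 36, 35, 37, 42, 50, 58, 62, 60, 57, 55,
            54, 53, 55, 58, 64, 72, 78, 75, 68, 60, 52, 45]
for hour, action in schedule_battery(forecast):
    if action != "idle":
        print(f"{hour:02d}:00  {action}")
```

Real systems replace the hand-written forecast with model predictions of supply, demand, and price, but the charge-low, discharge-high decision logic is the part AI is being used to inform.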

Additionally, AI is enhancing the monitoring of electricity infrastructure, with companies like Buzz Solutions employing AI-powered solutions to detect damage and potential hazards such as overgrown vegetation and wildlife intrusion, thus mitigating the risk of power outages and associated hazards like wildfires.

New AI System Aids Early Detection of Deadly Pancreatic Cancer Cases

 

New research has unveiled an AI system designed to enhance the detection of the most prevalent type of pancreatic cancer. Identifying pancreatic cancer is difficult because the pancreas is obscured by surrounding organs, making tumors hard to spot. Moreover, symptoms rarely manifest in the early stages, so diagnoses often come at advanced stages when the cancer has already spread, diminishing the chances of a cure.

To address this, a collaborative effort between MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Limor Appelbaum from Beth Israel Deaconess Medical Center produced an AI system aimed at predicting the likelihood of an individual developing pancreatic ductal adenocarcinoma (PDAC), the predominant form of the cancer. This AI system, named PRISM, demonstrated superior performance compared to existing diagnostic standards, presenting the potential for future clinical applications in identifying candidates for early screening or testing, ultimately leading to improved outcomes.

The researchers aspired to construct a model capable of forecasting a patient's risk of PDAC diagnosis within the next six to 18 months, facilitating early detection and treatment. Leveraging existing electronic health records, the PRISM system comprises two AI models. The first model, utilizing artificial neural networks, analyzes patterns in data such as age, medical history, and lab results to calculate a personalized risk score. The second model, employing a simpler algorithm, processes the same data to generate a comparable score.
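As a rough, hypothetical illustration of the simpler of the two approaches described above (not the actual PRISM code, features, or data), the sketch below trains a logistic-regression risk scorer on entirely synthetic tabular features and reads off per-patient risk scores as predicted probabilities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Entirely synthetic stand-in for record-derived features:
# [age, glucose, bilirubin, recent weight change (kg)]
n = 2000
X = np.column_stack([
    rng.normal(60, 12, n),
    rng.normal(100, 20, n),
    rng.normal(0.8, 0.3, n),
    rng.normal(0, 3, n),
])
# Synthetic label loosely tied to age, glucose, and weight loss, plus noise.
logit = 0.04 * (X[:, 0] - 60) + 0.02 * (X[:, 1] - 100) - 0.3 * X[:, 3] - 3.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# A personalized "risk score" here is just the predicted probability of the positive class.
scores = model.predict_proba(X_test)[:, 1]
print("example risk scores:", np.round(scores[:5], 3))
```

The reported system pairs a neural network with a simpler model of this kind; the point of the sketch is only to show how structured record data can be turned into a per-patient probability that can then gate further screening.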

The team fed anonymized data from 6 million electronic health records, including 35,387 PDAC cases, from 55 U.S. healthcare organizations into the models. By evaluating PDAC risk every 90 days, the neural network identified 35% of eventual pancreatic cancer cases as high risk six to 18 months before diagnosis, signifying a notable advancement over existing screening systems. With pancreatic cancer lacking routine screening recommendations for the general population, the current criteria capture only around 10% of cases.

While the AI system shows promise in early detection, experts caution that the model's impact depends on its ability to identify cases early enough for effective treatment. Michael Goggins, a pancreatic cancer specialist at Johns Hopkins University School of Medicine, emphasizes the importance of early detection and acknowledges the potential improvement offered by the PRISM system.

The study, while retrospective, sets the groundwork for future investigations involving real-time data and outcome assessments. The research team acknowledges potential challenges related to the generalizability of AI models across different healthcare organizations, emphasizing the need for diverse datasets. PRISM holds promise for deployment in two ways: selectively recommending pancreatic cancer testing for specific patients and initiating broader screenings using blood or saliva tests for asymptomatic individuals. Limor Appelbaum envisions the transition of such models from academic literature to clinical practice, emphasizing their life-saving potential.

Microsoft Disables Widely Exploited MSIX App Installer Protocol Due to Malware Attacks

 

On Thursday, Microsoft announced that it has once again disabled the ms-appinstaller protocol handler by default due to widespread exploitation by various threat actors for malware dissemination. The Microsoft Threat Intelligence team reported that the misuse of the current implementation of the ms-appinstaller protocol handler has become a common method for threat actors to introduce malware, potentially leading to the distribution of ransomware.

The team highlighted the emergence of cybercriminals offering a malware kit as a service that abuses the MSIX file format and the ms-appinstaller protocol handler. The change takes effect starting with App Installer version 1.21.3421.0.

The attacks are manifested through signed malicious MSIX application packages, circulated through platforms such as Microsoft Teams or deceptive advertisements appearing on popular search engines like Google. Since mid-November 2023, at least four financially motivated hacking groups have exploited the App Installer service, utilizing it as an entry point for subsequent human-operated ransomware activities.
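As a small defensive illustration (not Microsoft's own mitigation), the sketch below scans a saved message or landing page for ms-appinstaller: links, the scheme abused in these campaigns, and reports the package URLs they point to. The script and its input format are assumptions for the example.

```python
import re
import sys

# ms-appinstaller links hand a remote MSIX package straight to App Installer,
# e.g. ms-appinstaller:?source=https://example.invalid/app.msix
MSIX_LINK = re.compile(r"ms-appinstaller:\?source=(\S+)", re.IGNORECASE)

def find_appinstaller_links(text: str):
    """Return the package URLs referenced by any ms-appinstaller: links in the text."""
    return [m.group(1).rstrip('"\'>') for m in MSIX_LINK.finditer(text)]

if __name__ == "__main__":
    # Usage: python scan_links.py exported_message.html
    content = open(sys.argv[1], encoding="utf-8", errors="ignore").read()
    for url in find_appinstaller_links(content):
        print("ms-appinstaller link points at:", url)
```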

The identified groups involved in these activities include Storm-0569, employing BATLOADER through SEO poisoning with sites mimicking Zoom, Tableau, TeamViewer, and AnyDesk, ultimately leading to Black Basta ransomware deployment. Storm-1113 serves as an initial access broker distributing EugenLoader disguised as Zoom, facilitating the delivery of various stealer malware and remote access trojans. Sangria Tempest (also known as Carbon Spider and FIN7) utilizes EugenLoader from Storm-1113 to drop Carbanak, delivering an implant named Gracewire. 

Alternatively, the group relies on Google ads to entice users into downloading malicious MSIX application packages from deceptive landing pages, distributing POWERTRASH, which is then utilized to load NetSupport RAT and Gracewire. Storm-1674, another initial access broker, sends seemingly harmless landing pages masquerading as Microsoft OneDrive and SharePoint through Teams messages using the TeamsPhisher tool, leading recipients to download a malicious MSIX installer containing SectopRAT or DarkGate payloads.

Microsoft characterized Storm-1113 as an entity involved in "as-a-service," providing malicious installers and landing page frameworks imitating well-known software to other threat actors like Sangria Tempest and Storm-1674. In October 2023, Elastic Security Labs detailed a separate campaign involving counterfeit MSIX Windows app package files for popular applications like Google Chrome, Microsoft Edge, Brave, Grammarly, and Cisco Webex, used to distribute a malware loader called GHOSTPULSE.

This marks a recurrence of Microsoft taking action to disable the MSIX ms-appinstaller protocol handler in Windows. A similar step was taken in February 2022 to thwart threat actors from exploiting it to deliver Emotet, TrickBot, and Bazaloader. Microsoft emphasized that threat actors likely choose the ms-appinstaller protocol handler vector due to its ability to bypass safety mechanisms such as Microsoft Defender SmartScreen and built-in browser warnings designed to protect users from malicious content.

Is Your Android Device Tracking You? Understanding Its Monitoring Methods

 

In general discussions about how Android phones might collect location and personal data, the focus often falls on third-party apps rather than Google's built-in apps. This awareness has grown due to numerous apps gathering significant information about users, leading to concerns, especially when targeted ads start appearing. The worry persists about whether apps, despite OS permissions, eavesdrop on private in-person conversations, a concern even addressed by Instagram's head in a 2019 CBS News interview.

However, attention to third-party apps tends to overshadow the fact that Android and its integrated apps track users extensively. While much of this tracking aligns with user preferences, it results in a substantial accumulation of sensitive personal data on phones. Even for those trusting Google with their information, understanding the collected data and its usage remains crucial, especially considering the limited options available to opt out of this data collection.

For instance, a lesser-known feature involves Google Assistant's ability to identify a parked car and send a notification regarding its location. This functionality, primarily guesswork, varies in accuracy and isn't widely publicized by Google, reflecting how tech companies leverage personal data for results that might raise concerns about potential eavesdropping.

The ways Android phones track users were highlighted in an October 2021 Kaspersky blog post referencing a study by researchers from the University of Edinburgh and Trinity College. While seemingly innocuous, the compilation of installed apps, when coupled with other personal data, can reveal intimate details about users, such as their religion or mental health status. This fusion of app presence with location data exposes highly personal information through AI-based assumptions.

Another focal point was the extensive collection of unique identifiers by Google and OEMs, tying users to specific handsets. While standard data collection aids app troubleshooting, these unique identifiers, including Google Advertising IDs, device serial numbers, and SIM card details, can potentially associate users even after phone number changes, factory resets, or ROM installations.

The study also emphasized the potential invasiveness of data collection methods, such as Xiaomi uploading app window histories and Huawei's keyboard logging app usage. Details like call durations and keyboard activity could lead to inferences about users' activities and health, reflecting the extensive and often unnoticed data collection practices by smartphones, as highlighted by Trinity College's Prof. Doug Leith.

Exploring Blockchain's Revolutionary Impact on E-Commerce

 

The trend of choosing online shopping over traditional in-store visits is on the rise, with e-commerce transactions dominating the digital landscape. However, the security of these online interactions is not foolproof, as security breaches leading to unauthorized access to vast amounts of data become increasingly prevalent. This growing concern highlights the vulnerabilities in current network structures and the need for enhanced security measures.

Blockchain technology emerges as a solution to bolster the security of online transactions. Operating as a decentralized, peer-to-peer network, blockchain minimizes the risk of malicious activities by eliminating the need for trusted intermediaries. The technology's foundation lies in automated access control and a public ledger, ensuring secure interactions among participants. The encryption-heavy nature of blockchain adds a layer of legitimacy and authority to every transaction within the network.

Initially designed as part of bitcoin technology for decentralized currency, blockchain has found applications in various sectors such as public services, Internet of Things (IoT), banking, healthcare, and finance. Its distributed and decentralized nature inherently provides a higher level of security compared to traditional databases.

As the demand for secure communication methods in e-commerce grows, blockchain technology plays a pivotal role in ensuring the security, efficiency, and speed of transactions on online platforms. Unlike traditional transactions that rely on third-party validation, blockchain integration transforms industries like e-commerce, banking, and energy, ushering in new technologies at a rapid pace. The distributed ledger technology of blockchain safeguards the integrity and authenticity of transactions, mitigating the risks associated with data leaks.

The intersection of blockchain and e-commerce is particularly crucial in the context of a data-driven world. Traditional centralized entities often control and manipulate user data without much user input, storing extensive personal information. Blockchain's decentralized and secure approach enhances the safety of conducting transactions and storing digital assets in the e-commerce landscape.

The transformative impact of blockchain on e-commerce is evident in its ability to optimize business processes, reduce operational costs, and improve overall efficiency. The technology's applications, ranging from supply chain management to financial services, bring advantages such as transparent business operations and secure, tamper-proof transaction records.

The evolution of the internet, transitioning from a tool for educational and military purposes to a platform hosting commercial applications, has led to the dominance of e-commerce, a trend accelerated by the global COVID-19 pandemic. Modern businesses leverage the internet for market research, customer service, product distribution, and issue resolution, resulting in increased efficiency and market transparency.

Blockchain, as a decentralized, peer-to-peer database distributed across a network of nodes, has significantly reshaped internet-based trade. Its cryptographic storage of transaction logs ensures an unchangeable record, resilient to disruptions in the digital age. Blockchain's current applications in digitizing financial assets highlight its potential for secure and distributable audit trails, particularly in payment and transaction systems.
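To illustrate why a chain of cryptographically linked records is hard to rewrite, here is a minimal educational hash-chain sketch. It is not a real blockchain (there is no consensus mechanism or network), only the core idea that each block commits to the previous block's hash, so editing an earlier record breaks every later link.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash the block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "time": time.time(),
             "transactions": transactions, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain: list) -> bool:
    """Re-derive every hash; any edit to an earlier block invalidates the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
add_block(ledger, [{"from": "storefront", "to": "supplier", "amount": 120}])
add_block(ledger, [{"from": "customer", "to": "storefront", "amount": 60}])
print("valid before tampering:", verify(ledger))

ledger[0]["transactions"][0]["amount"] = 999  # attempt to rewrite history
print("valid after tampering:", verify(ledger))
```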

The e-commerce sector, facing challenges since its inception, seeks a secure technological foundation, a role poised to be filled by blockchain technology. The decentralized nature of blockchain enhances operational efficiency by streamlining workflows, especially with intermediaries like logistics and payment processors. It introduces transparency, recording every transaction on a shared ledger, ensuring traceability and building trust among participants.

Cost-effectiveness is another advantage offered by blockchain in e-commerce, as it enables sellers to bypass intermediaries and associated transaction fees through cryptocurrencies like Bitcoin. The heightened security provided by blockchain, built on Distributed Ledger Technology (DLT), becomes indispensable in an industry where data breaches can lead to significant revenue losses and damage to brand reputation.

Blockchain's applications in e-commerce span various aspects, including inventory control, digital ownership, loyalty reward programs, identity management, supply chain tracking, and warranty management. These applications set new standards for online businesses, promising a more secure, efficient, and customer-centric e-commerce world.

As blockchain continues to evolve, its potential impact on the e-commerce sector is expected to grow. The technology holds the promise of unlocking more innovative applications, fostering an environment where trust, efficiency, and customer satisfaction take center stage. The future of e-commerce, driven by blockchain, transcends mere transactions; it aims to create a seamless, secure, and user-centric shopping experience that adapts to the evolving needs of businesses and consumers in the digital age.

Cryptocurrency Engineers Targeted by New macOS Malware 'KandyKorn'

 

A newly identified macOS malware called 'KandyKorn' has been discovered in a cyber campaign linked to the North Korean hacking group Lazarus. The targets of this attack are blockchain engineers associated with a cryptocurrency exchange platform.

The attackers are using Discord channels to pose as members of the cryptocurrency community and distribute Python-based modules. These modules initiate a complex KandyKorn infection process.

Elastic Security, the organization that uncovered the attack, has linked it to Lazarus based on similarities with their previous campaigns, including techniques used, network infrastructure, code-signing certificates, and custom detection methods for Lazarus activity. 

The attack starts with social engineering on Discord, where victims are tricked into downloading a malicious ZIP archive named 'Cross-platform Bridges.zip.' This archive contains a Python script ('Main.py') that imports 13 modules, triggering the first payload, 'Watcher.py.' 

Watcher.py downloads and executes another Python script called 'testSpeed.py' and a file named 'FinderTools' from a Google Drive URL. FinderTools then fetches and runs an obfuscated binary named 'SugarLoader,' which appears as both .sld and .log Mach-O executables.

SugarLoader establishes a connection with a command and control server to load the final payload, KandyKorn, into memory.

In the final stage, a loader known as HLoader is used. It impersonates Discord and employs macOS binary code-signing techniques seen in previous Lazarus campaigns. HLoader ensures persistence for SugarLoader by manipulating the real Discord app on the compromised system.

KandyKorn serves as the advanced final-stage payload, allowing Lazarus to access and steal data from the infected computer. It operates discreetly in the background, awaiting commands from the command and control server, and takes steps to minimize its trace on the system.

KandyKorn supports a range of commands, including terminating processes, gathering system information, listing directory contents, uploading and exfiltrating files, securely deleting files, and executing system commands, among others.

The Lazarus group primarily targets the cryptocurrency sector for financial gain, rather than engaging in espionage. The presence of KandyKorn highlights that macOS systems are also vulnerable to Lazarus' attacks, showcasing the group's ability to create sophisticated and inconspicuous malware tailored for Apple computers.

Signal's Meredith Whittaker Asserts: AI is Inherently a 'Surveillance Technology'

 

Many companies heavily invested in user data monetization also show a keen interest in AI technology. Signal's president, Meredith Whittaker, argues that this inclination is rooted in the fact that "AI is a surveillance technology."

Speaking at TechCrunch Disrupt 2023, Whittaker emphasized that AI is closely intertwined with the big data and targeting sector, dominated by giants like Google and Meta, as well as influential enterprise and defense corporations. 

She pointed out that AI amplifies the surveillance business model, an extension of the trend observed since the late '90s with the rise of surveillance advertising. According to her, AI serves to solidify and expand this model. She metaphorically described the relationship as a complete overlap in a Venn diagram.

Whittaker further highlighted that the utilization of AI itself is inherently surveillance-oriented. For instance, passing by a facial recognition camera equipped with pseudo-scientific emotion analysis results in the generation of data, accurate or not, about one's emotional state or character. These systems are ultimately tools of surveillance, marketed to entities like employers, governments, and border control, who hold sway over individuals' access to resources and opportunities.

Ironically, she pointed out that the very individuals whose data underpins these systems are often the ones responsible for organizing and annotating it. This step is crucial in the process of creating datasets for AI.

 Whittaker stressed that it's impossible to develop these systems without the labor of humans, who inform the ground truth of the data. This often involves tasks like reinforcement learning with human feedback, which she likened to a form of disguising labor in technological jargon. While this process collectively incurs high costs, individual workers are often paid meager wages. In essence, she revealed, the perceived intelligence behind these systems diminishes significantly when the curtain is pulled back.

However, not all AI and machine learning systems share the same level of exploitation. When asked if Signal incorporates any AI tools or procedures in its app or development work, Whittaker acknowledged the presence of a "small on-device model" that they didn't develop themselves, but rather acquired off the shelf. This model is utilized in Signal's face blur feature within their media editing toolkit. 

She noted that while it's not exceptionally effective, it aids in detecting faces in crowded photos and blurring them, ensuring that individuals' intimate biometric information isn't inadvertently disclosed on social media, particularly to entities like Clearview.

Whittaker concluded by emphasizing that while this is a commendable use of AI, it doesn't negate the negative aspects she discussed earlier. She emphasized that the economic motives driving the costly development and deployment of facial recognition technology would never limit its application to this singular purpose.

Investigation Exposes Covert Israeli Spyware Infecting Targets through Advertisements

 

Insanet, an Israeli software company, has reportedly developed a commercial product named Sherlock, capable of infiltrating devices through online advertisements to conduct surveillance on targets and gather data for its clients. 

This revelation comes from an investigation by Haaretz, which disclosed that the spyware system was sold to a non-democratic country. This marks the first public disclosure of Insanet and its surveillance software. Sherlock is capable of infiltrating devices running Microsoft Windows, Google Android, and Apple iOS, as per the provided marketing information.

According to journalist Omer Benjakob's findings, this is the first instance worldwide where a system of this nature is marketed as a technology rather than a service. Insanet obtained approval from Israel's Defense Ministry to globally market Sherlock as a military product, subject to stringent restrictions, including sales exclusively to Western nations. Even presenting it to potential clients in the West requires specific authorization from the Defense Ministry, which is not always granted.

Founded in 2019, Insanet is owned by individuals with backgrounds in the military and national defense. Its founders include Dani Arditi, former chief of Israel's National Security Council, and cyber entrepreneurs Ariel Eisen and Roy Lemkin. Despite attempts to reach out, Arditi and Lemkin did not respond to inquiries, and Eisen could not be reached for comment.

Insanet affirmed its adherence to Israeli law and strict regulatory guidelines. In marketing its surveillance software, Insanet collaborated with Candiru, an Israel-based spyware manufacturer previously sanctioned in the US. The combined offering includes Sherlock and Candiru's spyware, with the former priced at six million euros ($6.7 million, £5.2 million) for a client.

The Haaretz report cited a Candiru marketing document from 2019, confirming Sherlock's capability to breach Windows-based computers, iPhones, and Android devices. Traditionally, different companies specialized in breaching distinct devices, but this system demonstrates the ability to effectively breach any device.

The Electronic Frontier Foundation's Director of Activism, Jason Kelley, expressed concern over Insanet's use of advertising technology to infect devices and surveil targets. Dodgy online ads not only serve as potential carriers for malware but can also be tailored to specific groups of people, making it particularly worrisome.

Sherlock stands out for leveraging legal data collection and digital advertising technologies, commonly favored by Big Tech and online media, for government-level espionage. This differs from other spyware like NSO Group's Pegasus or Cytrox's Predator and Alien, which tend to be more precisely targeted.

Mayuresh Dani, Qualys' threat research manager, likened the threat to malvertising, where a malicious ad is broadly distributed to unsuspecting users. In this case, however, it involves a two-stage attack: first profiling users using advertising intelligence (AdInt) and then delivering malicious payloads via advertisements, making unsuspecting users vulnerable to such attacks.

Revolutionizing Everyday Life: The Transformative Potential of AI and Blockchain

 

Artificial intelligence (AI) and blockchain technology have emerged as two pivotal forces of innovation over the past decade, leaving a significant impact on diverse sectors like finance and supply chain management. The prospect of merging these technologies holds tremendous potential for unlocking even greater possibilities.

Although the integration of AI within the cryptocurrency realm is a relatively recent development, it demonstrates the promising potential for expansion. Forecasts suggest that the blockchain AI market could attain a valuation of $980 million by 2030.

Exploring below the potential applications of AI within blockchain reveals its capacity to bolster the crypto industry and facilitate its integration into mainstream finance.

Elevated Security and Fraud Detection

One domain where AI can play a crucial role is enhancing the security of blockchain transactions, resulting in more robust payment systems. Firstly, AI algorithms can scrutinize transaction data and patterns, preemptively identifying and preventing fraudulent activities on the blockchain.

Secondly, AI can leverage machine learning algorithms to reinforce transaction privacy. By analyzing substantial volumes of data, AI can uncover patterns indicative of potential data breaches or unauthorized account access. This enables businesses to proactively implement security measures, setting up automated alerts for suspicious behavior and safeguarding sensitive information in real time.

Instances of AI integration are already evident. Scorechain, a crypto-tracking platform, harnessed AI to enhance anti-money laundering transaction monitoring and fortify fraud prediction capabilities. CipherTrace, a Mastercard-backed blockchain security initiative, also adopted AI to assess risk profiles of crypto merchants based on on-chain data.
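As a hypothetical sketch of the kind of pattern analysis described above (and not the Scorechain or CipherTrace systems just mentioned), the example below fits an Isolation Forest to a few invented per-transaction features and flags the outliers.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transaction features: [amount, hops_from_known_exchange, hour_of_day]
normal = np.column_stack([
    rng.lognormal(mean=3.0, sigma=0.5, size=500),  # typical amounts
    rng.integers(1, 4, size=500),                  # short paths to known entities
    rng.integers(8, 22, size=500),                 # daytime activity
])
suspicious = np.array([
    [50_000.0, 12, 3],  # huge amount, long mixing path, middle of the night
    [48_500.0, 15, 4],
])
X = np.vstack([normal, suspicious])

# contamination sets the share of points the model will treat as anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 = outlier, 1 = inlier
print("flagged rows:", np.where(flags == -1)[0])
```

In practice the features would come from on-chain data (transfer graphs, counterparties, timing) and the flagged transactions would feed a human review queue rather than an automatic block.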

In essence, the amalgamation of AI algorithms and blockchain technology fosters a more dependable and trustworthy operational ecosystem for organizations.

Efficiency in Data Analysis and Management

AI can revolutionize data collection and analysis for enterprises. Blockchain, with its transparent and immutable information access, provides an efficient framework for swiftly acquiring accurate data. Here, AI can amplify this advantage by streamlining the data analysis process. AI-powered algorithms can rapidly process blockchain network data, identifying nuanced patterns that human analysts might overlook. The result is actionable insights to support business functions, accompanied by a significant reduction in manual processes, thereby optimizing operational efficiency.

Additionally, AI's integration can streamline supply chain management and financial transactions, automating tasks like invoicing and payment processing, eliminating intermediaries, and enhancing efficiency. AI can also ensure the authenticity and transparency of products on the blockchain, providing a shared record accessible to all network participants.

A case in point is IBM's blockchain-based platform introduced in 2020 for tracking food manufacturing and supply chain logistics, facilitating collaborative tracking and accounting among European manufacturers, distributors, and retailers.

Strengthening Decentralized Finance (DeFi)

The synergy of AI and blockchain can empower decentralized finance and Web3 by facilitating the creation of improved decentralized marketplaces. While blockchain's smart contracts automate processes and eliminate intermediaries, creating these contracts can be complex. AI algorithms, like ChatGPT, employ natural language processing to simplify smart contract creation, reducing errors, enhancing coding efficiency, and broadening access for new developers.

Moreover, AI can enhance user experiences in Web3 marketplaces by tailoring recommendations based on user search patterns. AI-powered chatbots and virtual assistants can enhance customer service and transaction facilitation, while blockchain technology ensures product authenticity.

AI's data analysis capabilities further contribute to identifying trends, predicting demand and supply patterns, and enhancing decision-making for Web3 marketplace participants.

Illustrating this integration is the example of Kering, a luxury goods company, which launched a marketplace combining AI-driven chatbot services with crypto payment options, enabling customers to use Ethereum for purchases.

Synergistic Future of AI and Blockchain

Though AI's adoption within the crypto sector is nascent, its potential applications are abundant. In DeFi and Web3, AI promises to enhance market segments and attract new users. Furthermore, coupling AI with blockchain technology offers significant potential for traditional organizations, enhancing business practices, user experiences, and decision-making.

In the upcoming months and years, the evolving collaboration between AI and blockchain is poised to yield further advancements, heralding a future of innovation and progress.

Amazon Executive Lacks Data for Return-to-Office Mandate

 

Amazon employees are expressing discontent over the company's recent decision to revoke remote work flexibility, and the situation has been exacerbated by comments made by a senior executive.

During an internal staff meeting, Mike Hopkins, the SVP of Amazon Video and Studios, admitted that there was no data to support the company's mandate for employees to return to the office. This stands in contrast to Amazon's reputation for data-driven decision making, leading to frustration among many workers.

The new mandate, announced in February, requires most employees to work in the office at least three days a week, reversing a previous commitment not to enforce physical office attendance.

Hopkins mentioned reasons for eliminating flexible work options, claiming that CEO Andy Jassy and other executives believe that employees perform better when working together in person. 

He also referred to a leadership principle encouraging employees to "have backbone, and disagree and commit," implying that now is the time to commit rather than disagree.

Despite data suggesting that remote work can increase productivity and employee happiness, Amazon's executives seem unwilling to consider these findings in their decision-making process.

Other companies are also pushing for a return to in-office work in 2023, possibly due to short-term financial considerations or a desire for increased control over employees.

Amazon workers have expressed their concerns through an internal petition, but the company appears determined to stick to its data-less decision, disregarding the disagreement from its employees.

Designers Still Have an Opportunity to Get AI Right

 

As ChatGPT attracts an unprecedented 1.8 billion monthly visitors, the immense potential it offers to shape our future world is undeniable.

However, amidst the rush to develop and release new AI technologies, an important question remains largely unaddressed: What kind of world are we creating?

The competition among companies to be first in the AI race often overshadows thoughtful consideration of potential risks and implications. Startups building applications on AI models like GPT-3 have not adequately addressed critical issues such as data privacy, content moderation, and harmful bias in their design processes.

Real-world examples highlight the need for more responsible AI design. For instance, creating AI bots that reinforce harmful behaviors or replacing human expertise with AI without considering the consequences can lead to unintended harmful effects.

Addressing these problems requires a cultural shift in the AI industry. While some companies may intentionally create exploitative products, many well-intentioned developers lack the necessary education and tools to build ethical and safe AI. 

Therefore, the responsibility lies with all individuals involved in AI development, regardless of their role or level of authority.

Companies must foster a culture of accountability and recruit designers with a growth mindset who can foresee the consequences of their choices. We should move away from prioritizing speed and focus on our values, making choices that align with our beliefs and respect user rights and privacy.

Designers need to understand the societal impact of AI and its potential consequences on racial and gender profiling, misinformation dissemination, and mental health crises. AI education should encompass fields like sociology, linguistics, and political science to instill a deeper understanding of human behavior and societal structures.

By embracing a more thoughtful and values-driven approach to AI design, we can shape a world where AI technologies contribute positively to society, bridging the gap between technical advancements and human welfare.

Singapore Explores Generative AI Use Cases Through Sandbox Options

 

Two sandboxes have been introduced in Singapore to facilitate the development and testing of generative artificial intelligence (AI) applications for government agencies and businesses. 

These sandboxes will be powered by Google Cloud's generative AI toolsets, including the Vertex AI platform, low-code developer tools, and graphics processing units (GPUs). Google will also provide pre-trained generative AI models, including its PaLM language model, AI models from partners, and open-source alternatives.
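As a rough sketch of what a sandbox participant might run against one of these pre-trained models, the example below uses the Vertex AI Python SDK's text-generation interface as it existed around the time of the announcement; the project ID, region, and prompt are placeholders, and the exact SDK surface may have changed since.

```python
# pip install google-cloud-aiplatform
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholders: replace with your own GCP project and a supported region.
vertexai.init(project="my-sandbox-project", location="us-central1")

# "text-bison" was the PaLM-family text model exposed through Vertex AI at the time.
model = TextGenerationModel.from_pretrained("text-bison")

response = model.predict(
    "Draft a two-sentence summary of a citizen's enquiry about renewing a work permit.",
    temperature=0.2,
    max_output_tokens=128,
)
print(response.text)
```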

The initiative is a result of a partnership agreement between the Singapore government and Google Cloud to establish an AI Government Cloud Cluster. The purpose of this cloud platform is to promote AI adoption in the public sector.

The two sandboxes will be provided at no cost for three months and will be available for up to 100 use cases or organizations. Selection for access to the sandboxes will occur through a series of workshops over 100 days, where participants will receive training from Google Cloud engineers to identify suitable use cases for generative AI.

The government sandbox will be administered by the Smart Nation and Digital Government Office (SNDGO), while the sandbox for local businesses will be managed by Digital Industry Singapore (DISG).

Singapore has been actively pursuing its national AI strategy since 2019, with over 4,000 researchers currently contributing to AI research. However, the challenge lies in translating this research into practical applications across different industries. The introduction of these sandboxes aims to address potential issues related to data, security, and responsible AI implementation.

Karan Bajwa, Google Cloud's Asia-Pacific vice president, emphasized the need for a different approach when deploying generative AI within organizations, requiring robust governance and data security. It is crucial to calibrate and fine-tune AI models for specific industries to ensure optimal performance and cost-effectiveness.

Several organizations, including the Ministry of Manpower, GovTech, American Express, PropertyGuru Group, and Tokopedia, have already signed up to participate in the sandbox initiatives.

GovTech, the public sector's CIO office, is leveraging generative AI for its virtual intelligence chat assistant platform (Vica). By using generative AI, GovTech has reduced training hours significantly and achieved more natural responses for its chatbots.

During a panel discussion at the launch, Jimmy Ng, CIO and head of group technology and operations at DBS Bank, emphasized the importance of training AI models with quality data to mitigate risks associated with large language models learning from publicly available data.

Overall, the introduction of these sandboxes is seen as a positive step to foster responsible AI development and application in Singapore's public and private sectors.

Google Reaches an Agreement with 40 States Over Location Tracking Practices

 

Google has consented to a $391.5 million settlement with 40 states over its use of location tracking, according to Oregon Attorney General Ellen Rosenblum. Even when users thought they had turned off location tracking in their account settings, Google continued to collect information about their whereabouts, according to Oregon's Attorney General's office. 

Commencing in 2023, the settlement requires Google to be more transparent with users and provide clearer location-tracking disclosures. The settlement was led by Rosenblum and Nebraska Attorney General Doug Peterson. As per the release, it is the largest consumer privacy settlement ever led by a group of attorneys general.

“Consistent with improvements we’ve made in recent years, we have settled this investigation which was based on outdated product policies that we changed years ago,” said Google spokesperson José Castañeda in a statement.

The basis of the investigation was revealed in a 2018 Associated Press report.

Rosenblum said in the release, “For years Google has prioritized profit over their users’ privacy. They have been crafty and deceptive. Consumers thought they had turned off their location tracking features on Google, but the company continued to secretly record their movements and use that information for advertisers.”

Google paid $85 million to settle a similar lawsuit with Arizona last month, and the company is facing additional location tracking lawsuits in Washington, D.C., Indiana, Texas, and Washington state. According to the four AGs, Google was using location data for its ad business. 

The lawsuits ask the court to order Google to hand over any algorithms developed with allegedly ill-gotten gains, as well as any monetary profits.

Google Kills its Game Streaming Service Stadia, Will Refund Purchases


About Stadia

Google is closing down its video game streaming service, Stadia, in January 2023. All purchases will be refunded, and the underlying technology will continue to be used in YouTube and other areas of its business; however, the consumer app and storefront will shut down a little over three years after launch, joining the long list of projects that Google has discontinued.

While Stadia's approach to streaming games was built on a solid technical foundation, it failed to gain the traction with users that Google expected, leading to the difficult decision to shut the service down.

Google's Response

Vice President Phil Harrison said that Google is grateful to the players who have been there since the beginning of Stadia. The company will refund all Stadia purchases made through the Google Store, as well as all game and add-on content purchases made through the Stadia store.

Players will continue to have access to their game libraries and can play until January 18, 2023, so they can finish their final play sessions.

Google further said that refunds are expected to be completed by mid-January, emphasizing that while Stadia is shutting down, the technology behind it will remain available to "industry partners" for other ventures, such as AT&T's recent effort to stream Batman: Arkham Knight to smartphones.

Many people had a hunch about Google's plans, but what is surprising is that Ubisoft announced "Assassin's Creed Mirage" will stream on Amazon's Luna service but not on Stadia, making it the first game in the blockbuster series to skip Google's platform.

The rise and fall of Stadia

When Stadia was first unveiled at the 2019 Game Developers Conference, Google talked a big game; however, it later became evident that Stadia wasn't quite up to it.

The tech was impressive; however, major features were missing, and the launch library was underwhelming. Stadia kept adding new games, most of them sold à la carte, to try to make the platform an appealing proposition for the casual audience it was built for.

However, Xbox Game Pass surfaced and combined a giant library with a mere monthly fee. Stadia, on the other hand, was struggling to bring big games to its platform, spending tens of millions to lure games like Red Dead Redemption 2. 

Google's next ventures

That doesn't mean Stadia was doomed from the beginning. Still, Google's track record, and Stadia's own history, make one wonder whether the company ever truly wanted to be in this business in the first place.

Stadia's first-party studios were shut down last year, abandoning projects still in pre-production and leaving developers who had relocated for the job feeling cheated by the company.

Harrison says Google is committed to gaming and will keep on investing in new tools, tech, and platforms that give a boost to developers, industry partners, cloud customers, and creators.