
Top Tech Firms Fined for Hiding SolarWinds Hack Impact

 



The US Securities and Exchange Commission fined four major technology companies (Unisys Corp, Avaya Holdings, Check Point Software, and Mimecast) for allegedly downplaying the severity of the cybersecurity risks they faced as a result of the notorious SolarWinds hack. The companies are accused of giving investors misleading information about the severity of breaches connected to the 2020 attack on SolarWinds' Orion software.

Companies Made Deceptive Filings

The SEC said the companies directly or indirectly misled investors about the extent and impact of the attacks. All four have reached settlements and will pay civil penalties: $4 million for Unisys, $1 million for Avaya, $995,000 for Check Point Software, and $990,000 for Mimecast.

The SEC said the companies knew their systems had been compromised through unauthorised access following the SolarWinds hack but downplayed the impact in public statements. For example, Unisys reportedly continued to describe cybersecurity risks as "theoretical" even after confirming two data breaches tied to the SolarWinds hack in which gigabytes of data were exfiltrated. Similarly, Avaya downplayed the severity of its breach, disclosing only limited access to email messages while investigators found that at least 145 files in its cloud storage had been compromised.

Particular Findings on Each Company

1. Unisys Corp: The SEC found that Unisys failed to fully disclose the nature of its cybersecurity risks even after suffering massive data exfiltration; the company's public disclosures continued to describe such risks as "theoretical".

2. Avaya Holdings: Avaya allegedly made misleading statements, reporting that only a minimal number of email messages had been accessed when there was evidence that the intrusion extended to files held in its cloud storage.

3. Check Point Software: The SEC charges that Check Point was aware of the hack but used vague language to downplay its severity, leaving investors underinformed about the true extent of the intrusion.

4. Mimecast: The SEC found that Mimecast made major omissions in its disclosures, including failing to disclose the nature of the code and the number of encrypted credentials the hackers accessed.

Background on the SolarWinds Breach

The SolarWinds hack has been attributed to APT29, a group linked to Russia's SVR intelligence agency. In 2019, the attackers gained unauthorised access to SolarWinds' Orion software platform and, between March and June 2020, distributed malicious updates that installed malware such as the Sunburst backdoor in "fewer than 18,000" customer instances, though far fewer were targeted for deeper exploitation.

Numerous U.S. government agencies and major companies subsequently confirmed they were breached in this campaign. These include Microsoft, cybersecurity firm FireEye, the Department of State, the Department of Homeland Security, the Department of Energy, the National Institutes of Health, and the National Nuclear Security Administration.

SEC's Stance on Transparency

The SEC's charges and fines serve as a warning to public companies to be transparent about security incidents that could affect investor trust. The four companies settled without admitting or denying wrongdoing, but the sizeable penalties signal how firmly the SEC intends to hold organisations accountable for providing accurate information about cybersecurity risks and incidents.

The case also underlines the need for tech firms to disclose cybersecurity issues more candidly as both investors and consumers face increasingly complex and pervasive cyber threats.


Geoffrey Hinton Discusses Risks and Societal Impacts of AI Advancements

 


Geoffrey Hinton, often referred to as the "godfather of artificial intelligence," has expressed grave concerns about the rapid advancements in AI technology, emphasising potential human-extinction level threats and significant job displacement. In an interview with BBC Newsnight, Hinton warned about the dangers posed by unregulated AI development and the societal repercussions of increased automation.

Hinton underscored the likelihood of AI taking over many mundane jobs, leading to widespread unemployment. He proposed the implementation of a universal basic income (UBI) as a countermeasure. UBI, a system where the government provides a set amount of money to every citizen regardless of their employment status, could help mitigate the economic impact on those whose jobs are rendered obsolete by AI. "I advised people in Downing Street that universal basic income was a good idea," Hinton revealed, arguing that while AI-driven productivity might boost overall wealth, the financial gains would predominantly benefit the wealthy, exacerbating inequality.

Extinction-Level Threats from AI

Hinton, who recently left his position at Google to speak more freely about AI dangers, reiterated his concerns about the existential risks AI poses. He pointed to the developments over the past year, indicating that governments have shown reluctance in regulating the military applications of AI. This, coupled with the fierce competition among tech companies to develop AI products quickly, raises the risk that safety measures may be insufficient.

Hinton estimated that within the next five to twenty years, there is a significant chance that humanity will face the challenge of AI attempting to take control. "My guess is in between five and twenty years from now there’s a probability of half that we’ll have to confront the problem of AI trying to take over," he stated. This scenario could lead to an "extinction-level threat" as AI progresses to become more intelligent than humans, potentially developing autonomous goals, such as self-replication and gaining control over resources.

Urgency for Regulation and Safety Measures

The AI pioneer stressed the need for urgent action to regulate AI development and ensure robust safety measures are in place. Without such precautions, Hinton fears the consequences could be dire. He emphasised the possibility of AI systems developing motivations that align with self-preservation and control, posing a fundamental threat to human existence.

Hinton’s warnings serve as a reminder of the dual-edged nature of technological progress. While AI has the potential to revolutionise industries and improve productivity, it also poses unprecedented risks. Policymakers, tech companies, and society at large must heed these warnings and work collaboratively to harness AI's benefits while mitigating its dangers.

In conclusion, Geoffrey Hinton's insights into the potential risks of AI underline the need for proactive measures to safeguard humanity's future. His advocacy for universal basic income reflects a pragmatic approach to addressing job displacement, while his call for stringent AI regulation highlights the urgent need to prevent catastrophic outcomes. As AI continues to advance, the balance between innovation and safety will be crucial in shaping a sustainable and equitable future.


Europe's Digital Markets Act Compels Tech Corporations to Adapt

 

Europeans now have the liberty to select their preferred online services, such as browsers, search engines, and iPhone apps, along with determining the usage of their personal online data. 

These changes stem from the implementation of the Digital Markets Act (DMA), a set of laws introduced by the European Union targeting major technology firms including Amazon, Apple, Microsoft, Google (under Alphabet), Meta (formerly Facebook), and ByteDance (owner of TikTok).

This legislation marks Europe's ongoing efforts to regulate large tech companies, requiring them to adapt their business practices. Notably, Apple has agreed to allow users to download smartphone apps from sources other than its App Store. The DMA applies to 22 services ranging from operating systems to messaging apps and social media platforms, affecting prominent offerings like Google Maps, YouTube, Amazon's Marketplace, Apple's Safari browser, Meta's Facebook, Instagram, WhatsApp, Microsoft Windows, and LinkedIn.

Companies found in violation of the DMA could face hefty fines, up to 10% of their global annual revenue (rising to 20% for repeat violations), and even potential breakup for serious, repeated breaches. The impact of these rules is not limited to Europe, as other countries, including Japan, Britain, Mexico, South Korea, Australia, Brazil, and India, are considering similar legislation to curb tech giants' dominance in online markets.

One significant change resulting from the DMA is Apple's decision to allow European iPhone users to download apps from sources beyond its App Store, a move the company had previously resisted. However, Apple will introduce a 55-cent fee for each iOS app downloaded from external stores, raising concerns among critics about the viability of alternative app platforms.

Furthermore, the DMA grants users greater freedom to choose their preferred online services and restricts companies from favouring their own offerings in search results. 

For instance, Google search results will now include listings from competing services like Expedia for searches related to hotels. Additionally, users can opt out of targeted advertising based on their online data, while messaging systems are required to be interoperable, forcing Meta to propose solutions for seamless communication between its platforms, Facebook Messenger and WhatsApp.

Exploring Blockchain's Revolutionary Impact on E-Commerce

 

The trend of choosing online shopping over traditional in-store visits is on the rise, with e-commerce transactions dominating the digital landscape. However, the security of these online interactions is not foolproof, as security breaches leading to unauthorized access to vast amounts of data become increasingly prevalent. This growing concern highlights the vulnerabilities in current network structures and the need for enhanced security measures.

Blockchain technology emerges as a solution to bolster the security of online transactions. Operating as a decentralized, peer-to-peer network, blockchain minimizes the risk of malicious activities by eliminating the need for trusted intermediaries. The technology's foundation lies in automated access control and a public ledger, ensuring secure interactions among participants. The encryption-heavy nature of blockchain adds a layer of legitimacy and authority to every transaction within the network.

Initially designed as part of bitcoin technology for decentralized currency, blockchain has found applications in various sectors such as public services, Internet of Things (IoT), banking, healthcare, and finance. Its distributed and decentralized nature inherently provides a higher level of security compared to traditional databases.

As the demand for secure communication methods in e-commerce grows, blockchain technology plays a pivotal role in ensuring the security, efficiency, and speed of transactions on online platforms. Unlike traditional transactions that rely on third-party validation, blockchain integration transforms industries like e-commerce, banking, and energy, ushering in new technologies at a rapid pace. The distributed ledger technology of blockchain safeguards the integrity and authenticity of transactions, mitigating the risks associated with data leaks.

The intersection of blockchain and e-commerce is particularly crucial in the context of a data-driven world. Traditional centralized entities often control and manipulate user data without much user input, storing extensive personal information. Blockchain's decentralized and secure approach enhances the safety of conducting transactions and storing digital assets in the e-commerce landscape.

The transformative impact of blockchain on e-commerce is evident in its ability to optimize business processes, reduce operational costs, and improve overall efficiency. The technology's applications, ranging from supply chain management to financial services, bring advantages such as transparent business operations and secure, tamper-proof transaction records.

The evolution of the internet, transitioning from a tool for educational and military purposes to a platform hosting commercial applications, has led to the dominance of e-commerce, a trend accelerated by the global COVID-19 pandemic. Modern businesses leverage the internet for market research, customer service, product distribution, and issue resolution, resulting in increased efficiency and market transparency.

Blockchain, as a decentralized, peer-to-peer database distributed across a network of nodes, has significantly reshaped internet-based trade. Its cryptographic storage of transaction logs ensures an unchangeable record, resilient to disruptions in the digital age. Blockchain's current applications in digitizing financial assets highlight its potential for secure and distributable audit trails, particularly in payment and transaction systems.
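The tamper-evidence property described above can be illustrated with a minimal hash chain. This is an illustrative sketch, not production blockchain code: the block layout and function names are hypothetical, and real systems add consensus, signatures, and networking on top.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents; because each block includes the previous
    # block's hash, altering any transaction breaks every subsequent link.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev, "transactions": transactions}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return chain

def verify(chain):
    # Recompute each block's hash and check the link to its predecessor.
    for i, block in enumerate(chain):
        expected = block_hash({k: v for k, v in block.items() if k != "hash"})
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, [{"buyer": "alice", "item": "book", "amount": 12}])
add_block(chain, [{"buyer": "bob", "item": "pen", "amount": 2}])
assert verify(chain)

# Tampering with an earlier transaction invalidates the whole chain.
chain[0]["transactions"][0]["amount"] = 9999
assert not verify(chain)
```

This is the property that makes a shared ledger an audit trail: a participant cannot quietly rewrite an old order or payment, because every later block's hash would no longer match.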

The e-commerce sector, facing challenges since its inception, seeks a secure technological foundation, a role poised to be filled by blockchain technology. The decentralized nature of blockchain enhances operational efficiency by streamlining workflows, especially with intermediaries like logistics and payment processors. It introduces transparency, recording every transaction on a shared ledger, ensuring traceability and building trust among participants.

Cost-effectiveness is another advantage offered by blockchain in e-commerce, as it enables sellers to bypass intermediaries and associated transaction fees through cryptocurrencies like Bitcoin. The heightened security provided by blockchain, built on Distributed Ledger Technology (DLT), becomes indispensable in an industry where data breaches can lead to significant revenue losses and damage to brand reputation.

Blockchain's applications in e-commerce span various aspects, including inventory control, digital ownership, loyalty reward programs, identity management, supply chain tracking, and warranty management. These applications set new standards for online businesses, promising a more secure, efficient, and customer-centric e-commerce world.

As blockchain continues to evolve, its potential impact on the e-commerce sector is expected to grow. The technology holds the promise of unlocking more innovative applications, fostering an environment where trust, efficiency, and customer satisfaction take center stage. The future of e-commerce, driven by blockchain, transcends mere transactions; it aims to create a seamless, secure, and user-centric shopping experience that adapts to the evolving needs of businesses and consumers in the digital age.

India's DPDP Act: Industry's Compliance Challenges and Concerns

As India's Digital Personal Data Protection (DPDP) Act transitions from proposal to legal mandate, the business community is grappling with the intricacies of compliance and its far-reaching implications. While the government maintains that companies have had a reasonable timeframe to align with the new regulations, industry insiders are voicing their apprehensions and advocating for extensions in implementation.

According to a recent LiveMint report, the government maintains that businesses have been given a fair amount of time to adjust to the DPDP regulations. The actual situation, though, seems more nuanced. Industry insiders emphasize the difficulties firms encounter in comprehending and complying with the complex mandates of the DPDP Act.

The Big Tech Alliance, as reported in Inc42, has proposed a 12 to 18-month extension for compliance, underscoring the intricacies involved in integrating DPDP guidelines into existing operations. The alliance contends that the complexity of data handling and the need for sophisticated infrastructure demand a more extended transition period.

An EY study reveals that a majority of organizations express deep concerns about the impact of the data law. This highlights the need for clarity in the interpretation and application of DPDP regulations.

In another development, the IT Minister announced that draft rules under the privacy law are nearly ready. This impending release signifies a pivotal moment in the DPDP journey, as it will provide a clearer roadmap for businesses to follow.

As the compliance deadline looms, it is evident that there is a pressing need for collaborative efforts between the government and the industry to ensure a smooth transition. This involves not only extending timelines but also providing comprehensive guidance and support to businesses navigating the intricacies of the DPDP Act.

Despite the government's claim that businesses have enough time to get ready for DPDP compliance, industry opinion suggests otherwise. The complexities of data privacy laws and the worries raised by significant groups highlight the difficulties that companies face. It is imperative that the government and industry work together to resolve these issues and enable a smooth transition to the DPDP compliance period.

Guidelines on What Not to Share with ChatGPT: A Formal Overview

 


A seemingly simple tool like ChatGPT has remarkable power, and it has profoundly changed how we interact with computers. There are, however, limitations that are important to understand and bear in mind when using it.

ChatGPT has driven a massive increase in OpenAI's revenue, which reportedly reached around 200 million dollars in 2023 and is expected to exceed one billion dollars by the end of 2024.

The ChatGPT application is built on powerful algorithms capable of generating almost any text users ask for, from a simple arithmetic problem to a complex question about rocketry. As AI-powered chatbots become ever more prevalent, it is crucial to acknowledge both the advantages artificial intelligence can offer and its shortcomings.

To use AI chatbots safely, it is essential to understand the inherent risks associated with them, such as the potential for cyber attacks and privacy issues. A recent change in Google's privacy policy made clear that the company may use data collected from public web posts to train its AI models and tools.

It is equally troubling that ChatGPT retains chat logs to improve its model and service. There is, however, a way to address this concern: avoid sharing certain kinds of information with AI-based chatbots. Jeffrey Chester, executive director of the Center for Digital Democracy, a digital rights advocacy organisation, said consumers should view these tools with suspicion at least, since, like so many other popular technologies, they are heavily influenced by the marketing and advertising industries.

The Limits Of ChatGPT 


Without browsing enabled (a feature of ChatGPT Plus), the system generates responses based on the patterns and information it learned during training, which drew on a wide range of internet text up to its September 2021 cut-off.

Even so, it cannot understand context the way people do and does not "know" anything in the human sense. ChatGPT is famous for producing impressive and relevant responses much of the time, but it is not infallible, and the answers it produces can be incorrect or unintelligible for several reasons.

Its proficiency largely depends on the quality and clarity of the prompt given. 

1. Banking Credentials 


The Consumer Financial Protection Bureau (CFPB) published a report on June 6 about the limitations of chatbot technology as questions grow more complex. According to the report, financial institutions that rely on chatbots risk violating federal consumer protection laws.

The CFPB notes that consumer complaints have risen over a variety of issues, including resolving disputes, obtaining accurate information, receiving adequate customer service, reaching human representatives, and keeping personal information secure. In light of this, the CFPB advises financial institutions against relying solely on chatbots.

2. Personally Identifiable Information (PII)


Users should be careful whenever sharing sensitive personal information that could identify them, in order to protect their privacy and minimise the risk of misuse. This category includes a person's full name, home address, social security number, credit card number, and any other details that can identify them as an individual. Protecting these sensitive details is paramount to preserving privacy and preventing potential harm from unauthorised use.
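One practical precaution is to scrub sensitive fields from a prompt before it ever reaches a chatbot. The sketch below uses simple, assumed regular-expression patterns for a few common PII formats (US-style SSNs, 16-digit card numbers, email addresses); real redaction would need far broader detection and validation.

```python
import re

# Hypothetical patterns for a few common PII formats; real-world detection
# needs much wider coverage and validation (e.g. Luhn checks for cards).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    # Replace each detected PII value with a labelled placeholder
    # before the text is sent anywhere.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = "My SSN is 123-45-6789 and my email is jane@example.com"
print(redact(prompt))
# → My SSN is [REDACTED-SSN] and my email is [REDACTED-EMAIL]
```

Running the sanitised prompt through the chatbot instead of the original means that even if chat logs are retained, the most sensitive values never leave your machine.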

3. Confidential information about the user's workplace


Users should exercise caution and refrain from sharing private company information when interacting with AI chatbots. It is crucial to understand the potential risks associated with divulging sensitive data to these virtual assistants. 

Major tech companies like Apple, Samsung, JPMorgan, and Google have even implemented stringent policies to prohibit the use of AI chatbots by their employees, recognizing the importance of protecting confidential information. 

A recent Bloomberg article shed light on an unfortunate incident involving a Samsung employee who inadvertently uploaded confidential code to a generative AI platform while utilizing ChatGPT for coding tasks. This breach resulted in the unauthorized disclosure of private information about Samsung, which subsequently led to the company imposing a complete ban on the use of AI chatbots. 

Such incidents highlight the need for heightened vigilance and adherence to security measures when leveraging AI chatbots. 

4. Passwords and security codes 


If a chatbot asks for passwords, PINs, security codes, or any other confidential access credentials, do not provide them. Even though these chatbots are designed with privacy in mind, it is prudent to prioritise your safety and refrain from sharing such sensitive information.

Keeping your passwords and access credentials secure is paramount to protecting your accounts and your personal information from unauthorised access or misuse.

In an age marked by the progress of AI chatbot technology, the utmost importance lies in the careful protection of personal and sensitive information. This report underscores the imperative necessity for engaging with AI-driven virtual assistants in a responsible and cautious manner, with the primary objective being the preservation of privacy and the integrity of data. It is advisable to remain well-informed and to exercise prudence when interacting with these potent technological tools.

Olympus Suffers Second Cyberattack in 2021

 

Olympus, a Japanese tech giant, disclosed that it was hit by a cyberattack that forced it to take down its IT systems in the United States, Canada, and Latin America. 

Founded in 1919, Olympus is a technology leader in the medical sector that develops cutting-edge opto-digital products, life science instruments, and consumer electronics. On October 12, Olympus announced on its website that it was investigating a potential cybersecurity incident discovered on October 10 and was working with the utmost priority to resolve it.

The company stated, "Upon detection of suspicious activity, we immediately mobilized a specialized response team including forensics experts, and we are currently working with the highest priority to resolve this issue." 

"As part of the investigation and containment, we have suspended affected systems and have informed the relevant external partners. The current results of our investigation indicate the incident was contained to the Americas with no known impact to other regions." 

The firm did not say whether customer or corporate data was accessed or stolen in the "potential cybersecurity incident," but added that it would share updated information on the attack as soon as it becomes available.

Olympus added, "We are working with appropriate third parties on this situation and will continue to take all necessary measures to serve our customers and business partners in a secure way. Protecting our customers and partners and maintaining their trust in us is our highest priority." 

According to an Olympus spokesperson, the ongoing investigation has so far found no indication of data loss.

This incident comes after the ransomware assault on Olympus' EMEA (Europe, Middle East, and Africa) IT infrastructure in early September. Although Olympus did not disclose the identities of the attackers, ransom notes discovered on damaged computers showed that BlackMatter ransomware operators orchestrated the attack. 

The identical ransom notes directed victims to a Tor website previously used by the BlackMatter group to connect with its victims. Although Olympus did not provide many specifics about the nature of the attack that impacted its Americas IT systems, ransomware groups are notorious for carrying out their operations on weekends and holidays in order to minimize detection. 

In an August joint alert, the FBI and CISA stated that they had "observed an increase in highly impactful ransomware attacks occurring on holidays and weekends—when offices are normally closed—in the United States, as recently as the Fourth of July holiday in 2021."

Instagram to roll out new features to counter cyberbullying

Bullying. Sadly, it's a problem not restricted to the school grounds of our younger and geekier selves, but one that tends to follow people around regardless of age and even privacy settings. Cyberbullying has become more widespread than traditional bullying and is often just as traumatic for its victims, and it is a trend tech companies are increasingly trying to address.

Instagram has new features (via The Verge) on the way that it hopes will address cyberbullying: the ability to effectively "shadow ban" other users, and a new artificial intelligence designed to flag potentially offensive comments. Both are expected to enter testing soon.

The "shadow ban" will essentially let a user restrict another user without that person realising they have been banned. The restricted user will still be able to see your posts and comment on them, but their comments will be visible only to themselves, meaning you and the people you actually want to interact with can keep talking in peace while they wonder why their snarky comments get no response.
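The visibility rule behind such a restriction boils down to a simple check: a restricted user's comment is shown only to its own author. The data model and function here are hypothetical, purely to illustrate the logic rather than Instagram's actual implementation:

```python
# Hypothetical data model: each account keeps a set of user IDs it has restricted.
restricted_by = {
    "alice": {"troll42"},  # alice has restricted troll42
}

def comment_visible_to(viewer, commenter, post_owner):
    # In this sketch, a restricted user's comments are visible only to
    # themselves; the post owner and everyone else never see them.
    if commenter in restricted_by.get(post_owner, set()):
        return viewer == commenter
    return True

# troll42's comment on alice's post is visible only to troll42.
assert comment_visible_to("troll42", "troll42", "alice") is True
assert comment_visible_to("alice", "troll42", "alice") is False
assert comment_visible_to("bob", "troll42", "alice") is False
# Unrestricted users' comments remain visible to everyone.
assert comment_visible_to("bob", "carol", "alice") is True
```

The point of filtering at view time rather than deleting the comment is exactly what makes the ban "shadow": the restricted user sees nothing unusual and has no signal to escalate or create a new account.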

Alongside this feature, Instagram also hopes to use a new AI to flag potentially offensive comments and ask commenters whether they really want to post them. They will be given the opportunity to undo their comment, and Instagram says that during tests this encouraged "some" people to reflect on and retract what they wrote. A nice touch, though given the emotional state most bullies are in, it is unlikely to change course for everyone. Still, it is better than nothing.
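That flag-then-confirm flow might look something like the following sketch, where `looks_offensive` is a toy keyword check standing in for Instagram's real classifier, and `confirm` stands in for the UI prompt shown to the commenter:

```python
# Toy stand-in for the real AI classifier; purely illustrative.
OFFENSIVE_WORDS = {"idiot", "loser", "ugly"}

def looks_offensive(comment):
    return any(word in comment.lower() for word in OFFENSIVE_WORDS)

def submit_comment(comment, confirm):
    """Post the comment, but if it is flagged, first ask the commenter
    (via the confirm callback) whether they really want to proceed."""
    if looks_offensive(comment):
        if not confirm("Are you sure you want to post this?"):
            return None  # commenter chose to undo
    return comment  # posted as-is

# A flagged comment the user decides to withdraw:
assert submit_comment("you are an idiot", lambda q: False) is None
# A harmless comment posts without any prompt:
assert submit_comment("great photo!", lambda q: True) == "great photo!"
```

The design choice worth noting is that nothing is blocked outright: the system only adds friction at the moment of posting, leaving the final decision with the commenter.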

Instagram has already tested multiple bullying-focused features, including an offensive comment filter that automatically screens comments that "contain attacks on a person's appearance or character, as well as threats to a person's well-being or health," along with a similar feature for photos and captions. This shows a real effort by Facebook to tackle the problem on its platform.