
Hackers Are Sending Fake Police Data Requests To Tech Giants To Steal People's Private Data

 

The FBI has issued a warning that hackers are collecting sensitive user information, such as emails and contact details, from US-based tech firms by hacking government and police email addresses in order to file "emergency" data requests. 

The FBI's public notice, filed last week, is an unusual admission by the federal government about the threat posed by phoney emergency data requests, a legal process designed to help police and federal authorities obtain information from companies in order to respond to immediate threats to a person's safety or property.

The misuse of emergency data requests is not new, and it has drawn significant attention in recent years. The FBI now warns that it noticed an "uptick" in criminal posts online advertising access to or carrying out false emergency data requests around August and is going public to raise awareness.

“Cyber-criminals are likely gaining access to compromised US and foreign government email addresses and using them to conduct fraudulent emergency data requests to US based companies, exposing the personal information of customers to further use for criminal purposes,” reads the FBI’s advisory. 

Police and law enforcement agencies in the United States generally require some form of legal basis to seek and obtain access to private data held by companies. Typically, police must provide sufficient evidence of a potential crime before a U.S. court will grant a search warrant authorising them to collect that information from a private company. 

Police can also issue subpoenas, which do not require court approval, compelling businesses to turn over limited amounts of information about a user, such as their username, account logins, email addresses, phone numbers and, in some cases, approximate location. 

There are also emergency data requests, which allow law enforcement to obtain a person's information from a company when there is an immediate threat and insufficient time to secure a court order. Federal authorities say some cybercriminals are abusing these emergency requests.

The FBI stated in its advisory that it had spotted many public posts from known hackers in 2023 and 2024 claiming access to email accounts used by US law enforcement and several foreign governments. According to the FBI, this access was later used to issue fake subpoenas and other legal demands to corporations in the United States in search of private user data kept on their systems. 

The cybercriminals were able to pose as law enforcement by using hacked police accounts to email businesses with requests for user data. Some of the requests cited false threats, such as allegations of human trafficking and, in one instance, a warning that a person would "suffer greatly or die" unless the company in question handed over the requested information.

The FBI claimed that because the hackers had gained access to law enforcement accounts, they were able to create subpoenas that appeared authentic and forced companies to divulge user data, including phone numbers, emails, and usernames. However, the FBI noted that not all fraudulent attempts to submit emergency data demands were successful.

World's First AI Law: A Tough Blow for Tech Giants

In May, EU member states, lawmakers and the European Commission, the EU's executive body, finalized the AI Act, a landmark law that governs how companies develop, deploy and apply AI. 

The European Union's major AI law goes into effect on Thursday, bringing significant implications for American technology companies.

About the AI Act

The AI Act is a piece of EU legislation that regulates artificial intelligence. First proposed by the European Commission in 2020, the law seeks to address the harmful effects of AI.

The legislation establishes a comprehensive and standardized regulatory framework for AI within the EU.

It will largely affect large U.S. tech companies, which are currently the main architects and developers of the most advanced AI systems.

However, the laws will apply to a wide range of enterprises, including non-technology firms.

Tanguy Van Overstraeten, head of law firm Linklaters' technology, media and telecommunications practice in Brussels, described the EU AI Act as "the first of its kind in the world." It is expected to affect many enterprises, particularly those building AI systems, as well as those deploying or simply using them in certain scenarios, he said.

High-risk and low-risk AI systems

High-risk AI systems include self-driving cars, medical devices, loan-decisioning systems, educational scoring, and remote biometric identification systems.

The regulation also bans AI applications deemed to pose an "unacceptable" level of risk. These include "social scoring" systems that rank citizens based on data gathering and analysis, predictive policing, and the use of emotion-detection technology in workplaces or schools.

Implication for US tech firms

Amid a global craze over artificial intelligence, US behemoths such as Microsoft, Google, Amazon, Apple, and Meta have been aggressively working with and investing billions of dollars in firms they believe can lead the field.

Given the massive computer infrastructure required to train and run AI models, cloud platforms such as Microsoft Azure, Amazon Web Services, and Google Cloud are critical to supporting AI development.

In this regard, Big Tech companies will likely be among the most aggressively targeted names under the new regulations.

Generative AI and EU

The EU AI Act classifies generative AI as "general-purpose" artificial intelligence, a label for tools designed to perform a wide range of tasks at a level comparable to, if not better than, a human.

General-purpose AI models include but are not limited to OpenAI's GPT, Google's Gemini, and Anthropic's Claude.

The AI Act imposes stringent standards on these systems, including compliance with EU copyright law, disclosure of how models are trained, routine testing, and proper cybersecurity measures.

Canada Attempts to Control Big Tech as Data Gets More Potent

 

Whether you're booking a flight, opening a new bank account, or buying groceries, a select few well-known brands control the majority of the market. What this means for the nation's goods—and prices—is examined in the Canadian Press series Competition Ltd. 

Marc Poirier co-founded the search management platform Acquisio 20 years ago, but he will never forget how Google sparked the company's decline. 

It was 2015. The tech behemoth had recently reorganised its companies under the Alphabet brand and was assessing whether recent pushes into riskier projects like self-driving vehicles, internet-beaming balloons and smart city infrastructure could match the success of its search engine business. As advertising revenue and growth stagnated and Google felt pressure to boost earnings, Poirier's Brossard, Quebec-based business found itself in a lose-lose situation.

“I experienced first-hand Google going from partner to fierce competitor,” Poirier stated. “They started selling the same stuff that we built.” 

Sales growth at Acquisio, which sold software to help advertisers manage bids and budgets for Google, Yahoo and Microsoft search campaigns, abruptly stalled and then began to decline. Poirier started considering a sale, and in 2017 he completed one through a deal with Web.com. 

Regulators all across the world have made controlling Big Tech a primary priority because of incidents like Poirier's and growing worries about the sheer scale and influence that tech companies have over users, their privacy, communications, and data. 

Google declined to comment on Poirier's particular situation, but spokesman Shay Purdy pointed out that Alphabet underwent significant changes between 2015 and 2017, including its complex restructuring, and claimed that external factors at the time included an economic downturn following a spike in oil prices. 

Many are hoping that an ongoing review of the country's Competition Act will level the playing field for digital businesses, even as Canada moves closer to new legislation that would shift some revenue from social media giants to news publishers and better safeguard consumer privacy. 

Investigating and dismantling monopolies is not simple, though, in a sector that is constantly changing and once operated under the Silicon Valley motto of "move fast and break things." Tech companies, aware that regulators are on their heels, are making the work even harder. 

The Competition Bureau, Canada's monopoly watchdog, has been given a lot of the job. It has looked into issues including Ticketmaster's deceptive price advertising, Thoma Bravo's acquisition of the oil and gas software business Aucerna, Amazon's market dominance, and other issues. But if real reform is to take place, according to the bureau and tech observers, the federal government must give the regulator additional authority. 

Collecting evidence of anti-competitive behaviour is frequently the bureau's first obstacle. Technology companies are known for keeping their operations under wraps, relying on strict non-disclosure agreements and limiting employee access to information to prevent product leaks ahead of buzzy releases or to stop competitors from gaining an edge. 

Krista McWhinnie says companies have become progressively more deliberate about how they record their decision-making, avoiding anything that even hints at anticompetitive intent, which makes it harder to trace a paper trail. 

“That alone can stop us from being able to remedy conduct that is having potentially quite a big impact in the market,” stated the deputy commissioner of the bureau’s Monopolistic Practices Directorate. 

Even if the bureau has evidence that a company's practices are seriously harming competition, that alone is insufficient to justify action under Canadian competition law. The bureau must also show that the company intended to engage in anticompetitive conduct, a requirement that is "a very high bar" and "relatively unusual" compared with other nations. 

According to McWhinnie, "that's frequently a really difficult task that requires a lot of resources." It takes a lot of time, which is one reason these cases are difficult to bring quickly. The bureau has come under fire in recent months for moving too slowly on its investigation into Google's possible anti-competitive practices in the online display advertising market, which it opened in October 2021. 

The investigation is predicated on the hypothesis that Google's hegemony in online advertising may be limiting the development of rivals, leading to higher costs, less variety, and less innovation, as well as harming advertisers, news publishers, and consumers. 

“Every day that Google is allowed to monopolise ad revenue, more harm is inflicted on the Canadian news industry, which has a negative impact on democracy as a whole,” stated Lana Payne, Unifor’s national president, in a press release. 

Google pointed The Canadian Press to a study on the economic impact of its services, which found that the use of its search, cloud, advertising and YouTube products generated $37 billion in revenue for Canadian companies, non-profits, publishers, creators and developers. That is equal to 1.5 per cent of Canada's gross domestic product, more than the combined economic impact of the forestry and aviation industries, according to the statement.

Jim Balsillie, a former BlackBerry CEO and current head of the Council of Canadian Innovators, believes Canada's competition problems stem from a lack of tools and a subpar approach to defending consumer rights in the digital age. What gives many large internet companies their power and control, he says, is the sheer quantity and specificity of the consumer data they collect, together with their ability to apply AI to that data to glean personal insights and sway public opinion.

Data gathering isn't only a Big Tech strategy. Balsillie cites pharmacies as having reams of health information on customers, cellular providers as knowing your whereabouts to within 10 metres, and banks as knowing what you're buying. 

According to Jennifer Quaid, estimating the potential worth of all that data—a crucial component of figuring out whether businesses are engaging in anticompetitive behavior—is not an easy task.

It's challenging to quantify the effects of mergers or tech company policies on innovation, creativity, and consumer behaviour, especially when the company deals in data "that isn't necessarily valuable at the time but ends up becoming valuable when it's aggregated with other information," said the competition law professor at the University of Ottawa's Civil Law Section.

Quaid and Balsillie agree that the problem would be simpler if the Competition Bureau had a wider array of tools at its disposal, enabling it to impose more significant fines, and if some of the regulatory regimes that have allowed monopolies to flourish unchecked were overhauled.