When Bell Labs introduced the transistor in 1947, few could have predicted its pivotal role in shaping the digital age. Today, quantum computing stands at a similar crossroads, poised to revolutionise industries by solving some of the most complex problems with astonishing speed. Yet, several key challenges must be overcome to unlock its full potential.
The Promise of Quantum Computing
Quantum computers operate on principles of quantum physics, allowing them to process information in ways that classical computers cannot. Unlike traditional computers, which use bits that represent either 0 or 1, quantum computers use qubits that can exist in multiple states simultaneously. This capability enables quantum computers to perform certain calculations exponentially faster than today’s most advanced supercomputers.
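To make the contrast concrete, here is a minimal sketch, using nothing but NumPy, of how a qubit's state can be represented and measured. The gate and state names follow standard textbook notation rather than any particular quantum SDK.

```python
import numpy as np

# A qubit is a normalised two-component complex vector: a|0> + b|1>.
# Unlike a classical bit, both amplitudes can be non-zero at once.
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: takes a definite |0> into an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0  # (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- a 50/50 chance of reading 0 or 1
```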
This leap in computational power could revolutionise industries by simulating complex systems that are currently beyond the reach of classical computers. For example, quantum computing could dramatically accelerate the development of new pharmaceuticals by modelling molecular interactions more precisely, reducing the costly and time-consuming trial-and-error process. Similarly, quantum computers could optimise global logistics networks, leading to more efficient and sustainable operations across industries such as shipping and telecommunications.
Although these transformative applications are not yet a reality, the rapid pace of advancement suggests that quantum computers could begin addressing real-world problems by the 2030s.
Overcoming the Challenges
Despite its promise, quantum computing faces technical challenges, primarily related to the stability of qubits, entanglement, and scalability.
Qubits, the fundamental units of quantum computation, are highly sensitive to environmental fluctuations, which makes them prone to errors. Currently, the information stored in a qubit is often lost within a fraction of a second, leading to error rates that are much higher than those of classical bits. To make quantum computing viable, researchers must develop methods to stabilise or correct these errors, ensuring qubits can retain information long enough to perform useful calculations.
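The classic remedy is redundancy. As a rough illustration (a classical analogy of the three-qubit bit-flip code, not a full quantum error-correction scheme), the sketch below encodes one logical bit into three physical bits and recovers it by majority vote; the flip probability is an arbitrary example value.

```python
import random
from collections import Counter

def encode(bit):
    """Bit-flip repetition code (classical analogy): copy the logical bit."""
    return [bit] * 3

def noisy_channel(codeword, p_flip=0.1):
    """Each physical bit independently flips with probability p_flip."""
    return [b ^ 1 if random.random() < p_flip else b for b in codeword]

def decode(codeword):
    """Majority vote recovers the logical bit if at most one flip occurred."""
    return Counter(codeword).most_common(1)[0][0]

trials = 100_000
errors = sum(decode(noisy_channel(encode(0))) != 0 for _ in range(trials))
print(f"logical error rate: {errors / trials:.4f}")  # ~3p^2 - 2p^3 = 0.028
```

With a 10% physical error rate, the encoded bit is wrong only about 2.8% of the time, which is the basic trade researchers hope to make at scale with genuinely quantum codes.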
Entanglement, another cornerstone of quantum computing, involves linking qubits in a way that their states become interdependent. For quantum computers to solve complex problems, they require vast networks of entangled qubits that can communicate effectively. However, creating and maintaining such large-scale entanglement remains a significant hurdle. Advances in topological quantum computing, which promises more stable qubits, may provide a solution, but this technology is still in its infancy.
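For intuition, the following sketch extends the state-vector picture above to two qubits prepared in a Bell state; measuring such a pair can only ever yield correlated outcomes (00 or 11), which is the interdependence described here.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2): two maximally entangled qubits.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)  # amplitudes for |00> and |11>

probs = np.abs(bell) ** 2
for idx, p in enumerate(probs):
    if p > 0:
        print(f"|{idx:02b}>: {p:.2f}")  # only 00 and 11, each with prob 0.5
```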
Scalability is the final major challenge. Present-day quantum computers, even the smallest ones, require substantial energy and infrastructure to operate. Realising the full potential of quantum computing will necessitate either making these systems more efficient or finding ways to connect multiple quantum computers to work together seamlessly, thereby increasing their combined computational power.
As quantum computing progresses, so too must the measures we take to secure data. The very power that makes quantum computers so promising also makes them a potential threat if used maliciously. Specifically, a cryptographically relevant quantum computer (CRQC) could break many of the encryption methods currently used to protect sensitive data. According to a report by the Global Risk Institute, there is an 11% chance that a CRQC could compromise commonly used encryption methods like RSA-2048 within five years, with the risk rising to over 30% within a decade.
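The reason RSA is exposed is that its security rests entirely on the difficulty of factoring. The toy example below (deliberately tiny primes, purely illustrative) shows how recovering the factors of the modulus immediately yields the private key; a CRQC running Shor's algorithm would achieve the same against a 2048-bit modulus.

```python
# Toy RSA with tiny primes: real RSA relies on factoring n being hard.
p, q = 61, 53
n, e = p * q, 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)

# An attacker who can factor n rebuilds the private key directly.
# At 2048 bits no classical method is feasible, but Shor's algorithm
# on a CRQC would factor n in polynomial time.
for f in range(2, n):
    if n % f == 0:
        p_found, q_found = f, n // f
        break
d_broken = pow(e, -1, (p_found - 1) * (q_found - 1))
print(pow(cipher, d_broken, n))  # 42 -- message recovered
```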
To mitigate these risks, governments and regulatory bodies worldwide are establishing guidelines for quantum-safe practices. These initiatives aim to develop quantum-safe solutions that ensure secure communication and data protection in the quantum era. In Europe, South Korea, and Singapore, for example, efforts are underway to create Quantum-Safe Networks (QSN), which use multiple layers of encryption and quantum key distribution (QKD) to safeguard data against future quantum threats.
Building a Quantum-Safe Infrastructure
Developing a quantum-safe infrastructure is becoming increasingly urgent for industries that rely heavily on secure data, such as finance, healthcare, and defence. Quantum-safe networks use advanced technologies like QKD and post-quantum cryptography (PQC) to create a robust defence against potential quantum threats. These networks are designed with a defence-in-depth approach, incorporating multiple layers of encryption to protect against attacks.
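One common hybrid pattern, sketched below under assumed inputs (the secret values and context label are placeholders, not drawn from any specific QSN deployment), is to derive a single session key from two independently established secrets, so an attacker must break both the classical layer and the quantum-resistant layer.

```python
import hashlib
import hmac
import os

def combine_keys(classical_secret: bytes, quantum_secret: bytes,
                 context: bytes = b"hybrid-session-key") -> bytes:
    """Derive one session key from two independently established secrets.

    Defence-in-depth: recovering the session key requires breaking BOTH
    the classical exchange (e.g. ECDH) and the quantum-resistant layer
    (a PQC KEM, or a key delivered via QKD).
    """
    # Extract-then-expand, in the spirit of HKDF (RFC 5869).
    prk = hmac.new(b"\x00" * 32, classical_secret + quantum_secret,
                   hashlib.sha256).digest()
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

# Stand-ins for real key exchanges (random values for illustration only).
ecdh_secret = os.urandom(32)  # from a classical Diffie-Hellman exchange
pqc_secret = os.urandom(32)   # from a PQC KEM such as ML-KEM, or from QKD

session_key = combine_keys(ecdh_secret, pqc_secret)
print(session_key.hex())
```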
Several countries and companies are already taking steps to prepare for a quantum-secure future. For instance, Nokia is collaborating with Greece's national research network, GRNET, to build a nationwide quantum-safe network. In Belgium, Proximus has successfully tested QKD to encrypt data transmissions between its data centres. Similar initiatives are taking place in Portugal and Singapore, where efforts are focused on strengthening cybersecurity through quantum-safe technologies.
Preparing for the Quantum Future
Quantum computing is on the cusp of transforming industries by providing solutions to problems that have long been considered unsolvable. However, realising this potential requires continued innovation to overcome technical challenges and build the necessary security infrastructure. The future of quantum computing is not just about unlocking new possibilities but also about ensuring that this powerful technology is used responsibly and securely.
As we approach a quantum-secure economy, the importance of trust in our digital communications cannot be overstated. Now is the time to prepare for this future, as the impact of quantum computing on our lives is likely to be profound and far-reaching. By embracing the quantum revolution with anticipation and readiness, we can ensure that its benefits are both substantial and secure.
China's Expanding Digital Surveillance
China has enacted new regulations under its Counter-espionage Law, drawing international alarm and raising serious concerns about privacy and human rights. The rules, which took effect on July 1, 2024, grant state security officers broad powers to inspect and search electronic devices such as smartphones and computers, ostensibly in the name of national security.
The "Provisions on Administrative Law Enforcement Procedures of National Security Organs" mark a considerable increase in state monitoring capabilities. Under the new legislation, authorities can now collect "electronic data" from personal devices such as text messages, emails, instant messages, group chats, documents, photos, audio and video files, apps, and log records. This broad mandate effectively converts each citizen's smartphone into a potential source of information for state security authorities.
One of the most concerning aspects of the new rules is the ease with which state security agents can conduct searches. Under Article 40, officers can carry out on-the-spot inspections simply by presenting their police or reconnaissance identification, with the approval of the head of a municipal-level state security organ. In emergencies, these inspections can even be conducted without a warrant, weakening safeguards against arbitrary enforcement.
The regulations' ambiguous and sweeping language is particularly troubling. Article 20 lists "electronic data" and "audio-visual materials" as evidence that can be used in investigations, while Article 41 defines the "person being inspected" as not just the device's owner but also its holder, custodian, or affiliated unit. This broad definition could subject a wide range of individuals and organizations to inspection.
The regulations also empower authorities to order individuals and organizations to stop using specific electronic equipment, facilities, and related programs. Where people refuse to comply with "rectification requirements," state security agencies may seal or seize the devices in question. This provision opens the door to abuse, allowing the state to effectively silence dissenting voices or impede the work of organizations it deems harmful.
While the Ministry of State Security has sought to allay concerns, stating that the regulations target "individuals and organizations related to spy groups" and that "ordinary passengers would not have their smartphones inspected at airports," the provisions' broad language leaves ample room for interpretation and potential abuse.
The adoption of these rules coincides with the Chinese government's wider drive to encourage citizens to stay vigilant against perceived threats to national security, including watching for foreign spies in their daily lives. This culture of suspicion, combined with the expanded powers granted to state security institutions, is likely to chill free expression and international engagement in China.
China's new regulations, which give state security organizations broad powers to examine and confiscate electronic devices, constitute a major escalation of the state's surveillance capabilities and a serious threat to individual privacy and freedom of expression. As the digital dragnet tightens, the international community must remain watchful and push for the protection of fundamental human rights in the digital era. The long-term repercussions of these measures may reach beyond China's borders, setting a chilling precedent for authoritarian governance in the digital age.
Meta to Train Its AI on User Data
Meta will reportedly amend its privacy policy beginning June 26 to allow its AI to be trained on your data.
The news spread on social media after Meta sent emails and notifications to users in the United Kingdom and the European Union informing them of the change and offering them the option to opt out of the data collection.
One UK-based user, Phillip Bloom, publicly shared the notification, alerting everyone to the impending changes, which appear to also affect Instagram users.
The changes grant Meta permission to use your information and personal content from Meta-related services to train its AI. This means the social media giant will be able to use public Facebook posts, Instagram photographs and captions, and messages to Meta's AI chatbots to train its large language model and other AI capabilities.
Meta states that private messages will not be included in the training data, and the company emphasizes in its emails and notifications that each user (in a protected region) has the "right to object" to their data being used.
Once implemented, the new policy will allow Meta to automatically start drawing on the affected types of content. To keep Meta from using your content, you can opt out now through Facebook's help page.
Keep in mind that the page will only load if you are in the European Union, the United Kingdom, or another country whose data protection laws require Meta to provide an opt-out. If you live in one of these regions, go to the support page, fill out the form, and submit it.
You'll need to select your country and explain your reason for opting out in a text box, with the option to provide more detail below that. You should then receive a response indicating whether Meta will honor your request to opt out of having your data used.
Prepare to push back: some users report that their requests are being denied, even though in countries governed by legislation such as the European Union's GDPR, Meta should be required to honor them.
There are a few caveats to consider. The opt-out covers you, but it does not protect your posts if they are shared by friends or family members who have not opted out of data use for AI training.
If possible, make sure that any family members who use Facebook or other Meta services opt out as well. The move itself isn't surprising, given that Meta has been steadily expanding its AI offerings across its platforms.
The use of user data across Meta's services was therefore always to be expected; there is simply too much of it for the company to pass up as training material for its many AI programs.
Google's New Android Anti-Theft Protections
Google is set to introduce multiple anti-theft and data protection features later this year, targeting devices from Android 10 up to the upcoming Android 15. These new security measures aim to enhance user protection in cases of device theft or loss, combining AI and new authentication protocols to safeguard sensitive data.

One of the standout features is the AI-powered Theft Detection Lock. This innovation will lock your device's screen if it detects abrupt motions typically associated with theft attempts, such as a thief snatching the device out of your hand. Another feature, the Offline Device Lock, ensures that your device automatically locks if it is disconnected from the network or if there are too many failed authentication attempts, preventing unauthorized access.

Google also introduced the Remote Lock feature, allowing users to lock their stolen devices remotely via android.com/lock. This function requires only the phone number and a security challenge, giving users time to recover their account details and use additional options in Find My Device, such as initiating a full factory reset to wipe the device clean.

According to Google Vice President Suzanne Frey, these features aim to make it significantly harder for thieves to access stolen devices. All three features (Theft Detection Lock, Offline Device Lock, and Remote Lock) will be available through a Google Play services update for devices running Android 10 or later.

Additionally, the new Android 15 release will bring enhanced factory reset protection. This upgrade will require Google account credentials during the setup process if a stolen device undergoes a factory reset, rendering stolen devices unsellable and reducing the incentive for phone theft. Frey explained that without the device or Google account credentials, a thief won't be able to set up the device post-reset, essentially bricking it.

To further bolster security, Android 15 will mandate PIN, password, or biometric authentication when accessing or changing critical Google account and device settings from untrusted locations. This includes actions like changing your PIN, accessing Passkeys, or disabling theft protection. Similarly, disabling Find My Device or extending the screen timeout will also require authentication, adding another layer of security against criminals attempting to render a stolen device untrackable.

Android 15 will also introduce "private spaces," which can be locked using a user-chosen PIN. This feature is designed to protect sensitive data stored in apps, such as health or financial information, from being accessed by thieves.
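Google has not published the model behind Theft Detection Lock, but the underlying idea (flagging an acceleration spike far beyond normal handling) can be illustrated with a simple, hypothetical threshold heuristic; the threshold value and sensor readings below are invented for illustration only.

```python
import math

GRAVITY = 9.81           # m/s^2
SNATCH_THRESHOLD = 25.0  # hypothetical spike threshold, m/s^2

def is_snatch_candidate(samples):
    """Flag a window of accelerometer readings as a possible snatch.

    This is NOT Google's algorithm (which is an ML model); it only
    illustrates the general idea: a sudden acceleration spike well
    beyond normal handling triggers a screen lock.
    """
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax**2 + ay**2 + az**2)
        if magnitude - GRAVITY > SNATCH_THRESHOLD:
            return True
    return False

# Normal handling vs an abrupt jerk (synthetic readings, m/s^2).
calm = [(0.1, 0.2, 9.8), (0.0, 0.3, 9.7)]
snatch = [(0.1, 0.2, 9.8), (28.0, 14.0, 22.0)]
print(is_snatch_candidate(calm), is_snatch_candidate(snatch))  # False True
```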
These updates, including factory reset protection and private spaces, will be part of the Android 15 launch this fall. Enhanced authentication protections will roll out to select devices later this year.
Google also announced at Google I/O 2024 new features in Android 15 and Google Play Protect aimed at combating scams, fraud, spyware, and banking malware. These comprehensive updates underline Google's commitment to user security in an increasingly digital age.
Ransomware Attacks Increasingly Target Backups
Ransomware attacks are becoming increasingly costly for businesses, with a new study shedding light on just how damaging they can be. According to research from Sophos, a staggering 94% of organisations hit by ransomware in 2023 reported attempts by cybercriminals to compromise their backups. This alarming trend poses a significant threat to businesses, as compromised backups can lead to a doubling of ransom demands and payments compared to incidents where backups remain secure.
The impact is particularly severe for certain sectors, such as state and local government, the media, and the leisure and entertainment industry, where 99% of attacks attempted to compromise backups. Perhaps most concerning is the revelation that overall recovery costs can skyrocket when backups are compromised, with organisations facing recovery costs up to eight times higher than those whose backups remain unaffected.
To mitigate the risk of falling victim to ransomware attacks, businesses are urged to take proactive measures. First and foremost, it's essential to back up data frequently and store backups securely in a separate physical location, such as the cloud, to prevent them from being compromised alongside the main systems. Regularly testing the restoration process is also crucial to ensure backups are functional in the event of an attack.
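A restore test can be as simple as comparing checksums between live data and a trial restore. The sketch below illustrates that idea; the directory paths are placeholders for your own backup layout, not a recommendation of any particular tool.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large backups fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Compare every file in the source tree against its restored copy.

    Returns the relative paths that are missing or whose contents differ.
    """
    mismatches = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        restored = restored_dir / src.relative_to(source_dir)
        if not restored.is_file() or file_digest(src) != file_digest(restored):
            mismatches.append(str(src.relative_to(source_dir)))
    return mismatches

# Fail loudly if a test restore drifted from the live data.
bad = verify_restore(Path("/data/critical"), Path("/mnt/restore-test/critical"))
print("restore verified" if not bad else f"mismatched files: {bad}")
```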
Furthermore, securing backups with robust encryption and implementing layered defences to prevent unauthorised access is essential for ransomware defence. Vigilance against suspicious activity that could signal attackers attempting to access backups is also recommended.
While it's tempting to believe that your organisation won't be targeted by ransomware, the reality is that it's not a matter of if, but when. Therefore, taking proactive steps to secure backups and prepare for potential attacks is imperative for businesses of all sizes.
For businesses seeking additional guidance on ransomware remediation, a step-by-step guide can help navigate the recovery process. Solutions such as Ransomware Defender aim to minimise the impact of data breaches and ensure business continuity by storing backups in a highly secure environment isolated from the main infrastructure.
The threat of ransomware attacks targeting backups is real and growing, with significant implications for businesses' financial, operational, and reputational security. By implementing robust backup strategies and proactive defence measures, organisations can better protect themselves against the rising tide of ransomware attacks.
Enterprise AI Adoption Meets Rising Security Concerns
Enterprises are rapidly embracing Artificial Intelligence (AI) and Machine Learning (ML) tools, with transactions skyrocketing by almost 600% in less than a year, according to a recent report by Zscaler. The surge, from 521 million transactions in April 2023 to 3.1 billion monthly by January 2024, underscores a growing reliance on these technologies. However, heightened security concerns have led to a 577% increase in blocked AI/ML transactions, as organisations grapple with emerging cyber threats.
The report highlights the evolving tactics of cyber attackers, who now exploit AI tools such as large language models (LLMs) to infiltrate organisations covertly. Adversarial AI, designed to bypass traditional security measures, poses a particularly stealthy threat.
Concerns about data protection and privacy loom large as enterprises integrate AI/ML tools into their operations. Industries such as healthcare, finance, insurance, services, technology, and manufacturing are at risk, with manufacturing leading in AI traffic generation.
To mitigate risks, many Chief Information Security Officers (CISOs) opt to block a record number of AI/ML transactions, although this approach is seen as a short-term solution. The most commonly blocked AI tools include ChatGPT and OpenAI, while domains like Bing.com and Drift.com are among the most frequently blocked.
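Mechanically, such blocking usually happens at a secure web gateway or proxy that matches request hosts against a category list. The sketch below shows the matching logic only; the hard-coded domains echo the examples above, whereas real deployments consume vendor-maintained category feeds rather than static lists.

```python
from urllib.parse import urlparse

# Hypothetical policy list; real gateways pull categorised feeds
# rather than hard-coding domains like this.
BLOCKED_AI_DOMAINS = {"chat.openai.com", "drift.com"}

def is_blocked(url: str) -> bool:
    """Block a request if its host is, or is a subdomain of, a listed domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

print(is_blocked("https://chat.openai.com/c/123"))  # True
print(is_blocked("https://example.com/"))           # False
```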
However, blocking transactions alone may not suffice in the face of evolving cyber threats. Leading cybersecurity vendors are exploring novel approaches to threat detection, leveraging telemetry data and AI capabilities to identify and respond to potential risks more effectively.
CISOs and security teams face a daunting task in defending against AI-driven attacks, necessitating a comprehensive cybersecurity strategy. Balancing productivity and security is crucial, as evidenced by recent incidents like vishing and smishing attacks targeting high-profile executives.
Attackers increasingly leverage AI in ransomware attacks, automating various stages of the attack chain for faster and more targeted strikes. Generative AI, in particular, enables attackers to identify vulnerabilities and exploit them with greater efficiency, posing significant challenges to enterprise security.
In light of these developments, enterprises must prioritise risk management and strengthen their cybersecurity posture to counter the dynamic AI threat landscape. Educating board members and implementing robust security measures are essential to safeguarding against AI-driven cyberattacks.
As institutions deal with the complexities of AI adoption, ensuring data privacy, protecting intellectual property, and mitigating the risks associated with AI tools become paramount. By staying vigilant and adopting proactive security measures, enterprises can better defend against the growing threat posed by these cyberattacks.
The DWP's AI-Powered Fraud Detection Plans
The UK's Department for Work and Pensions (DWP) is gearing up to deploy an advanced AI system aimed at detecting fraud and overpayments in social security benefits. The system will scrutinise millions of bank accounts, including those receiving state pensions and Universal Credit. This move comes as part of a broader effort to crack down on individuals either mistakenly or intentionally receiving excessive benefits.
Despite the government's intentions to curb fraudulent activities, the proposed measures have sparked significant backlash. More than 40 organisations, including Age UK and Disability Rights UK, have voiced their concerns, labelling the initiative as "a step too far." These groups argue that the planned mass surveillance of bank accounts poses serious threats to privacy, data protection, and equality.
Under the proposed Data Protection and Digital Information Bill, banks would be mandated to monitor accounts and flag any suspicious activities indicative of fraud. However, critics contend that such measures could set a troubling precedent for intrusive financial surveillance, affecting around 40% of the population who rely on state benefits. Furthermore, these powers extend to scrutinising accounts linked to benefit claims, such as those of partners, parents, and landlords.
In response to the mounting criticism, the DWP emphasised that the new system does not grant it direct access to individuals' bank accounts or allow monitoring of spending habits. Nevertheless, concerns persist regarding the broad scope of the surveillance, which would entail algorithmic scanning of bank and third-party accounts without prior suspicion of fraudulent behaviour.
The joint letter from advocacy groups highlights the disproportionate nature of the proposed powers and their potential impact on privacy rights. They argue that the sweeping surveillance measures could infringe upon individual liberties and exacerbate existing inequalities within the welfare system.
As the debate rages on, stakeholders are calling for greater transparency and safeguards to prevent misuse of the AI-powered monitoring system. Advocates stress the need for a balanced approach that addresses fraud while upholding fundamental rights to privacy and data protection.
While the DWP asserts that the measures are necessary to combat fraud, critics argue that they represent a disproportionate intrusion into individuals' financial privacy. As the discourse takes shape, the debate underscores the importance of striking a balance between combating fraud and safeguarding civil liberties in the digital sphere.