Scientists at Sandia National Laboratories have achieved a significant milestone by developing ultra-compact optical chips that power quantum navigation sensors. These sensors utilize atom interferometers, a sophisticated technology that measures the interference patterns of atoms to track position and motion with unparalleled accuracy. Unlike traditional GPS, which relies on satellite signals, quantum navigation sensors operate independently, immune to external disruptions.
At the heart of this innovation lies quantum mechanics. Atom interferometers work by cooling atoms to near absolute zero, a regime in which they exhibit both particle-like and wave-like properties. When these atoms are subjected to laser pulses, they form interference patterns that can be measured with great precision. By analyzing these patterns, the sensors determine changes in position and velocity.
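As a rough illustration of the measurement principle (not Sandia's actual signal chain – the constants below are textbook assumptions for a rubidium light-pulse interferometer), the standard relation Δφ = k_eff · a · T² lets acceleration be recovered from a measured interference phase shift:

```python
import math

# Illustrative constants (assumptions for this sketch, not Sandia's specs).
WAVELENGTH = 780e-9                      # Rb D2 line, metres
K_EFF = 2 * (2 * math.pi / WAVELENGTH)   # effective two-photon wavevector, rad/m
T = 0.01                                 # time between laser pulses, seconds

def acceleration_from_phase(delta_phi: float) -> float:
    """Recover acceleration from the interferometer phase shift,
    using delta_phi = k_eff * a * T**2."""
    return delta_phi / (K_EFF * T**2)

# Even a 1-milliradian phase shift corresponds to a sub-microgravity
# acceleration, which is why these sensors are so sensitive.
print(acceleration_from_phase(1e-3))
```

The quadratic dependence on the pulse-separation time T is why longer interrogation times (and hence better isolation of the atomic cloud) buy sensitivity so quickly.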
The optical chips developed by Sandia National Laboratories are designed to be ultra-compact, making them suitable for integration into various devices and systems. These chips are capable of maintaining the delicate quantum states of atoms, ensuring accurate measurements even in challenging environments.
The potential applications of quantum navigation are vast and transformative. One of the most significant advantages is its ability to function in GPS-denied areas. This is particularly crucial for military operations, where GPS signals can be jammed or spoofed by adversaries. Quantum navigation ensures that military personnel and autonomous vehicles can navigate accurately without relying on external signals.
In addition to military applications, quantum navigation holds promise for the commercial sector. Autonomous vehicles, such as drones and self-driving cars, can benefit from this technology by achieving precise navigation in urban environments where GPS signals are often weak or obstructed. Furthermore, quantum navigation can enhance the accuracy of scientific research, particularly in fields like geology and archaeology, where precise location data is essential.
While the potential of quantum navigation is immense, there are challenges to overcome before it becomes mainstream. One of the primary challenges is the complexity of maintaining quantum states in real-world conditions. The ultra-cold temperatures required for atom interferometers are difficult to achieve and maintain outside of laboratory settings. However, the development of ultra-compact optical chips is a significant step towards addressing this challenge.
Another challenge is the integration of quantum navigation sensors into existing systems. This requires advancements in both hardware and software to ensure seamless compatibility. Researchers are actively developing robust algorithms and interfaces to facilitate the integration process.
In a survey of 500 IT security experts, Exabeam researchers discovered that nearly two-thirds of their respondents (65%) prioritize prevention over detection as their number one endpoint security objective. For the remaining third (33%), detection remained their utmost priority.
Worse, businesses act on this belief: the majority (59%) allocate the same amount to detection, investigation, and response, while nearly three-quarters (71%) spend between 21% and 50% of their IT security resources on prevention.
According to Steve Moore, chief security strategist at Exabeam, the problem with this strategy is that businesses concentrate on prevention while threat actors are often already inside the network, rendering those efforts useless.
“As is well known, the real question is not whether attackers are on the network, but how many there are, how long they have had access and how far they have gone […] Teams need to raise awareness of this question and treat it as an unwritten expectation to realign their investments and where they need to perform, paying due attention to adversary alignment and response to incidents. Prevention has failed,” says Moore.
Most respondents said yes when asked whether they were confident they could prevent attacks: 97% indicated they felt confident in the ability of their tools and processes to detect and stop attacks and data breaches.
Yet only 62% of respondents agreed when asked whether they could easily tell their boss that their networks were not compromised at that moment, implying that over a third were unsure.
Exabeam argues that security teams are overconfident, and says the data supports it: citing industry reports, the company claims that 83% of organizations experienced more than one data breach last year.
Among the many approaches to security, most organizations lean towards a prevention-based strategy because it strives to make systems more resistant to attack in the first place and, unlike detection-based security, applies across a wide variety of situations.
A preventive approach can significantly reduce a company's risk of falling prey to a cyberattack, provided the company deploys appropriate security solutions such as firewalls and antivirus software and patches known vulnerabilities.
The study demonstrates that image-generating AI systems can be made to reproduce copyrighted artwork and medical images almost exactly, a result that might help artists who are suing AI companies for copyright violations.
Researchers from Google, DeepMind, UC Berkeley, ETH Zürich, and Princeton obtained their findings by repeatedly prompting Google’s Imagen with image captions, such as a person’s name, and then checking whether any of the generated images matched originals in the model's training set. The team extracted more than 100 near-copies of photos from the AI's training data.
These image-generating AI models are trained on vast data sets of captioned images scraped from the internet. Diffusion models, the current state of the art, work by repeatedly adding noise to a training image until the original is nothing more than a jumble of random pixels. The AI model then learns to reverse the procedure, creating new images from pure noise.
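The forward "noising" half of that procedure can be sketched in a few lines of numpy. This is a toy illustration of the idea only, with made-up step counts and noise schedule; real diffusion models train a neural network to run the process in reverse:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffusion(image: np.ndarray, steps: int = 1000, beta: float = 0.02):
    """Repeatedly blend an image with Gaussian noise until the original
    signal is essentially destroyed (the 'jumble of random pixels')."""
    x = image.astype(float)
    for _ in range(steps):
        noise = rng.standard_normal(x.shape)
        x = np.sqrt(1 - beta) * x + np.sqrt(beta) * noise
    return x

image = rng.random((8, 8))   # stand-in for a training image
noised = forward_diffusion(image)

# After many steps the result is close to standard normal noise,
# essentially uncorrelated with the original image.
print(round(float(np.corrcoef(image.ravel(), noised.ravel())[0, 1]), 3))
```

The memorization finding is, in effect, evidence that the learned reverse process can sometimes land back on a specific training image rather than a genuinely new one.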
According to Ryan Webster, a Ph.D. student at the University of Caen Normandy who has studied privacy in other image-generation models but was not involved in the research, the study is the first to demonstrate that these AI models memorize photos from their training sets. The finding also carries a warning for startups wanting to use AI models in health care, since it indicates that these systems risk leaking users’ private and sensitive data.
Eric Wallace, a Ph.D. student who co-authored the study, says the team hopes to raise the alarm over the potential privacy concerns with these AI models before they are widely deployed in sensitive industries like medicine.
“A lot of people are tempted to try to apply these types of generative approaches to sensitive data, and our work is definitely a cautionary tale that that’s probably a bad idea unless there’s some kind of extreme safeguards taken to prevent [privacy infringements],” Wallace says.
The extent to which these AI models memorize and regurgitate images from their training data is also at the heart of a major conflict between AI businesses and artists. Getty Images and a group of artists have filed two lawsuits against Stability AI, claiming the company illicitly scraped and processed their copyrighted content.
The researchers' findings could ultimately bolster artists' claims that AI companies have violated their copyright. If artists can demonstrate that Stable Diffusion reproduced their work without consent, the companies behind it may have to compensate them.
According to Sameer Singh, an associate professor of computer science at the University of California, Irvine, these findings hold paramount importance. “It is important for general public awareness and to initiate discussions around the security and privacy of these large models,” he adds.
One-time programs (OTPs), originally presented at the CRYPTO 2008 conference, were described as a type of cryptographically obfuscated computer program that can only be run once. This property makes them useful for numerous applications.
The basic concept is that "Alice" could send "Bob" a computer program that was encrypted in a way that:
1. Bob can run the program on any computer with any valid inputs and obtain a correct result. Bob cannot rerun the program with different inputs.
2. Bob can learn nothing about the secret program by running it.
The run-only-once requirement is hard to enforce because it would be easy to install a copy of the program on multiple virtual machines and try different inputs on each one, violating the entire premise of the technology.
The original idea for thwarting this (fairly obvious) hack was to only allow the secret program to run if accompanied by a physical token that somehow enforced the one-time rule for running the copy of the secret program that Alice had sent to Bob. No such tokens were ever made, so the whole idea has lain dormant for more than a decade.
OTP revived:
Recently, a team of computer scientists from Johns Hopkins University and NTT Research has shown how one-time programs might be created by combining functionality found in the security chips of mobile phones with cloud-based services.
They repurposed ‘counter lockbox’ technology for an unintended use. Counter lockboxes secure an encryption key under a user-specified password, permitting a limited number of incorrect password guesses (typically 10) before erasing the protected key.
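A minimal sketch of the counter lockbox behaviour described above. Names and the guess limit are illustrative; a real lockbox lives inside a hardware security module, not in Python:

```python
class CounterLockbox:
    """Holds a secret key behind a password; erases the key after
    too many wrong guesses (typically 10)."""

    def __init__(self, password: str, key: bytes, max_guesses: int = 10):
        self._password = password
        self._key = key
        self._guesses_left = max_guesses

    def open(self, guess: str):
        if self._key is None:
            raise RuntimeError("key erased: guess limit exceeded")
        if guess == self._password:
            return self._key
        self._guesses_left -= 1
        if self._guesses_left == 0:
            self._key = None   # irreversibly erase the protected key
        return None

box = CounterLockbox("hunter2", b"\x01" * 16)
print(box.open("wrong"))     # None -- one guess consumed
print(box.open("hunter2"))   # the protected key
```

The crucial property is the erasure step: once the guess budget is exhausted, no amount of further effort recovers the key.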
The hardware security module in iPhone or Android smartphones provides the needed base functionality, but it must be wrapped in technology that prevents Bob from deceiving the system – the focus of the research.
Garbled circuits:
The research shows how multiple counter lockboxes can be linked together to form ‘garbled circuits’ – a construction that can be used to build OTPs.
A paper describing this research, entitled ‘One-Time Programs from Commodity Hardware’, is due to be presented at the upcoming Theory of Cryptography Conference (TCC 2022).
Hardware-route discounted:
One alternative means of constructing one-time programs, considered in the research, is tamper-proof hardware, although this would require a “token with a very powerful and expensive (not to mention complex) general-purpose CPU”, as explained in a blog post by cryptographer Matthew Green, a professor at Johns Hopkins University and one of the paper's co-authors.
“This would be costly and worse, would embed a large software and hardware attack surface – something we have learned a lot about recently thanks to Intel’s SGX, which keeps getting broken by researchers,” explains Green.
Rather than relying on custom hardware, or on blockchain-based cryptographic tooling, the Johns Hopkins researchers built a form of memory device or token that spits out and erases secret keys when asked. The construction takes hundreds of lockboxes – at least 256 for a 128-bit secret – a major drawback the researchers have yet to overcome.
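A toy illustration of why the construction needs two lockbox-backed values per secret bit (hence 256 for a 128-bit input): each input wire of a garbled circuit has two possible labels, and the hardware must let Bob retrieve exactly one of them, exactly once. All names here are hypothetical and the real scheme involves considerably more cryptography:

```python
import secrets

class OneTimeWire:
    """Releases exactly one of two wire labels, then locks forever.
    Stands in for a pair of lockbox-protected values."""

    def __init__(self, label0: bytes, label1: bytes):
        self._labels = [label0, label1]
        self._used = False

    def select(self, bit: int) -> bytes:
        if self._used:
            raise RuntimeError("wire already consumed")
        self._used = True
        return self._labels[bit]

SECRET_BITS = 128
# Two labels per input bit -> 2 * 128 = 256 protected values in total.
wires = [OneTimeWire(secrets.token_bytes(16), secrets.token_bytes(16))
         for _ in range(SECRET_BITS)]

# Bob commits to one input and obtains one label per wire; running the
# program on a different input is impossible, as every wire is now spent.
chosen_input = [0] * SECRET_BITS
labels = [w.select(b) for w, b in zip(wires, chosen_input)]
print(len(labels))   # 128 labels, one per input bit
```

In the actual scheme the one-shot behaviour of each "wire" is enforced by the counter lockboxes rather than a Python flag, which is where the 256-lockbox cost comes from.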
A bastion against brute-force attacks:
Harry Eldridge of Johns Hopkins University, lead author of the paper, told The Daily Swig that one-time programs could have multiple uses.
“The clearest application of a one-time program (OTP) is preventing brute-force attacks against passwords […] For example, rather than send someone an encrypted file, you could send them an OTP that outputs the file if given the correct password. Then, the person on the other end can input their password to the OTP and retrieve the file.” Eldridge explained. “However, because of the one-time property of the OTP, a malicious actor only gets one chance to guess the password before being locked out forever, meaning that much weaker passwords [such as a four-digit PIN] can actually be pretty secure.”
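The password-gated file Eldridge describes can be sketched as follows. This is a plain-Python mock of the behaviour only (the class and PIN are invented for illustration); an actual OTP would enforce the one-shot property cryptographically rather than with an instance flag:

```python
class OneTimeFileGate:
    """Hypothetical OTP wrapper: outputs the file on the correct
    password, but permits exactly one attempt ever."""

    def __init__(self, pin: str, payload: bytes):
        self._pin = pin
        self._payload = payload
        self._spent = False

    def run(self, guess: str):
        if self._spent:
            raise RuntimeError("program already run once")
        self._spent = True   # consumed whether or not the guess is right
        return self._payload if guess == self._pin else None

gate = OneTimeFileGate("4921", b"secret document")
print(gate.run("0000"))   # None -- and the single chance is now gone
```

Because the attacker gets one guess rather than billions, even a four-digit PIN leaves only a 1-in-10,000 chance of success.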
The same idea could be applied to other forms of authentication – for instance, protecting a file with a biometric match such as a fingerprint or face scan.
‘Autonomous’ ransomware risk:
One drawback of the approach is that threat actors might use the technique to develop ‘autonomous’ ransomware.
“Typically, ransomware needs to ‘phone home’ somehow in order to fetch the decryption keys after the bounty has been paid, which adds an element of danger to the group perpetrating the attack,” according to Eldridge. “If they were able to use one-time programs, however, they could include with the ransomware an OTP that outputs the decryption keys when given proof that an amount of bitcoin has been paid to a certain address, completely removing the need to phone home at all.”
Still, the feedback on the work so far has been “generally positive”, according to Eldridge. “[Most agree] with the motivation that OTPs are an interesting but mostly unrealized cryptographic idea, with the most common criticism being that the number of lockboxes required by our construction is still rather high. There is possibly a way to more cleverly use lockboxes that would allow for fewer of them to be used.”