
Orion Brings Fully Homomorphic Encryption to Deep Learning for AI Privacy

 

As data privacy becomes an increasing concern, a new artificial intelligence (AI) encryption breakthrough could transform how sensitive information is handled. Researchers Austin Ebel, Karthik Garimella, and Assistant Professor Brandon Reagen have developed Orion, a framework that integrates fully homomorphic encryption (FHE) into deep learning. 

This advancement allows AI systems to analyze encrypted data without decrypting it, ensuring privacy throughout the process. FHE has long been considered a major breakthrough in cryptography because it enables computations on encrypted information while keeping it secure. However, applying this method to deep learning has been challenging due to the heavy computational requirements and technical constraints. Orion addresses these challenges by automating the conversion of deep learning models into FHE-compatible formats. 
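
The fully homomorphic schemes Orion targets are far more sophisticated, but the core idea of computing on data that stays encrypted can be seen in a much simpler system. The sketch below uses a toy version of the Paillier cryptosystem, which is only additively homomorphic rather than fully homomorphic, purely to illustrate the property that FHE generalizes: the party doing the arithmetic never sees the plaintext.

```python
# Toy Paillier cryptosystem: additively homomorphic only, so NOT full FHE,
# but it demonstrates the key property Orion relies on at much larger scale:
# arithmetic is performed on ciphertexts without ever decrypting them.
# Tiny primes for readability; real deployments use 2048+ bit moduli.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=10007, q=10009):
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # modular inverse; works because g = n + 1
    return (n,), (lam, mu, n)     # public key, private key

def encrypt(pub, m, r):
    (n,) = pub
    n2 = n * n
    # c = (1 + n)^m * r^n mod n^2, with r coprime to n
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 42, r=17), encrypt(pub, 58, r=23)

# The "server" multiplies ciphertexts; this corresponds to adding plaintexts.
c_sum = (c1 * c2) % (pub[0] ** 2)
assert decrypt(priv, c_sum) == 100   # 42 + 58, recovered only by the key holder
print(decrypt(priv, c_sum))
```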

The researchers’ study, recently published on arXiv and set to be presented at the 2025 ACM International Conference on Architectural Support for Programming Languages and Operating Systems, highlights Orion’s ability to make privacy-focused AI more practical. One of the biggest concerns in AI today is that machine learning models require direct access to user data, raising serious privacy risks. Orion eliminates this issue by allowing AI to function without exposing sensitive information. The framework is built to work with PyTorch, a widely used machine learning library, making it easier for developers to integrate FHE into existing models. 
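
The article does not document Orion's programming interface, so the snippet below is only a hypothetical sketch of what the PyTorch integration implies: a developer keeps an ordinary PyTorch model, and a compiler step translates it into FHE-compatible operations. The model definition is real PyTorch; the final `orion.compile` call is an invented placeholder, not the framework's actual API.

```python
# Hypothetical workflow sketch: the model definition is plain, real PyTorch;
# the final "compile to FHE" step is a placeholder, since the article does not
# document Orion's actual API.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """An ordinary PyTorch model a developer might already have."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),                      # FHE frameworks typically replace or
            nn.AvgPool2d(2),                # approximate non-linearities like ReLU
        )
        self.classifier = nn.Linear(8 * 14 * 14, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = SmallCNN().eval()
example = torch.randn(1, 1, 28, 28)
print(model(example).shape)  # torch.Size([1, 10]) -- behaves as normal PyTorch

# Hypothetical step (NOT Orion's documented interface): a framework like Orion
# would take this trained model and emit an FHE-executable equivalent, so the
# server only ever sees encrypted inputs and produces encrypted outputs.
# fhe_model = orion.compile(model, example_input=example)
# encrypted_output = fhe_model(encrypted_input)
```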

Orion also introduces optimization techniques that reduce computational overhead, making privacy-preserving AI more efficient and scalable. The framework has demonstrated notable performance gains, running 2.38 times faster than previous FHE deep learning methods. The researchers successfully implemented high-resolution object detection using the YOLO-v1 model, which contains 139 million parameters, a scale previously considered impractical for FHE. This progress suggests Orion could enable encrypted AI applications in sectors like healthcare, finance, and cybersecurity, where protecting user data is essential. 

A key advantage of Orion is its accessibility. Traditional FHE implementations require specialized knowledge, making them difficult to adopt. Orion simplifies the process, allowing more developers to use the technology without extensive training. By open-sourcing the framework, the research team hopes to encourage further innovation and adoption. As AI continues to expand into everyday life, advancements like Orion could help ensure that technological progress does not come at the cost of privacy and security.

Hackers Can Now Intercept HDMI Signals Using Deep Learning

Secretly intercepting video signals is one of the oldest forms of electronic spying, but researchers have found a new technique that puts a frightening twist on it.

A team of researchers from Uruguay has shown that it is possible to capture the electromagnetic radiation leaking from HDMI cables and reconstruct the video it carries using AI.

Using deep learning to trace HDMI signals

Researchers at the University of the Republic in Montevideo posted their findings on Cornell's arXiv service. According to the paper, an AI model can be trained to interpret the minute fluctuations in electromagnetic radiation that leak from an HDMI cable. “In this work, we address the problem of eavesdropping on digital video displays by analyzing the electromagnetic waves that unintentionally emanate from the cables and connectors, particularly HDMI,” the researchers said. Although HDMI is a wired digital standard whose signals can be encrypted, the cables and connectors still radiate enough electromagnetic energy to be tracked without any direct access to the target machine.
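
The paper's pipeline is not reproduced in this article, so the following is only a rough sketch, under stated assumptions, of what such an attack involves: complex IQ samples recorded by a software-defined radio near the target are folded into an image-like tensor, and a small neural network is trained to map those captures back to screen frames. The shapes, the synthetic data, and the toy network are illustrative, not the researchers' actual code.

```python
# Illustrative sketch only: maps SDR captures of HDMI emissions to screen images.
# Shapes, synthetic data, and the tiny network are assumptions, not the paper's code.
import numpy as np
import torch
import torch.nn as nn

def capture_to_tensor(iq_samples: np.ndarray, height: int = 64, width: int = 64):
    """Fold complex IQ samples into a 2-channel (real, imag) image-like tensor."""
    iq = iq_samples[: height * width].reshape(height, width)
    x = np.stack([iq.real, iq.imag]).astype(np.float32)
    return torch.from_numpy(x)  # shape: (2, H, W)

class EmissionDecoder(nn.Module):
    """Toy CNN that predicts a grayscale screen frame from the EM capture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# Random data standing in for (SDR capture, ground-truth screen frame) pairs.
captures = torch.stack([
    capture_to_tensor(np.random.randn(4096) + 1j * np.random.randn(4096))
    for _ in range(8)
])
targets = torch.rand(8, 1, 64, 64)

model = EmissionDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(5):  # a few steps just to show the training-loop shape
    loss = nn.functional.mse_loss(model(captures), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.4f}")
```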

Detecting a signal and decoding it are different problems, but the researchers also found that by pairing the AI model with text recognition software, the wirelessly recorded EM radiation can be "read" with a surprising 70% accuracy.
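
The article does not name the text-recognition component. Assuming a standard open-source OCR engine such as Tesseract stands in for it (via the `pytesseract` wrapper), the final "read the reconstructed frame" step might look something like this:

```python
# Sketch of the OCR stage, assuming Tesseract via pytesseract; the paper's
# actual recognizer may differ. The input is a frame reconstructed by the
# deep-learning model from the captured HDMI emissions.
from PIL import Image
import pytesseract

reconstructed = Image.open("reconstructed_frame.png").convert("L")  # grayscale
text = pytesseract.image_to_string(reconstructed)
print(text)  # e.g. fragments of on-screen passwords or documents
```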

The result is still far from a clean recording, but it represents a roughly 60 percent improvement over earlier methods, enough to steal passwords and other sensitive information. The attack can also be carried out wirelessly, without physical access to the target computer, and in realistic conditions even from outside the building.

A new method for surveillance

Skimming wireless electromagnetic emissions for spying purposes isn't new. The underlying vulnerability is known as TEMPEST (Transient ElectroMagnetic Pulse Emanation Standard, a rather awkward backronym), and its links to espionage date back to World War II. 

However, because HDMI connections are digital transmissions, often encrypted with the HDCP standard, they were not thought to be particularly vulnerable to this kind of attack. The researchers' AI-assisted technique, dubbed "Deep-TEMPEST", raises some troubling possibilities.

State-sponsored attacks

According to the researchers, the system and related alternatives are already in use by state-sponsored hackers and industrial-espionage threat actors. The sophistication of the methods and the need to be physically near the target systems suggest that regular users won't be affected. Large businesses and government agencies, however, should be on the lookout and consider EM-shielding measures to protect their sensitive data, especially for employees and stakeholders working from home. 

“The proposed system is based on widely available Software Defined Radio and is fully open-source, seamlessly integrated into the popular GNU Radio framework. We also share the dataset we generated for training, which comprises both simulated and over 1000 real captures. Finally, we discuss some countermeasures to minimize the potential risk of being eavesdropped by systems designed based on similar principles,” the researchers concluded in the report.