Proton has announced an end-to-end encrypted document editor intended as a privacy-focused alternative to Microsoft Word and Google Docs. The application, released on Wednesday by the Swiss software vendor best known for its encrypted email service, covers most of the document-creation features office workers rely on in their daily work.
Having built out its email, VPN, cloud storage, and password manager offerings, the privacy-conscious Swiss company is now turning to cloud-based document editing. Proton Docs, the newly launched service, pairs a familiar feature set with Proton's privacy protections.
In its user interface and user experience, Proton Docs draws inspiration from Google Docs while adding a few distinctive twists of its own. The design is clean and minimalist, keeping the document front and center, and the icons at the top cover the common formatting options users will recognize (bold, italics, headings, lists, and so on).
There is no dedicated menu bar at the top of the screen; every option lives in the default toolbar. Because the layout so closely mirrors Google Docs, anyone transitioning from Google should have no trouble getting started on their drafts right away. It is polished work on Proton's part.
Many of Proton Docs' basics resemble Google Docs, and the resemblance is the first thing users will notice: white pages with a formatting toolbar up top, and indicators showing who else is in the document along with where their cursors are. This isn't particularly surprising, for a couple of reasons.
First, Google Docs is extremely popular, and there are only so many ways to style a document editor. More to the point, Proton Docs was created in large part to offer the benefits of Google Docs, just without Google. Docs launches inside Proton Drive today as the latest addition to Proton's privacy-focused suite of work tools.
Proton, which began as an email provider, has since expanded to include a calendar, file storage, a password manager, and more. Adding Docs to the ecosystem seems like a wise move for a company aiming to compete with Microsoft Office and Google Workspace, and it comes shortly after Proton acquired Standard Notes in April.
According to Proton PR manager Will Moore, Standard Notes will not disappear; instead, Docs is borrowing some of its features. Proton Docs is a full-featured, end-to-end encrypted word processor: files, and even keystrokes and cursor movements, are encrypted end to end, so no one but the user, not even Proton staff, can access a document's contents. That makes the files far more resistant to hackers and data breaches.
Still, even though Docs joins the company's growing portfolio, it does not yet integrate fully with the rest of the platform. There is no sidebar access to calendars and contacts as in Google Docs, and there is no easy way to import existing documents, files, or media from a Proton Drive account directly into the editor.
Google Docs, by contrast, lets users type an "@" followed by the name of a file in their Google Drive account and insert a link to that document in place. A feature like this is particularly useful when a document needs to reference several other files. One advantage Proton Docs does offer is its use of Swiss cloud servers, meaning users' data is stored in Switzerland.
Strict Swiss privacy laws mean the information stored on these servers cannot be accessed by regulatory authorities in regions such as the European Union and the United States. Proton Docs begins rolling out to Proton Drive customers today and, according to Proton, should be available to everyone within the next few days.
Proton Drive operates on a freemium model, with individual subscriptions costing as little as €10 per month (approximately $10.80) when billed annually. Proton for Business costs €7 per user per month and can be purchased for any number of users.
Artificial intelligence (AI) is rapidly transforming the world, and by 2025, its growth is set to reach new heights. While the advancements in AI promise to reshape industries and improve daily lives, they also bring a series of challenges that need careful navigation. From enhancing workplace productivity to revolutionizing robotics, AI's journey forward is as complex as it is exciting.
In recent years, AI has evolved from basic applications like chatbots to sophisticated systems capable of assisting with diverse tasks such as drafting emails or powering robots for household chores. Companies like OpenAI and Google’s DeepMind are at the forefront of creating AI systems with the potential to match human intelligence. Despite these achievements, the path forward isn’t without obstacles.
One major challenge in AI development lies in the diminishing returns from scaling up AI models. Previously, increasing the size of AI models drove progress, but developers are now focusing on maximizing computing power to tackle complex problems. While this approach enhances AI's capabilities, it also raises costs, limiting accessibility for many users. Additionally, training data has become a bottleneck. Many of the most valuable datasets have already been utilized, leading companies to rely on AI-generated data. This practice risks introducing biases into systems, potentially resulting in inaccurate or unfair outcomes. Addressing these issues is critical to ensuring that AI remains effective and equitable.
The integration of AI into robotics is another area of rapid advancement. Robots like Tesla’s Optimus, which can perform household chores, and Amazon’s warehouse automation systems showcase the potential of AI-powered robotics. However, making such technologies affordable and adaptable remains a significant hurdle. AI is also transforming workplaces by automating repetitive tasks like email management and scheduling. While these tools promise increased efficiency, businesses must invest in training employees to use them effectively.
Regulation plays a crucial role in guiding AI's development. Jurisdictions such as the European Union and Australia are already implementing laws to ensure the safe and ethical use of AI, particularly to mitigate its risks. Establishing global standards for AI regulation is essential to prevent misuse and steer its growth responsibly.
Looking ahead, AI is poised to continue its evolution, offering immense potential to enhance productivity, drive innovation, and create opportunities across industries. While challenges such as rising costs, data limitations, and the need for ethical oversight persist, addressing these issues thoughtfully will pave the way for AI to benefit society responsibly and sustainably.
San Francisco-based data analytics leader Databricks has achieved a record-breaking milestone, raising $10 billion in its latest funding round. This has elevated the company's valuation to an impressive $62 billion, paving the way for a potential initial public offering (IPO).
Databricks has long been recognized for providing enterprises with a secure platform for hosting and analyzing their data. In 2023, the company further bolstered its offerings by acquiring MosaicML, a generative AI startup. This acquisition allows Databricks to enable its clients to build tailored AI models within a secure cloud environment.
In March, Databricks unveiled DBRX, an advanced large language model (LLM) developed through the MosaicML acquisition. DBRX offers its 12,000 clients a secure AI solution, minimizing risks associated with exposing proprietary data to external AI models.
Unlike massive models such as Google’s Gemini or OpenAI’s GPT-4, DBRX prioritizes efficiency and practicality, targeting specific enterprise needs such as keeping proprietary data in-house and controlling the cost of deployment.
DBRX employs a unique “mixture-of-experts” design, dividing its functionality into 16 specialized areas. A built-in "router" directs tasks to the appropriate expert, reducing computational demands. Although the full model has 132 billion parameters, only 36 billion are used at any given time, making it energy-efficient and cost-effective.
This efficiency lowers barriers for businesses aiming to integrate AI into daily operations, improving the economics of AI deployment.
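The routing scheme described above can be sketched in a few lines. This is a toy illustration of a mixture-of-experts layer, not DBRX's actual implementation; the hidden size and top-k value here are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 16   # DBRX-style expert count
TOP_K = 4        # experts activated per token (assumed for illustration)
D_MODEL = 64     # toy hidden size

# Each "expert" is a feed-forward sub-network; here, just a weight matrix.
experts = [rng.normal(size=(D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
           for _ in range(N_EXPERTS)]
router_w = rng.normal(size=(D_MODEL, N_EXPERTS)) / np.sqrt(D_MODEL)

def moe_layer(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router_w                  # router score for each expert
    top = np.argsort(logits)[-TOP_K:]      # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts only
    # Only TOP_K of N_EXPERTS experts run, so most parameters stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=D_MODEL)
out = moe_layer(token)
print(out.shape)  # (64,)
```

Because only a fraction of the experts run per token, compute cost scales with the active parameters rather than the full parameter count, which is the economics point made above.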
Databricks CEO Ali Ghodsi highlighted the company's vision during a press event in March: “These are still the early days of AI. We are positioning the Databricks Data Intelligence Platform to deliver long-term value . . . and our team is committed to helping companies across every industry build data intelligence.”
With this landmark funding round, Databricks continues to solidify its role as a trailblazer in data analytics and enterprise AI. By focusing on secure, efficient, and accessible AI solutions, the company is poised to shape the future of technology across industries.
The cryptocurrency market reached a historic milestone this week as Bitcoin closed above $100,000 for the first time in history. This marks a defining moment, reflecting both market optimism and growing investor confidence. Despite reaching a peak of $104,000, Bitcoin experienced significant price volatility, dropping as low as $92,000 before stabilizing at $101,200 by the end of the week. These sharp fluctuations resulted in a massive liquidation of $1.8 billion, primarily from traders holding long positions.
In a major development, BlackRock's IBIT ETF purchased $398.6 million worth of Bitcoin on December 9. This acquisition propelled the fund's total assets under management to over $50 billion, setting a record as the fastest-growing ETF to reach this milestone in just 230 days. BlackRock's aggressive investment underscores the increasing institutional adoption of Bitcoin, solidifying its position as a mainstream financial asset.
Ripple made headlines this week with the approval of its RLUSD stablecoin by the New York Department of Financial Services. Designed for institutional use, the stablecoin will initially be launched on both Ripple's XRPL network and Ethereum. Analysts suggest this development could bolster Ripple's market standing, especially as rumors circulate about potential future partnerships, including discussions with Cardano's founder.
El Salvador created a buzz after announcing the discovery of $3 trillion worth of unmined gold. This announcement comes as the country negotiates with the International Monetary Fund (IMF) regarding its Bitcoin law. Reports indicate that El Salvador may make Bitcoin usage optional for merchants as part of an agreement to secure financial aid. This discovery adds an intriguing dimension to the nation’s economic strategy as it continues to embrace cryptocurrency alongside traditional resources.
Google showcased advancements in its quantum computing technology with its Willow chip, a quantum processor capable of solving certain problems exponentially faster than traditional supercomputers. While concerns have been raised about the potential impact on Bitcoin's security, experts confirm there is no immediate threat. Bitcoin's cryptography, based on ECDSA and SHA-256, remains robust. With Willow currently at 105 qubits, quantum technology would need to reach millions of qubits before it could effectively break Bitcoin's encryption methods.
Bitcoin's surge past $100,000 is undoubtedly a significant achievement, but analysts predict a short-term consolidation phase. Experts anticipate sideways price action as traders and investors take profits before year-end. Meanwhile, Ethereum experienced a 10% decline this week, reflecting broader market adjustments amid declining trading volumes.
The crypto space continues to evolve rapidly, with milestones and challenges shaping the future of digital assets. While optimism surrounds Bitcoin’s rise, vigilance remains essential as market dynamics unfold.
Google has made a significant stride in quantum computing with the announcement of its latest chip, named "Willow." According to Google, this advanced chip can solve problems in just five minutes that would take the most powerful supercomputers on Earth an astonishing 10 septillion years to complete. This breakthrough underscores the immense potential of quantum computing, a field that seeks to harness the mysterious and powerful principles of quantum mechanics.
Quantum computing represents a revolutionary leap in technology, distinct from traditional computing. While classical computers use "bits" to represent either 0 or 1, quantum computers use "qubits," which can represent multiple states simultaneously. This phenomenon, known as superposition, arises from quantum mechanics—a branch of physics studying the behavior of particles at extremely small scales. These principles allow quantum computers to process massive amounts of information simultaneously, solving problems that are far beyond the reach of even the most advanced classical computers.
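The bit-versus-qubit distinction can be made concrete with a few lines of state-vector arithmetic. This is a toy single-qubit simulation on a classical machine, nothing like Google's hardware:

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit is a unit vector of two
# complex amplitudes; measurement probabilities are |amplitude|^2.
zero = np.array([1, 0], dtype=complex)   # the |0> state

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ zero
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- equal chance of measuring 0 or 1
```

Note that simulating n qubits classically needs a vector of 2^n amplitudes, which is exactly why quantum hardware pulls ahead as problems grow.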
Google's Willow chip has tackled one of the most significant challenges in quantum computing: error rates. Typically, increasing the number of qubits in a quantum system leads to higher chances of errors, making it difficult to scale up quantum computers. However, Willow has achieved a reduction in error rates across the entire system, even as the number of qubits increases. This makes it a more efficient and reliable product than earlier models.
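A classical analogy gives the flavor of why this matters. In a simple repetition code, if each physical bit flips with probability below one half, majority-voting over more copies drives the logical error rate down rather than up. This is a toy model only; Willow uses quantum error-correcting codes, and the physical error rate here is assumed:

```python
import random

random.seed(0)

def logical_error_rate(p, n_copies, trials=100_000):
    """Encode one logical bit as n_copies physical bits; decode by majority vote."""
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(n_copies))
        if flips > n_copies // 2:   # majority of copies corrupted -> logical error
            errors += 1
    return errors / trials

p = 0.1  # assumed physical error rate, well below the 1/2 threshold
rates = {n: logical_error_rate(p, n) for n in (1, 3, 5, 7)}
for n, r in rates.items():
    print(f"{n} copies -> logical error rate {r:.4f}")
```

The logical error rate shrinks as copies are added; Willow's achievement is demonstrating the analogous below-threshold behavior on quantum hardware, where adding qubits historically made errors worse.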
That said, Google acknowledges that Willow remains an experimental device. Scalable quantum computers capable of solving problems far beyond the reach of current supercomputers are likely years away, requiring many additional advancements.
Quantum computers hold the promise of solving classes of problems that are effectively impossible for classical computers.
However, this power also comes with risks. For example, quantum computers could potentially "break" existing encryption methods, jeopardizing sensitive information. In response, companies like Apple are already developing "quantum-proof" encryption to counter future threats.
Google's Willow chip was developed in a cutting-edge facility in California, but the race for quantum supremacy is global, with governments and companies worldwide investing heavily in the technology.
These international efforts reflect intense competition to lead this transformative technology.
Experts describe Willow as an important milestone rather than a definitive breakthrough. While it is a game-changing chip, challenges such as further reductions in error rates remain before quantum computers see widespread practical use. Nevertheless, Google’s advancements have brought the world closer to a future where quantum computing can revolutionize industries and solve some of humanity’s most complex challenges.
This remarkable progress highlights the vast potential of quantum computing while reminding us of the responsibility to use its power wisely.
Artificial Intelligence (AI) has emerged as a transformative force, reshaping industries and delivering unprecedented value to businesses worldwide. From automating mundane tasks to offering predictive insights, AI has catalyzed innovation on a massive scale. However, its rapid adoption raises significant concerns about privacy, data ethics, and transparency, prompting urgent discussions on regulation. The need for robust frameworks has grown even more critical as AI technologies become deeply entrenched in everyday operations.
During the early development stages of AI, major tech players such as Meta and OpenAI often used public and private datasets without clear guidelines in place. This unregulated experimentation highlighted glaring gaps in data ethics, leading to calls for significant regulatory oversight. The absence of structured frameworks not only undermined public trust but also raised legal and ethical questions about the use of sensitive information.
Today, the regulatory landscape is evolving to address these issues. Europe has taken a pioneering role with the EU AI Act, which came into effect on August 1, 2024. This legislation classifies AI applications based on their level of risk and enforces stricter controls on higher-risk systems to ensure public safety and confidence. By categorizing AI into levels such as minimal, limited, and high risk, the Act provides a comprehensive framework for accountability. On the other hand, the United States is still in the early stages of federal discussions, though states like California and Colorado have enacted targeted laws emphasizing transparency and user privacy in AI applications.
AI’s impact on marketing is undeniable, with tools revolutionizing how teams create content, interact with customers, and analyze data. According to a survey, 93% of marketers using AI rely on it to accelerate content creation, optimize campaigns, and deliver personalized experiences. However, this reliance comes with challenges such as intellectual property infringement, algorithmic biases, and ethical dilemmas surrounding AI-generated material.
As regulatory frameworks mature, marketing professionals must align their practices with emerging compliance standards. Proactively adopting ethical AI usage not only mitigates risks but also prepares businesses for stricter regulations. Ethical practices can safeguard brand reputation, ensuring that marketing teams remain compliant and trusted by their audiences.
AI regulation is not just a passing concern but a critical element in shaping its responsible use. By embracing transparency, accountability, and secure data practices, businesses can stay ahead of legal changes while fostering trust with customers and stakeholders. Adopting ethical AI practices ensures that organizations are future-proof, resilient, and prepared to navigate the complexities of the evolving regulatory landscape.
As AI continues to advance, the onus is on businesses to balance innovation with responsibility. Marketing teams, in particular, have an opportunity to demonstrate leadership by integrating AI in ways that enhance customer relationships while upholding ethical and legal standards. By doing so, organizations can not only thrive in an AI-driven world but also set an example for others to follow.
According to the FBI, criminals are increasingly using generative artificial intelligence (AI) to make their fraudulent schemes more convincing. This technology enables fraudsters to produce large amounts of realistic content with minimal time and effort, increasing the scale and sophistication of their operations.
Generative AI systems work by synthesizing new content based on patterns learned from existing data. While creating or distributing synthetic content is not inherently illegal, such tools can be misused for activities like fraud, extortion, and misinformation. The accessibility of generative AI raises concerns about its potential for exploitation.
AI offers significant benefits across industries, including enhanced operational efficiency, regulatory compliance, and advanced analytics. In the financial sector, it has been instrumental in improving product customization and streamlining processes. However, alongside these benefits, vulnerabilities have emerged, including third-party dependencies, market correlations, cyber risks, and concerns about data quality and governance.
The misuse of generative AI poses additional risks to financial markets, such as facilitating financial fraud and spreading false information. Misaligned or poorly calibrated AI models may result in unintended consequences, potentially impacting financial stability. Long-term implications, including shifts in market structures, macroeconomic conditions, and energy consumption, further underscore the importance of responsible AI deployment.
Fraudsters have increasingly turned to generative AI to enhance their schemes, using AI-generated text and media to craft convincing narratives. These include social engineering tactics, spear-phishing, romance scams, and investment frauds. Additionally, AI can generate large volumes of fake social media profiles or deepfake videos, which are used to manipulate victims into divulging sensitive information or transferring funds. Criminals have even employed AI-generated audio to mimic voices, misleading individuals into believing they are interacting with trusted contacts.
In one notable incident reported by the FBI, a North Korean cybercriminal used a deepfake video to secure employment with an AI-focused company, exploiting the position to access sensitive information. Similarly, Russian threat actors have been linked to fake videos aimed at influencing elections. These cases highlight the broad potential for misuse of generative AI across various domains.
To address these challenges, the FBI advises individuals to take several precautions. These include establishing secret codes with trusted contacts to verify identities, minimizing the sharing of personal images or voice data online, and scrutinizing suspicious content. The agency also cautions against transferring funds, purchasing gift cards, or sending cryptocurrency to unknown parties, as these are common tactics employed in scams.
Generative AI tools have been used to improve the quality of phishing messages by reducing grammatical errors and refining language, making scams more convincing. Fraudulent websites have also employed AI-powered chatbots to lure victims into clicking harmful links. To reduce exposure to such threats, individuals are advised to avoid sharing sensitive personal information online or over the phone with unverified sources.
By remaining vigilant and adopting these protective measures, individuals can mitigate their risk of falling victim to fraud schemes enabled by emerging AI technologies.
Italy's data protection regulator, Garante per la Protezione dei Dati Personali, has cautioned GEDI, a leading Italian media group, to comply with EU data protection laws in its collaboration with OpenAI. Reuters reports that the regulator highlighted the risk of non-compliance if personal data from GEDI's archives were shared under a proposed agreement with OpenAI, the creator of ChatGPT.
The partnership, formed in September, would allow OpenAI to use Italian-language content from GEDI’s publications, including La Repubblica and La Stampa, to enhance its chatbot services. The regulator warned that the use of personal and sensitive data stored in digital archives requires stringent safeguards. “The digital archives of newspapers contain the stories of millions of people, with information, details, and even extremely sensitive personal data that cannot be licensed without due care for use by third parties to train artificial intelligence,” stated the Garante.
GEDI clarified that its agreement with OpenAI does not involve selling personal data. “The project has not been launched,” said GEDI. “No editorial content has been made available to OpenAI at the moment and will not be until the reviews underway are completed.” The company expressed hope for ongoing constructive dialogue with the Italian data protection authority.
The case highlights growing tension between European regulators and major AI developers. The EU’s Artificial Intelligence Act (EU AI Act), effective from August 2024, sets strict guidelines for AI systems based on their risk levels. While the Act aims to ensure transparency and data privacy, critics argue it imposes burdensome constraints that could hamper innovation.
AI industry leaders have voiced frustration over Europe's regulatory environment. OpenAI’s CEO, Sam Altman, warned in 2023 that the company might "cease operating" in the EU if compliance proved too difficult. In September 2024, executives from Meta and other firms cautioned in an open letter that the EU’s strict tech policies risk undermining Europe’s competitiveness in AI development.
The Italian regulator’s scrutiny of the GEDI-OpenAI partnership reflects broader EU attitudes toward AI regulation. While ensuring compliance with GDPR, such interventions exemplify Europe's cautious approach to AI innovation. Critics argue that this could slow progress in a field where other regions, such as the US and China, are advancing more aggressively.
According to the management at PlayStation, though artificial intelligence (AI) may potentially change the world of gaming, it can never supplant the human creativity behind game development. Hermen Hulst, co-CEO of PlayStation, stated that AI will complement but not substitute the "human touch" that makes games unique.
Hulst shared this view as Sony marked the 30th anniversary of the original PlayStation. Referring to the growing role of AI, Hulst acknowledged that it can handle repetitive tasks in game development. However, he reassured fans and creators that human-crafted experiences will always have a place in the market alongside AI-driven innovations. “Striking the right balance between leveraging AI and preserving the human touch is key, indeed,” he said.
Sony’s year has been marked by both highs and lows. While the PlayStation 5 continues to perform well, the company faced numerous setbacks, including massive job cuts within the gaming industry and the failed launch of the highly anticipated game, Concord. The game resulted in players receiving refunds, and the studio behind it was shut down.
On the hardware side, Sony’s new model, the PlayStation 5 Pro, was heavily criticized for its steep £699.99 price point. However, the company had a major success with the surprise hit Astro Bot, which has received numerous Game of the Year nominations.
Sony is also adapting to changes in how people play games. Its handheld device, the PlayStation Portal, is a controller/screen combination that lets users stream games from their PS5. Recently, Sony launched a beta program that enables cloud streaming directly onto the Portal, marking a shift toward more flexibility in gaming.
In addition to gaming, Sony aims to expand its influence in the entertainment industry by adapting games into films and series. Successful examples include The Last of Us and Uncharted, both based on PlayStation games. Hulst hopes to further elevate PlayStation’s intellectual properties through future projects like God of War, which is being developed as an Amazon Prime series.
Launched in December 1994, the PlayStation has become a cultural phenomenon, with each of the four console generations preceding the PS5 ranking among the best-selling gaming systems in history. Hulst and his co-CEO Hideaki Nishino, who grew up gaming in different ways, both credit their early experiences with shaping their passion for the industry.
As PlayStation looks toward the future, it aims to maintain a delicate balance between innovation and tradition, ensuring that gaming endures as a creative, immersive medium.
The UK is facing an increasing number of cyberattacks from Russia and China, with serious cases tripling in the past year, according to a new report by the National Cyber Security Centre (NCSC). On Tuesday, Richard Horne, the new NCSC chief, stated that the country is at a critical point in safeguarding its essential systems and services from these threats.
The report reveals a disturbing rise in sophisticated cyber threats targeting Britain’s public services, businesses, and critical infrastructure. Over the past year, the agency responded to 430 cyber incidents, a significant increase from 371 the previous year. Horne highlighted notable incidents such as the ransomware attack on pathology provider Synnovis in June, which disrupted blood supplies, and the October cyberattack on the British Library. These incidents underscore the severe consequences these cyber threats have on the UK.
Similar challenges are being faced by the UK’s close allies, including the U.S., with whom the country shares intelligence and collaborates on law enforcement. Horne emphasized the UK’s deep reliance on its digital infrastructure, which supports everything from powering homes to running businesses. This dependency has made the UK an appealing target for hostile actors aiming to disrupt operations, steal data, and cause destruction.
“Our critical systems are the backbone of our daily lives—keeping the lights on, the water running, and our businesses growing. But this reliance also creates vulnerabilities that our adversaries are eager to exploit,” Horne stated.
According to the report, Russia and China remain at the forefront of the UK’s cybersecurity challenges. Russian hackers, described as “reckless and capable,” continue to target NATO states, while China’s highly advanced cyber operations aim to extend its influence and steal critical data. Horne called for swift and decisive action, urging both the government and private sector to enhance their defenses.
Horne emphasized the need for more robust regulations and mandatory reporting of cyber incidents to better prepare for future threats. He stressed that a coordinated effort is necessary to improve the UK’s overall cybersecurity posture and defend against adversaries’ growing capabilities.
The tech industry has been hit by a wave of layoffs, with over 150,000 workers losing their jobs at major companies like Microsoft, Tesla, Cisco, and Intel. As the market adapts to new economic realities, tech firms are restructuring to reduce costs and align with evolving demands. Below are key instances of these workforce reductions.
Intel: To save $10 billion by 2025, Intel has announced layoffs affecting 15,000 employees—approximately 15% of its workforce. The company is scaling back on marketing, capital expenditures, and R&D to address significant financial challenges in a competitive market.
Tesla: Tesla has reduced its workforce by 20,000 employees, impacting junior staff and senior executives alike. Departments like the Supercharging team were hit hardest. According to Bloomberg, these layoffs may account for up to 20% of Tesla's workforce.
Cisco: Cisco has laid off 10,000 employees in two rounds this year—a 5% reduction in February followed by another 7%. CEO Chuck Robbins noted that these changes aim to focus on areas like cybersecurity and AI while adapting to a “normalized demand environment.”
SAP: Enterprise software giant SAP is undergoing a restructuring process affecting 8,000 employees, roughly 7% of its global workforce. This initiative seeks to streamline operations and prioritize future growth areas.
Uber: Since the COVID-19 pandemic, Uber has laid off 6,700 employees, closing some business units and shifting focus away from ventures like self-driving cabs. These adjustments aim to stabilize operations amid shifting market demands.
Dell: In its second round of layoffs in two years, Dell has cut 6,000 jobs due to declining PC market demand. Additional cuts are anticipated as the company seeks to address cost pressures in a tough economic environment.
These layoffs reflect broader economic shifts as tech companies streamline operations to navigate challenges and focus on strategic priorities like AI, cybersecurity, and operational efficiency.
Lisa Loud, Executive Director of the Secret Network Foundation, emphasized in her keynote that Secret Network has been pioneering confidential computing in Web3 since its launch in 2020. According to Loud, the focus now is to mainstream this technology alongside blockchain and decentralized AI, addressing concerns with centralized AI systems and ensuring data privacy.
Yannik Schrade, CEO of Arcium, highlighted the growing necessity for decentralized confidential computing, calling it the “missing link” for distributed systems. He stressed that as AI models play an increasingly central role in decision-making, conducting computations in encrypted environments is no longer optional but essential.
Schrade also noted the potential of confidential computing in improving applications like decentralized finance (DeFi) by integrating robust privacy measures while maintaining accessibility for end users. However, achieving a balance between privacy and scalability remains a significant hurdle. Schrade pointed out that privacy safeguards often compromise user experience, which can hinder broader adoption. He emphasized that for confidential computing to succeed, it must be seamlessly integrated so users remain unaware they are engaging with such technologies.
Shahaf Bar-Geffen, CEO of COTI, underscored the role of federated learning in training AI models on decentralized datasets without exposing raw data. This approach is particularly valuable in sensitive sectors like healthcare and finance, where confidentiality and compliance are critical.
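Federated learning of the kind described here can be sketched with a minimal federated-averaging loop. This is an illustrative toy with a one-parameter linear model; the client data, learning rate, and round count are assumptions, not COTI's system:

```python
import numpy as np

rng = np.random.default_rng(42)

# Three "clients" each hold private data drawn from y = 3x + noise.
# Raw data never leaves a client; only model updates are shared.
clients = []
for _ in range(3):
    x = rng.normal(size=50)
    clients.append((x, 3.0 * x + rng.normal(scale=0.1, size=50)))

def local_step(w, x, y, lr=0.1):
    """One gradient-descent step on a client's private data for y = w * x."""
    grad = 2.0 * np.mean((w * x - y) * x)
    return w - lr * grad

w_global = 0.0
for _ in range(100):                                 # federated rounds
    local_ws = [local_step(w_global, x, y) for x, y in clients]
    w_global = float(np.mean(local_ws))              # server averages updates

print(f"learned slope: {w_global:.2f}")  # close to the true slope of 3
```

The server only ever sees averaged parameters, never the raw samples, which is what makes the approach attractive for the regulated sectors mentioned above.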
Henry de Valence, founder of Penumbra Labs, discussed the importance of aligning cryptographic systems with user expectations. Drawing parallels with secure messaging apps like Signal, he emphasized that cryptography should function invisibly, enabling users to interact with systems without technical expertise. De Valence stressed that privacy-first infrastructure is vital as AI’s capabilities to analyze and exploit data grow more advanced.
Other leaders in the field, such as Martin Leclerc of iEXEC, highlighted the complexity of achieving privacy, usability, and regulatory compliance. Innovative approaches like zero-knowledge proof technology, as demonstrated by Lasha Antadze of Rarimo, offer promising solutions. Antadze explained how this technology enables users to prove eligibility for actions like voting or purchasing age-restricted goods without exposing personal data, making blockchain interactions more accessible.
Dominik Schmidt, co-founder of Polygon Miden, reflected on lessons from legacy systems like Ethereum to address challenges in privacy and scalability. By leveraging zero-knowledge proofs and collaborating with decentralized storage providers, his team aims to enhance both developer and user experiences.
As confidential computing evolves, it is clear that privacy and usability must go hand in hand to address the needs of an increasingly data-driven world. Through innovation and collaboration, these technologies are set to redefine how privacy is maintained in AI and Web3 applications.