Over the years, zero trust has become a popular model as organisations look to keep confidential information safe, a goal they regard as paramount in cybersecurity. Zero trust is a security framework that differs fundamentally from the traditional perimeter-based model. Instead of relying on a hardened boundary, it grants access to resources only after continual validation of every user and every device they use, regardless of a person's position in the organisation or how long they have worked there. Under this "never trust, always verify" policy, even a long-tenured employee receives only the minimum access needed to fulfil their tasks. Because so much cybersecurity information is log file data, zero-trust principles can provide better safeguarding of this sensitive information.
Log Files: Why They Are Both Precious and Vulnerable
Log files record all the digital activity happening across a network, so they can reveal vulnerabilities in a system that need remediation. They are a rich source for tracing how a company's systems are behaving: analysts comb them for anomalies or out-of-place behaviour so that security lapses can be addressed quickly. At the same time, these log files can expose organisations to risk if they fall into the wrong hands, whether through theft of confidential data, hacking, or tampering. Access to log files must therefore be strictly controlled and limited to authorised users so that misuse is prevented and the network remains secure.
Collecting and Storing Log Data Securely
Zero trust can only be implemented well if log files are collected and stored soundly. That means gathering data in real time into a tamper-resistant environment that prevents unauthorised modification. OpenTelemetry has been gaining popularity of late because it can collect data from many sources and integrate securely with a range of databases, notably PostgreSQL.
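As a rough sketch of such a pipeline, and assuming the OpenTelemetry Python SDK with its OTLP exporter (the logging bridge is still marked experimental, so module paths can differ between SDK versions), standard-library log records can be shipped to a central collector that forwards them into a hardened store such as PostgreSQL; the collector endpoint and service name below are placeholders:

```python
# Sketch only: ships Python logging records to an OpenTelemetry collector over OTLP.
# "otel-collector:4317" and "payments-api" are placeholders, not real services.
import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource

logger_provider = LoggerProvider(
    resource=Resource.create({"service.name": "payments-api"})
)
set_logger_provider(logger_provider)

exporter = OTLPLogExporter(endpoint="otel-collector:4317", insecure=False)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))

# Bridge the standard logging module into OpenTelemetry.
logging.getLogger().addHandler(LoggingHandler(logger_provider=logger_provider))

logging.getLogger("auth").warning("failed login for user_id=%s", "12345")
logger_provider.shutdown()
```

In a setup like this, only the collector holds the write path into the database, which fits the zero-trust goal of keeping log storage tamper-resistant and narrowly permissioned.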
Secure log storage can also apply blockchain technology. A decentralised, immutable structure like a blockchain ensures logs cannot be altered and that their records remain transparent and tamper-proof. Because a blockchain is maintained across multiple nodes rather than one central point, staging a focused attack on the log data becomes nearly impossible.
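A single-node hash chain is enough to show the tamper-evidence property described here; a real blockchain would replicate the chain across many nodes and add consensus. The sketch below, with invented log events, is illustrative only:

```python
# Minimal hash-chained log: editing any earlier record breaks every later link.
import hashlib
import json
import time

def make_block(record: dict, prev_hash: str) -> dict:
    """Create a log 'block' whose hash covers the record and the previous block's hash."""
    block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every hash and check each link to its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
prev = "0" * 64  # genesis value
for event in ({"user": "alice", "action": "login"}, {"user": "bob", "action": "delete"}):
    block = make_block(event, prev)
    chain.append(block)
    prev = block["hash"]

print(verify_chain(chain))            # True
chain[0]["record"]["action"] = "read" # simulate tampering with an old record
print(verify_chain(chain))            # False: tampering detected
```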
Imposing Least Privilege Access Control
Least-privilege access is one of the core principles of zero-trust security: end users get access only to what they need to complete their tasks. Balancing this principle with efficient log analysis can be challenging, however, and traditional access controls such as data masking or classification frequently fall short or prove impractical. One promising solution is homomorphic encryption, which enables analysis of data in its encrypted state. Analysts can evaluate log files without ever seeing the unencrypted data, so security is maintained without disrupting the workflow.
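As a small, hedged illustration, the snippet below uses the python-paillier (phe) library, which supports additive (partially homomorphic) operations rather than a fully general homomorphic scheme; the host names and failed-login counts are invented:

```python
# pip install phe -- python-paillier provides additively homomorphic (Paillier) encryption.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical per-host counts of failed logins extracted from the logs.
failed_logins = {"web-01": 3, "web-02": 41, "db-01": 0}
encrypted = {host: public_key.encrypt(count) for host, count in failed_logins.items()}

# The analyst works only on ciphertexts: sums stay encrypted throughout.
encrypted_total = sum(encrypted.values(), public_key.encrypt(0))

# Only the key holder can reveal the aggregate.
print(private_key.decrypt(encrypted_total))  # 44
```

Real log analytics over homomorphic encryption is far heavier than this, but the shape is the same: the analyst computes on ciphertexts, and only an authorised key holder can decrypt the result.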
The benefits of homomorphic encryption extend beyond the analyst. Other stakeholders, such as administrators, can hold the permissions they need without being able to read the underlying data. Logs therefore remain secure even within internal teams, reducing the chance of accidental exposure.
In-House AI for Threat Detection
Companies can further secure log data by running in-house AI models directly within their own database, minimising external access. For instance, a company can use a private SLM (small language model) trained specifically to analyse its logs, enabling safe and accurate threat detection without sharing any logs with third-party services. An AI trained on the organisation's own relevant, encrypted log data also introduces less outside bias, yielding precise and relevant insights.
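The article does not name a specific in-house model, so the following is only a generic sketch of fully local analysis: an isolation forest over hypothetical per-session log features, with nothing ever leaving the organisation's environment.

```python
# Generic local anomaly detection over log-derived features; all values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: request count, distinct endpoints, error ratio.
X_train = np.array([
    [120, 14, 0.01],
    [ 98, 11, 0.02],
    [110, 13, 0.00],
    [105, 12, 0.03],
    [115, 15, 0.01],
    [101, 10, 0.02],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(X_train)

X_new = np.array([
    [102, 12, 0.02],   # resembles normal traffic
    [950,  2, 0.65],   # burst of errors against very few endpoints
])
print(model.predict(X_new))  # 1 = treated as normal, -1 = flagged for review
```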
By applying a zero-trust approach, with strict access controls and data kept encrypted throughout the analysis process, organisations can maximise security while minimising exposure to potential cyber threats.
Zero-Trust for Optimal Log Security
Zero-trust security appears to be one of the most effective approaches to log file intelligence, using technologies such as blockchain and homomorphic encryption to protect the integrity and privacy of the information under management. In effect, it locks the logs down so that this source of valuable security insights stays well protected against unauthorised access and modification.
Even if an organisation does not adopt zero trust across all of its systems, protecting its logs should still be a priority. Adopting the essential elements of zero trust, such as minimal permissions and secured storage, helps organisations reduce their vulnerability to cyberattacks while safeguarding this critical source of data.
Within a year and a half, ChatGPT has grown from an AI prototype into a broad productivity assistant, even sporting its own text and code editor, Canvas. Soon, OpenAI will add direct web search to ChatGPT, putting the platform at the same table as Google's iconic search. Amid these rapid updates, ChatGPT has gained quite a few features that may not be noticed at first glance but that deepen the user experience if you know where to look.
This article will show you how to tap into those features, from customisation settings to unique prompting techniques. The five must-know tips below will help you unlock the full range of ChatGPT's abilities for any task, small or big.
1. Rename Chats for Better Organisation
Every new conversation with ChatGPT begins as a fresh thread: it remembers the details of that specific exchange but "forgets" all previous ones. Naming your chats lets you keep track of ongoing projects or specific topics. The name ChatGPT suggests reflects the flow of the conversation and often misses the context you will want to recall later. Renaming your conversations is a simple yet powerful way of staying organised if you rely on ChatGPT for a variety of tasks.
To rename a conversation, tap the three dots next to its name in the sidebar. You can also archive older chats to remove them from the list without deleting them entirely, keeping the list focused on the conversations that are still active.
2. Customise ChatGPT through Custom Instructions
Custom Instructions let you tailor ChatGPT's answers to your needs by sharing your information and preferences with the AI. The personalisation has two parts: telling ChatGPT what you want it to know about you, and how you would like its responses delivered. For instance, if you ask ChatGPT for coding advice several times a week, you can tell it which programming languages you know or want to learn so it can fine-tune its responses. You can also ask ChatGPT to give more verbose explanations, or to skip over the basics, so its answers match your understanding of a topic.
To set up these preferences, tap the profile icon in the upper right, choose "Customise ChatGPT" from the menu, and fill in your preferences. Doing so will give you responses tailored to your interests and requirements.
3. Choose the Right Model for Your Use
If you are a ChatGPT Plus subscriber, you have access to several AI models, each suited to different tasks. The default for most purposes is GPT-4o, which tends to strike the best balance between speed and capability and supports additional features, including file uploads, web browsing, and data analysis.
Other models are useful when a project is complex and needs substantial planning. You might start a project that requires deep research with o1-preview, then shift the discussion to GPT-4o for quicker responses. To switch models, click the model dropdown at the top of the screen, or type a forward slash (/) in the chat box to access more options, including web browsing and image creation.
4. Explore the Mini-Apps in the GPT Store
Custom GPTs and the GPT Store enable "mini-applications" that extend the functionality of the platform. Custom GPTs bundle built-in prompts and workflows, and sometimes even APIs, to extend the AI's capabilities. For instance, with Canva's GPT you can create logos, social media posts, or presentations straight from the ChatGPT portal by linking up the Canva tool, which means you can co-create visual content with ChatGPT without ever leaving the portal.
If there are prompts you apply often, or a dataset you upload frequently, you can easily create your own Custom GPT. This is helpful for managing recipes, keeping track of personal projects, creating workflow shortcuts, and much more. Open the GPT Store via the "Explore GPTs" button in the sidebar; your recent and custom GPTs appear in the top tab, where you can find them easily and use them whenever needed.
5. Manage Conversations with a Fresh Approach
To get the most out of ChatGPT, it is key to understand that every new conversation is an independent thread with its own "memory." It may recall some details from previous conversations, but generally its answers depend on what is being discussed in the immediate chat. That makes it best to start chats on unrelated projects or topics afresh for clarity.
For long-term projects, it can make sense to continue in a single thread so that all relevant information is kept together; for unrelated topics, starting fresh each time avoids confusion. Archiving or deleting conversations you no longer need also helps declutter your interface and makes active threads easier to reach.
What Makes AI Unique Compared to Other Software?
AI behaves very differently from other software: it responds dynamically, sometimes offering "backtalk," and does not simply do what it is told. That property means some trial and error is needed to obtain the desired output. For instance, you might prompt ChatGPT to review its own output, such as asking it to replace single quote characters with double quote characters, to produce more accurate results. This is similar to how a developer tunes an AI model, guiding ChatGPT to "think" through a task in several steps.
ChatGPT Canvas and other features like Custom GPTs make the AI behave more like software in the classical sense—although, of course, with personality and learning. If ChatGPT continues to grow in this manner, features such as these may make most use cases easier and more delightful.
Following these five tips should help you make the most of ChatGPT as a productivity tool and keep pace with the latest developments. From renaming chats to playing around with Custom GPTs, all of them add to a richer and more customizable user experience.
What is Data Poisoning?
Data poisoning is an attack on AI models that corrupts the data used to train them, with the intent of making the model produce flawed predictions or decisions. Unlike traditional hacking, it does not require access to the system itself: the attacker manipulates input data either before or after the AI model is deployed, which makes it very difficult to detect.
One form of attack happens at the training phase, when an attacker manages to inject malicious data into the model's training set. Another happens after deployment, when poisoned data fed to the AI causes it to produce wrong outputs. Both kinds of attack are hard to detect and damage the AI system over the long run.
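As a toy illustration of training-time poisoning, not drawn from any real incident, the sketch below flips the labels of most samples of one class in a synthetic dataset and shows how the resulting classifier degrades:

```python
# Toy illustration (synthetic data, invented numbers) of training-time poisoning:
# an attacker relabels most examples of one class so the model learns to suppress it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poison the training set: relabel 60% of class-1 examples as class 0.
rng = np.random.default_rng(0)
poisoned_y = y_tr.copy()
class1 = np.flatnonzero(poisoned_y == 1)
flip = rng.choice(class1, size=int(0.6 * len(class1)), replace=False)
poisoned_y[flip] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, poisoned_y)

print("clean accuracy:   ", round(clean_model.score(X_te, y_te), 3))
print("poisoned accuracy:", round(poisoned_model.score(X_te, y_te), 3))  # drops sharply
```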
Researchers at JFrog found a number of suspicious models uploaded to Hugging Face, a community where users share AI models. These contained encoded malicious code, which the researchers believe hackers, potentially operating from the KREOnet research network in Korea, may have embedded. The most worrying aspect was that these malicious models went undetected by masquerading as benign.
This is a serious threat because many AI systems today are trained on huge amounts of data from different sources, including the internet. If attackers manage to alter the data used to train a model, the consequences could range from misleading results to large-scale cyberattacks.
Why It's Hard to Detect
One of the major challenges with data poisoning is that AI models are built on enormous datasets, which makes it difficult for researchers to know everything that has gone into a model. That lack of visibility gives attackers room to sneak in poisoned data without being caught.
It gets worse: AI systems that continuously scrape the web to update themselves can end up poisoning their own training data, for example by ingesting their own earlier or manipulated outputs. This raises the alarming possibility of an AI system's gradual breakdown, or "degenerative model collapse."
The Consequences of Ignoring the Threat
Left unmitigated, data poisoning could allow attackers to inject stealthy backdoors into AI software, enabling them to carry out malicious actions or cause an AI system to behave in unexpected ways. In practical terms, they could run malicious code, enable phishing, and rig AI predictions for a range of nefarious purposes.
The cybersecurity industry must treat this as a serious threat as dependence grows on interconnected generative AI systems and LLMs. Failing to do so risks widespread vulnerability across the entire digital ecosystem.
How to Defend Against Data Poisoning
Protecting AI models against data poisoning calls for vigilance throughout the AI development cycle. Experts say organisations should use only data from sources they can trust when training their models. The Open Web Application Security Project (OWASP) has published a list of best practices for avoiding data poisoning, including frequent checks for biases and abnormalities in the training data.
Other recommendations involve running multiple AI algorithms that verify results against one another to spot inconsistencies. If an AI model starts producing strange results, fallback mechanisms should be in place to prevent harm.
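A rough sketch of that cross-checking idea, using synthetic data and arbitrary model choices, might look like the following: several independently trained models vote, and any disagreement is escalated rather than trusted.

```python
# Flag inputs where independently trained models disagree; fall back to human review.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=15, random_state=1)
models = [
    LogisticRegression(max_iter=1000).fit(X, y),
    RandomForestClassifier(random_state=1).fit(X, y),
    GradientBoostingClassifier(random_state=1).fit(X, y),
]

def predict_with_fallback(x):
    """Return the agreed prediction, or escalate when the models disagree."""
    votes = [m.predict(x.reshape(1, -1))[0] for m in models]
    if len(set(votes)) > 1:   # disagreement: possible poisoning, drift, or bad input
        return "escalate_to_human_review"
    return votes[0]

print(predict_with_fallback(X[0]))
```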
Defences also include simulated data-poisoning attacks run by cybersecurity teams to test their AI systems' robustness. While it is hard to build an AI system that is 100% secure, frequent validation of predictive outputs goes a long way towards detecting and preventing poisoning.
Creating a Secure Future for AI
As AI keeps evolving, trust in these systems has to be established. That will only be possible when the entire AI ecosystem, including its supply chains, forms part of the cybersecurity framework. Achieving this means monitoring inputs and outputs for unusual or irregular behaviour, which in turn helps organisations build more robust and trustworthy AI models.
Ultimately, the future of AI hinges on our ability to keep pace with emerging threats like data poisoning. Businesses that take proactive steps to secure their AI systems today protect themselves against one of the most serious challenges facing the digital world.
The bottom line is that AI security is not just about algorithms; it's about the integrity of the data powering those algorithms.
In response, Microsoft announced changes to Recall. Initially planned for a broad release on June 18, 2024, the feature will first be made available to Windows Insider Program users. The company assured users that Recall would be turned off by default and emphasised its commitment to privacy and security. Despite these assurances, Microsoft declined to comment on claims that the tool posed a security risk.
Recall was showcased during Microsoft's developer conference, with Yusuf Mehdi, Corporate Vice President, highlighting its ability to access virtually anything on a user's PC. Following its debut, the ICO vowed to investigate privacy concerns. On June 13, Microsoft announced updates to Recall, reinforcing its "commitment to responsible AI" and privacy principles.
Adobe Overhauls Terms of Service
Adobe faced a wave of criticism after updating its terms of service, which many users interpreted as allowing the company to use their work for AI training without proper consent. Users were required to agree to a clause granting Adobe a broad licence over their content, leading to suspicions that Adobe was using this content to train generative AI models like Firefly.
Adobe officials, including President David Wadhwani and Chief Trust Officer Dana Rao, denied these claims and clarified that the terms were misinterpreted. They reassured users that their content would not be used for AI training without explicit permission, except for submissions to the Adobe Stock marketplace. The company acknowledged the need for clearer communication and has since updated its terms to explicitly state these protections.
The controversy began with Firefly's release in March 2023, when artists noticed AI-generated imagery mimicking their styles. Users like YouTuber Sasha Yanshin cancelled their Adobe subscriptions in protest. Adobe's Chief Product Officer, Scott Belsky, admitted the wording was unclear and emphasised the importance of trust and transparency.
Meta Faces Scrutiny Over AI Training Practices
Meta, the parent company of Facebook and Instagram, has also been criticised for using user data to train its AI tools. Concerns were raised when Martin Keary, Vice President of Product Design at Muse Group, revealed that Meta planned to use public content from social media for AI training.
Meta responded by assuring users that it only used public content and did not access private messages or information from users under 18. An opt-out form was introduced for EU users, but U.S. users have limited options due to the lack of national privacy laws. Meta emphasised that its latest AI model, Llama 2, was not trained on user data, but users remain concerned about their privacy.
Suspicion arose in May 2023, with users questioning Meta's security policy changes. Meta's official statement to European users clarified its practices, but the opt-out form, available under Privacy Policy settings, remains a complex process. The company can only address user requests if they demonstrate that the AI "has knowledge" of them.
The recent actions by Microsoft, Adobe, and Meta highlight the growing tensions between tech giants and their users over data privacy and AI development. As these companies navigate user concerns and regulatory scrutiny, the debate over how AI tools should handle personal data continues to intensify. The tech industry's future will heavily depend on balancing innovation with ethical considerations and user trust.
The foundation of this innovative work is a database known as the Multimodal Sarcasm Detection Dataset (MUStARD). This dataset, annotated by a separate research team from the U.S. and Singapore, includes labels indicating the presence of sarcasm in various pieces of content. By leveraging this annotated dataset, the Dutch research team aimed to construct a robust sarcasm detection model.
After extensive training using the MUStARD dataset, the researchers achieved an impressive accuracy rate. The AI model could detect sarcasm in previously unlabeled exchanges nearly 75% of the time. Further developments in the lab, including the use of synthetic data, have reportedly improved this accuracy even more, although these findings are yet to be published.
One of the key figures in this project, Matt Coler from the University of Groningen's speech technology lab, expressed excitement about the team's progress. "We are able to recognize sarcasm in a reliable way, and we're eager to grow that," Coler told The Guardian. "We want to see how far we can push it." Shekhar Nayak, another member of the research team, highlighted the practical applications of their findings.
By detecting sarcasm, AI assistants could better interact with human users, identifying negativity or hostility in speech. This capability could significantly enhance the user experience by allowing AI to respond more appropriately to human emotions and tones. Gao emphasized that integrating visual cues into the AI tool's training data could further enhance its effectiveness. By incorporating facial expressions such as raised eyebrows or smirks, the AI could become even more adept at recognizing sarcasm.
The scenes from sitcoms used to train the AI model included notable examples, such as a scene from "The Big Bang Theory" where Sheldon observes Leonard's failed attempt to escape a locked room, and a "Friends" scene where Chandler, Joey, Ross, and Rachel unenthusiastically assemble furniture. These diverse scenarios provided a rich source of sarcastic interactions for the AI to learn from. The research team's work builds on similar efforts by other organizations.
For instance, the U.S. Department of Defense's Defense Advanced Research Projects Agency (DARPA) has also explored AI sarcasm detection. Using DARPA's SocialSim program, researchers from the University of Central Florida developed an AI model that could classify sarcasm in social media posts and text messages. This model achieved near-perfect sarcasm detection on a major Twitter benchmark dataset. DARPA's work underscores the broader significance of accurately detecting sarcasm.
"Knowing when sarcasm is being used is valuable for teaching models what human communication looks like and subsequently simulating the future course of online content," DARPA noted in a 2021 report. The advancements made by the University of Groningen team mark a significant step forward in AI's ability to understand and interpret human communication.
As AI continues to evolve, the integration of sarcasm detection could play a crucial role in developing more nuanced and responsive AI systems. This progress not only enhances human-AI interaction but also opens new avenues for AI applications in various fields, from customer service to mental health support.
Microsoft recently made headlines by temporarily blocking internal access to ChatGPT, a language model developed by OpenAI, citing data concerns. The move sparked curiosity and raised questions about the security and potential risks associated with this advanced language model.
According to reports, Microsoft took this precautionary step on Thursday, sending ripples through the tech community. The decision came as a response to what Microsoft referred to as data concerns associated with ChatGPT.
While the exact nature of these concerns remains undisclosed, it highlights the growing importance of scrutinizing the security aspects of AI models, especially those that handle sensitive information. With ChatGPT being a widely used language model for various applications, including customer service and content generation, any potential vulnerabilities in its data handling could have significant implications.
As reported by ZDNet, Microsoft has yet to provide detailed information on the duration of the block or the specific data issues that prompted the action. However, the company stated that it is actively working with OpenAI to address these concerns and ensure a secure environment for its users.
According to a Gartner report issued on Tuesday, 45% of firms are presently testing generative AI, while 10% have such technologies in use. The figures come from a poll of 1,419 executives conducted during a webinar last month examining the commercial costs and risks of generative AI.
In the recent survey, around 78% said that the advantages of generative AI exceeded its risks, compared to the 68% who felt the same way in the prior survey.
According to Gartner, 45% of firms are expanding their generative AI investments overall, and 22% are doing so across at least three different business functions. Software development saw the biggest investment in or adoption of generative AI, at 21%, followed by marketing and customer service, at 19% and 16%, respectively.
"Organizations are not just talking about generative AI – they're investing time, money, and resources to move it forward and drive business outcomes," said Karamouzis, Gartner's group chief of research and an acclaimed analyst.
"Executives are taking a bolder stance on generative AI as they see the profound ways that it can drive innovation, optimization, and disruption[…]Business and IT leaders understand that the 'wait and see' approach is riskier than investing," said Karamouzis.
In order to grow their businesses, companies must have a framework in place to ensure that they are adopting generative AI responsibly and ethically.
According to Kathy Baxter, Salesforce.com's principal architect of Responsible AI, skepticism should also extend to technologies that claim to detect whether AI has been used.
Baxter added that the technology has become 'democratized,' giving anyone access to generative AI with few restrictions. Yet even though many firms are attempting to screen out harmful information and continue to invest in such initiatives, there is still little understanding of "how big a grain of salt" one should apply to AI-generated content.
In an interview with ZDNET, Baxter noted that even AI-detection tools occasionally make mistakes yet may be treated as always accurate, stressing that users accept their output as fact even when it is false. When generative AI and its accompanying tools are used in fields such as education, these assumptions can be harmful, since students might be falsely accused of using AI in their work.
She further raised concerns over such risks, urging individuals and organizations to use generative AI with ‘enough skepticism.’
She further highlighted the need for sufficient restrictions to ensure the safety and accuracy of AI, and for deployments to be rolled out alongside mitigation tools. These can include fault detection and reporting features, as well as mechanisms to collect and provide human feedback.
Moreover, she emphasized the significance of the data used to train AI models and added that grounding AI is equally essential. But as she pointed out, not many businesses practice proper data hygiene.
The study demonstrates how these AI systems can be made to reproduce copyrighted artwork and medical images from their training data almost exactly, a result that might help artists who are suing AI companies for copyright violations.
Researchers from Google, DeepMind, UC Berkeley, ETH Zürich, and Princeton obtained their findings by repeatedly prompting Google's Imagen with captions from its training set, such as a person's name, and then checking whether any of the generated images matched the original photos in the training data. The team extracted more than 100 copies of photos from the AI's training set.
These image-generating AI models are trained on vast datasets of captioned images scraped from the internet. The current generation of the technology works by taking images from the dataset and adding noise to their pixels step by step until the original is nothing more than a jumble of random pixels; the model then learns to reverse that procedure, creating a new image from the noise.
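A minimal sketch of that forward "noising" step may help; the learned reverse (denoising) process that actually generates images is omitted, and the step count and noise level below are chosen purely for illustration.

```python
# Forward "noising" only: blend an image with Gaussian noise until little of the
# original signal remains. Array sizes and parameters are illustrative.
import numpy as np

def add_noise(image: np.ndarray, steps: int = 10, beta: float = 0.3) -> np.ndarray:
    """Progressively mix an image (values roughly in [0, 1]) with Gaussian noise."""
    x = image.astype(np.float64)
    for _ in range(steps):
        noise = np.random.normal(0.0, 1.0, size=x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise  # keeps variance bounded
    return x

original = np.random.rand(8, 8)   # stand-in for a training image
noised = add_noise(original)

# Correlation with the original drops towards zero as the image turns into noise.
print(np.corrcoef(original.ravel(), noised.ravel())[0, 1])
```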
According to Ryan Webster, a Ph.D. student at the University of Caen Normandy who has studied privacy in other image-generation models but was not involved in the research, the study is the first to demonstrate that these AI models memorize photos from their training sets. The finding also has implications for startups wanting to use AI models in health care, since it indicates that these systems risk leaking users' private and sensitive data.
Eric Wallace, a Ph.D. student on the study team, raises concerns over the privacy issue and says the group hopes to raise the alarm about potential privacy problems with these AI models before they are widely deployed in sensitive sectors like medicine.
“A lot of people are tempted to try to apply these types of generative approaches to sensitive data, and our work is definitely a cautionary tale that that’s probably a bad idea unless there’s some kind of extreme safeguards taken to prevent [privacy infringements],” Wallace says.
The extent to which these AI models memorize and regurgitate images from their training data is at the heart of another major conflict between AI businesses and artists. Stability AI faces two lawsuits, one from Getty Images and one from a group of artists, who claim the company illicitly scraped and processed their copyrighted content.
The researchers' findings could ultimately help artists argue that AI companies have violated their copyright. If artists can demonstrate that the model copied their work without consent, the companies may have to compensate those whose work was used to train Stable Diffusion.
According to Sameer Singh, an associate professor of computer science at the University of California, Irvine, these findings hold paramount importance. “It is important for general public awareness and to initiate discussions around the security and privacy of these large models,” he adds.