
How Reachability Analysis Is Streamlining Security for Developers

Over the past few years, AI assistants have made coding easier for developers, who can now quickly write code and push it to GitHub and other platforms. But with so much automation, the risk of coding vulnerabilities has also increased, and a large share of that generated code contains security flaws. The result for application security teams is a flood of incoming vulnerability reports. Snyk has recently found that 31% of these vulnerability reports are outright false positives, adding to the burden on security teams.

In response, many teams turn to a method called reachability analysis, which helps security experts screen out the noise and work only on vulnerabilities in code that could actually be reached, and therefore exploited, during an attack. Since on average only 10% to 20% of imported code is even used by an application, this approach can cut the number of reported vulnerabilities developers have to fix in half. Joseph Hejderup, a member of technical staff at Endor Labs, demonstrated the approach at SOSS Community Day Europe 2024 and discussed how it makes vulnerability reports more actionable.


False Positive Overload

False positives are the biggest problem in application security. The faster teams ship code, the more issues security tools flag that are not actually a risk. According to Snyk, 61% of developers believe automation has increased the rate of false positives. For security teams, sorting through hundreds or thousands of reported vulnerabilities across numerous projects becomes a daunting task.

According to Randall Degges, head of developer relations at Snyk, reachability analysis helps by narrowing down exactly which vulnerabilities are truly dangerous, so security teams can focus on issues in code that is actually executed. Filtering out vulnerabilities that attackers cannot reach lets companies cut remediation work by as much as 60%, and OX Security research found that in some cases teams reduced the workload by nearly 99.5%, a dramatic improvement for developers.


Reducing developer friction

It's not just about reducing workload; it's about reporting fewer, more accurate vulnerabilities back to developers, says Katie Teitler-Santullo, a cybersecurity strategist at OX Security. "Tools that focus on real risks over bombarding developers with false alerts improve collaboration and efficiency," she says.

The hardest part is eliminating the noise that security tools produce while keeping security checks moving at the pace of development. Focusing on reachability ensures that reported vulnerabilities are actually relevant to the code being worked on, letting developers tackle key issues without succumbing to information paralysis.


Two Approaches to Reachability Analysis

There are two primary approaches to reachability analysis. The first is static code analysis: the code itself is analysed and a graph of function calls is constructed to determine whether vulnerable code can be executed. This method works, but it is not failsafe, since some functions may only be called under specific conditions.
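As a rough sketch of the static approach (every function name here is invented for illustration, and real tools operate on far richer call graphs), one can parse a codebase, record which function calls which, and then search the graph from the application's entry point:

```python
import ast
from collections import defaultdict, deque

def build_call_graph(source: str) -> dict:
    """Map each function name to the set of function names it calls."""
    tree = ast.parse(source)
    graph = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    graph[node.name].add(call.func.id)
    return graph

def is_reachable(graph: dict, entry: str, target: str) -> bool:
    """Breadth-first search from an entry point toward a vulnerable function."""
    seen, queue = {entry}, deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        for callee in graph.get(fn, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

app = """
def main():
    parse_input()

def parse_input():
    sanitize()

def sanitize():
    pass

def vulnerable_helper():   # present in the codebase but never called from main()
    pass
"""

graph = build_call_graph(app)
print(is_reachable(graph, "main", "sanitize"))            # True: on the call path
print(is_reachable(graph, "main", "vulnerable_helper"))   # False: unreachable noise
```

Here `vulnerable_helper` exists in the codebase but is unreachable from `main`, so a flaw in it would be deprioritized. Real analyses must also handle dynamic dispatch, reflection, and cross-language calls, which is why static reachability is an approximation.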

The second approach involves instrumenting the application to track code execution at runtime. This gives a live snapshot of which parts of the code are actually used, so it is immediately clear whether an identified vulnerability poses a real threat.
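A minimal sketch of the runtime idea, using Python's built-in `sys.settrace` hook as a stand-in for a production instrumentation agent (the function names are invented for the example):

```python
import sys

def record_executed(entry, *args, **kwargs):
    """Run `entry` under a trace hook and return the set of functions executed."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "call":                      # fires on every function call
            executed.add(frame.f_code.co_name)
        return tracer

    sys.settrace(tracer)
    try:
        entry(*args, **kwargs)
    finally:
        sys.settrace(None)                       # always remove the hook
    return executed

def used_dependency():
    return "ok"

def vulnerable_dependency():   # shipped with the app, but never executed
    return "exploitable"

def main():
    return used_dependency()

executed = record_executed(main)
print("vulnerable_dependency" in executed)   # False: a CVE here is likely noise
print("used_dependency" in executed)         # True: a finding here is worth triaging
```

Runtime evidence like this is only as good as the workload exercised while tracing, so production tools typically observe real traffic over time rather than a single run.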

While current reachability analysis tools mainly focus on whether code is executed, the future of the technology lies in determining whether vulnerable code is actually exploitable. According to Hejderup, combining reachability with exploitability analysis is the next step toward making security testing even more effective.

Finally, reachability analysis offers an effective solution to the problem of vulnerability overload: it allows security teams to discard extraneous reports and focus only on reachable, exploitable code. This reduces workloads and fosters better collaboration between security and development teams. As more companies adopt the approach, application security testing should become more precise, so that only the most critical vulnerabilities are flagged and fixed.

Reachability analysis is no silver bullet, but it is a very useful tool in an era where code is developed and deployed faster than ever, and the risks of ignoring security have never been higher.


GitHub Unveils AI-Driven Tool to Automatically Rectify Code Vulnerabilities

GitHub has unveiled a novel AI-driven feature aimed at expediting the resolution of vulnerabilities during the coding process. This new tool, named Code Scanning Autofix, is currently available in public beta and is automatically activated for all private repositories belonging to GitHub Advanced Security (GHAS) customers.

Utilizing the capabilities of GitHub Copilot and CodeQL, the feature can handle over 90% of alert types in popular languages such as JavaScript, TypeScript, Java, and Python.

Once activated, Code Scanning Autofix presents potential solutions that GitHub asserts can resolve more than two-thirds of identified vulnerabilities with minimal manual intervention. According to GitHub's representatives Pierre Tempel and Eric Tooley, upon detecting a vulnerability in a supported language, the tool suggests fixes accompanied by a natural language explanation and a code preview, offering developers the flexibility to accept, modify, or discard the suggestions.

The suggested fixes are not confined to the current file but can encompass modifications across multiple files and project dependencies. This approach holds the promise of substantially reducing the workload of security teams, allowing them to focus on bolstering organizational security rather than grappling with a constant influx of new vulnerabilities introduced during the development phase.

However, it is imperative for developers to independently verify the efficacy of the suggested fixes, as GitHub's AI-powered feature may only partially address security concerns or inadvertently disrupt the intended functionality of the code.

Tempel and Tooley emphasized that Code Scanning Autofix aids in mitigating the accumulation of "application security debt" by simplifying the process of addressing vulnerabilities during development. They likened its impact to GitHub Copilot's ability to alleviate developers from mundane tasks, allowing development teams to reclaim valuable time previously spent on remedial actions.

In the future, GitHub plans to expand language support, with forthcoming updates slated to include compatibility with C# and Go.

For further insights into the GitHub Copilot-powered code scanning autofix tool, interested parties can refer to GitHub's documentation website.

Additionally, the company recently implemented default push protection for all public repositories to prevent inadvertent exposure of sensitive information like access tokens and API keys during code updates.

This move comes in response to a notable issue in 2023, during which GitHub users inadvertently disclosed 12.8 million authentication and sensitive secrets across more than 3 million public repositories. These exposed credentials have been exploited in several high-impact breaches in recent years, as reported by BleepingComputer.
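Push protection works by matching pushed content against known secret formats before it lands in a repository. The sketch below is purely illustrative: the two regexes approximate well-known token shapes, but GitHub's actual rules, provider partnerships, and validity checks are far more extensive:

```python
import re

# Illustrative token shapes only; GitHub's real push protection uses
# provider-registered patterns and validation, not these two regexes.
SECRET_PATTERNS = {
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_diff(diff_text: str):
    """Return (pattern_name, line_number) for suspected secrets in added lines."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        if not line.startswith("+"):      # only scan lines the push would add
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

diff = (
    '+API_URL = "https://example.com"\n'
    '+TOKEN = "ghp_' + "a" * 36 + '"\n'
    ' context_line = True\n'
)
print(scan_diff(diff))   # [('github_token', 2)]
```

Blocking the push at this point, before the commit becomes public, is what prevents the kind of mass credential exposure described above.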

ChatGPT's Plug-In Vulnerabilities

 

ChatGPT, the revolutionary language model developed by OpenAI, has been making waves in the tech world for its impressive capabilities in natural language understanding. However, recent developments have highlighted a significant concern – ChatGPT's plug-in problem, which poses potential cybersecurity risks.

According to cybersecurity experts, the surge in cybercrime and the role of cryptocurrencies in facilitating illegal activities necessitate a crackdown on potential vulnerabilities. A prominent expert emphasized, "As artificial intelligence-based models like ChatGPT become more prevalent, it's essential to address any potential plug-in vulnerabilities to safeguard against cyber threats."

One of the key aspects contributing to this problem is the pluggable architecture that allows third-party developers to create and integrate their custom-built models or plugins with ChatGPT. While this flexibility has enabled rapid advancements in the capabilities of the language model, it also opens avenues for malicious actors to exploit the system.

To better understand the issue, it's crucial to consider the technology behind ChatGPT and the potential implications of its plug-in capabilities. Blockchain, the foundational technology behind cryptocurrencies, has been gaining attention for its secure and decentralized nature. Blockchain's design ensures that transactions are tamper-resistant and transparent, making it an attractive option for secure data management.

However, the implementation of blockchain in the context of ChatGPT's plug-ins poses unique challenges. Blockchain is resource-intensive and requires a consensus mechanism, which can significantly impact the responsiveness of an AI model. Moreover, the decentralized nature of blockchain may complicate the handling of sensitive data in compliance with privacy regulations.

Experts suggest that addressing the plug-in problem may involve a careful balance between innovation and security. Integrating blockchain-based solutions in a way that doesn't compromise the core functionality of ChatGPT is a complex task that requires collaboration among AI researchers, cybersecurity experts, and blockchain developers.

Furthermore, implementing robust auditing and validation processes for third-party plug-ins is crucial to minimize potential security breaches. OpenAI must rigorously vet and monitor the code submitted by developers to ensure it complies with security standards and does not expose users to undue risks.

OpenAI has already taken measures to address the plug-in challenges. They have instituted an internal review process and are actively working to enhance the security of ChatGPT. Additionally, they are exploring options to leverage blockchain technology for improving the model's transparency and accountability without compromising performance.

Auto-GPT: New autonomous 'AI agents' Can Act Independently & Modify Their Own Code

 

The next phase of artificial intelligence is here, and it is already shaking up the technology sector. The release of Auto-GPT last week, an artificial intelligence program capable of operating autonomously and improving itself over time, has spurred a proliferation of autonomous "AI agents" that some believe could revolutionize the way we work and live.

Unlike current systems such as ChatGPT, which require manual commands for every activity, AI agents can give themselves new tasks to work on with the purpose of achieving a larger goal, and without much human interaction – an unparalleled level of autonomy for AI models such as GPT-4. Experts say it's difficult to predict the technology's future consequences because it's still in its early stages. 

According to Steve Engels, a computer science professor at the University of Toronto who works with generative AI, an AI agent is any artificial intelligence capable of performing a certain function without human intervention.

“The term has been around for decades,” he said. For example, programs that play chess or control video game characters are considered agents because “they have the agency to be able to control some of their own behaviors and explore the environment.”

This latest generation of AI agents is similarly autonomous, but with significantly higher capabilities, thanks to state-of-the-art AI systems like OpenAI's GPT-4 — a massive language model capable of tasks ranging from writing difficult code to creating sonnets to passing the bar exam.

Earlier this month, OpenAI published an API for GPT-4 and its hugely popular chatbot ChatGPT, allowing any third-party developer to integrate the company's technology into their own products. Auto-GPT is one of the most recent products to emerge from the API, and it may be the first example of GPT-4 being allowed to operate fully autonomously.

What exactly is Auto-GPT and what can it do?

Toran Bruce Richards, founder and lead developer at video game studio Significant Gravitas Ltd, designed Auto-GPT. Its source code is freely available on GitHub, allowing anyone with programming skills to create their own AI agents.

According to the project's GitHub page, Auto-GPT can browse the internet for "searches and information gathering," generate images, maintain short-term and long-term memory, and even use text-to-speech to let the AI speak.

Most notably, the program can rewrite and improve on its own code, allowing it to "recursively debug, develop, and self-improve," according to Significant Gravitas. It remains to be seen how effective these self-updates are.

“Auto-GPT is able to actually take those responses and execute them in order to make some larger task happen,” Engels said, including coming up with its own prompts in response to new information.
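The control flow Engels describes can be sketched as a loop in which the model proposes the next sub-task and the program executes it without human input. The snippet below is a toy reconstruction, not Auto-GPT's actual code; `stub_model` stands in for the GPT-4 call a real agent would make:

```python
def stub_model(goal, completed):
    """Stand-in for an LLM call that proposes the next sub-task for a goal.

    A real agent like Auto-GPT would prompt GPT-4 here; this stub walks
    a fixed plan so the control loop itself is runnable.
    """
    plan = ["research topic", "draft outline", "write summary"]
    for step in plan:
        if step not in completed:
            return step
    return None   # model judges the goal satisfied

def run_agent(goal, max_steps=10):
    """Core agent loop: the model proposes tasks, the loop executes them,
    and the results feed back in - no human prompts between steps."""
    completed, log = [], []
    for _ in range(max_steps):
        task = stub_model(goal, completed)
        if task is None:
            break
        log.append(f"executing: {task}")   # a real agent runs tools/code here
        completed.append(task)
    return log

print(run_agent("summarize reachability analysis"))
```

The `max_steps` cap matters in practice: without it, an agent that keeps generating its own tasks has no natural stopping point.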

Auto-GPT became the #1 trending repository on GitHub almost immediately after its launch, earning over 61,000 stars by Friday night and spawning a slew of offshoots. Over the past week, the program has topped Twitter's trending tab, with countless programmers and entrepreneurs offering their perspectives.

Richards and Significant Gravitas did not respond to the Star's requests for comment prior to publication. Twitter has been flooded with users describing their uses for Auto-GPT, ranging from drafting business plans to automating to-do lists.

While anyone may use Auto-GPT, setting it up does require some programming skill. Helpfully, users have produced AgentGPT, which brings Auto-GPT into the web browser, allowing anyone to make their own AI agents.

Given the program's capabilities and low cost, AI agents may eventually replace human roles such as customer service representatives, content writers, and even financial advisors. At the moment, the technology has flaws: ChatGPT has been known to fabricate news reports and scientific studies, while Auto-GPT has struggled to stay on goal. Still, AI is evolving at a dizzying speed, and it is impossible to predict what will happen next, according to Engels.

“We don’t really know at this point what it’s going to be or even what the next iteration of it is going to look like,” he said. “Things are still very much in the development stage right now.”

US NIST Uncovers Winning Encryption Algorithm for IoT Data Protection

The National Institute of Standards and Technology (NIST) has announced that ASCON has won its "lightweight cryptography" programme, which sought the best algorithm to protect small IoT (Internet of Things) devices with limited hardware resources. Small IoT devices are becoming increasingly popular and ubiquitous, used in wearable technology, "smart home" applications, and more.

Yet these devices still store and handle sensitive personal information such as health records and financial data, which makes a data encryption standard critical to securing people's data. The weak chips inside these devices, however, require an algorithm that can provide robust encryption while using very little computational power.

Kerry McKay, a computer scientist at NIST stated, "The world is moving toward using small devices for lots of tasks ranging from sensing to identification to machine control, and because these small devices have limited resources, they need security that has a compact implementation. These algorithms should cover most devices that have these sorts of resource constraints."

ASCON was chosen as the best of 57 proposals submitted to NIST, after several rounds of security analysis by leading cryptographers, implementation and benchmarking results, and workshop feedback. The programme began in 2019 and ran for four years.

As per NIST, all ten finalists demonstrated exceptional performance that exceeded the set standards without raising security concerns, making the final selection extremely difficult. ASCON was eventually chosen as the winner for its flexibility, its support for seven family variants, its energy efficiency, its speed on slow hardware, and its low overhead for short messages.

The algorithm had also withstood the test of time, having been formed in 2014 by a team of cryptographers from Graz University of Technology, Infineon Technologies, Lamarr Security Research, and Radboud University, and winning the CAESAR cryptographic competition's "lightweight encryption" category in 2019.

AEAD (Authenticated Encryption with Associated Data) and hashing are two of ASCON's native features highlighted in NIST's announcement. AEAD is an encryption mode that combines symmetric encryption and MAC (message authentication code) to prevent unauthorized access or tampering with transmitted or stored data.
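To illustrate the AEAD pattern itself (not ASCON's construction, which uses a single sponge-based permutation rather than separate primitives), here is a toy encrypt-then-MAC sketch built from Python's standard library; it is for illustration only and must never be used for real encryption:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream derived from key+nonce (illustrative, not secure)."""
    stream = hashlib.sha256(key + nonce).digest()
    while len(stream) < length:
        stream += hashlib.sha256(stream).digest()
    return stream[:length]

def toy_aead_encrypt(key, nonce, plaintext, assoc):
    """Encrypt-then-MAC sketch of AEAD: XOR-encrypt, then authenticate
    both the ciphertext and the associated data (e.g. a device ID header)."""
    ciphertext = bytes(p ^ s for p, s in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + assoc + ciphertext, hashlib.sha256).digest()
    return ciphertext, tag

def toy_aead_decrypt(key, nonce, ciphertext, assoc, tag):
    """Verify the tag first; only then decrypt. Tampering with the
    ciphertext OR the associated data makes verification fail."""
    expect = hmac.new(key, nonce + assoc + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed: data was tampered with")
    return bytes(c ^ s for c, s in zip(ciphertext, _keystream(key, nonce, len(ciphertext))))

key, nonce = secrets.token_bytes(16), secrets.token_bytes(16)
ct, tag = toy_aead_encrypt(key, nonce, b"heart rate: 72", b"device-42")
print(toy_aead_decrypt(key, nonce, ct, b"device-42", tag))   # b'heart rate: 72'
```

The point of the pattern is that decryption refuses to produce output at all unless the tag checks out, which is exactly the unauthorized-access-or-tampering protection described above.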

Hashing is a data integrity verification mechanism that generates a string of characters (hash) from distinct inputs, allowing two data exchange points to verify that the encrypted message has not been tampered with. NIST continues to recommend AES for AEAD and SHA-256 for hashing; however, these are incompatible with smaller, weaker devices.
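The hashing role is easy to demonstrate with Python's standard library. The sender transmits a digest alongside the message, and the receiver recomputes it to detect tampering; the sketch uses SHA-256 since ASCON's hash is not in the standard library, but on a constrained device ASCON-Hash would fill the same role more cheaply:

```python
import hashlib

def digest(message: bytes) -> str:
    """SHA-256 digest, the hash NIST recommends where resources allow."""
    return hashlib.sha256(message).hexdigest()

# Sender computes a digest and transmits it alongside the message.
message = b"sensor reading: 21.5C"
sent_digest = digest(message)

# Receiver recomputes the digest: any change to the message changes it.
print(digest(b"sensor reading: 21.5C") == sent_digest)   # True: intact
print(digest(b"sensor reading: 99.9C") == sent_digest)   # False: tampered
```

Note that a bare hash only detects accidental or unauthenticated tampering; binding integrity to a secret key is what the AEAD mode above adds.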

Despite its lightweight nature, NIST says ASCON is strong enough to resist attacks even from powerful quantum computers at its standard 128-bit key size. That is not, however, the goal of this standard, and lightweight cryptography algorithms should only be used to protect ephemeral secrets.

The National Institute of Standards and Technology (NIST) treats post-quantum cryptography as a distinct challenge, with a separate programme for developing quantum-resistant standards, and that effort has already yielded its first results.

More information on ASCON can be found on the algorithm's website or in the technical paper submitted to NIST in May 2021.

GitHub Introduces Private Flaw Reporting to Secure Software Supply Chain

 

GitHub, the Microsoft-owned code hosting platform, has announced a direct channel for security researchers to report vulnerabilities in public repositories that opt in. The new private vulnerability reporting capability allows repository administrators to let security researchers report any vulnerabilities found in their code directly to them.

Some repositories include instructions on how to contact the maintainers to report a vulnerability, but for those that do not, researchers often end up reporting issues publicly. Whether the researcher reports the vulnerability through social media or by creating a public issue, this route risks exposing vulnerability details before the maintainers can address them.

To avoid such situations, GitHub has implemented private reporting, which lets researchers contact the maintainers of enrolled repositories directly. When the feature is enabled, reporting security researchers are given a simple form to fill out with details about the identified problem.

According to GitHub, "anyone with admin access to a public repository can enable and disable private vulnerability reporting for the repository." When a vulnerability is reported, the repository maintainer is notified and can either accept or reject the report or ask additional questions about the issue.

According to GitHub, the benefits of the new capability include the ability to discuss vulnerability details privately, receive reports on the same platform where the issue is discussed and addressed, and initiate the advisory report, along with a lower risk of researchers resorting to public contact.

Private vulnerability reporting can be enabled from the repository's main page's 'Settings' section, in the 'Security' section of the sidebar, under 'Code security and analysis.' Once the functionality is enabled, security researchers can submit reports by clicking on a new 'Report a vulnerability' button on the repository's 'Advisories' page.

The private vulnerability reporting was announced at the GitHub Universe 2022 global developer event, along with the general availability of CodeQL support for Ruby, a new security risk and coverage view for GitHub Enterprise users, and funding for open-source developers.

The platform will provide a $20,000 incentive to 20 developers who maintain open-source repositories through the new GitHub Accelerator initiative, while the new $10 million M12 GitHub Fund will support future open-source companies.