
Microsoft and Salesforce Clash Over AI Autonomy as Competition Intensifies

 

The generative AI landscape is witnessing fierce competition, with tech giants Microsoft and Salesforce clashing over the best approach to AI-powered business tools. Microsoft, a significant player in AI due to its collaboration with OpenAI, recently unveiled “Copilot Studio” to create autonomous AI agents capable of automating tasks in IT, sales, marketing, and finance. These agents are meant to streamline business processes by performing routine operations and supporting decision-making. 

However, Salesforce CEO Marc Benioff has openly criticized Microsoft’s approach, likening Copilot to “Clippy 2.0,” referencing Microsoft’s old office assistant software that was often ridiculed for being intrusive. Benioff claims Microsoft lacks the data quality, enterprise security, and integration Salesforce offers. He highlighted Salesforce’s Agentforce, a tool designed to help enterprises build customized AI-driven agents within Salesforce’s Customer 360 platform. According to Benioff, Agentforce handles tasks autonomously across sales, service, marketing, and analytics, integrating large language models (LLMs) and secure workflows within one system. 

Benioff asserts that Salesforce’s infrastructure is uniquely positioned to manage AI securely, unlike Copilot, which he claims may leak sensitive corporate data. Microsoft, on the other hand, counters that Copilot Studio empowers users by allowing them to build custom agents that enhance productivity. The company argues that it meets corporate standards and prioritizes data protection. The stakes are high, as autonomous agents are projected to become essential for managing data, automating operations, and supporting decision-making in large-scale enterprises. 

As AI tools grow more sophisticated, both companies are vying to dominate the market, setting standards for security, efficiency, and integration. Microsoft’s focus on empowering users with flexible AI tools contrasts with Salesforce’s integrated approach, which centers on delivering a unified platform for AI-driven automation. Ultimately, this rivalry is more than just product competition; it reflects two different visions for how AI can transform business. While Salesforce focuses on integrated security and seamless data flows, Microsoft is emphasizing adaptability and user-driven AI customization. 

As companies assess the pros and cons of each approach, both platforms are poised to play a pivotal role in shaping AI’s impact on business. With enterprises demanding robust, secure AI solutions, the outcomes of this competition could influence AI’s role in business for years to come. As these AI leaders continue to innovate, their differing strategies may pave the way for advancements that redefine workplace automation and decision-making across the industry.

The Use of AI by Sales Teams is Booming

 

According to Salesforce's 2024 State of Sales report, sales teams are consolidating their tools and strengthening data security to reap the benefits of AI. Drawing on a global survey of 5,500 sales professionals, the report's main findings are as follows: 

Mounting pressure: Sellers are struggling under marketplace demands and insufficient productivity, yet sales teams continue to overcome these challenges and grow. Over the last year, 79% of sales teams increased their revenue, and 82% of salespeople are confident in their company's 12-month growth strategy. 

Rise of partner selling: Partner selling is helping to drive growth; 84% of sales professionals believe it has a greater influence on revenue than a year ago, and 89% of sales teams globally currently use partner selling. Recurring revenue, which accounts for 42% of revenue, is increasing, and more than 90% of sales teams now draw on multiple revenue sources. 

Challenging sales motions: Sellers face changing customer expectations, which makes sales motions harder; 67% of sales professionals do not expect to meet their quota this year, and 84% missed it last year. According to the survey, marketplace competition is also becoming a headache: 57% say it has grown tougher since last year, while only 13% feel it has become easier. The most striking finding, in my opinion, is how much of a seller's time goes to non-selling duties: sales representatives devote 70% of their time to non-selling tasks. 

Surge in AI adoption: Sales teams are using AI to increase efficiency and personalization, though many are concerned about funding, training, and data gaps. 81% of sales teams, roughly four in five, report that they are either experimenting with AI or have fully incorporated it. The most significant benefit cited is improved sales data quality and accuracy, and the payoff shows in revenue: 83% of sales teams using AI saw revenue growth in the previous year, compared with 66% of teams without AI.

Enablement as a tactic: Sales teams are enhancing training programs for both direct sellers and partners as a critical way to deliver additional value to customers, and the report ranks improving sales enablement as the most effective growth tactic. AI can help here as well: according to the survey, the most popular way for sales teams to apply AI to enablement is real-time selling guidance, with AI tools giving reps personalized advice as they work.

Slack Faces Backlash Over AI Data Policy: Users Demand Clearer Privacy Practices

 

In February, Slack introduced its AI capabilities, positioning itself as a leader in the integration of artificial intelligence within workplace communication. However, recent developments have sparked significant controversy. Slack's current policy, which collects customer data by default for training AI models, has drawn widespread criticism and calls for greater transparency and clarity. 

The issue gained attention when Gergely Orosz, an engineer and writer, pointed out that Slack's terms of service allow the use of customer data for training AI models, despite reassurances from Slack engineers that this is not the case. Aaron Maurer, a Slack engineer, acknowledged the need for updated policies that explicitly detail how Slack AI interacts with customer data. This discrepancy between policy language and practical application has left many users uneasy. 

Slack's privacy principles state that customer data, including messages and files, may be used to develop AI and machine learning models. In contrast, the Slack AI page asserts that customer data is not used to train Slack AI models. This inconsistency has led users to demand that Slack update its privacy policies to reflect the actual use of data. The controversy intensified as users on platforms like Hacker News and Threads voiced their concerns. Many felt that Slack had not adequately notified users about the default opt-in for data sharing. 

The backlash prompted some users to opt out of data sharing, a process that requires contacting Slack directly with a specific request. Critics argue that this process is cumbersome and lacks transparency. Salesforce, Slack's parent company, has acknowledged the need for policy updates. A Salesforce spokesperson stated that Slack would clarify its policies to ensure users understand that customer data is not used to train generative AI models and that such data never leaves Slack's trust boundary. 

However, these changes have yet to address the broader issue of explicit user consent. Questions about Slack's compliance with the General Data Protection Regulation (GDPR) have also arisen. GDPR requires explicit, informed consent for data collection, which must be obtained through opt-in mechanisms rather than default opt-ins. Despite Slack's commitment to GDPR compliance, the current controversy suggests that its practices may not align fully with these regulations. 

As more users opt out of data sharing and call for alternative chat services, Slack faces mounting pressure to revise its data policies comprehensively. This situation underscores the importance of transparency and user consent in data practices, particularly as AI continues to evolve and integrate into everyday tools. 

The recent backlash against Slack's AI data policy highlights a crucial issue in the digital age: the need for clear, transparent data practices that respect user consent. As Slack works to update its policies, the company must prioritize user trust and regulatory compliance to maintain its position as a trusted communication platform. This episode serves as a reminder for all companies leveraging AI to ensure their data practices are transparent and user-centric.

ServiceNow Data Exposure Flaw Raises Concerns

ServiceNow, a popular enterprise cloud platform, was found to have a serious data exposure vulnerability. The incident has shocked the cybersecurity community and raised concerns about the security of sensitive data in cloud-based systems.

According to reports from cybersecurity experts and firms, the vulnerability in ServiceNow's infrastructure could potentially lead to unauthorized access to sensitive data. The flaw, if exploited, could allow malicious actors to gain access to confidential information stored within the platform, posing a significant risk to organizations relying on ServiceNow for their day-to-day operations.

Enumerated, a cybersecurity firm, was among the first to identify and report the flaw. They disclosed that the issue stemmed from a misconfiguration in ServiceNow's security settings, leaving a gap that could be exploited by cybercriminals. This revelation has prompted immediate action from ServiceNow, as they work tirelessly to rectify the situation and implement robust security measures.

Salesforce, a leading cloud-based customer relationship management platform, was also mentioned in connection with the data exposure issue. While the exact nature of the link between Salesforce and ServiceNow remains unclear, experts speculate that this incident might highlight a broader concern regarding the security of cloud-based platforms and the need for enhanced vigilance in safeguarding sensitive data.

The cybersecurity community, along with industry experts, has been vocal about the importance of regular security audits and assessments for cloud-based platforms. This incident serves as a stark reminder of the potential risks associated with relying on third-party providers for critical business functions.

As the investigation into this data exposure flaw continues, organizations using ServiceNow are advised to review their security protocols and take immediate steps to mitigate potential risks. This includes ensuring that access controls and permissions are configured correctly and conducting thorough vulnerability assessments to identify and address any potential security gaps.
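
As a practical first step in such a review, administrators can check whether records in a given table are reachable without credentials at all. The sketch below is a minimal illustration, with a hypothetical instance URL and table name, that issues an unauthenticated request against ServiceNow's REST Table API and reports whether any data comes back; it is a quick smoke test, not a substitute for a full access-control audit.

```python
import requests

# Hypothetical instance and table -- replace with your own values.
INSTANCE = "https://example.service-now.com"
TABLE = "incident"


def check_anonymous_exposure(instance: str, table: str) -> None:
    """Issue an unauthenticated request against the REST Table API and
    report whether any records are returned."""
    url = f"{instance}/api/now/table/{table}"
    resp = requests.get(
        url,
        params={"sysparm_limit": 1},
        headers={"Accept": "application/json"},
        timeout=10,
        allow_redirects=False,  # a redirect to a login page is not an exposure
    )

    records = []
    if resp.status_code == 200:
        try:
            records = resp.json().get("result", [])
        except ValueError:
            pass

    if records:
        print(f"WARNING: table '{table}' returned records without authentication.")
    else:
        print(f"No anonymous data returned for '{table}' (HTTP {resp.status_code}).")


if __name__ == "__main__":
    check_anonymous_exposure(INSTANCE, TABLE)
```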

The ServiceNow data exposure vulnerability highlights how important strong cybersecurity safeguards are for cloud-based platforms. It acts as a wake-up call for businesses, encouraging them to make security a first priority and take preventive measures to protect sensitive data in an increasingly connected digital world.

Salesforce Unveils AI Cloud, Empowering Enterprises with Reliable Generative AI Capabilities

Today, Salesforce unveiled AI Cloud, an enterprise AI solution designed to enhance productivity throughout its suite of applications. This innovative platform integrates multiple Salesforce technologies, including Einstein, Data Cloud, Tableau, Flow, and MuleSoft, to deliver real-time generative AI capabilities that seamlessly integrate with business operations. 

With this open platform, businesses can easily incorporate AI into their workflows and drive greater efficiency across Salesforce applications. The foundational element of AI Cloud is the innovative Einstein Trust Layer, which Salesforce considers to be a groundbreaking enterprise AI architecture. 

This layer not only harnesses the benefits of generative AI but also prioritizes data privacy and security, aiming to establish a new industry standard. With the Einstein Trust Layer, Salesforce strives to instill trust in enterprise generative AI by safeguarding sensitive data across AI applications and workflows. 

This layer ensures that proprietary data remains separate from public models, addressing crucial aspects of data privacy, security, residency, and compliance specific to generative AI. Salesforce aims to establish a solid foundation of trust by prioritizing the protection of valuable data assets. 

What is generative artificial intelligence (AI)? 

Generative AIs play crucial roles in content creation across different industries. Movie makers utilize them to fill narrative gaps or even drive the storyline. News organizations employ generative AIs to generate short snippets or entire stories, particularly for structured reports like sports or financial updates. 

AI algorithms serve various purposes such as data classification, organization, and reasoning. Among them, generative algorithms stand out by creating data through a realistic synthesis of images, sounds, and videos. These algorithms utilize models of the world to generate simulated environments that align with the predefined model. Essentially, they start with a vision of what the world should be and bring it to life through simulated representations. 
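
To make the distinction concrete, the snippet below generates text with an off-the-shelf model. It is a minimal sketch using the Hugging Face transformers library and the small, publicly available GPT-2 model, chosen purely for illustration and unrelated to the Salesforce products discussed here.

```python
from transformers import pipeline

# Load a small, publicly available generative language model (GPT-2).
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt, synthesizing new text that follows the
# patterns it learned during training -- creation rather than classification.
prompt = "Generative AI helps businesses by"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```

The same generative principle, scaled up to much larger models and multiple modalities, underlies the image, audio, and video synthesis described above.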

What are the challenges of generative artificial intelligence (AI)? 

Certain advanced generative AI algorithms possess the ability to deceive, leading to the creation of what is commonly known as "deep fakes." These fabricated outputs can be misused for fraudulent activities, such as impersonating individuals to carry out various forms of fraud. 

For instance, malicious actors may attempt to mimic someone's identity to illicitly withdraw funds from a bank account and other sensitive information. Additionally, deep fakes can be used to manipulate and falsely attribute statements to individuals, potentially leading to serious consequences like defamation or slander. 

 What does the future look like? 

According to recent research by Salesforce, AI is anticipated to boost global economic growth by over $15 trillion by 2030, a gain projected to translate into a 26% increase in GDP. 

However, the widespread adoption of AI hinges on the crucial factors of building trust and ensuring robust data privacy measures. To fully leverage the potential of AI, it is imperative to establish a foundation of trust and safeguard the privacy of data.

Ghost Sites: Attackers are now Exposing Data From Deactivated Salesforce Sites


Varonis Threat Labs researchers recently discovered that Salesforce ‘ghost sites,’ sites that are no longer in use but were never properly deactivated or maintained, may remain accessible and vulnerable to abuse by threat actors. They noted that by manipulating the host header, an attacker may gain access to sensitive PII and business data.

With the help of Salesforce Sites, businesses can build specialized communities where partners and clients can collaborate.

But when these communities are no longer required, they are frequently preserved rather than shut down. Because they are no longer maintained, these sites are never examined for vulnerabilities, and administrators do not keep their security measures up to date with current guidelines.

In its recent findings, Varonis Threat Labs reported that because these ghost sites were never properly deactivated, they remained easily accessible to attackers, who could exploit them to pull data from the sites.

They added that the exposed data was not limited to old data from the sites; it also included fresh records that were shared with the guest user because of sharing configurations in the Salesforce environment.

Salesforce Ghost Sites

According to Varonis Threat Labs, Salesforce ghost sites are created when a company uses a custom domain name instead of Salesforce's unappealing internal URLs, so that the organization's partners can browse the site more easily. “This is accomplished by configuring the DNS record so that ‘partners.acme.org’ [for example] points to the lovely, curated Salesforce Community Site at ‘partners.acme.org.00d400.live.siteforce.com’ […] With the DNS record changed, partners visiting ‘partners.acme.org’ will be able to browse Acme’s Salesforce site. The trouble begins when Acme decides to choose a new Community Site vendor,” the researchers said.

Companies might switch out a Salesforce Experience Site for an alternative, just as they would with any other technology. Varonis Threat Labs stated, "Acme subsequently updates the DNS record of 'partners.acme.org' to link toward a new site that might function in their AWS environment." From the users' perspective, the Salesforce site is gone and a new community page is accessible in its place. The new page may run in a completely different environment, may not be connected to Salesforce in any way, and shows no obvious integrations.

However, the study found that many businesses modify only the DNS entries. “They do not remove the custom domain in Salesforce, nor do they deactivate the site. Instead, the site continues to exist, pulling data and becoming a ghost site,” a researcher said.

Attackers exploit these sites simply by changing the host header. This misleads Salesforce into believing the site was accessed as https://partners.acme.org/, making the site accessible to the attackers.

Although these sites can also be accessed through their full internal URLs, an intruder would find those URLs difficult to identify. Locating ghost sites becomes significantly simpler, however, with tools that index and archive DNS records, such as SecurityTrails and comparable services.
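
For defenders who want to verify whether one of their own retired communities is still reachable in this way, the probe below is a minimal sketch. The custom domain and internal *.live.siteforce.com URL are hypothetical placeholders mirroring the Acme example above; replace them with your own values. The script simply requests the internal site URL while presenting the old custom domain in the Host header and reports whether content still comes back.

```python
import requests

# Hypothetical values, mirroring the Acme example above -- replace with
# your own retired custom domain and its old internal Salesforce site URL.
CUSTOM_DOMAIN = "partners.acme.org"
INTERNAL_SITE = "https://partners.acme.org.00d400.live.siteforce.com"


def probe_ghost_site(internal_url: str, custom_domain: str) -> None:
    """Request the internal site URL with the old custom domain in the
    Host header and report whether the site still serves content."""
    resp = requests.get(
        internal_url,
        headers={"Host": custom_domain},
        timeout=10,
        allow_redirects=False,
    )
    if resp.status_code == 200 and resp.text.strip():
        print(f"Site still responds for Host '{custom_domain}' -- possible ghost site.")
    else:
        print(f"No active response (HTTP {resp.status_code}).")


if __name__ == "__main__":
    probe_ghost_site(INTERNAL_SITE, CUSTOM_DOMAIN)
```

A positive result is only a signal that the old site is still live; confirming what data it exposes still requires reviewing the site's guest user permissions in Salesforce itself.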

What is the Solution?

Varonis Threat Labs advised that sites no longer in use should be properly deactivated. They also recommended tracking all Salesforce sites and their respective users’ permissions, including both community and guest users. Moreover, the researchers created a guide on ‘protecting your active Salesforce Communities against recon and data theft.’
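
As a starting point for that kind of inventory, an administrator could query the org's Site records and flag any that are still active. The sketch below uses the simple_salesforce library with hypothetical credentials; the exact fields available on the Site object can vary by org and API version, so treat it as an outline rather than a drop-in audit script.

```python
from simple_salesforce import Salesforce

# Hypothetical credentials -- replace with your own authentication details.
sf = Salesforce(
    username="admin@example.com",
    password="password",
    security_token="token",
)

# List the org's sites and flag any that are still active so they can be
# reviewed (and deactivated if no longer needed).
results = sf.query("SELECT Id, Name, Status, Subdomain, UrlPathPrefix FROM Site")

for site in results["records"]:
    note = (
        "ACTIVE -- confirm this site is still needed"
        if site.get("Status") == "Active"
        else "inactive"
    )
    print(f"{site.get('Name')} [{site.get('Subdomain')}/{site.get('UrlPathPrefix')}]: {note}")
```

Pairing this inventory with a review of guest user permissions on each active site addresses both recommendations: no forgotten sites, and no over-shared records.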