
Microsoft Introduces AI Solution for Erasing Ex from Memories

 


Director Vikramaditya Motwane's new Hindi film, CTRL, tells the story of an emotionally disturbed woman who turns to artificial intelligence to erase her past. The movie clearly centers on data and privacy, but humans are social animals who need someone to listen to them, guide them, or simply be there as they go through life. Mustafa Suleyman, the CEO of Microsoft AI, spoke about this recently in a CNBC interview.

In the interview, Suleyman explained that the company is engineering AI companions to watch "what we are doing and to remember what we are doing," which will create a close relationship between AI and humans. Microsoft, OpenAI, and Google have all announced such AI assistants for the workplace.

Microsoft CEO Satya Nadella has announced that Windows will be getting a new feature called Recall. Its semantic search goes beyond keyword search; it digs deep into users' digital history to recreate moments from the past and trace them back to when they happened. Microsoft's AI CEO, Mustafa Suleyman, has also announced that Copilot, the company's artificial intelligence assistant, has been redesigned.

The newly revamped Copilot reflects the AI head's vision of a companion that will change the way users interact with technology in their day-to-day lives. After joining Microsoft earlier this year, when the company strategically hired key staff from Inflection AI, Suleyman wrote a 700-word memo describing what he calls a "technological paradigm shift."

Copilot has been redesigned to offer a more personalized and supportive AI experience that adapts to users' requirements over time, much like Inflection AI's Pi product. In an interview with The Wall Street Journal, Microsoft CEO Satya Nadella explained that "Recall is not just about documents."

A sophisticated AI model embedded directly in the device takes screenshots of users' activity and feeds them into an on-device database that analyzes these activities. Using neural processing technology, all images and interactions become searchable, even the images themselves. The feature has raised concerns, with Elon Musk warning in a characteristic post that it is akin to an episode of Black Mirror and saying he would be turning the "feature" off.
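To make the mechanics concrete, here is a minimal sketch of on-device semantic screenshot search in Python. It is not Microsoft's Recall implementation; the clip-ViT-B-32 model from the sentence-transformers library, the SQLite storage scheme, and the function names are all assumptions made purely for illustration.

```python
# Sketch only: periodic screenshots are embedded with a joint image/text model
# and stored locally, so a plain-language query can find past moments.
import io
import sqlite3
import time

import numpy as np
from PIL import ImageGrab                      # screen capture on Windows/macOS
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")   # embeds both images and text

db = sqlite3.connect("recall_sketch.db")
db.execute("CREATE TABLE IF NOT EXISTS shots (ts REAL, png BLOB, emb BLOB)")

def capture_once() -> None:
    """Grab the screen, embed it, and persist image plus embedding locally."""
    img = ImageGrab.grab()
    emb = model.encode(img, normalize_embeddings=True)
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    db.execute("INSERT INTO shots VALUES (?, ?, ?)",
               (time.time(), buf.getvalue(), emb.astype(np.float32).tobytes()))
    db.commit()

def search(query: str, top_k: int = 3) -> list[tuple[float, float]]:
    """Return (timestamp, similarity) for screenshots closest to a text query."""
    q = model.encode(query, normalize_embeddings=True)
    rows = db.execute("SELECT ts, emb FROM shots").fetchall()
    scored = [(ts, float(np.frombuffer(blob, dtype=np.float32) @ q))
              for ts, blob in rows]
    return sorted(scored, key=lambda r: r[1], reverse=True)[:top_k]

if __name__ == "__main__":
    capture_once()
    print(search("a spreadsheet with quarterly numbers"))
```

A production system would obviously need consent controls, encryption, and retention limits, which is exactly where the privacy concerns above come in.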

OpenAI has introduced the ChatGPT desktop application, now powered by the latest GPT-4o model, which represents a significant advancement in artificial intelligence technology. This AI assistant offers real-time screen-reading capabilities, positioning itself as an indispensable support tool for professionals in need of timely assistance. Its enhanced functionality goes beyond merely following user commands; it actively learns from the user's workflow, adapts to individual habits, and anticipates future needs, even taking proactive actions when required. This marks a new era of intelligent and responsive AI companions. 
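As a rough illustration of what a screen-aware assistant involves, the sketch below captures the screen and asks a GPT-4o-class model for a suggestion through the official openai Python SDK. This is a toy under stated assumptions, not OpenAI's desktop app: the prompt, the helper names, and the single-shot flow are invented for this example.

```python
# Sketch only: capture the screen, send it to a vision-capable model, and ask
# for one suggested next step. Requires OPENAI_API_KEY in the environment.
import base64
import io
import os

from openai import OpenAI
from PIL import ImageGrab

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def screen_as_data_url() -> str:
    """Capture the current screen and return it as a base64 PNG data URL."""
    buf = io.BytesIO()
    ImageGrab.grab().save(buf, format="PNG")
    return "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()

def suggest_next_step() -> str:
    """Ask the model to describe what is on screen and propose one next action."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Briefly describe what I am working on and suggest one next step."},
                {"type": "image_url",
                 "image_url": {"url": screen_as_data_url()}},
            ],
        }],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(suggest_next_step())
```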

Meanwhile, Zoom has introduced Zoom Workplace, an AI-powered collaboration platform designed to elevate teamwork and productivity in corporate environments. The platform now offers over 40 new features, including updates to the Zoom AI Companion for services such as Zoom Phone, Team Chat, Events, and Contact Center, along with an "Ask AI Companion" feature. Jensen Huang also highlighted the advanced capabilities of AI Companion 2.0, emphasizing that the system does not just observe and support workflows; it learns and evolves with them, making it a more intuitive and helpful partner for users in their professional endeavors.

The AI Companion functions as a generative AI assistant seamlessly integrated throughout Zoom’s platform, enhancing productivity, fostering stronger collaboration among team members, and enabling users to refine and develop their skills through AI-supported insights and assistance. The rapid advancements in artificial intelligence continue to reshape the technological landscape, as companies like Microsoft, OpenAI, and Google lead the charge in developing AI companions to support both personal and professional endeavors.

These AI solutions are designed to not only enhance productivity but also provide a more personalized, intuitive experience for users. From Microsoft’s innovative Recall feature to the revamped Copilot and the broad integration of AI companions across platforms like Zoom, these developments mark a significant shift in how humans interact with technology. While the potential benefits are vast, these innovations also raise important questions about data privacy, human-AI relationships, and the ethical implications of such immersive technology. 

As AI continues to evolve and become a more integral part of everyday life, the balance between its benefits and the concerns it may generate will undoubtedly shape the future of AI integration across industries. Microsoft and its competitors remain at the forefront of this technological revolution, striving to create tools that are not only functional but also responsive to the evolving needs of users in a rapidly changing digital world.

The Race to Train AI: Who's Leading the Pack in Social Media?

 


The rise of artificial intelligence over the last few years has been driven by growing computing power and large, complex data sets. AI has become practical and profitable through numerous applications such as machine learning, which gives a system a way to locate patterns within large sets of data.
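For readers who want a concrete picture of "locating patterns in data," here is a toy example using scikit-learn: k-means clustering groups unlabeled points into clusters without being told what the groups are. The synthetic data and parameter choices are made up solely for illustration.

```python
# Toy pattern-finding: k-means discovers the two groups hidden in the data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two obvious "patterns": points around (0, 0) and points around (5, 5).
data = np.vstack([rng.normal(0, 0.5, (100, 2)),
                  rng.normal(5, 0.5, (100, 2))])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print("cluster centres:\n", model.cluster_centers_)
```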

Today, AI plays a significant role in a wide range of computer systems, from iPhones that recognize and translate speech, to driverless cars that carry out complicated manoeuvres under their own power, to robots in factories and homes that automate tasks.

AI has also become increasingly important in research, where it is used to process the vast amounts of data that lie at the heart of fields like astronomy and genomics, to produce weather forecasts and climate models, and to interpret medical images for signs of disease.

In a recent update to its privacy policy, X, the social media platform formerly known as Twitter, stated that it may train an AI model on users' posts. According to Bloomberg earlier this week, the updated policy informs users that the company is now collecting various kinds of information about them, including biometric data as well as their job history and educational background.

X appears to have plans for this data that go beyond simply collecting it. Another update to the company's policy specifies that it intends to use the information it collects, along with other publicly available data, to train its machine learning and artificial intelligence models.

According to Elon Musk, the company's owner and former CEO, only public data, and not private data such as messages sent in direct messages, will be used to train these models. There is no reason to be surprised by the change itself.

According to Musk, his latest startup, xAI, was founded to help researchers and engineers in the enterprise build new products by utilizing data collected from the microblogging site. X already charges companies $42,000 for access to its data via the API.

It was reported in April that Microsoft pulled X from its advertising platforms after those fees increased, and X responded by threatening to sue the company for allegedly using Twitter data illegally. In a post published late on Thursday, Musk also called on AI research labs to halt work on systems that can compete with human intelligence.

Together with several other tech leaders, Musk is urging AI labs to cease training models more powerful than GPT-4, the newest version of the large language model software developed by the U.S. startup OpenAI, according to an open letter from the Future of Life Institute signed by Musk, Steve Wozniak, and 2020 presidential candidate Andrew Yang.

The Future of Life Institute, based in Cambridge, Massachusetts, is a non-profit organization dedicated to advancing the responsible and ethical development of artificial intelligence. Its founders include Max Tegmark, a cosmologist at MIT, and Jaan Tallinn, the co-founder of Skype.

Musk and Google's AI lab DeepMind have previously pledged not to develop lethal autonomous weapons systems as part of one of the organization's earlier campaigns. In its appeal to all AI labs, the institute calls for an immediate "pause for at least 6 months the training of AI systems more powerful than GPT-4."

GPT-4, which was released earlier this month, is believed to be far more sophisticated than its predecessor, GPT-3. ChatGPT, the viral artificial intelligence chatbot, has amazed researchers with its ability to produce human-like responses to users' questions. Within only two months of its launch, by January this year, ChatGPT had accrued 100 million monthly active users, making it the fastest-growing consumer application in history.

The underlying model is trained on vast amounts of data taken from the internet and can do everything from writing poetry in the style of William Shakespeare to drafting legal opinions based on the facts of a case. However, some ethics scholars have raised concerns that AI could also be abused for crime and misinformation, which could lead to exploitation.

OpenAI was not immediately available to comment when contacted by CNBC. Microsoft, one of the world's largest technology companies, headquartered in Redmond, Washington, has invested $10 billion in OpenAI and backs the company.

Microsoft is also integrating GPT, the natural language technology developed by OpenAI, into its Bing search engine to make search more conversational and useful for users. Google followed with an announcement of its own line of conversational artificial intelligence (AI) products aimed at consumers.

According to Musk, AI, or artificial intelligence, may represent one of the biggest threats to civilization in the near future. OpenAI was founded by Elon Musk and Sam Altman in 2015, though Musk left OpenAI's board in 2018 and no longer holds any stake in the company he helped found. He has said several times recently that the organization has diverged from its original purpose.

Regulators are also racing to get a grip on AI tools as the technology advances rapidly. On Wednesday, the United Kingdom published a white paper on artificial intelligence that defers the job of overseeing the use of such tools in different sectors to regulators applying the existing laws within their jurisdictions.

Elevated Cybercrime Risks in Metro Cities: Understanding Urban Vulnerabilities

 


Cyber fraudsters understand how people in metropolitan cities think. Residents with busy lives expect certain services to be delivered quickly and efficiently, and experts have found that this puts them at a higher risk of scams. With the help of cybersecurity pundits and regular victims of the problem, CNBC-TV18 gets to the bottom of it.

As cybercrime becomes more common, major metropolitan cities are seen as prime targets, and cybercriminals set up more operations there to achieve their goals. The gravity of the situation can be gauged from Chennai alone, which suffered nearly 8 million malware-related attacks last year.

Metropolitan cities are more vulnerable to cybercrime because they have extensive digital infrastructure and wide access to online services, yet many providers still lack strong data protection policies for their customers.


According to Quick Heal's official report, the company detects and blocks more than 80,000 malware threats every hour of the day, and it estimates more than 1.91 million ransomware attacks to date. The pandemic created numerous openings that attackers exploited, with Aarogya Setu, the app people were asked to install on their smartphones, being one example.

People and organizations needed to track COVID-19-related information on the internet and social media regularly, and attackers took advantage of this by creating and spreading fake COVID-19 links.

Users clicked on the links in these messages, and malicious files were loaded onto their systems; many of these files were detected and blocked by antivirus software. The phishing links were not limited to COVID-19 themes, either; they also promised jobs, free internet, easy money online, and other enticing offers.

Quick Heal also reports that people are becoming more familiar with digital tools and antivirus software to protect their computers. Despite this, there is still a long way to go, since safe internet habits are not yet the norm for most users.

The most frequently detected malware types, in order, were Trojans, Infectors, Worms, and Potentially Unwanted Applications (PUAs). Ransomware still has its place in the threat landscape, as it continues to encrypt sensitive user information that attackers then sell on the dark web in exchange for money.

According to cybersecurity experts, many companies do not take data security very seriously and outsource the maintenance of their data to third parties. These third parties can then sell the data to cybercriminals, giving them easy access to it. The more data in circulation, the more opportunities cyber fraudsters have to commit fraud.

In 2019, according to the National Crime Records Bureau, 18,500 cases of cyber fraud were reported in 19 metropolitan cities of the country, accounting for 41 percent of the total cyber fraud cases detected in India. This number increased marginally in 2020, with 18,657 cases reported in the metropolises - 37 percent of India's total that year.

In contrast, cyber fraud cases in metropolitan cities decreased in 2021: 17,115 cases were reported, accounting for 32 percent of the total cyber fraud cases in India. Experts estimate that many more cases go unreported.

The authorities are well aware of the high number of cybercrime incidents targeting metropolitan cities. Several states and cities have created specialized cyber cells that work together to combat such frauds, although the authorities say operational hurdles have made it difficult to bring these crimes down.

As soon as a person realizes they have been scammed, experts recommend logging onto the cybercrime portal or calling 1930 immediately. Any request for personal information, such as a debit or credit card PIN or a one-time password, should raise red flags and be reported as soon as possible.

Furthermore, these experts urge that all online transactions be carried out only through secure, verified portals, and that individuals not upload sensitive documents or information to unverified or unknown portals without prior confirmation from the portal's owner.

Cybersecurity experts also recommend that people avoid answering video calls from unknown numbers and not fall for lucrative offers; anything that appears too good to be true is likely a scam. Precaution and awareness thus remain the best methods of preventing cybercrime.

As per the findings of the National Crime Records Bureau (NCRB), 962 cybercrime cases were reported in India in 2014, 11,592 cases were investigated in 2015, and 12,317 cases were reported in 2016, which shows that cybercrime incidents in India are increasing.

As business moves online, organizations have to ensure that the networks their customers use are safe and secure. As well as upgrading their technology, they should hire employees with good management and security skills who are trained in security management protocols and adept at managing and securing sensitive customer data.

Protecting the data of older adults, especially those over the age of 75, is of paramount importance, since many of them have an insufficient understanding of how the technology works. As a result, companies and individuals alike must understand how to tackle cyberattacks and educate the public about detecting them.