Two federal employees have filed a lawsuit against the Office of Personnel Management (OPM), alleging that a newly implemented email system is being used to compile a database of federal workers without proper authorization. The lawsuit raises concerns about potential misuse of employee information and suggests a possible connection to Elon Musk, though no concrete evidence has been provided. The controversy began when OPM sent emails to employees, claiming it was testing a new communication system. Recipients were asked to reply to confirm receipt, but the plaintiffs argue that this was more than a routine test—it was an attempt to secretly create a list of government workers for future personnel decisions, including potential job cuts.
The lawsuit names Amanda Scales, a former executive at Musk’s artificial intelligence company, xAI, who now serves as OPM’s chief of staff. The plaintiffs suspect that her appointment may be linked to the email system’s implementation, though they have not provided definitive proof. They claim that an unauthorized email server was set up within OPM’s offices, making it appear as though messages were coming from official government sources when they were actually routed through a separate system.
An anonymous OPM employee’s post, cited in the lawsuit, alleges that the agency’s Chief Information Officer, Melvin Brown, was sidelined after refusing to implement the email list. The post further claims that a physical server was installed at OPM headquarters, enabling external entities to send messages that appeared to originate from within the agency. These allegations have raised serious concerns about transparency and data security within the federal government.
The lawsuit also argues that the email system violates the E-Government Act of 2002, which requires federal agencies to conduct strict privacy assessments before creating databases containing personal information. The plaintiffs contend that OPM bypassed these requirements, putting employees at risk of having their information used without consent.
Beyond the legal issues, the case reflects growing anxiety among federal employees about potential restructuring under the new administration. Reports suggest that significant workforce reductions may be on the horizon, and the lawsuit implies that the email system could play a role in streamlining mass layoffs. If the allegations are proven true, it could have major implications for how employee information is collected and used in the future.
As of now, OPM has not officially responded to the allegations, and there is no definitive proof linking the email system to Musk or any specific policy agenda. However, the case has sparked widespread discussions about transparency, data security, and the ethical use of employee information within the federal government. The lawsuit highlights the need for stricter oversight and accountability to ensure that federal employees’ privacy rights are protected.
The lawsuit against OPM underscores the growing tension between federal employees and government agencies over data privacy and transparency. While the allegations remain unproven, they raise important questions about the ethical use of employee information and the potential for misuse in decision-making processes. As the case unfolds, it could set a precedent for how federal agencies handle employee data and implement new systems in the future. For now, the controversy serves as a reminder of the importance of safeguarding privacy and ensuring accountability in government operations.
The recent global outage of social media platform X caused a stir in the online community at a time when digital media predominates. When users everywhere realized on December 21, 2023, that they couldn't access the platform, frustration and curiosity about the cause of the extraordinary disruption quickly spread.
Reports of the outage, which was first detected by Downdetector, began to arrive from all over the world, affecting millions of users. The impact of the outage was magnified because X, a significant player in the social media ecosystem, has grown to be an essential part of people's everyday lives.
One significant aspect of the outage was the diverse range of issues users faced. According to reports, users experienced difficulties in tweeting, accessing their timelines, and even logging into their accounts. The widespread nature of these problems hinted at a major technical glitch rather than localized issues.
TechCrunch reported that the outage lasted for several hours, leaving users in limbo and sparking speculation about the root cause. The incident raised questions about the platform's reliability and prompted discussions about the broader implications of such outages in an interconnected digital world.
Speaking to the BBC, Mr. Wozniak further noted that the technology may well be harnessed by "bad actors." He argued that AI-generated content should be clearly labelled, and he also highlighted the need for proper regulation in the industry.
In March, Mr. Wozniak, along with Tesla CEO Elon Musk, signed a letter urging a halt to the development of more potent AI models.
Mr. Wozniak, also referred to as Woz in the tech community, is a seasoned veteran of Silicon Valley who co-founded Apple with Steve Jobs and created the company's first computer. In an interview with BBC Technology Editor Zoe Kleinman, he discussed his fears as well as the advantages of artificial intelligence.
"AI is so intelligent it's open to the bad players, the ones that want to trick you about who they are," said Kleinman.
AI refers to computer programs that can perform tasks that would typically require human intelligence. This includes systems that can identify objects in images and chatbots that can comprehend queries and provide responses that seem human.
Mr. Wozniak ardently believes that AI will not be replacing humans, since it lacks emotions. However, he warns against bad actors, since AI is making their schemes more convincing; one example is the generative AI chatbot ChatGPT, which can craft text that sounds human and "intelligent."
Wozniak believes that whoever publishes AI-generated content should be held accountable for it. "A human really has to take the responsibility for what is generated by AI," he says.
The large tech companies that "feel they can kind of get away with anything" should be held accountable by regulations, according to him.
Yet he expressed doubt that authorities would make the correct decisions, saying, "I think the forces that drive for money usually win out, which is sort of sad."
Mr. Wozniak, a computer pioneer, believes that those developing artificial intelligence now could learn from the opportunities missed during the early days of the internet. Although "we can't stop the technology," in his opinion, we can teach individuals to recognize fraud and other nefarious attempts to obtain personal information.
Last week, the current CEO of Apple, Tim Cook, told investors that it is crucial to be "deliberate and thoughtful" in approaching AI. "We view AI as huge, and we'll continue weaving it in our products on a very thoughtful basis," he said.
Neuralink's goal is to link human brains to computers, and the company is planning to test the technology on individuals with paralysis. A robot will be assigned the task of implanting a brain-computer interface (BCI) into the human brain, which will allow subjects to control a computer cursor, or type, using only their thoughts.
However, rival companies have already achieved this feat by implanting BCI devices in humans.
Neuralink's clinical trial was approved by the US Food and Drug Administration (FDA) in May, an important milestone given the struggle the company had faced in gaining approval.
In regards to this, Neuralink stated at the time that the FDA approval represented "an important first step that will one day allow our technology to help many people."
While the final number of people recruited has not yet been confirmed, according to a report by news agency Reuters, the company had sought the FDA's approval to implant the devices in 10 people (their former or current employees).
Brain Signals
The six-year study will commence following a surgery in which a robot will implant 64 flexible threads, thinner than a human hair, on a region of the brain that controls "movement intention."
These enable Neuralink's experimental N1 implant, which runs on a remotely rechargeable battery, to record and transmit brain impulses to an app that decodes a person's intended movement.
Neuralink says people are eligible for the trial if they have quadriplegia resulting from an injury or from amyotrophic lateral sclerosis (ALS) – a disease in which the nerve cells in the spinal cord and brain degenerate.
Precision Neuroscience, founded by a Neuralink co-founder, also aims to assist those who are paralyzed. It claims that its implant, which resembles a very thin piece of tape and rests on the surface of the brain, can be inserted via a "cranial micro-slit" in a less complicated procedure.
Meanwhile, existing technology is producing results. In two separate studies conducted in the US, implants were used to track brain activity during attempted speech, which might later be decoded to aid communication.
While Mr. Musk's involvement has played a major role in raising Neuralink's profile, the company still faces rivals, some of whom have a history going back almost two decades. In 2004, Blackrock Neurotech, a company based in Utah, implanted the first of several BCIs.
According to Dr Adrien Rapeaux, a research associate in the Neural Interfaces Lab at Imperial College London, "Neuralink no doubt has an advantage in terms of implantation," taking into account that a majority of its operations will be assisted robotically.
By contrast, Dr. Rapeaux, co-founder of the neural implant start-up Mintneuro, says he is not sure how Neuralink's approach to converting brain signals into useful actions will fare any better than the methods previously used by Blackrock Neurotech, for example. He also doubts whether the technology will remain accurate and reliable over time, which is "a known issue in the field."
The biography confirms that Musk once suggested Tesla record video of drivers' behaviour behind the wheel using the internal monitoring camera. His stated goal was to use the footage as evidence to shield Tesla from inquiries in the event of a crash.
The book ‘Elon Musk’ states that Musk pushed for using the internal monitoring camera to record footage of Tesla drivers, initially without their awareness, with the intention of using the footage as evidence in investigations linked to the Autopilot ADAS.
According to an excerpt from the book, Musk was convinced that one of the main reasons for accidents was bad drivers and not bad software. "At one meeting, he suggested using data collected from the car's cameras – one of which is inside the car and focused on the driver – to prove when there was driver error," the excerpt read.
However, several privacy concerns were raised. One woman at the meeting, citing guidance from the company's legal team, noted that Tesla could not link the camera streams to specific vehicles, even ones involved in accidents.
Apparently, Musk was not happy with the answer. According to Isaacson, the concept of "privacy teams" did not warm his heart: "I am the decision-maker at this company, not the privacy team. I don't even know who they are. They are so private you never know who they are," Musk said during the meeting.
Musk then recommended that a pop-up could be used instead to tell people that if they used Full Self-Driving Beta, Tesla would collect data in the event of a crash. The woman nodded, noting that "as long as we are communicating it to customers, I think we're okay with that." The exchange is quite telling of the way Elon Musk runs his companies, and also of his stance on privacy.
Such pop-ups are now a feature in Tesla vehicles: users are notified that the company will use data from the internal cameras and are given the option to agree or decline to Tesla collecting their cabin camera data. It is important to note that Tesla has not yet used in-cabin footage to defend itself in court cases or government inquiries involving the Autopilot system.
Currently, Tesla is facing a class action lawsuit over video privacy, following allegations that groups of Tesla employees privately shared invasive videos and images recorded by customers' car cameras between 2019 and 2022. Another lawsuit, filed in Illinois, focuses particularly on the cabin camera.