AI safety researchers from OpenAI, Anthropic, and other organizations are publicly criticizing the "reckless" and "completely irresponsible" safety culture at xAI, the billion-dollar AI startup owned by Elon Musk.
The criticism follows weeks of scandals at xAI that have overshadowed the company's technological advances.
Last week, the company's AI chatbot, Grok, spouted antisemitic comments and repeatedly called itself "MechaHitler." Shortly after xAI took the chatbot offline to address the problem, it launched an increasingly capable frontier AI model, Grok 4, which TechCrunch and others found consults Elon Musk's personal politics for help answering hot-button issues. In the latest development, xAI launched AI companions that take the form of a hyper-sexualized anime girl and an overly aggressive panda.
Friendly ribbing among employees of competing AI labs is fairly normal, but these researchers appear to be calling for increased attention to xAI's safety practices, which they allege are at odds with industry norms.
"I didn't want to post on Grok safety since I work at a competitor, but it's not about competition," Boaz Barak, an OpenAI safety researcher, said in a post on X. "I appreciate the scientists and engineers at xAI, but the way safety was handled is completely irresponsible."
Barak particularly takes issue with xAI's decision not to publish system cards, the industry-standard reports that detail training methods and safety evaluations in a good-faith effort to share information with the research community. As a result, Barak says, it's unclear what safety training was done on Grok 4.
OpenAI and Google themselves have a spotty reputation when it comes to promptly sharing system cards for new AI models. OpenAI decided not to publish a system card for GPT-4.1, claiming it was not a frontier model, and Google waited months after unveiling Gemini 2.5 Pro to publish a safety report. Historically, however, these companies have published safety reports for all frontier AI models before full production release.
Barak also notes that Grok's AI companions "take the worst issues we currently have for emotional dependencies and tries to amplify them." Recent years have seen countless stories of unstable people forming concerning attachments to chatbots, and overly agreeable AI answers can push them over the edge.
Samuel Marks, an AI safety researcher at Anthropic, also took issue with xAI's decision not to publish a safety report, calling the move "reckless."
"Anthropic, OpenAI, and Google's release practices have issues," Marks wrote in a post on X. "But they at least do something, anything, to assess safety pre-deployment and document findings. xAI does not."
The reality is that we don't really know what xAI did to test Grok 4. In a widely shared post on an online forum, an anonymous researcher claims that Grok 4 has no meaningful safety guardrails, based on their testing.
Whether that's true or not, the world is discovering Grok's shortcomings in real time. Several of xAI's safety problems have since gone viral, and the company claims to have addressed them with tweaks to Grok's system prompt.
OpenAI, Anthropic, and xAI did not respond to TechCrunch's request for comment.
Dan Hendrycks, a safety adviser for xAI and director of the Center for AI Safety, posted on X that the company did "dangerous capability evaluations" on Grok 4, indicating that it conducted some pre-deployment testing for safety concerns. However, the results of those evaluations have not been publicly shared.
"It concerns me when standard safety practices aren't upheld across the AI industry, like publishing the results of dangerous capability evaluations," one AI researcher said.
What makes xAI's questionable safety practices notable is that Musk has long been one of the AI safety field's most prominent advocates. The billionaire owner of xAI, Tesla, and SpaceX has repeatedly warned that advanced AI systems could cause catastrophic outcomes for humanity, and he has praised a more open approach to developing AI models.
And yet, AI researchers at competing labs claim xAI is veering from industry norms around safely releasing AI models. In doing so, Musk's startup may be inadvertently making a strong case for state and federal lawmakers to set rules requiring the publication of AI safety reports.
There are several attempts at the state level to do just that. California state Sen. Scott Wiener is pushing a bill that would require leading AI labs, presumably including xAI, to publish safety reports, while New York Gov. Kathy Hochul is currently considering a similar bill. Advocates of these bills note that AI labs publish this type of information anyway, but evidently not consistently.
AI models today have yet to cause real-world catastrophes such as human deaths or billions of dollars in damages. However, many AI researchers say this could change in the near future, given the rapid progress of AI models and the billions of dollars Silicon Valley is investing to improve them further.
Even for skeptics of such catastrophic scenarios, however, there's a strong case that Grok's misbehavior makes the products it powers today meaningfully worse.
Grok spread antisemitism around the X platform this week, just a few weeks after the chatbot repeatedly brought up "white genocide" in conversations with users. Musk has said Grok will soon be more ingrained in Tesla vehicles, and xAI wants to sell its AI models to the Pentagon and other enterprises. It's hard to imagine that people driving Musk's cars, federal workers protecting the U.S., or enterprise employees would tolerate these misbehaviors any better than users on X have.
Several researchers argue that AI safety and alignment testing not only ensures the worst outcomes don't happen, but also protects against near-term behavioral problems.
At the very least, Grok's incidents tend to overshadow xAI's rapid progress in developing frontier AI models that rival OpenAI's and Google's technology, just a couple of years after the startup was founded.