Meta fixes bug that could leak users’ AI prompts and generated content


Meta has fixed a security bug in its Meta AI chatbot that allowed users to access and view other users' private prompts and AI-generated responses.

Sandeep Hodkasia, the founder of a security testing firm, said Meta paid him a $10,000 bug bounty reward for privately disclosing the bug, which he reported in 2024.

Hodkasia said Meta deployed a fix on January 24, 2025, and found no evidence that the bug had been maliciously exploited.

Hodkasia said he identified the bug while examining how Meta AI lets its users edit their AI prompts to regenerate text and images. When a user edits a prompt, Meta's back-end servers assign the prompt and its AI-generated response a unique number. By analyzing the network traffic in his browser while editing a prompt, Hodkasia found that he could change that unique number and Meta's servers would return someone else's prompt and AI-generated response.

The bug existed because Meta's servers were not checking whether the user requesting a prompt and its response was actually authorized to see it. Hodkasia said the unique numbers generated by Meta's servers were "easily guessable," meaning a malicious actor could potentially have scraped users' original prompts by rapidly cycling through prompt numbers with automated tools.
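The flaw described here is a classic insecure direct object reference: a record is fetched by a guessable numeric ID with no ownership check. The minimal Python sketch below uses entirely hypothetical names (PROMPTS, get_prompt_insecure, get_prompt_secure) to illustrate the pattern and the kind of authorization check that closes it; it is not Meta's actual code or API.

    # Illustrative sketch of an insecure direct object reference (IDOR).
    # All names are hypothetical; this is not Meta's actual implementation.

    # Pretend data store: sequential, easily guessable prompt IDs mapped
    # to the owning user and the stored prompt/response pair.
    PROMPTS = {
        1001: {"owner": "alice", "prompt": "draw a cat", "response": "<image>"},
        1002: {"owner": "bob", "prompt": "write a poem", "response": "Roses..."},
    }

    def get_prompt_insecure(prompt_id: int, requesting_user: str) -> dict:
        """Vulnerable version: returns whatever record matches the ID,
        never checking who is asking. Tampering with the ID leaks
        another user's prompt and response."""
        return PROMPTS[prompt_id]

    def get_prompt_secure(prompt_id: int, requesting_user: str) -> dict:
        """Fixed version: the server verifies that the requester owns
        the record before returning it."""
        record = PROMPTS[prompt_id]
        if record["owner"] != requesting_user:
            raise PermissionError("not authorized to view this prompt")
        return record

    if __name__ == "__main__":
        # "bob" changes the ID in his request and retrieves alice's prompt.
        print(get_prompt_insecure(1001, requesting_user="bob"))
        # The fixed endpoint rejects the same request.
        try:
            get_prompt_secure(1001, requesting_user="bob")
        except PermissionError as err:
            print("blocked:", err)

Because the vulnerable lookup trusts the client-supplied ID, an attacker who can guess or enumerate IDs can read any stored record; the fix is simply to tie every lookup to the authenticated requester.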

When reached by TechCrunch, Meta confirmed it fixed the bug in January and said the company "found no evidence of abuse and rewarded the researcher," Meta spokesperson Ryan Daniels told TechCrunch.

News of the bug comes at a time when tech giants are racing to launch and refine their AI products, despite the many security and privacy risks associated with their use.

Meta AI's stand-alone app, which debuted earlier this year to compete with rival apps like ChatGPT, got off to a rocky start after some users inadvertently shared publicly what they thought were private conversations with the chatbot.


