Cybercrime is big business in Asia, and AI may be about to make everything worse


Southeast Asia has become a global epicenter of cyber scams, where high-tech fraud meets human trafficking. In countries such as Cambodia and Myanmar, criminal syndicates run industrial-scale "pig butchering" operations: scam centers that target victims in wealthy markets like Singapore and Hong Kong.

The scale is staggering: one UN estimate puts global losses from these schemes at $37 billion. And it could soon get worse.

The region's cybercrime boom is already affecting politics and policy. Thailand reported a drop in Chinese visitors this year after a Chinese actor was kidnapped and forced to work in a scam compound in Myanmar; Bangkok is now struggling to convince tourists that it is safe to visit. And Singapore passed an anti-scam law that allows law enforcement to freeze the bank accounts of scam victims.

Why is Asia so vulnerable to cybercrime? Ben Goodman, Okta's general manager for Asia-Pacific, says the region offers some unique dynamics that make cybercrime scams easier to pull off. For example, the region is a "mobile-first market": popular mobile messaging platforms such as WhatsApp, Line, and WeChat make it easy for scammers to connect directly with victims.

AI is also helping scammers overcome Asia's linguistic diversity. Goodman notes that machine translation, "a phenomenal use case for AI," makes it "easier for people to be baited into clicking the wrong links or approving something."

North Korea is also involved. Goodman points to allegations that North Korea uses fake employees to gather intelligence at major technology companies and to bring much-needed cash into the isolated country.

A new risk: 'Shadow' AI

Goodman is worried about a new AI-related risk in the workplace: "shadow" AI, meaning employees using personal accounts to access AI models without their company's oversight. "That could be someone preparing a presentation for a business review, going into their own personal account and generating an image," he explained.

This can lead to employees unknowingly sharing confidential information with a public AI platform, creating "potentially a lot of risk in terms of information leakage," he said.


Agentic AI could also blur the boundaries between personal and professional identities: for example, something tied to your personal email as opposed to your corporate one. "As a corporate user, my company gives me an application to use, and they want to govern how I use it," he said.

But "I never use my personal profile for a corporate service, and I never use my corporate profile for personal services," he added. "The ability to delineate who you are, whether at work using work services or in your own life using your own personal services, is how we think about customer identity."

And for Goodman, this is where things get more complicated. AI agents are empowered to make decisions on a user's behalf, which makes it important to determine whether a user is acting in a personal or a corporate capacity.

"If your identity is ever stolen, the blast radius, in terms of what can be done to your reputation or to steal money from you, is much greater," Goodman warns.
