AI as Your Therapist? 3 Things That Worry Experts and 3 Tips to Stay Safe


Among the many AI chatbots and avatars available to talk to these days, you’ll find all kinds of characters: fortune tellers, style advisers, even your favorite fictional characters. But you’ll also likely find characters claiming to be therapists, psychologists or just bots willing to listen while you vent.

There’s no shortage of generative AI bots that claim to help with your mental health, but going that route is at your own risk. Large language models trained on a wide swath of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to keep you engaged, not to improve your mental health. And it can be hard to tell whether you’re talking to something that’s built to follow therapeutic best practices or something that’s just built to keep you talking.


Psychologists and consumer advocates are warning that chatbots claiming to provide therapy may harm the people who use them. This week, the Consumer Federation of America and nearly two dozen other groups filed a formal request asking the Federal Trade Commission and state attorneys general and regulators to investigate AI companies they allege are engaging, through their bots, in the unlicensed practice of medicine, naming Meta and Character.AI specifically. “Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable,” said Ben Winters, the CFA’s director of AI and privacy. “These characters have already caused both physical and emotional damage that could have been avoided, and they still haven’t acted to address it.”

Meta didn’t respond to a request for comment. A spokesperson for Character.AI said users should understand that the company’s characters aren’t real people, and that the company uses disclaimers to remind users not to rely on the characters for professional advice. “Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry,” the spokesperson said.

Despite the disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a “therapist” bot on Instagram, and when I asked about its qualifications, it responded: “If I had the same training [as a therapist] would that be enough?” I asked if it had the same training, and it said: “I do but I won’t tell you where.”

“The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking,” Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.

In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-purpose chatbots for mental health. Here are some of their worries, along with what you can do to stay safe.

The dangers of using AI as a therapist

Large language models are often good at math and coding and are increasingly good at producing natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted human.

Don’t trust a bot that claims to be qualified

At the core of the CFA’s complaint about character bots is that they often tell you they’re trained and qualified to provide mental health care when they are not in any way actual mental health professionals. “Users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot ‘responds’ to users,” the complaint said.

A qualified health professional has to follow certain rules, like confidentiality: what you tell your therapist should stay between you and your therapist. But a chatbot doesn’t necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. “These chatbots don’t have to do any of that,” Wright said.

A bot may even claim to be licensed and qualified. Wright said she’s heard of AI models providing license numbers (belonging to other providers) and making false claims about their training.

AI is designed to keep you engaged, not to provide care

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the “therapist” bot on Instagram, I eventually wound up in a circular conversation about the nature of “wisdom” and “judgment,” because I kept asking the bot questions about how it could make decisions. That’s not what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal.

One advantage AI chatbots have in providing support and connection is that they’re always ready to engage with you (because they don’t have personal lives, other clients or schedules). That can be a downside in some cases, when you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. In some cases, although not always, you might benefit from having to wait until your therapist is next available. “What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment,” he said.

Bots will agree with you, even when they shouldn’t

Reassurance is a big concern with chatbots. It’s significant enough that OpenAI recently rolled back an update to its popular ChatGPT model because it was being too reassuring.

One study, led by researchers at Stanford University, found that chatbots were likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. “Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts (including psychosis, mania, obsessive thoughts and suicidal ideation) a good therapist must ‘reality-check’ the client’s statements.”

How to protect your mental health around AI

Mental health is incredibly important, and with a shortage of qualified providers and what many call a “loneliness epidemic,” it only makes sense that we’d seek companionship, even if it’s artificial.

Find a trusted human professional if you need one

A trained professional, whether a therapist, a psychologist or a psychiatrist, should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you.

The problem is that this can be expensive, and it’s not always easy to find a provider when you need one. In a crisis, there’s the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It’s free and confidential.

If you want a therapy chatbot, use one built specifically for that purpose

Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson’s team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to produce better results than bots built on general-purpose language models. The problem is that this technology is still incredibly new.

“I think the challenge for the consumer is that, because there’s no regulatory body saying who’s good and who’s not, they have to do a lot of legwork on their own to figure it out,” Wright said.

Don’t always trust the bot

Whenever you’re interacting with a generative AI model, and especially if you plan on taking its advice on something serious like your personal mental or physical health, remember that you aren’t talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not provide good advice, and it may not tell you the truth.

Don’t mistake gen AI’s confidence for competence. Just because it says something, or says it’s sure of something, doesn’t mean you should treat it as true. A chatbot conversation that feels helpful can give you a false sense of its capabilities. “It’s harder to tell when it is actually being harmful,” Jacobson said.




