Lawyers could face ‘severe’ penalties for fake AI-generated citations, UK court warns


The High Court of England and Wales says lawyers need to take stronger steps to prevent the misuse of artificial intelligence in their work.

In a ruling tying together two recent cases, Judge Victoria Sharp wrote that generative AI tools such as ChatGPT “are not capable of conducting reliable legal research.”

“Such tools can produce apparently coherent and plausible responses, but those coherent and plausible responses may turn out to be entirely incorrect,” Judge Sharp wrote. “The responses may make confident assertions that are simply untrue.”

This does not mean that lawyers cannot use AI in their research, but they have a professional duty to check the accuracy of such research before relying on it in their professional work.

Judge Sharp suggested that the growing number of cases in which lawyers (including, in the United States, lawyers representing major AI platforms) have cited apparently AI-generated falsehoods shows that more must be done to ensure lawyers comply with their duties to the court, and said her ruling would be forwarded to professional bodies, including the Bar Council and the Law Society.

In one of the cases in question, a lawyer submitted a filing citing 45 cases, 18 of which did not exist; many of the others did not contain the quotations attributed to them or had no relevance to the application.

In the other, a lawyer representing a man who had been evicted from his London home cited five cases that did not exist in a court filing. (The lawyer said the citations may have come from AI-generated summaries that appeared in Google or Safari.)

“Lawyers who do not comply with their professional obligations in this respect risk severe sanctions,” she said.

Both lawyers were either referred, or referred themselves, to professional regulators. Judge Sharp noted that when lawyers fail in their duties to the court, the court’s powers range from “public admonition” to the imposition of costs, contempt proceedings, or even “referral to the police.”


