
ChatGPT is one of the best, if not the best, AI tools one can use, and honestly, I have used it on several occasions to get some data. However, the one thing I would never use the platform for is anything where a wrong answer could prove harmful. For instance, in the past, we have seen lawyers getting help from this AI chatbot and landing in trouble, and while one would imagine that once the precedent is set, people would stop, that does not appear to be the case.
Another lawyer finds himself in trouble after citing information from ChatGPT, showing how unreliable AI chatbots can be in such cases
It turns out that U.S. attorney Steven Schwartz got himself and several other parties in trouble after using some points he had taken from ChatGPT. This has resulted in a $5,000 fine for Schwartz, along with the other people involved.
The court was quick to find that the ChatGPT-generated citations Schwartz used in the case were false. In other words, Schwartz had the false citations in hand and went on to argue the case with them anyway, an error that resulted in the $5,000 fine.
But how did this all transpire? Well, this is where things get a bit funny and, honestly, sad as well. Steven Schwartz's client was suing the Colombian airline Avianca, and for some reason, Schwartz decided to get information on similar cases from ChatGPT instead of doing his own homework. That wasn't even the biggest mistake. The biggest mistake was that Schwartz decided to trust the results the AI chatbot gave him instead of actually vetting them. He went on to present his findings to the court, and well, that is where his case fell apart.
You see, ChatGPT gave Schwartz fake court cases that were similar to the one he was dealing with. When the court did its own research, it found that the cases never existed, and that resulted in Schwartz and his associates being fined $5,000. This isn't the first time something like this has happened, as we have reported on a similar incident in the past.
Honestly, this is nothing new. ChatGPT, along with other large language models, tends to fabricate information when it does not have the data to formulate a correct answer. In Steven Schwartz's case, the information provided to him was incorrect on all counts. The presiding judge did note that the use of technology is becoming more common and that there is nothing inherently wrong with it, but it falls entirely on the lawyer to fact-check the information they have been given, and by failing to do so, Schwartz and his associates "abandoned their responsibilities." Going forward, I really hope such incidents do not become common practice, because this can cause problems not just for lawyers but for people in plenty of other fields as well.
Source: Engadget
from Wccftech https://ift.tt/XCFdifG