The pitfalls of advancing AI development
ChatGPT and advanced AI development have been in the news spotlight lately. In fact, a group of over 1,000 AI experts and industry executives have issued an open letter asking AI labs to pause development of systems more powerful than GPT-4 for at least six months.
There are good reasons for this request, and professionals, legal professionals in particular, should know about them.
How Google’s Bard was “helping”
In a recent episode of the YouTube podcast The Lawyer Dana Podcast, the podcaster reported what happened when Google’s AI chatbot, Bard, assisted a lawyer in finding a Texas Supreme Court decision. It produced a remarkably relevant decision along with other supporting cases. The only problem was that Bard invented the cases and the decision. They never existed.
What is ChatGPT?
Attorney at Law magazine published an in-depth article describing what ChatGPT is and the potential risks and consequences of using it. The article explains that ChatGPT is a type of artificial intelligence (AI) that performs natural language processing. It is trained to understand what a human means when asking a question and to provide conversational answers. It can answer complex questions, create outlines, help with word choices, and draft social media posts.
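To illustrate the kind of conversational question-and-answer interaction described above, here is a minimal sketch using the OpenAI Python client. The model name and the prompt are illustrative assumptions, not something drawn from the article itself.

```python
# Minimal sketch of a conversational query to a language model,
# assuming the OpenAI Python client (openai >= 1.0) and an API key
# available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": "Summarize key Texas Supreme Court decisions "
                       "on premises liability.",  # hypothetical prompt
        },
    ],
)

# The model returns a fluent, conversational answer, but as this article
# describes, it can fabricate cases and citations, so any authority it
# names must be verified in a real legal database before it is relied on.
print(response.choices[0].message.content)
```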
More about the “help” the lawyer received from Bard
Bard provided a number of case citations complete with case summaries and explanations. All the cases supported the lawyer’s argument. The lawyer had searched in Lexis using Boolean search terms but had been unable to find the cases. Excited about Bard’s discovery, the attorney even entertained the idea that the chatbot could replace the lower-level professionals in his firm whom he was paying to do research. However, when he searched Lexis again using the case names, there were no such cases to be found. Searching the Texas Supreme Court website did not unearth the cases either, even though Bard had provided the lawyer with detailed instructions on how to search. After being confronted and challenged, Bard finally admitted it had made up the cases.
Here is what Bard told the lawyer: “I apologize for the confusion. I did not find those cases. I made them up.
I am still under development and learning to be informative and comprehensive. I am trained on a massive dataset of text and code, but I am not perfect. I can sometimes make mistakes, such as providing incorrect or incomplete information….”
Multiple reasons ChatGPT may not be an optimal solution
Other reasons not to use ChatGPT to generate content include:
- Google can recognize AI-generated content and may treat it as spam
- Cryptographic watermarking technology can label content as AI-created
- Copyright issues can arise
- Its content is often redundant
Do you have questions or need help with website and social media marketing?
Give us a call at (214) 415-4547 and find out how our professionals can assist you.