[ATTW-L] Great cautionary tale re: using ChatGPT as a research assistant

Emily DeJeu edejeu at andrew.cmu.edu
Thu Jun 15 18:58:28 UTC 2023


Taking a cue from Sam Dragga, who is so kind about sending links to
relevant and timely case studies, I wanted to share some links that might
be useful for those of you who (like me) are looking for real-world cases
about the affordances and drawbacks of using ChatGPT as a writing
assistant. These links reference a case that’s been in the news a lot
lately — the one about the lawyer who, in an opposition to a motion to
dismiss, cited a number of nonexistent cases that ChatGPT had provided to
him.

This link (https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/)
has a boatload of documents, but most relevant are 3/1/23 (the plaintiff
counsel’s opposition with bogus citations), 3/15/23 (the defense counsel’s
shorter response noting that they can’t find many of the cases cited in the
opposition), 4/25/23 (the plaintiff counsel’s response to a request from
the judge to provide full briefs of the mysterious cases in question), and
6/6/23 (lawyer Schwartz’s acknowledgment that he was badly misled by
ChatGPT).

This link (https://www.documentcloud.org/documents/23844470-mata-v-avianca-20230608-show-cause-transcript)
contains the full transcript of a recent hearing on 6/8 in which the court
pressed Schwartz and several other attorneys in his law office about their
use of ChatGPT.

While there are a lot of potentially fruitful takeaways here, I've found
two so far that may be particularly relevant for students:

1. Schwartz repeatedly indicates in the hearing transcript that he assumed
ChatGPT had access to information that he didn’t and that even though
neither he nor members of the opposing counsel could actually find the
cases ChatGPT was referencing, those cases must exist somewhere if ChatGPT
was referencing them. Even when the judge presents the plaintiff’s legal
team with evidence that the case summaries ChatGPT provided for the false
cases are full of contradictions (the judge calls them “legal gibberish”),
the legal team says that they assumed the cases were legitimate. This
assumption — that AI knows more than we do, has access to information we
humans cannot access ourselves, and can be trusted even when the
information it provides is contradictory and nonsensical — is obviously
dangerous. This seems like a great opening for students to discuss agency
and responsibility in ethical communication.

2. In the 6/6 documents (see exhibit A in particular — a truly fascinating
transcript of Schwartz's ChatGPT queries) and in the hearing transcript,
Schwartz admits that he did not approach ChatGPT in a spirit of genuine
inquiry but rather asked the AI to argue in favor of a particular position
and then to provide cases that supported that position. As the judge notes
in the hearing transcript on page 26, lines 23-24, “The computer complied.
It provided you with a case. It wrote a case. It followed your command.” I
think this could produce fruitful discussion with students about the
difference between genuine inquiry and “inquiry” that sets out with an
argumentative end in mind and seeks corresponding evidence. ChatGPT seems
particularly sensitive to the latter and, as this case suggests, will
sometimes “hallucinate” in order to satisfy user expectations.

I realize this is a lot of legal nerding out (especially during summer
vacation), but I find all of this fascinating and thought some of you
might, too!

-- 

Emily B. DeJeu, PhD | she/her

Assistant Teaching Professor, Business Management Communication

z   https://cmu.zoom.us/j/2060195608

o   TPR 4133 | 412-268-5120

cv  emilybdejeu.com



Carnegie Mellon University

Tepper School of Business

5000 Forbes Avenue Pittsburgh, PA 15213
cmu.edu/tepper