Does ChatGPT lie more than we do?
January 27, 2023
An increasing number of people are experimenting with ChatGPT, and most see its advantages. However, from my technical standpoint, there are some statements and discussions surrounding ChatGPT that I cannot fully follow.
How about the context?
I recently came across an article with the headline "ChatGPT lies." Yes, it's true that ChatGPT makes mistakes. It's widely known that ChatGPT's training data only goes up to 2021, which means it may provide outdated information. For example, it has referred to Annegret Kramp-Karrenbauer as the current Minister of Defense of Germany, even though there have been two successors since then. However, "lying" is a strong term: it implies not mere misinformation caused by limitations in the data, but deliberate deception. Why use such a strong word?
And what else does the claim that ChatGPT lies imply? That ChatGPT lies more frequently than humans? That it is less capable of providing accurate answers? That we should exercise greater caution when using ChatGPT as a source compared to other sources? I'm missing the context in which these claims are made. Just because ChatGPT occasionally provides incorrect information, it shouldn't be portrayed as an entirely unreliable source. It reminds me of the debate surrounding Wikipedia, where traditional encyclopedias such as the reputable Brockhaus were deemed more reliable than the user-generated online platform. However, studies conducted in the early 2000s showed that Wikipedia outperformed the professionals in accuracy, comprehensiveness, timeliness, and understandability (source: Der Spiegel, "Vergleichstest: Wikipedia schlägt die Profis"). Do we need similar comparisons again?
Who needs sources?
Another post argues that everyone should be able to tell whether a text was written by an AI or a human. Why? What difference does it make to readers? Personally, I'm not interested in that unless I'm specifically studying automated text generation. In that case, it can be intriguing to find out whether a text was written exclusively by a human or with the assistance of an AI. Otherwise, what matters to me is the output, not the process of its creation.
When I consider those who have had to relinquish their doctoral titles due to plagiarism allegations or have done so "voluntarily," I begin to understand why this discussion is taking place. We need to redefine what we consider an intellectual achievement by humans. However, does knowing who wrote a particular text bring us closer to finding answers?
A matter of definition
In other posts, it is emphasized that AI lacks creativity, vision, and empathy. Are you sure about that?
When it comes to creativity and vision, it depends on how we define them. If they involve the ability to generate new solutions from patterns and context, then AI is already creative and visionary. If creativity and vision rely on intuition, it will likely take some time before we can make AI grasp that concept. There may still be a few ingenious insights that only a human can have. But are all the ideas presented to us today as products of creativity or vision really based on intuition?
Now, let's talk about empathy, which, from my perspective, is the most challenging aspect. It's difficult for me to imagine how we can teach AI what it feels like to experience empathy from another human being. We can definitely train it to exhibit empathetic behaviors using patterns. Based on current advancements, the behavior would be appropriate, but "something would be missing." I wouldn't dare to claim that AI can never learn empathy. I'm too impressed by what is already possible today, and I'm not just referring to ChatGPT.
And then there's the question of what creativity, vision, and empathy have to do with ChatGPT. Are there text-writing tasks that require creativity, vision, or empathy in a way that AI cannot, or cannot yet, fulfill? I lack the imagination for that.