Post by account_disabled on Jan 18, 2024 6:51:10 GMT -3
Another example, just for fun: In that example, the mistake is glaringly obvious. But that won’t always be the case. Imagine you’re researching a niche subject for something important and ChatGPT spits out a hallucination. If you’re researching the topic, you probably don’t know a ton about it, which means you may not recognize that ChatGPT was lying to you. And as lighthearted as the above examples are, this issue can sometimes have much darker ramifications. For example, in April 2023, there was a newsworthy incident in which ChatGPT falsely included an innocent professor’s name on a list of people accused of sexual harassment. Yikes.
That means you definitely need to be careful about how readily you believe the info ChatGPT gives you. ChatGPT presents its answers confidently, but that doesn’t mean they’re actually correct.

8. Producing biased responses

The last item on our list of bad ChatGPT results, and arguably the most serious, is its tendency to introduce bias into its responses. Here’s the thing: a lot of people talk about ChatGPT as though it’s some objective, rational thinker in a world of biased humans. But I’m not sure those people are aware of how ChatGPT works. ChatGPT is trained on content made by us biased humans, so it has all that bias built in as well.
The types of bias ChatGPT can display range across several different areas — it’s been known to show favoritism to (or stereotypes about) particular races, sexes, political parties, and more. Here’s an example I was able to generate: ChatGPT jumps to the conclusion that “he” must refer to the mechanic, not the kindergarten teacher. Of course, you might be saying, “But Matthew, maybe it only assumed that because of the syntactical structure, not because it assumed that the mechanic must be a man.”
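To see why bias gets baked in, here is a minimal toy sketch (not ChatGPT's actual training pipeline, and the mini-corpus is invented for illustration): any model that learns pronoun-occupation associations purely from the statistics of its training text will reproduce whatever skew that text contains.

```python
from collections import Counter

# Hypothetical mini-corpus with a deliberate gender skew, invented for
# illustration: "mechanic" mostly co-occurs with "he", "kindergarten
# teacher" mostly with "she".
corpus = [
    "the mechanic said he would finish the repair",
    "the mechanic said he needed a new part",
    "the kindergarten teacher said she loved her class",
    "the kindergarten teacher said she planned a field trip",
    "the mechanic said she was almost done",
]

def pronoun_counts(occupation: str) -> Counter:
    """Count how often each pronoun appears alongside an occupation."""
    counts = Counter()
    for sentence in corpus:
        if occupation in sentence:
            words = sentence.split()
            for pronoun in ("he", "she"):
                counts[pronoun] += words.count(pronoun)
    return counts

def most_likely_pronoun(occupation: str) -> str:
    """A purely statistical 'guess': the majority pronoun in the data."""
    return pronoun_counts(occupation).most_common(1)[0][0]

print(most_likely_pronoun("mechanic"))              # "he" (2 vs. 1 in corpus)
print(most_likely_pronoun("kindergarten teacher"))  # "she" (2 vs. 0 in corpus)
```

The point of the sketch: nothing in the code "decides" that mechanics are men; the skew in the output comes entirely from the skew in the input text, which is exactly the dynamic at play (at vastly larger scale) when a language model is trained on human-written content.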