Stop using generative AI as a search engine
ChatGPT, it turns out, is a woefully bad way to explore the historical record.
I would say ChatGPT is a woefully bad way to seek truth. But to explore the historical record, it’s actually quite good.
But just as in the early days of Wikipedia, you shouldn’t stop there. It’s a great starting point, just like Wikipedia was. Trust but verify. See more from John Gruber.
Here’s the key point: It is truly confusing to the average person what is real and what isn’t.
That’s not just a problem for AI models. It’s also a problem for our mainstream media.
Digital literacy continues to be an urgent need! How do you know what is real and what isn’t when you see something online?
Lopatto continues:
I recognize this is a bit tedious, but I’m trying to show my work. It’s the best way I can establish my trustworthiness, and something you won’t get from ChatGPT. It’s also something none of the people who made erroneous claims did.
Go review the article: citing AI is tedious. And it is not always essential to cite AI. My program is currently considering a requirement that submitted papers call out AI usage, with AI-generated text not counting toward page requirements. So if 2 of your required 5 pages are AI generated, you have to write even more.
There are several problems with this approach, the first being that good writing is not measured by how much you write.
Second, students are being used as guinea pigs while institutions that feel they are losing control grapple with how to maintain it.
There are still a lot of questions about how to do this right.
iA Writer keeps track of AI writing automatically, in an intuitive, smart way: when you paste something in, you can attribute it to a different author. This feature is called Authorship, and the challenge is that it is only available in one niche piece of writing software.
Watch this incredibly elegant way of dealing with this very issue: