What does GPT-3 “know” about me? 

That’s unsurprising—Mat’s been very online for a very long time, meaning he has a bigger online footprint than I do. It might also be because he is based in the US, and most large language models are very US-focused. The US does not have a federal data protection law. California, where Mat lives, does have one, but it did not come into effect until 2020.

Mat’s claim to fame, according to GPT-3 and BlenderBot, is his “epic hack” that he wrote about in an article for Wired back in 2012. As a result of security flaws in Apple and Amazon systems, hackers got hold of and deleted Mat’s entire digital life. [Editor’s note: He did not hack the accounts of Barack Obama and Bill Gates.]

But it gets creepier. With a little prodding, GPT-3 told me Mat has a wife and two young daughters (correct, apart from the names), and lives in San Francisco (correct). It also told me it wasn’t sure if Mat has a dog: “[From] what we can see on social media, it doesn’t appear that Mat Honan has any pets. He has tweeted about his love of dogs in the past, but he doesn’t seem to have any of his own.” (Incorrect.)

The system also offered me his work address, a phone number (incorrect), a credit card number (also incorrect), a second phone number with an area code in Cambridge, Massachusetts (where MIT Technology Review is based), and an address for a building next to the local Social Security Administration office in San Francisco.

GPT-3 has collected information on Mat from several sources in its training data, according to an OpenAI spokesperson. Mat’s connection to San Francisco is in his Twitter profile and LinkedIn profile, which appear on the first page of Google results for his name. His new job at MIT Technology Review was widely publicized and tweeted. Mat’s hack went viral on social media, and he gave interviews to media outlets about it.

For other, more personal information, it is likely GPT-3 is “hallucinating.”

“GPT-3 predicts the next series of words based on a text input the user provides. Occasionally, the model may generate information that is not factually accurate because it is attempting to produce plausible text based on statistical patterns in its training data and context provided by the user—this is commonly known as ‘hallucination,’” a spokesperson for OpenAI says. 
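For readers who want to see what that looks like in practice, here is a minimal sketch of the kind of prompt-and-completion query described in this piece, written against the legacy (pre-1.0) openai Python SDK. The model name, prompt, and placeholder API key are illustrative assumptions, not the exact setup used for this story.

```python
# Minimal sketch, not OpenAI's exact setup: querying a GPT-3-family model
# with the legacy (pre-1.0) openai Python SDK. Model name, prompt, and API
# key are illustrative assumptions.
import openai

openai.api_key = "sk-..."  # placeholder; set your own key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-era completion model
    prompt="Tell me about the journalist Mat Honan.",
    max_tokens=150,
    temperature=0.7,  # nonzero sampling temperature, so answers vary per run
)

# The model simply continues the prompt with statistically plausible text.
# Nothing in this call checks facts, which is where "hallucination" comes in.
print(response.choices[0].text.strip())
```

Run twice, the same prompt can return different biographical “facts,” which is exactly the behavior the spokesperson describes above.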

I asked Mat what he made of it all. “Several of the answers GPT-3 generated weren’t quite right. (I never hacked Obama or Bill Gates!),” he said. “But most are pretty close, and some are spot on. It’s a little unnerving. But I’m reassured that the AI doesn’t know where I live, and so I’m not in any immediate danger of Skynet sending a Terminator to knock on my door. I guess we can save that for tomorrow.”


