OpenAI’s amazing new language generation model, GPT-3, is completely mindless


GPT-3 is the largest language model ever built. It can produce text that reads like human writing on demand, but it does not bring us any closer to true artificial intelligence.
– Will Douglas Heaven

“Playing with GPT-3 feels like seeing the future of artificial intelligence,” Aram Sabeti, a San Francisco-based developer and artist, tweeted last week.

OpenAI first described GPT-3 in a research paper published in May. But last week it began gradually opening the model to selected users who had requested access for testing. For now, OpenAI wants outside developers to help it explore what GPT-3 can do; later this year it plans to commercialize the model, offering its capabilities to businesses through a subscription cloud service.

GPT-3 is the most powerful language model ever. Its predecessor, GPT-2, released last year, could already generate streams of text that people mistook for human writing when given an input prompt. But GPT-3 is a far bigger model: it has 175 billion parameters, compared with GPT-2’s 1.5 billion. For language models, size really does affect performance.

Sabeti linked to a blog post where he showed off short stories, poems, press releases, technical manuals, and other texts he had generated with GPT-3. GPT-3 can even deliberately imitate specific authors. Mario Klingemann, an artist who uses machine learning in his work, shared a short story called “The importance of being on Twitter,” written in the style of Jerome K. Jerome, which begins: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling cage.” Klingemann says he gave the model nothing but the title, the author’s name, and the initial word “It.” The blog even includes an article about GPT-3 written entirely by GPT-3, and it reads quite coherently.

Others have found that GPT-3 can generate any kind of text, including guitar tabs and computer code. For example, by tweaking GPT-3 so that it produces HTML rather than natural language, web developer Sharif Shameem showed that he could make it generate web-page layouts from prompts like “a button that looks like a watermelon” or “large red text that says WELCOME TO MY NEWSLETTER and a blue button that says Subscribe.” Even John Carmack, the legendary programmer who pioneered 3D graphics in video games like Doom and is now consulting CTO at Oculus VR, was unsettled: “The recent, almost accidental, discovery that GPT-3 can sort of write code does generate a slight shiver.”
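Demos like Shameem’s rely on few-shot prompting: the model is shown a handful of description-to-HTML pairs and then asked to complete a new one. Below is a minimal sketch of how such a prompt might be assembled; the example pairs, the `description:`/`html:` format, and the sample completions are illustrative assumptions, not the actual demo’s data.

```python
# Hypothetical few-shot prompt builder for text-to-HTML generation.
# The example pairs below are invented for illustration; a real demo
# would send the resulting prompt to a large language model and let
# it complete the final "html:" line.

FEW_SHOT_EXAMPLES = [
    ("a button that looks like a watermelon",
     '<button style="background:green;color:red;border-radius:50%">Click</button>'),
    ("large red text that says WELCOME TO MY NEWSLETTER",
     '<h1 style="color:red">WELCOME TO MY NEWSLETTER</h1>'),
]

def build_prompt(description: str) -> str:
    """Concatenate description->HTML pairs, ending with the new request
    so the model's most likely continuation is the matching HTML."""
    parts = []
    for desc, html in FEW_SHOT_EXAMPLES:
        parts.append(f"description: {desc}\nhtml: {html}\n")
    # Leave the final html: line open for the model to complete.
    parts.append(f"description: {description}\nhtml:")
    return "\n".join(parts)

prompt = build_prompt("a blue button that says Subscribe")
print(prompt)
```

The point of the trailing open `html:` line is that a language model trained on next-token prediction will tend to continue the established pattern, emitting markup rather than prose.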

But despite its latest tricks, GPT-3 still tends to generate harmful sexist or racist text. With GPT-2, fine-tuning helped reduce such output.

Not surprisingly, many people soon started talking about intelligence. But GPT-3’s human-like output and striking versatility are the result of excellent engineering, not genuine intelligence. For one thing, the model still makes ridiculous mistakes that reveal a total lack of common sense. And even its successes lack depth: its output reads more like copy-and-paste than original writing.

Strictly speaking, GPT-3 is a black box; nobody knows how it reasons. What it does well is synthesize text on demand in interesting ways, drawing on hundreds of millions of text fragments scraped from the internet.

This is not to belittle OpenAI’s achievement. A tool like GPT-3 has many new uses, both good (from making chatbots better to helping people write code) and bad (from feeding chatbots misinformation to letting kids cheat on their homework).

But AI milestones are often marred by over-hype. Even Sam Altman, who co-founded OpenAI with Elon Musk, tried to tamp down the excitement: “The GPT-3 hype is way too much. It’s impressive, but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”

The trouble is that when it comes to AI, our standards are very low. It is easy to be fooled by something that merely looks intelligent; the greatest trick AI ever pulled was convincing the world it exists. GPT-3 is a big step forward for artificial intelligence, but it remains a tool, with flaws and limitations.