

B2B Wins #9: The fluent bullshit of ChatGPT
Often wrong. Never in doubt. Kinda like some humans.
ChatGPT demonstrates the coming fundamental change in how we interact with the internet. Is ChatGPT itself that change? Probably not. As we'll discuss, there are some very rough edges. But I do believe that it is the kernel of a foundational change in how we'll do digital discovery in the future. It's a change equal to that of Google back in 1998.
What is ChatGPT you ask?
ChatGPT is a chatbot that uses a large language model known as GPT-3 (Generative Pretrained Transformer 3) to generate responses to user input in a conversational manner. This allows ChatGPT to understand and respond to a wide range of topics in a natural and human-like way.
The integration of GPT-3 into a chatbot platform like ChatGPT has the potential to revolutionize how we interact with the internet. With its ability to understand and respond to a wide range of topics, ChatGPT can serve as a virtual assistant that can help users find information, answer questions, and perform a variety of tasks. This could make it easier for users to access information and services online, and make the internet a more user-friendly and accessible place.
ChatGPT wrote the two previous paragraphs in response to the following prompt: Can you explain in two paragraphs what ChatGPT is and how will it change how we interact with the internet?
So, is AI about to replace Google? To replace authors everywhere? Let's get into it.
Don't let the AI do your homework.
"This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.” -James Vincent, The Verge
Ben Thompson, in his December 5th Stratechery post, talks about an experiment he ran. He asked ChatGPT to do his daughter's homework. He asked the bot whether Thomas Hobbes believed in the separation of powers. ChatGPT came back with three paragraphs describing Hobbes's framework for three branches of government. Great stuff. The only problem, as Thompson points out, is that three branches of government was James Madison's idea (riffing off of some other folks). Hobbes believed in a monarchy.
I had no idea what Hobbes believed in, and I was convinced by the ChatGPT response. It was truly fluent bullshit.
While much of the stuff published on the internet reads like fluent bullshit, most writers shoot for bullshit that is at least somewhat grounded in reality. Not so for ChatGPT. And to be fair, OpenAI makes no representation that what ChatGPT says is true.
Key learning: don't let ChatGPT do your homework.
And if you're running a large internet site where users contribute content, you might want to prohibit the use of ChatGPT to protect your users from such bullshit. Stack Overflow announced that ChatGPT-generated content was not welcome on the site and that anyone using it in their posts would be sanctioned.
The problem is the parrots
"How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks?" Bender et al, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
ChatGPT correctly told us that it uses GPT-3 to generate responses to questions. GPT-3 is a large language model (LLM) that got smart by basically "reading" large swaths of the internet.
We know from further Googling that OpenAI used a method called reinforcement learning from human feedback (RLHF) to help ChatGPT become good at writing answers to questions. This training uses humans to give the model feedback on how close its responses come to the goal; in this case, the goal is fluent answers to questions. You can see this process continue today: every ChatGPT chat has a feedback tool, thumbs-up or thumbs-down plus a comments box, so that the model can keep getting smarter.
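To make that loop concrete, here's a toy sketch in Python. Real RLHF trains a reward model and then fine-tunes the language model with reinforcement learning; this only illustrates the "humans score the outputs, the system prefers what scores well" idea, and all the function names here are mine.

```python
from collections import defaultdict

# Toy sketch of the feedback loop: humans rate responses, the system
# learns to prefer the ones that score well. Real RLHF trains a reward
# model and fine-tunes with reinforcement learning; this is just the idea.

feedback_scores = defaultdict(int)  # response text -> cumulative score

def record_feedback(response, thumbs_up):
    """Add +1 for a thumbs-up, -1 for a thumbs-down."""
    feedback_scores[response] += 1 if thumbs_up else -1

def best_response(candidates):
    """Prefer the candidate humans have rated most highly so far."""
    return max(candidates, key=lambda r: feedback_scores[r])

record_feedback("Hobbes favored an absolute sovereign.", thumbs_up=True)
record_feedback("Hobbes proposed three branches of government.", thumbs_up=False)

print(best_response([
    "Hobbes favored an absolute sovereign.",
    "Hobbes proposed three branches of government.",
]))  # prints the thumbs-up answer
```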
So why is ChatGPT wrong so often? It may be because of the ambitiously broad original training of the model. OpenAI basically crawled a large portion of the internet, gathering every web page it could legally get its hands on.
GPT-3 doesn't know any facts. What GPT-3 knows is how data, in this case words on web pages, are statistically related to one another. Words within a piece of content are related to other words in that content; that content is related to other content across the web; and so on. All those relationships are modeled in a way that lets GPT-3 infer connections between pieces of information.
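Here's a deliberately tiny illustration of that idea: a bigram counter that predicts the most likely next word. GPT-3 is vastly more sophisticated, but the core point is the same; it's statistics about words, not a database of facts.

```python
from collections import Counter, defaultdict

# A language model in miniature: count which word tends to follow which,
# then predict the most likely next word. It learns word co-occurrence,
# not facts about the world.

corpus = (
    "hobbes wrote about government . "
    "madison wrote about separation of powers . "
    "madison wrote about government ."
).split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word):
    """Return the word that most often follows `word` in the corpus."""
    return next_word_counts[word].most_common(1)[0][0]

# A statistical relationship, not a claim about what anyone believed:
print(most_likely_next("wrote"))  # -> "about"
```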
That’s probably why the Hobbes homework response was so wrong. I’m sure there are web pages that talk about Hobbes, Locke, Montesquieu, and Madison. ChatGPT hoovered them up and made the correct statistical inference and the wrong factual inference.
Researchers are looking at the risks of these large language models. One of the things that may be causing these factual accuracy problems is the size of the original training data. If you're relying on "the whole internet" to train a model, you may sacrifice factual accuracy (something ML folks call precision) in your quest to be able to answer any question (something they call recall).
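If precision and recall are new to you, here's the standard arithmetic with numbers I made up purely for illustration:

```python
# Standard definitions, with invented numbers for illustration.

def precision(true_positives, false_positives):
    # Of the answers the model gave, how many were right?
    return true_positives / (true_positives + false_positives)

def recall(true_positives, false_negatives):
    # Of the questions it could have answered, how many did it get?
    return true_positives / (true_positives + false_negatives)

# A narrow, curated model: right more often, but answers fewer questions.
print(precision(90, 10), recall(90, 60))  # 0.9 precision, 0.6 recall

# A "whole internet" model: attempts everything, wrong more often.
print(precision(70, 30), recall(70, 5))   # 0.7 precision, ~0.93 recall
```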
If our student had asked a question of a version of ChatGPT that was trained on political philosophy content instead of the whole internet, they may have gotten a correct response. The research of Bender et al. recommends just such an approach: curation of content could help models become more accurate at answering these questions. Of course, those domain-specific models wouldn't be able to answer questions in other domains.
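A rough sketch of what that curation might look like. The keyword filter here is a hypothetical stand-in for the real, much harder work of assembling a trustworthy domain corpus:

```python
# Hypothetical curation step: filter a broad corpus down to a trusted,
# domain-specific subset before training. The keywords and documents
# are stand-ins for illustration.

DOMAIN_KEYWORDS = {"hobbes", "locke", "montesquieu", "separation of powers"}

def in_domain(text):
    """Keep only documents that look like political philosophy."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in DOMAIN_KEYWORDS)

documents = [
    "Hobbes argued for an absolute sovereign in Leviathan.",
    "Top 10 pizza toppings, ranked.",
    "Montesquieu proposed the separation of powers.",
]

curated_corpus = [doc for doc in documents if in_domain(doc)]
# curated_corpus now feeds a domain-specific model instead of "the whole internet".
```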
The lesson for B2B marketers is that a model trained in their domain may help create content that is more relevant and more accurate to their audiences.
Breaking the Google Pact
Let us steal your content and we’ll bring you the world. - The promise Google made when they stole your content so they could make billions.
The first thought that I had when I used ChatGPT was "Damn, this is an awesome way to search the web". You don't get links, you get answers. Isn’t that why we search in the first place?
Google has known this for years. They've been providing answer boxes, those "People also ask" features that appear in search results, which grab content off a page and present the answer to the question. No need to visit the website with the answer. That must make website owners happy. /s
As a B2B marketer, you have to begin wondering how tools like ChatGPT will affect your future. Will this get me more traffic? Will this get me more engagement? Will this get me more leads?
Website owners create content to serve their purposes. Many of those purposes are business-related. The goal is to get people in their target audience to see this content and act. The challenge, of course, is how to find those people. Advertising is one way. Search engine marketing--both the organic and paid versions--remains very popular. So how does that model work in the future?
If a ChatGPT-powered search engine is providing answers, does it provide attribution so that someone can visit the source? If a ChatGPT-powered chatbot on a website is providing answers that may have been powered by information from another site, should it provide attribution? What if that attribution leads someone to a competitor?
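One plausible shape for the attribution answer, sketched below: retrieve the best-matching source page and return the answer along with a link back to it. The pages and the word-overlap scoring are my stand-ins; real systems use embeddings and rank many candidates.

```python
# Hypothetical answer-with-attribution: find the source page that best
# matches the question and return the answer with a link back to it.

pages = [
    {"url": "https://example.com/hobbes",
     "text": "Hobbes argued for an absolute sovereign."},
    {"url": "https://example.com/madison",
     "text": "Madison proposed three branches of government."},
]

def answer_with_source(question):
    """Score pages by word overlap with the question; cite the winner."""
    question_words = set(question.lower().split())
    best = max(pages,
               key=lambda p: len(question_words & set(p["text"].lower().split())))
    return {"answer": best["text"], "source": best["url"]}

print(answer_with_source("Who proposed three branches of government?"))
# {'answer': 'Madison proposed three branches of government.',
#  'source': 'https://example.com/madison'}
```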
We made a pact with Google back in 1998. That pact is currently rocky. Very few people who are doing SEO feel like they're being treated fairly. Even some of the folks paying for search advertising don't believe it's fair. But Google is the only game in town, and we play by their rules.
So when Google goes from "People also ask" to "Here's your answer" is the pact shattered? What replaces it?
So many questions. So few answers.
Getting Chat Right for B2B
"While chatbots can make some tasks easier for both vendors and prospective customers, these systems are only as good as the user experience they provide." -coredna
The reality is that many chat interfaces are underwhelming for users and companies. They fail users by not being able to answer their questions. They fail companies by not creating tangible results.
The early problem with chatbots was that humans had to write all the answers. Along came AI technologies like natural language processing, and chat vendors were able to hoover up all manner of content to make the chatbots smarter.
In some areas, like customer support, there is tons of data. Call logs in the contact center. Transcripts recorded "for training purposes". Support community posts. All that content contributes to making the bot able to suggest answers to all manner of questions. Most of those suggestions are made to contact center reps rather than to actual users. That's because even the best bots trained on the best data aren't really ready for public consumption.
The challenge on the pre-customer side of things is that there isn't all that much content on which to train the model. When I was at IBM, the vast majority of its tens of millions of web pages were either support pages or out-of-date landing pages. Primary marketing content was a tiny fraction of the site.
So, how do you make the best use of chat in the B2B context? It starts with really good content. You're already doing that for everything else (e.g. SEO). Keep doing it.
You then need to choose a vendor who can harvest that content in the most effective manner. We've seen some ways that work (and don't) with ChatGPT. There are vendors out there doing a nice job with much less sophisticated technologies.
The final point is the most important. Chat should be integrated with everything else you're doing. If you're personalizing landing pages, ABM target pages, and the website in general, then feed that capability into your chatbot.
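As a sketch of what "feed that capability into your chatbot" could mean, here's a toy example where the same visitor profile that drives a personalized landing page also shapes the chat context. Every field name here is hypothetical:

```python
# Hypothetical example: the same profile that personalizes landing pages
# also shapes the chatbot's context, so chat stays part of one system.

visitor_profile = {
    "industry": "healthcare",
    "abm_account": "Acme Health",
    "pages_viewed": ["/solutions/data-security", "/pricing"],
}

def build_chat_context(profile):
    """Turn the personalization signals into context for the chatbot."""
    return (
        f"Visitor from {profile['abm_account']} ({profile['industry']}); "
        f"recently viewed: {', '.join(profile['pages_viewed'])}. "
        "Tailor answers to these interests."
    )

print(build_chat_context(visitor_profile))
```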
Chat is not a thing. Chat is part of a marketing system.
Are you a bot?
I promise I’m not a bot. -Steve
One might think that this article was written by an eloquent, award-winning writer like Steve Zakur. If you believe so, you would be correct. No really, I wrote this. Except for that bit that ChatGPT wrote at the beginning. I promise.
So what are we to take away from all this?
It’s early days. The flaws in ChatGPT are going to be figured out by vendors. Some of these flaws, like the overly broad domain knowledge, could even turn out to be features. For example, content elsewhere on the web could be used to inform models that will help your customers engage with your content.
Of course, that also works for your competitors. At the end of the day, you’re still going to have to create compelling content that describes the value of your business to prospects and customers. Some of that data could power interfaces like chat. Some of it will power the simple act of human reading.
That’s a worthy goal. Connecting with people so they can connect with you.