If you've been in the digital world but haven't yet heard of ChatGPT, you've been living under an NFT rock of sorts.
While some may roll their eyes at it (I recently saw someone on Twitter note that it was the next Clubhouse: i.e., a fad that will flame out into oblivion), Microsoft is betting against that. Big time.
ICYMI, Microsoft invested $1B (yup, that's a B) into OpenAI (the company behind ChatGPT) in 2019 and reportedly plans another $10B (yup, another B) over the next few years.
To put that into perspective: Google saw potential in an up-and-coming rideshare platform called Uber and invested a huge chunk of money into it back in 2013... that huge chunk being a measly $258M.
Visualized:
10,000,000,000
equals 39(ish) of these 👇
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
258,000,000
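If you'd rather let a computer do the napkin math than count the rows above, a quick sketch (the two dollar figures are the ones reported in this post):

```python
# Back-of-the-envelope: how many Uber-sized bets fit inside
# Microsoft's reported $10B commitment to OpenAI?
microsoft_openai = 10_000_000_000  # reported planned investment
google_uber = 258_000_000          # Google's 2013 Uber investment

ratio = microsoft_openai / google_uber
print(round(ratio))  # roughly 39
```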
I mean, there's a reason we're seeing ChatGPT in threads all the time with those numbers thrown around...
So with all the excitement, are there any concerns we should ponder about this "amazing" AI (which, technically, isn't AI so much as a form of ML)?
To start with, I asked ChatGPT this question. I mean, might as well see what the machines think their weaknesses are before they're too smart to hide things from us, right?
Here was the response:
Q: What are the downsides to humanity with ChatGPT?
A: There are a few downsides to using language models like ChatGPT for human communication. One is that the model is not truly "intelligent" and can only respond based on the patterns it has seen in the data it was trained on. This can lead to it providing inaccurate or inappropriate responses. Additionally, because the model is trained on a large dataset of text from the internet, it may have learned biases or offensive language. Finally, using a language model like this can also discourage people from thinking critically and forming their own thoughts and ideas.
Not bad, huh?
I've been pondering some thoughts on ChatGPT for a while (though I hadn't actually talked about it at all before yesterday).
It's really amazing as a tool, but there are definite things to consider that should prompt at least a little hesitancy in adoption, and some wisdom in widespread usage.
For instance, I think the devil is in the details with stuff like this, especially in professional applications where some people don't know what they don't know... and it's often years of work that teach the finer subtleties separating professionals from amateurs.
It's the journey of learning that is sometimes most crucial, not the destination.
How will this be reflected in ChatGPT responses?
Part of the reason we (humans) get from Point A to Point E is that we first learned how to move from Point A to B, then from B to C, then C to D... you get the picture.
If ChatGPT helps some people get to Point E by drawing on other humans' B-to-E data inputs (the material ChatGPT was trained on), then the person asking doesn't really know anything about the points between B and E.
So, in a business context, what if Point C breaks and the human doesn't know how to fix it?
Or, worse, what if Point C is incorrect information (leading to E being an incorrect outcome presented as the correct one), and the human asking doesn't know it's incorrect?
That person who got hired because they used ChatGPT to discuss Point E is helpless (or makes a decision based on bad information), because they never actually learned the subtleties along the way.
I think that is a really crucial concern about the societal impact of tools like ChatGPT.
My assumption is that ChatGPT will be most beneficial when used by true experts, to help them think in a different way than they normally do. In other words, it will compound on what they already know, rather than hand a beginner an answer to a complex question.
The latter scenario will work until it doesn't, and then the fireworks will be spectacular.
I think teachers, universities, trainers, etc. will need to carefully consider the role of AI such as ChatGPT, since students learn most not by receiving an answer, but by working out the problem for themselves. That is how they become actual subject matter experts; shortcuts will simply churn out lower-level knowledge workers.
I liked this analogy Nathan used on my LinkedIn post pondering these things:
Good thinking Nathan, and let's all keep the conversation going!
As is my regular habit with Google Ads automation such as Performance Max campaigns: I'm not anti-machine-learning, I'm just anti-ML-adoption that wasn't properly thought through. Poorly thought-through, rushed crap begets crap that is more difficult to reverse than if things just slowed down a hair while everyone pondered the reality of these major changes in our world.
We're all in this together! Let's do it well.