Should We Fear AI?

I keep up on technology. I have a master's degree in computer science and have worked in the information technology field for 20+ years, but the ascendancy of AI (Artificial Intelligence) still took me by surprise. It's not a new area for me: I studied AI in university, and my graduate thesis was on pathfinding algorithms, a branch of AI. But when articles about ChatGPT started popping up everywhere, I realized I didn't understand it as well as I thought I did.

This all brings us to today. I decided to look under the covers of one of the more popular AI tools, ChatGPT, and get an idea of what it is and what it isn't. Just FYI, this is a HUGE topic and will require several posts for a full conversation; this post will try to lay the groundwork for additional posts that go into much greater detail.

First off, what is ChatGPT? Well, according to itself when asked 'What is ChatGPT?', it replied: "ChatGPT is a large language model developed by OpenAI, based on the GPT-3.5 architecture. It is a computer program that has been trained on a massive corpus of text data using deep learning techniques, which allows it to generate human-like responses to natural language prompts."

“ChatGPT can answer questions, carry out conversations, and generate text in a variety of styles and formats. Its ability to understand natural language and generate text has many potential applications, such as chatbots, virtual assistants, and automated customer support systems.”

Well, there you have it, all in its own words. It's a little clinical, but it is bang on. ChatGPT is a program designed to take some input (a call) from a user, human or system, and respond with a relevant reply in a style and format that matches the call.
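To make that call-and-reply pattern concrete, here is a minimal sketch in Python. The lookup table and canned replies are hypothetical stand-ins of my own invention; a real system like ChatGPT generates each reply with a large language model, not a dictionary.

```python
# A hypothetical, hard-coded table of replies used only to illustrate
# the call-and-reply shape of the conversation.
CANNED_REPLIES = {
    "what is chatgpt?": "ChatGPT is a large language model developed by OpenAI.",
}

def respond(call: str) -> str:
    """Take a caller's prompt (the call) and return a relevant reply."""
    key = call.strip().lower()
    return CANNED_REPLIES.get(key, "I'm not sure how to answer that yet.")

print(respond("What is ChatGPT?"))
# -> ChatGPT is a large language model developed by OpenAI.
```

The difference, of course, is that ChatGPT is not limited to questions someone wrote down in advance; it composes a new reply for each call it receives.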

So, what does this really mean?

At a high level, it can answer questions for users looking for information, powering things like support desks, chat, and virtual assistants, but that is only the surface. It is also capable of simulating entire chat rooms, playing games like Tic-Tac-Toe, and writing songs and lyrics (which raises a whole other issue that will need its own post).

ChatGPT is not the first AI-based system. The earliest AI systems were developed in the 1950s and 1960s, and they were primarily focused on solving problems using logic and rule-based systems. Several foundational systems from that era were built upon by later work to get us to today: ChatGPT, as well as Google's Bard, Meta's LLaMA, and Microsoft's own NLP (natural language processing) offerings.

Ok, but what is it really? Hang on, this is going to get a bit technical, but I’ll try to break it down.

ChatGPT is a member of the generative pre-trained transformer (the GPT in ChatGPT) family of language models. That's a mouthful, but it just means there is a family, or grouping, of models used to 'train' the system to express a practically unlimited range of valid responses to a call. Models have been used for a long time to represent fields of interest.

The concept of language models is a huge area and warrants its own post or two, but for now simply think of it as the way ChatGPT and other AI systems learn to do what they do.
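As a toy illustration of learning from text (not how GPT works internally, which uses deep neural networks), here is a sketch of just about the simplest possible language model: a bigram model that counts which word tends to follow which in a tiny, made-up corpus, then predicts the most frequent follower.

```python
from collections import defaultdict, Counter

# A made-up miniature corpus; real models train on billions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for each word, which words follow it and how often (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("sat"))  # -> "on" (both sentences continue "sat" with "on")
```

Scaling this idea up, replacing word-pair counts with a deep neural network that looks at long stretches of context, is, very loosely, how generative pre-trained transformers learn to produce fluent replies.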

A good resource for learning all about language models in a friendly-ish way is What Is a Language Model? | deepset.

So, why do some people seem to fear it? Here are several reasons straight from ChatGPT:

There is no inherent reason to fear ChatGPT or any other AI system, as they are simply computer programs that are designed to perform specific tasks based on the data they have been trained on. However, there are concerns about the potential misuse or unintended consequences of AI systems like ChatGPT, which could pose risks to society or individuals.

Some of the reasons people might be concerned about ChatGPT or other AI systems include:

1. Bias: AI systems like ChatGPT can inherit biases from the data they are trained on, which could lead to unfair or discriminatory outcomes. For example, if ChatGPT is trained on biased or incomplete data, it may generate responses that perpetuate stereotypes or reinforce inequality.

2. Misinformation: ChatGPT can generate convincing text, which can be used to spread false or misleading information at scale. This can be particularly concerning when it comes to issues like election interference, disinformation campaigns, or propaganda.

3. Privacy: ChatGPT and other AI systems may collect or process sensitive personal data, which could be used for malicious purposes if it falls into the wrong hands. Additionally, there are concerns about how AI systems like ChatGPT could be used to conduct surveillance or monitor individuals without their consent.

4. Autonomy: Some people may be concerned about the potential for AI systems like ChatGPT to make decisions or take actions autonomously, without human oversight or intervention. This could raise questions about accountability and responsibility in the event of unintended consequences or harmful outcomes.

Overall, the potential risks associated with ChatGPT and other AI systems depend on how they are designed, deployed, and used in practice. While there is no need to fear AI systems per se, it is important to be aware of their potential risks and to work to mitigate them through responsible development, deployment, and use.

I think it is on to something, but there are other, more human reasons that the topic of AI seems to make people a little uneasy.

In general, people are afraid of change. In the 1960s, Elisabeth Kübler-Ross, in her groundbreaking work on the stages of grief, gave us the Kübler-Ross Change Curve, which maps out the stages of accepting change, both from a personal loss perspective and from a business perspective, such as the introduction of new systems, processes, and technology. The stages are denial, anger, bargaining, depression, and finally, acceptance. It is safe to say that those who fear the coming onslaught of AI are probably at the early stages, but is this a valid feeling? It depends on who you ask.

A helpful and seemingly unbiased article to review is Will AI Replace Jobs? | Built In. It takes a middle-of-the-road approach by discussing 10 jobs the author believes AI will replace and 9 jobs AI won't replace. I'm not sure I'm fully in line with the lists, but it is a worthy starting point for a discussion, and it is not based in fear.

But there are always those who need to run around saying the sky is falling, and here in Canada, where I am located, no one does it better than the CBC (Canadian Broadcasting Corporation). Their reporting on AI includes the article Learn AI now or risk losing your job, experts warn | CBC News. It talks about how AI has already been integrated into our lives, using a hospital as an example, then meanders into which jobs will be most affected, with the general consensus being that it will affect them all. It does leave some hope for the future, though, by reminding us that even after the introduction of the calculator, there are still accounting jobs. This is not a critique of the article, so I leave you to read it and develop your own thoughts and understandings from it.

So, I ask the question: should we fear the coming of our computerized overlords (just kidding!), or will life continue?

There are no easy answers. My personal thought is that AI tools such as ChatGPT and the many others out there will enhance our lives, both business and private. But from my vantage point, I caution us not to blindly accept something because it is AI, but to question what the tool or system is supposed to do and ask: is it doing as intended?

One of the most unsettling statistics I found in my research was that over half of all Americans (and I assume the numbers are similar in other countries such as Canada) are unable to define AI and cannot identify its role in our lives. I believe this points to a real-life crisis: most people don't understand it but are willing to use it without understanding the consequences. See above: bias, misinformation, privacy, and autonomy. I believe this human factor is extremely important and will necessitate its own article.

As an example, I believe AI systems will have a profound impact on our overworked, underfunded medical infrastructure, but before I discuss my symptoms with a ChatGPT-like tool and blindly accept the prescriptions it prints out, I will want to make sure I'm not taking Soylent Green for my aches and pains.

The next AI-based blog entry will cover how AI can help small businesses and discuss some of the popular tools available today.