ChatGPT – A watershed for widespread adoption of AI

Category: AI, Blockchain, Regulatory

Unless you’ve been under a rock for the last few weeks, you’ll know that Artificial Intelligence (AI) engine ChatGPT has taken the world by storm. It’s the latest sensation and it’s not all smoke and mirrors this time around.

By responding to user prompts with clear, structured output, ChatGPT has produced workout plans, cash flow models, trading engines, product roadmaps, rudimentary copywriting and plans for world domination. The user simply asks a question and the AI generates an answer. It doesn’t get much more mainstream than that.

Suffice it to say that much has been said about the advancement of AI. Following decades of development, there have been several false starts interspersed with practical breakthroughs and no shortage of theoretical models. Will the end result be like Skynet where everyone is unceremoniously terminated, or will the singularity just allow people to focus on their art?

The early days of AI

The history of AI isn’t all that long in the grand scheme of things. It started when English mathematician and computer scientist Alan Turing asked a simple question: “Can machines think?” Turing had worked with the British government deciphering German intelligence messages in WW2. In his widely revered 1950 paper, ‘Computing Machinery and Intelligence’, he pointed out that humans use outside information to reason and solve problems, so why can’t machines do the same thing?

At the time, computers could only execute rudimentary commands and lacked the ability to store programs and data, let alone make deeper inferences. The technology was also extremely expensive in Turing’s era – leasing a computer cost around $200,000 per month.

But Turing’s ideas caught on with researchers such as Allen Newell, Cliff Shaw, and Herbert Simon, whose Logic Theorist program served as an early proof of concept, and who began championing the idea. From the 1950s to the 1980s, theoretical AI work flourished, but physical computing power remained orders of magnitude too weak to exhibit anything that could be considered ‘intelligence’.

Patience dwindled, along with funding. Japan’s Fifth Generation Computer Systems project (1982–1992) then spurred a fresh leap in programming logic and computer processing, thereby advancing AI. The project never met its goals, but it renewed interest, drawing more engineers and data scientists into AI and machine learning research.

In 1997, IBM’s Deep Blue defeated grandmaster Garry Kasparov at chess, and in 2017, DeepMind’s AlphaGo beat Chinese Go champion Ke Jie.

The Deep Learning AI leap

There are three widely known ‘godfathers’ of deep learning technology, which is critical to AI improvement. They are Geoffrey Hinton, Yann LeCun, and Yoshua Bengio.

All three pursued parallel lines of thought around what’s called ‘self-supervised’ machine learning, in which part of the data is deliberately masked and the computer has to guess the missing data.

Hinton called this ‘capsule networks’, which are best described as convolutional neural networks with bits of information deliberately hidden. LeCun said he borrowed this idea to create a “self-supervised training model to fill in the blanks” – the key to creating more human-like AI. He described the model as something that’s “going to allow our AI systems to go to the next level. Some kind of common sense will emerge.”

In all cases, there is an element of cumulative guesswork, made practical by a 2017 breakthrough from Google scientists called the ‘Transformer’. The Transformer underpins language-modelling capabilities like those used in OpenAI’s ChatGPT software. Such software exploits the notion of “attention,” allowing a computer to guess what’s missing in masked data.
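The fill-in-the-blank idea behind self-supervised learning can be illustrated with a deliberately tiny toy: count which words follow which in a small corpus, then use those counts to guess a masked word. This is only a sketch of the *concept* – real systems like ChatGPT use large neural networks with attention, not bigram counts, and the corpus and function names here are made up for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word pairs so we can later guess a masked word
    from the word to its left."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def fill_mask(counts, left_word):
    """Predict the most likely word in the masked slot,
    given the word immediately before it."""
    if left_word not in counts:
        return None
    return counts[left_word].most_common(1)[0][0]

# Toy training data: the model never sees the masked sentence itself.
corpus = [
    "machines can think",
    "machines can learn",
    "machines can learn quickly",
]
model = train_bigrams(corpus)
print(fill_mask(model, "can"))  # guesses "learn" for "machines can [MASK]"
```

The point of the exercise is that no human labelled anything: the training signal comes from hiding part of the data and scoring the guess, which is exactly the self-supervised loop described above, just at a vastly smaller scale.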

The Killer app?

Technology tends to get adopted in waves; when a killer use-case comes along, like Google search or one-click taxi apps, adoption rockets. Mass adoption is driven by packaging technology into a consumer-friendly utility, and GPT technology has improved markedly with each iteration.

The first major instance of such adoption could be seen with DALL-E – an AI system that creates realistic images and art from natural language. In essence, it combines, mixes and juxtaposes art concepts, designs and styles based on plain-text descriptions.

DALL-E has over 1.5 million users creating 2 million images every day.

Similarly, ChatGPT has set a precedent and may gradually erode Google search. It crossed 1 million users in just five days; Netflix took 41 months to reach that milestone, and Twitter 24 months. As you may know, ChatGPT returns completed text when prompted with natural language phrases or sentences.

The API is simple enough for practically anyone to use, and despite its relatively basic language style there is no doubt that its responses could complement thoughtful ideas and workflows. Its conversational style can automate things like data entry, basic programming and more.

But what’s really interesting is the iterative process predicated on expanding data sets: more users drive more complex usage over time, and the tool benefits from this positive feedback loop.

When it’s all said and done, this is undoubtedly a watershed for the broad adoption of AI. There is now a tangible product which has turned a corner, offering a glimpse of what to expect from this rapidly improving innovation.


Budapest, Hungary event

02 - 04 September 2024