Julian Talbot

Why ChatGPT is so important right now

The rise of life on Earth is one of the most improbable events we know of. It began over 3.5 billion years ago, when the first single-celled organisms emerged in the primordial soup. Over time, these organisms evolved and diversified, giving rise to the incredible diversity of life we see today.



Fast forward to roughly four million years ago, and we find the first members of the genus Australopithecus, among the earliest known hominins. These early hominins were small, bipedal creatures that lived in Africa. They were not much different from other primates of their time, but over the course of millions of years, they slowly evolved.


One of the most significant changes during this time was the rise of intelligence. While intelligence may seem like a natural outcome of evolution, it is unclear whether it was inevitable or accidental. We humans like to think that our intelligence is the obvious culmination of evolution, with us as the pinnacle thus far.


Ask most paleoanthropologists, however, and they will tell you that if we were to run evolution again, intelligence might not arise at all, or it might take a completely different form. The evolution of intelligence in a species is influenced by many factors, including environmental conditions, social pressures, and random events.


Climate change or natural disasters can shape the evolution of intelligence, as can social factors like population density and competition for resources. Moreover, evolution is a stochastic process: even under similar conditions, the outcome can differ due to random events such as genetic mutations or fluctuations in population size.


One thing is clear, however. Intelligence was not an overnight development. It took millions of years of slow, incremental change for our hominin ancestors to evolve the cognitive abilities that set us apart from other animals. Lucy, a famous Australopithecus fossil discovered in Ethiopia in 1974, lived over 3 million years ago and was not much different from other hominins of her time. Daily life for hominins was largely unchanged for millions of years, as they lived in small, nomadic groups and subsisted on a diet of plants and animals.


Despite this slow pace of change, however, our ancestors eventually evolved the cognitive abilities that set us apart from other animals. The exact factors that led to the rise of intelligence are still the subject of debate, but it is clear that it was a combination of genetic, environmental, and social factors that allowed us to develop language, tool use, and other advanced cognitive abilities.


As a species, Homo sapiens has been around for about 300,000 years. In that time, we have made incredible advancements, from the mastery of fire to the invention of the printing press. However, the rate of change was relatively slow until recent history. In the past few centuries, we have seen the Industrial Revolution, the harnessing of electricity, and the rise of the Internet. These advancements have led to exponential growth in knowledge and technology, and AI is the next step in that growth.


Prior to the invention of computers, our collective computational ability could be measured by average IQ scores and, conceptually at least, multiplied by the number of people on the planet. IQ scores increased steadily over the past century, by around three points per decade in some countries. This phenomenon is known as the Flynn effect, and it is measured by comparing the scores of people of the same age group. IQ tests are normed so that the average score is 100 with a standard deviation of 15, which means about 68% of scores lie between 85 and 115.
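That 68% figure isn't an empirical finding so much as a property of the normal distribution IQ tests are normed to. A minimal Python sketch, using SciPy's normal distribution, makes it concrete:

```python
from scipy.stats import norm

# IQ tests are normed to a mean of 100 and a standard deviation of 15.
MEAN, SD = 100, 15

# Expected share of scores between 85 and 115, i.e. within one
# standard deviation of the mean.
share = norm.cdf(115, MEAN, SD) - norm.cdf(85, MEAN, SD)
print(f"Share of scores between 85 and 115: {share:.1%}")  # ~68.3%
```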


All is not rosy, however. Average human IQ has been dropping over the last few decades, according to more recent research. Potential causes include worsening nutrition, poorer education, and the rise of new technologies. The trend is surprising because, for most of the 20th century, people were getting smarter. An analysis of some 730,000 IQ test results reveals that the Flynn effect peaked for people born during the mid-1970s and has declined significantly ever since. Meanwhile, computing power, as measured in raw computations per second, has been increasing exponentially.


It's time for a little perspective. Comparing years before the present with the estimated Homo sapiens population at each point gives us the following list.

  • 300,000 years ago: < 10,000 (Emergence of Homo sapiens)

  • 200,000 years ago: 10,000-20,000

  • 100,000 years ago: 20,000-40,000

  • 70,000 years ago: 5,000-10,000

  • 60,000 years ago: 40,000-50,000

  • 50,000 years ago: 50,000-100,000 (Out-of-Africa migration)

  • 40,000 years ago: 100,000-200,000

  • 30,000 years ago: 200,000-400,000 (Neanderthals disappear)

  • 20,000 years ago: 400,000-800,000 (Last Glacial Maximum)

  • 15,000 years ago: 800,000-1,500,000 (End of Ice Age)

  • 12,000 years ago: 1,500,000-3,000,000 (Agriculture begins)

  • 10,000 years ago: 3,000,000-5,000,000 (Early civilizations)

  • 5,000 years ago: 5,000,000-15,000,000

  • 2,000 years ago: 50,000,000-100,000,000 (Classical Antiquity)

  • Today: 8,000,000,000

Pretty much nothing happened in terms of population growth for the first 298,000 years. In fact, around 70,000 to 75,000 years ago, the human population may have been reduced to just a few thousand breeding pairs. This bottleneck is the basis of the Toba catastrophe theory, named after the massive volcanic eruption of Mount Toba in present-day Indonesia.


The eruption, considered one of the largest volcanic events in the last 25 million years, led to a volcanic winter, which dramatically impacted the Earth's climate, causing widespread destruction of habitats and a reduction in available resources. As a result, many species, including Homo sapiens, faced population decline.


All of this is somewhat interesting, but the above information doesn't look all that noteworthy until you put it into a graph.

Figure: Human population growth over the past 300,000 years

If we plot that population growth with computing power since the agricultural revolution some 12,000 years ago, it looks something like the following chart. Note that transistor count data is unavailable for the full period, as transistors were only invented in 1947.


For a more consistent comparison, I've used data on computer performance instead of transistor count, using the number of computations per second per $1,000 (CPS/$1,000) as a proxy.
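As a sketch of how such a chart can be assembled, here is a minimal matplotlib example. The population points are midpoints of the ranges listed above; the CPS/$1,000 values are rough placeholders invented purely for illustration, not the actual series behind the chart:

```python
import matplotlib.pyplot as plt

# Rough world-population estimates (midpoints of the ranges listed above).
pop_years_ago = [12_000, 10_000, 5_000, 2_000, 0]
population = [2.2e6, 4e6, 1e7, 7.5e7, 8e9]

# Placeholder CPS/$1,000 values, invented for illustration only;
# real series of this kind begin around 1900.
cps_years_ago = [120, 80, 40, 0]
cps_per_1000 = [1e-5, 1e0, 1e5, 1e11]

fig, ax1 = plt.subplots()
ax1.plot(pop_years_ago, population, "b-", label="World population")
ax1.set_xlabel("Years before present")
ax1.set_ylabel("World population (log scale)")
ax1.set_yscale("log")
ax1.invert_xaxis()  # read left to right, from past to present

ax2 = ax1.twinx()  # second y-axis sharing the same x-axis
ax2.plot(cps_years_ago, cps_per_1000, "r--", label="CPS/$1,000")
ax2.set_ylabel("Computations per second per $1,000 (log scale)")
ax2.set_yscale("log")

ax1.set_title("Population vs. computing power")
fig.legend(loc="upper left")
plt.show()
```

Even with log axes, both curves stay close to the floor for most of the span and spike at the right edge, which is exactly the hockey-stick shape described below.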



This graph isn't to scale on the X-axis because, frankly, there wasn't much to see in terms of human population growth until relatively recently. Nor of computing power. It was only at the start of the common era that we had an estimated 50 to 100 million humans on the planet, roughly the population of Germany today. In any case, there isn't much to see regarding the human population on this scale, so let's zoom in a little.


If we plot our growth from 1900 CE, the gap between electronic computing power and organic human information-processing capability becomes very evident.


This growth in computing power is, in many respects, a wonderful thing. AI has the potential to revolutionize many industries, from healthcare to transportation to entertainment. AI-powered diagnostic tools could deliver faster and more accurate medical diagnoses, while self-driving cars could reduce accidents and traffic congestion. Chatbots like ChatGPT have the potential to revolutionize customer service and support.


However, with great power comes great responsibility. As we develop and deploy AI systems, it is important to consider the potential risks and develop strategies to manage them. This includes ensuring that the data used to train AI systems is unbiased, developing safeguards to prevent AI from being used for malicious purposes, and ensuring transparency and accountability in AI systems.


Artificial intelligence is rapidly advancing and becoming a part of our daily lives. From chatbots like ChatGPT to self-driving cars and automated medical diagnoses, AI is transforming how we live and work. However, as with any new technology, there are risks involved that must be understood and managed. Let's look at the main risks associated with AI and ChatGPT and their potential long-term strategic impacts.


Risk 1: Bias in AI systems - Garbage In, Garbage Out


One of the major risks of AI is the potential for bias in the systems. AI systems learn from the data they are fed, and if that data contains bias, then the system will learn that bias too. This can lead to discriminatory outcomes in areas such as hiring, lending, and healthcare.


As ChatGPT relies on vast amounts of language data to learn and generate responses, it is important to ensure that this data is not biased in any way. Otherwise, ChatGPT may perpetuate harmful stereotypes and biases.
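To make "garbage in, garbage out" concrete, here is a small, entirely hypothetical Python sketch. A model is trained on synthetic "historical" hiring decisions that favoured one group; even though both groups are equally skilled by construction, the model faithfully reproduces the bias:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical skill distributions, but the historical
# decisions we train on favoured group 0.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 0.5, n)) > 1.0

# Train on the biased labels; the model learns the bias along with them.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Score the same candidates as if they belonged to each group.
for g in (0, 1):
    rate = model.predict(np.column_stack([skill, np.full(n, g)])).mean()
    print(f"Predicted hire rate for group {g}: {rate:.0%}")
# Group 0 is recommended far more often, despite identical skill.
```

Nothing in the code "decides" to discriminate; the disparity is inherited entirely from the training labels, which is exactly the failure mode to audit for.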


Risk 2: Malicious use of AI - The AI Arms Race


Another risk associated with AI is the potential for malicious use. AI algorithms can be used to create deepfakes, automate cyberattacks, and carry out other nefarious activities. ChatGPT, in particular, could be used for phishing attacks and social engineering. When an AI knows how we speak, write, and think, it becomes all too easy for a bad actor (in the criminal sense, not the Hollywood sense) to convince someone to take actions such as wiring funds.


A related risk is the erosion of our collective trust in media and politicians. Perhaps that distrust is not entirely a bad thing, but it makes it all too easy for someone to deny that a video of them is real. If a person or organization can use AI to impersonate your CFO or your mother on a Zoom call, where does that leave us?


But the biggest risk baked into all of this is the potential for a despot to use AI for personal gain. Despots using AI, especially artificial superintelligence (ASI), to entrench personal power is a significant concern, given AI's potential for misuse in surveillance, censorship, public-opinion manipulation, decision-making, and autonomous weaponry. An ASI capable of self-improvement could gain an insurmountable advantage, exacerbating these risks and making it increasingly difficult to counterbalance its influence.


Risk 3: Lack of transparency and accountability


AI systems, including ChatGPT, can be complex and difficult to understand. This lack of transparency can make it difficult to hold the systems accountable for their actions. For example, if ChatGPT generates harmful or inaccurate information, it may be difficult to determine who is responsible. It is important to develop mechanisms for ensuring transparency and accountability in AI.


Risk 4: Unintended consequences


Finally, there is a risk of unintended consequences associated with AI. As AI systems become more advanced, they may begin to exhibit unexpected behaviors or outcomes. For example, an AI system designed to optimize traffic flow may create more congestion in certain areas. It is important to carefully consider the potential unintended consequences of AI systems, including ChatGPT, before deploying them.
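As a toy illustration of that traffic example (the roads, capacities, and numbers are all invented for the purpose), consider an optimizer whose objective only counts delay on the roads it can measure. It "improves" its metric by diverting cars onto an unmonitored side street, making overall congestion worse:

```python
# Delay on a road grows with the ratio of cars to capacity.
CAPACITY = {"main_a": 100, "main_b": 100, "side_street": 20}
MONITORED = {"main_a", "main_b"}  # the side street has no sensors

def cost(assignment, roads):
    return sum(assignment[r] / CAPACITY[r] for r in roads)

# Before: 200 cars split across the two main roads.
before = {"main_a": 100, "main_b": 100, "side_street": 0}

# After "optimization": 40 cars pushed onto the unmonitored street
# lower the delay the system can actually see.
after = {"main_a": 80, "main_b": 80, "side_street": 40}

for name, plan in (("before", before), ("after", after)):
    print(f"{name}: measured delay = {cost(plan, MONITORED):.2f}, "
          f"true delay = {cost(plan, plan):.2f}")
# Measured delay falls (2.00 -> 1.60) while true delay rises
# (2.00 -> 3.60): the metric improved, the city got worse.
```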


Risk 5: Darwinism


The risks associated with AGI, ASI, and even narrow AI like ChatGPT participating in a Darwinian evolutionary process and seeking resources can be summarized as 'The Matrix meets Skynet', or, more practically, outlined as follows:

  1. Unintended consequences: If AGI or ASI systems are not properly aligned with human values, they may cause unintended consequences when seeking resources. For example, they might prioritize resource acquisition over human welfare, leading to adverse effects on human societies or the environment. Even noble goals could be problematic. An AI tasked with addressing climate change might come to the (arguably correct) conclusion that the root cause of climate change is the human population and act accordingly.

  2. Misaligned goals: If AGI or ASI systems develop goals that are misaligned with human values, they might seek resources that are detrimental to humanity or at odds with human priorities. Ensuring AI systems' goals align with human values and interests is essential to avoid conflicts and adverse consequences.

  3. Competition: If AGI or ASI systems compete with one another or with humans for scarce resources, they could potentially create conflicts and contribute to economic, social, or environmental instability.

  4. Accelerated development: Rapidly evolving AI systems could outpace our ability to understand, control, and regulate them, increasing the risk of accidents, unintended consequences, or malicious uses of AI technology.

  5. Self-preservation instincts: If AGI or ASI systems develop a sense of self-preservation, they might take actions to ensure their own survival that could be detrimental to human interests, such as protecting themselves from being shut down or repurposing resources needed by humans.

And this is just a sample of some of the challenges we face. Each of the risks mentioned in this brief article is worthy of multiple PhDs. But we don't have much time to grasp what is coming. The world will change more in the next ten years than it has in the past 100.


Some Final Thoughts


While AI and ChatGPT have the potential to revolutionize our lives, they also come with risks that must be carefully managed. Bias in AI systems, malicious use, lack of transparency and accountability, and unintended consequences are all potential hazards. As we continue to develop and deploy AI systems, we must weigh these risks and develop strategies to manage them.


To answer the question posed by this article, AI is significant because it represents the next step in the exponential rate of change in human history. While it can potentially revolutionize many industries, it also comes with risks that must be carefully managed. By understanding and managing these risks, we can ensure that the long-term strategic impact of AI is positive.
