
What is generative AI?

By Jacob Andra / Published June 26, 2024 
Last Updated: July 28, 2024

Imagine if a parrot could not only repeat words but also learn the patterns in language to create new sentences. Then imagine that parrot consumes petabytes of content from the internet and remembers it all. It doesn’t understand human language, but it can believably respond to anything you say. You’d have something similar to generative artificial intelligence (gen AI)—at least on the surface.

Gen AI uses machine learning models to understand and recreate patterns. It doesn't simply mimic what it has learned; it recombines those patterns to produce original (and believable) text, images, videos, and more.

But gen AI is far more than a parrot on steroids. Buckle up to learn how it's going to change the world as we know it.

Key takeaways
Gen AI will revolutionize almost every area of business.
While impressive, gen AI has limitations.
Gen AI brings new ethical dilemmas.
The faster you learn to leverage gen AI, the better you’ll be positioned.

How does generative AI work?

Generative AI uses algorithms to create new content based on existing data. This process is driven by advanced machine learning techniques, primarily deep learning and neural networks.

Neural networks

Neural networks loosely mimic the structure and function of the human brain. These networks consist of layers of nodes, or neurons, that process data and learn patterns.

When trained on large datasets, neural networks can generate new content by predicting and assembling elements based on learned patterns.

  1. Train the model. The first step in generative AI is training the model using a huge amount of data. For example, a generative AI model designed to create text (such as GPT-3) is trained on diverse text datasets, including books, articles, and websites. The model learns grammar, context, and nuances of language during this phase.
  2. Pattern recognition. As the model processes data, it recognizes patterns and relationships within the input data. For text generation, this might include understanding sentence structure, word associations, and contextual relevance. For image generation, it could involve recognizing shapes, colors, and textures.
  3. Generate new content. Once trained, the model uses its learned patterns to generate new content. For text, it can write essays, stories, or articles by predicting what comes next based on the initial input. For images, it can create new visuals by blending learned elements in novel ways.
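To make these steps concrete, here's a deliberately tiny sketch in Python. It uses a simple Markov chain over a hypothetical corpus rather than a neural network, but it runs the same loop in miniature: train on data, record the patterns, then generate new sequences from them.

```python
import random
from collections import defaultdict

# 1. "Train" on a (hypothetical) tiny corpus: record which word follows which.
corpus = "the parrot learns the patterns and the parrot speaks".split()
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# 2. The learned "patterns" here are just observed word adjacencies.
# 3. Generate new text by repeatedly sampling a plausible next word.
def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g., "the parrot learns the patterns and the parrot"
```

Real models such as GPT-3 replace the word-adjacency table with billions of learned parameters, but the train, recognize, generate cycle is the same.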

Deep learning

Deep learning, a subset of machine learning, drives the capabilities of neural networks through multiple layers of processing. These layers allow the model to learn complex patterns and representations from large datasets.

  • Layered learning. Deep learning models consist of many layers, each extracting higher-level features from the raw input. For example, in image generation, initial layers might detect edges and simple shapes, while deeper layers identify complex structures such as objects and scenes.
  • Backpropagation. This algorithm trains deep learning models by adjusting the weights of the connections between neurons based on the error between the model's output and the expected result (see the sketch after this list).
  • High computational power. Deep learning requires substantial computational resources. Specialized hardware such as graphics processing units and tensor processing units handle the intense computations involved in training deep learning models.
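As a rough, self-contained illustration of layered learning and backpropagation (not how production frameworks implement it), here's a minimal two-layer network in Python that learns XOR; the layer sizes, seed, and learning rate are arbitrary choices:

```python
import numpy as np

# A two-layer network trained with backpropagation to learn XOR.
# Illustrative only: real deep learning stacks many more layers and
# runs on specialized hardware (GPUs/TPUs).
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10001):
    # Forward pass: each layer extracts features from the previous one.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the output error back through the layers.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)

    # Nudge each weight in the direction that reduces the error.
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

    if step % 2000 == 0:
        print(f"step {step}: loss {np.mean((output - y) ** 2):.4f}")
# The printed loss should fall toward zero as training proceeds.
```

Frameworks such as PyTorch and TensorFlow automate exactly this weight-adjustment loop, at vastly larger scale.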

Types of generative AI

Most generative models fall into the following three categories:

  • Large language models (LLMs) predict the next word in a sequence based on extensive training data. LLMs excel at creating textual content and power many chatbots, writing assistants, and text completion tools.
  • Generative adversarial networks (GANs) employ two competing neural networks to produce new content. One network generates content, while the other evaluates it. This back-and-forth process results in increasingly realistic outputs. GANs work well for creating visual and audio content, such as artificial images or synthetic voices (see the sketch after this list).
  • Variational autoencoders (VAEs) compress input into a coded form and then decode it to create a new output. VAEs often produce visual content or code and excel at tasks such as image generation and style transfer.
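Of the three, the adversarial setup of GANs is probably the least intuitive, so here's a minimal, hypothetical PyTorch sketch in which a generator learns to mimic samples from a simple Gaussian distribution. Real GANs run the same two-player loop with much larger networks and image or audio data:

```python
import torch
import torch.nn as nn

# Toy GAN: a generator learns to mimic samples drawn from a Gaussian
# distribution with mean 4. A minimal illustrative example, not production code.
real_batch = lambda n: torch.randn(n, 1) * 1.25 + 4.0  # stand-in "real" data
noise = lambda n: torch.randn(n, 8)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for _ in range(3000):
    # The discriminator learns to label real samples 1 and generated samples 0.
    real, fake = real_batch(64), G(noise(64)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # The generator learns to make the discriminator call its output "real".
    g_loss = bce(D(G(noise(64))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The mean of generated samples should drift toward the real mean (~4.0).
print(G(noise(1000)).mean().item())
```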

Benefits and limitations of generative AI

Generative AI will revolutionize almost every aspect of business in the coming years, but it’s not perfect, nor is it free of negative consequences. Here at Talbot West, given our focus on AI implementation, we spend more time on the use cases of gen AI than on the downsides. However, we still keep our eye on the limitations and adverse effects of artificial intelligence technologies.

As they say, you can't put the genie back in the bottle. Generative AI is here to stay, so we may as well leverage its upsides while managing the ethical issues and other considerations it brings.

Uses for gen AI

Creators have adopted generative AI to produce new content, from articles to images to video. Content creation is the most obvious use case for the tech, but it's far from the only one. Here is a small sampling of the uses for gen AI in the enterprise:

  • Product design: automate subprocesses in the design of new products, prototypes, and simulations, and ideate and iterate rapidly to augment human creativity.
  • Customer service: enhance chatbots and virtual assistants for more natural and efficient customer interactions.
  • Data augmentation: generate synthetic data to improve model training and performance, especially in data-scarce environments.
  • Marketing: support audience segmentation, predictive analytics, and automated A/B testing to create targeted, effective campaigns and optimize budget allocation.
  • Fraud detection: identify patterns and anomalies in financial transactions to prevent fraudulent activities.
  • Drug discovery: accelerate the discovery and development of new drugs by simulating molecular structures and interactions.
  • Supply chain optimization: predict demand, optimize inventory management, and improve logistics through advanced simulations.
  • Training simulations: create realistic training environments for employees in healthcare, aviation, the military, and other sectors.
  • Code generation: automate code writing for software development, speed up the development process, and reduce human error.
  • RAG systems: retrieval-augmented generation (RAG) gives enterprises an internal AI expert for instant, accurate information on SOPs, policies, inventory, pricing, or anything else you want to include (see the sketch after this list).
  • Market research: analyze vast datasets to uncover market trends, customer preferences, and competitive insights for more informed decision-making.
  • Agentic AI: deploy autonomous AI agents for tasks such as managing IT systems, optimizing workflows, automating routine processes, and even negotiating contracts.
  • Scientific research: generate hypotheses, design experiments, and analyze complex datasets to accelerate discoveries in fields such as biology, chemistry, and physics.
  • Engineering: optimize designs, simulate performance, and predict failures in engineering projects for more efficient and innovative solutions.
  • Advancing innovation: assist in patent analysis, materials discovery, and the development of new technologies by synthesizing vast amounts of scientific literature and data.
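To illustrate the RAG pattern mentioned above, here's a deliberately tiny sketch of the retrieval step. The documents and the word-overlap scoring are hypothetical stand-ins: a production system would compare embedding vectors and send the assembled prompt to an LLM.

```python
# Minimal sketch of the retrieval step in a RAG pipeline (hypothetical data).
documents = [
    "Refunds are issued within 14 days of purchase.",
    "Orders placed before noon ship the same day.",
    "Enterprise pricing starts at 500 dollars per month.",
]

def retrieve(query: str, top_k: int = 1) -> list[str]:
    # Toy relevance score: count of overlapping words. Real systems
    # compare embedding vectors instead.
    query_words = set(query.lower().split())
    def score(doc: str) -> int:
        doc_words = set(doc.lower().replace(".", "").split())
        return len(query_words & doc_words)
    return sorted(documents, key=score, reverse=True)[:top_k]

query = "when does my order ship"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then go to a generative model
```

The design point: because the model answers from retrieved company documents rather than from its training data alone, answers stay current and grounded in your own sources.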

As with the internet in the 1990s, it’s impossible to predict all the ways generative AI will impact our lives and our organizations. Here at Talbot West, we’re discovering new uses for it all the time.

But make no mistake: the impact of gen AI will be on a similar scale to that of the internet. These effects are already showing up, though still in their early stages.

If you’d like to explore how generative AI can drive efficiencies in your workplace, request a free consultation with Talbot West. We can discuss specific tools, implementations, and risk management strategies.


Gen AI limitations

Generative AI technologies are not a panacea. They have the following limitations and must be thoughtfully combined with human ingenuity for the best results. We view gen AI as a force multiplier for humans, not a replacement for them.

  • Biases: generative AI systems may contain built-in biases from their training data. These biases can be difficult to detect, and AI tools are blind to their own. The solution: careful attention by human experts to spot and account for biases.
  • Closed-loop reinforcement: generative AI tools sometimes participate in a closed feedback loop of reinforcement that's divorced from reality. One example: generative AI creates internet content, which is then used to train generative AI models. The solution: original human input into the system, as well as subject matter expertise to spot erroneous closed-loop cycles.
  • Hallucination: when a generative AI model doesn't know something, it doesn't admit its ignorance; instead, it fabricates an answer. The solution: human experts should vet information generated by AI.
  • No truly new ideas: while generative AI creates original content, it can't generate truly novel paradigms (at least as of August 2024), because it's limited by the scope of its training dataset.
  • No first-hand experience or personal opinions: generative AI tools can't think for themselves, can't experience anything, and can't hold an opinion. The solution: enrich generative AI outputs with human reviews, experiences, and opinions.

Despite these limitations, generative AI's abilities are truly mind-boggling and are increasing by the day. We believe the ultimate game-changer is the harnessing of combined human and AI abilities.

Concerns surrounding generative AI


While much of the public discourse on AI is fanciful and disconnected from reality, there are legitimate critiques of the technology, which can broadly be grouped into ethical, environmental, and doomsday concerns.

Ethical concerns

  • Discrimination. Generative AI models learn from existing data, which can contain biases that reflect societal inequalities and lead to stereotypes and unfair treatment of certain groups. For example, if trained on job descriptions that favor male candidates for engineering roles, the AI might generate biased job postings, potentially leading to unfair hiring practices. Potential solutions: implement bias detection and correction, ensure diverse datasets, and invest in corporate AI governance.
  • Exacerbating unemployment. People worry about technology taking jobs away from humans. This worry is heightened in the age of AI, with artists, content creators, low-level administrative staff, and many knowledge workers wondering what the fate of their jobs will be. Potential solutions: promote reskilling programs, develop AI technologies that augment human work, and support policies for worker transition.
  • Lack of transparency. Generative AI models, especially large language models and diffusion models, work in ways that are hard for humans to understand. This "black box" nature makes it difficult to figure out why an AI made a certain decision or generated a particular output. This is called the problem of explainability. Potential solutions: invest in explainability research, favor more interpretable architectures where possible, and audit model outputs.

Environmental concerns

  • Energy consumption. Training and running AI models, especially large ones, require significant computational power, leading to high energy usage and increased carbon footprints. Potential solutions: promote energy-efficient algorithms, use renewable energy for data centers, and implement carbon offset programs.
  • Resource intensity. The hardware needed for AI, such as GPUs and data centers, requires substantial resources to manufacture and maintain. Potential solutions: optimize the AI hardware lifecycle through recycling, develop less power-intensive models, and research alternative materials.
  • E-waste. The rapid obsolescence of AI hardware contributes to the growing problem of electronic waste. Potential solutions: support e-waste recycling programs, design durable and upgradeable hardware, and raise awareness about sustainable practices.
  • Water usage. Data centers consume large amounts of water for cooling, which can strain local water resources. Potential solutions: develop water-efficient cooling technologies, monitor and optimize water use, and adopt closed-loop cooling systems.
  • Ecosystem degradation. The mining of rare earth metals for AI hardware can have detrimental environmental impacts, including habitat destruction and pollution. Potential solutions: promote sustainable mining practices, invest in rare earth metal recycling, and support research into alternative materials.

Doomsday concerns

  • Runaway intelligence. AI becomes intelligent enough to initiate a self-improvement cycle in which it progressively reinforces itself, detaches from human oversight, provisions other AI models, deploys resources at scale, and becomes the stuff of sci-fi nightmares. Potential solutions: implement safety measures, foster interdisciplinary research on superintelligent AI, and establish international regulations and oversight.
  • The "bad actor" scenario. Terrorists, rogue states, or other bad actors leverage AI to carry out actions that would previously have been difficult or impossible. Potential solutions: strengthen cybersecurity, develop ethical AI frameworks, and collaborate globally to mitigate threats from malicious AI use.

What types of data is generative AI best for?


Depending on the specific model, generative AI can process, understand, and output a wide range of data types:

  • Text. AI can write stories, articles, social media posts, and computer code.
  • Images. AI can create new images or edit existing images.
  • Audio. AI can create fake voices that sound real, make music, and create sound effects.
  • Video. AI can generate video, including realistic-seeming “live-action” footage, animations, and more.
  • Multimodal. Advanced generative AI systems can work with different types of data at the same time; for example, they can create a video with sound, or a website with text, images, and graphics.
  • Synthetic data. Generative AI can create synthetic datasets that mimic real-world data. This is useful for training other AI systems, especially when real data is hard to get or needs to be kept private.
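As a toy illustration of the synthetic data idea, the sketch below draws new samples that match the summary statistics of a made-up "real" dataset. Real synthetic-data tools fit far richer generative models, but the principle is the same:

```python
import numpy as np

# Hypothetical example: generate synthetic samples that mimic the
# statistics of a (made-up) "real" dataset, e.g. for privacy-safe
# sharing or for training other models.
rng = np.random.default_rng(7)
real_ages = rng.normal(42, 12, size=1000).clip(18, 90)  # stand-in "real" data

synthetic_ages = rng.normal(real_ages.mean(), real_ages.std(), size=1000)
synthetic_ages = synthetic_ages.clip(18, 90)

print(f"real mean/std: {real_ages.mean():.1f}/{real_ages.std():.1f}")
print(f"synthetic mean/std: {synthetic_ages.mean():.1f}/{synthetic_ages.std():.1f}")
```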

What are foundation models in generative AI?

Foundation models are large-scale machine learning systems that form the core technology for generative AI applications. 

Think of a foundation model as a car engine, and a generative AI application as the vehicle that the engine powers. The engine on its own can't do much, but an engine-powered car can drive to the grandparents' house, take a teenager to soccer practice, and perform many other tasks.

Similarly, a foundation model is the “engine” that powers a generative AI tool such as ChatGPT. 

Parameters are the building blocks of foundation models, functioning like the model's "brain cells." They store the patterns and relationships the model learns from its training data. More parameters allow a model to capture more complex language nuances and store broader knowledge. This increased capacity translates to better performance, but more parameters also equate to higher computational demands and increased resource usage.
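For a sense of how parameter counts add up, here's a back-of-the-envelope calculation for a small fully connected network; the layer widths are hypothetical:

```python
# Back-of-the-envelope parameter count for a small fully connected
# network. Foundation models do the same bookkeeping across far wider,
# deeper transformer layers.
layer_sizes = [512, 2048, 2048, 512]  # hypothetical layer widths

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    total += n_in * n_out + n_out  # weight matrix plus bias vector
print(f"{total:,} parameters")  # about 6.3 million

# GPT-3, for comparison, has roughly 175 billion parameters.
```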

Here are some of the leading foundation models on the market:

  • GPT-4 (OpenAI): advanced language model for text generation and complex tasks. Parameters: estimated to be in the range of 100-170 billion.
  • Gemini (Google): multimodal AI model capable of processing and generating text, images, audio, and video. Parameters: not publicly disclosed.
  • LLaMA 2 (Meta): open-source model optimized for research and academic use, with strong text generation. Parameters: up to 70 billion.
  • Claude 2 (Anthropic): focused on safety and alignment, designed to be more interpretable and controllable. Parameters: not publicly disclosed.
  • Turing-NLG (Microsoft): high-quality text generation, with strong performance in various NLP tasks. Parameters: 17 billion.
  • Command R (Cohere): optimized for retrieval-augmented generation (RAG), with improved performance in tasks requiring external knowledge retrieval. Parameters: not publicly disclosed.
  • StableLM (Stability AI): open-source, designed for stability and reliability, with strong performance in text generation and understanding. Parameters: not publicly disclosed.
  • Mistral 7B (Mistral AI): efficient, lightweight model designed for a wide range of NLP tasks, with high performance despite its smaller size. Parameters: 7 billion.

A brief history of generative AI

  • 1956: the term "artificial intelligence" is coined by John McCarthy at the Dartmouth Conference, marking the birth of AI as a field of study.
  • 1960s: early AI systems such as ELIZA, developed by Joseph Weizenbaum, simulate human conversation.
  • 1972: the Prolog programming language is developed, which becomes essential for AI research and the development of logic-based AI systems.
  • 1980s: the rise of expert systems, which use rule-based models to simulate human expertise in specific domains, advances the understanding of AI's potential.
  • 1997: IBM's Deep Blue defeats world chess champion Garry Kasparov and showcases the power of machine learning and AI in complex problem-solving.
  • 2006: Geoffrey Hinton and his team introduce the concept of deep learning, which involves training artificial neural networks with multiple layers. This breakthrough leads to significant advancements in AI's ability to process and generate complex data.
  • 2014: Ian Goodfellow and his colleagues introduce Generative Adversarial Networks (GANs), a revolutionary approach that pits two neural networks against each other to improve the quality of generated data. One network, the generator, creates fake data, while the other, the discriminator, evaluates its authenticity. This competition drives both to improve.
  • 2015: OpenAI is founded with the mission to ensure that artificial general intelligence (AGI) benefits all of humanity. This organization is one of the most important in advancing generative AI research.
  • 2019: OpenAI releases GPT-2, a generative pre-trained transformer model capable of generating coherent and contextually relevant text based on a given prompt.
  • 2020: OpenAI releases GPT-3, the successor to GPT-2, with 175 billion parameters. GPT-3 showcases unprecedented language generation capabilities and leads to widespread adoption in different applications, from chatbots to creative content writing.
  • 2023: the release of GPT-4 by OpenAI brings further advancements in natural language processing.
  • 2024: Anthropic’s Claude beats GPT-4 in several accepted benchmarks of generative AI performance.

The future of generative AI

By 2025, generative AI is expected to produce 30% of outbound marketing messages, and by 2026, it could create up to 90% of online content. This rapid adoption signals a transformative shift in how businesses operate.

AI will be an integral part of creative processes, data analysis, and decision-making across industries. As the technology evolves, we can expect more sophisticated, context-aware AI systems that can handle a range of tasks and collaborate seamlessly with humans.

This growth will also bring challenges, including the need for robust AI governance, ethical considerations, and the evolution of human roles in an AI-augmented workplace. The future of generative AI promises increased efficiency and innovation, but it will require careful navigation to maximize benefits while addressing risks.

Generative AI FAQ

Is Grammarly a generative AI?

Grammarly is not a generative AI. It is an AI-powered writing assistant that uses natural language processing to check grammar, punctuation, style, and clarity. Grammarly improves and suggests edits for existing text but does not generate new content.

Is Google a generative AI?

Google is not a generative AI. Google is a technology company that offers many AI-powered services and products, including search engines, virtual assistants, and cloud computing. Google has developed generative AI models, such as Gemini, Bard, and PaLM-2, which can generate text and other content. These models are part of Google's broader AI initiatives.

What's the difference between generative AI and adaptive AI?

Generative AI creates new content, such as text, images, or music, based on patterns learned from data. Adaptive AI modifies its behavior and improves performance over time in response to new data and experiences.

What is predictive AI?

Predictive AI uses machine learning algorithms to analyze historical data and forecast future events or behaviors. It identifies patterns and trends in data to predict outcomes, which is useful in applications such as weather forecasting, stock market analysis, and maintenance planning.

How does AI-generated content compare to human-created content?

AI-generated content can be produced quickly and at scale, but it may lack the nuance and creativity of human-created content. AI content is useful for drafts, ideas, and routine tasks, while human content excels in originality, emotional depth, and complex reasoning.

We predict that the best content of the future will be a collaboration between human and artificial intelligence.

How does generative AI help with data management?

Generative AI automates data entry, generates synthetic data for testing, and assists in data cleaning, anomaly detection, and report generation from complex datasets.

Resources

  • Dupré, M. H. (2022, September 18). Experts: 90% of Online Content Will Be AI-Generated by 2026. Futurism. https://futurism.com/the-byte/experts-90-online-content-ai-generated
  • Generative AI Use Cases for Industries and Enterprises. (2023, January 26). Gartner. https://www.gartner.com/en/articles/beyond-chatgpt-the-future-of-generative-ai-for-enterprises

About the author

Jacob Andra is the founder of Talbot West and a co-founder of The Institute for Cognitive Hive AI, a not-for-profit organization dedicated to promoting Cognitive Hive AI (CHAI) as a superior architecture to monolithic AI models. Jacob serves on the board of 47G, a Utah-based public-private aerospace and defense consortium. He spends his time pushing the limits of what AI can accomplish, especially in high-stakes use cases. Jacob also writes and publishes extensively on the intersection of AI, enterprise, economics, and policy, covering topics such as explainability, responsible AI, gray zone warfare, and more.


