
What is Generative AI?

The term Generative AI has garnered a lot of attention lately, with the likes of ChatGPT, Stable Diffusion, and DALL-E making frequent headlines. But with that popularity has come a lot of confusion around what the term Generative AI actually means, not to mention its relation to another similarly named field: Generative Engineering. In the fast-moving world of AI, terminology advances almost as quickly as the technology itself, making it easy to lose track of what the latest buzz is really about.

Terminology: The basics

Since the public release of ChatGPT and other similar offerings, it is not uncommon to find the term Generative AI conflated with AI in general. But just as all squares are rectangles while the converse is not true, not all AI algorithms are generative in nature. Generative AI algorithms are a subset of AI algorithms that take a prompt as an input and generate (hence the name) an output in the form of text, images, or other media.
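To make that prompt-in, output-out pattern concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small, publicly available GPT-2 model; neither is mentioned in this article, and they simply stand in for any pre-trained generative text model.

from transformers import pipeline

# Load a small, pre-trained generative language model (illustrative choice only).
generator = pipeline("text-generation", model="gpt2")

# A simple prompt goes in...
prompt = "A CPU works by"

# ...and the model generates a continuation, filling in details drawn from
# the patterns it learned during training.
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])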

Asking a Generative AI model for something isn’t that different from asking a human for it: for example, asking an artist to paint a picture of a flower, or a writer to write an article about how a CPU works. A simple prompt in the form of a request (“Paint me a picture of a flower.”) is transformed into a complete work, in this case a painting, without further input from the person making the request; the creator, be it human or AI, fills in the missing information based on existing knowledge and examples.

Generative AI models are broadly divided into two types: generative adversarial networks (GANs) and generative pre-trained transformers (GPTs), the latter being the more widely used of the two and the basis of many well-known Generative AI models, including its namesake, ChatGPT. Just like other types of AI, GPTs require massive amounts of training data during a largely unsupervised initial training process, before fine-tuning with more targeted data prepares the model for its intended use.

GPTs are also part of the family of Large Language Models, or LLMs. While no formal definition of an LLM exists, the term broadly encompasses large, multi-billion-parameter models trained on unlabeled text. For example, GPT-3 has 175 billion parameters and was trained on around 570 GB of plain text, including data crawled from many popular websites and several book corpora comprising billions of words, among other similar data sources.

Generative AI vs. General AI

Generative AI has a wide range of applications, but that does not make it a good fit for everything. Many existing applications of AI, especially in industrial and professional tools like those found in the Siemens Xcelerator suite, have long used other, non-generative methods to achieve their results.

Take, for example, the AI inferencing systems used in Simcenter's surrogate models. These are powerful models capable of accurately inferring the results of complex simulations thanks to machine learning (ML) trained on existing, data-rich simulations. Unlike Generative AI, which seeks to generate original and novel results based on, but distinct from, its training data, these surrogate models must maintain a strong grounding in the very real simulation data they are trained on in order to be useful.
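As a rough illustration of the surrogate-model idea only, and not of how Simcenter actually implements it, the sketch below trains a simple regression model on data from an "expensive" simulation (stood in for here by an analytic function) and then uses it to infer results for new design points almost instantly.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Stand-in for an expensive simulation; in practice each design point might
# take minutes or hours of CFD or FEA compute time.
def expensive_simulation(x):
    return np.sin(3 * x) + 0.5 * x**2

# Run the "simulation" at a handful of sampled design points.
X_train = np.linspace(0.0, 2.0, 15).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

# Fit a surrogate (here a Gaussian process regressor) to the simulation data.
surrogate = GaussianProcessRegressor().fit(X_train, y_train)

# The surrogate now infers results for unseen design points in a fraction of
# a second, while staying grounded in the simulation data it learned from.
X_new = np.array([[0.42], [1.37]])
print(surrogate.predict(X_new))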

Likewise, many other examples of industrial AI rely on inferencing from well-established data or patterns to arrive at a reliable, trustworthy result. In these cases, having an AI model generate new and original outputs could range from inconvenient to disastrous, and in either case would be wholly unsuitable for industrial applications. As such, it is important to recognize when Generative AI can provide a new avenue for advancement and when existing AI implementations already offer the best available solution.

A word on Generative Engineering

Generative Design and, in a broader scope, Generative Engineering are a collection of methods encompassing topology optimization, automation, simulation, and AI/ML technologies that enable intelligent design space exploration for requirement-driven, system-level digital engineering approaches. This collection of technologies, which can and often does include one or more AI or ML elements to understand and recommend optimal approaches and the best final designs, is distinct from simply applying Generative AI to an engineering problem.
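To make the idea of requirement-driven design space exploration concrete, here is a deliberately simplified, hypothetical sketch: the design variables, formulas, and limits are all made up for illustration, and real Generative Engineering tools orchestrate far richer simulations, optimizers, and ML recommenders than a simple grid sweep.

import itertools

def evaluate(width_mm, thickness_mm):
    # Toy stand-ins for simulation outputs: mass to minimize and stiffness
    # as the requirement to satisfy (both formulas are notional).
    mass = width_mm * thickness_mm * 0.01          # kg
    stiffness = width_mm * thickness_mm**2 * 0.5   # N/mm
    return mass, stiffness

STIFFNESS_REQUIREMENT = 400.0  # every candidate design must meet this target

widths = range(20, 61, 10)     # mm
thicknesses = range(2, 11, 2)  # mm

best = None
for w, t in itertools.product(widths, thicknesses):
    mass, stiffness = evaluate(w, t)
    if stiffness < STIFFNESS_REQUIREMENT:
        continue  # discard designs that violate the requirement
    if best is None or mass < best[0]:
        best = (mass, w, t)

print(f"Best feasible design: width={best[1]} mm, thickness={best[2]} mm, mass={best[0]:.2f} kg")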

None of the existing AI methods employed in Generative Engineering are generative in nature, instead offering inferences in a more traditional AI sense. However, that is not to say Generative AI couldn’t play a role in the Generative Engineering process as it matures and gains the ability to produce robust, reliable results suitable for the rigors of the engineering world.

The future of AI-generated content

As Generative AI takes the world by storm, many people and industries will see major changes as AI offers a new way to bridge the digital and physical worlds. As fast as the world of AI changes, it’s easy for terms, technologies, and advancements to become muddled with one another, resulting in confusion and challenges in adapting to the latest AI-powered tools. While AI generating everything from text and code to images might seem like technology taking over, in reality it is just another way in which AI is changing the way we interact with the world.


Siemens Digital Industries Software helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform. Siemens’ software and the comprehensive digital twin enable companies to optimize their design, engineering and manufacturing processes to turn today’s ideas into the sustainable products of the future. From chips to entire systems, from product to process, across all industries. Siemens Digital Industries Software – Accelerating transformation.

Spencer Acain


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/thought-leadership/2023/06/01/what-is-generative-ai/