Jul 15 2023
What Generative AI Generates
Let us focus on what Generative AI does generate, speaking exclusively from personal experience. Generative AI is the generic term for systems like ChatGPT, Google’s Bard, DALL-E, or MidJourney, that generate text, images, or other media in response to prompts. All sorts of news sources report it as a revolutionary technology poised to dramatically change how we work and communicate.
Contents
- DALL-E
- Uses of ChatGPT
- Generative AI
- My Background with AI
- The McKinsey Report on Generative AI
- Overall Conclusion
- References
DALL-E
I have been using DALL-E 2 since it became available in September 2022 and found it a great help in generating evocative illustrations for a variety of documents. Featured images in blog posts are meant to catch the reader’s eye. For a post about analyzing variation with more advanced tools than the ancient histogram, DALL-E instantly provided a picture of a dinosaur giving a presentation about histograms:
For a discussion of using videos to analyze operations, it provided the following photorealistic picture of an operator’s hands on a workpiece:
About the effectiveness of COVID-19 vaccines, it came up with this:
DALL-E saved me time and produced better images than I would have by other means. It’s a useful tool for this purpose but you would not want to use it where you need actual photographs of real scenes or technical drawings.
The technology can, of course, be used for misinformation but photo fakery is as old as photography itself. DALL-E and MidJourney may make it easier but it’s not new and vigilance in the interpretation of photographic images has always been needed.
Uses of ChatGPT
DALL-E, however, was promptly eclipsed by ChatGPT, which has been hogging media attention for the past six months. ChatGPT generates various kinds of text, so I asked it to draft a description of real estate for sale and an article for publication, generate computer code, answer a question about an established scientific theory, and write a sonnet about usability engineering.
Writing a Property Description for a Real Estate Listing
As seen below, when you ask ChatGPT to describe a house for sale in a high-crime neighborhood, the first thing it does is reclassify the neighborhood as vibrant. Then it makes up an appealing description of this charming house, inventing all sorts of features about its rooms and its environment, supposedly “up and coming.”
It is reminiscent of the famous description of Uncle George, who died in the electric chair: “He held the chair of applied electricity at one of the largest government institutions. He was strongly attached to his position and his death came as a great shock.” To spin the truth about any property, ChatGPT is a match for almost any human real estate agent. It is not obvious at all, however, that it enhances any buyer’s experience.
Drafting a Paper for Publication
Assume you have to write a complex document for an audience that has special requirements as to its structure, and that such documents were part of the corpus used to train ChatGPT. Then asking ChatGPT to write such a document can save you time in generating your own.
Following is ChatGPT’s idea of a paper about manufacturing quality for the MIT Sloan Management Review (SMR):
Style
The ChatGPT response would be useful as a template if it were in the style expected by this publication, as opposed to, say, the Harvard Business Review or USA Today.
The title and abstract are the parts readers are most likely to see, and are essential to the publication and success of a paper. Let’s compare ChatGPT’s output with one actual paper from SMR’s Summer 2023 edition.
Title
The title is “Moving Beyond Islands of Experimentation to AI Everywhere.” Unlike the title from ChatGPT, it does not include a colon, and neither do the titles of any of the other articles in this issue or in any of the preceding three. The SMR style is titles that are thumbnail summaries of the content.
Colons in titles command a surprising amount of attention. While absent in SMR papers, they are in about half the senior theses of Princeton University undergrads, but with an imbalance across disciplines. Colons are in less than 20% of titles in math and physics, as well as in electrical, chemical, and biological engineering; they are in more than 80% of theses in social sciences. Some scientists treat a colon in the title of a paper as a tell-tale sign of weakness and don’t read further.
Abstract
Each issue of the SMR collects the abstracts of all articles into a guide, each with a one-sentence “Key Insight” and a paragraph of “Key Takeaways.” To attract readers, the abstract must stand out in this guide. And since it is detached from the paper, it must stand alone. The entry in the guide for the selected paper is as follows:
The article abstract from ChatGPT doesn’t do any of this. Its content is pure business drivel, starting with “In today’s highly competitive business landscape,…” Then, sentences that begin with “This paper explores…” announce the content rather than summarize it. They belong in the introduction, not in an abstract that is supposed to compel readers.
Content
Beyond the abstract, the ChatGPT output contains an outline that you could flesh out into an actual paper, and a conclusion paragraph. The outline is an incomplete list of approaches that does not add up to full coverage of the topic, which limits its value.
Conclusion
Altogether, a would-be author is better off studying the papers SMR actually publishes than using ChatGPT’s template. The challenge with manufacturing quality is that SMR has published nothing about it since 2001. This output of ChatGPT is mediocre, generic, and incomplete: in other words, useless.
Drafting computer code
Generative AI is supposed to radically change software development, replacing coders with “prompt engineers,” who describe in plain language the functions of a new program, and then let ChatGPT or Bard write the code. Let’s give it a try.
Generating a Simple Plot
This is the result of asking ChatGPT for code to generate a plot in R, using the ggplot2 package:
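The original post showed ChatGPT’s answer as a screenshot, which is not reproduced here. As a stand-in, the following is a minimal sketch of the kind of ggplot2 code it returned, with made-up data, not the verbatim transcript:

```r
# A minimal sketch of the kind of ggplot2 code ChatGPT returned;
# the data frame below is illustrative, not from the original prompt.
library(ggplot2)

df <- data.frame(x = 1:10, y = (1:10)^2)  # made-up sample data

ggplot(df, aes(x = x, y = y)) +
  geom_point() +                                   # plot the points
  geom_line() +                                    # connect them with a line
  labs(title = "A simple plot", x = "x", y = "y")  # label the plot
```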
ChatGPT’s code worked as written! Here is the plot it produced:
Generating a More Complex Plot
To test its capabilities further, I then asked ChatGPT to generate code to plot a torus in 3D using R. It produced code that didn’t work. So did Bard. Eventually, I found code that worked “the old-fashioned way,” by Googling for it:
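That screenshot is also not reproduced here. As a stand-in, here is a minimal sketch of one way to plot a torus in R from its parametric equations, assuming the plotly package; it is not necessarily the code found by Googling:

```r
# A sketch of plotting a torus from its parametric equations, assuming
# the plotly package; radii and grid resolution are illustrative choices.
library(plotly)

u <- seq(0, 2 * pi, length.out = 60)  # angle around the central axis
v <- seq(0, 2 * pi, length.out = 60)  # angle around the tube itself
R0 <- 3  # major radius: center of the hole to center of the tube
r0 <- 1  # minor radius: radius of the tube

# Evaluate the parametric equations on the u-v grid
x <- outer(u, v, function(u, v) (R0 + r0 * cos(v)) * cos(u))
y <- outer(u, v, function(u, v) (R0 + r0 * cos(v)) * sin(u))
z <- outer(u, v, function(u, v) r0 * sin(v))

plot_ly(x = x, y = y, z = z, type = "surface")
```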
Conclusion
So yes, ChatGPT or Bard can produce code, but with no guarantee that it works, and it is not obvious that defective code helps developers: writing code from scratch in a high-level language is often easier and faster than debugging code written by someone else.
The search for solutions to coding problems on the web often leads to StackOverflow, where you can ask for help or review answers already provided to similar questions. The answers are from humans who derive no economic benefit from sharing them. There is no guarantee that their code works but it astonishingly often does.
One of the attractions of R when working with data is that, regardless of what you are trying to do, there usually is a package for it, either in CRAN or on GitHub, contributed by another human, with no hope of reward.
The AI of the 1980s and 90s struggled with the “knowledge acquisition bottleneck.” “Knowledge-based” systems couldn’t work because humans couldn’t be persuaded, coerced, or bribed into contributing their expertise to the systems. Meanwhile, the same individuals were sharing it willingly in forums and discussion groups.
The only prompt they needed was a request from another human. In Life on Earth, David Attenborough called humans “compulsive communicators.” Ask total strangers for online help on a coding problem, and answers gush forth.
A more effective use of AI than producing defective code would be to validate code humans generate and donate for others to use.
Asking ChatGPT About Science
Young’s modulus is a concept dating back to 1807 and part of the physical theory of elasticity. It’s known to professionals involved with the strength of materials but not outside of this group. It’s non-controversial, well-established science, and therefore a topic that ChatGPT should be able to explain. What follows is its response to the prompt “Please explain Young’s Modulus”:
Alternatively, you could search Wikipedia for Young’s Modulus, and get a richer answer, with charts like the following one explaining, for example, that it only describes the early, linear part of the stress-versus-strain curve:
It also gives you many more details like a table of values for different materials and a long list of references to dig deeper.
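For reference, the relationship the chart illustrates is standard physics. In the linear, elastic region, stress is proportional to strain, and Young’s modulus $E$ is the constant of proportionality:

$$E = \frac{\sigma}{\varepsilon}$$

where $\sigma$ is the stress (force per unit of cross-sectional area) and $\varepsilon$ the strain (relative elongation, dimensionless). Beyond that region, the material yields and the ratio no longer applies.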
In other words, if you really want to learn about Young’s Modulus, don’t ask ChatGPT. Look it up instead on Wikipedia. It’s an encyclopedia. It doesn’t dialogue with you. Each article is written by an individual, and edited by readers if they disagree. It works well on scientific and technical subjects, particularly when they have been settled for 200 years. It doesn’t work nearly as well on controversial subjects in politics or society.
Writing Poetry with ChatGPT
Before writing existed, human cultures developed poetry as a means of transmitting information, with rules for meter and rhyme making long texts easier to remember and recitation mistakes obvious, especially when accompanied with music. It was an error-correction code. The stories were in verse for oral transmission between generations with minimal distortion.
With the diffusion of writing, versification became unnecessary for communication but poetic writing endures today in advertising slogans, political speeches, and song lyrics. It is seldom found in work instructions or any form of professional communication.
A Sonnet on Usability Engineering from ChatGPT
Writing verse is one of the celebrated capabilities of ChatGPT, so I asked it for a sonnet on usability engineering, or the art of designing human interfaces to devices, appliances, or machines that are easy to learn and with which you are unlikely to make mistakes. As can be expected from ChatGPT, it delivered a formally perfect sonnet:
It’s not bad for a program, but the ChatGPT sonnet gives no information on usability engineering. It talks of “wizards,” who are “adept and wise,” but reads as if written by someone who doesn’t know what they are talking about.
A Sonnet on Usability Engineering by a Human
A couple of years ago, I tried my hand at such a sonnet, with the goal of conveying in a mnemonic way concepts like affordance, natural mappings, or cultural constraints, as I learned them from Don Norman and Asaf Degani. I came to describe the subject, not to praise engineers:
Two knobs, alike in shapes, distinct in roles,
Confuse the work, causing mistakes and misses.
Let knobs and levers vary for clear controls.
Do not just label! Ban the rows of switches!
Your fingers then tell you which one you touch.
Meanwhile, your eyes may roam elsewhere on task,
In knowledge that they won’t miss much.
They see the road, machine, or line, and bask.
Pull hook; turn knob; flick switch; snap clasp; press key.
To find controls, map them to functions straight.
Use red for stop and green for go, and QWERTY.
You do it often? Make it easy. Don’t wait.
With training fast and easy, the errors are few.
It’s usability engineering, and it’s for you.
Conclusion
As we have just seen, ChatGPT is no good at producing poetry for professional communication.
ChatGPT as a Source of Inaccurate Information
Four months ago, I posted an account of wrong things ChatGPT says about me. I asked again yesterday, and also tried Bard, and the answers are still wrong. This software’s problem with truth has also had more serious consequences, as for the lawyers who relied on it to produce a brief that cited five nonexistent cases and now face legal trouble.
The lawyers’ case is an anecdote and a morality tale; other lawyers, going forward, are unlikely to make the same mistake. The MIT Technology Review story about junk websites filled with AI-generated text pulling in money from programmatic ads describes the same problem on a larger scale.
McKinsey, Generative AI, and Truth
In a recent report from McKinsey hailing Generative AI’s future contribution to the world economy, the word “truth” appears only once, toward the end of the report, on p. 50:
“They [Generative AI tools] are capable of that most human of abilities, language, which is a fundamental requirement of most work activities linked to expertise and knowledge as well as a skill that can be used to hurt feelings, create misunderstandings, obscure truth, and incite violence and even wars.”
This report downplays generative AI’s inability to provide accurate information.
The Truthiness of Generative AI
The output of ChatGPT has what Stephen Colbert called “truthiness”: it has the look and feel of truth, but only that. The judge on the receiving end of the ChatGPT brief, after reading one of the lawyers’ fake cases, described its content as “gibberish,… bordering on nonsensical.” In other words, today’s Generative AI is only good at producing documents for which truth is not a concern.
Generative AI
With text, today’s ChatGPT and Bard are Swiss army knives: they do many things but none of them well, and it’s worth asking why. The key question is whether their low output quality is fixable. Is it currently underwhelming because the tools are immature or because there is a fundamental flaw in what they are attempting?
The Secret Sauce
The secret sauce in ChatGPT is the T, which stands for “Transformer.” Chat is for interaction through a chat box; G is for “Generative,” and describes the kind of output it produces; P is for “Pre-Trained,” meaning that it has processed a corpus of data into a Large Language Model (LLM) before you prompt it. The G and P are easy to understand; the T is not.
The Transformer is the algorithm that turns your prompt into an answer from ChatGPT, and it is anything but self-explanatory. It clearly involves randomness, because, if it didn’t, the same prompt would always trigger the same response.
Per Wikipedia, a Transformer is a Deep Learning algorithm that relies on Attention. “Deep” here refers to having more than one level of “Neural Networks,” with each layer taking data from the layer below and feeding data to the layer above, while Attention is a weighting technique to emphasize parts of the input data and de-emphasize others. Transformers were first used in the automatic translation of natural languages in the 2010s.
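For the technically curious, the Attention weighting can be sketched in a few lines of R. This toy example, with made-up matrices, shows the scaled dot-product form used in Transformers, not anything from ChatGPT itself:

```r
# Toy sketch of scaled dot-product attention, the weighting technique
# described above; all matrices are made up for illustration.
softmax <- function(x) exp(x) / sum(exp(x))

set.seed(1)
Q <- matrix(rnorm(12), nrow = 3)  # "queries": one row per token
K <- matrix(rnorm(12), nrow = 3)  # "keys"
V <- matrix(rnorm(12), nrow = 3)  # "values"

scores  <- Q %*% t(K) / sqrt(ncol(K))    # how strongly each token attends to each other token
weights <- t(apply(scores, 1, softmax))  # each row sums to 1: the attention weights
output  <- weights %*% V                 # weighted mix of the values
output
```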
Irreproducible Answers
Give ChatGPT the same prompt over and over again, and it produces different answers. This was first brought to my attention when my son asked ChatGPT about me. It provided different wrong details every time.
If a class of students prompted ChatGPT with the same essay question, they could all turn in different essays. When DALL-E produces different pictures for the same prompt, it’s a feature, because it gives you options to choose from. With technical or management documents, it’s not.
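The variability is a direct consequence of sampling. A language model assigns probabilities to candidate next tokens and draws from them rather than always picking the most likely one, as in this toy R illustration with made-up probabilities:

```r
# Toy illustration of sampling the next token; the probabilities
# are made up and far simpler than a real language model's.
next_token_probs <- c(the = 0.5, a = 0.3, one = 0.2)
sample(names(next_token_probs), size = 1, prob = next_token_probs)
# Re-running this line can return "the", "a", or "one":
# same prompt, different answers.
```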
ChatGPT and Meaning
ChatGPT is like a human being living in sensory deprivation except for the ability to see symbols within a large set of documents. All it knows about words is that they are sequences of symbols separated by spaces, also known as “tokens.” It notices that the same tokens appear in multiple places, surrounded by the same other tokens. It recognizes patterns and is able to reproduce them.
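A toy R example makes the point. Splitting a sentence into tokens and counting them is, in caricature, all the raw material the model has; the meaning of “cat” or “mat” is nowhere in it:

```r
# Tokens as symbol sequences with no attached meaning; real LLM
# tokenizers split on subwords rather than spaces, but the principle holds.
tokens <- strsplit("the cat sat on the mat", " ")[[1]]
table(tokens)  # "the" occurs twice; such co-occurrence is all the model sees
```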
ChatGPT has no notion of meaning. It does not map the sequence “bread” to something you eat or the sequence “logarithm” to a mathematical function. As a result, it conjures up perfectly formatted inaccurate data. This is how it produces wrong biographical details about you and me or non-existent legal cases.
It’s usable to produce formally correct documents that can pass as written by people. ChatGPT produces complete, grammatically correct sentences. Its output is persuasive until you check the details, which often turn out to be made up.
Pragmatics and Learning Machines
ChatGPT has no reality check, but that doesn’t mean a computer system can’t have one. There are words, rules for combining words into statements as in a game, and mappings from words to reality, also known as pragmatics. ChatGPT is great with words and statements but has no pragmatics.
Whenever we use a control system for a physical device, like a car or a kitchen stove, we are interacting with a computer system that is connected to reality through sensors. My oven control system, however, does not change its behavior based on my roasting history. In other words, it’s not learning. The streaming services I use on TV, on the other hand, tailor their recommendations to me based on my viewing history, which means that they are learning.
In “Pre-Training,” ChatGPT absorbs a corpus of 45 GB of internet flotsam from September 2021 and earlier. It doesn’t automatically learn from user interactions; instead, these interactions are curated by humans before they are allowed to influence ChatGPT.
AI Doesn’t Need to Emulate the Human Brain
If you prompt DALL-E 2 with “Generative AI,” it produces a picture like this one, which reinforces the idea that AI actually emulates human brains:
There is literature on mathematical models of how the brain works, from a 1943 paper on A Logical Calculus of Ideas Immanent in Nervous Activity now perceived as having inspired neural networks to a 2022 book on Active Inference focused on cognitive behavior.
It’s math, and the authors give no evidence that the models match the way actual neurons work or humans behave. We should not assume they do. Scientists can ponder whether they do, but it is irrelevant if, following Michael Richter, we are just looking for new ways to generate programs, and they provide such new ways.
The real question is whether these models help us develop faster software that is more powerful and more useful, regardless of whether they represent the way our brains really work. The science of the brain is a separate topic, and its application results in, for example, implantable brain-computer interface devices that help paralyzed humans recover control of limbs.
Serving versus Impersonating
The test defined by Alan Turing in 1950 is users’ ability to detect whether they are interacting with another human or with a machine. If the user cannot tell the difference, then, in Turing’s view, the machine has artificial intelligence. It should be noted that it is a test of the ability to deceive. Truth is irrelevant. It doesn’t matter whether the information provided by the machine is false. All that matters is that the recipient cannot tell whether it comes from a human or a machine.
ChatGPT passes the Turing test, which means that it is time to move beyond it. We want machines to do useful, complicated things for us, but we don’t actually want them to impersonate us. When we interact with a machine, we want it to be clear and obvious that it is not a human.
Parallel parking is not a skill that defines me as a human. On a busy street, in a tight spot, and with a rental car of unfamiliar dimensions, it is nerve-wracking and I welcome a car control system smart enough to do it for me.
On the other hand, I write to communicate my ideas to others. When I read, I enjoy the voice of an individual human author more than a committee’s product. The last thing I need is for a machine to regurgitate the conventional wisdom of the internet.
Science Fiction on Machines Impersonating Humans
Science fiction authors have taken a dim view of machines impersonating people for 100 years:
- Karel Čapek coined the word “robot” in his play Rossum’s Universal Robots in 1920.
- In 1968’s 2001: A Space Odyssey, HAL, the smart spaceship control system, turned on the crew.
- In 1979’s Alien, the “science officer” on the spaceship was a villainous robot.
- 1982’s Blade Runner was sympathetic to the “replicants,” but still made them tragic figures.
- In 1984, we met the Terminator cyborg.
- The hero of 1987’s Robocop is a dead human cop resurrected by robotic prostheses, pitted against clumsy, murderous robots.
- Small Soldiers (1998) features toy action figures with military chips that wage war on the humans around them.
- Bicentennial Man (1999) shows a robot’s 200-year struggle to become human.
- Steven Spielberg’s 2001 A.I. features a robotic 10-year-old boy capable of love.
- In 2002, S1m0ne shows a movie director using a computer-generated actress.
- In 2004, I, Robot shows domestic help robots going rogue.
- In WALL-E (2008), the ship’s autopilot turns against the human captain.
- In 2013, Her shows the ravages of a computer operating system manipulating human emotions.
- In 2015, Chappie is an update on the theme of Robocop.
- The cutesy droids of Star Wars, C-3PO and R2-D2, are non-threatening because no one would mistake them for humans.
Deception
Other than to deceive, we don’t need machines to emulate humans. If you point out an error to ChatGPT, it apologizes, which raises the question of what it means for a chatbot to apologize. This is in the tradition of HAL in 2001: A Space Odyssey (1968), saying “I’m sorry, Dave. I’m afraid I can’t do that,” while trying to kill Dave.
Some humans constantly apologize, prefacing requests with “I’m sorry but…”, which they intend as just a means of defusing the irritation they expect to cause with what follows the “but.” It can also be a major act of contrition, as when former Alabama governor George Wallace publicly apologized in 1982 for the suffering his policies had caused black people. They accepted his apology and voted him back into office for a 4th term.
This kind of apology is accepted when its audience believes it to be sincere. Can a chatbot be sincere? Does the question make any sense? It is simply a programmed attempt at the emotional manipulation of the user.
Children have no problem bonding emotionally with inanimate objects, like Teddy bears or Barbie dolls, and respond well to toys like Furbies or Tamagotchis. Adults, on the other hand, cringe at emotional manipulation by any kind of machine. For example, we want AI to produce systems that help children learn how to read, but we want their function to be nothing more. We don’t want systems that create bonds with children and come between them and their parents.
Except for the works of a few auteurs with recognizable styles, the screenplays of today’s Hollywood movies seem machine-written. It’s not an improvement over human screenwriters and makes the spectator feel cheated.
Conclusion
A chatbot that can have a conversation with a human user that passes the Turing test is a technological tour de force that seemed out of reach until ChatGPT came along. It doesn’t mean that it makes the world a better place. We all want systems that make our lives easier, but we don’t actually want them to pass the Turing test.
My Background with AI
My son describes my recent comments on AI as “boomery.” While my age makes me a boomer, it doesn’t make me a Luddite. I keep learning, particularly in data science, sometimes techniques that are useful in my work, but also sometimes out of curiosity.
I never committed to AI as a career choice, but my long-term exposure gives me some perspective on its technological silver bullets.
AI and Hype
The AI community has been engaged in hype for at least four decades. Names like “Expert Systems,” “Case-Based Reasoning,” “Genetic Algorithms,” “Neural Networks,” or “Deep Learning” are all exaggerations. They capture the public’s imagination until software users realize the systems don’t do what their names suggest. It doesn’t mean they are useless but it means that you should take claims with a grain of salt. The name “Neural Networks,” for example, misleadingly suggests an emulation of the way actual neurons work in a human brain.
Earliest Exposure to Intelligent Systems
In 1979, Fernand Meyer had developed a method to diagnose cervical cancer cells under a microscope by automatically detecting contrast anomalies in the chromatin of cells. His system was more accurate than human technicians but slower, perhaps because it was 1979 technology. Meyer’s top-hat transform was based on Georges Matheron’s theory of mathematical morphology, and no one called it Artificial Intelligence.
Yet it was smarter than anything to come out of AI in the following 10 years. By the late 1980s, suppliers of scheduling systems that sequenced work with rules only slightly more sophisticated than earliest due date or critical ratio were calling it “AI-based scheduling.”
44 years after Meyer successfully automated the diagnosis of cervical cancer, we might expect the practice to have expanded to many other types of cancer and to be much faster than the manual process but it didn’t happen. In the US in 2023, human technicians are still at it, and AI researchers are trying to re-solve the problem with Deep Learning. Fernand Meyer’s better mousetrap from 1979 is not even referenced in a 2022 paper entitled Cancer Diagnosis of Microscopic Biopsy Images Using a Social Spider Optimisation-Tuned Neural Network.
While not making headlines like AI, mathematical morphology is still around, and Jon Clayden just posted the R package mmand on CRAN, to allow anyone to analyze multidimensional images with its tools.
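As a taste of what mathematical morphology does, here is a minimal sketch using mmand’s basic operators on a made-up binary image; it assumes the package’s shapeKernel, erode, dilate, and opening functions:

```r
# Minimal sketch of mathematical morphology with the mmand package;
# the 9x9 binary "image" is made up for illustration.
library(mmand)

img <- matrix(0, 9, 9)
img[3:7, 3:7] <- 1                       # a small square object
k <- shapeKernel(c(3, 3), type = "box")  # 3x3 structuring element

eroded  <- erode(img, k)    # shrinks the object by one pixel all around
dilated <- dilate(img, k)   # grows it by one pixel all around
opened  <- opening(img, k)  # erosion then dilation: removes small specks
```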
Palo Alto in the 1980s
In the early 1980s, as “member of research staff” at Fairchild’s lab in Palo Alto, I was developing a Manufacturing Execution System (MES) for semiconductors. Across the hall from us was the Fairchild Laboratory for Artificial Intelligence Research (FLAIR), which gave me the opportunity to interact with leading lights of AI at the time, including Peter Hart, who ran the lab, Dick Duda, with whom I co-wrote a paper, and Marty Tenenbaum, from whom I first learned about the World Wide Web. Marty later pivoted to e-commerce. FLAIR became Schlumberger Palo Alto Research (SPAR) in 1985 and shut down in 1989. Since 2014, Schlumberger, now SLB, again has a Software Technology Innovation Center in Menlo Park, CA, that is involved in AI.
At FLAIR, I remember running a test with the late AI researcher Slava Prazdny, on coding a scheduling algorithm. He did it in LISP in 25 minutes in front of me. It took me 2 hours and 50 minutes to replicate his work in the language we were using for commercial software development. So I learned LISP. My paper with Dick Duda was about a technique then called Scale-Space Filtering, which lives on today in Kernel Density Estimation (KDE). To date, our paper has had 1,236 citations on Google Scholar.
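For readers unfamiliar with KDE, it is now a one-liner in base R. This minimal sketch, on made-up bimodal data, smooths a sample into a density curve, with the bandwidth playing the role of the scale:

```r
# Minimal sketch of Kernel Density Estimation in base R;
# the bimodal sample below is made up for illustration.
set.seed(7)
x <- c(rnorm(200, mean = 0), rnorm(100, mean = 4))

plot(density(x, bw = 0.5),  # bandwidth sets the smoothing scale
     main = "Kernel density estimate")
```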
Europe in the 1990s
A few years later, entrepreneur Michel Manago asked me to coordinate an AI research project named INRECA, funded by the European Union. His company was the prime contractor, and the project involved participants from Germany and Ireland. It was an opportunity to work not only with Manago but also with German entrepreneur Stefan Wess and Prof. Michael Richter, who led the AI department at the University of Kaiserslautern, founded the German Research Center for AI (DFKI), and thought AI was just “new ways of writing computer programs.”
Further Exposure
To top it off, my wife Catherine Baudin has spent her entire career as a researcher in AI, at places like Stanford’s Knowledge Systems lab, NASA, and eBay Research. At NASA, she led the development of a knowledge-based documentation system for engineers and scientists called DEDAL. At eBay, she applied text mining and machine learning to identify bargains and fraudulent listings.
Assessment of the 1980s Wave
All this is to say that, if I didn’t work in AI, it wasn’t for lack of exposure. The first wave of AI didn’t collapse in the 1990s for lack of valuable technology but because that technology had been oversold. I never thought that it added up to anything that could legitimately be called “artificial intelligence.” “Expert Systems” did not work the way human experts do, and “Case-Based Reasoning” only retrieved cases by similarity and did not involve any reasoning. The technology could be useful but didn’t do what the names suggested.
Michael Richter’s assessment was correct but it wasn’t what investors, large company executives, or the public wanted to hear. Finding new ways to develop software was incremental, and they wanted a revolution. This lasted until the chasm between hype and reality made the bubble burst and started a decades-long “winter of AI,” during which AI was a dirty word.
The Winter of AI
During that time, incremental improvements continued, enabled in part by increases in the power of computing hardware. The victories of IBM’s Deep Blue chess player over world champion Garry Kasparov in 1997 and of Google’s AlphaGo against go champion Lee Sedol in 2016 contributed to the rehabilitation of AI. So did progress in fields like automatic translation, character recognition, or autonomous vehicles.
The Resurgent AI
The resurgent AI of the late 2010s, however, is just as addicted to hype as its ancestor of the 1980s. Technically, it focuses on applications of Deep Learning, which doesn’t mean deep learning as we otherwise know it. AI’s “Deep Learning” isn’t about studying, observing, and experimenting to master a particular subject.
Strip away the attractive metaphors and Deep Learning boils down to multiplying large, sparse matrices, and applying gradient descent to tweak their coefficients with the goal of minimizing a function of errors in a training dataset.
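Stripped of the neural vocabulary, the mechanics fit in a few lines of R. This toy example fits a straight line by gradient descent on made-up data, the same loop of computing errors and tweaking coefficients that Deep Learning runs at enormous scale:

```r
# Toy gradient descent: fit y = a*x + b by repeatedly nudging the
# coefficients to reduce the mean squared error on made-up data.
set.seed(42)
x <- runif(100)
y <- 2 * x + 1 + rnorm(100, sd = 0.1)  # true coefficients: a = 2, b = 1

a <- 0; b <- 0; lr <- 0.1  # starting guesses and learning rate
for (i in 1:2000) {
  err <- (a * x + b) - y
  a <- a - lr * mean(2 * err * x)  # gradient of the error w.r.t. a
  b <- b - lr * mean(2 * err)      # gradient of the error w.r.t. b
}
c(a = a, b = b)  # approaches the true values 2 and 1
```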
Stefan Wess and I have stayed in touch. A Ph.D. candidate in the days of INRECA, he went on to become CEO of Empolis, a leading supplier of AI technology in Germany. Their latest newsletter opens with: “The hype around Artificial Intelligence and especially Generative AI is omnipresent. Most recently, since ChatGPT, fantasy knows no bounds.”
The McKinsey Report on Generative AI
The front page of a recent report from McKinsey lists eight authors, and the report will no doubt be frequently cited and used by entrepreneurs in pitches to investors.
It claims that the 63 use cases it analyzes could add $2.6 trillion to $4.4 trillion annually to the world economy. Where do these numbers come from?
Starting from a classification by the US Bureau of Labor Statistics of 850 occupations and the 2,100 activities these occupations entail, the authors surveyed experts in automation and in generative AI. Then they built scenarios based on the experts’ estimation of technology adoption timelines, looking as far as 70 years ahead. Finally, they used these scenarios to generate specific numbers for specific technologies.
The first concern with this approach is that the main benefits of a technology do not come from the automation of existing practices. Replacing horse-drawn carriages with horseless ones was only the beginning: a technology’s payoff is instead in enabling activities that were impossible or unthinkable before it became available.
The second concern is the reliance on subjective assessments from experts who were willing to talk to the authors and make predictions about technology decades into the future. Perhaps the best experts in automation and AI were unwilling to make such predictions.
Overall Conclusion
DALL-E and MidJourney are useful. This technology has issues with potentially nefarious uses and with intellectual property, but its impact is globally positive. ChatGPT can produce the kind of routine advertising copy that is fodder for junk mail and spam, but is incapable of producing usable documents where substance and truth matter. As many early adopters are discovering, trying to use it for such purposes is a waste of time and dangerous for their reputation.
References
- Chui, M., Hazan, E., Roberts, R., Singla, A., Smaje, K., Sukharevsky, A., Yee, L., & Zemmel, R. (2023). The economic potential of generative AI. McKinsey & Company.
- Generative AI: Zwischen Hype und Realität [Generative AI: Between Hype and Reality]. Empolis Newsletter, 6/30/2023.
- Ryan-Mosley, T. (2023). Junk websites filled with AI-generated text are pulling in money from programmatic ads. MIT Technology Review.
- Friston, K. J., Parr, T., & Pezzulo, G. (2022). Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. MIT Press.
- Balaji, P., & Chidambaram, K. (2022). Cancer Diagnosis of Microscopic Biopsy Images Using a Social Spider Optimisation-Tuned Neural Network. Diagnostics, 12(1).
- Norman, D. (2013). The Design of Everyday Things: Revised and Expanded Edition. Basic Books.
- Lewison, G., & Hartley, J. (2005). What’s in a title? Number of words and the presence of colons. Scientometrics, 63, 341-356. doi:10.1007/s11192-005-0216-0
- Degani, A. (2004). Taming HAL: Designing Interfaces Beyond 2001. Palgrave Macmillan.
- Meyer, F. (1979). Iterative image transformations for an automatic screening of cervical smears. Journal of Histochemistry & Cytochemistry, 27(1), 128-135. doi:10.1177/27.1.438499
- McCulloch, W. S., & Pitts, W. (1943). A Logical Calculus of Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, 5(4), 115-133. doi:10.1007/BF02478259
#chatgpt, #dall-e, #bard, #midjourney, #generativeai, #ai, #deeplearning
Becky Morgan
July 15, 2023 @ 9:19 am
I have believed there are several Generative AI ‘methodologies’ with very different purposes. For example, pharmaceutical and chemical development, product development/design. Admittedly I skimmed, but I did skim this entire article. It seems focused only on uses of ChatGPT and not other generative AI purposes/processes. Am I wrong?
Michel Baudin
July 15, 2023 @ 9:32 am
I also tried DALL-E and, in one case, Bard, but most examples are from ChatGPT, which has been hogging attention.
Lonnie Wilson
July 15, 2023 @ 9:48 am
Michel,
Enlightening as usual. I belong to a Deming Study Group and in one session we used ChatGPT to research Dr. Deming’s philosophy. This group was formed over 40 years ago and was routinely attended by Dr. Deming, as they would meet where he spoke. There are some prestigious Deming disciples who studied and worked firsthand with Dr. Deming in this group and hence have a deep and abiding understanding of his messages. Regarding Dr. Deming and his teachings, we concluded, as you did, that:
“ChatGPT can produce the kind of routine advertising copy that is fodder for junk mail and spam, but is incapable of producing usable documents where substance and truth matter. ”
Be well