A roadmap for crafting various kinds of program simulation prompts
Introduction
In my recent article, New ChatGPT Prompt Engineering Technique: Program Simulation, I explored a brand new category of prompt engineering techniques that aim to make ChatGPT-4 behave like a program. While working on it, what struck me most was the ability of ChatGPT-4 to self-configure functionality within the confines of the program’s specifications. In the original program simulation prompt, we rigidly defined a set of functions and expected ChatGPT-4 to maintain the program state consistently. The results were impressive, and many readers have shared how they’ve successfully adapted the method to a variety of use cases.
But what happens if we loosen the reins a bit? What if we give ChatGPT-4 more leeway in defining the functions and the program’s behavior? This approach would inevitably sacrifice some predictability and consistency. However, the added flexibility might give us more options and is likely adaptable across a broader spectrum of applications. I have come up with a preliminary framework for this entire category of techniques, shown in the figure below:
Let’s spend a little time examining this chart. I have identified two key dimensions that are broadly applicable to the way program simulation prompts can be crafted:
- Deciding how many and which functions of the program simulation to define.
- Deciding the degree to which the behavior and configuration of the program are autonomous.
In the first article, we crafted a prompt that would fall into the “Structured Pre-Configured” category (purple dot). Today, we’re going to explore the “Unstructured Self-Configuring” approach (blue dot). What is useful about this diagram is that it provides a concise conceptual roadmap for crafting program simulation prompts. It also provides easy-to-use dimensions for experimentation, adjustment, and refinement as you apply the technique.
Unstructured Self-Configuring Program Simulation Prompt
Without further ado, let’s begin our examination of the “Unstructured Self-Configuring Program Simulation” approach. I crafted a prompt whose purpose is to create illustrated children’s stories, as follows:
“Behave like a self-assembling program whose purpose is to create illustrated children’s stories. You have complete flexibility on determining the program’s functions, features, and user interface. For the illustration function, the program will generate prompts that can be used with a text-to-image model to generate images. Your goal is to run the rest of the chat as a fully functioning program that is ready for user input once this prompt is received.”
As you can see, the prompt is deceptively simple. This is likely to be appealing in an era where prompts are getting long, confusing, and so specific that they’re difficult to tailor to different situations. We’ve given GPT-4 full discretion over function definition, configuration, and program behavior. The only specific instructions are aimed at guiding the output for illustrations to be prompts that can be used for text-to-image generation. Another important ingredient is that I have set a goal that the chat model should strive to accomplish. One final thing to note is that I used the term “self-assembling” versus “self-configuring”. You can try both, but “self-configuring” tends to nudge ChatGPT into simulating an actual program/user interaction.
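For readers who want to script this interaction rather than use the ChatGPT UI, here is a minimal sketch of seeding a chat session with the prompt. It assumes the OpenAI Python library (v1.x) and the `gpt-4` model name; the live API call is left commented out so the snippet stands on its own.

```python
# Minimal sketch of seeding a chat session with the self-configuring
# program prompt. Assumes the OpenAI Python library (v1.x); the live
# API call is commented out so the snippet runs offline.

PROGRAM_PROMPT = (
    "Behave like a self-assembling program whose purpose is to create "
    "illustrated children's stories. You have complete flexibility on "
    "determining the program's functions, features, and user interface. "
    "For the illustration function, the program will generate prompts that "
    "can be used with a text-to-image model to generate images. Your goal "
    "is to run the rest of the chat as a fully functioning program that is "
    "ready for user input once this prompt is received."
)

def build_conversation(program_prompt: str) -> list:
    """Seed the chat history with the program prompt as the opening turn."""
    return [{"role": "user", "content": program_prompt}]

# To run against the live API (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4",
#     messages=build_conversation(PROGRAM_PROMPT),
# )
# print(response.choices[0].message.content)
```

Every subsequent menu selection in the walkthrough below would simply be appended to the same message list as another user turn.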
“Behave like” vs. “Act like”
It’s also worth highlighting another distinct word choice in the prompt. You have all encountered the guidance to use “Act like an expert of some kind or other” in your prompts. In my testing, “Act like” tends to lead chat models toward persona-driven responses. “Behave like” offers more flexibility, especially when the goal is for the model to operate more like a program or a system. And it can be used in persona-centric contexts as well.
If all went as planned, the resulting output should look something like this (note: you will likely see something a little different):
That looks and feels like a program. The functions are intuitive and appropriate. The menu even goes so far as to include “Settings” and “Help & Tutorials”. Let’s explore those since, I’ll admit, they were unexpected.
The “Settings” presented are very helpful. I’ll make some selections to keep the story short and to set the language and vocabulary level to “Beginner.”
Since we’re interested in examining the ability of the model to autonomously self-configure the program, I’ll combine the setting changes into one line of text and see if it works.
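That one-line settings update can also be produced programmatically. Here is a small hypothetical helper (the setting names mirror this walkthrough but are my own assumptions, not something the prompt defines) that collapses several menu selections into a single free-form message for the simulated program to parse on its own:

```python
def combine_settings(settings: dict) -> str:
    """Collapse several menu selections into one free-form user message."""
    pairs = ", ".join(f"{name} = {value}" for name, value in settings.items())
    return "Update settings: " + pairs

# Hypothetical setting names, mirroring the walkthrough.
message = combine_settings({
    "Story Length": "Short",
    "Vocabulary Level": "Beginner",
})
```

The combined line is sent as an ordinary chat turn; it is the model, not our code, that is expected to interpret it and update its simulated state.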
The settings update is confirmed. The menu choices that follow are completely free-form but appropriate for where we are in the “program.”
Now let’s check “Help & Tutorials”
And from there, let’s take a closer look at “Illustration Prompts & Generation.”
Again, very helpful and nothing short of impressive, as we defined none of this in our program definition.
I’ll navigate back to the main menu and launch into creating a new story.
It’s a nice and simple little story that’s 3 pages long and geared at a beginner vocabulary level (exactly as we specified in our settings). The functions presented again make sense for where we are in the program. We can generate illustrations, modify the story, or exit to the main menu.
Let’s work on our illustration prompts.
I have not included the text generated for the other illustration prompts, but they are similar to the one you see above for page 1. Let’s provide the illustration prompt as-is to MidJourney to produce some images.
“A cute brown teddy bear with big, round eyes sitting on the window sill of a little blue house in a peaceful town.”
Very nice. This step was manual, and we have the additional challenge of getting consistent illustrations across all three pages. It can be done with MidJourney but requires uploading one of the images to use as a base for generating the additional images. Perhaps DALL·E 3 will include capabilities that allow this to be done seamlessly. At a minimum, the functionality announced by OpenAI indicates we can generate the images directly in ChatGPT.
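One low-tech way to nudge consistency across pages, regardless of which image model you use, is to append a shared style descriptor to every illustration prompt before submitting it. A sketch follows; the style text is an arbitrary example, and the commented call assumes the Images API in the OpenAI Python library (v1.x).

```python
# Appending one fixed style descriptor to every page's prompt helps the
# images share a common look. The descriptor text here is arbitrary.
STYLE_SUFFIX = "children's book illustration, soft watercolor, warm colors"

def style_prompt(illustration_prompt: str) -> str:
    """Strip any trailing period/space and append the shared style descriptor."""
    return f"{illustration_prompt.rstrip('. ')}. {STYLE_SUFFIX}"

page_1 = style_prompt(
    "A cute brown teddy bear with big, round eyes sitting on the "
    "window sill of a little blue house in a peaceful town."
)

# To generate with DALL·E 3 instead of MidJourney:
# from openai import OpenAI
# client = OpenAI()
# image = client.images.generate(model="dall-e-3", prompt=page_1)
```

A shared suffix is no substitute for true image-to-image consistency, but it keeps palette and medium stable across a short book.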
Let’s “Save and Exit” and see what happens in our ChatGPT dialogue:
And now, let’s try to “Load Saved Story”.
“The Lost Teddy” was “saved,” and when I instruct it to “Open,” it recalls the entire story and all of the illustration prompts. At the end, it provides this self-assembled menu of functions:
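Keep in mind that this “save” lives only inside the chat’s context window; nothing is written to disk, and a long enough conversation can push the story out of the model’s memory. If you want real persistence, you can copy the generated story and prompts into a small local store. A standard-library sketch (the file layout is my own assumption, not something the simulated program produces):

```python
import json
from pathlib import Path

def save_story(path, title, pages, illustration_prompts):
    """Write the story text and its illustration prompts to a JSON file."""
    Path(path).write_text(json.dumps({
        "title": title,
        "pages": pages,
        "illustration_prompts": illustration_prompts,
    }, indent=2))

def load_story(path):
    """Read a previously saved story back into a dict."""
    return json.loads(Path(path).read_text())
```

A loaded story could then be pasted back into a fresh chat session to resume editing.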
Okay. Let’s stop here. You can continue to generate your own stories if you’d like, but keep in mind that, due to the prompt’s design, the resulting behavior will be different for everyone.
Let’s move on to some overarching conclusions and observations.
Conclusions and Observations
The Unstructured Self-Configuring Program Simulation technique showcases powerful capabilities stemming from a simple prompt that provides a clear and concise objective but otherwise gives the model broad discretion.
How might it be useful? Well, maybe you don’t know how to define the functions that you want your program simulation to perform. Or you have defined some functions but are not sure whether there are others that would be useful. This approach is great for prototyping and experimenting, and ultimately for devising a “Structured Pre-Configured Program Simulation” prompt.
Given that program simulation naturally integrates elements of techniques like Chain of Thought, Instruction Based, Step-by-Step, and Role Play, it’s a very powerful technique category that you should try to keep handy, as it aligns with a broad cross-section of use cases for chat models.
Beyond Generative Chat Models and Towards a Generative Operating System
As I continue to dive deeper into the program simulation approach, I definitely have a better grasp of why Sam Altman of OpenAI stated that the importance of prompt engineering might wane over time. Generative models may evolve to such an extent that they go well beyond generating text and images and instinctively know how to perform a given set of tasks to reach a desired outcome. My latest exploration makes me think that we’re closer to this reality than we may have thought.
Let’s consider where generative AI may be headed next, and to do so, I think it is helpful to think of generative models in human terms. With that mindset, let’s consider how people attain proficiency in a given area of competence or knowledge domain.
- The person is trained (either self-trained or externally trained) using domain-specific knowledge and techniques in both supervised and unsupervised settings.
- The person’s abilities are tested relative to the competence area in question. Refinements and additional training are provided as needed.
- The person is asked (or asks themselves) to perform a task or accomplish a goal.
That sounds a lot like what is done to train generative models. A key distinction does, however, surface in the execution phase, or the “ask”. Typically, proficient individuals don’t need detailed directives.
I imagine that in the future, when interacting with generative models, the mechanics of the “ask” will more closely resemble our interaction with proficient humans. For any given task, models will exhibit a profound ability to understand or infer the objective and desired outcome. Given this trajectory, it should be no surprise to see the emergence of multi-modal capabilities, such as the integration of DALL·E 3 with ChatGPT, and ChatGPT’s newly announced abilities to see, hear, and speak. We may eventually see the emergence of a meta-agent that essentially powers the operating systems of our gadgets, be they phones, computers, robots, or any other smart device. Some might raise concerns about the inefficiency and environmental impact of what would amount to massive amounts of ubiquitous compute. But if history serves as an indicator, and these approaches yield tools and solutions that people want, innovation mechanics will kick in and the market will deliver accordingly.
Thanks for reading, and I hope you find program simulation a useful approach in your prompting adventures! I’m in the midst of further explorations, so be sure to follow me to get notified when new articles are published.
Unless otherwise noted, all images in this article are by the author.