Turn GPT-4 into a Poker Coach
Organising some context
Initial Exploration
Transforming the concept into a concrete application

Unleashing Creativity Beyond Chatbot Boundaries

Photo by Michał Parzuchowski on Unsplash

In this article, we won’t discuss how LLMs can pass a law exam or replace a developer.

Nor will we look at tips for optimizing prompts to make GPT write motivation letters or marketing content.

Like many people, I believe that the emergence of LLMs like GPT-4 is a small revolution from which many new applications will emerge. I also think that we should not reduce their use to simple “chatbot assistants”: with the right backend and UX, these models can be leveraged into incredible next-level applications.

That is why, in this article, we are going to think a bit outside the box and create a real application around the GPT API, one that could not be accessed simply via the chatbot interface, and see how a proper app design can deliver a better user experience.

Leveraging GPT-4 in businesses

I have played a lot with GPT-4 since its release, and I think there are broadly two main families of use cases for building a business around the model.

The first way is to use GPT-4 to generate static content. Say you want to write a cooking book with a particular theme (for example, Italian food). You can write detailed prompts, generate a few recipes with GPT, try them yourself, and include the ones you like in your book. In this case, “prompting” has a fixed cost, and once the recipes are generated you don’t need GPT anymore. This kind of use case has many variations (marketing content, website content, or even generating datasets for other uses), but it is not as interesting if we want to focus on AI-oriented apps.

The logic of generating the content is outside the application (illustration by the author)

The second use case is live prompting through an interface of your own design. Going back to the cooking example: we could imagine a well-designed interface in which a user picks a few ingredients and a specialty, and asks the application to generate a recipe directly. Unlike the first case, the generated content is potentially infinite and can better fit the needs of your users.

In this scenario, the user interacts directly with the LLM via a well-designed UX that generates prompts and content (illustration by the author)

The drawback is that the number of calls to the LLM is potentially infinite and grows with the number of users, unlike before, where the number of calls to the LLM was finite and controlled. This means that you will have to design your business model properly and take great care to include the cost of prompts in it.

As I write these lines, a GPT-4 “prompt” costs $0.03 per 1,000 tokens (with both request and answer tokens counted in the pricing). That doesn’t seem like much, but it can escalate quickly if you don’t pay attention to it. To work around this, you can, for example, offer your users a subscription based on the number of prompts, or limit the number of prompts per user (via a login system, etc.). We will talk in more detail about pricing later in this article.
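To make the point concrete, here is a minimal back-of-the-envelope cost estimator. The rate is the one quoted above; the function name and the usage figures are illustrative assumptions, not data from a real deployment.

```python
# Assumed rate from the article: $0.03 per 1,000 tokens, request and
# answer tokens counted together. Check OpenAI's pricing page for
# current values.
PRICE_PER_1K_TOKENS = 0.03

def estimate_monthly_cost(users: int,
                          prompts_per_user: int,
                          avg_tokens_per_prompt: int) -> float:
    """Return an estimated monthly API bill in dollars."""
    total_tokens = users * prompts_per_user * avg_tokens_per_prompt
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# e.g. 1,000 users, 30 prompts each, ~1,500 tokens per exchange
monthly = estimate_monthly_cost(1000, 30, 1500)
```

With these hypothetical numbers the bill already reaches four figures per month, which is why the pricing model has to be designed up front.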

Why a use case around Poker?

I thought for a while about the right use case to try around LLMs.

First, poker analysis is, in theory, a field in which an LLM should perform well. Indeed, every poker hand played can be translated into a standardized, simple text describing the evolution of the hand. For example, the hand below describes a sequence in which “player1” wins the pot after raising over the bet of “player2” after the “flop”.

Seat 2: player1 (€5.17 in chips)
Seat 3: player3 (€5 in chips)
Seat 4: player2 (€5 in chips)
player1: posts small blind €0.02
player2: posts big blind €0.05
*** HOLE CARDS ***
Dealt to player2 [4s 4c]
player2: raises €0.10 to €0.15
player1: calls €0.13
player3: folds
*** FLOP *** [Th 7h Td]
player1: checks
player2: bets €0.20
player1: raises €0.30 to €0.50
player2: folds
Uncalled bet (€0.30) returned to player1
player1 collected €0.71 from pot

This standardization is significant because it will make development more straightforward. We will be able to simulate hands, translate them into this kind of prompt message, and “force” the answer of the LLM to continue the sequence.
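The simulate-then-translate step can be sketched as follows. This is a hypothetical helper, not the article's actual engine: the seat/action representation and the function name are illustrative assumptions.

```python
# Hypothetical sketch: render a simulated hand as standardized hand-history
# text. Truncating the action list lets us hand the LLM a partial sequence
# and ask it to continue from there.
def hand_to_prompt(seats, actions, cut=None):
    """seats: list of (seat_number, name, stack) tuples.
    actions: list of pre-formatted action lines.
    cut: if given, keep only the first `cut` actions (partial hand)."""
    lines = [f"Seat {n}: {name} ({stack} in chips)" for n, name, stack in seats]
    lines += actions if cut is None else actions[:cut]
    return "\n".join(lines)

seats = [(2, "player1", "€5.17"), (3, "player3", "€5")]
actions = ["player1: posts small blind €0.02",
           "player2: posts big blind €0.05"]
prompt = hand_to_prompt(seats, actions, cut=1)
```

Here `prompt` stops after the small blind, so the model's completion naturally continues the hand in the same notation.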

A lot of theoretical content is available in books, online, etc., making it likely that GPT has “learned” things about the game and good moves.

Also, a lot of the added value will come from the app engine and the UX, and not only from the LLM itself (for example, we will have to design our own poker engine to simulate a game), which will make the application harder to copy, or to simply “reproduce” via ChatGPT.

Finally, the use case fits well with the second scenario described above, where the LLM and UX can bring a truly new experience to users. We could imagine our application playing hands against a real user, analyzing hands, and also giving ratings and areas for improvement. The price per request should not be a problem, as poker learners are used to paying for this kind of service, so a “pay as you use” model might be possible in this particular use case (unlike the recipe concept app mentioned earlier, for example).
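A prompt limit per user, mentioned above as one way to keep costs bounded, could look like this minimal sketch. The class and its interface are assumptions for illustration; a real app would persist counters in a database and reset them per billing period.

```python
# Hypothetical per-user prompt quota: each call to the LLM must first pass
# through allow(), which refuses once the user's limit is reached.
from collections import defaultdict

class PromptQuota:
    def __init__(self, limit_per_user: int):
        self.limit = limit_per_user
        self.used = defaultdict(int)  # user_id -> prompts consumed

    def allow(self, user_id: str) -> bool:
        """Consume one prompt for user_id; False if the quota is spent."""
        if self.used[user_id] >= self.limit:
            return False
        self.used[user_id] += 1
        return True

quota = PromptQuota(limit_per_user=2)
```

The same counter doubles as the basis for a “pay as you use” bill: instead of refusing the call, you charge for it.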

About the GPT-4 API

I decided to build this article around the GPT-4 API for its accuracy compared to GPT-3.5. OpenAI provides a simple Python wrapper that can be used to send your inputs to the model and receive its outputs. For example:

import os
import openai

openai.api_key = os.environ["OPENAI_KEY"]

completion = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": preprompt_message},
        {"role": "user", "content": user_message},
    ],
)
print(completion.choices[0].message.content)

The “pre-prompt” passed with the “system” role helps the model behave the way you want it to (you can typically use it to enforce a response format), while the “user” role is used to add the message from the user. In our case, those messages will be pre-designed by our engine, for example, passing a particular poker hand to complete.
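For our poker case, the two messages could be built as below. The pre-prompt wording and the sample hand fragment are illustrative assumptions, not the article's actual prompts.

```python
# Illustrative system/user pair: the system message pins the model to the
# hand-history notation, the user message is a partial hand to continue.
preprompt_message = (
    "You are a poker coach. You will receive a partial hand history. "
    "Reply only with the next line of the history, in the same notation, "
    "e.g. 'player2: folds'."
)
user_message = "player1: checks\nplayer2: bets €0.20\nplayer1:"

messages = [
    {"role": "system", "content": preprompt_message},
    {"role": "user", "content": user_message},
]
```

This `messages` list is exactly what gets passed to `openai.ChatCompletion.create` in the snippet above.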

Note that all the tokens from “system”, “user”, and the answer are counted in the pricing scheme, so it is really important to optimize those queries as much as you can.
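To keep an eye on query size before sending anything, a crude heuristic is enough at the design stage. The ~4-characters-per-token rule of thumb below is an approximation for English text; for exact counts you would use OpenAI's tiktoken library with the GPT-4 encoding.

```python
# Rough heuristic: English text averages about 4 characters per token.
# Good enough for budgeting; use tiktoken for exact billing-grade counts.
def rough_token_count(text: str) -> int:
    return max(1, len(text) // 4)

def rough_prompt_tokens(messages) -> int:
    """Approximate total input tokens for a chat messages list."""
    return sum(rough_token_count(m["content"]) for m in messages)
```

Running this over the system and user messages before each call makes it easy to log (and cap) the estimated cost of every request.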
