# LLM for beginners
Understand the fundamentals of agents, tools, and prompts, plus a few learnings along the way
Audience: For those feeling overwhelmed by the large (yet great) library…
I'd be lying if I said I have the entire LangChain library covered — in fact, I'm far from it. But the excitement surrounding it was enough to shake me out of my writing hiatus and give it a go 🚀.
The initial motivation was to see what it was that LangChain was adding (on a practical level) that set it apart from the chatbot I built last month using the ChatCompletion.create() function from the openai package. While doing so, I realized that I needed to understand the building blocks of LangChain first before moving on to the more complex parts.
That is what this article does. Heads-up though, there will be more parts coming, as I am truly fascinated by the library and will continue exploring to see what all can be built with it.
Let's begin by understanding the fundamental building blocks of LangChain — i.e. Chains. If you'd like to follow along, here's the GitHub repo.
What are chains in LangChain?
Chains are what you get by connecting one or more large language models (LLMs) in a logical way. (Chains can be built of entities other than LLMs, but for now, let's stick with this definition for simplicity.)
OpenAI is one type of LLM (provider) that you can use, but there are others like Cohere, Bloom, Huggingface, etc.
Note: Pretty much all of these LLM providers will need you to request an API key in order to use them. So make sure you do that before proceeding with the remainder of this blog. For example:
import os
os.environ["OPENAI_API_KEY"] = "..."
P.S. I am going to use OpenAI for this tutorial because I have a key with credits that expire in a month's time, but feel free to replace it with any other LLM. The concepts covered here will be useful regardless.
Chains can be simple (i.e. Generic) or specialized (i.e. Utility).
1. Generic — A single LLM is the simplest chain. It takes an input prompt and the name of the LLM, and then uses the LLM for text generation (i.e. output for the prompt). Here's an example:
Let's build a basic chain — create a prompt and get a prediction
Prompt creation (using PromptTemplate) is a bit fancy in LangChain, but this is probably because there are quite a few different ways prompts can be created depending on the use case (we'll cover AIMessagePromptTemplate, HumanMessagePromptTemplate, etc. in the next blog post). Here's a simple one for now:
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
print(prompt.format(product="podcast player"))

# OUTPUT
# What is a good name for a company that makes podcast player?
Note: If you require multiple input_variables, for instance input_variables=["product", "audience"] for a template such as "What is a good name for a company that makes {product} for {audience}?", you need to do print(prompt.format(product="podcast player", audience="children")) to get the updated prompt.
Once you have built a prompt, we can call the desired LLM with it. To do so, we create an LLMChain instance (in our case, we use OpenAI's large language model text-davinci-003). To get the prediction (i.e. AI-generated text), we use the run function with the name of the product.
from langchain.llms import OpenAI
from langchain.chains import LLMChain

llm = OpenAI(
    model_name="text-davinci-003",  # default model
    temperature=0.9)  # temperature dictates how whacky the output should be
llmchain = LLMChain(llm=llm, prompt=prompt)
llmchain.run("podcast player")
# OUTPUT
# PodConneXion
If you had multiple input_variables, then you won't be able to use run. Instead, you'll have to pass all the variables as a dict. For instance, llmchain({"product": "podcast player", "audience": "children"}).
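To make this concrete, here's a quick sketch reusing the llm, PromptTemplate, and LLMChain we imported above (the prompt_multi / llmchain_multi names are mine, not from the library):
prompt_multi = PromptTemplate(
    input_variables=["product", "audience"],
    template="What is a good name for a company that makes {product} for {audience}?",
)
llmchain_multi = LLMChain(llm=llm, prompt=prompt_multi)
# with more than one input variable, call the chain with a dict instead of run()
llmchain_multi({"product": "podcast player", "audience": "children"})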
Note 1: According to OpenAI, davinci text-generation models are 10x more expensive than their chat counterparts, i.e. gpt-3.5-turbo, so I tried switching from a text model to a chat model (i.e. from OpenAI to ChatOpenAI) and the results were pretty much the same.
Note 2: You might see some tutorials using OpenAIChat instead of ChatOpenAI. The former is deprecated and will no longer be supported, and we are supposed to use ChatOpenAI.
from langchain.chat_models import ChatOpenAI

chatopenai = ChatOpenAI(model_name="gpt-3.5-turbo")
llmchain_chat = LLMChain(llm=chatopenai, prompt=prompt)
llmchain_chat.run("podcast player")
# OUTPUT
# PodcastStream
This concludes our section on simple chains. It is important to note that we rarely use generic chains as standalone chains. More often, they are used as building blocks for Utility chains (as we'll see next).
2. Utility — These are specialized chains, composed of many LLMs to help solve a specific task. For example, LangChain supports some end-to-end chains (such as AnalyzeDocumentChain for summarization, QnA, etc.) and some specific ones (such as GraphQnAChain for creating, querying, and saving graphs). We will look at one specific chain, PALChain, in this tutorial and dig deeper into it.
PAL stands for Program-Aided Language model. A PALChain reads complex math problems (described in natural language) and generates programs (for solving the math problem) as the intermediate reasoning steps, but offloads the solution step to a runtime such as a Python interpreter.
To confirm this is in fact true, we can inspect the _call() in the base code here and see how this chain works under the hood.
P.S. It's a good practice to inspect _call() in base.py for any of the chains in LangChain to see how things work under the hood.
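To give you a feel for what that inspection reveals, here is my own rough paraphrase of the idea (a simplified sketch, not the library's actual code), assuming the prompt's input variable is called question as in the default math prompt: the LLM writes a solution() function, and a Python REPL — not the LLM — computes the answer.
from langchain.python import PythonREPL  # the runtime the solution step is offloaded to

def pal_call_sketch(llm_chain, question: str) -> str:
    # 1. Ask the LLM to write a Python program for the word problem
    code = llm_chain.predict(question=question, stop=["\n\n\n"])
    # 2. Execute the generated program and return its printed result as the answer
    repl = PythonREPL()
    return repl.run(code + "\nprint(solution())")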
from langchain.chains import PALChain
palchain = PALChain.from_math_prompt(llm=llm, verbose=True)
palchain.run("If my age is half of my dad's age and he's going to be 60 next 12 months, what's my current age?")# OUTPUT
# > Entering new PALChain chain...
# def solution():
# """If my age is half of my dad's age and he's going to be 60 next 12 months, what's my current age?"""
# dad_age_next_year = 60
# dad_age_now = dad_age_next_year - 1
# my_age_now = dad_age_now / 2
# result = my_age_now
# return result
#
# > Finished chain.
# '29.5'
Note 1: verbose can be set to False if you don't need to see the intermediate steps.
Now some of you might be wondering — but what about the prompt? We certainly didn't pass one as we did for the generic llmchain we built. The fact is, it's automatically loaded when using .from_math_prompt(). You can check the default prompt using palchain.prompt.template, or you can directly inspect the prompt file here.
print(palchain.prompt.template)
# OUTPUT
# 'Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\n    """Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""\n    money_initial = 23\n    bagels = 5\n    bagel_cost = 3\n    money_spent = bagels * bagel_cost\n    money_left = money_initial - money_spent\n    result = money_left\n    return result\n\n\n\n\n\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\n# solution in Python:\n\n\ndef solution():\n    """Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?"""\n    golf_balls_initial = 58\n    golf_balls_lost_tuesday = 23\n    golf_balls_lost_wednesday = 2\n    golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\n    result = golf_balls_left\n    return result\n\n\n\n\n\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\n# solution in Python:\n\n\ndef solution():\n    """There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?"""\n    computers_initial = 9\n    computers_per_day = 5\n    num_days = 4  # 4 days between monday and thursday\n    computers_added = computers_per_day * num_days\n    computers_total = computers_initial + computers_added\n    result = computers_total\n    return result\n\n\n\n\n\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\n# solution in Python:\n\n\ndef solution():\n    """Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?"""\n    toys_initial = 5\n    mom_toys = 2\n    dad_toys = 2\n    total_received = mom_toys + dad_toys\n    total_toys = toys_initial + total_received\n    result = total_toys\n    return result\n\n\n\n\n\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\n# solution in Python:\n\n\ndef solution():\n    """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""\n    jason_lollipops_initial = 20\n    jason_lollipops_after = 12\n    denny_lollipops = jason_lollipops_initial - jason_lollipops_after\n    result = denny_lollipops\n    return result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\n    """Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?"""\n    leah_chocolates = 32\n    sister_chocolates = 42\n    total_chocolates = leah_chocolates + sister_chocolates\n    chocolates_eaten = 35\n    chocolates_left = total_chocolates - chocolates_eaten\n    result = chocolates_left\n    return result\n\n\n\n\n\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\n# solution in Python:\n\n\ndef solution():\n    """If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?"""\n    cars_initial = 3\n    cars_arrived = 2\n    total_cars = cars_initial + cars_arrived\n    result = total_cars\n    return result\n\n\n\n\n\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\n# solution in Python:\n\n\ndef solution():\n    """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?"""\n    trees_initial = 15\n    trees_after = 21\n    trees_added = trees_after - trees_initial\n    result = trees_added\n    return result\n\n\n\n\n\nQ: {question}\n\n# solution in Python:\n\n\n'
Note: Most of the utility chains will have their prompts pre-defined as part of the library (check them out here). They are, at times, quite detailed (read: lots of tokens), so there is definitely a trade-off between cost and the quality of the response from the LLM.
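If you want to gauge that trade-off for PALChain specifically, LangChain's LLM wrappers expose a get_num_tokens() helper, so a quick check like the one below (the exact count will vary with the tokenizer) shows how many prompt tokens you're paying for on every call:
print(llm.get_num_tokens(palchain.prompt.template))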
Are there any Chains that don’t need LLMs and prompts?
Even though PALChain requires an LLM (and a corresponding prompt) to parse the user's question written in natural language, there are some chains in LangChain that don't need one. These are mainly transformation chains that preprocess the prompt, such as removing extra spaces, before inputting it into the LLM. You can see another example here.
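For illustration, here's a minimal sketch of such a chain using TransformChain — no LLM involved, just a plain Python function that collapses extra whitespace (the function and variable names are mine):
from langchain.chains import TransformChain

def clean_extra_spaces(inputs: dict) -> dict:
    # collapse runs of whitespace in the incoming text
    return {"clean_text": " ".join(inputs["text"].split())}

clean_chain = TransformChain(
    input_variables=["text"],
    output_variables=["clean_text"],
    transform=clean_extra_spaces,
)
clean_chain.run("What   is a good    name for a company that makes   podcast players?")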
Can we get to the good part and start creating chains?
Of course, we can! We now have all the basic building blocks we need to start logically chaining LLMs together such that the output from one can be fed as input to the next. To do so, we will use SimpleSequentialChain.
The documentation has some great examples of this; for instance, you can see here how to combine two chains where chain #1 is used to clean the prompt (remove extra whitespace, shorten the prompt, etc.) and chain #2 is used to call an LLM with this clean prompt. Here's another one where chain #1 is used to generate a synopsis for a play and chain #2 is used to write a review based on this synopsis.
While these are excellent examples, I want to focus on something else. If you remember, I mentioned before that chains can be composed of entities other than LLMs. More specifically, I am interested in chaining agents and LLMs together. But first, what are agents?
Using agents for dynamically calling LLMs
It will be much easier to explain what an agent does vs. what it is.
Say we want to know the weather forecast for tomorrow. If we were to use the simple ChatGPT API and give it the prompt Show me the weather for tomorrow in London, it won't know the answer because it doesn't have access to real-time data.
Wouldn't it be useful if we had an arrangement where we could utilize an LLM to understand our query (i.e. prompt) in natural language and then call the weather API on our behalf to fetch the data needed? This is exactly what an agent does (among other things, of course).
An agent has access to an LLM and a suite of tools, for example Google Search, a Python REPL, a math calculator, weather APIs, etc.
There are quite a few agents that LangChain supports — see here for the complete list, but quite frankly the most common one I came across in tutorials and YT videos was zero-shot-react-description. This agent uses the ReAct (Reason + Act) framework to pick the most suitable tool (from a list of tools), based on what the input query is.
P.S.: Here's a nice article that goes in-depth into the ReAct framework.
Let's initialize an agent using initialize_agent and pass it the tools and LLM it needs. There is a long list of tools available here that an agent can use to interact with the outside world. For our example, we are using the same math-solving tool as above, called pal-math. This one requires an LLM at the time of initialization, so we pass it the same OpenAI LLM instance as before.
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.agents import load_tools

llm = OpenAI(temperature=0)
tools = load_tools(["pal-math"], llm=llm)
agent = initialize_agent(tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True)
Let's test it out on the same example as above:
agent.run("If my age is half of my dad's age and he's going to be 60 next 12 months, what's my current age?")# OUTPUT
# > Entering new AgentExecutor chain...
# I need to figure out my dad's current age and then divide it by two.
# Action: PAL-MATH
# Action Input: What is my dad's current age if he is going to be 60 next year?
# Observation: 59
# Thought: I now know my dad's current age, so I can divide it by two to get my age.
# Action: Divide 59 by 2
# Action Input: 59/2
# Observation: Divide 59 by 2 is not a valid tool, try another one.
# Thought: I can use PAL-MATH to divide 59 by 2.
# Action: PAL-MATH
# Action Input: Divide 59 by 2
# Observation: 29.5
# Thought: I now know the final answer.
# Final Answer: My current age is 29.5 years old.
# > Finished chain.
# 'My current age is 29.5 years old.'
Note 1: At each step, you'll notice that the agent does one of three things — it either has an observation, a thought, or it takes an action. This is mainly due to the ReAct framework and the associated prompt that the agent is using:
print(agent.agent.llm_chain.prompt.template)
# OUTPUT
# Answer the following questions as best you can. You have access to the following tools:
# PAL-MATH: A language model that is really good at solving complex word math problems. Input should be a fully worded hard word math problem.
# Use the following format:
# Question: the input question you must answer
# Thought: you should always think about what to do
# Action: the action to take, should be one of [PAL-MATH]
# Action Input: the input to the action
# Observation: the result of the action
# ... (this Thought/Action/Action Input/Observation can repeat N times)
# Thought: I now know the final answer
# Final Answer: the final answer to the original input question
# Begin!
# Question: {input}
# Thought:{agent_scratchpad}
Note 2: You might be wondering what's the point of having an agent do the same thing that an LLM can do. Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user's input [Source]. In these types of chains, there is an "agent" which has access to a suite of tools.
For instance, here's an example of an agent that can fetch the correct documents (from the vectorstores) for RetrievalQAChain depending on whether the question refers to document A or document B.
For fun, I tried making the input question more complex (using Demi Moore's age as a placeholder for Dad's actual age).
agent.run("My age is half of my dad's age. Next 12 months he's going to be same age as Demi Moore. What's my current age?")
Unfortunately, the answer was slightly off, since the agent was not using the latest age for Demi Moore (OpenAI models were trained on data up to 2020). This can be easily fixed by including another tool — tools = load_tools(["pal-math", "serpapi"], llm=llm). serpapi is useful for answering questions about current events.
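If you want to try that yourself, the fix is just a re-initialization with both tools loaded — a sketch below, assuming you have a SerpAPI key set as SERPAPI_API_KEY in your environment:
tools = load_tools(["pal-math", "serpapi"], llm=llm)
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)
agent.run("My age is half of my dad's age. Next year he is going to be the same age as Demi Moore. What is my current age?")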
Note: It is important to add as many tools as you think may be relevant to the user query. The problem with using a single tool is that the agent keeps trying to use the same tool even if it's not the most relevant for a particular observation/action step.
Here's another example of a tool you can use — podcast-api. You need to get your own API key and plug it into the code below.
tools = load_tools(["podcast-api"], llm=llm, listen_api_key="...")
agent = initialize_agent(tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)

agent.run("Show me episodes for money saving tips.")
# OUTPUT
# > Entering new AgentExecutor chain...
# I should search for podcasts or episodes related to money saving
# Action: Podcast API
# Action Input: Money saving tips
# Observation: The API call returned 3 podcasts related to money saving tips: The Money Nerds, The Rachel Cruze Show, and The Martin Lewis Podcast. These podcasts offer valuable money saving tips and advice to help people take control of their finances and create a life they love.
# Thought: I now have some options to choose from
# Final Answer: The Money Nerds, The Rachel Cruze Show, and The Martin Lewis Podcast are great podcast options for money saving tips.
# > Finished chain.
# 'The Money Nerds, The Rachel Cruze Show, and The Martin Lewis Podcast are great podcast options for money saving tips.'
Note 1: There is a known error with using this API where you might see openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested XXX tokens (XX in your prompt; XX for the completion). Please reduce your prompt; or completion length. This happens when the response returned by the API might be too big. To work around this, the documentation suggests returning fewer search results, for example, by updating the question to "Show me episodes for money saving tips, return only 1 result".
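In other words, the workaround is simply a more constrained query, something like:
agent.run("Show me episodes for money saving tips, return only 1 result.")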
Note 2: While tinkering around with this tool, I noticed some inconsistencies. The responses aren't always complete the first time around; for instance, here are the input and responses from two consecutive runs:
Input: "Podcasts for getting better at French"
Response 1: "The best podcast for learning French is the one with the highest review rating."
Response 2: "The best podcast for learning French is 'FrenchPod101'."
Under the hood, the tool first uses an LLMChain to build the API URL based on our input instructions (something along the lines of https://listen-api.listennotes.com/api/v2/search?q=french&type=podcast&page_size=3) and makes the API call. Upon receiving the response, it uses another LLMChain that summarizes the response to get the answer to our original question. You can check out the prompts here for both LLMChains, which describe the process in more detail.
I'm inclined to guess the inconsistent results seen above are due to the summarization step, because I have individually debugged and tested the API URL (created by LLMChain #1) via Postman and received the correct response. To further confirm my doubts, I also stress-tested the summarization chain as a standalone chain with an empty API URL, hoping it would throw an error, but got the response "'Investing' podcasts were found, containing 3 results in total." 🤷‍♀️ I'd be curious to see if others had better luck than me with this tool!
Use Case 2: Combine chains to create an age-appropriate gift generator
Let's put our knowledge of agents and sequential chaining to good use and create our own sequential chain. We will combine:
- Chain #1 — The agent we just created that can solve age problems in math.
- Chain #2 — An LLM that takes the age of a person and suggests an appropriate gift for them.
# Chain 1 - solve math problem, get the age
chain_one = agent

# Chain 2 - suggest age-appropriate gift
template = """You are a gift recommender. Given a person's age,\n
it is your job to suggest an appropriate gift for them.
Person Age:
{age}
Suggest gift:"""
prompt_template = PromptTemplate(input_variables=["age"], template=template)
chain_two = LLMChain(llm=llm, prompt=prompt_template)
Now that we have both chains ready, we can combine them using SimpleSequentialChain.
from langchain.chains import SimpleSequentialChain

overall_chain = SimpleSequentialChain(
chains=[chain_one, chain_two],
verbose=True)
A few things to note:
- We need not explicitly pass input_variables and output_variables for SimpleSequentialChain, because the underlying assumption is that the output from chain 1 is passed as input to chain 2.
Finally, we can run it with the same math problem as before:
query = "If my age is half of my dad's age and he's going to be 60 next 12 months, what's my current age?"
overall_chain.run(query)# OUTPUT
# > Entering new SimpleSequentialChain chain...
# > Entering new AgentExecutor chain...
# I need to figure out my dad's current age and then divide it by two.
# Action: PAL-MATH
# Action Input: What is my dad's current age if he is going to be 60 next year?
# Observation: 59
# Thought: I now know my dad's current age, so I can divide it by two to get my age.
# Action: Divide 59 by 2
# Action Input: 59/2
# Observation: Divide 59 by 2 is not a valid tool, try another one.
# Thought: I need to use PAL-MATH to divide 59 by 2.
# Action: PAL-MATH
# Action Input: Divide 59 by 2
# Observation: 29.5
# Thought: I now know the final answer.
# Final Answer: My current age is 29.5 years old.
# > Finished chain.
# My current age is 29.5 years old.
# Given your age, a great gift would be something that you can use and enjoy now like a nice bottle of wine, a luxury watch, a cookbook, or a gift card to a favourite store or restaurant. Or, you could get something that will last for years like a nice piece of jewellery or a quality leather wallet.
# > Finished chain.
# '\nGiven your age, a great gift would be something that you can use and enjoy now like a nice bottle of wine, a luxury watch, a cookbook, or a gift card to a favourite store or restaurant. Or, you could get something that will last for years like a nice piece of jewellery or a quality leather wallet.'
There might be times when you need to pass along some additional context to the second chain, in addition to what it is receiving from the first chain. For instance, I want to set a budget for the gift, depending on the age of the person that is returned by the first chain. We can do so using SimpleMemory.
First, let's update the prompt for chain_two and pass it a second variable called budget within input_variables.
template = """You might be a present recommender. Given an individual's age,n
it's your job to suggest an appropriate gift for them. If age is under 10,n
the gift should cost not more than {budget} otherwise it should cost atleast 10 times {budget}.Person Age:
{output}
Suggest gift:"""
prompt_template = PromptTemplate(input_variables=["output", "budget"], template=template)
chain_two = LLMChain(llm=llm, prompt=prompt_template)
If you compare the template we had for SimpleSequentialChain with the one above, you'll notice that I have also updated the first input's variable name from age → output. This is a crucial step, failing which an error would be raised at the time of chain validation — Missing required input keys: {age}, only had {input, output, budget}.
This is because the output from the first entity in the chain (i.e. the agent) will be the input for the second entity in the chain (i.e. chain_two), and therefore the variable names must match. Upon inspecting the agent's output keys, we see that the output variable is called output, hence the update.
print(agent.output_keys)

# OUTPUT
["output"]
Next, let's update the kind of chain we are making. We can no longer work with SimpleSequentialChain since it only works in cases where there is a single input and a single output. Since chain_two now takes two input_variables, we need to use SequentialChain, which is tailored to handle multiple inputs and outputs.
from langchain.chains import SequentialChain
from langchain.memory import SimpleMemory

overall_chain = SequentialChain(
input_variables=["input"],
memory=SimpleMemory(memories={"budget": "100 GBP"}),
chains=[agent, chain_two],
verbose=True)
A few things to note:
- Unlike SimpleSequentialChain, passing the input_variables parameter is mandatory for SequentialChain. It is a list containing the names of the input variables that the first entity in the chain (i.e. the agent in our case) expects.
Now some of you may be wondering how to know the exact name used in the input prompt that the agent is going to use. We certainly didn't write the prompt for this agent (as we did for chain_two)! It's actually pretty straightforward to find out by inspecting the prompt template of the llm_chain that the agent is made up of.
print(agent.agent.llm_chain.prompt.template)

# OUTPUT
#Answer the following questions as best you can. You have access to the following tools:
#PAL-MATH: A language model that is really good at solving complex word math problems. Input should be a fully worded hard word math problem.
#Use the following format:
#Question: the input question you must answer
#Thought: you should always think about what to do
#Action: the action to take, should be one of [PAL-MATH]
#Action Input: the input to the action
#Observation: the result of the action
#... (this Thought/Action/Action Input/Observation can repeat N times)
#Thought: I now know the final answer
#Final Answer: the final answer to the original input question
#Begin!
#Question: {input}
#Thought:{agent_scratchpad}
As you can see toward the end of the prompt, the question being asked by the end-user is stored in an input variable by the name input. If for some reason you had to manipulate this name in the prompt, make sure you also update the input_variables at the time of creation of the SequentialChain.
Finally, you could have found out the same information without going through the whole prompt:
print(agent.agent.llm_chain.prompt.input_variables)

# OUTPUT
# ['input', 'agent_scratchpad']
- SimpleMemory is an easy way to store context or other bits of information that shouldn't ever change between prompts. It requires one parameter at the time of initialization — memories. You can pass elements to it in dict form. For example, SimpleMemory(memories={"budget": "100 GBP"}).
Finally, let's run the new chain with the same prompt as before. You'll notice that the final output has some luxury gift recommendations, such as weekend getaways, in accordance with the higher budget in our updated prompt.
overall_chain.run("If my age is half of my dad's age and he's going to be 60 next 12 months, what's my current age?")# OUTPUT
#> Entering new SequentialChain chain...
#> Entering new AgentExecutor chain...
# I need to figure out my dad's current age and then divide it by two.
#Action: PAL-MATH
#Action Input: What is my dad's current age if he is going to be 60 next year?
#Observation: 59
#Thought: I now know my dad's current age, so I can divide it by two to get my age.
#Action: Divide 59 by 2
#Action Input: 59/2
#Observation: Divide 59 by 2 is not a valid tool, try another one.
#Thought: I can use PAL-MATH to divide 59 by 2.
#Action: PAL-MATH
#Action Input: Divide 59 by 2
#Observation: 29.5
#Thought: I now know the final answer.
#Final Answer: My current age is 29.5 years old.
#> Finished chain.
# For someone of your age, a good gift would be something that is both practical and meaningful. Consider something like a nice watch, a piece of jewellery, a nice leather bag, or a gift card to a favourite store or restaurant.\nIf you have a larger budget, you could consider something like a weekend getaway, a spa package, or a special experience.
#> Finished chain.
For someone of your age, a good gift would be something that is both practical and meaningful. Consider something like a nice watch, a piece of jewellery, a nice leather bag, or a gift card to a favourite store or restaurant.\nIf you have a larger budget, you could consider something like a weekend getaway, a spa package, or a special experience.