Conversations as Directed Graphs with LangChain
A Description of the Problem
The Approach
Key Technologies
Defining Nodes and Edges
Defining the Conversation
Implementing the Conversational Graph
Conclusion
Follow For More!

Building a chatbot designed to learn key details about new prospective customers.

Image by Daniel Warfield using MidJourney. All images are by the author unless otherwise specified.

In this post we'll use LangChain to do lead qualification in a real-estate context. We imagine a scenario where new potential customers contact a real-estate agent for the first time. We'll design a system which communicates with a new prospective lead to extract key information before the real-estate agent takes over.

Who is this useful for? Anyone interested in applying natural language processing (NLP) in a practical context.

How advanced is this post? This example is conceptually straightforward, but you might struggle to follow along if you don't have a firm grasp of Python and a general understanding of language models.

Prerequisites: Basic programming knowledge in Python, and a high-level understanding of language models.

This use case is directly inspired by a work request I received while operating as a contractor. The prospective client owned a real-estate company, and found that a significant amount of their agents' time was spent performing the same repetitive task at the beginning of every conversation: lead qualification.

Lead qualification is the real-estate term for the first pass at a lead: getting their contact information, their budget, etc. It's a fairly broad term, and the details can vary from organization to organization. For this post, we'll consider extracting the following information as "qualifying" a lead (sketched as a single record just after the list):

  1. Name: the name of the lead.
  2. Contact Info: the email or phone number of the lead.
  3. Financing: their monthly rental budget.
  4. Readiness: how quickly they can meet with an agent.
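
Taken together, a fully qualified lead can be pictured as a single record. Here's a minimal sketch of that record as a Pydantic model; the class and field names are purely illustrative, and the system built later in this post actually collects each field one at a time:

# A purely illustrative sketch of a fully "qualified" lead as one record.
# The real system below gathers these fields individually, node by node.
from typing import Optional
from pydantic import BaseModel, Field

class QualifiedLead(BaseModel):
    name: str = Field(description="the lead's name")
    contact_info: str = Field(description="an email address or phone number")
    monthly_budget: float = Field(description="monthly rental budget in dollars")
    readiness: str = Field(description="the soonest date the lead can meet with an agent")
    notes: Optional[str] = None  # free-form notes an agent might want to see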

The naive approach

While large language models are incredibly powerful, they need proper contextualization of the use case to be consistently successful. You could, for instance, give a language model a prompt saying something like:

"You might be a real-estate agent attempting to qualify a brand new client.
Extract the next information:
- email
- phone
....
Once all information has been extracted from the client, politely
thank them you can be re-directing them to an agent"

Then, you could put your new client in a chat room with a model initialized with that prompt. This would be a great way to start experimenting with an LLM in a particular business context, but it's also a great way to begin realizing how fragile LLMs are to certain types of input. The conversation could quickly derail if a user asked a benign but irrelevant question like "Did you catch the game last night?" or "Yeah, I was walking down the road and I saw your complex on Second." This may or may not be a serious issue depending on the use case, but imposing a rigid structure around the conversation can help keep things on track.
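
As a rough sketch of that naive setup (the prompt text and model choice here are illustrative, not taken from the original system), you could hold the whole conversation in a single message history and let one model handle everything:

# A sketch of the single-prompt approach, assuming LangChain's ChatOpenAI wrapper.
# The system prompt here is illustrative, not the exact prompt used in the post.
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage, AIMessage

system_prompt = (
    "You are a real-estate agent attempting to qualify a new client. "
    "Extract their name, email or phone number, monthly budget, and availability. "
    "Once all information has been extracted, politely tell them you will "
    "be re-directing them to an agent."
)

chat = ChatOpenAI(temperature=0.7)
messages = [SystemMessage(content=system_prompt)]

while True:
    user_text = input('> ')
    messages.append(HumanMessage(content=user_text))
    response = chat(messages)  # the model replies given the whole history
    messages.append(AIMessage(content=response.content))
    print(response.content)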

Conversations as Directed Graphs

We can frame a conversation as a directed graph, where each node represents a certain conversational state, and each edge represents an impetus to change the conversational state, like a completed introduction or an acquired piece of information.

Example of what directed graph traversal might look like in the context of the problem we're trying to solve

This is about the most basic directed graph we could compose for this problem. It's worth noting that this approach can easily grow, shrink, or otherwise change based on the needs of the system.

For instance, if your clients consistently ask the chatbot about sports, which was unanticipated in the initial design phase, then you can add the relevant logic to check for this type of question and respond appropriately.

Example modification for dealing with sports-related questions. We'll stick with the original simple graph, but it's easy to see how emerging edge cases and poor-performance scenarios can be mitigated by adding additional elements to an existing directed graph.

When creating a new system which interacts with humans in an organic way, it's vital for it to be easily iterated on as new and unexpected issues arise. We'll keep it simple for the purposes of this example, but extensibility is one of the core abilities of this approach.

We'll be using LangChain to do a lot of the heavy lifting. Specifically, we'll be using:

  1. An LLM: We'll be using OpenAI's Text DaVinci 3 model.
  2. Output Parsing: We'll be using LangChain's Pydantic parser to parse results into easy-to-consume formats.

We'll also be implementing a directed graph from scratch, with some functionality baked into that graph to achieve the desired behavior.

The Model

In this example we're using OpenAI's Text Davinci 3 model. While you could use almost any modern large language model, I chose to use this particular model because it's widely used in LangChain examples and documentation.

LangChain does its best to be a robust and resilient framework, but working with large language models is fiddly work. Different models can behave drastically differently to a given prompt. I found that Text Davinci 3 behaved consistently with prompts from LangChain.

LangChain lets you use self-hosted models, models hosted for free on Hugging Face, or models from numerous other sources. Feel free to experiment with your choice of model; it's pretty easy to swap between them (though, in my experience, you'll likely need to adjust your prompts to the particular model you're using).
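
For example, swapping the OpenAI completion model for a model hosted on the Hugging Face Hub is mostly a one-line change. The snippet below is a sketch; the repo id and parameters are just an illustration, not a recommendation:

# Swapping backends in LangChain: both objects expose the same LLM interface,
# so the rest of the code doesn't need to change.
# Note: HuggingFaceHub requires a HUGGINGFACEHUB_API_TOKEN environment variable.
from langchain.llms import OpenAI, HuggingFaceHub

llm_openai = OpenAI(model_name="text-davinci-003", temperature=0.0)
llm_hf = HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature": 0.1})

# Either can be called the same way:
print(llm_openai("Say hello in one short sentence."))
print(llm_hf("Say hello in one short sentence."))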

Text Davinci 3 is a transformer model; feel free to read the following article for more information:

LangChain Parsing

LangChain has a variety of parsers designed to be used with large language models. We'll be using the PydanticOutputParser.

LangChain parsers not only extract key information from LLM responses, but also modify prompts to entice more parsable responses from the LLM. With the Pydantic parser you first define a class representing the format of the results you want from the LLM. Let's say you want to get a joke, complete with setup and punchline, from an LLM:

""" Define the info structure we would like to be parsed out from the LLM response

notice that the category accommodates a setup (a string) and a punchline (a string.
The descriptions are used to construct the prompt to the llm. This particular
example also has a validator which checks if the setup accommodates an issue mark.

from: https://python.langchain.com/docs/modules/model_io/output_parsers/pydantic
"""

class Joke(BaseModel):
setup: str = Field(description="query to establish a joke")
punchline: str = Field(description="answer to resolve the joke")

@validator("setup")
def question_ends_with_question_mark(cls, field):
if field[-1] != "?":
raise ValueError("Badly formed query!")
return field

You can then define the actual query you want to send to the model.

"""Defining the query from the user
"""
joke_query = "Tell me a joke about parrots"

This query then gets modified by the parser, combining the user's query and information about the final parsing format to construct the prompt to the llm.

"""Defining the prompt to the llm

from: https://python.langchain.com/docs/modules/model_io/output_parsers/pydantic
"""
parser = PydanticOutputParser(pydantic_object=Joke)

prompt = PromptTemplate(
template="Answer the user query.n{format_instructions}n{query}n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()},
)

input = prompt.format_prompt(query=joke_query)
print(input.text)

The prompt for this particular example is the following:

Answer the user query.
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}
the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.

Here is the output schema:
```
{"properties": {"setup": {"title": "Setup", "description": "query to establish a joke", "type": "string"}, "punchline": {"title": "Punchline", "description": "answer to resolve the joke", "type": "string"}}, "required": ["setup", "punchline"]}
```
Tell me a joke about parrots

Notice how the query from the user "Tell me a joke about parrots" is combined with information about the desired end format.

This formatted query can then be passed to the model, and the parser can be used to extract the result:


"""Declaring a model and querying it with the parser defined input
"""

model_name = "text-davinci-003"
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)

output = model(input.to_string())
parser.parse(output)

Here’s the result from this particular example:

"""The ultimate output, a Joke object with a setup and punchline attribute
"""
Joke(setup="Why don't parrots make good detectives?",
punchline="Because they're at all times repeating themselves!")

The PydanticOutputParser is both powerful and flexible, which is why it's the most commonly used parser in LangChain. We'll be exploring this parser more throughout this post. The OutputFixingParser and RetryOutputParser are two other very useful output parsers which will not be explored in this post, but definitely could be used in this use case.
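
As a quick taste of how one of those might slot in (a sketch, not part of this post's code), the OutputFixingParser wraps an existing parser and asks the LLM to repair output that fails to parse. It assumes the `parser` and `model` defined in the snippets above:

# Wrapping the Pydantic parser with an OutputFixingParser; if parsing fails,
# the wrapper asks the llm to fix the malformed output and tries again.
from langchain.output_parsers import OutputFixingParser

fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=model)

# e.g. single quotes instead of double quotes would break the plain parser
misformatted = "{'setup': 'Why do parrots make bad secret agents?', 'punchline': 'Because they repeat everything they hear!'}"
print(fixing_parser.parse(misformatted))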

Conversations as a Directed Graph

We'll be abstracting a conversation into a directed graph.

The approach in its most basic form: a series of conversational states, where the state of the conversation progresses once certain information is received from the human being communicated with.

Each node and edge will need to be customized, but will follow the same general structure:

How nodes and edges will work. The box in blue represents a conversational state, so the entire box in blue represents a single node and its functionality. The box in red represents the required steps to transition between conversational states, so the entire box in red represents an edge and its functionality.

It's worth noting that LangChain has a similar structure, called a Chain. We won't be discussing Chains in this post, but they're useful for direct and sequential LLM tasks.
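
For reference, a minimal Chain just pairs a prompt template with a model. The snippet below is a sketch with an illustrative prompt, not code from this project:

# A minimal LLMChain: a prompt template piped into a model.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(model_name="text-davinci-003", temperature=0.0)
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write one friendly sentence about {topic}.",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="renting an apartment"))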

This is where we start coding up an LLM-supported directed graph with the aforementioned core structure. We'll be using Pydantic parsers for both the input validation step and the actual content parsing.

I'm including the code for reference, but don't be daunted by the length. You can skim through the code, or not refer to the code at all if you don't want to. The final notebook can be found here:

General Utilities

For demonstrative purposes, all of this will exist within a single Jupyter notebook, and the final back and forth with the model will be executed in the final cell. In order to improve readability, we'll define three functions: one for model output to the user, one for user input to the model, and another for printing key information for demonstration, like the results of parsing.

"""Defining utility functions for constructing a readable exchange
"""

def system_output(output):
"""Function for printing out to the user
"""
print('======= Bot =======')
print(output)

def user_input():
"""Function for getting user input
"""
print('======= Human Input =======')
return input()

def parsing_info(output):
"""Function for printing out key info
"""
print(f'*Info* {output}')

Defining the Edge

As the code suggests, an edge takes some input, checks it against a condition, and then parses the input if the condition was met. The edge contains the relevant logic for recording the number of times it's been attempted and failed, and is responsible for telling higher-level units whether we should progress through the directed graph along the edge or not.

from typing import List

class Edge:

    """Edge
    at its highest level, an edge checks if an input is good, then parses
    data out of that input if it is good
    """

    def __init__(self, condition, parse_prompt, parse_class, llm, max_retrys=3, out_node=None):
        """
        condition (str): a True/False question about the input
        parse_prompt (str): what the parser should be extracting
        parse_class (Pydantic BaseModel): the structure of the parse
        llm (LangChain LLM): the large language model being used
        """
        self.condition = condition
        self.parse_prompt = parse_prompt
        self.parse_class = parse_class
        self.llm = llm

        #how many times the edge has failed, for any reason, for deciding to skip
        #when successful this resets to 0 for posterity.
        self.num_fails = 0

        #how many retries are acceptable
        self.max_retrys = max_retrys

        #the node the edge directs towards
        self.out_node = out_node

    def check(self, input):
        """ask the llm if the input satisfies the condition
        """
        validation_query = f'following the output schema, does the input satisfy the condition?\ninput:{input}\ncondition:{self.condition}'
        class Validation(BaseModel):
            is_valid: bool = Field(description="if the condition is satisfied")
        parser = PydanticOutputParser(pydantic_object=Validation)
        input = f"Answer the user query.\n{parser.get_format_instructions()}\n{validation_query}\n"
        return parser.parse(self.llm(input)).is_valid

    def parse(self, input):
        """ask the llm to parse the parse_class, based on the parse_prompt, from the input
        """
        parse_query = f'{self.parse_prompt}:\n\n"{input}"'
        parser = PydanticOutputParser(pydantic_object=self.parse_class)
        input = f"Answer the user query.\n{parser.get_format_instructions()}\n{parse_query}\n"
        return parser.parse(self.llm(input))

    def execute(self, input):
        """Executes the entire edge
        returns a dictionary:
        {
            continue: bool, whether or not to continue to the next node
            result: parse_class, the parsed result, if applicable
            num_fails: int, the number of failed attempts
            continue_to: Node, the node the edge directs towards
        }
        """

        #input didn't make it past the input condition for the edge
        if not self.check(input):
            self.num_fails += 1
            if self.num_fails >= self.max_retrys:
                return {'continue': True, 'result': None, 'num_fails': self.num_fails, 'continue_to': self.out_node}
            return {'continue': False, 'result': None, 'num_fails': self.num_fails, 'continue_to': self.out_node}

        try:
            #attempting to parse
            self.num_fails = 0
            return {'continue': True, 'result': self.parse(input), 'num_fails': self.num_fails, 'continue_to': self.out_node}
        except:
            #there was some error in parsing.
            #note: using the retry or correction parser here might be a good idea
            self.num_fails += 1
            if self.num_fails >= self.max_retrys:
                return {'continue': True, 'result': None, 'num_fails': self.num_fails, 'continue_to': self.out_node}
            return {'continue': False, 'result': None, 'num_fails': self.num_fails, 'continue_to': self.out_node}

I created a few unit tests in the code here which illustrate how the edge functions.
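
The tests themselves aren't reproduced here, but a sketch of what exercising a single edge might look like is below. The condition, prompt, and inputs are made up for illustration, and `model` is assumed to be an LLM defined as in the earlier snippets:

# A quick, informal check of Edge behaviour: one input that should pass
# the condition and parse, and one that should fail.
class EmailTemplate(BaseModel):
    output: str = Field(description="email address")

email_edge = Edge(
    condition="Does the input contain a full and valid email?",
    parse_prompt="extract the email from the following text",
    parse_class=EmailTemplate,
    llm=model,
)

# would hopefully return something like {'continue': True, 'result': EmailTemplate(...), ...}
print(email_edge.execute("you can reach me at jane.doe@example.com"))

# would hopefully return something like {'continue': False, 'result': None, 'num_fails': 1, ...}
print(email_edge.execute("I'd rather not say"))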

Defining the Node

Now that we have an Edge, which handles input validation and parsing, we can define a Node, which handles conversational state. The Node requests input from the user, and passes that input to the directed edges coming from that Node. If none of the edges execute successfully, the Node asks the user for the input again.

class Node:

    """Node
    at its highest level, a node asks a user for some input, and tries
    that input on all edges. It also manages and executes all
    the edges it contains
    """

    def __init__(self, prompt, retry_prompt):
        """
        prompt (str): what to ask the user
        retry_prompt (str): what to ask the user if all edges fail
        """

        self.prompt = prompt
        self.retry_prompt = retry_prompt
        self.edges = []

    def run_to_continue(self, _input):
        """Run all edges until one continues
        returns the result of the continuing edge, or None
        """
        for edge in self.edges:
            res = edge.execute(_input)
            if res['continue']: return res
        return None

    def execute(self):
        """Handles the current conversational state
        prompts the user, retries, runs edges, etc.
        returns the result from an edge
        """

        #initial prompt for the conversational state
        system_output(self.prompt)

        while True:
            #getting the user's input
            _input = user_input()

            #running through edges
            res = self.run_to_continue(_input)

            if res is not None:
                #parse successful
                parsing_info(f'parse results: {res}')
                return res

            #unsuccessful, prompting retry
            system_output(self.retry_prompt)

With this implemented, we can begin seeing conversations happen. We'll implement a Node which requests contact information, and two edges: one which attempts to parse out a valid email, and one which attempts to parse out a valid phone number.

"""Defining an example
this instance asks for contact information, and parses out either an email
or a phone number.
"""

#defining the model utilized in this test
model_name = "text-davinci-003"
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)

#Defining 2 edges from the node
class sampleOutputTemplate(BaseModel):
output: str = Field(description="contact information")
condition1 = "Does the input contain a full and valid email?"
parse_prompt1 = "extract the e-mail from the next text."
edge1 = Edge(condition1, parse_prompt1, sampleOutputTemplate, model)
condition2 = "Does the input contain a full and valid phone number (xxx-xxx-xxxx or xxxxxxxxxx)?"
parse_prompt2 = "extract the phone number from the next text."
edge2 = Edge(condition2, parse_prompt2, sampleOutputTemplate, model)

#Defining A Node
test_node = Node(prompt = "Please input your full email address or phone number",
retry_prompt = "I'm sorry, I didn't understand your response.nPlease provide a full email address or phone number(within the format xxx-xxx-xxxx)")

#Defining Connections
test_node.edges = [edge1, edge2]

#running node. This handles all i/o and the logic to re-ask on failure.
res = test_node.execute()

Here are a few examples of conversations with this single node:

Example 1)

======= Bot =======
Please input your full email address or phone number
======= Human Input =======
input: Hey, yeah I'm so excited to rent from you guys. My email is hire@danielwarfield.dev
*Info* parse results: {'continue': True, 'result': sampleOutputTemplate(output='hire@danielwarfield.dev'), 'num_fails': 0, 'continue_to': None}

Example 2)

======= Bot =======
Please input your full email address or phone number
======= Human Input =======
input: do you want mine or my wife's?
======= Bot =======
I'm sorry, I didn't understand your response.
Please provide a full email address or phone number (in the format xxx-xxx-xxxx)
======= Human Input =======
input: okay, I guess you want mine. 413-123-1234
*Info* parse results: {'continue': True, 'result': sampleOutputTemplate(output='413-123-1234'), 'num_fails': 0, 'continue_to': None}

Example 3)

======= Bot =======
Please input your full email address or phone number
======= Human Input =======
input: No
======= Bot =======
I'm sorry, I didn't understand your response.
Please provide a full email address or phone number (in the format xxx-xxx-xxxx)
======= Human Input =======
input: nope
======= Bot =======
I'm sorry, I didn't understand your response.
Please provide a full email address or phone number (in the format xxx-xxx-xxxx)
======= Human Input =======
input: I said no
*Info* parse results: {'continue': True, 'result': None, 'num_fails': 3, 'continue_to': None}

In example 1 the user includes some irrelevant information, but has a valid email in the response. In example 2 the user doesn't have a valid email or phone number in the first response, but does have one in the second. In example 3 the user has no valid responses, and one of the edges gives up and allows the conversation to progress.

It's worth noting, from a user-feel perspective, that this approach feels a bit robotic. While not explored in this post, it's easy to imagine how the user input could be used to construct the system's output to the user, either through string formatting or by asking an LLM to format a response.
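
For instance (a sketch of the idea, not something implemented in this post), each Node could hand its canned question and the user's last message to the LLM and ask for a more conversational rephrasing. The function name and prompt below are illustrative, and `model` is assumed to be the LLM defined earlier:

# A sketch of softening the bot's canned prompts: ask the llm to rephrase
# the next question in a way that acknowledges what the user just said.
def conversational_prompt(next_question, last_user_message):
    rephrase_request = (
        "You are a friendly real-estate assistant. The client just said:\n"
        f'"{last_user_message}"\n'
        "Briefly acknowledge their message, then ask the following question "
        f"in a natural, conversational way:\n{next_question}"
    )
    return model(rephrase_request)

print(conversational_prompt(
    "What is your monthly budget for rent?",
    "We just moved to the area and are really excited to find a place!",
))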

Now that we have Nodes and Edges, and have defined their functionality, we can put it all together to create the final conversation. We covered a general blueprint previously, but let's brush it up to be more reflective of what the graph will actually be doing. Recall the following:

  • Nodes have an initial prompt and a retry prompt
  • Edges have a condition, a parsing prompt, and a parsing structure. The condition is a boolean question asked about the user's input. If the condition is satisfied, the parsing structure is parsed based on the parsing prompt and the user's input. This is done by asking the large language model to reformat the user's input into a parsable representation using the pydantic parser.

Let's construct a conversational graph based on these definitions:

The conversational graph we'll be implementing, complete with all the necessary parameters for the nodes and edges.

As can be seen in the diagram above, some prompt engineering has been done to accommodate certain edge cases. For instance, the parsing prompt for Budget allows the parser to parse user responses like "my budget is around 1.5k".

Because of the flexibility of LLMs, it's really up to the engineer exactly how a graph like this might be implemented. If price parsing proves to be an issue in the future, one might want a few edges, each with different conditions and parsing prompts. For instance, one could imagine an edge that checks if a budget is over a certain value, thus implying that the user is providing a yearly budget instead of a monthly budget. The power of this system is the seamless addition or removal of these modifications.
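
As a sketch of that idea (not part of the final graph below), an extra edge hanging off the budget node could catch suspiciously large numbers and treat them as yearly figures; the condition and prompt wording here are illustrative only:

# A hypothetical extra edge for the budget node: if the number looks like a
# yearly figure, parse it and divide by 12 to get a monthly budget.
class yearlyBudgetTemplate(BaseModel):
    output: float = Field(description="yearly budget divided by 12")

yearly_budget_edge = Edge(
    "Does the input contain a number greater than 10,000, suggesting a yearly rather than monthly budget?",
    "Extract the number from the following text, remove any symbols, and divide it by 12 to get a monthly budget.",
    yearlyBudgetTemplate,
    model,
)

# Both edges can hang off the same node; whichever condition passes first wins.
# budget_node.edges = [budget_edge, yearly_budget_edge]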

We've already done all the heavy lifting; now we just have to code it up and see how it works. Here's the implementation:

"""Implementing the conversation as a directed graph
"""

# Defining Nodes
name_node = Node("Hello! My name's Dana and I will be getting you began in your renting journey. I will be asking you just a few questions, after which forwarding you to one among our excellent agents to enable you to discover a place you'd like to call home.nnFirst, are you able to please provide your name?", "I'm sorry, I do not understand, are you able to provide just your name?")
contact_node = Node("do you've a phone number or email we are able to use to contact you?", "I'm sorry, I didn't understand that. Are you able to please provide a legitimate email or phone number?")
budget_node = Node("What's your monthly budget for rent?", "I'm sorry, I do not understand the rent you provided. Try providing your rent in a format like '$1,300'")
avail_node = Node("Great, When is your soonest availability?", "I'm sorry, yet one more time, are you able to please provide a date you are willing to satisfy?")

#Defining Data Structures for Parsing
class nameTemplate(BaseModel): output: str = Field(description="a persons' name")
class phoneTemplate(BaseModel): output: str = Field(description="phone number")
class emailTemplate(BaseModel): output: str = Field(description="email address")
class budgetTemplate(BaseModel): output: float = Field(description="budget")
class dateTemplate(BaseModel): output: str = Field(description="date")

#defining the model
model_name = "text-davinci-003"
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)

#Defining Edges
name_edge = Edge("Does the input contain a persons' name?", " Extract the individuals name from the next text.", nameTemplate, model)
contact_phone_edge = Edge("does the input contain a legitimate phone number?", "extract the phone number within the format xxx-xxx-xxxx", phoneTemplate, model)
contact_email_edge = Edge("does the input contain a legitimate email?", "extract the e-mail from the next text", emailTemplate, model)
budget_edge = Edge("Does the input contain a number within the hundreds?", "Extract the number from the next text from the next text. Remove any symbols and multiply a number followed by the letter 'k' to hundreds.", budgetTemplate, model)
avail_edge = Edge("does the input contain a date or day? dates or relative terms like 'tommorrow' or 'in 2 days'.", "extract the day discussed in the next text as a date in mm/dd/yyyy format. Today is September twenty third 2023.", dateTemplate, model)

#Defining Node Connections
name_node.edges = [name_edge]
contact_node.edges = [contact_phone_edge, contact_email_edge]
budget_node.edges = [budget_edge]
avail_node.edges = [avail_edge]

#defining edge connections
name_edge.out_node = contact_node
contact_phone_edge.out_node = budget_node
contact_email_edge.out_node = budget_node
budget_edge.out_node = avail_node

#running the graph
current_node = name_node
while current_node is just not None:
res = current_node.execute()
if res['continue']:
current_node = res['continue_to']

And here are a few example conversations:


======= Bot =======
Hello! My name's Dana and I'll be getting you started on your renting journey. I'll be asking you a few questions, and then forwarding you to one of our excellent agents to help you find a place you'd love to call home.

First, can you please provide your name?
======= Human Input =======
input: daniel warfield
*Info* parse results: {'continue': True, 'result': nameTemplate(output='daniel warfield'), 'num_fails': 0, 'continue_to': <__main__.Node object at 0x7b196801dc60>}
======= Bot =======
do you have a phone number or email we can use to contact you?
======= Human Input =======
input: 4131231234
======= Bot =======
I'm sorry, I didn't understand that. Can you please provide a valid email or phone number?
======= Human Input =======
input: my phone number is 4131231234
*Info* parse results: {'continue': True, 'result': phoneTemplate(output='413-123-1234'), 'num_fails': 0, 'continue_to': <__main__.Node object at 0x7b196801c610>}
======= Bot =======
What is your monthly budget for rent?
======= Human Input =======
input: 1.5k
*Info* parse results: {'continue': True, 'result': budgetTemplate(output=1500.0), 'num_fails': 0, 'continue_to': <__main__.Node object at 0x7b196801c7c0>}
======= Bot =======
Great, when is your soonest availability?
======= Human Input =======
input: 2 days
*Info* parse results: {'continue': True, 'result': dateTemplate(output='09/25/2023'), 'num_fails': 0, 'continue_to': None}

======= Bot =======
Hello! My name's Dana and I'll be getting you started on your renting journey. I'll be asking you a few questions, and then forwarding you to one of our excellent agents to help you find a place you'd love to call home.

First, can you please provide your name?
======= Human Input =======
input: Hi Dana, my name's mike (michael mcfoil), it's a pleasure to meet you!
*Info* parse results: {'continue': True, 'result': nameTemplate(output='Michael Mcfoil'), 'num_fails': 0, 'continue_to': <__main__.Node object at 0x7b19681087c0>}
======= Bot =======
do you have a phone number or email we can use to contact you?
======= Human Input =======
input: yeah, you can reach me at mike at gmail
======= Bot =======
I'm sorry, I didn't understand that. Can you please provide a valid email or phone number?
======= Human Input =======
input: oh, sorry okay it's mike@gmail.com
*Info* parse results: {'continue': True, 'result': emailTemplate(output='mike@gmail.com'), 'num_fails': 0, 'continue_to': <__main__.Node object at 0x7b1968109960>}
======= Bot =======
What is your monthly budget for rent?
======= Human Input =======
input: I can do anywhere from 2 thousand to 5 thousand, depending on the property
*Info* parse results: {'continue': True, 'result': budgetTemplate(output=5000.0), 'num_fails': 0, 'continue_to': <__main__.Node object at 0x7b196810a260>}
======= Bot =======
Great, when is your soonest availability?
======= Human Input =======
input: does october 2nd work for you?
======= Bot =======
I'm sorry, one more time, can you please provide a date you are willing to meet?
======= Human Input =======
input: october 2nd
*Info* parse results: {'continue': True, 'result': dateTemplate(output='10/02/2023'), 'num_fails': 0, 'continue_to': None}

======= Bot =======
Hello! My name's Dana and I'll be getting you started on your renting journey. I'll be asking you a few questions, and then forwarding you to one of our excellent agents to help you find a place you'd love to call home.

First, can you please provide your name?
======= Human Input =======
input: je m'appelle daniel warfield
*Info* parse results: {'continue': True, 'result': nameTemplate(output='Daniel Warfield'), 'num_fails': 0, 'continue_to': <__main__.Node object at 0x7b196801c7c0>}
======= Bot =======
do you have a phone number or email we can use to contact you?
======= Human Input =======
input: mi número de teléfono es 410-123-1234
*Info* parse results: {'continue': True, 'result': phoneTemplate(output='410-123-1234'), 'num_fails': 0, 'continue_to': <__main__.Node object at 0x7b196801ec20>}
======= Bot =======
What is your monthly budget for rent?
======= Human Input =======
input: Mein monatliches Budget beträgt 3.000
*Info* parse results: {'continue': True, 'result': budgetTemplate(output=3000.0), 'num_fails': 0, 'continue_to': <__main__.Node object at 0x7b196801d390>}
======= Bot =======
Great, when is your soonest availability?
======= Human Input =======
input: אני יכול להיפגש מחר
======= Bot =======
I'm sorry, one more time, can you please provide a date you are willing to meet?
======= Human Input =======
input: Yes karogh yem handipel vaghy
======= Bot =======
I'm sorry, one more time, can you please provide a date you are willing to meet?
======= Human Input =======
input: I can meet tomorrow
*Info* parse results: {'continue': True, 'result': dateTemplate(output='09/24/2023'), 'num_fails': 0, 'continue_to': None}

In this article we formatted a lead qualification use case as a directed graph, implemented the necessary parsing functionality and data structures, and made an example graph which extracts key information from users. As can be seen in the example conversations, this system is by no means perfect, but because of the nature of directed graphs we can easily add new nodes to alleviate the impact of certain edge cases.

While not discussed in this article, there are a variety of ways to improve upon this system:

  • We could use different LangChain parsers, like the RetryOutputParser or OutputFixingParser, to retry or correct failed parses.
  • We could use an LLM cache to cache certain common responses, thus saving on cost (see the sketch below).
  • We could connect this system with a vector database to allow question answering against a knowledge base.
  • We could use the LLM to construct the prompts to the user, along with context about the conversation, to encourage more organic responses.
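
As one concrete example of the caching point above (a sketch, not something used in this post), LangChain exposes a global LLM cache that short-circuits repeated identical calls:

# Enabling LangChain's in-memory LLM cache: an identical prompt sent to the
# model a second time is served from the cache instead of the API, which
# saves tokens on repeated condition checks.
import langchain
from langchain.cache import InMemoryCache
from langchain.llms import OpenAI

langchain.llm_cache = InMemoryCache()

model = OpenAI(model_name="text-davinci-003", temperature=0.0)
model("Answer yes or no: is 'hire@danielwarfield.dev' a valid email?")  # hits the API
model("Answer yes or no: is 'hire@danielwarfield.dev' a valid email?")  # served from the cache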

While my contracting gig didn't pan out, I believe this approach highlights a flexible and robust framework which is extensible and applicable to a variety of applications.

I describe papers and concepts in the ML space, with an emphasis on practical and intuitive explanations.

Please like, share, and follow. As an independent writer, your support really makes a huge difference!

Attribution: All of the images in this document were created by Daniel Warfield, unless a source is otherwise provided. You can use any images in this post for your own non-commercial purposes, so long as you reference this article, https://danielwarfield.dev, or both.
