Guardrails
https://github.com/NVIDIA/NeMo-Guardrails/blob/aa07d889e9437dc687cd1c0acf51678ad435516e/docs/architecture/README.md#the-guardrails-process
Guardrails Runtime
1. Generate canonical user message
2. Decide next step and execute
3. Generate bot utterance

Each of these stages can involve one or more calls to the LLM.
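The three-stage loop above can be sketched as a simple pipeline. The function bodies below are toy stand-ins, not the actual NeMo Guardrails API; in the real runtime each stage may make one or more LLM calls.

```python
def generate_canonical_user_message(utterance: str) -> str:
    # Stage 1: map the raw utterance to a canonical user intent.
    # Toy rule-based mapping; the real runtime may call the LLM here.
    return "express greeting" if "hello" in utterance.lower() else "ask question"

def decide_next_step(user_intent: str) -> str:
    # Stage 2: decide what the bot should do next (flow lookup or LLM call).
    return {"express greeting": "express greeting back"}.get(user_intent, "respond to question")

def generate_bot_utterance(bot_intent: str) -> str:
    # Stage 3: render the bot intent as a concrete message (may call the LLM).
    return {"express greeting back": "Hello there!"}.get(bot_intent, "Let me look into that.")

def guardrails_runtime(user_utterance: str) -> str:
    # The three stages run in sequence for every user message.
    intent = generate_canonical_user_message(user_utterance)
    next_step = decide_next_step(intent)
    return generate_bot_utterance(next_step)
```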
Generate canonical user message
Generates a canonical form for the user utterance. This captures the user's intent and allows Guardrails to trigger flows. The generate_user_intent action:

1. Performs a vector search over all canonical form examples in the Guardrails configs
2. Takes the top 5 matches
3. Includes them in the prompt
4. Asks the LLM to generate the canonical form for the current user utterance
A new UserIntent event is created.
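The retrieve-top-5-and-prompt step can be sketched as below. The bag-of-words cosine similarity is a stand-in for the real embedding-based vector search, and the prompt layout is illustrative, not the actual NeMo Guardrails prompt template.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_examples(utterance, examples, k=5):
    # examples: (example_utterance, canonical_form) pairs from the config.
    q = Counter(utterance.lower().split())
    scored = sorted(examples,
                    key=lambda ex: cosine(q, Counter(ex[0].lower().split())),
                    reverse=True)
    return scored[:k]

def build_intent_prompt(utterance, examples):
    # Few-shot prompt: top-k canonical form examples, then the current utterance.
    shots = "\n".join(f'user "{u}"\n  {cf}' for u, cf in top_k_examples(utterance, examples))
    return f'{shots}\nuser "{utterance}"\n  '  # the LLM completes the canonical form
```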
Decide Next Steps
There are two potential paths for the next step:

1. The next step is already specified in a pre-defined flow
2. Otherwise, the LLM is used to decide the next step:
   - a vector search is performed for the most relevant flows in the Guardrails config
   - the top 5 flows are included in the prompt
   - the LLM is asked to predict the next step
In either case, the next step is emitted as an event: a BotIntent event (the bot should say something) or a StartInternalSystemAction event (the bot should execute an action).
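The flow-first, LLM-fallback decision can be sketched as follows. All names here are illustrative; `llm_predict` stands in for the LLM call primed with the top-5 most relevant flows.

```python
def decide_next_step(user_intent, flows, llm_predict):
    # flows: mapping of user intent -> next-step event, taken from the
    #   pre-defined flows in the Guardrails config.
    # llm_predict: fallback callable (assumed to wrap an LLM call that is
    #   prompted with the top-5 most relevant flows from the config).
    if user_intent in flows:
        # Deterministic path: a pre-defined flow already specifies the next step.
        return flows[user_intent]
    # Generalization path: ask the LLM to predict the next step.
    return llm_predict(user_intent)

# The resulting event is one of two kinds, e.g.:
#   {"type": "BotIntent", "intent": "..."}                  -> bot should say something
#   {"type": "StartInternalSystemAction", "action": "..."}  -> bot should run an action
```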
Generate Bot Utterances
1. A vector search is performed for the most relevant bot utterance examples in the Guardrails configs
2. If a knowledge base is provided, a vector search is also performed over it (the retrieve_relevant_chunks action)
3. The retrieved examples and chunks are included in the prompt
4. The LLM is asked to generate the utterance for the current bot intent
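The utterance-generation step can be sketched as below. The search callables and prompt layout are stand-ins, not the actual NeMo Guardrails internals; `chunk_search` plays the role of the retrieve_relevant_chunks action.

```python
def generate_bot_utterance(bot_intent, example_search, chunk_search, llm):
    # example_search: returns the top-5 most relevant bot utterance examples
    #   from the Guardrails configs (assumed vector search).
    # chunk_search: stand-in for the retrieve_relevant_chunks action; returns
    #   [] when no knowledge base is configured.
    # llm: callable that completes the prompt with a concrete utterance.
    context = example_search(bot_intent) + chunk_search(bot_intent)
    prompt = "\n".join(context) + f'\nbot "{bot_intent}"\n  '
    return llm(prompt)
```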