Skip to content

How-to Guides

You already have some LangChain code that uses either an LLM or agent? Then look at the code snippets below to see how you can secure it just with a small code change.

Make sure you have installed the Lakera ChainGuard package and got your Lakera Guard API key as an environment variable.

from lakera_chainguard import LakeraChainGuard
chain_guard = LakeraChainGuard(endpoint="prompt_injection", raise_error=True)

Guarding LLM

llm = OpenAI()
-->
GuardedOpenAI = chain_guard.get_guarded_llm(OpenAI)
llm = GuardedOpenAI()

Guarding ChatLLM

chatllm = ChatOpenAI()
-->
GuardedChatOpenAI = chain_guard.get_guarded_chat_llm(ChatOpenAI)
chatllm = GuardedChatOpenAI()

Guarding off-the-shelf agent

llm = OpenAI()
agent_executor = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
-->
GuardedOpenAI = chain_guard.get_guarded_llm(OpenAI)
llm = GuardedOpenAI()
agent_executor = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

Guarding custom agent

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
-->
GuardedAgentExecutor = chain_guard.get_guarded_agent_executor()
agent_executor = GuardedAgentExecutor(agent=agent, tools=tools, verbose=True)