
Timbr

Timbr connects natural language inputs to Timbr's ontology-driven semantic layer. The SDK integrates with Timbr data models and uses their semantic relationships and annotations, enabling users to query data in business-friendly language.

This notebook provides a quick overview for getting started with Timbr tools and agents. For more information about Timbr, visit Timbr.ai or the Timbr Documentation.

Overview

Integration details

The Timbr package for LangChain is langchain-timbr, which provides seamless integration with Timbr's semantic layer for natural language to SQL conversion.

Tool features

| Tool Name | Description |
| --- | --- |
| IdentifyTimbrConceptChain | Identify relevant concepts from user prompts |
| GenerateTimbrSqlChain | Generate SQL queries from natural language prompts |
| ValidateTimbrSqlChain | Validate SQL queries against Timbr knowledge graph schemas |
| ExecuteTimbrQueryChain | Execute SQL queries against Timbr knowledge graph databases |
| GenerateAnswerChain | Generate human-readable answers from query results |
| TimbrSqlAgent | End-to-end SQL agent for natural language queries |
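
Each of these tools is exposed as a class in the langchain-timbr package. As a quick import sketch (the chains used later in this notebook are imported exactly this way; the remaining class names are assumed to follow the same top-level convention):

from langchain_timbr import (
    IdentifyTimbrConceptChain,
    GenerateTimbrSqlChain,
    ValidateTimbrSqlChain,
    ExecuteTimbrQueryChain,
    GenerateAnswerChain,
    TimbrSqlAgent,
)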

TimbrSqlAgent Parameters

The TimbrSqlAgent is a pre-built agent that combines all the above tools for end-to-end natural language to SQL processing.

For the complete list of parameters and detailed documentation, see: TimbrSqlAgent Documentation

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| llm | BaseChatModel | Yes | Language model instance (ChatOpenAI, ChatAnthropic, etc.) |
| url | str | Yes | Timbr application URL |
| token | str | Yes | Timbr API token |
| ontology | str | Yes | Knowledge graph ontology name |
| schema | str | No | Database schema name |
| concept | str | No | Specific concept to focus on |
| concepts_list | List[str] | No | List of relevant concepts |
| views_list | List[str] | No | List of available views |
| note | str | No | Additional context or instructions |
| retries | int | No | Number of retry attempts (default: 3) |
| should_validate_sql | bool | No | Whether to validate generated SQL (default: True) |
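
As a quick illustration of required versus optional parameters, only llm, url, token, and ontology are needed to construct an agent; everything else falls back to its default. A minimal sketch with placeholder values (assuming an llm instance like the one created in the Instantiation section below):

# Minimal TimbrSqlAgent: required parameters only (placeholder values)
minimal_agent = TimbrSqlAgent(
    llm=llm,  # any LangChain chat model instance
    url="https://your-timbr-app.example.com",
    token="<timbr-api-token>",
    ontology="my_knowledge_graph",
)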

Setup

The integration lives in the langchain-timbr package.

In this example, we'll use OpenAI for the LLM provider.

%pip install --quiet -U langchain-timbr[openai]

Credentials

You'll need Timbr credentials to use the tools. Get your API token from your Timbr application's API settings.

import getpass
import os

# Set up Timbr credentials
if not os.environ.get("TIMBR_URL"):
os.environ["TIMBR_URL"] = input("Timbr URL:\n")

if not os.environ.get("TIMBR_TOKEN"):
os.environ["TIMBR_TOKEN"] = getpass.getpass("Timbr API Token:\n")

if not os.environ.get("TIMBR_ONTOLOGY"):
os.environ["TIMBR_ONTOLOGY"] = input("Timbr Ontology:\n")

if not os.environ.get("OPENAI_API_KEY"):
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:\n")

Instantiation

Instantiate Timbr tools and agents. First, let's set up the LLM and basic Timbr chains:

from langchain_timbr import (
    ExecuteTimbrQueryChain,
    GenerateAnswerChain,
    TimbrSqlAgent,
    LlmWrapper,
    LlmTypes,
)

# Set up the LLM
# from langchain_openai import ChatOpenAI
# llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Alternative: Use Timbr's LlmWrapper for an easy LLM setup
llm = LlmWrapper(
    llm_type=LlmTypes.OpenAI, api_key=os.environ["OPENAI_API_KEY"], model="gpt-4o"
)

# Instantiate Timbr chains
execute_timbr_query_chain = ExecuteTimbrQueryChain(
    llm=llm,
    url=os.environ["TIMBR_URL"],
    token=os.environ["TIMBR_TOKEN"],
    ontology=os.environ["TIMBR_ONTOLOGY"],
)

generate_answer_chain = GenerateAnswerChain(
    llm=llm, url=os.environ["TIMBR_URL"], token=os.environ["TIMBR_TOKEN"]
)

Invocation

Execute SQL queries from natural language

You can use the individual chains to perform specific operations:

# Execute a natural language query
result = execute_timbr_query_chain.invoke(
    {"prompt": "What are the total sales for last month?"}
)

print("SQL Query:", result["sql"])
print("Results:", result["rows"])
print("Concept:", result["concept"])

# Generate a human-readable answer from the results
answer_result = generate_answer_chain.invoke(
    {"prompt": "What are the total sales for last month?", "rows": result["rows"]}
)

print("Human-readable answer:", answer_result["answer"])

Use within an agent

Using TimbrSqlAgent

The TimbrSqlAgent provides an end-to-end solution that combines concept identification, SQL generation, validation, execution, and answer generation:

from langchain.agents import AgentExecutor

# Create a TimbrSqlAgent with all parameters
timbr_agent = TimbrSqlAgent(
    llm=llm,
    url=os.environ["TIMBR_URL"],
    token=os.environ["TIMBR_TOKEN"],
    ontology=os.environ["TIMBR_ONTOLOGY"],
    concepts_list=["Sales", "Orders"],  # optional
    views_list=["sales_view"],  # optional
    note="Focus on monthly aggregations",  # optional
    retries=3,  # optional
    should_validate_sql=True,  # optional
)

# Use the agent for end-to-end natural language to answer processing
agent_result = AgentExecutor.from_agent_and_tools(
    agent=timbr_agent,
    tools=[],  # No tools needed as we're directly using the chain
    verbose=True,
).invoke("Show me the top 5 customers by total sales amount this year")

print("Final Answer:", agent_result["answer"])
print("Generated SQL:", agent_result["sql"])
print("Usage Metadata:", agent_result.get("usage_metadata", {}))

Sequential Chains

You can combine multiple Timbr chains using LangChain's SequentialChain for custom workflows:

from langchain.chains import SequentialChain

# Create a sequential pipeline
pipeline = SequentialChain(
    chains=[execute_timbr_query_chain, generate_answer_chain],
    input_variables=["prompt"],
    output_variables=["answer", "sql", "rows"],
)

# Execute the pipeline
pipeline_result = pipeline.invoke(
    {"prompt": "What are the average order values by customer segment?"}
)

print("Pipeline Result:", pipeline_result)
# Example: Accessing usage metadata from Timbr operations
result_with_metadata = execute_timbr_query_chain.invoke(
    {"prompt": "How many orders were placed last quarter?"}
)

# Extract usage metadata
usage_metadata = result_with_metadata.get("execute_timbr_usage_metadata", {})
determine_concept_usage = usage_metadata.get("determine_concept", {})
generate_sql_usage = usage_metadata.get("generate_sql", {})

print(determine_concept_usage)

print(
    "Concept determination token estimate:",
    determine_concept_usage.get("approximate", "N/A"),
)
print(
    "Concept determination tokens:",
    determine_concept_usage.get("token_usage", {}).get("total_tokens", "N/A"),
)

print("SQL generation token estimate:", generate_sql_usage.get("approximate", "N/A"))
print(
    "SQL generation tokens:",
    generate_sql_usage.get("token_usage", {}).get("total_tokens", "N/A"),
)

API reference
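
For detailed documentation of all langchain-timbr chains and agents, including the complete set of parameters, refer to the Timbr Documentation linked above.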