High accuracy for defined metrics
Relta sets up a semantic layer that user queries are matched against. The semantic layer guardrails the generated SQL, giving you full certainty that the AI assistant will answer common user questions correctly.
Data privacy with sandboxed per user databases
With Relta, you no longer have to execute LLM-generated SQL against your production database. Per-user, in-process databases create isolated copies of the data, preventing leakage between users and providing bulletproof data privacy.
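A minimal sketch of per-user sandboxing, using SQLite in-memory databases as a stand-in for the in-process engine: each user gets an isolated copy holding only the rows they are allowed to see, so generated SQL can never touch production or another user's data. The table and filtering scheme are illustrative assumptions:

```python
import sqlite3

# Stand-in for the production table (never queried by the LLM directly).
PRODUCTION_ROWS = [
    ("alice", 120), ("alice", 80), ("bob", 50),
]

def sandbox_for(user: str) -> sqlite3.Connection:
    """Create an isolated in-memory copy scoped to a single user."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (user TEXT, amount INT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [r for r in PRODUCTION_ROWS if r[0] == user])
    return conn

# LLM-generated SQL runs only against the sandbox:
alice = sandbox_for("alice")
total = alice.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

Even a malicious `SELECT * FROM orders` in Alice's sandbox can only return Alice's rows, because Bob's data was never copied in.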
How it works
1. Connect to your database
Connect to your Postgres, MySQL, Parquet, or CSV data.
2. Generate semantic model
Relta uses sample questions and the database schema to automatically generate a semantic layer for your data, defining metrics, dimensions, and measures from the underlying tables.
The semantic layer is stored as a collection of JSON files. You can freely edit and iterate on them before deploying.
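As a hedged sketch of what generation like this can look like, the snippet below infers measures (numeric columns) and dimensions (text or timestamp columns) from an introspected schema and serializes the result as editable JSON. The schema format and field names are hypothetical, not Relta's actual file layout:

```python
import json

# Hypothetical introspected schema: table -> {column: SQL type}.
SCHEMA = {
    "orders": {"id": "INTEGER", "amount": "NUMERIC",
               "country": "TEXT", "created_at": "TIMESTAMP"},
}

def generate_semantic_model(schema: dict) -> dict:
    """Derive a toy semantic model: numeric columns become measures,
    text/timestamp columns become dimensions."""
    model = {}
    for table, cols in schema.items():
        model[table] = {
            "measures": [c for c, t in cols.items()
                         if t in ("NUMERIC", "INTEGER") and c != "id"],
            "dimensions": [c for c, t in cols.items()
                           if t in ("TEXT", "TIMESTAMP")],
        }
    return model

model = generate_semantic_model(SCHEMA)
as_json = json.dumps(model, indent=2)  # stored as a file you can hand-edit
```

Because the output is plain JSON, it can be reviewed, edited, and version-controlled before deploying.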
3. Deploy
Use the Python library directly in your code, or use one of our integrations with LLM frameworks such as the Assistants API and LangGraph Cloud.
4. Automate updates from user feedback
Relta suggests refinements to the semantic layer based on user feedback. A PR with the proposed changes is automatically opened against your repo.