Codebase Intelligence
Repo Lens ingests your repository, generates vector embeddings, and answers natural-language questions with exact file-path and line-range citations — no hallucinations.
Upload a ZIP (≤25 MB) or paste a public GitHub URL. Repo Lens indexes text and code files, skipping binaries automatically.
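How might an indexer skip binaries automatically? One common heuristic (a sketch of the general technique, not Repo Lens's confirmed implementation) is to treat any file whose leading bytes contain a NUL as binary:

```python
def is_binary(data: bytes, sample_size: int = 8192) -> bool:
    """Heuristic binary check: text files almost never contain NUL bytes,
    so a NUL in the first few KB strongly suggests a binary file."""
    return b"\x00" in data[:sample_size]

# Example: a PNG header is flagged, ordinary source text is not.
png_header = b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR"
source_text = b"def main():\n    print('hello')\n"
```

The `sample_size` cutoff keeps the check fast on large files; both the name and the 8 KB window are illustrative choices.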
Files are chunked into 60-line windows, embedded with all-mpnet-base-v2 (768-dimension vectors), and stored in Supabase pgvector.
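The chunking step above can be sketched in a few lines. This records 1-based line ranges so each chunk can later be cited as `path:start-end`; the actual embedding call (e.g. `SentenceTransformer("all-mpnet-base-v2").encode(...)`, which yields 768-d vectors) is omitted to keep the sketch self-contained, and the non-overlapping windows are an assumption:

```python
def chunk_lines(text: str, window: int = 60) -> list[dict]:
    """Split a file into fixed 60-line windows, keeping 1-based line
    ranges so every chunk stays citable back to its source span."""
    lines = text.splitlines()
    chunks = []
    for start in range(0, len(lines), window):
        end = min(start + window, len(lines))
        chunks.append({
            "start": start + 1,          # 1-based first line of the window
            "end": end,                  # 1-based last line of the window
            "text": "\n".join(lines[start:end]),
        })
    return chunks
```

A 150-line file, for instance, yields three chunks covering lines 1-60, 61-120, and 121-150.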
Type a natural-language question. The nearest chunks are retrieved and fed to Groq Llama 3.1 to generate a grounded answer.
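The retrieval half of that step is standard nearest-neighbor ranking. Here is a minimal in-memory sketch using cosine similarity (in production the ranking happens inside pgvector, and the Groq generation call is not shown):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], chunks: list[dict], k: int = 3) -> list[dict]:
    """Return the k stored chunks most similar to the query embedding;
    these become the grounding context passed to the LLM."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["embedding"]),
                    reverse=True)
    return ranked[:k]
```

With toy 2-d embeddings, a query vector of `[1, 0]` ranks a chunk embedded at `[0.9, 0.1]` well above one at `[0, 1]`, which is the behavior that lets retrieval match code even when keywords differ.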
Every answer arrives with file-path and line-range citations you can click through to the original source.
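A citation of this kind is just the chunk's stored path and line range rendered together. The exact format below is illustrative, not Repo Lens's confirmed output:

```python
def format_citation(path: str, start: int, end: int) -> str:
    """Render a retrieved chunk's provenance as a compact, clickable-style
    reference (format is an assumed example)."""
    return f"{path}:{start}-{end}"
```

For example, a chunk covering lines 61-120 of `src/app.py` would be cited as `src/app.py:61-120`.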
From ingestion to answer — every step is grounded, auditable, and precise.
Supabase pgvector stores 768-d embeddings for all chunks. Semantic nearest-neighbor retrieval finds relevant code even when exact keywords differ.
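A pgvector nearest-neighbor lookup of this kind typically looks like the query below. The table and column names are assumptions, not Repo Lens's actual schema; `<=>` is pgvector's cosine-distance operator, so ordering by it ascending returns the semantically closest chunks first:

```python
# Illustrative retrieval query against a pgvector-enabled Postgres table.
# Schema names (chunks, path, start_line, end_line, embedding) are assumed.
RETRIEVAL_SQL = """
SELECT path, start_line, end_line, content
FROM chunks
ORDER BY embedding <=> %(query_embedding)s::vector(768)
LIMIT 5;
"""
```

The `vector(768)` cast matches the 768-d embeddings produced by all-mpnet-base-v2; the `LIMIT 5` is an arbitrary top-k choice.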
Every LLM answer comes with file path and line range references so you can inspect the source directly.
Fast inference via Groq ensures answers arrive in under two seconds while staying grounded in retrieved evidence.
Generate grounded refactor ideas from retrieved snippets. Every suggestion links back to the exact file and line range that motivated it.
Sign up free and start asking questions about your codebase in natural language — answers in seconds, citations included.
Get Started Free