vai explain

Learn about embeddings, local inference, reranking, RAG, vector search, and more. Covers 30+ topics with explanations, key points, try-it commands, and links to further reading.

Synopsis

vai explain [topic]

Description

vai explain is an interactive learning tool built into vai. Run it without arguments to see all available topics, or pass a topic name to get a detailed explanation.

Each explanation includes:

  • Clear, formatted content explaining the concept
  • Practical "Try it" commands you can run immediately
  • Links to official documentation

Topic names support fuzzy matching, so you don't need to type the exact key.

Options

Flag       Description                                                Default
[topic]    Topic to explain (optional; lists all topics if omitted)
--json     Machine-readable JSON output

Examples

List all topics

vai explain

Learn about embeddings

vai explain embeddings

Learn about reranking

vai explain reranking

Learn about shared embedding spaces

vai explain shared-space

Learn about nano and local inference

vai explain nano

JSON output

vai explain embeddings --json

Available Topics

Topics include (non-exhaustive):

  • embeddings — What are vector embeddings?
  • reranking — Two-stage retrieval with rerankers
  • shared-space — Asymmetric retrieval across Voyage 4 models
  • nano — Local inference, the Python bridge, and the local-to-API workflow
  • cosine-similarity — How similarity scoring works
  • input-types — Query vs. document input types
  • quantization — Reducing embedding size with int8/binary
  • rag — Retrieval-augmented generation
  • vector-search — How ANN search works
  • moe — Mixture of Experts architecture
  • matryoshka — Flexible dimensions via Matryoshka learning
  • chunking — Document chunking strategies
  • two-stage — The two-stage retrieval pattern
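As a taste of the cosine-similarity topic above: the score is the dot product of two vectors divided by the product of their norms, giving 1.0 for identical directions and 0.0 for orthogonal ones. A minimal illustration (not vai's implementation):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # same direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```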

Run vai explain to see the full list with summaries.