Vector DB: similarity search on embeddings

vector-database
Pinecone-backed. Upsert, query, fetch. Pairs with Gemini embeddings for RAG.

1. Seed the corpus

The vector DB is empty until something's in it. This will embed 12 short Cohesivity tips with gemini-embedding-001 at 768 dims, then upsert them. Takes ~10 seconds — Gemini is rate-limited per minute on ephemeral.
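A rough sketch of the two payloads the seed step sends, assuming a Pinecone-style upsert body (`vectors` of `id`/`values`/`metadata`); the embed body mirrors the embedContent call shown under "The actual calls", and the helper names are illustrative, not the playground's code:

```python
# Build the request body for gemini-embedding-001:embedContent.
def embed_payload(text: str, dims: int = 768) -> dict:
    return {
        "content": {"parts": [{"text": text}]},
        "outputDimensionality": dims,
    }

# Build the body for POST /edge/vector-database/upsert from
# (id, vector, text) triples. The vectors/id/values/metadata shape
# is an assumption based on Pinecone's upsert format, not something
# this page confirms.
def upsert_payload(items):
    return {
        "vectors": [
            {"id": item_id, "values": vec, "metadata": {"text": text}}
            for item_id, vec, text in items
        ]
    }
```

One embed call per tip and one upsert for the batch, which is why seeding runs up against the per-minute rate limit.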

2. Ask in plain English

Your query gets embedded server-side (same model), then we run a cosine-similarity search. You'll get back the top 5 matches with their scores. Run the seed step first.

The actual calls

Embed (server-side, before query):
POST /edge/google-generative-ai-api/v1beta/models/gemini-embedding-001:embedContent
{
  "content": {
    "parts": [
      {
        "text": "How do I make a chat app stay live without storing typing indicators?"
      }
    ]
  },
  "outputDimensionality": 768
}
Then query the vector store:
POST /edge/vector-database/query
{
  "vector": "[0.012, -0.045, …, 768 floats]",
  "topK": 5,
  "includeMetadata": true
}
Tenant isolation is server-enforced — you can't read or set a namespace.
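The score on each match is plain cosine similarity between the query embedding and each stored vector. A minimal pure-Python sketch of the ranking the query performs (the function names are illustrative):

```python
import math

# Cosine similarity: dot product of the two vectors divided by the
# product of their magnitudes. 1.0 means identical direction.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Rank a corpus of (id, vector) pairs against a query vector,
# highest similarity first, mirroring topK in the query call.
def top_k(query, corpus, k=5):
    scored = [(vec_id, cosine_similarity(query, vec)) for vec_id, vec in corpus]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]
```

In the playground the query vector is the 768-dim embedding from the call above; any small vectors demonstrate the ranking.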