Layer 1 — The Knowledge Graph
As content syncs from your connectors, an extraction pipeline identifies entities and relationships. Entities become typed nodes. Relationships become typed edges with weights and timestamps.
Edge types extracted today
| Relation | Meaning | Example |
|---|---|---|
| sent_by | Message authored by a person | Slack message → user who posted it |
| replied_to | Response relationship | Email → the email it replies to |
| in_thread | Message belongs to a thread | Email → its parent thread node |
| posted_in | Message posted in a channel | Slack message → #billing channel |
| attended | Person attended a meeting | Fireflies transcript → person |
| organized_by | Person organized the meeting | Meeting → organizer |
| participant | Person involved in a thread | Email thread → each participant |
| in_folder | File lives in a folder | Drive file → parent folder |
| from_domain | Person’s email domain | Person → domain node |
| has_email | Person → their email address | Person → person:email@company.com |
Why typed edges
A vector store knows "Cole Smith" is near "Project Phoenix" in embedding space. It doesn’t know why. The graph knows Cole attended the Phoenix kickoff, sent_by three emails about the launch, and replied_to the legal review thread on April 7.
Multi-hop traversal
The reasoning loop walks 2–3 hops out from a seed node via a recursive CTE. Because the graph lives in ordinary Postgres tables, you can SELECT against it, join it to your operational data, and inspect it in any Postgres client.
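In Fabric the walk is a recursive CTE in Postgres, but the traversal itself is simple enough to sketch in plain Python. Everything here — the tuple shape, node ids, and function name — is illustrative, not Fabric's actual schema:

```python
from collections import deque

def traverse(edges, seed, max_hops=3):
    """Walk up to max_hops out from a seed node over typed edges.

    edges: list of (src, relation, dst) tuples. Traversal is undirected,
    mirroring a recursive-CTE walk over an edge table in both directions.
    Returns {node: hop distance from seed}.
    """
    adj = {}
    for src, rel, dst in edges:
        adj.setdefault(src, []).append((rel, dst))
        adj.setdefault(dst, []).append((rel, src))
    seen = {seed: 0}
    frontier = deque([seed])
    while frontier:
        node = frontier.popleft()
        if seen[node] == max_hops:
            continue  # stop expanding at the hop limit
        for rel, nxt in adj.get(node, []):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                frontier.append(nxt)
    return seen

edges = [
    ("msg:42", "sent_by", "person:cole"),
    ("person:cole", "attended", "meeting:phoenix-kickoff"),
    ("meeting:phoenix-kickoff", "organized_by", "person:dana"),
]
# Two hops from Cole reaches Dana via the kickoff meeting.
print(traverse(edges, "person:cole", max_hops=2))
```

The 2–3 hop cap is what keeps the walk cheap: each hop widens the frontier, and past three hops in a dense communication graph almost everything is reachable.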
Layer 2 — Fused Retrieval via Reciprocal Rank Fusion
Every query runs two rankers in parallel over the graph_nodes and observations tables:
BM25 keyword relevance
Postgres full-text search with ts_rank_cd cover-density ranking on tsvector columns. Title weighted A, body weighted B. Parsed via websearch_to_tsquery for safe handling of punctuation.
Vector similarity
pgvector HNSW indexes with cosine distance. Embedding model: OpenAI text-embedding-3-small (1536 dimensions).
The two ranked lists are merged with RRF at k = 60: each document's fused score is the sum, over rankers, of 1 / (k + rank).
Why RRF and not a weighted sum
A weighted sum of raw scores is fragile: BM25 scores are unbounded while cosine similarities live on a fixed scale, so any static blend is dominated by whichever scale happens to run larger. RRF sidesteps this entirely. It uses each ranker's opinion about ordering, not the raw score. Documents that rank well on both lists naturally rise to the top.
k = 60 is the standard constant from Cormack, Clarke & Büttcher (2009); it dampens the weight of top ranks slightly so a single ranker's #1 doesn't automatically win.
Implementation sketch
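In production the fusion runs in SQL over the two ranked subqueries; the core arithmetic is small enough to sketch in Python. Function and variable names here are illustrative:

```python
def rrf_fuse(bm25_ranking, vector_ranking, k=60):
    """Reciprocal Rank Fusion over two ranked lists of document ids.

    Each input list is ordered best-first. A document's fused score is
    the sum over rankers of 1 / (k + rank), with rank starting at 1.
    Documents appearing in only one list still score from that list.
    """
    scores = {}
    for ranking in (bm25_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Best-first by fused score.
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["a", "b", "c"]   # keyword ranker's order
vec = ["b", "c", "d"]    # vector ranker's order
print(rrf_fuse(bm25, vec))  # → ['b', 'c', 'a', 'd']
```

Note how "b" wins despite being #1 on neither list's raw scores: it ranks near the top of both, which is exactly the behavior the ordering-based fusion rewards.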
Layer 3 — Semantic Memory with Decay
Every user conversation produces observations — typed facts extracted by Claude Haiku from question-answer pairs:
| Type | Meaning |
|---|---|
| fact | Stable truths about people, systems, preferences |
| decision | Choices made and their rationale |
| commitment | Things someone said they’d do, with a deadline |
| risk | Concerns or blockers that were flagged |
| insight | Analytical conclusions drawn from data |
| pattern | Recurring behaviors or practices |
Importance math
Strengthened on reference: × 1.1
If the observation is pulled into a later conversation, importance multiplies by 1.1 (capped at 1.0).
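The strengthening rule is one line of arithmetic; a sketch (the function name is illustrative):

```python
def strengthen(importance, factor=1.1, cap=1.0):
    """Boost an observation's importance when a later conversation
    references it, saturating at the cap so repeated references
    can't push it past 1.0."""
    return min(cap, importance * factor)

print(strengthen(0.95))  # hits the 1.0 cap
```

The cap matters: without it, a frequently referenced observation would grow without bound and crowd out everything else at retrieval time.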
Co-occurrence edges
When multiple observations are retrieved together enough times, a weighted edge forms between them. Over time, the memory graph encodes not just what Fabric knows but what knowledge travels together.
Grounded in the knowledge graph
Every observation points back at the source content — the email thread, the meeting transcript, the Slack message where the fact originated. This is the difference between mem0 (floating memories with no provenance) and Fabric (facts with citations).
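The co-occurrence mechanic reduces to counting which observation pairs keep showing up in the same retrieval. A sketch — the threshold value and function name are assumptions, not Fabric's actual parameters:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(retrievals, threshold=3):
    """Count how often each pair of observations is retrieved together;
    past the threshold, the pair becomes a weighted edge whose weight
    is the co-retrieval count.

    retrievals: list of lists of observation ids, one per query.
    """
    counts = Counter()
    for obs_ids in retrievals:
        # sorted() canonicalizes the pair so (a, b) == (b, a)
        for a, b in combinations(sorted(set(obs_ids)), 2):
            counts[(a, b)] += 1
    return {pair: n for pair, n in counts.items() if n >= threshold}

retrievals = [
    ["obs1", "obs2", "obs3"],
    ["obs1", "obs2"],
    ["obs1", "obs2", "obs4"],
]
print(cooccurrence_edges(retrievals))  # → {('obs1', 'obs2'): 3}
```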
Layer 4 — Databases as First-Class Citizens
Fabric connects directly to PostgreSQL and MySQL. Not API wrappers — real connections with schema discovery.
Discover
Fabric introspects the schema: tables, columns, types, primary keys, foreign keys. The schema becomes queryable context for the agent.
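Introspection in Postgres boils down to queries against information_schema and the pg_catalog tables; the interesting step is shaping those rows into context an agent can read. A sketch, assuming a simplified (table, column, type, is_primary_key) row shape rather than the real catalog output:

```python
def summarize_schema(rows):
    """Fold introspected column rows into a compact per-table summary
    suitable for injecting into an agent's context window.

    rows: iterable of (table, column, col_type, is_pk) tuples --
    an assumed simplification of an information_schema.columns query
    joined against primary-key constraints.
    """
    tables = {}
    for table, column, col_type, is_pk in rows:
        desc = f"{column} {col_type}" + (" PRIMARY KEY" if is_pk else "")
        tables.setdefault(table, []).append(desc)
    return {t: ", ".join(cols) for t, cols in tables.items()}

rows = [
    ("customers", "id", "integer", True),
    ("customers", "name", "text", False),
    ("orders", "id", "integer", True),
    ("orders", "customer_id", "integer", False),
]
print(summarize_schema(rows))
```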
Example: cross-source join
Q: Pull the top 10 customers by revenue from public.customers who opened a support ticket in the last 7 days, and show me any Slack #support threads that mention them.
Fabric generates the SQL against public.customers, ranks by revenue, filters to customers with a ticket in the last 7 days, then searches #support Slack threads whose content matches any of those customer names. Returns a unified result with both.
Why the layers compound
| Layer | Provides | Can’t do alone |
|---|---|---|
| Knowledge graph | Relational reasoning, typed edges, cross-source timelines | Retrieve content to ground an answer |
| Fused retrieval (RRF) | Keyword + semantic precision in one query | Relationships between entities |
| Semantic memory | Accumulated understanding that adapts over time | Grounding in live data |
| Database connections | Real operational data in the same reasoning loop | Structure across unstructured sources |