
# WFGY LlamaIndex Adapter (Semantic Firewall)

This adapter wraps a LlamaIndex `QueryEngine` to monitor semantic tension (ΔS) between a query and its retrieved source nodes.

## Usage

```python
from wfgy.adapters.llamaindex.firewall import WFGYSemanticFirewallLlama
from llama_index.embeddings.openai import OpenAIEmbedding

# Initialize the embedding model used for ΔS computation
embed_model = OpenAIEmbedding()

# Initialize the firewall
firewall = WFGYSemanticFirewallLlama(embedding_model=embed_model)

# Create your query engine (`index` is an existing LlamaIndex index)
query_engine = index.as_query_engine()

# Wrap it
firewall_engine = firewall.wrap_query_engine(query_engine)

# Use it as normal
response = firewall_engine.query("What is semantic tension?")
```
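
To give a sense of what the ΔS monitoring computes, here is a minimal, self-contained sketch. It assumes ΔS is measured as `1 - cosine similarity` between the query embedding and each retrieved node's embedding, and that the average over nodes is reported; the adapter's actual formula may differ. The function name `average_delta_s` is illustrative, not part of the adapter's API. Note the guard on an empty node list, which avoids a division by zero when a response has no retrieved nodes.

```python
import math

def average_delta_s(query_embedding, node_embeddings):
    """Average semantic tension (ΔS) between a query and retrieved nodes.

    Assumption: ΔS = 1 - cosine similarity. Returns None when no nodes
    were retrieved, guarding against division by zero.
    """
    if not node_embeddings:
        return None

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    tensions = [1.0 - cosine(query_embedding, e) for e in node_embeddings]
    return sum(tensions) / len(tensions)
```

An identical query and node embedding gives ΔS = 0 (no tension), while orthogonal embeddings give ΔS = 1.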