Mirror of https://github.com/onestardao/WFGY.git (synced 2026-04-28 03:29:51 +00:00)
# WFGY LlamaIndex Adapter (Semantic Firewall)
This adapter wraps a LlamaIndex `QueryEngine` to monitor semantic tension (ΔS) on each query.
## Usage
```python
from wfgy.adapters.llamaindex.firewall import WFGYSemanticFirewallLlama
from llama_index.embeddings.openai import OpenAIEmbedding

# Initialize embeddings
embed_model = OpenAIEmbedding()

# Initialize the firewall
firewall = WFGYSemanticFirewallLlama(embedding_model=embed_model)

# Create your engine (assumes `index` is an existing LlamaIndex index)
query_engine = index.as_query_engine()

# Wrap it
firewall_engine = firewall.wrap_query_engine(query_engine)

# Use it as normal
response = firewall_engine.query("What is semantic tension?")
```
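
To illustrate the kind of quantity the firewall monitors, here is a minimal, self-contained sketch, assuming ΔS is measured as the cosine distance between the query embedding and the mean embedding of the retrieved nodes. The `delta_s` helper is hypothetical (not WFGY's actual API); the empty-list guard mirrors the adapter's handling of responses that return no source nodes.

```python
import math

def delta_s(query_emb, node_embs):
    """Illustrative ΔS: cosine distance between the query embedding and
    the mean retrieved-node embedding. Hypothetical helper, not WFGY's API.
    Returns None when no nodes were retrieved (guard against empty lists)."""
    if not node_embs:  # no retrieved nodes -> nothing to compare against
        return None
    dim = len(query_emb)
    # Mean embedding of the retrieved nodes
    mean = [sum(e[i] for e in node_embs) / len(node_embs) for i in range(dim)]
    dot = sum(q * m for q, m in zip(query_emb, mean))
    nq = math.sqrt(sum(q * q for q in query_emb))
    nm = math.sqrt(sum(m * m for m in mean))
    if nq == 0.0 or nm == 0.0:  # degenerate embeddings -> undefined ΔS
        return None
    return 1.0 - dot / (nq * nm)
```

A low ΔS means the retrieved context points in roughly the same direction as the query; a high ΔS signals retrieval drifting away from the question, which is the condition a semantic firewall would flag.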