Configuration Properties
Complete reference for all Contexa configuration properties. All properties are set in application.yml and bound through Spring Boot's @ConfigurationProperties mechanism. Properties are organized into 5 categories below.
Configuration Categories
Contexa provides 27 @ConfigurationProperties classes across all modules. Select a category to view the full property reference.
- **Infrastructure** — core platform, bridge, cache, telemetry, and distributed runtime properties. Covers ContexaProperties, Bridge, Cache, OpenTelemetry, Redis, Kafka, Event, Router, Pipeline, Plane, and Cold Path.
- **Security** — Zero Trust, HCAD, session, and enforcement properties. Covers Zero Trust, HCAD, Session, Step-Up, Decision Plane, and Distributed Enforcement.
- **AI Engine** — tiered LLM, mapping, advisor, RAG, streaming, and vector-store properties. Covers Tiered LLM, Mapping, Advisor, RAG, Streaming, and PgVector.
- **Identity** — auth context and state machine properties. Covers Authentication Context, MFA, and State Machine.
- **IAM** — policy combining, step-up, and admin console properties. Covers Policy Combining, Step-Up, and Admin Console.
Essential Properties
The most important properties to configure when starting with Contexa:
| Property | Default | Description |
|---|---|---|
| contexa.enabled | true | Master switch for the entire Contexa platform |
| contexa.infrastructure.mode | STANDALONE | STANDALONE (in-memory) or DISTRIBUTED (Redis + Kafka) |
| contexa.llm.enabled | true | Enable LLM integration for AI-driven security decisions |
| security.zerotrust.enabled | true | Enable zero-trust continuous verification |
| spring.auth.state-type | OAUTH2 | State management: SESSION or OAUTH2. Can be omitted; the default works for most setups. |
| contexa.llm.chat-model-priority | ollama,anthropic,openai | LLM provider priority order. This is a Contexa-specific property, not part of Spring AI. |
Minimal Configuration
A minimal application.yml to get started with Contexa in standalone mode:
```yaml
contexa:
  enabled: true
  infrastructure:
    mode: standalone
  llm:
    enabled: true
    chat-model-priority: ollama,anthropic,openai
    chat:
      ollama:
        base-url: http://127.0.0.1:11434
        model: qwen3.5:9b
  rag:
    enabled: true
security:
  zerotrust:
    enabled: true
spring:
  auth:
    state-type: SESSION
  ai:
    security:
      layer1:
        model: qwen2.5:14b
      layer2:
        model: exaone3.5:latest
```
Contexa Core Properties
Top-level properties under the contexa prefix, bound to ContexaProperties.
| Property | Type | Default | Description |
|---|---|---|---|
| contexa | | | |
| .enabled | boolean | true | Master switch to enable or disable the entire Contexa platform |
| .infrastructure.mode | enum | STANDALONE | Infrastructure mode: STANDALONE (in-memory) or DISTRIBUTED (Redis, Kafka) |
| .infrastructure.redis.enabled | boolean | true | Enable Redis integration for distributed caching |
| .infrastructure.kafka.enabled | boolean | true | Enable Kafka integration for event streaming (distributed mode) |
| .infrastructure.observability.enabled | boolean | true | Enable observability infrastructure |
| .infrastructure.observability.open-telemetry-enabled | boolean | true | Enable OpenTelemetry integration for distributed tracing |
| .enterprise.enabled | boolean | false | Enable enterprise-only integrations when the runtime provides them |
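As a sketch, switching from the default in-memory runtime to the distributed runtime is a matter of flipping the infrastructure toggles shown above; Redis and Kafka connection details themselves come from the usual Spring Boot `spring.data.redis.*` and `spring.kafka.*` properties, which are not covered here:

```yaml
contexa:
  infrastructure:
    mode: distributed   # lowercase binds to the DISTRIBUTED enum via Spring Boot's relaxed binding
    redis:
      enabled: true     # distributed caching
    kafka:
      enabled: true     # event streaming
```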
LLM Properties
| Property | Type | Default | Description |
|---|---|---|---|
contexa.llm | |||
.enabled | boolean | true | Enable LLM integration for AI-driven security decisions |
.chat-model-priority | String | ollama,anthropic,openai | Preferred chat provider order used by Contexa model resolution |
.advisor-enabled | boolean | true | Enable the AI advisor chain |
.embedding-model-priority | String | ollama,openai | Preferred embedding provider order used by Contexa model resolution |
.chat.ollama.base-url | String | "" | Dedicated Ollama chat runtime URL required when Ollama chat is enabled |
.chat.ollama.model | String | "" | Ollama chat model name used by the Contexa chat runtime |
.chat.ollama.keep-alive | String | "" | Optional keep-alive hint passed to the Ollama chat runtime |
.embedding.ollama.dedicated-runtime-enabled | boolean | false | Use a dedicated Ollama embedding runtime instead of the shared chat runtime |
.embedding.ollama.base-url | String | "" | Dedicated Ollama embedding runtime URL when dedicated-runtime-enabled is true |
.embedding.ollama.model | String | "" | Embedding model name for the Ollama embedding runtime |
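When embeddings should not share the chat runtime, the dedicated-runtime properties above can point at a second Ollama instance. A sketch, assuming a second instance on port 11435 (the port and model name are illustrative):

```yaml
contexa:
  llm:
    embedding:
      ollama:
        dedicated-runtime-enabled: true
        base-url: http://127.0.0.1:11435   # illustrative: separate Ollama instance for embeddings
        model: mxbai-embed-large
```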
Spring AI Tiered and External Provider Configuration
Contexa reads tier selection from spring.ai.security*. External Anthropic and OpenAI providers use standard spring.ai.* properties. Ollama runtime selection is configured under contexa.llm.*, not spring.ai.ollama.*.
| Property | Type | Description |
|---|---|---|
spring.ai | ||
.security.layer1.model | String | Tier-1 model name used for the first analysis pass |
.security.layer2.model | String | Tier-2 model name used for deep analysis and escalation |
.security.tiered.prompt-compression.enabled | boolean | Enable runtime prompt compression for tiered execution |
.security.tiered.layer1.timeout.total-ms | long | Total timeout budget for the tier-1 execution path |
.security.tiered.layer2.timeout-ms | long | Total timeout budget for the tier-2 execution path |
.anthropic.api-key | String | Anthropic API key for the standard Spring AI Anthropic client |
.openai.api-key | String | OpenAI API key for the standard Spring AI OpenAI client |
.openai.base-url | String | Override the OpenAI API base URL when a proxy or compatible endpoint is used |
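Putting the tier and provider properties together, a sketch of a tiered setup with prompt compression and Anthropic available as an external provider (timeout values are illustrative; note the asymmetric key shapes, `layer1.timeout.total-ms` versus `layer2.timeout-ms`, as listed above):

```yaml
spring:
  ai:
    security:
      layer1:
        model: qwen2.5:14b
      layer2:
        model: exaone3.5:latest
      tiered:
        prompt-compression:
          enabled: true
        layer1:
          timeout:
            total-ms: 3000    # illustrative budget for the fast first pass
        layer2:
          timeout-ms: 15000   # illustrative budget for deep analysis
    anthropic:
      api-key: ${ANTHROPIC_API_KEY:}
```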
Vector Store Properties
| Property | Type | Default | Description |
|---|---|---|---|
contexa.rag | |||
.enabled | boolean | true | Enable the RAG subsystem inside Contexa |
.defaults.top-k | int | 10 | Default number of retrieved documents for general retrieval |
.defaults.similarity-threshold | double | 0.7 | Default similarity threshold for general retrieval |
.risk.top-k | int | 50 | Number of retrieved documents for risk-oriented retrieval |
.risk.similarity-threshold | double | 0.8 | Similarity threshold for risk-oriented retrieval |
.etl.vector-table-name | String | vector_store | Logical vector table name used by Contexa ETL jobs |
.etl.chunk-size | int | 500 | Document chunk size used during vector ETL |
.etl.chunk-overlap | int | 50 | Chunk overlap used during vector ETL |
spring.ai.vectorstore.pgvector | |||
.dimensions | int | 1024 | Embedding dimension used by the pgvector store |
.batch-size | int | 100 | Batch size used when storing vectors |
.top-k | int | 100 | Default retrieval limit inside the pgvector store adapter |
.similarity-threshold | double | 0.5 | Minimum similarity threshold enforced by the pgvector adapter |
.search-timeout-ms | long | 10000 | Search timeout budget for pgvector queries |
.store-timeout-ms | long | 10000 | Store timeout budget for pgvector writes |
.document.chunk-size | int | 1000 | Chunk size used when preparing source documents for storage |
.document.chunk-overlap | int | 200 | Chunk overlap used when preparing source documents for storage |
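One common pitfall is a mismatch between `spring.ai.vectorstore.pgvector.dimensions` and the embedding model's output size. A sketch pairing the two, assuming an embedding model that emits 1024-dimensional vectors (as `mxbai-embed-large` does):

```yaml
contexa:
  llm:
    embedding:
      ollama:
        model: mxbai-embed-large   # assumed to emit 1024-dimensional embeddings
spring:
  ai:
    vectorstore:
      pgvector:
        dimensions: 1024           # must match the embedding model's output size
```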
Zero Trust Properties
Properties under security.zerotrust, bound to SecurityZeroTrustProperties.
| Property | Type | Default | Description |
|---|---|---|---|
| security.zerotrust | | | |
| .enabled | boolean | true | Enable the Zero Trust evaluation engine |
| .threat.initial | double | 0.3 | Initial threat score assigned to new sessions |
| .cache.ttl-hours | int | 24 | Trust evaluation cache TTL in hours |
| .cache.session-ttl-minutes | int | 30 | Session cache TTL in minutes |
| .cache.invalidated-ttl-minutes | int | 60 | Invalidated session cache TTL in minutes |
| .redis.timeout | int | 5 | Redis operation timeout in seconds |
| .redis.update-interval-seconds | int | 30 | Interval in seconds for syncing trust scores to Redis |
| .session.tracking-enabled | boolean | true | Enable AI-driven session tracking |
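For instance, a deployment that wants faster session turnover than the defaults might shorten the cache TTLs (the values below are illustrative, not recommendations):

```yaml
security:
  zerotrust:
    enabled: true
    cache:
      session-ttl-minutes: 10        # default 30
      invalidated-ttl-minutes: 30    # default 60
```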
HCAD Properties
Hierarchical Context-Aware Detection properties under hcad, bound to HcadProperties.
| Property | Type | Default | Description |
|---|---|---|---|
hcad | |||
.enabled | boolean | true | Enable the HCAD anomaly detection engine |
.filter-order | int | 100 | Order of the HCAD filter in the security filter chain |
.similarity.hot-path-threshold | double | 0.7 | Similarity threshold used by the hot path evaluation stage |
.baseline.learning.enabled | boolean | true | Enable continuous baseline learning |
.baseline.bootstrap.initial-samples | int | 10 | Minimum bootstrap sample count before the initial baseline is accepted |
.baseline.statistical.min-samples | int | 20 | Minimum sample count for statistical baseline updates |
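A sketch that raises the similarity threshold and demands more samples before baselines are accepted (values are illustrative):

```yaml
hcad:
  enabled: true
  similarity:
    hot-path-threshold: 0.8   # default 0.7
  baseline:
    bootstrap:
      initial-samples: 25     # default 10
    statistical:
      min-samples: 50         # default 20
```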
Autonomous Security Properties
| Property | Type | Default | Description |
|---|---|---|---|
contexa.autonomous | |||
.enabled | boolean | true | Enable autonomous security response processing |
.event-timeout | long | 30000 | Timeout for autonomous event processing in milliseconds |
Session Security Properties
Properties under security.session, bound to SecuritySessionProperties.
| Property | Type | Default | Description |
|---|---|---|---|
| security.session | | | |
| .cookie.name | String | SESSION | Session cookie name |
| .header.name | String | X-Auth-Token | Session header name for token-based sessions |
| .bearer.enabled | boolean | true | Enable bearer token session resolution |
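For a purely token-based deployment, a sketch that renames the session header and keeps bearer resolution on (`X-Session-Token` is an illustrative name, not a documented default):

```yaml
security:
  session:
    header:
      name: X-Session-Token   # illustrative; the default is X-Auth-Token
    bearer:
      enabled: true
```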
Full Configuration Example
A complete application.yml showing all major configuration sections:
```yaml
contexa:
  enabled: true
  infrastructure:
    mode: standalone
    redis:
      enabled: true
    kafka:
      enabled: false
    observability:
      enabled: true
      open-telemetry-enabled: true
  hcad:
    enabled: true
    similarity:
      hot-path-threshold: 0.7
    baseline:
      min-samples: 10
      cache-ttl: 3600
  llm:
    enabled: true
    advisor-enabled: true
    chat-model-priority: ollama,anthropic,openai
    embedding-model-priority: ollama,openai
    chat:
      ollama:
        base-url: http://127.0.0.1:11434
        model: qwen3.5:9b
        keep-alive: 24h
    embedding:
      ollama:
        dedicated-runtime-enabled: false
        model: mxbai-embed-large
  rag:
    enabled: true
    defaults:
      top-k: 10
      similarity-threshold: 0.7
    etl:
      vector-table-name: vector_store
      chunk-size: 500
      chunk-overlap: 50
  autonomous:
    enabled: true
    event-timeout: 30000
hcad:
  enabled: true
  filter-order: 100
  baseline:
    learning:
      enabled: true
security:
  zerotrust:
    enabled: true
    mode: ENFORCE
    threat:
      initial: 0.3
    cache:
      ttl-hours: 24
      session-ttl-minutes: 30
    redis:
      timeout: 5
      update-interval-seconds: 30
  session:
    cookie:
      name: SESSION
    header:
      name: X-Auth-Token
    bearer:
      enabled: true
spring:
  auth:
    state-type: SESSION
  ai:
    security:
      layer1:
        model: qwen2.5:14b
      layer2:
        model: exaone3.5:latest
      tiered:
        prompt-compression:
          enabled: true
    anthropic:
      api-key: ${ANTHROPIC_API_KEY:}
    openai:
      api-key: ${OPENAI_API_KEY:}
    vectorstore:
      pgvector:
        dimensions: 1024
        batch-size: 100
        top-k: 100
        similarity-threshold: 0.5
        document:
          chunk-size: 1000
          chunk-overlap: 200
```
ContexaProperties Quick Reference
Complete list of high-signal contexa.* properties from ContexaProperties. Each section links to the detailed sub-page.
Master Switches
| Property | Type | Default | Description |
|---|---|---|---|
contexa.enabled | boolean | true | Master switch for the entire Contexa platform. |
HCAD (Behavioral Analysis)
| Property | Type | Default | Description |
|---|---|---|---|
contexa.hcad.enabled | boolean | true | Enable the Contexa-side HCAD toggle. |
contexa.hcad.similarity.hot-path-threshold | double | 0.7 | Hot path similarity threshold exposed through ContexaProperties. |
contexa.hcad.baseline.min-samples | int | 10 | Minimum baseline sample count in the Contexa wrapper properties. |
contexa.hcad.baseline.cache-ttl | int | 3600 | Baseline cache TTL in seconds in the Contexa wrapper properties. |
LLM (Language Model)
| Property | Type | Default | Description |
|---|---|---|---|
contexa.llm.enabled | boolean | true | Enable LLM integration. |
contexa.llm.advisor-enabled | boolean | true | Enable the advisor chain. |
contexa.llm.chat-model-priority | String | ollama,anthropic,openai | Chat model provider priority order. |
contexa.llm.embedding-model-priority | String | ollama,openai | Embedding model provider priority order. |
contexa.llm.chat.ollama.base-url | String | "" | Ollama chat runtime URL. |
contexa.llm.embedding.ollama.dedicated-runtime-enabled | boolean | false | Enable a dedicated embedding runtime. |
RAG (Retrieval-Augmented Generation)
| Property | Type | Default | Description |
|---|---|---|---|
contexa.rag.enabled | boolean | true | Enable the RAG pipeline wrapper. |
contexa.rag.defaults.top-k | int | 10 | Default retrieval size. |
contexa.rag.defaults.similarity-threshold | double | 0.7 | Default similarity threshold. |
contexa.rag.etl.vector-table-name | String | vector_store | Logical vector table name for ETL output. |
Autonomous Agent
| Property | Type | Default | Description |
|---|---|---|---|
contexa.autonomous.enabled | boolean | true | Enable autonomous security processing. |
contexa.autonomous.event-timeout | long | 30000 | Event processing timeout in milliseconds. |
Infrastructure
| Property | Type | Default | Description |
|---|---|---|---|
contexa.infrastructure.mode | enum | STANDALONE | STANDALONE (in-memory) or DISTRIBUTED (Redis + Kafka). |
contexa.infrastructure.redis.enabled | boolean | true | Enable Redis for distributed caching. |
contexa.infrastructure.kafka.enabled | boolean | true | Enable Kafka for event streaming. |
contexa.infrastructure.observability.enabled | boolean | true | Enable observability. |
contexa.infrastructure.observability.open-telemetry-enabled | boolean | true | Enable OpenTelemetry integration. |
contexa.enterprise.enabled | boolean | false | Enable enterprise-only integrations when the runtime provides them. |
SaaS Integration (Enterprise)
| Property | Type | Default | Description |
|---|---|---|---|
contexa.saas.enabled | boolean | false | Enable SaaS integration with Contexa Cloud. |
contexa.saas.endpoint | String | https://saas.ctxa.ai | SaaS platform endpoint URL. |
contexa.saas.outbox-batch-size | int | 50 | Outbox batch size for event forwarding. |
contexa.saas.max-retry-attempts | int | 10 | Maximum retry attempts for failed event delivery. |
contexa.saas.dispatch-interval-ms | long | 30000 | Dispatch interval for SaaS forwarding jobs. |
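A sketch enabling SaaS forwarding with a smaller dispatch interval than the default (only properties documented above are shown; any credential or API-key property the SaaS integration may require is not covered here):

```yaml
contexa:
  saas:
    enabled: true
    endpoint: https://saas.ctxa.ai
    outbox-batch-size: 50
    max-retry-attempts: 10
    dispatch-interval-ms: 10000   # illustrative; default is 30000
```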