Quick Start

Get Started

Get Contexa AI security running in your Spring Boot application in under a minute. Add the starter dependency, enable AI security, configure, and run.

Prerequisites

Before you begin, ensure the following are installed and running:

| Requirement | Version | Notes |
| --- | --- | --- |
| Java | 21+ | LTS release |
| PostgreSQL | 15+ | With the pgvector extension |
| Ollama | Latest | Local LLM inference |

Database Setup

Create the database and enable required extensions:

SQL
CREATE DATABASE contexa;
\c contexa
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
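Optionally, you can confirm that both extensions are active by querying the catalog:

```sql
SELECT extname FROM pg_extension WHERE extname IN ('vector', 'uuid-ossp');
```

Both names should appear in the result set.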

Ollama Models

Pull the required models for chat and embedding:

Shell
ollama pull qwen3.5:9b
ollama pull mxbai-embed-large
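To confirm the models are available locally before starting the application, list what Ollama has pulled:

```shell
ollama list
```

Both qwen3.5:9b and mxbai-embed-large should appear in the output.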

1. Add Dependency

Add the Contexa Spring Boot Starter to your project. This single dependency brings in all required modules: contexa-core, contexa-identity, contexa-iam, contexa-common, and contexa-autoconfigure.

Gradle (Kotlin DSL)
dependencies {
    implementation("ai.ctxa:spring-boot-starter-contexa:0.1.0")
}
Gradle (Groovy)
dependencies {
    implementation 'ai.ctxa:spring-boot-starter-contexa:0.1.0'
}
Maven
<dependency>
    <groupId>ai.ctxa</groupId>
    <artifactId>spring-boot-starter-contexa</artifactId>
    <version>0.1.0</version>
</dependency>

Transitive Dependencies — The starter includes Spring Security, Spring AI, and all Contexa modules. You do not need to declare them individually unless you require a specific version override.
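If you do need an override, one option in Gradle Kotlin DSL is a dependency constraint. This is a sketch only; the Spring AI coordinates and version below are placeholders, not a recommendation:

```kotlin
dependencies {
    implementation("ai.ctxa:spring-boot-starter-contexa:0.1.0")

    constraints {
        // Example only: pin a transitive artifact to a specific version.
        implementation("org.springframework.ai:spring-ai-core:1.0.0-M5")
    }
}
```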

2. Enable AI Security

Annotate your main application class with @EnableAISecurity. This activates the full Contexa AI security infrastructure. If no user-defined PlatformConfig bean is present, AiSecurityConfiguration.platformDslConfig() auto-registers a default one built through IdentityDslRegistry, wiring the Zero Trust filter chain automatically.

Java
@SpringBootApplication
@EnableAISecurity
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}

What Happens Behind the Scenes — @EnableAISecurity is meta-annotated with @Import(AiSecurityImportSelector.class), which imports AiSecurityConfiguration. That configuration class contains platformDslConfig(), a @ConditionalOnMissingBean(PlatformConfig.class) factory that builds a default PlatformConfig via IdentityDslRegistry when you have not declared your own.
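If you want to customize the platform configuration, the skeleton below shows where an overriding bean would live. The construction call is a hypothetical placeholder, not the real API; the actual builder methods come from the Contexa DSL, which this guide does not cover:

```java
// Sketch only: declaring any PlatformConfig bean of your own suppresses the
// auto-registered default, because platformDslConfig() is guarded by
// @ConditionalOnMissingBean(PlatformConfig.class).
@Configuration
public class CustomSecurityConfig {

    @Bean
    public PlatformConfig platformConfig(IdentityDslRegistry registry) {
        // Build a customized PlatformConfig here; buildCustomConfig is a
        // hypothetical helper standing in for the Contexa DSL calls.
        return buildCustomConfig(registry);
    }
}
```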

3. Configure

Create or update your application.yml with the minimal required configuration for infrastructure mode, datasource, and LLM provider.

YAML
contexa:
  infrastructure:
    mode: standalone
  llm:
    chatModelPriority: ollama,anthropic,openai
    embeddingModelPriority: ollama,openai
    chat:
      ollama:
        baseUrl: http://127.0.0.1:11434
        model: qwen3.5:9b
        keepAlive: 24h
    embedding:
      ollama:
        baseUrl: http://127.0.0.1:11434
        model: mxbai-embed-large

spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/contexa
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
    driver-class-name: org.postgresql.Driver

  ai:
    vectorstore:
      pgvector:
        table-name: vector_store
        index-type: HNSW
        distance-type: COSINE_DISTANCE
        dimensions: 1024
        initialize-schema: true

  jpa:
    database: POSTGRESQL
    hibernate:
      ddl-auto: update
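The datasource placeholders above resolve from environment variables. For local development you might export them before launching the application (illustrative values only; use your real credentials or a secrets manager):

```shell
export DB_USERNAME=contexa_app
export DB_PASSWORD=change-me
```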

Model Priority — The contexa.llm.chatModelPriority property accepts a comma-separated list of providers. Contexa will attempt each provider in order, falling back to the next if unavailable. Supported values: ollama, anthropic, openai. The default is ollama,anthropic,openai.
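As a rough mental model, the fallback behavior amounts to ordinary list iteration. This sketch is illustrative only and is not Contexa's actual implementation:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

public class ModelPrioritySketch {

    // Parse "ollama,anthropic,openai" into an ordered provider list,
    // tolerating whitespace around the commas.
    public static List<String> parsePriority(String property) {
        return List.of(property.split("\\s*,\\s*"));
    }

    // Return the first provider in priority order that reports itself
    // available; empty if none do.
    public static Optional<String> selectProvider(String property,
                                                  Predicate<String> isAvailable) {
        return parsePriority(property).stream().filter(isAvailable).findFirst();
    }

    public static void main(String[] args) {
        // Pretend Ollama is unreachable; the next entry wins.
        Optional<String> chosen = selectProvider("ollama,anthropic,openai",
                p -> !p.equals("ollama"));
        System.out.println(chosen.orElse("none")); // prints "anthropic"
    }
}
```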

4. Run & Verify

Start your application and verify Contexa is active.

Shell
# Gradle
./gradlew bootRun

# Maven
mvn spring-boot:run

Verify: Auto-Configuration Report

Run with the --debug flag to see which Contexa auto-configurations were applied:

Shell
./gradlew bootRun --args='--debug'

Look for CoreInfrastructureAutoConfiguration, CoreLLMAutoConfiguration, and CoreHCADAutoConfiguration under Positive matches in the condition evaluation report.

Verify: Actuator Health Endpoint

Add spring-boot-starter-actuator to your dependencies if not already present, then check the health endpoint:
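If Actuator is not yet on the classpath, add the starter alongside Contexa (Gradle Kotlin DSL shown; the Spring Boot plugin supplies the version):

```kotlin
dependencies {
    implementation("org.springframework.boot:spring-boot-starter-actuator")
}
```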

Shell
curl http://localhost:8080/actuator/health

Database Required — Contexa will fail to start if PostgreSQL is unreachable. Ensure your datasource configuration is correct and the database exists before launching the application.

Method-Level Security

Protect Your First Method

Apply AI-driven zero trust enforcement to any service method using the @Protectable annotation. Each invocation is intercepted by AuthorizationManagerMethodInterceptor and evaluated by ProtectableMethodAuthorizationManager through Spring Security's MethodSecurityExpressionHandler. When sync = true, SynchronousProtectableDecisionService invokes SecurityPlaneAgent inline and produces a ZeroTrustAction decision before the method returns; when sync = false (default), a ZeroTrustSpringEvent is published and evaluation runs asynchronously.

Java
@Service
public class OrderService {

    @Protectable
    public Order getOrder(Long orderId) {
        return orderRepository.findById(orderId)
                .orElseThrow(() -> new OrderNotFoundException(orderId));
    }
}

@Protectable Attributes

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| ownerField | String | "" | Field name on the return type used to identify the resource owner for ownership-based authorization checks. |
| sync | boolean | false | When true, Zero Trust evaluation completes synchronously before the method returns. When false (default), evaluation runs asynchronously and the method proceeds immediately. |

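Putting both attributes together, here is a hypothetical ownership-aware, synchronous variant of the earlier example. The customerId field name is an assumption for illustration, not a prescribed value:

```java
@Service
public class OrderService {

    // Assumes Order exposes a "customerId" field identifying its owner.
    // sync = true holds the call until the AI decision is produced.
    @Protectable(ownerField = "customerId", sync = true)
    public Order getOrder(Long orderId) {
        return orderRepository.findById(orderId)
                .orElseThrow(() -> new OrderNotFoundException(orderId));
    }
}
```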
AI Decisions

What Happens Next — Zero Trust Actions

Every request evaluated by Contexa produces one of the following ZeroTrustAction decisions. The AI engine determines the appropriate action based on behavioral analysis, threat assessment, and trust scoring.

| Action | HTTP Status | TTL | Behavior |
| --- | --- | --- | --- |
| ALLOW | 200 | 15s | Request proceeds normally |
| BLOCK | 403 | Permanent | Request is denied; user receives ROLE_BLOCKED |
| CHALLENGE | 401 | 1800s | Additional authentication (MFA) required; user receives ROLE_MFA_REQUIRED |
| ESCALATE | 423 | 300s | Request held for manual review; user receives ROLE_REVIEW_REQUIRED |
| PENDING_ANALYSIS | 503 | 0s | AI analysis in progress; user receives ROLE_PENDING_ANALYSIS and the request is deferred until evaluation completes |
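The decision table can be summarized as a plain enum. This sketch is illustrative only and is not the actual ai.ctxa ZeroTrustAction type:

```java
public enum ZeroTrustActionSketch {
    ALLOW(200, 15),
    BLOCK(403, -1),              // -1 stands in for "permanent" here
    CHALLENGE(401, 1800),
    ESCALATE(423, 300),
    PENDING_ANALYSIS(503, 0);

    public final int httpStatus;
    public final int ttlSeconds;

    ZeroTrustActionSketch(int httpStatus, int ttlSeconds) {
        this.httpStatus = httpStatus;
        this.ttlSeconds = ttlSeconds;
    }
}
```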

AI-Driven Decisions — Unlike traditional rule-based systems, Contexa's LLM engine determines the appropriate action for each request based on real-time behavioral analysis. No static rules to configure — the AI learns and adapts to your application's traffic patterns.