Setup | crAPI

Note: A custom JWKS key can be supplied by adding a jwks.json file to the keys folder of each deployment folder, such as /deploy/docker/keys
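As a sketch of that note, assuming your key set lives in a file named my-jwks.json (a placeholder name), the copy for the docker deployment looks like:

```shell
# Placeholder JWKS file so the snippet is self-contained; use your real key set.
echo '{"keys": []}' > my-jwks.json

# Copy it into the keys folder of the docker deployment; repeat for the other
# folders under deploy/ if you use those deployments too.
mkdir -p deploy/docker/keys
cp my-jwks.json deploy/docker/keys/jwks.json
```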

Docker and docker compose

You'll need Docker and Docker Compose installed and running on your host system, with a Compose version of 1.27.0 or above. Check your version with:

docker compose version

Upgrade your Docker Compose installation if you get errors like:

ERROR: Invalid interpolation format for ...
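A minimal version-gate sketch, assuming `docker compose version --short` prints a bare version string (hard-coded here so the snippet runs standalone):

```shell
required="1.27.0"
installed="2.24.0"   # in practice: installed="$(docker compose version --short)"

# sort -V orders version strings numerically; if the smaller of the two is the
# required minimum, the installed version is new enough.
lowest="$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
  status="ok"
else
  status="too old"
fi
echo "docker compose $installed: $status"
```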

Using prebuilt images

You can use prebuilt images generated by our CI workflow by downloading the docker-compose and .env files.

Start crAPI

  • To use the latest stable version:

    • Linux Machine
    curl -L -o /tmp/crapi.zip https://github.com/OWASP/crAPI/archive/refs/heads/main.zip
    unzip /tmp/crapi.zip
    cd crAPI-main/deploy/docker
    docker compose pull
    docker compose -f docker-compose.yml --compatibility up -d
    

    To override server configurations, change the variable values in the .env file, or prefix the docker compose command with the variables you want to set.

    For example, to expose the system on all network interfaces:

    LISTEN_IP="0.0.0.0" docker compose -f docker-compose.yml --compatibility up -d
    
    • Windows Machine
    curl.exe -L -o crapi.zip https://github.com/OWASP/crAPI/archive/refs/heads/main.zip
    tar -xf .\crapi.zip
    cd crAPI-main/deploy/docker
    docker compose pull
    docker compose -f docker-compose.yml --compatibility up -d
    

    To override server configurations, change the variable values in the .env file, or prefix the docker compose command with the variables you want to set.

    For example, to expose the system on all network interfaces:

    LISTEN_IP="0.0.0.0" docker compose -f docker-compose.yml --compatibility up -d
    
  • To use the latest development version:

    • Linux Machine

    curl -L -o /tmp/crapi.zip https://github.com/OWASP/crAPI/archive/refs/heads/develop.zip
    unzip /tmp/crapi.zip
    cd crAPI-develop/deploy/docker
    docker compose pull
    docker compose -f docker-compose.yml --compatibility up -d
    

    To override server configurations, change the variable values in the .env file, or prefix the docker compose command with the variables you want to set.

    For example, to expose the system on all network interfaces:

    LISTEN_IP="0.0.0.0" docker compose -f docker-compose.yml --compatibility up -d
    
    • Windows Machine
    curl.exe -L -o crapi.zip https://github.com/OWASP/crAPI/archive/refs/heads/develop.zip
    tar -xf .\crapi.zip
    cd crAPI-develop/deploy/docker
    docker compose pull
    docker compose -f docker-compose.yml --compatibility up -d
    

    To override server configurations, change the variable values in the .env file, or prefix the docker compose command with the variables you want to set.

    For example, to expose the system on all network interfaces:

    LISTEN_IP="0.0.0.0" docker compose -f docker-compose.yml --compatibility up -d
    
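The prefix form works because variables set on the command line take precedence over the .env defaults. A self-contained illustration of that precedence, with a stand-in function instead of docker compose and an assumed 127.0.0.1 default:

```shell
# demo stands in for docker compose reading LISTEN_IP from its environment;
# 127.0.0.1 stands in for the value the .env file would supply.
demo() { echo "LISTEN_IP=${LISTEN_IP:-127.0.0.1}"; }

default_out="$(demo)"                      # no prefix: falls back to the default
override_out="$(LISTEN_IP=0.0.0.0 demo)"   # the prefix wins, as in the command above
echo "$default_out"
echo "$override_out"
```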

Note: All emails are sent to the mailhog service by default and can be checked at http://localhost:8025. You can change the SMTP configuration if required; however, all emails addressed to the example.com domain will still go to mailhog.

Chatbot LLM configuration

The chatbot supports multiple LLM providers. Provider selection and models are set via environment variables. OpenAI and Anthropic keys are set via the API (with env fallback); all other providers use env credentials only.

Core settings:

  • CHATBOT_LLM_PROVIDER: openai, anthropic, azure_openai, bedrock, vertex, groq, mistral, cohere
  • CHATBOT_LLM_MODEL: optional, provider model name. Defaults per provider:
    • openai: gpt-4o-mini
    • anthropic: claude-sonnet-4-20250514
    • bedrock: anthropic.claude-3-sonnet-20240229-v1:0
    • vertex: gemini-1.5-flash
    • groq: llama-3.3-70b-versatile
    • mistral: mistral-large-latest
    • cohere: command-r-plus
    • azure_openai: uses AZURE_OPENAI_CHAT_DEPLOYMENT
  • CHATBOT_EMBEDDINGS_MODEL: optional embeddings model name (used across providers)
  • CHATBOT_EMBEDDINGS_DIMENSIONS: optional, defaults to 1536

OpenAI (API, optional env fallback):

  • POST /genai/init with {"openai_api_key":"..."} (per-session)
  • CHATBOT_OPENAI_API_KEY (optional env fallback)

Anthropic (API, optional env fallback):

  • POST /genai/init with {"anthropic_api_key":"..."} (per-session)
  • ANTHROPIC_API_KEY (optional env fallback)
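A hedged sketch of the per-session init call, assuming the endpoint is reachable through the main web listener on localhost:8888; the key value is a placeholder, and for Anthropic the payload field would be anthropic_api_key instead:

```shell
key="sk-placeholder"   # replace with your real provider key
payload="{\"openai_api_key\":\"$key\"}"
echo "$payload"

# Uncomment once crAPI is running (host and port are assumptions, adjust as needed):
# curl -s -X POST "http://localhost:8888/genai/init" \
#      -H "Content-Type: application/json" -d "$payload"
```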

Azure OpenAI (env only):

  • AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_CHAT_DEPLOYMENT
  • Auth: AZURE_OPENAI_API_KEY or AZURE_AD_TOKEN (managed identity)
  • Optional: AZURE_OPENAI_API_VERSION, AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT

AWS Bedrock (env only):

  • AWS_REGION and either:
    • AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN (optional), or
    • AWS_BEARER_TOKEN_BEDROCK

Google Vertex AI (env only):

  • VERTEX_PROJECT, VERTEX_LOCATION
  • GOOGLE_APPLICATION_CREDENTIALS (optional if using ADC in GCP environments)

Groq / Mistral / Cohere (env only):

  • GROQ_API_KEY, MISTRAL_API_KEY, COHERE_API_KEY
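Putting the variables together, a hypothetical .env fragment selecting AWS Bedrock with static credentials (every value below is a placeholder):

```shell
CHATBOT_LLM_PROVIDER=bedrock
CHATBOT_LLM_MODEL=anthropic.claude-3-sonnet-20240229-v1:0   # the per-provider default listed above
CHATBOT_EMBEDDINGS_DIMENSIONS=1536
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=replace-me
AWS_SECRET_ACCESS_KEY=replace-me
```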

Build it yourself

  1. Clone crAPI repository

    • Linux Machine
        $ git clone [REPOSITORY-URL]
    
    • Windows Machine
        $ git clone [REPOSITORY-URL] --config core.autocrlf=input
    
  2. Build all docker images

    • Linux Machine
    $ cd deploy/docker && ./build-all.sh
    
    • Windows Machine
    $ call "%cd%\deploy\docker\build-all.bat"
    
  3. Start crAPI

    $ cd deploy/docker
    
    $ docker compose -f docker-compose.yml --compatibility up -d
    
    
  4. Visit http://localhost:8888

Note: All emails are sent to the mailhog service by default and can be checked at http://localhost:8025. You can change the SMTP configuration if required; however, all emails addressed to the example.com domain will still go to mailhog.

Kubernetes

Using Helm Charts

  1. Clone the repo

    git clone [REPOSITORY-URL]
    
  2. Install the helm chart

    In order to mount the data at a specific location manually, update hostPath in values-pv.yaml and use that file when installing the helm chart.

    cd deploy/helm
    
    helm install --namespace crapi crapi . --values values-pv.yaml
    

    Otherwise install the helm chart normally.

    cd deploy/helm
    
    helm install --namespace crapi crapi . --values values.yaml
    
  3. If using minikube, create a tunnel to initialize the LoadBalancers

    minikube tunnel --alsologtostderr
    
  4. Access crAPI

    crAPI should be available at <LOADBALANCER_IP>:8888 and Mailhog at <LOADBALANCER_IP>:8025.

    Or, for minikube, run the following commands to get the URLs:

    crAPI URL:
    $ echo "http://$(minikube ip):30080"

    Mailhog URL:
    $ echo "http://$(minikube ip):30025"
    

Vagrant

This option lets you run crAPI inside a virtual machine, isolated from your system. You'll need Vagrant and a provider such as VirtualBox installed.

  1. Clone crAPI repository
    $ git clone [REPOSITORY-URL]
    
  2. Start crAPI Virtual Machine
    $ cd deploy/vagrant && vagrant up
    
  3. Visit http://192.168.33.20

Note: All emails are sent to the mailhog service by default and can be checked at http://192.168.33.20:8025. You can change the SMTP configuration if required; however, all emails addressed to the example.com domain will still go to mailhog.

Once you're done playing with crAPI, you can remove it completely from your system by running the following command from the repository root directory:

$ cd deploy/vagrant && vagrant destroy

Troubleshooting guide for general issues while installing and running crAPI

If you need help installing or running crAPI, check out this guide: Troubleshooting guide crAPI. If that doesn't solve your problem, please create an issue on GitHub Issues.