Technology & System Requirements

Technical specifications and security architecture of LokalKI.

Data Flow – 100% Local

[Diagram: Your Data → LokalKI Server → Employee – no cloud data transfer]

Technical Specifications

Requirement    | Details
Docker         | Container-based deployment, Docker Compose
Linux Support  | Ubuntu 20.04+, Debian 11+, RHEL 8+
NVIDIA GPU     | Recommended for LLM inference (CUDA 11.8+)

System Requirements

Docker Support

LokalKI is deployed as Docker containers and orchestrated with Docker or Docker Compose, which keeps deployment simple and avoids complex host-level dependencies.
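As a rough sketch, a container-based deployment of this kind could be described in a docker-compose.yml like the following. All service names, image tags, ports, and paths here are illustrative assumptions, not the actual LokalKI distribution:

```yaml
# Hypothetical sketch only – image names and ports are placeholders.
services:
  lokalki-app:
    image: lokalki/app:latest        # assumed application image
    ports:
      - "8080:8080"
    depends_on:
      - vector-db
      - llm-inference

  llm-inference:
    image: lokalki/inference:latest  # assumed inference image
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia         # exposes the recommended NVIDIA GPU
              count: 1
              capabilities: [gpu]

  vector-db:
    image: lokalki/vectordb:latest   # assumed vector database image
    volumes:
      - ./data/index:/var/lib/vectordb  # embeddings stay on local disk
```

The GPU reservation block matches the hardware recommendation above; on hosts without a GPU it could simply be omitted.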

Hardware Recommendations

  • NVIDIA GPU recommended (for optimal inference performance)
  • 16 GB RAM minimum (32 GB recommended for larger models)
  • Sufficient disk space for vector database and document index

Security Architecture

Air-gapped Capable

LokalKI can run in a fully air-gapped environment. No dependency on external services – ideal for highly sensitive environments.
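For an air-gapped installation, container images can be moved onto the isolated host with standard Docker tooling (export on a connected machine, import offline). The image name below is a placeholder, not the actual LokalKI image:

```shell
# On a machine with internet access: pull and export the images
docker pull lokalki/app:latest                       # placeholder image name
docker save -o lokalki-images.tar lokalki/app:latest

# Transfer lokalki-images.tar via removable media, then on the
# air-gapped host: import and start without any network access
docker load -i lokalki-images.tar
docker compose up -d
```

After `docker load`, the containers start entirely from local images, so no registry or other external service is required at runtime.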

No External API Calls

All AI inference runs locally. No data is sent to OpenAI, Anthropic, or any other cloud provider – no personal data ever leaves your infrastructure, which supports GDPR compliance.

Local Vector Database

Document embeddings and RAG indexes are stored exclusively on your servers. No cloud storage, no third-party infrastructure.