Component C112 – AI Model

By Raj Marni. March 28, 2025. Revised. Version: 0.0.02

1. Overview

Component C112 – AI Models is the central intelligence within the k8or Orbit ecosystem, responsible for unifying knowledge from embedded AI agents across all components and orchestrating advanced reasoning, question answering, and code-level insights. It sits at the heart of the AI-driven architecture, collecting domain-specific data from each local agent (e.g., about code references, environment configurations, logs) and synthesizing it into coherent, context-aware responses or actions. This Layer 3 document explores how the AI Models operate internally, how they exchange data with local agents and the main LLM, and how they ensure security and compliance throughout the process.

AI Model Architectural Diagram

2. Internal Modules & Responsibilities

2.1 Core Model Engine

  • LLM / Transformer-Based Core:

    • Uses advanced large language models (LLMs) to interpret user queries, code references, or system logs.

    • Maintains a knowledge graph that merges data from all local AI agents, enabling it to deliver domain-specific answers or code suggestions.

  • Inference & Contextual Reasoning:

    • Dynamically constructs context windows by pulling relevant knowledge from local agents, ensuring the model’s responses remain accurate and up to date with the current codebase and environment states.
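As a rough illustration of how a context window might be assembled from agent-supplied knowledge, the sketch below ranks facts by a hypothetical relevance score and packs them until a size budget is hit. The `AgentFact` structure and the scoring field are assumptions for illustration, not the actual internal schema:

```python
from dataclasses import dataclass

@dataclass
class AgentFact:
    component: str    # e.g. "C108" (ArgoCD)
    topic: str        # e.g. "canary-strategy"
    text: str         # the knowledge snippet itself
    relevance: float  # hypothetical score supplied by the local agent

def build_context(facts: list[AgentFact], max_chars: int = 2000) -> str:
    """Assemble a context window from the most relevant agent facts,
    stopping once the character budget is exhausted."""
    ordered = sorted(facts, key=lambda f: f.relevance, reverse=True)
    parts, used = [], 0
    for f in ordered:
        entry = f"[{f.component}/{f.topic}] {f.text}"
        if used + len(entry) > max_chars:
            break
        parts.append(entry)
        used += len(entry)
    return "\n".join(parts)
```

The budget cap stands in for the LLM's token limit; a real implementation would count tokens rather than characters.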

2.2 Knowledge Aggregation & Indexing

  • Global Knowledge Store:

    • AI Models maintain or interface with an internal indexing system that references code, config schemas, and logs across the entire orbit-plane.

    • Receives updates from local AI agents whenever code merges, environment changes, or new logs appear.

  • Versioning & Temporal Context:

    • Each piece of knowledge is versioned (e.g., commits from GitLab, container image versions from Docker Hub), allowing the AI Models to reference the correct snapshot of code or config when answering queries about older releases.
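A minimal sketch of such a versioned store, assuming entries are keyed by component and knowledge key with a per-version history (the class and method names are illustrative, not the actual indexing system):

```python
class VersionedKnowledgeStore:
    """Index knowledge entries by (component, key), keeping every
    version so queries about older releases hit the right snapshot."""

    def __init__(self):
        self._entries = {}  # (component, key) -> list of (version, value)

    def put(self, component, key, version, value):
        self._entries.setdefault((component, key), []).append((version, value))

    def get(self, component, key, version=None):
        """Return the latest value, or the value at a specific version."""
        history = self._entries.get((component, key), [])
        if not history:
            return None
        if version is None:
            return history[-1][1]
        for v, value in reversed(history):
            if v == version:
                return value
        return None
```

In practice the version keys would be Git commit SHAs or container image tags rather than the simple labels shown here.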

2.3 Security & Privacy Layer

  • AccessPoint Integration:

    • Any requests to or from the AI Models pass through AccessPoint (C52) for authentication and authorization, ensuring only valid, role-based queries or tasks are executed.

    • The AI Models themselves do not bypass orbit-plane security; they rely on each local AI agent’s privileges.

  • Data Segregation & RBAC:

    • The AI Models enforce role-based or environment-based constraints, so that sensitive code references or logs from production clusters are not revealed to unauthorized users.
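The environment-based filtering could be sketched as follows, assuming a hypothetical policy table mapping roles to visible environments; the real RBAC decisions are made by AccessPoint (C52), so this only illustrates the shape of the constraint:

```python
def filter_visible(facts, user_roles, role_policy):
    """Drop knowledge facts from environments the user's roles cannot see.

    role_policy maps a role name to the set of environments it may view,
    e.g. {"dev-reader": {"dev"}, "sre": {"dev", "prod"}}.
    """
    allowed = set()
    for role in user_roles:
        allowed |= role_policy.get(role, set())
    return [f for f in facts if f["env"] in allowed]
```

A production-cluster log would thus never reach a user whose roles grant only dev-environment visibility.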

2.4 API & Event-Driven Interfaces

  • Pub/Sub with NATS (C132):

    • The AI Models may subscribe to relevant topics (e.g., “ai.model.configUpdate” or “component.logs.*”) to continuously ingest updated knowledge.

    • When a user query arrives, the AI Models can broadcast requests to local AI agents via NATS to fetch domain-specific context.

  • Synchronous Query Interface:

    • In parallel, a user or microservice might make a direct, synchronous call to the AI Models (via a web or CLI-based chat), requesting code suggestions or deployment instructions. The AI Models then gather relevant data from local agents to craft a response.
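NATS subjects use token-based wildcards: `*` matches exactly one token and `>` matches one or more trailing tokens. A small matcher makes the subscription patterns above concrete (this is a stand-alone sketch of the matching rule, not the NATS client library):

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """NATS-style subject match: '*' = one token, '>' = remaining tokens."""
    p_toks = pattern.split(".")
    s_toks = subject.split(".")
    for i, p in enumerate(p_toks):
        if p == ">":
            # '>' must cover at least one remaining token
            return len(s_toks) >= i + 1
        if i >= len(s_toks):
            return False
        if p != "*" and p != s_toks[i]:
            return False
    return len(p_toks) == len(s_toks)
```

So a subscription to `component.logs.*` receives `component.logs.c108` but not the bare `component.logs`.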


3. Data Flow & Process IDs

Below is a typical scenario illustrating how the AI Models handle a user query:

  1. User Query:

    • A user (developer or admin) in the orbit-plane Portal issues a question: “How do I enable canary deployments in ArgoCD for the dev cluster?”

    • This query is sent to the AI Models with a PID like portal-c112-req-e10.

  2. Context Assembly:

    • The AI Models retrieve domain-specific data from the local AI agents:

      • The ArgoCD (C108) agent provides knowledge about canary strategies.

      • The environment agent for the dev cluster supplies environment-specific constraints or config details.

    • Additional references from code repositories or chart definitions are aggregated, forming the context.

  3. Inference & Response:

    • The AI Models’ LLM processes the assembled context, generating a structured response or step-by-step instructions on enabling canary deployments.

    • If additional logs or historical data are needed, the AI Models might send further requests to local agents, collecting logs or commit diffs.

  4. Response Delivery:

    • The user receives an answer in the Portal UI (e.g., a text explanation, code snippet, or even a short YAML/Helm example).

    • The entire transaction is logged with a PID like c112-portal-resp-e20 for audit trails.
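The four steps above can be sketched end to end. The agent callables, the inference stand-in, and the PID numbering scheme are all illustrative assumptions; only the overall shape (request PID → context assembly → inference → audited response PID) follows the flow described:

```python
import itertools

_seq = itertools.count(10)  # illustrative PID sequence, e.g. e10, e20, ...

def handle_query(query, agents, audit_log):
    """Sketch of the flow: user query -> context -> inference -> response."""
    seq = next(_seq)
    req_pid = f"portal-c112-req-e{seq}"                      # 1. user query
    context = [agent(query) for agent in agents.values()]    # 2. context assembly
    # 3. inference: stand-in for the LLM call over the assembled context
    answer = f"Based on {len(context)} sources: " + "; ".join(context)
    resp_pid = f"c112-portal-resp-e{seq + 10}"               # 4. response delivery
    audit_log.append({"request": req_pid, "response": resp_pid, "query": query})
    return answer
```

The audit-log entries pair each request PID with its response PID, giving the traceability the document describes.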


4. Integration with k8or Orbit Components

4.1 Local AI Agents

  • C100 (GitLab), C96 (Docker Hub), C128 (Rancher), C108 (ArgoCD), and the other components each embed a local AI agent that knows that component’s code and config intimately.

  • The AI Models rely on these agents to fetch precise, domain-specific data, ensuring that answers remain accurate.

4.2 AccessPoint (C52)

  • Security Gate:

    • All queries or data pulls from local agents are routed through AccessPoint, maintaining a strict RBAC policy.

    • The AI Models cannot read or modify code/config data without going through the correct orbit-plane security checks.

4.3 NATS (C132)

  • Event-Driven Communication:

    • The AI Models broadcast or receive real-time messages from local agents via NATS subjects, e.g., “ai.model.logs” or “component.statusChange”.

    • This ensures that any significant event (like a new code commit or cluster scaling event) updates the global knowledge base.
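To make the event-driven update path concrete, here is a tiny in-process stand-in for the pub/sub flow (not the actual NATS client API): a handler subscribed to `ai.model.configUpdate` folds each event into the knowledge base as it arrives.

```python
class KnowledgeBus:
    """In-process stand-in for NATS pub/sub feeding the knowledge base."""

    def __init__(self):
        self._subs = {}  # subject -> list of callbacks

    def subscribe(self, subject, callback):
        self._subs.setdefault(subject, []).append(callback)

    def publish(self, subject, payload):
        for cb in self._subs.get(subject, []):
            cb(payload)

knowledge = {}
bus = KnowledgeBus()
# Keep the knowledge base current as config-update events arrive.
bus.subscribe("ai.model.configUpdate",
              lambda msg: knowledge.update({msg["component"]: msg["config"]}))
bus.publish("ai.model.configUpdate",
            {"component": "C108", "config": "canary: enabled"})
```

A real deployment would use NATS subjects with wildcard subscriptions and durable consumers; the point here is only that each published event immediately refreshes the global view.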

4.4 K8rix (C84) & Others

  • Contextual Visualization:

    • K8rix can query the AI Models for advanced analytics or recommended actions.

    • The AI Models may use K8rix’s real-time cluster data to refine or check the validity of proposed solutions (like verifying cluster resource usage before recommending scaling).


5. Benefits & Impact

  1. Unified Knowledge & Guidance

    • The AI Models unify knowledge from every local AI agent, delivering comprehensive, context-aware answers that reflect the entire orbit-plane’s state.

  2. Accelerated Development & Troubleshooting

    • Developers can ask the AI Models for code-level advice, environment-specific config steps, or best practices, drastically reducing time spent searching through documentation or logs.

  3. Adaptive & Secure

    • The system’s dynamic approach ensures that as components evolve, the AI Models remain current. Meanwhile, AccessPoint enforces security boundaries so that only authorized queries or actions are allowed.

  4. Reduced Cognitive Load

    • Teams no longer need to memorize every component’s config or code references. The AI Models absorb that complexity, freeing teams to focus on higher-level tasks.

  5. Auditable & Controlled

    • All AI queries and responses are subject to orbit-plane logging, providing an audit trail of who requested what information, ensuring compliance and traceability.
