Technical page

The CraftingData for OpenClaw Azure architecture, security model, and operating path

The landing page defines the first two activated use cases as the personal assistant and research assistant workflows, delivered first in Microsoft Teams, with WhatsApp planned as the next channel. This page stays technical: how a slug-scoped environment is deployed, where the trust boundaries sit, how runtime state persists, how observability is wired, and which optional Azure dependencies can be attached around the core platform.

Figure: per-slug CraftingData for OpenClaw Azure deployment with resource-group isolation, Container Apps, Key Vault, storage, observability, and Azure AI Foundry as a separate Azure service.
Representative topology showing the core deployment boundary and the adjacent Azure services around it.

Azure Architecture Pattern

A per-slug Azure operating model for secure hosted CraftingData for OpenClaw

Each CraftingData for OpenClaw environment is deployed as its own Azure footprint under a dedicated resource group named claw-<slug>. Identity, secrets, runtime, persistence, observability, and Azure AI Foundry integration are kept explicit so the environment can meet data privacy, residency, and audit requirements without collapsing into a shared tenant model.
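The per-slug naming pattern can be sketched as a small helper. Only `claw-<slug>` (the resource group) and the `shire-admins` / `shire-<slug>` groups come from this page; the validation rule and everything else here are hypothetical illustrations of the pattern, not the actual install logic.

```python
# Illustrative sketch of the per-slug naming convention described above.
# The slug validation rule is an assumption, not the product's actual rule.
import re


def slug_names(slug: str) -> dict[str, str]:
    """Derive the per-environment Azure names from a customer slug."""
    # Keep slugs lowercase alphanumeric with hyphens so derived names
    # stay valid across Azure resource types (hypothetical constraint).
    if not re.fullmatch(r"[a-z0-9][a-z0-9-]{1,20}", slug):
        raise ValueError(f"invalid slug: {slug!r}")
    return {
        "resource_group": f"claw-{slug}",  # one resource group per slug
        "entra_group": f"shire-{slug}",    # per-slug authorization group
        "admin_group": "shire-admins",     # shared operator group
    }


print(slug_names("contoso")["resource_group"])  # claw-contoso
```

Keeping all derived names in one function makes teardown and migration simpler: everything an environment owns can be enumerated from the slug alone.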

Isolation model

One resource group per slug, created from a subscription-scope deployment

Resource group per slug

claw-<slug> isolates each customer environment as its own Azure footprint.

Container Apps managed environment

The shared environment boundary for the slug deployment, with logging and storage wiring attached at the environment level.

OpenClaw Container App

Public gateway app running the OpenClaw container plus an NGINX sidecar that separates the current Teams bot webhook path, leaves room for additional channel callbacks such as WhatsApp, and proxies other traffic internally.

Security boundary

Entra sign-in at the edge, group-based authorization, and secret storage in Key Vault

Microsoft Entra ID

The install step creates or updates the slug's app registration, then applies the shire-admins and shire-<slug> groups as the main authorization boundary.

Key Vault in RBAC mode

Stores the Entra client secret, the OpenClaw gateway token, channel-specific secrets such as the Teams, WhatsApp, or Telegram credentials, the optional Foundry key, and the Azure Files storage key.
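The secret inventory above can be sketched as a checklist helper. The *set* of secrets follows this page; the individual secret names and the function are hypothetical, not the vault's actual naming scheme.

```python
# Sketch of the per-slug Key Vault secret inventory. Secret names here
# are hypothetical examples; only the categories come from the page.
REQUIRED_SECRETS = [
    "entra-client-secret",     # Entra app registration secret
    "openclaw-gateway-token",  # OpenClaw gateway token
    "storage-account-key",     # Azure Files storage key
]

OPTIONAL_SECRETS = {
    "teams": "teams-bot-secret",       # channel-specific secrets
    "whatsapp": "whatsapp-token",
    "telegram": "telegram-bot-token",
    "foundry": "foundry-api-key",      # optional Azure AI Foundry key
}


def secrets_for(channels: set[str], use_foundry: bool) -> list[str]:
    """List the secret names a slug deployment would need."""
    names = list(REQUIRED_SECRETS)
    names += [OPTIONAL_SECRETS[c] for c in sorted(channels)]
    if use_foundry:
        names.append(OPTIONAL_SECRETS["foundry"])
    return names


print(secrets_for({"teams"}, use_foundry=True))
```

A checklist like this is useful at install time: the deployment can verify every required secret exists in the vault before the gateway revision starts.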

User-assigned gateway identity

Receives AcrPull for the registry and Key Vault Secrets User so the gateway can pull its image and load secrets without embedded credentials.

Unauthenticated traffic is redirected to Microsoft login, with the bot webhook path as the only exception, and access is constrained to the configured Entra groups and any explicitly allowed identities.
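The edge rule above reduces to a simple routing decision. This is a minimal sketch of that decision, not the actual NGINX configuration: the webhook prefix is a hypothetical example, and the post-login group check is not modeled here.

```python
# Minimal sketch of the edge routing rule: the bot webhook path is served
# without an Entra session, everything else is redirected to Microsoft
# login first. The prefix below is a hypothetical example path.
WEBHOOK_PREFIXES = ("/api/messages",)  # e.g. Teams bot callback; a WhatsApp
                                       # callback path would be added later


def route(path: str, authenticated: bool) -> str:
    if path.startswith(WEBHOOK_PREFIXES):
        return "proxy-to-openclaw"        # webhook bypasses the login wall
    if not authenticated:
        return "redirect-to-entra-login"  # edge auth for everything else
    return "proxy-to-openclaw"            # signed-in users reach the app


assert route("/api/messages/teams", authenticated=False) == "proxy-to-openclaw"
assert route("/", authenticated=False) == "redirect-to-entra-login"
assert route("/", authenticated=True) == "proxy-to-openclaw"
```

In the real deployment the second gate, group-based authorization, runs after login; only the path-level exception is shown here.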

Runtime and persistence

Registry, gateway runtime, and durable state are wired as one operating path

Azure Container Registry

Holds the OpenClaw runtime image consumed by the gateway container app.

Gateway runtime

The public Container App hosts the OpenClaw app container and NGINX sidecar and exposes the environment to users and bot callbacks.

Storage account plus Azure Files

The Azure Files share is registered with the managed environment and mounted into the gateway so pairing state and runtime configuration survive revisions and restarts.
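The persistence pattern above amounts to writing state as plain files under the share's mount path, so a restarted or newly deployed revision reloads the same state. This sketch assumes a hypothetical pairing-state file; a temp directory stands in for the mounted share so the example is self-contained.

```python
# Sketch of state surviving revisions via the mounted Azure Files share.
# The file name "pairings.json" is hypothetical; only the mount-and-persist
# pattern comes from this page.
import json
import os
import tempfile


def save_pairing(state_dir: str, user: str, channel: str) -> str:
    """Record a user/channel pairing under the mounted share path."""
    os.makedirs(state_dir, exist_ok=True)
    path = os.path.join(state_dir, "pairings.json")
    pairings = {}
    if os.path.exists(path):       # a new revision reloads existing state
        with open(path) as f:
            pairings = json.load(f)
    pairings[user] = channel
    with open(path, "w") as f:
        json.dump(pairings, f)
    return path


# A temp dir stands in for the share's mount path in this demo:
share = tempfile.mkdtemp()
save_pairing(share, "alice", "teams")
save_pairing(share, "bob", "whatsapp")  # a "second revision" reloads first
with open(os.path.join(share, "pairings.json")) as f:
    print(json.load(f))  # both pairings persist
```

Because the share is registered at the managed-environment level, every revision of the gateway sees the same mount, which is what makes this trivial file-based persistence safe across restarts.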

Core LLM layer

Azure AI Foundry provides the model layer for the solution

Azure AI Foundry account

Azure AI Foundry provides the LLM models used by the solution; the connection is wired through Key Vault and the Container App, so credentials and model traffic stay inside Azure and privacy and residency requirements are preserved.
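One way this wiring can surface inside the container: Key Vault holds the key, the Container App maps it into the runtime's environment, and application code only ever reads environment variables. The variable names below are hypothetical, and the stand-in values exist only to make the sketch runnable.

```python
# Sketch of Foundry credentials reaching the runtime as environment
# variables backed by Key Vault. Variable names are hypothetical.
import os


def foundry_config() -> dict[str, str]:
    endpoint = os.environ["FOUNDRY_ENDPOINT"]  # from app configuration
    api_key = os.environ["FOUNDRY_API_KEY"]    # secret ref backed by Key Vault
    return {"endpoint": endpoint, "api_key": api_key}


# Stand-in values so the sketch runs outside the deployed environment:
os.environ.setdefault("FOUNDRY_ENDPOINT", "https://example.invalid")
os.environ.setdefault("FOUNDRY_API_KEY", "stand-in-key")
print(foundry_config()["endpoint"])
```

The point of the indirection is that the key never appears in the container image or app definition; rotating it in Key Vault is enough.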

Observability

Log Analytics and Application Insights are attached at the environment layer

Log Analytics workspace

Collects the platform log stream for the managed environment and supports operational analysis.

Workspace-based Application Insights

Receives OpenTelemetry traces and logs so the OpenClaw runtime can be monitored as a first-class production service.

Azure Monitor view

Operational visibility is surfaced through the Azure monitoring stack rather than through app-local diagnostics alone.

Supporting resources

Supporting resources stay visible alongside the core request path

Broker job

Brokered lifecycle operations stay in a supporting resource path without changing the main gateway runtime path.

Channel-specific static website storage

Channel-specific storage can host the static Teams tab configuration page when Teams is one of the delivery channels.

Why per-slug isolation matters

Each deployment is a separate Azure environment, not an in-app tenant. That makes identity scope, secret handling, operational ownership, and future teardown or migration clearer.

Where the privacy boundary sits

The runtime, secrets, storage, and telemetry remain inside the slug-specific Azure environment, while Azure AI Foundry provides the core LLM capability for the solution and remains integrated through Azure-native controls.

Official Microsoft Azure architecture icons are included here in a labeled architecture-diagram context.

Service components

What the managed platform includes operationally

Managed product

Slug-scoped Azure deployment and hosting

CraftingData for OpenClaw is deployed into a dedicated resource group per slug, with the core runtime hosted on Azure Container Apps rather than as an in-app tenant inside a shared footprint.

Security operations

Identity enforcement, secret handling, and audit posture

Microsoft Entra ID is enforced at the edge, Key Vault is used as the central secret store in RBAC mode, and the managed identity model removes embedded pull and secret credentials from runtime configuration.

Continuous improvement

Persistence, observability, and supportable revisions

Azure Files preserves state across restarts and revisions, while Log Analytics and workspace-based Application Insights keep the environment observable as an operated production service.

Delivery around the platform

Core LLM capability and supporting resources

Azure AI Foundry supplies the core LLM models for the solution, while broker jobs and channel-specific static website storage remain supporting resources around the core hosting boundary. Teams is the current primary delivery channel, WhatsApp is the next planned channel, and Telegram can remain optional where a workflow needs it.

Delivery sequence

From environment assessment to managed runtime operation

01

Assess the Azure and identity boundary

We start with the target subscription, identity boundary, security requirements, residency needs, and the operational workflows the environment has to support.

02

Deploy the slug-scoped Azure pattern

We provision the resource group, runtime, identity wiring, secret handling, persistence, and observability model through a known Azure deployment path.

03

Operate, review, and attach optional services

From there the environment is upgraded, monitored, and reviewed, while optional integrations, AI services, and workflow-specific extensions can be added around the core path.

Delivery context

Background relevant to designing and operating this kind of platform

Enterprise cloud and data systems

Experience includes cloud architecture, analytics, NLP automation, reinforcement learning prototypes, and secure, scalable production delivery for operational environments.

Meta-Analytics

Co-founded and helped shape technical direction for a company focused on optimization and ML-driven systems, including threat detection work that informed later AI and agent research.

Consulting and client delivery

Delivery history includes automation, integration, databases, web services, and analytics across multiple industries and operational contexts.

Client feedback

Selected testimonials from prior clients

“Rajesh is that rare mix of highly technical and a fantastic communicator. He is mature, capable of owning work without supervision, and consistently raises the right flags when appropriate.”

A. Brooks Hollar, Director of Engineering, Ad Adapted

“Rajesh and Theresa demonstrated a high level of competency in the technical aspects of UNIX, X-Windows, and C language design and development. I would retain their services again without hesitation.”

Frank Kistner, Director of Software Development, Alcatel

Next step

Start with your target Azure environment and OpenClaw requirements

The fastest next step is a technical conversation about the target Azure environment, identity and security requirements, and whether the deployment needs only the managed core platform or additional Azure services around it.