Technical page
The landing page defines the first two activated use cases as the personal assistant and research assistant workflows delivered in Microsoft Teams first, with WhatsApp planned as the next channel. This page stays technical: how a slug-scoped environment is deployed, where the trust boundaries sit, how runtime state persists, how observability is wired, and which optional Azure dependencies can be attached around the core platform.
Azure Architecture Pattern
Each CraftingData for OpenClaw environment is deployed as its own Azure footprint under a dedicated resource group named claw-<slug>. Identity, secrets, runtime, persistence, observability, and Azure AI Foundry integration are kept explicit so the environment can meet data privacy, residency, and audit requirements without collapsing into a shared tenant model.
Isolation model
claw-<slug> isolates each customer environment as its own Azure footprint.
Container Apps managed environment: the shared environment boundary for the slug deployment, with logging and storage wiring attached at the environment level.
Gateway Container App: the public app running the OpenClaw container plus an NGINX sidecar that separates the current Teams bot webhook path, leaves room for additional channel callbacks such as WhatsApp, and proxies all other traffic internally.
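The per-slug boundary can be sketched as a subscription-scoped Bicep deployment; the parameter names and tags below are illustrative, not the actual install tooling:

```bicep
// Sketch of the per-slug isolation boundary; names are illustrative.
targetScope = 'subscription'

@description('Customer slug used in resource names, e.g. "acme"')
param slug string

@description('Azure region for the environment')
param location string = 'westeurope'

// One resource group per customer environment: claw-<slug>.
resource rg 'Microsoft.Resources/resourceGroups@2023-07-01' = {
  name: 'claw-${slug}'
  location: location
  tags: {
    product: 'craftingdata-openclaw'
    slug: slug
  }
}
```

Everything that follows on this page deploys into that one resource group, which is what makes teardown and migration a per-customer operation rather than a shared-tenant untangling exercise.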
Security boundary
The install step creates or updates the slug app registration, then applies the shire-admins and shire-<slug> groups as the main authorization boundary.
Key Vault: stores the Entra secret, the OpenClaw gateway token, channel-specific secrets such as Teams, WhatsApp, or Telegram, the optional Foundry key, and the Azure Files storage key.
Managed identity: receives AcrPull on the registry and Key Vault Secrets User on the vault so the gateway can pull its image and load secrets without embedded credentials.
Entra ID at the edge: unauthenticated traffic is redirected to Microsoft login, except on the bot webhook path, and access is constrained to the configured Entra groups and optional explicit identities.
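A minimal sketch of the secret-handling side of this boundary, assuming an RBAC-mode vault and a system-assigned gateway identity (resource names and API versions are illustrative):

```bicep
// Sketch: RBAC-mode Key Vault plus the role assignment the gateway
// identity needs to read secrets. Names are illustrative.
param slug string
param location string = resourceGroup().location

@description('Principal ID of the gateway managed identity')
param gatewayPrincipalId string

resource kv 'Microsoft.KeyVault/vaults@2023-07-01' = {
  name: 'kv-claw-${slug}'
  location: location
  properties: {
    tenantId: tenant().tenantId
    sku: {
      family: 'A'
      name: 'standard'
    }
    enableRbacAuthorization: true // RBAC mode: roles, not access policies
  }
}

// Key Vault Secrets User lets the gateway load secrets at runtime;
// an analogous AcrPull assignment on the registry covers image pulls.
resource secretsUser 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(kv.id, gatewayPrincipalId, 'kv-secrets-user')
  scope: kv
  properties: {
    principalId: gatewayPrincipalId
    principalType: 'ServicePrincipal'
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '4633458b-17de-408a-b874-0445c86b69e6') // Key Vault Secrets User
  }
}
```

Because both grants go to the managed identity, no pull credential or vault secret ever appears in the gateway's own configuration.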
Runtime and persistence
Azure Container Registry: holds the OpenClaw runtime image consumed by the gateway container app.
The public Container App hosts the OpenClaw app container and NGINX sidecar and exposes the environment to users and bot callbacks.
The Azure Files share is registered with the managed environment and mounted into the gateway so pairing state and runtime configuration survive revisions and restarts.
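The persistence wiring can be sketched as an environment-level storage registration that the gateway app then mounts; share, account, and mount-path names below are placeholders:

```bicep
// Sketch: register an Azure Files share with the managed environment so
// pairing state survives revisions and restarts. Names are illustrative.
param slug string

@secure()
@description('Azure Files storage key, sourced from Key Vault at deploy time')
param storageKey string

resource env 'Microsoft.App/managedEnvironments@2023-05-01' existing = {
  name: 'cae-claw-${slug}'
}

// Registered at the environment level so any revision can mount it.
resource stateShare 'Microsoft.App/managedEnvironments/storages@2023-05-01' = {
  parent: env
  name: 'openclaw-state'
  properties: {
    azureFile: {
      accountName: 'stclaw${slug}'
      accountKey: storageKey
      shareName: 'openclaw-state'
      accessMode: 'ReadWrite'
    }
  }
}

// In the gateway app template the share then appears as (illustrative path):
//   volumes:      [ { name: 'state', storageType: 'AzureFile', storageName: 'openclaw-state' } ]
//   volumeMounts: [ { volumeName: 'state', mountPath: '/home/openclaw' } ]
```

Registering the storage on the environment rather than on one revision is what keeps the mount stable across image upgrades.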
Core LLM layer
Azure AI Foundry provides the LLM models used by the solution, and the connection flows through Key Vault and the Container App so privacy and residency requirements stay within Azure.
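One way the Foundry connection can stay inside Azure-native controls is a Key Vault secret reference in the Container App, resolved by the managed identity; the vault URL, secret name, and environment variable below are placeholders:

```bicep
// Fragment of a Microsoft.App/containerApps resource; names are placeholders.
configuration: {
  secrets: [
    {
      name: 'foundry-api-key'
      // Resolved at runtime by the managed identity; no value in config.
      keyVaultUrl: 'https://kv-claw-<slug>.vault.azure.net/secrets/foundry-key'
      identity: 'system'
    }
  ]
}
template: {
  containers: [
    {
      name: 'openclaw'
      image: 'acrclaw<slug>.azurecr.io/openclaw:latest'
      env: [
        {
          name: 'FOUNDRY_API_KEY' // illustrative variable name
          secretRef: 'foundry-api-key'
        }
      ]
    }
  ]
}
```

The key never leaves the vault boundary in plain configuration, which is what keeps the privacy and residency story consistent with the rest of the environment.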
Observability
Log Analytics workspace: collects the platform log stream for the managed environment and supports operational analysis.
Application Insights: receives OpenTelemetry traces and logs so the OpenClaw runtime can be monitored as a first-class production service.
Operational visibility is surfaced through the Azure monitoring stack rather than through app-local diagnostics alone.
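The observability pair can be sketched as a workspace plus workspace-based Application Insights, with the managed environment pointed at the workspace; names and retention are illustrative:

```bicep
// Sketch: Log Analytics workspace plus workspace-based Application
// Insights for the slug environment. Names are illustrative.
param slug string
param location string = resourceGroup().location

// Destination for the managed environment's platform log stream.
resource law 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
  name: 'law-claw-${slug}'
  location: location
  properties: {
    sku: {
      name: 'PerGB2018'
    }
    retentionInDays: 30
  }
}

// Workspace-based Application Insights for OpenTelemetry traces and logs.
resource appi 'Microsoft.Insights/components@2020-02-02' = {
  name: 'appi-claw-${slug}'
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    WorkspaceResourceId: law.id // workspace-based, not classic
  }
}

// The managed environment then points its platform logs at the workspace:
//   appLogsConfiguration: { destination: 'log-analytics', ... }
```

Keeping both sinks in the slug resource group means operational data has the same residency and teardown boundary as the runtime it describes.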
Supporting resources
Broker jobs keep lifecycle operations in a supporting resource path without changing the main gateway runtime path.
Channel-specific storage can host the static Teams tab configuration page when Teams is one of the delivery channels.
Each deployment is a separate Azure environment, not an in-app tenant. That makes identity scope, secret handling, operational ownership, and future teardown or migration clearer.
The runtime, secrets, storage, and telemetry remain inside the slug-specific Azure environment, while Azure AI Foundry provides the core LLM capability for the solution and remains integrated through Azure-native controls.
Official Microsoft Azure architecture icons are included here in a labeled architecture-diagram context.
Service components
Managed product
CraftingData for OpenClaw is deployed into a dedicated resource group per slug, with the core runtime hosted on Azure Container Apps rather than as an in-app tenant inside a shared footprint.
Security operations
Microsoft Entra ID is enforced at the edge, Key Vault is used as the central secret store in RBAC mode, and the managed identity model removes embedded pull and secret credentials from runtime configuration.
Continuous improvement
Azure Files preserves state across restarts and revisions, while Log Analytics and workspace-based Application Insights keep the environment observable as an operated production service.
Delivery around the platform
Azure AI Foundry supplies the core LLM models for the solution, while broker jobs and channel-specific static website storage remain supporting resources around the core hosting boundary. Teams is the current primary delivery channel, WhatsApp is the next planned channel, and Telegram can remain optional where a workflow needs it.
Delivery sequence
01
We start with the target subscription, identity boundary, security requirements, residency needs, and the operational workflows the environment has to support.
02
We provision the resource group, runtime, identity wiring, secret handling, persistence, and observability model through a known Azure deployment path.
03
From there the environment is upgraded, monitored, and reviewed, while optional integrations, AI services, and workflow-specific extensions can be added around the core path.
Delivery context
Experience includes cloud architecture, analytics, NLP automation, reinforcement learning prototypes, and secure, scalable production delivery for operational environments.
Co-founded and helped shape technical direction for a company focused on optimization and ML-driven systems, including threat detection work that informed later AI and agent research.
Delivery history includes automation, integration, databases, web services, and analytics across multiple industries and operational contexts.
Client feedback
“Rajesh is that rare mix of highly technical and a fantastic communicator. He is mature, capable of owning work without supervision, and consistently raises the right flags when appropriate.”
A. Brooks Hollar, Director of Engineering, Ad Adapted
“Rajesh and Theresa demonstrated a high level of competency in the technical aspects of UNIX, X-Windows, and C language design and development. I would retain their services again without hesitation.”
Frank Kistner, Director of Software Development, Alcatel
Next step
The fastest next step is a technical conversation about the target Azure environment, identity and security requirements, and whether the deployment needs only the managed core platform or additional Azure services around it.