Private Deployments: Technical Architecture of Pivot in Single-Tenant Mode
Overview
The Pivot Cloud Platform offers single-tenant deployment as an option for enterprise customers. As with the primary multi-tenant Pivot Cloud Platform offering, we manage the infrastructure entirely and provide ongoing updates and an SLA, but the customer chooses the regions, can provide the keys used to encrypt databases, and gets the other security and compliance benefits of single-tenant over multi-tenant deployment.
The private deployment model includes almost everything that is deployed to AWS in the multi-tenant environment, other than the frontend apps. The frontend applications (web app, desktop app, mobile apps) are not redeployed for private deployments. Instead, the frontend apps determine the URLs of the Friend and Visa services from a stringified JSON object that the single-tenant Visa service returns at the same time it sets the refresh token cookie and returns the access token. The applications fall back to hardcoded default endpoints when unauthenticated. Publicly published blocks are always published to a custom domain, so frontend apps can use that domain to determine which Friend service to request the block from.
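For illustration, here is a minimal sketch of how that endpoint resolution might look on the frontend, assuming a hypothetical context shape with friendUrl and visaUrl fields; the real field names, defaults, and parsing logic may differ.

```typescript
// Sketch of frontend endpoint resolution, assuming a hypothetical shape for the
// context object returned by the Visa service alongside the access token.
// The field names (friendUrl, visaUrl) and default URLs are illustrative.

interface BackendContext {
  friendUrl: string; // single-tenant Friend service base URL
  visaUrl: string;   // single-tenant Visa service base URL
}

// Hardcoded multi-tenant defaults used before a session exists.
const DEFAULT_ENDPOINTS: BackendContext = {
  friendUrl: "https://friend.pivot.app",
  visaUrl: "https://auth.pivot.app",
};

function resolveEndpoints(contextJson: string | null): BackendContext {
  if (!contextJson) {
    // Unauthenticated: fall back to the multi-tenant endpoints.
    return DEFAULT_ENDPOINTS;
  }
  try {
    const parsed = JSON.parse(contextJson) as Partial<BackendContext>;
    return {
      friendUrl: parsed.friendUrl ?? DEFAULT_ENDPOINTS.friendUrl,
      visaUrl: parsed.visaUrl ?? DEFAULT_ENDPOINTS.visaUrl,
    };
  } catch {
    // A malformed context should not break login; fall back to defaults.
    return DEFAULT_ENDPOINTS;
  }
}
```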
The multi-tenant Facebox service is aware of each privately deployed organization, which allows the multi-tenant Visa service to use SSO configuration to route users to their single-tenant Visa service. The request path is roughly: frontend -> multi-tenant Visa service -> Facebox -> multi-tenant Visa service -> frontend -> single-tenant Visa service -> frontend -> single-tenant Friend service.
Authentication
The frontend apps are not aware of backend endpoint URLs for single-tenant deployment until the login session is established, which poses a challenge: how can the frontend actually get logged in, if the multi-tenant Visa service can't find the user?
- If a user navigates to the frontend using a custom domain, the frontend will use the Friend service to find out the backend endpoints for that custom domain. If the Friend service fails to return a Visa service URL, then clearly the custom domain is not valid. If a Visa service URL is returned, the frontend simply redirects to it as normal.
- If the user navigates to the frontend at the normal domain or is using a native app rather than the web app, then the frontend has no way of knowing that there is a single-tenant deployment involved, so it must defer to the multi-tenant Visa service at login time for clarification.
- The multi-tenant Visa service will allow a user to sign in to the multi-tenant service using email or Google even if the organization table in the Facebox service database states that the email address domain is verified by an organization deployed to a single tenant. However, if a user tries to use SSO (SAML presumably, or potentially OIDC in the future), then the Auth service responds to the browser with a 307 redirect to https://auth.[whatever-domain].pivot.app/login/start/saml?email=, to trigger a service-provider-initiated SAML flow on the single-tenant Visa service (see the sketch after this list).
- The Visa service deployed in single-tenant mode must therefore always have SAML enabled for the (single) organization in that tenant. It's the concept of organization-specific SSO that triggers the multi-tenant Visa service to redirect, thereby allowing users to sign in with email/Google to the multi-tenant service even if that domain is used for a private deployment. This is important because it allows users in organizations deployed as a single tenant to be invited to spaces/blocks/rooms in the multi-tenant service, albeit with a technically different user account that uses Google or email authentication. (Customers can disable this behavior; an organization record in multi-tenant Facebox can be configured to redirect to the single-tenant Visa service for all authentication methods.)
- This architecture requires that an organization record exists in the multi-tenant Facebox service database for every privately deployed organization. These records are managed by the Pivot team via the PivotAdmin app. The PivotAdmin app can also connect to the single-tenant deployment via the Tunnel service deployed there.
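A hedged sketch of the routing decision the multi-tenant Visa service makes at login, assuming a hypothetical organization record shape from Facebox; the field names, helper, and URL construction are illustrative, not the actual implementation.

```typescript
// Sketch of the multi-tenant Visa service's routing decision at login, assuming
// a hypothetical organization record shape returned by Facebox. The field names
// (ssoEnabled, singleTenantAuthUrl, redirectAllMethods) are illustrative.

interface OrgRecord {
  ssoEnabled: boolean;          // org has SSO configured on its single tenant
  singleTenantAuthUrl: string;  // e.g. https://auth.acme.pivot.app (illustrative)
  redirectAllMethods: boolean;  // customer opted to force all auth to the single tenant
}

type LoginMethod = "email" | "google" | "sso";

// Returns a redirect URL when the login should be handed off to the
// single-tenant Visa service, or null to continue in the multi-tenant flow.
function routeLogin(email: string, method: LoginMethod, org: OrgRecord | null): string | null {
  if (!org) return null; // domain not claimed by a private deployment

  if (method === "sso" || org.redirectAllMethods) {
    // 307 redirect target: kicks off an SP-initiated SAML flow on the tenant.
    return `${org.singleTenantAuthUrl}/login/start/saml?email=${encodeURIComponent(email)}`;
  }

  // Email/Google logins stay in the multi-tenant service (separate user account).
  return null;
}
```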
Disabling Multi-Tenant UI and Logic
The frontend can hide the option to create a new organization and other multi-tenant-only features when it knows it is rendering a single-tenant session (based on the context JSON object returned from Auth).
Backend services can disable or disregard such features using the PIVOT_CONTEXT environment variable, which will be SINGLE_TENANT for single-tenant deployments. For example, the Wallstreet service is almost dormant in single-tenant deployments, with the exception of paid space memberships.
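A minimal sketch of that gating; only the PIVOT_CONTEXT and SINGLE_TENANT values come from the description above, while the helper and the feature being blocked are hypothetical.

```typescript
// Sketch of how a backend service might gate multi-tenant-only behavior on the
// PIVOT_CONTEXT environment variable. The helper name and the specific feature
// being rejected are illustrative.

const isSingleTenant = (): boolean => process.env.PIVOT_CONTEXT === "SINGLE_TENANT";

function handleCreateOrganization(): void {
  if (isSingleTenant()) {
    // A single-tenant deployment hosts exactly one organization; reject early.
    throw new Error("Creating organizations is not supported in single-tenant deployments");
  }
  // ... multi-tenant organization creation logic ...
}
```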
Shared Services
The following services are not deployed per-tenant, because they are 1) deployed globally and/or 2) don't store customer data in a way that would be 'more isolated' if the services were independently deployed.
- Cloudflare
- DNS (Whether our domain or their domain, it's not a separate Cloudflare account)
- WAF configuration for backend endpoints
- R2 (Shared desktop app download URL)
- Web frontend apps (Next.js sites)
- Rootly (Incident management and status page)
- Sentry
- Axiom (Shared observability layer)
- Mux (transcoding/streaming runs on a global CDN with no regions, but we do create a logical Mux environment per tenant with a scoped access key)
- LiveKit Cloud
- Temporal Cloud (each Pivot service in each environment uses its own Temporal Cloud namespace, and each namespace can be placed in its own Temporal Cloud AWS region. There is also an encryption option. See the sketch after this list.)
- Expo / mobile apps
- PostHog
- OpenAI
- AssemblyAI
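As a hedged illustration of the per-service, per-environment namespace isolation, the following sketch connects to a tenant-specific Temporal Cloud namespace using the TypeScript SDK; the namespace naming convention, account id, and certificate paths are assumptions, not our actual conventions.

```typescript
// Sketch: connecting a service to its own Temporal Cloud namespace with the
// official @temporalio/client SDK. The naming convention (<service>-<tenant>),
// the "abc12" account id, and the secret paths are assumptions.
import * as fs from "fs";
import { Connection, Client } from "@temporalio/client";

async function temporalClientFor(service: string, tenant: string): Promise<Client> {
  // Hypothetical convention: one namespace per service per tenant environment.
  const namespace = `${service}-${tenant}.abc12`;
  const connection = await Connection.connect({
    address: `${namespace}.tmprl.cloud:7233`,
    tls: {
      // mTLS client certs issued for this namespace (paths are illustrative).
      clientCertPair: {
        crt: fs.readFileSync(`/secrets/${namespace}.pem`),
        key: fs.readFileSync(`/secrets/${namespace}.key`),
      },
    },
  });
  return new Client({ connection, namespace });
}
```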
AWS Deployment Process
First, read Infrastructure Provisioning.
The following systems have to be configured and deployed for each tenant:
- New AWS account (inside our AWS organization) which will be populated by Terraform
- Terraform Cloud workspace pointing to that AWS account
- Terraform Cloud workspace name added to the single_tenant_workspaces.json file in pivot-internal. (This will insert the new Terraform Cloud workspace into the CD workflow; a minimal sketch of that loop follows this list.)
- DNS in Cloudflare for all public endpoints needed (updated via Terraform)
- Secrets in the tenant-specific AWS account's Parameter Store (many of these will be added automatically via Terraform references, but secrets based on values that are not outputs from Terraform itself need to be manually added to AWS or Terraform Cloud as variables)
- Mux environment (we use a single Mux account without the option to select regions or a specific cloud provider, however having a logical environment allows us to scope access keys to a specific tenant, preventing Blobby from accessing Mux assets outside of the scope of its access key) with webhook configuration
- LiveKit Cloud project with webhook configuration to hit Tunnel
- Temporal Cloud namespace
- Turbopuffer organization and configuration (as it's a managed service, specific setup might involve API keys or similar configuration within AWS Parameter Store/Terraform Cloud variables)
- The multi-tenant Facebox service must be updated with a new organization record, so that it knows how to redirect
- The single-tenant Postgres databases must be bootstrapped with the database names and database users for each service. This is done with PGA. (Keyspaces also needs provisioning via Terraform).
- ECS and all other components must be configured to push metrics/logs/tracing to Axiom. For ECS, this is done with an OpenTelemetry daemon that is defined in our Terraform ECS configuration. However, further setup on the Axiom side may be required, and even if it is not, it's important to verify upon spinning up a new ECS environment that Axiom is registering the incoming data and that the data is in-scope to trigger alerts.
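A hedged sketch of the CD loop over single-tenant workspaces referenced above; the shape of single_tenant_workspaces.json (a flat list of workspace names) and the use of the TF_WORKSPACE variable are assumptions for illustration, not the actual workflow.

```typescript
// Sketch of a CD step that iterates over single-tenant Terraform Cloud
// workspaces listed in single_tenant_workspaces.json. The file shape and the
// reliance on TF_WORKSPACE for workspace selection are assumptions.
import * as fs from "fs";
import { execSync } from "child_process";

// Assumed shape: a flat JSON array of Terraform Cloud workspace names.
const workspaces: string[] = JSON.parse(
  fs.readFileSync("single_tenant_workspaces.json", "utf8"),
);

for (const workspace of workspaces) {
  // Each workspace points at one tenant-specific AWS account.
  console.log(`Applying Terraform for ${workspace}...`);
  execSync("terraform apply -auto-approve", {
    stdio: "inherit",
    env: { ...process.env, TF_WORKSPACE: workspace },
  });
}
```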
Azure Deployment Process
On Azure, we would replace AWS services with their closest equivalent.
The pivot-internal repository would need a separate set of .tf files and Terraform Cloud workspaces to configure the resources in each Azure subscription. From the perspective of our CD workflows, terraform apply can be run as normal as part of our single-tenant deployment for-loop, where we loop through all single-tenant environments/Terraform Cloud workspaces. However, this would need to run as its own workflow step so that the Azure plan and apply commands can run in the context of the /apps/terraform-single-tenant-azure directory.
Azure Kubernetes Service, Azure Container Apps, or whichever service we end up using would need credentials to read from our ECR image repositories.
AWS to Azure infrastructure translation:
- ECS to AKS or ACA / Fargate to 'consumption' compute / Fargate volumes to managed disks
- Azure has multiple Postgres hosting options
- As of April 2025, Turbopuffer does not support Azure unless paying for a single-tenant deployment.
- As of October 2024, Synadia does not support Azure for hosted NATS
- Some equivalent to ECS Service Connect / AWS ECS task-level security groups
- Some mechanism to manage Dealer instance registration/deregistration, like AWS Cloud Map does.
- For S3, we will need to update Blobby and Flipt to support using Azure Blob
Storage for user content.
- Instead of CloudFront and Lambda@Edge, we would need an Azure-native solution, presumably using Front Door, though that doesn't address replacements for our Lambda@Edge File Proxy services.
- We would also need Blobby, and any other services that read S3 file-upload SNS notifications from SQS, to parse the notifications sent from Azure Event Grid to Azure Storage Queues (see the sketch after this list).
- LiveKit supports Azure Blob Storage, so it can easily write recordings there with a small change to Stagehand.
- For SES, we would need to update Buzzbuzz to understand the Azure Communication Services Email API as well as the deliverability events sent by Azure Event Grid to Azure Storage Queue (replacing the role of SNS/SQS).
- Azure Cache for Redis could replace ElastiCache/Valkey.
- Azure Cosmos DB for Apache Cassandra could replace Keyspaces.
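A hedged sketch of what that notification normalization might look like; the NormalizedUpload shape and function names are assumptions, while the field access follows the published S3 event and Event Grid BlobCreated schemas.

```typescript
// Sketch of normalizing "file uploaded" notifications across clouds so Blobby
// could process either source. The NormalizedUpload shape is an assumption;
// field access follows the public S3 and Event Grid BlobCreated schemas.

interface NormalizedUpload {
  container: string; // S3 bucket or Azure Blob Storage container
  key: string;       // object key / blob path
  size?: number;
}

// S3 notification as delivered to SQS via SNS (the SNS Message is a JSON string).
function fromS3SnsMessage(snsMessage: string): NormalizedUpload[] {
  const body = JSON.parse(snsMessage);
  return (body.Records ?? []).map((r: any) => ({
    container: r.s3.bucket.name,
    key: decodeURIComponent(r.s3.object.key.replace(/\+/g, " ")),
    size: r.s3.object.size,
  }));
}

// Event Grid BlobCreated event as delivered to an Azure Storage Queue.
function fromEventGridEvent(event: any): NormalizedUpload | null {
  if (event.eventType !== "Microsoft.Storage.BlobCreated") return null;
  // subject format: /blobServices/default/containers/<container>/blobs/<path>
  const match = /\/containers\/([^/]+)\/blobs\/(.+)$/.exec(event.subject ?? "");
  if (!match) return null;
  return {
    container: match[1],
    key: match[2],
    size: event.data?.contentLength,
  };
}
```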