The Infrastream Security Framework
The fundamental security principle of the Infrastream platform is that security is not an optional feature, but an inherent, non-negotiable property of the system. The platform is designed to be secure by default, enforcing a rigorous, defense-in-depth security posture at every layer.
This is achieved by codifying security best practices directly into the reusable Core Modules that form the building blocks of all infrastructure. End-users (developers) do not have the option to deploy an insecure resource; the security guardrails are built-in and automatically enforced by the platform. This approach eliminates configuration drift and human error, which are the primary causes of cloud security breaches.
Infrastream uses the GitOps workflow not just for auditing but for active, automated governance. This is achieved by dynamically managing the `CODEOWNERS` file within the central GitOps repository.
- Git as the Immutable Ledger: The Git repository serves as the immutable audit log for all infrastructure changes. Every modification is tied to a specific commit, showing who requested the change, who approved it, and when it was applied.
- Automated Stakeholder Enforcement: The platform dynamically updates the `CODEOWNERS` file based on the permissions defined in your manifests. When you declare administrators for a resource (like an `OrganizationalUnit` or `Project`), the platform automatically writes rules to the `CODEOWNERS` file targeting the directory where that manifest is stored.
  - How it works: If you define `team-alpha-admins` as administrators of a `Project` manifest, the platform will identify its location in the repository and add an entry to the `CODEOWNERS` file. For example: `/path/to/your-project/ @your-github-org/team-alpha-admins`.
  - The Result: Any future Pull Request that modifies any file within that project's directory will automatically require a formal review and approval from `team-alpha-admins`. This guarantees that even though you have flexibility in where you place your files, the security guardrails follow the resources wherever they live.
- Separation of Duties: This mechanism enforces a powerful, automated separation of duties. A development team cannot modify the production network configuration, and the network team cannot alter an application's resource limits without the other's approval, because the `CODEOWNERS` file ensures the correct experts are always in the loop. This converts governance from a manual, ticket-based process into a fully automated, auditable, and developer-friendly workflow.
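The mapping from manifest metadata to a `CODEOWNERS` rule can be sketched as follows. This is an illustrative assumption of how such a generator might look, not the actual Infrastream implementation; the function name, manifest directory, and team names are hypothetical.

```python
# Hypothetical sketch: derive one CODEOWNERS entry from a manifest's location
# and its declared admin team. Not the real Infrastream code.

def codeowners_rule(manifest_dir: str, github_org: str, admin_team: str) -> str:
    """Map a manifest's directory and admin team to a single CODEOWNERS entry."""
    # Anchor the path at the repository root and end it with a slash so the
    # rule covers every file under the directory, wherever it lives.
    path = "/" + manifest_dir.strip("/") + "/"
    return f"{path} @{github_org}/{admin_team}"

rule = codeowners_rule("environments/prod/team-alpha", "your-github-org", "team-alpha-admins")
print(rule)  # /environments/prod/team-alpha/ @your-github-org/team-alpha-admins
```

Because the generated path ends in `/`, GitHub's `CODEOWNERS` semantics require review from the named team for any file added or changed anywhere under that directory.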
The platform enforces the principle of least privilege at every level.
- Hierarchical Permissions: IAM policies are inherited down the organizational hierarchy (Organization -> OU -> Environment -> Project). This allows for broad, global policies to be set at the top, with the ability to grant more specific, additive permissions at lower levels.
- Application Identity: Every application deployed by Infrastream is assigned its own dedicated Google Service Account. By default, this service account has zero permissions. It cannot access any other resource.
- Explicit Access Grants: For an application to access a resource (e.g., a database or bucket), the permission must be explicitly defined in its `accessControl` block. The platform then creates a specific, narrow-scoped IAM binding between that application's service account and that specific resource.
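The expansion of an `accessControl` block into per-resource IAM bindings can be sketched like this. The schema, function name, and example resources are illustrative assumptions rather than the platform's actual API.

```python
# Illustrative sketch (not the real Infrastream code): expand an application
# manifest's accessControl block into narrow, per-resource IAM binding specs
# attached to the app's dedicated, zero-permission service account.

def expand_access_control(app_name: str, project: str, access_control: list[dict]) -> list[dict]:
    """Turn accessControl entries into one IAM binding spec per resource."""
    service_account = f"{app_name}@{project}.iam.gserviceaccount.com"
    bindings = []
    for entry in access_control:
        bindings.append({
            "resource": entry["resource"],  # e.g. a specific bucket or database
            "role": entry["role"],          # narrowest role that satisfies the need
            "member": f"serviceAccount:{service_account}",
        })
    return bindings

bindings = expand_access_control(
    "orders-api", "acme-prod",
    [{"resource": "gs://acme-orders-bucket", "role": "roles/storage.objectViewer"}],
)
```

Note that with an empty `accessControl` block the function returns no bindings at all, which matches the stated default of a service account with zero permissions.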
Infrastream creates a secure-by-default, zero-trust network environment.
- Shared VPC Model: The platform uses a Hub-and-Spoke network architecture. The Core Project's VPC acts as the central "hub," providing centralized control and auditing for all network traffic. Each service project is a "spoke," connected via VPC Peering or Private Service Connect, but isolated from other spokes by default.
- Automated Firewall Rules: Firewall rules are managed automatically. The platform creates precise, least-privilege rules to allow only the traffic explicitly defined in manifests (e.g., via ingress routes or `accessControl` blocks).
- Unified Service Mesh (Google Cloud Service Mesh): All compute assets (GKE, Cloud Run, and Compute Engine VMs) are automatically enrolled in a service mesh. This provides a consistent, platform-managed layer for:
- Zero-Trust Security: Enforcing strict, identity-based mutual TLS (mTLS) for all service-to-service communication, ensuring all traffic is encrypted and authenticated.
- Fine-Grained Traffic Control: Implementing advanced traffic management policies.
- Consistent Observability: Providing uniform metrics and traces for all services, regardless of the underlying compute platform.
- Controlled Egress: Egress traffic is controlled at the project level via a default-deny firewall and a dedicated NAT gateway, configured with an explicit allowlist of destinations defined in the `allowedEgress` field of the relevant manifest.
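The default-deny egress model described above can be sketched as one low-precedence deny-all rule plus one allow rule per `allowedEgress` destination. The rule shapes and field names below are assumptions for illustration, not the platform's actual firewall schema; note that in GCP firewall semantics a lower priority number wins.

```python
# Hedged sketch of default-deny egress: a deny-all rule at the lowest
# configurable precedence, overridden by explicit per-destination allows.
# Field names are illustrative assumptions.

def egress_rules(project: str, allowed_egress: list[str]) -> list[dict]:
    """Build firewall rule specs for a project's allowedEgress allowlist."""
    rules = [{
        "name": f"{project}-deny-all-egress",
        "direction": "EGRESS",
        "action": "deny",
        "destinations": ["0.0.0.0/0"],
        "priority": 65534,  # lowest precedence: evaluated after all allows
    }]
    for i, dest in enumerate(allowed_egress):
        rules.append({
            "name": f"{project}-allow-egress-{i}",
            "direction": "EGRESS",
            "action": "allow",
            "destinations": [dest],
            "priority": 1000,  # lower number = higher precedence than deny-all
        })
    return rules

rules = egress_rules("acme-prod", ["203.0.113.0/24"])
```

With an empty allowlist, only the deny-all rule is emitted, so a project with no declared `allowedEgress` destinations has no outbound access at all.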
The combination of least-privilege IAM and a zero-trust network creates a secure sandbox for all applications.
Even experimental or AI-generated ("vibe-coded") applications are completely contained by default. An application cannot access unauthorized resources (like databases or secrets) or exfiltrate data unless it is explicitly granted permission via a manifest change. Since any change to a manifest requires a formal pull request, it is subject to the rigorous, multi-stakeholder review process enforced by the `CODEOWNERS` rules. This allows for rapid innovation while mitigating the risk of rogue or insecure code.
- Encryption in Transit: All traffic within the service mesh is automatically encrypted using mTLS. All public-facing ingress traffic is terminated with Google-managed SSL certificates.
- Encryption at Rest: All data stored in resources provisioned by the platform (e.g., `google_storage_bucket`, `google_alloydb_cluster`, `google_compute_disk`) is encrypted at rest by default using Google-managed keys. The platform also supports the use of Customer-Managed Encryption Keys (CMEK) for resources like Pub/Sub topics.