
Native Helm Deployment

Deploy services using Helm directly in Kubernetes without external CI/CD dependencies

⚠️

This feature is still in alpha and may introduce breaking changes.

Native Helm is an alternative deployment method that runs Helm deployments directly within Kubernetes jobs, eliminating the need for external CI/CD systems. This provides a more self-contained and portable deployment solution.

Native Helm deployment is an opt-in feature that can be enabled globally or per-service.

Overview

When enabled, Native Helm:

  • Creates Kubernetes jobs to execute Helm deployments
  • Runs in ephemeral namespaces with proper RBAC
  • Provides real-time deployment logs via WebSocket
  • Handles concurrent deployments automatically
  • Supports all standard Helm chart types

Quickstart

Want to try native Helm deployment? Here’s the fastest way to get started:
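The sketch below shows roughly what a minimal service definition could look like. Field names such as `services`, `chart.name`, and `chart.values` are assumptions based on the keys described later on this page, so check your Lifecycle schema for the exact spelling.

```yaml
# Hedged sketch of a Quickstart service entry (field names assumed)
services:
  - name: my-api
    helm:
      deploymentMethod: "native"      # opt into native Helm (runs as a Kubernetes job)
      chart:
        name: "./helm/my-api"         # local chart stored in this repository
        values:
          - "./helm/values.yaml"      # values file applied at deploy time
```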

This configuration:

  1. Enables native Helm for the my-api service
  2. Uses a local Helm chart from your repository
  3. Applies values from ./helm/values.yaml
  4. Runs deployment as a Kubernetes job

To enable native Helm for all services at once, see Global Configuration.

Configuration

Enabling Native Helm

There are two ways to enable native Helm deployment:

Per Service Configuration

Enable native Helm for individual services:
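For example, a service opts in by setting `deploymentMethod: "native"` in its `helm:` block (a sketch; the surrounding keys are assumed):

```yaml
services:
  - name: my-api
    helm:
      deploymentMethod: "native"   # this service deploys via native Helm
```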

Global Configuration

Enable native Helm for all services:
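Globally, this corresponds to the `helmDefaults` entry in the `global_config` table described below. A sketch of the relevant portion, shown as YAML although the database stores JSON:

```yaml
# global_config row: key = "helmDefaults"
enabled: true    # all services use native Helm unless they set deploymentMethod: "ci"
```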

Configuration Precedence

Lifecycle uses a hierarchical configuration system with three levels of precedence:

  1. helmDefaults - Base defaults for all deployments (database: global_config table)
  2. Chart-specific config - Per-chart defaults (database: global_config table)
  3. Service YAML config - Service-specific overrides (highest priority)
💡

Service-level configuration always takes precedence over global defaults.

Global Configuration (Database)

Global configurations are stored in the global_config table in the database. Each configuration is stored as a row with:

  • key: The configuration name (e.g., ‘helmDefaults’, ‘postgresql’, ‘redis’)
  • config: JSON object containing the configuration

helmDefaults Configuration

Stored in database with key helmDefaults:
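A sketch of the config object, shown as YAML for readability (the database stores a JSON object); the argument and version values are illustrative:

```yaml
# global_config row: key = "helmDefaults"
enabled: true                        # native Helm on by default
defaultArgs: "--wait --timeout 10m"  # appended to every Helm command, before service-specific args
defaultHelmVersion: "3.14.0"         # used when no service- or chart-level version is set
```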

Field Descriptions:

  • enabled: When true, enables native Helm deployment for all services unless they explicitly set deploymentMethod: "ci"
  • defaultArgs: Arguments automatically appended to every Helm command (appears before service-specific args)
  • defaultHelmVersion: The Helm version to use when not specified at the service or chart level

Chart-specific Configuration

Example: PostgreSQL configuration stored with key postgresql:
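A hedged sketch of what such a row might contain; the field names (chart repository, version, default values) are assumptions and the real schema may differ:

```yaml
# global_config row: key = "postgresql" (field names illustrative)
chartRepoUrl: "https://charts.bitnami.com/bitnami"
chartVersion: "15.x.x"
helmVersion: "3.14.0"
values:
  - "primary.persistence.enabled=false"
```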

These global configurations are managed by administrators and stored in the database. They provide consistent defaults across all environments and can be overridden at the service level.

Usage Examples

Quick Experiment: Deploy Jenkins!

Want to see native Helm in action? Let’s deploy everyone’s favorite CI/CD tool - Jenkins! This example shows how easy it is to deploy popular applications using native Helm.
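A sketch of the service entry, assuming the Bitnami chart repository URL and a field like `repoUrl` for public charts (verify the exact key names in your schema):

```yaml
services:
  - name: jenkins
    helm:
      deploymentMethod: "native"
      chart:
        name: "jenkins"
        repoUrl: "https://charts.bitnami.com/bitnami"  # assumed field name for the chart repository
        version: "13.x.x"                              # pin a chart version you have tested
```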

🎉 That’s it! With just a few lines of configuration, you’ll have Jenkins running in your Kubernetes cluster.

To access your Jenkins instance:

  1. Check the deployment status in your PR comment
  2. Click the Deploy Logs link to monitor the deployment
  3. Once deployed, Jenkins will be available at the internal hostname

For more Jenkins configuration options and values, check out the Bitnami Jenkins chart documentation. This same pattern works for any Bitnami chart (PostgreSQL, Redis, MongoDB) or any other public Helm chart!

Basic Service Deployment
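A minimal sketch of a service deploying its own local chart; field names follow the assumptions used in the Quickstart above:

```yaml
services:
  - name: my-api
    helm:
      deploymentMethod: "native"
      repository: "myorg/my-api"     # repository that contains the chart
      branchName: "main"
      chart:
        name: "./helm/my-api"        # local chart path
        values:
          - "./helm/values.yaml"
```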

PostgreSQL with Overrides
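A sketch of a PostgreSQL service that relies on the chart-level defaults stored under the `postgresql` key and overrides a couple of values (the override syntax is assumed):

```yaml
services:
  - name: postgres
    helm:
      deploymentMethod: "native"
      chart:
        name: "postgresql"               # picks up chart-specific defaults from global_config
      values:
        - "auth.database=myapp"          # service-level overrides win over global defaults
        - "primary.persistence.size=5Gi"
```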

Custom Environment Variables

Lifecycle supports flexible environment variable formatting through the envMapping configuration. This feature allows you to control how environment variables from your service configuration are passed to your Helm chart.

Why envMapping? Different Helm charts expect environment variables in different formats. Some expect an array of objects with name and value fields (Kubernetes standard), while others expect a simple key-value map. The envMapping feature lets you adapt to your chart’s requirements.

Default envMapping Configuration

You can define default envMapping configurations in the global_config database table. These defaults apply to all services using that chart unless overridden at the service level.

Example: Setting defaults for your organization’s chart
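A sketch of such a row, assuming the chart is registered under the key `myorg-web-app` and that `envMapping` takes the `format` and `path` fields described below:

```yaml
# global_config row: key = "myorg-web-app" (illustrative)
envMapping:
  format: "array"   # Kubernetes-style name/value objects
  path: "env"       # chart values path that receives the variables
```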

With this configuration, any service using the myorg-web-app chart will automatically use array format for environment variables:
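For instance, a hypothetical service like the one below would get the array format without declaring `envMapping` itself:

```yaml
services:
  - name: web-app
    helm:
      deploymentMethod: "native"
      chart:
        name: "myorg-web-app"   # chart-level envMapping default applies
      # no envMapping here; the global default (array format) is used
```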

Setting envMapping in global_config is particularly useful when:

  • You have a standard organizational chart used by many services
  • You want consistent environment variable handling across services
  • You’re migrating multiple services and want to reduce configuration duplication

Array Format

Best for charts that expect Kubernetes-style env arrays.
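A sketch of the service-level configuration, assuming the `format`/`path` field names:

```yaml
helm:
  envMapping:
    format: "array"
    path: "env"        # values path that receives the list
```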

This produces the following Helm values:
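Assuming the service defines DATABASE_URL and LOG_LEVEL, the injected values would look roughly like this:

```yaml
env:
  - name: DATABASE_URL
    value: "postgres://user:pass@postgres:5432/app"
  - name: LOG_LEVEL
    value: "info"
```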

Your chart’s values.yaml would use it like:
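An illustrative excerpt: the chart declares an empty default in values.yaml and splices the injected array into the container spec with standard Helm templating:

```yaml
# values.yaml default
env: []

# templates/deployment.yaml (excerpt)
containers:
  - name: app
    env:
      {{- toYaml .Values.env | nindent 6 }}
```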

Map Format

Best for charts that expect a simple key-value object.
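A sketch of the map-format configuration, with an assumed `path` of `envVars`:

```yaml
helm:
  envMapping:
    format: "map"
    path: "envVars"    # values path that receives the key-value object
```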

This produces the following Helm values:
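With the same DATABASE_URL and LOG_LEVEL variables, the result would look roughly like this (note the double underscores, explained below):

```yaml
envVars:
  DATABASE__URL: "postgres://user:pass@postgres:5432/app"
  LOG__LEVEL: "info"
```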

⚠️

Note: Underscores in environment variable names are converted to double underscores (__) in map format to avoid Helm parsing issues.

Your chart’s values.yaml would use it like:
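An illustrative excerpt: the chart declares an empty default map and iterates over it to render each entry as an env var:

```yaml
# values.yaml default
envVars: {}

# templates/deployment.yaml (excerpt)
containers:
  - name: app
    env:
      {{- range $key, $value := .Values.envVars }}
      - name: {{ $key }}
        value: {{ $value | quote }}
      {{- end }}
```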

Complete Example with Multiple Services
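A hedged sketch that combines the patterns above in a single environment; all field names follow the earlier assumptions:

```yaml
services:
  - name: api
    helm:
      deploymentMethod: "native"
      chart:
        name: "./helm/api"          # local chart
      envMapping:
        format: "array"
        path: "env"
  - name: worker
    helm:
      deploymentMethod: "native"
      chart:
        name: "myorg-web-app"       # org chart with envMapping defaults in global_config
  - name: postgres
    helm:
      deploymentMethod: "native"
      chart:
        name: "postgresql"          # public chart with chart-level defaults
```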

Templated Variables

Lifecycle supports template variables in Helm values that are resolved at deployment time. These variables allow you to reference dynamic values like build UUIDs, docker tags, and internal hostnames.

Available Variables

Template variables use the format {{{variableName}}} and are replaced with actual values during deployment:

| Variable | Description | Example Value |
| --- | --- | --- |
| `{{{serviceName_dockerTag}}}` | Docker tag for a service | `main-abc123` |
| `{{{serviceName_dockerImage}}}` | Full docker image path | `registry.com/org/repo:main-abc123` |
| `{{{serviceName_internalHostname}}}` | Internal service hostname | `api-service.env-uuid.svc.cluster.local` |
| `{{{build.uuid}}}` | Build UUID | `env-12345` |
| `{{{build.namespace}}}` | Kubernetes namespace | `env-12345` |

Usage in Values
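Template variables can be referenced anywhere in the Helm values a service passes. The sketch below assumes the `values` list syntax used earlier and services named `api` and `postgres`:

```yaml
helm:
  values:
    - "config.apiUrl=http://{{{api_internalHostname}}}"   # hostname of a service named "api"
    - "config.buildId={{{build.uuid}}}"
    - "config.namespace={{{build.namespace}}}"
```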

Docker Image Mapping: When using custom charts, you’ll need to map {{{serviceName_dockerImage}}} or {{{serviceName_dockerTag}}} to your chart’s expected value path. Common patterns include:

  • image.repository and image.tag (most common)
  • deployment.image (single image string)
  • app.image or application.image
  • Custom paths specific to your chart

Check your chart’s values.yaml to determine the correct path.

Image Mapping Examples
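Illustrative mappings for the common value paths listed above, assuming a service named `api` and the `values` list syntax used earlier:

```yaml
# image.repository / image.tag pattern (most common)
helm:
  values:
    - "image.repository=registry.com/org/repo"
    - "image.tag={{{api_dockerTag}}}"
---
# single image string pattern
helm:
  values:
    - "deployment.image={{{api_dockerImage}}}"
---
# custom path specific to your chart
helm:
  values:
    - "app.image={{{api_dockerImage}}}"
```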

⚠️

Important: Always use triple braces {{{variable}}} instead of double braces {{variable}} for Lifecycle template variables. This prevents Helm from trying to process them as Helm template functions and ensures they are passed through correctly for Lifecycle to resolve.

Template Resolution Order

  1. Lifecycle resolves {{{variables}}} before passing values to Helm
  2. The resolved values are then passed to Helm using --set flags
  3. Helm processes its own template functions (if any) after receiving the resolved values

Example with Service Dependencies
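A hedged sketch of an API service that points at sibling services by their internal hostnames; service names and value paths are illustrative:

```yaml
services:
  - name: api
    helm:
      deploymentMethod: "native"
      chart:
        name: "./helm/api"
      values:
        - "image.tag={{{api_dockerTag}}}"
        - "env.DATABASE_HOST={{{postgres_internalHostname}}}"
        - "env.CACHE_HOST={{{redis_internalHostname}}}"
  - name: postgres
    helm:
      deploymentMethod: "native"
      chart:
        name: "postgresql"
  - name: redis
    helm:
      deploymentMethod: "native"
      chart:
        name: "redis"
```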

Deployment Process

  1. Job Creation: A Kubernetes job is created in the ephemeral namespace
  2. RBAC Setup: Service account with namespace-scoped permissions is created
  3. Git Clone: Init container clones the repository
  4. Helm Deploy: Main container executes the Helm deployment
  5. Monitoring: Logs are streamed in real-time via WebSocket

Concurrent Deployment Handling

Native Helm automatically handles concurrent deployments by:

  • Detecting existing deployment jobs
  • Force-deleting the old job
  • Starting the new deployment

This ensures the newest deployment always takes precedence.

Monitoring Deployments

Deploy Logs Access

For services using native Helm deployment, you can access deployment logs through the Lifecycle PR comment:

  1. Add the lifecycle-status-comments! label to your PR
  2. In the status comment that appears, you’ll see a Deploy Logs link for each service using native Helm
  3. Click the link to view real-time deployment logs

Log Contents

The deployment logs show:

  • Git repository cloning progress (clone-repo container)
  • Helm deployment execution (helm-deploy container)
  • Real-time streaming of all deployment output
  • Success or failure status

Chart Types

Lifecycle automatically detects and handles three chart types:

| Type | Detection | Features |
| --- | --- | --- |
| ORG_CHART | Matches `orgChartName` AND has `helm.docker` | Docker image injection, env var transformation |
| LOCAL | Name is "local" or starts with "./" or "../" | Flexible envMapping support |
| PUBLIC | Everything else | Standard labels and tolerations |
💡

The orgChartName is configured in the database’s global_config table with key orgChart. This allows organizations to define their standard internal Helm chart.
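A sketch of that row; the inner field name is an assumption:

```yaml
# global_config row: key = "orgChart"
orgChartName: "myorg-web-app"
```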

Troubleshooting

Deployment Fails with “Another Operation in Progress”

Symptom: Helm reports an existing operation is blocking deployment

Solution: Native Helm automatically handles this by killing existing jobs. If the issue persists, delete the stuck deployment job in the environment’s namespace manually and re-trigger the deployment.

Environment Variables Not Working

Symptom: Environment variables not passed to the deployment

Common Issues:

  1. envMapping placed under chart instead of directly under helm
  2. Incorrect format specification (array vs map)
  3. Missing path configuration

Correct Configuration:
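A sketch showing `envMapping` at the correct level, directly under `helm:`:

```yaml
helm:
  chart:
    name: "myorg-web-app"
  envMapping:            # correct: directly under helm, not under chart
    format: "array"      # pick the format your chart expects (array or map)
    path: "env"          # path configuration must be present
```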

Migration Example

Here’s a complete example showing how to migrate from GitHub-type services to Helm-type services:

Before: GitHub-type Services
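A hedged sketch of the starting point; the `github:` and `docker:` field names are assumptions modeled on the Helm-type keys described above:

```yaml
services:
  - name: my-api
    github:
      repository: "myorg/my-api"
      branchName: "main"
      docker:
        dockerfilePath: "Dockerfile"   # assumed field name for the existing build config
```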

After: Helm-type Services with Native Deployment
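The same service after migration, following the key points listed below (field names assumed as before):

```yaml
services:
  - name: my-api
    helm:
      repository: "myorg/my-api"       # moved from github:
      branchName: "main"               # moved from github:
      deploymentMethod: "native"       # enables native Helm
      chart:
        name: "./helm/my-api"          # local chart in the repository
      envMapping:
        format: "array"
        path: "env"
      args: "--wait --timeout 10m"     # assumed shape for Helm arguments
      docker:
        dockerfilePath: "Dockerfile"   # existing docker build config kept
```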

Key Migration Points

  1. Service Type Change: Changed from github: to helm: configuration
  2. Repository Location: repository and branchName move from under github: to directly under helm:
  3. Deployment Method: Added deploymentMethod: "native" to enable native Helm
  4. Chart Configuration: Added chart: section with local or public charts
  5. Environment Mapping: Added envMapping: to control how environment variables are passed
  6. Helm Arguments: Added args: for Helm command customization
  7. Docker Configuration: Kept existing docker: config for build process
⚠️

Note that when converting from GitHub-type to Helm-type services, the repository and branchName fields move from being nested under github: to being directly under helm:.

Many configuration options (like Helm version, args, and chart details) can be defined in the global_config database table, making the service YAML cleaner. Only override when needed.