
databricks-template

A production-ready PySpark project template with medallion architecture, Python packaging, unit tests, integration tests, test coverage reporting, CI/CD automation, Declarative Automation Bundles, and the DQX data quality framework.


🚀 Overview

This project template is designed to boost productivity and promote maintainability when developing ETL pipelines on Databricks. It aims to bring software engineering best practices—such as modular architecture, automated unit and integration testing, and CI/CD—into the world of data engineering. By combining a clean project structure with robust development and deployment jobs, this template helps teams move faster with confidence.

You’re encouraged to adapt the structure and tooling to suit your project’s specific needs and environment.

Interested in bringing these principles into your own project? Let's connect on LinkedIn.

🧪 Technologies

  • Databricks Free Edition (Serverless)
  • Databricks Runtime 18.0 LTS
  • Databricks Unity Catalog
  • Databricks Declarative Automation Bundles (formerly Databricks Asset Bundles)
  • Databricks CLI
  • Databricks Python SDK
  • Databricks DQX
  • Databricks AI Dev Kit
  • Claude Code
  • PySpark 4.1
  • Python 3.12+
  • GitHub Actions
  • Pytest

📦 Features

This project template demonstrates how to:

  • use agentic development (with the Databricks AI Dev Kit and Claude Code) in data projects.
  • structure PySpark code inside classes/packages instead of notebooks.
  • package and deploy code to different environments (dev, staging, prod).
  • use a CI/CD pipeline with GitHub Actions.
  • run unit tests on transformations with the pytest package, and set up VS Code to run unit tests on your local machine.
  • run integration tests by setting up the input data and validating the output data.
  • isolate "dev" environments/catalogs to avoid concurrency issues between developer tests.
  • show the developer name and branch as job tags to track issues.
  • use the coverage package to generate test coverage reports.
  • use uv as a project/package manager.
  • configure a job to run tasks selectively.
  • apply the medallion architecture pattern.
  • lint and format code with ruff and pre-commit.
  • use a Makefile to automate repetitive tasks.
  • use the argparse package to build a flexible command-line interface to start the jobs.

🧠 Resources

Agentic development:

Debates on the use of notebooks vs. Python packaging:

Sessions on Databricks Declarative Automation Bundles, CI/CD, and Software Development Life Cycle at Data + AI Summit 2025:

Other resources:

📁 Folder Structure

databricks-template/
│
├── .github/                       # CI/CD automation
│   └── workflows/
│       └── onpush.yml             # GitHub Actions pipeline
│
├── src/                           # Main source code
│   └── template/                  # Python package
│       ├── main.py                # Entry point with CLI (argparse)
│       ├── config.py              # Configuration management
│       ├── baseTask.py            # Base class for all tasks
│       ├── commonSchemas.py       # Shared PySpark schemas
│       ├── job1/                  # Job-specific tasks
│       │   ├── extract_source1.py
│       │   ├── extract_source2.py
│       │   ├── generate_orders.py
│       │   ├── generate_orders_agg.py
│       │   ├── integration_setup.py
│       │   └── integration_validate.py
│       └── job2/                  # Additional job tasks
│
├── tests/                          # Unit tests
│   ├── job1/
│   │   └── unit_test.py            # Pytest unit tests
│   └── job2/
│
├── resources/                      # Databricks workflow templates
│   └── jobs.yml                    # Generated job definition (auto-created)
│
├── scripts/                              # Helper scripts
│   ├── sdk_generate_template_job.py      # Job definition generator (Databricks SDK)
│   ├── sdk_init.py                       # Workspace initialization
│   ├── sdk_analyze_job_costs.py          # Cost analysis script
│   └── sdk_workspace_and_account.py      # Workspace and account management
│
├── docs/                           # Documentation assets
│   ├── dag.png
│   ├── task_output.png
│   ├── data_lineage.png
│   ├── data_quality.png
│   └── ci_cd.png
│
├── dist/                        # Build artifacts (Python wheel)
├── coverage_reports/            # Test coverage reports
│
├── databricks.yml               # Declarative Automation Bundle config
├── pyproject.toml               # Python project configuration (uv)
├── Makefile                     # Build automation
├── .pre-commit-config.yaml      # Pre-commit hooks (ruff)
└── README.md                    # This file
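
The tree above lists a `baseTask.py` shared by all tasks; its contents are not shown in this README, but a minimal sketch of what such a base class might look like (class name aside, the attributes and methods here are assumptions) is:

```python
from abc import ABC, abstractmethod


class BaseTask(ABC):
    """Shared plumbing for every task: holds the run parameters and
    defines the entry point each concrete task must implement."""

    def __init__(self, env: str, schema: str = "default", debug: bool = False):
        self.env = env
        self.schema = schema
        self.debug = debug

    def table(self, name: str) -> str:
        # Fully qualified table name inside the environment's
        # default catalog and schema.
        return f"{self.env}.{self.schema}.{name}"

    @abstractmethod
    def launch(self) -> None:
        """Run the task. Implemented by each task in job1/, job2/, ..."""
```

Each task module (e.g. `extract_source1.py`) would then subclass `BaseTask` and implement only `launch()`, keeping parameter handling and naming conventions in one place.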

CI/CD pipeline

Jobs

Task Output

Data Lineage

Data Quality (generated by Databricks DQX)

Instructions

  1. (Optional) Install Databricks AI Dev Kit and Claude Code.

  2. Create a workspace. Use a Databricks Free Edition workspace.

  3. Install and configure the Databricks CLI on your local machine. Check the current version in databricks.yml. Follow the instructions here.

  4. Build Python env and execute unit tests on your local machine.

     make sync && make test
     
    
  5. Create an external location in Databricks and update the "storage-root" parameter in the Makefile. For more details, see Overview of external locations. Then run the following, which creates the catalogs, schemas, service principal, and the required grants:

     make init
    
  6. Generate a secret for the service principal. In Databricks, go to: Workspace -> Settings -> Identity and access -> Service principals -> Secrets. Generate a new secret for your service principal and update the corresponding profiles in your .databrickscfg file. Your configuration should look similar to this:

     [dev]
     host             = https://xxxx.cloud.databricks.com/
     token            = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
                     
     [staging]
     host          = https://xxxx.cloud.databricks.com/
     client_id     = yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
     client_secret = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    
     [prod]
     host          = https://xxxx.cloud.databricks.com/
     client_id     = yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
     client_secret = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    
  7. Deploy and execute on the dev workspace.

     make deploy env=dev
    
  8. Configure CI/CD automation with the service principal ID and secret: set the GitHub Actions repository secrets DATABRICKS_HOST, DATABRICKS_PRINCIPAL_ID, and DATABRICKS_SECRET.

  9. (Optional) You can also execute unit tests from your preferred IDE. Here's a screenshot from VS Code with Microsoft's Python extension installed.
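
For step 9, unit tests stay fast when transformation logic is kept in plain functions rather than buried inside DataFrame plumbing. A sketch of what such a test could look like (the function and assertions are illustrative, not the template's actual tests):

```python
# tests/job1/unit_test.py (illustrative sketch)

def total_price(quantity: int, unit_price: float) -> float:
    """A transformation kept Spark-free so pytest can run it anywhere,
    including on a laptop with no Spark session available."""
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return round(quantity * unit_price, 2)


def test_total_price():
    assert total_price(3, 9.99) == 29.97
    assert total_price(0, 5.0) == 0.0
```

Running `pytest tests/` (or the VS Code test explorer) picks up any `test_*` function automatically; no Databricks connectivity is needed for this layer of tests.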

Task parameters


  • task (required) - determines the current task to be executed.
  • env (required) - determines the AWS account where the job is running. This parameter also defines the default catalog for the task.
  • user (required) - determines the name of the catalog when env is "dev".
  • schema (optional) - determines the default schema to read/store tables.
  • skip (optional) - determines if the current task should be skipped.
  • debug (optional) - determines if the current task should go through the debug conditional.
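
The parameters above map naturally onto an argparse interface. A sketch of what the parser in main.py might look like (flag names mirror the list above; the choices and defaults are assumptions):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Launch a template task")
    parser.add_argument("--task", required=True,
                        help="name of the task to execute")
    parser.add_argument("--env", required=True,
                        choices=["dev", "staging", "prod"],
                        help="target environment; also the default catalog")
    parser.add_argument("--user", required=True,
                        help="developer name; names the catalog when env is dev")
    parser.add_argument("--schema", default="default",
                        help="default schema to read/store tables")
    parser.add_argument("--skip", action="store_true",
                        help="skip the current task")
    parser.add_argument("--debug", action="store_true",
                        help="route the task through its debug conditional")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"running {args.task} in {args.env}")
```

Because the entry point is a plain CLI, the same wheel can be driven identically by a Databricks job task, the Makefile, or a local shell.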

Star History

Star History Chart
