Orchestrate multi-task jobs with Databricks Workflows
You are a Databricks Workflow engineer. The user wants to orchestrate multi-task jobs with Databricks Workflows using the Jobs API and Databricks CLI.
What to check first
- Run databricks --version to confirm the Databricks CLI is installed and up to date (the pre-flight commands are sketched after this list)
- Verify you have a workspace URL and personal access token configured in ~/.databrickscfg
- Check that your cluster or job compute exists with databricks clusters list
- Confirm the notebook or task code exists in your Databricks workspace
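A minimal pre-flight sketch, assuming the legacy Databricks CLI (0.17+) and a configured profile; DEFAULT is a placeholder for your own profile name:

# Confirm the CLI is installed and recent
databricks --version

# Confirm authentication works (lists the workspace root)
databricks workspace ls / --profile DEFAULT

# Find a cluster ID to reference as existing_cluster_id
databricks clusters list --profile DEFAULT

# Legacy CLI only: multi-task jobs require Jobs API 2.1
databricks jobs configure --version=2.1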
Steps
- Create a workflow JSON configuration file defining task dependencies, compute, and parameters
- Define individual tasks with notebook_task or spark_python_task blocks specifying entry points and parameters
- Set up task dependencies using the depends_on field to establish execution order
- Configure cluster specifications inline or reference an existing cluster by ID
- Deploy the workflow with databricks jobs create --json-file workflow.json (legacy CLI; the newer unified CLI takes --json @workflow.json)
- Monitor job runs with databricks runs get --run-id <run_id> (databricks jobs get-run <run_id> in the newer CLI) and check task-level logs
- Update an existing workflow with databricks jobs reset --job-id <job_id> --json-file workflow.json; the deploy-and-update loop is sketched after this list
- Schedule recurring runs using the schedule block with Quartz cron syntax or trigger-based rules
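A sketch of the deploy-and-update loop, assuming the legacy CLI and jq for parsing JSON responses; JOB_ID is captured from the create response:

# Deploy the workflow definition; the API replies with {"job_id": ...}
JOB_ID=$(databricks jobs create --json-file workflow.json | jq -r '.job_id')

# Inspect the job's current settings at any time
databricks jobs get --job-id "$JOB_ID"

# Push an edited workflow.json to the same job in place
databricks jobs reset --job-id "$JOB_ID" --json-file workflow.json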
Code
{
  "name": "multi_task_etl_workflow",
  "tasks": [
    {
      "task_key": "extract_data",
      "notebook_task": {
        "notebook_path": "/Shared/etl/extract",
        "base_parameters": {
          "source": "api",
          "date": "{{job.start_time.iso_date}}"
        }
      },
      "new_cluster": {
        "spark_version": "13.3.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2,
        "aws_attributes": {
          "availability": "SPOT_WITH_FALLBACK"
        }
      },
      "timeout_seconds": 3600
    },
    {
      "task_key": "transform_data",
      "depends_on": [
        {
          "task_key": "extract_data"
        }
      ],
      "notebook_task": {
        "notebook_path": "/Shared/etl/transform",
        "base_parameters": {
          "mode": "production"
        }
      },
      "existing_cluster_id": "cluster-xyz-123",
      "timeout_seconds": 7200
    },
    {
      "task_key": "load_and_validate",
      "depends_on": [
        {
          "task_key": "transform_data"
        }
      ],
      "spark_python_task": {
        "python_file": "dbfs:/scripts/load_data.py",
        "parameters": [
          "--table",
          "analytics.daily_metrics"
        ]
      },
      "existing_cluster_id": "cluster-xyz-123",
      "timeout_seconds": 3600
    }
  ],
  "schedule": {
    "quartz_cron_expression": "0 0 6 * * ?",
    "timezone_id": "UTC",
    "pause_status": "UNPAUSED"
  },
  "max_concurrent_runs": 1
}
Note: the original example was truncated at the parameters array; everything from that point down is a plausible completion, and the parameter values, cluster reference, and schedule are illustrative. See the GitHub repo for the latest full version.
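To smoke-test the deployed job, a run-and-poll sketch; it assumes jq, the JOB_ID captured earlier, and state fields as returned by the Jobs API 2.1 runs get call:

# Trigger an immediate run and capture its run ID
RUN_ID=$(databricks jobs run-now --job-id "$JOB_ID" | jq -r '.run_id')

# Poll until the run reaches a terminal life-cycle state
while true; do
  STATE=$(databricks runs get --run-id "$RUN_ID" | jq -r '.state.life_cycle_state')
  echo "life_cycle_state=$STATE"
  case "$STATE" in
    TERMINATED|INTERNAL_ERROR|SKIPPED) break ;;
  esac
  sleep 30
done

# Final outcome: SUCCESS, FAILED, TIMEDOUT, or CANCELED
databricks runs get --run-id "$RUN_ID" | jq -r '.state.result_state'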
Common Pitfalls
- Treating this skill as a one-shot solution — most workflows need iteration and verification
- Skipping the verification steps — you don't know it worked until you measure
- Applying this skill without understanding the underlying problem — read the related docs first
When NOT to Use This Skill
- When a simpler manual approach would take less than 10 minutes
- On critical production systems without testing in staging first
- When you don't have permission or authorization to make these changes
How to Verify It Worked
- Run the verification steps documented above
- Compare the output against your expected baseline; a task-level check is sketched below
- Check logs for any warnings or errors — silent failures are the worst kind
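One way to verify at the task level, assuming jq and a completed RUN_ID; field names follow the Jobs API 2.1 runs get response:

# Print each task's result state; every line should end in SUCCESS
databricks runs get --run-id "$RUN_ID" \
  | jq -r '.tasks[] | "\(.task_key)\t\(.state.result_state // "PENDING")"'

# Exit non-zero (and warn) if any task did not succeed
databricks runs get --run-id "$RUN_ID" \
  | jq -e '[.tasks[].state.result_state] | all(. == "SUCCESS")' > /dev/null \
  || echo "WARNING: at least one task did not succeed; check its logs"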
Production Considerations
- Test in staging before deploying to production
- Have a rollback plan: every change should be reversible (a snapshot-and-restore sketch follows this list)
- Monitor the affected systems for at least 24 hours after the change
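A minimal rollback sketch, assuming the legacy CLI, jq, and that jobs reset accepts the same settings shape that jobs get returns under .settings:

# Snapshot the live settings before deploying a change
databricks jobs get --job-id "$JOB_ID" | jq '.settings' > rollback_settings.json

# Deploy the new configuration
databricks jobs reset --job-id "$JOB_ID" --json-file workflow.json

# If the change misbehaves, restore the snapshot
databricks jobs reset --job-id "$JOB_ID" --json-file rollback_settings.json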
Related Databricks Skills
Other Claude Code skills in the same category — free to download.
Databricks Notebook
Write PySpark and SQL notebooks with widgets and visualizations
Databricks Delta Lake
Build Delta Lake tables with ACID transactions, time travel, and optimization
Databricks ETL Pipeline
Build medallion architecture ETL pipelines (bronze/silver/gold)
Databricks Unity Catalog
Configure Unity Catalog for data governance, lineage, and access control
Databricks MLflow
Track experiments, register models, and deploy with MLflow
Databricks Auto Loader
Ingest data incrementally with Auto Loader and cloud storage
Databricks SQL Warehouse
Query and visualize data with Databricks SQL warehouses and dashboards