
Databricks Workflows


Orchestrate multi-task jobs with Databricks Workflows

Works with OpenClaude

You are a Databricks Workflows engineer. The user wants to orchestrate multi-task jobs with Databricks Workflows using the Jobs API and the Databricks CLI.

What to check first

  • Run databricks --version to confirm the Databricks CLI is installed and up to date
  • Verify you have a workspace URL and personal access token configured in ~/.databrickscfg
  • Check that your cluster or job compute exists with databricks clusters list
  • Confirm the notebook or task code exists in your Databricks workspace
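The credentials check above can be automated. The sketch below (hypothetical host/token values) uses the fact that ~/.databrickscfg is INI-formatted, so Python's stdlib configparser can confirm a profile defines both a host and a token:

```python
# Sketch: verify a Databricks CLI profile has both host and token set.
# ~/.databrickscfg uses INI syntax, so configparser can read it directly.
import configparser

def profile_configured(cfg_text: str, profile: str = "DEFAULT") -> bool:
    """Return True if the given profile defines a host and a token."""
    cfg = configparser.ConfigParser()
    cfg.read_string(cfg_text)
    if profile != cfg.default_section and not cfg.has_section(profile):
        return False
    section = cfg[profile]
    return bool(section.get("host")) and bool(section.get("token"))

# Hypothetical profile contents for illustration only.
sample = """\
[DEFAULT]
host = https://example.cloud.databricks.com
token = dapi-xxxx
"""
print(profile_configured(sample))  # True
```

In practice you would read the real file with `cfg.read(os.path.expanduser("~/.databrickscfg"))` instead of a string.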

Steps

  1. Create a workflow JSON configuration file defining task dependencies, compute, and parameters
  2. Define individual tasks with notebook_task or spark_python_task blocks specifying entry points and parameters
  3. Set up task dependencies using the depends_on field to establish execution order
  4. Configure cluster specifications inline or reference an existing cluster by ID
  5. Use databricks jobs create --json-file workflow.json to deploy the workflow
  6. Monitor job runs with databricks jobs get-run --run-id <run_id> and check task-level logs
  7. Update workflows with databricks jobs reset --job-id <job_id> --json-file workflow.json to modify configuration
  8. Schedule recurring runs using the schedule block with cron syntax or trigger-based rules
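Steps 1–3 can be sketched in Python before deploying: build the tasks array, encode dependencies with depends_on, and derive the execution order the scheduler will respect via a topological sort. Task names and paths below mirror the example config and are illustrative:

```python
# Sketch: assemble a multi-task job payload and derive its execution order.
# depends_on lists predecessor task_keys; a topological sort over that
# graph gives the order Databricks will honor.
import json
from graphlib import TopologicalSorter

tasks = [
    {"task_key": "extract_data",
     "notebook_task": {"notebook_path": "/Shared/etl/extract"}},
    {"task_key": "transform_data",
     "depends_on": [{"task_key": "extract_data"}],
     "notebook_task": {"notebook_path": "/Shared/etl/transform"}},
    {"task_key": "load_and_validate",
     "depends_on": [{"task_key": "transform_data"}],
     "spark_python_task": {"python_file": "dbfs:/scripts/load_data.py"}},
]

# Map each task_key to the set of task_keys it depends on.
graph = {t["task_key"]: {d["task_key"] for d in t.get("depends_on", [])}
         for t in tasks}
order = list(TopologicalSorter(graph).static_order())
print(order)  # ['extract_data', 'transform_data', 'load_and_validate']

workflow = {"name": "multi_task_etl_workflow", "tasks": tasks}
# json.dump(workflow, open("workflow.json", "w"), indent=2)
# then deploy: databricks jobs create --json-file workflow.json
```

For a linear chain the sort is trivial, but the same check catches accidental cycles in larger DAGs before the API rejects them.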

Code

{
  "name": "multi_task_etl_workflow",
  "tasks": [
    {
      "task_key": "extract_data",
      "notebook_task": {
        "notebook_path": "/Shared/etl/extract",
        "base_parameters": {
          "source": "api",
          "date": "{{job.start_time}}"
        }
      },
      "new_cluster": {
        "spark_version": "13.3.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2,
        "aws_attributes": {
          "availability": "SPOT_WITH_FALLBACK"
        }
      },
      "timeout_seconds": 3600
    },
    {
      "task_key": "transform_data",
      "depends_on": [
        {
          "task_key": "extract_data"
        }
      ],
      "notebook_task": {
        "notebook_path": "/Shared/etl/transform",
        "base_parameters": {
          "mode": "production"
        }
      },
      "existing_cluster_id": "cluster-xyz-123",
      "timeout_seconds": 7200
    },
    {
      "task_key": "load_and_validate",
      "depends_on": [
        {
          "task_key": "transform_data"
        }
      ],
      "spark_python_task": {
        "python_file": "dbfs:/scripts/load_data.py",
        "parameters": [

Note: this example was truncated in the source. See the GitHub repo for the latest full version.
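For step 8, a schedule block can be added at the top level of the job config. The sketch below assumes the Jobs API field names (quartz_cron_expression, timezone_id, pause_status); Databricks uses Quartz cron syntax, which has a leading seconds field:

```python
# Sketch: a "schedule" block for recurring runs, built as a Python dict.
# Quartz cron has six fields: sec min hour day-of-month month day-of-week.
import json

schedule = {
    "schedule": {
        "quartz_cron_expression": "0 30 6 * * ?",  # daily at 06:30
        "timezone_id": "UTC",
        "pause_status": "UNPAUSED",
    }
}
print(json.dumps(schedule, indent=2))
```

Merge this into workflow.json alongside "name" and "tasks", then apply it with the jobs reset command from step 7.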

Common Pitfalls

  • Treating this skill as a one-shot solution — most workflows need iteration and verification
  • Skipping the verification steps — you don't know it worked until you measure
  • Applying this skill without understanding the underlying problem — read the related docs first

When NOT to Use This Skill

  • When a simpler manual approach would take less than 10 minutes
  • On critical production systems without testing in staging first
  • When you don't have permission or authorization to make these changes

How to Verify It Worked

  • Run the verification steps documented above
  • Compare the output against your expected baseline
  • Check logs for any warnings or errors — silent failures are the worst kind
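The get-run output from step 6 can be checked programmatically. This sketch assumes the run state object shape from the Jobs API (life_cycle_state / result_state); verify the field names against your API version:

```python
# Sketch: interpret the JSON returned by `databricks jobs get-run`.
# A run only counts as successful once its life cycle has TERMINATED
# AND the result state is SUCCESS -- a run can terminate with FAILED.
def run_succeeded(run: dict) -> bool:
    state = run.get("state", {})
    return (state.get("life_cycle_state") == "TERMINATED"
            and state.get("result_state") == "SUCCESS")

# Hypothetical payload for illustration.
sample = {"run_id": 42,
          "state": {"life_cycle_state": "TERMINATED",
                    "result_state": "SUCCESS"}}
print(run_succeeded(sample))  # True
```

A run still in the RUNNING life cycle state has no result_state yet, so this returns False until the run actually finishes.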

Production Considerations

  • Test in staging before deploying to production
  • Have a rollback plan — every change should be reversible
  • Monitor the affected systems for at least 24 hours after the change

Quick Info

Category: Databricks
Difficulty: intermediate
Version: 1.0.0
Author: Claude Skills Hub
Tags: databricks, workflows, orchestration

Install command:

curl -o ~/.claude/skills/databricks-workflows.md https://clskills.in/skills/databricks/databricks-workflows.md
