Role Overview
We are seeking a Senior Data Engineer with strong expertise in Databricks to help design and implement modern enterprise data platforms. This role blends strong software engineering practices with advanced data platform architecture, requiring hands-on experience building scalable pipelines and designing robust data systems in the cloud. The engineer will contribute to platform architecture, implement reliable data solutions, and ensure best practices across development, testing, deployment, and governance within the data platform ecosystem.
Responsibilities
- Design and implement scalable enterprise data platforms using Databricks and AWS.
- Architect and oversee end-to-end data platform implementations.
- Develop robust data pipelines using Python, PySpark, and SQL.
- Apply strong engineering practices including testing, CI/CD, and version control.
- Implement and manage Databricks platform components including Delta Lake, Unity Catalog, Databricks Asset Bundles, and Lakeflow Jobs.
- Collaborate with engineering teams to ensure maintainable, scalable, and secure data solutions.
- Implement Infrastructure as Code practices to support repeatable environments.
- Contribute to architectural decisions and guide best practices for the data platform.
Requirements
- Deep expertise in Databricks, including platform architecture and best practices.
- Experience as a Solution Architect or Data Platform Owner designing end-to-end implementations.
- Strong programming experience with Python, PySpark, and SQL.
- Experience using testing frameworks such as PyTest.
- Solid experience with Git-based workflows, CI/CD pipelines, and DevOps practices.
- Hands-on experience with Delta Lake, Unity Catalog, Databricks Asset Bundles (DABs), and Lakeflow Jobs.
- Experience with Infrastructure as Code using Terraform.
- Strong AWS experience, particularly S3 and IAM roles.
- Strong communication and collaboration skills in distributed teams.
Nice to Have
- Experience designing lakehouse architectures at scale.
- Experience optimizing distributed data workloads.
- Experience working in consulting or client-facing engineering environments.
- Experience implementing governance, security, and data lineage frameworks.
- Strong hands-on experience with AWS Glue (ETL, jobs, crawlers, workflows).
Location
- Fully remote
- Must be available to work hours overlapping U.S. Pacific, Central, or Eastern time zones
Application Deadline
This is an evergreen position with no predetermined start date. Applications will be accepted until March 29, 2026. As we continue to build our talent pipeline, the position may be reposted so we can connect with additional qualified professionals.

