At Lumenalta, we create impactful software solutions that drive innovation and transform businesses. Since 2000, we’ve partnered with visionary leaders to build cutting-edge tech, solve complex challenges, and deliver results faster through our elite teams and tech-driven approach. Join us in shaping the future of technology.
What you’ll do
- Architect end-to-end pipelines using PySpark, SQL, DLT, and dbt for batch and streaming use cases.
- Design governed lakehouse solutions using Unity Catalog, Delta Lake, and scalable architecture best practices.
- Leverage cutting-edge Databricks features, including:
  - Unity Catalog for access control and lineage
  - Delta Live Tables for reliable streaming
  - Liquid Clustering for performance tuning
  - System Tables for monitoring and cost insights
  - Serverless Compute for elastic scalability
  - AI/BI Genie for natural-language insights
  - Query Federation and Delta Sharing for unified access
- Automate everything: Use Terraform, Databricks CLI, and REST APIs to provision and manage infrastructure.
- Streamline CI/CD: Build pipelines for notebooks, workflows, DLT, and ML models using GitHub Actions, Jenkins, or Azure DevOps.
- Collaborate cross-functionally: Translate complex business needs into simple, scalable data architecture.
- Champion governance and security: Enforce fine-grained access, logging, and compliance using Unity Catalog and platform controls.
- Stay ahead of the curve: Explore and guide adoption of new Databricks features like Lakehouse Federation, GenAI integrations, and Model Serving.
- Mentor and uplift: Share knowledge, contribute to internal standards, and help raise the bar for data engineering across the team.
What you bring
- 6+ years of experience in data engineering/architecture, including 3+ years hands-on with Databricks in production.
- Deep understanding of Databricks features and architecture, especially Unity Catalog, Delta Live Tables, Liquid Clustering, Serverless Compute, Photon Engine, System Tables, Query Federation, and Delta Sharing.
- Strong cloud knowledge, with AWS as your home base (S3, IAM, Glue, Redshift, Lambda, Athena, ECS/EKS).
- Infrastructure automation experience with Terraform, CLI tools, and REST APIs.
- CI/CD pro with GitHub Actions, Jenkins, Azure DevOps, or similar.
- Fluency in PySpark, SQL, and Python; experience with dbt, MLflow, and Spark internals is a plus.
- Practical knowledge of data governance, lineage, access controls, and compliance in cloud-native environments.
- Clear communicator with strong architectural instincts and attention to detail.
Nice to have
- Databricks certifications (Data Engineer Pro, Machine Learning Pro, Lakehouse Architect)
- Experience with Model Serving, Feature Store, Vector Search, or GenAI use cases
- Comfort integrating with BI tools like Tableau, Power BI, and Databricks SQL
- Past work on migrations from Hadoop, Teradata, Informatica, or similar
- Exposure to Databricks on Azure or GCP
- Familiarity with data observability tools
Why join Lumenalta?
- You’ll work on high-impact projects that power real-time analytics, AI, and innovation at scale.
- You’ll be surrounded by smart, supportive people who care about doing great work and helping each other grow.
- You’ll stay on the forefront of Databricks capabilities and cloud-native design patterns.
- We offer competitive compensation, flexibility, and the chance to shape meaningful solutions for forward-thinking clients.
Ongoing recruitment – no set deadline.
What's it like to work at Lumenalta?
Traits of a Lumen
- Radically Engaged: Strong performers, constant communication, quality work. We deliver impact.
- Bright Mindset: Ambitious, energized, kind. We tackle challenges with optimism.
- Lead the Way: Professional, adaptable, thoughtful. We set the standard.
- Lightspeed: Agile, collaborative, action-first. We move fast to deliver the best.
Join the bright side