JobsJornal

Tech Lead Databricks Data Engineer

Mitre Media · March 26, 2026
🌍 Remote · USA, Canada (Eastern Time ±3 hrs) · Full-time · Data & Analytics

Job Description

About Mitre Media

Mitre Media is redefining FinTech with AI-driven tools that empower millions of investors. Our portfolio, including Dividend.com and MutualFunds.com, leverages large language models to deliver novel data insights and visually rich user experiences. For over a decade, we've served individual investors, financial advisors, and top asset managers like BlackRock and Vanguard through premium data, tools, and advertising solutions. We're seeking talented engineers excited by the intersection of big data, artificial intelligence, and investing to join our lean, entrepreneurial team.

About the Role

As Tech Lead Data Engineer, you'll architect and maintain the data backbone powering every feature across our product suite. Reporting to the CTO, you will design Databricks-based ETL pipelines, model complex investment data, and surface low-latency, high-quality datasets for both user-facing features and internal AI/analytics workloads. You'll work in a remote-first culture that values occasional in-person collaboration bursts, follow Shape Up for project planning, and ship pragmatic solutions that prioritize delivery and resist scope creep.

Key Responsibilities

  • Design, implement, and optimize large-scale ETL workflows in Databricks using Apache Spark, Delta Lake, and dbt
  • Build and maintain scalable pipelines that process investment market data
  • Collaborate with product, analytics, and AI teams to deliver high-quality datasets for machine learning and user-facing features
  • Mentor junior engineers on data engineering best practices and architecture patterns
  • Ensure data quality, security, and compliance across all data infrastructure
  • Participate in code reviews and contribute to technical documentation
  • Work with cloud platforms (AWS/GCP) to optimize data storage and compute costs

Required Skills & Experience

  • Expert-level proficiency with Databricks, Apache Spark, and Delta Lake
  • Strong programming skills in Python, Scala, or Java
  • Advanced SQL knowledge for complex data modeling and optimization
  • Experience building production ETL pipelines and data warehouses
  • Familiarity with CI/CD practices and version control (GitHub)
  • Understanding of data architecture and SOLID principles
  • Experience with FinTech, financial data, or investment platforms is a plus

Location: Remote within ±3 hours of Eastern Time (USA/Canada). Willingness to attend occasional in-person collaboration sessions required.