Job description
A hands-on data engineering role with real business impact, international exposure, and strong room to grow within a fast-moving, empowered Nordic organization where your work truly matters.
What you’ll be doing
- Design, build, and optimize scalable data pipelines on Databricks, covering the full lifecycle from ingestion to transformation and performance tuning
- Work closely with business stakeholders, analysts, and fellow engineers to deliver high-quality, business-critical data solutions
- Take ownership of end-to-end data flows, ensuring reliability, scalability, and best-practice architecture
- Actively contribute to the evolution of our modern Lakehouse platform, including governance, CI/CD, and data quality standards
- Support and contribute to AI, ML, and Generative AI initiatives, from solution design to implementation
- Proactively identify improvements and help shape how data engineering is done at Karo
Requirements
- 3+ years of hands-on experience as a Data Engineer (or similar role) in a fast-paced, delivery-focused environment
- Strong experience with Databricks and solid understanding of Lakehouse and Medallion architectures
- Advanced SQL, solid Python, and experience working with Spark
- Experience with ETL/ELT pipelines, data modeling (e.g. Kimball / star schema), and CI/CD practices
- Comfortable working with Git and modern development workflows
- Curious, proactive, and eager to grow, especially within AI/ML and modern data platforms
- Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field
- Experience or interest in Machine Learning or Generative AI is a strong plus