Job Description
About the Company
Avaron AB is a growing consultancy focused on technology, finance, and business support.
We match your expertise with the market's most interesting assignments, offering a platform where your professional development is central.
About the Assignment
As a Data Engineer, you will help build and maintain the core data infrastructure powering performance evaluation products within the retail domain.
The focus is on unifying data from multiple operational sources into a dependable platform, improving data quality, consistency, and timeliness to support better business and performance insights.
You’ll work end-to-end with modern, cloud-native data engineering practices, creating resilient pipelines and automated validation to reduce manual effort and accelerate reporting and decision-making.
Responsibilities
- Design, build, and maintain scalable data pipelines (batch and streaming) across ingestion, transformation, and serving layers.
- Develop and optimize data models and ingestion frameworks to consolidate diverse datasets into a unified platform.
- Implement automated data validation checks, monitoring, and troubleshooting routines for production environments.
- Contribute throughout the full data engineering lifecycle: ideation, architecture, requirements, design, estimation, sprint planning, development, testing, documentation, deployment, and operational follow-up.
- Optimize performance and cost-efficiency with a strong focus on maintainability and reusability.
- Collaborate with stakeholders to explain data architectures, trade-offs, and pipeline behavior in a clear and structured way.
Requirements
- Proven experience designing and implementing end-to-end data pipelines (batch and streaming).
- Strong understanding of data engineering best practices, including release processes, quality expectations, data validation, performance optimization, monitoring, and production support.
- Hands-on expertise with Databricks, including Apache Spark, Delta Lake, job orchestration, performance tuning, and environment management.
- Strong knowledge of Microsoft Azure, including services commonly used in data platforms (e.g., Azure Data Lake Storage).
- Solid experience with DevOps ways of working: Git/source control, CI/CD, automated testing, environment promotion, and infrastructure-as-code for data platforms.
- Ability to communicate data solutions clearly to non-technical stakeholders.
Nice to Have
- Experience working with lakehouse architecture and related data modeling best practices.
Application
Selections are made on an ongoing basis, so we recommend that you apply as soon as possible.