Work Description
About the Company
Avaron AB is a growing consultancy focused on technology, finance, and business support.
We match your expertise with the market's most interesting assignments, offering a platform where your professional development is central.
About the Assignment
You will join a team building customer-facing machine learning solutions, with a focus on recommendation and personalization.
The environment is centered on Google Cloud Platform (GCP) and production-grade data/ML pipelines, with an emphasis on code quality, automation, and scalable system design.
Job Description
- Design, build, and maintain ML pipelines using Vertex AI Pipelines / Kubeflow
- Develop and optimize data workflows using BigQuery and SQL
- Orchestrate pipelines with Cloud Composer / Airflow
- Work with IAM and service accounts to enable secure access patterns
- Use and maintain metadata through Data Catalog
- Apply Infrastructure as Code principles in the platform setup and evolution
- Develop Python code following best practices (OOP, linting, typing, formatting, and static analysis)
- Write and maintain unit and end-to-end tests using established Python testing frameworks
- Collaborate via Git workflows (PRs, code reviews, and merge conflict resolution)
- Build and maintain CI/CD pipelines (e.g., GitHub Actions)
- Work with Docker-based development and runtime environments
- Contribute to data modeling and system design for robust, scalable solutions
Requirements
- Strong experience in Python development, including OOP and coding best practices
- Experience with flake8, mypy, black, SonarQube, and pre-commit
- Strong testing experience (unit and end-to-end) using tools such as pytest (including fixtures) and unittest
- Solid experience with SQL and BigQuery
- Hands-on experience with Vertex AI Pipelines / Kubeflow Pipelines
- Experience with Cloud Composer / Airflow
- Understanding of IAM and service accounts
- Experience working with Data Catalog
- Understanding of Infrastructure as Code concepts
- Deep understanding of Docker and Unix environments, including shell usage
- Strong Git skills (PR workflow and merge conflict handling)
- Ability to create CI/CD pipelines (e.g., GitHub Actions) using best practices
- Strong understanding of data modeling and system design
Nice to have
- Experience with Dataflow
- Experience with Kubernetes
- Experience building high-availability APIs
- Experience with ML-based recommendations and personalization systems
- Strong dbt experience, preferably in a GCP context
Application
Selections are made on an ongoing basis, so we recommend that you apply as soon as possible.