Job description
Nordea is a leading Nordic universal bank.
Job id: 27379
Are you ready to help shape the future of AI & Generative AI at Scandinavia's largest bank, working alongside a dynamic, fast-paced team?
The Group AI Center of Excellence is an established unit within Group Data Management focusing on delivering and enabling generative AI use cases across the organisation in a secure and compliant way. The unit provides reusable AI Assets that are developed in conjunction with business stakeholders to be leveraged by a wide variety of use cases.
We are now looking for an AI Risk Analyst who is interested in Risk and Responsible AI topics, to apply in the field of Generative AI with use cases across the bank. You will work closely with data scientists, ML engineers, and business stakeholders to ensure all AI and GenAI applications and use cases are developed and deployed in compliance with the EU AI Act.
About this opportunity
Welcome to the Applied Data Science team. Your passion for working with people from both a technical and a business background, coupled with your curiosity for AI and new innovations like Generative AI, makes you well-suited for this position. As Nordea pioneers its way into an AI and Gen AI driven future, you will thrive by embracing a dynamic environment, fostering collaboration, and maintaining an open and adaptable mindset.
What you will be doing:
* Support the design and implementation of the EU AI Act compliance plan across the organisation
* Disseminate best practices, serving as a point of contact for all AI efforts and ensuring responsible AI best practices are consistently adopted across the Bank
* Educate use case owners and other relevant stakeholders in bank-wide risk protocols and external regulations
* Conduct research and ensure commitment to ethical standards of AI governance, such as fairness, accountability, transparency, privacy, and human rights
Our AI team culture is built on collaboration, continuous learning and innovation. We foster an open environment where diverse ideas thrive, encouraging everyone to push the boundaries of technology while supporting each other's growth. With a strong focus on well-being, we prioritize work-life balance and encourage open communication.
At Nordea, inclusion and diversity are at the heart of our values. We believe that a diverse workforce drives innovation, fosters creativity and creates a more supportive workplace for all. The role is based in Høje Taastrup, Helsinki, or Stockholm.
Who you are
Collaboration. Ownership. Passion. Courage. These are the values that guide us in being at our best – and that we imagine you share with us.
To succeed in this role, we believe that you:
* Embrace a growth mindset, where you enjoy taking initiative and bringing solutions
* Enjoy problem solving, and tackling challenges
* Are an awesome team member with a strong ability to inspire people to take action
Your experience and background:
* BSc or MSc in Computer Science, Data Science, Risk Management, or a related Legal discipline
* At least 7 years of experience in AI Risk, Model Risk Management and related activities
* Vast professional experience and familiarity with the EU AI Act and other relevant AI regulations (e.g. GDPR)
* Ability to clearly communicate technical content to a non-technical audience and educate multiple stakeholders on responsible AI
* Knowledge of Data Management and Governance frameworks, including data quality, security, privacy and compliance processes and procedures
* Understanding of mitigation processes for risks associated with AI & GenAI models, e.g. hallucination, fairness, explainability
* Understanding of GenAI and AI development and deployment process e.g., data exploration, feature engineering, production testing, and output validation
* Experience with AI/ML projects in designing, developing and maintaining advanced analytics use cases
If this sounds like you, get in touch!
Next steps
We kindly ask you to submit your application as soon as possible, but no later than 17/12/2024. We will review applications and conduct interviews on an ongoing basis as we receive them, and we might close the recruitment process before the posting end date. Any applications or CVs sent by email, direct messages, or any other channel than our application forms will not be accepted or considered.
If you have any questions about the role or this recruitment process, please reach out to our tech recruiter and main point of contact Anna Dahlström,
[email protected].
For union information, please contact
[email protected] or
[email protected].