If you are passionate about upholding customer trust and safety across AI development, we want to speak with you. The Amazon AGI organization is hiring a Risk Manager to lead risk assessment and policy management throughout its AI development lifecycle. This role will have the autonomy to identify evolving technology, industry, and societal risks related to AI development, and to prioritize mitigations in partnership with the science and engineering teams.
Key job responsibilities
- Lead ongoing risk assessments and corresponding policy management.
- Recommend opportunities to business partners and technical teams/leaders to leverage innovative technologies to meet risk management needs.
- Measure program metrics against goals and write reports for leadership.
- Partner with legal counsel, security, and other stakeholders to manage our AI products' risk posture.
- Develop and deliver risk management frameworks to scale Responsible AI development.
- Earn the trust of business and technical leadership by proactively engaging to understand product offerings and technical architecture.
- Implement and maintain mechanisms that proactively educate the organization (e.g., onboarding trainings, wikis with self-service resources, and office hours to guide teams through Responsible AI development).
- Improve employee understanding of policies through various engagement initiatives, including certifications/trainings associated with the policies.
- Establish and own feedback mechanisms to identify and surface issues.
About the team
The Amazon AGI team builds the most capable Foundation Models, which teams across Amazon and beyond leverage to build Generative AI applications across several different modalities and capabilities. This position will be part of the AGI Product team, which is responsible for leading the product function for AGI.