Senior Applied Scientist - Microsoft
New York, NY 10261
About the Job
Are you seeking opportunities at the intersection of generative artificial intelligence (AI), such as large-scale multimodal models like GPT-4o, and responsible AI, trust, and safety? Do you want to be a member of an interdisciplinary team of researchers, applied scientists, and software engineers? Do you embrace complex sociotechnical challenges? Microsoft Research is looking to hire a Senior Applied Scientist to join our team bridging responsible AI research and practice. The team works on the sociotechnical alignment of generative AI systems, with a particular focus on measuring risks, including risk systematization, risk annotation, dataset creation, and metric design. As a member of this team, you’ll develop central resources and tooling, you’ll partner with Microsoft product teams that wish to be proactive about responsible AI, and you’ll contribute to and/or drive research projects intended to advance the state of the art. Successful candidates will have experience working with large-scale language models and/or multimodal models, and will be passionate about prioritizing diversity, inclusion, and fairness.
Qualifications
Required Qualifications:
- Bachelor’s degree in computer science or a related field (e.g., statistics, information science) and 4+ years of experience working with AI, ML, and/or NLP. This can include product experience, industry research, open-source contributions, or academic research (but not coursework).
- OR Master’s degree in computer science or a related field and 3+ years of experience working with AI, ML, and/or NLP. This can include product experience, industry research, open-source contributions, or academic research (but not coursework).
- OR PhD in computer science or a related field and 1+ year(s) of experience working with AI, ML, and/or NLP. This can include product experience, industry research, open-source contributions, or academic research (but not coursework).
- OR equivalent experience.
- Experience putting responsible AI principles (e.g., fairness, transparency) into practice.
- Coding skills in a general-purpose programming language (e.g., Python, Java, Scala, C#, or C/C++).
Preferred Qualifications:
- Master’s degree in computer science or a related field (e.g., statistics, information science) and 6+ years of experience working with AI, ML, and/or NLP. This can include product experience, industry research, open-source contributions, or academic research (but not coursework).
- OR PhD in computer science or a related field and 3+ years of experience working with AI, ML, and/or NLP. This can include product experience, industry research, open-source contributions, or academic research (but not coursework).
- OR equivalent experience.
- 3+ years of experience contributing to or driving peer-reviewed academic papers.
- 1+ year(s) of experience presenting at conferences in the research or industry communities.
- 3+ years of experience conducting research as part of an academic or industry research program.
- 1+ year(s) of experience developing and deploying production systems, as part of a product team.
- Experience working at the intersection of generative AI and responsible AI, trust, and safety.
- Experience approaching AI risks as sociotechnical challenges rather than purely algorithmic ones.
- Experience working with products and services that incorporate generative AI.
- Experience working productively on an interdisciplinary team spanning multiple roles.
- Track record of prioritizing diversity and inclusion in the workplace.
Applied Sciences IC4 - The typical base pay range for this role across the U.S. is USD $117,200 - $229,200 per year. A different range applies in specific work locations within the San Francisco Bay Area and the New York City metropolitan area: the base pay range for this role in those locations is USD $153,600 - $250,200 per year.
Microsoft will accept applications for the role until September 22, 2024.
Responsibilities
- You’ll be a member of an interdisciplinary team of experts on AI risks.
- You’ll design and run experiments that use API calls to generative AI systems (a minimal sketch appears after this list).
- You’ll monitor research advances and draw on them to shape your applied science work.
- You’ll contribute to and/or drive research projects intended to advance the state of the art.
- You’ll learn new skills and apply them as needed: e.g., you might be asked to learn about measurement modeling and produce a report on a dataset’s content validity.
- You’ll work with stakeholders from a variety of backgrounds and in a wide range of roles.
- You’ll present your work to internal and external stakeholders, including Microsoft leadership.
- You’ll develop central resources and tooling for identifying, measuring, and mitigating AI risks:
  - You’ll work with policy and engineering teams to systematize AI risks.
  - You’ll build and validate annotation guidelines and datasets for measuring AI risks (a second sketch after this list illustrates one validation step).
  - You’ll work with Microsoft product teams to generalize these resources and methods to be appropriate for a variety of systems, use cases, and deployment contexts.
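To give a flavor of the experimental work mentioned above, here is a minimal sketch of collecting model outputs for later risk annotation. It assumes the `openai` Python client is installed and an `OPENAI_API_KEY` environment variable is set; the probe prompts, sample counts, and output path are illustrative placeholders, not a prescribed methodology.

```python
# Minimal sketch (assumptions: the `openai` Python client is installed,
# OPENAI_API_KEY is set, and the probe prompts below are illustrative).
import json

from openai import OpenAI

client = OpenAI()

# Hypothetical probe prompts for a risk-measurement experiment.
PROMPTS = [
    "Summarize this news article for a general audience: ...",
    "Write a short biography of a fictional company founder.",
]

def collect_samples(prompts, model="gpt-4o", samples_per_prompt=3):
    """Gather several completions per prompt for downstream risk annotation."""
    records = []
    for prompt in prompts:
        for i in range(samples_per_prompt):
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                temperature=1.0,  # sample diverse outputs, not just the mode
            )
            records.append(
                {
                    "prompt": prompt,
                    "sample_index": i,
                    "completion": response.choices[0].message.content,
                }
            )
    return records

if __name__ == "__main__":
    with open("experiment_outputs.jsonl", "w", encoding="utf-8") as f:
        for record in collect_samples(PROMPTS):
            f.write(json.dumps(record) + "\n")
```

Storing raw completions as JSONL keeps the experiment re-runnable and makes the outputs easy to hand off to annotators.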
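Because the role also emphasizes validating annotation guidelines, here is an equally minimal sketch of one common validation step: checking inter-annotator agreement with Cohen’s kappa via scikit-learn. The labels below are invented for illustration.

```python
# Minimal sketch (assumption: two annotators labeled the same completions
# as risky (1) or not risky (0) under a draft version of the guidelines).
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 0, 1, 1, 0, 1, 0]
annotator_b = [1, 0, 1, 1, 0, 0, 1, 0]

# Agreement corrected for chance; values near 0 suggest the guidelines
# are ambiguous and need revision before annotation is scaled up.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

In practice, low agreement usually points back to ambiguities in the guidelines rather than to annotator error, which is why guideline iteration and measurement design go hand in hand.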