Research Scientist, Systems ML - SW/HW Co-Design - Inference - Meta
Menlo Park, CA
About the Job
Meta is seeking a Research Scientist to join our Research & Development teams. The ideal candidate will have industry experience working on AI Infrastructure-related topics and will apply these skills to solve some of the most crucial and exciting problems on the web. We are hiring in multiple locations.

The Kernel team focuses on maximizing inference performance for Generative AI and Recommendation models by developing high-performance kernels. Our expertise lies in creating specialized kernels that significantly improve the efficiency of these models. We developed and deployed the first FP8 kernel in Meta's production, as well as FBGEMM TBE. By continuously advancing our kernel optimization capabilities, we enable better user experiences and drive innovation in Generative AI and Recommendation systems.

The E2E Performance team is dedicated to optimizing the end-to-end performance of Generative AI and Recommendation models. We employ various parallelism strategies and distributed inference techniques to improve TTIT and TTFT for LLMs and LDMs. By relentlessly pursuing performance improvements, we have achieved notable successes such as enabling AMD GPUs for GenAI production applications and subsequently optimizing their performance. Our ongoing efforts ensure the continuous improvement of these models' performance, ultimately providing more responsive and seamless experiences for users interacting with Generative AI.
Research Scientist, Systems ML - SW/HW Co-Design - Inference Responsibilities:
- Apply relevant AI infrastructure and hardware acceleration techniques to build and optimize intelligent ML systems that improve Meta's products and experiences
- Develop high-performance kernels and apply different parallelism techniques to improve E2E performance
- Set goals related to project impact, AI system design, and infrastructure/developer efficiency
- Deliver impact directly, or by influencing partners, through deep and thorough data-driven analysis
- Drive large efforts across multiple teams
- Define use cases and develop methodologies and benchmarks to evaluate different approaches
- Apply in-depth knowledge of how ML infrastructure interacts with the other systems around it
Minimum Qualifications:
- Currently has, or is in the process of obtaining, a Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience. Degree must be completed prior to joining Meta.
- Currently has, or is in the process of obtaining, a PhD degree in Computer Science, Computer Vision, Generative AI, NLP, a relevant technical field, or equivalent practical experience. Degree must be completed prior to joining Meta.
- Specialized experience in one or more of the following machine learning/deep learning domains: model compression, hardware accelerator architecture, GPU architecture, machine learning compilers, ML systems, AI infrastructure, high-performance computing, performance optimization, machine learning frameworks (e.g., PyTorch), numerics, or SW/HW co-design
- Experience developing AI-System infrastructure or AI algorithms in C/C++ or Python
- Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.
Preferred Qualifications:
- Experience with or knowledge of training/inference of large-scale AI models
- Experience or knowledge of distributed systems or on-device algorithm development
- Experience or knowledge of recommendation and ranking models
Source: Meta