Research Scientist Intern, Gen AI Language for Llama Data (PhD) - Meta
Menlo Park, CA
About the Job
Meta was built to help people connect and share, and over the last decade our tools have played a critical part in changing how people around the world communicate with one another. With over a billion people using the service and more than fifty offices around the globe, a career at Meta offers countless ways to make an impact in a fast-growing organization.

Meta is seeking Research Interns to join the Llama pre-training data team to advance the state of the art of our Generative AI efforts. We are committed to advancing the field of artificial intelligence by making fundamental advances in technologies that help us interact with and understand our world. We are seeking individuals passionate about areas such as deep learning, computer vision, audio and speech processing, natural language processing, machine learning, reinforcement learning, computational statistics, and applied mathematics. Our interns have an opportunity to make core algorithmic advances and apply their ideas at an unprecedented scale.

Our internships are twelve (12) to twenty-four (24) weeks long, and we have various start dates throughout the year.
RESPONSIBILITIES
- Develop novel state-of-the-art methods and algorithms to automatically curate pre-training data for training Large Language Models
- Help analyze and improve safety and robustness of existing methods and algorithms for model-based data curation
- Perform research to advance the science and technology of model-based data curation
- Collaborate with researchers from Llama Data and Llama Pre-training teams and cross-functional partners including communicating research plans, progress, and results
- Disseminate research results
- Publish research results and contribute to research that can be applied to Meta product development
MINIMUM QUALIFICATIONS
- Currently has or is in the process of obtaining a Ph.D. degree in Computer Science, Computer Vision, Audio Processing, Artificial Intelligence, Generative AI, or relevant technical field
- Must obtain work authorization in the country of employment at the time of hire and maintain ongoing work authorization during employment
- Research experience in machine learning, deep learning, computer vision and/or natural language processing
- Experience with Python, C++, C, Java or other related languages
- Experience with deep learning frameworks such as PyTorch or TensorFlow
PREFERRED QUALIFICATIONS
- Intent to return to the degree program after the completion of the internship/co-op
- Proven track record of achieving significant results as demonstrated by grants, fellowships, patents, as well as first-authored publications at leading workshops or conferences such as NeurIPS, ICLR, AAAI, RecSys, KDD, IJCAI, CVPR, ECCV, ACL, NAACL, EACL, ICASSP, or similar
- Experience working and communicating cross functionally in a team environment
- Publications or experience in machine learning, AI, computer vision, optimization, computer science, statistics, applied mathematics, or data science
- Experience solving analytical problems using quantitative approaches
- Experience setting up ML experiments and analyzing their results
- Experience manipulating and analyzing complex, large scale, high-dimensionality data from varying sources
- Experience in utilizing theoretical and empirical research to solve problems
- Demonstrated software engineering experience via an internship, work experience, coding competitions, or widely used contributions to open source repositories (e.g., GitHub)
- Experience with data curation for pre-training data necessary to train Large Language Models
- Experience with prompting and evaluating Large Language Models
- Experience with PySpark for processing large amounts of text data
Source: Meta