Senior Solutions Architect, Generative AI - Inference at Nvidia
Austin, TX 73301
About the Job
NVIDIA is seeking outstanding AI Solutions Architects to assist and support customers who are building solutions with our newest AI technology.
At NVIDIA, our solutions architects work across different teams and enjoy helping customers with the latest Accelerated Computing and Deep Learning software and hardware platforms.
We're looking to grow our company and build our teams with the smartest people in the world.
Would you like to join us at the forefront of technological advancement? You will become a trusted technical advisor to our customers and work on exciting projects and proof-of-concepts focused on Generative AI and Large Language Models.
You will also collaborate with a diverse set of internal teams on performance analysis and modeling of inference software.
You should be comfortable working in a dynamic environment and have experience with Generative AI, Large Language Models, Deep Learning, and GPU technologies.
This role is an excellent opportunity to work in an interdisciplinary team with the latest technologies at NVIDIA!

What You Will Be Doing:
- Partnering with other solution architects, engineering, product, and business teams; understanding their strategies and technical needs and helping define high-value solutions
- Dynamically engaging with developers, scientific researchers, and data scientists, which will give you experience across a range of technical areas
- Strategically partnering with lighthouse customers and industry-specific solution partners targeting our computing platform
- Working closely with customers to help them adopt and build solutions using NVIDIA technology
- Analyzing performance and power efficiency of deep learning inference workloads
- Some travel to conferences and customers may be required

What We Need To See:
- BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, other Engineering or related fields (or equivalent experience)
- 5+ years of hands-on experience with Deep Learning frameworks such as PyTorch and TensorFlow
- Strong fundamentals in programming, optimization, and software design, especially in Python
- Strong problem-solving and debugging skills
- Excellent knowledge of theory and practice of Large Language Models and Deep Learning inference
- Excellent presentation, communication, and collaboration skills
- Desire to be involved in multiple diverse and creative projects

Ways To Stand Out From The Crowd:
- Experience with NVIDIA GPUs and software libraries, such as NVIDIA NeMo Framework, NVIDIA Triton Inference Server, TensorRT, and TensorRT-LLM
- Excellent C/C++ programming skills, including debugging, profiling, code optimization, performance analysis, and test design
- Familiarity with parallel programming and distributed computing platforms
- Prior experience with DL training at scale and with deploying or optimizing DL inference in production

The base salary range is 148,000 USD - 339,250 USD.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits.
NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer.
As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

Summary
Location: US, CA, Santa Clara; US, TX, Remote; US, WA, Remote; US, CA, Remote; US, NY, Remote
Type: Full time