We are looking for engineers with significant problem-solving experience in PyTorch, CUDA, and distributed systems. You will work with Research Scientists to build and train cutting-edge foundation models on thousands of GPUs.
Responsibilities
• Ensure efficient implementation of models & systems for data processing, training, inference and deployment
• Identify and implement optimization techniques for massively parallel and distributed systems
• Identify and remedy efficiency bottlenecks (memory, speed, utilization) by profiling and implementing high-performance CUDA, C++ and PyTorch code
• Work closely with the research team to ensure systems are designed for end-to-end efficiency, from planning through deployment
• Build tools to visualize, evaluate and filter datasets
• Implement cutting-edge product prototypes based on multimodal generative AI
Experience
• Experience training large models using Python & PyTorch, including practical experience with the entire development pipeline, from data processing, preparation, and data loading to training and inference.
• Experience profiling CPU & GPU code in PyTorch, including with NVIDIA Nsight or similar tools.
• Experience writing & improving highly parallel & distributed PyTorch code, with familiarity with DDP, FSDP, tensor parallelism, etc.
• Experience writing high-performance parallel C++. Bonus if done within an ML context with PyTorch, such as for data loading, data processing, or inference code.
• Experience with high-performance CUDA and writing custom PyTorch kernels. Top candidates will be able to utilize tensor cores, optimize CUDA memory usage, and apply similar performance techniques.
• Good to have: experience with deep learning concepts such as Transformers and multimodal generative models, e.g., diffusion models and GANs.
• Good to have: experience building inference/demo prototype code (incl. Gradio, Docker, etc.)
Location: Palo Alto, CA
Posted: Sept. 15, 2024, 6:24 a.m.