Responsibilities:
- Develop and refine core components of the ML Platform
- Own and drive initiatives from conception through completion and production monitoring
- Collaborate with data scientists, engineers, product teams, and other key stakeholders in a fast-paced, cross-functional environment
- Build and maintain tools and infrastructure for efficient ML model development and deployment
- Help build and automate our ML workstream, from data analysis, experimentation, and operationalization to model training, tuning, and visualization
- Build and maintain data pipelines for analytics, model evaluation, and training (including versioning, compliance, and validation)
- Improve and maintain our automated CI/CD pipeline
- Increase our deployment velocity, including the process for deploying models and data pipelines into production
- Leverage open-source tools and cloud computing technologies

Qualifications:
- Bachelor's degree in Computer Science, Software Engineering, Electrical Engineering, or a similar field, or equivalent work experience
- At least 5 years of experience in professional software development
- At least 3 years of experience with cloud infrastructure such as AWS, GCP, or Azure
- Strong prior experience in data engineering, software engineering, MLOps, data science, or research
- Experience with deep learning frameworks (TensorFlow, PyTorch, etc.)
- A strong programming background; you pride yourself on writing clean, testable code
- Proven programming/scripting skills with Java, Scala, and/or Python
- Bash scripting and Unix skills
- Experience with DevOps tools such as GitLab and Docker
- Experience with database systems including MySQL, Postgres, and MongoDB
- Excellent software design, problem-solving, and debugging skills
- Familiarity with best practices in the data engineering and MLOps community
- Experience working with relational and NoSQL databases
- Data warehousing experience, particularly with Redshift, is a plus
- You like to think at scale: designing, developing, and operating large-scale data pipelines and services that meet goals of low latency, high availability, resiliency, security, and quality
- You develop with empathy for people and how they use your work, particularly when translating requests from data scientists and other stakeholders into requirements
- Experience with containerization (Docker) and container-orchestration systems such as Kubernetes; experience with data workflow managers is a bonus
- Experience with cloud ecosystems and with setting up CI/CD pipelines
- Strong interpersonal skills; able to work independently as well as in a team
- You love to architect and build infrastructure in the cloud
- You are service-oriented and help the engineering team develop, deploy, and maintain our products with ease
- You believe in continuous learning, sharing best practices, and encouraging and elevating less experienced colleagues as they learn
- You have a strong commitment to development best practices and code reviews
Software Engineering
Location: Los Angeles, CA
Posted: Oct. 23, 2024, 9:09 a.m.