Hello,
We are looking for Senior Data Engineer candidates for a hybrid opportunity in Dearborn, MI.
Title: Data Engineer General
Location: Dearborn, MI (Hybrid)
Job Type: Long Term Contract
Position Description:
We are seeking an experienced, highly skilled Senior DevOps Engineer to join the GDIA department at Ford Motor Company. The ideal candidate will have a strong background in data warehousing and significant hands-on experience with Google Cloud Platform (GCP). As a Senior DevOps Engineer, you will play a critical role in designing, implementing, and maintaining the infrastructure and tooling that enable our data engineering and analytics teams to operate efficiently and effectively.
Skills Required:
1. Infrastructure as Code:
• Design, build, and maintain scalable and reliable infrastructure on GCP using Infrastructure as Code (IaC) tools such as Terraform and Deployment Manager.
• Automate the provisioning and management of cloud resources to ensure consistency and repeatability.
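The core IaC idea above — declare the desired state, then reconcile reality against it idempotently — can be sketched in plain Python. This is only an illustration of the pattern Terraform applies to real GCP resources; the in-memory "cloud" dict and bucket names here are made up for the example.

```python
# Minimal sketch of the IaC reconcile loop: declare desired state,
# then create what is missing and remove what is unmanaged.
# The "cloud" dict stands in for real GCP resources (illustrative only).

desired_state = {
    "buckets": {"raw-data", "curated-data"},
}

def reconcile(cloud: dict, desired: dict) -> list[str]:
    """Bring `cloud` in line with `desired`, like a tiny `terraform apply`."""
    actions = []
    current = cloud.setdefault("buckets", set())
    for name in desired["buckets"] - current:   # missing resources
        current.add(name)
        actions.append(f"create bucket {name}")
    for name in current - desired["buckets"]:   # unmanaged resources
        current.discard(name)
        actions.append(f"delete bucket {name}")
    return sorted(actions)

cloud = {"buckets": {"raw-data", "legacy-tmp"}}
print(reconcile(cloud, desired_state))
print(reconcile(cloud, desired_state))  # second run is a no-op: []
```

Running the reconcile twice yields no actions the second time, which is the consistency and repeatability the bullet above asks for.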
2. Continuous Integration and Continuous Deployment (CI/CD):
• Implement and manage CI/CD pipelines using tools like Jenkins, GitLab CI, or Cloud Build to facilitate seamless code integration and deployment.
• Ensure automated testing and monitoring are integrated into the CI/CD process to maintain high-quality code and rapid delivery cycles.
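The CI/CD gating described above boils down to: run stages in order and stop at the first failure so broken code never reaches deploy. A minimal sketch of that control flow (stage names are illustrative; real pipelines would be Jenkinsfiles, GitLab CI YAML, or Cloud Build configs):

```python
# Sketch of a CI/CD gate: execute stages in order, halting on the
# first failure so later stages (e.g. deploy) never run on bad code.

def run_pipeline(stages):
    """stages: list of (name, zero-arg callable returning bool).
    Returns the names of stages that actually ran."""
    ran = []
    for name, step in stages:
        ran.append(name)
        if not step():  # failed stage gates everything after it
            break
    return ran

stages = [
    ("build",  lambda: True),
    ("test",   lambda: False),  # failing tests...
    ("deploy", lambda: True),   # ...mean deploy is skipped
]
print(run_pipeline(stages))  # ['build', 'test']
```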
3. Data Pipeline Management:
• Collaborate with data engineers to design and optimize data pipelines on GCP using tools such as Apache Airflow, Cloud Composer, and Cloud Dataflow.
• Implement monitoring and alerting solutions to detect and resolve issues in data pipelines promptly.
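A common building block behind the pipeline reliability work above is retry-with-alerting: retry a flaky task a few times, and raise an alert only once retries are exhausted. Airflow/Cloud Composer provide this natively via task `retries` and failure callbacks; this stdlib sketch just illustrates the control flow (the task and alert hook are illustrative):

```python
# Sketch of the retry-and-alert pattern used in data pipeline tasks.
# Airflow exposes this as task retries + on_failure_callback; this
# version shows the underlying logic with no dependencies.

def run_with_retries(task, retries=3, on_failure=print):
    """Run `task`; on repeated failure, fire `on_failure` and re-raise."""
    last_err = None
    for _attempt in range(retries):
        try:
            return task()
        except Exception as err:
            last_err = err
    on_failure(f"task failed after {retries} attempts: {last_err}")
    raise last_err

# A task that fails twice with a transient error, then succeeds:
attempts = {"n": 0}
def flaky_load():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient source timeout")
    return "loaded"

print(run_with_retries(flaky_load))  # loaded
```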
4. Cloud Platform Expertise:
• Utilize GCP services such as Cloud Storage, Cloud Run, and Cloud Functions to build scalable and cost-effective solutions.
• Implement best practices for cloud security, cost management, and resource optimization.
5. Collaboration and Communication:
• Work closely with data engineers, data scientists, and other stakeholders to understand their requirements and provide the necessary infrastructure and tooling support.
• Foster a culture of collaboration and continuous improvement within the team.
6. Monitoring and Incident Management:
• Implement robust monitoring, logging, and alerting solutions using tools like Google Cloud's operations suite (formerly Stackdriver), Prometheus, and Grafana.
• Manage and respond to incidents, ensuring minimal downtime and quick resolution of issues.
7. Documentation and Training:
• Create and maintain comprehensive documentation for infrastructure, CI/CD pipelines, and operational procedures.
• Provide training and support to team members on DevOps best practices and GCP services.
Skills Preferred:
• Proficiency in Infrastructure as Code (IaC) tools such as Terraform, Deployment Manager, or CloudFormation.
• Strong knowledge of CI/CD tools and practices, including Jenkins, GitLab CI, and Cloud Build.
• Experience with data pipeline tools and frameworks such as Apache Airflow, Cloud Composer, and Cloud Dataflow.
• Familiarity with GCP services, including Cloud Storage, Cloud Run, Cloud Functions, and BigQuery.
• Proficiency in scripting languages such as Python, Bash, or PowerShell.
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration abilities.
• Ability to work independently and as part of a team in a fast-paced, dynamic environment.
Experience Required:
• Minimum of 5 years of experience in DevOps or infrastructure engineering, with a strong focus on data warehousing.
• At least 2 years of hands-on experience working with Google Cloud Platform (GCP).
Education Required:
Bachelor's degree in Computer Science, Information Technology, or a related field.
Education Preferred:
Master's degree in a relevant field.
Thanks,
Lingesh
Saamvi Technologies
Location: Dearborn Heights, MI
Posted: Aug. 18, 2024, 5:29 a.m.