About the Role:
As a Mid-Level Data Engineer at Perpay, you will contribute to the development and optimization of our data pipelines and architectures. You will collaborate with data scientists, analysts, and other team members to support Perpay’s mission of creating inclusive financial products that enhance the lives of our members. With the introduction of our credit card, the demand for efficient data engineering solutions has grown significantly to meet our expanding modeling, reporting, and analytical needs.
You will have the opportunity to work on diverse projects across multiple domains, ensuring our data infrastructure supports a variety of business functions, including risk, commerce, marketing, operations, and more. Your contributions will directly impact our customers by enabling automated and efficient data-driven services.
We are seeking a Mid-Level Data Engineer who is a quantitative, analytical thinker with a strong passion for data and a solid foundation in data engineering principles. The ideal candidate has experience in building and maintaining data pipelines, implementing ETL processes, and contributing to data governance efforts. You should be comfortable working in a dynamic, fast-paced environment and managing multiple tasks with various stakeholders.
Why You’ll Love It Here:
• Diverse Projects: Work on a wide range of projects that span various business domains, giving you exposure to different aspects of the business.
• Skill Development: Enhance your technical skills and gain hands-on experience with cutting-edge technologies.
• Impact: See the direct impact of your work on our customers and the business.
• Supportive Team: Be part of a collaborative team that encourages learning and growth.
Our greatest strength is our people, and we’d love for you to be one of them!
Responsibilities:
• Develop and optimize ETL pipelines using tools like Apache Airflow, AWS Glue, and Fivetran to support various business requirements
• Collaborate with data producers to understand data sources and contribute to the design and implementation of efficient data models using Redshift and Snowflake
• Implement data governance practices, including metadata management and data lineage, using tools such as OpenLineage and DataHub
• Work with cross-functional teams to ensure data pipelines are scalable, reliable, and meet performance standards
• Identify and resolve data-related issues, optimizing workflows using techniques like partitioning, indexing, and caching
• Contribute to the ongoing development and maintenance of a modern data architecture, ensuring data quality and integrity
• Stay up-to-date with industry best practices and emerging technologies in data engineering
What You’ll Bring:
• Bachelor’s degree in a quantitative/technical field (Computer Science, Statistics, Engineering, Mathematics, Physics, Chemistry)
• 3+ years of experience in data engineering, with hands-on experience in building and maintaining data pipelines
• Proficiency in SQL and Python, with a strong understanding of cloud data platforms such as AWS, GCP, or Azure
• Experience with data warehouse solutions like Redshift, Snowflake, or BigQuery, and data orchestration tools such as Apache Airflow
• Understanding of data modeling techniques, ETL processes, and data governance best practices
• Familiarity with distributed data processing frameworks such as Apache Spark or Hadoop
• Knowledge of infrastructure as code tools like Terraform and CI/CD practices
• Strong analytical skills and the ability to troubleshoot and resolve complex data issues
Hey, we know not everybody checks all the boxes. If you’re interested, please apply anyway — you could be just what we’re looking for!
Location: Philadelphia, PA
Posted: Aug. 16, 2024, 2:25 p.m.