
Data Engineer - Mox

Standard Chartered Bank

About Standard Chartered

We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.

Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:

Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do

Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well

Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

About Mox Bank

Mox is built by and for the ones who aspire to live life to the fullest – we call them Generation Mox! The name Mox reflects the endless opportunities we can create: Mobile eXperience; Money eXperience; Money X (multiplier); eXponential growth; eXploration… it’s all up for us to define together.

Why Mox

Everything at Mox, from our products and features to our rewards, is designed based on customer research and tailor-made for your needs. We care about what customers care about, especially data security and privacy. Data ethics is core to everyone here at Mox.

Mox rewards you with an array of banking and lifestyle benefits. Who says banking can’t be fun?

Job Summary

We are looking for a Data Engineer to design, maintain, and improve the analytical and operational services and infrastructure that are critical to many other functions within the organization.

Key Responsibilities

As a Data Engineer, you'll work with us to design, maintain, and improve various analytical and operational services and infrastructure that are critical to many other functions within the organization.

These include the data lake, operational databases, data pipelines, large-scale batch and real-time data processing systems, and a metadata and lineage repository, all of which work in concert to provide the company with accurate, timely, and actionable metrics and insights to grow and improve our business using data.

You may be collaborating with our data science team to design and implement processes to structure our data schemas and design data models, working with our product teams to integrate new data sources, or pairing with other data engineers to bring cutting-edge data technologies to fruition.

Skills and Experience

We expect candidates to have in-depth experience in some of the following skills and technologies, and to be motivated to build up experience and fill any gaps in knowledge on the job. More importantly, we seek people who are highly logical, who balance respect for best practices with their own critical thinking, who adapt to new situations, who can work independently to deliver projects end-to-end, who communicate well in English, who collaborate effectively with teammates and stakeholders, and who are eager to join a high-performing team and take their careers to the next level with us.

Highly relevant:

General computing concepts and expertise: Unix environments, networking, distributed and cloud computing.

Python frameworks and tools: pip, pytest, boto3, pyspark, pylint, pandas, scikit-learn, keras.

Agile/Lean project methodologies and rituals: Scrum, Kanban.

Workflow scheduling and monitoring tools: Apache Airflow, Luigi, AWS Batch.

Columnar and big data databases: Athena, Redshift, Vertica, Hive/Hadoop.

Version control: git commands, branching strategies, collaboration etiquette, documentation best practices.

General AWS or cloud services: Glue, EMR, EC2, ELB, EFS, S3, Lambda, API Gateway, IAM, CloudWatch, DMS.

Container management and orchestration: Docker, Docker Swarm, ECS, EKS/Kubernetes, Mesos.

CI/CD tools: CircleCI, Jenkins, TravisCI, Spinnaker, AWS CodePipeline.

Also good to have:

JVM languages, frameworks and tools: Kotlin, Java, Scala / Maven, Spring, Lombok, Spark, JDK Mission Control.

RDBMS and NoSQL databases: MySQL, PostgreSQL / DynamoDB, Redis, HBase.

Enterprise BI tools: Tableau, Qlik, Looker, Superset, Power BI, QuickSight.

Data science environments: AWS SageMaker, Project Jupyter, Databricks.

Log ingestion and monitoring: ELK stack (Elasticsearch, Logstash, Kibana), Datadog, Prometheus, Grafana.

Metadata catalogue and lineage systems: Amundsen, Databook, Apache Atlas, Alation, uMetric.


Location: Hong Kong

Posted: Aug. 13, 2024, 12:47 a.m.
