[VNG] ZaloPay - Senior Data Engineer

Position code: 21-PCD-1015
Location: Ho Chi Minh City
Salary: Negotiable

Job Description

At ZaloPay, we rely on insightful data to inform our systems and solutions, and we're seeking an experienced, pipeline-centric data engineer to put it to use. Our ideal hire will have the mathematical and statistical expertise you'd expect, combined with rare curiosity and creativity. You'll wear many hats in this role, but much of your focus will be on building out our ETL processes and uncovering what our data, especially our financial data, is actually telling us. Beyond technical prowess, you'll need the soft skills it takes to communicate highly complex data trends to organizational leaders in a way that's easy to understand. We're looking for someone willing to jump right in and help the company get the most out of our data.

Requirements

Responsibilities
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Qualifications and Required Skills
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
We are looking for a candidate with 4+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience with the following software and tools:
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases, including MongoDB, MySQL, etc.
  • Experience with data pipeline and workflow management tools: Airflow, Google Composer, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
  • Experience with Google Cloud Platform (GCP)
  • Experience with data warehouses: ClickHouse, Google BigQuery, etc.
  • Experience with Google Cloud Storage.
  • Experience with stream-processing systems: Apache Spark, Spark Streaming, Druid, Apache Flink, Apache NiFi, etc.
  • Experience working with ETL tasks.
  • Experience with data pipeline monitoring: Prometheus, Grafana, etc.
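
To give a sense of the day-to-day pipeline work described above, here is a minimal, illustrative sketch of an Airflow DAG that loads daily files from Google Cloud Storage into BigQuery, using the tools named in this posting. The bucket, dataset, and table names are hypothetical and not part of the role description.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

    # Hypothetical daily load of transaction exports from GCS into BigQuery.
    with DAG(
        dag_id="load_transactions_to_bq",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        GCSToBigQueryOperator(
            task_id="load_transactions",
            bucket="example-bucket",  # hypothetical bucket name
            source_objects=["transactions/{{ ds }}/*.parquet"],  # one folder per execution date
            destination_project_dataset_table="example_project.analytics.transactions",  # hypothetical table
            source_format="PARQUET",
            write_disposition="WRITE_APPEND",
        )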