Description: |
Our client, a leading insurer, is looking for a data engineer to join the team.
Responsibilities
- Design and develop scalable data platforms with ETL/ELT pipelines for diverse data sources.
- Implement automation using CI/CD pipelines to optimize workflows.
- Manage Data Lakes, relational databases (e.g., PostgreSQL), and NoSQL databases (e.g., MongoDB).
- Utilize PySpark for distributed data processing and Apache Kafka for data streaming.
- Ensure data quality, security, and compliance with applicable regulations.
- Collaborate with data scientists and analysts to support analytics initiatives.
- Optimize pipelines for performance and cost-efficiency across AWS and Azure.
- Contribute to key business areas like risk modeling and finance processing.
Requirements
- Bachelor’s or Master’s in Computer Science, Engineering, or related field.
- 3+ years of data engineering experience.
- Expertise in AWS and Azure.
- Proficient in Python, SQL, and NoSQL databases (e.g., MongoDB).
- Hands-on experience with PySpark and Apache Kafka.
- Skilled in Data Lakes and automation tools.
- Strong problem-solving and communication skills for agile environments.
If this outstanding opportunity sounds like your next career move, please send your resume in Word format to Harry Tsang at resume[at]pinpointasia.com with "Data Engineer - Leading Insurance Company" in the subject line. Data provided is for recruitment purposes only.
Pinpoint Asia is the leading specialist Financial IT recruitment firm in the Asia Pacific region. Visit Pinpoint Asia’s website at pinpointasia.com today to see other exciting job opportunities.
|