Job Reference: GTG007386-KV-1
Responsible for maintaining the data warehouse through design and implementation of ETL/ELT methodologies and technologies, as well as providing maintenance and support to our ETL and ML environments. To ensure optimal performance, the candidate will conduct root cause analysis on production issues and provide technical leadership throughout the entire information management process of both structured and unstructured data.
Duties & Responsibilities
Duties:
- Design and develop solution functionality in AWS to support project workflows.
- Develop and maintain automated, monitored ETL pipelines using languages such as Python and SQL, and AWS services such as S3, Glue, Lambda, SNS, SQS, Redshift, and KMS.
- Develop Glue jobs for batch data processing and maintain the Glue Data Catalog for metadata synchronization.
- Develop data pipelines using AWS Lambda and Step Functions for data processing.
- Ingest and manage data from multiple sources.
- Use AWS services to manage applications in the cloud and to create or modify instances.
- Implement solutions using the Scaled Agile Framework (SAFe).
- Optimize the performance of existing Hadoop algorithms using Spark (SparkContext).
- Create Hive Tables, load data, and write Hive queries.
Requirements
- Degree in Information Technology.
- 6-7 years of experience in a similar role.
- Experience with Talend.
- Knowledge of SSIS, SSAS, and SSRS.
- Experience with CloverETL.
- Strong SQL background.
- Experience with AWS.
- Familiarity with Jupyter Notebook.
- Experience in Data Warehousing.
- Experience in cloud data warehousing.
CVs should be submitted directly to or
If you do not receive communication within 2 weeks of your application, kindly consider your application unsuccessful.