What you will be doing
The ideal candidate will have extensive experience in data engineering, particularly with Ab Initio, and will be responsible for designing, developing, and maintaining the data infrastructure.
- Design, build, and maintain scalable data pipelines using Ab Initio.
- Develop ETL processes to extract, transform, and load data from various sources.
- Integrate data from multiple sources to create a unified data environment.
- Ensure data quality and consistency across different systems.
- Monitor and optimize the performance of data pipelines and ETL processes.
- Implement best practices for data storage and retrieval.
- Work closely with data scientists, analysts, and other stakeholders to understand data needs and requirements.
- Collaborate with cross-functional teams to ensure seamless data integration.
- Identify and resolve data-related issues and discrepancies.
- Provide support for data-related queries and requests.
What we are looking for:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.
- 5 years of experience in data engineering and ETL development.
- Extensive experience with Ab Initio, including developing and maintaining data pipelines.
- Proficiency in SQL and other database technologies.
- Experience with big data technologies such as Hadoop, Spark, and Hive.
- Experience with cloud platforms such as Azure.
- Familiarity with scripting languages like Python or R.
- Knowledge of data governance and security practices.
Please note that if you do not hear from us within 3 weeks, consider your application unsuccessful.
Please note that most of our positions are remote; however, candidates should reside within traveling distance, as the circumstances of the opportunity can change.