Qualifications:
- BSc degree in Computer Science, Information Systems, or Engineering, or equivalent work experience
- 5+ years of relevant experience, including 2+ years in building and optimizing big data pipelines
Requirements:
- Proficiency in Python, SQL (PostgreSQL, MS SQL), and cloud services (AWS, Azure, or GCP)
- Strong skills in version control (Git/GitHub) and CI/CD
- Experience with Palantir Foundry
- Expertise in managing the data life cycle, large datasets, and data transformation processes
- Experience with AWS Glue and PySpark is a plus
- Knowledge of data quality assurance, data governance, and processing of unstructured data
- Familiarity with message queuing and stream processing
Key Performance Areas (KPAs):
- Propose and implement process improvements for automation, scalability, and efficiency
- Develop and enhance data systems, and streamline CI/CD processes to keep data pipelines performing optimally
- Assemble complex data sets to meet business requirements and drive efficient ETL processes
- Conduct unit tests and ensure data consistency while building automated monitoring solutions
- Support reporting and analytics infrastructure, uphold data quality and governance, and maintain infrastructure (e.g., AWS resources, database security)
- Maintain metadata, data catalogues, and user documentation, and apply best practices in database management (collations, engines, indices)
Apply now and be a key contributor to revolutionizing data infrastructure!