We are looking for Intermediate to Senior Data Engineers with strong experience in modern data platforms and streaming services to join our growing team. The ideal candidate will have in-depth expertise in Microsoft Fabric and related Azure data tools (including Delta Lake, Synapse, Databricks, and Data Factory) and hands-on experience with streaming services such as Kafka, Flink, or Azure Stream Analytics. You will play a key role in designing, implementing, and optimising data pipelines that drive our data architecture.
Key Responsibilities:
- Design, build, and maintain data pipelines using Microsoft Fabric and related Azure tools such as Delta Lake, Synapse, Databricks, and Azure Data Factory.
- Develop and manage real-time streaming data pipelines using technologies such as Kafka, Flink, or Azure Stream Analytics.
- Implement scalable and reliable ETL/ELT solutions to process large datasets.
- Collaborate with data scientists, analysts, and software engineers to design data solutions that meet business requirements.
- Optimise and monitor data pipelines for performance, scalability, and reliability.
- Ensure the integrity and security of data across the organisation's data infrastructure.
- Work with stakeholders to ensure proper data governance and regulatory compliance.
- Troubleshoot and resolve issues with data pipelines and infrastructure as they arise.
- Stay updated with the latest industry trends and technologies in data engineering and apply them to improve our data systems.
Key Requirements:
- Experience with Microsoft Fabric and related Azure tools such as Delta Lake, Azure Synapse, Databricks, and Azure Data Factory.
- Extensive knowledge of streaming services such as Kafka, Flink, or Azure Stream Analytics (strongly preferred).
- Proficiency in SQL and experience working with relational and non-relational databases.
- Strong experience with ETL/ELT processes and data modelling.
- Hands-on experience with big data technologies and cloud platforms (Azure preferred).
- Familiarity with data governance, data security, and best practices in handling large datasets.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
Preferred Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent experience).
- Experience with Azure DevOps and CI/CD pipelines for data workflows.
- Knowledge of Python, Scala, or other programming languages for data processing.
- Experience with real-time data processing and event-driven architecture.