Job Description
Profile Outline
A technology-focused individual: the data engineer will design, configure, develop, and deploy data transformations.
Key Responsibilities
- Develop ETL pipelines. Data transformations will be built in Azure Databricks using Python and in Azure SQL using T-SQL, and deployed via ARM templates.
- Combine and curate data in a central data lake.
- Serve data for applications and analytics through a variety of technologies such as SQL Server, Synapse, Cosmos DB, and Time Series Insights (TSI).
- Build transformation pipelines into dimensions and facts; strong knowledge of standard BI concepts is mandatory.
- Build stream pipelines leveraging IoT Hub, Event Hub, Databricks streaming and other Azure stream technologies.
- Work in a fluid environment with changing requirements whilst maintaining absolute attention to detail.
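To illustrate the kind of batch transformation into dimensions and facts mentioned above, here is a minimal, hypothetical Python sketch using plain dicts. In the role described, this would typically be done with PySpark DataFrames on Azure Databricks; the function name and row shapes below are illustrative assumptions, not part of the posting.

```python
def build_star_schema(raw_rows):
    """Split raw sales rows into a product dimension and a sales fact table.

    Each raw row is assumed to be a dict like:
    {"product": "Widget", "category": "Tools", "qty": 2, "price": 9.5}
    """
    product_dim = {}  # natural key -> surrogate key
    dim_rows = []     # one row per distinct product
    fact_rows = []    # one row per sale, keyed to the dimension
    for row in raw_rows:
        key = (row["product"], row["category"])
        if key not in product_dim:
            # Assign a surrogate key the first time a product is seen.
            product_dim[key] = len(product_dim) + 1
            dim_rows.append({"product_sk": product_dim[key],
                             "product": row["product"],
                             "category": row["category"]})
        fact_rows.append({"product_sk": product_dim[key],
                          "qty": row["qty"],
                          "revenue": row["qty"] * row["price"]})
    return dim_rows, fact_rows
```

The same split (deduplicated descriptive attributes in the dimension, additive measures keyed by surrogate key in the fact) carries over directly to a DataFrame-based implementation.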
Knowledge
Bachelor's degree in Computer Science, Software Engineering, or Engineering.
Key Skills
Python
PySpark
SQL
Solution Architecture
API Design
Containers
CI/CD
Azure Cloud
Data stream patterns and technology
Data engineering design patterns
Package & Remuneration
Salary: Market related