Core requirements

- Advanced T-SQL scripting: stored procedures, views, functions
- Building and maintaining SSIS pipelines
- Data warehousing and modelling
- Working with CSV, TXT, XML, JSON, XLS, XLSX, etc.
- Data analysis
- Working with relational databases
- Building and maintaining SSRS reports
- Using Git
- CI/CD pipelines
- Performance monitoring and tuning

Responsibilities

- Maintain, support, and monitor existing production SSIS packages, SQL queries, stored procedures, and CI/CD pipelines to ensure all data warehouse loads meet data quality standards and business SLA requirements.
- Build, maintain, support, and monitor Synapse data engineering pipelines on the data platform as required.
- Participate in and contribute to data architecture design, data modelling, and the gathering and analysis of data requirements; understand, document, communicate, and build appropriate solutions.
- Design, build, deliver, and document data-related projects using a range of environment-specific data analytics technologies.
- Promote data engineering best practices through CI/CD pipelines and automation.
- Collaborate closely with team members and contribute significantly to building a high-performing, collaborative, transparent, and results-driven data engineering team.
- Support the Data Engineering Practice Team Manager with fit-for-purpose data engineering solutions, quality engineering artifacts, and high-standard documentation.
- Follow data ethics standards to protect personal information and do the right thing to meet the expectations of our customers, partners, and communities.

Skills and Experience

- 8 years' demonstrable experience in designing, building, and supporting data engineering pipelines across data warehousing, data ingestion, cleansing, manipulation, modelling, and reporting.
- Experience in ETL using Microsoft technologies.
- Strong experience writing MS SQL Server queries, stored procedures, and SSIS packages.
- Experience with SSRS is an advantage.
- Experience manipulating semi-structured data (XML, JSON).
- Strong knowledge of and extensive experience working in an Agile framework with CI/CD, using modern DevOps/DataOps integrated processes with YAML pipelines.
- Bachelor's degree in computer science, data science, or a related technical field is a must; a postgraduate qualification is highly regarded.
- Knowledge of Azure Synapse data engineering pipelines, PySpark notebooks, lakehouse data platform architecture, and Azure SQL ODS storage is desirable.