JOB TITLE:
Senior Data Consultant
LOCATION:
Remote (Johannesburg / Stellenbosch / Cape Town)
ABOUT CYBERLOGIC:
Cyberlogic is a trusted Managed Solutions Provider with offices in South Africa, Mauritius, and the UK. Serving a diverse range of clients across numerous industries, including the international maritime sector, Cyberlogic specialises in IT leadership, cyber security, cloud solutions, and business intelligence. For almost three decades, Cyberlogic has been committed to enabling digital transformation by delivering unquestionable value.
Our delivery focus has enabled us to build a national and international footprint of loyal clients who rely on us for transparent, open guidance to improve their processes, grow their businesses, and secure their data.
Cyberlogic is part of the Hyperclear Technology group, which boasts a diverse technology offering including robotic process automation (RPA), business process management (BPM), data analytics, and decisioning technology.
OUR VALUES:
- We challenge ourselves to be more AWESOME
- We are driven to KEEP learning and EVOLVING
- We look beyond symptoms to identify and RESOLVE ROOT CAUSES
- We hold each other accountable through CANDID and constructive FEEDBACK
- We respect and care for each other and know we will only SUCCEED if we work AS A TEAM
- We CARE deeply ABOUT the success of CYBERLOGIC
- We FINISH WHAT WE START
- We always GIVE OUR BEST even if it means putting in the hard yards
- We KEEP THINGS SIMPLE
ABOUT OUR TEAM:
We are a fast-growing data consultancy that partners with organisations to unlock the power of their data. Our dynamic team of data professionals works on cutting-edge projects that drive business insights and strategic decisions. As we continue to expand, we are looking for a talented Senior Data Consultant to join our team of 8-10 experts. If you're passionate about working with data, building scalable solutions, and driving innovation, we want to hear from you!
PURPOSE OF POSITION:
As a Senior Data Consultant, you will play a critical role in designing, developing, and maintaining data pipelines to enable data-driven decision-making across our clients' businesses. You will work closely with a small but highly skilled team of data engineers, data scientists, and business analysts to deliver high-impact solutions. You’ll be expected to bring both technical expertise and a collaborative mindset, helping to shape the architecture and development of data solutions on cloud platforms.
KEY RESPONSIBILITIES:
- Design, build, and maintain scalable data pipelines for the collection, transformation, and storage of data.
- Work with large, complex datasets to ensure efficient processing and integration.
- Develop data engineering solutions using Azure technologies and PySpark for distributed data processing (a minimal sketch follows this list).
- Implement ETL processes and automate data workflows to support analytics, reporting, and business intelligence initiatives.
- Collaborate with data scientists, business analysts, and stakeholders to understand data requirements and deliver actionable insights.
- Ensure data quality, consistency, and reliability across all pipelines and datasets.
- Optimise data models, storage, and processing workflows for performance and scalability.
- Contribute to the architecture and design decisions for cloud-based data platforms.
- Mentor junior team members and foster a collaborative and innovative team culture.
- Troubleshoot and resolve issues related to data pipelines, ensuring high availability and performance.
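For a flavour of the day-to-day work, the sketch below shows a minimal PySpark pipeline of the kind described in the list above. It is an illustration under assumptions, not a prescribed design: the storage paths, column names, and app name are hypothetical, and on Azure Databricks a SparkSession is already provided.

    # Minimal ETL sketch: raw CSV in Azure Data Lake -> cleansed, partitioned Parquet.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-sales-etl").getOrCreate()

    # Extract: a hypothetical raw sales feed landed by an upstream system.
    raw = (spark.read
           .option("header", True)
           .option("inferSchema", True)
           .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/2024-06-01/"))

    # Transform: basic cleansing and a derived date column.
    clean = (raw
             .dropDuplicates(["order_id"])
             .filter(F.col("amount") > 0)
             .withColumn("order_date", F.to_date("order_timestamp")))

    # Load: date-partitioned Parquet in the curated zone for downstream BI.
    (clean.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("abfss://curated@examplelake.dfs.core.windows.net/sales/"))

Partitioning the curated output by date keeps downstream reporting queries cheap; a production pipeline would add explicit schemas, data-quality checks, and orchestration (for example via Azure Data Factory).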
KEY REQUIREMENTS:
Essential:
- 8-10 years of experience in data engineering or a related role, with a strong background in building and optimising data pipelines.
- Expertise in PySpark for large-scale data processing and distributed computing.
- Strong experience with Azure data technologies (Azure Data Factory, Azure Databricks, Azure SQL Database, etc.).
- Proficiency in SQL and experience working with relational and NoSQL databases (see the SQL sketch after this list).
- Experience with cloud-based data platforms and services, especially Azure.
- Solid understanding of ETL processes and data modelling techniques.
- Strong problem-solving and troubleshooting skills.
- Experience working with version control systems (e.g., Git) and continuous integration/continuous deployment (CI/CD) practices.
- Strong communication skills and the ability to work effectively within a collaborative, small-team environment.
- A proactive, self-starting attitude with a passion for data engineering and technology.
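To illustrate the SQL-alongside-Spark proficiency we look for, here is a small, self-contained sketch; the table and rows are invented for the example.

    # Running plain SQL over a Spark DataFrame via a temporary view.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-example").getOrCreate()

    orders = spark.createDataFrame(
        [("o-1", "acme", 120.0), ("o-2", "acme", 80.0), ("o-3", "globex", 50.0)],
        ["order_id", "customer", "amount"],
    )
    orders.createOrReplaceTempView("orders")

    # The same aggregation could be written with the DataFrame API;
    # SQL is often the clearer choice for analysts and reviewers.
    top_customers = spark.sql("""
        SELECT customer, SUM(amount) AS total_spend
        FROM orders
        GROUP BY customer
        ORDER BY total_spend DESC
    """)
    top_customers.show()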
Preferred:
- Experience with data warehousing solutions (e.g., Azure Synapse Analytics, Snowflake).
- Familiarity with other big data technologies (e.g., Hadoop, Kafka, Spark Streaming); a streaming sketch follows this list.
- Experience with infrastructure as code (IaC) tools such as Terraform or Azure Resource Manager (ARM).
- Knowledge of containerization technologies (e.g., Docker, Kubernetes).
- Background in Agile development methodologies.
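For the streaming item above, a hedged Structured Streaming sketch that reads from Kafka; the broker address, topic, and sink paths are placeholders, and the job assumes the matching spark-sql-kafka connector package is on the classpath.

    # Structured Streaming sketch: Kafka topic -> Parquet sink with checkpointing.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    # Source: a hypothetical Kafka topic of JSON events.
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "events")
              .load())

    # Kafka delivers the payload as binary; cast it to a string here
    # (a real job would parse it against a schema with from_json).
    decoded = events.select(F.col("value").cast("string").alias("payload"))

    # Sink: append to Parquet, with a checkpoint for restart bookkeeping.
    query = (decoded.writeStream
             .format("parquet")
             .option("path", "/tmp/events-out")
             .option("checkpointLocation", "/tmp/events-checkpoint")
             .outputMode("append")
             .start())
    query.awaitTermination()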