Business Segment: Personal & Private Banking
Provide the infrastructure, tools, and frameworks used to deliver end-to-end solutions to business problems. Build scalable infrastructure to support the delivery of clear business insights from raw data sources, with a focus on collecting, managing, analyzing, and visualizing data and on developing analytical solutions. Responsible for expanding and optimizing Standard Bank's data and data pipeline architecture, and for improving data flow and collection to support data initiatives.
Qualifications
Minimum Qualifications
Type of Qualification: First Degree in Information Technology
Experience Required
Data Monetisation
- 5-7 years of experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented and functional scripting languages: Python, Java, C++, Scala, etc.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes that support data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
- Working knowledge of SQL, including query authoring, and experience with relational databases.