Responsibilities:
- Design, develop, and maintain scalable backend services and APIs.
- Design, implement, and support real-time data platforms and pipelines.
- Architect and implement ETL processes, ensuring data accuracy and availability.
- Collaborate with data scientists and other engineers to integrate machine learning models into production systems.
- Work closely with team members to peer-review, cross-train, and share expertise.
- Participate in code and design reviews, fostering a culture of quality and growth.
- Stay current with industry trends and technologies to keep our systems modern and efficient.
- Evaluate, conceptualize, and build proofs of concept for new models, tools, and techniques.
- Fine-tune system performance and continually seek new ways to improve efficiency.
- Mentor junior engineers, sharing backend and data engineering best practices.

Requirements:
- 5+ years of professional software development experience.
- Proficiency in backend development languages and frameworks, preferably Python (Django, Flask, FastAPI).
- Strong experience with ETL tools such as Apache Airflow and large-scale data processing frameworks such as Apache Spark.
- Deep understanding of relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and their data-related services.
- Familiarity with containerization and orchestration tools such as Docker and Kubernetes.
- Strong understanding of software development best practices, including CI/CD, testing, and version control (e.g., Git).
- Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.

Hybrid work: 2 days from home, 3 days in the office.