20 Jun 2025

Big Data Engineer at Safaricom Kenya



Job Description

Safaricom is the leading provider of converged communication solutions in Kenya. In addition to providing a broad range of first-class products and services for Telephony, Broadband Internet and Financial services, Safaricom seeks to uplift the welfare of Kenyans through value-added services and support for community projects.

Big Data Engineer

Key Responsibilities

  • Data Pipeline Development: Design, implement, and maintain robust data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data. Develop ETL (Extract, Transform, Load) processes to cleanse, enrich, and aggregate data for analysis.
  • Data Storage Solutions: Architect and optimize data storage solutions, including distributed file systems, NoSQL databases, and data warehouses. Implement data partitioning, indexing, and compression techniques to maximize storage efficiency and performance.
  • Big Data Technologies: Utilize and optimize big data technologies and frameworks such as Apache Hadoop, Apache Spark, Apache Flink, and Apache Kafka. Develop and maintain data processing jobs, queries, and analytics workflows using distributed computing frameworks and query languages.
  • Scalability and Performance: Optimize data processing workflows for scalability, performance, and reliability. Implement parallel processing, distributed computing, and caching mechanisms to handle large-scale data processing workloads.
  • Monitoring and Optimization: Develop monitoring and alerting solutions to track the health, performance, and availability of big data systems. Implement automated scaling, load balancing, and resource management mechanisms to optimize system utilization and performance.
  • Data Quality and Governance: Ensure data quality and integrity throughout the data lifecycle. Implement data validation, cleansing, and enrichment processes to maintain high-quality data. Ensure compliance with data governance policies and regulatory standards.
  • Collaboration and Documentation: Collaborate with cross-functional teams to understand data requirements and business objectives. Document data pipelines, system architecture, and best practices. Provide training and support to stakeholders on data engineering tools and technologies.
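The ETL duties above (cleanse, enrich, aggregate) can be illustrated with a minimal, framework-agnostic sketch in plain Python; in practice this would run on a distributed engine such as Spark or Flink. The record fields, sample values, and cleansing rules here are illustrative assumptions, not taken from the posting:

```python
from collections import defaultdict

# Illustrative raw records, e.g. call rows ingested from a source system (assumed schema).
RAW_RECORDS = [
    {"msisdn": "254700000001", "region": "Nairobi", "duration_sec": "120"},
    {"msisdn": "", "region": "Nairobi", "duration_sec": "30"},               # missing key
    {"msisdn": "254700000002", "region": "Mombasa", "duration_sec": "abc"},  # bad value
    {"msisdn": "254700000003", "region": "Nairobi", "duration_sec": "60"},
]

def extract():
    """Extract: yield raw records from the (stubbed) source system."""
    yield from RAW_RECORDS

def transform(records):
    """Transform: cleanse invalid rows and enrich with a derived field."""
    for rec in records:
        if not rec["msisdn"]:
            continue  # cleanse: drop rows missing the key identifier
        try:
            duration = int(rec["duration_sec"])
        except ValueError:
            continue  # cleanse: drop rows with unparseable values
        yield {**rec, "duration_sec": duration, "duration_min": duration / 60}

def load(records):
    """Load: aggregate total call minutes per region into the target store."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["region"]] += rec["duration_min"]
    return dict(totals)

result = load(transform(extract()))
print(result)  # → {'Nairobi': 3.0} (only the two valid Nairobi rows survive cleansing)
```

The same extract → transform → load stages map directly onto Spark jobs or Flink pipelines, with the in-memory list replaced by a distributed source such as Kafka.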

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • Proven professional SQL skills.
  • Solid understanding of big data technologies, distributed systems, and database management principles.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Experience with big data frameworks such as Apache Hadoop, Apache Spark, or Apache Flink.
  • Knowledge of database systems such as SQL databases, NoSQL databases, and distributed file systems.
  • Familiarity with cloud platforms such as AWS, GCP, or Azure.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.
  • Ability to work independently and manage multiple priorities in a fast-paced environment.


Method of Application

Submit your CV and application on the company website.

Closing Date: July 10, 2025




