19 Dec 2023

Data Engineer at Shining Hope For Communities



Job Description

Shining Hope for Communities (SHOFCO) is a non-profit organization based in Nairobi, Kenya, and New York, NY, that combats urban poverty and gender inequity in the slums of Nairobi. Kennedy Odede, who grew up in the Kibera slum, founded SHOFCO in 2004 with a focus on youth and gender empowerment.

Job Purpose

The Data Engineer is responsible for designing, building and maintaining the data architecture, pipelines and systems that support SHOFCO’s data-driven initiatives. The role collaborates closely with cross-functional teams, including the Monitoring, Evaluation and Learning (MEL) data analytics team, software engineers and program managers, to ensure that data is collected, processed and made accessible for meaningful insights and informed decision-making. It demands a solid understanding of data engineering best practices and data modelling, and proficiency with a variety of data tools and technologies.

Key Responsibilities

  • Design and implement scalable and robust data architectures to support SHOFCO’s data needs, considering both current requirements and future scalability.
  • Evaluate and choose appropriate technologies for data storage, processing, and analytics, such as data warehouses, data lakes and distributed computing frameworks.
  • Develop, maintain, and optimize ETL (Extract, Transform, Load) processes to extract data from various sources, transform it into usable formats, and load it into the appropriate data repositories.
  • Collaborate with cross-functional teams to understand data requirements and ensure smooth data integration across different systems and platforms.
  • Implement data quality checks, data validation, and data cleansing processes to ensure the accuracy, consistency and reliability of the data.
  • Establish and enforce data governance policies, standards and best practices to maintain data integrity and security.
  • Build and maintain data pipelines that enable the efficient movement of data from source to destination, using tools and frameworks such as Apache Spark, Apache Airflow, or similar technologies (a brief illustrative sketch follows this list).
  • Monitor pipeline performance and health, troubleshoot issues, and provide timely resolutions to ensure optimal data flow and minimize downtime and disruptions.
  • Continuously optimize data processing and storage systems to improve performance, scalability, and efficiency.
  • Identify and address bottlenecks, optimize queries, and fine-tune database systems as needed.
  • Collaborate with Data Scientists, Analysts, and other stakeholders to understand data requirements and ensure that the data infrastructure meets their needs.
  • Stay up to date with the latest trends and technologies in the data engineering field, and assess their potential impact on SHOFCO’s data ecosystem.
  • Propose and implement innovative solutions that leverage new technologies and improve the organization’s data infrastructure and engineering practices.
  • Work closely with the IT team to ensure proper integration of data solutions with existing systems and infrastructure.
  • Collaborate with external partners, vendors, and stakeholders on data integration projects as needed.
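
For candidates unfamiliar with the orchestration tools named above, here is a minimal, hypothetical sketch of the kind of ETL pipeline this role would build and maintain. It assumes Apache Airflow 2.x and invents a toy survey source and load step purely for illustration; it is not SHOFCO’s actual pipeline.

    # Illustrative only: a daily Airflow DAG that extracts records from a
    # hypothetical survey source, validates them, and loads them downstream.
    import json
    from datetime import datetime

    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2023, 1, 1), catchup=False)
    def survey_etl():
        @task
        def extract() -> list[dict]:
            # Hypothetical source; in practice this would call a third-party API.
            return [{"respondent_id": 1, "score": 4},
                    {"respondent_id": 2, "score": None}]

        @task
        def validate(rows: list[dict]) -> list[dict]:
            # Simple data quality check: drop rows with missing scores.
            return [r for r in rows if r["score"] is not None]

        @task
        def load(rows: list[dict]) -> None:
            # Placeholder load step; a real pipeline would write to a
            # warehouse such as Redshift or BigQuery.
            print(f"Loading {len(rows)} validated rows: {json.dumps(rows)}")

        load(validate(extract()))

    survey_etl()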

Requirements

Academic and Technical Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • Proficiency in programming languages such as Python, Java, or Scala for building data pipelines and data manipulation.
  • Proficiency in working with SQL and relational databases.
  • 3+ years of experience designing and maintaining data pipelines.
  • Proficiency working with third-party APIs.
  • Proficiency with data modelling techniques.
  • Experience with version control and code management in Git.
  • Strong foundation in mathematical analysis, especially statistics and probability.
  • Knowledge of sound software engineering principles.

Professional Qualifications

  • Proven experience (3+ years) as a Data Engineer or similar role, working with large-scale data pipelines and architectures.
  • Strong experience with data warehousing concepts and technologies (e.g., SQL, NoSQL databases, data lakes).
  • Familiarity with cloud platforms such as AWS, Azure, or GCP and their data services (e.g., AWS Redshift, Google BigQuery).
  • Hands-on experience with data pipeline orchestration tools (e.g., Apache Airflow, Luigi).
  • Knowledge of data modelling, schema design, and data governance best practices.
  • Familiarity with containerization and orchestration technologies (Docker, Kubernetes) for deploying and managing data applications.
  • Previous exposure to data cataloging and metadata management tools.
  • Knowledge of machine learning workflows and how data engineering supports machine learning pipelines is a plus.
  • Demonstrated ability to manage and prioritize multiple projects and tasks in a dynamic environment.
  • Strong analytical and problem-solving abilities, with a keen attention to detail.
  • Strong communication skills, with the ability to work collaboratively within cross-functional teams.
  • Ability to develop solutions for real-time data processing and streaming, enabling timely insights and analytics from live data sources, using technologies such as Apache Kafka to capture and process real-time data events (see the sketch at the end of this section).
  • Experience working with IT teams to ensure data protection, privacy, and compliance with relevant data regulations (Kenya’s Data Protection Act, GDPR, HIPAA, etc.), including implementing data encryption, access controls, and other security measures to safeguard sensitive information.
  • Experience designing and implementing automated testing frameworks to validate the accuracy and quality of data transformations and ETL processes, and setting up monitoring and alerting systems to proactively detect and address data pipeline issues.
  • Experience in the non-profit sector or social impact organizations is a plus.

Other Requirements

  • Any professional certification in data management or cloud services is a plus.
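
For reference, here is a minimal, hypothetical sketch of the real-time event processing mentioned above. It assumes the kafka-python client, a local broker, and an invented "clinic-visits" topic purely for illustration; it is not a SHOFCO system.

    # Illustrative only: consume JSON events from a Kafka topic and apply
    # a basic quality gate before downstream processing.
    import json

    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "clinic-visits",                     # hypothetical topic name
        bootstrap_servers="localhost:9092",  # assumed local broker
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        event = message.value
        # Skip malformed events rather than crash the stream.
        if "visit_id" not in event:
            continue
        print(f"Processing visit {event['visit_id']}")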

Functional Skills

  • Data Integration and ETL
  • Database Management
  • Data Modelling
  • Big Data Technologies
  • Data Warehousing
  • Data Pipeline Orchestration
  • Real-time Data Processing
  • Version Control
  • Cloud Platforms
  • Data Security and Compliance
  • Data Governance
  • Data Visualization
  • Automated Testing
  • Performance Tuning
  • API Integration
  • Machine Learning Infrastructure

Behavioural Competencies/Attributes

  • Analytical Thinking
  • Adaptability
  • Attention to Detail
  • Problem-Solving
  • Communication
  • Ownership and Accountability
  • Innovation
  • Ethical and Social Responsibility
  • Time Management
  • Continuous Learning
  • Cultural Sensitivity
  • Risk Management
  • Interdisciplinary Collaboration


Method of Application

Submit your CV, copies of relevant documents, and your application to [email protected], using the title of the position as the subject of the email.

Closing Date: 31 December 2023




