Big Data Engineer

Corporate Staffing Services

Nairobi | Full Time | IT

Closing in 5 days

Big Data Engineer Job. IT Jobs In Kenya

Brief Description

Reporting to the DataOps Engineering Lead, the Big Data Engineer will be responsible for designing, developing, and maintaining scalable big data solutions that enable efficient storage, processing, and analysis of large volumes of data. You will work closely with cross-functional teams, including data scientists, data analysts, and software engineers, to build and optimize data pipelines and systems. This role requires expertise in big data technologies, database management, and software engineering best practices.


Key Responsibilities

  • Data Pipeline Development: Design, implement, and maintain robust data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data. Develop ETL (Extract, Transform, Load) processes to cleanse, enrich, and aggregate data for analysis.
  • Data Storage Solutions: Architect and optimize data storage solutions, including distributed file systems, NoSQL databases, and data warehouses. Implement data partitioning, indexing, and compression techniques to maximize storage efficiency and performance.
  • Big Data Technologies: Utilize and optimize big data technologies and frameworks such as Apache Hadoop, Apache Spark, Apache Flink, and Apache Kafka. Develop and maintain data processing jobs, queries, and analytics workflows using distributed computing frameworks and query languages.
  • Scalability and Performance: Optimize data processing workflows for scalability, performance, and reliability. Implement parallel processing, distributed computing, and caching mechanisms to handle large-scale data processing workloads.
  • Monitoring and Optimization: Develop monitoring and alerting solutions to track the health, performance, and availability of big data systems. Implement automated scaling, load balancing, and resource management mechanisms to optimize system utilization and performance.
  • Data Quality and Governance: Ensure data quality and integrity throughout the data lifecycle. Implement data validation, cleansing, and enrichment processes to maintain high-quality data. Ensure compliance with data governance policies and regulatory standards.
  • Collaboration and Documentation: Collaborate with cross-functional teams to understand data requirements and business objectives. Document data pipelines, system architecture, and best practices. Provide training and support to stakeholders on data engineering tools and technologies.
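For context on the data pipeline responsibility above, the following is a minimal PySpark sketch of an ETL job that extracts raw events, cleanses and aggregates them, and loads a curated dataset. The storage paths, column names, and application name are hypothetical placeholders, not part of this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Start a Spark session (cluster configuration would differ in production).
    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw event data from a hypothetical landing zone.
    raw = spark.read.json("s3a://example-bucket/landing/events/")

    # Transform: drop records missing key fields, derive a date column,
    # and aggregate event counts per day.
    daily = (
        raw.dropna(subset=["event_id", "event_ts"])
           .withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date")
           .agg(F.count("event_id").alias("event_count"))
    )

    # Load: write the aggregate partitioned by date for efficient downstream queries.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3a://example-bucket/curated/daily_event_counts/"
    )

    spark.stop()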

Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • Proven professional SQL skills.
  • Solid understanding of big data technologies, distributed systems, and database management principles.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Experience with big data frameworks such as Apache Hadoop, Apache Spark, or Apache Flink.
  • Knowledge of database systems such as SQL databases, NoSQL databases, and distributed file systems.
  • Familiarity with cloud platforms such as AWS, GCP, or Azure.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.
  • Ability to work independently and manage multiple priorities in a fast-paced environment.


How to Apply

Click here to apply
