Role & responsibilities
• 3+ years of hands-on production experience with Apache Kafka and Confluent Platform, including Confluent Control Center, ksqlDB, Kafka Streams, and Kafka Connect.
• Proven experience in Kafka development, including the producer and consumer APIs, stream processing, and connector development.
• Experience with Kafka cluster management, including setup, configuration, monitoring, and troubleshooting.
• Familiarity with distributed systems, microservices architecture, and event-driven design patterns.
• Experience with cloud platforms (e.g., AWS, Azure) and container orchestration (e.g., Kubernetes) is a plus.
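To illustrate the connector development called out above: a Kafka Connect integration is typically declared as JSON and submitted to the Connect REST API. A minimal sketch using the Confluent JDBC source connector; the connection URL, credentials, and `orders` table are hypothetical placeholders:

```json
{
  "name": "orders-jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db.example.com:5432/shop",
    "connection.user": "connect",
    "connection.password": "secret",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "table.whitelist": "orders",
    "topic.prefix": "jdbc-"
  }
}
```

Posted to a Connect worker (e.g., `curl -X POST -H "Content-Type: application/json" --data @orders-source.json http://localhost:8083/connectors`), this streams newly inserted rows from the `orders` table into the `jdbc-orders` topic.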
Preferred candidate profile
• Proficiency in programming languages such as Java, Python, or Scala.
• Strong knowledge of Kafka internals, including brokers, ZooKeeper, topics, partitions, and offsets.
• Experience with monitoring tools (e.g., Prometheus, Grafana) and logging frameworks (e.g., Log4j, ELK Stack).
• Proficiency in using Confluent Control Center for monitoring, managing, and optimizing Kafka clusters.
• Expertise in Kafka Streams for building scalable, fault-tolerant stream processing applications.
• Experience with ksqlDB for real-time processing and analytics on Kafka topics.
• Strong understanding of Kafka Connect for integrating Kafka with external data sources and sinks.
• Understanding of networking, security, and compliance aspects related to Kafka.
• Familiarity with CI/CD pipelines and automation tools (e.g., Jenkins, GitLab CI).
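The ksqlDB and Kafka Streams skills listed above amount to expressing stream processing declaratively over Kafka topics. A hedged sketch of real-time analytics on a hypothetical `pageviews` topic (stream and column names are illustrative, not from this posting):

```sql
-- Declare a stream over an existing Kafka topic assumed to carry JSON events.
CREATE STREAM pageviews (user_id VARCHAR, page VARCHAR, viewtime BIGINT)
  WITH (KAFKA_TOPIC = 'pageviews', VALUE_FORMAT = 'JSON');

-- Continuously count views per page in 1-minute tumbling windows,
-- materialized as a table backed by a changelog topic.
CREATE TABLE pageviews_per_minute AS
  SELECT page, COUNT(*) AS views
  FROM pageviews
  WINDOW TUMBLING (SIZE 1 MINUTE)
  GROUP BY page
  EMIT CHANGES;
```

The same aggregation could be written imperatively with the Kafka Streams DSL; ksqlDB is the SQL layer over that runtime, which is why the two skills are usually paired.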