Role: AWS Data Engineer
Location: Whippany, NJ (Onsite)
Duration: 6-12 Months (with possible extension)
Job Summary:
We are looking for a highly skilled AWS Data Engineer with strong expertise in Python, Kafka, AWS Lambda, and Kinesis. The ideal candidate will design and implement scalable, real-time data pipelines and ensure reliable data ingestion, transformation, and delivery across the analytics ecosystem.
Key Responsibilities:
• Design, build, and optimize data pipelines and streaming solutions using AWS services (Kinesis, Lambda, Glue, S3, Redshift, etc.).
• Develop and maintain real-time data ingestion frameworks leveraging Kafka and Kinesis.
• Write clean, efficient, and reusable Python code for ETL and data transformation.
• Implement event-driven and serverless architectures using AWS Lambda.
• Collaborate with data scientists, analysts, and application teams to deliver high-quality data solutions.
• Monitor and troubleshoot data flow and performance issues in production environments.
• Ensure data quality, security, and compliance with enterprise standards.
Required Skills & Experience:
• 5+ years of experience as a Data Engineer or in a similar role.
• Strong hands-on expertise with AWS cloud services (Kinesis, Lambda, Glue, S3, IAM, Redshift).
• Proficiency in Python for data engineering and automation tasks.
• Experience with Apache Kafka for streaming and messaging pipelines.
• Strong understanding of data modeling, ETL workflows, and distributed data systems.
• Working knowledge of SQL and data warehousing concepts.
• Familiarity with CI/CD pipelines and infrastructure as code (IaC) using CloudFormation or Terraform is a plus.