Job Title: Lead Data Engineer
Years of Experience: 9+ Years
Location: Remote
Duration: 6+ Months
Responsibilities:
• Lead the design, development, and management of scalable data pipelines, architectures, and datasets for large-scale data processing.
• Oversee and guide a team of data engineers in the implementation of efficient and robust data solutions.
• Take a leadership role in shaping and implementing ETL processes to ensure data quality, integrity, and availability.
• Utilize expertise in Node.js and AWS Lambda to develop and optimize data processing applications and functions.
• Demonstrate proficiency in Python for scripting, automation, and building data-related applications.
• Apply strong SQL skills to design and optimize database queries and ensure efficient data retrieval.
• Leverage data science knowledge, specifically using Pandas, to support advanced analytics and insights.
• Manage and work with various databases, including but not limited to relational and NoSQL databases.
• Implement containerization using Docker to enhance the portability and scalability of data applications.
• Develop and maintain REST APIs for seamless integration of data across systems.
• Provide leadership and strategic guidance in making data-related recommendations and decisions.
• Possess hands-on experience with cloud platforms, with a specific focus on AWS services.
• Demonstrate expertise in enterprise data warehouse solutions, such as Snowflake or Databricks.
• Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders.
• Stay abreast of industry best practices, emerging technologies, and trends in data engineering.
Qualifications:
• Bachelor’s degree in Computer Science, Information Technology, or a related field.
• 9+ years of proven big data experience managing data pipelines, architectures, and datasets.
• Previous experience in a lead role overseeing a team of data engineers.
• Strong proficiency (3+ years) in Node.js and AWS Lambda for building scalable applications.
• Extensive experience (3+ years) with Python for scripting and data-related application development.
• In-depth knowledge of SQL and database optimization techniques.
• Familiarity with data science libraries, particularly Pandas.
• Hands-on experience with Docker for containerization.
• Expertise in designing and maintaining REST APIs.
• Previous exposure to cloud platforms, specifically AWS.
• Experience with enterprise data warehouse solutions, such as Snowflake or Databricks.
• Strong problem-solving and analytical skills.
• Excellent communication and leadership abilities.
• Proven ability to work in a collaborative and dynamic team environment.