· Possess a degree in Computer Science, Information Technology, or a related field.
· At least 3 years of experience in a role focusing on data pipelines.
· Experience in building on data platforms (e.g. Snowflake, Redshift, Databricks).
· Experience with Continuous Integration and Continuous Deployment (CI/CD).
· Experience in Software Development Life Cycle (SDLC) methodology.
· Solid grasp of data warehousing concepts.
· Strong problem-solving and troubleshooting skills.
· Strong communication and collaboration skills.
· Proficient in SQL and Python.
· Experience with data transformation frameworks (e.g. dbt, Dataform).
· Experience with cloud environments (e.g. AWS, GCP, Azure).
· Experience with big data technologies (e.g. Spark).
· Experience with data platform migration.

Responsibilities:
· Work closely with stakeholders to understand their requirements.
· Design, develop, and maintain data pipelines and data models.
· Continuously optimize data pipelines and data models.
· Support the operational needs of data pipelines and data models.
Summary:
The Data Engineer will work with engineering and business stakeholders to gather requirements, then design, develop, and maintain purpose-built data pipelines and data models. The ideal candidate has a strong background in building Extract-Load-Transform (ELT) or Extract-Transform-Load (ETL) data pipelines.