Role: Data Engineering Lead
Location: Toronto, ON
Job Summary
We are looking for a skilled Data Engineering Lead to design, build, and maintain scalable data pipelines and data platforms. The ideal candidate has strong hands-on experience with Python, PySpark, SQL, Snowflake, and ETL/data integration, and a solid focus on data quality, reliability, and performance.
Key Responsibilities
• Design, develop, and maintain robust ETL/ELT pipelines to process large-scale structured and semi-structured data
• Build and optimize data workflows using Python and PySpark
• Develop and maintain data models and transformations in Snowflake
• Write complex, high-performance SQL queries for analytics and reporting
• Integrate data from multiple sources using data integration tools
• Implement and enforce data quality checks, validations, and monitoring
• Collaborate with analytics, data science, and business teams to understand data requirements
• Optimize pipeline performance, scalability, and cost
• Troubleshoot and resolve data issues across the pipeline lifecycle
• Follow best practices for coding standards, version control, testing, and documentation
Required Skills & Qualifications
• Strong Data Engineering and software development experience
• Proficiency in Python and PySpark
• Advanced SQL skills
• Hands-on experience with Snowflake
• Solid experience building ETL/ELT pipelines
• Experience with data integration tools (e.g., Informatica, Talend, Fivetran, Airflow, or similar)
• Strong understanding of data quality, data validation, and data governance
• Experience working with large datasets and distributed data processing
• Strong problem-solving and debugging skills