Job Description:
• Design and implement scalable, real-time data pipelines to move and transform customer and transaction data from multiple global sources.
• Architect resilient data systems that ensure reliability, observability, and data integrity across distributed environments.
• Collaborate with AI and product teams to structure and prepare data for intelligent features like recommendation systems, insight generation, and predictive analytics.
• Champion event-driven architectures, supporting seamless data ingestion from POS systems, APIs, and third-party integrations.
• Define best practices for data governance, schema evolution, and real-time monitoring.
• Mentor and guide team members, fostering growth and sharing knowledge across the engineering organization.
Requirements:
• Extensive experience building and scaling data architectures that handle diverse global data sources.
• Proven background in real-time data processing, leveraging tools like Kafka, Pub/Sub, and ClickHouse.
• Proficiency in TypeScript and experience working across full-stack environments.
• Strong understanding of SQL and NoSQL databases, with expertise in schema design, performance tuning, and data modeling.
• Familiarity with ETL/ELT frameworks (e.g., dbt, Airflow, Dagster) and cloud platforms like AWS and GCP.
• Experience with Vercel is a plus.
• Experience preparing structured, high-quality data for AI systems (e.g., embeddings, vector databases, RAG pipelines).
• Strong grasp of event-driven software patterns and distributed system fundamentals.
• An interest in mentorship and helping elevate others on the team.
Benefits:
• Personal or family health insurance options
• Generous annual stipend for work equipment and hardware
• Unlimited vacation, with a required minimum amount of time off taken each year
• RRSP matching
• Competitive compensation and flexible remote work culture