About us:
We are Psychology Today – the world's #1 psychology site, visited by over 30 million people a month. We match thousands of therapists with clients every day. Think Airbnb, but for therapy. (And yes, we still publish the magazine.)
If you're a talented and experienced data engineer who wants to bring mental health and therapy to the world, then we want to hear from you!
We offer:
• Work/life balance.
• A product and team that will inspire you - so you can do meaningful work with people who make you laugh.
• A healthy, profitable, and stable company where team members stay for many years.
• The freedom to work remotely from wherever home is.
• A supportive platform and a flat structure to do innovative work, be acknowledged, and make a difference.
• A chance to apply data solutions to big business challenges and opportunities.
On a daily basis, the Data Engineer will:
• Participate in agile rituals with other data team engineers.
• Work with business stakeholders, product owners, and development teams to determine how application data and analytics should be structured.
• Create and maintain optimal data pipeline architecture.
• Transform large and potentially complex data sets into structured data that can be used for dashboards/reporting.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Write and test AWS Glue jobs in Python that land data from source tables, process it, and publish it to other data stores for analysis (a minimal sketch follows this list).
• Write clean, well-structured, and maintainable Terraform code to provision infrastructure components.
• Collaborate with team members to refine requirements for data products that solve business intelligence problems while still being technically sound.
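To give a flavor of the day-to-day work, here is a minimal sketch of the kind of Glue job described above: it reads a source table from the Glue Data Catalog, applies a simple column mapping, and publishes the result to S3 as Parquet for analysis. The database, table, and bucket names are illustrative placeholders, not references to our actual infrastructure.

import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

# Standard Glue job setup; JOB_NAME is passed in by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Land data from a source table registered in the Glue Data Catalog.
# "source_db" and "client_sessions" are placeholder names.
source = glue_context.create_dynamic_frame.from_catalog(
    database="source_db",
    table_name="client_sessions",
)

# Process: keep and rename only the columns needed for reporting.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("session_id", "string", "session_id", "string"),
        ("therapist_id", "string", "therapist_id", "string"),
        ("created_at", "timestamp", "created_at", "timestamp"),
    ],
)

# Publish to another data store (here, S3 in Parquet) for downstream analysis.
# The bucket path is a placeholder.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-analytics-bucket/client_sessions/"},
    format="parquet",
)

job.commit()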
Requirements
• 3+ years of relevant experience, with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytics tools.
• Experience designing, building and maintaining ETL jobs using AWS Glue.
• Experience with Python, including data libraries such as PySpark, Pandas, and NumPy.
• Experience with Redshift or similar cloud-based data warehouse systems.
• Ability to create scripts and programs that automate big data operations.
• Experience designing and implementing data solutions using industry best practices.
• Strong understanding of data structures and algorithms.
• Strong understanding of and experience with SQL and NoSQL database technologies and management systems such as RDS and DynamoDB.
• Good understanding of how to structure large datasets in a way that allows for responsive reporting and queries.
• Experience with CI/CD pipelines such as GitLab CI/CD and source control tools such as Git.
• Experience with monitoring tools such as Datadog and CloudWatch.
Benefits
• Highly competitive salaries
• No politics: just a collaborative, focused, energetic work environment that encourages creative solutions
• We are a remote business: freedom and independence to work from anywhere
• An awesome, talented team of friendly people