Sr Data Engineer

Sydney · Full-time · External
Salary: Negotiable
Sr Data Engineer - CREQ177048

Description

Data Solution Architect Job Description:

Activities:
- Design and develop scalable data pipelines from multiple sources, covering both batch data and event data
- Design, develop, test, deploy and maintain the data platform (storage, APIs, pipelines, catalogues)
- Work with various types of data - JSON, XML, Parquet, Avro and delimited files (a configuration-driven reader for these formats is sketched after the lists below)
- Implement strategy on data reliability, efficiency, performance and quality
- Perform peer design reviews, code reviews, pair programming and functional testing to ensure quality releases
- Deploy data pipelines following DevOps principles, incorporating automated testing
- Automate monitoring of data pipelines and data integrity
- Showcase designs for new features and obtain approval from stakeholders
- Support team members with design, development and testing when they are blocked

Must-have:
- Knowledge and working experience of modern data platform architectures, including event-based architecture
- Experience in configuration-based design and development of large data projects
- Extensive and proven experience with Python, SQL, shell scripting, APIs, and processing large datasets
- Excellent understanding of data engineering concepts and data modelling
- Experience working in an AWS environment, including S3, EC2, Redshift, Aurora, SQS, SNS, IAM, etc.
- Experience with streaming data platforms such as Kafka
- Experience with CI/CD methodologies and building Infrastructure as Code
- Experience with container-based development and deployment, such as Docker
- Adherence to data governance standards and principles

Nice-to-have skills:
- Experience with Kubernetes and Argo Workflows
- Experience with AWS Glue, Lambda, EMR, DMS, CloudWatch, KMS, Secrets Manager
- Experience working with microservices architecture
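To illustrate the "configuration-based design" and multi-format requirements above, here is a minimal Python sketch of one common pattern, not the employer's actual code: every name, path and config field is invented for illustration, pandas needs pyarrow installed for Parquet and lxml for XML, and Avro would need a separate reader such as fastavro (omitted here).

```python
import pandas as pd

# Hypothetical pipeline config: each source declares a format and a path.
# Structure and names are illustrative only, not from the posting.
CONFIG = [
    {"name": "orders",   "format": "json",      "path": "data/orders.jsonl"},
    {"name": "events",   "format": "parquet",   "path": "data/events.parquet"},
    {"name": "accounts", "format": "delimited", "path": "data/accounts.csv", "sep": "|"},
    {"name": "invoices", "format": "xml",       "path": "data/invoices.xml"},
]

# Dispatch table: supporting a new format means adding an entry here,
# not rewriting pipeline code (the configuration-based design idea).
READERS = {
    "json":      lambda src: pd.read_json(src["path"], lines=True),
    "parquet":   lambda src: pd.read_parquet(src["path"]),
    "delimited": lambda src: pd.read_csv(src["path"], sep=src.get("sep", ",")),
    "xml":       lambda src: pd.read_xml(src["path"]),
}

def load_sources(config):
    """Load every configured source into a DataFrame keyed by source name."""
    frames = {}
    for src in config:
        reader = READERS.get(src["format"])
        if reader is None:
            raise ValueError(f"unsupported format: {src['format']}")
        frames[src["name"]] = reader(src)
    return frames

if __name__ == "__main__":
    for name, df in load_sources(CONFIG).items():
        print(name, df.shape)
```

The point of the dispatch table is that onboarding a new source or format becomes a config change rather than a pipeline rewrite, which is what makes this style scale across large data projects.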
Job description:
- Understand the existing data architecture of the data platform
- Design, develop, test and deploy features for a generic data pipeline using Kubernetes, Argo Workflows, Python, Aurora PostgreSQL, Redshift, S3 and Kafka (a minimal Kafka-to-S3 sketch follows the listing details below)
- Work in a fast-paced, agile environment with tight deadlines
- Work in a team environment with efficient communication and collaboration
- Lead by empowering team members to deliver quality results on time

Primary Location: Sydney, New South Wales, Australia
Job Type: Experienced
Primary Skills: Python
Years of Experience: 8
Qualification: Data Solution Architect
Travel: No
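As a companion to the pipeline work described in the job description, the following is a hypothetical sketch of the Kafka-to-S3 leg of such a pipeline, using the kafka-python and boto3 client libraries. The topic, bucket and broker names are placeholders; in the stack this posting describes, a job like this would typically run under Argo Workflows with IAM-scoped credentials rather than as a bare script.

```python
import json
import time

import boto3
from kafka import KafkaConsumer  # kafka-python

# Placeholder names for illustration only.
TOPIC = "example-events"
BUCKET = "example-data-lake"
BROKERS = ["localhost:9092"]
BATCH_SIZE = 500

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    enable_auto_commit=False,   # commit only after a successful S3 write
    auto_offset_reset="earliest",
)
s3 = boto3.client("s3")

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= BATCH_SIZE:
        # Write the batch as newline-delimited JSON, keyed by arrival time.
        key = f"raw/{TOPIC}/{int(time.time())}.jsonl"
        body = "\n".join(json.dumps(rec) for rec in batch).encode("utf-8")
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)
        consumer.commit()  # at-least-once: records may repeat, never vanish
        batch = []
```

Committing offsets only after the S3 write trades duplicates for durability (at-least-once delivery), so a downstream load into Redshift or Aurora would be expected to deduplicate.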