Data Reliability Engineer (Python/AWS)

Chicago · Posted 3 days ago · Contractor · External
1-year contract for a Data Reliability Engineer (Python/AWS) role with a leading client in the Chicago, IL or Peoria, IL area. APPLY NOW!

Title: Data Reliability Engineer (Python/AWS)
Location: Chicago, IL or Peoria, IL
Workplace type: 100% onsite (M-F, 40-hour work week)
Type: Contract
Pay: $45-46.73 MAX/hour on W2
Interview type: 2 rounds (in person)
Length: 12 months, with the possibility to extend
Screenings: no technical screening; full panel drug and background check once hired
Benefits: optional benefits and 401K

Must haves:

Education & Experience Required:
• Degree with 5+ years' experience in this capacity, OR
• Master's degree with 4+ years' experience in this capacity, OR
• No degree, but technical certifications with 8+ years' experience in this capacity, is welcomed as well.

Required Technical Skills:
• (Required) 2-4 years of Python and SQL experience
• Experience with development and delivery of microservices using serverless AWS services (S3, CloudWatch, RDS, Aurora, DynamoDB, Lambda, SNS, SQS, Kinesis, IAM)
• Background in data management, data engineering, or data operations
• Familiarity with the ADO pipeline framework, or CI/CD experience (Jenkins)

Disqualifiers/Red Flags/Overqualifications:
• Choppy tenure / consistent job hopping
• If the candidate is not local but is willing to relocate on their own dime and be onsite on day 1, please make sure that is clear on the resume.
• Proper hands-on experience is a hard requirement; if they don't have it, please do not submit them for this role.

Job Description
This position will contribute significantly to the efficiency and effectiveness of our data operations, enabling the team to deliver high-quality data solutions and support to various business units.
Story Behind the Need – Business Group & Key Projects

Position's Contributions to Work Group:
• Perform all necessary data-related tasks, which could include data design, data quality, data triage, data governance, or data architecture – SQL, Snowflake, AWS
• Develop break/fix solutions and address the root cause in the data pipeline implementation/code – Python, AWS
• Develop scripts and automation tools to better detect and correct data issues
• Develop monitoring and alerting capabilities to proactively detect data issues
• Work directly on complex application/technical problem identification and resolution, including responding to off-shift and weekend support calls

Interaction with team:
• New team, working with operations teams (Tier 2 and Tier 3 support)
• Working with internal technical teams

Typical task breakdown:
• Identify, investigate, and obtain resolution commitments for platform and data issues to maintain and improve the quality and performance of assigned digital product data
• Issue identification: reports in all forms from customers, dealers, industry representatives, and subsidiaries
• Issue investigation: statistical analysis, data triage, and infrastructure problem-solving
• Issue resolution: identify root causes, create SageMaker scripts to fix data, and perform break/fix tasks on data pipeline code
• Develop scripts and automation tools to better detect and correct data issues
• Develop monitoring and alerting capabilities to proactively detect data issues
• Work directly on complex application and technical problem identification and resolution, including responding to off-shift and weekend support calls
• Communicate with end users and internal customers to help direct the development, debugging, and testing of application software for accuracy, integrity, interoperability, and completeness
• The employee is also responsible for performing other job duties as assigned by (client name removed) management from time to time