**PYTHON DEVELOPER**
My client strives to build a business that you can shape, an inclusive workplace where everyone’s ideas are valued, and a culture where people can thrive together. Their people stay connected and tuned in to what’s happening around them, keeping them ahead of the curve. While focused on the long term, they look to the future to bring growth, development, and benefit to everyone whose lives they touch.
**Key Accountabilities**
Main responsibilities:
- Design and implement our Data and AI central data platform as well as related tools/systems for advanced business analytics and enterprise data governance
- Manage data modeling and design, and write and optimize ETL jobs
- Participate in building and enhancing enterprise cloud data warehouse
- Deliver and manage in-house and cloud-native data solutions to meet business requirements across firm-wide business units
- Assist in creating and monitoring analytics dashboards for different business functions
- Ensure the quality, integrity, and accuracy of datasets through tracked, secured, and auditable controls
- Work with stakeholders to resolve data-related technical issues and support their data needs
- Follow and enforce best practices in software development and data engineering
**Requirements**:
- Excellent coding skills with Python and SQL, and solid understanding of object-oriented analysis and design
- Working knowledge of common algorithms and data structures, with strong analytical and problem-solving skills
- Hands-on experience with Linux and shell scripting
- Working experience with containerization (Docker/K8S) and task orchestration tools (Airflow/Luigi, etc.)
- Experience with cloud services and tools (AWS/Azure/GCP), as well as cloud data warehouse platforms
- Experience with modern DevOps practices, including version control, TDD, and CI/CD, for both code and configuration changes
- Basic understanding of and experience with ML/AI concepts (e.g., deep learning, deep reinforcement learning, deep Bayesian learning), workflows, toolsets (Jupyter Notebook, etc.), and libraries (NumPy, pandas, scikit-learn, PyTorch, etc.), preferably in both cloud-native and desktop deployments
- Experience with traditional RDBMS-based systems (including data lakes, data warehouses, and data marts) as well as more modern NoSQL and cloud-native big-data technology stacks such as document-oriented databases, Hadoop, and columnar file formats (e.g., Parquet)
- Familiarity with REST APIs, service-oriented architectures (SOA)/microservices, virtualization, and serverless deployment architectures
- Demonstrated ability to understand, work with, and deliver robust solutions in more than one programming language, framework, technology stack, or runtime environment
**Qualifications / Experience**:
- Degree-level qualification or higher in Computer Science or another quantitative field
- 1–5 years of technical experience demonstrating increasing sophistication of the solutions implemented, and the ability to deliver
- Fluency in both written and spoken English