Posted On 16 September
Specific responsibilities:
- Work as part of an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.
- Ingest and integrate massive datasets from multiple data sources, while designing and developing solutions for data integration, data modeling, and data inference and insights.
- Design, monitor, and improve development, test, and production infrastructure and automation for data pipelines and data stores.
- Troubleshoot and performance-tune data pipelines and processes for data ingestion, merging, and integration across multiple technologies and architectures, including ETL, ELT, API, and SQL.
- Test and compare competing solutions and provide informed POV on the best solutions for data ingestion, transformation, storage, retrieval, and insights.
- Work within the quality standards, code standards, and engineering practices established and maintained by the team.
Experience and Skills:
- 3-5 years of data engineering experience required, including 3+ years with Azure/AWS big data technologies.
- 2+ years of coding in Python.
- 2+ years of experience working with JSON, SQL, document data stores, and relational databases.
- Solid understanding of ETL/ELT concepts, data architectures, data modeling, data manipulation languages and techniques, and data query languages and techniques.