Posted on 03 September
• Skilled with Docker & Kubernetes
• Development experience with Golang (alternatively a strong background in Scala or Java)
• Skilled in operating Flink, Kafka, Redis & Elasticsearch clusters
• Skilled at CI/CD using automation tools such as Jenkins and Ansible
• Experience using and configuring operational tools such as Splunk, Humio, Prometheus & Grafana.
• Experienced at administering a code repository such as GitHub
• Minimum of 2-3 years’ experience in production CI pipelines, utilizing big data engineering techniques that enable statistical solutions to solve business problems
• Postgraduate degree in Computer Science/Engineering, Information Science, or a related discipline with strong technical experience highly desired
• Previous exposure to financial services, credit cards or merchant analytics is a plus, but not required
• Extensive experience with SQL and big data technologies (Hadoop, Python, Spark, Hive, etc.) for large-scale data processing, data transformation, and machine learning pipelines
• Familiarity or experience with data mining and statistical modeling (e.g., regression modeling, clustering techniques, decision trees) is very helpful
• Strategic thinker with good business acumen to orient data engineering toward the business needs of internal clients
• Demonstrated intellectual and analytical rigor; strong attention to detail; a team-oriented, energetic, collaborative, diplomatic, and flexible style