Key Skills: Hadoop Big Data development, hands-on experience with Big Data technologies, AWS, EMR, DynamoDB, Lambda, Spark, strong problem-solving and logical skills, Data Structures and Algorithms
Number of Openings: 2
Joining time: Immediate to 30 days
Work Location: Hyderabad - India
Education: Master's/Bachelor's degree in Computer Science or equivalent experience
Detailed Job Description:
We are looking for a dedicated Hadoop Big Data Engineer to manage one of the largest distributed database infrastructures. The successful candidate will enjoy developing, building, and managing highly available, efficient distributed database systems that serve users around the globe.
Key Qualifications:
5+ years of strong experience developing Big Data applications using Scala/Java/Python, Spark, Hadoop, HDFS, Hive, Oozie, Kafka, and MapReduce is a huge plus
Programming experience in building high-quality applications, data pipelines, and analytics solutions.
Design and build highly scalable data pipelines using new-generation tools and technologies such as Spark and Kafka to ingest data from various distributed database systems.
Experience building large-scale data pipelines using AWS S3, EMR, DynamoDB, Lambda, and Spark
Experience in designing and building dimensional data models to improve accessibility, efficiency, and quality of data
Strong analytical and communication skills. Should be self-driven, highly motivated, and able to learn quickly
Key Responsibilities:
Designing and building the next-generation technologies that will make EMR the best environment to run large-scale data processing workloads.
Working on complex problems in distributed systems and query engines.
Translation of complex functional and technical requirements into detailed architecture and design.
You will work with many global teams and communicate effectively, both in writing and verbally, with technical and non-technical cross-functional teams
You will interact with internal teams across many other groups to lead and deliver elite products in an exciting, rapidly changing environment.
Interview Process:
Interview Rounds: 3 to 4
Nature of Interview: Technical, Programming, Coding Interview
Mode of Interview: Google Meet/Webex video call (video must be enabled during the interview)
NOTE: We are also looking for Scala Functional Programming, Java Back-End, Java Full-Stack, MEAN/MERN Stack, SRE/DevOps, and Data Cloud Engineers.