Posted On 01 August

  • Spark Data Engineer

    • Company Unify Technologies
    • No. of Openings 4
    • Salary Not Disclosed
    • Work Type on-site

    Job Description :

    Unify Technologies Pvt Ltd

    Company Nature of work: IT-Software Product Development and Product Engineering Services

    Founded Year: 2015

    Company Locations: Hyderabad, Bangalore, Pune, Chandigarh, Gurgaon - India; Seattle - USA

    Total Employees: 1,500+

    Company Website: http://unifytech.com/

    Company LinkedIn Website Address: https://www.linkedin.com/company/9206998

     

    A few words about Unify Technologies: Unify is a Digital Product Development and Product Engineering services company. We have extensive experience in software product engineering and a successful track record of meeting aggressive delivery plans without compromising on quality across Data, Cloud, Mobile, Cyber Security, E-Learning, E-commerce, and Healthcare platforms.

     

    If you are looking for a challenging opportunity to put your Computer Science skills to work, this is it: we're looking for Big Data engineers who have previously delivered scalable backend distributed systems, big data platforms, and data pipeline and streaming systems.

     

    Employment Type: Full-Time

    Role: Software Development Engineer

    Position: Developer, Senior Developer, and Lead Developer

    Project: Maps and Advertising Platforms product

    Experience: 2-4 Years - SDE; 4-8 Years - Sr SDE; 8-10 Years - Lead SDE

    Key Skills: Spark Core/Streaming with any programming language (Scala/Java/Python); good understanding of data transformation, data ingestion, and optimization mechanisms; good understanding of Big Data technologies (Hadoop, MapReduce, Kafka, Cassandra)

    Joining time: Immediate to 30 days

    Job Location: Hyderabad, Bangalore - India (Hybrid Mode)

    Education: A master's or bachelor's degree in Computer Science, Statistics, Engineering, or a related technical discipline is preferred

     

    Detailed Job Description:

    Key Qualifications:

    At least 2 years of working experience with Big Data platforms and building large-scale, highly available distributed systems.

    • Experience developing Spark applications using the Spark RDD API, Spark SQL, Spark GraphX, Spark Streaming, Spark on YARN, Spark MLlib, and the DataFrame API
    • Broad knowledge of Spark's advantages and workflows, how to write Spark jobs, and Spark query tuning and performance optimization
    • Solid grounding in data structures and algorithm basics; good hands-on experience with at least one programming language (1st preference: Scala/Java 8; 2nd preference: Python; 3rd preference: Java); strong investigative and problem-solving skills
    • Knowledge of data ingestion, optimization techniques, data transformation, and aggregation pipeline design/development is required
    • Experience with cutting-edge Big Data storage systems and technologies such as Hadoop, HDFS, AWS S3, AWS Lambda, Storm/Heron, Cassandra, Apache Kafka, Solr/Elasticsearch, MongoDB, DynamoDB, Postgres, and MySQL
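    To give candidates a sense of the transformation-and-aggregation pattern the qualifications above refer to, here is a minimal plain-Python sketch of the keyed aggregation that Spark's flatMap/map/reduceByKey operators implement (no Spark dependency; function names and data are illustrative, not from this posting):

    ```python
    from functools import reduce
    from collections import defaultdict

    def map_phase(records, fn):
        """Apply a transformation to every record (analogous to RDD.map)."""
        return [fn(r) for r in records]

    def reduce_by_key(pairs, fn):
        """Group (key, value) pairs by key and fold the values
        (analogous to RDD.reduceByKey)."""
        grouped = defaultdict(list)
        for k, v in pairs:
            grouped[k].append(v)
        return {k: reduce(fn, vs) for k, vs in grouped.items()}

    # Word count, the canonical Spark example:
    lines = ["spark streaming", "spark sql"]
    words = [w for line in lines for w in line.split()]   # flatMap
    pairs = map_phase(words, lambda w: (w, 1))            # map to (word, 1)
    counts = reduce_by_key(pairs, lambda a, b: a + b)     # reduceByKey
    # counts == {"spark": 2, "streaming": 1, "sql": 1}
    ```

    In a real Spark job the same three steps run distributed across executors, with reduceByKey triggering a shuffle; the optimization work mentioned above is largely about minimizing that shuffle.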

    Roles & Responsibilities:

    • Create new and maintain existing Scala/Spark jobs for data transformation and aggregation, from simple to complex transformations involving structured and unstructured data.
    • Produce unit tests for Spark transformations and helper methods
    • Develop data processing pipelines, data storage, and management architecture.
    • Define scalable calculation logic for interactive and batch use cases
    • Interact with infrastructure and data teams to produce complex analysis across data
    • You'll be working on a unique and challenging big data ecosystem focused on storage efficiency, data security and privacy, scalable and performant queries, and expandability and flexibility, with the goal of better measuring the quality of map data.
    • You will work with engineers to build a big data platform that processes and manages exabytes of data and enables efficient access to it.
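    One of the responsibilities above is producing unit tests for Spark transformations and helper methods. A common way to do this is to factor per-record logic into pure helper functions that can be tested without a SparkSession; a hedged sketch (field names and values are illustrative only):

    ```python
    def normalize_record(record: dict) -> dict:
        """Pure helper: the kind of per-row transformation a Spark job
        would apply via DataFrame.withColumn or RDD.map, factored out
        so it is testable without a Spark cluster."""
        return {
            "city": record.get("city", "").strip().lower(),
            "population": int(record.get("population", 0)),
        }

    # Unit test for the helper; no SparkSession required:
    def test_normalize_record():
        raw = {"city": "  Hyderabad ", "population": "11000000"}
        assert normalize_record(raw) == {
            "city": "hyderabad",
            "population": 11000000,
        }

    test_normalize_record()
    ```

    Keeping transformations pure like this also makes the Spark job itself easier to reason about, since the distributed wiring and the business logic are tested separately.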

     

    Interview Process:

    Interview Rounds: 3 to 4

    Nature of Interview: Technical, Programming, Coding Interview

    Mode of Interview: Google Meet/Webex video call (video must be enabled during the interview)

     

    Contact Person: varalakshmi@unifytech.com

     

    NOTE: We are also looking for Scala Functional programming, Java back-end, Java Full-Stack, MEAN/MERN Stack Developers, and SRE/DevOps, Data Cloud Engineers.

    Information

    • HR Name: Human Resource
    • HR Email: varalakshmi@unifytech.com
    • HR Phone: 094936 92255