Posted On 08 September

  • PySpark Developer 005

    • Company U3 Infotech
    • No. of Openings 10+
    • Salary Not Disclosed
    • Work Type on-site

    Job Description:

    Duties and Responsibilities

    • Familiarize yourself with Ensign’s business domain and objectives to develop and deploy big data analytics applications that meet internal business requirements and the needs of partners and customers
    • Lead the design, development, testing, and deployment of efficient and reliable big data processing workflows
    • Design, develop, and manage data warehouse architecture and relational databases
    • Provide monitoring, maintenance, and support for system operations as part of maintenance and support (M&S), as required in commercial projects
    • Embrace the challenge of dealing with terabytes to petabytes of data on a daily basis
    • Manage experimentation, development, staging, and production environments to ensure overall system functionality, health, scalability, resiliency, and security
    • Implement and maintain complex big data projects, with a focus on collecting, parsing, managing, and analysing large sets of data to turn information into insights using multiple platforms
    • Deliver detailed documentation and ensure quality throughout the project lifecycle

    Requirements

    • Bachelor’s degree in Computer Science/Information Systems/Computer Engineering or equivalent
    • Minimum 5 years of experience in big data analytics development (e.g. Hadoop, Apache Spark, MPP databases)
    • In-depth knowledge of the Hadoop ecosystem (HDFS, Impala, Kafka, Spark, NiFi, Elasticsearch, etc.), associated tools, and cloud-based technologies (e.g. EMR, Redshift, S3)
    • Extensive experience in programming (Python, Scala, Java) for data processing and analytics
    • Understanding of modern software engineering tools such as Git, Bitbucket, Jenkins, and Maven
    • Highly proficient at reading, profiling, parsing, transforming, cleansing, and integrating data from various sources (structured, semi-structured, and unstructured)
    • Strong awareness of data security, data governance, and performance, with the ability to deliver these key non-functional requirements

    Desired Skills and Experience

    Git, Apache Spark, Scala, Big Data, Experimentation, Software Engineering, Data Governance, EMR, Maven, Python, Data Warehouse Architecture, Java, Data Analytics, S3, Databases, Business Requirements.

    Information

    • HR Name: Human Resource
    • HR Email: career@U3infotech.com
    • HR Phone: +65 6423 4739