Take complete end-to-end ownership of one or more application components.
Responsible for the code, unit tests, automation, peer reviews, and CI/CD of one or more application components and data pipelines.
Translate business requirements into technical solutions.
Responsible for the design and implementation of innovative, scalable, distributed systems.
Design and deploy data and pipeline management frameworks built on top of open-source components, including Hadoop, Hive, Spark, HBase, Kafka Streams, and other Big Data technologies.
Work in and promote an agile development culture focused on high-quality code development and deployment to production.
Champion design and coding best practices.
This is a hybrid position. Hybrid employees alternate between remote and office work. Employees in hybrid roles are expected to work from the office two days a week, Tuesdays and Wednesdays, with a general guidepost of being in the office 50% of the time based on business needs.
Qualifications
- Bachelor's degree in Computer Science or a related technical discipline, with 7 or more years of software development experience building large-scale data processing platforms.
- Proficiency in engineering practices and writing high-quality code, with expertise in Java.
- Experience building applications on top of open-source Big Data platforms such as Hadoop, Hive, Spark, HBase, Kafka Streams, and other Big Data technologies.
- Experience building applications on top of Kubernetes and containers.
- Experience handling large data volumes in low-latency and/or batch mode.
- Experience building microservices with Spring Boot or Node.js would be a plus.
- Strong team player, including mentoring and growing junior engineers.
- Results-driven and self-motivated, with a strong learning mindset and a good understanding of related advanced and emerging technologies.
- Strong verbal, written, presentation, and interpersonal skills.
- Quick learner and self-starter; detail-oriented and able to work with minimal supervision.