Big Data - Hadoop Administrator

Job Description

• Hadoop 2.0 cluster administration, with good knowledge of Kafka, Elasticsearch, YARN, and YARN-based applications such as Spark.

• Good knowledge of the Hadoop ecosystem (Hadoop, Hive, Pig, Oozie, HBase, Flume, Sqoop), using both automated toolsets and manual processes.

• Installing Hadoop updates, patches, and version upgrades.

• Cluster maintenance and deployments, including the addition and removal of nodes.

• Performing HDFS backups and restores, Kerberos setup, and managing access to other Hadoop stack components.
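As a rough illustration of the backup task above, a snapshot-based HDFS backup on a Kerberized cluster might look like the sketch below. All paths, hostnames, and the keytab/principal are hypothetical, and the commands require a running cluster:

```shell
# Hypothetical keytab/principal: authenticate as the HDFS superuser.
kinit -kt /etc/security/keytabs/hdfs.keytab hdfs@EXAMPLE.COM

# Enable snapshots on the directory to back up, then take one.
hdfs dfsadmin -allowSnapshot /data/warehouse
hdfs dfs -createSnapshot /data/warehouse backup-2024-01-01

# Copy the read-only snapshot to a DR cluster with DistCp.
hadoop distcp \
  hdfs://prod-nn:8020/data/warehouse/.snapshot/backup-2024-01-01 \
  hdfs://dr-nn:8020/backups/warehouse
```

Copying from the `.snapshot` directory rather than the live path gives DistCp a consistent, point-in-time view of the data.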

• User management from a Hadoop perspective, including setting up users in Linux.

• HDFS support and maintenance; monitoring Hadoop cluster connectivity and security.

• Ensuring that the cluster and MapReduce routines are tuned for optimal performance.

• Working across teams to guarantee high data quality and availability.

• Acting as the link between developers and the build/architecture teams.

• Ability to work independently as well as in a team environment.

• Expertise in writing shell, Perl, and Python scripts, and in debugging existing scripts.
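A minimal sketch of the kind of utility script this role involves: a disk-usage check that warns when any mounted filesystem crosses a threshold. The 90% threshold and the message format are illustrative assumptions, not a standard:

```shell
#!/usr/bin/env bash
set -euo pipefail

THRESHOLD=90  # percent; hypothetical alert threshold

check_usage() {
    # $1: usage percentage as a bare integer (e.g. parsed from `df` output)
    local used=$1
    if [ "$used" -ge "$THRESHOLD" ]; then
        echo "WARN: disk usage at ${used}%"
    else
        echo "OK: disk usage at ${used}%"
    fi
}

# Check every mounted filesystem: strip the '%' sign from df's use column,
# print usage first so mount points containing spaces still parse.
df -P | awk 'NR > 1 {gsub(/%/, "", $5); print $5, $6}' | while read -r used mount; do
    printf '%s: %s\n' "$mount" "$(check_usage "$used")"
done
```

In practice a script like this would typically be wired into cron or a monitoring agent rather than run by hand.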

• Experience in the financial and banking industry is a plus.

• Professional Big Data certifications (or equivalent) are a plus.

Job Requirements

• Minimum 8 years’ experience in a typical system administration role, performing system monitoring, storage capacity management, performance tuning, and system infrastructure development.

• Minimum of 1-2 years of experience deploying and administering large Hadoop clusters.

• Ability to quickly learn new technologies and to enable and train other team members.