The focus of this position is to provide solutions within the Hadoop environment using technologies such as HDFS, MapReduce, Pig, Hive, HBase, ZooKeeper, Sqoop, Flume, and Kafka, along with other big data technologies. You will be responsible for providing architectural patterns to development teams and ensuring that standards are adhered to.
Degree in Computer Science or equivalent.
5-8 years of professional software development experience building mission-critical applications in a production UNIX environment, with at least 2 years in a technical leadership role.
Experience with the Hadoop ecosystem (HDFS, MapReduce, Pig/Hive, HBase) desired.
Strong technical skills in Core Java, XML, MySQL, Linux, Apache and Tomcat/JBoss.
Acts as the subject-matter expert for Big Data technologies, addressing application integration and infrastructure framework questions.
Common web protocols and standards (DNS, HTTP, SSL/TLS, REST).
Understanding of scalability considerations in designing high-performance servers.
Unix operating systems (especially Linux) and system calls.
TCP/IP networking and network programming.
Hashing and caching.
Memory management, threading, and I/O performance optimization.
Strong skills and a proven record of designing and implementing multi-tier web solutions that are scalable, extensible, and maintainable.
Provides architectural leadership and ensures alignment with industry best practices.
Excellent communication and interpersonal skills.
Knowledge of and prior work experience with data modeling, security, performance, and scalability.
Experience with Perl and/or shell scripting.
Experience in resource scheduling, task decomposition, and risk management in an Agile environment.
Experience with Unicode/encoding and internationalization/localization.
Strong technical skills in JSP, Servlets, Struts, Hibernate, Spring.
Good understanding of various machine learning and data mining algorithms.