Location: Gurgaon, India

Employment Type: Full time

Min. Experience: 2 - 4 years

Position Summary:

We are looking for candidates with hands-on experience in Big Data technologies to be based out of our Gurgaon office.

Key Responsibilities:

  • Build the Big Data infrastructure to store and process terabytes of data
  • Understand the business need - what kind of data, how much data, the types of algorithms to be run, the load on the system, the budget, etc. - and recommend optimal solutions
  • Build and implement the solution. This will require you to be hands-on in building quick prototypes / proofs of concept and data processing benchmarks
  • Work with the operations team to build the systems, processes, and team required to run and maintain the systems securely, reliably, and scalably
  • Work with the analytics team to understand what data landscaping would be required

Qualifications and Skills:

  • Must have 2-4 years of experience with Big Data technologies such as Hadoop and the related ecosystem
  • Practical experience and in-depth understanding of MapReduce
  • Hands-on experience with Spark/Hive/Pig/Flume/Sqoop
  • Should have a good programming background with expertise in Java
  • Familiarity with the data infrastructure tools landscape, e.g. cloud service providers, virtualization software, system monitoring tools, and development environments
  • Ability to program and guide junior resources on technical aspects
  • Ability to craft documents that can explain complex ideas in simple terms in order to build consensus or educate
  • Knowledge of R or another statistical programming language is a plus
  • Degree - Graduate/Postgraduate in CSE or a related field

Applying for

First Name

Last Name

Profile Summary (200 characters)

Highest Education (Degree name)

Highest Education (Institute name)

Year of Completion

Current Organisation

Total Years of Experience

Key Skills Summary (200 characters)

Current CTC

Expected CTC

Current Location

Preferred Location

Upload your resume