Defining project requirements, building SQL/Hive databases, automating jobs, using analytical tools to find trends and predictive characteristics, developing and automating reports, visualizing data, and exploring new analytical and big data tools.
The candidate must possess clear communication skills to work in a highly collaborative, fast-paced team environment.
A successful candidate must be able to understand complex business problems and ensure projects leverage the appropriate technology and analytical tools to deliver a comprehensive solution.
Bachelor’s Degree in Computer Science, Engineering, Mathematics, Statistics or a related field
1+ year of experience in a data-oriented role working in a multi-disciplinary team
Master’s Degree in Computer Science, Engineering, Mathematics, Statistics or a related field
Familiarity and experience with tools such as Python, Hadoop, TensorFlow, scikit-learn, SQL, D3.js, MapReduce, Tableau, or R
Previous experience working with and building location-aware services, plus familiarity with web service development and API integration across multiple systems
A portfolio of open source contributions, personal projects, presentations, or other work that shows you are passionate about a subject, took the initiative to learn it, and applied it to a real problem
Design, build, optimize, launch and support new and existing data models and ETL processes in production
Interface with engineers, product managers and product analysts to understand data needs.
Manage and verify data accuracy for the Hadoop cluster.
Support the Hadoop cluster environment, which includes Hive, Spark, HBase, Presto, etc.
Bachelor’s Degree or equivalent experience in Computer Science or a related field
2+ years of experience in custom ETL design, implementation, and maintenance on Hadoop clusters
2+ years of hands-on development coding
Understanding of the Hadoop ecosystem, including HDFS, YARN, MapReduce, ZooKeeper, Kafka, HBase, Spark, and Hive
Strong SQL skills, especially in the area of data aggregation
Good understanding of distributed systems and basic mathematics such as statistics and probability
Comfortable with Git version control
At least 2 years’ experience architecting and designing infrastructure on AWS
Experience building real-world data pipelines
Automation skills using tools such as Airflow, Python, and Bash
Experience with A/B testing environments
Experience with analytics tools such as R and MATLAB
Strong Java or Scala skills
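The "strong SQL skills, especially in the area of data aggregation" requirement above can be sketched with Python's built-in sqlite3 module; the events table, its columns, and the values are hypothetical:

```python
import sqlite3

# In-memory database with a hypothetical events table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 10.0), (1, 5.0), (2, 7.5), (2, 2.5), (3, 1.0)],
)

# Aggregate per user: row count and total amount, highest totals first.
rows = conn.execute(
    """
    SELECT user_id, COUNT(*) AS n, SUM(amount) AS total
    FROM events
    GROUP BY user_id
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # -> [(1, 2, 15.0), (2, 2, 10.0), (3, 1, 1.0)]
```

The same GROUP BY pattern carries over directly to Hive and Presto queries on a Hadoop cluster, just at much larger scale.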
Design, develop, test, deploy, and support large-scale projects using the latest technologies.
Employ agile development practices including test and deployment automation as well as Continuous Integration using Jenkins to improve overall execution speed and product quality.
Collaboratively support agile applications to ensure minimal business downtime.
Be a technology advocate and share expertise with other team members.
Execute support best practices in incident investigation, technical escalation, diagnosis, recovery, documentation, communication, and transition to Problem Management.
Be a key partner to the business and the rest of the team throughout the delivery and support cycle.
Bachelor’s Degree or equivalent experience in Computer Science or a related field.
Strong fundamentals in data structures, algorithms, and object-oriented programming.
Software development experience in one or more JVM-based general-purpose programming languages, preferably Java 8 and above.
Must possess strong verbal and written communication skills.
Interest and ability to learn other coding languages and new technologies as needed.
SOLID principles and practices, IoC, and TDD
Experience with Git.
Experienced and knowledgeable in CI/CD and different testing strategies and techniques (unit, integration, and UI tests).
Experience working in an Agile team environment
Experience in dealing with multi-threaded scenarios and concurrency issues in code, as well as experience in working on high-performance software
An understanding of distributed computing or experience writing such applications
Experience with implementing Web APIs, Pub/Sub Systems, Event Sourcing Applications
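The concurrency and Pub/Sub items above can be sketched together. This is a minimal, illustrative in-memory publish/subscribe bus (this role favors JVM languages, but the sketch is in Python for brevity, and the class and topic names are made up), showing a lock guarding shared state across threads:

```python
import threading
from collections import defaultdict

class PubSub:
    """Minimal in-memory pub/sub; a sketch, not a production broker."""

    def __init__(self):
        self._lock = threading.Lock()          # guards the subscriber map
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        with self._lock:
            self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        with self._lock:                       # snapshot under the lock
            callbacks = list(self._subscribers[topic])
        for cb in callbacks:                   # deliver outside the lock
            cb(message)

bus = PubSub()
received = []
bus.subscribe("orders", received.append)

# Publish from several threads; the lock keeps subscriber state consistent.
threads = [threading.Thread(target=bus.publish, args=("orders", i)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(received))  # -> [0, 1, 2, 3, 4]
```

Copying the callback list before delivering is a common design choice: it keeps the critical section short and avoids holding the lock while running arbitrary subscriber code.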