Responsibilities
- Design and develop scalable, reliable, secure, and fault-tolerant data systems, and improve existing systems to deliver next-generation capabilities
- Data engineering on the AWS platform, using tools such as S3 and Redshift
- Write readable, concise, reusable, and extensible code that avoids reinventing solutions to problems that have already been solved
- Apply best-practice dimensional modelling techniques, such as derived fact tables and type 2 slowly changing dimensions, and leverage them in a business intelligence tool such as Metabase or Tableau
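For context, the type 2 slowly changing dimension mentioned above can be sketched in a few lines. This is an illustrative in-memory version (the row layout and function name are assumptions, not tied to any particular warehouse): when a tracked attribute changes, the current row is closed out and a new versioned row is appended, preserving full history.

```python
from datetime import date

# Illustrative type 2 SCD update: close the active row for the business
# key and append a new version, so history is never overwritten.
def apply_scd2(dim_rows, business_key, new_attrs, effective_date):
    for row in dim_rows:
        if row["key"] == business_key and row["is_current"]:
            if row["attrs"] == new_attrs:
                return dim_rows  # no change; keep the current row as-is
            row["is_current"] = False
            row["valid_to"] = effective_date
    dim_rows.append({
        "key": business_key,
        "attrs": new_attrs,
        "valid_from": effective_date,
        "valid_to": None,       # open-ended: this is the current version
        "is_current": True,
    })
    return dim_rows

# A customer dimension with one row; the customer then moves city.
dim = [{"key": 1, "attrs": {"city": "Pune"}, "valid_from": date(2020, 1, 1),
        "valid_to": None, "is_current": True}]
apply_scd2(dim, 1, {"city": "Mumbai"}, date(2021, 6, 1))
```

After the update the dimension holds two rows: the old Pune row, now closed with a `valid_to` date, and a new current Mumbai row.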
Requirements
- At least 3-4 years of experience as a Data Engineer
- Strong coding skills in Python or Scala
- Experience with NoSQL data stores
- Experience in SQL, with an in-depth understanding of query optimization and a strong command of DDL
- Excellent understanding of building data models for the target data warehouse
- Hands-on experience with at least one cloud platform and a thorough understanding of cloud concepts
- Experience with an automation/orchestration tool such as Airflow, Kubeflow, Oozie, Luigi, Azkaban, or Step Functions
- An innate ability to understand data and interpret it for end business users
- Ability to drive initiatives and work independently
- Good communication skills, with the ability to express a point concisely to internal stakeholders
Good to have
- Exposure to machine learning algorithms, with practical implementation experience
- A big data or data engineering certification from any of the major cloud providers
- Experience building an entire ETL pipeline from scratch
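As a rough illustration of the end-to-end pipeline referred to above, a toy extract-transform-load flow might look like the following. The source and sink here are plain Python structures purely for illustration; a real pipeline would read from systems like S3 and load into a warehouse like Redshift.

```python
import csv
import io

# Toy ETL: extract rows from CSV text, transform them (clean strings,
# cast types, drop incomplete rows), and load into an in-memory table.
def extract(csv_text):
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    return [
        {"name": r["name"].strip().title(), "amount": float(r["amount"])}
        for r in rows
        if r["amount"]  # drop rows with a missing amount
    ]

def load(rows, warehouse):
    warehouse.extend(rows)
    return warehouse

source = "name,amount\n alice ,10.5\nbob,\ncarol,7\n"
warehouse = []
load(transform(extract(source)), warehouse)
```

The stages are kept as separate functions so each can be tested and swapped independently, which is the usual shape an orchestrator like Airflow expects when wiring steps into a DAG.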
Due to the high volume of applicants, only candidates selected for an interview will be notified.