Amazon - Hiring for Data Engineer I, Apply by 30 November 2022.
Overview:
Amazon is hiring for the role of Data Engineer I!
Amazon is a multinational technology company whose businesses span e-commerce, cloud computing, digital streaming, and artificial intelligence. It is one of the Big Five companies in the US IT industry, alongside Google, Microsoft, Facebook, and Apple.
Qualification:
- Enrolled in a Bachelor's or Master's degree program in Computer Science, Engineering, Mathematics, or a related technical discipline
- Industry experience in Data Engineering, BI Engineering, or a related field
- Hands-on experience in building big data solutions using EMR/Elasticsearch/Redshift or an equivalent MPP database
- Strong verbal and written communication skills; self-driven and able to deliver high-quality results in a fast-paced environment
- Hands-on experience with and advanced knowledge of SQL and scripting languages such as Python, Shell, Ruby, etc.
- Hands-on experience with the reporting/visualization tools available in the industry
- Demonstrated strength and experience in data modeling, ETL development, and data warehousing concepts
Responsibilities:
- Design data schemas and operate internal data warehouses and SQL/NoSQL database systems
- Design data models; implement, automate, optimize, and monitor data pipelines
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions
- Analyze and solve problems at their root, stepping back to understand the broader context
- Manage Redshift/Spectrum/EMR infrastructure and drive architectural plans and implementation for future data storage, reporting, and analytics solutions
- Work with AWS technologies such as S3, Redshift, Lambda, and Glue
- Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
- Work on the data lake platform and its components, such as Hadoop and Amazon S3
- Work with SQL-on-Hadoop technologies such as Spark, Hive, and Impala
- Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
- Conduct rapid prototyping and proof of concepts
- Conceptualize and develop automation tools for benchmarking data collection and analytics
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies (see the illustrative sketch below)
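To make the ETL and SQL-on-Spark responsibilities concrete, here is a minimal illustrative sketch of the kind of pipeline described above, written in PySpark. It is only an assumption about the day-to-day work, not part of the posting, and every bucket path, column, and table name in it is hypothetical.

```python
from pyspark.sql import SparkSession

# Start a Spark session (on EMR this would typically already be configured)
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw order events from a hypothetical S3 location
orders = spark.read.json("s3://example-data-lake/raw/orders/")

# Transform: aggregate daily revenue per marketplace using Spark SQL
orders.createOrReplaceTempView("orders")
daily_revenue = spark.sql("""
    SELECT marketplace, order_date, SUM(amount) AS revenue
    FROM orders
    GROUP BY marketplace, order_date
""")

# Load: write the curated result back to the data lake as partitioned Parquet
daily_revenue.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-data-lake/curated/daily_revenue/")

spark.stop()
```

In practice a job like this might be scheduled on EMR or AWS Glue, and the curated output could then be exposed to analysts through Redshift Spectrum or loaded into Redshift for dashboards and reports.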
Location:
Chennai