Remote Data Engineer (Entry-Level) - $35/Hour
Join Amazon's innovative team as a Remote Data Engineer and play a key role in driving automation for Amazon Pay! As a talented and motivated individual, you will have the opportunity to work with a diverse group of professionals to design, build, and deliver high-performance, fault-tolerant data pipelines. At a competitive rate of $35/hour, this is an exciting chance to launch your career and make a meaningful impact.
As a Data Engineer, you will be part of the Data Engineering team, responsible for envisioning, building, and delivering data pipelines that empower stakeholders to make data-driven decisions. You will work closely with cross-functional partners from technology, product, and operations to translate raw data into actionable insights.
Key Responsibilities:
- Design, implement, and support a platform offering ad-hoc access to large data sets
- Interface with different technology groups to extract, transform, and load data from a wide variety of data assets
- Implement data structures using best practices in data modeling and ETL/ELT processes, leveraging SQL, Redshift, and OLAP technologies
- Model data and metadata for ad-hoc and pre-built reporting
- Interface with business clients, collecting requirements and delivering complete reporting solutions
- Build robust, scalable data integration (ETL) pipelines using SQL, Python, and Spark (see the illustrative sketch after this list)
- Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs
- Continuously improve ongoing reporting and analysis techniques, automating or simplifying self-service support for clients
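For candidates curious about the day-to-day work, below is a minimal, illustrative PySpark sketch of the kind of ETL pipeline described above. It is not Amazon's actual pipeline; the S3 paths, column names, and aggregation logic are assumptions made purely for illustration.

# Illustrative only: a small batch ETL job that reads raw payment events,
# aggregates daily totals per merchant, and writes a curated dataset.
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-payments-etl").getOrCreate()

# Extract: raw JSON events from a hypothetical object-storage location
raw = spark.read.json("s3://example-bucket/raw/payment_events/")

# Transform: keep completed payments and roll them up per merchant per day
daily = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("event_date", F.to_date("event_timestamp"))
       .groupBy("merchant_id", "event_date")
       .agg(
           F.count("*").alias("payment_count"),
           F.sum("amount").alias("total_amount"),
       )
)

# Load: write a partitioned, query-friendly dataset for downstream reporting
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/daily_merchant_payments/"))

spark.stop()

In practice, a job of this shape would typically be scheduled and orchestrated (for example on EMR or AWS Glue, both named in the qualifications below) and paired with SQL models in Redshift for reporting.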
About the Team:
The Amazon Pay Data Engineering and Analytics team's mission is to transform raw data into actionable insights. We achieve this by providing a single source of truth, standardized metrics, reporting with deep-dive capabilities, and ML models that surface actionable insights, helping the business identify growth opportunities and drive the Amazon Pay flywheel.
Basic Qualifications:
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL
- Experience with big data technologies such as Hadoop, Hive, Spark, or EMR
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js
Desired Qualifications:
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases)
Join Our Team!
This is a fantastic opportunity to grow your career. If you have the skills and passion we're looking for, please submit your application today.