Data Engineer, Amazon

Seattle

Full-Time

Posted 2 weeks ago

Job Description

Core Shopping Analytics transforms disparate, raw data into actionable insights to improve customer product & program discovery and evaluation across the Amazon store. In 2025, we will own the analytics for the Homepage, Navigation, Inspire, MobileX, List, Detail Page, and cross-Store shopping experiences.

Key job responsibilities
You are a member of a 3-person DE team that owns the analytics infrastructure, standardized logging, and a central suite of fact and dim tables used by internal team members, CS business & tech partners, and Stores stakeholders across the globe. You will evolve our infrastructure to leverage the latest technologies, balancing accuracy and speed. You will partner with CS Tech teams to ensure new logging meets business and availability requirements. You will build and enhance central fact and dim tables to drive standardization and improve consumption by downstream consumers. Lastly, you will collaborate with DE teams across Stores to share best practices.
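
The sketch below is a minimal, hypothetical illustration of the kind of fact/dim layout this role maintains. Table and column names (dim_surface, fact_engagement, and so on) are invented for illustration, and the standard-library sqlite3 module stands in for whatever warehouse engine the team actually uses.

    # Hypothetical sketch of a central fact/dim layout; names are illustrative,
    # not Amazon's actual schema. sqlite3 stands in for the real warehouse.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- dim table: one row per shopping surface (Homepage, Detail Page, ...)
        CREATE TABLE dim_surface (
            surface_id   INTEGER PRIMARY KEY,
            surface_name TEXT NOT NULL
        );

        -- fact table: standardized engagement events from partner logging
        CREATE TABLE fact_engagement (
            event_date  TEXT NOT NULL,
            surface_id  INTEGER NOT NULL REFERENCES dim_surface(surface_id),
            impressions INTEGER NOT NULL,
            clicks      INTEGER NOT NULL
        );
    """)

    conn.executemany("INSERT INTO dim_surface VALUES (?, ?)",
                     [(1, "Homepage"), (2, "Detail Page")])
    conn.executemany("INSERT INTO fact_engagement VALUES (?, ?, ?, ?)",
                     [("2025-01-01", 1, 1000, 40), ("2025-01-01", 2, 500, 40)])

    # Downstream consumers join the fact table to dims for standardized metrics.
    for row in conn.execute("""
        SELECT d.surface_name, SUM(f.clicks) * 1.0 / SUM(f.impressions) AS ctr
        FROM fact_engagement f JOIN dim_surface d USING (surface_id)
        GROUP BY d.surface_name
    """):
        print(row)

Keeping metric logic in central fact and dim tables like these, rather than in each consumer's queries, is what drives the standardization the paragraph above describes.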

A day in the life
We build in 5-week Sprints with 1 week for planning and 4 weeks for execution. Planning Week is used to groom the backlog and to scope & prioritize projects. We reserve the Friday of every Planning Week for a self-directed Learning Day. Over the 4 Execution Weeks, we have regular stand-ups and a mid-sprint demo. Projects vary from sprint to sprint, ranging from building derived/aggregate datasets to conducting analyses and pushing learnings to customers.

About the team
Today, we empower ~615 Builders directly and 334 unique partner teams via our 'Source of Truth' datasets, self-service dashboards/gits, scheduled reports, and custom metrics consumed for cross-store evaluation. In 2024, we supported new experiences end-to-end from logging to reporting/self-service and continued maturing our foundational service offerings to enable more bandwidth for insight generation.

Basic qualifications
  • 1+ years of data engineering experience
  • Experience with data modeling, warehousing, and building ETL pipelines (a minimal sketch follows this list)
  • Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
  • Experience with one or more scripting languages (e.g., Python, KornShell)
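
As a rough illustration of the ETL experience described above, the sketch below extracts a raw CSV log, applies a small transform in Python, and loads the result into a SQL table. The file name, columns, and target table are hypothetical and not part of this role's actual stack.

    # Hypothetical ETL sketch: extract raw CSV logs, transform with Python,
    # load into a SQL table. All names are invented for illustration.
    import csv
    import sqlite3

    def extract(path):
        # Extract: stream raw log rows from a CSV file.
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        # Transform: normalize each raw row into the warehouse schema.
        for r in rows:
            yield (r["event_date"], r["surface"].strip().title(), int(r["clicks"]))

    def load(rows, conn):
        # Load: append the cleaned rows to a daily clicks table.
        conn.execute("CREATE TABLE IF NOT EXISTS clicks_daily "
                     "(event_date TEXT, surface TEXT, clicks INTEGER)")
        conn.executemany("INSERT INTO clicks_daily VALUES (?, ?, ?)", rows)
        conn.commit()

    if __name__ == "__main__":
        conn = sqlite3.connect("warehouse.db")
        load(transform(extract("raw_click_log.csv")), conn)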