Title: Senior Software Engineer, Data Platform
Location: Chicago, IL, US 60661-4555; Lake Forest, IL, US
Workplace: Hybrid
Department: Technology (US)
Job Description:
As a leading industrial distributor with operations primarily in North America, Japan and the United Kingdom, We Keep The World Working® by serving more than 4.5 million customers worldwide with products delivered through innovative technology and deep customer relationships. With 2023 sales of $16.5 billion, we’re dedicated to providing value for customers, fostering an engaging culture for team members and driving strong financial results.
Our welcoming workplace enables you to learn, grow and make a difference by keeping businesses running and their people safe. As a 2024 Glassdoor Best Place to Work and a Great Place to Work-Certified™ company, we’re looking for passionate people to join our team as we continue leading the industry over our next 100 years.
Compensation:
The anticipated base pay compensation range for this position is $93,800.00 to $156,400.00.
Rewards and Benefits:
With benefits starting on day one, our programs provide choice and flexibility to meet team members' individual needs, including:
Medical, dental, vision, and life insurance plans with coverage starting on day one of employment and 6 free sessions each year with a licensed therapist to support your emotional wellbeing.
18 paid time off (PTO) days annually for full-time employees (accrual prorated based on employment start date) and 6 company holidays per year.
6% company contribution to a 401(k) Retirement Savings Plan each pay period, no employee contribution required.
Employee discounts, tuition reimbursement, student loan refinancing and free access to financial counseling, education, and tools.
Maternity support programs, nursing benefits, and up to 14 weeks paid leave for birth parents and up to 4 weeks paid leave for non-birth parents.
The pay range provided above is not a guarantee of compensation. The range reflects the potential base pay for this role at the time of this posting based on the job grade for this position. Individual base pay compensation will depend, in part, on factors such as geographic work location and relevant experience and skills.
The anticipated compensation range described above is subject to change and the compensation ultimately paid may be higher or lower than the range described above.
Grainger reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion at any time, consistent with applicable law.
Position Details:
This new team at Grainger is focused on transforming data from Grainger's key domains into reliable, real-time analytics products that address important business needs. You will focus on building and operating the data pipelines that power analytics ranging from key financial reports to production models, helping define the data engineering experience at Grainger. You will play an important part in setting the team's strategy, evaluating and integrating data patterns and technologies, and building data products alongside domain experts. You are a thoughtful observer who enjoys investigating business problems and building data solutions that address them.
You are a technical teacher who can guide teams to adopt the capabilities and products you build.
You Will:
As the senior technical engineer, you will design and implement highly efficient, reusable, and scalable data processing systems and pipelines across the tech stack, including Kubernetes, Databricks, and Snowflake.
Design with test-driven development and implement technical solutions to ensure data reliability and accuracy.
Develop data models and mappings, and build new data assets required by users. Perform exploratory data analysis on existing products and datasets.
Educate data engineering teams in adopting new data patterns and tools.
Understand trends and emerging technologies, and evaluate the performance and applicability of potential tools against our requirements.
Work within an Agile delivery / DevOps methodology to deliver product increments in iterative sprints.
Function as the subject matter expert (SME) in this area when engaging with our AI, Platform, and Business Analytics teams to build useful pipelines and data assets.
Work with product and business partners to define the roadmap, communication, and architecture.
Mentor junior team members.
You Have:
5+ years of experience in batch and streaming ETL using Spark, Python, Scala, Snowflake, or Databricks for data engineering or machine learning workloads.
3+ years of experience orchestrating and implementing pipelines with workflow tools such as Databricks Workflows, Apache Airflow, or Luigi.
3+ years of experience preparing structured and unstructured data for data science models.
4+ years of experience with containerization and orchestration technologies (Docker, Kubernetes); experience with shell scripting in Bash, Unix shells, or Windows shells is preferred.
Experience working with a variety of databases, including but not limited to vector, graph, relational, and NoSQL databases.
Experience using machine learning in data pipelines to discover, classify, and clean data.
Experience implementing CI/CD with automated testing in Jenkins, GitHub Actions, or GitLab CI/CD.
Familiarity with AWS or other cloud platforms, including services such as AWS Glue, Athena, Lambda, S3, and DynamoDB.
Demonstrated experience implementing the data management life cycle, using data quality functions such as standardization, transformation, rationalization, linking, and matching.
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal opportunity workplace.
We are committed to fostering an inclusive, accessible environment that includes both providing reasonable accommodations to individuals with disabilities during the application and hiring process as well as throughout the course of one’s employment. With this in mind, should you need a reasonable accommodation during the application and selection process, please advise us so that we can provide appropriate assistance.