
Jobs at Alumni Ventures Portfolio Companies


Staff Data Engineer

Jump

Software Engineering, Data Science
Remote · United States
Posted on Monday, March 18, 2024

Description

Jump is the only end-to-end fan experience platform built for sports teams and venues, breaking the mold for what fans can expect at live events.

Jump’s enterprise software enables sports teams and venues to unlock the massive opportunities that come from real relationships with their fans, rethinking the traditional model that hasn’t put the fan experience first.

Founded in 2021 by Marc Lore, Alex Rodriguez, and Jordy Leiser and backed by top venture firms including Forerunner Ventures, Will Ventures, Mastry Ventures, Courtside Ventures, and more, we’re just getting started!

We are a remote first team that grounds our actions and decisions in our core values — begin with trust, bring our all, and blaze a trail. Living our values means that we always assume positive intent, show up with authenticity and empathy, and push the limit of what is possible with our collective creativity.

We’re actively recruiting smart, tenacious, adaptable, and, most importantly, kind people to join our team!

The Role

As a Staff Data Engineer at Jump, you will be responsible for designing, developing, and optimizing our data warehouse, data pipelines, and related infrastructure—systems that will underpin sophisticated machine learning-based fan experiences. You will also build and maintain reports and dashboards using data visualization tools. You will work closely with our cross-functional teams to ensure that data is collected, processed, and made available for analysis, driving informed decision-making across the organization.

What You’ll Do

  • Lead the build-out of a data platform and machine learning practice from the ground up

  • Data Pipeline Development: Extend Jump’s existing data pipelines that collect, process, and transform data from various sources into usable formats for analysis and reporting.

  • Data Modeling: Develop and maintain data models, schemas, and databases to support analytical, reporting, and machine learning needs. Ensure data integrity and quality throughout the data lifecycle.

  • EtLT/ETL/ELT Processes: Create and optimize processes to move and transform data efficiently. Monitor and troubleshoot data pipelines to ensure data reliability.

  • Data Integration: Collaborate with cross-functional teams to integrate data sources and third-party APIs, ensuring seamless data flow between different systems.

  • Performance Optimization: Continuously monitor and optimize data pipelines and data infrastructure for performance, scalability, and cost-effectiveness.

  • Machine Learning: Train and deploy machine learning models that influence fan experiences, and implement MLOps tooling to keep them performant.

  • Data Security: Implement data security and privacy measures to protect sensitive information and ensure compliance with relevant regulations (e.g., GDPR).

  • Documentation: Maintain comprehensive documentation for data pipelines, processes, and data models to facilitate knowledge sharing, onboard new team members, and support cross-team integration.

  • Data Quality Assurance: Develop best practices for data quality management, ensuring accurate, high-quality, reliable, and timely data delivery across the organization.

  • Lead data migration projects from a technical perspective (data mapping, scripting).

  • Collaboration: Work closely with data analysts, data scientists, and other stakeholders to understand their data requirements and provide support in accessing and analyzing data.

  • Stay Current: Stay up-to-date with industry best practices, emerging technologies, and trends in data engineering to recommend and implement improvements.

What You’ll Bring

  • 8+ years of experience in data engineering, with a proven track record of designing and implementing data pipelines.

  • Highly proficient in Python, with hands-on experience using libraries and tools such as pandas, NumPy, and Jupyter notebooks, or equivalent.

  • Experience architecting and implementing data warehouses on top of a multitenant SaaS product using Snowflake or AWS Redshift.

  • Strong experience with data migration and transformation tooling (e.g. dbt, Fivetran, AWS Glue).

  • Expert in writing and optimizing SQL queries.

  • Understanding of machine learning concepts and experience deploying, maintaining, and retraining machine learning models in production.

  • Proficient with AWS data-related services such as RDS, DynamoDB, Redshift, Data Migration Service (DMS), and SageMaker.

  • Experience with data streaming technologies (e.g. Kinesis, EventBridge, Kafka).

  • Experience utilizing Infrastructure-as-Code (IaC) tooling such as Terraform or CloudFormation.

  • Experience with data visualization tools (e.g. Tableau, Looker, ThoughtSpot).

  • Experience integrating data warehouse with analytics and CDP vendors (e.g. Segment, Amplitude).

  • Familiarity with software development methodologies like Agile, DevOps principles, and CI/CD pipelines.

  • Excellent problem-solving skills and the ability to work in a fast-paced, collaborative startup environment.

  • Strong communication skills to collaborate effectively with cross-functional teams.

Nice-to-Have Skills

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

  • Experience building backend REST or GraphQL APIs on AWS.

  • Knowledge of data privacy regulations (e.g., GDPR) is a plus.

Attributes that will make you successful on our team

  • A strong desire to learn. You have strong experience and want to continue building your technical skills.

  • Tenacity. You enjoy working on challenges that others can’t or don’t want to tackle and you aren’t afraid of failing fast in order to find better solutions.

  • Passion. You love using your technical skills to build products that solve real problems. You hold yourself to a high standard and help to elevate others as well.

  • Empathy. You thrive in an environment where everyone can truly be themselves. You understand that our differing life experiences influence who we are and how we show up, and these diverse perspectives enrich both our team and our product.

  • Customer-centric mindset. You can understand the problem to be solved and who we are solving it for.

Benefits

  • Remote first

  • Competitive salary and equity

  • Flex PTO policy

  • 401(k)

  • Generous medical, dental and vision plans

  • 16 weeks paid parental leave for primary and secondary caregivers

  • $1,000 reimbursement for work-from-home tech setup

  • Company-paid sustainability subscription to ensure carbon neutrality is maintained for employee activities, such as travel

Compensation

Compensation is something we don’t want our candidates or employees to worry about. Our goal is to offer competitive salaries that are regularly benchmarked against the market. The core tenets of our compensation philosophy are fairness and transparency.

We have established a standardized leveling framework based on job scope and responsibilities. The compensation package for each level is standard across all engineering roles. This means that every person at a certain level is paid the same as everyone else, regardless of their background, previous compensation, location, or any other factor.

The compensation for this role is $205,000 and includes a generous equity package.

Application

Some candidates may see the requirements and feel unsure that they match all the criteria. We encourage you to apply! There's a good chance you have important skills that we have not stated. We especially encourage members of traditionally underrepresented communities to apply, including women, nonbinary folx, people of color, members of the LGBTQ community, veterans, and people with disabilities. We’re committed to building an inclusive workplace where everyone can bring their authentic self and thrive, and we value the diversity brought by different life experiences.