## Responsibilities
- Build and maintain data pipelines that power both ad-hoc analyses and production dashboards
- Develop statistical models and data science solutions while also implementing the infrastructure to deploy them
- Create self-serve analytics tools and datasets that empower stakeholders across the organization
- Design experiments and perform statistical analyses to measure product and marketing initiatives
- Build data models in our warehouse that balance analytical flexibility with performance
- Partner directly with product, marketing, and leadership teams to identify opportunities and measure impact
- Own the full lifecycle of data products, from initial exploration to production deployment
## Requirements: Core Technical Skills
- Strong experience with modern data stack tools (e.g., dbt, Airflow/Dagster, Snowflake, BigQuery, Redshift, or similar)
- Proven ability to design and manage ETL pipelines and database architectures
- Advanced SQL skills and high proficiency in Python for both data analysis and engineering
- Understanding of data modeling principles (e.g., Kimball, Data Vault)
- Experience with cloud data platforms (AWS, GCP, or Azure)