We are looking for a talented and passionate Data Engineer to join the Data Engineering team at our Barcelona HQ.
Glovo has a culture of data-driven decision-making and demands data that is timely, accurate, and actionable. We are growing fast, collecting terabytes of data from tens of sources and providing interfaces that let our internal customers access and query that data hundreds of thousands of times per day.
As a Data Engineer, you will build and continually improve Glovo’s reliable, scalable Big Data Platform using technologies such as Amazon Web Services (AWS), Spark, Python and many more. Your work will have an immediate influence on the shape of the data consumed by teams across Glovo, including Central BI, Data Scientists and Business Analysts.
Your depth of experience and past achievements speak for themselves: you have helped deliver platform-wide data projects with significant impact, developed complex, scalable and well-designed data pipelines, and defined engineering standards that have helped different data teams work efficiently, providing mentoring and technical leadership where necessary. You will work independently at times and within a team at others, and you are a sought-after Data Engineer whom others respect for your expertise and depth of knowledge.
Design, implement and continuously improve the Glovo Data Platform
Build scalable data pipelines using different technologies
Participate in the development of the Data Lake, the Data Warehouse, various data ingestion methods and self-service ETL tools, following architectural best practices
Mentor & share technical expertise with data engineers, data scientists, BI analysts and other technology colleagues
Stay on top of new technologies and industry trends
3+ years of software/data engineering experience
2+ years of experience with Python and Spark
Professional experience building complex ETLs/data pipelines
Working experience with Amazon Web Services / Google Cloud Platform
Experience with task orchestration tools (Airflow, Luigi)
Cloud Data Warehousing experience in Redshift or another distributed platform (e.g. Hadoop + Hive/Presto, BigQuery or Snowflake)
Experience in Data Streaming (Spark, Flume, Kafka, Kinesis, Flink, etc.)
Experience importing and transforming data from many third-party APIs
Strong analytical and problem-solving skills
Very good English
Experience working with AWS EMR, AWS Glue, Databricks
Experience with Docker, Kubernetes
Experience building Data Lakes
Experience orchestrating Machine Learning pipelines
Enticing Phantom Shares plan
Attractive Relocation package
Comprehensive Private Health Insurance
Cobee discounts on kindergarten, transportation, and food
Free monthly Glovo credits to spend on our restaurant products (and zero Glovo delivery fee on all Glovo orders!)
Cool perks such as fresh fruit and healthy snacks every day, beers on Fridays, and Culture Days every two months!
Discounted Gym memberships
Flexible working environment
Gas: We work hard with energy and passion for what we do.
Care: We act in the best interest of a sustainable future.
Good vibes: We always see the positive side in every situation and act with fairness and honesty with everyone.
Stay Humble: We embrace mistakes and feedback to learn from them.
Glownership: We roll up our sleeves and get work done no matter our position and level.
If you believe you match these values, we look forward to meeting you!