Data Engineer

Job description

About us

Dashmote is an AI technology scale-up headquartered in Amsterdam, the Netherlands. We connect the offline and online worlds by decoding the digital footprint of locations, allowing companies in the F&B industry to understand the on-trade market and make smarter, data-driven decisions.

Today, our company has offices in Amsterdam, Shanghai, Vienna, and New York. Over the past few years, our teams have solved a wide variety of cases, such as analyzing beer drinking and hairstyle trends by utilizing our Visual Recognition Tools, as well as identifying prospective leads by generating intelligence dashboards.

 

Role Description

As our Data Engineer in the Amsterdam office, you’ll be part of our rockstar Product team of Designers, Data Scientists, Data Engineers, and Data Analysts. You’ll support our data engineering team in building the productized Dashmote Platform to help our clients drive top-line revenue, focusing on scalable solutions that work in a variety of scenarios for our global clients. We’re always looking for new ways to improve the infrastructure behind our ETL processes, so part of your role is to research and propose ideas for doing this.

You will work on epics such as automating our data pipeline to crunch millions of data points on a weekly basis, and helping develop and scale our recommendation engine that lets clients focus on high-value prospects. We are moving from data to insights to real-time actions, and you will play a key role in making that happen. You will build upon our software-as-a-service application, which helps our clients win new business through our unique data and data science capabilities.
 

Main Responsibilities

  • Contribute to the design and development of data components within an agreed timeframe and to a defined quality standard
  • Develop an approach to deploying quality automation at scale
  • Help build data products that work in a variety of scenarios
  • Help the data team build (implement new features and functionality) and maintain (keep existing components working) our data pipeline architecture
  • Drive the implementation of Big Data tools to increase the efficiency of the product team
  • Stay aware of the technological opportunities and threats that could affect our infrastructure

Job requirements

  • Experience working in software/ETL development environments; knowledge of Agile methodologies is highly advantageous
  • Solid understanding of version control tools (e.g. Git)
  • Technical experience that ideally includes some of the following: Python or other OOP languages, SQL, Bash scripting
  • Experience building data pipelines using Hadoop, Apache Spark, Apache Airflow, Docker, and AWS services
  • Experience working with APIs and building microservices is a plus
  • Experience with CI/CD tooling (e.g. Jenkins) is preferred
  • An entrepreneurial mindset, a hands-on attitude, and willingness to work in a fast-paced environment

 

What’s in it for you

  • Great office location right in the city centre of Amsterdam
  • Working within an international team of over 65 people that truly values your contribution
  • A growing company full of opportunities, awarded Best B2B Startup in Europe by Google, McKinsey, and Rocket Internet
  • An awesome culture of responsibility and the freedom to turn your ambition into reality, regardless of your role and level
  • Exciting work atmosphere with no shortage of snacks, drinks, birthday treats, and social events
  • Monthly team events and weekly Friday company catch-ups and drinks

If this sounds like a match, we would love to hear from you!

Please note that to be considered for this role, you will need to hold a valid NL work visa, NL residency, or NL citizenship.