Data Engineer

Job description

About us

Dashmote is an AI technology scale-up headquartered in Amsterdam, The Netherlands. With the goal of bridging the gap between images and data, we are working to bring AI-based solutions to marketers at clients like Heineken, Unilever, Philips, L’Oreal, and Coca-Cola. We add value in areas such as Location Analysis, Trends Analysis, and Marketing Intelligence.

 

Today, our company has offices in Amsterdam, Shanghai, Vienna, and New York. Over the past few years, our teams have solved a wide variety of cases, such as analyzing beer-drinking and hairstyle trends using our Visual Recognition Tools, and identifying prospective leads by generating intelligence dashboards derived from Visual Content Analysis.

 

Role Description

As our very first Data Engineer in the Amsterdam office, you’ll be responsible for building our data pipeline architecture and working closely with our Data Scientists. We’re looking for an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.


The typical responsibilities include:

  • Create and maintain optimal data pipeline architecture.

  • Assemble large, complex data sets that meet functional / non-functional business requirements.

  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.

  • Create data tools that help our analytics and data science team members build and optimize our product into an innovative industry leader.

Requirements

  • A degree in Computer Science, Informatics, or another quantitative field.

  • A minimum of 1 year of experience in a Data Engineer position.

  • Experience working with Python, SQL, and NoSQL databases (Elasticsearch is a plus), as well as:

    • Technologies: Hadoop, Spark, RabbitMQ, etc.

    • API Deployment: Docker or Serverless

    • AWS cloud services: EC2, EMR, and RDS

  • Experience building and optimizing ‘big data’ pipelines, architectures, and data sets (both structured and unstructured).

  • Ability to build processes supporting data transformation, data structures, metadata, dependency management, and workload management.

  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ stores.

What's in it for you?

  • A great office location right in the city centre of Amsterdam
  • An international team that truly values your contribution
  • An awesome culture of responsibility and the freedom to turn your ambition into reality, regardless of your role and level
  • An exciting work atmosphere with no shortage of snacks, drinks, birthday treats, and social events
  • Monthly team events and weekly Friday company catch-ups and drinks