Analytics Engineer

At Jimdo, our mission is to help small businesses start, grow, and ultimately thrive online. Small businesses face new challenges with very little support or recognition. We know how hard this can be, because we have been in their position. That’s where you can help us—by designing intuitive tools to help small businesses solve complex problems.

With a forward-leaning and self-driven attitude, we continue to find new ways to help our customers get their ideas out into the world. As a team, we run at a steady pace to achieve what we aim for. We learn best by gathering data, trying new things, and sometimes even falling down along the way. It’s the lessons we learn in the process that make us better problem-solvers for small business owners.

If you’re motivated by our mission and excited to roll up your sleeves, try new things, learn from mistakes, and make a difference to small businesses around the world, we would love to work with you.

Location

We are happy to invite you to work with us in our office in Hamburg or fully remote from any location. Whichever working location you choose, we will make sure you get a proper onboarding (virtual or in person) and that you are fully equipped to become part of the team from day one.
Regarding candidates who want to relocate to Hamburg: due to COVID, our offices are temporarily closed. For the time being, we will invite you to start working with us from your current location. Once the borders reopen and the embassies accept visa applications again, we will initiate the relocation process to Hamburg. Until then, you will be invited to work with us remotely on a temporary contract.

Your Mission

As an Analytics Engineer within the Analytics Engineering & Modelling Team, you will help shape our long-term Data Platform architecture and strategy to make our data and data services usable and accessible for the whole organization.

You’ll be the bridge between the Data Engineers and Data Analysts, transforming and shaping data so that Analysts can work with it every day. The Data Engineers make data from new internal and external sources available, and it will be your responsibility to transform this data into well-structured tables that Analysts can query.

You’ll work closely with the Data Engineers and all other Analysts in the company to understand what they want to achieve and how to create data models or table structures that benefit everybody. Right now our ETL process runs in Python and most of our models are based on a rule set implemented in SQL, but we are open to taking this to the next level.
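As a rough illustration of the kind of work described above (the table names, columns, and business rules here are hypothetical, not Jimdo's actual models), a small Python sketch of turning raw events into a tidy, analyst-friendly table might look like this:

```python
import sqlite3

# Hypothetical raw data, as the Data Engineers might land it in the DWH:
# one row per subscription event, not yet convenient for Analysts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_subscription_events (
        customer_id TEXT,
        event_type  TEXT,   -- 'started' or 'cancelled'
        event_date  TEXT
    );
    INSERT INTO raw_subscription_events VALUES
        ('c1', 'started',   '2021-01-05'),
        ('c1', 'cancelled', '2021-03-10'),
        ('c2', 'started',   '2021-02-01');
""")

# The "model": one well-structured row per customer, derived with plain SQL,
# ready to be queried directly by Analysts.
conn.execute("""
    CREATE TABLE dim_customer_subscription AS
    SELECT
        customer_id,
        MIN(CASE WHEN event_type = 'started'   THEN event_date END) AS started_at,
        MIN(CASE WHEN event_type = 'cancelled' THEN event_date END) AS cancelled_at
    FROM raw_subscription_events
    GROUP BY customer_id;
""")

rows = conn.execute(
    "SELECT customer_id, started_at, cancelled_at "
    "FROM dim_customer_subscription ORDER BY customer_id"
).fetchall()
print(rows)
# → [('c1', '2021-01-05', '2021-03-10'), ('c2', '2021-02-01', None)]
```

In production this transformation would run against Redshift inside the Python ETL process rather than SQLite; the sketch only shows the shape of the raw-to-model step.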

We work with AWS Redshift, Tableau, Python, and GitHub.

Responsibilities

  • Own our Business Logic (Subscriptions, Payments, Lifetime Value, Marketing Attribution, Churn, …) and make sure that it is aligned with the development of new products
  • Advance the Business Logic from a technical/methodological point of view as well as in terms of establishing a company-wide understanding and trust in it
  • Consider data as a product, i.e. support stakeholders in the design of specific data models and build data value chains for different needs of data consumers
  • Transform raw data, which is brought into our DWH by the engineers, into metrics, models, tables & table structures
  • Be the expert on how our data works, be able to explain & demonstrate this within the Data Team and to the whole company
  • Be a sparring partner with your technical & analytical knowledge for the Data Engineers and Data Analysts and be available for code review (SQL, Python, ...)
  • Build cross-functional relationships between teams inside and outside the data department

Requirements

  • 3+ years of experience as a Data Analyst, Developer, or Engineer
  • Self-starting mentality: you understand what is valuable for the data department and the company and drive your own initiatives forward
  • Experience with various database technologies, cloud platforms (AWS), and frameworks for large-scale data processing (Airflow, Kafka)
  • In SQL, you not only know how to query tables, but also how to create them and when to use certain table structures and schema designs (star schema, 3NF, etc.)
  • Python is your Swiss Army knife: you know how to use it for writing ETL pipelines as well as for exploratory data analysis
  • Very good communication skills and active stakeholder management

Nice to have

  • Familiarity with the specifics of Redshift and Postgres (e.g. indexing, window functions)
  • Familiarity with data visualization tools (e.g. Tableau) to visualize new solutions and their output, or changes/improvements that you’ve applied
  • Basic knowledge of machine learning, or an interest in integrating machine learning solutions into the existing data infrastructure

Olaf will be happy to receive your application in English or German.

We are currently hiring for several positions within our data department. Please have a read through the other positions as well and feel free to let us know about all positions that interest you. One application is enough - we’ll consider you for the others as well.

Jimdo is proud to be an equal opportunity employer. This means that we don’t discriminate based on race or ethnic origin, color, the language(s) you speak, where you (or your parents) are from, or whether or not you consider yourself to have a disability. Neither will your age, gender, gender identity, sexual orientation, religion, beliefs, or political opinions play a part in your application with us. We’re a diverse team in so many ways, and we love it that way.