
Beyond dbt fundamentals: The analytics engineering mock project

November 16, 2023 (updated March 5th, 2024)

With everyone having completed the dbt Fundamentals course, the next goal in the Nimbus Academy is passing the dbt Analytics Engineering exam.

In addition to those technical skills, it is also time to tighten up our consultancy soft skills. To train both disciplines on top of our previously tested Snowflake knowledge, we have started our biggest exercise yet: an analytics engineering mock project.


First encounters in the analytics engineering mock project

"Good morning, welcome to Nimbus Intelligence. Can we offer you something to drink?"

With these words to the CTO of our fictional client, our exercise began. The goal of the first meeting: get a clear view of the client's needs, gather enough information to start, and of course leave a positive impression.

As soon as the CTO left the office, the next step was contacting the knowledge partner providing the data. Shortly after we established that line of communication, our fictional client informed us of a new project manager overseeing the project. In short, the number of people involved in the project (and in our personal communication) escalated rapidly.

Keeping all stakeholders informed and knowing which questions to ask which person is a core element of consulting. Even when no one on the list knows the answer, it is important to know who can point you to someone who does.

For the purpose of the mock project, our coach takes on all the roles behind different email aliases. In reality, making sure everyone involved knows everything they need comes with no such luxury.

Transforming the non-ideal

Since the analytics engineering mock project is meant to be a real challenge, the data provided was of course not fully structured and ready for ingestion. All of it was semi-structured, requiring the appropriate skills to extract.
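To give a flavor of what that extraction looks like: Snowflake stores semi-structured data in a VARIANT column, which can be queried with the colon/bracket path syntax and LATERAL FLATTEN. The table and field names below are illustrative assumptions, not the actual project data.

```sql
-- Hypothetical sketch: raw_orders and its JSON fields are assumed names.
-- The raw VARIANT column is unpacked with path syntax and casts,
-- and the nested items array is exploded with LATERAL FLATTEN.
select
    raw:customer:id::number     as customer_id,
    raw:customer:name::varchar  as customer_name,
    item.value:sku::varchar     as sku,
    item.value:quantity::number as quantity
from raw_orders,
     lateral flatten(input => raw:items) as item;
```

Each order row thus fans out into one row per line item, ready for relational joins downstream.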

Joining the different streams of data proved another challenge, since it required taking file metadata into account. All in all, this should be good preparation for real-world tasks.
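One way to keep that metadata available is to load it alongside the data itself. Snowflake exposes pseudocolumns such as METADATA$FILENAME and METADATA$FILE_ROW_NUMBER when reading from a stage; the stage name and target columns below are assumptions for illustration.

```sql
-- Sketch, assuming the files land in an external stage called @landing_stage.
-- The file name and row number are captured next to the payload, so later
-- models can join streams that originate from the same source files.
copy into raw_orders (order_payload, source_file, source_row)
from (
    select
        $1,                        -- the JSON payload itself
        metadata$filename,         -- which file this row came from
        metadata$file_row_number   -- position within that file
    from @landing_stage
)
file_format = (type = 'json');
```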

With the source data sorted out and consolidated, we could start on the actual transformation into analytics-ready aggregations. While the CTO sketched out some preliminary tables, more needs will most likely surface once the visualization team joins the fray.
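In dbt terms, one of those preliminary tables might look like the model below. The staging model name and its columns are hypothetical; the point is the shape: a mart model that references an upstream staging model via `ref` and aggregates it.

```sql
-- models/marts/fct_daily_orders.sql
-- Illustrative sketch only: stg_orders and its columns are assumed names.
-- Aggregates cleaned order rows into one analytics-ready row per day.
select
    order_date,
    count(*)          as order_count,
    sum(order_amount) as total_amount
from {{ ref('stg_orders') }}
group by order_date
```

Using `ref` keeps the dependency graph explicit, so dbt builds the staging model before this mart.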

The IT department already made a brief appearance for connection purposes. The coming weeks hold the promise of more stakeholders and more needs. Writing the actual SQL might just turn out to be the easiest part of it all.

