Starting a data project by building a data lake from scratch is a lot of work, which makes it expensive and error-prone. Building the technical foundation while under pressure to deliver data use cases encourages a short-term focus, increasing the chance of mistakes that become costly in the long run, such as databases and servers being set up without thorough monitoring.
Using standardized building blocks is a way to solve this problem. This is already common practice in the cloud world, where we see a catalog approach of semi-finished products such as applications, databases, middleware, storage, computing, and network products, giving you the freedom to pick and mix your own custom solution. Why not use this approach for data projects?
Best of both worlds: one size fits all vs. unique and customized
Our Data DevOps team has developed many standard data building blocks. They are tried and tested, based on our experience and lessons learned from numerous data projects. From that experience it became clear that although the data in every project is unique, the process to extract value from it is surprisingly similar. As in a factory, the steps in this process have an optimal sequence. We have divided these steps into well-defined units to create standardized building blocks, enabling us to move fast in a controlled way, as the sketch below illustrates.
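To make the idea concrete, here is a minimal Python sketch of how standardized blocks could be composed into a pipeline with a fixed order. The block names (ingest, validate, transform, load) and the `run_pipeline` helper are hypothetical illustrations, not our actual catalog of building blocks.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A building block is a named, reusable step that takes the pipeline
# context and returns an updated context. The blocks below are
# hypothetical stand-ins, not the real catalog items.
@dataclass
class Block:
    name: str
    run: Callable[[Dict], Dict]

def ingest(ctx: Dict) -> Dict:
    # Stand-in for reading from a real source system.
    ctx["records"] = [{"id": 1, "value": "42"}, {"value": "7"}]
    return ctx

def validate(ctx: Dict) -> Dict:
    # Keep only records that satisfy a basic schema check.
    ctx["records"] = [r for r in ctx["records"] if "id" in r]
    return ctx

def transform(ctx: Dict) -> Dict:
    # Cast string values to integers.
    for r in ctx["records"]:
        r["value"] = int(r["value"])
    return ctx

def load(ctx: Dict) -> Dict:
    # Stand-in for writing to a real sink.
    print(f"loading {len(ctx['records'])} records")
    return ctx

def run_pipeline(blocks: List[Block]) -> Dict:
    """Run the blocks in their fixed, factory-like sequence."""
    ctx: Dict = {}
    for block in blocks:
        ctx = block.run(ctx)
    return ctx

if __name__ == "__main__":
    pipeline = [
        Block("ingest", ingest),
        Block("validate", validate),
        Block("transform", transform),
        Block("load", load),
    ]
    run_pipeline(pipeline)
```

Because each block has the same interface, blocks can be swapped, reused across projects, and tested in isolation, which is what makes the catalog approach fast yet controllable.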