Let’s zoom in on an example: the StreetWise project. Together with TNO, we created a scenario-based database of real-life traffic situations by mining many thousands of miles of driving data with complex algorithms – a database companies can use to test software for self-driving cars. Because the performance of these cars depends directly on data quality, mistakes in gathering, processing, or analyzing this data could have disastrous consequences.
Especially in the TNO project, where the data is used to train self-driving cars, it is clear that there can be no compromise on quality. That is why we used the data governance and security building block when architecting the platform. By design, the sensor data and mined scenarios are stored separately from the client-facing generated test cases. In addition, the engineers and data scientists have read/write rights on the environment they need for their work, but those rights are restricted so they cannot accidentally break the platform. This lets the team focus on their core responsibilities without being limited or distracted by data security.
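The separation described above can be sketched in a few lines. This is a minimal, hypothetical model – the zone names, roles, and permission mapping are illustrative, not the platform's actual configuration – but it shows the idea: storage zones are kept apart, and each role only ever receives the rights it needs.

```python
from enum import Enum, auto

class Zone(Enum):
    # Hypothetical storage zones, mirroring the separation described above:
    # raw sensor data and mined scenarios live apart from the
    # client-facing generated test cases.
    SENSOR_DATA = auto()
    MINED_SCENARIOS = auto()
    CLIENT_TEST_CASES = auto()

# Hypothetical role-to-permission mapping: engineers and data scientists
# get read/write on their working zones only, never on the client-facing zone.
PERMISSIONS = {
    "data_scientist": {
        Zone.SENSOR_DATA: {"read", "write"},
        Zone.MINED_SCENARIOS: {"read", "write"},
    },
    "platform_admin": {zone: {"read", "write"} for zone in Zone},
}

def can_access(role: str, zone: Zone, action: str) -> bool:
    """Return True only if the role has the requested right on the zone."""
    return action in PERMISSIONS.get(role, {}).get(zone, set())
```

With rights defined this way, a data scientist can write to the scenario zone but any attempt to touch the client-facing test cases is denied by default rather than by exception.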
Other examples of these building blocks in the standardized data factory are an Azure-based database that we can deploy almost instantly; data ingestion pipelines that ensure a smooth transition from raw data to use-case-specific data; and an Analysis Services server for fast access to the data sets. All these building blocks come with monitoring and performance controls. And because we use a software-defined infrastructure, we keep the building blocks constantly up to date with the latest insights. Since in DevOps the things you build (development) are also your responsibility to support (operations), you had better get it right from the start.
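The software-defined approach can be illustrated with a small sketch. The block names, kinds, and version numbers below are invented for illustration – in practice this role is played by infrastructure-as-code tooling such as ARM/Bicep or Terraform templates – but it captures the principle: each building block is a versioned, declarative definition with monitoring on by default, and deployment is just applying that definition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BuildingBlock:
    # Hypothetical declarative spec for one building block of the data factory.
    name: str
    kind: str
    version: str
    monitoring: bool = True  # every block ships with monitoring controls

def deploy(blocks: list[BuildingBlock]) -> dict[str, str]:
    """Simulate an idempotent deployment from declarative definitions:
    the same input always yields the same deployed state."""
    return {b.name: f"{b.kind}@{b.version}" for b in blocks}

# Illustrative factory definition (names and versions are assumptions).
factory = [
    BuildingBlock("scenario-db", "azure-sql", "2.1"),
    BuildingBlock("ingest", "data-pipeline", "1.4"),
    BuildingBlock("cubes", "analysis-services", "3.0"),
]
```

Updating a building block then means bumping its version in the definition and redeploying, which is what keeps the whole factory current without manual drift.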