Deep learning has exposed numerous problems, driven by the sheer ability of the current generation of GPUs to create and run a large volume of models. The exponential growth in compute has opened the door to creating and testing hundreds or thousands of models, rather than building them one by one by hand as was done in the past. These models consume and generate data in both batch and real-time settings, for both training and scoring use cases. As data is enriched and model parameters are explored, there is a real need to version everything, including the data. Many of these issues resemble familiar software engineering problems, but their complexity demands new approaches. We will discuss what exactly these problems are, how they came to be, and how to fix them.
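To make the data-versioning need concrete, here is a minimal sketch of one common approach: content-addressed versioning, where a dataset snapshot is identified by a hash of its contents. The `dataset_version` helper and the sample records are hypothetical, not part of any specific tool; real systems (e.g. DVC or lakeFS) are far more elaborate, but the core idea is the same.

```python
import hashlib
import json

def dataset_version(records):
    """Return a short content hash identifying a dataset snapshot.

    Hypothetical helper: serializes the records deterministically
    (sorted keys) and hashes the bytes, so identical data always
    yields the identical version id.
    """
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:12]

# Two snapshots of identical data produce the same version id,
# so a trained model can be tied to the exact data it saw.
v1 = dataset_version([{"x": 1.0, "y": 0}, {"x": 2.5, "y": 1}])
v2 = dataset_version([{"x": 1.0, "y": 0}, {"x": 2.5, "y": 1}])
assert v1 == v2

# Enriching the data changes the version id, signaling that any
# downstream model needs retraining or re-validation.
v3 = dataset_version([{"x": 1.0, "y": 0, "z": 3}, {"x": 2.5, "y": 1}])
assert v3 != v1
```

Pinning a model run to such a version id gives the same reproducibility for data that commit hashes give for code.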