This document discusses collaborating with data scientists and practicing DataOps. It begins by describing a data scientist's work and expectations. In a data science project, the key steps are data cleaning, analysis, and validation, followed by splitting the data, training the model, and validating it. Developing a data science product requires additional steps: model scaling, updating, deployment, monitoring, logging, and optimization. When practicing DataOps, the document advocates consistent workflows, collaborative modeling, continuous improvement, automated deployment, reproducible results, and quality monitoring. DataOps combines development and operations to continuously deliver high-quality data, bringing together data professionals from a variety of roles. Examples are provided for implementing DataOps in practice using platforms such as Kubeflow and Paperspace. Automated
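The project steps listed above (cleaning, splitting, training, validation) can be sketched as a minimal Python pipeline. This is an illustrative assumption, not code from the document: the toy dataset and the simple least-squares model stand in for whatever data and model a real project would use.

```python
import random

# Hypothetical toy dataset of (feature, label) pairs; purely illustrative.
data = [(x, 2 * x + 1) for x in range(100)]

# Step 1: cleaning -- drop records with missing values (none here; shown for shape).
cleaned = [row for row in data if None not in row]

# Step 2: splitting -- shuffle, then hold out 20% for validation.
random.seed(0)
random.shuffle(cleaned)
split = int(0.8 * len(cleaned))
train, valid = cleaned[:split], cleaned[split:]

# Step 3: training -- fit a least-squares line y = a*x + b to the training set.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Step 4: validation -- mean absolute error on the held-out set.
mae = sum(abs((a * x + b) - y) for x, y in valid) / len(valid)
print(f"slope={a:.2f} intercept={b:.2f} MAE={mae:.4f}")
```

In a DataOps setting, each of these steps would typically run as a separate, automated stage of a pipeline (for example, a Kubeflow pipeline) so that results are reproducible and quality can be monitored between stages.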