
MLOps Zoomcamp 2024 – Module 2

Module 2 – Experiment Tracking


Source


https://github.com/DataTalksClub/mlops-zoomcamp/tree/main/02-experiment-tracking


Homework


Q1. Install MLflow

To get started with MLflow you’ll need to install the MLflow Python package.

For this, we recommend creating a separate Python environment (for example, a conda environment) and then installing the package there with pip or conda.
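For example, assuming conda is available (the environment name and Python version here are arbitrary choices):

# create and activate an isolated environment, then install MLflow
conda create -n mlops-env python=3.10 -y
conda activate mlops-env
pip install mlflow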

Once you've installed the package, run the command mlflow --version and check the output.

What’s the version that you have?

import mlflow
mlflow.__version__
'2.13.0'

Answer of Q1: 2.13.0



Q2. Download and preprocess the data

We’ll use the Green Taxi Trip Records dataset to predict the duration of each trip.

Download the data for January, February and March 2023 in parquet format from here.

Use the script preprocess_data.py located in the folder homework to preprocess the data.

The script will:

  • load the data from the folder <TAXI_DATA_FOLDER> (the folder where you have downloaded the data),
  • fit a DictVectorizer on the training set (January 2023 data),
  • save the preprocessed datasets and the DictVectorizer to disk.

Your task is to download the datasets and then execute this command:

python preprocess_data.py --raw_data_path <TAXI_DATA_FOLDER> --dest_path ./output

Tip: go to the 02-experiment-tracking/homework/ folder before executing the command and change the value of <TAXI_DATA_FOLDER> to the location where you saved the data.
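For reference, a minimal sketch of the pattern the script follows (toy dicts stand in for the taxi records; the real script reads the parquet files and builds the features):

import pickle
from sklearn.feature_extraction import DictVectorizer

# Toy feature dicts standing in for the taxi records
train_dicts = [{"PULocationID": "43", "DOLocationID": "151", "trip_distance": 1.0}]
val_dicts = [{"PULocationID": "43", "DOLocationID": "239", "trip_distance": 2.5}]

dv = DictVectorizer()
X_train = dv.fit_transform(train_dicts)  # fit on the training (January) data only
X_val = dv.transform(val_dicts)          # reuse the same vocabulary for validation

# The script pickles the datasets and the fitted vectorizer,
# which is why dv.pkl appears alongside train.pkl, val.pkl and test.pkl
with open("dv.pkl", "wb") as f_out:
    pickle.dump(dv, f_out)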

How many files were saved to OUTPUT_FOLDER?

  • 1
  • 3
  • 4
  • 7
!python preprocess_data.py --raw_data_path ~/zoomcamp/myproject/data --dest_path ./output
import os
os.listdir(os.getcwd() + "/output")
['dv.pkl', 'val.pkl', 'test.pkl', 'train.pkl']

Answer of Q2: 4



Q3. Train a model with autolog

We will train a RandomForestRegressor (from Scikit-Learn) on the taxi dataset.

We have prepared the training script train.py for this exercise, which can be also found in the folder homework.

The script will:

  • load the datasets produced by the previous step,
  • train the model on the training set,
  • calculate the RMSE score on the validation set.

Your task is to modify the script to enable autologging with MLflow, execute the script, and then launch the MLflow UI to check that the experiment run was properly tracked.

Tip 1: don’t forget to wrap the training code in a with mlflow.start_run(): block, as we showed in the videos.

Tip 2: don’t modify the hyperparameters of the model to make sure that the training will finish quickly.
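A minimal sketch of the autologging change, using toy data in place of the pickled taxi datasets (the real script loads train.pkl and val.pkl):

import mlflow
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

mlflow.sklearn.autolog()  # records params (including min_samples_split), metrics and the model

X_train, y_train = make_regression(n_samples=200, n_features=5, random_state=0)  # stand-in data

with mlflow.start_run():
    rf = RandomForestRegressor(max_depth=10, random_state=0)
    rf.fit(X_train, y_train)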

What is the value of the min_samples_split parameter?

  • 2
  • 4
  • 8
  • 10
!python train.py --data_path "/home/hduser/zoomcamp/myproject/mlops/homework/02/output/"


The script leaves the model's hyperparameters untouched, so min_samples_split stays at the scikit-learn default, which autologging records.

Answer of Q3: 2



Q4. Launch the tracking server locally

Now we want to manage the entire lifecycle of our ML model. In this step, you’ll need to launch a tracking server. This way we will also have access to the model registry.

Your task is to:

  • launch the tracking server on your local machine,
  • select a SQLite db for the backend store and a folder called artifacts for the artifacts store.

You should keep the tracking server running to work on the next two exercises that use the server.

In addition to backend-store-uri, what else do you need to pass to properly configure the server?

  • default-artifact-root
  • serve-artifacts
  • artifacts-only
  • artifacts-destination

Answer of Q4: default-artifact-root
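Putting it together, the server can be launched like this (the SQLite file name and artifacts folder are example values):

mlflow server \
    --backend-store-uri sqlite:///mlflow.db \
    --default-artifact-root ./artifacts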



Q5. Tune model hyperparameters

Now let’s try to reduce the validation error by tuning the hyperparameters of the RandomForestRegressor using hyperopt. We have prepared the script hpo.py for this exercise.

Your task is to modify the script hpo.py so that the validation RMSE is logged to the tracking server for each run of the hyperparameter optimization (you will need to add a few lines of code to the objective function), and then run the script without passing any parameters. A sketch of those lines follows below.
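A minimal sketch of the logging to add, with toy data standing in for the pickled train/val splits (the metric name "rmse" is my choice; hpo.py's variable names may differ):

import mlflow
from hyperopt import STATUS_OK
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Toy stand-ins for the datasets loaded from output/
X_train, y_train = make_regression(n_samples=200, n_features=5, random_state=0)
X_val, y_val = make_regression(n_samples=50, n_features=5, random_state=1)

def objective(params):
    with mlflow.start_run():
        mlflow.log_params(params)                # hyperparameters sampled by hyperopt
        rf = RandomForestRegressor(**params)
        rf.fit(X_train, y_train)
        y_pred = rf.predict(X_val)
        rmse = mean_squared_error(y_val, y_pred) ** 0.5
        mlflow.log_metric("rmse", rmse)          # validation RMSE for this run
    return {"loss": rmse, "status": STATUS_OK}

# One direct call to show the shape of the result
print(objective({"n_estimators": 10, "max_depth": 5, "random_state": 42}))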

After that, open the MLflow UI and explore the runs from the experiment called random-forest-hyperopt to answer the question below.

Note: Don’t use autologging for this exercise.

The idea is to just log the information that you need to answer the question below, including:

  • the list of hyperparameters that are passed to the objective function during the optimization,
  • the RMSE obtained on the validation set (February 2023 data).

What's the best validation RMSE that you got?

  • 4.817
  • 5.335
  • 5.818
  • 6.336
!python hpo.py --data_path "/home/hduser/zoomcamp/myproject/mlops/homework/02/output/"
100%|██████████| 15/15 [00:46<00:00,  3.13s/trial, best loss: 5.335419588556921]

Answer of Q5: 5.335



Q6. Promote the best model to the model registry

The results from the hyperparameter optimization are quite good, so we can assume that we are ready to test some of these models in production. In this exercise, you'll promote the best model to the model registry.

We have prepared a script called register_model.py, which will check the results from the previous step and select the top 5 runs. After that, it will calculate the RMSE of those models on the test set (March 2023 data) and save the results to a new experiment called random-forest-best-models.

Your task is to update the script register_model.py so that it selects the model with the lowest RMSE on the test set and registers it to the model registry.

Tip 1: you can use the method search_runs from the MlflowClient to get the model with the lowest RMSE.

Tip 2: to register the model you can use the method mlflow.register_model; you will need to pass the right model_uri in the form of a string that looks like "runs:/<RUN_ID>/model", and the name of the model (make sure to choose a good one!).
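A hedged sketch of the selection and registration step (the metric key test_rmse and the registry name are assumptions; adjust them to what register_model.py actually logs):

import mlflow
from mlflow.entities import ViewType
from mlflow.tracking import MlflowClient

mlflow.set_tracking_uri("http://127.0.0.1:5000")  # the server launched in Q4
client = MlflowClient()

# Find the single run with the lowest test RMSE in the best-models experiment
experiment = client.get_experiment_by_name("random-forest-best-models")
best_run = client.search_runs(
    experiment_ids=[experiment.experiment_id],
    run_view_type=ViewType.ACTIVE_ONLY,
    max_results=1,
    order_by=["metrics.test_rmse ASC"],  # lowest test RMSE first
)[0]

# Register that run's model under a (hypothetical) registry name
mlflow.register_model(
    model_uri=f"runs:/{best_run.info.run_id}/model",
    name="green-taxi-duration-regressor",
)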

What is the test RMSE of the best model?

  • 5.060
  • 5.567
  • 6.061
  • 6.56
!python register_model.py --data_path "/home/hduser/zoomcamp/myproject/mlops/homework/02/output/"


Answer of Q6: 5.567

 
