MLOps Zoomcamp 2024 - Module 1 - Introduction

Source

https://github.com/DataTalksClub/mlops-zoomcamp/tree/829c51c11962e427e62b0afc63d6c4f7d6e34ac0/01-intro

Homework

The goal of this homework is to train a simple model for predicting the duration of a ride - similar to what we did in this module.

Q1. Downloading the data

We'll use the same NYC taxi dataset, but instead of "Green Taxi Trip Records", we'll use "Yellow Taxi Trip Records".

Download the data for January and February 2023.

Read the data for January. How many columns are there?

  • 16
  • 17
  • 18
  • 19

import pandas as pd

# Load the data for January 2023

url_january = 'https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-01.parquet'
df_january = pd.read_parquet(url_january)
df_january

3066766 rows × 19 columns

Answer Q1 : 19
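
As a side note, if you only need the column count you don't have to load all three million rows. A minimal sketch, assuming the file has been downloaded locally and pyarrow (the engine pandas uses for Parquet) is available:

import pyarrow.parquet as pq

# Read only the Parquet schema (file metadata), not the row data
schema = pq.read_schema('yellow_tripdata_2023-01.parquet')
print(len(schema.names))  # expect 19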

Q2. Computing duration

Now let's compute the duration variable. It should contain the duration of a ride in minutes.

What's the standard deviation of the trip durations in January?

  • 32.59
  • 42.59
  • 52.59
  • 62.59

import numpy as np

# The columns 'tpep_pickup_datetime' and 'tpep_dropoff_datetime' are required to calculate the duration

if 'tpep_pickup_datetime' in df_january.columns and 'tpep_dropoff_datetime' in df_january.columns:
    # Compute the duration in minutes
    df_january['duration'] = (df_january['tpep_dropoff_datetime'] - df_january['tpep_pickup_datetime']).dt.total_seconds() / 60

    # Compute the standard deviation of the duration
    std_dev_duration = np.std(df_january['duration'])

    print("Standard Deviation of Trip Durations in January 2023:", std_dev_duration)
else:
    print("The necessary columns are not present in the dataset.")
Standard Deviation of Trip Durations in January 2023: 42.59434429744777

Answer Q2 : 42.59
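
One caveat worth noting: np.std computes the population standard deviation (ddof=0), while the pandas Series.std method defaults to the sample standard deviation (ddof=1). With roughly three million rows the difference is negligible, so both round to 42.59:

# Both agree to several decimal places on a dataset this large
print(np.std(df_january['duration']))   # ddof=0 (population)
print(df_january['duration'].std())     # ddof=1 (sample)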

Q3. Dropping outliers

Next, we need to check the distribution of the duration variable. There are some outliers. Let's remove them and keep only the records where the duration was between 1 and 60 minutes (inclusive).

What fraction of the records is left after you drop the outliers?

  • 90%
  • 92%
  • 95%
  • 98%
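As the prompt says, it helps to look at the distribution before filtering. A quick hedged check, using the duration column already computed in Q2:

# Inspect the tail of the distribution to see where the outliers sit
print(df_january['duration'].describe(percentiles=[0.95, 0.98, 0.99]))
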
# Recompute the duration in minutes so this cell is self-contained
df_january['duration'] = (df_january['tpep_dropoff_datetime'] - df_january['tpep_pickup_datetime']).dt.total_seconds() / 60

# Keep only the records with a duration between 1 and 60 minutes (inclusive);
# .copy() avoids a SettingWithCopyWarning when df_filtered is modified in Q4
df_filtered = df_january[(df_january['duration'] >= 1) & (df_january['duration'] <= 60)].copy()

# Calculate the fraction of records left
fraction_left = len(df_filtered) / len(df_january)

print("Fraction of records left after dropping outliers:", fraction_left)
Fraction of records left after dropping outliers: 0.9812202822125979

Answer Q3 : 98%

Q4. One-hot encoding

Let's apply one-hot encoding to the pickup and dropoff location IDs. We'll use only these two features for our model.

  • Turn the dataframe into a list of dictionaries (remember to re-cast the IDs to strings first; otherwise DictVectorizer will treat them as numeric features rather than one-hot encoding them)
  • Fit a dictionary vectorizer
  • Get a feature matrix from it

What's the dimensionality of this matrix (number of columns)?

  • 2
  • 155
  • 345
  • 515
  • 715
import pandas as pd
from sklearn.feature_extraction import DictVectorizer

# df_filtered comes from Q3 (already filtered and copied)
# Cast the location IDs to strings (via int, in case they were read as floats)
# so that DictVectorizer one-hot encodes them instead of treating them as numbers
df_filtered_str = df_filtered.copy()
df_filtered_str['PULocationID'] = df_filtered_str['PULocationID'].astype(int).astype(str)
df_filtered_str['DOLocationID'] = df_filtered_str['DOLocationID'].astype(int).astype(str)

# Turn the DataFrame into a list of dictionaries
dicts = df_filtered_str[['PULocationID', 'DOLocationID']].to_dict(orient='records')

# Fit a dictionary vectorizer
dv = DictVectorizer()
X = dv.fit_transform(dicts)

# Get the dimensionality of the feature matrix
print("Dimensionality of the feature matrix (number of columns):", X.shape[1])
Dimensionality of the feature matrix (number of columns): 515

Answer Q4 : 515
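
As a sanity check, the 515 columns should equal the number of distinct pickup IDs plus the number of distinct dropoff IDs in the training data, since each column corresponds to one 'PULocationID=<id>' or 'DOLocationID=<id>' pair. A quick way to verify, assuming scikit-learn >= 1.0 for get_feature_names_out:

# Distinct pickup IDs + distinct dropoff IDs should match X.shape[1]
print(df_filtered_str['PULocationID'].nunique()
      + df_filtered_str['DOLocationID'].nunique())  # expect 515

# Peek at a few of the learned one-hot feature names
print(dv.get_feature_names_out()[:3])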

Q5. Training a model

Now let's use the feature matrix from the previous step to train a model.

  • Train a plain linear regression model with default parameters
  • Calculate the RMSE of the model on the training data

What's the RMSE on train?

  • 3.64
  • 7.64
  • 11.64
  • 16.64
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Refit the dictionary vectorizer on the Q4 dicts (redundant in a live
# notebook session, but it keeps this cell self-contained)
dv = DictVectorizer()
X_train = dv.fit_transform(dicts)

# Prepare the target variable
y_train = df_filtered['duration'].values

# Train a linear regression model
lr = LinearRegression()
lr.fit(X_train, y_train)

# Make predictions on the training data
y_pred = lr.predict(X_train)

# Calculate the RMSE on the training data
rmse = np.sqrt(mean_squared_error(y_train, y_pred))
print("RMSE on training data:", rmse)
RMSE on training data: 7.649261937621321

Answer Q5 : 7.64
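
The np.sqrt(mean_squared_error(...)) pattern works across scikit-learn versions. If you happen to be on scikit-learn >= 1.4 (an assumption about your environment), there is a dedicated function that does the same thing:

# Equivalent RMSE computation on scikit-learn >= 1.4
from sklearn.metrics import root_mean_squared_error
print(root_mean_squared_error(y_train, y_pred))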

Q6. Evaluating the model

Now let's apply this model to the validation dataset (February 2023).

What's the RMSE on validation?

  • 3.81
  • 7.81
  • 11.81
  • 16.81
import pandas as pd
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Load the data for January 2023
url_january = 'https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-01.parquet'
df_january = pd.read_parquet(url_january)

# Compute the duration in minutes
df_january['duration'] = (df_january['tpep_dropoff_datetime'] - df_january['tpep_pickup_datetime']).dt.total_seconds() / 60

# Filter the records to keep only those with duration between 1 and 60 minutes (inclusive)
df_filtered = df_january[(df_january['duration'] >= 1) & (df_january['duration'] <= 60)].copy()

# Cast the location IDs to strings so DictVectorizer one-hot encodes them
df_filtered['PULocationID'] = df_filtered['PULocationID'].astype(str)
df_filtered['DOLocationID'] = df_filtered['DOLocationID'].astype(str)

# Turn the DataFrame into a list of dictionaries
dicts = df_filtered[['PULocationID', 'DOLocationID']].to_dict(orient='records')

# Fit a dictionary vectorizer
dv = DictVectorizer()
X_train = dv.fit_transform(dicts)

# Prepare the target variable
y_train = df_filtered['duration'].values

# Train a linear regression model
lr = LinearRegression()
lr.fit(X_train, y_train)

# Load the data for February 2023
url_february = 'https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-02.parquet'
df_february = pd.read_parquet(url_february)

# Compute the duration in minutes
df_february['duration'] = (df_february['tpep_dropoff_datetime'] - df_february['tpep_pickup_datetime']).dt.total_seconds() / 60

# Filter the records to keep only those with duration between 1 and 60 minutes (inclusive)
df_feb_filtered = df_february[(df_february['duration'] >= 1) & (df_february['duration'] <= 60)].copy()

# Cast the location IDs to strings so DictVectorizer one-hot encodes them
df_feb_filtered['PULocationID'] = df_feb_filtered['PULocationID'].astype(str)
df_feb_filtered['DOLocationID'] = df_feb_filtered['DOLocationID'].astype(str)

# Turn the DataFrame into a list of dictionaries
dicts_feb = df_feb_filtered[['PULocationID', 'DOLocationID']].to_dict(orient='records')

# Transform the validation data using the same dictionary vectorizer
X_val = dv.transform(dicts_feb)

# Prepare the target variable for the validation data
y_val = df_feb_filtered['duration'].values

# Make predictions on the validation data
y_pred_val = lr.predict(X_val)

# Calculate the RMSE on the validation data
rmse_val = np.sqrt(mean_squared_error(y_val, y_pred_val))
print("RMSE on validation data:", rmse_val)
RMSE on validation data: 7.811817646307258

Answer Q6 : 7.81
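
Since the January and February preprocessing steps are identical, the Q6 cell can be tidied up with a small helper. A minimal sketch (the read_dataframe name is my own, not from the homework):

def read_dataframe(url):
    """Download one month of Yellow Taxi data and apply the Q2/Q3 preprocessing."""
    df = pd.read_parquet(url)
    df['duration'] = (df['tpep_dropoff_datetime'] - df['tpep_pickup_datetime']).dt.total_seconds() / 60
    df = df[(df['duration'] >= 1) & (df['duration'] <= 60)].copy()
    df['PULocationID'] = df['PULocationID'].astype(str)
    df['DOLocationID'] = df['DOLocationID'].astype(str)
    return df

df_train = read_dataframe('https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-01.parquet')
df_val = read_dataframe('https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-02.parquet')

With this in place, the training and validation matrices come from the same dv, fitted only on the training dicts, exactly as in the cell above.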
