
    Which statistical method can we use for replacing missing values in a categorical feature?

    Mohammed

    Guys, does anyone know the answer?


    Tackling Missing Values in a Dataset

    Learn about the various types of missing values and how to treat them using different approaches to increase the efficacy of your model.

    Nasima Tamboli — Published On October 29, 2021 and Last Modified On July 25th, 2022


    This article was published as a part of the Data Science Blogathon

    Introduction

    The problem of missing values is quite common in many real-life datasets. Missing values can bias the results of machine learning models and/or reduce their accuracy. This article describes what missing data is, how it is represented, and the different reasons why data may be missing. Along with the different categories of missing data, it also details different ways of handling missing values, with examples.

    The following topics are covered in this guide:

    What Is Missing Data (Missing Values)?

    How Missing Data/Values Are Represented In The Dataset?

    Why Is Data Missing From The Dataset?

    Types Of Missing Values

    Missing Completely At Random (MCAR)

    Missing At Random (MAR)

    Missing Not At Random (MNAR)

    Why Do We Need To Care About Handling Missing Data?

    How To Handle Missing Values?

    Checking for missing values

    Figure Out How To Handle The Missing Data

    Deleting the Missing values

    Deleting the Entire Row

    Deleting the Entire Column

    Imputing the Missing Value

    Replacing With Arbitrary Value

    Replacing With Mean

    Replacing With Mode

    Replacing With Median

    Replacing with Previous Value – Forward Fill

    Replacing with Next Value – Backward Fill

    Interpolation

    Imputing Missing Values For Categorical Features

    Impute the Most Frequent Value

    Impute the Value “missing”, which treats it as a Separate Category

    Imputation of Missing Values using the scikit-learn library

    Univariate Approach

    Multivariate Approach

    Nearest Neighbors Imputations (KNNImputer)

    Adding missing indicator to encode “missingness” as a feature

    EndNote

    What is a Missing Value?

    Missing data is defined as values or data that are not stored (or not present) for some variables in the given dataset.

    Below is a sample of the missing data from the Titanic dataset. You can see the columns ‘Age’ and ‘Cabin’ have some missing values.

    [Image 1: Sample of the Titanic dataset with missing values in the 'Age' and 'Cabin' columns]

    How are Missing Values Represented in the Dataset?

    In the dataset, blank cells indicate missing values.

    In Pandas, missing values are usually represented by NaN, which stands for Not a Number.

    [Image 2: First few records of the Titanic dataset, with NaN marking the missing entries]

    The above image shows the first few records of the Titanic dataset extracted and displayed using Pandas.
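
    As a quick, hedged sketch (the file name "titanic.csv" is an assumption, not something taken from the article), this is how you could load the data with Pandas and count the NaN values per column:

    import pandas as pd

    # "titanic.csv" is a hypothetical local copy of the Titanic dataset
    df = pd.read_csv("titanic.csv")

    print(df.head())            # NaN marks the missing entries in 'Age' and 'Cabin'
    print(df.isnull().sum())    # number of missing values per column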

    Why Is Data Missing From The Dataset?

    There can be multiple reasons why certain values are missing from the data.

    The reasons why data is missing from the dataset affect the approach used to handle it, so it's necessary to understand why the data could be missing.

    Some of the reasons are listed below:

    Past data might get corrupted due to improper maintenance.

    Observations are not recorded for certain fields for various reasons; for example, there might be a failure to record the values due to human error.

    The user has not provided the values intentionally.

    Types Of Missing Values

    Formally, missing values are categorized as follows:

    [Image 3: The three categories of missing values: MCAR, MAR, and MNAR]

    Missing Completely At Random (MCAR)

    In MCAR, the probability of data being missing is the same for all the observations.

    In this case, there is no relationship between the missing data and any other values observed or unobserved (the data which is not recorded) within the given dataset.

    That is, missing values are completely independent of other data. There is no pattern.

    In the case of MCAR, the data could be missing due to human error, some system/equipment failure, loss of samples, or some other technical problem while recording the values.

    For example, suppose a library has some overdue books, and some of the overdue values in the computer system are missing. The reason might be a human error, such as the librarian forgetting to type in the values. So, the missing values of overdue books are not related to any other variable/data in the system.

    MCAR should not be assumed by default, as it is a rare case in practice. The advantage of such data is that the statistical analysis remains unbiased.

    Missing At Random (MAR)

    Missing at random (MAR) means that the reason for the missing values can be explained by variables on which you have complete information, as there is some relationship between the missing data and other observed values/data.

    In this case, the data is not missing for all the observations. It is missing only within sub-samples of the data and there is some pattern in the missing values.

    For example, if you check survey data, you may find that all the people have answered their 'Gender', but 'Age' values are mostly missing for people who answered their 'Gender' as 'female'. (The reason being that many female respondents did not want to reveal their age.)

    Source: www.analyticsvidhya.com

    6 Different Ways to Compensate for Missing Values In a Dataset (Data Imputation with examples)


    Photo by Vilmos Heim on Unsplash

    Popular strategies to statistically impute missing values in a dataset.

    Many real-world datasets may contain missing values for various reasons. They are often encoded as NaNs, blanks or other placeholders. Training a model with a dataset that has a lot of missing values can drastically impact the machine learning model's quality. Some algorithms, such as scikit-learn estimators, assume that all values are numerical and hold meaningful information.

    One way to handle this problem is to get rid of the observations that have missing data. However, you will risk losing data points with valuable information. A better strategy would be to impute the missing values. In other words, we need to infer those missing values from the existing part of the data. There are three main types of missing data:

    Missing completely at random (MCAR)

    Missing at random (MAR)

    Not missing at random (NMAR)

    However, in this article, I will focus on 6 popular ways of data imputation for cross-sectional datasets (time-series datasets are a different story).

    1- Do Nothing:

    That's an easy one. You just let the algorithm handle the missing data. Some algorithms can factor in the missing values and learn the best imputation values for the missing data based on the training loss reduction (e.g. XGBoost). Some others have the option to just ignore them (e.g. LightGBM with use_missing=false). However, other algorithms will panic and throw an error complaining about the missing values (e.g. scikit-learn's LinearRegression). In that case, you will need to handle the missing data and clean it before feeding it to the algorithm.
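
    As a minimal sketch of the "do nothing" option (assuming the xgboost package is installed; the missing values below are introduced artificially), XGBoost trains directly on data containing NaN:

    import numpy as np
    import xgboost as xgb
    from sklearn.datasets import fetch_california_housing

    X, y = fetch_california_housing(return_X_y=True)
    X[::10, 0] = np.nan                  # artificially introduce missing values

    # No imputation step: XGBoost learns how to route missing values during training
    model = xgb.XGBRegressor(n_estimators=50)
    model.fit(X, y)
    print(model.predict(X[:5]))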

    Let’s see some other ways to impute the missing values before training:

    Note: All the examples below use the California Housing Dataset from Scikit-learn.

    2- Imputation Using (Mean/Median) Values:

    This works by calculating the mean/median of the non-missing values in a column and then replacing the missing values within each column separately and independently of the others. It can only be used with numeric data.

    Mean Imputation
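
    The original post's code is embedded as a gist that did not survive extraction; a comparable sketch using scikit-learn's SimpleImputer on the California Housing data (missing values simulated for illustration) could look like this:

    import numpy as np
    from sklearn.datasets import fetch_california_housing
    from sklearn.impute import SimpleImputer

    X, y = fetch_california_housing(return_X_y=True)
    X[::7, 2] = np.nan                         # simulate missing values

    # strategy="median" works the same way
    mean_imputer = SimpleImputer(strategy="mean")
    X_imputed = mean_imputer.fit_transform(X)

    print(np.isnan(X_imputed).sum())           # 0, no missing values left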

    Pros:

    Easy and fast.

    Works well with small numerical datasets.

    Cons:

    Doesn't factor in the correlations between features. It only works at the column level.

    Will give poor results on encoded categorical features (do NOT use it on categorical features).

    Not very accurate.

    Doesn’t account for the uncertainty in the imputations.

    Mean/Median Imputation

    3- Imputation Using (Most Frequent) or (Zero/Constant) Values:

    Most Frequent is another statistical strategy to impute missing values, and yes, it works with categorical features (strings or numerical representations) by replacing missing data with the most frequent value within each column.

    Pros:

    Works well with categorical features.

    Cons:

    It also doesn't factor in the correlations between features.

    It can introduce bias in the data.

    Most Frequent Imputation

    Zero or constant imputation, as the name suggests, replaces the missing values with either zero or any constant value you specify.
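
    A hedged sketch of both strategies with scikit-learn's SimpleImputer (the toy data below is made up for illustration):

    import numpy as np
    import pandas as pd
    from sklearn.impute import SimpleImputer

    colors = pd.DataFrame({"color": ["red", "blue", np.nan, "red", np.nan]})
    sizes = pd.DataFrame({"size": [1.0, np.nan, 3.0, np.nan, 5.0]})

    # Most frequent value per column (also works for categorical string features)
    most_frequent = SimpleImputer(strategy="most_frequent")
    print(most_frequent.fit_transform(colors))   # NaNs become "red"

    # Zero / constant imputation
    constant = SimpleImputer(strategy="constant", fill_value=0)
    print(constant.fit_transform(sizes))         # NaNs become 0.0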

    4- Imputation Using k-NN:

    k-nearest neighbours (k-NN) is an algorithm used for simple classification. The algorithm uses 'feature similarity' to predict the values of any new data points. This means that a new point is assigned a value based on how closely it resembles the points in the training set. This can be very useful for predicting missing values: find the k closest neighbours of the observation with missing data and then impute the missing entries based on the non-missing values in that neighbourhood. Let's see some example code using the Impyute library, which provides a simple and easy way to use KNN for imputation:

    KNN Imputation for California Housing Dataset

    How does it work?

    It creates a basic mean impute then uses the resulting complete list to construct a KDTree. Then, it uses the resulting KDTree to compute nearest neighbours (NN). After it finds the k-NNs, it takes the weighted average of them.
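
    The article's gist (not included in this text) uses the Impyute library; as a comparable sketch with a swapped-in tool, scikit-learn's KNNImputer applies the same idea (the data is subsampled here only to keep the example fast):

    import numpy as np
    from sklearn.datasets import fetch_california_housing
    from sklearn.impute import KNNImputer

    X, y = fetch_california_housing(return_X_y=True)
    X = X[:2000]                         # subsample: k-NN imputation is expensive
    X[::5, 1] = np.nan                   # simulate missing values

    # Each missing entry becomes the distance-weighted average of its k nearest neighbours
    knn_imputer = KNNImputer(n_neighbors=5, weights="distance")
    X_imputed = knn_imputer.fit_transform(X)

    print(np.isnan(X_imputed).sum())     # 0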

    Pros:

    Can be much more accurate than the mean, median or most frequent imputation methods (It depends on the dataset).

    Cons:

    Computationally expensive. KNN works by storing the whole training dataset in memory.

    k-NN is quite sensitive to outliers in the data (unlike SVM).

    5- Imputation Using Multivariate Imputation by Chained Equation (MICE)

    Main steps used in multiple imputations [1]
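
    As a hedged sketch of the chained-equations idea, scikit-learn's IterativeImputer (still marked experimental) models each feature that has missing values as a function of the other features and iterates over them:

    import numpy as np
    from sklearn.datasets import fetch_california_housing
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401, activates IterativeImputer
    from sklearn.impute import IterativeImputer

    X, y = fetch_california_housing(return_X_y=True)
    X[::6, 3] = np.nan                   # simulate missing values

    mice_imputer = IterativeImputer(max_iter=10, random_state=0)
    X_imputed = mice_imputer.fit_transform(X)

    print(np.isnan(X_imputed).sum())     # 0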

    Source: towardsdatascience.com

    Handle missing values in Categorical Features

    A useful guide to properly dealing with missing categorical data, with use cases.

    In this post, we will show how to deal with categorical features that have missing values, with several examples compared to each other. We will use the Classified Ads for Cars dataset to predict the price of ads through a simple Linear Regression model.

    To show the various strategies and their relevant pros/cons, we will focus on a particular categorical feature of this dataset: the maker, i.e. the name of the car brand (Toyota, Kia, Ford, BMW, …).

    Post Steps:

    Show Raw Data: let's see what our dataset looks like.

    Deal with missing values in Categorical Features: we will handle the missing values by comparing different techniques.

    1 — Delete the entire column maker.

    2 — Replace missing values with the most frequent values.

    3 — Delete rows with null values.

    4 — Predict values using a Classifier Algorithm (supervised or unsupervised).

    Conclusions!

    Show Raw Data

    Let's start by importing some libraries:

    import pandas as pd
    import numpy as np

    from sklearn.linear_model import LinearRegression

    from sklearn.metrics import mean_squared_error, r2_score

    from sklearn.model_selection import train_test_split

    from scipy import stats

    import matplotlib.pyplot as plt

    import seaborn as sns

    %matplotlib inline

    First of all, let's see what our dataset looks like:

    filename = "cars.csv"

    dtypes = {

    "maker": str, # brand name

    "model": str,

    "mileage": float, # km

    "manufacture_year": float,

    "engine_displacement": float,

    "engine_power": float,

    "body_type": str, # almost never present

    "color_slug": str, # also almost never present

    "stk_year": str,

    "transmission": str, # automatic or manual

    "door_count": str, "seat_count": str,

    "fuel_type": str, # gasoline or diesel

    "date_created": str, # when the ad was scraped

    "date_last_seen": str, # when the ad was last seen

    "price_eur": float} # list price converted to EUR

    df_cleaned = pd.read_csv(filename, dtype=dtypes)

    print(f"Raw data has {df_cleaned.shape[0]} rows, and {df_cleaned.shape[1]} columns")

    Raw data has 3552912 rows, and 16 columns

    After cleaning all columns of missing data and removing non-useful features (the whole procedure is shown on my GitHub), with the exception of maker,

    we will find ourselves in this situation:

    # Missing values

    print(df_cleaned.isna().sum())

    maker 212897

    mileage 0

    manufacture_year 0

    engine_displacement 0

    engine_power 0

    price_eur 0

    fuel_type_diesel 0

    fuel_type_gasoline 0

    ad_duration 0

    seat_str_large 0

    seat_str_medium 0

    seat_str_small 0

    transmission_auto 0

    transmission_man 0

    dtype: int64

    Correlation Matrix

    corr = df_cleaned.corr()

    plt.subplots(figsize=(15,10))

    sns.heatmap(corr, xticklabels=corr.columns,yticklabels=corr.columns, annot=True, )

    Deal with missing values in Categorical Features

    Now we just have to handle the maker feature, and we will do it in four different ways. Then, for each approach, we will create a simple Linear Regression model to predict the price.

    1st Model: Delete the entire column maker.

    2nd Model: Replace missing values with the most frequent values.

    3rd Model: Delete rows with null values.

    4th Model: Predict the missing values with the RandomForestClassifier.

    mse_list = []
    r2_score_list = []

    def remove_outliers(dataframe):
        '''
        return a dataframe without rows that are outliers in any column
        '''
        return dataframe\
            .loc[:, lambda df: df.std() > 0.04]\
            .loc[lambda df: (np.abs(stats.zscore(df)) < 3).all(axis=1)]

    def plot_regression(Y_test, Y_pred):
        '''
        method that plot a linear regression line on a scatter plot
        '''
        x = Y_test
        y = Y_pred
        plt.xlabel("True label")
        plt.ylabel("Predicted label")
        plt.plot(x, y, 'o')
        m, b = np.polyfit(x, y, 1)
        plt.plot(x, m*x + b)

    def train_and_score_regression(df):
        df_new = remove_outliers(df)
        # split the df
        X = df_new.drop("price_eur", axis=1).values
        Y = np.log1p(df_new["price_eur"].values)
        X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1, random_state=0)
        # train and test of the model
        ll = LinearRegression()
        ll.fit(X_train, Y_train)
        Y_pred = ll.predict(X_test)
        mse_list.append(mean_squared_error(Y_test, Y_pred))
        r2_score_list.append(r2_score(Y_test, Y_pred))
        # print the metrics
        print("MSE: " + str(mean_squared_error(Y_test, Y_pred)))

    Source: medium.com
