China GDP Estimation
Objective: -
In China, Gross Domestic Product is divided into three sectors: Primary, Secondary, and Tertiary. The Primary sector includes Farming, Forestry, Animal Husbandry, and Fishery, and accounts for around 9 percent of GDP. The Secondary sector, which includes Industry (40 percent of GDP) and Construction (9 percent of GDP), is the most important. The Tertiary sector accounts for the remaining 44 percent of total output and consists of Wholesale and Retail Trades; Transport, Storage, and Post; Financial Intermediation; Real Estate; Hotel and Catering Services; and Others.
Beijing set an ambitious target of around 5.5% growth for 2022, but Covid controls and the real estate slump weighed heavily, and China's GDP grew by only 3% that year. The Chinese government is widely expected to announce a GDP growth target of around or above 5% for the following year.
The goal of this challenge is to use this data to train a machine learning model that estimates China's GDP.
Step 1: Import all the required libraries
- Pandas : In computer programming, pandas is a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series.
- Sklearn : Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy. The library is built on top of SciPy (Scientific Python), which must be installed before you can use scikit-learn.
- Pickle : The Python pickle module is used for serializing and de-serializing a Python object structure. Pickling is a way to convert a Python object (list, dict, etc.) into a character stream. The idea is that this character stream contains all the information necessary to reconstruct the object in another Python script.
- Seaborn : Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.
- Matplotlib : Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK.
#Loading libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
import pickle
import warnings

import sklearn
import sklearn.linear_model
from sklearn import preprocessing
from sklearn.preprocessing import OneHotEncoder, scale
from sklearn.linear_model import LinearRegression, Ridge, RidgeCV, Lasso, LassoCV
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_absolute_percentage_error, mean_squared_error, r2_score
from sklearn.decomposition import PCA

warnings.filterwarnings('ignore')
Step 2: Read the dataset and basic details of the dataset
Goal:- In this step we are going to read the dataset, view it, and analyse basic details such as the total number of rows and columns, the column data types, and whether any new columns need to be created.
In this stage we read our problem dataset and have a look at it.
#loading the dataset
try:
    df = pd.read_csv('C:/Users/YAJENDRA/Documents/final notebooks/China GDP Estimation/Data/data.csv') #Path for the file
    print('Data read done successfully...')
except (FileNotFoundError, IOError):
    print("Wrong file or file path")
Data read done successfully...

# To view the content inside the dataset we can use the head() method, which returns a specified number of rows from the top.
# The head() method returns the first 5 rows if a number is not specified.
df.head()
Dataset: -
Attribute Information:
- 2021: GDP of 2021
GDP of previous years:
- 2000
- 2001
- 2002
- 2003
- 2004
- 2005
- 2006
- 2007
- 2008
- 2009
- 2010
- 2011
- 2012
- 2013
- 2014
- 2015
- 2016
- 2017
- 2018
- 2019
- 2020
Step 3: Data Preprocessing
Why need of Data Preprocessing?
Preprocessing data is an important step for data analysis. The following are some benefits of preprocessing data:
- It improves accuracy and reliability. Preprocessing data removes missing or inconsistent data values resulting from human or computer error, which can improve the accuracy and quality of a dataset, making it more reliable.
- It makes data consistent. When collecting data, it’s possible to have data duplicates, and discarding them during preprocessing can ensure the data values for analysis are consistent, which helps produce accurate results.
- It increases the data’s algorithm readability. Preprocessing enhances the data’s quality and makes it easier for machine learning algorithms to read, use, and interpret it.
After we read the data, we can look at the data using:
# count the total number of rows and columns.
print ('The train data has {0} rows and {1} columns'.format(df.shape[0],df.shape[1]))
The train data has 1147 rows and 22 columns
By analysing the problem statement and the dataset, we learn that the target variable is the "2021" column, which is continuous and holds the expected China GDP for 2021.
df['2021'].value_counts()
9.900000e+01 3
6.000000e+00 3
3.550000e+12 2
4.390000e+13 2
6.504820e+09 2
..
9.771014e+08 1
6.240003e+00 1
7.434911e+00 1
7.788113e+00 1
1.210000e+12 1
Name: 2021, Length: 402, dtype: int64
The value_counts() method counts how many times each distinct value occurs in a particular column.
df.shape
(1147, 22)
The df.shape attribute shows the shape of the dataset.
We can identify that there are 1147 rows and 22 columns.
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1147 entries, 0 to 1146
Data columns (total 22 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 2000 848 non-null float64
1 2001 798 non-null float64
2 2002 823 non-null float64
3 2003 802 non-null float64
4 2004 788 non-null float64
5 2005 856 non-null float64
6 2006 857 non-null float64
7 2007 876 non-null float64
8 2008 891 non-null float64
9 2009 866 non-null float64
10 2010 946 non-null float64
11 2011 887 non-null float64
12 2012 882 non-null float64
13 2013 910 non-null float64
14 2014 930 non-null float64
15 2015 920 non-null float64
16 2016 908 non-null float64
17 2017 906 non-null float64
18 2018 910 non-null float64
19 2019 862 non-null float64
20 2020 718 non-null float64
21 2021 418 non-null float64
dtypes: float64(22)
memory usage: 197.3 KB
The df.info() method prints information about a DataFrame including the index dtype and columns, non-null values and memory usage.
df.iloc[1]
2000 100.000000
2001 106.779613
2002 130.654928
2003 175.851816
2004 238.089429
2005 305.755950
2006 388.830793
2007 489.743703
2008 574.107455
2009 482.181996
2010 633.119986
2011 761.780958
2012 822.106475
2013 886.427932
2014 939.913645
2015 912.295598
2016 841.736255
2017 908.233850
2018 997.859175
2019 1002.980301
2020 1039.402054
2021 NaN
Name: 1, dtype: float64
df.iloc[ ] is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a boolean array. The iloc property gets, or sets, the value(s) of the specified indexes.
Data Type Check for every column
Why data type check is required?
A data type check helps us understand what type of variables our dataset contains and whether to keep each variable or not. If the dataset contains continuous data, then float and integer variables are the useful ones, and if we have to classify values then categorical variables are the useful ones.
objects_cols = ['object']
objects_lst = list(df.select_dtypes(include=objects_cols).columns)

print("Total number of categorical columns are ", len(objects_lst))
print("Their names are as follows: ", objects_lst)
Total number of categorical columns are 0
Their names are as follows: []

int64_cols = ['int64']
int64_lst = list(df.select_dtypes(include=int64_cols).columns)

print("Total number of numerical columns are ", len(int64_lst))
print("Their names are as follows: ", int64_lst)
Total number of numerical columns are 0
Their names are as follows: []

float64_cols = ['float64']
float64_lst = list(df.select_dtypes(include=float64_cols).columns)

print("Total number of numerical columns are ", len(float64_lst))
print("Their names are as follows: ", float64_lst)
Total number of numerical columns are 22
Their names are as follows: ['2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019', '2020', '2021']
Step 3 Insights: -
- We have a total of 22 features and all are of float type.
After this step we have to calculate various evaluation parameters which will help us in cleaning and analysing the data more accurately.
All the columns show the GDP of different years.
Step 4: Descriptive Analysis
Goal/Purpose: Finding the data distribution of the features. Visualization helps to understand data and also to explain the data to another person.
Things we are going to do in this step:
- Mean
- Median
- Mode
- Standard Deviation
- Variance
- Null Values
- NaN Values
- Min value
- Max value
- Count Value
- Quartiles
- Correlation
- Skewness
df.describe()
The df.describe() method returns a description of the data in the DataFrame. If the DataFrame contains numerical data, the description includes, for each column: count — the number of non-empty values; mean — the average (mean) value; plus the standard deviation, minimum, quartiles, and maximum.
Measure the variability of data of the dataset
Variability describes how far apart data points lie from each other and from the center of a distribution.
1. Standard Deviation
The standard deviation is the average amount of variability in your dataset.
It tells you, on average, how far each data point lies from the mean. The larger the standard deviation, the more variable the dataset is; if the standard deviation is zero, there is no variability at all and that column carries no useful information.
So it helps in understanding the measurements when the data is spread out: the more spread out the data, the greater its standard deviation.
df.std()
2000 1.575812e+12
2001 1.789790e+12
2002 1.967696e+12
2003 2.255225e+12
2004 2.587085e+12
2005 2.819561e+12
2006 3.266597e+12
2007 3.838339e+12
2008 4.363238e+12
2009 5.072114e+12
2010 5.641567e+12
2011 6.719934e+12
2012 7.500660e+12
2013 8.189493e+12
2014 8.921376e+12
2015 9.971653e+12
2016 1.106504e+13
2017 1.216853e+13
2018 1.323740e+13
2019 1.469762e+13
2020 1.710652e+13
2021 1.724152e+13
dtype: float64
We can also understand the standard deviation using the below function.
def std_cal(df,float64_lst):
    cols = ['normal_value', 'zero_value']
    zero_value = 0
    normal_value = 0

    for value in float64_lst:
        rs = round(df[value].std(),6)
        if rs > 0:
            normal_value = normal_value + 1
        elif rs == 0:
            zero_value = zero_value + 1

    std_total_df = pd.DataFrame([[normal_value, zero_value]], columns=cols)
    return std_total_df

float64_cols = ['float64']
float64_lst = list(df.select_dtypes(include=float64_cols).columns)
std_cal(df,float64_lst)
zero_value -> the count of columns with zero standard deviation, i.e. no variability; such columns carry no useful information.
2. Variance
The variance is the average of squared deviations from the mean. A deviation from the mean is how far a score lies from the mean.
Variance is the square of the standard deviation. This means that the units of variance are much larger than those of a typical value of a data set.
Why do we used Variance ?
By squaring the deviations we get a non-negative quantity, i.e. dispersion cannot be negative. The presence of variance is very important in your dataset because it allows the model to learn about the different patterns hidden in the data.
df.var()
2000 2.483183e+24
2001 3.203349e+24
2002 3.871828e+24
2003 5.086040e+24
2004 6.693010e+24
2005 7.949923e+24
2006 1.067066e+25
2007 1.473284e+25
2008 1.903785e+25
2009 2.572634e+25
2010 3.182728e+25
2011 4.515752e+25
2012 5.625990e+25
2013 6.706780e+25
2014 7.959096e+25
2015 9.943387e+25
2016 1.224352e+26
2017 1.480730e+26
2018 1.752289e+26
2019 2.160202e+26
2020 2.926330e+26
2021 2.972700e+26
dtype: float64
We can also understand the Variance using the below function.
zero_cols = []

def var_cal(df,float64_lst):
    cols = ['normal_value', 'zero_value']
    zero_value = 0
    normal_value = 0

    for value in float64_lst:
        rs = round(df[value].var(),6)
        if rs > 0:
            normal_value = normal_value + 1
        elif rs == 0:
            zero_value = zero_value + 1
            zero_cols.append(value)

    var_total_df = pd.DataFrame([[normal_value, zero_value]], columns=cols)
    return var_total_df

var_cal(df, float64_lst)
zero_value -> Zero variance means that there is no difference in the data values, which means that they are all the same.
Measure central tendency
A measure of central tendency is a single value that attempts to describe a set of data by identifying the central position within that set of data. As such, measures of central tendency are sometimes called measures of central location. They are also classed as summary statistics.
Mean — The average value. Median — The mid point value. Mode — The most common value.
1. Mean
The mean is the arithmetic average, and it is probably the measure of central tendency that you are most familiar with.
Why do we calculate mean?
The mean is used to summarize a data set. It is a measure of the center of a data set.
df.mean()
2000 3.099808e+11
2001 3.638318e+11
2002 3.930431e+11
2003 4.601087e+11
2004 5.475127e+11
2005 5.810408e+11
2006 6.778633e+11
2007 8.082403e+11
2008 9.077508e+11
2009 1.045704e+12
2010 1.112190e+12
2011 1.366392e+12
2012 1.513430e+12
2013 1.613578e+12
2014 1.731923e+12
2015 1.917144e+12
2016 2.053211e+12
2017 2.257665e+12
2018 2.441673e+12
2019 2.761748e+12
2020 3.456720e+12
2021 4.913748e+12
dtype: float64
We can also understand the mean using the below function.
def mean_cal(df,int64_lst):
    cols = ['normal_value', 'zero_value']
    zero_value = 0
    normal_value = 0

    for value in int64_lst:
        rs = round(df[value].mean(),6)
        if rs > 0:
            normal_value = normal_value + 1
        elif rs == 0:
            zero_value = zero_value + 1

    mean_total_df = pd.DataFrame([[normal_value, zero_value]], columns=cols)
    return mean_total_df

mean_cal(df, float64_lst)
zero_value -> the count of columns whose mean is zero; such a column is not useful in any way and should be dropped.
2. Median
The median is the middle value. It is the value that splits the dataset in half.The median of a dataset is the value that, assuming the dataset is ordered from smallest to largest, falls in the middle. If there are an even number of values in a dataset, the middle two values are the median.
Why do we calculate median ?
By comparing the median to the mean, you can get an idea of the distribution of a dataset. When the mean and the median are the same, the dataset is more or less evenly distributed from the lowest to highest values.
df.median()
2000 56.280055
2001 59.115259
2002 44.450568
2003 54.471594
2004 58.812338
2005 52.433750
2006 55.991184
2007 55.502575
2008 53.500000
2009 54.434185
2010 51.256158
2011 54.055000
2012 51.165057
2013 46.841200
2014 49.167473
2015 50.239331
2016 50.868845
2017 51.646583
2018 46.639925
2019 53.957457
2020 68.857160
2021 82.185542
dtype: float64
We can also understand the median using the below function.
def median_cal(df,int64_lst):
    cols = ['normal_value', 'zero_value']
    zero_value = 0
    normal_value = 0

    for value in int64_lst:
        rs = round(df[value].median(),6)   # use the median here, not the mean
        if rs > 0:
            normal_value = normal_value + 1
        elif rs == 0:
            zero_value = zero_value + 1

    median_total_df = pd.DataFrame([[normal_value, zero_value]], columns=cols)
    return median_total_df

median_cal(df, float64_lst)
zero_value -> the count of columns whose median is zero; such a column is not useful in any way and should be dropped.
3. Mode
The mode is the value that occurs the most frequently in your data set. On a bar chart, the mode is the highest bar. If the data have multiple values that are tied for occurring the most frequently, you have a multimodal distribution. If no value repeats, the data do not have a mode.
Why do we calculate mode ?
The mode can be used to summarize categorical variables, while the mean and median can be calculated only for numeric variables. This is the main advantage of the mode as a measure of central tendency. It’s also useful for discrete variables and for continuous variables when they are expressed as intervals.
df.mode()

def mode_cal(df,int64_lst):
    cols = ['normal_value', 'zero_value', 'string_value']
    zero_value = 0
    normal_value = 0
    string_value = 0

    for value in int64_lst:
        rs = df[value].mode()[0]
        if isinstance(rs, str):
            string_value = string_value + 1
        else:
            if rs > 0:
                normal_value = normal_value + 1
            elif rs == 0:
                zero_value = zero_value + 1

    mode_total_df = pd.DataFrame([[normal_value, zero_value, string_value]], columns=cols)
    return mode_total_df

mode_cal(df, list(df.columns))
zero_value -> the count of columns whose mode is zero; such a column is not useful in any way and should be dropped.
Null and Nan values
- Null Values
A null value in a relational database is used when the value in a column is unknown or missing. A null is neither an empty string (for character or datetime data types) nor a zero value (for numeric data types).
df.isnull().sum()
2000 299
2001 349
2002 324
2003 345
2004 359
2005 291
2006 290
2007 271
2008 256
2009 281
2010 201
2011 260
2012 265
2013 237
2014 217
2015 227
2016 239
2017 241
2018 237
2019 285
2020 429
2021 729
dtype: int64
We notice that there are some null values in our dataset.
- Nan Values
NaN, standing for Not a Number, is a member of a numeric data type that can be interpreted as a value that is undefined or unrepresentable, especially in floating-point arithmetic.
df.isna().sum()
2000 299
2001 349
2002 324
2003 345
2004 359
2005 291
2006 290
2007 271
2008 256
2009 281
2010 201
2011 260
2012 265
2013 237
2014 217
2015 227
2016 239
2017 241
2018 237
2019 285
2020 429
2021 729
dtype: int64
We notice that there are some NaN values in our dataset, so we replace them with the overall average GDP across all years.
Another way to handle null and NaN values is to drop them with df.dropna(inplace=True), or to replace them with df.fillna(value, inplace=True).
s = df.mean().sum()
t = len(df.columns.tolist())
a = s/t
We replace the null values with the overall mean of the dataframe.
df.fillna(a, inplace=True)

df.isnull().sum()
2000 0
2001 0
2002 0
2003 0
2004 0
2005 0
2006 0
2007 0
2008 0
2009 0
2010 0
2011 0
2012 0
2013 0
2014 0
2015 0
2016 0
2017 0
2018 0
2019 0
2020 0
2021 0
dtype: int64
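As an aside (not what this notebook does), missing values could also be filled with each column's own mean rather than a single global average, which respects the very different scales of the year columns. A minimal sketch of that alternative, for illustration only, could look like this:
# Hypothetical alternative, for illustration only: fill each column's missing values
# with that column's own mean (above, df has already been filled with one global average).
col_means = df.mean()           # per-column means
df_alt = df.fillna(col_means)   # pandas aligns the Series to the columns by name
df_alt.isnull().sum().sum()     # 0 once every column is filled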
Count of unique occurrences of every value in all categorical columns
objects_cols = ['object']
objects_lst = list(df.select_dtypes(include=objects_cols).columns)

for value in objects_lst:
    print(f"{value:{10}} {df[value].value_counts()}")
- Categorical data are variables that contain label values rather than numeric values.The number of possible values is often limited to a fixed set.
- Use Label Encoder to label categorical data. Label Encoder is part of the scikit-learn library in Python and is used to convert categorical data, or text data, into numbers, which our predictive models can better understand.
Label Encoding refers to converting the labels into a numeric form so as to convert them into the machine-readable form. Machine learning algorithms can then decide in a better way how those labels must be operated. It is an important pre-processing step for the structured dataset in supervised learning.
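Since this dataset has no categorical columns, label encoding is not actually needed here; purely for illustration, a minimal sketch of LabelEncoder on a hypothetical text column could look like this:
# Illustration only: this dataset has no object columns to encode
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
sample = pd.Series(['low', 'medium', 'high', 'medium'])    # hypothetical categorical column
encoded = le.fit_transform(sample)                          # -> array([1, 2, 0, 2])
print(dict(zip(le.classes_, le.transform(le.classes_))))    # label -> integer mapping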
Skewness
Skewness is a measure of the asymmetry of a distribution. A distribution is asymmetrical when its left and right side are not mirror images. A distribution can have right (or positive), left (or negative), or zero skewness
Why do we calculate Skewness ?
Skewness gives the direction of the outliers: if the distribution is right-skewed, most of the outliers are on the right side of the distribution, while if it is left-skewed, most of the outliers are on the left side.
Below is the function to calculate skewness.
def right_nor_left(df, int64_lst):
    temp_skewness = ['column', 'skewness_value', 'skewness (+ve or -ve)']
    temp_skewness_values = []

    temp_total = ["positive (+ve) skewed", "normal distribution", "negative (-ve) skewed"]
    positive = 0
    negative = 0
    normal = 0

    for value in int64_lst:
        rs = round(df[value].skew(),4)
        if rs > 0:
            temp_skewness_values.append([value, rs, "positive (+ve) skewed"])
            positive = positive + 1
        elif rs == 0:
            temp_skewness_values.append([value, rs, "normal distribution"])
            normal = normal + 1
        elif rs < 0:
            temp_skewness_values.append([value, rs, "negative (-ve) skewed"])
            negative = negative + 1

    skewness_df = pd.DataFrame(temp_skewness_values, columns=temp_skewness)
    skewness_total_df = pd.DataFrame([[positive, normal, negative]], columns=temp_total)
    return skewness_df, skewness_total_df

float64_cols = ['float64']
float64_lst_col = list(df.select_dtypes(include=float64_cols).columns)

skew_df, skew_total_df = right_nor_left(df, float64_lst_col)
skew_df
skew_total_df
We notice from the above results the following details:
- 22 columns are positively skewed
Step 4 Insights: -
With the statistical analysis we have found that the data are heavily skewed: all the columns are positively skewed, and none of them has zero variance.
Statistical analysis is a little difficult to digest at a glance, so to make it more understandable we will visualize the data, which will help us understand it more easily.
Why are we calculating all these metrics?
Mean, median, mode, variance, and standard deviation are basic but very important statistical concepts used in data science. Almost every machine learning algorithm relies on them during the data preprocessing steps. They are part of descriptive statistics, which we use to describe and understand the features in machine learning.
Why is China GDP prediction important?
GDP is important because it gives information about the size of the economy and how an economy is performing. The growth rate of real GDP is often used as an indicator of the general health of the economy. In broad terms, an increase in real GDP is interpreted as a sign that the economy is doing well. When real GDP is growing strongly, employment is likely to be increasing as companies hire more workers for their factories and people have more money in their pockets. When GDP is shrinking, as it did in many countries during the recent global economic crisis, employment often declines. In some cases, GDP may be growing, but not fast enough to create a sufficient number of jobs for those seeking them. But real GDP growth does move in cycles over time. Economies are sometimes in periods of boom, and sometimes in periods of slow growth or even recession
Step 5: Data Exploration
Goal/Purpose:
Graphs we are going to develop in this step
- Histogram of all columns to check the distribution of the columns
- Distplot or distribution plot of all columns to check the variation in the data distribution
- Heatmap to calculate correlation within feature variables
- Boxplot to find out outlier in the feature columns
- Scatter Plot to show the relation between variables
1. Histogram
A histogram is a bar graph-like representation of data that buckets a range of classes into columns along the horizontal x-axis.The vertical y-axis represents the number count or percentage of occurrences in the data for each column
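The histogram figures themselves are not reproduced in this write-up; a minimal sketch that would plot one histogram per year column (using the df loaded above; the figure size and bin count are arbitrary choices) is:
# One histogram per column to inspect each year's distribution
df.hist(figsize=(20, 15), bins=30)
plt.tight_layout()
plt.show()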
Histogram Insight: -
Histogram helps in identifying the following:
- View the shape of your data set’s distribution to look for outliers or other significant data points.
- Determine whether something significant has occurred from one time period to another.
From the above histogram we observe that the China GDP series is above 800 from 2010 to 2018.
Why Histogram?
It is used to illustrate the major features of the distribution of the data in a convenient form. It is also useful when dealing with large data sets (greater than 100 observations). It can help detect any unusual observations (outliers) or any gaps in the data.
From the above graphical representation we can identify the outliers as the values that fall far beyond the main range of the data.
We can also see that the values trail off towards the right side, which indicates positive skewness, while values concentrated around the centre would indicate a roughly normal (zero-skew) distribution.
2. Distplot
A Distplot or distribution plot, depicts the variation in the data distribution. Seaborn Distplot represents the overall distribution of continuous data variables. The Seaborn module along with the Matplotlib module is used to depict the distplot with different variations in it
num = [f for f in df.columns if df.dtypes[f] != 'object']
nd = pd.melt(df, value_vars=num)
n1 = sns.FacetGrid(nd, col='variable', col_wrap=4, sharex=False, sharey=False)
n1 = n1.map(sns.distplot, 'value')
n1
<seaborn.axisgrid.FacetGrid at 0x208efa58a50>
Distplot Insights: -
Above are the distribution plots, which confirm the skewness statistics of the data; the results are:
- 22 columns are positively skewed.
- One of these columns is 2021, our target variable, which is also positively skewed. In that case we could cube-root transform this variable so that it becomes closer to normally distributed; a normally distributed (or close to normal) target variable helps in better modelling the relationship between the target and the independent variables.
Why Distplot?
Skewness is demonstrated on a bell curve when data points are not distributed symmetrically to the left and right sides of the median on a bell curve. If the bell curve is shifted to the left or the right, it is said to be skewed.
We can observe that the bulk of each distribution sits on the left with a long right tail, which indicates positive skewness. As all the columns are skewed in the same (positive) direction, we do not apply any scaling here.
Let’s proceed and check the distribution of the target variable.
#+ve skewed
df['2021'].skew()
8.05929384926432
The target variable is positively skewed. A normally distributed (or close to normal) target variable helps in better modelling the relationship between the target and the independent variables.
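The cube-root transform mentioned above is not actually applied later in this notebook; a minimal sketch of what it could look like, using np.cbrt and cubing the predictions to map them back, is:
# Hypothetical cube-root transform of the target to reduce positive skew
# (not used in the modelling steps below).
y_cbrt = np.cbrt(df['2021'])
print(df['2021'].skew(), y_cbrt.skew())   # skewness before and after the transform

# A model trained on y_cbrt would have its predictions mapped back with: y_pred = y_pred_cbrt ** 3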
3. Heatmap
A heatmap (or heat map) is a graphical representation of data where values are depicted by color. Heatmaps make it easy to visualize complex data and understand it at a glance.
Correlation — A positive correlation is a relationship between two variables in which both move in the same direction: one variable increases as the other increases, or one decreases while the other decreases.
Correlation can have a value:
- 1 is a perfect positive correlation
- 0 is no correlation (the values don’t seem linked at all)
- -1 is a perfect negative correlation
#correlation plot
sns.set(rc = {'figure.figsize':(15,15)})
corr = df.corr().abs()
sns.heatmap(corr,annot=True)
plt.show()
Notice the last column on the right side of this map: it shows the correlation of every variable against 2021. As you can see, some variables appear strongly correlated with the target variable. A numeric correlation score will help us read the graph more precisely.
print (corr['2021'].sort_values(ascending=False)[:15], '\n') #top 15 values
print ('----------------------------------------')
print (corr['2021'].sort_values(ascending=False)[-5:]) #last 5 values
2021 1.000000
2008 0.845227
2007 0.837429
2006 0.825094
2005 0.821808
2009 0.813363
2011 0.812387
2010 0.811999
2004 0.809099
2012 0.802112
2013 0.791025
2003 0.787156
2002 0.781040
2014 0.780462
2001 0.776434
Name: 2021, dtype: float64
----------------------------------------
2018 0.736423
2017 0.733195
2016 0.730789
2019 0.727580
2020 0.696640
Name: 2021, dtype: float64
Here we see that the 2008 feature is roughly 85% correlated (0.845) with the target variable.
corr
Heatmap insights: -
As we know, it is recommended to avoid correlated features in your dataset. Indeed, a group of highly correlated features will not bring additional information (or just very few), but will increase the complexity of the algorithm, hence increasing the risk of errors.
Why Heatmap?
Heatmaps are used to show relationships between two variables, one plotted on each axis. By observing how cell colors change across each axis, you can observe if there are any patterns in value for one or both variables.
We would drop any column whose correlation with the target is close to zero; here every year is fairly strongly correlated with 2021, so no column needs to be dropped on that basis.
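No columns are actually dropped in the rest of this notebook; purely as a sketch, a low-correlation filter (with an arbitrary 0.5 threshold) could look like this:
# Hypothetical filter: drop years whose absolute correlation with 2021 is below 0.5
# (with this dataset the list is empty, so nothing would be removed).
low_corr_cols = corr['2021'][corr['2021'] < 0.5].index.tolist()
print("Weakly correlated columns:", low_corr_cols)
df_reduced = df.drop(columns=low_corr_cols)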
4. Boxplot
A boxplot is a standardized way of displaying the distribution of data based on a five number summary (“minimum”, first quartile [Q1], median, third quartile [Q3] and “maximum”).
Basically, to find the outlier in a dataset/column.
features = df.columns.tolist()
features.remove('2021')

sns.boxplot(data=df)
<Axes: >
The dark points are known as Outliers. Outliers are those data points that are significantly different from the rest of the dataset. They are often abnormal observations that skew the data distribution, and arise due to inconsistent data entry, or erroneous observations.
Boxplot Insights: -
- Sometimes outliers may be an error in the data and should be removed. In this case these points are correct readings, yet they are so different from the other points that they appear to be incorrect.
- The best way to decide whether to remove them or not is to train models with and without these data points and compare their validation accuracy.
- So we will keep it unchanged as it won’t affect our model.
Here, we can see that most of the variables possess outlier values. It would take us days if we start treating these outlier values one by one. Hence, for now we’ll leave them as is and let our algorithm deal with them. As we know, tree-based algorithms are usually robust to outliers.
Why Boxplot?
Box plots are used to show distributions of numeric data values, especially when you want to compare them between multiple groups. They are built to provide high-level information at a glance, offering general information about a group of data’s symmetry, skew, variance, and outliers.
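Although the outliers are left untouched here, it can still be useful to quantify how many there are per year; a minimal sketch using the common 1.5 x IQR rule is:
# Count IQR-based outliers per column (informational only; nothing is removed)
Q1 = df.quantile(0.25)
Q3 = df.quantile(0.75)
IQR = Q3 - Q1
outlier_mask = (df < (Q1 - 1.5 * IQR)) | (df > (Q3 + 1.5 * IQR))
print(outlier_mask.sum())   # number of flagged values in each year column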
In the next step we will divide our cleaned data into training data and testing data.
Step 6: Data Preparation
Goal:-
Tasks we are going to perform in this step:
- Separate the target variable and the feature columns into two different dataframes and check the shape of the dataset for validation purposes.
- Split dataset into train and test dataset.
- Scaling on train dataset.
1. First we separate the target variable and the feature columns into two different dataframes and check the shape of the dataset for validation purposes.
# Separate target and feature column in X and y variable
target = '2021'
# X will be the features
X = df.drop(target,axis=1)
#y will be the target variable
y = df[target]
y holds the target variable and X holds all the other variables.
Here, in the China GDP estimation task, 2021 is the target variable.
X.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1147 entries, 0 to 1146
Data columns (total 21 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 2000 1147 non-null float64
1 2001 1147 non-null float64
2 2002 1147 non-null float64
3 2003 1147 non-null float64
4 2004 1147 non-null float64
5 2005 1147 non-null float64
6 2006 1147 non-null float64
7 2007 1147 non-null float64
8 2008 1147 non-null float64
9 2009 1147 non-null float64
10 2010 1147 non-null float64
11 2011 1147 non-null float64
12 2012 1147 non-null float64
13 2013 1147 non-null float64
14 2014 1147 non-null float64
15 2015 1147 non-null float64
16 2016 1147 non-null float64
17 2017 1147 non-null float64
18 2018 1147 non-null float64
19 2019 1147 non-null float64
20 2020 1147 non-null float64
dtypes: float64(21)
memory usage: 188.3 KB

y
0 2.995781e+01
1 1.510659e+12
2 1.510659e+12
3 1.510659e+12
4 3.360000e+12
...
1142 1.510659e+12
1143 1.510659e+12
1144 1.510659e+12
1145 1.510659e+12
1146 1.510659e+12
Name: 2021, Length: 1147, dtype: float64

# Check the shape of X and y variable
X.shape, y.shape
((1147, 21), (1147,))

# Reshape the y variable
y = y.values.reshape(-1,1)

# Again check the shape of X and y variable
X.shape, y.shape
((1147, 21), (1147, 1))
2. Splitting the dataset into training and testing data.
Here we split our dataset 80/20: 80% of the data goes into the training part and 20% into the testing part.
# Split the X and y into X_train, X_test, y_train, y_test variables with 80-20% split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Check shape of the split variables
X_train.shape, X_test.shape, y_train.shape, y_test.shape
((917, 21), (230, 21), (917, 1), (230, 1))
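The task list above mentions scaling the training data, but no scaler is actually applied in this notebook; a minimal sketch with StandardScaler, fitted on the training split only so that no information leaks from the test set, could look like this:
# Hypothetical scaling step (the models below are trained on the unscaled features)
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)   # fit only on the training data
X_test_scaled = scaler.transform(X_test)         # reuse the training statistics on the test data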
Insights: -
The train test split technique is used to estimate the performance of machine learning algorithms when they make predictions on data not used to train the model. It is a fast and easy procedure to perform, and its results let you compare the performance of machine learning algorithms for your predictive modelling problem. Although simple to use and interpret, there are times when the procedure should not be used, such as when you have a small dataset, or when additional configuration is required, for example for classification with an imbalanced dataset.
In the next step we will train our model on the basis of our training and testing data.
Step 7: Model Training
Goal:
In this step we are going to train our dataset on different regression algorithms. Since our target variable is continuous rather than discrete, regression is the appropriate approach: the outcome (dependent) variable y takes non-discrete values, so we will use regression algorithms.
Algorithms we are going to use in this step
- Linear Regression
- Lasso Regression
- Ridge Regression
- RandomForestRegressor
K-fold cross validation is a procedure used to estimate the skill of the model on new data. There are common tactics that you can use to select the value of k for your dataset. There are commonly used variations on cross-validation, such as stratified and repeated, that are available in scikit-learn
# Define kfold with 10 split
cv = KFold(n_splits=10, shuffle=True, random_state=42)
The goal of cross-validation is to test the model’s ability to predict new data that was not used in estimating it, in order to flag problems like overfitting or selection bias and to give an insight on how the model will generalize to an independent dataset (i.e., an unknown dataset, for instance from a real problem).
1. Linear Regression
Linear regression is one of the easiest and most popular Machine Learning algorithms. It is a statistical method that is used for predictive analysis. Linear regression makes predictions for continuous/real or numeric variables.
Train set cross-validation
#Using Linear Regression Algorithm to the Training Set
from sklearn.linear_model import LinearRegression
li_R = LinearRegression() #Object Creation
li_R.fit(X_train, y_train)
LinearRegression()
#Accuracy check of training data
#Get R2 score
li_R.score(X_train, y_train)
0.867865174293472

#Accuracy of test data
li_R.score(X_test, y_test)
0.7584707489484169

# Getting kfold values
li_scores = -1 * cross_val_score(li_R,
                                 X_train,
                                 y_train,
                                 cv=cv,
                                 scoring='neg_root_mean_squared_error')
li_scores
array([2.78459596e+12, 5.05583163e+12, 5.44009314e+12, 3.49625717e+12,
       4.39666992e+12, 3.60670906e+12, 7.04152406e+12, 3.94044876e+12,
       2.41844192e+12, 2.82870228e+12])

# Mean of the train kfold scores
li_score_train = np.mean(li_scores)
li_score_train
4100927389346.208
Prediction
Now we will perform prediction on the dataset using Linear Regression.
# Predict the values on the X_test dataset
y_predicted = li_R.predict(X_test)
Various metrics are calculated for analysing the predictions.
Because this is a regression problem with a continuous target, classification metrics such as the confusion matrix, classification report, accuracy, precision, recall, and F1 score do not apply here. Instead we evaluate the predictions with regression metrics:
R2 Score :
The R2 score (coefficient of determination) measures how much of the variance in the target variable is explained by the model; the best possible value is 1.0.
Mean Absolute Error (MAE) :
The MAE is the average absolute difference between the predicted and the actual values, expressed in the same units as the target.
Root Mean Squared Error (RMSE) :
The RMSE is the square root of the average squared error; it penalizes large errors more heavily than the MAE.
Mean Absolute Percentage Error (MAPE) :
The MAPE expresses the average absolute error as a fraction of the actual values, which makes it easy to compare across targets of different scales.
Evaluating these evaluation parameters.
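Alongside the R2 score reported below, the other regression metrics can be computed from the same predictions; a minimal sketch (mean_absolute_error is imported here, as it was not part of Step 1; the other metric functions were) is:
# Additional regression metrics for the linear model's test-set predictions
from sklearn.metrics import mean_absolute_error

print("MAE :", mean_absolute_error(y_test, y_predicted))
print("RMSE:", np.sqrt(mean_squared_error(y_test, y_predicted)))
print("MAPE:", mean_absolute_percentage_error(y_test, y_predicted))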
# Evaluating the regressor
# printing the R2 score of the model
from sklearn.metrics import r2_score

li_acc = r2_score(y_test, y_predicted)*100
print("The model used is Linear Regression")
print("R2 Score is: -")
print()
print(li_acc)

The model used is Linear Regression
R2 Score is: -

75.8470748948417
2. Lasso Regression
Lasso regression is also called Penalized regression method. This method is usually used in machine learning for the selection of the subset of variables. It provides greater prediction accuracy as compared to other regression models. Lasso Regularization helps to increase model interpretation.
#Using Lasso Regression
from sklearn import linear_model
la_R = linear_model.Lasso(alpha=0.1)

#looking at the training data
la_R.fit(X_train,y_train)
Lasso(alpha=0.1)
#Accuracy check of training data
la_R.score(X_train, y_train)
0.8497104168177774

# Getting kfold values
la_scores = -1 * cross_val_score(la_R,
                                 X_train,
                                 y_train,
                                 cv=cv,
                                 scoring='neg_root_mean_squared_error')
la_scores
array([3.44616989e+12, 5.97634961e+12, 4.87129556e+12, 4.32134069e+12,
       4.04496269e+12, 4.15798144e+12, 7.88456659e+12, 4.24234282e+12,
       2.87403086e+12, 2.66452878e+12])

# Mean of the train kfold scores
la_score_train = np.mean(la_scores)
la_score_train
4448356892438.384
Prediction
Now we will perform prediction on the dataset using Lasso Regression.
# Predict the values on the X_test dataset
y_predicted = la_R.predict(X_test)
Evaluating all kinds of evaluating parameters.
#Accuracy check of test data
la_acc = r2_score(y_test,y_predicted)*100
print("The model used is Lasso Regression")
print("R2 Score is: -")
print()
print(la_acc)

The model used is Lasso Regression
R2 Score is: -

66.14756351742535
3. Ridge Regression
Ridge regression is used when there are multiple variables that are highly correlated. It helps to prevent overfitting by penalizing the coefficients of the variables. Ridge regression reduces the overfitting by adding a penalty term to the error function that shrinks the size of the coefficients.
#Using Ridge Regression
from sklearn.linear_model import Ridge
ri_R = Ridge(alpha=1.0)

#looking at the training data
ri_R.fit(X_train,y_train)
Ridge()
#Accuracy check of training data
ri_R.score(X_train, y_train)
0.8678651742934719

# Getting kfold values
ri_scores = -1 * cross_val_score(ri_R,
                                 X_train,
                                 y_train,
                                 cv=cv,
                                 scoring='neg_root_mean_squared_error')
ri_scores
array([2.78459596e+12, 5.05583163e+12, 5.44009314e+12, 3.49625717e+12,
       4.39666992e+12, 3.60670906e+12, 7.04152406e+12, 3.94044876e+12,
       2.41844192e+12, 2.82870228e+12])

# Mean of the train kfold scores
ri_score_train = np.mean(ri_scores)
ri_score_train
4100927389346.1523
Prediction
Now we will perform prediction on the dataset using Ridge Regression.
# Predict the values on the X_test dataset
y_predicted = ri_R.predict(X_test)
Evaluating all kinds of evaluating parameters.
#Accuracy check of test data
ri_acc = r2_score(y_test,y_predicted)*100
print("The model used is Ridge Regression")
print("R2 Score is: -")
print()
print(ri_acc)

The model used is Ridge Regression
R2 Score is: -

75.84707489479894
4. RandomForestRegressor
Random Forest Regression algorithms are a class of Machine Learning algorithms that use the combination of multiple random decision trees each trained on a subset of data. The use of multiple trees gives stability to the algorithm and reduces variance. The random forest regression algorithm is a commonly used model due to its ability to work well for large and most kinds of data.
#Using Random Forest Regression Algorithm on the Training Set
from sklearn.ensemble import RandomForestRegressor
rr_R = RandomForestRegressor() #Object Creation
rr_R.fit(X_train, y_train)
RandomForestRegressor()
#Accuracy check of training data
#Get R2 score
rr_R.score(X_train, y_train)
0.986919387254626

#Accuracy of test data
rr_R.score(X_test, y_test)
0.9601424935673899
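The notebook reports only train and test R2 for the random forest; for consistency with the earlier models, the same k-fold procedure could be applied as a sketch:
# Hypothetical k-fold RMSE for the random forest, mirroring the earlier models
rr_scores = -1 * cross_val_score(rr_R,
                                 X_train,
                                 y_train.ravel(),   # flatten to avoid shape warnings
                                 cv=cv,
                                 scoring='neg_root_mean_squared_error')
print(np.mean(rr_scores))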
Prediction
Now we will perform prediction on the dataset using Random Forest Regressor.
# Predict the values on the X_test dataset
y_predicted = rr_R.predict(X_test)

# Evaluating the regressor
# printing the R2 score of the model
print("The model used is RandomForestRegressor")
rr_acc = r2_score(y_test, y_predicted)*100
print("R2 Score is: -")
print()
print(rr_acc)

The model used is RandomForestRegressor
R2 Score is: -

96.01424935673899
Insight: -
cal_metric=pd.DataFrame([li_acc,la_acc,ri_acc,rr_acc],columns=["Score in percentage"])
cal_metric.index=['Linear Regression',
'Lasso Regression',
'Ridge Regression',
'Random Forest Regressor']
cal_metric
- As you can see, with our Random Forest Regressor (0.9601 or 96.01%) we are getting the best result.
- So we are going to save our model trained with the Random Forest Regressor algorithm.
Step 8: Save Model
Goal:- In this step we are going to save our models as pickle files.
import pickle
pickle.dump(li_R , open('china_gdp_estimate_li.pkl', 'wb'))
pickle.dump(la_R , open('china_gdp_estimate_la.pkl', 'wb'))
pickle.dump(ri_R , open('china_gdp_estimate_ri.pkl', 'wb'))
pickle.dump(rr_R , open('china_gdp_estimate_rr.pkl', 'wb'))

def model_prediction(features):
    pickled_model = pickle.load(open('china_gdp_estimate_rr.pkl', 'rb'))
    ch = str(pickled_model.predict(features)[0])
    return str(f'The china estimate GDP is {ch}')
We can test our model by giving our own parameters or features to predict.
Y_2000 = 1510658920328.1545
Y_2001 = 1510658920328.1545
Y_2002 = 1510658920328.1545
Y_2003 = 1510658920328.1545
Y_2004 = 1510658920328.1545
Y_2005 = 1510658920328.1545
Y_2006 = 1510658920328.1545
Y_2007 = 1510658920328.1545
Y_2008 = 1510658920328.1545
Y_2009 = 1510658920328.1545
Y_2010 = 1510658920328.1545
Y_2011 = 1510658920328.1545
Y_2012 = 1510658920328.1545
Y_2013 = 1510658920328.1545
Y_2014 = 84.57353
Y_2015 = 84.57353
Y_2016 = 84.57353
Y_2017 = 84.57353
Y_2018 = 73.57353
Y_2019 = 73.57353
Y_2020 = 1510658920328.1545

model_prediction([[Y_2000, Y_2001, Y_2002, Y_2003, Y_2004, Y_2005, Y_2006, Y_2007, Y_2008, Y_2009, Y_2010, Y_2011, Y_2012, Y_2013, Y_2014, Y_2015, Y_2016, Y_2017, Y_2018, Y_2019, Y_2020]])
'The china estimate GDP is 1510658920328.156'
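Because the random forest was fitted on a DataFrame with named columns, passing a plain nested list can trigger a feature-name warning in recent scikit-learn versions; a minimal sketch that keeps the column names (reusing the X defined earlier) is:
# Optional: wrap the features in a one-row DataFrame so the column names match the training data
sample = pd.DataFrame([[Y_2000, Y_2001, Y_2002, Y_2003, Y_2004, Y_2005, Y_2006, Y_2007,
                        Y_2008, Y_2009, Y_2010, Y_2011, Y_2012, Y_2013, Y_2014, Y_2015,
                        Y_2016, Y_2017, Y_2018, Y_2019, Y_2020]], columns=X.columns)
print(model_prediction(sample))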
Conclusion
After studying the problem statement we have built an efficient model to solve the problem. The above model helps in predicting an estimate of China's GDP. The R2 score of the best model, the Random Forest Regressor, is about 96.01% on the test set.
Checkout whole project code here (github repo).
🚀 Unlock Your Dream Job with HiDevs Community!
🔍 Seeking the perfect job? HiDevs Community is your gateway to career success in the tech industry. Explore free expert courses, job-seeking support, and career transformation tips.
💼 We offer an upskill program in Gen AI, Data Science, Machine Learning, and assist startups in adopting Gen AI at minimal development costs.
🆓 Best of all, everything we offer is completely free! We are dedicated to helping society.
Book free of cost 1:1 mentorship on any topic of your choice — topmate
✨ We dedicate over 30 minutes to each applicant’s resume, LinkedIn profile, mock interview, and upskill program. If you’d like our guidance, check out our services here
💡 Join us now, and turbocharge your career!
Deepak Chawla LinkedIn
Vijendra Singh LinkedIn
Yajendra Prajapati LinkedIn
YouTube Channel
Instagram Page
HiDevs LinkedIn
Project Youtube Link