The Palos Publishing Company


How to Explore Data in Python Using Pandas for EDA

Exploratory Data Analysis (EDA) is a critical first step in data science that involves summarizing the main characteristics of a dataset, often using visual methods. Python’s pandas library is one of the most powerful tools for performing EDA quickly and efficiently. Here’s a comprehensive guide to exploring data in Python using pandas, covering all key techniques needed for an effective EDA workflow.

Importing Necessary Libraries

Begin by importing the essential libraries. In most EDA processes, you’ll use pandas, numpy, and visualization libraries like matplotlib or seaborn.

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```

Loading the Dataset

Pandas provides convenient methods to load data from different sources. For CSV files:

```python
df = pd.read_csv('your_dataset.csv')
```

For Excel files:

```python
df = pd.read_excel('your_dataset.xlsx')
```

To preview the data:

```python
df.head()
```

This displays the first five rows of the dataset and gives a sense of the structure and type of data you’re working with.
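If you want to follow along without a file on disk, you can build a small DataFrame in memory instead. The columns and values below are invented purely for illustration:

```python
import pandas as pd

# A tiny, made-up dataset standing in for a loaded file
df = pd.DataFrame({
    'region': ['North', 'South', 'North', 'East'],
    'sales': [250.0, 310.5, None, 120.0],
    'year': [2021, 2021, 2022, 2022],
})

print(df.head())   # shows all four rows here, since head() defaults to the first five
print(df.shape)    # (4, 3)
```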

Basic Data Exploration

Shape and Size

To understand the dimensions of your dataset:

```python
df.shape  # (rows, columns)
df.size   # total number of elements
```

Column Names and Data Types

Check the list of columns and their data types:

```python
df.columns
df.dtypes
```

Summary Statistics

The pandas describe() method offers a statistical summary:

```python
df.describe()
```

This gives the count, mean, standard deviation, min, quartiles, and max for numerical columns.
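By default describe() covers only numeric columns; passing include brings in categorical ones as well. A minimal sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({
    'price': [10.0, 20.0, 30.0],
    'color': ['red', 'blue', 'red'],
})

numeric_summary = df.describe()                  # count, mean, std, min, quartiles, max
object_summary = df.describe(include='object')   # count, unique, top, freq

print(numeric_summary.loc['mean', 'price'])  # 20.0
print(object_summary.loc['top', 'color'])    # 'red' (the most frequent value)
```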

Info Summary

For a concise summary:

```python
df.info()
```

This shows data types, non-null counts, and memory usage, which is helpful for identifying missing values and data formats.

Identifying Missing Values

To check for missing data:

```python
df.isnull().sum()
```

This reveals how many missing values exist in each column. You can visualize them with seaborn:
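Raw counts can be hard to compare across datasets of different sizes, so the share of missing values per column is often more informative. A quick sketch on invented data:

```python
import pandas as pd

df = pd.DataFrame({
    'a': [1, None, 3, None],
    'b': [1, 2, 3, 4],
})

# isnull() gives booleans; mean() gives the fraction of True values per column
missing_pct = df.isnull().mean() * 100

print(missing_pct['a'])  # 50.0
print(missing_pct['b'])  # 0.0
```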

```python
sns.heatmap(df.isnull(), cbar=False, cmap='viridis')
plt.show()
```

Data Cleaning Techniques

Handling Missing Values

You can either fill or drop missing data:

```python
df.fillna(value, inplace=True)  # replace missing values with a specific value
df.dropna(inplace=True)         # drop rows with missing values
```
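A common pattern is to fill numeric gaps with the median and categorical gaps with the mode. Sketched here on a throwaway DataFrame with invented columns:

```python
import pandas as pd

df = pd.DataFrame({
    'age': [25.0, None, 35.0],
    'city': ['Oslo', None, 'Oslo'],
})

df['age'] = df['age'].fillna(df['age'].median())      # median of 25 and 35 is 30
df['city'] = df['city'].fillna(df['city'].mode()[0])  # most frequent value

print(df['age'].tolist())   # [25.0, 30.0, 35.0]
print(df['city'].tolist())  # ['Oslo', 'Oslo', 'Oslo']
```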

Renaming Columns

Standardize column names for readability:

```python
df.rename(columns={'oldName': 'newName'}, inplace=True)
```

Changing Data Types

Convert columns to appropriate data types:

```python
df['date_column'] = pd.to_datetime(df['date_column'])
df['category_column'] = df['category_column'].astype('category')
```
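On a made-up column of date strings, the conversion looks like this:

```python
import pandas as pd

df = pd.DataFrame({'date_column': ['2021-01-01', '2021-02-01']})
df['date_column'] = pd.to_datetime(df['date_column'])

# After conversion, the .dt accessor exposes date parts
print(df['date_column'].dt.year.tolist())  # [2021, 2021]
```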

Univariate Analysis

Categorical Columns

To explore categorical data:

```python
df['category_column'].value_counts()
df['category_column'].value_counts(normalize=True)  # proportions (multiply by 100 for percentages)
```

Visualize it using a bar plot:

```python
sns.countplot(x='category_column', data=df)
plt.xticks(rotation=45)
plt.show()
```

Numerical Columns

For distribution analysis:

```python
df['numeric_column'].hist(bins=30)
plt.show()

sns.boxplot(y='numeric_column', data=df)
plt.show()
```

Bivariate and Multivariate Analysis

Correlation Matrix

To understand relationships between numeric variables:

```python
correlation = df.corr(numeric_only=True)  # numeric_only avoids errors on non-numeric columns
sns.heatmap(correlation, annot=True, cmap='coolwarm')
plt.show()
```
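On a small invented frame, two perfectly proportional columns correlate at exactly 1.0, while the non-numeric column is simply skipped:

```python
import pandas as pd

df = pd.DataFrame({
    'x': [1, 2, 3],
    'y': [2, 4, 6],        # y is exactly 2 * x
    'label': ['a', 'b', 'c'],
})

correlation = df.corr(numeric_only=True)
print(correlation.loc['x', 'y'])  # 1.0
```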

Scatter Plots

To explore relationships between two numerical features:

```python
sns.scatterplot(x='feature1', y='feature2', data=df)
plt.show()
```

Groupby Aggregations

Summarize data by groups:

```python
df.groupby('category_column')['numeric_column'].mean()
df.groupby(['col1', 'col2']).agg({'col3': ['mean', 'sum']})
```
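With concrete (invented) data, the single-column groupby behaves like this:

```python
import pandas as pd

df = pd.DataFrame({
    'region': ['North', 'North', 'South'],
    'sales': [100.0, 300.0, 50.0],
})

# Mean sales per region: North averages (100 + 300) / 2, South is just 50
mean_sales = df.groupby('region')['sales'].mean()

print(mean_sales['North'])  # 200.0
print(mean_sales['South'])  # 50.0
```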

Outlier Detection

Using IQR (Interquartile Range):

```python
Q1 = df['numeric_column'].quantile(0.25)
Q3 = df['numeric_column'].quantile(0.75)
IQR = Q3 - Q1
outliers = df[(df['numeric_column'] < Q1 - 1.5 * IQR) |
              (df['numeric_column'] > Q3 + 1.5 * IQR)]
```
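Applied to a small made-up series, the IQR rule flags the one obvious extreme value:

```python
import pandas as pd

df = pd.DataFrame({'numeric_column': [10, 12, 11, 13, 12, 100]})

Q1 = df['numeric_column'].quantile(0.25)
Q3 = df['numeric_column'].quantile(0.75)
IQR = Q3 - Q1
mask = (df['numeric_column'] < Q1 - 1.5 * IQR) | (df['numeric_column'] > Q3 + 1.5 * IQR)
outliers = df[mask]

# Only 100 falls outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
print(outliers['numeric_column'].tolist())  # [100]
```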

You can also visualize outliers:

```python
sns.boxplot(x=df['numeric_column'])
plt.show()
```

Feature Engineering

Creating New Columns

You can derive new columns:

```python
df['new_column'] = df['col1'] / df['col2']
```

Binning

Convert continuous data into categorical bins:

```python
df['binned'] = pd.cut(df['numeric_column'], bins=[0, 10, 20, 30],
                      labels=['Low', 'Medium', 'High'])
```
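With sample values, pd.cut assigns each number to its interval. Note that by default the bins are right-inclusive, so a value sitting exactly on an edge (like 10 here) falls into the lower bin, and values outside all bins become NaN:

```python
import pandas as pd

s = pd.Series([5, 10, 15, 25])
binned = pd.cut(s, bins=[0, 10, 20, 30], labels=['Low', 'Medium', 'High'])

print(binned.tolist())  # ['Low', 'Low', 'Medium', 'High']
```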

Encoding Categorical Variables

Convert categories into numbers:

```python
df['category_encoded'] = df['category_column'].astype('category').cat.codes
```

Or use one-hot encoding:

```python
df = pd.get_dummies(df, columns=['category_column'], drop_first=True)
```
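A quick sketch of what get_dummies with drop_first=True produces on made-up data — the alphabetically first category becomes the implicit baseline and gets no column of its own:

```python
import pandas as pd

df = pd.DataFrame({'color': ['red', 'blue', 'red']})
encoded = pd.get_dummies(df, columns=['color'], drop_first=True)

# 'blue' sorts first, so it is dropped as the baseline; only 'color_red' remains
print(list(encoded.columns))  # ['color_red']
```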

Time Series Exploration

If your data involves dates:

```python
df['date'] = pd.to_datetime(df['date'])
df.set_index('date', inplace=True)
df['value'].resample('M').mean().plot()
plt.show()
```

Pivot Tables

Summarize data dynamically:

```python
pd.pivot_table(df, values='sales', index='region', columns='year', aggfunc='sum')
```
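Here is the same pivot on a tiny invented sales table; region/year combinations with no rows come out as NaN:

```python
import pandas as pd

df = pd.DataFrame({
    'region': ['North', 'North', 'South'],
    'year': [2021, 2022, 2021],
    'sales': [100, 150, 80],
})

# Rows become regions, columns become years, cells hold summed sales
table = pd.pivot_table(df, values='sales', index='region', columns='year', aggfunc='sum')

print(table.loc['North', 2021])  # 100
print(table.loc['South', 2021])  # 80
```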

Exporting Cleaned Data

After cleaning and analyzing, export the final dataset:

```python
df.to_csv('cleaned_data.csv', index=False)
```

Final Thoughts

Exploring data using pandas provides a powerful framework for understanding and preparing data for modeling or reporting. With functions ranging from basic inspection to advanced statistical summaries and visualizations, pandas allows for flexible, scalable, and intuitive data manipulation. Combining pandas with visualization libraries such as seaborn or matplotlib enhances the depth of your analysis and uncovers insights that might otherwise remain hidden.
