# Python Machine Learning Notebooks (Tutorial style)

Authored and maintained by Dr. Tirthajyoti Sarkar, Fremont, CA. Please feel free to add me on LinkedIn here.

# Requirements

- Python 3.5
- NumPy (`pip install numpy`)
- Pandas (`pip install pandas`)
- Scikit-learn (`pip install scikit-learn`)
- SciPy (`pip install scipy`)
- Statsmodels (`pip install statsmodels`)
- Matplotlib (`pip install matplotlib`)
- Seaborn (`pip install seaborn`)
- SymPy (`pip install sympy`)

You can start with this article that I wrote for Heartbeat magazine (on the Medium platform):

“Some Essential Hacks and Tricks for Machine Learning with Python”

# Essential tutorial-type notebooks on Pandas, NumPy, and visualizations

Jupyter notebooks covering a wide range of functions and operations on the topics of NumPy, Pandas, Seaborn, Matplotlib, etc.

- Detailed Numpy operations
- Detailed Pandas operations
- Numpy and Pandas quick basics operations
- Basics of visualization with Matplotlib and Seaborn
- Advanced Pandas operations
- How to read various data sources
- PDF reading and table processing demo
- How fast are Numpy operations compared to pure Python code? (Read my article on Medium related to this topic)
- Fast reading from Numpy using .npy file format (Read my article on Medium on this topic)
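To illustrate the last point, here is a minimal timing sketch (file names and array size are arbitrary) comparing a plain-text read with the binary `.npy` format:

```python
import time
import numpy as np

# Generate a large array of random numbers
arr = np.random.rand(100_000)

# Save it both as plain text and in NumPy's binary .npy format
np.savetxt("data.txt", arr)
np.save("data.npy", arr)

# Time reading the text file back
t0 = time.time()
arr_txt = np.loadtxt("data.txt")
t_txt = time.time() - t0

# Time reading the .npy file back
t0 = time.time()
arr_npy = np.load("data.npy")
t_npy = time.time() - t0

print(f"loadtxt: {t_txt:.4f} s, np.load: {t_npy:.4f} s")
```

On most systems `np.load` is orders of magnitude faster, because `.npy` stores the raw binary buffer plus a small header instead of parsing text.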

# Complexity and Learning curve analysis

Complexity and learning curve analyses are an essential part of the visual analytics that a data scientist must perform on the available dataset to compare the merits of various ML algorithms.

**Learning curve**: Graphs that compare the performance of a model on training and testing data over a varying number of training instances. We should generally see performance improve as the number of training points increases.
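As a sketch of how such a curve can be computed, here is a minimal example using Scikit-learn's `learning_curve` utility; the dataset and model are illustrative, not the ones used in the notebooks:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve

# Synthetic regression dataset
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=42)

# Evaluate the model at increasing training-set sizes, with 5-fold cross-validation
train_sizes, train_scores, val_scores = learning_curve(
    LinearRegression(), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="r2",
)

# Average across the CV folds; validation score should generally
# improve as more training data becomes available
print(train_sizes)
print(train_scores.mean(axis=1))
print(val_scores.mean(axis=1))
```

Plotting the two mean-score arrays against `train_sizes` (e.g. with Matplotlib) gives the learning curve.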

**Complexity curve**: Graphs that show the model performance over the training and validation sets for varying degrees of model complexity (e.g., the degree of the polynomial for linear regression, the number of layers or neurons for neural networks, or the number of estimator trees for a boosting algorithm or random forest).
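Scikit-learn's `validation_curve` utility computes exactly this. A minimal sketch, using tree depth as the complexity knob (the dataset and model are illustrative):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression dataset
X, y = make_regression(n_samples=400, n_features=8, noise=15.0, random_state=0)

# Score the model over a range of complexities (maximum tree depth here)
depths = np.arange(1, 11)
train_scores, val_scores = validation_curve(
    DecisionTreeRegressor(random_state=0), X, y,
    param_name="max_depth", param_range=depths, cv=5, scoring="r2",
)

# Training score keeps rising with depth, while the validation score
# flattens out or drops -- the signature of overfitting
print(train_scores.mean(axis=1))
print(val_scores.mean(axis=1))
```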

- Complexity and learning curves with the Lending Club dataset (Here is the Notebook).
- Complexity and learning curves with a synthetic dataset using the `Hastie function` from Scikit-learn (Here is the Notebook).
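For reference, the Hastie et al. synthetic problem is available in Scikit-learn as `make_hastie_10_2`; a minimal sketch of generating it and fitting a boosted model (the classifier settings are illustrative):

```python
from sklearn.datasets import make_hastie_10_2
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# The Hastie et al. problem: 10 standard-Gaussian features, with
# y = +1 if sum(x_i^2) exceeds the chi-squared(10) median, else -1
X, y = make_hastie_10_2(n_samples=2000, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1
)

clf = GradientBoostingClassifier(n_estimators=100, random_state=1)
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")
```

Because the decision boundary is a sphere in feature space, linear models fail on this dataset while tree ensembles do well, which makes it a handy benchmark for complexity-curve studies.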

# Random data generation using symbolic expressions

- Simple script to generate random polynomial expression/function (Here is the Notebook).
- How to use the SymPy package to generate random datasets using symbolic mathematical expressions (Here is the Notebook). Also, here is the Python script if anybody wants to use it directly in their project.
- Here is my article on Medium on this topic: Random regression and classification problem generation with symbolic expression
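The core trick can be sketched in a few lines: build a symbolic expression with SymPy, compile it to a fast NumPy function with `lambdify`, then sample noisy data from it. The expression below is fixed rather than randomly generated, to keep the sketch short:

```python
import numpy as np
import sympy as sp

# A symbolic expression (illustrative; the notebook generates these randomly)
x = sp.Symbol("x")
expr = 2 * x**2 - 3 * x + 5

# Compile the symbolic expression into a fast NumPy-callable function
f = sp.lambdify(x, expr, modules="numpy")

# Sample random inputs and add Gaussian noise to form a regression dataset
rng = np.random.default_rng(42)
X = rng.uniform(-5, 5, size=200)
y = f(X) + rng.normal(scale=2.0, size=200)

print(X.shape, y.shape)
```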

# Simple deployment examples (serving ML models on web API)

- Serving a linear regression model through a simple HTTP server interface. The user requests predictions by executing a Python script. Uses `Flask` and `Gunicorn`.
- Serving a recurrent neural network (RNN) through an HTTP webpage, complete with a web form, where users can input parameters and click a button to generate text based on the pre-trained RNN model. Uses `Flask`, `Jinja`, `Keras`/`TensorFlow`, and `WTForms`.
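The general pattern behind both examples can be sketched as follows; the endpoint name, JSON layout, and toy model here are illustrative, not the repo's actual API:

```python
# app.py -- a minimal sketch of serving a trained model over HTTP with Flask
import numpy as np
from flask import Flask, jsonify, request
from sklearn.linear_model import LinearRegression

app = Flask(__name__)

# Train a toy model at startup (a real app would load a pickled model instead)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X.ravel() + 1.0 + rng.normal(scale=0.5, size=100)
model = LinearRegression().fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect JSON like {"x": [1.0, 2.5]}
    data = request.get_json(force=True)
    preds = model.predict(np.array(data["x"]).reshape(-1, 1))
    return jsonify({"prediction": preds.tolist()})

# Serve locally:    flask --app app run
# In production:    gunicorn app:app
```

Gunicorn wraps the same `app` object in a multi-worker WSGI server, which is why the Flask development server never appears in a production deployment.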

# Object-oriented programming with machine learning

Implementing some of the core OOP principles in a machine learning context by building your own Scikit-learn-like estimator and making it better.

Here is the complete Python script with the linear regression class, which can perform fitting, prediction, computation of regression metrics, plotting of outliers, plotting of diagnostics (linearity, constant variance, etc.), and computation of variance inflation factors.
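A minimal sketch of the idea, covering only the fit/predict/score subset (the class and attribute names follow Scikit-learn conventions but are not the repo's actual API):

```python
import numpy as np

class MyLinearRegression:
    """A tiny Scikit-learn-style linear regression estimator (ordinary least squares)."""

    def fit(self, X, y):
        # Append a bias column and solve via least squares
        X = np.asarray(X, dtype=float)
        y = np.asarray(y, dtype=float)
        Xb = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        self.intercept_, self.coef_ = coef[0], coef[1:]
        return self  # returning self enables method chaining, as in Scikit-learn

    def predict(self, X):
        return np.asarray(X, dtype=float) @ self.coef_ + self.intercept_

    def score(self, X, y):
        # R^2, matching Scikit-learn's convention for regressors
        y = np.asarray(y, dtype=float)
        ss_res = ((y - self.predict(X)) ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum()
        return 1.0 - ss_res / ss_tot

# Usage on noiseless synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5
model = MyLinearRegression().fit(X, y)
print(model.coef_, model.intercept_)
```

The full script in the repo layers the regression metrics, diagnostic plots, and inference utilities on top of the same class structure.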

I created a Python package based on this work, which offers a simple Scikit-learn-style API along with deep statistical inference and residual analysis capabilities for linear regression problems. Check it out here.

See my articles on Medium on this topic.