Introduction to Data Analysis
techniques using Python
First steps into insight discovery using Python and specialized libraries
Alex Chalini - sentoul@Hotmail.com
About me
• Computer Systems Engineer, Master of Computer Science (…)
• Actively working in Business Solutions development since
2001
• My areas of specialty are Business Intelligence, Data Analysis,
Data Visualization, DB modeling and optimization.
• I am also interested in the Data Science path for engineering.
2
Alex Chalini
Agenda
• What is Data Analysis?
• Python Libraries for Data Analysis and Data Science
• Hands-on data analysis workflow using Python
• Statistical Analysis & ML overview
• Big Data & Data Analytics working together
• Applications in the Pharma industry
3
Question:
The process of systematically applying
techniques to evaluate data is known as?
A. Data Munging
B. Data Analysis
C. Data Science
D. Data Bases
4
Data Analysis:
• What is it? Applying logical techniques to describe, condense, recap and evaluate data, and to illustrate information.
• Goals of Data Analysis:
1. Discover useful information
2. Provide insights
3. Suggest conclusions
4. Support decision making
5
Python Data Analysis Basics
• Series
• DataFrame
• Creating a DataFrame from a dict
• Select columns, Select rows with Boolean indexing
6
Essential Concepts
• A Series is a named Python list (a dict with a list as its value):
{ 'grades' : [50, 90, 100, 45] }
• A DataFrame is a dictionary of Series (a dict of lists, one per column):
{ 'names' : ['bob', 'ken', 'art', 'joe'],
  'grades' : [50, 90, 100, 45] }
7
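A minimal pandas sketch of these concepts, using the illustrative names and grades above:

import pandas as pd

# A Series: a labeled one-dimensional array of values
grades = pd.Series([50, 90, 100, 45], name='grades')

# A DataFrame built from a dict: one key per column, one list per column
df_example = pd.DataFrame({'names': ['bob', 'ken', 'art', 'joe'],
                           'grades': [50, 90, 100, 45]})

print(df_example['grades'])                       # select a column (returns a Series)
print(df_example[df_example['grades'] >= 90])     # Boolean indexing: rows where grades >= 90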
Python Libraries for Data Analysis and Data
Science
Many popular Python toolboxes/libraries:
• NumPy
• SciPy
• Pandas
• SciKit-Learn
Visualization libraries
• matplotlib
• Seaborn
8
All these libraries are
free to download and
use
Analytics Workflow
9
Overview of Python Libraries for Data
Scientists
Reading Data; Selecting and Filtering the Data; Data manipulation,
sorting, grouping, rearranging
Plotting the data
Descriptive statistics
Inferential statistics
NumPy:
 introduces objects for multidimensional arrays and matrices, as well as functions that make it
easy to perform advanced mathematical and statistical operations on those objects
 provides vectorization of mathematical operations on arrays and matrices, which significantly
improves performance
 many other Python libraries are built on NumPy
10
Link: http://www.numpy.org/
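A small illustration of vectorization (the array size is arbitrary): the same element-wise computation written once as a NumPy expression and once as a plain Python loop.

import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Vectorized: one expression runs over the whole array in compiled code
y_vec = np.sqrt(x) * 2.0 + 1.0

# Equivalent pure-Python loop, much slower for large arrays
y_loop = [(v ** 0.5) * 2.0 + 1.0 for v in x]

print(y_vec[:5])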
SciPy:
 collection of algorithms for linear algebra, differential equations, numerical
integration, optimization, statistics and more
 part of SciPy Stack
 built on NumPy
11
Link: https://www.scipy.org/scipylib/
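A brief sketch of two of those algorithm families, numerical integration and optimization (the integrand and the function being minimized are arbitrary examples):

import numpy as np
from scipy import integrate, optimize

# Numerical integration: integrate sin(x) from 0 to pi (exact answer is 2)
area, error = integrate.quad(np.sin, 0, np.pi)

# Optimization: find the minimum of a simple quadratic
result = optimize.minimize_scalar(lambda x: (x - 3) ** 2)

print(area, result.x)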
Pandas is a Python package for data analysis.
• Provides built-in data structures which simplify the manipulation and analysis of data sets.
• Pandas is easy to use and powerful, but “with great power comes great responsibility”
• adds data structures and tools designed to work with table-like data (similar to Series and Data
Frames in R)
• provides tools for data manipulation: reshaping, merging, sorting, slicing, aggregation etc.
• allows handling missing data
I cannot teach you all things Pandas; we must focus on how it works, so you can figure out the rest
on your own.
12
Link: http://pandas.pydata.org/
Link: http://scikit-learn.org/
SciKit-Learn:
 provides machine learning algorithms: classification, regression, clustering,
model validation etc.
 built on NumPy, SciPy and matplotlib
13
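A hedged sketch of the typical fit/predict workflow, using one of scikit-learn's bundled toy datasets (the classifier and its parameters are arbitrary choices):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load a toy dataset and hold out part of it for validation
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a classifier and measure accuracy on the held-out data
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))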
Link: https://www.tensorflow.org/
TensorFlow™ is an open source software library for high performance numerical
computation.
It comes with strong support for machine learning and deep learning, and its
flexible numerical computation core is used across many other scientific domains.
14
matplotlib:
 Python 2D plotting library which produces publication-quality figures in a
variety of hardcopy formats
 a set of functionalities similar to those of MATLAB
 line plots, scatter plots, barcharts, histograms, pie charts etc.
 relatively low-level; some effort is needed to create advanced visualizations
Link: https://matplotlib.org/
15
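A minimal line-plot sketch (the data and the output file name are illustrative):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)

# A simple labelled line plot written to a PNG file
plt.plot(x, np.sin(x), label='sin(x)')
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.legend()
plt.savefig('sine.png')   # or plt.show() in an interactive session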
Seaborn:
 based on matplotlib
 provides a high-level interface for drawing attractive statistical graphics
 similar (in style) to the popular ggplot2 library in R
Link: https://seaborn.pydata.org/
16
Hands-on workflow using Python
17
In [ ]:
Loading Python Libraries
18
#Import Python Libraries
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
import seaborn as sns
In [ ]:
Reading data using pandas
19
#Read csv file
df = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/Salaries.csv")
There are a number of pandas commands to read other data formats:
pd.read_excel('myfile.xlsx',sheet_name='Sheet1', index_col=None, na_values=['NA'])
pd.read_stata('myfile.dta')
pd.read_sas('myfile.sas7bdat')
pd.read_hdf('myfile.h5','df')
Note: The above command has many optional arguments to fine-tune the data import process.
In [3]:
Exploring data frames
20
#List first 5 records
df.head()
Out[3]:
Data Frame data types
Pandas Type | Native Python Type | Description
object | string | The most general dtype. Will be assigned to your column if the column has mixed types (numbers and strings).
int64 | int | Numeric values. 64 refers to the memory allocated to hold each value.
float64 | float | Numeric values with decimals. If a column contains numbers and NaNs (see below), pandas will default to float64, in case your missing value has a decimal.
datetime64, timedelta[ns] | N/A (but see the datetime module in Python’s standard library) | Values meant to hold time data. Look into these for time series experiments.
21
In [4]:
Data Frame data types
22
#Check a particular column type
df['salary'].dtype
Out[4]: dtype('int64')
In [5]: #Check types for all the columns
df.dtypes
Out[5]: rank          object
        discipline    object
        phd            int64
        service        int64
        sex           object
        salary         int64
        dtype: object
Data Frames attributes
23
Python objects have attributes and methods.
df.attribute description
dtypes list the types of the columns
columns list the column names
axes list the row labels and column names
ndim number of dimensions
size number of elements
shape return a tuple representing the dimensionality
values numpy representation of the data
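A quick sketch of these attributes applied to the Salaries data frame loaded earlier (the exact output depends on the file):

# Assuming df is the Salaries data frame read earlier with pd.read_csv
print(df.shape)    # (number of rows, number of columns)
print(df.ndim)     # number of dimensions: 2
print(df.size)     # total number of elements (rows * columns)
print(df.columns)  # column labels
print(df.dtypes)   # type of each column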
Hands-on exercises
24
 Find how many records this data frame has;
 How many elements are there?
 What are the column names?
 What types of columns we have in this data frame?
In [5]: df.shape
Out[5]: (4, 3)
>>> df.count()
Person 4
Age 4
Single 5
dtype: int64
list(my_dataframe.columns.values)
You can also simply use:
list(my_dataframe)
>>> df2.dtypes
Series or DataFrame?
Match the code to the
result. One result is a Series,
the other a DataFrame
1. df['Quarter']
2. df[ ['Quarter'] ]
A. Series B. Data Frame
25
Data Frames methods
26
df.method() description
head( [n] ), tail( [n] ) first/last n rows
describe() generate descriptive statistics (for numeric columns only)
max(), min() return max/min values for all numeric columns
mean(), median() return mean/median values for all numeric columns
std() standard deviation
sample([n]) returns a random sample of the data frame
dropna() drop all the records with missing values
Unlike attributes, Python methods have parentheses.
All attributes and methods can be listed with the dir() function: dir(df)
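A few of these methods applied to the same Salaries data frame (a sketch; the actual numbers depend on the data):

# Assuming df is the Salaries data frame read earlier
print(df.head(3))                  # first 3 rows
print(df.describe())               # count, mean, std, min, quartiles, max for numeric columns
print(df.mean(numeric_only=True))  # mean of each numeric column
print(df.sample(5))                # 5 randomly chosen rows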
Selecting a column in a Data Frame
Method 1: Subset the data frame using column name:
df['gender']
Method 2: Use the column name as an attribute:
df.gender
Note: there is an attribute rank for pandas data frames, so to select a column with a name
"rank" we should use method 1.
27
Data Frames groupby method
28
Using "group by" method we can:
• Split the data into groups based on some criteria
• Calculate statistics (or apply a function) to each group
• Similar to the dplyr package in R
In [ ]: #Group data using rank
df_rank = df.groupby(['rank'])
In [ ]: #Calculate mean value for each numeric column per each group
df_rank.mean()
Data Frames groupby method
29
Once the groupby object is created, we can calculate various statistics for each group:
In [ ]: #Calculate mean salary for each professor rank:
df.groupby('rank')[['salary']].mean()
Note: If single brackets are used to specify the column (e.g. salary), then the output is a Pandas Series object.
When double brackets are used, the output is a Data Frame.
Data Frames groupby method
30
groupby performance notes:
- no grouping/splitting occurs until it's needed. Creating the groupby object
only verifies that you have passed a valid mapping
- by default the group keys are sorted during the groupby operation. You may
want to pass sort=False for potential speedup:
In [ ]: #Calculate mean salary for each professor rank:
df.groupby(['rank'], sort=False)[['salary']].mean()
Data Frame: filtering
31
To subset the data we can apply Boolean indexing. This indexing is commonly
known as a filter. For example, if we want to subset the rows in which the salary
value is greater than $120K:
In [ ]: #Select the rows where salary is greater than 120000:
df_sub = df[ df['salary'] > 120000 ]
In [ ]: #Select only those rows that contain female professors:
df_f = df[ df['sex'] == 'Female' ]
Any comparison operator can be used to subset the data:
> greater; >= greater or equal;
< less; <= less or equal;
== equal; != not equal;
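Conditions can also be combined with & (and), | (or) and ~ (not), wrapping each condition in parentheses. A sketch on the same data frame (the rank label used here is an assumption about the data):

# Assuming df is the Salaries data frame read earlier
# Female professors earning more than $120K (note the parentheses around each condition)
df_sel = df[(df['sex'] == 'Female') & (df['salary'] > 120000)]

# Rows whose rank is NOT 'Prof' (the label 'Prof' is illustrative)
df_not_prof = df[~(df['rank'] == 'Prof')]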
Boolean filtering
Which rows are included in this
Boolean index?
df[ df['Sold'] < 110 ]
A. 0, 1, 2
B. 1, 2, 3
C. 0, 1
D. 0, 3
32
Data Frames: Slicing
33
There are a number of ways to subset the Data Frame:
• one or more columns
• one or more rows
• a subset of rows and columns
Rows and columns can be selected by their position or label
Data Frames: Slicing
34
When selecting one column, it is possible to use a single set of brackets, but the
resulting object will be a Series (not a DataFrame):
In [ ]: #Select column salary:
df['salary']
When we need to select more than one column and/or make the output a
DataFrame, we should use double brackets:
In [ ]: #Select column salary:
df[['rank','salary']]
Data Frames: Selecting rows
35
If we need to select a range of rows, we can specify the range using ":"
In [ ]: #Select rows by their position:
df[10:20]
Notice that the first row has position 0, and the last value in the range is omitted:
so for the 0:10 range, the first 10 rows are returned, with positions starting at 0
and ending at 9.
Data Frames: method loc
36
If we need to select a range of rows using their labels, we can use the loc method:
In [ ]: #Select rows by their labels:
df_sub.loc[10:20,['rank','sex','salary']]
Out[ ]:
Data Frames: method iloc
37
If we need to select a range of rows and/or columns using their positions, we can
use the iloc method:
In [ ]: #Select rows and columns by their positions:
df_sub.iloc[10:20,[0, 3, 4, 5]]
Out[ ]:
Data Frames: method iloc (summary)
38
df.iloc[0] # First row of a data frame
df.iloc[i] #(i+1)th row
df.iloc[-1] # Last row
df.iloc[:, 0] # First column
df.iloc[:, -1] # Last column
df.iloc[0:7] #First 7 rows
df.iloc[:, 0:2] #First 2 columns
df.iloc[1:3, 0:2] #Second through third rows and first 2 columns
df.iloc[[0,5], [1,3]] #1st and 6th rows and 2nd and 4th columns
Data Frames: Sorting
39
We can sort the data by a value in a column. By default the sorting occurs in
ascending order and a new data frame is returned.
In [ ]: # Create a new data frame from the original, sorted by the column service
df_sorted = df.sort_values( by ='service')
df_sorted.head()
Out[ ]:
Data Frames: Sorting
40
We can sort the data using 2 or more columns:
In [ ]: df_sorted = df.sort_values( by =['service', 'salary'], ascending = [True, False])
df_sorted.head(10)
Out[ ]:
Missing Values
41
Missing values are marked as NaN
In [ ]: # Read a dataset with missing values
flights = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/flights.csv")
In [ ]: # Select the rows that have at least one missing value
flights[flights.isnull().any(axis=1)].head()
Out[ ]:
Missing Values
42
There are a number of methods to deal with missing values in the data frame:
df.method() description
dropna() Drop missing observations
dropna(how='all') Drop observations where all cells are NA
dropna(axis=1, how='all') Drop column if all the values are missing
dropna(thresh = 5) Drop rows that contain fewer than 5 non-missing values
fillna(0) Replace missing values with zeros
isnull() returns True if the value is missing
notnull() Returns True for non-missing values
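A short sketch of these methods on the flights data frame loaded above:

# Assuming flights is the data frame with missing values read earlier
flights_complete = flights.dropna()              # keep only rows with no missing values
flights_filled = flights.fillna(0)                # replace every NaN with 0
n_missing = flights['dep_delay'].isnull().sum()   # count missing departure delays
print(n_missing)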
Missing Values
43
• When summing the data, missing values will be treated as zero
• If all values are missing, the sum will be equal to NaN
• cumsum() and cumprod() methods ignore missing values but preserve them in
the resulting arrays
• Missing values in GroupBy method are excluded (just like in R)
• Many descriptive statistics methods have a skipna option to control whether missing
data should be excluded. This value is set to True by default (unlike R)
Aggregation Functions in Pandas
44
Aggregation - computing a summary statistic about each group, e.g.
• compute group sums or means
• compute group sizes/counts
Common aggregation functions:
min, max
count, sum, prod
mean, median, mode, mad
std, var
Aggregation Functions in Pandas
45
The agg() method is useful when multiple statistics are computed per column:
In [ ]: flights[['dep_delay','arr_delay']].agg(['min','mean','max'])
Out[ ]:
Basic Descriptive Statistics
46
df.method() description
describe Basic statistics (count, mean, std, min, quantiles, max)
min, max Minimum and maximum values
mean, median, mode Arithmetic average, median and mode
var, std Variance and standard deviation
sem Standard error of mean
skew Sample skewness
kurt kurtosis
Graphics to explore the data
47
To show graphs within a Python notebook, include the inline directive:
In [ ]: %matplotlib inline
The Seaborn package is built on matplotlib but provides a high-level
interface for drawing attractive statistical graphics, similar to the ggplot2
library in R. It specifically targets statistical data visualization. Examples of a few of these plots follow the table below.
Graphics
48
description
distplot histogram
barplot estimate of central tendency for a numeric variable
violinplot similar to boxplot, also shows the probability density of the
data
jointplot Scatterplot
regplot Regression plot
pairplot Pairplot
boxplot boxplot
swarmplot categorical scatterplot
factorplot General categorical plot
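Two of these plots sketched against the Salaries data frame (the column choices are illustrative):

import seaborn as sns
import matplotlib.pyplot as plt

# Assuming df is the Salaries data frame read earlier
sns.boxplot(x='rank', y='salary', data=df)      # salary distribution per rank
plt.show()

sns.regplot(x='service', y='salary', data=df)   # scatter plot with a fitted regression line
plt.show()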
49
Statistical Analysis & ML overview
50
statsmodels and scikit-learn - both have a number of functions for statistical analysis.
The first one is mostly used for classical analysis using R-style formulas, while scikit-learn and TensorFlow are more
tailored for Machine Learning. A short sketch of both styles follows the lists below.
statsmodels:
• linear regressions
• ANOVA tests
• hypothesis testing
• many more ...
scikit-learn:
• kmeans
• support vector machines
• random forests
• many more ...
Tensorflow:
• Image Recognition
• Neural Networks
• Linear Models
• TensorFlow Wide & Deep Learning
• etc...
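A hedged sketch of both styles on the Salaries data frame: an R-style formula regression with statsmodels and a k-means clustering with scikit-learn (the formula, feature choice and number of clusters are arbitrary):

import statsmodels.formula.api as smf
from sklearn.cluster import KMeans

# Assuming df is the Salaries data frame read earlier
# statsmodels: R-style formula for a linear regression of salary on phd and service years
ols_model = smf.ols('salary ~ phd + service', data=df).fit()
print(ols_model.summary())

# scikit-learn: k-means clustering of the numeric columns into 3 groups
km = KMeans(n_clusters=3, n_init=10, random_state=0)
km.fit(df[['phd', 'service', 'salary']])
print(km.labels_[:10])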
51
Big Data & Data Analytics working together
Big Data & Data Analytics working together
WORKING WITH BIG DATA: MAP-REDUCE
• When working with large datasets, it’s often useful to utilize MapReduce.
MapReduce is a method for working with big data that allows you to
first map the data using a particular attribute, filter or grouping, and then
reduce those groups using a transformation or aggregation mechanism. For
example, if I had a collection of cats, I could first map them by what color
they are and then reduce by summing those groups. At the end of the
MapReduce process, I would have a list of all the cat colors and the sum of
the cats in each of those color groupings.
• Almost every data science library has some MapReduce functionality built
in. There are also numerous larger libraries you can use to manage the data
and MapReduce over a series of computers (or a cluster / grouping of
computers). Python can speak to these services and software and extract
the results for further reporting, visualization or alerting.
52
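The cat-colors example can be sketched in plain Python with map() and functools.reduce() (the data is made up):

from functools import reduce

cats = ['black', 'white', 'black', 'orange', 'white', 'black']

# Map step: turn each cat into a (color, 1) pair
mapped = map(lambda color: (color, 1), cats)

# Reduce step: sum the counts per color
def add_pair(counts, pair):
    color, n = pair
    counts[color] = counts.get(color, 0) + n
    return counts

color_counts = reduce(add_pair, mapped, {})
print(color_counts)   # e.g. {'black': 3, 'white': 2, 'orange': 1}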
Big Data & Data Analytics working together
Hadoop
• One of the most popular libraries for MapReduce with large datasets is Apache’s Hadoop. Hadoop
uses cluster computing to allow for faster data processing of large datasets. There are many
Python libraries you can use to send your data or jobs to Hadoop; which one you choose
should depend on what is easiest to set up with your infrastructure and
which library seems clearest for your use case.
Spark
• If you have large data which might work better in streaming form (real-time data, log data,
API data), then Apache’s Spark is a great tool. PySpark, the Python Spark API, allows you to
quickly get up and running and start mapping and reducing your dataset. It’s also incredibly
popular with machine learning problems, as it has some built-in algorithms.
• There are several other large scale data and job libraries you can use with Python, but for now
we can move along to looking at data with Python.
53
Big Data & Data Analytics working together
54
Apache Spark is written in the Scala programming language. To
support Python with Spark, the Apache Spark community released
a tool, PySpark. Using PySpark, you can work with RDDs in the
Python programming language as well.
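A minimal PySpark sketch, assuming pyspark is installed and running locally; the input file name is illustrative:

from pyspark.sql import SparkSession

# Start a local Spark session
spark = SparkSession.builder.master('local[*]').appName('wordcount').getOrCreate()

# Classic map-reduce word count on an RDD (logs.txt is a placeholder file)
lines = spark.sparkContext.textFile('logs.txt')
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
print(counts.take(5))

spark.stop()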
BigQuery is Google's serverless, highly scalable, low cost enterprise data
warehouse.
BigQuery allows organizations to capture and analyze data in real-time
using its powerful streaming ingestion capability so that your insights are
always current.
Industries using Real-Time Big Data-Analytics
• e-Commerce
• Social Networks
• Healthcare
• Fraud Detection
55
OPTIMIZE THE CUSTOMER SERVICE PROCESS IN A FLOW OF
CONTINUOUS DATA, MAKING LIFE-SAVING DECISIONS IN
A SAFE ENVIRONMENT TO RUN THE BUSINESS.
56
Applications in Pharma Industry
Applications in the Pharma Industry
57
58
59
Thank you!
60