Advanced Analytics with PySpark: Patterns for Learning from Data at Scale Using Python and Spark

Paperback

$65.99 


Overview

The amount of data being generated today is staggering and growing. Apache Spark has emerged as the de facto tool for analyzing big data and is now a critical part of the data science toolbox. Updated for Spark 3.0, this practical guide brings together Spark, statistical methods, and real-world datasets to teach you how to approach analytics problems using PySpark, Spark's Python API, along with best practices in Spark programming.

Data scientists Akash Tandon, Sandy Ryza, Uri Laserson, Sean Owen, and Josh Wills offer an introduction to the Spark ecosystem, then dive into patterns that apply common techniques (including classification, clustering, collaborative filtering, and anomaly detection) to fields such as genomics, security, and finance. This updated edition also covers NLP and image processing.

If you have a basic understanding of machine learning and statistics and you program in Python, this book will get you started with large-scale data analysis.

  • Familiarize yourself with Spark's programming model and ecosystem
  • Learn general approaches in data science
  • Examine complete implementations that analyze large public datasets
  • Discover which machine learning tools make sense for particular problems
  • Explore code that can be adapted to many uses

Product Details

ISBN-13: 9781098103651
Publisher: O'Reilly Media, Incorporated
Publication date: 07/19/2022
Pages: 233
Product dimensions: 7.00(w) x 9.19(h) x (d)

About the Author

Akash Tandon is an independent consultant and experienced full-stack data engineer. Previously, he was a senior data engineer at Atlan, where he built software for enterprise data science teams. In another life, he worked on data science projects for governments and built risk assessment tools at a FinTech startup. As a student, he wrote open source software with the R project for statistical computing and Google. In his free time, he researches things for no good reason.

Sandy Ryza is a software engineer at Elementl. Previously, he developed algorithms for public transit at Remix and was a senior data scientist at Cloudera and Clover Health. He is an Apache Spark committer, Apache Hadoop PMC member, and founder of the Time Series for Spark project.

Uri Laserson is founder and CTO of Patch Biosciences. Previously, he worked on big data and genomics at Cloudera.

Sean Owen is a principal solutions architect focusing on machine learning and data science at Databricks. He is an Apache Spark committer and PMC member, and coauthor of Advanced Analytics with Spark. Previously, he was director of Data Science at Cloudera and an engineer at Google.

Josh Wills is an independent data science and engineering consultant, the former head of data engineering at Slack and data science at Cloudera, and wrote a tweet about data scientists once.

Table of Contents

Preface vii

1 Analyzing Big Data 1

Working with Big Data 2

Introducing Apache Spark and PySpark 4

Components 4

PySpark 6

Ecosystem 7

Spark 3.0 8

PySpark Addresses Challenges of Data Science 8

Where to Go from Here 9

2 Introduction to Data Analysis with PySpark 11

Spark Architecture 13

Installing PySpark 14

Setting Up Our Data 17

Analyzing Data with the DataFrame API 22

Fast Summary Statistics for DataFrames 26

Pivoting and Reshaping DataFrames 28

Joining DataFrames and Selecting Features 30

Scoring and Model Evaluation 32

Where to Go from Here 34

3 Recommending Music and the Audioscrobbler Dataset 35

Setting Up the Data 36

Our Requirements for a Recommender System 38

Alternating Least Squares Algorithm 40

Preparing the Data 41

Building a First Model 44

Spot Checking Recommendations 48

Evaluating Recommendation Quality 49

Computing AUC 51

Hyperparameter Selection 52

Making Recommendations 55

Where to Go from Here 56

4 Making Predictions with Decision Trees and Decision Forests 59

Decision Trees and Forests 60

Preparing the Data 63

Our First Decision Tree 67

Decision Tree Hyperparameters 74

Tuning Decision Trees 76

Categorical Features Revisited 79

Random Forests 82

Making Predictions 85

Where to Go from Here 85

5 Anomaly Detection with K-means Clustering 87

K-means Clustering 88

Identifying Anomalous Network Traffic 89

KDD Cup 1999 Dataset 90

A First Take on Clustering 91

Choosing k 93

Visualization with SparkR 96

Feature Normalization 100

Categorical Variables 102

Using Labels with Entropy 103

Clustering in Action 105

Where to Go from Here 106

6 Understanding Wikipedia with LDA and Spark NLP 109

Latent Dirichlet Allocation 110

LDA in PySpark 110

Getting the Data 111

Spark NLP 112

Setting Up Your Environment 113

Parsing the Data 114

Preparing the Data Using Spark NLP 115

TF-IDF 119

Computing the TF-IDFs 120

Creating Our LDA Model 121

Where to Go from Here 124

7 Geospatial and Temporal Data Analysis on Taxi Trip Data 125

Preparing the Data 126

Converting Datetime Strings to Timestamps 128

Handling Invalid Records 130

Geospatial Analysis 132

Intro to GeoJSON 132

GeoPandas 133

Sessionization in PySpark 136

Building Sessions: Secondary Sorts in PySpark 137

Where to Go from Here 139

8 Estimating Financial Risk 141

Terminology 142

Methods for Calculating VaR 143

Variance-Covariance 143

Historical Simulation 143

Monte Carlo Simulation 143

Our Model 144

Getting the Data 145

Preparing the Data 146

Determining the Factor Weights 148

Sampling 152

The Multivariate Normal Distribution 154

Running the Trials 155

Visualizing the Distribution of Returns 158

Where to Go from Here 158

9 Analyzing Genomics Data and the BDG Project 161

Decoupling Storage from Modeling 162

Setting Up ADAM 164

Introduction to Working with Genomics Data Using ADAM 166

File Format Conversion with the ADAM CLI 166

Ingesting Genomics Data Using PySpark and ADAM 167

Predicting Transcription Factor Binding Sites from ENCODE Data 173

Where to Go from Here 178

10 Image Similarity Detection with Deep Learning and PySpark LSH 179

PyTorch 180

Installation 180

Preparing the Data 181

Resizing Images Using PyTorch 181

Deep Learning Model for Vector Representation of Images 182

Image Embeddings 183

Import Image Embeddings into PySpark 185

Image Similarity Search Using PySpark LSH 186

Nearest Neighbor Search 187

Where to Go from Here 190

11 Managing the Machine Learning Lifecycle with MLflow 191

Machine Learning Lifecycle 192

MLflow 193

Experiment Tracking 194

Managing and Serving ML Models 197

Creating and Using MLflow Projects 200

Where to Go from Here 203

Index 205
