
pyspark

PySpark Pivoting

Today's post covers the following:

  • Basic pivot operation
  • Pivot with multiple aggregations
  • Conditional pivoting
  • Pivoting with specified column values
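
As a quick reference, here is a minimal sketch of those pivot variants, assuming a small made-up sales DataFrame with store, quarter and amount columns (not the exact notebook data):

    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    spark = SparkSession.builder.getOrCreate()

    # hypothetical sales data: one row per store/quarter/amount
    df = spark.createDataFrame(
        [("A", "Q1", 100.0), ("A", "Q2", 150.0),
         ("B", "Q1", 200.0), ("B", "Q2", 120.0)],
        ["store", "quarter", "amount"],
    )

    # basic pivot: one column per quarter, summing amount
    basic = df.groupBy("store").pivot("quarter").agg(F.sum("amount"))

    # pivot with multiple aggregations per pivoted value
    multi = df.groupBy("store").pivot("quarter").agg(
        F.sum("amount").alias("total"), F.avg("amount").alias("avg"))

    # conditional "pivot" built with when/otherwise instead of pivot()
    conditional = df.groupBy("store").agg(
        F.sum(F.when(F.col("quarter") == "Q1", F.col("amount"))).alias("Q1_total"))

    # pivot with the column values specified up front (avoids an extra pass over the data)
    specified = df.groupBy("store").pivot("quarter", ["Q1", "Q2"]).agg(F.sum("amount"))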

PySpark Data Filtration

Today's post covers the following:

  • Filtration by column value (one or multiple conditions)
  • String related filtration using like / contains
  • Missing data filtration
  • List based filtration using isin
  • General data cleaning operations
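
A minimal sketch of those filtration patterns, assuming a made-up customers DataFrame with some missing values:

    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    spark = SparkSession.builder.getOrCreate()

    # hypothetical customer data with some missing values
    df = spark.createDataFrame(
        [(1, "Alice", "UK", 34), (2, "Bob", None, 41), (3, "Charlie", "USA", None)],
        ["id", "name", "country", "age"],
    )

    # filter by column value: one condition, then multiple conditions combined with & / |
    adults = df.filter(F.col("age") >= 18)
    adults_uk = df.filter((F.col("age") >= 18) & (F.col("country") == "UK"))

    # string-related filtration using like / contains
    a_names = df.filter(F.col("name").like("A%"))
    li_names = df.filter(F.col("name").contains("li"))

    # missing data filtration
    missing_country = df.filter(F.col("country").isNull())
    has_age = df.filter(F.col("age").isNotNull())

    # list-based filtration using isin
    selected = df.filter(F.col("country").isin(["UK", "USA"]))

    # general cleaning: drop rows missing a country, fill remaining missing ages
    cleaned = df.dropna(subset=["country"]).fillna({"age": 0})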

PySpark Pipelines

Today's post covers the following:

  • Missing data treatment classification pipeline
  • Feature scaling using StandardScaler classification pipeline
  • TF-IDF corpus classification pipeline
  • PCA dimensionality reduction classification pipeline
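
As an illustrative sketch (not the exact notebook code), a classification pipeline chaining missing data treatment, feature assembly and StandardScaler scaling might look like this, assuming numeric columns age/fare and a binary label column:

    from pyspark.ml import Pipeline
    from pyspark.ml.feature import Imputer, VectorAssembler, StandardScaler
    from pyspark.ml.classification import LogisticRegression

    # assumed numeric feature columns and a binary "label" column
    num_cols = ["age", "fare"]

    # missing data treatment -> assemble -> scale -> classify, chained as one pipeline
    imputer = Imputer(inputCols=num_cols, outputCols=[c + "_imp" for c in num_cols])
    assembler = VectorAssembler(inputCols=[c + "_imp" for c in num_cols],
                                outputCol="raw_features")
    scaler = StandardScaler(inputCol="raw_features", outputCol="features")
    clf = LogisticRegression(featuresCol="features", labelCol="label")

    pipeline = Pipeline(stages=[imputer, assembler, scaler, clf])
    # model = pipeline.fit(train_df)      # train_df / test_df are assumed to exist
    # preds = model.transform(test_df)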

PySpark Daily Summary II

Continuing on from where we left off last post, I'll be exploring pyspark on a daily basis, just to get more used to it. Here I will be posting summaries that cover roughly 10 days' worth of posts that I make on Kaggle, which equates to about three posts a month.

PySpark Daily Summary I

Something I decided would be fun to do on a daily basis: write pyspark code every day and post about it. This is mainly because I don't use it as often as I would like, so that is my motivation. If you want to join in, just fork the notebook (on Kaggle) and practice various bits of pyspark every day! Visit my telegram channel if you have any questions, or just post them here!

Here I will be posting summaries that cover roughly 10 days' worth of posts that I make on Kaggle, which equates to about three posts a month.

Utilising Prophet with PySpark

In this notebook, we look at how to use a popular machine learning library, Prophet, with the pyspark architecture. pyspark itself unfortunately does not contain such an additive regression model; however, we can use user defined functions (UDFs), which allow us to tap into functionality from other libraries that is not available in pyspark.
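
A rough sketch of the idea, assuming a spark DataFrame named sales with store_id, ds and y columns, and using the grouped pandas UDF entry point applyInPandas so that one Prophet model is fitted per store (column and variable names here are hypothetical):

    import pandas as pd
    from prophet import Prophet  # pip install prophet

    def forecast_store(pdf: pd.DataFrame) -> pd.DataFrame:
        """Fit one Prophet model on a single store's history and forecast 30 days ahead."""
        model = Prophet()
        model.fit(pdf[["ds", "y"]])
        future = model.make_future_dataframe(periods=30)
        forecast = model.predict(future)[["ds", "yhat", "yhat_lower", "yhat_upper"]]
        forecast["store_id"] = pdf["store_id"].iloc[0]
        return forecast

    schema = ("store_id string, ds timestamp, "
              "yhat double, yhat_lower double, yhat_upper double")

    # each store's rows are handed to pandas as one group, so models are fitted in parallel
    # forecasts = sales.groupBy("store_id").applyInPandas(forecast_store, schema=schema)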

Hyperparameter Tuning with Pipelines


This post is the last of three posts on the Titanic classification problem in pyspark.

  • In the last post, we started with a cleaned dataset, prepared it for machine learning by utilising StringIndexer & VectorAssembler, and then moved on to the model training stage itself.
  • These steps form a series of stages in the construction of a model, which we can group into a single pipeline. pyspark, like sklearn, has such pipeline classes that help us keep things organised.
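
A hedged sketch of what such a tuned pipeline might look like, assuming Titanic-style columns Sex, Age and Fare with a Survived label (the notebook's actual feature choices may differ):

    from pyspark.ml import Pipeline
    from pyspark.ml.feature import StringIndexer, VectorAssembler
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
    from pyspark.ml.evaluation import BinaryClassificationEvaluator

    # assumed Titanic-style columns: "Sex" (string), "Age", "Fare" (numeric), "Survived" (label)
    indexer = StringIndexer(inputCol="Sex", outputCol="SexIndexed")
    assembler = VectorAssembler(inputCols=["SexIndexed", "Age", "Fare"], outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="Survived")

    pipeline = Pipeline(stages=[indexer, assembler, lr])

    # hyperparameter grid to search over
    grid = (ParamGridBuilder()
            .addGrid(lr.regParam, [0.01, 0.1])
            .addGrid(lr.elasticNetParam, [0.0, 0.5])
            .build())

    evaluator = BinaryClassificationEvaluator(labelCol="Survived")

    # cross-validate the whole pipeline so every stage is refitted inside each fold
    cv = CrossValidator(estimator=pipeline, estimatorParamMaps=grid,
                        evaluator=evaluator, numFolds=3)
    # best_model = cv.fit(train_df).bestModel   # train_df is assumed to exist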