# Percentile Rank of the column in pyspark

To calculate the percentile rank of a column in pyspark we use the percent_rank() function. percent_rank(), combined with partitionBy() on another column, calculates the percentile rank of the column by group. Let's see an example of how to calculate the percentile rank of a column in pyspark.

• Percentile Rank of the column in pyspark using percent_rank()
• percent_rank() of the column by group in pyspark

We will be using the dataframe df_basket1.

#### percent_rank() of the column in pyspark:

The percentile rank of the column is calculated with the percent_rank() function over a window defined with partitionBy() and orderBy(). Here partitionBy() takes no argument because we are not grouping by any variable, so the rank is computed over the whole dataframe. The result is stored in a new column named “percent_rank” as shown below.

```
### Percentile Rank of the column in pyspark

from pyspark.sql.window import Window
import pyspark.sql.functions as F

# window over the whole dataframe: empty partitionBy(), ordered by Price
df_percent_rank = df_basket1.select(
    "Item_group", "Item_name", "Price",
    F.percent_rank().over(Window.partitionBy().orderBy(df_basket1["Price"])).alias("percent_rank"),
)
df_percent_rank.show()
```

So in the resultant dataframe the percentile rank is calculated and stored in a new column, as shown below.

#### Percentile Rank of the column by group in pyspark:

The percentile rank of the column by group is calculated with the percent_rank() function. We use partitionBy() on the “Item_group” column and orderBy() on the “Price” column, so that the percentile rank is populated within each group, in our case by “Item_group”.

```
### Percentile Rank of the column by group in pyspark

from pyspark.sql.window import Window
import pyspark.sql.functions as F

# window partitioned by Item_group, ordered by Price within each group
df_percent_rank_group = df_basket1.select(
    "Item_group", "Item_name", "Price",
    F.percent_rank().over(
        Window.partitionBy(df_basket1["Item_group"]).orderBy(df_basket1["Price"])
    ).alias("percent_rank"),
)
df_percent_rank_group.show()
```

So the resultant dataframe, with the percentile rank populated within each group, will be as shown below.


## Author

• With close to 10 years of experience in data science and machine learning, has worked extensively with programming languages and tools like R, Python (Pandas), SAS and Pyspark.