partitionBy(colNames: String*)
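A minimal sketch of partitionBy on a write, assuming a local SparkSession and a hypothetical output path (/tmp/partitionBy-demo); one sub-directory is created per distinct value of the partition column:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical local session, for illustration only
val spark = SparkSession.builder().master("local[1]").appName("partitionBy-demo").getOrCreate()
import spark.implicits._

val data = Seq(("2024-01-01", 1), ("2024-01-02", 2)).toDF("date", "value")

// Produces one directory per partition value, e.g. date=2024-01-01/
data.write.mode("overwrite").partitionBy("date").parquet("/tmp/partitionBy-demo")
```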
Apply function
Sort
Coalesce
Example:
// Coalesce the data to a single partition so that only one output file is written
// (output format and path are illustrative).
data.coalesce(1).write.parquet("/tmp/output")
Repartition
Repartition creates partitions based on the user's input by performing a full shuffle of the data (after it has been read). If the number of partitions is not specified, the value of spark.sql.shuffle.partitions is used. It returns a new Dataset partitioned by the given partitioning expressions, which is the same as DISTRIBUTE BY in SQL: all rows with the same DISTRIBUTE BY columns go to the same reducer. However, DISTRIBUTE BY does not guarantee clustering or sorting properties on the distributed keys.
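The behavior above can be sketched as follows, assuming a local SparkSession (the session setup and sample data are illustrative):

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical local session, for illustration only
val spark = SparkSession.builder().master("local[2]").appName("repartition-demo").getOrCreate()
import spark.implicits._

val df = Seq(("a", 1), ("b", 2), ("a", 3), ("c", 4)).toDF("key", "value")

// Full shuffle into an explicit number of partitions, hashed on "key";
// all rows sharing a key land in the same partition (but within a
// partition there is no guaranteed ordering).
val byKey = df.repartition(4, $"key")

// With no explicit count, spark.sql.shuffle.partitions decides the number.
val byDefault = df.repartition($"key")
```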
Support
When writing many dynamic partitions, Spark may fail with an error of the form:

Number of dynamic partitions created is XXXX, which is more than 1000.

For example:

Number of dynamic partitions created is 2100, which is more than 1000.
To solve this try to set hive.exec.max.dynamic.partitions to at least 2100.;

The configuration below must be set before starting the Spark application:
spark.hadoop.hive.exec.max.dynamic.partitions
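One way to set it, passing the Hive property through Spark's Hadoop configuration at submit time (the limit value 2100 and the application jar are illustrative):

```shell
# Prefixing a Hive/Hadoop property with spark.hadoop. forwards it
# to the underlying Hadoop configuration of the application.
spark-submit \
  --conf spark.hadoop.hive.exec.max.dynamic.partitions=2100 \
  my-application.jar
```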

A SET statement issued against a running Spark SQL server will not work; the configuration must be set when the server starts. From https://github.com/apache/spark/pull/18769
Documentation / Reference