How to import multiple CSV files in a single load?

Apache Spark · Apache Spark-SQL · Spark DataFrame

Apache Spark Problem Overview


Suppose I have a defined schema for loading 10 CSV files in a folder. Is there a way to load the tables automatically using Spark SQL? I know this can be done by using an individual DataFrame for each file [given below], but can it be automated with a single command? That is, rather than pointing to a file, can I point to a folder?

df = sqlContext.read \
       .format("com.databricks.spark.csv") \
       .option("header", "true") \
       .load("../Downloads/2008.csv")

Apache Spark Solutions


Solution 1 - Apache Spark

Use a wildcard, e.g. replace 2008 with *:

df = sqlContext.read \
       .format("com.databricks.spark.csv") \
       .option("header", "true") \
       .load("../Downloads/*.csv")  # <-- note the star (*)

Spark 2.0

# these two lines are equivalent in Spark 2.0
spark.read.format("csv").option("header", "true").load("../Downloads/*.csv")
spark.read.option("header", "true").csv("../Downloads/*.csv")

Notes:

  1. Replace format("com.databricks.spark.csv") with format("csv"), or use the csv method directly; the com.databricks.spark.csv format was integrated into Spark 2.0.

  2. Use spark (the SparkSession) instead of sqlContext; a minimal sketch of creating it follows.
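A minimal sketch of creating that session, assuming PySpark 2.x (in the pyspark shell, spark is already created for you; the app name below is illustrative):

from pyspark.sql import SparkSession

# Build (or reuse) the SparkSession entry point introduced in Spark 2.0
spark = SparkSession.builder \
    .appName("load-multiple-csvs") \
    .getOrCreate()

df = spark.read.option("header", "true").csv("../Downloads/*.csv")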

Solution 2 - Apache Spark

Ex1:

Reading a single CSV file. Provide the complete file path:

val df = spark.read.option("header", "true").csv("C:\\spark\\sample_data\\tmp\\cars1.csv")

Ex2:

Reading multiple CSV files by passing their names:

val df = spark.read.option("header", "true").csv("C:\\spark\\sample_data\\tmp\\cars1.csv", "C:\\spark\\sample_data\\tmp\\cars2.csv")

Ex3:

Reading multiple CSV files by passing a list of names:

val paths = List("C:\\spark\\sample_data\\tmp\\cars1.csv", "C:\\spark\\sample_data\\tmp\\cars2.csv")
val df = spark.read.option("header", "true").csv(paths: _*)

Ex4:

Reading multiple CSV files in a folder while ignoring other files:

val df = spark.read.option("header", "true").csv("C:\\spark\\sample_data\\tmp\\*.csv")

Ex5:

Reading multiple CSV files from multiple folders:

val folders = List("C:\\spark\\sample_data\\tmp", "C:\\spark\\sample_data\\tmp1")
val df = spark.read.option("header", "true").csv(folders: _*)

Solution 3 - Apache Spark

Note that you can use other tricks like these (a short sketch follows the list):

-- One or more wildcards:
       .../Downloads20*/*.csv
-- Braces and brackets:
       .../Downloads201[1-5]/book.csv
       .../Downloads201{11,15,19,99}/book.csv
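A minimal sketch of applying such patterns, assuming PySpark 2.x and illustrative directory layouts:

# One or more wildcards across directory and file names
df1 = spark.read.option("header", "true").csv("../Downloads20*/*.csv")

# Bracket range: matches Downloads2011 through Downloads2015
df2 = spark.read.option("header", "true").csv("../Downloads201[1-5]/book.csv")

# Brace alternatives: matches Downloads20111, Downloads20115, Downloads20119, Downloads20199
df3 = spark.read.option("header", "true").csv("../Downloads201{11,15,19,99}/book.csv")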

Solution 4 - Apache Spark

Reader's Digest: (Spark 2.x)

For example, if you have 3 directories holding CSV files:

> dir1, dir2, dir3

You then define paths as a comma-delimited string of paths, as follows:

> paths = "dir1/,dir2/,dir3/*"

Then use the following function, passing it this paths variable:

def get_df_from_csv_paths(paths):
    # custom_schema is assumed to be a StructType defined elsewhere
    df = spark.read.format("csv").option("header", "false") \
        .schema(custom_schema) \
        .option("delimiter", "\t") \
        .option("mode", "DROPMALFORMED") \
        .load(paths.split(","))
    return df

By then running:

df = get_df_from_csv_paths(paths)

you will obtain in df a single Spark DataFrame containing the data from all the CSVs found in these 3 directories.

===========================================================================

Full Version:

In case you want to ingest multiple CSVs from multiple directories, you simply need to pass a list and use wildcards.

For example, if your data_path looks like this:

> 's3://bucket_name/subbucket_name/2016-09-*/184/*,
> s3://bucket_name/subbucket_name/2016-10-*/184/*,
> s3://bucket_name/subbucket_name/2016-11-*/184/*,
> s3://bucket_name/subbucket_name/2016-12-*/184/*, ... '

you can use the above function to ingest all the CSVs in all these directories and subdirectories at once, as sketched below:
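A sketch of that call, reusing the bucket and prefix names from the example above:

# Join the wildcard patterns into the comma-delimited string
# that get_df_from_csv_paths expects
data_path = ",".join([
    "s3://bucket_name/subbucket_name/2016-09-*/184/*",
    "s3://bucket_name/subbucket_name/2016-10-*/184/*",
    "s3://bucket_name/subbucket_name/2016-11-*/184/*",
    "s3://bucket_name/subbucket_name/2016-12-*/184/*",
])
df = get_df_from_csv_paths(data_path)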

This would ingest all directories in s3://bucket_name/subbucket_name/ according to the wildcard patterns specified. For example, the first pattern would look in

> bucket_name/subbucket_name/

for all directories with names starting with

> 2016-09-

and for each of those take only the directory named

> 184

and within that subdirectory look for all CSV files.

And this would be executed for each of the patterns in the comma delimited list.

This works much better than reading each directory separately and unioning the results.
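For comparison, a minimal sketch of the union-based alternative this avoids (directory names are illustrative; custom_schema is assumed as above):

from functools import reduce

dirs = [
    "s3://bucket_name/subbucket_name/2016-09-01/184/",
    "s3://bucket_name/subbucket_name/2016-09-02/184/",
]
# Read each directory into its own DataFrame, then union them pairwise
dfs = [spark.read.schema(custom_schema).option("delimiter", "\t").csv(d) for d in dirs]
df = reduce(lambda a, b: a.union(b), dfs)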

Solution 5 - Apache Spark

Using Spark 2.0+, we can load multiple CSV files from different directories using df = spark.read.csv(['directory_1','directory_2','directory_3'.....], header=True). For more information, refer to the documentation: http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html?highlight=read#pyspark.sql.DataFrameReader
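A runnable sketch, with hypothetical directory names:

# Directory names here are placeholders
df = spark.read.csv(
    ["/data/2016/", "/data/2017/", "/data/2018/"],
    header=True,
)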

Solution 6 - Apache Spark

val df = spark.read.option("header", "true").csv("C:\\spark\\sample_data\\*.csv")

This will pick up every file under sample_data that matches *.csv (tmp, tmp1, tmp2, ....).

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type              | Original Author | Original Content on Stackoverflow
Question                  | Chendur         | View Question on Stackoverflow
Solution 1 - Apache Spark | Yaron           | View Answer on Stackoverflow
Solution 2 - Apache Spark | mputha          | View Answer on Stackoverflow
Solution 3 - Apache Spark | Jamal Jam       | View Answer on Stackoverflow
Solution 4 - Apache Spark | eiTan LaVi      | View Answer on Stackoverflow
Solution 5 - Apache Spark | Neeleshkumar S  | View Answer on Stackoverflow
Solution 6 - Apache Spark | user14454778    | View Answer on Stackoverflow