Specifies the input data source format.

Since: 1.4.0
Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties. The predicates parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame (a usage sketch follows the parameter list).

Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.

url: JDBC database url of the form jdbc:subprotocol:subname
table: Name of the table in the external database.
predicates: Condition in the WHERE clause for each partition.
connectionProperties: JDBC database connection arguments, a list of arbitrary string tag/value pairs. Normally at least a "user" and "password" property should be included.
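A minimal sketch of this overload, assuming a SQLContext named sqlContext is in scope; the URL, table name, credentials, and predicate values are all hypothetical:

  import java.util.Properties

  val props = new Properties()
  props.setProperty("user", "admin")       // hypothetical credentials
  props.setProperty("password", "secret")

  // Each predicate becomes the WHERE clause of one partition, so this
  // DataFrame has exactly two partitions.
  val predicates = Array("city = 'Amsterdam'", "city = 'Berlin'")
  val df = sqlContext.read.jdbc(
    "jdbc:postgresql://localhost/testdb", "orders", predicates, props)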
Since: 1.4.0
Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function (a usage sketch follows the parameter list).

Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.

url: JDBC database url of the form jdbc:subprotocol:subname
table: Name of the table in the external database.
columnName: the name of a column of integral type that will be used for partitioning.
lowerBound: the minimum value of columnName, used to decide partition stride.
upperBound: the maximum value of columnName, used to decide partition stride.
numPartitions: the number of partitions. The range lowerBound to upperBound will be split evenly into this many partitions.
connectionProperties: JDBC database connection arguments, a list of arbitrary string tag/value pairs. Normally at least a "user" and "password" property should be included.
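A sketch of the partition-column overload, again assuming sqlContext is in scope; the database, table, and column are hypothetical:

  import java.util.Properties

  val props = new Properties()
  props.setProperty("user", "admin")       // hypothetical credentials
  props.setProperty("password", "secret")

  // Split the integral column "id" over the range 0 to 10000 into 4
  // partitions of roughly equal stride.
  val df = sqlContext.read.jdbc(
    "jdbc:postgresql://localhost/testdb", "orders", "id",
    lowerBound = 0L, upperBound = 10000L, numPartitions = 4,
    connectionProperties = props)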
Since: 1.4.0
Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.
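The simplest overload takes only the URL, table name, and connection properties; everything here except the call shape is hypothetical:

  import java.util.Properties

  val props = new Properties()
  props.setProperty("user", "admin")       // hypothetical credentials
  props.setProperty("password", "secret")

  val df = sqlContext.read.jdbc(
    "jdbc:postgresql://localhost/testdb", "orders", props)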
Since: 1.4.0
Loads an RDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame.

Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.

jsonRDD: input RDD with one JSON object per record
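A short sketch, assuming sc is the SparkContext backing sqlContext; the records are hypothetical:

  // Each string is one complete JSON object.
  val jsonRDD = sc.parallelize(Seq(
    """{"name": "Alice", "age": 30}""",
    """{"name": "Bob", "age": 25}"""))
  val df = sqlContext.read.json(jsonRDD)
  df.printSchema()  // schema is inferred: age (long), name (string)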
Since: 1.4.0
Loads a JavaRDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame.

Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.

jsonRDD: input RDD with one JSON object per record
Since: 1.4.0
Loads a JSON file (one object per line) and returns the result as a DataFrame.

This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.

You can set the following JSON-specific options to deal with non-standard JSON files (a usage sketch follows the list):

- primitivesAsString (default false): infers all primitive values as a string type
- allowComments (default false): ignores Java/C++ style comments in JSON records
- allowUnquotedFieldNames (default false): allows unquoted JSON field names
- allowSingleQuotes (default true): allows single quotes in addition to double quotes
- allowNumericLeadingZeros (default false): allows leading zeros in numbers (e.g. 00012)
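A sketch of reading a non-standard JSON-lines file with two of these options enabled; the path is hypothetical:

  val df = sqlContext.read
    .option("allowComments", "true")
    .option("allowUnquotedFieldNames", "true")
    .json("/path/to/records.json")  // hypothetical path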
Since: 1.6.0
Loads a JSON file (one object per line) and returns the result as a DataFrame.

This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.

You can set the following JSON-specific options to deal with non-standard JSON files:

- primitivesAsString (default false): infers all primitive values as a string type
- allowComments (default false): ignores Java/C++ style comments in JSON records
- allowUnquotedFieldNames (default false): allows unquoted JSON field names
- allowSingleQuotes (default true): allows single quotes in addition to double quotes
- allowNumericLeadingZeros (default false): allows leading zeros in numbers (e.g. 00012)
Since: 1.4.0
Loads input in as a DataFrame, for data sources that support multiple paths. Only works if the source is a HadoopFsRelationProvider.
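A sketch combining two directories of the same format into one DataFrame; the format choice and paths are hypothetical:

  val df = sqlContext.read
    .format("parquet")
    .load("/data/events/2015", "/data/events/2016")  // hypothetical paths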
Since: 1.6.0
Loads input in as a DataFrame, for data sources that don't require a path (e.g. external key-value stores).
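A sketch of the no-path variant. "org.example.keyvalue" is a made-up source name; a real source must be on the classpath and implement the data source API:

  val df = sqlContext.read
    .format("org.example.keyvalue")      // hypothetical data source
    .option("host", "localhost")         // hypothetical source option
    .load()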
Since: 1.4.0
Loads input in as a DataFrame, for data sources that require a path (e.g. data backed by a local or distributed file system).
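For example, the following is equivalent to sqlContext.read.json(...) on the same (hypothetical) path:

  val df = sqlContext.read.format("json").load("/path/to/file.json")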
Since: 1.4.0
Adds an input option for the underlying data source.
Since: 1.4.0
Adds input options for the underlying data source.
Since: 1.4.0
(Scala-specific) Adds input options for the underlying data source.
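A sketch showing both styles together: options added one at a time with option, and in bulk with the Scala-specific options overload. The option names come from the JSON list above; the path is hypothetical:

  val df = sqlContext.read
    .format("json")
    .option("primitivesAsString", "true")
    .options(Map("allowComments" -> "true"))
    .load("/path/to/file.json")  // hypothetical path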
Since: 1.4.0
Loads an ORC file and returns the result as a DataFrame.
path: input path
Since: 1.5.0
Note: Currently, this method can only be used together with HiveContext.
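A one-line sketch, assuming sqlContext is a HiveContext; the path is hypothetical:

  val df = sqlContext.read.orc("/path/to/file.orc")  // hypothetical path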
Loads a Parquet file, returning the result as a DataFrame.
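A one-line sketch with a hypothetical path:

  val df = sqlContext.read.parquet("/path/to/file.parquet")  // hypothetical path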
Specifies the input schema. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
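A sketch that supplies the schema up front so the JSON source skips the inference scan; the field names and path are hypothetical:

  import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

  // Hypothetical schema matching the expected JSON records.
  val schema = StructType(Seq(
    StructField("name", StringType),
    StructField("age", IntegerType)))
  val df = sqlContext.read.schema(schema).json("/path/to/people.json")  // hypothetical path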
Since: 1.4.0
Returns the specified table as a DataFrame.
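A one-line sketch; "people" is a hypothetical table registered in the catalog:

  val df = sqlContext.read.table("people")  // hypothetical table name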
Since: 1.4.0
Loads a text file and returns a DataFrame with a single string column named "value". Each line in the text file is a new row in the resulting DataFrame. For example:

  // Scala:
  sqlContext.read.text("/path/to/spark/README.md")

  // Java:
  sqlContext.read().text("/path/to/spark/README.md")
paths: input path
Since: 1.6.0
:: Experimental :: Interface used to load a DataFrame from external storage systems (e.g. file systems, key-value stores, etc.). Use SQLContext.read to access this.
Since: 1.4.0