public class InsertIntoParquetTable extends SparkPlan implements scala.Product, scala.Serializable
WARNING: EXPERIMENTAL! InsertIntoParquetTable with overwrite=false may cause data corruption if multiple users try to append to the same table simultaneously. Inserting into a table that was previously generated by other means (e.g., by creating an HDFS directory and importing Parquet files generated by other tools) may cause unpredictable behaviour and therefore results in a RuntimeException (this is detected only via a filename pattern, so it will not catch all such cases).
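The user-facing route into this operator is an insert against a Parquet-backed table. Below is a minimal sketch of that append path, assuming the Spark 1.0-era SchemaRDD API (parquetFile, registerAsTable, insertInto); the file paths and the table name "events" are hypothetical.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// A minimal sketch of the append path that this plan node implements,
// assuming the Spark 1.0-era SchemaRDD API. Paths and table name are
// hypothetical.
object AppendToParquetTableExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("AppendToParquetTable"))
    val sqlContext = new SQLContext(sc)

    // Register a Parquet table that was originally written by Spark SQL.
    // Per the warning above, a directory assembled by other tools may make
    // the insert fail with a RuntimeException (detected by filename pattern).
    val events = sqlContext.parquetFile("/data/warehouse/events.parquet")
    events.registerAsTable("events")

    // Rows to append; this query becomes the child plan of the insert.
    val newEvents = sqlContext.parquetFile("/data/staging/new_events.parquet")

    // overwrite = false appends to the existing table. As the warning notes,
    // concurrent appends to the same table can corrupt it.
    newEvents.insertInto("events", false)

    sc.stop()
  }
}
```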
| Constructor and Description |
|---|
| InsertIntoParquetTable(org.apache.spark.sql.parquet.ParquetRelation relation, SparkPlan child, boolean overwrite, SQLContext sqlContext) |
| Modifier and Type | Method and Description |
|---|---|
| SparkPlan | child() |
| RDD<org.apache.spark.sql.catalyst.expressions.Row> | execute() Inserts all rows into the Parquet file. |
| scala.collection.immutable.List<SQLContext> | otherCopyArgs() |
| scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> | output() |
| boolean | overwrite() |
| org.apache.spark.sql.parquet.ParquetRelation | relation() |
| SQLContext | sqlContext() |
Methods inherited from class SparkPlan: executeCollect, outputPartitioning, requiredChildDistribution
Methods inherited from class org.apache.spark.sql.catalyst.plans.QueryPlan: expressions, generateSchemaString, generateSchemaString, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionUp$1, outputSet, printSchema, schemaString, transformAllExpressions, transformExpressions, transformExpressionsDown, transformExpressionsUp
Methods inherited from class org.apache.spark.sql.catalyst.trees.TreeNode: apply, argString, asCode, children, collect, fastEquals, flatMap, foreach, generateTreeString, getNodeNumbered, id, makeCopy, map, mapChildren, nextId, nodeName, numberedTreeString, sameInstance, simpleString, stringArgs, toString, transform, transformChildrenDown, transformChildrenUp, transformDown, transformUp, treeString, withNewChildren
public InsertIntoParquetTable(org.apache.spark.sql.parquet.ParquetRelation relation,
SparkPlan child,
boolean overwrite,
SQLContext sqlContext)
public org.apache.spark.sql.parquet.ParquetRelation relation()
public SparkPlan child()
public boolean overwrite()
public SQLContext sqlContext()
public RDD<org.apache.spark.sql.catalyst.expressions.Row> execute()
public scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output()
Specified by: output in class org.apache.spark.sql.catalyst.plans.QueryPlan<SparkPlan>
public scala.collection.immutable.List<SQLContext> otherCopyArgs()
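For illustration, a hedged sketch of how the accessors documented above fit together on a node the planner has already produced. The package import, the object and helper names, and the assumption that a planned node is in hand are not part of this page; only the accessors themselves (relation(), child(), overwrite(), execute()) are documented here.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.expressions.Row
// Assumed package for this operator; adjust to wherever the class lives in
// your Spark build.
import org.apache.spark.sql.parquet.InsertIntoParquetTable

// Hypothetical helper: print the documented accessors of an already-planned
// InsertIntoParquetTable node, then run it.
object ParquetInsertInspection {
  def runParquetInsert(plan: InsertIntoParquetTable): RDD[Row] = {
    println(s"Target relation: ${plan.relation}")  // ParquetRelation being written
    println(s"Child plan:\n${plan.child}")         // SparkPlan producing the rows
    println(s"Overwrite:       ${plan.overwrite}") // false => append
    // execute() inserts all rows produced by the child plan into the Parquet
    // file and returns an RDD[Row].
    plan.execute()
  }
}
```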