```java
public interface CompressibleColumnBuilder<T extends org.apache.spark.sql.types.NativeType>
extends ColumnBuilder, Logging
```
.--------------------------- Column type ID (4 bytes)
| .----------------------- Null count N (4 bytes)
| | .------------------- Null positions (4 x N bytes, empty if null count is zero)
| | | .------------- Compression scheme ID (4 bytes)
| | | | .--------- Compressed non-null elements
V V V V V
+---+---+-----+---+---------+
| | | ... | | ... ... |
+---+---+-----+---+---------+
\-----------/ \-----------/
header body
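As a hedged illustration of the layout above, the following standalone snippet packs a header and body into a `java.nio.ByteBuffer` using the same four-byte fields. The type ID, scheme ID, and values are made-up placeholders for demonstration, not Spark's actual constants, and the "compressed" body is left uncompressed:

```java
import java.nio.ByteBuffer;

public class ColumnLayoutDemo {
    public static void main(String[] args) {
        int typeId = 3;               // hypothetical column type ID
        int[] nullPositions = {1, 4}; // rows 1 and 4 are null
        int schemeId = 0;             // hypothetical compression scheme ID
        int[] body = {10, 20, 30};    // non-null elements (uncompressed here)

        ByteBuffer buf = ByteBuffer.allocate(
            4 + 4 + 4 * nullPositions.length + 4 + 4 * body.length);
        buf.putInt(typeId);                            // column type ID (4 bytes)
        buf.putInt(nullPositions.length);              // null count N (4 bytes)
        for (int pos : nullPositions) buf.putInt(pos); // null positions (4 x N bytes)
        buf.putInt(schemeId);                          // compression scheme ID (4 bytes)
        for (int v : body) buf.putInt(v);              // compressed non-null elements

        buf.flip(); // switch to reading mode and walk the header back
        System.out.println("type=" + buf.getInt()
            + " nulls=" + buf.getInt()
            + " firstNullPos=" + buf.getInt());
        // prints: type=3 nulls=2 firstNullPos=1
    }
}
```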
| Modifier and Type | Method and Description |
|---|---|
| `void` | `appendFrom(org.apache.spark.sql.Row row, int ordinal)` — Appends `row(ordinal)` to the column builder. |
| `java.nio.ByteBuffer` | `build()` — Returns the final columnar byte buffer. |
| `scala.collection.Seq<Encoder<T>>` | `compressionEncoders()` |
| `void` | `gatherCompressibilityStats(org.apache.spark.sql.Row row, int ordinal)` |
| `void` | `initialize(int initialSize, String columnName, boolean useCompression)` — Initializes with an approximate lower bound on the expected number of elements in this column. |
| `boolean` | `isWorthCompressing(Encoder<T> encoder)` |
Methods inherited from interface ColumnBuilder:
`columnStats`

Methods inherited from interface Logging:
`initializeIfNecessary`, `initializeLogging`, `isTraceEnabled`, `log_`, `log`, `logDebug`, `logDebug`, `logError`, `logError`, `logInfo`, `logInfo`, `logName`, `logTrace`, `logTrace`, `logWarning`, `logWarning`

`void initialize(int initialSize, String columnName, boolean useCompression)`
Specified by: `initialize` in interface `ColumnBuilder`

`void gatherCompressibilityStats(org.apache.spark.sql.Row row, int ordinal)`

`void appendFrom(org.apache.spark.sql.Row row, int ordinal)`
Appends `row(ordinal)` to the column builder.
Specified by: `appendFrom` in interface `ColumnBuilder`

`java.nio.ByteBuffer build()`
Returns the final columnar byte buffer.
Specified by: `build` in interface `ColumnBuilder`
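The method contracts above imply a build lifecycle: `initialize` once, then `gatherCompressibilityStats` and `appendFrom` for each row, and finally `build`. A minimal sketch of that call order, assuming a toy int column with no Spark dependencies (`ToyIntColumnBuilder` is a hypothetical stand-in, not the real interface; `Row` and `Encoder` are omitted):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the real builder: same call order, toy int column.
class ToyIntColumnBuilder {
    private final List<Integer> values = new ArrayList<>();
    private int statsCount = 0; // stands in for compressibility-stats bookkeeping

    void initialize(int initialSize, String columnName, boolean useCompression) {
        values.clear();
        statsCount = 0;
    }

    void gatherCompressibilityStats(int value) { statsCount++; }

    void appendFrom(int value) { values.add(value); }

    ByteBuffer build() {
        ByteBuffer buf = ByteBuffer.allocate(4 * values.size());
        for (int v : values) buf.putInt(v);
        buf.flip(); // ready for the consumer to read
        return buf;
    }
}

public class LifecycleDemo {
    public static void main(String[] args) {
        ToyIntColumnBuilder builder = new ToyIntColumnBuilder();
        builder.initialize(16, "age", true);       // once, before any rows
        for (int v : new int[] {21, 34, 55}) {
            builder.gatherCompressibilityStats(v); // stats first...
            builder.appendFrom(v);                 // ...then the value itself
        }
        ByteBuffer result = builder.build();       // final columnar buffer
        System.out.println(result.remaining());    // prints 12 (3 ints x 4 bytes)
    }
}
```

The split between `gatherCompressibilityStats` and `appendFrom` lets an implementation pick a compression scheme (via `isWorthCompressing`) from observed value statistics before committing bytes to the buffer.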