public final class QuantileDiscretizer extends Estimator<Bucketizer> implements DefaultParamsWritable, HasInputCols, HasOutputCols
QuantileDiscretizer takes a column with continuous features and outputs a column with binned
categorical features. The number of bins can be set using the numBuckets parameter. It is
possible that the number of buckets used will be smaller than this value, for example, if there
are too few distinct values of the input to create enough distinct quantiles.
Since 2.3.0, QuantileDiscretizer can map multiple columns at once by setting the inputCols
parameter. If both inputCol and inputCols are set, an Exception will be thrown. To specify
the number of buckets for each column, set the numBucketsArray parameter; if the number of
buckets should be the same across all columns, numBuckets can be set as a convenience.
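The quantile-splitting behavior described above can be sketched in plain Python. This is an illustrative stand-in, not Spark's implementation: `statistics.quantiles` plays the role of approxQuantile, and `quantile_splits` / `fit_columns` are hypothetical helper names. It shows why the number of buckets actually used can be smaller than numBuckets: duplicate quantile cut points collapse when the input has too few distinct values.

```python
import statistics

def quantile_splits(values, num_buckets):
    """Compute bucket boundaries from num_buckets quantiles.

    Duplicate quantiles are collapsed, so the resulting number of
    buckets (len(splits) - 1) may be smaller than num_buckets.
    """
    cuts = statistics.quantiles(values, n=num_buckets, method="inclusive")
    distinct = sorted(set(cuts))
    return [float("-inf")] + distinct + [float("inf")]

def fit_columns(columns, num_buckets_array):
    """numBucketsArray-style fitting: one bucket count per column."""
    return [quantile_splits(col, n) for col, n in zip(columns, num_buckets_array)]

# A column with many distinct values yields the requested 4 buckets...
spread = quantile_splits([float(x) for x in range(100)], 4)
# ...but a column with a single distinct value collapses to fewer buckets.
constant = quantile_splits([1.0] * 20, 4)
print(len(spread) - 1, len(constant) - 1)
```

With per-column counts, `fit_columns(cols, [4, 10])` mirrors the numBucketsArray semantics, while passing the same count for every column corresponds to setting numBuckets alone.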
NaN handling:
null and NaN values will be ignored during QuantileDiscretizer fitting, which produces a
Bucketizer model for making predictions. During transformation, Bucketizer will raise an
error when it finds NaN values in the dataset, but the user can also choose to keep or
remove NaN values within the dataset by setting handleInvalid. If the user chooses to keep
NaN values, they are handled specially and placed into their own bucket: for example, if 4
buckets are used, non-NaN data is put into buckets[0-3], while NaNs are counted in a
special bucket[4].
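The three handleInvalid behaviors can be sketched as follows. This is a simplified Python model of Bucketizer's transform step, not Spark code; `bucketize` is a hypothetical helper. It shows how "keep" routes NaN into the extra bucket with index numBuckets, while "error" and "skip" raise or drop.

```python
import bisect
import math

def bucketize(value, splits, handle_invalid="error"):
    """Assign value to a bucket, mirroring Bucketizer's handleInvalid choices.

    splits has num_buckets + 1 entries; valid data maps to buckets
    0 .. num_buckets - 1, and "keep" maps NaN to an extra bucket
    with index num_buckets.
    """
    if math.isnan(value):
        if handle_invalid == "keep":
            return len(splits) - 1   # the special extra bucket
        if handle_invalid == "skip":
            return None              # caller drops this row
        raise ValueError("NaN seen during transform; set handleInvalid to 'keep' or 'skip'")
    # half-open buckets [splits[i], splits[i+1])
    return bisect.bisect_right(splits, value) - 1

splits = [float("-inf"), -1.0, 0.0, 1.0, float("inf")]  # 4 buckets
print(bucketize(0.5, splits))                   # a regular value
print(bucketize(float("nan"), splits, "keep"))  # lands in bucket 4
```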
Algorithm: The bin ranges are chosen using an approximate algorithm (see the documentation for
org.apache.spark.sql.DataFrameStatFunctions.approxQuantile
for a detailed description). The precision of the approximation can be controlled with the
relativeError parameter. The lower and upper bin bounds will be -Infinity and +Infinity,
covering all real values.
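A minimal fit-then-transform sketch shows why the -Infinity and +Infinity outer bounds matter: values outside the training range still land in the first or last bucket at transform time. Again this is an illustrative Python model with hypothetical helper names; exact `statistics.quantiles` stands in for the approximate approxQuantile algorithm, and the relativeError trade-off is not modeled.

```python
import bisect
import statistics

def fit_splits(train, num_buckets):
    # exact quantiles stand in for approxQuantile here; the real
    # algorithm trades accuracy for speed via relativeError
    cuts = sorted(set(statistics.quantiles(train, n=num_buckets, method="inclusive")))
    return [float("-inf")] + cuts + [float("inf")]

def transform(value, splits):
    return bisect.bisect_right(splits, value) - 1

splits = fit_splits([float(x) for x in range(100)], 4)
# the outer bounds cover all real values, so out-of-range inputs
# still land in the first or last bucket at transform time
print(transform(-1e9, splits), transform(1e9, splits))
```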
| Constructor and Description |
|---|
| QuantileDiscretizer() |
| QuantileDiscretizer(String uid) |
| Modifier and Type | Method and Description |
|---|---|
| static Params | clear(Param<?> param) |
| QuantileDiscretizer | copy(ParamMap extra) Creates a copy of this instance with the same UID and some extra params. |
| static String | explainParam(Param<?> param) |
| static String | explainParams() |
| static ParamMap | extractParamMap() |
| static ParamMap | extractParamMap(ParamMap extra) |
| Bucketizer | fit(Dataset<?> dataset) Fits a model to the input data. |
| static <T> scala.Option<T> | get(Param<T> param) |
| static <T> scala.Option<T> | getDefault(Param<T> param) |
| static String | getHandleInvalid() |
| static String | getInputCol() |
| static String[] | getInputCols() |
| static int | getNumBuckets() |
| int | getNumBuckets() |
| static int[] | getNumBucketsArray() |
| int[] | getNumBucketsArray() |
| static <T> T | getOrDefault(Param<T> param) |
| static String | getOutputCol() |
| static String[] | getOutputCols() |
| static Param<Object> | getParam(String paramName) |
| static double | getRelativeError() |
| double | getRelativeError() |
| static Param<String> | handleInvalid() |
| Param<String> | handleInvalid() Param for how to handle invalid entries. |
| static <T> boolean | hasDefault(Param<T> param) |
| static boolean | hasParam(String paramName) |
| static Param<String> | inputCol() |
| static StringArrayParam | inputCols() |
| static boolean | isDefined(Param<?> param) |
| static boolean | isSet(Param<?> param) |
| static QuantileDiscretizer | load(String path) |
| static IntParam | numBuckets() |
| IntParam | numBuckets() Number of buckets (quantiles, or categories) into which data points are grouped. |
| static IntArrayParam | numBucketsArray() |
| IntArrayParam | numBucketsArray() Array of number of buckets (quantiles, or categories) into which data points are grouped. |
| static Param<String> | outputCol() |
| static StringArrayParam | outputCols() |
| static Param<?>[] | params() |
| static DoubleParam | relativeError() |
| DoubleParam | relativeError() Relative error (see documentation for org.apache.spark.sql.DataFrameStatFunctions.approxQuantile for description). Must be in the range [0, 1]. |
| static void | save(String path) |
| static <T> Params | set(Param<T> param, T value) |
| QuantileDiscretizer | setHandleInvalid(String value) |
| QuantileDiscretizer | setInputCol(String value) |
| QuantileDiscretizer | setInputCols(String[] value) |
| QuantileDiscretizer | setNumBuckets(int value) |
| QuantileDiscretizer | setNumBucketsArray(int[] value) |
| QuantileDiscretizer | setOutputCol(String value) |
| QuantileDiscretizer | setOutputCols(String[] value) |
| QuantileDiscretizer | setRelativeError(double value) |
| static String | toString() |
| StructType | transformSchema(StructType schema) :: DeveloperApi :: |
| String | uid() An immutable unique ID for the object and its derivatives. |
| MLWriter | write() Returns an MLWriter instance for this ML instance. |
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface HasHandleInvalid: getHandleInvalid
Methods inherited from interface HasInputCol: getInputCol, inputCol
Methods inherited from interface HasOutputCol: getOutputCol, outputCol
Methods inherited from interface Params: clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
Methods inherited from interface Identifiable: toString
Methods inherited from interface MLWritable: save
Methods inherited from interface HasInputCols: getInputCols, inputCols
Methods inherited from interface HasOutputCols: getOutputCols, outputCols
Methods inherited from interface Logging: initializeLogging, initializeLogIfNecessary, initializeLogIfNecessary, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public QuantileDiscretizer(String uid)
public QuantileDiscretizer()
public static QuantileDiscretizer load(String path)
public static String toString()
public static Param<?>[] params()
public static String explainParam(Param<?> param)
public static String explainParams()
public static final boolean isSet(Param<?> param)
public static final boolean isDefined(Param<?> param)
public static boolean hasParam(String paramName)
public static Param<Object> getParam(String paramName)
public static final <T> scala.Option<T> get(Param<T> param)
public static final <T> T getOrDefault(Param<T> param)
public static final <T> scala.Option<T> getDefault(Param<T> param)
public static final <T> boolean hasDefault(Param<T> param)
public static final ParamMap extractParamMap()
public static final String getHandleInvalid()
public static final Param<String> inputCol()
public static final String getInputCol()
public static final Param<String> outputCol()
public static final String getOutputCol()
public static IntParam numBuckets()
public static int getNumBuckets()
public static IntArrayParam numBucketsArray()
public static int[] getNumBucketsArray()
public static DoubleParam relativeError()
public static double getRelativeError()
public static Param<String> handleInvalid()
public static void save(String path)
throws java.io.IOException
Throws: java.io.IOException
public static final StringArrayParam inputCols()
public static final String[] getInputCols()
public static final StringArrayParam outputCols()
public static final String[] getOutputCols()
public String uid()
Specified by: uid in interface Identifiable
public QuantileDiscretizer setRelativeError(double value)
public QuantileDiscretizer setNumBuckets(int value)
public QuantileDiscretizer setInputCol(String value)
public QuantileDiscretizer setOutputCol(String value)
public QuantileDiscretizer setHandleInvalid(String value)
public QuantileDiscretizer setNumBucketsArray(int[] value)
public QuantileDiscretizer setInputCols(String[] value)
public QuantileDiscretizer setOutputCols(String[] value)
public StructType transformSchema(StructType schema)
:: DeveloperApi ::
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and
raise an exception if any parameter value is invalid. Parameter value checks which
do not depend on other parameters are handled by Param.validate().
Typical implementation should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
Specified by: transformSchema in class PipelineStage
Parameters: schema - (undocumented)
public Bucketizer fit(Dataset<?> dataset)
Fits a model to the input data.
Specified by: fit in class Estimator<Bucketizer>
Parameters: dataset - (undocumented)
public QuantileDiscretizer copy(ParamMap extra)
Creates a copy of this instance with the same UID and some extra params. See defaultCopy().
Specified by: copy in interface Params
Overrides: copy in class Estimator<Bucketizer>
Parameters: extra - (undocumented)
public MLWriter write()
Returns an MLWriter instance for this ML instance.
Specified by: write in interface DefaultParamsWritable
Specified by: write in interface MLWritable
public IntParam numBuckets()
Number of buckets (quantiles, or categories) into which data points are grouped.
See also handleInvalid, which can optionally create an additional bucket for NaN values.
default: 2
public int getNumBuckets()
public IntArrayParam numBucketsArray()
See also handleInvalid, which can optionally create an additional bucket for NaN values.
public int[] getNumBucketsArray()
public DoubleParam relativeError()
Relative error (see documentation for org.apache.spark.sql.DataFrameStatFunctions.approxQuantile for description). Must be in the range [0, 1]. Note that in the multiple columns case, the relative error is applied to all columns.
default: 0.001
public double getRelativeError()
public Param<String> handleInvalid()
Param for how to handle invalid entries.
Specified by: handleInvalid in interface HasHandleInvalid