Welcome to the Deep Learning Pipelines Python API docs!

Horovod Runner

class sparkdl.HorovodRunner(*, np, driver_log_verbosity='log_callback_only')[source]

Bases: object

HorovodRunner runs distributed deep learning training jobs using Horovod.

On Databricks Runtime 5.0 ML and above, it launches the Horovod job as a distributed Spark job. It makes running Horovod easy on Databricks by managing the cluster setup and integrating with Spark. Check out Databricks documentation to view end-to-end examples and performance tuning tips.

The open-source version runs the job locally inside the same Python process and is intended for local development only.

Note

Horovod is a distributed training framework developed by Uber.

Parameters
  • np

    number of parallel processes to use for the Horovod job. This argument only takes effect on Databricks Runtime 5.0 ML and above. It is ignored in the open-source version. On Databricks, each process will take an available task slot, which maps to a GPU on a GPU cluster or a CPU core on a CPU cluster. Accepted values are:

    • If <0, this will spawn -np subprocesses on the driver node to run Horovod locally. Training stdout and stderr messages go to the notebook cell output, and are also available in driver logs in case the cell output is truncated. This is useful for debugging and we recommend testing your code under this mode first. However, be careful of heavy use of the Spark driver on a shared Databricks cluster. Note that np < -1 is only supported on Databricks Runtime 5.5 ML and above.

    • If >0, this will launch a Spark job with np tasks starting all together and run the Horovod job on the task nodes. It will wait until np task slots are available to launch the job. If np is greater than the total number of task slots on the cluster, the job will fail. As of Databricks Runtime 5.4 ML, training stdout and stderr messages go to the notebook cell output. If the cell output is truncated, full logs are available in the stderr stream of task 0 under the second Spark job started by HorovodRunner, which you can find in the Spark UI.

  • driver_log_verbosity – driver log verbosity, “all” or “log_callback_only” (default). During training, the first worker process collects logs from all workers. The training logs are always merged into the first Spark executor’s stderr logs. If driver log verbosity is “all”, HorovodRunner streams all logs to the driver and shows them in the notebook cell output. However, this might generate an excessive amount of logs during distributed training. You can turn it off by setting driver log verbosity to “log_callback_only”. In this mode, HorovodRunner only streams selected logs from the remote workers. Logs from remote workers can be sent using sparkdl.horovod.log_to_driver(), or via a log callback in the first worker process, e.g., sparkdl.horovod.tensorflow.keras.LogCallback.

run(main, **kwargs)[source]

Runs a Horovod training job invoking main(**kwargs).

The open-source version only invokes main(**kwargs) inside the same Python process. On Databricks Runtime 5.0 ML and above, it will launch the Horovod job based on the documented behavior of np. Both the main function and the keyword arguments are serialized using cloudpickle and distributed to cluster workers.

Parameters
  • main – a Python function that contains the Horovod training code. The expected signature is def main(**kwargs) or compatible forms. Because the function gets pickled and distributed to workers, make changes to global state (e.g., setting the logging level) inside the function, and be aware of pickling limitations. Avoid referencing large objects in the function, which might result in large pickled data and make the job slow to start.

  • kwargs – keyword arguments passed to the main function at invocation time.

Returns

return value of the main function. With np>=0, this returns the value from the rank 0 process. Note that the returned value should be serializable using cloudpickle.
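
A minimal usage sketch, assuming Horovod with Keras support (horovod.tensorflow.keras) is installed; the function body is a placeholder for real training code:

>>> from sparkdl import HorovodRunner
>>> def train(learning_rate=0.1):
...     import horovod.tensorflow.keras as hvd
...     hvd.init()
...     # build, compile, and fit the Keras model here, scaling the
...     # learning rate by hvd.size()
...     return learning_rate * hvd.size()
>>> hr = HorovodRunner(np=2)  # np=-1 would run locally on the driver for debugging
>>> best_lr = hr.run(train, learning_rate=0.1)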

sparkdl.horovod.log_to_driver(message)[source]

Sends a log message (a string) to the driver, which prints it to stdout. Messages longer than 4000 characters are truncated.
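
A minimal sketch, assuming it is called from inside the training function executed by HorovodRunner.run():

>>> from sparkdl.horovod import log_to_driver
>>> def train():
...     # the message is streamed back and printed in the notebook cell output
...     log_to_driver("worker training started")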

class sparkdl.horovod.tensorflow.keras.LogCallback(per_batch_log=False)[source]

Bases: tensorflow.python.keras.callbacks.Callback

A simple HorovodRunner log callback that streams event logs to notebook cell output.

Parameters

per_batch_log – whether to output logs per batch, default: False.
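
A minimal sketch of attaching the callback inside the Keras training code executed by HorovodRunner (the model and data names are placeholders):

>>> from sparkdl.horovod.tensorflow.keras import LogCallback
>>> callbacks = [LogCallback(per_batch_log=False)]
>>> # model.fit(x_train, y_train, epochs=3, callbacks=callbacks)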

on_batch_begin(batch, logs=None)

A backwards compatibility alias for on_train_batch_begin.

on_batch_end(batch, logs=None)[source]

A backwards compatibility alias for on_train_batch_end.

on_epoch_begin(epoch, logs=None)[source]

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Arguments:
    epoch: integer, index of epoch.
    logs: dict. Currently no data is passed to this argument for this method, but that may change in the future.

on_epoch_end(epoch, logs=None)[source]

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Arguments:
    epoch: integer, index of epoch.
    logs: dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_.

on_predict_batch_begin(batch, logs=None)

Called at the beginning of a batch in predict methods.

Subclasses should override for any actions to run.

Arguments:
    batch: integer, index of batch within the current epoch.
    logs: dict. Has keys batch and size representing the current batch number and the size of the batch.

on_predict_batch_end(batch, logs=None)

Called at the end of a batch in predict methods.

Subclasses should override for any actions to run.

Arguments:
    batch: integer, index of batch within the current epoch.
    logs: dict. Metric results for this batch.

on_predict_begin(logs=None)

Called at the beginning of prediction.

Subclasses should override for any actions to run.

Arguments:
    logs: dict. Currently no data is passed to this argument for this method, but that may change in the future.

on_predict_end(logs=None)

Called at the end of prediction.

Subclasses should override for any actions to run.

Arguments:
    logs: dict. Currently no data is passed to this argument for this method, but that may change in the future.

on_test_batch_begin(batch, logs=None)

Called at the beginning of a batch in evaluate methods.

Also called at the beginning of a validation batch in the fit methods, if validation data is provided.

Subclasses should override for any actions to run.

Arguments:
    batch: integer, index of batch within the current epoch.
    logs: dict. Has keys batch and size representing the current batch number and the size of the batch.

on_test_batch_end(batch, logs=None)

Called at the end of a batch in evaluate methods.

Also called at the end of a validation batch in the fit methods, if validation data is provided.

Subclasses should override for any actions to run.

Arguments:
    batch: integer, index of batch within the current epoch.
    logs: dict. Metric results for this batch.

on_test_begin(logs=None)

Called at the beginning of evaluation or validation.

Subclasses should override for any actions to run.

Arguments:
    logs: dict. Currently no data is passed to this argument for this method, but that may change in the future.

on_test_end(logs=None)

Called at the end of evaluation or validation.

Subclasses should override for any actions to run.

Arguments:
    logs: dict. Currently no data is passed to this argument for this method, but that may change in the future.

on_train_batch_begin(batch, logs=None)

Called at the beginning of a training batch in fit methods.

Subclasses should override for any actions to run.

Arguments:
    batch: integer, index of batch within the current epoch.
    logs: dict. Has keys batch and size representing the current batch number and the size of the batch.

on_train_batch_end(batch, logs=None)

Called at the end of a training batch in fit methods.

Subclasses should override for any actions to run.

Arguments:
    batch: integer, index of batch within the current epoch.
    logs: dict. Metric results for this batch.

on_train_begin(logs=None)

Called at the beginning of training.

Subclasses should override for any actions to run.

Arguments:
    logs: dict. Currently no data is passed to this argument for this method, but that may change in the future.

on_train_end(logs=None)

Called at the end of training.

Subclasses should override for any actions to run.

Arguments:
    logs: dict. Currently no data is passed to this argument for this method, but that may change in the future.

set_model(model)

set_params(params)

Xgboost for PySpark Pipeline

class sparkdl.xgboost.XgboostClassifier(**kwargs)[source]

Bases: sparkdl.xgboost.xgboost._XgboostEstimator, pyspark.ml.param.shared.HasProbabilityCol, pyspark.ml.param.shared.HasRawPredictionCol

XgboostClassifier is a PySpark ML estimator. It implements the XGBoost classification algorithm based on the XGBoost Python library, and it can be used in a PySpark Pipeline and in PySpark ML meta algorithms such as CrossValidator/TrainValidationSplit/OneVsRest.

XgboostClassifier automatically supports most of the parameters of the xgboost.XGBClassifier constructor and most of the parameters used in the xgboost.XGBClassifier fit and predict methods (see the API docs for details), excluding the unsupported parameters: gpu_id, kwargs, output_margin, base_margin, validate_features.

Parameters

Note

The Parameters chart above contains parameters that need special handling. For a full list of parameters, see entries with Param(parent=… below.

Note

XgboostClassifier currently only supports training on a single worker node, and it loads all the training data into memory during training. If the training data cannot fit into the worker’s memory, an error is raised. XgboostClassifier supports predicting on multiple workers in parallel.

Note

This API is experimental.

Examples

>>> from sparkdl.xgboost import XgboostClassifier
>>> from pyspark.ml.linalg import Vectors
>>> df_train = spark.createDataFrame([
...     (Vectors.dense(1.0, 2.0, 3.0), 0, False, 1.0),
...     (Vectors.sparse(3, {1: 1.0, 2: 5.5}), 1, False, 2.0),
...     (Vectors.dense(4.0, 5.0, 6.0), 0, True, 1.0),
...     (Vectors.sparse(3, {1: 6.0, 2: 7.5}), 1, True, 2.0),
... ], ["features", "label", "isVal", "weight"])
>>> df_test = spark.createDataFrame([
...     (Vectors.dense(1.0, 2.0, 3.0), ),
... ], ["features"])
>>> xgb_classifier = XgboostClassifier(max_depth=5, missing=0.0,
...     validationIndicatorCol='isVal', weightCol='weight',
...     early_stopping_rounds=1, eval_metric='logloss')
>>> xgb_clf_model = xgb_classifier.fit(df_train)
>>> xgb_clf_model.transform(df_test).show()
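
XgboostClassifier can also be used as a stage in a PySpark Pipeline. A minimal sketch, assuming a hypothetical raw DataFrame df_raw with numeric columns "x1", "x2", "x3" and a "label" column:

>>> from pyspark.ml import Pipeline
>>> from pyspark.ml.feature import VectorAssembler
>>> assembler = VectorAssembler(inputCols=["x1", "x2", "x3"], outputCol="features")
>>> xgb = XgboostClassifier(featuresCol="features", labelCol="label", missing=0.0)
>>> pipeline = Pipeline(stages=[assembler, xgb])
>>> # pipeline_model = pipeline.fit(df_raw)
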
callbacks = Param(parent='undefined', name='callbacks', doc='Refer to XGBoost doc of `xgboost.XGBClassifier.fit()` or `xgboost.XGBRegressor.fit()` for this param callbacks. The callbacks can be arbitrary functions. It is saved using cloudpickle which is not a fully self-contained format. It may fail to load with different versions of dependencies.')
clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. The default implementation creates a shallow copy using copy.copy(), and then copies the embedded and extra parameters over and returns the copy. Subclasses should override this method if the default approach is not sufficient.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map
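
A small sketch of the merge ordering, using the documented missing Param of xgb_classifier from the example above:

>>> param_map = xgb_classifier.extractParamMap({xgb_classifier.missing: 1.0})
>>> # param_map[xgb_classifier.missing] is 1.0: extra values override
>>> # user-supplied values, which override defaults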

featuresCol = Param(parent='undefined', name='featuresCol', doc='features column name.')
fit(dataset, params=None)

Fits a model to the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns

fitted model(s)

New in version 1.3.0.

fitMultiple(dataset, paramMaps)

Fits a model to the input dataset for each param map in paramMaps.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame.

  • paramMaps – A Sequence of param maps.

Returns

A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.

New in version 2.3.0.
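
A minimal sketch, using a hypothetical list of param maps that varies the documented missing Param purely for illustration:

>>> param_maps = [{xgb_classifier.missing: 0.0}, {xgb_classifier.missing: float("nan")}]
>>> models = [None] * len(param_maps)
>>> for index, model in xgb_classifier.fitMultiple(df_train, param_maps):
...     models[index] = model  # index values may not arrive in order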

getFeaturesCol()

Gets the value of featuresCol or its default value.

getLabelCol()

Gets the value of labelCol or its default value.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName)

Gets a param by its name.

getPredictionCol()

Gets the value of predictionCol or its default value.

getProbabilityCol()

Gets the value of probabilityCol or its default value.

getRawPredictionCol()

Gets the value of rawPredictionCol or its default value.

getValidationIndicatorCol()

Gets the value of validationIndicatorCol or its default value.

getWeightCol()

Gets the value of weightCol or its default value.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

labelCol = Param(parent='undefined', name='labelCol', doc='label column name.')
classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

missing = Param(parent='undefined', name='missing', doc='Specify the missing value in the features, default np.nan. We recommend using 0.0 as the missing value for better performance. Note: In a spark DataFrame, the inactive values in a sparse vector mean 0 instead of missing values, unless missing=0 is specified.')
property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

predictionCol = Param(parent='undefined', name='predictionCol', doc='prediction column name.')
probabilityCol = Param(parent='undefined', name='probabilityCol', doc='Column name for predicted class conditional probabilities. Note: Not all models output well-calibrated probability estimates! These probabilities should be treated as confidences, not precise probabilities.')
rawPredictionCol = Param(parent='undefined', name='rawPredictionCol', doc='raw prediction (a.k.a. confidence) column name.')
classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)

Sets a parameter in the embedded param map.

validationIndicatorCol = Param(parent='undefined', name='validationIndicatorCol', doc='name of the column that indicates whether each row is for training or for validation. False indicates training; true indicates validation.')
weightCol = Param(parent='undefined', name='weightCol', doc='weight column name. If this is not set or empty, we treat all instance weights as 1.0.')
write()

Returns an MLWriter instance for this ML instance.

class sparkdl.xgboost.XgboostClassifierModel(xgb_sklearn_model=None)[source]

Bases: sparkdl.xgboost.xgboost._XgboostModel, pyspark.ml.param.shared.HasProbabilityCol, pyspark.ml.param.shared.HasRawPredictionCol

The model returned by sparkdl.xgboost.XgboostClassifier.fit()

Note

This API is experimental.

callbacks = Param(parent='undefined', name='callbacks', doc='Refer to XGBoost doc of `xgboost.XGBClassifier.fit()` or `xgboost.XGBRegressor.fit()` for this param callbacks. The callbacks can be arbitrary functions. It is saved using cloudpickle which is not a fully self-contained format. It may fail to load with different versions of dependencies.')
clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. The default implementation creates a shallow copy using copy.copy(), and then copies the embedded and extra parameters over and returns the copy. Subclasses should override this method if the default approach is not sufficient.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map

featuresCol = Param(parent='undefined', name='featuresCol', doc='features column name.')
getFeaturesCol()

Gets the value of featuresCol or its default value.

getLabelCol()

Gets the value of labelCol or its default value.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName)

Gets a param by its name.

getPredictionCol()

Gets the value of predictionCol or its default value.

getProbabilityCol()

Gets the value of probabilityCol or its default value.

getRawPredictionCol()

Gets the value of rawPredictionCol or its default value.

getValidationIndicatorCol()

Gets the value of validationIndicatorCol or its default value.

getWeightCol()

Gets the value of weightCol or its default value.

get_booster()

Return the xgboost.core.Booster instance.
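
A small sketch, assuming xgb_clf_model is a fitted XgboostClassifierModel; the returned Booster exposes the native xgboost API:

>>> booster = xgb_clf_model.get_booster()
>>> importance = booster.get_score(importance_type="weight")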

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

labelCol = Param(parent='undefined', name='labelCol', doc='label column name.')
classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

missing = Param(parent='undefined', name='missing', doc='Specify the missing value in the features, default np.nan. We recommend using 0.0 as the missing value for better performance. Note: In a spark DataFrame, the inactive values in a sparse vector mean 0 instead of missing values, unless missing=0 is specified.')
property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

predictionCol = Param(parent='undefined', name='predictionCol', doc='prediction column name.')
probabilityCol = Param(parent='undefined', name='probabilityCol', doc='Column name for predicted class conditional probabilities. Note: Not all models output well-calibrated probability estimates! These probabilities should be treated as confidences, not precise probabilities.')
rawPredictionCol = Param(parent='undefined', name='rawPredictionCol', doc='raw prediction (a.k.a. confidence) column name.')
classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)

Sets a parameter in the embedded param map.

transform(dataset, params=None)

Transforms the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params.

Returns

transformed dataset

New in version 1.3.0.

validationIndicatorCol = Param(parent='undefined', name='validationIndicatorCol', doc='name of the column that indicates whether each row is for training or for validation. False indicates training; true indicates validation.')
weightCol = Param(parent='undefined', name='weightCol', doc='weight column name. If this is not set or empty, we treat all instance weights as 1.0.')
write()

Returns an MLWriter instance for this ML instance.

class sparkdl.xgboost.XgboostRegressor(**kwargs)[source]

Bases: sparkdl.xgboost.xgboost._XgboostEstimator

XgboostRegressor is a PySpark ML estimator. It implements the XGBoost regression algorithm based on the XGBoost Python library, and it can be used in a PySpark Pipeline and in PySpark ML meta algorithms such as CrossValidator/TrainValidationSplit/OneVsRest.

XgboostRegressor automatically supports most of the parameters of the xgboost.XGBRegressor constructor and most of the parameters used in the xgboost.XGBRegressor fit and predict methods (see the API docs for details), excluding the unsupported parameters: gpu_id, kwargs, output_margin, base_margin, validate_features.

Parameters

Note

The Parameters chart above contains parameters that need special handling. For a full list of parameters, see entries with Param(parent=… below.

Note

XgboostRegressor currently only supports training on a single worker node, and it loads all the training data into memory during training. If the training data cannot fit into the worker’s memory, an error is raised. XgboostRegressor supports predicting on multiple workers in parallel.

Note

This API is experimental.

Examples

>>> from sparkdl.xgboost import XgboostRegressor
>>> from pyspark.ml.linalg import Vectors
>>> df_train = spark.createDataFrame([
...     (Vectors.dense(1.0, 2.0, 3.0), 0, False, 1.0),
...     (Vectors.sparse(3, {1: 1.0, 2: 5.5}), 1, False, 2.0),
...     (Vectors.dense(4.0, 5.0, 6.0), 2, True, 1.0),
...     (Vectors.sparse(3, {1: 6.0, 2: 7.5}), 3, True, 2.0),
... ], ["features", "label", "isVal", "weight"])
>>> df_test = spark.createDataFrame([
...     (Vectors.dense(1.0, 2.0, 3.0), ),
...     (Vectors.sparse(3, {1: 1.0, 2: 5.5}), )
... ], ["features"])
>>> xgb_regressor = XgboostRegressor(max_depth=5, missing=0.0,
... validationIndicatorCol='isVal', weightCol='weight',
... early_stopping_rounds=1, eval_metric='rmse')
>>> xgb_reg_model = xgb_regressor.fit(df_train)
>>> xgb_reg_model.transform(df_test)
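
A fitted model can be persisted and reloaded with the standard PySpark ML persistence methods documented below; the path here is hypothetical:

>>> xgb_reg_model.save("/tmp/xgb_reg_model")
>>> from sparkdl.xgboost import XgboostRegressorModel
>>> loaded_model = XgboostRegressorModel.load("/tmp/xgb_reg_model")
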
callbacks = Param(parent='undefined', name='callbacks', doc='Refer to XGBoost doc of `xgboost.XGBClassifier.fit()` or `xgboost.XGBRegressor.fit()` for this param callbacks. The callbacks can be arbitrary functions. It is saved using cloudpickle which is not a fully self-contained format. It may fail to load with different versions of dependencies.')
clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. The default implementation creates a shallow copy using copy.copy(), and then copies the embedded and extra parameters over and returns the copy. Subclasses should override this method if the default approach is not sufficient.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map

featuresCol = Param(parent='undefined', name='featuresCol', doc='features column name.')
fit(dataset, params=None)

Fits a model to the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns

fitted model(s)

New in version 1.3.0.

fitMultiple(dataset, paramMaps)

Fits a model to the input dataset for each param map in paramMaps.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame.

  • paramMaps – A Sequence of param maps.

Returns

A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.

New in version 2.3.0.

getFeaturesCol()

Gets the value of featuresCol or its default value.

getLabelCol()

Gets the value of labelCol or its default value.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName)

Gets a param by its name.

getPredictionCol()

Gets the value of predictionCol or its default value.

getValidationIndicatorCol()

Gets the value of validationIndicatorCol or its default value.

getWeightCol()

Gets the value of weightCol or its default value.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

labelCol = Param(parent='undefined', name='labelCol', doc='label column name.')
classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

missing = Param(parent='undefined', name='missing', doc='Specify the missing value in the features, default np.nan. We recommend using 0.0 as the missing value for better performance. Note: In a spark DataFrame, the inactive values in a sparse vector mean 0 instead of missing values, unless missing=0 is specified.')
property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

predictionCol = Param(parent='undefined', name='predictionCol', doc='prediction column name.')
classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)

Sets a parameter in the embedded param map.

validationIndicatorCol = Param(parent='undefined', name='validationIndicatorCol', doc='name of the column that indicates whether each row is for training or for validation. False indicates training; true indicates validation.')
weightCol = Param(parent='undefined', name='weightCol', doc='weight column name. If this is not set or empty, we treat all instance weights as 1.0.')
write()

Returns an MLWriter instance for this ML instance.

class sparkdl.xgboost.XgboostRegressorModel(xgb_sklearn_model=None)[source]

Bases: sparkdl.xgboost.xgboost._XgboostModel

The model returned by sparkdl.xgboost.XgboostRegressor.fit()

Note

This API is experimental.

callbacks = Param(parent='undefined', name='callbacks', doc='Refer to XGBoost doc of `xgboost.XGBClassifier.fit()` or `xgboost.XGBRegressor.fit()` for this param callbacks. The callbacks can be arbitrary functions. It is saved using cloudpickle which is not a fully self-contained format. It may fail to load with different versions of dependencies.')
clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. The default implementation creates a shallow copy using copy.copy(), and then copies the embedded and extra parameters over and returns the copy. Subclasses should override this method if the default approach is not sufficient.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map

featuresCol = Param(parent='undefined', name='featuresCol', doc='features column name.')
getFeaturesCol()

Gets the value of featuresCol or its default value.

getLabelCol()

Gets the value of labelCol or its default value.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName)

Gets a param by its name.

getPredictionCol()

Gets the value of predictionCol or its default value.

getValidationIndicatorCol()

Gets the value of validationIndicatorCol or its default value.

getWeightCol()

Gets the value of weightCol or its default value.

get_booster()

Return the xgboost.core.Booster instance.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

labelCol = Param(parent='undefined', name='labelCol', doc='label column name.')
classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

missing = Param(parent='undefined', name='missing', doc='Specify the missing value in the features, default np.nan. We recommend using 0.0 as the missing value for better performance. Note: In a spark DataFrame, the inactive values in a sparse vector mean 0 instead of missing values, unless missing=0 is specified.')
property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

predictionCol = Param(parent='undefined', name='predictionCol', doc='prediction column name.')
classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)

Sets a parameter in the embedded param map.

transform(dataset, params=None)

Transforms the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params.

Returns

transformed dataset

New in version 1.3.0.

validationIndicatorCol = Param(parent='undefined', name='validationIndicatorCol', doc='name of the column that indicates whether each row is for training or for validation. False indicates training; true indicates validation.')
weightCol = Param(parent='undefined', name='weightCol', doc='weight column name. If this is not set or empty, we treat all instance weights as 1.0.')
write()

Returns an MLWriter instance for this ML instance.