Welcome to the Deep Learning Pipelines Python API docs!

Horovod Runner

class sparkdl.HorovodRunner(*, np, driver_log_verbosity='log_callback_only')[source]

Bases: object

HorovodRunner runs distributed deep learning training jobs using Horovod.

On Databricks Runtime 5.0 ML and above, it launches the Horovod job as a distributed Spark job. It makes running Horovod easy on Databricks by managing the cluster setup and integrating with Spark. Check out Databricks documentation to view end-to-end examples and performance tuning tips.

The open-source version runs the job locally inside the same Python process and is intended for local development only.

Note

Horovod is a distributed training framework developed by Uber.

Parameters
  • np

    number of parallel processes to use for the Horovod job. This argument only takes effect on Databricks Runtime 5.0 ML and above. It is ignored in the open-source version. On Databricks, each process will take an available task slot, which maps to a GPU on a GPU cluster or a CPU core on a CPU cluster. Accepted values are:

    • If <0, this spawns -np subprocesses on the driver node to run Horovod locally. Training stdout and stderr messages go to the notebook cell output, and are also available in the driver logs in case the cell output is truncated. This is useful for debugging, and we recommend testing your code in this mode first. However, be careful about heavy use of the Spark driver on a shared Databricks cluster. Note that np < -1 is only supported on Databricks Runtime 5.5 ML and above.

    • If >0, this launches a Spark job with np tasks that start together and runs the Horovod job on the task nodes. It waits until np task slots are available to launch the job. If np is greater than the total number of task slots on the cluster, the job will fail. As of Databricks Runtime 5.4 ML, training stdout and stderr messages go to the notebook cell output. If the cell output is truncated, full logs are available in the stderr stream of task 0 under the second Spark job started by HorovodRunner, which you can find in the Spark UI.

  • driver_log_verbosity – driver log verbosity, “all” or “log_callback_only” (default). During training, the first worker process collects logs from all workers. The training logs are always merged into the first Spark executor’s stderr logs. If driver log verbosity is “all”, HorovodRunner streams all logs to the driver and shows them in the notebook cell output. However, this might generate an excessive amount of logs during distributed training. You can turn it off by setting driver log verbosity to “log_callback_only”. In this mode, HorovodRunner only streams selected logs from the remote workers. Logs from a remote worker can be sent with sparkdl.horovod.log_to_driver(), or via a log callback in the first worker process, e.g., sparkdl.horovod.tensorflow.keras.LogCallback.

run(main, **kwargs)[source]

Runs a Horovod training job invoking main(**kwargs).

The open-source version only invokes main(**kwargs) inside the same Python process. On Databricks Runtime 5.0 ML and above, it will launch the Horovod job based on the documented behavior of np. Both the main function and the keyword arguments are serialized using cloudpickle and distributed to cluster workers.

Parameters
  • main – a Python function that contains the Horovod training code. The expected signature is def main(**kwargs) or compatible forms. Because the function gets pickled and distributed to workers, change global state inside the function (e.g., setting the logging level) and be aware of pickling limitations. Avoid referencing large objects in the function, which might result in large pickled data and make the job slow to start.

  • kwargs – keyword arguments passed to the main function at invocation time.

Returns

return value of the main function. With np>=0, this returns the value from the rank 0 process. Note that the returned value should be serializable using cloudpickle.
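
For illustration, a minimal usage sketch follows. It assumes a cluster where Horovod and TensorFlow are installed; the body of the training function is elided and only indicates where the Horovod-specific setup would go.

>>> from sparkdl import HorovodRunner
>>> def train(learning_rate=0.1):
...     # Runs on each worker process; import and initialize Horovod here
...     # because the function is pickled and shipped to the workers.
...     import horovod.tensorflow.keras as hvd
...     hvd.init()
...     # Build a tf.keras model, wrap its optimizer with
...     # hvd.DistributedOptimizer, and train it here.
...     # Return only cloudpickle-serializable values.
...     return learning_rate
>>> hr = HorovodRunner(np=2)  # np=-1 would run one local subprocess for debugging
>>> hr.run(train, learning_rate=0.1)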

sparkdl.horovod.log_to_driver(message)[source]

Sends a log message (a string) to the driver, which prints it to stdout. If the message is longer than 4000 characters, it is truncated.
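
For example, the function passed to HorovodRunner.run() might forward short progress messages back to the notebook like this (a sketch; the message text is illustrative):

>>> from sparkdl.horovod import log_to_driver
>>> def train():
...     # Runs on a remote worker; the string is streamed back to the
...     # driver and printed in the notebook cell output.
...     log_to_driver("training started on this worker")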

class sparkdl.horovod.tensorflow.keras.LogCallback(per_batch_log=False)[source]

Bases: keras.callbacks.Callback

A simple HorovodRunner log callback that streams event logs to notebook cell output.

Parameters

per_batch_log – whether to output logs per batch, default: False.
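
A sketch of attaching the callback in the first worker's Keras training loop follows; it assumes an already compiled tf.keras model named model and training data x_train, y_train:

>>> from sparkdl.horovod.tensorflow.keras import LogCallback
>>> callbacks = [LogCallback(per_batch_log=False)]  # stream epoch-level logs only
>>> # model.fit(x_train, y_train, epochs=3, callbacks=callbacks)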

on_batch_begin(batch, logs=None)

A backwards compatibility alias for on_train_batch_begin.

on_batch_end(batch, logs=None)[source]

A backwards compatibility alias for on_train_batch_end.

on_epoch_begin(epoch, logs=None)[source]

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Args:
  epoch: Integer, index of epoch.
  logs: Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_epoch_end(epoch, logs=None)[source]

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Args:
  epoch: Integer, index of epoch.
  logs: Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model’s metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

on_predict_batch_begin(batch, logs=None)

Called at the beginning of a batch in predict methods.

Subclasses should override for any actions to run.

Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches.

Args:
  batch: Integer, index of batch within the current epoch.
  logs: Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_predict_batch_end(batch, logs=None)

Called at the end of a batch in predict methods.

Subclasses should override for any actions to run.

Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches.

Args:
  batch: Integer, index of batch within the current epoch.
  logs: Dict. Aggregated metric results up until this batch.

on_predict_begin(logs=None)

Called at the beginning of prediction.

Subclasses should override for any actions to run.

Args:
  logs: Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_predict_end(logs=None)

Called at the end of prediction.

Subclasses should override for any actions to run.

Args:
  logs: Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_test_batch_begin(batch, logs=None)

Called at the beginning of a batch in evaluate methods.

Also called at the beginning of a validation batch in the fit methods, if validation data is provided.

Subclasses should override for any actions to run.

Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches.

Args:
  batch: Integer, index of batch within the current epoch.
  logs: Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_test_batch_end(batch, logs=None)

Called at the end of a batch in evaluate methods.

Also called at the end of a validation batch in the fit methods, if validation data is provided.

Subclasses should override for any actions to run.

Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches.

Args:
  batch: Integer, index of batch within the current epoch.
  logs: Dict. Aggregated metric results up until this batch.

on_test_begin(logs=None)

Called at the beginning of evaluation or validation.

Subclasses should override for any actions to run.

Args:
  logs: Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_test_end(logs=None)

Called at the end of evaluation or validation.

Subclasses should override for any actions to run.

Args:
  logs: Dict. Currently the output of the last call to on_test_batch_end() is passed to this argument for this method but that may change in the future.

on_train_batch_begin(batch, logs=None)

Called at the beginning of a training batch in fit methods.

Subclasses should override for any actions to run.

Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches.

Args:
  batch: Integer, index of batch within the current epoch.
  logs: Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_train_batch_end(batch, logs=None)

Called at the end of a training batch in fit methods.

Subclasses should override for any actions to run.

Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches.

Args:
  batch: Integer, index of batch within the current epoch.
  logs: Dict. Aggregated metric results up until this batch.

on_train_begin(logs=None)

Called at the beginning of training.

Subclasses should override for any actions to run.

Args:
  logs: Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_train_end(logs=None)

Called at the end of training.

Subclasses should override for any actions to run.

Args:
  logs: Dict. Currently the output of the last call to on_epoch_end() is passed to this argument for this method but that may change in the future.

set_model(model)

set_params(params)

Xgboost for PySpark Pipeline

class sparkdl.xgboost.XgboostClassifier(**kwargs)[source]

Bases: sparkdl.xgboost.xgboost._XgboostEstimator, pyspark.ml.param.shared.HasProbabilityCol, pyspark.ml.param.shared.HasRawPredictionCol

XgboostClassifier is a PySpark ML estimator. It implements the XGBoost classification algorithm based on the XGBoost Python library, and it can be used in PySpark Pipelines and PySpark ML meta algorithms like CrossValidator/TrainValidationSplit/OneVsRest.

XgboostClassifier automatically supports most of the parameters in the xgboost.XGBClassifier constructor and most of the parameters used in the xgboost.XGBClassifier fit and predict methods (see API docs for details).

XgboostClassifier doesn’t support setting gpu_id but supports another param, use_gpu; see the doc below for more details.

XgboostClassifier also doesn’t support setting base_margin explicitly, but supports another param called baseMarginCol; see the doc below for more details.

XgboostClassifier doesn’t support setting output_margin, but the output margin can be obtained from the raw prediction column. See the rawPredictionCol param doc below for more details.

XgboostClassifier doesn’t support the validate_features and output_margin params.

Parameters
  • callbacks – Export and import of the callback functions are best-effort. For details, see the sparkdl.xgboost.XgboostClassifier.callbacks param doc.

  • missing – The parameter missing in XgboostClassifier has different semantics from that in xgboost.XGBClassifier. For details, see the sparkdl.xgboost.XgboostClassifier.missing param doc.

  • rawPredictionCol – output_margin=True is implicitly supported by the rawPredictionCol output column, which always contains the predicted margin values.

  • validationIndicatorCol – For params related to xgboost.XGBClassifier training with an evaluation dataset’s supervision, set the sparkdl.xgboost.XgboostClassifier.validationIndicatorCol parameter instead of setting the eval_set parameter in the xgboost.XGBClassifier fit method.

  • weightCol – To specify the weights of the training and validation datasets, set the sparkdl.xgboost.XgboostClassifier.weightCol parameter instead of setting the sample_weight and sample_weight_eval_set parameters in the xgboost.XGBClassifier fit method.

  • xgb_model – Set the value to be the instance returned by sparkdl.xgboost.XgboostClassifierModel.get_booster().

  • num_workers – Integer that specifies the number of XGBoost workers to use. Each XGBoost worker corresponds to one Spark task. This parameter is only supported on Databricks Runtime 9.0 ML and above.

  • use_gpu – Boolean that specifies whether the executors are running on GPU instances. This parameter is only supported on Databricks Runtime 9.0 ML and above.

  • use_external_storage – Boolean that specifies whether to use external storage when training in a distributed manner. This allows using disk as a cache. Setting this to true is useful when you want better memory utilization, but it is not needed for small test datasets. This parameter is only supported on Databricks Runtime 9.0 ML and above.

  • baseMarginCol – To specify the base margins of the training and validation dataset, set sparkdl.xgboost.XgboostClassifier.baseMarginCol parameter instead of setting base_margin and base_margin_eval_set in the xgboost.XGBClassifier fit method. Note: this isn’t available for distributed training. This parameter is only supported on Databricks Runtime 9.0 ML and above.

Note

The Parameters chart above contains parameters that need special handling. For a full list of parameters, see entries with Param(parent=… below.

Note

This API is experimental.

Examples

>>> from sparkdl.xgboost import XgboostClassifier
>>> from pyspark.ml.linalg import Vectors
>>> df_train = spark.createDataFrame([
...     (Vectors.dense(1.0, 2.0, 3.0), 0, False, 1.0),
...     (Vectors.sparse(3, {1: 1.0, 2: 5.5}), 1, False, 2.0),
...     (Vectors.dense(4.0, 5.0, 6.0), 0, True, 1.0),
...     (Vectors.sparse(3, {1: 6.0, 2: 7.5}), 1, True, 2.0),
... ], ["features", "label", "isVal", "weight"])
>>> df_test = spark.createDataFrame([
...     (Vectors.dense(1.0, 2.0, 3.0), ),
... ], ["features"])
>>> xgb_classifier = XgboostClassifier(max_depth=5, missing=0.0,
...     validationIndicatorCol='isVal', weightCol='weight',
...     early_stopping_rounds=1, eval_metric='logloss')
>>> xgb_clf_model = xgb_classifier.fit(df_train)
>>> xgb_clf_model.transform(df_test).show()
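
Since the estimator and its model follow the standard PySpark ML persistence API (see save() and load() below), a sketch of persisting and reloading the fitted model could look like the following; the DBFS path is hypothetical:

>>> from sparkdl.xgboost import XgboostClassifierModel
>>> xgb_clf_model.save("dbfs:/tmp/xgb_clf_model")
>>> loaded_model = XgboostClassifierModel.load("dbfs:/tmp/xgb_clf_model")
>>> loaded_model.transform(df_test).show()
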
baseMarginCol = Param(parent='undefined', name='baseMarginCol', doc='Specify the base margins of the training and validation dataset. Set this value instead of setting `base_margin` and `base_margin_eval_set` in the fit method. Note: this parameter is not available for distributed training. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
callbacks = Param(parent='undefined', name='callbacks', doc='Refer to XGBoost doc of `xgboost.XGBClassifier.fit()` or `xgboost.XGBRegressor.fit()` for this param callbacks. The callbacks can be arbitrary functions. It is saved using cloudpickle which is not a fully self-contained format. It may fail to load with different versions of dependencies.')
clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. The default implementation creates a shallow copy using copy.copy(), and then copies the embedded and extra parameters over and returns the copy. Subclasses should override this method if the default approach is not sufficient.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

external_storage_precision = Param(parent='undefined', name='external_storage_precision', doc='The number of significant digits for data storage on disk when using external storage. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map
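
As an illustration of that precedence, the following sketch uses the featuresCol param documented below (assuming XgboostClassifier has been imported as in the Examples section above); an extra value overrides both the default and any user-supplied value:

>>> clf = XgboostClassifier()
>>> param_map = clf.extractParamMap({clf.featuresCol: "f"})
>>> param_map[clf.featuresCol]  # default "features" is overridden by the extra value
'f'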

featuresCol = Param(parent='undefined', name='featuresCol', doc='features column name.')
fit(dataset, params=None)

Fits a model to the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns

fitted model(s)

New in version 1.3.0.

fitMultiple(dataset, paramMaps)

Fits a model to the input dataset for each param map in paramMaps.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame.

  • paramMaps – A Sequence of param maps.

Returns

A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.

New in version 2.3.0.
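
A sketch of consuming the returned iterator, reusing df_train and xgb_classifier from the Examples section above (the missing values chosen here are illustrative):

>>> param_maps = [{xgb_classifier.missing: 0.0}, {xgb_classifier.missing: 1.0}]
>>> models = [None] * len(param_maps)
>>> for index, model in xgb_classifier.fitMultiple(df_train, param_maps):
...     models[index] = model  # index values may not arrive in order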

force_repartition = Param(parent='undefined', name='force_repartition', doc='A boolean variable. Set force_repartition=true if you want to force the input dataset to be repartitioned before XGBoost training. Note: The auto repartitioning judgement is not fully accurate, so it is recommended to have force_repartition be True. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
getFeaturesCol()

Gets the value of featuresCol or its default value.

getLabelCol()

Gets the value of labelCol or its default value.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName)

Gets a param by its name.

getPredictionCol()

Gets the value of predictionCol or its default value.

getProbabilityCol()

Gets the value of probabilityCol or its default value.

getRawPredictionCol()

Gets the value of rawPredictionCol or its default value.

getValidationIndicatorCol()

Gets the value of validationIndicatorCol or its default value.

getWeightCol()

Gets the value of weightCol or its default value.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

labelCol = Param(parent='undefined', name='labelCol', doc='label column name.')
classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

missing = Param(parent='undefined', name='missing', doc='Specify the missing value in the features, default np.nan. We recommend using 0.0 as the missing value for better performance. Note: In a spark DataFrame, the inactive values in a sparse vector mean 0 instead of missing values, unless missing=0 is specified.')
num_workers = Param(parent='undefined', name='num_workers', doc='The number of XGBoost workers. Each XGBoost worker corresponds to one spark task. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

predictionCol = Param(parent='undefined', name='predictionCol', doc='prediction column name.')
probabilityCol = Param(parent='undefined', name='probabilityCol', doc='Column name for predicted class conditional probabilities. Note: Not all models output well-calibrated probability estimates! These probabilities should be treated as confidences, not precise probabilities.')
rawPredictionCol = Param(parent='undefined', name='rawPredictionCol', doc='raw prediction (a.k.a. confidence) column name.')
classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)

Sets a parameter in the embedded param map.

uid

A unique id for the object.

use_external_storage = Param(parent='undefined', name='use_external_storage', doc="A boolean variable (that is False by default). External storage is a parameter for distributed training that allows external storage (disk) to be used when you have an exceptionally large dataset. This should be set to false for small datasets. Note that base margin and weighting doesn't work if this is True. Also note that you may use precision if you use external storage. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.")
use_gpu = Param(parent='undefined', name='use_gpu', doc='A boolean variable. Set use_gpu=true if the executors are running on GPU instances. Currently, only one GPU per task is supported. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
validationIndicatorCol = Param(parent='undefined', name='validationIndicatorCol', doc='name of the column that indicates whether each row is for training or for validation. False indicates training; true indicates validation.')
weightCol = Param(parent='undefined', name='weightCol', doc='weight column name. If this is not set or empty, we treat all instance weights as 1.0.')
write()

Returns an MLWriter instance for this ML instance.

class sparkdl.xgboost.XgboostClassifierModel(xgb_sklearn_model=None)[source]

Bases: sparkdl.xgboost.xgboost._XgboostModel, pyspark.ml.param.shared.HasProbabilityCol, pyspark.ml.param.shared.HasRawPredictionCol

The model returned by sparkdl.xgboost.XgboostClassifier.fit().

Note

This API is experimental.

baseMarginCol = Param(parent='undefined', name='baseMarginCol', doc='Specify the base margins of the training and validation dataset. Set this value instead of setting `base_margin` and `base_margin_eval_set` in the fit method. Note: this parameter is not available for distributed training. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
callbacks = Param(parent='undefined', name='callbacks', doc='Refer to XGBoost doc of `xgboost.XGBClassifier.fit()` or `xgboost.XGBRegressor.fit()` for this param callbacks. The callbacks can be arbitrary functions. It is saved using cloudpickle which is not a fully self-contained format. It may fail to load with different versions of dependencies.')
clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. The default implementation creates a shallow copy using copy.copy(), and then copies the embedded and extra parameters over and returns the copy. Subclasses should override this method if the default approach is not sufficient.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

external_storage_precision = Param(parent='undefined', name='external_storage_precision', doc='The number of significant digits for data storage on disk when using external storage. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map

featuresCol = Param(parent='undefined', name='featuresCol', doc='features column name.')
force_repartition = Param(parent='undefined', name='force_repartition', doc='A boolean variable. Set force_repartition=true if you want to force the input dataset to be repartitioned before XGBoost training. Note: The auto repartitioning judgement is not fully accurate, so it is recommended to have force_repartition be True. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
getFeaturesCol()

Gets the value of featuresCol or its default value.

getLabelCol()

Gets the value of labelCol or its default value.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName)

Gets a param by its name.

getPredictionCol()

Gets the value of predictionCol or its default value.

getProbabilityCol()

Gets the value of probabilityCol or its default value.

getRawPredictionCol()

Gets the value of rawPredictionCol or its default value.

getValidationIndicatorCol()

Gets the value of validationIndicatorCol or its default value.

getWeightCol()

Gets the value of weightCol or its default value.

get_booster()

Return the xgboost.core.Booster instance.
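
With the native booster you can, for example, inspect feature importances or save the model in XGBoost’s native format. This is a sketch using methods from the xgboost library, reusing xgb_clf_model from the Examples section above; the local path is hypothetical:

>>> booster = xgb_clf_model.get_booster()
>>> booster.get_score(importance_type="gain")  # per-feature importance
>>> booster.save_model("/tmp/xgb_booster.json")

The same booster instance can also be passed as the xgb_model parameter of a new XgboostClassifier to continue training from it, as described in the Parameters section above.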

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

labelCol = Param(parent='undefined', name='labelCol', doc='label column name.')
classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

missing = Param(parent='undefined', name='missing', doc='Specify the missing value in the features, default np.nan. We recommend using 0.0 as the missing value for better performance. Note: In a spark DataFrame, the inactive values in a sparse vector mean 0 instead of missing values, unless missing=0 is specified.')
num_workers = Param(parent='undefined', name='num_workers', doc='The number of XGBoost workers. Each XGBoost worker corresponds to one spark task. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

predictionCol = Param(parent='undefined', name='predictionCol', doc='prediction column name.')
probabilityCol = Param(parent='undefined', name='probabilityCol', doc='Column name for predicted class conditional probabilities. Note: Not all models output well-calibrated probability estimates! These probabilities should be treated as confidences, not precise probabilities.')
rawPredictionCol = Param(parent='undefined', name='rawPredictionCol', doc='raw prediction (a.k.a. confidence) column name.')
classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)

Sets a parameter in the embedded param map.

transform(dataset, params=None)

Transforms the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params.

Returns

transformed dataset

New in version 1.3.0.

uid

A unique id for the object.

use_external_storage = Param(parent='undefined', name='use_external_storage', doc="A boolean variable (that is False by default). External storage is a parameter for distributed training that allows external storage (disk) to be used when you have an exceptionally large dataset. This should be set to false for small datasets. Note that base margin and weighting doesn't work if this is True. Also note that you may use precision if you use external storage. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.")
use_gpu = Param(parent='undefined', name='use_gpu', doc='A boolean variable. Set use_gpu=true if the executors are running on GPU instances. Currently, only one GPU per task is supported. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
validationIndicatorCol = Param(parent='undefined', name='validationIndicatorCol', doc='name of the column that indicates whether each row is for training or for validation. False indicates training; true indicates validation.')
weightCol = Param(parent='undefined', name='weightCol', doc='weight column name. If this is not set or empty, we treat all instance weights as 1.0.')
write()

Returns an MLWriter instance for this ML instance.

class sparkdl.xgboost.XgboostRegressor(**kwargs)[source]

Bases: sparkdl.xgboost.xgboost._XgboostEstimator

XgboostRegressor is a PySpark ML estimator. It implements the XGBoost regression algorithm based on the XGBoost Python library, and it can be used in PySpark Pipelines and PySpark ML meta algorithms like CrossValidator/TrainValidationSplit/OneVsRest.

XgboostRegressor automatically supports most of the parameters in the xgboost.XGBRegressor constructor and most of the parameters used in the xgboost.XGBRegressor fit and predict methods (see API docs for details).

XgboostRegressor doesn’t support setting gpu_id but supports another param, use_gpu; see the doc below for more details.

XgboostRegressor also doesn’t support setting base_margin explicitly, but supports another param called baseMarginCol; see the doc below for more details.

XgboostRegressor doesn’t support the validate_features and output_margin params.

Parameters
  • callbacks – Export and import of the callback functions are best-effort. For details, see the sparkdl.xgboost.XgboostRegressor.callbacks param doc.

  • missing – The parameter missing in XgboostRegressor has different semantics from that in xgboost.XGBRegressor. For details, see the sparkdl.xgboost.XgboostRegressor.missing param doc.

  • validationIndicatorCol – For params related to xgboost.XGBRegressor training with an evaluation dataset’s supervision, set the sparkdl.xgboost.XgboostRegressor.validationIndicatorCol parameter instead of setting the eval_set parameter in the xgboost.XGBRegressor fit method.

  • weightCol – To specify the weights of the training and validation datasets, set the sparkdl.xgboost.XgboostRegressor.weightCol parameter instead of setting the sample_weight and sample_weight_eval_set parameters in the xgboost.XGBRegressor fit method.

  • xgb_model – Set the value to be the instance returned by sparkdl.xgboost.XgboostRegressorModel.get_booster().

  • num_workers – Integer that specifies the number of XGBoost workers to use. Each XGBoost worker corresponds to one Spark task. This parameter is only supported on Databricks Runtime 9.0 ML and above.

  • use_gpu – Boolean that specifies whether the executors are running on GPU instances. This parameter is only supported on Databricks Runtime 9.0 ML and above.

  • use_external_storage – Boolean that specifies whether to use external storage when training in a distributed manner. This allows using disk as a cache. Setting this to true is useful when you want better memory utilization, but it is not needed for small test datasets. This parameter is only supported on Databricks Runtime 9.0 ML and above.

  • baseMarginCol – To specify the base margins of the training and validation dataset, set sparkdl.xgboost.XgboostRegressor.baseMarginCol parameter instead of setting base_margin and base_margin_eval_set in the xgboost.XGBRegressor fit method. Note: this isn’t available for distributed training. This parameter is only supported on Databricks Runtime 9.0 ML and above.

Note

The Parameters chart above contains parameters that need special handling. For a full list of parameters, see entries with Param(parent=… below.

Note

This API is experimental.

Examples

>>> from sparkdl.xgboost import XgboostRegressor
>>> from pyspark.ml.linalg import Vectors
>>> df_train = spark.createDataFrame([
...     (Vectors.dense(1.0, 2.0, 3.0), 0, False, 1.0),
...     (Vectors.sparse(3, {1: 1.0, 2: 5.5}), 1, False, 2.0),
...     (Vectors.dense(4.0, 5.0, 6.0), 2, True, 1.0),
...     (Vectors.sparse(3, {1: 6.0, 2: 7.5}), 3, True, 2.0),
... ], ["features", "label", "isVal", "weight"])
>>> df_test = spark.createDataFrame([
...     (Vectors.dense(1.0, 2.0, 3.0), ),
...     (Vectors.sparse(3, {1: 1.0, 2: 5.5}), )
... ], ["features"])
>>> xgb_regressor = XgboostRegressor(max_depth=5, missing=0.0,
... validationIndicatorCol='isVal', weightCol='weight',
... early_stopping_rounds=1, eval_metric='rmse')
>>> xgb_reg_model = xgb_regressor.fit(df_train)
>>> xgb_reg_model.transform(df_test)
baseMarginCol = Param(parent='undefined', name='baseMarginCol', doc='Specify the base margins of the training and validation dataset. Set this value instead of setting `base_margin` and `base_margin_eval_set` in the fit method. Note: this parameter is not available for distributed training. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
callbacks = Param(parent='undefined', name='callbacks', doc='Refer to XGBoost doc of `xgboost.XGBClassifier.fit()` or `xgboost.XGBRegressor.fit()` for this param callbacks. The callbacks can be arbitrary functions. It is saved using cloudpickle which is not a fully self-contained format. It may fail to load with different versions of dependencies.')
clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. The default implementation creates a shallow copy using copy.copy(), and then copies the embedded and extra parameters over and returns the copy. Subclasses should override this method if the default approach is not sufficient.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

external_storage_precision = Param(parent='undefined', name='external_storage_precision', doc='The number of significant digits for data storage on disk when using external storage. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map

featuresCol = Param(parent='undefined', name='featuresCol', doc='features column name.')
fit(dataset, params=None)

Fits a model to the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns

fitted model(s)

New in version 1.3.0.

fitMultiple(dataset, paramMaps)

Fits a model to the input dataset for each param map in paramMaps.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame.

  • paramMaps – A Sequence of param maps.

Returns

A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.

New in version 2.3.0.

force_repartition = Param(parent='undefined', name='force_repartition', doc='A boolean variable. Set force_repartition=true if you want to force the input dataset to be repartitioned before XGBoost training. Note: The auto repartitioning judgement is not fully accurate, so it is recommended to have force_repartition be True. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
getFeaturesCol()

Gets the value of featuresCol or its default value.

getLabelCol()

Gets the value of labelCol or its default value.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName)

Gets a param by its name.

getPredictionCol()

Gets the value of predictionCol or its default value.

getValidationIndicatorCol()

Gets the value of validationIndicatorCol or its default value.

getWeightCol()

Gets the value of weightCol or its default value.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

labelCol = Param(parent='undefined', name='labelCol', doc='label column name.')
classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

missing = Param(parent='undefined', name='missing', doc='Specify the missing value in the features, default np.nan. We recommend using 0.0 as the missing value for better performance. Note: In a spark DataFrame, the inactive values in a sparse vector mean 0 instead of missing values, unless missing=0 is specified.')
num_workers = Param(parent='undefined', name='num_workers', doc='The number of XGBoost workers. Each XGBoost worker corresponds to one spark task. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

predictionCol = Param(parent='undefined', name='predictionCol', doc='prediction column name.')
classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)

Sets a parameter in the embedded param map.

uid

A unique id for the object.

use_external_storage = Param(parent='undefined', name='use_external_storage', doc="A boolean variable (that is False by default). External storage is a parameter for distributed training that allows external storage (disk) to be used when you have an exceptionally large dataset. This should be set to false for small datasets. Note that base margin and weighting doesn't work if this is True. Also note that you may use precision if you use external storage. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.")
use_gpu = Param(parent='undefined', name='use_gpu', doc='A boolean variable. Set use_gpu=true if the executors are running on GPU instances. Currently, only one GPU per task is supported. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
validationIndicatorCol = Param(parent='undefined', name='validationIndicatorCol', doc='name of the column that indicates whether each row is for training or for validation. False indicates training; true indicates validation.')
weightCol = Param(parent='undefined', name='weightCol', doc='weight column name. If this is not set or empty, we treat all instance weights as 1.0.')
write()

Returns an MLWriter instance for this ML instance.

class sparkdl.xgboost.XgboostRegressorModel(xgb_sklearn_model=None)[source]

Bases: sparkdl.xgboost.xgboost._XgboostModel

The model returned by sparkdl.xgboost.XgboostRegressor.fit().

Note

This API is experimental.

baseMarginCol = Param(parent='undefined', name='baseMarginCol', doc='Specify the base margins of the training and validation dataset. Set this value instead of setting `base_margin` and `base_margin_eval_set` in the fit method. Note: this parameter is not available for distributed training. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
callbacks = Param(parent='undefined', name='callbacks', doc='Refer to XGBoost doc of `xgboost.XGBClassifier.fit()` or `xgboost.XGBRegressor.fit()` for this param callbacks. The callbacks can be arbitrary functions. It is saved using cloudpickle which is not a fully self-contained format. It may fail to load with different versions of dependencies.')
clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. The default implementation creates a shallow copy using copy.copy(), and then copies the embedded and extra parameters over and returns the copy. Subclasses should override this method if the default approach is not sufficient.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

external_storage_precision = Param(parent='undefined', name='external_storage_precision', doc='The number of significant digits for data storage on disk when using external storage. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map

featuresCol = Param(parent='undefined', name='featuresCol', doc='features column name.')
force_repartition = Param(parent='undefined', name='force_repartition', doc='A boolean variable. Set force_repartition=true if you want to force the input dataset to be repartitioned before XGBoost training. Note: The auto repartitioning judgement is not fully accurate, so it is recommended to have force_repartition be True. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
getFeaturesCol()

Gets the value of featuresCol or its default value.

getLabelCol()

Gets the value of labelCol or its default value.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName)

Gets a param by its name.

getPredictionCol()

Gets the value of predictionCol or its default value.

getValidationIndicatorCol()

Gets the value of validationIndicatorCol or its default value.

getWeightCol()

Gets the value of weightCol or its default value.

get_booster()

Return the xgboost.core.Booster instance.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

labelCol = Param(parent='undefined', name='labelCol', doc='label column name.')
classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

missing = Param(parent='undefined', name='missing', doc='Specify the missing value in the features, default np.nan. We recommend using 0.0 as the missing value for better performance. Note: In a spark DataFrame, the inactive values in a sparse vector mean 0 instead of missing values, unless missing=0 is specified.')
num_workers = Param(parent='undefined', name='num_workers', doc='The number of XGBoost workers. Each XGBoost worker corresponds to one spark task. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

predictionCol = Param(parent='undefined', name='predictionCol', doc='prediction column name.')
classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)

Sets a parameter in the embedded param map.

transform(dataset, params=None)

Transforms the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params.

Returns

transformed dataset

New in version 1.3.0.

uid

A unique id for the object.

use_external_storage = Param(parent='undefined', name='use_external_storage', doc="A boolean variable (that is False by default). External storage is a parameter for distributed training that allows external storage (disk) to be used when you have an exceptionally large dataset. This should be set to false for small datasets. Note that base margin and weighting doesn't work if this is True. Also note that you may use precision if you use external storage. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.")
use_gpu = Param(parent='undefined', name='use_gpu', doc='A boolean variable. Set use_gpu=true if the executors are running on GPU instances. Currently, only one GPU per task is supported. Note: This parameter is only supported on Databricks Runtime 9.0 ML and above.')
validationIndicatorCol = Param(parent='undefined', name='validationIndicatorCol', doc='name of the column that indicates whether each row is for training or for validation. False indicates training; true indicates validation.')
weightCol = Param(parent='undefined', name='weightCol', doc='weight column name. If this is not set or empty, we treat all instance weights as 1.0.')
write()

Returns an MLWriter instance for this ML instance.