mlp_models


class MultiLayerPerceptronTorchModel(cuda: bool, hidden_dims: Sequence[int], hid_activation_function: Callable[[torch.Tensor], torch.Tensor], output_activation_function: Optional[Callable[[torch.Tensor], torch.Tensor]], p_dropout: Optional[float] = None, input_dim: Optional[int] = None)

Bases: VectorTorchModel

Parameters:
  • cuda – whether to enable CUDA

  • hidden_dims – the sequence of hidden layer dimensions

  • hid_activation_function – the activation function to use for hidden layers

  • output_activation_function – the activation function to apply to the output (or None to apply none)

  • p_dropout – the dropout probability to apply after each hidden layer during training

  • input_dim – the input dimension; if None, use dimensions determined by the input data (number of columns in data frame)

create_torch_module_for_dims(input_dim: int, output_dim: int) → torch.nn.Module
Parameters:
  • input_dim – the number of input dimensions as reported by the data set provider (number of columns in input data frame for default providers)

  • output_dim – the number of output dimensions as reported by the data set provider (for default providers, this will be the number of columns in the output data frame or, for classification, the number of classes)

Returns:
  the torch module
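
To illustrate the contract, the following is a minimal sketch of the kind of module such a call might assemble from the constructor arguments (hidden_dims, hid_activation_function, output_activation_function, p_dropout). It is an approximation for illustration only, not the library's actual implementation:

    import torch
    import torch.nn as nn

    def build_mlp(input_dim: int, output_dim: int,
                  hidden_dims=(5, 5), hid_activation=torch.sigmoid,
                  output_activation=None, p_dropout=None) -> nn.Module:
        """Illustrative stand-in for create_torch_module_for_dims."""
        class MLP(nn.Module):
            def __init__(self):
                super().__init__()
                dims = [input_dim, *hidden_dims, output_dim]
                self.layers = nn.ModuleList(
                    [nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)])
                self.dropout = nn.Dropout(p_dropout) if p_dropout is not None else None

            def forward(self, x):
                for i, layer in enumerate(self.layers):
                    x = layer(x)
                    if i < len(self.layers) - 1:
                        # hidden layer: apply activation, then (optionally) dropout
                        x = hid_activation(x)
                        if self.dropout is not None:
                            x = self.dropout(x)
                    elif output_activation is not None:
                        # output layer: apply the output activation, if any
                        x = output_activation(x)
                return x

        return MLP()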

cuda: bool
module: Optional[torch.nn.Module]
outputScaler: Optional[TensorScaler]
inputScaler: Optional[TensorScaler]
trainingInfo: Optional[TrainingInfo]
class MultiLayerPerceptronVectorRegressionModel(hidden_dims: Sequence[int] = (5, 5), hid_activation_function: Callable[[torch.Tensor], torch.Tensor] = torch.sigmoid, output_activation_function: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, input_dim: Optional[int] = None, normalisation_mode: NormalisationMode = NormalisationMode.NONE, cuda: bool = True, p_dropout: Optional[float] = None, nn_optimiser_params: Optional[NNOptimiserParams] = None)

Bases: TorchVectorRegressionModel

Parameters:
  • hidden_dims – sequence containing the number of neurons to use in hidden layers

  • hid_activation_function – the activation function (torch.nn.functional.* or torch.*) to use for all hidden layers

  • output_activation_function – the output activation function (torch.nn.functional.* or torch.* or None)

  • input_dim – the input dimension; if None, use dimensions determined by the input data (number of columns in data frame)

  • normalisation_mode – the normalisation mode to apply to input and output data

  • cuda – whether to use CUDA (GPU acceleration)

  • p_dropout – the probability with which to apply dropouts after each hidden layer

  • nn_optimiser_params – parameters for NNOptimiser; if None, use defaults
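
A minimal usage sketch for the regression model, assuming the import path below (it may differ between versions) and the library's DataFrame-based fit/predict interface; the toy data and column names are purely illustrative:

    import pandas as pd
    import torch
    # assumed import path; adjust to your installed package layout
    from sensai.torch.torch_models.mlp.mlp_models import \
        MultiLayerPerceptronVectorRegressionModel

    # toy data: two feature columns, one target column
    X = pd.DataFrame({"x1": [0.0, 0.5, 1.0, 1.5, 2.0, 2.5],
                      "x2": [1.0, 0.5, 0.0, -0.5, -1.0, -1.5]})
    y = pd.DataFrame({"target": [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]})

    model = MultiLayerPerceptronVectorRegressionModel(
        hidden_dims=(16, 16),
        hid_activation_function=torch.relu,
        cuda=False)  # run on the CPU for this small example
    model.fit(X, y)
    predictions = model.predict(X)  # DataFrame with one column per target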

class MultiLayerPerceptronVectorClassificationModel(hidden_dims: Sequence[int] = (5, 5), hid_activation_function: Callable[[torch.Tensor], torch.Tensor] = torch.sigmoid, output_activation_function: Optional[Union[Callable[[torch.Tensor], torch.Tensor], str, ActivationFunction]] = ActivationFunction.LOG_SOFTMAX, input_dim: Optional[int] = None, normalisation_mode: NormalisationMode = NormalisationMode.NONE, cuda: bool = True, p_dropout: Optional[float] = None, nn_optimiser_params: Optional[NNOptimiserParams] = None)

Bases: TorchVectorClassificationModel

Parameters:
  • hidden_dims – sequence containing the number of neurons to use in hidden layers

  • hid_activation_function – the activation function (torch.nn.functional.* or torch.*) to use for all hidden layers

  • output_activation_function – the output activation function (function from torch.nn.functional.*, function name, enum instance or None)

  • input_dim – the input dimension; if None, use dimensions determined by the input data (number of columns in data frame)

  • normalisation_mode – the normalisation mode to apply to input and output data

  • cuda – whether to use CUDA (GPU acceleration)

  • p_dropout – the probability with which to apply dropouts after each hidden layer

  • nn_optimiser_params – parameters for NNOptimiser; if None, use default
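
Analogously, a usage sketch for the classification model; the import path and the predict_class_probabilities call are assumptions based on the library's classification interface, not verbatim from this page:

    import pandas as pd
    import torch
    # assumed import path; adjust to your installed package layout
    from sensai.torch.torch_models.mlp.mlp_models import \
        MultiLayerPerceptronVectorClassificationModel

    # toy two-class data
    X = pd.DataFrame({"x1": [0.0, 0.1, 0.2, 0.9, 1.0, 1.1],
                      "x2": [1.0, 0.9, 0.8, 0.1, 0.0, -0.1]})
    y = pd.DataFrame({"label": ["a", "a", "a", "b", "b", "b"]})

    model = MultiLayerPerceptronVectorClassificationModel(
        hidden_dims=(8, 8),
        hid_activation_function=torch.relu,
        cuda=False)  # run on the CPU for this small example
    model.fit(X, y)
    labels = model.predict(X)  # predicted class labels
    probs = model.predict_class_probabilities(X)  # one column per class (assumed API)

Note that the default output_activation_function, ActivationFunction.LOG_SOFTMAX, yields log-probabilities over the classes, which matches the typical negative log-likelihood training objective.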