Models

Multi-Layer Perceptron

class lucidmode.models.NeuralNet(hidden_l, hidden_a, output_n, output_a, cost=None, hidden_r=None, output_r=None, optimizer=None)[source]

Artificial Neural Network: feedforward multilayer perceptron.

It supports a wide variety of topologies: any number of hidden layers, any number of neurons per hidden layer, plus one input layer and one output layer, each of which can have from 1 to N neurons.

Parameters
hidden_l: list (of int)

Number of neurons to include per hidden layer.

hidden_a: list (of str, with the same length as hidden_l)

Activation function for each hidden layer

output_n: int

Number of neurons in output layer

output_a: str

Activation function of the output layer

hidden_r / output_r: list (of str, with the same length as hidden_l)

List with the regularization criterion for each layer's weights and biases; options are:

  • ‘l1’: Lasso regularization \(|b|\)

  • ‘l2’: Ridge regularization \(|b|^2\)

  • ‘elasticnet’: \(C(L1 - L2)\)

  • ‘dropout’: Randomly (uniform) select N neurons in the layer and set their weights to 0

cost: str

Cost specification for the model, with the following keys:

  • ‘function’: Cost function name; options: ‘binary-logloss’, ‘multi-logloss’, ‘mse’

  • ‘reg’: {‘type’: [‘l1’, ‘l2’, ‘elasticnet’], ‘lambda’: 0.001, ‘ratio’: 0.01}

init: str

Weight initialization criterion, as specified from the formation method
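As an illustration of the regularization and cost options named above, here is a minimal NumPy sketch assuming the usual textbook definitions. The helper names are illustrative, not lucidmode internals, and the elastic-net form shown is the common convex combination written with the ‘lambda’ and ‘ratio’ terms from the cost specification:

```python
import numpy as np

def l1_penalty(w, lam):
    # 'l1' (Lasso): lambda * sum(|w|)
    return lam * np.sum(np.abs(w))

def l2_penalty(w, lam):
    # 'l2' (Ridge): lambda * sum(w^2)
    return lam * np.sum(w ** 2)

def elasticnet_penalty(w, lam, ratio):
    # 'elasticnet': mix of the l1 and l2 terms, weighted by ratio
    return ratio * l1_penalty(w, lam) + (1 - ratio) * l2_penalty(w, lam)

def binary_logloss(y_true, y_hat, eps=1e-15):
    # 'binary-logloss': binary cross-entropy, clipped for numerical stability
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_hat) + (1 - y_true) * np.log(1 - y_hat))

def mse(y_true, y_hat):
    # 'mse': mean squared error
    return np.mean((y_true - y_hat) ** 2)
```

The penalty term is added to the raw cost during training, which is what the ‘reg’ entry of the cost dictionary controls.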

fit(x_train, y_train, x_val=None, y_val=None, epochs=10, alpha=0.1, verbosity=3, random_state=1, callback=None, randomize=False)[source]

Train a previously formed model according to the specified parameters.

Parameters
x_train: np.array / pd.Series

Features data with nxm dimensions, n = observations, m = features

y_train: np.array / pd.Series

Target variable data, with dimensions nx1 for binary classification and nxm for multi-class

x_val: np.array / pd.Series

Same as x_train but with data considered as validation

y_val: np.array / pd.Series

Same as y_train but with data considered as validation

epochs: int

Epochs to iterate the model training

alpha: float

Learning rate for Gradient Descent

cost_f: str

Cost function; options are as listed under the cost parameter of the class

verbosity: int

Level of verbosity for progress reporting; 3: print training and validation cost at every epoch

callback: dict

Stopping criterion or action, e.g. {‘earlyStopping’: {‘metric’: ‘acc’, ‘threshold’: 0.80}}

random_state: int

Seed for the random number generators, for reproducibility

randomize: bool

Whether to randomize the order of the training data at each epoch

Returns
history: dict

Dictionary with dynamic keys and the per-epoch values of the selected metrics
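The earlyStopping callback can be sketched as a simple check at the end of each epoch. This is an illustration of the semantics only, not the library's implementation; the layout of history as one list of values per metric, and the inclusive comparison against the threshold, are assumptions:

```python
def early_stopping_triggered(callback, history, epoch):
    # True when the monitored metric has reached the threshold, e.g.
    # callback = {'earlyStopping': {'metric': 'acc', 'threshold': 0.80}}
    if not callback or 'earlyStopping' not in callback:
        return False
    spec = callback['earlyStopping']
    return history[spec['metric']][epoch] >= spec['threshold']

# Assumed history layout: one value per epoch for each tracked metric
history = {'acc': [0.61, 0.74, 0.83]}
callback = {'earlyStopping': {'metric': 'acc', 'threshold': 0.80}}
```

With these values the check fails after the second epoch (0.74 < 0.80) and triggers after the third (0.83 >= 0.80), stopping training early.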

formation(cost=None, optimizer=None, init=None, metrics=None)[source]

Neural Network Model Formation.

Parameters
cost: dict

Details of the cost function. Includes the following elements:

  • ‘cost_f’: Cost function by its name, options are: {‘logloss’, ‘mse’}

  • ‘cost_r’: Cost regularization

optimizer: dict, str

Details of the optimization method. Includes the following elements:

  • ‘type’: Name of the method for optimization

  • ‘params’: Parameters according to the method

init: str

Weight initialization criterion

metrics:

Metrics to monitor during training

Returns
self: Modifications on the instance of the class

init_weights(input_shape, init_layers, random_state=1)[source]

Weight initialization for a model previously instantiated through a topology formation process

Parameters
input_shape: int

Number of features (inputs) in the model

init_layers: list (of str, with size of n_layers)

List with the weight-initialization criterion for each layer; options are:

  • ‘common-uniform’: Commonly used factor & uniformly distributed random weights [1]

  • ‘xavier_uniform’: Xavier factor & uniformly distributed random weights [1]

  • ‘xavier_normal’: Xavier factor & standard-normally distributed random weights [1]

  • ‘he-standard’: Factor formulated according to [2]

References

  • [1] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks”, International Conference on Artificial Intelligence and Statistics, 2010.

  • [2] He et al, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification”, 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1026-1034, doi: 10.1109/ICCV.2015.123.
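The scaling factors from [1] and [2] can be sketched in NumPy as follows. This is an illustrative sketch of the standard factors; lucidmode's exact scaling, and the ‘common-uniform’ variant, may differ:

```python
import numpy as np

def xavier_uniform(n_in, n_out, rng):
    # Glorot & Bengio [1]: U(-r, r) with r = sqrt(6 / (n_in + n_out))
    r = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-r, r, size=(n_out, n_in))

def xavier_normal(n_in, n_out, rng):
    # Glorot & Bengio [1]: N(0, sigma) with sigma = sqrt(2 / (n_in + n_out))
    return rng.normal(0.0, np.sqrt(2.0 / (n_in + n_out)), size=(n_out, n_in))

def he_normal(n_in, n_out, rng):
    # He et al. [2]: N(0, sigma) with sigma = sqrt(2 / n_in), suited to ReLU
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))

rng = np.random.default_rng(1)
W = xavier_uniform(4, 3, rng)  # weight matrix for a layer with 4 inputs, 3 neurons
```

Keeping the variance of activations roughly constant across layers is what motivates these factors; the He factor compensates for ReLU zeroing half of its inputs.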

inspect(*params)[source]

Method for model inspection: prints the model topology and values to the terminal, using the inspect method from the rich package for rich text and beautiful formatting.

Parameters
params: list

Parameters that select which elements to include in the console print; all elements included in the list are considered in conjunction. The options are the following:

  • ‘help’: Show full help for the model

  • ‘methods’: Show just callable methods

  • ‘private-l1’: Private and layer-1 methods (beginning with a single underscore)

  • ‘private-l2’: Private and layer-2 methods (beginning with a double underscore)

predict(X, threshold=0.5)[source]

Computes a class or value prediction given the inherited model of the class.

Parameters
X: np.array

Array with n-dimensional samples to generate the predictions from.

threshold: float

Threshold value for the classification case. Default is 0.5

predict_proba(X)[source]

Given the input samples, generates the class probability predictions for all the classes specified in the target variable. Inherits the model, hyperparameters and execution conditions from the class after the fit method is called.
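The relationship between the two prediction methods in the binary case can be sketched as follows. The helper is illustrative, and the inclusive comparison at the threshold boundary is an assumption:

```python
import numpy as np

def to_class(probs, threshold=0.5):
    # Map class-1 probabilities (as returned by predict_proba) to
    # hard 0/1 labels, the way predict applies its threshold
    return (np.asarray(probs) >= threshold).astype(int)
```

Raising the threshold trades recall for precision: a sample with probability 0.55 is labeled 1 at the default threshold but 0 at a threshold of 0.6.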

Logistic Regression

class lucidmode.models.LogisticRegression(penalty='elasticnet')[source]

Logistic Regression model under construction …

Parameters
penalty: str

Regularization penalty to apply; options are:

  • ‘l1’: Lasso regularization \(|b|\)

  • ‘l2’: Ridge regularization \(|b|^2\)

  • ‘elasticnet’: \(C(L1 - L2)\)