Get weights for a neural network in an organized list by extracting values from a neural network object. This function is generally not called directly by the user.

neuralweights(mod_in, ...)

# S3 method for numeric
neuralweights(mod_in, rel_rsc = NULL, struct, ...)

# S3 method for nnet
neuralweights(mod_in, rel_rsc = NULL, ...)

# S3 method for mlp
neuralweights(mod_in, rel_rsc = NULL, ...)

# S3 method for nn
neuralweights(mod_in, rel_rsc = NULL, ...)

Arguments

mod_in

input object for which an organized model list is desired. The input can be an object of class numeric, nnet, mlp, or nn.

...

arguments passed to other methods

rel_rsc

numeric indicating the scaling range for the width of connection weights in a neural interpretation diagram. Default is NULL for no rescaling. See the rescaling sketch in the examples below.

struct

numeric vector with length equal to the number of layers in the network. Each value gives the number of nodes in the corresponding layer, starting with the input layer and ending with the output layer. An arbitrary number of hidden layers can be included.

Value

Returns a two-element list with the first element being a vector indicating the number of nodes in each layer of the neural network and the second element being a named list of weight values for the input model.

Details

Each element of the returned list is named using the construct 'layer node', e.g., 'out 1' is the first node of the output layer. Hidden layers are named using three values for instances with more than one hidden layer, e.g., 'hidden 1 1' is the first node in the first hidden layer, 'hidden 1 2' is the second node in the first hidden layer, 'hidden 2 1' is the first node in the second hidden layer, etc. The values in each element of the list represent the weights entering the specific node from the preceding layer in sequential order, starting with the bias layer if applicable. For example, the list item for 'hidden 1 1' of a neural network with a 3 5 1 structure (three inputs, five hidden nodes, one output) would have four values indicating the weights in sequence from the bias layer, first input node, second input node, and third input node going to the first hidden node.
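
As an illustration of this ordering, the total number of weights implied by a network structure can be computed directly, since each node receives one weight from every node in the preceding layer plus one bias weight. A minimal sketch in base R (the object name is illustrative):

struct_in <- c(3, 5, 1)
# each node in layer i + 1 receives struct_in[i] weights plus one bias weight
sum((struct_in[-length(struct_in)] + 1) * struct_in[-1])
#> [1] 26

This matches the "# weights:  26" reported by nnet for the 3 5 1 model in the examples below.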

The function will remove direct weight connections between input and output layers if the neural network was created with skip-layer connections using skip = TRUE in the nnet or train functions. This may produce misleading results when evaluating variable importance with the garson function.
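
For example, a skip-layer model can be fit with nnet as below; the direct input-output weights are excluded from the list returned by neuralweights (a sketch, not run here; the object name is illustrative):

library(nnet)

# skip = TRUE adds direct connections between the input and output layers
mod_skip <- nnet(Y1 ~ X1 + X2 + X3, data = neuraldat, size = 5, skip = TRUE,
  linout = TRUE)

# the skip-layer weights are removed from the returned list
neuralweights(mod_skip)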

Examples


data(neuraldat)
set.seed(123)

## using numeric input

wts_in <- c(13.12, 1.49, 0.16, -0.11, -0.19, -0.16, 0.56, -0.52, 0.81)
struct <- c(2, 2, 1) # two inputs, two hidden nodes, one output

neuralweights(wts_in, struct = struct)
#> $struct
#> [1] 2 2 1
#> 
#> $wts
#> $wts$`hidden 1 1`
#> [1] 13.12  1.49  0.16
#> 
#> $wts$`hidden 1 2`
#> [1] -0.11 -0.19 -0.16
#> 
#> $wts$`out 1`
#> [1]  0.56 -0.52  0.81
#> 
#> 
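
The rel_rsc argument rescales the absolute values of the weights, e.g., for line widths in a neural interpretation diagram. A minimal sketch using the numeric input above, assuming a two-element scaling range such as c(1, 5) (output not shown, since the values depend on the rescaling):

# rescale absolute weight values to the range 1 to 5
neuralweights(wts_in, rel_rsc = c(1, 5), struct = struct)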

## using nnet

library(nnet)

mod <- nnet(Y1 ~ X1 + X2 + X3, data = neuraldat, size = 5, linout = TRUE)
#> # weights:  26
#> initial  value 1768.536857 
#> iter  10 value 1.856275
#> iter  20 value 0.062478
#> iter  30 value 0.014742
#> iter  40 value 0.006542
#> iter  50 value 0.001109
#> iter  60 value 0.000999
#> iter  70 value 0.000402
#> final  value 0.000073 
#> converged
 
neuralweights(mod)  
#> $struct
#> [1] 3 5 1
#> 
#> $wts
#> $wts$`hidden 1 1`
#> [1]  0.79244745 -0.08878205 -0.34869446  0.99886444
#> 
#> $wts$`hidden 1 2`
#> [1]  1.63768901 -0.71701897  0.95502425 -0.06234557
#> 
#> $wts$`hidden 1 3`
#> [1]  1.4696987 -0.7838884 -0.6890212  0.7290061
#> 
#> $wts$`hidden 1 4`
#> [1]  1.0789572 -0.1274677 -0.1827771  0.1007104
#> 
#> $wts$`hidden 1 5`
#> [1] -1.4746862 -0.1537787 -0.2235660  0.1210513
#> 
#> $wts$`out 1`
#> [1] -1.094561e+00 -5.262218e-04  6.680189e-05 -2.581654e-03  1.809908e+00
#> [6]  1.174171e+00
#> 
#> 
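
Because the weights are returned as a named list, individual weight vectors can be extracted using the 'layer node' naming convention described in Details:

wts <- neuralweights(mod)

# weights entering the first hidden node, bias weight first
wts$wts[['hidden 1 1']]

# weights entering the output node
wts$wts[['out 1']]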

if (FALSE) {
## using RSNNS, no bias layers

library(RSNNS)

x <- neuraldat[, c('X1', 'X2', 'X3')]
y <- neuraldat[, 'Y1']
mod <- mlp(x, y, size = 5, linOut = TRUE)

neuralweights(mod)

# pruned model using code from the RSNNS pruning demo
pruneFuncParams <- list(max_pr_error_increase = 10.0, pr_accepted_error = 1.0, 
 no_of_pr_retrain_cycles = 1000, min_error_to_stop = 0.01, init_matrix_value = 1e-6, 
 input_pruning = TRUE, hidden_pruning = TRUE)
mod <- mlp(x, y, size = 5, pruneFunc = "OptimalBrainSurgeon", 
 pruneFuncParams = pruneFuncParams)

neuralweights(mod)

## using neuralnet

library(neuralnet)

mod <- neuralnet(Y1 ~ X1 + X2 + X3, data = neuraldat, hidden = 5)

neuralweights(mod)
}