
Neural Networks

Neural Network Example from homework:

import numpy as np

class LR:
    def __init__(self, dims=1, act=None):
        self.dims = dims
        self.weights = np.random.rand(dims) # weights start out random
        self.bias = np.random.rand(1)

        if act is None:
            self.act = lambda x: x # identity: pass the weighted sum through unchanged
        else:
            self.act = act

    def weighted(self, vals):
        return np.dot(self.weights, vals) + self.bias

    def forward(self, vals):
        return self.act(self.weighted(vals))

class NN:
    def __init__(self, inputs=1, layers=1, dims=[1], acts=None):
        self.inputs = inputs
        self.layers = layers
        self.dims = dims
        if acts is None:
            acts = [None] * layers # default every layer to the identity activation
        self.nodes = [ [LR(inputs, acts[0]) for i in range(dims[0])] ]

        for i in range(1, layers):
            self.nodes.append([LR(dims[i-1], acts[i]) for j in range(dims[i])])

    def forward(self, x):
        rv = x[:]
        for layer in self.nodes:
            rv = np.array([node.forward(rv) for node in layer])
        return rv
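The `weighted` method is the core of each node: it takes the dot product of the node's weights with the incoming values, then adds the bias. A minimal sketch of that computation in plain NumPy (the weight, input, and bias values here are made up for illustration):

```python
import numpy as np

weights = np.array([0.5, -1.0, 2.0])  # one weight per input
vals = np.array([4.0, 2.0, 1.0])      # inputs arriving at the node
bias = np.array([0.25])

# np.dot(weights, vals) = 0.5*4 + (-1.0)*2 + 2.0*1 = 2.0
weighted = np.dot(weights, vals) + bias
print(weighted)  # [2.25]
```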

Visual Guide to Neural Networks

Here is a visualization of the neural net created with the following code:

nn = NN(inputs=2, layers=2, dims=[1, 1], acts=[None, None])
nn.nodes[0][0].weights = np.array([1., 1.])
nn.nodes[0][0].bias = np.array([0.])
nn.nodes[1][0].weights = np.array([1.])
nn.nodes[1][0].bias = np.array([6.])

This neural network has two inputs and two layers (the sections after the inputs).

dims=[1, 1] tells us that each layer has only one node, and acts=[None, None] shows that neither layer has an activation function, so each node passes its weighted sum through unchanged.
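Passing None for an activation means the node falls back on the identity function `lambda x: x`, so its output is exactly its weighted sum. A quick sketch of the difference between that default and a nonlinear activation such as ReLU (the ReLU here is my own example, not part of the homework):

```python
import numpy as np

identity = lambda x: x             # default when act is None
relu = lambda x: np.maximum(0, x)  # a common nonlinear alternative

z = np.array([-2.0, 3.0])  # example weighted sums
print(identity(z))  # [-2.  3.] -- passed through unchanged
print(relu(z))      # [0. 3.]   -- negatives clipped to zero
```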

The weights on the edges from the inputs to the first layer are both set to 1, and the bias for the first layer is set to 0.

The weight on the edge from the first layer to the second layer is set to 1, and the bias is set to 6.
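With these values, a forward pass reduces to plain arithmetic: the first layer computes x1*1 + x2*1 + 0, and the second layer multiplies that by 1 and adds 6, so the network outputs x1 + x2 + 6. A self-contained sketch in NumPy (the input [2, 3] is an arbitrary example):

```python
import numpy as np

x = np.array([2.0, 3.0])  # arbitrary example input

# Layer 1: weights [1, 1], bias 0, no activation
h = np.dot(np.array([1.0, 1.0]), x) + 0.0  # 2 + 3 = 5

# Layer 2: weight 1, bias 6, no activation
y = 1.0 * h + 6.0  # 5 + 6 = 11

print(y)  # 11.0
```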

[visualization of the aforementioned neural net]