PyTorch nn.Linear: different output for same input
For learning purposes, I'm trying to build a simple perceptron with PyTorch that is not trained but just produces the output for manually set weights. Here's the code:
import torch.nn
from torch import tensor

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = torch.nn.Linear(3, 1)
        self.relu = torch.nn.ReLU()
        # force weights to equal one
        with torch.no_grad():
            self.fc1.weight = torch.nn.Parameter(torch.ones_like(self.fc1.weight))

    def forward(self, x):
        x = self.fc1(x)
        output = self.relu(x)
        return output

net = Net()
test_tensor = tensor([1, 1, 1])
print(net(test_tensor.float()).item())
I expect this single-layer network to output 3. That is roughly(!) what I get on every execution, but the value varies between about 2.5 and 3.5. Where does randomness enter the model?
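[Editor's note: a minimal sketch to test one likely source of the variation, not part of the original question. nn.Linear(3, 1) also creates a bias parameter, which PyTorch initializes randomly (for in_features=3, uniformly in roughly (-0.577, 0.577)) and which the code above never sets. That would match an output of 3 plus a small random offset. Pinning the bias to zero as well should make the output exactly 3 on every run:]

import torch
from torch import tensor

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = torch.nn.Linear(3, 1)
        self.relu = torch.nn.ReLU()
        with torch.no_grad():
            # same as before: set all weights to one
            self.fc1.weight = torch.nn.Parameter(torch.ones_like(self.fc1.weight))
            # additionally pin the bias, which is otherwise re-randomized each run
            self.fc1.bias = torch.nn.Parameter(torch.zeros_like(self.fc1.bias))

    def forward(self, x):
        return self.relu(self.fc1(x))

net = Net()
print(net.fc1.bias)                      # tensor([0.])
print(net(tensor([1., 1., 1.])).item())  # 3.0, identical on every execution

Alternatively, the bias term can be dropped entirely by constructing the layer as torch.nn.Linear(3, 1, bias=False).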