ReLU Activation Function
The ReLU (Rectified Linear Unit) activation function maps all negative inputs to zero and passes positive inputs through unchanged. This truncation can help with vanishing gradients, but it can introduce its own problems: once a unit's output becomes zero, it may stop carrying information through the network, an issue known as the dying ReLU problem.
Mathematically: $$ f(x) = \max(0, x) $$
Below, we implement the ReLU activation function in PyTorch and visualize the output.
import torch
import matplotlib.pyplot as plt

# Sample inputs from -10 to 10 (exclusive) in steps of 1
x_inputs = torch.arange(-10., 10., 1)
# Apply ReLU elementwise: negative values become 0, positive values pass through
y_outputs = torch.relu(x_inputs)

plt.figure(figsize=(9, 6))
plt.plot(x_inputs, y_outputs)
plt.title("ReLU Function, x in [-10, 10]")
plt.show()
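Since $f(x) = \max(0, x)$ is just an elementwise maximum with zero, torch.relu agrees with torch.clamp with min=0 and with torch.maximum against a zero tensor. The snippet below is a minimal sanity check of that equivalence; the sample values are arbitrary and chosen only for illustration.

# Quick sanity check: ReLU is an elementwise max with zero.
# The sample values below are arbitrary.
sample = torch.tensor([-3.0, -0.5, 0.0, 2.0, 7.5])
print(torch.relu(sample))                                # negatives zeroed, positives unchanged
print(torch.clamp(sample, min=0))                        # same result via clamp
print(torch.maximum(sample, torch.zeros_like(sample)))   # same result via elementwise max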