ReLU Activation Function

The ReLU (Rectified Linear Unit) activation function maps all negative inputs to zero and passes positive inputs through unchanged. This truncation helps mitigate vanishing gradients, but it can introduce its own problems: once a unit's output becomes zero, it no longer carries information (or gradients) through the network. This is known as the dying ReLU problem; a short sketch of the effect follows the plot below.

Mathematically: $$ f(x) = \max(0, x) $$

Below, we implement the ReLU activation function in PyTorch and visualize the output.

import torch
import matplotlib.pyplot as plt

# Inputs spanning the negative and positive range
x_inputs = torch.arange(-10., 10., 1.)
# Apply ReLU element-wise: negatives become 0, positives pass through unchanged
y_outputs = torch.relu(x_inputs)

plt.figure(figsize=(9, 6))
plt.plot(x_inputs, y_outputs)
plt.title("ReLU Function, x in [-10, 10]")
plt.xlabel("x")
plt.ylabel("ReLU(x)")
plt.show()
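
To make the dying ReLU remark above concrete, here is a minimal sketch showing that for negative pre-activations ReLU outputs zero and its gradient is also zero, so the corresponding weights receive no update signal. The toy tensor values are illustrative, not from the original text.

import torch

# Toy pre-activations; requires_grad lets us inspect the gradient
x = torch.tensor([-3.0, -1.0, 0.0, 2.0], requires_grad=True)
y = torch.relu(x)
y.sum().backward()

print(y)       # tensor([0., 0., 0., 2.]) -- negatives are truncated to zero
print(x.grad)  # tensor([0., 0., 0., 1.]) -- zero gradient where the unit is "dead"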