Here’s the table of contents:

  1. Introducing PyTorch tensor operations
  2. Importing PyTorch and the necessary modules
  3. Function 1 - torch.min() and torch.max()
    1. torch.min(input) → Tensor
    2. torch.max(input) → Tensor
    3. The following example will give an error
  4. Function 2 - torch.tanh(input, out=None) → Tensor
  5. Function 3 - torch.item() → number and torch.tolist()
    1. When to use
  6. Function 4 - torch.isnan()
    1. Example 3 - breaking
    2. When to use
  7. Function 5 - torch.view(*shape) → Tensor
    1. When to use
  8. Conclusion
  9. Reference Links

Introducing PyTorch tensor operations

PyTorch is a widely used, open-source deep learning platform developed by the Facebook AI Research (FAIR) team and released back in early 2017. PyTorch is a library for Python, not a framework :)

Tensor operations are at the core of everything we do in Deep Learning and PyTorch is one of the main Python libraries to facilitate tensor operations.

This library allows us to apply the usual arithmetic operations we use for numbers to tensors. PyTorch also lets us automatically compute the derivatives of tensor operations, which is very useful for Machine Learning and Deep Learning.

In this notebook I will introduce you to 5 useful functions to deal with tensors:

  • function 1: torch.min() and torch.max()
  • function 2: torch.tanh(input, out=None) → Tensor
  • function 3: torch.item() → number and torch.tolist()
  • function 4: torch.isnan() → Tensor
  • function 5: torch.view(*shape) → Tensor

Importing PyTorch and the necessary modules

Below I will describe in detail how to use each function.
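
The original import cell is not shown here, so this is a minimal sketch of what it likely contains (numpy and matplotlib are assumptions on my part, only needed for the tanh plot further down):

    import torch                      # the PyTorch library itself
    import numpy as np                # numerical helpers
    import matplotlib.pyplot as plt   # plotting, used for the tanh graph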

Function 1 - torch.min() and torch.max()

The min and max functions work in a similar way:

torch.min(input) → Tensor

Returns the minimum value of all elements in the input tensor.

torch.min(input, dim, keepdim=False, out=None) → (Tensor, LongTensor)

Returns a namedtuple (values, indices) where values is the minimum value of each row of the input tensor in the given dimension dim, and indices is the index location of each minimum value found (argmin). If keepdim is True, the output tensors are of the same size as input, except in the dimension dim where they are of size 1.
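
The original code cell is not reproduced here; a minimal sketch of this kind of call, with made-up values, could look like this:

    # A 2 x 4 tensor with made-up values
    a = torch.tensor([[4., 2., 9., 1.],
                      [3., 7., 0., 5.]])

    print(torch.min(a))   # tensor(0.) - the minimum over all elements

    # Reduce along dim=0 (the rows): one minimum per column,
    # plus the row index where each minimum was found
    values, indices = torch.min(a, 0)
    print(values)    # tensor([3., 2., 0., 1.])
    print(indices)   # tensor([1, 0, 1, 0])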

In the example above, a tensor of shape (2, 4) (2 rows x 4 columns) has been reduced to a single row with the command torch.min(a, 0). Passing 0 means the reduction runs along dimension 0 (the rows), so it returns one minimum per column, and the indices tell me which row held each minimum, the first or the second in this case.

torch.max(input) → Tensor

The max function is similar to min, but returns the maximum value instead:
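
Again only as a sketch, with the same made-up tensor as before:

    a = torch.tensor([[4., 2., 9., 1.],
                      [3., 7., 0., 5.]])

    print(torch.max(a))   # tensor(9.) - the maximum over all elements

    values, indices = torch.max(a, 1)
    print(values)    # tensor([9., 7.]) - the maximum of each row
    print(indices)   # tensor([2, 1])   - the column index of each maximum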

The following example will give an error
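
The failing cell is not reproduced here; a minimal sketch of this kind of mistake (passing a non-boolean as the third argument) could look like this:

    a = torch.tensor([[4., 2., 9., 1.],
                      [3., 7., 0., 5.]])

    # The third positional argument is keepdim and must be a bool;
    # passing something else (here a string, just as an illustration) raises a TypeError
    try:
        torch.max(a, 1, "yes")
    except TypeError as e:
        print("TypeError:", e)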

This is because both min() and max() take a certain number of parameters, at least one of which is the input tensor. The format is max(Tensor input, int dim, bool keepdim), so the third parameter, if present, needs to be a boolean.

Function 2 - torch.tanh(input, out=None) → Tensor

The tanh function is often used in Deep Learning. It stands for hyperbolic tangent and returns a non-linear output between -1 and 1. It is also used as an activation function.

I will first show the shape of the tanh function using matplotlib, a library for plotting graphs.
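
The plotting cell is not included here; a minimal sketch of how such a plot could be produced (the exact range and styling are assumptions):

    import matplotlib.pyplot as plt

    x = torch.linspace(-5, 5, steps=200)   # evenly spaced inputs
    y = torch.tanh(x)                      # tanh of each input

    plt.plot(x.numpy(), y.numpy())
    plt.title("tanh(x)")
    plt.xlabel("x")
    plt.ylabel("tanh(x)")
    plt.grid(True)
    plt.show()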

We can see that for inputs very close to zero the output is almost the same as the input; it then changes rapidly, but never gets bigger than 1 or smaller than -1.

For very large positive or negative numbers the output approaches 1 or -1 asymptotically. Sometimes we need to map output values into the range -1 to 1, like a yes or no, and this is why we use activation functions like tanh() with Neural Networks.

The tanh() function is often used as an activation function in Deep Learning, together with the sigmoid function and the ReLU. It is considered better than the sigmoid function because its curve is steeper for small values close to zero, and it is also sigmoidal (s-shaped). It is a very robust function and cannot be broken easily: giving it a non-numeric input is the only way I found to get an error message.
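
As a small sketch of normal usage and of the non-numeric case (the values are made up):

    t = torch.tensor([-3., -1., 0., 1., 3.])
    out = torch.tanh(t)
    print(out)         # all values squashed into (-1, 1)
    print(out.dtype)   # torch.float32

    # Passing a non-numeric input is the only way I found to break it
    try:
        torch.tanh("not a tensor")
    except TypeError as e:
        print("TypeError:", e)
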
The output type is torch.float32

Function 3 - torch.item() → number and torch.tolist()

item() returns the value of a single-element tensor as a standard Python number, while tolist() returns the tensor as a (nested) Python list. For scalars, tolist() returns a standard Python number, just like item().
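
The original cell is not shown; a minimal sketch with a made-up single-element tensor:

    scalar = torch.tensor(3.5)
    print(scalar.item())     # 3.5 - a plain Python float
    print(scalar.tolist())   # 3.5 - also a plain Python float for scalars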

As in the example above, we see that both methods can be applied to a tensor containing a single element.
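
And a sketch of the multi-element case (again with made-up values):

    m = torch.tensor([[1., 2.],
                      [3., 4.]])
    print(m.tolist())   # [[1.0, 2.0], [3.0, 4.0]] - a nested Python list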

In this case our input tensor has more than one item, so we get a Python (nested) list as output.

I could not find a way to break tolist() as long as the input tensor holds valid values; item(), however, will raise an error if the tensor contains more than one element.

When to use

It is useful when I have output values in tensor form and need to translate them into a pure Python environment.

Function 4 - torch.isnan()

Returns a new tensor with boolean elements representing if each element is NaN or not.
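
The original cell is not shown; a minimal sketch with made-up values, including an inf to show that it is not flagged:

    x = torch.tensor([1.0, float('nan'), float('inf')])
    print(torch.isnan(x))   # tensor([False,  True, False])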

My output is a new tensor with the same dimensions as the input, containing only boolean values. Using isnan() I can only check for the NaN case: an inf input will not be reported as True.

For infinity values I need another function, torch.isinf().
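
For example (a sketch with the same made-up values):

    x = torch.tensor([1.0, float('nan'), float('inf')])
    print(torch.isinf(x))   # tensor([False, False,  True])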

Example 3 - breaking

I could not find an example that breaks this method.

When to use

It is useful when I want to make sure that my input data doesn't contain values that could cause errors, and to detect NaN or inf values in my dataset.

Function 5 - torch.view(*shape) → Tensor

This function returns a new tensor with the same data as the input tensor but with a different shape.

The returned tensor shares the underlying data with the input and must have the same number of elements, but it may have a different size.
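
The original cell is not shown; a minimal sketch of reshaping a 4x4 tensor (the values are just torch.arange):

    t = torch.arange(16.).view(4, 4)   # a 4 x 4 tensor, 16 elements
    print(t.shape)                     # torch.Size([4, 4])
    print(t.view(2, 8).shape)          # torch.Size([2, 8]) - same data, new shape
    print(t.view(-1).shape)            # torch.Size([16])   - -1 lets PyTorch infer the size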

This example will not work!
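
The failing cell is not reproduced here; a sketch of this kind of shape mismatch could look like this:

    t = torch.arange(16.).view(4, 4)   # 16 elements

    # 3 x 8 would need 24 elements, so this cannot work
    try:
        t.view(3, 8)
    except RuntimeError as e:
        print("RuntimeError:", e)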

If my new size doesn't match the number of elements I will get an error: a shape of 3 by 8 has 24 elements, so it cannot match the 4x4 (16 elements) shape of my original tensor.

When to use

A common issue when designing Neural Networks is that the output tensor of one layer has the wrong shape to act as the input tensor to the next layer.

Sometimes we need to explicitly reshape tensors, and we can use the view function to achieve this.
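
As a sketch of this pattern (the layer sizes and names below are hypothetical, not taken from the original notebook):

    import torch.nn as nn

    conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3)
    fc = nn.Linear(8 * 26 * 26, 10)

    images = torch.randn(32, 1, 28, 28)          # a batch of 32 fake 28x28 images
    features = conv(images)                      # shape: (32, 8, 26, 26)
    flat = features.view(features.size(0), -1)   # flatten to (32, 8*26*26) for the linear layer
    logits = fc(flat)
    print(logits.shape)                          # torch.Size([32, 10])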

Conclusion

These are just 5 functions selected from the many available in the PyTorch documentation showcasing the versatility of this library. There is still much more to discover.

Reference Links

  • Official documentation for torch.Tensor: https://pytorch.org/docs/stable/tensors.html
  • Plotting the tanh function: https://www.geeksforgeeks.org/numpy-tanh-python/