This week I have been working on extracting images from a 3D matrix for display, and I discovered there are several ways to achieve that. Allow me to present some options. I know there are others, and maybe better ones, but these work well for my purposes 🤓

The problem can be summarized as: given a 3D matrix of real values, extract grayscale images that represent sections of this matrix. In my context, the 3D matrix is a volume of dimensions `w * h * d`, and each image has dimensions `w * h`.

In particular, the volume represents a CT scan composed of values in Hounsfield Units. The idea is then to transform the values of the matrix into the range `0-255`. Well, let's start 🤖

### Show me the codeeee!!

To begin with, we need to include a couple of basic libraries:

```python
import numpy as np
import time
```

I divided the task into three methods with different characteristics. Each one takes the same input:

- The volume as a NumPy array. For this article, the volume has size `512 * 512 * 771`
- An index, as an integer, selecting the image to extract

Each function returns the transformed volume plus an image to be displayed. The image is only there to verify everything is OK, so it could be omitted at this step.
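As a quick sketch of what "extracting an image" means here, slicing a volume along its last axis yields a 2D array of shape `w * h` (toy-sized random data stands in for the real CT volume):

```python
import numpy as np

# Toy volume standing in for the real 512 * 512 * 771 CT data
volume = np.random.uniform(-1000, 3000, size=(8, 8, 5))

index = 2
image = volume[:, :, index]  # one w * h slice along the depth axis
print(image.shape)  # (8, 8)
```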

### Method 1

This method converts the array into a flat Python list in order to use the `minmax_scale` function from scikit-learn. After that, the list is converted back to a NumPy array and rescaled to the range `0-255`.

```python
from sklearn.preprocessing import minmax_scale

def method_one(volume: np.ndarray, index: int):
    shape = volume.shape
    # Flatten the volume into a Python list
    lvolume = list(volume.ravel())
    # Scale every value to the range 0-1
    norm_list = minmax_scale(lvolume)
    # Back to a NumPy array, rescaled to 0-255
    rvolume = np.array(norm_list).reshape(shape) * 255
    return rvolume, rvolume[:, :, index]
```

This method can be useful when working with Python lists is necessary anyway.

### Method 2

Again using the scikit-learn package, this method uses `MinMaxScaler`, which transforms features by scaling each feature to a given range. The transformation is carried out as:

```python
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min
```
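This formula can be checked with a tiny pure-NumPy sketch (toy data, not the CT volume). Note that the min and max are taken per column (`axis=0`), so each feature is scaled independently:

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 40.0]])

lo, hi = 0.0, 255.0
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (hi - lo) + lo
# column 1 -> 0, 127.5, 255; column 2 -> 0, 85, 255
```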

The idea is then to use `MinMaxScaler` with the range `0-255`. However, this function requires a 2D array, so the volume is reshaped to size `(w*h, d)`.

```python
from sklearn.preprocessing import MinMaxScaler
from PIL import Image

def method_two(volume: np.ndarray, index: int):
    scaler = MinMaxScaler(feature_range=(0, 255))
    # Reshape to 2D (w*h, d), scale, then restore the original shape
    rvolume = scaler.fit_transform(
        volume.reshape(-1, volume.shape[-1])
    ).reshape(volume.shape)
    img_arr = Image.fromarray(rvolume[:, :, index])
    if img_arr.mode == "F" or img_arr.mode == "I":
        # Convert float (F) or 32-bit int (I) images to 8-bit grayscale
        img_arr = img_arr.convert('L')
    return rvolume, img_arr
```

Also, the `Image` class from the Pillow package is used to convert a 2D NumPy array into a grayscale image. Since the values of the volume are real values, `img_arr` will be in mode `F` (32-bit floating point), so it is necessary to convert it to grayscale (mode `L`). Obviously, this step depends on your goals.

### Method 3

This last method uses NumPy only. The volume is reshaped into an array of size `(w * h * d, 1)`. Plus, this method is uglier than the others 😆.

```python
def method_three(volume: np.ndarray, index: int):
    dim = volume.shape
    # Reshape to a single column so min/max are taken over the whole volume
    volume = volume.reshape(-1, 1)
    rvolume = np.divide(volume - volume.min(axis=0),
                        volume.max(axis=0) - volume.min(axis=0)) * 255
    rvolume = rvolume.reshape(dim)
    img_arr = Image.fromarray(rvolume[:, :, index])
    if img_arr.mode == "F" or img_arr.mode == "I":
        img_arr = img_arr.convert('L')
    return rvolume, img_arr
```

The normalization itself requires nothing other than NumPy (Pillow is only used to build the verification image).

### Comparison

To compare which one is better, first we need to verify that the output is the same for all methods:

The image corresponds to a slice from a thoracic CT (i.e., it shows the two lungs). There are minor differences in the values obtained with method 2, difficult to perceive visually, since `MinMaxScaler` scales each column of the reshaped `(w*h, d)` array independently. Methods 1 and 3 obtain exactly the same real values.
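On a small random volume, the equivalence of methods 1 and 3 can be checked directly. This is a sketch that substitutes the equivalent pure-NumPy formula for scikit-learn's `minmax_scale`:

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.uniform(-1000.0, 3000.0, size=(4, 4, 6))

# Method 1 style: flatten, scale to 0-1 globally, then multiply by 255
flat = volume.ravel()
v1 = ((flat - flat.min()) / (flat.max() - flat.min())).reshape(volume.shape) * 255

# Method 3 style: single-column reshape, scale straight to 0-255
col = volume.reshape(-1, 1)
v3 = ((col - col.min(axis=0)) /
      (col.max(axis=0) - col.min(axis=0)) * 255).reshape(volume.shape)

print(np.allclose(v1, v3))  # True
```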

To measure the execution time of each method, the following code was used:

```python
start_time = time.time()
v1, array = method_one(volume, index)  # change this for other methods
total = time.time() - start_time
print(f"method one --- {total:.4f} seconds ---")
```

Running this 10 times on my machine for each of the 3 methods, I collected some statistics to get a better picture of the performance. The table shows those values, measured in seconds:

| technique | min | max | std | mean |
| --- | --- | --- | --- | --- |
| method_one | 38.0779 | 47.4995 | 2.5415 | 44.6038 |
| method_two | 5.8577 | 9.3793 | 1.2485 | 7.2070 |
| method_three | 3.2709 | 5.3550 | 0.6943 | 3.9027 |
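Assuming the measured times are collected in a list, the table's statistics can be computed with NumPy like this (a sketch; the `timings` values here are made-up examples, not the real measurements):

```python
import numpy as np

timings = [3.27, 3.51, 3.90, 4.12, 5.35]  # hypothetical sample of measured seconds
stats = {
    "min": np.min(timings),
    "max": np.max(timings),
    "std": np.std(timings),
    "mean": np.mean(timings),
}
print(stats)
```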

It is clear that method 3 is faster than the other two. Notice that the image returned by methods 2 and 3 is of type `PIL.Image`, while method 1 returns it as a `numpy.ndarray`.

### Thoughts

Employ whichever method you want! Plus, there are other ways to do the same thing. As I need fast execution times, I chose method 3, but this always depends on your context 🤓 This process of transforming data from one range to another is called normalization; in this case, the output range is grayscale.
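As a generic sketch of that idea, a hypothetical helper that maps any array linearly to an arbitrary range could look like this:

```python
import numpy as np

def normalize(arr: np.ndarray, new_min: float = 0.0,
              new_max: float = 255.0) -> np.ndarray:
    """Linearly map the values of arr into [new_min, new_max]."""
    old_min, old_max = arr.min(), arr.max()
    return (arr - old_min) / (old_max - old_min) * (new_max - new_min) + new_min

out = normalize(np.array([-1000.0, 0.0, 3000.0]))  # -> 0, 63.75, 255
```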

I hope this would be valuable to other developers 🤖

From a geek to geeks.
