mridc.collections.common.parts package

Submodules

mridc.collections.common.parts.fft module

mridc.collections.common.parts.fft.fft2c(data: torch.Tensor, fft_type: str = 'orthogonal', fft_normalization: str = 'ortho', fft_dim: Union[int, None, List[int]] = None) torch.Tensor[source]

Apply centered 2 dimensional Fast Fourier Transform.

Parameters
  • data (Complex valued input data containing at least 3 dimensions: dimensions -2 & -1 are spatial dimensions. All other dimensions are assumed to be batch dimensions.) –

  • fft_type (Specify the FFT type. This determines whether an orthogonal transformation is applied.) –

  • fft_normalization ("ortho" is the default normalization used by PyTorch. It can also be set to None for no normalization.) –

  • fft_dim (The dimensions along which to apply the FFT.) –

Return type

The FFT of the input.
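
For reference, a minimal sketch of what a centered, orthonormal 2D FFT computes, assuming the two spatial dimensions are the last two axes; this is an illustration, not the library implementation:

    import torch

    def centered_fft2_sketch(x: torch.Tensor) -> torch.Tensor:
        # Move the image centre to the corner, apply an orthonormal 2D FFT over the
        # last two (spatial) axes, then shift the zero frequency back to the centre.
        x = torch.fft.ifftshift(x, dim=(-2, -1))
        x = torch.fft.fft2(x, dim=(-2, -1), norm="ortho")
        return torch.fft.fftshift(x, dim=(-2, -1))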

mridc.collections.common.parts.fft.ifft2c(data: torch.Tensor, fft_type: str = 'orthogonal', fft_normalization: str = 'ortho', fft_dim: Union[int, None, List[int]] = None) torch.Tensor[source]

Apply centered 2 dimensional Inverse Fast Fourier Transform.

Parameters
  • data (Complex valued input data containing at least 3 dimensions: dimensions -2 & -1 are spatial dimensions. All other dimensions are assumed to be batch dimensions.) –

  • fft_type (Specify the FFT type. This determines whether an orthogonal transformation is applied.) –

  • fft_normalization ("ortho" is the default normalization used by PyTorch. It can also be set to None for no normalization.) –

  • fft_dim (The dimensions along which to apply the IFFT.) –

Return type

The IFFT of the input.
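
The inverse transform follows the same pattern; assuming both directions use norm="ortho" over the last two axes, a forward/inverse round trip recovers the input up to floating-point error (illustrative sketch, not the library code):

    import torch

    def centered_ifft2_sketch(k: torch.Tensor) -> torch.Tensor:
        # Inverse of the centered 2D FFT: undo the shifts around an orthonormal ifft2.
        k = torch.fft.ifftshift(k, dim=(-2, -1))
        k = torch.fft.ifft2(k, dim=(-2, -1), norm="ortho")
        return torch.fft.fftshift(k, dim=(-2, -1))

    # Hypothetical (batch, coil, height, width) complex image.
    x = torch.randn(1, 8, 64, 64, dtype=torch.complex64)
    k = torch.fft.fftshift(
        torch.fft.fft2(torch.fft.ifftshift(x, dim=(-2, -1)), dim=(-2, -1), norm="ortho"),
        dim=(-2, -1),
    )
    assert torch.allclose(centered_ifft2_sketch(k), x, atol=1e-5)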

mridc.collections.common.parts.patch_utils module

mridc.collections.common.parts.ptl_overrides module

class mridc.collections.common.parts.ptl_overrides.MRIDCNativeMixedPrecisionPlugin(init_scale: float = 4294967296, growth_interval: int = 1000)[source]

Bases: pytorch_lightning.plugins.precision.native_amp.NativeMixedPrecisionPlugin

Native Mixed Precision Plugin for MRIDC.
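
A hedged usage sketch: the constructor arguments mirror the signature above, while the Trainer wiring (passing the plugin via plugins) is an assumption that depends on the installed PyTorch Lightning version:

    from pytorch_lightning import Trainer
    from mridc.collections.common.parts.ptl_overrides import MRIDCNativeMixedPrecisionPlugin

    # Defaults taken from the signature above (init_scale = 2**32).
    plugin = MRIDCNativeMixedPrecisionPlugin(init_scale=4294967296, growth_interval=1000)
    # Assumption: this PyTorch Lightning version accepts precision plugins via `plugins`.
    trainer = Trainer(plugins=[plugin])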

mridc.collections.common.parts.rnn_utils module

mridc.collections.common.parts.rnn_utils.rnn_weights_init(module, std_init_range=0.02, xavier=True)[source]

Initialize different weights in a Transformer-style model. Note: the source carries a TODO to check whether this is the correct way to initialize RNN weights.

Parameters
  • module (torch.nn.Module to be initialized) –

  • std_init_range (standard deviation of normal initializer) –

  • xavier (if True, the Xavier initializer will be used in Linear layers, as proposed in the AIAYN paper; otherwise a normal initializer will be used, as in BERT.) –
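
A usage sketch, assuming the initializer is intended to be applied recursively with torch.nn.Module.apply, which calls the given function on every submodule:

    import torch.nn as nn
    from mridc.collections.common.parts.rnn_utils import rnn_weights_init

    model = nn.GRU(input_size=64, hidden_size=64, num_layers=2)
    # Module.apply passes each submodule in turn; extra arguments are bound with a lambda.
    model.apply(lambda m: rnn_weights_init(m, std_init_range=0.02, xavier=True))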

mridc.collections.common.parts.utils module

mridc.collections.common.parts.utils.check_stacked_complex(data: torch.Tensor) torch.Tensor[source]

Check whether a tensor is stacked complex (real & imaginary parts stacked along the last dimension) and, if so, convert it to a combined complex tensor.

Parameters

data (A complex valued tensor, where the size of the final dimension might be 2.) –

Return type

A complex valued tensor.
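
An illustrative sketch of the conversion, assuming a trailing dimension of size 2 holds the real and imaginary parts (the library implementation may differ):

    import torch

    def stacked_to_complex_sketch(data: torch.Tensor) -> torch.Tensor:
        # Combine a (..., 2) real/imaginary stack into a native complex tensor;
        # tensors without a trailing size-2 dimension are returned unchanged.
        if data.shape[-1] == 2:
            return torch.view_as_complex(data.contiguous())
        return data

    x = torch.randn(4, 320, 320, 2)  # hypothetical stacked-complex image
    assert stacked_to_complex_sketch(x).dtype == torch.complex64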

mridc.collections.common.parts.utils.coil_combination(data: torch.Tensor, sensitivity_maps: torch.Tensor, method: str = 'SENSE', dim: int = 0) torch.Tensor[source]

Coil combination.

Parameters
  • data (The input tensor.) –

  • sensitivity_maps (The sensitivity maps.) –

  • method (The coil combination method.) –

  • dim (The dimensions along which to apply the coil combination transform.) –

Return type

Coil combined data.
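
A plausible sketch of the dispatch this helper performs, assuming method selects between the SENSE and RSS combinations documented later in this module; the real function may differ in details:

    from mridc.collections.common.parts.utils import rss, sense

    def coil_combination_sketch(data, sensitivity_maps, method="SENSE", dim=0):
        # Route to one of the two combination strategies exposed by this module.
        if method.upper() == "SENSE":
            return sense(data, sensitivity_maps, dim=dim)
        if method.upper() == "RSS":
            return rss(data, dim=dim)
        raise ValueError(f"Unknown coil combination method: {method}")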

mridc.collections.common.parts.utils.complex_abs(data: torch.Tensor) torch.Tensor[source]

Compute the absolute value of a complex valued input tensor.

Parameters

data (A complex valued tensor, where the size of the final dimension should be 2.) –

Return type

Absolute value of data.

mridc.collections.common.parts.utils.complex_abs_sq(data: torch.Tensor) torch.Tensor[source]

Compute the squared absolute value of a complex tensor.

Parameters

data (A complex valued tensor, where the size of the final dimension should be 2.) –

Return type

Squared absolute value of data.
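
For both magnitude helpers above, the arithmetic reduces to a sum of squares over the trailing real/imaginary dimension; a self-contained sketch assuming the (..., 2) layout:

    import torch

    def complex_abs_sq_sketch(data: torch.Tensor) -> torch.Tensor:
        # |a + ib|^2 = a^2 + b^2, with (real, imag) stacked in the last dimension.
        return (data ** 2).sum(dim=-1)

    def complex_abs_sketch(data: torch.Tensor) -> torch.Tensor:
        # |a + ib| = sqrt(a^2 + b^2)
        return complex_abs_sq_sketch(data).sqrt()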

mridc.collections.common.parts.utils.complex_conj(x: torch.Tensor) torch.Tensor[source]

Complex conjugate.

This applies the complex conjugate assuming that the input array has the last dimension as the complex dimension.

Parameters

x (A PyTorch tensor with the last dimension of size 2.) –

Return type

A PyTorch tensor with the last dimension of size 2.

mridc.collections.common.parts.utils.complex_mul(x: torch.Tensor, y: torch.Tensor) torch.Tensor[source]

Complex multiplication.

This multiplies two complex tensors assuming that they are both stored as real arrays with the last dimension being the complex dimension.

Parameters
  • x (A PyTorch tensor with the last dimension of size 2.) –

  • y (A PyTorch tensor with the last dimension of size 2.) –

Return type

A PyTorch tensor with the last dimension of size 2.
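
Both operations above act elementwise on the (..., 2) real/imaginary layout; a sketch of the underlying arithmetic (assumed layout, not the library source):

    import torch

    def complex_conj_sketch(x: torch.Tensor) -> torch.Tensor:
        # conj(a + ib) = a - ib
        return torch.stack((x[..., 0], -x[..., 1]), dim=-1)

    def complex_mul_sketch(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # (a + ib)(c + id) = (ac - bd) + i(ad + bc)
        real = x[..., 0] * y[..., 0] - x[..., 1] * y[..., 1]
        imag = x[..., 0] * y[..., 1] + x[..., 1] * y[..., 0]
        return torch.stack((real, imag), dim=-1)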

mridc.collections.common.parts.utils.rss(data: torch.Tensor, dim: int = 0) torch.Tensor[source]

Compute the Root Sum of Squares (RSS).

RSS is computed assuming that dim is the coil dimension.

Parameters
  • data (The input tensor) –

  • dim (The dimensions along which to apply the RSS transform) –

Return type

The RSS value.

mridc.collections.common.parts.utils.rss_complex(data: torch.Tensor, dim: int = 0) torch.Tensor[source]

Compute the Root Sum of Squares (RSS) for complex inputs.

RSS is computed assuming that dim is the coil dimension.

Parameters
  • data (The input tensor) –

  • dim (The dimensions along which to apply the RSS transform) –

Return type

The RSS value.
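
A sketch of both RSS variants, assuming dim indexes the coil axis and, for the complex variant, that the last dimension stacks real and imaginary parts:

    import torch

    def rss_sketch(data: torch.Tensor, dim: int = 0) -> torch.Tensor:
        # Square root of the sum of squares across the coil dimension.
        return torch.sqrt((data ** 2).sum(dim=dim))

    def rss_complex_sketch(data: torch.Tensor, dim: int = 0) -> torch.Tensor:
        # Same reduction, but each element's squared magnitude comes from its (real, imag) pair.
        return torch.sqrt((data ** 2).sum(dim=-1).sum(dim=dim))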

mridc.collections.common.parts.utils.save_reconstructions(reconstructions: Dict[str, numpy.ndarray], out_dir: pathlib.Path)[source]

Save reconstruction images.

This function writes to h5 files that are appropriate for submission to the leaderboard.

Parameters
  • reconstructions (A dictionary mapping input filenames to corresponding reconstructions.) –

  • out_dir (Path to the output directory where the reconstructions should be saved.) –
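
A usage sketch with hypothetical file names and shapes; the dictionary keys are taken as output file names and the values as the reconstructed volumes:

    from pathlib import Path
    import numpy as np
    from mridc.collections.common.parts.utils import save_reconstructions

    # Hypothetical reconstruction: one volume of 10 slices of 320 x 320 pixels.
    reconstructions = {"file_brain_001.h5": np.zeros((10, 320, 320), dtype=np.float32)}
    save_reconstructions(reconstructions, Path("reconstructions"))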

mridc.collections.common.parts.utils.sense(data: torch.Tensor, sensitivity_maps: torch.Tensor, dim: int = 0) torch.Tensor[source]

The SENSitivity Encoding (SENSE) transform [1].

References

[1] Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: Sensitivity encoding for fast MRI. Magn Reson Med 1999; 42:952-962.

Parameters
  • data (The input tensor) –

  • sensitivity_maps (The sensitivity maps) –

  • dim (The coil dimension) –

Return type

A coil-combined image.
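
A sketch of the SENSE combination in the stacked-complex convention used elsewhere in this module: multiply the coil images by the conjugate sensitivity maps and sum over the coil dimension. This is an assumed formulation; see [1] for the full method:

    import torch
    from mridc.collections.common.parts.utils import complex_conj, complex_mul

    def sense_sketch(data: torch.Tensor, sensitivity_maps: torch.Tensor, dim: int = 0) -> torch.Tensor:
        # Coil-combine by projecting onto the sensitivity profiles: sum_c conj(S_c) * x_c.
        return complex_mul(data, complex_conj(sensitivity_maps)).sum(dim=dim)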

mridc.collections.common.parts.utils.tensor_to_complex_np(data: torch.Tensor) numpy.ndarray[source]

Converts a torch tensor to a complex numpy array.

Parameters

data (Input torch tensor to be converted to numpy.) –

Return type

Complex Numpy array version of data.

mridc.collections.common.parts.utils.to_tensor(data: numpy.ndarray) torch.Tensor[source]

Converts a numpy array to a torch tensor.

For complex arrays, the real and imaginary parts are stacked along the last dimension.

Parameters

data (Input numpy array to be converted to torch.) –

Return type

Torch tensor version of data.
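
The two conversion helpers above are intended as inverses in the stacked-complex convention; a round-trip sketch with a hypothetical complex numpy array, assuming the stacking appends a trailing dimension of size 2:

    import numpy as np
    from mridc.collections.common.parts.utils import tensor_to_complex_np, to_tensor

    data = (np.random.randn(320, 320) + 1j * np.random.randn(320, 320)).astype(np.complex64)
    tensor = to_tensor(data)                  # real/imag stacked along the last dimension
    assert tensor.shape == (320, 320, 2)
    recovered = tensor_to_complex_np(tensor)  # back to a complex numpy array
    assert np.allclose(recovered, data)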

Module contents