mridc.collections.reconstruction.parts package

Submodules

mridc.collections.reconstruction.parts.transforms module

class mridc.collections.reconstruction.parts.transforms.MRIDataTransforms(mask_func: Optional[List[mridc.collections.reconstruction.data.subsample.MaskFunc]] = None, shift_mask: bool = False, mask_center_scale: Optional[float] = 0.02, half_scan_percentage: float = 0.0, crop_size: Optional[Tuple[int, int]] = None, kspace_crop: bool = False, crop_before_masking: bool = True, kspace_zero_filling_size: Optional[Tuple] = None, normalize_inputs: bool = False, fft_type: str = 'orthogonal', use_seed: bool = True)[source]

Bases: object

MRI preprocessing data transforms.

__call__(kspace: numpy.ndarray, sensitivity_map: numpy.ndarray, mask: numpy.ndarray, eta: numpy.ndarray, target: numpy.ndarray, attrs: Dict, fname: str, slice_idx: int) Tuple[Union[List[Union[torch.Tensor, Any]], torch.Tensor, Any], Union[torch.Tensor, None, Any], Union[List, Any], Union[torch.Tensor, None, Any], Union[torch.Tensor, Any], str, int, Union[List, torch.Tensor, Any]][source]

Apply the data transform.

Parameters
  • kspace – The k-space data.

  • sensitivity_map – The coil sensitivity map.

  • mask – The subsampling mask.

  • eta – The initial estimation.

  • target – The target image.

  • attrs – The sample attributes.

  • fname – The file name.

  • slice_idx – The slice number.

Return type

The transformed data.
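A minimal usage sketch follows; the array shapes, the attrs key, and the file name are illustrative assumptions, not values mandated by the API.

    import numpy as np

    from mridc.collections.reconstruction.parts.transforms import MRIDataTransforms

    transform = MRIDataTransforms(fft_type="orthogonal", normalize_inputs=True)

    # Hypothetical multi-coil sample: 8 coils, 320 x 320 slices.
    coils, height, width = 8, 320, 320
    kspace = np.random.randn(coils, height, width) + 1j * np.random.randn(coils, height, width)
    sensitivity_map = np.random.randn(coils, height, width) + 1j * np.random.randn(coils, height, width)
    mask = np.ones((height, width))                      # fully sampled, for illustration
    eta = np.zeros((height, width), dtype=np.complex64)  # initial estimation
    target = np.abs(kspace.sum(axis=0))                  # placeholder target image
    attrs = {"max": float(target.max())}                 # assumed attribute key

    outputs = transform(
        kspace, sensitivity_map, mask, eta, target, attrs, "sample.h5", 0
    )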

mridc.collections.reconstruction.parts.utils module

mridc.collections.reconstruction.parts.utils.apply_mask(data: torch.Tensor, mask_func: mridc.collections.reconstruction.data.subsample.MaskFunc, seed: Optional[Union[int, Tuple[int, ...]]] = None, padding: Optional[Sequence[int]] = None, shift: bool = False, half_scan_percentage: Optional[float] = 0.0, center_scale: Optional[float] = 0.02) Tuple[Any, Any, Any][source]

Subsample the given k-space by multiplying it with a mask.

Parameters
  • data – The input k-space data. It should have at least 3 dimensions, where dimensions -3 and -2 are the spatial dimensions, and the final dimension has size 2 (for complex values).

  • mask_func – A function that takes a shape (tuple of ints) and a random number seed and returns a mask.

  • seed – Seed for the random number generator.

  • padding – Padding value to apply for the mask.

  • shift – Toggle to shift the mask when subsampling. Applicable to 2D data.

  • half_scan_percentage – Percentage of k-space to be dropped.

  • center_scale – Scale of the center of the mask. Applicable to Gaussian masks.

Return type

Tuple of subsampled k-space, mask, and mask indices.
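A minimal sketch of subsampling with apply_mask; it assumes a RandomMaskFunc-style factory is available in the subsample module, and its center_fractions/accelerations arguments are assumptions carried over from similar reconstruction libraries.

    import torch

    from mridc.collections.reconstruction.data.subsample import RandomMaskFunc  # assumed factory
    from mridc.collections.reconstruction.parts.utils import apply_mask

    # Assumed mask factory: keep 8% of the center, accelerate by 4x.
    mask_func = RandomMaskFunc(center_fractions=[0.08], accelerations=[4])

    # Complex k-space stored with the real/imaginary parts in the last dimension.
    kspace = torch.randn(8, 320, 320, 2)

    masked_kspace, mask, mask_indices = apply_mask(kspace, mask_func, seed=42)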

mridc.collections.reconstruction.parts.utils.batched_mask_center(x: torch.Tensor, mask_from: torch.Tensor, mask_to: torch.Tensor, mask_type: str = '2D') torch.Tensor[source]

Initializes a mask with the center filled in. Can operate with different masks for each batch element.

Parameters
  • x – The input real image or batch of real images.

  • mask_from – Part of the center to start filling.

  • mask_to – Part of the center to end filling.

  • mask_type – Type of mask to apply. Can be either "1D" or "2D".

Return type

A mask with the center filled.
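A hypothetical sketch; the tensor layout and the exact semantics of the fill indices are assumptions based on the parameter descriptions above.

    import torch

    from mridc.collections.reconstruction.parts.utils import batched_mask_center

    # One (from, to) pair per batch element.
    x = torch.randn(4, 8, 320, 320)
    mask_from = torch.tensor([144, 144, 150, 150])
    mask_to = torch.tensor([176, 176, 170, 170])

    # Values inside each element's center band are kept; everything else is zeroed.
    mask = batched_mask_center(x, mask_from, mask_to, mask_type="2D")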

mridc.collections.reconstruction.parts.utils.center_crop(data: torch.Tensor, shape: Tuple[int, int]) torch.Tensor[source]

Apply a center crop to the input real image or batch of real images.

Parameters
  • data – The input tensor to be center cropped. It should have at least 2 dimensions, and the cropping is applied along the last two dimensions.

  • shape – The output shape. The shape should be smaller than the corresponding dimensions of data.

Return type

The center cropped image.
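A short example; the input shape is an illustrative assumption.

    import torch

    from mridc.collections.reconstruction.parts.utils import center_crop

    image = torch.randn(8, 320, 320)          # cropping acts on the last two dims
    cropped = center_crop(image, (160, 160))
    assert cropped.shape == (8, 160, 160)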

mridc.collections.reconstruction.parts.utils.center_crop_to_smallest(x: Union[torch.Tensor, numpy.ndarray], y: Union[torch.Tensor, numpy.ndarray]) Tuple[Union[torch.Tensor, numpy.ndarray], Union[torch.Tensor, numpy.ndarray]][source]

Apply a center crop on the larger image to the size of the smaller.

The minimum is taken over dim=-1 and dim=-2. If x is smaller than y at dim=-1 and y is smaller than x at dim=-2, then the returned shape will be a mixture of the two.

Parameters
  • x – The first image.

  • y – The second image.

Return type

Tuple of tensors x and y, each cropped to the minimum size.
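The "mixture" behavior can be seen with deliberately mismatched shapes (illustrative values):

    import torch

    from mridc.collections.reconstruction.parts.utils import center_crop_to_smallest

    x = torch.randn(1, 320, 240)   # larger at dim -2, smaller at dim -1
    y = torch.randn(1, 256, 256)   # smaller at dim -2, larger at dim -1
    x_c, y_c = center_crop_to_smallest(x, y)

    # Per-dimension minimum: both outputs come out as (1, 256, 240).
    assert x_c.shape == y_c.shape == (1, 256, 240)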

mridc.collections.reconstruction.parts.utils.complex_center_crop(data: torch.Tensor, shape: Tuple[int, int]) torch.Tensor[source]

Apply a center crop to the input complex image or batch of complex images.

Parameters
  • data – The complex input tensor to be center cropped. It should have at least 3 dimensions; the cropping is applied along dimensions -3 and -2, and the last dimension should have a size of 2.

  • shape – The output shape. The shape should be smaller than the corresponding dimensions of data.

Return type

The center cropped image.
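A short example with the complex layout described above (shapes are illustrative):

    import torch

    from mridc.collections.reconstruction.parts.utils import complex_center_crop

    kspace = torch.randn(8, 320, 320, 2)   # last dim holds the real/imaginary parts
    cropped = complex_center_crop(kspace, (160, 160))
    assert cropped.shape == (8, 160, 160, 2)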

mridc.collections.reconstruction.parts.utils.mask_center(x: torch.Tensor, mask_from: Optional[int], mask_to: Optional[int], mask_type: str = '2D') torch.Tensor[source]

Initializes a mask with the center filled in.

Parameters
  • x – The input real image or batch of real images.

  • mask_from – Part of the center to start filling.

  • mask_to – Part of the center to end filling.

  • mask_type – Type of mask to apply. Can be either "1D" or "2D".

Return type

A mask with the center filled.
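A minimal sketch; the input layout and the meaning of the indices are assumptions based on the parameter descriptions.

    import torch

    from mridc.collections.reconstruction.parts.utils import mask_center

    x = torch.randn(1, 8, 320, 320)
    # Keep only the center band between the two indices; zero the rest.
    masked = mask_center(x, mask_from=144, mask_to=176, mask_type="2D")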

Module contents