XCorrCenterFinding

giant.relative_opnav.estimators.cross_correlation:

class giant.relative_opnav.estimators.cross_correlation.XCorrCenterFinding(scene, camera, image_processing, options=None, brdf=None, rays=None, grid_size=1, peak_finder=<function quadric_peak_finder_2d>, min_corr_score=0.3, blur=True, search_region=None, template_overflow_bounds=-1)[source]

This class implements normalized cross correlation center finding for GIANT.

All of the steps required for performing cross correlation are handled by this class, including the rendering of the template, the actual cross correlation, and the identification of the peak of the correlation surface. This is all handled in the estimate() method and is performed for each requested target.

When all of the required data has been successfully loaded into an instance of this class, the estimate() method is used to perform the estimation for the requested image. The results are stored into the observed_bearings attribute for the observed center of figure locations. In addition, the predicted location for the center of figure for each target is stored in the computed_bearings attribute. Finally, the details about the fit are stored as a dictionary in the appropriate element in the details attribute. Specifically, these dictionaries will contain the following keys.

  • 'Correlation Score' – The correlation score at the peak of the correlation surface. This is only available if the fit was successful.

  • 'Correlation Surface' – The raw correlation surface as a 2D array. Each pixel in the correlation surface represents the correlation score when the center of the template is lined up with the corresponding image pixel. This is only available if the fit was successful.

  • 'Correlation Peak Location' – The location of the correlation peak before correcting it to find the location of the target center of figure. This is only available if the fit was successful.

  • 'Target Template Coordinates' – The location of the center of figure of the target in the template. This is only available if the fit was successful.

  • 'Failed' – A message indicating why the fit failed. This is only present if the fit failed (so you can check for failure with something like 'Failed' in cross_correlation.details[target_ind]). The message should be a human readable description of what caused the failure.

  • 'Max Correlation' – The peak value of the correlation surface. This is only available if the fit failed due to too low of a correlation score.
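For example, a minimal sketch of inspecting these details for a single target after estimate() has been called might look like the following (the xcorr instance and the target_ind index are assumed to already exist):

    fit_details = xcorr.details[target_ind]

    if 'Failed' in fit_details:
        # the fit did not succeed; the message explains why
        print('fit failed:', fit_details['Failed'])
        if 'Max Correlation' in fit_details:
            # present when the failure was due to a low correlation score
            print('peak correlation was only', fit_details['Max Correlation'])
    else:
        print('correlation score:', fit_details['Correlation Score'])
        print('peak location:', fit_details['Correlation Peak Location'])
        print('center of figure in template:', fit_details['Target Template Coordinates'])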

Warning

Before calling the estimate() method, be sure that the scene has been updated to correspond to the correct image time. This class does not update the scene automatically, even if the scene attribute is a Scene instance.
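A minimal sketch of the expected calling pattern is shown below. It assumes that iterating over the Camera yields (index, image) pairs for the images to be processed, that the Scene provides an update() method accepting the image, and that estimate() accepts the image; check the corresponding documentation for the exact signatures:

    for ind, image in camera:
        scene.update(image)    # place the targets and light source at the image time
        xcorr.estimate(image)  # cross correlate each target template with the image
        for target_ind, bearing in enumerate(xcorr.observed_bearings):
            print(ind, target_ind, bearing)  # observed center-of-figure pixel locations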

Parameters
  • scene (giant.ray_tracer.scene.Scene) – The scene describing the a priori locations of the targets and the light source.

  • camera (giant.camera.Camera) – The Camera object containing the camera model and the images to be analyzed.

  • image_processing (giant.image_processing.ImageProcessing) – An instance of ImageProcessing. This is used for denoising the image and for generating the correlation surface using the denoise_image() and correlate() methods, respectively.

  • options (Optional[giant.relative_opnav.estimators.cross_correlation.XCorrCenterFindingOptions]) – A dataclass specifying the options to set for this instance. If provided, it takes precedence over all keyword arguments, so it is not recommended to mix the two methods (see the construction sketch after this parameter list).

  • brdf (Optional[giant.ray_tracer.illumination.IlluminationModel]) – The illumination model that transforms the geometric ray tracing results (see ILLUM_DTYPE) into intensity values. Typically this is one of the options from the illumination module.

  • rays (Union[giant.ray_tracer.rays.Rays, None, List[giant.ray_tracer.rays.Rays]]) – The rays to use when rendering the template. If None then the rays required to render the template will be automatically computed. Optionally, a list of Rays objects where each element corresponds to the rays to use for the corresponding template in the Scene.target_objs list. Typically this should be left as None.

  • grid_size (int) – The subsampling to use per pixel when rendering the template. This should be the number of sub-pixels per side of a pixel (that is, if grid_size=3 then subsampling will be done on an equally spaced 3x3 grid -> 9 sub-pixels per pixel). If rays is not None then this is ignored.

  • peak_finder (Callable[[numpy.ndarray, bool], numpy.ndarray]) – The peak finder function to use. This should be a callable that takes in a 2D surface as a numpy array and returns the (x,y) location of the peak of the surface.

  • min_corr_score (float) – The minimum correlation score to accept for something to be considered found in an image. The correlation score is the Pearson Product Moment Coefficient between the image and the template. This should be a number between -1 and 1, and in nearly every case a number between 0 and 1. Setting this to -1 essentially turns the minimum correlation score check off.

  • blur (bool) – A flag to perform a Gaussian blur on the correlation surface before locating the peak to remove high frequency noise.

  • search_region (Optional[int]) – The number of pixels to search around the a priori predicted center for the peak of the correlation surface. If None then searches the entire correlation surface.

  • template_overflow_bounds (int) – The number of pixels to render in the template that overflow outside of the camera field of view. Set to a number less than 0 to accept all overflow pixels in the template. Set to a number greater than or equal to 0 to limit the number of overflow pixels.
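As a rough construction sketch (assuming scene, camera, and image_processing instances already exist, and assuming the XCorrCenterFindingOptions fields mirror the keyword arguments above; check the dataclass definition for the exact field names):

    from giant.relative_opnav.estimators.cross_correlation import (
        XCorrCenterFinding, XCorrCenterFindingOptions)

    # configure through the options dataclass...
    options = XCorrCenterFindingOptions(grid_size=3, min_corr_score=0.5, search_region=50)
    xcorr = XCorrCenterFinding(scene, camera, image_processing, options=options)

    # ...or equivalently through keyword arguments (but do not mix the two approaches)
    xcorr = XCorrCenterFinding(scene, camera, image_processing,
                               grid_size=3, min_corr_score=0.5, search_region=50)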

observable_type: List[giant.relative_opnav.estimators.estimator_interface_abc.RelNavObservablesType] = [<RelNavObservablesType.CENTER_FINDING: 'CENTER-FINDING'>]

This technique generates CENTER-FINDING bearing observables to the center of figure of a target.

generates_templates: bool = True

A flag specifying that this RelNav estimator generates and stores templates in the templates attribute.

template_overflow_bounds

The number of pixels to render in the template that overflow outside of the camera field of view.

Set to a number less than 0 to accept all overflow pixels in the template. Set to a number greater than or equal to 0 to limit the number of overflow pixels.

This setting can be particularly important in cases where the body fills significantly more than the field of view of the camera, because the camera distortion models are usually undefined outside of the field of view. In most typical cases, where the entire body is visible in the field of view, this setting should have little to no impact and can safely be left at -1.
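For instance, a hedged sketch of limiting the overflow for a target that fills well more than the field of view (the value 100 here is purely illustrative):

    # keep the rendered template within 100 pixels beyond the detector edges, where
    # the distortion model is still reasonably well defined
    xcorr.template_overflow_bounds = 100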

property camera: giant.camera.Camera

The camera instance that represents the camera used to take the images we are performing Relative OpNav on.

This is the source of the camera model, and may be used for other information about the camera as well. See the Camera class for more details.

property scene: giant.ray_tracer.scene.Scene

The scene which defines the a priori locations of all targets and light sources with respect to the camera.

You can assume that the scene has been updated for the appropriate image time inside of the class.

Summary of Methods

compute_rays

This method computes the required rays to render a given target based on the location of the target in the image.

estimate

This method identifies the center of each target in the image using cross correlation.

render

This method returns the computed illumination values for the given target and the (sub)pixels that each illumination value corresponds to.

reset

This method resets the observed/computed attributes as well as the details attribute to have None for each target in scene.

target_generator

This method returns a generator which yields target_index, target pairs that are to be processed based on the input include_targets.
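As a rough sketch tying these methods together for a single image (the estimate() signature with an include_targets argument is assumed from the target_generator description above and should be verified against the method documentation):

    scene.update(image)   # ensure the scene matches the image time (not done automatically)
    xcorr.reset()         # clear observed/computed bearings and details from a previous image
    xcorr.estimate(image, include_targets=[True, False])  # only process the first of two targets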