XCorrCenterFinding
giant.relative_opnav.estimators.cross_correlation
This class implements normalized cross correlation center finding for GIANT.

All of the steps required for performing cross correlation are handled by this class, including the rendering of the template, the actual cross correlation, and the identification of the peak of the correlation surface. This is all handled in the estimate() method and is performed for each requested target.

When all of the required data has been successfully loaded into an instance of this class, the estimate() method is used to perform the estimation for the requested image. The results are stored in the observed_bearings attribute for the observed center of figure locations. In addition, the predicted location of the center of figure for each target is stored in the computed_bearings attribute. Finally, details about the fit are stored as a dictionary in the appropriate element of the details attribute. Specifically, these dictionaries will contain the following keys:
'Correlation Score'
    The correlation score at the peak of the correlation surface. This is only available if the fit was successful.

'Correlation Surface'
    The raw correlation surface as a 2D array. Each pixel in the correlation surface represents the correlation score when the center of the template is lined up with the corresponding image pixel. This is only available if the fit was successful.

'Correlation Peak Location'
    The location of the correlation peak before correcting it to find the location of the target center of figure. This is only available if the fit was successful.

'Target Template Coordinates'
    The location of the center of figure of the target in the template. This is only available if the fit was successful.

'Failed'
    A message indicating why the fit failed. This will only be present if the fit failed (so you can do something like 'Failed' in cross_correlation.details[target_ind] to check whether something failed). The message should be a human-readable description of what caused the failure.

'Max Correlation'
    The peak value of the correlation surface. This is only available if the fit failed due to too low of a correlation score.
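As an illustration of how these keys are typically consumed, the following sketch checks one element of the details list for failure before reading the score. The xcorr_details dictionary here is a hand-built stand-in for what estimate() would store, not actual output from GIANT:

```python
# Hand-built stand-in for one element of the details attribute after a
# successful fit; real values would come from XCorrCenterFinding.estimate().
xcorr_details = {
    'Correlation Score': 0.97,
    'Correlation Peak Location': (251.0, 313.0),
    'Target Template Coordinates': (50.0, 50.0),
}

# The 'Failed' key is only present when the fit failed, so a membership
# test distinguishes success from failure.
if 'Failed' in xcorr_details:
    print('fit failed:', xcorr_details['Failed'])
else:
    print('correlation score:', xcorr_details['Correlation Score'])
```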
Warning

Before calling the estimate() method, be sure that the scene has been updated to correspond to the correct image time. This class does not update the scene automatically, even if the scene attribute is a Scene instance.

Parameters
scene (giant.ray_tracer.scene.Scene) – The scene describing the a priori locations of the targets and the light source.

camera (giant.camera.Camera) – The Camera object containing the camera model and images to be analyzed.

image_processing (giant.image_processing.ImageProcessing) – An instance of ImageProcessing. This is used for denoising the image and for generating the correlation surface using the denoise_image() and correlate() methods, respectively.

options (Optional[giant.relative_opnav.estimators.cross_correlation.XCorrCenterFindingOptions]) – A dataclass specifying the options to set for this instance. If provided, it takes precedence over all keyword arguments, therefore it is not recommended to mix methods.

brdf (Optional[giant.ray_tracer.illumination.IlluminationModel]) – The illumination model that transforms the geometric ray tracing results (see ILLUM_DTYPE) into intensity values. Typically this is one of the options from the illumination module.

rays (Union[giant.ray_tracer.rays.Rays, None, List[giant.ray_tracer.rays.Rays]]) – The rays to use when rendering the template. If None, then the rays required to render the template will be automatically computed. Optionally, a list of Rays objects where each element corresponds to the rays to use for the corresponding template in the Scene.target_objs list. Typically this should be left as None.

grid_size (int) – The subsampling to use per pixel when rendering the template. This should be the number of sub-pixels per side of a pixel (that is, if grid_size=3 then subsampling will be done on an equally spaced 3x3 grid, giving 9 sub-pixels per pixel). If rays is not None then this is ignored.

peak_finder (Callable[[numpy.ndarray, bool], numpy.ndarray]) – The peak finder function to use. This should be a callable that takes in a 2D surface as a numpy array and returns the (x, y) location of the peak of the surface.

min_corr_score (float) – The minimum correlation score to accept for something to be considered found in an image. The correlation score is the Pearson product-moment coefficient between the image and the template. This should be a number between -1 and 1, and in nearly every case a number between 0 and 1. Setting this to -1 essentially turns the minimum correlation score check off.

blur (bool) – A flag to perform a Gaussian blur on the correlation surface before locating the peak to remove high frequency noise.

search_region (Optional[int]) – The number of pixels to search around the a priori predicted center for the peak of the correlation surface. If None, then the entire correlation surface is searched.

template_overflow_bounds – The number of pixels to render in the template that overflow outside of the camera field of view. Set to a number less than 0 to accept all overflow pixels in the template. Set to a number greater than or equal to 0 to limit the number of overflow pixels.
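GIANT performs the correlation itself through the supplied ImageProcessing instance, but the idea behind the min_corr_score, peak_finder, and blur options can be sketched in plain numpy: score every template alignment with the Pearson product-moment coefficient and take the peak of the resulting surface. The function names below are illustrative only and do not mirror GIANT's internal API:

```python
import numpy as np


def pearson_score(patch: np.ndarray, template: np.ndarray) -> float:
    """Pearson product-moment coefficient between two equal-size arrays."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0


def correlate(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Brute-force correlation surface over every valid template placement."""
    th, tw = template.shape
    ih, iw = image.shape
    surface = np.zeros((ih - th + 1, iw - tw + 1))
    for r in range(surface.shape[0]):
        for c in range(surface.shape[1]):
            surface[r, c] = pearson_score(image[r:r + th, c:c + tw], template)
    return surface


def peak_finder(surface: np.ndarray, blur: bool = False) -> np.ndarray:
    """Return the (x, y) = (column, row) location of the surface maximum.

    Mirrors the documented peak_finder shape: a 2D surface in, the (x, y)
    peak location out.  The blur branch is a simple 3x3 box smoothing
    stand-in for the Gaussian blur the blur option describes.
    """
    if blur:
        padded = np.pad(surface, 1, mode='edge')
        surface = sum(padded[dr:dr + surface.shape[0], dc:dc + surface.shape[1]]
                      for dr in range(3) for dc in range(3)) / 9.0
    row, col = np.unravel_index(np.argmax(surface), surface.shape)
    return np.array([col, row])


# A toy image containing a bright cross-shaped blob centered at (x, y) = (6, 4)
image = np.zeros((10, 12))
image[3:6, 5:8] = [[0, 1, 0], [1, 2, 1], [0, 1, 0]]
template = np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]], dtype=float)

surface = correlate(image, template)
x, y = peak_finder(surface)
# The peak is where the template's top-left corner aligns at (col, row) = (5, 3);
# adding the template-center offset (1, 1) recovers the blob center.
print(x + 1, y + 1)
```

A real implementation would also compare the peak score against min_corr_score and declare the fit failed when it falls below the threshold, as the option above describes.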
This technique generates CENTER-FINDING bearing observables to the center of figure of a target.
A flag specifying that this RelNav estimator generates and stores templates in the templates attribute.

The number of pixels to render in the template that overflow outside of the camera field of view. Set to a number less than 0 to accept all overflow pixels in the template. Set to a number greater than or equal to 0 to limit the number of overflow pixels.

This setting can be particularly important in cases where the body fills significantly more than the field of view of the camera, because the camera distortion models are usually undefined outside of the field of view. In most typical cases where you can see the entire body in the field of view, though, this setting should have little to no impact and can safely be left at -1.
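As a concrete reading of this rule, the mask below keeps or drops template sample coordinates based on the overflow bound. This is only an illustration of the documented behavior with a hypothetical helper, not GIANT's internal implementation:

```python
import numpy as np


def overflow_mask(cols, rows, n_cols, n_rows, bounds):
    """Select which template pixels to keep for a given overflow bound.

    bounds < 0 keeps every pixel; bounds >= 0 keeps only pixels within
    `bounds` pixels of the detector edges (0..n_cols-1 by 0..n_rows-1).
    """
    cols = np.asarray(cols)
    rows = np.asarray(rows)
    if bounds < 0:
        return np.ones(cols.shape, dtype=bool)
    return ((cols >= -bounds) & (cols <= n_cols - 1 + bounds) &
            (rows >= -bounds) & (rows <= n_rows - 1 + bounds))


# Pixel coordinates of three template samples on a 100x100 detector:
# one inside, one 5 pixels past the right edge, one 50 pixels past it.
cols = [50.0, 104.0, 149.0]
rows = [50.0, 50.0, 50.0]

print(overflow_mask(cols, rows, 100, 100, -1))   # keep everything
print(overflow_mask(cols, rows, 100, 100, 10))   # drop the far-overflow pixel
```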
The camera instance that represents the camera used to take the images we are performing Relative OpNav on. This is the source of the camera model, and may be used for other information about the camera as well. See the Camera property for details.
The scene which defines the a priori locations of all targets and light sources with respect to the camera.
You can assume that the scene has been updated for the appropriate image time inside of the class.
Summary of Methods

This method computes the required rays to render a given target based on the location of the target in the image.

This method identifies the center of each target in the image using cross correlation.

This method returns the computed illumination values for the given target and the (sub)pixels that each illumination value corresponds to.

This method resets the observed/computed attributes as well as the details attribute to have their default values.

This method returns a generator which yields target_index, target pairs that are to be processed based on the input.