SurfaceFeatureNavigation¶
giant.relative_opnav.estimators.sfn.sfn_class:
- class giant.relative_opnav.estimators.sfn.sfn_class.SurfaceFeatureNavigation(scene, camera, image_processing, options=None, brdf=None, rays=None, grid_size=1, peak_finder=<function quadric_peak_finder_2d>, min_corr_score=0.5, blur=True, search_region=10, run_pnp_solver=False, pnp_ransac_iterations=0, second_search_region=None, measurement_sigma=1, position_sigma=None, attitude_sigma=None, state_sigma=None, max_lsq_iterations=None, lsq_relative_error_tolerance=1e-08, lsq_relative_update_tolerance=1e-08, cf_results=None, cf_index=None, show_templates=False)[source]¶
This class implements surface feature navigation using normalized cross correlation template matching for GIANT.

All of the steps required for performing surface feature navigation are handled by this class, including the identification of visible features in the image, the rendering of the templates for each feature, the actual cross correlation, the identification of the peaks of the correlation surfaces, and optionally the solution of a PnP problem based on the observed feature locations in the image. This is all handled in the estimate() method and is performed for each requested target. Note that targets must have shapes of FeatureCatalogue to use this class.

When all of the required data has been successfully loaded into an instance of this class, the estimate() method is used to perform the estimation for the requested image. The results are stored into the observed_bearings attribute for the observed center of template locations. In addition, the predicted location of the center of each template is stored in the computed_bearings attribute. Finally, the details about the fit are stored as a dictionary in the appropriate element in the details attribute. Specifically, these dictionaries will contain the following keys:
'Correlation Scores'
    The correlation score at the peak of the correlation surface for each feature as a list of floats. The corresponding element will be 0 for any features that were not found. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Visible Features'
    The list of feature indices (into the FeatureCatalogue.features list) that were looked for in the image. Each element of this list corresponds to the corresponding element in the templates list. If no potential visible features were expected in the image then this is not available.

'Correlation Peak Locations'
    The location of the correlation peaks before correcting them to find the location of the feature in the image, as a list of size 2 numpy arrays. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. Any features that were not found in the image have np.nan for their values. If no potential visible features were expected in the image then this is not available.

'Correlation Surfaces'
    The raw correlation surfaces as 2D arrays of shape 2*search_region+1 x 2*search_region+1. Each pixel in the correlation surface represents a shift between the predicted and expected location, according to sfn_correlator(). Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Target Template Coordinates'
    The location of the center of each feature in its corresponding template. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Intersect Masks'
    The boolean arrays with the same shapes as the corresponding rendered templates, with True where a ray through that pixel struck the surface of the template and False otherwise. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Space Mask'
    The boolean array the same shape as the image specifying which pixels of the image we thought were empty space with a True and which we thought were on the body with a False. If no potential visible features were expected in the image then this is not available.

'PnP Solution'
    A boolean indicating whether the PnP solution was successful (True) or not. This is only available if a PnP solution was attempted.

'PnP Translation'
    The solved-for translation in the original camera frame that minimizes the residuals in the PnP solution as a length 3 array with units of kilometers. This is only available if a PnP solution was attempted and was successful.

'PnP Rotation'
    The solved-for rotation of the original camera frame that minimizes the residuals in the PnP solution as a Rotation. This is only available if a PnP solution was attempted and was successful.

'PnP Position'
    The solved-for relative position of the target in the camera frame after the PnP solution is applied as a length 3 numpy array in km.

'PnP Orientation'
    The solved-for relative orientation of the target frame with respect to the camera frame after the PnP solution is applied as a Rotation.

'Failed'
    A message indicating why SFN failed. This will only be present if the SFN fit failed (so you can do something like 'Failed' in sfn.details[target_ind] to check whether something failed). The message should be a human readable description of what caused the failure.
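For example, after processing an image the stored results can be inspected as follows. This is a minimal sketch: construction of the scene, camera, and image_processing objects is elided, and the scene-update call is an assumption based on the usual GIANT Scene interface.

    from giant.relative_opnav.estimators.sfn.sfn_class import SurfaceFeatureNavigation

    # scene, camera, and image_processing are assumed to have been built elsewhere
    sfn = SurfaceFeatureNavigation(scene, camera, image_processing)

    image = camera.images[0]  # the image to process
    scene.update(image)       # SFN does not update the scene itself (see the Warning below)

    sfn.estimate(image)

    for target_ind, details in enumerate(sfn.details):
        if not details:  # targets that were not processed are assumed to be empty/None
            continue
        if 'Failed' in details:
            # the human readable description of what caused the failure
            print(f"target {target_ind} failed: {details['Failed']}")
            continue
        print("visible features:", details['Visible Features'])
        print("correlation scores:", details['Correlation Scores'])
        print("observed bearings:", sfn.observed_bearings[target_ind])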
Warning

Before calling the estimate() method be sure that the scene has been updated to correspond to the correct image time. This class does not update the scene automatically.

- Parameters
scene (giant.ray_tracer.scene.Scene) – The scene describing the a priori locations of the targets and the light source.

camera (giant.camera.Camera) – The Camera object containing the camera model and images to be analyzed.

image_processing (giant.image_processing.ImageProcessing) – An instance of ImageProcessing. This is used for denoising the image and for generating the correlation surface using the denoise_image() and correlate() methods respectively.

options (Optional[giant.relative_opnav.estimators.sfn.sfn_class.SurfaceFeatureNavigationOptions]) – A dataclass specifying the options to set for this instance. If provided it takes precedence over all keyword arguments, therefore it is not recommended to mix methods.

brdf (Optional[giant.ray_tracer.illumination.IlluminationModel]) – The illumination model that transforms the geometric ray tracing results (see ILLUM_DTYPE) into intensity values. Typically this is one of the options from the illumination module.

rays (Union[giant.ray_tracer.rays.Rays, None, List[giant.ray_tracer.rays.Rays]]) – The rays to use when rendering the template. If None then the rays required to render the template will be automatically computed. Optionally, a list of Rays objects where each element corresponds to the rays to use for the corresponding template in the Scene.target_objs list. Typically this should be left as None.

grid_size (int) – The subsampling to use per pixel when rendering the template. This should be the number of sub-pixels per side of a pixel (that is, if grid_size=3 then subsampling will be in an equally spaced 3x3 grid -> 9 sub-pixels per pixel). If rays is not None then this is ignored.

peak_finder (Callable[[numpy.ndarray, bool], numpy.ndarray]) – The peak finder function to use. This should be a callable that takes in a 2D surface as a numpy array and returns the (x,y) location of the peak of the surface. (A sketch of a custom peak finder is given at the end of this page.)

min_corr_score (float) – The minimum correlation score to accept for something to be considered found in an image. The correlation score is the Pearson Product Moment Coefficient between the image and the template. This should be a number between -1 and 1, and in nearly every case a number between 0 and 1. Setting this to -1 essentially turns the minimum correlation score check off.

blur (bool) – A flag to perform a Gaussian blur on the correlation surface before locating the peak to remove high frequency noise.

search_region (int) – The number of pixels to search around the a priori predicted center for the peak of the correlation surface. If None then the entire correlation surface is searched.

run_pnp_solver (bool) – A flag specifying whether to use the PnP solver to correct errors in the initial relative state between the camera and the target body.

pnp_ransac_iterations (int) – The number of RANSAC iterations to attempt in the PnP solver. Set to 0 to turn off the RANSAC component of the PnP solver.

second_search_region (Optional[int]) – The distance around the nominal location to search for each feature in the image after correcting errors using the PnP solver.

measurement_sigma (Union[Sequence, numpy.ndarray, numbers.Real]) – The uncertainty to assume for each measurement in pixels. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the measurement_sigma documentation for a description of valid inputs.

position_sigma (Optional[Union[Sequence, numpy.ndarray, numbers.Real]]) – The uncertainty to assume for the relative position vector in kilometers. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the position_sigma documentation for a description of valid inputs. If the state_sigma input is not None then this is ignored.

attitude_sigma (Optional[Union[Sequence, numpy.ndarray, numbers.Real]]) – The uncertainty to assume for the relative orientation rotation vector in radians. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the attitude_sigma documentation for a description of valid inputs. If the state_sigma input is not None then this is ignored.

state_sigma (Optional[Union[Sequence, numpy.ndarray]]) – The uncertainty to assume for the relative position vector and orientation rotation vector in kilometers and radians respectively. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the state_sigma documentation for a description of valid inputs. If this input is not None then the attitude_sigma and position_sigma inputs are ignored.

max_lsq_iterations (Optional[int]) – The maximum number of iterations to make in the least squares solution to the PnP problem.

lsq_relative_error_tolerance (float) – The relative tolerance in the residuals to signal convergence in the least squares solution to the PnP problem.

lsq_relative_update_tolerance (float) – The relative tolerance in the update vector to signal convergence in the least squares solution to the PnP problem.

cf_results (Optional[Union[Sequence, numpy.ndarray]]) – A numpy array containing the center finding residuals for the target that the feature catalogue is a part of. If present this is used to correct errors in the a priori line of sight to the target before searching for features in the image.

cf_index (Optional[List[int]]) – A list that maps the feature catalogues contained in the scene (in order) to the appropriate column of the cf_results matrix. If left blank the mapping is assumed to be in like order.

show_templates (bool) – A flag to show the rendered templates for each feature "live". This is useful for debugging but in general should not be used.
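Since the options dataclass takes precedence over the keyword arguments, a common pattern is to configure the estimator entirely through SurfaceFeatureNavigationOptions. This is a sketch assuming the dataclass fields share the names of the keyword arguments listed above:

    from giant.relative_opnav.estimators.sfn.sfn_class import (
        SurfaceFeatureNavigation,
        SurfaceFeatureNavigationOptions,
    )

    # field names are assumed to mirror the keyword arguments documented above
    options = SurfaceFeatureNavigationOptions(
        min_corr_score=0.6,      # reject matches whose peak score is below 0.6
        search_region=20,        # search +/- 20 pixels around the predicted center
        run_pnp_solver=True,     # correct a priori state errors with the PnP solver
        second_search_region=5,  # tighter re-search after the PnP correction
        measurement_sigma=0.5,   # pixel uncertainty used to weight the PnP problem
    )

    sfn = SurfaceFeatureNavigation(scene, camera, image_processing, options=options)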
- observable_type: List[giant.relative_opnav.estimators.estimator_interface_abc.RelNavObservablesType] = [<RelNavObservablesType.LANDMARK: 'LANDMARK'>]¶
This technique generates LANDMARK bearing observables to the center of landmarks in the image.
- generates_templates: bool = True¶
A flag specifying that this RelNav estimator generates and stores templates in the templates attribute.
- technique: str = 'sfn'¶
The name for the technique for registering with RelativeOpNav. If None then the name will default to the name of the module where the class is defined.
This should typically be all lowercase and should not include any spaces or special characters except for _ as it will be used to make attribute/method names. (That is, MyEstimator.technique.isidentifier() should evaluate True.)
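The identifier requirement exists because the technique name is used to build attribute and method names when the class is registered with RelativeOpNav. A quick illustration (the generated relnav attribute names in the comments are assumptions based on the naming convention described above):

    # 'sfn' must be usable as part of a python attribute/method name
    assert SurfaceFeatureNavigation.technique.isidentifier()

    # once registered with RelativeOpNav, the technique name is used to expose
    # helpers such as relnav.sfn_estimate() (name assumed from the convention)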
- property camera: giant.camera.Camera¶
The camera instance that represents the camera used to take the images we are performing Relative OpNav on.
This is the source of the camera model, and may be used for other information about the camera as well. See the Camera property for details.
- property scene: giant.ray_tracer.scene.Scene¶
The scene which defines the a priori locations of all targets and light sources with respect to the camera.
You can assume that the scene has been updated for the appropriate image time inside of the class.
- visible_features: List[Optional[List[int]]]¶
This variable is used to indicate which features are predicted to be visible in the image.
Each visible feature is identified by its index in the FeatureCatalogue.features list.
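For instance, after a call to estimate() the stored indices can be mapped back into the feature catalogue. This sketch assumes the first target's shape is a FeatureCatalogue:

    catalogue = scene.target_objs[0].shape  # must be a FeatureCatalogue for SFN

    visible = sfn.visible_features[0]
    if visible is not None:
        for feature_ind in visible:
            # look up each predicted-visible feature by its index
            feature = catalogue.features[feature_ind]
            print(feature_ind, feature)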
Summary of Methods
apply_options
    This method applies the input options to the current instance.

compute_rays
    This method computes the required rays to render a given feature based on the current estimate of the location and orientation of the feature in the image.

estimate
    This method identifies the locations of surface features in the image through cross correlation of rendered templates with the image.

pnp_solver
    This method attempts to solve for an update to the relative position/orientation of the target with respect to the image based on the observed feature locations in the image.

render
    This method renders each visible feature for the current target according to the current estimate of the relative position/orientation between the target and the camera using single bounce ray tracing.

reset
    This method resets the observed/computed attributes as well as the details attribute to have no values.

target_generator
    This method returns a generator which yields target_index, target pairs that are to be processed based on the input include_targets.
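Finally, as referenced in the peak_finder parameter description above, a custom peak finder only needs to honor the documented callable signature. This is a crude sketch that returns the integer argmax with no subpixel fit, unlike the default quadric_peak_finder_2d:

    import numpy as np

    def simple_peak_finder(surface: np.ndarray, blur: bool) -> np.ndarray:
        """Return the (x, y) location of the peak of a 2D correlation surface.

        A stand-in for quadric_peak_finder_2d: no subpixel fitting is done and
        the blur flag is accepted but ignored in this sketch.
        """
        row, col = np.unravel_index(np.nanargmax(surface), surface.shape)
        return np.array([col, row], dtype=np.float64)  # (x, y) pixel location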