limap.line2d.SOLD2.model package
Subpackages
- limap.line2d.SOLD2.model.nets package
Submodules
limap.line2d.SOLD2.model.line_detection module
Implementation of the line segment detection module.
- class limap.line2d.SOLD2.model.line_detection.LineSegmentDetectionModule(detect_thresh, num_samples=64, sampling_method='local_max', inlier_thresh=0.0, heatmap_low_thresh=0.15, heatmap_high_thresh=0.2, max_local_patch_radius=3, lambda_radius=2.0, use_candidate_suppression=False, nms_dist_tolerance=3.0, use_heatmap_refinement=False, heatmap_refine_cfg=None, use_junction_refinement=False, junction_refine_cfg=None)
Bases: object
Module extracting line segments from junctions and line heatmaps.
- candidate_suppression(junctions, candidate_map)
Suppress overlapping long lines in the candidate segments.
- convert_inputs(inputs, device)
Convert the inputs to torch tensors on the given device.
- detect(junctions, heatmap, device=device(type='cpu'))
Main function performing line segment detection (a usage sketch follows this class listing).
- detect_bilinear(heatmap, cand_h, cand_w, H, W, device)
Detection by bilinear sampling.
- detect_local_max(heatmap, cand_h, cand_w, H, W, normalized_seg_length, device)
Detection by local maximum search.
- refine_heatmap(heatmap, ratio=0.2, valid_thresh=0.01)
Global heatmap refinement method.
- refine_heatmap_local(heatmap, num_blocks=5, overlap_ratio=0.5, ratio=0.2, valid_thresh=0.002)
Local heatmap refinement method.
- refine_junction_perturb(junctions, line_map_pred, heatmap, H, W, device)
Refine the line endpoints in a similar way as in LSD.
- segments_to_line_map(junctions, segments)
Convert the list of segments to line map.
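A minimal, hypothetical usage sketch of this module is given below. The constructor values and the input conventions (junction coordinate order, heatmap range, and the exact contents of the return value) are assumptions based on the signatures above, not statements about the LIMAP implementation; the random inputs are placeholders for outputs of the SOLD2 network.

```python
import numpy as np
import torch

from limap.line2d.SOLD2.model.line_detection import LineSegmentDetectionModule

# Illustrative thresholds only; use the values from your SOLD2 configuration.
detector = LineSegmentDetectionModule(
    detect_thresh=0.5,            # threshold on the sampled heatmap score
    num_samples=64,               # points sampled along each candidate segment
    sampling_method="local_max",  # see detect_local_max / detect_bilinear above
)

# Placeholder inputs standing in for the network outputs:
# junctions as an (N, 2) array of pixel coordinates, heatmap as an (H, W) map in [0, 1].
junctions = np.random.rand(20, 2) * 128
heatmap = np.random.rand(128, 128).astype(np.float32)

# detect() scores the heatmap between pairs of junctions and keeps the pairs
# passing the thresholds; check the implementation for the exact return format.
results = detector.detect(junctions, heatmap, device=torch.device("cpu"))
```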
limap.line2d.SOLD2.model.line_detector module
Line segment detection from raw images.
- class limap.line2d.SOLD2.model.line_detector.LineDetector(model_cfg, ckpt_path, device, line_detector_cfg, junc_detect_thresh=None)
Bases: object
- limap.line2d.SOLD2.model.line_detector.line_map_to_segments(junctions, line_map)
Convert a line map to a Nx2x2 list of segments.
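The line map used by line_map_to_segments (and produced by segments_to_line_map above) is an adjacency matrix over the detected junctions. The sketch below only illustrates that data layout; the coordinate convention of the junctions is an assumption.

```python
import numpy as np

from limap.line2d.SOLD2.model.line_detector import line_map_to_segments

# Three junctions and an (N, N) symmetric 0/1 line map:
# line_map[i, j] = 1 means junctions i and j are joined by a segment.
junctions = np.array([[10.0, 10.0],
                      [10.0, 50.0],
                      [40.0, 20.0]])
line_map = np.array([[0, 1, 0],
                     [1, 0, 1],
                     [0, 1, 0]])

segments = line_map_to_segments(junctions, line_map)  # expected shape: (num_segments, 2, 2)
```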
limap.line2d.SOLD2.model.line_matcher module
Implements the full pipeline from raw images to line matches.
- class limap.line2d.SOLD2.model.line_matcher.LineMatcher(model_cfg, ckpt_path, device, line_detector_cfg, line_matcher_cfg, multiscale=False, scales=[1.0, 2.0])
Bases: object
Full line matcher including line detection and matching with the Needleman-Wunsch algorithm.
- line_detection(input_image, valid_mask=None, desc_only=False, profile=False)
- multiscale_line_detection(input_image, valid_mask=None, desc_only=False, profile=False, scales=[1.0, 2.0], aggregation='mean')
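A heavily hedged sketch of how the matcher is put together follows. The configuration dictionaries, checkpoint path, and image handling are placeholders: in LIMAP they come from the SOLD2 configuration and pretrained weights shipped with the package, and load_sold2_configs below is a hypothetical helper standing in for however those are loaded in your setup.

```python
import torch

from limap.line2d.SOLD2.model.line_matcher import LineMatcher

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical helper: returns the SOLD2 model, detector, and matcher configs.
model_cfg, detector_cfg, matcher_cfg = load_sold2_configs()

matcher = LineMatcher(
    model_cfg=model_cfg,
    ckpt_path="path/to/sold2_checkpoint.tar",  # placeholder path to pretrained weights
    device=device,
    line_detector_cfg=detector_cfg,
    line_matcher_cfg=matcher_cfg,
    multiscale=False,
)

# line_detection() / multiscale_line_detection() then run detection (and
# description) on a single image; the expected image tensor layout should be
# taken from the LIMAP SOLD2 wrapper.
```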
limap.line2d.SOLD2.model.line_matching module
Implementation of the line matching methods.
- class limap.line2d.SOLD2.model.line_matching.WunschLineMatcher(cross_check=True, num_samples=5, min_dist_pts=8, top_k_candidates=10, grid_size=4, sampling='regular', line_score=False)
Bases: object
Class matching two sets of line segments with the Needleman-Wunsch algorithm.
- asl_feat_saliency_score(desc)
Compute the ASLFeat saliency score on a 3D or 4D descriptor.
- compute_descriptors(line_seg1, desc1)
- compute_matches(descinfo1, descinfo2)
- compute_matches_topk(descinfo1, descinfo2, topk=10)
- compute_matches_topk_gpu(descinfo1, descinfo2, topk=10)
- d2_net_saliency_score(desc)
Compute the D2-Net saliency score on a 3D or 4D descriptor.
- filter_and_match_lines(scores)
Use the scores to keep the top k best lines, compute the Needleman-Wunsch algorithm on each candidate pair, and keep the highest score.
Inputs:
- scores: a (N, M, n, n) np.array containing the pairwise scores of the elements to match.
Outputs:
- matches: a (N,) np.array containing the indices of the best match.
- filter_and_match_lines_topk(scores, topk=10)
- filter_and_match_lines_topk_gpu(scores, topk=10)
- forward(line_seg1, line_seg2, desc1, desc2)
Find the best matches between two sets of line segments and their corresponding descriptors.
- get_pairwise_distance(line_seg1, line_seg2, desc1, desc2)
Compute the OPPOSITE of the NW score for pairs of line segments and their corresponding descriptors.
- needleman_wunsch(scores)
Batched implementation of the Needleman-Wunsch algorithm. The cost of the InDel operation is set to 0 by subtracting the gap penalty from the scores (a schematic sketch follows this class listing).
Inputs:
- scores: a (B, N, M) np.array containing the pairwise scores of the elements to match.
- sample_line_points(line_seg)
Regularly sample points along each line segment, with a minimal distance between each point, and pad the remaining points.
Inputs:
- line_seg: an Nx2x2 torch.Tensor.
Outputs:
- line_points: an Nxnum_samplesx2 np.array.
- valid_points: a boolean Nxnum_samples np.array.
- sample_salient_points(line_seg, desc, img_size, saliency_type='d2_net')
Sample the most salient points along each line segment, with a minimal distance between each point, and pad the remaining points.
Inputs:
- line_seg: an Nx2x2 torch.Tensor.
- desc: an NxDxHxW torch.Tensor.
- img_size: the original image size.
- saliency_type: 'd2_net' or 'asl_feat'.
Outputs:
- line_points: an Nxnum_samplesx2 np.array.
- valid_points: a boolean Nxnum_samples np.array.
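The dynamic program behind needleman_wunsch can be pictured with the schematic re-implementation below (not the library's code): once the gap penalty is subtracted from the pairwise scores, skipping a sampled point on either line costs nothing, and the recursion only chooses between skipping and matching the current pair. The gap value is illustrative.

```python
import numpy as np

def needleman_wunsch_sketch(scores: np.ndarray, gap: float = 0.1) -> float:
    """scores: (n, m) pairwise descriptor scores for the points sampled on two lines."""
    s = scores - gap                       # the InDel (gap) cost becomes 0 relative to matches
    n, m = s.shape
    nw = np.zeros((n + 1, m + 1))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            nw[i, j] = max(nw[i - 1, j],                        # skip a point on line 1
                           nw[i, j - 1],                        # skip a point on line 2
                           nw[i - 1, j - 1] + s[i - 1, j - 1])  # match the current pair
    return float(nw[n, m])

# Example: score two lines with 5 sampled points each.
rng = np.random.default_rng(0)
print(needleman_wunsch_sketch(rng.random((5, 5))))
```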
limap.line2d.SOLD2.model.loss module
Loss function implementations.
- class limap.line2d.SOLD2.model.loss.HeatmapLoss(class_weight)
Bases: Module
Heatmap prediction loss.
- forward(prediction, target, valid_mask=None)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class limap.line2d.SOLD2.model.loss.JunctionDetectionLoss(grid_size, keep_border)
Bases: Module
Junction detection loss.
- forward(prediction, target, valid_mask=None)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class limap.line2d.SOLD2.model.loss.RegularizationLoss
Bases: Module
Module for regularization loss.
- forward(loss_weights)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class limap.line2d.SOLD2.model.loss.TotalLoss(loss_funcs, loss_weights, weighting_policy)
Bases: Module
Total loss summing junction, heatmap, descriptor, and regularization losses.
- forward(junc_pred, junc_target, heatmap_pred, heatmap_target, valid_mask=None)
Detection only loss.
- forward_descriptors(junc_map_pred1, junc_map_pred2, junc_map_target1, junc_map_target2, heatmap_pred1, heatmap_pred2, heatmap_target1, heatmap_target2, line_points1, line_points2, line_indices, desc_pred1, desc_pred2, epoch, valid_mask1=None, valid_mask2=None)
Loss for detection + description.
- training: bool
- class limap.line2d.SOLD2.model.loss.TripletDescriptorLoss(grid_size, dist_threshold, margin)
Bases: Module
Triplet descriptor loss.
- descriptor_loss(desc_pred1, desc_pred2, points1, points2, line_indices, epoch)
- forward(desc_pred1, desc_pred2, points1, points2, line_indices, epoch)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- limap.line2d.SOLD2.model.loss.get_descriptor_loss_and_weight(model_cfg, global_w_policy)
Get the descriptor loss function and weight.
- limap.line2d.SOLD2.model.loss.get_heatmap_loss_and_weight(model_cfg, global_w_policy, device)
Get the heatmap loss function and weight.
- limap.line2d.SOLD2.model.loss.get_junction_loss_and_weight(model_cfg, global_w_policy)
Get the junction loss function and weight.
- limap.line2d.SOLD2.model.loss.get_loss_and_weights(model_cfg, device=device(type='cuda'))
Get loss functions and either static or dynamic weighting.
- limap.line2d.SOLD2.model.loss.heatmap_loss(heatmap_gt, heatmap_pred, valid_mask=None, class_weight=None)
Heatmap prediction loss.
- limap.line2d.SOLD2.model.loss.junction_detection_loss(junction_map, junc_predictions, valid_mask=None, grid_size=8, keep_border=True)
Junction detection loss.
- limap.line2d.SOLD2.model.loss.space_to_depth(input_tensor, grid_size)
PixelUnshuffle for PyTorch (a sketch follows this list).
- limap.line2d.SOLD2.model.loss.triplet_loss(desc_pred1, desc_pred2, points1, points2, line_indices, epoch, grid_size=8, dist_threshold=8, init_dist_threshold=64, margin=1)
Regular triplet loss for descriptor learning.
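space_to_depth above is described as a PixelUnshuffle for PyTorch; the snippet below only illustrates the underlying operation with the built-in torch.nn.PixelUnshuffle and makes no claim about the exact tensor layout used by the LIMAP implementation.

```python
import torch

# PixelUnshuffle folds every non-overlapping grid_size x grid_size block of an
# (N, C, H, W) tensor into grid_size**2 channels of an (H/g, W/g) map.
x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
y = torch.nn.PixelUnshuffle(downscale_factor=2)(x)
print(y.shape)  # torch.Size([1, 4, 2, 2])
```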
limap.line2d.SOLD2.model.lr_scheduler module
This file implements different learning rate schedulers.
- limap.line2d.SOLD2.model.lr_scheduler.get_lr_scheduler(lr_decay, lr_decay_cfg, optimizer)
Get the learning rate scheduler according to the config.
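The config keys consumed by get_lr_scheduler are not documented on this page, so the snippet below is only a generic illustration of the kind of mapping such a factory performs (a decay config turned into a torch scheduler); the lr_decay_cfg keys are hypothetical.

```python
import torch

# A throwaway optimizer to attach the scheduler to.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=1e-3)

lr_decay = True
lr_decay_cfg = {"policy": "exp", "gamma": 0.9}  # hypothetical keys, for illustration only

# What a scheduler factory of this kind typically builds:
if lr_decay and lr_decay_cfg["policy"] == "exp":
    scheduler = torch.optim.lr_scheduler.ExponentialLR(
        optimizer, gamma=lr_decay_cfg["gamma"])
```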
limap.line2d.SOLD2.model.metrics module
This file implements the evaluation metrics.
- class limap.line2d.SOLD2.model.metrics.AverageMeter(junc_metric_lst=None, heatmap_metric_lst=None, is_training=True, desc_metric_lst=None)
Bases: object
- average()
- update(metrics, loss_dict=None, num_samples=1)
- class limap.line2d.SOLD2.model.metrics.Metrics(detection_thresh, prob_thresh, grid_size, junc_metric_lst=None, heatmap_metric_lst=None, pr_metric_lst=None, desc_metric_lst=None)
Bases: object
Metric evaluation calculator.
- evaluate(junc_pred, junc_pred_nms, junc_gt, heatmap_pred, heatmap_gt, valid_mask, line_points1=None, line_points2=None, desc_pred1=None, desc_pred2=None, valid_points=None)
Perform evaluation.
- class limap.line2d.SOLD2.model.metrics.heatmap_precision(prob_thresh)
Bases: object
Heatmap precision.
- class limap.line2d.SOLD2.model.metrics.heatmap_recall(prob_thresh)
Bases: object
Heatmap recall.
- class limap.line2d.SOLD2.model.metrics.junction_pr(num_threshold=50)
Bases: object
Junction precision-recall info.
- class limap.line2d.SOLD2.model.metrics.junction_precision(detection_thresh)
Bases: object
Junction precision.
- class limap.line2d.SOLD2.model.metrics.junction_recall(detection_thresh)
Bases: object
Junction recall.
- class limap.line2d.SOLD2.model.metrics.matching_score(grid_size)
Bases: object
Descriptors matching score.
- limap.line2d.SOLD2.model.metrics.nms_fast(in_corners, H, W, dist_thresh)
Run a faster approximate non-maximum suppression on numpy corners shaped 3xN [x_i, y_i, conf_i]^T (a schematic sketch follows this list).
Algorithm summary: create a grid of size HxW and assign each corner location a 1, with the rest zeros. Iterate through all the 1's and convert them to -1 or 0, suppressing points by setting nearby values to 0.
Grid value legend:
- -1: kept.
- 0: empty or suppressed.
- 1: to be processed (converted to either kept or suppressed).
NOTE: The NMS first rounds points to integers, so the NMS distance might not be exactly dist_thresh. It also assumes points lie within the image boundary.
Inputs:
- in_corners: 3xN numpy array with corners [x_i, y_i, confidence_i]^T.
- H: image height.
- W: image width.
- dist_thresh: distance to suppress, measured as an infinity-norm distance.
Returns:
- nmsed_corners: 3xN numpy matrix with surviving corners.
- nmsed_inds: N-length numpy vector with surviving corner indices.
- limap.line2d.SOLD2.model.metrics.super_nms(prob_predictions, dist_thresh, prob_thresh=0.01, top_k=0)
Non-maximum suppression adapted from SuperPoint.
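The grid-based procedure described for nms_fast can be pictured with the schematic re-implementation below; it follows the summary above but is illustrative only and not the library's exact code.

```python
import numpy as np

def grid_nms_sketch(in_corners: np.ndarray, H: int, W: int, dist_thresh: int):
    """in_corners: 3xN array [x, y, conf]^T. Returns surviving corners and their original indices."""
    order = np.argsort(-in_corners[2])                # highest confidence first
    corners = in_corners[:, order]
    xs = np.rint(corners[0]).astype(int)              # round to integer pixels
    ys = np.rint(corners[1]).astype(int)
    pad = dist_thresh
    grid = np.zeros((H + 2 * pad, W + 2 * pad), dtype=np.int8)
    grid[ys + pad, xs + pad] = 1                      # 1 = to be processed
    keep = []
    for i, (x, y) in enumerate(zip(xs + pad, ys + pad)):
        if grid[y, x] == 1:                           # not yet suppressed by a stronger corner
            grid[y - pad:y + pad + 1, x - pad:x + pad + 1] = 0   # suppress the neighbourhood
            grid[y, x] = -1                           # -1 = kept
            keep.append(i)
    keep = np.array(keep, dtype=int)
    return corners[:, keep], order[keep]              # surviving corners and original indices

# Example: the middle corner is within dist_thresh of a stronger one and gets suppressed.
pts = np.array([[10.0, 12.0, 11.0],   # x
                [20.0, 21.0, 60.0],   # y
                [0.9,  0.5,  0.8]])   # confidence
kept, idx = grid_nms_sketch(pts, H=100, W=100, dist_thresh=4)
```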
limap.line2d.SOLD2.model.model_util module
- class limap.line2d.SOLD2.model.model_util.SOLD2Net(model_cfg)
Bases: Module
Full network for SOLD².
- forward(input_images)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- get_backbone()
Retrieve the backbone encoder network.
- get_descriptor_decoder()
Get the descriptor decoder.
- get_heatmap_decoder()
Get the heatmap decoder.
- get_junction_decoder()
Get the junction decoder.
- training: bool
- limap.line2d.SOLD2.model.model_util.get_model(model_cfg=None, loss_weights=None, mode='train', printing=False)
Get model based on the model configuration.
- limap.line2d.SOLD2.model.model_util.weight_init(m)
Weight initialization function.
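A minimal construction sketch closes this section; model_cfg is left as a placeholder because the SOLD2 configuration format is not documented on this page, and the weight_init call simply uses the standard torch.nn.Module.apply pattern.

```python
from limap.line2d.SOLD2.model.model_util import get_model, weight_init

model_cfg = ...  # placeholder: the SOLD2 model configuration dictionary

net = get_model(model_cfg)  # defaults to mode='train' per the signature above
net.apply(weight_init)      # apply the initializer to every submodule (torch.nn.Module.apply)
```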