For the first part of this algorithm, we're following a modified version of the one in a paper by Brown et al. The idea is to first detect interest points in the two photos to stitch using the Harris corner detection algorithm. This algorithm detects corners in the image, that is, points where a patch centered on the point differs widely from patches centered on nearby points. The useful aspect of this algorithm is that it detects a lot of correct features, so we will be able to find correspondences reasonably well. Unfortunately, it also comes up with a lot of false positives, and by the nature of the problem some of the features in one image won't exist in the second. We take these issues into account when we do our matching algorithm.

The first step is to get rid of feature points that are too close to one another, keeping the higher-gradient points over the lower ones. Once we have pruned in this way, we go through and extract a feature description from the patch surrounding each point. If we are using 40-pixel-wide patches, then the feature description consists of an 8x8 grid of grayscale pixel values corresponding to a resized version of the 40x40 patch in the original image. The idea with this patch is that it is spatially invariant, so it will be the same (or very similar) in other images.
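A minimal sketch of the detection and pruning steps in Python, assuming a grayscale float image; the smoothing sigma, Harris constant k, response threshold, and minimum distance below are illustrative choices, not values from this project:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(gray, sigma=1.0, k=0.04):
    """Harris corner response for a grayscale image."""
    # Image gradients (np.gradient returns axis-0 (y) first)
    Iy, Ix = np.gradient(gray)
    # Second-moment matrix entries, smoothed with a Gaussian window
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

def prune_close_points(points, strengths, min_dist=10):
    """Greedy suppression: visit points strongest-first and drop any
    point within min_dist of one already kept."""
    order = np.argsort(-strengths)
    kept = []
    for i in order:
        p = points[i]
        if all(np.hypot(p[0] - q[0], p[1] - q[1]) >= min_dist for q in kept):
            kept.append(p)
    return np.array(kept)

# Threshold the response to get candidate corners, then prune
R = harris_response(gray)
ys, xs = np.where(R > 0.01 * R.max())
candidates = np.stack([ys, xs], axis=1)
corners = prune_close_points(candidates, R[ys, xs], min_dist=10)
```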
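And a sketch of the descriptor step, subsampling every fifth pixel of the 40x40 patch to produce the 8x8 grid; the post only says the patch is resized, so an anti-aliased downsample (e.g. skimage.transform.resize) would fit equally well:

```python
import numpy as np

def extract_descriptors(gray, corners, patch_size=40, desc_size=8):
    """Reduce the patch_size x patch_size window around each corner to a
    desc_size x desc_size grid and flatten it into a feature vector."""
    half = patch_size // 2
    step = patch_size // desc_size   # 40 / 8 = 5: keep every 5th pixel
    descriptors, valid = [], []
    for y, x in corners:
        # Skip points whose patch would fall outside the image
        if y < half or x < half or y + half > gray.shape[0] or x + half > gray.shape[1]:
            continue
        patch = gray[y - half:y + half, x - half:x + half]
        desc = patch[::step, ::step]   # coarse 8x8 grayscale grid
        descriptors.append(desc.ravel())
        valid.append((y, x))
    return np.array(descriptors), np.array(valid)

# 64-dimensional descriptor per surviving corner
descs, pts = extract_descriptors(gray, corners)
```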