For object detection, the approach I personally use and recommend is SIFT (Scale-Invariant Feature Transform) or SURF. Note that these algorithms are patented, so they are no longer included in the main OpenCV 3 distribution (they were moved to the opencv_contrib xfeatures2d module), though they are still available in OpenCV 2. As a good free alternative I prefer ORB, an open-source binary descriptor that works as an efficient substitute for SIFT/SURF.
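Since the constructor names changed between versions, here is a minimal sketch of how the detectors are created in each (the OpenCV 3 lines assume the opencv_contrib package is installed):

import cv2

# OpenCV 2.4.x
sift = cv2.SIFT()
orb = cv2.ORB()

# OpenCV 3.x -- SIFT/SURF live in the contrib module xfeatures2d
# sift = cv2.xfeatures2d.SIFT_create()
# surf = cv2.xfeatures2d.SURF_create()
# orb = cv2.ORB_create()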
Brute-Force Matching with SIFT Descriptors and Ratio Test
Here we use BFMatcher.knnMatch() to get the k best matches. In this example we take k=2 so that we can apply the ratio test explained by D. Lowe in his paper.
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('box.png',0) # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage
# Initiate SIFT detector
sift = cv2.SIFT()  # in OpenCV 3 use cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# BFMatcher with default params
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)
# Apply ratio test
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])
# cv2.drawMatchesKnn expects a list of lists as matches;
# the Python binding also requires the outImg argument (None here)
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,None,flags=2)
plt.imshow(img3),plt.show()
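If you also need to localize the object rather than just match it, a common follow-up is to estimate a homography with RANSAC from the surviving matches and project the query image's corners into the scene. A minimal sketch, assuming enough good matches survive the ratio test (the threshold of 10 is a hypothetical value to tune):

MIN_MATCH_COUNT = 10  # hypothetical threshold, tune for your data
if len(good) > MIN_MATCH_COUNT:
    # 'good' holds one-element lists, so unwrap each DMatch
    src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1,1,2)
    dst_pts = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1,1,2)
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    h, w = img1.shape
    corners = np.float32([[0,0],[0,h-1],[w-1,h-1],[w-1,0]]).reshape(-1,1,2)
    projected = cv2.perspectiveTransform(corners, M)  # object outline in the scene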

Moving ahead with FLANN based Matcher
FLANN stands for Fast Library for Approximate Nearest Neighbors. It
contains a collection of algorithms optimized for fast nearest-neighbor
search in large datasets and for high-dimensional features. It works
faster than BFMatcher for large datasets. We will see a second example
with the FLANN-based matcher.
For the FLANN-based matcher, we need to pass two dictionaries that
specify the algorithm to be used, its related parameters, etc. The
first one is IndexParams; the information to pass for each algorithm
is explained in the FLANN docs. In summary, for algorithms like SIFT
and SURF a KD-tree index works well, while binary descriptors such as
ORB should use an LSH index (see the sketch after this paragraph). The
second one is SearchParams, which specifies how many times the trees
in the index should be traversed; higher values give better precision
but take more time.
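For the ORB case, a hedged sketch of the index parameters, using the values suggested in the OpenCV documentation:

# IndexParams for binary descriptors such as ORB
FLANN_INDEX_LSH = 6
index_params = dict(algorithm = FLANN_INDEX_LSH,
                    table_number = 6,      # number of hash tables
                    key_size = 12,         # size of the hash key in bits
                    multi_probe_level = 1) # number of neighboring buckets probed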
Sample code using FLANN with SIFT:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('box.png',0) # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage
# Initiate SIFT detector
sift = cv2.SIFT()  # in OpenCV 3 use cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0,0] for i in xrange(len(matches))]
# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i]=[1,0]
draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = (255,0,0),
                   matchesMask = matchesMask,
                   flags = 0)
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)
plt.imshow(img3),plt.show()
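One caveat worth knowing: OpenCV returns images in BGR channel order while matplotlib expects RGB, so the colored match lines drawn above may appear with swapped colors. A minimal fix before displaying:

# convert BGR (OpenCV) to RGB (matplotlib) so the match colors render correctly
img3 = cv2.cvtColor(img3, cv2.COLOR_BGR2RGB)
plt.imshow(img3),plt.show()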

But what I recommend is Brute-Force Matching with ORB Descriptors
In this example I use ORB with a brute-force matcher. The code captures
frames from the camera in real time, computes keypoints and descriptors
for each input frame, compares them against the stored query image in
the same way, and reports the number of matching keypoints. The same
approach can be applied to the code above, using SIFT instead of ORB.
import numpy as np
import cv2
from imutils.video import WebcamVideoStream
from imutils.video import FPS

MIN_MATCH_COUNT = 10
img1 = cv2.imread('input_query.jpg', 0)  # query image, grayscale
orb = cv2.ORB()  # in OpenCV 3 use cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
# ORB produces binary descriptors, so match with the Hamming norm
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
webcam = WebcamVideoStream(src=0).start()
fps = FPS().start()
while True:
    img2 = webcam.read()
    key = cv2.waitKey(10)
    cv2.imshow('', img2)
    if key == 1048603:  # Esc key (code includes modifier bits on some platforms)
        break
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des2 is None:  # skip frames where no keypoints were found
        continue
    matches = bf.match(des1, des2)
    matches = sorted(matches, key=lambda x: x.distance)  # best matches first
    if not len(matches) > MIN_MATCH_COUNT:
        print "Not enough matches are found - %d/%d" % (len(matches), MIN_MATCH_COUNT)
    print len(matches)
    # img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2)
    fps.update()
fps.stop()
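After the loop, the imutils FPS object can report the measured throughput, and the stream and window should be released; a short sketch using the methods imutils provides:

print "elapsed time: %.2f" % fps.elapsed()
print "approx. FPS: %.2f" % fps.fps()
webcam.stop()            # release the threaded camera stream
cv2.destroyAllWindows()  # close the preview window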
A more descriptive video tutorial on this can be found here:
https://www.youtube.com/watch?v=ZW3nrP2OyLQ
and one more good thing is that it's open source:
https://gitlab.com/josemariasoladuran/object-recognition-opencv-python.git