This document summarizes experiments comparing the open-source search engine Lucene with Juru, a custom search engine, on TREC data. The authors investigated differences in search quality between the two engines and found that Lucene's default scoring was inferior to Juru's. They modified Lucene's scoring function by changing its document length normalization and term frequency normalization. Evaluations showed that the modified Lucene performed comparably to Juru and to other top systems in the TREC 1-Million Queries track, demonstrating both the robustness of the scoring modifications and the usefulness of the track's new evaluation measures.
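The two normalizations mentioned can be sketched numerically. The sketch below assumes classic Lucene TF-IDF defaults (tf = sqrt(freq), lengthNorm = 1/sqrt(doc_len)); the pivoted-length alternative shown alongside is a generic illustration of the kind of change described, not the authors' exact replacement formula, and the `slope` parameter is a hypothetical tuning knob:

```python
import math

def default_factors(freq, doc_len):
    """Classic Lucene TF-IDF per-term factors:
    tf = sqrt(freq), lengthNorm = 1 / sqrt(doc_len)."""
    return math.sqrt(freq) * (1.0 / math.sqrt(doc_len))

def pivoted_factors(freq, doc_len, avg_doc_len, slope=0.2):
    """Illustrative pivoted-length variant (not the paper's formula):
    log-dampened tf, with length normalized against the collection's
    average document length instead of the raw document length."""
    norm = 1.0 / ((1.0 - slope) + slope * (doc_len / avg_doc_len))
    return (1.0 + math.log(freq)) * norm

# Under the default, a long document is penalized purely by its own
# length; under the pivoted variant, only relative to the average.
print(default_factors(4, 100))        # sqrt(4) / sqrt(100) = 0.2
print(pivoted_factors(1, 500, 500))   # average-length doc, norm = 1.0
```

The contrast shows why such a change matters for heterogeneous TREC collections: pure 1/sqrt(length) normalization systematically favors short documents, while a pivoted scheme penalizes documents only to the extent that they are longer than average.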