I'm looking for general advice on how to search identifiers, product codes, or phone numbers in Apache Lucene 8.x. Suppose I am searching a list of product codes (e.g. ISBNs, such as 978-3-86680-192-9). If someone types 9783, 978 3, or 978-3, then 978-3-86680-192-9 should come up. The same should happen if an identifier uses any combination of letters, spaces, digits, and punctuation (for example: TS 123, 123.abc). How can I do this?
I thought I could solve this with a custom analyzer that removes all punctuation and whitespace, but the results were mixed:
public class IdentifierAnalyzer extends Analyzer {

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer tokenizer = new KeywordTokenizer();
        TokenStream tokenStream = new LowerCaseFilter(tokenizer);
        tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile("[^0-9a-z]"), "", true);
        tokenStream = new TrimFilter(tokenStream);
        return new TokenStreamComponents(tokenizer, tokenStream);
    }

    @Override
    protected TokenStream normalize(String fieldName, TokenStream in) {
        TokenStream tokenStream = new LowerCaseFilter(in);
        tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile("[^0-9a-z]"), "", true);
        tokenStream = new TrimFilter(tokenStream);
        return tokenStream;
    }
}

So while a PrefixQuery for TS1* gives the expected results, TS 1* (with a space) does not. When I look at the parsed query, I see that Lucene splits TS 1* into two clauses: myField:TS myField:1*. WordDelimiterGraphFilter looks interesting, but I could not figure out how to apply it here.
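As a side note, what this analyzer does to a single keyword token boils down to a plain lowercase-and-regex-replace, which makes it easy to check what the indexed term looks like without running Lucene at all. A minimal standalone sketch (plain Java, no Lucene dependency; the class name is made up for illustration):

```java
public class IdentifierNormalizer {

    // Mirrors the analyzer's chain on the single KeywordTokenizer token:
    // LowerCaseFilter, then PatternReplaceFilter replacing [^0-9a-z] with "".
    public static String normalize(String input) {
        return input.toLowerCase().replaceAll("[^0-9a-z]", "");
    }

    public static void main(String[] args) {
        System.out.println(normalize("978-3-86680-192-9")); // 9783866801929
        System.out.println(normalize("TS 123"));            // ts123
    }
}
```

This also makes the limitation visible: each value collapses into exactly one term, and the classic QueryParser splits "TS 1*" on whitespace before the analyzer ever sees it.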
Posted on 2021-06-25 01:34:19
This is not a comprehensive answer, but I agree that WordDelimiterGraphFilter looks helpful for this type of data. There may still be test cases that need additional handling, however.
Here is my custom analyzer, using a WordDelimiterGraphFilter:
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilterFactory;
import java.util.Map;
import java.util.HashMap;

public class IdentifierAnalyzer extends Analyzer {

    private WordDelimiterGraphFilterFactory getWordDelimiter() {
        Map<String, String> settings = new HashMap<>();
        settings.put("generateWordParts", "1");   // e.g. "PowerShot" => "Power" "Shot"
        settings.put("generateNumberParts", "1"); // e.g. "500-42" => "500" "42"
        settings.put("catenateAll", "1");         // e.g. "wi-fi" => "wifi" and "500-42" => "50042"
        settings.put("preserveOriginal", "1");    // e.g. "500-42" => "500" "42" "500-42"
        settings.put("splitOnCaseChange", "1");   // e.g. "fooBar" => "foo" "Bar"
        return new WordDelimiterGraphFilterFactory(settings);
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer tokenizer = new KeywordTokenizer();
        TokenStream tokenStream = new LowerCaseFilter(tokenizer);
        tokenStream = getWordDelimiter().create(tokenStream);
        return new TokenStreamComponents(tokenizer, tokenStream);
    }

    @Override
    protected TokenStream normalize(String fieldName, TokenStream in) {
        return new LowerCaseFilter(in);
    }
}

It uses the WordDelimiterGraphFilterFactory helper object, together with a map of parameters, to control which settings are applied.
You can see the full list of available settings in the WordDelimiterGraphFilterFactory JavaDoc. You may want to experiment with setting/unsetting different options.
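To get a feel for what this particular combination of settings does before wiring it into Lucene, the splitting behaviour can be approximated in plain Java. This is only a rough sketch: the real filter emits a token graph with position information, which this ignores, and the class name is invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class DelimiterSketch {

    // Roughly emulates WordDelimiterGraphFilter for the settings above:
    // split on non-alphanumerics and on lower-to-upper case changes
    // (generateWordParts, generateNumberParts, splitOnCaseChange),
    // keep the whole input (preserveOriginal), and also emit all parts
    // joined together (catenateAll). Input is lowercased first, as the
    // LowerCaseFilter runs before the delimiter filter in the analyzer.
    public static List<String> tokens(String input) {
        List<String> out = new ArrayList<>();
        out.add(input.toLowerCase());                       // preserveOriginal
        String[] parts = input.split("[^0-9A-Za-z]+|(?<=[a-z])(?=[A-Z])");
        StringBuilder catenated = new StringBuilder();
        for (String p : parts) {
            if (p.isEmpty()) continue;
            out.add(p.toLowerCase());                       // word/number parts
            catenated.append(p.toLowerCase());
        }
        out.add(catenated.toString());                      // catenateAll
        return out;
    }

    public static void main(String[] args) {
        System.out.println(tokens("978-3-86680-192-9"));
        System.out.println(tokens("TS 123"));
    }
}
```

Running this on "978-3-86680-192-9" produces the original, the parts 978 / 3 / 86680 / 192 / 9, and the catenated 9783866801929, which is why both "978 3" and "9783*" can match.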
Here is a test index builder for the following 3 input values:
978-3-86680-192-9
TS 123
123.abc

public static void buildIndex() throws IOException, FileNotFoundException, ParseException {
    final Directory dir = FSDirectory.open(Paths.get(INDEX_PATH));
    Analyzer analyzer = new IdentifierAnalyzer();
    IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
    iwc.setOpenMode(OpenMode.CREATE);
    Document doc;
    List<String> identifiers = Arrays.asList("978-3-86680-192-9", "TS 123", "123.abc");
    try (IndexWriter writer = new IndexWriter(dir, iwc)) {
        for (String identifier : identifiers) {
            doc = new Document();
            doc.add(new TextField("identifiers", identifier, Field.Store.YES));
            writer.addDocument(doc);
        }
    }
}

This creates the following tokens (per the filter settings above, each value yields the lowercased original, its word/number parts, and the catenated form):

978-3-86680-192-9, 9783866801929, 978, 3, 86680, 192, 9
ts 123, ts123, ts, 123
123.abc, 123abc, 123, abc
To query the index data above, I used the following code:
public static void doSearch() throws IOException, ParseException {
    Analyzer analyzer = new IdentifierAnalyzer();
    QueryParser parser = new QueryParser("identifiers", analyzer);
    List<String> searches = Arrays.asList("9783", "9783*", "978 3", "978-3", "TS1*", "TS 1*");
    for (String search : searches) {
        Query query = parser.parse(search);
        printHits(query, search);
    }
}

private static void printHits(Query query, String search) throws IOException {
    System.out.println("search term: " + search);
    System.out.println("parsed query: " + query.toString());
    IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(INDEX_PATH)));
    IndexSearcher searcher = new IndexSearcher(reader);
    TopDocs results = searcher.search(query, 100);
    ScoreDoc[] hits = results.scoreDocs;
    System.out.println("hits: " + hits.length);
    for (ScoreDoc hit : hits) {
        System.out.println("");
        System.out.println("  doc id: " + hit.doc + "; score: " + hit.score);
        Document doc = searcher.doc(hit.doc);
        System.out.println("  identifier: " + doc.get("identifiers"));
    }
    System.out.println("-----------------------------------------");
}

This uses the following search terms, all passed to the classic query parser (you could, of course, also build more sophisticated query types via the API):
9783
9783*
978 3
978-3
TS1*
TS 1*

The only query that found no matching documents was the first one:
search term: 9783
parsed query: identifiers:9783
hits: 0

This is not surprising, since it is a partial token without a wildcard. The second query (with the wildcard added) found one document, as expected.
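The difference between those first two searches comes down to term-level matching: a plain term has to equal an indexed token exactly, while a trailing wildcard turns it into a prefix match against the token list. Sketched in plain Java (hypothetical helper, not Lucene API):

```java
public class MatchDemo {

    // A plain term query must equal an indexed token exactly.
    public static boolean termMatch(String indexedToken, String term) {
        return indexedToken.equals(term);
    }

    // A trailing-wildcard (prefix) query only needs the token to
    // start with the given prefix.
    public static boolean prefixMatch(String indexedToken, String prefix) {
        return indexedToken.startsWith(prefix);
    }

    public static void main(String[] args) {
        // "9783" is only the start of the catenated token "9783866801929":
        System.out.println(termMatch("9783866801929", "9783"));   // false
        System.out.println(prefixMatch("9783866801929", "9783")); // true
    }
}
```

So "9783" alone matches no token, while "9783*" matches the catenated token produced by catenateAll.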
The last query I tested, TS 1*, found three matches, but the one we want has the highest score:
search term: TS 1*
parsed query: identifiers:ts identifiers:1*
hits: 3
doc id: 1; score: 1.590861
identifier: TS 123
doc id: 0; score: 1.0
identifier: 978-3-86680-192-9
doc id: 2; score: 1.0
  identifier: 123.abc

https://stackoverflow.com/questions/68115969