Pattern capture token filter


The pattern_capture token filter, unlike the pattern tokenizer, emits a token for every capture group in the regular expression. Patterns are not anchored to the start and end of the string, so each pattern can match multiple times, and matches are allowed to overlap.

Beware of pathological regular expressions

The pattern capture token filter uses Java Regular Expressions.

A badly written regular expression could run very slowly or even throw a StackOverflowError and cause the node it is running on to exit suddenly.

Read more about pathological regular expressions and how to avoid them.

For example, a pattern like

"(([a-z]+)(\d*))"

when matched against

"abc123def456"

would produce the tokens: [ abc123, abc, 123, def456, def, 456 ]
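The per-group emission can be sketched in plain Python with the standard re module. This is a sketch of the observable behaviour, not the filter's actual implementation; this particular pattern happens to be valid in both Java and Python regex syntax:

```python
import re

# Sketch: emulate how the pattern_capture filter emits one token per
# capture group, for every (non-anchored) match of the pattern.
pattern = re.compile(r"(([a-z]+)(\d*))")

def capture_tokens(text):
    tokens = []
    for match in pattern.finditer(text):
        # Each capture group 1..n becomes its own token; empty
        # captures (e.g. a \d* that matched nothing) are skipped.
        tokens.extend(g for g in match.groups() if g)
    return tokens

print(capture_tokens("abc123def456"))
# ['abc123', 'abc', '123', 'def456', 'def', '456']
```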

If preserve_original is set to true (the default) then it would also emit the original token: abc123def456

This is particularly useful for indexing text like camelCase code, eg stripHTML, where a user may search for "strip html" or "striphtml":

resp = client.indices.create(
    index="test",
    settings={
        "analysis": {
            "filter": {
                "code": {
                    "type": "pattern_capture",
                    "preserve_original": True,
                    "patterns": [
                        "(\\p{Ll}+|\\p{Lu}\\p{Ll}+|\\p{Lu}+)",
                        "(\\d+)"
                    ]
                }
            },
            "analyzer": {
                "code": {
                    "tokenizer": "pattern",
                    "filter": [
                        "code",
                        "lowercase"
                    ]
                }
            }
        }
    },
)
print(resp)
response = client.indices.create(
  index: 'test',
  body: {
    settings: {
      analysis: {
        filter: {
          code: {
            type: 'pattern_capture',
            preserve_original: true,
            patterns: [
              '(\\p{Ll}+|\\p{Lu}\\p{Ll}+|\\p{Lu}+)',
              '(\\d+)'
            ]
          }
        },
        analyzer: {
          code: {
            tokenizer: 'pattern',
            filter: [
              'code',
              'lowercase'
            ]
          }
        }
      }
    }
  }
)
puts response
const response = await client.indices.create({
  index: "test",
  settings: {
    analysis: {
      filter: {
        code: {
          type: "pattern_capture",
          preserve_original: true,
          patterns: ["(\\p{Ll}+|\\p{Lu}\\p{Ll}+|\\p{Lu}+)", "(\\d+)"],
        },
      },
      analyzer: {
        code: {
          tokenizer: "pattern",
          filter: ["code", "lowercase"],
        },
      },
    },
  },
});
console.log(response);
PUT test
{
   "settings" : {
      "analysis" : {
         "filter" : {
            "code" : {
               "type" : "pattern_capture",
               "preserve_original" : true,
               "patterns" : [
                  "(\\p{Ll}+|\\p{Lu}\\p{Ll}+|\\p{Lu}+)",
                  "(\\d+)"
               ]
            }
         },
         "analyzer" : {
            "code" : {
               "tokenizer" : "pattern",
               "filter" : [ "code", "lowercase" ]
            }
         }
      }
   }
}
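To check the analyzer's output directly, you can run sample text through the standard _analyze API, for example:

```console
GET test/_analyze
{
  "analyzer": "code",
  "text": "import static org.apache.commons.lang.StringEscapeUtils.escapeHtml"
}
```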

When used to analyze the text

import static org.apache.commons.lang.StringEscapeUtils.escapeHtml

this emits the tokens: [ import, static, org, apache, commons, lang, stringescapeutils, string, escape, utils, escapehtml, escape, html ]
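The multi-pattern capture plus lowercasing can be sketched as follows. Note the hedge: Java's \p{Ll} and \p{Lu} Unicode categories are not supported by Python's re, so [a-z]/[A-Z] stand in for them here (ASCII input assumed):

```python
import re

# Sketch of the multi-pattern behaviour: each pattern is applied to the
# token independently, so captures from different patterns may overlap.
patterns = [
    re.compile(r"([a-z]+|[A-Z][a-z]+|[A-Z]+)"),  # stands in for \p{Ll}+|\p{Lu}\p{Ll}+|\p{Lu}+
    re.compile(r"(\d+)"),
]

def camel_tokens(token, preserve_original=True):
    # preserve_original: true also emits the unmodified token first
    tokens = [token] if preserve_original else []
    for pattern in patterns:
        for match in pattern.finditer(token):
            tokens.extend(g for g in match.groups() if g)
    return tokens

# Applying the subsequent `lowercase` filter:
print([t.lower() for t in camel_tokens("StringEscapeUtils")])
# ['stringescapeutils', 'string', 'escape', 'utils']
```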

Another example is analyzing email addresses:

resp = client.indices.create(
    index="test",
    settings={
        "analysis": {
            "filter": {
                "email": {
                    "type": "pattern_capture",
                    "preserve_original": True,
                    "patterns": [
                        "([^@]+)",
                        "(\\p{L}+)",
                        "(\\d+)",
                        "@(.+)"
                    ]
                }
            },
            "analyzer": {
                "email": {
                    "tokenizer": "uax_url_email",
                    "filter": [
                        "email",
                        "lowercase",
                        "unique"
                    ]
                }
            }
        }
    },
)
print(resp)
response = client.indices.create(
  index: 'test',
  body: {
    settings: {
      analysis: {
        filter: {
          email: {
            type: 'pattern_capture',
            preserve_original: true,
            patterns: [
              '([^@]+)',
              '(\\p{L}+)',
              '(\\d+)',
              '@(.+)'
            ]
          }
        },
        analyzer: {
          email: {
            tokenizer: 'uax_url_email',
            filter: [
              'email',
              'lowercase',
              'unique'
            ]
          }
        }
      }
    }
  }
)
puts response
const response = await client.indices.create({
  index: "test",
  settings: {
    analysis: {
      filter: {
        email: {
          type: "pattern_capture",
          preserve_original: true,
          patterns: ["([^@]+)", "(\\p{L}+)", "(\\d+)", "@(.+)"],
        },
      },
      analyzer: {
        email: {
          tokenizer: "uax_url_email",
          filter: ["email", "lowercase", "unique"],
        },
      },
    },
  },
});
console.log(response);
PUT test
{
   "settings" : {
      "analysis" : {
         "filter" : {
            "email" : {
               "type" : "pattern_capture",
               "preserve_original" : true,
               "patterns" : [
                  "([^@]+)",
                  "(\\p{L}+)",
                  "(\\d+)",
                  "@(.+)"
               ]
            }
         },
         "analyzer" : {
            "email" : {
               "tokenizer" : "uax_url_email",
               "filter" : [ "email", "lowercase",  "unique" ]
            }
         }
      }
   }
}

When the above analyzer is used on an email address like

john-smith_123@foo-bar.com

it would produce the following tokens:

john-smith_123@foo-bar.com, john-smith_123,
john, smith, 123, foo-bar.com, foo, bar, com

Multiple patterns are required to allow overlapping captures, but also means that patterns are less dense and easier to understand.
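Taking the email address implied by the token list above (john-smith_123@foo-bar.com), the per-pattern application can be sketched like this. Again a sketch under stated assumptions: [a-zA-Z] stands in for Java's \p{L}, and the token order Elasticsearch actually emits (sorted by position and offset) may differ from this naive collection order:

```python
import re

# One regex per pattern in the filter config; each is applied to the
# whole token independently, which is what allows overlapping captures.
patterns = [r"([^@]+)", r"([a-zA-Z]+)", r"(\d+)", r"@(.+)"]

def email_tokens(token):
    tokens = [token]  # preserve_original: true
    for p in patterns:
        for match in re.finditer(p, token):
            tokens.extend(g for g in match.groups() if g)
    # emulate the `lowercase` and `unique` filters from the analyzer
    seen, out = set(), []
    for t in (t.lower() for t in tokens):
        if t not in seen:
            seen.add(t)
            out.append(t)
    return out

print(email_tokens("john-smith_123@foo-bar.com"))
```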

Note: All tokens are emitted in the same position, and with the same character offsets. This means, for example, that a match query for john-smith_123@foo-bar.com that uses this analyzer will return documents containing any of these tokens, even when using the and operator. Also, when combined with highlighting, the whole original token will be highlighted, not just the matching subset. For instance, querying the above email address for "smith" would highlight:

  <em>john-smith_123@foo-bar.com</em>

and not:

  john-<em>smith</em>_123@foo-bar.com