Classic token filter


Performs optional post-processing of terms generated by the classic tokenizer.

This filter removes the English possessive ('s) from the end of words and removes dots from acronyms. It uses Lucene's ClassicFilter.

Example


The following analyze API request demonstrates how the classic token filter works.

Python:

resp = client.indices.analyze(
    tokenizer="classic",
    filter=[
        "classic"
    ],
    text="The 2 Q.U.I.C.K. Brown-Foxes jumped over the lazy dog's bone.",
)
print(resp)

Ruby:

response = client.indices.analyze(
  body: {
    tokenizer: 'classic',
    filter: [
      'classic'
    ],
    text: "The 2 Q.U.I.C.K. Brown-Foxes jumped over the lazy dog's bone."
  }
)
puts response

JavaScript:

const response = await client.indices.analyze({
  tokenizer: "classic",
  filter: ["classic"],
  text: "The 2 Q.U.I.C.K. Brown-Foxes jumped over the lazy dog's bone.",
});
console.log(response);

Console:

GET /_analyze
{
  "tokenizer" : "classic",
  "filter" : ["classic"],
  "text" : "The 2 Q.U.I.C.K. Brown-Foxes jumped over the lazy dog's bone."
}

The filter produces the following tokens:

[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog, bone ]

Add to an analyzer


The following create index API request uses the classic token filter to configure a new custom analyzer.

Python:

resp = client.indices.create(
    index="classic_example",
    settings={
        "analysis": {
            "analyzer": {
                "classic_analyzer": {
                    "tokenizer": "classic",
                    "filter": [
                        "classic"
                    ]
                }
            }
        }
    },
)
print(resp)

Ruby:

response = client.indices.create(
  index: 'classic_example',
  body: {
    settings: {
      analysis: {
        analyzer: {
          classic_analyzer: {
            tokenizer: 'classic',
            filter: [
              'classic'
            ]
          }
        }
      }
    }
  }
)
puts response

JavaScript:

const response = await client.indices.create({
  index: "classic_example",
  settings: {
    analysis: {
      analyzer: {
        classic_analyzer: {
          tokenizer: "classic",
          filter: ["classic"],
        },
      },
    },
  },
});
console.log(response);

Console:

PUT /classic_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "classic_analyzer": {
          "tokenizer": "classic",
          "filter": [ "classic" ]
        }
      }
    }
  }
}
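
Once the index exists, you can verify the configuration by running the analyze API against it and naming the custom analyzer. The following is a minimal sketch using the Python client, reusing the sample sentence from the earlier example; the output should match the token list shown above.

# Minimal verification sketch (assumes the same Python client `client`
# used in the examples above is connected to the cluster).
resp = client.indices.analyze(
    index="classic_example",
    analyzer="classic_analyzer",
    text="The 2 Q.U.I.C.K. Brown-Foxes jumped over the lazy dog's bone.",
)
print(resp)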