Character group tokenizer
The char_group tokenizer breaks text into terms whenever it encounters a character from a defined set. It is mostly useful for cases where simple custom tokenization is needed and the overhead of the pattern tokenizer is not acceptable.
Configuration
The char_group tokenizer accepts the following parameters:

tokenize_on_chars
    A list of characters to tokenize the string on. Whenever a character from this list is encountered, a new token is started. It accepts either single characters, such as -, or character groups: whitespace, letter, digit, punctuation, symbol.

max_token_length
    The maximum token length. If a token exceeding this length is seen, it is split at max_token_length intervals. Defaults to 255.
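As a minimal sketch of how the two parameters combine, the following Python-client request creates an index with a custom analyzer built on char_group. The index name my_index, the tokenizer and analyzer names, and the limit of 20 are illustrative assumptions, not part of the reference example.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Create an index whose custom analyzer tokenizes on whitespace, "-" and "\n",
# and splits any token longer than 20 characters at 20-character intervals.
resp = client.indices.create(
    index="my_index",  # hypothetical index name
    settings={
        "analysis": {
            "tokenizer": {
                "my_chargroup_tokenizer": {  # hypothetical tokenizer name
                    "type": "char_group",
                    "tokenize_on_chars": ["whitespace", "-", "\n"],
                    "max_token_length": 20,
                }
            },
            "analyzer": {
                "my_chargroup_analyzer": {  # hypothetical analyzer name
                    "type": "custom",
                    "tokenizer": "my_chargroup_tokenizer",
                }
            },
        }
    },
)
print(resp)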
Example output
Python:

resp = client.indices.analyze(
    tokenizer={
        "type": "char_group",
        "tokenize_on_chars": ["whitespace", "-", "\n"],
    },
    text="The QUICK brown-fox",
)
print(resp)

Ruby:

response = client.indices.analyze(
  body: {
    tokenizer: {
      type: 'char_group',
      tokenize_on_chars: ['whitespace', '-', "\n"]
    },
    text: 'The QUICK brown-fox'
  }
)
puts response

JavaScript:

const response = await client.indices.analyze({
  tokenizer: {
    type: "char_group",
    tokenize_on_chars: ["whitespace", "-", "\n"],
  },
  text: "The QUICK brown-fox",
});
console.log(response);

Console:

POST _analyze
{
  "tokenizer": {
    "type": "char_group",
    "tokenize_on_chars": ["whitespace", "-", "\n"]
  },
  "text": "The QUICK brown-fox"
}
returns
{ "tokens": [ { "token": "The", "start_offset": 0, "end_offset": 3, "type": "word", "position": 0 }, { "token": "QUICK", "start_offset": 4, "end_offset": 9, "type": "word", "position": 1 }, { "token": "brown", "start_offset": 10, "end_offset": 15, "type": "word", "position": 2 }, { "token": "fox", "start_offset": 16, "end_offset": 19, "type": "word", "position": 3 } ] }