Paginate search results
By default, searches return the top 10 matching hits. To page through a larger set of results, you can use the search API's from and size parameters. The from parameter defines the number of hits to skip, defaulting to 0. The size parameter is the maximum number of hits to return. Together, these two parameters define a page of results.
resp = client.search(
    from_=5,
    size=20,
    query={
        "match": {
            "user.id": "kimchy"
        }
    },
)
print(resp)

response = client.search(
  body: {
    from: 5,
    size: 20,
    query: {
      match: {
        'user.id' => 'kimchy'
      }
    }
  }
)
puts response

res, err := es.Search(
  es.Search.WithBody(strings.NewReader(`{
    "from": 5,
    "size": 20,
    "query": {
      "match": {
        "user.id": "kimchy"
      }
    }
  }`)),
  es.Search.WithPretty(),
)
fmt.Println(res, err)

const response = await client.search({
  from: 5,
  size: 20,
  query: {
    match: {
      "user.id": "kimchy",
    },
  },
});
console.log(response);

GET /_search
{
  "from": 5,
  "size": 20,
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}
Avoid using from and size to page too deeply or to request too many results at once. Search requests usually span multiple shards. Each shard must load its requested hits and the hits for any previous pages into memory. For deep pages or large result sets, these operations can significantly increase memory and CPU usage, resulting in degraded performance or node failures.
By default, you cannot use from and size to page through more than 10,000 hits. This limit is a safeguard set by the index.max_result_window index setting. If you need to page through more than 10,000 hits, use the search_after parameter instead.
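If you genuinely need deeper from/size paging on a specific index, the limit can be raised through the index settings. The following is a minimal sketch using the Python client; the index name my-index-000001 and the new value 20000 are illustrative assumptions, and raising the window increases the memory cost of deep pages, so search_after is usually the better fix.

# Minimal sketch: raise the from + size ceiling for one index.
# The index name and the value 20000 are illustrative assumptions.
resp = client.indices.put_settings(
    index="my-index-000001",
    settings={"index.max_result_window": 20000},
)
print(resp)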
Elasticsearch uses Lucene's internal doc IDs as tie-breakers. These internal doc IDs can be completely different across replicas of the same data. When paging search hits, you might occasionally see documents with the same sort values ordered inconsistently.
Search after
You can use the search_after parameter to retrieve the next page of hits using a set of sort values from the previous page.

Using search_after requires multiple search requests with the same query and sort values. The first step is to run an initial request. The following example sorts the results by two fields (date and tie_breaker_id):
resp = client.search( index="twitter", query={ "match": { "title": "elasticsearch" } }, sort=[ { "date": "asc" }, { "tie_breaker_id": "asc" } ], ) print(resp)
response = client.search( index: 'twitter', body: { query: { match: { title: 'elasticsearch' } }, sort: [ { date: 'asc' }, { tie_breaker_id: 'asc' } ] } ) puts response
const response = await client.search({ index: "twitter", query: { match: { title: "elasticsearch", }, }, sort: [ { date: "asc", }, { tie_breaker_id: "asc", }, ], }); console.log(response);
GET twitter/_search { "query": { "match": { "title": "elasticsearch" } }, "sort": [ {"date": "asc"}, {"tie_breaker_id": "asc"} ] }
The search response includes an array of sort values for each hit:
{ "took" : 17, "timed_out" : false, "_shards" : ..., "hits" : { "total" : ..., "max_score" : null, "hits" : [ ... { "_index" : "twitter", "_id" : "654322", "_score" : null, "_source" : ..., "sort" : [ 1463538855, "654322" ] }, { "_index" : "twitter", "_id" : "654323", "_score" : null, "_source" : ..., "sort" : [ 1463538857, "654323" ] } ] } }
To retrieve the next page of results, repeat the request, take the sort values from the last hit, and insert them into the search_after array:
resp = client.search( index="twitter", query={ "match": { "title": "elasticsearch" } }, search_after=[ 1463538857, "654323" ], sort=[ { "date": "asc" }, { "tie_breaker_id": "asc" } ], ) print(resp)
response = client.search( index: 'twitter', body: { query: { match: { title: 'elasticsearch' } }, search_after: [ 1_463_538_857, '654323' ], sort: [ { date: 'asc' }, { tie_breaker_id: 'asc' } ] } ) puts response
const response = await client.search({ index: "twitter", query: { match: { title: "elasticsearch", }, }, search_after: [1463538857, "654323"], sort: [ { date: "asc", }, { tie_breaker_id: "asc", }, ], }); console.log(response);
GET twitter/_search { "query": { "match": { "title": "elasticsearch" } }, "search_after": [1463538857, "654323"], "sort": [ {"date": "asc"}, {"tie_breaker_id": "asc"} ] }
Repeat this process by updating the search_after array each time you retrieve a new page of results. If a refresh occurs between these requests, the order of your results may change, causing inconsistent results across pages. To prevent this, you can create a point in time (PIT) to preserve the current index state over your searches:
resp = client.open_point_in_time(
    index="my-index-000001",
    keep_alive="1m",
)
print(resp)

response = client.open_point_in_time(
  index: 'my-index-000001',
  keep_alive: '1m'
)
puts response

const response = await client.openPointInTime({
  index: "my-index-000001",
  keep_alive: "1m",
});
console.log(response);

POST /my-index-000001/_pit?keep_alive=1m
The API returns a PIT ID:
{ "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==", "_shards": ... }
To get the first page of results, submit a search request with a sort argument. If using a PIT, specify the PIT ID in the pit.id parameter and omit the target data stream or index from the request path.

All PIT search requests add an implicit sort tiebreaker field called _shard_doc, which can also be provided explicitly. If you cannot use a PIT, we recommend that you include a tiebreaker field in your sort. This tiebreaker field should contain a unique value for each document. If you don't include a tiebreaker field, your paged results could miss or duplicate hits.
Search after requests have optimizations that make them faster when the sort order is _shard_doc and total hits are not tracked. If you want to iterate over all documents regardless of the order, this is the most efficient option.
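For illustration, a minimal sketch of that fast path with the Python client: sort only on the _shard_doc tiebreaker and disable total hit tracking. The pit_id variable is an assumption standing in for an ID returned by the open point in time API above.

# Minimal sketch: fastest way to sweep all documents, order irrelevant.
# pit_id is assumed to hold an ID returned by open_point_in_time.
resp = client.search(
    size=10000,
    pit={"id": pit_id, "keep_alive": "1m"},
    sort=["_shard_doc"],       # explicit form of the implicit tiebreaker
    track_total_hits=False,    # skip hit counting for extra speed
)
print(resp)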
If the sort field is a date in some target data streams or indices but a date_nanos field in other targets, use the numeric_type parameter to convert the values to a single resolution and the format parameter to specify a date format for the sort field. Otherwise, Elasticsearch won't interpret the search_after parameter correctly in each request.
resp = client.search(
    size=10000,
    query={
        "match": {
            "user.id": "elkbee"
        }
    },
    pit={
        "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
        "keep_alive": "1m"
    },
    sort=[
        {
            "@timestamp": {
                "order": "asc",
                "format": "strict_date_optional_time_nanos",
                "numeric_type": "date_nanos"
            }
        }
    ],
)
print(resp)

const response = await client.search({
  size: 10000,
  query: {
    match: {
      "user.id": "elkbee",
    },
  },
  pit: {
    id: "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    keep_alive: "1m",
  },
  sort: [
    {
      "@timestamp": {
        order: "asc",
        format: "strict_date_optional_time_nanos",
        numeric_type: "date_nanos",
      },
    },
  ],
});
console.log(response);

GET /_search
{
  "size": 10000,
  "query": {
    "match" : {
      "user.id" : "elkbee"
    }
  },
  "pit": {
    "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    "keep_alive": "1m"
  },
  "sort": [
    {"@timestamp": {"order": "asc", "format": "strict_date_optional_time_nanos", "numeric_type": "date_nanos"}}
  ]
}
The search response includes an array of sort values for each hit. If you used a PIT, the tiebreaker is included as the last sort value for each hit. This tiebreaker, called _shard_doc, is added automatically to every search request that uses a PIT. The _shard_doc value is the combination of the shard index within the PIT and Lucene's internal doc ID; it is unique per document and constant within a PIT. You can also add the tiebreaker explicitly in the search request to customize the order:
resp = client.search(
    size=10000,
    query={
        "match": {
            "user.id": "elkbee"
        }
    },
    pit={
        "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
        "keep_alive": "1m"
    },
    sort=[
        {
            "@timestamp": {
                "order": "asc",
                "format": "strict_date_optional_time_nanos"
            }
        },
        {"_shard_doc": "desc"}
    ],
)
print(resp)

const response = await client.search({
  size: 10000,
  query: {
    match: {
      "user.id": "elkbee",
    },
  },
  pit: {
    id: "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    keep_alive: "1m",
  },
  sort: [
    {
      "@timestamp": {
        order: "asc",
        format: "strict_date_optional_time_nanos",
      },
    },
    {
      _shard_doc: "desc",
    },
  ],
});
console.log(response);

GET /_search
{
  "size": 10000,
  "query": {
    "match" : {
      "user.id" : "elkbee"
    }
  },
  "pit": {
    "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    "keep_alive": "1m"
  },
  "sort": [
    {"@timestamp": {"order": "asc", "format": "strict_date_optional_time_nanos"}},
    {"_shard_doc": "desc"}
  ]
}

{
  "pit_id" : "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
  "took" : 17,
  "timed_out" : false,
  "_shards" : ...,
  "hits" : {
    "total" : ...,
    "max_score" : null,
    "hits" : [
      ...
      {
        "_index" : "my-index-000001",
        "_id" : "FaslK3QBySSL_rrj9zM5",
        "_score" : null,
        "_source" : ...,
        "sort" : [
          "2021-05-20T05:30:04.832Z",
          4294967298
        ]
      }
    ]
  }
}
To get the next page of results, rerun the previous search using the last hit's sort values (including the tiebreaker) as the search_after argument. If using a PIT, use the latest PIT ID in the pit.id parameter. The search's query and sort arguments must remain unchanged. If provided, the from argument must be 0 (the default) or -1.
resp = client.search(
    size=10000,
    query={
        "match": {
            "user.id": "elkbee"
        }
    },
    pit={
        "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
        "keep_alive": "1m"
    },
    sort=[
        {
            "@timestamp": {
                "order": "asc",
                "format": "strict_date_optional_time_nanos"
            }
        }
    ],
    search_after=[
        "2021-05-20T05:30:04.832Z",
        4294967298
    ],
    track_total_hits=False,
)
print(resp)

const response = await client.search({
  size: 10000,
  query: {
    match: {
      "user.id": "elkbee",
    },
  },
  pit: {
    id: "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    keep_alive: "1m",
  },
  sort: [
    {
      "@timestamp": {
        order: "asc",
        format: "strict_date_optional_time_nanos",
      },
    },
  ],
  search_after: ["2021-05-20T05:30:04.832Z", 4294967298],
  track_total_hits: false,
});
console.log(response);

GET /_search
{
  "size": 10000,
  "query": {
    "match" : {
      "user.id" : "elkbee"
    }
  },
  "pit": {
    "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    "keep_alive": "1m"
  },
  "sort": [
    {"@timestamp": {"order": "asc", "format": "strict_date_optional_time_nanos"}}
  ],
  "search_after": [
    "2021-05-20T05:30:04.832Z",
    4294967298
  ],
  "track_total_hits": false
}
You can repeat this process to get additional pages of results. If using a PIT, you can extend its retention period using the keep_alive parameter of each search request.
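Putting the steps together, a minimal sketch of the whole loop with the Python client might look like the following. The index name and page size are illustrative assumptions; the loop reuses the pit_id returned with each response and passes keep_alive on every request to keep the PIT fresh.

# Minimal sketch: page through all hits with a PIT and search_after.
# Index name and page size are illustrative assumptions.
pit = client.open_point_in_time(index="my-index-000001", keep_alive="1m")
pit_id = pit["id"]
search_after = None

while True:
    kwargs = {"search_after": search_after} if search_after else {}
    resp = client.search(
        size=1000,
        pit={"id": pit_id, "keep_alive": "1m"},  # extends the PIT each page
        sort=["_shard_doc"],
        track_total_hits=False,
        **kwargs,
    )
    hits = resp["hits"]["hits"]
    if not hits:
        break
    pit_id = resp["pit_id"]          # always reuse the most recent PIT ID
    search_after = hits[-1]["sort"]  # sort values of the last hit
# When the loop ends, delete the PIT as shown below.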
When you're finished, you should delete your PIT.
resp = client.close_point_in_time(
    id="46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
)
print(resp)

response = client.close_point_in_time(
  body: {
    id: '46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=='
  }
)
puts response

const response = await client.closePointInTime({
  id: "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
});
console.log(response);

DELETE /_pit
{
  "id" : "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=="
}
Scroll search results
We no longer recommend using the scroll API for deep pagination. If you need to preserve the index state while paging through more than 10,000 hits, use the search_after parameter with a point in time (PIT).
While a search request returns a single "page" of results, the scroll API can be used to retrieve large numbers of results (or even all results) from a single search request, in much the same way as you would use a cursor in a traditional database.
Scrolling is not intended for real-time user requests, but rather for processing large amounts of data, for example to reindex the contents of one data stream or index into a new data stream or index with a different configuration.

The results returned from a scroll request reflect the state of the data stream or index at the time the initial search request was made, like a snapshot in time. Subsequent changes to documents (index, update, or delete) will only affect later search requests.
To use scrolling, the initial search request should specify the scroll parameter in the query string, which tells Elasticsearch how long it should keep the "search context" alive (see Keeping the search context alive), for example ?scroll=1m.
resp = client.search( index="my-index-000001", scroll="1m", size=100, query={ "match": { "message": "foo" } }, ) print(resp)
response = client.search( index: 'my-index-000001', scroll: '1m', body: { size: 100, query: { match: { message: 'foo' } } } ) puts response
const response = await client.search({ index: "my-index-000001", scroll: "1m", size: 100, query: { match: { message: "foo", }, }, }); console.log(response);
POST /my-index-000001/_search?scroll=1m { "size": 100, "query": { "match": { "message": "foo" } } }
The result from the above request includes a _scroll_id, which should be passed to the scroll API to retrieve the next batch of results.
resp = client.scroll(
    scroll="1m",
    scroll_id="DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
)
print(resp)

res, err := es.Scroll(
  es.Scroll.WithBody(strings.NewReader(`{
    "scroll": "1m",
    "scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
  }`)),
  es.Scroll.WithPretty(),
)
fmt.Println(res, err)

const response = await client.scroll({
  scroll: "1m",
  scroll_id: "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
});
console.log(response);

POST /_search/scroll
{
  "scroll" : "1m",
  "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
GET or POST can be used, and the URL should not include the index name, since the index is specified in the original search request instead. The scroll parameter tells Elasticsearch to keep the search context open for another 1m, and the scroll_id parameter identifies the search context to continue from.
The size parameter allows you to configure the maximum number of hits returned with each batch of results. Each call to the scroll API returns the next batch of results until there are no more results left to return, that is, until the hits array is empty.
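As a rough sketch of that loop with the Python client (the index name and query are illustrative assumptions):

# Minimal sketch: drain every batch from a scroll until hits is empty.
# Index name and query are illustrative assumptions.
resp = client.search(
    index="my-index-000001",
    scroll="1m",
    size=100,
    query={"match": {"message": "foo"}},
)
scroll_id = resp["_scroll_id"]

while resp["hits"]["hits"]:
    for hit in resp["hits"]["hits"]:
        pass  # process each hit here
    resp = client.scroll(scroll="1m", scroll_id=scroll_id)
    scroll_id = resp["_scroll_id"]  # may change; always use the newest

client.clear_scroll(scroll_id=scroll_id)  # free the context when done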
The initial search request and each subsequent scroll request each return a _scroll_id. While the _scroll_id may change between requests, it doesn't always change; in any case, only the most recently received _scroll_id should be used.

If the request specifies aggregations, only the initial search response will contain the aggregation results.
Scroll requests have optimizations that make them faster when the sort order is _doc. If you want to iterate over all documents regardless of the order, this is the most efficient option:
$params = [
    'body' => [
        'sort' => [
            '_doc',
        ],
    ],
];
$response = $client->search($params);

resp = client.search(
    scroll="1m",
    sort=[
        "_doc"
    ],
)
print(resp)

response = client.search(
  scroll: '1m',
  body: {
    sort: [
      '_doc'
    ]
  }
)
puts response

res, err := es.Search(
  es.Search.WithBody(strings.NewReader(`{
    "sort": [
      "_doc"
    ]
  }`)),
  es.Search.WithScroll(time.Duration(60000000000)),
  es.Search.WithPretty(),
)
fmt.Println(res, err)

const response = await client.search({
  scroll: "1m",
  sort: ["_doc"],
});
console.log(response);

GET /_search?scroll=1m
{
  "sort": [
    "_doc"
  ]
}
Keeping the search context alive
A scroll returns all the documents that matched the search at the time of the initial search request. It ignores any subsequent changes to those documents. The scroll_id identifies a search context, which keeps track of everything Elasticsearch needs to return the correct documents. The search context is created by the initial request and kept alive by subsequent requests.
The scroll parameter (passed to the search request and to every scroll request) tells Elasticsearch how long it should keep the search context alive. Its value (e.g. 1m, see Time units) does not need to be long enough to process all the data; it just needs to be long enough to process the previous batch of results. Each scroll request (with the scroll parameter) sets a new expiry time. If a scroll request doesn't pass in the scroll parameter, the search context is freed as part of that scroll request.
Normally, the background merge process optimizes the index by merging together smaller segments to create new, bigger segments. Once the smaller segments are no longer needed, they are deleted. This process continues during scrolling, but an open search context prevents the old segments from being deleted, since they are still in use.

Keeping older segments alive means that more disk space and file handles are needed. Ensure that you have configured your nodes to have ample free file handles. See File descriptors.

Additionally, if a segment contains deleted or updated documents, the search context must keep track of whether each document in the segment was live at the time of the initial search request. Ensure that your nodes have sufficient heap space if you have many open scrolls on an index that is subject to ongoing deletes or updates.
To prevent issues caused by having too many scrolls open, users are not allowed to open scrolls past a certain limit. By default, the maximum number of open scrolls is 500. This limit can be updated with the search.max_open_scroll_context cluster setting.
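For example, a minimal sketch of updating that limit with the Python client's cluster settings API; the new value of 1024 is an illustrative assumption:

# Minimal sketch: raise the open-scroll limit cluster-wide.
# The value 1024 is an illustrative assumption.
resp = client.cluster.put_settings(
    persistent={"search.max_open_scroll_context": 1024},
)
print(resp)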
You can check how many search contexts are open with the nodes stats API:
$params = [
    'metric' => 'indices',
    'index_metric' => 'search',
];
$response = $client->nodes()->stats($params);

resp = client.nodes.stats(
    metric="indices",
    index_metric="search",
)
print(resp)

response = client.nodes.stats(
  metric: 'indices',
  index_metric: 'search'
)
puts response

res, err := es.Nodes.Stats(
  es.Nodes.Stats.WithMetric([]string{"indices"}...),
  es.Nodes.Stats.WithIndexMetric([]string{"search"}...),
)
fmt.Println(res, err)

const response = await client.nodes.stats({
  metric: "indices",
  index_metric: "search",
});
console.log(response);

GET /_nodes/stats/indices/search
Clear scroll
Search contexts are automatically removed when the scroll timeout is exceeded. However, as discussed in the previous section, keeping scrolls open has a cost, so scrolls should be explicitly cleared as soon as they are no longer being used, with the clear scroll API:
resp = client.clear_scroll(
    scroll_id="DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
)
print(resp)

response = client.clear_scroll(
  body: {
    scroll_id: 'DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=='
  }
)
puts response

res, err := es.ClearScroll(
  es.ClearScroll.WithBody(strings.NewReader(`{
    "scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
  }`)),
)
fmt.Println(res, err)

const response = await client.clearScroll({
  scroll_id: "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
});
console.log(response);

DELETE /_search/scroll
{
  "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
Multiple scroll IDs can be passed as an array:
resp = client.clear_scroll(
    scroll_id=[
        "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
        "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB"
    ],
)
print(resp)

response = client.clear_scroll(
  body: {
    scroll_id: [
      'DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==',
      'DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB'
    ]
  }
)
puts response

res, err := es.ClearScroll(
  es.ClearScroll.WithBody(strings.NewReader(`{
    "scroll_id": [
      "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
      "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB"
    ]
  }`)),
)
fmt.Println(res, err)

const response = await client.clearScroll({
  scroll_id: [
    "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
    "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB",
  ],
});
console.log(response);

DELETE /_search/scroll
{
  "scroll_id" : [
    "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
    "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB"
  ]
}
All search contexts can be cleared with the _all parameter:
$params = [
    'scroll_id' => '_all',
];
$response = $client->clearScroll($params);

resp = client.clear_scroll(
    scroll_id="_all",
)
print(resp)

response = client.clear_scroll(
  scroll_id: '_all'
)
puts response

res, err := es.ClearScroll(
  es.ClearScroll.WithScrollID("_all"),
)
fmt.Println(res, err)

const response = await client.clearScroll({
  scroll_id: "_all",
});
console.log(response);

DELETE /_search/scroll/_all
The scroll_id can also be passed as a query string parameter or in the request body. Multiple scroll IDs can be passed as comma-separated values:
$params = [
    'scroll_id' => 'DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB',
];
$response = $client->clearScroll($params);

resp = client.clear_scroll(
    scroll_id="DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB",
)
print(resp)

response = client.clear_scroll(
  scroll_id: 'DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB'
)
puts response

res, err := es.ClearScroll(
  es.ClearScroll.WithScrollID("DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==", "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB"),
)
fmt.Println(res, err)

const response = await client.clearScroll({
  scroll_id:
    "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB",
});
console.log(response);

DELETE /_search/scroll/DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB
Sliced scroll
When paging through a large number of documents, it can be helpful to split the search into multiple slices to consume them independently:
resp = client.search( index="my-index-000001", scroll="1m", slice={ "id": 0, "max": 2 }, query={ "match": { "message": "foo" } }, ) print(resp) resp1 = client.search( index="my-index-000001", scroll="1m", slice={ "id": 1, "max": 2 }, query={ "match": { "message": "foo" } }, ) print(resp1)
response = client.search( index: 'my-index-000001', scroll: '1m', body: { slice: { id: 0, max: 2 }, query: { match: { message: 'foo' } } } ) puts response response = client.search( index: 'my-index-000001', scroll: '1m', body: { slice: { id: 1, max: 2 }, query: { match: { message: 'foo' } } } ) puts response
const response = await client.search({ index: "my-index-000001", scroll: "1m", slice: { id: 0, max: 2, }, query: { match: { message: "foo", }, }, }); console.log(response); const response1 = await client.search({ index: "my-index-000001", scroll: "1m", slice: { id: 1, max: 2, }, query: { match: { message: "foo", }, }, }); console.log(response1);
GET /my-index-000001/_search?scroll=1m { "slice": { "id": 0, "max": 2 }, "query": { "match": { "message": "foo" } } } GET /my-index-000001/_search?scroll=1m { "slice": { "id": 1, "max": 2 }, "query": { "match": { "message": "foo" } } }
The result of the first request returns documents belonging to the first slice (id: 0), and the result of the second request returns documents belonging to the second slice. Because the maximum number of slices is set to 2, the union of the results of the two requests is equivalent to the results of a scroll query without slicing. By default, the splitting is done first on the shards, then locally on each shard using the _id field. The local splitting follows the formula slice(doc) = floorMod(hashCode(doc._id), max).
Each scroll is independent and can be processed in parallel like any scroll request.
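For instance, a minimal sketch of consuming two slices concurrently with Python threads; the index name, query, and slice count are illustrative assumptions:

# Minimal sketch: drain two slices of the same search in parallel.
# Index name, query, and slice count are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

def drain_slice(slice_id, max_slices):
    resp = client.search(
        index="my-index-000001",
        scroll="1m",
        slice={"id": slice_id, "max": max_slices},
        query={"match": {"message": "foo"}},
    )
    count = 0
    while resp["hits"]["hits"]:
        count += len(resp["hits"]["hits"])
        resp = client.scroll(scroll="1m", scroll_id=resp["_scroll_id"])
    return count

with ThreadPoolExecutor(max_workers=2) as pool:
    totals = list(pool.map(lambda i: drain_slice(i, 2), range(2)))
print(totals)  # the two counts together cover every matching document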
If the number of slices is bigger than the number of shards, the slice filter is very slow on the first calls: it has a complexity of O(N) and a memory cost equal to N bits per slice, where N is the total number of documents in the shard. After a few calls, the filter should be cached and subsequent calls should be faster, but you should limit the number of sliced queries you perform in parallel to avoid the memory explosion.

The point-in-time API supports a more efficient partitioning strategy and does not suffer from this problem. When possible, it's recommended to use a point-in-time search with slicing instead of a scroll.
Another way to avoid this high cost is to use the doc_values of another field to do the slicing. The field must have the following properties:
- The field is numeric.
- doc_values are enabled on that field.
- Every document should contain a single value. If a document has multiple values for the specified field, the first value is used.
- The value of each document should be set once when the document is created and never updated. This ensures that each slice gets deterministic results.
- The cardinality of the field should be high. This ensures that each slice gets approximately the same number of documents.
resp = client.search( index="my-index-000001", scroll="1m", slice={ "field": "@timestamp", "id": 0, "max": 10 }, query={ "match": { "message": "foo" } }, ) print(resp)
response = client.search( index: 'my-index-000001', scroll: '1m', body: { slice: { field: '@timestamp', id: 0, max: 10 }, query: { match: { message: 'foo' } } } ) puts response
const response = await client.search({ index: "my-index-000001", scroll: "1m", slice: { field: "@timestamp", id: 0, max: 10, }, query: { match: { message: "foo", }, }, }); console.log(response);
GET /my-index-000001/_search?scroll=1m { "slice": { "field": "@timestamp", "id": 0, "max": 10 }, "query": { "match": { "message": "foo" } } }
For append-only time-based indices, the timestamp field can be used safely.