Point in time API
By default, a search request executes against the most recent visible data of the target indices, which is called a point in time. Elasticsearch PIT (point in time) is a lightweight view into the state of the data as it existed when initiated. In some cases, it is preferable to perform multiple search requests using the same point in time. For example, if refreshes happen between search_after requests, then the results of those requests might not be consistent, as changes happening between searches are only visible to the more recent point in time.
Prerequisites
Examples
A point in time must be opened explicitly before being used in search requests. The keep_alive parameter tells Elasticsearch how long it should keep the point in time alive, e.g. ?keep_alive=5m.
resp = client.open_point_in_time(
    index="my-index-000001",
    keep_alive="1m",
)
print(resp)

response = client.open_point_in_time(
  index: 'my-index-000001',
  keep_alive: '1m'
)
puts response

const response = await client.openPointInTime({
  index: "my-index-000001",
  keep_alive: "1m",
});
console.log(response);

POST /my-index-000001/_pit?keep_alive=1m
The result from the above request includes an id, which should be passed to the id of the pit parameter of a search request.
resp = client.search(
    size=100,
    query={
        "match": {
            "title": "elasticsearch"
        }
    },
    pit={
        "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
        "keep_alive": "1m"
    },
)
print(resp)

const response = await client.search({
  size: 100,
  query: {
    match: {
      title: "elasticsearch",
    },
  },
  pit: {
    id: "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    keep_alive: "1m",
  },
});
console.log(response);

POST /_search
{
  "size": 100,
  "query": {
    "match" : {
      "title" : "elasticsearch"
    }
  },
  "pit": {
    "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    "keep_alive": "1m"
  }
}
A search request with the pit parameter must not specify index, routing, or preference, as these parameters are copied from the point in time.
Just like regular searches, you can use from and size to page through the search results.
The open point in time request and each subsequent search request can return different ids; thus, always use the most recently received id for the next search request.
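The rule above amounts to a simple loop. The sketch below illustrates it against a hypothetical StubClient (a stand-in for a real Elasticsearch client, used so the snippet runs without a cluster); the important part is that each response's PIT id replaces the one used so far.

```python
# Minimal sketch of PIT-based paging: always carry the most recently
# returned PIT id into the next search request. StubClient is a
# hypothetical stand-in for a real Elasticsearch client.

class StubClient:
    def __init__(self):
        self._counter = 0

    def open_point_in_time(self, index, keep_alive):
        self._counter += 1
        return {"id": f"pit-{self._counter}"}

    def search(self, pit_id, keep_alive, search_after=None):
        # A real response may carry a refreshed PIT id; simulate that here.
        self._counter += 1
        hits = [] if search_after else [{"sort": [42]}]
        return {"pit_id": f"pit-{self._counter}", "hits": hits}

client = StubClient()
pit_id = client.open_point_in_time(index="my-index-000001", keep_alive="1m")["id"]
search_after = None
while True:
    resp = client.search(pit_id=pit_id, keep_alive="1m", search_after=search_after)
    pit_id = resp["pit_id"]  # always adopt the latest id for the next request
    if not resp["hits"]:
        break
    search_after = resp["hits"][-1]["sort"]
print(pit_id)  # the most recently received id
```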
In addition to the keep_alive parameter, the allow_partial_search_results parameter can also be defined. This parameter determines whether the point in time (PIT) should tolerate unavailable shards or shard failures when it is initially created. If set to true, the PIT will be created with the available shards, along with a reference to any missing ones. If set to false, the operation will fail if any shard is unavailable. The default value is false.
The PIT response includes a summary of the total number of shards, as well as the number of shards that were successful when the PIT was created.
resp = client.open_point_in_time(
    index="my-index-000001",
    keep_alive="1m",
    allow_partial_search_results=True,
)
print(resp)

const response = await client.openPointInTime({
  index: "my-index-000001",
  keep_alive: "1m",
  allow_partial_search_results: "true",
});
console.log(response);

POST /my-index-000001/_pit?keep_alive=1m&allow_partial_search_results=true

{
  "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=",
  "_shards": {
    "total": 10,
    "successful": 10,
    "skipped": 0,
    "failed": 0
  }
}
When a PIT that contains shard failures is used in a search request, the missing shards are always reported in the search response as a NoShardAvailableActionException exception. To get rid of these exceptions, a new PIT needs to be created so that the shards missing from the previous PIT can be handled, assuming they become available in the meantime.
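The allow_partial_search_results behavior described above can be modeled with a small decision function. This is a pure-Python illustration of the documented semantics, not Elasticsearch code; the function and names are hypothetical.

```python
# Illustrative model (not real Elasticsearch internals) of how PIT
# creation treats unavailable shards, depending on
# allow_partial_search_results.

def open_pit(shards, allow_partial_search_results=False):
    """shards: dict mapping shard id -> True if the shard is available."""
    available = [s for s, ok in shards.items() if ok]
    missing = [s for s, ok in shards.items() if not ok]
    if missing and not allow_partial_search_results:
        # Default behavior: fail if any shard is unavailable.
        raise RuntimeError(f"shards unavailable: {missing}")
    # With allow_partial_search_results=true, the PIT is created from the
    # available shards and keeps a reference to the missing ones, which
    # later searches report as failures.
    return {"shards": available, "missing": missing}

pit = open_pit({0: True, 1: False}, allow_partial_search_results=True)
print(pit)  # {'shards': [0], 'missing': [1]}
```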
Keeping points in time alive
The keep_alive parameter, which is passed to an open point in time request and to each search request, extends the time to live of the corresponding point in time. The value (e.g. 1m, see time units) does not need to be long enough to process all the data; it just needs to be long enough for the next request.
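In other words, each request that carries keep_alive pushes the expiry forward from the time of that request. A pure-Python sketch of this extension rule (an illustration of the documented behavior, not Elasticsearch internals):

```python
# Each request that passes keep_alive resets the PIT's expiry relative
# to "now", so a short keep_alive suffices as long as requests keep
# arriving before the previous window runs out.

class PointInTime:
    def __init__(self, now, keep_alive):
        self.expiry = now + keep_alive

    def touch(self, now, keep_alive):
        """Called for each search request that references this PIT."""
        self.expiry = now + keep_alive

    def is_expired(self, now):
        return now >= self.expiry

pit = PointInTime(now=0, keep_alive=60)  # opened with keep_alive=1m
pit.touch(now=50, keep_alive=60)         # a search at t=50 extends it
print(pit.is_expired(now=70))            # False: expiry moved to t=110
```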
Normally, the background merge process optimizes the index by merging together smaller segments to create new, bigger segments. Once the smaller segments are no longer needed, they are deleted. However, open points in time prevent the old segments from being deleted, since they are still in use.
Keeping older segments alive means that more disk space and file handles are needed. Ensure that you have configured your nodes to have ample free file handles. See File descriptors.
Additionally, if a segment contains deleted or updated documents, then the point in time must keep track of whether each document in the segment was live at the time of the initial search request. Ensure that your nodes have sufficient heap space if you have many open points in time on an index that is subject to ongoing deletes or updates. Note that a point in time does not prevent its associated indices from being deleted.
You can check how many points in time (i.e., search contexts) are open with the nodes stats API:
$params = [
    'metric' => 'indices',
    'index_metric' => 'search',
];
$response = $client->nodes()->stats($params);

resp = client.nodes.stats(
    metric="indices",
    index_metric="search",
)
print(resp)

response = client.nodes.stats(
  metric: 'indices',
  index_metric: 'search'
)
puts response

res, err := es.Nodes.Stats(
	es.Nodes.Stats.WithMetric([]string{"indices"}...),
	es.Nodes.Stats.WithIndexMetric([]string{"search"}...),
)
fmt.Println(res, err)

const response = await client.nodes.stats({
  metric: "indices",
  index_metric: "search",
});
console.log(response);

GET /_nodes/stats/indices/search
Close point in time API
Points in time are automatically closed when their keep_alive has elapsed. However, as discussed in the previous section, keeping points in time alive has a cost. Points in time should be closed as soon as they are no longer used in search requests.
resp = client.close_point_in_time(
    id="46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
)
print(resp)

response = client.close_point_in_time(
  body: {
    id: '46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=='
  }
)
puts response

const response = await client.closePointInTime({
  id: "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
});
console.log(response);

DELETE /_pit
{
  "id" : "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=="
}
The API returns a response indicating whether the close succeeded and the number of search contexts that were freed.
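Because a point in time keeps resources alive until it is closed or expires, a common safeguard is to close it in a finally block so it is released even when a search fails. The sketch below uses a hypothetical StubClient (a stand-in for a real Elasticsearch client, so the snippet runs standalone); with a real client, the calls would go over the wire.

```python
# Sketch: guarantee the PIT is closed even when a search raises.
# StubClient is a hypothetical stand-in for a real Elasticsearch client.

class StubClient:
    def __init__(self):
        self.closed = []

    def open_point_in_time(self, index, keep_alive):
        return {"id": "pit-1"}

    def search(self, pit_id):
        raise RuntimeError("simulated search failure")

    def close_point_in_time(self, id):
        self.closed.append(id)
        return {"succeeded": True}

client = StubClient()
pit_id = client.open_point_in_time(index="my-index-000001", keep_alive="1m")["id"]
try:
    client.search(pit_id=pit_id)
except RuntimeError:
    pass  # handle or log the failure
finally:
    client.close_point_in_time(id=pit_id)  # always release the search context

print(client.closed)  # ['pit-1']
```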
Search slicing
When paging through a large number of documents, it can be helpful to split the search into multiple slices to consume them independently:
resp = client.search(
    slice={
        "id": 0,
        "max": 2
    },
    query={
        "match": {
            "message": "foo"
        }
    },
    pit={
        "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=="
    },
)
print(resp)

resp1 = client.search(
    slice={
        "id": 1,
        "max": 2
    },
    pit={
        "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=="
    },
    query={
        "match": {
            "message": "foo"
        }
    },
)
print(resp1)

const response = await client.search({
  slice: {
    id: 0,
    max: 2,
  },
  query: {
    match: {
      message: "foo",
    },
  },
  pit: {
    id: "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
  },
});
console.log(response);

const response1 = await client.search({
  slice: {
    id: 1,
    max: 2,
  },
  pit: {
    id: "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
  },
  query: {
    match: {
      message: "foo",
    },
  },
});
console.log(response1);

GET /_search
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "query": {
    "match": {
      "message": "foo"
    }
  },
  "pit": {
    "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=="
  }
}

GET /_search
{
  "slice": {
    "id": 1,
    "max": 2
  },
  "pit": {
    "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=="
  },
  "query": {
    "match": {
      "message": "foo"
    }
  }
}
The result from the first request returns documents belonging to the first slice (id: 0), and the result from the second request returns documents in the second slice. Since the maximum number of slices is set to 2, the union of the results of the two requests is equivalent to the results of a point-in-time search without slicing. By default, the splitting is done first on the shards, then locally on each shard. The local splitting partitions the shard into contiguous ranges based on Lucene document IDs.
For instance, if the number of shards is equal to 2 and the user requested 4 slices, then slices 0 and 2 are assigned to the first shard and slices 1 and 3 are assigned to the second shard.
All slices should use the same point-in-time ID. If different PIT IDs are used, slices can overlap and miss documents. This is because the splitting criterion is based on Lucene document IDs, which are not stable across changes to the index.
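The shard assignment in the example above is consistent with a simple slice_id % number_of_shards rule. The following pure-Python sketch only illustrates that mapping for the documented example; it is not Elasticsearch code, and the function name is hypothetical.

```python
# Illustrates how slice ids can map onto shards: slice i goes to shard
# i % num_shards, matching the documented example (4 slices, 2 shards).

def assign_slices(num_slices, num_shards):
    assignment = {}
    for slice_id in range(num_slices):
        assignment.setdefault(slice_id % num_shards, []).append(slice_id)
    return assignment

print(assign_slices(num_slices=4, num_shards=2))
# {0: [0, 2], 1: [1, 3]}
```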