Troubleshoot shards capacity health issues
Elasticsearch limits the maximum number of shards that can be held on each node using the cluster.max_shards_per_node and cluster.max_shards_per_node.frozen settings. The cluster's current shards capacity is available in the shards capacity section of the health API.
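A quick way to see the values currently in effect for both settings (including the defaults) is the cluster get settings API; the flat_settings parameter below is optional and only makes the keys easier to scan:

# Returns persistent, transient, and default settings; look for
# cluster.max_shards_per_node and cluster.max_shards_per_node.frozen.
GET _cluster/settings?include_defaults=true&flat_settings=true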
Cluster is close to reaching the configured maximum number of shards for data nodes
The cluster.max_shards_per_node cluster setting limits the maximum number of open shards in a cluster, counting only data nodes that do not belong to the frozen tier.
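The cluster-wide limit scales with the number of non-frozen data nodes: for example, with the default cluster.max_shards_per_node of 1000 and three non-frozen data nodes, the cluster can hold at most 3000 open shards. To see which nodes count toward this limit, you can list node roles (the role column shows abbreviations; any node with a data role other than data_frozen counts):

# Lists each node with its role abbreviations; nodes holding a data role
# other than data_frozen count toward the cluster.max_shards_per_node limit.
GET _cat/nodes?v&h=name,node.role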
This symptom indicates that action should be taken; otherwise, the creation of new indices or upgrading the cluster could be blocked.
If you're confident your changes won't destabilize the cluster, you can temporarily increase the limit using the cluster update settings API:
Use Kibana
- Log in to the Elastic Cloud console.
- On the Elasticsearch Service panel, click the name of your deployment.
  If the name of your deployment is disabled, your Kibana instances might be unhealthy, in which case contact Elastic Support. If your deployment doesn't include Kibana, all you need to do is enable it first.
- Open your deployment's side navigation menu (placed under the Elastic logo in the upper left corner) and go to Dev Tools > Console.
- Check the current status of the cluster according to the shards capacity indicator:
response = client.health_report(
  feature: 'shards_capacity'
)
puts response
GET _health_report/shards_capacity
The response will look like this:
{ "cluster_name": "...", "indicators": { "shards_capacity": { "status": "yellow", "symptom": "Cluster is close to reaching the configured maximum number of shards for data nodes.", "details": { "data": { "max_shards_in_cluster": 1000, "current_used_shards": 988 }, "frozen": { "max_shards_in_cluster": 3000, "current_used_shards": 0 } }, "impacts": [ ... ], "diagnosis": [ ... } } }
- Update the cluster.max_shards_per_node setting with a proper value:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node' => 1200
    }
  }
)
puts response
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": 1200
  }
}
This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or reduce your cluster's shard count on nodes that do not belong to the frozen tier.
- To verify that the change has fixed the issue, you can get the current status of the shards_capacity indicator by checking the data section of the health API:

response = client.health_report(
  feature: 'shards_capacity'
)
puts response
GET _health_report/shards_capacity
The response will look like this:
{ "cluster_name": "...", "indicators": { "shards_capacity": { "status": "green", "symptom": "The cluster has enough room to add new shards.", "details": { "data": { "max_shards_in_cluster": 1000 }, "frozen": { "max_shards_in_cluster": 3000 } } } } }
- When a long-term solution is in place, we recommend you reset the cluster.max_shards_per_node limit:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node' => nil
    }
  }
)
puts response
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": null
  }
}
If you are not using Kibana, you can run the same procedure directly against the Elasticsearch API. Check the current status of the cluster according to the shards capacity indicator:
response = client.health_report(
  feature: 'shards_capacity'
)
puts response
GET _health_report/shards_capacity
The response will look like this:
{ "cluster_name": "...", "indicators": { "shards_capacity": { "status": "yellow", "symptom": "Cluster is close to reaching the configured maximum number of shards for data nodes.", "details": { "data": { "max_shards_in_cluster": 1000, "current_used_shards": 988 }, "frozen": { "max_shards_in_cluster": 3000 } }, "impacts": [ ... ], "diagnosis": [ ... } } }
Using the cluster settings API, update the cluster.max_shards_per_node setting:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node' => 1200
    }
  }
)
puts response
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": 1200
  }
}
This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or reduce your cluster's shard count on nodes that do not belong to the frozen tier (one way to do this is sketched at the end of this section).

To verify that the change has fixed the issue, you can get the current status of the shards_capacity indicator by checking the data section of the health API:

response = client.health_report(
  feature: 'shards_capacity'
)
puts response
GET _health_report/shards_capacity
The response will look like this:
{ "cluster_name": "...", "indicators": { "shards_capacity": { "status": "green", "symptom": "The cluster has enough room to add new shards.", "details": { "data": { "max_shards_in_cluster": 1200 }, "frozen": { "max_shards_in_cluster": 3000 } } } } }
When a long-term solution is in place, we recommend you reset the cluster.max_shards_per_node limit:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node' => nil
    }
  }
)
puts response
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": null
  }
}
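One way to reduce the shard count on non-frozen data tiers is to shrink an oversharded index into one with fewer primary shards; deleting or closing indices you no longer need also frees open shards. The sketch below is a minimal example, not part of the original procedure: the index and node names are placeholders, the source index must be read-only and fully allocated to a single node before shrinking, and the target shard count must be a factor of the source's.

# Make the source index read-only and relocate a copy of every shard to one node
PUT /my-oversharded-index/_settings
{
  "settings": {
    "index.number_of_replicas": 0,
    "index.routing.allocation.require._name": "shrink-node-name",
    "index.blocks.write": true
  }
}

# Shrink into a new index with a single primary shard and clear the temporary settings
POST /my-oversharded-index/_shrink/my-shrunken-index
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null,
    "index.number_of_shards": 1
  }
}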
Cluster is close to reaching the configured maximum number of shards for frozen nodes
The cluster.max_shards_per_node.frozen cluster setting limits the maximum number of open shards in a cluster, counting only data nodes that belong to the frozen tier.
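As with the data-node limit, this limit scales with the number of frozen-tier nodes: for example, with the default cluster.max_shards_per_node.frozen of 3000 and two frozen nodes, the frozen tier can hold up to 6000 shards. To get a feel for how shards are distributed today, you can list the shard count per node (the columns shown are a subset of the default _cat/allocation output):

# Shows how many shards are currently allocated to each node.
GET _cat/allocation?v&h=node,shards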
This symptom indicates that action should be taken; otherwise, the creation of new indices or upgrading the cluster could be blocked.
If you're confident your changes won't destabilize the cluster, you can temporarily increase the limit using the cluster update settings API:
Use Kibana
- Log in to the Elastic Cloud console.
- On the Elasticsearch Service panel, click the name of your deployment.
  If the name of your deployment is disabled, your Kibana instances might be unhealthy, in which case contact Elastic Support. If your deployment doesn't include Kibana, all you need to do is enable it first.
- Open your deployment's side navigation menu (placed under the Elastic logo in the upper left corner) and go to Dev Tools > Console.
- Check the current status of the cluster according to the shards capacity indicator:
response = client.health_report(
  feature: 'shards_capacity'
)
puts response
GET _health_report/shards_capacity
The response will look like this:
{ "cluster_name": "...", "indicators": { "shards_capacity": { "status": "yellow", "symptom": "Cluster is close to reaching the configured maximum number of shards for frozen nodes.", "details": { "data": { "max_shards_in_cluster": 1000 }, "frozen": { "max_shards_in_cluster": 3000, "current_used_shards": 2998 } }, "impacts": [ ... ], "diagnosis": [ ... } } }
- Update the cluster.max_shards_per_node.frozen setting with a proper value:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node.frozen' => 3200
    }
  }
)
puts response
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node.frozen": 3200
  }
}
This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or reduce your cluster's shard count on nodes that belong to the frozen tier.
- To verify that the change has fixed the issue, you can get the current status of the shards_capacity indicator by checking the data section of the health API:

response = client.health_report(
  feature: 'shards_capacity'
)
puts response
GET _health_report/shards_capacity
The response will look like this:
{ "cluster_name": "...", "indicators": { "shards_capacity": { "status": "green", "symptom": "The cluster has enough room to add new shards.", "details": { "data": { "max_shards_in_cluster": 1000 }, "frozen": { "max_shards_in_cluster": 3200 } } } } }
- When a long-term solution is in place, we recommend you reset the cluster.max_shards_per_node.frozen limit:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node.frozen' => nil
    }
  }
)
puts response
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node.frozen": null
  }
}
If you are not using Kibana, you can run the same procedure directly against the Elasticsearch API. Check the current status of the cluster according to the shards capacity indicator:
response = client.health_report(
  feature: 'shards_capacity'
)
puts response
GET _health_report/shards_capacity
{ "cluster_name": "...", "indicators": { "shards_capacity": { "status": "yellow", "symptom": "Cluster is close to reaching the configured maximum number of shards for frozen nodes.", "details": { "data": { "max_shards_in_cluster": 1000 }, "frozen": { "max_shards_in_cluster": 3000, "current_used_shards": 2998 } }, "impacts": [ ... ], "diagnosis": [ ... } } }
Using the cluster settings API, update the cluster.max_shards_per_node.frozen setting:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node.frozen' => 3200
    }
  }
)
puts response
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node.frozen": 3200
  }
}
This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or reduce your cluster's shard count on nodes that belong to the frozen tier (see the sketch at the end of this section).

To verify that the change has fixed the issue, you can get the current status of the shards_capacity indicator by checking the data section of the health API:

response = client.health_report(
  feature: 'shards_capacity'
)
puts response
GET _health_report/shards_capacity
The response will look like this:
{ "cluster_name": "...", "indicators": { "shards_capacity": { "status": "green", "symptom": "The cluster has enough room to add new shards.", "details": { "data": { "max_shards_in_cluster": 1000 }, "frozen": { "max_shards_in_cluster": 3200 } } } } }
When a long-term solution is in place, we recommend you reset the cluster.max_shards_per_node.frozen limit:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node.frozen' => nil
    }
  }
)
puts response
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node.frozen": null
  }
}
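On the frozen tier, shards usually belong to partially mounted searchable snapshot indices, so reducing the frozen shard count generally means deleting (or not mounting) some of those indices. The sketch below is a minimal, hypothetical example: the index name is a placeholder, and while deleting a mounted index does not normally delete the underlying snapshot, check how your snapshots are managed (for example by ILM) before deleting anything.

# Inspect how shards are spread across nodes to spot frozen-tier hotspots
GET _cat/allocation?v&h=node,shards

# Deleting a partially mounted index releases its shards on the frozen tier
DELETE /my-partially-mounted-index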