Perform inference API
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Performs an inference task on input text by using an inference endpoint.

The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5) and models uploaded through Eland, Cohere, OpenAI, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models, or if you want to use non-NLP models, use the machine learning trained model APIs instead.
Request
POST /_inference/<inference_id>
POST /_inference/<task_type>/<inference_id>
Prerequisites

- Requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege)

Description

The perform inference API enables you to use machine learning models to perform specific tasks on data that you provide as an input. The API returns a response with the results of the task. The inference endpoint you use can perform one specific task, which was defined when the endpoint was created with the create inference API.
Path parameters

- <inference_id>: (Required, string) The unique identifier of the inference endpoint.
- <task_type>: (Optional, string) The type of inference task that the model performs.
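The two request forms shown above differ only in whether the task type appears in the path. A minimal sketch of how the path parameters combine (build_inference_path is an illustrative helper, not part of the Elasticsearch client):

```python
def build_inference_path(inference_id, task_type=None):
    """Build the REST path for the perform inference API.

    With a task type:    POST /_inference/<task_type>/<inference_id>
    Without a task type: POST /_inference/<inference_id>
    """
    if task_type is not None:
        return f"/_inference/{task_type}/{inference_id}"
    return f"/_inference/{inference_id}"

# Both documented forms:
print(build_inference_path("my-elser-model"))
# /_inference/my-elser-model
print(build_inference_path("my-elser-model", "sparse_embedding"))
# /_inference/sparse_embedding/my-elser-model
```

When the task type is omitted, the endpoint's own configured task (set at creation time) determines what is performed.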
Query parameters

- timeout: (Optional, time value) Controls the amount of time to wait for the inference to complete. Defaults to 30 seconds.
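The timeout is passed as a query-string parameter using a standard Elasticsearch time value such as "10s" or "1m". A sketch of how it attaches to the request path (with_timeout is an illustrative helper, not a client API):

```python
from urllib.parse import urlencode

def with_timeout(path, timeout=None):
    # timeout takes Elasticsearch time values such as "10s" or "1m";
    # when omitted, the server-side default of 30 seconds applies.
    if timeout is None:
        return path
    return f"{path}?{urlencode({'timeout': timeout})}"

print(with_timeout("/_inference/completion/openai_chat_completions", "10s"))
# /_inference/completion/openai_chat_completions?timeout=10s
```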
Request body

- input: (Required, string or array of strings) The text on which you want to perform the inference task. input can be a single string or an array. Inference endpoints for the completion task type currently only support a single string as input.
- query: (Required, string) Only for rerank inference endpoints. The search query text.
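The body constraints above — input as a string or array of strings, completion limited to a single string, and query required only for rerank — can be sketched as a client-side check before sending the request (validate_inference_body is an illustrative helper, not part of the Elasticsearch client):

```python
def validate_inference_body(task_type, body):
    """Check an inference request body against the documented constraints."""
    text = body.get("input")
    # input is required and must be a string or an array of strings.
    if text is None:
        raise ValueError("input is required")
    if not isinstance(text, (str, list)):
        raise ValueError("input must be a string or an array of strings")
    # completion endpoints currently support only a single string as input.
    if task_type == "completion" and not isinstance(text, str):
        raise ValueError("completion endpoints support only a single string input")
    # query is required for rerank endpoints (and only meaningful there).
    if task_type == "rerank" and "query" not in body:
        raise ValueError("rerank endpoints require a query")
    return True

print(validate_inference_body("completion", {"input": "What is Elastic?"}))
# True
```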
Examples

Completion example

The following example performs a completion on the example question.
resp = client.inference.inference(
    task_type="completion",
    inference_id="openai_chat_completions",
    body={"input": "What is Elastic?"},
)
print(resp)

POST _inference/completion/openai_chat_completions
{
  "input": "What is Elastic?"
}
The API returns the following response:
{
  "completion": [
    {
      "result": "Elastic is a company that provides a range of software solutions for search, logging, security, and analytics. Their flagship product is Elasticsearch, an open-source, distributed search engine that allows users to search, analyze, and visualize large volumes of data in real-time. Elastic also offers products such as Kibana, a data visualization tool, and Logstash, a log management and pipeline tool, as well as various other tools and solutions for data analysis and management."
    }
  ]
}
Rerank example

The following example performs reranking on the example input.
resp = client.inference.inference(
    task_type="rerank",
    inference_id="cohere_rerank",
    body={
        "input": ["luke", "like", "leia", "chewy", "r2d2", "star", "wars"],
        "query": "star wars main character",
    },
)
print(resp)

POST _inference/rerank/cohere_rerank
{
  "input": ["luke", "like", "leia", "chewy", "r2d2", "star", "wars"],
  "query": "star wars main character"
}
The API returns the following response:
{
  "rerank": [
    {
      "index": "2",
      "relevance_score": "0.011597361",
      "text": "leia"
    },
    {
      "index": "0",
      "relevance_score": "0.006338922",
      "text": "luke"
    },
    {
      "index": "5",
      "relevance_score": "0.0016166499",
      "text": "star"
    },
    {
      "index": "4",
      "relevance_score": "0.0011695103",
      "text": "r2d2"
    },
    {
      "index": "1",
      "relevance_score": "5.614787E-4",
      "text": "like"
    },
    {
      "index": "6",
      "relevance_score": "3.7850367E-4",
      "text": "wars"
    },
    {
      "index": "3",
      "relevance_score": "1.2508839E-5",
      "text": "chewy"
    }
  ]
}
Sparse embedding example

The following example performs sparse embedding on the example sentence.
resp = client.inference.inference(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
    body={
        "input": "The sky above the port was the color of television tuned to a dead channel."
    },
)
print(resp)

response = client.inference.inference(
  task_type: 'sparse_embedding',
  inference_id: 'my-elser-model',
  body: {
    input: 'The sky above the port was the color of television tuned to a dead channel.'
  }
)
puts response

POST _inference/sparse_embedding/my-elser-model
{
  "input": "The sky above the port was the color of television tuned to a dead channel."
}
The API returns the following response:
{
  "sparse_embedding": [
    {
      "port": 2.1259406,
      "sky": 1.7073475,
      "color": 1.6922266,
      "dead": 1.6247464,
      "television": 1.3525393,
      "above": 1.2425821,
      "tuned": 1.1440028,
      "colors": 1.1218185,
      "tv": 1.0111054,
      "ports": 1.0067928,
      "poem": 1.0042328,
      "channel": 0.99471164,
      "tune": 0.96235967,
      "scene": 0.9020516,
      (...)
    },
    (...)
  ]
}