Documentation guidelines
The goal of each integration's documentation is to:
- Help readers understand the benefits the integration offers and how Elastic can help with their use case.
- Inform readers of any requirements, including system compatibility, supported versions of third-party products, permissions needed, and more.
- Provide an exhaustive list of the fields collected, along with the data and metric types for each field. Readers can reference this information while evaluating the integration, interpreting collected data, or troubleshooting issues.
- Set readers up for a successful installation and setup by connecting them with any other resources they'll need.
Each integration document should contain several sections, and you should use consistent headings to make it easier for a single user to evaluate and use multiple integrations.
Some considerations when writing these documentation files in `_dev/build/docs/*.md`:
- These files follow Markdown syntax and leverage the documentation templates.
- There are certain functions or placeholders (`fields`, `event`, `url`) available to help you write the documentation. For more details, refer to the placeholders documentation.
- Regarding the `url` placeholder, it should be used to add links to the Elastic documentation guides in your documents:
    - The file containing all of the defined links, [`links_table.yml`](../links_table.yml), is located at the root of the directory.
    - If needed, more links to Elastic documentation guides can be added to that file.
    - Example usage: in a documentation file (`_dev/build/docs/*.md`), `{{ url "getting-started-observability" "Elastic guide" }}` generates a link to the Observability Getting started guide.
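As a rough sketch of how these placeholders can be combined in a single docs file, the fragment below uses all three. The file path and the `access` data stream name are hypothetical; replace them with your integration's own:

```markdown
<!-- _dev/build/docs/README.md (sketch; the "access" data stream is hypothetical) -->

For step-by-step instructions, see the
{{ url "getting-started-observability" "Getting started" }} guide.

An example event for `access` looks as following:

{{event "access"}}

{{fields "access"}}
```

At build time, `{{event "access"}}` is replaced with the sample event for that data stream and `{{fields "access"}}` with its exported fields table, so the generated `docs/*.md` stays in sync with the package contents.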
Overview
The overview section explains what the integration is, defines the third-party product that is providing data, establishes its relationship to the larger ecosystem of Elastic products, and helps readers understand how it can be used to solve a tangible problem.
The overview should answer questions like:
- What is the integration?
- What is the third-party product that is providing data?
- What can you do with it?
    - General description
    - Basic example
Template
Use this template language as a starting point, replacing `<placeholder text>` with details about the integration:
```
The <name> integration allows you to monitor <service>. <service> is <definition>.

Use the <name> integration to <function>. Then visualize that data in Kibana,
create alerts to notify you if something goes wrong, and reference
<data stream type> when troubleshooting an issue.

For example, if you wanted to <use case> you could <action>. Then you can
<visualize|alert|troubleshoot> by <action>.
```
Example
The AWS CloudFront integration allows you to monitor your AWS CloudFront usage. AWS CloudFront is a content delivery network (CDN) service. Use the AWS CloudFront integration to collect and parse logs related to content delivery. Then visualize that data in Kibana, create alerts to notify you if something goes wrong, and reference logs when troubleshooting an issue. For example, you could use the data from this integration to know when there are more than some number of failed requests for a single piece of content in a given time period. You could also use the data to troubleshoot the underlying issue by looking at additional context in the logs like the number of unique users (by IP address) who experienced the issue, the source of the request, and more.
Data streams
The data streams section provides a high-level overview of the kind of data that is collected by the integration. This is helpful since it can be difficult to quickly derive an understanding from the reference sections alone (they're too long for that).
The data streams section should include:
- A list of the types of data streams collected by the integration
- A summary of each data stream type, with a link to the relevant reference section:
    - Logs
    - Metrics
- Notes (optional)
Template
Use this template language as a starting point, replacing `<placeholder text>` with details about the integration:
```
## Data streams

The <name> integration collects two types of data streams: logs and metrics.

**Logs** help you keep a record of events happening in <service>.
Log data streams collected by the <name> integration include <select data streams>,
and more. See more details in the [Logs](#logs-reference).

**Metrics** give you insight into the state of <service>.
Metric data streams collected by the <name> integration include <select data streams>
and more. See more details in the [Metrics](#metrics-reference).

<!-- etc. -->

<!-- Optional notes -->
```
Example
The System integration collects two types of data: logs and metrics. Logs help you keep a record of events that happen on your machine. Log data streams collected by the System integration include application, system, and security events on machines running Windows or auth and syslog events on machines running macOS or Linux. See more details in the Logs reference. Metrics give you insight into the state of the machine. Metric data streams collected by the System integration include CPU usage, load statistics, memory usage, information on network behavior, and more. See more details in the Metrics reference. You can enable and disable individual data streams. If all data streams are disabled and the System integration is still enabled, Fleet uses the default data streams.
Requirements
The requirements section helps readers confirm that the integration will work with their systems. It should include:
- Elastic prerequisites (for example, a self-managed or Cloud deployment)
- System compatibility
- Supported versions of third-party products
- Permissions needed
- Anything else that could block a user from successfully using the integration
Template
Use this template language as a starting point, including any other requirements for the integration:
```
## Requirements

You need Elasticsearch for storing and searching your data and Kibana for
visualizing and managing it. You can use our hosted Elasticsearch Service on
Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your
own hardware.

<!-- Other requirements -->
```
Example
You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware. Each data stream collects different kinds of metric data, which may require dedicated permissions to be fetched and may vary across operating systems. Details on the permissions needed for each data stream are available in the Metrics reference.
For a much more detailed example, see the AWS integration requirements.
Set up
The set up section points readers to the Observability Getting started guide for generic, step-by-step instructions.
This section should also include any additional setup instructions beyond what's covered in that guide, which may include instructions for updating the configuration of a third-party service. For example, for the Cisco ASA integration, users need to configure their Cisco device following the steps found in the Cisco documentation.
When possible, link to third-party documentation for configuring non-Elastic products, since those workflows may change without notice.
Template
Use this template language as a starting point, including any other setup instructions for the integration:
```
## Setup

<!-- Any prerequisite instructions -->

For step-by-step instructions on how to set up an integration, see the
{{ url "getting-started-observability" "Getting started" }} guide.

<!-- Additional set up instructions -->
```
Example
Before sending logs to Elastic from your Cisco device, you must configure your device according to <<Cisco's documentation on configuring a syslog server>>. After you've configured your device, you can set up the Elastic integration. For step-by-step instructions on how to set up an integration, see the <<Getting started>> guide.
Troubleshooting (optional)
The troubleshooting section is optional. It should contain information about special cases and exceptions that aren't necessary for getting started, or that won't be applicable to all users.
Template
There is no standard format for the troubleshooting section.
Example
>Note that certain data streams may access `/proc` to gather process information,
>and the resulting `ptrace_may_access()` call by the kernel to check for
>permissions can be blocked by
>[AppArmor and other LSM software](https://gitlab.com/apparmor/apparmor/wikis/TechnicalDoc_Proc_and_ptrace),
>even though the System module doesn't use `ptrace` directly.
>
>In addition, when running inside a container the proc filesystem directory of the host
>should be set using `system.hostfs` setting to `/hostfs`.
Reference
Readers might use the reference sections while evaluating the integration, interpreting collected data, or troubleshooting issues.
There can be any number of reference sections (for example, `## Metrics reference`, `## Logs reference`). Each reference section can contain one or more subsections, such as one for each individual data stream (for example, `### Access logs` and `### Error logs`).
Each reference section should contain detailed information about:
- The log or metric types we support within the integration, with links to the relevant third-party documentation.
- (Optional) An example event in JSON format.
- The fields for the logs, metrics, and events exported by the integration, with actual types (for example, `counters`, `gauges`, and `histograms` versus `longs` and `doubles`). Fields should be generated using the instructions in Fine-tune the integration.
- ML module jobs.
Template
```
<!-- Repeat for both Logs and Metrics if applicable -->
## <Logs|Metrics> reference

<!-- Repeat for each data stream of the current type -->
### <Data stream name>

The `<data stream name>` data stream provides events from <source> of the
following types: <list types>.

<!-- Optional -->
<!-- #### Example -->
<!-- An example event for `<data stream name>` looks as following: -->
<!-- <code block with example> -->

#### Exported fields

<insert table>
```
Example
>## Logs reference
>
>### PAN-OS
>
>The `panos` data stream provides events from Palo Alto Networks device of the following types: [GlobalProtect](https://docs.paloaltonetworks.com/pan-os/10-2/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/globalprotect-log-fields), [HIP Match](https://docs.paloaltonetworks.com/pan-os/10-2/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/hip-match-log-fields), [Threat](https://docs.paloaltonetworks.com/pan-os/10-2/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields), [Traffic](https://docs.paloaltonetworks.com/pan-os/10-2/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/traffic-log-fields) and [User-ID](https://docs.paloaltonetworks.com/pan-os/10-2/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/user-id-log-fields).
>
>#### Example
>
>An example event for `panos` looks as following:
>
>(code block)
>
>#### Exported fields
>
>(table of fields)