diff --git a/docs/detections/rules-ui-create.asciidoc b/docs/detections/rules-ui-create.asciidoc index dcb8574274..8b44ed230e 100644 --- a/docs/detections/rules-ui-create.asciidoc +++ b/docs/detections/rules-ui-create.asciidoc @@ -13,7 +13,7 @@ To create a new detection rule, follow these steps: . Configure basic rule settings. . Configure advanced rule settings (optional). . Set the rule's schedule. -. Set up alert notifications (optional). +. Set up rule actions (optional). . Set up response actions (optional). .Requirements @@ -616,9 +616,6 @@ run exactly at its scheduled time. `Additional look-back time` are _not_ created. ============== . Click *Continue*. The *Rule actions* pane is displayed. -+ -[role="screenshot"] -image::images/available-action-types.png[Available connector types] . Do either of the following: @@ -627,23 +624,26 @@ image::images/available-action-types.png[Available connector types] [float] [[rule-notifications]] -=== Set up alert notifications (optional) +=== Set up rule actions (optional) -Use {kib} Actions to set up notifications sent via other systems when alerts +Use {kib} actions to set up notifications sent via other systems when alerts are generated. -NOTE: To use {kib} Actions for alert notifications, you need the +NOTE: To use {kib} actions for alert notifications, you need the https://www.elastic.co/subscriptions[appropriate license] and your role needs *All* privileges for the *Action and Connectors* feature. For more information, see <>. . Select a connector type to determine how notifications are sent. For example, if you select the {jira} connector, notifications are sent to your {jira} system. + -NOTE: Each action type requires a connector. Connectors store the +[NOTE] +===== +Each action type requires a connector. Connectors store the information required to send the notification from the external system. You can configure connectors while creating the rule or in *{stack-manage-app}* -> *{connectors-ui}*. For more information, see {kibana-ref}/action-types.html[Action and connector types]. -+ -[role="screenshot"] -image::images/available-action-types.png[Available connector types] + +Some connectors that perform actions require less configuration. For example, you do not need to set the action frequency or variables for the {kibana-ref}/cases-action-type.html[Cases connector] + +===== . After you select a connector, set its action frequency to define when notifications are sent: @@ -775,8 +775,8 @@ Example using the mustache "current element" notation `{{.}}` to output all the [float] [[rule-response-action]] -=== Set up response actions (optional) -Use Response Actions to set up additional functionality that will run whenever a rule executes: +==== Set up response actions (optional) +Use response actions to set up additional functionality that will run whenever a rule executes: * **Osquery**: Include live Osquery queries with a custom query rule. When an alert is generated, Osquery automatically collects data on the system related to the alert. Refer to <> to learn more. @@ -784,9 +784,6 @@ Use Response Actions to set up additional functionality that will run whenever a IMPORTANT: Host isolation involves quarantining a host from the network to prevent further spread of threats and limit potential damage. Be aware that automatic host isolation can cause unintended consequences, such as disrupting legitimate user activities or blocking critical business processes. 
-[role="screenshot"] -image::images/available-response-actions.png[Shows available response actions] - [discrete] [[preview-rules]] === Preview your rule (optional) diff --git a/docs/serverless/AI-for-security/ai-assistant-alert-triage.mdx b/docs/serverless/AI-for-security/ai-assistant-alert-triage.mdx deleted file mode 100644 index 6251add6f9..0000000000 --- a/docs/serverless/AI-for-security/ai-assistant-alert-triage.mdx +++ /dev/null @@ -1,39 +0,0 @@ ---- -slug: /serverless/security/triage-alerts-with-elastic-ai-assistant -title: Triage alerts -description: Elastic AI Assistant can help you enhance and streamline your alert triage workflows. -tags: ["security", "overview", "get-started"] -status: in review ---- - - - -
- -Elastic AI Assistant can help you enhance and streamline your alert triage workflows. - -AI Assistant can help you interpret an alert and understand its context. When you view an alert in ((elastic-sec)), details such as related documents, hosts, and users appear alongside a synopsis of the events that triggered the alert. This data provides a starting point for understanding a potential threat. AI Assistant can answer questions about this data and offer insights and actionable recommendations to remediate the issue. - -
-## Use AI Assistant to triage an alert - -1. Choose an alert to investigate, then click the **View details** button from the Alerts table. -2. On the details flyout, click **Chat** to launch AI Assistant. Data related to the selected alert is automatically added to the prompt. -3. Click **Alert (from summary)** to view which alert fields will be shared with AI Assistant. (For more information about selecting which fields to send, and to learn about anonymizing your data, refer to AI Assistant.) -4. (Optional) Click a quick prompt to use it as a starting point for your query, for example, **Alert summarization**. Customize the prompt and add detail to improve AI Assistant's response. - Once you’ve submitted your query, AI Assistant will process the information and provide a detailed response. Depending on your prompt and which alert data you included, its response can include a thorough analysis of the alert that highlights key elements such as the nature of the potential threat, potential impact, and suggested response actions. -5. (Optional) Ask follow-up questions, provide additional information for further analysis, and request clarification. The response is not a static report. - -
-## Generate triage reports - -Elastic AI Assistant can streamline the documentation and report generation process by providing clear records of security incidents, their scope and impact, and your remediation efforts. You can use AI Assistant to create summaries or reports for stakeholders that include key event details, findings, and diagrams. Once the AI Assistant has finished analyzing one or more alerts, you can generate reports by using prompts such as: - -* “Generate a detailed report about this incident, including timeline, impact analysis, and response actions. Also, include a diagram of events.” -* “Generate a summary of this incident/alert and include diagrams of events.” -* “Provide more details on the mitigation strategies used.” - -After you review the report, click **Add to existing case** at the top of AI Assistant's response. This allows you to save a record of the report and make it available to your team. - - \ No newline at end of file diff --git a/docs/serverless/AI-for-security/ai-assistant-esql-queries.mdx b/docs/serverless/AI-for-security/ai-assistant-esql-queries.mdx deleted file mode 100644 index 5fec9ca59f..0000000000 --- a/docs/serverless/AI-for-security/ai-assistant-esql-queries.mdx +++ /dev/null @@ -1,21 +0,0 @@ ---- -slug: /serverless/security/ai-assistant-esql-queries -title: Generate, customize, and learn about ((esql)) queries -description: AI Assistant has specialized ((esql)) capabilities. -tags: ["security","overview","get-started"] -status: in review ---- - -Elastic AI Assistant can help you learn about and leverage the Elasticsearch Query Language (((esql))). - -With AI Assistant's knowledge base enabled, AI Assistant benefits from specialized training data that enables it to answer questions related to ((esql)) at an expert level. - -AI Assistant can help with ((esql)) in many ways, including: - -* **Education and training**: AI Assistant can serve as a powerful ((esql)) learning tool. Ask it for examples, explanations of complex queries, and best practices. -* **Writing new queries**: Prompt AI Assistant to provide a query that accomplishes a particular task, and it will generate a query matching your description. For example: "Write a query to identify documents with `curl.exe` usage and calculate the sum of `destination.bytes`" or "What query would return all user logins to [a host] in the last six hours?" -* **Providing feedback to optimize existing queries**: Send AI Assistant a query you want to work on and ask it for improvements, refactoring, a general assessment, or to optimize the query's performance with large data sets. -* **Customizing queries for your environment**: Since each environment is unique, you may need to customize queries that you used in other contexts. AI Assistant can suggest necessary modifications based on contextual information you provide. -* **Troubleshooting**: Having trouble with a query or getting unexpected results? Ask AI Assistant to help you troubleshoot. - -In these ways and others, AI Assistant can enable you to make use of ((esql))'s advanced search capabilities to accomplish goals across ((elastic-sec)).
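To make the "Writing new queries" example above concrete, a query AI Assistant generates for that prompt might resemble the following sketch. The `logs-*` index pattern and the `host.name` grouping are assumptions for illustration; adjust them to your data and validate any generated query before running it.

```esql
FROM logs-*
| WHERE process.name == "curl.exe"
| STATS total_bytes = SUM(destination.bytes) BY host.name
| SORT total_bytes DESC
```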
\ No newline at end of file diff --git a/docs/serverless/AI-for-security/ai-assistant.mdx b/docs/serverless/AI-for-security/ai-assistant.mdx deleted file mode 100644 index 827f123f10..0000000000 --- a/docs/serverless/AI-for-security/ai-assistant.mdx +++ /dev/null @@ -1,185 +0,0 @@ ---- -slug: /serverless/security/ai-assistant -title: Elastic AI Assistant -description: Elastic AI Assistant is a generative AI open-code chat assistant. -tags: ["security","overview","get-started"] -status: in review ---- - - -
- -The Elastic AI Assistant utilizes generative AI to bolster your cybersecurity operations team. It allows users to interact with ((elastic-sec)) for tasks such as alert investigation, incident response, and query generation or conversion using natural language and much more. - - - - -Elastic AI Assistant is designed to enhance your analysis with smart dialogues. Its capabilities are still developing. Users should exercise caution as the quality of its responses might vary. Your insights and feedback will help us improve this feature. Always cross-verify AI-generated advice for accuracy. - - - - -* This feature requires the Security Analytics Complete . - -* You need an account with a third-party generative AI provider, which AI Assistant uses to generate responses. Supported providers are OpenAI, Azure OpenAI Service, and Amazon Bedrock. - - - -
- -## Your data and AI Assistant - -Elastic does not store or examine prompts or results used by AI Assistant, or use this data for model training. This includes anything you send the model, such as alert or event data, detection rule configurations, queries, and prompts. However, any data you provide to AI Assistant will be processed by the third-party large language model (LLM) provider you connected as part of AI Assistant setup. - -Elastic does not control third-party tools, and assumes no responsibility or liability for their content, operation, or use, nor for any loss or damage that may arise from your using such tools. Please exercise caution when using AI tools with personal, sensitive, or confidential information. Any data you submit may be used by the provider for AI training or other purposes. There is no guarantee that the provider will keep any information you provide secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use. - - -Elastic can automatically anonymize event data that you provide to AI Assistant as context. To learn more, refer to Configure AI Assistant. - - -
- -## Set up AI Assistant - -You must create a generative AI connector before you can use AI Assistant. AI Assistant can connect to multiple large language model (LLM) providers so you can select the best model for your needs. To set up a connector, refer to . - - -While AI Assistant is compatible with many different models, refer to the to select models that perform well with your desired use cases. - - -
- -## Start chatting - -To open AI Assistant, select the **AI Assistant** button in the top toolbar from anywhere in the ((security-app)). You can also use the keyboard shortcut **Cmd + ;** (or **Ctrl + ;** on Windows). - - - -This opens the **Welcome** chat interface, where you can ask general questions about ((elastic-sec)). - -You can also chat with AI Assistant from several particular pages in ((elastic-sec)) where you can easily send context-specific data and prompts to AI Assistant. - -* Alert details or Event details flyout: Click **Chat** while viewing the details of an alert or event. -* Rules page: Use AI Assistant to help create or correct rule queries. -* Data Quality dashboard: Select the **Incompatible fields** tab, then click **Chat**. (This is only available for fields marked red, indicating they’re incompatible). -* Timeline: Select the **Security Assistant** tab. - - -Each user's chat history and custom quick prompts are automatically saved, so you can leave ((elastic-sec)) and return to pick up a conversation later. Chat history is saved in the `.kibana-elastic-ai-assistant-conversations` data stream. - - -
-## Interact with AI Assistant - -Use these features to adjust and act on your conversations with AI Assistant: - -* Select a _system prompt_ at the beginning of a conversation to establish how detailed and technical you want AI Assistant's answers to be. - - - - System prompts provide context to the model, informing its response. To create a custom system prompt, open the system prompts dropdown menu and click **+ Add new system prompt...**. - - -* Select a _quick prompt_ at the bottom of the chat window to get help writing a prompt for a specific purpose, such as summarizing an alert or converting a query from a legacy SIEM to ((elastic-sec)). - - - - Quick prompt availability varies based on context — for example, the **Alert summarization** quick prompt appears when you open AI Assistant while viewing an alert. To customize existing quick prompts and create new ones, click **Add Quick prompt**. - - -* In an active conversation, you can use the inline actions that appear on messages to incorporate AI Assistant's responses into your workflows: - - * **Add note to timeline** (): Add the selected text to your currently active Timeline as a note. - * **Add to existing case** (): Add a comment to an existing case using the selected text. - * **Copy to clipboard** (): Copy the text to the clipboard to paste elsewhere. Also helpful for resubmitting a previous prompt. - * **Add to timeline** (): Add a filter or query to Timeline using the text. This button appears for particular queries in AI Assistant's responses. - - Be sure to specify which language you'd like AI Assistant to use when writing a query. For example: "Can you generate an Event Query Language query to find four failed logins followed by a successful login?" - - -AI Assistant can remember particular information you tell it to remember. For example, you could tell it: "When answering any question about srv-win-s1-rsa or an alert that references it, mention that this host is in the New York data center". This will cause it to remember the detail you highlighted. - -
- -## Configure AI Assistant -The **Settings** menu () allows you to configure default conversations, quick prompts, system prompts, and data anonymization. - -![AI Assistant's settings menu, open to the Conversations tab](../images/ai-assistant/-assistant-assistant-settings-menu.png) - -The **Settings** menu has the following tabs: - -* **Conversations:** When you open AI Assistant from certain pages, such as Timeline or Alerts, it defaults to the relevant conversation type. Choose the system prompt for each conversation type, the connector, and model (if applicable). The **Streaming** setting controls whether AI Assistant's responses appear word-by-word (streamed), or as a complete block of text. Streaming is currently only available for OpenAI models. -* **Quick Prompts:** Modify existing quick prompts or create new ones. To create a new quick prompt, type a unique name in the **Name** field, then press **enter**. Under **Prompt**, enter or update the quick prompt's text. Under **Contexts**, select where the quick prompt should appear. -* **System Prompts:** Edit existing system prompts or create new ones. To create a new system prompt, type a unique name in the **Name** field, then press **enter**. Under **Prompt**, enter or update the system prompt's text. - - - To delete a custom prompt, open the **Name** drop-down menu, hover over the prompt you want to delete, and click the *X* that appears. You cannot delete the default prompts. - - -* **Anonymization:** Select fields to include as plaintext, to obfuscate, and to not send when you provide events to AI Assistant as context. -* **Knowledge base:** Provide additional context to AI Assistant so it can answer questions about ((esql)) and alerts in your environment. - -
- -### Anonymization - - -The **Anonymization** tab of the AI Assistant settings menu allows you to define default data anonymization behavior for events you send to AI Assistant. Fields with **Allowed** toggled on are included in events provided to AI Assistant. **Allowed** fields with **Anonymized** set to **Yes** are included, but with their values obfuscated. - -![AI Assistant's settings menu, open to the Anonymization tab](../images/ai-assistant/-assistant-assistant-anonymization-menu.png) - -The **Show anonymized** toggle controls whether you see the obfuscated or plaintext versions of the fields you sent to AI Assistant. It doesn't control what gets obfuscated — that's determined by the anonymization settings. It also doesn't affect how event fields appear _before_ being sent to AI Assistant. Instead, it controls how fields that were already sent and obfuscated appear to you. - -When you include a particular event as context, such as an alert from the Alerts page, you can adjust anonymization behavior for the specific event. Be sure the anonymization behavior meets your specifications before sending a message with the event attached. - -
-### Knowledge base - - - -The **Knowledge base** tab of the AI Assistant settings menu allows you to enable AI Assistant to answer questions about the Elasticsearch Query Language (((esql))), and about alerts in your environment. To use it, you must , - -### Knowledge base for ((esql)) - - -((esql)) queries generated by AI Assistant might require additional validation. To ensure they're correct, refer to the [((esql)) documentation](((ref))/esql-language.html). - - -When this feature is enabled, AI Assistant can help you write an ((esql)) query for a particular use case, or answer general questions about ((esql)) syntax and usage. To enable AI Assistant to answer questions about ((esql)): - -* Turn on the knowledge base by clicking **Setup**. If the **Setup** button doesn't appear, knowledge base is already enabled. -* Click **Save**. The knowledge base is now active. A quick prompt for ((esql)) queries becomes available, which provides a good starting point for your ((esql)) conversations and questions. - - -AI Assistant's knowledge base gets additional context from [Elastic Learned Sparse EncodeR (ELSER)](((ml-docs))/ml-nlp-elser.html#download-deploy-elser). - - -### Knowledge base for alerts - -When this feature is enabled, AI Assistant will receive multiple alerts as context for each of your prompts. It will receive alerts from the last 24 hours that have a status of `open` or `acknowledged`, ordered first by risk score, then by recency. Building block alerts are excluded. This enables it to answer questions about multiple alerts in your environment, rather than just the individual alerts you choose to include as context. - -To enable the knowledge base for alerts: - -* Turn on the knowledge base by clicking **Setup**. If the **Setup** button doesn't appear, knowledge base is already enabled. -* Use the slider to select the number of alerts to send to AI Assistant. Click **Save**. - -![AI Assistant's settings menu, open to the Knowledge base tab](../images/ai-assistant/assistant-kb-menu.png) - - -Including a large number of alerts may cause your request to exceed the maximum token length of your third-party generative AI provider. If this happens, try selecting a lower number of alerts to send. - - -### Get the most from your queries - -Elastic AI Assistant helps you take full advantage of the Elastic Security platform to improve your security operations. Its ability to assist you depends on the specificity and detail of your questions. The more context and detail you provide, the more tailored and useful its responses will be. - -To maximize its usefulness, consider using more detailed prompts or asking for additional information. For instance, after asking for an ES|QL query example, you could ask a follow-up question like, “Could you give me some other examples?” You can also ask for clarification or further exposition, for example "Please provide comments explaining the query you just gave." - -In addition to practical advice, AI Assistant can offer conceptual advice, tips, and best practices for enhancing your security measures. You can ask it, for example: - -* “How do I set up a machine learning job in Elastic Security to detect anomalies in network traffic volume over time?” -* “I need to monitor for unusual file creation patterns that could indicate ransomware activity.
How would I construct this query using EQL?” - diff --git a/docs/serverless/AI-for-security/ai-for-security-landing-pg.mdx b/docs/serverless/AI-for-security/ai-for-security-landing-pg.mdx deleted file mode 100644 index b4a5b206ac..0000000000 --- a/docs/serverless/AI-for-security/ai-for-security-landing-pg.mdx +++ /dev/null @@ -1,8 +0,0 @@ ---- -slug: /serverless/security/ai-for-security -title: AI for security -description: Learn about Elastic's native AI security tools. -tags: [ 'serverless', 'security', 'overview', 'LLM', 'artificial intelligence' ] -status: in review ---- -You can use ((elastic-sec))’s built-in AI tools to speed up your work and augment your team’s capabilities. The pages in this section describe , which answers questions and enhances your workflows throughout Elastic Security, and , which speeds up the triage process by finding patterns and identifying attacks spanning multiple alerts. \ No newline at end of file diff --git a/docs/serverless/AI-for-security/ai-use-cases.mdx b/docs/serverless/AI-for-security/ai-use-cases.mdx deleted file mode 100644 index 073ebce8b8..0000000000 --- a/docs/serverless/AI-for-security/ai-use-cases.mdx +++ /dev/null @@ -1,13 +0,0 @@ ---- -slug: /serverless/security/ai-use-cases -title: Use cases -description: Learn about use cases for AI in ((elastic-sec)). -tags: ["security","overview","get-started"] -status: in review ---- - -The guides in this section describe use cases for AI Assistant and Attack discovery. Refer to them for examples of each tool's individual capabilities, and of what they can do together. - -* -* -* \ No newline at end of file diff --git a/docs/serverless/AI-for-security/attack-discovery.mdx b/docs/serverless/AI-for-security/attack-discovery.mdx deleted file mode 100644 index 1603aea9ae..0000000000 --- a/docs/serverless/AI-for-security/attack-discovery.mdx +++ /dev/null @@ -1,81 +0,0 @@ ---- -slug: /serverless/security/attack-discovery -title: Attack discovery -description: Accelerate threat identification by triaging alerts with a large language model. -tags: [ 'serverless', 'security', 'overview', 'LLM', 'artificial intelligence' ] -status: in review ---- - - - - -This feature is in technical preview. It may change in the future, and you should exercise caution when using it in production environments. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of GA features. - - -Attack discovery leverages large language models (LLMs) to analyze alerts in your environment and identify threats. Each "discovery" represents a potential attack and describes relationships among multiple alerts to tell you which users and hosts are involved, how alerts correspond to the MITRE ATT&CK matrix, and which threat actor might be responsible. This can help make the most of each security analyst's time, fight alert fatigue, and reduce your mean time to respond. - -For a demo, refer to the following video. - - -This page describes: - -- How to start generating discoveries -- What information each discovery includes -- How you can interact with discoveries to enhance ((elastic-sec)) workflows - - -
-## Generate discoveries - -When you access Attack discovery for the first time, you'll need to select an LLM connector before you can analyze alerts. Attack discovery uses the same LLM connectors as Elastic AI Assistant. To get started: - -1. Click **Attack discovery** in ((elastic-sec))'s navigation menu. - -2. Select an existing connector from the dropdown menu, or add a new one. - - -While Attack discovery is compatible with many different models, our testing found increased performance with Claude 3 Sonnet and Claude 3 Opus. In general, models with larger context windows are more effective for Attack discovery. - - -![Attack discovery empty state](../images/attack-discovery/select-model-empty-state.png) - -3. Once you've selected a connector, click **Generate** to start the analysis. - -It may take anywhere from a few seconds to several minutes to generate discoveries, depending on the number of alerts and the model you selected. - - -Attack discovery is in technical preview and will only analyze open and acknowledged alerts from the past 24 hours. By default, it only analyzes up to 20 alerts within this timeframe, but you can expand this up to 100 by going to **AI Assistant → Settings () → Knowledge Base** and updating the **Alerts** setting. - - -![AI Assistant knowledge base menu](../images/ai-assistant/assistant-kb-menu.png) - - - -Attack discovery uses the same data anonymization settings as Elastic AI Assistant. To configure which alert fields are sent to the LLM and which of those fields are obfuscated, use the Elastic AI Assistant settings. Consider the privacy policies of third-party LLMs before sending them sensitive data. - - -Once the analysis is complete, any threats it identifies appear as discoveries. Click each one's title to expand or collapse it. Click **Generate** at any time to start the Attack discovery process again with the most current alerts. -
-## What information does each discovery include? - -Each discovery includes the following information describing the potential threat, generated by the connected LLM: - -- A descriptive title and a summary of the potential threat. -- The number of associated alerts and which parts of the [MITRE ATT&CK matrix](https://attack.mitre.org/) they correspond to. -- The implicated entities (users and hosts), and what suspicious activity was observed for each. - -![Attack discovery detail view](../images/attack-discovery/attack-discovery-full-card.png) - -
-## Incorporate discoveries with other workflows - -There are several ways you can incorporate discoveries into your ((elastic-sec)) workflows: - -- Click an entity's name to open the user or host details flyout and view more details that may be relevant to your investigation. -- Hover over an entity's name to either add the entity to Timeline () or copy its field name and value to the clipboard (). -- Click **Take action**, then select **Add to new case** or **Add to existing case** to add a discovery to a case. This makes it easy to share the information with your team and other stakeholders. -- Click **Investigate in timeline** to explore the discovery in Timeline. -- Click **View in AI Assistant** to attach the discovery to a conversation with AI Assistant. You can then ask follow up questions about the discovery or associated alerts. - -![Attack discovery view in AI Assistant](../images/attack-discovery/add-discovery-to-conversation.gif) diff --git a/docs/serverless/AI-for-security/connect-to-azure-openai.mdx b/docs/serverless/AI-for-security/connect-to-azure-openai.mdx deleted file mode 100644 index 79f9d8630d..0000000000 --- a/docs/serverless/AI-for-security/connect-to-azure-openai.mdx +++ /dev/null @@ -1,83 +0,0 @@ ---- -slug: /serverless/security/connect-to-azure-openai -title: Connect to Azure OpenAI -description: Set up an Azure OpenAI LLM connector. -tags: ["security", "overview", "get-started"] -status: in review ---- - -# Connect to Azure OpenAI - -This page provides step-by-step instructions for setting up an Azure OpenAI connector for the first time. This connector type enables you to leverage large language models (LLMs) within ((kib)). You'll first need to configure Azure, then configure the connector in ((kib)). - -## Configure Azure - -### Configure a deployment - -First, set up an Azure OpenAI deployment: - -1. Log in to the Azure console and search for Azure OpenAI. -2. In **Azure AI services**, select **Create**. -3. For the **Project Details**, select your subscription and resource group. If you don't have a resource group, select **Create new** to make one. -4. For **Instance Details**, select the desired region and specify a name, such as `example-deployment-openai`. -5. Select the **Standard** pricing tier, then click **Next**. -6. Configure your network settings, click **Next**, optionally add tags, then click **Next**. -7. Review your deployment settings, then click **Create**. When complete, select **Go to resource**. - -The following video demonstrates these steps. - - - - -### Configure keys - -Next, create access keys for the deployment: - -1. From within your Azure OpenAI deployment, select **Click here to manage keys**. -2. Store your keys in a secure location. - -The following video demonstrates these steps. - - - - -### Configure a model - -Now, set up the Azure OpenAI model: - -1. From within your Azure OpenAI deployment, select **Model deployments**, then click **Manage deployments**. -2. On the **Deployments** page, select **Create new deployment**. -3. Under **Select a model**, choose `gpt-4o` or `gpt-4 turbo`. -4. Set the model version to "Auto-update to default". -5. Under **Deployment type**, select **Standard**. -6. Name your deployment. -7. Slide the **Tokens per Minute Rate Limit** to the maximum. The following example supports 80,000 TPM, but other regions might support higher limits. -8. Click **Create**. 
- - -The models available to you will depend on [region availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the . - - -The following video demonstrates these steps. - - - -## Configure Elastic AI Assistant - -Finally, configure the connector in ((kib)): - -1. Log in to ((kib)). -2. Go to **Stack Management → Connectors → Create connector → OpenAI**. -3. Give your connector a name to help you keep track of different models, such as `Azure OpenAI (GPT-4 Turbo v. 0125)`. -4. For **Select an OpenAI provider**, choose **Azure OpenAI**. -5. Update the **URL** field. We recommend doing the following: - - Navigate to your deployment in Azure AI Studio and select **Open in Playground**. The **Chat playground** screen displays. - - Select **View code**, then from the drop-down, change the **Sample code** to `Curl`. - - Highlight and copy the URL without the quotes, then paste it into the **URL** field in ((kib)). - - (Optional) Alternatively, refer to the [API documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference) to learn how to create the URL manually. -6. Under **API key**, enter one of your API keys. -7. Click **Save & test**, then click **Run**. - -The following video demonstrates these steps. - - diff --git a/docs/serverless/AI-for-security/connect-to-bedrock.mdx b/docs/serverless/AI-for-security/connect-to-bedrock.mdx deleted file mode 100644 index 7c8609b043..0000000000 --- a/docs/serverless/AI-for-security/connect-to-bedrock.mdx +++ /dev/null @@ -1,119 +0,0 @@ ---- -slug: /serverless/security/connect-to-bedrock -title: Connect to Amazon Bedrock -description: Set up an Amazon Bedrock LLM connector. -tags: ["security","overview","get-started"] -status: in review ---- - -# Connect to Amazon Bedrock - -This page provides step-by-step instructions for setting up an Amazon Bedrock connector for the first time. This connector type enables you to leverage large language models (LLMs) within ((kib)). You'll first need to configure AWS, then configure the connector in ((kib)). - - -Only Amazon Bedrock's `Anthropic` models are supported: `Claude` and `Claude instant`. - - -## Configure AWS - -### Configure an IAM policy - -First, configure an IAM policy with the necessary permissions: - -1. Log into the AWS console and search for Identity and Access Management (IAM). -2. From the **IAM** menu, select **Policies** → **Create policy**. -3. To provide the necessary permissions, paste the following JSON into the **Specify permissions** menu. - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "VisualEditor0", - "Effect": "Allow", - "Action": [ - "bedrock:InvokeModel", - "bedrock:InvokeModelWithResponseStream" - ], - "Resource": "*" - } - ] -} -``` - -These are the minimum required permissions. IAM policies with additional permissions are also supported. - - -4. Click **Next**. Name your policy. - -The following video demonstrates these steps. - - - -### Configure an IAM User - -Next, assign the policy you just created to a new user: - -1. Return to the **IAM** menu. Select **Users** from the navigation menu, then click **Create User**. -2. Name the user, then click **Next**. -3. Select **Attach policies directly**. -4. 
In the **Permissions policies** field, search for the policy you created earlier, select it, and click **Next**. -5. Review the configuration then click **Create user**. - -The following video demonstrates these steps. - - - -### Create an access key - -Create the access keys that will authenticate your Elastic connector: - -1. Return to the **IAM** menu. Select **Users** from the navigation menu. -2. Search for the user you just created, and click its name. -3. Go to the **Security credentials** tab. -4. Under **Access keys**, click **Create access key**. -5. Select **Third-party service**, check the box under **Confirmation**, click **Next**, then click **Create access key**. -6. Click **Download .csv file** to download the key. Store it securely. - -The following video demonstrates these steps. - - - - -### Enable model access - -Make sure the supported Amazon Bedrock LLMs are enabled: - -1. Search the AWS console for Amazon Bedrock. -2. From the Amazon Bedrock page, click **Get started**. -3. Select **Model access** from the left navigation menu, then click **Manage model access**. -4. Check the boxes for **Claude** and/or **Claude Instant**, depending which model or models you plan to use. -5. Click **Save changes**. - -The following video demonstrates these steps. - - - -## Configure the Amazon Bedrock connector - -Finally, configure the connector in ((kib)): - -1. Log in to ((kib)). -2. Go to **Stack Management → Connectors → Create connector → Amazon Bedrock**. -3. Name your connector. -4. (Optional) Configure the Amazon Bedrock connector to use a different AWS region where Anthropic models are supported by editing the **URL** field, for example by changing `us-east-1` to `eu-central-1`. -5. (Optional) Add one of the following strings if you want to use a model other than the default: - - For Haiku: `anthropic.claude-3-haiku-20240307-v1:0` - - For Sonnet: `anthropic.claude-3-sonnet-20240229-v1:0` - - For Opus: `anthropic.claude-3-opus-20240229-v1:0` -6. Enter the **Access Key** and **Secret** that you generated earlier, then click **Save**. - -Your LLM connector is now configured. For more information on using Elastic AI Assistant, refer to [AI Assistant](https://docs.elastic.co/security/ai-assistant). - - -If you're using [provisioned throughput](https://docs.aws.amazon.com/bedrock/latest/userguide/prov-throughput.html), your ARN becomes the model ID, and the connector settings **URL** value must be [encoded](https://www.urlencoder.org/) to work. For example, if the non-encoded ARN is `arn:aws:bedrock:us-east-2:123456789102:provisioned-model/3Ztr7hbzmkrqy1`, the encoded ARN would be `arn%3Aaws%3Abedrock%3Aus-east-2%3A123456789102%3Aprovisioned-model%2F3Ztr7hbzmkrqy1`. - - -The following video demonstrates these steps. - - diff --git a/docs/serverless/AI-for-security/connect-to-byo-llm.mdx b/docs/serverless/AI-for-security/connect-to-byo-llm.mdx deleted file mode 100644 index ccbb6e3cec..0000000000 --- a/docs/serverless/AI-for-security/connect-to-byo-llm.mdx +++ /dev/null @@ -1,174 +0,0 @@ ---- -slug: /serverless/security/connect-to-byo-llm -title: Connect to your own local LLM -description: Set up a connector to LM Studio so you can use a local model with AI Assistant. -tags: ["security", "overview", "get-started"] -status: in review ---- - -This page provides instructions for setting up a connector to a large language model (LLM) of your choice using LM Studio. This allows you to use your chosen model within ((elastic-sec)). 
You'll first need to set up a reverse proxy to communicate with ((elastic-sec)), then set up LM Studio on a server, and finally configure the connector in your ((elastic-sec)) project. [Learn more about the benefits of using a local LLM](https://www.elastic.co/blog/ai-assistant-locally-hosted-models). - -This example uses a single server hosted in GCP to run the following components: -- LM Studio with the [Mixtral-8x7b](https://mistral.ai/technology/#models) model -- A reverse proxy using Nginx to authenticate to Elastic Cloud - - - - - -For testing, you can use alternatives to Nginx such as [Azure Dev Tunnels](https://learn.microsoft.com/en-us/azure/developer/dev-tunnels/overview) or [Ngrok](https://ngrok.com/), but using Nginx makes it easy to collect additional telemetry and monitor its status by using Elastic's native Nginx integration. While this example uses cloud infrastructure, it could also be replicated locally without an internet connection. - - -## Configure your reverse proxy - - -If your Elastic instance is on the same host as LM Studio, you can skip this step. - - -You need to set up a reverse proxy to enable communication between LM Studio and Elastic. For more complete instructions, refer to a guide such as [this one](https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-reverse-proxy-on-ubuntu-22-04). - -The following is an example Nginx configuration file: -``` -server { - listen 80; - listen [::]:80; - server_name ; - server_tokens off; - add_header x-xss-protection "1; mode=block" always; - add_header x-frame-options "SAMEORIGIN" always; - add_header X-Content-Type-Options "nosniff" always; - return 301 https://$server_name$request_uri; -} - -server { - - listen 443 ssl http2; - listen [::]:443 ssl http2; - server_name ; - server_tokens off; - ssl_certificate /etc/letsencrypt/live//fullchain.pem; - ssl_certificate_key /etc/letsencrypt/live//privkey.pem; - ssl_session_timeout 1d; - ssl_session_cache shared:SSL:50m; - ssl_session_tickets on; - ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256'; - ssl_protocols TLSv1.3 TLSv1.2; - ssl_prefer_server_ciphers on; - add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always; - add_header x-xss-protection "1; mode=block" always; - add_header x-frame-options "SAMEORIGIN" always; - add_header X-Content-Type-Options "nosniff" always; - add_header Referrer-Policy "strict-origin-when-cross-origin" always; - ssl_stapling on; - ssl_stapling_verify on; - ssl_trusted_certificate /etc/letsencrypt/live//fullchain.pem; - resolver 1.1.1.1; - location / { - - if ($http_authorization != "Bearer ") { - return 401; -} - - proxy_pass http://localhost:1234/; - } - -} -``` - - -* Replace `` with your actual token, and keep it safe since you'll need it to set up the ((elastic-sec)) connector. -* Replace `` with your actual domain name. -* Update the `proxy_pass` value at the bottom of the configuration if you decide to change the port number in LM Studio to something other than 1234. - - -### (Optional) Set up performance monitoring for your reverse proxy -You can use Elastic's [Nginx integration](https://www.elastic.co/docs/current/integrations/nginx) to monitor performance and populate monitoring dashboards in the ((security-app)). - -## Configure LM Studio and download a model - -First, install [LM Studio](https://lmstudio.ai/). 
LM Studio supports the OpenAI SDK, which makes it compatible with Elastic's OpenAI connector, allowing you to connect to any model available in the LM Studio marketplace. - -One current limitation of LM Studio is that when it is installed on a server, you must launch the application using its GUI before doing so using the CLI. For example, by using Chrome RDP with an [X Window System](https://cloud.google.com/architecture/chrome-desktop-remote-on-compute-engine). After you've opened the application the first time using the GUI, you can start it by using `sudo lms server start` in the CLI. - -Once you've launched LM Studio: - -1. Go to LM Studio's Search window. -1. Search for an LLM (for example, `Mixtral-8x7B-instruct`). Your chosen model must include `instruct` in its name in order to work with Elastic. -1. Filter your search for "Compatibility Guess" to optimize results for your hardware. Results will be color coded: - * Green means "Full GPU offload possible", which yields the best results. - * Blue means "Partial GPU offload possible", which may work. - * Red for "Likely too large for this machine", which typically will not work. -1. Download one or more models. - - -For security reasons, before downloading a model, verify that it is from a trusted source. It can be helpful to review community feedback on the model (for example using a site like Hugging Face). - - - - -In this example we used [`TheBloke/Mixtral-8x7B-Instruct-v0.1.Q3_K_M.gguf`](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF). It has 46.7B total parameters, a 32,000 token context window, and uses GGUF [quanitization](https://huggingface.co/docs/transformers/main/en/quantization/overview). For more information about model names and format information, refer to the following table. - -| Model Name | Parameter Size | Tokens/Context Window | Quantization Format | -|------------|----------------|-----------------------|---------------------| -| Name of model, sometimes with a version number. | LLMs are often compared by their number of parameters — higher numbers mean more powerful models. | Tokens are small chunks of input information. Tokens do not necessarily correspond to characters. You can use [Tokenizer](https://platform.openai.com/tokenizer) to see how many tokens a given prompt might contain. | Quantization reduces overall parameters and helps the model to run faster, but reduces accuracy. | -| Examples: Llama, Mistral, Phi-3, Falcon. | The number of parameters is a measure of the size and the complexity of the model. The more parameters a model has, the more data it can process, learn from, generate, and predict. | The context window defines how much information the model can process at once. If the number of input tokens exceeds this limit, input gets truncated. | Specific formats for quantization vary, most models now support GPU rather than CPU offloading. | - - -## Load a model in LM Studio - -After downloading a model, load it in LM Studio using the GUI or LM Studio's [CLI tool](https://lmstudio.ai/blog/lms). - -### Option 1: load a model using the CLI (Recommended) - -It is a best practice to download models from the marketplace using the GUI, and then load or unload them using the CLI. The GUI allows you to search for models, whereas the CLI only allows you to import specific paths, but the CLI provides a good interface for loading and unloading. - -Use the following commands in your CLI: - -1. Verify LM Studio is installed: `lms` -2. Check LM Studio's status: `lms status` -3. 
List all downloaded models: `lms ls` -4. Load a model: `lms load` - - - -After the model loads, you should see a `Model loaded successfully` message in the CLI. - - - -To verify which model is loaded, use the `lms ps` command. - - - -If your model uses NVIDIA drivers, you can check the GPU performance with the `sudo nvidia-smi` command. - -### Option 2: load a model using the GUI - -Refer to the following video to see how to load a model using LM Studio's GUI. You can change the **port** setting, which is referenced in the Nginx configuration file. Note that the **GPU offload** was set to **Max**. - - - -## (Optional) Collect logs using Elastic's Custom Logs integration - -You can monitor the performance of the host running LM Studio using Elastic's [Custom Logs integration](https://www.elastic.co/docs/current/integrations/log). This can also help with troubleshooting. Note that the default path for LM Studio logs is `/tmp/lmstudio-server-log.txt`, as in the following screenshot: - - - -## Configure the connector in ((elastic-sec)) - -Finally, configure the connector in your Security project: - -1. Log in to your Security project. -2. Navigate to **Stack Management → Connectors → Create Connector → OpenAI**. The OpenAI connector enables this use case because LM Studio uses the OpenAI SDK. -3. Name your connector to help keep track of the model version you are using. -4. Under **URL**, enter the domain name specified in your Nginx configuration file, followed by `/v1/chat/completions`. -5. Under **Default model**, enter `local-model`. -6. Under **API key**, enter the secret token specified in your Nginx configuration file. -7. Click **Save**. - - - -Setup is now complete. You can use the model you've loaded in LM Studio to power Elastic's generative AI features. You can test a variety of models as you interact with AI Assistant to see what works best without having to update your connector. - - -While local models work well for , we recommend you use one of for interacting with . As local models become more performant over time, this is likely to change. - \ No newline at end of file diff --git a/docs/serverless/AI-for-security/connect-to-openai.mdx b/docs/serverless/AI-for-security/connect-to-openai.mdx deleted file mode 100644 index 57a24d97b1..0000000000 --- a/docs/serverless/AI-for-security/connect-to-openai.mdx +++ /dev/null @@ -1,53 +0,0 @@ ---- -slug: /serverless/security/connect-to-openai -title: Connect to OpenAI -description: Set up an OpenAI LLM connector. -tags: ["security", "overview", "get-started"] -status: in review ---- - -# Connect to OpenAI - -This page provides step-by-step instructions for setting up an OpenAI connector for the first time. This connector type enables you to leverage OpenAI's large language models (LLMs) within ((kib)). You'll first need to create an OpenAI API key, then configure the connector in ((kib)). - -## Configure OpenAI - -### Select a model - -Before creating an API key, you must choose a model. Refer to the [OpenAI docs](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) to select a model. Take note of the specific model name (for example `gpt-4-turbo`); you'll need it when configuring ((kib)). - - -`GPT-4o` offers increased performance over previous versions. For more information on how different models perform for different tasks, refer to the . - - -### Create an API key - -To generate an API key: - -1. Log in to the OpenAI platform and navigate to **API keys**. -2. Select **Create new secret key**. -3. 
Name your key, select an OpenAI project, and set the desired permissions. -4. Click **Create secret key** and then copy and securely store the key. It will not be accessible after you leave this screen. - -The following video demonstrates these steps. - - - - -## Configure the OpenAI connector - -Finally, configure the connector in ((kib)): - -1. Log in to ((kib)). -2. Navigate to **Stack Management → Connectors → Create Connector → OpenAI**. -3. Provide a name for your connector, such as `OpenAI (GPT-4 Turbo Preview)`, to help keep track of the model and version you are using. -4. Under **Select an OpenAI provider**, choose **OpenAI**. -5. The **URL** field can be left as default. -6. Under **Default model**, specify which [model](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) you want to use. -7. Paste the API key that you created into the corresponding field. -8. Click **Save**. - -The following video demonstrates these steps. - - - diff --git a/docs/serverless/AI-for-security/connect-to-vertex.mdx b/docs/serverless/AI-for-security/connect-to-vertex.mdx deleted file mode 100644 index 5374db1439..0000000000 --- a/docs/serverless/AI-for-security/connect-to-vertex.mdx +++ /dev/null @@ -1,67 +0,0 @@ ---- -slug: /serverless/security/connect-to-google-vertex -title: Connect to Google Vertex AI -description: Set up a Google Vertex LLM connector. -tags: ["security", "overview", "get-started"] -status: in review ---- - -This page provides step-by-step instructions for setting up a Google Vertex AI connector for the first time. This connector type enables you to leverage Vertex AI's large language models (LLMs) within ((elastic-sec)). You'll first need to enable Vertex AI, then generate an API key, and finally configure the connector in your ((elastic-sec)) project. - - -Before continuing, you should have an active project in one of Google Vertex AI's [supported regions](https://cloud.google.com/vertex-ai/docs/general/locations#feature-availability). - - -## Enable the Vertex AI API - -1. Log in to the GCP console and navigate to **Vertex AI → Vertex AI Studio → Overview**. -2. If you're new to Vertex AI, the **Get started with Vertex AI Studio** popup appears. Click **Vertex AI API**, then click **ENABLE**. - -The following video demonstrates these steps. - - - - -For more information about enabling the Vertex AI API, refer to [Google's documentation](https://cloud.google.com/vertex-ai/docs/start/cloud-environment). - - -## Create a Vertex AI service account - -1. In the GCP console, navigate to **APIs & Services → Library**. -2. Search for **Vertex AI API**, select it, and click **MANAGE**. -3. In the left menu, navigate to **Credentials** then click **+ CREATE CREDENTIALS** and select **Service account**. -4. Name the new service account, then click **CREATE AND CONTINUE**. -5. Under **Select a role**, select **Vertex AI User**, then click **CONTINUE**. -6. Click **Done**. - -The following video demonstrates these steps. - - - -## Generate an API key - -1. Return to Vertex AI's **Credentials** menu and click **Manage service accounts**. -2. Search for the service account you just created, select it, then click the link that appears under **Email**. -3. Go to the **KEYS** tab, click **ADD KEY**, then select **Create new key**. -4. Select **JSON**, then click **CREATE** to download the key. Keep it somewhere secure. - -The following video demonstrates these steps. - - - -## Configure the Google Gemini connector - -Finally, configure the connector in ((kib)): - -1. 
Log in to ((kib)). -2. Navigate to **Stack Management → Connectors → Create Connector → Google Gemini**. -3. Name your connector to help keep track of the model version you are using. -4. Under **URL**, enter the URL for your region. -5. Enter your **GCP Region** and **GCP Project ID**. -6. Under **Default model**, specify either `gemini-1.5-pro` or `gemini-1.5-flash`. [Learn more about the models](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models). -7. Under **Authentication**, enter your API key. -8. Click **Save**. - -The following video demonstrates these steps. - - \ No newline at end of file diff --git a/docs/serverless/AI-for-security/images/attck-disc-11-alerts-disc.png b/docs/serverless/AI-for-security/images/attck-disc-11-alerts-disc.png deleted file mode 100644 index 0f2bf87bac..0000000000 Binary files a/docs/serverless/AI-for-security/images/attck-disc-11-alerts-disc.png and /dev/null differ diff --git a/docs/serverless/AI-for-security/images/attck-disc-esql-query-gen-example.png b/docs/serverless/AI-for-security/images/attck-disc-esql-query-gen-example.png deleted file mode 100644 index 3ec015ced4..0000000000 Binary files a/docs/serverless/AI-for-security/images/attck-disc-esql-query-gen-example.png and /dev/null differ diff --git a/docs/serverless/AI-for-security/images/attck-disc-remediate-sodinokibi.gif b/docs/serverless/AI-for-security/images/attck-disc-remediate-sodinokibi.gif deleted file mode 100644 index f4fd2c9ed1..0000000000 Binary files a/docs/serverless/AI-for-security/images/attck-disc-remediate-sodinokibi.gif and /dev/null differ diff --git a/docs/serverless/AI-for-security/images/attck-disc-translate-japanese.png b/docs/serverless/AI-for-security/images/attck-disc-translate-japanese.png deleted file mode 100644 index 190efbb09e..0000000000 Binary files a/docs/serverless/AI-for-security/images/attck-disc-translate-japanese.png and /dev/null differ diff --git a/docs/serverless/AI-for-security/images/lms-cli-welcome.png b/docs/serverless/AI-for-security/images/lms-cli-welcome.png deleted file mode 100644 index c857d01454..0000000000 Binary files a/docs/serverless/AI-for-security/images/lms-cli-welcome.png and /dev/null differ diff --git a/docs/serverless/AI-for-security/images/lms-custom-logs-config.png b/docs/serverless/AI-for-security/images/lms-custom-logs-config.png deleted file mode 100644 index 35e82e89cd..0000000000 Binary files a/docs/serverless/AI-for-security/images/lms-custom-logs-config.png and /dev/null differ diff --git a/docs/serverless/AI-for-security/images/lms-edit-connector.png b/docs/serverless/AI-for-security/images/lms-edit-connector.png deleted file mode 100644 index 0359253eb1..0000000000 Binary files a/docs/serverless/AI-for-security/images/lms-edit-connector.png and /dev/null differ diff --git a/docs/serverless/AI-for-security/images/lms-model-select.png b/docs/serverless/AI-for-security/images/lms-model-select.png deleted file mode 100644 index 454fa2a1ab..0000000000 Binary files a/docs/serverless/AI-for-security/images/lms-model-select.png and /dev/null differ diff --git a/docs/serverless/AI-for-security/images/lms-ps-command.png b/docs/serverless/AI-for-security/images/lms-ps-command.png deleted file mode 100644 index af72b6976c..0000000000 Binary files a/docs/serverless/AI-for-security/images/lms-ps-command.png and /dev/null differ diff --git a/docs/serverless/AI-for-security/images/lms-studio-arch-diagram.png b/docs/serverless/AI-for-security/images/lms-studio-arch-diagram.png deleted file mode 100644 index 
4b737bbb7c..0000000000 Binary files a/docs/serverless/AI-for-security/images/lms-studio-arch-diagram.png and /dev/null differ diff --git a/docs/serverless/AI-for-security/images/lms-studio-model-loaded-msg.png b/docs/serverless/AI-for-security/images/lms-studio-model-loaded-msg.png deleted file mode 100644 index c2e3ec8114..0000000000 Binary files a/docs/serverless/AI-for-security/images/lms-studio-model-loaded-msg.png and /dev/null differ diff --git a/docs/serverless/AI-for-security/llm-connector-guides.mdx b/docs/serverless/AI-for-security/llm-connector-guides.mdx deleted file mode 100644 index 31036a8376..0000000000 --- a/docs/serverless/AI-for-security/llm-connector-guides.mdx +++ /dev/null @@ -1,18 +0,0 @@ ---- -slug: /serverless/security/llm-connector-guides -title: LLM connector guides -description: Set up LLM connectors to enable AI features in ((elastic-sec)) -tags: ["security","overview","get-started"] -status: in review ---- - -This section contains instructions for setting up connectors for LLMs so you can use and . - -Setup guides are available for the following LLM providers: - -* -* -* -* -* - diff --git a/docs/serverless/AI-for-security/llm-performance-matrix.mdx b/docs/serverless/AI-for-security/llm-performance-matrix.mdx deleted file mode 100644 index 5aca75288c..0000000000 --- a/docs/serverless/AI-for-security/llm-performance-matrix.mdx +++ /dev/null @@ -1,18 +0,0 @@ ---- -slug: /serverless/security/llm-performance-matrix -title: Large language model performance matrix -description: Learn how different models perform on different tasks in ((elastic-sec)). -tags: ["security", "overview", "get-started"] -status: in review ---- - -This table describes the performance of various large language models (LLMs) for different use cases in ((elastic-sec)), based on our internal testing. To learn more about these use cases, refer to or . - -| **Feature** | **Model** | | | | | | | -|-------------------------------|-----------------------|--------------------|--------------------|------------|-----------------|------------------|-----------| -| | **Claude 3: Opus** | **Claude 3.5: Sonnet** | **Claude 3: Haiku** | **GPT-4o** | **GPT-4 Turbo** | **Gemini 1.5 Pro** | **Gemini 1.5 Flash** | -| **Assistant: general** | Excellent | Excellent | Excellent | Excellent | Excellent | Excellent | Excellent | -| **Assistant: ((esql)) generation** | Great | Great | Poor | Excellent | Poor | Good | Poor | -| **Assistant: alert questions** | Excellent | Excellent | Excellent | Excellent | Poor | Excellent | Good | -| **Attack discovery** | Excellent | Excellent | Poor | Poor | Good | Great | Poor | - diff --git a/docs/serverless/AI-for-security/usecase-attack-disc-ai-assistant-incident-reporting.mdx b/docs/serverless/AI-for-security/usecase-attack-disc-ai-assistant-incident-reporting.mdx deleted file mode 100644 index ce164a6dbd..0000000000 --- a/docs/serverless/AI-for-security/usecase-attack-disc-ai-assistant-incident-reporting.mdx +++ /dev/null @@ -1,64 +0,0 @@ ---- -slug: /serverless/security/ai-usecase-incident-reporting -title: Identify, investigate, and document threats -description: Use Attack discovery and AI Assistant to manage threats. -tags: ["security","overview","get-started"] -status: in review ---- - -Together, and can help you identify and mitigate threats, investigate incidents, and generate incident reports in various languages so you can monitor and protect your environment. - -In this guide, you'll learn how to: - -* -* -* -* - - -
-## Use Attack discovery to identify threats -Attack discovery can detect a wide range of threats by finding relationships among alerts that may indicate a coordinated attack. This enables you to comprehend how threats move through and affect your systems. Attack discovery generates a detailed summary of each potential threat, which can serve as the basis for further analysis. Learn how to . - - - -In the example above, Attack discovery found connections between nine alerts, and used them to identify and describe an attack chain. - -After Attack discovery outlines your threat landscape, use Elastic AI Assistant to quickly analyze a threat in detail. - -
-## Use AI Assistant to analyze a threat - -From a discovery on the Attack discovery page, click **View in AI Assistant** to start a chat that includes the discovery as context. - - - - -AI Assistant can quickly compile essential data and provide suggestions to help you generate an incident report and plan an effective response. You can ask it to provide relevant data or answer questions, such as “How can I remediate this threat?” or “What ((esql)) query would isolate actions taken by this user?” - - - -The image above shows an ((esql)) query generated by AI Assistant in response to a user prompt. Learn more about . - -At any point in a conversation with AI Assistant, you can add data, narrative summaries, and other information from its responses to ((elastic-sec))'s case management system to generate incident reports. - -
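The ((esql)) queries that AI Assistant suggests can also be run outside the chat, for example to re-check its results against live data. The sketch below is illustrative only: the `/_query` endpoint is the standard ((esql)) query API, while the index pattern, user name, and credentials are placeholder assumptions rather than values taken from this guide.

```python
# Illustrative sketch: run an ES|QL query similar to one AI Assistant might suggest.
# The index pattern, user name, endpoint, and API key below are assumptions.
import requests

ES_URL = "https://localhost:9200"   # assumption: adjust to your deployment
API_KEY = "<your-api-key>"          # assumption: replace with a real API key

esql = """
FROM logs-*
| WHERE user.name == "user-001"
| STATS action_count = COUNT(*) BY event.action
| SORT action_count DESC
| LIMIT 20
"""

response = requests.post(
    f"{ES_URL}/_query",
    headers={"Authorization": f"ApiKey {API_KEY}", "Content-Type": "application/json"},
    json={"query": esql},
    timeout=30,
)
response.raise_for_status()

# The response contains "columns" metadata and "values" rows.
for row in response.json().get("values", []):
    print(row)
```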
-## Generate reports - -From the AI Assistant dialog window, click **Add to case** () next to a message to add the information in that message to a . Cases help centralize relevant details in one place for easy sharing with stakeholders. - -If you add a message that contains a discovery to a case, AI Assistant automatically adds the attack summary and all associated alerts to the case. You can also add AI Assistant messages that contain remediation steps and relevant data to the case. - -
-## Translate incident information to a different human language using AI Assistant - - - - -AI Assistant can translate its findings into other human languages, helping to enable collaboration among global security teams, and making it easier to operate within multilingual organizations. - -After AI Assistant provides information in one language, you can ask it to translate its responses. For example, if it provides remediation steps for an incident, you can instruct it to “Translate these remediation steps into Japanese.” You can then add the translated output to a case. This can help team members receive the same information and insights regardless of their primary language. - - -In our internal testing, AI Assistant translations preserved the accuracy of the original content. However, all LLMs can make mistakes, so use caution. - diff --git a/docs/serverless/advanced-entity-analytics/advanced-behavioral-detections.mdx b/docs/serverless/advanced-entity-analytics/advanced-behavioral-detections.mdx deleted file mode 100644 index 42741b16a2..0000000000 --- a/docs/serverless/advanced-entity-analytics/advanced-behavioral-detections.mdx +++ /dev/null @@ -1,16 +0,0 @@ ---- -slug: /serverless/security/advanced-behavioral-detections -title: Advanced behavioral detections -description: Learn about advanced behavioral detections and its capabilities. -tags: [ 'serverless', 'security', 'overview', 'analyze' ] -status: in review ---- - - - -Elastic's ((ml)) capabilities and advanced correlation, scoring, and visualization techniques can help you identify potential behavioral threats that may be associated with security incidents. - -Advanced behavioral detections includes two key capabilities: - -* Anomaly detection -* diff --git a/docs/serverless/advanced-entity-analytics/advanced-entity-analytics-overview.mdx b/docs/serverless/advanced-entity-analytics/advanced-entity-analytics-overview.mdx deleted file mode 100644 index 1866f91f18..0000000000 --- a/docs/serverless/advanced-entity-analytics/advanced-entity-analytics-overview.mdx +++ /dev/null @@ -1,16 +0,0 @@ ---- -slug: /serverless/security/advanced-entity-analytics -title: Advanced Entity Analytics -description: Learn about Advanced Entity Analytics and its capabilities. -tags: [ 'serverless', 'security', 'overview', 'analyze' ] -status: in review ---- - - - -Advanced Entity Analytics generates a set of threat detection and risk analytics that allows you to expedite alert triage and hunt for new threats from within an entity's environment. This feature combines the power of the SIEM detection engine and Elastic's ((ml)) capabilities to identify unusual user behaviors and generate comprehensive risk analytics for hosts and users. - -Advanced Entity Analytics provides two key capabilities: - -* -* diff --git a/docs/serverless/advanced-entity-analytics/analyze-risk-score-data.mdx b/docs/serverless/advanced-entity-analytics/analyze-risk-score-data.mdx deleted file mode 100644 index 30a28bfdd6..0000000000 --- a/docs/serverless/advanced-entity-analytics/analyze-risk-score-data.mdx +++ /dev/null @@ -1,126 +0,0 @@ ---- -slug: /serverless/security/analyze-risk-score-data -title: View and analyze risk score data -description: Monitor risk score changes of hosts and users in your environment. -tags: [ 'serverless', 'security', 'how-to', 'analyze' ] -status: in review ---- - - - -The ((security-app)) provides several options to monitor the change in the risk posture of hosts and users from your environment. 
Use the following places in the ((security-app)) to view and analyze risk score data: - -* Entity Analytics dashboard -* Alerts page -* Alert details flyout -* Hosts and Users pages -* Host and user details pages -* Host and user details flyouts - - - -We recommend that you prioritize alert triaging to identify anomalies or abnormal behavior patterns. - - -## Entity Analytics dashboard - -From the Entity Analytics dashboard, you can access entity key performance indicators (KPIs), risk scores, and levels. You can also click the number link in the **Alerts** column to investigate and analyze the alerts on the Alerts page. - -![Entity Analytics dashboard](../images/detection-entity-dashboard/-dashboards-entity-dashboard.png) - -## Alert triaging -You can prioritize alert triaging to analyze alerts associated with risky or business-critical entities using the following features in the ((security-app)). - -### Alerts page - -Use the Alerts table to investigate and analyze: - -* Host and user risk levels -* Host and user risk scores -* Asset criticality - -To display entity risk score and asset criticality data in the Alerts table, select **Fields**, and add the following: - -* `user.risk.calculated_level` or `host.risk.calculated_level` -* `user.risk.calculated_score_norm` or `host.risk.calculated_score_norm` -* `user.asset.criticality` or `host.asset.criticality` - -Learn more about customizing the Alerts table. - -![Risk scores in the Alerts table](../images/analyze-risk-score-data/alerts-table-rs.png) - -#### Triage alerts associated with high-risk or business-critical entities - -To analyze alerts associated with high-risk or business-critical entities, you can filter or group them by entity risk level or asset criticality level. - - -If you change the entity's criticality level after an alert is generated, that alert document will include the original criticality level and will not reflect the new criticality level. - - -* Use the drop-down filter controls to filter alerts by entity risk level or asset criticality level. To do this, edit the default controls to filter by: - - * `user.risk.calculated_level` or `host.risk.calculated_level` for entity risk level: - - ![Alerts filtered by high host risk level](../images/analyze-risk-score-data/filter-by-host-risk-level.png) - - * `user.asset.criticality` or `host.asset.criticality` for asset criticality level: - - ![Filter alerts by asset criticality level](../images/analyze-risk-score-data/filter-by-asset-criticality.png) - -* To group alerts by entity risk level or asset criticality level, select **Group alerts by**, then select **Custom field** and search for: - - * `host.risk.calculated_level` or `user.risk.calculated_level` for entity risk level: - - ![Alerts grouped by host risk levels](../images/analyze-risk-score-data/group-by-host-risk-level.png) - - * `host.asset.criticality` or `user.asset.criticality` for asset criticality level: - - ![Alerts grouped by entity asset criticality levels](../images/analyze-risk-score-data/group-by-asset-criticality.png) - - * You can further sort the grouped alerts by highest entity risk score: - - 1. Expand a risk level group (for example, **High**) or an asset criticality group (for example, **high_impact**). - 1. Select **Sort fields** → **Pick fields to sort by**. - 1. Select fields in the following order: - 1. `host.risk.calculated_score_norm`or `user.risk.calculated_score_norm`: **High-Low** - 1. `Risk score`: **High-Low** - 1. 
`@timestamp`: **New-Old** - - ![High-risk alerts sorted by host risk score](../images/analyze-risk-score-data/hrl-sort-by-host-risk-score.png) - -### Alert details flyout - -To access risk score data in the alert details flyout, select **Insights** → **Entities** on the **Overview** tab: - -![Risk scores in the Alerts flyout](../images/analyze-risk-score-data/alerts-flyout-rs.png) - -### Hosts and Users pages - -On the Hosts and Users pages, you can access the risk score data: - -* In the **Host risk level** or **User risk level** column on the **All hosts** or **All users** tab: - - ![Host risk level data on the All hosts tab of the Hosts page](../images/analyze-risk-score-data/hosts-hr-level.png) - -* On the **Host risk** or **User risk** tab: - - ![Host risk data on the Host risk tab of the Hosts page](../images/analyze-risk-score-data/hosts-hr-data.png) - -### Host and user details pages - -On the host details and user details pages, you can access the risk score data: - -* In the Overview section: - - ![Host risk data in the Overview section of the host details page](../images/analyze-risk-score-data/host-details-overview.png) - -* On the **Host risk** or **User risk** tab: - - ![Host risk data on the Host risk tab of the host details page](../images/analyze-risk-score-data/host-details-hr-tab.png) - -### Host and user details flyouts - -In the host details and user details flyouts, you can access the risk score data in the risk summary section: - - ![Host risk data in the Host risk summary section](../images/analyze-risk-score-data/risk-summary.png) - diff --git a/docs/serverless/advanced-entity-analytics/asset-criticality.mdx b/docs/serverless/advanced-entity-analytics/asset-criticality.mdx deleted file mode 100644 index a8e6a8d966..0000000000 --- a/docs/serverless/advanced-entity-analytics/asset-criticality.mdx +++ /dev/null @@ -1,113 +0,0 @@ ---- -slug: /serverless/security/asset-criticality -title: Asset criticality -description: Learn how to use asset criticality to improve your security operations. -tags: [ 'serverless', 'security', 'overview', 'analyze' ] -status: in review ---- - - - - -To view and assign asset criticality, you must: -* Have the appropriate user role. -* Turn on the `securitySolution:enableAssetCriticality` advanced setting. - -For more information, refer to Entity risk scoring prerequisites. - - -The asset criticality feature allows you to classify your organization's entities based on various operational factors that are important to your organization. Through this classification, you can improve your threat detection capabilities by focusing your alert triage, threat-hunting, and investigation activities on high-impact entities. - -You can assign one of the following asset criticality levels to your entities, based on their impact: - -* Low impact -* Medium impact -* High impact -* Extreme impact - -For example, you can assign **Extreme impact** to business-critical entities, or **Low impact** to entities that pose minimal risk to your security posture. - -## View and assign asset criticality - -Entities do not have a default asset criticality level. You can either assign asset criticality to your entities individually, or bulk assign it to multiple entities by importing a text file. - -When you assign, change, or unassign an individual entity's asset criticality level, that entity's risk score is immediately recalculated. - - -If you assign asset criticality using the file import feature, risk scores are **not** immediately recalculated. 
The newly assigned or updated asset criticality levels will impact entity risk scores during the next hourly risk scoring calculation. - - -You can view, assign, change, or unassign asset criticality from the following places in the ((elastic-sec)) app: - -* The host details page and user details page: - - ![Assign asset criticality from the host details page](../images/asset-criticality/-assign-asset-criticality-host-details.png) - -* The host details flyout and user details flyout: - - ![Assign asset criticality from the host details flyout](../images/asset-criticality/-assign-asset-criticality-host-flyout.png) - -* The host details flyout and user details flyout in Timeline: - - ![Assign asset criticality from the host details flyout in Timeline](../images/asset-criticality/-assign-asset-criticality-timeline.png) - -### Bulk assign asset criticality - -You can bulk assign asset criticality to multiple entities by importing a CSV, TXT or TSV file from your asset management tools. - -The file must contain three columns, with each entity record listed on a separate row: - -1. The first column should indicate whether the entity is a `host` or a `user`. -1. The second column should specify the entity's `host.name` or `user.name`. -1. The third column should specify one of the following asset criticality levels: - * `extreme_impact` - * `high_impact` - * `medium_impact` - * `low_impact` - -The maximum file size is 1 MB. - -File structure example: - -``` -user,user-001,low_impact -user,user-002,medium_impact -host,host-001,extreme_impact -```` - -To import a file: -1. Go to **Project Settings** → **Stack Management** → **Asset criticality**. -1. Select or drag and drop the file you want to import. - - - The file validation step highlights any lines that don't follow the required file structure. The asset criticality levels for those entities won't be assigned. We recommend that you fix any invalid lines and re-upload the file. - - -1. Click **Assign**. - -This process overwrites any previously assigned asset criticality levels for the entities included in the imported file. The newly assigned or updated asset criticality levels are immediately visible within all asset criticality workflows and will impact entity risk scores during the next risk scoring calculation. - -## Improve your security operations - -With asset criticality, you can improve your security operations by: - -* Prioritizing open alerts -* Monitoring an entity's risk - -### Prioritize open alerts - -You can use asset criticality as a prioritization factor when triaging alerts and conducting investigations and response activities. - -Once you assign a criticality level to an entity, all subsequent alerts related to that entity are enriched with its criticality level. This additional context allows you to prioritize alerts associated with business-critical entities. - -### Monitor an entity's risk - -The risk scoring engine dynamically factors in an entity's asset criticality, along with `Open` and `Acknowledged` detection alerts to calculate the entity's overall risk score. This dynamic risk scoring allows you to monitor changes in the risk profiles of your most sensitive entities, and quickly escalate high-risk threats. - -To view the impact of asset criticality on an entity's risk score, follow these steps: - -1. Open the host details flyout or user details flyout. The risk summary section shows asset criticality's contribution to the overall risk score. -1. Click **View risk contributions** to open the flyout's left panel. 
-1. In the **Risk contributions** section, verify the entity's criticality level from the time the alert was generated. - -![View asset criticality impact on host risk score](../images/asset-criticality/-asset-criticality-impact.png) diff --git a/docs/serverless/advanced-entity-analytics/behavioral-detection-use-cases.mdx b/docs/serverless/advanced-entity-analytics/behavioral-detection-use-cases.mdx deleted file mode 100644 index 8ed886a669..0000000000 --- a/docs/serverless/advanced-entity-analytics/behavioral-detection-use-cases.mdx +++ /dev/null @@ -1,32 +0,0 @@ ---- -slug: /serverless/security/behavioral-detection-use-cases -title: Behavioral detection use cases -description: Detect internal and external threats using behavioral detection integrations. -tags: [ 'serverless', 'security', 'overview', 'analyze' ] -status: in review ---- - - - -Behavioral detection identifies potential internal and external threats based on user and host activity. It uses a threat-centric approach to flag suspicious activity by analyzing patterns, anomalies, and context enrichment. - -The behavioral detection feature is built on ((elastic-sec))'s foundational SIEM detection capabilities, leveraging ((ml)) algorithms to enable proactive threat detection and hunting. - -## Elastic integrations for behavioral detection use cases - -Behavioral detection integrations provide a convenient way to enable behavioral detection capabilities. They streamline the deployment of components that implement behavioral detection, such as data ingestion, transforms, rules, ((ml)) jobs, and scripts. - - -* Behavioral detection integrations require the Security Analytics Complete project feature. -* To learn more about the requirements for using ((ml)) jobs, refer to . - - -Here's a list of integrations for various behavioral detection use cases: - -* [Data Exfiltration Detection](((integrations-docs))/ded) -* [Domain Generation Algorithm Detection](((integrations-docs))/dga) -* [Lateral Movement Detection](((integrations-docs))/lmd) -* [Living off the Land Attack Detection](((integrations-docs))/problemchild) -* [Network Beaconing Identification](((integrations-docs))/beaconing) - -To learn more about ((ml)) jobs enabled by these integrations, refer to [Prebuilt job reference](((security-guide))/prebuilt-ml-jobs.html). \ No newline at end of file diff --git a/docs/serverless/advanced-entity-analytics/entity-risk-scoring.mdx b/docs/serverless/advanced-entity-analytics/entity-risk-scoring.mdx deleted file mode 100644 index afac426c31..0000000000 --- a/docs/serverless/advanced-entity-analytics/entity-risk-scoring.mdx +++ /dev/null @@ -1,115 +0,0 @@ ---- -slug: /serverless/security/entity-risk-scoring -title: Entity risk scoring -description: Learn about the risk scoring engine and its features. -tags: [ 'serverless', 'security', 'overview', 'analyze' ] -status: in review ---- - - - -Entity risk scoring is an advanced ((elastic-sec)) analytics feature that helps security analysts detect changes in an entity's risk posture, hunt for new threats, and prioritize incident response. - -Entity risk scoring allows you to monitor risk score changes of hosts and users in your environment. When generating advanced scoring analytics, the risk scoring engine utilizes threats from its end-to-end XDR use cases, such as SIEM, cloud, and endpoint. It leverages the Elastic SIEM detection engine to generate host and user risk scores from the last 30 days. 
- -It also generates risk scores on a recurring interval, and allows for easy onboarding and management. The engine is built to factor in risks from all ((elastic-sec)) use cases, and allows you to customize and control how and when risk is calculated. - -## Risk scoring inputs - -Entity risk scores are determined by the following risk inputs: - - - - Alerts - `.alerts-security.alerts-` index alias - - - Asset criticality level - `.asset-criticality.asset-criticality-` index alias - - - -The resulting entity risk scores are stored in the `risk-score.risk-score-` data stream alias. - - - -* Entities without any alerts, or with only `Closed` alerts, are not assigned a risk score. -* To use asset criticality, you must enable the `securitySolution:enableAssetCriticality` advanced setting. - - - -## How is risk score calculated? - -1. The risk scoring engine runs hourly to aggregate `Open` and `Acknowledged` alerts from the last 30 days. For each entity, the engine processes up to 10,000 alerts. - -1. The engine groups alerts by `host.name` or `user.name`, and aggregates the individual alert risk scores (`kibana.alert.risk_score`) such that alerts with higher risk scores contribute more than alerts with lower risk scores. The resulting aggregated risk score is assigned to the **Alerts** category in the entity's risk summary. - -1. The engine then verifies the entity's asset criticality level. If there is no asset criticality assigned, the entity risk score remains equal to the aggregated score from the **Alerts** category. If a criticality level is assigned, the engine updates the risk score based on the default risk weight for each criticality level. The asset criticality risk input is assigned to the **Asset Criticality** category in the entity's risk summary. - - | Asset criticality level | Default risk weight | - |-------------------------|---------------------| - | Low impact | 0.5 | - | Medium impact | 1 | - | High impact | 1.5 | - | Extreme impact | 2 | - - - Asset criticality levels and default risk weights are subject to change. - - -1. Based on the two risk inputs, the risk scoring engine generates a single entity risk score of 0-100. It assigns a risk level by mapping the risk score to one of these levels: - - | Risk level | Risk score | - | ------------- |---------------| - | Unknown | < 20 | - | Low | 20-40 | - | Moderate | 40-70 | - | High | 70-90 | - | Critical | > 90 | - - - -This example shows how the risk scoring engine calculates the user risk score for `User_A`, whose asset criticality level is **Extreme impact**. - -There are 5 open alerts associated with `User_A`: - -* Alert 1 with alert risk score 21 -* Alert 2 with alert risk score 45 -* Alert 3 with alert risk score 21 -* Alert 4 with alert risk score 70 -* Alert 5 with alert risk score 21 - ---- - -To calculate the user risk score, the risk scoring engine: - -1. Sorts the associated alerts in descending order of alert risk score: - - * Alert 4 with alert risk score 70 - * Alert 2 with alert risk score 45 - * Alert 1 with alert risk score 21 - * Alert 3 with alert risk score 21 - * Alert 5 with alert risk score 21 - -1. Generates an aggregated risk score of 36.16, and assigns it to `User_A`'s **Alerts** risk category. - -1. Looks up `User_A`'s asset criticality level, and identifies it as **Extreme impact**. - -1. Generates a new risk input under the **Asset Criticality** risk category, with a risk contribution score of 16.95. - -1. 
Increases the user risk score to 53.11, and assigns `User_A` a **Moderate** user risk level. - -If `User_A` had no asset criticality level assigned, the user risk score would remain unchanged at 36.16. - - - -Learn how to turn on the risk scoring engine. diff --git a/docs/serverless/advanced-entity-analytics/machine-learning.mdx b/docs/serverless/advanced-entity-analytics/machine-learning.mdx deleted file mode 100644 index f78ecb703d..0000000000 --- a/docs/serverless/advanced-entity-analytics/machine-learning.mdx +++ /dev/null @@ -1,87 +0,0 @@ ---- -slug: /serverless/security/machine-learning -title: Detect anomalies -description: Use the power of machine learning to detect outliers and suspicious events. -tags: ["serverless","security","overview","manage"] -status: in review ---- - - -
- -[((ml-cap))](((ml-docs))/ml-ad-overview.html) functionality is available when -you have the appropriate role. Refer to Machine learning job and rule requirements for more information. - -You can view the details of detected anomalies within the `Anomalies` table -widget shown on the Hosts, Network, and associated details pages, or even narrow -to the specific date range of an anomaly from the `Max anomaly score by job` field -in the overview of the details pages for hosts and IPs. These interfaces also -offer the ability to drag and drop details of the anomaly to Timeline, such as -the `Entity` itself, or any of the associated `Influencers`. - -
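If you want the same anomaly details outside the UI, for example in a notebook, you can read them from the ((ml)) get records API. This is a minimal sketch under assumptions: the endpoint, API key, and job ID are placeholders, and the score cutoff mirrors a typical anomaly threshold rather than a required value.

```python
# Sketch: pull the highest-scoring anomaly records for one ML job.
# Endpoint, API key, and job ID are assumptions; adjust them to your environment.
import requests

ES_URL = "https://localhost:9200"   # assumption
API_KEY = "<your-api-key>"          # assumption
JOB_ID = "auth_rare_user"           # assumption: any Security anomaly detection job ID

response = requests.get(
    f"{ES_URL}/_ml/anomaly_detectors/{JOB_ID}/results/records",
    headers={"Authorization": f"ApiKey {API_KEY}"},
    params={"sort": "record_score", "desc": "true", "record_score": 75},
    timeout=30,
)
response.raise_for_status()

# Each record includes the anomaly score, the time bucket, and entity values when present.
for record in response.json().get("records", []):
    print(record["timestamp"], record["record_score"], record.get("by_field_value"))
```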
- -## Manage ((ml)) jobs -If you have the `machine_learning_admin` role, you can use the **ML job settings** interface on the **Alerts**, **Rules**, and **Rule Exceptions** pages to view, start, and stop ((elastic-sec)) ((ml)) jobs. - -![ML job settings UI on the Alerts page](../images/machine-learning/-detections-machine-learning-ml-ui.png) - -
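The same start and stop operations are exposed through the ((ml)) APIs, which can be useful if you manage jobs from scripts. A minimal sketch, assuming a placeholder job ID, a datafeed named after the job, and local credentials:

```python
# Sketch: open an anomaly detection job and start its datafeed via the ML APIs.
# Endpoint, API key, and the job/datafeed IDs are assumptions.
import requests

ES_URL = "https://localhost:9200"   # assumption
API_KEY = "<your-api-key>"          # assumption
JOB_ID = "auth_rare_user"           # assumption: replace with the job you manage
HEADERS = {"Authorization": f"ApiKey {API_KEY}"}

# Open the job so it can accept and analyze data.
requests.post(
    f"{ES_URL}/_ml/anomaly_detectors/{JOB_ID}/_open", headers=HEADERS, timeout=30
).raise_for_status()

# Start the job's datafeed. Prebuilt jobs usually pair with a datafeed named
# "datafeed-<job id>", but verify the datafeed ID in your environment.
requests.post(
    f"{ES_URL}/_ml/datafeeds/datafeed-{JOB_ID}/_start", headers=HEADERS, timeout=30
).raise_for_status()
```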
- -### Manage ((ml)) detection rules - -You can also check the status of ((ml)) detection rules, and start or stop their associated ((ml)) jobs: - -* On the **Rules** page, the **Last response** column displays the rule's current status. An indicator icon () also appears if a required ((ml)) job isn't running. Click the icon to list the affected jobs, then click **Visit rule details page to investigate** to open the rule's details page. - - ![Rules table ((ml)) job error](../images/machine-learning/-detections-machine-learning-rules-table-ml-job-error.png) - -* On a rule's details page, check the **Definition** section to confirm whether the required ((ml)) jobs are running. Switch the toggles on or off to run or stop each job. - - ![Rule details page with ML job stopped](../images/machine-learning/-troubleshooting-rules-ts-ml-job-stopped.png) - -
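To perform the same check from a script, for example before enabling several ((ml)) rules at once, you can read job states from the job statistics API. A sketch with placeholder job IDs and credentials:

```python
# Sketch: report the state of the ML jobs that an ML rule depends on.
# Endpoint, API key, and job IDs are assumptions.
import requests

ES_URL = "https://localhost:9200"   # assumption
API_KEY = "<your-api-key>"          # assumption
HEADERS = {"Authorization": f"ApiKey {API_KEY}"}

required_jobs = ["auth_rare_user", "suspicious_login_activity"]   # assumed examples

response = requests.get(
    f"{ES_URL}/_ml/anomaly_detectors/{','.join(required_jobs)}/_stats",
    headers=HEADERS,
    timeout=30,
)
response.raise_for_status()

for job in response.json()["jobs"]:
    # An ML rule can only generate alerts while its jobs are in the "opened" state.
    print(f"{job['job_id']}: {job['state']}")
```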
- -### Prebuilt jobs - -((elastic-sec)) comes with prebuilt ((ml)) ((anomaly-jobs)) for automatically detecting -host and network anomalies. The jobs are displayed in the `Anomaly Detection` -interface. They are available when either: - -* You ship data using [Beats](https://www.elastic.co/products/beats) or the - ((agent)), and ((kib)) is configured with the required index - patterns (such as `auditbeat-*`, `filebeat-*`, `packetbeat-*`, or `winlogbeat-*` - in **Project settings** → **Management** → **Index Management**). - -Or - -* Your shipped data is ECS-compliant, and ((kib)) is configured with the shipped - data's index patterns in **Project settings** → **Management** → **Index Management**. - -Or - -* You install one or more of the Advanced Analytics integrations. - -Prebuilt job reference describes all available ((ml)) jobs and lists which ECS -fields are required on your hosts when you are not using ((beats)) or the ((agent)) -to ship your data. For information on tuning anomaly results to reduce the -number of false positives, see Optimizing anomaly results. - - -Machine learning jobs look back and analyze two weeks of historical data -prior to the time they are enabled. After jobs are enabled, they continuously -analyze incoming data. When jobs are stopped and restarted within the two-week -time frame, previously analyzed data is not processed again. - - -
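Before enabling prebuilt jobs, it can help to confirm that the index patterns they read from actually contain recent data. A quick sketch, assuming local credentials and the default Beats patterns listed above:

```python
# Sketch: count recent documents in the index patterns the prebuilt jobs expect.
# Endpoint and API key are assumptions; adjust the patterns to your data sources.
import requests

ES_URL = "https://localhost:9200"   # assumption
API_KEY = "<your-api-key>"          # assumption
HEADERS = {"Authorization": f"ApiKey {API_KEY}", "Content-Type": "application/json"}

patterns = ["auditbeat-*", "filebeat-*", "packetbeat-*", "winlogbeat-*"]

for pattern in patterns:
    response = requests.post(
        f"{ES_URL}/{pattern}/_count",
        headers=HEADERS,
        # Limit the count to the two-week window the jobs analyze when first enabled.
        json={"query": {"range": {"@timestamp": {"gte": "now-2w"}}}},
        timeout=30,
    )
    response.raise_for_status()
    print(pattern, response.json()["count"])
```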
- -## View detected anomalies -To view the `Anomalies` table widget and `Max Anomaly Score By Job` details, -the user must have the `machine_learning_admin` or `machine_learning_user` role. - - -To adjust the `score` threshold that determines which anomalies are shown, -you can modify the **`securitySolution:defaultAnomalyScore`** advanced setting. - - diff --git a/docs/serverless/advanced-entity-analytics/prebuilt-ml-jobs.mdx b/docs/serverless/advanced-entity-analytics/prebuilt-ml-jobs.mdx deleted file mode 100644 index a67a6ae8f0..0000000000 --- a/docs/serverless/advanced-entity-analytics/prebuilt-ml-jobs.mdx +++ /dev/null @@ -1,10 +0,0 @@ ---- -slug: /serverless/security/prebuilt-ml-jobs -title: Prebuilt ML job reference -# description: Description to be written -tags: [ 'serverless', 'security', 'reference' ] -status: in review ---- - - -Refer to [Prebuilt job reference](((security-guide))/prebuilt-ml-jobs.html) for information on available prebuilt ((ml)) jobs. diff --git a/docs/serverless/advanced-entity-analytics/tuning-anomaly-results.mdx b/docs/serverless/advanced-entity-analytics/tuning-anomaly-results.mdx deleted file mode 100644 index 511876c4f8..0000000000 --- a/docs/serverless/advanced-entity-analytics/tuning-anomaly-results.mdx +++ /dev/null @@ -1,171 +0,0 @@ ---- -slug: /serverless/security/tuning-anomaly-results -title: Optimizing anomaly results -description: Learn how to fine-tune and filter anomaly results. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -To gain clearer insights into real threats, you can tune the anomaly results. The following procedures help to reduce the number of false positives: - -* Tune results for rare applications and processes -* Define an anomaly threshold for a job - -
- -## Filter out anomalies from rarely used applications and processes - -When anomalies include results from a known process that only runs occasionally, -you can filter out the unwanted results. - -For example, to filter out results from a housekeeping process, named -`maintenanceservice.exe`, that only executes occasionally you need to: - -1. Create a filter list -1. Add the filter to the relevant job -1. Clone and rerun the job (optional) - -
- -### Create a filter list - -1. Go to **Machine learning** → **Anomaly Detection** → **Settings**. -1. Click **Filter Lists** and then **Create**. - - The **Create new filter list** pane is displayed. - -1. Enter a filter list ID. -1. Enter a description for the filter list (optional). -1. Click **Add item**. -1. In the **Items** textbox, enter the name of the process for which you want to - filter out anomaly results (`maintenanceservice.exe` in our example). - - ![](../images/tuning-anomaly-results/-detections-machine-learning-filter-add-item.png) - -1. Click **Add** and then **Save**. - - The new filter appears in the Filter List and can be added to relevant jobs. - -
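The same filter list can be created with the ((ml)) filters API, which is convenient if you maintain filters as code. A sketch; the filter ID, endpoint, and credentials are assumptions:

```python
# Sketch: create the filter list from this example via the ML filters API.
# Filter ID, endpoint, and API key are assumptions.
import requests

ES_URL = "https://localhost:9200"   # assumption
API_KEY = "<your-api-key>"          # assumption
HEADERS = {"Authorization": f"ApiKey {API_KEY}", "Content-Type": "application/json"}

response = requests.put(
    f"{ES_URL}/_ml/filters/housekeeping_processes",   # assumed filter list ID
    headers=HEADERS,
    json={
        "description": "Occasional housekeeping processes to exclude from anomaly results",
        "items": ["maintenanceservice.exe"],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```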
- -### Add the filter to the relevant job - -1. Go to **Machine learning** → **Anomaly Detection** → **Anomaly Explorer**. -1. Navigate to the job results for which the filter is required. If the job results - are not listed, click **Edit job selection** and select the relevant job. - -1. In the **actions** column, click the gear icon and then select _Configure rules_. - - The **Create Rule** window is displayed. - - ![](../images/tuning-anomaly-results/-detections-machine-learning-rule-scope.png) - -1. Select: - 1. _Add a filter list to limit where the rule applies_. - 1. The _WHEN_ statement for the relevant detector (`process.name` in our - example). - - 1. The _IS IN_ statement. - 1. The filter you created as part of the Create a filter list procedure. - - - For more information, see - [Customizing detectors with custom rules](((ml-docs))/ml-configuring-detector-custom-rules.html). - - -1. Click **Save**. - - -Changes to rules only affect new results. All anomalies found by the job -before the filter was added are still displayed. - - -
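Behind the scenes, the **Create Rule** window attaches a custom rule to the job's detector. If you manage jobs as code, an equivalent rule can be added with the update job API. The sketch below is an approximation: the job ID, detector index, and filter ID are assumptions, and only the rule shape follows the documented custom rules format.

```python
# Sketch: attach a custom rule that skips anomaly results when process.name is in
# the filter list created earlier. Job ID, detector index, and filter ID are assumptions.
import requests

ES_URL = "https://localhost:9200"         # assumption
API_KEY = "<your-api-key>"                # assumption
JOB_ID = "windows-rare-network-process"   # assumption: the job you edited in the UI
HEADERS = {"Authorization": f"ApiKey {API_KEY}", "Content-Type": "application/json"}

custom_rule = {
    "actions": ["skip_result"],   # hide the anomaly result but keep modeling the data
    "scope": {
        "process.name": {
            "filter_id": "housekeeping_processes",   # assumed filter list ID
            "filter_type": "include",                # apply when the value is in the filter
        }
    },
}

response = requests.post(
    f"{ES_URL}/_ml/anomaly_detectors/{JOB_ID}/_update",
    headers=HEADERS,
    json={"detectors": [{"detector_index": 0, "custom_rules": [custom_rule]}]},
    timeout=30,
)
response.raise_for_status()
```

As with rules configured in the UI, a rule added this way only affects results generated after the change.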
- -### Clone and rerun the job - -If you want to remove all the previously detected results for the process, you -must clone and run the cloned job. - - -Running the cloned job can take some time. Only run the job after you -have completed all job rule changes. - - -1. Go to **Machine learning** → **Anomaly Detection** → **Jobs**. -1. Navigate to the job for which you configured the rule. -{/* Should this be "Navigate to the job that you want to clone"? */} -1. Optionally, expand the job row and click **JSON** to verify the configured filter - appears under `custom rules` in the JSON code. - -1. In the **actions** column, click the options menu () and select **Clone job**. - - The **Configure datafeed** page is displayed. - -1. Click **Data Preview** and check the data is displayed without errors. -{/* Unable to verify this step - don't think it exists anymore. */} -1. Click **Next** until the **Job details** page is displayed. -1. Enter a Job ID for the cloned job that indicates it is an iteration of the - original one. For example, append a number or a username to the original job - name, such as `windows-rare-network-process-2`. - - ![](../images/tuning-anomaly-results/-detections-machine-learning-cloned-job-details.png) - -1. Click **Next** and check the job validates without errors. You can ignore - warnings about multiple influencers. - -1. Click **Next** and then **Create job**. - - The **Start \** window is displayed. - {/* This page doesn't display. */} - - ![](../images/tuning-anomaly-results/-detections-machine-learning-start-job-window.png) - -1. Select the point of time from which the job will analyze anomalies. -{/* Users can't do this. I think their only option is to start the job in real time. */} -1. Click **Start**. - - After a while, results will start to appear on the **Anomaly Explorer** page. - -
- -## Define an anomaly threshold for a job - -{/* Unable to test these steps because I don't have the privs needed to enable/run ML jobs */} - -Certain jobs use a high-count function to look for unusual spikes in -process events. For some processes, a burst of activity is a normal, such as -automation and housekeeping jobs running on server fleets. However, sometimes a -high-delta event count is unlikely to be the result of routine behavior. In -these cases, you can define a minimum threshold for when a high-event count is -considered an anomaly. - -Depending on your anomaly detection results, you may want to set a -minimum event count threshold for the `packetbeat_dns_tunneling` job: - -1. Go to **Machine learning** → **Anomaly Detection** → **Anomaly Explorer**. -1. Navigate to the job results for the `packetbeat_dns_tunneling` job. If the - job results are not listed, click **Edit job selection** and select - `packetbeat_dns_tunneling`. - -1. In the **actions** column, click the gear icon and then select - **Configure rules**. - - The **Create Rule** window is displayed. - - ![](../images/tuning-anomaly-results/-detections-machine-learning-ml-rule-threshold.png) - -1. Select **Add numeric conditions for when the rule applies** and the following - `when` statement: - - _WHEN actual IS GREATER THAN \_ - - Where `` is the threshold above which anomalies are detected. - -1. Click **Save**. -1. To apply the new threshold, rerun the job (**Job Management** → **Actions** → **Start datafeed**). -{/* Re-added the part that was missing from this step (might've not been migrated over), but am unable to verify this step because idk where the Job Management page is. */} - diff --git a/docs/serverless/advanced-entity-analytics/turn-on-risk-engine.mdx b/docs/serverless/advanced-entity-analytics/turn-on-risk-engine.mdx deleted file mode 100644 index a0c4751a33..0000000000 --- a/docs/serverless/advanced-entity-analytics/turn-on-risk-engine.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -slug: /serverless/security/turn-on-risk-engine -title: Turn on the risk scoring engine -description: Start generating host and user risk scores. -tags: [ 'serverless', 'security', 'how-to', 'manage' ] -status: in review ---- - - - - -To use entity risk scoring, you must have the appropriate user role. For more information, refer to . - - -## Preview risky entities - -You can preview risky entities before installing the risk engine. The preview shows the riskiest hosts and users found in the 1000 sampled entities during the time frame selected in the date picker. - - -The preview is limited to two risk scores per ((serverless-short)) ((security)) project. - - -To preview risky entities, go to **Project settings** → **Management** → **Entity Risk Score**: - -![Preview of risky entities](../images/turn-on-risk-engine/preview-risky-entities.png) - -## Turn on the risk engine - - -To view risk score data, you must have alerts generated in your environment. - - -If you're installing the risk scoring engine for the first time: - -1. Go to **Project settings** → **Management** → **Entity Risk Score**. -1. Turn the **Entity risk score** toggle on. 
- -![Turn on entity risk scoring](../images/turn-on-risk-engine/turn-on-risk-engine.png) diff --git a/docs/serverless/alerts/alert-schema.mdx b/docs/serverless/alerts/alert-schema.mdx deleted file mode 100644 index fb407cbf92..0000000000 --- a/docs/serverless/alerts/alert-schema.mdx +++ /dev/null @@ -1,907 +0,0 @@ ---- -slug: /serverless/security/alert-schema -title: Alert schema -description: The alert schema describes all the fields present in alert events. -tags: ["serverless","security","alerting","reference","manage"] -status: in review ---- - - -
- -((elastic-sec)) stores alerts that have been generated by detection rules in hidden ((es)) indices. The index pattern is `.alerts-security.alerts-`. - - -Users are advised NOT to use the `_source` field in alert documents, but rather to use the `fields` option in the search API to programmatically obtain the list of fields used in these documents. Learn more about [retrieving selected fields from a search](((ref))/search-fields.html). - - - -The non-ECS fields listed below are beta and subject to change. - - - - - `@timestamp` - - ECS field, represents the time when the alert was created or most recently updated. - - - - - `message` - - ECS field copied from the source document, if present, for custom query and indicator match rules. - - - - - `tags` - - ECS field copied from the source document, if present, for custom query and indicator match rules. - - - - - `labels` - - ECS field copied from the source document, if present, for custom query and indicator match rules. - - - - - `ecs.version` - - ECS mapping version of the alert. - - - - - `event.kind` - - ECS field, always `signal` for alert documents. - - - - - `event.category` - - ECS field, copied from the source document, if present, for custom query and indicator match rules. - - - - - `event.type` - - ECS field, copied from the source document, if present, for custom query and indicator match rules. - - - - - `event.outcome` - - ECS field, copied from the source document, if present, for custom query and indicator match rules. - - - - - `agent.*` - - ECS `agent.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `client.*` - - ECS `client.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `cloud.*` - - ECS `cloud.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `container.*` - - ECS `container.* fields` copied from the source document, if present, for custom query and indicator match rules. - - - - - `data_stream.*` - - ECS `data_stream.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - These fields may be constant keywords in the source documents, but are copied into the alert documents as keywords. - - - - - - - - `destination.*` - - ECS `destination.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `dll.*` - - ECS `dll.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `dns.*` - - ECS `dns.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `error.*` - - ECS `error.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `event.*` - - ECS `event.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - categorization fields above (`event.kind`, `event.category`, `event.type`, `event.outcome`) are listed separately above. - - - - - - - - `file.*` - - ECS `file.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `group.*` - - ECS `group.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `host.*` - - ECS `host.*` fields copied from the source document, if present, for custom query and indicator match rules. 
- - - - - `http.*` - - ECS `http.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `log.*` - - ECS `log.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `network.*` - - ECS `network.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `observer.*` - - ECS `observer.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `orchestrator.*` - - ECS `orchestrator.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `organization.*` - - ECS `organization.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `package.*` - - ECS `package.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `process.*` - - ECS `process.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `registry.*` - - ECS `registry.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `related.*` - - ECS `related.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `rule.*` - - ECS `rule.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - These fields are not related to the detection rule that generated the alert. - - - - - - - - `server.*` - - ECS `server.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `service.*` - - ECS `service.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `source.*` - - ECS `source.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `span.*` - - ECS `span.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `threat.*` - - ECS `threat.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `tls.*` - - ECS `tls.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `trace.*` - - ECS `trace.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `transaction.*` - - ECS `transaction.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `url.*` - - ECS `url.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `user.*` - - ECS `user.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `user_agent.*` - - ECS `user_agent.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `vulnerability.*` - - ECS `vulnerability.*` fields copied from the source document, if present, for custom query and indicator match rules. - - - - - `kibana.alert.ancestors.*` - - Type: object - - - - - `kibana.alert.depth` - - Type: Long - - - - - `kibana.alert.new_terms` - - The value of the new term that generated this alert. 
- - Type: keyword - - - - - `kibana.alert.original_event.*` - - Type: object - - - - - `kibana.alert.original_time` - - The value copied from the source event (`@timestamp`). - - Type: date - - - - - `kibana.alert.reason` - - Type: keyword - - - - - `kibana.alert.rule.author` - - The value of the `author` who created the rule. Refer to configure advanced rule settings. - - Type: keyword - - - - - `kibana.alert.building_block_type` - - The value of `building_block_type` from the rule that generated this alert. Refer to configure advanced rule settings. - - Type: keyword - - - - - `kibana.alert.rule.created_at` - - The value of `created.at` from the rule that generated this alert. - - Type: date - - - - - `kibana.alert.rule.created_by` - - Type: keyword - - - - - `kibana.alert.rule.description` - - Type: keyword - - - - - `kibana.alert.rule.enabled` - - Type: keyword - - - - - `kibana.alert.rule.false_positives` - - Type: keyword - - - - - `kibana.alert.rule.from` - - Type: keyword - - - - - `kibana.alert.rule.uuid` - - Type: keyword - - - - - `kibana.alert.rule.immutable` - - Type: keyword - - - - - `kibana.alert.rule.interval` - - Type: keyword - - - - - `kibana.alert.rule.license` - - Type: keyword - - - - - `kibana.alert.rule.max_signals` - - Type: long - - - - - `kibana.alert.rule.name` - - Type: keyword - - - - - `kibana.alert.rule.note` - - Type: keyword - - - - - `kibana.alert.rule.references` - - Type: keyword - - - - - `kibana.alert.risk_score` - - Type: float - - - - - `kibana.alert.rule.rule_id` - - Type: keyword - - - - - `kibana.alert.rule.rule_name_override` - - Type: keyword - - - - - `kibana.alert.severity` - - Alert severity, populated by the `rule_type` at alert creation. Must have a value of `low`, `medium`, `high`, `critical`. - - Type: keyword - - - - - `kibana.alert.rule.tags` - - Type: keyword - - - - - `kibana.alert.rule.threat.*` - - Type: object - - - - - `kibana.alert.rule.timeline_id` - - Type: keyword - - - - - `kibana.alert.rule.timeline_title` - - Type: keyword - - - - - `kibana.alert.rule.timestamp_override` - - Type: keyword - - - - - `kibana.alert.rule.to` - - Type: keyword - - - - - `kibana.alert.rule.type` - - Type: keyword - - - - - `kibana.alert.rule.updated_at` - - Type: date - - - - - `kibana.alert.rule.updated_by` - - Type: keyword - - - - - `kibana.alert.rule.version` - - A number that represents a rule's version. - - Type: keyword - - - - - `kibana.alert.rule.revision` - - A number that gets incremented each time you edit a rule. - - Type: long - - - - - `kibana.alert.workflow_status` - - Type: keyword - - - - - `kibana.alert.workflow_status_updated_at` - - The timestamp of when the alert's status was last updated. - - Type: date - - - - `kibana.alert.threshold_result.*` - - Type: object - - - - - `kibana.alert.group.id` - - Type: keyword - - - - - `kibana.alert.group.index` - - Type: integer - - - - - `kibana.alert.rule.parameters.index` - - Type: flattened - - - - - `kibana.alert.rule.parameters.language` - - Type: flattened - - - - - `kibana.alert.rule.parameters.query` - - Type: flattened - - - - - `kibana.alert.rule.parameters.risk_score_mapping` - - Type: flattened - - - - - `kibana.alert.rule.parameters.saved_id` - - Type: flattened - - - - - `kibana.alert.rule.parameters.severity_mapping` - - Type: flattened - - - - - `kibana.alert.rule.parameters.threat_filters` - - Type: flattened - - - - - `kibana.alert.rule.parameters.threat_index` - - Names of the indicator indices. 
- - Type: flattened - - - - - `kibana.alert.rule.parameters.threat_indicator_path` - - Type: flattened - - - - - `kibana.alert.rule.parameters.threat_language` - - Type: flattened - - - - - `kibana.alert.rule.parameters.threat_mapping.*` - - Controls which fields will be compared in the indicator and source documents. - - Type: flattened - - - - - `kibana.alert.rule.parameters.threat_query` - - Type: flattened - - - - - `kibana.alert.rule.parameters.threshold.*` - - Type: flattened - - - - - `kibana.space_ids` - - Type: keyword - - - - - `kibana.alert.rule.consumer` - - Type: keyword - - - - - `kibana.alert.status` - - Type: keyword - - - - - `kibana.alert.rule.category` - - Type: keyword - - - - - `kibana.alert.rule.execution.uuid` - - Type: keyword - - - - - `kibana.alert.rule.producer` - - Type: keyword - - - - - `kibana.alert.rule.rule_type_id` - - Type: keyword - - - - - - `kibana.alert.suppression.terms.field` - - The fields used to group alerts for suppression. - - Type: keyword - - - - - `kibana.alert.suppression.terms.value` - - The values in the suppression fields. - - Type: keyword - - - - - `kibana.alert.suppression.start` - - The timestamp of the first document in the suppression group. - - Type: date - - - - - `kibana.alert.suppression.end` - - The timestamp of the last document in the suppression group. - - Type: date - - - - - `kibana.alert.suppression.docs_count` - - The number of suppressed alerts. - - Type: long - - - - - - `kibana.alert.url` - - The shareable URL for the alert. - - - This field only appears if you've set the [`server.publicBaseUrl`](((kibana-ref))/settings.html#server-publicBaseUrl) configuration setting in the `kibana.yml` file. - - - - Type: long - - - - - - `kibana.alert.workflow_tags` - - List of tags added to an alert. - - This field can contain an array of values, for example: `["False Positive", "production"]` - - Type: keyword - - - - - - `kibana.alert.workflow_assignee_ids` - - List of users assigned to an alert. - - An array of unique identifiers (UIDs) for user profiles, for example: `["u_1-0CcWliOCQ9T2MrK5YDjhpxZ_AcxPKt3pwaICcnAUY_0, u_2-0CcWliOCQ9T2MrK5YDjhpxZ_AcxPKt3pwaICcnAUY_1"]` - - UIDs are linked to user profiles that are automatically created when users first log into a project. These profiles contain names, emails, profile avatars, and other user settings. - - Type: string[] - - - - diff --git a/docs/serverless/alerts/alert-suppression.mdx b/docs/serverless/alerts/alert-suppression.mdx deleted file mode 100644 index 462c03402e..0000000000 --- a/docs/serverless/alerts/alert-suppression.mdx +++ /dev/null @@ -1,115 +0,0 @@ ---- -slug: /serverless/security/alert-suppression -title: Suppress detection alerts -description: Reduce noise from rules that create repeated or duplicate alerts. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- - -Alert suppression is in technical preview for threshold, indicator match, event correlation, and new terms rules. The functionality may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. - - -Alert suppression allows you to reduce the number of repeated or duplicate detection alerts created by these detection rule types: - -* -* -* -* (non-sequence queries only) -* -* -* - -Normally, when a rule meets its criteria repeatedly, it creates multiple alerts, one for each time the rule's criteria are met. When alert suppression is configured, duplicate qualifying events are grouped, and only one alert is created for each group. Depending on the rule type, you can configure alert suppression to create alerts each time the rule runs, or once within a specified time window. You can also specify multiple fields to group events by unique combinations of values. - -The ((security-app)) displays several indicators in the Alerts table and the alert details flyout when a detection alert is created with alert suppression enabled. You can view the original events associated with suppressed alerts by investigating the alert in Timeline. - - -Alert suppression is not available for Elastic prebuilt rules. However, if you want to suppress alerts for a prebuilt rule, you can duplicate it, then configure alert suppression on the duplicated rule. - - -## Configure alert suppression - -You can configure alert suppression when you create or edit a supported rule type. Refer to documentation for creating , , , , , , or rules for detailed instructions. - -1. When configuring the rule type (the **Define rule** step for a new rule, or the **Definition** tab for an existing rule), specify how you want to group events for alert suppression: - - * **Custom query rule, indicator match, threshold, event correlation (non-sequence queries only), new terms, ((esql)), or ((ml)) rules:** In **Suppress alerts by**, enter 1-3 field names to group events by the fields' values. - * **Threshold rule:** In **Group by**, enter up to 3 field names to group events by the fields' values, or leave the setting empty to group all qualifying events together. - - - If you specify a field with multiple values, alerts with that field are handled as follows: - - * **Custom query or threshold rules:** Alerts are grouped by each unique value. For example, if you suppress alerts by `destination.ip` of `[127.0.0.1, 127.0.0.2, 127.0.0.3]`, alerts will be suppressed separately for each value of `127.0.0.1`, `127.0.0.2`, and `127.0.0.3`. - * **Indicator match, event correlation (non-sequence queries only), new terms, ((esql)), or ((ml)) rules:** Alerts with the specified field name and identical array values are grouped together. For example, if you suppress alerts by `destination.ip` of `[127.0.0.1, 127.0.0.2, 127.0.0.3]`, alerts with the entire array are grouped and only one alert is created for the group. - - - -1. If available, select how often to create alerts for duplicate events: - - - Both options are available for custom query, indicator match, event correlation, new terms, ((esql)), and ((ml)) rules. Threshold rules only have the **Per time period** option. - - - * **Per rule execution**: Create an alert each time the rule runs and an event meets its criteria. 
* **Per time period**: Create one alert for all qualifying events that occur within a specified time window, beginning from when an event first meets the rule criteria and creates the alert. - For example, if a rule runs every 5 minutes but you don't need alerts that frequently, you can set the suppression time period to a longer time, such as 1 hour. If the rule meets its criteria, it creates an alert at that time, and for the next hour, it'll suppress any subsequent qualifying events. - -1. Under **If a suppression field is missing**, choose how to handle events with missing suppression fields (events in which one or more of the **Suppress alerts by** fields don't exist): - - These options are not available for threshold rules. - - * **Suppress and group alerts for events with missing fields**: Create one alert for each group of events with missing fields. Missing fields get a `null` value, which is used to group and suppress alerts. - * **Do not suppress alerts for events with missing fields**: Create a separate alert for each matching event. This basically falls back to normal alert creation for events with missing suppression fields. -1. Configure other rule settings, then save and enable the rule. - -* Use the **Rule preview** before saving the rule to visualize how alert suppression will affect the alerts created, based on historical data. -* If a rule times out while suppression is turned on, try shortening the rule's look-back time or turn off suppression to improve the rule's performance. - -## Confirm suppressed alerts - The ((security-app)) displays several indicators of whether a detection alert was created with alert suppression enabled, and how many duplicate alerts were suppressed. - After an alert is moved to the `Closed` status, it will no longer suppress new alerts. To prevent interruptions or unexpected changes in suppression, avoid closing alerts before the suppression interval ends. - -* **Alerts** table — Icon in the **Rule** column. Hover to display the number of suppressed alerts: - -* **Alerts** table — Column for suppressed alerts count. Select **Fields** to open the fields browser, then add `kibana.alert.suppression.docs_count` to the table. - -* Alert details flyout — **Insights** section: - -## Investigate events for suppressed alerts - With alert suppression, detection alerts aren't created for the grouped source events, but you can still retrieve the events for further analysis or investigation. Do one of the following to open Timeline with the original events associated with both the created alert and the suppressed alerts: -* **Alerts** table — Select **Investigate in timeline** in the **Actions** column. - -* Alert details flyout — Select **Take action** → **Investigate in timeline**. -## Alert suppression limit by rule type - Some rule types have a maximum number of alerts that can be suppressed (custom query rules don't have a suppression limit): -* **Threshold, event correlation (non-sequence queries only), ((esql)), and ((ml)):** The maximum number is the value you choose for the rule's **Max alerts per run** advanced setting, which is `100` by default. -* **Indicator match and new terms:** The maximum number is five times the value you choose for the rule's **Max alerts per run** advanced setting. The default value is `100`, which means the default maximum limit for indicator match rules and new terms rules is `500`.
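To measure how much duplication suppression is removing, you can also query the alerts index for the `kibana.alert.suppression.*` fields directly. A sketch, assuming the default space's alerts index and placeholder credentials:

```python
# Sketch: list recent alerts that suppressed at least one duplicate, using the
# kibana.alert.suppression.* fields. Index name and credentials are assumptions.
import requests

ES_URL = "https://localhost:9200"                  # assumption
API_KEY = "<your-api-key>"                         # assumption
HEADERS = {"Authorization": f"ApiKey {API_KEY}", "Content-Type": "application/json"}
ALERTS_INDEX = ".alerts-security.alerts-default"   # assumption: default space

query = {
    "size": 20,
    "_source": False,
    "fields": [
        "@timestamp",
        "kibana.alert.rule.name",
        "kibana.alert.suppression.docs_count",
    ],
    "query": {
        "bool": {
            "filter": [
                {"range": {"@timestamp": {"gte": "now-24h"}}},
                {"range": {"kibana.alert.suppression.docs_count": {"gte": 1}}},
            ]
        }
    },
    "sort": [{"kibana.alert.suppression.docs_count": "desc"}],
}

response = requests.post(f"{ES_URL}/{ALERTS_INDEX}/_search", headers=HEADERS, json=query, timeout=30)
response.raise_for_status()

for hit in response.json()["hits"]["hits"]:
    print(hit["fields"])
```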
\ No newline at end of file diff --git a/docs/serverless/alerts/alerts-ui-manage.mdx b/docs/serverless/alerts/alerts-ui-manage.mdx deleted file mode 100644 index f398a48750..0000000000 --- a/docs/serverless/alerts/alerts-ui-manage.mdx +++ /dev/null @@ -1,326 +0,0 @@ ---- -slug: /serverless/security/alerts-manage -title: Manage detection alerts -description: Filter alerts, view trends, and start investigating and analyzing detections on the Alerts page. -tags: ["serverless","security","alerting","how-to","manage"] -status: in review ---- - - -
- -The Alerts page displays all detection alerts. - -![Alerts page overview](../images/alerts-ui-manage/-detections-alert-page.png) - -
- -## View and filter detection alerts -The Alerts page offers various ways for you to organize and triage detection alerts as you investigate suspicious events. You can: - -* View an alert's details. Click the **View details** button from the Alerts table to open the alert details flyout. Learn more at View detection alert details. - - - -* View the rule that created an alert. Click a name in the **Rule** column to open the rule's details page. - -* View the details of the host and user associated with the alert. In the Alerts table, click a host name to open the host details flyout, or a user name to open the user details flyout. - -* Filter for a specific rule in the KQL bar (for example, `kibana.alert.rule.name :"SSH (Secure Shell) from the Internet"`). KQL autocomplete is available for `.alerts-security.alerts-*` indices. - -* Use the date and time filter to define a specific time range. By default, this filter is set to search the last 24 hours. - -* Use the drop-down filter controls to filter alerts by up to four fields. By default, you can filter alerts by **Status**, **Severity**, **User**, and **Host**, and you can edit the controls to use other fields. - -* Visualize and group alerts by specific fields in the visualization section. Use the buttons on the left to select a view type (**Summary**, **Trend**, **Counts**, or **Treemap**), and use the menus on the right to select the ECS fields used for grouping alerts. Refer to Visualize detection alerts for more on each view type. - -* Hover over a value to display available inline actions, such as **Filter In**, **Filter Out**, and **Add to timeline**. Click the expand icon for more options, including **Show top _x_** and **Copy to Clipboard**. The available options vary based on the type of data. - - - -* Filter alert results to include building block alerts or to only show alerts from indicator match rules by selecting the **Additional filters** drop-down. By default, building block alerts are excluded from the Overview and Alerts pages. You can choose to include building block alerts on the Alerts page, which expands the number of alerts. - - ![Alerts table with Additional filters menu highlighted](../images/alerts-ui-manage/-detections-additional-filters.png) - - - When updating alert results to include building block alerts, the Security app searches the `.alerts-security.alerts-` index for the `kibana.alert.building_block_type` field. When looking for alerts created from indicator match rules, the app searches the same index for `kibana.alert.rule.type:'threat_match'`. - - -* View detection alerts generated by a specific rule. Go to **Rules** → **Detection rules (SIEM)**, then select a rule name in the table. The rule details page displays a comprehensive view of the rule's settings, and the Alerts table under the Trend histogram displays the alerts associated with the rule, including alerts from any previous or deleted revision of that rule. - - - -## Edit drop-down filter controls - -By default, the drop-down controls on the Alerts page filter alerts by **Status**, **Severity**, **User**, and **Host**. You can edit them to filter by different fields, as well as remove, add, and reorder them if you prefer a different order. - -![Alerts page with drop-down controls highlighted](../images/alerts-ui-manage/-detections-alert-page-dropdown-controls.png) - - - -* You can have a maximum of four controls on the Alerts page. -* You can't remove the **Status** control. 
-* If you make any changes to the controls, you _must_ save the pending changes for them to persist. -* Saved changes are stored in your browser's local storage, not your [user profile](((ref))/user-profile.html). If you clear your browser's storage or log into your user profile from a different browser, you will lose your customizations. - - - -1. Click the three-dot icon next to the controls (), then select **Edit Controls**. - -1. Do any of the following: - - * To reorder controls, click and drag a control by its handle (). - - * To remove a control, hover over it and select **Remove control** (). - - * To edit a control, hover over it and select **Edit control** (). - - * To add a new control, click **Add Controls** (). If you already have four controls, you must first remove one to make room for the new one. - -1. If you're editing or adding a control, do the following in the configuration flyout that opens: - - 1. In the **Field** list, select the field for the filter. The **Control type** is automatically applied to the field you selected. - - 1. Enter a **Label** to identify the control. - - 1. Click **Save and close**. - -1. Click **Save pending changes** (). - -
- -## Group alerts - -You can group alerts by rule name, user name, host name, source IP address, or any other field. Select **Group alerts by**, then select an option or **Custom field** to specify a different field. - -Select up to three fields for grouping alerts. The groups will nest in the order you selected them, and the nesting order is displayed above the table next to **Group alerts by**. - -![Alerts table with Group alerts by drop-down](../images/alerts-ui-manage/-detections-group-alerts.png) - -Each group displays information such as the alerts' severity and how many users, hosts, and alerts are in the group. The information displayed varies depending on the selected fields. - -To interact with grouped alerts: - -* Select the **Take actions** menu to perform a bulk action on all alerts in a group, such as changing their status. - -* Click a group's name or the expand icon () to display alerts within that group. You can filter and customize this view like any other alerts table. - - ![Expanded alert group with alerts table](../images/alerts-ui-manage/-detections-group-alerts-expand.png) - -
## Customize the Alerts table
Use the toolbar buttons in the upper-left of the Alerts table to customize the columns you want displayed:

* **Columns**: Reorder the columns.
* **_x_ fields sorted**: Sort the table by one or more columns.
* **Fields**: Select the fields to display in the table. You can also add runtime fields to detection alerts and display them in the Alerts table.

Click the **Full screen** button in the upper-right to view the table in full-screen mode.

![Alerts table with toolbar buttons highlighted](../images/alerts-ui-manage/-detections-alert-table-toolbar-buttons.png)

Use the view options drop-down in the upper-right of the Alerts table to control how alerts are displayed:

* **Grid view**: Displays alerts in a traditional table view with columns for each field.
* **Event rendered view**: Displays alerts in a descriptive event flow that includes relevant details and context about the event.

![Alerts table with the Event rendered view enabled](../images/alerts-ui-manage/-detections-event-rendered-view.png)

When using grid view, you can view alert reason statements and event renderings for specific alerts by clicking the expand icon in the **Reason** column. Some events do not have event renderings.
- -## Take actions on an alert -From the Alerts table or the alert details flyout, you can: - -* Add detection alerts to cases -* Change an alert's status -* Add a rule exception from an alert -* Apply and filter alert tags -* Assign users to alerts -* Filter assigned alerts -* Add an endpoint exception from an alert -* Isolate an alert's host -* Perform response actions on an alert's host (Alert details flyout only) -* Run Osquery against an alert -* View alerts in Timeline -* Visually analyze an alert's process relationships - -
- -### Change an alert's status - -You can set an alert's status to indicate whether it needs to be investigated -(**Open**), is under active investigation (**Acknowledged**), or has been resolved -(**Closed**). By default, the Alerts page displays open alerts. To filter alerts that are **Acknowledged** or **Closed**, use the **Status** drop-down filter at the top of the Alerts page. - -To change an alert's status, do one of the following: - -* In the Alerts table, click **More actions** (**...**) in the alert's row, then select a status. - -* In the Alerts table, select the alerts you want to change, click **Selected _x_ alerts** at the upper-left above the table, and then select a status. - - - -* To bulk-change the status of grouped alerts, select the **Take actions** menu for the group, then select a status. - -* In an alert's details flyout, click **Take action** and select a status. - -
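If you prefer to filter by status from the KQL search bar instead of the **Status** drop-down, you can query the workflow status field directly. A minimal example, assuming the default `kibana.alert.workflow_status` field (which stores `open`, `acknowledged`, or `closed`):

```
kibana.alert.workflow_status: "acknowledged"
```

You can combine it with other filters, for example `kibana.alert.workflow_status: "open" and kibana.alert.severity: "critical"`.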
- -### Apply and filter alert tags - -Use alert tags to organize related alerts into categories that you can filter and group. For example, use the `False Positive` alert tag to label a group of alerts as false positives. Then, search for them by entering the `kibana.alert.workflow_tags : "False Positive"` query into the KQL bar. Alternatively, use the Alert table's drop-down filters to filter for tagged alerts. - - -You can manage alert tag options by updating the `securitySolution:alertTags` advanced setting. Refer to Manage alert tag options for more information. - - - -To display alert tags in the Alerts table, click **Fields** and add the `kibana.alert.workflow_tags` field. - - -To apply or remove alert tags on individual alerts, do one of the following: - - * In the Alerts table, click **More actions** (**...**) in an alert's row, then click **Apply alert tags**. Select or unselect tags, then click **Apply tags**. - * In an alert’s details flyout, click **Take action → Apply alert tags**. Select or unselect tags, then click **Apply tags**. - -To apply or remove alert tags on multiple alerts, select the alerts you want to change, then click **Selected _x_ alerts** at the upper-left above the table. Click **Apply alert tags**, select or unselect tags, then click **Apply tags**. - -![Bulk action menu with multiple alerts selected, 450](../images/alerts-ui-manage/-detections-bulk-apply-alert-tag.png) - -
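To extend the single-tag query shown above, you can also combine several tags in one KQL filter. A small sketch, assuming the default tag options include `Duplicate` and `False Positive`:

```
kibana.alert.workflow_tags: ("False Positive" or "Duplicate")
```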
- -### Assign users to alerts - -Assign users to alerts that you want them to investigate, and manage alert assignees throughout an alert's lifecycle. - - -All Security roles, except for the Viewer role, can assign and unassign users to alerts. - - - -Users are not notified when they've been assigned to, or unassigned from, alerts. - - - - - Assign users to an alert - - Choose one of the following: - * **Alerts table** - Click **More actions** (**...**) in an alert's row, then click **Assign alert**. Select users, then click **Apply**. - - * **Alert details flyout** - Click **Take action → Assign alert**. Alternatively, click the **Assign alert** icon () at the top of the alert details flyout, select users, then click **Apply**. - - - - - - Unassign all users from an alert - - Choose one of the following: - * **Alerts table** - Click **More actions** (**...**) in an alert's row, then click **Unassign alert**. - * **Alert details flyout** - Click **Take action → Unassign alert**. - - - - - - Assign users to multiple alerts - - From the Alerts table, select the alerts you want to change. Click **Selected _x_ alerts** at the upper-left above the table, then click **Assign alert**. Select users, then click **Apply**. - - - Users assigned to some of the selected alerts will be displayed as unassigned in the selection list. Selecting said users will assign them to all alerts they haven't been assigned to yet. - - - - - - - Unassign users from multiple alerts - - From the Alerts table, select the alerts you want to change and click **Selected _x_ alerts** at the upper-left above the table. Click **Unassign alert** to remove users from the alert. - - - - - -Show users that have been assigned to alerts by adding the **Assignees** column to the Alerts table (**Fields** → `kibana.alert.workflow_assignee_ids`). Up to four assigned users can appear in the **Assignees** column. If an alert is assigned to five or more users, a number appears instead. - - - -Assigned users are automatically displayed in the alert details flyout. Up to two assigned users can be shown in the flyout. If an alert is assigned to three or more users, a numbered badge displays instead. - - - -
- -### Filter assigned alerts - -Click the **Assignees** filter above the Alerts table, then select the users you want to filter by. - - - -
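If you'd rather find assigned or unassigned alerts from the KQL search bar, you can query the assignees field directly. A minimal example, assuming the `kibana.alert.workflow_assignee_ids` field mentioned above (it stores user profile IDs rather than user names):

```
not kibana.alert.workflow_assignee_ids: *
```

This returns alerts with no assignees; drop the `not` to return only alerts that have at least one assignee.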
- -### Add a rule exception from an alert - -You can add exceptions to the rule that generated an alert directly from the -Alerts table. Exceptions prevent a rule from generating alerts even when its -criteria are met. - -To add an exception, click the **More actions** menu (**...**) in the Alerts table, then select -**Add exception**. Alternatively, select **Take action** → **Add rule exception** in the alert details flyout. - -For information about exceptions and how to use them, refer to -Add and manage exceptions. - -
- -### View alerts in Timeline - -* To view a single alert in Timeline, click the **Investigate in timeline** button in the Alerts table. Alternatively, select **Take action** → **Investigate in timeline** in the alert details flyout. - - - -* To view multiple alerts in Timeline (up to 2,000), select the checkboxes next to the alerts, then click **Selected _x_ alerts** → **Investigate in timeline**. - - - - -When you send an alert generated by a -threshold rule to Timeline, all matching events are -listed in the Timeline, even ones that did not reach the threshold value. For -example, if you have an alert generated by a threshold rule that detects 10 -failed login attempts, when you send that alert to Timeline, all failed login -attempts detected by the rule are listed. - - -Suppose the rule that generated the alert uses a Timeline template. In this case, when you investigate the alert in Timeline, the dropzone query values defined in the template are replaced with their corresponding alert values. - -**Example** - -This Timeline template uses the `host.name: "{host.name}"` dropzone filter in -the rule. When alerts generated by the rule are investigated in Timeline, the -`{host.name}` value is replaced with the alert's `host.name` value. If the -alerts's `host.name` value is `Windows-ArsenalFC`, the Timeline dropzone query -is `host.name: "Windows-ArsenalFC"`. - - -Refer to Investigate events in Timeline for information on creating Timelines and Timeline -templates. For information on how to add Timeline templates to rules, refer to . - - diff --git a/docs/serverless/alerts/query-alert-indices.mdx b/docs/serverless/alerts/query-alert-indices.mdx deleted file mode 100644 index 227bfc06cc..0000000000 --- a/docs/serverless/alerts/query-alert-indices.mdx +++ /dev/null @@ -1,18 +0,0 @@ ---- -slug: /serverless/security/query-alert-indices -title: Query alert indices -description: Index patterns for querying alert data. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
This page explains how you should query alert indices, for example, when building rule queries, custom dashboards, or visualizations. For more information about alert event field definitions, review the Alert schema.

## Alert index aliases
We recommend querying the `.alerts-security.alerts-<space ID>` index alias, replacing `<space ID>` with the ID of the space whose alerts you want to query. You should not include a dash or wildcard after the space ID. To query all spaces, use the following syntax: `.alerts-security.alerts-*`.

## Alert indices
For additional context, alert events are stored in hidden ((es)) indices. We do not recommend querying them directly. The naming convention for these indices and their aliases is `.internal.alerts-security.alerts-<space ID>-NNNNNN`, where `NNNNNN` is a number that increases over time, starting from 000001.
diff --git a/docs/serverless/alerts/reduce-notifications-alerts.mdx b/docs/serverless/alerts/reduce-notifications-alerts.mdx
deleted file mode 100644
index d4fcb3c3b4..0000000000
--- a/docs/serverless/alerts/reduce-notifications-alerts.mdx
+++ /dev/null
@@ -1,74 +0,0 @@
----
-slug: /serverless/security/reduce-notifications-alerts
-title: Reduce notifications and alerts
-description: A comparison of alert-reduction features.
-tags: [ 'serverless', 'security', 'how-to' ]
-status: in review
----
((elastic-sec)) offers several features to help reduce the number of notifications and alerts generated by your detection rules. This table provides a general comparison of these features, with links for more details:

| Feature | What it does | When to use it |
|---|---|---|
| Rule action snoozing | **_Stops a specific rule's notification actions from running_**. | Use to avoid unnecessary notifications from a specific rule. The rule continues to run and generate alerts during the snooze period, but its notification actions don't run. |
| [Maintenance window](((kibana-ref))/maintenance-windows.html) | **_Prevents all rules' notification actions from running_**. | Use to avoid false alarms and unnecessary notifications during planned outages. All rules continue to run and generate alerts during the maintenance window, but their notification actions don't run. |
| Alert suppression | **_Reduces repeated or duplicate alerts_**. | Use to reduce the number of alerts created when a rule meets its criteria repeatedly. Duplicate qualifying events are grouped, and only one alert is created for each group. |
| Rule exception | **_Prevents a rule from creating alerts under specific conditions_**. | Use to reduce false positive alerts by preventing trusted processes and network activity from generating unnecessary alerts. You can configure an exception to be used by a single rule or shared among multiple rules, but they typically don't affect _all_ rules. |
diff --git a/docs/serverless/alerts/signals-to-cases.mdx b/docs/serverless/alerts/signals-to-cases.mdx
deleted file mode 100644
index 589b3cc780..0000000000
--- a/docs/serverless/alerts/signals-to-cases.mdx
+++ /dev/null
@@ -1,60 +0,0 @@
----
-slug: /serverless/security/signals-to-cases
-title: Add detection alerts to cases
-description: Add alerts to new or existing cases in ((elastic-sec)).
-tags: ["serverless","security","how-to","analyze"]
-status: in review
----
- -From the Alerts table, you can attach one or more alerts to a new case or an existing one. Alerts from any rule type can be added to a case. - - -- After you add an alert to a case, you can remove it from the case activity under the alert summary or by using the [((elastic-sec)) Cases API](((security-guide))/cases-api-overview.html). -- Each case can have a maximum of 1,000 alerts. -{/* Link to classic docs until serverless API docs are available. */} - - - - -
- -## Add alerts to a new case -To add alerts to a new case: - -1. Do one of the following: - * To add a single alert to a case, select the **More actions** menu (*...*) in the Alerts table or **Take action** in the alert details flyout, then select **Add to a new case**. - * To add multiple alerts, select the alerts, then select **Add to a new case** from the **Bulk actions** menu. -1. Give the case a name, assign a severity level, and provide a description. You can use - [Markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) syntax in the case description. - - - If you do not assign your case a severity level, it will be assigned **Low** by default. - - -1. Optionally, add a category, assignees and relevant tags. You can add users only if they - meet the necessary prerequisites. - -1. Specify whether you want to sync the status of associated alerts. It is enabled by default; however, you can toggle this setting on or off at any time. If it remains enabled, the alert's status updates whenever the case's status is modified. -1. Select a connector. If you've previously added one, that connector displays as the default selection. Otherwise, the default setting is `No connector selected`. -1. Click **Create case** after you've completed all of the required fields. A confirmation message is displayed with an option to view the new case. Click the link in the notification or go to the Cases page to view the case. - - - -
- -## Add alerts to an existing case -To add alerts to an existing case: - -1. Do one of the following: - * To add a single alert to a case, select the **More actions** menu (*...*) in the Alerts table or **Take action** in the alert details flyout, then select **Add to existing case**. - * To add multiple alerts, select the alerts, then select **Add to an existing case** from the **Bulk actions** menu. -1. From the **Select case** dialog box, select the case to which you want to attach the alert. A confirmation message is displayed with an option to view the updated case. Click the link in the notification or go to the Cases page to view the case's details. - - - If you attach the alert to a case that has been configured to sync its status with associated alerts, the alert's status updates any time the case's status is modified. - - - ![Select case dialog listing existing cases](../images/signals-to-cases/-detections-add-alert-to-existing-case.png) diff --git a/docs/serverless/alerts/view-alert-details.mdx b/docs/serverless/alerts/view-alert-details.mdx deleted file mode 100644 index 1b735a6e65..0000000000 --- a/docs/serverless/alerts/view-alert-details.mdx +++ /dev/null @@ -1,280 +0,0 @@ ---- -slug: /serverless/security/view-alert-details -title: View detection alert details -description: Expand an alert to view detailed alert data. -tags: ["serverless","security","defend","reference","manage"] -status: in review ---- - - -
- -To learn more about an alert, click the **View details** button from the Alerts table. This opens the alert details flyout, which helps you understand and manage the alert. - -![Expandable flyout](../images/view-alert-details/-detections-open-alert-details-flyout.gif) - -Use the alert details flyout to begin an investigation, open a case, or plan a response. Click **Take action** at the bottom of the flyout to find more options for interacting with the alert. - -
- -## Alert details flyout UI - -The alert details flyout has a right panel, a preview panel, and a left panel. Each panel provides a different perspective of the alert. - -
- -### Right panel - -The right panel provides an overview of the alert. Expand any of the collapsed sections to learn more about the alert. You can also hover over fields on the **Overview** and **Table** tabs to display available inline actions. - - - - -From the right panel, you can also: - -* Click **Expand details** to open the left panel, which shows more information about sections in the right panel. -* Click the **Chat** icon () to access the . -* Click the **Share alert** icon () to get a shareable alert URL. We _do not_ recommend copying the URL from your browser's address bar, which can lead to inconsistent results if you've set up filters or relative time ranges for the Alerts page. - - - If you've configured the [`server.publicBaseUrl`](((kibana-ref))/settings.html#server-publicBaseUrl) setting in the `kibana.yml` file, the shareable URL is also in the `kibana.alert.url` field. You can find the field by searching for `kibana.alert.url` on the **Table** tab. - - - - If you've enabled grouping on the Alerts page, the alert details flyout won't open until you expand a collapsed group and select an individual alert. - - -* Find basic details about the alert, such as the: - - * Associated rule - * Alert status - * Date and time the alert was created - * Alert severity and risk score (these are inherited from rule that generated the alert) - * Users assigned to the alert (click the icon to assign more users) -* Click the **Table** or **JSON** tabs to display the alert details in table or JSON format. In table format, alert details are displayed as field-value pairs. - -
- -### Preview panel - -Some areas in the flyout provide previews when you click on them. For example, clicking **Show rule summary** in the rule description displays a preview of the rule's details. To close the preview, click **x**. - -![Preview panel of the alert details flyout](../images/view-alert-details/-detections-alert-details-flyout-preview-panel.gif) - -
- -### Left panel - -The left panel provides an expanded view of what's shown in the right panel. To open the left panel, do one of the following: - -* Click **Expand details** at the top of the right panel. - - - -* Click one of the section titles on the **Overview** tab within the right panel. - - - -
- -## About - -The About section is located on the **Overview** tab in the right panel. It provides a brief description of the rule that's related to the alert and an explanation of what generated the alert. - - - -The About section has the following information: - -* **Rule description**: Describes the rule's purpose or detection goals. Click **Show rule summary** to display a preview of the rule's details. From the preview, click **Show rule details** to view the rule's details page. - -* **Alert reason**: Describes the source event that generated the alert. Event details are displayed in plain text and ordered logically to provide context for the alert. Click **Show full reason** to display the alert reason in the event rendered format within the preview panel. - - - The event renderer only displays if an event renderer exists for the alert type. Fields are interactive; hover over them to access the available actions. - - -* **Last Alert Status Change**: Shows the last time the alert's status was changed, along with the user who changed it. -* **MITRE ATT&CK**: Provides relevant [MITRE ATT&CK](https://attack.mitre.org/) framework tactics, techniques, and sub-techniques. - -
- -## Investigation - -The Investigation section is located on the **Overview** tab in the right panel. It offers a couple of ways to begin investigating the alert. - - - -The Investigation section provides the following information: - -* **Investigation guide**: The **Show investigation guide** button displays if the rule associated with the alert has an investigation guide. Click the button to open the expanded Investigation view in the left panel. - - - Add an investigation guide to a rule when creating a new custom rule or modifying an existing custom rule's settings. - - -* **Highlighted fields**: Shows relevant fields for the alert and any custom highlighted fields you added to the rule. Custom highlighted fields with values are added to this section. Those without values aren't added. - -
- -## Visualizations - -The Visualizations section is located on the **Overview** tab in the right panel. It offers a glimpse of the processes that led up to the alert and occurred after it. - - - -Click **Visualizations** to display the following previews: - -* **Session view preview**: Shows a preview of session view data. Click **Session viewer preview** to open the **Session View** tab in Timeline. - -* **Analyzer preview**: Shows a preview of the visual analyzer graph. The preview displays up to three levels of the analyzed event's ancestors and up to three levels of the event's descendants and children. The ellipses symbol (**`...`**) indicates the event has more ancestors and descendants to examine. Click **Analyzer preview** to open the **Event Analyzer** tab in Timeline. - -
- -## Insights - -The Insights section is located on the **Overview** tab in the right panel. It offers different perspectives from which you can assess the alert. Click **Insights** to display overviews for related entities, threat intelligence, correlated data, and host and user prevalence. - - - -
- -### Entities - -The Entities overview provides high-level details about the user and host that are related to the alert. Host and user risk classifications are also available if you have the Security Analytics Complete . - - - -
- -#### Expanded entities view - -From the right panel, click **Entities** to open a detailed view of the host and user associated with the alert. The expanded view also includes risk scores and classifications (if you have the Security Analytics Complete ) and activity on related hosts and users. - -![Expanded view of entity details](../images/view-alert-details/-detections-expanded-entities-view.png) - -
- -### Threat intelligence - -The Threat intelligence overview shows matched indicators, which provide threat intelligence relevant to the alert. - -![Overview of threat intelligence on the alert](../images/view-alert-details/-detections-threat-intelligence-overview.png) - -The Threat intelligence overview provides the following information: - -* **Threat match detected**: Only available when examining an alert generated from an indicator match rule. Shows the number of matched indicators that are present in the alert document. Shows zero if there are no matched indicators or you're examining an alert generated by another type of rule. - -* **Fields enriched with threat intelligence**: Shows the number of matched indicators that are present on an alert that _wasn't_ generated from an indicator match rule. If none exist, the total number of matched indicators is zero. - -
- -#### Expanded threat intelligence view - -From the right panel, click **Threat intelligence** to open the expanded Threat intelligence view within the left panel. - - -The expanded threat intelligence view queries indices specified in the `securitySolution:defaultThreatIndex` advanced setting. Refer to Update default Elastic Security threat intelligence indices to learn more about threat intelligence indices. - - -![Expanded view of threat intelligence on the alert](../images/view-alert-details/-detections-expanded-threat-intelligence-view.png) - -The expanded Threat intelligence view shows individual indicators within the alert document. You can expand and collapse indicator details by clicking the arrow button at the end of the indicator label. Each indicator is labeled with values from the `matched.field` and `matched.atomic` fields and displays the threat intelligence provider. - -Matched threats are organized into two sections, described below. Within each section, matched threats are shown in reverse chronological order, with the most recent at the top. All mapped fields are displayed for each matched threat. - -**Threat match detected** - -The Threat match detected section is only populated with indicator match details if you're examining an alert that was generated from an indicator match rule. Indicator matches occur when alert field values match with threat intelligence data you've ingested. - -**Fields enriched with threat intelligence** - -Threat intelligence can also be found on alerts that weren't generated from indicator match rules. To find this information, ((elastic-sec)) queries alert documents from the past 30 days and searches for fields that contain known threat intelligence. If any are found, they're logged in this section. - - -Use the date time picker to modify the query time frame, which looks at the past 30 days by default. You can also click the **Inspect** button to examine the query that the Fields enriched with threat intelligence section uses. - - -When searching for threat intelligence, ((elastic-sec)) queries the alert document for the following fields: - -- `file.hash.md5`: The MD5 hash -- `file.hash.sha1`: The SHA1 hash -- `file.hash.sha256`: The SHA256 hash -- `file.pe.imphash`: Imports in a PE file -- `file.elf.telfhash`: Imports in an ELF file -- `file.hash.ssdeep`: The SSDEEP hash -- `source.ip`: The IP address of the source (IPv4 or IPv6) -- `destination.ip`: The event's destination IP address -- `url.full`: The full URL of the event source -- `registry.path`: The full registry path, including the hive, key, and value - -
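As a quick way to see which of these fields are present in your own alerts, you can search for them from the Alerts page KQL bar. A minimal example that returns alerts containing a SHA-256 file hash (any of the other fields listed above can be substituted):

```
file.hash.sha256: *
```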
- -### Correlations - -The Correlations overview shows how an alert is related to other alerts and offers ways to investigate related alerts. Use this information to quickly find patterns between alerts and then take action. - - - -The Correlations overview provides the following information: - -* **Suppressed alerts**: Indicates that the alert was created with alert suppression, and shows how many duplicate alerts were suppressed. This information only appears if alert suppression is enabled for the rule. -* **Alerts related by source event**: Shows the number of alerts that were created by the same source event. -* **Cases related to the alert**: Shows the number of cases to which the alert has been added. -* **Alerts related by session ID**: Shows the number of alerts generated by the same session. -* **Alerts related by process ancestry**: Shows the number of alerts that are related by process events on the same linear branch. - -
#### Expanded correlations view

From the right panel, click **Correlations** to open the expanded Correlations view within the left panel.

![Expanded view of correlation data](../images/view-alert-details/-detections-expanded-correlations-view.png)

In the expanded view, correlation data is organized into several tables:

* **Suppressed alerts**: Shows how many duplicate alerts were suppressed. This information only appears if alert suppression is enabled for the rule.
* **Related cases**: Shows cases to which the alert has been added. Click a case's name to open its details.
* **Alerts related by source event**: Shows alerts created by the same source event. This can help you find alerts with a shared origin and provide more context about the source event. Click the **Investigate in timeline** button to examine related alerts in Timeline.
* **Alerts related by session**: Shows alerts generated during the same session. These alerts share the same session ID, which is a unique ID for tracking a given Linux session. To use this feature, you must enable the **Collect session data** setting in your ((elastic-defend)) integration policy. Refer to Enable Session View data for more information.
* **Alerts related by ancestry**: Shows alerts that are related by process events on the same linear branch. Note that alerts generated from processes on child or related branches are not shown. To further examine alerts, click **Investigate in timeline**.
- -### Prevalence - -The Prevalence overview shows whether data from the alert was frequently observed on other host events from the last 30 days. Prevalence calculations use values from the alert’s highlighted fields. Highlighted field values that are observed on less than 10% of hosts in your environment are considered uncommon (not prevalent) and are listed individually in the Prevalence overview. Highlighted field values that are observed on more than 10% of hosts in your environment are considered common (prevalent) and are described as frequently observed in the Prevalence overview. - -
- -#### Expanded prevalence view - -From the right panel, click **Prevalence** to open the expanded Prevalence view within the left panel. Examine the table to understand the alert's relationship with other alerts, events, users, and hosts. - - -Update the date time picker for the table to show data from a different time range. - - -![Expanded view of prevalence data](../images/view-alert-details/-detections-expanded-prevalence-view.png) - -The expanded Prevalence view provides the following details: - -* **Field**: Shows highlighted fields for the alert and any custom highlighted fields that were added to the alert's rule. - -* **Value**: Shows values for highlighted fields and any custom highlighted fields that were added to the alert's rule. - -* **Alert count**: Shows the total number of alert documents that have identical highlighted field values, including the alert you're currently examining. For example, if the `host.name` field has an alert count of 5, that means there are five total alerts with the same `host.name` value. The Alert count column only retrieves documents that contain the [`event.kind:signal`](((ecs-ref))/ecs-allowed-values-event-kind.html#ecs-event-kind-signal) field-value pair. - -* **Document count**: Shows the total number of event documents that have identical field values. A dash (`——`) displays if there are no event documents that match the field value. The Document count column only retrieves documents that don't contain the [`event.kind:signal`](((ecs-ref))/ecs-allowed-values-event-kind.html#ecs-event-kind-signal) field-value pair. - -* **Host prevalence**: Shows the percentage of unique hosts that have identical field values. Host prevalence for highlighted fields is calculated by taking the number of unique hosts with identical highlighted field values and dividing that number by the total number of unique hosts in your environment. - -* **User prevalence**: Shows the percentage of unique users that have identical highlighted field values. User prevalence for highlighted fields is calculated by taking the number of unique users with identical field values and dividing that number by the total number of unique users in your environment. - -
- -## Response - -The **Response** section is located on the **Overview** tab in the right panel. It shows response actions that were added to the rule associated with the alert. Click **Response** to display the response action's results in the left panel. - - diff --git a/docs/serverless/alerts/visual-event-analyzer.mdx b/docs/serverless/alerts/visual-event-analyzer.mdx deleted file mode 100644 index 02a3b8d75b..0000000000 --- a/docs/serverless/alerts/visual-event-analyzer.mdx +++ /dev/null @@ -1,138 +0,0 @@ ---- -slug: /serverless/security/visual-event-analyzer -title: Visual event analyzer -description: Examine events and processes in a graphical timeline. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -((elastic-sec)) allows any event detected by ((elastic-endpoint)) to be analyzed using a process-based visual analyzer, which shows a graphical timeline of processes that led up to the alert and the events that occurred immediately after. Examining events in the visual event analyzer is useful to determine the origin of potentially malicious activity and other areas in your environment that may be compromised. It also enables security analysts to drill down into all related hosts, processes, and other events to aid in their investigations. - - -If you're experiencing performance degradation, you can exclude cold and frozen tier data from analyzer queries. - - -
- -## Find events to analyze - -You can only visualize events triggered by hosts configured with the ((elastic-defend)) integration or any `sysmon` data from `winlogbeat`. - -In KQL, this translates to any event with the `agent.type` set to either: - -* `endpoint` -* `winlogbeat` with `event.module` set to `sysmon` - -To find events that can be visually analyzed: - -1. First, display a list of events by doing one of the following: - * Go to **Explore** → **Hosts**, then select the **Events** tab. A list of all your hosts' events appears at the bottom of the page. - * Go to **Alerts**, then scroll down to the Alerts table. - -1. Filter events that can be visually analyzed by entering either of the following queries in the KQL search bar, then selecting **Enter**: - * `agent.type:"endpoint" and process.entity_id :*` - - Or - - * `agent.type:"winlogbeat" and event.module: "sysmon" and process.entity_id : *` - -1. Events that can be visually analyzed are denoted by a cubical **Analyze event** icon. Select this option to open the event in the visual analyzer. - - - - - Events that cannot be analyzed will not have the **Analyze event** option available. This might occur if the event has incompatible field mappings. - - - ![](../images/visual-event-analyzer/-detections-analyze-event-timeline.png) - - - You can also analyze events from Timelines. - - -
## Visual event analyzer UI

Within the visual analyzer, each cube represents a process, such as an executable file or network event. Click and drag in the analyzer to explore the hierarchy of all process relationships.

To understand what fields were used to create the process, select the **Process Tree** to show the schema that created the graphical view. The fields included are:

* `SOURCE`: Can be either `endpoint` or `winlogbeat`
* `ID`: Event field that uniquely identifies a node
* `EDGE`: Event field which indicates the relationship between two nodes

![](../images/visual-event-analyzer/-detections-process-schema.png)

Click the **Legend** to show the state of each process node.

![](../images/visual-event-analyzer/-detections-node-legend.png)

Use the date and time filter to analyze the event within a specific time range. By default, the selected time range matches that of the table from which you opened the alert.

![](../images/visual-event-analyzer/-detections-date-range-selection.png)

Select a different data view to further filter the alert's related events.

![](../images/visual-event-analyzer/-detections-data-view-selection.png)

To expand the analyzer to full screen, select the **Full Screen** icon above the left panel.

![](../images/visual-event-analyzer/-detections-full-screen-analyzer.png)

The left panel contains a list of all processes related to the event, starting with the event chain's first process. **Analyzed Events** — the event you selected to analyze from the events list or Timeline — are highlighted with a light blue outline around the cube.

![](../images/visual-event-analyzer/-detections-process-list.png)

In the graphical view, you can:

- Zoom in and out of the graphical view using the slider on the far right
- Click and drag around the graphical view to explore more process relationships
- Observe child process events that spawned from the parent process
- Determine how much time passed between each process
- Identify all events related to each process

![](../images/visual-event-analyzer/-detections-graphical-view.png)
- -## Process and event details - -To learn more about each related process, select the process in the left panel or the graphical view. The left panel displays process details such as: - -* The number of events associated with the process -* The timestamp of when the process was executed -* The file path of the process within the host -* The `process-pid` -* The user name and domain that ran the process -* Any other relevant process information -* Any associated alerts - -![](../images/visual-event-analyzer/-detections-process-details.png) - -When you first select a process, it appears in a loading state. If loading data for a given process fails, click **Reload `{process_name}`** beneath the process to reload the data. - -Access event details by selecting that event's URL at the top of the process details view or choosing one of the event pills in the graphical view. - -Events are categorized based on the `event.category` value. - -![](../images/visual-event-analyzer/-detections-event-type.png) - -When you select an `event.category` pill, all the events within that category are listed in the left panel. To display more details about a specific event, select it from the list. - -![](../images/visual-event-analyzer/-detections-event-details.png) - - -There is no limit to the number of events that can be associated with a process. - - -You can also examine alerts associated with events. - -To examine alerts associated with the event, select the alert pill (**_x_ alert**). The left pane lists the total number of associated alerts, and alerts are ordered from oldest to newest. Each alert shows the type of event that produced it (`event.category`), the event timestamp (`@timestamp`), and rule that generated the alert (`kibana.alert.rule.name`). Click on the rule name to open the alert's details. - -In the example screenshot below, five alerts were generated by the analyzed event (`lsass.exe`). The left pane displays the associated alerts and basic information about each one. - -![](../images/visual-event-analyzer/-detections-alert-pill.png) diff --git a/docs/serverless/alerts/visualize-alerts.mdx b/docs/serverless/alerts/visualize-alerts.mdx deleted file mode 100644 index c34968e816..0000000000 --- a/docs/serverless/alerts/visualize-alerts.mdx +++ /dev/null @@ -1,83 +0,0 @@ ---- -slug: /serverless/security/visualize-alerts -title: Visualize detection alerts -description: Display alert trends and distributions on the Alerts page. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -Visualize and group detection alerts by specific parameters in the visualization section of the Alerts page. - -![Alerts page with visualizations section highlighted](../images/visualize-alerts/-detections-alert-page-visualizations.png) - -Use the left buttons to select a view type (**Summary**, **Trend**, **Counts**, or **Treemap**), and use the right menus to select the ECS fields to use for grouping: - -* **Top alerts by** or **Group by**: Primary field for grouping alerts. -* **Group by top** (if available): Secondary field for further subdividing grouped alerts. - -For example, you can group first by rule name (`Group by: kibana.alert.rule.name`), then by host name (`Group by top: host.name`) to visualize which detection rules generated alerts, and which hosts triggered each of those rules. For groupings with a lot of unique values, the top 1,000 results are displayed. - - -Some view types don't have the **Group by top** option. You can also leave **Group by top** blank to group by only the primary field in **Group by**. - - -To reset a view to default settings, hover over it and click the options menu () that appears, then select **Reset group by fields**. - - -The options menu also lets you inspect the visualization's queries. For the trend and counts views, you can add the visualization to a new or existing case, or open it in Lens. - - -Click the collapse icon () to minimize the visualization section and display a summary of key information instead. - -![Alerts page with visualizations section collapsed](../images/visualize-alerts/-detections-alert-page-viz-collapsed.png) - -## Summary - -On the Alerts page, the summary visualization displays by default and shows how alerts are distributed across these indicators: - -* **Severity levels**: How many alerts are in each severity level. -* **Alerts by name**: How many alerts each detection rule created. -* **Top alerts by**: Percentage of alerts with a specified field value: `host.name` (default), `user.name`, `source.ip`, or `destination.ip`. - -You can hover and click on elements within the summary — such as severity levels, rule names, and host names — to add filters with those values to the Alerts page. - -![Summary visualization for alerts](../images/visualize-alerts/-detections-alerts-viz-summary.png) - -## Trend -The trend view shows the occurrence of alerts over time. By default, it groups alerts by detection rule name (`kibana.alert.rule.name`). - - -The **Group by top** menu is unavailable for the trend view. - - -![Trend visualization for alerts](../images/visualize-alerts/-detections-alerts-viz-trend.png) - -## Counts -The counts view shows the count of alerts in each group. By default, it groups alerts first by detection rule name (`kibana.alert.rule.name`), then by host name (`host.name`). - -![Counts visualization for alerts](../images/visualize-alerts/-detections-alerts-viz-counts.png) - -## Treemap -The treemap view shows the distribution of alerts as nested, proportionally-sized tiles. This view can help you quickly pinpoint the most prevalent and critical alerts. 
- -![Treemap visualization for alerts](../images/visualize-alerts/-detections-alerts-viz-treemap.png) - -Larger tiles represent more frequent alerts, and each tile's color is based on the alerts' risk score: - -* **Green**: Low risk (`0` - `46`) -* **Yellow**: Medium risk (`47` - `72`) -* **Orange**: High risk (`73` - `98`) -* **Red**: Critical risk (`99` - `100`) - -By default, the treemap groups alerts first by detection rule name (`kibana.alert.rule.name`), then by host name (`host.name`). This shows which rules generated the most alerts, and which hosts were responsible. - - -Depending on the amount of alerts, some tiles and text might be very small. Hover over the treemap to display information in a tooltip. - - -You can click on the treemap to narrow down the alerts displayed in both the treemap and the alerts table below. Click the label above a group to display the alerts in that group, or click an individual tile to display the alerts related to that tile. This adds filters under the KQL search bar, which you can edit or remove to further customize the view. - -![Animation of clicking the treemap](../images/visualize-alerts/-detections-treemap-click.gif) diff --git a/docs/serverless/assets/asset-management.mdx b/docs/serverless/assets/asset-management.mdx deleted file mode 100644 index c13c882a97..0000000000 --- a/docs/serverless/assets/asset-management.mdx +++ /dev/null @@ -1,15 +0,0 @@ ---- -slug: /serverless/security/asset-management -title: Asset management -# description: Description to be written -tags: [ 'serverless', 'security', 'overview', 'manage' ] -status: in review ---- - - -The **Assets** page allows you to manage the following features: - -* [((fleet))](((fleet-guide))/manage-agents-in-fleet.html) -* [((integrations))](((fleet-guide))/integrations.html) -* Endpoint protection -* Cloud security diff --git a/docs/serverless/billing.mdx b/docs/serverless/billing.mdx deleted file mode 100644 index 3b12f7e337..0000000000 --- a/docs/serverless/billing.mdx +++ /dev/null @@ -1,59 +0,0 @@ ---- -slug: /serverless/security/security-billing -title: Security billing dimensions -description: Learn about how Security usage affects pricing. -tags: [ 'serverless', 'security', 'overview' ] ---- - -

((elastic-sec)) serverless projects provide you with all the capabilities of ((elastic-sec)) to perform SIEM, security analytics, endpoint security, and cloud security workflows. Projects are provided using a Software as a Service (SaaS) model, and pricing is entirely consumption based. Security Analytics/SIEM is available in two tiers of carefully selected features to enable common security operations:

* **Security Analytics Essentials** — Includes everything you need to operationalize traditional SIEM in most organizations.
* **Security Analytics Complete** — Adds advanced security analytics and AI-driven features that many organizations will require when upgrading or replacing legacy SIEM systems.

Your monthly bill is based on the capabilities you use. When you use Security Analytics/SIEM, your bill is calculated based on data volume, which has these components:

* **Ingest** — Measured by the number of GB of log/event/info data that you send to your Security project over the course of a month.
* **Retention** — Measured by the total amount of ingested data stored in your Security project.

## Endpoint Protection

Endpoint Protection is an _optional_ add-on to Security Analytics that provides on-endpoint protection and prevention. Endpoint Protection is available in two tiers of selected features to enable common endpoint security operations:

* **Endpoint Protection Essentials** — Includes robust protection against malware, ransomware, and other malicious behaviors.
* **Endpoint Protection Complete** — Adds endpoint response actions and advanced policy management.

You pay based on the number of protected endpoints you configure with the ((elastic-defend)) integration. Note that logs, events, and alerts ingested into your Security project from endpoints running ((elastic-defend)) are billed using the **Ingest** and **Retention** pricing described above.

## Cloud Protection

Cloud Protection is an _optional_ add-on to Security Analytics that provides value-added protection capabilities for cloud assets. Cloud Protection is available in two tiers of carefully selected features to enable common cloud security operations:

* **Cloud Protection Essentials** — Protects your cloud workloads, continuously tracks the posture of your cloud assets, and helps you manage risks by detecting configuration issues per CIS benchmarks.
* **Cloud Protection Complete** — Adds response capabilities and configuration drift prevention for cloud workloads.

Your total cost depends on the number of protected cloud workloads and other billable cloud assets you configure for use with Elastic Cloud Security.

For cloud security posture management (CSPM), billing is based on how many billable resources (`resource.id`s) you monitor. The following types of assets are considered billable:

- VMs:
  - **AWS:** EC2 instances
  - **Azure:** Virtual machines
  - **GCP:** Compute engine instances
- Storage resources:
  - **AWS:** S3, S3 Glacier, EBS
  - **Azure:** Archive, Blob, Managed disk
  - **GCP:** Cloud storage, Persistent disk, Coldline storage
- SQL databases and servers:
  - **AWS:** RDS, DynamoDB, Redshift
  - **Azure:** SQL database, Cosmos DB, Synapse Analytics
  - **GCP:** Cloud SQL, Firestore, BigQuery

For Kubernetes security posture management (KSPM), billing is based on how many Kubernetes nodes (`agent.id`s) you monitor.

For cloud native vulnerability management (CNVM), billing is based on how many cloud assets (`cloud.instance.id`s) you monitor.

For cloud workload protection, billing is based on how many agents (`agent.id`s) you use.
- -Logs, events, alerts, and configuration data ingested into your security project are billed using the **Ingest** and **Retention** pricing described above. - -For more details about ((elastic-sec)) serverless project rates and billable assets, refer to Cloud Protection in the [Elastic Cloud pricing table](https://cloud.elastic.co/cloud-pricing-table?productType=serverless&project=security). diff --git a/docs/serverless/cloud-native-security/benchmark-rules.mdx b/docs/serverless/cloud-native-security/benchmark-rules.mdx deleted file mode 100644 index 1c47d9727e..0000000000 --- a/docs/serverless/cloud-native-security/benchmark-rules.mdx +++ /dev/null @@ -1,48 +0,0 @@ ---- -slug: /serverless/security/benchmark-rules -title: Benchmarks -description: Review the cloud security benchmark rules used by the CSPM and KSPM integrations. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
The Benchmarks page lets you view the cloud security posture (CSP) benchmarks for the Cloud security posture management (CSPM) and Kubernetes security posture management (KSPM) integrations.

![Benchmark rules page](../images/benchmark-rules/-cloud-native-security-benchmark-rules.png)

## What are benchmarks?
Each benchmark contains benchmark rules, which are used by the CSPM and KSPM integrations to identify configuration risks in your cloud infrastructure. There are different benchmarks for different cloud services, such as AWS, GCP, or Azure. They are based on the Center for Internet Security's (CIS) [secure configuration benchmarks](https://www.cisecurity.org/cis-benchmarks/).

Each benchmark rule checks whether a specific type of resource is configured according to a CIS Benchmark. The names of rules describe what they check, for example:

* `Ensure Kubernetes Secrets are encrypted using Customer Master Keys (CMKs) managed in AWS KMS`
* `Ensure the default namespace is not in use`
* `Ensure IAM policies that allow full "*:*" administrative privileges are not attached`

When benchmark rules are evaluated, the resulting findings data appears on the Cloud Security Posture dashboard.

Benchmark rules are not editable.

## Review your benchmarks

To access your active benchmarks, go to **Rules -> Benchmarks**. From there, you can click a benchmark's name to view the benchmark rules associated with it. You can click a benchmark rule's name to see details, including remediation information and related links.

Benchmark rules are enabled by default, but you can disable some of them at the benchmark level to suit your environment. For example, if you have two CSPM integrations using the `CIS AWS` benchmark, disabling a rule for that benchmark affects both integrations. To enable or disable a rule, use the **Enabled** toggle on the right of the rules table.

Disabling a benchmark rule automatically disables any associated detection rules and alerts. Re-enabling a benchmark rule **does not** automatically re-enable them.

## How benchmark rules work

1. When a security posture management integration is deployed, and every four hours after that, ((agent)) fetches relevant cloud resources.
1. After resources are fetched, they are evaluated against all applicable enabled benchmark rules.
1. Finding values of `pass` or `fail` indicate whether the standards defined by benchmark rules were met.
diff --git a/docs/serverless/cloud-native-security/cloud-native-security-overview.mdx b/docs/serverless/cloud-native-security/cloud-native-security-overview.mdx
deleted file mode 100644
index 29926e71d8..0000000000
--- a/docs/serverless/cloud-native-security/cloud-native-security-overview.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
----
-slug: /serverless/security/cloud-native-security-overview
-title: Secure cloud native resources
-description: Helps you improve your cloud security posture.
-tags: [ 'serverless', 'security', 'overview', 'cloud security' ]
-status: in review
----
- -Elastic Security for Cloud helps you improve your cloud security posture by comparing your cloud configuration to best practices, and scanning for vulnerabilities. It also helps you monitor and investigate your cloud workloads inside and outside Kubernetes. - -This page describes what each solution does and provides links to more information. - -## Cloud Security Posture Management (CSPM) -Discovers and evaluates the services in your cloud environment — like storage, compute, IAM, and more — against configuration security guidelines defined by the [Center for Internet Security](https://www.cisecurity.org/) (CIS) to help you identify and remediate risks that could undermine the confidentiality, integrity, and availability of your cloud data. - -Read the CSPM docs. - -## Kubernetes Security Posture Management (KSPM) -Allows you to identify configuration risks in the various components that make up your Kubernetes cluster. -It does this by evaluating your Kubernetes clusters against secure configuration guidelines defined by the Center for Internet Security (CIS) and generating findings with step-by-step instructions for remediating potential security risks. - -Read the KSPM docs. - -## Cloud Native Vulnerability Management (CNVM) -Scans your cloud workloads for known vulnerabilities. When it finds a vulnerability, it supports your risk assessment by quickly providing information such as the vulnerability's CVSS and severity, which software versions it affects, and whether a fix is available. - -Read the CNVM docs. - -## Cloud Workload Protection for Kubernetes -Provides cloud-native runtime protections for containerized environments by identifying and (optionally) blocking unexpected system behavior in Kubernetes containers. These capabilities are sometimes referred to as container drift detection and prevention. The solution also captures detailed process and file telemetry from monitored containers, allowing you to set up custom alerts and protection rules. - -Read the CWP for Kubernetes docs. - -## Cloud Workload Protection for VMs -Helps you monitor and protect your Linux VMs. It uses ((elastic-defend)) to instantly detect and prevent malicious behavior and malware, and captures workload telemetry data for process, file, and network activity. You can use this data with Elastic's out-of-the-box detection rules and ((ml)) models. These detections generate alerts that quickly help you identify and remediate threats. - -Read the CWP for VMs docs. \ No newline at end of file diff --git a/docs/serverless/cloud-native-security/cloud-workload-protection.mdx b/docs/serverless/cloud-native-security/cloud-workload-protection.mdx deleted file mode 100644 index 773914da23..0000000000 --- a/docs/serverless/cloud-native-security/cloud-workload-protection.mdx +++ /dev/null @@ -1,28 +0,0 @@ ---- -slug: /serverless/security/cloud-workload-protection -title: Cloud workload protection for VMs -description: Use cloud workload protection to monitor and protect your Linux VMs. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
- -Cloud workload protection helps you monitor and protect your Linux VMs. It uses the ((elastic-defend)) integration to capture cloud workload telemetry containing process, file, and network activity. - -Use this telemetry with out-of-the-box detection rules and machine learning models to automate processes that identify cloud threats. - -## Use cases - -* **Runtime monitoring of cloud workloads:** Provides visibility into cloud workloads, context for detected threats, and the historical data needed for retroactive threat investigations. -* **Cloud-native threat detection and prevention:** Provides security coverage for Linux, containers, and serverless applications. Protects against known and unknown threats using on-host detections and protections against malicious behavior, memory threats, and malware. -* **Reducing the time to detect and remediate runtime threats:** Helps you resolve potential threats by showing alerts in context, making the data necessary for further investigations readily available, and providing remediation options. - -To continue setting up your cloud workload protection, learn more about: - -* **Getting started with ((elastic-defend))**: configure ((elastic-defend)) to protect your hosts. Be sure to select one of the "Cloud workloads" presets if you want to collect session data by default, including process, file, and network telemetry. -* **Session view**: examine Linux process data organized in a tree-like structure according to the Linux logical event model, with processes organized by parentage and time of execution. Use it to monitor and investigate session activity, and to understand user and service behavior on your Linux infrastructure. -* **The Kubernetes dashboard**: Explore an overview of your protected Kubernetes clusters, and drill down into individual sessions within your Kubernetes infrastructure. -* **Environment variable capture**: Capture the environment variables associated with process events, such as `PATH`, `LD_PRELOAD`, or `USER`. - diff --git a/docs/serverless/cloud-native-security/cspm-findings-page.mdx b/docs/serverless/cloud-native-security/cspm-findings-page.mdx deleted file mode 100644 index 4719892ddb..0000000000 --- a/docs/serverless/cloud-native-security/cspm-findings-page.mdx +++ /dev/null @@ -1,78 +0,0 @@ ---- -slug: /serverless/security/cspm-findings-page -title: Findings page -description: Review your cloud security posture management data. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
- -The **Misconfigurations** tab on the Findings page displays the configuration risks identified by the CSPM and KSPM integrations. - -![Findings page](../images/findings-page/-cloud-native-security-findings-page.png) - -
- -## What are CSPM and KSPM findings? - -CSPM and KSPM findings indicate whether a given resource passed or failed evaluation against a specific security guideline. Each finding includes metadata about the resource evaluated and the security guideline used to evaluate it. Each finding's result (`pass` or `fail`) indicates whether a particular part of your infrastructure meets a security guideline. - -
- -## Group and filter findings -By default, the Findings page lists all findings, without grouping or filtering. - -### Group findings - -Click **Group findings by** to group your data by a field. Select one of the suggested fields or **Custom field** to choose your own. You can select up to three group fields at once. - -* When grouping is turned on, click a group to expand it and examine all sub-groups or findings within that group. -* To turn off grouping, click **Group findings by** and select **None**. - - -Multiple groupings apply to your data in the order you selected them. For example, if you first select **Cloud account**, then select **Resource**, the top-level grouping will be based on **Cloud account**, and its subordinate grouping will be based on **Resource**. - - -
- -### Filter findings -You can filter findings data in two ways: - -* **KQL search bar**: For example, search for `result.evaluation : failed` to view all failed findings. -* **In-table value filters**: Hover over a finding to display available inline actions. Use the **Filter In** (plus) and **Filter Out** (minus) buttons. - -## Customize the Findings table -You can use the toolbar buttons in the upper-left of the Findings table to select which columns appear: - -* **Columns**: Select the left-to-right order in which columns appear. -* **Sort fields**: Sort the table by one or more columns, or turn sorting off. -* **Fields**: Select which fields to display for each finding. Selected fields appear in the table and the **Columns** menu. - - -You can also click a column's name to open a menu that allows you to perform multiple actions on the column. - - -
- -## Remediate failed findings -To remediate failed findings and reduce your attack surface: - -1. First, filter for failed findings. -1. Click the arrow to the left of a failed finding to open the findings flyout. -1. Follow the steps under **Remediation**. - - - Remediation steps typically include commands for you to execute. These sometimes contain placeholder values that you must replace before execution. - - -
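For example, a finding about public access to an S3 bucket might include a remediation command along these lines (the bucket name is a placeholder; always use the exact command from the finding's **Remediation** section):

```shell
# Hypothetical remediation for an S3 public-access finding.
# Replace <YOUR_BUCKET_NAME> with the bucket named in the finding.
aws s3api put-public-access-block \
  --bucket <YOUR_BUCKET_NAME> \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```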
- -## Generate alerts for failed Findings -You can create detection rules that detect specific failed findings directly from the Findings page. - -1. Click the arrow to the left of a Finding to open the findings flyout. -1. Click **Take action**, then **Create a detection rule**. This automatically creates a detection rule that creates alerts when the associated benchmark rule generates a failed finding. -1. To review or customize the new rule, click **View rule**. - diff --git a/docs/serverless/cloud-native-security/cspm-get-started-azure.mdx b/docs/serverless/cloud-native-security/cspm-get-started-azure.mdx deleted file mode 100644 index 3a7d056a4b..0000000000 --- a/docs/serverless/cloud-native-security/cspm-get-started-azure.mdx +++ /dev/null @@ -1,173 +0,0 @@ ---- -slug: /serverless/security/cspm-get-started-azure -title: Get started with CSPM for Azure -description: Start monitoring the security posture of your Azure cloud assets. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
- -
- -## Overview - -This page explains how to get started monitoring the security posture of your cloud assets using the Cloud Security Posture Management (CSPM) feature. - - - -* CSPM only works in the `Default` ((kib)) space. Installing the CSPM integration on a different ((kib)) space will not work. -* CSPM is supported only on AWS, GCP, and Azure commercial cloud platforms, and AWS GovCloud. Other government cloud platforms are not supported ([request support](https://github.com/elastic/kibana/issues/new/choose)). -* To view posture data, you need `read` privileges for the following ((es)) indices: - * `logs-cloud_security_posture.findings_latest-*` - * `logs-cloud_security_posture.scores-*` - * `logs-cloud_security_posture.findings` -* The user who gives the CSPM integration permissions in Azure must be an Azure subscription `admin`. - - - -
- -## Set up CSPM for Azure - -You can set up CSPM for Azure by enrolling an Azure organization (management group) containing multiple subscriptions, or by enrolling a single subscription. Either way, first add the CSPM integration, then enable cloud account access. - -
- -### Add your CSPM integration -1. From the Elastic Security **Get started** page, click **Add integrations**. -1. Search for `CSPM`, then click on the result. -1. Click **Add Cloud Security Posture Management (CSPM)**. -1. Under **Configure integration**, select **Azure**, then select either **Azure Organization** or **Single Subscription**, depending on which resources you want to monitor. -1. Give your integration a name that matches the purpose or team of the Azure resources you want to monitor, for example, `azure-CSPM-dev-1`. - -
- -### Set up cloud account access -To set up CSPM for an Azure organization or subscription, you will need admin privileges for that organization or subscription. - -For most users, the simplest option is to use an Azure Resource Manager (ARM) template to automatically provision the necessary resources and permissions in Azure. If you prefer a more hands-on approach or require a specific configuration not supported by the ARM template, you can use one of the manual setup options described below. - -
- -### ARM template setup (recommended) - -1. Under **Setup Access**, select **ARM Template**. -1. Under **Where to add this integration**: - 1. Select **New Hosts**. - 1. Name the ((agent)) policy. Use a name that matches the resources you want to monitor, for example, `azure-dev-policy`. Click **Save and continue**. The **ARM Template deployment** window appears. - 1. In a new tab, log in to the Azure portal, then return to ((kib)) and click **Launch ARM Template**. This will open the ARM template in Azure. - 1. If you are deploying to an Azure organization, select the management group you want to monitor from the drop-down menu. Next, enter the subscription ID of the subscription where you want to deploy the VM that will scan your resources. - 1. Copy the `Fleet URL` and `Enrollment Token` that appear in ((kib)) to the corresponding fields in the ARM Template, then click **Review + create**. - 1. (Optional) Change the `Resource Group Name` parameter. Otherwise, the name of the resource group defaults to a timestamp prefixed with `cloudbeat-`. - -1. Return to ((kib)) and wait for the confirmation of data received from your new integration. Then you can click **View Assets** to see your data. - -
- -### Manual setup - -For manual setup, multiple authentication methods are available: - -1. Managed identity (recommended) -1. Service principal with client secret -1. Service principal with client certificate - -
- -### Option 1: Managed identity (recommended) - -This method involves creating an Azure VM (or using an existing one), giving it read access to the resources you want to monitor with CSPM, and installing ((agent)) on it. - -1. Go to the Azure portal to create a new Azure VM. -1. Follow the setup process, and make sure you enable **System assigned managed identity** under the **Management** tab. -1. Go to your Azure subscription list and select the subscription or management group you want to monitor with CSPM. -1. Go to **Access control (IAM)**, and select **Add Role Assignment**. -1. Select the `Reader` role, assign access to **Managed Identity**, then select your VM. - -After assigning the role: - -1. Return to the **Add CSPM** page in ((kib)). -1. Under **Configure integration**, select **Azure**. Under **Setup access**, select **Manual**. -1. Under **Where to add this integration**, select **New hosts**. -1. Click **Save and continue**, then follow the instructions to install ((agent)) on your Azure VM. - -Wait for the confirmation that ((kib)) received data from your new integration. Then you can click **View Assets** to see your data. - -
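If you prefer the Azure CLI to the portal steps above, the identity and role assignment can be sketched roughly as follows (the resource group, VM, and subscription values are placeholders; adjust the scope if you are monitoring a management group instead of a single subscription):

```shell
# Enable a system-assigned managed identity on the VM and capture its principal ID
principalId=$(az vm identity assign \
  --resource-group <RESOURCE_GROUP> \
  --name <VM_NAME> \
  --query systemAssignedIdentity --output tsv)

# Grant the identity read access to the subscription you want CSPM to monitor
az role assignment create \
  --assignee "$principalId" \
  --role "Reader" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>"
```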
- -### Option 2: Service principal with client secret - -Before using this method, you must have set up a [Microsoft Entra application and service principal that can access resources](https://learn.microsoft.com/en-us/entra/identity-platform/howto-create-service-principal-portal#get-tenant-and-app-id-values-for-signing-in). - -1. On the **Add Cloud Security Posture Management (CSPM) integration** page, scroll to the **Setup access** section, then select **Manual**. -1. Under **Preferred manual method**, select **Service principal with Client Secret**. -1. Go to the **Registered apps** section of [Microsoft Entra ID](https://ms.portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps). -1. Click on **New Registration**, name your app and click **Register**. -1. Copy your new app's `Directory (tenant) ID` and `Application (client) ID`. Paste them into the corresponding fields in ((kib)). -1. Return to the Azure portal. Select **Certificates & secrets**, then go to the **Client secrets** tab. Click **New client secret**. -1. Copy the new secret. Paste it into the corresponding field in ((kib)). -1. Return to Azure. Go to your Azure subscription list and select the subscription or management group you want to monitor with CSPM. -1. Go to **Access control (IAM)** and select **Add Role Assignment**. -1. Select the `Reader` function role, assign access to **User, group, or service principal**, and select your new app. -1. Return to the **Add CSPM** page in ((kib)). -1. Under **Where to add this integration**, select **New hosts**. -1. Click **Save and continue**, then follow the instructions to install ((agent)) on your selected host. - -Wait for the confirmation that ((kib)) received data from your new integration. Then you can click **View Assets** to see your data. - -
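A rough Azure CLI equivalent of the app registration and role assignment above might look like this (the app name and subscription ID are placeholders; the credential reset command prints the tenant ID, client ID, and client secret that you paste into ((kib))):

```shell
# Register an application and create a service principal for it
appId=$(az ad app create --display-name <APP_NAME> --query appId --output tsv)
az ad sp create --id "$appId"

# Create a client secret (the secret value is printed once, so copy it now)
az ad app credential reset --id "$appId" --append

# Grant the service principal read access to the subscription you want CSPM to monitor
az role assignment create \
  --assignee "$appId" \
  --role "Reader" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>"
```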
- -### Option 3: Service principal with client certificate - -Before using this method, you must have set up a [Microsoft Entra application and service principal that can access resources](https://learn.microsoft.com/en-us/entra/identity-platform/howto-create-service-principal-portal#get-tenant-and-app-id-values-for-signing-in). - -1. On the **Add Cloud Security Posture Management (CSPM) integration** page, under **Setup access**, select **Manual**. -1. Under **Preferred manual method**, select **Service principal with client certificate**. -1. Go to the **Registered apps** section of [Microsoft Entra ID](https://ms.portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps). -1. Click on **New Registration**, name your app and click **Register**. -1. Copy your new app's `Directory (tenant) ID` and `Application (client) ID`. Paste them into the corresponding fields in ((kib)). -1. Return to Azure. Go to your Azure subscription list and select the subscription or management group you want to monitor with CSPM. -1. Go to **Access control (IAM)** and select **Add Role Assignment**. -1. Select the `Reader` function role, assign access to **User, group, or service principal**, and select your new app. - -Next, create a certificate. If you intend to use a password-protected certificate, you must use a pkcs12 certificate. Otherwise, you must use a pem certificate. - -Create a pkcs12 certificate, for example: -```shell -# Create PEM file -openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes - -# Create pkcs12 bundle using legacy flag (CLI will ask for export password) -openssl pkcs12 -legacy -export -out bundle.p12 -inkey key.pem -in cert.pem -``` - -Create a PEM certificate, for example: -```shell -# Generate certificate signing request (csr) and key -openssl req -new -newkey rsa:4096 -nodes -keyout cert.key -out cert.csr - -# Generate PEM and self-sign with key -openssl x509 -req -sha256 -days 365 -in cert.csr -signkey cert.key -out signed.pem - -# Create bundle -cat cert.key > bundle.pem -cat signed.pem >> bundle.pem -``` - -1. Return to Azure. -1. Navigate to the **Certificates & secrets** menu. Select the **Certificates** tab. -1. Click **Upload certificate**. - 1. If you're using a PEM certificate that was created using the example commands above, upload `signed.pem`. - 1. If you're using a pkcs12 certificate that was created using the example commands above, upload `cert.pem`. -1. Upload the certificate bundle to the VM where you will deploy ((agent)). - 1. If you're using a PEM certificate that was created using the example commands above, upload `bundle.pem`. - 1. If you're using a pkcs12 certificate that was created using the example commands above, upload `bundle.p12`. -1. Return to the **Add CSPM** page in ((kib)). -1. For **Client Certificate Path**, enter the full path to the certificate that you uploaded to the host where you will install ((agent)). -1. If you used a pkcs12 certificate, enter its password under **Client Certificate Password**. -1. Under **Where to add this integration**, select **New hosts**. -1. Click **Save and continue**, then follow the instructions to install ((agent)) on your selected host. - -Wait for the confirmation that ((kib)) received data from your new integration. Then you can click **View Assets** to see your data. 
\ No newline at end of file diff --git a/docs/serverless/cloud-native-security/cspm-get-started-gcp.mdx b/docs/serverless/cloud-native-security/cspm-get-started-gcp.mdx deleted file mode 100644 index a72b65a3b7..0000000000 --- a/docs/serverless/cloud-native-security/cspm-get-started-gcp.mdx +++ /dev/null @@ -1,177 +0,0 @@ ---- -slug: /serverless/security/cspm-get-started-gcp -title: Get started with CSPM for GCP -description: Start monitoring the security posture of your GCP cloud assets. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
- -
- -## Overview - -This page explains how to get started monitoring the security posture of your cloud assets using the Cloud Security Posture Management (CSPM) feature. - - - -* CSPM only works in the `Default` ((kib)) space. Installing the CSPM integration on a different ((kib)) space will not work. -* CSPM is supported only on AWS, GCP, and Azure commercial cloud platforms, and AWS GovCloud. Other government cloud platforms are not supported ([request support](https://github.com/elastic/kibana/issues/new/choose)). -* To view posture data, you need the appropriate user role to read the following ((es)) indices: - * `logs-cloud_security_posture.findings_latest-*` - * `logs-cloud_security_posture.scores-*` - * `logs-cloud_security_posture.findings` -* The user who gives the CSPM integration GCP permissions must be a GCP project `admin`. - - - -
- -## Initial setup - -You can set up CSPM for GCP either by enrolling a single project, or by enrolling an organization containing multiple projects. Either way, you need to first add the CSPM integration, then enable cloud account access. - -
- -### Add your CSPM integration -1. From the Elastic Security **Get started** page, click **Add integrations**. -1. Search for `CSPM`, then click on the result. -1. Click **Add Cloud Security Posture Management (CSPM)**. -1. Under **Configure integration**, select **GCP**, then either **GCP Organization** (recommended) or **Single Account**. -1. Give your integration a name that matches the purpose or team of the GCP account you want to monitor, for example, `dev-gcp-project`. - -
- -### Set up cloud account access -To set up CSPM for a GCP project, you need admin privileges for the project. - -For most users, the simplest option is to use a Google Cloud Shell script to automatically provision the necessary resources and permissions in your GCP account. This method, as well as two manual options, are described below. - -
- -## Cloud Shell script setup (recommended) - -1. Under **Setup Access**, select **Google Cloud Shell**. Enter your GCP Project ID, and for GCP Organization deployments, your GCP Organization ID. -1. Under **Where to add this integration**: - 1. Select **New Hosts**. - 1. Name the ((agent)) policy. Use a name that matches the purpose or team of the cloud account or accounts you want to monitor. For example, `dev-gcp-account`. - 1. Click **Save and continue**, then **Add ((agent)) to your hosts**. The **Add agent** wizard appears and provides ((agent)) binaries, which you can download and deploy to a VM in your GCP account. -1. Click **Save and continue**. -1. Copy the command that appears, then click **Launch Google Cloud Shell**. It opens in a new window. -1. Check the box to trust Elastic's `cloudbeat` repo, then click **Confirm** - - ![The cloud shell confirmation popup](../images/cspm-get-started-gcp/-cloud-native-security-cspm-cloudshell-trust.png) - -1. In Google Cloud Shell, execute the command you copied. Once it finishes, return to ((kib)) and wait for the confirmation of data received from your new integration. Then you can click **View Assets** to see your data. - - -During Cloud Shell setup, the CSPM integration adds roles to Google's default service account, which enables custom role creation and attachment of the service account to a compute instance. -After setup, these roles are removed from the service account. If you attempt to delete the deployment but find the deployment manager lacks necessary permissions, consider adding the missing roles to the service account: -[Project IAM Admin](https://cloud.google.com/iam/docs/understanding-roles#resourcemanager.projectIamAdmin), [Role Administrator](https://cloud.google.com/iam/docs/understanding-roles#iam.roleAdmin). - - -
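If you do need to add those roles back before deleting the deployment, a sketch with gcloud might look like this (it assumes the Deployment Manager default service account, `<PROJECT_NUMBER>@cloudservices.gserviceaccount.com`; confirm which service account your deployment actually uses):

```shell
# Grant the roles needed to delete the deployment; remove them again afterward
gcloud projects add-iam-policy-binding <PROJECT_ID> \
  --member="serviceAccount:<PROJECT_NUMBER>@cloudservices.gserviceaccount.com" \
  --role="roles/resourcemanager.projectIamAdmin"

gcloud projects add-iam-policy-binding <PROJECT_ID> \
  --member="serviceAccount:<PROJECT_NUMBER>@cloudservices.gserviceaccount.com" \
  --role="roles/iam.roleAdmin"
```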
- -## Manual authentication (GCP organization) - -To authenticate manually to monitor a GCP organization, you'll need to create a new GCP service account, assign it the necessary roles, generate credentials, then provide those credentials to the CSPM integration. - -Use the following commands, after replacing `<SERVICE_ACCOUNT_NAME>` with the name of your new service account, `<ORGANIZATION_ID>` with your GCP organization's ID, and `<PROJECT_ID>` with the GCP project ID of the project where you want to provision the compute instance that will run CSPM. - -Create a new service account: - -```shell -gcloud iam service-accounts create <SERVICE_ACCOUNT_NAME> \ - --description="Elastic agent service account for CSPM" \ - --display-name="Elastic agent service account for CSPM" \ - --project=<PROJECT_ID> -``` - -Assign the necessary roles to the service account: - -```shell -gcloud organizations add-iam-policy-binding <ORGANIZATION_ID> \ - --member=serviceAccount:<SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com \ - --role=roles/cloudasset.viewer - -gcloud organizations add-iam-policy-binding <ORGANIZATION_ID> \ - --member=serviceAccount:<SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com \ - --role=roles/browser -``` - -The `Cloud Asset Viewer` role grants read access to cloud asset metadata. The `Browser` role grants read access to the project hierarchy. - -Download the credentials JSON (first, replace `<KEY_FILE_PATH>` with the location where you want to save it): - -```shell -gcloud iam service-accounts keys create <KEY_FILE_PATH> \ - --iam-account=<SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com -``` - -Keep the credentials JSON in a secure location; you will need it later. - -Provide credentials to the CSPM integration: - -1. On the CSPM setup screen under **Setup Access**, select **Manual**. -2. Enter your GCP **Organization ID**. Enter the GCP **Project ID** of the project where you want to provision the compute instance that will run CSPM. -3. Select **Credentials JSON**, and enter the value you generated earlier. -4. Under **Where to add this integration**, select **New Hosts**. -5. Name the ((agent)) policy. Use a name that matches the purpose or team of the cloud account or accounts you want to monitor. For example, `dev-gcp-account`. -6. Click **Save and continue**, then follow the instructions to install ((agent)) in your chosen GCP project. - -Wait for the confirmation that ((kib)) received data from your new integration. Then you can click **View Assets** to see your data. - -
- -## Manual authentication (GCP project) - -To authenticate manually to monitor an individual GCP project, you'll need to create a new GCP service account, assign it the necessary roles, generate credentials, then provide those credentials to the CSPM integration. - -Use the following commands, after replacing `<SERVICE_ACCOUNT_NAME>` with the name of your new service account, and `<PROJECT_ID>` with your GCP project ID. - -Create a new service account: - -```shell -gcloud iam service-accounts create <SERVICE_ACCOUNT_NAME> \ - --description="Elastic agent service account for CSPM" \ - --display-name="Elastic agent service account for CSPM" \ - --project=<PROJECT_ID> -``` - -Assign the necessary roles to the service account: - -```shell -gcloud projects add-iam-policy-binding <PROJECT_ID> \ - --member=serviceAccount:<SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com \ - --role=roles/cloudasset.viewer - -gcloud projects add-iam-policy-binding <PROJECT_ID> \ - --member=serviceAccount:<SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com \ - --role=roles/browser -``` - - -The `Cloud Asset Viewer` role grants read access to cloud asset metadata. The `Browser` role grants read access to the project hierarchy. - - -Download the credentials JSON (first, replace `<KEY_FILE_PATH>` with the location where you want to save it): - -```shell -gcloud iam service-accounts keys create <KEY_FILE_PATH> \ - --iam-account=<SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com -``` - -Keep the credentials JSON in a secure location; you will need it later. - -Provide credentials to the CSPM integration: - -1. On the CSPM setup screen under **Setup Access**, select **Manual**. -2. Enter your GCP **Project ID**. -3. Select **Credentials JSON**, and enter the value you generated earlier. -4. Under **Where to add this integration**, select **New Hosts**. -5. Name the ((agent)) policy. Use a name that matches the purpose or team of the cloud account or accounts you want to monitor. For example, `dev-gcp-account`. -6. Click **Save and continue**, then follow the instructions to install ((agent)) in your chosen GCP project. - -Wait for the confirmation that ((kib)) received data from your new integration. Then you can click **View Assets** to see your data. \ No newline at end of file diff --git a/docs/serverless/cloud-native-security/cspm-get-started.mdx b/docs/serverless/cloud-native-security/cspm-get-started.mdx deleted file mode 100644 index 0bfd0242cf..0000000000 --- a/docs/serverless/cloud-native-security/cspm-get-started.mdx +++ /dev/null @@ -1,306 +0,0 @@ ---- -slug: /serverless/security/cspm-get-started -title: Get started with CSPM for AWS -description: Start monitoring the security posture of your AWS cloud assets. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review --- - - -
- -
- -## Overview - -This page explains how to get started monitoring the security posture of your cloud assets using the Cloud Security Posture Management (CSPM) feature. - - - -* CSPM only works in the `Default` ((kib)) space. Installing the CSPM integration on a different ((kib)) space will not work. -* CSPM is supported only on AWS, GCP, and Azure commercial cloud platforms, and AWS GovCloud. Other government cloud platforms are not supported ([request support](https://github.com/elastic/kibana/issues/new/choose)). -* To view posture data, you need the appropriate user role to read the following ((es)) indices: - * `logs-cloud_security_posture.findings_latest-*` - * `logs-cloud_security_posture.scores-*` - * `logs-cloud_security_posture.findings` -* The user who gives the CSPM integration AWS permissions must be an AWS account `admin`. - - - -
- -## Set up CSPM for AWS - -You can set up CSPM for AWS either by enrolling a single cloud account, or by enrolling an organization containing multiple accounts. Either way, first you will add the CSPM integration, then enable cloud account access. - -
- -## Add the CSPM integration -1. From the Elastic Security **Get started** page, click **Add integrations**. -1. Search for `CSPM`, then click on the result. -1. Click **Add Cloud Security Posture Management (CSPM)**. -1. Select **AWS**, then either **AWS Organization** to onboard multiple accounts, or **Single Account** to onboard an individual account. -1. Give your integration a name that matches the purpose or team of the AWS account/organization you want to monitor, for example, `dev-aws-account`. - -
- -## Set up cloud account access -The CSPM integration requires access to AWS’s built-in [`SecurityAudit` IAM policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_security-auditor) in order to discover and evaluate resources in your cloud account. There are several ways to provide access. - -For most use cases, the simplest option is to use AWS CloudFormation to automatically provision the necessary resources and permissions in your AWS account. This method, as well as several manual options, are described below. - -
- -### CloudFormation (recommended) -1. In the **Add Cloud Security Posture Management (CSPM) integration** menu, under **Setup Access**, select **CloudFormation**. -1. In a new browser tab or window, log in as an admin to the AWS account or organization you want to onboard. -1. Return to your ((kib)) tab. Click **Save and continue** at the bottom of the page. -1. Review the information, then click **Launch CloudFormation**. -1. A CloudFormation template appears in a new browser tab. -1. For organization-level deployments only, you must enter the ID of the organizational unit where you want to deploy into the `OrganizationalUnitIds` field in the CloudFormation template. You can find it in the AWS console under **AWS Organizations → AWS Accounts** (it appears under the organization name). -1. (Optional) Switch to the AWS region where you want to deploy using the controls in the upper right corner. -1. Tick the checkbox under **Capabilities** to authorize the creation of necessary resources. - - ![The Add permissions screen in AWS](../images/cspm-get-started/-cloud-native-security-cspm-cloudformation-template.png) - -1. At the bottom of the template, select **Create stack**. - -When you return to ((kib)), click **View assets** to review the data being collected by your new integration. - -
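If you would rather deploy the stack from the command line than the console, the launch step can be approximated like this (the template file and parameter names here are placeholders; use the template and parameter values that ((kib)) generates for your integration):

```shell
# Deploy the Kibana-generated template; IAM resources require the capability flag
aws cloudformation create-stack \
  --stack-name elastic-cspm \
  --template-body file://elastic-cspm-template.yml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=<FLEET_URL_PARAMETER>,ParameterValue=<FLEET_URL> \
               ParameterKey=<ENROLLMENT_TOKEN_PARAMETER>,ParameterValue=<ENROLLMENT_TOKEN>
```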
- -### Manual authentication for organization-level onboarding - - -If you're onboarding a single account instead of an organization, skip this section. - - -When using manual authentication to onboard at the organization level, you need to configure the necessary permissions using the AWS console for the organization where you want to deploy: - -* In the organization's management account (root account), create an IAM role called `cloudbeat-root` (the name is important). The role needs several policies: - - * The following inline policy: - - - -``` -{ - "Version": "2012-10-17", - "Statement": [ - { - "Action": [ - "organizations:List*", - "organizations:Describe*" - ], - "Resource": "*", - "Effect": "Allow" - }, - { - "Action": [ - "sts:AssumeRole" - ], - "Resource": "*", - "Effect": "Allow" - } - ] -} -``` - - - - * The following trust policy: - - - -``` -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::<YOUR_ACCOUNT_ID>:root" - }, - "Action": "sts:AssumeRole" - }, - { - "Effect": "Allow", - "Principal": { - "Service": "ec2.amazonaws.com" - }, - "Action": "sts:AssumeRole" - } - ] -} -``` - - - - * The AWS-managed `SecurityAudit` policy. - - -You must replace `<YOUR_ACCOUNT_ID>` in the trust policy with your AWS account ID. - - -* Next, for each account you want to scan in the organization, create an IAM role named `cloudbeat-securityaudit` with the following policies: - * The AWS-managed `SecurityAudit` policy. - * The following trust policy: - - - -``` -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::<MANAGEMENT_ACCOUNT_ID>:role/cloudbeat-root" - }, - "Action": "sts:AssumeRole" - } - ] -} -``` - - - - -You must replace `<MANAGEMENT_ACCOUNT_ID>` in the trust policy with the AWS account ID of the management account where you created `cloudbeat-root`. - - -After creating the necessary roles, authenticate using one of the manual authentication methods. - - -When deploying to an organization using any of the authentication methods below, you need to make sure that the credentials you provide grant permission to assume the `cloudbeat-root` role. - - -
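As a rough CLI alternative to the console steps above, the `cloudbeat-root` role could be created like this (it assumes you saved the inline policy and trust policy shown above as local JSON files with the names used below):

```shell
# Create the role in the management account with the trust policy shown above
aws iam create-role \
  --role-name cloudbeat-root \
  --assume-role-policy-document file://cloudbeat-root-trust-policy.json

# Attach the inline policy and the AWS-managed SecurityAudit policy
aws iam put-role-policy \
  --role-name cloudbeat-root \
  --policy-name cloudbeat-root-permissions \
  --policy-document file://cloudbeat-root-inline-policy.json

aws iam attach-role-policy \
  --role-name cloudbeat-root \
  --policy-arn arn:aws:iam::aws:policy/SecurityAudit
```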
- -## Manual authentication methods - -* Default instance role (recommended) -* Direct access keys -* Temporary security credentials -* Shared credentials file -* IAM role Amazon Resource Name (ARN) - - -Whichever method you use to authenticate, make sure AWS’s built-in [`SecurityAudit` IAM policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_security-auditor) is attached. - - -
- -### Option 1 - Default instance role - - -If you are deploying to an AWS organization instead of an AWS account, you should already have created a new role, `cloudbeat-root`. Skip to step 2 "Attach your new IAM role to an EC2 instance", and attach this role. You can use either an existing or new EC2 instance. - - -Follow AWS's [IAM roles for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) documentation to create an IAM role using the IAM console, which automatically generates an instance profile. - -1. Create an IAM role: - 1. In AWS, go to your IAM dashboard. Click **Roles**, then **Create role**. - 1. On the **Select trusted entity** page, under **Trusted entity type**, select **AWS service**. - 1. Under **Use case**, select **EC2**. Click **Next**. - - ![The Select trusted entity screen in AWS](../images/cspm-get-started/-cloud-native-security-cspm-aws-auth-1.png) - - 1. On the **Add permissions** page, search for and select `SecurityAudit`. Click **Next**. - - ![The Add permissions screen in AWS](../images/cspm-get-started/-cloud-native-security-cspm-aws-auth-2.png) - - 1. On the **Name, review, and create** page, name your role, then click **Create role**. -1. Attach your new IAM role to an EC2 instance: - 1. In AWS, select an EC2 instance. - 1. Select **Actions → Security → Modify IAM role**. - - ![The EC2 page in AWS, showing the Modify IAM role option](../images/cspm-get-started/-cloud-native-security-cspm-aws-auth-3.png) - - 1. On the **Modify IAM role** page, search for and select your new IAM role. - 1. Click **Update IAM role**. - 1. Return to ((kib)) and finish manual setup. - - -Make sure to deploy the CSPM integration to this EC2 instance. When completing setup in Kibana, in the **Setup Access** section, select **Assume role** and leave **Role ARN** empty. Click **Save and continue**. - - -
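If the instance is easier to reach from the CLI than the console, attaching the role can be sketched as follows (this assumes the console-created role has an instance profile with the same name, which is the default when the role is created through the IAM console):

```shell
# Attach the instance profile that wraps your IAM role to the EC2 instance
aws ec2 associate-iam-instance-profile \
  --instance-id <INSTANCE_ID> \
  --iam-instance-profile Name=<ROLE_NAME>
```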
- -### Option 2 - Direct access keys -Access keys are long-term credentials for an IAM user or AWS account root user. To use access keys as credentials, you must provide the `Access key ID` and the `Secret Access Key`. After you provide credentials, finish manual setup. - -For more details, refer to [Access Keys and Secret Access Keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html). - - -You must select **Programmatic access** when creating the IAM user. - - -
- -### Option 3 - Temporary security credentials -You can configure temporary security credentials in AWS to last for a specified duration. They consist of an access key ID, a secret access key, and a security token, which you typically obtain by calling `GetSessionToken`. - -Because temporary security credentials are short term, once they expire, you will need to generate new ones and manually update the integration's configuration to continue collecting cloud posture data. Update the credentials before they expire to avoid data loss. - - -IAM users with multi-factor authentication (MFA) enabled need to submit an MFA code when calling `GetSessionToken`. For more details, refer to AWS's [Temporary Security Credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) documentation. - - -You can use the AWS CLI to generate temporary credentials. For example, you could use the following command if you have MFA enabled: - -```console -aws sts get-session-token --serial-number arn:aws:iam::1234:mfa/your-email@example.com --duration-seconds 129600 --token-code 123456 -``` - -The output from this command includes the following fields, which you should provide when configuring the CSPM integration: - -* `Access key ID`: The first part of the access key. -* `Secret Access Key`: The second part of the access key. -* `Session Token`: The required token when using temporary security credentials. - -After you provide credentials, finish manual setup. - -
- -### Option 4 - Shared credentials file -If you use different AWS credentials for different tools or applications, you can use profiles to define multiple access keys in the same configuration file. For more details, refer to AWS' [Shared Credentials Files](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html) documentation. - -Instead of providing the `Access key ID` and `Secret Access Key` to the integration, provide the information required to locate the access keys within the shared credentials file: - -* `Credential Profile Name`: The profile name in the shared credentials file. -* `Shared Credential File`: The directory of the shared credentials file. - -If you don't provide values for all configuration fields, the integration will use these defaults: - -- If `Access key ID`, `Secret Access Key`, and `ARN Role` are not provided, then the integration will check for `Credential Profile Name`. -- If there is no `Credential Profile Name`, the default profile will be used. -- If `Shared Credential File` is empty, the default directory will be used. - - For Linux or Unix, the shared credentials file is located at `~/.aws/credentials`. - -After providing credentials, finish manual setup. - -
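For reference, adding a named profile to a shared credentials file might look like the following sketch (the profile name and keys are placeholders):

```shell
# Append a profile to the default shared credentials file (~/.aws/credentials on Linux or Unix)
cat >> ~/.aws/credentials <<'EOF'
[elastic-cspm]
aws_access_key_id = <ACCESS_KEY_ID>
aws_secret_access_key = <SECRET_ACCESS_KEY>
EOF
```

With this example, you would enter `elastic-cspm` as the `Credential Profile Name` and could leave `Shared Credential File` empty to use the default location.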
- -### Option 5 - IAM role Amazon Resource Name (ARN) -An IAM role Amazon Resource Name (ARN) is an IAM identity that you can create in your AWS account. You define the role's permissions. Roles do not have standard long-term credentials such as passwords or access keys. Instead, when you assume a role, it provides temporary security credentials for your session. - -To use an IAM role ARN, select **Assume role** under **Preferred manual method**, enter the ARN, and continue to Finish manual setup. - -
- -## Finish manual setup -Once you’ve provided AWS credentials, under **Where to add this integration**: - -If you want to monitor an AWS account or organization where you have not yet deployed ((agent)): - -* Select **New Hosts**. -* Name the ((agent)) policy. Use a name that matches the purpose or team of the cloud account or accounts you want to monitor. For example, `dev-aws-account`. -* Click **Save and continue**, then **Add ((agent)) to your hosts**. The **Add agent** wizard appears and provides ((agent)) binaries, which you can download and deploy to your AWS account. - -If you want to monitor an AWS account or organization where you have already deployed ((agent)): - -* Select **Existing hosts**. -* Select an agent policy that applies to the AWS account you want to monitor. -* Click **Save and continue**. - diff --git a/docs/serverless/cloud-native-security/cspm-security-posture-faq.mdx b/docs/serverless/cloud-native-security/cspm-security-posture-faq.mdx deleted file mode 100644 index 7070fff474..0000000000 --- a/docs/serverless/cloud-native-security/cspm-security-posture-faq.mdx +++ /dev/null @@ -1,85 +0,0 @@ ---- -slug: /serverless/security/cspm-security-posture-faq -title: Frequently asked questions (FAQ) -description: Frequently asked questions about the CSPM and KSPM integrations. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review --- - - -
- -## CSPM FAQ -Frequently asked questions about the Cloud Security Posture Management (CSPM) integration and features. - -**How often is my cloud security posture evaluated?** - -Cloud accounts are evaluated when you first deploy the CSPM integration and every 24 hours afterward. - -**Can I onboard multiple accounts at one time?** - -Yes. Follow the onboarding instructions in the getting started guides for AWS, GCP, or Azure. - -**When do newly enrolled cloud accounts appear on the dashboard?** - -After you deploy the CSPM integration, it can take up to 10 minutes for resource fetching, evaluation, and data processing before a newly enrolled account appears on the Cloud Security Posture dashboard. - -**When do unenrolled cloud accounts disappear from the dashboard?** - -Newly unenrolled cloud accounts can take a maximum of 24 hours to disappear from the Cloud Security Posture dashboard. - -## KSPM FAQ -Frequently asked questions about the Kubernetes Security Posture Management (KSPM) integration and features. - -**What versions of Kubernetes are supported?** - -For self-managed/vanilla clusters, Kubernetes version 1.23 is supported. - -For EKS clusters, all Kubernetes versions available at the time of cluster deployment are supported. - -**Do benchmark rules support multiple Kubernetes deployment types?** -Yes. There are different sets of benchmark rules for self-managed and third party-managed deployments. Refer to Get started with KSPM for more information about setting up each deployment type. - -**Can I evaluate the security posture of my Amazon EKS clusters?** -Yes. KSPM currently supports the security posture evaluation of Amazon EKS and unmanaged Kubernetes clusters. - -**How often is my cluster’s security posture evaluated?** -Clusters are evaluated when you deploy a KSPM integration, and every four hours after that. - -**When do newly-enrolled clusters appear on the dashboard?** -It can take up to 10 minutes for deployment, resource fetching, evaluation, and data processing to complete before a newly-enrolled cluster appears on the dashboard. - -**When do unenrolled clusters disappear from the dashboard?** -A cluster will disappear as soon as the KSPM integration fetches data while that cluster is not enrolled. The fetch process repeats every four hours, which means a newly unenrolled cluster can take a maximum of four hours to disappear from the dashboard. - -## Findings page - -**Are all the findings page current?** -Yes. Only the most recent findings appear on the Findings page. - -**Can I build custom visualizations and dashboards that incorporate findings data?** -Yes, you can use custom visualization capabilities with findings data. To learn more, refer to [Dashboards and visualizations](((kibana-ref))/dashboard.html). - -**Where is Findings data saved?** -You can access findings data using the following index patterns: - -* **Current findings:** `logs-cloud_security_posture.findings_latest-*` -* **Historical findings:** `logs-cloud_security_posture.findings-*` - -## Benchmark rules - -**How often are my resources evaluated against benchmark rules?** -Resources are fetched and evaluated against benchmark rules when a security posture management integration is deployed. After that, the CSPM integration evaluates every 24 hours, and the KSPM integration evaluates every four hours. - -**Can I configure an integration's fetch cycle?** -No, the four-hour fetch cycle is not configurable. - -**Can I contribute to the CSP ruleset?** -You can't directly edit benchmark rules. 
The rules are defined [in this repository](https://github.com/elastic/csp-security-policies), where you can raise issues with certain rules. They are written in [Rego](https://www.openpolicyagent.org/docs/latest/policy-language/). - -**How can I tell which specific version of the CIS benchmarks is in use?** -Refer to the `rule.benchmark.name` and `rule.benchmark.version` fields for documents in these datastreams: - -* `logs-cloud_security_posture.findings-default` -* `logs-cloud_security_posture.findings_latest-default` - diff --git a/docs/serverless/cloud-native-security/cspm.mdx b/docs/serverless/cloud-native-security/cspm.mdx deleted file mode 100644 index 6c57adf859..0000000000 --- a/docs/serverless/cloud-native-security/cspm.mdx +++ /dev/null @@ -1,28 +0,0 @@ ---- -slug: /serverless/security/cspm -title: Cloud security posture management -description: Identify misconfigured cloud resources. -tags: [ 'serverless', 'security', 'overview' ] -status: in review ---- - - -
- -The Cloud Security Posture Management (CSPM) feature discovers and evaluates the services in your cloud environment — like storage, compute, IAM, and more — against configuration security guidelines defined by the [Center for Internet Security](https://www.cisecurity.org/) (CIS) to help you identify and remediate risks that could undermine the confidentiality, integrity, and availability of your cloud data. - -This feature currently supports Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. For step-by-step getting started guides, refer to Get started with CSPM for AWS, Get started with CSPM for GCP, or Get started with CSPM for Azure. - - - -* CSPM only works in the `Default` ((kib)) space. Installing the CSPM integration on a different ((kib)) space will not work. -* CSPM is supported only on AWS, GCP, and Azure commercial cloud platforms, and AWS GovCloud. Other government cloud platforms are not supported ([request support](https://github.com/elastic/kibana/issues/new/choose)). - - - -
- -## How CSPM works - -Using the read-only credentials you provide during setup, the CSPM integration evaluates the configuration of resources in your environment every 24 hours. -After each evaluation, the integration sends findings to Elastic. A high-level summary of the findings appears on the Cloud Security Posture dashboard, and detailed findings appear on the Findings page. diff --git a/docs/serverless/cloud-native-security/d4c-get-started.mdx b/docs/serverless/cloud-native-security/d4c-get-started.mdx deleted file mode 100644 index baed5754df..0000000000 --- a/docs/serverless/cloud-native-security/d4c-get-started.mdx +++ /dev/null @@ -1,93 +0,0 @@ ---- -slug: /serverless/security/d4c-get-started -title: Get started with CWP -description: Secure your containerized workloads and start detecting threats and vulnerabilities. -tags: ["security","how-to","get-started", "cloud security"] -status: in review --- - - - - - -
- -This page describes how to set up Cloud Workload Protection (CWP) for Kubernetes. - - - -- Kubernetes node operating systems must have Linux kernels 5.10.16 or higher. - - - -## Initial setup - -First, you'll need to deploy Elastic's Defend for Containers integration to the Kubernetes clusters you wish to monitor. - -1. Go to **Assets → Cloud**, then click **Add D4C Integration**. -1. Name the integration. The default name, which you can change, is `cloud_defend-1`. -1. Optional — make any desired changes to the integration's policy by adjusting the **Selectors** and **Responses** sections. (For more information, refer to the Defend for Containers policy guide). You can also change these later. -1. Under **Where to add this integration**, select an existing or new agent policy. -1. Click **Save & Continue**, then **Add ((agent)) to your hosts**. -1. On the ((agent)) policy page, click **Add agent** to open the Add agent flyout. -1. In the flyout, go to step 3 (**Install ((agent)) on your host**) and select the **Kubernetes** tab. -1. Download or copy the manifest (`elastic-agent-managed-kubernetes.yml`). -1. Open the manifest using your favorite editor, and uncomment the `#capabilities` section: - - ```console - #capabilities: - # add: - # - BPF # (since Linux 5.8) allows loading of BPF programs, create most map types, load BTF, iterate programs and maps. - # - PERFMON # (since Linux 5.8) allows attaching of BPF programs used for performance metrics and observability operations. - # - SYS_RESOURCE # Allow use of special resources or raising of resource limits. Used by 'Defend for Containers' to modify 'rlimit_memlock' - ``` - -1. From the directory where you saved the manifest, run the command `kubectl apply -f elastic-agent-managed-kubernetes.yml`. -1. Wait for the **Confirm agent enrollment** dialogue to show that data has started flowing from your newly-installed agent, then click **Close**. - -
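Before checking the enrollment confirmation in ((kib)), you can also verify the deployment from the cluster side; a quick sketch, assuming the manifest's default `kube-system` namespace, `elastic-agent` DaemonSet name, and `app: elastic-agent` pod label:

```shell
# Confirm the Elastic Agent DaemonSet is running on every node
kubectl get daemonset elastic-agent -n kube-system

# Inspect the agent pods if any of them are not ready
kubectl get pods -n kube-system -l app=elastic-agent
```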
- -## Get started with threat detection - -One of the default D4C policies sends process telemetry events (`fork` and `exec`) to ((es)). - -In order to detect threats using this data, you'll need active detection rules. Elastic has prebuilt detection rules designed for this data. (You can also create your own custom rules.) - -To install and enable the prebuilt rules: - -1. Go to **Security → Rules → Detection rules (SIEM)**, then click **Add Elastic rules**. -1. Click the **Tags** filter next to the search bar, and search for the `Data Source: Elastic Defend for Containers` tag. -1. Select all the displayed rules, then click **Install _x_ selected rule(s)**. -1. Return to the **Rules** page. Click the **Tags** filter next to the search bar, and search for the `Data Source: Elastic Defend for Containers` tag. -1. Select all the rules with the tag, and then click **Bulk actions → Enable**. - -
- -## Get started with drift detection and prevention - -((elastic-sec)) defines container drift as the creation or modification of an executable within a container. Blocking drift restricts the number of attack vectors available to bad actors by prohibiting them from using external tools. - -To enable drift detection, you can use the default D4C policy: - -1. Make sure the default D4C policy is active. -1. Make sure you enabled at least the "Container Workload Protection" rule, by following the steps to install prebuilt rules, above. - -To enable drift prevention, create a new policy: - -1. Add a new selector called `blockDrift`. -1. Go to **Security → Manage → Container Workload Security → Your integration name**. -1. Under **Selectors**, click **Add selector → File Selector**. By default, it selects the operations `createExecutable` and `modifyExecutable`. -1. Name the selector, for example: `blockDrift`. -1. Scroll down to the **Responses** section and click **Add response → File Response**. -1. Under **Match selectors**, add the name of your new selector, for example: `blockDrift`. -1. Select the **Alert** and **Block** actions. -1. Click **Save integration**. - - -Before you enable blocking, we strongly recommend you observe a production workload that's using the default D4C policy to ensure that the workload does not create or modify executables as part of its normal operation. - - -
- -## Policy validation -To ensure the stability of your production workloads, you should test policy changes before implementing them in production workloads. We also recommend you test policy changes on a simulated environment with workloads similar to production. This approach allows you to test that policy changes prevent undesirable behavior without disrupting your production workloads. diff --git a/docs/serverless/cloud-native-security/d4c-overview.mdx b/docs/serverless/cloud-native-security/d4c-overview.mdx deleted file mode 100644 index 9cfc66c674..0000000000 --- a/docs/serverless/cloud-native-security/d4c-overview.mdx +++ /dev/null @@ -1,52 +0,0 @@ ---- -slug: /serverless/security/d4c-overview -title: Container workload protection -description: Identify and block unexpected system behavior in Kubernetes containers. -tags: ["security","cloud","reference","manage"] -status: in review ---- - - - - - -
- -Elastic Cloud Workload Protection (CWP) for Kubernetes provides cloud-native runtime protections for containerized environments by identifying and optionally blocking unexpected system behavior in Kubernetes containers. - -
- -## Use cases - -### Threat detection & threat hunting -CWP for Kubernetes sends system events from your containers to ((es)). ((elastic-sec))'s prebuilt security rules include many designed to detect malicious behavior in container runtimes. These can help you detect events that should never occur in containers, such as reverse shell executions, privilege escalation, container escape attempts, and more. - -### Drift detection & prevention -Cloud-native containers should be immutable, meaning that their file systems should not change during normal operations. By leveraging this principle, security teams can detect unusual system behavior with a high degree of accuracy — without relying on more resource-intensive techniques like memory scanning or attack signature detection. Elastic’s Drift Detection mechanism has a low rate of false positives, so you can deploy it in most environments without worrying about creating excessive alerts. - -### Workload protection policies -CWP for Kubernetes uses a flexible policy language to restrict container workloads to a set of allowlisted capabilities chosen by you. When employed with Drift and Threat Detection, this can provide multiple layers of defense. - -## Support matrix: -| | EKS 1.24-1.27 (AL2022) | GKE 1.24-1.27 (COS) | -|---|---|---| -| Process event exports | ✓ | ✓ | -| Network event exports | ✓ | ✓ | -| File event exports | ✓ | ✓ | -| File blocking | ✓ | ✓ | -| Process blocking | ✓ | ✓ | -| Network blocking | ✗ | ✗ | -| Drift prevention | ✓ | ✓ | -| Mount point awareness | ✓ | ✓ | - -## How CWP for Kubernetes works -CWP for Kubernetes uses a lightweight integration, Defend for Containers (D4C). When you set up the D4C integration, it gets deployed by ((agent)). Specifically, the ((agent)) is installed as a DaemonSet on your Kubernetes clusters, where it enables D4C to use eBPF Linux Security Modules ([LSM](https://docs.kernel.org/bpf/prog_lsm.html)) and tracepoint probes to record system events. Events are evaluated against LSM hook points, enabling ((agent)) to evaluate system activity against your policy before allowing it to proceed. - -Your D4C integration policy determines which system behaviors (for example, process execution or file creation or deletion) will result in which actions. _Selectors_ and _responses_ define each policy. Selectors define the conditions which cause the associated responses to run. Responses are associated with one or more selectors, and specify one or more actions (such as `log`, `alert`, or `block`) that will occur when the conditions defined in an associated selector are met. - -The default D4C policy sends data about all running processes to your ((es)) cluster. This data is used by ((elastic-sec))'s prebuilt detection rules to detect malicious behavior in container workloads. - - -To learn more about D4C policies, including how to create your own, refer to the D4C policies guide. - - diff --git a/docs/serverless/cloud-native-security/d4c-policy-guide.mdx b/docs/serverless/cloud-native-security/d4c-policy-guide.mdx deleted file mode 100644 index 0419edd948..0000000000 --- a/docs/serverless/cloud-native-security/d4c-policy-guide.mdx +++ /dev/null @@ -1,121 +0,0 @@ ---- -slug: /serverless/security/d4c-policy-guide -title: Container workload protection policies -description: Learn to build policies for cloud workload protection for Kubernetes. -tags: ["security","cloud","reference","manage","cloud security"] -status: in review ---- - - -
- -To unlock the full functionality of the Defend for Containers (D4C) integration, you'll need to understand its policy syntax. This will enable you to construct policies that precisely allow expected container behaviors and prevent unexpected behaviors — thereby hardening your container workloads' security posture. - -D4C integration policies consist of _selectors_ and _responses_. Each policy must contain at least one selector and one response. Currently, the system supports two types of selectors and responses: `file` and `process`. -Selectors define which system operations to match and can include multiple conditions (grouped using a logical `AND`) to precisely select events. Responses define which actions to take when a system operation matches the conditions specified in an associated selector. - -The default policy described on this page provides an example that's useful for understanding D4C policies in general. Following the description, you'll find a comprehensive glossary of selector conditions, response fields, and actions. - -
-
-## Default policy
-The default D4C integration policy includes two selector-response pairs. It is designed to implement core container workload protection capabilities:
-
-- **Threat Detection:** The first selector-response pair is designed to stream process telemetry data to your ((es)) cluster so ((elastic-sec)) can evaluate it to detect threats. Both the selector and response are named `allProcesses`. The selector selects all `fork` and `exec` events. The associated response specifies that selected events should be logged.
-- **Drift Detection & Prevention:** The second selector-response pair is designed to create alerts when container drift is detected. Both the selector and response are named `executableChanges`. The selector selects all `createExecutable` and `modifyExecutable` events. The associated response specifies that the selected events should create alerts, which are sent to your ((es)) cluster. You can modify the response to block drift operations by adding the `block` action.
-
-![The Defend for Containers policy editor showing the default policy](../images/d4c-policy-guide/-cloud-native-security-d4c-policy-editor.png)
-
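-Based on the selector and response fields documented in the glossary below, the default policy's two selector-response pairs can be sketched roughly as follows. This is an illustrative example only; the exact YAML generated by the policy editor may differ:
-
-```yaml
-process:
-  selectors:
-    # Threat Detection: select every fork and exec event
-    - name: allProcesses
-      operation: [fork, exec]
-  responses:
-    # Stream the selected process events to Elasticsearch
-    - match: [allProcesses]
-      actions: [log]
-file:
-  selectors:
-    # Drift Detection: select executable creation and modification
-    - name: executableChanges
-      operation: [createExecutable, modifyExecutable]
-  responses:
-    # Alert on drift; add block to prevent it as well
-    - match: [executableChanges]
-      actions: [alert]
-```
-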
-
-## Selectors
-A selector requires a name and at least one operation. It will select all events of the specified operation types, unless you also include _conditions_ to narrow down the selection. Some conditions are available for both `file` and `process` selectors, while others are only available for one type of selector.
-
-### Common conditions
-These conditions are available for both `file` and `process` selectors.
-
-{/* [cols="1,1", options="header"] */}
-| Name | Description |
-|---|---|
-| containerImageFullName | A list of full container image names to match on. For example: `docker.io/nginx`. |
-| containerImageName | A list of container image names to match on. For example: `nginx`. |
-| containerImageTag | A list of container image tags to match on. For example: `latest`. |
-| kubernetesClusterId | A list of Kubernetes cluster IDs to match on. For consistency with KSPM, the `kube-system` namespace's UID is used as a cluster ID. |
-| kubernetesClusterName | A list of Kubernetes cluster names to match on. |
-| kubernetesNamespace | A list of Kubernetes namespaces to match on. |
-| kubernetesPodName | A list of Kubernetes pod names to match on. Trailing wildcards supported. |
-| kubernetesPodLabel | A list of resource labels. Trailing wildcards supported (value only), for example: `key1:val*`. |
-
-### File-selector conditions
-These conditions are available only for `file` selectors.
-
-{/* [cols="1,1", options="header"] */}
-| Name | Description |
-|---|---|
-| operation | The list of system operations to match on. Options include `createExecutable`, `modifyExecutable`, `createFile`, `modifyFile`, `deleteFile`. |
-| ignoreVolumeMounts | If set, ignores file operations on _all_ volume mounts. |
-| ignoreVolumeFiles | If set, ignores operations on file mounts only. For example: mounted files, `configMaps`, and secrets. |
-| targetFilePath | A list of file paths to include. Paths are absolute and wildcards are supported. The `*` wildcard matches any sequence of characters within a single directory, while the `**` wildcard matches any sequence of characters across multiple directories and subdirectories. |
-
-To ensure precise targeting of file integrity monitoring operations, a `targetFilePath` is required whenever the `deleteFile`, `modifyFile`, or `createFile` operations are used within a selector.
-
-### Process-selector conditions
-These conditions are available only for `process` selectors.
-
-{/* [cols="1,1", options="header"] */}
-| Name | Description |
-|---|---|
-| operation | The list of system operations to match on. Options include `fork` and `exec`. |
-| processExecutable | A list of executables (full path included) to match on. For example: `/usr/bin/cat`. Wildcard support is the same as for `targetFilePath` above. |
-| processName | A list of process names (executable basename) to match on. For example: `bash`, `vi`, `cat`. |
-| sessionLeaderInteractive | If set to `true`, only matches interactive sessions (defined as sessions with a controlling TTY). |
-
-### Response fields
-A policy can include one or more responses. Each response consists of the following fields:
-
-{/* [cols="1,1", options="header"] */}
-| Field | Description |
-|---|---|
-| match | An array of one or more selectors of the same type (`file` or `process`). |
-| exclude | Optional. An array of one or more selectors to use as exclusions to everything in `match`. |
-| actions | An array of actions to perform when at least one `match` selector matches and none of the `exclude` selectors match. Options include `log`, `alert`, and `block`. |
-
-### Response actions
-D4C responses can include the following actions:
-
-| Action | Description |
-|---|---|
-| log | Sends events to the `logs-cloud_defend.file-*` data stream for file responses, and the `logs-cloud_defend.process-*` data stream for process responses. |
-| alert | Writes events (file or process) to the `logs-cloud_defend.alerts-*` data stream. |
-| block | Prevents the system operation from proceeding. Blocking happens before the event executes, and the `alert` action must also be set when `block` is enabled. **Note:** Currently, `block` is only supported for file operations. |
-
diff --git a/docs/serverless/cloud-native-security/enable-cloudsec.mdx b/docs/serverless/cloud-native-security/enable-cloudsec.mdx
deleted file mode 100644
index 5a765d5855..0000000000
--- a/docs/serverless/cloud-native-security/enable-cloudsec.mdx
+++ /dev/null
@@ -1,23 +0,0 @@
----
-slug: /serverless/security/enable-cloudsec
-title: Enable cloud security features
-description: Learn to turn on cloud security features in your project
-tags: [ 'serverless', 'security', 'overview' ]
-status: in review
----
-
-To use cloud security features in your ((elastic-sec)) project, you must have the `Cloud Protection Essentials` or `Cloud Protection Complete` options enabled for your project.
-
-To enable these options or check their current status:
-
-1. Click your project name in the upper-left corner of ((kib)). Select **Manage project**.
-
-2. To the right of **Project features**, select **Edit**.
-
-3. Enable the necessary options, then click **Save**.
-
-Continue with cloud security setup.
\ No newline at end of file
diff --git a/docs/serverless/cloud-native-security/environment-variable-capture.mdx b/docs/serverless/cloud-native-security/environment-variable-capture.mdx
deleted file mode 100644
index 2c9557100b..0000000000
--- a/docs/serverless/cloud-native-security/environment-variable-capture.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
----
-slug: /serverless/security/environment-variable-capture
-title: Capture environment variables
-description: Capture environment variables from monitored Linux sessions.
-tags: [ 'serverless', 'security', 'overview', 'cloud security' ]
-status: in review
----
-
- - -You can configure an Elastic Defend policy to capture up to five environment variables (`env vars`). - - - -* Env var names must be no more than 63 characters, and env var values must be no more than 1023 characters. Values outside these limits are silently ignored. - -* Env var names are case sensitive. - - - -To set up environment variable capture for an ((agent)) policy: - -1. Go to **Assets → Fleet → Agent policies**. -1. Select an ((agent)) policy, then the associated Elastic Defend policy. -1. Go to the **Settings** tab, then scroll to the bottom and click **Show advanced settings**. -1. Scroll down or search for `linux.advanced.capture_env_vars`, or `mac.advanced.capture_env_vars`. -1. Enter the names of env vars you want to capture, separated by commas. For example: `PATH,USER` -1. Click **Save**. - -![The "linux.advanced.capture_env_vars" advanced agent policy setting](../images/environment-variable-capture/-cloud-native-security-env-var-capture.png) - -
- -## Find captured environment variables -Captured environment variables are associated with process events, and appear in each event's `process.env_vars` field. - -To view environment variables in the **Events** table: - -1. Click the **Events** tab on the **Hosts**, **Network**, or **Users** pages (**Explore**), then click **Fields** in the Events table. -1. Search for the `process.env_vars` field, select it, and click **Close**. - A new column appears containing captured environment variable data. - -![The Events table with the "process.env_vars" column highlighted](../images/environment-variable-capture/-cloud-native-security-env-var-capture-detail.png) diff --git a/docs/serverless/cloud-native-security/get-started-with-kspm.mdx b/docs/serverless/cloud-native-security/get-started-with-kspm.mdx deleted file mode 100644 index 539aee4fe2..0000000000 --- a/docs/serverless/cloud-native-security/get-started-with-kspm.mdx +++ /dev/null @@ -1,418 +0,0 @@ ---- -slug: /serverless/security/get-started-with-kspm -title: Get started with KSPM -# description: Description to be written -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
- -This page explains how to configure the Kubernetes Security Posture Management (KSPM) integration. - - -* KSPM only works in the `Default` ((kib)) space. Installing the KSPM integration on a different ((kib)) space will not work. -* KSPM is not supported on EKS clusters in AWS GovCloud ([request support](https://github.com/elastic/kibana/issues/new/choose)). -* To view posture data, ensure you have the appropriate user role to read the following ((es)) indices: - -- `logs-cloud_security_posture.findings_latest-*` -- `logs-cloud_security_posture.scores-*` -- `logs-cloud_security_posture.findings` - - - -The instructions differ depending on whether you're installing on EKS or on unmanaged clusters. - -* Install on EKS-managed clusters: - 1. Name your integration and select a Kubernetes deployment type - 1. Authenticate to AWS - 1. Finish configuring the KSPM integration - 1. Deploy the DaemonSet to your clusters - - -* Install on unmanaged clusters: - 1. Configure the KSPM integration - 1. Deploy the DaemonSet manifest to your clusters - -
- -## Set up KSPM for Amazon EKS clusters - -### Name your integration and select a Kubernetes Deployment type - -1. Go to **Dashboards → Cloud Security Posture**. -1. Click **Add a KSPM integration**. -1. Read the integration's description to understand how it works. Then, click [*Add Kubernetes Security Posture Management*](((integrations-docs))/cloud_security_posture). -1. Name your integration. Use a name that matches the purpose or team of the cluster(s) you want to monitor, for example, `IT-dev-k8s-clusters`. -1. Select **EKS** from the **Kubernetes Deployment** menu. A new section for AWS credentials will appear. - -
- -### Authenticate to AWS - -There are several options for how to provide AWS credentials: - -* Use Kubernetes Service Account to assume IAM role -* Use default instance role -* Use access keys directly -* Use temporary security credentials -* Use a shared credentials file -* Use an IAM role ARN - -Regardless of which option you use, you'll need to grant the following permissions: - -```console -ecr:GetRegistryPolicy, -eks:ListTagsForResource -elasticloadbalancing:DescribeTags -ecr-public:DescribeRegistries -ecr:DescribeRegistry -elasticloadbalancing:DescribeLoadBalancerPolicyTypes -ecr:ListImages -ecr-public:GetRepositoryPolicy -elasticloadbalancing:DescribeLoadBalancerAttributes -elasticloadbalancing:DescribeLoadBalancers -ecr-public:DescribeRepositories -eks:DescribeNodegroup -ecr:DescribeImages -elasticloadbalancing:DescribeLoadBalancerPolicies -ecr:DescribeRepositories -eks:DescribeCluster -eks:ListClusters -elasticloadbalancing:DescribeInstanceHealth -ecr:GetRepositoryPolicy -``` - -If you are using the AWS visual editor to create and modify your IAM Policies, you can copy and paste this IAM policy JSON object: - - - -``` -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "VisualEditor0", - "Effect": "Allow", - "Action": [ - "ecr:GetRegistryPolicy", - "eks:ListTagsForResource", - "elasticloadbalancing:DescribeTags", - "ecr-public:DescribeRegistries", - "ecr:DescribeRegistry", - "elasticloadbalancing:DescribeLoadBalancerPolicyTypes", - "ecr:ListImages", - "ecr-public:GetRepositoryPolicy", - "elasticloadbalancing:DescribeLoadBalancerAttributes", - "elasticloadbalancing:DescribeLoadBalancers", - "ecr-public:DescribeRepositories", - "eks:DescribeNodegroup", - "ecr:DescribeImages", - "elasticloadbalancing:DescribeLoadBalancerPolicies", - "ecr:DescribeRepositories", - "eks:DescribeCluster", - "eks:ListClusters", - "elasticloadbalancing:DescribeInstanceHealth", - "ecr:GetRepositoryPolicy" - ], - "Resource": "*" - } - ] -} -``` - - - -
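-If you prefer the AWS CLI to the visual editor, you can create the same policy from the JSON above. The policy name and JSON file name below are placeholders; adjust them to your own conventions:
-
-```console
-aws iam create-policy \
-  --policy-name kspm-read-only \
-  --policy-document file://kspm-policy.json
-```
-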
- -#### Option 1 - [Recommended] Use Kubernetes Service Account to assume IAM role - -Follow AWS's [EKS Best Practices](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#iam-roles-for-service-accounts-irsa) documentation to use the [IAM Role to Kubernetes Service-Account](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) (IRSA) feature to get temporary credentials and scoped permissions. - -During setup, do not fill in any option in the "Setup Access" section. Instead click **Save and continue**. - -
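-With IRSA, the association between the IAM role and the agent's Kubernetes service account is typically expressed as an annotation on that service account. The example below is a hypothetical sketch; the account ID and role name are placeholders, and the service account name and namespace must match the ones used by the ((agent)) DaemonSet manifest:
-
-```yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: elastic-agent
-  namespace: kube-system
-  annotations:
-    # Placeholder account ID and role name; use the IAM role you created for KSPM
-    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/kspm-irsa-role
-```
-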
- -#### Option 2 - Use default instance role -Follow AWS's [IAM roles for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) documentation to create an IAM role using the IAM console, which automatically generates an instance profile. - -During setup, do not fill in any option in the "Setup Access" section. Click **Save and continue**. - -
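-If the nodes that run ((agent)) are EC2 instances that don't already have an instance profile attached, you can attach one with the AWS CLI. The instance ID and profile name below are placeholders:
-
-```console
-aws ec2 associate-iam-instance-profile \
-  --instance-id i-0123456789abcdef0 \
-  --iam-instance-profile Name=kspm-instance-profile
-```
-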
- -#### Option 3 - Use access keys directly -Access keys are long-term credentials for an IAM user or AWS account root user. To use access keys as credentials, you must provide the `Access key ID` and the `Secret Access Key`. - -For more details, refer to AWS' [Access Keys and Secret Access Keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html) documentation. - - -You must select "Programmatic access" when creating the IAM user. - - -
-
-#### Option 4 - Use temporary security credentials
-You can configure temporary security credentials in AWS to last for a specified duration. They consist of an access key ID, a secret access key, and a security token, which you typically obtain by calling `GetSessionToken`.
-
-Because temporary security credentials are short term, once they expire, you will need to generate new ones and manually update the integration's configuration to continue collecting cloud posture data. Update the credentials before they expire to avoid data loss.
-
-IAM users with multi-factor authentication (MFA) enabled need to submit an MFA code when calling `GetSessionToken`. For more details, refer to AWS' [Temporary Security Credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) documentation.
-
-You can use the AWS CLI to generate temporary credentials. For example, you could use the following command if you have MFA enabled:
-
-```console
-aws sts get-session-token --serial-number arn:aws:iam::1234:mfa/your-email@example.com --duration-seconds 129600 --token-code 123456
-```
-
-The output from this command includes the following fields, which you should provide when configuring the KSPM integration:
-
-* `Access key ID`: The first part of the access key.
-* `Secret Access Key`: The second part of the access key.
-* `Session Token`: A token required when using temporary security credentials.
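-For reference, the command returns a JSON object similar to the following abbreviated example; the `AccessKeyId`, `SecretAccessKey`, and `SessionToken` values map to the three integration fields listed above (the values shown here are placeholders):
-
-```console
-{
-    "Credentials": {
-        "AccessKeyId": "ASIAIOSFODNN7EXAMPLE",
-        "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
-        "SessionToken": "AQoDYXdzEPT//////////wEXAMPLEtc764bNrC9SAPBSM22wDOk4x4HIZ8j4FZTwdQW...",
-        "Expiration": "2024-07-15T23:28:27+00:00"
-    }
-}
-```
-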
- -#### Option 5 - Use a shared credentials file -If you use different AWS credentials for different tools or applications, you can use profiles to define multiple access keys in the same configuration file. For more details, refer to AWS' [Shared Credentials Files](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html) documentation. - -Instead of providing the `Access key ID` and `Secret Access Key` to the integration, provide the information required to locate the access keys within the shared credentials file: - -* `Credential Profile Name`: The profile name in the shared credentials file. -* `Shared Credential File`: The directory of the shared credentials file. - -If you don't provide values for all configuration fields, the integration will use these defaults: - -- If `Access key ID`, `Secret Access Key`, and `ARN Role` are not provided, then the integration will check for `Credential Profile Name`. -- If there is no `Credential Profile Name`, the default profile will be used. -- If `Shared Credential File` is empty, the default directory will be used. - - For Linux or Unix, the shared credentials file is located at `~/.aws/credentials`. - -
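-For example, a shared credentials file that defines a profile named `kspm` might look like the following (the profile name and key values are placeholders):
-
-```console
-[kspm]
-aws_access_key_id = AKIAIOSFODNN7EXAMPLE
-aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
-```
-
-In this case, you would enter `kspm` as the `Credential Profile Name` and the file's location as the `Shared Credential File`.
-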
- -#### Option 6 - Use an IAM role Amazon Resource Name (ARN) -An IAM role Amazon Resource Name (ARN) is an IAM identity that you can create in your AWS account. You define the role's permissions. -Roles do not have standard long-term credentials such as passwords or access keys. -Instead, when you assume a role, it provides temporary security credentials for your session. -An IAM role's ARN can be used to specify which AWS IAM role to use to generate temporary credentials. - -For more details, refer to AWS' [AssumeRole API](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) documentation. -Follow AWS' instructions to [create an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html), and define the IAM role's permissions using the JSON permissions policy above. - -To use an IAM role's ARN, you need to provide either a credential profile or access keys along with the `ARN role`. -The `ARN Role` value specifies which AWS IAM role to use for generating temporary credentials. - - -If `ARN Role` is present, the integration will check if `Access key ID` and `Secret Access Key` are present. -If not, the package will check for a `Credential Profile Name`. -If a `Credential Profile Name` is not present, the default credential profile will be used. - - -
- -### Finish configuring the KSPM integration for EKS -Once you've provided AWS credentials, finish configuring the KSPM integration: - -1. If you want to monitor Kubernetes clusters that aren’t yet enrolled in ((fleet)), select **New Hosts** under “where to add this integration”. -1. Name the ((agent)) policy. Use a name that matches the purpose or team of the cluster(s) you want to monitor. For example, `IT-dev-k8s-clusters`. -1. Click **Save and continue**, then **Add agent to your hosts**. The **Add agent** wizard appears and provides a DaemonSet manifest `.yaml` file with pre-populated configuration information, such as the `Fleet ID` and `Fleet URL`. - -
- -### Deploy the KSPM integration to EKS clusters -The **Add agent** wizard helps you deploy the KSPM integration on the Kubernetes clusters you wish to monitor. For each cluster: - -1. Download the manifest and make any necessary revisions to its configuration to suit the needs of your environment. -1. Apply the manifest using the `kubectl apply -f` command. For example: `kubectl apply -f elastic-agent-managed-kubernetes.yaml` - -After a few minutes, a message confirming the ((agent)) enrollment appears, followed by a message confirming that data is incoming. You can then click **View assets** to see where the newly-collected configuration information appears, including the Findings page and the Cloud Security Posture dashboard. - -
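-If you want to confirm the deployment from the command line before data arrives, you can check the DaemonSet created by the manifest. The resource name, namespace, and label below assume the defaults in the provided manifest; adjust them if you customized it:
-
-```console
-kubectl get daemonset elastic-agent -n kube-system
-kubectl get pods -n kube-system -l app=elastic-agent
-```
-
-The same check applies to unmanaged clusters deployed with the steps later on this page.
-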
- -## Set up KSPM for unmanaged Kubernetes clusters - -Follow these steps to deploy the KSPM integration to unmanaged clusters. Keep in mind credentials are NOT required for unmanaged deployments. - -### Configure the KSPM integration -To install the integration on unmanaged clusters: - -1. Go to **Dashboards → Cloud Security Posture**. -1. Click **Add a KSPM integration**. -1. Read the integration's description to understand how it works. Then, click [*Add Kubernetes Security Posture Management*](((integrations-docs))/cloud_security_posture). -1. Name your integration. Use a name that matches the purpose or team of the cluster(s) you want to monitor, for example, `IT-dev-k8s-clusters`. -1. Select **Unmanaged Kubernetes** from the **Kubernetes Deployment** menu. -1. If you want to monitor Kubernetes clusters that aren’t yet enrolled in ((fleet)), select **New Hosts** when choosing the ((agent)) policy. -1. Select the ((agent)) policy where you want to add the integration. -1. Click **Save and continue**, then **Add agent to your hosts**. The **Add agent** wizard appears and provides a DaemonSet manifest `.yaml` file with pre-populated configuration information, such as the `Fleet ID` and `Fleet URL`. - -![The KSPM integration's Add agent wizard](../images/get-started-with-kspm/-cloud-native-security-kspm-add-agent-wizard.png) - -
- -### Deploy the KSPM integration to unmanaged clusters - -The **Add agent** wizard helps you deploy the KSPM integration on the Kubernetes clusters you wish to monitor. To do this, for each cluster: - -1. Download the manifest and make any necessary revisions to its configuration to suit the needs of your environment. -1. Apply the manifest using the `kubectl apply -f` command. For example: `kubectl apply -f elastic-agent-managed-kubernetes.yaml` - -After a few minutes, a message confirming the ((agent)) enrollment appears, followed by a message confirming that data is incoming. You can then click **View assets** to see where the newly-collected configuration information appears, including the Findings page and the Cloud Security Posture dashboard. - -
- -### Set up KSPM on ECK deployments -To run KSPM on an [ECK](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html) deployment, -you must edit the [Elastic Agent CRD](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent-configuration.html) and [Elastic Agent Cluster-Role](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent-configuration.html#k8s-elastic-agent-role-based-access-control) `.yaml` files. - - - -Add `volumes` and `volumeMounts` to `podTemplate`: -```yaml -podTemplate: - spec: - containers: - - name: agent - volumeMounts: - - name: proc - mountPath: /hostfs/proc - readOnly: true - - name: cgroup - mountPath: /hostfs/sys/fs/cgroup - readOnly: true - - name: varlibdockercontainers - mountPath: /var/lib/docker/containers - readOnly: true - - name: varlog - mountPath: /var/log - readOnly: true - - name: etc-full - mountPath: /hostfs/etc - readOnly: true - - name: var-lib - mountPath: /hostfs/var/lib - readOnly: true - - name: etc-mid - mountPath: /etc/machine-id - readOnly: true - volumes: - - name: proc - hostPath: - path: /proc - - name: cgroup - hostPath: - path: /sys/fs/cgroup - - name: varlibdockercontainers - hostPath: - path: /var/lib/docker/containers - - name: varlog - hostPath: - path: /var/log - - name: etc-full - hostPath: - path: /etc - - name: var-lib - hostPath: - path: /var/lib - # Mount /etc/machine-id from the host to determine host ID - # Needed for Elastic Security integration - - name: etc-mid - hostPath: - path: /etc/machine-id - type: File -``` - - - - - -Make sure that the `elastic-agent` service-account has the following Role and ClusterRole: -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - namespace: default - name: elastic-agent -subjects: -- kind: ServiceAccount - name: elastic-agent - namespace: default -roleRef: - kind: Role - name: elastic-agent - apiGroup: rbac.authorization.k8s.io ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: elastic-agent - labels: - k8s-app: elastic-agent -rules: -- apiGroups: [""] - resources: - - nodes - - namespaces - - events - - pods - - services - - configmaps - - serviceaccounts - - persistentvolumes - - persistentvolumeclaims - verbs: ["get", "list", "watch"] -- apiGroups: ["extensions"] - resources: - - replicasets - verbs: ["get", "list", "watch"] -- apiGroups: ["apps"] - resources: - - statefulsets - - deployments - - replicasets - - daemonsets - verbs: ["get", "list", "watch"] -- apiGroups: - - "" - resources: - - nodes/stats - verbs: - - get -- apiGroups: [ "batch" ] - resources: - - jobs - - cronjobs - verbs: [ "get", "list", "watch" ] -- nonResourceURLs: - - "/metrics" - verbs: - - get -- apiGroups: ["rbac.authorization.k8s.io"] - resources: - - clusterrolebindings - - clusterroles - - rolebindings - - roles - verbs: ["get", "list", "watch"] -- apiGroups: ["policy"] - resources: - - podsecuritypolicies - verbs: ["get", "list", "watch"] ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - name: elastic-agent - namespace: default - labels: - k8s-app: elastic-agent -rules: - - apiGroups: - - coordination.k8s.io - resources: - - leases - verbs: ["get", "create", "update"] -``` - - diff --git a/docs/serverless/cloud-native-security/kspm.mdx b/docs/serverless/cloud-native-security/kspm.mdx deleted file mode 100644 index 0654a559cf..0000000000 --- a/docs/serverless/cloud-native-security/kspm.mdx +++ /dev/null @@ -1,87 +0,0 @@ ---- -slug: /serverless/security/kspm -title: 
Kubernetes security posture management -description: Identify configuration risks in your Kubernetes clusters. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
- -
- -## Overview -The Kubernetes Security Posture Management (KSPM) integration allows you to identify configuration risks in the various components that make up your Kubernetes cluster. -It does this by evaluating your Kubernetes clusters against secure configuration guidelines defined by the Center for Internet Security (CIS) and generating findings with step-by-step instructions for remediating potential security risks. - -This integration supports Amazon EKS and unmanaged Kubernetes clusters. For setup instructions, refer to Get started with KSPM. - - - -* KSPM only works in the `Default` ((kib)) space. Installing the KSPM integration on a different ((kib)) space will not work. -* KSPM is not supported on EKS clusters in AWS GovCloud ([request support](https://github.com/elastic/kibana/issues/new/choose)). -* To view posture data, ensure you have the appropriate user role to read the following ((es)) indices: - -- `logs-cloud_security_posture.findings_latest-*` -- `logs-cloud_security_posture.scores-*` -- `logs-cloud_security_posture.findings` - - - -
- -## How KSPM works -1. When you add a KSPM integration, it generates a Kubernetes manifest. When applied to a cluster, the manifest deploys an ((agent)) as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset) to ensure all nodes are evaluated. -1. Upon deployment, the integration immediately assesses the security posture of your Kubernetes resources. The evaluation process repeats every four hours. -1. After each evaluation, the integration sends findings to ((es)). Findings appear on the Cloud Security Posture dashboard and the findings page. - -
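-If you ever need to inspect the raw findings documents (for example, while troubleshooting missing data), you can query the index patterns listed in the requirements above. A minimal example using the ((es)) search API from **Dev Tools**:
-
-```console
-GET logs-cloud_security_posture.findings_latest-*/_search
-{
-  "size": 1,
-  "sort": [{ "@timestamp": "desc" }]
-}
-```
-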
- -## Use cases - -The KSPM integration helps you to: - -* Identify and remediate `failed` findings -* Identify the most misconfigured resources -* Identify risks in particular CIS benchmark sections - -
-
-### Identify and remediate failed findings
-
-To identify and remediate failed findings:
-
-1. Go to the Cloud Security Posture dashboard.
-1. Click **View all failed findings**, either for an individual cluster or for all monitored clusters.
-1. Click a failed finding. The findings flyout opens.
-1. Follow the steps under **Remediation** to correct the misconfiguration.
-
-   Remediation steps typically include commands for you to execute. These sometimes contain placeholder values that you must replace before execution.
-
- -### Identify the most misconfigured Kubernetes resources - -To identify the Kubernetes resources generating the most failed findings: - -1. Go to the Findings page. -1. Click the **Group by** menu near the search box and select **Resource** to view a list of resources sorted by their total number of failed findings. -1. Click a resource ID to view the findings associated with that resource. - -
- -### Identify configuration risks by CIS section - -To identify risks in particular CIS sections: - -1. Go to the Cloud Security Posture dashboard (**Dashboards → Cloud Security Posture**). -1. In the Failed findings by CIS section widget, click the name of a CIS section to view all failed findings for that section. - -Alternatively: - -1. Go to the Findings page. -1. Filter by the `rule.section` field. For example, search for `rule.section : API Server` to view findings for benchmark rules in the API Server category. - diff --git a/docs/serverless/cloud-native-security/security-posture-faq.mdx b/docs/serverless/cloud-native-security/security-posture-faq.mdx deleted file mode 100644 index f188318406..0000000000 --- a/docs/serverless/cloud-native-security/security-posture-faq.mdx +++ /dev/null @@ -1,87 +0,0 @@ ---- -slug: /serverless/security/security-posture-faq -title: Frequently asked questions (FAQ) -description: Frequently asked questions about the CSPM integration. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: rough content ---- - - -
- -
- -## CSPM FAQ -Frequently asked questions about the Cloud Security Posture Management (CSPM) integration and features. - -**How often is my cloud security posture evaluated?** - -Cloud accounts are evaluated when you first deploy the CSPM integration and every 24 hours afterward. - -**Can I onboard multiple accounts at one time?** - -Yes. Follow the onboarding instructions in the getting started guides for AWS, GCP, or Azure. - -**When do newly enrolled cloud accounts appear on the dashboard?** - -After you deploy the CSPM integration, it can take up to 10 minutes for resource fetching, evaluation, and data processing before a newly enrolled account appears on the Cloud Security Posture dashboard. - -**When do unenrolled cloud accounts disappear from the dashboard?** - -Newly unenrolled cloud accounts can take a maximum of 24 hours to disappear from the Cloud Security Posture dashboard. - -
-
-## KSPM FAQ
-Frequently asked questions about the Kubernetes Security Posture Management (KSPM) integration and features.
-
-**What versions of Kubernetes are supported?**
-
-For self-managed/vanilla clusters, Kubernetes version 1.23 is supported.
-
-**Do benchmark rules support multiple Kubernetes deployment types?**
-Yes. There are different sets of benchmark rules for self-managed and third party-managed deployments. Refer to Get started with KSPM for more information about setting up each deployment type.
-
-**Can I evaluate the security posture of my Amazon EKS clusters?**
-Yes. KSPM currently supports the security posture evaluation of Amazon EKS and unmanaged Kubernetes clusters.
-
-**How often is my cluster’s security posture evaluated?**
-Clusters are evaluated when you deploy a KSPM integration, and every four hours after that.
-
-**When do newly-enrolled clusters appear on the dashboard?**
-It can take up to 10 minutes for deployment, resource fetching, evaluation, and data processing to complete before a newly-enrolled cluster appears on the dashboard.
-
-**When do unenrolled clusters disappear from the dashboard?**
-A cluster will disappear as soon as the KSPM integration completes a fetch cycle while that cluster is not enrolled. The fetch process repeats every four hours, which means a newly unenrolled cluster can take a maximum of four hours to disappear from the dashboard.
-
-## Findings page
-
-**Are the findings on the Findings page current?**
-Yes. Only the most recent findings appear on the Findings page.
-
-**Can I build custom visualizations and dashboards that incorporate findings data?**
-Yes. You can use ((kib))'s custom visualization capabilities with findings data. To learn more, refer to [Dashboards and visualizations](((kibana-ref))/dashboard.html).
-
-**Where is findings data saved?**
-You can access findings data using the following index patterns:
-
-* **Current findings:** `logs-cloud_security_posture.findings_latest-*`
-* **Historical findings:** `logs-cloud_security_posture.findings-*`
-
-## Benchmark rules
-
-**How often are my resources evaluated against benchmark rules?**
-Resources are fetched and evaluated against benchmark rules when a security posture management integration is deployed. After that, the CSPM integration evaluates every 24 hours, and the KSPM integration evaluates every four hours.
-
-**Can I configure an integration's fetch cycle?**
-No, the fetch cycle's timing is not configurable.
-
-**Can I contribute to the CSP ruleset?**
-You can't directly edit benchmark rules. The rules are defined [in this repository](https://github.com/elastic/csp-security-policies), where you can raise issues with certain rules. They are written in [Rego](https://www.openpolicyagent.org/docs/latest/policy-language/).
- -**How can I tell which specific version of the CIS benchmarks is in use?** -Refer to the `rule.benchmark.name` and `rule.benchmark.version` fields for documents in these datastreams: - -* `logs-cloud_security_posture.findings-default` -* `logs-cloud_security_posture.findings_latest-default` - diff --git a/docs/serverless/cloud-native-security/security-posture-management.mdx b/docs/serverless/cloud-native-security/security-posture-management.mdx deleted file mode 100644 index c8a40f0492..0000000000 --- a/docs/serverless/cloud-native-security/security-posture-management.mdx +++ /dev/null @@ -1,50 +0,0 @@ ---- -slug: /serverless/security/security-posture-management -title: Security posture management overview -description: Discovers and evaluates your cloud services and resources against security best practices. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
-
-## Overview
-Elastic's Cloud Security Posture Management (CSPM) and Kubernetes Security Posture Management (KSPM) features help you discover and evaluate the services and resources in your cloud environment — like storage, compute, IAM, and more — against security guidelines defined by the Center for Internet Security (CIS). They help you identify and remediate configuration risks that could undermine the confidentiality, integrity, and availability of your cloud assets, such as publicly exposed storage buckets or overly permissive networking objects.
-
-The KSPM feature assesses the security posture of your Kubernetes clusters, while the CSPM feature assesses the security posture of the resources in your cloud accounts (AWS, GCP, or Azure).
-
- -## Getting started -For setup instructions, refer to: - -* Get started with KSPM -* Get started with CSPM - -
- -## Use cases - -Using the data generated by these features, you can: - -**Identify and secure misconfigured infrastructure:** - -1. Go to the Cloud Security Posture dashboard (**Dashboards → Cloud Security Posture**). -1. Click **View all failed findings**, either for an individual resource or a group of resources. -1. Click a failed finding to open the Findings flyout. -1. Follow the steps under Remediation to fix the misconfiguration. - -**Identify the CIS Sections (security best practice categories) with which your resources are least compliant:** - -1. Go to the Cloud Security Posture dashboard (**Dashboards → Cloud Security Posture**). -1. Do one of the following: - 1. Under Failed findings by CIS section, click the name of a CIS section to view all failed findings from that section. - 1. Go to the **Findings** page and filter by the `rule.section` field. For example, search for `rule.section : API Server` to view findings from the API Server category. - -**Identify your least compliant cloud resources** - -1. Go to the **Findings** page. -1. Click the **Group by** menu near the search box, and select **Resource** to sort resources by their number of failed findings. -1. Click a resource ID to view associated findings. - diff --git a/docs/serverless/cloud-native-security/session-view.mdx b/docs/serverless/cloud-native-security/session-view.mdx deleted file mode 100644 index 6ee1b89187..0000000000 --- a/docs/serverless/cloud-native-security/session-view.mdx +++ /dev/null @@ -1,159 +0,0 @@ ---- -slug: /serverless/security/session-view -title: Session View -description: Examine Linux process data in context with Session View. -tags: [ 'serverless', 'security', 'overview', 'how to', 'cloud security' ] -status: in review ---- - - -
- -Session View is an investigation tool that allows you to examine Linux process data organized -in a tree-like structure according to the Linux logical event model, with processes organized by parentage and time of execution. -It displays events in a highly readable format that is inspired by the terminal. This makes it a powerful tool for monitoring -and investigating session activity on your Linux infrastructure and understanding user and service behavior. - -Session View has the following features: - -* **Interactive and non-interactive processes:** Processes and services with or without a controlling terminal. -* **User information:** The Linux user that executed each session or process, and any exec user changes. -* **Process and event telemetry:** Process information included in the Linux logical event model. -* **Nested sessions:** Sessions started by processes descended from the entry session. -* **Alerts:** Process, file, and network alerts in the context of the events which caused them. -* **Terminal output:** Terminal output associated with each process in the session. - - -To view Linux session data from your Kubernetes infrastructure, you'll need to set up the Kubernetes dashboard. - - -
- -## Enable Session View data -Session View uses process data collected by the ((elastic-defend)) integration, -but this data is not always collected by default. To confirm that Session View data is enabled: - -1. Go to **Assets** → **Policies**, select a policy and then edit one or more of your ((elastic-defend)) integration policies. -1. Select the **Settings** tab, then scroll down to the Linux event collection section near the bottom. -1. Check the box for **Process** events, and turn on the **Collect session data** toggle. -1. If you want to include file and network alerts in Session View, check the boxes for **Network** and **File** events. -1. If you want to enable terminal output capture, turn on the **Capture terminal output** toggle. - -Session View can only display data that was collected by ((elastic-defend)) when **Collect session data** was enabled. When this setting is enabled, ((elastic-defend)) includes additional process context data in captured process, file, and network events. For more information about the additional -fields collected when this setting is enabled, refer to the [Linux event model RFC](https://github.com/elastic/ecs/blob/main/rfcs/text/0030-linux-event-model.md). - -
- -## Open Session View -Session View is accessible from the **Hosts**, **Alerts**, and **Timelines** pages, as well as the **Kubernetes** dashboard. -Events and sessions that you can investigate in Session View have a rectangular -**Open Session View** button in the **Actions** column. For example: - -* On the Alerts page, scroll down to view the Alerts table. - Look for alerts that have the **Open Session View** button in the **Actions** column: - - - -* On the Hosts page (**Explore** → **Hosts**), select the **Sessions** or the **Events** tab. - From either of these tabs, click the **Open Session View** button for an event or session. - -
- -## Session View UI -The Session View UI has the following features: - - - -1. The **Close Session** and **Full screen** buttons. -1. The search bar. Use it to find and highlight search terms within the current session. - The left and right arrows allow you to navigate through search results. - -1. The **display settings** button. Click to toggle Timestamps and Verbose mode. - With Verbose mode enabled, Session View shows all processes created in a session, including shell startup, - shell completion, and forks caused by built-in commands. - It defaults to **off** to highlight the data most likely to be user-generated and non-standard. - -1. The **Detail panel** button. Click it to toggle the Detail panel, which appears below the button - and displays a wide range of additional information about the selected process’s ancestry and host, - and any associated alerts. To select a process in Session View, click on it. - -1. The startup process. In this example, it shows that the session was a bash session. - It also shows the Linux user "Ubuntu" started the session. - -1. The **Child processes** button. Click to expand or collapse a process’s children. - You can also expand collapsed alerts and scripts where they appear. - Collapsed processes will automatically expand when their contents match a search. - -1. The **Alerts** button. Click to show alerts caused by the parent process. In this example, the `(2)` indicates that there are two alerts. Note the red line to the left of the event that caused the alert. Both alerts caused by this event are `process` alerts, as indicated by the gear icon. -1. The **Terminal output** button. Hover to see how much output data has been captured from the session. Click to open the terminal output view, which is described in detail below. -1. The **Refresh session** button. Click to check for any new data from the current session. - -Session View includes additional badges not pictured above: -{/* -//* The **Script** button allows you to expand or collapse executed scripts: */} -{/* -//[role="screenshot"] */} -{/* */} - -* The alert badge for multiple alerts appears when a single event causes alerts of multiple types ( for `process` alerts, for `file` alerts, and for `network` alerts): - - - -* The **Exec user change** badge highlights exec user changes, such as when a user escalates to root: - - - -* The **Output** badge appears next to commands that generated terminal output. Click it to view that command's output in terminal output view. - - - -
- -## Terminal output view UI - - - -* Session output can only be collected from Linux OSes with eBPF-enabled kernels versions 5.10.16 or higher. - - - -In general, terminal output is the text that appears in interactive Linux shell sessions. This generally includes user-entered text (terminal input), which appears as output to facilitate editing commands, as well as the text output of executed programs. In certain cases such as password entry, terminal input is not captured as output. - -From a security perspective, terminal output is important because it offers a means of exfiltrating data. For example, a command like `cat tls-private-key.pem` could output a web server's private key. Thus, terminal output view can improve your understanding of commands executed by users or adversaries, and assist with auditing and compliance. - -To enable terminal output data capture: - -1. Go to **Assets** → **Policies**, select a policy and then edit one or more of your ((elastic-defend)) integration policies. -1. On the **Settings** tab, scroll down to the Linux event collection section near the bottom of the page - and select the **Collect session data** and **Capture terminal output** options. - -You can configure several additional settings by clicking **Advanced settings** at the bottom of the page: - -* `linux.advanced.tty_io.max_kilobytes_per_process`: The maximum number of kilobytes of output to record from a single process. Default: 512 KB. Process output exceeding this value will not be recorded. -* `linux.advanced.tty_io.max_kilobytes_per_event`: The maximum number of kilobytes of output to send to ((es)) as a single event. Default: 512 KB. Additional data is captured as a new event. -* `linux.advanced.tty_io.max_event_interval_seconds`: The maximum interval (in seconds) during which output is batched. Default: 30 seconds. Output will be sent to ((es)) at this interval (unless it first exceeds the `max_kilobytes_per_event` value, in which case it might be sent sooner). - -![Terminal output view](../images/session-view/-detections-session-view-output-viewer.png) - -1. Search bar. Use to find and highlight search terms within the current session. - The left and right arrows allow you to navigate through search results. - -1. Right-side scroll bar. Use along with the bottom scroll bar to navigate output data that doesn't fit on a single screen. -1. Playback controls and progress bar. Use to advance or rewind the session's commands and output. Click anywhere on the progress bar to jump to that part of the session. The marks on the bar represent processes that generated output. Click them or the **Prev** and **Next** buttons to skip between processes. -1. **Fit screen**, **Zoom in**, and **Zoom out** buttons. Use to adjust the text size. - - -Use Session view's **Fullscreen** button (located next to the **Close session viewer** button) to better fit output with long lines, such as for graphical programs like `vim`. - - -
-
-### Terminal output limitations for search and alerting
-You should understand several current limitations before building rules based on terminal output data:
-
-* Terminal output that appears in the `process.io.text` field includes [ANSI codes](https://gist.github.com/fnky/458719343aabd01cfb17a3a4f7296797) that represent, among other things, text color, text weight, and escape sequences. This can prevent EQL queries from matching as expected. Queries of this data will have more success matching single words than more complex strings.
-* Queries of this data should include leading and trailing wildcards (for example `process where process.io.text : "*sudo*"`), since output events typically include multiple lines of output.
-* The search functionality built into terminal output view is subject to similar limitations. For example, if a user accidentally entered `sdo` instead of `sudo`, then pressed backspace twice to fix the typo, the recorded output would be `sdo\b\budo`. This would appear in the terminal output view as `sudo`, but searching terminal output view for `sudo` would not result in a match.
-* Output that seems like it should be continuous may be split into multiple events due to the advanced settings described above, which may prevent a query or search from matching as expected.
-* Rules based on output data will identify which output event's `process.io.text` value matched the alert query, without identifying which specific part of that value matched. For example, the rule query `process.io.text: "*test*"` could match a large, multi-line log file due to a single instance of `test`, without identifying where in the file the instance occurred.
-
diff --git a/docs/serverless/cloud-native-security/vuln-management-faq.mdx b/docs/serverless/cloud-native-security/vuln-management-faq.mdx
deleted file mode 100644
index 45343c15d2..0000000000
--- a/docs/serverless/cloud-native-security/vuln-management-faq.mdx
+++ /dev/null
@@ -1,68 +0,0 @@
----
-slug: /serverless/security/vuln-management-faq
-title: Frequently asked questions (FAQ)
-description: Frequently asked questions about the CNVM integration.
-tags: ["security","cloud","reference","manage"]
-status: in review
----
-
- -Frequently asked questions about the Cloud Native Vulnerability Management (CNVM) integration and features. - -**Which security data sources does the CNVM integration use to identify vulnerabilities?** - -The CNVM integration uses various security data sources. The complete list can be found [here](https://github.com/aquasecurity/trivy/blob/v0.35.0/docs/docs/vulnerability/detection/data-source.md). - -**What's the underlying scanner used by CNVM integration?** - -CNVM uses the open source scanner [Trivy](https://github.com/aquasecurity/trivy) v0.35. - -**What system architectures are supported?** - -Because of Trivy's limitations, CNVM can only be deployed on ARM-based VMs. However, it can scan hosts regardless of system architecture. - -**How often are the security data sources synchronized?** - -The CNVM integration fetches the latest data sources at the beginning of every scan cycle to ensure up-to-date vulnerability information. - -**What happens if a scan cycle does not complete within 24 hours?** - -If a scan cycle doesn't finish within 24 hours, the ongoing cycle will continue until completion. When it finishes, a new cycle will immediately start. - -**How is the lifecycle of snapshots handled?** - -The CNVM integration manages the lifecycle of snapshots. Snapshots are automatically deleted/removed at the end of each scan cycle. - -**Does CNVM have an impact on the user's cloud expenses?** - -Yes, CNVM creates additional cloud expenses, since scanning involves provisioning a new virtual machine to conduct the scan. - -**Does CNVM also scan the new AWS EC2 instances that it creates?** - -Yes, CNVM scans all AWS EC2 instances in every scan cycle, including any created by the integration. - -**Does CNVM scan AWS EC2 instances with encrypted volumes?** - -Encrypted volumes can be scanned only if they were encrypted using Amazon's default EBS key, and are _not_ running Amazon Linux 2023. - -**Does CNVM prevent multiple installations in a single region?** - -No, CNVM does not currently prevent redundant deployment to the same region. - -**What volume types and file systems does CNVM support?** - -CNVM supports all AWS EBS volume types and works with `ext4` and `xfs` file systems. - -**Does CNVM scan stopped EC2 instances?** - -Yes, CNVM scans all EC2 instances, whether they are running or stopped, to ensure comprehensive vulnerability detection. - -**What AWS permissions does the user require to run the CloudFormation template for CNVM onboarding?** - -To run the CloudFormation template for CNVM onboarding, you need an AWS user account with permissions to perform the following actions: run CloudFormation templates, create IAM Roles and InstanceProfiles, and create EC2 SecurityGroups and Instances. - -**Why do I get an error when I try to run the CloudFormation template?** - -It's possible you're using an unsupported region. Currently the `eu-north-1` and `af-south-1` regions are not supported because they don't provide the required instance types. diff --git a/docs/serverless/cloud-native-security/vuln-management-findings.mdx b/docs/serverless/cloud-native-security/vuln-management-findings.mdx deleted file mode 100644 index 915b707b7f..0000000000 --- a/docs/serverless/cloud-native-security/vuln-management-findings.mdx +++ /dev/null @@ -1,80 +0,0 @@ ---- -slug: /serverless/security/vuln-management-findings -title: Findings page -description: The Findings page displays information about cloud vulnerabilities found in your environment. 
-tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
- -The **Vulnerabilities** tab on the Findings page displays the vulnerabilities detected by the CNVM integration. - -![The Vulnerabilities tab of the Findings page](../images/vuln-management-findings/-cloud-native-security-cnvm-findings-page.png) - -## What are CNVM findings? - -CNVM findings represent security vulnerabilities detected in your cloud. They include metadata such as the CVE identifier, CVSS score, severity, affected package, and fix version if available, as well as information about impacted systems. - -Clicking on a finding provides a detailed description of the vulnerability, and any available remediation information. - -
- -## Group and filter findings - -To help you prioritize remediation efforts, you can organize findings in various ways. - -### Group findings - -Click **Group vulnerabilities by** to group your data by a field. Select one of the suggested fields or **Custom field** to choose your own. You can select up to three group fields at once. - - -* When grouping is turned on, click a group to expand it and examine all sub-groups or findings within that group. -* To turn off grouping, click **Group vulnerabilities by:** and select **None**. - - -Multiple groupings apply to your data in the order you selected them. For example, if you first select **Cloud account**, then select **Resource**, the top-level grouping will be based on **Cloud account**, and its subordinate grouping will be based on **Resource**, as demonstrated in the following screenshot. - - -![The Vulnerabilities tab of the Findings page](../images/vuln-management-findings/-cloud-native-security-cnvm-findings-grouped.png) - -### Filter findings -You can filter the data in two ways: - -* **KQL search bar**: For example, search for `vulnerability.severity : "HIGH"` to view high severity vulnerabilities. -* **In-table value filters**: Hover over a finding to display available inline actions. Use the **Filter In** (plus) and **Filter Out** (minus) buttons. - -### Customize the Findings table -When grouping is turned off, you can use the toolbar buttons in the upper-left of the Findings table to select which columns appear: - -* **Columns**: Select the left-to-right order in which columns appear. -* **Sort fields**: Sort the table by one or more columns, or turn sorting off. -* **Fields**: Select which fields to display for each finding. Selected fields appear in the table and the **Columns** menu. - - -You can also click a column's name to open a menu that allows you to perform multiple actions on the column. - - -## Learn more about a vulnerability - -Click a vulnerability to open the vulnerability details flyout. This flyout includes a link to the related vulnerability database, the vulnerability's publication date, CVSS vector strings, fix versions (if available), and more. - -When you open the vulnerability details flyout, it defaults to the **Overview** tab, which highlights key information. To view every field present in the vulnerability document, select the **Table** or **JSON** tabs. - -
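-KQL filters can also combine multiple fields. For example, to focus on critical vulnerabilities from a particular CVE year, you could enter a query like the one below in the search bar. The `vulnerability.id` field name is an assumption based on the CVE identifier metadata described above; check the **Table** tab of the flyout for the exact field names in your findings:
-
-```console
-vulnerability.severity : "CRITICAL" and vulnerability.id : CVE-2023-*
-```
-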
- -## Remediate vulnerabilities - -To remediate a vulnerability and reduce your attack surface, update the affected package if a fix is available. - -
-
-## Generate alerts for vulnerability findings
-You can create detection rules that detect specific vulnerabilities directly from the Findings page:
-
-1. Click a vulnerability to open the vulnerability details flyout.
-1. Click **Take action**, then **Create a detection rule**. This automatically creates a detection rule that generates alerts when the associated vulnerability is found.
-1. To review or customize the new rule, click **View rule**.
-
diff --git a/docs/serverless/cloud-native-security/vuln-management-get-started.mdx b/docs/serverless/cloud-native-security/vuln-management-get-started.mdx
deleted file mode 100644
index 1ad336be49..0000000000
--- a/docs/serverless/cloud-native-security/vuln-management-get-started.mdx
+++ /dev/null
@@ -1,77 +0,0 @@
----
-slug: /serverless/security/vuln-management-get-started
-title: Get started with CNVM
-description: Set up cloud native vulnerability management.
-tags: [ 'serverless', 'security', 'overview', 'cloud security' ]
-status: in review
----
-
- -This page explains how to set up Cloud Native Vulnerability Management (CNVM). - - - -* CNVM only works in the `Default` ((kib)) space. Installing the CNVM integration on a different ((kib)) space will not work. -* Requires ((agent)) version 8.8 or higher. -* CNVM can only be deployed on ARM-based VMs. -* To view vulnerability scan findings, you need the appropriate user role to read the following indices: - * `logs-cloud_security_posture.vulnerabilities-*` - * `logs-cloud_security_posture.vulnerabilities_latest-*` -* You need an AWS user account with permissions to perform the following actions: run CloudFormation templates, create IAM Roles and InstanceProfiles, and create EC2 SecurityGroups and Instances. - - - - -CNVM currently only supports AWS EC2 Linux workloads. - - -
- -## Set up CNVM for AWS - -To set up the CNVM integration for AWS, install the integration on a new ((agent)) policy, sign in to the AWS account you want to scan, and run the [CloudFormation](https://docs.aws.amazon.com/cloudformation/index.html) template. - - -Do not add the integration to an existing ((agent)) policy. Always add it to a new policy because it should not run on VMs with existing workloads. For more information, refer to How CNVM works. - -
- -### Step 1: Add the CNVM integration - -1. In the ((security-app)), go to the **Get started** page, then click **Add security integrations**. -1. Search for **Cloud Native Vulnerability Management**, then click the result. -1. Click **Add Cloud Native Vulnerability Management**. -1. Give your integration a name that matches its purpose or the AWS account region you want to scan for vulnerabilities (for example, `uswest2-aws-account`). - - ![The CNVM integration setup page](../images/vuln-management-get-started/-dashboards-cnvm-setup-1.png) - -1. Click **Save and continue**. The integration will create a new ((agent)) policy. -1. Click **Add ((agent)) to your hosts**. - -
- -### Step 2: Sign in to the AWS management console - -1. Open a new browser tab and use it to sign in to your AWS management console. -1. Switch to the cloud region with the workloads that you want to scan for vulnerabilities. - - -The integration will only scan VMs in the region you select. To scan multiple regions, repeat this setup process for each region. - -
- -### Step 3: Run the CloudFormation template - -1. Switch back to the tab with Elastic Security. -1. Click **Launch CloudFormation**. The CloudFormation page appears. - - ![The CloudFormation template](../images/vuln-management-get-started/-dashboards-cnvm-cloudformation.png) - -1. Click **Create stack**. To avoid authentication problems, don't change any configuration settings except the VM InstanceType, which you can increase to speed up scanning. -1. Wait for the confirmation that ((agent)) was enrolled. -1. Your data will start to appear on the **Vulnerabilities** tab of the Findings page. - diff --git a/docs/serverless/cloud-native-security/vuln-management-overview.mdx b/docs/serverless/cloud-native-security/vuln-management-overview.mdx deleted file mode 100644 index 87ca2fad3b..0000000000 --- a/docs/serverless/cloud-native-security/vuln-management-overview.mdx +++ /dev/null @@ -1,42 +0,0 @@ ---- -slug: /serverless/security/vuln-management-overview -title: Cloud native vulnerability management -description: Find and track vulnerabilities in your cloud. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
- -Elastic's Cloud Native Vulnerability Management (CNVM) feature helps you identify known vulnerabilities in your cloud workloads. - -Setup uses infrastructure as code. For instructions, refer to Get started with Cloud Native Vulnerability Management. - - -CNVM currently only supports AWS EC2 Linux workloads. - - - - -* CNVM only works in the `Default` ((kib)) space. Installing the CNVM integration on a different ((kib)) space will not work. -* To view vulnerability scan findings, you need the appropriate user role to read the following indices: - * `logs-cloud_security_posture.vulnerabilities-*` - * `logs-cloud_security_posture.vulnerabilities_latest-*` - - - -
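If you want to spot-check that findings are reaching the latest-findings index, a minimal query from Dev Tools might look like this. The index pattern comes from the list above; the `vulnerability.severity` field and value are assumptions for illustration.

```console
GET logs-cloud_security_posture.vulnerabilities_latest-*/_search
{
  "size": 1,
  "query": {
    "term": { "vulnerability.severity": "CRITICAL" }
  }
}
```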
- -## How CNVM works - -During setup, you will use an infrastructure as code provisioning template to create a new virtual machine (VM) in the cloud region you wish to scan. This VM installs ((agent)) and the Cloud Native Vulnerability Management (CNVM) integration, and conducts all vulnerability scanning. - -The CNVM integration uses [Trivy](https://github.com/aquasecurity/trivy), a comprehensive open-source security scanner, to scan cloud workloads and identify security vulnerabilities. During each scan, the VM running the integration takes a snapshot of all cloud workloads in its region using the snapshot APIs of the cloud service provider, and analyzes them for vulnerabilities using Trivy. Therefore, scanning does not use resources on the VMs being scanned. All resource usage occurs on the VM installed during CNVM setup. - -The scanning process begins immediately upon deployment, then repeats every twenty-four hours. After each scan, the integration sends the discovered vulnerabilities to ((es)), where they appear in the **Vulnerabilities** tab of the Findings page. - - -Environments with more VMs take longer to scan. - - diff --git a/docs/serverless/dashboards/cloud-posture-dashboard-dash.mdx b/docs/serverless/dashboards/cloud-posture-dashboard-dash.mdx deleted file mode 100644 index 9fee49f066..0000000000 --- a/docs/serverless/dashboards/cloud-posture-dashboard-dash.mdx +++ /dev/null @@ -1,51 +0,0 @@ ---- -slug: /serverless/security/cloud-posture-dashboard-dash -title: Cloud Security Posture dashboard -description: The Cloud Security Posture dashboard summarizes your cloud infrastructure's performance on CIS security benchmarks. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
- -The Cloud Security Posture dashboard summarizes your cloud infrastructure's overall performance against security guidelines defined by the Center for Internet Security (CIS). To start collecting this data, refer to Get started with Cloud Security Posture Management or Get started with Kubernetes Security Posture Management. - -![The cloud Security dashboard](../images/cloud-posture-dashboard/-dashboards-cloud-sec-dashboard.png) - -The Cloud Security Posture dashboard shows: - -* Configuration risk metrics for all monitored cloud accounts and Kubernetes clusters -* Configuration risk metrics grouped by the applicable benchmark, for example, CIS GCP, CIS Azure, CIS Kubernetes, or CIS EKS -* Configuration risks grouped by CIS section (security guideline category) - - -
- -## Cloud Security Posture dashboard UI - -At the top of the dashboard, you can switch between the Cloud accounts and Kubernetes cluster views. - -The top section of either view summarizes your overall cloud security posture (CSP) by aggregating data from all monitored resources. The summary cards on the left show the number of cloud accounts or clusters evaluated, and the number of resources evaluated. You can click **Enroll more accounts** or **Enroll more clusters** to deploy to additional cloud assets. Click **View all resources** to open the Findings page. - -The remaining summary cards show your overall compliance score, and your compliance score for each CIS section. Click **View all failed findings** to view all failed findings, or click a CIS section name to view failed findings from only that section on the Findings page. - -Below the summary section, each row shows the CSP for a benchmark that applies to your monitored cloud resources. For example, if you are monitoring GCP and Azure cloud accounts, a row appears for CIS GCP and another appears for CIS Azure. Each row shows the CIS benchmark, the number of cloud accounts or Kubernetes clusters it applies to, its overall compliance score, and its compliance score grouped by CIS section. - -![A row representing a single cluster in the Cloud Security Posture dashboard](../images/cloud-posture-dashboard/-dashboards-cloud-sec-dashboard-individual-row.png) - -
- -## FAQ (Frequently Asked Questions) - - - -It can take up to 10 minutes for deployment, resource fetching, evaluation, and data processing before a newly-enrolled cluster appears on the dashboard. - - - - - -A cluster will disappear as soon as the KSPM integration fetches data while that cluster is not enrolled. The fetch process repeats every four hours, which means a newly unenrolled cluster can take a maximum of four hours to disappear from the dashboard. - - diff --git a/docs/serverless/dashboards/dashboards-overview.mdx b/docs/serverless/dashboards/dashboards-overview.mdx deleted file mode 100644 index 72fa0615ca..0000000000 --- a/docs/serverless/dashboards/dashboards-overview.mdx +++ /dev/null @@ -1,24 +0,0 @@ ---- -slug: /serverless/security/dashboards-overview -title: Dashboards -description: Dashboards give you insight into your security environment. -tags: ["security","overview","visualize","monitor","analyze"] -status: in review ---- - - -
- -The ((security-app))'s default dashboards provide useful visualizations of your security environment. To view them in ((elastic-sec)), select **Dashboards** from the navigation menu. From the Dashboards page, you can access the default dashboards, as well as create and access custom dashboards. - -To create a new custom dashboard, click **Create Dashboard**. You can control which custom dashboards appear in the table: - -* Use the text search field to filter by name or description. -* Use the **Tags** menu to filter by tag. -* Click a custom dashboard's tags to toggle filtering for each tag. - -To create a new tag or edit existing tags, open the **Tags** menu and click **Manage tags**. - -![The dashboards landing page](../images/dashboards-overview/-dashboards-dashboards-landing-page.png) - -Refer to documentation for the other ((elastic-sec)) dashboards to learn more about them. For more information about creating custom dashboards, refer to [Create your first dashboard](((kibana-ref))/create-a-dashboard-of-panels-with-web-server-data.html). diff --git a/docs/serverless/dashboards/data-quality-dash.mdx b/docs/serverless/dashboards/data-quality-dash.mdx deleted file mode 100644 index d64d47bb94..0000000000 --- a/docs/serverless/dashboards/data-quality-dash.mdx +++ /dev/null @@ -1,84 +0,0 @@ ---- -slug: /serverless/security/data-quality-dash -title: Data Quality dashboard -description: The Data Quality dashboard summarizes the health of your data ingest pipeline. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -The Data Quality dashboard shows you whether your data is correctly mapped to the [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/current/ecs-reference.html) (ECS). Successful [mapping](((ref))/mapping.html) enables you to search, visualize, and interact with your data throughout ((elastic-sec)). - -![The Data Quality dashboard](../images/data-quality-dash/-dashboards-data-qual-dash.png) - -Use the Data Quality dashboard to: - -* Check one or multiple indices for unsuccessful mappings, to help you identify problems (the indices used by ((elastic-sec)) appear by default). -* View the number of documents stored in each of your indices. -* View detailed information about the fields in checked indices. -* Track unsuccessful mappings by creating a case or Markdown report based on data quality results. - - - -To use the Data Quality dashboard, you need the appropriate user role with the following privileges for each index you want to check: - -* `monitor` or `manage` -* `view_index_metadata` or `manage` (required for the [Get mapping API](((ref))/indices-get-mapping.html)) -* `read` (required for the [Search API](((ref))/search-search.html)) - - - -
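As a sketch of the index privileges above expressed as an ((es)) role (index names are illustrative; include each index you want the dashboard to check, and adjust to however you manage roles in your project):

```console
PUT _security/role/data_quality_checker
{
  "indices": [
    {
      "names": [ "logs-*", "filebeat-*" ],
      "privileges": [ "read", "view_index_metadata", "monitor" ]
    }
  ]
}
```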
- -## Check indices -When you open the dashboard, data does not appear until you select indices to check. - -* **Check multiple indices**: To check all indices in the current data view, click **Check all** at the top of the dashboard. A progress indicator will appear. - - -To customize which indices are checked when you click **Check all**, [change the current data view](((security-guide))/data-views-in-sec.html). - - -* **Check a single index**: To check a single index, click the **Check index** button under **Actions**. Checking a single index is faster than checking all indices. - -## Visualize checked indices -The treemap that appears at the top of the dashboard shows the relative document count of your indices. The color of each index's node refers to its status: - -* **Blue:** Not yet checked. -* **Green:** Checked, no incompatible fields found. -* **Red:** Checked, one or more incompatible fields found. - -Click a node in the treemap to expand the corresponding index. - -## Learn more about checked index fields -After an index is checked, `Pass` or `Fail` appears in its **Result** column. `Fail` indicates mapping problems in an index. To view index check details, including which fields weren't successfully mapped, click the **View check details** button under **Actions**. - -![An expanded index with some failed results in the Data Quality dashboard](../images/data-quality-dash/-dashboards-data-qual-dash-detail.png) - -The index check flyout provides more information about the status of fields in that index. Each of its tabs describes fields grouped by mapping status. - - -Fields in the **Same family** category have the correct search behavior, but might have different storage or performance characteristics (for example, you can index strings to both text and keyword fields). To learn more, refer to [Field data types](((ref))/mapping-types.html). - - -## Export data quality results - -You can share data quality results to help track your team's remediation efforts. First, follow the instructions under Check indices to generate results, then either: - -**Export results for all indices in the current data view**: - -1. At the top of the dashboard, under the **Check all** button, find the two buttons that allow you to share results. Exported results include all the data that appears in the dashboard. -1. Click **Add to new case** to open a new case. -1. Click **Copy to clipboard** to copy a Markdown report to your clipboard. - -**Export results for one index**: - -1. View details for a checked index that has at least one incompatible field by clicking the **View check details** button under **Actions**. -1. From the **Incompatible fields** tab, select **Add to new case** to open a new case, or click **Copy to clipboard** to copy a Markdown report to your clipboard. - - -For more information about how to fix mapping problems, refer to [Mapping](((ref))/mapping.html). - - diff --git a/docs/serverless/dashboards/detection-entity-dashboard.mdx b/docs/serverless/dashboards/detection-entity-dashboard.mdx deleted file mode 100644 index 62ad1f2393..0000000000 --- a/docs/serverless/dashboards/detection-entity-dashboard.mdx +++ /dev/null @@ -1,94 +0,0 @@ ---- -slug: /serverless/security/detection-entity-dashboard -title: Entity Analytics dashboard -description: The Entity Analytics dashboard provides a centralized view of emerging insider threats -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -The Entity Analytics dashboard provides a centralized view of emerging insider threats - including host risk, user risk, and anomalies from within your network. Use it to triage, investigate, and respond to these emerging threats. - - - -To display host and user risk scores, you must turn on the risk scoring engine. - - - -The dashboard includes the following sections: - -* Entity KPIs (key performance indicators) -* Host Risk Scores -* User Risk Scores -* Anomalies - -![Entity dashboard](../images/detection-entity-dashboard/-dashboards-entity-dashboard.png) - -
- -## Entity KPIs (key performance indicators) - -Displays the total number of critical hosts, critical users, and anomalies. Select a link to jump to the Host risk table, User risk table, or Anomalies table. - -
- -## Host Risk Scores - -Displays host risk score data for your environment, including the total number of hosts, and the five most recently recorded host risk scores, with their associated host names, risk data, and number of detection alerts. Host risk scores are calculated using a weighted sum on a scale of 0 (lowest) to 100 (highest). - -![Host risk scores table](../images/detection-entity-dashboard/-dashboards-host-score-data.png) - -Interact with the table to filter data, view more details, and take action: - -* Select the **Host risk level** menu to filter the chart by the selected level. -* Click a host name link to open the host details flyout. -* Hover over a host name link to display inline actions: **Add to timeline**, which adds the selected value to Timeline, and **Copy to Clipboard**, which copies the host name value for you to paste later. -* Click **View all** in the upper-right to display all host risk information on the Hosts page. -* Click the number link in the **Alerts** column to view the alerts on the Alerts page. Hover over the number and select **Investigate in timeline** () to launch Timeline with a query that includes the associated host name value. - -For more information about host risk scores, refer to Entity risk scoring. - -
- -## User Risk Scores - -Displays user risk score data for your environment, including the total number of users, and the five most recently recorded user risk scores, with their associated user names, risk data, and number of detection alerts. Like host risk scores, user risk scores are calculated using a weighted sum on a scale of 0 (lowest) to 100 (highest). - -![User risk table](../images/detection-entity-dashboard/-dashboards-user-score-data.png) - -Interact with the table to filter data, view more details, and take action: - -* Select the **User risk level** menu to filter the chart by the selected level. -* Click a user name link to open the user details flyout. -* Hover over a user name link to display inline actions: **Add to timeline**, which adds the selected value to Timeline, and **Copy to Clipboard**, which copies the user name value for you to paste later. -* Click **View all** in the upper-right to display all user risk information on the Users page. -* Click the number link in the **Alerts** column to view the alerts on the Alerts page. Hover over the number and select **Investigate in timeline** () to launch Timeline with a query that includes the associated user name value. - -For more information about user risk scores, refer to Entity risk scoring. - -
- -## Anomalies - -Anomaly detection jobs identify suspicious or irregular behavior patterns. The Anomalies table displays the total number of anomalies identified by these prebuilt ((ml)) jobs (named in the **Anomaly name** column). - - - -To display anomaly results, you must [install and run](((ml-docs))/ml-ad-run-jobs.html) one or more [prebuilt anomaly detection jobs](((security-guide))/prebuilt-ml-jobs.html). You cannot add custom anomaly detection jobs to the Entity Analytics dashboard. - - - -![Anomalies table](../images/detection-entity-dashboard/-dashboards-anomalies-table.png) - -Interact with the table to view more details: - -* Click **View all host anomalies** to go to the Anomalies table on the Hosts page. -* Click **View all user anomalies** to go to the Anomalies table on the Users page. -* Click **View all** to display and manage all machine learning jobs on the Anomaly Detection Jobs page. - - -To learn more about ((ml)), refer to [What is Elastic machine learning?](((ml-docs))/machine-learning-intro.html) - - diff --git a/docs/serverless/dashboards/detection-response-dashboard.mdx b/docs/serverless/dashboards/detection-response-dashboard.mdx deleted file mode 100644 index dd09bfbd1a..0000000000 --- a/docs/serverless/dashboards/detection-response-dashboard.mdx +++ /dev/null @@ -1,34 +0,0 @@ ---- -slug: /serverless/security/detection-response-dashboard -title: Detection & Response dashboard -description: The Detection & Response dashboard provides focused visibility into the day-to-day operations of your security environment -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -The Detection & Response dashboard provides focused visibility into the day-to-day operations of your security environment. It helps security operations managers and analysts quickly monitor recent and high priority detection alerts and cases, and identify the hosts and users associated with alerts. - -![Overview of Detection & Response dashboard](../images/detection-response-dashboard/-detections-detection-response-dashboard.png) - -Interact with various dashboard elements: - -* Use the date and time picker in the upper-right to specify a time range for displaying information on the dashboard. - -* In sections that list alert counts, click a number to view the alerts on the Alerts page. Hover over the number and select **Investigate in timeline** () to open the alerts in Timeline. - -* Click the name of a detection rule, case, host, or user to open its details page. - -The following sections are included: - -{/* [width="100%",cols="s,"] */} -| | | -|---|---| -| Alerts | The total number of detection alerts generated within the time range, organized by status and severity. Select **View alerts** to open the Alerts page. | -| Cases | The total number of cases created within the time range, organized by status. Select **View cases** to open the Cases page. | -| Open alerts by rule | The top four detection rules with open alerts, organized by the severity and number of alerts for each rule. Select **View all open alerts** to open the Alerts page. | -| Recently created cases | The four most recently created cases. Select **View recent cases** to open the Cases page. | -| Hosts by alert severity | The hosts generating detection alerts within the time range, organized by the severity and number of alerts. Shows up to 100 hosts. | -| Users by alert severity | The users generating detection alerts within the time range, organized by the severity and number of alerts. Shows up to 100 users. | diff --git a/docs/serverless/dashboards/kubernetes-dashboard-dash.mdx b/docs/serverless/dashboards/kubernetes-dashboard-dash.mdx deleted file mode 100644 index 891d83281b..0000000000 --- a/docs/serverless/dashboards/kubernetes-dashboard-dash.mdx +++ /dev/null @@ -1,69 +0,0 @@ ---- -slug: /serverless/security/kubernetes-dashboard-dash -title: Kubernetes dashboard -description: The Kubernetes dashboard provides insight into Linux process data from your Kubernetes clusters. -tags: [ 'serverless', 'security', 'overview', 'cloud security' ] -status: in review ---- - - -
- -The Kubernetes dashboard provides insight into Linux process data from your Kubernetes clusters. It shows sessions in detail and in the context of your monitored infrastructure. - -![The Kubernetes dashboard, with numbered labels 1 through 3 for major sections](../images/kubernetes-dashboard/-dashboards-kubernetes-dashboard.png) -The numbered sections are described below: - - 1. The charts at the top of the dashboard provide an overview of your monitored Kubernetes infrastructure. You can hide them by clicking **Hide charts**. - 1. The tree navigation menu allows you to navigate through your deployments and select the scope of the sessions table to the right. You can select any item in the menu to show its sessions. In Logical view, the menu is organized by Cluster, Namespace, Pod, and Container image. In Infrastructure view, it is organized by Cluster, Node, Pod, and Container image. - 1. The sessions table displays sessions collected from the selected element of your Kubernetes infrastructure. You can view it in fullscreen by selecting the button in the table's upper right corner. You can sort the table by any of its fields. - -You can filter the data using the KQL search bar and date picker at the top of the page. - -From the sessions table's Actions column, you can take the following investigative actions: - -- View details -- Open in Timeline -- Run Osquery -- Analyze event -- Open Session View - -Session View displays Kubernetes metadata under the **Metadata** tab of the Detail panel: - -![The Detail panel's metadata tab](../images/kubernetes-dashboard/-dashboards-metadata-tab.png) - -The **Metadata** tab is organized into these expandable sections: - -- **Metadata:** `hostname`, `id`, `ip`, `mac`, `name`, Host OS information -- **Cloud:** `instance.name`, `provider`, `region`, `account.id`, `project.id` -- **Container:** `id`, `name`, `image.name`, `image.tag`, `image.hash.all` -- **Orchestrator:** `resource.ip`, `resource.name`, `resource.type`, `namespace`, `cluster.id`, `cluster.name`, `parent.type` - -
- -## Setup -To get data for this dashboard, set up Cloud Workload Protection for Kubernetes for the clusters you want to display on the dashboard. - - - -- Kubernetes node operating systems must have Linux kernels 5.10.16 or higher. - - - -**Support matrix**: -This feature is currently available on GKE and EKS using Linux hosts and Kubernetes versions that match the following specifications: -| | | | -|---|---|---| -| | EKS 1.24-1.26 (AL2022) | GKE 1.24-1.26 (COS) | -| Process event exports | ✓ | ✓ | -| Network event exports | ✓ | ✓ | -| File event exports | ✓ | ✓ | -| File blocking | ✓ | ✓ | -| Process blocking | ✓ | ✓ | -| Network blocking | ✗ | ✗ | -| Drift prevention | ✓ | ✓ | -| Mount point awareness | ✓ | ✓ | - - -This dashboard uses data from the `logs-*` index pattern, which is included by default in the `securitySolution:defaultIndex` advanced setting. To collect data from multiple ((es)) clusters (as in a cross-cluster deployment), update `logs-*` to `*:logs-*`. - \ No newline at end of file diff --git a/docs/serverless/dashboards/overview-dashboard.mdx b/docs/serverless/dashboards/overview-dashboard.mdx deleted file mode 100644 index effec8e44e..0000000000 --- a/docs/serverless/dashboards/overview-dashboard.mdx +++ /dev/null @@ -1,48 +0,0 @@ ---- -slug: /serverless/security/overview-dashboard -title: Overview dashboard -description: The Overview dashboard provides a high-level snapshot of alerts and events. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -The Overview dashboard provides a high-level snapshot of alerts and events. It helps you assess overall system health and find anomalies that may require further investigation. - -![Overview dashboard](../images/overview-dashboard/-dashboards-overview-pg.png) - -## Live feed - -The live feed on the Overview dashboard helps you quickly access recently created cases, favorited Timelines, and the latest ((elastic-sec)) news. - - -The **Security news** section provides the latest ((elastic-sec)) news to help you stay informed of new developments, learn about ((elastic-sec)) features, and more. - - -![Overview dashboard with live feed section highlighted](../images/overview-dashboard/-dashboards-live-feed-ov-page.png) - -## Histograms - -Time-based histograms show the number of detections, alerts, and events that have occurred within the selected time range. To focus on a particular time, click and drag to select a time range, or choose a preset value. The **Stack by** menu lets you select which field is used to organize the data. For example, in the Alert trend histogram, stack by `kibana.alert.rule.name` to display alert counts by rule name within the specified time frame. - -Hover over histograms, graphs, and tables to display an **Inspect** button () or options menu (). Click to inspect the visualization's ((es)) queries, add it to a new or existing case, or open it in Lens for customization. - -## Host and network events - -View event and host counts grouped by data source, such as **Auditbeat** or **((elastic-defend))**. Expand a category to view specific counts of host or network events from the selected source. - -![Host and network events on the Overview dashboard](../images/overview-dashboard/-getting-started-events-count.png) - -## Threat Intelligence - -The Threat Intelligence view on the Overview dashboard provides streamlined threat intelligence data for threat detection and matching. - -The view shows the total number of ingested threat indicators, enabled threat intelligence sources, and ingested threat indicators per source. To learn more about the ingested indicator data, click **View indicators**. - - -For more information about connecting to threat intelligence sources, visit Enable threat intelligence integrations. - - - diff --git a/docs/serverless/dashboards/rule-monitoring-dashboard.mdx b/docs/serverless/dashboards/rule-monitoring-dashboard.mdx deleted file mode 100644 index 547cb4ae91..0000000000 --- a/docs/serverless/dashboards/rule-monitoring-dashboard.mdx +++ /dev/null @@ -1,64 +0,0 @@ ---- -slug: /serverless/security/rule-monitoring-dashboard -title: Detection rule monitoring dashboard -description: Visualize your detection rules' performance. -tags: ["security","how-to","visualize","monitor"] -status: in review ---- - - -
- -The Detection rule monitoring dashboard provides visualizations to help you monitor the overall health and performance of ((elastic-sec))'s detection rules. Consult this dashboard for a high-level view of whether your rules are running successfully and how long they're taking to run, search data, and create alerts. - -![Overview of Detection rule monitoring dashboard](../images/rule-monitoring-dashboard/-dashboards-rule-monitoring-overview.png) - - - -To access this dashboard and its data, you must have the appropriate user role. - - - -
- -## Visualization data and types - -The dashboard presents a variety of information about your detection rules. Visualizations display and calculate data within the time range and filters selected at the top of the dashboard. - -The following visualizations are included: - -* **Rule KPIs (key performance indicators)**: The total number of rules enabled, how many times they collectively ran, and their response statuses. -* **Executions by rule type**: Rule executions over time, broken down by rule type. -* **Executions by status**: Rule executions over time, broken down by status. -* **Total rule execution duration**: How long rules take to run, from start to finish. -* **Rule schedule delay**: The delay between a rule's scheduled start time and when it actually starts running. -* **Search/query duration**: How long rules take to query source indices or data views. -* **Indexing duration**: How long rules take to generate alerts and write them to the `.alerts-security.alerts-*` index. -* **Top 10 rules**: Lists of the overall slowest rules, most delayed rules, and rules with the most **Failed** and **Warning** response statuses. - -
- -## Visualization panel actions - -Open a panel's options menu () to customize the panel or use its data for further analysis and investigation: - -* **Edit panel settings**: Customize the panel's display settings. Options vary by visualization type. -* **Inspect**: Examine the panel's underlying data and queries. -* **Explore data in Discover**: Open Discover with preloaded filters to display the panel's data. -* **Maximize panel**: Expand the panel. -* **Download as CSV**: Download the panel's data in a CSV file. -* **Copy to dashboard**: Copy the panel to an existing or new dashboard. -* **Add to existing case**: Add the panel to an existing case. -* **Add to new case**: Create a new case and add the panel to it. -* **Create anomaly detection job**: Create a ((ml)) anomaly detection job using the panel's data. - -
- -## Clone and edit the dashboard - -To make persistent changes to the dashboard, you can clone the dashboard and edit the cloned copy, but your copy will not get updates from Elastic. - -1. Click **Edit**, then **Save as**. -1. On the **Save dashboard** dialog, enter a new **Title** for your cloned copy. -1. Make sure **Save as new dashboard** is selected, then click **Save**. You can now make additional changes and save them to your copy. - diff --git a/docs/serverless/dashboards/vuln-management-dashboard-dash.mdx b/docs/serverless/dashboards/vuln-management-dashboard-dash.mdx deleted file mode 100644 index 6121ec6621..0000000000 --- a/docs/serverless/dashboards/vuln-management-dashboard-dash.mdx +++ /dev/null @@ -1,42 +0,0 @@ ---- -slug: /serverless/security/vuln-management-dashboard-dash -title: Cloud Native Vulnerability Management Dashboard -description: The CNVM dashboard gives an overview of vulnerabilities detected in your cloud infrastructure. -tags: ["security","cloud","reference","manage"] -status: in review ---- - - -
- -The Cloud Native Vulnerability Management (CNVM) dashboard gives you an overview of vulnerabilities detected in your cloud infrastructure. - -![The CNVM dashboard](../images/vuln-management-dashboard-dash/-cloud-native-security-vuln-management-dashboard.png) - - - -* To collect this data, install the Cloud Native Vulnerability Management integration. - - - -
- -## CNVM dashboard UI -The summary cards at the top of the dashboard display the number of monitored cloud accounts, scanned virtual machines (VMs), and vulnerabilities (grouped by severity). - -The **Trend by severity** bar graph complements the summary cards by displaying the number of vulnerabilities found on your infrastructure over time, sorted by severity. It has a maximum time scale of 30 days. - - - -* Click the severity levels legend on its right to hide/show each severity level. -* To display data from specific cloud accounts, select the account names from the **Accounts** drop-down menu. - - - -The page also includes three tables: - -* **Top 10 vulnerable resources** shows your VMs with the highest number of vulnerabilities. -* **Top 10 patchable vulnerabilities** shows the most common vulnerabilities in your environment that can be fixed by a software update. -* **Top 10 vulnerabilities** shows the most common vulnerabilities in your environment, with additional details. - -Click **View all vulnerabilities** at the bottom of a table to open the Vulnerabilities Findings page, where you can view additional details. diff --git a/docs/serverless/edr-install-config/agent-tamper-protection.mdx b/docs/serverless/edr-install-config/agent-tamper-protection.mdx deleted file mode 100644 index 5ac1f86f7f..0000000000 --- a/docs/serverless/edr-install-config/agent-tamper-protection.mdx +++ /dev/null @@ -1,58 +0,0 @@ ---- -slug: /serverless/security/agent-tamper-protection -title: Prevent ((agent)) uninstallation -description: Block unauthorized attempts to uninstall ((agent)) on hosts. -tags: [ 'serverless', 'security', 'how-to' ] ---- - - -
- -For hosts enrolled in ((elastic-defend)), you can prevent unauthorized attempts to uninstall ((agent)) and ((elastic-endpoint)) by enabling **Agent tamper protection** on the Agent policy. This offers an additional layer of security by preventing users from bypassing or disabling ((elastic-defend))'s endpoint protections. - -When enabled, ((agent)) and ((elastic-endpoint)) can only be uninstalled on the host by including an uninstall token in the uninstall CLI command. One unique uninstall token is generated per Agent policy, and you can retrieve uninstall tokens in an Agent policy's settings or in the ((fleet)) UI. - - - -* Agent tamper protection requires the Endpoint Protection Complete . - -* Hosts must be enrolled in the ((elastic-defend)) integration. - -* ((agent))s must be version 8.11.0 or later. - -* This feature is supported for all operating systems. - - - - - -
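For reference, uninstalling from a tamper-protected host looks like the following sketch. Run it with administrator privileges on the host, and replace the placeholder with the token for that host's Agent policy.

```shell
sudo elastic-agent uninstall --uninstall-token <uninstall-token>
```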
- -## Enable Agent tamper protection - -You can enable Agent tamper protection by configuring the ((agent)) policy. - -1. Go to **((fleet))** -> **Agent policies**, then select the Agent policy you want to configure. -1. Select the **Settings** tab on the policy details page. -1. In the **Agent tamper protection** section, turn on the **Prevent agent tampering** setting. - - This makes the **Get uninstall command** link available, which you can follow to get the uninstall token and CLI command if you need to uninstall an Agent on this policy. - - - You can also access an Agent policy's uninstall tokens on the **Uninstall tokens** tab on the **((fleet))** page. Refer to Access uninstall tokens for more information. - - -1. Select **Save changes**. - -
- -## Access uninstall tokens - -If you need the uninstall token to remove ((agent)) from an endpoint, you can find it in several ways: - -* **On the Agent policy** — Go to the Agent policy's **Settings** tab, then click the **Get uninstall command** link. The **Uninstall agent** flyout opens, containing the full uninstall command with the token. - -* **On the ((fleet)) page** — Go to **((fleet))** -> **Uninstall tokens** for a list of the uninstall tokens generated for your Agent policies. You can: - - * Click the **Show token** icon in the **Token** column to reveal a specific token. - * Click the **View uninstall command** icon in the **Actions** column to open the **Uninstall agent** flyout, containing the full uninstall command with the token. diff --git a/docs/serverless/edr-install-config/artifact-control.mdx b/docs/serverless/edr-install-config/artifact-control.mdx deleted file mode 100644 index 9fa3601001..0000000000 --- a/docs/serverless/edr-install-config/artifact-control.mdx +++ /dev/null @@ -1,28 +0,0 @@ ---- -slug: /serverless/security/protection-artifact-control -title: Configure updates for protection artifacts -description: Configure updates for protection artifacts. -tags: [ 'serverless', 'security', 'how-to', 'secure', 'manage' ] -status: in review ---- - - -
- -On the **Protection updates** tab of the ((elastic-defend)) integration policy, you can configure how ((elastic-defend)) receives updates from Elastic with the latest threat detections, global exceptions, malware models, rule packages, and other protection artifacts. By default, these artifacts are automatically updated regularly, ensuring your environment is up to date with the latest protections. - -You can disable automatic updates and freeze your protection artifacts to a specific date, allowing you to control when to receive and install the updates. For example, you might want to temporarily disable updates to ensure resource availability during a high-volume period, test updates in a controlled staging environment before rolling out to production, or roll back to a previous version of protections. - -Protection artifacts will expire after 18 months, and you'll no longer be able to select them as a deployed version. If you're already using a specific version when it expires, you'll keep using it until you either select a later non-expired version or re-enable automatic updates. - - -It is strongly advised to keep automatic updates enabled to ensure the highest level of security for your environment. Proceed with caution if you decide to disable automatic updates. - - -To configure the protection artifacts version deployed in your environment: - -1. Go to **Manage** → **Policies**, select an ((elastic-defend)) integration policy, then select the **Protection updates** tab. -1. Turn off the **Enable automatic updates** toggle. -1. Use the **Version to deploy** date picker to select the date of the protection artifacts you want to use in your environment. -1. (Optional) Enter a **Note** to explain the reason for selecting a particular version of protection artifacts. -1. Select **Save**. diff --git a/docs/serverless/edr-install-config/configure-endpoint-integration-policy.mdx b/docs/serverless/edr-install-config/configure-endpoint-integration-policy.mdx deleted file mode 100644 index bcab7e77be..0000000000 --- a/docs/serverless/edr-install-config/configure-endpoint-integration-policy.mdx +++ /dev/null @@ -1,279 +0,0 @@ ---- -slug: /serverless/security/configure-endpoint-integration-policy -title: Configure an integration policy for ((elastic-defend)) -description: Configure settings on an ((elastic-defend)) integration policy. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -After the ((agent)) is installed with the ((elastic-defend)) integration, several protection features — including -preventions against malware, ransomware, memory threats, and malicious behavior — are automatically enabled -on protected hosts (most features require the Endpoint Protection Essentials or Endpoint Protection Complete ). If needed, you can update the -integration policy to configure protection settings, event collection, antivirus settings, trusted applications, -event filters, host isolation exceptions, and blocked applications to meet your organization's security needs. - -You can also create multiple ((elastic-defend)) integration policies to maintain unique configuration profiles. To create an additional ((elastic-defend)) integration policy, go to **Project settings** → **Integrations**, then follow the steps for adding the ((elastic-defend)) integration. - - - -You must have the appropriate user role to configure an integration policy. - - - -{/* Commented out because APIs are not exposed in initial serverless release. We can uncomment this and add a link to API docs once APIs are available. - -In addition to configuring an {elastic-defend} policy through the ((elastic-sec)) UI, you can create and customize an ((elastic-defend)) policy through the API. - -*/} - -To configure an integration policy: - -1. Go to **Assets** → **Endpoints** → **Policies** to view the **Policies** page. -1. Select the integration policy you want to configure. The integration policy configuration page appears. -1. On the **Policy settings** tab, review and configure the following settings as appropriate: - - * Malware protection - * Ransomware protection - * Memory threat protection - * Malicious behavior protection - * Attack surface reduction - * Event collection - * Register ((elastic-sec)) as antivirus (optional) - * Advanced policy settings (optional) - * Save the general policy settings - -1. Click the **Trusted applications**, **Event filters**, **Host isolation exceptions**, and **Blocklist** tabs to review the endpoint policy artifacts assigned to this integration policy (for more information, refer to trusted applications, event filters, host isolation exceptions, and blocklist). On these tabs, you can: - - * Expand and view an artifact — Click the arrow next to its name. - * View an artifact's details — Click the actions menu (), then select **View full details**. - * Unassign an artifact — Click the actions menu (), - then select **Remove from policy**. This does not delete the artifact; this just unassigns it from the current policy. - * Assign an existing artifact — Click **Assign _x_ to policy**, - then select an item from the flyout. This view lists any existing artifacts that aren't already assigned to the current policy. - - - You can't create a new endpoint policy artifact while configuring an integration policy. - To create a new artifact, go to its main page in the ((security-app)) (for example, - to create a new trusted application, go to **Assets** → **Endpoints** → **Trusted applications**). - - -1. Click the **Protection updates** tab to configure how ((elastic-defend)) receives updates from Elastic with the latest threat detections, malware models, and other protection artifacts. Refer to for more information. - -
- -## Malware protection - -((elastic-defend)) malware prevention detects and stops malicious attacks by using a machine learning model -that looks for static attributes to determine if a file is malicious or benign. - -By default, malware protection is enabled on Windows, macOS, and Linux hosts. -To disable malware protection, turn off the **Malware protections** toggle. - - - -Malware protection requires the Endpoint Protection Essentials . - - - -Malware protection levels are: - -* **Detect**: Detects malware on the host and generates an alert. The agent will **not** block malware. - You must pay attention to and analyze any malware alerts that are generated. - -* **Prevent** (Default): Detects malware on the host, blocks it from executing, and generates an alert. - -These additional options are available for malware protection: - -* **Blocklist**: Enable or disable the blocklist for all hosts associated with this ((elastic-defend)) policy. The blocklist allows you to prevent specified applications from running on hosts, extending the list of processes that ((elastic-defend)) considers malicious. - -* **Scan files upon modification**: By default, ((elastic-defend)) scans files every time they're modified, which can be resource-intensive on hosts where files are frequently modified, such as servers and developer machines. Turn off this option to only scan files when they're executed. ((elastic-defend)) will continue to identify malware as it attempts to run, providing a robust level of protection while improving endpoint performance. - -Select **Notify user** to send a push notification in the host operating system when activity is detected or prevented. Notifications are enabled by default for the **Prevent** option. - - -Endpoint Protection Complete customers can customize these notifications using the `Elastic Security {action} {filename}` syntax. - - -![Detail of malware protection section.](../images/configure-endpoint-integration-policy/-getting-started-install-endpoint-malware-protection.png) - -
- -### Manage quarantined files - -When **Prevent** is enabled for malware protection, ((elastic-defend)) will quarantine any malicious file it finds. Specifically ((elastic-defend)) will remove the file from its current location, encrypt it with the encryption key `ELASTIC`, move it to a different folder, and rename it as a GUID string, such as `318e70c2-af9b-4c3a-939d-11410b9a112c`. - -The quarantine folder location varies by operating system: - -- macOS: `/System/Volumes/Data/.equarantine` -- Linux: `.equarantine` at the root of the mount point of the file being quarantined -- Windows - ((elastic-defend)) versions 8.5 and later: `[DriveLetter:]\.quarantine`, unless the files are from the `C:` drive. These files are moved to `C:\Program Files\Elastic\Endpoint\state\.equarantine`. -- Windows - ((elastic-defend)) versions 8.4 and earlier: `[DriveLetter:]\.quarantine`, for any drive - -To restore a quarantined file to its original state and location, add an exception to the rule that identified the file as malicious. If the exception would've stopped the rule from identifying the file as malicious, ((elastic-defend)) restores the file. - -You can access a quarantined file by using the `get-file` response action command in the response console. To do this, copy the path from the alert's **Quarantined file path** field (`file.Ext.quarantine_path`), which appears under **Highlighted fields** in the alert details flyout. Then paste the value into the `--path` parameter. This action doesn't restore the file to its original location, so you will need to do this manually. - - -Response actions and the response console UI are Endpoint Protection Complete . - - -
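For example, after copying the `file.Ext.quarantine_path` value from the alert, the response console command might look like this. The path shown is illustrative, based on the Windows quarantine location described above.

```
get-file --path "C:\Program Files\Elastic\Endpoint\state\.equarantine\318e70c2-af9b-4c3a-939d-11410b9a112c"
```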
- -## Ransomware protection - -Behavioral ransomware prevention detects and stops ransomware attacks on Windows systems by -analyzing data from low-level system processes. It is effective across an array of widespread -ransomware families — including those targeting the system’s master boot record. - - - -Ransomware protection requires the Endpoint Protection Essentials . - - - -Ransomware protection levels are: - -* **Detect**: Detects ransomware on the host and generates an alert. ((elastic-defend)) - will **not** block ransomware. You must pay attention to and analyze any ransomware alerts that are generated. - -* **Prevent** (Default): Detects ransomware on the host, blocks it from executing, - and generates an alert. - -When ransomware protection is enabled, canary files placed in targeted locations on your hosts provide an early warning system for potential ransomware activity. When a canary file is modified, Elastic Defend immediately generates a ransomware alert. If **prevent** ransomware is active, ((elastic-defend)) terminates the process that modified the file. - -Select **Notify user** to send a push notification in the host operating system when activity is detected or prevented. Notifications are enabled by default for the **Prevent** option. - - -Endpoint Protection Complete customers can customize these notifications using the `Elastic Security {action} {filename}` syntax. - - -![Detail of ransomware protection section.](../images/configure-endpoint-integration-policy/-getting-started-install-endpoint-ransomware-protection.png) - -
- -## Memory threat protection - -Memory threat protection detects and stops in-memory threats, such as shellcode injection, -which are used to evade traditional file-based detection techniques. - - - -Memory threat protection requires the Endpoint Protection Essentials . - - - -Memory threat protection levels are: - -* **Detect**: Detects memory threat activity on the host and generates an alert. - ((elastic-defend)) will **not** block the in-memory activity. You must pay attention to and analyze any alerts that are generated. - -* **Prevent** (Default): Detects memory threat activity on the host, forces the process - or thread to stop, and generates an alert. - -Select **Notify user** to send a push notification in the host operating system when activity is detected or prevented. Notifications are enabled by default for the **Prevent** option. - - -Endpoint Protection Complete customers can customize these notifications using the `Elastic Security {action} {rule}` syntax. - - -![Detail of memory protection section.](../images/configure-endpoint-integration-policy/-getting-started-install-endpoint-memory-protection.png) - -
- -## Malicious behavior protection - -Malicious behavior protection detects and stops threats by monitoring the behavior -of system processes for suspicious activity. Behavioral signals are much more difficult -for adversaries to evade than traditional file-based detection techniques. - - - -Malicious behavior protection requires the Endpoint Protection Essentials . - - - -Malicious behavior protection levels are: - -* **Detect**: Detects malicious behavior on the host and generates an alert. - ((elastic-defend)) will **not** block the malicious behavior. You must pay attention to and analyze any alerts that are generated. - -* **Prevent** (Default): Detects malicious behavior on the host, forces the process to stop, - and generates an alert. - -Select whether you want to use **Reputation service** for additional protection. Elastic's reputation service leverages our extensive threat intelligence knowledge to make high confidence real-time prevention decisions. For example, reputation service can detect suspicious downloads of binaries with low or malicious reputation. Endpoints communicate with the reputation service directly at https://cloud.security.elastic.co. - -Select **Notify user** to send a push notification in the host operating system when activity is detected or prevented. Notifications are enabled by default for the **Prevent** option. - - -Endpoint Protection Complete customers can customize these notifications using the `Elastic Security {action} {rule}` syntax. - - -![Detail of behavior protection section.](../images/configure-endpoint-integration-policy/-getting-started-install-endpoint-behavior-protection.png) - -
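If you enable the reputation service, hosts need outbound access to the URL above. As a quick, informal connectivity check from a host (the exact response is not important; you are only confirming that the endpoint can be reached through any proxies or firewalls):

```shell
curl -sI https://cloud.security.elastic.co
```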
- -## Attack surface reduction - -This section helps you reduce vulnerabilities that attackers can target on Windows endpoints. - - - -Attack surface reduction requires the Endpoint Protection Essentials . - - - -* **Credential hardening**: Prevents attackers from stealing credentials stored in Windows system process memory. Turn on the toggle to remove any overly permissive access rights that aren't required for standard interaction with the Local Security Authority Subsystem Service (LSASS). This feature enforces the principle of least privilege without interfering with benign system activity that is related to LSASS. - -![Detail of attack surface reduction section.](../images/configure-endpoint-integration-policy/-getting-started-install-endpoint-attack-surface-reduction.png) - -
- -## Event collection - -In the **Settings** section, select which categories of events to collect on each operating system. -Most categories are collected by default, as seen below. - -![Detail of event collection section.](../images/configure-endpoint-integration-policy/-getting-started-install-endpoint-event-collection.png) - -
- -## Register ((elastic-sec)) as antivirus (optional) - -With ((elastic-defend)) version 7.10 or later on Windows 7 or later, you can -register ((elastic-sec)) as your hosts' antivirus software by enabling **Register as antivirus**. - - -Windows Server is not supported. Antivirus registration requires Windows Security Center, which is not included in Windows Server operating systems. - - -By default, the **Sync with malware protection level** is selected to automatically set antivirus registration to match how you've configured ((elastic-defend))'s malware protection. If malware protection is turned on _and_ set to **Prevent**, antivirus registration will also be enabled; in any other case, antivirus registration will be disabled. - -If you don't want to sync antivirus registration, you can set it manually with **Enabled** or **Disabled**. - -![Detail of Register as antivirus option.](../images/configure-endpoint-integration-policy/-getting-started-register-as-antivirus.png) - -
- -## Advanced policy settings (optional) - -Users with unique configuration and security requirements can select **Show advanced settings** -to configure the policy to support advanced use cases. Hover over each setting to view its description. - - -Advanced settings are not recommended for most users. - - -This section includes: - -* Turn off diagnostic data for ((elastic-defend)) -* Configure self-healing rollback for Windows endpoints -* Configure Linux file system monitoring - -
- -## Save the general policy settings - -After you have configured the general settings on the **Policy settings** tab, click **Save**. A confirmation message appears. diff --git a/docs/serverless/edr-install-config/deploy-endpoint-macos-cat-mont.mdx b/docs/serverless/edr-install-config/deploy-endpoint-macos-cat-mont.mdx deleted file mode 100644 index 76b61f07ff..0000000000 --- a/docs/serverless/edr-install-config/deploy-endpoint-macos-cat-mont.mdx +++ /dev/null @@ -1,87 +0,0 @@ ---- -slug: /serverless/security/install-endpoint-manually -title: Enable access for macOS Monterey -description: Configure access for deploying ((elastic-defend)) on macOS Monterey. -tags: ["security","how-to","secure"] -status: in review ---- - - -
- -To install and configure ((elastic-defend)) manually without a Mobile Device Management (MDM) profile, you must enable additional permissions on the host before ((elastic-endpoint))—the installed component that performs ((elastic-defend))'s threat monitoring and prevention—is fully functional: - -* Approve the system extension -* Approve network content filtering -* Enable Full Disk Access - - -Grant these permissions after you configure and install the ((elastic-defend)) integration, which includes enrolling the ((agent)). - -
- -## Approve the system extension for ((elastic-endpoint)) - -For macOS Monterey (12.x), ((elastic-endpoint)) will attempt to load a system extension during installation. This system extension must be loaded in order to provide insight into system events such as process events, file system events, and network events. - -The following message appears during installation: - -![](../images/deploy-elastic-endpoint/-getting-started-install-endpoint-system-ext-blocked.png) - -1. Click **Open Security Preferences**. -1. In the lower-left corner of the **Security & Privacy** pane, click the **Lock button**, then enter your credentials to authenticate. - - ![](../images/deploy-elastic-endpoint/-getting-started-fda-lock-button.png) - -1. Click **Allow** to allow the ((elastic-endpoint)) system extension to load. - - ![](../images/deploy-elastic-endpoint/-getting-started-install-endpoint-allow-system-ext.png) - -
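If you prefer to confirm the result from Terminal, you can list system extensions and look for the Elastic entry. The identifier `co.elastic.systemextension` is the one referenced later on this page; output format varies by macOS version.

```shell
systemextensionsctl list | grep -i elastic
```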
- -## Approve network content filtering for ((elastic-endpoint)) - -After successfully loading the ((elastic-endpoint)) system extension, an additional message appears, asking to allow ((elastic-endpoint)) to filter network content. - -![](../images/deploy-elastic-endpoint/-getting-started-install-endpoint-filter-network-content.png) - -* Click **Allow** to enable content filtering for the ((elastic-endpoint)) system extension. Without this approval, ((elastic-endpoint)) cannot receive network events and, therefore, cannot enable network-related features such as host isolation. - -
- -## Enable Full Disk Access for ((elastic-endpoint)) - -((elastic-endpoint)) requires Full Disk Access to subscribe to system events via the ((elastic-defend)) framework and to protect your network from malware and other cybersecurity threats. To enable Full Disk Access on endpoints running macOS Catalina (10.15) and later, you must manually approve ((elastic-endpoint)). - - -The following instructions apply only to ((elastic-endpoint)) version 8.0.0 and later. To see Full Disk Access requirements for the Endgame sensor, refer to Endgame's documentation. - -{/* Might need to revisit this note and the section. Keep an eye on https://github.com/elastic/staging-serverless-security-docs/issues/124 */} - -1. Open the **System Preferences** application. -1. Select **Security and Privacy**. - - ![](../images/deploy-elastic-endpoint/-getting-started-fda-sec-privacy-pane.png) - -1. On the **Security and Privacy** pane, select the **Privacy** tab. -1. From the left pane, select **Full Disk Access**. - - ![Select Full Disk Access](../images/deploy-elastic-endpoint/-getting-started-fda-select-fda.png) - -1. In the lower-left corner of the pane, click the **Lock button**, then enter your credentials to authenticate. -1. In the **Privacy** tab, confirm that `ElasticEndpoint` AND `co.elastic.systemextension` are selected to properly enable Full Disk Access. - - ![](../images/deploy-elastic-endpoint/-getting-started-fda-select-endpoint-ext.png) - -If the endpoint is running ((elastic-endpoint)) version 7.17.0 or earlier: -{/* Might need to revisit this note and the section. Keep an eye on https://github.com/elastic/staging-serverless-security-docs/issues/124 */} - -1. In the lower-left corner of the pane, click the **Lock button**, then enter your credentials to authenticate. -1. Click the **+** button to view **Finder**. -1. Navigate to `/Library/Elastic/Endpoint`, then select the `elastic-endpoint` file. -1. Click **Open**. -1. In the **Privacy** tab, confirm that `elastic-endpoint` AND `co.elastic.systemextension` are selected to properly enable Full Disk Access. - - ![](../images/deploy-elastic-endpoint/-getting-started-fda-fda-7-16.png) - diff --git a/docs/serverless/edr-install-config/deploy-endpoint-macos-ven.mdx b/docs/serverless/edr-install-config/deploy-endpoint-macos-ven.mdx deleted file mode 100644 index 70ea365d04..0000000000 --- a/docs/serverless/edr-install-config/deploy-endpoint-macos-ven.mdx +++ /dev/null @@ -1,96 +0,0 @@ ---- -slug: /serverless/security/deploy-elastic-endpoint-ven -title: Enable access for macOS Ventura and higher -description: Configure access for deploying ((elastic-defend)) on macOS Ventura and higher. -tags: ["security","how-to","secure"] -status: in review ---- - - -

To install and configure ((elastic-defend)) manually without a Mobile Device Management (MDM) profile, you must enable additional permissions on the host before ((elastic-endpoint))—the installed component that performs ((elastic-defend))'s threat monitoring and prevention—is fully functional:

* Approve the system extension
* Approve network content filtering
* Enable Full Disk Access


Enable these permissions after you configure and install the ((elastic-defend)) integration, which includes enrolling the ((agent)).

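
After you complete the steps in the following sections, you can optionally verify from a terminal that ((elastic-endpoint)) is running with the permissions it needs. This sketch uses the `elastic-endpoint status` command described in the ((elastic-endpoint)) command reference; it must be run with elevated privileges:

```shell
# Optional verification once the system extension, content filtering, and
# Full Disk Access have all been approved. A Healthy status indicates the
# policy was applied successfully.
sudo /Library/Elastic/Endpoint/elastic-endpoint status --output human
```
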
- -## Approve the system extension for ((elastic-endpoint)) - -For macOS Ventura (13.0) and later, ((elastic-endpoint)) will attempt to load a system extension during installation. This system extension must be loaded in order to provide insight into system events such as process events, file system events, and network events. - -The following message appears during installation: - -![](../images/deploy-elastic-endpoint-ven/-getting-started-install-endpoint-ven-system_extension_blocked_warning_ven.png) - -1. Click **Open System Settings**. -1. In the left pane, click **Privacy & Security**. - - ![](../images/deploy-elastic-endpoint-ven/-getting-started-install-endpoint-ven-privacy_security_ven.png) - -1. On the right pane, scroll down to the Security section. Click **Allow** to allow the ElasticEndpoint system extension to load. - - ![](../images/deploy-elastic-endpoint-ven/-getting-started-install-endpoint-ven-allow_system_extension_ven.png) - -1. Enter your username and password and click **Modify Settings** to save your changes. - - ![](../images/deploy-elastic-endpoint-ven/-getting-started-install-endpoint-ven-enter_login_details_to_confirm_ven.png) - -

## Approve network content filtering for ((elastic-endpoint))

After the ElasticEndpoint system extension loads successfully, an additional message appears, asking to allow ((elastic-endpoint)) to filter network content.

![](../images/deploy-elastic-endpoint-ven/-getting-started-install-endpoint-ven-allow_network_filter_ven.png)

Click **Allow** to enable content filtering for the ElasticEndpoint system extension. Without this approval, ((elastic-endpoint)) cannot receive network events and, therefore, cannot enable network-related features such as host isolation.


## Enable Full Disk Access for ((elastic-endpoint))

((elastic-endpoint)) requires Full Disk Access to subscribe to system events via the ((elastic-defend)) framework and to protect your network from malware and other cybersecurity threats. Full Disk Access is a privacy feature introduced in macOS Mojave (10.14) that prevents some applications from accessing your data.

If you have not granted Full Disk Access, the following notification prompt will appear.

![](../images/deploy-elastic-endpoint-ven/-getting-started-install-endpoint-ven-allow_full_disk_access_notification_ven.png)

To enable Full Disk Access, you must manually approve ((elastic-endpoint)).


The following instructions apply only to ((elastic-endpoint)) version 8.0.0 and later. To see Full Disk Access requirements for the Endgame sensor, refer to Endgame's documentation.


1. Open the **System Settings** application.
1. In the left pane, select **Privacy & Security**.

    ![](../images/deploy-elastic-endpoint-ven/-getting-started-install-endpoint-ven-privacy_security_ven.png)

1. From the right pane, select **Full Disk Access**.

    ![Select Full Disk Access](../images/deploy-elastic-endpoint-ven/-getting-started-install-endpoint-ven-select_fda_ven.png)

1. Enable `ElasticEndpoint` and `co.elastic` to properly enable Full Disk Access.

    ![](../images/deploy-elastic-endpoint-ven/-getting-started-install-endpoint-ven-allow_fda_ven.png)


If the endpoint is running ((elastic-endpoint)) version 7.17.0 or earlier:

1. Click the **+** button to view **Finder**.
1. The system may prompt you to enter your username and password if you haven't already.

    ![](../images/deploy-elastic-endpoint-ven/-getting-started-install-endpoint-ven-enter_login_details_to_confirm_ven.png)

1. Navigate to `/Library/Elastic/Endpoint`, then select the `elastic-endpoint` file.
1. Click **Open**.
1. In the **Privacy** tab, confirm that `ElasticEndpoint` and `co.elastic.systemextension` are selected to properly enable Full Disk Access.

![Select Full Disk Access](../images/deploy-elastic-endpoint-ven/-getting-started-install-endpoint-ven-verify_fed_granted_ven.png) \ No newline at end of file diff --git a/docs/serverless/edr-install-config/deploy-endpoint-reqs.mdx b/docs/serverless/edr-install-config/deploy-endpoint-reqs.mdx deleted file mode 100644 index 425c7cf0e3..0000000000 --- a/docs/serverless/edr-install-config/deploy-endpoint-reqs.mdx +++ /dev/null @@ -1,26 +0,0 @@ ---- -slug: /serverless/security/elastic-endpoint-deploy-reqs -title: ((elastic-defend)) requirements -description: System requirements for ((elastic-defend)). -tags: ["security","other","secure"] -status: in review ---- - -
- -To properly deploy ((elastic-defend)) without a Mobile Device Management (MDM) profile, you must manually enable additional permissions on the host before ((elastic-endpoint))—the installed component that performs ((elastic-defend))'s threat monitoring and prevention—is fully functional. For more information, refer to the instructions for your macOS version: - -* -* - -## Minimum system requirements - -| Requirement | Value | -|------------------------------------|----------| -| **CPU** | Under 2% | -| **Disk space** | 1 GB | -| **Resident set size (RSS) memory** | 500 MB | - - - diff --git a/docs/serverless/edr-install-config/deploy-with-mdm.mdx b/docs/serverless/edr-install-config/deploy-with-mdm.mdx deleted file mode 100644 index 1e1c32d04a..0000000000 --- a/docs/serverless/edr-install-config/deploy-with-mdm.mdx +++ /dev/null @@ -1,107 +0,0 @@ ---- -slug: /serverless/security/deploy-with-mdm -title: Deploy ((elastic-defend)) on macOS with mobile device management -description: Configure access for deploying ((elastic-defend)) on macOS with mobile device management. -tags: ["security","how-to","secure"] -status: in review ---- - - -
- -To silently install and deploy ((elastic-defend)) without the need for user interaction, you need to configure a mobile device management (MDM) profile for ((elastic-endpoint))—the installed component that performs ((elastic-defend))'s threat monitoring and prevention. This allows you to pre-approve the ((elastic-endpoint)) system extension and grant Full Disk Access to all the necessary components. - -This page explains how to deploy ((elastic-defend)) silently using Jamf. - -## Configure a Jamf MDM profile - -In Jamf, create a configuration profile for ((elastic-endpoint)). Follow these steps to configure the profile: - -1. Approve the system extension. -1. Approve network content filtering. -1. Enable notifications. -1. Enable Full Disk Access. - -### Approve the system extension - -1. Select the **System Extensions** option to configure the system extension policy for the ((elastic-endpoint)) configuration profile. -1. Make sure that **Allow users to approve system extensions** is selected. -1. In the **Allowed Team IDs and System Extensions** section, add the ((elastic-endpoint)) system extension: - 1. (Optional) Enter a **Display Name** for the ((elastic-endpoint)) system extension. - 1. From the **System Extension Types** dropdown, select **Allowed System Extensions**. - 1. Under **Team Identifier**, enter `2BT3HPN62Z`. - 1. Under **Allowed System Extensions**, enter `co.elastic.systemextension`. -1. Save the configuration. - -![](../images/deploy-with-mdm/system-extension-jamf.png) - -### Approve network content filtering - -1. Select the **Content Filter** option to configure the Network Extension policy for the ((elastic-endpoint)) configuration profile. -1. Under **Filter Name**, enter `ElasticEndpoint`. -1. Under **Identifier**, enter `co.elastic.endpoint`. -1. In the **Socket Filter** section, fill in these fields: - 1. **Socket Filter Bundle Identifier**: Enter `co.elastic.systemextension` - 1. **Socket Filter Designated Requirement**: Enter the following: - ``` - identifier "co.elastic.systemextension" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = "2BT3HPN62Z" - ``` -1. In the **Network Filter** section, fill in these fields: - 1. **Network Filter Bundle Identifier**: Enter `co.elastic.systemextension` - 1. **Network Filter Designated Requirement**: Enter the following: - ``` - identifier "co.elastic.systemextension" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = "2BT3HPN62Z" - ``` -1. Save the configuration. - -![](../images/deploy-with-mdm/content-filtering-jamf.png) - -### Enable notifications - -1. Select the **Notifications** option to configure the Notification Center policy for the ((elastic-endpoint)) configuration profile. -1. Under **App Name**, enter `Elastic Security.app`. -1. Under **Bundle ID**, enter `co.elastic.alert`. -1. In the **Settings** section, include these options with the following settings: - 1. **Critical Alerts**: **Enable**. - 1. **Notifications**: **Enable**. - 1. **Banner alert type**: **Persistent**. - 1. **Notifications on Lock Screen**: **Display**. - 1. **Notifications in Notification Center**: **Display**. - 1. **Badge app icon**: **Display**. - 1. **Play sound for notifications**: **Enable**. -1. Save the configuration. 
- -![](../images/deploy-with-mdm/notifications-jamf.png) - -### Enable Full Disk Access - -1. Select the **Privacy Preferences Policy Control** option to configure the Full Disk Access policy for the ((elastic-endpoint)) configuration profile. -1. Add a new entry with the following details: - 1. Under **Identifier**, enter `co.elastic.systemextension`. - 1. From the **Identifier Type** dropdown, select **Bundle ID**. - 1. Under **Code Requirement**, enter the following: - ``` - identifier "co.elastic.systemextension" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = "2BT3HPN62Z" - ``` - 1. Make sure that **Validate the Static Code Requirement** is selected. -1. Add a second entry with the following details: - 1. Under **Identifier**, enter `co.elastic.endpoint`. - 1. From the **Identifier Type** dropdown, select **Bundle ID**. - 1. Under **Code Requirement**, enter the following: - ``` - identifier "co.elastic.endpoint" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = "2BT3HPN62Z" - ``` - 1. Make sure that **Validate the Static Code Requirement** is selected. -1. Add a third entry with the following details: - 1. Under **Identifier**, enter `co.elastic.elastic-agent`. - 1. From the **Identifier Type** dropdown, select **Bundle ID**. - 1. Under **Code Requirement**, enter the following: - ``` - identifier "co.elastic.elastic-agent" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = "2BT3HPN62Z" - ``` - 1. Make sure that **Validate the Static Code Requirement** is selected. -1. Save the configuration. - -![](../images/deploy-with-mdm/fda-jamf.png) - -After you complete these steps, generate the mobile configuration profile and install it onto the macOS machines. Once the profile is installed, ((elastic-defend)) can be deployed without the need for user interaction. diff --git a/docs/serverless/edr-install-config/endpoint-data-volume.mdx b/docs/serverless/edr-install-config/endpoint-data-volume.mdx deleted file mode 100644 index 305cd9611e..0000000000 --- a/docs/serverless/edr-install-config/endpoint-data-volume.mdx +++ /dev/null @@ -1,10 +0,0 @@ ---- -slug: /serverless/security/endpoint-data-volume -title: Configure ((elastic-endpoint))'s data volume -description: -tags: [ 'serverless', 'security', 'how-to' ] ---- - - - **This is a placeholder for future documentation.** - diff --git a/docs/serverless/edr-install-config/endpoint-diagnostic-data.mdx b/docs/serverless/edr-install-config/endpoint-diagnostic-data.mdx deleted file mode 100644 index 9b4a17b012..0000000000 --- a/docs/serverless/edr-install-config/endpoint-diagnostic-data.mdx +++ /dev/null @@ -1,26 +0,0 @@ ---- -slug: /serverless/security/endpoint-diagnostic-data -title: Turn off diagnostic data for ((elastic-defend)) -description: Stop producing diagnostic data for Elastic defend by configuring your integration policy. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -By default, ((elastic-defend)) streams diagnostic data to your cluster, which Elastic uses to tune protection features. You can stop producing this diagnostic data by configuring the advanced settings in the ((elastic-defend)) integration policy. - - -((elastic-sec)) also collects usage telemetry, which includes ((elastic-defend)) diagnostic data. You can modify telemetry preferences in [Advanced Settings](((kibana-ref))/telemetry-settings-kbn.html). - - -1. Go to **Assets** → **Endpoints** to view the Endpoints list. -1. Locate the endpoint for which you want to disable diagnostic data, then click the integration policy in the **Policy** column. -1. Scroll down to the bottom of the policy and click **Show advanced settings**. -1. Enter `false` for these settings: - * `windows.advanced.diagnostic.enabled` - * `linux.advanced.diagnostic.enabled` - * `mac.advanced.diagnostic.enabled` -1. Click **Save**. - diff --git a/docs/serverless/edr-install-config/endpoint-protection-intro.mdx b/docs/serverless/edr-install-config/endpoint-protection-intro.mdx deleted file mode 100644 index 83503a4993..0000000000 --- a/docs/serverless/edr-install-config/endpoint-protection-intro.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -slug: /serverless/security/endpoint-protection-intro -title: Configure endpoint protection with ((elastic-defend)) -description: Start protecting your endpoints with ((elastic-defend)). -tags: [ 'serverless', 'security', 'overview' ] ---- - - -
- -This section contains information on installing and configuring ((elastic-defend)) for endpoint protection. diff --git a/docs/serverless/edr-install-config/install-elastic-defend.mdx b/docs/serverless/edr-install-config/install-elastic-defend.mdx deleted file mode 100644 index 65a26e3f52..0000000000 --- a/docs/serverless/edr-install-config/install-elastic-defend.mdx +++ /dev/null @@ -1,146 +0,0 @@ ---- -slug: /serverless/security/install-edr -title: Install the ((elastic-defend)) integration -description: Start protecting your endpoints with ((elastic-defend)). -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -Like other Elastic integrations, ((elastic-defend)) is integrated into the ((agent)) using [((fleet))](((fleet-guide))/fleet-overview.html). Upon configuration, the integration allows the ((agent)) to monitor events on your host and send data to the ((security-app)). - - - -* ((fleet)) is required for ((elastic-defend)). - -* To configure the ((elastic-defend)) integration on the ((agent)), you must have permission to use ((fleet)). - -* You must have the appropriate user role to configure an integration policy and access the **Endpoints** page. -{/* Placeholder statement until we know which specific roles are required. Classic statement below for reference. */} -{/* * You must have the **((elastic-defend)) Policy Management: All** privilege to configure an integration policy, and the **Endpoint List** privilege to access the **Endpoints** page. */} - - - -
- -## Before you begin - -If you're using macOS, some versions may require you to grant Full Disk Access to different kernels, system extensions, or files. Refer to requirements for ((elastic-endpoint)) for more information. - - -((elastic-defend)) does not support deployment within an ((agent)) DaemonSet in Kubernetes. - - -
- -## Add the ((elastic-defend)) integration - -1. Go to the **Integrations** page, which you can access in several ways: - - * The **Add integrations** link at the top of most pages - * **Assets** → **Browse Integrations** - * **Project settings** → **Integrations** - - - ![Search result for "((elastic-defend))" on the Integrations page.](../images/install-endpoint/-getting-started-install-endpoint-endpoint-cloud-sec-integrations-page.png) - -1. Search for and select **((elastic-defend))**, then select **Add ((elastic-defend))**. The integration configuration page appears. - - - If this is the first integration you've installed and the **Ready to add your first integration?** page appears instead, select **Add integration only (skip agent installation)** to proceed. You can install ((agent)) after setting up the ((elastic-defend)) integration. - - - - -1. Configure the ((elastic-defend)) integration with an **Integration name** and optional **Description**. -1. Select the type of environment you want to protect, either **Traditional Endpoints** or **Cloud Workloads**. -1. Select a configuration preset. Each preset comes with different default settings for ((agent)) — you can further customize these later by configuring the ((elastic-defend)) integration policy. - - - - - **Traditional Endpoint presets** - - - - All traditional endpoint presets _except_ **Data Collection** have these preventions enabled by default: malware, ransomware, memory threat, malicious behavior, and credential theft. Each preset collects the following events: - - * **Data Collection:** All events; no preventions - * **Next-Generation Antivirus (NGAV):** Process events; all preventions - * **Essential EDR (Endpoint Detection & Response):** Process, Network, File events; all preventions - * **Complete EDR (Endpoint Detection & Response):** All events; all preventions - - - - - - **Cloud Workloads presets** - - - - Both cloud workload presets are intended for monitoring cloud-based Linux hosts. Therefore, session data collection, which enriches process events, is enabled by default. They both have all preventions disabled by default, and collect process, network, and file events. - - * **All events:** Includes data from automated sessions. - * **Interactive only:** Filters out data from non-interactive sessions by creating an event filter. - - - - - -1. Enter a name for the agent policy in **New agent policy name**. If other agent policies already exist, you can click the **Existing hosts** tab and select an existing policy instead. For more details on ((agent)) configuration settings, refer to [((agent)) policies](((fleet-guide))/agent-policy.html). -1. When you're ready, click **Save and continue**. -1. To complete the integration, select **Add ((agent)) to your hosts** and continue to the next section to install the ((agent)) on your hosts. - -
- -## Configure and enroll the ((agent)) - -To enable the ((elastic-defend)) integration, you must enroll agents in the relevant policy using ((fleet)). - - - -Before you add an ((agent)), a ((fleet-server)) must be running. Refer to [Add a ((fleet-server))](((fleet-guide))/add-a-fleet-server.html). - -((elastic-defend)) cannot be integrated with an ((agent)) in standalone mode. - - - -
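
The enrollment commands that ((fleet)) generates in the **Add agent** flyout typically look like the following hedged Linux example. Always copy the exact commands from ((fleet)); the version, download URL, ((fleet-server)) URL, and enrollment token below are placeholders:

```shell
# Hypothetical example of Fleet-provided enrollment commands for a Linux host.
# Substitute the values shown in the Add agent flyout for the placeholders.
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-<version>-linux-x86_64.tar.gz
tar xzvf elastic-agent-<version>-linux-x86_64.tar.gz
cd elastic-agent-<version>-linux-x86_64
sudo ./elastic-agent install --url=https://<fleet-server-host>:8220 --enrollment-token=<enrollment-token>
```
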
- -### Add the ((agent)) - -1. If you're in the process of installing an ((agent)) integration (such as ((elastic-defend))), the **Add agent** UI opens automatically. Otherwise, go to **Assets** → **((fleet))** → **Agents** → **Add agent**. - - ![Add agent flyout on the Fleet page.](../images/install-endpoint/-getting-started-install-endpoint-endpoint-cloud-sec-add-agent.png) - -1. Select an agent policy for the ((agent)). You can select an existing policy, or select **Create new agent policy** to create a new one. For more details on ((agent)) configuration settings, refer to [((agent)) policies](((fleet-guide))/agent-policy.html). - - The selected agent policy should include the integration you want to install on the hosts covered by the agent policy (in this example, ((elastic-defend))). - - - -1. Ensure that the **Enroll in ((fleet))** option is selected. ((elastic-defend)) cannot be integrated with ((agent)) in standalone mode. - -1. Select the appropriate platform or operating system for the host, then copy the provided commands. - -1. On the host, open a command-line interface and navigate to the directory where you want to install ((agent)). Paste and run the commands from ((fleet)) to download, extract, enroll, and start ((agent)). - -1. (Optional) Return to the **Add agent** flyout in ((fleet)), and observe the **Confirm agent enrollment** and **Confirm incoming data** steps automatically checking the host connection. It may take a few minutes for data to arrive in ((es)). - -1. After you have enrolled the ((agent)) on your host, you can click **View enrolled agents** to access the list of agents enrolled in ((fleet)). Otherwise, select **Close**. - - The host will now appear on the **Endpoints** page in the ((security-app)). It may take another minute or two for endpoint data to appear in ((elastic-sec)). - -1. For macOS, continue with these instructions to grant ((elastic-endpoint)) the required permissions. - diff --git a/docs/serverless/edr-install-config/linux-file-monitoring.mdx b/docs/serverless/edr-install-config/linux-file-monitoring.mdx deleted file mode 100644 index 749d11aa36..0000000000 --- a/docs/serverless/edr-install-config/linux-file-monitoring.mdx +++ /dev/null @@ -1,100 +0,0 @@ ---- -slug: /serverless/security/linux-file-monitoring -title: Configure Linux file system monitoring -description: Configure monitoring for Linux file systems. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -

By default, ((elastic-defend)) monitors specific Linux file system types that Elastic has tested for compatibility. If your network includes nonstandard, proprietary, or otherwise unrecognized Linux file systems, you can configure the integration policy to extend monitoring and protections to those additional file systems. You can also have ((elastic-defend)) ignore unrecognized file system types if they don't require monitoring or if they cause unexpected problems.


Ignoring file systems can create gaps in your security coverage. Use additional security layers for any file systems ignored by ((elastic-defend)).


To monitor or ignore additional file systems, configure the following advanced settings related to **fanotify**, a Linux feature that monitors file system events. Go to **Assets** → **Policies**, click a policy's name, then scroll down and select **Show advanced settings**.


Even when configured to monitor all file systems (`ignore_unknown_filesystems` is `false`), ((elastic-defend)) will still ignore specific file systems that Elastic has internally identified as incompatible. The following settings apply to any _other_ file systems.


`linux.advanced.fanotify.ignore_unknown_filesystems`
  : Determines whether to ignore unrecognized file systems. Enter one of the following:

    * `true`: (Default) Monitor only Elastic-tested file systems, and ignore all others. You can still monitor or ignore specific file systems with `monitored_filesystems` and `ignored_filesystems`, respectively.

    * `false`: Monitor all file systems. You can still ignore specific file systems with `ignored_filesystems`.


    If you don't need to monitor additional file systems, it's recommended to leave `ignore_unknown_filesystems` set to `true`.

- -`linux.advanced.fanotify.monitored_filesystems` - : Specifies additional file systems to monitor. Enter a comma-separated list of file system names as they appear in `/proc/filesystems` (for example: `jfs,ufs,ramfs`). - - - It's recommended to avoid monitoring network-backed file systems. - - - This setting isn't recognized if `ignore_unknown_filesystems` is `false`, since that would mean you're already monitoring _all_ file systems. - - Entries in this setting are overridden by entries in `ignored_filesystems`. - - -
- -`linux.advanced.fanotify.ignored_filesystems` - : Specifies additional file systems to ignore. Enter a comma-separated list of file system names as they appear in `/proc/filesystems` (for example: `ext4,tmpfs`). - - Entries in this setting override entries in `monitored_filesystems`. - -
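
To see which file system types the kernel on a particular host recognizes (using the same names expected by the settings above), you can read `/proc/filesystems` directly. A minimal example; the output varies by host:

```shell
# List file system types known to the kernel. The names in the second column
# (for example, ext4 or tmpfs) are the values used in the fanotify settings.
cat /proc/filesystems
```
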

## Find file system names

This section provides a few ways to determine the file system names needed for `linux.advanced.fanotify.monitored_filesystems` and `linux.advanced.fanotify.ignored_filesystems`.

In a typical setup, when you install ((agent)), ((filebeat)) is installed alongside ((elastic-endpoint)) and will automatically ship ((elastic-endpoint)) logs to ((es)). When an event occurs, ((elastic-endpoint)) generates a log message about the file that was scanned.

To find the file system name:

1. From the Hosts page (**Explore** → **Hosts**), search for `message: "Current sync path"` to reveal the file path.

1. If you have access to the endpoint, run `findmnt -o FSTYPE -T ` to return the file system. For example:

    ```shell
    > findmnt -o FSTYPE -T /etc/passwd
    FSTYPE
    ext4
    ```

    This returns the file system name as `ext4`.

Alternatively, you can also find the file system name by correlating data from two other log messages:

1. Search the logs for `message: "Current fdinfo"` to reveal the `mnt_id` value of the file path. In this example, the `mnt_id` value is `29`:

    ```shell
    pos: 12288
    flags: 02500002
    mnt_id: 29
    ino: 2367737
    ```

1. Search the logs for `message: "Current mountinfo"` to reveal the file system that corresponds to the `mnt_id` value you found in the previous step:

    ```shell
    - 29 1 8:2 / / rw,relatime shared:1 - ext4 /dev/sda2 rw,errors=remount-ro -
    ```

    The first number, `29`, is the `mnt_id`, and the first field after the hyphen (`-`) is the file system name, `ext4`.

diff --git a/docs/serverless/edr-install-config/self-healing-rollback.mdx b/docs/serverless/edr-install-config/self-healing-rollback.mdx deleted file mode 100644 index baaaf54730..0000000000 --- a/docs/serverless/edr-install-config/self-healing-rollback.mdx +++ /dev/null @@ -1,30 +0,0 @@ ---- -slug: /serverless/security/self-healing-rollback -title: Configure self-healing rollback for Windows endpoints -description: Revert file changes on the Windows endpoints. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - -

((elastic-defend))'s self-healing feature rolls back file changes on Windows endpoints when a prevention alert is generated by enabled protection features. File changes that occurred on the host within five minutes before the prevention alert will revert to their previous state (which may be up to two hours before the alert).

This can help contain the impact of malicious activity, as ((elastic-defend)) not only stops the activity but also erases any attack artifacts deployed prior to detection.

Self-healing rollback requires Endpoint Protection Complete and is only supported for Windows endpoints.



This feature can cause permanent data loss since it overwrites recent changes and deletes recently added files on the host. Self-healing rollback targets the changes related to a detected threat, but may also include incidental actions that aren't directly related to the threat.

Also, rollback is triggered by _every_ ((elastic-defend)) prevention alert, so you should tune your system to eliminate false positives before enabling this feature.



1. In the ((security-app)), go to **Assets** → **Policies**, then select the integration policy you want to configure.
1. Scroll down to the bottom of the policy and click **Show advanced settings**.
1. Enter `true` for the setting `windows.advanced.alerts.rollback.self_healing.enabled`.
1. Click **Save**.

diff --git a/docs/serverless/edr-install-config/uninstall-agent.mdx b/docs/serverless/edr-install-config/uninstall-agent.mdx deleted file mode 100644 index 800c68de8b..0000000000 --- a/docs/serverless/edr-install-config/uninstall-agent.mdx +++ /dev/null @@ -1,62 +0,0 @@ ---- -slug: /serverless/security/uninstall-agent -title: Uninstall ((agent)) -description: Remove ((agent)) from a host. -tags: [ 'serverless', 'security', 'how-to' ] ---- - -
- -To uninstall ((agent)) from a host, run the `uninstall` command from the directory where it's running. Refer to the [((fleet)) and ((agent)) documentation](((fleet-guide))/uninstall-elastic-agent.html) for more information. - -If Agent tamper protection is enabled on the Agent policy for the host, you'll need to include the uninstall token in the command, using the `--uninstall-token` flag. You can find the uninstall token on the Agent policy or at **((fleet))** -> **Uninstall tokens**. - -For example: - - - - ```shell - sudo elastic-agent uninstall --uninstall-token 12345678901234567890123456789012 - ``` - - - ```shell - C:\"Program Files"\Elastic\Agent\elastic-agent.exe uninstall --uninstall-token 12345678901234567890123456789012 - ``` - - - -
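
If you're not sure which directory ((agent)) is running from, it's typically installed under `/opt/Elastic/Agent` on Linux and `/Library/Elastic/Agent` on macOS. This is an assumption based on the default install locations; adjust the path if you installed the agent elsewhere. For example, on Linux:

```shell
# Hedged example: run the uninstall from the default Linux install directory,
# passing the uninstall token copied from Fleet if tamper protection is enabled.
cd /opt/Elastic/Agent
sudo ./elastic-agent uninstall --uninstall-token <uninstall-token>
```
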
- -## Uninstall ((elastic-endpoint)) - -Use these commands to uninstall ((elastic-endpoint)) from a host **ONLY** if [uninstalling an ((agent))](((fleet-guide))/uninstall-elastic-agent.html) is unsuccessful. - - - - ```shell - cd /tmp - cp /Library/Elastic/Endpoint/elastic-endpoint elastic-endpoint - sudo ./elastic-endpoint uninstall - rm elastic-endpoint - ``` - - - ```shell - cd /tmp - cp /opt/Elastic/Endpoint/elastic-endpoint elastic-endpoint - sudo ./elastic-endpoint uninstall - rm elastic-endpoint - ``` - - - ```shell - cd %TEMP% - copy "c:\Program Files\Elastic\Endpoint\elastic-endpoint.exe" elastic-endpoint.exe - .\elastic-endpoint.exe uninstall - del .\elastic-endpoint.exe - ``` - - - diff --git a/docs/serverless/edr-manage/allowlist-endpoint-3rd-party-av.mdx b/docs/serverless/edr-manage/allowlist-endpoint-3rd-party-av.mdx deleted file mode 100644 index 5d776b31c7..0000000000 --- a/docs/serverless/edr-manage/allowlist-endpoint-3rd-party-av.mdx +++ /dev/null @@ -1,69 +0,0 @@ ---- -slug: /serverless/security/allowlist-endpoint -title: Allowlist ((elastic-endpoint)) in third-party antivirus apps -description: Add ((elastic-endpoint)) as a trusted application in third-party antivirus (AV) software. -tags: [ 'serverless', 'security', 'overview' ] -status: in review ---- - - - - -If you use other antivirus (AV) software along with ((elastic-defend)), you may need to add the other system as a trusted application in the ((security-app)). Refer to for more information. - - -Third-party antivirus (AV) applications may identify the expected behavior of ((elastic-endpoint))—the installed component that performs ((elastic-defend))'s threat monitoring and prevention—as a potential threat. Add ((elastic-endpoint))'s digital signatures and file paths to your AV software's allowlist to ensure ((elastic-endpoint)) continues to function as intended. We recommend you allowlist both the file paths and digital signatures, if applicable. - - -Your AV software may refer to allowlisted processes as process exclusions, ignored processes, or trusted processes. It is important to note that file, folder, and path-based exclusions/exceptions are distinct from trusted applications and will not achieve the same result. This page explains how to ignore actions taken by processes, not how to ignore the files that spawned those processes. - - -## Allowlist ((elastic-endpoint)) on Windows - -File paths: - -* ELAM driver: `c:\Windows\system32\drivers\elastic-endpoint-driver.sys` -* Driver: `c:\Windows\system32\drivers\ElasticElam.sys` -* Executable: `c:\Program Files\Elastic\Endpoint\elastic-endpoint.exe` - - - The executable runs as `elastic-endpoint.exe`. - - -Digital signatures: - -* `Elasticsearch, Inc.` -* `Elasticsearch B.V.` - -For additional information about allowlisting on Windows, refer to [Trusting Elastic Defend in other software](https://github.com/elastic/endpoint/blob/main/PerformanceIssues-Windows.md#trusting-elastic-defend-in-other-software). - -## Allowlist ((elastic-endpoint)) on macOS - -File paths: - -* System extension (recursive directory structure): `/Applications/ElasticEndpoint.app/` - - - The system extension runs as `co.elastic.systemextension`. - - -* Executable: `/Library/Elastic/Endpoint/elastic-endpoint.app/Contents/MacOS/elastic-endpoint` - - - The executable runs as `elastic-endpoint`. 
- - -Digital signatures: - -* Authority/Developer ID Application: `Elasticsearch, Inc (2BT3HPN62Z)` -* Team ID: `2BT3HPN62Z` - -## Allowlist ((elastic-endpoint)) on Linux - -File path: - -* Executable: `/opt/Elastic/Endpoint/elastic-endpoint` - - - The executable runs as `elastic-endpoint`. - \ No newline at end of file diff --git a/docs/serverless/edr-manage/blocklist.mdx b/docs/serverless/edr-manage/blocklist.mdx deleted file mode 100644 index 95c8c4e845..0000000000 --- a/docs/serverless/edr-manage/blocklist.mdx +++ /dev/null @@ -1,98 +0,0 @@ ---- -slug: /serverless/security/blocklist -title: Blocklist -# description: Description to be written -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -The blocklist (**Assets** → **Blocklist**) allows you to prevent specified applications from running on hosts, extending the list of processes that ((elastic-defend)) considers malicious. This helps ensure that known malicious processes aren't accidentally executed by end users. - -The blocklist is not intended to broadly block benign applications for non-security reasons; only use it to block potentially harmful applications. To compare the blocklist with other endpoint artifacts, refer to . - - - -* In addition to configuring specific entries on the **Blocklist** page, you must also ensure that the blocklist is enabled on the ((elastic-defend)) integration policy in the Malware protection settings. This setting is enabled by default. - -* You must have the appropriate user role to use this feature. -{/* Placeholder statement until we know which specific roles are required. Classic statement below for reference. */} -{/* * You must have the **Blocklist** privilege to access this feature. */} - - - -By default, a blocklist entry is recognized globally across all hosts running ((elastic-defend)). You can also assign a blocklist entry to specific ((elastic-defend)) integration policies, which blocks the process only on hosts assigned to that policy. - -1. Go to **Assets** → **Blocklist**. - -1. Click **Add blocklist entry**. The **Add blocklist** flyout appears. - -1. Fill in these fields in the **Details** section: - 1. `Name`: Enter a name to identify the application in the blocklist. - 1. `Description`: Enter a description to provide more information on the blocklist entry (optional). - -1. In the **Conditions** section, enter the following information about the application you want to block: - 1. `Select operating system`: Select the appropriate operating system from the drop-down. - 1. `Field`: Select a field to identify the application being blocked: - * `Hash`: The MD5, SHA-1, or SHA-256 hash value of the application's executable. - * `Path`: The full file path of the application's executable. - * `Signature`: (Windows only) The name of the application's digital signer. - - - To find the signer's name for an application, go to **Discover** and query the process name of the application's executable (for example, `process.name : "mctray.exe"` for a McAfee security binary). Then, search the results for the `process.code_signature.subject_name` field, which contains the signer's name (for example, `McAfee, Inc.`). - - - 1. `Operator`: The operator is `is one of` and cannot be modified. - - 1. `Value`: Enter the hash value, file path, or signer name. To enter multiple values (such as a list of known malicious hash values), you can enter each value individually or paste a comma-delimited list, then press **Return**. - - - Hash values must be valid to add them to the blocklist. - - -1. Select an option in the **Assignment** section to assign the blocklist entry to a specific integration policy: - - * `Global`: Assign the blocklist entry to all ((elastic-defend)) integration policies. - * `Per Policy`: Assign the blocklist entry to one or more specific ((elastic-defend)) integration policies. Select each policy where you want the blocklist entry to apply. - - - You can also select the `Per Policy` option without immediately assigning a policy to the blocklist entry. For example, you could do this to create and review your blocklist configurations before putting them into action with a policy. - - -1. Click **Add blocklist**. The new entry is added to the **Blocklist** page. - -1. 
When you're done adding entries to the blocklist, ensure that the blocklist is enabled for the ((elastic-defend)) integration policies that you just assigned: - 1. Go to **Assets** → **Policies**, then click on an integration policy. - 1. On the **Policy settings** tab, ensure that the **Malware protections** and **Blocklist** toggles are switched on. Both settings are enabled by default. - -
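
If you're blocking an application by hash (the `Hash` field described in the steps above), you can compute the executable's SHA-256 value on the host and paste it into the **Value** field. A minimal sketch with a hypothetical path; on Windows, `certutil -hashfile <file> SHA256` produces the same value:

```shell
# Compute a SHA-256 hash to use as a blocklist value (the path is hypothetical).
sha256sum /path/to/unwanted-app          # Linux
shasum -a 256 /path/to/unwanted-app      # macOS
```
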
- -## View and manage the blocklist - -The **Blocklist** page (**Assets** → **Blocklist**) displays all the blocklist entries that have been added to the ((security-app)). To refine the list, use the search bar to search by name, description, or field value. - -![](../images/blocklist/-management-admin-blocklist.png) - -

### Edit a blocklist entry
You can individually modify each blocklist entry. You can also change the policies that a blocklist entry is assigned to.

To edit a blocklist entry:

1. Click the actions menu for the blocklist entry you want to edit, then select **Edit blocklist**.
1. Modify details as needed.
1. Click **Save**.


### Delete a blocklist entry
You can delete a blocklist entry, which removes it entirely from all ((elastic-defend)) policies. This allows end users to access the application that was previously blocked.

To delete a blocklist entry:

1. Click the actions menu for the blocklist entry you want to delete, then select **Delete blocklist**.
1. On the dialog that opens, verify that you are removing the correct blocklist entry, then click **Delete**. A confirmation message displays.

diff --git a/docs/serverless/edr-manage/endpoint-command-ref.mdx b/docs/serverless/edr-manage/endpoint-command-ref.mdx deleted file mode 100644 index af115a15ba..0000000000 --- a/docs/serverless/edr-manage/endpoint-command-ref.mdx +++ /dev/null @@ -1,296 +0,0 @@ ---- -slug: /serverless/security/endpoint-command-ref -title: ((elastic-endpoint)) command reference -description: Manage and troubleshoot ((elastic-endpoint)) using CLI commands. -tags: ["security","reference","manage"] -status: in review ---- - -
- -This page lists the commands for management and troubleshooting of ((elastic-endpoint)), the installed component that performs ((elastic-defend))'s threat monitoring and prevention. - - - -* ((elastic-endpoint)) is not added to the `PATH` system variable, so you must prepend the commands with the full OS-dependent path: - * On Windows: `"C:\Program Files\Elastic\Endpoint\elastic-endpoint.exe"` - * On macOS: `/Library/Elastic/Endpoint/elastic-endpoint` - * On Linux: `/opt/Elastic/Endpoint/elastic-endpoint` - -* You must run the commands with elevated privileges—using `sudo` to run as the root user on Linux and macOS, or running as Administrator on Windows. - - - -The following ((elastic-endpoint)) commands are available: - -* diagnostics -* help -* inspect -* install -* memorydump -* run -* send -* status -* test -* top -* uninstall -* version - -Each of the commands accepts the following logging options: - -* `--log [stdout,stderr,debugview,file]` -* `--log-level [error,info,debug]` - -## elastic-endpoint diagnostics - -Gather diagnostics information from ((elastic-endpoint)). This command produces an archive that contains: - -- `version.txt`: Version information -- `elastic-endpoint.yaml`: Current policy -- `metrics.json`: Metrics document -- `policy_response.json`: Last policy response -- `system_info.txt`: System information -- `analysis.txt`: Diagnostic analysis report -- `logs` directory: Copy of ((elastic-endpoint)) log files - -### Example - -``` -elastic-endpoint diagnostics -``` - -## elastic-endpoint help - -Show help for the available commands. - -### Example - -``` -elastic-endpoint help -``` - -## elastic-endpoint inspect - -Show the current ((elastic-endpoint)) configuration. - -### Example - -``` -elastic-endpoint inspect -``` - -## elastic-endpoint install - -Install ((elastic-endpoint)) as a system service. - - -We do not recommend installing ((elastic-endpoint)) using this command. ((elastic-endpoint)) is managed by ((agent)) and cannot function as a standalone service. Therefore, there is no separate installation package for ((elastic-endpoint)), and it should not be installed independently. - -### Options - -`--resources ` - : Specify a resources `.zip` file to be used during the installation. This option is required. - -`--upgrade` - : Upgrade the existing installation. - -### Example - -``` -elastic-endpoint install --upgrade --resources endpoint-security-resources.zip -``` - -## elastic-endpoint memorydump - -Save a memory dump of the ((elastic-endpoint)) service. - -### Options - -`--compress` - : Compress the saved memory dump. - -`--timeout ` - : Specify the memory collection timeout, in seconds; the default is 60 seconds. - -### Example - -``` -elastic-endpoint memorydump --timeout 120 -``` - -## elastic-endpoint run - -Run `elastic-endpoint` as a foreground process if no other instance is already running. - -### Example - -``` -elastic-endpoint run -``` - -## elastic-endpoint send - -Send the requested document to the ((stack)). - -### Subcommands - -`metadata` - : Send an off-schedule metrics document to the ((stack)). - -### Example - -``` -elastic-endpoint send metadata -``` - -## elastic-endpoint status - -Retrieve the current status of the running ((elastic-endpoint)) service. The command also returns the last known status of ((agent)). - -### Options - -`--output` - : Control the level of detail and formatting of the information. Valid values are: - - * `human`: Returns limited information when ((elastic-endpoint))'s status is `Healthy`. 
If any policy actions weren't successfully applied, the relevant details are displayed. - * `full`: Always returns the full status information. - * `json`: Always returns the full status information. - -### Example - -``` -elastic-endpoint status --output json -``` - -## elastic-endpoint test - -Perform the requested test. - -### Subcommands - -`output` - : Test whether ((elastic-endpoint)) can connect to remote resources. - -### Example - -``` -elastic-endpoint test output -``` - -### Example output - -``` -Testing output connections - -Using proxy: - -Elasticsearch server: https://example.elastic.co:443 - Status: Success - -Global artifact server: https://artifacts.security.elastic.co - Status: Success - -Fleet server: https://fleet.example.elastic.co:443 - Status: Success -``` - -## elastic-endpoint top - -Show a breakdown of the executables that triggered ((elastic-endpoint)) CPU usage within the last interval. This displays which ((elastic-endpoint)) features are resource-intensive for a particular executable. - - -The meaning and output of this command are similar, but not identical, to the POSIX `top` command. The `elastic-endpoint top` command aggregates multiple processes by executable. The utilization values aren't measured by the OS scheduler but by a wall clock in user mode. The output helps identify outliers causing excessive CPU utilization, allowing you to fine-tune the ((elastic-defend)) policy and exception lists in your deployment. - - -### Options - -`--interval ` - : Specify the data collection interval, in seconds; the default is 5 seconds. - -`--limit ` - : Specify the number of updates to collect; by default, data is collected until interrupted by **Ctrl+C**. - -`--normalized` - : Normalize CPU usage values to a total of 100% across all CPUs on multi-CPU systems. - -### Example - -``` -elastic-endpoint top --interval 10 --limit 5 -``` - -### Example output - -``` -| PROCESS | OVERALL | API | BHVR | DIAG BHVR | DNS | FILE | LIB | MEM SCAN | MLWR | NET | PROC | RANSOM | REG | -============================================================================================================================================================= -| MSBuild.exe | 3146.0 | 0.0 | 0.8 | 0.7 | 0.0 | 2330.9 | 0.0 | 226.2 | 586.9 | 0.0 | 0.0 | 0.4 | 0.0 | -| Microsoft.Management.Services.IntuneWindowsAgen... 
| 30.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2 | 29.8 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| svchost.exe | 27.3 | 0.0 | 0.1 | 0.1 | 0.0 | 0.4 | 0.2 | 0.0 | 26.6 | 0.0 | 0.0 | 0.0 | 0.0 | -| LenovoVantage-(LenovoServiceBridgeAddin).exe | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| Lenovo.Modern.ImController.PluginHost.Device.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| msedgewebview2.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| msedge.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| powershell.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| WmiPrvSE.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| Lenovo.Modern.ImController.PluginHost.Device.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| Slack.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| uhssvc.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| explorer.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| taskhostw.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| Widgets.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| elastic-endpoint.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -| sppsvc.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | - -Endpoint service (16 CPU): 113.0% out of 1600% - -Collecting data. Press Ctrl-C to cancel -``` - -#### Column abbreviations - -* `API`: Event Tracing for Windows (ETW) API events -* `AUTH`: Authentication events -* `BHVR`: Malicious behavior protection -* `CRED`: Credential access events -* `DIAG BHVR`: Diagnostic malicious behavior protection -* `DNS`: DNS events -* `FILE`: File events -* `LIB`: Library load events -* `MEM SCAN`: Memory scanning -* `MLWR`: Malware protection -* `NET`: Network events -* `PROC`: Process events -* `PROC INJ`: Process injection -* `RANSOM`: Ransomware protection -* `REG`: Registry events - -## elastic-endpoint uninstall - -Uninstall ((elastic-endpoint)). - - -((elastic-endpoint)) is managed by ((agent)). To remove ((elastic-endpoint)) from the target machine permanently, remove the ((elastic-defend)) integration from the ((fleet)) policy. The elastic-agent uninstall command also uninstalls ((elastic-endpoint)); therefore, in practice, the `elastic-endpoint uninstall` command is used only to troubleshoot broken installations. - - -### Options - -`--uninstall-token ` - : Provide the uninstall token. The token is required if agent tamper protection is enabled. - -### Example - -``` -elastic-endpoint uninstall --uninstall-token 12345678901234567890123456789012 -``` - -## elastic-endpoint version - -Show the version of ((elastic-endpoint)). - -### Example - -``` -elastic-endpoint version -``` diff --git a/docs/serverless/edr-manage/endpoint-event-capture.mdx b/docs/serverless/edr-manage/endpoint-event-capture.mdx deleted file mode 100644 index 65234e0b7e..0000000000 --- a/docs/serverless/edr-manage/endpoint-event-capture.mdx +++ /dev/null @@ -1,49 +0,0 @@ ---- -slug: /serverless/security/endpoint-event-capture -title: Event capture and ((elastic-defend)) -description: Learn more about how ((elastic-defend)) collects event data. 
-tags: [ 'serverless', 'security', 'reference' ] ---- - - -
- -((elastic-defend)) collects select data on system activity in order to detect and prevent as many threats as possible, while balancing storage and performance overhead. To that end, ((elastic-defend)) isn't designed to capture all system events. Some event data that ((elastic-defend)) generates gets aggregated, truncated, or deduplicated as needed to optimize threat detection and prevention. - -You can supplement ((elastic-defend))'s protection capabilities with [Elastic integrations](((integrations-docs))) and tools that provide more visibility and historical data. Consult the following sections to expand data collection for specific types of system events. - -## Network port creation and deletion - -((elastic-defend)) tracks TCP connections. If a port is created but no traffic flows, no events are generated. - -For complete capture of network port creation and deletion, consider capturing Windows event ID 5158 using the [Custom Windows Event Logs](((integrations-docs))/winlog) integration. - -## Network in/out connections - -((elastic-defend)) tracks TCP connections, which don't include network in/out connections. - -For complete network capture, consider deploying ((packetbeat)) using the [Network Packet Capture](((integrations-docs))/network_traffic) integration. - -## User behavior - -((elastic-defend)) only captures user security events required by its behavioral protection. This doesn't include every user event such as logins and logouts, or every time a user account is created, deleted, or modified. - -For complete capture of all or specific Windows security events, consider the [Custom Windows Event Logs](((integrations-docs))/winlog) integration. - -## System service registration, deletion, and modification - -((elastic-defend)) only captures system service security events required by its behavioral protection engine. Service creation and modification can also be detected in registry activity, for which ((elastic-defend)) has internal rules such as [Registry or File Modification from Suspicious Memory](https://github.com/elastic/protections-artifacts/blob/6d54ae289b290b1d42a7717569483f6ce907200a/behavior/rules/persistence_registry_or_file_modification_from_suspicious_memory.toml). - -For complete capture of all or specific Windows security events, consider the [Custom Windows Event Logs](((integrations-docs))/winlog) integration. In particular, capture events such as [Windows event ID 4697](https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4697). - -## Kernel driver registration, deletion, and queries - -((elastic-defend)) scans every driver as it is loaded, but it doesn't generate an event each time. - -Drivers are registered in the system as system services. You can capture this with Windows event ID 4697 using the [Custom Windows Event Logs](((integrations-docs))/winlog) integration. - -Also consider capturing Windows event ID 6 using ((winlogbeat))'s [Sysmon module](((winlogbeat-ref))/winlogbeat-module-sysmon.html). - -## System configuration file creation, modification, and deletion - -((elastic-defend)) tracks creation, modification, and deletion of all files on the system. However, as mentioned above, the data might be aggregated, truncated, or deduplicated to provide only what's required for threat detection and prevention. 
diff --git a/docs/serverless/edr-manage/endpoint-self-protection.mdx b/docs/serverless/edr-manage/endpoint-self-protection.mdx deleted file mode 100644 index 5f064a88dc..0000000000 --- a/docs/serverless/edr-manage/endpoint-self-protection.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -slug: /serverless/security/endpoint-self-protection -title: ((elastic-endpoint)) self-protection features -description: Learn how ((elastic-endpoint)) guards itself from tampering and attacks. -tags: [ 'serverless', 'security', 'overview' ] ---- - - -
- -((elastic-endpoint)), the installed component that performs ((elastic-defend))'s threat monitoring and prevention, protects itself against users and attackers that may try to interfere with its functionality. Protection features are consistently enhanced to prevent attackers who may attempt to use newer, more sophisticated tactics to interfere with the ((elastic-endpoint)). Self-protection is enabled by default when ((elastic-endpoint)) installs on supported platforms, listed below. - -Self-protection is enabled on the following 64-bit Windows versions: - -* Windows 8.1 -* Windows 10 -* Windows 11 -* Windows Server 2012 R2 -* Windows Server 2016 -* Windows Server 2019 -* Windows Server 2022 - -Self-protection is also enabled on the following macOS versions: - -* macOS 10.15 (Catalina) -* macOS 11 (Big Sur) -* macOS 12 (Monterey) - - -Other Windows and macOS variants (and all Linux distributions) do not have self-protection. - - -Self-protection defines the following permissions: - -* Users — even Administrator/root — **cannot** delete ((elastic-endpoint)) files (located at `c:\Program Files\Elastic\Endpoint` on Windows, and `/Library/Elastic/Endpoint` on macOS). -* Users **cannot** terminate the ((elastic-endpoint)) program or service. -* Administrator/root users **can** read ((elastic-endpoint))'s files. On Windows, the easiest way to read ((elastic-endpoint)) files is to start an Administrator `cmd.exe` prompt. On macOS, an Administrator can use the `sudo` command. -* Administrator/root users **can** stop the ((elastic-agent))'s service. On Windows, run the `sc stop "Elastic Agent"` command. On macOS, run the `sudo launchctl stop elastic-agent` command. diff --git a/docs/serverless/edr-manage/endpoints-page.mdx b/docs/serverless/edr-manage/endpoints-page.mdx deleted file mode 100644 index 515a1b4cf1..0000000000 --- a/docs/serverless/edr-manage/endpoints-page.mdx +++ /dev/null @@ -1,150 +0,0 @@ ---- -slug: /serverless/security/endpoints-page -title: Endpoints -# description: Description to be written -tags: [ 'serverless', 'security', 'overview' ] -status: in review ---- - - -
- -The **Endpoints** page (**Assets** → **Endpoints**) allows administrators to view and manage endpoints that are running the ((elastic-defend)) integration. - - - -* ((fleet)) must be enabled for administrative actions to function correctly. - -* You must have the appropriate user role to use this feature. -{/* Placeholder statement until we know which specific roles are required. Classic statement below for reference. */} -{/* * You must have the **Endpoint List** privilege to access this feature. */} - - - -
- -## Endpoints list - -The **Endpoints** list displays all hosts running ((elastic-defend)) and their relevant integration details. Endpoints appear in chronological order, with newly added endpoints at the top. - -![Endpoints page](../images/endpoints-page/-management-admin-endpoints-pg.png) - -The Endpoints list provides the following data: - -* **Endpoint**: The system hostname. Click the link to display endpoint details in a flyout. - -* **Agent Status**: The current status of the ((agent)), which is one of the following: - - * `Healthy`: The agent is online and communicating with ((elastic-sec)). - - * `Unenrolling`: The agent is currently unenrolling and will soon be removed from Fleet. Afterward, the endpoint will also uninstall. - - * `Unhealthy`: The agent is online but requires attention from an administrator because it's reporting a problem with a process. An unhealthy status could mean an upgrade failed and was rolled back to its previous version, or an integration might be missing prerequisites or additional configuration. Refer to Endpoint management troubleshooting for more on resolving an unhealthy agent status. - - * `Updating`: The agent is online and is updating the agent policy or binary, or is enrolling or unenrolling. - - * `Offline`: The agent is still enrolled but may be on a machine that is shut down or currently does not have internet access. In this state, the agent is no longer communicating with ((elastic-sec)) at a regular interval. - - - ((agent)) statuses in ((fleet)) correspond to the agent statuses in the ((security-app)). - - -* **Policy:** The name of the associated integration policy when the agent was installed. Click the link to display the integration policy details page. - -* **Policy status:** Indicates whether the integration policy was successfully applied. Click the link to view policy status response details in a flyout. - -* **OS**: The host's operating system. - -* **IP address**: All IP addresses associated with the hostname. - -* **Version**: The ((agent)) version currently running. - -* **Last active**: A date and timestamp of the last time the ((agent)) was active. - -* **Actions**: Select the context menu (*...*) to do the following: - - * **Isolate host**: Isolate the host from your network, blocking communication until the host is released. - - * **Respond**: Open the response console to perform response actions directly on the host. - - * **View response actions history**: View a history of response actions performed on the host. - - * **View host details**: View host details on the **Hosts** page in the ((security-app)). - - * **View agent policy**: View the agent policy in ((fleet)). - - * **View agent details**: View ((agent)) details and activity logs in ((fleet)). - - * **Reassign agent policy**: Change the [agent policy](((fleet-guide))/agent-policy.html#apply-a-policy) assigned to the host in ((fleet)). - -
- -### Endpoint details - -Click any link in the **Endpoint** column to display host details in a flyout. You can also use the **Take Action** menu button to perform the same actions as those listed in the Actions context menu, such as isolating the host, viewing host details, and viewing or reassigning the agent policy. - - - -
- -### Response actions history - -The endpoint details flyout also includes the **Response actions history** tab, which provides a log of the response actions performed on the endpoint, such as isolating a host or terminating a process. You can use the tools at the top to filter the information displayed in this view. Refer to Response actions history for more details. - - - -
- -### Integration policy details - -To view the integration policy page, click the link in the **Policy** column. If you are viewing host details, you can also click the **Policy** link on the flyout. - -On this page, you can view and configure endpoint protection and event collection settings. In the upper-right corner are Key Performance Indicators (KPIs) that provide current endpoint status. If you need to update the policy, make changes as appropriate, then click the **Save** button to apply the new changes. - - -Users must have permission to read/write to ((fleet)) APIs to make changes to the configuration. - - -![Integration page](../images/endpoints-page/-management-admin-integration-pg.png) - -Users who have unique configuration and security requirements can select **Show advanced settings** to configure the policy to support advanced use cases. Hover over each setting to view its description. - - -Advanced settings are not recommended for most users. - - -![Integration page](../images/endpoints-page/-management-admin-integration-advanced-settings.png) - -
- -### Policy status - -The status of the integration policy appears in the **Policy status** column and displays one of the following: - -* `Success`: The policy was applied successfully. - -* `Warning` or `Partially Applied`: The policy is pending application, or the policy was not applied in its entirety. - - - In some cases, actions taken on the endpoint may fail during policy application, but these cases are not critical failures - meaning there may be a failure, but the endpoints are still protected. In this case, the policy status will display as "Partially Applied." - - -* `Failure`: The policy did not apply correctly, and endpoints are not protected. - -* `Unknown`: The user interface is waiting for the API response to return, or, in rare cases, the API returned an undefined error or value. - -For more details on what's causing a policy status, click the link in the **Policy status** column and review the details flyout. Expand each section and subsection to display individual responses from the agent. - - -If you need help troubleshooting a configuration failure, refer to Endpoint management troubleshooting and [((fleet)) troubleshooting](((fleet-guide))/fleet-troubleshooting.html). - - - - -### Filter endpoints - -To filter the Endpoints list, use the search bar to enter a query using **[((kib)) Query Language (KQL)](((kibana-ref))/kuery-query.html)**. To refresh the search results, click **Refresh**. - - -The date and time picker on the right side of the page allows you to set a time interval to automatically refresh the Endpoints list — for example, to check if new endpoints were added or deleted. - diff --git a/docs/serverless/edr-manage/event-filters.mdx b/docs/serverless/edr-manage/event-filters.mdx deleted file mode 100644 index ebcbc15ca6..0000000000 --- a/docs/serverless/edr-manage/event-filters.mdx +++ /dev/null @@ -1,116 +0,0 @@ ---- -slug: /serverless/security/event-filters -title: Event filters -# description: Description to be written -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -Event filters (**Assets** → **Event filters**) allow you to filter out endpoint events that you don't want stored in ((es)) — for example, high-volume events. By creating event filters, you can optimize your storage in ((es)). - -Event filters do not lower CPU usage on hosts; ((elastic-endpoint)) still monitors events to detect and prevent possible threats, but without writing event data to ((es)). To compare event filters with other endpoint artifacts, refer to . - - - -You must have the appropriate user role to use this feature. -{/* Placeholder statement until we know which specific roles are required. Classic statement below for reference. */} -{/* You must have the **Event Filters** privilege to access this feature. */} - - - - -Since an event filter blocks an event from streaming to ((es)), be conscious of event filter conditions you set and any existing rule conditions. If there is too much overlap, the rule may run less frequently than specified and, therefore, will not trigger the corresponding alert for that rule. This is the expected behavior of event filters. - - -By default, event filters are recognized globally across all hosts running ((elastic-defend)). You can also assign an event filter to a specific ((elastic-defend)) integration policy, which would filter endpoint events from the hosts assigned to that policy. - -Create event filters from the Hosts page or the Event filters page. - -1. Do one of the following: - - * To create an event filter from the Hosts page: - 1. Go to **Explore** → **Hosts**. - 1. Select the **Events** tab to view the Events table. - - 1. Find the event to filter, click the **More actions** menu (), then select **Add Endpoint event filter**. - - - Since you can only create filters for endpoint events, be sure to filter the Events table to display events generated by the ((elastic-endpoint)). - For example, in the KQL search bar, enter the following query to find endpoint network events: `event.dataset : endpoint.events.network`. - - - * To create an event filter from the Event filters page: - 1. Go to **Assets** → **Event filters**. - 1. Click **Add event filter**. The **Add event filter** flyout opens. - - ![](../images/event-filters/-management-admin-event-filter.png) - -1. Fill in these fields in the **Details** section: - 1. `Name`: Enter a name for the event filter. - 1. `Description`: Enter a filter description (optional). -1. In the **Conditions** section, depending which page you're using to create the filter, either modify the pre-populated conditions or add new conditions to define how ((elastic-sec)) will filter events. Use these settings: - 1. `Select operating system`: Select the appropriate operating system. - 1. Select which kind of event filter you'd like to create: - - * `Events`: Create a generic event filter that can match any event type. All matching events are excluded. - * `Process Descendants`: Specify a process, and suppress the activity of its descendant processes. Events from the matched process will be ingested, but events from its descendant processes will be excluded. - - This option adds the condition `event.category is process` to narrow the filter to process-type events. You can add more conditions to identify the process whose descendants you want to exclude. - 1. `Field`: Select a field to identify the event being filtered. - 1. `Operator`: Select an operator to define the condition. 
Available options are: - * `is` - * `is not` - * `is one of` - * `is not one of` - * `matches` | `does not match`: Allows you to use wildcards in `Value`, such as `C:\path\*\app.exe`. Available wildcards are `?` (match one character) and `*` (match zero or more characters). - - Using wildcards in file paths can impact performance. To create a more efficient event filter using wildcards, use multiple conditions and make them as specific as possible. For example, adding conditions using `process.name` or `file.name` can help limit the scope of wildcard matching. - - 1. `Value`: Enter the value associated with the `Field`. To enter multiple values (when using `is one of` or `is not one of`), enter each value, then press **Return**. - -1. To define multiple conditions, click the `AND` button and configure a new condition. You can also add nested conditions with the `Add nested condition` button. For example, the event filter pictured above excludes events whose `event.category` field is `network`, and whose `process.executable` field is as specified. - -1. Select an option in the **Assignment** section to assign the event filter to a specific integration policy: - - * `Global`: Assign the event filter to all integration policies for ((elastic-defend)). - * `Per Policy`: Assign the event filter to one or more specific ((elastic-defend)) integration policies. Select each policy in which you want the events to be filtered. - - - You can also select the `Per Policy` option without immediately assigning a policy to the event filter. For example, you could do this to create and review your event filter configurations before putting them into action with a policy. - - -1. Add a comment if you want to provide more information about the event filter (optional). -1. Click **Add event filter**. The new filter is added to the **Event filters** list. - -
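For example, a `matches` condition using the wildcards described above might look like this (the executable paths are illustrative only):

```
Field: process.executable    Operator: matches    Value: C:\path\*\app.exe

C:\path\installer\app.exe   → matches ("*" covers "installer")
C:\other\app.exe            → does not match
```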
- -## View and manage event filters - -The **Event filters** page (**Assets** → **Event filters**) displays all the event filters that have been added to the ((security-app)). To refine the list, use the search bar to search by filter name, description, comments, or field value. - -![](../images/event-filters/-management-admin-event-filters-list.png) - -
- -### Edit an event filter -You can individually modify each event filter. You can also change the policies that an event filter is assigned to. - -To edit an event filter: - -1. Click the actions menu () for the event filter you want to edit, then select **Edit event filter**. -1. Modify details or conditions as needed. -1. Click **Save**. - -
- -### Delete an event filter -You can delete an event filter, which removes it entirely from all ((elastic-defend)) integration policies. - -To delete an event filter: - -1. Click the actions menu () on the event filter you want to delete, then select **Delete event filter**. -1. On the dialog that opens, verify that you are removing the correct event filter, then click **Delete**. A confirmation message is displayed. - diff --git a/docs/serverless/edr-manage/host-isolation-exceptions.mdx b/docs/serverless/edr-manage/host-isolation-exceptions.mdx deleted file mode 100644 index 1c15cf1c75..0000000000 --- a/docs/serverless/edr-manage/host-isolation-exceptions.mdx +++ /dev/null @@ -1,75 +0,0 @@ ---- -slug: /serverless/security/host-isolation-exceptions -title: Host isolation exceptions -# description: Description to be written -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
-

You can configure host isolation exceptions (**Assets** → **Host isolation exceptions**) for specific IP addresses that isolated hosts are still allowed to communicate with, even when blocked from the rest of your network. Isolated hosts can still send data to ((elastic-sec)), so you don't need to set up a host isolation exception for that traffic.

Host isolation exceptions support IPv4 addresses, with optional classless inter-domain routing (CIDR) notation.



You must have the appropriate user role to use this feature.
{/* Placeholder statement until we know which specific roles are required. Classic statement below for reference. */}
{/* You must have the **Host Isolation Exceptions** privilege to access this feature. */}



* Each host isolation exception IP address should belong to a highly trusted and secure location, since you're allowing it to communicate with hosts that have been isolated to prevent a potential threat from spreading.

* If your hosts depend on VPNs for network communication, you should also set up host isolation exceptions for those VPN servers' IP addresses.


Host isolation requires the Endpoint Protection Complete project feature add-on. By default, a host isolation exception is recognized globally across all hosts running ((elastic-defend)). You can also assign a host isolation exception to a specific ((elastic-defend)) integration policy, affecting only the hosts assigned to that policy.

1. Go to **Assets** → **Host isolation exceptions**.
1. Click **Add Host isolation exception**.
1. Fill in these fields in the **Add Host isolation exception** flyout:
    1. `Name your host isolation exceptions`: Enter a name to identify the host isolation exception.
    1. `Description`: Enter a description to provide more information on the host isolation exception (optional).
    1. `Enter IP Address`: Enter the IP address for which you want to allow communication with an isolated host. This must be an IPv4 address, with optional CIDR notation (for example, `0.0.0.0` or `1.0.0.0/24`, respectively). A worked example follows these steps.
1. Select an option in the **Assignment** section to assign the host isolation exception to a specific integration policy:

    * `Global`: Assign the host isolation exception to all integration policies for ((elastic-defend)).
    * `Per Policy`: Assign the host isolation exception to one or more specific ((elastic-defend)) integration policies. Select each policy where you want the host isolation exception to apply.

    You can also select the `Per Policy` option without immediately assigning a policy to the host isolation exception. For example, you could do this to create and review your host isolation exception configurations before putting them into action with a policy.

1. Click **Add Host isolation exception**. The new exception is added to the **Host isolation exceptions** list.
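For example, a single exception using CIDR notation can cover an entire trusted subnet. The address below is illustrative, not a recommendation:

```
Name:       VPN concentrator subnet
IP Address: 10.50.12.0/24    (allows 10.50.12.0 through 10.50.12.255, which is 256 addresses)
```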
- -## View and manage host isolation exceptions - -The **Host isolation exceptions** page displays all the host isolation exceptions that have been configured for ((elastic-sec)). To refine the list, use the search bar to search by name, description, or IP address. - -![List of host isolation exceptions](../images/host-isolation-exceptions/-management-admin-host-isolation-exceptions-ui.png) - -
- -### Edit a host isolation exception -You can individually modify each host isolation exception and change the policies that a host isolation exception is assigned to. - -To edit a host isolation exception: - -1. Click the actions menu () for the exception you want to edit, then select **Edit Exception**. -1. Modify details as needed. -1. Click **Save**. The newly modified exception appears at the top of the list. - -
- -### Delete a host isolation exception -You can delete a host isolation exception, which removes it entirely from all ((elastic-defend)) integration policies. - -To delete a host isolation exception: - -1. Click the actions menu () on the exception you want to delete, then select **Delete Exception**. -1. On the dialog that opens, verify that you are removing the correct host isolation exception, then click **Delete**. A confirmation message is displayed. - diff --git a/docs/serverless/edr-manage/manage-endpoint-protection.mdx b/docs/serverless/edr-manage/manage-endpoint-protection.mdx deleted file mode 100644 index 42f6171c5d..0000000000 --- a/docs/serverless/edr-manage/manage-endpoint-protection.mdx +++ /dev/null @@ -1,12 +0,0 @@ ---- -slug: /serverless/security/manage-endpoint-protection -title: Manage ((elastic-defend)) -description: Manage endpoint protection artifacts for ((elastic-defend)). -tags: [ 'serverless', 'security', 'overview' ] -status: in review ---- - - -
- -This section provides an overview of the management tools on the **Assets** page that administrators can use to manage endpoints, integration policies, trusted applications, event filters, host isolation exceptions, and blocked applications. diff --git a/docs/serverless/edr-manage/optimize-edr.mdx b/docs/serverless/edr-manage/optimize-edr.mdx deleted file mode 100644 index 562b7a4879..0000000000 --- a/docs/serverless/edr-manage/optimize-edr.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -slug: /serverless/security/optimize-edr -title: Optimize ((elastic-defend)) -# description: Description to be written -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -If you encounter problems like incompatibilities with other antivirus software, too many false positive alerts, or excessive storage or CPU usage, you can optimize ((elastic-defend)) to mitigate these issues. - -Endpoint artifacts — such as trusted applications and event filters — and Endpoint exceptions let you modify the behavior and performance of _((elastic-endpoint))_, the component installed on each host that performs ((elastic-defend))'s threat monitoring, prevention, and response actions. - -The following table explains the differences between several Endpoint artifacts and exceptions, and how to use them: - - - - - Trusted application - - - - **_Prevents ((elastic-endpoint)) from monitoring a process._** Use to avoid conflicts with other software, usually other antivirus or endpoint security applications. - - * Creates intentional blind spots in your security environment — use sparingly! - * Doesn't monitor the application for threats, nor does it generate alerts, even if it behaves like malware, ransomware, etc. - * Doesn't generate events for the application except process events for visualizations and other internal use by the ((stack)). - * Might improve performance, since ((elastic-endpoint)) monitors fewer processes. - * Might still generate malicious behavior alerts, if the application's process events indicate malicious behavior. To suppress alerts, create Endpoint alert exceptions. - - - - - - - - Event filter - - - - **_Prevents event documents from being written to ((es))._** Use to reduce storage usage in ((es)). - - Does NOT lower CPU usage for ((elastic-endpoint)). It still monitors event data for possible threats, but without writing event data to ((es)). - - - - - - - Blocklist - - - - **_Prevents known malware from running._** Use to extend ((elastic-defend))'s protection against malicious processes. - - NOT intended to broadly block benign applications for non-security reasons. - - - - - - - Endpoint alert exception - - - - **_Prevents ((elastic-endpoint)) from generating alerts or stopping processes._** Use to reduce false positive alerts, and to keep ((elastic-endpoint)) from preventing processes you want to allow. - - Might also improve performance: ((elastic-endpoint)) checks for exceptions _before_ most other processing, and stops monitoring a process if an exception allows it. - - - - - diff --git a/docs/serverless/edr-manage/policies-page-ov.mdx b/docs/serverless/edr-manage/policies-page-ov.mdx deleted file mode 100644 index 9a7884aa69..0000000000 --- a/docs/serverless/edr-manage/policies-page-ov.mdx +++ /dev/null @@ -1,24 +0,0 @@ ---- -slug: /serverless/security/policies-page -title: Policies -# description: Description to be written -tags: [ 'serverless', 'security', 'reference' ] -status: in review ---- - - -
- -The **Policies** page (**Assets** → **Policies**) lists all of the integration policies configured for ((elastic-defend)). - - - -You must have the appropriate user role to use this feature. -{/* Placeholder statement until we know which specific roles are required. Classic statement below for reference. */} -{/* You must have the **((elastic-defend)) Policy Management** privilege to access this feature. */} - - - -Click on an integration policy's name to configure its settings. For more information on configuring an integration policy, refer to Configure an integration policy for ((elastic-defend)). - -![](../images/policies-page-ov/-management-admin-policy-list.png) diff --git a/docs/serverless/edr-manage/trusted-apps-ov.mdx b/docs/serverless/edr-manage/trusted-apps-ov.mdx deleted file mode 100644 index 2576a17d41..0000000000 --- a/docs/serverless/edr-manage/trusted-apps-ov.mdx +++ /dev/null @@ -1,105 +0,0 @@ ---- -slug: /serverless/security/trusted-applications -title: Trusted applications -# description: Description to be written -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
-


If you use ((elastic-defend)) along with other antivirus (AV) software, you might need to configure the other system to trust ((elastic-endpoint)).


On the **Trusted applications** page (**Assets** → **Trusted applications**), you can add Windows, macOS, and Linux applications that should be trusted, such as other antivirus or endpoint security applications. Trusted applications are designed to help mitigate performance issues and incompatibilities with other endpoint software installed on your hosts. Trusted applications apply only to hosts running the ((elastic-defend)) integration.



You must have the appropriate user role to use this feature.
{/* Placeholder statement until we know which specific roles are required. Classic statement below for reference. */}
{/* You must have the **Trusted Applications** privilege to access this feature. */}



Trusted applications create blind spots for ((elastic-defend)), because the applications are no longer monitored for threats. One avenue attackers use to exploit these blind spots is DLL (Dynamic Link Library) side-loading, where they leverage processes signed by trusted vendors — such as antivirus software — to execute their malicious DLLs. Such activity appears to originate from the trusted application's process.

Trusted applications might still generate alerts in some cases, such as if the application's process events indicate malicious behavior. To reduce false positive alerts, add an Endpoint alert exception, which prevents ((elastic-defend)) from generating alerts. To compare trusted applications with other endpoint artifacts, refer to Optimize ((elastic-defend)).

Additionally, trusted applications still generate process events for visualizations and other internal use by the ((stack)). To prevent process events from being written to ((es)), use an event filter to filter out the specific events that you don't want stored in ((es)), but be aware that features that depend on these process events may not function correctly.

By default, a trusted application is recognized globally across all hosts running ((elastic-defend)). You can also assign a trusted application to a specific ((elastic-defend)) integration policy, enabling the application to be trusted by only the hosts assigned to that policy.

To add a trusted application:

1. Go to **Assets** → **Trusted applications**.

1. Click **Add trusted application**.

1. Fill in the following fields in the **Add trusted application** flyout:

    * `Name your trusted application`: Enter a name for the trusted application.

    * `Description` (Optional): Enter a description for the trusted application.

    * `Select operating system`: Select the appropriate operating system from the drop-down.

    * `Field`: Select a field to identify the trusted application:
        * `Hash`: The MD5, SHA-1, or SHA-256 hash value of the application's executable.
        * `Path`: The full file path of the application's executable.
        * `Signature`: (Windows only) The name of the application's digital signer.

        
        To find the signer's name for an application, go to **Discover** and query the process name of the application's executable (for example, `process.name : "mctray.exe"` for a McAfee security binary). Then, search the results for the `process.code_signature.subject_name` field, which contains the signer's name (for example, `McAfee, Inc.`).
        

    * `Operator`: Select an operator to define the condition:
        * `is`: Must be _exactly_ equal to `Value`; wildcards are not supported. This operator is required for the `Hash` and `Signature` field types.
        * `matches`: Can include wildcards in `Value`, such as `C:\path\*\app.exe`. This operator is only available for the `Path` field type. Available wildcards are `?` (match one character) and `*` (match zero or more characters).

    * `Value`: Enter the hash value, file path, or signer name. To add an additional value, click **AND**.

    
    You can only add a single field type value per trusted application. For example, if you try to add two `Path` values, you'll get an error message. Also, an application's hash value must be valid to add it as a trusted application. In addition, to minimize visibility gaps in the ((security-app)), be as specific as possible in your entries. For example, combine `Signature` information with a known `Path`, as shown in the example after these steps.
    

1. Select an option in the **Assignment** section to assign the trusted application to a specific integration policy:
    * `Global`: Assign the trusted application to all integration policies for ((elastic-defend)).
    * `Per Policy`: Assign the trusted application to one or more specific ((elastic-defend)) integration policies. Select each policy in which you want the application to be trusted.

    
    You can also select the `Per Policy` option without immediately assigning a policy to the trusted application. For example, you could do this to create and review your trusted application configurations before putting them into action with a policy.
    

1. Click **Add trusted application**. The application is added to the **Trusted applications** list.
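For example, building on the `mctray.exe` example above, an entry that combines a signer name with a known path might look like this (the path itself is hypothetical):

```
Field: Signature    Operator: is         Value: McAfee, Inc.
AND
Field: Path         Operator: matches    Value: C:\Program Files\McAfee\*\mctray.exe
```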
- -## View and manage trusted applications - -The **Trusted applications** page (**Assets** → **Trusted applications**) displays all the trusted applications that have been added to the ((security-app)). To refine the list, use the search bar to search by name, description, or field value. - -![](../images/trusted-apps-ov/-management-admin-trusted-apps-list.png) - -
- -### Edit a trusted application -You can individually modify each trusted application. You can also change the policies that a trusted application is assigned to. - -To edit a trusted application: - -1. Click the actions menu (*...*) on the trusted application you want to edit, then select **Edit trusted application**. -1. Modify details as needed. -1. Click **Save**. - -
- -### Delete a trusted application -You can delete a trusted application, which removes it entirely from all ((elastic-defend)) integration policies. - -To delete a trusted application: - -1. Click the actions menu (*...*) on the trusted application you want to delete, then select **Delete trusted application**. -1. On the dialog that opens, verify that you are removing the correct application, then click **Delete**. A confirmation message is displayed. - diff --git a/docs/serverless/endpoint-response-actions/automated-response-actions.mdx b/docs/serverless/endpoint-response-actions/automated-response-actions.mdx deleted file mode 100644 index 2c91d21d22..0000000000 --- a/docs/serverless/endpoint-response-actions/automated-response-actions.mdx +++ /dev/null @@ -1,43 +0,0 @@ ---- -slug: /serverless/security/automated-response-actions -title: Automated response actions -description: Automatically respond to events with endpoint response actions triggered by detection rules. -tags: ["serverless","security","defend","how-to","manage"] ---- - - -
- -Add ((elastic-defend))'s response actions to detection rules to automatically perform actions on an affected host when an event meets the rule's criteria. Use these actions to support your response to detected threats and suspicious events. - - - -- Automated response actions require an [Enterprise subscription](https://www.elastic.co/pricing). -- Hosts must have ((agent)) installed with the ((elastic-defend)) integration. -- Your user role must have the ability to create detection rules and to perform specific response actions. -- You can only add automated response actions to custom query rules. - - - -You can add automated response actions to a new or existing custom query rule. - -1. Do one of the following: - - **New rule**: On the last step of custom query rule creation, go to the **Response Actions** section and select **((elastic-defend))**. - - **Existing rule**: Edit the rule's settings, then go to the **Actions** tab. In the tab, select **((elastic-defend))** under the **Response Actions** section. - -1. Select an option in the **Response action** field: - - **Isolate**: Isolate the host, blocking communication with other hosts on the network. - - **Kill process**: Terminate a process on the host. - - **Suspend process**: Temporarily suspend a process on the host. - - - Be aware that automatic host isolation can result in unintended consequences, such as disrupting legitimate user activities or blocking critical business processes. - - -1. For process actions, specify how to identify the process you want to terminate or suspend: - - Turn on the toggle to use the alert's **process.pid** value as the identifier. - - To use a different alert field value to identify the process, turn off the toggle and enter the **Custom field name**. - -1. Enter a comment describing why you’re performing the action on the host (optional). - -1. To finish adding the response action, click **Create & enable rule** (for a new rule) or **Save changes** (for existing rules). diff --git a/docs/serverless/endpoint-response-actions/host-isolation-ov.mdx b/docs/serverless/endpoint-response-actions/host-isolation-ov.mdx deleted file mode 100644 index 9dfcc1ed16..0000000000 --- a/docs/serverless/endpoint-response-actions/host-isolation-ov.mdx +++ /dev/null @@ -1,158 +0,0 @@ ---- -slug: /serverless/security/isolate-host -title: Isolate a host -description: Host isolation allows you to cut off a host's network access until you release it. -tags: ["serverless","security","defend","how-to","manage"] -status: in review ---- - - -
-

Host isolation allows you to cut off a host from your network, blocking its communication with other hosts until you release it. Isolating a host is useful for responding to malicious activity or preventing potential attacks, because it stops lateral movement to other hosts.

Isolated hosts, however, can still send data to ((elastic-sec)). You can also create host isolation exceptions for specific IP addresses that isolated hosts are still allowed to communicate with, even when blocked from the rest of your network.



* Host isolation requires the Endpoint Protection Complete project feature add-on.

* Hosts must have ((agent)) installed with the ((elastic-defend)) integration.

* Host isolation is supported for endpoints running Windows, macOS, and these Linux distributions:

    * CentOS/RHEL 8
    * Debian 11
    * Ubuntu 18.04, 20.04, and 22.04
    * Amazon Linux 2

* To isolate and release hosts running any operating system, you must have the appropriate user role. {/* **Host Isolation** privilege */}



![Endpoint page highlighting a host that's been isolated](../images/host-isolation-ov/-management-admin-isolated-host.png)

You can isolate a host from a detection alert's details flyout, from the Endpoints page, or from the endpoint response console. Once a host is successfully isolated, an `Isolated` status displays next to the `Agent status` field, which you can view in the alert details flyout or the Endpoints list.


If the request fails, verify that the ((agent)) and your endpoint are both online before trying again.


All actions executed on a host are tracked in the host's response actions history, which you can access from the Endpoints page. Refer to View host isolation history for more information.
- -## Isolate a host - - - -1. Open a detection alert: -* From the Alerts table or Timeline: Click **View details** (). -* From a case with an attached alert: Click **Show alert details** (**>**). -1. Click **Take action → Isolate host**. -1. Enter a comment describing why you’re isolating the host (optional). -1. Click **Confirm**. - - - - - -1. Go to **Assets → Endpoints**, then either: - * Select the appropriate endpoint in the **Endpoint** column, and click **Take action → Isolate host** in the endpoint details flyout. - * Click the **Actions** menu (*...*) on the appropriate endpoint, then select **Isolate host**. -1. Enter a comment describing why you’re isolating the host (optional). -1. Click **Confirm**. - - - - - - -The response console requires the Endpoint Protection Complete . - - -1. Open the response console for the host (select the **Respond** button or actions menu option on the host, endpoint, or alert details view). -1. Enter the `isolate` command and an optional comment in the input area, for example: - - `isolate --comment "Isolate this host"` - -1. Press **Return**. - - - - - - -The host isolation endpoint response action requires the Endpoint Protection Complete . - - - -Be aware that automatic host isolation can result in unintended consequences, such as disrupting legitimate user activities or blocking critical business processes. - - -1. Add an endpoint response action to a new or existing custom query rule. The endpoint response action will run whenever rule conditions are met: - * **New rule**: On the last step of custom query rule creation, go to the **Response Actions** section and select **((elastic-defend))**. - * **Existing rule**: Edit the rule's settings, then go to the **Actions** tab. In the tab, select **((elastic-defend))** under the **Response Actions** section. -1. Click the **Response action** field, then select **Isolate**. -1. Enter a comment describing why you’re isolating the host (optional). -1. To finish adding the response action, click **Create & enable rule** (for a new rule) or **Save changes** (for existing rules). - - - -After the host is successfully isolated, an **Isolated** status is added to the endpoint. Active end users receive a notification that the computer has been isolated from the network: - - - -
- -## Release a host - - - -1. Open a detection alert: -* From the Alerts table or Timeline: Click **View details** (). -* From a case with an attached alert: Click **Show alert details** (**>**). -1. From the alert details flyout, click **Take action → Release host**. -1. Enter a comment describing why you're releasing the host (optional). -1. Click **Confirm**. - - - - - -1. Go to **Assets → Endpoints**, then either: - * Select the appropriate endpoint in the **Endpoint** column, and click **Take action → Release host** in the endpoint details flyout. - * Click the **Actions** menu (*...*) on the appropriate endpoint, then select **Release host**. -1. Enter a comment describing why you're releasing the host (optional). -1. Click **Confirm**. - - - - - - -The response console requires the Endpoint Protection Complete . - - -1. Open the response console for the host (select the **Respond** button or actions menu option on the host, endpoint, or alert details view). -1. Enter the `release` command and an optional comment in the input area, for example: - - `release --comment "Release this host"` - -1. Press **Return**. - - - -After the host is successfully released, the **Isolated** status is removed from the endpoint. Active end users receive a notification that the computer has been reconnected to the network: - - - -
- -## View host isolation history - -To confirm if a host has been successfully isolated or released, check the response actions history, which logs the response actions performed on a host. - -Go to **Assets** → **Endpoints**, click an endpoint's name, then click the **Response action history** tab. You can filter the information displayed in this view. Refer to Response actions history for more details. - - diff --git a/docs/serverless/endpoint-response-actions/response-actions-config.mdx b/docs/serverless/endpoint-response-actions/response-actions-config.mdx deleted file mode 100644 index 7cc57e528a..0000000000 --- a/docs/serverless/endpoint-response-actions/response-actions-config.mdx +++ /dev/null @@ -1,129 +0,0 @@ ---- -slug: /serverless/security/response-actions-config -title: Configure third-party response actions -description: Configure ((elastic-sec)) to perform response actions on hosts protected by third-party systems. -tags: ["serverless","security","how-to","configure"] ---- - - - - - -
-

You can direct third-party endpoint protection systems to perform response actions on enrolled hosts, such as isolating a suspicious endpoint from your network, without leaving the ((elastic-sec)) UI. This page explains the configuration steps needed to enable response actions for these third-party systems:

* CrowdStrike
* SentinelOne

Check out Third-party response actions to learn which response actions are supported for each system.


* Project features add-on: Endpoint Protection Complete
* User roles: **SOC manager** or **Endpoint operations analyst**
* Endpoints must have actively running third-party agents installed.


Select a tab below for your endpoint security system:



  {/* NOTE TO CONTRIBUTORS: These DocTabs have very similar content. If you change anything
  in this tab, apply the change to the other tabs, too. */}
  To configure response actions for CrowdStrike-enrolled hosts:

  1. **Enable API access in CrowdStrike.** Create an API client in CrowdStrike to allow access to the system. Refer to CrowdStrike's docs for instructions.

     
     Give the API client the minimum privilege required to read CrowdStrike data and perform actions on enrolled hosts. Consider creating separate API clients for reading data and performing actions, to limit the privileges granted to each API client.
     

     Take note of the client ID, client secret, and base URL; you'll need them in later steps when you configure ((elastic-sec)) components to access CrowdStrike.

- - 1. **Install the CrowdStrike integration and ((agent)).** Elastic's [CrowdStrike integration](((integrations-docs))/crowdstrike) collects and ingests logs into ((elastic-sec)). - 1. Go to **Project Settings** → **Integrations**, search for and select **CrowdStrike**, then select **Add CrowdStrike**. - 1. Configure the integration with an **Integration name** and optional **Description**. - 1. Select **Collect CrowdStrike logs via API**, and enter the required **Settings**: - - **Client ID**: Client ID for the API client used to read CrowdStrike data. - - **Client Secret**: Client secret allowing you access to CrowdStrike. - - **URL**: The base URL of the CrowdStrike API. - 1. Select the **Falcon Alerts** and **Hosts** sub-options under **Collect CrowdStrike logs via API**. - 1. Scroll down and enter a name for the agent policy in **New agent policy name**. If other agent policies already exist, you can click the **Existing hosts** tab and select an existing policy instead. For more details on ((agent)) configuration settings, refer to [((agent)) policies](((fleet-guide))/agent-policy.html). - 1. Click **Save and continue**. - 1. Select **Add ((agent)) to your hosts** and continue with the ((agent)) installation steps to install ((agent)) on a resource in your network (such as a server or VM). ((agent)) will act as a bridge collecting data from CrowdStrike and sending it back to ((elastic-sec)).

- - 1. **Create a CrowdStrike connector.** Elastic's [CrowdStrike connector](((kibana-ref))/crowdstrike-action-type.html) enables ((elastic-sec)) to perform actions on CrowdStrike-enrolled hosts. - - - Do not create more than one CrowdStrike connector. - - - 1. Go to **Stack Management** → **Connectors**, then select **Create connector**. - 1. Select the **CrowdStrike** connector. - 1. Enter the configuration information: - - **Connector name**: A name to identify the connector. - - **CrowdStrike API URL**: The base URL of the CrowdStrike API. - - **CrowdStrike Client ID**: Client ID for the API client used to perform actions in CrowdStrike. - - **Client Secret**: Client secret allowing you access to CrowdStrike. - 1. Click **Save**.

- - 1. **Create and enable detection rules to generate ((elastic-sec)) alerts.** (Optional) Create detection rules to generate ((elastic-sec)) alerts based on CrowdStrike events and data. The [CrowdStrike integration docs](((integrations-docs))/crowdstrike) list the available ingested logs and fields you can use to build a rule query. - - This gives you visibility into CrowdStrike without needing to leave ((elastic-sec)). You can perform supported endpoint response actions directly from alerts that a rule creates, by using the **Take action** menu in the alert details flyout. -
- - - {/* NOTE TO CONTRIBUTORS: These DocTabs have very similar content. If you change anything - in this tab, apply the change to the other tabs, too. */} - To configure response actions for SentinelOne-enrolled hosts: - - 1. **Generate API access tokens in SentinelOne.** You'll need these tokens in later steps, and they allow ((elastic-sec)) to collect data and perform actions in SentinelOne. - - Create two API tokens in SentinelOne, and give them the minimum privilege required by the Elastic components that will use them: - - SentinelOne integration: Permission to read SentinelOne data. - - SentinelOne connector: Permission to read SentinelOne data and perform actions on enrolled hosts (for example, isolating and releasing an endpoint).

- - Refer to the [SentinelOne integration docs](((integrations-docs))/sentinel_one) or SentinelOne's docs for details on generating API tokens.

- - 1. **Install the SentinelOne integration and ((agent)).** Elastic's [SentinelOne integration](((integrations-docs))/sentinel_one) collects and ingests logs into ((elastic-sec)). - - 1. Go to **Project Settings** → **Integrations**, search for and select **SentinelOne**, then select **Add SentinelOne**. - 1. Configure the integration with an **Integration name** and optional **Description**. - 1. Ensure that **Collect SentinelOne logs via API** is selected, and enter the required **Settings**: - - **URL**: The SentinelOne console URL. - - **API Token**: The SentinelOne API access token you generated previously, with permission to read SentinelOne data. - 1. Scroll down and enter a name for the agent policy in **New agent policy name**. If other agent policies already exist, you can click the **Existing hosts** tab and select an existing policy instead. For more details on ((agent)) configuration settings, refer to [((agent)) policies](((fleet-guide))/agent-policy.html). - 1. Click **Save and continue**. - 1. Select **Add ((agent)) to your hosts** and continue with the ((agent)) installation steps to install ((agent)) on a resource in your network (such as a server or VM). ((agent)) will act as a bridge collecting data from SentinelOne and sending it back to ((elastic-sec)).

- - 1. **Create a SentinelOne connector.** Elastic's [SentinelOne connector](((kibana-ref))/sentinelone-action-type.html) enables ((elastic-sec)) to perform actions on SentinelOne-enrolled hosts. - - - Do not create more than one SentinelOne connector. - - - 1. Go to **Stack Management** → **Connectors**, then select **Create connector**. - 1. Select the **SentinelOne** connector. - 1. Enter the configuration information: - - **Connector name**: A name to identify the connector. - - **SentinelOne tenant URL**: The SentinelOne tenant URL. - - **API token**: The SentinelOne API access token you generated previously, with permission to read SentinelOne data and perform actions on enrolled hosts. - 1. Click **Save**.

- - 1. **Create and enable detection rules to generate ((elastic-sec)) alerts.** (Optional) Create detection rules to generate ((elastic-sec)) alerts based on SentinelOne events and data. - - This gives you visibility into SentinelOne without needing to leave ((elastic-sec)). You can perform supported endpoint response actions directly from alerts that a rule creates, by using the **Take action** menu in the alert details flyout. - - When creating a rule, you can target any event containing a SentinelOne agent ID field. Use one or more of these index patterns: - - | Index pattern | SentinelOne agent ID field | - | ----------------------------- | -------------------------------- | - | `logs-sentinel_one.alert*` | `sentinel_one.alert.agent.id` | - | `logs-sentinel_one.threat*` | `sentinel_one.threat.agent.id` | - | `logs-sentinel_one.activity*` | `sentinel_one.activity.agent.id` | - | `logs-sentinel_one.agent*` | `sentinel_one.agent.agent.id` | - - - Do not include any other index patterns. - - -
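      For example, a rule that uses the `logs-sentinel_one.alert*` index pattern could match any SentinelOne alert that includes an agent ID with a query like the following (a minimal sketch; adapt it to your own detection logic):

      ```
      sentinel_one.alert.agent.id : *
      ```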
-
diff --git a/docs/serverless/endpoint-response-actions/response-actions-history.mdx b/docs/serverless/endpoint-response-actions/response-actions-history.mdx deleted file mode 100644 index 3195550a22..0000000000 --- a/docs/serverless/endpoint-response-actions/response-actions-history.mdx +++ /dev/null @@ -1,44 +0,0 @@ ---- -slug: /serverless/security/response-actions-history -title: Response actions history -description: The response actions history log keeps a record of actions taken on endpoints. -tags: ["serverless","security","defend","reference","manage"] -status: in review ---- - - -
- -((elastic-sec)) keeps a log of the response actions performed on endpoints, such as isolating a host or terminating a process. The log displays when each command was performed, the host on which the action was performed, the user who requested the action, any comments added to the action, and the action's current status. - - - -You must have the appropriate user role to use this feature. -{/* Placeholder statement until we know which specific roles are required. Classic statement below for reference. */} -{/* You must have the **Response Actions History** privilege to access this feature. */} - - - -To access the response actions history for all endpoints, go to **Assets** → **Response actions history**. You can also access the response actions history for an individual endpoint from these areas: - -* **Endpoints** page: Click an endpoint's name to open the details flyout, then click the **Response actions history** tab. -* **Response console** page: Click the **Response actions history** button. - -All of these contexts contain the same information and features. The following image shows the **Response actions history** page for all endpoints: - -![Response actions history page UI](../images/response-actions-history/-management-admin-response-actions-history-page.png) - -To filter and expand the information in the response actions history: - -* Enter a user name or comma-separated list of user names in the search field to display actions requested by those users. - -* Use the various drop-down menus to filter the actions shown: - - * **Hosts**: Show actions performed on specific endpoints. (This menu is only available on the **Response actions history** page for all endpoints.) - * **Actions**: Show specific actions types. - * **Statuses**: Show actions with a specific status. - * **Types**: Show actions based on the endpoint protection agent type (((elastic-defend)) or a third-party agent), and how the action was triggered (manually by a user or automatically by a detection rule). - -* Use the date and time picker to display actions within a specific time range. -* Click the expand arrow on the right to display more details about an action. - diff --git a/docs/serverless/endpoint-response-actions/response-actions.mdx b/docs/serverless/endpoint-response-actions/response-actions.mdx deleted file mode 100644 index 491caa97e0..0000000000 --- a/docs/serverless/endpoint-response-actions/response-actions.mdx +++ /dev/null @@ -1,256 +0,0 @@ ---- -slug: /serverless/security/response-actions -title: Endpoint response actions -description: Perform response actions on endpoints using a terminal-like interface. -tags: ["serverless","security","defend","reference","manage"] -status: rough content ---- - - -
- -The response console allows you to perform response actions on an endpoint using a terminal-like interface. You can enter action commands and get near-instant feedback on them. Actions are also recorded in the endpoint's response actions history for reference. - -Response actions are supported on all endpoint platforms (Linux, macOS, and Windows). - - - -* Response actions and the response console UI require the Endpoint Protection Complete . - -* Endpoints must have ((agent)) version 8.4 or higher installed with the ((elastic-defend)) integration to receive response actions. - -* Some response actions require specific user roles, indicated below. These are required to perform actions both in the response console and in other areas of the ((security-app)) (such as isolating a host from a detection alert). - -* Users must have the appropriate user role privileges for at least one response action to access the response console. - - - -![Response console UI](../images/response-actions/-management-admin-response-console.png) - -Launch the response console from any of the following places in ((elastic-sec)): - -* **Endpoints** page → **Actions** menu () → **Respond** -* Endpoint details flyout → **Take action** → **Respond** -* Alert details flyout → **Take action** → **Respond** -* Host details page → **Respond** - -To perform an action on the endpoint, enter a response action command in the input area at the bottom of the console, then press **Return**. Output from the action is displayed in the console. - -If a host is unavailable, pending actions will execute once the host comes online. Pending actions expire after two weeks and can be tracked in the response actions history. - - -Some response actions may take a few seconds to complete. Once you enter a command, you can immediately enter another command while the previous action is running. - - -Activity in the response console is persistent, so you can navigate away from the page and any pending actions you've submitted will continue to run. To confirm that an action completed, return to the response console to view the console output or check the response actions history. - - -Once you submit a response action, you can't cancel it, even if the action is pending for an offline host. - - -
- -## Response action commands - -The following response action commands are available in the response console. - -### `isolate` -Isolate the host, blocking communication with other hosts on the network. - -Required role: **Tier 3 analyst**, **SOC manager**, or **Endpoint operations analyst** - -Example: `isolate --comment "Isolate host related to detection alerts"` - -### `release` -Release an isolated host, allowing it to communicate with the network again. - -Required role: **Tier 3 analyst**, **SOC manager**, or **Endpoint operations analyst** - -Example: `release --comment "Release host, everything looks OK"` - -### `status` -Show information about the host's status, including: ((agent)) status and version, the ((elastic-defend)) integration's policy status, and when the host was last active. - -
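Example: `status`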
-
### `processes`
Show a list of all processes running on the host. This action may take a minute or so to complete.

Required role: **Tier 3 analyst**, **SOC manager**, or **Endpoint operations analyst**



Use this command to get current PID or entity ID values, which are required for other response actions such as `kill-process` and `suspend-process`.

Entity IDs may be more reliable than PIDs, because entity IDs are unique values on the host, while PID values can be reused by the operating system.




    Running this command on third-party-protected hosts might return the process list in a different format. Refer to Third-party response actions for more information.
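Example: `processes`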
-### `kill-process` - -Terminate a process. You must include one of the following parameters to identify the process to terminate: - -* `--pid` : A process ID (PID) representing the process to terminate. -* `--entityId` : An entity ID representing the process to terminate. - -Required role: **Tier 3 analyst**, **SOC manager**, or **Endpoint operations analyst** - -Example: `kill-process --pid 123 --comment "Terminate suspicious process"` - - - For SentinelOne-enrolled hosts, you must use the parameter `--processName` to identify the process to terminate. `--pid` and `--entityId` are not supported. - - Example: `kill-process --processName cat --comment "Terminate suspicious process"` - - -### `suspend-process` - -Suspend a process. You must include one of the following parameters to identify the process to suspend: - -* `--pid` : A process ID (PID) representing the process to suspend. -* `--entityId` : An entity ID representing the process to suspend. - -Required role: **Tier 3 analyst**, **SOC manager**, or **Endpoint operations analyst** - -Example: `suspend-process --pid 123 --comment "Suspend suspicious process"` - -
-

### `get-file`

Retrieve a file from a host. Files are downloaded in a password-protected `.zip` archive to prevent the file from running. Use password `elastic` to open the `.zip` in a safe environment.


Files retrieved from third-party-protected hosts require a different password. Refer to Third-party response actions for your system's password.


You must include the following parameter to specify the file's location on the host:

* `--path` : The file's full path (including the file name).

Required role: **Tier 3 analyst**, **SOC manager**, or **Endpoint operations analyst**

Example: `get-file --path "/full/path/to/file.txt" --comment "Possible malware"`


You can use the Osquery manager integration to query a host's operating system and gain insight into its files and directories, then use `get-file` to retrieve specific files.



When ((elastic-defend)) prevents file activity due to malware prevention, the file is quarantined on the host and a malware prevention alert is created. To retrieve this file with `get-file`, copy the path from the alert's **Quarantined file path** field (`file.Ext.quarantine_path`), which appears under **Highlighted fields** in the alert details flyout. Then paste the value into the `--path` parameter.



### `execute`

Run a shell command on the host. The command's output and any errors appear in the response console, up to 2000 characters. The complete output (stdout and stderr) is also saved to a downloadable `.zip` archive (password: `elastic`). Use these parameters:

* `--command` : (Required) A shell command to run on the host. The command must be supported by `bash` for Linux and macOS hosts, and `cmd.exe` for Windows.

    
    * Multiple consecutive dashes in the value must be escaped; single dashes do not need to be escaped. For example, to represent a directory named `/opt/directory--name`, use the following: `/opt/directory\-\-name`.

    * You can use quotation marks without escaping. For example:
      `execute --command "cd "C:\Program Files\directory""`
    

* `--timeout` : (Optional) How long the host should wait for the command to complete. Use `h` for hours, `m` for minutes, `s` for seconds (for example, `2s` is two seconds). If no timeout is specified, it defaults to four hours.

Required role: **SOC manager** or **Endpoint operations analyst**

Example: `execute --command "ls -al" --timeout 2s --comment "Get list of all files"`


This response action runs commands on the host using the same user account running the ((elastic-defend)) integration, which normally has full control over the system. Be careful with any commands that could cause irrevocable changes.


### `upload`

Upload a file to the host. The file is saved to the location on the host where ((elastic-endpoint)) is installed. After you run the command, the full path is returned in the console for reference. Use these parameters:

* `--file` : (Required) The file to send to the host. As soon as you type this parameter, a popup appears — use it to navigate to the file, or drag and drop the file onto the popup.
* `--overwrite` : (Optional) Overwrite the file on the host if it already exists.

Required role: **Tier 3 analyst**, **SOC manager**, or **Endpoint operations analyst**

Example: `upload --file --comment "Upload remediation script"`


You can follow this with the `execute` response action to upload and run scripts for mitigation or other purposes.
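For example, you might upload a remediation script and then run it with `execute`. This is a minimal sketch with a hypothetical script name; substitute the full path that the `upload` command returns in the console:

```
upload --file --comment "Upload remediation script"
execute --command "bash <path-returned-by-upload>/remediate.sh" --timeout 5m --comment "Run remediation script"
```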
- - - -The default file size maximum is 25 MB, configurable in `kibana.yml` with the `xpack.securitySolution.maxUploadResponseActionFileBytes` setting. You must enter the value in bytes (the maximum is `104857600` bytes, or 100 MB). - - -### `scan` - -Scan a specific file or directory on the host for malware. This uses the malware protection settings (such as **Detect** or **Prevent** options, or enabling the blocklist) as configured in the host's associated ((elastic-defend)) integration policy. Use these parameters: - -* `--path` : (Required) The absolute path to a file or directory to be scanned. - -Required role: **Tier 3 Analyst**, **SOC Manager**, or **Endpoint Operations Analyst** - -Example: `scan --path "/Users/username/Downloads" --comment "Scan Downloads folder for malware"` - - - Scanning can take longer for directories containing a lot of files. - - -
- -## Supporting commands and parameters - -### `--comment` - -Add to a command to include a comment explaining or describing the action. Comments are included in the response actions history. - -### `--help` - -Add to a command to get help for that command. - -Example: `isolate --help` - -### `clear` - -Clear all output from the response console. - -### `help` - -List supported commands in the console output area. - - -You can also get a list of commands in the Help panel, which stays on the screen independently of the output area. - - -
- -## Help panel - -Click **Help** in the upper-right to open the **Help** panel, which lists available response action commands and parameters as a reference. - - -This panel displays only the response actions that you have the user role privileges to perform. - - - - -You can use this panel to build commands with less typing. Click the add icon () to add a command to the input area, enter any additional parameters or a comment, then press **Return** to run the command. - -If the endpoint is running an older version of ((agent)), some response actions may not be supported, as indicated by an informational icon and tooltip. [Upgrade ((agent))](((fleet-guide))/upgrade-elastic-agent.html) on the endpoint to be able to use the latest response actions. - - - -
- -## Response actions history - -Click **Response actions history** to display a log of the response actions performed on the endpoint, such as isolating a host or terminating a process. You can filter the information displayed in this view. Refer to Response actions history for more details. - - diff --git a/docs/serverless/endpoint-response-actions/third-party-actions.mdx b/docs/serverless/endpoint-response-actions/third-party-actions.mdx deleted file mode 100644 index edf2841e27..0000000000 --- a/docs/serverless/endpoint-response-actions/third-party-actions.mdx +++ /dev/null @@ -1,64 +0,0 @@ ---- -slug: /serverless/security/third-party-actions -title: Third-party response actions -description: Respond to threats on hosts enrolled in third-party security systems. -tags: ["serverless","security","defend","reference","manage"] ---- - - - - - -
-
You can perform response actions on hosts enrolled in third-party endpoint protection systems, such as CrowdStrike or SentinelOne. For example, you can direct the other system to isolate a suspicious endpoint from your network, without leaving the ((elastic-sec)) UI.

* Third-party response actions require the Endpoint Protection Complete project feature.

* Each response action type has its own user role privilege requirements. Refer to the documentation for each response action to find its role requirements.


## Supported systems and response actions

The following third-party response actions are supported for CrowdStrike and SentinelOne. Prior configuration is required to connect each system with ((elastic-sec)).


 These response actions are supported for CrowdStrike-enrolled hosts:

 - **Isolate and release a host** using any of these methods:
   - From a detection alert
   - From the response console (see the example after this list)

- - Refer to the instructions on isolating and releasing hosts for more details. -
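For reference, here is a sketch of isolating and later releasing a CrowdStrike-enrolled host from the response console; the comment text is illustrative, and lines starting with `#` are annotations rather than console input.

```
# Quarantine the host from the network while you investigate.
isolate --comment "Isolating CrowdStrike-enrolled host pending investigation"

# When the investigation is complete, restore the host's network access.
release --comment "Investigation complete, releasing host"
```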
- - - These response actions are supported for SentinelOne-enrolled hosts: - - - **Isolate and release a host** using any of these methods: - - From a detection alert - - From the response console

- - Refer to the instructions on isolating and releasing hosts for more details.

- - - **Retrieve a file from a host** with the `get-file` response action. - - For SentinelOne-enrolled hosts, you must use the password `Elastic@123` to open the retrieved file. - - - - **Get a list of processes running on a host** with the `processes` response action. For SentinelOne-enrolled hosts, this command returns a link for downloading the process list in a file.

- - - **Terminate a process running on a host** with the `kill-process` response action. - - For SentinelOne-enrolled hosts, you must use the parameter `--processName` to identify the process to terminate. `--pid` and `--entityId` are not supported. - - Example: `kill-process --processName cat --comment "Terminate suspicious process"` - - - - **View past response action activity** in the response actions history log. -
-
diff --git a/docs/serverless/explore/data-views-in-sec.mdx b/docs/serverless/explore/data-views-in-sec.mdx deleted file mode 100644 index ec94ad099a..0000000000 --- a/docs/serverless/explore/data-views-in-sec.mdx +++ /dev/null @@ -1,57 +0,0 @@ ---- -slug: /serverless/security/data-views-in-sec -title: ((data-sources-cap)) in Elastic Security -description: Use data views to control what data displays on ((elastic-sec)) pages with event or alert data. -tags: [ 'serverless', 'security', 'reference', 'manage' ] -status: in review ---- - - -
- -((data-sources-cap)) determine what data displays on ((elastic-sec)) pages with event or alert data. -((data-sources-cap)) are defined by the index patterns they include. -Only data from ((es)) [indices](((ref))/documents-indices.html), [data streams](((ref))/data-streams.html), or [index aliases](((ref))/alias.html) specified in the active ((data-source)) will appear. - - -Custom indices are not included in the default ((data-source)). Modify it or create a custom ((data-source)) to include custom indices. - - -## Switch to another ((data-source)) - -You can tell which ((data-source)) is active by clicking the **((data-source-cap))** menu at the upper right of ((elastic-sec)) pages that display event or alert data, such as Overview, Alerts, Timelines, or Hosts. -To switch to another ((data-source)), click **Choose ((data-source))**, select one of the options, and click **Save**. - -![image highlighting how to open the data view selection menu](../images/data-views-in-sec/-getting-started-dataview-button-highlighted.png) - -## Create or modify a ((data-source)) - -To learn how to modify the default **Security Default Data View**, refer to . - -To learn how to modify, create, or delete another ((data-source)) refer to [((data-sources-cap))](((kibana-ref))/data-views.html). - -You can also temporarily modify the active ((data-source)) from the **((data-source-cap))** menu by clicking **Advanced options**, then adding or removing index patterns. - -![video showing how to filter the active data view](../images/data-views-in-sec/-getting-started-dataview-filter-example.gif) - -This only allows you to add index patterns that match indices that currently contain data (other index patterns are unavailable). Note that any changes made are saved in the current browser window and won't persist if you open a new tab. - - - You cannot update the data view for the Alerts page. This includes referencing a cross-cluster search (CCS) data view or any other data view. The Alerts page always shows data from `.alerts-security.alerts-default`. - - -
-

## The default ((data-source))

The default ((data-source)) is defined by the `securitySolution:defaultIndex` setting, which you can modify in your project's advanced settings{/* path to be updated: (**Stack Management** → **Advanced Settings** → **Security Solution**) */}. To learn more about this setting, including its default value, refer to the advanced settings documentation.

The first time a user visits ((elastic-sec)){/* within a given ((kib)) [space](((kibana-ref))/xpack-spaces.html)*/}, the default ((data-source)) is generated{/* in that space*/} and becomes active.

{/* TO-DO: in the first sentence of the following note, link to the Serverless page that explains spaces. */}


 Your space must have the **Data View Management**{/*{kibana-ref}/xpack-spaces.html#spaces-control-feature-visibility[feature visibility*/} feature visibility setting enabled for the default ((data-source)) to be generated and become active in your space.


If you delete the active ((data-source)) when there are no other defined ((data-sources)), the default ((data-source)) regenerates and becomes active when you refresh any ((elastic-sec)) page{/* in the space*/}.
- -The Hosts page provides a comprehensive overview of all hosts and host-related security events. Key performance indicator (KPI) charts, data tables, and interactive widgets let you view specific data, drill down for deeper insights, and interact with Timeline for further investigation. - -![Hosts page](../images/hosts-overview/-management-hosts-hosts-ov-pg.png) - -The Hosts page has the following sections: - -
- -## Host KPI (key performance indicator) charts - -KPI charts show metrics for hosts and unique IPs within the time range specified in the date picker. This data is visualized using linear or bar graphs. - - -Hover inside a KPI chart to display the actions menu (), where you can perform these actions: inspect, open in Lens, and add to a new or existing case. - - -
-

## Data tables

Beneath the KPI charts are data tables, categorized by individual tabs, which are useful for viewing and investigating specific types of data. Select the relevant tab to view the following data:

* **Events**: All host events. To display alerts received from external monitoring tools, scroll down to the Events table and select **Show only external alerts** on the right.
* **All hosts**: High-level host details.
* **Uncommon processes**: Uncommon processes running on hosts.
* **Anomalies**: Anomalies discovered by machine learning jobs.
* **Host risk**: The latest recorded host risk score for each host, and its host risk classification. This feature requires the Security Analytics Complete project feature and must be enabled to display the data. To learn more, refer to our entity risk scoring documentation.
* **Sessions**: Linux process events that you can open in Session View, an investigation tool that allows you to examine Linux process data at a hierarchical level.

The tables within the **Events** and **Sessions** tabs include inline actions and several customization options. To learn more about what you can do with the data in these tables, refer to Manage detection alerts.

![Events table](../images/hosts-overview/-getting-started-users-events-table.png)
- -## Host details page - -A host's details page displays all relevant information for the selected host. To view a host's details page, click its **Host name** link in the **All hosts** table. - -The host details page includes the following sections: - -* **Asset Criticality**: If the `securitySolution:enableAssetCriticality` advanced setting is on, this section displays the host's current asset criticality level. -* **Summary**: Details such as the host ID, when the host was first and last seen, the associated IP addresses, and associated operating system. If the entity risk score feature is enabled, this section also displays host risk score data. -* **Alert metrics**: The total number of alerts by severity, rule, and status (`Open`, `Acknowledged`, or `Closed`). -* **Data tables**: The same data tables as on the main Hosts page, except with values for the selected host instead of all hosts. - -![Host's details page](../images/hosts-overview/-management-hosts-hosts-detail-pg.png) - -## Host details flyout - -In addition to the host details page, relevant host information is also available in the host details flyout throughout the ((elastic-sec)) app. You can access this flyout from the following places: - -* The Alerts page, by clicking on a host name in the Alerts table -* The Entity Analytics dashboard, by clicking on a host name in the Host Risk Scores table -* The **Events** tab on the Users and user details pages, by clicking on a host name in the Events table -* The **User risk** tab on the user details page, by clicking on a host name in the Top risk score contributors table -* The **Events** tab on the Hosts and host details pages, by clicking on a host name in the Events table -* The **Host risk** tab on the host details page, by clicking on a host name in the Top risk score contributors table - -The host details flyout includes the following sections: - -* Host risk summary, which displays host risk data and inputs. -* Asset Criticality, which allows you to view and assign asset criticality. -* Observed data, which displays host details. - -![Host details flyout](../images/hosts-overview/-host-details-flyout.png) - -### Host risk summary - - -The **Host risk summary** section is only available if the risk scoring engine is turned on. - - -The **Host risk summary** section contains a risk summary visualization and table. - -The risk summary visualization shows the host risk score and host risk level. Hover over the visualization to display the **Options** menu (). Use this menu to inspect the visualization's queries, add it to a new or existing case, save it to your Visualize Library, or open it in Lens for customization. - -The risk summary table shows the category, score, and number of risk inputs that determine the host risk score. Hover over the table to display the **Inspect** button (), which allows you to inspect the table's queries. - -To expand the **Host risk summary** section, click **View risk contributions**. The left panel displays additional details about the host's risk inputs: - -* The asset criticality level and contribution score from the latest risk scoring calculation. -* The top 10 alerts that contributed to the latest risk scoring calculation, and each alert's contribution score. - -If more than 10 alerts contributed to the risk scoring calculation, the remaining alerts' aggregate contribution score is displayed below the **Alerts** table. 
- -![Host risk inputs](../images/hosts-overview/-host-risk-inputs.png) - -### Asset Criticality - - -The **Asset Criticality** section is only available if the `securitySolution:enableAssetCriticality` advanced setting is on. - - -The **Asset Criticality** section displays the selected host's asset criticality level. Asset criticality contributes to the overall host risk score. The criticality level defines how impactful the host is when calculating the risk score. - -![Asset criticality](../images/hosts-overview/-host-asset-criticality.png) - -Click **Assign** to assign a criticality level to the selected host, or **Change** to change the currently assigned criticality level. - -### Observed data - -This section displays details such as the host ID, when the host was first and last seen, the associated IP addresses and operating system, and the relevant Endpoint integration policy information. - -![Host observed data](../images/hosts-overview/-host-observed-data.png) \ No newline at end of file diff --git a/docs/serverless/explore/network-page-overview.mdx b/docs/serverless/explore/network-page-overview.mdx deleted file mode 100644 index bde68003f3..0000000000 --- a/docs/serverless/explore/network-page-overview.mdx +++ /dev/null @@ -1,86 +0,0 @@ ---- -slug: /serverless/security/network-page-overview -title: Network page -description: Analyze key network activity metrics on an interactive map, and use network event tables for deeper insights. -tags: [ 'serverless', 'security', 'how-to', 'analyze'] -status: in review ---- - - -
- -The Network page provides key network activity metrics in an interactive map, and network event tables that enable interaction with Timeline. You can drag and drop items of interest from the Network view to Timeline for further investigation. - -![](../images/network-page-overview/-getting-started-network-ui.png) - -
- -## Map - -The map provides an interactive visual overview of your network traffic. Hover over source and destination points to show more information, such as host names and IP addresses. - - -To access the interactive map, you must have the appropriate user role. To learn more about map setup, refer to Configure network map data. - - -There are several ways to drill down: - -* Click a point, hover over the host name or destination IP, then use the filter icon to add a field to the filter bar. -* Drag a field from the map to Timeline. -* Click a host name to go to the Hosts page. -* Click an IP address to open its details page. - -You can start an investigation using the map, and the map refreshes to show related data when you run a query or update the time range. - - -To add and remove layers, click on the **Options** menu (**...**) in the top right corner of the map. - - -
- -## Widgets and data tables - -Interactive widgets let you drill down for deeper insights: - -* Network events -* DNS queries -* Unique flow IDs -* TLS handshakes -* Unique private IPs - -There are also tabs for viewing and investigating specific types of data: - -* **Events**: All network events. To display alerts received from external monitoring tools, scroll down to the events table and select **Show only external alerts** on the right. - -The Events table includes inline actions and several customization options. To learn more about what you can do with the data in these tables, refer to Manage detection alerts. -* **Flows**: Source and destination IP addresses and countries. -* **DNS**: DNS network queries. -* **HTTP**: Received HTTP requests (HTTP requests for applications using - [Elastic APM](((apm-app-ref))/apm-getting-started.html) are monitored by default). - -* **TLS**: Handshake details. -* **Anomalies**: Anomalies discovered by machine learning jobs. - -
- -## IP details page - -An IP's details page shows related network information for the selected IP address. - -To view an IP's details page, click its IP address link from the Source IPs or Destination IPs table. - -The IP's details page includes the following sections: - -* **Summary**: General details such as the location, when the IP address was first and last seen, the associated host ID and host name, and links to external sites for verifying the IP address's reputation. - - - By default, the external sites are [Talos](https://talosintelligence.com/) and - [VirusTotal](https://www.virustotal.com/). Refer to Display reputation links on IP detail pages to learn how to configure IP reputation links. - - -* **Alert metrics**: The total number of alerts by severity, rule, and status (`Open`, `Acknowledged`, or `Closed`). - -* **Data tables**: The same data tables as on the main Network page, except with values for the selected IP address instead of all IP addresses. - -![IP details page](../images/network-page-overview/-getting-started-IP-detail-pg.png) - diff --git a/docs/serverless/explore/runtime-fields.mdx b/docs/serverless/explore/runtime-fields.mdx deleted file mode 100644 index 074e6169aa..0000000000 --- a/docs/serverless/explore/runtime-fields.mdx +++ /dev/null @@ -1,63 +0,0 @@ ---- -slug: /serverless/security/runtime-fields -title: Create runtime fields in ((elastic-sec)) -description: Create, edit, or delete runtime fields in ((elastic-sec)). -tags: [ 'serverless', 'security', 'how-to', 'manage' ] -status: in review ---- - - -
- -Runtime fields are fields that you can add to documents after you've ingested your data. For example, you could combine two fields and treat them as one, or perform calculations on existing data and use the result as a separate field. Runtime fields are evaluated when a query is run. - -You can create a runtime field and add it to your detection alerts or events from any page that lists alerts or events in a data grid table, such as **Alerts**, **Timelines**, **Hosts**, and **Users**. Once created, the new field is added to the current data view and becomes available to all ((elastic-sec)) alerts and events in the data view. - - -Runtime fields can impact performance because they're evaluated each time a query runs. Refer to [Runtime fields](((ref))/runtime.html) for more information. - - -To create a runtime field: - -1. Go to a page that lists alerts or events (for example, **Alerts** or **Timelines** → **_Name of Timeline_**). - -1. Do one of the following: - - * In the Alerts table, click the **Fields** toolbar button in the table's upper-left. From the **Fields** browser, click **Create field**. The **Create field** flyout opens. - - ![Fields browser](../images/runtime-fields/-reference-fields-browser.png) - - * In Timeline, go to the bottom of the sidebar, then click **Add a field**. The **Create field** flyout opens. - - ![Create runtime fields button in Timeline](../images/runtime-fields/-reference-create-runtime-fields-timeline.png) - -1. Enter a **Name** for the new field. - -1. Select a **Type** for the field's data type. - -1. Turn on the **Set value** toggle and enter a [Painless script](((ref))/modules-scripting-painless.html) to define the field's value. The script must match the selected **Type**. For more on adding fields and Painless scripting examples, refer to [Explore your data with runtime fields](((kibana-ref))/managing-data-views.html#runtime-fields). - -1. Use the **Preview** to help you build the script so it returns the expected field value. - -1. Configure other field settings as needed. - - - Some runtime field settings, such as custom labels and display formats, might display differently in some areas of the ((elastic-sec)) UI. - - -1. Click **Save**. The new field appears as a new column in the data grid. - -
- -## Manage runtime fields - -You can edit or delete existing runtime fields from the **Alerts**, **Timelines**, **Hosts**, and **Users** pages. - -1. Click the **Fields** button to open the **Fields** browser, then search for the runtime field you want. - - - Click the **Runtime** column header twice to reorder the fields table with all runtime fields at the top. - - -1. In the **Actions** column, select an option to edit or delete the runtime field. - diff --git a/docs/serverless/explore/siem-field-reference.mdx b/docs/serverless/explore/siem-field-reference.mdx deleted file mode 100644 index 6c9c7bb4bb..0000000000 --- a/docs/serverless/explore/siem-field-reference.mdx +++ /dev/null @@ -1,251 +0,0 @@ ---- -slug: /serverless/security/siem-field-reference -title: ((elastic-sec)) ECS field reference -description: Learn which ECS fields are used by ((elastic-sec)) to display various data. -tags: [ 'serverless', 'security', 'reference', 'manage' ] -status: in review ---- - - -
- -This section lists [Elastic Common Schema](((ecs-ref))) (ECS) fields used by ((elastic-sec)) to provide an optimal SIEM and security analytics experience to users. These fields are used to display data, provide rule previews, enable detection by prebuilt detection rules, provide context during rule triage and investigation, escalate to cases, and more. - - -We recommend you use ((agent)) integrations or ((beats)) to ship your data to ((elastic-sec)). ((agent)) integrations and Beat modules (for example, [((filebeat)) modules](((filebeat-ref))/filebeat-modules.html)) are ECS-compliant, which means data they ship to ((elastic-sec)) will automatically populate the relevant ECS fields. -If you plan to use a custom implementation to map your data to ECS fields (see [how to map data to ECS](((ecs-ref))/ecs-converting.html)), ensure the always required fields are populated. Ideally, all relevant ECS fields should be populated as well. - - -For detailed information about which ECS fields can appear in documents generated by ((elastic-endpoint)), refer to the [Endpoint event documentation](https://github.com/elastic/endpoint-package/tree/main/custom_documentation/doc/endpoint). - -
- -## Always required fields -((elastic-sec)) requires all event and threat intelligence data to be normalized to ECS. For proper operation, all data must contain the following ECS fields: - -* `@timestamp` -* `ecs.version` -* `event.kind` -* `event.category` -* `event.type` - -
- -## Fields required for process events -((elastic-sec)) relies on these fields to analyze and display process data: - -* `process.name` -* `process.pid` - -
- -## Fields required for host events -((elastic-sec)) relies on these fields to analyze and display host data: - -* `host.name` -* `host.id` - -((elastic-sec)) may use these fields to display additional host data: - -* `cloud.instance.id` -* `cloud.machine.type` -* `cloud.provider` -* `cloud.region` -* `host.architecture` -* `host.ip` -* `host.mac` -* `host.os.family` -* `host.os.name` -* `host.os.platform` -* `host.os.version` - -#### Authentication fields - -((elastic-sec)) relies on these fields and values to analyze and display host authentication data: - -* `event.category:authentication` -* `event.outcome:success` or `event.outcome:failure` - -((elastic-sec)) may also use this field to display additional host authentication data: - -* `user.name` - -#### Uncommon process fields - -((elastic-sec)) relies on this field to analyze and display host uncommon process data: - -* `process.name` - -((elastic-sec)) may also use these fields to display uncommon process data: - -* `agent.type` -* `event.action` -* `event.code` -* `event.dataset` -* `event.module` -* `process.args` -* `user.id` -* `user.name` - -
- -## Fields required for network events -((elastic-sec)) relies on these fields to analyze and display network data: - -* `destination.geo.location` (required for display of map data) -* `destination.ip` -* `source.geo.location` (required to display map data) -* `source.ip` - -((elastic-sec)) may also use these fields to analyze and display network data: - -* `destination.as.number` -* `destination.as.organization.name` -* `destination.bytes` -* `destination.domain` -* `destination.geo.country_iso_code` -* `source.as.number` -* `source.as.organization.name` -* `source.bytes` -* `source.domain` -* `source.geo.country_iso_code` - -#### DNS query fields - -((elastic-sec)) relies on these fields to analyze and display DNS data: - -* `dns.question.name` -* `dns.question.registered_domain` - -((elastic-sec)) may also use this field to display DNS data: - -* `dns.question.type` - - - If you want to be able to filter out PTR records, make sure relevant - events have `dns.question.type` fields with values of `PTR`. - - -#### HTTP request fields - -((elastic-sec)) relies on these fields to analyze and display HTTP request data: - -* `http.request.method` -* `http.response.status_code` -* `url.domain` -* `url.path` - -#### TLS fields - -((elastic-sec)) relies on this field to analyze and display TLS data: - -* `tls.server.hash.sha1` - -((elastic-sec)) may also use these fields to analyze and display TLS data: - -* `tls.server.issuer` -* `tls.server.ja3s` -* `tls.server.not_after` -* `tls.server.subject` - -## Fields required for events and external alerts -((elastic-sec)) relies on this field to analyze and display event and external alert data: - -* `event.kind` - - - For external alerts, the `event.kind` field's value must be `alert`. - - -((elastic-sec)) may also use these fields to analyze and display event and external alert data: - -* `destination.bytes` -* `destination.geo.city_name` -* `destination.geo.continent_name` -* `destination.geo.country_iso_code` -* `destination.geo.country_name` -* `destination.geo.region_iso_code` -* `destination.geo.region_name` -* `destination.ip` -* `destination.packets` -* `destination.port` -* `dns.question.name` -* `dns.question.type` -* `dns.resolved_ip` -* `dns.response_code` -* `event.action` -* `event.code` -* `event.created` -* `event.dataset` -* `event.duration` -* `event.end` -* `event.hash` -* `event.id` -* `event.module` -* `event.original` -* `event.outcome` -* `event.provider` -* `event.risk_score_norm` -* `event.risk_score` -* `event.severity` -* `event.start` -* `event.timezone` -* `file.ctime` -* `file.device` -* `file.extension` -* `file.gid` -* `file.group` -* `file.inode` -* `file.mode` -* `file.mtime` -* `file.name` -* `file.owner` -* `file.path` -* `file.size` -* `file.target_path` -* `file.type` -* `file.uid` -* `host.id` -* `host.ip` -* `http.request.body.bytes` -* `http.request.body.content` -* `http.request.method` -* `http.request.referrer` -* `http.response.body.bytes` -* `http.response.body.content` -* `http.response.status_code` -* `http.version` -* `message` -* `network.bytes` -* `network.community_id` -* `network.direction` -* `network.packets` -* `network.protocol` -* `network.transport` -* `pe.original_file_name` -* `process.args` -* `process.executable` -* `process.hash.md5` -* `process.hash.sha1` -* `process.hash.sha256` -* `process.name` -* `process.parent.executable` -* `process.parent.name` -* `process.pid` -* `process.ppid` -* `process.title` -* `process.working_directory` -* `rule.reference` -* `source.bytes` -* 
`source.geo.city_name` -* `source.geo.continent_name` -* `source.geo.country_iso_code` -* `source.geo.country_name` -* `source.geo.region_iso_code` -* `source.geo.region_name` -* `source.ip` -* `source.packets` -* `source.port` -* `user.domain` -* `user.name` - diff --git a/docs/serverless/explore/users-page.mdx b/docs/serverless/explore/users-page.mdx deleted file mode 100644 index 3defe8e72d..0000000000 --- a/docs/serverless/explore/users-page.mdx +++ /dev/null @@ -1,110 +0,0 @@ ---- -slug: /serverless/security/users-page -title: Users page -description: Analyze authentication and user behavior within your environment. -tags: [ 'serverless', 'security', 'how-to', 'analyze' ] -status: in review ---- - - -
-

The Users page provides a comprehensive overview of user data to help you understand authentication and user behavior within your environment. Key performance indicator (KPI) charts, data tables, and interactive widgets let you view specific data and drill down for deeper insights.

![Users page](../images/users-page/-getting-started-users-users-page.png)

The Users page has the following sections:

## User KPI (key performance indicator) charts

KPI charts show the total number of users and successful and failed user authentications within the time range specified in the date picker. Data in the KPI charts is visualized through linear and bar graphs.


Hover inside a KPI chart to display the actions menu (), where you can perform these actions: inspect, open in Lens, and add to a new or existing case.


## Data tables

Beneath the KPI charts are data tables, which are useful for viewing and investigating specific types of data. Select the relevant tab to view the following details:

* **Events**: Ingested events that contain the `user.name` field. You can stack by the `event.action`, `event.dataset`, or `event.module` field. To display alerts received from external monitoring tools, scroll down to the Events table and select **Show only external alerts** on the right.
* **All users**: A chronological list of unique user names, when they were last active, and the associated domains.
* **Authentications**: A chronological list of user authentication events and associated details, such as the number of successes and failures, and the host name of the last successful destination.
* **Anomalies**: Unusual activity discovered by machine learning jobs that contain user data.
* **User risk**: The latest recorded user risk score for each user, and its user risk classification. This feature requires the Security Analytics Complete project feature and must be enabled to display the data. To learn more, refer to our entity risk scoring documentation.

The Events table includes inline actions and several customization options. To learn more about what you can do with the data in these tables, refer to Manage detection alerts.

## User details page

A user's details page displays all relevant information for the selected user. To view a user's details page, click its **User name** link from the **All users** table.

The user details page includes the following sections:

* **Asset Criticality**: If the `securitySolution:enableAssetCriticality` advanced setting is on, this section displays the user's current asset criticality level.

* **Summary**: Details such as the user ID, when the user was first and last seen, the associated IP address(es), and operating system. If the entity risk score feature is enabled, this section also displays user risk score data.

* **Alert metrics**: The total number of alerts by severity, rule, and status (`Open`, `Acknowledged`, or `Closed`).

* **Data tables**: The same data tables as on the main Users page, except with values for the selected user instead of for all users.



## User details flyout

In addition to the user details page, relevant user information is also available in the user details flyout throughout the ((elastic-sec)) app. 
You can access this flyout from the following places: - -* The Alerts page, by clicking on a user name in the Alerts table -* The Entity Analytics dashboard, by clicking on a user name in the User Risk Scores table -* The **Events** tab on the Users and user details pages, by clicking on a user name in the Events table -* The **User risk** tab on the user details page, by clicking on a user name in the Top risk score contributors table -* The **Events** tab on the Hosts and host details pages, by clicking on a user name in the Events table -* The **Host risk** tab on the host details page, by clicking on a user name in the Top risk score contributors table - -The user details flyout includes the following sections: - -* User risk summary, which displays user risk data and inputs. -* Asset Criticality, which allows you to view and assign asset criticality. -* Observed data, which displays user details. - -![User details flyout](../images/users-page/-user-details-flyout.png) - -### User risk summary - - -The **User risk summary** section is only available if the risk scoring engine is turned on. - - -The **User risk summary** section contains a risk summary visualization and table. - -The risk summary visualization shows the user risk score and user risk level. Hover over the visualization to display the **Options** menu (). Use this menu to inspect the visualization's queries, add it to a new or existing case, save it to your Visualize Library, or open it in Lens for customization. - -The risk summary table shows the category, score, and number of risk inputs that determine the user risk score. Hover over the table to display the **Inspect** button (), which allows you to inspect the table's queries. - -To expand the **User risk summary** section, click **View risk contributions**. The left panel displays additional details about the user's risk inputs: - -* The asset criticality level and contribution score from the latest risk scoring calculation. -* The top 10 alerts that contributed to the latest risk scoring calculation, and each alert's contribution score. - -If more than 10 alerts contributed to the risk scoring calculation, the remaining alerts' aggregate contribution score is displayed below the **Alerts** table. - -![User risk inputs](../images/users-page/-user-risk-inputs.png) - -### Asset Criticality - - -The **Asset Criticality** section is only available if the `securitySolution:enableAssetCriticality` advanced setting is on. - - -The **Asset Criticality** section displays the selected user's asset criticality level. Asset criticality contributes to the overall user risk score. The criticality level defines how impactful the user is when calculating the risk score. - -![Asset criticality](../images/users-page/-user-asset-criticality.png) - -Click **Assign** to assign a criticality level to the selected user, or **Change** to change the currently assigned criticality level. - -### Observed data - -This section displays details such as the user ID, when the user was first and last seen, and the associated IP addresses and operating system. 
- -![User observed data](../images/users-page/-user-observed-data.png) \ No newline at end of file diff --git a/docs/serverless/images/about-rules/-detections-all-rules.png b/docs/serverless/images/about-rules/-detections-all-rules.png deleted file mode 100644 index 5ad7137a53..0000000000 Binary files a/docs/serverless/images/about-rules/-detections-all-rules.png and /dev/null differ diff --git a/docs/serverless/images/add-exceptions/-detections-add-exception-ui.png b/docs/serverless/images/add-exceptions/-detections-add-exception-ui.png deleted file mode 100644 index a8d23d0eba..0000000000 Binary files a/docs/serverless/images/add-exceptions/-detections-add-exception-ui.png and /dev/null differ diff --git a/docs/serverless/images/add-exceptions/-detections-endpoint-add-exp.png b/docs/serverless/images/add-exceptions/-detections-endpoint-add-exp.png deleted file mode 100644 index fe5c928264..0000000000 Binary files a/docs/serverless/images/add-exceptions/-detections-endpoint-add-exp.png and /dev/null differ diff --git a/docs/serverless/images/add-exceptions/-detections-exception-affects-multiple-rules.png b/docs/serverless/images/add-exceptions/-detections-exception-affects-multiple-rules.png deleted file mode 100644 index 2ca300f624..0000000000 Binary files a/docs/serverless/images/add-exceptions/-detections-exception-affects-multiple-rules.png and /dev/null differ diff --git a/docs/serverless/images/add-exceptions/-detections-manage-default-rule-list.png b/docs/serverless/images/add-exceptions/-detections-manage-default-rule-list.png deleted file mode 100644 index 1dc17856c4..0000000000 Binary files a/docs/serverless/images/add-exceptions/-detections-manage-default-rule-list.png and /dev/null differ diff --git a/docs/serverless/images/add-exceptions/-detections-nested-exp.png b/docs/serverless/images/add-exceptions/-detections-nested-exp.png deleted file mode 100644 index b403c0b464..0000000000 Binary files a/docs/serverless/images/add-exceptions/-detections-nested-exp.png and /dev/null differ diff --git a/docs/serverless/images/add-exceptions/-detections-rule-exception-tab.png b/docs/serverless/images/add-exceptions/-detections-rule-exception-tab.png deleted file mode 100644 index e3eb5dc8d3..0000000000 Binary files a/docs/serverless/images/add-exceptions/-detections-rule-exception-tab.png and /dev/null differ diff --git a/docs/serverless/images/advanced-settings/-getting-started-solution-advanced-settings.png b/docs/serverless/images/advanced-settings/-getting-started-solution-advanced-settings.png deleted file mode 100644 index 3a2a3c6559..0000000000 Binary files a/docs/serverless/images/advanced-settings/-getting-started-solution-advanced-settings.png and /dev/null differ diff --git a/docs/serverless/images/agent-tamper-protection/agent-tamper-protection.png b/docs/serverless/images/agent-tamper-protection/agent-tamper-protection.png deleted file mode 100644 index 267d1dea23..0000000000 Binary files a/docs/serverless/images/agent-tamper-protection/agent-tamper-protection.png and /dev/null differ diff --git a/docs/serverless/images/ai-assistant-alert-triage/ai-triage-add-to-case.png b/docs/serverless/images/ai-assistant-alert-triage/ai-triage-add-to-case.png deleted file mode 100644 index 29d0f91333..0000000000 Binary files a/docs/serverless/images/ai-assistant-alert-triage/ai-triage-add-to-case.png and /dev/null differ diff --git a/docs/serverless/images/ai-assistant/-assistant-add-alert-context.gif b/docs/serverless/images/ai-assistant/-assistant-add-alert-context.gif deleted file 
mode 100644 index 4c404fc0e0..0000000000 Binary files a/docs/serverless/images/ai-assistant/-assistant-add-alert-context.gif and /dev/null differ diff --git a/docs/serverless/images/ai-assistant/-assistant-ai-assistant-button.png b/docs/serverless/images/ai-assistant/-assistant-ai-assistant-button.png deleted file mode 100644 index e7349f9775..0000000000 Binary files a/docs/serverless/images/ai-assistant/-assistant-ai-assistant-button.png and /dev/null differ diff --git a/docs/serverless/images/ai-assistant/-assistant-assistant-anonymization-menu.png b/docs/serverless/images/ai-assistant/-assistant-assistant-anonymization-menu.png deleted file mode 100644 index e942269e61..0000000000 Binary files a/docs/serverless/images/ai-assistant/-assistant-assistant-anonymization-menu.png and /dev/null differ diff --git a/docs/serverless/images/ai-assistant/-assistant-assistant-settings-menu.png b/docs/serverless/images/ai-assistant/-assistant-assistant-settings-menu.png deleted file mode 100644 index 728e61f944..0000000000 Binary files a/docs/serverless/images/ai-assistant/-assistant-assistant-settings-menu.png and /dev/null differ diff --git a/docs/serverless/images/ai-assistant/-assistant-assistant.gif b/docs/serverless/images/ai-assistant/-assistant-assistant.gif deleted file mode 100644 index 0fc37c40cf..0000000000 Binary files a/docs/serverless/images/ai-assistant/-assistant-assistant.gif and /dev/null differ diff --git a/docs/serverless/images/ai-assistant/-assistant-quick-prompts.png b/docs/serverless/images/ai-assistant/-assistant-quick-prompts.png deleted file mode 100644 index 2adfa57f15..0000000000 Binary files a/docs/serverless/images/ai-assistant/-assistant-quick-prompts.png and /dev/null differ diff --git a/docs/serverless/images/ai-assistant/-assistant-system-prompt.gif b/docs/serverless/images/ai-assistant/-assistant-system-prompt.gif deleted file mode 100644 index 09ddeacb10..0000000000 Binary files a/docs/serverless/images/ai-assistant/-assistant-system-prompt.gif and /dev/null differ diff --git a/docs/serverless/images/ai-assistant/assistant-basic-view.png b/docs/serverless/images/ai-assistant/assistant-basic-view.png deleted file mode 100644 index 4251f73ea2..0000000000 Binary files a/docs/serverless/images/ai-assistant/assistant-basic-view.png and /dev/null differ diff --git a/docs/serverless/images/ai-assistant/assistant-kb-menu.png b/docs/serverless/images/ai-assistant/assistant-kb-menu.png deleted file mode 100644 index 0f907cdf6f..0000000000 Binary files a/docs/serverless/images/ai-assistant/assistant-kb-menu.png and /dev/null differ diff --git a/docs/serverless/images/alert-suppression/-detections-alert-suppression-options.png b/docs/serverless/images/alert-suppression/-detections-alert-suppression-options.png deleted file mode 100644 index 61678b5e04..0000000000 Binary files a/docs/serverless/images/alert-suppression/-detections-alert-suppression-options.png and /dev/null differ diff --git a/docs/serverless/images/alert-suppression/-detections-suppressed-alerts-details.png b/docs/serverless/images/alert-suppression/-detections-suppressed-alerts-details.png deleted file mode 100644 index e13ccf34c3..0000000000 Binary files a/docs/serverless/images/alert-suppression/-detections-suppressed-alerts-details.png and /dev/null differ diff --git a/docs/serverless/images/alert-suppression/-detections-suppressed-alerts-table-column.png b/docs/serverless/images/alert-suppression/-detections-suppressed-alerts-table-column.png deleted file mode 100644 index e78222217d..0000000000 Binary 
files a/docs/serverless/images/alert-suppression/-detections-suppressed-alerts-table-column.png and /dev/null differ diff --git a/docs/serverless/images/alert-suppression/-detections-suppressed-alerts-table.png b/docs/serverless/images/alert-suppression/-detections-suppressed-alerts-table.png deleted file mode 100644 index 845f9f2e3a..0000000000 Binary files a/docs/serverless/images/alert-suppression/-detections-suppressed-alerts-table.png and /dev/null differ diff --git a/docs/serverless/images/alert-suppression/-detections-timeline-button.png b/docs/serverless/images/alert-suppression/-detections-timeline-button.png deleted file mode 100644 index 3fc4ef7d22..0000000000 Binary files a/docs/serverless/images/alert-suppression/-detections-timeline-button.png and /dev/null differ diff --git a/docs/serverless/images/alerts-run-osquery/-osquery-setup-query.png b/docs/serverless/images/alerts-run-osquery/-osquery-setup-query.png deleted file mode 100644 index 245a9f95ae..0000000000 Binary files a/docs/serverless/images/alerts-run-osquery/-osquery-setup-query.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-additional-filters.png b/docs/serverless/images/alerts-ui-manage/-detections-additional-filters.png deleted file mode 100644 index af39ef8d42..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-additional-filters.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-alert-assigned-alerts.png b/docs/serverless/images/alerts-ui-manage/-detections-alert-assigned-alerts.png deleted file mode 100644 index 1d63dccf53..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-alert-assigned-alerts.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-alert-change-status.png b/docs/serverless/images/alerts-ui-manage/-detections-alert-change-status.png deleted file mode 100644 index 333366d09f..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-alert-change-status.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-alert-filter-assigned-alerts.png b/docs/serverless/images/alerts-ui-manage/-detections-alert-filter-assigned-alerts.png deleted file mode 100644 index 98f0833897..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-alert-filter-assigned-alerts.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-alert-flyout-assignees.png b/docs/serverless/images/alerts-ui-manage/-detections-alert-flyout-assignees.png deleted file mode 100644 index 423adfd6b7..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-alert-flyout-assignees.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-alert-page-dropdown-controls.png b/docs/serverless/images/alerts-ui-manage/-detections-alert-page-dropdown-controls.png deleted file mode 100644 index fe7639fdab..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-alert-page-dropdown-controls.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-alert-page.png b/docs/serverless/images/alerts-ui-manage/-detections-alert-page.png deleted file mode 100644 index 23c577761d..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-alert-page.png and /dev/null differ diff --git 
a/docs/serverless/images/alerts-ui-manage/-detections-alert-table-toolbar-buttons.png b/docs/serverless/images/alerts-ui-manage/-detections-alert-table-toolbar-buttons.png deleted file mode 100644 index 91211efadd..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-alert-table-toolbar-buttons.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-bulk-add-alerts-to-timeline.png b/docs/serverless/images/alerts-ui-manage/-detections-bulk-add-alerts-to-timeline.png deleted file mode 100644 index bfcac3e402..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-bulk-add-alerts-to-timeline.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-bulk-apply-alert-tag.png b/docs/serverless/images/alerts-ui-manage/-detections-bulk-apply-alert-tag.png deleted file mode 100644 index c8c1d8211b..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-bulk-apply-alert-tag.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-event-rendered-view.png b/docs/serverless/images/alerts-ui-manage/-detections-event-rendered-view.png deleted file mode 100644 index 54471ada23..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-event-rendered-view.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-group-alerts-expand.png b/docs/serverless/images/alerts-ui-manage/-detections-group-alerts-expand.png deleted file mode 100644 index 0304592bd8..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-group-alerts-expand.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-group-alerts.png b/docs/serverless/images/alerts-ui-manage/-detections-group-alerts.png deleted file mode 100644 index 6f794f1de7..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-group-alerts.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-inline-actions-menu.png b/docs/serverless/images/alerts-ui-manage/-detections-inline-actions-menu.png deleted file mode 100644 index 5c4dff3d77..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-inline-actions-menu.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-timeline-button.png b/docs/serverless/images/alerts-ui-manage/-detections-timeline-button.png deleted file mode 100644 index 3fc4ef7d22..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-timeline-button.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-manage/-detections-view-alert-details.png b/docs/serverless/images/alerts-ui-manage/-detections-view-alert-details.png deleted file mode 100644 index b777b0d25b..0000000000 Binary files a/docs/serverless/images/alerts-ui-manage/-detections-view-alert-details.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-monitor/-detections-monitor-table.png b/docs/serverless/images/alerts-ui-monitor/-detections-monitor-table.png deleted file mode 100644 index 05cb4c3aa2..0000000000 Binary files a/docs/serverless/images/alerts-ui-monitor/-detections-monitor-table.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-monitor/-detections-rule-execution-logs.png b/docs/serverless/images/alerts-ui-monitor/-detections-rule-execution-logs.png deleted file mode 100644 index 6284bed31d..0000000000 Binary files 
a/docs/serverless/images/alerts-ui-monitor/-detections-rule-execution-logs.png and /dev/null differ diff --git a/docs/serverless/images/alerts-ui-monitor/-detections-timestamp-override.png b/docs/serverless/images/alerts-ui-monitor/-detections-timestamp-override.png deleted file mode 100644 index 615a7db008..0000000000 Binary files a/docs/serverless/images/alerts-ui-monitor/-detections-timestamp-override.png and /dev/null differ diff --git a/docs/serverless/images/analyze-risk-score-data/alerts-flyout-rs.png b/docs/serverless/images/analyze-risk-score-data/alerts-flyout-rs.png deleted file mode 100644 index 6da59af6b9..0000000000 Binary files a/docs/serverless/images/analyze-risk-score-data/alerts-flyout-rs.png and /dev/null differ diff --git a/docs/serverless/images/analyze-risk-score-data/alerts-table-rs.png b/docs/serverless/images/analyze-risk-score-data/alerts-table-rs.png deleted file mode 100644 index d572deafa7..0000000000 Binary files a/docs/serverless/images/analyze-risk-score-data/alerts-table-rs.png and /dev/null differ diff --git a/docs/serverless/images/analyze-risk-score-data/filter-by-asset-criticality.png b/docs/serverless/images/analyze-risk-score-data/filter-by-asset-criticality.png deleted file mode 100644 index d5f426ca27..0000000000 Binary files a/docs/serverless/images/analyze-risk-score-data/filter-by-asset-criticality.png and /dev/null differ diff --git a/docs/serverless/images/analyze-risk-score-data/filter-by-host-risk-level.png b/docs/serverless/images/analyze-risk-score-data/filter-by-host-risk-level.png deleted file mode 100644 index 84a56291d6..0000000000 Binary files a/docs/serverless/images/analyze-risk-score-data/filter-by-host-risk-level.png and /dev/null differ diff --git a/docs/serverless/images/analyze-risk-score-data/group-by-asset-criticality.png b/docs/serverless/images/analyze-risk-score-data/group-by-asset-criticality.png deleted file mode 100644 index 5d5fa6e283..0000000000 Binary files a/docs/serverless/images/analyze-risk-score-data/group-by-asset-criticality.png and /dev/null differ diff --git a/docs/serverless/images/analyze-risk-score-data/group-by-host-risk-level.png b/docs/serverless/images/analyze-risk-score-data/group-by-host-risk-level.png deleted file mode 100644 index 3c40e9dbf5..0000000000 Binary files a/docs/serverless/images/analyze-risk-score-data/group-by-host-risk-level.png and /dev/null differ diff --git a/docs/serverless/images/analyze-risk-score-data/host-details-hr-tab.png b/docs/serverless/images/analyze-risk-score-data/host-details-hr-tab.png deleted file mode 100644 index 64bf96324a..0000000000 Binary files a/docs/serverless/images/analyze-risk-score-data/host-details-hr-tab.png and /dev/null differ diff --git a/docs/serverless/images/analyze-risk-score-data/host-details-overview.png b/docs/serverless/images/analyze-risk-score-data/host-details-overview.png deleted file mode 100644 index 8c19f8b90c..0000000000 Binary files a/docs/serverless/images/analyze-risk-score-data/host-details-overview.png and /dev/null differ diff --git a/docs/serverless/images/analyze-risk-score-data/hosts-hr-data.png b/docs/serverless/images/analyze-risk-score-data/hosts-hr-data.png deleted file mode 100644 index a578265942..0000000000 Binary files a/docs/serverless/images/analyze-risk-score-data/hosts-hr-data.png and /dev/null differ diff --git a/docs/serverless/images/analyze-risk-score-data/hosts-hr-level.png b/docs/serverless/images/analyze-risk-score-data/hosts-hr-level.png deleted file mode 100644 index b73d347653..0000000000 Binary files 
[Binary image deletions: the remaining hunks delete screenshot .png and .gif assets under docs/serverless/images/ (analyze-risk-score-data, asset-criticality, attack-discovery, benchmark-rules, blocklist, cases, cloud-posture, configure-endpoint-integration-policy, cspm, dashboards, deploy-elastic-endpoint, detections, endpoints-page, es-ui-overview, event-filters, exceptions, findings-page, host-isolation, hosts-overview, indicators-of-compromise, install-endpoint, interactive-investigation-guides, invest-guide-run-osquery, kubernetes-dashboard, machine-learning, network-page-overview, osquery-response-action, overview-dashboard, policies-page-ov, prebuilt-rules-management, response-actions, rule-monitoring-dashboard, rules-coverage, and rules-ui-create subdirectories), each reported by git as "deleted file mode 100644" with "Binary files a/<path> and /dev/null differ".]
diff --git a/docs/serverless/images/rules-ui-management/-detections-all-rules.png b/docs/serverless/images/rules-ui-management/-detections-all-rules.png deleted file mode 100644 index 5ad7137a53..0000000000 Binary files a/docs/serverless/images/rules-ui-management/-detections-all-rules.png and /dev/null differ diff --git a/docs/serverless/images/rules-ui-management/-detections-rule-snoozing.png b/docs/serverless/images/rules-ui-management/-detections-rule-snoozing.png deleted file mode 100644 index 8edd67978f..0000000000 Binary files a/docs/serverless/images/rules-ui-management/-detections-rule-snoozing.png and /dev/null differ diff --git a/docs/serverless/images/runtime-fields/-reference-create-field-flyout.png b/docs/serverless/images/runtime-fields/-reference-create-field-flyout.png deleted file mode 100644 index 7f0d6dfcda..0000000000 Binary files a/docs/serverless/images/runtime-fields/-reference-create-field-flyout.png and /dev/null differ diff --git a/docs/serverless/images/runtime-fields/-reference-create-runtime-fields-timeline.png b/docs/serverless/images/runtime-fields/-reference-create-runtime-fields-timeline.png deleted file mode 100644 index 14b95f85c0..0000000000 Binary files a/docs/serverless/images/runtime-fields/-reference-create-runtime-fields-timeline.png and /dev/null differ diff --git a/docs/serverless/images/runtime-fields/-reference-fields-browser.png b/docs/serverless/images/runtime-fields/-reference-fields-browser.png deleted file mode 100644 index b46d4f5487..0000000000 Binary files a/docs/serverless/images/runtime-fields/-reference-fields-browser.png and /dev/null differ diff --git a/docs/serverless/images/session-view/-cloud-native-security-session-view-alert-types-badge.png b/docs/serverless/images/session-view/-cloud-native-security-session-view-alert-types-badge.png deleted file mode 100644 index 569d038dc4..0000000000 Binary files a/docs/serverless/images/session-view/-cloud-native-security-session-view-alert-types-badge.png and /dev/null differ diff --git a/docs/serverless/images/session-view/-detections-session-view-action-icon-detail.png b/docs/serverless/images/session-view/-detections-session-view-action-icon-detail.png deleted file mode 100644 index 4646f7e454..0000000000 Binary files a/docs/serverless/images/session-view/-detections-session-view-action-icon-detail.png and /dev/null differ diff --git a/docs/serverless/images/session-view/-detections-session-view-exec-user-change-badge.png b/docs/serverless/images/session-view/-detections-session-view-exec-user-change-badge.png deleted file mode 100644 index 2247f6a16c..0000000000 Binary files a/docs/serverless/images/session-view/-detections-session-view-exec-user-change-badge.png and /dev/null differ diff --git a/docs/serverless/images/session-view/-detections-session-view-output-badge.png b/docs/serverless/images/session-view/-detections-session-view-output-badge.png deleted file mode 100644 index 8d2c3cd6d2..0000000000 Binary files a/docs/serverless/images/session-view/-detections-session-view-output-badge.png and /dev/null differ diff --git a/docs/serverless/images/session-view/-detections-session-view-output-viewer.png b/docs/serverless/images/session-view/-detections-session-view-output-viewer.png deleted file mode 100644 index fce6b0630d..0000000000 Binary files a/docs/serverless/images/session-view/-detections-session-view-output-viewer.png and /dev/null differ diff --git a/docs/serverless/images/session-view/-detections-session-view-script-button.png 
b/docs/serverless/images/session-view/-detections-session-view-script-button.png deleted file mode 100644 index 9ddea9446e..0000000000 Binary files a/docs/serverless/images/session-view/-detections-session-view-script-button.png and /dev/null differ diff --git a/docs/serverless/images/session-view/-detections-session-view-terminal-labeled.png b/docs/serverless/images/session-view/-detections-session-view-terminal-labeled.png deleted file mode 100644 index 43c3554d9b..0000000000 Binary files a/docs/serverless/images/session-view/-detections-session-view-terminal-labeled.png and /dev/null differ diff --git a/docs/serverless/images/shared-exception-lists/-detections-actions-exception-list.png b/docs/serverless/images/shared-exception-lists/-detections-actions-exception-list.png deleted file mode 100644 index 9a8c48bca6..0000000000 Binary files a/docs/serverless/images/shared-exception-lists/-detections-actions-exception-list.png and /dev/null differ diff --git a/docs/serverless/images/shared-exception-lists/-detections-associated-shared-exception-list.png b/docs/serverless/images/shared-exception-lists/-detections-associated-shared-exception-list.png deleted file mode 100644 index e0bb062d15..0000000000 Binary files a/docs/serverless/images/shared-exception-lists/-detections-associated-shared-exception-list.png and /dev/null differ diff --git a/docs/serverless/images/shared-exception-lists/-detections-rule-exceptions-page.png b/docs/serverless/images/shared-exception-lists/-detections-rule-exceptions-page.png deleted file mode 100644 index 912feec301..0000000000 Binary files a/docs/serverless/images/shared-exception-lists/-detections-rule-exceptions-page.png and /dev/null differ diff --git a/docs/serverless/images/shared-exception-lists/-detections-view-filter-shared-exception.png b/docs/serverless/images/shared-exception-lists/-detections-view-filter-shared-exception.png deleted file mode 100644 index 199026d8d2..0000000000 Binary files a/docs/serverless/images/shared-exception-lists/-detections-view-filter-shared-exception.png and /dev/null differ diff --git a/docs/serverless/images/signals-to-cases/-detections-add-alert-to-case.gif b/docs/serverless/images/signals-to-cases/-detections-add-alert-to-case.gif deleted file mode 100644 index 5141258e0a..0000000000 Binary files a/docs/serverless/images/signals-to-cases/-detections-add-alert-to-case.gif and /dev/null differ diff --git a/docs/serverless/images/signals-to-cases/-detections-add-alert-to-existing-case.png b/docs/serverless/images/signals-to-cases/-detections-add-alert-to-existing-case.png deleted file mode 100644 index 27b7eaa687..0000000000 Binary files a/docs/serverless/images/signals-to-cases/-detections-add-alert-to-existing-case.png and /dev/null differ diff --git a/docs/serverless/images/signals-to-cases/-detections-add-alert-to-new-case.png b/docs/serverless/images/signals-to-cases/-detections-add-alert-to-new-case.png deleted file mode 100644 index 9082c6a966..0000000000 Binary files a/docs/serverless/images/signals-to-cases/-detections-add-alert-to-new-case.png and /dev/null differ diff --git a/docs/serverless/images/timeline-object-schema/-reference-timeline-object-ui.png b/docs/serverless/images/timeline-object-schema/-reference-timeline-object-ui.png deleted file mode 100644 index 46024aaae5..0000000000 Binary files a/docs/serverless/images/timeline-object-schema/-reference-timeline-object-ui.png and /dev/null differ diff --git a/docs/serverless/images/timeline-templates-ui/-events-all-actions-timeline-ui.png 
b/docs/serverless/images/timeline-templates-ui/-events-all-actions-timeline-ui.png deleted file mode 100644 index 6e2bea7e1b..0000000000 Binary files a/docs/serverless/images/timeline-templates-ui/-events-all-actions-timeline-ui.png and /dev/null differ diff --git a/docs/serverless/images/timeline-templates-ui/-events-create-a-timeline-template-field.png b/docs/serverless/images/timeline-templates-ui/-events-create-a-timeline-template-field.png deleted file mode 100644 index 6b2fd0ea1c..0000000000 Binary files a/docs/serverless/images/timeline-templates-ui/-events-create-a-timeline-template-field.png and /dev/null differ diff --git a/docs/serverless/images/timeline-templates-ui/-events-invalid-filter.png b/docs/serverless/images/timeline-templates-ui/-events-invalid-filter.png deleted file mode 100644 index ff0e5fba26..0000000000 Binary files a/docs/serverless/images/timeline-templates-ui/-events-invalid-filter.png and /dev/null differ diff --git a/docs/serverless/images/timeline-templates-ui/-events-template-filter-value.png b/docs/serverless/images/timeline-templates-ui/-events-template-filter-value.png deleted file mode 100644 index 1a9c7c1241..0000000000 Binary files a/docs/serverless/images/timeline-templates-ui/-events-template-filter-value.png and /dev/null differ diff --git a/docs/serverless/images/timeline-templates-ui/-events-template-query-example.png b/docs/serverless/images/timeline-templates-ui/-events-template-query-example.png deleted file mode 100644 index 80b305487a..0000000000 Binary files a/docs/serverless/images/timeline-templates-ui/-events-template-query-example.png and /dev/null differ diff --git a/docs/serverless/images/timeline-templates-ui/-events-timeline-template-filter.png b/docs/serverless/images/timeline-templates-ui/-events-timeline-template-filter.png deleted file mode 100644 index 0bf39ffa8a..0000000000 Binary files a/docs/serverless/images/timeline-templates-ui/-events-timeline-template-filter.png and /dev/null differ diff --git a/docs/serverless/images/timelines-ui/-events-correlation-tab-eql-query.png b/docs/serverless/images/timelines-ui/-events-correlation-tab-eql-query.png deleted file mode 100644 index 2c2a104489..0000000000 Binary files a/docs/serverless/images/timelines-ui/-events-correlation-tab-eql-query.png and /dev/null differ diff --git a/docs/serverless/images/timelines-ui/-events-esql-tab.png b/docs/serverless/images/timelines-ui/-events-esql-tab.png deleted file mode 100644 index deb79f0e16..0000000000 Binary files a/docs/serverless/images/timelines-ui/-events-esql-tab.png and /dev/null differ diff --git a/docs/serverless/images/timelines-ui/-events-timeline-disable-filter.png b/docs/serverless/images/timelines-ui/-events-timeline-disable-filter.png deleted file mode 100644 index 9a73b5b87c..0000000000 Binary files a/docs/serverless/images/timelines-ui/-events-timeline-disable-filter.png and /dev/null differ diff --git a/docs/serverless/images/timelines-ui/-events-timeline-field-exists.png b/docs/serverless/images/timelines-ui/-events-timeline-field-exists.png deleted file mode 100644 index c78c054156..0000000000 Binary files a/docs/serverless/images/timelines-ui/-events-timeline-field-exists.png and /dev/null differ diff --git a/docs/serverless/images/timelines-ui/-events-timeline-filter-exclude.png b/docs/serverless/images/timelines-ui/-events-timeline-filter-exclude.png deleted file mode 100644 index 8df9ee8512..0000000000 Binary files a/docs/serverless/images/timelines-ui/-events-timeline-filter-exclude.png and /dev/null differ diff 
--git a/docs/serverless/images/timelines-ui/-events-timeline-filter-value.png b/docs/serverless/images/timelines-ui/-events-timeline-filter-value.png deleted file mode 100644 index 7e51f9041a..0000000000 Binary files a/docs/serverless/images/timelines-ui/-events-timeline-filter-value.png and /dev/null differ diff --git a/docs/serverless/images/timelines-ui/-events-timeline-sidebar.png b/docs/serverless/images/timelines-ui/-events-timeline-sidebar.png deleted file mode 100644 index 2c4152ffeb..0000000000 Binary files a/docs/serverless/images/timelines-ui/-events-timeline-sidebar.png and /dev/null differ diff --git a/docs/serverless/images/timelines-ui/-events-timeline-ui-filter-options.png b/docs/serverless/images/timelines-ui/-events-timeline-ui-filter-options.png deleted file mode 100644 index e3aeddcec9..0000000000 Binary files a/docs/serverless/images/timelines-ui/-events-timeline-ui-filter-options.png and /dev/null differ diff --git a/docs/serverless/images/timelines-ui/-events-timeline-ui-renderer.png b/docs/serverless/images/timelines-ui/-events-timeline-ui-renderer.png deleted file mode 100644 index e799fe2236..0000000000 Binary files a/docs/serverless/images/timelines-ui/-events-timeline-ui-renderer.png and /dev/null differ diff --git a/docs/serverless/images/timelines-ui/-events-timeline-ui-updated.png b/docs/serverless/images/timelines-ui/-events-timeline-ui-updated.png deleted file mode 100644 index 4149116feb..0000000000 Binary files a/docs/serverless/images/timelines-ui/-events-timeline-ui-updated.png and /dev/null differ diff --git a/docs/serverless/images/trusted-apps-ov/-management-admin-trusted-apps-list.png b/docs/serverless/images/trusted-apps-ov/-management-admin-trusted-apps-list.png deleted file mode 100644 index 828f6e85ea..0000000000 Binary files a/docs/serverless/images/trusted-apps-ov/-management-admin-trusted-apps-list.png and /dev/null differ diff --git a/docs/serverless/images/ts-detection-rules/-troubleshooting-rules-ts-ml-job-stopped.png b/docs/serverless/images/ts-detection-rules/-troubleshooting-rules-ts-ml-job-stopped.png deleted file mode 100644 index 7fba6ed8f9..0000000000 Binary files a/docs/serverless/images/ts-detection-rules/-troubleshooting-rules-ts-ml-job-stopped.png and /dev/null differ diff --git a/docs/serverless/images/ts-detection-rules/-troubleshooting-warning-icon-message.png b/docs/serverless/images/ts-detection-rules/-troubleshooting-warning-icon-message.png deleted file mode 100644 index 07e6fded6a..0000000000 Binary files a/docs/serverless/images/ts-detection-rules/-troubleshooting-warning-icon-message.png and /dev/null differ diff --git a/docs/serverless/images/ts-detection-rules/-troubleshooting-warning-type-conflicts.png b/docs/serverless/images/ts-detection-rules/-troubleshooting-warning-type-conflicts.png deleted file mode 100644 index 45058f5e54..0000000000 Binary files a/docs/serverless/images/ts-detection-rules/-troubleshooting-warning-type-conflicts.png and /dev/null differ diff --git a/docs/serverless/images/ts-detection-rules/-troubleshooting-warning-unmapped-fields.png b/docs/serverless/images/ts-detection-rules/-troubleshooting-warning-unmapped-fields.png deleted file mode 100644 index 2b167c0029..0000000000 Binary files a/docs/serverless/images/ts-detection-rules/-troubleshooting-warning-unmapped-fields.png and /dev/null differ diff --git a/docs/serverless/images/ts-management/-troubleshooting-endpoints-transform-failed.png b/docs/serverless/images/ts-management/-troubleshooting-endpoints-transform-failed.png deleted file 
mode 100644 index 1b46dd539f..0000000000 Binary files a/docs/serverless/images/ts-management/-troubleshooting-endpoints-transform-failed.png and /dev/null differ diff --git a/docs/serverless/images/ts-management/-troubleshooting-transforms-start.png b/docs/serverless/images/ts-management/-troubleshooting-transforms-start.png deleted file mode 100644 index 1dcc9735df..0000000000 Binary files a/docs/serverless/images/ts-management/-troubleshooting-transforms-start.png and /dev/null differ diff --git a/docs/serverless/images/ts-management/-troubleshooting-unhealthy-agent-fleet.png b/docs/serverless/images/ts-management/-troubleshooting-unhealthy-agent-fleet.png deleted file mode 100644 index ea140f2993..0000000000 Binary files a/docs/serverless/images/ts-management/-troubleshooting-unhealthy-agent-fleet.png and /dev/null differ diff --git a/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-cloned-job-details.png b/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-cloned-job-details.png deleted file mode 100644 index 4c04764406..0000000000 Binary files a/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-cloned-job-details.png and /dev/null differ diff --git a/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-filter-add-item.png b/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-filter-add-item.png deleted file mode 100644 index 245670662a..0000000000 Binary files a/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-filter-add-item.png and /dev/null differ diff --git a/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-ml-rule-threshold.png b/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-ml-rule-threshold.png deleted file mode 100644 index 98b1beeeeb..0000000000 Binary files a/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-ml-rule-threshold.png and /dev/null differ diff --git a/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-rule-scope.png b/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-rule-scope.png deleted file mode 100644 index 5179ea992d..0000000000 Binary files a/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-rule-scope.png and /dev/null differ diff --git a/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-start-job-window.png b/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-start-job-window.png deleted file mode 100644 index 12c63d0be6..0000000000 Binary files a/docs/serverless/images/tuning-anomaly-results/-detections-machine-learning-start-job-window.png and /dev/null differ diff --git a/docs/serverless/images/tuning-detection-signals/-detections-prebuilt-rules-process-exception.png b/docs/serverless/images/tuning-detection-signals/-detections-prebuilt-rules-process-exception.png deleted file mode 100644 index c6495e566e..0000000000 Binary files a/docs/serverless/images/tuning-detection-signals/-detections-prebuilt-rules-process-exception.png and /dev/null differ diff --git a/docs/serverless/images/tuning-detection-signals/-detections-prebuilt-rules-process-specific-exception.png b/docs/serverless/images/tuning-detection-signals/-detections-prebuilt-rules-process-specific-exception.png deleted file mode 100644 index edcad01218..0000000000 Binary files 
a/docs/serverless/images/tuning-detection-signals/-detections-prebuilt-rules-process-specific-exception.png and /dev/null differ diff --git a/docs/serverless/images/tuning-detection-signals/-detections-prebuilt-rules-rule-details-page.png b/docs/serverless/images/tuning-detection-signals/-detections-prebuilt-rules-rule-details-page.png deleted file mode 100644 index ad6deeb6fe..0000000000 Binary files a/docs/serverless/images/tuning-detection-signals/-detections-prebuilt-rules-rule-details-page.png and /dev/null differ diff --git a/docs/serverless/images/turn-on-risk-engine/preview-risky-entities.png b/docs/serverless/images/turn-on-risk-engine/preview-risky-entities.png deleted file mode 100644 index 838ee1a7ff..0000000000 Binary files a/docs/serverless/images/turn-on-risk-engine/preview-risky-entities.png and /dev/null differ diff --git a/docs/serverless/images/turn-on-risk-engine/turn-on-risk-engine.png b/docs/serverless/images/turn-on-risk-engine/turn-on-risk-engine.png deleted file mode 100644 index 7593e7df10..0000000000 Binary files a/docs/serverless/images/turn-on-risk-engine/turn-on-risk-engine.png and /dev/null differ diff --git a/docs/serverless/images/users-page/-getting-started-users-user-details-pg.png b/docs/serverless/images/users-page/-getting-started-users-user-details-pg.png deleted file mode 100644 index f26432f08d..0000000000 Binary files a/docs/serverless/images/users-page/-getting-started-users-user-details-pg.png and /dev/null differ diff --git a/docs/serverless/images/users-page/-getting-started-users-users-page.png b/docs/serverless/images/users-page/-getting-started-users-users-page.png deleted file mode 100644 index c7028ebbd7..0000000000 Binary files a/docs/serverless/images/users-page/-getting-started-users-users-page.png and /dev/null differ diff --git a/docs/serverless/images/users-page/-user-asset-criticality.png b/docs/serverless/images/users-page/-user-asset-criticality.png deleted file mode 100644 index 72e4e34ca1..0000000000 Binary files a/docs/serverless/images/users-page/-user-asset-criticality.png and /dev/null differ diff --git a/docs/serverless/images/users-page/-user-details-flyout.png b/docs/serverless/images/users-page/-user-details-flyout.png deleted file mode 100644 index 99452099e2..0000000000 Binary files a/docs/serverless/images/users-page/-user-details-flyout.png and /dev/null differ diff --git a/docs/serverless/images/users-page/-user-observed-data.png b/docs/serverless/images/users-page/-user-observed-data.png deleted file mode 100644 index 0f2ec3f9f4..0000000000 Binary files a/docs/serverless/images/users-page/-user-observed-data.png and /dev/null differ diff --git a/docs/serverless/images/users-page/-user-risk-inputs.png b/docs/serverless/images/users-page/-user-risk-inputs.png deleted file mode 100644 index f6ec9c0ce6..0000000000 Binary files a/docs/serverless/images/users-page/-user-risk-inputs.png and /dev/null differ diff --git a/docs/serverless/images/value-lists-exceptions/-detections-edit-value-lists.png b/docs/serverless/images/value-lists-exceptions/-detections-edit-value-lists.png deleted file mode 100644 index dd53a8dc11..0000000000 Binary files a/docs/serverless/images/value-lists-exceptions/-detections-edit-value-lists.png and /dev/null differ diff --git a/docs/serverless/images/value-lists-exceptions/-detections-manage-value-list.png b/docs/serverless/images/value-lists-exceptions/-detections-manage-value-list.png deleted file mode 100644 index 7b65290ccf..0000000000 Binary files 
a/docs/serverless/images/value-lists-exceptions/-detections-manage-value-list.png and /dev/null differ diff --git a/docs/serverless/images/value-lists-exceptions/-detections-upload-lists-ui.png b/docs/serverless/images/value-lists-exceptions/-detections-upload-lists-ui.png deleted file mode 100644 index 37554f0e40..0000000000 Binary files a/docs/serverless/images/value-lists-exceptions/-detections-upload-lists-ui.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-about-section-rp.png b/docs/serverless/images/view-alert-details/-detections-about-section-rp.png deleted file mode 100644 index 754cf1c0dd..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-about-section-rp.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-alert-details-flyout-left-panel.png b/docs/serverless/images/view-alert-details/-detections-alert-details-flyout-left-panel.png deleted file mode 100644 index 08e5dfe55e..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-alert-details-flyout-left-panel.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-alert-details-flyout-preview-panel.gif b/docs/serverless/images/view-alert-details/-detections-alert-details-flyout-preview-panel.gif deleted file mode 100644 index 52f91aaf38..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-alert-details-flyout-preview-panel.gif and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-alert-details-flyout-right-panel.png b/docs/serverless/images/view-alert-details/-detections-alert-details-flyout-right-panel.png deleted file mode 100644 index 1f01cda76a..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-alert-details-flyout-right-panel.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-correlations-overview.png b/docs/serverless/images/view-alert-details/-detections-correlations-overview.png deleted file mode 100644 index 6fec67ee03..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-correlations-overview.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-entities-overview.png b/docs/serverless/images/view-alert-details/-detections-entities-overview.png deleted file mode 100644 index e27d149368..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-entities-overview.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-expand-details-button.png b/docs/serverless/images/view-alert-details/-detections-expand-details-button.png deleted file mode 100644 index 2a53fac260..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-expand-details-button.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-expanded-correlations-view.png b/docs/serverless/images/view-alert-details/-detections-expanded-correlations-view.png deleted file mode 100644 index 2aa9b75275..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-expanded-correlations-view.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-expanded-entities-view.png b/docs/serverless/images/view-alert-details/-detections-expanded-entities-view.png deleted file mode 100644 index e7f05fe2ed..0000000000 Binary files 
a/docs/serverless/images/view-alert-details/-detections-expanded-entities-view.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-expanded-prevalence-view.png b/docs/serverless/images/view-alert-details/-detections-expanded-prevalence-view.png deleted file mode 100644 index 48c44f6a18..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-expanded-prevalence-view.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-expanded-threat-intelligence-view.png b/docs/serverless/images/view-alert-details/-detections-expanded-threat-intelligence-view.png deleted file mode 100644 index da4632101c..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-expanded-threat-intelligence-view.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-insights-section-rp.png b/docs/serverless/images/view-alert-details/-detections-insights-section-rp.png deleted file mode 100644 index f10cc70a72..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-insights-section-rp.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-investigation-section-rp.png b/docs/serverless/images/view-alert-details/-detections-investigation-section-rp.png deleted file mode 100644 index c496593144..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-investigation-section-rp.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-open-alert-details-flyout.gif b/docs/serverless/images/view-alert-details/-detections-open-alert-details-flyout.gif deleted file mode 100644 index 462ff9f429..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-open-alert-details-flyout.gif and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-response-action-rp.png b/docs/serverless/images/view-alert-details/-detections-response-action-rp.png deleted file mode 100644 index 03bac21042..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-response-action-rp.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-threat-intelligence-overview.png b/docs/serverless/images/view-alert-details/-detections-threat-intelligence-overview.png deleted file mode 100644 index af44623035..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-threat-intelligence-overview.png and /dev/null differ diff --git a/docs/serverless/images/view-alert-details/-detections-visualizations-section-rp.png b/docs/serverless/images/view-alert-details/-detections-visualizations-section-rp.png deleted file mode 100644 index 783bd302d0..0000000000 Binary files a/docs/serverless/images/view-alert-details/-detections-visualizations-section-rp.png and /dev/null differ diff --git a/docs/serverless/images/view-osquery-results/-osquery-pack-query-results.png b/docs/serverless/images/view-osquery-results/-osquery-pack-query-results.png deleted file mode 100644 index 0baa9162d4..0000000000 Binary files a/docs/serverless/images/view-osquery-results/-osquery-pack-query-results.png and /dev/null differ diff --git a/docs/serverless/images/view-osquery-results/-osquery-single-query-results.png b/docs/serverless/images/view-osquery-results/-osquery-single-query-results.png deleted file mode 100644 index e0bc6f3768..0000000000 Binary files 
a/docs/serverless/images/view-osquery-results/-osquery-single-query-results.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-alert-pill.png b/docs/serverless/images/visual-event-analyzer/-detections-alert-pill.png deleted file mode 100644 index e96fc829fd..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-alert-pill.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-analyze-event-button.png b/docs/serverless/images/visual-event-analyzer/-detections-analyze-event-button.png deleted file mode 100644 index 4c4e122ef1..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-analyze-event-button.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-analyze-event-timeline.png b/docs/serverless/images/visual-event-analyzer/-detections-analyze-event-timeline.png deleted file mode 100644 index 6aa6d31d5a..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-analyze-event-timeline.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-data-view-selection.png b/docs/serverless/images/visual-event-analyzer/-detections-data-view-selection.png deleted file mode 100644 index f0d15645ec..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-data-view-selection.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-date-range-selection.png b/docs/serverless/images/visual-event-analyzer/-detections-date-range-selection.png deleted file mode 100644 index 40515f6832..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-date-range-selection.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-event-details.png b/docs/serverless/images/visual-event-analyzer/-detections-event-details.png deleted file mode 100644 index f4a8eb16e8..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-event-details.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-event-type.png b/docs/serverless/images/visual-event-analyzer/-detections-event-type.png deleted file mode 100644 index 819a8495a7..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-event-type.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-full-screen-analyzer.png b/docs/serverless/images/visual-event-analyzer/-detections-full-screen-analyzer.png deleted file mode 100644 index bb0e2ec4ff..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-full-screen-analyzer.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-graphical-view.png b/docs/serverless/images/visual-event-analyzer/-detections-graphical-view.png deleted file mode 100644 index d7a56795ea..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-graphical-view.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-node-legend.png b/docs/serverless/images/visual-event-analyzer/-detections-node-legend.png deleted file mode 100644 index 0ba9bf6649..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-node-legend.png and /dev/null differ diff --git 
a/docs/serverless/images/visual-event-analyzer/-detections-process-details.png b/docs/serverless/images/visual-event-analyzer/-detections-process-details.png deleted file mode 100644 index c8b92c81be..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-process-details.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-process-list.png b/docs/serverless/images/visual-event-analyzer/-detections-process-list.png deleted file mode 100644 index 105d723d5f..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-process-list.png and /dev/null differ diff --git a/docs/serverless/images/visual-event-analyzer/-detections-process-schema.png b/docs/serverless/images/visual-event-analyzer/-detections-process-schema.png deleted file mode 100644 index 85d393d0c9..0000000000 Binary files a/docs/serverless/images/visual-event-analyzer/-detections-process-schema.png and /dev/null differ diff --git a/docs/serverless/images/visualize-alerts/-detections-alert-page-visualizations.png b/docs/serverless/images/visualize-alerts/-detections-alert-page-visualizations.png deleted file mode 100644 index 685a6233f5..0000000000 Binary files a/docs/serverless/images/visualize-alerts/-detections-alert-page-visualizations.png and /dev/null differ diff --git a/docs/serverless/images/visualize-alerts/-detections-alert-page-viz-collapsed.png b/docs/serverless/images/visualize-alerts/-detections-alert-page-viz-collapsed.png deleted file mode 100644 index 286128dee0..0000000000 Binary files a/docs/serverless/images/visualize-alerts/-detections-alert-page-viz-collapsed.png and /dev/null differ diff --git a/docs/serverless/images/visualize-alerts/-detections-alerts-viz-counts.png b/docs/serverless/images/visualize-alerts/-detections-alerts-viz-counts.png deleted file mode 100644 index 48176bb0be..0000000000 Binary files a/docs/serverless/images/visualize-alerts/-detections-alerts-viz-counts.png and /dev/null differ diff --git a/docs/serverless/images/visualize-alerts/-detections-alerts-viz-summary.png b/docs/serverless/images/visualize-alerts/-detections-alerts-viz-summary.png deleted file mode 100644 index d15a12b714..0000000000 Binary files a/docs/serverless/images/visualize-alerts/-detections-alerts-viz-summary.png and /dev/null differ diff --git a/docs/serverless/images/visualize-alerts/-detections-alerts-viz-treemap.png b/docs/serverless/images/visualize-alerts/-detections-alerts-viz-treemap.png deleted file mode 100644 index 5383244b84..0000000000 Binary files a/docs/serverless/images/visualize-alerts/-detections-alerts-viz-treemap.png and /dev/null differ diff --git a/docs/serverless/images/visualize-alerts/-detections-alerts-viz-trend.png b/docs/serverless/images/visualize-alerts/-detections-alerts-viz-trend.png deleted file mode 100644 index 4ee9d47e04..0000000000 Binary files a/docs/serverless/images/visualize-alerts/-detections-alerts-viz-trend.png and /dev/null differ diff --git a/docs/serverless/images/visualize-alerts/-detections-treemap-click.gif b/docs/serverless/images/visualize-alerts/-detections-treemap-click.gif deleted file mode 100644 index bb5b96dac4..0000000000 Binary files a/docs/serverless/images/visualize-alerts/-detections-treemap-click.gif and /dev/null differ diff --git a/docs/serverless/images/vuln-management-dashboard-dash/-cloud-native-security-vuln-management-dashboard.png b/docs/serverless/images/vuln-management-dashboard-dash/-cloud-native-security-vuln-management-dashboard.png deleted file 
mode 100644 index 063312fc0f..0000000000 Binary files a/docs/serverless/images/vuln-management-dashboard-dash/-cloud-native-security-vuln-management-dashboard.png and /dev/null differ diff --git a/docs/serverless/images/vuln-management-findings/-cloud-native-security-cnvm-findings-grouped.png b/docs/serverless/images/vuln-management-findings/-cloud-native-security-cnvm-findings-grouped.png deleted file mode 100644 index b62bd0564b..0000000000 Binary files a/docs/serverless/images/vuln-management-findings/-cloud-native-security-cnvm-findings-grouped.png and /dev/null differ diff --git a/docs/serverless/images/vuln-management-findings/-cloud-native-security-cnvm-findings-page.png b/docs/serverless/images/vuln-management-findings/-cloud-native-security-cnvm-findings-page.png deleted file mode 100644 index a2c36a19a7..0000000000 Binary files a/docs/serverless/images/vuln-management-findings/-cloud-native-security-cnvm-findings-page.png and /dev/null differ diff --git a/docs/serverless/images/vuln-management-get-started/-dashboards-cnvm-cloudformation.png b/docs/serverless/images/vuln-management-get-started/-dashboards-cnvm-cloudformation.png deleted file mode 100644 index 890be7a391..0000000000 Binary files a/docs/serverless/images/vuln-management-get-started/-dashboards-cnvm-cloudformation.png and /dev/null differ diff --git a/docs/serverless/images/vuln-management-get-started/-dashboards-cnvm-setup-1.png b/docs/serverless/images/vuln-management-get-started/-dashboards-cnvm-setup-1.png deleted file mode 100644 index 3b2ce9adb6..0000000000 Binary files a/docs/serverless/images/vuln-management-get-started/-dashboards-cnvm-setup-1.png and /dev/null differ diff --git a/docs/serverless/ingest/auto-import.mdx b/docs/serverless/ingest/auto-import.mdx deleted file mode 100644 index b6f9b49879..0000000000 --- a/docs/serverless/ingest/auto-import.mdx +++ /dev/null @@ -1,87 +0,0 @@ ---- -slug: /serverless/security/automatic-import -title: Automatic Import -description: Use Automatic Import to quickly normalize and ingest third-party data. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - - - -This feature is in technical preview. It may change in the future, and you should exercise caution when using it in production environments. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of GA features. - - -Automatic Import helps you quickly parse, ingest, and create [ECS mappings](https://www.elastic.co/elasticsearch/common-schema) for data from sources that don't yet have prebuilt Elastic integrations. This can accelerate your migration to ((elastic-sec)), and help you quickly add new data sources to an existing SIEM solution in ((elastic-sec)). Automatic Import uses a large language model (LLM) with specialized instructions to quickly analyze your source data and create a custom integration. - -While Elastic has 400+ [prebuilt data integrations](((integrations-docs))), Automatic Import helps you extend data coverage to other security-relevant technologies and applications. Elastic integrations (including those created by Automatic Import) normalize data to [the Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current/ecs-reference.html), which creates uniformity across dashboards, search, alerts, machine learning, and more. - - - -Click [here](https://elastic.navattic.com/automatic-import) to access an interactive demo that shows the feature in action, before setting it up yourself. - - - - -- A working . 
Automatic Import currently works with all variants of Claude 3. Other models are not supported in this technical preview, but will be supported in future versions. -- A [Security Analytics Complete](https://www.elastic.co/pricing/serverless-security) subscription. -- A sample of the data you want to import, in JSON or NDJSON format. - - - - -Using Automatic Import allows users to create new third-party data integrations through the use of third-party generative AI models (“GAI models”). Any third-party GAI models that you choose to use are owned and operated by their respective providers. Elastic does not own or control these third-party GAI models, nor does it influence their design, training, or data-handling practices. Using third-party GAI models with Elastic solutions, and using your data with third-party GAI models is at your discretion. Elastic bears no responsibility or liability for the content, operation, or use of these third-party GAI models, nor for any potential loss or damage arising from their use. Users are advised to exercise caution when using GAI models with personal, sensitive, or confidential information, as data submitted may be used to train the models or for other purposes. Elastic recommends familiarizing yourself with the development practices and terms of use of any third-party GAI models before use. - -You are responsible for ensuring that your use of Automatic Import complies with the terms and conditions of any third-party platform you connect with. - - - -## Create a new custom integration - -1. In ((elastic-sec)), click **Add integrations**. -2. Under **Can't find an integration?** click **Create new integration**. - - - -3. Click **Create integration**. -4. Select an . -5. Define how your new integration will appear on the Integrations page by providing a **Title**, **Description**, and **Logo**. Click **Next**. -6. Define your integration's package name, which will prefix the imported event fields. -7. Define your **Data stream title**, **Data stream description**, and **Data stream name**. These fields appear on the integration's configuration page to help identify the data stream it writes to. -8. Select your [**Data collection method**](https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html). This determines how your new integration will ingest the data (for example, from an S3 bucket, an HTTP endpoint, or a file stream). -9. Upload a sample of your data in JSON or NDJSON format. Make sure to include all the types of events that you want the new integration to handle. - -- The file extension (`.JSON` or `.NDJSON`) must match the file format. -- Only the first 10 events in the sample are analyzed. In this technical preview, additional data is truncated. -- Ensure each JSON or NDJSON object represents an event, and avoid deeply nested object structures. -- The more variety in your sample, the more accurate the pipeline will be (for example, include 10 unique log entries instead of the same type of entry 10 times). -- Ideally, each field name should describe what the field does. - -10. Click **Analyze logs**, then wait for processing to complete. This may take several minutes. -11. After processing is complete, the pipeline's field mappings appear, including ECS and custom fields. - - - -12. (Optional) After reviewing the proposed pipeline, you can fine-tune it by clicking **Edit pipeline**. 
Refer to the [((elastic-sec)) ECS reference](https://www.elastic.co/guide/en/security/current/siem-field-reference.html) to learn more about formatting field mappings. When you're satisfied with your changes, click **Save**. - - - -13. Click **Add to Elastic**. After the **Success** message appears, your new integration will be available on the Integrations page. - - - -14. Click **Add to an agent** to deploy your new integration and start collecting data, or click **View integration** to view detailed information about your new integration. - - -Once you've added an integration, you can't edit any details other than the ingest pipeline, which you can edit by going to **Project Settings → Stack Management → Ingest Pipelines**. - - - -You can use the to check the health of your data ingest pipelines and field mappings. - - - - - - - diff --git a/docs/serverless/ingest/images/auto-import-create-new-integration-button.png b/docs/serverless/ingest/images/auto-import-create-new-integration-button.png deleted file mode 100644 index 976898beb2..0000000000 Binary files a/docs/serverless/ingest/images/auto-import-create-new-integration-button.png and /dev/null differ diff --git a/docs/serverless/ingest/images/auto-import-edit-pipeline.gif b/docs/serverless/ingest/images/auto-import-edit-pipeline.gif deleted file mode 100644 index 1008fb345b..0000000000 Binary files a/docs/serverless/ingest/images/auto-import-edit-pipeline.gif and /dev/null differ diff --git a/docs/serverless/ingest/images/auto-import-review-integration-page.png b/docs/serverless/ingest/images/auto-import-review-integration-page.png deleted file mode 100644 index 97ea0ee831..0000000000 Binary files a/docs/serverless/ingest/images/auto-import-review-integration-page.png and /dev/null differ diff --git a/docs/serverless/ingest/images/auto-import-success-message.png b/docs/serverless/ingest/images/auto-import-success-message.png deleted file mode 100644 index d7ef0a8530..0000000000 Binary files a/docs/serverless/ingest/images/auto-import-success-message.png and /dev/null differ diff --git a/docs/serverless/ingest/ingest-data.mdx b/docs/serverless/ingest/ingest-data.mdx deleted file mode 100644 index 5627395d37..0000000000 --- a/docs/serverless/ingest/ingest-data.mdx +++ /dev/null @@ -1,136 +0,0 @@ ---- -slug: /serverless/security/ingest-data -title: Ingest data to Elastic Security -description: Learn how to add your own data to ((elastic-sec)). -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -To ingest data, you can use: - -* The [((agent))](((fleet-guide))/fleet-overview.html) with the **((elastic-defend))** integration, which protects - your hosts and sends logs, metrics, and endpoint security data to ((elastic-sec)). See . - -* The ((agent)) with other integrations, which are available in the [Elastic Package Registry (EPR)](((fleet-guide))/fleet-overview.html#package-registry-intro). To install an integration that works with ((elastic-sec)), select **Add integrations** in the toolbar on most pages. On the **Integrations** page, select the **Security** category filter, then select an integration to view the installation instructions. For more information on integrations, refer to [((integrations))](((integrations-docs))). -* **((beats))** shippers installed for each system you want to monitor. -* The ((agent)) to send data from Splunk to ((elastic-sec)). See [Get started with data from Splunk](((observability-guide))/splunk-get-started.html). -* Third-party collectors configured to ship ECS-compliant data. provides a list of ECS fields used in ((elastic-sec)). - - - -If you use a third-party collector to ship data to ((elastic-sec)), you must -map its fields to the [Elastic Common Schema (ECS)](((ecs-ref))). Additionally, -you must add its index to the ((elastic-sec)) indices (update the **`securitySolution:defaultIndex`** advanced setting). - -((elastic-sec)) uses the [`host.name`](((ecs-ref))/ecs-host.html) ECS field as the -primary key for identifying hosts. - - - -The ((agent)) with the -[((elastic-defend)) integration](https://www.elastic.co/products/endpoint-security) -ships these data sources: - -* Process - Linux, macOS, Windows -* Network - Linux, macOS, Windows -* File - Linux, macOS, Windows -* DNS - Windows -* Registry - Windows -* DLL and Driver Load - Windows -* Security - Windows - -
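To make the ECS requirement for third-party collectors concrete, here is a minimal, hypothetical sketch. The index name `my-collector-logs`, the `$ES_URL` and `$API_KEY` placeholders, and every field value are invented for illustration; the point is the shape of an event that carries the core ECS fields ((elastic-sec)) relies on, including `host.name`:

```sh
# Hypothetical sketch: index one ECS-shaped event from a third-party collector.
# $ES_URL and $API_KEY stand in for your Elasticsearch endpoint and API key.
curl -X POST "$ES_URL/my-collector-logs/_doc" \
  -H "Authorization: ApiKey $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "@timestamp": "2024-05-01T12:00:00.000Z",
    "host": { "name": "web-01" },
    "event": { "kind": "event", "category": ["network"], "type": ["connection"] },
    "source": { "ip": "10.0.0.5" },
    "destination": { "ip": "198.51.100.7", "port": 443 },
    "message": "Example third-party event mapped to ECS"
  }'
```

For events like this to appear in ((elastic-sec)), you would also add the index pattern (for example, `my-collector-logs*`) to the `securitySolution:defaultIndex` advanced setting, as noted above.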
- -## Install ((beats)) shippers - -To add hosts and populate ((elastic-sec)) with network security events, you need to install and -configure Beats on the hosts from which you want to ingest security events: - -* [((filebeat))](https://www.elastic.co/products/beats/filebeat) for forwarding and - centralizing logs and files - -* [((auditbeat))](https://www.elastic.co/products/beats/auditbeat) for collecting security events -* [((winlogbeat))](https://www.elastic.co/products/beats/winlogbeat) for centralizing - Windows event logs - -* [((packetbeat))](https://www.elastic.co/products/beats/packetbeat) for analyzing - network activity - -You can install ((beats)) using the UI guide or directly from the command line. - -### Install ((beats)) using the UI guide - -When you add integrations that use ((beats)), you're guided through the ((beats)) installation process. To begin, go to the **Integrations** page (select **Add integrations** in the toolbar on most pages), and then follow the links for the types of data you want to collect. - - -On the Integrations page, you can select the **Beats only** filter to only view integrations using Beats. - - -### Download and install ((beats)) from the command line - -To install ((beats)), see these installation guides: - -* [((filebeat)) quick start](((filebeat-ref))/filebeat-installation-configuration.html) - -* [((auditbeat)) quick start](((auditbeat-ref))/auditbeat-installation-configuration.html) - -* [((winlogbeat)) quick start](((winlogbeat-ref))/winlogbeat-installation-configuration.html) - -* [((packetbeat)) quick start](((packetbeat-ref))/packetbeat-installation-configuration.html) - -
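For any of the shippers listed above, the command-line route looks roughly like the following. This is a sketch that assumes a Debian or Ubuntu host and uses an example ((filebeat)) version; follow the quick start guide for your platform and the current release:

```sh
# Hypothetical sketch for a Debian/Ubuntu host; substitute the current Filebeat version.
curl -L -O "https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.14.0-amd64.deb"
sudo dpkg -i filebeat-8.14.0-amd64.deb

# Point Filebeat at your deployment in /etc/filebeat/filebeat.yml, enable the modules
# you need (module selection is covered in the next section), then load assets and start.
sudo filebeat modules enable system
sudo filebeat setup
sudo systemctl start filebeat
```

The other shippers follow a similar download, install, and configure pattern; refer to the quick start guides above for platform-specific packages and service commands.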
- -### Enable modules and configuration options - -No matter how you installed ((beats)), you need to enable modules in ((auditbeat)) -and ((filebeat)) to populate ((elastic-sec)) with data. - - -For a full list of security-related beat modules, -[click here](https://www.elastic.co/integrations?solution=security). - - -To populate **Hosts** data, enable these modules: - -* [Auditbeat system module - Linux, macOS, - Windows](((auditbeat-ref))/auditbeat-module-system.html): - - * packages - * processes - * logins - * sockets - * users and groups -* [Auditbeat auditd module - Linux kernel audit events](((auditbeat-ref))/auditbeat-module-auditd.html) -* [Auditbeat file integrity - module - Linux, macOS, Windows](((auditbeat-ref))/auditbeat-module-file_integrity.html) - -* [Filebeat system module - Linux - system logs](((filebeat-ref))/filebeat-module-system.html) - -* [Filebeat Santa module - macOS - security events](((filebeat-ref))/filebeat-module-santa.html) - -* [Winlogbeat - Windows event logs](((winlogbeat-ref))/_winlogbeat_overview.html) - -To populate **Network** data, enable Packetbeat protocols and Filebeat modules: - -* [((packetbeat))](((packetbeat-ref))/packetbeat-overview.html) - * [DNS](((packetbeat-ref))/packetbeat-dns-options.html) - * [TLS](((packetbeat-ref))/configuration-tls.html) - * [Other supported protocols](((packetbeat-ref))/configuration-protocols.html) -* [((filebeat))](((filebeat-ref))/filebeat-overview.html) - * [Zeek NMS module](((filebeat-ref))/filebeat-module-zeek.html) - * [Suricata IDS module](((filebeat-ref))/filebeat-module-suricata.html) - * [Iptables/Ubiquiti module](((filebeat-ref))/filebeat-module-iptables.html) - * [CoreDNS module](((filebeat-ref))/filebeat-module-coredns.html) - * [Envoy proxy module (Kubernetes)](((filebeat-ref))/filebeat-module-envoyproxy.html) - * [Palo Alto Networks firewall module](((filebeat-ref))/filebeat-module-panw.html) - * [Cisco ASA firewall module](((filebeat-ref))/filebeat-module-cisco.html) - * [AWS module](((filebeat-ref))/filebeat-module-aws.html) - * [CEF module](((filebeat-ref))/filebeat-module-cef.html) - * [Google Cloud module](((filebeat-ref))/filebeat-module-googlecloud.html) - * [NetFlow module](((filebeat-ref))/filebeat-module-netflow.html) - diff --git a/docs/serverless/ingest/threat-intelligence.mdx b/docs/serverless/ingest/threat-intelligence.mdx deleted file mode 100644 index 8bc9710f2b..0000000000 --- a/docs/serverless/ingest/threat-intelligence.mdx +++ /dev/null @@ -1,76 +0,0 @@ ---- -slug: /serverless/security/threat-intelligence -title: Enable threat intelligence integrations -description: Use threat indicators to detect known threats and malicious activity. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -The Threat Intelligence view provides a streamlined way to collect threat intelligence data that you can use for threat detection and matching. Threat intelligence data consists of threat indicators ingested from third-party threat intelligence sources. - -Threat indicators describe potential threats, unusual behavior, or malicious activity on a network or in an environment. They are commonly used in indicator match rules to detect and match known threats. When an indicator match rule generates an alert, it includes information about the matched threat indicator. - - -To learn more about alerts with threat intelligence, visit View alert details. - - -Refer to the following sections to learn how to connect to threat intelligence sources using an ((agent)) integration, the Threat Intel module, or a custom integration. - - - -There are a few scenarios when data won't display in the Threat Intelligence view: - -- If you've chosen a time range that doesn't contain threat indicator event data, you are prompted to choose a different range. Use the date and time picker in the ((security-app)) to select a new range to analyze. -- If the ((agent)) or ((filebeat)) agent hasn't ingested Threat Intel module data yet, the threat indicator event counts won't load. You can wait for data to be ingested or reach out to your administrator for help resolving this. - -
- -## Add an ((agent)) integration - -1. Install a [((fleet))-managed ((agent))](((fleet-guide))/install-fleet-managed-elastic-agent.html) on the hosts you want to monitor. -1. In the Threat Intelligence view, click **Enable sources** to view the Integrations page. Scroll down and select **Elastic Agent only** to filter by ((agent)) integrations. - - - - If you know the name of ((agent)) integration you want to install, you can search for it directly. Alternatively, choose the **Threat Intelligence** category to display a list of available [threat intelligence integrations](((integrations-docs))/threat-intelligence-intro). - - - -1. Select an ((agent)) integration, then complete the installation steps. -1. Return to the Threat Intelligence view on the Overview dashboard. If indicator data isn't displaying, refresh the page or refer to these troubleshooting steps. - -
- -## Add a ((filebeat)) Threat Intel module integration - -1. Set up the [((filebeat)) agent](((filebeat-ref))/filebeat-installation-configuration.html) and enable the Threat Intel module. - - - For more information about enabling available threat intelligence filesets, refer to [Threat Intel module](((filebeat-ref))/filebeat-module-threatintel.html). - - -1. Update the `securitySolution:defaultThreatIndex` advanced setting by adding the appropriate index pattern name after the default ((fleet)) threat intelligence index pattern (`logs-ti*`): - 1. If you're _only_ using ((filebeat)) version 8.x, add the appropriate ((filebeat)) threat intelligence index pattern. For example, `logs-ti*`, `filebeat-8*`. - 1. If you're using a previous version of Filebeat _and_ a current one, differentiate between the threat intelligence indices by using unique index pattern names. For example, if you’re using ((filebeat)) version 7.0.0 and 8.0.0, update the setting to `logs-ti*`,`filebeat-7*`,`filebeat-8*`. -1. Return to the Threat Intelligence view on the Overview dashboard. Refresh the page if indicator data isn't displaying. - -
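For reference, after setting up ((filebeat)) in step 1, enabling the Threat Intel module typically comes down to turning on the filesets you need in the module configuration (for example, by running `filebeat modules enable threatintel` and then editing `modules.d/threatintel.yml`). The following sketch is illustrative only; the URL and API token are placeholders, and the full set of filesets and variables is described in the Threat Intel module documentation linked above:

```yaml
# modules.d/threatintel.yml (excerpt) -- placeholder URL and token
- module: threatintel
  abuseurl:
    enabled: true
    var.interval: 10m
  misp:
    enabled: true
    var.url: https://my-misp-server.example.com/events/restSearch
    var.api_token: <your-misp-api-token>
```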
- -## Add a custom integration - -1. Set up a way to ingest data into your system. -1. Update the `securitySolution:defaultThreatIndex` advanced setting by adding the appropriate index pattern name after the default ((fleet)) threat intelligence index pattern (`logs-ti*`), for example, `logs-ti*`,`custom-ti-index*`. - - - Threat intelligence indices aren’t required to be ECS compatible. However, we strongly recommend compatibility if you’d like your alerts to be enriched with relevant threat indicator information. You can find a list of ECS-compliant threat intelligence fields at [Threat Fields](((ecs-ref))/ecs-threat.html). - - -1. Return to the Threat Intelligence view on the Overview dashboard (**Dashboards** → **Overview**). Refresh the page if indicator data isn't displaying. - - - The Threat Intelligence view searches for a `threat.feed.name` field value to define the source name in the **Name** column. If a custom source doesn't have the `threat.feed.name` field or hasn't defined a `threat.feed.name` field value, it's considered unnamed and labeled as **Other**. Dashboards aren't created for unnamed sources unless the `threat.feed.dashboard_id` field is defined. - - diff --git a/docs/serverless/investigate/cases-open-manage.mdx b/docs/serverless/investigate/cases-open-manage.mdx deleted file mode 100644 index ee6be38677..0000000000 --- a/docs/serverless/investigate/cases-open-manage.mdx +++ /dev/null @@ -1,271 +0,0 @@ ---- -slug: /serverless/security/cases-open-manage -title: Create and manage cases -description: Create a case in ((elastic-sec)), and add files and visualizations. -tags: ["serverless","security","how-to","analyze","manage"] -status: in review ---- - - -
- -You can create and manage cases using the UI or the [Cases API](((security-guide))/cases-api-overview.html). -{/* Link to classic docs until serverless API docs are available. */} - -
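If you prefer to script case creation, the Cases API referenced above accepts a JSON request body. The sketch below shows the kind of minimal body you might send to the create-case endpoint (`POST <kibana_url>/api/cases`); the title, description, tags, and severity values are illustrative, and the authoritative list of supported fields is in the Cases API documentation linked above:

```json
{
  "title": "Suspicious PowerShell activity on web-01",
  "description": "Tracking alerts related to encoded PowerShell commands.",
  "tags": ["windows", "initial-access"],
  "severity": "medium",
  "owner": "securitySolution",
  "settings": { "syncAlerts": true },
  "connector": { "id": "none", "name": "none", "type": ".none", "fields": null }
}
```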
- -## Open a new case - -Open a new case to keep track of security issues and share their details with -colleagues. - -1. Go to **Cases**, then click **Create case**. If no cases exist, the Cases table will be empty and you'll be prompted to create one by clicking the **Create case** button inside the table. -1. (Optional) If you defined templates, select one to use its default field values. -1. Give the case a name, assign a severity level, and provide a description. You can use - [Markdown](https://www.markdownguide.org/cheat-sheet) syntax in the case description. - - - If you do not assign your case a severity level, it will be assigned **Low** by default. - - - - You can insert a Timeline link in the case description by clicking the Timeline icon (). - - -1. Optionally, add a category, assignees and relevant tags. You can add users only if they meet the necessary prerequisites. -1. If you defined custom fields, they appear in the **Additional fields** section. -1. Choose if you want alert statuses to sync with the case's status after they are added to the case. This option is enabled by default, but you can turn it off after creating the case. -1. From **External incident management**, select a connector. If you've previously added one, that connector displays as the default selection. Otherwise, the default setting is `No connector selected`. -1. Click **Create case**. - - - If you've selected a connector for the case, the case is automatically pushed to the third-party system it's connected to. - - -![Shows an open case](../images/cases-open-manage/-cases-cases-ui-open.png) -{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */} - -
- -{/* -This functionality does not exist yet in serverless. -To be updated: references to Kibana, ESS. Once this section is added back in, edit the frontmatter description back to: Create a case in ((elastic-sec)), configure email notifications, and add files and visualizations. - -## Add email notifications - -You can configure email notifications that occur when users are assigned to cases. - -For hosted ((kib)) on ((ess)): - -1. Add the email addresses to the monitoring email allowlist. Follow the steps in - [Send alerts by email](((cloud))/ec-watcher.html#ec-watcher-allowlist). - - You do not need to take any more steps to configure an email connector or update - ((kib)) user settings, since the preconfigured Elastic-Cloud-SMTP connector is - used by default. - -For self-managed ((kib)): - -1. Create a preconfigured email connector. - - - At this time, email notifications support only [preconfigured email connectors](((kibana-ref))/pre-configured-connectors.html), - which are defined in the `kibana.yml` file. - - -1. Set the `notifications.connectors.default.email` ((kib)) setting to the name of - your email connector. - -1. If you want the email notifications to contain links back to the case, you - must configure the [server.publicBaseUrl](((kibana-ref))/settings.html#server-publicBaseUrl) setting. - -When you subsequently add assignees to cases, they receive an email. - -
*/} - -## Manage existing cases - -From the Cases page, you can search existing cases and filter them by attributes such as assignees, categories, severity, status, and tags. You can also select multiple cases and use bulk actions to delete cases or change their attributes. General case metrics, including how long it takes to close cases, are provided above the table. - -![Case UI Home](../images/cases-open-manage/-cases-cases-home-page.png) -{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */} - -To explore a case, click on its name. You can then: - -* Review the case summary -* Add and manage comments - - - Comments can contain Markdown. For syntax help, click the Markdown icon () in the bottom right of the comment. - - -* Examine alerts and indicators attached to the case -* Add files -* Add a Lens visualization -* Modify the case's description, assignees, category, severity, status, and tags. -* Manage connectors and send updates to external systems (if you've added a connector to the case) -* Copy the case UUID -* Refresh the case to retrieve the latest updates - -
- -### Review the case summary - -Click on an existing case to access its summary. The case summary, located under the case title, contains metrics that summarize alert information and response times. These metrics update when you attach additional unique alerts to the case, add connectors, or modify the case's status: - -* **Total alerts**: Total number of unique alerts attached to the case -* **Associated users**: Total number of unique users that are represented in the attached alerts -* **Associated hosts**: Total number of unique hosts that are represented in the attached alerts -* **Total connectors**: Total number of connectors that have been added to the case -* **Case created**: Date and time that the case was created -* **Open duration**: Time elapsed since the case was created -* **In progress duration**: How long the case has been in the `In progress` state -* **Duration from creation to close**: Time elapsed from when the case was created to when it was closed - -![Shows you a summary of the case](../images/cases-open-manage/-cases-cases-summary.png) - -
- -### Manage case comments -To edit, delete, or quote a comment, select the appropriate option from the **More actions** menu (). - -![Shows you a summary of the case](../images/cases-open-manage/-cases-cases-manage-comments.png) - -
- -### Examine alerts attached to a case - -To explore the alerts attached to a case, click the **Alerts** tab. In the table, alerts are organized from oldest to newest. To view alert details, click the **View details** button. - -![Shows you the Alerts tab](../images/cases-open-manage/-cases-cases-alert-tab.png) - - -Each case can have a maximum of 1,000 alerts. - - -
- -### Add files - -To upload files to a case, click the **Files** tab: - -![A list of files attached to a case](../images/cases-open-manage/-cases-cases-files.png) -{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */} - -You can add images and text, CSV, JSON, PDF, or ZIP files. -For the complete list, check [mime_types.ts](https://github.com/elastic/kibana/blob/main/x-pack/plugins/cases/common/constants/mime_types.ts). - - -There is a 10 MiB size limit for images. For all other MIME types, the limit is 100 MiB. - - -To download or delete the file, or copy the file hash to your clipboard, open the **Actions** menu (**…**). -The available hash functions are MD5, SHA-1, and SHA-256. - -When you add a file, a comment is added to the case activity log. -To view an image, click its name in the activity or file list. - -
- -### Add a Lens visualization - - -Add a Lens visualization to your case to portray event and alert data through charts and graphs. - -![Shows how to add a visualization to a case](../images/cases-open-manage/-cases-add-vis-to-case.gif) - -To add a Lens visualization to a comment within your case: - -1. Click the **Visualization** button. The **Add visualization** dialog appears. -1. Select an existing visualization from your Visualize Library or create a new visualization. - - - Set an absolute time range for your visualization. This ensures your visualization doesn't change over time after you save it to your case, and provides important context for others managing the case. - - -1. Save the visualization to your Visualize Library by clicking the **Save to library** button (optional). - 1. Enter a title and description for the visualization. - 1. Choose if you want to keep the **Update panel on Security** activated. This option is activated by default and automatically adds the visualization to your Visualize Library. -1. After you've finished creating your visualization, click **Save and return** to go back to your case. -1. Click **Preview** to show how the visualization will appear in the case comment. -1. Click **Add Comment** to add the visualization to your case. - -Alternatively, while viewing a dashboard you can open a panel's menu then click **More actions** (​) → **Add to existing case** or **More actions** (​) → **Add to new case**. - -After a visualization has been added to a case, you can modify or interact with it by clicking the **Open Visualization** option in the case's comment menu. - -![Shows where the Open Visualization option is](../images/cases-open-manage/-cases-cases-open-vis.png) - -
- -### Copy the case UUID - -Each case has a universally unique identifier (UUID) that you can copy and share. To copy a case's UUID to a clipboard, go to the Cases page and select **Actions** → **Copy Case ID** for the case you want to share. Alternatively, go to a case's details page, then from the **More actions** menu (), select **Copy Case ID**. - - - -
- -## Export and import cases - -Cases can be exported and imported as saved objects using the Saved Objects project settings UI. - - -Before importing Lens visualizations, Timelines, or alerts, ensure their data is present. Without it, they won't work after being imported. - - -
- -### Export a case -Use the **Export** option to move cases between different ((elastic-sec)) instances. When you export a case, the following data is exported to a newline-delimited JSON (`.ndjson`) file: - -* Case details -* User actions -* Text string comments -* Case alerts -* Lens visualizations (exported as JSON blobs). - - - -The following attachments are _not_ exported: - -* **Case files**: Case files are not exported. However, they are accessible in **Project settings** → **Management** → **Files** to download and re-add. -* **Alerts**: Alerts attached to cases are not exported. You must re-add them after importing cases. - - - -To export a case: - -1. Go to **Project settings** → **Management** → **Saved objects**. -1. Search for the case by choosing a saved object type or entering the case title in the search bar. -1. Select one or more cases, then click the **Export** button. -1. Click **Export**. A confirmation message that your file is downloading displays. - - - Keep the **Include related objects** option enabled to ensure connectors are exported too. - - -![Shows the export saved objects workflow](../images/cases-open-manage/-cases-cases-export-button.png) - -
- -### Import a case - -To import a case: - -1. Go to **Project settings** → **Management** → **Saved objects**. -1. Click **Import**. -1. Select the NDJSON file containing the exported case and configure the import options. -1. Click **Import**. -1. Review the import log and click **Done**. - - - - Be mindful of the following: - - * If the imported case had connectors attached to it, you'll be prompted to re-authenticate the connectors. To do so, click **Go to connectors** on the **Import saved objects** flyout and complete the necessary steps. Alternatively, open the main menu, then go to **Project settings** → **Management** → **((connectors-ui))** to access connectors. - - * If the imported case had attached alerts, verify that the alerts' source documents exist in the environment. Case features that interact with alerts (such as the Alert details flyout and rule details page) rely on the alerts' source documents to function. - - - diff --git a/docs/serverless/investigate/cases-overview.mdx b/docs/serverless/investigate/cases-overview.mdx deleted file mode 100644 index edede7b946..0000000000 --- a/docs/serverless/investigate/cases-overview.mdx +++ /dev/null @@ -1,27 +0,0 @@ ---- -slug: /serverless/security/cases-overview -title: Cases -description: Cases enable you to track investigation details about security issues. -tags: ["security","overview","analyze"] -status: in review ---- - - -
- -Collect and share information about security issues by opening a case in ((elastic-sec)). Cases allow you to track key investigation details, collect alerts in a central location, and more. The ((elastic-sec)) UI provides several ways to create and manage cases. Alternatively, you can use the [Cases API](((security-guide))/cases-api-overview.html) to perform the same tasks. -{/* Link to classic docs until serverless API docs are available. */} - -You can also send cases to these external systems by configuring external connectors: - -* ((sn-itsm)) -* ((sn-sir)) -* ((jira)) (including Jira Service Desk) -* ((ibm-r)) -* ((swimlane)) -* ((webhook-cm)) - -![Case UI Home](../images/cases-open-manage/-cases-cases-home-page.png) -{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */} - - diff --git a/docs/serverless/investigate/cases-settings.mdx b/docs/serverless/investigate/cases-settings.mdx deleted file mode 100644 index 91aea82390..0000000000 --- a/docs/serverless/investigate/cases-settings.mdx +++ /dev/null @@ -1,166 +0,0 @@ ---- -slug: /serverless/security/cases-settings -title: Configure case settings -description: Change the default behavior of ((security)) cases by adding connectors, custom fields, templates, and closure options. -tags: [ 'serverless', 'security', 'how-to', 'configure' ] -status: in review ---- - - - -To access case settings in a ((security)) project, go to **Cases** → **Settings**. - -![Shows the case settings page](../images/cases-settings/security-cases-settings.png) -{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */} - -## Case closures - -If you close cases in your external incident management system, the cases will remain open in ((elastic-sec)) until you close them manually. - -To close cases when they are sent to an external system, select **Automatically close Security cases when pushing new incident to external system**. - -## External incident management systems - -You can push ((elastic-sec)) cases to these third-party systems: - -* ((sn-itsm)) -* ((sn-sir)) -* ((jira)) (including Jira Service Desk) -* ((ibm-r)) -* ((swimlane)) -* ((webhook-cm)) - -To push cases, you need to create a connector, which stores the information required to interact with an external system. After you have created a connector, you can set ((elastic-sec)) cases to automatically close when they are sent to external systems. - - -To create connectors and send cases to external systems, you need the Security Analytics Complete and the appropriate user role. For more information, refer to Cases prerequisites. - - -To create a new connector - -1. From the **Incident management system** list, select **Add new connector**. - -1. Select the system to send cases to: **((sn))**, **((jira))**, **((ibm-r))**, **((swimlane))**, or **((webhook-cm))**. - ![Shows the page for creating connectors](../images/cases-settings/security-cases-connectors.png) - {/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */} - -1. Enter your required settings. 
For connector configuration details, refer to: - - [((sn-itsm)) connector](((kibana-ref))/servicenow-action-type.html) - - [((sn-sir)) connector](((kibana-ref))/servicenow-sir-action-type.html) - - [((jira)) connector](((kibana-ref))/jira-action-type.html) - - [((ibm-r)) connector](((kibana-ref))/resilient-action-type.html) - - [((swimlane)) connector](((kibana-ref))/swimlane-action-type.html) - - [((webhook-cm)) connector](((kibana-ref))/cases-webhook-action-type.html) - -To change the settings of an existing connector: - -1. Select the required connector from the incident management system list. -1. Click **Update \**. -1. In the **Edit connector** flyout, modify the connector fields as required, then click **Save & close** to save your changes. - -To change the default connector used to send cases to external systems, select the required connector from the incident management system list. - -### Mapped case fields - -When you export an ((elastic-sec)) case to an external system, case fields are mapped to existing fields in ((sn)), ((jira)), ((ibm-r)), and ((swimlane)). For the ((webhook-cm)) connector, case fields can be mapped to custom or pre-existing fields in the external system you're connecting to. - -Once fields are mapped, you can push updates to external systems, and mapped fields are overwritten or appended. Retrieving data from external systems is not supported. - - - - - Title - - - - - The case `Title` field is mapped to corresponding fields in external systems. Mapped field values are overwritten when you push updates. - - * **((sn))**: `Short description` - * **((jira))**: `Summary` - * **((ibm-r))**: `Name` - * **((swimlane))**: `Description` - - - - - - - - Description - - - - The case `Description` field is mapped to the `Description` field in all systems. Mapped field values are overwritten when you push updates. - - - - - - - Comments - - - - - The case `Comments` field is mapped to corresponding fields in external systems. - - * **((sn))**: `Work Notes` - * **((jira))**: `Comments` - * **((ibm-r))**: `Comments` - * **((swimlane))**: `Comments` - - - New and edited comments are added to incident records when pushed to ((sn)), ((jira)), or ((ibm-r)). Comments pushed to ((swimlane)) are appended to the `Comment` field in ((swimlane)) and posted individually. - - - - - - -## Custom fields - -You can add optional and required fields for customized case collaboration. - -1. In the **Custom fields** section, click **Add field**. - ![Add a custom field](../images/cases-settings/security-cases-custom-fields.png) - {/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */} - -1. You must provide a field label and type (text or toggle). - You can optionally designate it as a required field and provide a default value. - -When you create a custom field, it's added to all new and existing cases. -In existing cases, new custom text fields initially have null values. - -You can subsequently remove or edit custom fields on the **Settings** page. - -## Templates - - - -You can make the case creation process faster and more consistent by adding templates. -A template defines values for one or all of the case fields (such as severity, tags, description, and title) as well as any custom fields. - -To create a template: - -1. In the **Templates** section, click **Add template**. - - ![Add a case template](../images/cases-settings/security-cases-templates.png) - {/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */} - -1. 
You must provide a template name and case severity. You can optionally add template tags and a description, values for each case field, and a case connector. - -When users create cases, they can optionally select a template and use its field values or override them. - - -If you update or delete templates, existing cases are unaffected. - \ No newline at end of file diff --git a/docs/serverless/investigate/indicators-of-compromise.mdx b/docs/serverless/investigate/indicators-of-compromise.mdx deleted file mode 100644 index 1a331f9b25..0000000000 --- a/docs/serverless/investigate/indicators-of-compromise.mdx +++ /dev/null @@ -1,168 +0,0 @@ ---- -slug: /serverless/security/indicators-of-compromise -title: Indicators of compromise -description: Set up the Indicators page to detect, analyze, and respond to threats. -tags: [ 'serverless', 'security', 'how-to', 'analyze', 'manage' ] -status: in review ---- - - -
- -The Indicators page collects data from enabled threat intelligence feeds and provides a centralized view of indicators, also known as indicators of compromise (IoCs). This topic helps you set up the Indicators page and explains how to work with IoCs. - - - -* The Indicators page requires the Security Analytics Complete . -* You must have _one_ of the following installed on the hosts you want to monitor: - * **((agent))** - Install a [((fleet))-managed ((agent))](((fleet-guide))/install-fleet-managed-elastic-agent.html) and ensure the agent's status is `Healthy`. Refer to [((fleet)) Troubleshooting](((fleet-guide))/fleet-troubleshooting.html) if it isn't. - * **((filebeat))** - Install [((filebeat))](((filebeat-ref))/filebeat-installation-configuration.html). - - - -![Shows the Indicators page](../images/indicators-of-compromise/-cases-indicators-table.png) - -

## Threat intelligence and indicators
Threat intelligence is a research function that analyzes current and emerging threats and recommends appropriate actions to strengthen a company's security posture. Threat intelligence is only useful when teams work proactively to gather, analyze, and investigate data from a variety of threat and vulnerability sources.

An indicator, also referred to as an IoC, is a piece of information associated with a known threat or reported vulnerability. There are many types of indicators, including URLs, files, domains, email addresses, and more. Within SOC teams, threat intelligence analysts use indicators to detect, assess, and respond to threats.
- -## Set up the Indicators page - -Install a threat intelligence integration to add indicators to the Indicators page. - -1. From the ((security-app)) main menu, select one of the following: - * **Intelligence** → **Indicators** → **Add Integrations**. - * **Project settings** → **Integrations**. -1. In the search bar, search for `Threat Intelligence` to get a list of threat intelligence integrations. -1. Select a threat intelligence integration, then complete the integration's guided installation. - - - For more information about available fields, go to the [Elastic integration documentation](https://docs.elastic.co/integrations) and search for a specific threat intelligence integration. - - -1. Return to the Indicators page in ((elastic-sec)). Refresh the page if indicator data isn't displaying. - -

### Troubleshooting
If indicator data is not appearing in the Indicators table after you install a threat intelligence integration:

* Verify that the index storing indicator documents is included in the default ((elastic-sec)) indices (`securitySolution:defaultIndex`). The index storing indicator documents will differ based on the way you're collecting indicator data:
    * **((agent)) integrations** - `logs-ti*`
    * **((filebeat)) integrations** - `filebeat-*`
* Ensure the indicator data you're ingesting is mapped to [Elastic Common Schema (ECS)](((ecs-ref))).


These troubleshooting steps also apply to the Threat Intelligence view.

- -## Indicators page UI - -After you add indicators to the Indicators page, you can examine, search, filter, and take action on indicator data. Indicators also appear in the Trend view, which shows the total values in the legend. - - - -
- -### Examine indicator details -Learn more about an indicator by clicking **View details**, then opening the Indicator details flyout. The flyout contains these informational tabs: - -* **Overview**: A summary of the indicator, including the indicator's name, the threat intelligence feed it came from, the indicator type, and additional relevant data. - - - Some threat intelligence feeds provide [Traffic Light Protocol (TLP) markings](https://www.cisa.gov/tlp#:~:text=Introduction,shared%20with%20the%20appropriate%20audience). The `TLP Marking` and `Confidence` fields will be empty if the feed doesn't provide that data. - - -* **Table**: The indicator data in table format. -* **JSON**: The indicator data in JSON format. - - ![Shows the Indicator details flyout, 600](../images/indicators-of-compromise/-cases-indicator-details-flyout.png) - - - -## Find related security events - -Investigate an indicator in Timeline to identify and predict related events in your environment. You can add an indicator to Timeline from the Indicators table or the Indicator details flyout. - -![Shows the results of an indicator being investigated in Timeline](../images/indicators-of-compromise/-cases-indicator-query-timeline.png) - -When you add an indicator to Timeline, a new Timeline opens with an auto-generated KQL query. The query contains the indicator field-value pair that you selected plus the field-value pair of the automatically mapped source event. By default, the query's time range is set to seven days before and after the indicator's `timestamp`. - -

### Example indicator Timeline investigation

The following image shows a file hash indicator being investigated in Timeline. The indicator field-value pair is:

`threat.indicator.file.hash.sha256 : 116dd9071887611c19c24aedde270285a4cf97157b846e6343407cf3bcec115a`

![Shows the results of an indicator being investigated in Timeline](../images/indicators-of-compromise/-cases-indicator-in-timeline.png)

The auto-generated query contains the indicator field-value pair (mentioned previously) and the auto-mapped source event field-value pair, which is:

`file.hash.sha256 : 116dd9071887611c19c24aedde270285a4cf97157b846e6343407cf3bcec115a`

The query results show an alert with a matching `file.hash.sha256` field value, which may indicate suspicious or malicious activity in the environment.
- -## Attach indicators to cases - -Attaching indicators to cases provides more context and available actions for your investigations. This feature allows you to easily share or escalate threat intelligence to other teams. - -To add indicators to cases: - -1. From the Indicators table, click the **More actions** () menu. Alternatively, open an indicator's details, then select **Take action**. -1. Select one of the following: - - * **Add to existing case**: From the **Select case** dialog box, select the case to which you want to attach the indicator. - * **Add to new case**: Configure the case details. Refer to Open a new case to learn more about opening a new case. - - The indicator is added to the case as a new comment. - -![An indicator attached to a case](../images/indicators-of-compromise/-cases-indicator-added-to-case.png) - -
- -### Review indicator details in cases - -When you attach an indicator to a case, the indicator is added as a new comment with the following details: - -* **Indicator name**: Click the linked name to open the Indicator details flyout, which contains the following tabs: - * **Overview**: A summary of the threat indicator, including its name and type, which threat intelligence feed it came from, and additional relevant data. - - - Some threat intelligence feeds provide [Traffic Light Protocol (TLP) markings](https://www.cisa.gov/tlp#:~:text=Introduction,shared%20with%20the%20appropriate%20audience). The `TLP Marking` and `Confidence` fields will be empty if the feed doesn't provide that data. - - - * **Table**: The indicator data in table format. - * **JSON**: The indicator data in JSON format. -* **Feed name**: The threat feed from which the indicator was ingested. -* **Indicator type**: The indicator type, for example, `file` or `.exe`. - -
- -### Remove indicators from cases -To remove an indicator attached to a case, click the **More actions** () menu → **Delete attachment** in the case comment. - -![Removing an indicator from a case](../images/indicators-of-compromise/-cases-remove-indicator.png) - -
- -## Use data from indicators to expand the blocklist - -Add indicator values to the blocklist to prevent selected applications from running on your hosts. You can use MD5, SHA-1, or SHA-256 hash values from `file` type indicators. - -You can add indicator values to the blocklist from the Indicators table or the Indicator details flyout. From the Indicators table, select the **More actions** () menu → **Add blocklist entry**. Alternatively, open an indicator's details, then select the **Take action** menu → **Add blocklist entry**. - - -Refer to Blocklist for more information about blocklist entries. - - diff --git a/docs/serverless/investigate/investigate-events.mdx b/docs/serverless/investigate/investigate-events.mdx deleted file mode 100644 index c32d199fb1..0000000000 --- a/docs/serverless/investigate/investigate-events.mdx +++ /dev/null @@ -1,20 +0,0 @@ ---- -slug: /serverless/security/investigate-events -title: Investigation tools -description: Investigate security events and track security issues in ((elastic-sec)). -tags: [ 'serverless', 'security', 'overview' ] -status: in review ---- - - -
- -The following sections describe tools for investigating security events and tracking security issues directly in ((elastic-sec)). - - -These features are available in the ((security-app))'s side navigation menu: - -* **Cases**: Track investigation details about security issues. -* **Investigations** → **Timelines**: Workspace for investigations and threat hunting. -* **Investigations** → **Osquery**: Run live and scheduled queries on operating systems. -* **Intelligence**: Indicators of compromise used for threat intelligence. diff --git a/docs/serverless/investigate/timeline-object-schema.mdx b/docs/serverless/investigate/timeline-object-schema.mdx deleted file mode 100644 index 67ccc14f15..0000000000 --- a/docs/serverless/investigate/timeline-object-schema.mdx +++ /dev/null @@ -1,805 +0,0 @@ ---- -slug: /serverless/security/timeline-object-schema -title: Timeline schema -description: A list of JSON elements inside the timeline object. -tags: [ 'serverless', 'security', 'reference' ] -status: in review ---- - - -
- -The Timeline schema lists all the JSON fields and objects required to create a Timeline or a Timeline template using the Create Timeline API. - - -All column, dropzone, and filter fields must be -[ECS fields](((ecs-ref))). - - -This screenshot maps the Timeline UI components to their JSON objects: - -![](../images/timeline-object-schema/-reference-timeline-object-ui.png) - -1. Title (`title`) -2. Global notes (`globalNotes`) -3. Data view (`dataViewId`) -4. KQL bar query (`kqlQuery`) -5. Time filter (`dateRange`) -6. Additional filters (`filters`) -7. KQL bar mode (`kqlMode`) -8. Dropzone (each clause is contained in its own `dataProviders` object) -9. Column headers (`columns`) -10. Event-specific notes (`eventNotes`) - - - - `columns` - columns[] - - The Timeline's - columns. - - - - - `created` - Float - - The time the Timeline was created, using a 13-digit Epoch - timestamp. - - - - - `createdBy` - String - - The user who created the Timeline. - - - - - - `dataProviders` - - - dataProviders[] - - Object containing dropzone query - clauses. - - - - - `dataViewId` - String - - ID of the Timeline's Data View, for example: `"dataViewId":"security-solution-default"`. - - - - - `dateRange` - dateRange - - The Timeline's search - period: - - * `end`: The time up to which events are searched, using a 13-digit Epoch - timestamp. - - * `start`: The time from which events are searched, using a 13-digit Epoch - timestamp. - - - - - - - `description` - String - - The Timeline's description. - - - - - `eventNotes` - - eventNotes[] - - - - Notes added to specific events in the Timeline. - - - - - `eventType` - String - - Event types displayed in - the Timeline, which can be: - - * `All data sources` - * `Events`: Event sources only - * `Detection Alerts`: Detection alerts only - - - - - - - `favorite` - favorite[] - - Indicates when and who marked a - Timeline as a favorite. - - - - - `filters` - filters[] - - Filters used - in addition to the dropzone query. - - - - - - `globalNotes` - - - globalNotes[] - - Global notes added to the Timeline. - - - - - `kqlMode` - String - - Indicates whether the KQL bar - filters the dropzone query results or searches for additional results, where: - - * `filter`: filters dropzone query results - * `search`: displays additional search results - - - - - - - `kqlQuery` - kqlQuery - - KQL bar - query. - - - - - `pinnedEventIds` - pinnedEventIds[] - - IDs of events pinned to the Timeline's - search results. - - - - - `savedObjectId` - String - - The Timeline's saved object ID. - - - - - `savedQueryId` - String - - If used, the saved query ID used to filter or search - dropzone query results. - - - - - `sort` - sort - - Object indicating how rows are sorted in the Timeline's grid: - - * `columnId` (string): The ID of the column used to sort results. - * `sortDirection` (string): The sort direction, which can be either `desc` or - `asc`. - - - - - - - `templateTimelineId` - String - - A unique ID (UUID) for Timeline templates. For - Timelines, the value is `null`. - - - - - `templateTimelineVersion` - Integer - - Timeline template version number. For - Timelines, the value is `null`. - - - - - `timelineType` - String - - Indicates whether the - Timeline is a template or not, where: - - * `default`: Indicates a Timeline used to actively investigate events. - * `template`: Indicates a Timeline template used when detection rule alerts are - investigated in Timeline. - - - - - - - `title` - String - - The Timeline's title. 
- - - - - `updated` - Float - - The last time the Timeline was updated, using a - 13-digit Epoch timestamp. - - - - - `updatedBy` - String - - The user who last updated the Timeline. - - - - - `version` - String - - The Timeline's version. - - - - - -
- -## columns object - - - - `aggregatable` - Boolean - - Indicates whether the field can be aggregated across - all indices (used to sort columns in the UI). - - - - - `category` - String - - The ECS field set to which the field belongs. - - - - - `description` - String - - UI column field description tooltip. - - - - - `example` - String - - UI column field example tooltip. - - - - - `indexes` - String - - Security indices in which the field exists and has the same - ((es)) type. `null` when all the security indices have the field with the same - type. - - - - - `id` - String - - ECS field name, displayed as the column header in the UI. - - - - - `type` - String - - The field's type. - - - - - -
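For example, a single entry in the `columns` array might look like the following sketch; the description and example values here are illustrative rather than taken from a real index:

```json
{
  "aggregatable": true,
  "category": "host",
  "description": "Name of the host.",
  "example": "web-server-01",
  "id": "host.name",
  "indexes": null,
  "type": "string"
}
```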
- -## dataProviders object - - - - `and` - dataProviders[] - - Array containing dropzone query clauses using `AND` - logic. - - - - - `enabled` - Boolean - - Indicates if the dropzone query clause is enabled. - - - - - `excluded` - Boolean - - Indicates if the dropzone query clause uses `NOT` logic. - - - - - `id` - String - - The dropzone query clause's unique ID. - - - - - `name` - String - - The dropzone query clause's name (the clause's value - when Timelines are exported from the UI). - - - - - `queryMatch` - queryMatch - - The dropzone query clause: - - * `field` (string): The field used to search Security indices. - * `operator` (string): The clause's operator, which can be: - * `:` - The `field` has the specified `value`. - * `:*` - The field exists. - - - * `value` (string): The field's value used to match results. - - - - - - - -
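Putting the fields above together, a single dropzone clause that matches events where `host.name` is `win-server` might look like this sketch (the `id` and `name` values are illustrative):

```json
{
  "and": [],
  "enabled": true,
  "excluded": false,
  "id": "timeline-host-name-win-server",
  "name": "win-server",
  "queryMatch": {
    "field": "host.name",
    "operator": ":",
    "value": "win-server"
  }
}
```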
- -## eventNotes object - - - - `created` - Float - - The time the note was created, using a 13-digit Epoch - timestamp. - - - - - `createdBy` - String - - The user who added the note. - - - - - `eventId` - String - - The ID of the event to which the note was added. - - - - - `note` - String - - The note's text. - - - - - `noteId` - String - - The note's ID - - - - - `timelineId` - String - - The ID of the Timeline to which the note was added. - - - - - `updated` - Float - - The last time the note was updated, using a - 13-digit Epoch timestamp. - - - - - `updatedBy` - String - - The user who last updated the note. - - - - - `version` - String - - The note's version. - - - - - -
- -## favorite object - - - - `favoriteDate` - Float - - The time the Timeline was marked as a favorite, using a - 13-digit Epoch timestamp. - - - - - `fullName` - String - - The full name of the user who marked the Timeline as - a favorite. - - - - - `keySearch` - String - - `userName` encoded in Base64. - - - - - `userName` - String - - The username of the user who marked the - Timeline as a favorite. - - - - - -
- -## filters object - - - - `exists` - String - - [Exists term query](((ref))/query-dsl-exists-query.html) for the - specified field (`null` when undefined). For example, `{"field":"user.name"}`. - - - - - `meta` - meta - - Filter details: - - * `alias` (string): UI filter name. - * `disabled` (boolean): Indicates if the filter is disabled. - * `key`(string): Field name or unique string ID. - * `negate` (boolean): Indicates if the filter query clause uses `NOT` logic. - * `params` (string): Value of `phrase` filter types. - * `type` (string): Type of filter. For example, `exists` and `range`. For more - information about filtering, see [Query DSL](((ref))/query-dsl.html). - - - - - - - `match_all` - String - - [Match all term query](((ref))/query-dsl-match-all-query.html) - for the specified field (`null` when undefined). - - - - - `query` - String - - [DSL query](((ref))/query-dsl.html) (`null` when undefined). For - example, `{"match_phrase":{"ecs.version":"1.4.0"}}`. - - - - - `range` - String - - [Range query](((ref))/query-dsl-range-query.html) (`null` when - undefined). For example, `{"@timestamp":{"gte":"now-1d","lt":"now"}}"`. - - - - - -
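As a sketch, a single `filters` entry for a phrase filter on `ecs.version: 1.4.0` could look like the following; note that, per the field types above, `query`, `params`, and the other query fields are stored as JSON-encoded strings, and unused fields are `null`:

```json
{
  "exists": null,
  "match_all": null,
  "meta": {
    "alias": null,
    "disabled": false,
    "key": "ecs.version",
    "negate": false,
    "params": "{\"query\":\"1.4.0\"}",
    "type": "phrase"
  },
  "query": "{\"match_phrase\":{\"ecs.version\":\"1.4.0\"}}",
  "range": null
}
```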
- -## globalNotes object - - - - `created` - Float - - The time the note was created, using a 13-digit Epoch - timestamp. - - - - - `createdBy` - String - - The user who added the note. - - - - - `note` - String - - The note's text. - - - - - `noteId` - String - - The note's ID - - - - - `timelineId` - String - - The ID of the Timeline to which the note was added. - - - - - `updated` - Float - - The last time the note was updated, using a - 13-digit Epoch timestamp. - - - - - `updatedBy` - String - - The user who last updated the note. - - - - - `version` - String - - The note's version. - - - - - -
- -## kqlQuery object - - - - `filterQuery` - filterQuery - - Object containing query details: - - * `kuery`: Object containing the query's clauses and type: - * `expression`(string): The query's clauses. - * `kind` (string): The type of query, which can be `kuery` or `lucene`. - * `serializedQuery` (string): The query represented in JSON format. - - - - - diff --git a/docs/serverless/investigate/timeline-templates-ui.mdx b/docs/serverless/investigate/timeline-templates-ui.mdx deleted file mode 100644 index 1ebcc84146..0000000000 --- a/docs/serverless/investigate/timeline-templates-ui.mdx +++ /dev/null @@ -1,157 +0,0 @@ ---- -slug: /serverless/security/timeline-templates-ui -title: Timeline templates -description: Attach Timeline templates to detection rules to streamline investigations. -tags: [ 'serverless', 'security', 'how-to', 'analyze', 'manage' ] -status: in review ---- - - -
- -You can attach Timeline templates to detection rules. When attached, the rule's alerts use the template when they are investigated in Timeline. This enables immediately viewing the alert's most interesting fields when you start an investigation. - -Templates can include two types of filters: - -* **Regular filter**: Like other KQL filters, defines both the source event field and its value. For example: `host.name : "win-server"`. - -* **Template filter**: Only defines the event field and uses a placeholder - for the field's value. When you investigate an alert in Timeline, the field's value is taken from the alert. - -For example, if you define the `host.name: "{host.name}"` template filter, when alerts generated by the rule are investigated in Timeline, the alert's -`host.name` value is used in the filter. If the alert's `host.name` value is -`Linux_stafordshire-061`, the Timeline filter is: -`host.name: "Linux_stafordshire-061"`. - - -For information on how to add Timeline templates to rules, refer to Create a detection rule. - - -When you load ((elastic-sec)) prebuilt rules, ((elastic-sec)) also loads a selection of prebuilt Timeline templates, which you can attach to detection rules. **Generic** templates use broad KQL queries to retrieve event data, and **Comprehensive** templates use detailed KQL queries to retrieve additional information. The following prebuilt templates appear by default: - -* **Alerts Involving a Single Host Timeline**: Investigate detection alerts involving a single host. -* **Alerts Involving a Single User Timeline**: Investigate detection alerts involving a single user. -* **Generic Endpoint Timeline**: Investigate ((elastic-endpoint)) detection alerts. -* **Generic Network Timeline**: Investigate network-related detection alerts. -* **Generic Process Timeline**: Investigate process-related detection alerts. -* **Generic Threat Match Timeline**: Investigate threat indicator match detection alerts. -* **Comprehensive File Timeline**: Investigate file-related detection alerts. -* **Comprehensive Network Timeline**: Investigate network-related detection alerts. -* **Comprehensive Process Timeline**: Investigate process-related detection alerts. -* **Comprehensive Registry Timeline**: Investigate registry-related detection alerts. - - -You can duplicate prebuilt templates and use them as -a starting point for your own custom templates. - - -
- -## Timeline template legend - -When you add filters to a Timeline template, the items are color coded to -indicate which type of filter is added. Additionally, you change Timeline -filters to template filters as you build your template. - -Regular Timeline filter - : Clicking **Convert to template field** changes the filter to a template filter: - - - -Template filter - - : - When you convert a template to a Timeline, template filters with placeholders are disabled: - - - - To enable the filter, either specify a value or change it to a field's existing filter (refer to Edit existing filters). - -
- -## Create a Timeline template - -1. Choose one of the following: - * Go to **Investigations** → **Timelines**. Click the **Templates** tab, then click **Create new Timeline template**. - * Go to the Timeline bar (which is at the bottom of most pages), click the button, then click **Create new Timeline template**. - * From an open Timeline or Timeline template, click **New** → **New Timeline template**. - -1. Add filters to the new Timeline template. Click **Add field**, and select the required option: - - * **Add field**: Add a regular Timeline filter. - * **Add template field**: Add a template filter with a value placeholder. - - - You can also drag and send items to the template from the **Overview**, **Hosts**, **Network**, and **Alerts** pages. - - - ![An example of a Timeline filter](../images/timeline-templates-ui/-events-create-a-timeline-template-field.png) - -1. Click **Save** to give the template a title and description. - -**Example** - -To create a template for process-related alerts on a specific host: - -* Add a regular filter for the host name: - `host.name: "Linux_stafordshire-061"` - -* Add template filter for process names: `process.name: "{process.name}"` - -![](../images/timeline-templates-ui/-events-template-query-example.png) - -When alerts generated by rules associated with this template are investigated -in Timeline, the host name is `Linux_stafordshire-061`, whereas the process name -value is retrieved from the alert's `process.name` field. - -
- -## Manage existing Timeline templates - -You can view, duplicate, export, delete, and create templates from existing Timelines: - -1. Go to **Investigations** → **Timelines** → **Templates**. - - ![](../images/timeline-templates-ui/-events-all-actions-timeline-ui.png) - -1. Click the **All actions** icon in the relevant row, and then select the action: - - * **Create timeline from template** (refer to Create a Timeline template) - * **Duplicate template** - * **Export selected** (refer to Export and import Timeline templates) - * **Delete selected** - * **Create query rule from timeline** (only available if the Timeline contains a KQL query) - * **Create EQL rule from timeline** (only available if the Timeline contains an EQL query) - - -To perform the same action on multiple templates, select templates, then the required action from the **Bulk actions** menu. - - - -You cannot delete prebuilt templates. - - -
- -## Export and import Timeline templates - -You can import and export Timeline templates, which enables importing templates from one {/*space or (*/}((elastic-sec)) instance to another. Exported templates are saved in an `ndjson` file. - -1. Go to **Investigations** → **Timelines** → **Templates**. -1. To export templates, do one of the following: - - * To export one template, click the **All actions** icon in the relevant row and then select **Export selected**. - - * To export multiple templates, select all the required templates and then click **Bulk actions** → **Export selected**. - -1. To import templates, click **Import**, then select or drag and drop the template `ndjson` file. - - - Each template object in the file must be represented in a single line. - Multiple template objects are delimited with newlines. - - - -You cannot export prebuilt templates. - - diff --git a/docs/serverless/investigate/timelines-ui.mdx b/docs/serverless/investigate/timelines-ui.mdx deleted file mode 100644 index b3c74e1600..0000000000 --- a/docs/serverless/investigate/timelines-ui.mdx +++ /dev/null @@ -1,256 +0,0 @@ ---- -slug: /serverless/security/timelines-ui -title: Timeline -description: Investigate events and complex threats in your network. -tags: [ 'serverless', 'security', 'how-to', 'analyze', 'manage' ] -status: in review ---- - - -
- -Use Timeline as your workspace for investigations and threat hunting. -You can add alerts from multiple indices to a Timeline to facilitate advanced investigations. - -You can drag or send fields of interest to a Timeline to create the desired query. For example, you can add fields from tables and histograms -on the **Overview**, **Alerts**, **Hosts**, and **Network** pages, as well as from -other Timelines. Alternatively, you can add a query directly in Timeline -by expanding the query builder and clicking **+ Add field**. - -![example Timeline with several events](../images/timelines-ui/-events-timeline-ui-updated.png) - -In addition to Timelines, you can create and attach Timeline templates to -detection rules. Timeline templates allow you to -define the source event fields used when you investigate alerts in -Timeline. You can select whether the fields use predefined values or values -retrieved from the alert. For more information, refer to Create Timeline templates. - -
- -## Create new or open existing Timeline - -To make a new Timeline, choose one of the following: - -* Go to the Timelines page (**Investigations** → **Timelines**), then click **Create new Timeline**. -* Go to the Timeline bar (which is at the bottom of most pages), click the button, then click **Create new Timeline**. -* From an open Timeline or Timeline template, click **New** → **New Timeline**. - -To open an existing Timeline, choose one of the following: -* Go to the Timelines page, then click a Timeline's title. -* Go to the Timeline bar, click the button, then click **Open Timeline**. -* From an open Timeline or Timeline template, click **Open**, then select the appropriate Timeline. - -To avoid losing your changes, you must save the Timeline before moving to a different ((security-app)) page. If you change an existing Timeline, you can use the **Save as new timeline** toggle to make a new copy of the Timeline, without overwriting the original one. - - -Click the star icon () to favorite your Timeline and quickly find it later. - - -
- -## View and refine Timeline results - -You can select whether Timeline displays detection alerts and other raw events, or just alerts. By default, Timeline displays both raw events and alerts. To hide raw events and display alerts only, click **Data view** to the left of the KQL query bar, then select **Show only detection alerts**. - -
- -## Inspect an event or alert -To further inspect an event or detection alert, click the **View details** button. A flyout with event or alert details appears. - -
- -## Configure Timeline event context and display - -Many types of events automatically appear in preconfigured views that provide relevant -contextual information, called **Event Renderers**. All event renderers are turned off by default. To turn them on, use the **Event renderers** toggle at the top of the results pane. To only turn on specific event renderers, click the gear () icon next to the toggle, and select the ones you want enabled. Close the **Customize event renderers** pane when you're done. Your changes are automatically applied to Timeline. - -![example timeline with the event renderer highlighted](../images/timelines-ui/-events-timeline-ui-renderer.png) - -The example above displays the Flow event renderer, which highlights the movement of -data between its source and destination. If you see a particular part of the rendered event that -interests you, you can drag it up to the drop zone below the query bar for further investigation. - -You can also modify a Timeline's display in other ways: - -* Add and remove fields from Timeline -* Create runtime fields and display them in the Timeline -* Reorder and resize columns -* Copy a column name or values to a clipboard -* Change how the name, value, or description of a field are displayed in Timeline -* View the Timeline in full screen mode -* Add or delete notes on individual events -* Add or delete investigation notes on the entire Timeline -* Pin interesting events to the Timeline - -
- -## Add and remove fields from Timeline - -The Timeline table shows fields that are available for alerts and events in the selected data view. You can modify the table to display fields that interest you. Use the sidebar to search for specific fields or scroll through it to find fields of interest. Fields that you select display as columns in the table. - -To add a field from the sidebar, hover over it, and click the **Add field as a column** button (), or drag and drop the field into the table. To remove a field, hover over it, and click the **Remove field as a column** button (). - - - -
- -## Use the Timeline query builder - -Expand the query builder by clicking the query builder button () to the right of the KQL query bar. Drop in fields to build a query that filters Timeline results. The fields' relative placement specifies their logical relationships: horizontally adjacent filters use `AND`, while vertically adjacent filters use `OR`. - - -Collapse the query builder and provide more space for Timeline results by clicking the query builder button (). - - -
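For example, dropping `host.name` and `event.category` filters next to each other while placing a `user.name` filter on its own row produces logic equivalent to the following KQL (the field values are illustrative):

```
(host.name : "web-01" and event.category : "process") or user.name : "svc-backup"
```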
- -## Edit existing filters - -Click a filter to access additional operations such as **Add filter**, **Clear all**, **Load saved query**, and more: - - - -Here are examples of various types of filters: - -Field with value - : Filters for events with the specified field value: - - - -Field exists - : Filters for events containing the specified field: - - - -Exclude results - : Filters for events that do not contain the specified field value - (`field with value` filter) or the specified field (`field exists` filter): - - - -Temporarily disable - : The filter is not used in the query until it is enabled again: - - - -Filter for field present - : Converts a `field with value` filter to a `field exists` filter. - - -When you convert a Timeline template to a -Timeline, some fields may be disabled. For more information, refer to -Timeline template legend. - - -
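In KQL terms, the first three filter types above correspond to expressions like these (the field and value are illustrative); the first line is a `field with value` filter, the second a `field exists` filter, and the third an excluded version of the first:

```
host.name : "win-server"
host.name : *
not host.name : "win-server"
```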
- -## Attach Timeline to a case - -To attach a Timeline to a new or existing case, open it, click **Attach to case** in the upper right corner, -then select either **Attach to new case** or **Attach to existing case**. - -To learn more about cases, refer to Cases. - -
- -## Manage existing Timelines - -You can view, duplicate, export, delete, and create templates from existing Timelines: - -1. Go to **Investigations** → **Timelines**. -1. Click the **All actions** menu in the desired row, then select an action: - -* **Create template from timeline** (refer to Create Timeline templates) -* **Duplicate timeline** -* **Export selected** (refer to Export and import Timelines) -* **Delete selected** -* **Create query rule from timeline** (only available if the Timeline contains a KQL query) -* **Create EQL rule from timeline** (only available if the Timeline contains an EQL query) - - -To perform an action on multiple Timelines, first select the Timelines, -then select an action from the **Bulk actions** menu. - - -
- -## Export and import Timelines - -You can export and import Timelines, which enables you to share Timelines from one {/* space or */} ((elastic-sec)) instance to another. Exported Timelines are saved as `.ndjson` files. - -To export Timelines: - -* Go to **Investigations** → **Timelines**. -* Either click the **All actions** menu in the relevant row and select **Export selected**, or select multiple Timelines and then click **Bulk actions** → **Export selected**. - -To import Timelines: - -* Click **Import**, then select or drag and drop the relevant `.ndjson` file. - - - Multiple Timeline objects are delimited with newlines. - - -
- -## Filter Timeline results with EQL -Use the **Correlation** tab to investigate Timeline results with [EQL queries](((ref))/eql.html). - -When forming EQL queries, you can write a basic query to return a list of events and alerts. Or, you can create sequences of EQL queries to view matched, ordered events across multiple event categories. Sequence queries are useful for identifying and predicting related events. They can also provide a more complete picture of potential adversary behavior in your environment, which you can use to create or update rules and detection alerts. - -The following image shows what matched ordered events look like in the Timeline table. Events that belong to the same sequence are matched together in groups and shaded red or blue. Matched events are also ordered from oldest to newest in each sequence. - -![a Timeline's correlation tab](../images/timelines-ui/-events-correlation-tab-eql-query.png) - -From the **Correlation** tab, you can also do the following: - -* Specify the date and time range that you want to investigate. -* Reorder the columns and choose which fields to display. -* Choose a data view and whether to show detection alerts only. - - -
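For reference, the following is a minimal sketch of the kind of sequence query you might run from the **Correlation** tab. The event categories, field names, and `maxspan` window are assumptions, so adjust them to match the data you actually collect:

```eql
// Hypothetical example: a powershell.exe process that starts and then makes
// an outbound connection on port 443 within two minutes.
sequence by process.entity_id with maxspan=2m
  [process where event.type == "start" and process.name == "powershell.exe"]
  [network where destination.port == 443]
```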
- -## Use ((esql)) to investigate events - -The [Elasticsearch Query Language (((esql)))](((ref))/esql.html) provides a powerful way to filter, transform, and analyze event data stored in ((es)). ((esql)) queries use "pipes" to manipulate and transform data in a step-by-step fashion. This approach allows you to compose a series of operations, where the output of one operation becomes the input for the next, enabling complex data transformations and analysis. - -You can use ((esql)) in Timeline by opening the **((esql))** tab. From there, you can: - -- Write an ((esql)) query to explore your events. For example, start with the following query, then iterate on it to tailor your results: - - ```esql - FROM .alerts-security.alerts-default,apm-*-transaction*,auditbeat-*,endgame-*,filebeat-*,logs-*,packetbeat-*,traces-apm*,winlogbeat-*,-*elastic-cloud-logs-* - | LIMIT 10 - | KEEP @timestamp, message, event.category, event.action, host.name, source.ip, destination.ip, user.name - ``` - - This query does the following: - - - It starts by querying documents within the Security alert index (`.alerts-security.alerts-default`) and indices specified in the Security data view. - - Then, the query limits the output to the top 10 results. - - Finally, it keeps the default Timeline fields (`@timestamp`, `message`, `event.category`, `event.action`, `host.name`, `source.ip`, `destination.ip`, and `user.name`) in the output. - - - When querying indices that tend to be large (for example, `logs-*`), performance can be impacted by the number of fields returned in the output. To optimize performance, we recommend using the [`KEEP`](((ref))/esql-commands.html#esql-keep) command to specify fields that you want returned. For example, add the clause `KEEP @timestamp, user.name` to the end of your query to specify that you only want the `@timestamp` and `user.name` fields returned. - - - - - * An error message displays when the query bar is empty. - * When specifying data sources for an ((esql)) query, autocomplete doesn't suggest hidden indices, such as `.alerts-*`. You must manually enter the index name or pattern. - - - -- Click the help icon () on the far right side of the query editor to open the in-product reference documentation for all ((esql)) commands and functions. -- Visualize query results using Discover functionality. - - - -
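As a sketch of the `KEEP` recommendation above, the following query narrows the output to a handful of fields before returning results. The `logs-*` source and the ECS fields shown here are assumptions, so substitute the indices and fields that exist in your environment:

```esql
FROM logs-*
// Keep only failed authentication events.
| WHERE event.category == "authentication" AND event.outcome == "failure"
// Return just the fields needed for triage to limit the data that is processed.
| KEEP @timestamp, user.name, source.ip, host.name, event.action
| SORT @timestamp DESC
| LIMIT 25
```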
- -## Additional ((esql)) resources - -To get started using ((esql)), read the tutorial for [using ((esql)) in ((kib))](((ref))/esql-kibana.html). Much of the functionality available in ((kib)) is also available in Timeline. - -To find examples of using ((esql)) for threat hunting, check out [our blog](https://www.elastic.co/blog/introduction-to-esql-new-query-language-flexible-iterative-analytics). diff --git a/docs/serverless/osquery/alerts-run-osquery.mdx b/docs/serverless/osquery/alerts-run-osquery.mdx deleted file mode 100644 index 80e5cecf3f..0000000000 --- a/docs/serverless/osquery/alerts-run-osquery.mdx +++ /dev/null @@ -1,61 +0,0 @@ ---- -slug: /serverless/security/alerts-run-osquery -title: Run Osquery from alerts -description: Run live queries against an alert's host to investigate potential security threats and system compromises. -tags: [ 'serverless', 'security', 'how-to', 'analyze' ] -status: in review ---- - - -
- -Run live queries on hosts associated with alerts to learn more about your infrastructure and operating systems. For example, with Osquery, you can search your system for indicators of compromise that might have contributed to the alert. You can then use this data to inform your investigation and alert triage efforts. - - - -* The [Osquery manager integration](((kibana-ref))/manage-osquery-integration.html) must be installed. -* ((agent))'s [status](((fleet-guide))/monitor-elastic-agent.html) must be `Healthy`. Refer to [((fleet)) Troubleshooting](((fleet-guide))/fleet-troubleshooting.html) if it isn't. -* You must have the appropriate user role to use this feature. - - - -To run Osquery from an alert: - -1. Do one of the following from the Alerts table: - * Click the **View details** button to open the Alert details flyout, then click **Take action → Run Osquery**. - * Select the **More actions** menu (), then select **Run Osquery**. -1. Choose to run a single query or a query pack. -1. Select one or more ((agent))s or groups to query. Start typing in the search field to get suggestions for ((agent))s by name, ID, platform, and policy. - - - The host associated with the alert is automatically selected. You can specify additional hosts to query. - - -1. Specify the query or pack to run: - * **Query**: Select a saved query or enter a new one in the text box. After you enter the query, you can expand the **Advanced** section to set a timeout period for the query, and view or set [mapped ECS fields](((kibana-ref))/osquery.html#osquery-map-fields) included in the results from the live query (optional). - - - Overwriting the query's default timeout period allows you to support queries that take longer to run. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `900`. - - - - Use placeholder fields to dynamically add existing alert data to your query. - - - * **Pack**: Select from available query packs. After you select a pack, all of the queries in the pack are displayed. - - - Refer to [prebuilt packs](((kibana-ref))/osquery.html#osquery-prebuilt-packs-queries) to learn about using and managing Elastic prebuilt packs. - - - - - -1. Click **Submit**. Query results will display within the flyout. - - - Refer to Examine Osquery results for more information about query results. - - -1. Click **Save for later** to save the query for future use (optional). - diff --git a/docs/serverless/osquery/invest-guide-run-osquery.mdx b/docs/serverless/osquery/invest-guide-run-osquery.mdx deleted file mode 100644 index 7feada211f..0000000000 --- a/docs/serverless/osquery/invest-guide-run-osquery.mdx +++ /dev/null @@ -1,75 +0,0 @@ ---- -slug: /serverless/security/invest-guide-run-osquery -title: Run Osquery from investigation guides -description: Add and run live queries from a rule's investigation guide. -tags: [ 'serverless', 'security', 'how-to', 'analyze' ] -status: in review ---- - - -
- -Detection rule investigation guides suggest steps for triaging, analyzing, and responding to potential security issues. When you build a custom rule, you can also set up an investigation guide that incorporates Osquery. This allows you to run live queries from a rule's investigation guide as you analyze alerts produced by the rule. - - - -* The [Osquery manager integration](((kibana-ref))/manage-osquery-integration.html) must be installed. -* ((agent))'s [status](((fleet-guide))/monitor-elastic-agent.html) must be `Healthy`. Refer to [((fleet)) Troubleshooting](((fleet-guide))/fleet-troubleshooting.html) if it isn't. -* You must have the appropriate user role to use this feature. - - - -![Shows a live query in an investigation guide](../images/invest-guide-run-osquery/-osquery-osquery-investigation-guide.png) - -
- -## Add live queries to an investigation guide - - -You can only add Osquery to investigation guides for custom rules because prebuilt rules cannot be edited. - - -1. Go to **Rules** → **Detection rules (SIEM)**, select a rule, then click **Edit rule settings** on the rule details page. -1. Select the **About** tab, then expand the rule's advanced settings. -1. Scroll down to the Investigation guide section. In the toolbar, click the **Osquery** button (). - 1. Add a descriptive label for the query; for example, `Search for executables`. - 1. Select a saved query or enter a new one. - - - Use placeholder fields to dynamically add existing alert data to your query. - - - 1. Expand the **Advanced** section to set a timeout period for the query, and view or set [mapped ECS fields](((kibana-ref))/osquery.html#osquery-map-fields) included in the results from the live query (optional). - - - Overwriting the query's default timeout period allows you to support queries that take longer to run. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `900`. - - - - -1. Click **Save changes** to add the query to the rule's investigation guide. - -
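For example, the query saved under a label such as `Search for executables` could look like the following sketch. The `{{process.name}}` placeholder is an assumption and only resolves when the query runs against alerts or events that contain that field:

```sql
-- Return details for any running process whose name matches the alert's process name.
SELECT pid, name, path, cmdline
FROM processes
WHERE name = {{process.name}};
```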
- -## Run live queries from an investigation guide - -1. Go to **Rules** → **Detection rules (SIEM)**, then select a rule to open its details. -1. Go to the About section of the rule details page and click **Investigation guide**. -1. Click the query. The Run Osquery pane displays with the **Query** field autofilled. Do the following: - 1. Select one or more ((agent))s or groups to query. Start typing in the search field to get suggestions for ((agent))s by name, ID, platform, and policy. - 1. Expand the **Advanced** section to set a timeout period for the query, and view or set the [mapped ECS fields](((kibana-ref))/osquery.html#osquery-map-fields) which are included in the live query's results (optional). - - - Overwriting the query's default timeout period allows you to support queries that take longer to run. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `900`. - - -1. Click **Submit** to run the query. Query results display in the flyout. - - - Refer to Examine Osquery results for more information about query results. - - -1. Click **Save for later** to save the query for future use (optional). - - - diff --git a/docs/serverless/osquery/osquery-placeholder-fields.mdx b/docs/serverless/osquery/osquery-placeholder-fields.mdx deleted file mode 100644 index 5c0b1556a6..0000000000 --- a/docs/serverless/osquery/osquery-placeholder-fields.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -slug: /serverless/security/osquery-placeholder-fields -title: Use placeholder fields in Osquery queries -description: Pass data into queries dynamically, to enhance their flexibility and reusability. -tags: [ 'serverless', 'security', 'how-to', 'manage' ] -status: in review ---- - - -
- -Instead of hard-coding alert and event values into Osquery queries, you can use placeholder fields to dynamically pass this data into queries. Placeholder fields function like parameters. You can use placeholder fields to build flexible and reusable queries. - -Placeholder fields work in single queries or query packs. They're also supported in the following features: - -* Live queries -* Osquery Response Actions -* Investigation guides using Osquery queries - -
- -## Placeholder field syntax and requirements - -Placeholder fields use [mustache syntax](http://mustache.github.io/) and must be wrapped in double curly brackets (`{{example.field}}`). You can use any field within an event or alert document as a placeholder field. - -Queries with placeholder fields can only run against alerts or events. Otherwise, they will lack the necessary values and the query status will be `error`. - -
### Example query with a placeholder field

The following query uses the `{{host.os.name}}` placeholder field:

```sql
SELECT * FROM os_version WHERE name = {{host.os.name}}
```

When you run the query, the value that's stored in the alert or event's `host.os.name` field will be transferred to the `{{host.os.name}}` placeholder field.
\ No newline at end of file
diff --git a/docs/serverless/osquery/osquery-response-action.mdx b/docs/serverless/osquery/osquery-response-action.mdx
deleted file mode 100644
index bf2d4857f7..0000000000
--- a/docs/serverless/osquery/osquery-response-action.mdx
+++ /dev/null
@@ -1,88 +0,0 @@
---
slug: /serverless/security/osquery-response-action
title: Add Osquery Response Actions
description: Osquery Response Actions allow you to add live queries to custom query rules so you can automatically collect data on systems the rules are monitoring.
tags: ["serverless","security","how-to","manage"]
status: in review
---
- - - -Osquery Response Actions allow you to add live queries to custom query rules so you can automatically collect data on systems the rule is monitoring. Use this data to support your alert triage and investigation efforts. - - - -* Osquery Response Actions require the Endpoint Protection Complete . -* The [Osquery manager integration](((kibana-ref))/manage-osquery-integration.html) must be installed. -* ((agent))'s [status](((fleet-guide))/monitor-elastic-agent.html) must be `Healthy`. Refer to [((fleet)) Troubleshooting](((fleet-guide))/fleet-troubleshooting.html) if it isn't. -* You must have the appropriate user role to use this feature. -* You can only add Osquery Response Actions to custom query rules. - - - -![The Osquery response action](../images/osquery-response-action/-osquery-available-response-actions-osquery.png) - -
- -## Add Osquery Response Actions to rules - -You can add Osquery Response Actions to new or existing custom query rules. Queries run every time the rule executes. - -1. Choose one of the following: - * **New rule**: When you are on the last step of custom query rule creation, go to the Response Actions section and click the **Osquery** icon. - * **Existing rule**: Edit the rule's settings, then go to the **Actions** tab. In the tab, click the **Osquery** icon under the Response Actions section. - - - If the rule's investigation guide is using an Osquery query, you'll be asked if you want to add the query as an Osquery Response Action. Click **Add** to add the investigation guide's query to the rule's Osquery Response Action. - - -2. Specify whether you want to set up a single live query or a pack: - * **Query**: Select a saved query or enter a new one. After you enter the query, you can expand the **Advanced** section to set a timeout period for the query, and view or set [mapped ECS fields](((kibana-ref))/osquery.html#osquery-map-fields) included in the results from the live query. Mapping ECS fields is optional. - - - Overwriting the query's default timeout period allows you to support queries that take longer to run. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `900`. - - - - You can use placeholder fields to dynamically add alert data to your query. - - - * **Pack**: Select from available query packs. After you select a pack, all of the queries in the pack are displayed. - - - Refer to [prebuilt packs](((kibana-ref))/osquery.html#osquery-prebuilt-packs-queries) to learn about using and managing Elastic prebuilt packs. - - - ![Shows how to set up a single query](../images/osquery-response-action/-osquery-setup-single-query.png) - -3. Click the **Osquery** icon to add more live queries (optional). -4. Click **Create & enable rule** (for a new rule) or **Save changes** (for existing rules) to finish adding the queries. - -
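As an illustration, a single query added as a response action might collect account details for the user involved in each alert. This sketch assumes the rule's alerts populate the `user.name` field, which is passed in through a placeholder:

```sql
-- Look up the local account that matches the user from the alert.
SELECT uid, gid, username, directory, shell
FROM users
WHERE username = {{user.name}};
```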
- -## Edit Osquery Response Actions - -If you want to choose a different query or query pack for the Osquery Response Action to use, edit the rule to update the Response Action. - - -If you edited a saved query or query pack that an Osquery Response Action is using, you must reselect the saved query or query pack on the related Osquery Response Action. Query changes are not automatically applied to Osquery Response Actions. - - -1. Edit the rule's settings, then go to the **Actions** tab. -1. Modify the settings for Osquery Response Actions you've added. -1. Click **Save changes**. - -
- -## Find query results - -When a rule generates an alert, Osquery automatically collects data on the host. Query results are displayed within the **Response Results** tab in the Alert details flyout. The number next to the **Response Results** tab represents the number of queries attached to the rule, in addition to endpoint response actions run by the rule. - - -Refer to Examine Osquery results for more information about query results. - - - diff --git a/docs/serverless/osquery/use-osquery.mdx b/docs/serverless/osquery/use-osquery.mdx deleted file mode 100644 index a89bce92a2..0000000000 --- a/docs/serverless/osquery/use-osquery.mdx +++ /dev/null @@ -1,19 +0,0 @@ ---- -slug: /serverless/security/query-operating-systems -title: Osquery -description: Integrate Osquery with ((elastic-sec)) for comprehensive data collection and security monitoring. -tags: [ 'serverless', 'security', 'overview' ] -status: in review ---- - - -
- -Osquery is an open source tool that lets you use SQL to query operating systems like a database. When you add the [Osquery manager integration](((kibana-ref))/manage-osquery-integration.html) to an ((agent)) policy, Osquery is deployed to all agents assigned to that policy. After completing this setup, you can [run live queries and schedule recurring queries](((kibana-ref))/osquery.html) for agents and begin gathering data from your entire environment. - -Osquery is supported for Linux, macOS, and Windows. You can use it with ((elastic-sec)) to perform real-time incident response, threat hunting, and monitoring to detect vulnerability or compliance issues. The following Osquery features are available from ((elastic-sec)): - -* Osquery Response Actions - Use Osquery Response Actions to add live queries to custom query rules. -* Live queries from investigation guides - Incorporate live queries into investigation guides to enhance your research capabilities while investigating possible security issues. -* Live queries from alerts - Run live queries against an alert's host to learn more about your infrastructure and operating systems. -* [Osquery settings](((kibana-ref))/osquery.html) - Navigate to **Investigations** → **Osquery** to manage project-level Osquery settings. diff --git a/docs/serverless/osquery/view-osquery-results.mdx b/docs/serverless/osquery/view-osquery-results.mdx deleted file mode 100644 index 1d2791f708..0000000000 --- a/docs/serverless/osquery/view-osquery-results.mdx +++ /dev/null @@ -1,50 +0,0 @@ ---- -slug: /serverless/security/examine-osquery-results -title: Examine Osquery results -description: Analyze results from queries and query packs. -tags: [ 'serverless', 'security', 'how-to', 'analyze' ] -status: in review ---- - - -
Osquery provides relevant, timely data that you can use to better understand and monitor your environment. When you run queries, results are indexed and displayed in the Results table, which you can filter, sort, and interact with.
- -## Results table -The Results table displays results from single queries and query packs. - -
- -### Single query results - -Results for single queries appear on the **Results** tab. When you run a query, the number of agents queried and query status temporarily display in a status bar above the results table. Agent responses can be `Successful`, `Not yet responded` (pending), and `Failed`. - - - -
### Query pack results

Results for each query in the pack appear in the **Results** tab. Click the expand icon () at the far right of each query row to display query results. The number of agents that were queried and their responses are shown for each query. Agent responses are color-coded: green is `Successful`, gray is `Not yet responded` (pending), and red is `Failed`.
- -## Investigate query results - -From the results table, you can: - -* Click **View in Discover** () to explore the results in Discover. -* Click **View in Lens** () to navigate to Lens, where you can use the drag-and-drop **Lens** editor to create visualizations. -* Click **Timeline** () to investigate a single query result in Timeline or **Add to timeline investigation** to investigate all results. This option is only available for single query results. - - When you open all results in Timeline, the events in Timeline are filtered based on the `action_ID` generated by the Osquery query. - -* Click **Add to Case** () to add the query results to a new or existing case. If you ran a live query from an alert, the alert and query results are added to the case as comments. -* Click the view details icon () to examine the query ID and statement. -* View more information about the request, such as failures, by opening the **Status** tab. - diff --git a/docs/serverless/partials/in-review-notice.mdx b/docs/serverless/partials/in-review-notice.mdx deleted file mode 100644 index 5a5713d8bd..0000000000 --- a/docs/serverless/partials/in-review-notice.mdx +++ /dev/null @@ -1,13 +0,0 @@ -
-
- - - -
-
- **In review** - - This page has been updated with content for ((serverless-short)) and is ready for review. - -
-
\ No newline at end of file diff --git a/docs/serverless/partials/in-testing-notice.mdx b/docs/serverless/partials/in-testing-notice.mdx deleted file mode 100644 index e860ed0a3c..0000000000 --- a/docs/serverless/partials/in-testing-notice.mdx +++ /dev/null @@ -1,13 +0,0 @@ -
-
- - - -
-
- **In testing** - - This page has been reviewed and is ready for testing. - -
-
\ No newline at end of file diff --git a/docs/serverless/partials/publish-ready-notice.mdx b/docs/serverless/partials/publish-ready-notice.mdx deleted file mode 100644 index b820d23d21..0000000000 --- a/docs/serverless/partials/publish-ready-notice.mdx +++ /dev/null @@ -1,13 +0,0 @@ -
-
- - - -
-
- **Publish ready** - - This page has been tested and is ready for publishing. - -
-
\ No newline at end of file diff --git a/docs/serverless/partials/rough-content-notice.mdx b/docs/serverless/partials/rough-content-notice.mdx deleted file mode 100644 index 4d78d4a325..0000000000 --- a/docs/serverless/partials/rough-content-notice.mdx +++ /dev/null @@ -1,14 +0,0 @@ -
-
- - - -
-
- **Rough content** - - This page may contain misleading and incorrect information. - The URL path and filename may change in the future, so use with caution. - -
-
\ No newline at end of file diff --git a/docs/serverless/projects-create/create-project.mdx b/docs/serverless/projects-create/create-project.mdx deleted file mode 100644 index b45429f3b5..0000000000 --- a/docs/serverless/projects-create/create-project.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -slug: /serverless/security/create-project -title: Create a Security project -description: Get started with ((serverless-short)) ((elastic-sec)) in a few steps. -tags: [ 'serverless', 'security', 'how-to', 'get-started' ] -status: in review ---- - - -
A ((serverless-short)) project allows you to run ((elastic-sec)) in an autoscaled and fully-managed environment, where you don't have to manage the underlying ((es)) cluster and ((kib)) instances.

## Create project

Use your ((ecloud)) account to create a fully-managed ((elastic-sec)) project:

1. Navigate to [cloud.elastic.co](https://cloud.elastic.co/).

1. Log in to your ((ecloud)) account and select **Create project** from the **Serverless projects** panel.

1. Select **Next** from the **Security** panel.

1. Edit your project settings. (Click **Edit settings** to access all settings.)

    * **Name**: A unique name for your project.

    * **Cloud provider**: The cloud platform where you’ll deploy your project. We currently support Amazon Web Services (AWS).

    * **Region**: The cloud platform’s region where your project will live.

    You can also check [the pricing details](https://cloud.elastic.co/pricing) to see how you consume ((serverless-short)) ((elastic-sec)).

1. Select **Create project**. It takes a few minutes for your project to be created.

1. Once the project is ready, select **Continue** to open the **Get started** page. (You might need to log into ((ecloud)) again.)

    From here, you can learn more about ((elastic-sec)) features and start setting up your workspace.
diff --git a/docs/serverless/rules/about-rules.mdx b/docs/serverless/rules/about-rules.mdx
deleted file mode 100644
index 47827957d9..0000000000
--- a/docs/serverless/rules/about-rules.mdx
+++ /dev/null
@@ -1,97 +0,0 @@
---
slug: /serverless/security/about-rules
title: About detection rules
description: Learn about detection rule types and how they work.
tags: [ 'serverless', 'security', 'overview' ]
status: in review
---
- -Rules run periodically and search for source events, matches, sequences, or ((ml)) job anomaly results that meet their criteria. When a rule's criteria are met, a detection alert is created. - -
- -## Rule types - -You can create the following types of rules: - -* **Custom query**: Query-based rule, which searches the defined indices and - creates an alert when one or more documents match the rule's query. - -* **Machine learning**: ((ml-cap)) rule, which creates an alert when a ((ml)) job - discovers an anomaly above the defined threshold (see Detect anomalies). - - For ((ml)) rules, the associated ((ml)) job must be running. If the ((ml)) job isn't - running, the rule will: - - * Run and create alerts if existing anomaly results with scores above the defined threshold - are discovered. - - * Issue an error stating the ((ml)) job was not running when the rule executed. -* **Threshold**: Searches the defined indices and creates a detections alert - when the number of times the specified field's value is present and meets the threshold during - a single execution. When multiple values meet the threshold, an alert is - generated for each value. - - For example, if the threshold `field` is `source.ip` and its `value` is `10`, an - alert is generated for every source IP address that appears in at least 10 of - the rule's search results. - -* **Event correlation**: Searches the defined indices and creates an alert when results match an - [Event Query Language (EQL)](((ref))/eql.html) query. - -* **Indicator match**: Creates an alert when ((elastic-sec)) index field values match field values defined in the specified indicator index patterns. For example, you can create an indicator index for IP addresses and use this index to create an alert whenever an event's `destination.ip` equals a value in the index. Indicator index field mappings should be [ECS-compliant](((ecs-ref))). For information on creating ((es)) indices and field types, see - [Index some documents](((ref))/getting-started-index.html), - [Create index API](((ref))/indices-create-index.html), and - [Field data types](((ref))/mapping-types.html). If you have indicators in a standard file format, such as CSV or JSON, you can also use the Machine Learning Data Visualizer to import your indicators into an indicator index. See [Explore the data in ((kib))](((ml-docs))/ml-getting-started.html#sample-data-visualizer) and use the **Import Data** option to import your indicators. - - - You can also use value lists as the indicator match index. See Use value lists with indicator match rules at the end of this topic for more information. - - -* **New terms**: Generates an alert for each new term detected in source documents within a specified time range. You can also detect a combination of up to three new terms (for example, a `host.ip` and `host.id` that have never been observed together before). - -* **((esql))**: Searches the defined indices and creates an alert when results match an [((esql)) query](((ref))/esql.html). - -![Shows the Rules page](../images/about-rules/-detections-all-rules.png) - -
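To make the event correlation rule type more concrete, here is a minimal sketch of an EQL sequence such a rule could use. The event category, field names, and time window are assumptions, so adapt them to the data you collect:

```eql
// Hypothetical example: a failed authentication followed by a successful one
// on the same host within five minutes.
sequence by host.id with maxspan=5m
  [authentication where event.outcome == "failure"]
  [authentication where event.outcome == "success"]
```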
## Data views and index patterns

When you create a rule, you must either specify the ((es)) index patterns for which you'd like the rule to run, or select a data view as the data source. If you select a data view, you can select runtime fields associated with that data view to create a query for the rule (with the exception of ((ml)) rules, which do not use queries).

To access data views, ensure you have the [required permissions](((kibana-ref))/data-views.html#data-views-read-only-access).
- -## Notifications - -For both prebuilt and custom rules, you can send notifications when alerts are created. Notifications can be sent via ((jira)), Microsoft Teams, PagerDuty, Slack, and others, and can be configured when you create or edit a rule. - -
- -## Authorization - -Rules, including all background detection and the actions they generate, are authorized using an [API key](((kibana-ref))/api-keys.html) associated with the last user to edit the rule. Upon creating or modifying a rule, an API key is generated for that user, capturing a snapshot of their privileges. The API key is then used to run all background tasks associated with the rule including detection checks and executing actions. - - - -If a rule requires certain privileges to run, such as index privileges, keep in mind that if a user without those privileges updates the rule, the rule will no longer function. - - - -
- -## Exceptions - -When modifying rules or managing detection alerts, you can add exceptions that prevent a rule from generating alerts even when its criteria are met. This is useful for reducing noise, such as preventing alerts from trusted processes and internal IP addresses. - - -You can add exceptions to custom query, machine learning, event correlation, and indicator match rule types. - - diff --git a/docs/serverless/rules/add-exceptions.mdx b/docs/serverless/rules/add-exceptions.mdx deleted file mode 100644 index dd84590796..0000000000 --- a/docs/serverless/rules/add-exceptions.mdx +++ /dev/null @@ -1,299 +0,0 @@ ---- -slug: /serverless/security/add-exceptions -title: Add and manage exceptions -description: Learn how to create and manage rule exceptions. -tags: ["serverless","security","how-to","configure"] -status: in review ---- - - -
You can add exceptions to a rule from the rule details page, the Alerts table, the alert details flyout, or the Shared Exception Lists page. When you add an exception, you can also close all alerts that meet the exception’s criteria.

* To ensure an exception is successfully applied, make sure that the fields you've defined for its query are correctly and consistently mapped in their respective indices. Refer to [ECS](((ecs-ref))) to learn more about supported mappings.

* Be careful when adding exceptions to event correlation rules. Exceptions are evaluated against every event in the sequence, and if an exception matches any events that are necessary to complete the sequence, alerts are not created.

    To exclude values from a specific event in the sequence, update the rule's EQL statement. For example:

    ```eql
    sequence
      [file where file.extension == "exe"
        and file.name != "app-name.exe"]
      [process where true
        and process.name != "process-name.exe"]
    ```

* Be careful when adding exceptions to indicator match rules. Exceptions are evaluated against source and indicator indices, so if the exception matches events in _either_ index, alerts are not generated.
- -## Add exceptions to a rule - -1. Do one of the following: - - * To add an exception from the rule details page: - 1. Go to the rule details page of the rule to which you want to add an - exception (**Rules** → **Detection rules (SIEM)** → **_Rule name_**). - - 1. Scroll down the rule details page, select the **Rule exceptions** tab, then click **Add rule exception**. - - ![Detail of rule exceptions tab](../images/add-exceptions/-detections-rule-exception-tab.png) - - * To add an exception from the Alerts table: - 1. Go to **Alerts**. - 1. Scroll down to the Alerts table, go to the alert you want to create an exception for, click the **More Actions** menu (), then select **Add rule exception**. - - * To add an exception from the alert details flyout: - 1. Go to **Alerts**. - 1. Click the **View details** button from the Alerts table. - 1. In the alert details flyout, click **Take action → Add rule exception**. - - * To add an exception from the Shared Exception Lists page: - 1. Go to **Rules** → **Shared exception lists**. - 1. Click **Create shared exception list** → **Create exception item**. - -1. In the **Add rule exception** flyout, name the exception. -1. Add conditions that define the exception. When the exception's query evaluates to `true`, rules don't generate alerts even when their criteria are met. - - - Rule exceptions are case-sensitive, which means that any character that's entered as an uppercase or lowercase letter will be treated as such. In the event you _don't_ want a field evaluated as case-sensitive, some ECS fields have a `.caseless` version that you can use. - - - - When you create a new exception from an alert, exception conditions are auto-populated with relevant alert data. Data from custom highlighted fields is listed first. A comment that describes the auto-generated exception conditions is also added to the **Add comments** section. - - - 1. **Field**: Select a field to identify the event being filtered. - - - - A warning displays for fields with conflicts. Using these fields might cause unexpected exceptions behavior. Refer to Troubleshooting type conflicts and unmapped fields for more information. - - - - 1. **Operator**: Select an operator to define the condition: - * `is` | `is not` — Must be an exact match of the defined value. - * `is one of` | `is not one of` — Matches any of the defined values. - * `exists` | `does not exist` — The field exists. - * `is in list` | `is not in list` — Matches values in a value list. - - - - * An exception defined by a value list must use `is in list` or `is not in list` in all conditions. - * Wildcards are not supported in value lists. - * If a value list can't be used due to size or data type, it'll be unavailable in the **Value** menu. - - - - * `matches` | `does not match` — Allows you to use wildcards in **Value**, such as `C:\\path\\*\\app.exe`. Available wildcards are `?` (match one character) and `*` (match zero or more characters). The selected **Field** data type must be [keyword](((ref))/keyword.html#keyword-field-type), [text](((ref))/text.html#text-field-type), or [wildcard](((ref))/keyword.html#wildcard-field-type). - - - - Some characters must be escaped with a backslash, such as `\\` for a literal backslash, `\*` for an asterisk, and `\?` for a question mark. Windows paths must be divided with double backslashes (for example, `C:\\Windows\\explorer.exe`), and paths that already include double backslashes might require four backslashes for each divider. - - - - - - Using wildcards can impact performance. 
To create a more efficient exception using wildcards, use multiple conditions and make them as specific as possible. For example, adding conditions using `process.name` or `file.name` can help limit the scope of wildcard matching. - - - - 1. **Value**: Enter the value associated with the **Field**. To enter multiple values (when using `is one of` or `is not one of`), enter each value, then press **Return**. - - - Identical, case-sensitive values are supported for the `is one of` and `is not one of` operators. For example, if you want to match the values `Windows` and `windows`, add both values to the **Value** field. - - - In the following example, the exception was created from the Rules page and prevents the rule from generating alerts when the `svchost.exe` process runs on hostname `siem-kibana`. - - ![](../images/add-exceptions/-detections-add-exception-ui.png) - -1. Click **AND** or **OR** to create multiple conditions and define their relationships. - -1. Click **Add nested condition** to create conditions using nested fields. This is only required for - these nested fields. For all other fields, nested conditions should not be used. - -1. Choose to add the exception to a rule or a shared exception list. - - - If you are creating an exception from the Shared Exception Lists page, you can add the exception to multiple rules. - - - - If a shared exception list doesn't exist, you can create one from the Shared Exception Lists page. - - -1. (Optional) Enter a comment describing the exception. - -1. (Optional) Enter a future expiration date and time for the exception. - -1. Select one of the following alert actions: - - * **Close this alert**: Closes the alert when the exception is added. This option - is only available when adding exceptions from the Alerts table. - - * **Close all alerts that match this exception and were generated by this rule**: Closes all alerts that match the exception's conditions and were generated only by the current rule. - -1. Click **Add rule exception**. - -
- -## Add ((elastic-endpoint)) exceptions - -Like detection rule exceptions, you can add Endpoint agent exceptions either by editing the Endpoint Security rule or by adding them as actions on alerts generated by the Endpoint Security rule. ((elastic-endpoint)) alerts have the following fields: - -* `kibana.alert.original_event.module determined:endpoint` -* `kibana.alert.original_event.kind:alert` - -You can also add Endpoint exceptions to rules that are associated with ((elastic-endpoint)) rule exceptions. To associate rules when creating or editing a rule, select the **((elastic-endpoint)) exceptions** option. - -Endpoint exceptions are added to the Endpoint Security rule **and** the ((elastic-endpoint)) on your hosts. - - - -Exceptions added to the Endpoint Security rule affect all alerts sent -from the Endpoint agent. Be careful not to unintentionally prevent useful Endpoint -alerts. - -Additionally, to add an Endpoint exception to the Endpoint Security rule, there must be at least one Endpoint Security alert generated in the system. For non-production use, if no alerts exist, you can trigger a test alert using malware emulation techniques or tools such as the Anti Malware Testfile from the [European Institute for Computer Anti-Virus Research (EICAR)](https://www.eicar.org/). - - - - - -[Binary fields](((ref))/binary.html) are not supported in detection rule exceptions. - - - -1. Do one of the following: - - * To add an Endpoint exception from the rule details page: - 1. Go to the rule details page (**Rules** → **Detection rules (SIEM)**), and then search for and select the Elastic **Endpoint Security** rule. - 1. Scroll down the rule details page, select the **Endpoint exceptions** tab, then click **Add endpoint exception**. - - * To add an Endpoint exception from the Alerts table: - 1. Go to **Alerts**. - 1. Scroll down to the Alerts table, and from an ((elastic-endpoint)) - alert, click the **More actions** menu (), then select **Add Endpoint exception**. - - * To add an Endpoint exception from Shared Exception Lists page: - 1. Go to **Rules** → **Shared exception lists**. - 1. Expand the Endpoint Security Exception List or click the list name to open the list's details page. Next, click **Add endpoint exception**. - - - The Endpoint Security Exception List is automatically created. By default, it's associated with the Endpoint Security rule and any rules with the **((elastic-endpoint)) exceptions** option selected. - - - The **Add Endpoint Exception** flyout opens. - - ![](../images/add-exceptions/-detections-endpoint-add-exp.png) - -1. If required, modify the conditions. Refer to Exceptions with nested conditions for more information on when nested conditions are required. - - - Rule exceptions are case-sensitive, which means that any character that's entered as an uppercase or lowercase letter will be treated as such. In the event you _don't_ want a field evaluated as case-sensitive, some ECS fields have a `.caseless` version that you can use. - - - - - Fields with conflicts are marked with a warning icon (). Using these fields might cause unexpected exceptions behavior. For more information, refer to Troubleshooting type conflicts and unmapped fields. - - Identical, case-sensitive values are supported for the `is one of` and `is not one of` operators. For example, if you want to match the values `Windows` and `windows`, add both values to the **Value** field. - - -1. (Optional) Add a comment to the exception. - -1. 
You can select any of the following: - - * **Close this alert**: Closes the alert when the exception is added. This option - is only available when adding exceptions from the Alerts table. - - * **Close all alerts that match this exception and were generated by this rule**: - Closes all alerts that match the exception's conditions. - -1. Click **Add Endpoint Exception**. An exception is created for both the detection rule and the ((elastic-endpoint)). - - - It might take longer for exceptions to be applied to hosts within larger deployments. - - -
- -## Exceptions with nested conditions - -Some Endpoint objects contain nested fields, and the only way to ensure you are -excluding the correct fields is with nested conditions. One example is the -`process.Ext` object: - -```json -{ - "ancestry": [], - "code_signature": { - "trusted": true, - "subject_name": "LFC", - "exists": true, - "status": "trusted" - }, - "user": "WDAGUtilityAccount", - "token": { - "elevation": true, - "integrity_level_name": "high", - "domain": "27FB305D-3838-4", - "user": "WDAGUtilityAccount", - "elevation_type": "default", - "sid": "S-1-5-21-2047949552-857980807-821054962-504" - } -} -``` - -
- -Only these objects require nested conditions to ensure the exception functions -correctly: - -* `Endpoint.policy.applied.artifacts.global.identifiers` -* `Endpoint.policy.applied.artifacts.user.identifiers` -* `Target.dll.Ext.code_signature` -* `Target.process.Ext.code_signature` -* `Target.process.Ext.token.privileges` -* `Target.process.parent.Ext.code_signature` -* `Target.process.thread.Ext.token.privileges` -* `dll.Ext.code_signature` -* `file.Ext.code_signature` -* `file.Ext.macro.errors` -* `file.Ext.macro.stream` -* `process.Ext.code_signature` -* `process.Ext.token.privileges` -* `process.parent.Ext.code_signature` -* `process.thread.Ext.token.privileges` - -### Nested condition example - -Creates an exception that excludes all LFC-signed trusted processes: - -![](../images/add-exceptions/-detections-nested-exp.png) - -
- -## View and manage exceptions - -To view a rule's exceptions, open the rule's details page (**Rules** → **Detection rules (SIEM)** → **_Rule name_**), then scroll down and select the **Rule exceptions** or **Endpoint exceptions** tab. All exceptions that belong to the rule will display in a list. From the list, you can filter, edit, and delete exceptions. You can also toggle between **Active exceptions** and **Expired exceptions**. - -![A default rule list](../images/add-exceptions/-detections-manage-default-rule-list.png) - -
- -## Find rules using the same exceptions -To find out if an exception is used by other rules, select the **Rule exceptions** or **Endpoint exceptions** tab, navigate to an exception list item, then click **Affects _X_ rules**. - - -Changes that you make to the exception also apply to other rules that use the exception. - - -![Exception that affects multiple rules](../images/add-exceptions/-detections-exception-affects-multiple-rules.png) \ No newline at end of file diff --git a/docs/serverless/rules/alerts-ui-monitor.mdx b/docs/serverless/rules/alerts-ui-monitor.mdx deleted file mode 100644 index 51e32292e4..0000000000 --- a/docs/serverless/rules/alerts-ui-monitor.mdx +++ /dev/null @@ -1,219 +0,0 @@ ---- -slug: /serverless/security/alerts-ui-monitor -title: Monitor and troubleshoot rule executions -description: Find out how your rules are performing, and troubleshoot common rule issues. -tags: ["serverless","security","how-to","monitor","manage"] -status: in review ---- - - -
- -Several tools can help you gain insight into the performance of your detection rules: - -* Rule Monitoring tab — The current state of all detection rules and their most recent executions. Go to the **Rule Monitoring** tab to get an overview of which rules are running, how long they're taking, and if they're having any trouble. - -* Execution results — Historical data for a single detection rule's executions over time. Consult the execution results to understand how a particular rule is running and whether it's creating the alerts you expect. - -* Detection rule monitoring dashboard — Visualizations to help you monitor the overall health and performance of ((elastic-sec))'s detection rules. Consult this dashboard for a high-level view of whether your rules are running successfully and how long they're taking to run, search data, and create alerts. - -Refer to the Troubleshoot missing alerts section below for strategies on adjusting rules if they aren't creating the expected alerts. - -
- -## Rule Monitoring tab - -To view a summary of all rule executions, including the most recent failures and execution -times, select the **Rule Monitoring** tab on the **Rules** page (**Rules** → -**Detection rules (SIEM)** → **Rule Monitoring**). - -![](../images/alerts-ui-monitor/-detections-monitor-table.png) - -On the **Rule Monitoring** tab, you can sort and filter rules just like you can on the **Installed Rules** tab. - - -To sort the rules list, click any column header. To sort in descending order, click the column header again. - - -For detailed information on a rule, the alerts it generated, and associated errors, click on its name in the table. This also allows you to perform the same actions that are available on the **Installed Rules** tab, such as modifying or deleting rules, activating or deactivating rules, exporting or importing rules, and duplicating prebuilt rules. - -
- -## Execution results - -Each detection rule execution is logged, including its success or failure, any warning or error messages, and how long it took to search for data, create alerts, and complete. This can help you troubleshoot a particular rule if it isn't behaving as expected (for example, if it isn't creating alerts or takes a long time to run). - -To access a rule's execution log, go to **Rules** → **Detection rules (SIEM)**, click the rule's name to open its details, then scroll down and select the **Execution results** tab. You can expand a long warning or error message by clicking the arrow at the end of a row. - -![Rule execution results tab](../images/alerts-ui-monitor/-detections-rule-execution-logs.png) - -You can hover over each column heading to display a tooltip about that column's data. Click a column heading to sort the table by that column. - -Use these controls to filter what's included in the logs table: - -* The **Status** drop-down filters the table by rule execution status: - * **Succeeded**: The rule completed its defined search. This doesn't necessarily mean it generated an alert, just that it ran without error. - * **Failed**: The rule encountered an error that prevented it from running. For example, a ((ml)) rule whose corresponding ((ml)) job wasn't running. - * **Warning**: Nothing prevented the rule from running, but it might have returned unexpected results. For example, a custom query rule tried to search an index pattern that couldn't be found in ((es)). - -* The date and time picker sets the time range of rule executions included in the table. This is separate from the global date and time picker at the top of the rule details page. - -* The **Show metrics columns** toggle includes more or less data in the table, pertaining to the timing of each rule execution. - -* The **Actions** column allows you to show alerts generated from a given rule execution. Click the filter icon () to create a global search filter based on the rule execution's ID value. This replaces any previously applied filters, changes the global date and time range to 24 hours before and after the rule execution, and displays a confirmation notification. You can revert this action by clicking **Restore previous filters** in the notification. - -
- -## Troubleshoot missing alerts - -When a rule fails to run close to its scheduled time, some alerts may be -missing. There are a number of ways to try to resolve this issue: - -* Troubleshoot gaps -* Troubleshoot ingestion pipeline delay -* Troubleshoot missing alerts for ((ml)) jobs - -You can also use Task Manager in ((kib)) to troubleshoot background tasks and processes that may be related to missing alerts: - -* [Task Manager health monitoring](((kibana-ref))/task-manager-health-monitoring.html) -* [Task Manager troubleshooting](((kibana-ref))/task-manager-troubleshooting.html) - -{/* Will need to revisit this section since it references a Kibana feature that's not currently available in serverless Security */} - -
- -### Troubleshoot maximum alerts warning - -When a rule reaches the maximum number of alerts it can generate during a single rule execution, the following warning appears on the rule's details page and in the rule execution log: `This rule reached the maximum alert limit for the rule execution. Some alerts were not created.` - -If you receive this warning, go to the rule's **Alerts** tab and check for anything unexpected. Unexpected alerts might be created from data source issues or queries that are too broadly scoped. To further reduce alert volume, you can also add rule exceptions or suppress alerts. - -
- -### Troubleshoot gaps - -If you see values in the Gaps column in the Rule Monitoring table or on the Rule details page -for a small number of rules, you can increase those rules' -Additional look-back time (**Rules** → **Detection rules (SIEM)** → the rule's **All actions** menu (*...*) → **Edit rule settings** → **Schedule** → **Additional look-back time**). - -It's recommended to set the `Additional look-back time` to at -least 1 minute. This ensures there are no missing alerts when a rule doesn't -run exactly at its scheduled time. - -((elastic-sec)) prevents duplication. Any duplicate alerts that are discovered during the -`Additional look-back time` are _not_ created. - - -If the rule that experiences gaps is an indicator match rule, see how to tune indicator match rules. Also please note that ((elastic-sec)) provides limited support for indicator match rules. - - -If you see gaps for numerous rules: - -* If you restarted ((kib)) when many rules were activated, try deactivating them - and then reactivating them in small batches at staggered intervals. This - ensures ((kib)) does not attempt to run all the rules at the same time. - -* Consider adding another ((kib)) instance to your environment. - -{/* Will need to revisit this section since it references Kibana. */} - -
- -### Troubleshoot ingestion pipeline delay - -{/* Will need to revisit this section since it mentions versions of the stack, Beats, and Agent. */} - -Even if your rule runs at its scheduled time, there might still be missing alerts if your ingestion pipeline delay is greater than your rule interval + additional look-back time. Prebuilt rules have a minimum interval + additional look-back time of 6 minutes. To avoid missed alerts for prebuilt rules, use caution to ensure that ingestion pipeline delays remain below 6 minutes. - -In addition, use caution when creating custom rule schedules to ensure that the specified interval + additional look-back time is greater than your deployment's ingestion pipeline delay. - -You can reduce the number of missed alerts due to ingestion pipeline delay by specifying the `Timestamp override` field value to `event.ingested` in advanced settings during rule creation or editing. The detection engine uses the value from the `event.ingested` field as the timestamp when executing the rule. - -For example, say an event occurred at 10:00 but wasn't ingested into ((es)) until 10:10 due to an ingestion pipeline delay. If you created a rule to detect that event with an interval + additional look-back time of 6 minutes, and the rule executes at 10:12, it would still detect the event because the `event.ingested` timestamp was from 10:10, only 2 minutes before the rule executed and well within the rule's 6-minute interval + additional look-back time. - -![](../images/alerts-ui-monitor/-detections-timestamp-override.png) - -
- -### Troubleshoot missing alerts for ((ml)) jobs - -((ml-cap)) detection rules use ((ml)) jobs that have dependencies on data fields populated by the ((beats)) and ((agent)) integrations. In ((stack)) version 8.3, new ((ml)) jobs (prefixed with `v3`) were released to operate on the ECS fields available at that time. - -If you're using 8.2 or earlier versions of ((beats)) or ((agent)) with ((stack)) version 8.3 or later, you may need to duplicate prebuilt rules or create new custom rules _before_ you update the Elastic prebuilt rules. Once you update the prebuilt rules, they will only use `v3` ((ml)) jobs. Duplicating the relevant prebuilt rules before updating them ensures continued coverage by allowing you to keep using `v1` or `v2` jobs (in the duplicated rules) while also running the new `v3` jobs (in the updated prebuilt rules). - - - -* Duplicated rules may result in duplicate anomaly detections and alerts. -* Ensure that the relevant `v3` ((ml)) jobs are running before you update the Elastic prebuilt rules. - - - -* If you only have **8.3 or later versions of ((beats)) and ((agent))**: You can download or update your prebuilt rules and use the latest `v3` ((ml)) jobs. No additional action is required. - -* If you only have **8.2 or earlier versions of ((beats)) or ((agent))**, or **a mix of old and new versions**: To continue using the `v1` and `v2` ((ml)) jobs specified by pre-8.3 prebuilt detection rules, you must duplicate affected prebuilt rules _before_ updating them to the latest rule versions. The duplicated rules can continue using the same `v1` and `v2` ((ml)) jobs, and the updated prebuilt ((ml)) rules will use the new `v3` ((ml)) jobs. - -* If you have **a non-Elastic data shipper that gathers ECS-compatible events**: You can use the latest `v3` ((ml)) jobs with no additional action required, as long as your data shipper uses the latest ECS specifications. However, if you're migrating from ((ml)) rules using `v1`/`v2` jobs, ensure that you start the relevant `v3` jobs before updating the Elastic prebuilt rules. - -The following Elastic prebuilt rules use the new `v3` ((ml)) jobs to generate alerts. Duplicate their associated `v1`/`v2` prebuilt rules _before_ updating them if you need continued coverage from the `v1`/`v2` ((ml)) jobs: - -{/* {/* Links to prebuilt rule pages temporarily removed for initial serverless docs. 
We can renable links once -we add prebuilt rule pages to the serverless docs.*/} -{/* -* Unusual Linux Network Port Activity: `v3_linux_anomalous_network_port_activity` - -* Anomalous Process For a Linux Population: `v3_linux_anomalous_process_all_hosts` - -* Unusual Linux Username: `v3_linux_anomalous_user_name` - -* Unusual Linux Process Calling the Metadata Service: `v3_linux_rare_metadata_process` - -* Unusual Linux User Calling the Metadata Service: `v3_linux_rare_metadata_user` - -* Unusual Process For a Linux Host: `v3_rare_process_by_host_linux` - -* Unusual Process For a Windows Host: `v3_rare_process_by_host_windows` - -* Unusual Windows Network Activity: `v3_windows_anomalous_network_activity` - -* Unusual Windows Path Activity: `v3_windows_anomalous_path_activity` - -* Anomalous Windows Process Creation: `v3_windows_anomalous_process_creation` - -* Anomalous Process For a Windows Population: `v3_windows_anomalous_process_all_hosts` - -* Unusual Windows Username: `v3_windows_anomalous_user_name` - -* Unusual Windows Process Calling the Metadata Service: `v3_windows_rare_metadata_process` - -* Unusual Windows User Calling the Metadata Service: `v3_windows_rare_metadata_user` - */} - -* Unusual Linux Network Port Activity: `v3_linux_anomalous_network_port_activity` - -* Unusual Linux Network Connection Discovery: `v3_linux_anomalous_network_connection_discovery` - -* Anomalous Process For a Linux Population: `v3_linux_anomalous_process_all_hosts` - -* Unusual Linux Username: `v3_linux_anomalous_user_name` - -* Unusual Linux Process Calling the Metadata Service: `v3_linux_rare_metadata_process` - -* Unusual Linux User Calling the Metadata Service: `v3_linux_rare_metadata_user` - -* Unusual Process For a Linux Host: `v3_rare_process_by_host_linux` - -* Unusual Process For a Windows Host: `v3_rare_process_by_host_windows` - -* Unusual Windows Network Activity: `v3_windows_anomalous_network_activity` - -* Unusual Windows Path Activity: `v3_windows_anomalous_path_activity` - -* Anomalous Windows Process Creation: `v3_windows_anomalous_process_creation` - -* Anomalous Process For a Windows Population: `v3_windows_anomalous_process_all_hosts` - -* Unusual Windows Username: `v3_windows_anomalous_user_name` - -* Unusual Windows Process Calling the Metadata Service: `v3_windows_rare_metadata_process` - -* Unusual Windows User Calling the Metadata Service: `v3_windows_rare_metadata_user` diff --git a/docs/serverless/rules/building-block-rule.mdx b/docs/serverless/rules/building-block-rule.mdx deleted file mode 100644 index 1beab4e166..0000000000 --- a/docs/serverless/rules/building-block-rule.mdx +++ /dev/null @@ -1,39 +0,0 @@ ---- -slug: /serverless/security/building-block-rules -title: Use building block rules -description: Set up building block rules and view building block alerts. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -Create building block rules when you do not want to see their generated alerts -in the UI. This is useful when you want: - -* A record of low-risk alerts without producing noise in the Alerts table. -* Rules that execute on the alert indices (`.alerts-security.alerts-`). - You can then use building block rules to create hidden alerts that act as a - basis for an 'ordinary' rule to generate visible alerts. - -## Set up rules that run on alert indices - -To create a rule that searches alert indices, select **Index Patterns** as the rule's **Source** and enter the index pattern for alert indices (`.alerts-security.alerts-*`): - -![](../images/building-block-rule/-detections-alert-indices-ui.png) - -## View building block alerts in the UI - -By default, building block alerts are excluded from the Overview and Alerts pages. -You can choose to include building block alerts on the Alerts page, which expands the number of alerts. - -1. Go to **Alerts**. -1. In the Alerts table, select **Additional filters** → - **Include building block alerts**, located on the far-right. - - -On a building block rule details page, the rule's alerts are displayed (by -default, **Include building block alerts** is selected). - - diff --git a/docs/serverless/rules/detection-engine-overview.mdx b/docs/serverless/rules/detection-engine-overview.mdx deleted file mode 100644 index 0566916ab1..0000000000 --- a/docs/serverless/rules/detection-engine-overview.mdx +++ /dev/null @@ -1,141 +0,0 @@ ---- -slug: /serverless/security/detection-engine-overview -title: Detection engine overview -description: Learn about the detection engine and its features. -tags: [ 'serverless', 'security', 'overview' ] -status: in review ---- - - -
- -Use the detection engine to create and manage rules and view the alerts -these rules create. Rules periodically search indices (such as `logs-*` and -`filebeat-*`) for suspicious source events and create alerts when a rule's -conditions are met. When an alert is created, its status is `Open`. To help -track investigations, an alert's status can be set as -`Open`, `Acknowledged`, or `Closed`. - -![Alerts page](../images/detection-engine-overview/-detections-alert-page.png) - -In addition to creating your own rules, enable -Elastic prebuilt rules to immediately start detecting -suspicious activity. For detailed information on all the prebuilt rules, see the Prebuilt rules reference. Once the prebuilt rules are loaded and -running, Tune detection rules and Add and manage exceptions explain -how to modify the rules to reduce false positives and get a better set of -actionable alerts. You can also use exceptions and value lists when creating or -modifying your own rules. - -There are two special prebuilt rules you need to know about: - -{/* Links to prebuilt rule pages temporarily removed for initial serverless docs. */} -* **Endpoint Security**: - Automatically creates an alert from all incoming Elastic Endpoint alerts. To - receive Elastic Endpoint alerts, you must install the Endpoint agent on your - hosts (see Install and configure the ((elastic-defend)) integration). - - When this rule is enabled, the following Endpoint events are displayed as - detection alerts: - - * Malware Prevention Alert - * Malware Detection Alert - - - When you load the prebuilt rules, this is the only rule that is enabled - by default. - - -{/* Links to prebuilt rule pages temporarily removed for initial serverless docs. */} -* **External Alerts**: Automatically creates an alert for - all incoming third-party system alerts (for example, Suricata alerts). - -If you want to receive notifications via external systems, such as Slack or -email, when alerts are created, use the [Alerting and Actions](((kibana-ref))/alerting-getting-started.html) framework. - -After rules have started running, you can monitor their executions to verify -they are functioning correctly, as well as view, manage, and troubleshoot -alerts (see Manage detection alerts and Monitor and troubleshoot rule executions). - -You can create and manage rules and alerts via the UI or the [Detections API](((security-guide))/rule-api-overview.html). -{/* Link to classic docs until serverless API docs are available. */} - - - -To make sure you can access Detections and manage rules, see -Detections prerequisites and requirements. - - - -
- -## Limited support for indicator match rules - -Indicator match rules provide a powerful capability to search your security data; however, their queries can consume significant deployment resources. When creating an indicator match rule, we recommend limiting the time range of the indicator index query to the minimum period necessary for the desired rule coverage. For example, the default indicator index query `@timestamp > "now-30d/d"` searches specified indicator indices for indicators ingested during the past 30 days and rounds the query start time down to the nearest day (resolves to UTC `00:00:00`). Without this limitation, the rule will include all of the indicators in your indicator indices, which may extend the time it takes for the indicator index query to complete. - -In addition, indicator match rules with an additional look-back time value greater than 24 hours are not supported. - -
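-For instance, if the indicators you rely on are always ingested within the past week (an assumed retention period, used here for illustration only), you could narrow the indicator index query to a matching window so each rule run only evaluates recent indicators:
-
-```
-@timestamp > "now-7d/d"
-```
-
-As with the default query, the `/d` rounding resolves the query start time down to the nearest day (UTC `00:00:00`), while the rule scans far fewer indicators.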
- -## Detections configuration and prerequisites - -Detections requirements provides detailed information on all the -permissions required to initiate and use the Detections feature. - -
- -## Malware prevention - -Malware, short for malicious software, is any software program designed to damage or execute unauthorized actions on a -computer system. Examples of malware include viruses, worms, Trojan horses, adware, scareware, and spyware. Some -malware, such as viruses, can severely damage a computer's hard drive by deleting files or directory information. Other -malware, such as spyware, can obtain user data without their knowledge. - -Malware may be stealthy and appear as legitimate executable code, scripts, active content, and other software. It is also -often embedded in non-malicious files, non-suspicious websites, and standard programs — sometimes making the root -source difficult to identify. If infected and not resolved promptly, malware can cause irreparable damage to a computer -network. - -For information on how to enable malware protection on your host, see Malware Protection. - -
-
-### Machine learning model
-
-To determine whether a file is malicious or benign, a machine learning model looks for static attributes of the file (without executing it), including file structure, layout, and content. This covers information such as file header data, imports, exports, section names, and file size. These attributes are extracted from millions of benign and malicious file samples and passed to a machine learning algorithm that distinguishes benign files from malicious ones. The machine learning model is updated as new data is procured and analyzed.
-
-### Threshold
-
-A malware threshold determines the action the agent should take if malware is detected. The Elastic Agent uses a recommended threshold level that generates a balanced number of alerts with a low probability of undetected malware. This threshold also minimizes the number of false positive alerts.
-
- -## Ransomware prevention - -Ransomware is computer malware that installs discreetly on a user's computer and encrypts data until a specified amount of money (ransom) is paid. Ransomware is usually similar to other malware in its delivery and execution, infecting systems -through spear-phishing or drive-by downloads. If not resolved immediately, ransomware can cause irreparable damage to an entire computer network. - -Behavioral ransomware prevention on the Elastic Endpoint detects and stops ransomware attacks on Windows systems by analyzing data from low-level system processes, and is effective across an array of widespread ransomware families — including those targeting the system’s master boot record. - -For information on how to enable ransomware protection on your host, see Ransomware protection. - -### Resolve UI error messages - -Depending on your user role privileges and whether detection system indices have already been created, you might get one of these error messages when you -open the **Alerts** or **Rules** page: - -* **`Let’s set up your detection engine`** - - If you get this message, a user with specific privileges must visit the - **Alerts** or **Rules** page before you can view detection alerts and rules. - Refer to Enable and access detections for a list of all the requirements. - -* **`Detection engine permissions required`** - - If you get this message, you do not have the - required privileges to view the **Detections** feature, - and you should contact your project administrator. - diff --git a/docs/serverless/rules/detections-ui-exceptions.mdx b/docs/serverless/rules/detections-ui-exceptions.mdx deleted file mode 100644 index bed724de78..0000000000 --- a/docs/serverless/rules/detections-ui-exceptions.mdx +++ /dev/null @@ -1,36 +0,0 @@ ---- -slug: /serverless/security/rule-exceptions -title: Rule exceptions -description: Understand the different types of rule exceptions. -tags: [ 'serverless', 'security', 'overview' ] -status: in review ---- - - -
-
-You can associate rule exceptions with detection and endpoint rules to prevent trusted processes and network activity from generating unnecessary alerts, thereby reducing the number of false positives.
-
-When creating exceptions, you can assign them to individual rules or to multiple rules.
-
- -## Exceptions for individual rules - -Exceptions, also referred to as _exception items_, contain the source event conditions that determine when alerts shouldn't be generated. - -You can create exceptions that apply exclusively to a single rule. These types of exceptions can't be used by other rules, and you must manage them from the rule’s details page. To learn more about creating and managing single-rule exceptions, refer to Add and manage exceptions. - - - - -You can also use value lists to define exceptions for detection rules. Value lists allow you to match an exception against a list of possible values. - - -
- -## Exceptions shared among multiple rules - -If you want an exception to apply to multiple rules, you can add an exception to a shared exception list. Shared exception lists allow you to group exceptions together and then associate them with multiple rules. Refer to Create and manage shared exception lists to learn more. - -![Shared Exception Lists page](../images/detections-ui-exceptions/-detections-rule-exceptions-page.png) diff --git a/docs/serverless/rules/interactive-investigation-guides.mdx b/docs/serverless/rules/interactive-investigation-guides.mdx deleted file mode 100644 index 79642020c8..0000000000 --- a/docs/serverless/rules/interactive-investigation-guides.mdx +++ /dev/null @@ -1,159 +0,0 @@ ---- -slug: /serverless/security/interactive-investigation-guides -title: Launch Timeline from investigation guides -description: Pivot from detection alerts to investigations with interactive investigation guide actions. -tags: ["serverless","security","how-to","analyze","configure"] -status: in review ---- - - -
- -Detection rule investigation guides suggest steps for triaging, analyzing, and responding to potential security issues. For custom rules, you can create an interactive investigation guide that includes buttons for launching runtime queries in Timeline, using alert data and hard-coded literal values. This allows you to start detailed Timeline investigations directly from an alert using relevant data. - - - -Under the Investigation section, click **Show investigation guide** to open the **Investigation** tab in the left panel of the alert details flyout. - - - -The **Investigation** tab displays query buttons, and each query button displays the number of event documents found. Click the query button to automatically load the query in Timeline, based on configuration settings in the investigation guide. - -![Timeline with query pre-loaded from investigation guide action](../images/interactive-investigation-guides/-detections-ig-timeline.png) - -
- -## Add investigation guide actions to a rule - - -You can only create interactive investigation guides with custom rules because Elastic prebuilt rules can't be edited. However, you can duplicate a prebuilt rule, then configure the investigation guide for the duplicated rule. - - -You can configure an interactive investigation guide when you create a new rule or edit an existing rule. - -1. When configuring the rule's settings (the **About rule** step for a new rule, or the **About** tab for an existing rule), expand the **Advanced settings**, then scroll down to the **Investigation guide** Markdown editor. - - ![Investigation guide editor field](../images/interactive-investigation-guides/-detections-ig-investigation-guide-editor.png) - -1. Place the editor cursor where you want to add the query button in the investigation guide, then select the Investigate icon in the toolbar. The **Add investigation query** builder form appears. - - - -1. Complete the query builder form to create an investigation query: - 1. **Label**: Enter the text to appear on the query button. - 1. **Description**: (Optional) Enter additional text to include with the button. - 1. **Filters**: Select fields, operators, and values to build the query. Click **OR** or **AND** to create multiple filters and define their relationships. - - To use a field value from the alert as a query parameter, enter the field name surrounded by double curly brackets — such as `{{kibana.alert.example}}` — as a custom option for the filter value. - - - - 1. **Relative time range**: (Optional) Select a time range to limit the query, relative to the alert's creation time. - -1. Click **Save changes**. The syntax is added to the investigation guide editor. - - - If you need to change the query button's configuration, you can either edit the syntax directly in the editor (refer to the syntax reference below), or delete the syntax and use the query builder form to recreate the query. - - -1. Save and enable the rule. - -
- -### Query button syntax - -The following syntax defines a query button in an interactive investigation guide. - - - - `!{investigate{ }}` - - The container object holding all the query button's configuration attributes. - - - - - `label` - - Identifying text on the button. - - - - - `description` - - Additional text included with the button. - - - - - `providers` - - A two-level nested array that defines the query to run in Timeline. Similar to the structure of queries in Timeline, items in the outer level are joined by an `OR` relationship, and items in the inner level are joined by an `AND` relationship. - - Each item in `providers` corresponds to a filter created in the query builder UI and is defined by these attributes: - - * `field`: The name of the field to query. - * `excluded`: Whether the query result is excluded (such as **is not one of**) or included (*is one of*). - * `queryType`: The query type used to filter events, based on the filter's operator. For example, `phrase` or `range`. - * `value`: The value to search for. Either a hard-coded literal value, or the name of an alert field (in double curly brackets) whose value you want to use as a query parameter. - * `valueType`: The data type of `value`, such as `string` or `boolean`. - - - - - - `relativeFrom`, `relativeTo` - - (Optional) The start and end, respectively, of the relative time range for the query. Times are relative to the alert's creation time, represented as `now` in [date math](((ref))/common-options.html#date-math) format. For example, selecting **Last 15 minutes** in the query builder form creates the syntax `"relativeFrom": "now-15m", "relativeTo": "now"`. - - - - - - -Some characters must be escaped with a backslash, such as `\"` for a quotation mark and `\\` for a literal backslash. Divide Windows paths with double backslashes (for example, `C:\\Windows\\explorer.exe`), and paths that already include double backslashes might require four backslashes for each divider. A clickable error icon () displays below the Markdown editor if there are any syntax errors. - - -### Example syntax - -```json -!{investigate{ - "label": "Test action", - "description": "Click to investigate.", - "providers": [ - [ - {"field": "event.id", "excluded": false, "queryType": "phrase", "value": "{{event.id}}", "valueType": "string"} - ], - [ - {"field": "event.action", "excluded": false, "queryType": "phrase", "value": "rename", "valueType": "string"}, - {"field": "process.pid", "excluded": false, "queryType": "phrase", "value": "{{process.pid}}", "valueType": "string"} - ] - ], - "relativeFrom": "now-15m", - "relativeTo": "now" -}} -``` - -This example creates the following Timeline query, as illustrated below: - -`(event.id : )` -`OR (event.action : "rename" AND process.pid : )` - - - -### Timeline template fields - -When viewing an interactive investigation guide in contexts unconnected to a specific alert (such a rule's details page), queries open as Timeline templates, and `parameter` fields are treated as Timeline template fields. 
- - - diff --git a/docs/serverless/rules/prebuilt-rules/prebuilt-rules-management.mdx b/docs/serverless/rules/prebuilt-rules/prebuilt-rules-management.mdx deleted file mode 100644 index 0ea063b0f2..0000000000 --- a/docs/serverless/rules/prebuilt-rules/prebuilt-rules-management.mdx +++ /dev/null @@ -1,126 +0,0 @@ ---- -slug: /serverless/security/prebuilt-rules-management -title: Install and manage Elastic prebuilt rules -description: Start detections quickly with prebuilt rules designed and updated by Elastic. -tags: ["serverless","security","how-to","manage"] -status: in review ---- - - -
- -Follow these guidelines to start using the ((security-app))'s prebuilt rules, keep them updated, and make sure they have the data needed to run successfully. - -* Install and enable Elastic prebuilt rules -* Prebuilt rule tags -* Select and duplicate all prebuilt rules -* Update Elastic prebuilt rules -* Confirm rule prerequisites - - - -* Prebuilt rules don't start running by default. You must first install the rules, then enable them. After installation, only a few prebuilt rules will be enabled by default, such as the Endpoint Security rule. - -* You can't modify most settings on Elastic prebuilt rules. You can only edit rule actions and add exceptions. If you want to modify other settings on a prebuilt rule, you must first duplicate it, then make your changes to the duplicated rule. However, your customized rule is entirely separate from the original prebuilt rule, and will not get updates from Elastic if the prebuilt rule is updated. - - - -
- -## Install and enable Elastic prebuilt rules - -1. Go to **Rules** → **Detection rules (SIEM)**. The badge next to **Add Elastic rules** shows the number of prebuilt rules available for installation. - - ![The Add Elastic Rules page](../../images/prebuilt-rules-management/-detections-prebuilt-rules-add-badge.png) - -1. Click **Add Elastic rules**. - - - To examine the details of a rule before you install it, select the rule name. This opens the rule details flyout. - - -1. Do one of the following: - * Install all available rules: Click **Install all**. - * Install a single rule: Click **Install rule** for that rule. - * Install multiple rules: Select the rules and click **Install _x_ selected rule(s)**. - - - - Use the search bar and **Tags** filter to find the rules you want to install. For example, filter by `OS: Windows` if your environment only includes Windows endpoints. For more on tag categories, refer to Prebuilt rule tags. - - - - ![The Add Elastic Rules page](../../images/prebuilt-rules-management/-detections-prebuilt-rules-add.png) - -1. Go back to the **Rules** page, search or filter for any rules you want to run, and do either of the following: - - * Enable a single rule: Turn on the rule's **Enabled** switch. - * Enable multiple rules: Select the rules, then click **Bulk actions** → **Enable**. - -Once you enable a rule, it starts running on its configured schedule. To confirm that it's running successfully, check its **Last response** status in the rules table, or open the rule's details page and check the **Execution results** tab. - -
- -## Prebuilt rule tags - -Each prebuilt rule includes several tags identifying the rule's purpose, detection method, associated resources, and other information to help categorize your rules. These tags are category-value pairs; for example, `OS: Windows` indicates rules designed for Windows endpoints. Categories include: - -* `Data Source`: The application, cloud provider, data shipper, or Elastic integration providing data for the rule. -* `Domain`: A general category of data source types (such as cloud, endpoint, or network). -* `OS`: The host operating system, which could be considered another data source type. -* `Resources`: Additional rule resources such as investigation guides. -* `Rule Type`: Identifies if the rule depends on specialized resources (such as machine learning jobs or threat intelligence indicators), or if it's a higher-order rule built from other rules' alerts. -* `Tactic`: MITRE ATT&CK tactics that the rule addresses. -* `Threat`: Specific threats the rule detects (such as Cobalt Strike or BPFDoor). -* `Use Case`: The type of activity the rule detects and its purpose. Use cases include: - * `Active Directory Monitoring`: Detects changes related to Active Directory. - * `Asset Visibility`: Detects changes to specified asset types. - * `Configuration Audit`: Detects undesirable configuration changes. - * `Guided Onboarding`: Example rule, used for ((elastic-sec))'s guided onboarding tour. - * `Identity and Access Audit`: Detects activity related to identity and access management (IAM). - * `Log Auditing`: Detects activity on log configurations or storage. - * `Network Security Monitoring`: Detects network security configuration activity. - * `Threat Detection`: Detects threats. - * `Vulnerability`: Detects exploitation of specific vulnerabilities. - -
- -## Select and duplicate all prebuilt rules - -1. Go to **Rules** → **Detection rules (SIEM)**, then select the **Elastic rules** filter. -1. Click **Select all _x_ rules** above the rules table. -1. Click **Bulk actions** → **Duplicate**. -1. Select whether to duplicate the rules' exceptions, then click **Duplicate**. - -You can then modify the duplicated rules and, if required, delete the prebuilt ones. However, your customized rules are entirely separate from the original prebuilt rules, and will not get updates from Elastic if the prebuilt rules are updated. - -
- -## Update Elastic prebuilt rules - -Elastic regularly updates prebuilt rules to optimize their performance and ensure they detect the latest threats and techniques. When updated versions are available for your installed prebuilt rules, the **Rule Updates** tab appears on the **Rules** page, allowing you to update your installed rules with the latest versions. - -1. Go to **Rules** → **Detection rules (SIEM)**, then select the **Rule Updates** tab. - - - The **Rule Updates** tab doesn't appear if all your installed prebuilt rules are up to date. - - - ![The Rule Updates tab on the Rules page](../../images/prebuilt-rules-management/-detections-prebuilt-rules-update.png) - -1. (Optional) To examine the details of a rule's latest version before you update it, select the rule name. This opens the rule details flyout. - - Select the **Updates** tab to view rule changes field by field, or the **JSON view** tab to view changes for the entire rule in JSON format. Both tabs display side-by-side comparisons of the **Current rule** (what you currently have installed) and the **Elastic update** version (what you can choose to install). Deleted characters are highlighted in red; added characters are highlighted in green. - - To accept the changes and install the updated version, select **Update**. - - ![Prebuilt rule comparison](../../images/prebuilt-rules-management/prebuilt-rules-update-diff.png) - -1. Do one of the following to update prebuilt rules on the **Rules** page: - * Update all available rules: Click **Update all**. - * Update a single rule: Click **Update rule** for that rule. - * Update multiple rules: Select the rules and click **Update _x_ selected rule(s)**. - - - Use the search bar and **Tags** filter to find the rules you want to update. For example, filter by `OS: Windows` if your environment only includes Windows endpoints. For more on tag categories, refer to Prebuilt rule tags. - diff --git a/docs/serverless/rules/prebuilt-rules/prebuilt-rules.mdx b/docs/serverless/rules/prebuilt-rules/prebuilt-rules.mdx deleted file mode 100644 index 5fa189ca7e..0000000000 --- a/docs/serverless/rules/prebuilt-rules/prebuilt-rules.mdx +++ /dev/null @@ -1,23 +0,0 @@ ---- -slug: /serverless/security/prebuilt-rules -title: Prebuilt rule reference -description: Learn more about Elastic's prebuilt detection rules. -tags: [] -status: in review ---- - - -
- -Refer to the following documentation for more details about Elastic's prebuilt rules: - - - - [Prebuilt rule reference](((security-guide))/prebuilt-rules.html) - Lists all available prebuilt rules. - - - [Downloadable rule updates](((security-guide))/prebuilt-rules-downloadable-updates.html) - Lists all updates to prebuilt detection rules. - - diff --git a/docs/serverless/rules/rules-coverage.mdx b/docs/serverless/rules/rules-coverage.mdx deleted file mode 100644 index e3f4d3fb70..0000000000 --- a/docs/serverless/rules/rules-coverage.mdx +++ /dev/null @@ -1,54 +0,0 @@ ---- -slug: /serverless/security/rules-coverage -title: MITRE ATT&CK® coverage -description: Review your current coverage of MITRE ATT&CK® tactics and techniques, based on installed rules. -tags: ["security","how-to","manage","analyze","visualize"] -status: rough content ---- - - -
- -The **MITRE ATT&CK® coverage** page (**Rules** → **MITRE ATT&CK® Coverage**) shows which [MITRE ATT&CK®](https://attack.mitre.org) adversary tactics and techniques are covered by your installed and enabled detection rules. This includes both Elastic prebuilt rules and custom rules. - -Mirroring the MITRE ATT&CK® framework, columns represent major tactics, and cells within each column represent a tactic's related techniques. Cells are darker when a technique has more rules matching the current filters, as indicated in the **Legend** at the top. - - - -This page only includes the detection rules you currently have installed, and only rules that are mapped to MITRE ATT&CK®. The coverage page maps detections to the following [MITRE ATT&CK® version](https://attack.mitre.org/resources/updates/updates-april-2024) used by ((elastic-sec)): `v15.1`. Elastic prebuilt rules that aren't installed and custom rules that are either unmapped or mapped to a deprecated tactic or technique will not appear on the coverage map. - -You can map custom rules to tactics in **Advanced settings** when creating or editing a rule. - - - -![MITRE ATT&CK® coverage page](../images/rules-coverage/-detections-rules-coverage.png) - -## Filter rules - -Use the drop-down filters at the top of the page to control which of your installed detection rules are included in calculating coverage. - -* **Installed rule status**: Select to include **Enabled rules**, **Disabled rules**, or both. - -* **Installed rule type**: Select to include **Elastic rules** (prebuilt rules), **Custom rules** (user-created rules), or both. - -You can also search for a tactic or technique name, technique number, or rule name in the search bar. The search bar acts as a filter for the coverage grid: only rules matching the search term will be included. - - -Searches for tactics and techniques must match exactly, are case sensitive, and do _not_ support wildcards. - - -## Expand and collapse cells - -Click **Collapse cells** or **Expand cells** to change how much information the cells display. Cells always include the technique's name and the number of sub-techniques covered by enabled rules. Expand the cells to also display counts of disabled and enabled rules for each technique. - - -The counts inside cells are affected by how you filter the page. For example, if you filter the **Installed rule status** to only include **Enabled rules**, then all disabled rule counts will be 0 because disabled rules are filtered out. - - -## Enable rules - -You can quickly enable all the rules for a specific technique that you've installed, but not enabled. Click the technique's cell, then click **Enable all disabled** in the popup that appears. - -## Learn more about techniques and sub-techniques - -For more information on a specific technique and its sub-techniques, click the technique's cell, then click the title in the popup that appears. This opens a new browser tab with the technique's MITRE ATT&CK® documentation. diff --git a/docs/serverless/rules/rules-ui-create.mdx b/docs/serverless/rules/rules-ui-create.mdx deleted file mode 100644 index ee61b4ba02..0000000000 --- a/docs/serverless/rules/rules-ui-create.mdx +++ /dev/null @@ -1,883 +0,0 @@ ---- -slug: /serverless/security/rules-create -title: Create a detection rule -description: Create detection rules to monitor your environment for suspicious and malicious behavior. -tags: ["serverless","security","defend","how-to","manage","secure"] -status: in review ---- - - -
- -To create a new detection rule, follow these steps: - -1. Define the **rule type**. The configuration for this step varies depending on the rule type. -1. Configure basic rule settings. -1. Configure advanced rule settings (optional). -1. Set the rule's schedule. -1. Set up rule actions (optional). -1. Set up response actions (optional). - - - -* To create detection rules, you must have access to data views, which requires the appropriate user role. - -* You'll also need permissions to enable and view detections, manage rules, manage alerts, and preview rules. These permissions depend on the user role. Refer to Detections requirements for more information. - - - - -At any step, you can preview the rule before saving it to see what kind of results you can expect. - - -
- -## Create a machine learning rule - - -To create or edit ((ml)) rules, you need an appropriate user role. Additionally, the selected ((ml)) job must be running for the rule to function correctly. - -1. Go to **Rules** → **Detection rules (SIEM)** → **Create new rule**. The **Create new rule** page displays. -1. To create a rule based on a ((ml)) anomaly threshold, select **Machine Learning**, - then select: - - 1. The required ((ml)) jobs. - - - If a required job isn't currently running, it will automatically start when you finish configuring and enable the rule. - - - 1. The anomaly score threshold above which alerts are created. - -1. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to Suppress detection alerts for more information. - - - Because ((ml)) rules generate alerts from anomalies, which don't contain source event fields, you can only use anomaly fields when configuring alert suppression. - - - {/* The following steps are repeated across multiple rule types. If you change anything - in these steps or sub-steps, apply the change to the other rule types, too. */} -1. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](((integrations-docs))). This indicates the rule's dependency on specific integrations and the data they generate, and allows users to confirm each integration's installation status when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration's name to find it faster. - - 1. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -1. Click **Continue** to configure basic rule settings. - -
- -## Create a custom query rule -1. Go to **Rules** → **Detection rules (SIEM)** → **Create new rule**. The **Create new rule** page displays. -1. To create a rule based on a KQL or Lucene query, select **Custom query**, - then: - - 1. Define which ((es)) indices or data view the rule searches for alerts. - 1. Use the filter and query fields to create the criteria used for detecting - alerts. - - The following example (based on the prebuilt rule Volume Shadow Copy Deleted or Resized via VssAdmin) detects when the `vssadmin delete shadows` - Windows command is executed: - - * **Index patterns**: `winlogbeat-*` - - Winlogbeat ships Windows event logs to ((elastic-sec)). - - * **Custom query**: `event.action:"Process Create (rule: ProcessCreate)" and process.name:"vssadmin.exe" and process.args:("delete" and "shadows")` - - Searches the `winlogbeat-*` indices for `vssadmin.exe` executions with - the `delete` and `shadow` arguments, which are used to delete a volume's shadow - copies. - - ![Rule query example](../images/rules-ui-create/-detections-rule-query-example.png) - - 1. You can use saved queries () and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. - - When you use a saved query, the **Load saved query "_query name_" dynamically on each rule execution** check box appears: - - * Select this to use the saved query every time the rule runs. This links the rule to the saved query, and you won't be able to modify the rule's **Custom query** field or filters because the rule will only use settings from the saved query. To make changes, modify the saved query itself. - - * Deselect this to load the saved query as a one-time way of populating the rule's **Custom query** field and filters. This copies the settings from the saved query to the rule, so you can then further adjust the rule's query and filters as needed. If the saved query is later changed, the rule will not inherit those changes. - -1. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to Suppress detection alerts for more information. - - {/* The following steps are repeated across multiple rule types. If you change anything - in these steps or sub-steps, apply the change to the other rule types, too. */} -1. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn't affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field's name to find it faster, or type in an entirely new custom field. - - 1. Enter the field's data type. - -1. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](((integrations-docs))). This indicates the rule's dependency on specific integrations and the data they generate, and allows users to confirm each integration's installation status when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration's name to find it faster. - - 1. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. 
For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -1. Click **Continue** to configure basic rule settings. - -
- -## Create a threshold rule -1. Go to **Rules** → **Detection rules (SIEM)** → **Create new rule**. The **Create new rule** page displays. -1. To create a rule based on a source event field threshold, select **Threshold**, then: - 1. Define which ((es)) indices the rule analyzes for alerts. - 1. Use the filter and query fields to create the criteria used for detecting - alerts. - - - You can use saved queries () and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. - - - 1. Use the **Group by** and **Threshold** fields to determine which source event field is used as a threshold and the threshold's value. - 1. Use the **Count** field to limit alerts by cardinality of a certain field. - - For example, if **Group by** is `source.ip, destination.ip` and its **Threshold** is `10`, an alert is generated for every pair of source and destination IP addresses that appear in at least 10 of the rule's search results. - - You can also leave the **Group by** field undefined. The rule then creates an alert when the number of search results is equal to or greater than the threshold value. If you set **Count** to limit the results by `process.name` >= 2, an alert will only be generated for source/destination IP pairs that appear with at least 2 unique process names across all events. - - - Alerts created by threshold rules are synthetic alerts that do not resemble the source documents. The alert itself only contains data about the fields that were aggregated over (the **Group by** fields). Other fields are omitted, because they can vary across all source documents that were counted toward the threshold. Additionally, you can reference the actual count of documents that exceeded the threshold from the `kibana.alert.threshold_result.count` field. - - -1. (Optional) Select **Suppress alerts** to reduce the number of repeated or duplicate alerts created by the rule. Refer to Suppress detection alerts for more information. - - {/* The following steps are repeated across multiple rule types. If you change anything - in these steps or sub-steps, apply the change to the other rule types, too. */} -1. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn't affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field's name to find it faster, or type in an entirely new custom field. - - 1. Enter the field's data type. - -1. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](((integrations-docs))). This indicates the rule's dependency on specific integrations and the data they generate, and allows users to confirm each integration's installation status when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration's name to find it faster. - - 1. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -1. Click **Continue** to configure basic rule settings. - -
- -## Create an event correlation rule -1. Go to **Rules** → **Detection rules (SIEM)** → **Create new rule**. The **Create new rule** page displays. -1. To create an event correlation rule using EQL, select **Event Correlation**, then: - 1. Define which ((es)) indices or data view the rule searches when querying for events. - 1. Write an [EQL query](((ref))/eql-syntax.html) that searches for matching events or a series of matching events. - - - To find events that are missing in a sequence, use the [missing events](((ref))/eql-syntax.html#eql-missing-events) syntax. - - - For example, the following rule detects when `msxsl.exe` makes an outbound - network connection: - - * **Index patterns**: `winlogbeat-*` - - Winlogbeat ships Windows events to ((elastic-sec)). - - * **EQL query**: - - ```eql - sequence by process.entity_id - [process - where event.type in ("start", "process_started") - and process.name == "msxsl.exe"] - [network - where event.type == "connection" - and process.name == "msxsl.exe" - and network.direction == "outgoing"] - ``` - - Searches the `winlogbeat-*` indices for sequences of a `msxsl.exe` process start - event followed by an outbound network connection event that was started by the - `msxsl.exe` process. - - ![](../images/rules-ui-create/-detections-eql-rule-query-example.png) - - - For sequence events, the ((security-app)) generates a single alert when all events listed in the sequence are detected. To see the matched sequence events in more detail, you can view the alert in the Timeline, and, if all events came from the same process, open the alert in Analyze Event view. - - - -1. (Optional) Click the EQL settings icon () to configure additional fields used by [EQL search](((ref))/eql.html#specify-a-timestamp-or-event-category-field): - * **Event category field**: Contains the event classification, such as `process`, `file`, or `network`. This field is typically mapped as a field type in the [keyword family](((ref))/keyword.html). Defaults to the `event.category` ECS field. - * **Tiebreaker field**: Sets a secondary field for sorting events (in ascending, lexicographic order) if they have the same timestamp. - * **Timestamp field**: Contains the event timestamp used for sorting a sequence of events. This is different from the **Timestamp override** advanced setting, which is used for querying events within a range. Defaults to the `@timestamp` ECS field. - -1. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to Suppress detection alerts for more information. - - {/* The following steps are repeated across multiple rule types. If you change anything - in these steps or sub-steps, apply the change to the other rule types, too. */} -1. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn't affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field's name to find it faster, or type in an entirely new custom field. - - 1. Enter the field's data type. - -1. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](((integrations-docs))). This indicates the rule's dependency on specific integrations and the data they generate, and allows users to confirm each integration's installation status when viewing the rule. - - 1. 
Click **Add integration**, then select an integration from the list. You can also start typing an integration's name to find it faster. - - 1. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -1. Click **Continue** to configure basic rule settings. - -
- -## Create an indicator match rule - - -((elastic-sec)) provides limited support for indicator match rules. See Limited support for indicator match rules for more information. - - -1. Go to **Rules** → **Detection rules (SIEM)** → **Create new rule**. The **Create new rule** page displays. -1. To create a rule that searches for events whose specified field value matches the specified indicator field value in the indicator index patterns, select **Indicator Match**, then fill in the following fields: - 1. **Source**: The individual index patterns or data view that specifies what data to search. - 1. **Custom query**: The query and filters used to retrieve the required results from - the ((elastic-sec)) event indices. For example, if you want to match documents that only contain a `destination.ip` address field, add `destination.ip : *`. - - - If you want the rule to check every field in the indices, use this - wildcard expression: `*:*`. - - - - You can use saved queries () and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. - - - 1. **Indicator index patterns**: The indicator index patterns containing field values for which you want to generate alerts. This field is automatically populated with indices specified in the `securitySolution:defaultThreatIndex` advanced setting. For more information, see Update default Elastic Security threat intelligence indices. - - - Data in indicator indices must be ECS compatible, and so it must contain a `@timestamp` field. - - - 1. **Indicator index query**: The query and filters used to filter the fields from - the indicator index patterns. The default query `@timestamp > "now-30d/d"` searches specified indicator indices for indicators ingested during the past 30 days and rounds the start time down to the nearest day (resolves to UTC `00:00:00`). - - 1. **Indicator mapping**: Compares the values of the specified event and indicator fields, and generates an alert if the values are identical. - - - Only single-value fields are supported. - - - To define which field values are compared from the indices, add the following: - - * **Field**: The field used for comparing values in the ((elastic-sec)) event - indices. - - * **Indicator index field**: The field used for comparing values in the indicator - indices. - - 1. You can add `AND` and `OR` clauses to define when alerts are generated. - - For example, to create a rule that generates alerts when `host.name` **and** - `destination.ip` field values in the `logs-*` or `packetbeat-*` ((elastic-sec)) indices - are identical to the corresponding field values in the `mock-threat-list` indicator - index, enter the rule parameters seen in the following image: - - ![Indicator match rule settings](../images/rules-ui-create/-detections-indicator-rule-example.png) - - - Before you create rules, create Timeline templates so - they can be selected here. When alerts generated by the rule are investigated in the Timeline, Timeline query values are replaced with their corresponding alert field values. - - -1. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to Suppress detection alerts for more information. - - {/* The following steps are repeated across multiple rule types. If you change anything - in these steps or sub-steps, apply the change to the other rule types, too. */} -1. (Optional) Create a list of **Required fields** that the rule needs to function. 
This list is informational only, to help users understand the rule; it doesn't affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field's name to find it faster, or type in an entirely new custom field. - - 1. Enter the field's data type. - -1. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](((integrations-docs))). This indicates the rule's dependency on specific integrations and the data they generate, and allows users to confirm each integration's installation status when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration's name to find it faster. - - 1. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -1. Click **Continue** to configure basic rule settings. - -
- -### Use value lists with indicator match rules - -While there are numerous ways you can add data into indicator indices, you can use value lists as the indicator match index in an indicator match rule. Take the following scenario, for example: - -You uploaded a value list of known ransomware domains, and you want to be notified if any of those domains matches a value contained in a domain field in your security event index pattern. - -1. Upload a value list of indicators. -1. Create an indicator match rule and fill in the following fields: - 1. **Index patterns**: The Elastic Security event indices on which the rule runs. - 1. **Custom query**: The query and filters used to retrieve the required results from the Elastic Security event indices (e.g., `host.domain :*`). - 1. **Indicator index patterns**: Value lists are stored in a hidden index called `.items-`. Enter the name of the ((kib)) space in which this rule will run in this field. - 1. **Indicator index query**: Enter the value `list_id :`, followed by the name of the value list you want to use as your indicator index (uploaded in Step 1 above). - 1. **Indicator mapping** - * **Field**: Enter the field from the Elastic Security event indices to be used for comparing values. - * **Indicator index field**: Enter the type of value list you created (i.e., `keyword`, `text`, or `IP`). - - - If you don't remember this information, go to **Rules** → **Detection rules (SIEM)** → **Manage value lists**. Locate the appropriate value list and note the field in the corresponding `Type` column. (Examples include keyword, text, and IP.) - - -![](../images/rules-ui-create/-detections-indicator_value_list.png) - -
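-For example, suppose you uploaded a `keyword` value list named `known-ransomware-domains.txt` (a hypothetical file name) and the rule runs in the `default` ((kib)) space, so the hidden value list index described above is `.items-default`. The fields might then look like this:
-
-```
-Index patterns:            logs-*
-Custom query:              host.domain : *
-Indicator index patterns:  .items-default
-Indicator index query:     list_id : "known-ransomware-domains.txt"
-Indicator mapping:
-  Field:                   host.domain
-  Indicator index field:   keyword
-```
-
-With this configuration, the rule generates an alert whenever a `host.domain` value in the `logs-*` indices matches one of the domains in the uploaded value list.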
- -## Create a new terms rule - -1. Go to **Rules** → **Detection rules (SIEM)** → **Create new rule**. The **Create new rule** page displays. -1. To create a rule that searches for each new term detected in source documents, select **New Terms**, then: - 1. Specify what data to search by entering individual ((es)) index patterns or selecting an existing data view. - 1. Use the filter and query fields to create the criteria used for detecting - alerts. - - - You can use saved queries () and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. - - - - 1. Use the **Fields** menu to select a field to check for new terms. You can also select up to three fields to detect a combination of new terms (for example, a `host.ip` and `host.id` that have never been observed together before). - - - When checking multiple fields, each unique combination of values from those fields is evaluated separately. For example, a document with `host.name: ["host-1", "host-2", "host-3"]` and `user.name: ["user-1", "user-2", "user-3"]` has 9 (3x3) unique combinations of `host.name` and `user.name`. A document with 11 values in `host.name` and 10 values in `user.name` has 110 (11x10) unique combinations. The new terms rule only evaluates 100 unique combinations per document, so selecting fields with large arrays of values might cause incorrect results. - - - 1. Use the **History Window Size** menu to specify the time range to search in minutes, hours, or days to determine if a term is new. The history window size must be larger than the rule interval plus additional look-back time, because the rule will look for terms where the only time(s) the term appears within the history window is _also_ within the rule interval and additional look-back time. - - For example, if a rule has an interval of 5 minutes, no additional look-back time, and a history window size of 7 days, a term will be considered new only if the time it appears within the last 7 days is also within the last 5 minutes. Configure the rule interval and additional look-back time when you set the rule's schedule. - -1. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to Suppress detection alerts for more information. - - {/* The following steps are repeated across multiple rule types. If you change anything - in these steps or sub-steps, apply the change to the other rule types, too. */} -1. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn't affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field's name to find it faster, or type in an entirely new custom field. - - 1. Enter the field's data type. - -1. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](((integrations-docs))). This indicates the rule's dependency on specific integrations and the data they generate, and allows users to confirm each integration's installation status when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration's name to find it faster. - - 1. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). 
For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -1. Click **Continue** to configure basic rule settings. - -
- -## Create an ((esql)) rule - -Use [((esql))](((ref))/esql.html) to query your source events and aggregate event data. Query results are returned in a table with rows and columns. Each row becomes an alert. - -To create an ((esql)) rule: - -1. Go to **Rules** → **Detection rules (SIEM)** → **Create new rule**. The **Create new rule** page appears. -1. Select **((esql))**, then write a query. - - - Refer to the sections below to learn more about ((esql)) query types, query design considerations, and rule limitations. - - - - Click the help icon () to open the in-product reference documentation for all ((esql)) commands and functions. - - -1. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to Suppress detection alerts for more information. - - {/* The following steps are repeated across multiple rule types. If you change anything - in these steps or sub-steps, apply the change to the other rule types, too. */} -1. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn't affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field's name to find it faster, or type in an entirely new custom field. - - 1. Enter the field's data type. - -1. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](((integrations-docs))). This indicates the rule's dependency on specific integrations and the data they generate, and allows users to confirm each integration's installation status when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration's name to find it faster. - - 1. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -1. Click **Continue** to configure basic rule settings. - -
- -### ((esql)) query types - -((esql)) rule queries are loosely categorized into two types: aggregating and non-aggregating. - -
- -#### Aggregating query - -Aggregating queries use [`STATS...BY`](((ref))/esql-functions-operators.html#esql-agg-functions) functions to aggregate source event data. Alerts generated by a rule with an aggregating query only contain the fields that the ((esql)) query returns and any new fields that the query creates. - - - A _new field_ is a field that doesn't exist in the query's source index and is instead created when the rule runs. You can access new fields in the details of any alerts that are generated by the rule. For example, if you use the `STATS...BY` function to create a column with aggregated values, the column is created when the rule runs and is added as a new field to any alerts that are generated by the rule. - - -Here is an example aggregating query: - -```esql -FROM logs-* -| STATS host_count = COUNT(host.name) BY host.name -| SORT host_count DESC -| WHERE host_count > 20 -``` - -- This query starts by searching logs from indices that match the pattern `logs-*`. -- The query then aggregates the count of events by `host.name`. -- Next, it sorts the result by `host_count` in descending order. -- Then, it filters for events where the `host_count` field appears more than 20 times during the specified rule interval. - - -Rules that use aggregating queries might create duplicate alerts. This can happen when events that occur in the additional look-back time are aggregated both in the current rule execution and in a previous rule execution. - - -
-

#### Non-aggregating query

Non-aggregating queries don't use `STATS...BY` functions and don't aggregate source event data. Alerts generated by a non-aggregating query contain source event fields that the query returns, new fields the query creates, and all other fields in the source event document.

  A _new field_ is a field that doesn't exist in the query's source index and is instead created when the rule runs. You can access new fields in the details of any alerts that are generated by the rule. For example, if you use the [`EVAL`](((ref))/esql-commands.html#esql-eval) command to append new columns with calculated values, the columns are created when the rule runs and are added as new fields to any alerts generated by the rule.

Here is an example non-aggregating query:

```esql
FROM logs-* METADATA _id, _index, _version
| WHERE event.category == "process" AND event.id == "8a4f500d"
| LIMIT 10
```
- This query starts by searching logs from indices that match the pattern `logs-*`. The `METADATA _id, _index, _version` operator enables alert deduplication.
- Next, the query filters events where the `event.category` is a process and the `event.id` is `8a4f500d`.
- Then, it limits the output to the top 10 results.
- -#### Turn on alert deduplication for rules using non-aggregating queries - -To deduplicate alerts, a query needs access to the `_id`, `_index`, and `_version` metadata fields of the queried source event documents. You can allow this by adding the `METADATA _id, _index, _version` operator after the `FROM` source command, for example: - -```esql -FROM logs-* METADATA _id, _index, _version -| WHERE event.category == "process" AND event.id == "8a4f500d" -| LIMIT 10 -``` - -When those metadata fields are provided, unique alert IDs are created for each alert generated by the query. - -When developing the query, make sure you don't [`DROP`](((ref))/esql-commands.html#esql-drop) or filter out the `_id`, `_index`, or `_version` metadata fields. - -Here is an example of a query that fails to deduplicate alerts. It uses the `DROP` command to omit the `_id` property from the results table: - -```esql -FROM logs-* METADATA _id, _index, _version -| WHERE event.category == "process" AND event.id == "8a4f500d" -| DROP _id -| LIMIT 10 -``` - -Here is another example of an invalid query that uses the `KEEP` command to only return `event.*` fields in the results table: - -```esql -FROM logs-* METADATA _id, _index, _version -| WHERE event.category == "process" AND event.id == "8a4f500d" -| KEEP event.* -| LIMIT 10 -``` - -
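If you want to limit the columns in the results while keeping deduplication intact, keep the metadata fields explicitly instead of dropping them. Here is a sketch (reusing the hypothetical filter from the examples above) that returns only `event.*` fields plus the metadata fields required for deduplication:

```esql
FROM logs-* METADATA _id, _index, _version
| WHERE event.category == "process" AND event.id == "8a4f500d"
| KEEP event.*, _id, _index, _version
| LIMIT 10
```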
-

### Query design considerations

When writing your query, consider the following:

- The [`LIMIT`](((ref))/esql-commands.html#esql-limit) command specifies the maximum number of rows an ((esql)) query returns and the maximum number of alerts created per rule run. Similarly, a detection rule's **Max alerts per run** setting specifies the maximum number of alerts it can create every time it runs.

  If the `LIMIT` value and **Max alerts per run** value are different, the rule uses the lower value to determine the maximum number of alerts the rule generates.

- When writing an aggregating query, use the [`STATS...BY`](((ref))/esql-commands.html#esql-stats-by) command with fields that you want to search and filter for after alerts are created (see the example query after this list). For example, using the `host.name`, `user.name`, `process.name` fields with the `BY` operator of the `STATS...BY` command returns these fields in alert documents, and allows you to search and filter for them from the Alerts table.

- When configuring alert suppression on a non-aggregating query, we recommend sorting results in ascending `@timestamp` order. Doing so ensures that alerts are properly suppressed, especially if the number of alerts generated is higher than the **Max alerts per run** value.
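For example, the following sketch (the index pattern and threshold are assumptions) groups by several fields so that they are returned in alert documents and can be searched and filtered in the Alerts table, and it caps the number of rows with `LIMIT`:

```esql
FROM logs-*
| STATS event_count = COUNT(*) BY host.name, user.name, process.name
| WHERE event_count > 50
| LIMIT 100
```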
- -### ((esql)) rule limitations - -If your ((esql)) query creates new fields that aren’t part of the ECS schema, they aren't mapped to the alerts index, so you can't search for or filter them in the Alerts table. As a workaround, create runtime fields. - -
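For example, here is a minimal sketch of defining a runtime field with the ((es)) update mapping API so that an unmapped, query-created field becomes searchable. The index name and the `my_custom_score` field are hypothetical; a runtime field defined without a script reads the value from `_source` at query time. You can also define runtime fields on a data view instead.

```json
PUT /my-alerts-index/_mapping
{
  "runtime": {
    "my_custom_score": {
      "type": "long"
    }
  }
}
```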
-

### Highlight fields returned by the ((esql)) rule query

When configuring an ((esql)) rule's **Custom highlighted fields**, you can specify any fields that the rule's aggregating or non-aggregating query returns. This can help ensure that returned fields are visible in the alert details flyout while you're investigating alerts.
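For example, a non-aggregating query like the following sketch (the filter and field names are illustrative) returns `host.name`, `user.name`, and `source.ip` columns, any of which you could then add as custom highlighted fields:

```esql
FROM logs-* METADATA _id, _index, _version
| WHERE event.category == "authentication" AND event.outcome == "failure"
| KEEP host.name, user.name, source.ip, _id, _index, _version
| LIMIT 100
```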
- -## Configure basic rule settings - -1. In the **About rule** pane, fill in the following fields: - 1. **Name**: The rule's name. - 1. **Description**: A description of what the rule does. - 1. **Default severity**: Select the severity level of alerts created by the rule: - * **Low**: Alerts that are of interest but generally are not considered to be security incidents. Sometimes a combination of low severity alerts can indicate suspicious activity. - - * **Medium**: Alerts that require investigation. - * **High**: Alerts that require an immediate investigation. - * **Critical**: Alerts that indicate it is highly likely a security incident has occurred. - - 1. **Severity override** (optional): Select to use source event values to - override the **Default severity** in generated alerts. When selected, a UI - component is displayed where you can map the source event field values to - severity levels. The following example shows how to map severity levels to `host.name` - values: - - ![](../images/rules-ui-create/-detections-severity-mapping-ui.png) - - - For threshold rules, not all source event values can be used for overrides; only the fields that were aggregated over (the `Group by` fields) will contain data. Please also note that overrides are not supported for event correlation rules. - - - 1. **Default risk score**: A numerical value between 0 and 100 that indicates the risk of events detected by the rule. This setting changes to a default value when you change the **Severity** level, but you can adjust the risk score as needed. General guidelines are: - * `0` - `21` represents low severity. - * `22` - `47` represents medium severity. - * `48` - `73` represents high severity. - * `74` - `100` represents critical severity. - 1. **Risk score override** (optional): Select to use a source event value to - override the **Default risk score** in generated alerts. When selected, a UI - component is displayed to select the source field used for the risk - score. For example, if you want to use the source event's risk score in - alerts: - - ![](../images/rules-ui-create/-detections-risk-source-field-ui.png) - - - For threshold rules, not all source event values can be used for overrides; only the fields that were aggregated over (the `Group by` fields) will contain data. - - - 1. **Tags** (optional): Words and phrases used to categorize, filter, and search - the rule. - -1. Continue with **one** of the following: - - * Configure advanced rule settings (optional) - * Set the rule's schedule - -
-

## Configure advanced rule settings (optional)

1. Click **Advanced settings** and fill in the following fields where applicable:
    1. **Reference URLs** (optional): References to information that is relevant to the rule. For example, links to background information.

    1. **False positive examples** (optional): List of common scenarios that may produce false-positive alerts.

    1. **MITRE ATT&CK™ threats** (optional): Add relevant [MITRE](https://attack.mitre.org/) framework tactics, techniques, and subtechniques.
    1. **Custom highlighted fields** (optional): Specify highlighted fields for unique alert investigation flows. You can choose any fields that are available in the index patterns or data view you selected for the rule's data source.

        After you create the rule, you can find all custom highlighted fields in the About section of the rule details page. If the rule has alerts, you can find custom highlighted fields in the Highlighted fields section of the alert details flyout.

    1. **Setup guide** (optional): Instructions on rule prerequisites such as required integrations, configuration steps, and anything else needed for the rule to work correctly.

    1. **Investigation guide** (optional): Information for analysts investigating alerts created by the rule. You can also add action buttons to run Osquery or launch Timeline investigations using alert data.

    1. **Author** (optional): The rule's authors.
    1. **License** (optional): The rule's license.
    1. **Elastic endpoint exceptions** (optional): Adds all Elastic Endpoint Security rule exceptions to this rule (refer to Add ((elastic-endpoint)) exceptions to learn more about adding endpoint exceptions).

        If you select this option, you can add Endpoint exceptions on the Rule details page. Additionally, all future exceptions added to the Endpoint Security rule also affect this rule.

    1. **Building block** (optional): Select to create a building-block rule. By default, alerts generated from a building-block rule are not displayed in the UI. See Use building block rules for more information.

    1. **Max alerts per run** (optional): Specify the maximum number of alerts the rule can create each time it runs. Default is 100.

    1. **Indicator prefix override**: Define the location of indicator data within the structure of indicator documents. When the indicator match rule executes, it queries specified indicator indices and references this setting to locate fields with indicator data. This data is used to enrich indicator match alerts with metadata about matched threat indicators. The default value for this setting is `threat.indicator`.

        If your threat indicator data is at a different location, update this setting accordingly to ensure alert enrichment can still be performed.

    1. **Rule name override** (optional): Select a source event field to use as the rule name in the UI (Alerts table). This is useful for exposing, at a glance, more information about an alert. For example, if the rule generates alerts from Suricata, selecting `event.action` lets you see what action (Suricata category) caused the event directly in the Alerts table.

        For threshold rules, not all source event values can be used for overrides; only the fields that were aggregated over (the `Group by` fields) will contain data.

    1. **Timestamp override** (optional): Select a source event timestamp field. When selected, the rule's query uses the selected field, instead of the default `@timestamp` field, to search for alerts.
This can help reduce missing alerts due to network or server outages. Specifically, if your ingest pipeline adds a timestamp when events are sent to ((es)), this avoids missing alerts due to ingestion delays. - However, if you know your data source has an inaccurate `@timestamp` value, it is recommended you select the **Do not use @timestamp as a fallback timestamp field** option to ignore the `@timestamp` field entirely. - - - The [Microsoft](((filebeat-ref))/filebeat-module-microsoft.html) and - [Google Workspace](((filebeat-ref))/filebeat-module-google_workspace.html) ((filebeat)) modules have an `event.ingested` timestamp field that can be used instead of the default `@timestamp` field. - - -1. Click **Continue**. The **Schedule rule** pane is displayed. - - ![](../images/rules-ui-create/-detections-schedule-rule.png) - -1. Continue with setting the rule's schedule. - -
-

## Set the rule's schedule

1. Select how often the rule runs.
1. Optionally, add `Additional look-back time` to the rule. When defined, the rule searches indices with the additional time.

    For example, if you set a rule to run every 5 minutes with an additional look-back time of 1 minute, the rule runs every 5 minutes but analyzes the documents added to indices during the last 6 minutes.

    It is recommended to set the `Additional look-back time` to at least 1 minute. This ensures there are no missing alerts when a rule does not run exactly at its scheduled time.

    ((elastic-sec)) prevents duplication. Any duplicate alerts that are discovered during the `Additional look-back time` are _not_ created.

1. Click **Continue**. The **Rule actions** pane is displayed.

1. Do either of the following:

    * Continue on to setting up rule actions and response actions (optional).
    * Create the rule (with or without activation).
- -## Set up rule actions (optional) - -Use actions to set up notifications sent via other systems when alerts are generated. - - -To use actions for alert notifications, you need the appropriate user role. For more information, see Cases requirements. - - -1. Select a connector type to determine how notifications are sent. For example, if you select the ((jira)) connector, notifications are sent to your ((jira)) system. - - - Each action type requires a connector. Connectors store the - information required to send the notification from the external system. You can - configure connectors while creating the rule or in **Project settings** → **Management** → **((connectors-ui))**. For more - information, see [Action and connector types](((kibana-ref))/action-types.html). - - Some connectors that perform actions require less configuration. For example, you do not need to set the action frequency or variables for the [Cases connector](((kibana-ref))/cases-action-type.html). - - - ![Available connector types](../images/rules-ui-create/-detections-available-action-types.png) - -1. After you select a connector, set its action frequency to define when notifications are sent: - - * **Summary of alerts**: Select this option to get a report that summarizes generated alerts, which you can review at your convenience. Alert summaries will be sent at the specified time intervals. - - - When setting a custom notification frequency, do not choose a time that is shorter than the rule's execution schedule. - - - * **For each alert**: Select this option to ensure notifications are sent every time new alerts are generated. - -1. (Optional) Specify additional conditions that need to be met for notifications to send. Click the toggle to enable a setting, then add the required details: - - * **If alert matches query**: Enter a KQL query that defines field-value pairs or query conditions that must be met for notifications to send. The query only searches alert documents in the indices specified for the rule. - * **If alert is generated during timeframe**: Set timeframe details. Notifications are only sent if alerts are generated within the timeframe you define. - -1. Complete the required connector type fields. Here is an example with ((jira)): - - ![](../images/rules-ui-create/-detections-selected-action-type.png) - -1. Use the default notification message or customize it. You can add more context to the message by clicking the icon above the message text box and selecting from a list of available alert notification variables. - -1. Create the rule with or without activation. - - - When you activate a rule, it is queued, and its schedule is determined by - its initial run time. For example, if you activate a rule that runs every 5 - minutes at 14:03 but it does not run until 14:04, it will run again at 14:09. - - - - -After you activate a rule, you can check if it is running as expected -using the Monitoring tab on the Rules page. If you see -values in the `Gap` column, you can Troubleshoot missing alerts. - -When a rule fails to run, the ((security-app)) tries to rerun it at its next -scheduled run time. - - - -
- -### Alert notification placeholders - -You can use [mustache syntax](http://mustache.github.io/) to add variables to notification messages. The action frequency you choose determines the variables you can select from. - -The following variables can be passed for all rules: - - -Refer to [Action frequency: Summary of alerts](((kibana-ref))/rule-action-variables.html#alert-summary-action-variables) to learn about additional variables that can be passed if the rule's action frequency is **Summary of alerts**. - - -* `{{context.alerts}}`: Array of detected alerts -* `{{{context.results_link}}}`: URL to the alerts -* `{{context.rule.anomaly_threshold}}`: Anomaly threshold score above which - alerts are generated (((ml)) rules only) - -* `{{context.rule.description}}`: Rule description -* `{{context.rule.false_positives}}`: Rule false positives -* `{{context.rule.filters}}`: Rule filters (query rules only) -* `{{context.rule.id}}`: Unique rule ID returned after creating the rule -* `{{context.rule.index}}`: Indices rule runs on (query rules only) -* `{{context.rule.language}}`: Rule query language (query rules only) -* `{{context.rule.machine_learning_job_id}}`: ID of associated ((ml)) job (((ml)) - rules only) - -* `{{context.rule.max_signals}}`: Maximum allowed number of alerts per rule - execution - -* `{{context.rule.name}}`: Rule name -* `{{context.rule.query}}`: Rule query (query rules only) -* `{{context.rule.references}}`: Rule references -* `{{context.rule.risk_score}}`: Default rule risk score - - - This placeholder contains the rule's default values even when the **Risk score override** option is used. - - -* `{{context.rule.rule_id}}`: Generated or user-defined rule ID that can be - used as an identifier across systems - -* `{{context.rule.saved_id}}`: Saved search ID -* `{{context.rule.severity}}`: Default rule severity - - - This placeholder contains the rule's default values even when the **Severity override** option is used. - - -* `{{context.rule.threat}}`: Rule threat framework -* `{{context.rule.threshold}}`: Rule threshold values (threshold rules only) -* `{{context.rule.timeline_id}}`: Associated Timeline ID -* `{{context.rule.timeline_title}}`: Associated Timeline name -* `{{context.rule.type}}`: Rule type -* `{{context.rule.version}}`: Rule version -* `{{date}}`: Date the rule scheduled the action -* `{{kibanaBaseUrl}}`: Configured `server.publicBaseUrl` value, or empty string if not configured -* `{{rule.id}}`: ID of the rule -* `{{rule.name}}`: Name of the rule -* `{{rule.spaceId}}`: Space ID of the rule -* `{{rule.tags}}`: Tags of the rule -* `{{rule.type}}`: Type of rule -* `{{state.signals_count}}`: Number of alerts detected - -The following variables can only be passed if the rule’s action frequency is for each alert: - -* `{{alert.actionGroup}}`: Action group of the alert that scheduled actions for the rule -* `{{alert.actionGroupName}}`: Human-readable name of the action group of the alert that scheduled actions for the rule -* `{{alert.actionSubgroup}}`: Action subgroup of the alert that scheduled actions for the rule -* `{{alert.id}}`: ID of the alert that scheduled actions for the rule -* `{{alert.flapping}}`: A flag on the alert that indicates whether the alert status is changing repeatedly - -
- -#### Alert placeholder examples - -To understand which fields to parse, see the [Detections API](((security-guide))/rule-api-overview.html) to view the JSON representation of rules. -{/* Link to classic docs until serverless API docs are available. */} - -Example using `{{context.rule.filters}}` to output a list of filters: - -```json -{{#context.rule.filters}} -{{^meta.disabled}}{{meta.key}} {{#meta.negate}}NOT {{/meta.negate}}{{meta.type}} {{^exists}}{{meta.value}}{{meta.params.query}}{{/exists}}{{/meta.disabled}} -{{/context.rule.filters}} -``` - -Example using `{{context.alerts}}` as an array, which contains each alert generated since the last time the action was executed: - -```json -{{#context.alerts}} -Detection alert for user: {{user.name}} -{{/context.alerts}} -``` - -Example using the mustache "current element" notation `{{.}}` to output all the rule references in the `signal.rule.references` array: - -```json -{{#signal.rule.references}} {{.}} {{/signal.rule.references}} -``` - -
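Here is a sketch of a plain-text notification message that combines several of the placeholders listed above into a short summary (the layout is illustrative):

```json
Rule "{{context.rule.name}}" (severity: {{context.rule.severity}}) created {{state.signals_count}} alert(s).
View alerts: {{{context.results_link}}}
```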
-

### Set up response actions (optional)

Use response actions to set up additional functionality that will run whenever a rule executes:

* **Osquery**: Include live Osquery queries with a custom query rule. When an alert is generated, Osquery automatically collects data on the system related to the alert. Refer to Add Osquery Response Actions to learn more.

* **((elastic-defend))**: Automatically run response actions on an endpoint when rule conditions are met. For example, you can automatically isolate a host or terminate a process when specific activities or events are detected on the host. Refer to Automated response actions to learn more.

Host isolation involves quarantining a host from the network to prevent further spread of threats and limit potential damage. Be aware that automatic host isolation can cause unintended consequences, such as disrupting legitimate user activities or blocking critical business processes.

![Shows available response actions](../images/rules-ui-create/-detections-available-response-actions.png)
- -## Preview your rule (optional) - -You can preview any custom or prebuilt rule to find out how noisy it will be. For a custom rule, you can then adjust the rule's query or other settings. - - -To preview rules, you must have the appropriate user role. Refer to Detections requirements for more information. - - -Click the **Rule preview** button while creating or editing a rule. The preview opens in a side panel, showing a histogram and table with the alerts you can expect, based on the defined rule settings and past events in your indices. - -![Rule preview](../images/rules-ui-create/-detections-preview-rule.png) - -The preview also includes the effects of rule exceptions and override fields. In the histogram, alerts are stacked by `event.category` (or `host.name` for machine learning rules), and alerts with multiple values are counted more than once. - -To interact with the rule preview: - -* Use the date and time picker to define the preview's time range. - - - Avoid setting long time ranges with short rule intervals, or the rule preview might time out. - - -* Click **Refresh** to update the preview. - * When you edit the rule's settings or the preview's time range, the button changes from blue to green to indicate that the rule has been edited since the last preview. - * For a relative time range (such as `Last 1 hour`), refresh the preview to check for the latest results. (Previews don't automatically refresh with new incoming data.) - -* Click the **View details** icon () in the alerts table to view the details of a particular alert. - -* To resize the preview, hover between the rule settings and preview, then click and drag the border. You can also click the border, then the collapse icon () to collapse and expand the preview. - -* To close the preview, click the **Rule preview** button again. - diff --git a/docs/serverless/rules/rules-ui-management.mdx b/docs/serverless/rules/rules-ui-management.mdx deleted file mode 100644 index bf3a5b8ab5..0000000000 --- a/docs/serverless/rules/rules-ui-management.mdx +++ /dev/null @@ -1,196 +0,0 @@ ---- -slug: /serverless/security/rules-ui-management -title: Manage detection rules -description: Manage your detection rules and enable Elastic prebuilt rules on the Rules page. -tags: ["serverless","security","how-to","manage"] -status: in review ---- - - -
- -The Rules page allows you to view and manage all prebuilt and custom detection rules. - -![The Rules page](../images/rules-ui-management/-detections-all-rules.png) - -On the Rules page, you can: - -* Sort and filter the rules list -* Check the current status of rules -* Modify existing rules settings -* Manage rules -* Snooze rule actions -* Export and import rules -* Confirm rule prerequisites -* Troubleshoot missing alerts - -
-

## Sort and filter the rules list

To sort the rules list, click any column header. To sort in descending order, click the column header again.

To filter the rules list, enter a search term in the search bar and press **Return**:

* Rule name — Enter a word or phrase from a rule's name.
* Index pattern — Enter an index pattern (such as `filebeat-*`) to display all rules that use it.
* MITRE ATT&CK tactic or technique — Enter a MITRE ATT&CK tactic or technique name (such as `Defense Evasion`) or ID (such as `TA0005`) to display all associated rules.

Searches for index patterns and MITRE ATT&CK tactics and techniques must match exactly, are case sensitive, and do _not_ support wildcards. For example, to find rules using the `filebeat-*` index pattern, the search term `filebeat-*` is valid, but `filebeat` and `file*` are not because they don't exactly match the index pattern. Likewise, the MITRE ATT&CK tactic `Defense Evasion` is valid, but `Defense`, `defense evasion`, and `Defense*` are not.

You can also filter the rules list by selecting the **Tags**, **Last response**, **Elastic rules**, **Custom rules**, **Enabled rules**, and **Disabled rules** filters next to the search bar.

The rules list retains your sorting and filtering settings when you navigate away and return to the page. These settings are also preserved when you copy the page's URL and paste it into another browser. Select **Clear filters** above the table to revert to the default view.
- -## Check the current status of rules - -The **Last response** column displays the current status of each rule, based on the most recent attempt to run the rule: - -* **Succeeded**: The rule completed its defined search. This doesn't necessarily mean it generated an alert, just that it ran without error. -* **Failed**: The rule encountered an error that prevented it from running. For example, a ((ml)) rule whose corresponding ((ml)) job wasn't running. -* **Warning**: Nothing prevented the rule from running, but it might have returned unexpected results. For example, a custom query rule tried to search an index pattern that couldn't be found in ((es)). - -For ((ml)) rules, an indicator icon () also appears in this column if a required ((ml)) job isn't running. Click the icon to list the affected jobs, then click **Visit rule details page to investigate** to open the rule's details page, where you can start the ((ml)) job. - -
-

## Modify existing rules settings

You can edit an existing rule's settings, and you can bulk edit settings for multiple rules at once.

For prebuilt Elastic rules, you can't modify most settings. You can only edit rule actions and add exceptions. If you try to bulk edit with both prebuilt and custom rules selected, the action will affect only the rules that can be modified.

Similarly, rules are skipped if a bulk edit can't modify them. For example, this happens if you try to apply a tag to rules that already have that tag, or to apply an index pattern to rules that use data views.

1. Go to **Rules** → **Detection rules (SIEM)**.
1. Do one of the following:
    * **Edit a single rule**: Select the **All actions** menu () on a rule, then select **Edit rule settings**. The **Edit rule settings** view opens, where you can modify the rule's settings.
    * **Bulk edit multiple rules**: Select the rules you want to edit, then select an action from the **Bulk actions** menu:
        * **Index patterns**: Add or delete the index patterns used by all selected rules.
        * **Tags**: Add or delete tags on all selected rules.
        * **Custom highlighted fields**: Add custom highlighted fields on all selected rules. You can choose any fields that are available in the default ((elastic-sec)) indices, or enter field names from other indices. To overwrite a rule's current set of custom highlighted fields, select the **Overwrite all selected rules' custom highlighted fields** option, then click **Save**.
        * **Add rule actions**: Add rule actions on all selected rules. If you add multiple actions, you can specify an action frequency for each of them. To overwrite the frequency of existing actions, select the **Overwrite all selected rules actions** option.

            Rule actions won't run during a [maintenance window](((kibana-ref))/maintenance-windows.html). They'll resume running after the maintenance window ends.

        * **Update rule schedules**: Update the schedules and look-back times on all selected rules.
        * **Apply Timeline template**: Apply a specified Timeline template to the selected rules. You can also choose **None** to remove Timeline templates from the selected rules.
1. On the page or flyout that opens, update the rule settings and actions.

    To snooze rule actions, go to the **Actions** tab and click the bell icon.

1. If available, select **Overwrite all selected _x_** to overwrite the settings on the rules. For example, if you're adding tags to multiple rules, selecting **Overwrite all selected rules tags** removes all the rules' original tags and replaces them with the tags you specify.
1. Click **Save**.
- -## Manage rules - -You can duplicate, enable, disable, delete, and snooze actions for rules: - - -When duplicating a rule with exceptions, you can choose to duplicate the rule and its exceptions (active and expired), the rule and active exceptions only, or only the rule. If you duplicate the rule and its exceptions, copies of the exceptions are created and added to the duplicated rule's default rule list. If the original rule used exceptions from a shared exception list, the duplicated rule will reference the same shared exception list. - - -1. Go to **Rules** → **Detection rules (SIEM)**. -1. Do one of the following: - * Select the **All actions** menu () on a rule, then select an action. - * Select all the rules you want to modify, then select an action from the **Bulk actions** menu. - * To enable or disable a single rule, switch on the rule's **Enabled** toggle. - * To snooze actions for rules, click the bell icon. - -
- -## Snooze rule actions - -Instead of turning rules off to stop alert notifications, you can snooze rule actions for a specified time period. When you snooze rule actions, the rule continues to run on its defined schedule, but won't perform any actions or send alert notifications. - -You can snooze notifications temporarily or indefinitely. When actions are snoozed, you can cancel or change the duration of the snoozed state. You can also schedule and manage recurring downtime for actions. - -You can snooze rule notifications from the **Installed Rules** tab, the rule details page, or the **Actions** tab when editing a rule. - - - -
- -## Export and import rules - -You can export custom detection rules to an `.ndjson` file, which you can then import into another ((elastic-sec)) environment. - - - -You cannot export Elastic prebuilt rules, but you can duplicate a prebuilt rule, then export the duplicated rule. - -If you try to export with both prebuilt and custom rules selected, only the custom rules are exported. - - - -The `.ndjson` file also includes any actions, connectors, and exception lists related to the exported rules. However, other configuration items require additional handling when exporting and importing rules: - -- **Data views**: For rules that use a ((kib)) data view as a data source, the exported file contains the associated `data_view_id`, but does _not_ include any other data view configuration. To export/import between ((kib)) spaces, first use the [Saved Objects](((kibana-ref))/managing-saved-objects.html#managing-saved-objects-share-to-space) UI (**Project settings** → **Content** → **Saved Objects**) to share the data view with the destination space. - -To import into a different ((stack)) deployment, the destination cluster must include a data view with a matching data view ID (configured in the [data view's advanced settings](((kibana-ref))/data-views.html)). Alternatively, after importing, you can manually reconfigure the rule to use an appropriate data view in the destination system. - -- **Actions and connectors**: Rule actions and connectors are included in the exported file, but sensitive information about the connector (such as authentication credentials) _is not_ included. You must re-add missing connector details after importing detection rules. - - - You can also use the [Saved Objects](((kibana-ref))/managing-saved-objects.html#managing-saved-objects-share-to-space) UI (**Project settings** → **Content** → **Saved Objects**) to export and import necessary connectors before importing detection rules. - - -- **Value lists**: Any value lists used for rule exceptions are _not_ included in rule exports or imports. Use the Manage value lists UI (**Rules** → **Detection rules (SIEM)** → **Manage value lists**) to export and import value lists separately. - -To export and import detection rules: - -1. Go to **Rules** → **Detection rules (SIEM)**. -1. To export rules: - 1. In the rules table, select the rules you want to export. - 1. Select **Bulk actions** → **Export**, then save the exported file. -1. To import rules: - - - To import rules with and without actions, and to manage rule connectors, you must have the appropriate user role. Refer to Enable and access detections for more information. - - - 1. Click **Import rules**. - 1. Drag and drop the file that contains the detection rules. - - - Imported rules must be in an `.ndjson` file. - - - 1. (Optional) Select **Overwrite existing detection rules with conflicting "rule_id"** to update existing rules if they match the `rule_id` value of any rules in the import file. Configuration data included with the rules, such as actions, is also overwritten. - 1. (Optional) Select **Overwrite existing exception lists with conflicting "list_id"** to replace existing exception lists with exception lists from the import file if they have a matching `list_id` value. - 1. (Optional) Select **Overwrite existing connectors with conflicting action "id"** to update existing connectors if they match the `action id` value of any rule actions in the import file. Configuration data included with the actions is also overwritten. - 1. Click **Import rule**. - 1. 
(Optional) If a connector is missing sensitive information after the import, a warning displays and you're prompted to fix the connector. In the warning, click **Go to connector**. On the Connectors page, find the connector that needs to be updated, click **Fix**, then add the necessary details. - -
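For reference, each line of an exported `.ndjson` file is a single JSON object. The following is a heavily abridged, hypothetical sketch of what one exported custom rule might look like; real exports contain many more fields and typically end with a summary line that includes export counts:

```json
{"rule_id":"my-custom-rule-1","name":"Suspicious process activity","description":"Detects a suspicious process.","type":"query","query":"process.name : \"mimikatz.exe\"","language":"kuery","index":["logs-*"],"risk_score":73,"severity":"high","enabled":true,"interval":"5m","from":"now-6m","version":1}
```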
- -## Confirm rule prerequisites - -Many detection rules are designed to work with specific [Elastic integrations](((integrations-docs))) and data fields. These prerequisites are identified in **Related integrations** and **Required fields** on a rule's details page (**Rules** → **Detection rules (SIEM)**, then click a rule's name). **Related integrations** also displays each integration's installation status and includes links for installing and configuring the listed integrations. - -Additionally, the **Setup guide** section provides guidance on setting up the rule's requirements. - -![Rule details page with Related integrations, Required fields, and Setup guide highlighted](../images/prebuilt-rules-management/-detections-rule-details-prerequisites.png) - -You can also check rules' related integrations in the **Installed Rules** and **Rule Monitoring** tables. Click the **integrations** badge to display the related integrations in a popup. - - - - -You can hide the **integrations** badge in the rules tables by turning off the `securitySolution:showRelatedIntegrations` advanced setting. - - diff --git a/docs/serverless/rules/shared-exception-lists.mdx b/docs/serverless/rules/shared-exception-lists.mdx deleted file mode 100644 index d793009b30..0000000000 --- a/docs/serverless/rules/shared-exception-lists.mdx +++ /dev/null @@ -1,146 +0,0 @@ ---- -slug: /serverless/security/shared-exception-lists -title: Create and manage shared exception lists -description: Learn how to create and manage shared exception lists. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
- -Shared exception lists allow you to group exceptions together and then apply them to multiple rules. Use the Shared Exception Lists page to set up shared exception lists. - -![Shared Exception Lists page](../images/shared-exception-lists/-detections-rule-exceptions-page.png) - -
- -## Create shared exception lists - -Set up shared exception lists to contain exception items: - -1. Go to **Rules** → **Shared exception lists**. -1. Click **Create shared exception list** → **Create shared list**. -1. Give the shared exception list a name. -1. (Optional) Provide a description. -1. Click **Create shared exception list**. - -
- -## Add exception items to shared exception lists - -Add exception items: - -1. Go to **Rules** → **Shared exception lists**. -1. Click **Create shared exception list** → **Create exception item**. - - - You can add exceptions to an empty shared exception list by expanding the list, or viewing its details page and clicking **Create rule exception**. After creating an exception, you can associate the shared exception list with rules. Refer to Associate shared exception lists with rules to learn more. - - -1. In the **Add rule exception** flyout, name the exception item and add conditions that define when the exception prevents alerts. When the exception's query conditions are met (the query evaluates to `true`), rules do not generate alerts even when other rule criteria are met. - 1. **Field**: Select a field to identify the event being filtered. - - 1. **Operator**: Select an operator to define the condition: - * `is` | `is not` — Must be an exact match of the defined value. - * `is one of` | `is not one of` — Matches any of the defined values. - * `exists` | `does not exist` — The field exists. - * `is in list` | `is not in list` — Matches values in a value list. - - - - * An exception defined by a value list must use `is in list` or `is not in list` in all conditions. - * Wildcards are not supported in value lists. - * If a value list can't be used due to size or data type, it'll be unavailable in the **Value** menu. - - - - * `matches` | `does not match` — Allows you to use wildcards in **Value**, such as `C:\path\*\app.exe`. Available wildcards are `?` (match one character) and `*` (match zero or more characters). The selected **Field** data type must be [keyword](((ref))/keyword.html#keyword-field-type), [text](((ref))/text.html#text-field-type), or [wildcard](((ref))/keyword.html#wildcard-field-type). - - - Using wildcards can impact performance. To create a more efficient exception using wildcards, use multiple conditions and make them as specific as possible. For example, adding conditions using `process.name` or `file.name` can help limit the scope of wildcard matching. - - - 1. **Value**: Enter the value associated with the **Field**. To enter multiple values (when using `is one of` or `is not one of`), enter each value, then press **Return**. - -1. Click **AND** or **OR** to create multiple conditions and define their relationships. - -1. Click **Add nested condition** to create conditions using nested fields. This is only required for - these nested fields. For all other fields, nested conditions should not be used. - -1. Choose to add the exception to shared exception lists. - - - This option will be unavailable if a shared exception list doesn't exist. In addition, you can't add an endpoint exception item to the Endpoint Security Exception List from this UI. Refer to Add ((elastic-endpoint)) exceptions for instructions about creating endpoint exceptions. - - -1. (Optional) Enter a comment describing the exception. -1. (Optional) Enter a future expiration date and time for the exception. -1. (Optional) **Close all alerts that match this exception and were generated by this rule**: - Closes all alerts that match the exception's conditions and were generated only by the current rule. - -1. Click **Add rule exception**. - - - -## Associate shared exception lists with rules - -Apply shared exception lists to rules: - -1. Go to **Rules** → **Shared exception lists**. -1. 
Do one of the following: - * Select a shared exception list's name to open its details page, then click **Link rules**. - * Find the shared exception list you want to assign to rules, then from the **More actions** menu (), select **Link rules**. -1. Click the toggles in the **Link** column to select the rules you want to link to the exception list. - - - If you know a rule's name, you can enter it into the search bar. - - -1. Click **Save**. -1. (Optional) To verify that the shared exception list was added to the rules you selected: - - 1. Open a rule’s details page (**Rules** → **Detection rules (SIEM)** → **_Rule name_**). - 1. Scroll down the page, and then select the **Rule exceptions** tab. - 1. Navigate to the exception items that are included in the shared exception list. Click the **Affects shared list** link to view the associated shared exception lists. - - ![Associated shared exceptions](../images/shared-exception-lists/-detections-associated-shared-exception-list.png) - -
- -## View and filter exception lists - -The Shared Exception Lists page displays each shared exception list on an individual row, with the most recently created list at the top. Each row contains these details about the shared exception list: - -* Shared exception list name -* Date the list was created -* Username of the user who created the list -* Number of exception items in the shared exception list -* Number of rules the shared exception list affects - -To view the details of an exception item within a shared exception list, expand a row. - -![Associated shared exceptions](../images/shared-exception-lists/-detections-view-filter-shared-exception.png) - -To filter exception lists by a specific value, enter a value in the search bar. You can search the following attributes: - -* `name` -* `list_id` -* `created_by` - -If no attribute is selected, the app searches the list name by default. - -
- -## Manage shared exception lists - -You can edit, export, import, duplicate, and delete shared exception lists from the Shared Exception Lists page. - -To export or delete an exception list, select the required action button on the appropriate list. Note the following: - -* Exception lists are exported to `.ndjson` files. -* Exception lists are also exported as part of any exported detection rules configured with exceptions. Refer to Export and import rules. -* If an exception list is linked to any rules, you'll get a warning asking you to confirm the deletion. -* If an exception list contains expired exceptions, you can choose whether to include them in the exported file. - -![Detail of Exception lists table with export and delete buttons highlighted](../images/shared-exception-lists/-detections-actions-exception-list.png) \ No newline at end of file diff --git a/docs/serverless/rules/tuning-detection-signals.mdx b/docs/serverless/rules/tuning-detection-signals.mdx deleted file mode 100644 index 2210187f14..0000000000 --- a/docs/serverless/rules/tuning-detection-signals.mdx +++ /dev/null @@ -1,207 +0,0 @@ ---- -slug: /serverless/security/tune-detection-signals -title: Tune detection rules -description: Tune prebuilt and custom detection rules to optimize alert generation. -tags: [ 'serverless', 'security', 'how-to' ] -status: in review ---- - - -
-

Using the ((security-app)), you can tune prebuilt and custom detection rules to optimize alert generation. To reduce noise, you can:

* Add exceptions to detection rules.

    Using exceptions is recommended as this ensures excluded source event values persist even after prebuilt rules are updated.

* Disable detection rules that rarely produce actionable alerts because they match expected local behavior, workflows, or policy exceptions.

* Clone and modify detection rule queries so they are aligned with local policy exceptions. This reduces noise while retaining actionable alerts.

* Clone and modify detection rule risk scores, and use branching logic to map higher risk scores to higher priority workflows.

* Enable alert suppression for custom query rules to reduce the number of repeated or duplicate alerts.

For details about tuning rules for specific categories:

* Tune rules detecting authorized processes
* Tune Windows child process and PowerShell rules
* Tune network rules
* Tune indicator match rules
- -## Filter out uncommon application alerts - -Organizations frequently use uncommon and in-house applications. Occasionally, -these can trigger unwanted alerts. To stop a rule matching on an application, -add an exception for the required application. - -{/* Links to prebuilt rule pages temporarily removed for initial serverless docs. */} -{/* NOTE: Links to prebuilt rules will break if the rule is deprecated. Link to a different rule or remove the broken link. */} -For example, to prevent the **Unusual Process Execution Path - Alternate Data Stream** rule from -producing alerts for an in-house application named `myautomatedbuild`: - -1. Go to **Rules** → **Detection rules (SIEM)**. -1. Search for and then click on the **Unusual Process Execution Path - Alternate Data Stream** rule. - - The **Unusual Process Execution Path - Alternate Data Stream** rule details page is displayed. - ![Rule details page](../images/tuning-detection-signals/-detections-prebuilt-rules-rule-details-page.png) - -1. Select the **Rule exceptions** tab, then click **Add rule exception**. -1. Fill in these options: - * **Field**: `process.name` - * **Operator**: `is` - * **Value**: `myautomatedbuild` - - ![Add Rule Exception UI](../images/tuning-detection-signals/-detections-prebuilt-rules-process-exception.png) - -1. Click **Add rule exception**. - -
-

## Tune rules detecting authorized processes

Authorized security testing, system tools, management frameworks, and administrative activity may trigger detection rules. These legitimate activities include:

* Authorized security research.
* System and software management processes running scripts, including scripts that start child processes.

* Administrative and management frameworks that create users, schedule tasks, make `psexec` connections, and run WMI commands.

* Legitimate scripts using the `whoami` command.
* Applications that work with file shares, such as backup programs, and use the server message block (SMB) protocol.

To reduce noise for authorized activity, you can do any of these:

* Add an exception to the rules that exclude specific servers, such as the relevant host names, agent names, or other common identifiers. For example, `host.name is <hostname>`.

* Add an exception to the rules that exclude specific processes. For example, `process.name is <process name>`.

* Add an exception to the rules that exclude a common user. For example, `user.name is <username>`.

Another useful technique is to assign lower risk scores to rules triggered by authorized activity. This enables detections while keeping the resulting alerts out of high-priority workflows. Use these steps:

1. Before adding exceptions, duplicate the prebuilt rule.
1. Add an exception to the original prebuilt rule that excludes the relevant user or process name (`user.name is <username>` or `process.name is "process-name"`).

1. Edit the duplicated rule as follows:
    * Lower the `Risk score` (**Edit rule settings** → **About** tab).
    * Add an exception so the rule only matches the user or process name excluded in the original prebuilt rule (`user.name is not <username>` or `process.name is not <process name>`).

        ![Example of `is not` exception in the Add Rule Exception UI](../images/tuning-detection-signals/-detections-prebuilt-rules-process-specific-exception.png)

1. Click **Add rule exception**.
- -## Tune Windows child process and PowerShell rules - -Normal user activity may sometimes trigger one or more of these rules: - -{/* Links to prebuilt rule pages temporarily removed for initial serverless docs. */} -{/* NOTE: Links to prebuilt rules will break if the rule is deprecated. Link to a different rule or remove the broken link. */} -* **Suspicious MS Office Child Process** -* **Suspicious MS Outlook Child Process** -* **System Shells via Services** -* **Unusual Parent-Child Relationship** -* **Windows Script Executing PowerShell** - -While all rules can be adjusted as needed, use care when adding exceptions to -these rules. Exceptions could result in an undetected client-side execution, or -a persistence or malware threat going unnoticed. - -Examples of when these rules may create noise include: - -* Receiving and opening email-attached Microsoft Office files, which - include active content such as macros or scripts, from a trusted third-party - source. - -* Authorized technical support personnel who provide remote workers with - scripts to gather troubleshooting information. - -In these cases, exceptions can be added to the rules using the relevant -`process.name`, `user.name`, and `host.name` conditions. Additionally, -you can create duplicate rules with lower risk scores. - -
-

## Tune network rules

The definition of normal network behavior varies widely across different organizations. Different networks conform to different security policies, standards, and regulations. When normal network activity triggers alerts, network rules can be disabled or modified. For example:

* To exclude a specific source, add a `source.ip` exception with the relevant IP address, and a `destination.port` exception with the relevant port number (`source.ip is 196.1.0.12` and `destination.port is 445`).

* To exclude source network traffic for an entire subnet, add a `source.ip` exception with the relevant CIDR notation (`source.ip is 192.168.0.0/16`).

* To exclude a destination IP for a specific destination port, add a `destination.ip` exception with the IP address, and a `destination.port` exception with the port number (`destination.ip is 38.160.150.31` and `destination.port is 445`).

* To exclude a destination subnet for a specific destination port, add a `destination.ip` exception using CIDR notation, and a `destination.port` exception with the port number (`destination.ip is 172.16.0.0/12` and `destination.port is 445`).
-

## Tune indicator match rules

Take the following steps to tune indicator match rules:

* Specify a detailed query as part of the indicator index query. Results of the indicator index query are used by the detection engine to query the indices specified in your rule definition's index pattern. Using no query or the wildcard `*:*` query may result in your rule executing very large queries.
* Limit your rule's additional look-back time to as short a duration as possible, and no more than 24 hours.
* Avoid cluster performance issues by scheduling your rule to run in one-hour intervals or longer. For example, avoid scheduling an indicator match rule to check for indicators every five minutes.

((elastic-sec)) provides limited support for indicator match rules. Visit support limitations for more information.

### Noise from common cloud-based network traffic

In cloud-based organizations, remote workers sometimes access services over the internet. The security policies of home networks probably differ from the security policies of managed corporate networks, and these rules might need tuning to reduce noise from legitimate administrative activities:

{/* Links to prebuilt rule pages temporarily removed for initial serverless docs. */}
{/* NOTE: Links to prebuilt rules will break if the rule is deprecated. Link to a different rule or remove the broken link. */}
* **RDP (Remote Desktop Protocol) from the Internet**

If your organization is widely distributed and the workforce travels a lot, use the `windows_anomalous_user_name_ecs`, `linux_anomalous_user_name_ecs`, and `suspicious_login_activity_ecs` ((ml)) jobs to detect suspicious authentication activity.
-

Value lists hold multiple values of the same ((es)) data type, such as IP addresses, which are used to determine when an exception prevents an alert from being generated. You can use value lists to define exceptions for detection rules; however, you cannot use value lists to define endpoint rule exceptions.

Value lists are lists of items with the same ((es)) [data type](((ref))/mapping-types.html). You can create value lists with these types:

* `Keywords` (many [ECS fields](((ecs-ref))/ecs-field-reference.html) are keywords)
* `IP Addresses`
* `IP Ranges`
* `Text`

After creating value lists, you can use `is in list` and `is not in list` operators to define exceptions.

You can also use a value list as the indicator match index when creating an indicator match rule.
- -## Create value lists - -When you create a value list for a rule exception, be mindful of the list's size and data type. All rule types support value list exceptions, but extremely large lists or certain data types have limitations. - -Custom query, machine learning, and indicator match rules support the following value list types and sizes: -* **Keywords** or **IP addresses** list types with more than 65,536 values -* **IP ranges** list type with over 200 dash notation values (for example, `127.0.0.1-127.0.0.4` is one value) or more than 65,536 CIDR notation values - -To create a value list: - -1. Prepare a `txt` or `csv` file with all the values you want to use for - determining exceptions from a single list. If you use a `txt` file, new lines - act as delimiters. - - - - * All values in the file must be of the same ((es)) type. - - * Wildcards are not supported in value lists. Values must be literal values. - - * The maximum accepted file size is 9 million bytes. - - - -1. Go to **Rules** → **Detection rules (SIEM)**. -1. Click **Manage value lists**. The **Manage value lists** window opens. - - - -1. Select the list type (**Keywords**, **IP addresses**, **IP ranges**, or **Text**) from the **Type of value list** drop-down. -1. Drag or select the `csv` or `txt` file that contains the values. -1. Click **Import value list**. - - -If you import a file with a name that already exists, a new list is not created. The imported values are added to the existing list instead. - - -
- -## Manage value lists - -You can edit, remove, or export existing value lists. - -
- -### Edit value lists - -1. Go to **Rules** → **Detection rules (SIEM)**. -1. Click **Manage value lists**. The **Manage value lists** window opens. -1. In the **Value lists** table, click the value list you want to edit. -1. Do any of the following: - * **Filter items in the list**: Use the KQL search bar to find values in the list. Depending on your list's type, you can filter by the `keyword`, `ip_range`, `ip`, or `text` fields. For example, to filter by Gmail addresses in a value list of the `keyword` type, enter `keyword:*gmail.com` into the search bar. - - You can also filter by the `updated_by` field (for example, `updated_by:testuser`), or the `updated at` field (for example, `updated_at < now`). - * **Add individual items to the list**: Click **Create list item**, enter a value, then click **Add list item**. - * **Bulk upload list items**: Drag or select the `csv` or `txt` file that contains the values that you want to add, then click **Upload**. - * **Edit a value**: In the Value column, go to the value you want to edit and click the **Edit** button (). When you're done editing, click the **Save** button () to save your changes. Click the **Cancel** button () to revert your changes. - * **Remove a value**: Click the **Remove value** button () to delete a value from the list. - - - - -You can also edit value lists while creating and managing exceptions that use value lists. - - -
- -### Export or remove value lists - -1. Go to **Rules** → **Detection rules (SIEM)**. -1. Click **Manage value lists**. The **Manage value lists** window opens. -1. From the **Value lists** table, you can: - * Click the **Export value list** button () to export the value list. - * Click the **Remove value list** button () to delete the value list. - - \ No newline at end of file diff --git a/docs/serverless/security-overview.mdx b/docs/serverless/security-overview.mdx deleted file mode 100644 index d2dba23d7a..0000000000 --- a/docs/serverless/security-overview.mdx +++ /dev/null @@ -1,48 +0,0 @@ ---- -slug: /serverless/security/overview -title: ((elastic-sec)) overview -# description: Description to be written -tags: [ 'serverless', 'security', 'reference' ] ---- - - -
- -((elastic-sec)) combines threat detection analytics, cloud native security, and endpoint protection capabilities in a single solution, so you can quickly detect, investigate, and respond to threats and vulnerabilities across your environment. - -((elastic-sec)) provides: - -* A detection engine that identifies a wide range of threats -* A workspace for event triage, investigation, and case management -* Interactive data visualization tools -* Integrations for collecting data from various sources - -
- -## Learn more - -* ((elastic-sec)) UI overview: Navigate ((elastic-sec))'s various tools and interfaces. -* Detection rules: Use ((elastic-sec))'s detection engine with custom and prebuilt rules. -* Cloud native security: Enable cloud native security capabilities such as Cloud and Kubernetes security posture management, cloud native vulnerability management, and cloud workload protection for Kubernetes and VMs. -* Install ((elastic-defend)): Enable key endpoint protection capabilities like event collection and malicious activity prevention. -* [((ml-cap))](https://www.elastic.co/products/stack/machine-learning): Enable built-in ((ml)) tools to help you identify malicious behavior. -* Advanced entity analytics: Leverage ((elastic-sec))'s detection engine and ((ml)) capabilities to generate comprehensive risk analytics for hosts and users. -* Elastic AI Assistant: Ask AI Assistant questions about how to use ((elastic-sec)), how to understand particular alerts and other documents, and how to write ((esql)) queries. - -
- -## ((es)) and ((kib)) - -((elastic-sec)) uses ((es)) for data storage, management, and search, and ((kib)) is its main user interface. Learn more: - -* [((es))](https://www.elastic.co/products/elasticsearch): A real-time, -distributed storage, search, and analytics engine. ((elastic-sec)) stores your data using ((es)). -* [((kib))](https://www.elastic.co/products/kibana): An open-source analytics and -visualization platform designed to work with ((es)) and ((elastic-sec)). ((kib)) allows you to search, -view, analyze and visualize data stored in ((es)) indices. - -
- -### ((elastic-endpoint)) self-protection - -For information about ((elastic-endpoint))'s tamper-protection features, refer to . diff --git a/docs/serverless/security-ui.mdx b/docs/serverless/security-ui.mdx deleted file mode 100644 index 2ce642d5a5..0000000000 --- a/docs/serverless/security-ui.mdx +++ /dev/null @@ -1,242 +0,0 @@ ---- -slug: /serverless/security/security-ui -title: Elastic Security UI -# description: Description to be written -tags: [ 'serverless', 'security', 'reference' ] -status: in review ---- - - -
- -The ((security-app)) is a highly interactive workspace designed for security analysts that provides a clear overview of events and alerts from your environment. You can use the interactive UI to drill down into areas of interest. - -
-
-## Search
-
-Filter for alerts, events, processes, and other important security data by entering [((kib)) Query Language (KQL)](((kibana-ref))/kuery-query.html) queries in the search bar, which appears at the top of each page throughout the app. A date/time filter set to `Today` is enabled by default, but can be changed to any time range.
-
-![](images/es-ui-overview/-getting-started-search-bar.png)
-
-* To refine your search results, select **Add Filter**, then enter the field, operator (such as `is not` or `is between`), and value for your filter.
-
-* To save the current KQL query and any applied filters, select **Saved query menu**, enter a name for the saved query, and select **Save saved query**.
-
-## Navigation menu
-
-The navigation menu contains direct links and expandable groups, identified by the group icon.
-
-* Click a top-level link to go directly to its landing page, which contains links and information for related pages.
-
-* Click a group's icon to open its flyout menu, which displays links to related pages within that group. Click a link in the flyout to navigate to its landing page.
-
-* Click the **Collapse side navigation** icon to collapse and expand the main navigation menu.
-
-{/* Hiding this as short-term fix for serverless; consider creating a serverless version of the image? */}
-{/* ![Overview of the navigation menu](images/es-ui-overview/-getting-started-nav-overview.gif) */}
-
-
-## Visualization actions
-
-Many ((elastic-sec)) histograms, graphs, and tables display an **Inspect** button when you hover over them. Click it to examine the ((es)) queries used to retrieve data throughout the app.
-
-Other visualizations display an options menu, which allows you to inspect the visualization's queries, add it to a new or existing case, or open it in Lens for customization.
-
-
-## Inline actions for fields and values
-
-Throughout the ((security-app)), you can hover over many data fields and values to display inline actions, which allow you to customize your view or investigate further based on that field or value.
-
-In some visualizations, these actions are available in the legend by clicking a value's options icon.
-
-Inline actions include the following (some actions are unavailable in some contexts):
-
-* **Filter In**: Add a filter that includes the selected value.
-* **Filter Out**: Add a filter that excludes the selected value.
-* **Add to timeline**: Add a filter to Timeline for the selected value.
-* **Toggle column in table**: Add or remove the selected field as a column in the alerts or events table. (This action is only available on an alert's or event's details flyout.)
-* **Show top _x_**: Display a pop-up window that shows the selected field's top events or detection alerts.
-* **Copy to Clipboard**: Copy the selected field-value pair to paste elsewhere.
-
-## ((security-app)) pages
-
-The ((security-app)) contains the following pages that enable analysts to view, analyze, and manage security data.
-
-### Discover
-
-Use the Discover UI to filter your data or learn about its structure.
-
-### Dashboards
-
-Expand this section to access the Overview, Detection & Response, Kubernetes, Cloud Security Posture, Cloud Native Vulnerability Management, and Entity Analytics dashboards, which provide interactive visualizations that summarize your data. You can also create and view custom dashboards. Refer to Dashboards for more information.
-
-![The dashboards landing page, 75%](images/es-ui-overview/-dashboards-dashboards-landing-page.png)
-
-### Rules
-
-Expand this section to access the following pages:
-
-* **Rules**: Create and manage rules to monitor suspicious events.
-
-  ![Rules page](images/es-ui-overview/-detections-all-rules.png)
-
-* **Benchmark Rules**: View, enable, or disable benchmark rules.
-
-  ![Benchmark Rules page](images/es-ui-overview/-cloud-native-security-benchmark-rules.png)
-
-* **Shared Exception Lists**: View and manage rule exceptions and shared exception lists.
-
-  ![Shared Exception Lists page](images/es-ui-overview/-detections-rule-exceptions-page.png)
-
-* **MITRE ATT&CK® coverage**: Review your coverage of MITRE ATT&CK® tactics and techniques, based on installed rules.
-
-  ![MITRE ATT&CK® coverage page](images/es-ui-overview/-detections-rules-coverage.png)
-
-### Alerts
-
-View and manage alerts to monitor activity within your network. Refer to Alerts for more information.
-
-![](images/es-ui-overview/-detections-alert-page.png)
-
-### Findings
-
-Identify misconfigurations and vulnerabilities in your cloud infrastructure. For setup instructions, refer to the CSPM, KSPM, or Cloud Native Vulnerability Management getting started guides.
-
-![Findings page](images/findings-page/-cloud-native-security-findings-page.png)
-
-### Cases
-
-Open and track security issues. Refer to Cases to learn more.
-
-![Cases page](images/es-ui-overview/-cases-cases-home-page.png)
-
-### Investigations
-
-Expand this section to access the following pages:
-
-* Timelines: Investigate alerts and complex threats — such as lateral movement — in your network. Timelines are interactive and allow you to share your findings with other team members.
-
-  ![Timeline page](images/es-ui-overview/-events-timeline-ui.png)
-
-  Click the **Timeline** button at the bottom of the ((security-app)) to start an investigation.
-
-* Osquery: Deploy Osquery with ((agent)), then run and schedule queries.
-
-
-### Intelligence
-
-The Intelligence section contains the Indicators page, which collects data from enabled threat intelligence feeds and provides a centralized view of indicators of compromise (IoCs). Refer to Indicators of compromise to learn more.
-
-![Indicators page](images/es-ui-overview/-cases-indicators-table.png)
-
-### Explore
-
-Expand this section to access the following pages:
-
-* **Hosts**: Examine key metrics for host-related security events using graphs, charts, and interactive data tables.
-
-  ![Hosts page](images/es-ui-overview/-management-hosts-hosts-ov-pg.png)
-
-* **Network**: Explore the interactive map to discover key network activity metrics and investigate network events further in Timeline.
-
-  ![Network page](images/es-ui-overview/-getting-started-network-ui.png)
-
-* **Users**: Access a comprehensive overview of user data to help you understand authentication and user behavior within your environment.
-
-  ![Users page](images/es-ui-overview/-getting-started-users-users-page.png)
-
-### Assets
-
-The Assets section allows you to manage the following features:
-
-* [((fleet))](((fleet-guide))/manage-agents-in-fleet.html)
-* [((integrations))](((fleet-guide))/integrations.html)
-* Endpoint protection
-  * Endpoints: View and manage hosts running ((elastic-defend)).
-  * Policies: View and manage ((elastic-defend)) integration policies.
-  * Trusted applications: View and manage trusted Windows, macOS, and Linux applications.
-  * Event filters: View and manage event filters, which allow you to filter out endpoint events that you don't want stored in ((es)).
-  * Host isolation exceptions: View and manage host isolation exceptions, which specify IP addresses that can communicate with your hosts even when those hosts are blocked from your network.
-  * Blocklist: View and manage the blocklist, which allows you to prevent specified applications from running on hosts, extending the list of processes that ((elastic-defend)) considers malicious.
-  * Response actions history: Find the history of response actions performed on hosts.
-* Cloud security
-  * Container Workload Protection: Identify and block unexpected system behavior in Kubernetes containers.
-
-### ((ml-cap))
-
-Manage ((ml)) jobs and settings. Refer to [((ml-cap)) docs](((ml-docs))/ml-ad-overview.html) for more information.
-
-### Get started
-
-Quickly add security integrations that can ingest data and monitor your hosts.
-
-### Project settings
-
-Configure project-wide settings related to users, billing, data management, and more.
-
-### Dev tools
-
-Use additional API and analysis tools to interact with your data.
-
- -## Accessibility features - -Accessibility features, such as keyboard focus and screen reader support, are built into the Elastic Security UI. These features offer additional ways to navigate the UI and interact with the application. - -
- -### Interact with draggable elements - -Use your keyboard to interact with draggable elements in the Elastic Security UI: - -* Press the `Tab` key to apply keyboard focus to an element within a table. Or, use your mouse to click on an element and apply keyboard focus to it. - - - -* Press `Enter` on an element with keyboard focus to display its menu and press `Tab` to apply focus sequentially to menu options. The `f`, `o`, `a`, `t`, `c` hotkeys are automatically enabled during this process and offer an alternative way to interact with menu options. - - - -* Press the spacebar once to begin dragging an element to a different location and press it a second time to drop it. Use the directional arrows to move the element around the UI. - - - -* If an event has an event renderer, press the `Shift` key and the down directional arrow to apply keyboard focus to the event renderer and `Tab` or `Shift` + `Tab` to navigate between fields. To return to the cells in the current row, press the up directional arrow. To move to the next row, press the down directional arrow. - - - -
- -### Navigate the Elastic Security UI -Use your keyboard to navigate through rows, columns, and menu options in the Elastic Security UI: - -* Use the directional arrows to move keyboard focus right, left, up, and down in a table. - - - -* Press the `Tab` key to navigate through a table cell with multiple elements, such as buttons, field names, and menus. Pressing the `Tab` key will sequentially apply keyboard focus to each element in the table cell. - - - -* Use `CTRL + Home` to shift keyboard focus to the first cell in a row. Likewise, use `CTRL + End` to move keyboard focus to the last cell in the row. - - - -* Use the `Page Up` and `Page Down` keys to scroll through the page. - - diff --git a/docs/serverless/serverless-security.docnav.json b/docs/serverless/serverless-security.docnav.json deleted file mode 100644 index 2cc9aa391b..0000000000 --- a/docs/serverless/serverless-security.docnav.json +++ /dev/null @@ -1,698 +0,0 @@ -{ - "mission": "Elastic Security", - "id": "serverless-security", - "landingPageSlug": "/serverless/security/what-is-security-serverless", - "icon": "logoSecurity", - "description": "Description to be written", - "items": [ - { - "slug": "/serverless/security/overview", - "classic-sources": [ "enSecurityEsOverview" ] - }, - { - "slug": "/serverless/security/security-billing" - }, - { - "slug": "/serverless/security/create-project" - }, - { - "slug": "/serverless/security/security-ui", - "classic-sources": [ "enSecurityEsUiOverview" ] - }, - { - "label": "AI for security", - "slug": "/serverless/security/ai-for-security", - "items": [ - { - "slug": "/serverless/security/ai-assistant" - }, - { - "slug": "/serverless/security/attack-discovery" - }, - { - "slug": "/serverless/security/llm-connector-guides", - "items": [ - { - "slug": "/serverless/security/llm-performance-matrix" - }, - { - "slug": "/serverless/security/connect-to-azure-openai" - }, - { - "slug": "/serverless/security/connect-to-bedrock" - }, - { - "slug": "/serverless/security/connect-to-openai" - }, - { - "slug": "/serverless/security/connect-to-google-vertex" - }, - { - "slug": "/serverless/security/connect-to-byo-llm" - } - ] - }, - { - "slug": "/serverless/security/ai-use-cases", - "items": [ - { - "slug": "/serverless/security/ai-usecase-incident-reporting" - }, - { - "slug": "/serverless/security/triage-alerts-with-elastic-ai-assistant" - }, - { - "slug": "/serverless/security/ai-assistant-esql-queries" - } - ] - } - ] - }, - { - "label": "Ingest data", - "slug": "/serverless/security/ingest-data", - "classic-sources": [ "enSecurityIngestData" ], - "items": [ - { - "slug": "/serverless/security/threat-intelligence", - "classic-sources": [ "enSecurityEsThreatIntelIntegrations" ] - }, - { - "slug": "/serverless/security/automatic-import" - } - ] - }, - { - "slug": "/serverless/security/endpoint-protection-intro", - "items": [ - { - "slug": "/serverless/security/elastic-endpoint-deploy-reqs", - "classic-sources": [ "enSecurityElasticEndpointDeployReqs" ] - }, - { - "label": "Install Elastic Defend", - "slug": "/serverless/security/install-edr", - "classic-sources": [ "enSecurityInstallEndpoint" ], - "items": [ - { - "slug": "/serverless/security/install-endpoint-manually", - "classic-sources": [ "enSecurityDeployElasticEndpoint" ] - }, - { - "slug": "/serverless/security/deploy-elastic-endpoint-ven", - "classic-sources": [ "enSecurityDeployElasticEndpointVen" ] - }, - { - "label": "Deploy on macOS with MDM", - "slug": "/serverless/security/deploy-with-mdm" - }, - { - "slug": 
"/serverless/security/agent-tamper-protection" - } - ] - }, - { - "slug": "/serverless/security/configure-endpoint-integration-policy", - "classic-sources": [ "enSecurityConfigureEndpointIntegrationPolicy" ], - "items": [ - { - "label": "Configure protection updates", - "slug": "/serverless/security/protection-artifact-control" - }, - { - "slug": "/serverless/security/endpoint-diagnostic-data", - "classic-sources": [ "enSecurityEndpointDiagnosticData" ] - }, - { - "label": "Self-healing rollback (Windows)", - "slug": "/serverless/security/self-healing-rollback", - "classic-sources": [ "enSecuritySelfHealingRollback" ] - }, - { - "label": "File system monitoring (Linux)", - "slug": "/serverless/security/linux-file-monitoring", - "classic-sources": [ "enSecurityLinuxFileMonitoring" ] - }, - { - "label": "Configure data volume", - "slug": "/serverless/security/endpoint-data-volume" - } - ] - }, - { - "slug": "/serverless/security/uninstall-agent" - } - ] - }, - { - "slug": "/serverless/security/manage-endpoint-protection", - "classic-sources": [ "enSecuritySecManageIntro" ], - "items": [ - { - "slug": "/serverless/security/endpoints-page", - "classic-sources": [ "enSecurityAdminPageOv" ] - }, - { - "slug": "/serverless/security/policies-page", - "classic-sources": [ "enSecurityPoliciesPageOv" ] - }, - { - "slug": "/serverless/security/trusted-applications", - "classic-sources": [ "enSecurityTrustedAppsOv" ] - }, - { - "slug": "/serverless/security/event-filters", - "classic-sources": [ "enSecurityEventFilters" ] - }, - { - "slug": "/serverless/security/host-isolation-exceptions", - "classic-sources": [ "enSecurityHostIsolationExceptions" ] - }, - { - "slug": "/serverless/security/blocklist", - "classic-sources": [ "enSecurityBlocklist" ] - }, - { - "slug": "/serverless/security/optimize-edr", - "classic-sources": [ "enSecurityEndpointArtifacts" ] - }, - { - "slug": "/serverless/security/endpoint-event-capture" - }, - { - "slug": "/serverless/security/allowlist-endpoint" - }, - { - "slug": "/serverless/security/endpoint-self-protection" - }, - { - "slug": "/serverless/security/endpoint-command-ref" - } - ] - }, - { - "slug": "/serverless/security/response-actions", - "classic-sources": [ "enSecurityResponseActions" ], - "items": [ - { - "slug": "/serverless/security/automated-response-actions" - }, - { - "slug": "/serverless/security/isolate-host", - "classic-sources": [ "enSecurityHostIsolationOv" ] - }, - { - "slug": "/serverless/security/response-actions-history", - "classic-sources": [ "enSecurityResponseActionsHistory" ] - }, - { - "slug": "/serverless/security/third-party-actions" - }, - { - "slug": "/serverless/security/response-actions-config" - } - ] - }, - { - "slug": "/serverless/security/cloud-native-security-overview", - "classic-sources": [ "enSecurityCloudNativeSecurityOverview" ], - "items": [ - { - "slug": "/serverless/security/security-posture-management", - "classic-sources": [ "enSecuritySecurityPostureManagement" ] - }, - { - "slug": "/serverless/security/enable-cloudsec" - }, - { - "slug": "/serverless/security/cspm", - "classic-sources": [ "enSecurityCspm" ], - "items": [ - { - "slug": "/serverless/security/cspm-get-started", - "classic-sources": [ "enSecurityCspmGetStarted" ] - }, - { - "slug": "/serverless/security/cspm-get-started-gcp", - "classic-sources": [ "enSecurityCspmGetStartedGcp" ] - }, - { - "slug": "/serverless/security/cspm-get-started-azure", - "classic-sources": [ "enSecurityCspmGetStartedAzure" ] - }, - { - "slug": 
"/serverless/security/cspm-findings-page", - "classic-sources": [ "enSecurityCspmFindingsPage" ] - }, - { - "slug": "/serverless/security/benchmark-rules", - "classic-sources": [ "enSecurityCspmBenchmarkRules" ] - }, - { - "slug": "/serverless/security/cloud-posture-dashboard-dash", - "classic-sources": [ "enSecurityCloudPostureDashboard" ] - }, - { - "slug": "/serverless/security/cspm-security-posture-faq", - "classic-sources": [ "enSecurityCspmSecurityPostureFaq" ] - } - ] - }, - { - "slug": "/serverless/security/kspm", - "classic-sources": [ "enSecurityKspm" ], - "items": [ - { - "slug": "/serverless/security/get-started-with-kspm", - "classic-sources": [ "enSecurityGetStartedWithKspm" ] - }, - { - "slug": "/serverless/security/cspm-findings-page", - "classic-sources": [ "enSecurityCspmFindingsPage" ] - }, - { - "slug": "/serverless/security/benchmark-rules", - "classic-sources": [ "enSecurityBenchmarkRules" ] - }, - { - "slug": "/serverless/security/cloud-posture-dashboard-dash", - "classic-sources": [ "enSecurityCloudPostureDashboard" ] - }, - { - "slug": "/serverless/security/security-posture-faq", - "classic-sources": [ "enSecuritySecurityPostureFaq" ] - } - ] - }, - { - "slug": "/serverless/security/vuln-management-overview", - "classic-sources": [ "enSecurityVulnManagementOverview" ], - "items": [ - { - "slug": "/serverless/security/vuln-management-get-started", - "classic-sources": [ "enSecurityVulnManagementGetStarted" ] - }, - { - "slug": "/serverless/security/vuln-management-findings", - "classic-sources": [ "enSecurityVulnManagementFindings" ] - }, - { - "slug": "/serverless/security/vuln-management-dashboard-dash", - "classic-sources": [ "ensSecurityVulnManagementDashboardDash" ] - }, - { - "slug": "/serverless/security/vuln-management-faq", - "classic-sources": [ "enSecurityVulnManagementFaq" ] - } - ] - }, - { - "slug": "/serverless/security/d4c-overview", - "classic-sources": [ "enSecurityD4cOverview" ], - "items": [ - { - "slug": "/serverless/security/d4c-get-started", - "classic-sources": [ "enSecurityD4cGetStarted" ] - }, - { - "slug": "/serverless/security/d4c-policy-guide", - "classic-sources": [ "enSecurityD4cPolicyGuide" ] - }, - { - "slug": "/serverless/security/kubernetes-dashboard-dash", - "classic-sources": [ "enSecurityKubernetesDashboard" ] - } - ] - }, - { - "slug": "/serverless/security/cloud-workload-protection", - "classic-sources": [ "enSecurityCloudWorkloadProtection" ], - "items": [ - { - "slug": "/serverless/security/environment-variable-capture", - "classic-sources": [ "enSecurityEnvironmentVariableCapture" ] - } - ] - } - ] - }, - { - "slug": "/serverless/security/explore-your-data", - "classic-sources": [ "enSecurityExploreYourData" ], - "items": [ - { - "slug": "/serverless/security/hosts-overview", - "classic-sources": [ "enSecurityHostsOverview" ] - }, - { - "slug": "/serverless/security/network-page-overview", - "classic-sources": [ "enSecurityNetworkPageOverview" ] - }, - { - "slug": "/serverless/security/users-page", - "classic-sources": [ "enSecurityUsersPage" ] - }, - { - "slug": "/serverless/security/data-views-in-sec", - "classic-sources": [ "enSecurityDataViewsInSec" ] - }, - { - "label": "Create runtime fields", - "slug": "/serverless/security/runtime-fields", - "classic-sources": [ "enSecurityRuntimeFields" ] - }, - { - "slug": "/serverless/security/siem-field-reference", - "classic-sources": [ "enSecuritySiemFieldReference" ] - } - ] - }, - { - "slug": "/serverless/security/dashboards-overview", - "classic-sources": [ 
"enSecurityDashboardsOverview" ], - "items": [ - { - "label": "Overview", - "slug": "/serverless/security/overview-dashboard", - "classic-sources": [ "enSecurityOverviewDashboard" ] - }, - { - "label": "Detection & Response", - "slug": "/serverless/security/detection-response-dashboard", - "classic-sources": [ "enSecurityDetectionResponseDashboard" ] - }, - { - "label": "Kubernetes", - "slug": "/serverless/security/kubernetes-dashboard-dash", - "classic-sources": [ "enSecurityKubernetesDashboard" ] - }, - { - "label": "Cloud Security Posture", - "slug": "/serverless/security/cloud-posture-dashboard-dash", - "classic-sources": [ "enSecurityCloudPostureDashboard" ] - }, - { - "label": "Entity Analytics", - "slug": "/serverless/security/detection-entity-dashboard", - "classic-sources": [ "enSecurityDetectionEntityDashboard" ] - }, - { - "label": "Data Quality", - "slug": "/serverless/security/data-quality-dash" - }, - { - "label": "Cloud Native Vulnerability Management", - "slug": "/serverless/security/vuln-management-dashboard-dash", - "classic-sources": [ "ensSecurityVulnManagementDashboardDash" ] - }, - { - "label": "Detection rule monitoring", - "slug": "/serverless/security/rule-monitoring-dashboard", - "classic-sources": [ "enSecurityRuleMonitoringDashboard" ] - } ] - }, - { - "slug": "/serverless/security/detection-engine-overview", - "classic-sources": [ "enSecurityDetectionEngineOverview" ] - }, - { - "label": "Rules", - "slug": "/serverless/security/about-rules", - "classic-sources": [ "enSecurityAboutRules" ], - "items": [ - { - "slug": "/serverless/security/rules-create", - "classic-sources": [ "enSecurityRulesUiCreate" ], - "items": [ - { - "slug": "/serverless/security/interactive-investigation-guides", - "classic-sources": [ "enSecurityInteractiveInvestigationGuides" ] - }, - { - "slug": "/serverless/security/building-block-rules", - "classic-sources": [ "enSecurityBuildingBlockRule" ] - } - ] - }, - { - "label": "Use Elastic prebuilt rules", - "slug": "/serverless/security/prebuilt-rules-management", - "classic-sources": [ "enSecurityPrebuiltRulesManagement" ] - }, - { - "slug": "/serverless/security/rules-ui-management", - "classic-sources": [ "enSecurityRulesUiManagement" ] - }, - { - "slug": "/serverless/security/alerts-ui-monitor", - "classic-sources": [ "enSecurityAlertsUiMonitor" ] - }, - { - "slug": "/serverless/security/rule-exceptions", - "classic-sources": [ "enSecurityDetectionsUiExceptions" ], - "items": [ - { - "slug": "/serverless/security/value-lists-exceptions", - "classic-sources": [ "enSecurityValueListsExceptions" ] - }, - { - "slug": "/serverless/security/add-exceptions", - "classic-sources": [ "enSecurityAddExceptions" ] - }, - { - "slug": "/serverless/security/shared-exception-lists", - "classic-sources": [ "enSecuritySharedExceptionLists" ] - } - ] - }, - { - "slug": "/serverless/security/rules-coverage", - "classic-sources": [ "enSecurityRulesCoverage" ] - }, - { - "slug": "/serverless/security/tune-detection-signals", - "classic-sources": [ "enSecurityTuningDetectionSignals" ] - }, - { - "slug": "/serverless/security/prebuilt-rules", - "classic-sources": [ "enSecurityPrebuiltRules" ], - "classic-skip": true - } - ] - }, - { - "label": "Alerts", - "slug": "/serverless/security/alerts-manage", - "classic-sources": [ "enSecurityAlertsUiManage" ], - "items": [ - { - "label": "Visualize alerts", - "slug": "/serverless/security/visualize-alerts", - "classic-sources": [ "enSecurityVisualizeAlerts" ] - }, - { - "label": "View alert details", - "slug": 
"/serverless/security/view-alert-details", - "classic-sources": [ "enSecurityViewAlertDetails" ] - }, - { - "label": "Add alerts to cases", - "slug": "/serverless/security/signals-to-cases", - "classic-sources": [ "enSecuritySignalsToCases" ] - }, - { - "label": "Suppress alerts", - "slug": "/serverless/security/alert-suppression", - "classic-sources": [ "enSecurityAlertSuppression" ] - }, - { - "slug": "/serverless/security/reduce-notifications-alerts", - "classic-sources": [ "enSecurityReduceNotificationsAlerts" ] - }, - { - "slug": "/serverless/security/query-alert-indices", - "classic-sources": [ "enSecurityQueryAlertIndices" ] - }, - { - "slug": "/serverless/security/alert-schema", - "classic-sources": [ "enSecurityAlertSchema" ] - } - ] - }, - { - "label": "Advanced Entity Analytics", - "slug": "/serverless/security/advanced-entity-analytics", - "items": [ - { - "label": "Entity risk scoring", - "slug": "/serverless/security/entity-risk-scoring", - "items": [ - { - "label": "Asset criticality", - "slug": "/serverless/security/asset-criticality" - }, - { - "label": "Turn on risk scoring", - "slug": "/serverless/security/turn-on-risk-engine" - }, - { - "label": "View risk score data", - "slug": "/serverless/security/analyze-risk-score-data" - } - ] - }, - { - "label": "Advanced behavioral detections", - "slug": "/serverless/security/advanced-behavioral-detections", - "items": [ - { - "slug": "/serverless/security/machine-learning", - "classic-sources": [ "enSecurityMachineLearning" ] - }, - { - "slug": "/serverless/security/tuning-anomaly-results", - "classic-sources": [ "enSecurityTuningAnomalyResults" ] - }, - { - "slug": "/serverless/security/behavioral-detection-use-cases" - }, - { - "slug": "/serverless/security/prebuilt-ml-jobs", - "classic-sources": [ "enSecurityPrebuiltMlJobs" ] - } - ] - } - ] - }, - { - "slug": "/serverless/security/investigate-events", - "classic-sources": [ "enSecurityInvestigateEvents" ], - "items": [ - { - "slug": "/serverless/security/timelines-ui", - "classic-sources": [ "enSecurityTimelinesUi" ], - "items": [ - { - "slug": "/serverless/security/timeline-templates-ui", - "classic-sources": [ "enSecurityTimelineTemplatesUi" ] - }, - { - "slug": "/serverless/security/timeline-object-schema", - "classic-sources": [ "enSecurityTimelineObjectSchema" ] - } - ] - }, - { - "slug": "/serverless/security/visual-event-analyzer", - "classic-sources": [ "enSecurityVisualEventAnalyzer" ] - }, - { - "slug": "/serverless/security/session-view", - "classic-sources": [ "enSecuritySessionView" ] - }, - { - "slug": "/serverless/security/query-operating-systems", - "classic-sources": [ "enSecurityUseOsquery" ], - "items": [ - { - "slug": "/serverless/security/osquery-response-action", - "classic-sources": [ "enSecurityOsqueryResponseAction" ] - }, - { - "slug": "/serverless/security/invest-guide-run-osquery", - "classic-sources": [ "enSecurityInvestGuideRunOsquery" ] - }, - { - "slug": "/serverless/security/alerts-run-osquery", - "classic-sources": [ "enSecurityAlertsRunOsquery" ] - }, - { - "slug": "/serverless/security/examine-osquery-results", - "classic-sources": [ "enSecurityViewOsqueryResults" ] - }, - { - "slug": "/serverless/security/osquery-placeholder-fields", - "classic-sources": [ "enSecurityOsqueryPlaceholderFields" ] - } - ] - }, - { - "slug": "/serverless/security/indicators-of-compromise", - "classic-sources": [ "enSecurityIndicatorsOfCompromise" ] - }, - { - "slug": "/serverless/security/cases-overview", - "classic-sources": [ "enSecurityCasesOverview" ], - 
"items": [ - { - "slug": "/serverless/security/cases-open-manage", - "classic-sources": [ "enSecurityCasesOpenManage" ] - }, - { - "slug": "/serverless/security/cases-settings", - "classic-sources": [ "enSecurityCasesSettings" ] - } - ] - } - ] - }, - { - "slug": "/serverless/security/asset-management" - }, - { - "slug": "/serverless/security/manage-settings", - "items": [ - { - "slug": "/serverless/security/project-settings" - }, - { - "slug": "/serverless/security/advanced-settings", - "classic-sources": [ "enSecurityAdvancedSettings" ] - }, - { - "slug": "/serverless/security/requirements-overview", - "classic-sources": [ "enSecuritySecRequirements" ], - "items": [ - { - "slug": "/serverless/security/detections-requirements", - "classic-sources": [ "enSecurityDetectionsPermissionsSection" ] - }, - { - "slug": "/serverless/security/cases-requirements", - "classic-sources": [ "enSecurityCasePermissions" ] - }, - { - "slug": "/serverless/security/ers-requirements" - }, - { - "slug": "/serverless/security/ml-requirements", - "classic-sources": [ "enSecurityMlRequirements" ] - }, - { - "slug": "/serverless/security/conf-map-ui", - "classic-sources": [ "enSecurityConfMapUi" ] - } - ] - } - ] - }, - { - "slug": "/serverless/security/troubleshooting-ov", - "items":[ - { - "label": "Detection rules", - "slug": "/serverless/security/ts-detection-rules", - "classic-sources": [ "enSecurityTsDetectionRules" ] - }, - { - "label": "Elastic Defend", - "slug": "/serverless/security/troubleshoot-endpoints", - "classic-sources": [ "enSecurityTsManagement" ] - } - ] - }, - { - "slug": "/serverless/security/security-technical-preview-limitations" - } - ] -} diff --git a/docs/serverless/settings/advanced-settings.mdx b/docs/serverless/settings/advanced-settings.mdx deleted file mode 100644 index 67046cb80a..0000000000 --- a/docs/serverless/settings/advanced-settings.mdx +++ /dev/null @@ -1,198 +0,0 @@ ---- -slug: /serverless/security/advanced-settings -title: Advanced settings -description: Update advanced ((elastic-sec)) settings. -tags: [ 'serverless','security','reference','manage' ] -status: in review ---- - - - -
- -The advanced settings determine: - -* Which indices ((elastic-sec)) uses to retrieve data -* ((ml-cap)) anomaly score display threshold -* The navigation menu style used throughout the ((security-app)) -* Whether the news feed is displayed on the Overview dashboard -* The default time interval used to filter ((elastic-sec)) pages -* The default ((elastic-sec)) pages refresh time -* Which IP reputation links appear on IP detail pages -* Whether cross-cluster search (CCS) privilege warnings are displayed -* Whether related integrations are displayed on the Rules page tables -* The options provided in the alert tag menu - -You must have the appropriate user role to access and change advanced settings. - - -Modifying advanced settings can affect performance and cause -problems that are difficult to diagnose. Setting a property value to a blank -field reverts to the default behavior, which might not be compatible with other -configuration settings. Deleting a custom setting removes it -permanently. - - -## Access advanced settings - -To access advanced settings, go to **Project Settings** → **Management** → **Advanced Settings**, then scroll down to **Security Solution** settings. - - -For more information on non-Security settings, refer to [Advanced Settings](((kibana-ref))/advanced-options.html). Some settings might not be available in ((serverless-short)) projects. - - -![](../images/advanced-settings/-getting-started-solution-advanced-settings.png) - -
-
-## Update default Elastic Security indices
-
-The `securitySolution:defaultIndex` field defines which ((es)) indices the
-((security-app)) uses to collect data. By default, index patterns are used to
-match sets of ((es)) indices:
-
-* `apm-*-transaction*`
-* `auditbeat-*`
-* `endgame-*`
-* `filebeat-*`
-* `logs-*`
-* `packetbeat-*`
-* `winlogbeat-*`
-
-Index patterns use wildcards to specify a set of indices. For example, the
-`filebeat-*` index pattern means all indices starting with `filebeat-` are
-available in the ((security-app)).
-
-All of the default index patterns match [((beats))](((beats-ref))/beats-reference.html) and
-[((agent))](((fleet-guide))/fleet-overview.html) indices. This means all
-data shipped via ((beats)) and the ((agent)) is automatically added to the
-((security-app)).
-
-You can add or remove any indices and index patterns as required, with a maximum of 50 items in the comma-delimited list. For background information on ((es)) indices, refer to [Data in: documents and indices](((ref))/documents-indices.html).
-
-If you leave the `-*elastic-cloud-logs-*` index pattern selected, all Elastic Cloud logs are excluded from all queries in the ((security-app)) by default. This is to avoid adding data from cloud monitoring to the app.
-
-((elastic-sec)) requires [ECS-compliant data](((ecs-ref))). If you use third-party data
-collectors to ship data to ((es)), the data must be mapped to ECS. Refer to the ECS field reference for the ECS fields used in ((elastic-sec)).
-
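-
-For example, to make an additional ECS-compliant index available in the ((security-app)), you might extend the comma-delimited list along these lines (`my-custom-security-*` is a hypothetical pattern):
-
-```txt
-apm-*-transaction*, auditbeat-*, endgame-*, filebeat-*, logs-*, packetbeat-*, winlogbeat-*, my-custom-security-*
-```
-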
- -## Update default Elastic Security threat intelligence indices - -The `securitySolution:defaultThreatIndex` advanced setting specifies threat intelligence indices that ((elastic-sec)) features query for ingested threat indicators. This setting affects features that query threat intelligence indices, such as the Threat Intelligence view on the Overview page, indicator match rules, and the alert enrichment query. - -You can specify a maximum of 10 threat intelligence indices; multiple indices must be separated by commas. By default, only the `logs-ti*` index pattern is specified. Do not remove or overwrite this index pattern, as it is used by ((agent)) integrations. - - -Threat intelligence indices aren't required to be ECS-compatible for use in indicator match rules. However, we strongly recommend compatibility if you want your alerts to be enriched with relevant threat indicator information. When searching for threat indicator data, indicator match rules use the threat indicator path specified in the **Indicator prefix override** advanced setting. Visit Configure advanced rule settings for more information. - - -
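-
-For example, to query a custom threat intelligence index alongside the default pattern, the setting might look like this (`my-threat-intel-*` is a hypothetical pattern, and `logs-ti*` is kept in place):
-
-```txt
-logs-ti*, my-threat-intel-*
-```
-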
- -## Telemetry settings - -Elastic transmits certain information about Elastic Security when users interact with the ((security-app)), detailed below. Elastic redacts or obfuscates personal data (IP addresses, host names, usernames, etc.) before transmitting messages. Security-specific telemetry events include: - -* **Detection rule security alerts:** Information about Elastic-authored prebuilt detection rules using the detection engine. Examples of alert data include machine learning job influencers, process names, and cloud audit events. -* **((elastic-endpoint)) Security alerts:** Information about malicious activity detected using ((elastic-endpoint)) detection engines. Examples of alert data include malicious process names, digital signatures, and file names written by the malicious software. Examples of alert metadata include the time of the alert, the ((elastic-endpoint)) version and related detection engine versions. -* **Configuration data for ((elastic-endpoint)):** Information about the configuration of ((elastic-endpoint)) deployments. Examples of configuration data include the Endpoint versions, operating system versions, and performance counters for Endpoint. -* **Exception list entries for Elastic rules:** Information about exceptions added for Elastic rules. Examples include trusted applications, detection exceptions, and rule exceptions. -* **Security alert activity records:** Information about actions taken on alerts generated in the ((security-app)), such as acknowledged or closed. - -To learn more, refer to our [Privacy Statement](https://www.elastic.co/legal/privacy-statement). - -## Set machine learning score threshold - -When security ((ml)) jobs are enabled, this setting -determines the threshold above which anomaly scores appear in ((elastic-sec)): - -* `securitySolution:defaultAnomalyScore` - -## Modify news feed settings - -You can change these settings, which affect the news feed displayed on the -((elastic-sec)) **Overview** page: - -* `securitySolution:enableNewsFeed`: Enables the security news feed on the - Security **Overview** page. - -* `securitySolution:newsFeedUrl`: The URL from which the security news feed content is - retrieved. - -## Enable asset criticality workflows - -The `securitySolution:enableAssetCriticality` setting determines whether asset criticality is included as a risk input to entity risk scoring. This setting is turned off by default. Turn it on to enable asset criticality workflows and to use asset criticality as part of entity risk scoring. - -## Exclude cold and frozen tier data from analyzer queries - -Including data from cold and frozen [data tiers](((ref))/data-tiers.html) in visual event analyzer queries may result in performance degradation. The `securitySolution:excludeColdAndFrozenTiersInAnalyzer` setting allows you to exclude this data from analyzer queries. This setting is turned off by default. - -## Change the default search interval and data refresh time - -These settings determine the default time interval and refresh rate ((elastic-sec)) -pages use to display data when you open the app: - -* `securitySolution:timeDefaults`: Default time interval -* `securitySolution:refreshIntervalDefaults`: Default refresh rate - - -Refer to [Date Math](((ref))/common-options.html) for information about the -syntax. The UI [time filter](((kibana-ref))/set-time-filter.html) overrides the -default values. 
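-
-As a rough illustration, both settings typically take small JSON objects; the values below are illustrative examples rather than the defaults. For instance, `securitySolution:timeDefaults` might be set to cover the last 24 hours:
-
-```json
-{
-  "from": "now-24h/h",
-  "to": "now"
-}
-```
-
-Similarly, a `securitySolution:refreshIntervalDefaults` value such as `{ "pause": false, "value": 60000 }` would refresh ((elastic-sec)) pages every 60 seconds.
-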
-
-## Display reputation links on IP detail pages
-
-On IP details pages (**Network** → **_IP address_**), links to
-external sites for verifying the IP address's reputation are displayed. By
-default, links to these sites are listed: [TALOS](https://talosintelligence.com/)
-and [VIRUSTOTAL](https://www.virustotal.com/).
-
-The `securitySolution:ipReputationLinks` field determines which IP reputation
-sites are listed. To modify the listed sites, edit the field's JSON array. These
-fields must be defined in each array element:
-
-* `name`: The link's UI display name.
-* `url_template`: The link's URL. It can include `{{ip}}`, which is a placeholder
-  for the IP address you are viewing on the **IP detail** page.
-
-**Example**
-
-The following example adds a link to https://www.dnschecker.org on **IP detail** pages:
-
-```json
-[
-  { "name": "virustotal.com", "url_template": "https://www.virustotal.com/gui/search/{{ip}}" },
-  { "name": "dnschecker.org", "url_template": "https://www.dnschecker.org/ip-location.php?ip={{ip}}" },
-  { "name": "talosIntelligence.com", "url_template": "https://talosintelligence.com/reputation_center/lookup?search={{ip}}" }
-]
-```
-
- -## Configure cross-cluster search privilege warnings - -Each time a detection rule runs using a remote cross-cluster search (CCS) index pattern, it will return a warning saying that the rule may not have the required `read` privileges to the remote index. Because privileges cannot be checked across remote indices, this warning displays even when the rule actually does have `read` privileges to the remote index. - -If you've ensured that your detection rules have the required privileges across your remote indices, you can use the `securitySolution:enableCcsWarning` setting to disable this warning and reduce noise. - - - -## Show/hide related integrations in Rules page tables - -By default, Elastic prebuilt rules in the **Rules** and **Rule Monitoring** tables include a badge showing how many related integrations have been installed. Turn off `securitySolution:showRelatedIntegrations` to hide this in the rules tables (related integrations will still appear on rule details pages). - -
- -## Manage alert tag options - -The `securitySolution:alertTags` field determines which options display in the alert tag menu. The default alert tag options are `Duplicate`, `False Positive`, and `Further investigation required`. You can update the alert tag menu by editing these options or adding more. To learn more about using alert tags, refer to Apply and filter alert tags. diff --git a/docs/serverless/settings/case-permissions.mdx b/docs/serverless/settings/case-permissions.mdx deleted file mode 100644 index 035a75065d..0000000000 --- a/docs/serverless/settings/case-permissions.mdx +++ /dev/null @@ -1,113 +0,0 @@ ---- -slug: /serverless/security/cases-requirements -title: Cases requirements -description: Requirements for using and managing cases. -tags: [ 'serverless', 'security', 'reference','manage' ] -status: in review ---- - - -
- -{/* To view cases, you need the ((kib)) space `Read` privilege for the `Security` feature. To create cases and add comments, you need the `All` ((kib)) */} -{/* space privilege for the `Security` feature. */} - -{/* For more information, see */} -{/* ((kibana-ref))/xpack-spaces.html#spaces-control-user-access[Feature access based on user privileges]. */} - -User roles define feature privileges at different levels to manage feature access. To access cases, you must have the appropriate user role. - - - -To send cases to external systems, you need the Security Analytics Complete . - - - - -Certain feature tiers and roles might be required to manage case attachments. For example, to add alerts to cases, you must have a role that allows managing alerts. - - -{/* Hiding the whole table because it's classic-only. We'll replace with serverless info when it's available. */} -{/* To grant access to cases, set the ((kib)) space privileges for the **Cases** and **((connectors-feature))** features as follows: - - - - - Give full access to manage cases - - - - * **All** for the **Cases** feature under **Security** - - * **All** for the **((connectors-feature))** feature under **Management** - - - - Roles without **All** **((connectors-feature))** feature privileges cannot create, add, delete, or modify case connectors. - - - - - - - - - Give assignee access to cases - - - - * **All** for the **Cases** feature under **Security** - - - Before a user can be assigned to a case, they must log into ((kib)) at least - once, which creates a user profile. - - - - - - - - Give view-only access for cases - - **Read** for the **Security** feature and **All** for the **Cases** feature - - - - - - - Give access to view and delete cases - - - - **Read** for the **Cases** feature under **Security** with the **Delete** sub-feature selected - - - These privileges also enable you to delete comments and alerts from a case. - - - - - - - - Revoke all access to cases - - **None** for the **Cases** feature under **Security** - - - - - - -![Shows privileges needed for cases, actions, and connectors](../images/case-permissions/-cases-case-feature-privs.png) */} diff --git a/docs/serverless/settings/conf-map-ui.mdx b/docs/serverless/settings/conf-map-ui.mdx deleted file mode 100644 index 6fcf1bd8f7..0000000000 --- a/docs/serverless/settings/conf-map-ui.mdx +++ /dev/null @@ -1,155 +0,0 @@ ---- -slug: /serverless/security/conf-map-ui -title: Network map data requirements -description: Requirements for setting up and using the Network page. -tags: [ 'serverless', 'security', 'how-to','manage' ] -status: in review ---- - - -
- -Depending on your setup, to display and interact with data on the -**Network** page's map you might need to: - -* Create data views -* Add geographical IP data to events -* Map your internal network - - -To see source and destination connections lines on the map, you must -configure `source.geo` and `destination.geo` ECS fields for your indices. - - -
- -## Permissions required -In order to view the map, you need the appropriate user role. - -
-
-## Create data views
-
-To display map data, you must define a
-[data view](((kibana-ref))/data-views.html) (**Project settings** → **Management** → **Data views**) that includes one or more of the indices specified in the `securitySolution:defaultIndex` field in advanced settings.
-
-For example, to display data that is stored in indices matching the index pattern `servers-europe-*` on the map, you must use a data view whose index pattern matches `servers-europe-*`, such as `servers-*`.
-
- -## Add geoIP data - -When the ECS [source.geo.location and -destination.geo.location](((ecs-ref))/ecs-geo.html) fields are mapped, network data is displayed on -the map. - -If you use Beats, configure a geoIP processor to add data to the relevant -fields: - -
- -1. Define an ingest node pipeline that uses one or more `geoIP` processors to add - location information to events. For example, use the Console in **Dev tools** to create - the following pipeline: - - ```json - PUT _ingest/pipeline/geoip-info - { - "description": "Add geoip info", - "processors": [ - { - "geoip": { - "field": "client.ip", - "target_field": "client.geo", - "ignore_missing": true - } - }, - { - "geoip": { - "field": "source.ip", - "target_field": "source.geo", - "ignore_missing": true - } - }, - { - "geoip": { - "field": "destination.ip", - "target_field": "destination.geo", - "ignore_missing": true - } - }, - { - "geoip": { - "field": "server.ip", - "target_field": "server.geo", - "ignore_missing": true - } - }, - { - "geoip": { - "field": "host.ip", - "target_field": "host.geo", - "ignore_missing": true - } - } - ] - } - ``` - {/* CONSOLE */} - - In this example, the pipeline ID is `geoip-info`. `field` specifies the field - that contains the IP address to use for the geographical lookup, and - `target_field` is the field that will hold the geographical information. - `"ignore_missing": true` configures the pipeline to continue processing when - it encounters an event that doesn't have the specified field. - - - An example ingest pipeline that uses the GeoLite2-ASN.mmdb database to add - autonomous system number (ASN) fields can be found [here](https://github.com/elastic/examples/blob/master/Security%20Analytics/SIEM-examples/Packetbeat/geoip-info.json). - - -1. In your Beats configuration files, add the pipeline to the - `output.elasticsearch`tag: - - ```yml - output.elasticsearch: - hosts: ["localhost:9200"] - pipeline: geoip-info [^1] - ``` - [^1]: The value of this field must be the same as the ingest pipeline name in - step 1 (`geoip-info` in this example). - -
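-
-To confirm the pipeline behaves as expected before relying on it, you can simulate it against a sample document (the IP address below is just an example):
-
-```json
-POST _ingest/pipeline/geoip-info/_simulate
-{
-  "docs": [
-    {
-      "_source": {
-        "source": { "ip": "8.8.8.8" }
-      }
-    }
-  ]
-}
-```
-
-If the lookup succeeds, the simulated document in the response includes a populated `source.geo` object.
-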
- -## Map your internal network - -If you want to add your network’s internal IP addresses to the map, define geo -location fields under the `processors` tag in the Beats configuration files -on your hosts: - -```yml - processors: - - add_host_metadata: - - add_cloud_metadata: ~ - - add_fields: - when.network.source.ip: [^1] - fields: - source.geo.location: - lat: - lon: - target: '' - - add_fields: - when.network.destination.ip: - fields: - destination.geo.location: - lat: - lon: - target: '' -``` -[^1]: For the IP address, you can use either `private` or CIDR notation. - - -You can also enrich your data with other -[host fields](((packetbeat-ref))/add-host-metadata.html). - - diff --git a/docs/serverless/settings/detections-permissions-section.mdx b/docs/serverless/settings/detections-permissions-section.mdx deleted file mode 100644 index d58ba2a439..0000000000 --- a/docs/serverless/settings/detections-permissions-section.mdx +++ /dev/null @@ -1,76 +0,0 @@ ---- -slug: /serverless/security/detections-requirements -title: Detections requirements -description: Requirements for setting up and configuring the detections feature. -tags: [ 'serverless', 'security', 'reference','manage' ] -status: in review ---- - - -
- -To use the Detections feature, you first need to -configure a few settings. You also need the appropriate role to send -notifications when detection alerts are generated. - -Additionally, there are some advanced settings used to -configure value list upload limits. - -
-
-{/* We're removing a lot of the information below because it's only relevant to classic, but retaining it for reference. We will replace it with serverless-specific equivalents once that information is available and useable. */}
-## Enable and access detections
-To use the Detections feature, it must be enabled and you must have the appropriate role to access rules and alerts. If your role does not have the privileges needed to enable this feature, you can ask someone who has these privileges to visit your Security project{/* Kibana space */}, which turns the feature on for you. {/* The following table describes the required privileges to access the Detections page, including rules and alerts. */}
-
-{/* The reference to the Detections page might be a bug in classic and serverless docs. Might need to change it to Alerts and Rules, or something different like "pages that use the Detections feature". If update this para, will need to update the table below as well. */}
-
-For instructions about using Machine Learning jobs and rules, refer to Machine learning job and rule requirements.
-
- -### Authorization - -Rules, including all background detection and the actions they generate, are authorized using an [API key](((kibana-ref))/api-keys.html) associated with the last user to edit the rule. Upon creating or modifying a rule, an API key is generated for that user, capturing a snapshot of their privileges. The API key is then used to run all background tasks associated with the rule including detection checks and executing actions. - - - -If a rule requires certain privileges to run, such as index privileges, keep in mind that if a user without those privileges updates the rule, the rule will no longer function. - - - -{/* Hiding the section below as it does not sound possible in serverless yet. But might be enabled soonish, so don't want to remove it entirely. */} -{/*
- -## Configure list upload limits - -You can set limits to the number of bytes and the buffer size used to upload -value lists to ((elastic-sec)). - -To set the value: - -1. Open `kibana.yml` [configuration file](((kibana-ref))/settings.html) or edit your - ((kib)) cloud instance. - -1. Add any of these settings and their required values: - * `xpack.lists.maxImportPayloadBytes`: Sets the number of bytes allowed for - uploading ((elastic-sec)) value lists (default `9000000`, maximum - `100000000`). For every 10 megabytes, it is recommended to have an additional 1 - gigabyte of RAM reserved for Kibana. - - For example, on a Kibana instance with 2 gigabytes of RAM, you can set this value up to 20000000 (20 megabytes). - - * `xpack.lists.importBufferSize`: Sets the buffer size used for uploading - ((elastic-sec)) value lists (default `1000`). Change the value if you're - experiencing slow upload speeds or larger than wanted memory usage when - uploading value lists. Set to a higher value to increase throughput at the - expense of using more Kibana memory, or a lower value to decrease throughput and - reduce memory usage. - - -For information on how to configure Elastic Cloud deployments, refer to -[Add Kibana user settings](((cloud))/ec-manage-kibana-settings.html). - */} - diff --git a/docs/serverless/settings/endpoint-management-req.mdx b/docs/serverless/settings/endpoint-management-req.mdx deleted file mode 100644 index d325888d8f..0000000000 --- a/docs/serverless/settings/endpoint-management-req.mdx +++ /dev/null @@ -1,170 +0,0 @@ ---- -# slug: /serverless/security/endpoint-management-req -title: ((elastic-defend)) requirements -description: Manage user roles and privileges to grant access to ((elastic-defend)) features. -tags: [ 'serverless','security','defend','reference','manage' ] -status: in review ---- - - -
-
-You can create user roles and define privileges to manage feature access in ((kib)). This allows you to use the principle of least privilege while managing access to ((elastic-defend))'s features.
-
-Roles and privileges are configured in **Stack Management** → **Roles** in ((kib)). For more details on using this UI, refer to [((kib)) privileges](((kibana-ref))/kibana-role-management.html#adding_kibana_privileges).
-
-((elastic-defend))'s feature privileges must be assigned to **All Spaces**. You can't assign them to an individual ((kib)) space.
-
-To grant access, select **All** for the **Security** feature in the **((kib)) privileges** configuration UI, then turn on the **Customize sub-feature privileges** switch. For each of the following sub-feature privileges, select the type of access you want to allow:
-
-* **All**: Users have full access to the feature, which includes performing all available actions and managing configuration.
-* **Read**: Users can view the feature, but can't perform any actions or manage configuration. (Some features don't have this privilege.)
-* **None**: Users can't access or view the feature.
-
-The sub-feature privileges are:
-
-* **Endpoint List**: Access the Endpoints page, which lists all hosts running ((elastic-defend)), and associated integration details.
-* **Trusted Applications**: Access the Trusted Applications page to remediate conflicts with other software, such as antivirus or endpoint security applications.
-* **Host Isolation Exceptions**: Access the Host Isolation Exceptions page to add specific IP addresses that isolated hosts can still communicate with.
-* **Blocklist**: Access the Blocklist page to prevent specified applications from running on hosts, extending the list of processes that ((elastic-defend)) considers malicious.
-* **Event Filters**: Access the Event Filters page to filter out endpoint events that you don't want stored in ((es)).
-* **((elastic-defend)) Policy Management**: Access the Policies page and ((elastic-defend)) integration policies to configure protections, event collection, and advanced policy features.
-* **Response Actions History**: Access the response actions history for endpoints.
-* **Host Isolation**: Allow users to isolate and release hosts.
-* **Process Operations**: Perform host process-related response actions, including `processes`, `kill-process`, and `suspend-process`.
-* **File Operations**: Perform file-related response actions in the response console.
-* **Execute Operations**: Perform shell commands and script-related response actions in the response console.
-
-  The commands are run on the host using the same user account running the ((elastic-defend)) integration, which normally has full control over the system. Only grant this feature privilege to ((elastic-sec)) users who require this level of access.
-
-{/* Check with joe if it's ok to remove this section. */}
-## Upgrade considerations
-
-After upgrading from ((elastic-sec)) 8.6 or earlier, existing user roles will be assigned **None** by default for any new endpoint management feature privileges, and you'll need to explicitly assign them. However, many features previously required the built-in `superuser` role, and users who previously had this role will still have it after upgrading.
-
- 
You'll probably want to replace the broadly permissive `superuser` role with more focused feature-based privileges to ensure that users have access to only the specific features that they need. Refer to [((kib)) role management](((kibana-ref))/kibana-role-management.html) for more details on assigning roles and privileges. 

diff --git a/docs/serverless/settings/ers-req.mdx b/docs/serverless/settings/ers-req.mdx deleted file mode 100644 index 903c8c52fa..0000000000 --- a/docs/serverless/settings/ers-req.mdx +++ /dev/null @@ -1,50 +0,0 @@ --- -slug: /serverless/security/ers-requirements -title: Entity risk scoring prerequisites -description: Requirements for using entity risk scoring and asset criticality. -tags: [ 'serverless', 'security', 'reference', 'manage' ] -status: in review --- 

To use entity risk scoring and asset criticality, you need the appropriate user roles. These features require the Security Analytics Complete project feature. 

This page covers the requirements for using the entity risk scoring and asset criticality features, as well as their known limitations. 

## Entity risk scoring 

### User roles 

To turn on the risk scoring engine, you need one of the following Security user roles: 

* Platform engineer 
* Detections admin 
* Admin 

### Known limitations 

* The risk scoring engine uses an internal user role to score all hosts and users. After you turn on the risk scoring engine, all alerts in the project will contribute to host and user risk scores. 
* You cannot customize alert data views or risk weights associated with alerts and asset criticality levels. 

## Asset criticality 

To use the asset criticality feature, turn on the `securitySolution:enableAssetCriticality` advanced setting. 

### User roles 

The following Security user roles allow you to view an entity's asset criticality: 

* Viewer 
* Tier 1 analyst 

The following Security user roles allow you to view, assign, change, or unassign an entity's asset criticality: 

* Editor 
* Tier 2 analyst 
* Tier 3 analyst 
* Threat intelligence analyst 
* Rule author 
* SOC manager 
* Endpoint operations analyst 
* Platform engineer 
* Detections admin 
* Endpoint policy manager 

diff --git a/docs/serverless/settings/manage-settings.mdx b/docs/serverless/settings/manage-settings.mdx deleted file mode 100644 index 779bc6ce0a..0000000000 --- a/docs/serverless/settings/manage-settings.mdx +++ /dev/null @@ -1,14 +0,0 @@ --- -slug: /serverless/security/manage-settings -title: Manage settings -# description: Description to be written -tags: [ 'serverless', 'security', 'overview' ] -status: in review --- 

These pages explain how to manage settings in various areas of the ((security-app)): 

* **Project settings**: Configure project-wide settings related to users, billing, data management, and more. 
* **Advanced settings**: Update advanced ((elastic-sec)) settings. 
* **((elastic-sec)) requirements**: Learn about requirements for specific features. 

diff --git a/docs/serverless/settings/ml-requirements.mdx b/docs/serverless/settings/ml-requirements.mdx deleted file mode 100644 index 9ae72514d6..0000000000 --- a/docs/serverless/settings/ml-requirements.mdx +++ /dev/null @@ -1,27 +0,0 @@ --- -slug: /serverless/security/ml-requirements -title: ((ml-cap)) job and rule requirements -description: Requirements for using ((ml-cap)) jobs and rules. -tags: [ 'serverless', 'security', 'reference', 'manage' ] -status: in review ---
- -To run and create ((ml)) jobs and rules, you need the appropriate user role. - -For more information, go to [Set up ((ml-features))](((ml-docs))/setup.html). - - - -{/* The `machine_learning_admin` and `machine_learning_user` built-in */} Some roles give -access to the results of _all_ ((anomaly-jobs)), irrespective of whether the user -has access to the source indices. Likewise, a user who has full or read-only -access to ((ml-features)) {/* within a given ((kib)) space */} can view the results of _all_ -((anomaly-jobs)) that are visible{/* in that space */}. You must carefully consider who -is given these roles and feature privileges; ((anomaly-job)) results may propagate -field values that contain sensitive information from the source indices to the -results. - - diff --git a/docs/serverless/settings/project-settings.mdx b/docs/serverless/settings/project-settings.mdx deleted file mode 100644 index 1175421924..0000000000 --- a/docs/serverless/settings/project-settings.mdx +++ /dev/null @@ -1,10 +0,0 @@ ---- -slug: /serverless/security/project-settings -title: Project settings -description: Configure project-wide settings related to users, billing, data management, and more. -tags: [ 'serverless', 'security', 'overview', 'manage' ] -status: rough content ---- - - -Navigate to **Project settings** to configure project-wide settings related to users, billing, data management, and more. diff --git a/docs/serverless/settings/sec-requirements.mdx b/docs/serverless/settings/sec-requirements.mdx deleted file mode 100644 index 7671a1b98d..0000000000 --- a/docs/serverless/settings/sec-requirements.mdx +++ /dev/null @@ -1,62 +0,0 @@ ---- -slug: /serverless/security/requirements-overview -title: ((elastic-sec)) requirements -description: Requirements for using and configuring ((elastic-sec)). -tags: [ 'serverless', 'security', 'how-to','manage' ] -status: in review ---- - - -The [Support Matrix](https://www.elastic.co/support/matrix) page lists officially -supported operating systems, platforms, and browsers on which components such as ((beats)), ((agent)), ((elastic-defend)), and ((elastic-endpoint)) have been tested. - -
- -## Feature-specific requirements - -There are some additional requirements for specific features: - -* Detections prerequisites and requirements -* Cases prerequisites -* Entity risk scoring prerequisites -* Machine learning job and rule requirements -* ((elastic-endpoint)) requirements -* Configure network map data - -{/* Hiding the content below until we can validate equivalent statements for serverless. */} -{/* ## License requirements - -All features are available as part of the free Basic plan **except**: - -* Alert notifications via external systems -* ((ml-cap)) jobs and rules -* Cases integration with third-party ticketing - systems - -## Advanced configuration and UI options - -Configure advanced settings describes how to modify advanced settings, such as the -((elastic-sec)) indices, default time intervals used in filters, and IP reputation -links. */} - -## Third-party collectors mapped to ECS - -The [Elastic Common Schema (ECS)](((ecs-ref))) defines a common set of fields to be used for storing event data in Elasticsearch. ECS helps users normalize their event data -to better analyze, visualize, and correlate the data represented in their -events. ((elastic-sec)) can ingest and normalize events from any ECS-compliant data source. - - -((elastic-sec)) requires [ECS-compliant data](((ecs-ref))). If you use third-party data collectors to ship data to ((es)), the data must be mapped to ECS. ((elastic-sec)) ECS field reference lists ECS fields used in ((elastic-sec)). - - -{/* Hiding the content below until we can validate equivalent statements for serverless. */} -{/* ## Cross-cluster searches - -For information on how to perform cross-cluster searches on ((elastic-sec)) -indices, see: - -* [Search across cluster](((ref))/modules-cross-cluster-search.html) - (for self-managed ((stack)) deployments) - -* [Enable cross-cluster search](((cloud))/ec-enable-ccs.html) (for hosted deployments) */} - diff --git a/docs/serverless/technical-preview-limitations.mdx b/docs/serverless/technical-preview-limitations.mdx deleted file mode 100644 index b6be7feb58..0000000000 --- a/docs/serverless/technical-preview-limitations.mdx +++ /dev/null @@ -1,15 +0,0 @@ ---- -slug: /serverless/security/security-technical-preview-limitations -title: Technical preview limitations -description: Review the limitations that apply to Elastic Security projects in technical preview. -tags: [ 'serverless', 'security' ] ---- - - - -Currently, workloads outside of the following ranges may experience higher latencies: - -- Data ingest rate, total of all data sources, greater than 500GB per day -- Number of ((ml)) jobs greater than 50 -- Searchable data size greater than 10TB -- Number of endpoints and Cloud assets for ((fleet)) and ((agent)) management greater than 40,000 diff --git a/docs/serverless/troubleshooting/troubleshoot-endpoints.mdx b/docs/serverless/troubleshooting/troubleshoot-endpoints.mdx deleted file mode 100644 index cee0df2d9d..0000000000 --- a/docs/serverless/troubleshooting/troubleshoot-endpoints.mdx +++ /dev/null @@ -1,207 +0,0 @@ ---- -slug: /serverless/security/troubleshoot-endpoints -title: Troubleshoot ((elastic-defend)) -# description: Description to be written -tags: [ 'serverless', 'security', 'troubleshooting' ] -status: in review ---- - - -
- -This topic covers common troubleshooting issues when using ((elastic-defend))'s endpoint management tools. - -
- -## Endpoints - - - -In some cases, an `Unhealthy` ((agent)) status may be caused by a failure in the ((elastic-defend)) integration policy. In this situation, the integration and any failing features are flagged on the agent details page in ((fleet)). Expand each section and subsection to display individual responses from the agent. - - -Integration policy response information is also available from the **Endpoints** page in the ((security-app)) (**Assets** → **Endpoints**, then click the link in the **Policy status** column). - - -![Agent details page in ((fleet)) with Unhealthy status and integration failures](../images/ts-management/-troubleshooting-unhealthy-agent-fleet.png) - -Common causes of failure in the ((elastic-defend)) integration policy include missing prerequisites or unexpected system configuration. Consult the following topics to resolve a specific error: - -- Approve the system extension for ((elastic-endpoint)) (macOS) -- Enable Full Disk Access for ((elastic-endpoint)) (macOS) -- Resolve a potential system deadlock (Linux) - - -If the ((elastic-defend)) integration policy is not the cause of the `Unhealthy` agent status, refer to [((fleet)) troubleshooting](((fleet-guide))/fleet-troubleshooting.html) for help with the ((agent)). - - - - - - -If you have an `Unhealthy` ((agent)) status with the message `Disabled due to potential system deadlock`, that means malware protection was disabled on the ((elastic-defend)) integration policy due to errors while monitoring a Linux host. - -You can resolve the issue by configuring the policy's advanced settings related to **fanotify**, a Linux feature that monitors file system events. By default, ((elastic-defend)) works with fanotify to monitor specific file system types that Elastic has tested for compatibility, and ignores other unknown file system types. - -If your network includes nonstandard, proprietary, or otherwise unrecognized Linux file systems that cause errors while being monitored, you can configure ((elastic-defend)) to ignore those file systems. This allows ((elastic-defend)) to resume monitoring and protecting the hosts on the integration policy. - - -Ignoring file systems can create gaps in your security coverage. Use additional security layers for any file systems ignored by ((elastic-defend)). - - -To resolve the potential system deadlock error: - -1. Go to **Assets** → **Policies**, then click a policy's name. - -1. Scroll to the bottom of the policy and click **Show advanced settings**. - -1. In the setting `linux.advanced.fanotify.ignored_filesystems`, enter a comma-separated list of file system names to ignore, as they appear in `/proc/filesystems` (for example: `ext4,tmpfs`). Refer to Find file system names for more on determining the file system names. - -1. Click **Save**. - - Once you save the policy, malware protection is re-enabled. - - - - - -If you encounter a `“Required transform failed”` notice on the Endpoints page, you can usually resolve the issue by restarting the transform. Refer to [Transforming data](((ref))/transforms.html) for more information about transforms. - -![Endpoints page with Required transform failed notice](../images/ts-management/-troubleshooting-endpoints-transform-failed.png) - -To restart a transform that’s not running: - -1. Go to **Project settings** → **Management** → **Transforms**. -1. Enter `endpoint.metadata` in the search box to find the transforms for ((elastic-defend)). -1. 
Click the **Actions** menu (**...**) and do one of the following for each transform, depending on the value in the **Status** column: 

   * `stopped`: Select **Start** to restart the transform. 
   * `failed`: Select **Stop** to first stop the transform, and then select **Start** to restart it. 

   ![Transforms page with Start option selected](../images/ts-management/-troubleshooting-transforms-start.png) 

1. On the confirmation message that displays, click **Start** to restart the transform. 
1. The transform's status changes to `started`. If it doesn't change, refresh the page. 

After ((agent)) installs Endpoint, Endpoint connects to ((agent)) over a local relay connection to report its health status and receive policy updates and response action requests. If that connection cannot be established, the ((elastic-defend)) integration will cause ((agent)) to be in an `Unhealthy` status, and Endpoint won't operate properly. 

### Identify if the issue is happening 

You can identify if this issue is happening in the following ways: 

* Run ((agent))'s status command: 

  * `sudo /opt/Elastic/Agent/elastic-agent status` (Linux) 
  * `sudo /Library/Elastic/Agent/elastic-agent status` (macOS) 
  * `c:\Program Files\Elastic\Agent\elastic-agent.exe status` (Windows) 

  If the status result for `endpoint-security` says that Endpoint has missed check-ins, or that `localhost:6788` cannot be bound, this problem might be occurring. 

* If the problem starts happening right after installing Endpoint, check the value of `fleet.agent.id` in the following file: 

  * `/opt/Elastic/Endpoint/elastic-endpoint.yaml` (Linux) 
  * `/Library/Elastic/Endpoint/elastic-endpoint.yaml` (macOS) 
  * `c:\Program Files\Elastic\Endpoint\elastic-endpoint.yaml` (Windows) 

  If the value of `fleet.agent.id` is `00000000-0000-0000-0000-000000000000`, this indicates that the problem is occurring. 

  If the problem starts after Endpoint has already been installed and working properly, `fleet.agent.id` will already contain a real agent ID, so this check won't reveal the problem even though it is happening. 

### Examine Endpoint logs 

If you've confirmed that the issue is happening, you can look at Endpoint log messages to identify the cause: 

* `Failed to find connection to validate. Is Agent listening on 127.0.0.1:6788?` or `Failed to validate connection. Is Agent running as root/admin?` means that Endpoint is not able to create an initial connection to ((agent)) over port `6788`. 
* `Unable to make GRPC connection in deadline(60s). Fetching connection info again` means that Endpoint's original connection to ((agent)) over port `6788` worked, but the connection over port `6789` is failing. 

### Resolve the issue 

To debug and resolve the issue, follow these steps: 

1. Examine the Endpoint diagnostics file named `analysis.txt`, which contains information about what may cause this issue. ((agent)) diagnostics automatically include Endpoint diagnostics. 

1. Make sure nothing else on your device is listening on ports `6788` or `6789` by running: 

   * `sudo netstat -anp --tcp` (Linux) 
   * `sudo netstat -an -f inet` (macOS) 
   * `netstat -an` (Windows) 

1.
Make sure `localhost` can be resolved to `127.0.0.1` by running: 

   * `ping -4 -c 1 localhost` (Linux) 
   * `ping -c 1 localhost` (macOS) 
   * `ping -4 localhost` (Windows) 

After deploying ((elastic-defend)), you might encounter warnings or errors in the endpoint's **Policy status** in ((fleet)) if your mobile device management (MDM) is misconfigured or certain permissions for ((elastic-endpoint)) aren't granted. The following sections explain issues that can cause warnings or failures in the endpoint's policy status. 

### Connect Kernel has failed 

This means that the system extension or kernel extension was not approved. Consult the following topics for approving the system extension, either with MDM or without MDM: 

* Approve the system extension with MDM 
* Approve the system extension without MDM 

You can validate that the system extension is loaded by running: 

``` 
sudo systemextensionsctl list | grep co.elastic.systemextension 
``` 

In the command output, the system extension should be marked as "activated enabled". 

### Connect Kernel has failed and the system extension is loaded 

If the system extension is loaded and the kernel connection still fails, this means that Full Disk Access was not granted. ((elastic-endpoint)) requires Full Disk Access to subscribe to system events via the macOS Endpoint Security framework, which is one of the primary sources of eventing information used by ((elastic-endpoint)). Consult the following topics for granting Full Disk Access, either with MDM or without MDM: 

* Enable Full Disk Access with MDM 
* Enable Full Disk Access without MDM 

You can validate that Full Disk Access is approved by running: 

``` 
sudo /Library/Elastic/Endpoint/elastic-endpoint test install 
``` 

If the command output doesn't contain a message about enabling Full Disk Access, the approval was successful. 

### Detect Network Events has failed 

This means that network extension content filtering was not approved. Consult the following topics for approving network content filtering, either with MDM or without MDM: 

* Approve network content filtering with MDM 
* Approve network content filtering without MDM 

You can validate that network content filtering is approved by running: 

``` 
sudo /Library/Elastic/Endpoint/elastic-endpoint test install 
``` 

If the command output doesn't contain a message about approving network content filtering, the approval was successful. 

### Full Disk Access has a warning 

This means that Full Disk Access was not granted for one or more ((elastic-endpoint)) components. Consult the following topics for granting Full Disk Access, either with MDM or without MDM: 

* Enable Full Disk Access with MDM 
* Enable Full Disk Access without MDM 

You can validate that Full Disk Access is approved by running: 

``` 
sudo /Library/Elastic/Endpoint/elastic-endpoint test install 
``` 

If the command output doesn't contain a message about enabling Full Disk Access, the approval was successful. 

diff --git a/docs/serverless/troubleshooting/troubleshooting-intro.mdx b/docs/serverless/troubleshooting/troubleshooting-intro.mdx deleted file mode 100644 index 34bdfd3058..0000000000 --- a/docs/serverless/troubleshooting/troubleshooting-intro.mdx +++ /dev/null @@ -1,11 +0,0 @@ --- -slug: /serverless/security/troubleshooting-ov -title: Troubleshooting -description: Resolve issues in ((elastic-sec)).
- 
tags: ["serverless","security","troubleshooting","overview"] 
--- 

This section covers common ((elastic-sec))-related issues and how to resolve them. 

* Troubleshoot ((elastic-defend)) 
* Troubleshoot detection rules 
\ No newline at end of file 
diff --git a/docs/serverless/troubleshooting/ts-detection-rules.mdx b/docs/serverless/troubleshooting/ts-detection-rules.mdx deleted file mode 100644 index b12d4287b1..0000000000 --- a/docs/serverless/troubleshooting/ts-detection-rules.mdx +++ /dev/null @@ -1,108 +0,0 @@ --- -slug: /serverless/security/ts-detection-rules -title: Troubleshoot detection rules -description: Covers common troubleshooting issues when creating or managing detection rules. -tags: ["serverless","security","troubleshooting","configure"] -status: in review ---
- -This topic covers common troubleshooting issues when creating or managing detection rules. - -
- -## ((ml-cap)) rules - - - -If a ((ml)) rule is failing, check to make sure the required ((ml)) jobs are running and start any jobs that have stopped. - -1. Go to **Rules** → **Detection rules (SIEM)**, then select the ((ml)) rule. The required ((ml)) jobs and their statuses are listed in the **Definition** section. - - ![Rule details page with ML job stopped](../images/ts-detection-rules/-troubleshooting-rules-ts-ml-job-stopped.png) - -1. If a required ((ml)) job isn't running, turn on the **Run job** toggle next to it. -1. Rerun the ((ml)) detection rule. - - - -
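If you'd rather check or restart the jobs outside the UI, and your project gives you access to Developer Tools, a minimal sketch like the following can confirm a job's state and reopen it. The job ID `suspicious_login_activity` is a hypothetical placeholder, and `datafeed-<job ID>` is only a common naming convention; substitute the IDs shown in the rule's **Definition** section.

```
# Check the job's state; look for "state": "opened" in the response
GET _ml/anomaly_detectors/suspicious_login_activity/_stats

# If the job is closed, open it and start its datafeed
POST _ml/anomaly_detectors/suspicious_login_activity/_open
POST _ml/datafeeds/datafeed-suspicious_login_activity/_start
```

Turning on the **Run job** toggle in the rule's **Definition** section accomplishes the same thing without the API.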
- 
## Indicator match rules 

If you receive the following rule failure: `Bulk Indexing of signals failed: [parent] Data too large`, this indicates that the alerts payload was too large to process. 

This can be caused by bad indicator data, a misconfigured rule, or too many event matches. Review your indicator data or rule query. If nothing obvious is misconfigured, try executing the rule against a subset of the original data and continue diagnosis. 

If you receive the following rule failure: `An error occurred during rule execution: message: "Request Timeout after 90000ms"`, this indicates that the query phase is timing out. Try refining the time frame or dividing the data defined in the query into multiple rules. 

If you receive the following rule failure: `Bulk Indexing of signals failed: index: ".index-name" reason: "maxClauseCount is set to 1024" type: "too_many_clauses"`, this indicates that the limit for the total number of clauses that a query tree can have is too low. To update your maximum clause count, [increase the size of your ((es)) JVM heap memory](((ref))/advanced-configuration.html#set-jvm-heap-size). A JVM heap size of 1 GB or more is sufficient. 

If you notice rule delays, review the suggestions above, and consider limiting the number of rules that run simultaneously, because running many rules at the same time can noticeably degrade performance.
- 
## Rule exceptions 

When you're creating detection rule exceptions, autocomplete might not provide suggestions in the **Value** field if the values don't exist in the current page's time range. 

You can resolve this by expanding the time range, or by configuring the autocomplete feature to get suggestions from your full data set instead (turn off the `autocomplete:useTimeRange` advanced setting). 
{/* Will need to revisit this section since it mentions advanced settings, which aren't exposed yet. */} 

Turning off `autocomplete:useTimeRange` could cause performance issues if the data set is especially large. 

A warning icon and message appear for fields with type conflicts across multiple indices or fields that are unmapped. You can learn more about the conflict by clicking the warning message. 

A field can have type conflicts _and_ be unmapped in specified indices.
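If you prefer to inspect a flagged field outside the UI, the field capabilities API reports how each matching index maps it, including indices where it's unmapped. This is a sketch with a hypothetical index pattern (`my-logs-*`) and field name (`client_domain`); substitute the index patterns used by your rule.

```
# Show how each index maps the field; conflicting types and unmapped indices are listed per type
GET my-logs-*/_field_caps?fields=client_domain&include_unmapped=true
```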
- -### Fields with conflicting types - -Type conflicts occur when a field is mapped to different types across multiple indices. To resolve this issue, you can create new indices with matching field type mappings and [reindex your data](((ref))/docs-reindex.html). Otherwise, use the information about a field's type mappings to ensure you're entering compatible field values when defining exception conditions. - -In the following example, the selected field has been defined as different types across five indices. - - - -
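If you decide to fix the conflict at the source rather than work around it, one approach is to create a new index that maps the field consistently and reindex the conflicting data into it. The following is a rough sketch with hypothetical index and field names (`my-logs-000001`, `my-logs-fixed-000001`, `client_domain`), not a drop-in fix; afterwards you'd also need to point the rule's index patterns (or an alias) at the corrected index.

```
# Create a destination index with a single, consistent mapping for the field
PUT my-logs-fixed-000001
{
  "mappings": {
    "properties": {
      "client_domain": { "type": "keyword" }
    }
  }
}

# Copy the documents from the conflicting index into the new one
POST _reindex
{
  "source": { "index": "my-logs-000001" },
  "dest": { "index": "my-logs-fixed-000001" }
}
```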
- -### Unmapped fields - -Unmapped fields are undefined within an index's mapping definition. Using unmapped fields to define an exception can prevent it from working as expected, and lead to false positives or unexpected alerts. To fix unmapped fields, [add them](((ref))/explicit-mapping.html#update-mapping) to your indices' mapping definitions. - -In the following example, the selected field is unmapped across two indices. - - - -
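As a minimal sketch of that fix, assuming a hypothetical index `my-logs-000001` and an unmapped field `client_domain`, the update mapping API adds an explicit mapping for the field. Documents indexed afterwards are searchable on it; documents that already contain the field typically need to be reindexed before it becomes searchable.

```
# Add an explicit mapping for the previously unmapped field
PUT my-logs-000001/_mapping
{
  "properties": {
    "client_domain": { "type": "keyword" }
  }
}
```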
\ No newline at end of file 
diff --git a/docs/serverless/what-is-security-serverless.mdx b/docs/serverless/what-is-security-serverless.mdx deleted file mode 100644 index 0a9fe3ca29..0000000000 --- a/docs/serverless/what-is-security-serverless.mdx +++ /dev/null @@ -1,88 +0,0 @@ --- -slug: /serverless/security/what-is-security-serverless -title: ((elastic-sec)) -# description: Description to be written -tags: [ 'serverless', 'security', 'overview' ] -layout: landing --- 

Serverless projects provide you with the existing ((elastic-sec)) on-premises and Elastic Cloud deployment functionality, plus the following new features and capabilities: 

- Continuous onboarding hub at the center of the **Get started** page 
- Security-focused, single-level navigation 
- **Osquery** availability within **Investigations** 
- **Assets** management for ((fleet)), endpoints, and Cloud 
- Security-specific roles 
- Machine learning nodes included by default 
- Developer tools for interacting with your data