
[Security AI] Add Kibana Support for Security AI Prompts Integration #207138

Merged 42 commits into elastic:main on Jan 28, 2025

Conversation

stephmilovic
Contributor

@stephmilovic stephmilovic commented Jan 17, 2025

Out of Band Security AI Prompts

This PR introduces support for the future integration of Security AI prompts in Kibana. Prompts will be stored within the integration as saved objects of the type security-ai-prompt.

To ensure reliability, fallback prompts are maintained in a local file and will be used when the corresponding prompt is unavailable in the integration.

Introduces two methods for fetching prompts:

  • getPrompt: Retrieves a prompt by promptId.
  • getPromptsByGroupId: Retrieves a group of prompts by promptGroupId.

Both methods use a helper, resolveProviderAndModel, to identify the provider + model, either from explicit provider + model arguments or from a connector argument. If neither is provided, the connector is fetched by id from the actionsClient.

The prompt saved objects are then fetched either by promptId in getPrompt or by the promptGroupId field in getPromptsByGroupId.

Finally, the returned saved objects along with a local prompt object are used in findPromptEntry to identify the best matching prompt per promptId + promptGroupId and provider + model. The prompts are matched in the following order:

  1. provider + model (integration)
  2. provider (integration)
  3. default (integration)
  4. provider + model (local)
  5. provider (local)
  6. default (local)
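The fallback order above can be sketched as follows. This is an illustrative sketch only, not the actual Kibana implementation; PromptEntry, pickPrompt, and findBestPrompt are hypothetical names:

```typescript
// Illustrative sketch of the matching order described above.
interface PromptEntry {
  provider?: string;
  model?: string;
  content: string;
}

// Within one source (integration or local), prefer provider + model,
// then provider only, then the default entry.
function pickPrompt(
  entries: PromptEntry[],
  provider?: string,
  model?: string
): string | undefined {
  const byProviderAndModel =
    model !== undefined
      ? entries.find((e) => e.provider === provider && e.model === model)
      : undefined;
  const byProvider = entries.find(
    (e) => e.provider === provider && e.model === undefined
  );
  const byDefault = entries.find(
    (e) => e.provider === undefined && e.model === undefined
  );
  return (byProviderAndModel ?? byProvider ?? byDefault)?.content;
}

// Integration entries at every specificity level take precedence
// over the local fallbacks.
function findBestPrompt(
  integration: PromptEntry[],
  local: PromptEntry[],
  provider?: string,
  model?: string
): string | undefined {
  return (
    pickPrompt(integration, provider, model) ?? pickPrompt(local, provider, model)
  );
}
```

Note that a provider-only integration prompt still beats a provider + model local prompt: the integration source is exhausted before the local fallbacks are consulted.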

promptId, promptGroupId, promptDictionary

A promptId is the unique identifier of a prompt. The promptGroupId identifies the group to which the prompt belongs. Each promptId is stored in promptDictionary; when using a promptId, refer to it via the dictionary, e.g. promptDictionary.attackDiscoveryDefault.
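As a minimal sketch of these identifiers (the entries below are made up for illustration; the real dictionaries contain different keys):

```typescript
// Hypothetical shapes for promptGroupId and promptDictionary;
// the entries are illustrative only.
const promptGroupId = {
  aiAssistant: 'aiAssistant',
  attackDiscovery: 'attackDiscovery',
} as const;

const promptDictionary = {
  userPrompt: 'userPrompt',
  attackDiscoveryDefault: 'attackDiscoveryDefault',
} as const;

// Callers never pass raw strings; they reference the dictionary, e.g.:
// getPrompt({ promptId: promptDictionary.attackDiscoveryDefault,
//             promptGroupId: promptGroupId.attackDiscovery, ... })
```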

Inference

We try to find the provider for inference connectors from config, since these connectors all have different provider types and we cannot rely on the actionTypeId. When the inference connector uses EIS, we use a mapping to identify the provider + model from the EIS model. If no provider can be identified for an inference connector, we default to Bedrock as provider. For 9.0.0 this model mapping will live in solutions code, but we hope to find a centralized design for the mappings in 9.1.0.
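A rough sketch of that resolution logic, assuming a hypothetical eisModelMap and connector config shape (the mapping entries and type names are made up for illustration):

```typescript
// Hypothetical sketch of provider resolution for inference connectors.
// eisModelMap entries and InferenceConnectorConfig are illustrative only.
interface InferenceConnectorConfig {
  provider?: string;
  providerConfig?: { model_id?: string };
}

// Maps an EIS model id to the underlying provider + model.
const eisModelMap: Record<string, { provider: string; model: string }> = {
  'example-eis-model': { provider: 'bedrock', model: 'example-underlying-model' },
};

function resolveInferenceProvider(config: InferenceConnectorConfig): {
  provider: string;
  model?: string;
} {
  const modelId = config.providerConfig?.model_id;
  // EIS: look the model up in the mapping.
  if (modelId !== undefined && eisModelMap[modelId] !== undefined) {
    return eisModelMap[modelId];
  }
  // Non-EIS inference connectors carry their provider in config.
  if (config.provider !== undefined && config.provider !== 'elastic') {
    return { provider: config.provider, model: modelId };
  }
  // No provider identified: default to Bedrock, per the PR description.
  return { provider: 'bedrock', model: modelId };
}
```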

Testing

  1. Be connected with LangSmith
  2. Run EIS with Elasticsearch (ping Steph for instructions if you don't know how to do so)
  3. use curl to create an inference endpoint configured for EIS:
    curl -k --location --request PUT 'http://elastic:changeme@localhost:9200/_inference/local-eis-test' \
      --header 'Content-Type: application/json' \
      --data '{
        "service": "elastic",
        "task_type": "chat_completion",
        "service_settings": {
            "model_id": "rainbow-sprinkles"
        }
    }'
    
  4. Enable inference feature flag: xpack.stack_connectors.enableExperimental: ['inferenceConnectorOn']
  5. Have an OpenAI connector, Bedrock connector, and preconfigured inference connector:
    xpack.actions.preconfigured:
      my-eis:
        name: EIS Inference Preconfig
        actionTypeId: .inference
        exposeConfig: true
        config:
          provider: 'elastic'
          taskType: 'chat_completion'
          inferenceId: 'local-eis-test'
          providerConfig:
            organization_id: 'org-'
            rate_limit:
              requests_per_minute: 240
            model_id: 'rainbow-sprinkles'
    
  6. Download the exported saved object file below and change the extension from .json back to .ndjson (GitHub does not allow .ndjson attachments). This is a security-ai-prompt saved object, the type that will ship with our planned integration. The saved object specifies a prompt for the Bedrock provider for the AI Assistant system prompt.
    ai-assistant-default.json
  7. Upload the file within Stack Management > Saved Objects (saved objects of this type are hidden from management, so you will not see it in the list)
  8. Go to the assistant. Select the OpenAI connector. Send a message, "hello world". Find the trace in LangSmith. The system message should be the local prompt: "You are a security analyst and expert in resolving security incidents. Your role is to assist by answering questions about Elastic Security. Do not answer questions unrelated to Elastic Security. If available, use the Knowledge History provided to try and answer the question. If not provided, you can try and query for additional knowledge via the KnowledgeBaseRetrievalTool."
  9. Back in the assistant, change the connector to Bedrock. Hit "Regenerate" to send the same message again. Find the trace in LangSmith. The system message should be the prompt from our uploaded saved object: "Default system prompt test"
  10. Back in the assistant, change the connector to EIS Inference. Hit "Regenerate" to send the same message again. Find the trace in LangSmith. The system message should be the prompt from our uploaded saved object: "Default system prompt test"

@stephmilovic stephmilovic added release_note:enhancement v9.0.0 Team: SecuritySolution Security Solutions Team working on SIEM, Endpoint, Timeline, Resolver, etc. backport:prev-minor Backport to (8.x) the previous minor version (i.e. one version back from main) Team:Security Generative AI Security Generative AI v8.18.0 labels Jan 25, 2025
@stephmilovic stephmilovic marked this pull request as ready for review January 25, 2025 17:04
@stephmilovic stephmilovic requested review from a team as code owners January 25, 2025 17:04
@elasticmachine
Contributor

Pinging @elastic/security-solution (Team: SecuritySolution)

@botelastic botelastic bot added the Team:Fleet Team label for Observability Data Collection Fleet team label Jan 25, 2025
@stephmilovic
Contributor Author

@elasticmachine merge upstream

Comment on lines 49 to 59
let userPrompt = '';
if (state.llmType === 'gemini') {
  userPrompt = await getPrompt({
    actionsClient,
    connectorId: state.connectorId,
    promptId: promptDictionary.userPrompt,
    promptGroupId: promptGroupId.aiAssistant,
    provider: 'gemini',
    savedObjectsClient,
  });
}
Contributor

nit: Consider an alternative like:

  const userPrompt =
    state.llmType === 'gemini'
      ? await getPrompt({
          actionsClient,
          connectorId: state.connectorId,
          promptId: promptDictionary.userPrompt,
          promptGroupId: promptGroupId.aiAssistant,
          provider: 'gemini',
          savedObjectsClient,
        })
      : '';

to eliminate the local mutation.

@stephmilovic
Contributor Author

@elasticmachine merge upstream


const prompts = await savedObjectsClient.find<Prompt>({
type: promptSavedObjectType,
filter: `${promptSavedObjectType}.attributes.promptId: ${promptId} AND ${promptSavedObjectType}.attributes.promptGroupId: ${promptGroupId}`,
Contributor


Consider wrapping the filter's promptId and promptGroupId in quotes, for example:

filter: `${promptSavedObjectType}.attributes.promptId: "${promptId}" AND ${promptSavedObjectType}.attributes.promptGroupId: "${promptGroupId}"`,

@elasticmachine
Contributor

💚 Build Succeeded

Metrics [docs]

Module Count

Fewer modules lead to a faster build time.

| id | before | after | diff |
|---|---|---|---|
| inference | 27 | 28 | +1 |
| observabilityAIAssistantApp | 506 | 507 | +1 |
| observabilityAiAssistantManagement | 384 | 385 | +1 |
| searchAssistant | 264 | 265 | +1 |
| searchPlayground | 280 | 281 | +1 |
| total | | | +5 |

Public APIs missing comments

Total count of every public API that lacks a comment. Target amount is 0. Run node scripts/build_api_docs --plugin [yourplugin] --stats comments for more detailed information.

| id | before | after | diff |
|---|---|---|---|
| @kbn/inference-common | 51 | 55 | +4 |

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100kb.

| id | before | after | diff |
|---|---|---|---|
| fleet | 173.1KB | 173.5KB | +326.0B |

Saved Objects .kibana field count

Every field in each saved object type adds overhead to Elasticsearch. Kibana needs to keep the total field count below Elasticsearch's default limit of 1000 fields. Only specify field mappings for the fields you wish to search on or query. See https://www.elastic.co/guide/en/kibana/master/saved-objects-service.html#_mappings

| id | before | after | diff |
|---|---|---|---|
| security-ai-prompt | - | 8 | +8 |

Unknown metric groups

API count

| id | before | after | diff |
|---|---|---|---|
| @kbn/inference-common | 162 | 166 | +4 |

History

Contributor

@andrew-goldstein andrew-goldstein left a comment


Thank you @stephmilovic for adding support for out of band prompt updates to the security assistant and Attack discovery prompts! 🙏
✅ Desk tested locally
LGTM 🚀

@stephmilovic stephmilovic merged commit 7af5a83 into elastic:main Jan 28, 2025
9 checks passed
@kibanamachine
Contributor

Starting backport for target branches: 8.x

https://github.com/elastic/kibana/actions/runs/13020825673

@kibanamachine
Contributor

💔 All backports failed

Status Branch Result
8.x Backport failed because of merge conflicts

Manual backport

To create the backport manually run:

node scripts/backport --pr 207138

Questions ?

Please refer to the Backport tool documentation

@stephmilovic
Contributor Author

💚 All backports created successfully

Status Branch Result
8.x

Note: Successful backport PRs will be merged automatically after passing CI.

Questions ?

Please refer to the Backport tool documentation

stephmilovic added a commit to stephmilovic/kibana that referenced this pull request Jan 28, 2025
…lastic#207138)

(cherry picked from commit 7af5a83)

# Conflicts:
#	src/core/packages/saved-objects/server-internal/src/object_types/index.ts
#	x-pack/platform/plugins/shared/fleet/server/routes/epm/index.test.ts
#	x-pack/platform/plugins/shared/fleet/server/services/agent_policies/package_policies_to_agent_permissions.test.ts
stephmilovic added a commit that referenced this pull request Jan 29, 2025
…ation (#207138) (#208648)

# Backport

This will backport the following commits from `main` to `8.x`:
- [[Security AI] Add Kibana Support for Security AI Prompts Integration
(#207138)](#207138)

<!--- Backport version: 9.6.4 -->

### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sorenlouv/backport)

<!--BACKPORT [{"author":{"name":"Steph
Milovic","email":"[email protected]"},"sourceCommit":{"committedDate":"2025-01-28T22:35:39Z","message":"[Security
AI] Add Kibana Support for Security AI Prompts Integration
(#207138)","sha":"7af5a8338bab6da9bc45eccfd21b11129b05048c","branchLabelMapping":{"^v9.0.0$":"main","^v8.18.0$":"8.x","^v(\\d+).(\\d+).\\d+$":"$1.$2"}},"sourcePullRequest":{"labels":["release_note:enhancement","Team:Fleet","v9.0.0","Team:
SecuritySolution","backport:prev-minor","Team:Security Generative
AI","v8.18.0"],"title":"[Security AI] Add Kibana Support for Security AI
Prompts
Integration","number":207138,"url":"https://github.com/elastic/kibana/pull/207138","mergeCommit":{"message":"[Security
AI] Add Kibana Support for Security AI Prompts Integration
(#207138)","sha":"7af5a8338bab6da9bc45eccfd21b11129b05048c"}},"sourceBranch":"main","suggestedTargetBranches":["8.x"],"targetPullRequestStates":[{"branch":"main","label":"v9.0.0","branchLabelMappingKey":"^v9.0.0$","isSourceBranch":true,"state":"MERGED","url":"https://github.com/elastic/kibana/pull/207138","number":207138,"mergeCommit":{"message":"[Security
AI] Add Kibana Support for Security AI Prompts Integration
(#207138)","sha":"7af5a8338bab6da9bc45eccfd21b11129b05048c"}},{"branch":"8.x","label":"v8.18.0","branchLabelMappingKey":"^v8.18.0$","isSourceBranch":false,"state":"NOT_CREATED"}]}]
BACKPORT-->