AttackGen

AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE ATT&CK framework. The tool generates tailored incident response scenarios based on user-selected threat actor groups and your organisation's details.

Star the Repo

If you find AttackGen useful, please consider starring the repository on GitHub. This helps more people discover the tool. Your support is greatly appreciated! ⭐

Features

  • Generates unique incident response scenarios based on chosen threat actor groups.
  • Allows you to specify your organisation's size and industry for a tailored scenario.
  • Displays a detailed list of techniques used by the selected threat actor group as per the MITRE ATT&CK framework.
  • Create custom scenarios based on a selection of ATT&CK techniques.
  • 🆕 Use scenario templates to quickly generate custom scenarios based on common types of cyber incidents.
  • 🆕 AttackGen Assistant - a chat interface for updating and/or asking questions about generated scenarios.
  • Capture user feedback on the quality of the generated scenarios.
  • Downloadable scenarios in Markdown format.
  • Use the OpenAI API, Azure OpenAI Service, 🆕 Google AI API, Mistral API, or locally hosted Ollama models to generate incident response scenarios.
  • Available as a Docker container image for easy deployment.
  • Optional integration with LangSmith for powerful debugging, testing, and monitoring of model performance.

AttackGen Screenshot

Releases

v0.5.1

  • GPT-4o Model Support - Enhanced Model Options: AttackGen now supports the use of OpenAI's GPT-4o model. GPT-4o is OpenAI's leading model, able to generate scenarios twice as fast as GPT-4 for half the cost.

v0.5

  • AttackGen Assistant - Iterative Scenario Refinement: The new chat interface lets users interact with their generated incident response scenarios, making it easy to update and ask questions about a scenario without regenerating it from scratch. This enables an iterative approach to scenario development, where users can refine and improve their scenarios based on the AI assistant's responses. Contextual Assistance: The assistant responds to user queries based on the context of the generated scenario and the conversation history, so its responses stay relevant and helpful when refining the scenario.
  • Quick Start Templates for Custom Scenarios - Quick Scenario Generation: Users can now quickly generate custom incident response scenarios from predefined templates for common types of cyber incidents, such as phishing attacks, ransomware attacks, malware infections, and insider threats. This makes it easier to create realistic scenarios without having to select individual ATT&CK techniques. Streamlined Workflow: Template selection is integrated into the custom scenario generation process; choosing a template automatically populates the relevant ATT&CK techniques, which can then be customised further if needed.
  • Google AI API Integration - Expanded Model Options: AttackGen now supports Google's Gemini models for generating incident response scenarios, expanding the range of high-quality models available to users and allowing them to leverage Google's AI capabilities for creating realistic and diverse scenarios.

v0.4

  • Mistral API Integration - Alternative Model Provider: Users can now leverage Mistral AI models to generate incident response scenarios. This provides an alternative to the OpenAI and Azure OpenAI Service models, allowing users to explore and compare the performance of different language models for their specific use case.
  • Local Model Support using Ollama - Local Model Hosting: AttackGen now supports locally hosted LLMs via an integration with Ollama. This is particularly useful for organisations with strict data privacy requirements or those who prefer to keep their data on-premises. Please note that this feature is not available in the version of AttackGen hosted on Streamlit Community Cloud at https://attackgen.streamlit.app
  • Optional LangSmith Integration - Improved Flexibility: The integration with LangSmith is now optional. If no LangChain API key is provided, users see an informative message explaining that the run won't be logged by LangSmith, rather than an error being thrown. This improves the overall user experience and allows users to continue using AttackGen without LangSmith.
  • Various Bug Fixes and Improvements - Enhanced User Experience: This release includes several bug fixes and improvements to the user interface, making AttackGen more user-friendly and robust.
Release notes for earlier versions:

v0.3

  • Azure OpenAI Service Integration - Enhanced Integration: Users can now choose to use OpenAI models deployed on the Azure OpenAI Service, in addition to the standard OpenAI API. This offers a seamless and secure route for incorporating AttackGen into existing Azure ecosystems, leveraging established commercial and confidentiality agreements. Improved Data Security: Running AttackGen from Azure ensures that application descriptions and other data remain within the Azure environment, making it ideal for organisations that handle sensitive data.
  • LangSmith for Azure OpenAI Service - Enhanced Debugging: LangSmith tracing is now available for scenarios generated using the Azure OpenAI Service. This provides a powerful tool for debugging, testing, and monitoring model performance, giving users insight into the model's decision-making process and helping them identify potential issues with generated scenarios. User Feedback: LangSmith also captures user feedback on the quality of scenarios generated using the Azure OpenAI Service, providing valuable insights into model performance and user satisfaction.
  • Model Selection for OpenAI API - Flexible Model Options: Users can now select from several models available from the OpenAI API endpoint, such as gpt-4-turbo-preview. This allows for greater customisation and experimentation with different language models, so users can find the most suitable model for their specific use case.
  • Docker Container Image - Easy Deployment: AttackGen is now available as a Docker container image, making it easier to deploy and run the application in a consistent and reproducible environment. This is particularly useful for users who want to run AttackGen in a containerised environment or deploy it on a cloud platform.

v0.2

  • Custom Scenarios based on ATT&CK Techniques - For Mature Organisations: This feature is particularly beneficial if your organisation has advanced threat intelligence capabilities. For instance, if you're monitoring a newly identified or lesser-known threat actor group, you can tailor incident response testing scenarios specific to the techniques used by that group. Focused Testing: Alternatively, use this feature to focus your incident response testing on specific parts of the cyber kill chain or certain MITRE ATT&CK tactics, such as 'Lateral Movement' or 'Exfiltration'. This is useful for organisations looking to evaluate and improve specific areas of their defence posture.
  • User feedback on generated scenarios - Collecting feedback is essential to track model performance over time and helps to highlight strengths and weaknesses in scenario generation tasks.
  • Improved error handling for missing API keys - Improved user experience.
  • Replaced Streamlit st.spinner widgets with the new st.status widget - Provides better visibility into long-running processes (i.e. scenario generation); see the sketch below.
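
For reference, a minimal sketch of the st.status pattern mentioned above (illustrative only; the exact labels and steps used in AttackGen may differ):

import time
import streamlit as st

# st.status shows a collapsible status container that can be updated as work progresses,
# giving better visibility than st.spinner for long-running tasks such as scenario generation.
with st.status("Generating scenario...", expanded=True) as status:
    st.write("Retrieving ATT&CK techniques...")
    time.sleep(2)  # stand-in for the actual scenario generation call
    status.update(label="Scenario generated", state="complete")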

v0.1

Initial release.

Requirements

  • Recent version of Python.
  • Python packages: pandas, streamlit, and the other packages required by the langchain and mitreattack libraries (see requirements.txt).
  • OpenAI API key.
  • LangChain API key (optional) - see LangSmith Setup section below for further details.
  • Data files: enterprise-attack.json (MITRE ATT&CK dataset in STIX format) and groups.json.

Installation

Option 1: Cloning the Repository

  1. Clone this repository:
git clone https://github.com/mrwadams/attackgen.git
  2. Change directory into the cloned repository:
cd attackgen
  3. Install the required Python packages:
pip install -r requirements.txt

Option 2: Using Docker

  1. Pull the Docker container image from Docker Hub:
docker pull mrwadams/attackgen

LangSmith Setup

If you would like to use LangSmith for debugging, testing, and monitoring of model performance, you will need to set up a LangSmith account and create a .streamlit/secrets.toml file that contains your LangChain API key. Please follow the instructions here to set up your account and obtain your API key. You'll find a secrets.toml-example file in the .streamlit/ directory that you can use as a template for your own secrets.toml file.

If you do not wish to use LangSmith, you must still have a .streamlit/secrets.toml file in place, but you can leave the LANGCHAIN_API_KEY field empty.
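
For illustration, a minimal .streamlit/secrets.toml might contain nothing more than the following (the key name matches the one referenced elsewhere in this README; leave the value empty if you are not using LangSmith):

# .streamlit/secrets.toml
LANGCHAIN_API_KEY = ""  # add your LangChain/LangSmith API key here, or leave empty to skip LangSmith logging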

Data Setup

Download the latest version of the MITRE ATT&CK dataset in STIX format from here. Ensure to place this file in the ./data/ directory within the repository.
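
For example, assuming the dataset is still published in the mitre-attack/attack-stix-data GitHub repository, it could be downloaded from the repository root with a command along these lines:

curl -L -o ./data/enterprise-attack.json https://raw.githubusercontent.com/mitre-attack/attack-stix-data/master/enterprise-attack/enterprise-attack.json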

Running AttackGen

After the data setup, you can run AttackGen with the following command:

streamlit run 👋_Welcome.py

You can also try the app on Streamlit Community Cloud.

Usage

Running AttackGen

Option 1: Running the Streamlit App Locally

  1. Run the Streamlit app:
streamlit run 👋_Welcome.py
  2. Open your web browser and navigate to the URL provided by Streamlit.
  3. Use the app to generate standard or custom incident response scenarios (see below for details).

Option 2: Using the Docker Container Image

  1. Run the Docker container:
docker run -p 8501:8501 mrwadams/attackgen
This command will start the container and map port 8501 (the default port for Streamlit apps) from the container to your host machine. See below for an example of mounting your own secrets file into the container.
  2. Open your web browser and navigate to http://localhost:8501.
  3. Use the app to generate standard or custom incident response scenarios (see below for details).
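
If you want to supply your own secrets file (or ATT&CK dataset) to the container, one option is to mount it at run time. The /app path below is an assumption based on the container paths that appear in the issue tracebacks later on this page:

docker run -p 8501:8501 -v $(pwd)/.streamlit/secrets.toml:/app/.streamlit/secrets.toml mrwadams/attackgen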

Generating Scenarios

Standard Scenario Generation

  1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
  2. Enter your OpenAI API key, or the API key and deployment details for your model on the Azure OpenAI Service.
  3. Select your organisation's industry and size from the dropdown menus.
  4. Navigate to the Threat Group Scenarios page.
  5. Select the Threat Actor Group that you want to simulate.
  6. Click on 'Generate Scenario' to create the incident response scenario.
  7. Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

Custom Scenario Generation

  1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
  2. Enter your OpenAI API Key, or the API key and deployment details for your model on the Azure OpenAI Service.
  3. Select your organisation's industry and size from the dropdown menus.
  4. Navigate to the Custom Scenario page.
  5. Use the multi-select box to search for and select the ATT&CK techniques relevant to your scenario.
  6. Click 'Generate Scenario' to create your custom incident response testing scenario based on the selected techniques.
  7. Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

Please note that generating scenarios may take a minute or so. Once the scenario is generated, you can view it on the app and also download it as a Markdown file.

Contributing

I'm very happy to accept contributions to this project. Please feel free to submit an issue or pull request.

Licence

This project is licensed under GNU GPLv3.


Issues

Error Using OpenAI API Key - https://attackgen.streamlit.app/

Hello- What a neat project! I'm attempting to exercise the AttackGen instance hosted at https://attackgen.streamlit.app/

Screenshot 01: On the Welcome page, I paste a freshly created OpenAI API key from my ChatGPT Pro account, press enter to accept it, and make the additional selections.

Screenshot 02: On the Generate Threat Group Scenario page, I click "Generate Scenario".

Screenshot 03: The Generate Threat Group Scenario page then presents the following error:
"An error occurred while generating the scenario: Error code: 404 - {'error': {'message': 'The model gpt-4 does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}"

Screenshot 04: Returning to the Welcome page, the API field is cleared.

Entering the same API key or a newly created one leads to the same results and error message.

Could you please review and advise if something is not working or how I should change my approach to exercising this AttackGen instance?

Respectfully,

Orlando Stevenson


Errors when accessing the AttackGen tool

Hi @mrwadams, this is the error displayed when trying to access the "Threat_Group_Scenarios", "Custom_Scenarios", and "AttackGen_Assistant" pages:

ModuleNotFoundError: This app has encountered an error. The original error message is redacted to prevent data leaks. Full error details have been recorded in the logs (if you're on Streamlit Cloud, click on 'Manage app' in the lower right of your app).

Traceback:

File "/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 600, in _run_script

exec(code, module.__dict__)

File "/mount/src/attackgen/pages/1_๐Ÿ›ก๏ธ_Threat_Group_Scenarios.py", line 6, in

from langchain_community.llms import Ollama


No secrets file found

After starting the welcome script using Python 3.11.9 and Streamlit 1.33, I entered my OpenAI API key and clicked on "Threat Group Scenarios." At this point I received the error below about a missing Streamlit secrets file:

FileNotFoundError: No secrets files found. Valid paths for a secrets.toml file are: /home/username/.streamlit/secrets.toml, /home/username/sh/learning/chatgpt-for-cybersecurity/attackgen/.streamlit/secrets.toml
Traceback:

File "/home/username/mambaforge/envs/openai/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 584, in _run_script
    exec(code, module.__dict__)
File "/home/username/sh/learning/chatgpt-for-cybersecurity/attackgen/pages/1_๐Ÿ›ก๏ธ_Threat_Group_Scenarios.py", line 30, in <module>
    if "LANGCHAIN_API_KEY" in st.secrets:
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/mambaforge/envs/openai/lib/python3.11/site-packages/streamlit/runtime/secrets.py", line 345, in __contains__
    return key in self._parse(True)
                  ^^^^^^^^^^^^^^^^^
File "/home/username/mambaforge/envs/openai/lib/python3.11/site-packages/streamlit/runtime/secrets.py", line 214, in _parse
    raise FileNotFoundError(err_msg)

Suggestion: rename Welcome.py

The little hand-wavy-thing at the beginning of the file name 👋_Welcome.py is a PITA when it comes to the command line. Would you consider renaming it just "Welcome.py"?

langsmith not disable-able

I want to use attackgen but do not have a langchain API key. This projects documentation indicates
"If you do not wish to use LangSmith, you can delete the LangSmith related environment variables from the top of the following files"

I commented out the environment variables portion in those files. I tried all 4 environment variables, then just the last one (API_KEY) but receive the same failure:

File "/home/bbb/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 542, in _run_script
    exec(code, module.__dict__)
File "/.../attackgen/attackgen/pages/1_๐Ÿ›ก๏ธ_Threat_Group_Scenarios.py", line 28, in <module>
    client = Client()
             ^^^^^^^^
File "/home/bbb/venv/lib/python3.11/site-packages/langsmith/client.py", line 480, in __init__
    _validate_api_key_if_hosted(self.api_url, self.api_key)
File "/home/bbb/venv/lib/python3.11/site-packages/langsmith/client.py", line 269, in _validate_api_key_if_hosted
    raise ls_utils.LangSmithUserError(

Nothing renames Python script to app.py

The script to run the app is called "👋_Welcome.py" and does not get renamed during the setup process. Changing the name to "app.py" and following the instructions to run the Streamlit program allows it to execute as normal.

Can't reach Ollama from docker

Hello,
Thank you for the excellent project.
I'm encountering an issue while attempting to use Ollama from the AttackGen Docker container. I've modified the welcome.py file, substituting:

response = requests.get("http://localhost:11434/api/tags")

with:
response = requests.get("http://host.docker.internal:11434/api/tags")

I'm able to retrieve the list of available Ollama models. However, when attempting to use the threat group scenario or a custom scenario, I encounter an error. It seems that the application is attempting to establish an HTTP connection to localhost:11434 instead of host.docker.internal:11434.
An error occurred while generating the scenario: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff53750c20>: Failed to establish a new connection: [Errno 111] Connection refused'))

Could you please assist with what needs to be adapted? I've grepped for occurrences of 11434 but haven't found anything useful.

Thank you!
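
For what it's worth, a rough sketch of the kind of change being described here, making the Ollama endpoint configurable so that both the model listing and scenario generation talk to the same host. OLLAMA_ENDPOINT is an illustrative environment variable rather than an existing AttackGen setting, and "llama2" is a placeholder model name:

import os
import requests
from langchain_community.llms import Ollama

# Fall back to localhost when no endpoint is configured; inside Docker this could be
# set to host.docker.internal:11434 instead of patching the source.
endpoint = os.getenv("OLLAMA_ENDPOINT", "localhost:11434")

# Same call the Welcome page makes to list the available models, now against the configured endpoint.
tags = requests.get(f"http://{endpoint}/api/tags").json()

# Use the same endpoint for generation, rather than the library's default localhost base URL.
llm = Ollama(model="llama2", base_url=f"http://{endpoint}")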

Remove unnecessary secrets environ variable

A secrets file was listed in the .gitignore file but was still used as the source for an environment variable, causing an error at runtime. Removed it, since it wasn't necessary: the OpenAI key is already used directly in the app.

Scenario Generation Error: 'ascii' codec can't encode character '\xa3' - Possible Encoding Issue

Hello, firstly I want to thank you for this tool. It's amazing. I have one issue with "Generate Scenario": when I try this function it fails. The failure output is as follows:


UI Output
An error occurred while generating the scenario: 'ascii' codec can't encode character '\xa3' in position 16: ordinal not in range(128)

Console output
/app/pages/1_🛡️_Threat_Group_Scenarios.py:198: DeprecationWarning: DataFrameGroupBy.apply operated on the grouping columns. This behavior is deprecated, and in a future version of pandas the grouping columns will be excluded from the operation. Either pass include_groups=False to exclude the groupings or explicitly select the grouping columns after groupby to silence this warning.
.apply(lambda x: x.sample(n=1) if len(x) > 0 else None)
/app/pages/1_🛡️_Threat_Group_Scenarios.py:198: DeprecationWarning: DataFrameGroupBy.apply operated on the grouping columns. This behavior is deprecated, and in a future version of pandas the grouping columns will be excluded from the operation. Either pass include_groups=False to exclude the groupings or explicitly select the grouping columns after groupby to silence this warning.
.apply(lambda x: x.sample(n=1) if len(x) > 0 else None)
/usr/local/lib/python3.12/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The class langchain_community.chat_models.openai.ChatOpenAI was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run pip install -U langchain-openai and import as from langchain_openai import ChatOpenAI warn_deprecated(

Thanks in advance for your help
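
For context, the reported error class can be reproduced in a few lines: '\xa3' is the £ character, which the ASCII codec cannot represent, so the usual fix is to encode or write the scenario text as UTF-8. This is a generic illustration rather than AttackGen's actual code:

text = "Annual revenue of £10m"  # contains the '\xa3' (£) character

try:
    text.encode("ascii")  # raises UnicodeEncodeError: 'ascii' codec can't encode character '\xa3'
except UnicodeEncodeError as err:
    print(err)

print(text.encode("utf-8"))  # works: UTF-8 can represent '£'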

Feature request, add field to do further querying around the generated threat scenarios

Hi,

It would be really useful to be able to ask it more questions and to elaborate on what has been generated.
Something like the following.

Is this repo a one-off poc or do you plan to keep investing in improving it?

--- "a/pages/2_\360\237\233\240\357\270\217_Custom_Scenarios.py"
+++ "b/pages/2_\360\237\233\240\357\270\217_Custom_Scenarios.py"
@@ -441,7 +441,35 @@ try:
                             st.markdown("---")
                             st.markdown(st.session_state['custom_scenario_text'])
                             st.download_button(label="Download Scenario", data=st.session_state['custom_scenario_text'], file_name="custom_scenario.md", mime="text/markdown")
-
+            st.session_state["question"] = st.text_input("Ask question:")
+            if st.button("Send Question", key="ask_question"):
+                if st.session_state["question"]:
+                    if 'custom_scenario_text' in st.session_state:
+                        st.markdown("---")
+                        original_text = st.session_state['custom_scenario_text']
+                        question = st.session_state['question']
+                        messages.append(st.session_state['custom_scenario_text'])
+                        messages.append(HumanMessage(
+                            content=question
+                        ))
+                        model = os.getenv('OLLAMA_MODEL')
+                        endpoint = os.getenv('OLLAMA_ENDPOINT')
+                        st.markdown("Querying LLM")
+                        llm = Ollama(model=model, base_url=f"http://{endpoint}")
+                        response = llm.invoke(messages, model=model)
+                        st.markdown("---")
+                        all_content = original_text + "\n\n---\n" + question + "\n\n---\n" + response + "\n\n"
+                        st.markdown(original_text)
+                        st.markdown("---")
+                        st.markdown(question)
+                        st.markdown("---")
+                        st.markdown(response)
+                        st.markdown("---")
+                        st.session_state['custom_scenario_text'] = all_content
+                        st.session_state.pop('question')
+                    else:
+                        st.markdown("You must generate a scenario first")
+                        st.session_state.pop("question")

Error generating a scenario

Hello. I'm sure it's something simple and that I'm just being clumsy, but I've tried on the website and by cloning the repository locally, and when I generate a scenario, I get the following error message:
An error occurred while generating the scenario: Error code: 404 - {'error': {'message': 'The model gpt-4-turbo-preview does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
Thank you in advance.
