Think of this script as the brain and heart of the entire project. When an operator starts the tool in the terminal, live_recon.py is the first thing that comes to life. It is the conductor that directs the orchestra of various scan tools. Its job is to receive commands, plan the mission, dispatch the individual soldiers (scans), collect their reports (live findings) in real-time, and finally, survey the entire battlefield.
diff_intern_extern(target)
def diff_intern_extern(target):
    try:
        ip = ipaddress.ip_address(target)
        return "internal" if ip.is_private or ip.is_loopback else "external"
    except ValueError:
        return "external"
Goal: This function is our first scout. It must make a critical decision: are we attacking a target in the internal network (e.g., in a corporate network) or on the external, public internet? Scan strategies will depend on this later.
Line 2: try: - This initiates a "safe" block. The code here is attempted, but if an error occurs, the program doesn't crash but instead jumps to the except block. It's our safety net.
Line 3: ip = ipaddress.ip_address(target) - The ipaddress library is like an ID checker for IP addresses. It tries to convert the text (target) into a special IP object. This only works if the text is a valid IP address like "192.168.1.1" or "8.8.8.8".
Line 4: return "internal" if ip.is_private ... - This is the core. The IP object has built-in superpowers. ip.is_private automatically detects if it's a private IP (like 192.168.x.x, 10.x.x.x). ip.is_loopback detects the "self" address (127.0.0.1). If either is true, it's an internal target. Otherwise, it's external.
Lines 5-6: except ValueError: return "external" - This is Plan B. If the ID checker in line 3 fails (because the target is, for example, "google.com"), it raises a ValueError. We catch this error and, as a safety measure, assume that any domain name is an external target.
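Seen in isolation, the scout behaves like this (a self-contained copy of the function above, plus the ipaddress import it relies on):

```python
import ipaddress

def diff_intern_extern(target):
    # Valid IPs are classified via the ipaddress module; anything that
    # fails parsing (e.g., a hostname) is treated as external.
    try:
        ip = ipaddress.ip_address(target)
        return "internal" if ip.is_private or ip.is_loopback else "external"
    except ValueError:
        return "external"

print(diff_intern_extern("192.168.1.1"))  # internal (private range)
print(diff_intern_extern("127.0.0.1"))    # internal (loopback)
print(diff_intern_extern("8.8.8.8"))      # external (public)
print(diff_intern_extern("google.com"))   # external (not an IP at all)
```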
if __name__ == "__main__":
if __name__ == "__main__":
    os.system("clear")
    print_mega_banner()
    check_dependencies()
    time.sleep(1)

    parser = argparse.ArgumentParser(description="Recon Monster – Autonomous Recon Framework")
    parser.add_argument("--ip", required=True, help="Target IP address or hostname")
    args = parser.parse_args()
    target = args.ip

    sudo = [] if os.getuid() == 0 else ["sudo"]
    scan_mode = diff_intern_extern(target)
    log_dir = f"logs/{target}"
    os.makedirs(log_dir, exist_ok=True)

    live_findings = {
        "ipv6": set(),
        "nmap": set(),
        "webserver": set(),
        "methods": set(),
        "cookie": set(),
        "nikto": set(),
        "ferox": set(),
    }
Fundamentals: The if __name__ == "__main__": block is a standard in Python. It ensures that this code is only executed when we start the script directly (python live_recon.py), but not when we import it as a module into another script.
Preparation (Lines 2-5): Before the battle begins, the battlefield is prepared. os.system("clear") cleans the screen. print_mega_banner() provides the epic first impression. check_dependencies() is the weapon check – are all external tools (soldiers) like Nmap present?
Receiving Commands (Lines 7-10): argparse is like an order form for the command line. We define what information we need from the user (--ip) and make it mandatory (required=True). parser.parse_args() then reads the user's input and stores it.
Root Check (Line 12): os.getuid() == 0 is the way in Linux/Unix to ask: "Am I the god-emperor (root)?". The user ID of root is always 0. If yes, the sudo list is empty. If not, the list is filled with the word "sudo" so we can prepend it to commands that require admin rights.
Data Structure (Lines 18-26): live_findings is our central notebook. A set is like a list but has a superpower: it can only contain each item once. If we add "http" 10 times, it will still only contain "http" once. This is perfect for our live banner, which shouldn't display duplicates.
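The deduplicating superpower of a set is easy to demonstrate:

```python
findings = set()
for service in ["http", "http", "ssh", "http"]:
    findings.add(service)  # adding a duplicate is silently ignored

print(sorted(findings))  # ['http', 'ssh']
```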
for scan in SCANS_TO_RUN:
for scan in SCANS_TO_RUN:
    proc = None
    try:
        print_live_banner(scan_mode, live_findings)
        print_scan_title(scan["name"])

        cmd = [p.replace("{TARGET}", target) for p in scan["command"]]
        if cmd[0] not in ["curl", "ping6"]:
            cmd = sudo + cmd

        proc = subprocess.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
            errors="ignore"
        )

        for line in proc.stdout:
            is_ferox = "ferox" in scan["name"].lower()
            if is_ferox:
                try:
                    if line.strip().startswith("{"):
                        data = json.loads(line)
                        url = data.get("url", "")
                        status = data.get("status", 200)
                        lime_color = "\033[92m"
                        reset_color = "\033[0m"
                        print(f"{lime_color}[+] Found: {url} (Status: {status}){reset_color}")
                    else:
                        print(line.rstrip())
                except json.JSONDecodeError:
                    print(line.rstrip())
            else:
                print(line.rstrip())
            parse_line(scan["name"], line, live_findings)

        proc.wait()
        print_scan_end()

        if "Nmap" in scan["name"]:
            jf = f"{log_dir}/nmap_full.json"
            if os.path.exists(jf):
                tasks = NmapRunner(jf).run_analysis()
                for t in tasks:
                    live_findings["nmap"].add(t)
    except KeyboardInterrupt:
        print(f"\n\n[!] SCAN SKIPPED: {scan['name']} interrupted by user.")
        if proc:
            proc.terminate()
            try:
                proc.wait(timeout=2)
            except subprocess.TimeoutExpired:
                proc.kill()
        print_scan_end()
        time.sleep(1)
        continue
Goal: This is the main engine. It systematically works through the list of scans from scans.py.
Building the Command (Line 7): [p.replace("{TARGET}", target) for p in scan["command"]] is an elegant Python shorthand ("List Comprehension"). Think of it as a mini-factory in one line: it takes the command template, punches the real target into the right place, and puts the finished part into a new list called cmd.
Starting the Process (Line 11): subprocess.Popen is the command to start a new, independent process (our scan). The parameters are crucial:
stdout=subprocess.PIPE: Tells the process: "Don't send your output directly to the terminal, but through an invisible tube (pipe) so I can read it."
stderr=subprocess.STDOUT: Also redirects all error messages into the same tube. This way, we miss nothing.
Live Parsing (Line 21): for line in proc.stdout: is the magic. This loop doesn't wait for the scan to finish. It listens at the "tube" and grabs **each line individually** as soon as it arrives. This enables real-time analysis.
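The streaming pattern can be sketched in isolation. The child process here is a stand-in Python one-liner, not one of the real scan tools:

```python
import subprocess
import sys

# Minimal sketch of the live-parsing pattern used in the main loop.
cmd = [sys.executable, "-c", "print('line 1'); print('line 2')"]
proc = subprocess.Popen(
    cmd,
    stdout=subprocess.PIPE,    # read the child's output through a pipe...
    stderr=subprocess.STDOUT,  # ...with errors merged into the same pipe
    text=True,
)

captured = []
for line in proc.stdout:       # yields each line as soon as it arrives
    captured.append(line.rstrip())
proc.wait()

print(captured)  # ['line 1', 'line 2']
```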
The Feroxbuster Filter (Lines 23-38): A student asked what line.strip().startswith("{") means. Here is the explanation: It's a chain of commands.
line.strip(): Is like a cleaner. It takes the line and removes all invisible spaces or newlines at the beginning and end.
.startswith("{"): After cleaning, it looks at the first character. Is it an opening curly brace {? JSON objects always begin with {. This check is our gold filter, separating the normal "work noise" from the valuable JSON findings.
Nmap Post-Analysis (Lines 43-48): Nmap is our most important spy. After it has submitted its report (the JSON file), a specialist (NmapRunner) is assigned to it to derive concrete follow-up missions (subroutines) from the raw data.
After the loop
print_live_banner(scan_mode, live_findings)
for task in live_findings["nmap"]:
    fn = getattr(nmap_subroutines, task, None)
    if fn:
        fn(target)
show_detailed_logs(log_dir)
Goal: After all primary scans have run, the final actions are executed here.
Dynamic Execution (Lines 4-7): This is an advanced and extremely powerful technique. getattr(nmap_subroutines, task, None) is like a magical grab into a toolbox. It says: "Give me the tool (the function) from the nmap_subroutines box whose name is exactly in the text task." If the text is "ftp_brute", it fetches the ftp_brute function. This allows us to call functions dynamically based on text, without having to build a huge if/elif/else chain. This makes the code incredibly flexible for future extensions.
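A minimal sketch of this dynamic dispatch, using a SimpleNamespace as a hypothetical stand-in for the nmap_subroutines module:

```python
import types

# Hypothetical stand-in for the nmap_subroutines module and one of its missions.
nmap_subroutines = types.SimpleNamespace(
    ftp_brute=lambda target: f"brute-forcing {target}",
)

task = "ftp_brute"
fn = getattr(nmap_subroutines, task, None)  # look up the function by its name
result = fn("10.0.0.5") if fn else None
print(result)  # brute-forcing 10.0.0.5

# An unknown task simply resolves to None instead of crashing:
assert getattr(nmap_subroutines, "no_such_task", None) is None
```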
Debriefing (Line 9): show_detailed_logs(log_dir) is the final step. The operator is given the option to view all the original, unfiltered reports from their soldiers (the scan tools).
Every army needs engineers and logisticians. The utils.py module is exactly that: our Swiss Army Knife. It contains fundamental helper functions that are repeatedly called by other parts of the program. The most critical mission of this module is the weapon check (check_dependencies), which ensures that all our "soldiers" (external tools like Nmap) are present and ready for battle before the engagement begins.
check_dependencies()
import shutil
import sys
import subprocess
import os


def check_dependencies():
    print("[+] Checking toolchain...")
    required_tools = [
        "nmap", "nikto", "feroxbuster", "curl", "wpscan",
        "joomscan", "hydra", "rpcclient", "enum4linux", "wget"
    ]

    missing_tools = []
    for tool in required_tools:
        if shutil.which(tool) is None:
            missing_tools.append(tool)

    if missing_tools:
        print(f"[ERROR] The following tools are missing: {', '.join(missing_tools)}")
        answer = input(
            "Should Recon Monster attempt to install them automatically? "
            "(apt-get will be used) [y/N]: "
        )
        if answer.lower() == 'y':
            print("[+] Attempting to install missing tools...")
            sudo_prefix = [] if os.getuid() == 0 else ["sudo"]
            try:
                update_command = sudo_prefix + ["apt-get", "update"]
                subprocess.run(update_command, check=True)
                install_command = sudo_prefix + ["apt-get", "install", "-y"] + missing_tools
                subprocess.run(install_command, check=True)
                print("[+] Installation successful! Re-checking dependencies...")
                check_dependencies()
            except subprocess.CalledProcessError:
                print("[ERROR] Automatic installation failed. Please install the tools manually.")
                sys.exit(1)
            except FileNotFoundError:
                print("[ERROR] 'sudo' or 'apt-get' not found. Please install dependencies manually.")
                sys.exit(1)
        else:
            print("[ERROR] Aborted by user. Please install the missing tools manually.")
            sys.exit(1)
    else:
        print("[+] All required tools are installed and ready.")
Goal: This function is our armorer. It ensures the tool doesn't crash mid-battle because an external program is missing. It checks the entire arsenal and even offers to automatically acquire missing weapons.
Line 9: required_tools = [...] - This is the official equipment list. Every tool we need for our scans is listed here.
High-End Concept: shutil.which(tool) (Line 16)
Think of shutil.which() as a scout that runs through the entire operating system and asks: "Is there an executable program named 'nmap' anywhere in the standard paths?"
If the program is found, it returns the full path to it (e.g., /usr/bin/nmap).
If not, it returns None (Nothing).
The is None check is the Pythonic way to see if the scout came back empty-handed.
Automatic Installation (Lines 27-46): This is an advanced feature and extremely user-friendly.
subprocess.run(..., check=True) - Unlike Popen, which starts a process and moves on, run starts a process and **waits for it to finish**. The parameter check=True is a built-in safety feature: if the command fails (e.g., because a package wasn't found), it immediately raises a CalledProcessError exception, which we can catch in our except block.
check_dependencies() - A brilliant, recursive move. After the installation attempt, the function calls itself again to verify that all weapons are now truly present.
The Emergency Exit (Line 48): sys.exit(1) - If tools are missing and the user declines installation (or it fails), this is the red button. sys.exit(1) terminates the program immediately and uncompromisingly. The exit code 1 is a standard way to signal to the operating system: "The mission was aborted with an error."
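The check=True behavior can be demonstrated without touching apt-get. The failing child here is a stand-in Python one-liner:

```python
import subprocess
import sys

# check=True turns a non-zero exit code into an exception we can catch,
# mirroring the error handling in check_dependencies().
try:
    subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"], check=True)
    outcome = "ok"
except subprocess.CalledProcessError as e:
    outcome = f"failed with code {e.returncode}"

print(outcome)  # failed with code 3
```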
If live_recon.py is the conductor, then scans.py is the sheet music. This module is not an active weapon, but the strategic battle plan for our entire operation. It contains a single, crucial variable: SCANS_TO_RUN. This is an ordered list that precisely defines which scans are executed, in what order, and with which parameters. The beauty of this approach is its extreme flexibility: to change, add, or remove a scan, we only need to adjust this one "sheet music," not the complex logic of the conductor.
This module is pure configuration. It requires no external libraries to fulfill its mission. It is a pure data module.
SCANS_TO_RUN
SCANS_TO_RUN = [
    {
        "name": "IPv6 Discovery Scan",
        "command": ["ping6", "-c", "3", "ff02::1"]
    },
    {
        "name": "Curl - HTTP Headers",
        "command": ["curl", "-I", "-L", "-v", "http://{TARGET}", "-s"]
    },
    # ... more scan definitions ...
    {
        "name": "Nmap Full Scan",
        "command": ["nmap", "-sS", "-sC", "-sV", "-p-", "-Pn",
                    "--min-rate", "5000", "-oA", "logs/{TARGET}/nmap_full",
                    "-oJ", "logs/{TARGET}/nmap_full.json", "{TARGET}"]
    },
    {
        "name": "Feroxbuster Scan",
        "command": [
            "feroxbuster",
            "-u", "http://{TARGET}",
            "-w", "/usr/share/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt",
            "-x", "txt,php,rar,zip,tar...",
            # ... more feroxbuster flags
        ]
    }
]
Goal: This single variable is the brain of the entire operation. It is a list of dictionaries.
The Architecture (High-End Concept): Each element in the list is a dictionary (an "index card") that describes a single scan. This index card always has two drawers:
"name": A human-readable name for the scan, which is displayed in the live banner (e.g., "Nmap Full Scan")."command": A list of strings that represents the exact command and its parameters.The {TARGET} Placeholder (Architectural Masterstroke): Instead of hardcoding the target in every command, the generic placeholder {TARGET} is used. The command center (live_recon.py) is responsible for replacing this placeholder with the actual target at runtime. This makes the battle plan reusable and extremely flexible.
Analysis of the Commands: The commands themselves are those of a professional.
--min-rate 5000 and -Pn command Nmap to be extremely aggressive and fast, skipping the host discovery phase. -oA and -oJ instruct Nmap to save the results in all major formats (normal, XML, grepable) AND as a JSON file. The JSON file is the foundation for our later in-depth analysis.
The Feroxbuster command uses a proven wordlist (directory-list-2.3-medium.txt from SecLists), defines a huge list of interesting file extensions with -x, ignores common "noise" status codes with -C, and outputs everything as a clean JSON file for our parser.
The Order is Crucial: The scans are executed in the exact order they appear in this list. Light, fast scans like curl are placed at the beginning to provide quick initial results, while the long, intensive scans like the Nmap Full Scan come later. This is a strategic decision to maximize the "live" feeling of the tool.
Think of this module as the ammunition store and enemy database of our operation. A soldier in the field who sees a tank doesn't report "big gray thing"; he reports "T-72 tank." For that, he needs a database in his head. This module provides exactly that: it contains the "intelligence" in the form of Python dictionaries (our databases), which allow the live_parser to turn raw scan results (like "Port 80 is open") into concrete, valuable information (like "HTTP web server found"). It is pure, hardcoded knowledge.
This module, similar to scans.py, is a pure data module. It is a collection of knowledge and requires no active libraries to fulfill its mission.
NMAP_SPECIALS
NMAP_SPECIALS = {
    "anonymous ftp login allowed": "ftp_anon",
    "wp-config": "wpconfig",
    "robots.txt": "robots",
    "id_rsa": "ssh_key",
    "microsoft windows": "windows_host",
    "guest login": "smb_guest",
    "sql server": "mssql_db",
    "php version": "php_info",
    "apache tomcat": "tomcat",
    "nagios": "nagios",
    "webmin": "webmin",
    "moodle": "moodle",
    "joomla": "joomla",
    "wordpress": "wordpress"
}
Goal: NMAP_SPECIALS is our "Most Wanted" list for text-based findings. It is a Python dictionary, a type of intelligent index card box.
The Architecture (High-End Concept: Key-Value Store): A dictionary always stores data in key-value pairs.
"anonymous ftp login allowed". The key is always unique."ftp_anon".Derivation of the Function (Why was it built this way?):
Imagine we didn't have this dictionary. The live_parser would then need a huge, confusing block of if/elif/else statements:
if "anonymous ftp login allowed" in line:
live_findings["nmap"].add("ftp_anon")
elif "wp-config" in line:
live_findings["nmap"].add("wpconfig")
...and so on. That would be a nightmare to maintain and expand.
The Brilliant Solution: We outsource the entire "intelligence" to this dictionary. The live_parser can now use a single, elegant loop to process the entire list: for key, value in NMAP_SPECIALS.items(): if key in line: .... To add a new signature, we only need to add a new line to this dictionary instead of changing the complex logic of the parser. This decouples the data from the logic – a core principle of good software architecture.
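The elegant loop can be sketched with a trimmed-down copy of the dictionary:

```python
# Trimmed-down copy of NMAP_SPECIALS for demonstration.
NMAP_SPECIALS = {
    "anonymous ftp login allowed": "ftp_anon",
    "wp-config": "wpconfig",
    "apache tomcat": "tomcat",
}

live_findings = {"nmap": set()}
line = "230 Anonymous FTP login allowed (FTP code 230)".lower()

# One loop replaces the entire if/elif chain.
for key, value in NMAP_SPECIALS.items():
    if key in line:
        live_findings["nmap"].add(value)

print(live_findings["nmap"])  # {'ftp_anon'}
```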
Think of this module as a highly specialized Signal Intelligence Officer and Spy. Its mission is to intercept the continuous stream of raw data (line) coming in from our soldiers (the scan tools) in real-time. It must analyze every single message, filter out the "noise" (unimportant information), and identify only the critical "Live Hits." It immediately reports this valuable intelligence to the command center (live_findings) so that the situation map in the live banner can be updated. Without this spy, our live banner would be useless.
remove_ansi(text)
import re
import json
from nmap_services_loader import NMAP_SPECIALS


def remove_ansi(text):
    ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
    return ansi_escape.sub('', text)
Goal: This function is our "decontamination unit." Many command-line tools use invisible ANSI escape codes to color their output. For our script, however, this is just interfering data noise. This function cleans the text from all these invisible control characters.
High-End Concept: re.compile(...)
re.compile() tells Python: "Hey, we'll be using this pattern often. Analyze it once, compile it into an optimized internal pattern object, and save it." This is a performance optimization. With hundreds or thousands of lines to parse, this makes a noticeable difference.
Line 8: return ansi_escape.sub('', text) - The .sub() command (substitute) is the execution. It takes the compiled regex, searches for all matches in the text, and replaces them with an empty string ('') – effectively deleting them.
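A quick demonstration of the decontamination unit (a self-contained copy of the function above):

```python
import re

def remove_ansi(text):
    # Pre-compiled pattern matching ANSI escape sequences (colors, cursor moves).
    ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
    return ansi_escape.sub('', text)

colored = "\033[92m[+] Found: /admin\033[0m"
print(remove_ansi(colored))  # [+] Found: /admin
```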
parse_line(tool_name, line, live_findings)
def parse_line(tool_name, line, live_findings):
    clean_line = remove_ansi(line).strip()
    l = clean_line.lower()
    if "nmap" in tool_name.lower():
        # ... Nmap Parsing Logic ...
    if "ferox" in tool_name.lower() or "gobuster" in tool_name.lower():
        # ... Feroxbuster/Gobuster Parsing Logic ...
    if "ipv6" in tool_name.lower():
        # ... IPv6 Parsing Logic ...
    # ... and so on for Nikto and Curl
Goal: This is the spy's brain. This function is a large switchboard that decides how a line is analyzed based on which tool it came from.
Preparation (Lines 2-3): First, the line is sent through our decontamination unit (remove_ansi). .strip() then removes any remaining spaces at the beginning and end. Only then is the clean line converted to lowercase for analysis.
The if Cascade (The Specialist Departments): The function is like an intelligence agency with different departments. if "nmap" in tool_name.lower(): asks: "Is this message from our Nmap agent?". If yes, the line is passed to the Nmap specialists. If not, the next department is asked. Each if block is a highly specialized analysis unit for a specific tool.
High-End Concept (Derivation of the Architecture): Instead of writing a huge, unreadable function that does everything, we have a clean "Single Responsibility" architecture here. The parse_line function is just the dispatcher. The actual, complex logic for each tool is encapsulated in its own isolated if block. If we want to add a new parser for a new tool (e.g., "sqlmap") in the future, we just need to add a new if "sqlmap" in tool_name.lower(): block without touching the rest of the code. This is modular, clean, and maintainable.
nmap Block
# ... inside the nmap if-block
match = re.search(r"(\d+)/(tcp|udp)\s+open\s+(.*)", l)
if match:
    full_service_line = match.group(3)
    # ...
    if "openssh" in full_service_line:
        live_findings["nmap"].add("openssh")
        live_findings["nmap"].discard("ssh")
        specific_found = True
    # ... (similar blocks for apache, nginx, etc.)
    ignore_list = ["unknown", "tcpwrapped", "service"]
    if not specific_found and base_service not in ignore_list:
        live_findings["nmap"].add(base_service)
Goal: This block refines the Nmap results. Instead of just reporting "http", we want to know if it's "http-apache" or "http-nginx".
The Regex (re.search(...)): This is a surgical strike to capture open ports. (.*) at the end is a greedy wildcard that says "capture everything until the end of the line". This gives us the full service description (e.g., "http Apache httpd 2.4.41").
High-End Concept: .discard("ssh") vs. .remove("ssh")
Our findings are stored in a set, so both methods are available.
.remove("ssh") would crash with a KeyError if "ssh" was not yet in the set.
.discard("ssh") is the killer move: It tries to remove "ssh". If it's there, it's gone. If not, **nothing happens.** No error, no crash. This makes the code extremely robust.
Derivation of the Logic: The specific_found variable is our memory. It remembers if we've already had a very specific hit. Only if no specific hit was found at the end (if not specific_found) do we add the general service (base_service). This prevents the banner from being cluttered with redundant, generic labels like "http" when we already know the much more valuable "http-apache".
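The difference between the two methods is easy to verify:

```python
findings = {"http", "openssh"}

findings.discard("ssh")     # "ssh" is absent -> silently does nothing
print(findings == {"http", "openssh"})  # True

try:
    findings.remove("ssh")  # .remove() on a missing item raises KeyError
except KeyError:
    print("remove() would have crashed here")
```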
curl Block
# ... inside the curl if-block
if "allow:" in l:
methods = l.split("allow:", 1)[1].strip()
for m in methods.split(","):
m_clean = m.strip().upper()
if m_clean:
live_findings["methods"].add(m_clean)
Goal: This part of the spy specializes in reading HTTP headers from the curl output.
High-End Concept: .split("allow:", 1)[1] - This is a precision cut to extract data.
.split("allow:", 1): This command splits the line into two parts exactly at the first occurrence of "allow:". The 1 is crucial; it means "split only once". The result is a list with two items: `['everything before allow:', 'everything after allow:']`.[1]: We take the second item from this list (index 1), which is exactly the text we want (e.g., "GET, HEAD, POST")..strip(): The cleaner removes any leftover spaces.
If the live-parser is the spy on the front line, then the NmapRunner is the intelligence analyst in the headquarters. Its mission begins *after* the main Nmap scan is complete. It takes the full, extremely detailed JSON report from Nmap, spreads it out on its analysis table, and looks for strategic targets for the second wave of attack. It then recommends a list of targeted follow-up missions (subroutines) to the command center, to be executed by our special forces (nmap_subroutines.py).
import json
class NmapRunner:
    def __init__(self, json_file_path):
        self.nmap_data = self._load_json(json_file_path)
        self.tasks_to_run = []

    def _load_json(self, file_path):
        try:
            with open(file_path, 'r') as f:
                return json.load(f)
        except (json.JSONDecodeError, FileNotFoundError):
            return None
Goal: This is the blueprint for our analyst. A class is like a blueprint for an object. Every time we call NmapRunner, we create a new, fresh instance of this analyst, who has their own set of data and tasks.
High-End Concept: __init__(self, ...) - This is the "constructor" of the class, the function that is automatically called when the object is created. Think of it as the analyst's "boot-up protocol."
self: Is the most important word in a class. It is the object's reference to itself. It allows it to store and access its own data (like self.nmap_data).
self.nmap_data = self._load_json(...): As soon as the analyst is "born," their first official act is to load the JSON evidence file.
self.tasks_to_run = []: They grab an empty notepad to write down the identified follow-up missions.
Function _load_json (Evidence Intake): The _ at the beginning of the name is a convention in Python. It signals to other developers: "This is an internal helper function, not to be called from the outside." The function itself is built robustly: it uses try...except to catch the two most common errors: the file does not exist (FileNotFoundError) or it contains broken JSON (json.JSONDecodeError).
run_analysis(self)
def run_analysis(self):
    if not self.nmap_data:
        print("[ERROR] Nmap JSON log could not be loaded or parsed.")
        return []
    for host in self.nmap_data.get('hosts', []):
        for port in host.get('ports', []):
            port_id = port.get('portid')
            service = port.get('service', {})
            service_name = service.get('name', 'unknown')
            scripts = port.get('scripts', [])
            if service_name == 'ftp':
                is_anon = False
                for script in scripts:
                    if (
                        script.get('id') == 'ftp-anon'
                        and 'Anonymous FTP login allowed' in script.get('output', '')
                    ):
                        self.tasks_to_run.append('ftp_anon_get')
                        is_anon = True
                        break
                if not is_anon:
                    self.tasks_to_run.append('ftp_brute')
            if service_name == 'telnet':
                self.tasks_to_run.append('telnet_brute')
            if service_name == 'domain':
                self.tasks_to_run.append('dns_axfr')
            if 'netbios-ssn' in service_name or 'microsoft-ds' in service_name:
                self.tasks_to_run.append('smb_enum')
        # "or [{}]" guards against an empty osmatch list, which would
        # otherwise crash the [0] index access.
        if 'Microsoft Windows' in (host.get('osmatch') or [{}])[0].get('name', ''):
            self.tasks_to_run.append('smb_nullsession')
    return list(set(self.tasks_to_run))
Goal: This is the analyst's brain. This function combs through the loaded JSON data and applies a series of "if-then" rules to identify follow-up missions.
High-End Concept (Safe Dictionary Navigation): The expression self.nmap_data.get('hosts', []) is the professional way to navigate through complex JSON data. .get('key', default) tries to find the key 'hosts'. If the key doesn't exist (e.g., because the Nmap scan failed), the command doesn't raise an error but simply returns an empty list []. The loop then simply runs zero times, and the program doesn't crash. This makes the code extremely fault-tolerant.
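The pattern can be demonstrated with a hypothetical, heavily trimmed report:

```python
# Hypothetical, heavily trimmed Nmap JSON structure for demonstration.
nmap_data = {"hosts": [{"ports": [{"service": {"name": "ftp"}}]}]}

names = []
for host in nmap_data.get('hosts', []):   # missing key -> empty list, no crash
    for port in host.get('ports', []):
        names.append(port.get('service', {}).get('name', 'unknown'))
print(names)  # ['ftp']

# With broken/empty data the loops simply run zero times:
broken = {}
assert list(broken.get('hosts', [])) == []
```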
Derivation of the Logic (FTP Example):
If an FTP service is found, the analyst first examines the attached script results (scripts). They specifically look for the result of the ftp-anon script. If this script reports that an anonymous login is allowed, the highly specialized mission 'ftp_anon_get' is added to the notepad.
Only if no anonymous access was found (if not is_anon) is the brute-force mission 'ftp_brute' added. This is a smart prioritization: we first try the easy, quiet way before we bring out the loud artillery (Hydra).
High-End Concept (Line 41): return list(set(self.tasks_to_run))
set(self.tasks_to_run): We convert our list into a set. The superpower of the set comes into play and instantly throws out all duplicates.
list(...): Afterwards, we convert the clean set back into a normal list, which we return to the command center.
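The dedup round-trip in isolation:

```python
tasks_to_run = ["ftp_brute", "smb_enum", "ftp_brute", "dns_axfr", "smb_enum"]
unique_tasks = list(set(tasks_to_run))  # set membership removes the duplicates

print(sorted(unique_tasks))  # ['dns_axfr', 'ftp_brute', 'smb_enum']
```

Note that a set does not preserve insertion order; if the execution order of tasks mattered, list(dict.fromkeys(tasks_to_run)) would deduplicate while keeping the original order.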
If NmapRunner is the planner, then this module is the executing Special Forces unit (Task Force). Every function in this file is a "Subroutine" – a self-contained, highly specialized mission designed for a very specific scenario. These missions are not executed by default; only when the NmapRunner analyst gives the green light. This is the core of our "intelligent" reconnaissance: we don't waste ammunition, we only deploy our best weapons when we have identified a worthy target.
In the current version, these functions only contain print statements as placeholders. In the final version, the necessary libraries (like subprocess to start Hydra or Wget) would be imported here.
def ftp_anon_get(target):
    print(f"\n>>> SUBROUTINE STARTED: ftp_anon_get on {target}")
    print(">>> ACTION: Attempting to recursively download the entire FTP server...")
    # The `wget -m ...` command would go here

def ftp_brute(target):
    print(f"\n>>> SUBROUTINE STARTED: ftp_brute on {target}")
    print(">>> ACTION: Starting Hydra brute-force against the FTP service...")

def dns_axfr(target):
    print(f"\n>>> SUBROUTINE STARTED: dns_axfr on {target}")
    print(">>> ACTION: Attempting a DNS Zone Transfer...")

def smb_enum(target):
    print(f"\n>>> SUBROUTINE STARTED: smb_enum on {target}")
    print(">>> ACTION: Starting enum4linux for comprehensive SMB enumeration...")

# ... and so on for telnet_brute, smb_nullsession etc.
Goal: Each of these functions is a self-contained attack plan for a specific scenario.
High-End Concept (Dynamic Function Calling): The magic lies not in the functions themselves, but in how they are called. Let's look back at Chapter 1 (live_recon.py):
fn = getattr(nmap_subroutines, task, None)
if fn: fn(target)
Derivation of the Architecture: Imagine we didn't have this system. The command center would need a huge, ugly if/elif/else structure:
if task == "ftp_anon_get":
nmap_subroutines.ftp_anon_get(target)
elif task == "ftp_brute":
nmap_subroutines.ftp_brute(target)
...and so on. This would be a nightmare. For every new subroutine, we would have to rebuild the command center.
The Brilliant Solution: Our architecture is much smarter. The NmapRunner analyst only needs to return the exact name of the function (e.g., the string "ftp_brute"). The command center then uses getattr to dynamically find and execute that function.
The Strategic Advantage: To add a new special mission (e.g., an SSH brute-force), we only need to do two things:
1. Teach the NmapRunner a new rule that returns the task "ssh_brute" when Port 22 is open.
2. Write the new function def ssh_brute(target): ... in this nmap_subroutines.py file.
The command center (live_recon.py) **never needs to be touched again**. It automatically adapts to our growing arsenal. This is the epitome of modular, extensible, and maintainable software architecture.
Every operation concludes with a debriefing. This module is the archivist who, at the end of the mission, gives the operator the option to view the complete, unfiltered original reports from all deployed agents (scan tools). It asks the user if they want to see the details and then presents the raw log files in a clean, readable format directly in the terminal. This is crucial for in-depth analysis and for understanding the entire battlefield.
show_detailed_logs(log_directory)
import os
import json
import shutil

# ... (LogColors class and helper functions like get_terminal_width, print_file_header)

def show_detailed_logs(log_directory):
    answer = input(
        "\n[?] If you want to display all scan results in detail, "
        "press 'J' and hit Enter: "
    )
    if answer.lower() == 'j':
        try:
            log_files = sorted(os.listdir(log_directory))
            if not log_files:
                print(f"[INFO] No log files found in directory '{log_directory}'.")
                return
            # First, display all text-based logs (.nmap, .txt)
            for filename in log_files:
                if filename.endswith(".nmap") or filename.endswith(".txt"):
                    filepath = os.path.join(log_directory, filename)
                    print_file_header(filename)
                    with open(filepath, 'r', errors='ignore') as f:
                        print(f.read().strip())
            # Afterwards, display all JSON-based logs
            for filename in log_files:
                if filename.endswith(".json") and "nmap" not in filename:
                    # ... (code to print header and process lines)
                    for line in f:
                        formatted = format_ferox_line(line)
                        if formatted:
                            print(formatted)
                    # ... (further logic for generic JSON parsing)
        except FileNotFoundError:
            print(f"[ERROR] Log directory '{log_directory}' not found.")
Goal: This function is the core of the debriefing. It interacts with the user and presents the data.
Line 8: answer = input(...) - This is the dialogue with the operator. The program pauses and waits for input. Only if the user explicitly enters 'J' (or 'j') is the rest executed. This prevents the screen from being flooded with huge log files if the user doesn't want it.
Line 14: log_files = sorted(os.listdir(log_directory)) - The archivist goes into the log folder (log_directory), lists all the files contained within (os.listdir), and sorts them alphabetically (sorted). This ensures a consistent and orderly display.
High-End Concept (Separate Processing Loops): Instead of processing all files in one loop, the logic is cleverly divided.
The first loop handles all text-based logs (.nmap, .txt). These can be simply read and printed directly to the terminal (f.read()).
The second loop handles the JSON-based logs, which are sent line by line through a specialized translator (format_ferox_line), which turns them into a pretty, tabular format.
format_ferox_line(line)
def format_ferox_line(line):
    try:
        data = json.loads(line)

        if data.get("type") != "response":
            return None

        status = data.get("status", 0)
        url = data.get("url", "")
        length = data.get("content_length", 0)

        s_color = LogColors.GREEN
        if 300 <= status < 400:
            s_color = LogColors.YELLOW
        if status >= 400:
            s_color = LogColors.RED

        return (
            f"{s_color}[{status}]{LogColors.RESET} "
            f"({LogColors.BLUE}{length:>6}b{LogColors.RESET}) {url}"
        )
    except json.JSONDecodeError:
        return None
Goal: This function is a translator. It takes a single, cryptic JSON line from Feroxbuster and transforms it into a single, colorful, human-readable line for the report.
Line 3: data = json.loads(line) - The json translator takes the JSON text and turns it into a Python dictionary, which we can easily access.
Line 5: if data.get("type") != "response": return None - This is an intelligent filter. Feroxbuster produces different types of JSON messages. We are only interested in the actual server responses ("response"). All other messages are ignored (return None).
Color Logic (Lines 12-16): Here, the color is chosen based on the HTTP status code. Successful requests (2xx) are green, redirects (3xx) are yellow, and errors (4xx/5xx) are red. This gives the operator an immediate visual cue about the importance of a finding.
High-End Concept (Format String): f"({LogColors.BLUE}{length:>6}b{LogColors.RESET})"
{length:>6}: This is the secret weapon for clean tables. It tells Python: "Take the variable length, but always reserve 6 character spaces for it. If the number is smaller, fill the rest with spaces on the left (> means right-aligned)."
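The alignment is easy to verify with a few differently sized numbers:

```python
for length in (87, 4096, 120533):
    # Reserve 6 characters for the number, right-aligned, so the
    # size column lines up across rows.
    print(f"({length:>6}b)")
# (    87b)
# (  4096b)
# (120533b)
```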