
Thu, 15 May 2014

BootCamp Reloaded Infrastructure

Get some.


Why Infrastructure Hacking Isn't Dead


If you work in IT Security you may have heard people utter the phrase,


“Infrastructure hacking is dead!”



We hear this all the time, but in all honesty our everyday experience of working in the industry tells a completely different story.


With this in mind we've decided to factor out the infrastructure-related h@x0ry from our Bootcamp course and create a brand-spanking-new one, completely dedicated to all things 'infrastructure'.


What You'll Learn


We've reloaded this course not only to reinforce basic footprinting methodologies - which, to be honest, are essential for target acquisition - but also to enable you to exploit common, real-world vulnerabilities.


But that's not all.


We've also highlighted methods for compromising Microsoft Active Directory infrastructures - something that's typical of corporate environments. The way in which we approach this is thorough and effective, and shows you how to become DA (Domain Admin) without necessarily pulling all of your hair out.


A complete company takeover is really just a matter of time.


Get Hands-On Experience


As with all SensePost training courses, we don't just want you to sit there and watch us talk for a few days. Where's the fun in that and how on earth will you get real, tangible experience if you're just sat in a chair?


Not only will we all be doing practicals at the end of each topic, but we've also created a brilliant culmination exercise:


“You'll need to compromise a company via the Internet and steal as much data as possible!”


The Bottom Line


The brand new Bootcamp Reloaded Infrastructure course will provide you with a thorough introduction to real-world hacking of corporate environments. You'll learn everything you need to successfully compromise most corporate networks out there.


For more information on our training offering, head over here.

Sat, 1 Jun 2013

Honey, I’m home!! - Hacking Z-Wave & other Black Hat news

You've probably never thought of this, but the home automation market in the US was worth approximately $3.2 billion in 2010 and is expected to exceed $5.5 billion in 2016.


Under the hood, the Zigbee and Z-Wave wireless communication protocols are the most commonly used RF technologies in home automation systems. Zigbee is based on an open specification (IEEE 802.15.4) and has been the subject of several academic and practical security research efforts. Z-Wave is a proprietary wireless protocol that operates in the Industrial, Scientific and Medical (ISM) radio band, transmitting on 868.42 MHz (Europe) and 908.42 MHz (United States); it is designed for low-bandwidth data communications in embedded devices such as security sensors, alarms and home automation control panels.


Unlike Zigbee, almost no public security research had been done on the Z-Wave protocol, apart from a DefCon 2011 talk in which the presenter pointed to the possibility of capturing the AES key exchange ... until now. Our Black Hat USA 2013 talk explores the question of Z-Wave protocol security and shows how the Z-Wave protocol can be subjected to attacks.


The talk is being presented by Behrang Fouladi, a Principal Security Researcher at SensePost, with some help on the hardware side from our friend Sahand Ghanoun. Behrang is one of our most senior and most respected analysts. He loves poetry, movies with Owen Wilson, snowboarding and long walks on the beach. Wait - no - that's me. Behrang's the guy who lives in London and has a Masters from Royal Holloway. He's also the guy who figured out how to clone the RSA SecurID software token.


Amazingly, this is the 11th time we've presented at Black Hat Las Vegas. We try and keep track of our talks and papers at conferences on our research services site, but for your reading convenience, here's a summary of our Black Hat talks over the last decade:



2002: Setiri: Advances in trojan technology (Roelof Temmingh)


Setiri was the first publicized trojan to implement the concept of using a web browser to communicate with its controller, and it caused a stir when we presented it in 2002. We were also very pleased when it was referenced in a 2004 book by Ed Skoudis.


2003: Putting the tea back into cyber terrorism (Charl van der Walt, Roelof Temmingh and Haroon Meer)


A paper about targeted, effective, automated attacks that could be used in countrywide cyber terrorism. A worm that targets internal networks was also discussed as an example of such an attack. In some ways, the thinking in this talk eventually led to the creation of Maltego.


2004: When the tables turn (Charl van der Walt, Roelof Temmingh and Haroon Meer)


This paper presented some of the earliest ideas on offensive strike-back as a network defence methodology, which later found their way into Neil Wyler's 2005 book "Aggressive Network Self-Defence".


2005: Assessment automation (Roelof Temmingh)


Here our thinking around pentest automation, and in particular footprinting and link analysis, was further expanded upon. We also released the first version of our automated footprinting tool, "Bidiblah".


2006: A tail of two proxies (Roelof Temmingh and Haroon Meer)


In this talk we literally did introduce two proxy tools. The first was "Suru", our HTTP MITM proxy and a then-contender to the @stake Web Proxy. Although Suru has long since been surpassed by excellent tools like Burp Proxy, it introduced a number of exciting new concepts, including trivial fuzzing, token correlation and background directory brute-forcing. Further improvements included timing analysis and indexable directory checks. These were not available in other commercial proxies at the time, hence our need to write our own.


Another pioneering MITM proxy - WebScarab from OWASP - also shifted thinking at the time. It was originally written by Rogan Dawes, our very own pentest team leader.


The second proxy we introduced operated at the TCP layer, leveraging the very excellent Scapy packet manipulation program. We never took that any further, however.


2007: It's all about timing (Haroon Meer and Marco Slaviero)


This was one of my favourite SensePost talks. It kicked off a series of research projects concentrating on timing-based inference attacks against all kinds of technologies and introduced a weaponized timing-based data exfiltration attack in the form of our Squeeza SQL Injection exploitation tool (you probably have to be South African to get the joke). This was also the first talk in which we Invented Our Own Acronym.


2008: Pushing a camel through the eye of a needle (Haroon Meer, Marco Slaviero & Glenn Wilkinson)


In this talk we expanded on our ideas of using timing as a vector for data extraction in so-called 'hostile' environments. We also introduced our 'reDuh' TCP-over-HTTP tunnelling tool. reDuh is a tool that can be used to create a TCP circuit through validly formed HTTP requests. Essentially this means that if we can upload a JSP/PHP/ASP page onto a compromised server, we can connect to hosts behind that server trivially. We also demonstrated how reDuh could be implemented under OLE right inside a compromised SQL 2005 server, even without 'sa' privileges.


2009: Clobbering the cloud (Haroon Meer, Marco Slaviero and Nicholas Arvanitis)


Yup, we did cloud before cloud was cool. This was a presentation about security in the cloud, discussing issues such as privacy, monoculture and vendor lock-in, and examining the cloud offerings from Amazon, Salesforce and Apple, as well as their security. We got an email from Steve "Woz" Wozniak, we quoted Dan Geer and we had a photo of Dino Dai Zovi. We built an HTTP brute-forcer on Force.com and (best of all) we hacked Apple using an iPhone.


2010: Cache on delivery (Marco Slaviero)


This was a presentation about mining information from memcached. We introduced go-derper.rb, a tool we developed for hacking memcached servers, and gave a few examples, including a sexy hack of bps.org. It seemed like people weren't getting our point at first, but later the penny dropped, and to date we've had almost 50,000 hits on the presentation on Slideshare.


2011: Sour pickles (Marco Slaviero)


Python's Pickle module provides a known capability for running arbitrary Python functions and, by extension, permitting remote code execution; however there was no public Pickle exploitation guide, and published exploits were simple examples only. In this paper we described the Pickle environment, outlined hurdles facing a shellcoder and provided guidelines for writing Pickle shellcode. A brief survey of public Python code was undertaken to establish the prevalence of the vulnerability, and a shellcode generator and Pickle mangler were written. Output from the paper included helpful guidelines and templates for shellcode writing, tools for Pickle hacking and a shellcode library. We also wrote a very fancy paper about it all...
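For the curious, here's a minimal sketch of the core trick (not the tooling or shellcode from the talk): pickle lets an object nominate a callable to be invoked at load time via __reduce__, so anything that unpickles untrusted data hands over code execution.

    import os
    import pickle

    class Demo:
        # pickle calls __reduce__ to learn how to rebuild this object;
        # returning (callable, args) means the callable runs at load time
        def __reduce__(self):
            return (os.system, ("echo code execution via pickle",))

    payload = pickle.dumps(Demo())
    pickle.loads(payload)  # runs os.system(...) during deserialisation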


We didn't present at Black Hat USA in 2012, although we did do some very cool work in that year.


For this year's show we'll be back on the podium with Behrang's talk, as well as an entire suite of excellent training courses. To meet the likes of Behrang and the rest of our team, please consider one of our courses. We need all the support we can get and we're pretty convinced you won't be disappointed.


See you in Vegas!

Mon, 4 Mar 2013

Vulnerability Management Analyst Position


Have a keen interest in scanning over 12,000 IPs a week for vulnerabilities? Excited about the thought of assessing over 100 web applications for common vulnerabilities? If so, an exciting, as well as demanding, position has become available within the Managed Vulnerability Scanning (MVS) team at SensePost.


Job Title: Vulnerability Management Analyst


Salary Range: Industry standard, commensurate with experience


Location: Johannesburg/Pretoria, South Africa


We are looking for a talented person to join our MVS team to help manage the technology that makes up our BroadView suite and, more importantly, to find vulnerabilities, interpret the results and manually verify them. We are after talented people with a broad skill set to join our growing team of consultants. Our BroadView suite of products consists of our extensive vulnerability scanning engine, which looks at both the network layer and the application layer, as well as our extensive DNS footprinting technologies.


The Vulnerability Management Analyst will need the following skills:


  • The ability to multitask and meet client deadlines. We want a person who thinks 'I can do that!'

  • Excellent written and oral communication skills. Being able to understand a vulnerability and explain it to business leaders is a must.

  • A working knowledge of enterprise vulnerability management products and remedial workflow

  • A broad knowledge of the most common enterprise technologies and operating systems

  • A passion for security and technology


Some additional conditions:

  • A postgraduate degree or infosec certification would be beneficial; that said, showing us you have the passion and skills also helps

  • This job requires some after-hours and weekend commitments (we try to keep this to a minimum)

  • Bonus points for knowledge of sed, awk and Python; OK, even Ruby

  • PCI-QSA is desired but not required


Impress us with your skills by sending an email to jobs@sensepost.com and let's take it from there.


SensePost is an equal opportunity employer.

Wed, 16 Jan 2013

Client Side Fingerprinting in Prep for SE

On a recent engagement, we were tasked with trying to gain access to the network via a phishing attack (specifically phishing only). In preparation for the attack, I wanted to see what software they were running, to see if Vlad and I could target them in a more intelligent fashion. As this technique worked well, I thought this was a neat trick worth sharing.


First off, the approach was to perform some footprinting to see if I could find their likely Internet breakout. While I found the likely range (it had their mail server in it), I couldn't find the exact IP they were being NAT'ed to. Not wanting to stop there, I tried out Vlad's Skype IP disclosure trick, which worked like a charm. What's cool about this approach is that it gives you both the internal and external IP of the user (so you can confirm they are connected to their internal network if you have another internal IP leak). You don't even need to be "friends"; you can just search for people who list the company in their details, or do some more advanced OSINT to find Skype IDs of employees.


Once I had that IP, I went on a hunt for web logs that had been indexed by a search engine and contained hits from that IP. My thinking was that I run into indexed Apache or IIS logs fairly often when googling for IPs and the like, so maybe some of these would contain the external NAT IP of the target organisation. It took a fair bit of search-term fiddling, but in the end I found 14 unique hits from their organisation, semi-complete with User Agent information (some were partially obscured).
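If you want to replicate the log-sifting step, a rough Python sketch follows. It assumes Apache "combined" format logs saved locally; the target IP and file arguments are placeholders.

    import re
    import sys
    from collections import Counter

    TARGET_IP = "192.0.2.10"  # hypothetical: the organisation's external NAT IP

    # Apache "combined" log format:
    # host ident user [time] "request" status bytes "referer" "user-agent"
    LINE = re.compile(
        r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} \S+ "[^"]*" "([^"]*)"'
    )

    agents = Counter()
    for path in sys.argv[1:]:
        with open(path, errors="replace") as f:
            for line in f:
                m = LINE.match(line)
                if m and m.group(1) == TARGET_IP:
                    agents[m.group(2)] += 1

    for ua, count in agents.most_common():
        print(count, ua)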


This provided me with the following stats:









Operating System
  • Win XP: 8
  • Win 7 (32-bit): 3
  • Win 7 (64-bit): 3

Browser
  • IE 8: 8
  • IE 6: 3
  • IE 7: 1
  • IE 9: 1

Combination
  • Win 7 + IE 8: 4
  • Win XP + IE 8: 4
  • Win XP + IE 6: 3
  • Win 7 + IE 9: 1


Granted, it could be that the same machine was present in multiple logs and the stats are skewed, but they are a large enough organisation that I thought the chances were low, especially as most of the sites whose logs I found were pretty niche. As validation of these results: later, once we had penetrated the internal network, it was clear that they had a big user base in regional offices still on Win XP and IE6, and a big user base at corporate offices that had been migrated to Windows 7 with IE8.


Unfortunately, the User Agent strings didn't make it clear whether they had Acrobat or Java installed, or which versions they were at. We thought of using some JavaScript to do that detection, but we were under a time constraint and went with trying to pwn them instead, reasoning that if it didn't work, we could retarget and at least get some debugging information.


Anecdotally, and to give the story an ending: it turned out that BlackHole and Metasploit's Browser Autopwn were a bust, and even our customised stuff got nailed by Forefront when the stager tried to inject its payload at runtime. However, an internal tool we use for launching modified meterpreter payloads worked like a charm (although it periodically died on Win7 64-bit, so I'd recommend using reverse-http: you can restart sessions, and fire up a backup session with which to restart the other).

Fri, 9 Mar 2012

Foot printing – Finding your target...

We were asked to contribute an article to PenTest magazine, and chose to write up an introductory how-to on footprinting. We've republished it here for those interested.

Network footprinting is, perhaps, the first active step in the reconnaissance phase of an external network security engagement. This phase is often highly automated with little human interaction, as the techniques appear, at first glance, to be easily applied in a general fashion across a broad range of targets. As a security analyst, footprinting is also one of the most enjoyable parts of my job, as I attempt to outperform the automatons; it is all about finding that one target that everybody forgot about or did not even know they had, that one old IIS 5 webserver that is not used, but not powered off.

With this article I am going to share some of the steps, tips and tricks that pentesters and hackers alike use when starting on an engagement.

Approach

As with most things in life, having a good approach to a problem will yield better results, and over time, as your approach is refined, you will consume less time while getting better results. By following a methodology, your footprinting will become more repeatable and thus more reliable. A basic footprinting methodology covers reconnaissance, DNS mining, various information services (e.g. whois, Robtex, routes), network registration information and active steps such as SSL host enumeration.

While the temptation exists to merely feed a domain name into a tool or script and take the output as your completed footprint, this will not yield a passable footprint, for two reasons. Firstly, a single tool will not have access to all the disparate information sources that one should consult, and secondly, the footprinting process is inherently iterative and continuous. A footprint is almost never complete; instead, a snapshot of the footprint data provides the best current view of the target, but the information could change tomorrow as new sites are brought online or old sites are taken offline. As a new piece of data is found that could expand the footprint, a new iteration of the footprinting process triggers with that datum as the seed, and the results are combined with all previously discovered information.

Know your target

The very first thing to do is to get to know your target organisation: what they do, who they do it for, who does it for them, where they do it from (both online and in the kinetic world) and what community or charity work they are involved in. This will give you an insight into what type of network/infrastructure you can expect. Reading public announcements, financial reports and any other documents published on or by the organisation might also yield interesting results. Any organisation that must publish regular reports (e.g. listed companies) provides a treasure trove of information for understanding the target's core business units, corporate hierarchy and lines of business. All of these become very useful when selecting targets.

Dumpster diving, if you are up for it and have physical access to the target, means sifting through trash to get useful information, but in recent times social media can provide us with even more. Sites like LinkedIn, Facebook and Twitter can provide you with lists of employees and projects that the organisation is involved with and perhaps even information about third party products and suppliers that are in use.

One should also keep an eye out for evidence of previous breaches or loss of credentials. It has become commonplace for hackers to post information about security breaches on sites like pastebin.com. The most likely evidence is credentials, in the form of corporate emails and passwords reused on unrelated sites that are hacked and have their user databases uploaded. In addition, developers use sites like Pastebin to share code, ideas and patches, and if you are lucky you might just find a little snippet of code sitting out in the open on Pastebin that will give you the edge.

DNS

“The Domain Name System (DNS) is a hierarchical distributed naming system for computers, services, or any resource connected to the Internet or a private network.” — Wikipedia

In a nutshell, DNS is used to convert computer names to their numeric addresses.

Start by enumerating every possible domain owned by the target. This is where the information from the initial reconnaissance phase comes in handy, as the target's website will likely point to external domains of interest and also help you guess at possible names. With a list of discovered domains in hand, move on to a TLD (top-level domain) expansion. TLDs are the highest-level subdomains in DNS; .com, .net, .za and .mobi are all examples of TLDs (the Mozilla Organization maintains a list of TLDs at https://wiki.mozilla.org/TLD_List).

In the next step, we take a discovered domain and check to see if there are any other domains with the same name but a different TLD. For example, if the target has the domain victim.com, test whether the domains victim.net, victim.info, victim.org etc. exist, and if they do, check whether they are owned by our target organisation. To determine whether a domain exists or not, examine the SOA (start of authority) DNS record for the domain. Using commands like nslookup under Microsoft Windows, or the dig/host commands under most of the *nix family, will reveal SOA records.

Using dig, “dig zonetransfer.me soa”.


Figure 1: Using dig to get the SOA (Start of authority) record for a domain
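This check is also easy to script. Below is a minimal sketch using the third-party dnspython package; the base name and TLD list are purely illustrative.

    # pip install dnspython
    import dns.exception
    import dns.resolver

    BASE = "victim"                                # hypothetical base name
    TLDS = ["com", "net", "org", "info", "co.za"]  # sample list only

    for tld in TLDS:
        domain = "%s.%s" % (BASE, tld)
        try:
            soa = dns.resolver.resolve(domain, "SOA")
            print(domain, "exists, primary NS:", soa[0].mname)
        except dns.exception.DNSException:
            pass  # NXDOMAIN, timeout, etc. - treat as "not found"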

If, by verifying the SOA, it is confirmed that the domain exists, then the next step is to track down who it belongs to. At this point the whois service is called upon. 'Whois' is simply a registry that contains the information of the owner of a domain. Note that it is not entirely reliable and certainly not consistent. The following very simple query, "whois zonetransfer.me", provides us with the ownership details for the domain "zonetransfer.me".

Figure 2: Using whois to get the domain owner detail

After finding domains, running them through a TLD expansion and verifying their whois information, it is time to track down hosts. First we need to get the NS, or name server, records for the domains. Again using dig, "dig zonetransfer.me ns" returns a list of all the name servers used by this domain. In many cases the name servers will not be part of the target's network and are often out of scope, but they will still be used in the next step.

DNS yields much interesting information, but the default methods for extracting information from foreign servers effectively rely on brute force. However, DNS supports a mechanism whereby all DNS information for a zone can be downloaded if the server allows it; this is called a "zone transfer". When enabled, zone transfers are extremely useful as they negate the need for guessing or brute-forcing; sadly, they are commonly disabled. Still, given their usefulness it is always worth testing for them. Zone transfers should be performed against all the name servers specified in the NS records of a domain, as the data contained in each name server should be the same, but the security configuration might differ. Using dig, the following command will attempt a zone transfer: "dig axfr @ns12.zoneedit.com zonetransfer.me".

Figure 3: Performing a zone transfer using dig
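The same test is easy to automate; here's a minimal dnspython sketch of the dig command above, printing every record in the zone if the server obliges.

    # pip install dnspython
    import dns.query
    import dns.zone

    try:
        zone = dns.zone.from_xfr(
            dns.query.xfr("ns12.zoneedit.com", "zonetransfer.me"))
        for name, node in zone.nodes.items():
            print(node.to_text(name))
    except Exception as exc:
        print("zone transfer refused or failed:", exc)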

As mentioned previously, zone transfers are not that common. When we cannot download the zone file, there are a couple of other tricks that might work. One is to brute force or guess host names: using a long list of common hostnames, one can test for names such as "fw.victim.com", "intranet.victim.com", "mail.victim.com" and so on. The candidates can be commonly seen hostnames, generated names (where computers are assigned numeric or algorithmic names) or sets of related names, such as characters from a book series. When brute forcing DNS, be sure to check the following record types: CNAME, A and AAAA. Again, this is easy using a tool like dig: "dig www.google.com a" produces the DNS configuration for www.google.com. Note that the hostname www.google.com actually has multiple DNS entries, one CNAME record and multiple A records. Looking at the IP addresses, it is clear that there are several different hosts (two in the screenshot below).

Figure 4: Using dig to get the a record for a host entry

Doing this manually seems easy and quick (and it is), but if we want to brute force or guess many host names, it will take too long. Of course, it is easy enough to script these commands to automate the process; however, there are existing tools written specifically for this purpose. One of the most popular, Fierce, is a Perl script written by RSnake (http://ha.ckers.org/fierce/) that is easy to use and has many useful functions. Additionally, tools like Paterva's Maltego and SensePost's Yeti (a tool I wrote) provide graphical interfaces for this purpose.
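For flavour, here's a bare-bones sketch of such a brute forcer with dnspython; the domain is the article's example and a real run would use a far longer wordlist.

    # pip install dnspython
    import dns.exception
    import dns.resolver

    DOMAIN = "victim.com"                          # example target domain
    WORDLIST = ["www", "mail", "intranet", "fw"]   # illustrative only

    for word in WORDLIST:
        name = "%s.%s" % (word, DOMAIN)
        for rtype in ("A", "AAAA", "CNAME"):
            try:
                for rdata in dns.resolver.resolve(name, rtype):
                    print(name, rtype, rdata.to_text())
            except dns.exception.DNSException:
                continue  # no record of this type - move on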

If we happen to have a list of IP addresses or IP netblocks for the target, then a further DNS trick is to convert the addresses into hostnames using reverse lookups to get the PTR record entries. This is useful since reverse records are easily brute forced in IPv4. Bear in mind that DNS does not require a PTR record (reverse entry), nor that entries in the reverse zone match entries in the forward zone. But the result can give you an idea of whether the host is a shared host, owned and hosted by the company, or just remotely hosted.

To test once more using dig: "dig 104.66.194.173.in-addr.arpa ptr". While this too can be easily automated, the previously mentioned tools will also handle PTR records.
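A quick sketch of a reverse sweep, again with dnspython and an example netblock:

    # pip install dnspython
    import dns.exception
    import dns.resolver
    import dns.reversename

    for i in range(1, 255):
        ip = "192.0.2.%d" % i                    # example netblock only
        rev = dns.reversename.from_address(ip)   # -> 2.0.192.in-addr.arpa name
        try:
            for rdata in dns.resolver.resolve(rev, "PTR"):
                print(ip, "->", rdata.to_text())
        except dns.exception.DNSException:
            continue  # no reverse entry for this address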

Search engines

DNS interrogation and mining forms the bulk of footprinting, but thanks to modern search engines like Google and Bing, finding targets has become much easier.

Apart from the normal searching for your target, as you would do in your initial phase, you can also use the data discovered during DNS mining to extract further information from search engines. Bing from Microsoft provides us with two really useful search operators: "ip:" and "site:". When using the "ip:" operator, Bing will return a list of hosts that it has indexed that resolve to the IP address specified. Alternatively, the "site:" operator, when used with a domain name, will return a list of host names that belong to the specified domain and have been indexed by the search engine. Quick and easy, and Bing also provides a very simple free API that you can use to automate these searches.

Address mapping

All this fuss with DNS is important, but it is only useful insofar as it leads us to addresses. The next step is discovering where the target exists within the IP address space. Luckily, useful tools and resources exist to help us uncover these ranges by automating a combination of manual techniques such as whois querying, traceroute and netblock calculators. In the previous section the whois tool was used to get the domain owner information; the same tool can be used to discover the ownership/assignment details of a specific IP address. Let's take www.facebook.com: one of the IP addresses that it resolves to is 69.63.190.10, and "whois 69.63.190.10" produces the following output.

Figure 5: Getting the netblock and owner using whois

From the whois output we get really useful information: a netblock range, 69.63.176.0-69.63.190.255, as well as the owner of this netblock, namely Facebook, Inc. In this case we are lucky and the netblock is registered to Facebook, but often you will only get the network service provider to which the netblock is allocated. In that case, you will have to query the service provider to gain more information about the specific netblock. Online resources can also be very useful; for example, ARIN (the American Registry for Internet Numbers) and the other regional registries (RIPE, AfriNIC, APNIC and LACNIC) provide a reverse whois search interface where one can search for organisation names and other terms, even performing wildcard searches. Giving Facebook a second look, we try a search on the reverse whois interface found at http://whois.arin.net/ with the term "facebook", and get a list of five additional network ranges.

Figure 6: Search results for ARIN reverse whois
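Whois lookups are trivially scripted too. A rough sketch that shells out to the system whois client and picks out the registration fields; field names vary by registry (ARIN uses NetRange/OrgName, RIPE-style registries use inetnum/netname), so treat these as examples.

    import subprocess

    def netblock_info(ip):
        # run the system whois client and keep only the netblock/owner lines
        out = subprocess.run(["whois", ip],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if line.startswith(("NetRange", "CIDR", "OrgName",
                                "inetnum", "netname")):
                print(line.strip())

    netblock_info("69.63.190.10")  # the article's Facebook example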

SSL Certificates

Lastly, we turn to SSL. SSL may be more familiar as "protection" against nasty eavesdroppers and men-in-the-middle, but it is useful for footprinters too. How? It is really simple, actually: one of the security checks performed by browsers when deciding on the validity of an SSL certificate is whether the Common Name contained in the certificate matches the DNS name of the host requested by the browser. How does this help? Say a list of IP addresses has been produced; the next step would be to perform a reverse lookup of all these addresses. However, if no reverse entry is present and Bing has no record of the IP, then some creativity is called for. If an HTTPS website is hosted on that address, then simply browse to the IP address and, when presented with the invalid-certificate error message, look for the "real" host name.

Figure 7: Firefox reporting the common name contained in an SSL certificate for a host

Again, this is something that is easily automated, so we have included a module in Yeti to do this for you.
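If you'd rather roll your own, here's a minimal sketch (not the Yeti module) that grabs a certificate without validating it and reads out the Common Name and subject alternative names, using Python's ssl module plus the third-party cryptography package; the IP is just the article's example.

    # pip install cryptography
    import ssl
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID, NameOID

    # get_server_certificate fetches the cert without validating it
    pem = ssl.get_server_certificate(("69.63.190.10", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    for attr in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME):
        print("CN:", attr.value)
    try:
        san = cert.extensions.get_extension_for_oid(
            ExtensionOID.SUBJECT_ALTERNATIVE_NAME)
        print("SANs:", san.value.get_values_for_type(x509.DNSName))
    except x509.ExtensionNotFound:
        pass  # older certificates may carry only a CN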

Conclusion

Footprinting might at first glance appear to be simple and mundane, but the more you do it, the more you will realise that very few organisations have a handle on exactly what they have and what they present to the Internet. As the Internet and networks evolve, so will the way companies and organisations use them, and so will their footprint. A year-old footprint could be hopelessly outdated, and ongoing footprinting helps organisations maintain a current view of their threat landscape.

With the ongoing move away from local infrastructure to hosted infrastructure, the footprint expands, spreads and grows, and so will our quest to find as much as possible.