
Wed, 31 Mar 2010

'Scraping' our time servers

The intertubes have been humming lately about a certain NTP feature that can be used to gather lists of an NTP server's clients, and it naturally grabbed our attention. The humming was started by HD Moore, who recently revealed that it is possible to query NTP servers for lists of client addresses and to use that information for fun and profit. He also mentioned that he will be releasing a paper describing all this, and how he can create a sizable DDoS using NTP, without giving too much detail about it.

Some quick research into NTP (from www.ntp.org) revealed that NTP servers allow you to perform a bunch of commands that are secondary to time keeping. You can easily play with these using the ntpdc client program, e.g. 'ntpdc target.ntp.server'. Some of these commands include:

  • listpeers - list the peers (NTP servers) for the time server
  • showpeer - give time-keeping info about a specific peer time server
  • peers - list peers and some basic time-keeping info
  • sysstats - info regarding the ntp daemon itself
  • many more...
A lesser-known command, and the one we will be focusing on, is 'monlist', which the ntpdc program's help describes as 'display data the server's monitor routines have collected'. That description undersells it: this diagnostic function will hand you the last 600 addresses of clients that accessed the NTP server. Finding this function was relatively quick once we started analysing the source code available from www.ntp.org. Later on we discovered that Moore had already released his Metasploit plugin for it, available here.
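For the curious, here is a minimal sketch of what such a mode 7 'monlist' request looks like on the wire. The field layout and constant values are our reading of ntp_request.h in the NTP sources, not an official API, so treat them as assumptions:

```python
import struct

# NTP mode 7 (private) request header, per ntp_request.h in the NTP sources:
#   byte 0: response(1) | more(1) | version(3) | mode(3) -> version 2, mode 7
#   byte 1: auth(1) | sequence(7)                        -> 0 for a request
#   byte 2: implementation (IMPL_XNTPD = 3)
#   byte 3: request code (MON_GETLIST_1 = 42)
#   then err/nitems and mbz/itemsize shorts, all zero for a query
MODE7 = (2 << 3) | 7          # version 2, mode 7 -> 0x17
IMPL_XNTPD = 3
MON_GETLIST_1 = 42

def build_monlist_request():
    """Return the 8-byte monlist query payload (no data items)."""
    return struct.pack("!BBBBHH", MODE7, 0, IMPL_XNTPD, MON_GETLIST_1, 0, 0)

packet = build_monlist_request()
print(packet.hex())   # 170003 2a then four zero bytes
```

Sending these 8 bytes over UDP to port 123 of a server that permits monlist is what triggers the stream of response packets that the attached script parses.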

Playing around: So, this command allows you to get the last 600 IPs that made requests to an NTP server (well, sort of). The ntpdc program is limited to 400 IPs, and because of that limitation we whipped up a utility for everyone to play with and modify, which is attached. The information gathered using this method is (as far as we can see) not worth much except for being interesting. And very interesting indeed, as we note towards the end of this post. We proceeded to examine the South African time servers, since we depend on them and since we are always interested in the South African Internet and security landscape. One can get a list of (some) South African NTP servers at time.org.za, which we used for this post. All except 3 or so allow the monlist command. Using Maltego we added all the servers from time.org.za and ran the script as a local transform on them, which produced these:

These two images are different views of the NTP servers and their clients from one run. In the first image you can clearly see each NTP server (the centre of each circle) with its unique clients forming a circle around it. The clients that query more than one of the servers appear as the mush in the centre of the image. The second image shows which clients use more than one NTP server in a slightly more visible manner: the larger the sphere, the more servers the client gets its time from. One can also see which NTP servers are more secluded. As Moore mentioned, NTP servers will divulge even their internal network clients. This is also the case with some major NTP servers in South Africa: some are showing tens of private IPs, which for some individuals/companies may be a serious information leak.

Have data, what now? The most immediate application of this method will probably be more revealing footprinting exercises. For example:

  • Certain devices are pre-configured to use a certain NTP server, which one can query to find all those devices
  • Certain products are pre-configured in a similar fashion, e.g. Ubuntu
  • NTP servers could leak internal network details and possibly one of their other addresses (IPv6, or another network if multihomed)
  • IPs that will never show up in customary rDNS and fDNS queries may now suddenly pop up
Bandwidth implications: So we know that a busy server's 'client cache' will have 600 entries, and Wireshark tells us that each result packet is 468 bytes (IP+UDP+NTP). Each result packet only contains 6 results, so one is looking at roughly 45 kilobytes of data for each request packet of 220 bytes (IP+UDP+NTP). The NTP server will just dump the data, so you will need a sizeable down-link to catch all 100 UDP packets. Moore mentioned that he has developed a technique to create a 30 gigabit/sec DDoS which is not easy to defend against. Our bet is that spoofing the source address of the monlist request may be a way of creating such a DDoS attack.
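Spelling out the arithmetic above, using the packet sizes we measured (the amplification figure at the end is our own calculation, not something Moore has published):

```python
# Sizes observed in Wireshark (IP+UDP+NTP headers included, in bytes)
REQUEST_BYTES = 220
RESPONSE_PACKET_BYTES = 468
RESULTS_PER_PACKET = 6
CACHE_ENTRIES = 600

packets = CACHE_ENTRIES // RESULTS_PER_PACKET      # response packets per query
response_bytes = packets * RESPONSE_PACKET_BYTES   # total reply size in bytes
amplification = response_bytes / REQUEST_BYTES     # bytes out per byte in

print(packets, response_bytes, round(amplification, 1))
# 100 packets, 46800 bytes (~45.7 KB), ~212.7x amplification
```

A single spoofed 220-byte request thus elicits roughly 46 KB aimed at whoever owns the forged source address, which is presumably the sort of leverage a monlist-based DDoS would exploit.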

Have tool, will play nicely: Attached are the monlist query script, written in Python, and the Maltego graph used in the example above. Just run 'python ntp_monlist.py target_server' and wait 7-10 seconds (with the default timeout and tries). If you don't receive close to 600 addresses then either your connection is too slow or the target server is not busy/popular enough. The script can act as a local transform for Maltego by changing the OUTPUT_FORMAT variable close to the top. You will need to set the speed/accuracy <---> #results slider to the far right for all results. If anyone has an idea on how to use this info better, please drop a comment below.

Files: ntp_monlist za_time_servers

Tue, 5 Jun 2007

Re: Jeremiah Grossman's "How to find your websites"

Jeremiah from WhiteHatSec has just written a quick piece on how to find your websites. Now, footprinting is obviously dear to our hearts, with 3 BlackHat talks on it (or applications of it) ("Automation - Deus ex Machina or Rube Goldberg Machine?", "Putting The Tea Back Into CyberTerrorism", "The Role of Non Obvious Relationships in the Foot Printing Process"), a commercial tool almost dedicated to it, and a full-blown chapter on it in Open Source Penetration Testing by charl and gareth. Footprinting is a genuinely important part of a company's security assessment, because it doesn't matter that they have multi-layer firewalls and WAFs protecting the web app on www.company.com if an old, barely used, SQL-injectable form on their community.company.com site lets you grab SA on their SQL server anyway..

(Now that the shameless self-promotion is over..) I wanted to touch on an interesting aspect of webserver discovery that is often skipped, and that's the issue of multiple websites running as name-based virtual hosts on the same web server. There was a time (not so long ago) when all of the popular scanning tools failed to take into account that scanning 209.61.188.39 was not the same as scanning www.sensepost.com (or hackrack.sensepost.com, which happens to be on the same IP address).

Quick Virtual Host Refresher:

An HTTP/1.1-compliant browser (you will struggle to find one that is not) sends along an additional required field when requesting a website: the Host header.

So.. while a GET on our website looked like this using HTTP/1.0:

haroon$ telnet www.sensepost.com 80
Trying 209.61.188.39...
Connected to www.sensepost.com.
Escape character is '^]'.
GET / HTTP/1.0

HTTP/1.1 200 OK

With HTTP/1.1 you also have to specify a Host header:

haroon$ telnet www.sensepost.com 80
Trying 209.61.188.39...
Connected to www.sensepost.com.
Escape character is '^]'.
GET / HTTP/1.1
Host: www.sensepost.com

HTTP/1.1 200 OK

This allows the web server to correctly route the request to the name-based virtual host running on it. What should be obviously apparent is that in the above example, attacking 209.61.188.39 != attacking www.sensepost.com != attacking hackrack.sensepost.com.

There is every possibility that a highly vulnerable CGI exists at www.sensepost.com/scripts/vuln.cgi which will not exist at 209.61.188.39/scripts/vuln.cgi or hackrack.sensepost.com/scripts/vuln.cgi.
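To make the distinction concrete, here is a small sketch (using the hostnames from the example above) showing that three "different" scans of the same IP differ in nothing but the Host header each request carries; the helper function and path are illustrative, not part of any real scanner:

```python
TARGET_IP = "209.61.188.39"

def build_request(host, path="/scripts/vuln.cgi"):
    """Raw HTTP/1.1 request a scanner would send to TARGET_IP for this vhost."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n"
            f"\r\n")

for host in (TARGET_IP, "www.sensepost.com", "hackrack.sensepost.com"):
    req = build_request(host)
    # All three requests go to the same socket endpoint (TARGET_IP:80),
    # but the server picks the virtual host based purely on this line:
    print(req.splitlines()[1])
```

A scanner that only ever sets the Host header to the IP address is, in effect, testing just one of the three sites.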

Therefore, if you _were_ running a CGI scanner like Nikto or Wikto against the IP, you would probably miss the CGI that (in many cases) would have allowed you to compromise the host.

So.. 3 quick tips on this..

  1. Use one of the many online sources that attempt to map IP addresses to the other websites running on them [www.domaintools.com, www.iptoolbox.fr] (you might just find test.company.com and get the beta version of their website before they turn on their sanitization features)
  2. If you are using a scanner, make sure you are aiming it right..
  3. If you are hosting your site at some ISP, make sure you know who you are hosted with. You could get owned just because someone else on the same box happened to have sloppy code (and the web-server setup doesn't segregate you properly).