Archive Page 2

10 Sep 13

Error Handling in bash

I have never really liked error handling in bash, but a few weeks back I came across the ability to use traps. This is an awesome feature and I don't know why I have never come across it before. The information I generally want when there is an error is:
* The error code.
* The line number of the error.
* The command that failed.

The error code is simple: we all know $? holds the exit code of the last command. The line number is also pretty easy; a quick search of the man page presents us with LINENO. The command that failed is a bit trickier. We have BASH_COMMAND, however this doesn't give us quite what we want. Consider the following script:

#!/bin/bash
trap 'echo ${BASH_COMMAND}' err
TMP=/bla
ls -l ${TMP}

As /bla does not exist, ls will error, the trap will be sprung and the script will print out

ls -l ${TMP}

Unfortunately the TMP variable has not been resolved, which is a bit annoying if the offending line is in a loop. To fix this I got a bit of help from a colleague, and we ended up with something that looks pretty ugly but works:

#!/bin/bash
trap 'echo $(eval echo ${BASH_COMMAND})' err
TMP=/bla
ls -l ${TMP}

So my final header looks like this:

#!/bin/bash
set -e
trap 'echo "ERROR ($?) line ${LINENO}: $(eval echo ${BASH_COMMAND})"' err
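To see the header fire, here is a minimal sketch that writes it plus a failing command to a temporary file and runs it (the file path and the demo itself are illustrative, not part of the original post):

```shell
# Write the final header plus a deliberately failing command to a file and run it.
cat > /tmp/trap-demo.sh <<'EOF'
#!/bin/bash
set -e
trap 'echo "ERROR ($?) line ${LINENO}: $(eval echo ${BASH_COMMAND})"' ERR
TMP=/bla
ls -l ${TMP}
EOF
# Prints something like: ERROR (2) line 5: ls -l /bla
bash /tmp/trap-demo.sh 2>/dev/null || true
```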
07 Jun 13

Linux and router advertisements – ignore the prefix

This is just a short one. I am currently working on a project which involves managing a server owned and hosted by multiple different companies. For me to be able to build the server I need to insist on certain things. One of these things is a static IPv6 address.

Now, many organisations use Router Advertisements to distribute the default gateway, and some use them to dynamically allocate IP addresses. To my knowledge there is no way of providing a statically mapped address via RAs. So you probably know where I'm going with this.

The network I was installing in today used RAs to distribute both a prefix and a default route. I already have a number of systems that receive a default route via RA; however, none of those networks offers a prefix in the RA. At this point I could probably have contacted the operator and asked them to stop sending out the prefix, but where is the fun in that? There must be a way to accept the route and not the prefix.

We use CentOS, so my first search was through the [not so] amazingly documented network-scripts to see if there was a special flag that would achieve the behaviour I was after. I found no such variable, so I decided I would need to set the appropriate kernel parameter in a /sbin/ifup-local script.

There are a number of kernel parameters we are interested in:

accept_ra - BOOLEAN
	Accept Router Advertisements; autoconfigure using them.

	Possible values are:
		0 Do not accept Router Advertisements.
		1 Accept Router Advertisements if forwarding is disabled.
		2 Overrule forwarding behaviour. Accept Router Advertisements
		  even if forwarding is enabled.

	Functional default: enabled if local forwarding is disabled.
			    disabled if local forwarding is enabled.

accept_ra_defrtr - BOOLEAN
	Learn default router in Router Advertisement.

	Functional default: enabled if accept_ra is enabled.
			    disabled if accept_ra is disabled.

accept_ra_pinfo - BOOLEAN
	Learn Prefix Information in Router Advertisement.

	Functional default: enabled if accept_ra is enabled.
			    disabled if accept_ra is disabled.

accept_ra_rt_info_max_plen - INTEGER
	Maximum prefix length of Route Information in RA.

	Route Information w/ prefix larger than or equal to this
	variable shall be ignored.

	Functional default: 0 if accept_ra_rtr_pref is enabled.
			    -1 if accept_ra_rtr_pref is disabled.

accept_ra_rtr_pref - BOOLEAN
	Accept Router Preference in RA.

	Functional default: enabled if accept_ra is enabled.
			    disabled if accept_ra is disabled.

The ones that really matter to us are:

accept_ra #needs to be set to 1
accept_ra_defrtr #needs to be set to 1
accept_ra_pinfo #needs to be set to 0
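Expressed as a sysctl-style fragment, the per-interface end state we are after looks like this (illustrative only; em1 is an example interface name, and we use ifup-local rather than a static config because the network scripts touch these settings when an interface comes up):

```
net.ipv6.conf.em1.accept_ra = 1
net.ipv6.conf.em1.accept_ra_defrtr = 1
net.ipv6.conf.em1.accept_ra_pinfo = 0
```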

When the network scripts run, if IPV6_AUTOCONF=yes then accept_ra will be set to 1. This also causes the other *_ra_* parameters to be enabled, as documented above. So the only thing we really need to do is set accept_ra_pinfo to 0, which is where we use the /sbin/ifup-local script below.

#!/bin/sh
# /sbin/ifup-local is run by the CentOS network scripts after an interface
# comes up, with the interface name as the first argument.
DEVICE="$1"
# Disable prefix-information processing for this interface. -e makes sysctl
# ignore unknown keys, so interfaces without IPv6 are handled gracefully.
/sbin/sysctl -e -w net.ipv6.conf.$DEVICE.accept_ra_pinfo=0 >/dev/null 2>&1

Conclusion

If you want the router advertisement default gateway but want to ignore the prefix sent, add IPV6_AUTOCONF=yes to your ifcfg-em? file and create /sbin/ifup-local with the above code.

Enjoy

06 Nov 12

Swapping Raid Disks – Part 3 Something crazy with dd

OK, so in earlier parts of this series I showed how I migrated two raid partitions off a disk I wanted to use exclusively for data. The original disk topology looked as follows:

Disk /dev/sda: 243031 cylinders, 255 heads, 63 sectors/track

Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sda1   0+ 45689- 45690- 367001600 fd Linux raid autodetect
/dev/sda2 * 45689+ 45702- 13- 102400 fd Linux raid autodetect
/dev/sda3   45702+ 243031- 197329- 1585042432 83 Linux
/dev/sda4   0 - 0 0 0 Empty

/dev/sda1 & /dev/sda2 have now been migrated away and can be reclaimed. The problem is I don't want 3 separate partitions; I just want one partition which includes all the space. For the benefit of the reader: the disk is a 1.8 TB disk; the first 2 partitions take up 300GB and the last partition takes up 1.5TB. I want to end up with one partition that contains 1.8TB, and I don't want to delete any of the data on the data partition (about 90% used).

I want to resize an ext2/3/4 partition, so my first port of call is resize2fs. I know that I can change the disk topology and get resize2fs to grow the disk; however, I have never had to grow a disk backwards, so off to the man pages. It didn't take long to find a pretty big warning:

“make sure you create it with the same starting disk cylinder as before! Otherwise, the resize operation will certainly not work, and you may lose your entire filesystem”

OK, so it looks like resize2fs is out unless I can first move the sdc3 partition to the beginning of the disk. Hanging around in IRC I had a lot of people telling me to give (g)parted a go, although most were sceptical whether it was possible at all. I took a look, however as far as I can tell (g)parted is only able to copy a partition from one location to another. If I had 1.5TB of available space at the beginning of the disk I could have copied /dev/sdc3, changed the last cylinder, run resize2fs and everything would be sorted. However, as mentioned, we only had 300GB available.

The night was getting late and the hacker in me started to consider some more exotic, dangerous and, some would say, downright stupid solutions. I eventually arrived at dd. I thought perhaps I could just do something like:

dd if=/dev/sdc3 of=/dev/sdc bs=1M conv=noerror

In theory this would leave the block device /dev/sdc containing the file system from /dev/sdc3. The obvious problem here is that dd will start overwriting the beginning of /dev/sdc3 before it has finished copying all of the data from it. Would this work? I figured dd would pick the two start positions and continue the copy process until it reached the end of /dev/sdc3. It would neither know nor care that we, or more accurately it, were overwriting the beginning of /dev/sdc3.
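The reason this can work is that the write position always trails the read position by exactly the start offset of /dev/sdc3, so dd never overwrites a block it has not yet read. A sketch of the arithmetic (the 300GB offset is the figure from this disk; units are MiB for convenience):

```shell
#!/bin/bash
# read offset (into the whole disk) = part_start + copied
# write offset                      = copied
# so the gap is constant at part_start for the whole copy.
part_start=$((300 * 1024))   # MiB at which /dev/sdc3 begins
copied=$((1400 * 1024))      # MiB copied so far (any value works)
read_pos=$((part_start + copied))
write_pos=$copied
gap=$((read_pos - write_pos))
echo "gap = ${gap} MiB"      # always 307200 MiB, i.e. 300GB
```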

At this point I should put in a disclaimer:

THIS IS VERY DANGEROUS, POSSIBLY STUPID.  

DO NOT DO THIS UNLESS YOU KNOW WHAT YOU ARE DOING

AND YOU DON'T MIND LOSING YOUR DATA

OK, with that out of the way, I gave it a go and to my surprise it seems to have worked. If you are doing this on a remote machine I strongly recommend you use tmux or screen; if the dd is interrupted you have almost certainly lost your data.

[root@server ~]$ df -h | grep sdc
/dev/sdc3 1.5T 1.4T 98G 94% /data/disk1
[root@server ~]$ umount /dev/sdc3
[root@server ~]$ dd if=/dev/sdc3 of=/dev/sdc bs=1M conv=noerror
[root@server ~]$ mount /dev/sdc /data/disk1
[root@server ~]$ df -h | grep sdc
/dev/sdc 1.5T 1.4T 98G 94% /data/disk1
[root@server ~]$ umount /dev/sdc
[root@server ~]$ e2fsck -f /dev/sdc
[root@server ~]$ resize2fs /dev/sdc
[root@server ~]$ mount /dev/sdc /data/disk1
[root@server ~]$ df -h | grep sdc
/dev/sdc              1.8T  1.4T  462G  75% /data/disk1

So there we are; looks like it wasn't so insane after all. Comments most welcome.

UPDATE: I had a problem mounting the new disk via its UUID. tune2fs -l showed the same UUID which was present in /etc/fstab; however, using the mount command didn't work.

[root@server ~]# tune2fs -l /dev/sdd | grep UUID
Filesystem UUID:          67966bfd-92b5-47b8-a545-277a4bea8be5
[root@server ~]# grep 67966bfd-92b5-47b8-a545-277a4bea8be5 /etc/fstab 
UUID=67966bfd-92b5-47b8-a545-277a4bea8be5 /data/disk2             ext4    noatime,nodiratime,nodelalloc        1 2
[root@server ~]# mount /data/disk2/
mount: special device UUID=67966bfd-92b5-47b8-a545-277a4bea8be5 does not exist

I fixed this by running the following steps; I'm not sure which one fixed it and will test on the next system.

[root@server ~]# cd /dev/disk/by-uuid/
[root@server ~]# ln -sv ../../sdd 67966bfd-92b5-47b8-a545-277a4bea8be5
[root@server ~]# mount /data/disk2/
[root@server ~]# umount /data/disk2/
[root@server ~]# rm 67966bfd-92b5-47b8-a545-277a4bea8be5
[root@server ~]# blockdev --rereadpt /dev/sdd
[root@server ~]# partprobe /dev/sdd
[root@server ~]# mount /data/disk2/

Also note that this server hasn't been rebooted yet; will update after it has.

Edit: The system rebooted without issue. I haven't re-tested mounting by UUID (or I don't remember the results :)). However, I have performed this procedure a couple of times since without issue.

06 Nov 12

Swapping Raid Disks – Part 2 Fixing /boot

In the previous post, Swapping Raid Disks – Part 1 MDADM, we showed how to completely swap all disks in a Linux software raid array. One of the raid partitions we swapped was the /boot partition. We now need to ensure that grub is installed on the new partitions so the system can still boot once we destroy the old disks. You can use the device map file and grub-install to do this; however, I will be using the grub CLI.

The first thing to do is run grub and use the find command to see how grub addresses the disks with the /boot partitions. The (hdX,X) values will more than likely be different on your system. You will also need to do this as root.

[root@server ~]#grub
Probing devices to guess BIOS drives. This may take a long time.
GNU GRUB  version 0.97  (640K lower / 3072K upper memory)
[ Minimal BASH-like line editing is supported.  For the first word, TAB
lists possible command completions.  Anywhere else TAB lists the possible
completions of a device/filename.]
grub> find /grub/stage1
find /grub/stage1
(hd0,1)
(hd1,1)
(hd8,1)
(hd9,1)

From this we can see that grub can see 4 /boot partitions ((hd0,1), (hd1,1), (hd8,1) & (hd9,1)). We now need to ensure that grub is installed on all of these partitions. If you know which disks are the new disks and which are the old disks you can get away with installing grub just on the new disks, but it will do no harm to install on all of them.

Installing grub requires three steps: telling grub which device we will be working on with a map; telling grub which partition is the boot/root partition; and installing grub.

grub> device (hd0) /dev/sda
device (hd0) /dev/sda
grub> root (hd0,1)
root (hd0,1)
 Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  26 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+26 p (hd0,1)/grub/stage2 /grub/grub.conf"... succeeded
Done.

Here is an example for hd9 (/dev/sdj) as well:

grub> device (hd9) /dev/sdj
device (hd9) /dev/sdj
grub> root (hd9,1)
root (hd9,1)
 Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd9)
setup (hd9)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd9)"...  26 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd9) (hd9)1+26 p (hd9,1)/grub/stage2 /grub/grub.conf"... succeeded
Done.
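For what it's worth, the same interactive steps can also be fed to grub legacy non-interactively via its --batch mode. A sketch using the device names from this example (an aside, not the method used above; adjust the devices to match your own find output):

```shell
# Build the command list; it mirrors the device/root/setup steps above.
cat > /tmp/grub-batch.txt <<'EOF'
device (hd8) /dev/sdi
root (hd8,1)
setup (hd8)
device (hd9) /dev/sdj
root (hd9,1)
setup (hd9)
quit
EOF
# Then run it as root:
# grub --batch < /tmp/grub-batch.txt
```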

You will also need to fix the grub.conf/menu.lst file to ensure it specifies the correct root drive to use. In my case I know my boot partitions are on hd8 & hd9; however, as above, you can create entries for all hdX partitions shown by the find command. Below is a copy of my modified menu file.

title CentOS
	root (hd8,1)
	kernel /vmlinuz-2.6.32-220.17.1.el6.x86_64 [Removed for simplicity]
	initrd /initramfs-2.6.32-220.17.1.el6.x86_64.img
title CentOS (if the first disk in the array dies you will need to use this)
	root (hd9,1)
	kernel /vmlinuz-2.6.32-220.17.1.el6.x86_64 [Removed for simplicity]
	initrd /initramfs-2.6.32-220.17.1.el6.x86_64.img
title CentOS (backup hd0, won't work after you format the old disks)
	root (hd0,1)
	kernel /vmlinuz-2.6.32-220.17.1.el6.x86_64 [Removed for simplicity]
	initrd /initramfs-2.6.32-220.17.1.el6.x86_64.img
title CentOS (backup hd1, won't work after you format the old disks)
	root (hd1,1)
	kernel /vmlinuz-2.6.32-220.17.1.el6.x86_64 [Removed for simplicity]
	initrd /initramfs-2.6.32-220.17.1.el6.x86_64.img

If you are using a system with grub v2 you can use the search command to set the root instead of having 4 separate menu items. CentOS does not support grub v2 yet and I have not needed to do this on another system; however, Arch Linux has a good article on grub v2 which should help.
At this point you should reboot and check that at least the first two menu options work and allow you to boot. If they do not, the last two options should work; review the steps you have taken to see if you have made any errors. If they come up successfully you should be able to format the original partitions and reuse them.

In my situation I wanted to keep all the data and reclaim the space at the beginning of the disk, so I decided to do something crazy with dd (part 3 coming soon).

06 Nov 12

Swapping Raid Disks – Part 1 MDADM

We have two disks (/dev/sda & /dev/sdb), each containing 2 software raid partitions and one data partition. The raid partitions were used for /boot and /; the data partition was used as a Hadoop data partition. Hadoop is designed to perform best by making sequential reads/writes from a disk. With OS partitions on the same disk as the data partition we noticed we were causing some issues with Hadoop performance, so we decided to move the OS partitions to dedicated disks (/dev/sdi & /dev/sdj).

This was relatively simple. The new disks were installed and we used sfdisk and mdadm to configure them.

The first thing to do was to dump the partition table for the current disk(s):

sfdisk -l /dev/sda -O partition

This produced the following partition table:

Disk /dev/sda: 243031 cylinders, 255 heads, 63 sectors/track

Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sda1 0+ 45689- 45690- 367001600 fd Linux raid autodetect
/dev/sda2 * 45689+ 45702- 13- 102400 fd Linux raid autodetect
/dev/sda3 45702+ 243031- 197329- 1585042432 83 Linux
/dev/sda4 0 - 0 0 0 Empty

However, we did not want the data partition on the new disks, so we had to modify the partition table a little first so it looked like this:

Disk /dev/sdi: 243031 cylinders, 255 heads, 63 sectors/track

Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdi1 0+ 45689- 45690- 367001600 fd Linux raid autodetect
/dev/sdi2 * 45689+ 45702- 13- 102400 fd Linux raid autodetect
/dev/sdi3 0 - 0 0 0 Empty
/dev/sdi4 0 - 0 0 0 Empty

We configured our new disk as follows:

sfdisk -I partition --force /dev/sdi

We then updated the raid configuration to add the new partitions:

mdadm --manage /dev/md0 --add /dev/sdi2
mdadm --manage /dev/md1 --add /dev/sdi1
mdadm --manage /dev/md0 --fail /dev/sda2
mdadm --manage /dev/md1 --fail /dev/sda1
mdadm -D /dev/md0
mdadm -D /dev/md1

Wait until the sync has completed. You can monitor progress with the command below:

watch cat /proc/mdstat
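If you would rather script the wait than watch it, a small sketch is below (it just polls /proc/mdstat for a resync/recovery line; the interval is arbitrary and the function name is my own):

```shell
#!/bin/bash
# Block until no md resync or recovery is in progress.
wait_md_sync() {
    while grep -qE 'resync|recovery' /proc/mdstat 2>/dev/null; do
        sleep 30
    done
    echo "md sync complete"
}
wait_md_sync
```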

Once all data has synced to both raid partitions, repeat the above steps for sdb/sdj. Once the raid is configured such that sdi and sdj are the two active disks and have completely synced, you can remove sda and sdb from the raid configuration:

mdadm --manage /dev/md0 --remove /dev/sda2
mdadm --manage /dev/md0 --remove /dev/sdb2
mdadm --zero-superblock /dev/sda2
mdadm --zero-superblock /dev/sdb2

mdadm --manage /dev/md1 --remove /dev/sda1
mdadm --manage /dev/md1 --remove /dev/sdb1
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1

Continue to Swapping Raid Disks – Part 2 Fixing /boot for info on how to fix grub and the /boot partitions.

13 Feb 12

Mapping CDN Domains

Introduction

Feel free to read from the bottom up if you're not bothered about my ramblings.

There is an IETF draft proposed by Google, VeriSign and Neustar, available here: http://tools.ietf.org/html/draft-vandergaast-edns-client-subnet-00. The benefit of this draft is to allow CDN networks to direct users to the web server which offers the best performance for the user's IP network.

The problem the CDN networks are trying to resolve exists because of the way CDN networks currently engineer traffic. In current implementations, the source IP address of the DNS query is used to decide which A record to return to the user. Most users do not perform iterative queries; they simply ask their upstream DNS cache to perform the query. So when the CDN network provides an answer, it is based on the source IP address of the caching server and not the user's IP address.

In most situations this is not a problem, because the caching server and the user will most likely prefer the same webserver. It becomes a problem for users of anycast public DNS services, e.g. OpenDNS, or of large ISPs. In these cases the server making the query could be far away from the user asking, which can result in a UK user being directed to a US web server.

Client-subnet resolves this problem by letting a user set the client subnet option to the value the CDN server should use to make its decision, instead of the source address of the query.

Here is where a diagram would go. OK, I forgot how much I hated doing diagrams; I'll try and do something on the whiteboard later and upload a photo. I should probably also mention that my desktop publishing skills are pretty much lacking across the board.

How can we use this?

Well, if we are “pen testing” a network we want to find as many targets as possible. With a normal query we can only retrieve the servers that the CDN provider thinks are best for us [or our cache]. We could utilise the current behaviour of CDN networks to enumerate more entries: grab your list of open resolvers, query each of them, and gather all the different answers.

That seems a bit of a pain: you need to maintain your list of servers, write a script to resolve everything, collate it, etc. But with this new extension we can just send an arbitrary IP address and ask the resolver to give us information for that address. Note this is by design and not a flaw.

With this in mind I thought it would be great to have these features in nmap: point nmap at an authoritative NS server, give it a domain, and have nmap query the name server from multiple geographical locations and scan each record it finds.

This has led me to create 2 new scripts for nmap.

  • dns-client-subnet.nse
  • dns-client-subnet-scan.nse

dns-client-subnet.nse

The first script is more a proof of concept. It takes the following arguments:

  • dns-client-subnet.domain The domain to lookup
  • dns-client-subnet.address The client address to use
  • dns-client-subnet.nameserver The nameserver to use (default = host.ip)

This allows us to perform one scan/query specifying our own client IP address to see if we get different results.

Here we specify a source of 1.0.0.0

 nmap -sU -p 53 --script dns-client-subnet --script-args dns-client-subnet.domain=www.google.com,dns-client-subnet.address=1.0.0.0,dns-client-subnet.nameserver=ns1.google.com ns1.google.com
Starting Nmap 5.61TEST4 ( http://nmap.org ) at 2012-02-13 20:45 CET
Nmap scan report for ns1.google.com (216.239.32.10)
Host is up (0.014s latency).
PORT STATE SERVICE
53/udp open|filtered domain
| dns-client-subnet:
| A : 74.125.235.84,74.125.235.80,74.125.235.81,74.125.235.83,74.125.235.82
|_ details : 24/32/1.0.0

Here we specify a source of 2.0.0.0

nmap -sU -p 53 --script dns-client-subnet --script-args dns-client-subnet.domain=www.google.com,dns-client-subnet.address=2.0.0.0,dns-client-subnet.nameserver=ns1.google.com ns1.google.com
Starting Nmap 5.61TEST4 ( http://nmap.org ) at 2012-02-13 20:45 CET
Nmap scan report for ns1.google.com (216.239.32.10)
Host is up (0.015s latency).
PORT STATE SERVICE
53/udp open|filtered domain
| dns-client-subnet:
| A : 209.85.147.104,209.85.147.103,209.85.147.147,209.85.147.105,209.85.147.106,209.85.147.99
|_ details : 24/20/2.0.0
Nmap done: 1 IP address (1 host up) scanned in 0.35 seconds

Success! You can clearly see that the results are completely different.

Details

While we are here, I'll explain the details section.

  • details : 24/20/2.0.0

The first parameter is the subnet mask we sent. We are basically saying this user is somewhere in the 1.0.0.0/24 subnet.

The second parameter explains what subnet this response is valid for.

  • In the first response we see a value of 32; this means the response is only valid for the IP address 1.0.0.0/32.
  • The second has 20, so you should get the same response querying from any address in 2.0.0.0/20.

The last parameter is just an echo of the address we sent. One octet is missing because we only sent a /24, so the last octet is not needed.

One thing to explore further is how we can use this information to walk the DNS, i.e. if we get a response like the above 2.0.0/20, we know to set the next client-subnet to 2.0.16/20, then adjust the subnet mask and client-subnet of each following query based on the response we get. I played with this a bit, but I got a lot of /32 responses, which would make things take a while. It is something worth more research though.
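The walking step itself is just address arithmetic: add the size of the returned scope to the last network we asked about. A sketch of that calculation in bash (IPv4 only; the function name is my own, purely illustrative):

```shell
#!/bin/bash
# Given the network and prefix length from the scope of the last response,
# compute the next client-subnet to send.
next_subnet() {
    local ip=$1 plen=$2
    local a b c d
    IFS=. read -r a b c d <<< "$ip"
    local n=$(( (a << 24) + (b << 16) + (c << 8) + d ))
    local step=$(( 1 << (32 - plen) ))     # size of one /plen block
    n=$(( n + step ))
    printf '%d.%d.%d.%d/%d\n' \
        $(( (n >> 24) & 255 )) $(( (n >> 16) & 255 )) \
        $(( (n >> 8) & 255 ))  $(( n & 255 )) "$plen"
}
next_subnet 2.0.0.0 20   # 2.0.16.0/20 is the next block to query
```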

dns-client-subnet-scan.nse

As the above method works, but we can't easily use the details field to enumerate all entries, I decided to use a database of IP addresses and cycle through them. For this I chose the MaxMind database. All I need is an IP address for each location. To gather this information I ran the following bash one-liners, after downloading and extracting the MaxMind CSV files.

awk -F, '{print $2}' GeoLiteCity_20120207/GeoLiteCity-Location.csv | tr -d \" | sort | uniq > cc.codes
for i in $(< cc.codes ) ;
 do grep -m 1 ${i} GeoLiteCity_20120207/GeoLiteCity-Location.csv ;
done | tr -d \" | awk -F, '{printf "%s %s:%s,%s\n",$1,$2,$3,$4}' | while read line ;
do id=${line%% *};
 description_and_country=${line##* };
 description=${description_and_country##*:};
 country=${description_and_country%%:*};
 if [ "${description}" != "," ];
 then code=${description%%,*};
 place=${description##*,} ;
 addr=$(grep -m 1 "\"${id}\"" GeoLiteCity_20120207/GeoLiteCity-Blocks.csv | \
 tr -d \" | awk -F, '{print $1}') ;
 echo -e "\t${country}.${code} = {ip=${addr}, desc=\"${country},${description}\"}," ;
 fi ;
done

I won't expand this one:


awk -F, '{print $2}' GeoLiteCity_20120207/GeoLiteCity-Location.csv | tr -d \" | sort | uniq > cc.codes
 for i in $(< cc.codes ) ; do grep -m 1 ${i} GeoLiteCity_20120207/GeoLiteCity-Location.csv ; done | tr -d \" | awk -F, '{printf "%s %s\n",$1,$2}' | while read line ; do id=${line%% *}; country=${line##* }; addr=$(grep -m 1 "\"${id}\"" GeoLiteCity_20120207/GeoLiteCity-Blocks.csv | tr -d \" | awk -F, '{print $1}') ; echo -e "\t$country = {ip=${addr}, desc=\"${country}\"}," ; done

Now, the above code is a very dirty hack and still required me to do some manual clean-up. Don't ask me to explain my logic; it was late and I just wanted results :). However, with a bit of massaging, the above got me the structure you see at the top of the script. The script takes that structure, cycles through it, and performs a query for each address.

This script takes the following arguments

  • dns-client-subnet-scan.domain The domain to lookup
  • dns-client-subnet-scan.nameserver The nameserver to use (default = host.ip)

The result is that instead of getting 6 IP addresses for www.google.com, we get lots:

nmap -sU -p 53 --script dns-client-subnet-scan --script-args dns-client-subnet-scan.domain=www.google.com ns1.google.com
Starting Nmap 5.61TEST4 ( http://nmap.org ) at 2012-02-13 21:19 CET
Nmap scan report for ns1.google.com (216.239.32.10)
Host is up (0.013s latency).
PORT STATE SERVICE
53/udp open|filtered domain
| dns-client-subnet-scan:
| 173.194.33.16
| 173.194.33.17
| 173.194.33.18
| 173.194.33.19
| 173.194.33.20
| 173.194.33.48
| 173.194.33.49
| 173.194.33.50
| 173.194.33.51
| 173.194.33.52
| 173.194.34.112
| 173.194.34.113
| 173.194.34.114
| 173.194.34.115
| 173.194.34.116
| 173.194.34.144
| 173.194.34.145
| 173.194.34.146
| 173.194.34.147
| 173.194.34.148
| 173.194.34.16
| 173.194.34.17
| 173.194.34.176
| 173.194.34.177
| 173.194.34.178
| 173.194.34.179
| 173.194.34.18
| 173.194.34.180
| 173.194.34.19
| 173.194.34.20
| 173.194.34.48
| 173.194.34.49
| 173.194.34.50
| 173.194.34.51
| 173.194.34.52
| 173.194.34.80
| 173.194.34.81
| 173.194.34.82
| 173.194.34.83
| 173.194.34.84
| 173.194.41.112
| 173.194.41.113
| 173.194.41.114
| 173.194.41.115
| 173.194.41.116
| 173.194.41.144
| 173.194.41.145
| 173.194.41.146
| 173.194.41.147
| 173.194.41.148
| 173.194.41.80
| 173.194.41.81
| 173.194.41.82
| 173.194.41.83
| 173.194.41.84
| 173.194.65.103
| 173.194.65.104
| 173.194.65.105
| 173.194.65.106
| 173.194.65.147
| 173.194.65.99
| 173.194.66.103
| 173.194.66.104
| 173.194.66.105
| 173.194.66.106
| 173.194.66.147
| 173.194.66.99
| 173.194.67.103
| 173.194.67.104
| 173.194.67.105
| 173.194.67.106
| 173.194.67.147
| 173.194.67.99
| 173.194.69.103
| 173.194.69.104
| 173.194.69.105
| 173.194.69.106
| 173.194.69.147
| 173.194.69.99
| 209.85.137.103
| 209.85.137.104
| 209.85.137.105
| 209.85.137.147
| 209.85.137.99
| 209.85.143.104
| 209.85.143.99
| 209.85.147.103
| 209.85.147.104
| 209.85.147.105
| 209.85.147.106
| 209.85.147.147
| 209.85.147.99
| 209.85.173.103
| 209.85.173.104
| 209.85.173.105
| 209.85.173.147
| 209.85.173.99
| 209.85.229.103
| 209.85.229.104
| 209.85.229.105
| 209.85.229.147
| 209.85.229.99
| 72.14.204.103
| 72.14.204.104
| 72.14.204.105
| 72.14.204.147
| 72.14.204.99
| 74.125.113.103
| 74.125.113.104
| 74.125.113.105
| 74.125.113.106
| 74.125.113.147
| 74.125.113.99
| 74.125.115.103
| 74.125.115.104
| 74.125.115.105
| 74.125.115.106
| 74.125.115.147
| 74.125.115.99
| 74.125.127.103
| 74.125.127.104
| 74.125.127.105
| 74.125.127.106
| 74.125.127.147
| 74.125.127.99
| 74.125.157.104
| 74.125.157.147
| 74.125.157.99
| 74.125.159.103
| 74.125.159.104
| 74.125.159.105
| 74.125.159.106
| 74.125.159.147
| 74.125.159.99
| 74.125.224.240
| 74.125.224.241
| 74.125.224.242
| 74.125.224.243
| 74.125.224.244
| 74.125.224.80
| 74.125.224.81
| 74.125.224.82
| 74.125.224.83
| 74.125.224.84
| 74.125.225.80
| 74.125.225.81
| 74.125.225.82
| 74.125.225.83
| 74.125.225.84
| 74.125.226.144
| 74.125.226.145
| 74.125.226.146
| 74.125.226.147
| 74.125.226.148
| 74.125.227.112
| 74.125.227.113
| 74.125.227.114
| 74.125.227.115
| 74.125.227.116
| 74.125.227.48
| 74.125.227.49
| 74.125.227.50
| 74.125.227.51
| 74.125.227.52
| 74.125.229.208
| 74.125.229.209
| 74.125.229.210
| 74.125.229.211
| 74.125.229.212
| 74.125.230.208
| 74.125.230.209
| 74.125.230.210
| 74.125.230.211
| 74.125.230.212
| 74.125.230.240
| 74.125.230.241
| 74.125.230.242
| 74.125.230.243
| 74.125.230.244
| 74.125.230.80
| 74.125.230.81
| 74.125.230.82
| 74.125.230.83
| 74.125.230.84
| 74.125.239.16
| 74.125.239.17
| 74.125.239.18
| 74.125.239.19
| 74.125.239.20
| 74.125.31.103
| 74.125.31.104
| 74.125.31.105
| 74.125.31.106
| 74.125.31.147
| 74.125.31.99
| 74.125.53.103
| 74.125.53.104
| 74.125.53.105
| 74.125.53.106
| 74.125.53.147
| 74.125.53.99
| 74.125.71.103
| 74.125.71.104
| 74.125.71.105
| 74.125.71.106
| 74.125.71.147
| 74.125.71.99
| 74.125.79.103
| 74.125.79.104
| 74.125.79.105
| 74.125.79.106
| 74.125.79.147
|_ 74.125.79.99
Nmap done: 1 IP address (1 host up) scanned in 4.50 seconds

Conclusion

I think there is a lot more that could be explored here, but I thought I would write up what I have done and see if anyone else has any ideas.

Installation

These scripts both rely on patches to the dns.lua library. I have checked the full dns.lua library I'm using, and the other files, into github (see below). Copy dns.lua to your system nselib dir; on my system this is /usr/local/share/nmap/nselib, however this will vary a lot depending on distribution. Then copy dns-client-subnet.nse and dns-client-subnet-scan.nse into ~/.nmap/scripts. Once you have done that you should be able to use the scripts as per the examples above.

Files

https://github.com/b4ldr/nse-scripts/blob/master/dns-client-subnet-scan.nse
https://github.com/b4ldr/nse-scripts/blob/master/dns-client-subnet.nse
https://github.com/b4ldr/nselib/blob/master/dns.lua

28 Jan 12

HideMyAss VPN Part 3

So now we have our daemons with multiple tunnels; how do we keep them up to date? Below is the script I use to update the config. It performs some simple error checking to avoid restarting the tunnels unnecessarily, so you could possibly run it from cron.

#!/bin/bash

UK_URL="http://vpn.hidemyass.com/vpnconfig/client_config.php?win=1&loc=UK,+London+(LOC1+S1)"
US_URL="http://vpn.hidemyass.com/vpnconfig/client_config.php?win=1&loc=USA,+New+York+(DC2+S1)"
UK_DOMAINS="www.bbc.co.uk www.itv.co.uk mercury.itv.com www.channel4.com ais.channel4.com ll.securestream.channel4.com"
US_DOMAINS="www.hulu.com www.vevo.com www.crackle.com"

declare -A DOMAINS=(["uk"]=${UK_DOMAINS} ["us"]=${US_DOMAINS})
declare -A URL=(["uk"]=${UK_URL} ["us"]=${US_URL})

for COUNTRY in us uk
do
        TMPFILE=`mktemp` || exit 1
        wget "${URL[${COUNTRY}]}" -O ${TMPFILE}  || exit 1
        sed -i -e 's/\.\/keys\//\/etc\/openvpn\/keys\//g' -e 's/^auth-user-pass/auth-user-pass \/etc\/openvpn\/up/' ${TMPFILE}
        echo "route-nopull" >> ${TMPFILE}
        echo "max-routes 10240" >> ${TMPFILE}
        for DOMAIN in ${DOMAINS[${COUNTRY}]}
        do
                echo origin $(dig +short ${DOMAIN} | tail -1)  | \
                nc asn.shadowserver.org 43 | awk '{print "prefix",$1}'  | \
                nc asn.shadowserver.org 43  | \
                while read line
                do  
                        echo -en "route "  
                        ipcalc --nocolor --nobinary ${line}  |  awk '/(Address|Netmask)/ {printf "%s ", $2}'  
                        echo  
                done
        done | sort | uniq >> ${TMPFILE}
        O_HASH=$(md5sum /etc/openvpn/openvpn-${COUNTRY}.cfg | awk '{print $1}')
        N_HASH=$(md5sum ${TMPFILE} | awk '{print $1}')
        if [ "${O_HASH}" != "${N_HASH}" ]
        then 
                echo "${O_HASH}"
                echo "${N_HASH}"
                echo  "/etc/openvpn/openvpn-${COUNTRY}.cfg has changed"
                mv ${TMPFILE}  /etc/openvpn/openvpn-${COUNTRY}.cfg
                svc -d  /service/openvpn-${COUNTRY}
                svc -u  /service/openvpn-${COUNTRY}
        else
                rm  ${TMPFILE}
        fi
done
