Archive for the ‘Myself’ Category

IPv6 experiments / lessons learned

During the last couple of days I did some experiments with IPv6 connectivity / applications / configuration.
For nearly two years I have had two tunnels: one for a server and one for my home connection.
I never got aiccu working on Mac OS X, so the home tunnel was down most of the time.

Finally it got to me and I set up two subnets, again one for the home network and one for the servers.
For the Gentoo servers I followed the router howto covering the radvd configuration.
radvd is a router advertisement daemon for IPv6 networks. IPv6 has an autoconfiguration mechanism where the router advertisement daemon announces the supported prefix (aka network/netmask in the IPv4 world) and its own IP address as the gateway. It seems most IPv6 stacks have this autoconfiguration enabled by default, so every IPv6-enabled server in the reachable network suddenly gets an IPv6 address. I never knew that that many servers of mine are IPv6-enabled, and even quite a few servers of my ISP were suddenly connected through IPv6 (getting me a curious call from my ISP ;-)).
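The radvd configuration behind this is tiny. A minimal sketch, assuming the LAN interface is eth0 and using a documentation prefix in place of the real routed subnet:

```
# /etc/radvd.conf -- minimal sketch; 2001:db8:1234::/64 is a placeholder,
# substitute the subnet routed through your tunnel
interface eth0
{
    AdvSendAdvert on;
    prefix 2001:db8:1234::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```

With just these few lines every host on the segment that does stateless autoconfiguration picks up an address from the prefix and a default route via the advertising box.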
That's the first thing to be worried about: suddenly they are all connected to the big bad internet without correct reverse DNS entries, firewalls and the like.
Speaking of firewalls, you usually don't have an IPv6 firewall up at this point. Your old IPv4 firewall rules won't catch any IPv6 traffic, so again, every IPv6-enabled host is exposed to the world without proper protection. That's even worse if you open a tunnel to your home network, as the home network is most often behind some router doing NAT and internally just using private IP addresses, so the hosts are not exposed to the outside world at all. By opening the tunnel and enabling the radvd service you get them out into the open world too.

On my home network I have a CentOS5 server running which is doing SMB service and the like.
I got that one connected to the sixxs tunnel and started the radvd service on that box. So far so good, Mac OS X has IPv6 with autoconfiguration enabled by default, so the hosts got their IPv6 addresses and routing.
ping6 worked (btw. nice to have most tools available as IPv6 commands with just a 6 at the end), but the browser delivered no IPv6 website. There you are: CentOS5 / RHEL HAVE an ip6tables ruleset enabled by default, and that one was only open for ICMP (ping) messages. Good protection, but it took me a while to diagnose. So I opened some more loopholes for the IPv6 connection on the home network for smtp, imap, http, https and dns, and left the radvd daemon running.
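A rough sketch of those loopholes, as I would write them today; the port list matches the services above, but treat this as an assumption to adapt, not my exact ruleset:

```shell
# allow loopback and keep ICMPv6 working (IPv6 breaks badly without it)
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# smtp, http, imap, https, dns over tcp
for port in 25 80 143 443 53; do
    ip6tables -A INPUT -p tcp --dport $port -j ACCEPT
done
ip6tables -A INPUT -p udp --dport 53 -j ACCEPT   # dns also over udp

# everything else stays closed
ip6tables -P INPUT DROP
```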
At the server network I disabled the radvd service and manually set IPv6 addresses and the gateway, so that I won't disturb the neighbours in the network anymore :-). A strict ip6tables ruleset was enabled there too.
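For reference, the manual variant boils down to two commands per host; addresses here are placeholders from the documentation range, not my real subnet:

```shell
# static address instead of autoconfiguration, plus the default route
ip -6 addr add 2001:db8:1234::10/64 dev eth0
ip -6 route add default via 2001:db8:1234::1
```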
For fun I went through an IPv6 certification and got as far as proving that I have:

  • IPv6 connectivity
  • an IPv6-enabled webserver
  • an IPv6-enabled mail address (yes, my main mail address is now IPv6-enabled!)
  • reverse DNS entries for my IPv6-enabled hosts (PowerDNS has no problems with that)

The step which still gives me trouble is that I can't offer fully IPv6-enabled nameservers to the outside world. My main nameserver is IPv6-enabled, but the secondary ones don't have IPv6 connectivity or AAAA entries, so there's not much I can do about it.
Skimming through the logs on my mailserver I was stunned to see that *a lot* of spam is already being delivered over IPv6. postgrey works with IPv6 without trouble, amavis / SpamAssassin too, so there's not really a problem. Seems like spammers adapt quickly to new technologies though. On the other hand I found that a certain German ISP has its mailservers connected through IPv6 already and is publishing AAAA entries for them, so some legitimate mail is already delivered through IPv6 as well.
In the near future I might offer some experimental IPv6 access to the services provided, but without any native IPv6 connectivity (does anyone know if TeliaSonera is offering it, and whether it costs extra?) that doesn't make too much sense for production.

At least now I can check how the applications I'm using and providing work with IPv6. Phorum needs to be checked for that too.

Nginx, finally!

Seeing the notice that the license for my LiteSpeed webserver is expiring again (yearly payments 😦 ), I finally started to move my sites to nginx (together with a move between datacenters, so the webserver configuration had to be redone anyway).
There were some more webservers in the running, but I ended up with nginx.
The others were lighttpd (it has gotten a bit silent over there, and I don't want to put my sites on a dying project), Cherokee (now even with a web interface!, but the documentation is a bit sparse and the latest release seems inconsistent with the configuration – I simply couldn't find out how to do what I wanted to do) and the original LiteSpeed webserver.
In the end I wanted to come back to an open source webserver which doesn’t lock me in like that.
LSWS had some regressions in the last versions, and one always has to wait for the developer team to fix them (even though they are quick), as no one else can dig into the code, and no one can write modules or enhancements because of the closed source.
Also, some features are now only available in the enterprise (aka paid) version, which I don't want to be forced into forever. And over the last year(s) it has simply become more directed at hosting companies and the like, which use native httpd.conf files instead of doing the configuration in the web interface LiteSpeed offers. Some features even only work through httpd.conf entries.
Oh, and the free version doesn't offer x86-64 builds, so I needed compat libs.
Better to make the cut now and use something else.
Nginx has the fastcgi load balancing I want, rewrite rules, great configurability and a very active community (and developers).
The only thing I'm really missing is the possibility to use .htaccess files, which forced me to hunt down my .htaccess files and turn their rules into native nginx configuration entries. Oh, one feature I forgot: reloading the configuration without doing a full restart of the webserver is neat too :).
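To give an idea, here is a stripped-down sketch of such a setup; the upstream addresses, paths, server name and the rewrite are made-up placeholders, not my actual configuration:

```nginx
# fastcgi load balancing across two PHP workers (placeholder addresses)
upstream phpbackend {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

server {
    listen 80;
    server_name example.org;
    root /var/www/example.org;

    # a typical .htaccess rewrite translated to native nginx syntax
    rewrite ^/article/([0-9]+)$ /article.php?id=$1 last;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass phpbackend;
    }
}
```

The configuration reload is then just a HUP to the master process (`kill -HUP \`cat /var/run/nginx.pid\``), and nginx re-reads everything without dropping connections.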
All issues I had could be quickly solved by either searching the maillist archive or posting there.

Don’t get me wrong, I still recommend LSWS to users who want an easy-to-use webserver with great performance as a drop-in replacement for Apache, supporting most of its features out of the box – it's simply not for me anymore.

Laws and the use of logging IPs

In the light of recent court decisions in Germany ( german article ) which essentially disallow logging of IPs, I'm wondering what one would really need it for?

I’m using IP-logging/-tracking in multiple ways:
1. statistics about visits and recurring users
2. storing it with forum-posts to allow law enforcement in case some user really goes over the line
3. tracking requests in a given time by IP to automatically block potential attacks

So what of that could be avoided?

For 1., one could just skip logging the IP, but counting visits and recurring users would be impossible then. What now? Maybe log an md5/shaX of the IP to have some unique key per IP? Wouldn't that still fall under the court's ruling, as you could find out which was the actual IP?
Counting visits is (in my opinion) an important tool for getting advertisers to advertise on a page. Any ideas?
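One technical note on the hashing idea: a plain md5/sha of an IPv4 address is indeed trivially reversible, since there are only 2^32 possible inputs to try. Hashing the address together with a secret salt at least blocks that brute-force lookup while keeping a stable per-IP key for counting. A minimal sketch, with salt and address as placeholders:

```shell
# pseudonymize an IP for the logs: hash it together with a secret salt
# so the raw address is never stored (salt and address are placeholders)
salt="change-me-and-keep-secret"
ip="203.0.113.7"
printf '%s%s' "$salt" "$ip" | sha256sum | cut -c1-16
```

The same input always yields the same token, so recurring visitors can still be counted; rotating the salt periodically would additionally break any long-term tracking. Whether a lawyer would still call the token personal data is another question.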

For 2., I guess one could disable that, but would I then be responsible for each and every forum post because the real poster can't be identified? (Yeah, the laws in Germany are bad for the one offering the forum after all 😦 )
On the other hand there is the upcoming data retention law ( german news collection about this topic ), which is planned to keep all records for 6 months (!!!). So for now I should remove all tracking of IP addresses, just to be forced to store them for 6 months a while later?

For 3., this behaviour gives me another problem too. Load balancing over multiple webservers usually goes through a reverse proxy in front of them, which always presents the REMOTE_ADDR of the reverse proxy to the apps. So the reverse proxy would need to add this security layer, but I have really failed to find one doing this up to now.
But is that really needed, or am I just oversensitive in this area? Do I have to accept any number of requests/s from any user?
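What the proxy can at least do is pass the real client address along, so the apps behind it can implement the per-IP tracking themselves. A sketch with nginx as the reverse proxy; the upstream names and addresses are placeholders:

```nginx
upstream backend_pool {
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}

server {
    listen 80;
    location / {
        # hand the real client address to the backends so they can
        # still do per-IP request tracking despite the proxy hop
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend_pool;
    }
}
```

That moves the rate limiting into the application rather than the proxy, which isn't quite what I was looking for, but at least the information isn't lost.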

Are there other use-cases for logging IPs?

How are other users handling this?

A new post … REALLY!

Ok, I agree that it went a little bit silent here in the last weeks, but that was just because of an exam I had to take and really had to study a lot for.
Now that one is done, and I only have to finish (or at least start ;)) my diploma thesis to bring it all to an end.

Lets see if I can get some life back into this blog.

To lighttpd or not to lighttpd

For 3 months now lighttpd has been in the top 5 of the Netcraft statistics.
I actually tried lighttpd before settling on the LiteSpeed webserver, which is a commercial product (with a free standard version) but for my use case superior. Maybe they are on par performance-wise – I don't know and didn't do enough benchmarks to tell – but the usability is totally different.
According to Netcraft there are more than a million domains hosted on lighttpd now, but why is there no web interface to configure it? Do the users see this as useless? I don't really like depending on SSH access to change something in my webserver configuration when I'm on the road, and I miss the input validation a web interface could do.
Also, why is there no support for .htaccess files, or at least a tool to search for .htaccess files and convert them to something lighttpd likes? LiteSpeed supports .htaccess files with a cache, so it isn't as much of a performance hit as it used to be.
I would be really afraid of opening lots of holes while switching to lighttpd, because I secured a ton of directories with simply a “Deny from all” in .htaccess files and sometimes “Basic Authentication”.
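For completeness, lighttpd can express both protections, just centrally in its main config instead of per directory. A sketch with placeholder paths, assuming mod_access and mod_auth are loaded:

```
# like "Deny from all" in a .htaccess file, but in lighttpd.conf
$HTTP["url"] =~ "^/private/" {
    url.access-deny = ( "" )
}

# like Basic Authentication from a .htaccess/.htpasswd pair
auth.backend                  = "htpasswd"
auth.backend.htpasswd.userfile = "/etc/lighttpd/htpasswd"
auth.require = ( "/admin/" => ( "method"  => "basic",
                                "realm"   => "admin area",
                                "require" => "valid-user" ) )
```

The catch remains exactly the one above: someone has to find every .htaccess file and translate it by hand, and a missed one fails open instead of closed.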
Why does it have to be so hard? 😉

Blue screens and US-networks

So finally Maurice (who would see a link here if he had a blog ;-)) reminded me to talk about it … .

We finally had our first Phorum developers meeting when we went to the MySQL Conference last April.
It was a great experience, and I have to thank MySQL AB for the invitation to the DotOrg Pavilion.

But all the time there I was hit by blue screens on my ThinkPad (yeah I know, bad to have Windows running, but I think Windows is still better than Linux on laptops).
There must be something odd about US WLAN networks, as I never had this problem before and only got it there when connecting to the conference WLAN.
Do they use some different channels or frequencies? I really don’t know as I just want to have a working connection.
Unfortunately it didn't fix itself when I came back to Germany. I still had occasional crashes with the infamous blue screen.
Only a full reinstall of the OS and all the precious applications helped.

So much about US-Wireless-networks :(.

So that is it …

… another blog from one of these web guys.
What's a web guy? I see it more as a developer active in the web community or something like that – or do you think that's something else?

Actually it was Brian who brought me to blogging and WordPress altogether, and hence that title … ;).

At first, let me introduce myself:
Thomas Seifert from Berlin, Germany.
One of the three main-developers of Phorum.
My own project / “company” is also the reason why I ended up as a Phorum developer, as I'm using Phorum as the base application for the forum hosting done there.
Other projects? Hmm, there is one more, but it doesn't get much traffic yet.
As you can see, all my web projects are currently built on PHP and MySQL, and usually on Linux.
I know there are a lot of other possible combinations, but PHP/MySQL is IMHO the best combination ever.
You don't have to worry about licensing costs when you start a project (I just had that problem with a work project I'm involved in), and PHP allows for really rapid development instead of coding for weeks before seeing any result. MySQL is another matter though. Yeah, it's fast and lean, but it has changed a lot over the last couple of years (more about that in another blog post later).