
Some corporate filter proxies blocking Amazon S3 static website?

by Pascal Deschenes on June 21st, 2011

Recently, Amazon introduced a new feature in their S3 infrastructure: the ability to serve a full static website directly. Prior to that, we had to set up a CloudFront distribution in order to define a default root object for a default index.html. While this is all good news, it appears that using such a setup might not be as good an idea as it first sounded.

The steps to host a full-blown static website on Amazon S3 are simple and thoroughly explained; a scripted sketch of these steps follows the list:

  1. Create an S3 bucket named after your CNAME (e.g. www.foo.bar)
  2. Set up permissions for this bucket so its content is world viewable
  3. Enable website options for your bucket, such as index and error document
  4. Get your endpoint CNAME (e.g. www.foo.bar.s3-website-us-east-1.amazonaws.com)
  5. Update your DNS record to point to this CNAME
  6. Upload your website content to your S3 bucket
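
For the record, here is roughly what those steps look like when scripted against the S3 API. This is only a minimal sketch using Python and boto3 (neither is mentioned above); the bucket name www.foo.bar, the region and the file names are placeholders, and the DNS update of step 5 still has to be done at your DNS provider.

    import json
    import boto3

    BUCKET = "www.foo.bar"   # must match the CNAME you will publish (placeholder)
    REGION = "us-east-1"     # region used in the example endpoint

    s3 = boto3.client("s3", region_name=REGION)

    # 1. Create the bucket named after your CNAME.
    s3.create_bucket(Bucket=BUCKET)

    # 2. Make the content world viewable with a public-read bucket policy.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }],
    }
    s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

    # 3. Enable the website options: index and error documents.
    s3.put_bucket_website(
        Bucket=BUCKET,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )

    # 4./5. The website endpoint is <bucket>.s3-website-<region>.amazonaws.com;
    #       point your DNS CNAME record at it (done at your DNS provider).
    print(f"CNAME target: {BUCKET}.s3-website-{REGION}.amazonaws.com")

    # 6. Upload the site content (a single file here, as an example).
    s3.upload_file("index.html", BUCKET, "index.html",
                   ExtraArgs={"ContentType": "text/html"})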

Tadam. And it even works painlessly… except.

Except that some corporate proxy filtering rules seem not to like this setup, for some obscure reason I can’t pin down yet. Apparently this is not a DNS configuration problem, since most people can actually access the website. It could potentially point to the length of the CNAME target, which is a bit longer than usual, but according to RFC 1035 that shouldn’t be a problem, since it stipulates that a domain name should not exceed 253 characters in dotted form. Moreover, in those cases, directly using the Amazon CNAME (e.g. www.foo.bar.s3-website-us-east-1.amazonaws.com) works. Could this be some sort of enforcement that the bare domain (e.g. foo.bar) must resolve to the same IP?
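
For what it’s worth, here is the kind of check I would run from a client to rule out DNS. This is a rough sketch using only Python’s standard library (none of it comes from the setup above); the host names are the same placeholders, and it only verifies that each name resolves, that it stays under the RFC 1035 length limit, and whether the names end up on the same addresses.

    import socket

    WWW_HOST = "www.foo.bar"                                        # placeholder
    S3_ENDPOINT = "www.foo.bar.s3-website-us-east-1.amazonaws.com"  # placeholder
    BARE_DOMAIN = "foo.bar"                                         # placeholder

    def resolved_ips(host):
        """Return the set of IPv4 addresses the local resolver gives for host."""
        try:
            return {info[4][0] for info in socket.getaddrinfo(host, 80, socket.AF_INET)}
        except socket.gaierror as exc:
            return f"resolution failed: {exc}"

    for host in (WWW_HOST, S3_ENDPOINT, BARE_DOMAIN):
        # RFC 1035 caps a full domain name at 255 octets,
        # i.e. roughly 253 characters in dotted text form.
        print(f"{host} ({len(host)} chars) -> {resolved_ips(host)}")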

I would really appreciate a better explanation for all this, but I’m still puzzled.


From → development, thinking