Taking your webapp offline

Sunday, August 21st, 2011

I’ve recently done a few things that would benefit from having an offline mode. Empowered by Appcache Facts, I tried to turn the newly published La Vuelta 2011 results page into an offline-capable app.

Goal

The goal of this exercise is twofold:

  1. To force caching of resources that slow down the loading of your page.
  2. To actually be able to use the app when offline.

Solution

It’s as simple as writing a cache manifest. It pays to name it [something].manifest, as Apache already serves the correct header for that suffix. What you do is list all the files you want cached, add the stuff you still allow from the network (check Appcache Facts and Dive Into HTML5 – Offline Web Applications), and it all works. Except it doesn’t.

CACHE MANIFEST
# version 1 – change this comment to force browsers to re-download the cache

CACHE:
resource.js

NETWORK:
# online whitelist – anything not listed under CACHE may still be fetched
# (some browsers need the explicit scheme wildcards instead of just *)
*
http://*
https://*

AJAX stuff

If you’re using AJAX to get data from a server via jsonp/script, your framework of choice (jQuery in my case) will probably default to a no-cache approach to loading it: it requests the file with a random suffix appended to defeat browser caching. Since that suffixed URL is never listed in the manifest, the resource will not be available when you’re actually offline.

You can use navigator.onLine to switch caching on and off, but I suggest you first try requesting the no-cache resource and, if it errors out, request the cached one. The benefit is that even when you are online but the server is down, users will still see the data and can use the app.

// Try the live resource first – cache: false appends a timestamp
// parameter, so this succeeds only if the network and server are up.
$.ajax({
	dataType: 'script',
	url: 'resource.js',
	cache: false,
	success: successHandler,
	error: function () {
		// Offline or server down – request the exact URL listed in the
		// manifest so the appcache can serve it.
		$.ajax({
			dataType: 'script',
			url: 'resource.js',
			cache: true,
			success: successHandler
		});
	}
});

iPad issues

Fixing the AJAX meant that it worked properly on the desktop (test with Firefox, which has a proper offline mode) and in Safari on the iPad. The trouble started when I added the app to the Home Screen – data failed to load. It was the same app as in Safari, and it should have worked.

After some debugging I found out that the data actually was loaded, but the eval failed (I was using $.getScript). Some odd testing showed that the problem was a newline character in the data. As I really wanted to keep that newline in the source, I added some code to the error handler that removes the newlines, evals the data and then runs the success handler. And it worked!

$.ajax({
	dataType: 'script',
	url: 'resource.js',
	cache: false,
	success: successHandler,
	error: function () {
		$.ajax({
			dataType: 'script',
			url: 'resource.js',
			cache: true,
			success: successHandler,
			error: function (xhr) {
				// The data arrived but the automatic eval choked on a
				// newline – strip the newlines and eval it by hand.
				eval(xhr.responseText.replace(/\n/g, ''));
				successHandler();
			}
		});
	}
});

Debugging

It’s somewhat hard to debug offline stuff. I suggest using a clearly visible version indicator in the file to make sure you know which version you’re looking at. Also remember that the first time you load the app after changing a file and the cache manifest, it is still served from the cache; the manifest is checked at the same time and, on the iPad, the new files are downloaded only after everything else is done (the app has finished loading and rendering).
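
You can also watch the appcache lifecycle directly while debugging – a minimal sketch (what you log and when you reload is up to you):

window.applicationCache.addEventListener('updateready', function () {
	// A fresh set of files has been downloaded in the background –
	// swap it in and reload so you are testing the new version.
	window.applicationCache.swapCache();
	window.location.reload();
}, false);

// Handy while debugging: log the rest of the lifecycle events too.
['checking', 'downloading', 'progress', 'cached', 'noupdate', 'obsolete', 'error'].forEach(function (name) {
	window.applicationCache.addEventListener(name, function () {
		console.log('appcache: ' + name);
	}, false);
});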

It works!

After these problems were solved, I used the aforementioned navigator.onLine to hide the stuff that normally comes from the network but is not relevant to the offline app (banners, share links, like/+1 buttons) – and you can now check the La Vuelta 2011 results page.
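
The hiding part is only a couple of lines with jQuery – something along these lines, with made-up selectors:

if (!navigator.onLine) {
	// Started without a connection – hide the network-only extras.
	$('.banner, .share-links, .social-buttons').hide();
}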

Form protection

Friday, August 5th, 2011

I’ve seen a discussion recently on how to protect your forms from spammers/bots that come and fill them in, either to stuff your database with crap data or to cover your page with porn links. Reading the answers, I figured out that none of the people there had read the amazing article I read years ago, so I decided to try to remember what it said. So, a big fat disclaimer: I read this in an article somewhere and I don’t remember where. If you know the original article, please post it in the comments – I’d love to link to it, and I bet it has way more info than this one.

The problem

Almost all websites now have forms on them; some are contact or registration forms, others display the submitted data on the site itself (comment forms). But letting others submit data to your site/database opens you up to all sorts of abuse. If you actually show the content of the submitted form, you’ll get a bunch of spammers posting comments with lots of links. If you only store the data and never show it, you’re still at risk – if you don’t notice, your disk can fill up or your database may grow beyond its limits. So what we want to do is prevent bogus form posting.

Spammer approach

If you think about writing a spam-bot that will try to spam as many sites as it possibly can, you have two basic automated approaches – plus brute-force human entry.

Record / replay

This is a very simple approach – you use a person to submit the form, preferably with something that looks like real input, and record the request that was made. Then you hand that data to the bot, which changes some of the content and tries to resubmit it.

Automation based on heuristics

I wanted to say AI, but it really isn’t. It’s a set of simple rules and randomized values that the bot hopes might trick your site into accepting the submit. Let’s say you have three fields: two inputs named “name” and “email”, and a third field named “comment”. A simple script can fill these with “valid”-looking data and try to submit it.

Human entry

By far the simplest, but also the most costly for spammers. Go on Amazon Mechanical Turk or whatever other service, send a link to a Google Spreadsheet, and have people manually enter the stuff into your forms. This is the source of the “Sorry, this is my job” endings on spam comments.

Popular solutions

Turing test

Add a field to the form that the user must fill in with something that humans can produce easily but machines can’t. The biggest group here are Captchas (display an image of distorted characters, possibly with an audio link that reads out the same characters, and have the user somehow figure them out and type in the solution), but there have been others, like a “Human?” checkbox, “3 + 4 =” fields, or “Is this a kitten?” with a pic next to it.
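
To illustrate the simplest kind of challenge, here is a rough sketch with Node and Express – the routes, field names and session setup are all hypothetical; the point is that the expected answer only ever lives on the server:

var express = require('express');
var session = require('express-session');

var app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

app.get('/comment-form', function (req, res) {
	var a = Math.floor(Math.random() * 9) + 1;
	var b = Math.floor(Math.random() * 9) + 1;
	req.session.expected = a + b; // the answer never leaves the server
	res.send('<form method="post" action="/comment">' +
		'<p>' + a + ' + ' + b + ' = <input name="challenge"></p>' +
		'<p><textarea name="comment"></textarea></p>' +
		'<p><button>Post</button></p></form>');
});

app.post('/comment', function (req, res) {
	if (parseInt(req.body.challenge, 10) !== req.session.expected) {
		return res.redirect('/comment-form'); // wrong answer – treat as a bot
	}
	// ...store req.body.comment somewhere...
	res.redirect('/comment-form');
});

app.listen(3000);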

2-step process

Supposedly the easiest way by far is to introduce a 2-step process. After the initial submit, you get back a preview of the submitted data, possibly with a checkbox that says “Is this correct?” and another submit button. Robots are usually not good at following through and thus never actually submit the content.
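
In code, the gist could look something like this (assuming the same hypothetical Express setup as the sketch above; the escaping helper is included because you are echoing user input back):

function escapeHtml(s) {
	return String(s).replace(/&/g, '&amp;').replace(/</g, '&lt;')
		.replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

app.post('/comment', function (req, res) {
	if (req.body.confirmed !== '1') {
		// Step 1: echo the data back as a preview with a hidden flag.
		var safe = escapeHtml(req.body.comment || '');
		return res.send('<form method="post" action="/comment">' +
			'<blockquote>' + safe + '</blockquote>' +
			'<input type="hidden" name="comment" value="' + safe + '">' +
			'<input type="hidden" name="confirmed" value="1">' +
			'<p>Is this correct? <button>Yes, post it</button></p></form>');
	}
	// Step 2: most bots never follow through to this second submit.
	// ...store req.body.comment...
	res.redirect('/comment-form');
});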

Both solutions have an impact on user experience. With Captchas it’s sometimes really hard to see what the characters are, and even when there’s a “different image” link, it just feels like the owner of the site wants to make your life hell before you can submit your data. The other challenges may be easier on the user, but they are also easier to figure out if you’re being targeted by bots. The 2-step process works great for comments, which usually don’t have an edit link – done right (not Wikipedia-style), it might actually be good for the user experience – but it is less appropriate for other types of forms.

Protect yourself differently

The following techniques should prevent most bogus form entries from random passing bots. The exception is “Human entry” – there is no protection against that, even though Captchas try hard – and there is not much you can do once you’re being specifically targeted…

Honeypot field

Use this field to trick auto-guessing bots into submitting something in a field you know should be empty.

  • Add an input field to your form with a regular-sounding name (state, maiden-name,…) that does not otherwise appear on your form.
  • Use a label that clearly communicates that the field needs to stay empty.
  • Hide it with CSS, preferably not by adding class=“hidden”.

If the form post includes content in this field, discard it and redirect back to the form. The trick is to make sure the bots don’t figure out it is a honeypot, so use valid-looking but nonsensical class names…
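
A minimal, framework-free sketch of the server-side check – the field name “maiden-name” follows the example above, the rest is illustrative:

var querystring = require('querystring'); // core Node, no dependencies

function isBotPost(rawFormBody) {
	var fields = querystring.parse(rawFormBody);
	// A human never sees this field (it is hidden with CSS),
	// so any content in it means an auto-filling bot.
	return Boolean(fields['maiden-name']);
}

// Usage on form post: if (isBotPost(body)) discard and redirect to the form.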

Date field

Use it to prevent resubmit of data too far from the creation date. Allow users a few hours to post the form.

To prevent manual modification you can either use proper encryption (symmetric or asymmetric) that will allow you to decode it on form post, or use the date in combination with the onetime token.

Onetime token

Use this field to prevent replay of request data. If you can, save it into the database. It is a good idea to build the token in a form that cannot be forged (so that, say, changing one character doesn’t produce another valid token). This can be done with hashing or encryption.

This one can be as tricky as you want. What I usually do (disclaimer: I don’t know much about encryption, so this might be crap advice) is use a plain datetime field with the onetime token generated from the IP address, the User-Agent and the date field with HMAC. There is no need for this token to be reversible – I can recreate the same value from the data in the form post and check whether it matches.
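
Here is a sketch of that scheme with Node’s crypto module – the secret, the hash choice and the four-hour window are all illustrative:

var crypto = require('crypto'); // core Node

var SECRET = 'server-side-secret-never-sent-to-the-client';

// Rendered into the form as a hidden field, together with the date.
function makeToken(ip, userAgent, dateString) {
	return crypto.createHmac('sha256', SECRET)
		.update(ip + '|' + userAgent + '|' + dateString)
		.digest('hex');
}

// Run on form post: recreate the token from the request and compare.
function isValidPost(ip, userAgent, dateString, postedToken) {
	var age = Date.now() - new Date(dateString).getTime();
	return makeToken(ip, userAgent, dateString) === postedToken &&
		age >= 0 && age <= 4 * 60 * 60 * 1000; // allow a few hours
}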

When using these techniques, make sure you take care of the user experience. If you detect a problem on what might be valid user input (a “timeout” on the date field with an unused onetime token, or a wrong onetime token caused by an IP change at the service provider), you might want to display the second step from the “2-step process”. Whatever you do, don’t call your users spammers or bots – be nice; bots don’t read the text anyway.

Did I miss anything?

I know of no plugin that uses all of these techniques, but I haven’t really looked for one. What I do know is that I don’t ever want to use a Captcha, because it often keeps me out, and the 2-step process is just too weird sometimes. Hope this helps. And again – if you find the original article (it must be at least some 5 years old by now, if not more) or have any other solutions you use or endorse, do leave a comment.