Work for food

June 4th, 2013

I’ve recently come across two different local companies looking to get work done essentially for free.

Ads

The first one was a request by a reputable event venue looking for a visual identity for an international Jazz festival. Their posting was very raw, saying only what they want (a poster) and what they give in return (tickets to the festival and a t-shirt).

The second one was a request by a PR agency for a month-long stint doing “PR and Event Management”. The posting is humorous and very well written (PR agency, remember?) and also includes a list of what they want (a full day of hard work) and what they give in return (lunch money).

Interpretation

As usual, both postings leave a lot of room for interpretation, and of course people base their interpretation on their feelings towards the company. To make things even, I’ll offer two interpretations of each – one optimistic and one pessimistic.

Optimist

The first request is targeted at aspiring designers who have either just finished their studies and cannot find work or are trying to find work as designers even though they studied something else. Maybe they’re just Jazz fans trying their hand at design while unemployed. Since the event organizer has a team of internal designers, they’re not actually looking for all the applications (logo, poster, booklet, tickets,…) – they want a poster that communicates an idea (agencies will tell you that ideas are hard to come by; entrepreneurs will sell them a dime a dozen). Since designers are usually hired based on their portfolios (preferring published work), winning this could jumpstart a career. It could even lead to a job with a music label or another, bigger music festival.

The second request is targeted at people who know that in PR and Event Management it’s all about who you know and who you’ve worked with. This means that working for a company on multinational accounts can lead to a job either at this same company or at the multinationals – which could get you far. The company is only asking for a month of “free” work and is actually using this as a testing period for a full-time hire after the month expires. They’re a company in good standing with loads of work, and the salary is great. Since you’ll be working a lot with a great bunch of people, you’ll learn so much that after the month is over you’ll have an offer not only from this company, but from at least three more.

Pessimist

The first request is a way to get a free visual identity because they want to fire the internal design team as soon as possible. The winner will have to produce all the applications for free after winning, the tickets will be the worst you can possibly get for a concert, and the t-shirt will be the wrong size. They will not allow you to sign your work or advertise that you did it.

The second request is a way to pay less for people who will pass out flyers at events, make coffee and transcribe the CEO’s recordings of PR notices. It also includes sending PR emails to media and journalists and reminding them every day until they publish. Since you’ll be working hard all day, there won’t be any time for mentoring or observing what others do, and after you’re done your coworkers won’t even remember your name.

But…

People who supported one request and not the other were probably interpreting one as an optimist and the other as a pessimist. Knowing the companies, they might be right, but that doesn’t change the fact that none of these scenarios is likely to be entirely true.

The economist in me will say that if you can make people work for you for free, just so they get an entry in their CV, you should. But is that the right thing to do? I don’t think so. It’s a tricky subject, and there are a lot of different arguments for and against such requests – quite a few of them surfaced in the discussions on Slovenian social media. The bottom line for me is that it’s a slippery slope… But that’s a whole new post.

A website is usually not just one product

May 15th, 2012

By releasing Inside government we were testing a proposition (‘all of what government is doing and why in one place’), and two supporting products (a frontend website and a content management system).

Ross Ferguson

People usually forget this. When you don’t, your project has a much better chance of succeeding.

Preloading resources

October 3rd, 2011

You might have read that when optimizing your site for speed, HTTP requests are something to look at – fewer requests mean less waiting and usually a faster site. When doing that for CSS you’re faced with a dilemma:

  1. Put all of the CSS in one file, which means fewer requests, but also means that the first time people land on your site they will wait longer than needed to see the content – which could mean a ruined first impression.
  2. Put the CSS in per-section files, which means less loading up front but more requests as people drill down into different sections of the site.

When using the first option, your only focus can be on optimizing the CSS so that it loads as fast as possible and is as small as possible. As you have fewer files, you can easily have that one file minified and gzipped even if you’re not using a deployment solution that does that for you.

Using the second variant gives you more options – you should still optimize the files, but now you can also decide when to load them, to make sure the landing page does not get hit by the added requests.

Even though I started with CSS, preloading can be done for any resource needed on the page – fonts, scripts, images.

When to preload?

A very obvious case is a search results page. It usually has a very distinct design that requires certain resources not needed anywhere else. But that’s not enough – you need to know when the user will need these resources, so you don’t go preloading everything just in case. With search it’s the moment they focus inside the search box – they start typing the query, while you start preloading the resources in the background.
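As a small sketch of how that could be wired up (assuming the preload helper defined in the next section; the element id and file names are made up for illustration):

$('#search-box').one('focus', function () {
	// jQuery's .one() removes the handler after the first call,
	// so the resources are only requested once
	preload(['search-results.css', 'search-results.js']);
});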

Other obvious places to preload resources:

  • My account – when users click log-in or focus into the login form
  • Checkout – when users add stuff to their cart

Other, less obvious places are landing pages where the choice a user makes branches into many sub-options, especially when the products share the same resources on their content pages.

To the code!

If you’re already using a library it probably provides AJAX methods and event methods. If not, you can search MicroJS to find one and adapt the syntax.

A simple preloader is almost as simple as helloWorld – the only thing you need to make sure of is that the data type is set to text, so that the resource does not get executed.

window.preload = function (src) {
	$.get(src, null, null, 'text');
} // call with preload('somefile.css');

If you want to allow loading multiple files at once, you can detect whether the passed argument is an array.

window.preload = function (data) {
	var a = $.isArray(data) ? data : [data];
	$.each(a, function (i, n) {
		$.get(n, null, null, 'text');
	});
}
// call with preload('somefile.css');
// or preload(['somefile.css', 'otherfile.js']);

If you prefer to call the function with multiple parameters, you can simply iterate over arguments in the $.each call.

window.preload = function () {
	$.each(arguments, function (i, n) {
		$.get(n, null, null, 'text');
	});
}
// call with preload('somefile.css');
// or preload('somefile.css', 'otherfile.js');

The key is to load the resources only after window.onload has happened – this means that all the resources needed for the page to function properly have already been loaded. If you start sooner, your preloads will compete with resources the page actually needs, like fonts, images and videos. This means you need to know whether the onload event has happened – if it has, preload immediately; otherwise wait for the event to fire.

(function () {
	var win = window,
		$ = window.jQuery,
		$win = $(win);
	// remember that window.onload has fired
	$win.load(function () {
		$win.data('loaded', true);
	});
	win.preload = function (data) {
		var a = $.isArray(data) ? data : [data],
			fn = function () {
				$.each(a, function (i, n) {
					$.get(n, null, null, 'text');
				});
			};
		if ($win.data('loaded')) {
			// onload already happened – preload immediately
			fn();
		} else {
			// otherwise wait for onload before preloading
			$win.load(fn);
		}
	};
}());
// call with preload('somefile.css');
// or preload(['somefile.css', 'otherfile.js']);

As you can see, I also did a few other things – I wrapped the code in a function to isolate all the variables, and I assigned window.jQuery to a local $ to make it a bit more bulletproof.

Needless to say, this script needs to be loaded before the load event fires. If you intend to load it afterwards, you need to make sure you properly detect that the onload event has already happened – if you don’t, nothing will ever get preloaded, as the script will keep waiting for an event that has already fired.
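One way to handle that late-loading case (a sketch, not from the original code) is document.readyState, which is 'complete' once the page has fully loaded:

(function () {
	var $win = window.jQuery(window);
	if (document.readyState === 'complete') {
		// the load event has already fired – set the flag directly;
		// jQuery stores data on the window itself, so the preloader
		// above will see it and preload immediately
		$win.data('loaded', true);
	}
}());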

Taking your webapp offline

August 21st, 2011

I’ve recently done a few things that would benefit from having an offline mode. Empowered by Appcache Facts, I tried to make the newly published La Vuelta 2011 results page an offline-capable app.

Goal

The goal of this exercise is twofold:

  1. To force caching of resources that slow down the loading of your page.
  2. To actually be able to use the app when offline.

Solution

It’s as simple as writing a cache manifest. It pays to name it [something].manifest, as Apache already serves the correct header with that suffix. What you do is list all the files you want cached, add stuff you allow on the network (check Appcache Facts and Dive Into HTML5 – Offline Web Applications) and it all works. Except it doesn’t.

CACHE MANIFEST
# version 1

CACHE:
resource.js

NETWORK:
*
http://*
https://*

AJAX stuff

If you’re using AJAX to get data from a server using jsonp/script, your framework of choice (jQuery in my case) will probably default to a no-cache approach to loading it. This means it will request the file with a suffix appended to prevent browser caching – and since that suffixed URL does not match anything in the application cache, the resource will not be available when you’re actually offline.

You can use navigator.onLine to switch caching on and off, but I suggest you first try requesting the no-cache resource and, if it errors out, request the cached one. The benefit is that even if you are online but the server is not, users will still see the data and be able to use the app.

$.ajax({
	dataType: 'script',
	url: 'resource.js',
	cache: false,
	success: successHandler,
	error: function () {
		$.ajax({
			dataType: 'script',
			url: 'resource.js',
			cache: true,
			success: successHandler
		});
	}
});

iPad issues

Fixing the AJAX meant that it worked properly on the desktop (test with Firefox, which has an offline mode) and in Safari on the iPad. The trouble started when I added the app to the Home Screen – data failed to load. It was the same app as in Safari, and it should have worked.

After some debugging I found out that the data actually was loaded, but the eval failed (I was using $.getScript). Some weird testing showed that the problem was a newline character in the data. As I really liked the newline there, I added some code to the error handler that removes the newlines and evals the data, then runs the success handler. And it worked!

$.ajax({
	dataType: 'script',
	url: 'resource.js',
	cache: false,
	success: successHandler,
	error: function () {
		$.ajax({
			dataType: 'script',
			url: 'resource.js',
			cache: true,
			success: successHandler,
			error: function (xhr) {
				// the raw response body is in responseText – strip newlines and eval
				eval(xhr.responseText.replace(/\n/g, ''));
				successHandler();
			}
		});
	}
});

Debugging

It’s somewhat hard to debug offline stuff. I suggest using a clearly visible version indicator in the file, so you always know which version you’re looking at. Also remember that the first time you load the app after changing a file and the cache manifest, it is still served from cache. At the same time the manifest is checked, and on the iPad the files are downloaded after everything else is done (the app has finished loading and rendering).
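One sketch that can take some of the guesswork out of it (assuming the browser exposes the window.applicationCache API): log all the cache events, and swap in a freshly downloaded version right away.

// log every appcache event to see what the browser is doing
$.each(['checking', 'downloading', 'progress', 'cached',
		'updateready', 'noupdate', 'error', 'obsolete'], function (i, n) {
	window.applicationCache.addEventListener(n, function () {
		window.console && console.log('appcache: ' + n);
	}, false);
});

// once a new version is fully downloaded, switch to it and reload
window.applicationCache.addEventListener('updateready', function () {
	window.applicationCache.swapCache();
	window.location.reload();
}, false);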

It works!

After these problems were solved, I used the aforementioned navigator.onLine to hide the stuff that normally comes from the network but is not relevant to the offline app (banners, share links, like/+1 buttons) – and you can now check the La Vuelta 2011 results page.

Form protection

August 5th, 2011

I’ve seen a discussion recently on how to protect your forms from spammers/bots that come and fill them in, either to stuff your database with crap data or to plaster your page with porn links. When I read the answers I figured out that none of the people had read the amazing article I read years ago, so I decided to try to remember what it said. So, a big fat disclaimer: I read this in an article somewhere and I don’t remember where. If you know the original article, please post it in the comments – I’d love to link to it, and I bet it has way more info than this one.

The problem

Almost all websites now have some forms on them – some are contact or registration forms, others display the submitted data on the site itself (comment forms). But letting others submit data to your site/database opens you up to all sorts of attacks. If you actually show the content of the submitted form, you’ll get a bunch of spammers posting comments with lots of links. If you only store the data and don’t show it anywhere, you’re still at risk – if you don’t notice, your disk can fill up, your database may grow beyond its limits,… So what we want to do is prevent bogus form posting.

Spammer approach

If you think about writing a spam-bot that will try to spam as many sites as you possibly can, you have two basic approaches.

Record / replay

This is a very simple approach – you use a person to submit the form, preferably with something that looks like real input, and record the request that was made. Then you hand that data off to the bot, which changes some of the content and tries to resubmit it.

Automation based on heuristics

I wanted to say AI, but it really isn’t. What it is is a set of simple rules and randomized values that the bot hopes might trick your site into accepting the submission. Let’s say you have three fields: two inputs with the field names “name” and “email”, and a third field, “comment”. A simple script can fill these with “valid” data and try to submit it.

Human entry

By far the simplest, but also the most costly for spammers. Go on Amazon Mechanical Turk or whatever other service, send out a link to a Google Spreadsheet and have people manually enter the stuff into your forms. This is the source of the “Sorry, this is my job” endings to spam comments.

Popular solutions

Turing test

Add a field to the form that the user must fill with something humans can do easily but machines can’t. The biggest group here are Captchas (display an image with distorted characters, possibly with an audio link that reads out the same characters, and have the user somehow figure it out and type in the solution), but there have been others, like a “Human?” checkbox, “3 + 4 =” fields, or “Is this a kitten?” with a pic next to it.

2-step process

Supposedly by far the easiest way to do this is by introducing a 2-step process. After the initial submit, you get back a preview of the submitted data, possibly with a checkbox that says “Is this correct?” and another submit button. Robots are usually not good at following through and thus don’t actually submit the content.
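A rough sketch of that flow, using Node with Express as a stand-in server (the routes, the preview template and the saveComment helper are all made up for illustration):

var express = require('express');
var app = express();
app.use(express.urlencoded({ extended: false }));

// step 1: nothing is stored yet – render a preview of the submitted data
// with the same values in hidden fields and an "Is this correct?" checkbox
app.post('/comment', function (req, res) {
	res.render('preview', { comment: req.body.comment }); // hypothetical template
});

// step 2: only store the data when the preview form is submitted again
app.post('/comment/confirm', function (req, res) {
	saveComment(req.body); // hypothetical persistence helper
	res.redirect('/thank-you');
});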

Both solutions have an impact on user experience. With Captchas it’s sometimes really hard to see what the characters are, and even if there’s a “different image” link, it just seems like the owner of the site wants to make your life hell before you can submit your data. The other challenges might be easier on the user, but they’re also easier to figure out if you’re being targeted by bots. The 2-step process works great for comments, which usually don’t have an edit link, so it might actually be good for the user experience if done right (not Wikipedia style), but it’s less appropriate on other types of forms.

Protect yourself differently

These are the techniques that should prevent most bogus form entries from random passing bots – except for “Human entry”; there’s no protection against that, even though Captchas try hard. There is not much you can do when you’re targeted…

Honeypot field

Use this field to trick auto-guessing bots into submitting something in a field you know should be empty.

  • Add an input field to your form with a regular-sounding name (state, maiden-name,…) that does not otherwise appear on your form.
  • Use a label that clearly communicates that the field should be left empty.
  • Hide it with CSS, preferably not by adding class=”hidden”.

If the form post includes content in this field, discard it and redirect back to the form. The trick is to make sure the bots don’t figure out this is a honeypot, so use valid-looking but nonsensical class names…
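A minimal sketch of the server-side check, reusing the hypothetical Express setup from the 2-step sketch above (maiden-name is the honeypot field):

app.post('/comment', function (req, res) {
	// the honeypot field is hidden with CSS, so a real user leaves it
	// empty – any content in it means a bot filled in the form
	if (req.body['maiden-name']) {
		return res.redirect('/comment-form');
	}
	saveComment(req.body); // hypothetical persistence helper
	res.redirect('/thank-you');
});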

Date field

Use it to reject data posted too long after the form was created. Allow users a few hours to post the form.

To prevent manual modification you can either use proper encryption (symmetric or asymmetric) that allows you to decode it on form post, or use this date in combination with the onetime token.

Onetime token

Use this field to prevent replay of request data. If you can, save it into the database. It is a good idea to generate the token in a form that cannot be faked (say, one character changed and you’d have another valid token). This can be done by hashing data or with encryption.

This one can be as tricky as you want. What I usually do (disclaimer: I don’t know much about encryption, so this might be crap advice) is use a plain datetime field with the onetime token generated from the IP address, the UserAgent and the date field using HMAC. There is no need for this token to be reversible – I can recreate the same thing from the data in the form post and check whether it matches.
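A minimal sketch of that token in Node (the secret and the field separator are made up; any server-side stack with an HMAC implementation works the same way):

var crypto = require('crypto');
var SECRET = 'keep-this-on-the-server'; // never sent to the client

// generate the token when rendering the form
function makeToken(ip, userAgent, date) {
	return crypto.createHmac('sha256', SECRET)
		.update(ip + '|' + userAgent + '|' + date)
		.digest('hex');
}

// on form post, recreate the token from the request and the submitted
// date field, then compare – the token never needs to be reversible
function checkToken(ip, userAgent, date, token) {
	return makeToken(ip, userAgent, date) === token;
}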

When using these techniques, make sure you take care of the user experience. If you detect a problem with what might be valid user input (a “timeout” on the date field with an unused onetime token, a wrong onetime token caused by an IP change at the service provider), you might want to display the second step from the “2-step process”. Whatever you do, don’t call your users spammers or bots – be nice; bots don’t read the text anyway.

Did I miss anything?

I know of no plugin that uses all of these techniques, but I haven’t really looked for one. What I do know is that I never want to use a Captcha, because it often keeps me out, and the 2-step process is just too weird sometimes. Hope this helps. And again – if you find the original article (it must be at least 5 years old by now, if not more) or have any other solutions you use or endorse, do leave a comment.

View Source Alliance

November 19th, 2010

Most of what I learned on the web in my early years was from “View Source”. Then came the books and the conferences.

It makes me sad to see lots of sites minifying code for performance without releasing the full version so other developers can learn from it. It’s that openness that I really like about the web.

I think there should be a “View Source Alliance” that would set rules on how to release your code in a way that visitors can benefit from the speed of minified code, while web developers can still find your full files and learn from them.

I’ll set a few simple rules here, hoping somebody with more reach picks them up:

  1. If you minify your files, use the simple convention name.min.ext (say jquery.min.js)
  2. When you deploy minified files, also deploy their full version at name.ext (say jquery.js)
  3. If you for some reason can’t release the full files next to the minified ones, add this to the top of the minified file: /*viewsource*http://path.to/full.ext*/ (say /*viewsource*http://code.jquery.com/jquery-1.4.4.js*/)

This way you will not only help others, but sometimes even stop breaking the law – because you might be using some open source code with a licence that says you must release your code under the same or a similar licence.