Archive for the ‘blurps’ Category

The survey for people who make websites

Thursday, July 31st, 2008

I took it! And so should you.

Yep, again.

Gmail on a Blackberry

Wednesday, July 23rd, 2008

I was trying to set up my Google Apps mail account on my Blackberry and I couldn't get it to work. I tried it several times and it just wouldn't budge.

I guess the trick is to set up IMAP access to the account from another app first – after I added the account to my Apple Mail app and tried the Blackberry again, it set the account up immediately.
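
For reference, the settings I pointed Apple Mail at were the standard Gmail IMAP ones (this assumes IMAP is enabled for the account under Settings → Forwarding and POP/IMAP in the web interface; the example address is made up):

Incoming (IMAP) server: imap.gmail.com, port 993, SSL
Outgoing (SMTP) server: smtp.gmail.com, port 587 with TLS (or 465 with SSL)
Username: the full address, e.g. you@yourdomain.com for a Google Apps account
Password: the normal account password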


DOM DocumentFragments

Tuesday, July 22nd, 2008

As I read John's post on DocumentFragments, the idea was immediately obvious, as were the speed improvements.

Let's say you have 10 divs to attach to and 10 elements that you need to attach to each of them. In the "normal" case this means you will call appendChild 100 times and cloneNode 100 times. In the "fragment" case you will only call appendChild 20 times (10 to append the elements to the fragment and 10 to append the fragment to the divs) and cloneNode 10 times (when appending the fragment to each div). My thinking was that you don't really gain much on the cloning, since it still has to copy 100 nodes even though it's called only 10 times, but you do gain some time with fewer appends, and you might gain some more by not appending each node to the visible document, which should trigger less redrawing.
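
To make the two cases concrete, here's a rough sketch of both (the divs and elements collections are just illustrative, this isn't code from John's post):

// the "normal" case: every element is cloned and appended into every div separately,
// which for 10 divs and 10 elements means 100 cloneNode and 100 appendChild calls
for (var d = 0; d < divs.length; d++) {
	for (var e = 0; e < elements.length; e++) {
		divs[d].appendChild(elements[e].cloneNode(true));
	}
}

// the "fragment" case: build the batch once, then clone the whole fragment per div,
// which means 20 appendChild calls and only 10 cloneNode calls
var fragment = document.createDocumentFragment();
for (var e = 0; e < elements.length; e++) {
	fragment.appendChild(elements[e]);
}
for (var d = 0; d < divs.length; d++) {
	divs[d].appendChild(fragment.cloneNode(true));
}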

As I don't like to be in the dark, I set off to test some of these assumptions. I didn't run the test in all browsers, so Firefox 3 on Mac will have to do:

Append 10 nodes to a detached node – 60 µs
Append 10 nodes to an attached node – 360 µs
Append 10 nodes to an attached node (display:none) – 160 µs
Append 10 nodes to a fragment – 60 µs

This means that appending does seem to be slower when you are attaching to nodes that are in the displayed document, but also that appending to a detached element is no slower than appending to a DocumentFragment.
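
The four variants I timed look roughly like this (a sketch from memory, the element ids are made up):

var nodes = [];
for (var i = 0; i < 10; i++) {
	nodes.push(document.createElement('div'));
}

function appendTen(target) {
	for (var i = 0; i < nodes.length; i++) {
		target.appendChild(nodes[i].cloneNode(false));
	}
}

appendTen(document.createElement('div'));         // detached node
appendTen(document.getElementById('visible'));    // attached node
appendTen(document.getElementById('hidden'));     // attached node with display:none
appendTen(document.createDocumentFragment());     // fragment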

The next test I wanted to do was to see how the speed of clone changes when you have the same number of elements at different depths:

Clone a detached empty node – 15 µs
Clone an attached empty node – 15 µs
Clone an empty fragment – 15 µs
Clone an empty node (deep) – 15 µs
Clone an empty fragment (deep) – 15 µs
Clone a detached node with 9 subnodes (total of 10 nodes) – 27 µs
Clone an attached node with 9 subnodes (total of 10 nodes) – 29 µs
Clone a fragment with 9 subnodes (total of 10 nodes) – 27 µs
Clone a detached node with deep subnodes (total of 10 nodes) – 28 µs
Clone an attached node with deep subnodes (total of 10 nodes) – 28 µs
Clone a fragment with deep subnodes (total of 10 nodes) – 27 µs
Clone 10 detached empty nodes in a loop – 95 µs

As you can see, the changes in test times between similar variations aren't significant. It does, however, pay off to clone bigger chunks of the tree with the deep parameter: cloning a 10-node subtree in a single deep call takes about 27 µs, while cloning 10 empty nodes in a loop takes 95 µs.
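
In code, the difference between those last two rows is simply this (a sketch; parent stands for a node with 9 child nodes and nodes for an array of 10 detached nodes):

// one deep clone: a single call copies the node and all 9 subnodes (~27 µs above)
var copy = parent.cloneNode(true);

// ten shallow clones in a loop: ten calls for the same number of nodes (95 µs above)
var copies = [];
for (var i = 0; i < nodes.length; i++) {
	copies.push(nodes[i].cloneNode(false));
}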

This means you only gain by using a DocumentFragment when you're attaching many sibling nodes that don't have a single parent node of their own. A simple case would be attaching items (<li>) to an existing list, where the only possible parent (the list) is already in the document. On the other hand, if you are attaching a whole list you would not gain anything, since you could just set up the list, clone the whole list and attach that:

Append 10 items to a single list (directly) – 100 µs
Append 10 items to a single list (fragment first) – 107 µs
Append 10 items to 10 lists (directly) – 2000 µs
Append 10 items to 10 lists (fragment first) – 570 µs

In the first case, when attaching to a single list, you actually lose time with the fragment-first method because you first attach the items to the fragment and then attach the fragment to the list. Note that you don't need to do any cloning here since you're only attaching the items to a single list, so there's no gain from clone being faster on bigger chunks either. The second case mimics the case John presented in his post and the difference is obvious.

The lesson: if you're about to attach a lot of sibling nodes in more than one location (in other words, you'll need cloning), it's smart to use a DOM DocumentFragment for that.
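
For the 10-lists case from the last table, that pattern looks roughly like this (a sketch; the item text and the way the lists are collected are just for illustration):

var fragment = document.createDocumentFragment();
for (var i = 0; i < 10; i++) {
	var li = document.createElement('li');
	li.appendChild(document.createTextNode('item ' + i));
	fragment.appendChild(li);                        // 10 appends into the fragment
}

var lists = document.getElementsByTagName('ul');         // the 10 target lists
for (var j = 0; j < lists.length; j++) {
	lists[j].appendChild(fragment.cloneNode(true));  // one deep clone + one append per list
}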


How much would a toothbrush owned by Kevin Rose cost?

Thursday, July 3rd, 2008

I found this fascinating quote today:

(Image: Kevin Rose on the cover of BusinessWeek, via Wikipedia)

This idea that how famous you are, and how many people know your name, actually increases the value of everything you own and everything you do, is kind of fascinating to me. But just how famous do you have to be? And is there some direct correlation between how many people have heard of you and the worth of your actions and possessions? Kevin Rose has 50,000 followers on Twitter. How much do you think he could get for his toothbrush?
– sarahcpr

I don't think he'd get much, actually. He's too accessible. Oh, and by the way – we have a new reblog.


Firefox JavaScript hiccups

Thursday, June 19th, 2008

While I was testing the speed of some simple JavaScript snippets on my recently built JavaScript speed testing ground, I noticed a weird thing going on in Firefox. But let's start at the beginning.

The JavaScript speed testing I do is very simple. I take a user-specified piece of code and eval it, then I take the name of the function to call and eval that, which gives me a reference to the function I need to call. I set the number of times I want the function to be executed and then I prepare the interval.

The testing takes place in a function that is called every second, which should give each function enough time to do what it's supposed to do the specified number of times. The time is measured by setting a variable to the current date (and time, of course), then the function is executed in a for loop the specified number of times. Immediately after the loop the time is measured again and the difference is the time spent in the loop. To be able to validate the output of the function I assign whatever it returns to a variable (all the variables are set up before any of this happens). So the code looks like this:

t0 = new Date();                  // timestamp before the loop
for (i = 0; i < times; i++) {
	r = fn();                 // keep the return value so it can be validated later
}
t1 = new Date();                  // timestamp after the loop; t1 - t0 is the time spent
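
Put together, the harness looks roughly like this (a sketch of my setup rather than the exact code; the function and variable names are just illustrative):

var fn, times, t0, t1, r, i;
var results = [];                          // loop times in milliseconds, one per run

function prepare(userCode, fnName, runs) {
	eval(userCode);                    // define the user-specified function
	fn = eval(fnName);                 // get a reference to the function to call
	times = runs;
	setInterval(runTest, 1000);        // one measurement every second
}

function runTest() {
	t0 = new Date();
	for (i = 0; i < times; i++) {
		r = fn();                  // keep the return value for validation
	}
	t1 = new Date();
	results.push(t1 - t0);
}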

The first thing to do when running such a speed test is an empty test. What we want to know is how much the time measuring itself takes: as soon as you measure the time you change what's going on, the measuring takes time of its own, and so do the loop, the compare and the assignment.

So I ran the function test(){} 100000 times. Since it doesn't actually do anything you'd expect to get small and very similar times. And you do. So the next thing was to try something that actually does something, like function test(){var a={b:1};}. I expected bigger times, but still similar to each other. And I got such times in all browsers I tested except Firefox. At first I thought it might be something with the operating system. Or the extensions. Or any other number of things that could delay a script. But after quite a lot of tests in other browsers and on other platforms I'm quite sure that Firefox is the one to blame.
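
Going back to the empty test for a second: if you wanted the net cost of the work itself you could subtract the baseline from the real run, something like this (timeIt here is my own illustrative helper, a stripped-down version of the measuring loop above):

function timeIt(f, n) {
	var start = new Date();
	for (var k = 0; k < n; k++) { f(); }
	return new Date() - start;                 // milliseconds for n calls
}

function empty() {}                                // baseline: loop and call overhead only
function withWork() { var a = { b: 1 }; }          // the real test: one object literal per call

var overhead = timeIt(empty, 100000);
var total = timeIt(withWork, 100000);
var workOnly = total - overhead;                   // roughly the cost of the object literals alone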

I tested Firefox 2 and Firefox 3 and both have the same problem. The only difference is that Firefox 3 has it worse: the times jump to 3-4 times the normal time, while in Firefox 2 I only saw them double. I should mention that the biggest time in Firefox 3 (even with the increase) was still smaller than all the Firefox 2 times. What I did find out is that Firefox is a completely unreliable browser for speed testing. I have no idea what's causing this but I'd really like to know. Anyone?

And while I'm at it: with Firebug 1.2 enabled and all its panels open, times were about 3 times slower in Firefox 3.


Review: Adria Airways and NLB

Monday, June 16th, 2008

Recently two more big and heavily visited Slovenian sites were relaunched, and I think they too deserve a mention.

Adria Airways

The first page I want to put to the test is the new site of the first and biggest Slovenian airline. It was recently launched by my ex-colleagues at Parsek as the second version to be made there. The first edition was designed and prepared by another agency and Parsek only did the backend, while the new version is all Parsek. To be fair, the biggest and most important part, the reservation module, is still made by the French company Amadeus.

The new design tries to incorporate leaner navigation with fewer elements, even though it became wider, almost reaching the 1000px mark. The front page is much more sales oriented, displaying a lot of useful information. I can't get past the color scheme though, which is really too dull. There are quite a few validation errors: the ones in HTML are mostly due to non-escaped ampersands, while those in CSS are just sloppy coding without checking the validator.
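
The ampersand errors are the usual case of a query-string URL pasted into an attribute as-is; a made-up example of the broken and the fixed markup:

<!-- invalid: a raw & in an attribute starts an entity reference -->
<a href="/booking?from=LJU&to=CDG">Book a flight</a>

<!-- valid: the ampersand is escaped -->
<a href="/booking?from=LJU&amp;to=CDG">Book a flight</a>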

I was surprised to see that some things don't work well in Firefox 3 and Safari 3, even though, granted, the first isn't released yet (it will be tomorrow) and the second doesn't have a lot of users in Slovenia. I'd still stick to what Yahoo! has to say in their Graded Browser Support table.

I was positively surprised at how well some inside pages are designed, down to the last dot and icon, and negatively surprised at how bad the pages that "only" present CMS content look. I don't know whose fault this is and I don't even care; it doesn't matter to the end user. I'm sure the guys at Parsek will check these pages out and try to make them better. When I first saw the design, while I was still at Parsek, I wasn't sure the title on the right would work, but now that I'm surfing the page I actually think it does. There is one problem there though – if you visit this page (screenshot) you'll see the title "About us" four times in a very small area. It's nice to know where you are, but isn't this a little bit too much?

NLB

The next big redesign is the biggest Slovenian bank, which redesigned its site after quite a while. I don't really know what to say about the redesign – the last one was horrendous, so this one is easy on the eye. It too got wider and was restructured so people can find relevant information more easily. The home page lists all the products for residents and businesses so you can access them directly.

While the design got overhauled, the backend didn't – or if it did, it got its fashion tips from the 90s. Validation returns a lot of errors and, prepare for a shock, the encoding is iso-8859-2. The number of non-semantic elements is significant and inline scripts are there too (<SCRIPT language=JavaScript>).

The most interesting thing about the new page is that it now uses "friendly URLs" – and how utterly broken they are. You could also say this page is a textbook case of how wrong things can go when you don't think them through. You'll have two pages, one at /nalozbe-v-vrednostne-papirje and the other at /nalozbe-v-vrednostne-papirje1. I have no idea how that tells you anything about how the content behind these links differs. It would tell you more if the first were prefixed with /residential and the second with /businesses.

Another funny thing I noticed is how the banners are designed to look as if they weren't images but just HTML parts of the page. The reason I noticed is that I was on a Mac while checking the page, and since font rendering there is different, they look really weird. I think I might have seen the same difference on Vista with ClearType on.
