Archive for the ‘software’ Category

Enabling camera and microphone in Chrome on M1

Friday, January 21st, 2022

After installing my new MacBook Pro M1, I had an issue where Google Chrome would ask me for permission to let a site use the camera and microphone, but would then show them as inaccessible.

First-level googling suggested going to System Preferences > Security & Privacy > Privacy > Camera and ticking the box next to Google Chrome. Just one problem – Google Chrome was not listed, and there doesn’t seem to be a way to add a program to that list manually.

Another hit suggested upgrading Chrome and macOS – both were upgraded already.

I googled some more and found this report of the same problem with comments closed and no answer.

I tried a bunch of things in Chrome – settings, other sites, disabling/enabling the camera… nothing worked.

So I gave up and opened Safari to see if I could at least use the camera there. It worked out of the box: the camera and mic turned on and I was in the meeting. I then opened System Preferences and Safari was not listed there at all – I guess that’s what you get when you work in the same building as the OS guys. I went back to Chrome to see if it worked now, and Chrome asked me for permissions again – only this time it also appeared in the Privacy list, where I could now allow access to Camera and Microphone.

I had a feeling using Safari first might trigger this, but it was more of a Hail Mary than anything else really. The other option – removing Chrome with all the profiles and trying from scratch – did not sound appealing at all.

So now you know.

VMs on M1

Monday, December 27th, 2021

I have upgraded my laptop. It’s been a while (Mid 2014) and I felt like the new MacBook Pro is finally a computer I can use for a while. It’s got an SD card slot, an HDMI port, MagSafe and enough USB ports (even though they are all USB-C). What I’m missing is an easy way to connect my old DisplayPort screen, but I’ll fix that by upgrading the screen as well (it’s ~10 years old).

The machine is great – the screen, the silence even under load, the fingerprint reader, I love it all. I have not had any issues with apps due to it being Apple Silicon. What I have noticed is that some smaller utility apps I used have disappeared since I last did a fresh install – developers moved on, or decided not to support the new platform or the new APIs. I have mostly¹ found replacements, even if some are paid apps.

Reasoning

On my previous laptop I had all my development stuff running directly on the Mac. This was a problem every time I upgraded the OS, as random things would die and fixing them took a lot of time. So this time around I want to pack all my development stuff inside a Linux VM that would then hold either code directly or Docker containers.

My initial idea was to set up an x86_64 virtual machine, so that I could have an environment as close as possible to what I normally use in production, but installing it in UTM took forever, so I abandoned that idea for now.

Software

Step one was setting up some virtual machines to test how that would work.

I have previously used VirtualBox, but they have not yet decided to support the M1, so what I found and tested was:

  1. UTM,
  2. Parallels Desktop for M1 and
  3. VMware Fusion for Apple Silicon.

So I went about installing Ubuntu in all three environments. My source image was Ubuntu 20.04.3 LTS, with the machines set up as ARM with 8 GB of RAM and 4 cores. In the case of UTM, the system is set to QEMU 5.0 ARM VM (virt-5.0)² with the CPU set to cortex-a72 and Force Multicore checked.

Shared directory

After installing I looked at how I can share a directory from the host inside the VM:

  1. UTM

    I haven’t figured it out yet as it wanted me to install something on my Mac, so I gave up (for now).

  2. Parallels Desktop

    Default instructions are to reboot and then mount a CD from which you install the relevant tools. This went well and the directories were shared under /media/psf.

  3. VMware Fusion

    VMware requires you to install VMware Tools on Linux, after which you should get the mount automatically. I didn’t, so I had to add the line to /etc/fstab manually (see the sketch below). Going through fstab is nice, as you can mount the share anywhere you like.
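
For reference, a minimal sketch of such an fstab line, assuming the vmhgfs-fuse driver from open-vm-tools and /mnt/hgfs as an (arbitrary) mount point:

# mount all VMware shared folders under /mnt/hgfs at boot
.host:/ /mnt/hgfs fuse.vmhgfs-fuse allow_other,defaults 0 0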

Performance

With that solved, I did a quick test of speed inside the VMs. Nothing comprehensive, just a quick feeler for what kind of performance I can expect. To do that I ran the following Python code, with output mimicking that of the ping command:

import statistics
import timeit

# Run 10 batches of 100 PBKDF2 calls each; record the average
# per-call time of each batch in milliseconds.
l = []
for i in range(10):
	l.append(timeit.timeit("hashlib.pbkdf2_hmac('sha256', b'password', b'salt', 100000)", "import hashlib", number=100) / 100 * 1000)

# Summarize across the 10 batches, ping style.
print("min/avg/max/stddev = {:.3f}/{:.3f}/{:.3f}/{:.3f} ms".format(
	min(l),
	statistics.mean(l),
	max(l),
	statistics.pstdev(l)
))

Times:

  1. UTM

    min/avg/max/stddev = 15.442/15.479/15.510/0.018 ms (python 3.8.10)

  2. Parallels Desktop

    min/avg/max/stddev = 14.582/14.659/14.809/0.077 ms (python 3.8.10)

  3. VMware Fusion³

    min/avg/max/stddev = 14.596/14.632/14.713/0.031 ms (python 3.8.10)

  4. Host

    min/avg/max/stddev = 23.598/24.297/25.038/0.553 ms (python 3.8.9)

  5. MacBook Pro (Mid 2014)

    min/avg/max/stddev = 308.944/316.638/326.198/4.993 ms (python 3.5.2)

    min/avg/max/stddev = 64.027/64.870/66.026/0.658 ms (python 3.8.8)

I have no idea why the VMs are faster than the host – my guess is that the VMs are running on performance cores, so Python also gets a performance core, while Python on the host runs on an efficiency core. I haven’t yet figured out how to confirm this, though.
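
One possible way to check this – an untested assumption on my part – is macOS’s taskpolicy, which can start a process with background QoS: running the benchmark as taskpolicy -b python3 bench.py (bench.py being the script above) should steer it onto the efficiency cores, and comparing that against a plain run would show whether core type explains the gap.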

Update: I added times from my old laptop. Oddly, Python 3.5 was way slower there, while there is no difference between 3.5 and 3.8 on ARM (tested on Parallels).

Result

I think UTM could be great, especially at its low price (free online, 9.99€ on the App Store), but everything is a bit finicky. If you can use one of the provided images and you don’t need to set up directory sharing, it’s surely a good option.

I don’t yet have a preference between Parallels and Fusion – Fusion has the better folder-sharing approach but kidnaps the cursor, which is quite annoying. At the moment Fusion is free (the full price for Fusion 12 Player is 135.53€ and Pro is 180.98€ in the Europe store), while Parallels is already a paid product (99.99€ one time or 79.99€ per year, 99.99€ per year for Pro). As far as I have read, VMware does not intend to support anything that is not ARM, while Parallels already has that support, which might make me go that way.

Next up

Next things I want to figure out:

  • does running an x86_64 VM make any sense?
  • is it possible to mount a VM HDD without running the VM?
  • can PyCharm be set up to work with this setup?

If you’re interested in anything else, let me know.

  1. Anybody know of a replacement for PresenterMate?
  2. For some reason I could not make the install work on a higher version (5.1, 6.x).
  3. I initially thought Fusion was much slower, but I likely screwed something up when measuring.

Benchmarking web servers

Monday, August 25th, 2014

We recently bought a server at CubeSensors and are now putting it through its paces.

One of the things I was looking at was our HTTP server / load-balancing setup. We’re currently using an Nginx server in front of Tornado instances, and I was wondering if there is something more specialized – and thus likely faster – that would also allow realtime upstream configuration changes. I found HAProxy.

I did a simple setup on the server (Ubuntu 14.04) to test things:

  • Nginx 1.6.1
  • HAProxy 1.5.3
  • Tornado 4.0.1

Tornado is running a very simple app that only returns OK at /; there are two instances, each started with 2 forks (server.start(2)). HAProxy is set up with option http-keep-alive and a 60s keep-alive timeout, pointing at the two Tornados. Nginx has two virtual hosts: one proxying directly to the Tornados with keepalive 4, the other to HAProxy with the same keepalive setting; both use proxy_http_version 1.1 and a cleared Connection header. The server timeout is set to 30s, the client timeout to 10s.
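
A minimal sketch of the direct Nginx→Tornado virtual host – the ports and upstream name are made up here, not taken from the real config:

upstream tornados {
	server 127.0.0.1:8001;   # hypothetical ports of the two Tornado instances
	server 127.0.0.1:8002;
	keepalive 4;             # keep up to 4 idle upstream connections per worker
}

server {
	listen 80;

	location / {
		proxy_pass http://tornados;
		proxy_http_version 1.1;
		proxy_set_header Connection "";   # clear Connection so keepalive works
	}
}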

The first idea was to test with ab, using something like ab -n 10000 -c 100 URL. On a machine running nothing but the tests, the times varied way too much from one run to another. I also noticed that ab makes HTTP 1.0 requests.

The second tool I tried was siege, which has a few options ab doesn’t and also makes HTTP/1.1 requests. I used siege -c 1000 -r 100 -b URL, and the Nginx–Tornado combination manages a 50% higher transaction rate than any combination with HAProxy. But I think this says more about my ability to configure HAProxy than it does about HAProxy itself – I keep getting a bunch of stale connections (in TIME_WAIT state) hanging around even with option forceclose.

Resolution: sticking with Nginx. It can log the POST body, and upstream configuration changes will be handled by moving the upstream block into an included file that can be changed and reloaded.
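
That’s just Nginx’s stock include mechanism – a sketch, with the path made up:

# in the http block of nginx.conf – upstreams live in their own file
include /etc/nginx/upstreams.conf;

# after editing upstreams.conf, apply the change with a graceful reload:
# nginx -s reload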

Taking your webapp offline

Sunday, August 21st, 2011

I’ve recently done a few things that would benefit from having an offline mode. Empowered by Appcache Facts, I tried to make the newly published La Vuelta 2011 results page an offline-capable app.

Goal

The goal of this exercise is twofold:

  1. To force caching of resources that slow down the loading of your page.
  2. To actually be able to use the app when offline.

Solution

It’s as simple as writing a cache manifest. It pays to name it [something].manifest, as Apache already serves the correct header with that suffix. What you do is list all the files you want cached, add stuff you allow on the network (check Appcache Facts and Dive Into HTML5 – Offline Web Applications) and it all works. Except it doesn’t.

CACHE MANIFEST
# version 1

CACHE:
resource.js

NETWORK:
*
http://*
https://*

AJAX stuff

If you’re using AJAX to get data from a server using jsonp/script, your framework of choice (jQuery in my case) will probably default to a no-cache approach to loading it. This means it will request the file plus some suffix to prevent browser caching – and that means the resource will not be available when you’re actually offline.

You can use navigator.onLine to switch cache on/off, but I suggest you first try requesting the no-cache resource and, if it errors out, request the cached resource. The benefit is that even if you are online but the server is not, users will still see the data and can use the app.

$.ajax({
	dataType: 'script',
	url: 'resource.js',
	cache: false, // cache-busting suffix – goes to the network
	success: successHandler,
	error: function () {
		// network request failed (offline or server down):
		// retry without the suffix so the appcache copy can be served
		$.ajax({
			dataType: 'script',
			url: 'resource.js',
			cache: true,
			success: successHandler
		});
	}
});

iPad issues

Fixing the AJAX meant that it worked properly on the desktop (test with Firefox, which has an offline mode) and in Safari on the iPad. The trouble started when I added the app to the Home Screen – data failed to load. It was the same app as in Safari and it should have worked.

After some debugging I found out that the data actually was loaded but the eval failed (I was using $.getScript). Some weird testing showed that the problem was a newline character in the data. As I really liked the newline there, I added some code to the error handling that removes the newline, evals the data and then runs the success handler. And it worked!

$.ajax({
	dataType: 'script',
	url: 'resource.js',
	cache: false,
	success: successHandler,
	error: function () {
		$.ajax({
			dataType: 'script',
			url: 'resource.js',
			cache: true,
			success: successHandler,
			error: function (xhr) {
				// the data arrived but eval choked on a newline:
				// strip it and eval manually
				eval(xhr.responseText.replace(/\n/g, ''));
				successHandler();
			}
		});
	}
});

Debugging

It’s somewhat hard to debug offline stuff. I suggest using a clearly visible version indicator in the file to make sure you know which version you’re looking at. Also remember that the first time you load the app after changing a file and the cache manifest, it is still served from cache. At the same time the manifest is checked, and on the iPad the files are downloaded after everything else is done (app finished loading and rendering).
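
The applicationCache object also fires events you can watch instead of guessing – a minimal sketch (auto-reloading like this is my own choice for illustration, not something the results page does):

// when a new manifest version has been downloaded in the background,
// swap it in and reload so you're looking at the latest files
window.applicationCache.addEventListener('updateready', function () {
	if (window.applicationCache.status === window.applicationCache.UPDATEREADY) {
		window.applicationCache.swapCache();
		window.location.reload();
	}
}, false);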

It works!

After these problems were solved I used the aforementioned navigator.onLine to hide stuff that normally comes from the network but is not relevant to the offline app (banners, share links, like/+1 buttons) and you can now check the La Vuelta 2011 results page.

The Cross-Origin Resource Sharing (CORS) knot

Tuesday, August 3rd, 2010

As a developer at a company that does a lot of cross-domain posting, I was happy to see CORS finally happening. I’ve authored the jQuery.windowName plug-in to add support for cross-domain posting in the past, and this was my way out – no more hacks as browser support for CORS grows.

Yeah right.

CORS is supported on the XMLHttpRequest object in Gecko and WebKit. You just request something on a different domain, and if the response has the right header with the right value, you get the data. In IE you have to use a new object called XDomainRequest, which is somewhat similar to XMLHttpRequest but without readyState, the onreadystatechange event handler and basically any header getters/setters.
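
In code the fork looks roughly like this – a minimal sketch with placeholder callbacks, using the kind of naive status check that the rest of this post pokes holes in:

function crossDomainGet(url, onload, onerror) {
	if (window.XDomainRequest) {
		// IE: a separate object with only onload/onerror to go by
		var xdr = new XDomainRequest();
		xdr.onload = function () { onload(xdr.responseText); };
		xdr.onerror = onerror;
		xdr.open('GET', url);
		xdr.send();
	} else {
		// Gecko/WebKit: plain XMLHttpRequest; the server must send the
		// Access-Control-Allow-Origin header for the response to be readable
		var xhr = new XMLHttpRequest();
		xhr.onreadystatechange = function () {
			if (xhr.readyState === 4) {
				// naive: as the tables below show, status 0 can mean many things
				(xhr.status >= 200 && xhr.status < 400 ? onload : onerror)(xhr.responseText);
			}
		};
		xhr.open('GET', url, true);
		xhr.send();
	}
}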

Sounds ok and when I first tried it I was quite happy with how it worked. But unfortunately nothing is that simple in the land of browsers. When it fails, it’s a whole new game…

Gecko (Firefox)

Same domain request:
  • 200 response: xhr = {status: 200, statusText: ‘OK’}
  • 304 response: xhr = {status: 304, statusText: ‘Not Modified’}
Cross domain request with proper headers:
  • 200 response: xhr = {status: 200, statusText: ‘OK’}
  • 304 response: xhr = {status: 0, statusText: ‘Not Modified’}
Cross domain request without proper headers:
  • 200 response: xhr = {status: 0, statusText: ‘OK’}
  • 304 response: xhr = {status: 0, statusText: ‘Not Modified’}

Webkit (Safari, Chrome)

Same domain request:
  • 200 response: xhr = {status: 200, statusText: ‘OK’}
  • 304 response: xhr = {status: 304, statusText: ‘Not Modified’}
Cross domain request with proper headers:
  • 200 response: xhr = {status: 200, statusText: ‘OK’}
  • 304 response: xhr = {status: 304, statusText: ‘Not Modified’}
Cross domain request without proper headers:
  • 200 response: xhr = {status: 0, statusText: ‘’}
  • 304 response: xhr = {status: 0, statusText: ‘’}

IE

Same domain requests made with XMLHttpRequest, cross domain requests made with XDomainRequest.

Same domain request:
  • 200 response: xhr = {status: 200, statusText: ‘OK’}
  • 304 response: xhr = {status: 304, statusText: ‘Not Modified’}
Cross domain request with proper headers:
  • 200 response: onload invoked
  • 304 response: onload with cache, onerror otherwise
Cross domain request without proper headers:
  • 200 response: onerror invoked
  • 304 response: onerror invoked

Where’s the knot?

The problem with this is the lack of error handling. Firefox will return status 0 on all errors, but will indicate what the response was with statusText. It will also return 0 on a 304 response, which is a bit weird. WebKit on the other hand only returns status 0 on failed requests, but without telling you what’s wrong – you get the same error whether a preflight request fails, the server returns a 500 or the real request fails. IE, with its own separate XDomainRequest implementation, simplifies this to two handlers – onload and onerror. Unfortunately it fires onerror when a 304 is returned on an empty cache.

jQuery issues

If you’re working with jQuery (and possibly other libraries) you’re in for a treat. Opera 9.5 falsely reports status = 0 when an HTTP status 304 is returned from the server, and for this reason jQuery treats status === 0 as a successful request. Thus you have no way of knowing whether your request was a success or a failure.

Untying the knot

If you intend to use jQuery’s built-in methods you’ll need to do your own bookkeeping:

  1. Know when a request is a CORS request
  2. Wait for the complete event
  3. If data is empty trigger error handling

For this to work you obviously always need some data. If empty data is valid you’re screwed.
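
Put together, a minimal sketch of that bookkeeping – the same-origin test and the “empty data means failure” convention are assumptions for illustration:

function corsGet(url, success, error) {
	// 1. know when a request is a CORS request
	var sameOrigin = url.indexOf('http') !== 0 ||
		url.indexOf(location.protocol + '//' + location.host) === 0;
	$.ajax({
		url: url,
		// 2. wait for the complete event - success/error are unreliable here
		complete: function (xhr) {
			var data = xhr.responseText;
			if (!sameOrigin && !data) {
				error(xhr);      // 3. empty data on a CORS request -> error handling
			} else {
				success(data);
			}
		}
	});
}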

Easy way out

You might have noticed above that I made a jQuery.windowName plugin. It still has the same name, but now also supports CORS requests and does some of the CORS-related bookkeeping. As the latest version is still in testing, you can get it directly at http://friedcell.net/js/jQuery.windowName/jQuery.windowName.plugin.1.0.0.js (test page).


An Event Apart Seattle review – day 1

Wednesday, June 2nd, 2010
Image: An Event Apart Seattle (by Heather L, via Flickr)

“This is your pilot speaking. We’ve been notified that the passenger bridge has a flat tire.” were the first few words after landing in Chicago, the third airport of the day. I left Ljubljana at 7:15 CEST towards Amsterdam, switched planes and continued towards Chicago. Fortunately the issue with the gates was small enough not to endanger my connection for the last leg – to Seattle, where I landed around 16:15 PST (around 18 hours after taking off from Ljubljana).

I came to Seattle to attend An Event Apart, a conference I had wanted to attend since it was first announced. Meeting people like Jeffrey Zeldman and Eric Meyer and learning from them is just amazing. But first I needed to get to the hotel in downtown Seattle and get some sleep.

After a really long day, the light rail ride from the Sea-Tac airport to downtown Seattle was really amazing. Going through the suburbs, enjoying the displays of American culture – the highways, the trucks, the architecture, the people. There were only a few of us on the train at the first stop, but at the next stop loads of people got on wearing bright green shirts and scarves, and even a few kids had their faces painted green/blue – fans of Seattle Sounders FC. I thought to myself – nah, it can’t be soccer.

As I arrived a day early I had a day to spare to see the city. I woke up late and then visited the Space Needle – amazing views even in cloudy weather. I didn’t take the old-school monorail built for the 1962 world fair, thinking I’d do that some other day. After registering at the wonderful Bell Conference Center (thanks to Marci for resolving the issues, and sorry I woke you up, Gašper) I walked through town to the Pike Place Market and the high street stores – and wandered into a huge anime convention (Sakura-Con) and a bunch of kids (not even teenagers?) wearing totally inappropriate clothes.

The day ended with a karaoke meet-up set up by Jeff Croft. I met Mike Davidson of Newsvine fame (thanks for the beer!) and heard Andy Clarke and Jeremy Keith sing Ace of Spades together.

The conference started on Monday with breakfast – a really good one. Then came two days of talks and an additional day of workshops. I’ll review them in varying depth.

Jeffrey Zeldman – Put Your Worst Foot Forward

I had wanted to see Jeffrey talk for some time now. I also got to meet him just before the conference started, which made me want to see this even more, as he’s a really friendly guy with years of experience to share. And the talk proved to be all that and more. Explaining his mistakes from the past and the ways he is solving them – teaching what to do with anti-patterns (to quote Jeremy Keith) – was really effective, and I think we were all nodding, as it seems we all make the same mistakes.

The checklist

  1. Know before you go.
  2. Keep expectations on track and in sync.
  3. Constantly course-correct.
  4. Tell the TRUTH.
  5. Phrase it from the client’s/boss’s point of view.
  6. Report bad news before the client/boss notices it.
  7. Have a recovery plan.
  8. Apologize – but never denigrate yourself or your team.
  9. Have an exit strategy.
  10. Know when to quit.

Takeaways

Working with clients is a long-distance relationship – out of sight, out of mind. You need to put more energy into syncing, and you need to make sure you see things with their eyes. And as in any relationship – you need to know when to leave.

Nicole Sullivan – Object Oriented CSS

Nicole used to work for Yahoo! and recently helped Facebook optimize their stylesheets, so you might say she has some experience in building and maintaining CSS systems. But unfortunately it also means that a lot of us cannot relate to some of the stuff she is saying. One of the first thoughts I had was that she might be a good person to write “CSS – The Good Parts” – she even quoted Douglas Crockford in her presentation.

Controversial

There were a few points that I couldn’t agree with when she said them, so I decided I would think about them later. I’m not saying they’re bad practice, I just don’t think they’re good advice for most of us.

  1. Don’t use specificity was one of the things that seemed like throwing away a really powerful tool because some people can’t handle the power. I could probably agree with this in big systems, but it sounds like one of the reasons to adopt Java – it’s easy for beginners to start doing productive stuff and hard for them to screw things up.
  2. Don’t use .class1.class2 as that causes all sorts of cross-browser issues. I would classify this as good stuff but it seems only IE6 is affected. So I couldn’t care less…
  3. Hacks shouldn’t change specificity as you’re not using specificity at all. That means that Modernizr and all other tools that add a class to the HTML/body elements are out of the question. The solution – using _property:value; – was something I don’t feel good about – using such invalid hacks just seems wrong.
  4. To define headers use h1, .h1 {} and in HTML use <h2 class="h1">…</h2> if necessary. That just seems wrong even though I agree that reusing styles is important.
  5. Avoid specifying location when targeting elements. When you do that moving an element into a different context loses the styles.

Good stuff

This list is what I think can mostly be used today for most of the people writing CSS. It is not a set of rules to abide in every case, but it should be your main modus operandi.

  1. Reuse code as much as possible. If you’re copy pasting, you’re doing it wrong. One of the ways to do this is by following the second rule.
  2. Don’t use ids, inline styles and !important to write easily applicable code. You should not write location-specific code. Don’t use .sidebar ul, but rather add a class (e.g. sidenav) to that ul and use .sidenav for the rule. Smaller CSS yes, but it will also get you bigger HTML (and classitis?).
  3. Think in modules and provide styles that are easily reusable by just using a class name in HTML. Only elements that are strictly bound to modules should have location specific selectors (but with .class, not #id).
  4. Put defaults into .class and use elm.class to apply specifics. Many elements can have .error – and all errors should have a similar look, whether they’re divs, lists or paragraphs.

Wish-list

  1. Variables are something a lot of people want. What I want is for them to be simple enough that people can’t abuse them to make CSS a programming language. The proposed syntax:
    • To set the variable: @variables hex {myblue:#006;}
    • To access the variable a {color:hex.myblue;}
  2. Prototypes are a really good way of providing defaults to a lot of elements at once and gets rid of rules that have many comma-delimited selectors. The proposed syntax:
    • Set a prototype with allowed child nodes: @prototype .box {margin:10px;children:b,.inner;}
    • Add styles to child nodes: .box .inner {position:relative;}
    • Use a prototype: .weatherBox {extends:.box;}
    • Under the hood this translates to: @prototype .box, .weatherBox {…} .box .inner, .weatherBox .inner {…} .weatherBox {…}
    • Also allows checking code: .leftCol .inner {color:red;} is invalid as .inner is part of .box prototype and .leftCol does not extend it
  3. Mix-ins were skipped in the presentation as she was running out of time. You can think of them as small pieces of repeatable code that is only set in one place and used in others. Syntax:
    • Set a mixin: @mixin .clearfix {zoom:1}
    • Any selector that matches the mixin selector modifies it: .clearfix:after {content:".";display:block;height:0;clear:both;visibility:hidden;}
    • Include a mixin: .line {include:.clearfix;}
  4. Prototype sub-nodes were also skipped. They seem to allow calculations based on values defined in different sub-nodes of prototypes – they’re not meant to access computed style:
    • Use calculations: .box .bottom {height:5px;} .box .bl {height:10px;margin-top:.bl.height-.bottom.height;}

Some of these changes will require us to write code for new and old browsers independently or to write a “compiler” that will compile code for older browsers. Is there one already written?

Takeaways

Building a CSS system means thinking about the selectors (and not the properties), and Nicole probably knows more than anyone else about that subject (to make you feel more comfortable, Jeremy Keith of Clearleft said they arrived at the same conclusions independently). Another, probably even more important takeaway is that you should think about flexible modules – sometimes stuff is more similar than it might seem at first. If you write CSS for a module that supports variations, you’ll write less code that will apply faster, and your visitors will be happy. If you want to look into an object-oriented CSS framework, check Nicole’s OOCSS project on GitHub.

Dan Cederholm – The CSS3 Experience

Dan told us that we can and should use CSS3 now in non-critical areas such as experience, visual rewards, feedback and movement, for users with the latest & greatest browsers. Not so much progressive enhancement as progressive enrichment.

Some ideas for use of CSS3:

  • Hover on items with RGBa background, a text-shadow and a border-radius with a transition (Sam Brown style).
  • Hover with opacity change. Create a single image, make it transparent normally and less transparent on hover. With a transition of course.
  • Multiple backgrounds to achieve a Silverback parallax effect.
  • Enriching form elements with a background gradient and border radius.
  • Making form buttons prettier with text-shadow, border-radius, box-shadow and a background gradient. Animate the focus styles.
  • Use a scale transform with box-shadow and a transition for hover on images in a gallery.
  • Rotation on hover by a single degree, with a transition.

Takeaways

You can use CSS3 today, but know what others are missing so they don’t miss critical visual cues. Be subtle with these things or we’ll end up back at using transitions to make stuff blink.

Luke Wroblewski – Mobile First!

Web products should be designed for mobile first. (Even if no mobile version is planned.)

Mobile is a big opportunity for growth, but you need to think about different things than when you’re doing web development, like:

  • Multiple screen sizes and densities
  • Performance optimization
  • Touch targets, gestures, and actions
  • Location systems
  • Device capabilities

Designing for a smaller screen size will make you focus on core actions. To do that you’ll need to know your users. You should focus on the iPhone, not (only) because of its popularity but also because it sets design expectations very high. It also doesn’t allow any hidden features tucked away in menus opened by hardware buttons – everything needs to be on the interface. When designing you should define device groups, create a default reference design and define rules for content and design adaptation – opt for web standards and a flexible layout. Technically, you need to take care to reduce requests and file size. You should take advantage of HTML5, which allows you to cache things locally and gives you the canvas tag, which might sometimes be smarter than loading images. Think outside your web box – fewer cross-browser issues mean some new tricks come into play (like data URLs).

The context of using mobile applications is different. It’s not a long time sitting in front of a computer but rather quick bursts of attention everywhere, using mostly just one hand.

Mobile is innovating fast and you should think about the new capabilities to innovate yourself. Touch interfaces mean no hovers, thinking about bigger touch targets and a bunch of gestures that differ from platform to platform. Location information (from GPS, WiFi, cell towers or IP) is almost ubiquitous and can be used for positioning and filtering, but you should not forget other innovations that are less obvious like orientation information, audio & video input and output, compass, push notifications, Bluetooth connections, proximity sensors, ambient light detectors,…

Takeaways

You need to think about mobile because it’s an opportunity for growth, the constraints will give you the focus you need to make a great product and the capabilities will drive innovation in your product. But don’t forget that the design considerations are different.

Aarron Walter – Learning To Love Humans—Emotional Interface Design

There’s a lot of talk about the usability of web pages, but is that enough? Usable is just edible. Would you say you go to a restaurant because their food is edible? We have a few options for triggering an emotional response to our designs – one of them is giving our sites personality. It’s a platform for emotional response, as we like to empathize and personality invites empathy.

People will forgive shortcomings, follow your lead and sing your praises if you reward them with positive emotion.

You can use treats to give users something more. Let users discover new things. It’s the little positive surprises that make us happy.

Takeaways

Usability is not enough, we need to think about designing pleasurable experiences. We need to create an emotional response from our users and make them want to come back.

Jared Spool – Anatomy of a Design Decision

How do we make design decisions and what kind of designs exist? There are a few decision styles:

  1. Unintentional design – when users will put up with whatever we give them and we don’t care about support costs and frustration (think airlines & hotels).
  2. Self design – works great when users are like us and we are our own users (think 37signals).
  3. Genius design – when we have domain knowledge that informs our decisions and we’re solving the same design problems repeatedly.
  4. Activity focused design – when we can identify users and record their activities to go beyond our previous experiences.
  5. Experience focused design – when we want to improve our users’ complete experiences, in between specific activities.

There are ways of moving up the chain:

  • “Eat your own dog food” to get from unintentional to self design.
  • Do usability testing to get from self design to genius design.
  • Field studies get you from genius design to activity focused design.
  • Personas & patterns help you get to experience focused design.

There are two fundamentally opposite ways we can make decisions:

  1. Rule-based decisions are based on design books, brand identities and other rules. They don’t allow exceptions and ignore the knowledge of the person deciding.
  2. Informed decisions are based on design patterns and put the person deciding behind the wheel. They are good for handling exceptions.

With this in mind we can look into what is needed to do one or the other:

  1. Dogma
  2. Methodologies
  3. Process
  4. Techniques
  5. Tricks

The first two are typical of rule-based decision making, as they rely on a set of rules and don’t require a lot of knowledge from the person deciding. Techniques and tricks, on the other hand, come with experience and a lot of domain knowledge.

Takeaways

You need to know which decision style you’re using and encourage informed decisions, avoiding rule-based decision making. Techniques and tricks are more effective than methodologies and dogma even though/because they’re harder to come by.

Pete LePage – Help Us Kill IE6

A sponsored talk that didn’t really turn out as bad as some I’ve seen at other conferences (e.g. FOWD). Pete presented the history of IE and some IE9 features. He also suggested that we let IE6 users know that they might want to upgrade their browser, as Facebook does.

MediaTemple Party

The party was nice – being fashionably late meant that it wasn’t too crowded, but also that most of the snacks were already gone. I had a brief chat about designing Drupal 7 with Mark Boulton, met Aarron Walter and Petra Gregorová, formerly from Slovakia, and a policeman from Denmark who does web development in his spare time. And I grabbed a (mt) beer and a coaster as a souvenir.
