First-level googling suggested going to System Preferences > Security & Privacy > Privacy > Camera and ticking the box next to Google Chrome. Just one problem – Google Chrome was not listed and there didn’t seem to be a way to add a program to that list manually.
Another hit suggested upgrading Chrome and macOS – both were upgraded already.
I googled some more and found this report of the same problem with comments closed and no answer.
I tried a bunch of things in Chrome and had no success – settings, other sites, disabling/enabling camera… nothing worked.
So I gave up and opened Safari to see if I could at least use the camera there. It worked out of the box – the camera and mic turned on and I was in the meeting. I then opened System Preferences and Safari is not listed there at all – I guess that’s what you get when you work in the same building as the OS guys. I went back to Chrome to see if it works now and Chrome asked me for permissions again – only this time it also appeared in the Privacy list, where I could now allow access to Camera and Microphone.
I had a feeling using Safari first might trigger this, but it was more of a Hail Mary than anything else really. The other option – removing Chrome with all the profiles and starting from scratch – did not sound appealing at all.
The machine is great – the screen, the silence even when under load, the fingerprint reader, I love it all. I have not had any issues with apps due to it being Apple Silicon. What I have noticed is that some smaller utility apps I used have disappeared since I last did a fresh install – developers moved on, or decided not to support the new platform or the new APIs. I have mostly[1] found replacements, even if some are paid apps.
On my previous laptop I had all development stuff running directly on the Mac. This was a problem every time I upgraded the OS, as random things would die and fixing them took a lot of time. So this time around I want to pack all my development stuff inside a Linux VM that would then hold either code directly or Docker containers.
My initial idea was to set up an x86_64 virtual machine, so that I could have an environment as close to what I normally use in production, but installing it in UTM took forever, so I abandoned that idea for now.
Step one was setting up some virtual machines to test how that would work.
I have previously used VirtualBox, but they have not yet decided to support the M1, so what I found and tested was UTM, Parallels and VMware Fusion.
So I went about installing Ubuntu in all three environments. My source image was Ubuntu 20.04.3 LTS, the machines set up as ARM with 8GB of RAM and 4 cores. In the case of UTM, the system is set to QEMU 5.0 ARM VM (virt-5.0)[2] with CPU set to cortex-a72 and Force Multicore checked.
After installing, I looked at how I could share a directory from the host inside the VM:
UTM: I haven’t figured it out yet as it wanted me to install something on my Mac, so I gave up (for now).
Parallels: Default instructions are to reboot and then mount a CD from which you install the relevant tools. This went well and the directories were shared under /media/psf.
VMware: Fusion requires you to install VMware Tools on Linux and you should get the mount automatically, but I didn’t, so I had to add the line to /etc/fstab manually. Going with fstab is nice, as you can mount the share anywhere you like.
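For reference, the fstab line I mean looks roughly like this (mount point and options are the commonly documented open-vm-tools defaults, not copied verbatim from my machine):

```text
# /etc/fstab – mount all VMware shared folders via the open-vm-tools FUSE client
.host:/   /mnt/hgfs   fuse.vmhgfs-fuse   defaults,allow_other   0   0
```

After adding it, a plain `mount /mnt/hgfs` (or a reboot) should bring the shares up.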
With that solved, I did a quick test of speed inside the VMs. Nothing comprehensive, just a quick feeler to see what kind of performance I can expect. To do that I ran the following python code, output mimicking that of the ping command:
import statistics
import timeit

l = []
for i in range(10):
    # average time of a single pbkdf2 call, in milliseconds
    l.append(timeit.timeit("hashlib.pbkdf2_hmac('sha256', b'password', b'salt', 100000)", "import hashlib", number=100) / 100 * 1000)

print("min/avg/max/stddev = {:.3f}/{:.3f}/{:.3f}/{:.3f} ms".format(
    min(l),
    statistics.mean(l),
    max(l),
    statistics.pstdev(l)
))
Times:
min/avg/max/stddev = 15.442/15.479/15.510/0.018 ms (python 3.8.10)
min/avg/max/stddev = 14.582/14.659/14.809/0.077 ms (python 3.8.10)
min/avg/max/stddev = 14.596/14.632/14.713/0.031 ms (python 3.8.10)
min/avg/max/stddev = 23.598/24.297/25.038/0.553 ms (python 3.8.9)
min/avg/max/stddev = 308.944/316.638/326.198/4.993 ms (python 3.5.2)
min/avg/max/stddev = 64.027/64.870/66.026/0.658 ms (python 3.8.8)
I have no idea why the VMs are faster than the host – my guess is that the VMs are running on performance cores, so python also gets a performance core, while python on the host runs on an efficiency core. I haven’t yet figured out how to confirm this though.
Update: I added times from my old laptop. Oddly python 3.5 was way slower, while there is no difference in times between 3.5 and 3.8 on arm (tested on Parallels).
I think UTM could be great especially with its low price (free online, 9.99€ on the App Store), but everything is a bit finicky. If you can use one of the images provided and you don’t need to set up directory sharing, it’s surely a good option.
I don’t yet have a preference between Parallels and Fusion – Fusion has a better folder-sharing approach but kidnaps the cursor, which is quite annoying. At the moment Fusion is free (full price for Fusion 12 Player is 135.53€ while Pro is 180.98€ in the Europe store at the moment), while Parallels is already a paid product (99.99€ one time or 79.99€ per year, 99.99€ per year for Pro). As far as I have read, VMware does not intend to support anything that is not ARM, while Parallels already has that support, which might make me go that way.
Next things I want to figure out:
If you’re interested in anything else, let me know.
So it is weird to me that we haven’t become more wary of what people we know tell us. One would think that in the age where we don’t adopt any conclusions without checking with multiple sources, we’d do the same when information is shared privately – and one would be wrong. In other words – the government is lying to us, the media is lying to us, but what we hear at the bar is all true.
I’ve recently tried to become less susceptible to these kinds of one-sided stories and try to check the other side before forming an opinion. I’ve been called out on adopting other people’s opinions a few times in the past and I’m trying to be better at this. But this means work, and two additional variables attached to every piece of information you store in your brain – source and reliability. So now whenever I hear something I try to check it before I store it in memory – and if I can’t, I store it as a rumour with a low reliability score, just as I would when I hear something on the news.
Adopting this has made me more content with myself, but also made it harder to converse with people who refuse to question things they have been told.
So even though they look the same based on the big claims on the container, they are quite dissimilar with the second one using more fat (8,7g vs 5,4g) and less sugar (24g vs 27,4g) resulting in higher energy value. I’m also a bit annoyed that it is 10g lighter even though it seems it’s using the same container.
Taste wise and texture wise I prefer the first one – it tastes a bit sweeter and creamier. I’ll see if the second turns creamier with time. Oddly, the second has a somewhat bitter taste as if the hazelnuts were a bit over and not really perfect. Might be a batch issue or an overall quality issue, who knows. Either way, I’d buy no.1 again, keeping no.2 in mind only as a backup.
That might sound a bit harsh, but I always felt that as the organiser of an event (I was involved with Spletne urice – a weekly meetup – for quite a while) my job is to provide people with as much value as possible to show that I respect their time and effort to come sit in a hall for an hour or so and listen to something I consider important/relevant[1].
As we didn’t have meetups during the summer (fewer people in town + our space was closed), this meant that every season would start off with me going through all the possible topics I could think of that I felt had developments relevant to the community, brainstorming topics with other senior people in the community and then thinking of companies and people who could be good at presenting these topics.
Unfortunately Slovenians don’t really want to speak in public too much, so a lot of time was spent convincing people to actually present. If I started the season with 20 topics and matching people, I could begin scheduling right away; when people said “maybe in a few months” I set a date for them and kept reminding them. This was an ongoing thing, as new topics and relevant speakers would pop up during the season. Because you can’t fill all the slots this way, I had a set of “evergreen” topics and people who could present them to fill it all up – this also helps in months when you have less time, but it does mean you owe people.
I almost never let people write their own talk descriptions and titles. While I did ask them for a description it was more of a way to see what they want to talk about and the text I wrote was what I wanted them to talk about. This meant that I would give back suggestions on how to make the talk more relevant to the crowd and also to set the expectations – as the meetups were on the broad topic of web technologies, a good narrow description would pull in listeners that would otherwise not have come. For people who have not presented before or felt they might not do a good job I offered even more help – checking their slides, possibly guiding them on how to tweak them for better effect.
What I see nowadays feels more or less unmanaged and even though that sometimes means some awesome odd-ball talks, it often has the following result:
All of the above means that more often than not these things just waste people’s time and look like the organiser and the speaker have no respect for the time of the people attending. I know this is not true most of the time, but having a bunch of people show up because they are hiring and go to meetups to find new employees (of which there are usually none) only masks the fact that the event should be run better and provide more value to the community[2].
The question then is – if you can’t do a meetup properly, do you find another person or a team to do it better? And if there is no one else, do you want to up your game or just quit? Is something better than nothing?
What I’ve seen (again) is that the state of car configurators and comparison tools has not progressed a lot since I first started seeing them in about 2000, when working on a website for Renault. That’s why I like buying cars from Asian brands (actually Japanese brands) – they have a small number of trims and not a lot of things you can add, which makes for a simple decision process. The European brands, however, will basically sell you an engine with a steering wheel and a set of wheels and then let you add on whatever you want/need so that you actually buy a car – I’m exaggerating here, but not long ago BMW had manual rear windows in the default trim.
The state of the art seems to be adding numbered codes to equipment and then listing them in packages, sometimes online even notifying the user about the incompatibilities when selected (which is sometimes fun – I still can’t configure a Renault car).
The funny thing when comparing trims/models is that there seem to be no links between items, which sometimes means that you’ll have a “Steering wheel” and some trims will not have it – because they have a “Leather steering wheel” a page lower (I intentionally selected these because you can’t solve this with a sort).
Modelling features seems somewhat simple:
If you try to normalize this you will quickly notice that it gets highly confusing when you have the same commercial title for a feature pack that includes fewer features and is priced lower because it only applies to high-end trims.
If you’ve ever done this before you can also imagine this conundrum makes for a very fun UI experience – some features only apply to automatic transmission models, some only to models with a certain engine or number of doors. Let’s not even start with special editions…
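To make the steering-wheel example concrete, here is a rough sketch of one way to model it (codes and names are invented for illustration): each feature carries the manufacturer's equipment code, and a richer feature records which simpler features it supersedes, so a comparison tool can tell that a trim with a leather steering wheel does, in fact, have a steering wheel.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Feature:
    code: str            # manufacturer's numbered equipment code
    name: str            # commercial name shown in the brochure
    implies: tuple = ()  # codes of simpler features this one supersedes

@dataclass
class Trim:
    name: str
    standard: set = field(default_factory=set)  # feature codes included

def trim_has(trim, code, catalogue):
    """True if the trim has the feature directly or via a superseding feature."""
    if code in trim.standard:
        return True
    return any(code in catalogue[c].implies for c in trim.standard)

catalogue = {
    "SW01": Feature("SW01", "Steering wheel"),
    "SW02": Feature("SW02", "Leather steering wheel", implies=("SW01",)),
}
base = Trim("Base", {"SW01"})
luxury = Trim("Luxury", {"SW02"})

print(trim_has(luxury, "SW01", catalogue))  # True – leather implies plain
```

With the implies links in place, a comparison table can collapse “Steering wheel” and “Leather steering wheel” into one row instead of showing a trim as mysteriously wheel-less.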
What do you do to deal with all this mess?
:first-child:nth-last-child(4) {} will only select the first child if there are exactly 4 elements. If you group that with the general sibling selector ~, you can also get all the other elements.
The idea of quantity queries has been around for a year or so and even though at first glance you might say “We’ve got flexbox for that now”, you’d only be right in certain cases. The thing that quantity queries bring to the table is the idea of being able to change the styling depending on the number of elements, which I guess people currently solve either on the backend or with javascript.
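For example (selector and class names are mine), a list could size its items to fill one row only when it holds exactly four of them:

```css
/* when the list has exactly 4 items, size them to fit one row */
.items li:first-child:nth-last-child(4),
.items li:first-child:nth-last-child(4) ~ li {
    width: 25%;
}
```

The first selector matches the first item only when it is also fourth-from-last, and the `~` part extends the same styling to its siblings.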
But that’s not the main reason I’m writing this – it’s the last talk of the night where a three-way selector solution was presented by Gorazd. He created a CSS version but had problems with the smoothness of the animation as he didn’t know where the selector was before the selection to move it to the selected position after a user interaction. He resorted to using javascript that basically only did some class switching. This immediately gave me an idea that a sibling selector could be used for that if the indicator had the same parent as the inputs and was positioned after them in the code. And today I made a proof-of-concept solution I’m calling “the three-way CSS-only selector”.
It uses three radio buttons, so the form is perfectly submittable, the labels also select properly, it animates properly and does not use javascript. It’s only been tested on the browsers I have on my Mac, so I can easily see it breaking in IE or mobile browsers – if anyone wants to fix that please go ahead and ping me to add your solution to this post. You can also check the solution on JSBin.
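A stripped-down sketch of the same idea (markup, class names and numbers are my own, not the JSBin code): the indicator sits after the radios inside the same parent, so :checked combined with the general sibling selector can move it.

```html
<style>
  .switch { position: relative; }
  .switch .indicator {
    position: absolute; bottom: 0; left: 0;
    width: 33.33%; height: 2px; background: #333;
    transition: left 0.2s;
  }
  #opt-b:checked ~ .indicator { left: 33.33%; }
  #opt-c:checked ~ .indicator { left: 66.66%; }
</style>
<form class="switch">
  <input type="radio" name="opt" id="opt-a" checked><label for="opt-a">A</label>
  <input type="radio" name="opt" id="opt-b"><label for="opt-b">B</label>
  <input type="radio" name="opt" id="opt-c"><label for="opt-c">C</label>
  <span class="indicator"></span>
</form>
```

Because the radios always have a :checked state, the CSS always knows where the indicator should be – which is exactly the information the javascript version was tracking with classes.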
Must haves beside wheels, frame, forks, saddle, pedals, handlebars and a reasonable price:
Optionals:
You can get the lights, but it’s way nicer if they look the part, cables inside the frame make the bike look nicer and a rear rack is useful for schlepping stuff around.
I’ve recently done another search and found that Schwinn Brighton comes close – it has all the must-haves, but unfortunately it’s not available in Slovenia.
Update: I found this bike in the Cube Town Pro black 2017.
Since then I found out that delivery guy ignorance and package mishandling doesn’t only happen here, it happens elsewhere too. I’ve even been told that the “this side up” sign is ignored, they only care where the label is so it can be scanned fast and automatically.
The prices on their price lists are also outright ridiculous. Most companies I’ve seen only offer fast shipping (1-2 business days in Europe), if you want anything cheaper/slower you go the way of the local post, which is unreliable and can’t guarantee anything (and will usually take 5-10 business days in Europe).
Returning packages is another thing that is far from solved – from the delivery companies to customs officers, so it’s really hard to create a good experience for the customer to return an item free of charge.
As usual with these huge systems, you need to get down to the people level to get things working – when you’re calling a person, not a number, when you call the delivery guy by name, everything works. For everything else you try to hack the system to get what you want…
One of the things I was looking at was our HTTP server / load balancing setup. We’re currently using an Nginx server in front of Tornado instances and I was wondering if there is something that is more specialized and thus likely faster that would allow for realtime upstream configuration changes. I found HAProxy.
I did a simple setup on the server (Ubuntu 14.04) to test things:
Tornado is running a very simple app that only returns OK at /; there are two of them and both are started with 2 forks (server.start(2)). HAProxy is set up with option http-keep-alive with a 60s timeout and points to the two Tornados, while Nginx has two virtual hosts, one linked directly to the Tornados with keepalive 4, the other to HAProxy with the same keepalive setting, both with proxy_http_version 1.1 and the Connection header removed. Server timeout is set to 30s, client to 10s.
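My haproxy.cfg boiled down to something like this (ports, names and the connect timeout here are placeholders, not my actual config):

```text
defaults
    mode http
    option http-keep-alive
    timeout http-keep-alive 60s
    timeout client 10s
    timeout server 30s
    timeout connect 5s

frontend www
    bind *:8080
    default_backend tornadoes

backend tornadoes
    server tornado1 127.0.0.1:8001
    server tornado2 127.0.0.1:8002
```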
The first idea was to test with ab using something like ab -n 10000 -c 100 URL. On a machine that is running nothing else but the tests, the times were changing way too much from one test to another. I also noticed that ab is making HTTP 1.0 requests.
The second tool I tried is siege, which has a few different options from ab and also makes HTTP/1.1 requests. I used siege -c 1000 -r 100 -b URL and the Nginx–Tornado combination manages a 50% higher transaction rate than any combination with HAProxy. But I think this says more about my ability to configure HAProxy than it does about HAProxy itself – I keep getting a bunch of stale connections (in TIME_WAIT state) hanging around even with option forceclose.
Resolution: sticking with Nginx. It can log the POST body, and upstream configuration changes will be handled by moving the upstream blocks into an included file that can be changed and reloaded.
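The idea is to keep the upstream block in its own file so only that file ever changes (paths and names are illustrative):

```text
# /etc/nginx/conf.d/upstreams.conf – regenerate this file, then `nginx -s reload`
upstream tornadoes {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    keepalive 4;
}
```

The main config picks it up with an `include` directive, and a reload swaps upstreams without dropping in-flight connections.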
I have no idea who decided that security questions were a good idea in the first place. The answer to the question can usually either be easily researched (maiden names, first teachers, first cars,…) or hard to remember. The first one is a problem because then they don’t really provide any security, only add friction to the process.
Remembering the answers is a bigger problem for a few reasons. Some of the questions are hard to answer in the first place – I for one have no idea what my first concert was, and even if I think about it, I don’t know whether, when setting the answer, I counted the one at school or the first one I bought tickets for myself, which band I wrote down or whether I wrote all of them, in what order and in what form. Geographic questions are also much fun because you never know how local your answer was – was it the street, town, county, state,… And because of the first issue, the easily researched questions get tricky answers that you never again remember unless they are really obvious, which again makes them easily breakable.
I can see some value in these kinds of questions when there is a person on the other side, but only if that person is trained to recognize people that make up stories and lie. But this doesn’t happen very often.
So if you want something to be secure, make users select stronger passwords. Don’t add shit that doesn’t add security, only problems.
The first one was a request by a reputable event venue looking for the visual identity for an international Jazz festival. Their posting was very raw, saying only what they want (a poster) and what they give in return (tickets to the festival and a t-shirt).
The second one was a request by a PR agency for a month long stint doing “PR and Event Management”. The posting is humorous and very well written (PR agency, remember?) and also includes a list of what they want (full day of hard work) and what they give in return (lunch money).
As usual, both postings leave a lot of room for interpretation and of course people base their interpretation on their feelings towards the company. To make things even, I’ll try to make two interpretations of both – one optimistic and one pessimistic.
Optimistically: the first request is targeted at aspiring designers who have either just finished their studies and cannot find work or are trying to find work as designers even though they studied something else. Maybe they’re just Jazz fans trying design while unemployed. Since the event organizer has a team of internal designers they’re not actually looking for all the applications (logo, poster, booklet, tickets,…) – they want a poster that communicates an idea (agencies will tell you that ideas are hard to come by, entrepreneurs will sell them a-dime-a-dozen). Since designers are usually hired based on their portfolios (preferring published work) winning this could jumpstart a career. It could even possibly lead to a job for a music label or another, bigger music festival.
The second request is targeted at people who know that in PR and Event Management it’s all about who you know and who you’ve worked with. This means that working for a company on multinational accounts can lead to a job in either this same company or at the multinationals – which could get you far. The company is only asking for a month of “free” work and is actually using this as a testing period for a full-time hire after the month expires. They’re a good standing company with loads of work and the salary is great. Since you’ll be working a lot with a great bunch of people you’ll learn so much that after the month is over you’ll not only have the offer from this company, but from at least three more.
Pessimistically: the first request is a way to get a free visual identity because they want to fire the internal design team as soon as possible. The winner will have to do all the applications for free after winning, the tickets will be the worst you can possibly get for a concert, and the t-shirt will be the wrong size. They will not allow you to sign your work or advertise that you did it.
The second request is a way to pay less for people who will pass out flyers at events, make coffee and type up the CEO’s recordings of PR notices. It also includes sending PR emails to media and journalists and reminding them every day until they publish. Since you’ll be working hard all day there won’t be any time for mentoring or observing what others do, and after you’re done coworkers won’t even remember your name.
People who supported one posting and not the other were probably reading one as a pessimist and the other as an optimist. Knowing the companies they might be right, but that doesn’t change the fact that none of these scenarios are probably true.
The economist in me will say that if you can make people work for you for free, just so they get an entry in their CV, you should. But is that the right thing to do? I don’t think so. It’s a tricky subject and there’s a lot of different arguments for and against such requests and quite a few of them surfaced in the discussions on Slovenian social media. The bottom line for me is that it’s a slippery slope… But that’s a whole new post.
By releasing Inside government we were testing a proposition (‘all of what government is doing and why in one place’), and two supporting products (a frontend website and a content management system).
Ross Ferguson
People usually forget this. When you don’t, your project has a much better chance to succeed.
When using the first option your only focus can be on how to optimize the CSS so that it loads as fast as possible and is as small as possible. As you have less files you can easily have that file minified and gzipped even if you’re not using a deployment solution that will do that for you.
Using the second variant gives you more options – you should still optimize the files, but now you have the option of deciding when to load them to make sure that the landing page does not get hit by the added request.
Even though I started with CSS, preloading can be done for any resource needed on the page – fonts, scripts, images.
A very obvious case is a search results page. It usually has a very distinct design that requires certain resources not needed anywhere else. But that’s not enough – you need to know when the user will need these resources so you don’t go preloading everything just in case. With search it’s when they focus inside a search box – they start typing the query, while you start preloading the resources in the background.
Other obvious places to preload resources:
Other less obvious places are landing pages where the choice a user makes branches into many sub-options, especially when products use the same resources on their content pages.
If you’re already using a library it probably provides AJAX methods and event methods. If not, you can search MicroJS to find one and adapt the syntax.
A simple preloader is almost as simple as helloWorld – the only thing you need to make sure is that the data type is set to text so that it does not get executed.
window.preload = function (src) {
    $.get(src, null, null, 'text');
};
// call with preload('somefile.css');
If you want to allow the loading of multiple files at once you can detect if the passed element is an array.
window.preload = function (data) {
    var a = $.isArray(data) ? data : [data];
    $.each(a, function (i, n) {
        $.get(n, null, null, 'text');
    });
};
// call with preload('somefile.css');
// or preload(['somefile.css', 'otherfile.js']);
If you prefer to call the function with many parameters you can just use arguments in the each function call.
window.preload = function () {
    $.each(arguments, function (i, n) {
        $.get(n, null, null, 'text');
    });
};
// call with preload('somefile.css');
// or preload('somefile.css', 'otherfile.js');
The key element is to load the resources after window.onload has happened – this means that any resources needed for the page to function properly have been loaded. If you do stuff sooner, your preloads might compete with resources the page actually needs, like fonts, images, videos. This means you need to know if the onload event has happened – if it has, preload immediately, otherwise wait for the event to fire.
(function () {
    var win = window,
        $ = window.jQuery,
        $win = $(win);
    $win.load(function () {
        $win.data('loaded', true);
    });
    win.preload = function (data) {
        var a = $.isArray(data) ? data : [data],
            fn = function () {
                $.each(a, function (i, n) {
                    $.get(n, null, null, 'text');
                });
            };
        if ($win.data('loaded')) {
            fn();
        } else {
            $win.load(fn);
        }
    };
}());
// call with preload('somefile.css');
// or preload(['somefile.css', 'otherfile.js']);
As you can see I also did a few other things – wrapped the code into a function to isolate all the variables. I also assigned window.jQuery to a local $ to make it a bit more bulletproof.
Needless to say this script needs to be loaded during the load stage. If you intend to load it afterwards you need to make sure that you properly detect the onload event – if you don’t nothing will get preloaded as it will be waiting for that event to fire.
The goal of this exercise is twofold:
It’s as simple as writing a cache manifest. It pays to name it [something].manifest, as Apache already serves the correct header with that suffix. What you do is list all the files you want cached, add stuff you allow on the network (check Appcache Facts and Dive Into HTML5 – Offline Web Applications) and it all works. Except it doesn’t.
CACHE MANIFEST
# version 1
CACHE:
resource.js
NETWORK:
*
http://*
https://*
If you’re using AJAX to get data from a server using jsonp/script, your framework of choice (jQuery in my case) will probably default to a no-cache approach to loading it. This means it will request the file + some suffix to prevent browser caching, which in turn means the resource will not be available when you’re actually offline.
You can use navigator.onLine to switch cache on/off, but I suggest you first try requesting the no-cache resource and if it errors out, request the cached resource. The benefit is that even if you are online but the server is not, the users will still see the data / use the app.
$.ajax({
    dataType: 'script',
    url: 'resource.js',
    cache: false,
    success: successHandler,
    error: function () {
        $.ajax({
            dataType: 'script',
            url: 'resource.js',
            cache: true,
            success: successHandler
        });
    }
});
Fixing the AJAX meant that it worked properly on the desktop (test with Firefox that has offline mode) and in Safari on the iPad. The trouble started when I added the app to Home Screen – data failed to load. It was the same app as in Safari and it should have worked.
After some debugging I found out that the data actually was loaded but the eval failed (I was using $.getScript). Some weird testing showed that the problem was a newline character in the data. As I really liked the newline there I added some code to the error handling that removed the newline and evaled the data, then ran success. And it worked!
$.ajax({
    dataType: 'script',
    url: 'resource.js',
    cache: false,
    success: successHandler,
    error: function () {
        $.ajax({
            dataType: 'script',
            url: 'resource.js',
            cache: true,
            success: successHandler,
            error: function (xhr) {
                // strip the newline that breaks eval in the Home Screen app
                eval(xhr.responseText.replace(/\n/g, ''));
                successHandler();
            }
        });
    }
});
It’s somewhat hard to debug offline stuff. I suggest using a clearly visible version indicator in the file to make sure you know which version of the file you’re looking at. Also remember that the first time you load the app after changing the file & cache manifest it is served from cache. At the same time the manifest is checked and on the iPad the files are downloaded after everything else is done (app finished loading and rendering).
After these problems were solved I used the aforementioned navigator.onLine to hide stuff that normally comes from the network but is not relevant to the offline app (banners, share links, like/+1 buttons) and you can now check the La Vuelta 2011 results page.
Almost all websites now have some forms on them, some of them are contact / registration forms, others use the data submitted and display it on the site itself (comment forms). But letting others submit data to your site/database opens you to all sorts of attacks. If you actually show the content of the submitted form, you’ll get a bunch of spammers posting comments with lots of links. If you only store data and not show it anywhere you’re still at risk – if you don’t notice, your disk can fill up, your database may grow beyond its limits,… So what we want to do is to prevent bogus form posting.
If you think about writing a spam-bot that will try to spam as many sites as you possibly can, you have two basic approaches.
This is a very simple approach – you use a person to submit the form, preferably with something that looks like real input, and record the request made. Then you hand that data off to the bot; it changes some content and tries to resubmit it.
I wanted to say AI, but it really isn’t. What it is is a set of simple rules and randomized values that the bot thinks might trick your site into accepting the submit. Let’s say you have three fields, two are inputs with field names “name” and “email” and the third field is “comment”. A simple script can fill these with “valid” data and try to submit it.
By far the simplest, but also most costly for spammers. Go on Amazon Turk or whatever other service, send a link to a Google Spreadsheet and have people manually enter the stuff into your forms. This is the source of “Sorry, this is my job” endings to spam comments.
Add a field to the form that the user must fill with something that humans can do easily, but machines can’t. The biggest group here are Captchas (display an image with distorted characters, possibly an audio link that reads out the same characters, and have the user somehow figure it out and write the solution), but there have been others, like a “Human?” checkbox, or “3 + 4 =” fields, “Is this a kitten?” with a pic next to it.
Supposedly by far the easiest way to do this is by introducing a 2-step process. After the initial submit, you get back a preview of the submitted data, possibly with a checkbox that says “Is this correct?” and another submit button. Robots are usually not good at following through and thus don’t actually submit the content.
Both solutions have an impact on user experience. With Captchas it’s sometimes really hard to see what they are and even if they have a “different image” link, it just seems like the owner of the site wants to make your life hell before you can submit your data. The other challenges might be easier on the user, but also easier to figure out if you’re being targeted by bots. The 2-step process works great for comments, that usually don’t have an edit link, so it might actually be good for user experience if done right (not Wikipedia style), but are less appropriate on other types of forms.
These are the techniques that should prevent most bogus form entries from random passing bots – except “Human entry”, for which there is no protection, even though Captchas try hard. There is not much you can do when you’re targeted…
Use this field to trick autoguessing bots into submitting something in a field you know should be empty.
If the form post includes content in this field, discard it and redirect back to the form. The trick is to make sure the bots don’t figure out this is a honeypot, so use valid-looking but nonsensical classes…
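A minimal sketch of the honeypot check – note that “website_url” is a made-up field name, picked because it looks legitimate to a bot; in practice you would hide the input from humans with CSS:

```python
def is_honeypot_tripped(form):
    """Return True if the hidden honeypot field was filled in.

    `form` is a plain dict of posted fields. 'website_url' is an
    assumed field name -- any valid-looking name will do, as long as
    real users never see (or fill) the input.
    """
    return bool(form.get("website_url", "").strip())
```

On a tripped honeypot you would discard the post and redirect back to the form, as described above.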
Use it to prevent resubmission of data too long after the creation date. Allow users a few hours to post the form.
To prevent manual modification you can use either proper encryption (symmetric or asymmetric) that will allow you to decode it on form post, or use this date in combination with the one-time token.
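The age check itself could look like this – the window limits are assumptions (the post only says “a few hours”; the lower bound is an extra guard against forms submitted faster than a human could type):

```python
from datetime import datetime, timedelta

# Assumed limits -- tune them to your audience.
MIN_AGE = timedelta(seconds=5)  # filled in faster than a human could
MAX_AGE = timedelta(hours=4)    # "a few hours" to post the form

def form_age_ok(created_at, now=None):
    """Accept the post only if the form was rendered within the window.

    `created_at` is the datetime embedded in the form when it was
    rendered; `now` is injectable to make the check testable.
    """
    now = now or datetime.utcnow()
    return MIN_AGE <= (now - created_at) <= MAX_AGE
```

A post ten minutes after rendering passes; one five hours later (or one second later) is rejected.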
Use this field to prevent replay of request data. If you can, save it into the database. It is a good idea to create this token in a form that cannot be faked (say, by changing one character and getting another valid token). This can be done with hashing or encryption.
This one can be as tricky as you want. What I usually do (disclaimer: I don’t know much about encryption so this might be crap advice) is use a plain datetime field with the one-time token generated from the IP address, User-Agent and the date field with HMAC. There is no need for this token to be reversible – I can recreate the same thing from the data in the form post and check if it matches.
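The HMAC approach described above might be sketched like this (the secret value and the `|` separator are my own placeholders; the server-side secret must obviously stay out of the page):

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # assumption: a real app keeps this private

def one_time_token(ip, user_agent, date_str):
    """Derive a non-reversible token from request data + the form's date field."""
    msg = "|".join((ip, user_agent, date_str)).encode("utf-8")
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def token_valid(token, ip, user_agent, date_str):
    """Recreate the token from the posted data and compare.

    compare_digest does a constant-time comparison, so the check
    doesn't leak how many characters matched.
    """
    return hmac.compare_digest(token, one_time_token(ip, user_agent, date_str))
```

Changing any single input (IP, User-Agent, date, or a character of the token itself) invalidates the token, which is exactly the “one character changed and it’s no longer valid” property.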
When using these techniques make sure you take care of the user experience. If you detect a problem on what might be valid user input (a “timeout” on the date field with an unused one-time token, or a wrong one-time token caused by an IP change at the service provider), you might want to display a second step from the “2-step process”. Whatever you do, don’t call your users spammers or bots – be nice, bots don’t read the text anyway.
I know of no plugin that uses all of these techniques, but I haven’t really looked for one. What I do know is that I don’t ever want to use a Captcha, because it often keeps me out, and the 2-step process is just too weird sometimes. Hope this helps. And again – if you find the original article (it must be some 5 years old now at least, if not more) or have any other solutions you use or endorse, do leave a comment.
]]>It makes me sad to see lots of sites minifying code for performance and not releasing the full version of the code so other developers could learn from it. It’s the openness that I really like about the web.
I think there should be a “View Source Alliance” that would set rules on how to release your code in a way that visitors can benefit from the speed of minified code, while web developers can still find your full files and learn from them.
I’ll set a few simple rules here, hoping somebody with more reach picks them up:
This way you will not only help others, but sometimes even stop breaking the law – because you might be using open source code with a licence that says you must release your code under the same or a similar licence.
]]>Back to the beginning. There are loads of theories on motivation; most of them just cover different aspects and can mostly all live together. One of those aspects is where motivation comes from, which gives us intrinsic motivation and extrinsic motivation. Obviously it’s not a boolean thing, as any individual sits somewhere on a line between the extremes. And there’s also the matter of having a different source of motivation for different things. Let’s not get into that.
Let me quote Wikipedia on this:
Intrinsic motivation refers to motivation that is driven by an interest or enjoyment in the task itself, and exists within the individual rather than relying on any external pressure.
We are motivated by the fact that we’re getting something done and by the feeling we get when we’re done. We’re not in it so someone can tell us we did a good job – we don’t really care. A friend of mine once said: “It’s for me. If somebody else likes it – great.” We like to think that the more we get into the subject, the better we’ll be at it and the better the result. In other words: no relying on luck, no shortcuts, no marketing/selling, no subpar stuff. Because of all this, we don’t like it when others interfere with stuff we’re responsible for.
Another quote from Wikipedia:
Common extrinsic motivations are rewards like money and grades, coercion and threat of punishment. Competition is in general extrinsic because it encourages the performer to win and beat others, not to enjoy the intrinsic rewards of the activity.
These people often need to be “managed” to give them a sense of direction and success, and are in that way more demanding. They also need more information about what is going on, and might see the successes of their co-workers as their own and be empowered by them.
You might want to hire intrinsically motivated people when you don’t have the management layers to keep them motivated, or you just don’t have time to do that. On the other hand it’s very hard to keep them motivated when all the fun work is gone, and they thus tend to either switch tasks/assignments or try to over-explore/discover. These two reasons are also why some people hire intrinsically motivated employees only to later regret it, as they can’t motivate them anymore or don’t know how.
Hiring extrinsically motivated people might be better for cases where you can manage them properly as they might feel lost without guidance. They are somewhat easier to motivate as you have a lot more ways to do it – sometimes even just a public pat on the back suffices. They are surely a better choice for a company in an established market as they thrive on beating the competition. If you have an “employee of the month” you should hire extrinsically motivated people.
I think it’s very important to hire a team that is homogeneous motivation-wise. An extrinsically motivated manager might have a problem motivating an intrinsically motivated employee, and that employee won’t get why his extrinsically motivated colleague is bummed that he wasn’t complimented on last week’s great work.
We the intrinsically motivated have a problem. Even though extrinsically motivated people can internalize motivation when it matches their values and beliefs, it is much more common for somewhat extrinsically motivated people to become more and more extrinsic over time due to the overjustification effect. Since that means their numbers will go up with time, most of us are used to just getting in to solve the problems and then getting out. Somewhat ironic that it sounds like a mercenary.
]]>HTML Lint is a tool that makes sure your code looks good. While XHTML was very strict with syntax, HTML5 is more lenient, like previous versions of HTML, which means keeping a consistent code style will become more difficult. Validating is not good enough anymore.
HTML Lint is under constant development. If you find a bug, report it on Twitter.
It started in Seattle, at An Event Apart. Jeremy Keith said in his presentation that validation for HTML5 doesn’t make much sense anymore and that there should be a Lint tool. I started thinking about it and after lunch I asked Jeremy what options he wanted in it. I added some of my own and made the first version of it flying to Phoenix (going to IA Summit) and then fixed it flying back to Ljubljana.
We released the first version soon after and updated it with a new design a few days ago. I’d been putting the update off as I had a few other projects going on, but Jeremy mentioned it at Drupalcon and Remy pointed his htmllint.com to it and tweeted about it. So it had to be done.
HTML Lint was coded in Python by MMM, which consists of me (Marko Mrdjenovič) and Marko Samastur. The design for it was done by Sara Tušar Suhadolc. The source code should be available soon.