Monthly Archives: January 2012

Before You Use Captcha: Form Protection Tips

I ran into a great post on the PHP devnetwork forums discussing some tactics to protect your forms without using Captcha.

First is the Honeypot method. The idea is to add a hidden field to the form that should remain blank; bots will likely fill it in, but human visitors never see it. twindev explains:

Honeypot – This generally stops bots that go to your site and auto-submit the form. Add a field, called something like URL (something they would really want to fill in), and style it so that it is not visible on the screen. For accessibility, make sure you add a label that says the field should be left blank. Then in the code that processes the form, if this field doesn’t exist, or it does but isn’t blank, don’t accept the form submission.
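The thread is about PHP, but the check is language-agnostic. Here is a minimal sketch in Python; the honeypot field name `url` is the illustrative name from the quote above, not anything standard:

```python
def is_honeypot_clean(form_data):
    """Accept a submission only if the hidden honeypot field is present and empty.

    form_data is a dict of submitted fields. Per twindev's rule: if the 'url'
    field doesn't exist, or exists but isn't blank, reject the submission.
    """
    return form_data.get("url") == ""
```

Note that a missing field is rejected too, which catches bots that strip unknown inputs rather than filling them.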

The second method twindev suggests is the timeout. The form carries an encrypted timestamp, which forces the form to be freshly requested and submitted within a certain period of time. twindev describes it:

Time out – If someone writes a bot to just flat-out POST to your site, add a field to the form containing the current timestamp. Then when the form is submitted, only accept it if it is within a certain period of time (say, an hour). Direct posting of data will only work for that long. Now, someone looking may recognize the timestamp, so use a simple function to convert it to something very difficult to read, and then once submitted, convert it back to a number. (See this post of mine for my functions to do this)
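twindev's own conversion functions aren't reproduced here. One common way to achieve the same goal is to pair the timestamp with an HMAC tag, so a bot can't forge or tamper with it; this is a substitute technique, not twindev's code. A sketch in Python, where the secret key and the one-hour window are illustrative values:

```python
import hashlib
import hmac
import time

SECRET = b"change-me"  # server-side secret; illustrative value only
MAX_AGE = 3600         # accept forms for one hour, per the example above


def make_token(now=None):
    """Build the hidden form field: a timestamp plus an HMAC tag over it."""
    ts = str(int(now if now is not None else time.time()))
    tag = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{tag}"


def check_token(token, now=None):
    """Accept only if the tag verifies and the timestamp is within MAX_AGE."""
    try:
        ts, tag = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # timestamp was tampered with or fabricated
    age = (now if now is not None else time.time()) - int(ts)
    return 0 <= age <= MAX_AGE
```

The server renders `make_token()` into a hidden input and runs `check_token()` on submission; no session state is needed because the tag itself proves the timestamp is genuine.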

It also occurred to me that checking that the timestamp is a little bit old would prevent bots from rapid-fire spamming. For example, require that the form be submitted at least 20 seconds after being rendered. A person would take at least that long to complete the form, but a bot would have to wait.
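That two-sided check can be sketched as a simple window test; the 20-second minimum comes from the text, and the one-hour maximum from twindev's example:

```python
import time

MIN_DELAY = 20   # seconds a human plausibly needs to fill in the form
MAX_AGE = 3600   # the one-hour expiry from the timeout method


def submitted_in_window(rendered_at, submitted_at=None):
    """Accept only if the form is at least MIN_DELAY old but not yet expired."""
    now = submitted_at if submitted_at is not None else time.time()
    age = now - rendered_at
    return MIN_DELAY <= age <= MAX_AGE
```

A bot that POSTs immediately fails the lower bound; a replayed stale form fails the upper one.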

JavaScript Selector Library Supports CSS4!

I was reading DailyJS and ran across this great JavaScript selector library called Sel.

It can select elements using brand-new CSS4 selector features. Here are some examples.

/* subject overriding, was '$div .box' in a previous CSS4 draft,
returns 'div' rather than '.box' */
div! .box

/* id references, the 'input' whose ID matches the 'label's 'for' attribute */
label /for/ input

/* case insensitive attribute matching */
[attr = "val" i]

/* :nth-match and :nth-last-match to match against sub-selectors */
div:nth-match(3 of .box)

/* links whose target absolute URI matches the current document's URI;
arguments specify the degree of locality */

/* :column */

/* :nth-column and :nth-last-column */

I’ve been hoping a selector engine would take on the challenge of CSS4 support. From what I understand of jQuery, CSS4 support would require rewriting a lot of Sizzle, since it is so highly dependent on querySelectorAll(). NWMatcher, I know, is much more robust in its pre- and post-processing of selectors even when the browser supports querySelectorAll(). In other words, I’m thinking a selector engine needs to detect whether the browser supports CSS4, handle CSS4 selectors itself when it doesn’t, and still use querySelectorAll() wherever possible.

Long Text Lines in Webkit

I got a report that a long link was overflowing its container in Chrome 16 and Safari 5.1.


For some reason WebKit doesn’t wrap long unbroken text sensibly by default, while IE and Firefox handle it fine. Luckily there is a very quick CSS solution:

body {
    word-wrap: break-word;
}