Review – Loop11

I recently used Loop11 on a remote testing project, so it seemed a good time to add my two cents' worth by way of a review.

For those who don’t know, Loop11 is a survey tool marketed as a remote usability testing tool. It lets you set up a test and then run it on a site – even if that site isn’t your own – using URLs. The test can include tasks and questions, and all data is captured for you, up to a maximum of 1,000 participants.

In terms of ‘online testing’ it’s a world better than a standard survey, but still a long way from the ideal observed and measured remote testing environment I’d like to see. That said, it offers a great deal for a reasonable price per test.

Okay, first for the good news. And there’s a fair bit of it.

Loop11 allows you to configure your test with a number of tasks, with questions before or after each and pretty much everywhere else you need them. Tasks have a start URL and a success URL, and these are tracked to measure success rates. That means you can ask people to reach a certain page and then track how many succeeded.
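To make that success-URL idea concrete, here’s a minimal Python sketch of how that kind of matching could work. It’s an illustration of the concept only – the function names and URL normalisation are my own assumptions, not Loop11’s actual implementation.

```python
# Conceptual sketch of success-URL tracking - an assumption about how
# this style of test works, not Loop11's actual implementation.
from urllib.parse import urlparse


def normalise(url: str) -> str:
    """Compare URLs by host and path, ignoring scheme and trailing slashes."""
    parsed = urlparse(url)
    return parsed.netloc.lower() + parsed.path.rstrip("/")


def task_succeeded(visited_urls: list[str], success_url: str) -> bool:
    """True if the participant reached the success URL at any point."""
    target = normalise(success_url)
    return any(normalise(url) == target for url in visited_urls)


# Success rate for one task across two (made-up) participant sessions
sessions = [
    ["http://example.com/", "http://example.com/products/widget"],
    ["http://example.com/", "http://example.com/about"],
]
success_url = "http://example.com/products/widget"
rate = sum(task_succeeded(s, success_url) for s in sessions) / len(sessions)
print(f"Task success rate: {rate:.0%}")  # 50%
```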

Questions come in a great range of types, letting you configure open or closed questions with a range of options and feedback levels – although one piece of advice is to avoid open questions with free-text input, as these get hard to review and manage once the responses creep up into the hundreds.

You can copy tasks and questions, and juggle them around in the sequence. At any time you can review the test and walk through it to see how it works – an invaluable tool. Crucially, it also doesn’t force you to pay for a test credit unless and until you want to launch your test, which lets you become thoroughly happy with the product before you commit financially.

The in-page support is great, with plenty of help and examples to get you going. Although it can be a bit slow to build questions in this format, overall it’s a pretty slick experience.

When it comes to results there’s lots to like too. Loop11 creates a very reasonable PDF report, and also lets you export the raw data in various formats, including Excel and XML, or view the results directly in your browser.
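If you do pull the raw export, a few lines of scripting go a long way. Here’s a minimal sketch assuming a hypothetical Excel export with ‘task’ and ‘outcome’ columns – check the real schema, as these names are mine, not Loop11’s.

```python
# Hypothetical post-processing of an exported results file. The filename
# and column names ("task", "outcome") are assumptions for illustration,
# not Loop11's documented export schema.
import pandas as pd

df = pd.read_excel("loop11_results.xlsx")  # one row per participant response

# Proportion of each outcome (e.g. success/fail/abandon) per task
summary = df.groupby("task")["outcome"].value_counts(normalize=True)
print(summary)
```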

All this means the tool is easy to set up, easy to manage, and provides a great level of detail at the end.

I also have to commend the Loop11 team for providing great support. We unfortunately encountered a number of problems with our previous test, some of which were outside the control of Loop11, but the support team were always helpful and quick to respond – essential when you’re looking at a potentially disruptive live survey on your website.

So what doesn’t work well?

I think possibly the biggest gripe I have with this tool is that it blanks the screen while asking questions. During a task a small control bar appears at the top of the browser window, reminding users of the task and allowing them to mark it completed or failed. Once either is selected the task ends and the user sees a blank screen whilst they are asked questions.

That works fine for some forms of testing, but for more content- or marketing-driven tests it’s far less useful. For example, some questions may relate to whether the user can see a certain piece of content, or how they feel about something they can see. When the page is blank the user is forced to remember what they saw and try to comment on it.

We’re currently configuring a test where the user will be asked which option they would choose in order to find a certain piece of content – and the possible list is pretty much anything on the nav, quite a long list. Whilst we can list them all and ask the user to choose, that’s nowhere near as powerful as asking them to view the page and tell us where they’d go.

So if I could change one thing about Loop11 I’d change this – allow the interface to stay visible during questions.

I also find it limiting that the results don’t tie questions to tasks. Questions appear in the report by number, but there’s no immediate way of seeing which task each question relates to. That means flicking back and forth between the output report and the test structure, especially when you have duplicate questions (for example “How did you feel about that task?”). A simple workaround is sketched below.
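Until the report does this for you, one option is to keep your own question-to-task map as you build the test, then re-join it to the results afterwards. A minimal sketch, with entirely hypothetical question numbers, task names and result rows:

```python
# Workaround sketch: keep your own question-to-task map when building the
# test, then annotate the exported results with it. Question numbers,
# task names and result rows here are all hypothetical.
question_task_map = {
    1: "Task 1: find the contact page",
    2: "Task 1: find the contact page",
    3: "Task 2: locate pricing",
    4: "Task 2: locate pricing",  # same wording as Q2, different task
}

results = [
    {"question": 2, "answer": "Easy"},
    {"question": 4, "answer": "Frustrating"},
]

for row in results:
    row["task"] = question_task_map[row["question"]]
    print(row)
```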

Finally, we encountered some errors that made using Loop11 a little tricky at times. With our previous survey a number of people reported their browser crashing during the survey, and Loop11 were not able to identify the cause – though to be fair this may well be down to the external site we were testing. We also hit a problem when attempting to view results, due to the logic behind how report output is generated – that’s something Loop11 have assured us is in the pipeline for improvement.

Overall I’d recommend Loop11 to anyone looking to run remote tests – it’s not perfect, but it’s a great extension to our toolkit and the positive support approach means you’ll hopefully overcome any minor gripes you encounter.

You can see more at www.loop11.com.

Gary Bunker

the Fore