I clearly didn’t make myself hungry enough putting together the cupcake bakery mini-site for my prototyping talk last month, as I’ve spent the past few weeks polishing that prototype — and staring at chocolate frosting — in order to make that code a proper, release-ready framework. And I’m now happy to say… the 3-Layer-Cake Prototype is ready for your (ahem) consumption!
The prototype builds on a technique I’ve actually used for many years, one I first read about in late 2004 in this Invasion of the Body Switchers article on A List Apart by Andy Clarke and James Edwards. I remember finding it instantly useful for a project I was doing at Coinstar, where I was tasked with developing an updated kiosk interface that could accommodate touchscreen and keypad interfaces, in either Spanish or English. Using IOTBS, I was able to create an interactive prototype from a single set of HTML pages that effectively simulated the various interfaces and served up different language content.
But it wasn’t until sometime last year, fiddling with the WYSIWYG prototyping tool Axure and seeing its in-prototype annotations, that I realized… hey, I could do that, too!
First I tried absolutely positioning individual labels over each page, which was pretty effective but slightly wonky across browsers and platforms. Plus, it was incredibly time-consuming to both position the labels (nudge, nudge – save – refresh x 1000) and create all the additional markup and CSS to support them.
Then I discovered CSS3 Generated Content and realized that it could be used to create an almost identical set of annotations, but in a way that was much simpler and scalable. In a nutshell, here’s how it works:
The Annotation Technique
- Elements bearing a class of “x” in the HTML page will be tagged with a numbered note, to be shown in annotations view.
And if the element has one or more classes already, just add it to the existing one(s), as multiple classes are A-OK.
- In the switch.css file that defines view-specific styles, a counter is defined for all elements tagged with the “x” class when in Annotations view.
- Then the counter is incremented, and its value is styled and set to appear as generated content before the start of each “x”-tagged element in Annotations view. As a result, numbered note labels appear onscreen next to each of those elements.
- Finally, the annotations themselves are inserted into the notes panel in the HTML page, written up and formatted as an ordered list, with one list item for every “x”-tagged element on the page, in the order in which they appear in the code.
The note labels and the annotations list aren’t yoked together, so this is the only place where you’ll need to manually match up the count and order of tagged elements to list items.
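To make the steps above concrete, here’s a minimal sketch of the markup and CSS working together. The “x” class and the switch.css counter approach are the technique as described; the “annotations” body class, the label styling and the sample elements are my assumptions for illustration:

```html
<!-- Sketch only: the "annotations" body class, label styling and sample
     elements are assumptions; the "x" class and CSS counter approach
     follow the technique described above. -->
<body class="annotations">
  <style>
    /* In switch.css: reset the note counter once per page, but only
       when the page is in Annotations view */
    body.annotations { counter-reset: note; }

    /* Give each tagged element a positioning context for its label */
    body.annotations .x { position: relative; }

    /* Increment the counter and render its value as a numbered label
       before each "x"-tagged element */
    body.annotations .x::before {
      counter-increment: note;
      content: counter(note);
      position: absolute;
      top: -0.75em;
      left: -0.75em;
      padding: 0.1em 0.45em;
      background: #fc0;
      border-radius: 50%;
      font-weight: bold;
    }
  </style>

  <!-- Two elements tagged for annotation, in source order -->
  <a href="/signup" class="button x">Sign up</a>
  <input type="search" class="x">

  <!-- Notes panel: one list item per "x"-tagged element, manually kept
       in the same order as the elements appear in the code -->
  <ol class="notes">
    <li>Primary call to action; links into the signup flow.</li>
    <li>Site-wide search field.</li>
  </ol>
</body>
```

Outside of Annotations view (that is, without the body class), the ::before rules never match, so the labels simply don’t render and the page reads as a normal prototype.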
And that’s the heart of the annotations functionality, explained. You could (and should!) strip out whatever you don’t need, if, say, you want the annotations without the complete 3-Layer-Cake Prototype framework. Non-commercial re-mix and re-use is very much encouraged under the Creative Commons license under which I’m releasing this work. And if you end up using this and posting something online, please let me know, as I’d love to check it out!
Cleaning out my file cabinet recently, I’ve stumbled across a number of old documents for clients and projects long past. Most of these are typical project artifacts: specifications, wireframes, schedules and the like. But some were written for broader purposes, such as streamlining a process that wasn’t working, or summarizing common practices into written guidelines.
Many of these guidelines got sent straight to the circular file (“Web Image Optimization in Debabelizer”, anyone?), but I think this one is a keeper. It’s been close to ten years since I wrote this document, but it matches up pretty well with my attitude today on how to perform quality assurance on a web site from a content and design perspective.
And QA is something I’ve continued to do even as I transitioned from a more technical role to a user experience one. While it would be great if every engineer could instantly spot the difference between Helvetica and Arial, or a 20-pixel margin that’s 19 pixels instead, I think it’s nonetheless important that designers, copywriters, product managers, UX architects and any other non-technical folks step up and participate in the QA process. Their involvement raises awareness of the “softer side” of QA, and when everyone owns quality, the result is a better product all-around.
Just try not to let it sting too much if your bug gets labeled a P4…
How To Conduct QA and What to Look For
(originally written: April 29, 2002)
When conducting Quality Assurance (QA), there are a number of different criteria you should use in examining each page or group of pages for accuracy and overall readiness for delivery to the client. There are three major categories under which most QA comments should fall: Design, Content and Functionality. Below are lists of specific things that you as a QA tester should look for when evaluating each page.
The HTML page design should match, as closely as possible, that of the final design comp as found on the extranet (Design & Production). In particular, the following should be examined:
- Spacing — Are all of the page elements (text, logos, headers, footers, etc.) arranged with consistent and correct spacing?
- Colors — Do the text and background colors match the design?
- Fonts — Size, typeface (font face), color and spacing within ASCII text should be consistent.
- Image Quality — Are individual images sharp and true to the original design? While images do need to be optimized for file size and other reasons, there shouldn’t be any major differences in the look of the design comp and that of images on the HTML page.
- Design Translation — Does the design translate to HTML smoothly? That is, does the HTML rendition smoothly account for the fact that browser windows can be resized, that elements can shift position relative to one another once actual content fills the table cells, etc.?
- Consistency — While the design elements and logic within a page may be consistent and look good, does the same hold true when comparing that page with another within the same section or sub-section (Main Page vs. Article Page)? Or within the same type (all pop-up windows)?
All pages should contain the final text, image or other assets as provided to us by [Client]. What those assets are, and whether or not we have them, can be checked by looking in any of the relevant sub-folders within the Assets directory. So when conducting QA for Content, one should check:
- All Page Text/Copy — Open the asset files (Word or text documents) in the relevant assets folder(s), read that text, and compare it with the text in the HTML page. Have final text assets been used? You will need to thoroughly read all of the copy to be sure, as sometimes the placeholder copy is very similar to the final text. We should also strive to correct any obvious spelling or typographical errors we encounter in final page copy.
- All Form Field Text/Copy — All drop-down menus and other form elements should also be selected and their contents viewed, to ensure that even text content not immediately viewable is correct as per above. You will also want to check the correctness of all error and thank-you page copy, but this can only be done when testing functionality (see “Functionality: Forms”).
- ALT Tags on All Images — In the PC version of IE, simply roll your mouse over an image and hold the pointer there for a moment; a yellow box with the text of the image ALT tag should appear. Is the ALT tag a useful translation of the image or image text? Does it use proper capitalization, spelling and punctuation?
- Page Titles — The title of the page should accurately reflect both: a) where the document resides relative to the overall site hierarchy, and b) the actual name of the page. In addition, check that a consistent naming scheme is used throughout all pages, one that reflects the final section names and the final site structure as laid out in the Site Functional Spec. Finally, check page titles for consistent use of separators (colons) and correct spelling, capitalization and punctuation.
- Graphic Text — All graphic images containing text should similarly be checked for correct spelling, capitalization and punctuation. Names and titles should also be checked against the final nomenclature as per delivered assets. Check for consistency between graphic and text assets as well, noting things even as minor as the difference between “Terms & Conditions” and “Terms and Conditions”.
- Image and Other Media Content — If there are pages where the actual content is inserted as final by [Agency] into the design, ensure that the images are correct and match the context.
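To make a couple of the checks above concrete, markup like the following is what you’d be eyeballing (the site and file names here are invented):

```html
<!-- Page title: overall site, then section, then page name,
     separated consistently by colons -->
<title>Acme Networks: Products: Widget Overview</title>

<!-- The ALT text should be a useful translation of the image, with
     proper capitalization, spelling and punctuation -->
<img src="images/logo.gif" alt="Acme Networks logo" width="120" height="40">
```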
Unless the Technology department has specifically excluded any areas from testing due to work still in-progress, pages should be checked for complete functionality of links and other display and interactive elements. Here are the things to look for:
- Broken Links — Are all of the links functional? If there are any broken links, is this a temporary condition, something that will be resolved when the pages are moved to their final location or when final elements are inserted? Check with Technology if you are unsure.
- Correct Link Functionality — Do links trigger the correct actions? That is, do the links correctly launch a new browser window, launch a pop-up, close a pop-up, load a page in a parent window or any combination of these actions? Does each link also point to the correct page? You will need to reference the relevant section of the Site Functional Spec for the page(s) you are QA-ing to ensure that link functionality is correct. Please review the element details associated with each page wireframe for a listing of the correct functionality for each link.
- Forms — Wherever there is a form on the site — a feedback form, login, registration, etc. — can you fill out the form and submit its contents successfully? If the form submission functionality is not hooked up, you may still be able to test the form validation capabilities. Try filling in incomplete or incorrect information in each of the fields. Are you still able to submit the form? If not, do you get an error message? If you get an alert or error message, is it correct? Check the error copy against the final text assets.
- Technology Detection — Test the capabilities of the system in detecting (“sniffing”) for the existence or absence of plug-ins, cookies or other elements on which certain site functionality is dependent. For example, if you are a registered user, is this detected, and does the relevant dynamic functionality appear (username pre-filled in the login box)? Keep in mind that technology detection is global functionality that may not be in place until further along in the QA process. Please check with the Technology department if you have any questions about what elements are and aren’t ready for testing.
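As an illustration of the link-functionality checks above, the markup you’d be verifying against the Site Functional Spec looks something like this (the paths here are invented):

```html
<!-- Spec says: opens in a new browser window -->
<a href="terms.html" target="_blank">Terms &amp; Conditions</a>

<!-- Spec says: closes the current pop-up window -->
<a href="#" onclick="window.close(); return false;">Close this window</a>
```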