As a mostly-mobile guy, I have to remind myself that even at a mobile specific event not everyone knows how big and important it is. So first, a brief overview of why this presentation matters…
There are more mobile devices than humans. Yes, over 7 billion devices in use.
Computer sales are plummeting. PC shipments dropped almost 10% in 2013. Mobiles continue to grow, and for several years now have outsold desktops and laptops.
If you heard that iPad sales are flattening, remember that’s just one device by one maker. There will be more tablets sold in 2014 than desktops and laptops combined.
Even with all this scale in place, mobile use continues to grow, rapidly. Mobile traffic grew 80% in 2013.
Which I believe. Depending on the survey, as many as two-thirds of the people in the US either only have a mobile internet device, or prefer to use their mobile over a desktop or laptop to access the internet, even when one is available in front of them or in the next room. You won’t be surprised that the rates in places like Kenya, where connectivity is generally mobile, are over 90%.
Almost half of ALL the data transferred over the internet (in the US) this most recent Christmas Day came from mobile devices.
So, design for mobile, adaptively, as you design your solutions on every platform.
And that means most of the time we’re going to design for touch. Which should be a snap. I mean, touch is so natural. [CLICK] Anyone can design a touch-based system without risk of users hitting the wrong target or anything.
Oh, you have problems? Everyone does. Because touch is still fairly new. We are still developing patterns of interaction. And we don’t really, in general, understand how touchscreens even work.
More of these at DamnYouAutocorrect.com
What we used to know about touch was…
[CLICK] …what Apple told us, the 44 pixel target.
But that was based on some convenience of that platform’s design, and pixel sizes. It’s not based on the real world.
Now we know how to design for people. And for the many devices that people use, not just iPhones and iPads. We know how to design for hands, fingers and thumbs.
(Image is cover page from http://www.amazon.com/Fingers-Thumb-Bright-Early-Board/dp/0679890483)
We know this from — 1,333 original observations on how people hold and touch their phones — At least 19 serious, academic studies (by others) which I referenced and analyzed — Including one with some 91,731 users and over 120 million touch events. — 651 new observations done in coordination with the eLearning Guild, on how people also use phablets and tablets in offices, classrooms and the home — And I am currently doing some additional research to get info on gesture and context. I’m sharing some of that preliminary data with you here today, but more is coming over the next few months.
Now we know that people hold phones… in multiple ways.
(See http://www.uxmatters.com/mt/archives/2013/02/how-do-users-really-hold-mobile-devices.php for more information on grasping methods.)
We know that this diagram is wrong (and you can tell anyone who repeats it).
-- We know touch accuracy has nothing to do with finger or thumb size. -- We know it has no direct relationship to reach. -- There are no “no go” areas in the corners of the screen to avoid or to put dangerous controls in, just areas of more and less accuracy, which we can easily account for in design. -- No one, and no design solution, will yield pinpoint accuracy, so you cannot use tiny targets.
Biomechanics are more complex than this. But more important, while some people use the phone with one hand…
… they then change, regularly shifting their grip. To reach other areas with another finger, to type with two thumbs…
To cradle the device for more reach…
(Video from Luke Wroblewski, who gathered it on a plane sometime in 2013.)
The more I watch people, the more I am amazed at how variable their interactions are.
How they are comfortable changing their hand position, how they touch the screen in different ways to do different things with their devices, as they change tasks and context.
(Video from recent set of user interviews I did. Teenager with her Galaxy Tab.)
Phablets, the largest things you’d consider “phones,” are used while sitting down a little more than normal-sized phones are…
And tablets are used almost two-thirds of the time in a stand [CLICK] or set down on tables.
Large tablets, like the 10” iPads, are used about ¼ of the time with physical keyboards [CLICK] And almost 10% [CLICK] with pen styluses.
Yeah, that’s a pen hiding under the case.
In general, as devices get larger, they are used less and less held in the hand.
Distance from the eye can be surmised by device class.
And the smaller the device is, the more it is used on the move.
On the move doesn’t mean in busses or on trains, but can just mean when you walk around the house or office. Instead of finding time to stop and use that tablet on the table, or sit and type on a computer at your desk.
Because different devices are held (or placed on the table) further from the eye than other devices, you need to make text different sizes.
(For more on this, and the math from the next slide, start with http://4ourth.com/wiki/Human%20Factors%20%26%20Physiology)
That’s because we don’t perceive anything based on its size, but on its resolution at our eyeballs. And the relationship between size and distance is called angular resolution. This is actually the simple version of the formula. To get the 3438 number requires knowing the size of the sensors in your eyeball, and so forth.
Don’t take a picture of this formula. I’ve done the math for you.
Visual Angle (minutes of arc) = (3438) * (length of the object perpendicular to the line of sight) / (distance from the front of the eye to the object)
And that tells me very small phones (which are not all featurephones) can get away with tiny 4 point type; most smartphones, 6 point; large tablets held in the hand, 8 point; and tablets used on surfaces or in stands, 10 point.
These are MINIMUMS. Go at least 2 points larger for almost all actual uses, like body copy. Even larger for more readability, for active environments, and for older populations. The smallest sizes are okay for things like labels under icons, though.
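To save squinting at the formula, the same math can be sketched in a few lines of JavaScript. The function names and the 12-inch smartphone viewing distance are my own illustrative assumptions, not numbers from the talk.

```javascript
// Visual angle in minutes of arc: 3438 * size / distance.
// Size and distance must be in the same unit (inches here).
function visualAngleMinutes(sizeInches, distanceInches) {
  return 3438 * sizeInches / distanceInches;
}

// Inverted: the physical size needed to subtend a given angle
// at a given viewing distance.
function sizeForAngle(minutes, distanceInches) {
  return minutes * distanceInches / 3438;
}

// A point is 1/72 inch, so 6 pt type at an assumed smartphone
// viewing distance of roughly 12 inches subtends about 24 minutes.
const angle = visualAngleMinutes(6 / 72, 12);
```

Extrapolating to other device classes is just a matter of plugging in the typical viewing distance for each.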
A key use for text and icons is to label touch targets.
As much as no-affordance interfaces and secret gestures are a trendy way to insist you are making delightfully surprising experiences, making sure your simple actions just work is a much more sure bet.
Make your targets work for your users.
Visual targets are important.
Visual targets must: — Attract the user's eye. — Be drawn so that the user understands that they are actionable elements. — Be readable, so the user understands what action they will perform. — Be large and clear enough that the user is confident they can easily tap them.
A word on size. People use different devices in different ways. Just one is distance, and ways of holding. Tablets, for example, are held (or placed on the table) further from the eye than phone-sized devices, so you need to make text different sizes. (For more on this, and the math from the next slide, start with http://4ourth.com/wiki/Human%20Factors%20%26%20Physiology)
Angular resolution is what matters instead of absolute size, and that’s calculated based on the distance between the screen and the viewer’s eyeballs. This is actually the simple version of this formula. To get the 3438 number requires knowing the size of the sensors in your eyeball, and so forth. Visual Angle (minutes of arc) = (3438) * (length of the object perpendicular to the line of sight) / (distance from the front of the eye to the object)
So, very small phones (which are not all featurephones, as I show) can get away with tiny 4 point type; most smartphones, 6 point; and large tablets, 8 point. Extrapolate or do the math for other size devices. These are minimums. Go larger for more readability, for active environments, and for older populations.
Clickable items need not just to afford their action (making it clear what they do) but to do so consistently. Someone tell me why my calendar name, attendance and the participants are selectable rows, but the location is a link and I have to click exactly where the link is. Be consistent, and make whole contained areas (rows, boxes) selectable, as that is what is expected.
So I said finger size doesn’t matter. Well, not for touch target size or touch accuracy. Really, at all.
But they still get in the way….
This is anecdotal, but I have seen similar results on real projects. When I updated to the new Twitter, I kept hitting the Add-person icon. Because it’s got a plus, and is visible in the other action area.
But mostly because the compose area was obscured. Plus I like to focus on the middle of the page like every human, so simply missed it.
This sort of behavior makes me abide by a simple rule: Nothing below the key touch targets. Of course, that’s too simplified. What I mean is, nothing below the target that is: — About the target. A carousel with labels below won’t work well. — Updated based on user input. Notifications, or a sliding input at the top of the screen that changes results below, are bad. Generally, this is easy. You just flip things vertically, putting the updated info or label above, and you are safe.
I keep mentioning touch targets, so let’s get to the size.
The way the electrical conductivity of the capacitive touchscreen works, the part that is always sensed is the centroid (or geometric center) of the contact patch, the flat part of your finger against the screen. What matters for touch accuracy is the Circular Error of Probability, or the pointing accuracy of people with their fingers. There’s a bit of a range here, depending on the user’s attention, care and the environment in which they operate. Not to mention the ability of the users themselves. Touch targets should be no smaller than about 6 mm, and preferably 8 mm. Give enough room for this. Make sure that small targets have some padding around them to make them easier to click, even if the user can’t see that the padding is clickable.
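Since the 6–8 mm guideline is physical, it has to be converted to pixels per device. A minimal sketch, assuming you know the screen density in pixels per inch (the 326 ppi example value is my assumption, not from the talk):

```javascript
// Convert a physical target size in millimeters to device pixels.
const MM_PER_INCH = 25.4;

function mmToPx(mm, ppi) {
  return Math.round(mm * ppi / MM_PER_INCH);
}

// The 6 mm floor and the preferred 8 mm target on a 326 ppi screen:
const floorPx = mmToPx(6, 326);     // about 77 physical pixels
const preferredPx = mmToPx(8, 326); // about 103 physical pixels
```

Note these are physical pixels; divide by the platform's density scale factor if your layout units are density-independent.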
But touch targets are relatively easy. What really matters is interference. If you have to remember one lesson, and one set of numbers, remember interference. Which is just avoiding accidental clicks by having enough space between items.
Defining button size, or spacing between buttons, won’t do it. Your link or button is too variable; what you need is a guideline for interference alone. Whether you check digitally or, as I’ll show in a minute, with real-world tools, you don’t measure space between items, but space between centers. Center the circle on the clickable target, and if anything else is in the circle, it has a chance of being clicked by accident. The tab bar here is typical of tab bars: it’s too short and too near other items, so there will be accidents.
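The center-to-center rule is easy to automate in a design lint pass. A minimal sketch, assuming rectangles measured in millimeters (the helper names and the 8 mm spacing in the example are my assumptions):

```javascript
// Interference check: measure between target CENTERS, not edges.
// Rects are { x, y, w, h } in millimeters.
function centerDistance(a, b) {
  const dx = (b.x + b.w / 2) - (a.x + a.w / 2);
  const dy = (b.y + b.h / 2) - (a.y + a.h / 2);
  return Math.hypot(dx, dy);
}

function interferes(a, b, minSpacingMm) {
  return centerDistance(a, b) < minSpacingMm;
}

// Two 4 mm tabs sitting flush against each other: their centers are
// only 4 mm apart, so they interfere at an 8 mm spacing guideline.
const tabA = { x: 0, y: 0, w: 4, h: 4 };
const tabB = { x: 4, y: 0, w: 4, h: 4 };
```

This is why a wide-but-short tab bar fails the check even though no two tabs overlap: the edge gap can be zero while the center gap is what matters.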
I didn’t give you a size for interference. There’s a reason. Looking closely at the pile of serious academic research I mentioned, at my own research, and at usability studies on real products reveals that the way people touch devices is a bit more complex than a single number. But in ways that correspond neatly to some of the work we already do. So aside from numbers, the one thing to remember is to avoid the edges.
As I said, people are worse at accurately touching the edges, and especially the top and bottom. I have turned the gathered data into usable charts, with larger interference zones at the top and edges, which neatly correspond to the sort of structural zones that already exist in much of our design. Up here are those numbers. These are the rows for mastheads, tabs, the big content area of course, and the chyron at the bottom. If you aren’t getting the rows I refer to, I mean this…. (This whole principle is detailed, with many references, in http://www.uxmatters.com/mt/archives/2013/11/design-for-fingers-and-thumbs-instead-of-touch.php)
(Point to Masthead, Tabs, Content, Chyron) And if you look at the few squares I overlaid here, you can see how they correspond to the diagram of where people touch screens accurately. Or, not. You can also see the red square where things are a bit too close together.
I am starting to call this designing by zones. You just make sure those strips exist, and make sure they have proper spacing for the handset screen selected.
That can also be boiled down to a pretty simple checklist. — Put things that people want to read, or the primary interaction, in the center. — Provide room to scroll, so pages longer than the viewport can scroll that content to the center of the page. — Make rows selectable, without requiring small buttons at the left and right sides. — Limit the number of common controls in the masthead and chyron… — Because everything has to have plenty of space. I’ll provide specific guidelines, but “plenty” is easy to remember. — For tabs, don’t hide content or require gestures to use them.
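The zone idea can be expressed as a simple lookup: classify a target by its vertical position, then apply that zone's spacing. The zone boundary fractions and spacing values below are illustrative placeholders of mine, not the talk's measured numbers.

```javascript
// Design-by-zones sketch. Splits the viewport into the structural
// rows from the talk: masthead, tabs, content, chyron.
// NOTE: boundary fractions are placeholder assumptions, for
// illustration only.
function zoneFor(yFraction) {
  if (yFraction < 0.1) return 'masthead';
  if (yFraction < 0.2) return 'tabs';
  if (yFraction < 0.9) return 'content';
  return 'chyron';
}

// Edges need more room than the center, so the spacing guideline
// varies by zone. Values in mm are illustrative, not measured.
const minSpacingMm = {
  masthead: 10,
  tabs: 9,
  content: 8,
  chyron: 10,
};
```

With the real measured numbers substituted in, this pairs naturally with a center-to-center interference check over a whole screen layout.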
Lastly, I say the best way to work with a lot of this stuff is to do it at device scale. Work on the device, send images and code to the handset.
Sketch at device scale, so you start with it being the right way around. Avoid too much reliance on your computer screen, and the Powerpoint to show it off. (From a participant at a workshop on this same basic topic at MoDev UX 2014 in McLean, Virginia.)
Size guidelines are fine, but you can help yourself a lot and reduce your math time by just checking your work at scale.
Take the design out of OmniGraffle, Visio, Axure, Photoshop, InDesign or whatever, and get it off the computer. No need for clever prototype tools (though those are fine), just put screens onto actual devices. Try it out. Pass it around the room to make sure you aren’t foolish, or to share the design the way it will really work, in meetings with clients or stakeholders. If you do want to measure, do it directly to make sure your sizes are right. You can use a circle template you get at the art supply store (or these days, Amazon), but I made up my own little tool I keep in my pocket, because this is so important. (Get the template and see video about how and why to use it at http://4ourth.com/wiki/4ourth%20Mobile%20Touch%20Template.)
You can boil this down to the sizes and numbers you care the most about, but basically, when designing for touch, think about: — Visible targets — Is the text readable? Do the actions afford whatever action you want them to have? — Fingers — Do they obscure important information? Do they cover so much of your button the user can’t tell if they clicked it or not? — Touch Target Sizes — Just meet the basic sizes. And provide plenty of room around targets when you have it. — Design by Zones — To avoid interference, make sure there’s enough space between targets, by where it is on the viewport. Try to put key stuff to read or click in the center, make edge stuff bigger. This is all shared in many ways, and you can get this deck from the conference or I will have it on Slideshare soon.
And, if you want to discuss more, just ask me. If you miss these addresses, just Google my name and you’ll find me.
If you download this yourself, it’s information you may find useful. But it’s mostly here in case I get a question: instead of just waving my arms, I can show these neat slides I used to put in the presentation but which bore most of you now.
Since it’s the most common thing, we’re talking about capacitive touch. Resistive is the one where you simply apply pressure, and a grid of conductive leads makes contact, so the device knows which point you touched. These are still being built, even for consumer devices, like tablets or seatback entertainment systems and so on. There are even the old IR beam systems still around, used mostly in rugged environments (some ATMs and kiosks) and for very large displays. Capacitive touch uses the electrical properties of your body. Your finger acts as a capacitor whose presence in the system can easily be measured by little nodes, in a grid, on several layers between the display screen and the protective plastic or glass. But it is not perfect. There is math, and interference, and tradeoffs in thickness, weight, cost, and optical clarity that get in the way of increased precision.
A few years ago, Motorola put a handful of devices in a little jig so they could precisely, robotically control the pressure, angle and speed of touch sensing. These are some of them. Even the much-loved iPhone is imperfect, with notable distortion at the edges, and actually a total inability to reach the edge on some sides. Look at the stairstep pattern on the Droid. That’s a problem with the calculations that predict the precise position between the sensors. The pitch of the steps is clearly the grid size.
You can actually SEE the capacitive touch sensors sometimes. Rarely as a grid, usually only one direction at a time. This is important so you understand the previous slide. These sensors aren’t microscopic, but quite large. There’s a lot of math to estimate where you touched between sensors.
As it turns out, it’s not really important how big our fingers are, except insofar as they obscure part of the screen, which is something else. Our finger squishes against the screen and only the part that gets flattened is detected. My own research indicates this is pretty much the same for everyone. Children, for example, press really hard, so they have a larger relative contact patch. There is some variability based on task, too, so people can use fingertips and press lightly.
[BREAK] This phone, like all smartphones, is not just about touch.
It has huge numbers of sensors that make it aware of the user, and of the world around it.
Design by zones examples.
Design by zones examples. Bad ones here. More than 4 actions on the menu bar is too many. And all tabs are too short, or too near other elements.
Understanding when to use mousedown vs. mouseup, even just to do things like show the visual click on mousedown but not activate the action until mouseup, can be a really good way to improve the overall experience of the interaction. But not all touch behaviors are equally supported. Check compatibility before you implement, and make sure that the platforms you need to support will work. For the web, as shown here, make sure there’s a useful fallback, so the design works no matter what, even if some platforms have better features. http://www.quirksmode.org/dom/events/index_mobile.html http://www.quirksmode.org/blog/archives/2014/01/touch_action_te.html
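One way to keep the fallback testable is to separate capability detection from the binding decision. A minimal sketch, with the function and flag names my own invention:

```javascript
// Decide which press/release events to bind, given detected
// capabilities. In a browser you would pass something like
//   { touch: 'ontouchstart' in window }
// so the decision itself stays testable outside the DOM.
function pickEvents(env) {
  if (env.touch) {
    return { press: 'touchstart', release: 'touchend' };
  }
  // Mouse fallback: show the pressed state on press, but only
  // fire the action on release, so dragging away can still cancel.
  return { press: 'mousedown', release: 'mouseup' };
}
```

The handler wired to `release` is where the action fires; the `press` handler only toggles the visual pressed state, matching the mousedown/mouseup split described above.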
If you miss these addresses, just Google my name and you’ll find me.