Sunday, September 30, 2012

Emotional Design

Emotional Design
By: Donald A. Norman

Book Comparison
        After reading just the first chapter, the book Emotional Design clearly has a much different viewpoint on the design of devices than The Design of Everyday Things, also by Donald A. Norman.  In The Design of Everyday Things, emotional design is described more as a trap that leads designers to make devices that are aesthetically pleasing but not friendly to users.  In Emotional Design, studies show that a device made "pretty" is actually perceived as easier to use than a similar device that is simply not as visually likable.  Each book emphasizes a different aspect of designing devices.  
        In The Design of Everyday Things the emphasis was clearly on usability and intuitive devices.  In my opinion, usability is much more important than aesthetics.  An ugly design that is easy to use will trump a majestic design that is impossible to understand.  Explaining how to create a usable design is also harder to do.  This book does a great job of isolating what makes a design intuitive and what went wrong with designs that are unusable.  It is not as easy to know whether a design is intuitive by looking at it as it is to see whether a design is appealing.  Usability is always more important, but a great design has little chance of getting noticed if it is not also visually appealing.  
        In Emotional Design, the benefits of visually appealing designs are discussed, as well as the psychological reasons people are drawn to particular designs and the thought processes behind them.  Dr. Norman explains that natural human tendencies affect how people see things.  This does not mean that all people have the same opinions about devices, because that is clearly not true.  Our experiences can either amplify the emotions people are born with or dampen them to the point where they are nonexistent.  The fear of heights is a good example Dr. Norman gives.  Everyone is born with a fear of heights, but through experience some people may develop acrophobia or, on the other hand, end up with no fear of heights at all.  Every emotion develops like this in everyone.  These emotions decide every aspect of how people interact with devices and should be considered in a design.
        

Wednesday, September 19, 2012

Book Reading #1: Bad Design #5

Bad Design #5: Unlabeled Sinks


This is the sink in my bathroom.  As you can see, there is no visible way of knowing which knob is the hot and which is the cold.  I constantly get them confused and have even had arguments with my roommates about which one is which.  The hot water takes a while to get warm and the cold water runs warm for a little while before getting cold, so it sends a lot of mixed signals about which knob is actually the hot one and which is actually the cold.  I have let the cold water run thinking it was the hot, waiting for it to get warm, only to realize I had wasted a bunch of water for no reason at all.  The sink in our kitchen is the same way, which is even worse since hot water is needed more in the kitchen than at the bathroom sink.  I have made the mistake so often that I do not even try to get hot water anymore; I simply do not care.

Book Reading #1: Bad Design #4

Bad Design #4: TV with no remote


This is my TV.  There is nothing wrong with the TV itself.  The problem lies with what it did not come with: a remote.  I checked the manual and there was no mistake, this model simply does not come with a remote.  The remote in the picture is a cheap universal one I had to buy separately, but guess what?  It does not work on the TV.  I have been able to connect the remote to my roommates' TVs, so there is no problem with the universal remote.  A TV without a remote is just a large monitor.  That is all I use it for, which is a real shame.  I have thought about buying a more expensive universal remote, but I haven't for fear that it would be a waste of money if it does not work with the TV either.  I now hate this brand and will never buy anything from them again.  No, they never said there was a remote in the box, but I still feel cheated.  It is commonly assumed that all TVs come with a remote.  The idea that a brand new TV could be bought without one would never have crossed my mind if I had not bought this TV.

Book Reading #1: Bad Design #3

Bad Design #3: Garage Door Opener


This is my garage door opener.  It has a single button to add new garage door signal transmitters (remotes) so that they will open the garage door.  The downfall?  The same button also clears all transmitters currently known by the garage door opener (receiver) if held too long.  When I was connecting my remote, I read the first line of the instructions, which said to press and HOLD the button on the transmitter.  I was not thinking clearly and held the button on the garage door opener instead.  That wiped the other remotes that were already known by the receiver, which annoyed my roommates since they then had to connect their remotes again.  I have had to connect many wireless devices together, such as computer mice or Xbox 360 controllers, which built a standard that requires buttons on both devices to be held down.  This presents a problem when the garage door opener design goes against this standard and instead clears its memory when the button is held down.

Book Reading #1: Bad Design #2

Bad Design #2: Passenger Car Door Lock


This is the passenger's door lock in my car.  It is not intuitive to the first-time user.  I can't even count the number of times passengers have gotten stuck in my car because of the lock.  I always have to unlock the doors from the driver's seat when there is a new person in my car for fear of having to awkwardly wait for them to eventually figure it out.  The left picture is the door in the locked position and the right is unlocked.  Since my car is rather user unfriendly, I have paid attention to other cars' locking mechanisms, which mostly unlock after the first time the handle is pulled; all that is needed to open the door is two pulls.  My car does not do that.  The motion to unlock it is also rather unnatural, even when you do manage to find the small stub that is the lock.  A new user has no hope of finding it in the dark.

Book Reading #1: Bad Design #1

Bad Design #1: Switch for Power Outlet


This is a light switch that controls a power outlet on the wall.  The outlet is behind the recliner, so you will just have to trust me when I say that the lamp is connected to a normal-looking outlet and is controlled by the switch.  This may seem like a great design at first when it is being used properly, but after further analysis it does not seem so.  When my roommates and I moved into the house, the switch appeared to do nothing when flipped on or off.  When we attempted to use the outlet, nothing would work in it because the switch happened to be in the off position.  Only later, and only by luck, did we realize the invisible connection between the switch and the outlet.  How were we supposed to know?  There was absolutely no visible aid to show that the switch was connected to the outlet, so we assumed it was just a faulty outlet.  We live in an old house, so some things are bound to not work.  Many drawbacks go through my head when I think of this.  What if we had connected a computer or TV to the outlet while the switch was in the on position?  Every time someone hit the switch, the device would shut off, potentially damaging it.  There are no other outlets on that side of the room either, so this one outlet effectively forces us to put all our other devices in the outlet on the opposite side of the room.  Only our lamp may plug into this outlet since it is the only device we want controlled by the switch, even though there are two plugs like all normal outlets.  The other plug is completely useless.  Who needs two lamps in the same spot?

Book Reading #1: Good Design #5

Good Design #5: Printer and Scanner


This is a printer and scanner that actually works!  All I had to do was install the software on my computer and it works perfectly.  The software has a single button to begin scanning, with advanced options being secondary.  I have never used them and never had to, but it is nice knowing they are there.  The printing has never failed on me over the three years I have had it, which is saying a lot for a printer.  I have always heard or seen horror stories about printers, which makes me very grateful to have one I can rely on.  It does require a USB connection, but it only takes seconds to plug it into my laptop and print or scan.  It does exactly what it should do with only a few clicks, with the option to go into details.  Even the details are in easy-to-understand terminology or contain pictures to explain options.  All the buttons on the printer itself are clear and there is a display that gives helpful feedback to the user.

Book Reading #1: Good Design #4

Good Design #4: Great Spice Cabinet


This is a multi-layered spice cabinet.  The picture does not completely show all the space it contains in such a small area because there is so much.  The leftmost and rightmost parts are the outer doors, which have many shelves.  The piece of wood in the center is an inner door that swings from the center and, when closed, looks like the shelving just right of it.  Behind the two inner doors that hinge in the middle is another foot of stationary shelving for even more items.  There is more space on top of the inner doors, as you can just see.  My house has a small kitchen, but my roommates and I can jam a whole lot into this one greatly built cabinet.  I had never seen anything like it before moving into my current house.  We also have a normal shelved cabinet that is a pain to deal with in comparison because it does not give the same potential for organization as this cabinet does.  This style of cabinet also makes it easy to get to all the objects, even ones in the back, where a normal cabinet would require you to move things around, which messes with organization.  This cabinet is the only organized space in our entire house because it forces us to be organized.  It is truly a diamond in the rough that is our kitchen.

Book Reading #1: Good Design #3

Good Design #3: Alarm Clock


This is my alarm clock.  There are quite a few buttons and switches on the top and sides of it.  It has a lot of functionality that requires many different inputs in order to keep everything straight, but they are relatively intuitive.  There are two alarms that can be set by holding down one of the top buttons and pushing the up or down arrows on the right.  That is the only counterintuitive part of the whole clock.  On the left side are switches that turn the two alarms on and off.  They are clearly labeled, and a light on the front goes on when an alarm is active.  Currently, the second alarm is on, signified by the bottom right LED.  The speakers are very loud and you can choose between turning on the radio or a very annoying alarm when it is time to wake up.  There is a battery backup in case of a power outage.  It is also satellite linked, so the time never needs to be set; it synchronizes the time and date within seconds after turning on.  The LED screen has a variable brightness control on the left side.  All the radio settings are on the right side.  It has a few problems, but it is by far the best alarm clock that I have ever had, and it does not need to be turned off on weekends and turned back on for the week.  It knows what day it is and can be set to only go off on weekdays.
The one real problem is that it does not take daylight saving time into account, so twice a year I have to change the time zone to get the right time, which usually takes me a minute to figure out how to do.

Book Reading #1: Good Design #2

Good Design #2: Electric Kettle


This is an electric kettle that boils water in seconds.  It is very simple, with only a button in the handle to open the top and a switch under the handle to turn it on.  It has measurements on the side so that another container is not needed to measure water before pouring it into the kettle.  The kettle can only sit on the stand one way and seats into a panel that provides power to the coils inside the kettle.  There are no exposed areas that could burn if touched.  The entire kettle is insulated very well, including the bottom.  When it is full of water and rather heavy, it is natural to grab it with two hands, which is completely safe to do.  Once the water inside comes to a boil, it automatically flips the switch off, so it is not possible to over-boil.  The same mechanism that can tell when it is boiling can also somehow tell if the kettle is empty.  It will not heat the coils, and potentially melt the entire inside, if there is no water to boil.  I have never had a problem with it.  Before I had the kettle, the only ways I could heat water for tea were the microwave or the stove, which are much less convenient.

Book Reading #1: Good Design #1


Good Design #1: Electric Toothbrush


This is a Sonicare toothbrush.  It makes brushing much easier than a manual toothbrush.  It is completely waterproof and simply sits in the charger.  It uses magnetic induction to charge, so there is no plugging in, which makes it very simple and clean looking.  The interchangeable brush heads make replacing an old one easy to do, though they are rather expensive.  All that is needed to use it is to push the button, and it will run for exactly 2 minutes, which is the standard length of time to brush.  Once the 2 minutes are up, it turns off.  When it is sitting in the charger, the light in the button flashes while charging and is solid when it is fully charged.  I did not read any instructions when I bought it, even though a small manual came with it.  The entire process is incredibly intuitive.  There is only one button, for turning it on and off.  It turns itself off after 2 minutes, so you rarely ever actually turn it off yourself, only when you need to stop brushing before the 2 minutes are up.
I have only had one problem with it.  If I feel as though I have finished brushing before the 2 minutes are up and take it out of my mouth while it is still vibrating, I get sprayed with toothpaste.  I will eventually learn not to do that anymore, but it goes against how I have always brushed my teeth before.  All I have to do is push the button again to turn it off before taking it out of my mouth.  Also notice how my roommate's (the right one) looks different from mine (the left one).

Tuesday, September 18, 2012

Book Reading #1: Book Response

The Design of Everyday Things

By: Donald A. Norman

Book Response 
I found this book to be very informative and a great read for anyone designing anything, and for end users as well. Pretty much everyone should read this book. It gives clear guidelines to follow and concepts to always consider when designing any device. Dr. Norman uses many examples of both good and bad designs to show what makes a device user friendly. He explains exactly what makes a design more difficult than it needs to be and gives plenty of examples of how it can be improved. When reading this book, it is good to keep in mind that it was written in the 80’s whenever the topic of what good future designs will look like arises.  There are many points where he predicts future devices that are common today, and some of the example problems he gives have since been solved.
I had similar views on a few of the ideas he explained, but the vast majority was completely new to me. I greatly benefitted from reading this. Most of his guidelines for creating human-centered designs were so simple that it will not be possible for me to forget them very easily. I may forget what Dr. Norman called them, but the concepts will be ingrained in my head. Every time I come in contact with an overly complicated device, I will immediately think of this book. I will no longer blame myself for not knowing how to properly interact with devices around me. I will no longer feel silly for not being able to open a door of any kind. I will further scrutinize manuals for newly bought devices without mercy. After reading this book I cannot help but notice how many things in our society could be so much better. It made finding devices to write reviews about incredibly easy.
Dr. Norman’s beliefs about how the mind and memory work are incredibly fascinating to me. As a computer engineer, I always explained the mind as working similarly to a computer, since that was the closest conceptualization I could think of and understand to an extent. His description of how he thinks the mind works was very different. It was still easy to understand, and it made a lot of sense when analyzed. I always pair memories that are similar even if the situations are completely different. My conceptual models of the mind and of memory are now considerably different, taking into consideration what he explained to be true. Hopefully they are more accurate and will help me understand where I am more likely to make mistakes. I have also always considered myself a rather forgetful person, but now I feel more motivation to keep track of important things in my environment. With the now seemingly futuristic devices at my disposal in this day and age, I will try not to depend on my memory as much as I once did.
More than just potential designers should read this book. It is an incredibly simple read that does not require much technical insight. Anyone who reads it can benefit greatly and become a more conscious consumer. If everyone read this book, the number of poorly designed devices would drop dramatically because no one would buy them. That is how it should be! As Dr. Norman explained, society would then be correctly promoting the evolution of user-centered devices. We could transfer that same concept to movies and boycott poor quality films as a society while we are at it, too!  No more remakes!

Book Reading #1: Chapter 7

The Design of Everyday Things

By: Donald A. Norman

Chapter 7
In this final chapter, Dr. Norman reiterates everything that he considers a good, user-friendly device should have.  I liked that he packed all of his proposals on how to make a device intuitive into one place.  It makes referencing them much easier if I ever find the need to do so in the future.  I believe all the properties of a great design needed most of the description he lays out in the rest of the book, but now that I know the basics, having a single point of reference is very nice.  He also talks about how some devices must be user unfriendly, such as medicine bottles.  He argues that out of all the devices out there, intentionally confusing devices are a minute group.  The majority of devices that are complicated should not be.  We, as users, need to promote good designs and boycott the bad in order to move the evolution of user-friendly designs forward instead of into an endless spiral of poor designs.  He is absolutely right: if everyone complained about bad designs, manufacturers would know what to change in the future.  The problem is that this requires everyone to contact the manufacturers, stores and other users in order to get the complaint into all the right hands.  As a designer of software, I will do my best to make user-centered designs.

Book Reading #1: Chapter 6

The Design of Everyday Things

By: Donald A. Norman

Chapter 6
I thought the evolution of the keyboard was interesting since it is universally accepted, and the layout has always made me wonder how it actually came to be.  I also believe that designers have to understand that they are not typical users.  It is essential to get as many different inputs on a design as possible, which was also talked about in chapter 5.  Dr. Norman's evolution of design is very true, and I believe that the marketplace has actually been moving in that direction again since this book was written.  It is not completely there yet, but it certainly has improved.  I think Dr. Norman has had a few too many bad experiences with shower faucets.  He spends a considerable amount of time (more than he does on doors) in this chapter pointing out different poor faucet designs.  He briefly mentions designs that unintentionally added too many features, which makes them look like a technological work of art to people who do not know how difficult they are to use.  I think he should have talked about examples of devices with too many features and how to avoid that, rather than about shower faucets.  The end of the chapter is devoted to computers and how they were oriented toward computer programmers instead of the users.  He finishes with an accurate depiction of the technology we have today.

Book Reading #1: Chapter 5

The Design of Everyday Things

By: Donald A. Norman

Chapter 5
Dr. Norman begins by explaining different types of errors in detail and then how they should be handled by a device.  The design of a good device must be error friendly, whether that is by reducing errors outright or making them easily reversible when they happen.  I have come in contact with a few websites, such as elearning, that do not save work in text boxes, so if you submit the text answers while there is an internet error, all of the text is lost.  I have to write it in Word and then copy it over to be safe.  That is an annoyance that should not exist in larger websites.  Dr. Norman then talks about the theory of how the mind works that he believes in.  I found his view to be very interesting and found myself thinking that I do think about things that way.  I am not sure if I truly think in the connectionist approach or whether I am just thinking of specific times that fit its theory.  He goes on to explain the different forms of forcing functions.  I think that forcing functions are the best way to reduce errors since they are unavoidable and require the correct procedure.  I will attempt to use them every time they make sense in a design (a quick sketch of what I mean in software is below).
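To make the forcing function idea concrete, here is a minimal sketch of one in software.  This is my own toy example, not one from the book; delete_project() is a hypothetical stand-in for whatever destructive work a real program would do.  The point is that the dangerous action cannot happen until the user performs a deliberate confirming step.

def delete_project(name):
    # Hypothetical stand-in for whatever irreversible work a real program would do.
    print("Project '" + name + "' deleted.")

def forced_delete(name):
    # Forcing function: refuse to act until the user retypes the exact project name.
    typed = input("Type '" + name + "' to confirm deletion: ")
    if typed != name:
        print("Name did not match; nothing was deleted.")
        return False
    delete_project(name)
    return True

if __name__ == "__main__":
    forced_delete("cs-blog")

A website like the one that lost my text could use the same trick before discarding anything the user has typed.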

Monday, September 17, 2012

Book Reading #1: Chapter 4

The Design of Everyday Things

By: Donald A. Norman

Chapter 4
This chapter works to explain in more detail the properties that are needed for a natural interaction.  Dr. Norman categorizes the constraints that can limit how intuitive a design is to a user.  I found the constraints to make a lot of sense, and I think they are helpful for understanding troubles people may have with a design that I would not have thought of.  He states that there are always problems with designs that the makers have no way of noticing without user testing.  The rest of the chapter talks about problems with commonly occurring devices such as switches.  In my parents' house there is a row of four light switches, and after 10+ years of living in that house I still do not hit the switch I want on the first try.  Dr. Norman also talks about feedback at the end of the chapter, which I found very interesting.  I strongly believe in having feedback.  I always get very annoyed when I am installing or running a program and it does not display a status bar or anything that would hint at progress.  Informative feedback is always needed for any type of process or action; even something as simple as the sketch below would do.
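As a small illustration of the kind of informative feedback I mean, here is a sketch of a long-running loop that reports its progress instead of leaving the user staring at a frozen screen.  The step names and the time.sleep() call are just stand-ins for real work.

import sys
import time

def install(steps):
    # Report progress after every step so the user always knows something is happening.
    total = len(steps)
    for i, step in enumerate(steps, start=1):
        time.sleep(0.2)  # stand-in for the real work of this step
        percent = int(100 * i / total)
        sys.stdout.write("\r[%3d%%] %s          " % (percent, step))
        sys.stdout.flush()
    sys.stdout.write("\nDone.\n")

if __name__ == "__main__":
    install(["copying files", "writing configuration", "registering service"])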

Sunday, September 16, 2012

Book Reading #1: Chapter 3

The Design of Everyday Things

By: Donald A. Norman

Chapter 3
In this chapter, Dr. Norman talks about many interesting, but overlooked, ways memory works.  This chapter was a real eye opener as to how I should memorize things, as well as how to explain my ideas to others so they are more likely to understand and retain them.  The successful transfer of ideas is clearly a vital property of any sort of group environment.  When attempting to remember something, my first action is to repeat it in my head over and over, which, as Dr. Norman said, is burdensome because it requires a lot of mental effort.  I have always told myself to use my phone to keep track of dates, but now I have even more motivation to do so after reading this chapter.  I like his prediction of the emergence of the first PDAs and small cellphones.  It was not a far-fetched prediction at the time by any means, but I grinned when I read it.  He gives many points about the ways we retain memory, and I think it would be incredibly beneficial for teachers to always be aware of these simple, but not obvious, concepts.

Saturday, September 15, 2012

Book Reading #1: Chapter 2

The Design of Everyday Things

By: Donald A. Norman

Chapter 2
I have always tried to explain to myself how a new object works on my own, without any conceptual models or a manual.  Many times my mental conceptual model is not accurate, but that first idea I get is the one that sticks.  If I am not corrected, I begin to think my own model is the actual model.  A good example is the microwave oven.  I may be an engineer, but I still do not fully know exactly how microwaves heat up my food.  I have an idea, but I know it is wrong.  How random chance is handled by the mind is very interesting.  Dr. Norman explained that most of the time when coincidences happen, people do not see them as coincidences until they keep happening at seemingly random times.  The first few times something happens by chance, I am always thinking about what I was doing just before, and I try to pin my actions down as the cause of the coincidence no matter how implausible that may be.  The mind is always jumping to an explanation for every unknown situation or object we come in contact with.  It would be amazing if we could predict why the mind does this.

Wednesday, September 12, 2012

Minds, Brains and Programs


Minds, Brains, and Programs
By: Dr. John R. Searle

This paper takes up the long-running debate about Artificial Intelligence (AI) and whether or not man-made machines are fundamentally capable of intentionality, intentionality meaning the understanding of something past what it is, and more about what it embodies.  Dr. Searle’s example of the Chinese Room explains that a machine passing the Turing test does not imply that it has intentionality.  A machine that can take in Chinese characters and accurately give a response in Chinese does not necessarily understand Chinese the way a Chinese-speaking human does.  A person who has no prior knowledge of Chinese can also take in Chinese characters in the same way and use the same algorithmic processes the machine uses to get the same answer, without knowing what any of it means.  Since the human does not understand Chinese, but can still deceive a Chinese-speaking human into thinking they do, the machine, by comparison, does not understand Chinese either. 
The rest of the paper is Dr. Searle refuting responses to his argument that man-made machines cannot achieve intentionality as a human mind can.  He explains that the human mind is intentional because the human brain is “causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have that intentionality“.  We do not function as a computer does, running algorithms that are only capable of executing and producing the information fed to the next algorithm. 
I believe that Dr. Searle makes very valid points as to why machines running off of formal processes can never be capable of intentionality.  They can only simulate understanding, because the machine did not make the programs it uses and so has no real understanding of the contents of the programs and no way of ever knowing them.  Looking back at the Chinese Room, consider why the human did not retain any intentionality about the Chinese language even though humans are fully capable of intentionality.  The algorithms did not teach them anything that would aid in understanding the Chinese language.  The million dollar question I think of when considering this is: how exactly do humans obtain intentionality about something?  That is what needs to be known before any human can possibly make a man-made machine do likewise.  In Dr. Searle’s definition of understanding, how do humans come to the conclusion that they understand something? 

Tuesday, September 11, 2012

Book Reading #1: Chapter 1

The Design of Everyday Things

By: Donald A. Norman

Chapter 1
The way Dr. Norman explains the basic principles of making user friendly designs is so incredibly simple that it is a wonder they were not made standard long ago.  He goes on later to explain why that is, but when he was explaining the components of a good, intuitive design I thought it was mind melting how obvious they were.  I have come in contact with glass doors like the ones his friend got stuck in many times.  I always got a sense of confusion when approaching them and had to really think about how to open them so as to not look silly.  At first I felt embarrassed that I had to put so much thought into just opening a door, which Dr. Norman says is the common response.  After encountering the doors enough times, I began to develop an immature idea, along the lines of Dr. Norman's visual component of design, as to why the doors were so terrible: if I can't see the hinges then there is no sure way of knowing which side is the right side to push or pull.  This is a reason I chose to take this course.  I do not want to design a door that makes people feel stupid by requiring more thought than it should.  Making a user over-process is inefficient and bad for business. 

Monday, September 10, 2012

Paper Reading #6: Using Rhythmic Patterns as an Input Method

Using Rhythmic Patterns as an Input Method

Emilien Ghomi     Guillaume Faure     Stéphane Huot     Olivier Chapuis     Michel Beaudouin-Lafon
ghomi@lri.fr         gfaure@lri.fr            huot@lri.fr           chapuis@lri.fr       mbl@lri.fr

Univ Paris-Sud (LRI), F-91405 Orsay, France
CNRS (LRI), F-91405 Orsay, France
INRIA, F-91405 Orsay, France

Author Bios:
Emilien Ghomi   
  • Ph.D. student at Université Paris-Sud
  • Michel Beaudouin-Lafon and  Stéphane Huot are his advisors
Guillaume Faure
  • Ph.D. student at Université Paris-Sud
  • Michel Beaudouin-Lafon is his supervisor
Stéphane Huot  
Olivier Chapuis
  • Research Scientist at LRI (Paris-Sud).
  • Ph.D. in Mathematics from University Paris VII Diderot
Michel Beaudouin-Lafon
  • Senior member of Institut Universitaire de France
  • Ph.D. thesis at Laboratoire de Recherche en Informatique, Univ. Paris-Sud
All are members of the InSitu research team (LRI & INRIA Saclay Ile-de-France).

Summary:
The authors of this paper are testing the use of short 2-6 beat rhythmic patterns, tapped on various motion or touch pad sensors, as an input method for interacting with computers, phones and other devices.  The uses of rhythms as controls are endless, from turning a phone to vibrate or skipping a song just by tapping it in your pocket to replacing hot key shortcuts in computer programs.  They posed some questions that would test whether technology with rhythmic controls is worth developing (a rough sketch of the recognition idea follows the list below): 
  • Are people able to learn and memorize patterns? 
  • Can they use them to trigger commands? 
  • Which patterns make sense for interaction and how to design a vocabulary? 
  • What feedback best helps in executing and learning patterns? 
  • How to design effective recognizers that do not require training?
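As promised above, here is a rough sketch of how a recognizer along these lines might work.  This is not the recognizer from the paper; it just classifies a tapped rhythm by comparing its normalized inter-tap intervals against a small, made-up vocabulary, which makes the match independent of tempo.

def normalize(intervals):
    # Scale intervals so they sum to 1, which makes the match tempo-invariant.
    total = sum(intervals)
    return [i / total for i in intervals]

def distance(a, b):
    # Sum of absolute differences between two normalized interval lists.
    if len(a) != len(b):
        return float("inf")  # different number of taps: cannot match
    return sum(abs(x - y) for x, y in zip(a, b))

def recognize(tap_times, vocabulary, threshold=0.2):
    # tap_times:  timestamps (seconds) of each tap
    # vocabulary: dict mapping pattern name -> list of relative intervals
    intervals = [t2 - t1 for t1, t2 in zip(tap_times, tap_times[1:])]
    if not intervals:
        return None
    observed = normalize(intervals)
    best_name, best_dist = None, float("inf")
    for name, template in vocabulary.items():
        d = distance(observed, normalize(template))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist < threshold else None

# Hypothetical three-tap vocabulary: evenly spaced taps vs. a longer second gap.
vocabulary = {"next-track": [1, 1], "previous-track": [1, 2]}
print(recognize([0.0, 0.25, 0.50], vocabulary))  # -> next-track
print(recognize([0.0, 0.25, 0.75], vocabulary))  # -> previous-track

A real recognizer like the one the paper asks for would also have to handle tap durations and rests, which the authors' patterns include.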
Related work not referenced in the paper:
  • Five-Key Text Input Using Rhythmic Mappings
    • Uses multiple taps to represent a single character on a keypad that is much smaller than a normal full sized keyboard.  It does not use a beat but only a sequence of button presses.  
  • Rhythmic Interaction with a Mobile Device
    • The authors wanted to be able to measure the movement of a phone in a 3D space to interpret rhythmic gestures to provide spatiotemporal gesture classification.  This does not use rhythmic tapping with a beat to command a device.
  • Music Wall: A Tangible User Interface Using Tapping as an Interactive Technique
    • Uses a sequence of taps to interface with a wall or table using embedded sensors, for casual communication such as knocking on a door to ring the doorbell.  This is a similar idea, only applied to different types of devices for different purposes. 
  • RhythmLink: Securely Pairing I/O-Constrained Devices by Tapping
    • The title is self explanatory. It uses a shared sequence of taps to securely link devices such as bluetooth headset to a phone.  The authors do not intend to use rhythms for controlling devices as intended in the paper I am evaluating.
  • inTUIt – Simple Identification on Tangible User Interfaces
    • Uses tapping as a way of interacting, but does not go further into making different types of tapping different commands. 
  • Exploring Reinforcement Learning for Mobile Percussive Collaboration
    • Aims to make real-time, multi-user musical expression on mobile devices just as intuitive as its physical counterparts.
  • Sonic gestures and rhythmic interaction between the human and the computer
    • Explains many different ways of interacting with a device such as tapping on a table, but does not actually implement an experiment or way of measuring the tapping sequences.
  • Movement Sonification: Effects on Perception and Action
    • Translates movement, such as a person jumping, into a sound wave using qualities such as the force the person exerts on the ground over time.
  • Gesture Authentication with Touch Input for Mobile Devices
    • Intends rhythmic patterns as passwords and not basic user commands for devices.
  • Temporal Interaction Between an Artificial Orchestra Conductor and Human Musicians
    • Is intended to guide an orchestra by listening to the tempo and rhythm of the music and moving accordingly as a human musician would do.
Evaluation:
Two experiments were done.  The first was to evaluate whether a computer can register the rhythmic patterns of a novice user effectively.  The second was to see if patterns can be memorized as efficiently as the shortcuts used today.  In the first experiment, participants listened to and saw a visual representation of 30 patterns before attempting to replicate each pattern for the recognizer to interpret.  There were four groups of participants: no feedback when tapping the rhythm, audio feedback, visual feedback, and a combination of visual and audio feedback.  Figure 6 shows the quantitative data gathered.  The qualitative data consisted of how the participants felt about the different forms of feedback.  Many only wanted one form of feedback if they had both, and the quantitative data clearly show that no feedback did not work as well.  The second experiment used hot keys as an example of a long-used shortcut system that their rhythmic patterns may be able to replace.  To see if using rhythms is just as fast as using hot keys, they assigned 14 patterns and hot keys to the same objects, with no correlation between the patterns or hot keys and the objects.  Audio-only feedback was used since it was the most effective in experiment 1.
The participants had the choice of using either patterns or hot keys as long as they tried both.  Most used the patterns because they found them more fun.  The experiment was stretched over two days to get a better measure of how well the participants memorized the shortcuts.  There were two groups: one where participants were told the shortcuts and then did a memory test, and another where participants were given help whenever they needed it.  The quantitative data are shown in the figures to the right.  In Figure 14, there is not much of a difference in the recall rate once the patterns and hot keys are learned.  In Figure 15, the help rates are also rather similar between the rhythmic patterns and hot keys, showing that the patterns are just as easy to learn as conventional hot keys.

Discussion:
They listed many ways in which rhythmic patterns are better than hot keys, but I do not believe they will be able to replace them.  I see this as a great way to give commands to phones without having to take them out of your pocket, or to skip songs as you jog, and other things like that.  They need to run an experiment testing how long it takes to do some simple tasks using patterns versus hot keys.  I think that someone using the patterns will never be able to move as quickly as an experienced hot key user, since pauses are an inherent part of giving a command rhythmically. 

Thursday, September 6, 2012

Paper Reading #5: Ripple Effects of an Embedded Social Agent: A Field Study of a Social Robot in the Workplace


Ripple Effects of an Embedded Social Agent:
A Field Study of a Social Robot in the Workplace

Min Kyung Lee(1), Sara Kiesler(1), Jodi Forlizzi(1), Paul Rybski(2)

Human-Computer Interaction Institute (1), Robotics Institute (2)
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh, PA 15213 USA
{mklee, kiesler, forlizzi, rybski}@cs.cmu.edu

Author Bios:
Min Kyung Lee
  • Sara Kiesler is her Ph.D. adviser.
Sara Kiesler
  • Social and behavioral aspects of computers and computer-based communication technologies.
Jodi Forlizzi
  • Associate Professor, School of Design and Human-Computer Interaction Institute,
  • The A. Nico Habermann Chair in the School of Computer Science
Paul Rybski
  • Systems Scientist – RI, ECE Carnegie Mellon
  • PhD, 2003 Computer Science and Engineering UMN
Summary:
A social robot called Snackbot delivered snacks to people in an office and engaged the recipients with small talk as it delivered their snacks.  The authors of the paper wanted to see how people interact with and perceive Snackbot.  There had not been a study on whether a robotic agent's social skills begin to annoy people who interact with the agent for an extended amount of time.  Snackbot was 4.5 feet tall and had a face that went through many tests to be as pleasing to people as possible.  Snackbot moved autonomously and had a speech recognition system, but it was monitored by an operator on a laptop who could take remote control.  Unless an issue occurred, the operator never took control of the robot.  The most common times the operator had to step in were when a participant asked a question that Snackbot did not know how to answer.  Interactions started with Snackbot identifying the participant, engaging in small talk, giving them their snack and then leaving in a socially acceptable manner.  Snackbot was given categories of topics to engage the participant with, based on the individual participant, what snack they chose, the season, or apologies for breaking down, since that happened often.   

Related work not referenced in the paper:
  • Involving Users in the Design of a Mobile Office Robot
    • The robot in this paper was made to assist in fetching objects for physically impaired people in the workplace.  The authors were interested in how people in an office would perceive a robot over an extended period of time, exactly as with Snackbot.  It was not autonomous, nor did it have speech recognition like Snackbot.
  • The USUS Evaluation Framework for Human-Robot Interaction 
    • The authors identify what a robot must be and be capable of doing in order for it to have a seamless integration into the work place. It pretty much lists out the same topics that Snackbot was meant to test, but they do not provide any field study data.
  • Studying the acceptance of a robotic agent by elderly users
    • The point of this paper is to do a field study to see if a robot could be socially accepted among elderly people as it helps with their day-to-day lives.  It may have a more focused audience of study than Snackbot, but the idea is exactly the same.  Snackbot is not at all a novel idea.
  • Human–Robot Interaction in Rescue Robotics
    • This paper is about creating more environmentally aware rescue robots that do not take many people to control and to also be socially accepted as a robotic agent.  
  • Whose Job Is It Anyway? A Study of Human–Robot Interaction in a Collaborative Task
    • This paper explains how the time for collaboration with humanoid-like robots in the workplace is not far in the future.  It describes exactly what all these other papers are doing, such as Snackbot, which is slowly perfecting the art of creating a robot so lifelike that it can easily be socially accepted by humans.
  • Final Report for the DARPA/NSF Interdisciplinary Study on Human–Robot Interaction
    • This paper explains how roboticists collaborated with psychologists, sociologists, cognitive scientists, communication experts and HCI specialists to set up a common ground for them all to work together in making a more humanoid robot agent to interact with people on a daily basis in many different environments. 
  • A Methodological Variation for Acceptance Evaluation of Human-Robot Interaction in Public Places
    • This paper has the exact same idea as the experiment involving Snackbot, only intended for public settings.  It does not have any field data, though.
  • A Social Informatics Approach to Human–Robot Interaction With a Service Social Robot
    • The authors describe experiments that have been done involving the benefits of emotional and social robots against emotionless robots in work environments like an autonomous cherry picker. They describe how they believe robots should approach social intelligence in order to be successful in society.
  •  Teamwork-Centered Autonomy for Extended Human-Agent Interaction in Space Applications
    • Astronauts' jobs in space are incredibly dangerous and would greatly improve if they could work with autonomous robots that can intelligently do jobs that are too dangerous or slow for humans to do in space.
  • Enjoyment, Intention to Use And Actual Use of a Conversational Robot by Elderly People
    • Not all social agents have to be used strictly as workers.  They will also be used as potential company for people like the elderly, who can't be left alone too long but can't afford a full-time nurse either.  With robotic agents helping, a single nurse can take care of many elderly people at once.
Evaluation:
Snackbot delivered snacks to 16 offices on one floor of a building at a US university.  There were 21 participants, 8 women and 13 men, ranging from 22-51 years old.  There were 11 graduate students, eight staff, one doctor and one faculty member.  They were all part of the computer science department, but half had no programming experience.  Snackbot delivered snacks between 2:30-4 p.m. on Mondays, Wednesdays and Fridays for four months.  All data was subjective since the experiment was to see if the participants would accept Snackbot socially.  175 interactions were recorded and 161 interactions were videotaped.  Participants interacted with Snackbot an average of 9 times over their two months of deliveries.  Participants were not told that they would be given snacks by a robot when they signed up.  There was mostly qualitative data, since the impressions Snackbot made on the participants were mostly collected in end-of-experiment interviews.  Five participants saw Snackbot as a failed person because it broke down every once in a while and sometimes interacted inhumanly, such as asking a door if it could pass or pausing for extended periods during a conversation.  These breakdowns helped the participants keep the sense that Snackbot is in fact a robot.  From the interviews it was found that most of the participants got used to Snackbot and would sometimes go out of their way to make sure they were in their office when it arrived.  They would call the days Snackbot came around "Snackbot day".  When participants were busy and did not want to talk to Snackbot, they said they later felt bad about getting irritated with it.  The majority of the participants felt emotions for Snackbot whenever it "humiliated" itself by breaking down.  Some participants got jealous of other participants because they appeared to get more attention from Snackbot, when in actuality they were not.  An unintended effect, called the ripple effect, was that people who were not participating but watched Snackbot's interactions built both negative and positive behaviors toward Snackbot.  Over time the ripple effect grew, and variables like office culture were a factor that needed to be considered when deploying a social agent like Snackbot in a work environment.  Overall, 75% of participants grew to enjoy interacting with Snackbot.

Discussion:
I believe that as robotics and the technology to build robots improve, we will begin to see more and more robotic agents in workplaces everywhere.  This experiment was not novel and had been done before, but not at this level of computing power.  It was building off of many other experiments, each better than the last.  The participants were not representative of a normal work environment, considering it was a computer science department in a university and they could easily have known the authors of this paper.  Their opinions on the matter were probably far from unbiased.  Bias is impossible to get rid of in an experiment like this, but more diverse participants are a must.

Paper Reading #4: Pay Attention! Designing Adaptive Agents that Monitor and Improve User Engagement


Pay Attention! Designing Adaptive Agents that Monitor and Improve User Engagement

Dan Szafir, Bilge Mutlu
Department of Computer Sciences, University of Wisconsin–Madison
1210 West Dayton Street, Madison, WI 53706, USA
{dszafir,bilge}@cs.wisc.edu

Author Bios:
Dan Szafir
Bilge Mutlu
  • In the Department of Computer Sciences at the University of Wisconsin-Madison.
Summary:
This paper presents a robotic agent that tells a story and monitors the user's behaviors, emotions and mental states to best keep the user's attention on the story.  Recently, studies have been using electroencephalography (EEG) signals to measure alertness and attention.  This new technology is commercially available, making brain-computer interfaces (BCI) possible.  The robotic agent uses this new advancement, in the form of large headphones, to monitor the FP1 region of the cortex, which is thought to control learning, mental states and concentration.  When the user's attention on the story begins to decrease, the robotic agent changes its tone, engages in gestures and uses other techniques known to regain a user's attention.  These techniques have been thoroughly studied, and many long-standing educational theories have been built on them.  A rough sketch of this adaptive idea is given below.
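This is not the authors' implementation, just a sketch of the adaptive idea as a simple polling loop: read an attention estimate (the kind an EEG headset might provide) and fire an attention-regaining cue whenever it drops below a threshold.  read_attention() and perform_cue() are hypothetical stand-ins.

import random
import time

def read_attention():
    # Stand-in for an EEG-derived attention score between 0 and 1.
    return random.random()

def perform_cue(cue):
    print("Agent cue: " + cue)

CUES = ["raise voice", "lean toward listener", "pause and gesture"]

def storytelling_loop(duration_s=5, threshold=0.4):
    start = time.time()
    while time.time() - start < duration_s:
        if read_attention() < threshold:
            perform_cue(random.choice(CUES))  # try to regain the listener's attention
        time.sleep(0.5)  # poll the headset twice per second

if __name__ == "__main__":
    storytelling_loop()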

Related work not referenced in the paper:
  • On the Use of Electrooculogram for Efficient Human Computer Interfaces
    • Explains how different types of BCI can be used to help physically disabled people interact with computers.  It mainly describes the use of the electrooculogram (EOG), which follows eye movement, but explains how EEG would be a more reliable way of BCI interaction.
  • A hybrid platform based on EOG and EEG signals to restore communication for patients afflicted with progressive motor neuron diseases
    • The title is so long it pretty much speaks for itself.  It focuses on the use of BCI tools to help physically impaired people interact with specific devices.  It does not consider their use for monitoring a user's attention like the robotic agent does.
  • Brain–Computer Interfaces for Multimodal Interaction: A Survey and Principles
    • This paper is very similar to Pay Attention! and talks about how EEG measurements can be used to help people suffering from attention deficit hyperactivity disorder, or non-disabled people, stay focused on a single task (or, in our case, a story).  It does not actually implement this idea, though, and stays in the realm of potential uses for this new technology.  No real experiments were done as in the paper I am writing about.
  • Brainput: Enhancing Interactive Systems with Streaming fNIRS Brain Input
    • This paper sees the use of BCI devices as a new way of interacting with computers and other devices to maximize a person's productivity by preventing them from being mentally overwhelmed with tasks, and not as a learning aid. 
  • Adaptive Brain Interfaces
    • Like many of the papers involving BCI devices, this paper is also about how BCI can help physically disabled persons interact as they never could before.
  • Towards passive brain–computer interfaces: applying brain–computer interface technology to human–machine systems in general
    • Again, another title so long it describes its content too well.  It explains the development of BCI and where the authors think it will be in the future.  It gives a few examples of BCI devices, but none using it as a way of keeping a user's attention like the robotic agent does.
  • Electroencephalogram-Based Control of an Electric Wheelchair
    • In the beginning of BCI research the focus was on aiding the disabled, and this paper is one of them, only it focuses on controlling a wheelchair instead of a computer like the other papers I have been finding.  Like the other papers, it does not consider BCI devices as a way of keeping a user's attention either.
  • Combining Eye Gaze Input With a Brain–Computer Interface for Touchless Human–Computer Interaction
    • This paper again is focused on the idea of being able to control a computer pointer with nothing but your gaze and thoughts. There are many papers on BCI with this same idea, but none of them that I have found consider its use for teaching by keeping the user's attention.
  • Using a Low-Cost Electroencephalograph for Task Classification in HCI Research
    • They had three different tasks that the user would perform, and the EEG device would differentiate which task they were performing from stimulation in the brain.  It can measure a user's mental state, but does not measure their attention or concentration.
  •  Towards Ambulatory Brain-Computer Interfaces: A Pilot Study with P300 Signals
    • When a user is in motion, the EEG signal is reduced significantly.  This paper proposes a solution to this problem by measuring a specific brain signal called P300.  The authors' idea of a potential use for this is mainly entertainment purposes, not learning, which is the focus of the robotic agent in the paper I am reading. 
Evaluation:
There were two hypotheses the authors wanted to evaluate.  The first was that the educational attention grabbers performed by the robotic agent, triggered when EEG measurements of the user's attention drop, would raise their attention and improve their learning performance.  The second was that the robotic agent's engagements with the user when their attention decreases would also help motivate them and increase their rapport with the agent.  To test these hypotheses, participants were placed in a room with a robotic agent that begins with a lesson on the zodiac signs followed by a long story.  Then objective questions about the zodiac signs were asked, followed by questions about the story.  During the lesson on zodiac symbols no attention-grabbing cues were used, so it was as if the participants were simply listening to an audio tape.  This part was meant to be a buffer to draw the participant's attention away from the real test of the long story.  There were three groups that each listened to the story in a distinct way: the first with no attention cues, the second with randomly timed cues and the third with adaptive cues triggered when the user was losing focus.

The results from the questions backed their first hypothesis.  Subjective observations of the participants after their interaction with the robotic agent showed interesting results.  Females showed a higher sense of motivation and rapport with the agent, confirming the second hypothesis, but the male participants had the exact opposite reaction, which is not what the authors expected.  The authors considered this a possible result of the appearance of the robot being small and having a child's voice, which could be easier for women to connect with than men.

Discussion:
The objective data gathered showed that the adaptive robotic agent improved the learning of the participants by a significant margin.  The idea of using EEG measurements to track attention and concentration is not a completely novel one, but I could not find any other paper where the authors use a human-like robot to act as a teacher.  This was the only paper I found that had actual observed data of a BCI device positively impacting a user's ability to retain new subjects they have just learned.

Tuesday, September 4, 2012

Paper Reading #3: HoloDesk: Direct 3D Interactions with a Situated See-Through Display


HoloDesk: Direct 3D Interactions with a Situated See-Through Display

Otmar Hilliges (1), David Kim (1,2), Shahram Izadi (1), Malte Weiss (1,3), Andrew D. Wilson (4)

1 Microsoft Research, 7 JJ Thomson Ave, Cambridge, UK
2 Culture Lab, Newcastle University, Newcastle, UK
3 RWTH Aachen University, 52056 Aachen, Germany
4 Microsoft Research, One Microsoft Way, Redmond, WA
{otmarh,b-davidk,shahrami,awilson}@microsoft.com, weiss@cs.rwth-aachen.de

Author Bios:
Otmar Hilliges
  • PhD in Computer Science from Ludwig-Maximilians Universität München / LMU Munich
David Kim
  • Part of Microsoft Research in Cambridge, UK
Shahram Izadi
  • Research scientist at Microsoft Research Cambridge. 
  • Xerox PARC before that.
  • PhD with Tom Rodden and Yvonne Rogers working on the EQUATOR project.
Malte Weiss
  • PhD student at the Media Computing Group of RWTH Aachen University.
Andrew D. Wilson


Summary:
A user can interact with 3D virtual objects that act like physical objects by pushing, scooping up and even grasping the virtual objects within the scene of the HoloDesk.  The creators made manipulating the virtual objects as intuitive as possible by letting the user grasp objects and turn and rotate them.  As shown in the pictures below, physical objects can also be used to interact with the virtual objects.  Of course, it is limited by the fact that the virtual objects cannot act back on the physical objects; they can only react as a physical object would.  Physical objects even cast virtual shadows on virtual objects that are below them.
It consists of an interaction volume where the virtual objects appear to be when looking through the transparent glass beamsplitter right above the volume.  An RGB webcam measures the orientation and position of the user's head so that the 3D volume projected from the LCD is displayed such that, when the user looks through the beamsplitter, it is as if the projection is under it.  A Kinect is used to track the user's hands and other physical objects that may interact with the virtual objects in the volume.  The RGB webcam continuously updates where the user's head is and refreshes the projection to keep up the illusion of the virtual 3D volume.  A toy sketch of this viewpoint-dependent rendering idea is given below.
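This is not HoloDesk's code, just a toy sketch of the viewpoint-dependent rendering idea: for the illusion to hold, each virtual point must be drawn where the line from the tracked eye to that point crosses the display plane, so the drawing position changes every time the head moves.  The coordinate frame and the z = 0 display plane are my own assumptions.

def project_to_display(eye, point):
    # Intersect the line from the eye to the virtual point with the plane z = 0,
    # which stands in for the display/beamsplitter surface.
    ex, ey, ez = eye
    px, py, pz = point
    if ez == pz:
        raise ValueError("eye and point are parallel to the display plane")
    t = ez / (ez - pz)  # parameter where the line reaches z = 0
    return (ex + t * (px - ex), ey + t * (py - ey))

# As the tracked head moves, the same virtual cube corner must be drawn at a
# different spot on the display, which is what keeps the illusion stable.
cube_corner = (0.0, 0.0, -0.1)  # 10 cm "below" the display surface
print(project_to_display((0.0, 0.3, 0.5), cube_corner))
print(project_to_display((0.2, 0.3, 0.5), cube_corner))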

Related work not referenced in the paper:
  • Calibration Requirements and Procedures for a Monitor-Based Augmented Reality System
    • Talks about the mathematics needed in order to successfully display an augmented reality such as the GPS coordinates and orientations of users and virtual objects. It does not use a webcam to judge orientation or a beamsplitter to give the illusion of a 3D space like the HoloDesk does.
  •  Face to Face Collaborative AR on Mobile Phones
    • Building on the massive amount of augmented reality already being used in phones, they went a step further and added a face-to-face aspect.  When connected to another person's phone, they can see each other as if the other player were on the other side of a table tennis game.  This was limited to a 2D representation since it was only using a camera phone, whereas the HoloDesk is fully a 3D experience.
  • Table Top Augmented Reality System for Conceptual Design and Prototyping
    • This is very similar to the concept of the HoloDesk, but it was implemented in a very different way.  A large LCD screen is lifted a couple of inches off a table and a person reaches under it, where a camera displays their hands on the LCD screen along with the 3D environment to manipulate.  You cannot directly see your hands, so it is not as natural looking as the HoloDesk.  The virtual objects also do not act as they would if they were real, as they attempt to do in HoloDesk.
  • The Office of the Future: A Unified Approach to Image-Based Modeling and Spatially Immersive Displays
    • The office of the future uses magnetism, instead of a camera, to track a user's head orientation and project a seemingly 3D image on any surface, and it does not allow direct manipulation of virtual objects like HoloDesk does.
  • Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces
    • LightSpace is a small room with projectors and a depth camera suspended on the ceiling. The depth camera follows a person's movements and registers gestures to interact with the walls and table. It does not display an interactive 3D environment, but it does project 2D images into the user's hand or the floor.
  • MirageTable: Freehand Interaction on a Projected Augmented Reality Tabletop 
    • The MirageTable is very similar to the HoloDesk in that a camera follows the user's head and virtual objects can be manipulated with bare hands. However, objects cannot be grasped, and it uses a projector instead of an LCD screen and beamsplitter.
  • Vision-based 3D Finger Interactions for Mixed Reality Games with Physics Simulation
    • Uses a laptop and a 3D camera to interact with virtual objects displayed on the laptop's screen. The only way to interact with the objects is by selecting them with a fingertip, at which point a virtual tether links the object to the finger. There is no 3D volume, and the interactions are not as intuitive as the HoloDesk's.
  • Interactions in the Air: Adding Further Depth to Interactive Tabletops
    • Uses a projector under the table to display virtual objects that can be grabbed by pinching the fingers above the table. Virtual objects cannot be picked up, and the display is 2D, so it does not require head tracking.
  • Multimodal Interaction in an Augmented Reality Scenario
    • A headset and glasses create an augmented reality in which a depth camera identifies objects and follows the user's fingers, selecting menu items and physical objects by projecting a line along a pointing finger. There are no virtual objects to manipulate, and it is a mobile device, unlike the HoloDesk.
  • Simulating Educational Physical Experiments in Augmented Reality
    • A head-mounted unit creates an augmented reality through glasses. A pen fitted with white tracking balls, followed by a depth camera, can create virtual objects and give them properties by drawing on a pad that is tracked the same way. All virtual interaction goes through the pen and pad, whereas the HoloDesk does not use any tools for interaction.
Evaluation:
There was an informal and a formal evaluation. The informal evaluation simply had hundreds of users play with the system without any prior instruction or set objective. All observations were qualitative and subjective, depending on the observer. The formal evaluation used a simple task to measure how accurately users could reach where the virtual objects appeared to be under three display types: the standard setup (DHD), the standard setup with stereo output (SHD), and Nvidia 3D Vision LCD shutter glasses (IHD). The task started with a red cube that, when touched, would randomly spawn another cube, and the system timed how long it took the user to touch the new cube. When the second cube was touched, another appeared waiting to be touched. The results are displayed below.
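For concreteness, this is roughly how I imagine the cube-touching trials being driven; it is my own reconstruction, not the authors' code. wait_for_touch is a hypothetical placeholder that blocks until the fingertip reaches the target under whichever display condition (DHD, SHD, or IHD) is being tested, and the volume bounds are made up.

import random, time

def run_touch_task(wait_for_touch, n_trials=20,
                   volume=((-0.2, 0.2), (0.0, 0.2), (-0.15, 0.15))):
    """Spawn cubes at random positions in the interaction volume and record
    how long the user takes to touch each one."""
    times = []
    for _ in range(n_trials):
        target = tuple(random.uniform(lo, hi) for lo, hi in volume)  # next cube position
        start = time.time()
        wait_for_touch(target)             # blocks until the new cube is touched
        times.append(time.time() - start)  # time-to-touch under the current display type
    return times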

Discussion:
I found the idea of the HoloDesk not to be very novel, but how they went about creating it absolutely was. There were many other augmented reality tables, but none had interaction with the virtual objects as intuitive as the HoloDesk's. I do not think the evaluation fully tested every aspect of the device, but they got very good reviews from the people who participated in the informal evaluation. I think it is a step up from all the other augmented reality tables out there.






Monday, September 3, 2012

Paper Reading #2: Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects


Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects

Munehiko Sato (1,2), Ivan Poupyrev (1), Chris Harrison (1,3)

1 Disney Research Pittsburgh, 4720 Forbes Avenue, Pittsburgh, PA 15213 USA
2 Graduate School of Engineering, The University of Tokyo, Hongo 7-3-1, Tokyo 113-8656, Japan
3 HCI Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 USA
{munehiko.sato, ivan.poupyrev}@disneyresearch.com, chris.harrison@cs.cmu.edu

Author Bios:
Munehiko Sato
Ivan Poupyrev
  • Senior research scientist at Walt Disney Research.
  • Worked as a researcher at Sony Computer Science Laboratories and the Advanced Telecommunication Research Institute International.
  • Completed his Ph.D. at Hiroshima University, Japan.
Chris Harrison
Summary:
Their Touché sensing circuit uses their newly developed Swept Frequency Capacitive Sensing technique to recognize multiple types of touch from the human body. The device can be attached to any conductive object or material, and conductive electrodes can be retrofitted to any non-conductive object where touch sensitivity would be useful. The human body alters small electrical signals, and Touché can monitor these changes. The main novel idea is to measure the electrical alterations from human contact at many different frequencies in a "sweeping" fashion. This is a new concept because it was not feasible without the small and powerful processors that are now available. By taking data points at many frequencies, the sensitivity of the touch sensor is greatly improved. In the figure to the right, picture (a) shows wrist bands that sense hand gestures in order to control a phone. Picture (b) shows a door knob sensitive to different types of touch. Picture (c) shows water that can sense how much of a person's body is in contact with the water. Picture (d) shows a smart phone with a casing that is capable of registering exactly how a person is holding the phone and can respond accordingly. These are only a few of the possible uses for Touché. It is very simple to install in a device: only a single wire needs to be in contact with the conductive surface for it to work properly, and it is relatively cheap to build.
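As a rough illustration of the swept-frequency idea (my own sketch, not Disney's implementation): the instrumented object is excited at many frequencies, the response at each frequency forms a profile, and a live profile is matched against profiles recorded for known gestures. measure_response stands in for the actual Touché sensing circuit, and the frequency range and gesture names are placeholders.

def measure_response(frequency_hz):
    """Placeholder: return the sensed amplitude at one excitation frequency
    while the user touches (or does not touch) the instrumented object."""
    return 0.0

def capture_profile(freqs=range(1_000, 3_500_000, 10_000)):
    """Sweep the excitation frequency and collect one measurement per step."""
    return [measure_response(f) for f in freqs]

def classify(profile, templates):
    """Match a live profile against per-gesture template profiles
    (e.g. 'no touch', 'one finger', 'pinch', 'full grasp') by smallest
    squared difference; the real system trains a proper classifier."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda gesture: distance(profile, templates[gesture]))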

Related work not referenced in the paper:
  • ReachMedia: On-the-move interaction with everyday objects 
    • This is a wrist-only device that uses radio waves to detect objects the person is holding. The object requires an RFID tag for the wrist device to detect it and give the user helpful information as they pick it up. Touché is not meant for this purpose and senses electrical changes through human touch rather than radio waves.
  •  Enabling mobile micro-interactions with physiological computing
    • Has an arm band that registers muscle movements in the hand to control a computer or other device. It also uses "Skinput," a touch UI projected onto the skin. No electrical signals are sensed the way they are with Touché.
  • Multimodal Human Computer Interaction: A Survey
    • Surveys possible touch-sensitive and gesture-sensing ways for objects like computers to collect commands. It does not say much about how to actually collect the data, as the Touché paper does.
  • Evaluating Capacitive Touch Input on Clothes
    • Uses micro buttons sewn into fabrics in a visually appealing way, with wires that can stretch so the fabric stays flexible. The creators of Touché want to get away from physical buttons and make the objects themselves responsive.
  • gRmobile: A Framework for Touch and Accelerometer Gesture Recognition for Mobile Games
    • Uses an accelerometer and a camera to visually register gestures and orientation. This is a completely different way of collecting gestures, and it is not a very novel one.
  • Your Noise is My Command: Sensing Gestures Using the Body as an Antenna
    • I found this one very interesting because it uses the EM fields already generated by the wiring in a home to figure out where and what a person is touching, such as a wall. This is similar to but not the same as Touché, since it relies on ambient EM noise rather than measuring the electrical response of the body across a frequency sweep.
  •  Sensing Foot Gestures from the Pocket 
    • Attempts to provide silent commands to a phone in your pocket by sensing the orientation of the phone and the flexing of the toes. It uses buttons inside a shoe to register double taps or scrolling motions. I see many problems with this design; if you are running, there is no telling what the sensors are going to accidentally register.
  • PACER: Fine-grained Interactive Paper via Camera-touch Hybrid Gestures on a Cell Phone
    • Uses a camera phone to visually map a physical piece of paper to a virtual copy on the phone. You can then do things like search for a term simply by selecting it on the virtual copy on the phone. This is a completely different way of interacting with objects from Touché.
  •  On Body Capacitive Sensing for a Simple Touchless User Interface
    • Treats the human body as one plate of a capacitor and senses gestures without the user actually touching anything. The motivation came from doctors not wanting to touch anything for fear of contamination. Where this senses capacitance at a single frequency, Touché measures the body's electrical response across a sweep of frequencies.
  • PocketTouch: Through-Fabric Capacitive Touch Input
    • Uses capacitance just like the touch screen on a phone, but it can be embedded into many different types of fabric for eyes-free use of your phone while it is in your pocket. This senses capacitance at a single frequency, instead of sweeping across frequencies like Touché.
Evaluation:
They evaluated how accurately Touché registered different types of gestures with five different objects and materials: a doorknob, a table, a phone case, on-body sensing from wrist bands, and water. They used two groups of 12 participants each. The groups were shown pictures of the possible gestures they could make with each object and were then told to make these gestures one at a time. What Touché actually registered for each attempted gesture was kept from the participants and experimenters until the end. In the first group, experimenters used data gathered from each participant at the beginning of their session to fine-tune the Touché devices to that participant's specific touch. The second group was "walk-up," with no prior data collection before their testing. The graphs below show their findings. All of their data collection was quantitative and objective. More data was collected for the "walk-up" group because it was done at a later date.
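A minimal sketch of how those two conditions differ as a classification experiment, assuming gesture profiles have already been recorded; this is my reading of the setup, not the authors' analysis code, and the SVM choice and data layout are illustrative.

from sklearn.svm import SVC

def per_user_accuracy(train_profiles, train_labels, test_profiles, test_labels):
    """Per-user condition: train and test on profiles from the same participant."""
    clf = SVC().fit(train_profiles, train_labels)
    return clf.score(test_profiles, test_labels)

def walk_up_accuracy(other_profiles, other_labels, new_profiles, new_labels):
    """Walk-up condition: train only on other participants, test on the unseen user."""
    clf = SVC().fit(other_profiles, other_labels)
    return clf.score(new_profiles, new_labels)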


Discussion:
I found this paper to be very interesting, and I can easily see this technology in our immediate future. The Touché device can potentially be attached to any object. It is so small and simple in its design that it can be added to everyday items cheaply without anyone even being aware it is there. No one will notice a normal-looking, keyless door that will only open when you use the correct combination of hand gestures.