Homework 1: Distilling Design Implications
Due date: See class schedule

Note: This is a two-person assignment. Find a partner from the class who is NOT on your project team. You will complete the homework together. Both your names must appear on the assignment, but only one of you needs to submit it (in PDF format, via Canvas). Start each A-I table on a new page. Both of you will receive the same grade for the assignment.

Overview: Two of the important early steps in evidence-based design are to describe the user(s) and to define the context in which the users are situated. Then, crucially, it is important to distill design implications from the context and the user description. These design implications are constraints or requirements that fall out of the context/user findings. Subsequently, once those design implications are identified, the task is more fully detailed and documented, and as you head toward an actual design, the design implications need to be instantiated in design implementations.

In this assignment, you will get practice in identifying user attributes, and then distilling implications from those user attributes. Your task is as follows: for each of the following combinations of user type and context, create an Attribute-Implication table. The A-I table should include (at least) the following categories of user attributes: perception; cognition; (physical) movement; motivations; social attachments. You can add any other categories you see as potentially instructive.

For each category, identify at least three (3) attributes, aspects, or descriptions that are applicable for that user class, in that context.

Then, complete each table row with at least one design implication. These design implications should be general enough that they cover a broad range of possible actual implementations, but specific enough that the design implication could be turned into a testable/verifiable design requirement, as part of a contract.
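While you draft your tables, it can help to keep them in a machine-checkable form. Below is an optional, illustrative Python sketch (the category names come from the example table further down; the attribute and implication strings are placeholders, not required content) that flags categories still short of the assignment's minimums:

```python
# Sketch of an Attribute-Implication (A-I) table as a Python dict.
# Categories map to lists of (attribute, [implications]) pairs.
# The entries here are illustrative placeholders only.
ai_table = {
    "Perception": [
        ("Restricted field of view (mask)", ["Do not require peripheral vision"]),
        ("Muffled/diminished hearing", ["Avoid depending on audio signals"]),
    ],
    "Cognition": [
        ("Reduced memory (cold water)", ["Use recognition rather than recall"]),
    ],
}

def check_table(table, min_attributes=3):
    """Return a list of problems: categories with fewer than
    `min_attributes` attributes, or attributes lacking an implication."""
    problems = []
    for category, rows in table.items():
        if len(rows) < min_attributes:
            problems.append(f"{category}: only {len(rows)} attribute(s)")
        for attribute, implications in rows:
            if not implications:
                problems.append(f"{category} / {attribute}: no implication")
    return problems

print(check_table(ai_table))
```

Running the check on the sketch above reports that both example categories still need more attributes, which is exactly the reminder you want while filling in a table.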

Consider the following example of a partially-completed Attribute-Implication table for the user-context combination of "adult underwater (scuba diving)". Note that a fully completed table would have many attributes for each category, and often many implications for each attribute.

A-I Table: adult underwater (scuba diving)

Category    | Attribute                                                            | Implication
------------|----------------------------------------------------------------------|----------------------------------------------
Perception  | Restricted field of view (mask)                                      | Do not require peripheral vision
Perception  | Muffled/diminished hearing                                           | Avoid depending on audio signals
Cognition   | Reduced memory (cold water)                                          | Use recognition rather than recall
Cognition   | Possible distraction (looking at fish)                               | Assess and manage attention
Movement    | Limited hand dexterity (gloves; cold)                                | Make controls operable with gloves
Movement    | Variable orientation (hard to remain stationary, neutrally buoyant)  | Move system with user
Motivations | Anxious or nervous (under water!)                                    | Make processes simple and clear
Social      | Dive buddy present                                                   | Design may involve two individuals
Social      | Dive buddy present                                                   | Enable user time to check on buddy regularly

User-Context Combinations:

  1. Teacher in high school classroom
  2. Police officer inside patrol car (driver seat)
  3. Adult commuter on bicycle
  4. Child wheelchair user at a stadium
  5. Older adult (65+ years old) in commercial/restaurant kitchen





Homework 2: Two-person Team Mini-Design: Auditory User Interface
Due date: See class schedule

Being able to come up with a creative and effective design for a system is part of the required skill set for HCI professionals. This homework will allow you to explore that area, and demonstrate your individual design skills. This clearly requires creativity, but it also requires an understanding of the principles and guidelines for usability that have been covered in the course. Not everyone is bursting with ideas. Not everyone feels he or she has drawing, aesthetic, or so-called creativity skills. If you think you are lacking in these areas, well, they can only get better with practice! Here's your chance to practice in a situation where you have everything to gain and nearly nothing to lose. Applying the systematic design process and usability principles you now know can overcome a lot of supposed "lack of creativity". You just might surprise yourself!!

You and your teammate will design, lay out, and mock up an auditory user interface. You will identify a context, then determine a class of users, and a task/need for those users. This will all be in the realm of an auditory interaction. Then you will list design constraints and considerations, assess the types of information that need to be communicated between the user and the system...then lay out the interaction model. You will want to use storyboards, flowcharts, or other methods of representing the interaction. Finally, you will actually prototype the interaction.
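One lightweight way to think about an interaction model before you storyboard it is as a small dialog state machine. Here is an optional Python sketch; the states, prompts, and user utterances are all invented placeholders, not a required design:

```python
# Minimal sketch of an auditory-UI dialog flow as a state machine.
# All states, prompts, and inputs are hypothetical placeholders.
FLOW = {
    "greeting": {"prompt": "Welcome. Say 'weather' or 'news'.",
                 "next": {"weather": "weather", "news": "news"}},
    "weather":  {"prompt": "Today is sunny. Anything else?",
                 "next": {"no": "goodbye"}},
    "news":     {"prompt": "Here are the headlines. Anything else?",
                 "next": {"no": "goodbye"}},
    "goodbye":  {"prompt": "Goodbye!", "next": {}},
}

def run_dialog(user_inputs, start="greeting"):
    """Walk the flow with scripted user inputs; return prompts spoken."""
    spoken, state, inputs = [], start, iter(user_inputs)
    while state is not None:
        node = FLOW[state]
        spoken.append(node["prompt"])
        if not node["next"]:          # terminal state: nothing to hear next
            break
        heard = next(inputs, None)
        state = node["next"].get(heard)  # unrecognized input ends the demo
    return spoken

print(run_dialog(["weather", "no"]))
```

Walking through each path of such a flow by hand is a quick sanity check that every state has a prompt and an exit before you invest in recording audio.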

You will decide exactly what "auditory user interface" means. It can involve speech-based or non-speech input (by the user), and speech and/or non-speech output (from the system).

Your (prototype) system design might be simple, like an Alexa Skill. Or it might be more complex/sophisticated, like the telephone-style Interactive Voice Response (IVR) systems that airlines deploy. Or it could be something along the lines of an audio-based agent, like the one in the movie Her...or, as I have mentioned, "Siri's Smarter Sister".

You can use whatever tools you wish to prototype. You might use PowerPoint with audio clips. Or you can use an audio-based prototyping tool. Or you can build an actual interface using the Alexa (or other) system-building toolkits. There are lots of ways to quickly generate spoken audio (text-to-speech, TTS), and lots of audio clips around. Be sure to use all your creative powers, and do not be limited to boring old TTS. Below you will find just a few links to some of the tools you might consider.
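For quick TTS clips on a Mac, one option is the built-in `say` command-line tool (its `-v` flag picks a voice and `-o` writes audio to a file). The helper below is a small, hedged sketch: it only builds the command, and actually invokes `say` only if the tool is present (the voice name "Samantha" is just an example):

```python
import shutil
import subprocess

def build_say_command(text, voice=None, outfile=None):
    """Build an argv list for macOS's built-in `say` TTS command.
    `-v` selects a voice; `-o` writes audio to a file instead of speaking."""
    cmd = ["say"]
    if voice:
        cmd += ["-v", voice]
    if outfile:
        cmd += ["-o", outfile]
    cmd.append(text)
    return cmd

cmd = build_say_command("Hello, diver!", voice="Samantha", outfile="hello.aiff")
print(cmd)
if shutil.which("say"):  # only actually run on a Mac
    subprocess.run(cmd, check=True)
```

Batch-generating your prompts this way keeps them easy to regenerate whenever the script changes.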

What to hand in:
1. Your team will hand in one report (just one of you needs to upload it) that describes the context, user, tasks, requirements, information architecture, and interaction flow. Add any information that helps us understand what you have designed. Also talk about how you have prototyped the interaction. What tools did you use? What went well? What were the challenges?
2. You must also capture the interaction in a way that lets us evaluate the prototype. This may be an audio recording, or a demo video of the system in action. It would also be very helpful to have access to the actual prototype, so we can play with/interact with it by ourselves. This may be very straightforward in some cases (e.g., an Alexa Skill); or it might require additional setup or instructions in other cases.

Some Resources (admittedly, more focused on Mac tools):

Some others:


Apple's Dev tool called Repeat After Me:

Note that on macOS, you can use TextEdit to type in your text, then use the built-in functionality under Edit > Speech > Start Speaking. You can also use the built-in Service called Add to iTunes as a Spoken Track. You can embed Apple's speech markup tags into the text in TextEdit, then have the system speak that out.

Copy the following into TextEdit and have the system speak it:


now try:

[[char LTRL]] cat [[char NORM]]

now try:

[[inpt PHON]] ~dOWnt ~1EHvAXr ~d1UW +DAEt _AXg1EHn! [[inpt TEXT]]

now this:

[[inpt TUNE]]
s {D 250; P 212.0:0 212.0:35 212.0:54 212.0:85 212.0:96}
1AA {D 190; P 232.0:0 218.0:35 222.0:80}
r {D 80; P 216.0:0}
IY {D 150; P 177.0:0 162.0:29 162.0:68 162.0:77 162.0:90 162.0:100}
, {D 20}
d {D 60; P 162.0:0 162.0:36 162.0:57 160.0:93}
1EY {D 350; P 162.0:0 150.0:27 150.0:41 150.0:70}
v {D 30; P 150.0:0 150.0:29 150.0:52 150.0:67 150.0:90 150.0:100}
, {D 510}
2AY {D 140; P 173.0:0 196.0:45}
k {D 100; P 196.0:0 196.0:95}
AE {D 180; P 198.0:0 232.0:56}
n {D 80; P 232.0:0}
t {D 20; P 232.0:0 232.0:38}
d {D 40; P 232.0:0 232.0:85 208.0:92}
1UW {D 180; P 210.0:0 232.0:32 253.0:60 245.0:76}
D {D 60; P 245.0:0 186.0:92}
AE {D 240; P 186.0:0 168.0:37}
t {D 30; P 155.0:0 155.0:60 155.0:93}
r {D 70; P 155.0:0 149.0:53}
1AY {D 180; P 157.0:0 137.0:61}
t {D 40; P 128.0:0 132.2:56 135.0:94}
n {D 80; P 129.0:0 153.0:31 147.0:94}
1AW {D 340; P 147.0:0 140.8:22 169.2:88 148.0:100}
. {D 780}
[[inpt TEXT]]
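The same embedded-command markup can also be fed to the `say` tool from Terminal, since `say -f FILE` speaks the contents of a file. The Python sketch below (an optional convenience, using the phoneme example from above as a placeholder) writes the markup to a temporary file and builds that command, running it only if `say` is available:

```python
import shutil
import subprocess
import tempfile

# Apple's synthesizer accepts embedded commands such as [[inpt PHON]]
# inside plain text; `say -f FILE` speaks a file's contents. The markup
# string is a short placeholder; paste in any of the examples above.
markup = "[[inpt PHON]] ~dOWnt ~1EHvAXr ~d1UW +DAEt _AXg1EHn! [[inpt TEXT]]"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(markup)
    path = f.name

cmd = ["say", "-f", path]
print(cmd)
if shutil.which("say"):  # only runs on a Mac
    subprocess.run(cmd, check=True)
```

Keeping each markup experiment in its own text file makes it easy to tweak the TUNE parameters and replay the result.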

Microsoft tools, for both speech creation and speech recognition: