Georgia Tech Sonification Lab
School of Psychology - Georgia Institute of Technology
Director

Dr. Bruce N. Walker
Professor in the School of Psychology and the School of Interactive Computing at Georgia Tech, and founder of the Sonification Lab. Dr. Walker completed his Ph.D. at Rice University in Human Factors Psychology and Human-Computer Interaction in 2001. He is a Core Faculty Member in the GVU Center, a member of the Center for Music Technology (GTCMT) and the Center for Biologically Inspired Design (CBID), and a Project Director in the WirelessRERC. Dr. Walker is a Past-President of the International Community for Auditory Display (ICAD).
     [bruce.walker@psych.gatech.edu]    [Web page]


Postdoctoral Researchers

Postdoc Position Available
The Sonification Lab is accepting applications for a paid postdoctoral researcher. Candidates must hold (or be ABD for) a PhD in Psychology, Computing, HCI, or another field that relates directly to the work in the Sonification Lab. Projects include, but are not limited to: fMRI studies of auditory perception, driving studies, in-vehicle assistive technology, auditory graphs and sonification, accessible museums and aquaria, auditory user interfaces, wayfinding systems, STEM education for students with disabilities, and computer systems for audio-based assistive technologies.



PhD Students

Brandon Biggs
Brandon is a PhD student in the Human Centered Computing (HCC) program in the School of Interactive Computing. He completed an MS at OCAD University in Canada. His research interests focus on accessible maps of all kinds, as well as assistive technology more broadly. He is also working to make tools to make education as inclusive and user friendly as possible.
     [bbiggs31 [at] gatech.edu]    [Web page]

Stanley Cantrell
Stanley is a PhD student in the Human Centered Computing (HCC) program in the School of Interactive Computing. His research interests are in the areas of Human-Computer Interaction and Assistive Technologies. He is particularly interested in researching novel interfaces and artifacts that facilitate communication and make information more accessible and usable for children, the elderly, and disadvantaged individuals.
     [cantrell [at] gatech.edu]    [www.stanleyjcantrell.com]

Nadia Fereydooni
Nadia Fereydooni is a PhD student in the HCC program in the School of Interactive Computing. She completed her BS at the University of New Hampshire. Her research focuses on the use of Virtual Reality as a non-driving-related task in automated vehicles, providing rich, immersive experiences while maintaining passengers' trust and safety.
     [nadia.fereydooni [at] gatech.edu]    [Web page]

Emily Parcell
Emily is a PhD student in the School of Psychology. She completed her MS at Embry Riddle Aeronautical University, and her work focuses on driving, distracted driving, HMIs, and education.
     [ekparcell [at] gatech.edu]    [Web page]

Sidney Scott-Sharoni
Sidney Scott-Sharoni is a PhD student in the School of Psychology. She completed her BS at Old Dominion, then worked as a research scientist at NADS at the University of Iowa. Now a PhD student in our lab, Sidney studies advanced in-vehicle displays and human-automation interaction.
     [sidney.scott.sharoni [at] gatech.edu]    [Web page]


Lab Alumni

Affiliated Faculty

Dr. Adrian Houtsma
Formerly Adjunct Professor in the School of Psychology. Dr. Houtsma completed his Ph.D. at MIT, and has had a distinguished career as a professor and research lab director in the US and The Netherlands. He helped supervise graduate students and helped lead our Bone Conduction Audio research projects.
     [adrianus.houtsma [at] psych.gatech.edu]    [Web page]

Postdoc Alumni

Dr. Andrew Wallace
Dr. Andrew Wallace received his PhD in Cognitive Science from Brown University in 2011. His dissertation was entitled, The Auditory Representation of Time and Frequency in Vowel Quality Perception. Andy worked on studies of neural speech representation. He is now a Cognitive Scientist in Los Angeles.

PhD Alumni

Dr. Rachel Stuck
Rachel earned her PhD in the School of Psychology in 2020. Her research interests include human-robot interaction, assistive technology and robotics, and trust in technology. Rachel has started her career at Symbio.
     [rachel.stuck [at] gmail.com]    [Web page]

[Dissertation: "Perceived Relational Risk and Perceived Situational Risk: Scale Development"]
[Masters: "Understanding dimensions of trust between older adults and human or robot care providers"]



Zoe Becerra
Zoe earned her MS in the School of Psychology in 2020. She previously earned a Bachelor of Science Degree in Psychology and Mathematics from Morehead State University. Zoe's research interests include vigilance, situation awareness, multimodal displays, and trust in automated driving. Zoe has started her career with Anthem.
     [zbecerra3 [at] gatech.edu]    [Web page]

[Masters: "Measuring the influence of automation on situation awareness in highly automated vehicles"]



Dr. Brittany Holthausen (née Noah)
Brittany earned her PhD in the School of Psychology in 2020. She previously earned Bachelor of Science Degrees in Biomedical Engineering and Psychology from Virginia Commonwealth University. Her research interests include reliability and trust in automation, trust calibration, multi-modal displays, immersive and non-immersive dynamic environments, and automated driving. Brittany has started her career as a Human Systems Engineer at Boeing.
     [brittany.noah [at] gatech.edu]    [Web page]

[Dissertation: "Development and validation of the situational trust scale for automated driving (STS-AD)"]
[Masters: "Understanding automation handoff impacts on workload and trust when mitigated by reliability displays"]



Dr. Brianna Tomlinson
Brianna earned her PhD in the School of Interactive Computing, in the HCC program, in 2020. Her research examined how auditory displays and sonifications (non-speech audio) can support glanceability, and how these displays can provide overviews of complex information systems. She studied these topics in two contexts: weather (in an Accessible Weather App research project) and space (a Solar System Sonification using spatial audio for a planetarium show). Brianna is now a researcher at Oculus.
     [btomlin [at] gatech.edu]    [Web page]

[Dissertation: "Measuring the effect of user experience and engagement on learning using interactive simulations."]



Dr. R. Michael Winters
Mike earned his PhD in the School of Music, in the Music Technology PhD program, in 2020. Mike's primary advisor in Music was Prof. Grace Leslie; Prof. Walker was co-advisor. Mike's research looked at the emotional and neurophysiological effects of music on the listener. Mike also has experience in music composition, sonification, musical robotics, physics, mathematics, perception, cognition, and design. Mike is currently a researcher and consultant.
     [mikewinters [at] gatech.edu]    [Web page]

[Dissertation: "...pending update to Smartech..."]



Dr. Keenan May
Keenan earned his PhD in Engineering Psychology in 2020, having previously graduated from the MS-HCI program in 2014. He studied the ability of humans to multitask using emerging input and output devices while maintaining awareness of their surroundings during safety-critical activities such as cycling and driving. Other interests include assistive technologies for cognitive impairments, physiological sensing of workload and affect for augmented cognition systems, and physiological measurement of engagement for games research or classroom use. He is now a researcher at Microsoft.
     [kmay [at] gatech.edu]    [Web page]

[Dissertation: "Impact of action-object congruency on the integration of auditory and visual stimuli in extended reality"]
[Masters: "Modifying distracting headphone audio to increase situation awareness"]



Dr. Jonathan Schuett
Jonathan earned his PhD in the School of Psychology in 2019. His interests include auditory graphs, auditory perception, multimodal user interfaces, assistive technology, and assessment. His research looked at how to create effective auditory graphs, focusing on context and multiple data series. Jonathan came to GT with a Masters degree from James Madison University. He is now a Human Factors Engineer at Apple.
     [jschuett6 [at] gatech.edu]    [Web page]

[Dissertation: "Measuring the effects of display design and individual differences on the utilization of multi-stream sonifications"]



Dr. Jared Batterman
Jared earned his PhD in the School of Psychology in 2019. His research looked at how to design effective auditory graphs, with an emphasis on conveying error in a measurement, such as an auditory equivalent to error bars. His interests also included auditory perception, cognition, sonification, and auditory interfaces. Jared came to GT with a Masters degree from Villanova. He is now a Senior Usability/Accessibility Engineer at MITRE.
     [jmbatterman [at] gmail.com]    [Web page]

[Dissertation: "Understanding the misunderstanding: Why confidence intervals are poorly understood and evaluating proposed solutions across sensory modalities"]



Dr. Thomas Gable
Thom completed his PhD in Psychology in 2017. His research was centered on improving users' abilities in multitasking situations through the application of multimodal displays. Much of his work was in the driving domain, where he was involved in a diverse set of projects as part of multidisciplinary teams, leading to many publications, including journals, conferences, and several important tech reports related to research in our driving simulator. Thom launched his post-PhD research career at Microsoft.

[Dissertation: "The effect of experience on the use of multimodal displays in a multitasking interaction"]
[Masters: "Applying spindex auditory cues while driving and performing a secondary search task"]



Dr. Jeff Wilson
Jeff Wilson earned his PhD in Computer Science in 2016. He is a Senior Research Scientist in the Interactive Media Technology Center. He received a Bachelor of Science in Computer Science in 1999 and a Master of Science in Computer Science in 2001, both from Georgia Tech. His areas of specialization include graphics, visualization, digital audio, game design, and virtual and augmented reality applications. Some of the projects Jeff has worked on include large-format, projected VR displays, mobile and head-mounted AR applications, auditory interfaces for automotive applications, educational games, and mobile health applications.
     [jeff.wilson [at] imtc.gatech.edu]    [Web page]

[Dissertation: "Push and pull menus for auditory interfaces"]


Dr. Yee Chieh Chew
Yee Chieh earned her PhD in the School of Interactive Computing, in the HCC program, in 2014. Her interests include studying the effect of introducing educational and assistive technology in STEM classes for students with visual impairments. She was also a researcher on the Auditory Graphs project. Yee Chieh is currently a User Experience Researcher at Moveworks.

[Dissertation: "Assessing the use of auditory graphs for middle school mathematics"]



Dr. Carrie Bruce
Carrie earned her PhD in the School of Interactive Computing, in the HCC program, in 2014. She is now a Principal Research Scientist at Georgia Tech. She is also a speech-language pathologist and an assistive technology practitioner. Carrie studies discourse in human-technology interactions, and was a co-PI on the Accessible Aquarium Project.

[Dissertation: "Facilitating participation in adults with and without vision loss by supporting exhibit motivations through real-time descriptive mediation"]



Dr. Julia Olsheski (née DeBlasio)
Julia earned her PhD in the School of Psychology in 2014. Her interests include environmental design and auditory interfaces. Her research looked at how multimodal stimuli (e.g., auditory and visual) are processed. While in the lab, she worked on the IVAT in-vehicle assistive technology project, as well as research for NASA, and was the lead researcher on the Medical Technology project.

[Dissertation: "The role of synesthetic correspondence in intersensory binding: investigating an unrecognized confound in multimodal perception research"]
[Masters: "Documentation in a medical setting with young and older adults"]



Dr. Benjamin K. Davison
Ben completed his PhD in the School of Interactive Computing in the HCC program, in 2013. His dissertation research studied the development, deployment, and usage of auditory display and sonification software for Math education for students with vision loss. He was the lead developer on the Sonification Sandbox project, and a researcher on the Auditory Graphs and Advanced Auditory Menus projects. Ben is now (from 2012) a researcher at Google, but in his spare time continues to collaborate with the Sonification Lab on various projects.

[Dissertation: "Universal graph literacy: understanding how blind and low vision students can satisfy the common core standards with accessible auditory graphs"]



Dr. Myounghoon "Philart" Jeon
Myounghoon (aka "Philart") received his PhD in Psychology from Georgia Tech in 2012. His dissertation was entitled, Effects Of Affective States On Driver Situation Awareness And Adaptive Mitigation Interfaces: Focused On Anger. While at Georgia Tech, Philart's research looked at how emotion and affect play a role in user interfaces and in task performance, especially driving. His interests also included audio design and perception for mobile devices such as cell phones and in-vehicle "infotainment" systems. He worked on the IVAT in-vehicle assistive technology project, on the Advanced Auditory Menus project, and also on the Accessible Aquarium project. Dr. Jeon is now a professor at Virginia Tech, where he directs the Mind Music Machine Lab.

[Dissertation: "Effects of affective states on driver situation awareness and adaptive mitigation interfaces: focused on anger"]
[Masters: "'Spindex' (speech index) enhances menu navigation user experience of touch screen devices in various input gestures: tapping, wheeling, and flicking"]



Dr. Michael A. Nees
Dr. Michael A. Nees received his PhD in Psychology from Georgia Tech in 2009. His dissertation was entitled, Internal Representations of Auditory Frequency: Behavioral Studies of Format and Malleability by Instructions. Following graduation, Dr. Nees taught at Spelman College in Atlanta, and then held a postdoctoral research position in the Georgia Tech Sonification Lab. Among other successes and honors, Mike was awarded the 2010 APA Division 21 George E. Briggs Dissertation Award, for the best dissertation in the field of Applied Experimental/Engineering Psychology. Mike is now (from 2018) an Associate Professor of Psychology at Lafayette College, and directs the Human Factors, Perception, & Cognition Laboratory.

[Dissertation: "Internal representations of auditory frequency: behavioral studies of format and malleability by instructions"]
[Masters: "Data Density and Trend Reversals in Auditory Graphs: Effects on Point Estimation and Trend Identification Tasks"]



Dr. Raymond M. Stanley
Dr. Raymond M. Stanley received his PhD in Psychology from Georgia Tech in 2009. Ray's dissertation was entitled, Measurement and Validation of Bone-Conduction Adjustment Functions in Virtual 3D Audio Displays. While at Georgia Tech, Ray's research centered on perception and psychophysics, especially bone conduction audio and bonephones. He was the lead researcher on the Bone Conduction Audio project. Dr. Stanley held a postdoctoral researcher position with Prof. Arthur Wingfield at Brandeis University, before accepting a position at the MITRE Corporation.

[Dissertation: "Measurement and validation of bone-conduction adjustment functions in virtual 3D audio displays"]
[Masters: "Toward adapting spatial audio displays for use with bone conduction: the cancellation of bone-conducted and air-conducted sound waves"]



Dr. Daniel Smith (LTC Retired, US Army)
Daniel R. Smith (LTC Retired) joined us in the midst of his distinguished and active military career, having earned his Bachelor of Science in Environmental Engineering from the US Military Academy at West Point, and then having been deployed in various places and roles. He completed his MS degree with us, in Engineering Psychology, then completed a tour of teaching at West Point. Dan returned to active deployment, then returned to Georgia Tech to complete his PhD in Industrial/Organizational Psychology. He finished out his Army career as Associate Professor, Program Director, and Senior Editor at West Point. He is now an entrepreneur.

[Masters (2003): "Effects of Training and Context on Human Performance in a Point Estimation Sonification Task"]



Lisa Mauney (née Siebenaler)
Lisa earned her MS in the School of Psychology. Her interests included individual differences in the perception and comprehension of auditory displays, auditory graphs, and sonifications, as well as assistive technology and how people with low vision read. She started in the Lab as an undergraduate, and completed her Undergraduate Senior Thesis in the Lab, before joining us as a grad student. She was part of the Individual Differences and Training in Auditory Displays and Auditory Graphs projects. Lisa has left the lab and taken an HCI research scientist job in the corporate world.

[Masters: "Individual Differences in Cognitive, Musical, and Perceptual Abilities"]



Jeff Lindsay
Jeff earned his MS in the School of Psychology. His interests included auditory perception, perception of space, navigation, and auditory user interfaces. He was a lead researcher in the SWAN (System for Wearable Auditory Navigation) project. Jeff is now a User Experience strategist.

[Masters: "The effect of a simultaneous speech discrimination task on navigation in a virtual environment"]



Vincent Martin
Vincent is a Ph.D. student in Human Centered Computing in the School of Interactive Computing. Much of Vincent's research has been in the area of audio displays and menus, and multimodal input and output. Many of these projects deal with auditory discrimination, including audio graphs and the comprehension of information displayed with sound; examples include a sound-based navigation system for blind people and a sound-based interface for accessing aquariums. Vincent's primary research area is making statistical output from packages such as SPSS and SAS readily accessible to blind students and fellow researchers. His research uses sonification of the visual graphs that represent the statistical output, and he expects it to culminate in a dissertation on that solution.
     [vincent.martin [at] gatech.edu]    [Web page]


Current HCI and Computing Masters Students

Every semester the Sonification Lab has several HCI Masters students, Computer Science Masters students, and other Masters students working in the lab. They work on all of the projects in the lab, with their contributions ranging from programming to running experiments and analyzing data.

Note: The Sonification Lab used to engage MS students in individual projects and independent studies (e.g., CS 8903 or PSYC 8903). However, to expand the number of MS students who can participate in SonLab research, and to better serve our ongoing projects, we now usually engage students through our lab studio course, CS 8803 SRD - Sonification Lab R&D Studio. This course allows about 12-16 students per semester to work on projects in 2-student teams, working closely with one of the SonLab PhD students and Dr. Walker. For that reason, we no longer list individual students here unless they are working on an independent study or are in another special category. We do, most certainly, still invite MS students to work in the lab, either as part of the Studio experience or, on occasion, in individual projects.




MS Alumni

We sometimes have MS students who join our lab in a more formal or in-depth manner, such as military students who complete a full Masters degree in our lab.

Brandon Thomas
Brandon is a decorated combat veteran with one combat tour to Iraq and one to Afghanistan. Brandon earned his B.S. degree in Engineering Psychology with a track focus of Systems Engineering in 2007 from the United States Military Academy. He then completed his Masters of Science in Engineering Psychology here at Georgia Tech, before returning to teach at West Point. He has retired from the Army, and is now an entrepreneur.

[Masters (2017): "Multi-Modal Workload Impacts on Battlefield Situation Awareness"]



Ashley Henry Cauley
Ashley graduated with a BS in Biology from Georgia Tech in 2011, and joined the Sonification Lab as a full-time research associate. She had already been working in the lab as an undergraduate. She then stayed on and completed her MS in HCI. Ashley was involved in many of our research projects, including the Accessible Aquarium, the Aquarium Fugue, and STEM education for visually impaired students.


Historical Listing of Some MS Students

Jason D'Orazio, MS-HCI (Psychology track), Dec 2002
"Issues in Car Navigation"

Daniel Smith, MS (Psychology), May 2003
"Effects of Training and Context on Human Performance in a Point Estimation Sonification Task"

Darren Hough, MS, (Architecture--Industrial Design), May 2003
"Aesthetics and Product Usability"

Justin Godfrey, MS-HCI (Psychology track), Aug 2004
"Development and Evaluation of the Audio Abacus"

Josh Cothran, MS-HCI (Computer Science track), Aug 2004
"Web Application for Controlled Burn Simulation"

Kathy Lau, MS-HCI (Psychology track), Dec 2004
"Tame Study"

Amanda Nance, MS-HCI (Computer Science track), May 2005
"Sonification of Menu Structures"

Kevin Stamper, MS-HCI (Computer Science track), May 2005
"Mobile Audio Designs (MAD) Monkey: A Tool for Sound Design"

Jeffrey Lindsay, MS Psychology, Sept 2005
"The Effect of a Simultaneous Speech Discrimination Task on Navigation in a Virtual Environment"

Britt Caldwell, MS-HCI (Psychology track) May 2006
"Effects of Technology Use on the Doctor-Patient Interaction"

Lisa Mauney, MS Psychology, Aug 2006
"Individual Differences in Cognitive, Musical, and Perceptual Abilities"

Ray Stanley, MS Psychology, Aug 2006
"Adapting Spatial Audio Displays For Use With Bone Conduction: How Bone-conducted Waves Interact With Air-conducted Waves at the Basilar Membrane"

Robert Gray, MS-HCI (Psychology track) Aug 2006
"Audio Task Assistance for Aircraft Maintainers"

Michael Nees, MS Psychology, Jan 2007
"Data Density and Trend Reversals in Auditory Graphs: Effects on Point Estimation and Trend Identification Tasks"

Pavani Yalla, MS-HCI (LCC track) May 2008
"Advanced Auditory Menus"

Anandi Pendse, MS-CS, May 2008
"Bone Conduction Audio Perception"

Vivek Muppalla, MS-CS, May 2008
"SWAN System Research and Development"

Les Smee, MS-HCI (CS track), Dec 2008
"MotoBridge GUI study: An interface study of the State of Georgia's interoperability solution"

Anna (Anya) Kogan, MS-HCI (Psychology track), May 2009
"Auditory Graphs for Education; Spearcons in Dual Tasks"

Siddharth Gupta, MS-HCI (CS track), May 2009
"Advanced Auditory Menus"

Unkyong Lee, MS-CS, May 2009
"Sonification Sandbox: Online Version; Auditory Menus: Mobile Version"

Mary Frances Jones, MS-HCI (Psychology track), August 2009
"In-home evaluation of 'Fuzzy Logic' Sign Language Teaching Toy"

Julia DeBlasio, MS Psychology, August 2009
"Documentation in a Medical Setting with Young and Older Adults"

Stephen Choi, MS-HCI (CS track), May 2010
"The Digitizer Audio Graph: Automatic Auditory Graph Generation Using Computer Vision"

Marc Buigues, MS-CS, May 2010
"Development and Evaluation of Museum Visitor Tracking Software"

Jeff McCloud, MS-Industrial Design, May 2010
In-Vehicle Assistive Technologies

Neil Russell, MS-HCI, May 2010
Accessible Aquarium Project: Zoo Soundscapes

Gary Golubski, MS-HCI, May 2010
Auditory Graphs in Classrooms at the Georgia Academy for the Blind

Myounghoon "Philart" Jeon, October 2010
"Spindex (Speech Index) Enhances Menu Navigation User Experience Of Touch Screen Devices In Various Input Gestures: Tapping, Wheeling, And Flicking"

Victor Ondego, MS-HCI (Psychology track), May 2011
Accessible Aquarium Project

Ruby Zheng, MS-HCI, May 2011
Interactive Aquarium Interface

Ozum Akanser, MS-HCI (Psychology), May 2012.
Accessible electronic user interfaces for data collection in the Mwangaza Project on accessible STEM education in Kenya.

Joe Lin, MS-HCI (Computing) student, May 2012.
Developing auditory graphing tools for iOS devices.

Sundararajan Sarangan, MS-CS, May 2012.
Advanced in-vehicle infotainment systems in our driving simulator, and the In-Vehicle Assistive Technology (IVAT) system built on the Centrafuse platform.

Abhishek Srivastava, MS-CS, May 2012.
Implementing auditory interfaces on Android devices, and helping develop software for auditory graphs.

Hyewon Suh, MS-CS, May 2012.
Auditory Graphs, the Accessible Aquarium Project, and Advanced Auditory Menus.

Sung-ihk Yang, MS-CS, May 2012.
Web sites in support of various projects in the lab.

Jung-Bin "Jay" Yim, MS-CS, May 2012.
In-Vehicle Assistive Technologies (IVAT) and Aquarium Fugue project.

Hitesh Chhabra, MS-CS, December 2013
Auditory Graphs project.

Ramitha Chitloor, MS-CS, May 2013

Saie Deshpande, MS-CS specializing in HCI, May 2013
Worked on a database to archive data collected in our Mwangaza Project in Kenya.

Erin Hennessy, MS-HCI, May 2013
Sonification of human movement.

Amrutha Krishnan, MS-HCI, May 2013
Auditory games and web development.

Sruthi Padala, MS-CS, May 2014
Movement sensor calibration.

Lisa Rossi, MS-HCI, May 2014

Rick Swette, MS-HCI (Psychology), May 2014
In-Vehicle Assistive Technology (IVAT) project.

Haifa Wright-Hullett, MS-HCI (Psychology), December 2013
Multimodal interfaces in support of our NASA research.

Brandon Conway, MS-HCI, May 2014
Sonified Fantasy Sports




Undergraduate Students

We always have many undergraduate students from all over campus, including Psychology, Computing, LCC, ECE, etc., working on various projects. Students generally start in the lab by completing a project for credit. This may range from programming a bit of software to running subjects, entering and analyzing data, or doing usability testing. This initial experience often leads to continued involvement and further projects. Please contact Dr. Walker if you are interested in gaining experience or working on a project in the Lab's general areas of interest. Don't worry if you don't have a specific project in mind; if you do have one, that's great too, as we are always open to new ideas. Here are some students who have gone that extra mile and completed a Senior Thesis.

Undergraduate Senior Thesis Students:

Lisa Siebenaler, Fall 2002, Spring 2003
"Magnitude Estimation of Sound Attributes Used in Auditory Displays: A Study of Blind and Visually Impaired Listeners"
Received Georgia Tech President's Undergraduate Research Award

Yoko Nakano, Fall 2004, Spring 2005
"Systematic Evaluation of 'SoundScape' Sonifications"
Received Georgia Tech President's Undergraduate Research Award

Jennifer Holmes, Fall 2006, Spring 2007
"Wayfinding Effectiveness with the System for Wearable Audio Navigation (SWAN)"

Dianne Palladino, Spring 2007, Fall 2007
"Evaluation of Spearcons as an Auditory Interface Element"

Sara Cantu, Spring 2007, Fall 2007
"Evaluation of Advanced Tactile Interface Device"

Naomi Warnick, Spring 2008, Fall 2008
"Reaction Time for Sounds Presented via Bone Conduction Audio Devices" and "Measuring the Effects of Physical Sound Stimulus and Bone Conducting Transducer Location on Reaction Time"
Received Georgia Tech President's Undergraduate Research Award

Tyler Campbell, Summer 2009, Fall 2009
"Trust, Technology, and Money"

Yarden Moskovitch, Fall 2009, Spring 2010
"Evaluation of Automatic Textual Description of Mathematical Graphs"
Received Georgia Tech President's Undergraduate Research Award

Riley Winton, Fall 2011, Spring 2012
"Musician Interpretation of Dynamic Exhibits"

Hannah Fletcher, Spring 2012, Fall 2012
"Different Stimulus Types to Represent Points of Intersection in Auditory Graphs"
Received Georgia Tech President's Undergraduate Research Award

Amanda Brock, Fall 2012, Spring 2013
"Attitude Survey Assessment in Blind Subjects"

Michelle Han, Spring 2013, Summer 2013
"Making standardized tests accessible"

Montana Haygood, Fall 2014, Spring 2015
"Prevalence and features of hearing loss by student drumline members"
   Published at the HFES conference in September 2016.

Heather Roberts, Fall 2014, Spring 2015
"Evaluation of STEM with GNIE"

Lee Martin Frazer, Fall 2015, Spring 2016
"Physiological measures and driving"