Georgia Tech Sonification Lab
School of Psychology - Georgia Institute of Technology
Director

Dr. Bruce N. Walker
Professor in the School of Psychology and the School of Interactive Computing at Georgia Tech, and founder of the Sonification Lab. Dr. Walker completed his Ph.D. at Rice University in Human Factors Psychology and Human-Computer Interaction in 2001. He is a Core Faculty Member in the GVU Center, a member of the Center for Music Technology (GTCMT) and the Center for Biologically Inspired Design (CBID), and a Project Director in the WirelessRERC. Dr. Walker is a Past-President of the International Community for Auditory Display (ICAD).
     [bruce.walker@psych.gatech.edu]    [Web page]

Affiliated Faculty

Dr. Adrian Houtsma
Adjunct Professor in the School of Psychology. Dr. Houtsma completed his Ph.D. at MIT, and has had a distinguished career as a professor and research lab director in the US and The Netherlands. He has helped supervise graduate students and helped lead our Bone Conduction Audio research projects.
     [adrianus.houtsma [at] psych.gatech.edu]    [Web page]


Postdoctoral Researchers

Postdoc Position Available
The Sonification Lab is accepting applications for a paid postdoctoral researcher. Candidates must hold (or be ABD for) a PhD in Psychology, Computing, HCI, or another field that relates directly to the work in the Sonification Lab. Projects include, but are not limited to, fMRI studies related to auditory perception, driving studies, in-vehicle assistive technology, auditory graphs and sonification, accessible museums and aquaria, auditory user interfaces, wayfinding systems, STEM education for students with disabilities, and computer systems and technologies for audio-based assistive technologies.



PhD Students

Brittany Noah
Brittany is a PhD student in the School of Psychology. She earned Bachelor of Science Degrees in Biomedical Engineering and Psychology from Virginia Commonwealth University. Her research interests include reliability and trust in automation, trust calibration, multi-modal displays, immersive and non-immersive dynamic environments, and automated driving. She is currently involved in the eco-driving displays project and the automated lane keeping displays project. In the future, she hopes to continue working on issues surrounding trust in automated, unmanned, and autonomous vehicles.
     [brittany.noah [at] gatech.edu]    [Web page]

Stanley Cantrell
Stanley is a PhD student in the Human Centered Computing (HCC) program in the School of Interactive Computing. His research interests are in the areas of Human-Computer Interaction and Assistive Technologies. He is particularly interested in researching novel interfaces and artifacts that facilitate communication and make information more accessible and usable for children, the elderly, and disadvantaged individuals.
     [stanleyjcantrell [at] gmail.com]    [Web page]

Jared Batterman
Jared is a PhD student in the School of Psychology. His interests include auditory perception, cognition, sonification, and auditory interfaces. His research looks at how to design effective auditory graphs, with current interest in conveying error in a measurement, such as an auditory equivalent to error bars. Jared came to GT with a Masters degree.
     [jmbatterman [at] gmail.com]    [Web page]

Thomas Gable
Thom is a PhD student in the School of Psychology. His research is centered on improving users' abilities in multitasking situations through the application of multimodal displays. Currently much of his work is done in the driving domain, where he is involved in a diverse set of projects as part of multidisciplinary teams. Thom also uses his minor in qualitative HCI research to reach beyond the standard quantitative focus of psychology research and find new ways of answering research questions.
     [thomas.gable [at] gatech.edu]    [Web page]

Vincent Martin
Vincent is a Ph.D. student in Human Centered Computing in the School of Interactive Computing. Much of Vincent's research has been in the area of auditory displays and menus, and multimodal input and output. Many of these projects deal with auditory discrimination of sound, including auditory graphs and the understanding of information displayed with sound. This has included projects such as a sound-based navigation system for blind people and a sound-based interface for accessing aquariums. Vincent's primary research area is making statistical output from packages such as SPSS and SAS readily accessible to blind students and fellow researchers. His research uses sonification of the visual graphs that represent that statistical output, and he expects it to culminate in a dissertation on this topic in a few more years.
     [vincent.martin [at] gatech.edu]    [Web page]

Keenan May
Keenan is a student in the Engineering Psychology PhD program, having previously graduated from the MS-HCI program in 2014. He studies people's ability to multitask using emerging input and output devices while maintaining awareness of their surroundings during safety-critical activities such as cycling and driving. Other interests include assistive technologies for cognitive impairments; physiological sensing of workload and affect; augmented cognition systems; and physiological measurement of engagement for games research or classroom use.
     [kmay [at] gatech.edu]    [Web page]

Jonathan Schuett
Jonathan is a PhD student in the School of Psychology. His interests include auditory graphs, auditory perception, multimodal user interfaces, assistive technology, and assessment. His research is looking at how to create effective auditory graphs, focusing on context and multiple data series. Jonathan is also involved in the IVAT project, the Mwangaza Project in Kenya, and auditory display design projects. Jonathan came to GT with a Masters degree.
     [jschuett6 [at] gatech.edu]    [Web page]

Brianna Tomlinson
Brianna is a PhD student in the School of Interactive Computing, in the HCC program. Her current work is studying how to use auditory displays and sonifications (non-speech audio) to support glanceability and how these displays could also provide overviews of complex information systems. She is currently researching these topics in two contexts: weather (in an Accessible Weather App research project) and space (for a Solar System Sonification using spatial audio for a planetarium show).
     [btomlin [at] gatech.edu]    [Web page]

Jeff Wilson
Jeff Wilson is a PhD student in Computer Science, and a Senior Research Scientist in the Interactive Media Technology Center. He received a Bachelor's degree in Computer Science in 1999 and a Master's degree in Computer Science in 2001 from Georgia Tech. His areas of specialization include graphics, visualization, digital audio, game design, and virtual and augmented reality applications. Some of the projects Jeff has worked on include large-format projected VR displays, mobile and head-mounted AR applications, auditory interfaces for automotive applications, educational games, and mobile health applications.
     [jeff.wilson [at] bitc.gatech.edu]    [Web page]


Current HCI and Computing Masters Students

In addition, every semester the Sonification Lab has several HCI Masters students, Computer Science Masters students, and other Masters students working in the lab. They work on all of the projects in the lab, with their contributions ranging from programming to running experiments and analyzing data.

Note: The Sonification Lab used to engage MS students in individual projects and independent studies (e.g., CS 8903 or PSYC 8903). However, to expand the number of MS students who can participate in SonLab research, and to better serve our ongoing projects, we now usually engage students through our lab studio course, "CS 8803 SRD - Sonification Lab R&D Studio". This course allows about 12-16 students per semester to work on projects in 2-student teams, working closely with one of the SonLab PhD students and Dr. Walker. For that reason, we no longer list individual students here unless they are working on an independent study or are in another special category. We do, most certainly, still invite MS students to work in the lab, either as part of the Studio experience, or on occasion, in individual projects.



Brandon Thomas
Brandon has had many assignments throughout his military career and was selected to be an instructor at the United States Military Academy. He is a decorated combat veteran with one combat tour to Iraq and one to Afghanistan. Brandon earned his B.S. degree in Engineering Psychology with a track focus in Systems Engineering in 2007 from the United States Military Academy, and is pursuing a Master of Science in Engineering Psychology at the Georgia Institute of Technology as a 2014 GEM Fellow.


Research Associates and Staff

Ashley Henry
Ashley graduated with a BS in Biology from Georgia Tech in 2011, and joined the Sonification Lab as a full time research associate. She had already been working in the lab as an undergraduate. She then stayed around and completed her MS in HCI. Ashley was involved in many of our research projects, including the Accessible Aquarium, the Aquarium Fugue, and STEM education for visually impaired students.


Undergraduate Students

We always have many undergraduate students, from all over the campus, including Psychology, Computing, LCC, ECE, etc., working on various projects. Students generally start in the lab by completing a project for credit. This may range from programming a bit of software, to running subjects, to entering and analyzing data, to doing usability testing, and so on. This initial experience often leads to continued involvement and further projects. Please contact Dr. Walker if you are interested in gaining experience or working on a project in the Lab's general areas of interest. Don't worry if you don't have a specific project in mind. On the other hand, if you do have a specific project in mind, that's cool too--we are always open to new ideas. Here are some students who have gone that extra mile and completed a Senior Thesis.

Undergraduate Senior Thesis Students:

Lisa Siebenaler, Fall 2002, Spring 2003
"Magnitude Estimation of Sound Attributes Used in Auditory Displays: A Study of Blind and Visually Impaired Listeners"
Received Georgia Tech President's Undergraduate Research Award

Yoko Nakano, Fall 2004, Spring 2005
"Systematic Evaluation of 'SoundScape' Sonifications"
Received Georgia Tech President's Undergraduate Research Award

Jennifer Holmes, Fall 2006, Spring 2007
"Wayfinding Effectiveness with the System for Wearable Audio Navigation (SWAN)"

Dianne Palladino, Spring 2007, Fall 2007
"Evaluation of Spearcons as an Auditory Interface Element"

Sara Cantu, Spring 2007, Fall 2007
"Evaluation of Advanced Tactile Interface Device"

Naomi Warnick, Spring 2008, Fall 2008
"Reaction Time for Sounds Presented via Bone Conduction Audio Devices" and "Measuring the Effects of Physical Sound Stimulus and Bone Conducting Transducer Location on Reaction Time"
Received Georgia Tech President's Undergraduate Research Award

Tyler Campbell, Summer 2009, Fall 2009
"Trust, Technology, and Money"

Yarden Moskovitch, Fall 2009, Spring 2010
"Evaluation of Automatic Textual Description of Mathematical Graphs"
Received Georgia Tech President's Undergraduate Research Award

Riley Winton, Fall 2011, Spring 2012
"Musician Interpretaton of Dynamic Exhibits"

Hannah Fletcher, Spring 2012, Fall 2012
"Different Stimulus Types to Represent Points of Intersection in Auditory Graphs"
Received Georgia Tech President's Undergraduate Research Award

Amanda Brock, Fall 2012, Spring 2013
"Attitude Survey Assessment in Blind Subjects"

Michelle Han, Spring 2013, Summer 2013
"Making standardized tests accessible"

Montana Haygood, Fall 2014, Spring 2015
"Prevalence and features of hearing loss by student drumline members"
   Published at the HFES conference in September 2016.

Heather Roberts, Fall 2014, Spring 2015
"Evaluation of STEM with GNIE"

Lee Martin Frazer, Fall 2015, Spring 2016
"Physiological measures and driving"




Lab Graduate Student Alumni

Postdoc Alumni
Dr. Andrew Wallace
Dr. Andrew Wallace received his PhD in Cognitive Science from Brown University in 2011. His dissertation was entitled, The Auditory Representation of Time and Frequency in Vowel Quality Perception. Andy worked on studies of neural speech representation. He is now a Cognitive Scientist in Los Angeles.

PhD Alumni
Dr. Yee Chieh Chew
Yee Chieh earned her PhD in the School of Interactive Computing in the HCC program. Her interests include studying the effect of introducing educational and assistive technology in STEM classes for students with visual impairments. She was also a researcher on the Auditory Graphs project. Yee Chieh took a position as a User Experience Researcher at Kaiser Permanente.

[Dissertation: "Assessing the use of auditory graphs for middle school mathematics"]



Dr. Carrie Bruce
Carrie earned her PhD in the School of Interactive Computing, in the HCC program. She is now a Senior Research Scientist at Georgia Tech. She is also a speech-language pathologist and an assistive technology practitioner. Carrie studies discourse in human-technology interactions, and was a co-PI on the Accessible Aquarium Project.

[Dissertation: "Facilitating participation in adults with and without vision loss by supporting exhibit motivations through real-time descriptive mediation"]



Dr. Julia Olsheski (née DeBlasio)
Julia earned her PhD in the School of Psychology. Her interests include environmental design and auditory interfaces. Her research looked at how multimodal stimuli (e.g., auditory and visual) are processed. She worked on the IVAT in-vehicle assistive technology project, as well as research for NASA, and was the lead researcher on the Medical Technology project.

[Dissertation: "The role of synesthetic correspondence in intersensory binding: investigating an unrecognized confound in multimodal perception research "]
[Masters: "Documentation in a medical setting with young and older adults"]



Dr. Benjamin K. Davison
Ben completed his PhD in the School of Interactive Computing in the HCC program, in Spring 2013. His dissertation research studied the development, deployment, and usage of auditory display and sonification software for Math education for students with vision loss. He was the lead developer on the Sonification Sandbox project, and a researcher on the Auditory Graphs and Advanced Auditory Menus projects. Ben is now (from 2012) a researcher at Google, but in his spare time continues to collaborate with the Sonification Lab on various projects.

[Dissertation: "Universal graph literacy: understanding how blind and low vision students can satisfy the common core standards with accessible auditory graphs"]



Dr. Michael A. Nees
Dr. Michael A. Nees received his PhD in Psychology from Georgia Tech in 2009. His dissertation was entitled, Internal Representations of Auditory Frequency: Behavioral Studies of Format and Malleability by Instructions. Following graduation, Dr. Nees taught at Spelman College in Atlanta, and then held a postdoctoral research position in the Georgia Tech Sonification Lab. Among other successes and honors, Mike was awarded the 2010 APA Division 21 George E. Briggs Dissertation Award, for the best dissertation in the field of Applied Experimental/Engineering Psychology. Mike is now (from 2011) an Assistant Professor of Psychology at Lafayette College, and directs the Human Factors, Perception, & Cognition Laboratory.

[Dissertation: "Internal representations of auditory frequency: behavioral studies of format and malleability by instructions"]
[Masters: "Data Density and Trend Reversals in Auditory Graphs: Effects on Point Estimation and Trend Identification Tasks"]



Dr. Myounghoon "Philart" Jeon
Philart received his PhD in Psychology from Georgia Tech in 2012. His dissertation was entitled, Effects Of Affective States On Driver Situation Awareness And Adaptive Mitigation Interfaces: Focused On Anger. While at Georgia Tech, Philart's research looked at how emotion and affect play a role in user interfaces and in task performance, especially driving. His interests also included audio design and perception for mobile devices such as cell phones and in-vehicle "infotainment" systems. He worked on the IVAT in-vehicle assistive technology project, on the Advanced Auditory Menus project, and also on the Accessible Aquarium project. Dr. Jeon is now a professor at Michigan Tech, where he directs the Mind Music Machine Lab.

[Dissertation: "Effects of affective states on driver situation awareness and adaptive mitigation interfaces: focused on anger"]
[Masters: " 'Spindex' (speech index) enhances menu navigation user experience of touch screen devices in various input gestures: tapping, wheeling, and flicking"]



Dr. Raymond M. Stanley
Dr. Raymond M. Stanley received his PhD in Psychology from Georgia Tech in 2009. Ray's dissertation was entitled, Measurement and Validation of Bone-Conduction Adjustment Functions in Virtual 3D Audio Displays. While at Georgia Tech, Ray's research centered on perception and psychophysics, especially bone conduction audio and bonephones. He was the lead researcher on the Bone Conduction Audio project. Dr. Stanley held a postdoctoral research position with Prof. Arthur Wingfield at Brandeis University, before accepting a position at the MITRE Corporation.

[Dissertation: "Measurement and validation of bone-conduction adjustment functions in virtual 3D audio displays"]
[Masters: "Toward adapting spatial audio displays for use with bone conduction: the cancellation of bone-conducted and air-conducted sound waves"]



Lisa Mauney (née Siebenaler)
Lisa earned her MS in the School of Psychology. Her interests included individual differences in the perception and comprehension of auditory displays, auditory graphs, and sonifications, as well as assistive technology and how people with low vision read. She started in the Lab as an undergraduate, and completed her Undergraduate Senior Thesis in the Lab, before joining us as a grad student. She was part of the Individual Differences and Training in Auditory Displays and Auditory Graphs projects. Lisa has left the lab and taken an HCI research scientist job in the corporate world.

[Masters: "Individual Differences in Cognitive, Musical, and Perceptual Abilities"]



Jeff Lindsay
Jeff earned his MS in the School of Psychology. His interests included auditory perception, perception of space, navigation, and auditory user interfaces. He was a lead researcher on the SWAN (System for Wearable Audio Navigation) project. Jeff is now a User Experience strategist.

[Masters: "The effect of a simultaneous speech discrimination task on navigation in a virtual environment"]



MS Alumni

Note: The Sonification Lab used to engage MS students in individual projects and independent studies (e.g., CS 8903 or PSYC 8903). However, to expand the number of MS students who can participate in SonLab research, and to better serve our ongoing projects, we now usually engage students through our lab studio course, "CS 8803 SRD - Sonification Lab R&D Studio". This course allows about 12-16 students per semester to work on projects in 2-student teams, working closely with one of the SonLab PhD students and Dr. Walker. For that reason, we no longer list individual students here. We do, most certainly, still invite MS students to work in the lab, either as part of the Studio experience, or on occasion, in individual projects.

Jason D'Orazio, MS-HCI (Psychology track), Dec 2002
"Issues in Car Navigation"

Daniel Smith, MS (Psychology), May 2003
"Effects of Training and Context on Human Performance in a Point Estimation Sonification Task"

Darren Hough, MS (Architecture--Industrial Design), May 2003
"Aesthetics and Product Usability"

Justin Godfrey, MS-HCI (Psychology track), Aug 2004
"Development and Evaluation of the Audio Abacus"

Josh Cothran, MS-HCI (Computer Science track), Aug 2004
"Web Application for Controlled Burn Simulation "

Kathy Lau, MS-HCI (Psychology track), Dec 2004
"Tame Study "

Amanda Nance, MS-HCI (Computer Science track), May 2005
"Sonification of Menu Structures "

Kevin Stamper, MS-HCI (Computer Science track), May 2005
"Mobile Audio Designs (MAD) Monkey: A Tool for Sound Design "

Jeffrey Lindsay, MS Psychology, Sept 2005
"The Effect of a Simultaneous Speech Discrimination Task on Navigation in a Virtual Environment"

Britt Caldwell, MS-HCI (Psychology track), May 2006
"Effects of Technology Use on the Doctor-Patient Interaction"

Lisa Mauney, MS Psychology, Aug 2006
"Individual Differences in Cognitive, Musical, and Perceptual Abilities"

Ray Stanley, MS Psychology, Aug 2006
"Adapting Spatial Audio Displays For Use With Bone Conduction: How Bone-conducted Waves Interact With Air-conducted Waves at the Basilar Membrane"

Robert Gray, MS-HCI (Psychology track), Aug 2006
"Audio Task Assistance for Aircraft Maintainers"

Michael Nees, MS Psychology, Jan 2007
"Data Density and Trend Reversals in Auditory Graphs: Effects on Point Estimation and Trend Identification Tasks"

Pavani Yalla, MS-HCI (LCC track), May 2008
"Advanced Auditory Menus"

Anandi Pendse, MS-CS, May 2008
"Bone Conduction Audio Perception"

Vivek Muppalla, MS-CS, May 2008
"SWAN System Research and Development"

Les Smee, MS-HCI (CS track), Dec 2008
"MotoBridge GUI study: An interface study of the State of Georgia's interoperability solution"

Anna (Anya) Kogan, MS-HCI (Psychology track), May 2009
"Auditory Graphs for Education; Spearcons in Dual Tasks"

Siddharth Gupta, MS-HCI (CS track), May 2009
"Advanced Auditory Menus"

Unkyong Lee, MS-CS, May 2009
"Sonification Sandbox: Online Version; Auditory Menus: Mobile Version"

Mary Frances Jones, MS-HCI (Psychology track), August 2009
"In-home evaluation of 'Fuzzy Logic' Sign Language Teaching Toy"

Julia DeBlasio, MS Psychology, August 2009
"Documentation in a Medical Setting with Young and Older Adults"

Stephen Choi, MS-HCI (CS track), May 2010
"The Digitizer Audio Graph: Automatic Auditory Graph Generation Using Computer Vision"

Marc Buigues, MS-CS, May 2010
"Development and Evaluation of Museum Visitor Tracking Software"

Jeff McCloud, MS-Industrial Design, May 2010
In-Vehicle Assistive Technologies

Neil Russell, MS-HCI, May 2010
Accessible Aquarium Project: Zoo Soundscapes

Gary Golubski, MS-HCI, May 2010
Auditory Graphs in Classrooms at the Georgia Academy for the Blind

Myounghoon "Philart" Jeon, October 2010
"Spindex (Speech Index) Enhances Menu Navigation User Experience Of Touch Screen Devices In Various Input Gestures: Tapping, Wheeling, And Flicking"

Victor Ondego, MS-HCI (Psychology track), May 2011
Accessible Aquarium Project

Ruby Zheng, MS-HCI, May 2011
Interactive Aquarium Interface

Ozum Akanser, MS-HCI (Psychology), May 2012.
Accessible electronic user interfaces for data collection in the Mwangaza Project on accessible STEM education in Kenya.

Joe Lin, MS-HCI (Computing), May 2012.
Developing auditory graphing tools for iOS devices.

Sundararajan Sarangan, MS-CS, May 2012.
Advanced in-vehicle infotainment systems in our driving simulator, and the In-Vehicle Assistive Technology (IVAT) system built on the Centrafuse platform.

Abhishek Srivastava, MS-CS, May 2012.
Implementing auditory interfaces on Android devices, and helping develop software for auditory graphs.

Hyewon Suh, MS-CS, May 2012.
Auditory Graphs, the Accessible Aquarium Project, and Advanced Auditory Menus.

Sung-ihk Yang, MS-CS, May 2012.
Web sites in support of various projects in the lab.

Jung-Bin "Jay" Yim, MS-CS, May 2012.
In-Vehicle Assistive Technologies (IVAT) and Aquarium Fugue project.

Hitesh Chhabra, MS-CS, December 2013
Auditory Graphs project.

Ramitha Chitloor, MS-CS, May 2013

Saie Deshpande, MS-CS specializing in HCI, May 2013
Worked on a database to archive data collected in our Mwangaza Project in Kenya.

Erin Hennessy, MS-HCI, May 2013
Sonification of human movement.

Amrutha Krishnan, MS-HCI, May 2013
Auditory games and web development.

Sruthi Padala, MS-CS, May 2014
Movement sensor calibration.

Lisa Rossi, MS-HCI, May 2014

Rick Swette, MS-HCI (Psychology), May 2014
In-Vehicle Assistive Technology (IVAT) project.

Haifa Wright-Hullett, MS-HCI (Psychology), December 2013
Multimodal interfaces in support of our NASA research.

Brandon Conway, MS-HCI, May 2014
Sonified Fantasy Sports