Project Ideas (for FALL 2024 and beyond -- updated August 2024)
Not an exhaustive list, but at least represents the kinds of projects we will do, depending on needs, lab research goals, and fit to 8803 students' skills and interests.
Featured and New project ideas updated for FALL 2024:
- Project name: Accessible Maps
-
We have a number of projects that rely on map data, so we have built GIS- and Unity-based maps of campus, of buildings, etc. We are working on software to make the map data accessible to blind users, and on ways to leverage the map data in robotics and other applications. This project needs various kinds of programmers, as well as designers and researchers. We are also looking at using ChatGPT as a front end to a map (like Google Maps), so a user could have a conversation-like query about the map, what is in it, etc. [ML/ChatGPT/web programming/research studies]
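As a taste of the ChatGPT-as-front-end idea, here is a minimal sketch in Python, assuming the OpenAI chat API; the model name and the tiny campus feature list are illustrative stand-ins for our real GIS/Unity map data.
```python
# pip install openai -- a minimal sketch of a conversational map query,
# assuming the OpenAI chat API. The model name and the tiny campus
# feature list below are illustrative stand-ins for our real map data.
import json
from openai import OpenAI

features = [
    {"name": "Klaus Advanced Computing Building", "type": "building",
     "entrances": 4, "nearest_bus_stop": "Ferst Dr & Atlantic Dr"},
    {"name": "Tech Green", "type": "open space",
     "nearest_bus_stop": "Tech Green"},
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "Answer questions about this campus map data for a "
                    "blind user. Be concise and spatial.\n" + json.dumps(features)},
        {"role": "user",
         "content": "How many entrances does Klaus have, and where is the "
                    "closest bus stop?"},
    ],
)
print(response.choices[0].message.content)
```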
- Project name: Inclusive and Accessible Climate Communications
-
The Sonification Lab has a new (and large) initiative to make climate and weather information (maps, charts, etc.) more accessible and more inclusive. We will be using available tools like Highcharts and AudioM to produce examples of more inclusive content.
This project will also include a survey of climate communications from sources such as the United Nations and the BBC, determining how inclusive they are and where improvements are needed.
- Project name: Cognitive Tests in VR/AR
-
We conduct many studies in VR, and need to be able to administer standard tests (working memory capacity, etc.) to participants. This project will implement various psychometric tests in VR.
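For a sense of what is involved, here is a console-level sketch of the scoring logic for one such test (a 2-back working-memory task); the actual deliverable would implement this inside the Unity/VR environment, and the letter set, match rate, and trial count below are illustrative rather than validated psychometric parameters.
```python
# A console prototype of 2-back scoring logic, to be ported into VR.
# Letters, match rate, and trial count are illustrative choices.
import random

def run_nback(n: int = 2, trials: int = 20) -> float:
    letters = "BCDFGHKLMN"
    stream = []
    correct = 0
    for _ in range(trials):
        if len(stream) >= n and random.random() < 0.3:
            letter = stream[-n]          # force a target ~30% of the time
        else:
            letter = random.choice(letters)
        stream.append(letter)
        is_target = len(stream) > n and stream[-1] == stream[-1 - n]
        answer = input(f"{letter}  match? [y/N] ").strip().lower() == "y"
        if answer == is_target:
            correct += 1
    return correct / trials

if __name__ == "__main__":
    print(f"Accuracy: {run_nback():.0%}")
```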
- Project name: Research Project Using VR Spaces to Promote Creativity
-
Spaces (both real and virtual) can be designed to enhance creativity. This project will design a VR tool or pipeline to allow us to generate and reconfigure VR spaces for use in the research.
- Project name: Develop Pedestrian and Driver Interaction Scene in VR
-
We use multi-player VR to study the interaction of pedestrians, drivers, and self-driving cars. This project is to implement a VR scene in Strangeland, and then collect data with human participants.
- Project name: STRANGELAND Enhancements
-
We collaborate with Cornell Tech on the development of a multiplayer VR world - an immersive digital twin - that can be used for research. We are expanding the capabilities of Strangeland.
This project is to add flying cars, or NPCs (pedestrians, robots, animals, traffic), or 3D audio extensions, to Strangeland.
- Project name: Mental Health Monitoring project - dashboard
-
Project with various GT researchers and an external company to develop a mental health daily support and assessment app, leveraging AI, machine learning, and sonification.
In Fall 2024 we will focus on the design and implementation of a web-based "dashboard" to present data to users.
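As a starting point, here is a minimal sketch of the dashboard's data layer, assuming a small Flask service that exposes daily assessment scores as JSON for the front end to chart; the route, field names, and sample records are hypothetical.
```python
# pip install flask -- a minimal sketch of a dashboard data endpoint.
# The route, field names, and sample records are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for whatever store the assessment app actually writes to.
SAMPLE_SCORES = [
    {"date": "2024-08-19", "mood": 6, "sleep_hours": 7.5},
    {"date": "2024-08-20", "mood": 4, "sleep_hours": 6.0},
]

@app.route("/api/daily-scores")
def daily_scores():
    """Serve the time series that the dashboard front end will chart."""
    return jsonify(SAMPLE_SCORES)

if __name__ == "__main__":
    app.run(debug=True)
```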
- Project name: Graph Ingestion Engine - bring to market
-
We have built a web service that can take in an image of a graph or chart, convert it to CSV, and then to HSSP (retaining as much of the formatting as possible) for use in the Highcharts Sonification Studio. This semester we will finish deploying the web front end, tie in to the GT supercomputing cluster (PACE), and deploy it as a service. We will also add new and better pipelines that handle more kinds of graphs and perform better. [web app/client-server/computer vision/ML analysis pipelines]
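For flavor, here is a highly simplified sketch of one pipeline stage, assuming OpenCV: recovering bar heights from a clean bar-chart image and writing them to CSV. The real pipelines also need axis detection, OCR, and per-chart-type handling; the threshold below is illustrative.
```python
# pip install opencv-python -- a simplified sketch of bar extraction.
# Assumes dark bars on a white background; threshold is illustrative.
import csv
import cv2

img = cv2.imread("bar_chart.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

bars = sorted(cv2.boundingRect(c) for c in contours)  # left-to-right by x
with open("bar_chart.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["bar_index", "height_px"])
    for i, (x, y, w, h) in enumerate(bars):
        writer.writerow([i, h])  # pixel height; axis calibration maps to values
```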
- Project name: Explorations with HaptX VR Gloves
-
Exploratory project making use of the HaptX Gloves system. Will likely involve wayfinding or accessible maps.
- Project name: Robotic Guide Dog
-
A GT team is in a multi-year interdisciplinary program of research to develop a robotic guide dog to help people with vision loss. The point is to do the research, design, algorithm development, prototyping, and extensive testing that such a project clearly needs.
We need all skill sets, including ethnographers, designers, programmers (!!), researchers. We have robots, and access to plenty of experts (including persons with vision loss, and dog+human teams).
This semester we will be focusing on a two-way speech interface between the robot and the human user.
- Project name: AccessCORPS Tools
-
Course materials (think websites, PowerPoint slides, lab manuals, textbooks) can be inaccessible to students with disabilities, such as those who have vision loss. Teachers may not know that their materials are inaccessible; they may not know how to fix them; and they may not have the time to do so.
AccessCORPS is a VIP team at GT, involving a group of students who will be trained to identify accessibility issues in courses, and then work with instructors to fix and enhance their courses and course materials (see https://www.vip.gatech.edu).
AccessCORPS needs various software tools, such as: (1) AI-based automated description of images, maps, and graphs; (2) a tool to extract all the images from a PowerPoint slide deck so they can be described (see the sketch below).
We will also be taking content from specific courses (e.g., graphs from statistics classes) and making it accessible.
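A minimal sketch of tool (2), using the python-pptx library; the file names are placeholders.
```python
# pip install python-pptx -- extract every embedded picture from a deck.
from pathlib import Path
from pptx import Presentation
from pptx.enum.shapes import MSO_SHAPE_TYPE

def extract_images(pptx_path: str, out_dir: str) -> None:
    """Save every picture in the deck to out_dir, named by slide."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    prs = Presentation(pptx_path)
    for slide_num, slide in enumerate(prs.slides, start=1):
        for shape_num, shape in enumerate(slide.shapes, start=1):
            if shape.shape_type == MSO_SHAPE_TYPE.PICTURE:
                image = shape.image  # python-pptx Image object
                fname = f"slide{slide_num:02d}_img{shape_num:02d}.{image.ext}"
                (out / fname).write_bytes(image.blob)

extract_images("lecture.pptx", "extracted_images")  # placeholder file names
```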
- Project name: AI and Music Performance
-
We have a new project to use AI in music performance, and to study both the performance and the AI system itself.
- Project name: Multimodal Reader app - extensions
-
We have developed an app (for iOS, in Swift) that combines and synchronizes the visual text from an eBook with the audio from an audio book.
This semester we will be adding multimodal synchronization features, usability enhancements, and performance optimization.
- Project name: VR "Found" Data, applications for film
-
Data obtained from a VR headset can be used to modify the user's experience. Sometimes this is explicit data such as head pose or hand location. Other times there is information in what might otherwise be considered "noise". We can use this in a range of applications, including film experiences.
- Project name: Minecraft as a Research Platform
-
We are looking at using Minecraft as a research platform to study maps, wayfinding, teams, creativity, design, etc. To support this, we need to implement data logging, data collection, in-game surveys, etc.
- Project name: AR AdventuRes with Magic Leap
-
Develop Augmented Reality apps using the new Magic Leap AR glasses.
- Project name: What's Happening...MOBILE
-
A while ago, Prof. John Stasko's research group developed a system called InfoCanvas (and What's Happening) that harvests/scrapes data from the internet (e.g., weather data, flight prices) and uses those data to generate artwork that is modified based on the data.
The information (the data) is basically embedded in the art.
We are now taking that to the next level, creating both visual and audio versions of data-driven art/music, and deploying those in VR, in the car, and in other on-the-go contexts.
We will develop data harvesters, and a pipeline to convert data into art/music, then design various art/music pieces, and do evaluations/research. [mobile app/ML/sonification/info-viz]
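One harvester stage might look like the sketch below, which pulls the current temperature from the free Open-Meteo API (an assumed data source) and maps it onto a color hue and a pitch; the mapping ranges are illustrative design choices.
```python
# pip install requests -- sketch of one harvester stage: weather -> art/music
# parameters. Open-Meteo is an assumed source; mappings are illustrative.
import requests

resp = requests.get(
    "https://api.open-meteo.com/v1/forecast",
    params={"latitude": 33.77, "longitude": -84.40,  # Atlanta
            "current_weather": "true"},
    timeout=10,
)
temp_c = resp.json()["current_weather"]["temperature"]

# Map -10..40 C onto a color hue (blue..red) and a pitch (A3..A5).
frac = max(0.0, min(1.0, (temp_c + 10) / 50))
hue_degrees = 240 - 240 * frac          # 240 = blue, 0 = red
pitch_hz = 220 * 2 ** (2 * frac)        # up to two octaves above A3

print(f"{temp_c} C -> hue {hue_degrees:.0f} deg, pitch {pitch_hz:.0f} Hz")
```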
- Project name: WHOI Alvin VR model and simulator
-
Develop a "flyable" Unity/VR-based simulation of the Alvin submersible from the Woods Hole Oceanographic Institute (WHOI). We have built much of the framework for a complete Unity model of Alvin. Now we will finish the model, and add interactive elements inside the virtual sub (e.g., manipulate a switch on a panel inside the sub, and that turns on lights outside the sub). We will work with WHOI on applications of the model for training. [Unity/blender/C#/research studies]
- Project name: Eclipse Sonification
-
We will be making the upcoming solar eclipse accessible via sonification and immersive VR. Also includes accessible maps and accessible weather displays.
- Project name: Sonification in Africa - Deployment Study in Kenya
-
We are conducting a field deployment study of the Highcharts Sonification Studio (HSS). We will deploy it to schools for the blind in Kenya, working with teachers and IT trainers and (blind) students. [research/field studies]
- Project name: Microsoft "Devices" Study
-
Microsoft is collaborating on a project to study how people use, and transition between, mobile devices. This will be a research effort involving mixed methods.
- Project name: VR training for automated vehicles
-
As cars come out with new features (ranging from adaptive cruise control (ACC) all the way up to fully self-driving), consumers need to know that these features exist, what they do, and perhaps experience them before they buy a car. In some cases drivers need to be trained in how to interact with advanced features before they get out onto the road.
We plan to use VR, AR, and other (Unity-based) simulations to familiarize and train consumers. This involves setting up a driving simulator, extending it to meet our needs, designing and prototyping training, and doing evaluations. [Unity/research]
- Project name: STING 2.0 - Driving Simulator Telemetry-driven GUIs
-
Our advanced driving simulators produce a lot of telemetry data (e.g., speed, steering wheel angle, whether cruise control is turned on). We have built a bridge (middleware) called STING that listens for those data, and makes them available for use by third-party applications.
We have also written just such a third-party app: it generates heads-up displays based on the telemetry and shows them on any of the driving simulator's 6+ screens.
This project will update STING, creating a STING 2.0 that works with the newest version of the simulator software and provides advanced data handling and display capabilities.
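A sketch of what the STING 2.0 ingest side could look like, assuming the simulator emits JSON telemetry as UDP datagrams; the port, packet format, and field names here are hypothetical, since the real simulator protocol may differ.
```python
# Hypothetical STING 2.0 ingest loop: receive JSON telemetry over UDP
# and fan it out to registered third-party callbacks.
import json
import socket

def serve(port: int = 9870) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    subscribers = []  # third-party apps would register callbacks here
    subscribers.append(lambda t: print(
        f"speed={t['speed_mph']:.1f} cc={'on' if t['cruise_on'] else 'off'}"))
    while True:
        packet, _addr = sock.recvfrom(4096)
        telemetry = json.loads(packet)   # e.g. {"speed_mph": 61.2, ...}
        for callback in subscribers:
            callback(telemetry)

if __name__ == "__main__":
    serve()
```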
- Project name: You can't Spell Agriculture without "AI" and "AR"
-
VRlandia project using AI and AR/VR for various farming-related tasks. The future of agroeconomics. Project(s) with John Deere.
- Project name: VRlandia
-
Brief description: We are developing a lab space filled with VR/AR/XR and other simulation gear (including driving simulators) that are available for use by GT students and researchers.
We need help establishing the lab, getting gear set up, creating a website (including tools to check out equipment, reserve lab space, etc.), developing policies, and also helping others use the space and gear (i.e., "Genius Bar").
- Project name: SWAN 2.0++
-
Brief description: Does shifting sounds slightly away from a strict real-time mapping, to fit a more musical beat/structure/soundscape, make an indoor space easier to learn to navigate and improve the aesthetic response? (See the quantization sketch after the milestones.)
* Milestone 1: design examples of sound shifts using the programming from SWAN 2.0/the audio from the courses
* Milestone 2: build out full set for evaluation
* Milestone 3: IRB/test!
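The sketch referenced above: the core "sound shift" for Milestone 1, nudging sound onset times from their real-time positions toward the nearest beat of an underlying soundscape. The tempo and blend weight are illustrative.
```python
# Nudge real-time sound onsets toward a musical beat grid.
# strength=0 keeps the raw real-time mapping; strength=1 snaps fully to
# the grid. Intermediate values are the conditions we want to compare.
def quantize_onsets(onsets_s, bpm=90.0, strength=0.6):
    beat = 60.0 / bpm
    shifted = []
    for t in onsets_s:
        nearest_beat = round(t / beat) * beat
        shifted.append(t + strength * (nearest_beat - t))
    return shifted

raw = [0.13, 0.92, 1.58, 2.31]          # e.g. footstep-triggered cues
print(quantize_onsets(raw))
```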
- Project name: Accessible Safari
-
Brief description: How can a blind person go on safari (in a jeep, in Africa, for example)--lions and giraffes and rhinos, oh my!? How can technology be used to help the person know what is around, and get a complete experience?
This involves advanced sensing, computer vision, machine learning, and multimodal user interfaces.
Begin the process of developing an accessible mechanism for safari participants to know about and learn about what is around them. This may eventually mean mounting a camera on a jeep, performing complex computer-vision processes, and conveying the results to the passenger using synthesized audio (both music and speech).
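As a hypothetical first pass at the sensing-to-speech loop, the sketch below runs an off-the-shelf object detector on a single camera frame and speaks what it sees; a fieldable system would need a wildlife-tuned model, live video, and richer (spatial, musical) audio.
```python
# pip install ultralytics pyttsx3 -- hypothetical detect-and-speak pass.
# The generic COCO model and frame file name are placeholders.
import pyttsx3
from ultralytics import YOLO

model = YOLO("yolov8n.pt")              # generic model, not wildlife-tuned
results = model("jeep_camera_frame.jpg")

seen = {model.names[int(box.cls)] for r in results for box in r.boxes}
if seen:
    sentence = "I can see " + ", ".join(sorted(seen)) + "."
else:
    sentence = "Nothing recognized in view."

engine = pyttsx3.init()
engine.say(sentence)
engine.runAndWait()
```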
- Project name: Connect Outdoors: Accessible Archery
-
Brief description: We will partner with Connect Outdoors, a non-profit that works with veterans and with people who are visually impaired (not necessarily veterans), to help make outdoor activities like camping, fishing, archery, and marksmanship accessible.
- Project name: Personalized Driving Displays for Automated Vehicles
-
Brief description: The goal of this project is to design HUDs for highly automated vehicles that correspond to specific driving styles (thrill-seeking, transit, and defensive). HUDs have already been designed for the transit and defensive driving styles. Focus groups need to be conducted to determine what driving behavior users associate with thrill-seeking drivers. Following the focus groups, a HUD will be designed.
* Milestone 1: Design focus group and recruit participants
* Milestone 2: Data collection + analysis
* Milestone 3: Design HUD for thrill-seeking driving
- Project name: Accessible Drones for Education in Kenya
-
Brief description: Develop an accessible method for programming drones, so that blind students (at our partner schools in Kenya) can interact with drones as an educational tool. Both the programming and the operation of the drone will need to be accessible to learners with vision impairment.
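One hypothetical direction, sketched below: an eyes-free, screen-reader-friendly command loop for a DJI Tello (via the djitellopy library), where typed words map to flight actions and every action is echoed back as announceable text. The command vocabulary and distances are illustrative.
```python
# pip install djitellopy -- hypothetical eyes-free Tello command loop.
from djitellopy import Tello

drone = Tello()
drone.connect()
print(f"Battery: {drone.get_battery()}%")

ACTIONS = {
    "takeoff": drone.takeoff,
    "land": drone.land,
    "forward": lambda: drone.move_forward(50),   # distance in cm
    "back": lambda: drone.move_back(50),
}

while True:
    command = input("command (takeoff/land/forward/back/quit): ").strip()
    if command == "quit":
        drone.land()
        break
    action = ACTIONS.get(command)
    if action:
        action()
        print(f"Done: {command}")    # feedback a screen reader can announce
    else:
        print(f"Unknown command: {command}")
```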
- Project name: Roadside Test for Impaired Driving
-
Brief description: Driving while impaired is a real problem. Legalized cannabis is making things even more complex. We will be developing a roadside sobriety test for cannabis that is based on behavior, not chemistry.
- Project name: Super SID Space Weather Monitor
-
Brief description: Using a SuperSID (a small radio-weather telescope) and sonifying/audifying the data from it. Could additionally explore Radio JOVE and other Society of Amateur Radio Astronomers (SARA) projects to find an interesting dataset to work with! Most of this will be a design project (maybe with some evaluation); see the mapping sketch after the milestones.
See http://www.radio-astronomy.org/node/210 and http://www.radio-astronomy.org/pdf/SuperSIDOrder.pdf. Exploring the possibility of doing a sonification project with this (details TBD).
* Milestone 1: Get all the materials set up for this: after making the choice, buy the thing, or get the data set from the group
* Milestone 2: First set of mappings/sonification process done
* Milestone 3: Updated round of sonification/mappings done
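The mapping sketch referenced above: one way Milestone 2 could turn a column of SuperSID signal-strength readings into audio via a simple pitch mapping. The input format (CSV with a strength column) and the frequency range are assumptions to be settled during design.
```python
# pip install numpy -- map SuperSID signal strengths to tones in a WAV.
# The input layout and the 220-880 Hz range are assumptions.
import wave
import numpy as np

SR = 44100
readings = np.loadtxt("supersid_day.csv", delimiter=",", usecols=1)

# Normalize to 0..1, then map to 220..880 Hz (two octaves).
norm = (readings - readings.min()) / (np.ptp(readings) or 1.0)
freqs = 220.0 * 2 ** (2 * norm)

samples = []
for f in freqs:                       # one short tone per reading
    t = np.arange(int(SR * 0.05)) / SR
    samples.append(0.3 * np.sin(2 * np.pi * f * t))
audio = np.int16(np.concatenate(samples) * 32767)

with wave.open("supersid.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes(audio.tobytes())
```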
Older projects listed below... not in active development, but still of interest.
- Project name: Alexa Service to Prevent Suicide
-
Brief description: Police officers and law enforcement personnel are at great risk for suicide, particularly just after a career transition (e.g., retirement). We will be developing an Alexa-based service that will engage recently-retired officers, in an effort to reduce isolation and anxiety, and hopefully reduce suicide risk.
- Project name: SPAM program development and evaluation
-
Brief description: Using the server that Rohan has developed, build a GUI that allows for easy input of and response to SPAM questions for use during driving experiments. This should also collect the following data:
* timestamp for:
* ready prompt presented, ready prompt accepted, question presented, answer chosen
* content for:
* question presented
* choices presented
* answer chosen
* answer accuracy
* participant number
The program should be able to run on a Windows computer, and participant input should be able to be given via the Panasonic touch screen in the driving simulator.
The program should offer multiple question options for each question point throughout the experiment, with randomized presentation of each question.
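A sketch of the logging side, writing exactly the fields listed above to CSV from a console stand-in for the GUI; field order and file layout are design choices to be confirmed.
```python
# Console stand-in for the SPAM GUI's logging: one CSV row per question.
import csv
import time

FIELDS = ["participant", "t_ready_presented", "t_ready_accepted",
          "t_question_presented", "t_answer_chosen",
          "question", "choices", "answer", "accurate"]

def log_trial(writer, participant, question, choices, correct_answer):
    t0 = time.time()                      # ready prompt presented
    input("Press Enter when ready...")
    t1 = time.time()                      # ready prompt accepted
    print(question)
    for i, c in enumerate(choices):
        print(f"  {i}: {c}")
    t2 = time.time()                      # question presented
    answer = choices[int(input("choice #: "))]
    t3 = time.time()                      # answer chosen
    writer.writerow([participant, t0, t1, t2, t3,
                     question, "|".join(choices), answer,
                     answer == correct_answer])

with open("spam_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(FIELDS)
    log_trial(writer, "P01", "Which lane had more traffic?",
              ["Left", "Right", "Equal"], "Left")
```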
- Project name: Influence of musical skills/practice on learning complex sonification mappings
-
Brief description: Lots of experiments have proposed that musical training may influence the ability to learn sonification mappings. But does it really have an effect?
* Milestone 1: explore musical sophistication scales (e.g., Jonathan and Grace's work) and see if any other studies have tried to do this (lit-review-type task)
* Milestone 2: build out a study design looking at complex sonifications + measuring musical skills (could use the audio from jonathan's study) + IRB
* Milestone 3: pilot the study and then collect data
- Project name: School Bus Information System - Mobile app, and web-app
-
Brief description: We are developing a comprehensive system to provide information to parents, teachers, and drivers about the location and status of school buses. There are 4 components: the parents' app; the school transportation coordinators' tablet app; the bus service's web portal; and the drivers' app.
Most of these have been researched. This project is about implementing functional prototypes and actual working versions/services.
* Milestone 1: Wireframes based on design research (much of which has already been done)
* Milestone 2: Build apps using prototyping tools or full-scale development tools (e.g., Sketch, React Native)
* Milestone 3: Evaluate and refine
- Project name: OpenStreetMaps Database Population App
-
Brief description: This project would be to develop an app to interface with OpenStreetMap, which is what Microsoft Soundscape pulls objects from. The app would be tailored toward allowing a lay person to easily locate and appropriately tag permanent-ish objects, either indoors or outdoors. Such a database could also be used by another SWAN offshoot (see below).
* Milestone 1: App wireframes and selection of appropriate tags
* Milestone 2: Vertical slice of app functionality including backend
* Milestone 3: Completed app
- Project name: SWAN Mobile Building Preview App
-
Brief description: This project would be conducted in concert with, or as a follow-up to, the OpenStreetMap project. Many participants in the SWAN study asked for a "virtual preview" version of the system. What this app would do is allow a user to virtually explore a building, just before entering, via a Unity soundscape on their iPhone. To achieve this, you would provide a UI where the user could select from a list of nearby building entrances or other reference/sync points ("J.S. Coon, Ferst and Cherry entrance"). Then, they could use the iPhone as a "joystick" to walk through a soundscape representation of the building, which would be populated using crowdsourced objects (with appropriate metadata tags) placed in OpenStreetMap. You could also have an experimental MR mode for the system, that would utilize ARKit to provide situated exploration relative to the sync points (but that wouldn't be the focus). These types of virtual previews are extremely effective when done in VR, but it's never been deployed like this, where a person could just pull it up while riding the bus over to a building, or right outside. This project would need substantial programming and Unity expertise.
* Milestone 1: App wireframes and technical feasibility assessment/ implementation plan
* Milestone 2: Vertical slice of app functionality including backend
* Milestone 3: Completed app
- Project name: Spearcons learning/transfer work
-
Brief description: Can learning/training on a subset of spearcons help people transfer knowledge from one set to another, leading to faster recognition of new spearcons, or better accuracy for a really large set? Does domain matter/bias the ability for these to work?
* Milestone 1: Choose spearcon set and decide on methodology
* Milestone 2: build out experiment and do the IRB
* Milestone 3: data collection
- Project name: Accessible Amazon Echo
-
Brief description: Teaching the Amazon Echo to recognize different pitches instead of speech, to help people who have trouble speaking.
* Milestone 1: test feasibility of this with the Amazon Echo API, storyboard the interaction
* Milestone 2: build out the recognizer for it (v1)
* Milestone 3: iterate and update to v2 + some early testing
- Project name: Musical Piece Analysis Visualizer
-
Brief description: At the Atlanta Botanical Gardens, they have an interesting area where the music and RGB LEDs are used to make an audio+visual show. There might be something interesting here about building a way to recognize patterns from music or data and to visualize those patterns from a huge dataset. Note: not sure exactly how this would go, was mostly a random idea.
* Milestone 1:
* Milestone 2:
* Milestone 3:
- Project name: Effect of audio/the multimedia effect on germane and extrinsic cognitive load for PhET simulations
-
Brief description: Mayer's multimedia principle has found (numerous times) that audio from narration or text description, in addition to visuals, can help students have better germane load when learning concepts. Exploring this for one of the latter sims, such as GFL:B or FL, may reveal whether there are effects on cognitive load during learning for students using visuals + sonifications.
* Milestone 1: design study for measuring the cognitive load, following Mayer's multimedia studies
* Milestone 2: irb + pilot testing
* Milestone 3: data collection + analysis
- Project name: Designing Interactions for Auditory Reactions on Facebook
-
Brief description: Sonifications show great promise in improving the usability of Facebook Reactions for sighted and visually-impaired Facebook users. Our studies have found that certain types of sounds are highly effective at conveying specific emotions. However, we have yet to explore the design of an interface that accommodates this new mode of interaction. Students working on this project will dive into the literature of auditory interfaces, affective computing, and accessible computing; develop low- and high-fidelity prototypes of user interfaces; and evaluate these interfaces with real-world users.
* Milestone 1: Survey of the literature: What has been done? What works, what doesn't?
* Milestone 2: Develop low fidelity prototypes of multiple interfaces for Auditory Reactions on Facebook
* Milestone 3: Down-select to 1-3 prototypes that will be mocked-up for usability testing
* Milestone 4: Run usability tests with real world users
- Project name: Presenting Auditory Facebook Reactions in Facebook Posts
-
Brief description: Auditory displays have been successful at effectively conveying information to individuals living with visual impairments. However, sighted people frequently encounter situational visual impairments. For example, one cannot focus visual attention on a phone while driving a vehicle. In scenarios such as this, an auditory interface would be ideal for delivering information to the user. Students working on this project will explore the literature of auditory interfaces and communications research, develop prototypes of different presentations of auditory Facebook posts, and evaluate their designs with real-world users.
* Milestone 1: Survey of the literature: What's been done? What works, what doesn't?
* Milestone 2: Develop low fidelity prototypes of multiple interfaces for Auditory Reactions on Facebook
* Milestone 3: Down-select to 1-3 prototypes that will be mocked-up for usability testing
* Milestone 4: Run usability tests with real world users
Ongoing topic areas under consideration:
- Driving
- * Integrate sensors, cameras, LEAP, Kinect, etc.
* Eye tracking in simulator
* IVAT applications
* Driving research
* Simulator maps, models, scripting, tools
- Sonification Projects:
- * Weather data sonification
* Gorilla movement/tracking data
* Accessible Fantasy Football sonification
* Aquarium/fish/critter data
* Aquarium fugue interactive exhibit
* Brainwave sonification
* Discus tracking/sonification of track and field movements
* Rowing sonification 2.0
* Other sports/human movement sonification (javelin, dance, walking)
* Sonification of GAIT with artificial limb/leg
* Sonification of prosthesis movement/alignment/error/grip/etc.
* Sonification of star birth/life/death cycle
- Weather Station
- * Set up weather station equipment, connect to it to get data, and log, graph, and sonify data
- SWAN 2.0
- * Indoor localization using UWB
* VR/AR-based wayfinding system
- Accessible MOOCs
- Accessible ILEs (Aquarium, Zoo, Science Museum)
- Accessible Stats
- * develop software (SAS, R, SPSS?)
* evaluate, iterate
* deploy in the field (schools?)
- Math for the Blind
- * GNIE extensions-line drawing and curves
* GNIE extensions-bar graph
* GNIE extensions-other graph families (pie?)
* GNIE extensions-support for figures, 3D shapes, 3D graphs?
* GNIE extensions-support for word problems
* GNIE extensions-export student answers to 'hand in' version
* GNIE extensions-question generation or/and bulk import of questions by teacher (not one-by-one in GNIE, itself)
- Sonification Sandbox - Web version
- Electronic White Cane Handle
- Accessible Games
- * Navy-ish (Navy 2.0)
* Battleship (coordinate plane)
* Other math/numeric games
* Lemonade Stand
* Fantasy Football, Baseball, Hockey, etc.
* Ergometer/stationary bike to generate power for video games and cell phone charging in Africa
- Symmetry and Usability
- * Eye tracking of symmetrical/asymmetrical interfaces
- Others:
- * Food and music - Behavior reinforcement of proper eating habits via gamification (a sequencer game w/ fruit icons vs. fatty foods - fruit icons are predictable and pleasant, fatty foods are chaotic and dissonant; promotes affective response to healthier foods)
* Music practice tool - promoting proper practice habits with auditory feedback
* Passive environmental sonification (sonifying current traffic conditions, weather, etc., in an always-on manner; aesthetic enough to not be turned off, effective enough to be kept on)
* Use LEAP as a conductor training tool