Defense for Out of Sight

This is what I talked about in my defense to obtain the academic degree ‘Bachelor of Science’ in the field of Media Systems at the Bauhaus-University of Weimar (I started there in the last year the programme was available, so there is no easy way to link to the actual programme I did).

The slides can be found here (link is *.pdf). The full thesis ‘Out of Sight – Navigation and Immersion of Blind Players in Text-Based Games’ can be found here (link is *.pdf). In any case, this post should give you a small overview of the topic as such.

Please note that, although I did some editing of the speech to present it here, there are many commas that aren’t required in the English (or German, for that matter) language. I used them to structure my speech.

Welcome to my defense in the field of media systems. After a year of work on it, I am happy that I can present to you the results and some further deliberations of my studies.

The topic of the talk is ‘Navigation and Immersion of Blind Players in Text-Based Games’.

In this talk, I will first present terms that are relevant to the study. Then I want to discuss the design of the research and continue by showing selected results of the study.

Finally, I will discuss these results and show why they are relevant.

First, I want to bring us all to the same level of understanding about text-based games.

Text-based games have a room description in a short and a long form. There are also exits, which can be presented in different ways – this becomes important later on. How the exits are presented is chosen by the developers. There is the completely abstract form (door, window), but you could easily change this to cardinal directions (north, south) or egocentric coordinates (left, right). Furthermore, there are objects that you can potentially interact with.
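To make this a bit more concrete, here is a minimal sketch in Python (purely illustrative, not taken from any actual game engine) of how such a room – short and long description, exits in one of the possible presentation forms, and interactable objects – might be modelled:

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    """A single location in a text-based game (illustrative only)."""
    short_description: str                                # shown on repeat visits
    long_description: str                                 # shown on first visit or on 'look'
    exits: dict[str, str] = field(default_factory=dict)   # exit label -> target room id
    objects: list[str] = field(default_factory=list)      # things the player can try to interact with

# The same two exits, presented in three different ways.
# Which form the player sees is a decision made by the developers.
abstract_exits   = {"door": "hallway", "window": "garden"}
cardinal_exits   = {"north": "hallway", "east": "garden"}
egocentric_exits = {"forward": "hallway", "right": "garden"}

kitchen = Room(
    short_description="The kitchen.",
    long_description="A small kitchen. A pot simmers on the stove.",
    exits=cardinal_exits,
    objects=["pot", "stove"],
)
```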

Interaction happens when the player types in a command. So let’s try to flee. Well, the game didn’t understand me. Let’s try to run away. Also not an option. This is a great source of frustration within a text-based game, since you can only use the interactions that the developers made available to you as a player. So let’s try to do something else. This works, but it didn’t do much, so let’s give in to the game itself.
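As a hypothetical sketch of why this happens (building on the Room sketch above, and again purely illustrative): the parser only knows the verbs its developers implemented, so synonyms such as ‘flee’ or ‘run away’ simply fall through to a generic error message.

```python
def handle_command(command: str, room: Room) -> str:
    """Toy command handler: only verbs the developers implemented are understood."""
    words = command.lower().split()
    if not words:
        return "Please type a command."
    verb, args = words[0], words[1:]

    if verb == "look":
        return room.long_description
    if verb == "go" and args and args[0] in room.exits:
        return f"You go {args[0]}."
    if verb == "take" and args and args[0] in room.objects:
        return f"You take the {args[0]}."

    # 'flee', 'run away', etc. were never implemented, so they all end up here.
    return "The game didn't understand you."
```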

These games are played by different kinds of people. First off – following Richard Bartle – there are different kinds of players. These are defined along the axes of acting vs. interacting and whether this is done mainly in the world context or with other players.

However, these categories are more productive to use if they are seen as _playstyles_ a player can adopt.

The nature of text-based games also makes it possible to integrate visually impaired or blind players. With a screenreader they are able to interact with the game just like every other player. In multiplayer versions, different playstyles and differently abled players meet. However, certain parameters should be considered by the developers if they want to create a game that is accessible to all their potential players. Blind players see themselves confronted with the ‘special games’ industry, which offers games that are specifically designed for their needs. However, in their everyday lives they usually have sighted friends too and want to interact with them socially and also: play the same games.

One of the less obvious parameters that might lead to a more accessible game for different players was picked for the study. Jack Loomis et al. have shown that the process of navigation is different for blind and sighted people. Let’s take a step back to see how and why this is the case. Simplified, there are two different ways a reference system for navigation can be built: in an allocentric way or in an egocentric way. For the purpose of general understanding, we see an allocentric system as a system whose orientation is something external to the ego – or the player’s avatar. In my test example this was done by presenting the navigation with coordinates like north, south, east and west.

An egocentric representation, on the other hand, is oriented around the player. In the test setup this was conveyed by giving exits as – for example – forward, backward, left and right. Blind people, and especially early-blind people, tend to navigate through their everyday world with an internal representation of the world that can be understood as egocentric, whereas sighted people tend to build up an allocentric representation.
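To illustrate the difference with a small, hypothetical sketch (not code from the study): the same exit can be labelled allocentrically or egocentrically, and translating between the two requires knowing which way the avatar is currently facing.

```python
# Cardinal directions in clockwise order.
CARDINALS = ["north", "east", "south", "west"]
# Egocentric labels, also clockwise relative to the avatar's facing direction.
EGOCENTRIC = ["forward", "right", "backward", "left"]

def to_egocentric(exit_cardinal: str, facing: str) -> str:
    """Translate an allocentric exit ('east') into an egocentric one ('right'),
    given the direction the avatar is currently facing ('north')."""
    offset = (CARDINALS.index(exit_cardinal) - CARDINALS.index(facing)) % 4
    return EGOCENTRIC[offset]

# Facing north, an exit to the east lies to the avatar's right;
# facing south, the very same exit lies to the left.
assert to_egocentric("east", facing="north") == "right"
assert to_egocentric("east", facing="south") == "left"
```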

My assumption in that context was that, if the navigation system is easier to handle for the respective player group, immersion should be encouraged.

I could easily have written a thesis only about the term immersion, but all in all there are four points one has to consider when talking about it in a game context.

-> Immersion involves a certain feeling of presence within the game environment and not the actual world the player is in.

-> Immersion includes the identification with one’s avatar or character.

-> Immersion is easily disrupted.

-> Immersion only occurs if the player accepts the rules set by the game, be it social rules, physical rules or others.

To work with a single-sentence definition, I defined ‘Immersion’ as the player’s undisrupted and focused engagement with the game they are playing.

Armed with all these definitions of terms, we can now discuss the initial hypothesis. From the previously mentioned theory that blind players navigate egocentrically, I expected them to perform better in an egocentric reference frame than in an allocentric one. I also expected them to perform better in the egocentric reference frame than sighted players would. This is then reversed for sighted players: they would perform better in the allocentric system than in the egocentric system, and furthermore better than blind players in the allocentric system.

The instance I used was taken from the Multi-User Dungeon ‘Discworld’. The map contains twenty-two rooms: twelve that can be classified as inside rooms and ten that can be classified as outside rooms. This is important because the player’s internal spatial representation is expected to change for these types of rooms, adapting the results of Stephen Kosslyn’s study. This means that a single outside room will generally be perceived as being larger than a single inside room.

The player has to assemble material to build a fishing rod in different areas of the game, build the rod and then catch a fish in a pond. This fish then has to be brought back to a non-player character within the house.

So this means there are three navigational tasks: first, navigating within the house – some items are in these three rooms; second, navigating within the garden to find the bait and the pond; and third, navigating back to the house. While this map is now available to you, it was not made available to the test participants before they had finished all aspects of the study.

In the study there were, all in all, twelve participants: six of them blind and six sighted. Three of each group tested the egocentric system and three of each group tested the allocentric system.

My findings were that there is a learning curve in getting used to the navigation system and the game in general for inexperienced players. However, blind players in the egocentric system had – compared to all the other players – the steadiest learning curve. It is steadier than the one for blind players in the allocentric system. This means they could make their previously acquired knowledge most productive while playing the game. Considering the navigation tasks, these curves show in the first section how well the player navigates through an unknown inside area, in the second section how they adapt their knowledge to unknown outside areas, and in the third how well they can navigate back through a previously encountered environment.

The analysis of the commands shows that, while sighted players have about the same Command Error Rate regardless of the navigation system that was presented to them, blind players perform tremendously better in the egocentric system than in the allocentric one. They make fewer errors overall and, as you can see from the Player-Side Command Error Rate, they also tried a lot of different modes of interaction that weren’t available, leading to a frustrating experience with the game.

The command error rate is calculated as the sum of half of the typos, the wrong commands (like looking in a direction that doesn’t exist) and the typed-in commands the game didn’t support.

This is then given as a percentage of all commands that the player typed in. For the player-side command error rate, the commands that are not implemented are taken out of the sum and subtracted from the total of commands.
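Written out as a small sketch – this is my reading of the definition above, and the variable names are mine:

```python
def error_rates(typos: int, wrong_commands: int, unsupported_commands: int,
                total_commands: int) -> tuple[float, float]:
    """Command Error Rate and Player-Side Command Error Rate as percentages,
    following the definition above: typos only count half."""
    weighted_errors = 0.5 * typos + wrong_commands + unsupported_commands
    cer = 100 * weighted_errors / total_commands

    # For the player-side rate, commands the game never implemented are
    # removed from the error sum and subtracted from the total of commands.
    pscer = 100 * (weighted_errors - unsupported_commands) / (total_commands - unsupported_commands)
    return cer, pscer

# Example: 4 typos, 3 wrong commands and 5 unsupported commands out of 100 typed commands.
print(error_rates(typos=4, wrong_commands=3, unsupported_commands=5, total_commands=100))
# -> (10.0, 5.26...)
```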

The recorded audio data only gives mild indications. The table here shows the percentage of each type of speech for every group of participants. In-game comments are about the game itself, meta-game comments deal with the nature of the game and the circumstances of the testing, whereas out-of-game comments are about something completely different, not related to the game or the circumstances in which it is played.

Players using allocentric navigation had more in-game parts in their speech, regardless of their abilities. The only other thing that is shown is that blind players tended to speak less during the game sessions. This might be explained by the fact that they had to have the text of the game read out to them via a screenreader, so they had to concentrate on listening and weren’t up for speaking themselves so much.

It also might have been that I didn’t sufficiently stress the thinking-aloud technique and how important it is. This is because I was afraid I would interrupt their gameplay if I gave too many comments that hinted at the test setup.

I used an Out-of-Game Questionnaire as a normaliser for the In-Game Questionnaire to determine the Immersion Value.
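The exact calculation is described in the thesis; purely as an illustration of the general idea (and not necessarily the formula actually used in the study), one could treat the out-of-game score as a per-participant baseline:

```python
def immersion_value(in_game_score: float, out_of_game_score: float) -> float:
    """Illustrative only: normalise the in-game questionnaire score by the
    participant's general (out-of-game) disposition towards immersion.
    The formula used in the actual study may differ."""
    return in_game_score / out_of_game_score

# A participant who is generally hard to immerse (low baseline) but reports high
# in-game immersion ends up with a higher value than the raw score alone suggests.
print(immersion_value(in_game_score=3.5, out_of_game_score=4.0))  # 0.875
```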

It appears that the immersion value was about the same for all participants. The blind participants were able to immerse themselves slightly better than the sighted players. As you can see from the individual variances within the groups, more participants would be needed to illustrate the effects of the navigation system on immersion via questionnaires. However, if you take out the most extreme values in each group, the tendency seems to be that egocentric navigation systems generally lead to more immersion than the allocentric system, for any group of players. This holds for blind and sighted players alike, which is a surprising result in light of the hypothesis.

With regard to the navigation questionnaire, I wanted to point out something interesting. When sighted players of the egocentric system were asked about allocentric coordinates, they were never shy about giving an answer, although they never had access to any information about them. And all their answers were consistent, too!

It seems that sighted players encode the ‘forward’ movement with the cardinal coordinate ‘north’. This finding could be especially interesting for research in spatial representation of a player or user who navigates through an artificial world space. Blind players of the egocentric system, however, declared that they weren’t able to give an answer to this question.

Another interesting point is that the accuracy in the room-estimation task (how many ‘rooms’ lie between point A and point B) was highest for players of the egocentric system, whereas sighted players here generally performed better than blind players.

All in all, the Navigation Questionnaire shows that maybe some of the tasks are better suited to being easily solved by sighted players, and that blind players might have problems with the type of questions that were asked in general.

All these results point in the direction of mildly confirming the hypothesis, though not in every single metric – only when seen in perspective.

And here we come to things that should be considered in a future study.

One of them is using an English-language game while living in a German-speaking community.

The level of fluency in English was assessed by the participants themselves if they weren’t native speakers. They were simply asked whether they felt confident navigating through an environment. There could be an influence on the results depending on whether the game was played by a native speaker or as a game written in a later-acquired language. The fact that the game was in English tremendously diminished the potential pool of especially blind participants who live close by.

Also, all the test sessions were recorded by only one person and analysed by only one person. There are probably some bias issues that I haven’t recognised as such yet.

With these issues in mind, the results of the study I conducted can be used in the development of text-based games as well as in further research. Developers of text-based games can take these results to make their game more accessible or more challenging for differently abled player groups. This can maybe even create whole new game dynamics.

The findings of this study can also be made productive, in adapted form, for general issues of navigating through virtual spaces. They suggest that, especially in non-egocentric navigation systems, it is important to provide the sighted user with information on how they relate to the whole world, or at least to a larger part of it.

For academia, I opened up the possibility of using text-based games as a tool for research in the field of psychology and for a better understanding of perceptive processes in the field of Human-Computer Interaction and Usability.

Personally I would be interested in creating intuitive map interfaces for blind users.

I opened a little door in the field of research into or with text-based games. Let’s find out what is behind it!

Thank you for listening to my defense and I’m open for questions now.
