Beyond Transparency — Comments on the Update to Submission and Reviewing Guidelines for CHI

This is a crosspost of an article I first published on Medium.

[This post specifically addresses a recent update to the submission and reviewing guidelines of CHI, a conference within the Human-Computer Interaction (HCI) research community, but might be relevant for readers interested in discussions around what constitutes ‘good research’ more broadly.]

To discuss different aspects of what constitutes quality in research, a myriad of terms and concepts are summoned. We talk about rigour, transparency, accountability, replicability, reflexivity, significance, stringency, consistency and many more. As someone who is passionate about methodology and enthusiastically discusses the epistemological consequences of different ways of knowledge production, I was excited to see an update to the submission and reviewing guidelines for CHI 2020. After all, it was a chance to reflect on what we can know, and how, by looking at how we claim knowledge in the first place.

Given that I have received reviews calling my approaches too critical for the field of HCI and its methods, and my descriptions of methodological detail on the one hand (R1) putting others to shame and on the other (R2) being too excessive, I welcome any approach that aims at a shared understanding of what we might consider high-quality research, and of where different conceptualisations differ, how and why. So, let me start this post by sincerely thanking the authors and contributors who worked towards updating these guidelines in a concerted effort. They pushed the discussion further and brought renewed vigour to thinking about this collectively. Let me take the opportunity here to reflect further on the guidelines, synthesising some of the critique brought up on different social media platforms and adding my own, personal perspective.

The overall aim of the update is to increase transparency in reporting research. It is relevant to keep this in mind as an overarching goal, particularly because by focusing on transparency in this update, the initiators have added another dimension to how we, as a community, assess research and what matters to us. This means that how we assess transparency and what we define as transparent is core to understanding which research is (highly) valued and to communicating expectations to authors.

The first key component the initiators refer to is replicability. By starting with this one, they already set the pace. Replicability only makes sense from a post/positivist or critical realist paradigm, one that assumes no standpoint and operates with a notion of disembodied objectivity. However, the initiators do not indicate how the concept of replicability presupposes a certain kind of research and, given it is the first one listed, potentially excludes others. It implicitly (not necessarily intentionally) disregards those coming from a critical/feminist/queer lens, in particular those who, like myself, root their work firmly in an epistemology that values the privilege of partial perspective and different kinds of situated knowledges. How do you replicate an autoethnography where the very point is to provide an in-depth, highly contextual analysis? How do you replicate an argument for ‘tangible bits as a new interaction paradigm’ (a paper that has the highest citation count of any CHI paper according to my query)?

Another aspect the initiators focus on is sharing data. Inspired by (and heavily advertising for) the Open Science Framework, the update states that “reviewers may expect that all materials created for this research (such as experiment code, stimuli, questionnaires, system code, and example datasets), all raw data measured, and all analysis scripts are shared.” There is a lot to unpack here, notably the lack of nuance on whether the practices associated with open science are at all achieving what they set out to do (thanks to stuart reeves for pointing this out). Such a requirement also ignores the vast variety of institutional particularities that researchers have to consider, ranging from supranational levels (EU, GDPR) to individual, local routines. And while the update acknowledges that sharing data might not always be possible (and argues for explaining why), it sets a normative expectation on sharing and, subsequently, a normative expectation for research (and institutional contexts) that allows such sharing. On that note, a discussion in the CHI Meta Facebook group has pointed to a more nuanced discussion and some practical guidelines, though those might not be appropriate when working with marginalised participants. In addition, there is an assumption that all data can be digital or digitised; if it cannot be provided, it needs to be specified why. According to the update, why data can be shared and which reflections go into sharing which parts of the data is less relevant (a.k.a. sharing as the norm), while absence has to be defended (a.k.a. diverging from the norm has to be defended and can be attacked).

In consequence, the update also invites authors “to share as much non-sensitive and non-proprietary code as possible to help reviewers scrutinize, replicate and reproduce your results”. Besides the culture of suspicion and mistrust this perpetuates (instead of assuming expertise and knowledge and asking for clarification out of curiosity and respect), this adds an entirely new workload for authors and reviewers without acknowledging the potential effects this might have, particularly with an update published only six weeks before the abstract submission deadline. Setting aside that papers introducing datasets have a notoriously difficult time getting accepted to CHI, merging the publication of data with any type of reporting on analysis devalues them further and makes it less likely for them to be accepted in the larger field as stand-alone contributions. Moreover, the sentence drastically increases demands on reviewers. For context, associate chairs are expected to manage reviewers for 8–10 papers and additionally provide in-depth reviews for another 8–10 papers within a time span of only six weeks, all while the semester starts up again (at least at my university, it coincides with the start of a teaching term) and most of us are expected to continue conducting high-quality research and contributing to the self-administrative aspects of the academy. If we want to take this seriously (and I am not sure we should to the extent the update implies), we also need to change the practices and frames in which authoring, submission and reviewing occur.

Most of the online critique has centred, though, on the different requirements posed for reporting “technologically oriented” and “qualitative” approaches. Full disclosure on this part: one of the initiators (Matthew Kay) has already indicated that this part will be revised. However, the core distinction seems inappropriate, as the first category conflates a technology focus with quantitative research, which, in my opinion, is an inadequate reduction of the vast variety of approaches that can focus on technology (including those that make an argument through philosophy or research through design).

Lots of buzz has been about limiting positionality, rationale for design, transparency of decision making and ethical contextualisation to qualitative approaches (see my annoyed tweet, Melanie Sage questioning this suggestion, or further discussions initiated by Sarita Schoenebeck). In my understanding, this should be required of any research. At the very least, expecting some work to provide more detail than others is deeply unfair given that CHI operates with a strict page limit (which was also raised as an issue by Casey Fiesler on Twitter). On a side note, the wording on sharing conveys an entirely different approach as to why and how this is valued. Authors are “welcome” to share data only after having obtained explicit consent to do so. This indicates an opportunity instead of implying a requirement.

Before I conclude, allow me to reiterate that I am deeply grateful for the initiative to rekindle discussions around how we present our research and review our peers. My critique and the comments on social media indicate a level of care and an expectation towards the field to keep working on the issues of quality assessment in HCI research. Going forward, we should discuss what implications the high-level expectations we state have for how we organise and value authoring and reviewing processes. As a longer-term project, I am interested in collecting actionable criteria for the assessment of a range of contributions, epistemologies and methods present in HCI. Let’s keep talking about it.

Post Scriptum:

Edit Notes:

  • edited the sentence discussing how the start of the reviewing period coincides with the start of the teaching term to localise it appropriately to my personal context (thanks to Geraldine Fitzpatrick for pointing this out).
