Evaluation

How do we know whether the hack week was successful? What does successful even mean, and how do we define metrics for that success?

Hack weeks take a great deal of work to organize, and as organizers we often want to know whether the time and effort spent on the organization was valuable. Did the workshop serve the needs we hoped it would serve? Did it fulfil the goals and objectives we settled on during the organization? Did participants enjoy the experience, and did they feel comfortable and welcome? This is where evaluation of the hack week, its individual elements, its organizational structure, and its design decisions becomes important. Without evaluation, there is no learning about the event as a whole, and no potential for improvement.

We therefore recommend that organizing committees incorporate evaluation and feedback mechanisms into the design and implementation of their participant-driven events.

Running Experiments

We try to follow established best practices as much as possible at our hack weeks. For many design decisions that need to be made during implementation, there is a huge amount of research and knowledge available that we can draw from (e.g. best practices for teaching computational methods), some of which we aim to share on the resources page. However, as organizers we also acknowledge that we do not necessarily have deep knowledge of the current state of research in all topics relevant to hack week organization, and that we are learners, too. And for some things, like how to facilitate the creation of hacks and the formation of project teams, there are, to our knowledge, no established best practices.

While we learn about and improve our knowledge of the research areas that directly touch the hack week model where we can, we often implement new ideas as experiments where that knowledge isn't available. We may have an idea that might improve the workshop, for example in the facilitation of project work, or perhaps we are in a new location with rooms that have particular features. Often we don't know in advance whether this new idea will improve the workshop overall (or what "improve" even means in that context!), but we would like to find out.

Implementing these changes as experiments has a number of advantages:

* Experiments (as opposed to a structure that is simply assumed to work) allow for innovation, because they make it possible to constructively question and critique organizational structures that may be in place for no other reason than "this person five years ago thought it was a good idea".
* Experiments allow for constructive failure. Any experiment runs the risk of not working. Nobody wants a terrible workshop, but there is a wide space between perfection and "this particular thing didn't work out so well" where experiments can inform future decisions, and allow us to improve on designs that may be adequate, but not actually good.
* Experiments allow for iterative improvement, and can therefore remove some of the pressure (imagined or not) on the organizing committee to get every decision perfectly right. Organizing participant-driven events can be a daunting task, especially for new organizing committees, and there may not exist knowledge about how to design a perfect workshop for their particular community.

Experiments can be a hugely useful tool, but in order for them to work, they require evaluation. As in our scientific work, where we form a hypothesis, collect data and subsequently analyse that data to confirm or reject our hypothesis, hack week experiments should be set up with a clear hypothesis, a way to collect relevant data, and a pathway for evaluation at the end. As organizers come up with new ideas they want to try out, they should simultaneously consider how they will figure out whether the idea was worthwhile, and how to measure success.
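As an illustration, the sketch below shows one way an organizing committee might record such an experiment so that the hypothesis, data source, and success criterion are fixed before the event rather than reconstructed afterwards. The structure and all names in it are hypothetical, not part of any existing tooling.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """A single hack week experiment, written down before the event starts."""
    name: str
    hypothesis: str         # the improvement we expect, stated up front
    data_source: str        # where the evidence will come from
    success_criterion: str  # what "it worked" looks like, decided in advance
    outcome: str = ""       # filled in at the organizers' debrief

# Hypothetical example: trying short project pitches on the first day
pitches = Experiment(
    name="lightning project pitches",
    hypothesis="Two-minute pitches on day one help more participants join a team.",
    data_source="end-of-workshop survey question on team membership",
    success_criterion="fewer participants working alone than at the last event",
)

# After the hack week, the committee records what actually happened
pitches.outcome = "recorded at the post-event debrief"
print(pitches)
```

Writing the success criterion down in advance keeps the post-event discussion honest: the committee evaluates the experiment against what it set out to test, not against how the week happened to feel.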

Different Forms of Evaluation

Evaluation can take a number of different forms, both formal and informal. At the most basic level, organizers might want to get together at the end of or after the workshop and discuss their experiences, observations and conclusions. At Astro Hack Week, for example, we often either have dinner as an organizing committee on the last night, to discuss experiences and issues while they are fresh in people's minds, or hold a follow-up online call shortly after the end of the hack week. These observations often provide a helpful, bird's-eye view of the hack week from the perspective of the organizers: how did the group respond to a certain tutorial? Did the room facilitate the hacking, or were there many issues due to a lack of power outlets? Were most people embedded in project teams, or were many working on their own?

Observations from the organizers can only give a very incomplete view of the workshop, however. Organizers have thought about the hack week for months, and often know the precise motivation for certain design decisions. They are likely to be blind to a number of issues, especially those that might affect first-time attendees or people from backgrounds not shared by anyone on the organizing committee. Participant feedback is therefore a crucial tool for evaluating specific experiments as well as the attendees' experience as a whole. It is useful to have a feedback mechanism available during the hack week itself, so that organizers can respond promptly to any issues being flagged: for example, the group of organizers (or a subset) may be designated as contact persons for feedback, and can make themselves available for participants to raise issues. However, there may be sensitive issues that participants do not want to raise in person. For these situations, an anonymous feedback mechanism is useful. For example, participants could be encouraged to write sticky notes and paste them to a wall as they leave the room. If this is too public, there could be a box with paper and pens, for example in the coffee area, where feedback can be shared anonymously. The organizers can then read the notes during a lunch break or after the end of the day, and make timely decisions about whether they need to take action.

While an anonymous feedback box is valuable for feedback that requires immediate action, it is also valuable to gather information from participants near the end of or after the hack week, which may then drive design decisions for the next workshop or hack week. Because our hack weeks came out of the Moore-Sloan Data Science Environment, we have always been interested in hack weeks in the context of career trajectories of the attending researchers, models for learning computational skills, how physical space affects learning and collaboration, and the uptake of (open-source) software tools and Open Science and reproducibility frameworks. With these in mind, we have designed a shared end-of-workshop survey that we use across hack weeks, which is available in PDF and Qualtrics format in the HackWeek Toolkit repository on GitHub. Beyond those questions, which we designed with the help of affiliated social science researchers, we also ask specific questions about the participants' experiences as feedback on the workshop in general or on particular elements. For example, we ask whether participants felt they learned useful skills from each of the tutorials, whether the space they were in was conducive to project work, and whether they would prefer more tutorial content or more open project time in future iterations of the workshop. These questions are usually designed to elicit specific information we are interested in, but we also include a number of open-ended questions aimed at eliciting both positive and negative feedback: identifying issues that participants didn't feel comfortable raising during the workshop, but also helping us identify activities or design choices that participants particularly liked.
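To show what evaluating such responses might look like in practice, here is a minimal sketch of summarizing an exported survey in Python with pandas. The column names are hypothetical placeholders, not the actual fields of the shared survey.

```python
import pandas as pd

# Hypothetical end-of-workshop survey export; the column names are
# illustrative placeholders, not the actual fields of the shared survey.
responses = pd.DataFrame({
    "tutorial_useful": [5, 4, 4, 3, 5, 2],  # 1 (not useful) to 5 (very useful)
    "space_conducive": [4, 4, 5, 3, 4, 4],  # same 1-5 scale
    "prefer_more_of": ["projects", "tutorials", "projects",
                       "projects", "tutorials", "projects"],
})

# Summary statistics for the Likert-scale questions
print(responses[["tutorial_useful", "space_conducive"]].describe())

# How do participants split between wanting more tutorial content
# and more open project time?
print(responses["prefer_more_of"].value_counts(normalize=True))
```

Even a summary this simple, compared across years, can show whether a change (say, shortening tutorials) moved the numbers in the expected direction.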

It is important that these surveys do not get too long. While there are many things we as organizers want to know about, long surveys cause fatigue: participants become less likely to fill them out, or to answer in as much detail as we hope, if the survey drags on. In general, we aim to set aside a block of time (around 20 minutes) on the last day, for example after lunch, where we invite participants to fill out the survey, and specifically explain why it is important to us and what we will do with the information they share. Response rates drop sharply after the end of the event; running the feedback survey exclusively after the event has never generated response rates higher than 50% for us.

Ethnographic Field Work

The hack weeks have generated some interest from social science researchers who work on topics related to research and open-source communities. Over the course of the past few years, we have had several researchers visit the hack weeks and conduct field work while there. These have been very fruitful collaborations, and some researchers have shared their work with us in various ways, which has provided an important outside view on the community, generating insights that are not easily available to those of us who are embedded in the community and take many of its norms and traditions for granted. One example of a summary of field notes was shared with us in the form of a blog post by Brittany Fiore-Gartland. Our work on addressing and mitigating the prevalence of the impostor phenomenon has been a direct consequence of the insights she shared with us at the end of the week.