
Research in the Innovative Media Research and Extension Department and the Learning Games Lab

Though we are situated in a university, our team functions much like a game and educational media development studio. The learning tools we create vary in media (animation, video, app, game, virtual lab), content (food safety, math, financial literacy, nutrition, agricultural sciences, etc.), audience (young children, youth, adults, field workers, professionals), and the environments in which they will be used (in school, in public, in training seminars). The one consistent aspect of almost everything we develop is that our work is research-based. Our products are rooted in the expertise of researchers in Extension: we use several methods to understand what products need to be made and how they should be designed. We depend on our user testers to help us make improvements throughout development. Department faculty and our research partners evaluate the impact of the learning tools and review our development processes so we can improve our operations.


Formative research, conducted before and during production, informs development by establishing the need for a product, identifying the audience, reviewing existing materials, and drawing on theory about how best to create the necessary change. Formative research also guides the product through constant testing during development. Summative research typically evaluates how the product changes users and reviews the processes used in development.


[Diagram: Types of formative and summative research. Formative research occurs before and during production; summative research occurs after production and includes impact & evaluation and process review.]

Design research

Almost all of our products are created with content experts who join our team. They know their content — such as financial literacy, math education, or how to communicate climate change — and often understand the need for new materials as a result of their own research. We use several different approaches in determining how to create a product. Our team usually starts with what the research says about need: What is the change that needs to be created? Who needs to change? Why haven't these audiences already changed?


As we continue through the design process, we may do an environmental survey to explore where learners are currently engaged with the content area and what learning in a given environment could look like. For example, in creating math games for middle school, we looked at how curricula were already being used, what teaching in 50-minute class periods can facilitate, and how teachers use and view games in their teaching.

We always conduct an analysis of what already exists for learners in that content area, and explore media from related content areas. For example, when developing tools around food waste and climate change, we explored how existing websites talk to youth about food waste, the kinds of activities they offer, and what media we could integrate into our work. We also wanted to review how media dealt with the topic, so we explored games about climate change and in-person activities that encourage youth to make a change in this area.

Finally, we often use theory to guide our decisions. Constructivism is a learning theory which proposes that learners construct their own knowledge from experiences, combining them with what they already know to create new meaning. For example, in developing our game on credit scores, Night of the Living Debt, we used principles of constructivism to guide learners in exploring within the game, making their own choices, and using game feedback to build on what they already knew. We are exploring some of the many behavioral change theories to build products that guide learners in implementing food safety guidelines when producing food. In addition to knowing their content area, our content experts often bring a theoretical grounding in how best to create the necessary change, and we explore research on those theories to guide our materials.


User testing

[Photo: Summer Think Tank consultants testing Agrinautica in the Learning Games Lab.]

We do a lot of user testing in our Learning Games Lab: some of our games have gone through more than 60 sessions with kids playing prototypes. User testing means testing by users: in reality, we are testing the product and how users interact with it. We conduct user testing with early scripts, graphics, and ideas. We ask users to move through wireframes (simple interactive storyboards that show how a game or interactive program could work) to show us whether our initial designs make sense. We ask users to play with prototypes and incomplete versions to make sure our interfaces are usable and users know what to do. We also explore early ideas around evaluation to see whether players are learning what they need to, or making the changes the product is designed to prompt. Once a game is in its final stages, we often need to test level balancing, to be sure the feedback and challenge are appropriate.


We often test with different audiences and in multiple locations. With our Math Snacks games, we used early prototypes with youth in our Learning Games Lab to make sure graphics felt appropriate and to change gameplay as needed. Once we had fully playable games, we tested them in math classrooms, because we know players respond a little differently when they play in a formal environment than they do at home. We also tested with teachers: as the gatekeepers, they decide what games kids will play, and they have valuable insight into the ways math can be taught and explored.


[Photo: Dr. Chris Engledowl leads students through classroom testing of one of the Math Snacks games, 'Creature Cavern'.]

We have rigorous procedures in place for conducting a wide variety of user testing, for immediately implementing what we learn from testing during development, and for documenting the testing in ways that are useful to our design team and our clients.


Impact and evaluation

If we’ve done our user testing correctly, we should have a fairly good idea of the impact of our products. We always try to capture at least some of the ways our products change users. Depending on funding and the scope of the project, we may be able to conduct a large-scale randomized trial (such as the work done on the Math Snacks suite of games) or a simple user survey assessing self-reported outcomes. Ideally, the evaluation team is separate from the development team to prevent bias in findings. However, evaluation experts and designers should work together to determine what changes are likely to occur, with what audience, and what the full intervention will look like.


Finally, we depend on expert review for evaluation. While we use expert input in the development of our products, including the use of advisory panels, we have also created a peer-review process for media developed in our department. To ensure our products meet the standards of our peers, we ask external content experts, educators, and media developers to review products on several aspects, including usability, appropriateness of the media, and accuracy of content. This peer review mirrors the review often conducted for other Extension publications, and provides an avenue for additional feedback to improve the products.


Process review

How do we make games, animations, videos, apps, and virtual labs that work? We learn something on every project, so every product we make is an investment in each future product. We conduct research on how to create tools. That includes reviewing what we did on a project: Did we meet deadlines, deliver the product we intended to, and use funds wisely? This kind of process evaluation is often required on grants and is often conducted by one of our external evaluation partners. Internally, we look at our processes to inform our work. For example, how do we integrate the knowledge all of our developers have about creating accessible tools? How can we meet the needs of all of our learners by offering media that is more representative? What tools or management processes should we explore to be better at our work?


Our process research may propose models or theories for development, such as

  • the Transformational Game Design Process (Chamberlin & Schell, 2018),

  • our framework for accessibility in game design (Cezarotto & Chamberlin, in press).


It may offer recommendations on how to create better tools, such as

  • strategies for creating exergames (Martinez, 2017).

[Photo: The Video Closet in the Learning Games Lab, where user testers reflect on their experiences.]

We also share the methods used for conducting other research. Examples include:

  • Engage, Reflect and Record: Using the Video Closet as a Qualitative Method for User Testing and Evaluation (Armstrong, Cezarotto, & Chamberlin, 2021)

  • User Testing in the Learning Games Lab (Chamberlin et al., 2017).


Finally, some of our process review is shared only with internal audiences. Annually, we conduct a design retreat to review the processes used in development and to identify how to consistently improve our work.


Because NMSU is a land-grant institution, we take our outreach and research missions seriously. We work hard to make sure products developed in the department are used by the intended audiences and are readily available, and we appreciate the opportunity to share with a larger audience both the processes we use to develop our tools and the impacts those tools have.



References


Armstrong, A. L., Cezarotto, M. A., & Chamberlin, B. A. (2021). Engage, Reflect and Record: Using the Video Closet as a Qualitative Method for User Testing and Evaluation. Manuscript in progress.


Cezarotto, M. A., & Chamberlin, B. A. (in press). Towards accessibility in educational games: A framework for the design team. InfoDesign - Brazilian Journal of Information Design.


Chamberlin, B., Trespalacios, J. H., Muise, A. S., & Garza, M. C. (2017). User Testing in the Learning Games Lab. Games User Research: A Case Study Approach, 55.


Chamberlin, B. A., & Schell, J. (2018, August 2). The Secret Process for Making Games that Matter. Presentation at the Connected Learning Summit, MIT, Boston, MA.


Martinez, P. N. (2017). Active games: an examination of user engagement to define design recommendations.






Written by: Barbara Chamberlin, Interim Department Head, bchamber@nmsu.edu, with collaboration from Matheus Cezarotto, Pamela Martinez, Amanda Armstrong, and Amy Muise.


