Deep Dive: UX Tools for Game Development

Nora Leca
15 min read · May 2, 2021


Leveling up with better UX insight 💡

A good UX designer is versed in many tools that help make our work better informed, faster to iterate, and easier to communicate. Unfortunately, many of the kinds of tools that tech & software companies use all the time didn’t really exist in practice at the game studio I worked for, so part of my role as the only UX designer in the studio became focused on investigating, testing out, and integrating new ways of thinking about our experience’s strengths & weaknesses and our players’ goals.

The biggest wins I found were in using small-scale user testing for more data-driven design, rapid interface iteration with prototyping, and conceptualizing & communicating player progression with journey maps.

Lvl 1: User Research & Testing on a Budget 📝

One of the biggest issues I found in creating games was a lack of data-driven design, and a creative process that relied much more on intuition, managers’ opinions, and educated guesses than on reliable & effective research methods. Proposing UX solutions that genuinely take the user’s needs into account felt like a shot in the dark a lot of the time, with little else than my own design sense and our competitors to guide my decisions.

Not all User Research is Good User Research 👎

The existing research, testing and data analysis methods came in 4 flavours when I started at the studio: Marketing Personas, Internal Group Testing, Tracking & Analytics Data, and External User Testing. All of them had drawbacks and inadequacies which I’ve outlined below, and I didn’t feel we could accurately gauge the impact of our designs & UX or the larger picture of enjoyability & usability in our game with only these methods.

Narrow Marketing Personas 👔

  • created in a vacuum as an ideal, not a practical model formed from real potential users’ goals & desires gathered through interviews & surveys
  • created by managers & marketing specialists mostly as validation for executives to exemplify our intended market segment
  • persona data consisting only of superficial demographic points, and a very narrow view of user desires based on the other apps & brands they associate with (see the sketch after this list for a richer persona record)
  • having only 1 version of a persona, not multiple to represent a diverse user base
  • rarely updating or referring to it, rendering it mostly irrelevant to actually designing for the target user
  • lacking a clear & thorough model of real users, which leaves little baseline to compare user testing results against
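As a contrast to the demographic-only personas above, here is a minimal sketch (hypothetical fields and example values, not our studio’s actual personas) of the kind of richer persona record that interview- and survey-driven research tends to produce:

```python
# Hypothetical persona record: goals, behaviours, and frustrations gathered
# from interviews & surveys, not just demographic checkboxes.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    demographics: dict                          # still useful, just not sufficient
    goals: list = field(default_factory=list)
    behaviours: list = field(default_factory=list)
    frustrations: list = field(default_factory=list)

# Multiple personas to represent a diverse player base, not a single ideal
commuter = Persona(
    name="The Commuter",
    demographics={"age_range": "25-34", "platform": "mobile"},
    goals=["make visible progress in 10-minute sessions"],
    behaviours=["plays daily on transit", "rarely spends money"],
    frustrations=["long unskippable sequences", "features that need two hands"],
)
collector = Persona(
    name="The Collector",
    demographics={"age_range": "18-24", "platform": "mobile"},
    goals=["complete every character roster"],
    behaviours=["logs in for every limited-time event", "makes occasional small purchases"],
    frustrations=["unclear drop rates", "inventory management friction"],
)
```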

Bias-laden Internal Group Playtests 💬

  • group playtesting sessions moderated by producers and/or game designers infrequently or only at production milestones
  • participation is optional and made up of members of the game team itself as well as studio peers from other projects, introducing many different biases due to their relationships to the product, the moderators, and the other participants
  • the informal, open group setting, where feedback is volunteered aloud to the group, lets vocal people dominate the airspace while quieter people feel less comfortable contributing
  • without a researcher to design the test, parameters are too loose to gain concrete findings on usability or design, with moderators mainly focused on asking if the game was fun & engaging and noting any bugs discovered

Limited Tracking & Analytics Data 📊

  • analyzing data collected from event tracking can offer useful insights into trends in a product’s accessibility and in-practice usability amongst large groups of live users
  • not implementing enough broad & targeted trackers, or lacking an easy, accessible way to compare & contrast the data, inhibits the ability to harness it for insights
  • if designers and researchers don’t work closely with developers to give input on what the data collection should look for, the data will primarily serve business & developer needs, surfacing little beyond things like user churn and technical bugs (a minimal sketch of designer-relevant event tracking follows this list)
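To make that last point concrete, here is a minimal sketch of designer-relevant event tracking; the event names and the `send_event` transport are hypothetical stand-ins, not our studio’s actual analytics pipeline:

```python
# Hypothetical event-tracking sketch: UX-relevant events declared alongside the
# usual business/technical ones, so designers can later query usability questions.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class Event:
    name: str                  # e.g. "tutorial_step_completed"
    properties: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

def send_event(event: Event) -> None:
    """Stand-in for whatever analytics backend the studio uses."""
    print(json.dumps(asdict(event)))

# Business & technical events (what tracking often gets limited to)
send_event(Event("session_start", {"build": "0.9.2"}))
send_event(Event("crash_report", {"screen": "inventory"}))

# Designer-driven events: where players hesitate, back out, or get lost
send_event(Event("tutorial_step_completed", {"step": 3, "seconds_on_step": 42.5}))
send_event(Event("popup_dismissed_without_action", {"popup": "daily_reward"}))
send_event(Event("purchase_flow_abandoned", {"stage": "confirm_price"}))
```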

Expensive External User Testing 💰

  • outsourcing user testing to external testing agencies can be prohibitively expensive, meaning only a scarce few tests will be budgeted for, leaving much of the development cycle not properly tested to validate progress
  • testing agencies can get booked up when busy, reducing flexibility in being able to perform testing when a project most needs it
  • not all testing agencies provide equal results, sometimes being less flexible or transparent about letting the product team give input on participant selection criteria, or survey & interview questions
  • being able to see a recording of a user’s exact hand movements can show a lot about nonverbal intents & hesitations, but not all testing records the hands, often providing only device & audio recordings, and sometimes facial recordings

We determined that we needed an agile user testing method that was more accessible, flexible, and affordable, one we could rely on for more targeted usability insights.

Agile Small-Batch User Research Testing ☕️

The Nielsen Norman Group, the industry-leading user research think tank, defines the ideal number of participants in a user test as 5: enough people to provide varied feedback for a broader picture of the experience, and enough overlapping feedback to see where trends are starting to form in user perception & engagement with the product, but not so many that valuable time is spent recruiting participants, running user tests, and analyzing the data only to get increasingly diminishing returns on the number of useful insights beyond 5 people.
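The reasoning behind the number comes from the commonly cited Nielsen/Landauer model, which assumes each tester uncovers roughly 31% of the usability problems in an interface; a quick sketch of the resulting diminishing returns (the 31% figure is the model’s assumption, not our own data):

```python
# Diminishing returns of adding testers, per the Nielsen/Landauer model:
# proportion of problems found = 1 - (1 - p)^n, with p ≈ 0.31 per tester.
P_PER_TESTER = 0.31

for n in range(1, 11):
    found = 1 - (1 - P_PER_TESTER) ** n
    print(f"{n:2d} testers -> ~{found:.0%} of usability problems found")

# Around 5 testers already uncover roughly 85% of the problems; each
# additional tester after that adds only a few percentage points.
```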

With help from a colleague who had experience in Game User Research, we created a framework for an easily replicable testing format & technical recording setup that we could run out of an office boardroom in an afternoon with minimal cost, preparation, or disruption to the development process.

We ran a proof-of-concept test at one of our project’s milestones to show management that it was a viable way of gaining critically valuable insights in a smarter manner going forward, and in 2020 we started researching tools and methods to take the user testing fully online and perform small-scale remote testing due to the limitations of #pandemiclife.

Our Test Design Checklist ✅

  • Intake Survey: free tools like SurveyMonkey or Google Forms can easily create custom intake surveys that ask the right questions to help narrow down your potential applicants to just the 5 people who best fit your target audience profile (this is where having proper personas can help a lot!). With a shareable link to the survey, talent recruiters or project managers can manage email communications and share open calls to participate in the appropriate channels, and can review & collate the responses with other stakeholders like researchers and designers (a small screening sketch follows this checklist).
  • Testing Methodology: There are a lot of different methods for user testing, but to get the most insight from both general feedback & a focus on usability in our limited time with just 5 participants, we decided to do a hybrid explorative & assessment test that allowed the user to explore at their own pace and think aloud with any observations or feedback, but the test was moderated and would include a questionnaire & interview with the participant to get more targeted feedback on areas of concern.
  • Test Build: Using a stable development build of just a tutorial flow, an MVP (minimum viable product), or a later milestone yields a lot of valuable insight into a user’s comprehension of the essential concepts, layouts & flows even if the experience isn’t complete or polished, and gives direction for improvement before valuable time is spent perfecting a flawed execution. ⚠️ Do not wait until your alpha build to test your core experience! ⚠️ Having users interact with the intended device for the experience is important to create a close facsimile of their experience out in the real world with your product, so getting a build on a phone was important for us to observe the usability of a mobile-only product.
  • Questionnaire: Like the intake survey, using free survey tools to put together a questionnaire with easy-to-answer multiple-choice and scale-rating questions is a fast way to gather a lot of quantitative data points about the tester’s experience after they’ve completed the test, which can be combined with their profile from the intake survey to form a base model of target audience fit & engagement.
  • Interview: To be able to get more in-depth qualitative data about the tester’s experience, having a moderator do a face-to-face interview to ask them questions about certain aspects of the experience allows the testers to express their thoughts and feelings more fully than through a simple questionnaire. Besides just observing the user’s actions, this step is where I found the majority of quality usability insight can come from if the right questions for the product’s specific usability concerns are asked.
  • Legal Documents: in order to properly run a user test with external participants, there have to be safeguards in place to ensure the participants have a safe experience and are aware of the intended use of their data, as well as documents to protect the company’s intellectual property and in-progress work. Work with Legal and HR to put together a Waiver, an NDA, and, if there is an incentive to participate, a Confirmation of Redemption.
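As a loose illustration of the Intake Survey step above, here is a small screening sketch; the target-profile criteria, field names, and scoring weights are hypothetical placeholders, not our actual survey:

```python
# Hypothetical intake-survey screening: score respondents against a target
# audience profile and shortlist the 5 best fits for the test.
from dataclasses import dataclass

@dataclass
class Respondent:
    name: str
    plays_mobile_arpgs: bool       # genre fit
    hours_per_week: int            # engagement level
    has_spent_money_in_f2p: bool   # monetization familiarity

def profile_score(r: Respondent) -> int:
    score = 0
    if r.plays_mobile_arpgs:
        score += 3
    if 3 <= r.hours_per_week <= 15:    # target: regular but not hardcore players
        score += 2
    if r.has_spent_money_in_f2p:
        score += 1
    return score

respondents = [
    Respondent("A", True, 6, True),
    Respondent("B", False, 20, False),
    Respondent("C", True, 4, False),
    Respondent("D", True, 12, True),
    Respondent("E", False, 2, False),
    Respondent("F", True, 8, False),
]

shortlist = sorted(respondents, key=profile_score, reverse=True)[:5]
print([r.name for r in shortlist])   # the 5 applicants that best fit the profile
```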

Our Homegrown Testing Suite Setup 🪑

We wanted to be able to capture all 4 critical user feedback channels:

  • 📱 Screen footage captured directly from device to see the user’s flow through the test.
  • 💬 Audio recording to hear the user’s reactions and spoken aloud observations & questions, as well as to have an audio transcript of the interview portion.
  • 👆 Hand footage to see non-verbalized decision making shown through body language in their primary input method, in our case touching the screen (or not touching it; a lack of input can reveal decision making too). *In computer-based testing, cursor tracking is included in the screen footage, but on mobile only actual contact inputs can be visualized through software, so this extra step is uniquely valuable for mobile testing.
  • 🤔 Facial footage to be able to capture unspoken emotional reactions to things like humorous narrative lines, or looks of confusion.

To capture all these channels, we plugged all the recording devices into a PC sitting near the user and used the free software OBS to capture all the inputs and output one video file that synced and arranged all three video feeds to be seen at once, along with the audio feed.
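If you don’t have OBS in the mix, or want to recombine raw recordings after a session, a similar composite can be produced offline. Below is a rough sketch driving ffmpeg from Python; the file names and layout dimensions are placeholders, and this is an alternative approach rather than a description of our actual OBS setup (it also assumes the recordings start at the same moment):

```python
# Sketch: composite screen, hand, and face recordings plus the audio track into
# one review video with ffmpeg (assumes all recordings start together).
import subprocess

FILTER = (
    "[0:v]scale=-2:1080[main];"       # device screen capture, scaled to full height
    "[1:v]scale=480:540[hands];"      # hand camera, forced size for simplicity
    "[2:v]scale=480:540[face];"       # face camera, forced size for simplicity
    "[hands][face]vstack=inputs=2[side];"
    "[main][side]hstack=inputs=2[v]"  # screen on the left, cameras stacked on the right
)

subprocess.run([
    "ffmpeg",
    "-i", "screen.mp4", "-i", "hands.mp4", "-i", "face.mp4", "-i", "session_audio.wav",
    "-filter_complex", FILTER,
    "-map", "[v]", "-map", "3:a",
    "-c:v", "libx264", "-c:a", "aac",
    "session_review.mp4",
], check=True)
```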

In traditional testing methods, a moderator wouldn’t typically be in the room with the tester during the testing or questionnaire portions, to allow them to feel more comfortable in their own company (ideally on some comfortable furniture too! 🛋) and not feel scrutinized while being observed. Moderators then observe the live recordings from a monitor in an adjacent space and take notes.

† We aimed to keep our footprint and technical setup minimal and contained, so we divided a meeting room with a big rolling whiteboard to create some sense of separation, but ideally, more separation & comfort is best for the participant.

A Winning Formula 🏆

This relatively simple, easy & low-cost method convinced people to commit to doing proper usability testing more frequently, especially when we hit major milestones of the project and wanted to gut check where we were at, or were undertaking a redesign of an existing system and wanted to gauge reactions before we pushed it live.

Data-driven design is always going to be a more informed design, but how and when you get the data makes a big difference to its efficacy. I sought to make it easier to gain valuable, targeted data that could first and foremost advise me & my design peers better.

With all the footage neatly captured together, combined with the intake survey, questionnaire, and the notes the moderator took during the test & interview, post-test review and analysis was less time-consuming and more accessible, with the entire process & results posted in our documentation for review by the stakeholders.

Lvl 2: Prototyping & Rapid Iteration 🏃‍♀️

The pipeline for implementing UX & UI in our studio was unfortunately very cumbersome & fragmented: it relied on outdated software and proprietary libraries built around a custom game engine, and only a handful of developers knew how to work with the UI systems. That created a bottleneck in the pipeline, a constant backlog of UI/UX fixes & features to get in and never enough hours to do it.

Our implementation pipeline was not a flexible process that could support experimentation or rapid iteration; working with a UI developer to implement multiple design options so we could test them out and see what felt best was a hard ask.

I didn’t want to be limited by a flawed system that impeded iteration, the intrinsic process of design thinking; I felt we needed the freedom to think outside the box and test hypotheses before committing development time to them.

Prototypes of a Prototype 🌱

Game designers were already using rudimentary tools like Balsamiq and PowerPoint for simple wireframes & screen-to-screen flow prototypes, which had their use for communicating to developers the desired outcomes of basic interactions, but were severely limited in their ability to convey complex micro-interactions or transitions.

UI artists were also used to creating motion mocks & storyboards to explore more animation-heavy UI sequences, but the mock was usually a Flash animation or an After Effects video that developers would scrub through frame by frame to replicate in implementation, and it existed in isolation from any flow prototype that might relate to the same feature.

I wanted to combine aspects of both of these processes into one prototyping tool, but if I wanted to test innovative UX solutions in the realm of gaming’s complex micro-interactions & game states, I needed a tool that could emulate as closely as possible what we could do in development to prove it was worth upgrading to.

Harder, Better, Faster, Stronger 💪

I have tried out many of the common prototyping tools available — InVision, Marvel, Axure, Adobe XD, Sketch, Figma, Principle, etc. — but only 2 were flexible and fully-featured enough at the time to suit my needs — ProtoPie and Proto.io, both severely underrated tools in the prototyping community. I was most familiar with the latter, so I chose it as my platform to champion prototyping with, convincing management to fund a single license to let me show examples of how we could harness rapid iteration.

Some of the key things I wanted to be able to do to prototype with depth & complexity were:

  • real-time UI responses to multiple & changing user inputs
  • element state memory to maintain continuity & update information while transitioning between multiple screens & popups (see the sketch after this list)
  • draggable/drag & drop micro-interactions
  • motion scripting for screen transitions & game sequences that didn’t rely on user inputs (e.g. lootbox openings)
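To illustrate element state memory, here is a conceptual sketch in plain Python (not Proto.io’s actual scripting model) of the kind of shared state a prototype needs so that information stays continuous as the user moves between screens and popups:

```python
# Conceptual sketch of "element state memory": screens and popups read from and
# write to one shared state so values stay continuous across transitions.
state = {"coins": 500, "crate_count": 2, "equipped_item": None}

def open_crate_popup() -> str:
    if state["crate_count"] == 0:
        return "show 'no crates' tooltip"
    state["crate_count"] -= 1
    state["equipped_item"] = "Rare Sword"   # placeholder reward
    return "play crate-opening sequence"

def render_inventory_screen() -> str:
    # Any screen rendered after the popup reflects the updated state,
    # instead of resetting to hard-coded mock values.
    return (f"Coins: {state['coins']} | Crates: {state['crate_count']} | "
            f"Equipped: {state['equipped_item']}")

print(open_crate_popup())
print(render_inventory_screen())
```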

Winner Winner Chicken Dinner 🐓

The ability to experiment with an instance of a feature at 70–90% interactivity capability on the native device was a huge win for our UI/UX iteration process. In having to think like a developer just to be able to make the prototype function fully, I got much better at catching edge cases and making sure the designs were more bulletproof.

An example of a low-fidelity prototype I made on Proto.io to show a loot crate opening sequence

Being able to really feel how complex interactions flow, seeing all the details communicated without ambiguity, getting on the same page with developers on exact implementation desires & concerns — all invaluable in being able to sell designs to stakeholders & boost confidence in the feasibility of the implementation.

We didn’t use this method of prototyping for every single feature, but it was a useful tool to have when we wanted to explore an interaction mechanism in more depth. Prototyping at this level with Proto.io isn’t necessarily quick to learn even though the tool is well made; it simply has a lot of subtle capabilities that take time to master. As a result, I was the only designer who took the time to learn and use the tool, with the hope that I could eventually mentor my fellow designers to harness its power too.

† The next level up in prototyping would be to work in a prototyping/design tool like Figma that works more at the atomic design level of interface/design system organization and mirrors the way implementation is done in web frameworks like React, making it easier to pass a prototype off to a developer and potentially speed up implementation. However, in the absence of these kinds of development frameworks in games, prototypes remain visual examples for developers to build a real system from, but they still ease a lot of friction in iterating.

Lvl 3: Visualizing Progression with Journey Maps 📈

If you’re not that familiar with free-to-play mobile games, especially in the action-RPG genre, then the complexity of the monetization models, resource inputs & outputs, and feature structure may shock you compared to what you might think a fun little time-waster on your phone would encompass. It certainly shocked me when I started working on one; just learning to visualize a gameplay loop kind of blew my mind.

One thing games like these specialize in is drawing out the player’s progression through the entire game’s content over a long period of time, doling out items and gameplay at tightly controlled intervals and milestones so that players don’t burn through it too fast, while hooking them on game loop habits that keep them coming back every day for months to years.

At the highest level, the game loop represents the principles of this ‘experiential economy’ flow. At the lowest level, the actual nitty-gritty numbers are buried deep within tables & formulas in the Excel sheets of game economy designers. There was no mid-level understanding of this core part of the game for most of the team.

In every experience, there is an intended feeling of progression, a general ebb and flow the users should experience throughout their time with the product, and I wanted to find a better tool for collaborating to craft that flow and visualize it so everyone could understand it more easily.

Putting the Experience on the Map 📍

I realized a journey map would be a great tool to visualize this information, the same way they are often used to visualize a customer’s experience through a retail environment, or a consumer experience through an e-commerce platform.

The great thing about journey maps is how flexible they are — there isn’t just one way to create one or only one kind of information design method you have to use. Different qualities of a journey can be measured and quantified in different ways, and when seen alongside each other in one piece of information design, you get a holistic picture of the intention for the experience.

It’s not a perfect or complete picture; many aspects of those impossibly deep Excel sheets will never be represented in an infographic. But it is an accessible way to visualize high-level experience goals alongside low-level balancing information. Much like personas, it can be useful to have multiple journey maps to represent narrower segments of the journey — like a beginner tutorial vs late-game content — or to capture the differing experience for certain users — like casual NPUs (non-paying users) vs dedicated whales ($$$).
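As a rough illustration of what a mid-level view can look like when sketched from data, here is a minimal journey-map plot; the stages, metrics, and values are invented placeholders, not balancing numbers from any real game:

```python
# Minimal journey-map sketch: plot a few experience qualities across the
# intended player journey. All stage names and values are made up.
import matplotlib.pyplot as plt

stages = ["Tutorial", "Early game", "First wall", "Mid game", "Late game"]
curves = {
    "Intended challenge": [1, 2, 4, 3, 5],   # how hard each stretch should feel
    "Reward cadence":     [5, 4, 2, 3, 2],   # how frequently rewards are doled out
    "Emotional peak":     [4, 4, 2, 4, 5],   # target emotional high points
}

plt.figure(figsize=(8, 4))
for label, values in curves.items():
    plt.plot(stages, values, marker="o", label=label)

plt.ylabel("Relative intensity (1-5)")
plt.title("Intended player journey (placeholder values)")
plt.legend()
plt.tight_layout()
plt.show()
```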

Reflection: Leaving Game Development in Better Shape than I Found it 🌠

The processes and tools we found worked for us are now part of the studio’s design culture, not just part of one person’s role. The goal of my unique position was not entirely clear in the beginning, but the result of including UX in the process helped move many years’ worth of stagnant process into a stronger, better place for the future.

I am passionate about process, and believe that UX & design thinking methodologies extend far beyond layout wireframes & screen flows, and have the power & potential to improve the experience of creating itself.

It is the collective responsibility of any workplace to continually level itself up to keep process running smoothly, and to keep up with changing trends in tools & production methods so as not to fall behind its peers in the industry. Call it Design Experience or Developer Experience: providing the resources & motivation to let our abilities & work shine benefits both the company and the many talented creators who give their time to make awesome things, rather than bogging them down in poor process and under managers with blinders on to change.

I look forward to continually leveling myself up as a designer & creator, to keep growing my own skills and toolset, and to always strive to bring good process to anything I do.
