Webinar presenter Emily LaGratta answered a number of your questions after her presentation, Measuring Fairness: Using Client Feedback to Enhance Our Work. Here are just a few of her responses.
Audience Question: Someone has to look at and manage the data we’re getting from feedback tools. Otherwise, people learn their voice isn’t really being heard because no one gets back to them. How do you suggest we staff these reviews? Should it be an IT person, an admin, a director, or a group of people, or what?
Emily LaGratta: Great question. What’s great about these tools and being a little bit late to the party, frankly, is that these software companies have figured that out for us. They have made the usability side of it very, very easy to access. I’m a lawyer by training. I don’t have any particular technology background. And so, choosing a user-friendly tool was key to me, including with regard to managing the data and producing it in a way that made sense. The company SurveyStance that we ended up working with has this really nice online dashboard, so that in real time, we could go in and see the distribution of responses within a selected date range. We could set notifications for a text or an e-mail if certain types of responses came in. So, for example, in one court, the court administrator wanted to know every time a negative “thumbs down” response came in for a certain question. That would allow them to investigate further whether there was something going on situationally in the courthouse lobby or the like that they could then respond to.
Audience Question: Early on, you talked about the four expectations of authority figures. Where can we learn more about this research and its findings?
Emily LaGratta: I love that question. There’s a body of research on a topic called procedural justice, or procedural fairness, that outlines those expectations. Thankfully, there are a lot of academic studies that have been done on the topic, as well as a number of tools that have been developed specifically in the policing field, as well as the courts’ field. I can promise you I’m quickly working on building out some tools for some other sectors of the justice system, as well, like probation and prisons. But for now, I would point you to a website called www.proceduralfairness.org. It compiles a bunch of relevant resources. They also have a blog if you decide you’re at that level of interest in the topic. My website, www.lagratta.com, has quite a bit on the topic as well.
Audience Question: Do you generally recommend organizations ask closed-ended, very specific questions or solicit more freeform, general feedback?
Emily LaGratta: As I mentioned, our first time testing these tools was focused on getting court leaders comfortable with what they were asking via closed-ended questions. In any legal context, with an open-ended question, you do have to worry about someone potentially disclosing something that they shouldn’t have. You also don’t want someone to offer up a topic or an issue that’s so specific that, if no one replies to that individual directly, now you’ve got a very specific unsolved problem on your hands. So deciding on questions and question formats really is just part of the planning discussion. I think the nice thing about having at least some closed-ended questions is, of course, that the data is much cleaner and lends itself to a tidy bar graph or a pie chart. The open-ended questions, on the other hand, will take a little bit more work to review but provide richer feedback. The right answer is probably a compromise: use both formats, once your agency is ready to.
Audience Question: Are you aware of any courts that are doing surveys after virtual court hearings? And if so, how is that working out?
Emily LaGratta: Yes, I have heard of that. For those using Zoom or other platforms, many of those have a survey function built right in. So, it’s actually relatively easy to turn on that functionality if you’re interested. I don’t have a national view of how often it’s being used, but I have heard of courts experimenting with it. I think it would be really interesting to ensure that you know where the feedback is coming from. Meaning, if you’ve got feedback following a specific court session, you would know what docket and what judge it pertained to. That’s very valuable. If you can’t get your judges to be comfortable with that level of specificity, you could potentially use other strategies, like that QR code I showed you or a link to a court-wide survey that would pool the data more broadly. It’s just like all things: you want to make sure that you have the support from key stakeholders, or it won’t get off the ground.
Audience Question: Have you seen any issues in agencies that use feedback as a direct input during performance review time?
Emily LaGratta: The short answer is no, I’m not aware of that yet. I think our minds could all imagine the ways in which this type of information could be used to our detriment or to someone’s detriment, and those are very important conversations to have. The staff feedback that we got via our pilots was so positive that we didn’t have to face the question of how a court leader might handle negative staff feedback. But you could imagine that, right? So, one way to safeguard that, starting out, is to set parameters about how the feedback would be used, whether good or bad. Perhaps feedback could only have an additive, positive effect for staff performance reviews. Otherwise, I fear that this wades into human resources and union issues that I’m not equipped to answer, but these are very good questions to explore.
Click Here to Watch a Recording of Measuring Fairness: Using Client Feedback to Enhance Our Work.