In my last post, I talked about how the recent American Rescue Plan Act will bring an influx of funds specifically for out-of-school time (OST) -- after-school and summer -- programs, as well as for community schools and wraparound services.
This is a huge win for those working tirelessly in family engagement and OST! I also mentioned that, to be on the safe side, it's a good idea to start building your evidence base now, in case these funds are earmarked for evidence-based programs under the Every Student Succeeds Act (ESSA).

Now that you're all familiar with the four evidence levels, let's dig into the most accessible one: Level 4.

So many small, grassroots, community-based organizations are at a disadvantage with ESSA's evidence requirement. Here's why: 1) evaluation services can get expensive, and 2) they often require technical know-how or an outside consultant to do them well. Program staff are great at working with kids, families, and schools. That's why they do this work! They didn't sign up to be evaluators, so I get why the thought of doing an evaluation can send some program staff running for the hills.

But let's take a deep breath. Here's the great news about Level 4: if your organization is planning to evaluate your family engagement or after-school services but hasn't done so yet, you can demonstrate that there is a great likelihood that your services are impactful and still get access to those Title I and other federal funds. That's it - demonstrating a likelihood! It's a great way to get your foot in the door with districts while working towards the bigger goal of becoming evidence-based.

So you may be saying - Amanda, that sounds great, but how do I show that my services are likely to have a positive impact on kids and families? Here's what you need to apply for ESSA Level 4 approval:

1) A logic model for your organization
Essentially, a logic model is a depiction of what you put into your program (resources, activities), what you hope to get out of it (short- and long-term outcomes), and how you'll know you're on track (measures, benchmarks).
Check out my post about the ins and outs of logic models here. 2) Citations demonstrating the impact of similar programs
We can use online tools like Google Scholar to find existing evaluations and research studies that show that similar programs serving similar groups of kids or families had a positive impact.
So, if you're a program in a major urban center and you find a study demonstrating the effectiveness of a small, rural initiative, it's probably best to keep looking. We want to compare apples to apples here. You'll also have to make sure that the studies you find meet the ESSA standards described in my last post. 3) A plan for your future evaluation
All you need to do is put together a plan for how you are going to measure your program's impact in the future. You'll have to share who you'll study, what you'll look at, and when you'll conduct this research.
You may need to chat with an evaluator for this one. Don't worry though - evaluations don't have to be a multi-year, super expensive endeavor!
I hope you noticed that none of those three requirements involves any program data!
So if you haven't yet started to track your family engagement or student data, Level 4 gives you time to get your systems up and running, while still giving you access to the funds you need and the students and families you want to work with! If you want to know how to DIY the ESSA Level 4 process, sign up below for Evidence for Engagement, the free mini course from Tamara Hamai and me! With weekly videos and worksheets, it will walk you through how to get your application ready for your local school district and get your foot in the door.
In my last post, I talked about how logic models -- although they can be a chore -- can actually be a great visual roadmap of your program's components, benchmarks, and goals.
Working collaboratively to develop a logic model can be a unifying experience for teams, a way to get everyone on the same page about the work. But what keeps your finished logic model from disappearing into the dark void of your computer's file system, never to be found or opened again? If you've been reading my blog for a while, you may know where I'm going... You can use your logic model to help design your team's data tracking system!

Although people design them differently, logic models always have a column for either the measures that you'll use to assess progress (as I have in my template below) or the immediate outputs that would occur as a result of your activities.
If you're measuring your family engagement efforts, some examples could include event attendance counts, the phone call logs your teachers keep, or families' responses to a quick survey.
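To make that concrete, here's a minimal sketch -- in Python, with hypothetical program details, so swap in your own -- of how the measures in your logic model can become the columns of a tracking sheet:

```python
# A minimal sketch: turning a logic model's measures into tracking columns.
# The program details are hypothetical -- substitute your own.
import pandas as pd

logic_model = {
    "activities": ["family workshops", "teacher phone calls home"],
    "short_term_outcomes": ["families attend events", "families feel informed"],
    "measures": ["event attendance count", "calls logged per week",
                 "family survey rating"],
}

# Each measure becomes a column; each row will hold one week of data
tracker = pd.DataFrame(columns=["week"] + logic_model["measures"])
tracker.to_excel("engagement_tracker.xlsx", index=False)  # requires openpyxl
```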
Your logic model is basically a cheat sheet to the data points you'll want to track for your program! So if you follow the steps in my free guide to tracking your engagement data, you'll see that you've already answered some of the questions in Step 1 -- why you're tracking the data. I don't know about you, but I think that identifying the purpose behind each part of our work is often the hardest part. But since you've already connected your measures to your short- and long-term goals, you're ahead of the game! With your logic model in hand, you'll be ready to tackle the rest of the questions with ease and start tracking. If you haven't gotten your hands on my free guide, use the form below to get your copy!
"Ugh, they're making us submit a logic model? What is the point of a logic model anyway?"
"I don't understand all that technical jargon. What am I supposed to put in a logic model?" These are the thoughts I imagine my clients having when a funder or state department of education requires them to create a logic model ... and rightfully so. Logic models are often overly complicated, far too technical, and not connected to ACTUAL practice - so it's no wonder that they are not intuitive for people doing "the work" in education. It doesn't have to be that way though. Let's talk about what a logic model actually is and how it can help your organization. I found the following definition of a logic model on the Community Tool Box (emphasis added): “A logic model presents a picture of how your effort or initiative is supposed to work… Effective logic models make an explicit, often visual, statement of the activities that will bring about change and the results you expect to see for the community and its people. A logic model keeps participants in the effort moving in the same direction by providing a common language and point of reference.”
Here's what I love about this definition:
1) It clearly conveys that a logic model is a visualization of how your program operates, and 2) It helps teams see that a logic model can actually streamline their work and make sure everyone is on the same page. The image below from the CDC is another great way to think about what a logic model can do.
So instead of an annoying task you have to complete for grant funding, think of a logic model as a dynamic map of your program and how you will collaboratively work towards achieving your goal.
Recently, I introduced this new way of thinking to a school district team I'm working with. I'm designing a data dashboard (learn more here!) for them so that all of their family-serving teams can share data and serve their district's families more efficiently. (I'm really excited about this.) When we met with each team individually, they were all saying the same things about how they engage with families and what data they already track or want to track. Yet, collectively, they couldn't see the forest for the trees. They didn't know how similarly each team was operating! Let me be clear: this wasn't because they were not communicating or working together. It's because they didn't have a framework to guide their collective work and show where there was overlap across project teams.

So we got to work. Using Google Slides, we did an interactive work session where the teams brainstormed what they would put in each part of a logic model. Below is a screenshot of their "Activities" brainstorm.
Then we did a virtual "gallery walk" so they could see how much overlap there was. See how many "I do this too" stars there are in the image?
After this, it took no time to put together their ideas into a more traditional logic model format. If your team is currently struggling with making a logic model, don't be afraid! Reframing how we think about logic models can go a long way towards making them purposeful, usable tools to make our family engagement work more effective.
Isn't it so gratifying to learn a new skill and get to apply it?
One thing I've been learning lately is how to use ArcGIS, a super fancy mapping tool that allows you to collect, analyze, and visualize all sorts of data. I've talked about mapping with clients and at conference presentations for a while, and I've loved using public (read: FREE) mapping tools to learn more about the communities I was serving or studying. I've used maps in many ways -- describing the community for grants or needs assessments, determining which students need home visits, or figuring out which resources are near students' homes.

Most recently, I've gotten to use maps through my part-time work as a researcher at Ohio State. We were trying to figure out if the students in our college had practicum placements within federally designated "medically underserved communities." Using a free public map file from a government agency and uploading a list of addresses where our students were placed, I was able to instantly visualize and (through ArcGIS's fancy tools) analyze the percentage of our students working within underserved communities. Seeing it all come together was magical. It painted such a clear picture of the impact of our college and the difference that our students are able to make.

Given the disproportionate impact of COVID-19 on disadvantaged communities and the rising awareness of systemic racism's effects on, well, every aspect of community life, knowing what children and families need outside of school -- and acting on it -- is critical. Using data to pinpoint which students are at the greatest risk of disengaging from online school or whose families struggle to meet basic needs is essential for targeting interventions and outreach.

Here's a list of free mapping tools to get you started (from my May post on the AEA365 blog): Demographics
City and county agencies also have amazing resources. See if your health or police departments, school districts, or universities have online tools for exploring your area. Here’s a map I created of the schools, hospitals, and other services where I live, from the city’s mapping tools:
To get started mapping your own data, I always recommend starting with Google Maps!
Beyond being a lifesaver for those with a poor sense of direction like me, Google Maps offers a free tool for creating your own maps. You can map multiple data sources and use colors, symbols, and labels to make sense of your data.
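If your data already lives in a spreadsheet, you can import it straight into Google My Maps as a file with an address column. Here's a minimal sketch of prepping one with Python -- the locations below are made up, so swap in your own:

```python
# A tiny sketch: writing a CSV that Google My Maps can import as a layer.
# The rows are hypothetical -- replace them with your own sites or families.
import csv

rows = [
    {"name": "Elm Street Elementary", "address": "100 Elm St, Columbus, OH"},
    {"name": "Neighborhood Library", "address": "200 Oak Ave, Columbus, OH"},
]

with open("my_map_layer.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "address"])
    writer.writeheader()
    writer.writerows(rows)
```

When you import the file, My Maps geocodes the address column and drops a pin for each row.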
Then go play! See what maps you can create of your community.
Now think about each of your students and families as dots on that map. Imagine what they might see every day when they walk down the street. What resources are available to them in their community? In what ways is their community potentially putting them at risk? You can use what you learn from your mapping explorations to influence survey questions and interview protocols for students and families. For example, if there has been a recent rise in crime rates in a neighborhood, ask families and students if they feel safe and what the school or district could do to make them feel safer. Certainly, your findings can also help you figure out what services to offer within your program or school. It's been so enjoyable to build my mapping skillset and explore a whole new way of looking at data. I hope you take some time to play around with these mapping tools and see what you can learn!
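One more idea before you go: if you're comfortable with a little Python, the underserved-communities analysis I described earlier can be scripted too. Here's a rough sketch using the open-source geopandas library. The file names are hypothetical, and it assumes your addresses are already geocoded to coordinates:

```python
# A rough sketch of checking which placements fall inside designated areas.
# File names are hypothetical; requires geopandas (pip install geopandas).
import geopandas as gpd
import pandas as pd

# A public boundary file (e.g., downloaded from a government data portal)
areas = gpd.read_file("underserved_areas.shp")

# Already-geocoded placements with latitude/longitude columns
placements = pd.read_csv("placements.csv")
points = gpd.GeoDataFrame(
    placements,
    geometry=gpd.points_from_xy(placements["lon"], placements["lat"]),
    crs="EPSG:4326",
)

# Spatial join: flag each point that falls within a designated area
joined = gpd.sjoin(points, areas.to_crs("EPSG:4326"), how="left", predicate="within")
inside = joined["index_right"].notna()
print(f"{inside.mean():.0%} of placements are in designated areas")
```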
This week I've got a co-author to help me continue my qualitative data series!
Sarah Dunifon is Founder and Principal Evaluator of Improved Insights LLC, an educational evaluation firm focused on STEM and youth-based programming. She is based in Cleveland, Ohio and is a fellow board member of the Ohio Program Evaluators Group. We hope you enjoy our post below.
Qualitative data can be a bit elusive.
It’s not usually too hard to find data for things that are measurable. We know we can do surveys, or count the number of attendees, or track patterns over time. Qualitative data, though - the context for those numbers - often takes a little more work to track down.

Of course, we can always do interviews and focus groups with stakeholders to learn about their experiences - our usual go-tos. However, if you think of qualitative data for what it is - simply put, another information source - you’ll find that many other forms of it are hiding in plain sight. Think about the chat box in your last Zoom session - you may not have realized it, but that’s a source of qualitative data! Other sources you may have readily available include the phone call logs that your teachers keep when they call families or even the observations you did of an event (online, drive-up, or fully in-person).

If you need more, there are lots of ways of collecting qualitative data, and many of them are even more prevalent now in our almost fully virtual world. This makes our lives a lot easier as we prepare to write our annual reports, apply for grants, or share the impact that our program had during this unusual year. As I mentioned in my last post, sharing the context for our quantitative findings can make those reports tell a much richer story. Yet it’s not always intuitive to know how to turn a whole bunch of text into these powerful programmatic insights.

So when you find these sources of qualitative data, what do you do with them? We can actually find patterns in our data by assigning thematic codes to different words, phrases, or even images. Sometimes, you start with a set of codes that have to do with your program goals or the research concepts underlying your program. Other times, you just code as you go: if you start to see a lot of mentions of a particular topic, that topic can become a code. Coding can take many forms, and there is fancy software that can help you do it, but sometimes all you need is a notebook and some markers or a color-coded spreadsheet. Below you can see some sample data about an after-school program focused on science and animals that we’ve color-coded according to the themes we saw.
In one glance, you can see that our participants liked a lot of aspects of the program, but games and activities (in blue) and the food (in pink) got the most mentions.
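If your responses live in a spreadsheet, you can even script a rough first pass at this kind of coding. Below is a minimal sketch in Python -- the codebook and responses are made up for illustration, and keyword matching is a starting point for careful reading, not a replacement for it:

```python
# A minimal sketch of keyword-based thematic coding.
# The codebook and sample responses are hypothetical.
import pandas as pd

# Each theme maps to keywords that signal it
codebook = {
    "games_activities": ["game", "activity", "play"],
    "food": ["snack", "food", "pizza"],
    "animals": ["animal", "dog", "turtle"],
}

responses = pd.DataFrame({
    "response": [
        "I loved the animal games we played",
        "The snacks were my favorite",
        "Learning about turtles was cool",
    ]
})

# Flag each response with every theme whose keywords appear in it
for theme, keywords in codebook.items():
    responses[theme] = responses["response"].str.lower().apply(
        lambda text: any(kw in text for kw in keywords)
    )

# A rough count of mentions per theme across the dataset
print(responses[list(codebook)].sum())
```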
Coding allows us to see what’s happening across the dataset and pull out themes or key insights that we need to highlight. Sharing your qualitative data analysis can be an important addition to your data story when demonstrating the impact of your work. It can add relevance, personality, and context to quantitative data by illustrating individual effects.

By reviewing our datasets systematically, we can also find some incredible quotes - the kind you would never attempt to paraphrase if you were writing a paper because they were so perfectly worded - and let our stakeholders’ words shine. You can feature key quotations by offsetting them or putting them in a different color in your report to highlight individual experiences and catch readers’ attention.

Another popular way to display qualitative data is in a word cloud. Word clouds are visual representations of keywords that come up frequently in a set of qualitative data. Typically, the bigger the word, the more frequently it appeared in the data set. There are plenty of critiques of word clouds in the data visualization space, and rightly so - word clouds can often obscure meaning rather than clarify it. So if you are going to use them, here are three things you should know:

1. Give the data a good cleaning to remove anything that you don’t want represented in the visual. Here, we’d recommend removing any responses that do not give value (e.g., “idk,” “I’m not sure,” “Nothing,” etc.) as well as any text surrounding the main themes (e.g., “I like the [...],” “I love [...],” “my favorite thing is [...],” etc.).

2. Consider the messages or key points you see in the data that you wish to convey visually. If it is possible to condense themes further or pull out important words, now is the time to do so. This might mean collapsing phrases as best as possible to a single word, or perhaps a few words of important meaning.

3. Make sure to keep the essence of the data - meanings can be misconstrued when collapsing phrases into single words or shorter phrases. If you find this happening, perhaps a word cloud is not the best way to display your data.

However, with data cleaning and basic analysis, the word cloud can change drastically. Take a look here at three versions of the same word cloud we generated on WordItOut using the data we shared earlier. The first was created with original - or “raw” - data, the second with cleaned data, and the third with some basic analysis and condensing. Notice how the prominent words change with each version, and how the meaning and key messages can shift.
As you can see, while word clouds are one of the most accessible forms of qualitative data displays, they take some work to be most effective.
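If you'd like to script that cleanup, here's a rough sketch using Python and the open-source wordcloud package (we used WordItOut above; this is just one scriptable alternative). The responses and filler phrases are hypothetical:

```python
# A rough sketch of cleaning responses before generating a word cloud.
# Sample data and filler phrases are hypothetical; requires `pip install wordcloud`.
import re
from wordcloud import WordCloud

raw = ["I like the games", "idk", "my favorite thing is the food",
       "Nothing", "I love the animals"]

# 1. Drop responses that don't give value
no_value = {"idk", "i'm not sure", "nothing"}
cleaned = [r for r in raw if r.strip().lower() not in no_value]

# 2. Strip the filler phrasing around the main themes
filler = re.compile(r"^(i like the|i love the|my favorite thing is the)\s*",
                    re.IGNORECASE)
cleaned = [filler.sub("", r) for r in cleaned]

# Generate and save the cloud from what's left
wc = WordCloud(background_color="white").generate(" ".join(cleaned))
wc.to_file("wordcloud.png")
```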
However, word clouds aren’t your only option. Data visualization experts like Stephanie Evergreen, Storytelling with Data, and Depict Data Studio all have great resources on different qualitative data displays. The case is clear - with some simple analysis and visualization, qualitative data can be a powerful addition to your data story.
You should know by now that I'm a bit of a data nerd.
I love spreadsheets. I love organizing data and using it to illuminate patterns. I love the "ah-ha" moments when clients realize how much their own data can tell them about the kids and families they're serving. So it may surprise you that I'm here to say that numbers and spreadsheets don't tell us everything. That doesn't mean that numbers (or quantitative data) are irrelevant. It just means that they are even more informative when paired with stories, quotations, or anecdotes (qualitative data). (See the box for a quick refresher on the difference between the two).
Here's an example. Yesterday, I was re-reading an article from The Columbus Dispatch, my local paper, about the spike in youth violence that has occurred during the pandemic.
It's been horrible to hear about how many children and teens (well, really anyone, for that matter) have been victims of gun violence since the spring. The article cites a number of statistics: the number of children treated in Columbus for gunshot wounds this spring and summer was double the 2019 figure (from 16 to 32), and children from racial or ethnic minorities are twice as likely to be shot as white children.

Those are AWFUL statistics -- and they certainly help me see that there is a dire situation here. But then, the article turns to a teacher whose student -- an eight-year-old boy -- was killed. Here's what the article shares about the teacher: Thalgott has lost a handful of former students during her 20 years of teaching on the South Side. She's seen even more students who have lost a parent to gun violence.

This passage -- a teacher who has lost former students and their family members to gun violence, whether as victims or perpetrators -- really gets to me. It conjures up such raw emotions that it suddenly puts the cited statistics into context. Those 32 children are somebody's child, somebody's sibling, somebody's student, somebody's mentee. Hearing from a person who actually experienced that loss made a big difference in how I processed this article. I imagine it did for you too.

Quantitative data can be so powerful, but its impact is amplified when we lift up the voices of those we are serving or studying. Qualitative data -- gathered through interviews, focus groups, open-ended survey questions, or observations -- can sometimes more effectively communicate the experience of what is happening in your school or community. I'll be doing a series of posts on qualitative data over the next few weeks -- how to collect it, how to use it, and how using a combination of data can truly help you tell your story.
Sometimes you launch a survey, and you're blown away by the number of responses you get.
And sometimes, you're not. I had one of these moments last week. I was SUPER excited to try something new with my blog and launch a survey to hear what readers wanted to see in future posts. I sent out my blog to my email list, waiting with bated breath (okay, maybe I'm being dramatic) for all of the responses to pour in. ... And then I realized that the survey I embedded didn't even show up in the email. INSERT FACE PALM EMOJI. Let's just say that I didn't get the response rate I was hoping for. But here's the thing - it's okay to have a survey fail. All hope is not lost. If you don't get the response rate you were hoping for, take a step back and consider: Did technical issues keep the survey from reaching your audience? Would a friendly reminder prompt more responses? Is there a better channel or time to reach your respondents?
For me, technical issues definitely got the best of my survey attempt, but I also think a reminder wouldn't hurt. So here's my plug: I'd love to know more about what you want to learn! I'd appreciate it if you could take a minute to share your thoughts and preferences with me. I'll report the results and use them to make this blog even more beneficial for you. The survey is embedded below, but if it's not loading for you, click the button to go right to it.
We have all taken TONS of surveys in our lifetime.
We get surveys when we make an online purchase, when we speak with a customer service agent, when we want to get a free gift card, and even when we go to the hospital. We're all pros at taking surveys... and we all know when we're taking one that's TERRIBLY designed. For me, if I don't feel like I can answer the questions, or if it gets too long or overly annoying, I'm out. And that organization just lost a respondent.

I don't want that to happen to you -- because in education, surveying our stakeholders is SO important. It shows that we value our stakeholders' opinions, feedback, and experiences. We can't afford to lose respondents because of iffy survey design. Here are a few of my tips for upping your survey game:

1. Ask only what's really important. Make a list of what your team is wondering about or what the impact of your proposed projects/plans might be before you draft your survey questions. Keep it short and sweet ... if it's not related to those things, don't include it.

2. Reach respondents where they are. Think of all of your touch points with your key stakeholders. Students may be logging in for online class, families may check social media for updates, and all of your stakeholders may access meal sites. At all of these venues, you can easily ask about needs, satisfaction with the school's efforts, or other questions you may have. You can also get feedback through polls in Google Classroom or Zoom, via text message, or even on social media.

3. This may seem obvious, but ... make it easy for respondents to actually answer your questions. Keep the language clear and simple so a person of any reading level can understand it. Never ask about more than one topic in a single question, and try to avoid offering a neutral middle answer option when you can. (In both of these cases, it's very hard for you to actually learn anything from the data.) And of course, if you work with communities who speak languages other than English, find a way to translate your survey into their languages. Translation is a much tougher process than it should be, but it is essential for making all of your families feel valued and for hearing from your entire community, not just one subset.

All that being said, I feel the same way about you - my colleagues, clients, and readers. I want to know what's important to you and what would be helpful for me to cover on the blog. I hope that you've been inspired by this post and will take my brief survey below. I appreciate your feedback and will use it to generate future content for you!
When I started my doctoral program at Vanderbilt, I certainly didn't expect to get into a ... heated discussion, shall we say? ... with the professor of my first course.
We were discussing characteristics of effective leaders, and our professor mentioned that the Myers-Briggs Type Indicator, one of the most well-known personality tests, was essentially worthless. You see, despite its incredible popularity, there is actually no data to show that Myers-Briggs is a valid and reliable assessment -- that it measures what it intends to, and that you'd consistently get the same outcomes if you took it again and again.

Now, I've always been a pretty introspective person, and I (still) love personality tests as a fun way to reflect on how I think, feel, and interact with others. I'd never taken them as a scientific assessment of my psyche, but Myers-Briggs especially had stood out to me as a somewhat revelatory framework for why people interact and act the way they do. I had always gotten the exact same result when I'd taken the Myers-Briggs (ENFJ, if you're curious), so when my professor started talking about how most people get quite different results each time they take it, and that there was no research to support its utility, part of me was bummed, and part of me was fired up. I argued (civilly, of course) that I didn't use it as a formal diagnostic tool, but instead as a helpful resource or an interesting way of looking at things. So why should it matter? (Newsflash: It does matter.)

For fun, I recently read The Personality Brokers: The Strange History of Myers-Briggs and the Birth of Personality Testing by Merve Emre. Of course, she confirmed what my professor had said many years ago. However, it also reminded me of something I see often in education. People who are passionate about helping children and families often feel that they KNOW that what they're doing is helping the communities they serve, even without any real data to back it up. We KNOW that our Family Science Night was a success because there were lots of families there, and everyone enjoyed themselves. We BELIEVE that a teacher is effective because the children love them. We FEEL the impact of an after-school program because, well, it's been in the community forever.

Unfortunately, we can't rely on gut instincts, feelings, and beliefs alone to tell us if something is effective... just like I couldn't make decisions based only on an affinity for Myers-Briggs. Let me be clear: education, and family engagement in particular, tends to get kind of fuzzy. While we can't rely on intuition, it's also true that we can't rigorously test everything that happens in schools. We need to find a middle ground. This isn't just my random interest in personality theory: when it comes to children and families, we need to make sure that what we're doing to try to help them actually works.

Luckily, it's not that hard to get started. We can begin tracking data, analyzing trends, and ultimately, measuring our impact so that we know we aren't just THINKING that we're changing lives. We actually are.

Ready to start your evidence journey? Sign up below to get the Evidence for Engagement mini-course sent to your inbox.
I love a good spreadsheet. I mean, I really get excited about it. You may have read on the About page how my business evolved from the development of a really fancy spreadsheet. True story. Now I get to help others learn to use Excel to improve their work and watch them get excited about it too.
One positive thing to come out of the pandemic is an increased appetite for online professional development. Recently, I've gotten to connect with old friends and colleagues by providing a three-part Excel workshop series for the Family League of Baltimore. So on top of hanging out with my old network, I've gotten to teach them about all the fun stuff Excel can do. Win-win! Here's an overview of the series:

Part 1: Excel Basics
A lot of educators just haven't been trained in how to use data. They may be consumers of it, using someone else's spreadsheet to glean information, but often, they just don't know how to utilize Excel's features for themselves. The Excel Basics workshop starts from the top and discusses formatting, functions, and formulas that beginners can use to build their Excel capacity.

Part 2: Creating and Using Templates in Excel
In the engagement world, there is so much to track! This session built on what was covered in the Basics session and walked participants through the process of designing their own customized tracking sheets. We used breakout rooms to discuss how to track different topics, and we walked through some more advanced features and functions to make these tools as automated as possible.

Part 3: Reporting and Visualizing Data in Excel
Data visualization is a hot topic in evaluation right now, and I get why. When you're able to effectively show your data graphically, you can make your results accessible for a much wider audience. In this session, we talked about so many fun parts of Excel - PivotTables, creating charts and tables for reporting, and ... drumroll, please ... creating interactive dashboards! Did you know you can create dashboards like the one below to share with your team?
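For the code-curious among you, the logic of a PivotTable translates to other tools too. Here's a tiny sketch of the same idea in Python with pandas; the engagement data is made up for illustration:

```python
# A tiny sketch of PivotTable-style summarizing in pandas.
# The engagement log below is hypothetical.
import pandas as pd

log = pd.DataFrame({
    "month": ["Jan", "Jan", "Feb", "Feb", "Feb"],
    "event_type": ["Workshop", "Home visit", "Workshop", "Phone call", "Workshop"],
    "families_reached": [12, 1, 18, 1, 9],
})

# Rows = event type, columns = month, values = total families reached
summary = log.pivot_table(index="event_type", columns="month",
                          values="families_reached", aggfunc="sum", fill_value=0)
print(summary)
```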
Here's what some of my past workshop participants had to say about their experience:
Besides my obvious bias towards using Excel for ... well, everything ... I think it is even more important now for schools and districts to be effectively tracking their work. As we navigate through so many unknowns with school reopening, it will be critical to keep an eye on students who are at risk of falling through the cracks. Good news - Excel can help (and so can I!). I'd love to bring this workshop series to more places, so if your team could use a bit of an Excel boost, let's talk!