We won an “Inflation Persuasion” Award!
Galvanize Action won ANOTHER Expy award from the Analyst Institute! Our two-time award-winning research team brought home the “Inflation Persuasion” award for our 2024 Tracking Survey & Randomized Controlled Trial Field Experiment.
From May to November 2024, Galvanize Action and one of our partners ran a field experiment and tracking survey to test the effectiveness of our programming. Half of the white women in this experiment saw Galvanize Action ads; that's the treatment group. The other half, the control group, was blocked from seeing Galvanize Action ads. Over several months, we asked both groups of women the same questions about their top-of-mind issues and civic choices. The differences between the treatment group and the control group allow us to measure Galvanize Action's persuasive impact.
This experiment showed that our ads (and our partner’s!) successfully moved white women toward progress on the economy, which was consistently their most important civic issue. This is really good news!
This RCT shows that Galvanize Action's programming was effective with white women, and especially effective on questions related to the economy and inflation. Controlling for demographic and pre-test variables, our programming (and our partner's programming) had a statistically significant effect on whether participants blamed the Biden/Harris administration for inflation and on their beliefs about which party is better for the economy.
That success with messaging around the economy and inflation is why we won the “Inflation Persuasion” Expy award in 2024! Take a look at some of the ads that white women watched as part of this experiment.
🔗Watch more economy ads on YouTube.
We’re so proud of our outstanding Research Team and their impeccable experiment design and their second Expy award! Congratulations to Rachael Firestone, Hannah Curtis, Laura Hardner, and Rebecca Cutler. If you’d like to drop them a congratulatory note, DM us on Instagram @galvanize_action and we’ll pass it along.
Frequently Asked Questions
Some of our blog readers and social media followers are asking great questions about this randomized controlled trial (RCT)! Here are a few answers to help you understand.
Q: What counts as a statistically significant effect? Are there different levels of significance?
A: We can calculate and quantify levels of statistical significance with p-values. A p-value tells us how likely it would be to see a result at least as strong as the one we observed if there were no real relationship to the variable being tested (in this case, our ads) and only random chance were at work. You might see them written as “p = 0.099” or similar. The lower the p-value, the less likely the result is due to chance alone, and the higher the level of significance. Statistically significant results are exactly what we want when we’re trying to demonstrate a causal relationship between our programming and our impact, and they’re particularly hard to come by in election years like 2024!
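If you're curious what that looks like in practice, here's a tiny, simplified sketch (not our actual analysis code) of one common way to estimate a p-value: a permutation test comparing a treatment group to a control group. All of the numbers and variable names below are made up for illustration.

```python
# Minimal illustration (not our real analysis): estimate a p-value for the
# difference in mean survey responses between a treatment and control group
# using a simple permutation test. All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 0-10 agreement scores from the two groups.
treatment = np.array([7, 6, 8, 5, 7, 9, 6, 8, 7, 6])
control = np.array([5, 6, 5, 4, 6, 5, 7, 5, 6, 4])

observed_diff = treatment.mean() - control.mean()

# Under the null hypothesis (ads had no effect), group labels are arbitrary,
# so we shuffle them many times and count how often a difference at least as
# large as the observed one shows up by chance.
pooled = np.concatenate([treatment, control])
n_treat = len(treatment)
n_permutations = 10_000
count = 0
for _ in range(n_permutations):
    rng.shuffle(pooled)
    diff = pooled[:n_treat].mean() - pooled[n_treat:].mean()
    if abs(diff) >= abs(observed_diff):
        count += 1

p_value = count / n_permutations
print(f"observed difference: {observed_diff:.2f}, p-value ≈ {p_value:.3f}")
```

The smaller that p-value, the harder it is to explain the observed difference as luck of the draw.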
Q: I heard someone talking about coalescing survey responses based on start date. What does that mean?
A: Attrition (participants dropping out over time) can affect surveys. For example, more than 16,000 women were recruited for this survey, but some of them dropped off between May and November. To make up for that attrition, we added new participants in June. Since those new participants were not with us to complete the baseline (starting point) survey, we treated each one's first survey response as her baseline. That's what coalescing responses based on start date means: a participant's earliest response, whenever she joined, serves as her baseline for measuring change.
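For the data-curious, here's a simplified sketch (again, not our actual pipeline) of what coalescing baselines by start date might look like: each participant's earliest completed survey is kept as her baseline, whether she joined in May or was added in June. The column names and values below are hypothetical.

```python
# Hypothetical illustration of coalescing baselines by start date: each
# participant's earliest completed survey wave is treated as her baseline.
# Column names and data are made up for illustration.
import pandas as pd

responses = pd.DataFrame({
    "participant_id": [1, 1, 2, 2, 3],
    "survey_date": pd.to_datetime(
        ["2024-05-01", "2024-07-01", "2024-06-15", "2024-08-01", "2024-05-01"]
    ),
    "economy_score": [5, 6, 4, 5, 7],
})

# Sort by date so the first row per participant is her earliest response,
# then keep that row as her baseline measurement.
baseline = (
    responses.sort_values("survey_date")
    .groupby("participant_id", as_index=False)
    .first()
)
print(baseline)
```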