What Users Are Really Asking About the Higg Index

by Lee Green, vice president of communications and marketing at Cascale

Most sustainability systems are designed with outputs in mind.

Scores, benchmarks, disclosures, reports.

But if you want to understand where things are actually working or breaking down, it’s often more useful to look one step earlier. Not at the data itself, but at the questions people ask when they’re trying to produce it.

Over the past couple of months, we’ve been looking more closely at anonymized user questions submitted through a support feature within HowToHigg that is designed to help users navigate Higg Index guidance more effectively.

HowToHigg supports users across the full suite of Higg Index tools, which are built on Cascale’s methodologies and framework; the tools themselves are available exclusively through Worldly, the most comprehensive sustainability data and insights platform.

From a communications and engagement perspective, these questions are particularly useful. They don’t necessarily reflect the issues users encounter once inside the tools, or the detailed feedback captured through formal channels. But they do highlight where guidance, interpretation, and understanding may need to be strengthened, often before or alongside direct tool use.

Across more than 400 user questions, several consistent themes emerged. Taken together, they offer a useful lens into where users are seeking clarity, and where interpretation may begin to diverge.

A large share of questions focused on Higg FEM verification procedures. How to select Verification Bodies, what deadlines apply, how verifier rotation works, and the difference between self-assessment and verified scores. These are not edge cases. They sit at the core of how data becomes credible and comparable.

We also saw frequent questions around data classification and reporting methodology. How to distinguish between hazardous and non-hazardous waste. How to classify water use. How energy sources align with GHG Protocol scopes. These are the kinds of decisions that seem small in isolation but have a direct impact on consistency when applied across thousands of facilities.

Another cluster of questions related to cadence, deadlines, and module access, including reporting timelines and purchasing requirements. Again, not complex in theory, but critical in practice when companies are managing reporting across multiple teams and regions.

Questions around scoring logic and weighting came up repeatedly as well. Whether Level 2 and Level 3 questions are scored. How sub-questions contribute to final scores. What happens when zero-tolerance issues are identified. These are the mechanics behind the numbers, and understanding them is key to interpreting results correctly.

Some questions also pointed to platform access and functionality, reinforcing the importance of close coordination between Cascale’s methodologies and guidance, and Worldly’s platform delivery.

It’s important to be clear about what this is, and what it isn’t.

These insights are not a substitute for the detailed feedback gathered through formal channels such as Zendesk, direct user engagement, or module-specific support. Those remain critical for identifying and resolving specific issues within the tools themselves.

What this layer of questions offers is something slightly different. An earlier view into how users approach Higg Index guidance, and where additional clarity may be needed before or alongside engaging directly with the tools.

As the primary guidance platform for the Higg Index, HowToHigg plays a critical role in shaping how methodologies are understood and applied. And in that context, the questions users ask are often the first indication of where interpretation may begin to diverge.

If we want consistent, comparable data, that layer matters.

Because even the most robust methodologies rely on consistent understanding in practice. And every unclear definition, every misinterpretation, and every point of confusion has the potential to show up downstream.

So the takeaway is a simple one.

Pay attention to the questions.

They don’t just reflect what users don’t know. They point to where we can make the system clearer, more accessible, and ultimately more consistent in how it’s applied.