Three things we believe.
Every choice in this methodology resolves against three foundations. When we’re uncertain about how to score a product, we go back to these.
A score is the conclusion of an argument.
Not a vibe, not a vote, not an aggregate of customer reviews. Every contributing criterion is cited. Every source is a clickable URL. The reasoning is published next to the number. If we can’t cite it, we don’t score it.
3.0 / 5 means the market norm. Nothing more.
A 3 says: this product does what its peers do. Above 3 is real differentiation. Below 3 is a real gap. Anchoring scores to actual market behaviour, not to aspirational lab standards no product hits, is what makes them comparable.
Brand reputation never enters the score.
Marketing budgets, celebrity endorsements, country-of-origin claims, “trusted by 10,000+” banners, doctor recommendations: none of it factors in. Brand profile is collected separately as context. The product gets graded on the chemistry, the dose, the form, the evidence. Nothing else.
We don’t write copy. We craft arguments. The score has to survive scrutiny, so we build it that way.
What every score measures.
Five dimensions feed into every Kyaloon evaluation. They map to the questions a buyer should be asking but rarely is. Different supplement categories weight these differently (what matters for an omega-3 isn’t what matters for a magnesium), but the five pillars themselves don’t change.
Ingredient form
Magnesium glycinate or oxide. Whey isolate or concentrate. Triglyceride or ethyl-ester omega-3. KSM-66 or generic ashwagandha root. Form decides whether the molecule actually reaches the body.
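To make the form check concrete, here’s a minimal sketch. The categories, forms, and tier numbers below are illustrative stand-ins, not our actual rubric values (which, as noted later, aren’t public).

```python
# Illustrative form tiers per category; these numbers are stand-ins,
# not Kyaloon's actual rubric values. Higher tier = better-evidenced
# absorption or purity for that category.
FORM_TIERS = {
    "magnesium": {"glycinate": 3, "citrate": 2, "oxide": 1},
    "omega3":    {"triglyceride": 3, "ethyl_ester": 2},
    "whey":      {"isolate": 3, "concentrate": 2},
}

def form_tier(category: str, form: str) -> int:
    """Tier of the listed form; 0 if we don't recognise it."""
    return FORM_TIERS.get(category, {}).get(form, 0)

print(form_tier("magnesium", "oxide"))      # 1: cheap, poorly absorbed
print(form_tier("magnesium", "glycinate"))  # 3: well-absorbed form
```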
Dose adequacy
Benchmarked against the product’s stated use case, not against a generic clinical RCT dose. A “daily maintenance” magnesium at 100 mg is fine. A “therapeutic” magnesium at the same 100 mg is underdosed.
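As a sketch, using the example above: the only real number here is the 100 mg case from the text; the 300 mg therapeutic threshold is an assumption for illustration.

```python
# Illustrative minimum doses in mg elemental magnesium; the therapeutic
# threshold is assumed, not a published Kyaloon number.
MAGNESIUM_MIN_DOSE_MG = {
    "daily_maintenance": 100,
    "therapeutic": 300,
}

def dose_adequate(use_case: str, dose_mg: float) -> bool:
    """A dose is judged against the product's *stated* use case."""
    return dose_mg >= MAGNESIUM_MIN_DOSE_MG[use_case]

print(dose_adequate("daily_maintenance", 100))  # True: fine for maintenance
print(dose_adequate("therapeutic", 100))        # False: underdosed for the claim
```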
Safety screen
A binary pass / fail layer. Heavy metals, dose toxicity, drug-interaction risk, banned substances. Failures don’t reduce the score. Failures hide the score until the brand fixes the gate.
Independent verification
Third-party lab results, certificates of analysis, accredited testing certifications. Captured as badges around the score, never folded into the score. Lack of testing is surfaced as context, not as a penalty.
Price per effective serving
The unit-economics layer. What you actually pay for a serving that delivers the dose. Same chemistry across brands often shows a 2x to 3x spread. Captured as metadata, surfaced as context.
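The arithmetic behind that spread, sketched with invented numbers (two hypothetical magnesium products at the same stated effective dose):

```python
def price_per_effective_serving(pack_price: float, servings: int,
                                dose_per_serving_mg: float,
                                effective_dose_mg: float) -> float:
    """What one *effective* dose costs: a serving that under-delivers
    means you need more than one, which inflates the real price."""
    servings_needed = effective_dose_mg / dose_per_serving_mg
    return (pack_price / servings) * servings_needed

# Hypothetical: same chemistry, very different unit economics.
print(round(price_per_effective_serving(500, 60, 300, 300), 2))  # 8.33
print(round(price_per_effective_serving(600, 90, 100, 300), 2))  # 20.0, a ~2.4x spread
```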
A nuance worth flagging: only pillars 1 and 2 actually feed the score itself. Safety is a gate, not a multiplier. Verification is a badge. Price is metadata. We keep them apart on purpose. Bundling them together is what makes most rating systems gameable.
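One way to picture that separation is as a record whose fields never mix. This is a sketch of the idea, not our actual schema; the equal weighting of the two pillars is a stand-in, since real weights vary by category.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    # Only these two pillars feed the number.
    form_score: float                 # pillar 1: ingredient form
    dose_score: float                 # pillar 2: dose adequacy
    # Everything else stays structurally separate.
    safety_gate_passed: bool          # gate: can hide the score, never scale it
    verified: bool                    # badge: shown beside the score
    price_per_effective_serving: float | None  # metadata: context only

    @property
    def score(self) -> float | None:
        """The score exists only while the gate passes, and is never
        multiplied by safety, verification, or price."""
        if not self.safety_gate_passed:
            return None  # hidden until the brand fixes the gate
        return (self.form_score + self.dose_score) / 2  # stand-in weighting
```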
The score, the gate, the badges, the brand.
Every product page on Kyaloon shows four kinds of information. Each one means something different. We never collapse them.
The score answers one question: how good is this product at being what it claims to be? Everything else surrounds the number as context. We never blend safety, testing, or brand reputation into the score itself, because doing so makes any of those layers either unfairly disqualifying, or quietly gameable.
If a product passes the safety gate but lacks third-party verification, you’ll see the score, no Quality Verified badge, and a clearly labelled data gap. The reader decides what weight to give the absence.
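That example, expressed as a display rule over the Evaluation sketch above (field names are ours):

```python
def render_card(e: "Evaluation") -> dict:
    """What the product page surfaces: gate passed so the score shows,
    no badge, and the absence is labelled as a gap, not a penalty."""
    return {
        "score": e.score,                      # None while the gate fails
        "quality_verified_badge": e.verified,  # shown, never folded into the score
        "data_gaps": [] if e.verified else ["no third-party verification published"],
    }
```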
What ‘3 out of 5’ actually means.
Most rating systems anchor scores to an aspirational standard (what an ideal supplement would do). We anchor to the actual prevalence of a practice in the Indian-market peer set.
- Less than 10% of peers do it: a rare practice. Doing it is real differentiation.
- Roughly 30 to 50% do it: an emerging norm.
- More than 80% do it: a standard practice. Not doing it is a real gap.
A 3 / 5 on Kyaloon is genuinely meaningful. It means: this product does what its category peers actually do. Above 3 means the brand has done something the rest of the market hasn’t. Below 3 means it’s falling short of behaviour that’s already standard.
The market shifts. What’s “rare” today becomes “emerging” in a year. When the prevalence of a practice changes meaningfully, the rubric is updated and products re-evaluated. The anchor moves with the market on purpose.
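The anchoring, as a compact sketch. The band edges come from the list above; how we treat the in-between ranges (10 to 30%, 50 to 80%) is an assumption here, and the real cut-offs move with the market.

```python
def classify_practice(prevalence: float) -> str:
    """Map a practice's prevalence in the peer set to its anchor band."""
    if prevalence < 0.10:
        return "rare"      # doing it is real differentiation: pushes above 3
    if prevalence > 0.80:
        return "standard"  # not doing it is a real gap: pulls below 3
    return "emerging"      # in between: roughly market-norm territory

print(classify_practice(0.05))  # rare
print(classify_practice(0.40))  # emerging
print(classify_practice(0.90))  # standard
```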
The high-level process.
Every product goes through the same six steps. We’re describing this at a high level only. The specific tools, prompts, and per-category thresholds aren’t public, both because they would be tedious to keep in sync with the live system, and because exposing them would let bad-faith brands optimise for the rubric instead of for the buyer.
Label and listing capture
We collect the product’s full label, the brand’s public claims, e-commerce listings (Amazon, D2C, marketplaces), and any disclosed sourcing information. This is the starting point. Not the conclusion.
Independent web research
The evaluator does its own research before scoring. It searches for independent test results, recall and regulatory actions, contamination reports, and published evidence on the specific product. The brand’s own data is treated as a hypothesis to verify, never as proof.
Category rubric application
Each supplement category (protein, creatine, magnesium, omega-3, ashwagandha, melatonin, and so on) has its own evaluation rubric, built from category-specific clinical literature. The rubric scores 5 to 10 dimensions specific to that category.
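An illustrative shape for such a rubric. Every dimension name and weight below is invented; the real per-category rubrics are deliberately not public.

```python
# Hypothetical creatine rubric; dimensions and weights are invented.
CREATINE_RUBRIC = {
    "version": "creatine-v3",
    "dimensions": {              # each scored 1-5, then weight-averaged
        "form": 0.30,            # e.g. monohydrate vs proprietary blends
        "dose_adequacy": 0.30,
        "label_transparency": 0.20,
        "excipient_load": 0.10,
        "serving_practicality": 0.10,
    },
}
assert abs(sum(CREATINE_RUBRIC["dimensions"].values()) - 1.0) < 1e-9
```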
Safety gate screening
A binary pass / fail screen for contamination risk, dose toxicity above documented upper limits, banned-substance presence, and known drug-interaction risks. A failed gate hides the score. The score returns when the brand fixes the underlying issue and provides updated evidence.
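The shape of the gate, sketched: it returns reasons rather than a multiplier, and any reason at all suppresses the score. The field names and the 350 mg example limit are illustrative.

```python
def safety_gate(product: dict) -> list[str]:
    """Binary screen: an empty list is a pass; any entry hides the score.
    The four checks mirror the risks named above; field names are ours."""
    failures = []
    if product.get("contamination_flagged"):
        failures.append("contamination risk")
    if product.get("dose_mg", 0) > product.get("upper_limit_mg", float("inf")):
        failures.append("dose above documented upper limit")
    if product.get("banned_substances"):
        failures.append("banned substance present")
    if product.get("drug_interaction_risk"):
        failures.append("known drug-interaction risk")
    return failures

# Illustrative: a dose far above a documented 350 mg supplemental limit.
print(safety_gate({"dose_mg": 900, "upper_limit_mg": 350}))
# ['dose above documented upper limit'] -> score hidden until fixed
```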
Quality verification check
We document third-party lab evidence, certificates of analysis, and accredited testing certifications where present. Where absent, we report the gap. Verification status is shown as a badge around the score, never folded into the score.
Score published with full reasoning
The final score is published with every contributing criterion, the reasoning behind each, and the URL of every cited source. The data gaps (what we couldn’t find, what the brand didn’t disclose) are published alongside.
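Sketched as a record, that output looks something like this. The field names are ours; the parts (criteria, reasoning, source URLs, data gaps) are exactly what the step requires. The URL is a placeholder.

```python
published = {
    "score": 3.5,
    "criteria": [
        {
            "name": "dose_adequacy",
            "reasoning": "Delivers the label-claimed elemental dose per serving.",
            "sources": ["https://example.org/coa-batch-0423"],  # placeholder, not a real CoA
        },
        # ...one entry per contributing criterion
    ],
    "data_gaps": [
        "brand has not published a third-party CoA for this batch",
    ],
}
```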
The full evaluation for any given product lives on its product page. Click the score on any product card and scroll to “What we like” and “Concerns”. That’s the rubric output, in plain language, with sources.
What counts as a source.
Every claim in a Kyaloon evaluation has to be backed by a clickable URL. Not a study name. Not an author reference. The actual link, so anyone can check it. The sources we draw from fall into five buckets, ordered by evidentiary weight.
If we can’t find a source, we report a data gap rather than fill it with brand marketing. The product’s evaluation will explicitly say what was missing. For instance: “brand has not published a third-party CoA for this batch.”
What doesn’t affect a score.
Sometimes what a methodology refuses to count matters more than what it counts. The following never enter a Kyaloon product score, in any weighting, under any circumstance:
- Marketing budgets and advertising spend
- Celebrity, influencer, or doctor endorsements
- Country-of-origin claims
- “Trusted by 10,000+” style social-proof banners
- Brand size, age, or reputation
Most of these correlate with brand strength, not product quality. Letting any of them into the score would mean the largest brands automatically score highest, which is exactly the failure mode we exist to correct.
When scores change.
Rubrics evolve. Products are reformulated. New evidence emerges. We handle this through explicit versioning rather than silent updates.
Rubric versions
Each category rubric carries an explicit version. When the prevalence of a practice shifts enough to move the anchor, the rubric is updated, the version is bumped, and affected products are re-evaluated against the new version.
Re-evaluations
A product is re-scored when it is reformulated, when new independent evidence emerges, or when its category rubric changes version. The new evaluation replaces the old one, with its reasoning published as usual.
Safety gate
A gate failure hides the score immediately. The score returns, via a fresh evaluation, once the brand fixes the underlying issue and provides updated evidence.
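What explicit versioning might look like on a stored evaluation; a sketch with assumed field names and invented values, not our actual data model.

```python
evaluation_record = {
    "product_id": "mag-glycinate-60ct",        # hypothetical product
    "rubric_version": "magnesium-v4",          # pinned, never silently bumped
    "evaluated_at": "2025-03-01",              # illustrative date
    "supersedes": "magnesium-v3 / 2024-09-12", # the prior evaluation stays traceable
}
```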
How we make money.
We run on affiliate commissions. When you click through to a retailer from a Kyaloon product page and complete a purchase, some retailers pay us a small commission. That’s our only revenue source.
What we never do, and how it’s structurally enforced:
- Take payment from a brand to influence a score, in any form, including “preferred placement”, “verified status”, “expedited review”, or sponsored content.
- Adjust a score after publication based on commercial considerations.
- Hide a low score for any product because we earn affiliate revenue from it.
- Change the order in which we surface products based on commission tier.
- Accept editorial input from brands on the methodology.
The methodology is the firewall. As long as it’s public and auditable, any deviation is independently verifiable by anyone who cares enough to check.
What Kyaloon is not.
A few things we want stated plainly, so there’s no misunderstanding.
If you find a problem.
A methodology that can’t be challenged isn’t a methodology. If you think a Kyaloon score is wrong, a source is mis-cited, a safety gate misapplied, or a piece of evidence missed, we want to hear it.
Open the product page, click Report an issue, and tell us what’s wrong. Every report is reviewed. If we agree, we re-evaluate and publish the corrected score with the new reasoning attached.