The Devil is in the Details
While numerical rankings are incredibly seductive, they are also the most contentious feature of Tickerscores.
As with any ranking system, how the rankings are generated is the most critical factor in determining whether those numbers are reliable enough to base decisions on. In other words, what goes into a ranking determines what the resulting number actually means and whether the scores are useful.
Take, for example, the “management team” category that forms part of the overall score. One of the several variables feeding the management score is whether the team has learned to ramp up production efficiently and control costs effectively.
While many investors would intuitively agree that company performance is positively associated with efficient production and well-controlled costs, defining exactly how much (numerically) it matters is a far more complex task. Other components that go into the management score include:
- Years of experience of the management team
- The “track record” of success
- Administrative/Operating expenses
- Whether the management team owns shares
Debating these seemingly small factors may feel academic, but in reality the combined effect of many small factors is exactly what could make or break Tickerscores as a valuable research tool: it relies on combining many different inputs into a single numerical and pictorial representation of the ‘strength’ of a particular company.
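To make the mechanics concrete, here is a minimal sketch of how a composite category score could be assembled from weighted sub-factors. The factor names, weights, and 0–100 scale are all assumptions for illustration; Tickerscores’ actual inputs and weighting scheme are not disclosed here.

```python
# A purely illustrative sketch of a weighted composite score, assuming
# sub-factors are pre-scored on a 0-100 scale. Factor names and weights
# are hypothetical, not Tickerscores' actual methodology.

MANAGEMENT_WEIGHTS = {
    "years_experience": 0.25,   # years of experience of the management team
    "track_record": 0.30,       # past record of success
    "expense_control": 0.25,    # administrative/operating expense control
    "insider_ownership": 0.20,  # whether management owns shares
}

def management_score(factors: dict) -> float:
    """Weighted sum of sub-factor scores; weights sum to 1.0."""
    return sum(weight * factors[name]
               for name, weight in MANAGEMENT_WEIGHTS.items())

# Example: strong experience and ownership, weak cost control.
print(management_score({
    "years_experience": 80,
    "track_record": 70,
    "expense_control": 40,
    "insider_ownership": 90,
}))  # 69.0
```

The point of the sketch is simply that small shifts in any one weight ripple through every company’s final number, which is why debating the “small” factors matters.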
So what makes a ‘good score’? Typically, higher numbers signal stronger companies. However, when asked what constitutes a “good” score, the lead analyst on the Tickerscores project, Rob Furhman, characterized the value of Tickerscores as “not about [finding] what’s good, but eliminating what’s not good.”
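Furhman’s framing suggests a screening approach: rather than ranking to pick winners, filter out anything below an acceptable threshold. A small hypothetical sketch (the company names, scores, and cutoff are invented):

```python
# Hypothetical screening pass in the spirit of "eliminating what's not
# good": drop any company whose composite score falls below a cutoff.

scores = {"Alpha Mining": 72, "Beta Gold": 41, "Gamma Metals": 58}
CUTOFF = 50  # invented minimum acceptable composite score

survivors = {name: s for name, s in scores.items() if s >= CUTOFF}
print(survivors)  # {'Alpha Mining': 72, 'Gamma Metals': 58}
```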
[…] published an in-depth preview of Tickerscores over the summer, citing that: “Tickerscores does have the ingredients to […]