
Special Series: J.D. Power & Associates – Discount Brokerage Rankings Explained Part 1

3. How is Investor Satisfaction measured?

At the heart of every survey are the people answering the questions. The team at J.D. Power begins by accessing large databases of individuals who have agreed to participate in online surveys and have identified themselves as discount brokerage account holders. Those individuals are then offered the chance to complete a survey, often in exchange for some kind of reward or incentive for their time and opinion. One example of the type of database sampled is the J.D. Power and Associates Power Panel. No client lists from any of the discount brokerages are used to find individuals, and as a result the data source is considered "third party".

Large-scale surveys, especially of opinions and experiences, are complicated to put together. To be meaningful, a survey needs enough responses to accurately reflect a cross-section of clients. On the other hand, some brands are simply more popular or have more customers, so some balancing is needed to keep all brands comparable. As such, the investor satisfaction survey requires both a minimum and a maximum number of completed client surveys each year for each discount brokerage in order for that brokerage to be included in the final reporting. For example, in the 2012 survey Credential Direct did not have enough completed surveys to warrant inclusion, and so its results do not appear in the list of discount brokerages reviewed. Each discount brokerage, depending on its representation in the database, has an associated number of surveys that must be completed so that the findings can be balanced and all companies compared fairly.

Once the data is gathered, it is analysed, and the rankings across the six categories (mentioned above) are combined into a final score. J.D. Power's reporting of investor satisfaction has two components: the first is a basic numerical score and the second is its proprietary "Power Circle" rating.

The numerical score a discount brokerage receives is based on a 1,000-point scale. An industry average is calculated by averaging the scores of all the discount brokerages. Interestingly (especially for the stats nerds), while the average score for the industry is reported, the standard deviation for this score is not, making it difficult to put the "average" into context. For example, knowing the standard deviation would help to answer whether the scores are all over the place or tightly grouped together. This is important because those considering different discount brokerages would want to know whether other investors perceive Canadian discount brokerages as essentially similar to one another or whether there is a significant advantage to be expected by going with one brand over another. [note: for a quick and easy explanation of standard deviations and why they matter, click here]
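To see why the standard deviation matters here, consider a quick sketch. The scores below are entirely hypothetical, made up for illustration only (they are not actual J.D. Power results): both sets of firms have the same industry average, but very different spreads, which an average alone would hide.

```python
import statistics

# Hypothetical 1,000-point satisfaction scores, for illustration only --
# NOT actual J.D. Power results. Both lists average the same.
tight_scores = [718, 722, 725, 728, 731, 734]    # firms perceived as similar
spread_scores = [640, 680, 710, 740, 770, 818]   # clear winners and losers

for label, scores in [("tight", tight_scores), ("spread", spread_scores)]:
    mean = statistics.mean(scores)
    stdev = statistics.pstdev(scores)  # population standard deviation
    print(f"{label}: average = {mean:.0f}, standard deviation = {stdev:.1f}")
```

Both groups report the same industry average, but the small standard deviation in the first group says the brokerages are essentially interchangeable, while the large one in the second says brand choice matters a great deal.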

For many people, numbers on their own can seem confusing. To help consumers with their decision making, J.D. Power opted to use its five-point "Power Circle Ratings". The ratings are scored as follows:

Number of Power Circles | Meaning          | Equivalent Score
5/5                     | Among the best   | Within the top 10%
4/5                     | Better than most | Next 30% of all companies
3/5                     | About average    | Next 30% (10% above survey average, 20% below)
2/5                     | The rest         | Next 30%

Source: http://www.jdpower.com/faq/faq.htm
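The tiers in the table amount to a simple mapping from a firm's position in the score ranking to a circle count. A minimal sketch of that mapping is below; the function name and the percentile-from-top convention are our own illustrative choices, not anything published by J.D. Power.

```python
def power_circles(percentile_from_top: float) -> int:
    """Map a firm's position in the ranking (0.0 = best score,
    100.0 = worst) to a Power Circle count, following the tiers in
    the table above: top 10% -> 5, next 30% -> 4, next 30% -> 3,
    remaining 30% -> 2.  Illustrative only, not J.D. Power's code."""
    if percentile_from_top < 10:
        return 5
    elif percentile_from_top < 40:
        return 4
    elif percentile_from_top < 70:
        return 3
    else:
        return 2

print(power_circles(5))   # a firm in the top 10% of scores -> 5 circles
print(power_circles(50))  # a mid-pack firm -> 3 circles
```

Note how coarse the buckets are: a firm at the 11th percentile and one at the 39th both receive four circles, which is exactly the "leeway" discussed below.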

Similar to a "star" rating system, the Power Circles represent categories of performance. In the quest to make things "simple" by using circles, there are certain advantages and disadvantages to keep in mind. The biggest advantage of the circle rating system is speed: circles offer a very quick way to see at a glance how satisfied investors are. The tiers the ratings are based on make it reasonable to assume that whoever placed first has done a better job of keeping clients satisfied than their peers.

Further down the list, however, the definitions of the ratings really matter. There is clearly a lot of leeway given to the bottom 90% of companies, i.e. the "better than most," the "about average" and "the rest."
