Whether consciously or subconsciously, all of us are exposed on a daily basis to measurement situations. Sometimes we are measuring others, sometimes we are measuring ourselves and sometimes we are the ones being measured. In most situations these measurements will have little impact as far as our lives are concerned, but sometimes they can result in more serious consequences.
In the case of regulatory examinations like those already imposed by financial services regulators like the FSA and those about to be imposed by the MCCB, the consequences of success or failure are indeed serious. Examination failure means brokers can no longer work in that arena. In short, a broker’s livelihood is at stake.
So, what can be done to ensure that examinations measure what they are supposed to measure, and do so consistently? The question needs to be considered in its wider context.
To begin with, it is important to be clear about what is being measured.
If a candidate passes a regulatory examination (CeMAP or MAQ, for example) then the inference drawn is that the individual is suitably qualified to give mortgage advice. The examination is expected to be a good test of professional competence in the mortgage arena. But how is such a test derived?
For mortgage advice, there needs to be a thorough job analysis that links the content of the examination directly to all aspects of the work of a mortgage adviser. This would include knowledge and understanding of the Mortgage Code, the range of products on the market and the mortgage application process.
This examination is not, and should not be, a test based simply on subjects the examiners think should be tested; instead it should be based on what mortgage advisers need to understand and be able to explain to clients.
Assuming the mortgage examinations are based on a job analysis, it is important to consider what type of questions should be used. There are two basic categories of question: objective, where there is a definite right or wrong answer, and subjective, where the examiner has to form a view based on the candidate’s free-format answer.
The choice will depend on a number of factors, the most important being the competence to be measured.
Objective questions are most commonly used to test knowledge. However, it is generally accepted that a talented question writer can construct objective questions that not only assess a candidate’s ability to recall information, but also test their skills of comprehension, application, analysis or synthesis in arriving at the correct answer. Objective questions are the most appropriate type to use for competence assessments. The use of objective testing is well established in the first two papers of CeFA and the FPC, in the first two papers of CeMAP, and in parts of CeMAP Bridge and MAQ.
In fact, many experts agree that open-response questions are necessary only where there is a need to measure the candidate’s skills in written expression, originality of thought or synthesis of information.
A question many examining bodies are now asking themselves is how best to use technology to improve both the results they get from examinations and the service they provide to candidates.
In today’s world, it is simply unacceptable that examinations are only available every two or three months and that results take several days, if not weeks, to come through.
In this respect, the Institute of Financial Services (ifs) should be congratulated for taking a proactive stance by enabling candidates to sit examinations for CeMAP papers one and two at the Prometric Thomson Learning testing centres located throughout the UK.
This means candidates are able to attend test centres that are geographically more convenient, at times better suited to their own circumstances, and to receive their results immediately.
ifs is also currently engaged in a project to convert Paper 3 and the ‘Bridge’ to Prometric testing, which will be available later this year.
Those advisers who meet the necessary standards and pass the regulatory examinations derive substantial economic and social benefits, but they also bear considerable personal and professional responsibility.
Accordingly, it is not unreasonable to expect examining bodies to provide evidence that their systems and practices are reasonable, fair and founded on some sound, rational basis.
Over the course of the next few months, some of the issues that confront professional bodies in their efforts to ensure their examinations are a fair and accurate measure of competence will be examined in greater detail.