Exam development - our most crucial task.

The development and maintenance of certification exams is one of the most important tasks undertaken by the ACAC. Creating a credible certification exam involves several stages: job/task analysis, item development and assessment, exam construction and review, and cut score determination. To ensure that Council exams accurately reflect their rapidly changing fields, our certification boards guide each exam through every stage of this process.

Download ACAC's Assessment Development Procedures

  • The certification board authorizes and undertakes a job analysis study to define the job-related activities, knowledge and skills required of a competent field professional. The board uses the DACUM method, a group-based research technique that relies on the knowledge of industry experts to define the domains of knowledge and practice essential to the field being certified. The DACUM process generates a consensus-based description of the knowledge and skills of a successful practitioner, which is then used to develop the content domains of the certification exam. The results of the DACUM session are validated by a committee of the board formed for that purpose.

    The Job/Task Analysis is reviewed annually to confirm that it continues to reflect current industry practice. The certification boards may authorize a completely new Job/Task Analysis every five years.

  • The certification board oversees the development of all examination items. The board employs or retains experienced item writers or solicits the assistance of certified personnel. Item writers are given specific instructions as to the format and content requirements for each item:

    • Items must be coded to specific domains of the exam specification.

    • Items must be relevant to the skill or knowledge base.

    • Items must be classified according to level of difficulty.

    • Items must be free of bias due to gender, race, ethnicity, religion, etc.

    Exams (and their items) are designed to separate candidates into two distinct groups: those whose knowledge and skill levels are deemed acceptable for certification, and those whose level of competence falls below the minimum required for certification.

    Functioning as an item review committee, the board uses a content validity ratio (CVR) procedure to evaluate each item's relation to the content area (a sketch of this calculation follows). Items the board deems acceptable at this point are re-checked for grammar and style and then added to the item banks according to their domain classification.
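
    The board's own review worksheet is not reproduced here, but the widely used Lawshe formulation of the content validity ratio is CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item essential and N is the panel size. The minimal Python sketch below illustrates that calculation; the item identifiers, panel size and acceptance threshold are hypothetical, not ACAC values.

        def content_validity_ratio(n_essential, n_panelists):
            """Lawshe CVR: (n_e - N/2) / (N/2); ranges from -1.0 to +1.0."""
            half = n_panelists / 2
            return (n_essential - half) / half

        # Hypothetical panel of 10 reviewers; counts of "essential" ratings per draft item.
        essential_counts = {"ITEM-001": 9, "ITEM-002": 6, "ITEM-003": 3}
        THRESHOLD = 0.62  # illustrative critical value for a 10-person panel

        for item_id, n_e in essential_counts.items():
            cvr = content_validity_ratio(n_e, n_panelists=10)
            verdict = "retain" if cvr >= THRESHOLD else "revise or discard"
            print(f"{item_id}: CVR = {cvr:+.2f} -> {verdict}")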

  • The certification board oversees the construction of each exam and annually reviews each exam for reliability. This process includes a number of important steps:

    • The board creates an exam specification based on the domains identified during the Job/Task Analysis. The specification defines the number of topics covered by the exam, the number of items required for each topic, the number of dummy (unscored pretest) items included in each exam for purposes of pretest analysis, and the rotation of items on and off of examinations based on statistical item assessment. (A simple blueprint sketch follows this list.)

    • The board authorizes the assembly of a draft exam and reviews it for content coverage, item redundancy and accuracy of the answer key.
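
    As an illustration of how an exam specification can drive assembly and review, the sketch below encodes a blueprint as a simple mapping from domains to required item counts and checks a draft form against it. The domain names, counts and function names are hypothetical, intended only to make the steps above concrete.

        from collections import Counter

        # Hypothetical blueprint: scored items required per domain, plus unscored pretest items.
        BLUEPRINT = {"Assessment": 30, "Intervention": 40, "Ethics": 20}
        PRETEST_ITEMS = 10  # "dummy" items seeded for statistical analysis

        def check_draft(draft_items):
            """Compare a draft form (list of (item_id, domain) pairs) to the blueprint."""
            counts = Counter(domain for _, domain in draft_items)
            for domain, required in BLUEPRINT.items():
                found = counts.get(domain, 0)
                flag = "OK" if found == required else f"expected {required}"
                print(f"{domain}: {found} items ({flag})")

        check_draft([("Q1", "Assessment"), ("Q2", "Ethics")])  # toy draft; would fail review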

  • The certification board conducts an annual item assessment survey, in which statistical information is gathered and analyzed for every item in each examination. Test items with either of the following undesirable statistical characteristics are flagged for replacement (both statistics are sketched after this list):

    • Items whose difficulty indices (expressed as a proportion of candidates answering the item incorrectly) are too high.

    • Items whose discrimination indices are either too low or negative. The discrimination index is expressed as a point-biserial correlation coefficient, which measures the relationship between candidates' answers to a particular item and their scores on the test as a whole; it indicates how well an item distinguishes between high-scoring and low-scoring candidates.
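
    A minimal sketch of both statistics, assuming dichotomously scored (0/1) items: the difficulty index is computed here as the proportion answering incorrectly, matching the definition above, and the point-biserial is the simple (uncorrected) correlation between item score and total score. The response data and the flagging thresholds are hypothetical, not ACAC cut-offs.

        import numpy as np

        def item_statistics(item_correct, total_scores):
            """item_correct: 0/1 responses to one item; total_scores: candidates' total test scores."""
            item_correct = np.asarray(item_correct, dtype=float)
            total_scores = np.asarray(total_scores, dtype=float)
            p = item_correct.mean()                      # proportion answering correctly
            difficulty = 1.0 - p                         # proportion incorrect, as defined above
            m1 = total_scores[item_correct == 1].mean()  # mean total score, item answered correctly
            m0 = total_scores[item_correct == 0].mean()  # mean total score, item answered incorrectly
            point_biserial = (m1 - m0) / total_scores.std() * np.sqrt(p * (1 - p))
            return difficulty, point_biserial

        # Hypothetical data and screening thresholds.
        difficulty, r_pb = item_statistics([1, 1, 0, 1, 0, 0, 1, 1],
                                           [42, 38, 25, 40, 30, 22, 45, 37])
        if difficulty > 0.80 or r_pb < 0.20:
            print("flag item for replacement")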

  • The board conducts annual reviews of the reliability of each examination to ensure error-free processing and internal consistency. An ongoing relationship with Kryterion is maintained for computer-based exam delivery and scoring. Randomization of items eliminates the need for multiple forms of the same examination and keeps reliability indices consistent across administrations. The board uses the Kuder-Richardson Formula 20 (KR-20) to establish an internal consistency coefficient for each examination; this coefficient measures the homogeneity of the content area tested by the exam (the calculation is sketched below). Examinations whose KR-20 coefficient falls below 0.8 are flagged for further review and modification.
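
    For reference, KR-20 = (k / (k - 1)) * (1 - sum(p_j * q_j) / var(X)), where k is the number of items, p_j is the proportion answering item j correctly, q_j = 1 - p_j, and var(X) is the variance of candidates' total scores. The sketch below computes it for a tiny hypothetical response matrix; an actual administration would involve far more candidates and items.

        import numpy as np

        def kr20(responses):
            """KR-20 internal consistency for a 0/1 response matrix of shape (candidates, items)."""
            X = np.asarray(responses, dtype=float)
            k = X.shape[1]                          # number of items
            p = X.mean(axis=0)                      # proportion correct per item
            q = 1.0 - p
            total_var = X.sum(axis=1).var(ddof=1)   # variance of candidates' total scores
            return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

        # Hypothetical 5-candidate x 4-item response matrix.
        r = kr20([[1, 1, 0, 1],
                  [1, 0, 1, 1],
                  [0, 0, 0, 1],
                  [1, 1, 1, 1],
                  [0, 1, 0, 0]])
        print(f"KR-20 = {r:.2f}")  # flag the exam if this falls below 0.8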

  • The certification board uses the Modified Angoff Method to determine a criterion-referenced passing score for each examination. Under this method, certification board members discuss the characteristics that distinguish a minimally qualified certificant from an individual who should not be certified and develop a profile of the “borderline” candidate. Each board member then independently asks the Angoff question of every item on the examination: “What percentage of borderline candidates WILL answer this question correctly?” The members’ ratings for each item are averaged, and the mean of these item averages across the entire examination becomes the passing score (a worked sketch follows).
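
    A worked sketch of the averaging step, using a hypothetical panel of three raters and a four-item exam; all ratings are invented for illustration.

        # Each row is one board member's Angoff judgments (percent of borderline candidates
        # expected to answer each item correctly); columns are items.
        ratings = [
            [70, 55, 80, 60],   # rater 1
            [65, 60, 75, 55],   # rater 2
            [75, 50, 85, 65],   # rater 3
        ]

        n_items = len(ratings[0])
        # Average the raters' judgments for each item, then average across items.
        item_means = [sum(r[i] for r in ratings) / len(ratings) for i in range(n_items)]
        passing_score = sum(item_means) / n_items
        print(f"Item means: {item_means}")
        print(f"Criterion-referenced passing score: {passing_score:.1f}%")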