SBIR-STTR Award

Measuring Ability When Some Examinees Cheat
Award last edited on: 10/29/2018

Sponsored Program
SBIR
Awarding Agency
DOD : Navy
Total Award Amount
$384,930
Award Phase
2
Solicitation Topic Code
N94-236
Principal Investigator
Michael Levine

Company Information

Algorithm Design & Measurement Service (AKA: ADAMS)

905 Shurts Street
Urbana, IL 61801
   (217) 384-7615
   N/A
   N/A
Location: Single
Congr. District: 13
County: Champaign

Phase I

Contract Number: N66001-95-C-7015
Start Date: 6/8/1995    Completed: 12/8/1995
Phase I year
1995
Phase I Amount
$87,857
Project objectives, briefly described:

i. Improve the accuracy of measurement for cheating examinees by using a scoring method that takes account of the credibility of the examinee's pattern of wrong answers.
ii. Provide an adjunct to on-line calibration that would assist in timing the replacement of items.
iii. Identify test sites and recruiters that are associated with high rates of item compromise.
iv. Improve the accuracy of measurement of normal examinees by utilizing ancillary information such as frequency of item usage and likelihood of item compromise at the examinee's testing site.

These objectives are to be achieved by modeling the probability that an item has been compromised and the probability that a particular examinee has previewed one or more items. The parameters of the models are to be estimated from both group and individual data. Accuracy of estimation is to be improved by explicitly incorporating information about test compromise in the calculation of the individual examinee's posterior ability distribution. Statistical tests are to be used to identify compromised items. Statistical tests and estimated distributions of aberrance are to be used to identify test sites and recruiters involved in test compromise.
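The posterior-ability calculation described above can be sketched as a simple mixture model. Everything in this sketch is an illustrative assumption rather than the project's actual method: the 2PL item response function, the flat grid prior, the 0.98 success rate for previewers, and all function and parameter names are hypothetical.

```python
import numpy as np

def posterior_ability(responses, a, b, compromise_prob, preview_prob,
                      theta_grid=np.linspace(-4, 4, 161)):
    """Posterior distribution over ability (theta) for one examinee.

    Hypothetical sketch: each response is modeled as a mixture of an
    ordinary 2PL IRT response and a "previewed" response that is almost
    always correct.  `compromise_prob[i]` is P(item i is compromised);
    `preview_prob` is P(this examinee previews compromised items).
    """
    # 2PL probability of a correct response at each grid point
    p_irt = 1.0 / (1.0 + np.exp(-a[:, None] * (theta_grid[None, :] - b[:, None])))
    # With probability q_i the response reflects preview, not ability
    q = (compromise_prob * preview_prob)[:, None]
    p_correct = (1 - q) * p_irt + q * 0.98  # previewers nearly always correct
    # Likelihood of the observed 0/1 response vector at each theta
    likes = np.where(np.asarray(responses)[:, None] == 1, p_correct, 1 - p_correct)
    posterior = np.prod(likes, axis=0)      # flat prior over the grid
    posterior /= posterior.sum()
    return theta_grid, posterior
```

Under this kind of model, the same formula applies to every examinee: when compromise probabilities are low the posterior is essentially the ordinary IRT posterior, so honest examinees are largely unaffected.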

Benefit:
(i) More accurate measurement of both normal and cheating examinees. (ii) A disincentive for cheating. (iii) A measure of the extent of compromise that can be used to time adaptive test item placement. (iv) A disincentive for dishonest recruiters.

Keywords:
ability measurement, Item Response Theory, adaptive testing, test compromise, achievement measurement, Modeling, aptitude measurement

Phase II

Contract Number: N66001-96-C-7017
Start Date: 9/30/1996    Completed: 9/30/1998
Phase II year
1996
Phase II Amount
$297,073
Cheating on a small number of items can have a large effect on adaptive multiple-choice test scores. In Phase I it was shown that it is possible to compute conventional test scores in a way that does not adversely affect honest examinees: the scores of cheating examinees are substantially lowered. The same scoring formula is applied to all examinees; it is not necessary to classify examinees as honest or cheating. In an environment in which cheating occurs, the corrected-for-cheating scores provide better measurement than number-right scores. These results have been shown with simulation data. In Phase II, score correction is applied to adaptive tests. Evaluation experiments with actual data are proposed. Simultaneous application to several subtests is developed. Complex forms of cheating are considered. Correction-for-cheating scores are to be developed for multidimensional tests. Results are to be evaluated with operational data. Additionally, the proportions of people cheating on exactly one item, exactly two items, and so on are to be estimated. The proportions of people cheating on particular items and item pairs are to be estimated. A screening method for identifying compromised items is to be developed. Integration of score-compromise methods with routine adaptive-test maintenance tasks such as item calibration, item replacement methods, and equating will be attempted.
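A screening statistic of the kind mentioned above might, for example, compare an item's observed proportion correct against its model-expected proportion. The one-sided z-test below is a hypothetical sketch under that assumption, not the screening method actually developed in the project; the function name and interface are illustrative.

```python
import math

def compromise_z(observed_correct, n, expected_p):
    """Hypothetical z-statistic for screening one item for compromise.

    A large positive z flags an item whose observed proportion correct
    exceeds the IRT-expected proportion `expected_p` by more than
    sampling error alone would explain across `n` administrations.
    """
    p_hat = observed_correct / n
    se = math.sqrt(expected_p * (1 - expected_p) / n)
    return (p_hat - expected_p) / se

# Example: an item expected to be answered correctly 50% of the time
# but actually answered correctly 300 times out of 400 yields z = 10,
# a strong signal that the item may have leaked.
```

In practice such a screen would feed into the routine maintenance tasks the abstract mentions, e.g. triggering item replacement when an item's statistic crosses a threshold.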

Benefit:
(i) More accurate measurement of both normal and cheating examinees. (ii) A disincentive for cheating. (iii) Amelioration of the effects of language difficulties and other concomitants of low socioeconomic status when evaluating the educational achievements of school districts.

Keywords:
adaptive testing, achievement measurement, test compromise, Item Response Theory, aptitude measurement, Modeling, ability measurement