VLAB reinvents the way research software is created through an inexpensive, reusable, nonprogrammer software toolkit with an extensive host program for pooling user-created components. An intuitive user interface provides an "executable diagram" of an experiment, templates, and step-by-step instructions for setting up an experiment, including task specification, stimulus generation, sequencing, timing, feedback, and recording. OLE/COM design enables component sharing and recycling as well as easy interfacing with other equipment, such as eye-trackers, video, and recording devices. Applications include classical, exploratory, and applied research. Phase I developed the infrastructure with startup templates for evaluation. Evaluation demonstrated the flexibility, extensibility, and ease of use of VLAB and verified that other authoring systems do not offer effective and affordable alternatives. Phase II completes the core program and develops a web-based database of pre-built templates, a host system for third-party applications, and a full on-line user support system. VLAB enables the construction of dynamic graphical displays, including moving objects, and tests involving reading, letter counting, searching, identifying, aligning, and fatigue measurement. Other applications include adaptive technology, education, engineering, law, and the social sciences. Analysis of data from user applications will help the research community prepare for do-it-yourself trends in research software technology.

PROPOSED COMMERCIAL APPLICATION: Commercial development and distribution of a software toolkit for basic and clinical vision research and practice, and for evaluating computer-based vision and reading aids. Potential extensions include use in human performance and human/machine interface research, adaptive technology, reading and literacy research, science education, visual ergonomics, and stroke/brain injury rehabilitation.