This proposal describes a novel method for synthesizing speech that incorporates acoustic cues enabling listeners to judge the distance at which material is spoken. Currently, there is no standard procedure for providing distance information in synthetic speech. The project will identify the acoustic cues listeners rely on most when judging the distance of spoken material and incorporate them into the speech signal. The approach's efficacy will be evaluated through listening tests that measure listeners' ability to estimate the distance of spoken material presented over headphones. The results of this evaluation will be used to refine how distance information is incorporated into the speech signal, using a special-purpose synthesis system.

The manner in which a talker speaks depends, in part, on the talker's distance from the listener. This facet of articulation is often referred to as "vocal effort," and is known to enhance a listener's ability to estimate distance. The project will focus on evaluating the efficacy of such talker-intrinsic cues for judging the distance of spoken material, and will build this knowledge into a synthesizer capable of simulating speech spoken over a range of distances.
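For illustration only, two well-known listener-extrinsic distance cues that such a system might manipulate are overall level (roughly inverse with distance) and high-frequency attenuation from air absorption. The sketch below applies both to a signal; the function name, parameter values, and filter design are hypothetical assumptions for demonstration, not the proposal's actual synthesis method, and the talker-intrinsic vocal-effort cues the project targets are not modeled here.

```python
import numpy as np

def simulate_distance(signal, distance_m, fs=16000, ref_distance_m=1.0):
    """Apply two simple extrinsic distance cues to a signal (illustrative only):
    1. Inverse-distance level attenuation (~1/r relative to a reference distance).
    2. A one-pole low-pass filter standing in for high-frequency air absorption,
       with the cutoff lowered as distance grows.
    """
    gain = ref_distance_m / max(distance_m, ref_distance_m)
    # Cutoff shrinks with distance: e.g. 8 kHz at 1 m, down toward 1 kHz far away.
    cutoff_hz = max(1000.0, 8000.0 / max(distance_m, 1.0))
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)  # one-pole coefficient
    out = np.empty(len(signal), dtype=float)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)  # first-order low-pass recursion
        out[i] = gain * y
    return out

# Example: a 100 ms noise burst rendered at 1 m vs. 8 m.
rng = np.random.default_rng(0)
burst = rng.standard_normal(1600)
near = simulate_distance(burst, 1.0)
far = simulate_distance(burst, 8.0)
```

Because the far rendering is both attenuated and low-pass filtered, its RMS level is lower than the near rendering's, mimicking the reduced loudness and duller timbre of distant speech.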