Neural networks have proved highly effective at extracting information from text. However, noisy micro-text has proved particularly difficult because low-level syntactic cues are much less useful. In this project, we investigate how information produced by semantic, expectation-based symbolic models can be injected into a neural net architecture to improve extraction accuracy on user-generated text that is short, noisy, and contains new, emerging entities. Our work in Phase I demonstrated the feasibility of our approach, and we were able to produce an implementation that exceeded the state of the art in experiments extracting and linking entities in noisy text. In Phase II, we have three objectives. First, we intend to improve the architecture further, to produce significant additional gains in accuracy based on ideas developed in Phase I. Second, we intend to expand the range of knowledge sources that the architecture can take advantage of, and also show that the approach can generalize to other types of extraction tasks, such as relation extraction. Third, we plan to implement an end-to-end system, complete with APIs to enable rapid integration into larger NLP systems and a user interface for training and evaluation. From a practical point of view, our research, if successful, will not only improve extraction accuracy on noisy text, but will also address one of the major problems with neural networks in dynamic domains where new entities emerge frequently. Neural models are trained on a frozen "snapshot" of the world, and the stored information can only be updated through potentially costly retraining or fine-tuning. Our work will allow systems to be updated much more rapidly and effectively when the world changes.