The document presents two neural network models for named entity recognition (NER) that require no language-specific resources: an LSTM-CRF model and a transition-based stack-LSTM (S-LSTM) model. The LSTM-CRF model labels input sequences with a bidirectional LSTM layer followed by a CRF layer, while the S-LSTM model constructs labeled entity chunks directly through a sequence of transition actions. Both models represent each word by concatenating a character-level representation, computed by a bidirectional LSTM over the word's characters, with a word embedding. Evaluated on four languages (English, Spanish, Dutch, and German), the models achieve state-of-the-art performance on three of them without using external labeled data.
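To make the shared word-representation scheme concrete, below is a minimal sketch (not the authors' code) of how a character-level bidirectional LSTM's final states can be concatenated with a word embedding to form each word's representation; the module name `CharWordEncoder`, the layer sizes, and the input conventions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    """Sketch: word representation = word embedding + char-BiLSTM states."""

    def __init__(self, n_chars, n_words, char_dim=25, char_hidden=25, word_dim=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # Bidirectional LSTM run over the characters of a single word.
        self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                 bidirectional=True, batch_first=True)
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.out_dim = word_dim + 2 * char_hidden

    def forward(self, word_ids, char_ids):
        # word_ids: (seq_len,) tensor of word indices
        # char_ids: list of (word_len,) tensors, one per word
        reps = []
        for w, chars in zip(word_ids, char_ids):
            _, (h, _) = self.char_lstm(self.char_emb(chars).unsqueeze(0))
            # Concatenate the final forward and backward hidden states,
            # giving a character-level view of the whole word.
            char_rep = torch.cat([h[0, 0], h[1, 0]], dim=-1)
            reps.append(torch.cat([self.word_emb(w), char_rep], dim=-1))
        return torch.stack(reps)  # (seq_len, out_dim)

# Illustrative usage: vocabulary sizes here are made up.
enc = CharWordEncoder(n_chars=80, n_words=10000)
words = torch.tensor([4, 17, 9])
chars = [torch.tensor([3, 1, 4]), torch.tensor([2, 7]), torch.tensor([5, 5, 6, 1])]
print(enc(words, chars).shape)  # torch.Size([3, 150])
```

In the full architecture, these per-word vectors would then feed the sentence-level bidirectional LSTM (with a CRF on top) or the stack-LSTM's transition system; both of those components are omitted here for brevity.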