USING A LARGE-SCALE DEEP LEARNING METHOD TO PREDICT PUNCTUATION IN THE TELUGU LANGUAGE


Abstract

Punctuation plays an important role in language processing. However, automatic speech recognition systems output only plain word sequences, so it is desirable to predict punctuation from plain word sequences. Earlier works focus on using lexical features or prosodic cues captured from small corpora to predict simple punctuation. Compared with simple punctuation, rich punctuation carries more meaningful information and is more challenging to predict. In this paper, a Large Scale Deep Learning (LSDL) model is proposed to predict rich punctuation on large-scale corpora. Experiments are performed on both in-domain and out-of-domain punctuation prediction datasets. The experimental results show that LSDL significantly outperforms the original CRF-based model. Furthermore, large-scale corpora are shown to bring large improvements, and introducing POS tags and chunking information into the LSDL model improves performance on small corpora.

 INTRODUCTION

Nowadays, with the fast development of information technology, enormous amounts of information are created and disseminated, a large part of which is speech data. The most common way to analyze speech data is to convert it into text so that natural language processing techniques, such as sentiment analysis, information extraction, and machine translation, can be applied. Research [2] has shown that punctuation is essential for this downstream processing. However, the outputs of most automatic speech recognition (ASR) systems [1] consist simply of streams of words.

There has been some research on this problem, known as punctuation prediction or punctuation recovery. Most earlier works rely on lexical features or prosodic cues [2]. In such cases, supervised learning techniques are used, but there is no large, high-quality corpus for training such models, especially for the Telugu language. Most research focuses on small corpora, such as PTB and CTB (Telugu Treebank), and manual speech transcriptions. Due to commercial and cost factors, large-scale manual transcriptions are unavailable to the general public, and public corpora are usually small and cover only a few categories. Yet the abundance and variety of training data are of great importance for a punctuation prediction model to achieve high accuracy, reliability, and generalization ability.
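As a concrete illustration of this supervised formulation, the short Python sketch below turns punctuated text into (word, label) training pairs. It is a minimal sketch: the label inventory and function names are illustrative assumptions, not the paper's specification.

    # Sketch: converting punctuated text into (word, label) pairs for
    # supervised punctuation prediction. The label set below is an
    # illustrative assumption, not the label set used in the paper.
    PUNCT_LABELS = {",": "COMMA", ".": "PERIOD", "?": "QUESTION", "!": "EXCLAMATION"}

    def text_to_pairs(tokens):
        """Label each word with the punctuation mark (if any) that follows it."""
        pairs = []
        for i, tok in enumerate(tokens):
            if tok in PUNCT_LABELS:
                continue  # punctuation tokens become labels, not inputs
            following = tokens[i + 1] if i + 1 < len(tokens) else None
            pairs.append((tok, PUNCT_LABELS.get(following, "NONE")))
        return pairs

    # A punctuated sentence becomes a label sequence over the plain words,
    # mirroring an ASR transcript (input) and its punctuation labels (output).
    print(text_to_pairs("where are you going ? come here .".split()))
    # [('where', 'NONE'), ('are', 'NONE'), ('you', 'NONE'),
    #  ('going', 'QUESTION'), ('come', 'NONE'), ('here', 'PERIOD')]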

Text materials can be divided into two types according to their writing style: formal text and informal text. News media is a typical genre of formal text, which generally has standard punctuation and structure. A larger part of the text on the web, however, is informal text, such as posts and microblogs (e.g., Twitter and Weibo). Speech transcriptions are generally much closer to informal text, because colloquial speech contains words and phrases used in ordinary conversation, including slang, idioms, and abbreviations.

In this paper, we focus on rich punctuation prediction for the Telugu language. To address the problems caused by limited training data and the high cost of manual labeling, we use a large-scale corpus gathered from various sources, including Wikipedia, news, Weibo, and real speech transcriptions. In our experiments, we show that large-scale corpora bring larger improvements than small corpora.

Punctuation prediction is generally treated as a sequence labeling task, with punctuation marks regarded as labels of the words they follow. Currently, recurrent neural networks (RNNs), especially long short-term memory (LSTM) RNNs, are a dominant approach for sequence labeling, but few works have applied RNNs to punctuation prediction, apart from [3]. That work introduced a two-stage Large Scale Deep Learning (LSDL) model to restore punctuation in speech transcriptions using both lexical features and prosodic cues, reducing the error by at most 16% compared with the 4-gram+DT-p method, which shows the promising power of deep learning methods. To avoid manual work, we use only lexical features as the input of LSDL. [4] demonstrates that a multi-view learning framework can offer similar performance, so we add POS tags and chunking information to our model within a multi-view learning framework. By introducing this additional information, our LSDL model achieves improvements on small corpora.
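The following PyTorch sketch illustrates this sequence-labeling setup: a bidirectional LSTM tags each word with a punctuation class, and a POS-tag embedding is concatenated with the word embedding as a second input view (chunking features could be added the same way). It is a rough sketch under assumed sizes and names, not the authors' LSDL implementation.

    import torch
    import torch.nn as nn

    class PunctuationTagger(nn.Module):
        """Sketch of a BiLSTM sequence labeler for punctuation prediction.
        Word and POS-tag embeddings are concatenated as two input views;
        all vocabulary sizes and dimensions are illustrative assumptions."""

        def __init__(self, vocab_size, pos_size, num_labels,
                     word_dim=128, pos_dim=32, hidden_dim=256):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, word_dim)
            self.pos_emb = nn.Embedding(pos_size, pos_dim)
            self.lstm = nn.LSTM(word_dim + pos_dim, hidden_dim,
                                batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden_dim, num_labels)

        def forward(self, word_ids, pos_ids):
            # Concatenate the two views: (batch, seq_len, word_dim + pos_dim)
            x = torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids)], dim=-1)
            h, _ = self.lstm(x)   # (batch, seq_len, 2 * hidden_dim)
            return self.out(h)    # per-word logits over punctuation labels

    # One training step on toy data with 5 labels
    # (e.g. NONE, COMMA, PERIOD, QUESTION, EXCLAMATION).
    model = PunctuationTagger(vocab_size=10000, pos_size=50, num_labels=5)
    words = torch.randint(0, 10000, (2, 12))  # batch of 2 sentences, 12 words each
    pos = torch.randint(0, 50, (2, 12))
    labels = torch.randint(0, 5, (2, 12))
    logits = model(words, pos)
    loss = nn.functional.cross_entropy(logits.reshape(-1, 5), labels.reshape(-1))
    loss.backward()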

CONCLUSION

Our large-scale deep learning model achieves a substantial improvement in rich punctuation prediction over the traditional method. We conclude that the LSDL model is better at predicting sentence boundaries, but because of the characteristics of Telugu punctuation, deciding which punctuation mark to place at a boundary remains a problem. Our work also shows that a large-scale corpus helps improve performance and generalization ability on both formal and informal corpora.

REFERENCES

  • X. Wu, S. Zhu, Y. Wu, and K. Yu, "Rich Punctuations Prediction Using Large-scale Deep Learning," in Proc. 10th International Symposium on Chinese Spoken Language Processing (ISCSLP), 2016.
  • B. Favre, R. Grishman, D. Hillard, H. Ji, D. Hakkani-Tur, and M. Ostendorf, "Punctuating speech for information extraction," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2008.
  • Y. Liu, A. Stolcke, E. Shriberg, and M. Harper, "Using Conditional Random Fields for Sentence Boundary Detection in Speech," 2005.
  • O. Tilk and T. Alumäe, "LSTM for punctuation restoration in speech transcripts," in Proc. INTERSPEECH, 2015.
  • P. S. Dhillon, D. P. Foster, and L. H. Ungar, "Multi-view learning of word embeddings via CCA," in Advances in Neural Information Processing Systems 24, J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2011.
  • J. Lafferty, A. McCallum, and F. C. N. Pereira, "Conditional random fields: Probabilistic models for segmenting and labeling sequence data," in Proc. 18th International Conference on Machine Learning (ICML), 2001.
  • S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, 1997.
  • C. Xu, D. Tao, and C. Xu, "A Survey on Multi-view Learning," arXiv preprint, 2013. [Online]. Available: http://arxiv.org/abs/1304.5634.
  • M. Zhu, Y. Zhang, W. Chen, M. Zhang, and J. Zhu, "Fast and Accurate Shift-Reduce Constituent Parsing," in Proc. 51st Annual Meeting of the Association for Computational Linguistics (ACL), vol. 1, 2013.
  • E. F. Tjong Kim Sang and S. Buchholz, "Introduction to the CoNLL-2000 shared task: Chunking," in Proc. CoNLL, 2000.
