Journal Information
ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)
https://dl.acm.org/journal/tallip
Impact Factor:
1.800
Publisher:
ACM
ISSN:
2375-4699
Viewed:
16356
Tracked:
11
Call For Papers
The ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP) publishes high-quality original archival papers, survey papers, and technical notes in the areas of computation and processing of information in Asian languages, low-resource languages of Africa, Australasia, Oceania and the Americas, as well as related disciplines. It is published six times a year.

The subject areas covered by TALLIP include, but are not limited to:

    Computational Linguistics: including computational phonology, computational morphology, computational syntax (e.g. parsing), computational semantics, computational pragmatics, etc.
    Linguistic Resources: including computational lexicography, terminology, electronic dictionaries, cross-lingual dictionaries, electronic thesauri, etc.
    Hardware and software algorithms and tools for Asian or low-resource language processing, e.g., handwritten character recognition.
    Information Understanding: including text understanding, speech understanding, character recognition, discourse processing, dialogue systems, etc.
    Machine Translation involving Asian or low-resource languages.
    Information Retrieval: including natural language processing (NLP) for concept-based indexing, natural language query interfaces, semantic relevance judgments, etc.
    Information Extraction and Filtering: including automatic abstraction, user profiling, etc.
    Speech processing: including text-to-speech synthesis and automatic speech recognition.
    Multimedia Asian Information Processing: including speech, image, video, image/text translation, etc.
    Cross-lingual information processing involving Asian or low-resource languages.

Papers dealing with theory, systems design, evaluation, and applications in the aforesaid subjects are appropriate for TALLIP. Emphasis will be placed on the originality and the practical significance of the reported research.

In addition, papers published in TALLIP must relate to some aspect of Asian or low-resource language or speech processing. Asian languages include languages in East Asia (e.g. Chinese, Japanese, Korean), South Asia (Hindi, Tamil, etc.), Southeast Asia (Malay, Thai, Vietnamese, etc.), the Middle East (Arabic, etc.), and so on. Low-resource languages of primary interest are those of Africa, Australasia, Oceania, the Americas and, of course, Asia.

TALLIP is also open to articles that provide literature and/or technology reviews in one of the above-mentioned areas. However, in order to be considered for publication, such an article must demonstrate that it provides information that is substantially different from what is available in the literature. The author(s) of review articles must submit a cover letter that clearly states why they believe their contribution adds useful material not covered elsewhere. Also, a review article should not just be a summarization of previous work: it must also be a work of exegesis. That is, the author(s) must show how the different strands of work relate to each other. They should also offer a view of where the field under review is going. A survey or review article submitted to TALLIP will go through the usual review process only if the editors feel the submission adheres to the above guidelines.

Finally, starting with issue 12(4), a new editorial column entitled TALLIP Perspectives (formerly TALIP Perspectives) has been launched. The editorial column is open to anyone who wants to contribute an editorial on topics relevant to Asian language and speech processing. Contributors should contact the Editor directly. Submissions will be shared with the Associate Editors for discussion. In rare instances, if the submission involves substantial technical content, we will ask an outside reviewer for assistance.
Last updated by Dou Sun on 2024-08-10
Special Issues
Special Issue on Natural Language Processing for Cross-Modal Learning
Submission Date: 2024-11-25

Guest Editors:
    • Dr. Shadi Mahmoud Faleh AlZu’bi, Al-Zaytoonah University of Jordan, dr.shadi.alzubi@gmail.com
    • Dr. Maysam Abbod, Brunel University London, maysam.abbod@brunel.ac.uk
    • Dr. Ashraf Darwish, Helwan University, ashraf.darwish.eg@ieee.org

In today’s world, most unstructured data is text, while visual data such as product images is just as abundant. Online retailers host millions of textual product descriptions alongside millions of product pictures. Since manually annotating every image is impractical, models must learn to exploit both sources by aligning the two modalities. Cross-modal retrieval is a typical example: given an image of a book, retrieve similar-looking books, find every book whose cover contains the words on the query cover, or predict a book's genre from its title and summary. Such tasks require models capable of multimodal or cross-modal learning.

Natural Language Processing (NLP) research aims to build systems that understand natural, unconstrained language in terms of a finite set of formalized representations. Looking up information with a search engine is not the same as the understanding we get from talking to people; NLP is the part of that understanding that computers can increasingly perform well. Many people associate NLP with speech recognition and speech-to-text programs for web search and Wikipedia articles, but it goes much deeper than that.

An important driver of NLP development is its role in cross-modal deep learning. Multimodal models combine data from multiple sources to achieve better performance than single-modality models, and NLP plays a vital role in this context because so much of the information we exchange is expressed in natural language and can therefore feed multimodal datasets. NLP is slated to be one of the most essential and powerful tools for making sense of our data, enabling us to discover new patterns in existing information and make connections that were not apparent before. It supports cross-modal machine learning where previously a single application could not analyze the diverse information captured from different sources. Cross-modal learning lets practitioners apply NLP to inter-modal data that was previously studied in isolation, and it enables the extraction of relevant information for tailoring computer communications. Though the primary focus has been on computational linguistics, cross-modal learning has also been studied extensively in cognitive psychology, computer vision, and semantic computing.

NLP will enable advanced learning methods to capture, process, and analyze all the data needed for training. Its use for cross-modal learning is particularly relevant because it allows machines to be trained on combined input from multiple modalities. In this way, we can incorporate information from several sensory systems into more modular algorithms and curtail the over-generalization errors that often arise from training on a single modality.
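The modality alignment sketched above is commonly trained with a contrastive objective that pulls matched image/text pairs together in a shared embedding space. The following is a minimal sketch in Python (PyTorch), not part of the call itself: the image and text encoders are assumed to exist elsewhere, and the CLIP-style symmetric loss is one common choice among many.

    import torch
    import torch.nn.functional as F

    def contrastive_alignment_loss(image_emb, text_emb, temperature=0.07):
        # image_emb, text_emb: (batch, dim) outputs of hypothetical
        # image and text encoders for matched image/description pairs
        image_emb = F.normalize(image_emb, dim=-1)       # unit-length embeddings
        text_emb = F.normalize(text_emb, dim=-1)
        logits = image_emb @ text_emb.t() / temperature  # pairwise cosine similarities
        targets = torch.arange(logits.size(0))           # matched pairs lie on the diagonal
        loss_i = F.cross_entropy(logits, targets)        # image -> text direction
        loss_t = F.cross_entropy(logits.t(), targets)    # text -> image direction
        return (loss_i + loss_t) / 2

Minimizing this loss drives matched pairs toward high cosine similarity in the shared space, which is what makes retrieval tasks like the book examples above possible.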
Cross-modal training then makes it possible to incorporate auditory/verbal features such as accent, visual elements such as lighting and color, the space surrounding objects, and even tactile features such as surface texture, together with physiological signals such as heart-rate variability and respiration synchronized with speech, into the same voice recognition algorithm, which can improve overall performance dramatically.

NLP is a growing area of research that aims to teach computers to understand the meaning of language. One step in this direction is learning to label and make predictions on new data from existing labeled examples. This requires that tags be associated both with the data they are applied to and with an overriding hypothesis about the data's nature. For such a machine learning technique to work, references to important details of the data in one modality must also be recognized when processing other modalities such as images or audio files.

This special issue on Natural Language Processing for Cross-Modal Learning aims to bring together researchers from different communities to exchange ideas, results, and standard models.

Topics
    • State-of-the-art in Cross-Modal Learning, focusing on Neural and Statistical Methods such as Conditional Random Fields
    • Practical Difficulties and Refinements of Cross-Modal Learning based on Recent Work in Natural Language Processing
    • Image Annotation and Automatic Captioning
    • Learning from Heterogeneous Modalities
    • Feature Extraction and Representation in Natural Language Processing
    • Multimodal Representation Learning
    • Design of Algorithms that exploit both Vision and Language to solve a Broad Range of Tasks
    • Data Description and Transformation for Text Mining
    • Coreference Resolution, Sentiment Analysis, and Opinion Mining
    • Textual Entailment and Recognizing Textual Entailment (RTE) problems
    • Multimodal Information Extraction and Question Answering
    • Vector Representation Methods and Evaluation Techniques

Important Dates
    • Submissions deadline: May 30, 2024
    • First-round review decisions: August 25, 2024
    • Deadline for revision submissions: November 30, 2024
    • Notification of final decisions: January 25, 2025
    • Tentative publication: as per journal policy

Submission Information
Please refer to https://dl.acm.org/journal/tallip/author-guidelines and select “Natural Language Processing for Cross-Modal Learning” in the TALLIP submission site, https://mc.manuscriptcentral.com/tallip.
For questions and further information, please contact Dr. Shadi Mahmoud Faleh AlZu’bi (dr.shadi.alzubi@gmail.com).
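As a companion to the retrieval scenario in the overview above: once image and text encoders share an embedding space, prediction reduces to a nearest-neighbour lookup. A minimal sketch, assuming the embeddings have already been computed (all names here are illustrative, not prescribed by the call):

    import torch
    import torch.nn.functional as F

    def retrieve_texts(query_image_emb, candidate_text_embs, k=5):
        # query_image_emb: (dim,); candidate_text_embs: (num_texts, dim)
        query = F.normalize(query_image_emb, dim=-1)
        candidates = F.normalize(candidate_text_embs, dim=-1)
        scores = candidates @ query               # cosine similarity per candidate
        return torch.topk(scores, k).indices      # indices of the k closest texts

The same lookup run in the other direction (text query, image candidates) covers caption-to-image search.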
Last updated by Dou Sun on 2024-08-10
Special Issue on Transfer Learning for Low-resource Languages in Healthcare using NLP Models for Clinical Text Analysis
Submission Date: 2024-11-30

Guest Editors:
    • Dr. Zohaib Mushtaq, Assistant Professor, Department of Electrical Engineering, College of Engineering and Technology, University of Sargodha, zohaib.mushtaq@uos.edu.pk
    • Dr. Wahyu Rahmaniar, Assistant Professor, Institute of Innovative Research, Tokyo Institute of Technology, rahmaniar.w.aa@m.titech.ac.jp
    • Dr. Qazi Mazhar ul Haq, Assistant Professor, Department of Computer Science and Engineering and IBPI, Yuan Ze University, qazi@saturn.yzu.edu.tw

Languages classified as low-resource have comparatively little data available for training conversational AI systems. Conversely, many widely spoken languages, including Chinese, Spanish, French, Japanese, and English, are high-resource languages. Transfer learning (TL) is a machine learning technique in which a machine improves its generalization on one task by using knowledge learned from a prior one. For instance, knowledge a classifier acquired while learning to identify drinks may be reused when training it to predict whether an image contains food; likewise, in image classification, truck recognition may reuse knowledge acquired by learning to identify cars. TL spans a broad range of tasks, domains, and patterns in both training and testing datasets, and real-world analogies abound, such as the way skills learned riding a bicycle transfer to learning to ride a motorbike. While transfer learning has many benefits, the three most significant are shorter training times, better neural network performance, and smaller data requirements.

Furthermore, natural language processing (NLP) in the healthcare industry can identify the context in which words are spoken, improving its ability to understand patient interactions and pick up on the finer points of a patient's health. This helps medical experts manage treatment and follow-up data. Text analytics software can decipher and communicate data from clinical documents, discharge summaries, physicians' notes, and other types of health records; such tools combine natural language processing with other computational and linguistic methods. These days, clinical NLP technology is essential for processing and analyzing biomedical data: it applies cutting-edge language technology to biomedical text and has shown considerable potential for enhancing human health. Text generation, the subfield of NLP focused on automatically producing text, has several uses, such as conversational agents, content production, and machine translation, with statistical language models among the most widely used methods. Used effectively, NLP can also help reduce the burden of electronic health records (EHRs): clinicians often use it as a substitute for handwriting and typing notes, and most patients struggle to understand their health data even when they can access it through an EHR system.
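A common realization of the transfer learning described above is to take a multilingual pre-trained encoder and retrain only a small task head on the scarce clinical data. Below is a minimal sketch using the Hugging Face Transformers library; the checkpoint name and the 5-label clinical NER scheme are illustrative assumptions, not requirements of this call.

    from transformers import AutoTokenizer, AutoModelForTokenClassification

    # Multilingual encoder pre-trained on high-resource text; the checkpoint
    # and the number of entity labels here are illustrative assumptions.
    checkpoint = "xlm-roberta-base"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=5)

    # Transfer learning step: freeze the pre-trained encoder so that only the
    # newly initialized classification head is trained on the small
    # low-resource clinical corpus.
    for param in model.base_model.parameters():
        param.requires_grad = False

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable}")

Unfreezing the top encoder layers once the head has converged is a common refinement when slightly more labeled clinical data is available.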
A broad spectrum of healthcare professionals, including doctors, nurses, pharmacists, and administrators, employs medical NLP to decrease administrative burden, enhance predictive analytics, and streamline procedures.

Topics
    • Transfer learning for clinical named entity recognition with limited resources
    • Natural language processing for low-resource languages in medicine
    • Transfer learning for natural language analysis with limited resources
    • Medical named entity identification using embedded transfer with limited resources
    • Machine learning for the classification of radiology reports in low-resource languages
    • Therapeutic uses of machine translation for languages with limited resources
    • Multi-aspect transfer learning for diagnosing low-resource mental diseases from social media
    • Empirical investigations of multilingual pre-trained language models and transfer learning
    • Lexical-restriction-based transfer learning for low-resource machine translation
    • Identification of named entities by dynamic knowledge transfer
    • Oncological named entity recognition using ensemble transfer learning on enriched domain resources
    • Transfer learning for building large, multilingual pre-trained machine translation models for the healthcare domain

Important Dates
    • Submissions deadline: November 30, 2024
    • First-round review decisions: January 17, 2025
    • Deadline for revision submissions: March 30, 2025
    • Notification of final decisions: May 6, 2025
    • Tentative publication: July 24, 2025

Submission Information
Please refer to https://dl.acm.org/journal/tallip/author-guidelines and select “Special Issue on Transfer Learning for Low-resource Languages in Healthcare using NLP Models for Clinical Text Analysis” in the TALLIP submission site.
For questions and further information, please contact Dr. Zohaib Mushtaq (zohaib.mushtaq@uos.edu.pk).
Last updated by Dou Sun on 2024-08-10