Journal Information
Computer Speech and Language
https://www.sciencedirect.com/journal/computer-speech-and-language
Impact Factor: 3.100
Publisher: Elsevier
ISSN: 0885-2308
Views: 18723
Followers: 30
Call for Papers
An official publication of the International Speech Communication Association (ISCA), Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language. The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology. The journal provides a focus for this work and encourages an interdisciplinary approach to speech and language research and technology. Thus, contributions from all of the related fields are welcome in the form of reports of theoretical or experimental studies, tutorials, reviews, and brief correspondence pertaining to models and their implementation, or reports of fundamental research leading to the improvement of such models.

Research areas include:
- Algorithms and models for speech recognition and synthesis
- Natural language processing for speech understanding and generation
- Statistical computational linguistics
- Computational models of discourse and dialogue
- Information retrieval, extraction and summarization
- Speaker and language recognition
- Computational models of speech production and perception
- Signal processing for speech analysis, enhancement and transformation
- Evaluation of human and computer system performance
Last updated by Dou Sun on 2024-07-16
Special Issues
Special Issue on Multi-Speaker, Multi-Microphone, and Multi-Modal Distant Speech Recognition
Submission Deadline: 2024-12-02

Automatic speech recognition (ASR) has progressed significantly in the single-speaker scenario, owing to extensive training data, sophisticated deep learning architectures, and abundant computing resources. Building on this success, the research community is now tackling real-world multi-speaker speech recognition, where the number and nature of the sound sources are unknown and change over time. In this scenario, refining core multi-speaker speech processing technologies such as speech separation, speaker diarization, and robust speech recognition is essential, and the effective integration of these advancements becomes increasingly crucial. In addition, emerging approaches such as end-to-end neural networks, speech foundation models, and advanced training methods (e.g., semi-supervised, self-supervised, and unsupervised training) incorporating multi-microphone and multi-modal information (such as video and accelerometer data) offer promising avenues to alleviate these challenges. This special issue gathers recent advances in multi-speaker, multi-microphone, and multi-modal speech processing studies to establish real-world conversational speech recognition.

Guest editors:
- Assoc. Prof. Shinji Watanabe (Executive Guest Editor), Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America. Email: shinjiw@ieee.org. Areas of expertise: speech recognition, speech enhancement, and speaker diarization
- Dr. Michael Mandel, Reality Labs, Meta, Menlo Park, California, United States of America. Email: mmandel@meta.com. Areas of expertise: source separation, noise-robust ASR, electromyography
- Dr. Marc Delcroix, NTT Corporation, Chiyoda-ku, Japan. Email: marc.delcroix@ieee.org; marc.delcroix@ntt.com. Areas of expertise: robust speech recognition, speech enhancement, source separation and extraction
- Dr. Leibny Paola Garcia Perera, Johns Hopkins University, Baltimore, Maryland, United States of America. Email: lgarci27@jhu.edu. Areas of expertise: speech recognition, speech enhancement, speaker diarization, multimodal speech processing
- Dr. Katerina Zmolikova, Meta, Menlo Park, California, United States of America. Email: kzmolikova@meta.com. Areas of expertise: speech separation and extraction, speech enhancement, robust speech recognition
- Dr. Samuele Cornell, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America. Email: scornell@andrew.cmu.edu. Areas of expertise: robust speech recognition, speech separation and enhancement

Special issue information:
Relevant research topics include (but are not limited to):
- Speaker identification and diarization
- Speaker localization and beamforming
- Single- or multi-microphone enhancement and source separation
- Robust features and feature transforms
- Robust acoustic and language modeling for distant or multi-talker ASR
- Traditional or end-to-end robust speech recognition
- Training schemes: data simulation and augmentation, semi-supervised, self-supervised, and unsupervised training for distant or multi-talker speech processing
- Pre-training and fine-tuning of speech and audio foundation models and their application to distant and multi-talker speech processing
- Robust speaker and language recognition
- Robust paralinguistics
- Cross-environment or cross-dataset performance analysis
- Environmental background noise modeling
- Multimodal speech processing
- Systems, resources, and tools for distant speech recognition

In addition to traditional research papers, the special issue also hopes to include descriptions of successful conversational speech recognition systems where the contribution lies more in the implementation than in the techniques themselves, as well as successful applications of conversational speech recognition systems. For example, the recently concluded seventh and eighth CHiME challenges serve as a focus for discussion in this special issue. The challenges considered the problem of conversational speech separation, speech recognition, and speaker diarization in everyday home environments from multi-microphone and multi-modal input. The seventh and eighth CHiME challenges consist of multiple tasks based on 1) distant automatic speech recognition with multiple devices in diverse scenarios, 2) unsupervised domain adaptation for conversational speech enhancement, 3) distant diarization and ASR in natural conferencing environments, and 4) ASR for multimodal conversations on smart glasses. Papers reporting evaluation results on the CHiME-7/8 datasets or other datasets dealing with real-world conversational speech recognition are equally welcome.

Manuscript submission information:
Tentative dates:
- Submission open date: August 19, 2024
- Manuscript submission deadline: December 2, 2024
- Editorial acceptance deadline: September 1, 2025

Contributed full papers must be submitted via the Computer Speech & Language online submission system (Editorial Manager®): https://www.editorialmanager.com/ycsla/default2.aspx. Please select the article type "VSI: Multi-DSR" when submitting the manuscript online. Please refer to the Guide for Authors to prepare your manuscript: https://www.elsevier.com/journals/computer-speech-and-language/0885-2308/guide-for-authors. For any further information, the authors may contact the Guest Editors.

Keywords: speech recognition, speech enhancement/separation, speaker diarization, multi-speaker, multi-microphone, multi-modal, distant speech recognition, CHiME challenge
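The call above repeatedly refers to multi-microphone enhancement and beamforming as front-ends for distant, multi-speaker ASR. For readers new to the area, the following is a minimal, self-contained sketch of delay-and-sum beamforming, the simplest technique in that family. It is purely illustrative and not taken from the call for papers: the sample rate, per-channel delays, and synthetic signals are assumptions made only for this example.

```python
# Minimal delay-and-sum beamformer sketch (illustrative only).
# Assumptions: integer per-channel delays toward the target source are known,
# and signals are mono NumPy arrays (a 1-second synthetic signal in the demo).
import numpy as np


def delay_and_sum(multichannel: np.ndarray, delays: np.ndarray) -> np.ndarray:
    """Align each microphone channel by its integer sample delay and average.

    multichannel: array of shape (n_mics, n_samples)
    delays:       per-channel delays (in samples) of the target source
    """
    n_mics, n_samples = multichannel.shape
    aligned = np.zeros((n_mics, n_samples))
    for m in range(n_mics):
        d = int(delays[m])
        if d >= 0:
            aligned[m, : n_samples - d] = multichannel[m, d:]
        else:
            aligned[m, -d:] = multichannel[m, : n_samples + d]
    # Averaging the time-aligned channels reinforces the target source and
    # attenuates uncorrelated noise, which is why beamforming helps distant ASR.
    return aligned.mean(axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.standard_normal(16000)      # stand-in for 1 s of speech
    delays = np.array([0, 3, 5, 8])         # assumed per-microphone delays (samples)
    # Simulate a 4-microphone capture: each mic hears a delayed copy plus noise.
    mics = np.stack(
        [np.roll(clean, int(d)) + 0.5 * rng.standard_normal(16000) for d in delays]
    )
    enhanced = delay_and_sum(mics, delays)

    def corr(x, y):
        return float(np.corrcoef(x, y)[0, 1])

    print("single mic vs. clean :", round(corr(mics[0], clean), 3))
    print("beamformed vs. clean :", round(corr(enhanced, clean), 3))
```

Delay-and-sum is only a baseline; systems evaluated in CHiME-style challenges typically use adaptive beamformers (such as MVDR) and neural separation front-ends, but the same align-and-combine intuition applies.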
Last updated by Dou Sun on 2024-07-16
Related Journals
CCF | Full Name | Impact Factor | Publisher | ISSN |
---|---|---|---|---|
 | International Journal of Computer Integrated Manufacturing | 3.700 | Taylor & Francis | 0951-192X |
 | Annals of Mathematics and Artificial Intelligence | 1.200 | Springer | 1012-2443 |
 | International Journal of Health Geographics | 3.000 | Springer | 1476-072X |
 | International Journal of Control | 1.600 | Taylor & Francis | 0020-7179 |
c | Soft Computing | 3.100 | Springer | 1432-7643 |
c | Neural Processing Letters | 2.600 | Springer | 1370-4621 |
 | IT Professional | 2.200 | IEEE | 1520-9202 |
 | Journal of Optimization Theory and Applications | 1.600 | Springer | 0022-3239 |
 | Information Security Technical Report | | Elsevier | 1363-4127 |
 | Language Learning & Technology | 3.800 | University of Hawaii Press | 1094-3501 |
Related Conferences
Abbreviation | Full Name | Submission Deadline | Conference Date |
---|---|---|---|
FSPSE | International Conference on Frontiers of Signal Processing and Software Engineering | 2022-11-15 | 2022-11-25 |
SODA | ACM-SIAM Symposium on Discrete Algorithms | 2024-07-05 | 2025-01-12 |
ICCIS | International Conference on Computational and Information Sciences | 2014-03-10 | 2014-05-30 |
SAT | International Conference on Theory and Applications of Satisfiability Testing | 2024-03-08 | 2024-08-21 |
ICETCA | International Conference on Electronics Technology and Computer Applications | 2020-07-15 | 2020-07-28 |
AAME | International Conference on Aerospace, Aerodynamics and Mechatronics Engineering | 2022-06-02 | 2022-07-22
SaCoNeT | International Conference on Smart Communications in Network Technologies | 2018-07-31 | 2018-10-27 |
AmI | European Conference on Ambient Intelligence | 2019-07-19 | 2019-11-13 |
HPTS | International Workshop on High Performance Transaction Systems | 2011-10-23 | |
WebSci | ACM Web Science | 2024-11-30 | 2025-05-20 |