Journal Information
ACM Journal of Data and Information Quality (JDIQ)
https://dl.acm.org/journal/jdiq
Impact Factor:
1.500
Publisher:
ACM
ISSN:
1936-1955
Call For Papers
ACM Journal of Data and Information Quality (JDIQ) is a multi-disciplinary journal that attracts papers ranging from theoretical research and algorithmic solutions to empirical research and experiential evaluations. Its mission is to publish high-impact articles that advance the field of data and information quality (IQ).

JDIQ accepts research conducted with a wide variety of methods, ranging from positivist to interpretive approaches, including systems-building descriptions, database theory, statistical analysis, mathematical modeling, quasi-experimental methods, hermeneutics, action research, and case studies. JDIQ accepts the diverse research methods, both quantitative and qualitative, that are customary in different research backgrounds and traditions. Research papers must also provide valuable and relevant implications for applying their findings and solutions in practice.

Given the diversity of disciplines and author interests, ACM JDIQ welcomes experience papers, typically submitted by practitioners or industrial researchers; survey papers that provide a critical assessment of the state of the art on specific IQ topics while highlighting open research challenges; and short challenge papers that describe a major research challenge to the JDIQ community.

ACM JDIQ is published quarterly. It also publishes special issues as part of its volume offering. ACM JDIQ special issues draw together a range of contributions on a given theme and are an important mechanism for presenting a focused collection of significant work from areas of high innovation and activity to the data quality community.

Since 2019, JDIQ has welcomed a new type of invited contribution, called "on the horizon" papers. These manuscripts are written by leading researchers in the field of data quality and aim to introduce emerging topics, their challenging aspects, and envisioned solutions. They can be submitted by invitation only.

JDIQ welcomes high-quality research contributions in the following areas, among others:

    Data quality assessment
        Data quality metrics and frameworks
        Data profiling
        Error and anomaly detection
        Synthetic data and data synthesis
        Data provenance
        Probabilistic, incomplete and uncertain data
        Big data quality
    Data cleaning
        Data preparation and wrangling
        Entity matching, entity resolution, record linkage
        Data fusion
        Schema and ontology matching
        Data repair, augmentation, and imputation
        Data re-purposing
    Data quality and AI
        Data mining
        Data quality and machine learning
        Data quality and generative AI
        Data quality and representation learning
        Intelligent systems
        Automated planning and reasoning
        Federated learning
        Data poisoning
    Data governance
        Data quality standardization
        Responsible AI, bias and fairness
        Data quality policies and standards
        Legal aspects of data quality
        Maturity models
        Human-computer interaction
        Behavioral aspects of data quality
        Data privacy and access control
        Security evaluation
        Economics of data quality
        IQ education and curriculum development
    Data integration
        Data ecosystems
        Data acquisition
        Metadata management
        Web data management
        Semantic web and ontologies
        Information extraction
        Information foraging
        Multimodal data integration
        Knowledge graphs
        Knowledge representation and reasoning
    Application-specific data quality management
        Laboratory experimentation
        Healthcare and clinical data
        Financial data
        Social media data
        Multimedia and unstructured data
        Sensors and streaming data, IoT
        Robotic process automation
        Business process management
        Process mining
Special Issues
Special Issue on Data quality dimensions in Data FAIRification design and processes
Submission Date: 2024-11-30

Guest Editors:
    Anna Bernasconi, Politecnico di Milano (Italy), anna.bernasconi@polimi.it
    Stefano Cirillo, Università di Salerno (Italy), scirillo@unisa.it
    Alberto García S., Universitat Politecnica de Valencia (Spain), algarsi3@pros.upv.es
    Hazar Harmouch, University of Amsterdam (the Netherlands), h.harmouch@uva.nl

The FAIR Data Principles were introduced in 2016 as guidelines for metadata, data, and supporting infrastructure to make these Findable, Accessible, Interoperable, and Reusable. Data FAIRification refers to the set of processes (including data modeling, cleaning, profiling, integration, preparation, and engineering) whose objective is to make unFAIR data compliant with the FAIR principles. Depending on the shape and quality of the data sources, data FAIRification may encompass a multitude of distinct activities. Such activities involve, for instance, the analysis of data, the definition of semantic models, data linking strategies, the choice of licenses, the design of metadata structures, and the implementation/deployment of resources that make data available.

FAIRness can be measured on data artifacts at different levels of granularity, including but not limited to repositories, data sources, data lakes, datasets, data sheets, data models, and single data objects (see the illustrative sketch after this call). It applies to various scientific application domains, including the natural sciences (for example, biomedical, chemistry, astronomy, agriculture, earth sciences, and life sciences), engineering, and the humanities and social sciences. Adhering to FAIR principles has become an important feature for data objects and often a strict requirement for being considered for publication or funding in many scientific contexts.

The strong connection between data FAIRification and data quality presents fertile ground for research. It motivates a deeper exploration of methodologies, tools, and best practices to effectively integrate FAIR principles with innovative data quality frameworks, thereby enhancing data reliability, trustworthiness, and the utility of data assets in rapidly evolving data ecosystems. This special issue aims to collect recent advancements in the theory and practice of data quality in the context of data FAIRification and FAIRness.

Topics
Topics of interest are inspired by the themes above and include, but are not limited to:
    Information quality standards for FAIRness
    Data quality metrics for data FAIRness
    Improving the quality of FAIRification processes
    Data FAIRification and data curation
    Data FAIRification and data integration
    Data FAIRification and data profiling
    Data FAIRification for data science
    Data quality and data interoperability
    Data quality and data reusability
    Models and ontologies for FAIRness
    Data and information quality requirements in FAIR data systems
    Methodologies and tools for FAIRness assessment
    Methodologies and tools for FAIRness enforcement
    Automatic and semi-automatic data FAIRification solutions
    FAIRness of data points, datasets, and data repositories

Expected Contributions
We welcome the following types of research contributions:
    Survey papers: should present a coherent review of scientific work related to data quality issues in data preparation, together with interesting future research directions in the field (up to 25 pages).
    Technical papers: should present novel research contributions on the topics above, clearly describing the progress over the state of the art and providing evidence for the benefits of the contributions (up to 25 pages).
    Experience papers: should detail recent applications of data quality techniques in practice and industry, providing pertinent application scenario(s), lessons learned, and open problems (up to 15 pages).
    Resource papers: should present a new resource, such as a dataset or tool, or an appealing compilation of multiple datasets (up to 15 pages).

Important Dates
    Submission deadline: 30 November 2024
    First-round review decisions: 28 February 2025
    Deadline for revision submissions: 30 April 2025
    Notification of final decisions: 30 June 2025
    Camera-ready manuscripts: 20 July 2025
    Tentative publication: October 2025

Submission Information
JDIQ welcomes manuscripts that extend prior published work, provided they contain at least 30% new material and the significant new contributions are clearly identified in the introduction. Submission guidelines, with LaTeX (preferred) and Word templates, are available at https://dl.acm.org/journal/jdiq/author-guidelines#subm. Please submit your paper by selecting "SI: Data FAIRification" as the type of submission. For questions and further information, please contact Anna Bernasconi, anna.bernasconi@polimi.it.
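As a pointer for readers new to FAIRness assessment, the sketch below illustrates in miniature what a granular quality metric of the kind solicited above might look like: a toy metadata-completeness score over a single dataset record, used as a crude proxy for the Findable and Reusable aspects of FAIRness. The field names, the scoring rule, and the example record are assumptions made for this sketch only; they are not part of the FAIR principles or of any official FAIRness assessment tool.

    # Minimal, hypothetical sketch of a metadata-completeness score.
    # REQUIRED_FIELDS and the scoring rule are illustrative assumptions,
    # not a standardized FAIRness indicator.

    REQUIRED_FIELDS = ["identifier", "title", "description", "license", "creator"]

    def completeness_score(metadata: dict) -> float:
        """Return the fraction of required fields that are present and non-empty."""
        present = sum(1 for field in REQUIRED_FIELDS if metadata.get(field))
        return present / len(REQUIRED_FIELDS)

    record = {
        "identifier": "doi:10.1234/example",  # a persistent identifier aids Findability
        "title": "Example dataset",
        "description": "",                    # empty string counts as missing
        "license": "CC-BY-4.0",               # an explicit license aids Reusability
    }
    print(f"completeness = {completeness_score(record):.2f}")  # prints 0.60

Real FAIRness assessment frameworks evaluate far richer criteria (persistent identifier schemes, protocol-level accessibility, standard vocabularies), but the same pattern of explicit, machine-checkable indicators underlies them.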
Related Conferences
CCF | CORE | QUALIS | Short | Full Name | Submission | Notification | Conference
c | - | - | IFIPTM | IFIP WG 11.11 International Conference on Trust Management | 2019-04-09 | 2019-05-12 | 2019-07-17
c | c | b1 | MMM | International Conference on MultiMedia Modeling | 2024-08-19 | 2024-10-09 | 2025-01-07
- | a | a2 | DISC | International Symposium on Distributed Computing | 2024-05-07 | 2024-08-02 | 2024-10-28
- | - | - | LID | International Workshop on Logic in Databases | 2011-02-06 | 2011-03-25 | -
- | - | - | CSEN | International Conference on Computer Science and Engineering | 2022-09-03 | 2022-09-10 | 2022-09-17
- | - | - | ICCEAI | International Conference on Computer Engineering and Artificial Intelligence | 2021-06-05 | 2021-06-10 | 2021-08-27
- | c | b3 | ICIDS | International Conference on Interactive Digital Storytelling | 2015-07-06 | 2015-08-21 | 2015-11-30
c | a | b1 | ICONIP | International Conference on Neural Information Processing | 2024-05-31 | 2024-07-26 | 2024-12-02
- | - | b5 | DICTAP | International Conference on Digital Information and Communication Technology and its Applications | 2019-03-04 | 2019-03-11 | 2019-04-03
a | a* | a1 | CAV | International Conference on Computer Aided Verification | 2024-01-19 | 2024-03-26 | 2024-07-22