The 6th International Workshop on Designing Meaning Representations
Welcome to DMR 2025, the 6th International Workshop on Designing Meaning Representations!
DMR 2025 will be held in beautiful Prague, Czechia, August 4-5, 2025.
| Event | Date |
|---|---|
| Workshop papers due | April 21, 2025 |
| Notification of acceptance | June 16, 2025 |
| Camera-ready papers due | July 1, 2025 |
| Workshop dates | August 4-5, 2025 |
All deadlines are 11:59PM UTC-12:00 (“anywhere on Earth”).
Submit your papers on OpenReview here!
DMR 2025 invites submissions of long and short papers on original work concerning the design, processing, and use of meaning representations. While deep learning methods have led to many breakthroughs in practical natural language applications, many NLP researchers still sense that we have a long way to go before we can develop systems that truly “understand” human language and explain the decisions they make. Indeed, “understanding” natural language entails many human-like capabilities, including but not limited to the ability to track entities in a text, understand the relations between those entities, track events and their participants, understand how events unfold in time, and distinguish events that have actually happened from events that are planned or intended, are uncertain, or did not happen at all. We believe a critical step toward natural language understanding is to design meaning representations for text that contain the meaning “ingredients” needed to achieve these capabilities. Such meaning representations can also potentially be used to evaluate the compositional generalization capacity of deep learning models.
In recent years, a growing body of research has been devoted to the design, annotation, and parsing of meaning representations. In particular, formal meaning representation frameworks such as Minimal Recursion Semantics (MRS) and Discourse Representation Theory (DRT) are developed with the goal of supporting logical inference in reasoning-based AI systems and are therefore easily translatable into first-order logic. Other frameworks, such as Abstract Meaning Representation (AMR), Uniform Meaning Representation (UMR), the Tectogrammatical Representation (TR) of the Prague Dependency Treebanks, and Universal Conceptual Cognitive Annotation (UCCA), put more emphasis on representing core predicate-argument structure. The automatic parsing of natural language text into these meaning representations, and the generation of natural language text from them, are also very active areas of research, and a wide range of technical approaches and learning methods have been applied to these problems.
DMR intends to bring together researchers who are producers and consumers of meaning representations and, through their interaction, to develop a deeper understanding of which elements of meaning representations are most valuable to the NLP community. The workshop will provide an opportunity for meaning representation researchers to present new frameworks and to critically examine existing ones, with the goal of using their findings to inform the design of next-generation meaning representations. One particular goal is to understand the relationship between distributed meaning representations, trained on large datasets with neural network models, and symbolic meaning representations that are carefully designed and annotated by NLP researchers, with the aim of gaining a deeper understanding of where each type of meaning representation is most effective.
The workshop solicits papers that address one or more of the following topics: