[AAAI 2023 Oral]: A collection of accepted Multi-modal / Vision-language papers (42 papers)

Heterogeneous Graph Learning for Multi-modal Medical Data Analysis
Kim Sein; Lee Namkyeong; Lee Junseok; Hyun Dongmin; Park Chanyoung

Cross-Category Highlight Detection via Feature Decomposition and Modality Alignment
Zhang Zhenduo

Cross-Modality Person Re-Identification with Memory-based Contrastive Embedding
Cheng De; Wang Xiaolong; Wang Nannan; Wang Zhen; Wang Xiaoyu; Gao Xinbo

Efficient End-to-End Video Question Answering with Pyramidal Multimodal Transformer
Peng Min; Wang Chongyang; Shi Yu; Zhou Xiang-Dong

DUET: Cross-modal Semantic Grounding for Contrastive Zero-shot Learning
Chen Zhuo; Huang Yufeng; Chen Jiaoyan; Geng Yuxia; Zhang Wen; Fang Yin; Pan Jeff Z.; Chen Huajun

Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer
He Sunan; Guo Taian; Dai Tao; Qiao Ruizhi; Shu Xiujun; Ren Bo; Xia Shu-Tao

See How You Read? Multi-reading Habits Fusion Reasoning for Multi-modal Fake News Detection
Wu Lianwei; Liu Pusheng; Zhang Yanning

Leveraging Modality-specific Representations for Audiovisual Speech Recognition via Reinforcement Learning
Chen Chen; Hu Yuchen; Zhang Qiang; Zou Heqing; Zhu Beier; Chng Eng Siong

Mutual-enhanced Incongruity Learning Network for Multimodal Sarcasm Detection
Qiao Yang; Jing Liqiang; Song Xuemeng; Chen Xiaolin; Zhu Lei; Nie Liqiang

BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning
Xu Xiao; Wu Chenfei; Rosenman Shachar; Lal Vasudev; Che Wanxiang; Duan Nan

Explaining (Sarcastic) Utterances to Enhance Affect Understanding in Multimodal Dialogues
Kumar Shivani; Mondal Ishani; Akhtar Md Shad; Chakraborty Tanmoy

Actional Atomic-Concept Learning for Demystifying Vision-Language Navigation
Lin Bingqian; Zhu Yi; Liang Xiaodan; Lin Liang; Liu Jianzhuang

Cross-Modal Label Contrastive Learning for Unsupervised Audio-Visual Event Localization
Bao Peijun; Yang Wenhan; Ng Boon Poh; Er Meng Hwa; Kot Alex

Self-Supervised Audio-Visual Representation Learning with Relaxed Cross-Modal Synchronicity
Sarkar Pritam; Etemad Ali

GMDNet: A Graph-based Mixture Density Network for Estimating Packages’ Multimodal Travel Time Distribution
Mao Xiaowei; Wan Huaiyu; Wen Haomin; Wu Fan; Zheng Jianbin; Qiang Yuting; Guo Shengnan; Wu Lixia; Hu Haoyuan; Lin Youfang

Multi-level Confidence Learning for Trustworthy Multimodal Classification
Tang Chang; Zheng Xiao; Wan Zhiguo; Hu Chengyu; Zhang Wei

Sparse Maximum Margin Learning From Multimodal Human Behavioral Patterns
Zheng Ervine; Yu Qi; Zheng Zhi

COCA: COllaborative CAusal Regularization for Audio-Visual Question Answering
Lao Mingrui; Pu Nan; Liu Yu; He Kai; Bakker Erwin M.; Lew Michael S

M3AE: Multimodal Representation Learning for Brain Tumor Segmentation with Missing Modalities
Liu Hong; Wei Dong; Lu Donghuan; Sun Jinghan; Wang Liansheng; Zheng Yefeng

Mx2M: Masked Cross-Modality Modeling in Domain Adaptation for 3D Semantic Segmentation
Zhang Boxiang; Wang Zunran; Ling Yonggen; Guan Yuanyuan; Zhang Shenghao; Li Wenhui

MCoMet: Multimodal Fusion Transformer for Physical Audiovisual Commonsense Reasoning
Zong Daoming; Sun Shiliang

Alignment-Enriched Tuning for Patch-Level Pre-trained Document Image Models
Wang Lei; He Jiabang; Xu Xing; Liu Ning; Liu Hui

Situated Conversation Agent Pretrained with Multimodal Questions from Incremental Layout Graph
Long Yuxing; Hui Binyuan; Ye Fulong; Li Yanyang; Han Zhuoxin; Yuan Caixia; Li Yongbin; Wang Xiaojie

MPMQA: Multimodal Question Answering on Product Manuals
Zhang Liang; Hu Anwen; Zhang Jing; Hu Shuo; Jin Qin

Long-tail Cross-Modal Hashing
Gao Zijun; Wang Jun; Yu Guoxian; Yan Zhongmin; Domeniconi Carlotta; Zhang Jinglin

Tagging before Alignment: Integrating Multi-Modal Tags for Video-Text Retrieval
Chen Yizhen; Wang Jie; Lin Lijian; Qi Zhongang; Ma Jin; Shan Ying

MRCN: A Novel Modality Restitution and Compensation Network for Visible-Infrared Person Re-Identification
Zhang Yukang; Yan Yan; Li Jie; Wang Hanzi

UniSyn: An End-to-End Unified Model for Text-to-Speech and Singing Voice Synthesis
Lei Yi; Yang Shan; Wang Xinsheng; Xie Qicong; Yao Jixun; Xie Lei; Su Dan

MMTN: Multi-modal Memory Transformer Network for Image-Report Consistent Medical Report Generation
Cao Yiming; Cui Lizhen; Zhang Lei; Yu Fuqiang; Li Zhen; Xu Yonghui

Topology-Aware Optimal Transport For Multimodal Hate Detection
Zhang Linhao; Jin Li; Sun Xian; Xu Guangluan; Zhang Zequn; Li Xiaoyu; Liu Nayu; Liu Qing; Yan Shiyao

MNER-QG: An End-to-End MRC Framework for Multimodal Named Entity Recognition with Query Grounding
Jia Meihuizi; Shen Lei; Shen Xin; Liao Lejian; Chen Meng; He Xiaodong; Chen Zhendong; Li Jiaqi

Joint Multimodal Entity-Relation Extraction Based on Edge-enhanced Graph Alignment Network and Word-pair Relation Tagging
Yuan Li; Cai Yi; Wang Jin; Li Qing

CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention
Guo Ziyu; Zhang Renrui; Qiu Longtian; Ma Xianzheng; Miao Xupeng; He Xuming; Cui Bin

Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task
Lei Stan Weixian; Gao Difei; Wu Jay Zhangjie; Wang Yuxuan; Liu Wei; Zhang Mengmi; Shou Mike Zheng

Unifying Vision-Language Representation Space with Single-tower Transformer
Jang Jiho; Kong Chaerin; Jeon DongHyeon; Kim Seonhoon; Kwak Nojun

Aesthetically Relevant Image Captioning
Zhong ZhiPeng; Zhou Fei; Qiu Guoping

Layout-aware Dreamer for Embodied Visual Referring Expression Grounding
Li Mingxiao; Wang Zehao; Tuytelaars Tinne; Moens Sien

Show, Interpret and Tell: Entity-aware Contextualised Image Captioning in Wikipedia
Nguyen Khanh; Biten Ali Furkan; Mafla Andres; Gomez Lluis; Karatzas Dimosthenis

Debiased Fine-Tuning for Vision-language Models by Prompt Regularization
Zhu Beier; Niu Yulei; Lee Saeil; Hur Minhoe; Zhang Hanwang

Zero-Shot Cross-Lingual Event Argument Extraction with Language-Oriented Prefix-Tuning
Cao Pengfei; Jin Zhuoran; Chen Yubo; Liu Kang; Zhao Jun

DocEdit: Language-guided Document Editing
Mathur Puneet; Jain Rajiv; Gu Jiuxiang; Dernoncourt Franck; Manocha Dinesh; Morariu Vlad I

Reject Decoding via Language-Vision Models for Text-to-Image Synthesis
Wu Fuxiang; Liu Liu; Hao Fusheng; He Fengxiang; Wang Lei; Cheng Jun

This post collects the multi-modal papers among the AAAI 2023 Oral acceptances; additions are welcome if any are missing.
