
IEEE/CAA Journal of Automatica Sinica
Citation: Y. Ming, N. N. Hu, C. X. Fan, F. Feng, J. W. Zhou, and H. Yu, "Visuals to text: A comprehensive review on automatic image captioning," IEEE/CAA J. Autom. Sinica, vol. 9, no. 8, pp. 1339–1365, Aug. 2022. doi: 10.1109/JAS.2022.105734
GIVEN an image, it is natural for humans to quickly understand it and give a textual description, while this is not an easy task for machines. The technique by which machines automatically generate textual descriptions of images is called image captioning, and it has a wide range of practical applications. Several Android applications have been developed based on image captioning models [1] to help the visually impaired retrieve information, navigate routes, and even get a sense of their environment. Advanced captioning models are also embedded in intelligent robot systems to give robots better visual understanding. In brief, image captioning enhances a machine's ability to understand multi-modal information and is widely used in areas such as image analysis and retrieval [2]–[4], human-robot interaction [5], intelligent education [6] and intelligent blind guidance [7].
The goal of image captioning is to generate a textual description of the visual content of an image that is linguistically comprehensible, syntactically correct and semantically consistent with the image content. It integrates techniques from multiple fields focused on mapping the semantics of visuals to texts. This requires not only detecting and identifying objects in the image but also a deep understanding of its semantic content, including the scene location, object properties and their interactions. Determining the existence, properties, and relationships of objects in an image is not a simple task, and describing this information in a grammatically correct sentence makes the task more difficult. Moreover, cross-modal information communication relies heavily on natural language description, whether written or spoken.
The evolution of automatic image captioning is shown in Fig. 1. In the beginning, image captioning attempted to generate simple sentences for images taken under specific scenes [8], [9], for example, a brief description of human activities in a fixed office environment. Such methods are severely restricted by the scene in which the image is taken and are far from describing images from everyday life. Subsequent studies are dedicated to generating descriptions for images taken from various environmental scenes [10]–[12], and early works mainly follow two traditional approaches based on retrieval and templates. Retrieval-based methods retrieve sentences from a prepared image-caption database for given images [13]–[15], and template-based methods fill semantic words or phrases detected from visual features into a given template [16]–[18]. The former relies on existing annotations in the training set, while the latter relies on predesigned language structures. However, neither type of method is flexible enough.
Benefiting from recent advances in deep neural networks [19]–[24], deep learning provides effective solutions to visual-language cross-modal modeling, which are also used to boost existing systems. Image captioning based on deep learning has become the focus of research [25]–[30]. Starting from the first work [31] that adopts a convolutional neural network (CNN) to extract a visual representation and feeds it into a recurrent neural network (RNN) to obtain a sequence, methods have been enriched with object region features [29], [32]–[34], semantic relation features [35]–[37], attention mechanisms [25], [38], [39], reinforcement learning strategies [28], [40], [41], up to the breakthroughs of self-attention in the Transformer [30], [42] and vision-and-language pre-training approaches [43], [44], as shown in Fig. 2. These efforts aim to find the most effective pipeline to create connections between visual semantics and textual elements, and to map the visual content of an image into a sequence of words while maintaining its understandability.
In this paper, we provide an overview of image captioning as developed over the last ten years. We review mainstream image captioning methods, especially those focusing on images taken from real life. Moreover, we mainly focus on deep learning-based methods, and develop a taxonomy of the encoder-decoder framework, attention mechanism and training strategies according to model structures and training manners. Then, we summarize the datasets used in image captioning, from domain-generic to domain-specific benchmarks, as well as the standard and non-standard metrics. To conclude, we give a discussion of open challenges and suggest future directions.
The organization of the rest of this paper is as follows. In Section II, we review retrieval-based and template-based approaches respectively. In Section III, deep neural network-based methods are introduced in detail; we divide them into several subclasses and discuss the classic methods in each subclass. The results of the most advanced methods are compared on the benchmark datasets in Section IV. After that, we look into the future work of image captioning in Section V. In Section VI, we give a conclusion.
In the early days of computer vision (CV), the computer is used to imitate the human visual system and to tell what it is watching. Subsequently, further efforts are made to approach the human visual cognition ability, so that computers can describe what they see in simple natural language. The task of image captioning thus appears: early researchers mainly leverage retrieval-based and template-based methods to enable a computer with visual learning ability to generate a fluent and understandable sentence for the given image.
The main idea of retrieval-based image captioning is to store a large number of image-caption pairs in a corpus. For an input image, it first searches the corpus for similar images through similarity comparison. The best annotation from the candidate captions of the retrieved images is then selected as the caption of the given image. The flow chart is shown in Fig. 3. Captions generated by these methods can be a sentence that already exists in the corpus, or a sentence composed from retrieved sentences.
In the beginning, assuming that there is always a similar image in the database for the given image, the computer can directly use the annotation of the retrieved image as the description of the given image [45]–[49]. Farhadi et al. [45] use the nearest neighbor rule to select the candidate image, and then match corresponding sentences with the Tree-F1 rule to output the closest sentence. Tree-F1 is a measure that reflects two important interacting components, accuracy and specificity. Ordonez et al. [46] find the most similar matching image by calculating the global similarity between the given image and the images in the database, and transfer the caption of the matched image. Socher et al. [47] focus on actions and subjects based on word order and syntactic details to retrieve the images described by these sentences. These methods do not consider the effect of noise. Subsequently, to reduce the impact of visual estimation noise, Mason and Charniak [48] propose a nonparametric density estimation technique, which estimates the word probability density from annotations of the retrieved images and selects the sentence with the highest probability for the given image. Sun et al. [49] filter text terms based on visual discrimination, group them into concepts according to visual and semantic similarities, and then use bi-directional retrieval to output the caption.
The above assumption is difficult to meet in practical applications. Therefore, other retrieval-based image captioning methods form a new output sentence by simple processing of the retrieved sentences [13], [14], [50], [51]. A typical operation is to extract a list of expressive phrases from existing annotations according to image similarity, and then generate new sentences for the given image by selectively combining relevant phrase elements [13], [50]. Differently from the above methods, Hodosh et al. [14] propose a ranking-based method to establish a joint expression space by constructing sequence kernels and capturing the kernel of semantic similarity. Devlin et al. [51] first find a set of k-nearest neighbor (k-NN) images in the training dataset, and then return the consensus caption from the set of candidate captions describing the k-NN images.
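As an illustration of the k-NN retrieval idea, the following Python sketch retrieves candidate captions for a query image from a feature database; the feature dimensions, the cosine-similarity choice and the simple first-caption consensus are illustrative assumptions rather than the exact procedure of any cited work.

```python
import numpy as np

def retrieve_caption(query_feat, db_feats, db_captions, k=5):
    """k-NN retrieval-based captioning sketch: pick a caption of the most similar database image.

    query_feat:  (d,)   feature of the query image
    db_feats:    (M, d) features of the database images
    db_captions: list of caption lists, one list per database image
    """
    # Cosine similarity between the query and every stored image.
    sims = db_feats @ query_feat / (
        np.linalg.norm(db_feats, axis=1) * np.linalg.norm(query_feat) + 1e-8)
    top_k = np.argsort(-sims)[:k]                      # indices of the k most similar images
    candidates = [c for i in top_k for c in db_captions[i]]
    # Simplest consensus: return a caption of the nearest neighbour; consensus
    # re-ranking (as in Devlin et al. [51]) would score candidates against each other.
    return db_captions[top_k[0]][0], candidates
```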
In essence, the retrieval-based methods generate the description of the given image by choosing the most semantically similar sentences or phrases from the database. The generated captions are usually syntactically correct, fluently expressed, and close to natural language. However, there are also obvious disadvantages, such as over-reliance on the annotation database, which leads the generated sentences to keep the same syntax and expression style as the database. It also restricts the image caption to existing sentences or phrases, which cannot accommodate new objects or scenes. In some cases, the generated captions may even have nothing to do with the given image.
The template-based image captioning can be regarded as a process of applying syntactic and semantic constraints. Typically, it is a data-driven model with predefined syntax rules. It detects and extracts related elements such as objects, actions, scenes and relationships, transfers them into a semantic representation to predict language labels, and then fills the labels into the predefined template to generate captions [15], [45], [52], [53]. The flow path is shown in Fig. 4. The template-based methods can ensure that the grammar of the sentences is correct.
Most of the early template-based captioning techniques extract single words from the given image [17], [45], [54]. The predicted words, such as subjects, predicates, and prepositions, are then linked to generate descriptions. In an early attempt, Farhadi et al. [45] use a support vector machine (SVM) to detect three elements (objects, actions and scenes) as single semantic words to describe the image. Yang et al. [17] predict the optimal quadruplet (Nouns-Verbs-Scenes-Prepositions) via the hidden Markov model (HMM) [55] to fill the template. Krishnamoorthy et al. [54] use the Subject-Verb-Object (SVO) model to combine the output of the most advanced object and action detectors with real-world knowledge to obtain the best (subject, action, object) triplet. Kulkarni et al. [15] propose to use conditional random fields (CRF) to extract, from a large visual annotation database, the words closest to describing the image. Xu et al. [52] introduce a dependency-tree structure to embed sentences into a continuous vector space. These methods can preserve the visually grounded meaning and word order.
A phrase carries a larger chunk of information than a single word, so phrase-based sentences tend to be more descriptive than word-based ones. In template-based image captioning, researchers therefore also try to fill templates with phrases to generate more descriptive sentences [10], [53], [56]. Li et al. [10] extract phrase-level semantics of objects, attributes and spatial relations based on web-scale n-gram data. Lebret et al. [53] train a purely bilinear model to infer phrases for producing relevant descriptions. Ushiku et al. [56] extract continuous words from the training sentences as phrases, then map image features and these phrases into the same shared subspace for phrase selection. Besides being more descriptive, phrase-based templates also bring a great improvement in syntax.
The template-based captioning models can produce syntactically correct sentences, and the descriptions are generally more consistent with the image content than those of retrieval-based models. Nonetheless, they are highly dependent on the predefined template. Generally, the template is fixed, the length of the generated captions is immutable, and the description content is relatively simple. In addition, such methods need to annotate many objects, attributes and relationships of the image, which makes them handle large-scale data poorly and prevents them from being applicable to images of all domains.
Recently, deep learning has made great progress in CV and natural language processing (NLP) [4], [43], [57], and it has been widely used in image captioning. Deep learning can directly map images to texts given a large dataset, and generate accurate descriptive sentences. Deep learning-based image captioning can also be understood as a probability distribution problem. Given an image I with visual representation V, and its corresponding annotation marked as S,
\begin{equation} P(S|V;\theta)=\prod_{t=0}^{N}P(S_t|S_1,S_2,\ldots,S_{t-1},V;\theta). \end{equation} | (1) |
The problem of generating text sentences can be transformed into conditional probability modeling
\begin{equation} \log P(S|V;\theta)=\sum_{t=0}^{N}\log P(S_t|S_1,S_2,\ldots,S_{t-1},V;\theta). \end{equation} | (2) |
The final goal of training is to maximize the sum of log-likelihoods over all samples
\begin{equation} \theta^{*}=\arg\max_{\theta}\sum_{(I,S)}\log P(S|V;\theta) \end{equation} | (3) |
where $(I, S)$ ranges over the image-caption pairs in the training set and $\theta$ denotes the model parameters. In the inference stage, the output sentence $S'$ is generated by
\begin{equation} S'=\arg\max_{S}P(S|V;\theta^{*}). \end{equation} | (4) |
Since each position has an entire vocabulary of words as candidates, the size of the word search increases exponentially with the length of the sequence. It is not feasible to compute the probabilities of all sequences and then select the sequence with the highest probability. In practice, the beam search is needed to reduce the search space. The detailed training, verification and testing procedures are shown in Fig. 5.
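The following minimal Python sketch illustrates how beam search prunes this exponential search space; the `step_fn` interface, the beam size and the token names are illustrative assumptions, not a specific model's API.

```python
import heapq

def beam_search(step_fn, start_token, end_token, beam_size=3, max_len=20):
    """Minimal beam search sketch.

    step_fn(prefix) is assumed to return a list of (token, log_prob) pairs
    for the next word given the partial caption `prefix`.
    """
    beams = [(0.0, [start_token])]        # each beam entry: (cumulative log-prob, token sequence)
    completed = []
    for _ in range(max_len):
        candidates = []
        for logp, seq in beams:
            if seq[-1] == end_token:      # finished captions are set aside
                completed.append((logp, seq))
                continue
            for token, token_logp in step_fn(seq):
                candidates.append((logp + token_logp, seq + [token]))
        if not candidates:
            break
        # Keep only the top-k partial captions instead of expanding all of them.
        beams = heapq.nlargest(beam_size, candidates, key=lambda x: x[0])
    completed.extend(beams)
    # Return the highest-probability caption found.
    return max(completed, key=lambda x: x[0])[1]
```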
Benefiting from the encoder-decoder framework, deep learning-based methods are trained in a simple end-to-end manner, as shown in Fig. 2, while focusing on different priorities. In this section, we review such models, divide them into several subcategories based on the encoder-decoder framework, attention mechanism, and training strategy, and then introduce and discuss each subcategory separately. The overall technical diagram is shown in Fig. 6.
The encoder-decoder framework is the basis of deep learning captioning models. Vinyals et al. [26] first propose the neural image caption (NIC) generator, the first work to introduce the encoder-decoder framework into image captioning. The NIC treats image captioning as a translation problem, where images are regarded as the input and sentences as the output. It serves as the basis for subsequent improvements and as a baseline model for performance comparison between models. Generally, a CNN with convolutional layers, pooling layers and fully connected layers is used as the encoder to obtain image features represented as fixed-length vectors through matrix transformation. An RNN or long short-term memory (LSTM) network is used as the decoder to decode the visual features and iteratively generate a descriptive text sequence. To generate more human-like sentences, researchers have made many innovative improvements on top of this basic framework. In this part, we summarize these developments from the perspectives of visual encoding and language decoding respectively.
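A minimal PyTorch-style sketch of such a CNN encoder and LSTM decoder is given below; the ResNet-101 backbone, layer sizes and class name are illustrative assumptions, not the exact NIC configuration (which used GoogLeNet).

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CNNLSTMCaptioner(nn.Module):
    """Minimal CNN encoder + LSTM decoder in the spirit of NIC [26]."""
    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512):
        super().__init__()
        backbone = models.resnet101(weights="IMAGENET1K_V1")
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # globally pooled feature
        self.img_proj = nn.Linear(backbone.fc.in_features, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # Encode the image into a fixed-length vector.
        feat = self.encoder(images).flatten(1)          # (B, 2048)
        feat = self.img_proj(feat).unsqueeze(1)         # (B, 1, embed_dim)
        # The image feature is fed as the first "word"; ground-truth words follow.
        words = self.embed(captions[:, :-1])            # (B, T-1, embed_dim)
        inputs = torch.cat([feat, words], dim=1)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                         # word logits at every time step
```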
1) Visual Encoding: The first challenge is for the encoder to learn and provide an effective representation of the visual content. Current visual representation learning methods can be divided into three categories: convolutional representation learning (global or region image features), as shown in Figs. 7 and 8; graph representation learning (visual relationships), as shown in Fig. 9; and attention representation learning (intra-modal interaction). Here, we mainly review the related work in these three categories of visual representation learning. More details of the attention mechanism will be introduced in the next part.
a) Convolutional representation learning: Global grid convolutional features are employed in a large variety of image captioning models [12], [58]–[60]. In the NIC [26], activations of the last fully connected layer of GoogLeNet are used as a high-level, fixed-size representation of the visual features, which then serves as a conditional input to the LSTM module to generate a sentence. Differently from NIC, Xu et al. [25] use the output of lower convolutional layers instead of the final fully connected layer as the image feature vector. Other methods [11], [12], [61] directly extract activations from the last pooling layer of ResNet-101/152 pre-trained on ImageNet as image features, which can improve the generalization of the model. To better encode visual content, improved CNN models have been widely used. Donahue et al. [62] add a long-term recurrent convolutional network (LRCN) module after the first two fully connected layers to handle variable-length visual input. Wu et al. [58] and Zhang et al. [63] obtain global grid features through a pre-trained CNN, while learning explicit representations of high-level attributes to guide more accurate captions. Mao et al. [64] connect a multimodal component after the feature layer to merge the visual and language information, which can strengthen the visual-language association. Global grid representations cover all the content of a given image with a uniform division. However, this uniformly fragmented embedding treats all significant and non-significant objects and regions equally, which makes it difficult to generate specific and accurate descriptions.
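The following PyTorch sketch contrasts the two common ways of obtaining convolutional representations mentioned above: grid features from the last convolutional block versus a single globally pooled vector; the ResNet-152 backbone, input size and tensor shapes are illustrative assumptions.

```python
import torch
import torchvision.models as models

# Sketch: two common ways of obtaining convolutional image representations.
backbone = models.resnet152(weights="IMAGENET1K_V1")
modules = list(backbone.children())

grid_encoder = torch.nn.Sequential(*modules[:-2])    # keep the last conv block only
global_encoder = torch.nn.Sequential(*modules[:-1])  # keep the average pooling as well

images = torch.randn(2, 3, 224, 224)                 # a dummy batch of images
with torch.no_grad():
    grid = grid_encoder(images)                      # (2, 2048, 7, 7): one vector per grid cell
    grid = grid.flatten(2).transpose(1, 2)           # (2, 49, 2048): 49 grid features per image
    pooled = global_encoder(images).flatten(1)       # (2, 2048): a single global vector
```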
To gain a deeper understanding of images through fine-grained analysis and even multiple steps of reasoning, encoding the visual features of significant regions has become a mainstream direction. Karpathy and Fei-Fei [32], and Fang et al. [65] use sub-regions of images rather than full images to reason about sentences. Anderson et al. [29], [66] leverage Faster RCNN [67] to obtain object regions and other salient image regions. Subsequently, based on the Faster RCNN detector, more and more approaches are proposed to encode salient visual regions to obtain high-level semantic information. Datta et al. [68] learn the potential correspondences between regions-of-interest (RoIs) and phrases in captions, and use matched RoIs as the discriminative condition. To learn knowledge from external unpaired images and texts, Chen et al. [69] extract regional features from images and classify them using multiple instance learning (MIL). Kim et al. [70] encode the region features of union, subject and object through a region proposal network (RPN) [67] in dense relational image captioning. Yang et al. [71] use an object spatial coherence integration method to concatenate the raw visual features of every two overlapping objects. These regional feature encoding methods successfully address fine-grained information learning in visual feature encoding.
Besides learning visual representations in terms of global grid or salient region features, these convolutional methods also attempt to detect semantic concepts from image regions and then encode them into high-level representations. Although great progress has been made, most approaches only deal with entity regions in isolation without considering the interactions between different regions. This makes them deficient in acquiring the topological structure and pairwise relationships.
b) Graph representation learning: To further improve the encoding of visual relationships and entity structures, many researchers use scene graphs to extract high-level semantics and visual relationships, and then utilize the graph convolutional network (GCN) [72], [73] to learn graph representations. A scene graph is an abstraction of salient objects and their complex relationships, which provides rich structured semantic information about an image. Specifically, these methods extract high-level semantics such as objects, attributes and relationships from region features, and build a semantic graph with directed edges to produce relation-aware region-level representations, as shown in Fig. 9.
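A minimal sketch of one graph-convolution step over detected-region features connected by a scene-graph adjacency matrix is shown below; the dimensions, mean-aggregation normalization and class name are illustrative assumptions rather than the exact formulation of any cited model.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One graph-convolution step over region features linked by a scene graph."""
    def __init__(self, in_dim=2048, out_dim=1024):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, region_feats, adjacency):
        # region_feats: (N, in_dim) detected-region features
        # adjacency:    (N, N) scene-graph edges (1.0 if two regions are related)
        adjacency = adjacency + torch.eye(adjacency.size(0))   # add self-loops
        degree = adjacency.sum(dim=1, keepdim=True)
        neighborhood = (adjacency / degree) @ region_feats     # average over related regions
        return torch.relu(self.linear(neighborhood))           # relation-aware region representations
```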
To prove that the relationships between objects are beneficial for representing and describing an image, Yao et al. [35] propose a GCN-LSTM architecture that, for the first time, obtains visual representations from scene graphs built on spatial and semantic connections. Yang et al. [74] build a scene graph auto-encoder (SGAE) that uses directed scene graphs to represent the complex structural layout of both images and sentences. The idea has also been applied in other graph embedding works [36], [37], [75], [76]. Moreover, many graph-based approaches aim to improve the applications of scene graph features [77], [78]. Zhong et al. [79] decompose a scene graph into a set of sub-graphs, and capture semantic elements from each sub-graph to interpret different image contents. Chen et al. [80] use a directed abstract scene graph without any concrete semantic labels to encode user intention at a fine-grained level and to control the diversity of captions. Zhang et al. [81] exploit the semantic coherency between the visual and language modalities to align the visual scene graph with the language graph, and the alignment consensus guides the model to capture both the correct linguistic characteristics and visual relevance. Tripathi et al. [82] close the semantic gap between the visual graph and the caption graph by leveraging the spatial locations of objects and additional human-object-interaction labels, obtaining competitive image captioning performance relying only on the scene graph labels.
Although state-of-the-art performance has been achieved based on graph embedding, the scene graphs generated by black-box generators also cause some challenges for image captioning. One is that generating high-quality descriptions depends more on the captioning model than on the scene graph parser [83]. The other is that the scene graphs used by most captioning models are still too noisy for generating sentences [84]. Constructing a specific spatial-relationship scene graph based on the application scene and conditions is a promising direction for research on image captioning. The gap between, and the alignment of, the semantics of the visual scene graph and the text scene graph also attract more and more attention.
c) Attention representation learning: Many of the latest advanced captioning models gradually replace the traditional CNN-LSTM architecture with a Transformer model based on the self-attention (SA) mechanism. These models treat the image captioning task from a sequence-to-sequence prediction perspective. They directly encode images into attention features to model the global visual context in each encoder layer, and in the most recent models the process is totally convolution-free. More details on this attention-based visual representation learning can be found in the Intra-Modal Attention part of Section III-B.
As stated above, the purpose of improving visual encoding is to extract more useful discriminative information from images. Replacing global grid features activated by a CNN with regional features introduces fine-grained semantics and other high-level features. Graph representation learning can embed explicit, structural relationships between detected objects, which guides more interactive descriptions. Attention-based visual representations can mine more of the internal interactions between visual elements. These methods significantly improve the description of images and also enhance the descriptiveness, diversity and accuracy of the generated sentences. However, there are some inherent drawbacks; for example, it is difficult to effectively explain semantic reliability in some scenarios.
2) Language Decoding: The decoder aims to predict the probability of a given sequence of words occurring in the sentence. It treats text generation as a stochastic process. LSTM is the most widely used decoder in existing image captioning models [26], [85]. Both the image features and the word embedding vectors are projected to the decoder, and the image features are only input at the first step to inform the LSTM about the image content. The next word is generated based on the current time step and the previous hidden states. Then beam search is used to iteratively consider the k best sentence sets up to time t, generating sentence candidates of size k.
To enhance the contribution of image information to the generation of subsequent words, the first idea is to introduce a visual sentinel into the decoder to estimate the probability distribution of the next word based on the previous word and the content of the image [86]–[89]. Jia et al. [90] and Donahue et al. [62] feed visual semantic information as an extra input to each unit of the LSTM block, which takes visual information into account at every time step. Mao et al. [85], [91] input the global features of the given image and the corresponding sentence. Lu et al. [27] use an additional "visual sentinel" vector instead of a single hidden state of the LSTM. These methods are mainly based on unidirectional and shallow LSTMs, shown in the left part of Fig. 10. However, their capacity for long sequence learning is limited.
To make better use of both history and future contexts, more directional and deep LSTM variants are proposed to enrich the text decoding. Wang et al. [92] use an end-to-end bidirectional LSTM model to learn both forward and reverse contexts for modeling long-term visual language interaction. Zheng et al. [93] also use a bidirectional LSTMs structure to obtain the global context information. The forward and backward LSTMs simultaneously construct the sentence in a complementary manner. Furthermore, two-stage [33], [94]–[97] and triple-stream [98]–[100] LSTM variants are proposed to explore more visual context and semantic information. Wu et al. [33] decode the text through a GridLSTM and a depth LSTM. The GridLSTM obtains visual features selectively for recalling image content while generating each single word. Deep LSTMs use selected visual features as a potential memory to ensure the caption does not deviate from the original image content. Gu et al. [98] design a coarse-to-fine framework with three stacked LSTMs. Attention weights and hidden vectors produced by the preceding stage are used as inputs of the current stage, and they are taken as the disambiguating cues to the subsequent stage as well. Similarly, Kim et al. [99] provide the triplet (topic, object and union) features detected from the region features to three LSTM branches separately. The three branches work collaboratively to predict a related word and its part-of-speech class, leading to relationship-based image understanding. Figs. 11 and 12 show some examples of deep LSTM variants. As can be seen from the figures, the preliminary description is generated in the first layer and then paraphrased into a more diverse and descriptive sentence in the deep layer.
Recurrent models based on the LSTM have been the standard decoder in image captioning for many years. The decoder learns the complex dynamics of input sequences over a time range and can remember or use information in the input sequences through internal memory units. The main shortcoming of LSTM decoding is that the LSTM struggles to maintain long-term dependencies. The above works show that improvements to the decoder mainly focus on enriching the information from both the visuals and the words when generating the description. These improvements are mainly based on generating captions through autoregressive decoding: each word is generated conditioned on the previously generated words, which may result in issues such as error accumulation, captioning latency, improper semantics and lack of diversity.
Early captioning models generate sentences by considering the scene as a whole rather than the spatial regions relevant to particular words. To understand and reason about images at a finer granularity, visual attention has been broadly used to interpret related visual information dynamically and to ground generated words on corresponding image regions. By integrating attention mechanisms into the encoder-decoder framework, the information of the source and target domains is aligned to generate dynamic attention over each part of the input image when generating descriptive words. In this subsection, we summarize the application of different attention mechanisms and methods within the basic image captioning framework introduced above, and discuss how to improve their effect.
1) Cross-Modal Attention: Xu et al. [25] use a dynamic, time-varying visual representation to replace the static global vector and improve the alignment between words and visual content. One drawback of the model is that it utilizes features from lower CNN layers, which may fail to capture high-level semantic information. To alleviate this issue, many attention refinements have been proposed.
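The following sketch shows the additive (soft) cross-modal attention pattern described above, in which the decoder hidden state is used to weight region or grid features; the layer sizes and class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftVisualAttention(nn.Module):
    """Additive attention over region/grid features, conditioned on the decoder state."""
    def __init__(self, feat_dim=2048, hidden_dim=512, attn_dim=512):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.state_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, features, hidden_state):
        # features: (B, K, feat_dim) K visual regions; hidden_state: (B, hidden_dim)
        energy = torch.tanh(self.feat_proj(features) + self.state_proj(hidden_state).unsqueeze(1))
        weights = torch.softmax(self.score(energy).squeeze(-1), dim=1)   # (B, K) attention map
        context = (weights.unsqueeze(-1) * features).sum(dim=1)          # (B, feat_dim) attended vector
        return context, weights
```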
The global-local attention [101], [102] integrates local representation at the object level with global representation at the image level. Semantic attention [38], [103]–[106] integrates semantics to form a feedback loop connecting top-down and bottom-up computation, which can focus on key and diverse aspects of the image synchronously. Spatial and channel-wise attention [107]–[109] is also used to select semantic attributes according to the demands of the sentence context. Adaptive attention [27], [110]–[113], with an additional visual sentinel, decides when to rely on visual signals (for visual words) and when to rely only on the language model (for non-visual words). Context attention [36], [114], [115] focuses on different locations of the input image according to contextual regions and the current generation state; specific visual objects, implicit visual relationships and visuals that have been previously interpreted can all be taken into account. Memory-enhanced attention [116]–[118] keeps all the information of previously generated words in a memory store. Hierarchical attention [119]–[122] is able to learn different concepts at different layers: low-level concepts are merged into high-level ones, and low-level features are also passed to the top to predict words. Deliberate attention [123] is proposed to relieve exposure bias; generally, the first attention layer provides the hidden states and visual attention for generating a preliminary caption, while the second, deliberate residual-based attention layer refines it. Visual relationship attention [71], [124], [125] extends the attention mechanism from regions to relationships via contextualized embeddings of individual regions; it is a parallel attention that can both extract adjacent relationships and capture the features of adjacent spatial layouts. Recurrent attention [126], [127] introduces continual learning into image captioning, and considers the transient nature of vocabularies in continual image captioning, especially disjoint vocabularies.
The cross-modal attention mechanisms are designed to discover the alignment between visual information and linguistic features at the region level, the semantic level, the visual context level, and even the level of higher-order visual relationships. The application of these cross-modal attention mechanisms is demonstrated in Fig. 13. Inspired by the application of self-attention in machine translation, self-attention models have also been used in image captioning to explore the interaction of intra-modal information.
2) Intra-Modal Attention: Differently from the above cross-modal attention integrated with the CNN-LSTM framework, intra-modal attention interaction methods are mainly based on the self-attention (SA) mechanism proposed in the Transformer [128]. The Transformer is an encoder-decoder framework consisting only of multi-head self-attention mechanisms, which has a stronger capability for capturing long-sequence features and for parallel computation than CNN/RNN models. The self-attention mechanism reduces the dependence on external information and can capture the internal interactions of features.
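A minimal sketch of single-head scaled dot-product self-attention over a sequence of visual (or textual) features is given below; the projection matrices are assumed to be provided by the caller.

```python
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a feature sequence x: (B, N, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))   # pairwise interactions within one modality
    return torch.softmax(scores, dim=-1) @ v                   # each element attends to all the others
```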
The development of intra-modal interaction learning can be divided into three stages. In the first stage, the Transformer encoder or decoder works together with the traditional CNN-LSTM framework [12], [129]–[131], as shown in Fig. 14. On top of a CNN encoder, Zhu et al. [131] use the Transformer decoder with stacked self-attention to replace the LSTM decoder, which can solve the inherent cross-time sequence problem of the LSTM decoder. The new decoder can memorize dependencies between the sequences and can be trained in parallel. Conversely, Banzi et al. [130] build an encoder that combines a self-attention mechanism with contrastive feature construction. The self-attention encoder aggregates the visual information of each image group, captures the differences between them, and finally generates context-aware captions.
Furthermore, in the second stage, the whole Transformer structure is used with detected object features to guide the caption decoding [42], [132], [133], as shown in Fig. 15. Yu et al. [42] extend the Transformer to a multimodal Transformer to model the intra-modal interactions. The object relation Transformer (ORT) [132] and geometry-aware self-attention (G-SAN) [133] introduce geometric attention into standard self-attention, which can explicitly and efficiently consider the geometric and spatial relations between objects. Entangled attention (ETA) [134] implements multi-head attention in an entangled manner to leverage the complementary nature of visual and semantic information in attention operations. The meshed-memory Transformer (M2-T) [135] learns multi-level representations of the region relationships according to prior knowledge. The dual-level collaborative Transformer (DLCT) [136] uses dual-way self-attention to explore the intrinsic properties of region and grid features while taking geometric information into account. Although these models can obtain competitive captions, they still rely on CNN image preprocessing and do not overcome the limitations of CNNs in global context modeling.
In the third stage, the model is made totally convolution-free. Liu et al. [30] consider a sequence-to-sequence prediction perspective and propose a caption Transformer (CPTR) that takes the sequentialized raw image as input, as shown in Fig. 16. It can model the global context at every encoder layer from the beginning, completely eliminating convolution and recurrence. The intra-modal interaction between image patches in the encoder and the "words-to-patches" cross-modal attention in the decoder are both effectively utilized. The same idea is further verified in other works [5], [137].
Based on the Transformer [128], the self-attention mechanism addresses the main disadvantage of recurrent models, namely that it is hard to maintain long-term dependencies between the generated words. It has been established as an effective method for modeling the relations between image regions, caption words and the state of the language prediction model. The intra-modal interactions within the visual and textual modalities, as well as the cross-modal semantic alignment between vision and text, have all been successfully explored and leveraged.
The image captioning model usually generates a caption word by word, sequentially, according to the words generated in the previous steps and the visual features. In fact, outputting each word is a sampling process: at each step, the output word is sampled from a learned distribution over the annotation vocabulary. The beam search algorithm is the most effective sampling strategy in image captioning. Rather than outputting a single word at each step, it keeps the k most probable partial sequences at each step as candidates and finally outputs the complete sequence with the highest probability. Beam search is often used together with different training strategies. In this section, we elaborate on the existing training strategies, which are classified into cross-entropy loss, reinforcement learning, and pre-training models.
1) Cross-Entropy Loss: The cross-entropy loss is generally used to calculate the difference between the probability distributions of the target and the predicted value when adjusting model weights during training. Given a sequence of target words $y_{1:n}$, the loss is defined as
\begin{equation} L_{XE}(\theta)=-\sum_{i=1}^{n}\log\big(P(y_{i}|y_{1:i-1},V)\big) \end{equation} | (5) |
where P is the probability distribution calculated by the language model, and V is the representation of the visual features. Hence, at each time step of training, the negative log-likelihood of the current word is minimized given the previous ground-truth words. The cross-entropy loss operates at the word level and optimizes the probability of each word in the ground-truth sequence, without taking the longer-range dependencies between generated words into account.
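A minimal PyTorch sketch of the word-level cross-entropy objective of Eq. (5) under teacher forcing is shown below; the padding-token convention is an illustrative assumption.

```python
import torch.nn.functional as F

def xe_loss(logits, targets, pad_id=0):
    """Word-level cross-entropy of Eq. (5): logits (B, T, vocab), targets (B, T) ground-truth words."""
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # flatten the batch and time dimensions
        targets.reshape(-1),
        ignore_index=pad_id,                  # padded positions do not contribute to the loss
    )
```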
Most deep learning methods for image captioning are trained with the cross-entropy loss. Earlier models, such as NIC [26], Show, Attend and Tell [25], Semantic Attention [114], SCA-CNN [107] and Adaptive Attention [27], rely only on this loss for training. Such traditional cross-entropy training settings suffer from the problem of exposure bias. They also push the model to generate safer descriptions of the given image: when two images are similar in scene but not in detail, the model tends to generate a rough description, which causes the specific details of the image to be ignored. To tackle these problems, deep reinforcement learning strategies have been proposed to alleviate the exposure bias of cross-entropy training.
2) Reinforcement Learning: The reinforcement learning (RL) [138] paradigm is designed to overcome the limitations of word-level cross-entropy training with sequence-level training. It leverages beam search and greedy decoding to calculate the loss gradient as follows:
\begin{equation} \nabla_{\theta}L_{RL}(\theta)=-\frac{1}{n}\sum_{i=1}^{n}\big((r(w^{i})-b)\nabla_{\theta}\log P(w^{i})\big) \end{equation} | (6) |
where $w^{i}$ is the $i$-th sampled caption, $r(\cdot)$ is the reward function given by a sequence-level evaluation metric, and $b$ is a baseline reward (e.g., the reward of the greedily decoded caption) used to reduce the variance of the gradient estimate.
Many works [40], [139], [140] harness the RL strategy and explore different sequence-level metrics as rewards. Ranzato et al. [141] first introduce the reinforcement learning algorithm into RNNs for sequence-level training, usually adopting BLEU and recall-oriented understudy for gisting evaluation (ROUGE) as reward signals. CIDEr [142] and semantic propositional image caption evaluation (SPICE) [143] are also used as rewards. Liu et al. [144] propose a policy gradient to directly optimize a linear combination of the SPICE and CIDEr metrics. Rennie et al. [28] build a self-critical sequence training (SCST) strategy, which has become the most widely used RL-based strategy [135], [145], [146]. Furthermore, Chen et al. [147] and Yan et al. [122] use conditional generative adversarial nets to enhance existing RL-based image captioning frameworks. Seo et al. [41] leverage a policy gradient approach to maximize human ratings as rewards. Shi et al. [148] further imitate the attention preference of humans and fine-tune the attention directly with language evaluation rewards through an RL strategy.
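The following simplified sketch shows a self-critical policy-gradient loss corresponding to Eq. (6), with the reward of the greedily decoded caption as the baseline; masking of padded time steps and the reward computation itself are omitted for brevity, so this is a sketch rather than the exact SCST implementation.

```python
def scst_loss(sample_logprobs, sample_reward, greedy_reward):
    """Self-critical policy-gradient loss sketch (Eq. (6)).

    sample_logprobs: (B, T) log-probabilities of the words in the sampled captions
    sample_reward:   (B,)   sequence-level metric (e.g., CIDEr) of the sampled captions
    greedy_reward:   (B,)   metric of the greedily decoded captions, used as the baseline b
    """
    advantage = (sample_reward - greedy_reward).unsqueeze(1)      # r(w) - b
    # Maximizing expected reward = minimizing the negative advantage-weighted log-likelihood.
    return -(advantage * sample_logprobs).mean()
```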
In fact, a randomly initialized policy is difficult to improve within an acceptable amount of time. Therefore, common image captioning models are first pre-trained using the cross-entropy or masked language-model losses, and then fine-tuned by reinforcement learning strategies with sequence-level metrics as rewards.
3) Pre-Training Model: Visual-language pre-training models, trained on massive image-text pairs, are also proposed and then fine-tuned for vision-to-language understanding tasks [44].
In application, there are two main objectives for the pre-training models. First and foremost, pre-training models focus on text-image alignment. Zhou et al. [149] present a unified visual-language pre-training model that concatenates salient objects and corresponding region features and aligns them at the word-region level. Li et al. [150] propose a simplified alignment learning method that uses object tags of images as anchor points. However, the method fails to generate novel object captions because it uses image-caption pairs for pre-training. Further, Hu et al. [151] break this dependency and pre-train visual-text alignments based on image-tag pairs, improving novel object captioning as well as general image captioning. The other most common pre-training objective is the masked contextual token loss: when training the BERT [43] architecture, tokens of each modality (visual and textual) are randomly masked. Notably, some works completely avoid the combination with the cross-entropy loss. As verified by the above models, pre-training can significantly accelerate the learning of image captioning and improve model performance.
We mainly introduce the commonly used public datasets and evaluation metrics for image captioning models.
An effective dataset can make an algorithm more efficient. A summary of some public datasets is given in Table I, and some sample image-annotation pairs are shown in Fig. 17, along with captions generated by some typical methods for three benchmark datasets.
Dataset | Training | Validation | Testing | Captions/Image | Topic
Flickr8k [153] | 6000 | 1000 | 1000 | 5 | Human activities |
Flickr30k [154] | 28 000 | 1000 | 1000 | 5 | Human activities |
MSCOCO [152] | 82 783 | 40 504 | 40 775 | 5 | Daily scene |
(Karpathy’s split) | 112 783 | 5000 | 5000 | 5 | Daily scene |
PASCAL 1K [155] | − | − | 1000 | 5 | Human activities |
YFCC100M [156] | 99.2 million in total (32%) | | | 7 | Public multimedia
Multi30K-CLID [157] | 29 000 | 1000 | 1000 | 5 | Daily scene |
AIC [158] | 210 000 | 30 000 | 30 000 + 30 000 | 5 | Daily scene |
IAPR TC-12 [159] | 17 665 | − | 1962 | 1.7 | Still natural |
GoodNews [160] | 424 000 | 18 000 | 23 000 | 1 | News |
VizWiz [7] | 23 431 | 7750 | 8000 | 5 | Blind view |
Nocaps [161] | 1 700 000 | 4500 | 10 600 | 10 | Novel objects |
FACAD [162] | 993 000 images in total | 0.2 | Fashion items | ||
TextCaps [163] | 424 000 | 18 000 | 23 000 | 1 | Text |
1) Benchmark Datasets: MS COCO [152] is the most widely used large-scale dataset in image captioning, and mainly consists of complex scene images collected from Yahoo's photo-sharing site Flickr. It includes 82 783 training images, 40 504 validation images and 40 775 test images, and each image is associated with 5 annotations. Since the annotations of the test set are not publicly available, Karpathy et al. [32] re-divide the training and validation sets into training/validation/test splits for practical applications, in which 5000 images are used for validation, 5000 images for testing, and the rest for training. The dataset also has a large official test set version including 40 775 test images paired with 40 private captions each, and a public evaluation server to measure performance.
Flickr8k and Flickr30k both come from Yahoo's photo-sharing site Flickr and are annotated through crowdsourcing services provided by Amazon Mechanical Turk. Flickr8k [153] contains 8092 images, of which 6092 are used for training, 1000 for validation, and the remaining 1000 for testing. Each image is annotated with 5 different sentences with an average length of 11.8 words. The dataset is small and suitable for beginners.
Flickr30k [154] is an extension of the Flickr8k dataset. It contains 31 783 images, and each image is associated with 5 manual sentence labels.
2) Early Datasets: PASCAL 1K [155] is extended from object detection to image captioning. It contains 20 categories, each with a random sample of 50 images paired with 5 captions each. All images are collected from the Flickr photo-sharing website. YFCC100M [156] contains 99.2 million images collected from Yahoo Flickr (see Table I).
3) Specific Datasets: Moreover, several novel datasets have been built for special requirements of image captioning. GoodNews [160] is the largest news caption dataset, containing news articles published from 2010 to 2018. It gathers 466 000 images, each with a single manual caption, headline and article text. It is randomly split into three sets with 424 000 images for training, 18 000 for validation and 23 000 for testing. AIC [158] is the first Chinese-language caption dataset. All images of the dataset are collected using Internet search engines. It contains more than 200 scenes and 150 types of actions, with 210 000 images for training, 30 000 images for validation and 60 000 images for testing; each image provides 5 Chinese annotations. VizWiz [7] is built for the image captioning services that blind people rely on. It consists of 31 981 images taken by blind people, each paired with 5 captions, and is split roughly in a 3:1:1 ratio into training, validation and test sets (Table I).
Image captioning is inspired by neural machine translation, and its early evaluation metrics come from machine translation and text summarization. Unique evaluation criteria have also been formed in the course of its development.
1) Standard Metrics: BLEU [166] analyzes the co-occurrence of n-grams between the candidate and the references, and is usually used to reflect the precision of the generated descriptions. It compares a text segment with a set of references to compute a score that correlates with human judgments of quality. Given an image $I_i$, the generated caption is denoted as $c_i$ and its reference captions as $S_i=\{s_{i1},\ldots,s_{im}\}$; then
\begin{equation} BLEU@N(C,S)=BP(C,S)\cdot\exp\left(\sum_{n=1}^{N}w_{n}\log P_{n}(C,S)\right) \end{equation} | (7) |
where $w_n$ is the weight of the $n$-gram precision (usually $1/N$), and $BP(C,S)$ is the brevity penalty
\begin{equation} BP(C,S)=\begin{cases}1, & \text{for } l_{c}>l_{s}\\ \exp\left(1-\dfrac{l_{s}}{l_{c}}\right), & \text{for } l_{c}\leq l_{s}\end{cases} \end{equation} | (8) |
\begin{equation} P_{n}(C,S)=\frac{\sum_{i}\sum_{k}\min\{h_{k}(c_{i}),\max_{j\leq m}h_{k}(s_{ij})\}}{\sum_{i}\sum_{k}h_{k}(c_{i})} \end{equation} | (9) |
where $h_k(c_i)$ and $h_k(s_{ij})$ denote the number of occurrences of the $k$-th $n$-gram in the candidate caption $c_i$ and in the reference caption $s_{ij}$, respectively, and $l_c$ and $l_s$ are the lengths of the candidate and reference captions.
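A self-contained sketch of BLEU@N following Eqs. (7)-(9) is given below; for simplicity it scores a single candidate, uses uniform weights $w_n = 1/N$ and takes the shortest reference length for the brevity penalty, whereas the official implementation uses corpus-level statistics and the closest reference length.

```python
import math
from collections import Counter

def bleu(candidate, references, max_n=4):
    """BLEU@N sketch for one candidate sentence; inputs are lists of tokens."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(candidate, n)
        # Clip each n-gram count by its maximum count over the reference captions.
        max_ref = Counter()
        for ref in references:
            for gram, cnt in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], cnt)
        matched = sum(min(cnt, max_ref[gram]) for gram, cnt in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_precisions.append(math.log(max(matched, 1e-9) / total))

    l_c = max(len(candidate), 1)
    l_s = min(len(r) for r in references)                        # simplified reference length
    bp = 1.0 if l_c > l_s else math.exp(1 - l_s / l_c)           # brevity penalty of Eq. (8)
    return bp * math.exp(sum(log_precisions) / max_n)            # uniform weights w_n = 1/N
```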
METEOR (metric for evaluation of translation with explicit ordering) [167] is calculated based on the weighted harmonic mean of single-word recall and precision, which can offset the shortcomings of BLEU. It also adds a WordNet-based synonym list to address the issue of synonym matching. METEOR aims to obtain a harmonic mean of the precision and recall between the best selected caption and the reference caption
\begin{equation} METEOR=(1-Pen)F_{mean} \end{equation} | (10) |
\begin{equation} Pen=\gamma\,(frag)^{\theta} \end{equation} | (11) |
\begin{equation} F_{mean}=\frac{P_{m}R_{m}}{\alpha P_{m}+(1-\alpha)R_{m}} \end{equation} | (12) |
where $P_m$ and $R_m$ are the unigram precision and recall over the matched words, $frag$ is the fragmentation fraction that penalizes scattered matches, and $\alpha$, $\gamma$ and $\theta$ are hyper-parameters.
ROUGE [168] compares the generated word sequences and word pairs with reference descriptions. There are several ROUGE variants, such as ROUGE-N and ROUGE-L. The most widely used is ROUGE-L, where the longest identical fragment in the generated and reference sentences is defined via the longest common subsequence (LCS). Taking the generated sentence C and the reference sentence S as examples,
\begin{equation} R_{lcs}=\frac{LCS(C,S)}{m} \end{equation} | (13) |
\begin{equation} P_{lcs}=\frac{LCS(C,S)}{n} \end{equation} | (14) |
\begin{equation} ROUGE\text{-}L=\frac{(1+\beta^{2})R_{lcs}P_{lcs}}{R_{lcs}+\beta^{2}P_{lcs}} \end{equation} | (15) |
where m and n represent the lengths of C and S respectively, and $\beta$ is a parameter that balances the relative importance of precision and recall.
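The following sketch computes ROUGE-L as defined in Eqs. (13)-(15), using dynamic programming for the LCS; the value of $\beta$ is an illustrative assumption.

```python
def rouge_l(candidate, reference, beta=1.2):
    """ROUGE-L sketch following Eqs. (13)-(15); inputs are token lists."""
    # Dynamic-programming longest common subsequence.
    m, n = len(candidate), len(reference)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if candidate[i] == reference[j] \
                else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    r_lcs, p_lcs = lcs / m, lcs / n
    return (1 + beta ** 2) * r_lcs * p_lcs / (r_lcs + beta ** 2 * p_lcs)
```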
CIDEr [142] is an automatic caption evaluation metric based on consensus. It treats each sentence as a document and uses TF-IDF to calculate the weight of its n-grams. The consistency of the generated caption with the reference captions is measured by the cosine similarity between the TF-IDF vector representations of the two sentences.
\begin{equation} CIDEr(c_{i},S_{i})=\frac{1}{N}\sum_{n=1}^{N}CIDEr_{n}(c_{i},S_{i}) \end{equation} | (16) |
\begin{equation} CIDEr_{n}(c_{i},S_{i})=\frac{1}{m}\sum_{j}\frac{g^{n}(c_{i})\cdot g^{n}(s_{ij})}{\Vert g^{n}(c_{i})\Vert\,\Vert g^{n}(s_{ij})\Vert} \end{equation} | (17) |
\begin{equation} g_{k}(s_{ij})=\frac{h_{k}(s_{ij})}{\sum_{\omega_{l}\in\Omega}h_{l}(s_{ij})}\log\left(\frac{|I|}{\sum_{I_{p}\in I}\min\big(1,\sum_{q}h_{k}(s_{pq})\big)}\right) \end{equation} | (18) |
where $h_k(\cdot)$ is the number of occurrences of the $n$-gram $\omega_k$ in a sentence, $\Omega$ is the vocabulary of all $n$-grams, $|I|$ is the number of images in the dataset, $m$ is the number of reference captions, and $g^{n}(\cdot)$ is the vector of TF-IDF weights $g_k(\cdot)$ over all $n$-grams of length $n$.
SPICE [143] is proposed to simulate human judgment. It hypothesizes that semantic propositional content is an important component of human caption evaluation. Based on the semantic scene graph, it compares objects, attributes and relations. For SPICE, a caption C is first parsed into a scene graph as follows:
\begin{equation} G(C)=\langle O(C),E(C),K(C)\rangle \end{equation} | (19) |
where $O(C)$ is the set of objects mentioned in C, $E(C)$ is the set of relation edges between objects, and $K(C)$ is the set of attributes associated with the objects. The semantic propositions in the scene graph are then represented as a set of tuples
\begin{equation} T(G(C))\triangleq O(C)\cup E(C)\cup K(C). \end{equation} | (20) |
The binary matching operator $\otimes$ returns the tuples matched between two scene graphs. The precision, recall, and SPICE score are then defined as
\begin{equation} \begin{aligned} P(C,S) = \frac{\vert T(G(C)) \otimes T(G(S))\vert}{\vert T(G(C))\vert} \end{aligned} \end{equation} | (21) |
\begin{equation} \begin{aligned} R(C,S) = \frac{\vert T(G(C)) \otimes T(G(S))\vert}{\vert T(G(S))\vert} \end{aligned} \end{equation} | (22) |
\begin{equation} \begin{aligned} SPICE(C,S) = F_{1}(C,S) = \frac{2 \cdot P(C,S) \cdot R(C,S)}{P(C,S) + R(C,S)} \end{aligned} \end{equation} | (23) |
where $G(S)$ denotes the scene graph parsed from the set of reference captions S.
All the above standard metrics can be computed with the publicly released source code.
2) Metrics for Diversity: The diversity of the generated sentences is compared using metrics computed across competing methods. Uniqueness [169] is the percentage of distinct captions generated by sampling from the latent space. Novel sentences [169] are generated sentences that do not appear in the training annotations. m-Bleu-4 [170] computes the average BLEU-4 of each diverse caption with respect to the remaining diverse captions per image; it reflects the overlap between different captions, and the lower the score, the greater the diversity. n-gram diversity (Div-n) [170] measures the ratio of distinct n-grams per caption to the total number of words generated per set of diverse captions. Self-CIDEr [171], [172] is derived from using CIDEr similarity for latent semantic analysis and kernelization. It forms a pairwise similarity matrix and uses the singular values of the matrix to measure the diversity of sentences. It is interpretable, and the more topics extracted, the more diverse the captions. In practice, it needs to be used together with other metrics for syntactic correctness and relevance to the image.
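A minimal sketch of the Div-n computation described above is given below; whitespace tokenization is an illustrative simplification.

```python
def div_n(captions, n=1):
    """Div-n sketch: distinct n-grams in a caption set divided by the total number of generated words."""
    ngrams = set()
    total_words = 0
    for caption in captions:                 # `captions` is the set of diverse captions for one image
        tokens = caption.split()
        total_words += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / max(total_words, 1)
```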
3) Metrics Based on Learning: The open-ended nature of image captioning makes it a challenging area for evaluation. Recently, many learning-based evaluation strategies have been investigated, which aim to evaluate how human-like a caption is. Text-to-image grounding evaluation (TIGEr) [173] converts the reference and candidate sentences into grounding score vectors. Fidelity and adequacy ensured image caption evaluation (FAIEr) [174] leverages the scene graph as a bridge to represent both images and captions. BERT-S [175] exploits pre-trained BERT embeddings [43] to represent and match the tokens in the reference and candidate sentences via cosine similarity. ViLBERTScore [176] further reflects image context while utilizing the advantages of BERT-S: ViLBERT is used to generate conditional image embeddings of the generated and reference text, and the embeddings of each sentence pair are then compared to obtain a similarity score. CIDErBtw [177] aims to improve the distinctiveness of image captions with respect to similar image sets; it assesses how different a caption is from those of similar images. Contrastive language-image pre-training score (CLIP-S) [178] is a cross-modal retrieval-based metric inspired by CLIP [179]. SeMantic and linguistic UndeRstanding Fusion (SMURF) [180] introduces "typicality" into evaluation, a new formulation rooted in information theory that is particularly suitable for problems lacking definite ground truth; it evaluates fluency through style and grammar. Unreferenced metric for image captioning (UMIC) [181] is also a metric that requires no reference caption. Based on visual-linguistic BERT, UMIC is trained to recognize negative captions through contrastive learning.
In this section, we present a brief analysis of the application of datasets and metrics, and we also elaborate on the strengths and weaknesses of several classic captioning models.
Firstly, we review the application of datasets and metrics in image captioning. As shown in Table II, at the beginning of the study of image captioning, verification was mainly performed on Flickr 8K/30K. Both datasets are deficient in the number and variety of image scenes, which restricts the performance improvement of captioning models. MS COCO is a large-scale dataset with complex scene images, which is more suitable for the task. Therefore, in recent studies, MS COCO has been used as the benchmark caption dataset, except for some research with special requirements. Initially, the metrics of image captioning were borrowed from NLP tasks; for example, BLEU is a standard metric for neural machine translation, and R@K is usually used for recommender systems or ranking tasks. Subsequently, the CIDEr and SPICE metrics were proposed to evaluate captions specifically. Finally, the standard metric set including BLEU, METEOR, ROUGE, CIDEr and SPICE has been established for image captioning systems.
Methods | Datasets | Evaluation metrics | Year |
Kiros et al. [31] | IAPR TC-12, SBU | BLEU, PPLX | 2014 |
Mao et al. [182] | IAPR TC-12, Flickr 8K/30K | BLEU, R@K, Mrank | 2014 |
Karpathy et al. [183] | PASCAL, Flickr 8K/30K | R@K, Mrank | 2014 |
Chen and Zitnick [184] | PASCAL, Flickr 8K/30K, MS COCO | BLEU, METEOR, CIDEr | 2015 |
Jia et al. [90] | Flickr 8K/30K, MS COCO | BLEU, METEOR, CIDEr | 2015 |
Vinyals et al. [185] | Flickr 8K/30K, MS COCO | BLEU, METEOR, CIDEr | 2015 |
Tran et al. [186] | MS COCO, Adobe-MIT, Instagram | Human evaluation | 2016 |
Li et al. [38] | Flickr 30K, MSCOCO | BLEU, METEOR, ROUGE, CIDEr | 2016 |
Hendricks et al. [187] | MS COCO, ImageNet | BLEU, METEOR | 2016 |
Yang et al. [188] | Visual genome | METEOR, AP, IoU | 2017 |
Liu et al. [144] | MS COCO | SPIDEr, Human evaluation | 2017 |
Gu et al. [189] | Flickr 30K, MS COCO | BLEU, METEOR, CIDEr, SPICE | 2017 |
Rennie et al. [28] | MS COCO | BLEU, METEOR, ROUGE, CIDEr | 2017 |
Wu et al. [190] | Flickr 8K/30K, MS COCO | BLEU, METEOR, CIDEr | 2018 |
Aneja et al. [191] | MS COCO | BLEU, METEOR, ROUGE, CIDEr | 2018 |
Wang and Chan [192] | MS COCO | BLEU, METEOR, ROUGE, CIDEr | 2018 |
Anderson et al. [29] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2018 |
Lu et al. [110] | Flickr 30K, MS COCO | BLEU, METEOR, CIDEr, SPICE | 2018 |
Xiao et al. [193] | Flickr 8K/30K, MS COCO | BLEU, METEOR, ROUGE, CIDEr | 2019 |
Jia et al. [90] | YFCC100M, InstaPIC-1.1M | BLEU, METEOR, ROUGE, CIDEr | 2019 |
Yang et al. [74] | MS COCO, Visual genome | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2019 |
Qin et al. [194] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2019 |
Biten et al. [160] | GoodNews | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2019 |
Liu et al. [195] | MS COCO | BLEU, METEOR, ROUGE, CIDEr | 2020 |
Yang et al. [162] | Fashion-caps | BLEU, METEOR, ROUGE, CIDEr, SPICE, mAP, ACC | 2020 |
Gurari et al. [7] | MS COCO, VizWiz | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2020 |
Sidorov et al. [163] | MS COCO, TextCaps | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2020 |
Wang et al. [196] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2020 |
Hu et al. [197] | BreakingNews, GoodNews | BLEU, METEOR, ROUGE, CIDEr | 2020 |
Fei [198] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2021 |
Hu et al. [151] | MS COCO, Nocaps | BLEU, METEOR, CIDEr, SPICE | 2021 |
Zhang et al. [44] | MS COCO | BLEU, METEOR, CIDEr, SPICE, R@K | 2021 |
Zhang et al. [199] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2021 |
Luo et al. [200] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2021 |
Zhang et al. [63] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2021 |
We analyze the performance of representative approaches in terms of the different evaluation metrics presented in Section IV-B on MS COCO. The results displayed in Table III are obtained either from caption files provided by the original authors or from other implementations. The table contains all the constituent approaches of the encoder-decoder model mentioned in Sections III-A and III-B. In addition to language accuracy evaluation, the learning-based metrics can also reflect the benefit of vision-and-language pre-training. As illustrated in Table III, for convolutional representation learning, compared with grid features obtained from a CNN (NIC [26] and Xu et al. [25]), all standard metrics improve substantially with the introduction of CNN region-based visual encodings in SCST [28] and Up-Down [29]. This illustrates that region-based feature representation favors better visual understanding than the coarse global information expressed by grid features. A further improvement trend also occurs with GCN encoding (SGAE [74]) and with complete (CPTR [30]) and incomplete (ORT [132], M2-T [135]) self-attention encoding. Mining more abundant visual relationships contributes significantly to understanding the visual information and transferring it into text.
Methods | Standard metrics | | | Diversity metrics | | | Learning-based metrics | |
 | BLEU-1 | METEOR | CIDEr | Div-1 | Div-2 | Novel | TIGEr | BERT-S | CLIP-S
NIC [26] | 72.4 | 25.0 | 97.2 | 1.4 | 4.5 | 36.1 | 71.8 | 93.4 | 69.7 | ||
Xu et al. [25] | 74.1 | 26.2 | 104.6 | 1.7 | 6.0 | 47.0 | 73.2 | 93.6 | 71.0 | ||
SCST [28] | 78.0 | 27.1 | 117.4 | 1.0 | 3.1 | 64.9 | 73.9 | 88.9 | 71.2 | ||
Up-Down [29] | 79.4 | 27.9 | 122.7 | 1.2 | 4.4 | 67.6 | 74.6 | 88.8 | 72.3 | ||
SGAE [74] | 81.0 | 28.4 | 129.1 | 1.4 | 5.4 | 71.4 | 74.6 | 94.1 | 73.4 | ||
MT [42] | 80.8 | 28.8 | 129.6 | 1.1 | 4.8 | 70.4 | 74.8 | 88.8 | 72.6 | ||
AOANet [202] | 80.2 | 29.2 | 129.8 | 1.6 | 6.2 | 69.3 | 75.1 | 94.3 | 73.7 | ||
ORT [132] | 80.5 | 28.7 | 128.3 | 2.1 | 7.2 | 73.8 | 75.1 | 94.1 | 73.6 | ||
M2-T [135] | 80.8 | 29.2 | 131.2 | 1.7 | 7.9 | 78.9 | 75.3 | 93.7 | 73.4 | ||
Unified VLP [149] | 80.9 | 29.3 | 129.3 | 1.9 | 8.1 | 74.1 | 75.1 | 94.4 | 75.0 | ||
CPTR [30] | 81.7 | 29.1 | 129.4 | 1.4 | 6.8 | 75.6 | 74.8 | 94.3 | 74.5 |
Furthermore, in Table IV, we summarize the performance of widely accepted methods according to the taxonomies proposed in Sections III-A−III-C. We report their accuracy in terms of standard evaluation metrics on the MS COCO Karpathy split test set, and also show their visual encoding and language decoding approaches, attention mechanism and training strategies. As shown in Table IV, methods are clustered primarily by the dates they were proposed. It can be seen that the performance of image captioning models has made impressive progress in recent years. For the standard metrics, the BLEU-4 score rises from an average of 24.6 for global CNN features (NIC [26]) to averages of 38.2 and 38.4 for methods exploiting self-attention encoding (X-LAN [129]) and graph encoding (CGVRG [201]) under the cross-entropy loss, and the same positive trend is observed under reinforcement learning training. The CIDEr score, absent in early grid feature models, rises from an average of 114.0 for region features to an average of 135.6 with the self-attention mechanism, with a peak of 140.4 for vision-and-language pre-training under the reinforcement learning training strategy. In addition, we can draw the conclusion that the more fine-grained and structured the mined visual semantic information and the more diverse the mutual relationships, the better the generated caption. This is evidenced by the fact that the performance of NIC [26] (coarse grid features) is much lower than that of Up-Down [29] (fine-grained visual region features), and the performance of Up-Down [29] is in turn much lower than that of GCN-LSTM [35] (structured visual information and relationships) and ETA [134] (visual internal relationships). Moreover, the collected results from different training strategies show that the reinforcement learning strategy is a valid alternative to the cross-entropy loss. Finally, the latest pre-training model VinVL [44] obtains peak scores on all standard metrics.
Table IV  Performance of widely accepted methods on the MS COCO Karpathy test split. "En"/"De" denote the visual encoder and language decoder, "ATT" indicates whether an attention mechanism is used, and scores are reported for cross-entropy training (EX) and reinforcement learning training (RL) in terms of BLEU-4 (B4), METEOR (M), ROUGE-L (R-L), CIDEr (C), and SPICE (S)

| Methods | En | De | ATT | B4 (EX) | M (EX) | R-L (EX) | C (EX) | S (EX) | B4 (RL) | M (RL) | R-L (RL) | C (RL) | S (RL) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NIC [26] | CNN | LSTM | × | 24.6 | − | − | − | − | 27.7 | 23.7 | − | 85.5 | − |
| Soft-ATT [25] | CNN | LSTM | √ | 24.3 | 23.9 | − | − | − | − | − | − | − | − |
| Hard-ATT [25] | CNN | LSTM | √ | 25.0 | 23.0 | − | − | − | − | − | − | − | − |
| GLA [102] | CNN | LSTM | √ | 31.2 | 24.9 | 53.3 | 96.4 | − | − | − | − | − | − |
| Semantic-ATT [114] | CNN | LSTM | √ | 37.7 | 27.9 | 58.2 | 123.7 | − | − | − | − | − | − |
| Adp-ATT [27] | CNN | LSTM | √ | 33.2 | 25.7 | 55.0 | 101.3 | − | − | − | − | − | − |
| SCST [28] | CNN | LSTM | √ | 30.0 | 26.0 | 54.3 | 101.3 | − | 34.2 | 26.7 | 55.7 | 114.0 | − |
| Up-Down [29] | CNN | LSTM | √ | 36.2 | 27.0 | 56.4 | 113.5 | 20.3 | 36.3 | 27.7 | 56.9 | 120.1 | 21.4 |
| Stack-Cap [98] | CNN | LSTM | √ | 35.2 | 26.5 | − | 109.1 | − | 36.1 | 27.4 | 56.9 | 120.4 | 20.9 |
| CAVP [203] | CNN | LSTM | √ | − | − | − | − | − | 38.6 | 28.3 | 58.5 | 126.3 | 21.6 |
| SGAE [74] | GCN | LSTM | √ | − | − | − | − | − | 38.4 | 28.4 | 58.6 | 127.8 | 22.1 |
| AOANet [202] | SA | LSTM | √ | 36.9 | 28.5 | 57.3 | 118.5 | 21.6 | 39.1 | 29.0 | 58.9 | 128.9 | 22.5 |
| ETA [134] | SA | T-ATT | √ | 37.1 | 28.2 | 57.1 | 117.9 | 21.4 | 39.3 | 28.8 | 58.9 | 126.6 | 22.7 |
| RFNet [204] | CNN | LSTM | √ | 35.8 | 27.4 | 56.8 | 112.5 | 20.5 | 36.5 | 27.7 | 57.3 | 121.9 | 21.2 |
| LSTM-A [205] | CNN | LSTM | √ | 35.2 | 26.9 | 55.8 | 108.8 | 20.0 | 35.5 | 27.3 | 56.8 | 118.3 | 20.8 |
| GCN-LSTM [35] | GCN | LSTM | √ | 36.8 | 27.9 | 57.0 | 116.3 | 20.9 | 38.2 | 28.5 | 58.3 | 127.6 | 22.0 |
| CNM [75] | GCN | LSTM | √ | 37.1 | 27.9 | 57.3 | 116.6 | 20.8 | 38.7 | 28.4 | 58.7 | 127.4 | 21.8 |
| DA [123] | CNN | LSTM | √ | 33.7 | 26.4 | 54.6 | 104.9 | 19.4 | 37.5 | 28.5 | 58.2 | 125.6 | 22.3 |
| MT [42] | SA | T-ATT | √ | 37.4 | 28.7 | 57.4 | 119.6 | − | 40.7 | 29.5 | 59.7 | 134.1 | − |
| ORT [132] | SA | T-ATT | √ | 35.5 | 28.0 | 56.6 | 115.4 | 21.2 | 38.6 | 28.7 | 58.4 | 128.3 | 22.6 |
| M2-T [135] | CNN | LSTM | √ | − | − | − | − | − | 39.1 | 29.2 | 58.6 | 131.2 | 22.6 |
| LBPF [194] | CNN | LSTM | √ | 37.4 | 28.1 | 57.5 | 116.4 | 21.2 | 38.3 | 28.5 | 58.4 | 127.6 | 22.0 |
| GCN-HIP [206] | GCN | LSTM | √ | 38.0 | 28.6 | 57.8 | 120.3 | 21.4 | 39.1 | 28.9 | 59.2 | 130.6 | 22.3 |
| VSUA [36] | GCN | LSTM | √ | − | − | − | − | − | 38.4 | 28.5 | 58.4 | 128.6 | 22.0 |
| NG-SAN [133] | SA | T-ATT | √ | − | − | − | − | − | 39.9 | 29.3 | 59.2 | 132.1 | 23.3 |
| POS-SCAN [207] | CNN | LSTM | √ | 36.5 | 27.9 | − | 114.9 | 20.8 | 38.0 | 28.5 | − | 125.9 | 22.2 |
| X-LAN [129] | SA | LSTM | √ | 38.2 | 28.8 | 58.0 | 122.0 | 21.9 | 39.5 | 29.5 | 59.2 | 132.0 | 23.4 |
| X-T [129] | SA | T-ATT | √ | 37.0 | 28.7 | 57.5 | 120.0 | 21.8 | 39.7 | 29.5 | 59.1 | 132.8 | 23.4 |
| OSCAR [150] | SA | T-ATT | √ | 36.5 | 30.3 | − | 123.7 | 23.1 | 40.5 | 29.7 | − | 137.6 | 22.8 |
| CGVRG [201] | GCN | LSTM | √ | 38.4 | 28.2 | 58.0 | 119.0 | 21.1 | 38.9 | 28.8 | 58.7 | 129.6 | 22.3 |
| SRT [208] | SA | T-ATT | √ | 36.6 | 28.0 | 56.9 | 116.9 | 21.3 | 38.5 | 28.7 | 58.4 | 129.1 | 22.4 |
| CPTR [30] | SA | T-ATT | √ | − | − | − | − | − | 40.0 | 29.1 | 59.4 | 129.4 | − |
| MAC [209] | SA | T-ATT | √ | − | − | − | − | − | 39.5 | 29.3 | 58.9 | 131.6 | 22.8 |
| DLCT [200] | SA | T-ATT | √ | − | − | − | − | − | 39.8 | 29.5 | 59.1 | 133.8 | 23.0 |
| RSTNet [63] | SA | T-ATT | √ | − | − | − | − | − | 40.1 | 29.8 | 59.5 | 135.6 | 23.3 |
| VRATT-Soft [125] | CNN | LSTM | √ | 34.3 | 28.5 | 60.0 | 111.7 | 20.1 | 37.5 | 28.5 | 61.6 | 122.1 | 22.1 |
| VRATT-Hard [125] | CNN | LSTM | √ | 36.3 | 27.9 | 60.6 | 113.0 | 20.4 | 36.6 | 28.4 | 60.9 | 119.8 | 21.5 |
| VinVL [44] | SA | T-ATT | √ | 38.2 | 30.3 | − | 129.3 | 23.6 | 40.9 | 30.9 | − | 140.4 | 25.1 |
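The gap between the EX and RL columns in Table IV comes from optimizing a sequence-level reward (usually CIDEr) directly. Below is a minimal sketch of the self-critical baseline idea behind SCST [28]; `model.sample`, `model.greedy_decode`, and `cider_reward` are hypothetical helpers used only to make the objective explicit, not the interface of any particular implementation.

```python
import torch


def scst_loss(model, images, references, cider_reward):
    # Sample one caption per image and keep the per-token log-probabilities.
    sampled, log_probs = model.sample(images)          # log_probs: (B, T)
    with torch.no_grad():
        baseline = model.greedy_decode(images)         # test-time inference as baseline
        r_sample = cider_reward(sampled, references)   # (B,) rewards of sampled captions
        r_greedy = cider_reward(baseline, references)  # (B,) rewards of greedy captions
    # REINFORCE with the self-critical baseline: captions that beat the model's
    # own greedy decoding receive a positive advantage, the rest are pushed down.
    advantage = (r_sample - r_greedy).unsqueeze(1)     # (B, 1)
    return -(advantage * log_probs).mean()
```

Because the baseline is the model's own inference output, no learned critic is needed, which is one reason this training strategy is easy to combine with most of the encoder-decoder variants listed in the table.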
Moreover, in Fig. 18, we display five examples of image captioning results obtained from popular approaches representing the visual representation modes, training strategies, and attention mechanisms discussed in Sections III-A, III-B, and III-C, respectively. The generated captions come from the NIC [26] model with global grid features, SCST [28] with the reinforcement learning training strategy, Up-Down [29] with visual region features, CGVRG [201] with graph representation, and the Transformer with self-attention, together with the corresponding ground truth sentences. Compared with the ground truth, we highlight novel descriptions of new objects (in green), attributes (in blue), and relations (in orange), which provides an intuitive illustration of the different kinds of image captioning methods; erroneous descriptions of the given image are highlighted in red. These highlighted words show that the CGVRG and Transformer models generate better captions with detailed attributes and relationships, such as “blue and white”, “with”, and “on the tail” in the first image of Fig. 18, and more examples can be found in the other images. The result is consistent with the preceding quantitative analysis: more fine-grained and structured visual information and more diverse visual relationships contribute to better captions. There are also several obvious errors marked in red, such as “a market” in the second image and “dog” in the fifth image, which indicates that significant challenges remain in visual content understanding and cross-modal semantic alignment.
Automatic image captioning has attracted increasing attention in recent years. It has made significant progress due to the development of the encoder-decoder framework, attention mechanism, and different training strategies. However, there is still room for further improvement.
Current captioning approaches usually describe images with black-box architectures that can caption an image either briefly or in detail. However, their behavior is not clearly explainable and they lack descriptive flexibility, which leaves a gap between human and machine intelligence. Since an image can be described in various ways depending on the goal and the context, a higher degree of flexibility is needed for captioning complex scenarios. Although some progress has been made in controllable captioning [210], [211] and editable captioning [212], the length and diversity of the generated sentences are still restricted by the annotations. More fine-grained visual information, more potential visual relationships, and more commonsense language priors will help improve sentence flexibility. Flexible captioning is therefore a topic worth further investigation in the future.
Most existing image captioning models rely heavily on paired image-caption datasets, so learning a captioning model from unpaired data is a challenging and essential task. Only a small number of attempts have been made in unpaired image captioning, such as novel object captioning [151], [161] and dense captioning [213], [214]. Owing to the significantly different characteristics of the two modalities, unpaired image-to-sentence translation is more challenging and far from mature. Unsupervised learning techniques can reduce the dependence of models on large paired datasets and offer an effective route to unpaired image captioning. Language pivoting, which captures the characteristics of a pivot language and aligns them with the target language, is another direction worth pursuing.
Different users have different caption requirements in different situations; for example, people want to post more personalized, emotional, and diversified sentences on social apps. User-oriented captioning, such as diverse captioning [172], personality captioning [215], and topic captioning, which can meet the needs of different users in different scenarios, is also a direction worthy of further research. Specific datasets and model improvements can be proposed to meet these needs. Furthermore, incorporating external knowledge and commonsense reasoning is also promising, as it can help models generate captions with more stylized information and strengthen knowledge reasoning.
Most current image captioning research focuses on single-sentence captions, but the descriptive capacity of this form is limited; a picture may contain rich information worth a thousand words. To depict an image more completely, paragraph captioning [71], [216] is considered a feasible alternative, generating a paragraph of multiple sentences that describes the given image. However, existing paragraph captioning models mostly generate multiple sentences around several topics without considering the semantic coherence between them. How to leverage more fine-grained, relevant visual features and prior knowledge to generate a truly coherent paragraph corresponding to the image remains a challenging and interesting task.
Existing encoder-decoder models use autoregressive decoding, generating captions token by token. This may cause sequential error accumulation and slow generation, which limits practical applications. Inspired by machine translation, non-autoregressive (NA) decoding [217], [218] has been proposed to address these issues by predicting the output tokens in parallel and thus speeding up inference. Preliminary NA models suffer from degraded language quality because they model the target distribution only indirectly and ignore sentence-level consistency. NA captioning models therefore need further improvement to generate fluent descriptions while keeping word prediction fast.
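As a rough illustration of the difference, the sketch below contrasts the token-by-token loop of autoregressive decoding with a one-shot non-autoregressive prediction. Here `decoder` is a hypothetical module, not tied to any cited model, that is assumed to return a logits tensor of shape (1, length, vocab_size) in both cases.

```python
import torch


def autoregressive_decode(decoder, visual_feats, max_len, bos_id):
    # max_len sequential forward passes: each token depends on all previously
    # generated ones, so errors can accumulate and latency grows with length.
    tokens = [bos_id]
    for _ in range(max_len):
        logits = decoder(visual_feats, torch.tensor([tokens]))
        tokens.append(int(logits[0, -1].argmax()))
    return tokens[1:]


def non_autoregressive_decode(decoder, visual_feats, max_len):
    # A single forward pass over placeholder inputs predicts every position in
    # parallel, which is fast but models word dependencies only indirectly,
    # hence the quality gap discussed above.
    placeholder = torch.zeros(1, max_len, dtype=torch.long)
    logits = decoder(visual_feats, placeholder)
    return logits.argmax(dim=-1)[0].tolist()
```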
Image captioning cannot be separated from its datasets, and existing datasets are not sufficient to fully support the extended directions above. Therefore, more specialized datasets, such as novel object captioning, fashion captioning, and multilingual captioning datasets, need to be developed. Similarly, evaluation metrics should not be limited to similarity-based accuracy. Although some diversity metrics and semantic-level metrics have been proposed, evaluating open-ended, multi-caption outputs remains a challenging problem.
In this paper, we review the development of image captioning and related issues, including datasets and evaluation metrics. Firstly, we briefly introduce traditional retrieval-based and template-based methods and their improvements. Secondly, recent deep learning image captioning models are discussed in detail, especially the encoder-decoder framework, attention mechanisms, and training strategies. After that, we classify and summarize the datasets and evaluation metrics for image captioning, and compare existing methods on the MS COCO benchmark using the standard evaluation metrics. Although these deep learning models have achieved significant progress, there is still room for improvement, so we finally give a detailed discussion of potential future research directions in image captioning. Image captioning has been widely used in intelligent information transmission, smart homes, smart education, and other fields. This makes it an important research direction in deep learning and artificial intelligence, one that will have an increasing impact on our daily lives.
[1] |
S. P. Manay, S. A. Yaligar, Y. Thathva Sri Sai Reddy, and N. J. Saunshimath, “Image captioning for the visually impaired,” in Proc. Emerging Research in Computing, Information, Communication and Applications, Springer, 2022, pp. 511−522.
|
[2] |
R. Hinami, Y. Matsui, and S. Satoh, “Region-based image retrieval revisited,” in Proc. 25th ACM Int. Conf. Multimedia, 2017, pp. 528−536.
|
[3] |
E. Hand and R. Chellappa, “Attributes for improved attributes: A multi-task network utilizing implicit and explicit relationships for facial attribute classification,” in Proc. AAAI Conf. Artificial Intelligence, 2017, vol. 31, no. 1, pp. 4068−4074.
|
[4] |
X. Cheng, J. Lu, J. Feng, B. Yuan, and J. Zhou, “Scene recognition with objectness,” Pattern Recognition, vol. 74, pp. 474–487, 2018. doi: 10.1016/j.patcog.2017.09.025
|
[5] |
Z. Meng, L. Yu, N. Zhang, T. L. Berg, B. Damavandi, V. Singh, and A. Bearman, “Connecting what to say with where to look by modeling human attention traces,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2021, pp. 12679−12688.
|
[6] |
L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu, and M. Pietikäinen, “Deep learning for generic object detection: A survey,” Int. J. Computer Vision, vol. 128, no. 2, pp. 261–318, 2020. doi: 10.1007/s11263-019-01247-4
|
[7] |
D. Gurari, Y. Zhao, M. Zhang, and N. Bhattacharya, “Captioning images taken by people who are blind,” in Proc. European Conf. Computer Vision, Springer, 2020, pp. 417−434.
|
[8] |
A. Kojima, T. Tamura, and K. Fukunaga, “Natural language description of human activities from video images based on concept hierarchy of actions,” Int. J. Computer Vision, vol. 50, no. 2, pp. 171–184, 2002. doi: 10.1023/A:1020346032608
|
[9] |
P. Hède, P.-A. Moëllic, J. Bourgeoys, M. Joint, and C. Thomas, “Automatic generation of natural language description for images.” in Proc. RIAO, Citeseer, 2004, pp. 306−313.
|
[10] |
S. Li, G. Kulkarni, T. Berg, A. Berg, and Y. Choi, “Composing simple image descriptions using web-scale n-grams,” in Proc. 15th Conf. Computational Natural Language Learning, 2011, pp. 220−228.
|
[11] |
S. P. Liu, Y. T. Xian, H. F. Li, and Z. T. Yu, “Text detection in natural scene images using morphological component analysis and Laplacian dictionary,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 1, pp. 214–222, Jan. 2020.
|
[12] |
A. Tran, A. Mathews, and L. Xie, “Transform and tell: Entity-aware news image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2020, pp. 13035−13045.
|
[13] |
P. Kuznetsova, V. Ordonez, T. L. Berg, and Y. Choi, “Treetalk: Composition and compression of trees for image descriptions,” Trans. Association for Computational Linguistics, vol. 2, pp. 351–362, 2014. doi: 10.1162/tacl_a_00188
|
[14] |
M. Hodosh, P. Young, and J. Hockenmaier, “Framing image description as a ranking task: Data, models and evaluation metrics,” J. Artificial Intelligence Research, vol. 47, pp. 853–899, 2013. doi: 10.1613/jair.3994
|
[15] |
G. Kulkarni, V. Premraj, V. Ordonez, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg, “Babytalk: Understanding and generating simple image descriptions,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 35, no. 12, pp. 2891–2903, 2013. doi: 10.1109/TPAMI.2012.162
|
[16] |
M. Mitchell, J. Dodge, A. Goyal, K. Yamaguchi, K. Stratos, X. Han, A. Mensch, A.-d. Berg, T. Berg, and H. Daumé III, “Midge: Generating image descriptions from computer vision detec-tions,” in Proc. 13th Conf. European Chapter Association for Computational Linguistics, 2012, pp. 747−756.
|
[17] |
Y. Yang, C. Teo, H. Daumé III, and Y. Aloimonos, “Corpus-guided sentence generation of natural images,” in Proc. Conf. Empirical Methods in Natural Language Processing, 2011, pp. 444−454.
|
[18] |
W. N. H. W. Mohamed, M. N. M. Salleh, and A.-d. H. Omar, “A comparative study of reduced error pruning method in decision tree algorithms,” in Proc. IEEE Int. Conf. Control System, Computing and Engineering, 2012, pp. 392−397.
|
[19] |
W. Liu, Z. Wang, Y. Yuan, N. Zeng, K. Hone, and X. Liu, “A novel sigmoid-function-based adaptive weighted particle swarm optimizer,” IEEE Trans. Cybernetics, vol. 51, no. 2, pp. 1085–1093, 2019. doi: 10.1109/TCYB.2019.2925015
|
[20] |
S. Harford, F. Karim, and H. Darabi, “Generating adversarial samples on multivariate time series using variational autoencoders,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 9, pp. 1523–1538, Sept. 2021.
|
[21] |
M. S. Sarafraz and M. S. Tavazoei, “A unified optimization-based framework to adjust consensus convergence rate and optimize the network topology in uncertain multi-agent systems,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 9, pp. 1539–1539, Sept. 2021.
|
[22] |
Y. R. Wang, S. C. Gao, M. C. Zhou, and Y. Yu, “A multi-layered gravitational search algorithm for function optimization and real-world problems,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 1, pp. 94–109, Jan. 2021.
|
[23] |
K. H. Liu, Z. H. Ye, H. Y. Guo, D. P. Cao, L. Chen, and F.-Y. Wang, “FISS GAN: A generative adversarial network for foggy image semantic segmentation,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 8, pp. 1428–1439, Aug. 2021.
|
[24] |
Y. Ming, X. Meng, C. Fan, and H. Yu, “Deep learning for monocular depth estimation: A review,” Neurocomputing, vol. 438, no. 28, pp. 14–33, 2021.
|
[25] |
K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio, “Show, attend and tell: Neural image caption generation with visual attention,” in Proc. Int. Conf. Machine Learning, 2015, pp. 2048−2057.
|
[26] |
O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: Lessons learned from the 2015 MS COCO image captioning challenge,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 652–663, 2016.
|
[27] |
J. Lu, C. Xiong, D. Parikh, and R. Socher, “Knowing when to look: Adaptive attention via a visual sentinel for image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2017, pp. 375−383.
|
[28] |
S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel, “Self-critical sequence training for image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2017, pp. 7008−7024.
|
[29] |
P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2018, pp. 6077−6086.
|
[30] |
W. Liu, S. Chen, L. Guo, X. Zhu, and J. Liu, “CPTR: Full transformer network for image captioning,” arXiv preprint arXiv: 2101.10804, 2021.
|
[31] |
R. Kiros, R. Salakhutdinov, and R. Zemel, “Multimodal neural language models,” in Proc. Int. Conf. Machine Learning, 2014, pp. 595−603.
|
[32] |
A. Karpathy and L. Fei-Fei, “Deep visual-semantic alignments for generating image descriptions,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2015, pp. 3128−3137.
|
[33] |
L. Wu, M. Xu, J. Wang, and S. Perry, “Recall what you see continually using grid LSTM in image captioning,” IEEE Trans. Multimedia, vol. 22, no. 3, pp. 808–818, 2019.
|
[34] |
K. Lin, Z. Gan, and L. Wang, “Augmented partial mutual learning with frame masking for video captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2021, vol. 35, no. 3, pp. 2047−2055.
|
[35] |
T. Yao, Y. Pan, Y. Li, and T. Mei, “Exploring visual relationship for image captioning,” in Proc. European Conf. Computer Vision, 2018, pp. 684−699.
|
[36] |
L. Guo, J. Liu, J. Tang, J. Li, W. Luo, and H. Lu, “Aligning linguistic words and visual semantic units for image captioning,” in Proc. 27th ACM Int. Conf. Multimedia, 2019, pp. 765−773.
|
[37] |
X. Yang, H. Zhang, and J. Cai, “Auto-encoding and distilling scene graphs for image captioning,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 44, no. 5, pp. 2313–2327, 2020. doi: 10.1109/TPAMI.2020.3042192
|
[38] |
X. S. Li, Y. T. Liu, K. F. Wang, and F.-Y. Wang, “A recurrent attention and interaction model for pedestrian trajectory prediction,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 5, pp. 1361–1370, Sept. 2020.
|
[39] |
P. Liu, Y. Zhou, D. Peng, and D. Wu, “Global-attention-based neural networks for vision language intelligence,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 7, pp. 1243–1252, 2020.
|
[40] |
T. L. Zhou, M. Chen, and J. Zou, “Reinforcement learning based data fusion method for multi-sensors,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 6, pp. 1489–1497, Nov. 2020.
|
[41] |
P. H. Seo, P. Sharma, T. Levinboim, B. Han, and R. Soricut, “Reinforcing an image caption generator using off-line human feedback,” in Proc. AAAI Conf. Artificial Intelligence, 2020, vol. 34, no. 3, pp. 2693−2700.
|
[42] |
J. Yu, J. Li, Z. Yu, and Q. Huang, “Multimodal transformer with multi-view visual representation for image captioning,” IEEE Trans. Circuits and Systems for Video Technology, vol. 30, no. 12, pp. 4467–4480, 2020. doi: 10.1109/TCSVT.2019.2947482
|
[43] |
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv: 1810.04805, 2018.
|
[44] |
P. Zhang, X. Li, X. Hu, J. Yang, L. Zhang, L. Wang, Y. Choi, and J. Gao, “VINVL: Revisiting visual representations in vision-language models,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2021, pp. 5579−5588.
|
[45] |
A. Farhadi, M. Hejrati, M. A. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. Forsyth, “Every picture tells a story: Generating sentences from images,” in Proc. European Conf. Computer Vision, Springer, 2010, pp. 15−29.
|
[46] |
V. Ordonez, G. Kulkarni, and T. Berg, “Im2text: Describing images using 1 million captioned photographs,” Advances in Neural Information Processing Systems, vol. 24, pp. 1143–1151, 2011.
|
[47] |
R. Socher, A. Karpathy, Q. V. Le, C. D. Manning, and A. Y. Ng, “Grounded compositional semantics for finding and describing images with sentences,” Trans. Association Computational Linguistics, vol. 2, pp. 207–218, 2014. doi: 10.1162/tacl_a_00177
|
[48] |
R. Mason and E. Charniak, “Nonparametric method for data-driven image captioning,” in Proc. 52nd Annual Meeting Association for Computational Linguistics, 2014, vol. 2, pp. 592−598.
|
[49] |
C. Sun, C. Gan, and R. Nevatia, “Automatic concept discovery from parallel text and visual corpora,” in Proc. IEEE Int. Conf. Computer Vision, 2015, pp. 2596−2604.
|
[50] |
A. Gupta, Y. Verma, and C. Jawahar, “Choosing linguistics over vision to describe images,” in Proc. AAAI Conf. Artificial Intelligence, 2012, vol. 26, no. 1.
|
[51] |
J. Devlin, S. Gupta, R. Girshick, M. Mitchell, and C. L. Zitnick, “Exploring nearest neighbor approaches for image captioning,” arXiv preprint arXiv: 1505.04467, 2015.
|
[52] |
R. Xu, C. Xiong, W. Chen, and J. Corso, “Jointly modeling deep video and compositional text to bridge vision and language in a unified framework,” in Proc. AAAI Conf. Artificial Intelligence, 2015, vol. 29, no. 1.
|
[53] |
R. Lebret, P. Pinheiro, and R. Collobert, “Phrase-based image captioning,” in Proc. Int. Conf. Machine Learning, 2015, pp. 2085−2094.
|
[54] |
N. Krishnamoorthy, G. Malkarnenkar, R. Mooney, K. Saenko, and S. Guadarrama, “Generating natural-language video descriptions using text-mined knowledge,” in Proc. AAAI Conf. Artificial Intelligence, 2013, vol. 27, no. 1.
|
[55] |
I. U. Rahman, Z. Wang, W. Liu, B. Ye, M. Zakarya, and X. Liu, “An n-state markovian jumping particle swarm optimization algorithm,” IEEE Trans. Systems,Man,and Cybernetics: Systems, vol. 51, no. 11, pp. 6626–6638, 2020. doi: 10.1109/TSMC.2019.2958550
|
[56] |
Y. Ushiku, M. Yamaguchi, Y. Mukuta, and T. Harada, “Common subspace for model and similarity: Phrase learning for caption generation from images,” in Proc. IEEE Int. Conf. Computer Vision, 2015, pp. 2668−2676.
|
[57] |
M. Muzahid, W. G. Wan, F. Sohel, L. Y. Wu, and L. Hou, “CurveNet: Curvature-based multitask learning deep networks for 3D object recognition,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 6, pp. 1177–1187, Jun. 2021.
|
[58] |
Q. Wu, C. Shen, L. Liu, A. Dick, and A. Van Den Hengel, “What value do explicit high level concepts have in vision to language problems?” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2016, pp. 203−212.
|
[59] |
J. Gu, S. Joty, J. Cai, and G. Wang, “Unpaired image captioning by language pivoting,” in Proc. European Conf. Computer Vision, 2018, pp. 503−519.
|
[60] |
J. Gamper and N. Rajpoot, “Multiple instance captioning: Learning representations from histo-pathology textbooks and articles,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2021, pp. 16549−16559.
|
[61] |
Y. Song, S. Chen, Y. Zhao, and Q. Jin, “Unpaired cross-lingual image caption generation with self-supervised rewards,” in Proc. 27th ACM Int. Conf. Multi-Media, 2019, pp. 784−792.
|
[62] |
J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2015, pp. 2625−2634.
|
[63] |
X. Zhang, X. Sun, Y. Luo, J. Ji, Y. Zhou, Y. Wu, F. Huang, and R. Ji, “Rstnet: Captioning with adaptive attention on visual and non-visual words,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2021, pp. 15465−15474.
|
[64] |
J. Mao, J. Huang, A. Toshev, O. Camburu, A. L. Yuille, and K. Murphy, “Generation and comprehension of unambiguous object descriptions,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2016, pp. 11−20.
|
[65] |
H. Fang, S. Gupta, F. Iandola, R. K. Srivastava, L. Deng, P. Dollár, J. Gao, X.-D. He, M. Mitchell, J. C. Platt, C. L. Zitnick, and G. Zweig, “From captions to visual concepts and back,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2015, pp. 1473−1482.
|
[66] |
P. Anderson, S. Gould, and M. Johnson, “Partially-supervised image captioning,” arXiv preprint arXiv: 1806.06004, 2018.
|
[67] |
R. Girshick, “Fast R-CNN,” in Proc. IEEE Int. Conf. Computer Vision, 2015, pp. 1440−1448.
|
[68] |
S. Datta, K. Sikka, A. Roy, K. Ahuja, D. Parikh, and A. Divakaran, “Align2Ground: Weakly supervised phrase grounding guided by image-caption alignment,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2019, pp. 2601−2610.
|
[69] |
X. Chen, M. Zhang, Z. Wang, L. Zuo, B. Li, and Y. Yang, “Leveraging unpaired out-of-domain data for image captioning,” Pattern Recognition Letters, vol. 132, pp. 132–140, 2020. doi: 10.1016/j.patrec.2018.12.018
|
[70] |
D.-J. Kim, T.-H. Oh, J. Choi, and I. S. Kweon, “Dense relational image captioning via multi-task triple-stream networks,” arXiv preprint arXiv: 2010.03855, 2020.
|
[71] |
L.-C. Yang, C.-Y. Yang, and J. Y.-j. Hsu, “Object relation attention for image paragraph captioning,” in Proc. AAAI Conf. Artificial Intelligence, vol. 35, no. 4, 2021, pp. 3136−3144.
|
[72] |
X. B. Hong, T. Zhang, Z. Cui, and J. Yang, “Variational gridded graph convolution network for node classification,” IEEE/CAA J. Autom.Sinica, vol. 8, no. 10, pp. 1697–1708, Oct. 2021.
|
[73] |
X. Liu, M. Yan, L. Deng, G. Li, X. Ye, and D. Fan, “Sampling methods for efficient training of graph convolutional networks: A survey,” IEEE/CAA J. Autom. Sinica, vol. 2, no. 9, pp. 205–234, 2022.
|
[74] |
X. Yang, K. Tang, H. Zhang, and J. Cai, “Auto-encoding scene graphs for image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 10685−10694.
|
[75] |
X. Yang, H. Zhang, and J. Cai, “Learning to collocate neural modules for image captioning,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2019, pp. 4250−4260.
|
[76] |
J. Gu, S. Joty, J. Cai, H. Zhao, X. Yang, and G. Wang, “Unpaired image captioning via scene graph alignments,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2019, pp. 10323−10332.
|
[77] |
R. Zellers, M. Yatskar, S. Thomson, and Y. Choi, “Neural motifs: Scene graph parsing with global context,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2018, pp. 5831−5840.
|
[78] |
V. S. Chen, P. Varma, R. Krishna, M. Bernstein, C. Re, and L. Fei-Fei, “Scene graph prediction with limited labels,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2019, pp. 2580−2590.
|
[79] |
Y. Zhong, L. Wang, J. Chen, D. Yu, and Y. Li, “Comprehensive image captioning via scene graph decomposition,” in Proc. European Conf. Computer Vision, Springer, 2020, pp. 211−229.
|
[80] |
S. Chen, Q. Jin, P. Wang, and Q. Wu, “Say as you wish: Fine-grained control of image caption generation with abstract scene graphs,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2020, pp. 9962−9971.
|
[81] |
W. Zhang, H. Shi, S. Tang, J. Xiao, Q. Yu, and Y. Zhuang, “Consensus graph representation learning for better grounded image captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2021, pp.3394–3402.
|
[82] |
S. Tripathi, K. Nguyen, T. Guha, B. Du, and T. Q. Nguyen, “Sg2caps: Revisiting scene graphs for image captioning,” arXiv preprint arXiv: 2102.04990, 2021.
|
[83] |
D. Wang, D. Beck, and T. Cohn, “On the role of scene graphs in image captioning,” in Proc. Beyond Vision and Language: Integrating Real-World Knowledge, 2019, pp. 29−34.
|
[84] |
V. S. J. Milewski, M. F. Moens, and I. Calixto, “Are scene graphs good enough to improve image captioning?” in Proc. 1st Conf. Asia-Pacific Chapter of Association for Computational Linguistics and 10th Int. Joint Conf. Natural Language Processing, 2020, pp. 504−515.
|
[85] |
J. Mao, X. Wei, Y. Yang, J. Wang, Z. Huang, and A. L. Yuille, “Learning like a child: Fast novel visual concept learning from sentence descriptions of images,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2015, pp. 2533−2541.
|
[86] |
L. Wang, A. G. Schwing, and S. Lazebnik, “Diverse and accurate image description using a variational auto-encoder with an additive gaussian encoding space,” arXiv preprint arXiv: 1711.07068, 2017.
|
[87] |
M. Wang, L. Song, X. Yang, and C. Luo, “A parallel-fusion RNN-LSTM architecture for image caption generation,” in IEEE Int. Conf. Image Processing, IEEE, 2016, pp. 4448−4452.
|
[88] |
W. Jiang, L. Ma, X. Chen, H. Zhang, and W. Liu, “Learning to guide decoding for image captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2018, pp. 6959−6966.
|
[89] |
Y. Xian and Y. Tian, “Self-guiding multimodal LSTM—When we do not have a perfect training dataset for image captioning?” IEEE Trans. Image Processing, vol. 28, no. 11, pp. 5241–5252, 2019. doi: 10.1109/TIP.2019.2917229
|
[90] |
X. Jia, E. Gavves, B. Fernando, and T. Tuytelaars, “Guiding the long-short term memory model for image caption generation,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2015, pp. 2407−2415.
|
[91] |
J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. Yuille, “Deep captioning with multimodal recurrent neural networks (m-RNN),” arXiv preprint arXiv: 1412.6632, 2014.
|
[92] |
C. Wang, H. Yang, C. Bartz, and C. Meinel, “Image captioning with deep bidirectional LSTMs,” in Proc. 24th ACM International Conf. Multimedia, 2016, pp. 988−997.
|
[93] |
Y. Zheng, Y. Li, and S. Wang, “Intention oriented image captions with guiding objects,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 8395−8404.
|
[94] |
I. Laina, C. Rupprecht, and N. Navab, “Towards unsupervised image captioning with shared multimodal embeddings,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2019, pp. 7414−7424.
|
[95] |
W. J. Zhang, J. C. Wang, and F. P. Lan, “Dynamic hand gesture recognition based on short-term sampling neural networks,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 1, pp. 110–120, Jan. 2021.
|
[96] |
L. Liu, J. Tang, X. Wan, and Z. Guo, “Generating diverse and descriptive image captions using visual paraphrases,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2019, pp. 4240−4249.
|
[97] |
G. Yin, L. Sheng, B. Liu, N. Yu, X. Wang, and J. Shao, “Context and attribute grounded dense captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 6241−6250.
|
[98] |
J. Gu, J. Cai, G. Wang, and T. Chen, “Stack-captioning: Coarse-to-fine learning for image captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2018, vol. 32, no. 1, pp. 6837–6844.
|
[99] |
D.-J. Kim, J. Choi, T.-H. Oh, and I. S. Kweon, “Dense relational captioning: Triple-stream networks for relationship-based captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 6271−6280.
|
[100] |
Z. Song, X. Zhou, Z. Mao, and J. Tan, “Image captioning with context-aware auxiliary guidance,” in Proc. AAAI Conf. Artificial Intelligence, 2021, vol. 35, no. 3, pp. 2584−2592.
|
[101] |
X. D. Zhao, Y. R. Chen, J. Guo, and D. B. Zhao, “A spatial-temporal attention model for human trajectory prediction,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 4, pp. 965–974, Jul. 2020.
|
[102] |
L. Li, S. Tang, L. Deng, Y. Zhang, and Q. Tian, “Image caption with global-local attention,” in Proc. AAAI Conf. Artificial Intelligence, 2017, vol. 31, no. 1, pp. 4133−4239.
|
[103] |
C. Wu, Y. Wei, X. Chu, F. Su, and L. Wang, “Modeling visual and word-conditional semantic attention for image captioning,” Signal Processing: Image Communication, vol. 67, pp. 100–107, 2018. doi: 10.1016/j.image.2018.06.002
|
[104] |
Z. Zhang, Q. Wu, Y. Wang, and F. Chen, “Fine-grained and semantic-guided visual attention for image captioning,” in Proc. IEEE Winter Conf. Applications of Computer Vision, IEEE, 2018, pp. 1709−1717.
|
[105] |
P. Cao, Z. Yang, L. Sun, Y. Liang, M. Q. Yang, and R. Guan, “Image captioning with bidirectional semantic attention-based guiding of long short-term memory,” Neural Processing Letters, vol. 50, no. 1, pp. 103–119, 2019. doi: 10.1007/s11063-018-09973-5
|
[106] |
S. Wang, L. Lan, X. Zhang, G. Dong, and Z. Luo, “Object-aware semantics of attention for image captioning,” Multimedia Tools and Applications, vol. 79, no. 3, pp. 2013–2030, 2020.
|
[107] |
L. Chen, H. Zhang, J. Xiao, L. Nie, J. Shao, W. Liu, and T.-S. Chua, “SCA-CNN: Spatial and channel-wise attention in convolutional networks for image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2017, pp. 5659−5667.
|
[108] |
J. Zhou, X. Wang, J. Han, S. Hu, and H. Gao, “Spatial-temporal attention for image captioning,” in Proc. IEEE Fourth Int. Conf. Multimedia Big Data, 2018, pp. 1−5.
|
[109] |
J. Ji, C. Xu, X. Zhang, B. Wang, and X. Song, “Spatio-temporal memory attention for image captioning,” IEEE Trans. Image Processing, vol. 29, pp. 7615–7628, 2020. doi: 10.1109/TIP.2020.3004729
|
[110] |
J. Lu, J. Yang, D. Batra, and D. Parikh, “Neural baby talk,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2018, pp. 7219−7228.
|
[111] |
F. Xiao, X. Gong, Y. Zhang, Y. Shen, J. Li, and X. Gao, “DAA: Dual LSTMs with adaptive attention for image captioning,” Neurocomputing, vol. 364, pp. 322–329, 2019. doi: 10.1016/j.neucom.2019.06.085
|
[112] |
Z. Deng, Z. Jiang, R. Lan, W. Huang, and X. Luo, “Image captioning using DenseNet network and adaptive attention,” Signal Processing: Image Communication, vol. 85, p. 115836, 2020.
|
[113] |
C. Yan, Y. Hao, L. Li, J. Yin, A. Liu, Z. Mao, Z. Chen, and X. Gao, “Task-adaptive attention for image captioning,” IEEE Trans. Circuits and Systems for Video Technology, vol. 32, no. 1, pp. 43–51, 2021. doi: 10.1109/TCSVT.2021.3067449
|
[114] |
M. Cornia, L. Baraldi, G. Serra, and R. Cucchiara, “Paying more attention to saliency: Image captioning with saliency and context attention,” ACM Trans. Multimedia Computing,Communications,and Applications, vol. 14, no. 2, pp. 1–21, 2018.
|
[115] |
J. Wang, W. Wang, L. Wang, Z. Wang, D. D. Feng, and T. Tan, “Learning visual relationship and context-aware attention for image captioning,” Pattern Recognition, vol. 98, p. 107075, 2020.
|
[116] |
H. Chen, G. Ding, Z. Lin, Y. Guo, and J. Han, “Attend to knowledge: Memory-enhanced attention network for image captioning,” in Proc. Int. Conf. Brain Inspired Cognitive Systems, Springer, 2018, pp. 161−171.
|
[117] |
T. Wang, X. Xu, F. Shen, and Y. Yang, “A cognitive memory-augmented network for visual anomaly detection,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 7, pp. 1296–1307, Jul. 2021.
|
[118] |
C. Xu, M. Yang, X. Ao, Y. Shen, R. Xu, and J. Tian, “Retrieval-enhanced adversarial training with dynamic memory-augmented attention for image paragraph captioning,” Knowledge-Based Systems, vol. 214, p. 106730, 2021.
|
[119] |
Y. Cheng, F. Huang, L. Zhou, C. Jin, Y. Zhang, and T. Zhang, “A hierarchical multimodal attention-based neural network for image captioning,” in Proc. 40th Int. ACM SIGIR Conf. Research and Development in Information Retrieval, 2017, pp. 889−892.
|
[120] |
Q. Wang and A. B. Chan, “Gated hierarchical attention for image captioning,” in Proc. Asian Conf. Computer Vision, Springer, 2018, pp. 21−37.
|
[121] |
W. Wang, Z. Chen, and H. Hu, “Hierarchical attention network for image captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2019, vol. 33, no. 1, pp. 8957−8964.
|
[122] |
S. Yan, Y. Xie, F. Wu, J. S. Smith, W. Lu, and B. Zhang, “Image captioning via hierarchical attention mechanism and policy gradient optimization,” Signal Processing, vol. 167, p. 107329, 2020.
|
[123] |
L. Gao, K. Fan, J. Song, X. Liu, X. Xu, and H. T. Shen, “Deliberate attention networks for image captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2019, vol. 33, no. 1, pp. 8320−8327.
|
[124] |
Z. Zhang, Y. Wang, Q. Wu, and F. Chen, “Visual relationship attention for image captioning,” in Proc. IEEE Int. Joint Conf. Neural Networks, 2019, pp. 1−8.
|
[125] |
Z. Zhang, Q. Wu, Y. Wang, and F. Chen, “Exploring region relationships implicitly: Image captioning with visual relationship attention,” Image and Vision Computing, vol. 109, p. 104146, 2021.
|
[126] |
R. Del Chiaro, B. Twardowski, A. D. Bagdanov, and J. Van de Weijer, “Ratt: Recurrent attention to transient tasks for continual image captioning,” arXiv preprint arXiv: 2007.06271, 2020.
|
[127] |
Y. Li, X. Zhang, J. Gu, C. Li, X. Wang, X. Tang, and L. Jiao, “Recurrent attention and semantic gate for remote sensing image captioning,” IEEE Trans. Geoscience and Remote Sensing, vol.60, p. 5608816, 2021.
|
[128] |
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Proc. Advances in Neural Information Processing Systems, 2017, pp. 5998−6008.
|
[129] |
Y. Pan, T. Yao, Y. Li, and T. Mei, “X-linear attention networks for image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2020, pp. 10971−10980.
|
[130] |
J. Banzi, I. Bulugu, and Z. F. Ye, “Learning a deep predictive coding network for a semi-supervised 3D-hand pose estimation,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 5, pp. 1371–1379, Sept. 2020.
|
[131] |
X. Zhu, L. Li, J. Liu, H. Peng, and X. Niu, “Captioning transformer with stacked attention modules,” Applied Sciences, vol. 8, no. 5, p. 739, 2018.
|
[132] |
S. Herdade, A. Kappeler, K. Boakye, and J. Soares, “Image captioning: Transforming objects into words,” in Proc. Advances in Neural Information Processing Systems, 2019, pp. 11137−11147.
|
[133] |
L. Guo, J. Liu, X. Zhu, P. Yao, S. Lu, and H. Lu, “Normalized and geometry-aware self-attention network for image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2020, pp. 10327−10336.
|
[134] |
G. Li, L. Zhu, P. Liu, and Y. Yang, “Entangled transformer for image captioning,” in Proc. IEEE Int. Conf. Computer Vision, 2019, pp. 8928−8937.
|
[135] |
M. Cornia, M. Stefanini, L. Baraldi, and R. Cucchiara, “Meshed-memory transformer for image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2020, pp. 10578−10587.
|
[136] |
Y. Luo, J. Ji, X. Sun, L. Cao, Y. Wu, F. Huang, C. Lin, and R. Ji, “Dual-level collaborative transformer for image captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2021, pp. 1−8.
|
[137] |
C. Sundaramoorthy, L. Z. Kelvin, M. Sarin, and S. Gupta, “End-to-end attention-based image captioning,” arXiv preprint arXiv: 2104.14721, 2021.
|
[138] |
N. Zeng, H. Li, Z. Wang, W. Liu, S. Liu, F. E. Alsaadi, and X. Liu, “Deep-reinforcement-learning-based images segmentation for quantitative analysis of gold immunochromatographic strip,” Neurocomputing, vol. 425, pp. 173–180, 2021. doi: 10.1016/j.neucom.2020.04.001
|
[139] |
J. Ji, X. Sun, Y. Zhou, R. Ji, F. Chen, J. Liu, and Q. Tian, “Attacking image captioning towards accuracy-preserving target words removal,” in Proc. 28th ACM Int. Conf. Multimedia, 2020, pp. 4226−4234.
|
[140] |
T. Liu, B. Tian, Y. F. Ai, and F.-Y. Wang, “Parallel reinforcement learning-based energy efficiency improvement for a cyber-physical system,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 2, pp. 617–626, Mar. 2020.
|
[141] |
M. Ranzato, S. Chopra, M. Auli, and W. Zaremba, “Sequence level training with recurrent neural networks,” arXiv preprint arXiv: 1511.06732, 2015.
|
[142] |
R. Vedantam, C. Lawrence Zitnick, and D. Parikh, “Cider: Consensus-based image description evaluation,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2015, pp. 4566−4575.
|
[143] |
P. Anderson, B. Fernando, M. Johnson, and S. Gould, “Spice: Semantic propositional image caption evaluation,” in Proc. European Conf. Computer Vision, Springer, 2016, pp. 382−398.
|
[144] |
S. Liu, Z. Zhu, N. Ye, S. Guadarrama, and K. Murphy, “Improved image captioning via policy gradient optimization of spider,” in Proc. IEEE Int. Conf. Computer Vision, 2017, pp. 873−881.
|
[145] |
L. Zhang, F. Sung, L. Feng, T. Xiang, S. Gong, Y. Yang, and T. Hospedales, “Actor-critic sequence training for image captioning,” in Visually-Grounded Interaction and Language: NIPS 2017 Workshop, 2017, pp.1–10.
|
[146] |
Y. Lin, J. McPhee, and N. L. Azad, “Comparison of deep reinforcement learning and model predictive control for adaptive cruise control,” IEEE Trans. Intelligent Vehicles, vol. 6, no. 2, pp. 221–231, Jun. 2021. doi: 10.1109/TIV.2020.3012947
|
[147] |
C. Chen, S. Mu, W. Xiao, Z. Ye, L. Wu, and Q. Ju, “Improving image captioning with conditional generative adversarial nets,” in Proc. AAAI Conf. Artificial Intelligence, 2019, vol. 33, no. 1, pp. 8142−8150.
|
[148] |
X. Shi, X. Yang, J. Gu, S. Joty, and J. Cai, “Finding it at another side: A viewpoint-adapted matching encoder for change captioning,” in Proc. European Conf. Computer Vision, Springer, 2020, pp. 574−590.
|
[149] |
L. Zhou, H. Palangi, L. Zhang, H. Hu, J. Corso, and J. Gao, “Unified vision-language pre-training for image captioning and VQA,” in Proc. AAAI Conf. Artificial Intelligence, 2020, vol. 34, no. 7, pp. 13041−13049.
|
[150] |
X. Li, X. Yin, C. Li, P. Zhang, X. Hu, L. Zhang, L. Wang, H. Hu, L. Dong, F. Wei, et al., “Oscar: Object-semantics aligned pre-training for vision-language tasks,” in Proc. European Conf. Computer Vision, Springer, 2020, pp. 121−137.
|
[151] |
X. Hu, X. Yin, K. Lin, L. Zhang, J. Gao, L. Wang, and Z. Liu, “Vivo: Visual vocabulary pre-training for novel object captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2021, vol. 35, no. 2, pp. 1575−1583.
|
[152] |
T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in Proc. European Conf. Computer Vision, Springer, 2014, pp. 740−755.
|
[153] |
P. Young, A. Lai, M. Hodosh, and J. Hockenmaier, “From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions,” Trans. Association Computational Linguistics, vol. 2, pp. 67–78, 2014. doi: 10.1162/tacl_a_00166
|
[154] |
B. A. Plummer, L. Wang, C. M. Cervantes, J. C. Caicedo, J. Hockenmaier, and S. Lazebnik, “Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models,” in Proc. IEEE Int. Conf. Computer Vision, 2015, pp. 2641−2649.
|
[155] |
M. Everingham, A. Zisserman, C. K. Williams, et al., “The 2005 pascal visual object classes challenge,” in Proc. Machine Learning Challenges Workshop, Springer, 2005, pp. 117−176.
|
[156] |
B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li, “YFCC100m: The new data in multimedia research,” Commun. ACM, vol. 59, no. 2, pp. 64–73, 2016. doi: 10.1145/2812802
|
[157] |
D. Elliott, S. Frank, K. Sima’an, and L. Specia, “Multi30k: Multilingual english-german image descriptions,” in Proc. 5th Workshop on Vision and Language, 2016, pp. 70−74.
|
[158] |
J. Wu, H. Zheng, B. Zhao, Y. Li, B. Yan, R. Liang, W. Wang, S. Zhou, G. Lin, Y. Fu, Y. Z. Wang, and Y. G. Wang, “AI challenger: A large-scale dataset for going deeper in image understanding,” arXiv preprint arXiv: 1711.06475, 2017.
|
[159] |
M. Grubinger, P. Clough, H. Müller, and T. Deselaers, “The iapr TC-12 benchmark: A new evaluation resource for visual information systems,” in Proc. Int. Workshop OntoImage, 2006, vol. 2, pp.13–23.
|
[160] |
A. F. Biten, L. Gomez, M. Rusinol, and D. Karatzas, “Good news, everyone! Context driven entity-aware captioning for news images,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 12466−12475.
|
[161] |
H. Agrawal, K. Desai, Y. Wang, X. Chen, R. Jain, M. Johnson, D. Batra, D. Parikh, S. Lee, and P. Anderson, “NOCAPS: Novel object captioning at scale,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2019, pp. 8948−8957.
|
[162] |
X. Yang, H. Zhang, D. Jin, Y. Liu, C.-H. Wu, J. Tan, D. Xie, J. Wang, and X. Wang, “Fashion captioning: Towards generating accurate descriptions with semantic rewards,” in Proc. Computer Vision-ECCV, Springer, 2020, pp. 1−17.
|
[163] |
O. Sidorov, R. Hu, M. Rohrbach, and A. Singh, “Textcaps: A dataset for image captioning with reading comprehension,” in Proc. European Conf. Computer Vision, Springer, 2020, pp. 742−758.
|
[164] |
R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, S. B. Michael, and F.-F. Li, “Visual genome: Connecting language and vision using crowdsourced dense image annotations,” Int. J. Computer Vision, vol. 123, no. 1, pp. 32–73, 2017. doi: 10.1007/s11263-016-0981-7
|
[165] |
I. Krasin, T. Duerig, N. Alldrin, V. Ferrari, S. Abu-El-Haija, A. Kuznetsova, H. Rom, J. Uijlings, S. Popov, A. Veit, S. Abu-El-Haija, S. Belongie, C. University, D. Cai, Z. Y. Feng, V. Ferrari, and V. Gomes, “Openimages: A public dataset for large-scale multi-label and multi-class image classification,” Dataset available from https://github.com/openimages, vol. 2, no. 3, p. 18, 2017.
|
[166] |
K. Papineni, S. Roukos, T. Ward, and W. Zhu, “BLEU: A method for automatic evaluation of machine translation,” in Proc. 40th Annual Meeting Association for Computational Linguistics, 2002, pp. 311−318.
|
[167] |
S. Banerjee and A. Lavie, “Meteor: An automatic metric for mt evaluation with improved correlation with human judgments,” in Proc. ACL Workshop Intrinsic and Extrinsic Evaluation Measures for Machine Trans. and/or Summarization, 2005, vol. 29, pp. 65−72.
|
[168] |
C. Lin and F. J. Och, “Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics,” in Proc. 42nd Annual Meeting of Association for Computational Linguistics, 2004, pp. 605−612.
|
[169] |
A. Deshpande, J. Aneja, L. Wang, A. G. Schwing, and D. Forsyth, “Fast, diverse and accurate image captioning guided by part-of-speech,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 10695−10704.
|
[170] |
J. Aneja, H. Agrawal, D. Batra, and A. Schwing, “Sequential latent spaces for modeling the intention during diverse image captioning,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2019, pp. 4261−4270.
|
[171] |
Q. Wang and A. B. Chan, “Describing like humans: On diversity in image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 4195−4203.
|
[172] |
Q. Wang, J. Wan, and A. B. Chan, “On diversity in image captioning: Metrics and methods,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 44, no. 2, pp. 1035–1049, 2020. doi: 10.1109/TPAMI.2020.3013834
|
[173] |
M. Jiang, Q. Huang, L. Zhang, X. Wang, P. Zhang, Z. Gan, J. Diesner, and J. Gao, “Tiger: Text-to-image grounding for image caption evaluation,” in Proc. Conf. Empirical Methods in Natural Language Processing and 9th Int. Joint Conf. Natural Language Processing, 2020, pp. 2141−2152.
|
[174] |
S. Wang, Z. Yao, R. Wang, Z. Wu, and X. Chen, “Faier: Fidelity and adequacy ensured image caption evaluation,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2021, pp. 14050−14059.
|
[175] |
T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, “Bertscore: Evaluating text generation with bert,” in Proc. Int. Conf. Learning Representations, 2019, pp. 1−43.
|
[176] |
H. Lee, S. Yoon, F. Dernoncourt, D. S. Kim, T. Bui, and K. Jung, “Vilbertscore: Evaluating image caption using vision-and-language bert,” in Proc. 1st Workshop Evaluation and Comparison NLP Systems, 2020, pp. 34−39.
|
[177] |
J. Wang, W. Xu, Q. Wang, and A. B. Chan, “Compare and reweight: Distinctive image captioning using similar images sets,” in Proc. European Conf. Computer Vision, Springer, 2020, pp. 370−386.
|
[178] |
J. Hessel, A. Holtzman, M. Forbes, R. L. Bras, and Y. Choi, “Clipscore: A reference-free evaluation metric for image captioning,” arXiv preprint arXiv: 2104.08718, 2021.
|
[179] |
A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever, “Learning transferable visual models from natural language supervision,” in Proc. Int. Conf. Machine Learning, 2021, pp. 8748−8763.
|
[180] |
J. Feinglass and Y. Yang, “Smurf: Semantic and linguistic understanding fusion for caption evaluation via typicality analysis,” arXiv preprint arXiv: 2106.01444, 2021.
|
[181] |
H. Lee, S. Yoon, F. Dernoncourt, T. Bui, and K. Jung, “Umic: An unreferenced metric for image captioning via contrastive learning,” in Proc. 59th Annual Meeting of the Association for Computational Linguistics and 11th Int. Joint Conf. Natural Language Processing, 2021, pp. 220−226.
|
[182] |
J. Mao, W. Xu, Y. Yang, J. Wang, and A. L. Yuille, “Explain images with multimodal recurrent neural networks,” arXiv preprint arXiv: 1410.1090, 2014.
|
[183] |
A. Karpathy, A. Joulin, and L. Fei-Fei, “Deep fragment embeddings for bidirectional image sentence mapping,” in Proc. 27th Int. Conf. Neural Information Processing Systems, 2014, vol. 2, pp. 1889−1897.
|
[184] |
X. Chen and C. L. Zitnick, “Mind’s eye: A recurrent visual representation for image caption generation,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2015, pp. 2422−2431.
|
[185] |
O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: A neural image caption generator,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2015, pp. 3156−3164.
|
[186] |
K. Tran, X. He, L. Zhang, J. Sun, C. Carapcea, C. Thrasher, C. Buehler, and C. Sienkiewicz, “Rich image captioning in the wild,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2016, pp. 49−56.
|
[187] |
L. A. Hendricks, S. Venugopalan, M. Rohrbach, R. Mooney, K. Saenko, and T. Darrell, “Deep compositional captioning: Describing novel object categories without paired training data,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2016, pp. 1−10.
|
[188] |
L. Yang, K. Tang, J. Yang, and L.-J. Li, “Dense captioning with joint inference and visual context,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2017, pp. 2193−2202.
|
[189] |
J. Gu, G. Wang, J. Cai, and T. Chen, “An empirical study of language cnn for image captioning,” in Proc. IEEE Int. Conf. Computer Vision, 2017, pp. 1222−1231.
|
[190] |
Q. Wu, C. Shen, P. Wang, A. Dick, and A. van den Hengel, “Image captioning and visual question answering based on attributes and external knowledge,” IEEE Trans. Pattern Analysis &Machine Intelligence, vol. 40, no. 6, pp. 1367–1381, 2018.
|
[191] |
J. Aneja, A. Deshpande, and A. G. Schwing, “Convolutional image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2018, pp. 5561−5570.
|
[192] |
Q. Wang and A. B. Chan, “CNN + CNN: Convolutional decoders for image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2018, pp. 1−9.
|
[193] |
X. Xiao, L. Wang, K. Ding, S. Xiang, and C. Pan, “Deep hierarchical encoder-decoder network for image captioning,” IEEE Trans. Multimedia, vol. 21, no. 11, pp. 2942–2956, 2019. doi: 10.1109/TMM.2019.2915033
|
[194] |
Y. Qin, J. Du, Y. Zhang, and H. Lu, “Look back and predict forward in image captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 8367−8375.
|
[195] |
J. Liu, K. Wang, C. Xu, Z. Zhao, R. Xu, Y. Shen, and M. Yang, “Interactive dual generative adversarial networks for image captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2020, vol. 34, no. 7, pp. 11588−11595.
|
[196] |
Y. Wang, W. Zhang, Q. Liu, Z. Zhang, X. Gao, and X. Sun, “Improving intra-and inter-modality visual relation for image captioning,” in Proc. 28th ACM Int. Conf. Multimedia, 2020, pp. 4190−4198.
|
[197] |
A. Hu, S. Chen, and Q. Jin, “ICECAP: Information concentrated entity-aware image captioning,” in Proc. 28th ACM Int. Conf. Multimedia, 2020, pp. 4217−4225.
|
[198] |
Z. Fei, “Memory-augmented image captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2021, vol. 35, no. 2, pp. 1317−1324.
|
[199] |
Y. Zhang, X. Shi, S. Mi, and X. Yang, “Image captioning with transformer and knowledge graph,” Pattern Recognition Letters, vol. 143, pp. 43–49, 2021. doi: 10.1016/j.patrec.2020.12.020
|
[200] |
Y. Luo, J. Ji, X. Sun, L. Cao, Y. Wu, F. Huang, C.-W. Lin, and R. Ji, “Dual-level collaborative transformer for image captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2021, vol. 35, no. 3, pp. 2286−2293.
|
[201] |
Z. Shi, X. Zhou, X. Qiu, and X. Zhu, “Improving image captioning with better use of caption,” in Proc. 58th Annual Meeting Association for Computational Linguistics, 2020, pp. 7454−7464.
|
[202] |
L. Huang, W. Wang, J. Chen, and X. Wei, “Attention on attention for image captioning,” in Proc. IEEE Int. Conf. Computer Vision, 2019, pp. 4634−4643.
|
[203] |
D. Liu, Z.-J. Zha, H. Zhang, Y. Zhang, and F. Wu, “Context-aware visual policy network for sequence-level image captioning,” in Proc. 26th ACM Int. Conf. Multimedia, 2018, pp. 1416−1424.
|
[204] |
W. Jiang, L. Ma, Y.-G. Jiang, W. Liu, and T. Zhang, “Recurrent fusion network for image captioning,” in Proc. European Conf. Computer Vision, Springer, 2018, pp. 499−515.
|
[205] |
T. Yao, Y. Pan, Y. Li, Z. Qiu, and T. Mei, “Boosting image captioning with attributes,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2017, pp. 4894−4902.
|
[206] |
T. Yao, Y. Pan, Y. Li, and T. Mei, “Hierarchy parsing for image captioning,” in Proc. IEEE/CVF Int. Conf. Computer Vision, 2019, pp. 2621−2629.
|
[207] |
Y. Zhou, M. Wang, D. Liu, Z. Hu, and H. Zhang, “More grounded image captioning by distilling image-text matching model,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2020, pp. 4777−4786.
|
[208] |
L. Wang, Z. Bai, Y. Zhang, and H. Lu, “Show, recall, and tell: Image captioning with recall mechanism.” in Proc. AAAI Conf. Artificial Intelligence, 2020, pp. 12176−12183.
|
[209] |
J. Ji, Y. Luo, X. Sun, F. Chen, G. Luo, Y. Wu, Y. Gao, and R. Ji, “Improving image captioning by leveraging intra-and inter-layer global representation in transformer network,” in Proc. AAAI Conf. Artificial Intelligence, 2021, vol. 35, no. 2, pp. 1655−1663.
|
[210] |
M. Cornia, L. Baraldi, and R. Cucchiara, “Show, control and tell: A framework for generating controllable and grounded captions,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 8307−8316.
|
[211] |
C. Deng, N. Ding, M. Tan, and Q. Wu, “Length-controllable image captioning,” in Computer Vision-ECCV, Springer, 2020, pp. 712−729.
|
[212] |
F. Sammani and L. Melas-Kyriazi, “Show, edit and tell: A framework for editing image captions,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2020, pp. 4808−4816.
|
[213] |
X. Li, S. Jiang, and J. Han, “Learning object context for dense captioning,” in Proc. AAAI Conf. Artificial Intelligence, 2019, pp. 8650−8657.
|
[214] |
S. Chen and Y.-G. Jiang, “Towards bridging event captioner and sentence localizer for weakly supervised dense event captioning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2021, pp. 8425−8435.
|
[215] |
K. Shuster, S. Humeau, H. Hu, A. Bordes, and J. Weston, “Engaging image captioning via personality,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 12516−12526.
|
[216] |
R. Li, H. Liang, Y. Shi, F. Feng, and X. Wang, “Dual-CNN: A convolutional language decoder for paragraph image captioning,” Neurocomputing, vol. 396, pp. 92–101, 2020. doi: 10.1016/j.neucom.2020.02.041
|
[217] |
Z. Fei, “Iterative back modification for faster image captioning,” in Proc. 28th ACM Int. Conf. Multimedia, 2020, pp. 3182−3190.
|
[218] |
L. Guo, J. Liu, X. Zhu, and H. Lu, “Fast sequence generation with multi-agent reinforcement learning,” arXiv preprint arXiv: 2101.09698, 2021.
|
Dataset | Training | Validation | Testing | Captions/Image | Topic
Flickr8k [153] | 6000 | 1000 | 1000 | 5 | Human activities
Flickr30k [154] | 28 000 | 1000 | 1000 | 5 | Human activities
MSCOCO [152] | 82 783 | 40 504 | 40 775 | 5 | Daily scene
MSCOCO (Karpathy’s split) | 112 783 | 5000 | 5000 | 5 | Daily scene
PASCAL 1K [155] | − | − | 1000 | 5 | Human activities
YFCC100M [156] | 99.2 million (32%) | | | 7 | Public multimedia
Multi30K-CLID [157] | 29 000 | 1000 | 1000 | 5 | Daily scene
AIC [158] | 210 000 | 30 000 | 30 000 + 30 000 | 5 | Daily scene
IAPR TC-12 [159] | 17 665 | − | 1962 | 1.7 | Still natural
GoodNews [160] | 424 000 | 18 000 | 23 000 | 1 | News
VizWiz [7] | 23 431 | 7750 | 8000 | 5 | Blind view
Nocaps [161] | 1 700 000 | 4500 | 10 600 | 10 | Novel objects
FACAD [162] | 993 000 images in total | | | 0.2 | Fashion items
TextCaps [163] | 424 000 | 18 000 | 23 000 | 1 | Text
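Most of the MS COCO results reported later in this section follow the offline “Karpathy” split rather than the official train/val partition. As a point of reference only, the following minimal Python sketch shows how that split is typically materialized; the file name dataset_coco.json and its field layout are assumptions based on the commonly distributed release of the split, not part of any surveyed method.

```python
# Minimal sketch (illustrative, not from the surveyed papers): grouping MS COCO
# images by Karpathy's split, assuming the commonly distributed "dataset_coco.json"
# file whose "images" entries carry a "split" field and a list of sentences.
import json
from collections import defaultdict

def load_karpathy_split(path="dataset_coco.json"):
    """Return {split: [(image file name, [caption strings]), ...]}."""
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)

    splits = defaultdict(list)
    for img in data["images"]:
        # "restval" images are conventionally folded into the training set.
        split = "train" if img["split"] == "restval" else img["split"]
        captions = [s["raw"] for s in img["sentences"]]
        splits[split].append((img["filename"], captions))
    return splits

if __name__ == "__main__":
    splits = load_karpathy_split()
    print({k: len(v) for k, v in splits.items()})  # train / val / test sizes
```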
Methods | Datasets | Evaluation metrics | Year |
Kiros et al. [31] | IAPR TC-12, SBU | BLEU, PPLX | 2014 |
Mao et al. [182] | IAPR TC-12, Flickr 8K/30K | BLEU, R@K, Mrank | 2014 |
Karpathy et al. [183] | PASCAL, Flickr 8K/30K | R@K, Mrank | 2014 |
Chen and Zitnick [184] | PASCAL, Flickr 8K/30K, MS COCO | BLEU, METEOR, CIDEr | 2015 |
Jia et al. [90] | Flickr 8K/30K, MS COCO | BLEU, METEOR, CIDEr | 2015 |
Vinyals et al. [185] | Flickr 8K/30K, MS COCO | BLEU, METEOR, CIDEr | 2015 |
Tran et al. [186] | MS COCO, Adobe-MIT, Instagram | Human evaluation | 2016 |
Li et al. [38] | Flickr 30K, MS COCO | BLEU, METEOR, ROUGE, CIDEr | 2016 |
Hendricks et al. [187] | MS COCO, ImageNet | BLEU, METEOR | 2016 |
Yang et al. [188] | Visual Genome | METEOR, AP, IoU | 2017 |
Liu et al. [144] | MS COCO | SPIDEr, Human evaluation | 2017 |
Gu et al. [189] | Flickr 30K, MS COCO | BLEU, METEOR, CIDEr, SPICE | 2017 |
Rennie et al. [28] | MS COCO | BLEU, METEOR, ROUGE, CIDEr | 2017 |
Wu et al. [190] | Flickr 8K/30K, MS COCO | BLEU, METEOR, CIDEr | 2018 |
Aneja et al. [191] | MS COCO | BLEU, METEOR, ROUGE, CIDEr | 2018 |
Wang and Chan [192] | MS COCO | BLEU, METEOR, ROUGE, CIDEr | 2018 |
Anderson et al. [29] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2018 |
Lu et al. [110] | Flickr 30K, MS COCO | BLEU, METEOR, CIDEr, SPICE | 2018 |
Xiao et al. [193] | Flickr 8K/30K, MS COCO | BLEU, METEOR, ROUGE, CIDEr | 2019 |
Jia et al. [90] | YFCC100M, InstaPIC-1.1M | BLEU, METEOR, ROUGE, CIDEr | 2019 |
Yang et al. [74] | MS COCO, Visual Genome | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2019 |
Qin et al. [194] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2019 |
Biten et al. [160] | GoodNews | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2019 |
Liu et al. [195] | MS COCO | BLEU, METEOR, ROUGE, CIDEr | 2020 |
Yang et al. [162] | Fashion-caps | BLEU, METEOR, ROUGE, CIDEr, SPICE, mAP, ACC | 2020 |
Gurari et al. [7] | MS COCO, VizWiz | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2020 |
Sidorov et al. [163] | MS COCO, TextCaps | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2020 |
Wang et al. [196] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2020 |
Hu et al. [197] | BreakingNews, GoodNews | BLEU, METEOR, ROUGE, CIDEr | 2020 |
Fei [198] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2021 |
Hu et al. [151] | MS COCO, Nocaps | BLEU, METEOR, CIDEr, SPICE | 2021 |
Zhang et al. [44] | MS COCO | BLEU, METEOR, CIDEr, SPICE, R@K | 2021 |
Zhang et al. [199] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2021 |
Luo et al. [200] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2021 |
Zhang et al. [63] | MS COCO | BLEU, METEOR, ROUGE, CIDEr, SPICE | 2021 |
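The reference-based metrics that recur in the table above (BLEU, METEOR, ROUGE, CIDEr, SPICE) are almost always computed with the COCO caption evaluation toolkit. The snippet below is a hedged illustration of that usage, assuming the pycocoevalcap package (a port of the coco-caption code) and pre-tokenized, lower-cased captions; METEOR and SPICE additionally require a Java runtime and are omitted here.

```python
# Illustrative sketch only: scoring candidate captions against references with
# the COCO caption evaluation toolkit (assumes the "pycocoevalcap" package).
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.cider.cider import Cider

# Both structures map an image id to a list of caption strings.
references = {
    "img1": ["a dog runs on the grass", "a brown dog is running outside"],
}
candidates = {
    "img1": ["a dog is running on the grass"],
}

bleu_scores, _ = Bleu(4).compute_score(references, candidates)   # BLEU-1..4
cider_score, _ = Cider().compute_score(references, candidates)   # corpus CIDEr

print("BLEU-1..4:", [round(s, 3) for s in bleu_scores])
print("CIDEr:", round(cider_score, 3))
```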
Methods | BLEU-1 | METEOR | CIDEr | Div-1 | Div-2 | Novel | TIGEr | BERT-S | CLIP-S
(Standard metrics: BLEU-1, METEOR, CIDEr; diversity metrics: Div-1, Div-2, Novel; learning-based metrics: TIGEr, BERT-S, CLIP-S)
NIC [26] | 72.4 | 25.0 | 97.2 | 1.4 | 4.5 | 36.1 | 71.8 | 93.4 | 69.7 | ||
Xu et al. [25] | 74.1 | 26.2 | 104.6 | 1.7 | 6.0 | 47.0 | 73.2 | 93.6 | 71.0 | ||
SCST [28] | 78.0 | 27.1 | 117.4 | 1.0 | 3.1 | 64.9 | 73.9 | 88.9 | 71.2 | ||
Up-Down [29] | 79.4 | 27.9 | 122.7 | 1.2 | 4.4 | 67.6 | 74.6 | 88.8 | 72.3 | ||
SGAE [74] | 81.0 | 28.4 | 129.1 | 1.4 | 5.4 | 71.4 | 74.6 | 94.1 | 73.4 | ||
MT [42] | 80.8 | 28.8 | 129.6 | 1.1 | 4.8 | 70.4 | 74.8 | 88.8 | 72.6 | ||
AOANet [202] | 80.2 | 29.2 | 129.8 | 1.6 | 6.2 | 69.3 | 75.1 | 94.3 | 73.7 | ||
ORT [132] | 80.5 | 28.7 | 128.3 | 2.1 | 7.2 | 73.8 | 75.1 | 94.1 | 73.6 | ||
M2-T [135] | 80.8 | 29.2 | 131.2 | 1.7 | 7.9 | 78.9 | 75.3 | 93.7 | 73.4 | ||
Unified VLP [149] | 80.9 | 29.3 | 129.3 | 1.9 | 8.1 | 74.1 | 75.1 | 94.4 | 75.0 | ||
CPTR [30] | 81.7 | 29.1 | 129.4 | 1.4 | 6.8 | 75.6 | 74.8 | 94.3 | 74.5 |
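The diversity statistics above are corpus-level counts rather than learned scores. Their exact definitions vary slightly across papers; one common formulation, sketched below under that caveat, takes Div-n as the ratio of distinct n-grams to all n-grams in the generated captions and “Novel” as the fraction of generated captions that never appear among the training annotations.

```python
# Hedged sketch of one common formulation of the diversity statistics;
# individual papers may normalize or aggregate these quantities differently.
from typing import Iterable, List

def ngrams(tokens: List[str], n: int):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def div_n(captions: Iterable[str], n: int) -> float:
    """Distinct n-grams divided by total n-grams over all generated captions."""
    all_ngrams = []
    for cap in captions:
        all_ngrams.extend(ngrams(cap.lower().split(), n))
    return len(set(all_ngrams)) / max(len(all_ngrams), 1)

def novel_ratio(generated: Iterable[str], training_caps: Iterable[str]) -> float:
    """Fraction of generated captions not found verbatim in the training set."""
    seen = {c.lower().strip() for c in training_caps}
    gen = [c.lower().strip() for c in generated]
    return sum(c not in seen for c in gen) / max(len(gen), 1)

gen = ["a dog runs on the grass", "a man rides a horse"]
train = ["a dog runs on the grass"]
print(round(div_n(gen, 1), 3), round(div_n(gen, 2), 3), novel_ratio(gen, train))
```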
Methods | Encoder (En) | Decoder (De) | ATT | Cross-entropy training (EX): B4 | M | R-L | C | S | Reinforcement learning (RL): B4 | M | R-L | C | S
NIC [26] | CNN | LSTM | × | 24.6 | − | − | − | − | 27.7 | 23.7 | − | 85.5 | − |
Soft-ATT [25] | CNN | LSTM | √ | 24.3 | 23.9 | − | − | − | − | − | − | − | − |
Hard-ATT [25] | CNN | LSTM | √ | 25.0 | 23.0 | − | − | − | − | − | − | − | − |
GLA [102] | CNN | LSTM | √ | 31.2 | 24.9 | 53.3 | 96.4 | − | − | − | − | − | − |
Semantic-ATT [114] | CNN | LSTM | √ | 37.7 | 27.9 | 58.2 | 123.7 | − | − | − | − | − | −
Adp-ATT [27] | CNN | LSTM | √ | 33.2 | 25.7 | 55.0 | 101.3 | − | − | − | − | − | − |
SCST [28] | CNN | LSTM | √ | 30.0 | 26.0 | 54.3 | 101.3 | − | 34.2 | 26.7 | 55.7 | 114.0 | − |
Up-Down [29] | CNN | LSTM | √ | 36.2 | 27.0 | 56.4 | 113.5 | 20.3 | 36.3 | 27.7 | 56.9 | 120.1 | 21.4 |
Stack-Cap [98] | CNN | LSTM | √ | 35.2 | 26.5 | − | 109.1 | − | 36.1 | 27.4 | 56.9 | 120.4 | 20.9 |
CAVP [203] | CNN | LSTM | √ | − | − | − | − | − | 38.6 | 28.3 | 58.5 | 126.3 | 21.6 |
SGAE [74] | GCN | LSTM | √ | − | − | − | − | − | 38.4 | 28.4 | 58.6 | 127.8 | 22.1 |
AOANet [202] | SA | LSTM | √ | 36.9 | 28.5 | 57.3 | 118.5 | 21.6 | 39.1 | 29.0 | 58.9 | 128.9 | 22.5 |
ETA [134] | SA | T-ATT | √ | 37.1 | 28.2 | 57.1 | 117.9 | 21.4 | 39.3 | 28.8 | 58.9 | 126.6 | 22.7 |
RFNet [204] | CNN | LSTM | √ | 35.8 | 27.4 | 56.8 | 112.5 | 20.5 | 36.5 | 27.7 | 57.3 | 121.9 | 21.2 |
LSTM-A [205] | CNN | LSTM | √ | 35.2 | 26.9 | 55.8 | 108.8 | 20.0 | 35.5 | 27.3 | 56.8 | 118.3 | 20.8 |
GCN-LSTM [35] | GCN | LSTM | √ | 36.8 | 27.9 | 57.0 | 116.3 | 20.9 | 38.2 | 28.5 | 58.3 | 127.6 | 22.0 |
CNM [75] | GCN | LSTM | √ | 37.1 | 27.9 | 57.3 | 116.6 | 20.8 | 38.7 | 28.4 | 58.7 | 127.4 | 21.8 |
DA [123] | CNN | LSTM | √ | 33.7 | 26.4 | 54.6 | 104.9 | 19.4 | 37.5 | 28.5 | 58.2 | 125.6 | 22.3 |
MT [42] | SA | T-ATT | √ | 37.4 | 28.7 | 57.4 | 119.6 | − | 40.7 | 29.5 | 59.7 | 134.1 | − |
ORT [132] | SA | T-ATT | √ | 35.5 | 28.0 | 56.6 | 115.4 | 21.2 | 38.6 | 28.7 | 58.4 | 128.3 | 22.6 |
M2-T [135] | CNN | LSTM | √ | − | − | − | − | − | 39.1 | 29.2 | 58.6 | 131.2 | 22.6 |
LBPF [194] | CNN | LSTM | √ | 37.4 | 28.1 | 57.5 | 116.4 | 21.2 | 38.3 | 28.5 | 58.4 | 127.6 | 22.0 |
GCN-HIP [206] | GCN | LSTM | √ | 38.0 | 28.6 | 57.8 | 120.3 | 21.4 | 39.1 | 28.9 | 59.2 | 130.6 | 22.3 |
VSUA [36] | GCN | LSTM | √ | − | − | − | − | − | 38.4 | 28.5 | 58.4 | 128.6 | 22.0 |
NG-SAN [133] | SA | T-ATT | √ | − | − | − | − | − | 39.9 | 29.3 | 59.2 | 132.1 | 23.3 |
POS-SCAN [207] | CNN | LSTM | √ | 36.5 | 27.9 | − | 114.9 | 20.8 | 38.0 | 28.5 | − | 125.9 | 22.2 |
X-LAN [129] | SA | LSTM | √ | 38.2 | 28.8 | 58.0 | 122.0 | 21.9 | 39.5 | 29.5 | 59.2 | 132.0 | 23.4 |
X-T [129] | SA | T-ATT | √ | 37.0 | 28.7 | 57.5 | 120.0 | 21.8 | 39.7 | 29.5 | 59.1 | 132.8 | 23.4 |
OSCAR [150] | SA | T-ATT | √ | 36.5 | 30.3 | − | 123.7 | 23.1 | 40.5 | 29.7 | − | 137.6 | 22.8 |
CGVRG [201] | GCN | LSTM | √ | 38.4 | 28.2 | 58.0 | 119.0 | 21.1 | 38.9 | 28.8 | 58.7 | 129.6 | 22.3 |
SRT [208] | SA | T-ATT | √ | 36.6 | 28.0 | 56.9 | 116.9 | 21.3 | 38.5 | 28.7 | 58.4 | 129.1 | 22.4 |
CPTR [30] | SA | T-ATT | √ | − | − | − | − | − | 40.0 | 29.1 | 59.4 | 129.4 | − |
MAC [209] | SA | T-ATT | √ | − | − | − | − | − | 39.5 | 29.3 | 58.9 | 131.6 | 22.8 |
DLCT [200] | SA | T-ATT | √ | − | − | − | − | − | 39.8 | 29.5 | 59.1 | 133.8 | 23.0 |
RSTNet [63] | SA | T-ATT | √ | − | − | − | − | − | 40.1 | 29.8 | 59.5 | 135.6 | 23.3 |
VRATT-Soft [125] | CNN | LSTM | √ | 34.3 | 28.5 | 60.0 | 111.7 | 20.1 | 37.5 | 28.5 | 61.6 | 122.1 | 22.1 |
VRATT-Hard [125] | CNN | LSTM | √ | 36.3 | 27.9 | 60.6 | 113.0 | 20.4 | 36.6 | 28.4 | 60.9 | 119.8 | 21.5 |
VinVL [44] | SA | T-ATT | √ | 38.2 | 30.3 | − | 129.3 | 23.6 | 40.9 | 30.9 | − | 140.4 | 25.1 |
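The gap between the cross-entropy and reinforcement-learning columns above largely comes from self-critical sequence training [28], which fine-tunes the captioner with the sentence-level CIDEr score of a sampled caption while using the greedily decoded caption’s score as the baseline. The following PyTorch sketch shows only that loss computation under stated assumptions; the model, the caption sampler, and the CIDEr scorer are replaced by placeholder tensors and made-up rewards.

```python
# Hedged sketch of the self-critical (SCST-style) objective [28]; all inputs are
# placeholders standing in for a real captioner's sampled log-probs and rewards.
import torch

def scst_loss(sample_logprobs: torch.Tensor,
              sample_reward: torch.Tensor,
              greedy_reward: torch.Tensor,
              mask: torch.Tensor) -> torch.Tensor:
    """REINFORCE loss with the greedy caption's reward as the baseline.

    sample_logprobs: (batch, seq_len) log-probabilities of the sampled tokens.
    sample_reward / greedy_reward: (batch,) sentence-level scores, e.g. CIDEr-D.
    mask: (batch, seq_len) 1 for real tokens, 0 for padding.
    """
    advantage = (sample_reward - greedy_reward).unsqueeze(1)         # (batch, 1)
    return -(advantage * sample_logprobs * mask).sum() / mask.sum()  # mean over tokens

# Dummy example: 2 captions of length 5 with made-up rewards.
logp = torch.randn(2, 5, requires_grad=True)   # stand-in for gathered token log-probs
mask = torch.ones(2, 5)
loss = scst_loss(logp, torch.tensor([1.10, 0.85]), torch.tensor([0.95, 0.90]), mask)
loss.backward()
print(float(loss))
```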