├── README.md
├── commons
│   ├── 常用句子
│   │   └── 常用句子.md
│   ├── 常用句式
│   │   └── 常用句式.md
│   ├── 常用短语
│   │   └── 常用短语.md
│   └── 常用缩写
│       └── 常用缩写.md
├── practice
│   ├── Abstract
│   │   └── abstract.md
│   ├── Conclusion
│   │   └── conclusion.md
│   ├── Experiments
│   │   └── experiments.md
│   ├── Introduction
│   │   └── introduction.md
│   ├── Method
│   │   └── method.md
│   └── Related_work
│       └── related_work.md
└── summary
    └── summary.md

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

# awesome-cv-writing

## Introduction
This repository collects writing patterns and expressions commonly used in computer vision papers.

## Contents
1. [Introduction](#introduction)
2. [Writing](#writing)
3. [Practice](#practice)
4. [Summary](#summary)


## Writing
- [Common sentence patterns](commons/常用句式/常用句式.md)
- [Common sentences](commons/常用句子/常用句子.md)
- [Common phrases](commons/常用短语/常用短语.md)
- [Common abbreviations](commons/常用缩写/常用缩写.md)
- [One meaning, many words](commons/一意多词/一意多词.md)


## Practice
- [Abstract](practice/Abstract/abstract.md)
- [Introduction](practice/Introduction/introduction.md)
- [Related work](practice/Related_work/related_work.md)
- [Method](practice/Method/method.md)
- [Experiments](practice/Experiments/experiments.md)
- [Conclusion](practice/Conclusion/conclusion.md)


## Summary
- [paper](summary/summary.md)

--------------------------------------------------------------------------------
/commons/常用句子/常用句子.md:
--------------------------------------------------------------------------------
## This section collects commonly used sentences


>* **X rather than Y**
Meaning: To the best of our knowledge, this is the first study that captures prior information directly with a deep convolutional network rather than by learning the network parameters.
English: To the best of our knowledge, this is the first study that directly investigates the prior **captured by a deep convolutional generative network** **independently of learning the network parameters for images**.
Explanation: two post-modifiers, which can be stacked directly:
1) was captured by the deep convolutional generative network
2) was captured independently of learning the network parameters for images

>* **Beyond task X, our method also has an important application in Y**
Meaning: In addition to image restoration tasks, our method has another important application: understanding the information contained in the activations of deep CNNs.
English: **In addition to** standard image restoration tasks, **we show an application of our technique to** understanding **the information contained within** the activations of deep neural networks.

>* **Not X but Y, avoiding errors/bias caused by Z**
Meaning: Since new regularizers such as the TV norm are handcrafted rather than learned from data, the resulting visualizations avoid the bias introduced by strong learned regularizers.
English: Since the new regularizer, like the TV norm, **is not** learned from data **but is** entirely handcrafted, the resulting visualizations **avoid potential bias arising from** the use of powerful learned regularizers.

>* **X is applied to Y by doing Z**
Meaning: Deep networks are applied to image generation by learning a generator/decoder that maps a random vector z to an image x.
English: Deep networks **are applied to** image generation **by learning a generator/decoder network x = fθ(z)** that maps a random code vector z to an image x.

>* **X yields a consistent improvement**
Meaning: Introducing the denoising blocks brings a consistent performance improvement.
English: There is a consistent performance improvement introduced by the denoising blocks.

>* **Improves by how much, from X to Y**
Meaning: It improves the accuracy of the ResNet-152 baseline by 3.2%, from 52.5% to 55.7%.
English: It improves the accuracy of the ResNet-152 baseline by 3.2%, from 52.5% to 55.7% (Figure 6, right).

>* **One method achieves X; its counterpart achieves Y**
Meaning: The ResNet-152 baseline achieves a certain accuracy, and its counterpart improves on it by some margin.
English: The ResNet-152 baseline has 39.2% accuracy, and its denoising counterpart is 3.4% better, achieving 42.6%.

>* **Unless otherwise stated, the default is ...**
Meaning: In this paper we use A by default, unless otherwise stated.
English: We use A by default in this paper unless noted.
>* **Bridges the gap between X and Y**
Meaning: It also bridges the gap between two popular families of image restoration methods: learning-based methods using deep CNNs and learning-free methods based on handcrafted image priors.
English: It also **bridges the gap between** two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.
Explanation: "methods" and "networks" are plural; "such as self-similarity" serves as the final modifier.
>* **For clarity, some details are not shown in the figure**
Meaning: For clarity, some elements are not included in the figure.
English: For clarity, * and * are not included.
>* **To restrict the complexity of the problem, we limit our discussion to ...**
To restrict the complexity of our problem, we limit our discussion to scene texts with upper-case characters.

>* **However, we demonstrate that it also works in ... scenarios**
However, we demonstrate that the proposed method can also be applied to lower-case characters.

>* **Denotes the element-wise product**
denotes the element-wise product of matrices.

>* **To address problem X, the following attempts have been made**
Meaning: To alleviate 'domain shift', the main challenge in domain adaptation, the community has made several attempts.
To alleviate the effect of 'domain shift', the major challenge in domain adaptation, studies have attempted to align the distributions of the two domains.

>* **Unlike previous methods, our method ...**
Meaning: Unlike previous studies, which extract source features with a fixed CNN extractor, our method jointly learns feature representations of the two domains.
Unlike previous studies where the source features are extracted with a fixed pre-trained encoder, our method jointly learns feature representations of the two domains.

>* **We evaluate the model on dataset X and achieve strong results**
Meaning: We evaluate the model on several well-known domain adaptation benchmarks, and the results show that we achieve strong performance.
We evaluate the proposed method on several domain adaptation benchmarks and achieve superior or comparable performance to state-of-the-art results.

>* **To verify the effectiveness of each component of the model**
To verify the effectiveness of each component in our model

>* **Performance degrades**
As shown in Table I, after removing one or more parts, the performance degrades in most cases.

--------------------------------------------------------------------------------
/commons/常用句式/常用句式.md:
--------------------------------------------------------------------------------
## This section collects commonly used sentence patterns


>* **Modifier clauses introduced by "when"**
Meaning: For instance, the authors of [33] recently showed that the same image classification network that generalizes well when trained on genuine data can also overfit when presented with random labels.
English: For instance, the authors of [33] recently showed that the same image classification network that generalizes well **when trained on genuine data** can also overfit **when presented with random labels**.
Explanation: the position of the "when" modifiers:
1) network that generalizes well when trained on genuine data
2) can also overfit when presented with random labels.
>* **Bridges the gap between X and Y**
Meaning: It also bridges the gap between two popular families of image restoration methods: learning-based methods using deep CNNs and learning-free methods based on handcrafted image priors.
English: It also **bridges the gap between** two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.
>>* "methods" and "networks" are plural; "such as self-similarity" serves as the final modifier

--------------------------------------------------------------------------------
/commons/常用短语/常用短语.md:
--------------------------------------------------------------------------------
## This section collects commonly used phrases

>* To the best of our knowledge # as far as we know
>* In addition to # besides
>* Results are significantly better:
1) with dramatically improved results
2) is significantly better than
>* the information contained within # the information contained in ...
>* be applied to
>* our results are robust
>* consider the following denoising operations: A, B, and C.
>* 30 thousand # a way to express 30,000
>* can extend well to # can generalize to
>* , denoted as EATEN hereafter # named EATEN (and referred to as such from here on)
>* is capable of # can
>* To adopt / apply / use:
>>* is employed to
>>* is introduced to
>* To leverage:
a simple model leveraging adversarial learning.
>* Accuracy drops from ... to ...:
Accuracy drops from 90% to 50%.
>* Accuracy improves from ... to ...:
It improves accuracy from 80% to 90%.
>* The goal is to ...:
Image-to-image translation **aims to** construct a mapping function between two domains.

--------------------------------------------------------------------------------
/commons/常用缩写/常用缩写.md:
--------------------------------------------------------------------------------
## This section collects commonly used abbreviations


>* w.r.t. # with respect to / with regard to
Example: the loss function w.r.t. the image pixel values
>* i.e. # that is
Example: the learned model is thus not generalizable to new styles, i.e. retraining is needed for transformations of a new style

--------------------------------------------------------------------------------
/practice/Abstract/abstract.md:
--------------------------------------------------------------------------------
## Overview
An abstract, as the name suggests, summarizes the whole paper. The usual structure first raises the problem (Introduction), then briefly reviews related work (Related work), then presents the proposed solution (Method), and finally states the experimental conclusions (Experiments). Some of these parts can be touched on only briefly or omitted, but Method and Experiments are the soul of the paper and must be covered.

## Case studies

Below, we collect some well-written abstracts, analyze them one by one, and finally distill a fixed, reusable template.

### **[DLOW: Domain Flow for Adaptation and Generalization[CVPR2019 Oral]](https://arxiv.org/pdf/1812.05418.pdf)**
### **[LOMO: An Accurate Detector for Text of Arbitrary Shapes[CVPR2019]](https://arxiv.org/abs/1904.06535)**
#### **Raising the problem**
**Previous** scene text detection methods have progressed substantially over the past years. **However**, limited by the receptive field of CNNs and the simple representations like rectangle bounding box or quadrangle adopted to describe text, previous methods **may fall short** when dealing with more challenging text instances, such as extremely long text and arbitrarily shaped text.
#### **Addressing the problem**
**To address these two problems**, we present a novel text detector namely LOMO, which localizes the text progressively for multiple times (or in other words, LOok More than Once).
#### **Method details**
LOMO consists of a direct regressor (DR), an iterative refinement module (IRM) and a shape expression module (SEM). At first, text proposals in the form of quadrangle are generated by DR branch. Next, IRM progressively perceives the entire long text by iterative refinement based on the extracted feature blocks of preliminary proposals. Finally, a SEM is introduced to reconstruct more precise representation of irregular text by considering the geometry properties of text instance, including text region, text center line and border offsets.
#### **Experimental results**
**The state-of-the-art results** on several public benchmarks including ICDAR2017-RCTW, SCUT-CTW1500, Total-Text, ICDAR2015 and ICDAR17-MLT confirm the striking robustness and effectiveness of LOMO.

### **[EAST: An Efficient and Accurate Scene Text Detector[CVPR2017]](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhou_EAST_An_Efficient_CVPR_2017_paper.html)**
#### **Raising the problem**
**Previous approaches** for scene text detection have already achieved promising performance across various benchmarks.
**However**, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines.
#### **Addressing the problem**
In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes.
#### **Method details**
The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (**e.g.,** candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture.
#### **Experimental results**
Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7829 at 13.2fps at 720p resolution.
#### **Commentary**
scenarios vs. scenes: the former leans toward specific cases, the latter toward physical settings.

--------------------------------------------------------------------------------
/practice/Conclusion/conclusion.md:
https://raw.githubusercontent.com/beacandler/awesome-cv-writing/113e6faab2d5fb44a2544ed3eac1bbf0352f08ae/practice/Conclusion/conclusion.md
--------------------------------------------------------------------------------
/practice/Experiments/experiments.md:
https://raw.githubusercontent.com/beacandler/awesome-cv-writing/113e6faab2d5fb44a2544ed3eac1bbf0352f08ae/practice/Experiments/experiments.md
--------------------------------------------------------------------------------
/practice/Introduction/introduction.md:
https://raw.githubusercontent.com/beacandler/awesome-cv-writing/113e6faab2d5fb44a2544ed3eac1bbf0352f08ae/practice/Introduction/introduction.md
--------------------------------------------------------------------------------
/practice/Method/method.md:
https://raw.githubusercontent.com/beacandler/awesome-cv-writing/113e6faab2d5fb44a2544ed3eac1bbf0352f08ae/practice/Method/method.md
--------------------------------------------------------------------------------
/practice/Related_work/related_work.md:
https://raw.githubusercontent.com/beacandler/awesome-cv-writing/113e6faab2d5fb44a2544ed3eac1bbf0352f08ae/practice/Related_work/related_work.md
--------------------------------------------------------------------------------
/summary/summary.md:
https://raw.githubusercontent.com/beacandler/awesome-cv-writing/113e6faab2d5fb44a2544ed3eac1bbf0352f08ae/summary/summary.md
--------------------------------------------------------------------------------