Convolutional neural network (합성곱 신경망)


Notes

Wikidata

Corpus

  1. ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture.[1]
  2. A ConvNet arranges its neurons in three dimensions (width, height, depth), as visualized in one of the layers.[1]
  3. A ConvNet is made up of Layers.[1]
  4. We use three main types of layers to build ConvNet architectures: Convolutional Layer, Pooling Layer, and Fully-Connected Layer (exactly as seen in regular Neural Networks).[1] (A minimal layer-stack sketch appears after this list.)
  5. The final layer of the CNN architecture uses a classification layer such as softmax to provide the classification output.[2]
  6. This tutorial demonstrates training a simple Convolutional Neural Network (CNN) to classify CIFAR images.[3]
  7. As input, a CNN takes tensors of shape (image_height, image_width, color_channels), ignoring the batch size.[3]
  8. In this example, you will configure your CNN to process inputs of shape (32, 32, 3), which is the format of CIFAR images.[3]
  9. Why ReLU is important: ReLU’s purpose is to introduce non-linearity in our ConvNet.[4] (See the ReLU and pooling sketch after this list.)
  10. The pre-processing required by a ConvNet is much lower compared to other classification algorithms.[5]
  11. The architecture of a ConvNet is analogous to that of the connectivity pattern of Neurons in the Human Brain and was inspired by the organization of the Visual Cortex.[5]
  12. A ConvNet is able to successfully capture the Spatial and Temporal dependencies in an image through the application of relevant filters.[5]
  13. The role of the ConvNet is to reduce the images into a form which is easier to process, without losing features which are critical for getting a good prediction.[5]
  14. CNN is a class of deep learning networks that has attracted much attention in recent studies.[6]
  15. In early applications of CNNs, a pipeline similar to that of supervised learning models is adopted, except that the classical machine learning algorithms are replaced by a CNN.[6]
  16. The pipeline of CNN-based models is illustrated in Fig.[6]
  17. Illustration of CNN-based model.[6]
  18. As shown, it is necessary to prepare a large amount of training data with corresponding labels for efficient classification using a CNN.[7]
  19. They used a multiview strategy in 3D-CNN, whose inputs were obtained by cropping three 3D patches of a lung nodule at different sizes and then resizing them to the same size.[7]
  20. To utilize time-series data, the study used triphasic CT images as 2D images with three channels, which correspond to the RGB color channels in computer vision, as input to a 2D-CNN.[7]
  21. However, one can also apply a CNN to this task.[7]
  22. Central to the convolutional neural network is the convolutional layer that gives the network its name.[8]
  23. In the context of a convolutional neural network, a convolution is a linear operation that involves the multiplication of a set of weights with the input, much like a traditional neural network.[8]
  24. The name “convolutional neural network” indicates that the network employs a mathematical operation called convolution.[9]
  25. A convolutional neural network consists of an input and an output layer, as well as multiple hidden layers.[9]
  26. The hidden layers of a CNN typically consist of a series of convolutional layers that convolve with a multiplication or other dot product.[9]
  27. The neocognitron is the first CNN which requires units located at multiple network positions to have shared weights.[9]
  28. To start, the CNN receives an input feature map: a three-dimensional matrix where the size of the first two dimensions corresponds to the length and width of the images in pixels.[10]
  29. For each filter-tile pair, the CNN performs element-wise multiplication of the filter matrix and the tile matrix, and then sums all the elements of the resulting matrix to get a single value.[10] (A NumPy sketch of this multiply-and-sum step appears after this list.)
  30. During training, the CNN "learns" the optimal values for the filter matrices that enable it to extract meaningful features (textures, edges, shapes) from the input feature map.[10]
  31. As the number of filters (output feature map depth) applied to the input increases, so does the number of features the CNN can extract.[10]
  32. These operations are the basic building blocks of every Convolutional Neural Network, so understanding how these work is an important step to developing a sound understanding of ConvNets.[11]
  33. The primary purpose of Convolution in the case of a ConvNet is to extract features from the input image.[11]
  34. It is important to understand that these layers are the basic building blocks of any CNN.[11]
  35. Please note however, that these operations can be repeated any number of times in a single ConvNet.[11]
  36. Deep CNNs have made noteworthy contributions in several domains, such as image classification and recognition; therefore, they have become widely known standards.[12]
  37. This section describes various classical and modern architectures of Deep CNN, which are currently utilized as building blocks of several segmentation architectures.[12]
  38. This CNN model implements dropout layers to combat overfitting to the training data.[12]
  39. The controller is a predefined RNN, where the child model is the required CNN for image classification.[12]
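
The multiply-and-sum step described in items 22-23 and 28-31 can be written out directly. Below is a minimal NumPy sketch, not any particular library's implementation; the names conv2d_valid, image, and filters are illustrative, and only stride 1 with "valid" padding is handled.

  import numpy as np

  def conv2d_valid(image, filters):
      """Slide each filter over the image; at every position, multiply the
      filter element-wise with the image tile and sum to a single value."""
      fh, fw = filters.shape[1], filters.shape[2]
      out_h = image.shape[0] - fh + 1
      out_w = image.shape[1] - fw + 1
      # One output channel per filter: more filters -> a deeper output feature map (item 31).
      out = np.zeros((out_h, out_w, filters.shape[0]))
      for k, filt in enumerate(filters):
          for i in range(out_h):
              for j in range(out_w):
                  tile = image[i:i + fh, j:j + fw]
                  out[i, j, k] = np.sum(tile * filt)  # element-wise multiply, then sum (item 29)
      return out

  image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 single-channel "image"
  filters = np.array([[[0., 1.], [1., 0.]],          # two illustrative 2x2 filters
                      [[1., -1.], [-1., 1.]]])
  print(conv2d_valid(image, filters).shape)          # (4, 4, 2)

During training (item 30), the values inside filters are what the network learns; here they are fixed by hand only to show the arithmetic.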
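
Items 9 and 13 mention the ReLU non-linearity and the reduction of the image into an easier-to-process form, which pooling layers perform. A minimal NumPy sketch of both; the function and variable names are illustrative.

  import numpy as np

  def relu(x):
      # ReLU zeroes out negative activations, introducing non-linearity (item 9).
      return np.maximum(x, 0.0)

  def max_pool_2x2(fmap):
      # 2x2 max pooling with stride 2: keep the strongest response in each
      # 2x2 window, halving height and width while preserving depth.
      h, w, c = fmap.shape
      fmap = fmap[:h - h % 2, :w - w % 2, :]          # drop any odd remainder rows/columns
      return fmap.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

  fmap = np.random.randn(4, 4, 2)                     # toy feature map (height, width, channels)
  print(max_pool_2x2(relu(fmap)).shape)               # (2, 2, 2)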
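
Items 4-8 describe stacking convolutional, pooling, and fully-connected layers, ending in a softmax classification layer (item 5), with dropout used against overfitting (item 38). The following is a minimal tf.keras sketch in the spirit of the CIFAR tutorial cited in [3]; the exact layer sizes are illustrative, and TensorFlow 2.x is assumed to be installed.

  import tensorflow as tf
  from tensorflow.keras import layers, models

  # Input shape (32, 32, 3): CIFAR image height, width, and RGB channels (item 8);
  # the batch dimension is omitted, as noted in item 7.
  model = models.Sequential([
      layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
      layers.MaxPooling2D((2, 2)),
      layers.Conv2D(64, (3, 3), activation="relu"),
      layers.MaxPooling2D((2, 2)),
      layers.Flatten(),
      layers.Dropout(0.5),                        # dropout against overfitting (item 38)
      layers.Dense(64, activation="relu"),
      layers.Dense(10, activation="softmax"),     # softmax classification layer (item 5)
  ])

  model.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
  model.summary()

Learning the filter weights (item 30) would then be a call such as model.fit(train_images, train_labels, epochs=10), with train_images and train_labels standing in for a labeled dataset such as CIFAR-10.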

Sources