  • AlexNet competed in the ImageNet Large Scale Visual Recognition Challenge on September 30, 2012.[1]
  • AlexNet achieved a top-5 error of around 16%, an extremely good result back in 2012.[2]
  • AlexNet is made up of eight trainable layers: five convolutional layers and three fully connected layers.[2]
  • To address overfitting during training, AlexNet uses both data augmentation and dropout layers.[3]
  • Training images on the VGG network uses techniques similar to those of Krizhevsky et al., mentioned previously (i.e. the training of AlexNet).[3]
  • AlexNet is a variant of the CNN that is also referred to as a deep convolutional neural network (DCNN).[4]
  • AlexNet then applies a max-pooling (sub-sampling) layer with a 3×3 filter size and a stride of two.[5]
  • To achieve optimal values for the deep-learning AlexNet structure, a Modified Crow Search (MCS) is presented.[6]
  • The selected AlexNet alone is ineffective at improving model performance.[6]
  • AlexNet, a basic, fundamental, and one of the most successful DCNN architectures, was the first adopted in this work.[6]
  • This optimized AlexNet makes it easy to improve quality and increase the SR through normalized learning rates.[6]
  • Following the convolutional layers, the original AlexNet had fully-connected layers with 4096 nodes each.[7]
  • We saw earlier that image classification is quite an easy task thanks to deep-learning nets such as AlexNet.[8]
  • The convolutional part of AlexNet computes the features of each region, and SVMs then use these features to classify the regions.[8]
  • AlexNet was the pioneer among CNNs and opened a whole new research era.[9]
  • After competing in ImageNet Large Scale Visual Recognition Challenge, AlexNet shot to fame.[10]
  • Before exploring AlexNet, it is essential to understand what a convolutional neural network is.[10]
  • AlexNet was trained on a GTX 580 GPU with only 3 GB of memory which couldn’t fit the entire network.[10]
  • The authors of AlexNet used 3×3 pooling windows with a stride of 2 between adjacent windows.[10]
  • The figure below shows that, with the help of ReLUs (solid curve), AlexNet can reach a 25% training error rate about six times faster than an equivalent tanh network (dashed curve).[10]
  • The authors of AlexNet extracted random crops sized 227×227 from inside the 256×256 image boundary and used these as the network's inputs.[10]
  • AlexNet was able to recognize off-center objects and most of its top 5 classes for each image were reasonable.[10]
  • AlexNet was not the first fast GPU-implementation of a CNN to win an image recognition contest.[11]
  • In short, AlexNet contains 5 convolutional layers and 3 fully connected layers.[12]
  • AlexNet famously won the ImageNet LSVRC-2012 competition by a large margin (a 15.3% top-5 error rate vs. 26.2% for second place).[13]
  • AlexNet uses Rectified Linear Units (ReLU) instead of the tanh function, which was standard at the time.[14]
  • AlexNet allows for multi-GPU training by putting half of the model’s neurons on one GPU and the other half on another GPU.[14]
  • AlexNet vastly outpaced this with a 37.5% top-1 error and a 17.0% top-5 error.[14]
  • AlexNet is able to recognize off-center objects and most of its top five classes for each image are reasonable.[14]
  • AlexNet is an incredibly powerful model capable of achieving high accuracies on very challenging datasets.[14]
  • However, removing any of the convolutional layers will drastically degrade AlexNet’s performance.[14]
  • AlexNet consists of eight layers: five convolutional layers, two fully-connected hidden layers, and one fully-connected output layer.[15]
  • Second, AlexNet used the ReLU instead of the sigmoid as its activation function.[15]
  • In addition, AlexNet replaced the sigmoid activation function with the simpler ReLU activation function.[15]
  • AlexNet controls the model complexity of the fully-connected layer by dropout (Section 4.6), while LeNet only uses weight decay.[15]
  • The architecture used in the 2012 paper is popularly called AlexNet after the first author Alex Krizhevsky.[16]
  • As mentioned above, AlexNet was the winning entry in ILSVRC 2012.[16]
  • Random crops of size 227×227 were generated from inside the 256×256 images to feed the first layer of AlexNet.[16]
  • An important feature of AlexNet is the use of the ReLU (Rectified Linear Unit) nonlinearity.[16]
  • The authors of AlexNet extracted random crops of size 227×227 from inside the 256×256 image boundary to use as the network’s inputs.[16]
  • This can be understood from AlexNet, where FC layers contain approx.[17]
  • AlexNet is a work of supervised learning and achieved very good results.[18]
  • AlexNet was used as the basic transfer learning model.[19]
  • We used AlexNet as the basic transfer learning model and tested different transfer configurations.[19]
  • The original AlexNet was trained on two graphics processing units (GPUs).[19]
  • Nowadays, researchers tend to use only one GPU to implement AlexNet.[19]
  • Figure 2 illustrates the structure of AlexNet.[19]
  • The details of learnable weights and biases of AlexNet are shown in Table 3.[19]
  • Compared to traditional neural networks, there are several advanced techniques used in AlexNet.[19]
  • Hence, we could not directly apply AlexNet as the feature extractor.[19]
  • AlexNet consists of five convolutional layers (CL1–CL5) and three fully connected layers (FCL6, FCL7, FCL8).[19]
  • AlexNet can make full use of all its parameters with a big dataset.[19]
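Several of the bullets above quote AlexNet's layer geometry: 227×227 inputs, 3×3 max pooling with a stride of 2, and feature maps shrinking toward the fully connected layers. As a minimal sketch, the standard output-size formula reproduces the familiar feature-map sizes; the specific filter sizes, strides, and paddings below follow the common 227-input description of the first two stages and are stated here for illustration:

```python
def out_size(n, f, s, p=0):
    """Spatial output size of a conv/pool layer: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

# conv1: 11x11 filters, stride 4, on a 227x227 input
c1 = out_size(227, 11, 4)    # 55x55 feature maps
# 3x3 max pooling with a stride of 2 (overlapping pooling, as in the bullets above)
p1 = out_size(c1, 3, 2)      # 27x27
# conv2: 5x5 filters, stride 1, padding 2 (size-preserving)
c2 = out_size(p1, 5, 1, 2)   # 27x27
p2 = out_size(c2, 3, 2)      # 13x13
print(c1, p1, c2, p2)
```

Because the 3×3 windows move only 2 pixels at a time, adjacent pooling windows overlap by one pixel, which the AlexNet authors reported as slightly reducing overfitting.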
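Several bullets note that AlexNet used the ReLU nonlinearity instead of tanh or sigmoid. A tiny sketch of why this matters: tanh saturates toward ±1 for large inputs (so gradients vanish there), while ReLU is non-saturating for positive inputs:

```python
import math

def relu(x):
    """ReLU: f(x) = max(0, x); non-saturating for positive inputs."""
    return max(0.0, x)

def tanh(x):
    """tanh saturates toward +/-1, so its gradient vanishes for large |x|."""
    return math.tanh(x)

print(relu(-2.0), relu(3.0))   # negative inputs are zeroed, positives pass through
print(round(tanh(3.0), 3))     # already close to 1: the saturating regime
```

The cheaper, non-saturating gradient is what the bullets above credit for the roughly six-fold training speedup over an equivalent tanh network.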
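The error figures quoted above (a 17.0% top-5 error, 15.3% vs. 26.2% in LSVRC-2012) use the top-5 criterion: a prediction counts as correct if the true label appears anywhere in the model's five highest-scoring classes. A minimal sketch of that metric, with made-up scores over a hypothetical 8-class problem:

```python
def top5_correct(scores, true_label):
    """True if the ground-truth label is among the 5 highest-scoring classes."""
    top5 = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:5]
    return true_label in top5

# Toy example: class 2 is only ranked 3rd, but that still counts under top-5.
scores = [0.1, 0.3, 0.15, 0.05, 0.2, 0.02, 0.08, 0.1]
print(top5_correct(scores, 2))  # True
print(top5_correct(scores, 5))  # False: class 5 has the lowest score
```

The top-5 error rate over a dataset is simply the fraction of images for which this check fails.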
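Two bullets describe the data-augmentation step of extracting random 227×227 crops from the 256×256 images. A minimal sketch of the offset arithmetic (the function name is illustrative, not from any library):

```python
import random

def random_crop_offsets(h, w, crop=227):
    """Pick a random top-left corner so a crop x crop patch fits inside h x w."""
    top = random.randint(0, h - crop)
    left = random.randint(0, w - crop)
    return top, left

random.seed(0)  # seeded only to make this sketch reproducible
top, left = random_crop_offsets(256, 256)
print(top, left)  # each offset lies in [0, 29], since 256 - 227 = 29
```

Each of the 29×29 possible corner positions (plus horizontal flips, in the original scheme) yields a slightly different training example from the same image, which is how this augmentation multiplies the effective dataset size.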
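The bullets above also credit dropout with controlling overfitting in the large fully connected layers. A minimal sketch of the idea, using the modern "inverted" formulation (scale survivors at training time); the original paper instead scaled activations at test time, but the two are equivalent:

```python
import random

def dropout(activations, p=0.5, rng=random):
    """Inverted dropout: zero each unit with probability p, scale survivors by 1/(1-p).

    Scaling keeps the expected activation unchanged, so no adjustment
    is needed at test time (where dropout is simply disabled).
    """
    return [0.0 if rng.random() < p else a / (1 - p) for a in activations]

random.seed(42)  # seeded only to make this sketch reproducible
print(dropout([1.0, 2.0, 3.0, 4.0]))
```

Because a different random subset of units is silenced on every forward pass, no unit can rely on the presence of any particular other unit, which is the co-adaptation-breaking effect the AlexNet authors were after.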