MNIST deep learning sample source code

This page shows MNIST sample source code with less code than other web pages. I have removed redundant code as much as possible so that the code is easy to understand.


Keras (TensorFlow) MNIST sample source code

Install the CPU version of Keras (TensorFlow) as follows.

pip install tensorflow
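
To check that the installation succeeded, you can print the installed version, for example:

import tensorflow as tf
print(tf.__version__)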

The Keras (TensorFlow) MNIST source code is as follows.

import keras

# Load the MNIST dataset: 60,000 training and 10,000 test images (28x28 grayscale)
(train_images,train_labels),(test_images,test_labels)=keras.datasets.mnist.load_data()

# Fully connected network: flatten -> 128 ReLU units -> 10-class softmax
model=keras.Sequential([
  keras.layers.Flatten(input_shape=(28,28)),
  keras.layers.Dense(128,activation='relu'),
  keras.layers.Dense(10,activation='softmax')])

# sparse_categorical_crossentropy takes the integer labels (0-9) directly
model.compile(
  optimizer='adam',
  loss='sparse_categorical_crossentropy',
  metrics=['accuracy'])

# Train for 10 epochs
model.fit(train_images,train_labels,epochs=10)

# Predict scores for all test images, then print two samples
predictions=model.predict(test_images,verbose=1)
print(test_images[30])
print(predictions[30])
print(test_images[35])
print(predictions[35])

Output is as follows.

[1.0986688e-23 8.2032309e-17 8.9843673e-16 9.9999857e-01 4.4098041e-11 3.7638634e-07 7.8312568e-38 7.3481271e-13 1.8313495e-12 1.0399993e-06]

The score at index 3 is the largest of the 10 numbers, so this image is recognized as the digit 3.

[2.0747413e-15 4.0649436e-08 9.9999332e-01 1.4631374e-09 0.0000000e+00 1.5581767e-12 9.1050986e-19 6.6151097e-06 2.5312184e-22 0.0000000e+00]

The score at index 2 is the largest of the 10 numbers, so this image is recognized as the digit 2.
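
To get the recognized digit directly instead of reading the 10 scores by eye, you can take the index of the largest score, for example with numpy (using the predictions variable from the code above):

import numpy as np
print(np.argmax(predictions[30]))  # 3 for the sample output above
print(np.argmax(predictions[35]))  # 2 for the sample output above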


PyTorch MNIST sample source code

Install the CPU version of PyTorch as follows.
Open https://pytorch.org/ , go to [INSTALL PYTORCH], and select [Stable], [Windows], [Pip], [Python], [CPU]. [Run this Command] shows the command to install.

pip3 install torch torchvision torchaudio
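
After installation, you can check the installed version and confirm that it is the CPU-only build, for example:

import torch
print(torch.__version__)
print(torch.cuda.is_available())  # False for the CPU-only build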

The PyTorch MNIST (CPU version) source code is as follows.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets,transforms

# Convert the PIL images to tensors scaled to [0,1]
transform=transforms.Compose([
  transforms.ToTensor()
])

# Download the MNIST training and test sets
traindata=datasets.MNIST(
  root='./data',
  train=True,
  download=True,
  transform=transform
)

testdata=datasets.MNIST(
  root='./data',
  train=False,
  download=True,
  transform=transform
)

trainloader=DataLoader(traindata,batch_size=64,shuffle=True)

# Fully connected network: flatten -> 128 ReLU units -> 10 output scores (logits)
class Net(nn.Module):
  def __init__(self):
    super().__init__()
    self.l1=nn.Linear(28*28,128)
    self.l2=nn.Linear(128,10)

  def forward(self,x):
    x=torch.flatten(x,1)
    x=self.l1(x)
    x=F.relu(x)
    x=self.l2(x)
    # Return raw logits; CrossEntropyLoss applies softmax internally
    return x

model=Net()
criterion=nn.CrossEntropyLoss()
optimizer=optim.Adam(model.parameters())

# Train for 10 epochs
for epoch in range(10):
  print('epoch',epoch,'/',10)
  model.train()
  for images,labels in trainloader:
    optimizer.zero_grad()
    outputs=model(images)
    loss=criterion(outputs,labels)
    loss.backward()
    optimizer.step()

# Predict two test images; softmax turns the logits into probabilities for printing
model.eval()
with torch.no_grad():
  image,label=testdata[30]
  output=F.softmax(model(image),dim=1)
  print(image)
  print(output)
  image,label=testdata[35]
  output=F.softmax(model(image),dim=1)
  print(image)
  print(output)

Output is as follows.

tensor([[2.4412e-17, 3.1114e-13, 4.4132e-16, 1.0000e+00, 6.7855e-17, 1.8957e-09, 2.4993e-22, 1.2994e-13, 3.7946e-13, 8.4796e-09]])

The score at index 3 is the largest of the 10 numbers, so this image is recognized as the digit 3.

tensor([[2.0772e-14, 2.3950e-12, 1.0000e+00, 2.1788e-07, 4.5914e-26, 5.4916e-11, 1.6610e-16, 1.3890e-21, 3.9839e-12, 1.4265e-22]])

The score at index 2 is the largest of the 10 numbers, so this image is recognized as the digit 2.
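
As in the Keras example, the recognized digit can be obtained as the index of the largest score, for example with torch.argmax (using the output variable from the code above, which holds the scores for test image 35):

print(torch.argmax(output,dim=1).item())  # 2 for the sample output above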

