{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Notebook 13: Using Deep Learning to Study SUSY with Pytorch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Learning Goals\n", "The goal of this notebook is to introduce the powerful PyTorch framework for building neural networks and use it to analyze the SUSY dataset. After this notebook, the reader should understand the mechanics of PyTorch and how to construct DNNs using this package. In addition, the reader is encouraged to explore the GPU backend available in Pytorch on this dataset.\n", "\n", "## Overview\n", "In this notebook, we use Deep Neural Networks to classify the supersymmetry dataset, first introduced by Baldi et al. in [Nature Communication (2015)](https://www.nature.com/articles/ncomms5308). The SUSY data set consists of 5,000,000 Monte-Carlo samples of supersymmetric and non-supersymmetric collisions with $18$ features. The signal process is the production of electrically-charged supersymmetric particles which decay to $W$ bosons and an electrically-neutral supersymmetric particle that is invisible to the detector.\n", "\n", "The first $8$ features are \"raw\" kinematic features that can be directly measured from collisions. The final $10$ features are \"hand constructed\" features that have been chosen using physical knowledge and are known to be important in distinguishing supersymmetric and non-supersymmetric collision events. More specifically, they are given by the column names below.\n", "\n", "In this notebook, we study this dataset using Pytorch." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from __future__ import print_function, division\n", "import os,sys\n", "import numpy as np\n", "import torch # pytorch package, allows using GPUs\n", "# fix seed\n", "seed=17\n", "np.random.seed(seed)\n", "torch.manual_seed(seed)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Structure of the Procedure\n", "\n", "Constructing a Deep Neural Network to solve ML problems is a multiple-stage process. Quite generally, one can identify the key steps as follows:\n", "\n", "* ***step 1:*** Load and process the data\n", "* ***step 2:*** Define the model and its architecture\n", "* ***step 3:*** Choose the optimizer and the cost function\n", "* ***step 4:*** Train the model \n", "* ***step 5:*** Evaluate the model performance on the *unseen* test data\n", "* ***step 6:*** Modify the hyperparameters to optimize performance for the specific data set\n", "\n", "Below, we sometimes combine some of these steps together for convenience.\n", "\n", "Notice that we take a rather different approach, compared to the simpler MNIST Keras notebook. We first define a set of classes and functions and run the actual computation only in the very end." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 1: Load and Process the SUSY Dataset\n", "\n", "The supersymmetry dataset can be downloaded from the UCI Machine Learning repository on [https://archive.ics.uci.edu/ml/machine-learning-databases/00279/SUSY.csv.gz](https://archive.ics.uci.edu/ml/machine-learning-databases/00279/SUSY.csv.gz). The dataset is quite large. Download the dataset and unzip it in a directory.\n", "\n", "Loading data in Pytroch is done by creating a user-defined a class, which we name `SUSY_Dataset`, and is a child of the `torch.utils.data.Dataset` class. 
This ensures that all necessary attributes required for the processing of the data during the training and test stages are easily inherited. The `__init__` method of our custom data class should contain the usual code for loading the data, which is problem-specific, and has been discussed for the SUSY data set in Notebook 5. More importantly, the user-defined data class must override the `__len__` and `__getitem__` methods of the parent `Dataset` class. The former returns the size of the data set, while the latter allows the user to access a particular data point from the set by specifying its index." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from torchvision import datasets # load data\n", "\n", "class SUSY_Dataset(torch.utils.data.Dataset):\n", " \"\"\"SUSY pytorch dataset.\"\"\"\n", "\n", " def __init__(self, data_file, root_dir, dataset_size, train=True, transform=None, high_level_feats=None):\n", " \"\"\"\n", " Args:\n", " data_file (string): Name of the csv file containing the data.\n", " root_dir (string): Directory containing the data file.\n", " dataset_size (int): Number of samples to load from the csv file.\n", " train (bool, optional): If set to `True` load training data.\n", " transform (callable, optional): Optional transform to be applied on a sample.\n", " high_level_feats (bool, optional): If set to `True`, working with high-level features only. \n", " If set to `False`, working with low-level features only.\n", " Default is `None`: working with all features\n", " \"\"\"\n", "\n", " import pandas as pd\n", "\n", " features=['SUSY','lepton 1 pT', 'lepton 1 eta', 'lepton 1 phi', 'lepton 2 pT', 'lepton 2 eta', 'lepton 2 phi', \n", " 'missing energy magnitude', 'missing energy phi', 'MET_rel', 'axial MET', 'M_R', 'M_TR_2', 'R', 'MT2', \n", " 'S_R', 'M_Delta_R', 'dPhi_r_b', 'cos(theta_r1)']\n", "\n", " low_features=['lepton 1 pT', 'lepton 1 eta', 'lepton 1 phi', 'lepton 2 pT', 'lepton 2 eta', 'lepton 2 phi', \n", " 'missing energy magnitude', 'missing energy phi']\n", "\n", " high_features=['MET_rel', 'axial MET', 'M_R', 'M_TR_2', 'R', 'MT2','S_R', 'M_Delta_R', 'dPhi_r_b', 'cos(theta_r1)']\n", "\n", "\n", " # load only the first dataset_size datapoints\n", " df = pd.read_csv(root_dir+data_file, header=None,nrows=dataset_size,engine='python')\n", " df.columns=features\n", " Y = df['SUSY']\n", " X = df[[col for col in df.columns if col!=\"SUSY\"]]\n", "\n", " # set training and test data size\n", " train_size=int(0.8*dataset_size)\n", " self.train=train\n", "\n", " if self.train:\n", " X=X[:train_size]\n", " Y=Y[:train_size]\n", " print(\"Training on {} examples\".format(train_size))\n", " else:\n", " X=X[train_size:]\n", " Y=Y[train_size:]\n", " print(\"Testing on {} examples\".format(dataset_size-train_size))\n", "\n", "\n", " self.root_dir = root_dir\n", " self.transform = transform\n", "\n", " # make datasets using only the 8 low-level features and 10 high-level features\n", " if high_level_feats is None:\n", " self.data=(X.values.astype(np.float32),Y.values.astype(int))\n", " print(\"Using both high and low level features\")\n", " elif high_level_feats is True:\n", " self.data=(X[high_features].values.astype(np.float32),Y.values.astype(int))\n", " print(\"Using high-level features only.\")\n", " elif high_level_feats is False:\n", " self.data=(X[low_features].values.astype(np.float32),Y.values.astype(int))\n", " print(\"Using low-level features only.\")\n", "\n", "\n", " # override __len__ and __getitem__ of the Dataset() class\n", "\n", " def __len__(self):\n", " return len(self.data[1])\n", "\n", " 
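# return the (features, label) pair with index idx; DataLoader uses this to assemble mini-batches\n", " 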
def __getitem__(self, idx):\n", "\n", " sample=(self.data[0][idx,...],self.data[1][idx])\n", "\n", " if self.transform:\n", " sample=self.transform(sample)\n", "\n", " return sample" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Last, we define a helper function `load_data()` that accepts as a required argument the set of parameters `args`, and returns two generators, `train_loader` and `test_loader`, which readily yield mini-batches." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "def load_data(args):\n", "\n", " data_file='SUSY.csv'\n", " root_dir=os.path.expanduser('~')+'/ML_review/SUSY_data/'\n", "\n", " kwargs = {} # CUDA arguments, if enabled\n", " # load train and test data\n", " train_loader = torch.utils.data.DataLoader(\n", " SUSY_Dataset(data_file,root_dir,args.dataset_size,train=True,high_level_feats=args.high_level_feats),\n", " batch_size=args.batch_size, shuffle=True, **kwargs)\n", "\n", " test_loader = torch.utils.data.DataLoader(\n", " SUSY_Dataset(data_file,root_dir,args.dataset_size,train=False,high_level_feats=args.high_level_feats),\n", " batch_size=args.test_batch_size, shuffle=True, **kwargs)\n", "\n", " return train_loader, test_loader" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2: Define the Neural Net and its Architecture\n", "\n", "To construct neural networks with Pytorch, we make another class called `model` as a child of Pytorch's `nn.Module` class. The `model` class initializes the types of layers needed for the deep neural net in its `__init__` method, while the DNN is assembled in a method called `forward`, which accepts an `autograd.Variable` object and returns the output layer. Using this convention, Pytorch automatically recognizes the structure of the DNN, and the `autograd` module computes the gradients forward and backward using backprop.\n", "\n", "Our code below is constructed in such a way that one can choose whether to use the high-level and low-level features separately or all together. This choice determines the size of the fully-connected input layer `fc1`. Therefore the `__init__` method accepts the optional argument `high_level_feats`. 
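\n", "\n", "Once the cell below has been run, a quick sanity check of the architecture can be done by feeding the network a batch of random inputs and inspecting the output shape. This is a minimal sketch (the batch of four random samples is purely illustrative):\n", "\n", "```python\n", "DNN = model() # default: network for all 18 features\n", "x = torch.randn(4, 18) # a batch of four random inputs\n", "print(DNN(x).shape) # expected: torch.Size([4, 2])\n", "```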
" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import torch.nn as nn # construct NN\n", "\n", "class model(nn.Module):\n", " def __init__(self,high_level_feats=None):\n", " # inherit attributes and methods of nn.Module\n", " super(model, self).__init__()\n", "\n", " # an affine operation: y = Wx + b\n", " if high_level_feats is None:\n", " self.fc1 = nn.Linear(18, 200) # all features\n", " elif high_level_feats:\n", " self.fc1 = nn.Linear(10, 200) # low-level only\n", " else:\n", " self.fc1 = nn.Linear(8, 200) # high-level only\n", "\n", "\n", " self.batchnorm1=nn.BatchNorm1d(200, eps=1e-05, momentum=0.1)\n", " self.batchnorm2=nn.BatchNorm1d(100, eps=1e-05, momentum=0.1)\n", "\n", " self.fc2 = nn.Linear(200, 100) # see forward function for dimensions\n", " self.fc3 = nn.Linear(100, 2)\n", "\n", " def forward(self, x):\n", " '''Defines the feed-forward function for the NN.\n", "\n", " A backward function is automatically defined using `torch.autograd`\n", "\n", " Parameters\n", " ----------\n", " x : autograd.Tensor\n", " input data\n", "\n", " Returns\n", " -------\n", " autograd.Tensor\n", " output layer of NN\n", "\n", " '''\n", "\n", " # apply rectified linear unit\n", " x = F.relu(self.fc1(x))\n", " # apply dropout\n", " #x=self.batchnorm1(x)\n", " x = F.dropout(x, training=self.training)\n", "\n", "\n", " # apply rectified linear unit\n", " x = F.relu(self.fc2(x))\n", " # apply dropout\n", " #x=self.batchnorm2(x)\n", " x = F.dropout(x, training=self.training)\n", "\n", "\n", " # apply affine operation fc2\n", " x = self.fc3(x)\n", " # soft-max layer\n", " x = F.log_softmax(x,dim=1)\n", "\n", " return x" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Steps 3+4+5: Choose the Optimizer and the Cost Function. Train and Evaluate the Model\n", "\n", "Next, we define the function `evaluate_model`. The first argument, `args`, contains all hyperparameters needed for the DNN (see below). The second and third arguments are the `train_loader` and the `test_loader` objects, returned by the function `load_data()` we defined in Step 1 above. The `evaluate_model` function returns the final `test_loss` and `test_accuracy` of the model.\n", "\n", "First, we initialize a `model` and call the object `DNN`. In order to define the loss function and the optimizer, we use modules `torch.nn.functional` (imported here as `F`) and `torch.optim`. As a loss function we choose the negative log-likelihood, and stored is under the variable `criterion`. As usual, we can choose any from a variety of different SGD-based optimizers, but we focus on the traditional SGD.\n", "\n", "Next, we define two functions: `train()` and `test()`. They are called at the end of `evaluate_model` where we loop over the training epochs to train and test our model. \n", "\n", "The `train` function accepts an integer called `epoch`, which is only used to print the training data. We first set the `DNN` in a train mode using the `train()` method inherited from `nn.Module`. Then we loop over the mini-batches in `train_loader`. We cast the data as pytorch `Variable`, re-set the `optimizer`, perform the forward step by calling the `DNN` model on the `data` and computing the `loss`. The backprop algorithm is then easily done using the `backward()` method of the loss function `criterion`. We use `optimizer.step` to update the weights of the `DNN`. Last print the performance for every minibatch. 
Finally, `train` returns the loss on the last minibatch.\n", "\n", "The `test` function is similar to `train` but its purpose is to test the performance of a trained model. Once we set the `DNN` model in `eval()` mode, the following steps are similar to those in `train`. We then compute the `test_loss` and the number of `correct` predictions, print the results and return them. " ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "import torch.nn.functional as F # implements forward and backward definitions of an autograd operation\n", "import torch.optim as optim # different update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc\n", "\n", "def evaluate_model(args,train_loader,test_loader):\n", "\n", " # create model\n", " DNN = model(high_level_feats=args.high_level_feats)\n", " # negative log-likelihood (nll) loss for training: takes class labels NOT one-hot vectors!\n", " criterion = F.nll_loss\n", " # define SGD optimizer\n", " optimizer = optim.SGD(DNN.parameters(), lr=args.lr, momentum=args.momentum)\n", " #optimizer = optim.Adam(DNN.parameters(), lr=0.001, betas=(0.9, 0.999))\n", "\n", "\n", " ################################################\n", "\n", " def train(epoch):\n", " '''Trains a NN using minibatches.\n", "\n", " Parameters\n", " ----------\n", " epoch : int\n", " Training epoch number.\n", "\n", " '''\n", "\n", " # set model to training mode (affects Dropout and BatchNorm)\n", " DNN.train()\n", " # loop over training data\n", " for batch_idx, (data, label) in enumerate(train_loader):\n", " # zero gradient buffers\n", " optimizer.zero_grad()\n", " # compute output of final layer: forward step\n", " output = DNN(data)\n", " # compute loss\n", " loss = criterion(output, label)\n", " # run backprop: backward step\n", " loss.backward()\n", " # update weights of NN\n", " optimizer.step()\n", " \n", " # print loss at current epoch\n", " if batch_idx % args.log_interval == 0:\n", " print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n", " epoch, batch_idx * len(data), len(train_loader.dataset),\n", " 100. * batch_idx / len(train_loader), loss.item() ))\n", " \n", "\n", " return loss.item()\n", "\n", " ################################################\n", "\n", " def test():\n", " '''Tests NN performance.\n", "\n", " '''\n", "\n", " # evaluate model\n", " DNN.eval()\n", "\n", " test_loss = 0 # loss function on test data\n", " correct = 0 # number of correct predictions\n", " # loop over test data\n", " for data, label in test_loader:\n", " # compute model prediction softmax probability\n", " output = DNN(data)\n", " # compute test loss\n", " test_loss += criterion(output, label, size_average=False).item() # sum up batch loss (size_average is deprecated in recent pytorch; use reduction='sum')\n", " # find most likely prediction\n", " pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability\n", " # update number of correct predictions\n", " correct += pred.eq(label.data.view_as(pred)).cpu().sum().item()\n", "\n", " # print test loss\n", " test_loss /= len(test_loader.dataset)\n", " \n", " print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)\\n'.format(\n", " test_loss, correct, len(test_loader.dataset),\n", " 100. 
* correct / len(test_loader.dataset)))\n", " \n", "\n", " return test_loss, correct / len(test_loader.dataset)\n", "\n", "\n", " ################################################\n", "\n", "\n", " train_loss=np.zeros((args.epochs,))\n", " test_loss=np.zeros_like(train_loss)\n", " test_accuracy=np.zeros_like(train_loss)\n", "\n", " epochs=range(1, args.epochs + 1)\n", " for epoch in epochs:\n", "\n", " train_loss[epoch-1] = train(epoch)\n", " test_loss[epoch-1], test_accuracy[epoch-1] = test()\n", "\n", "\n", "\n", " return test_loss[-1], test_accuracy[-1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 6: Modify the Hyperparameters to Optimize Performance of the Model\n", "\n", "To study the performance of the model for a variety of different `dataset_sizes` and `learning_rates`, we do a grid search. \n", "\n", "Let us define a function `grid_search`, which accepts the `args` variable containing all hyper-parameters needed for the problem. After choosing a set of `dataset_sizes` and logarithmically-spaced `learning_rates`, we first loop over all `dataset_sizes`, update the `args` variable, and call the `load_data` function. We then loop once again over all `learning_rates`, update `args` and call `evaluate_model`." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "def grid_search(args):\n", "\n", "\n", " # perform grid search over learning rate and data set size\n", " dataset_sizes=[1000, 10000, 100000, 200000] #np.logspace(2,5,4).astype('int')\n", " learning_rates=np.logspace(-5,-1,5)\n", "\n", " # pre-allocate data\n", " test_loss=np.zeros((len(dataset_sizes),len(learning_rates)),dtype=np.float64)\n", " test_accuracy=np.zeros_like(test_loss)\n", "\n", " # do grid search\n", " for i, dataset_size in enumerate(dataset_sizes):\n", " # update data set size parameters\n", " args.dataset_size=dataset_size\n", " args.batch_size=int(0.01*dataset_size)\n", "\n", " # load data\n", " train_loader, test_loader = load_data(args)\n", "\n", " for j, lr in enumerate(learning_rates):\n", " # update learning rate\n", " args.lr=lr\n", "\n", " print(\"\\n training DNN with %5d data points and SGD lr=%0.6f. \\n\" %(dataset_size,lr) )\n", "\n", " test_loss[i,j],test_accuracy[i,j] = evaluate_model(args,train_loader,test_loader)\n", "\n", "\n", " plot_data(learning_rates,dataset_sizes,test_accuracy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Last, we use the function `plot_data`, defined below, to plot the results. 
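\n", "\n", "For reference, the hyperparameter container `args` that enters the functions above simply needs the attributes `dataset_size`, `batch_size`, `test_batch_size`, `high_level_feats`, `lr`, `momentum`, `epochs` and `log_interval`. A minimal `argparse`-based sketch of how it can be assembled is shown here; the default values are illustrative and not necessarily those used to produce the output below:\n", "\n", "```python\n", "import argparse\n", "\n", "parser = argparse.ArgumentParser(description='Pytorch SUSY example')\n", "parser.add_argument('--epochs', type=int, default=10) # number of training epochs\n", "parser.add_argument('--lr', type=float, default=0.01) # learning rate (overwritten by grid_search)\n", "parser.add_argument('--momentum', type=float, default=0.9) # SGD momentum\n", "parser.add_argument('--test-batch-size', type=int, default=1000) # mini-batch size for testing\n", "parser.add_argument('--log-interval', type=int, default=10) # print progress every so many batches\n", "args = parser.parse_args(args=[]) # parse the defaults inside the notebook\n", "args.high_level_feats = None # None: all features; True/False: high-/low-level only\n", "\n", "grid_search(args) # run the grid search (requires plot_data below to be defined)\n", "```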
" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "def plot_data(x,y,data):\n", "\n", " # plot results\n", " fontsize=16\n", "\n", "\n", " fig = plt.figure()\n", " ax = fig.add_subplot(111)\n", " cax = ax.matshow(data, interpolation='nearest', vmin=0, vmax=1)\n", " \n", " cbar=fig.colorbar(cax)\n", " cbar.ax.set_ylabel('accuracy (%)',rotation=90,fontsize=fontsize)\n", " cbar.set_ticks([0,.2,.4,0.6,0.8,1.0])\n", " cbar.set_ticklabels(['0%','20%','40%','60%','80%','100%'])\n", "\n", " # put text on matrix elements\n", " for i, x_val in enumerate(np.arange(len(x))):\n", " for j, y_val in enumerate(np.arange(len(y))):\n", " c = \"${0:.1f}\\\\%$\".format( 100*data[j,i]) \n", " ax.text(x_val, y_val, c, va='center', ha='center')\n", "\n", " # convert axis vaues to to string labels\n", " x=[str(i) for i in x]\n", " y=[str(i) for i in y]\n", "\n", "\n", " ax.set_xticklabels(['']+x)\n", " ax.set_yticklabels(['']+y)\n", "\n", " ax.set_xlabel('$\\\\mathrm{learning\\\\ rate}$',fontsize=fontsize)\n", " ax.set_ylabel('$\\\\mathrm{hidden\\\\ neurons}$',fontsize=fontsize)\n", "\n", " plt.tight_layout()\n", "\n", " plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run Code\n", "\n", "As we mentioned in the beginning of the notebook, all functions and classes discussed above only specify the procedure but do not actually perform any computations. This allows us to re-use them for different problems. \n", "\n", "Actually running the training and testing for every point in the grid search is done below. The `argparse` class allows us to conveniently keep track of all hyperparameters, stored in the variable `args` which enters most of the functions we defined above. \n", "\n", "To run the simulation, we call the function `grid_search`. " ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Exercises\n", "\n", "* One of the advantages of Pytorch is that it allows to automatically use the CUDA library for fast performance on GPU's. For the sake of clarity, we have omitted this in the above notebook. Go online to check how to put the CUDA commands back into the code above. _Hint:_ study the [Pytorch MNIST tutorial](https://github.com/pytorch/examples/blob/master/mnist/main.py) to see how this works in practice.\n" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Training on 800 examples\n", "Using both high and low level features\n", "Testing on 200 examples\n", "Using both high and low level features\n", "\n", " training DNN with 1000 data points and SGD lr=0.000010. 
\n", "\n", "Train Epoch: 1 [0/800 (0%)]\tLoss: 0.561476\n", "Train Epoch: 1 [100/800 (12%)]\tLoss: 0.823435\n", "Train Epoch: 1 [200/800 (25%)]\tLoss: 0.647225\n", "Train Epoch: 1 [300/800 (38%)]\tLoss: 0.612186\n", "Train Epoch: 1 [400/800 (50%)]\tLoss: 0.962393\n", "Train Epoch: 1 [500/800 (62%)]\tLoss: 0.835941\n", "Train Epoch: 1 [600/800 (75%)]\tLoss: 0.808794\n", "Train Epoch: 1 [700/800 (88%)]\tLoss: 0.766973\n", "\n", "Test set: Average loss: 0.7115, Accuracy: 109/200 (54.500%)\n", "\n", "Train Epoch: 2 [0/800 (0%)]\tLoss: 0.861468\n", "Train Epoch: 2 [100/800 (12%)]\tLoss: 0.653841\n", "Train Epoch: 2 [200/800 (25%)]\tLoss: 0.823339\n", "Train Epoch: 2 [300/800 (38%)]\tLoss: 0.745887\n", "Train Epoch: 2 [400/800 (50%)]\tLoss: 0.694589\n", "Train Epoch: 2 [500/800 (62%)]\tLoss: 0.693052\n", "Train Epoch: 2 [600/800 (75%)]\tLoss: 0.719047\n", "Train Epoch: 2 [700/800 (88%)]\tLoss: 0.591686\n", "\n", "Test set: Average loss: 0.7107, Accuracy: 109/200 (54.500%)\n", "\n", "Train Epoch: 3 [0/800 (0%)]\tLoss: 0.728128\n", "Train Epoch: 3 [100/800 (12%)]\tLoss: 0.698269\n", "Train Epoch: 3 [200/800 (25%)]\tLoss: 0.705191\n", "Train Epoch: 3 [300/800 (38%)]\tLoss: 0.683300\n", "Train Epoch: 3 [400/800 (50%)]\tLoss: 0.732665\n", "Train Epoch: 3 [500/800 (62%)]\tLoss: 0.849138\n", "Train Epoch: 3 [600/800 (75%)]\tLoss: 0.728277\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/mgbukov/miniconda3/envs/mlreview/lib/python3.6/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.\n", " warnings.warn(warning.format(ret))\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Train Epoch: 3 [700/800 (88%)]\tLoss: 0.743678\n", "\n", "Test set: Average loss: 0.7098, Accuracy: 109/200 (54.500%)\n", "\n", "Train Epoch: 4 [0/800 (0%)]\tLoss: 0.687534\n", "Train Epoch: 4 [100/800 (12%)]\tLoss: 0.729848\n", "Train Epoch: 4 [200/800 (25%)]\tLoss: 0.774060\n", "Train Epoch: 4 [300/800 (38%)]\tLoss: 0.767740\n", "Train Epoch: 4 [400/800 (50%)]\tLoss: 0.747602\n", "Train Epoch: 4 [500/800 (62%)]\tLoss: 0.678511\n", "Train Epoch: 4 [600/800 (75%)]\tLoss: 0.892563\n", "Train Epoch: 4 [700/800 (88%)]\tLoss: 0.617390\n", "\n", "Test set: Average loss: 0.7091, Accuracy: 109/200 (54.500%)\n", "\n", "Train Epoch: 5 [0/800 (0%)]\tLoss: 0.666867\n", "Train Epoch: 5 [100/800 (12%)]\tLoss: 0.747424\n", "Train Epoch: 5 [200/800 (25%)]\tLoss: 0.623322\n", "Train Epoch: 5 [300/800 (38%)]\tLoss: 0.803995\n", "Train Epoch: 5 [400/800 (50%)]\tLoss: 0.729541\n", "Train Epoch: 5 [500/800 (62%)]\tLoss: 0.844938\n", "Train Epoch: 5 [600/800 (75%)]\tLoss: 0.717547\n", "Train Epoch: 5 [700/800 (88%)]\tLoss: 0.595089\n", "\n", "Test set: Average loss: 0.7084, Accuracy: 109/200 (54.500%)\n", "\n", "Train Epoch: 6 [0/800 (0%)]\tLoss: 0.772091\n", "Train Epoch: 6 [100/800 (12%)]\tLoss: 0.641700\n", "Train Epoch: 6 [200/800 (25%)]\tLoss: 0.948149\n", "Train Epoch: 6 [300/800 (38%)]\tLoss: 0.783350\n", "Train Epoch: 6 [400/800 (50%)]\tLoss: 0.803563\n", "Train Epoch: 6 [500/800 (62%)]\tLoss: 0.749601\n", "Train Epoch: 6 [600/800 (75%)]\tLoss: 0.590462\n", "Train Epoch: 6 [700/800 (88%)]\tLoss: 0.793399\n", "\n", "Test set: Average loss: 0.7077, Accuracy: 109/200 (54.500%)\n", "\n", "Train Epoch: 7 [0/800 (0%)]\tLoss: 0.699198\n", "Train Epoch: 7 [100/800 (12%)]\tLoss: 0.538285\n", "Train Epoch: 7 [200/800 (25%)]\tLoss: 0.637657\n", "Train Epoch: 7 [300/800 (38%)]\tLoss: 0.707620\n", "Train Epoch: 7 [400/800 (50%)]\tLoss: 
0.739883\n", "Train Epoch: 7 [500/800 (62%)]\tLoss: 0.710847\n", "Train Epoch: 7 [600/800 (75%)]\tLoss: 0.883698\n", "Train Epoch: 7 [700/800 (88%)]\tLoss: 0.878761\n", "\n", "Test set: Average loss: 0.7069, Accuracy: 109/200 (54.500%)\n", "\n", "Train Epoch: 8 [0/800 (0%)]\tLoss: 0.640105\n", "Train Epoch: 8 [100/800 (12%)]\tLoss: 0.690040\n", "Train Epoch: 8 [200/800 (25%)]\tLoss: 0.777747\n", "Train Epoch: 8 [300/800 (38%)]\tLoss: 0.596149\n", "Train Epoch: 8 [400/800 (50%)]\tLoss: 0.601473\n", "Train Epoch: 8 [500/800 (62%)]\tLoss: 0.621435\n", "Train Epoch: 8 [600/800 (75%)]\tLoss: 0.908196\n", "Train Epoch: 8 [700/800 (88%)]\tLoss: 0.859109\n", "\n", "Test set: Average loss: 0.7062, Accuracy: 109/200 (54.500%)\n", "\n", "Train Epoch: 9 [0/800 (0%)]\tLoss: 0.566881\n", "Train Epoch: 9 [100/800 (12%)]\tLoss: 0.710102\n", "Train Epoch: 9 [200/800 (25%)]\tLoss: 0.788556\n", "Train Epoch: 9 [300/800 (38%)]\tLoss: 0.702322\n", "Train Epoch: 9 [400/800 (50%)]\tLoss: 0.720015\n", "Train Epoch: 9 [500/800 (62%)]\tLoss: 0.645902\n", "Train Epoch: 9 [600/800 (75%)]\tLoss: 0.741547\n", "Train Epoch: 9 [700/800 (88%)]\tLoss: 0.765223\n", "\n", "Test set: Average loss: 0.7055, Accuracy: 109/200 (54.500%)\n", "\n", "Train Epoch: 10 [0/800 (0%)]\tLoss: 0.819819\n", "Train Epoch: 10 [100/800 (12%)]\tLoss: 0.629852\n", "Train Epoch: 10 [200/800 (25%)]\tLoss: 0.679486\n", "Train Epoch: 10 [300/800 (38%)]\tLoss: 0.681212\n", "Train Epoch: 10 [400/800 (50%)]\tLoss: 0.841154\n", "Train Epoch: 10 [500/800 (62%)]\tLoss: 0.610462\n", "Train Epoch: 10 [600/800 (75%)]\tLoss: 0.742932\n", "Train Epoch: 10 [700/800 (88%)]\tLoss: 0.743243\n", "\n", "Test set: Average loss: 0.7048, Accuracy: 109/200 (54.500%)\n", "\n", "\n", " training DNN with 1000 data points and SGD lr=0.000100. 
\n", "\n", "Train Epoch: 1 [0/800 (0%)]\tLoss: 0.707171\n", "Train Epoch: 1 [100/800 (12%)]\tLoss: 0.759615\n", "Train Epoch: 1 [200/800 (25%)]\tLoss: 0.623426\n", "Train Epoch: 1 [300/800 (38%)]\tLoss: 0.761207\n", "Train Epoch: 1 [400/800 (50%)]\tLoss: 0.690813\n", "Train Epoch: 1 [500/800 (62%)]\tLoss: 0.776441\n", "Train Epoch: 1 [600/800 (75%)]\tLoss: 0.685847\n", "Train Epoch: 1 [700/800 (88%)]\tLoss: 0.706618\n", "\n", "Test set: Average loss: 0.6949, Accuracy: 97/200 (48.500%)\n", "\n", "Train Epoch: 2 [0/800 (0%)]\tLoss: 0.734724\n", "Train Epoch: 2 [100/800 (12%)]\tLoss: 0.769788\n", "Train Epoch: 2 [200/800 (25%)]\tLoss: 0.714627\n", "Train Epoch: 2 [300/800 (38%)]\tLoss: 0.700415\n", "Train Epoch: 2 [400/800 (50%)]\tLoss: 0.721317\n", "Train Epoch: 2 [500/800 (62%)]\tLoss: 0.725358\n", "Train Epoch: 2 [600/800 (75%)]\tLoss: 0.728227\n", "Train Epoch: 2 [700/800 (88%)]\tLoss: 0.659629\n", "\n", "Test set: Average loss: 0.6933, Accuracy: 96/200 (48.000%)\n", "\n", "Train Epoch: 3 [0/800 (0%)]\tLoss: 0.795568\n", "Train Epoch: 3 [100/800 (12%)]\tLoss: 0.696280\n", "Train Epoch: 3 [200/800 (25%)]\tLoss: 0.744723\n", "Train Epoch: 3 [300/800 (38%)]\tLoss: 0.661478\n", "Train Epoch: 3 [400/800 (50%)]\tLoss: 0.653667\n", "Train Epoch: 3 [500/800 (62%)]\tLoss: 0.740828\n", "Train Epoch: 3 [600/800 (75%)]\tLoss: 0.715702\n", "Train Epoch: 3 [700/800 (88%)]\tLoss: 0.647341\n", "\n", "Test set: Average loss: 0.6917, Accuracy: 103/200 (51.500%)\n", "\n", "Train Epoch: 4 [0/800 (0%)]\tLoss: 0.718259\n", "Train Epoch: 4 [100/800 (12%)]\tLoss: 0.715631\n", "Train Epoch: 4 [200/800 (25%)]\tLoss: 0.678426\n", "Train Epoch: 4 [300/800 (38%)]\tLoss: 0.707464\n", "Train Epoch: 4 [400/800 (50%)]\tLoss: 0.709158\n", "Train Epoch: 4 [500/800 (62%)]\tLoss: 0.740914\n", "Train Epoch: 4 [600/800 (75%)]\tLoss: 0.697477\n", "Train Epoch: 4 [700/800 (88%)]\tLoss: 0.785577\n", "\n", "Test set: Average loss: 0.6903, Accuracy: 99/200 (49.500%)\n", "\n", "Train Epoch: 5 [0/800 (0%)]\tLoss: 0.712337\n", "Train Epoch: 5 [100/800 (12%)]\tLoss: 0.658852\n", "Train Epoch: 5 [200/800 (25%)]\tLoss: 0.690872\n", "Train Epoch: 5 [300/800 (38%)]\tLoss: 0.664473\n", "Train Epoch: 5 [400/800 (50%)]\tLoss: 0.672096\n", "Train Epoch: 5 [500/800 (62%)]\tLoss: 0.681344\n", "Train Epoch: 5 [600/800 (75%)]\tLoss: 0.745848\n", "Train Epoch: 5 [700/800 (88%)]\tLoss: 0.712893\n", "\n", "Test set: Average loss: 0.6890, Accuracy: 94/200 (47.000%)\n", "\n", "Train Epoch: 6 [0/800 (0%)]\tLoss: 0.720909\n", "Train Epoch: 6 [100/800 (12%)]\tLoss: 0.730349\n", "Train Epoch: 6 [200/800 (25%)]\tLoss: 0.763710\n", "Train Epoch: 6 [300/800 (38%)]\tLoss: 0.727711\n", "Train Epoch: 6 [400/800 (50%)]\tLoss: 0.672487\n", "Train Epoch: 6 [500/800 (62%)]\tLoss: 0.677815\n", "Train Epoch: 6 [600/800 (75%)]\tLoss: 0.626029\n", "Train Epoch: 6 [700/800 (88%)]\tLoss: 0.706892\n", "\n", "Test set: Average loss: 0.6877, Accuracy: 95/200 (47.500%)\n", "\n", "Train Epoch: 7 [0/800 (0%)]\tLoss: 0.762181\n", "Train Epoch: 7 [100/800 (12%)]\tLoss: 0.655490\n", "Train Epoch: 7 [200/800 (25%)]\tLoss: 0.785486\n", "Train Epoch: 7 [300/800 (38%)]\tLoss: 0.719932\n", "Train Epoch: 7 [400/800 (50%)]\tLoss: 0.683267\n", "Train Epoch: 7 [500/800 (62%)]\tLoss: 0.716905\n", "Train Epoch: 7 [600/800 (75%)]\tLoss: 0.610312\n", "Train Epoch: 7 [700/800 (88%)]\tLoss: 0.689631\n", "\n", "Test set: Average loss: 0.6864, Accuracy: 97/200 (48.500%)\n", "\n", "Train Epoch: 8 [0/800 (0%)]\tLoss: 0.718268\n", "Train Epoch: 8 [100/800 (12%)]\tLoss: 0.626109\n", "Train Epoch: 8 
[200/800 (25%)]\tLoss: 0.694751\n", "Train Epoch: 8 [300/800 (38%)]\tLoss: 0.777661\n", "Train Epoch: 8 [400/800 (50%)]\tLoss: 0.642748\n", "Train Epoch: 8 [500/800 (62%)]\tLoss: 0.734109\n", "Train Epoch: 8 [600/800 (75%)]\tLoss: 0.733591\n", "Train Epoch: 8 [700/800 (88%)]\tLoss: 0.704688\n", "\n", "Test set: Average loss: 0.6851, Accuracy: 99/200 (49.500%)\n", "\n", "Train Epoch: 9 [0/800 (0%)]\tLoss: 0.645388\n", "Train Epoch: 9 [100/800 (12%)]\tLoss: 0.730897\n", "Train Epoch: 9 [200/800 (25%)]\tLoss: 0.745258\n", "Train Epoch: 9 [300/800 (38%)]\tLoss: 0.746987\n", "Train Epoch: 9 [400/800 (50%)]\tLoss: 0.675306\n", "Train Epoch: 9 [500/800 (62%)]\tLoss: 0.759842\n", "Train Epoch: 9 [600/800 (75%)]\tLoss: 0.723907\n", "Train Epoch: 9 [700/800 (88%)]\tLoss: 0.661631\n", "\n", "Test set: Average loss: 0.6839, Accuracy: 102/200 (51.000%)\n", "\n", "Train Epoch: 10 [0/800 (0%)]\tLoss: 0.637127\n", "Train Epoch: 10 [100/800 (12%)]\tLoss: 0.703343\n", "Train Epoch: 10 [200/800 (25%)]\tLoss: 0.625676\n", "Train Epoch: 10 [300/800 (38%)]\tLoss: 0.654124\n", "Train Epoch: 10 [400/800 (50%)]\tLoss: 0.675645\n", "Train Epoch: 10 [500/800 (62%)]\tLoss: 0.675741\n", "Train Epoch: 10 [600/800 (75%)]\tLoss: 0.694248\n", "Train Epoch: 10 [700/800 (88%)]\tLoss: 0.652979\n", "\n", "Test set: Average loss: 0.6826, Accuracy: 103/200 (51.500%)\n", "\n", "\n", " training DNN with 1000 data points and SGD lr=0.001000. \n", "\n", "Train Epoch: 1 [0/800 (0%)]\tLoss: 0.828281\n", "Train Epoch: 1 [100/800 (12%)]\tLoss: 0.706446\n", "Train Epoch: 1 [200/800 (25%)]\tLoss: 0.742615\n", "Train Epoch: 1 [300/800 (38%)]\tLoss: 0.699289\n", "Train Epoch: 1 [400/800 (50%)]\tLoss: 0.695920\n", "Train Epoch: 1 [500/800 (62%)]\tLoss: 0.700937\n", "Train Epoch: 1 [600/800 (75%)]\tLoss: 0.670519\n", "Train Epoch: 1 [700/800 (88%)]\tLoss: 0.704970\n", "\n", "Test set: Average loss: 0.6916, Accuracy: 101/200 (50.500%)\n", "\n", "Train Epoch: 2 [0/800 (0%)]\tLoss: 0.707742\n", "Train Epoch: 2 [100/800 (12%)]\tLoss: 0.679032\n", "Train Epoch: 2 [200/800 (25%)]\tLoss: 0.703815\n", "Train Epoch: 2 [300/800 (38%)]\tLoss: 0.641242\n", "Train Epoch: 2 [400/800 (50%)]\tLoss: 0.631897\n", "Train Epoch: 2 [500/800 (62%)]\tLoss: 0.673181\n", "Train Epoch: 2 [600/800 (75%)]\tLoss: 0.758181\n", "Train Epoch: 2 [700/800 (88%)]\tLoss: 0.686858\n", "\n", "Test set: Average loss: 0.6833, Accuracy: 123/200 (61.500%)\n", "\n", "Train Epoch: 3 [0/800 (0%)]\tLoss: 0.666551\n", "Train Epoch: 3 [100/800 (12%)]\tLoss: 0.738501\n", "Train Epoch: 3 [200/800 (25%)]\tLoss: 0.736996\n", "Train Epoch: 3 [300/800 (38%)]\tLoss: 0.663481\n", "Train Epoch: 3 [400/800 (50%)]\tLoss: 0.715448\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Train Epoch: 3 [500/800 (62%)]\tLoss: 0.649770\n", "Train Epoch: 3 [600/800 (75%)]\tLoss: 0.677709\n", "Train Epoch: 3 [700/800 (88%)]\tLoss: 0.649908\n", "\n", "Test set: Average loss: 0.6751, Accuracy: 133/200 (66.500%)\n", "\n", "Train Epoch: 4 [0/800 (0%)]\tLoss: 0.638706\n", "Train Epoch: 4 [100/800 (12%)]\tLoss: 0.646711\n", "Train Epoch: 4 [200/800 (25%)]\tLoss: 0.738700\n", "Train Epoch: 4 [300/800 (38%)]\tLoss: 0.629142\n", "Train Epoch: 4 [400/800 (50%)]\tLoss: 0.624597\n", "Train Epoch: 4 [500/800 (62%)]\tLoss: 0.745657\n", "Train Epoch: 4 [600/800 (75%)]\tLoss: 0.725137\n", "Train Epoch: 4 [700/800 (88%)]\tLoss: 0.640283\n", "\n", "Test set: Average loss: 0.6675, Accuracy: 140/200 (70.000%)\n", "\n", "Train Epoch: 5 [0/800 (0%)]\tLoss: 0.687889\n", "Train Epoch: 5 [100/800 (12%)]\tLoss: 0.650391\n", 
"Train Epoch: 5 [200/800 (25%)]\tLoss: 0.646154\n", "Train Epoch: 5 [300/800 (38%)]\tLoss: 0.654916\n", "Train Epoch: 5 [400/800 (50%)]\tLoss: 0.679761\n", "Train Epoch: 5 [500/800 (62%)]\tLoss: 0.631989\n", "Train Epoch: 5 [600/800 (75%)]\tLoss: 0.691708\n", "Train Epoch: 5 [700/800 (88%)]\tLoss: 0.691091\n", "\n", "Test set: Average loss: 0.6597, Accuracy: 146/200 (73.000%)\n", "\n", "Train Epoch: 6 [0/800 (0%)]\tLoss: 0.619725\n", "Train Epoch: 6 [100/800 (12%)]\tLoss: 0.687790\n", "Train Epoch: 6 [200/800 (25%)]\tLoss: 0.592156\n", "Train Epoch: 6 [300/800 (38%)]\tLoss: 0.695230\n", "Train Epoch: 6 [400/800 (50%)]\tLoss: 0.663265\n", "Train Epoch: 6 [500/800 (62%)]\tLoss: 0.637461\n", "Train Epoch: 6 [600/800 (75%)]\tLoss: 0.758096\n", "Train Epoch: 6 [700/800 (88%)]\tLoss: 0.589519\n", "\n", "Test set: Average loss: 0.6530, Accuracy: 144/200 (72.000%)\n", "\n", "Train Epoch: 7 [0/800 (0%)]\tLoss: 0.720149\n", "Train Epoch: 7 [100/800 (12%)]\tLoss: 0.675038\n", "Train Epoch: 7 [200/800 (25%)]\tLoss: 0.624679\n", "Train Epoch: 7 [300/800 (38%)]\tLoss: 0.696514\n", "Train Epoch: 7 [400/800 (50%)]\tLoss: 0.602463\n", "Train Epoch: 7 [500/800 (62%)]\tLoss: 0.738410\n", "Train Epoch: 7 [600/800 (75%)]\tLoss: 0.722651\n", "Train Epoch: 7 [700/800 (88%)]\tLoss: 0.631086\n", "\n", "Test set: Average loss: 0.6458, Accuracy: 145/200 (72.500%)\n", "\n", "Train Epoch: 8 [0/800 (0%)]\tLoss: 0.564008\n", "Train Epoch: 8 [100/800 (12%)]\tLoss: 0.686089\n", "Train Epoch: 8 [200/800 (25%)]\tLoss: 0.735220\n", "Train Epoch: 8 [300/800 (38%)]\tLoss: 0.681267\n", "Train Epoch: 8 [400/800 (50%)]\tLoss: 0.534413\n", "Train Epoch: 8 [500/800 (62%)]\tLoss: 0.631500\n", "Train Epoch: 8 [600/800 (75%)]\tLoss: 0.675105\n", "Train Epoch: 8 [700/800 (88%)]\tLoss: 0.622950\n", "\n", "Test set: Average loss: 0.6396, Accuracy: 150/200 (75.000%)\n", "\n", "Train Epoch: 9 [0/800 (0%)]\tLoss: 0.716496\n", "Train Epoch: 9 [100/800 (12%)]\tLoss: 0.633340\n", "Train Epoch: 9 [200/800 (25%)]\tLoss: 0.609537\n", "Train Epoch: 9 [300/800 (38%)]\tLoss: 0.650772\n", "Train Epoch: 9 [400/800 (50%)]\tLoss: 0.756648\n", "Train Epoch: 9 [500/800 (62%)]\tLoss: 0.684104\n", "Train Epoch: 9 [600/800 (75%)]\tLoss: 0.657388\n", "Train Epoch: 9 [700/800 (88%)]\tLoss: 0.570877\n", "\n", "Test set: Average loss: 0.6319, Accuracy: 146/200 (73.000%)\n", "\n", "Train Epoch: 10 [0/800 (0%)]\tLoss: 0.550361\n", "Train Epoch: 10 [100/800 (12%)]\tLoss: 0.661233\n", "Train Epoch: 10 [200/800 (25%)]\tLoss: 0.817553\n", "Train Epoch: 10 [300/800 (38%)]\tLoss: 0.591799\n", "Train Epoch: 10 [400/800 (50%)]\tLoss: 0.735668\n", "Train Epoch: 10 [500/800 (62%)]\tLoss: 0.642043\n", "Train Epoch: 10 [600/800 (75%)]\tLoss: 0.671387\n", "Train Epoch: 10 [700/800 (88%)]\tLoss: 0.631629\n", "\n", "Test set: Average loss: 0.6257, Accuracy: 149/200 (74.500%)\n", "\n", "\n", " training DNN with 1000 data points and SGD lr=0.010000. 
\n", "\n", "Train Epoch: 1 [0/800 (0%)]\tLoss: 0.696035\n", "Train Epoch: 1 [100/800 (12%)]\tLoss: 0.618539\n", "Train Epoch: 1 [200/800 (25%)]\tLoss: 0.718540\n", "Train Epoch: 1 [300/800 (38%)]\tLoss: 0.650360\n", "Train Epoch: 1 [400/800 (50%)]\tLoss: 0.642039\n", "Train Epoch: 1 [500/800 (62%)]\tLoss: 0.626914\n", "Train Epoch: 1 [600/800 (75%)]\tLoss: 0.624919\n", "Train Epoch: 1 [700/800 (88%)]\tLoss: 0.514577\n", "\n", "Test set: Average loss: 0.5904, Accuracy: 142/200 (71.000%)\n", "\n", "Train Epoch: 2 [0/800 (0%)]\tLoss: 0.682929\n", "Train Epoch: 2 [100/800 (12%)]\tLoss: 0.651318\n", "Train Epoch: 2 [200/800 (25%)]\tLoss: 0.588398\n", "Train Epoch: 2 [300/800 (38%)]\tLoss: 0.664375\n", "Train Epoch: 2 [400/800 (50%)]\tLoss: 0.680439\n", "Train Epoch: 2 [500/800 (62%)]\tLoss: 0.693252\n", "Train Epoch: 2 [600/800 (75%)]\tLoss: 0.840514\n", "Train Epoch: 2 [700/800 (88%)]\tLoss: 0.851084\n", "\n", "Test set: Average loss: 0.5447, Accuracy: 149/200 (74.500%)\n", "\n", "Train Epoch: 3 [0/800 (0%)]\tLoss: 0.430184\n", "Train Epoch: 3 [100/800 (12%)]\tLoss: 0.797171\n", "Train Epoch: 3 [200/800 (25%)]\tLoss: 0.413475\n", "Train Epoch: 3 [300/800 (38%)]\tLoss: 0.561503\n", "Train Epoch: 3 [400/800 (50%)]\tLoss: 0.684532\n", "Train Epoch: 3 [500/800 (62%)]\tLoss: 0.457825\n", "Train Epoch: 3 [600/800 (75%)]\tLoss: 0.595141\n", "Train Epoch: 3 [700/800 (88%)]\tLoss: 1.026458\n", "\n", "Test set: Average loss: 0.5341, Accuracy: 149/200 (74.500%)\n", "\n", "Train Epoch: 4 [0/800 (0%)]\tLoss: 0.618329\n", "Train Epoch: 4 [100/800 (12%)]\tLoss: 0.640518\n", "Train Epoch: 4 [200/800 (25%)]\tLoss: 0.572886\n", "Train Epoch: 4 [300/800 (38%)]\tLoss: 0.420373\n", "Train Epoch: 4 [400/800 (50%)]\tLoss: 0.487304\n", "Train Epoch: 4 [500/800 (62%)]\tLoss: 0.628173\n", "Train Epoch: 4 [600/800 (75%)]\tLoss: 0.389652\n", "Train Epoch: 4 [700/800 (88%)]\tLoss: 0.448448\n", "\n", "Test set: Average loss: 0.5121, Accuracy: 154/200 (77.000%)\n", "\n", "Train Epoch: 5 [0/800 (0%)]\tLoss: 0.809161\n", "Train Epoch: 5 [100/800 (12%)]\tLoss: 0.395011\n", "Train Epoch: 5 [200/800 (25%)]\tLoss: 0.343491\n", "Train Epoch: 5 [300/800 (38%)]\tLoss: 0.381724\n", "Train Epoch: 5 [400/800 (50%)]\tLoss: 0.409402\n", "Train Epoch: 5 [500/800 (62%)]\tLoss: 0.664123\n", "Train Epoch: 5 [600/800 (75%)]\tLoss: 0.365333\n", "Train Epoch: 5 [700/800 (88%)]\tLoss: 0.558361\n", "\n", "Test set: Average loss: 0.5073, Accuracy: 152/200 (76.000%)\n", "\n", "Train Epoch: 6 [0/800 (0%)]\tLoss: 0.427038\n", "Train Epoch: 6 [100/800 (12%)]\tLoss: 0.497466\n", "Train Epoch: 6 [200/800 (25%)]\tLoss: 0.307992\n", "Train Epoch: 6 [300/800 (38%)]\tLoss: 0.574125\n", "Train Epoch: 6 [400/800 (50%)]\tLoss: 0.495350\n", "Train Epoch: 6 [500/800 (62%)]\tLoss: 0.407845\n", "Train Epoch: 6 [600/800 (75%)]\tLoss: 0.526377\n", "Train Epoch: 6 [700/800 (88%)]\tLoss: 0.459272\n", "\n", "Test set: Average loss: 0.5031, Accuracy: 157/200 (78.500%)\n", "\n", "Train Epoch: 7 [0/800 (0%)]\tLoss: 0.440641\n", "Train Epoch: 7 [100/800 (12%)]\tLoss: 0.373742\n", "Train Epoch: 7 [200/800 (25%)]\tLoss: 0.546155\n", "Train Epoch: 7 [300/800 (38%)]\tLoss: 0.580418\n", "Train Epoch: 7 [400/800 (50%)]\tLoss: 0.224490\n", "Train Epoch: 7 [500/800 (62%)]\tLoss: 0.636834\n", "Train Epoch: 7 [600/800 (75%)]\tLoss: 0.505638\n", "Train Epoch: 7 [700/800 (88%)]\tLoss: 0.655799\n", "\n", "Test set: Average loss: 0.5026, Accuracy: 150/200 (75.000%)\n", "\n", "Train Epoch: 8 [0/800 (0%)]\tLoss: 0.515595\n", "Train Epoch: 8 [100/800 (12%)]\tLoss: 0.641143\n", "Train 
Epoch: 8 [200/800 (25%)]\tLoss: 0.330472\n", "Train Epoch: 8 [300/800 (38%)]\tLoss: 0.406667\n", "Train Epoch: 8 [400/800 (50%)]\tLoss: 0.369609\n", "Train Epoch: 8 [500/800 (62%)]\tLoss: 0.438342\n", "Train Epoch: 8 [600/800 (75%)]\tLoss: 0.405456\n", "Train Epoch: 8 [700/800 (88%)]\tLoss: 0.587713\n", "\n", "Test set: Average loss: 0.4954, Accuracy: 155/200 (77.500%)\n", "\n", "Train Epoch: 9 [0/800 (0%)]\tLoss: 0.445754\n", "Train Epoch: 9 [100/800 (12%)]\tLoss: 0.445473\n", "Train Epoch: 9 [200/800 (25%)]\tLoss: 0.474659\n", "Train Epoch: 9 [300/800 (38%)]\tLoss: 0.367472\n", "Train Epoch: 9 [400/800 (50%)]\tLoss: 0.447237\n", "Train Epoch: 9 [500/800 (62%)]\tLoss: 0.564817\n", "Train Epoch: 9 [600/800 (75%)]\tLoss: 0.490990\n", "Train Epoch: 9 [700/800 (88%)]\tLoss: 0.513888\n", "\n", "Test set: Average loss: 0.4939, Accuracy: 153/200 (76.500%)\n", "\n", "Train Epoch: 10 [0/800 (0%)]\tLoss: 0.344564\n", "Train Epoch: 10 [100/800 (12%)]\tLoss: 0.182895\n", "Train Epoch: 10 [200/800 (25%)]\tLoss: 0.510447\n", "Train Epoch: 10 [300/800 (38%)]\tLoss: 0.656765\n", "Train Epoch: 10 [400/800 (50%)]\tLoss: 0.662577\n", "Train Epoch: 10 [500/800 (62%)]\tLoss: 0.552037\n", "Train Epoch: 10 [600/800 (75%)]\tLoss: 0.541802\n", "Train Epoch: 10 [700/800 (88%)]\tLoss: 0.297868\n", "\n", "Test set: Average loss: 0.5122, Accuracy: 155/200 (77.500%)\n", "\n", "\n", " training DNN with 1000 data points and SGD lr=0.100000. \n", "\n", "Train Epoch: 1 [0/800 (0%)]\tLoss: 0.702246\n", "Train Epoch: 1 [100/800 (12%)]\tLoss: 0.680417\n", "Train Epoch: 1 [200/800 (25%)]\tLoss: 0.738558\n", "Train Epoch: 1 [300/800 (38%)]\tLoss: 0.647689\n", "Train Epoch: 1 [400/800 (50%)]\tLoss: 0.435705\n", "Train Epoch: 1 [500/800 (62%)]\tLoss: 0.433396\n", "Train Epoch: 1 [600/800 (75%)]\tLoss: 0.504250\n", "Train Epoch: 1 [700/800 (88%)]\tLoss: 0.557903\n", "\n", "Test set: Average loss: 0.5641, Accuracy: 142/200 (71.000%)\n", "\n", "Train Epoch: 2 [0/800 (0%)]\tLoss: 0.489483\n", "Train Epoch: 2 [100/800 (12%)]\tLoss: 0.553770\n", "Train Epoch: 2 [200/800 (25%)]\tLoss: 0.522039\n", "Train Epoch: 2 [300/800 (38%)]\tLoss: 0.674991\n", "Train Epoch: 2 [400/800 (50%)]\tLoss: 0.442521\n", "Train Epoch: 2 [500/800 (62%)]\tLoss: 0.477862\n", "Train Epoch: 2 [600/800 (75%)]\tLoss: 0.665187\n", "Train Epoch: 2 [700/800 (88%)]\tLoss: 0.965202\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "Test set: Average loss: 0.7319, Accuracy: 123/200 (61.500%)\n", "\n", "Train Epoch: 3 [0/800 (0%)]\tLoss: 0.719377\n", "Train Epoch: 3 [100/800 (12%)]\tLoss: 0.499032\n", "Train Epoch: 3 [200/800 (25%)]\tLoss: 0.653635\n", "Train Epoch: 3 [300/800 (38%)]\tLoss: 0.622866\n", "Train Epoch: 3 [400/800 (50%)]\tLoss: 0.615727\n", "Train Epoch: 3 [500/800 (62%)]\tLoss: 0.448293\n", "Train Epoch: 3 [600/800 (75%)]\tLoss: 0.953069\n", "Train Epoch: 3 [700/800 (88%)]\tLoss: 0.512476\n", "\n", "Test set: Average loss: 0.5358, Accuracy: 146/200 (73.000%)\n", "\n", "Train Epoch: 4 [0/800 (0%)]\tLoss: 0.529396\n", "Train Epoch: 4 [100/800 (12%)]\tLoss: 0.400666\n", "Train Epoch: 4 [200/800 (25%)]\tLoss: 0.599574\n", "Train Epoch: 4 [300/800 (38%)]\tLoss: 0.577725\n", "Train Epoch: 4 [400/800 (50%)]\tLoss: 0.635720\n", "Train Epoch: 4 [500/800 (62%)]\tLoss: 0.634720\n", "Train Epoch: 4 [600/800 (75%)]\tLoss: 0.756374\n", "Train Epoch: 4 [700/800 (88%)]\tLoss: 1.011996\n", "\n", "Test set: Average loss: 0.5952, Accuracy: 132/200 (66.000%)\n", "\n", "Train Epoch: 5 [0/800 (0%)]\tLoss: 0.830493\n", "Train Epoch: 5 [100/800 (12%)]\tLoss: 
0.502611\n", "Train Epoch: 5 [200/800 (25%)]\tLoss: 0.730146\n", "Train Epoch: 5 [300/800 (38%)]\tLoss: 0.625743\n", "Train Epoch: 5 [400/800 (50%)]\tLoss: 0.264062\n", "Train Epoch: 5 [500/800 (62%)]\tLoss: 0.484053\n", "Train Epoch: 5 [600/800 (75%)]\tLoss: 0.861344\n", "Train Epoch: 5 [700/800 (88%)]\tLoss: 0.451043\n", "\n", "Test set: Average loss: 0.5973, Accuracy: 142/200 (71.000%)\n", "\n", "Train Epoch: 6 [0/800 (0%)]\tLoss: 0.250868\n", "Train Epoch: 6 [100/800 (12%)]\tLoss: 0.459612\n", "Train Epoch: 6 [200/800 (25%)]\tLoss: 0.406168\n", "Train Epoch: 6 [300/800 (38%)]\tLoss: 0.551782\n", "Train Epoch: 6 [400/800 (50%)]\tLoss: 0.553747\n", "Train Epoch: 6 [500/800 (62%)]\tLoss: 0.554346\n", "Train Epoch: 6 [600/800 (75%)]\tLoss: 0.602257\n", "Train Epoch: 6 [700/800 (88%)]\tLoss: 0.317622\n", "\n", "Test set: Average loss: 0.5605, Accuracy: 147/200 (73.500%)\n", "\n", "Train Epoch: 7 [0/800 (0%)]\tLoss: 0.515653\n", "Train Epoch: 7 [100/800 (12%)]\tLoss: 0.415217\n", "Train Epoch: 7 [200/800 (25%)]\tLoss: 0.647236\n", "Train Epoch: 7 [300/800 (38%)]\tLoss: 0.223138\n", "Train Epoch: 7 [400/800 (50%)]\tLoss: 0.409093\n", "Train Epoch: 7 [500/800 (62%)]\tLoss: 0.727628\n", "Train Epoch: 7 [600/800 (75%)]\tLoss: 1.045135\n", "Train Epoch: 7 [700/800 (88%)]\tLoss: 0.990046\n", "\n", "Test set: Average loss: 0.5733, Accuracy: 135/200 (67.500%)\n", "\n", "Train Epoch: 8 [0/800 (0%)]\tLoss: 0.772774\n", "Train Epoch: 8 [100/800 (12%)]\tLoss: 0.306425\n", "Train Epoch: 8 [200/800 (25%)]\tLoss: 0.386095\n", "Train Epoch: 8 [300/800 (38%)]\tLoss: 0.432196\n", "Train Epoch: 8 [400/800 (50%)]\tLoss: 1.216796\n", "Train Epoch: 8 [500/800 (62%)]\tLoss: 0.568146\n", "Train Epoch: 8 [600/800 (75%)]\tLoss: 0.602302\n", "Train Epoch: 8 [700/800 (88%)]\tLoss: 0.442552\n", "\n", "Test set: Average loss: 0.5602, Accuracy: 150/200 (75.000%)\n", "\n", "Train Epoch: 9 [0/800 (0%)]\tLoss: 0.417107\n", "Train Epoch: 9 [100/800 (12%)]\tLoss: 0.529822\n", "Train Epoch: 9 [200/800 (25%)]\tLoss: 0.277546\n", "Train Epoch: 9 [300/800 (38%)]\tLoss: 0.268979\n", "Train Epoch: 9 [400/800 (50%)]\tLoss: 0.325881\n", "Train Epoch: 9 [500/800 (62%)]\tLoss: 0.531899\n", "Train Epoch: 9 [600/800 (75%)]\tLoss: 0.508487\n", "Train Epoch: 9 [700/800 (88%)]\tLoss: 0.487037\n", "\n", "Test set: Average loss: 0.6176, Accuracy: 126/200 (63.000%)\n", "\n", "Train Epoch: 10 [0/800 (0%)]\tLoss: 0.764035\n", "Train Epoch: 10 [100/800 (12%)]\tLoss: 0.644874\n", "Train Epoch: 10 [200/800 (25%)]\tLoss: 0.477930\n", "Train Epoch: 10 [300/800 (38%)]\tLoss: 0.346694\n", "Train Epoch: 10 [400/800 (50%)]\tLoss: 0.507348\n", "Train Epoch: 10 [500/800 (62%)]\tLoss: 0.702342\n", "Train Epoch: 10 [600/800 (75%)]\tLoss: 0.613372\n", "Train Epoch: 10 [700/800 (88%)]\tLoss: 0.572796\n", "\n", "Test set: Average loss: 0.5932, Accuracy: 131/200 (65.500%)\n", "\n", "Training on 8000 examples\n", "Using both high and low level features\n", "Testing on 2000 examples\n", "Using both high and low level features\n", "\n", " training DNN with 10000 data points and SGD lr=0.000010. 
\n", "\n", "Train Epoch: 1 [0/8000 (0%)]\tLoss: 0.727676\n", "Train Epoch: 1 [1000/8000 (12%)]\tLoss: 0.704320\n", "Train Epoch: 1 [2000/8000 (25%)]\tLoss: 0.710183\n", "Train Epoch: 1 [3000/8000 (38%)]\tLoss: 0.733221\n", "Train Epoch: 1 [4000/8000 (50%)]\tLoss: 0.723302\n", "Train Epoch: 1 [5000/8000 (62%)]\tLoss: 0.692780\n", "Train Epoch: 1 [6000/8000 (75%)]\tLoss: 0.721705\n", "Train Epoch: 1 [7000/8000 (88%)]\tLoss: 0.718693\n", "\n", "Test set: Average loss: 0.7014, Accuracy: 879/2000 (43.950%)\n", "\n", "Train Epoch: 2 [0/8000 (0%)]\tLoss: 0.726935\n", "Train Epoch: 2 [1000/8000 (12%)]\tLoss: 0.687883\n", "Train Epoch: 2 [2000/8000 (25%)]\tLoss: 0.731870\n", "Train Epoch: 2 [3000/8000 (38%)]\tLoss: 0.698007\n", "Train Epoch: 2 [4000/8000 (50%)]\tLoss: 0.705002\n", "Train Epoch: 2 [5000/8000 (62%)]\tLoss: 0.719240\n", "Train Epoch: 2 [6000/8000 (75%)]\tLoss: 0.697965\n", "Train Epoch: 2 [7000/8000 (88%)]\tLoss: 0.710202\n", "\n", "Test set: Average loss: 0.7013, Accuracy: 883/2000 (44.150%)\n", "\n", "Train Epoch: 3 [0/8000 (0%)]\tLoss: 0.722399\n", "Train Epoch: 3 [1000/8000 (12%)]\tLoss: 0.695973\n", "Train Epoch: 3 [2000/8000 (25%)]\tLoss: 0.709207\n", "Train Epoch: 3 [3000/8000 (38%)]\tLoss: 0.732915\n", "Train Epoch: 3 [4000/8000 (50%)]\tLoss: 0.690858\n", "Train Epoch: 3 [5000/8000 (62%)]\tLoss: 0.715846\n", "Train Epoch: 3 [6000/8000 (75%)]\tLoss: 0.711072\n", "Train Epoch: 3 [7000/8000 (88%)]\tLoss: 0.681372\n", "\n", "Test set: Average loss: 0.7011, Accuracy: 883/2000 (44.150%)\n", "\n", "Train Epoch: 4 [0/8000 (0%)]\tLoss: 0.711699\n", "Train Epoch: 4 [1000/8000 (12%)]\tLoss: 0.701226\n", "Train Epoch: 4 [2000/8000 (25%)]\tLoss: 0.702945\n", "Train Epoch: 4 [3000/8000 (38%)]\tLoss: 0.671898\n", "Train Epoch: 4 [4000/8000 (50%)]\tLoss: 0.713768\n", "Train Epoch: 4 [5000/8000 (62%)]\tLoss: 0.707367\n", "Train Epoch: 4 [6000/8000 (75%)]\tLoss: 0.728022\n", "Train Epoch: 4 [7000/8000 (88%)]\tLoss: 0.703425\n", "\n", "Test set: Average loss: 0.7010, Accuracy: 883/2000 (44.150%)\n", "\n", "Train Epoch: 5 [0/8000 (0%)]\tLoss: 0.709825\n", "Train Epoch: 5 [1000/8000 (12%)]\tLoss: 0.708488\n", "Train Epoch: 5 [2000/8000 (25%)]\tLoss: 0.676953\n", "Train Epoch: 5 [3000/8000 (38%)]\tLoss: 0.733793\n", "Train Epoch: 5 [4000/8000 (50%)]\tLoss: 0.748478\n", "Train Epoch: 5 [5000/8000 (62%)]\tLoss: 0.674785\n", "Train Epoch: 5 [6000/8000 (75%)]\tLoss: 0.725700\n", "Train Epoch: 5 [7000/8000 (88%)]\tLoss: 0.698330\n", "\n", "Test set: Average loss: 0.7009, Accuracy: 885/2000 (44.250%)\n", "\n", "Train Epoch: 6 [0/8000 (0%)]\tLoss: 0.709079\n", "Train Epoch: 6 [1000/8000 (12%)]\tLoss: 0.696563\n", "Train Epoch: 6 [2000/8000 (25%)]\tLoss: 0.732473\n", "Train Epoch: 6 [3000/8000 (38%)]\tLoss: 0.698275\n", "Train Epoch: 6 [4000/8000 (50%)]\tLoss: 0.705673\n", "Train Epoch: 6 [5000/8000 (62%)]\tLoss: 0.714218\n", "Train Epoch: 6 [6000/8000 (75%)]\tLoss: 0.709984\n", "Train Epoch: 6 [7000/8000 (88%)]\tLoss: 0.711174\n", "\n", "Test set: Average loss: 0.7008, Accuracy: 886/2000 (44.300%)\n", "\n", "Train Epoch: 7 [0/8000 (0%)]\tLoss: 0.706600\n", "Train Epoch: 7 [1000/8000 (12%)]\tLoss: 0.701822\n", "Train Epoch: 7 [2000/8000 (25%)]\tLoss: 0.732518\n", "Train Epoch: 7 [3000/8000 (38%)]\tLoss: 0.688266\n", "Train Epoch: 7 [4000/8000 (50%)]\tLoss: 0.702672\n", "Train Epoch: 7 [5000/8000 (62%)]\tLoss: 0.698141\n", "Train Epoch: 7 [6000/8000 (75%)]\tLoss: 0.706397\n", "Train Epoch: 7 [7000/8000 (88%)]\tLoss: 0.698923\n", "\n", "Test set: Average loss: 0.7006, Accuracy: 887/2000 (44.350%)\n", "\n", 
"Train Epoch: 8 [0/8000 (0%)]\tLoss: 0.717546\n", "Train Epoch: 8 [1000/8000 (12%)]\tLoss: 0.702528\n", "Train Epoch: 8 [2000/8000 (25%)]\tLoss: 0.717072\n", "Train Epoch: 8 [3000/8000 (38%)]\tLoss: 0.704772\n", "Train Epoch: 8 [4000/8000 (50%)]\tLoss: 0.712914\n", "Train Epoch: 8 [5000/8000 (62%)]\tLoss: 0.699612\n", "Train Epoch: 8 [6000/8000 (75%)]\tLoss: 0.732635\n", "Train Epoch: 8 [7000/8000 (88%)]\tLoss: 0.670173\n", "\n", "Test set: Average loss: 0.7005, Accuracy: 887/2000 (44.350%)\n", "\n", "Train Epoch: 9 [0/8000 (0%)]\tLoss: 0.705649\n", "Train Epoch: 9 [1000/8000 (12%)]\tLoss: 0.702978\n", "Train Epoch: 9 [2000/8000 (25%)]\tLoss: 0.706398\n", "Train Epoch: 9 [3000/8000 (38%)]\tLoss: 0.719252\n", "Train Epoch: 9 [4000/8000 (50%)]\tLoss: 0.715954\n", "Train Epoch: 9 [5000/8000 (62%)]\tLoss: 0.683632\n", "Train Epoch: 9 [6000/8000 (75%)]\tLoss: 0.708228\n", "Train Epoch: 9 [7000/8000 (88%)]\tLoss: 0.696788\n", "\n", "Test set: Average loss: 0.7004, Accuracy: 892/2000 (44.600%)\n", "\n", "Train Epoch: 10 [0/8000 (0%)]\tLoss: 0.700282\n", "Train Epoch: 10 [1000/8000 (12%)]\tLoss: 0.724479\n", "Train Epoch: 10 [2000/8000 (25%)]\tLoss: 0.709506\n", "Train Epoch: 10 [3000/8000 (38%)]\tLoss: 0.708790\n", "Train Epoch: 10 [4000/8000 (50%)]\tLoss: 0.703068\n", "Train Epoch: 10 [5000/8000 (62%)]\tLoss: 0.713272\n", "Train Epoch: 10 [6000/8000 (75%)]\tLoss: 0.716486\n", "Train Epoch: 10 [7000/8000 (88%)]\tLoss: 0.688014\n", "\n", "Test set: Average loss: 0.7003, Accuracy: 893/2000 (44.650%)\n", "\n", "\n", " training DNN with 10000 data points and SGD lr=0.000100. \n", "\n", "Train Epoch: 1 [0/8000 (0%)]\tLoss: 0.710169\n", "Train Epoch: 1 [1000/8000 (12%)]\tLoss: 0.691915\n", "Train Epoch: 1 [2000/8000 (25%)]\tLoss: 0.691650\n", "Train Epoch: 1 [3000/8000 (38%)]\tLoss: 0.713645\n", "Train Epoch: 1 [4000/8000 (50%)]\tLoss: 0.727196\n", "Train Epoch: 1 [5000/8000 (62%)]\tLoss: 0.707788\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Train Epoch: 1 [6000/8000 (75%)]\tLoss: 0.710902\n", "Train Epoch: 1 [7000/8000 (88%)]\tLoss: 0.692166\n", "\n", "Test set: Average loss: 0.6919, Accuracy: 1082/2000 (54.100%)\n", "\n", "Train Epoch: 2 [0/8000 (0%)]\tLoss: 0.715787\n", "Train Epoch: 2 [1000/8000 (12%)]\tLoss: 0.716278\n", "Train Epoch: 2 [2000/8000 (25%)]\tLoss: 0.681749\n", "Train Epoch: 2 [3000/8000 (38%)]\tLoss: 0.684479\n", "Train Epoch: 2 [4000/8000 (50%)]\tLoss: 0.690353\n", "Train Epoch: 2 [5000/8000 (62%)]\tLoss: 0.686106\n", "Train Epoch: 2 [6000/8000 (75%)]\tLoss: 0.694722\n", "Train Epoch: 2 [7000/8000 (88%)]\tLoss: 0.683836\n", "\n", "Test set: Average loss: 0.6904, Accuracy: 1127/2000 (56.350%)\n", "\n", "Train Epoch: 3 [0/8000 (0%)]\tLoss: 0.684679\n", "Train Epoch: 3 [1000/8000 (12%)]\tLoss: 0.674189\n", "Train Epoch: 3 [2000/8000 (25%)]\tLoss: 0.700893\n", "Train Epoch: 3 [3000/8000 (38%)]\tLoss: 0.701272\n", "Train Epoch: 3 [4000/8000 (50%)]\tLoss: 0.692683\n", "Train Epoch: 3 [5000/8000 (62%)]\tLoss: 0.698199\n", "Train Epoch: 3 [6000/8000 (75%)]\tLoss: 0.703976\n", "Train Epoch: 3 [7000/8000 (88%)]\tLoss: 0.707629\n", "\n", "Test set: Average loss: 0.6890, Accuracy: 1151/2000 (57.550%)\n", "\n", "Train Epoch: 4 [0/8000 (0%)]\tLoss: 0.703755\n", "Train Epoch: 4 [1000/8000 (12%)]\tLoss: 0.692734\n", "Train Epoch: 4 [2000/8000 (25%)]\tLoss: 0.717795\n", "Train Epoch: 4 [3000/8000 (38%)]\tLoss: 0.711542\n", "Train Epoch: 4 [4000/8000 (50%)]\tLoss: 0.682708\n", "Train Epoch: 4 [5000/8000 (62%)]\tLoss: 0.697612\n", "Train Epoch: 4 [6000/8000 (75%)]\tLoss: 
0.709732\n", "Train Epoch: 4 [7000/8000 (88%)]\tLoss: 0.685663\n", "\n", "Test set: Average loss: 0.6877, Accuracy: 1182/2000 (59.100%)\n", "\n", "Train Epoch: 5 [0/8000 (0%)]\tLoss: 0.699202\n", "Train Epoch: 5 [1000/8000 (12%)]\tLoss: 0.681597\n", "Train Epoch: 5 [2000/8000 (25%)]\tLoss: 0.703145\n", "Train Epoch: 5 [3000/8000 (38%)]\tLoss: 0.681526\n", "Train Epoch: 5 [4000/8000 (50%)]\tLoss: 0.670278\n", "Train Epoch: 5 [5000/8000 (62%)]\tLoss: 0.700635\n", "Train Epoch: 5 [6000/8000 (75%)]\tLoss: 0.687604\n", "Train Epoch: 5 [7000/8000 (88%)]\tLoss: 0.685772\n", "\n", "Test set: Average loss: 0.6864, Accuracy: 1210/2000 (60.500%)\n", "\n", "Train Epoch: 6 [0/8000 (0%)]\tLoss: 0.690343\n", "Train Epoch: 6 [1000/8000 (12%)]\tLoss: 0.687278\n", "Train Epoch: 6 [2000/8000 (25%)]\tLoss: 0.675960\n", "Train Epoch: 6 [3000/8000 (38%)]\tLoss: 0.705044\n", "Train Epoch: 6 [4000/8000 (50%)]\tLoss: 0.703282\n", "Train Epoch: 6 [5000/8000 (62%)]\tLoss: 0.703023\n", "Train Epoch: 6 [6000/8000 (75%)]\tLoss: 0.687314\n", "Train Epoch: 6 [7000/8000 (88%)]\tLoss: 0.703106\n", "\n", "Test set: Average loss: 0.6852, Accuracy: 1235/2000 (61.750%)\n", "\n", "Train Epoch: 7 [0/8000 (0%)]\tLoss: 0.671762\n", "Train Epoch: 7 [1000/8000 (12%)]\tLoss: 0.701850\n", "Train Epoch: 7 [2000/8000 (25%)]\tLoss: 0.707641\n", "Train Epoch: 7 [3000/8000 (38%)]\tLoss: 0.703546\n", "Train Epoch: 7 [4000/8000 (50%)]\tLoss: 0.711041\n", "Train Epoch: 7 [5000/8000 (62%)]\tLoss: 0.693455\n", "Train Epoch: 7 [6000/8000 (75%)]\tLoss: 0.696086\n", "Train Epoch: 7 [7000/8000 (88%)]\tLoss: 0.686095\n", "\n", "Test set: Average loss: 0.6841, Accuracy: 1244/2000 (62.200%)\n", "\n", "Train Epoch: 8 [0/8000 (0%)]\tLoss: 0.686531\n", "Train Epoch: 8 [1000/8000 (12%)]\tLoss: 0.677912\n", "Train Epoch: 8 [2000/8000 (25%)]\tLoss: 0.686239\n", "Train Epoch: 8 [3000/8000 (38%)]\tLoss: 0.704038\n", "Train Epoch: 8 [4000/8000 (50%)]\tLoss: 0.697486\n", "Train Epoch: 8 [5000/8000 (62%)]\tLoss: 0.677249\n", "Train Epoch: 8 [6000/8000 (75%)]\tLoss: 0.680064\n", "Train Epoch: 8 [7000/8000 (88%)]\tLoss: 0.680018\n", "\n", "Test set: Average loss: 0.6830, Accuracy: 1259/2000 (62.950%)\n", "\n", "Train Epoch: 9 [0/8000 (0%)]\tLoss: 0.681513\n", "Train Epoch: 9 [1000/8000 (12%)]\tLoss: 0.676095\n", "Train Epoch: 9 [2000/8000 (25%)]\tLoss: 0.683777\n", "Train Epoch: 9 [3000/8000 (38%)]\tLoss: 0.691705\n", "Train Epoch: 9 [4000/8000 (50%)]\tLoss: 0.696020\n", "Train Epoch: 9 [5000/8000 (62%)]\tLoss: 0.684099\n", "Train Epoch: 9 [6000/8000 (75%)]\tLoss: 0.695634\n", "Train Epoch: 9 [7000/8000 (88%)]\tLoss: 0.688131\n", "\n", "Test set: Average loss: 0.6819, Accuracy: 1281/2000 (64.050%)\n", "\n", "Train Epoch: 10 [0/8000 (0%)]\tLoss: 0.674268\n", "Train Epoch: 10 [1000/8000 (12%)]\tLoss: 0.674218\n", "Train Epoch: 10 [2000/8000 (25%)]\tLoss: 0.701858\n", "Train Epoch: 10 [3000/8000 (38%)]\tLoss: 0.675874\n", "Train Epoch: 10 [4000/8000 (50%)]\tLoss: 0.702081\n", "Train Epoch: 10 [5000/8000 (62%)]\tLoss: 0.703176\n", "Train Epoch: 10 [6000/8000 (75%)]\tLoss: 0.692648\n", "Train Epoch: 10 [7000/8000 (88%)]\tLoss: 0.669730\n", "\n", "Test set: Average loss: 0.6808, Accuracy: 1301/2000 (65.050%)\n", "\n", "\n", " training DNN with 10000 data points and SGD lr=0.001000. 
\n", "\n", "Train Epoch: 1 [0/8000 (0%)]\tLoss: 0.748844\n", "Train Epoch: 1 [1000/8000 (12%)]\tLoss: 0.710549\n", "Train Epoch: 1 [2000/8000 (25%)]\tLoss: 0.725623\n", "Train Epoch: 1 [3000/8000 (38%)]\tLoss: 0.720901\n", "Train Epoch: 1 [4000/8000 (50%)]\tLoss: 0.711438\n", "Train Epoch: 1 [5000/8000 (62%)]\tLoss: 0.724673\n", "Train Epoch: 1 [6000/8000 (75%)]\tLoss: 0.689648\n", "Train Epoch: 1 [7000/8000 (88%)]\tLoss: 0.719881\n", "\n", "Test set: Average loss: 0.6967, Accuracy: 937/2000 (46.850%)\n", "\n", "Train Epoch: 2 [0/8000 (0%)]\tLoss: 0.722610\n", "Train Epoch: 2 [1000/8000 (12%)]\tLoss: 0.712821\n", "Train Epoch: 2 [2000/8000 (25%)]\tLoss: 0.709567\n", "Train Epoch: 2 [3000/8000 (38%)]\tLoss: 0.705417\n", "Train Epoch: 2 [4000/8000 (50%)]\tLoss: 0.695027\n", "Train Epoch: 2 [5000/8000 (62%)]\tLoss: 0.707301\n", "Train Epoch: 2 [6000/8000 (75%)]\tLoss: 0.713473\n", "Train Epoch: 2 [7000/8000 (88%)]\tLoss: 0.702530\n", "\n", "Test set: Average loss: 0.6841, Accuracy: 1068/2000 (53.400%)\n", "\n", "Train Epoch: 3 [0/8000 (0%)]\tLoss: 0.697902\n", "Train Epoch: 3 [1000/8000 (12%)]\tLoss: 0.687280\n", "Train Epoch: 3 [2000/8000 (25%)]\tLoss: 0.703073\n", "Train Epoch: 3 [3000/8000 (38%)]\tLoss: 0.695030\n", "Train Epoch: 3 [4000/8000 (50%)]\tLoss: 0.687740\n", "Train Epoch: 3 [5000/8000 (62%)]\tLoss: 0.692897\n", "Train Epoch: 3 [6000/8000 (75%)]\tLoss: 0.695042\n", "Train Epoch: 3 [7000/8000 (88%)]\tLoss: 0.673611\n", "\n", "Test set: Average loss: 0.6742, Accuracy: 1358/2000 (67.900%)\n", "\n", "Train Epoch: 4 [0/8000 (0%)]\tLoss: 0.679266\n", "Train Epoch: 4 [1000/8000 (12%)]\tLoss: 0.688101\n", "Train Epoch: 4 [2000/8000 (25%)]\tLoss: 0.663009\n", "Train Epoch: 4 [3000/8000 (38%)]\tLoss: 0.685576\n", "Train Epoch: 4 [4000/8000 (50%)]\tLoss: 0.698362\n", "Train Epoch: 4 [5000/8000 (62%)]\tLoss: 0.681344\n", "Train Epoch: 4 [6000/8000 (75%)]\tLoss: 0.656040\n", "Train Epoch: 4 [7000/8000 (88%)]\tLoss: 0.668664\n", "\n", "Test set: Average loss: 0.6644, Accuracy: 1437/2000 (71.850%)\n", "\n", "Train Epoch: 5 [0/8000 (0%)]\tLoss: 0.657408\n", "Train Epoch: 5 [1000/8000 (12%)]\tLoss: 0.652054\n", "Train Epoch: 5 [2000/8000 (25%)]\tLoss: 0.684324\n", "Train Epoch: 5 [3000/8000 (38%)]\tLoss: 0.684844\n", "Train Epoch: 5 [4000/8000 (50%)]\tLoss: 0.661951\n", "Train Epoch: 5 [5000/8000 (62%)]\tLoss: 0.682703\n", "Train Epoch: 5 [6000/8000 (75%)]\tLoss: 0.670805\n", "Train Epoch: 5 [7000/8000 (88%)]\tLoss: 0.667759\n", "\n", "Test set: Average loss: 0.6545, Accuracy: 1462/2000 (73.100%)\n", "\n", "Train Epoch: 6 [0/8000 (0%)]\tLoss: 0.663250\n", "Train Epoch: 6 [1000/8000 (12%)]\tLoss: 0.672179\n", "Train Epoch: 6 [2000/8000 (25%)]\tLoss: 0.669661\n", "Train Epoch: 6 [3000/8000 (38%)]\tLoss: 0.673492\n", "Train Epoch: 6 [4000/8000 (50%)]\tLoss: 0.646684\n", "Train Epoch: 6 [5000/8000 (62%)]\tLoss: 0.650809\n", "Train Epoch: 6 [6000/8000 (75%)]\tLoss: 0.650157\n", "Train Epoch: 6 [7000/8000 (88%)]\tLoss: 0.665025\n", "\n", "Test set: Average loss: 0.6449, Accuracy: 1474/2000 (73.700%)\n", "\n", "Train Epoch: 7 [0/8000 (0%)]\tLoss: 0.668964\n", "Train Epoch: 7 [1000/8000 (12%)]\tLoss: 0.650208\n", "Train Epoch: 7 [2000/8000 (25%)]\tLoss: 0.669286\n", "Train Epoch: 7 [3000/8000 (38%)]\tLoss: 0.637196\n", "Train Epoch: 7 [4000/8000 (50%)]\tLoss: 0.680341\n", "Train Epoch: 7 [5000/8000 (62%)]\tLoss: 0.641309\n", "Train Epoch: 7 [6000/8000 (75%)]\tLoss: 0.651198\n", "Train Epoch: 7 [7000/8000 (88%)]\tLoss: 0.660376\n", "\n", "Test set: Average loss: 0.6362, Accuracy: 1476/2000 (73.800%)\n", 
"\n", "Train Epoch: 8 [0/8000 (0%)]\tLoss: 0.661032\n", "Train Epoch: 8 [1000/8000 (12%)]\tLoss: 0.651090\n", "Train Epoch: 8 [2000/8000 (25%)]\tLoss: 0.631498\n", "Train Epoch: 8 [3000/8000 (38%)]\tLoss: 0.666181\n", "Train Epoch: 8 [4000/8000 (50%)]\tLoss: 0.670746\n", "Train Epoch: 8 [5000/8000 (62%)]\tLoss: 0.652429\n", "Train Epoch: 8 [6000/8000 (75%)]\tLoss: 0.623989\n", "Train Epoch: 8 [7000/8000 (88%)]\tLoss: 0.647921\n", "\n", "Test set: Average loss: 0.6275, Accuracy: 1489/2000 (74.450%)\n", "\n", "Train Epoch: 9 [0/8000 (0%)]\tLoss: 0.638977\n", "Train Epoch: 9 [1000/8000 (12%)]\tLoss: 0.632178\n", "Train Epoch: 9 [2000/8000 (25%)]\tLoss: 0.628478\n", "Train Epoch: 9 [3000/8000 (38%)]\tLoss: 0.645219\n", "Train Epoch: 9 [4000/8000 (50%)]\tLoss: 0.634084\n", "Train Epoch: 9 [5000/8000 (62%)]\tLoss: 0.660404\n", "Train Epoch: 9 [6000/8000 (75%)]\tLoss: 0.637642\n", "Train Epoch: 9 [7000/8000 (88%)]\tLoss: 0.629800\n", "\n", "Test set: Average loss: 0.6189, Accuracy: 1478/2000 (73.900%)\n", "\n", "Train Epoch: 10 [0/8000 (0%)]\tLoss: 0.626010\n", "Train Epoch: 10 [1000/8000 (12%)]\tLoss: 0.640649\n", "Train Epoch: 10 [2000/8000 (25%)]\tLoss: 0.661770\n", "Train Epoch: 10 [3000/8000 (38%)]\tLoss: 0.641002\n", "Train Epoch: 10 [4000/8000 (50%)]\tLoss: 0.611413\n", "Train Epoch: 10 [5000/8000 (62%)]\tLoss: 0.646812\n", "Train Epoch: 10 [6000/8000 (75%)]\tLoss: 0.634771\n", "Train Epoch: 10 [7000/8000 (88%)]\tLoss: 0.637156\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "Test set: Average loss: 0.6100, Accuracy: 1491/2000 (74.550%)\n", "\n", "\n", " training DNN with 10000 data points and SGD lr=0.010000. \n", "\n", "Train Epoch: 1 [0/8000 (0%)]\tLoss: 0.699168\n", "Train Epoch: 1 [1000/8000 (12%)]\tLoss: 0.697183\n", "Train Epoch: 1 [2000/8000 (25%)]\tLoss: 0.673701\n", "Train Epoch: 1 [3000/8000 (38%)]\tLoss: 0.652901\n", "Train Epoch: 1 [4000/8000 (50%)]\tLoss: 0.687900\n", "Train Epoch: 1 [5000/8000 (62%)]\tLoss: 0.663231\n", "Train Epoch: 1 [6000/8000 (75%)]\tLoss: 0.657198\n", "Train Epoch: 1 [7000/8000 (88%)]\tLoss: 0.618428\n", "\n", "Test set: Average loss: 0.6132, Accuracy: 1506/2000 (75.300%)\n", "\n", "Train Epoch: 2 [0/8000 (0%)]\tLoss: 0.604715\n", "Train Epoch: 2 [1000/8000 (12%)]\tLoss: 0.611860\n", "Train Epoch: 2 [2000/8000 (25%)]\tLoss: 0.609607\n", "Train Epoch: 2 [3000/8000 (38%)]\tLoss: 0.609965\n", "Train Epoch: 2 [4000/8000 (50%)]\tLoss: 0.575748\n", "Train Epoch: 2 [5000/8000 (62%)]\tLoss: 0.589488\n", "Train Epoch: 2 [6000/8000 (75%)]\tLoss: 0.623795\n", "Train Epoch: 2 [7000/8000 (88%)]\tLoss: 0.580438\n", "\n", "Test set: Average loss: 0.5488, Accuracy: 1526/2000 (76.300%)\n", "\n", "Train Epoch: 3 [0/8000 (0%)]\tLoss: 0.560898\n", "Train Epoch: 3 [1000/8000 (12%)]\tLoss: 0.571261\n", "Train Epoch: 3 [2000/8000 (25%)]\tLoss: 0.540140\n", "Train Epoch: 3 [3000/8000 (38%)]\tLoss: 0.611341\n", "Train Epoch: 3 [4000/8000 (50%)]\tLoss: 0.604470\n", "Train Epoch: 3 [5000/8000 (62%)]\tLoss: 0.547775\n", "Train Epoch: 3 [6000/8000 (75%)]\tLoss: 0.620242\n", "Train Epoch: 3 [7000/8000 (88%)]\tLoss: 0.551052\n", "\n", "Test set: Average loss: 0.5134, Accuracy: 1536/2000 (76.800%)\n", "\n", "Train Epoch: 4 [0/8000 (0%)]\tLoss: 0.504537\n", "Train Epoch: 4 [1000/8000 (12%)]\tLoss: 0.512580\n", "Train Epoch: 4 [2000/8000 (25%)]\tLoss: 0.545692\n", "Train Epoch: 4 [3000/8000 (38%)]\tLoss: 0.534161\n", "Train Epoch: 4 [4000/8000 (50%)]\tLoss: 0.488619\n", "Train Epoch: 4 [5000/8000 (62%)]\tLoss: 0.592135\n", "Train Epoch: 4 [6000/8000 (75%)]\tLoss: 
0.550200\n", "Train Epoch: 4 [7000/8000 (88%)]\tLoss: 0.520150\n", "\n", "Test set: Average loss: 0.4982, Accuracy: 1529/2000 (76.450%)\n", "\n", "Train Epoch: 5 [0/8000 (0%)]\tLoss: 0.545439\n", "Train Epoch: 5 [1000/8000 (12%)]\tLoss: 0.553422\n", "Train Epoch: 5 [2000/8000 (25%)]\tLoss: 0.530373\n", "Train Epoch: 5 [3000/8000 (38%)]\tLoss: 0.522281\n", "Train Epoch: 5 [4000/8000 (50%)]\tLoss: 0.566266\n", "Train Epoch: 5 [5000/8000 (62%)]\tLoss: 0.526884\n", "Train Epoch: 5 [6000/8000 (75%)]\tLoss: 0.549719\n", "Train Epoch: 5 [7000/8000 (88%)]\tLoss: 0.517988\n", "\n", "Test set: Average loss: 0.4864, Accuracy: 1529/2000 (76.450%)\n", "\n", "Train Epoch: 6 [0/8000 (0%)]\tLoss: 0.480552\n", "Train Epoch: 6 [1000/8000 (12%)]\tLoss: 0.573093\n", "Train Epoch: 6 [2000/8000 (25%)]\tLoss: 0.575137\n", "Train Epoch: 6 [3000/8000 (38%)]\tLoss: 0.569813\n", "Train Epoch: 6 [4000/8000 (50%)]\tLoss: 0.444771\n", "Train Epoch: 6 [5000/8000 (62%)]\tLoss: 0.497351\n", "Train Epoch: 6 [6000/8000 (75%)]\tLoss: 0.527475\n", "Train Epoch: 6 [7000/8000 (88%)]\tLoss: 0.515924\n", "\n", "Test set: Average loss: 0.4803, Accuracy: 1529/2000 (76.450%)\n", "\n", "Train Epoch: 7 [0/8000 (0%)]\tLoss: 0.497473\n", "Train Epoch: 7 [1000/8000 (12%)]\tLoss: 0.530438\n", "Train Epoch: 7 [2000/8000 (25%)]\tLoss: 0.514741\n", "Train Epoch: 7 [3000/8000 (38%)]\tLoss: 0.527368\n", "Train Epoch: 7 [4000/8000 (50%)]\tLoss: 0.482035\n", "Train Epoch: 7 [5000/8000 (62%)]\tLoss: 0.485020\n", "Train Epoch: 7 [6000/8000 (75%)]\tLoss: 0.440541\n", "Train Epoch: 7 [7000/8000 (88%)]\tLoss: 0.483561\n", "\n", "Test set: Average loss: 0.4723, Accuracy: 1549/2000 (77.450%)\n", "\n", "Train Epoch: 8 [0/8000 (0%)]\tLoss: 0.462284\n", "Train Epoch: 8 [1000/8000 (12%)]\tLoss: 0.585840\n", "Train Epoch: 8 [2000/8000 (25%)]\tLoss: 0.454685\n", "Train Epoch: 8 [3000/8000 (38%)]\tLoss: 0.497827\n", "Train Epoch: 8 [4000/8000 (50%)]\tLoss: 0.532175\n", "Train Epoch: 8 [5000/8000 (62%)]\tLoss: 0.533099\n", "Train Epoch: 8 [6000/8000 (75%)]\tLoss: 0.513271\n", "Train Epoch: 8 [7000/8000 (88%)]\tLoss: 0.471093\n", "\n", "Test set: Average loss: 0.4704, Accuracy: 1545/2000 (77.250%)\n", "\n", "Train Epoch: 9 [0/8000 (0%)]\tLoss: 0.559074\n", "Train Epoch: 9 [1000/8000 (12%)]\tLoss: 0.408493\n", "Train Epoch: 9 [2000/8000 (25%)]\tLoss: 0.576147\n", "Train Epoch: 9 [3000/8000 (38%)]\tLoss: 0.516105\n", "Train Epoch: 9 [4000/8000 (50%)]\tLoss: 0.499826\n", "Train Epoch: 9 [5000/8000 (62%)]\tLoss: 0.421751\n", "Train Epoch: 9 [6000/8000 (75%)]\tLoss: 0.495686\n", "Train Epoch: 9 [7000/8000 (88%)]\tLoss: 0.434827\n", "\n", "Test set: Average loss: 0.4665, Accuracy: 1558/2000 (77.900%)\n", "\n", "Train Epoch: 10 [0/8000 (0%)]\tLoss: 0.477638\n", "Train Epoch: 10 [1000/8000 (12%)]\tLoss: 0.602242\n", "Train Epoch: 10 [2000/8000 (25%)]\tLoss: 0.509296\n", "Train Epoch: 10 [3000/8000 (38%)]\tLoss: 0.489183\n", "Train Epoch: 10 [4000/8000 (50%)]\tLoss: 0.520043\n", "Train Epoch: 10 [5000/8000 (62%)]\tLoss: 0.451816\n", "Train Epoch: 10 [6000/8000 (75%)]\tLoss: 0.499089\n", "Train Epoch: 10 [7000/8000 (88%)]\tLoss: 0.471891\n", "\n", "Test set: Average loss: 0.4686, Accuracy: 1541/2000 (77.050%)\n", "\n", "\n", " training DNN with 10000 data points and SGD lr=0.100000. 
\n", "\n", "Train Epoch: 1 [0/8000 (0%)]\tLoss: 0.685625\n", "Train Epoch: 1 [1000/8000 (12%)]\tLoss: 0.610574\n", "Train Epoch: 1 [2000/8000 (25%)]\tLoss: 0.605155\n", "Train Epoch: 1 [3000/8000 (38%)]\tLoss: 0.506837\n", "Train Epoch: 1 [4000/8000 (50%)]\tLoss: 0.513233\n", "Train Epoch: 1 [5000/8000 (62%)]\tLoss: 0.503912\n", "Train Epoch: 1 [6000/8000 (75%)]\tLoss: 0.533501\n", "Train Epoch: 1 [7000/8000 (88%)]\tLoss: 0.496268\n", "\n", "Test set: Average loss: 0.4800, Accuracy: 1534/2000 (76.700%)\n", "\n", "Train Epoch: 2 [0/8000 (0%)]\tLoss: 0.466775\n", "Train Epoch: 2 [1000/8000 (12%)]\tLoss: 0.497168\n", "Train Epoch: 2 [2000/8000 (25%)]\tLoss: 0.471591\n", "Train Epoch: 2 [3000/8000 (38%)]\tLoss: 0.448215\n", "Train Epoch: 2 [4000/8000 (50%)]\tLoss: 0.502071\n", "Train Epoch: 2 [5000/8000 (62%)]\tLoss: 0.412387\n", "Train Epoch: 2 [6000/8000 (75%)]\tLoss: 0.621073\n", "Train Epoch: 2 [7000/8000 (88%)]\tLoss: 0.429254\n", "\n", "Test set: Average loss: 0.4607, Accuracy: 1560/2000 (78.000%)\n", "\n", "Train Epoch: 3 [0/8000 (0%)]\tLoss: 0.383824\n", "Train Epoch: 3 [1000/8000 (12%)]\tLoss: 0.477260\n", "Train Epoch: 3 [2000/8000 (25%)]\tLoss: 0.541657\n", "Train Epoch: 3 [3000/8000 (38%)]\tLoss: 0.419515\n", "Train Epoch: 3 [4000/8000 (50%)]\tLoss: 0.534225\n", "Train Epoch: 3 [5000/8000 (62%)]\tLoss: 0.455988\n", "Train Epoch: 3 [6000/8000 (75%)]\tLoss: 0.336730\n", "Train Epoch: 3 [7000/8000 (88%)]\tLoss: 0.445602\n", "\n", "Test set: Average loss: 0.4537, Accuracy: 1578/2000 (78.900%)\n", "\n", "Train Epoch: 4 [0/8000 (0%)]\tLoss: 0.428422\n", "Train Epoch: 4 [1000/8000 (12%)]\tLoss: 0.546706\n", "Train Epoch: 4 [2000/8000 (25%)]\tLoss: 0.538695\n", "Train Epoch: 4 [3000/8000 (38%)]\tLoss: 0.472345\n", "Train Epoch: 4 [4000/8000 (50%)]\tLoss: 0.527527\n", "Train Epoch: 4 [5000/8000 (62%)]\tLoss: 0.572210\n", "Train Epoch: 4 [6000/8000 (75%)]\tLoss: 0.393512\n", "Train Epoch: 4 [7000/8000 (88%)]\tLoss: 0.508052\n", "\n", "Test set: Average loss: 0.4541, Accuracy: 1578/2000 (78.900%)\n", "\n", "Train Epoch: 5 [0/8000 (0%)]\tLoss: 0.498704\n", "Train Epoch: 5 [1000/8000 (12%)]\tLoss: 0.437050\n", "Train Epoch: 5 [2000/8000 (25%)]\tLoss: 0.484379\n", "Train Epoch: 5 [3000/8000 (38%)]\tLoss: 0.521299\n", "Train Epoch: 5 [4000/8000 (50%)]\tLoss: 0.464109\n", "Train Epoch: 5 [5000/8000 (62%)]\tLoss: 0.439022\n", "Train Epoch: 5 [6000/8000 (75%)]\tLoss: 0.469260\n", "Train Epoch: 5 [7000/8000 (88%)]\tLoss: 0.493705\n", "\n", "Test set: Average loss: 0.4709, Accuracy: 1565/2000 (78.250%)\n", "\n", "Train Epoch: 6 [0/8000 (0%)]\tLoss: 0.531487\n", "Train Epoch: 6 [1000/8000 (12%)]\tLoss: 0.522983\n", "Train Epoch: 6 [2000/8000 (25%)]\tLoss: 0.504122\n", "Train Epoch: 6 [3000/8000 (38%)]\tLoss: 0.401215\n", "Train Epoch: 6 [4000/8000 (50%)]\tLoss: 0.449189\n", "Train Epoch: 6 [5000/8000 (62%)]\tLoss: 0.418882\n", "Train Epoch: 6 [6000/8000 (75%)]\tLoss: 0.461273\n", "Train Epoch: 6 [7000/8000 (88%)]\tLoss: 0.414933\n", "\n", "Test set: Average loss: 0.4688, Accuracy: 1552/2000 (77.600%)\n", "\n", "Train Epoch: 7 [0/8000 (0%)]\tLoss: 0.515866\n", "Train Epoch: 7 [1000/8000 (12%)]\tLoss: 0.434252\n", "Train Epoch: 7 [2000/8000 (25%)]\tLoss: 0.464060\n", "Train Epoch: 7 [3000/8000 (38%)]\tLoss: 0.471080\n", "Train Epoch: 7 [4000/8000 (50%)]\tLoss: 0.492378\n", "Train Epoch: 7 [5000/8000 (62%)]\tLoss: 0.491037\n", "Train Epoch: 7 [6000/8000 (75%)]\tLoss: 0.393833\n", "Train Epoch: 7 [7000/8000 (88%)]\tLoss: 0.436135\n", "\n", "Test set: Average loss: 0.4558, Accuracy: 1570/2000 (78.500%)\n", 
"\n", "Train Epoch: 8 [0/8000 (0%)]\tLoss: 0.412632\n", "Train Epoch: 8 [1000/8000 (12%)]\tLoss: 0.499017\n", "Train Epoch: 8 [2000/8000 (25%)]\tLoss: 0.443011\n", "Train Epoch: 8 [3000/8000 (38%)]\tLoss: 0.516305\n", "Train Epoch: 8 [4000/8000 (50%)]\tLoss: 0.513990\n", "Train Epoch: 8 [5000/8000 (62%)]\tLoss: 0.491332\n", "Train Epoch: 8 [6000/8000 (75%)]\tLoss: 0.474468\n", "Train Epoch: 8 [7000/8000 (88%)]\tLoss: 0.476098\n", "\n", "Test set: Average loss: 0.4600, Accuracy: 1563/2000 (78.150%)\n", "\n", "Train Epoch: 9 [0/8000 (0%)]\tLoss: 0.476434\n", "Train Epoch: 9 [1000/8000 (12%)]\tLoss: 0.396529\n", "Train Epoch: 9 [2000/8000 (25%)]\tLoss: 0.454722\n", "Train Epoch: 9 [3000/8000 (38%)]\tLoss: 0.483790\n", "Train Epoch: 9 [4000/8000 (50%)]\tLoss: 0.400906\n", "Train Epoch: 9 [5000/8000 (62%)]\tLoss: 0.371000\n", "Train Epoch: 9 [6000/8000 (75%)]\tLoss: 0.377229\n", "Train Epoch: 9 [7000/8000 (88%)]\tLoss: 0.417167\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "Test set: Average loss: 0.4647, Accuracy: 1553/2000 (77.650%)\n", "\n", "Train Epoch: 10 [0/8000 (0%)]\tLoss: 0.470852\n", "Train Epoch: 10 [1000/8000 (12%)]\tLoss: 0.362817\n", "Train Epoch: 10 [2000/8000 (25%)]\tLoss: 0.473564\n", "Train Epoch: 10 [3000/8000 (38%)]\tLoss: 0.456163\n", "Train Epoch: 10 [4000/8000 (50%)]\tLoss: 0.503372\n", "Train Epoch: 10 [5000/8000 (62%)]\tLoss: 0.534373\n", "Train Epoch: 10 [6000/8000 (75%)]\tLoss: 0.437734\n", "Train Epoch: 10 [7000/8000 (88%)]\tLoss: 0.540555\n", "\n", "Test set: Average loss: 0.4577, Accuracy: 1565/2000 (78.250%)\n", "\n", "Training on 80000 examples\n", "Using both high and low level features\n", "Testing on 20000 examples\n", "Using both high and low level features\n", "\n", " training DNN with 100000 data points and SGD lr=0.000010. 
\n", "\n", "Train Epoch: 1 [0/80000 (0%)]\tLoss: 0.709857\n", "Train Epoch: 1 [10000/80000 (12%)]\tLoss: 0.699108\n", "Train Epoch: 1 [20000/80000 (25%)]\tLoss: 0.705935\n", "Train Epoch: 1 [30000/80000 (38%)]\tLoss: 0.707634\n", "Train Epoch: 1 [40000/80000 (50%)]\tLoss: 0.704721\n", "Train Epoch: 1 [50000/80000 (62%)]\tLoss: 0.701020\n", "Train Epoch: 1 [60000/80000 (75%)]\tLoss: 0.701019\n", "Train Epoch: 1 [70000/80000 (88%)]\tLoss: 0.700481\n", "\n", "Test set: Average loss: 0.7008, Accuracy: 8467/20000 (42.335%)\n", "\n", "Train Epoch: 2 [0/80000 (0%)]\tLoss: 0.704691\n", "Train Epoch: 2 [10000/80000 (12%)]\tLoss: 0.705849\n", "Train Epoch: 2 [20000/80000 (25%)]\tLoss: 0.702281\n", "Train Epoch: 2 [30000/80000 (38%)]\tLoss: 0.702284\n", "Train Epoch: 2 [40000/80000 (50%)]\tLoss: 0.706607\n", "Train Epoch: 2 [50000/80000 (62%)]\tLoss: 0.700778\n", "Train Epoch: 2 [60000/80000 (75%)]\tLoss: 0.706767\n", "Train Epoch: 2 [70000/80000 (88%)]\tLoss: 0.697481\n", "\n", "Test set: Average loss: 0.7007, Accuracy: 8471/20000 (42.355%)\n", "\n", "Train Epoch: 3 [0/80000 (0%)]\tLoss: 0.702649\n", "Train Epoch: 3 [10000/80000 (12%)]\tLoss: 0.699560\n", "Train Epoch: 3 [20000/80000 (25%)]\tLoss: 0.701000\n", "Train Epoch: 3 [30000/80000 (38%)]\tLoss: 0.697289\n", "Train Epoch: 3 [40000/80000 (50%)]\tLoss: 0.698617\n", "Train Epoch: 3 [50000/80000 (62%)]\tLoss: 0.699536\n", "Train Epoch: 3 [60000/80000 (75%)]\tLoss: 0.701737\n", "Train Epoch: 3 [70000/80000 (88%)]\tLoss: 0.707562\n", "\n", "Test set: Average loss: 0.7005, Accuracy: 8478/20000 (42.390%)\n", "\n", "Train Epoch: 4 [0/80000 (0%)]\tLoss: 0.702440\n", "Train Epoch: 4 [10000/80000 (12%)]\tLoss: 0.701264\n", "Train Epoch: 4 [20000/80000 (25%)]\tLoss: 0.706735\n", "Train Epoch: 4 [30000/80000 (38%)]\tLoss: 0.705587\n", "Train Epoch: 4 [40000/80000 (50%)]\tLoss: 0.699598\n", "Train Epoch: 4 [50000/80000 (62%)]\tLoss: 0.704237\n", "Train Epoch: 4 [60000/80000 (75%)]\tLoss: 0.707695\n", "Train Epoch: 4 [70000/80000 (88%)]\tLoss: 0.704129\n", "\n", "Test set: Average loss: 0.7004, Accuracy: 8493/20000 (42.465%)\n", "\n", "Train Epoch: 5 [0/80000 (0%)]\tLoss: 0.703403\n", "Train Epoch: 5 [10000/80000 (12%)]\tLoss: 0.704349\n", "Train Epoch: 5 [20000/80000 (25%)]\tLoss: 0.700892\n", "Train Epoch: 5 [30000/80000 (38%)]\tLoss: 0.702444\n", "Train Epoch: 5 [40000/80000 (50%)]\tLoss: 0.707312\n", "Train Epoch: 5 [50000/80000 (62%)]\tLoss: 0.702573\n", "Train Epoch: 5 [60000/80000 (75%)]\tLoss: 0.694193\n", "Train Epoch: 5 [70000/80000 (88%)]\tLoss: 0.707019\n", "\n", "Test set: Average loss: 0.7003, Accuracy: 8487/20000 (42.435%)\n", "\n", "Train Epoch: 6 [0/80000 (0%)]\tLoss: 0.699891\n", "Train Epoch: 6 [10000/80000 (12%)]\tLoss: 0.700422\n", "Train Epoch: 6 [20000/80000 (25%)]\tLoss: 0.703033\n", "Train Epoch: 6 [30000/80000 (38%)]\tLoss: 0.700780\n", "Train Epoch: 6 [40000/80000 (50%)]\tLoss: 0.698106\n", "Train Epoch: 6 [50000/80000 (62%)]\tLoss: 0.704318\n", "Train Epoch: 6 [60000/80000 (75%)]\tLoss: 0.702799\n", "Train Epoch: 6 [70000/80000 (88%)]\tLoss: 0.705650\n", "\n", "Test set: Average loss: 0.7001, Accuracy: 8488/20000 (42.440%)\n", "\n", "Train Epoch: 7 [0/80000 (0%)]\tLoss: 0.697022\n", "Train Epoch: 7 [10000/80000 (12%)]\tLoss: 0.706780\n", "Train Epoch: 7 [20000/80000 (25%)]\tLoss: 0.703789\n", "Train Epoch: 7 [30000/80000 (38%)]\tLoss: 0.695989\n", "Train Epoch: 7 [40000/80000 (50%)]\tLoss: 0.703647\n", "Train Epoch: 7 [50000/80000 (62%)]\tLoss: 0.699872\n", "Train Epoch: 7 [60000/80000 (75%)]\tLoss: 0.696702\n", "Train Epoch: 7 
[70000/80000 (88%)]\tLoss: 0.695905\n", "\n", "Test set: Average loss: 0.7000, Accuracy: 8502/20000 (42.510%)\n", "\n", "Train Epoch: 8 [0/80000 (0%)]\tLoss: 0.704256\n", "Train Epoch: 8 [10000/80000 (12%)]\tLoss: 0.705376\n", "Train Epoch: 8 [20000/80000 (25%)]\tLoss: 0.706469\n", "Train Epoch: 8 [30000/80000 (38%)]\tLoss: 0.701247\n", "Train Epoch: 8 [40000/80000 (50%)]\tLoss: 0.700315\n", "Train Epoch: 8 [50000/80000 (62%)]\tLoss: 0.703419\n", "Train Epoch: 8 [60000/80000 (75%)]\tLoss: 0.694629\n", "Train Epoch: 8 [70000/80000 (88%)]\tLoss: 0.704856\n", "\n", "Test set: Average loss: 0.6999, Accuracy: 8523/20000 (42.615%)\n", "\n", "Train Epoch: 9 [0/80000 (0%)]\tLoss: 0.704544\n", "Train Epoch: 9 [10000/80000 (12%)]\tLoss: 0.695405\n", "Train Epoch: 9 [20000/80000 (25%)]\tLoss: 0.702356\n", "Train Epoch: 9 [30000/80000 (38%)]\tLoss: 0.702244\n", "Train Epoch: 9 [40000/80000 (50%)]\tLoss: 0.696443\n", "Train Epoch: 9 [50000/80000 (62%)]\tLoss: 0.704172\n", "Train Epoch: 9 [60000/80000 (75%)]\tLoss: 0.705754\n", "Train Epoch: 9 [70000/80000 (88%)]\tLoss: 0.700879\n", "\n", "Test set: Average loss: 0.6997, Accuracy: 8536/20000 (42.680%)\n", "\n", "Train Epoch: 10 [0/80000 (0%)]\tLoss: 0.698163\n", "Train Epoch: 10 [10000/80000 (12%)]\tLoss: 0.702479\n", "Train Epoch: 10 [20000/80000 (25%)]\tLoss: 0.697485\n", "Train Epoch: 10 [30000/80000 (38%)]\tLoss: 0.702398\n", "Train Epoch: 10 [40000/80000 (50%)]\tLoss: 0.711448\n", "Train Epoch: 10 [50000/80000 (62%)]\tLoss: 0.702527\n", "Train Epoch: 10 [60000/80000 (75%)]\tLoss: 0.711168\n", "Train Epoch: 10 [70000/80000 (88%)]\tLoss: 0.697541\n", "\n", "Test set: Average loss: 0.6996, Accuracy: 8548/20000 (42.740%)\n", "\n", "\n", " training DNN with 100000 data points and SGD lr=0.000100. \n", "\n", "Train Epoch: 1 [0/80000 (0%)]\tLoss: 0.715197\n", "Train Epoch: 1 [10000/80000 (12%)]\tLoss: 0.711921\n", "Train Epoch: 1 [20000/80000 (25%)]\tLoss: 0.729711\n", "Train Epoch: 1 [30000/80000 (38%)]\tLoss: 0.724807\n", "Train Epoch: 1 [40000/80000 (50%)]\tLoss: 0.727388\n", "Train Epoch: 1 [50000/80000 (62%)]\tLoss: 0.724497\n", "Train Epoch: 1 [60000/80000 (75%)]\tLoss: 0.725045\n", "Train Epoch: 1 [70000/80000 (88%)]\tLoss: 0.727475\n", "\n", "Test set: Average loss: 0.7087, Accuracy: 8248/20000 (41.240%)\n", "\n", "Train Epoch: 2 [0/80000 (0%)]\tLoss: 0.714319\n", "Train Epoch: 2 [10000/80000 (12%)]\tLoss: 0.731445\n", "Train Epoch: 2 [20000/80000 (25%)]\tLoss: 0.707774\n", "Train Epoch: 2 [30000/80000 (38%)]\tLoss: 0.714780\n", "Train Epoch: 2 [40000/80000 (50%)]\tLoss: 0.718108\n", "Train Epoch: 2 [50000/80000 (62%)]\tLoss: 0.720248\n", "Train Epoch: 2 [60000/80000 (75%)]\tLoss: 0.714495\n", "Train Epoch: 2 [70000/80000 (88%)]\tLoss: 0.715954\n", "\n", "Test set: Average loss: 0.7062, Accuracy: 8342/20000 (41.710%)\n", "\n", "Train Epoch: 3 [0/80000 (0%)]\tLoss: 0.717270\n", "Train Epoch: 3 [10000/80000 (12%)]\tLoss: 0.719187\n", "Train Epoch: 3 [20000/80000 (25%)]\tLoss: 0.706191\n", "Train Epoch: 3 [30000/80000 (38%)]\tLoss: 0.716768\n", "Train Epoch: 3 [40000/80000 (50%)]\tLoss: 0.716282\n", "Train Epoch: 3 [50000/80000 (62%)]\tLoss: 0.714398\n", "Train Epoch: 3 [60000/80000 (75%)]\tLoss: 0.716676\n", "Train Epoch: 3 [70000/80000 (88%)]\tLoss: 0.716338\n", "\n", "Test set: Average loss: 0.7040, Accuracy: 8496/20000 (42.480%)\n", "\n", "Train Epoch: 4 [0/80000 (0%)]\tLoss: 0.708547\n", "Train Epoch: 4 [10000/80000 (12%)]\tLoss: 0.712602\n", "Train Epoch: 4 [20000/80000 (25%)]\tLoss: 0.717987\n", "Train Epoch: 4 [30000/80000 (38%)]\tLoss: 
0.714138\n", "Train Epoch: 4 [40000/80000 (50%)]\tLoss: 0.714872\n", "Train Epoch: 4 [50000/80000 (62%)]\tLoss: 0.707068\n", "Train Epoch: 4 [60000/80000 (75%)]\tLoss: 0.709471\n", "Train Epoch: 4 [70000/80000 (88%)]\tLoss: 0.719670\n", "\n", "Test set: Average loss: 0.7020, Accuracy: 8648/20000 (43.240%)\n", "\n", "Train Epoch: 5 [0/80000 (0%)]\tLoss: 0.712123\n", "Train Epoch: 5 [10000/80000 (12%)]\tLoss: 0.705291\n", "Train Epoch: 5 [20000/80000 (25%)]\tLoss: 0.709001\n", "Train Epoch: 5 [30000/80000 (38%)]\tLoss: 0.702263\n", "Train Epoch: 5 [40000/80000 (50%)]\tLoss: 0.706440\n", "Train Epoch: 5 [50000/80000 (62%)]\tLoss: 0.704011\n", "Train Epoch: 5 [60000/80000 (75%)]\tLoss: 0.713797\n", "Train Epoch: 5 [70000/80000 (88%)]\tLoss: 0.713780\n", "\n", "Test set: Average loss: 0.7002, Accuracy: 8851/20000 (44.255%)\n", "\n", "Train Epoch: 6 [0/80000 (0%)]\tLoss: 0.708955\n", "Train Epoch: 6 [10000/80000 (12%)]\tLoss: 0.701124\n", "Train Epoch: 6 [20000/80000 (25%)]\tLoss: 0.706962\n", "Train Epoch: 6 [30000/80000 (38%)]\tLoss: 0.703147\n", "Train Epoch: 6 [40000/80000 (50%)]\tLoss: 0.713594\n", "Train Epoch: 6 [50000/80000 (62%)]\tLoss: 0.714497\n", "Train Epoch: 6 [60000/80000 (75%)]\tLoss: 0.703184\n", "Train Epoch: 6 [70000/80000 (88%)]\tLoss: 0.703132\n", "\n", "Test set: Average loss: 0.6985, Accuracy: 9027/20000 (45.135%)\n", "\n", "Train Epoch: 7 [0/80000 (0%)]\tLoss: 0.710458\n", "Train Epoch: 7 [10000/80000 (12%)]\tLoss: 0.706783\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Train Epoch: 7 [20000/80000 (25%)]\tLoss: 0.704416\n", "Train Epoch: 7 [30000/80000 (38%)]\tLoss: 0.708493\n", "Train Epoch: 7 [40000/80000 (50%)]\tLoss: 0.716165\n", "Train Epoch: 7 [50000/80000 (62%)]\tLoss: 0.706213\n", "Train Epoch: 7 [60000/80000 (75%)]\tLoss: 0.700028\n", "Train Epoch: 7 [70000/80000 (88%)]\tLoss: 0.713193\n", "\n", "Test set: Average loss: 0.6969, Accuracy: 9210/20000 (46.050%)\n", "\n", "Train Epoch: 8 [0/80000 (0%)]\tLoss: 0.708336\n", "Train Epoch: 8 [10000/80000 (12%)]\tLoss: 0.705779\n", "Train Epoch: 8 [20000/80000 (25%)]\tLoss: 0.708531\n", "Train Epoch: 8 [30000/80000 (38%)]\tLoss: 0.707671\n", "Train Epoch: 8 [40000/80000 (50%)]\tLoss: 0.712592\n", "Train Epoch: 8 [50000/80000 (62%)]\tLoss: 0.710731\n", "Train Epoch: 8 [60000/80000 (75%)]\tLoss: 0.708170\n", "Train Epoch: 8 [70000/80000 (88%)]\tLoss: 0.701300\n", "\n", "Test set: Average loss: 0.6954, Accuracy: 9392/20000 (46.960%)\n", "\n", "Train Epoch: 9 [0/80000 (0%)]\tLoss: 0.709463\n", "Train Epoch: 9 [10000/80000 (12%)]\tLoss: 0.704660\n", "Train Epoch: 9 [20000/80000 (25%)]\tLoss: 0.689310\n", "Train Epoch: 9 [30000/80000 (38%)]\tLoss: 0.703454\n", "Train Epoch: 9 [40000/80000 (50%)]\tLoss: 0.706950\n", "Train Epoch: 9 [50000/80000 (62%)]\tLoss: 0.701706\n", "Train Epoch: 9 [60000/80000 (75%)]\tLoss: 0.701360\n", "Train Epoch: 9 [70000/80000 (88%)]\tLoss: 0.700901\n", "\n", "Test set: Average loss: 0.6939, Accuracy: 9581/20000 (47.905%)\n", "\n", "Train Epoch: 10 [0/80000 (0%)]\tLoss: 0.709202\n", "Train Epoch: 10 [10000/80000 (12%)]\tLoss: 0.709092\n", "Train Epoch: 10 [20000/80000 (25%)]\tLoss: 0.702193\n", "Train Epoch: 10 [30000/80000 (38%)]\tLoss: 0.702133\n", "Train Epoch: 10 [40000/80000 (50%)]\tLoss: 0.701327\n", "Train Epoch: 10 [50000/80000 (62%)]\tLoss: 0.705861\n", "Train Epoch: 10 [60000/80000 (75%)]\tLoss: 0.698883\n", "Train Epoch: 10 [70000/80000 (88%)]\tLoss: 0.700197\n", "\n", "Test set: Average loss: 0.6924, Accuracy: 9760/20000 (48.800%)\n", "\n", "\n", " training DNN with 100000 
data points and SGD lr=0.001000. \n", "\n", "Train Epoch: 1 [0/80000 (0%)]\tLoss: 0.710853\n", "Train Epoch: 1 [10000/80000 (12%)]\tLoss: 0.703988\n", "Train Epoch: 1 [20000/80000 (25%)]\tLoss: 0.707492\n", "Train Epoch: 1 [30000/80000 (38%)]\tLoss: 0.699851\n", "Train Epoch: 1 [40000/80000 (50%)]\tLoss: 0.703508\n", "Train Epoch: 1 [50000/80000 (62%)]\tLoss: 0.701523\n", "Train Epoch: 1 [60000/80000 (75%)]\tLoss: 0.703516\n", "Train Epoch: 1 [70000/80000 (88%)]\tLoss: 0.686840\n", "\n", "Test set: Average loss: 0.6848, Accuracy: 11189/20000 (55.945%)\n", "\n", "Train Epoch: 2 [0/80000 (0%)]\tLoss: 0.693794\n", "Train Epoch: 2 [10000/80000 (12%)]\tLoss: 0.690592\n", "Train Epoch: 2 [20000/80000 (25%)]\tLoss: 0.689504\n", "Train Epoch: 2 [30000/80000 (38%)]\tLoss: 0.687438\n", "Train Epoch: 2 [40000/80000 (50%)]\tLoss: 0.690289\n", "Train Epoch: 2 [50000/80000 (62%)]\tLoss: 0.688909\n", "Train Epoch: 2 [60000/80000 (75%)]\tLoss: 0.684075\n", "Train Epoch: 2 [70000/80000 (88%)]\tLoss: 0.691422\n", "\n", "Test set: Average loss: 0.6729, Accuracy: 13286/20000 (66.430%)\n", "\n", "Train Epoch: 3 [0/80000 (0%)]\tLoss: 0.690993\n", "Train Epoch: 3 [10000/80000 (12%)]\tLoss: 0.689412\n", "Train Epoch: 3 [20000/80000 (25%)]\tLoss: 0.680717\n", "Train Epoch: 3 [30000/80000 (38%)]\tLoss: 0.681917\n", "Train Epoch: 3 [40000/80000 (50%)]\tLoss: 0.674909\n", "Train Epoch: 3 [50000/80000 (62%)]\tLoss: 0.683230\n", "Train Epoch: 3 [60000/80000 (75%)]\tLoss: 0.665544\n", "Train Epoch: 3 [70000/80000 (88%)]\tLoss: 0.685835\n", "\n", "Test set: Average loss: 0.6628, Accuracy: 14070/20000 (70.350%)\n", "\n", "Train Epoch: 4 [0/80000 (0%)]\tLoss: 0.677483\n", "Train Epoch: 4 [10000/80000 (12%)]\tLoss: 0.681490\n", "Train Epoch: 4 [20000/80000 (25%)]\tLoss: 0.677488\n", "Train Epoch: 4 [30000/80000 (38%)]\tLoss: 0.667637\n", "Train Epoch: 4 [40000/80000 (50%)]\tLoss: 0.669382\n", "Train Epoch: 4 [50000/80000 (62%)]\tLoss: 0.670441\n", "Train Epoch: 4 [60000/80000 (75%)]\tLoss: 0.664033\n", "Train Epoch: 4 [70000/80000 (88%)]\tLoss: 0.669917\n", "\n", "Test set: Average loss: 0.6535, Accuracy: 14414/20000 (72.070%)\n", "\n", "Train Epoch: 5 [0/80000 (0%)]\tLoss: 0.662508\n", "Train Epoch: 5 [10000/80000 (12%)]\tLoss: 0.663022\n", "Train Epoch: 5 [20000/80000 (25%)]\tLoss: 0.665074\n", "Train Epoch: 5 [30000/80000 (38%)]\tLoss: 0.660559\n", "Train Epoch: 5 [40000/80000 (50%)]\tLoss: 0.666832\n", "Train Epoch: 5 [50000/80000 (62%)]\tLoss: 0.664970\n", "Train Epoch: 5 [60000/80000 (75%)]\tLoss: 0.657169\n", "Train Epoch: 5 [70000/80000 (88%)]\tLoss: 0.667754\n", "\n", "Test set: Average loss: 0.6448, Accuracy: 14529/20000 (72.645%)\n", "\n", "Train Epoch: 6 [0/80000 (0%)]\tLoss: 0.662098\n", "Train Epoch: 6 [10000/80000 (12%)]\tLoss: 0.661854\n", "Train Epoch: 6 [20000/80000 (25%)]\tLoss: 0.658346\n", "Train Epoch: 6 [30000/80000 (38%)]\tLoss: 0.659743\n", "Train Epoch: 6 [40000/80000 (50%)]\tLoss: 0.659351\n", "Train Epoch: 6 [50000/80000 (62%)]\tLoss: 0.651273\n", "Train Epoch: 6 [60000/80000 (75%)]\tLoss: 0.648752\n", "Train Epoch: 6 [70000/80000 (88%)]\tLoss: 0.650942\n", "\n", "Test set: Average loss: 0.6363, Accuracy: 14627/20000 (73.135%)\n", "\n", "Train Epoch: 7 [0/80000 (0%)]\tLoss: 0.650276\n", "Train Epoch: 7 [10000/80000 (12%)]\tLoss: 0.642630\n", "Train Epoch: 7 [20000/80000 (25%)]\tLoss: 0.647791\n", "Train Epoch: 7 [30000/80000 (38%)]\tLoss: 0.652298\n", "Train Epoch: 7 [40000/80000 (50%)]\tLoss: 0.638959\n", "Train Epoch: 7 [50000/80000 (62%)]\tLoss: 0.639897\n", "Train Epoch: 7 [60000/80000 
(75%)]\tLoss: 0.638109\n", "Train Epoch: 7 [70000/80000 (88%)]\tLoss: 0.644963\n", "\n", "Test set: Average loss: 0.6282, Accuracy: 14665/20000 (73.325%)\n", "\n", "Train Epoch: 8 [0/80000 (0%)]\tLoss: 0.651271\n", "Train Epoch: 8 [10000/80000 (12%)]\tLoss: 0.638376\n", "Train Epoch: 8 [20000/80000 (25%)]\tLoss: 0.645874\n", "Train Epoch: 8 [30000/80000 (38%)]\tLoss: 0.638896\n", "Train Epoch: 8 [40000/80000 (50%)]\tLoss: 0.637035\n", "Train Epoch: 8 [50000/80000 (62%)]\tLoss: 0.636409\n", "Train Epoch: 8 [60000/80000 (75%)]\tLoss: 0.636783\n", "Train Epoch: 8 [70000/80000 (88%)]\tLoss: 0.644974\n", "\n", "Test set: Average loss: 0.6201, Accuracy: 14714/20000 (73.570%)\n", "\n", "Train Epoch: 9 [0/80000 (0%)]\tLoss: 0.645297\n", "Train Epoch: 9 [10000/80000 (12%)]\tLoss: 0.637906\n", "Train Epoch: 9 [20000/80000 (25%)]\tLoss: 0.640790\n", "Train Epoch: 9 [30000/80000 (38%)]\tLoss: 0.627469\n", "Train Epoch: 9 [40000/80000 (50%)]\tLoss: 0.638363\n", "Train Epoch: 9 [50000/80000 (62%)]\tLoss: 0.633748\n", "Train Epoch: 9 [60000/80000 (75%)]\tLoss: 0.643103\n", "Train Epoch: 9 [70000/80000 (88%)]\tLoss: 0.636656\n", "\n", "Test set: Average loss: 0.6121, Accuracy: 14779/20000 (73.895%)\n", "\n", "Train Epoch: 10 [0/80000 (0%)]\tLoss: 0.633914\n", "Train Epoch: 10 [10000/80000 (12%)]\tLoss: 0.632493\n", "Train Epoch: 10 [20000/80000 (25%)]\tLoss: 0.627166\n", "Train Epoch: 10 [30000/80000 (38%)]\tLoss: 0.632092\n", "Train Epoch: 10 [40000/80000 (50%)]\tLoss: 0.636172\n", "Train Epoch: 10 [50000/80000 (62%)]\tLoss: 0.618695\n", "Train Epoch: 10 [60000/80000 (75%)]\tLoss: 0.630732\n", "Train Epoch: 10 [70000/80000 (88%)]\tLoss: 0.642953\n", "\n", "Test set: Average loss: 0.6047, Accuracy: 14810/20000 (74.050%)\n", "\n", "\n", " training DNN with 100000 data points and SGD lr=0.010000. 
\n", "\n", "Train Epoch: 1 [0/80000 (0%)]\tLoss: 0.697973\n", "Train Epoch: 1 [10000/80000 (12%)]\tLoss: 0.689287\n", "Train Epoch: 1 [20000/80000 (25%)]\tLoss: 0.678359\n", "Train Epoch: 1 [30000/80000 (38%)]\tLoss: 0.670037\n", "Train Epoch: 1 [40000/80000 (50%)]\tLoss: 0.662208\n", "Train Epoch: 1 [50000/80000 (62%)]\tLoss: 0.655228\n", "Train Epoch: 1 [60000/80000 (75%)]\tLoss: 0.648531\n", "Train Epoch: 1 [70000/80000 (88%)]\tLoss: 0.635769\n", "\n", "Test set: Average loss: 0.6114, Accuracy: 14907/20000 (74.535%)\n", "\n", "Train Epoch: 2 [0/80000 (0%)]\tLoss: 0.636103\n", "Train Epoch: 2 [10000/80000 (12%)]\tLoss: 0.626280\n", "Train Epoch: 2 [20000/80000 (25%)]\tLoss: 0.612292\n", "Train Epoch: 2 [30000/80000 (38%)]\tLoss: 0.616145\n", "Train Epoch: 2 [40000/80000 (50%)]\tLoss: 0.607718\n", "Train Epoch: 2 [50000/80000 (62%)]\tLoss: 0.587785\n", "Train Epoch: 2 [60000/80000 (75%)]\tLoss: 0.613270\n", "Train Epoch: 2 [70000/80000 (88%)]\tLoss: 0.579698\n", "\n", "Test set: Average loss: 0.5489, Accuracy: 15207/20000 (76.035%)\n", "\n", "Train Epoch: 3 [0/80000 (0%)]\tLoss: 0.588873\n", "Train Epoch: 3 [10000/80000 (12%)]\tLoss: 0.569671\n", "Train Epoch: 3 [20000/80000 (25%)]\tLoss: 0.578632\n", "Train Epoch: 3 [30000/80000 (38%)]\tLoss: 0.576780\n", "Train Epoch: 3 [40000/80000 (50%)]\tLoss: 0.582997\n", "Train Epoch: 3 [50000/80000 (62%)]\tLoss: 0.574457\n", "Train Epoch: 3 [60000/80000 (75%)]\tLoss: 0.543472\n", "Train Epoch: 3 [70000/80000 (88%)]\tLoss: 0.528735\n", "\n", "Test set: Average loss: 0.5137, Accuracy: 15339/20000 (76.695%)\n", "\n", "Train Epoch: 4 [0/80000 (0%)]\tLoss: 0.551613\n", "Train Epoch: 4 [10000/80000 (12%)]\tLoss: 0.543068\n", "Train Epoch: 4 [20000/80000 (25%)]\tLoss: 0.548906\n", "Train Epoch: 4 [30000/80000 (38%)]\tLoss: 0.562617\n", "Train Epoch: 4 [40000/80000 (50%)]\tLoss: 0.515420\n", "Train Epoch: 4 [50000/80000 (62%)]\tLoss: 0.561311\n", "Train Epoch: 4 [60000/80000 (75%)]\tLoss: 0.559609\n", "Train Epoch: 4 [70000/80000 (88%)]\tLoss: 0.551798\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "Test set: Average loss: 0.4963, Accuracy: 15470/20000 (77.350%)\n", "\n", "Train Epoch: 5 [0/80000 (0%)]\tLoss: 0.549332\n", "Train Epoch: 5 [10000/80000 (12%)]\tLoss: 0.525346\n", "Train Epoch: 5 [20000/80000 (25%)]\tLoss: 0.552634\n", "Train Epoch: 5 [30000/80000 (38%)]\tLoss: 0.563371\n", "Train Epoch: 5 [40000/80000 (50%)]\tLoss: 0.564668\n", "Train Epoch: 5 [50000/80000 (62%)]\tLoss: 0.568728\n", "Train Epoch: 5 [60000/80000 (75%)]\tLoss: 0.560736\n", "Train Epoch: 5 [70000/80000 (88%)]\tLoss: 0.501028\n", "\n", "Test set: Average loss: 0.4865, Accuracy: 15535/20000 (77.675%)\n", "\n", "Train Epoch: 6 [0/80000 (0%)]\tLoss: 0.564258\n", "Train Epoch: 6 [10000/80000 (12%)]\tLoss: 0.520596\n", "Train Epoch: 6 [20000/80000 (25%)]\tLoss: 0.547322\n", "Train Epoch: 6 [30000/80000 (38%)]\tLoss: 0.515506\n", "Train Epoch: 6 [40000/80000 (50%)]\tLoss: 0.550544\n", "Train Epoch: 6 [50000/80000 (62%)]\tLoss: 0.505403\n", "Train Epoch: 6 [60000/80000 (75%)]\tLoss: 0.534858\n", "Train Epoch: 6 [70000/80000 (88%)]\tLoss: 0.502775\n", "\n", "Test set: Average loss: 0.4789, Accuracy: 15588/20000 (77.940%)\n", "\n", "Train Epoch: 7 [0/80000 (0%)]\tLoss: 0.530125\n", "Train Epoch: 7 [10000/80000 (12%)]\tLoss: 0.528042\n", "Train Epoch: 7 [20000/80000 (25%)]\tLoss: 0.523079\n", "Train Epoch: 7 [30000/80000 (38%)]\tLoss: 0.482682\n", "Train Epoch: 7 [40000/80000 (50%)]\tLoss: 0.503262\n", "Train Epoch: 7 [50000/80000 (62%)]\tLoss: 0.516694\n", "Train 
Epoch: 7 [60000/80000 (75%)]\tLoss: 0.533552\n", "Train Epoch: 7 [70000/80000 (88%)]\tLoss: 0.510181\n", "\n", "Test set: Average loss: 0.4749, Accuracy: 15621/20000 (78.105%)\n", "\n", "Train Epoch: 8 [0/80000 (0%)]\tLoss: 0.506149\n", "Train Epoch: 8 [10000/80000 (12%)]\tLoss: 0.521474\n", "Train Epoch: 8 [20000/80000 (25%)]\tLoss: 0.491547\n", "Train Epoch: 8 [30000/80000 (38%)]\tLoss: 0.511403\n", "Train Epoch: 8 [40000/80000 (50%)]\tLoss: 0.500965\n", "Train Epoch: 8 [50000/80000 (62%)]\tLoss: 0.478549\n", "Train Epoch: 8 [60000/80000 (75%)]\tLoss: 0.516275\n", "Train Epoch: 8 [70000/80000 (88%)]\tLoss: 0.492698\n", "\n", "Test set: Average loss: 0.4703, Accuracy: 15684/20000 (78.420%)\n", "\n", "Train Epoch: 9 [0/80000 (0%)]\tLoss: 0.531255\n", "Train Epoch: 9 [10000/80000 (12%)]\tLoss: 0.506924\n", "Train Epoch: 9 [20000/80000 (25%)]\tLoss: 0.502441\n", "Train Epoch: 9 [30000/80000 (38%)]\tLoss: 0.494101\n", "Train Epoch: 9 [40000/80000 (50%)]\tLoss: 0.483607\n", "Train Epoch: 9 [50000/80000 (62%)]\tLoss: 0.531680\n", "Train Epoch: 9 [60000/80000 (75%)]\tLoss: 0.490221\n", "Train Epoch: 9 [70000/80000 (88%)]\tLoss: 0.494850\n", "\n", "Test set: Average loss: 0.4676, Accuracy: 15705/20000 (78.525%)\n", "\n", "Train Epoch: 10 [0/80000 (0%)]\tLoss: 0.505682\n", "Train Epoch: 10 [10000/80000 (12%)]\tLoss: 0.489378\n", "Train Epoch: 10 [20000/80000 (25%)]\tLoss: 0.496558\n", "Train Epoch: 10 [30000/80000 (38%)]\tLoss: 0.510254\n", "Train Epoch: 10 [40000/80000 (50%)]\tLoss: 0.496754\n", "Train Epoch: 10 [50000/80000 (62%)]\tLoss: 0.529930\n", "Train Epoch: 10 [60000/80000 (75%)]\tLoss: 0.502369\n", "Train Epoch: 10 [70000/80000 (88%)]\tLoss: 0.487612\n", "\n", "Test set: Average loss: 0.4658, Accuracy: 15719/20000 (78.595%)\n", "\n", "\n", " training DNN with 100000 data points and SGD lr=0.100000. 
\n", "\n", "Train Epoch: 1 [0/80000 (0%)]\tLoss: 0.705091\n", "Train Epoch: 1 [10000/80000 (12%)]\tLoss: 0.641505\n", "Train Epoch: 1 [20000/80000 (25%)]\tLoss: 0.580319\n", "Train Epoch: 1 [30000/80000 (38%)]\tLoss: 0.557262\n", "Train Epoch: 1 [40000/80000 (50%)]\tLoss: 0.549322\n", "Train Epoch: 1 [50000/80000 (62%)]\tLoss: 0.527447\n", "Train Epoch: 1 [60000/80000 (75%)]\tLoss: 0.502881\n", "Train Epoch: 1 [70000/80000 (88%)]\tLoss: 0.506342\n", "\n", "Test set: Average loss: 0.4698, Accuracy: 15614/20000 (78.070%)\n", "\n", "Train Epoch: 2 [0/80000 (0%)]\tLoss: 0.493702\n", "Train Epoch: 2 [10000/80000 (12%)]\tLoss: 0.505257\n", "Train Epoch: 2 [20000/80000 (25%)]\tLoss: 0.481080\n", "Train Epoch: 2 [30000/80000 (38%)]\tLoss: 0.512082\n", "Train Epoch: 2 [40000/80000 (50%)]\tLoss: 0.471982\n", "Train Epoch: 2 [50000/80000 (62%)]\tLoss: 0.506896\n", "Train Epoch: 2 [60000/80000 (75%)]\tLoss: 0.497945\n", "Train Epoch: 2 [70000/80000 (88%)]\tLoss: 0.500737\n", "\n", "Test set: Average loss: 0.4553, Accuracy: 15809/20000 (79.045%)\n", "\n", "Train Epoch: 3 [0/80000 (0%)]\tLoss: 0.487297\n", "Train Epoch: 3 [10000/80000 (12%)]\tLoss: 0.472463\n", "Train Epoch: 3 [20000/80000 (25%)]\tLoss: 0.457577\n", "Train Epoch: 3 [30000/80000 (38%)]\tLoss: 0.479991\n", "Train Epoch: 3 [40000/80000 (50%)]\tLoss: 0.494305\n", "Train Epoch: 3 [50000/80000 (62%)]\tLoss: 0.464404\n", "Train Epoch: 3 [60000/80000 (75%)]\tLoss: 0.479279\n", "Train Epoch: 3 [70000/80000 (88%)]\tLoss: 0.459238\n", "\n", "Test set: Average loss: 0.4519, Accuracy: 15826/20000 (79.130%)\n", "\n", "Train Epoch: 4 [0/80000 (0%)]\tLoss: 0.466627\n", "Train Epoch: 4 [10000/80000 (12%)]\tLoss: 0.471862\n", "Train Epoch: 4 [20000/80000 (25%)]\tLoss: 0.451782\n", "Train Epoch: 4 [30000/80000 (38%)]\tLoss: 0.476699\n", "Train Epoch: 4 [40000/80000 (50%)]\tLoss: 0.480063\n", "Train Epoch: 4 [50000/80000 (62%)]\tLoss: 0.461674\n", "Train Epoch: 4 [60000/80000 (75%)]\tLoss: 0.474348\n", "Train Epoch: 4 [70000/80000 (88%)]\tLoss: 0.451285\n", "\n", "Test set: Average loss: 0.4491, Accuracy: 15872/20000 (79.360%)\n", "\n", "Train Epoch: 5 [0/80000 (0%)]\tLoss: 0.454546\n", "Train Epoch: 5 [10000/80000 (12%)]\tLoss: 0.431953\n", "Train Epoch: 5 [20000/80000 (25%)]\tLoss: 0.423571\n", "Train Epoch: 5 [30000/80000 (38%)]\tLoss: 0.458014\n", "Train Epoch: 5 [40000/80000 (50%)]\tLoss: 0.449383\n", "Train Epoch: 5 [50000/80000 (62%)]\tLoss: 0.441713\n", "Train Epoch: 5 [60000/80000 (75%)]\tLoss: 0.455981\n", "Train Epoch: 5 [70000/80000 (88%)]\tLoss: 0.445281\n", "\n", "Test set: Average loss: 0.4477, Accuracy: 15878/20000 (79.390%)\n", "\n", "Train Epoch: 6 [0/80000 (0%)]\tLoss: 0.476829\n", "Train Epoch: 6 [10000/80000 (12%)]\tLoss: 0.440844\n", "Train Epoch: 6 [20000/80000 (25%)]\tLoss: 0.450948\n", "Train Epoch: 6 [30000/80000 (38%)]\tLoss: 0.464564\n", "Train Epoch: 6 [40000/80000 (50%)]\tLoss: 0.452788\n", "Train Epoch: 6 [50000/80000 (62%)]\tLoss: 0.458984\n", "Train Epoch: 6 [60000/80000 (75%)]\tLoss: 0.443346\n", "Train Epoch: 6 [70000/80000 (88%)]\tLoss: 0.461984\n", "\n", "Test set: Average loss: 0.4460, Accuracy: 15895/20000 (79.475%)\n", "\n", "Train Epoch: 7 [0/80000 (0%)]\tLoss: 0.458578\n", "Train Epoch: 7 [10000/80000 (12%)]\tLoss: 0.436979\n", "Train Epoch: 7 [20000/80000 (25%)]\tLoss: 0.458328\n", "Train Epoch: 7 [30000/80000 (38%)]\tLoss: 0.457037\n", "Train Epoch: 7 [40000/80000 (50%)]\tLoss: 0.483248\n", "Train Epoch: 7 [50000/80000 (62%)]\tLoss: 0.462219\n", "Train Epoch: 7 [60000/80000 (75%)]\tLoss: 0.442516\n", "Train 
Epoch: 7 [70000/80000 (88%)]\tLoss: 0.446028\n", "\n", "Test set: Average loss: 0.4449, Accuracy: 15916/20000 (79.580%)\n", "\n", "Train Epoch: 8 [0/80000 (0%)]\tLoss: 0.451255\n", "Train Epoch: 8 [10000/80000 (12%)]\tLoss: 0.433276\n", "Train Epoch: 8 [20000/80000 (25%)]\tLoss: 0.485846\n", "Train Epoch: 8 [30000/80000 (38%)]\tLoss: 0.445357\n", "Train Epoch: 8 [40000/80000 (50%)]\tLoss: 0.429605\n", "Train Epoch: 8 [50000/80000 (62%)]\tLoss: 0.443655\n", "Train Epoch: 8 [60000/80000 (75%)]\tLoss: 0.436730\n", "Train Epoch: 8 [70000/80000 (88%)]\tLoss: 0.469937\n", "\n", "Test set: Average loss: 0.4445, Accuracy: 15914/20000 (79.570%)\n", "\n", "Train Epoch: 9 [0/80000 (0%)]\tLoss: 0.463443\n", "Train Epoch: 9 [10000/80000 (12%)]\tLoss: 0.460738\n", "Train Epoch: 9 [20000/80000 (25%)]\tLoss: 0.430145\n", "Train Epoch: 9 [30000/80000 (38%)]\tLoss: 0.451020\n", "Train Epoch: 9 [40000/80000 (50%)]\tLoss: 0.462381\n", "Train Epoch: 9 [50000/80000 (62%)]\tLoss: 0.466396\n", "Train Epoch: 9 [60000/80000 (75%)]\tLoss: 0.448938\n", "Train Epoch: 9 [70000/80000 (88%)]\tLoss: 0.468543\n", "\n", "Test set: Average loss: 0.4433, Accuracy: 15933/20000 (79.665%)\n", "\n", "Train Epoch: 10 [0/80000 (0%)]\tLoss: 0.433695\n", "Train Epoch: 10 [10000/80000 (12%)]\tLoss: 0.487441\n", "Train Epoch: 10 [20000/80000 (25%)]\tLoss: 0.454066\n", "Train Epoch: 10 [30000/80000 (38%)]\tLoss: 0.450409\n", "Train Epoch: 10 [40000/80000 (50%)]\tLoss: 0.455621\n", "Train Epoch: 10 [50000/80000 (62%)]\tLoss: 0.426238\n", "Train Epoch: 10 [60000/80000 (75%)]\tLoss: 0.462113\n", "Train Epoch: 10 [70000/80000 (88%)]\tLoss: 0.440993\n", "\n", "Test set: Average loss: 0.4436, Accuracy: 15917/20000 (79.585%)\n", "\n", "Training on 160000 examples\n", "Using both high and low level features\n", "Testing on 40000 examples\n", "Using both high and low level features\n", "\n", " training DNN with 200000 data points and SGD lr=0.000010. 
\n", "\n", "Train Epoch: 1 [0/160000 (0%)]\tLoss: 0.697536\n", "Train Epoch: 1 [20000/160000 (12%)]\tLoss: 0.694769\n", "Train Epoch: 1 [40000/160000 (25%)]\tLoss: 0.695942\n", "Train Epoch: 1 [60000/160000 (38%)]\tLoss: 0.695974\n", "Train Epoch: 1 [80000/160000 (50%)]\tLoss: 0.687821\n", "Train Epoch: 1 [100000/160000 (62%)]\tLoss: 0.689334\n", "Train Epoch: 1 [120000/160000 (75%)]\tLoss: 0.695128\n", "Train Epoch: 1 [140000/160000 (88%)]\tLoss: 0.693733\n", "\n", "Test set: Average loss: 0.6855, Accuracy: 22418/40000 (56.045%)\n", "\n", "Train Epoch: 2 [0/160000 (0%)]\tLoss: 0.690398\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Train Epoch: 2 [20000/160000 (12%)]\tLoss: 0.694173\n", "Train Epoch: 2 [40000/160000 (25%)]\tLoss: 0.692116\n", "Train Epoch: 2 [60000/160000 (38%)]\tLoss: 0.691964\n", "Train Epoch: 2 [80000/160000 (50%)]\tLoss: 0.691619\n", "Train Epoch: 2 [100000/160000 (62%)]\tLoss: 0.690516\n", "Train Epoch: 2 [120000/160000 (75%)]\tLoss: 0.693235\n", "Train Epoch: 2 [140000/160000 (88%)]\tLoss: 0.694582\n", "\n", "Test set: Average loss: 0.6854, Accuracy: 22448/40000 (56.120%)\n", "\n", "Train Epoch: 3 [0/160000 (0%)]\tLoss: 0.696227\n", "Train Epoch: 3 [20000/160000 (12%)]\tLoss: 0.694529\n", "Train Epoch: 3 [40000/160000 (25%)]\tLoss: 0.690709\n", "Train Epoch: 3 [60000/160000 (38%)]\tLoss: 0.689253\n", "Train Epoch: 3 [80000/160000 (50%)]\tLoss: 0.690921\n", "Train Epoch: 3 [100000/160000 (62%)]\tLoss: 0.685167\n", "Train Epoch: 3 [120000/160000 (75%)]\tLoss: 0.692460\n", "Train Epoch: 3 [140000/160000 (88%)]\tLoss: 0.686860\n", "\n", "Test set: Average loss: 0.6853, Accuracy: 22474/40000 (56.185%)\n", "\n", "Train Epoch: 4 [0/160000 (0%)]\tLoss: 0.688780\n", "Train Epoch: 4 [20000/160000 (12%)]\tLoss: 0.693148\n", "Train Epoch: 4 [40000/160000 (25%)]\tLoss: 0.692194\n", "Train Epoch: 4 [60000/160000 (38%)]\tLoss: 0.692709\n", "Train Epoch: 4 [80000/160000 (50%)]\tLoss: 0.693655\n", "Train Epoch: 4 [100000/160000 (62%)]\tLoss: 0.690548\n", "Train Epoch: 4 [120000/160000 (75%)]\tLoss: 0.690444\n", "Train Epoch: 4 [140000/160000 (88%)]\tLoss: 0.689332\n", "\n", "Test set: Average loss: 0.6851, Accuracy: 22497/40000 (56.242%)\n", "\n", "Train Epoch: 5 [0/160000 (0%)]\tLoss: 0.688800\n", "Train Epoch: 5 [20000/160000 (12%)]\tLoss: 0.692959\n", "Train Epoch: 5 [40000/160000 (25%)]\tLoss: 0.690799\n", "Train Epoch: 5 [60000/160000 (38%)]\tLoss: 0.691604\n", "Train Epoch: 5 [80000/160000 (50%)]\tLoss: 0.693255\n", "Train Epoch: 5 [100000/160000 (62%)]\tLoss: 0.687228\n", "Train Epoch: 5 [120000/160000 (75%)]\tLoss: 0.689395\n", "Train Epoch: 5 [140000/160000 (88%)]\tLoss: 0.695083\n", "\n", "Test set: Average loss: 0.6850, Accuracy: 22529/40000 (56.322%)\n", "\n", "Train Epoch: 6 [0/160000 (0%)]\tLoss: 0.689608\n", "Train Epoch: 6 [20000/160000 (12%)]\tLoss: 0.685819\n", "Train Epoch: 6 [40000/160000 (25%)]\tLoss: 0.688747\n", "Train Epoch: 6 [60000/160000 (38%)]\tLoss: 0.692951\n", "Train Epoch: 6 [80000/160000 (50%)]\tLoss: 0.695513\n", "Train Epoch: 6 [100000/160000 (62%)]\tLoss: 0.698390\n", "Train Epoch: 6 [120000/160000 (75%)]\tLoss: 0.697410\n", "Train Epoch: 6 [140000/160000 (88%)]\tLoss: 0.694441\n", "\n", "Test set: Average loss: 0.6849, Accuracy: 22563/40000 (56.407%)\n", "\n", "Train Epoch: 7 [0/160000 (0%)]\tLoss: 0.695451\n", "Train Epoch: 7 [20000/160000 (12%)]\tLoss: 0.690968\n", "Train Epoch: 7 [40000/160000 (25%)]\tLoss: 0.694902\n", "Train Epoch: 7 [60000/160000 (38%)]\tLoss: 0.696121\n", "Train Epoch: 7 [80000/160000 (50%)]\tLoss: 
0.692092\n", "Train Epoch: 7 [100000/160000 (62%)]\tLoss: 0.690247\n", "Train Epoch: 7 [120000/160000 (75%)]\tLoss: 0.695067\n", "Train Epoch: 7 [140000/160000 (88%)]\tLoss: 0.691260\n", "\n", "Test set: Average loss: 0.6848, Accuracy: 22594/40000 (56.485%)\n", "\n", "Train Epoch: 8 [0/160000 (0%)]\tLoss: 0.691153\n", "Train Epoch: 8 [20000/160000 (12%)]\tLoss: 0.691588\n", "Train Epoch: 8 [40000/160000 (25%)]\tLoss: 0.692792\n", "Train Epoch: 8 [60000/160000 (38%)]\tLoss: 0.695482\n", "Train Epoch: 8 [80000/160000 (50%)]\tLoss: 0.694909\n", "Train Epoch: 8 [100000/160000 (62%)]\tLoss: 0.694001\n", "Train Epoch: 8 [120000/160000 (75%)]\tLoss: 0.691612\n", "Train Epoch: 8 [140000/160000 (88%)]\tLoss: 0.689127\n", "\n", "Test set: Average loss: 0.6846, Accuracy: 22617/40000 (56.542%)\n", "\n", "Train Epoch: 9 [0/160000 (0%)]\tLoss: 0.699426\n", "Train Epoch: 9 [20000/160000 (12%)]\tLoss: 0.686623\n", "Train Epoch: 9 [40000/160000 (25%)]\tLoss: 0.695512\n", "Train Epoch: 9 [60000/160000 (38%)]\tLoss: 0.692502\n", "Train Epoch: 9 [80000/160000 (50%)]\tLoss: 0.686093\n", "Train Epoch: 9 [100000/160000 (62%)]\tLoss: 0.693720\n", "Train Epoch: 9 [120000/160000 (75%)]\tLoss: 0.691294\n", "Train Epoch: 9 [140000/160000 (88%)]\tLoss: 0.691889\n", "\n", "Test set: Average loss: 0.6845, Accuracy: 22653/40000 (56.633%)\n", "\n", "Train Epoch: 10 [0/160000 (0%)]\tLoss: 0.692374\n", "Train Epoch: 10 [20000/160000 (12%)]\tLoss: 0.693439\n", "Train Epoch: 10 [40000/160000 (25%)]\tLoss: 0.694436\n", "Train Epoch: 10 [60000/160000 (38%)]\tLoss: 0.687419\n", "Train Epoch: 10 [80000/160000 (50%)]\tLoss: 0.692452\n", "Train Epoch: 10 [100000/160000 (62%)]\tLoss: 0.687066\n", "Train Epoch: 10 [120000/160000 (75%)]\tLoss: 0.691027\n", "Train Epoch: 10 [140000/160000 (88%)]\tLoss: 0.688050\n", "\n", "Test set: Average loss: 0.6844, Accuracy: 22670/40000 (56.675%)\n", "\n", "\n", " training DNN with 200000 data points and SGD lr=0.000100. 
\n", "\n", "Train Epoch: 1 [0/160000 (0%)]\tLoss: 0.697540\n", "Train Epoch: 1 [20000/160000 (12%)]\tLoss: 0.697083\n", "Train Epoch: 1 [40000/160000 (25%)]\tLoss: 0.695987\n", "Train Epoch: 1 [60000/160000 (38%)]\tLoss: 0.690630\n", "Train Epoch: 1 [80000/160000 (50%)]\tLoss: 0.695427\n", "Train Epoch: 1 [100000/160000 (62%)]\tLoss: 0.693449\n", "Train Epoch: 1 [120000/160000 (75%)]\tLoss: 0.691263\n", "Train Epoch: 1 [140000/160000 (88%)]\tLoss: 0.691844\n", "\n", "Test set: Average loss: 0.6901, Accuracy: 21173/40000 (52.932%)\n", "\n", "Train Epoch: 2 [0/160000 (0%)]\tLoss: 0.692849\n", "Train Epoch: 2 [20000/160000 (12%)]\tLoss: 0.695949\n", "Train Epoch: 2 [40000/160000 (25%)]\tLoss: 0.693565\n", "Train Epoch: 2 [60000/160000 (38%)]\tLoss: 0.690521\n", "Train Epoch: 2 [80000/160000 (50%)]\tLoss: 0.692377\n", "Train Epoch: 2 [100000/160000 (62%)]\tLoss: 0.694584\n", "Train Epoch: 2 [120000/160000 (75%)]\tLoss: 0.692072\n", "Train Epoch: 2 [140000/160000 (88%)]\tLoss: 0.691132\n", "\n", "Test set: Average loss: 0.6891, Accuracy: 21462/40000 (53.655%)\n", "\n", "Train Epoch: 3 [0/160000 (0%)]\tLoss: 0.695255\n", "Train Epoch: 3 [20000/160000 (12%)]\tLoss: 0.695031\n", "Train Epoch: 3 [40000/160000 (25%)]\tLoss: 0.690024\n", "Train Epoch: 3 [60000/160000 (38%)]\tLoss: 0.693541\n", "Train Epoch: 3 [80000/160000 (50%)]\tLoss: 0.691810\n", "Train Epoch: 3 [100000/160000 (62%)]\tLoss: 0.694084\n", "Train Epoch: 3 [120000/160000 (75%)]\tLoss: 0.697059\n", "Train Epoch: 3 [140000/160000 (88%)]\tLoss: 0.692586\n", "\n", "Test set: Average loss: 0.6880, Accuracy: 21852/40000 (54.630%)\n", "\n", "Train Epoch: 4 [0/160000 (0%)]\tLoss: 0.696967\n", "Train Epoch: 4 [20000/160000 (12%)]\tLoss: 0.692188\n", "Train Epoch: 4 [40000/160000 (25%)]\tLoss: 0.685209\n", "Train Epoch: 4 [60000/160000 (38%)]\tLoss: 0.690242\n", "Train Epoch: 4 [80000/160000 (50%)]\tLoss: 0.695247\n", "Train Epoch: 4 [100000/160000 (62%)]\tLoss: 0.697769\n", "Train Epoch: 4 [120000/160000 (75%)]\tLoss: 0.690691\n", "Train Epoch: 4 [140000/160000 (88%)]\tLoss: 0.691298\n", "\n", "Test set: Average loss: 0.6870, Accuracy: 22193/40000 (55.483%)\n", "\n", "Train Epoch: 5 [0/160000 (0%)]\tLoss: 0.689351\n", "Train Epoch: 5 [20000/160000 (12%)]\tLoss: 0.691943\n", "Train Epoch: 5 [40000/160000 (25%)]\tLoss: 0.695891\n", "Train Epoch: 5 [60000/160000 (38%)]\tLoss: 0.689337\n", "Train Epoch: 5 [80000/160000 (50%)]\tLoss: 0.689776\n", "Train Epoch: 5 [100000/160000 (62%)]\tLoss: 0.689858\n", "Train Epoch: 5 [120000/160000 (75%)]\tLoss: 0.687728\n", "Train Epoch: 5 [140000/160000 (88%)]\tLoss: 0.685325\n", "\n", "Test set: Average loss: 0.6860, Accuracy: 22510/40000 (56.275%)\n", "\n", "Train Epoch: 6 [0/160000 (0%)]\tLoss: 0.689849\n", "Train Epoch: 6 [20000/160000 (12%)]\tLoss: 0.690648\n", "Train Epoch: 6 [40000/160000 (25%)]\tLoss: 0.689729\n", "Train Epoch: 6 [60000/160000 (38%)]\tLoss: 0.694689\n", "Train Epoch: 6 [80000/160000 (50%)]\tLoss: 0.686267\n", "Train Epoch: 6 [100000/160000 (62%)]\tLoss: 0.691425\n", "Train Epoch: 6 [120000/160000 (75%)]\tLoss: 0.687313\n", "Train Epoch: 6 [140000/160000 (88%)]\tLoss: 0.687692\n", "\n", "Test set: Average loss: 0.6851, Accuracy: 22815/40000 (57.038%)\n", "\n", "Train Epoch: 7 [0/160000 (0%)]\tLoss: 0.689807\n", "Train Epoch: 7 [20000/160000 (12%)]\tLoss: 0.688712\n", "Train Epoch: 7 [40000/160000 (25%)]\tLoss: 0.689868\n", "Train Epoch: 7 [60000/160000 (38%)]\tLoss: 0.686265\n", "Train Epoch: 7 [80000/160000 (50%)]\tLoss: 0.687202\n", "Train Epoch: 7 [100000/160000 (62%)]\tLoss: 
0.687460\n", "Train Epoch: 7 [120000/160000 (75%)]\tLoss: 0.689398\n", "Train Epoch: 7 [140000/160000 (88%)]\tLoss: 0.692314\n", "\n", "Test set: Average loss: 0.6841, Accuracy: 23103/40000 (57.758%)\n", "\n", "Train Epoch: 8 [0/160000 (0%)]\tLoss: 0.686476\n", "Train Epoch: 8 [20000/160000 (12%)]\tLoss: 0.687612\n", "Train Epoch: 8 [40000/160000 (25%)]\tLoss: 0.685242\n", "Train Epoch: 8 [60000/160000 (38%)]\tLoss: 0.688446\n", "Train Epoch: 8 [80000/160000 (50%)]\tLoss: 0.689180\n", "Train Epoch: 8 [100000/160000 (62%)]\tLoss: 0.681610\n", "Train Epoch: 8 [120000/160000 (75%)]\tLoss: 0.684914\n", "Train Epoch: 8 [140000/160000 (88%)]\tLoss: 0.686573\n", "\n", "Test set: Average loss: 0.6832, Accuracy: 23361/40000 (58.403%)\n", "\n", "Train Epoch: 9 [0/160000 (0%)]\tLoss: 0.688832\n", "Train Epoch: 9 [20000/160000 (12%)]\tLoss: 0.686388\n", "Train Epoch: 9 [40000/160000 (25%)]\tLoss: 0.685628\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Train Epoch: 9 [60000/160000 (38%)]\tLoss: 0.688939\n", "Train Epoch: 9 [80000/160000 (50%)]\tLoss: 0.689757\n", "Train Epoch: 9 [100000/160000 (62%)]\tLoss: 0.687121\n", "Train Epoch: 9 [120000/160000 (75%)]\tLoss: 0.688064\n", "Train Epoch: 9 [140000/160000 (88%)]\tLoss: 0.688722\n", "\n", "Test set: Average loss: 0.6822, Accuracy: 23686/40000 (59.215%)\n", "\n", "Train Epoch: 10 [0/160000 (0%)]\tLoss: 0.688352\n", "Train Epoch: 10 [20000/160000 (12%)]\tLoss: 0.687095\n", "Train Epoch: 10 [40000/160000 (25%)]\tLoss: 0.688700\n", "Train Epoch: 10 [60000/160000 (38%)]\tLoss: 0.689241\n", "Train Epoch: 10 [80000/160000 (50%)]\tLoss: 0.685562\n", "Train Epoch: 10 [100000/160000 (62%)]\tLoss: 0.692156\n", "Train Epoch: 10 [120000/160000 (75%)]\tLoss: 0.686256\n", "Train Epoch: 10 [140000/160000 (88%)]\tLoss: 0.684139\n", "\n", "Test set: Average loss: 0.6813, Accuracy: 23973/40000 (59.932%)\n", "\n", "\n", " training DNN with 200000 data points and SGD lr=0.001000. 
\n", "\n", "Train Epoch: 1 [0/160000 (0%)]\tLoss: 0.711210\n", "Train Epoch: 1 [20000/160000 (12%)]\tLoss: 0.708529\n", "Train Epoch: 1 [40000/160000 (25%)]\tLoss: 0.709962\n", "Train Epoch: 1 [60000/160000 (38%)]\tLoss: 0.709772\n", "Train Epoch: 1 [80000/160000 (50%)]\tLoss: 0.709394\n", "Train Epoch: 1 [100000/160000 (62%)]\tLoss: 0.701491\n", "Train Epoch: 1 [120000/160000 (75%)]\tLoss: 0.702403\n", "Train Epoch: 1 [140000/160000 (88%)]\tLoss: 0.698894\n", "\n", "Test set: Average loss: 0.6927, Accuracy: 19903/40000 (49.758%)\n", "\n", "Train Epoch: 2 [0/160000 (0%)]\tLoss: 0.695125\n", "Train Epoch: 2 [20000/160000 (12%)]\tLoss: 0.699186\n", "Train Epoch: 2 [40000/160000 (25%)]\tLoss: 0.701044\n", "Train Epoch: 2 [60000/160000 (38%)]\tLoss: 0.694566\n", "Train Epoch: 2 [80000/160000 (50%)]\tLoss: 0.694051\n", "Train Epoch: 2 [100000/160000 (62%)]\tLoss: 0.691268\n", "Train Epoch: 2 [120000/160000 (75%)]\tLoss: 0.693306\n", "Train Epoch: 2 [140000/160000 (88%)]\tLoss: 0.689296\n", "\n", "Test set: Average loss: 0.6836, Accuracy: 24090/40000 (60.225%)\n", "\n", "Train Epoch: 3 [0/160000 (0%)]\tLoss: 0.685483\n", "Train Epoch: 3 [20000/160000 (12%)]\tLoss: 0.689562\n", "Train Epoch: 3 [40000/160000 (25%)]\tLoss: 0.693319\n", "Train Epoch: 3 [60000/160000 (38%)]\tLoss: 0.685696\n", "Train Epoch: 3 [80000/160000 (50%)]\tLoss: 0.689653\n", "Train Epoch: 3 [100000/160000 (62%)]\tLoss: 0.691741\n", "Train Epoch: 3 [120000/160000 (75%)]\tLoss: 0.687653\n", "Train Epoch: 3 [140000/160000 (88%)]\tLoss: 0.686184\n", "\n", "Test set: Average loss: 0.6754, Accuracy: 27155/40000 (67.888%)\n", "\n", "Train Epoch: 4 [0/160000 (0%)]\tLoss: 0.679059\n", "Train Epoch: 4 [20000/160000 (12%)]\tLoss: 0.682266\n", "Train Epoch: 4 [40000/160000 (25%)]\tLoss: 0.678426\n", "Train Epoch: 4 [60000/160000 (38%)]\tLoss: 0.679051\n", "Train Epoch: 4 [80000/160000 (50%)]\tLoss: 0.681175\n", "Train Epoch: 4 [100000/160000 (62%)]\tLoss: 0.678167\n", "Train Epoch: 4 [120000/160000 (75%)]\tLoss: 0.677122\n", "Train Epoch: 4 [140000/160000 (88%)]\tLoss: 0.674250\n", "\n", "Test set: Average loss: 0.6677, Accuracy: 28763/40000 (71.907%)\n", "\n", "Train Epoch: 5 [0/160000 (0%)]\tLoss: 0.678272\n", "Train Epoch: 5 [20000/160000 (12%)]\tLoss: 0.680418\n", "Train Epoch: 5 [40000/160000 (25%)]\tLoss: 0.682454\n", "Train Epoch: 5 [60000/160000 (38%)]\tLoss: 0.679521\n", "Train Epoch: 5 [80000/160000 (50%)]\tLoss: 0.671055\n", "Train Epoch: 5 [100000/160000 (62%)]\tLoss: 0.672802\n", "Train Epoch: 5 [120000/160000 (75%)]\tLoss: 0.669698\n", "Train Epoch: 5 [140000/160000 (88%)]\tLoss: 0.671952\n", "\n", "Test set: Average loss: 0.6602, Accuracy: 29485/40000 (73.713%)\n", "\n", "Train Epoch: 6 [0/160000 (0%)]\tLoss: 0.669765\n", "Train Epoch: 6 [20000/160000 (12%)]\tLoss: 0.665902\n", "Train Epoch: 6 [40000/160000 (25%)]\tLoss: 0.670903\n", "Train Epoch: 6 [60000/160000 (38%)]\tLoss: 0.669754\n", "Train Epoch: 6 [80000/160000 (50%)]\tLoss: 0.671972\n", "Train Epoch: 6 [100000/160000 (62%)]\tLoss: 0.668719\n", "Train Epoch: 6 [120000/160000 (75%)]\tLoss: 0.664775\n", "Train Epoch: 6 [140000/160000 (88%)]\tLoss: 0.660152\n", "\n", "Test set: Average loss: 0.6529, Accuracy: 29874/40000 (74.685%)\n", "\n", "Train Epoch: 7 [0/160000 (0%)]\tLoss: 0.666156\n", "Train Epoch: 7 [20000/160000 (12%)]\tLoss: 0.666693\n", "Train Epoch: 7 [40000/160000 (25%)]\tLoss: 0.664908\n", "Train Epoch: 7 [60000/160000 (38%)]\tLoss: 0.665601\n", "Train Epoch: 7 [80000/160000 (50%)]\tLoss: 0.658841\n", "Train Epoch: 7 [100000/160000 (62%)]\tLoss: 
0.659382\n", "Train Epoch: 7 [120000/160000 (75%)]\tLoss: 0.662474\n", "Train Epoch: 7 [140000/160000 (88%)]\tLoss: 0.658091\n", "\n", "Test set: Average loss: 0.6456, Accuracy: 30067/40000 (75.168%)\n", "\n", "Train Epoch: 8 [0/160000 (0%)]\tLoss: 0.657233\n", "Train Epoch: 8 [20000/160000 (12%)]\tLoss: 0.653101\n", "Train Epoch: 8 [40000/160000 (25%)]\tLoss: 0.660574\n", "Train Epoch: 8 [60000/160000 (38%)]\tLoss: 0.658713\n", "Train Epoch: 8 [80000/160000 (50%)]\tLoss: 0.657150\n", "Train Epoch: 8 [100000/160000 (62%)]\tLoss: 0.653666\n", "Train Epoch: 8 [120000/160000 (75%)]\tLoss: 0.658895\n", "Train Epoch: 8 [140000/160000 (88%)]\tLoss: 0.653125\n", "\n", "Test set: Average loss: 0.6383, Accuracy: 30204/40000 (75.510%)\n", "\n", "Train Epoch: 9 [0/160000 (0%)]\tLoss: 0.654367\n", "Train Epoch: 9 [20000/160000 (12%)]\tLoss: 0.656965\n", "Train Epoch: 9 [40000/160000 (25%)]\tLoss: 0.652353\n", "Train Epoch: 9 [60000/160000 (38%)]\tLoss: 0.645441\n", "Train Epoch: 9 [80000/160000 (50%)]\tLoss: 0.649583\n", "Train Epoch: 9 [100000/160000 (62%)]\tLoss: 0.652022\n", "Train Epoch: 9 [120000/160000 (75%)]\tLoss: 0.648837\n", "Train Epoch: 9 [140000/160000 (88%)]\tLoss: 0.652718\n", "\n", "Test set: Average loss: 0.6310, Accuracy: 30349/40000 (75.873%)\n", "\n", "Train Epoch: 10 [0/160000 (0%)]\tLoss: 0.638290\n", "Train Epoch: 10 [20000/160000 (12%)]\tLoss: 0.649517\n", "Train Epoch: 10 [40000/160000 (25%)]\tLoss: 0.646564\n", "Train Epoch: 10 [60000/160000 (38%)]\tLoss: 0.648319\n", "Train Epoch: 10 [80000/160000 (50%)]\tLoss: 0.645404\n", "Train Epoch: 10 [100000/160000 (62%)]\tLoss: 0.645872\n", "Train Epoch: 10 [120000/160000 (75%)]\tLoss: 0.640661\n", "Train Epoch: 10 [140000/160000 (88%)]\tLoss: 0.641092\n", "\n", "Test set: Average loss: 0.6238, Accuracy: 30381/40000 (75.953%)\n", "\n", "\n", " training DNN with 200000 data points and SGD lr=0.010000. 
\n", "\n", "Train Epoch: 1 [0/160000 (0%)]\tLoss: 0.696070\n", "Train Epoch: 1 [20000/160000 (12%)]\tLoss: 0.677516\n", "Train Epoch: 1 [40000/160000 (25%)]\tLoss: 0.675997\n", "Train Epoch: 1 [60000/160000 (38%)]\tLoss: 0.669214\n", "Train Epoch: 1 [80000/160000 (50%)]\tLoss: 0.656146\n", "Train Epoch: 1 [100000/160000 (62%)]\tLoss: 0.647163\n", "Train Epoch: 1 [120000/160000 (75%)]\tLoss: 0.638084\n", "Train Epoch: 1 [140000/160000 (88%)]\tLoss: 0.630936\n", "\n", "Test set: Average loss: 0.6022, Accuracy: 29903/40000 (74.757%)\n", "\n", "Train Epoch: 2 [0/160000 (0%)]\tLoss: 0.628269\n", "Train Epoch: 2 [20000/160000 (12%)]\tLoss: 0.620329\n", "Train Epoch: 2 [40000/160000 (25%)]\tLoss: 0.608322\n", "Train Epoch: 2 [60000/160000 (38%)]\tLoss: 0.603806\n", "Train Epoch: 2 [80000/160000 (50%)]\tLoss: 0.598223\n", "Train Epoch: 2 [100000/160000 (62%)]\tLoss: 0.595937\n", "Train Epoch: 2 [120000/160000 (75%)]\tLoss: 0.576972\n", "Train Epoch: 2 [140000/160000 (88%)]\tLoss: 0.581402\n", "\n", "Test set: Average loss: 0.5391, Accuracy: 30541/40000 (76.353%)\n", "\n", "Train Epoch: 3 [0/160000 (0%)]\tLoss: 0.576532\n", "Train Epoch: 3 [20000/160000 (12%)]\tLoss: 0.580321\n", "Train Epoch: 3 [40000/160000 (25%)]\tLoss: 0.560484\n", "Train Epoch: 3 [60000/160000 (38%)]\tLoss: 0.583032\n", "Train Epoch: 3 [80000/160000 (50%)]\tLoss: 0.554103\n", "Train Epoch: 3 [100000/160000 (62%)]\tLoss: 0.550453\n", "Train Epoch: 3 [120000/160000 (75%)]\tLoss: 0.541751\n", "Train Epoch: 3 [140000/160000 (88%)]\tLoss: 0.575952\n", "\n", "Test set: Average loss: 0.5054, Accuracy: 30784/40000 (76.960%)\n", "\n", "Train Epoch: 4 [0/160000 (0%)]\tLoss: 0.558233\n", "Train Epoch: 4 [20000/160000 (12%)]\tLoss: 0.548893\n", "Train Epoch: 4 [40000/160000 (25%)]\tLoss: 0.548963\n", "Train Epoch: 4 [60000/160000 (38%)]\tLoss: 0.552454\n", "Train Epoch: 4 [80000/160000 (50%)]\tLoss: 0.551623\n", "Train Epoch: 4 [100000/160000 (62%)]\tLoss: 0.540242\n", "Train Epoch: 4 [120000/160000 (75%)]\tLoss: 0.547339\n", "Train Epoch: 4 [140000/160000 (88%)]\tLoss: 0.541507\n", "\n", "Test set: Average loss: 0.4875, Accuracy: 31113/40000 (77.782%)\n", "\n", "Train Epoch: 5 [0/160000 (0%)]\tLoss: 0.522058\n", "Train Epoch: 5 [20000/160000 (12%)]\tLoss: 0.503705\n", "Train Epoch: 5 [40000/160000 (25%)]\tLoss: 0.531778\n", "Train Epoch: 5 [60000/160000 (38%)]\tLoss: 0.548663\n", "Train Epoch: 5 [80000/160000 (50%)]\tLoss: 0.514593\n", "Train Epoch: 5 [100000/160000 (62%)]\tLoss: 0.514267\n", "Train Epoch: 5 [120000/160000 (75%)]\tLoss: 0.517169\n", "Train Epoch: 5 [140000/160000 (88%)]\tLoss: 0.509606\n", "\n", "Test set: Average loss: 0.4774, Accuracy: 31189/40000 (77.972%)\n", "\n", "Train Epoch: 6 [0/160000 (0%)]\tLoss: 0.542238\n", "Train Epoch: 6 [20000/160000 (12%)]\tLoss: 0.525145\n", "Train Epoch: 6 [40000/160000 (25%)]\tLoss: 0.509262\n", "Train Epoch: 6 [60000/160000 (38%)]\tLoss: 0.504582\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Train Epoch: 6 [80000/160000 (50%)]\tLoss: 0.518707\n", "Train Epoch: 6 [100000/160000 (62%)]\tLoss: 0.507325\n", "Train Epoch: 6 [120000/160000 (75%)]\tLoss: 0.497972\n", "Train Epoch: 6 [140000/160000 (88%)]\tLoss: 0.535714\n", "\n", "Test set: Average loss: 0.4693, Accuracy: 31356/40000 (78.390%)\n", "\n", "Train Epoch: 7 [0/160000 (0%)]\tLoss: 0.522787\n", "Train Epoch: 7 [20000/160000 (12%)]\tLoss: 0.504788\n", "Train Epoch: 7 [40000/160000 (25%)]\tLoss: 0.520766\n", "Train Epoch: 7 [60000/160000 (38%)]\tLoss: 0.527349\n", "Train Epoch: 7 [80000/160000 (50%)]\tLoss: 
0.515260\n", "Train Epoch: 7 [100000/160000 (62%)]\tLoss: 0.518895\n", "Train Epoch: 7 [120000/160000 (75%)]\tLoss: 0.503900\n", "Train Epoch: 7 [140000/160000 (88%)]\tLoss: 0.497610\n", "\n", "Test set: Average loss: 0.4656, Accuracy: 31394/40000 (78.485%)\n", "\n", "Train Epoch: 8 [0/160000 (0%)]\tLoss: 0.514254\n", "Train Epoch: 8 [20000/160000 (12%)]\tLoss: 0.505731\n", "Train Epoch: 8 [40000/160000 (25%)]\tLoss: 0.502496\n", "Train Epoch: 8 [60000/160000 (38%)]\tLoss: 0.519354\n", "Train Epoch: 8 [80000/160000 (50%)]\tLoss: 0.509659\n", "Train Epoch: 8 [100000/160000 (62%)]\tLoss: 0.501614\n", "Train Epoch: 8 [120000/160000 (75%)]\tLoss: 0.512401\n", "Train Epoch: 8 [140000/160000 (88%)]\tLoss: 0.483391\n", "\n", "Test set: Average loss: 0.4614, Accuracy: 31482/40000 (78.705%)\n", "\n", "Train Epoch: 9 [0/160000 (0%)]\tLoss: 0.504925\n", "Train Epoch: 9 [20000/160000 (12%)]\tLoss: 0.507819\n", "Train Epoch: 9 [40000/160000 (25%)]\tLoss: 0.503934\n", "Train Epoch: 9 [60000/160000 (38%)]\tLoss: 0.500493\n", "Train Epoch: 9 [80000/160000 (50%)]\tLoss: 0.501396\n", "Train Epoch: 9 [100000/160000 (62%)]\tLoss: 0.512802\n", "Train Epoch: 9 [120000/160000 (75%)]\tLoss: 0.495267\n", "Train Epoch: 9 [140000/160000 (88%)]\tLoss: 0.500724\n", "\n", "Test set: Average loss: 0.4587, Accuracy: 31528/40000 (78.820%)\n", "\n", "Train Epoch: 10 [0/160000 (0%)]\tLoss: 0.487282\n", "Train Epoch: 10 [20000/160000 (12%)]\tLoss: 0.492254\n", "Train Epoch: 10 [40000/160000 (25%)]\tLoss: 0.491856\n", "Train Epoch: 10 [60000/160000 (38%)]\tLoss: 0.480748\n", "Train Epoch: 10 [80000/160000 (50%)]\tLoss: 0.516473\n", "Train Epoch: 10 [100000/160000 (62%)]\tLoss: 0.506555\n", "Train Epoch: 10 [120000/160000 (75%)]\tLoss: 0.484160\n", "Train Epoch: 10 [140000/160000 (88%)]\tLoss: 0.507923\n", "\n", "Test set: Average loss: 0.4558, Accuracy: 31579/40000 (78.948%)\n", "\n", "\n", " training DNN with 200000 data points and SGD lr=0.100000. 
\n", "\n", "Train Epoch: 1 [0/160000 (0%)]\tLoss: 0.700151\n", "Train Epoch: 1 [20000/160000 (12%)]\tLoss: 0.632980\n", "Train Epoch: 1 [40000/160000 (25%)]\tLoss: 0.575079\n", "Train Epoch: 1 [60000/160000 (38%)]\tLoss: 0.572152\n", "Train Epoch: 1 [80000/160000 (50%)]\tLoss: 0.537741\n", "Train Epoch: 1 [100000/160000 (62%)]\tLoss: 0.519731\n", "Train Epoch: 1 [120000/160000 (75%)]\tLoss: 0.506868\n", "Train Epoch: 1 [140000/160000 (88%)]\tLoss: 0.504988\n", "\n", "Test set: Average loss: 0.4591, Accuracy: 31544/40000 (78.860%)\n", "\n", "Train Epoch: 2 [0/160000 (0%)]\tLoss: 0.487960\n", "Train Epoch: 2 [20000/160000 (12%)]\tLoss: 0.505924\n", "Train Epoch: 2 [40000/160000 (25%)]\tLoss: 0.476286\n", "Train Epoch: 2 [60000/160000 (38%)]\tLoss: 0.488260\n", "Train Epoch: 2 [80000/160000 (50%)]\tLoss: 0.477032\n", "Train Epoch: 2 [100000/160000 (62%)]\tLoss: 0.461733\n", "Train Epoch: 2 [120000/160000 (75%)]\tLoss: 0.482932\n", "Train Epoch: 2 [140000/160000 (88%)]\tLoss: 0.495323\n", "\n", "Test set: Average loss: 0.4465, Accuracy: 31782/40000 (79.455%)\n", "\n", "Train Epoch: 3 [0/160000 (0%)]\tLoss: 0.461676\n", "Train Epoch: 3 [20000/160000 (12%)]\tLoss: 0.478750\n", "Train Epoch: 3 [40000/160000 (25%)]\tLoss: 0.474325\n", "Train Epoch: 3 [60000/160000 (38%)]\tLoss: 0.482127\n", "Train Epoch: 3 [80000/160000 (50%)]\tLoss: 0.463712\n", "Train Epoch: 3 [100000/160000 (62%)]\tLoss: 0.461602\n", "Train Epoch: 3 [120000/160000 (75%)]\tLoss: 0.466510\n", "Train Epoch: 3 [140000/160000 (88%)]\tLoss: 0.431422\n", "\n", "Test set: Average loss: 0.4407, Accuracy: 31889/40000 (79.722%)\n", "\n", "Train Epoch: 4 [0/160000 (0%)]\tLoss: 0.454191\n", "Train Epoch: 4 [20000/160000 (12%)]\tLoss: 0.452877\n", "Train Epoch: 4 [40000/160000 (25%)]\tLoss: 0.470446\n", "Train Epoch: 4 [60000/160000 (38%)]\tLoss: 0.464326\n", "Train Epoch: 4 [80000/160000 (50%)]\tLoss: 0.487929\n", "Train Epoch: 4 [100000/160000 (62%)]\tLoss: 0.444073\n", "Train Epoch: 4 [120000/160000 (75%)]\tLoss: 0.469638\n", "Train Epoch: 4 [140000/160000 (88%)]\tLoss: 0.464015\n", "\n", "Test set: Average loss: 0.4389, Accuracy: 31934/40000 (79.835%)\n", "\n", "Train Epoch: 5 [0/160000 (0%)]\tLoss: 0.469626\n", "Train Epoch: 5 [20000/160000 (12%)]\tLoss: 0.451672\n", "Train Epoch: 5 [40000/160000 (25%)]\tLoss: 0.459388\n", "Train Epoch: 5 [60000/160000 (38%)]\tLoss: 0.491337\n", "Train Epoch: 5 [80000/160000 (50%)]\tLoss: 0.454779\n", "Train Epoch: 5 [100000/160000 (62%)]\tLoss: 0.442819\n", "Train Epoch: 5 [120000/160000 (75%)]\tLoss: 0.486913\n", "Train Epoch: 5 [140000/160000 (88%)]\tLoss: 0.475788\n", "\n", "Test set: Average loss: 0.4374, Accuracy: 31939/40000 (79.847%)\n", "\n", "Train Epoch: 6 [0/160000 (0%)]\tLoss: 0.440321\n", "Train Epoch: 6 [20000/160000 (12%)]\tLoss: 0.474917\n", "Train Epoch: 6 [40000/160000 (25%)]\tLoss: 0.451301\n", "Train Epoch: 6 [60000/160000 (38%)]\tLoss: 0.441167\n", "Train Epoch: 6 [80000/160000 (50%)]\tLoss: 0.463579\n", "Train Epoch: 6 [100000/160000 (62%)]\tLoss: 0.452635\n", "Train Epoch: 6 [120000/160000 (75%)]\tLoss: 0.452458\n", "Train Epoch: 6 [140000/160000 (88%)]\tLoss: 0.456863\n", "\n", "Test set: Average loss: 0.4358, Accuracy: 31944/40000 (79.860%)\n", "\n", "Train Epoch: 7 [0/160000 (0%)]\tLoss: 0.459115\n", "Train Epoch: 7 [20000/160000 (12%)]\tLoss: 0.446554\n", "Train Epoch: 7 [40000/160000 (25%)]\tLoss: 0.440093\n", "Train Epoch: 7 [60000/160000 (38%)]\tLoss: 0.451953\n", "Train Epoch: 7 [80000/160000 (50%)]\tLoss: 0.458189\n", "Train Epoch: 7 [100000/160000 (62%)]\tLoss: 
0.451847\n", "Train Epoch: 7 [120000/160000 (75%)]\tLoss: 0.450336\n", "Train Epoch: 7 [140000/160000 (88%)]\tLoss: 0.456441\n", "\n", "Test set: Average loss: 0.4349, Accuracy: 31949/40000 (79.873%)\n", "\n", "Train Epoch: 8 [0/160000 (0%)]\tLoss: 0.439282\n", "Train Epoch: 8 [20000/160000 (12%)]\tLoss: 0.453577\n", "Train Epoch: 8 [40000/160000 (25%)]\tLoss: 0.463878\n", "Train Epoch: 8 [60000/160000 (38%)]\tLoss: 0.443377\n", "Train Epoch: 8 [80000/160000 (50%)]\tLoss: 0.451906\n", "Train Epoch: 8 [100000/160000 (62%)]\tLoss: 0.445123\n", "Train Epoch: 8 [120000/160000 (75%)]\tLoss: 0.458518\n", "Train Epoch: 8 [140000/160000 (88%)]\tLoss: 0.438410\n", "\n", "Test set: Average loss: 0.4341, Accuracy: 31963/40000 (79.907%)\n", "\n", "Train Epoch: 9 [0/160000 (0%)]\tLoss: 0.461733\n", "Train Epoch: 9 [20000/160000 (12%)]\tLoss: 0.446442\n", "Train Epoch: 9 [40000/160000 (25%)]\tLoss: 0.462070\n", "Train Epoch: 9 [60000/160000 (38%)]\tLoss: 0.461662\n", "Train Epoch: 9 [80000/160000 (50%)]\tLoss: 0.446705\n", "Train Epoch: 9 [100000/160000 (62%)]\tLoss: 0.446212\n", "Train Epoch: 9 [120000/160000 (75%)]\tLoss: 0.429772\n", "Train Epoch: 9 [140000/160000 (88%)]\tLoss: 0.455330\n", "\n", "Test set: Average loss: 0.4337, Accuracy: 31998/40000 (79.995%)\n", "\n", "Train Epoch: 10 [0/160000 (0%)]\tLoss: 0.432510\n", "Train Epoch: 10 [20000/160000 (12%)]\tLoss: 0.464081\n", "Train Epoch: 10 [40000/160000 (25%)]\tLoss: 0.426821\n", "Train Epoch: 10 [60000/160000 (38%)]\tLoss: 0.441955\n", "Train Epoch: 10 [80000/160000 (50%)]\tLoss: 0.433958\n", "Train Epoch: 10 [100000/160000 (62%)]\tLoss: 0.462393\n", "Train Epoch: 10 [120000/160000 (75%)]\tLoss: 0.441262\n", "Train Epoch: 10 [140000/160000 (88%)]\tLoss: 0.424818\n", "\n", "Test set: Average loss: 0.4334, Accuracy: 31977/40000 (79.942%)\n", "\n" ] }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAakAAAEKCAYAAACopKobAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4wLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvqOYd8AAAIABJREFUeJzs3Xd81EX++PHXO733BEICBAi996JSBGliP8tZz9Pz7P1snIrd7/28U++8YjkLKip6FiyAWJAiRYoooHQCJJDQAunZ7L5/f3w+gYRswi7sZkmY5+PxeZCdz8zsTEIyO+UzI6qKYRiGYZyIggJdAMMwDMOoj2mkDMMwjBOWaaQMwzCME5ZppAzDMIwTlmmkDMMwjBOWaaQMwzCME5ZppPxARF4VkQIRWX0MafuLyM8islFE/i4iYodPEZFcEfnRvib6vuRuyzNeRNbZ5bnPzf1wEXnPvr9ERLJq3LvfDl8nIuOOlqeI3GyHqYik+LtuDZWlxv1jqd8x//x97VjrJyLJIvKtiBSLyAuNXW5PeVC/4SKyQkSqROQ3gSijcZxU1Vw+voDhQD9g9TGkXQoMBQSYCUyww6cAdzdyPYKBTUB7IAxYBXQ7Is6NwH/sry8B3rO/7mbHDwfa2fkEN5Qn0BfIArYCKU2xfsf78z+B6hcNnApcD7wQyHocZ/2ygF7AVOA3gS6zuby/TE/KD1R1HrCvZpiIdBCRWSKyXETmi0iXI9OJSDoQp6qL1PoNmwqc2zildmsQsFFVN6tqJfAucM4Rcc4B3rC//gAYbff+zgHeVdUKVd0CbLTzqzdPVV2pqlv9Xaka/FE/tz//ADnm+qlqiaouAMobr7heO2r9VHWrqv4EuAJRQOP4mUaq8bwE3KKq/YG7gX+5iZMB7KjxeocdVu1mEfnJHk5K9F9Ra5VnewPlqRVHVauAA0ByA2k9ybOx+KN+J5LjqV9T0BR+BsZxMo1UIxCRGGAY8L6I/Ai8CKS7i+omrHrfqn8DHYA+wE7gr34oqjflOVocb8MDwR/1O5EcT/2agqZcdr9xNycqIkkiMkdENtj/JtrhYs99b7Q/APezwzvboz6rRGSoHRYiIl+JSFRj1sc0Uo0jCChU1T41rq4iElxjIcSjWJ8EM2ukywTyAFQ1X1WdquoCXsYeWvKzHUBrd+VxF0dEQoB4rKGu+tJ6kmdj8Uf9TiTHU7+moCn8DALhdWD8EWH3AV+rakfga/s1wASgo31dh/VhGOCPdpzfYI38ANwAvKmqpX4ruRumkWoEqnoQ2CIiF8KhTy+97UanutF6SFV3AkUiMsSe97gS+MROU7PndR7QGCvHfgA6ikg7EQnDmlifcUScGcBV9te/Ab6x59NmAJfYq8faYf0SLPUwz8bij/qdSI6nfk3BifR/6YRRz5xozbnHNzg8130OMFUti4EE+2+NA4gEogCHiCQAZ2HNkzeuQK/caI4X8A7WkJwD69PeNVgrwGZhrUBaCzxUT9oBWA3QJuAFQOzwN4GfgZ+wfhHTG6kuE4H1dnkm22GPAmfbX0cA72MtHFgKtK+RdrKdbh32KsX68rTDb7W/X1VYn4hfaaL1q/PzD+D/xeOp31asP3bFdj26NXb5fVC/gXbZS4C9wJpAl7mRvi9Z1FhdijWSU/P+fvvfz4BTa4R/bf8NagPMBRZhrY78GzAiEHWp/gNoGIZhBMC4UdG6d5/T4/jLf6pYQ+1Vly+p6ks149jPu32mqj3s14WqmlDj/n5VTRSRz4Gn1FrJiYh8DdyjqstrxM0GHgduA57BWu7/oKqu96qixyikMd7EMAzDcG/vPidLZ7fxOH5w+oZyVR3g5dvki0i6qu60h/MK7HBP5vWeAP6MNdLxNlYP+2HgMi/LcEzMnJRhGEYAKYpDqzy+jlHNucersOe67fAr7XnyIcABtebGARCREUCuqm7Amp9yAU7760ZhelKGYRgBpIDLhyvnReQdYCSQIiI7sHo9TwPTReQaYBtwoR39C6x5vY1AKXB1jXwEqwd1kR30ElZPKgRrpV+jMHNShmEYAdSvd7jOn9XS4/gxrbYtP4bhvibL9KQMwzACSFGcprNQLzMndYIQkesCXQZfM3VqGkydAs+FenydbEwjdeJoUr9UHjJ1ahpMnQJIASfq8XWyMcN9hmEYAXYy9pA8ZRqpowiOjdaQZP9vOB6clEB4Vmbj/E91uduX0/eCExMIb9O6UeoUE904J0pEtYwhuWtqo9QpWBrndImYllGkdUtulDoVO8Ia420ITY0jqmN6o9SpbOOuPaqaeqzpFcycVANMI3UUIcmJtJx8a6CL4VNS0fxGeYcNWBfoIvhcUlhJoIvgcwt3tgt0EXzux0lP5hxPekVxmJ5UvUwjZRiGEUgKTtNG1cs0UoZhGAFkPcxr1Mc0UoZhGAElON2e32iAaaQMwzACSgGXGe6rl2mkDMMwAsz0pOpnGinDMIwAsh7mNY1UfUwjZRiGEWAuNY1UfUwjZRiGEUCmJ9Uw00gZhmEEkCI4NDjQxThhmUbKMAwjgExPqmGmkTIMwwgowanNb6syXzGNlGEYRgBZO06YRqo+ppEyDMMIMDPcVz/TSBmGYQSQqhnua4hppAzDMALMZXpS9TKNlGEYRgBZq/tMT6o+ppEyDMMIKDPc1xDTSBmGYQSQgnmYtwGmkTIMwwggRcxwXwNMI2UYhhFgLjPcVy/TSBmGYQSQWTjRsCbRSInIq8AkoEBVe9hhScB7QBawFbhIVfeLiADPAxOBUuB3qrrCTnMV8Gc728dV9Q1flzX3/qcJCg+HIEGCg2g5+dZD99TlYtcT/yA4IY60W672OK2zqJg9/3oTV1kZ8eeMI6pvdwB2//MNEi87j5CEOF9Xo5YdDz95uFxBwaTfcxsAe96eTtnqtQTHxtDqgbu9SussKmb3K2/gKisj4czxRPXuAUDBS6+RdPH5hMTH+60+B3MKWfjgN4deF+cW0fMP/elyiVUGl9PF7Ks/ISo1ihF/HVcn/Yzz3iUkKhQJFoKCgxj32rkAlO8vY/59X+EorqTXdf3JHJEFwLx7vmTAn04hKjXab3Xav/Ugs+9feLiOucUMvr4nbYa1chve+9IutdJPnTSD0KiQQ3W66C2r3mX7y5l513wqih0MvqEX7UdlAvD5nfMYef8AolOj/FanquJytv/9c8q37QaENredSXTXTNb8/p8ER4Yd+j3p/Nzv66StL07VgRK2PPE/nMXltLxiBAlDOwOw+bH3aX3jeEKTY/1Wn/oogtMc1VGvJtFIAa8DLwBTa4TdB3ytqk+LyH3263uBCUBH+xoM/BsYbDdqDwMDsD68LBeRGaq639eFTbvrOoJj6/5BKvp6AaHpabjKyr1KW7p0FdHD+hE1sA8Fz/+XqL7dKV21lrA2rfzeQFVrcev1BMfULlfM4AHEDh/G3jff9TptyfIfiR40gOj+vSn41ytE9e5B6c9rCcvM8GsDBRDXNoEJU88HrAbpk7PfofWItofur5++hvisBBwllfXmMfqfZxKeEFErLGfOJt
pN7EjbMe2Ze8csMkdkkTs/h8TOKX5toAASs+K45J0JgFWn1yd8QrtRrYlLj3Yb7s65L44mMjG8Vtj6WTl0ntSOjuPa8uktc2k/KpMt83JJ7ZLo1wYKIPelOcT170C7By7A5XDiqnAcupf95GWExDf8/u7i7P9uLUmn9yRheDc2P/weCUM7c2DJBqI6tAxIA1XNbItUvybxnVHVecC+I4LPAap7Qm8A59YIn6qWxUCCiKQD44A5qrrPbpjmAOP9X3pL1f5Cyn7+lZhTB3qfODgIraxCHVWICOp0UvT1AmLHjvB9Qb0Qkd2e4Khj+0MlwUGow4FWOUGCrDrNnU/cmJG+LeRR5C/LIyYjluh06w9UaUEJeQu30/7szl7nFRQShLOiCpfDhQQJrioX695bQ9fLevm62A3asTSf+MwY4tKjPQpvSHBIEFUVTpyVLkSsOq2ato6+V3T1dbFrcZZWULJmG0ljewMQFBpMSEzEUVIdnYQE4aqsQh1OEFCni92fLCXt/CHHnfexUgWnBnl8nWyaSk/KnRaquhNAVXeKSJodngFsrxFvhx1WX7jPFTz3CogQO3wwMcMHA7D/vU9JvGAirvIKr9NGD+rDnlfepWTxchLOn0jx3MVED+lPUHiYP4rvvlz/fBkEYk4ZQuwp3v1Cu0sbPaAve16fRsnS5SScM5Gi+YuIHtSfoLDGqxNAzpzNtD2jw6HXK55bRJ+bB+Eorb8XhcC3t80Egexzu5J9rjV01nZsNose/patMzfS+8aBbPhwLVkTsgmJaNxfsw1f5tBxXFuPwwEQmHHTt4hA9wuy6X5+NgAdx7dlzuRFrPt8K0Nv7c3P72+gy5lZhEb6t04VuwoJiYti23OfUb6lgMjslmRcdwbBEWGIwKaH3gGE5Al9SRnft2516omTOKI7Of/vE/Z98zOtfjeKPZ8vJ2l0T4IiQv1an4aJ2XGiAU25kaqPu5+2NhBeNwOR64DrAIKTErx68xb33khIQhzOg8UUPPcKIS1T0fIKgmNjCGubSfm6TV6ljejUnqCoSNJuteawXCWlHJw1l5QbrmDv1A9wlZYRd8ZwwjvU88fHB1reeRMh8fE4i4rJf+ElQlukEZHd/rjSBkVGknbDNQA4S0s5+NW3pF57FXunvY+rrIy404cT3i7Lb3UCcDqc5C7IofeNAwDIXbCN8MRIkrqkkL8ir950Y148i6jUaMr3lfHtbTOJaxtPWt90wmLCDs1hVR6s4Je3fuLUp8aw9Kn5VBZV0OW3PUnp2cLvddr6XS5Db+7tUXi1C14dQ3RqFKX7yplx47ckZsXRql8a4bFhTPq71WMvP1jJytd/Yfwzp/LtY0upKKqkz+VdaNkrxQ8VcVG6aRcZ148lunMGO178koL3F5F+xQg6/uVKQpNjcRSWsOnP7xCRmUxMjza1ktcXJzg6gvZTLgagqriMgv8tJuuBC9j29y9wFpeRdt5gortm+r4+DVA4KXtInmrK35l8exgP+98CO3wHUHPQPRPIayC8DlV9SVUHqOoAd3NLDameIwqOiyGyT3cqt26nYuNWylatJff+p9nz8jQqft3Env/Wncdxl/ZIBz77mriJp1P6w4+Etc0k+aoLKfxolldl9Fb1HFFwbAxRvXtQkbPNp2kPzPyK+LGjKVn2I2FtMkm+9CL2f+rfOgHsXLSDpM4pRCZZQ5a7f8ond34OM857l+8f/Jb85Xl8P+XbOumq55cikiLJHNGWvWt314mz+tWVdL+qDzlzNpHYOYXBk4ez6j/L/FshIGfhTlK7JBGVHOlReLXq+aWopAjaj8okf/XeOnGWvbya/td0Z8OsHFK7JnL6Q4NZ/MIq31cCCE2JJTQljujO1mBHwildKNu0y7pnzx2FJkQTP7QTpevr/hp7Eif/nQW0uGgYhd+tISq7JW1un8TOqXP9Up+GVJ/M6+l1smnKjdQM4Cr766uAT2qEXymWIcABe1hwNjBWRBJFJBEYa4f5jKui8tBwnquikvK16wlt1ZKE8yeQ8ZfJZDx1Hyl/uJTwLh1IueYSj9LW5Mjfg/PAQSI6t8dV6UBEQAStqvJlNdyUq/xwuX5dT1h6y6Ok8jyto2C3VaeOHVBHpTVOI6AOh7ssfSpnzqZaQ319bhzIuTMu5eyPLmHYY6No0b8Vw6aMqpWmqsxxaEFFVZmDXUtyiW+fWCtO0fYDlO0pIa1fOs7yKiTI6sQ7K51+rhFsmJ1Dx/FuhvrqCQdwlFVRWeI49PX2xbtIyq69eKVwWxElu8vI6J9GVbnTqpNAlZ/qFJoYQ1hKLOU7rMayaNVWwtuk4CyvxFlq/Z44yyspWrmFiLaptdJ6Eqcidx+OfcXE9GxrLciwf0auRvgZueMkyOPrZNMkhvtE5B1gJJAiIjuwVuk9DUwXkWuAbcCFdvQvsJafb8Ragn41gKruE5HHgB/seI+q6pGLMY6L62ARu//9pvXC6SRqUF8iezQ8AV/w91dJuvI34HAcNe2Bj2cRf6611iN6YB92/+sNir5eQPzZY31ZjVqcRUXsftlen+JyET2gL5HdrDmY3a+9TcXGTTiLS9jx4OPETxxL7NBB5P/7vyRf+hvUUVVv2mqFn80iYZJdp/592f3y6xTNXUD8mf6rE0BVeRW7luYy8N5TPYo/985ZDLr/NFyVTubf9xVgrZbLGtuBVkNrr5Zb9Z9l9L7eGkJse0YH5t87h3XTV9PrD/19W4kjOMqq2L5kFyMfGOhR+Ke3zuX0BwdRVeFi5t3zAatOncZn0XZYq1pxF/9zFUNusoYKO45vy8y75rPqnXUMvt5/i0Iyrh9HzjOfoFVOwlom0ub2M6kqLGHL4/+zIrhcJIzoTlx/64PGpoffo82tE3E5quqNU23nm3NJv2IkAAkjurPl8Q/YM+MHWl423G/1qY9iHuZtiKi6nZYxbOFZmVrzWafmQCqa3y/EsAHrAl0En0sKKwl0EXxu4c52gS6Cz/046cnlqjrgWNO37hGvt73v+WKkP3X78rjer6lpEj0pwzCM5sr0pBpmvjOGYRgB5kQ8vo5GRO4QkTUislpE3hGRCBFpJyJLRGSDiLwnImF23FvseF/UCDtVRP7m5yp7zDRShmEYAaQquDTI46shIpIB3AoMsLeQCwYuAf4PeFZVOwL7gWvsJNcCvYCVwDh7W7kHgcf8UtljYBopwzCMAPPxjhMhQKSIhABRwE7gdOAD+37NHXoAQu14DuAK4At/bBd3rMyclGEYRgBVPyflk7xUc0XkGawVz2XAl8ByoFBVq59VqbnbzjPAYmANsBD4mEbcLs4TpidlGIYRQNbCCfH4wnoUZ1mN67rqvOxnQM8B2gGtgGisTbfdvS2q+qaq9lXVy4E7gb8DE0TkAxF5VkQC3kaYnpRhGEaAefmQ7p4GlqCPAbao6m4AEfkQGIa10XaI3Zuqs9uOiLQCBqrqIyKyFBgKPAGMxtqMO2AC3koahmGczBTPe1Guo587tQ0YIiJR9iKI0cBa4FvgN3acmjv0VHsMa8EEQCR2Bw9rriqgTCNlGIYRYC6CPL4aoqpLsBZIr
AB+xvob/xLWWXt3ishGIBn4b3UaEelrp11pB/3XTtsP8P8mmkdhhvsMwzACyDpPyndHdajqw1hbx9W0GRhUT/yVHF6Sjqo+BzznswIdJ9NIGYZhBJgHw3gnLdNIGYZhBJA1J2VmXupjGinDMIwA82S7o5OVx42UvV4+qMYDYYjIOKAH8E2NSTfDMAzDQ4pQ5Tr5DjP0lDc9qXeACuBKABG5HviXfc8hImeq6lc+Lp9hGEaz5zI9qXp5MxA6BOtAwWp/Al4B4oEPgck+LJdhGMZJoXp1n6fXycabnlQakAsgItlY2268oKpFIvIaMM0P5TMMw2j2zMKJ+nnTSB3EeggMrKPc96jqT/ZrJxDhw3IZhmGcFKp3nDDc86aR+h64T0SqgNupPfSXjbWzrmEYhuElMydVP2/6mPcAScAMrF7TlBr3LgYW+a5YhmEYJ4dj2AX9pOJxT0pVNwCdRCRZVfcecfs2YJdPS2YYhnGSMHNS9fP6YV43DRSq+rNvinPiEYcQnt+8nnmO6rMv0EUwPNAc/3CNbLUx0EXwuR+PN4OTtIfkKa/++orIVcBvgTbUXSihqtrBVwUzDMM4GShQ1Qw/kPiKNztOPAg8AqzG+vBQ4a9CGYZhnCyq56QM97zpSV0DPK+qd/irMIZhGCej5tRIiUgYcD4wHmsTiFZYI297gXXAd8B7qrrWk/y8aaSSgU+9Kq1hGIbRoObynJSIRGHtRHQzkAj8AiwFdgNlWKvD2wE3AX8WkQXAA6q6sKF8vWmkvgN6A994XXrDMAyjXs3kOalNWKu8HwKmu1tkV01ETgEuB2aLyF2q+mJ9cb1ppG4HPhSRvVgP8tZZIqaqLi/yMwzDMLTZDPfdoKofexLR7j0tFJEpQFZDcb1ppNbb/75W3/t6mZ9hGMZJr7ksnPC0gToiTT6Q31AcbxqVR7G+n4ZhGIYPNYdGqiEiEgx0AgRYX/NcwqPxZseJKd4XzTAMw2iIIjhdzfc5KRHpCXwEtLeDtorIBZ4elNt8vzOGYRhNhAvx+GqC/gVMBeKADGAl8G9PE3vVSIlIuog8IyI/iMgmEVkqIn8RkZZeFdkwDMMArEMPm8MGsyJyl4i4a1O6A/+nqsWquhOr0eruab4eN1Ii0glrp4lbgWKs9e8lWJvL/igiHT3NyzAMwzhMVTy+TmBXYLUFpxwRvga4S0SiRSQNuM4O84g3Pan/wzr4sJOqjlLV36rqKKzJsAP2fcMwDMMrnveiTuSeFNAfa/X35yLyXxFJssNvAa7Faj92AoOxHuj1iDeN1CjgQVXdWjNQVXOwzpYa5UVehmEYhq059KRU1amqzwLdgGhgvYhcq6o/YnVmettXR1Vd7mm+3ixBDwOK6rlXZN83DMMwvNBcnpOqpqp5wCUiMhp4QUSuAa5X1VXHkp83PakfgVuOnBgTEQFuxAfHqhiGYZx01Fo84el1ohORCBGJV9WvgV5Ye77OE5FnRSTW2/y8aaQeBcYAv4jIoyJyg4g8gjUBdgbWMR6GYRiGl5rDEnQRaSMi32AtrNsnIr8AA1X1SazGqh3wq4hc7E2+3jzMO0tEzgSeACZjPTmswHJgkqp+6c0bG4ZhGM3qYd7/2v+eApRirfz+WERa2WsXzrXbkL+LyDWqOtaTTD36zohImIh8BJSp6gAgFmgNxKrqIFWd7W1tDMMwDEszGe4bDDyhqktU9WfgbiCFwztNoKqfYz0jtcjTTD1qpFS1EmuoL8h+Xaqquapa6nn5DcMwDHeaw+o+rPOjrhaRZBGJxlpmXgpsqxlJVctV9WFPM/Vmdd9CrFMW53qRxjAMw2iA1UM6oRsfT/0R+AAosF8XAr9X1fLjydSbRuourPHFYuBjrIeyanU+zXlShmEY3msOS9BV9UcR6Qx0xnokaZ2qlh1vvt40Uj/b/z5vX0c67vOkRORVYBJQoKo97LAk4D2sg7G2Ahep6n576fvzwESsLuXvVHWFneYq4M92to+r6ht2eH/gdSAS6+DG21R9P8qrLhfb//0swXHxZFxx7VHDqznLyij4eDoVBTsBocV5FxPZJouqkmJ2TnsNV3k5yaPHE9OtJwB5b79K2lkXEBIX7+sq1C5XcTnb//E55Tm7QaD1bZOI7pLJ2mteIDgyDIIECQ6i07PX1El7cPkm8l7+EnUpSWf0ocWFwwCoOlDClic+wFVSQcvLRxA/tDMAWx6fTuYNEwhN9nqlqscO5hSy8MHDB0wX5xbR8w/96XJJDwBcThezr/6EqNQoRvx1XJ30lUUVLH1qPoWb9iMCgycPJ6VnC8r3lzH/vq9wFFfS67r+ZI7IAmDePV8y4E+nEJUa7bc67d96kDkPLDhcx9wiBv6xN22GtXIb3vvSLrXSb/s+jwXPLENdStdzs+n3O2trtbL95cy6ex4VRZUMvrE37Ua2BmDmnd8x/P6BRKdG+aU++7Ye5Iv7FtcodzFDru9Bv8s6seKtdaz+eAsikJwdz9gpgwgJDz4Ut2hXKbMfWkLJnnIkSOh5fnv6XtoJgNL95Xx21/dUFFUy9MaeZI/KAGDGHQs4/YH+xKRG+qU+R3OCzzV5TFWdwFpf5nminSf1OvAC1o651e4DvlbVp0XkPvv1vcAEoKN9DcbaVXew3ag9DAywy7tcRGao6n47znXAYqxGajww09eVKFw0n9DUFrgqyj0Kr7b7i4+J6tiZ9N9ehVZV4XI4ACj+aSVxfQcS27MPuVNfJqZbT4p/XUN4eobfGyiA3Je/JLZfe7LuvwCXw4lWOA7d6/DE5YTEu/9DpU4Xuf+ZRfvHLiU0OY4Nd75K/OCORLRJZf93a0ka3YuE07qxecq7xA/tzIGl64ns0NKvDRRAXNsEJkw9H7AapE/OfofWI9oeur9++hrisxJwlFS6Tb/82cWkD8nk1CfH4HQ4cZZbR+PkzNlEu4kdaTumPXPvmEXmiCxy5+eQ2DnFrw0UQGJWHBdNm3ioTlMnfkT7UZnEpse4Da/J5XQx//9+4Kx/nk50iyj+d+UssoZnktQ+ng2zt9J5Unuyx7bl81u+od3I1mydt4OULol+a6AAkrLiuPzdsYfK98r4z8gelUFxQSk/vruRKz8YR0hECJ/f+z3rZm+j+9ntDqUNChaG39GHtK6JVJY4mHbZHNoMaUFy+3jWzdpG10lt6TyuDR/dPI/sURls/i6PtC6JAWugoHkM94nIear6kZdp0oG2qrq4vjger3tU1Smq+khDlzeFq+c95lH3WPpzgDfsr98Azq0RPlUti4EEu8LjgDmqus9umOYA4+17caq6yO49Ta2Rl884DhRSsn4t8QMGexRezVleTtnWzcT1t+5LSAjBkfYvTXAw6nCgVVUggjqdFC6aR+Kp/t+JyllaQcnqbSSN7QNAUGgwwTERHqUt3ZBHWHoS4S0TCQoNJmF4Nw4ssQ54lpAgXBVVqMOJiKBOF3s++YG084b6rS7u5C/LIyYjluh0q2EsLSghb+F22p/d2W18R0klu3/cSfuzrPvBocGExYYDEBQShLOiCpfDhQQJrioX695bQ9fLejVOZWy5P+QT
nxFDbHqMR+EFa/YS3zqWuMxYgkODyR7blq3fbQesOlVVVOF0OMGu00/v/EqfK7s1Wn22Ly0gPjOauFZWQ+9yuqiqcOKqclFV5qzTuESnRpLWNRGAsOhQktrFUVxQVqM+TpyVLkSs+qyctp7+V7r/eTcGxfNFEyd4Y/ZPEVklItfX2LfPLRE5TUReAjZiPUNVr6Zw3HsLe3t3VHWnvYsuWOeSbK8Rb4cd1lD4DjfhdYjIdVg9LkLiE70q7J4vPiFl7CRclRUehVer2r+X4Oho8j96l8qdeYRnZJI68VyCwsKJ7dWXXe+/zcEfl5Ey9kwOLP2euD4DCArz/05Ulbv2ExwfxfbnPqNsaz5RHVrS6rqxBEeEIcDmh6aBCMnj+5I8vl+ttI69RYSlHO4VhSbHUbo+F4DEEd3JeeZj9n/7E+m/O509ny8n8fSeBEWE+r1ONeXM2UxZeyb1AAAgAElEQVTbMzocer3iuUX0uXkQjlL3vaji3CLCEyJZ8vg89m/YR1KXZPrfMZSQyFDajs1m0cPfsnXmRnrfOJANH64la0I2IRGN+2u2cfZWssdleRxeUlBGdIvDvaLotCgKVu8FoOP4LL6avJD1n29hyC19Wf3Bejqd2Z7QRqzTutnb6DyuDQAxaVH0v6Iz/534OSHhwbQZ2oK2Q+s/KehAXgm71xXSskcyAF3Gt2Hm5CX88lkOp97ai1Xvb6TrpCxCIwP7p9CXQ1QikgC8AvSws/49sA730yYXYI2S7QPOVdW9ItIBayn5JV6+dTbWsvNHgX/YD/OuAnYDFUAi1nL0AUA8MA84Q1W/byhTb47q+OYo19deVuh4uftIoccQXjdQ9SVVHaCqA4KjPR+mKV63luCYGCIyWnsUXus9XS4qduaSMHAYbW66i6DQcPbPs+ZNgiMiybjiWtrccAfhrTIpWbeWmG69yP94OjvfeYOybVs9LqO31OmibNMukif2o/Pz1xIUEUbBB9b/qey/XEWn56+l3ZRL2PP5copXbzsisZsMxfoxBEdH0P7hS+j07DVEdmjJwR82ED+sC9v/8Tlbn/ofJb/ucJPYt5wOJ7kLcmg92hoqyl2wjfDESJK6pNSbxuV0sX/9HrLP78qEqecREhnK2qnWlmRhMWGM+Os4xr12LkmdU8hbuJ3Wo9qx9Kn5LHjgK/b8nN8oddo6L5cOY9p4FG5x84Oyf1vCY8I48/lR/ObNCaR2SSRnfi4dTm/N3McXM/ueeez6abfvK3FEuTfPy6PjGdbvTvnBSjbNzePqzyZy7eyzcJRV8cvnOW7TVpY6+Pzu7xlxVx/CY6wPP+GxYZz799O49O0zSOuayJb5O8kencFXj/3AZ3/6nrxVe/xaH7cU1CUeXx54Hpilql2wNnT9hcPTJh2Br+3XYC2IG4I1snSpHfY48KDX1bAeTXoU68P/5cAyrJ3Rfw/cAZwFBNvl626fptFgAwXebYsUhPVft+aVgvV0cfXZ9f6Qbw/VVY9fVi9v3IH1QHG1TCDvKOGZbsJ9pjxnCyW/rmHLXx9n1/S3KNuykV3vv11veE0hcfGExMUT0dqaG4np3ovynbl13mPft3NIHDGGop9XEt4qk7TzLmbvV1/4shq1hKbEEZoSR3Rnq9MZf0oXyjbtsu7Zc0ehCdHED+1M6fq8I9LGUrnn8J7Ejr0HCU2qPdQEkP/OAlpcdAqF89YQmd2S1rdNYufUuX6q0WE7F+0gqXMKkUlWL2L3T/nkzs9hxnnv8v2D35K/PI/vp3xbK01UWjRRqdGkdLc69K1HtWP/+r118l796kq6X9WHnDmbSOycwuDJw1n1n2V+r9O2hXmkdEkkKjnSo3Cwek4l+YcfeSwpKCXazfzMspdX0//3PdgwO4fULkmMemgoS/55THuGemzrwl2kdUkkOtkaYt62JJ/4jGiiEiMIDg0i+/RMdv5Ut2FxOlx8dvf3dJnYhuzRmXXuAyx5aQ2DrunKulnbSOuayBkPD+T7f/7sNq6/+Wq4T0TigOHYuz+oaqWqFlL/tIkLCAeiAIeInAbsVNUNx14Xdajqe6r6e1XtpqoJqhqhqhmqOtqeHvrV0/y82RZppLtwu2v4MfCkp3l5aQZwFfC0/e8nNcJvFpF3sRZOHLCHA2cDT4pI9TjdWOB+Vd0nIkUiMgRYAlwJ/MOXBU0ZeyYpY88EoHTLRvYvmEvLCy87dM9deLWQ2DhC4hOo3F1AWGoapZs3EJbaolacyr27qSo6QFS7DuzflUtQSCgiWHNVfhKaGENYShzlO/YSkZlM8aqtRLROxVleCS4lOCocZ3klRSs30+KS02qljerYisq8fVTsKiQ0OZbCeWtpe3ftacCKvH049hUR07MtZVvyCQqz/ktqpf/qVC1nzqZaQ319bhxInxsHApC/Io9f3/6ZYVNqz/tFJkcR1SKagzmFxLVNIH9ZLnFZCbXiFG0/QNmeEtL6pbN/w16Cw606OSudfq4RbJydQ0e3Q33uwwHSuiVTuL2Ig7nFRKdFsvHLHMY8XvvcusJtBynZU0qr/i3Ys36/tZpO/F+ndbMOD/UBxLaMYufPe3GUVRESEcz2pfm06FZ7+kNV+erRH0hqF0e/y93PNe3fVkTx7nIy+6exe10hIeHBiEBVRWCeovHh6r72WMNrr4lIb6xt626j/mmTR4DZWB/YLwemA94O8/nVcQ/EquomEXka+H9A3+PJS0TeAUYCKSKyA2uV3tPAdHu7923AhXb0L7CWn2/EWoJ+tV2efSLyGPCDHe9RVa1ejHEDh5egz8QPK/u8lTv1ZVqcexEhcfGknXkeuz54G3U6CU1MosX5tf+v7J0zk+QzJgAQ27MvO6e9RuGi+SSNHu/XMmb8cSzb/voxWuUirEUCrW+fRFVhCVuf+ACwhgQTR3Qnrr/1B3/zlHdpfcuZhCbHknH9ODY//A64XCSN6U1E29Raee+cOpf0K0cCkDC8O1ufeJ/dM36g5WUj/FqnqvIqdi3NZeC9p3oUf+6dsxh0/2lEpUbT/85hLJoyF6fDSUxGHEMmD68Vd9V/ltH7+gEAtD2jA/PvncO66avp9Yf+Pq9HTY7yKrYv3cnwyYM8Cv/81m8Z+eBgolOjOO1PA/jslm9Qp9Ll7A4kdajd8C791yoG3dgbgOxxWcy6+zt+encdA//ov0UhjrIqti3JZ/Tkw9+39J7JdBydybTL5hAULKR2TqTH+dauOx/fMo8xDw3kwI5ifvk8h5TseN66xNpS9JSbe9Lu1PRD+Xz/z58ZdpP1KEfn8W349M6FrHxnA0Nv6OG3+tRH8Xp1X4qI1OyWv6SqL9lfhwD9gFtUdYmIPM/hob267606B2txWfWjO18AnUXkbmA/1mM6Ad1ZSHzxmJCIjAU+UlX/rrMNgIiM1trmhjsCXQyfiupz5ALKpq9biv/nexpbQuhxPwd5wgkPchw9UhPzXL/py+09TY9JePsMzXzS44Nq2fzbyfW+n4i0BBarapb9+jSsRiobGGn3otKBuarauUa6KOAzrNXRX2IND14KOFX15WOqmI8c99a79lLDO4FNx18cwzC
Mk4+vNphV1V3AdnvnB4DRWA/XVk+bQO1pk2r3AM+rqgNrpEmx5qv89zCchzwe7hORLdRdBhQGVE+cXOCrQhmGYZxUfLtNwi3A2yISBmzGmgoJwv20CSLSChigqlPsoL9ibXhQiB+eJfWWN3NS31H3W1kO5ADvq6rpSRmGYXjNtw/pquqPWM8iHWl0PfHzsLajq379PvC+zwp0nLxZ3fc7P5bDMAzj5GQ/J9WciMhC4D/AdFV1v4OBh5rFcZCGYRhNmnpxNQ0OrOex8kTkbyLS5WgJ6uNVIyUifUXkQxHZIyJVItLPDn9SRPy7DtowDKPZOnKfhIauE5/9XG1XrIbqSmCNiMwVkYtFxKu9z7zZFulUrCN/uwDTjkjrAq735o0NwzAMW/PrSaGq61T1Tqxtkn6HtSXSNGCHiDwtIu0bSl/Nm57U01hPJnfHWnJe0wqsB8gMwzAMbzXDRqqaqlao6ptYO1/MB1KxlryvF5H37We76uVNI9UP+Ld9zMWR36o99hsbhmEY3lBAxfOrCRGRSBH5vYgsxdoFKBWrsWqFtQPQMODtBrLwagl6OfU/2JUOHPAiL8MwDMPWXE7mrSYiPYE/ApcB0VgPD9+rqjV3bH5ZRHZxlOXu3jRSC4DbRaTmk8rV39prgG/qJjEMwzCOqpk1UljnSOUBz2HtLbiznngbsdY61MubRupBYKH95h9gfVuvEpG/YZ0ZMtCLvAzDMIxqTWwYzwMXAh+raoPb5KvqL0CDR4x7c3z8KqxzSvKByVhrIW+2b49Q1XWe5mUYhmHYFMTl+dVEzAAi3N0QkWhvlqF7dVSHqq4ARotIBJAEFAZ6G3fDMIymrektiPDAK0Aoh0/7relFoBLrxN6jOqYdJ1S1XFXzTANlGIbhA81vCfoo6u60Xm0G9ewj6I5XPSn7UKzfAm2o25VTVe1QN5VhGIbRoKbT+HgqDSio595uDp+ecVTeHNXxINZRw6uBH4Hj2jTQMAzDsDW/RqoA6Al86+ZeT2Cvpxl505O6ButQrOZ1TK1hGEYgVT/M27x8BjwoInNV9afqQPv5qcnAR55m5E0jlQx86kV8wzAMwwPS/HpSDwFnAMtF5AdgB9YefoOALcCfPc3Im4UT3wG9vYhvGIZheKKZLZxQ1T1Yz84+hfW4Uh/73yeAgfZ9j3jTk7od+FBE9gJfAPvcFKzprOI3DMMw/EZVC7F6VA8dTz7eNFLr7X9fq69MXubXNCgEOZrXePHBdUmBLoLPuZLrW0jUdEUGVwa6CD4XKg1uQHDSkmZ2Mq8vedOoPEqT6WwahmE0EU1oGM8bItIDa8FdZ9w/suTRs1IeN1KqOsXj0hmGYRiea2aNlIgMxlrHsBXoCPwEJGI9Y7sDa2NZjxzTjhOGYRiG74h6fjURTwIfYh2SK8A1qpoFjME6ofdxTzMyjZRhGEagNbPVfUAv4C0OlzgYQFW/wWqgnvI0o+a30MEwDKOpaTqNj6dCgRJVdYnIPqyDcautA3p4mpHpSRmGYQSQN0N9TWi4bxPWw7tgzUf9XkSCRCQIuBrY5WlGpidlGIYRaM1zW6SRwDSs+anPgYOAE4gBbvU0I9NIGYZhBFgTOszQI6r6cI2vvxKRIcAFQBQwS1W/9DQv00gZhmEEWtMZxjsq+9TdicBPqroFQFVXAiuPJT9znpRhGEYgNa25pqNSVYeITAfGY20me1zMeVKGYRiB1owaKdtmrIMPj5s5T8owDCPQml8j9Rdgsoh8o6q7jycjc56UYRhGgDWn4T7b6UASsEVEFgM7qd0Uq6pe5UlG3jRS1edJfeNFGsMwDOPkcyrgAHYDHeyrJo+bZXOelGEYRqA1s56UqrbzVV7mPCnDMIxAamar+3zNnCdlGIYRaM1sDEpE2hwtjqpu8yQvc56UYRhGAAnNsie1laN3aoI9ycgMzxmGYQSajxspEQkGlgG5qjpJRNoB72KtuFsBXKGqlSJyC/BHYBtwrh12KnC+qt55HEX4PXVrlQycCbQHHvM0I692QReRviLyoYjsEZEqEelnhz8pIuO9ycswDMPg0JyUj3dBvw34pcbr/wOeVdWOwH6s514BrsU6+2klME5EBHgQLxoRt1VSfV1V3zji+pt9ZPwCrIbKIx43UnbrugjogrWzbc20LuB6T/MyDMMwavDhoYcikonVY3nFfi1Yzy19YEd5Azi3RpJQrI1fHcAVwBequv84a9SQt7B6Wh7xpif1NDAb6zjgI7uBK4B+XuRlGIZhVPPtybzPAfdweDlGMlCoqlX26x0cPuvpGWAxkAosBK4C/nVcdTm6NOru/Vovb+ak+mGNU6pInU7nHqxKHpWIvApMAgpUtYcdlgS8B2RhTbhdpKr77U8Az2PtqFsK/E5VV9hprgL+bGf7uKq+YYf3B14HIrGe57rNLrPb9/Ci/h5Tl4ucV54lJDaezN9ei+PAfnZ+PA1nSRGIkNBvKImDh9dKU7mngLz/TT302rF/L8kjx5M0ZARVJcXkTX8NZ3kZKaMmENulJwC57/6XFmf+hpDYeH9Uo06d8p57luD4eFpecy0AB+Z9R9GSJYAQlt6SlIsvISg0tFa6+uI4i4vJf/01XGXlJE4YT3QPq075r71K8vkXEBLvvzodzClk0UNfH3pdnFtEjz/0p/PFVhlcThdzfv8xkalRDH+m7ij20ie+I2/hNsITI5nw9m8OhZfvL2Ph/XOoLKqk53UDyByRBcD8e75kwJ9OITI12m912re1iM/vXXTo9YHcEobd0J1+l3Vi+VvrWf3RFhBIyY5n3CMDCQmvPWddXlTJnEeWsWfTQURg7MMDadU7mdJ9Fcy4ayEVRQ5OuakH2aOsv22f3L6Q0Q/0IyYt0i/12bu1iBn3LD30ujC3hFNv6MbAy7NZ9vZGVn24FVWl9/ntGHh5dp305QcrmfnoCvZsPAgiTJzSjwy7Ph/euZiKIgen3dSNTqe3AuB/ty9i7AN9iPVTfY7Gy4UTKSKyrMbrl1T1JQARqf7bulxERlZn7yYPBVDVN4E37bQPA38HJojIlcB24K5jef5VRIa7CQ7DOpH3fmC+p3l500iVY3UJ3UkHDniYz+vAC8DUGmH3AV+r6tMicp/9+l5gAtDRvgYD/wYG2w3Ow8AArG/2chGZYTc6/wauw/p08AXWTrwzG3gPn9u/ZB5hKWm4Kqw9eCUomLSx5xCRnomropytLz9LVPtOhKe2PJQmLCWNrD/eDVgNwqZnHznUGBWtXkFc7wHEde/L9rdfIrZLT4rXrSE8PbNRGiiAg/PnE9qiBa7ycgCqDhzg4PwFZNxzD0GhoRRMnUrJjyuJHTjoUJqG4hSvXEnMgIHE9OnDrldeJrpHT0rXrCEsI8OvDRRAXNsExr1xAWA1SJ+eM43M4VmH7m+Yvpq4rAQcJZVu02dN7ET2b7qz5NG5tcK3zdlE1oROtBnTnu/unEXmiCxyF+SQ2DnZrw0UQFJWLFe8NxYAl1N5adynZI/KoKigjJXvbOCq/40nNCKYz+5ZxLrZ2+l+dlat9HP/8iNZw1py1jPDcDpcOMqtD92/zt
pGt7Oy6DKuNR/eNJ/sURls+i6PtK4JfmugAJKzYrl6+uhD9fnX2C/odHordm88wKoPt3LlWyMJDg1i+k0L6XBaS5LaxtRK//VffqL9sBac98wQqz5lVn3WztpOj7Pa0HV8Ju/fuJBOp7di43c7adElIWANFODtwok9qjqgnnunAGeLyESs3kocVs8qQURC7N5UJpBXM5GItAIGquojIrIUGAo8AYwG5nhVOstc6taqurH8DrjB04y8Ge5bANxurxqpVl2Ia/BwuyRVnUfd3SrOwRonhdrjpecAU9WyGOsbnQ6MA+ao6j67YZoDjLfvxanqIlVVrIbw3KO8h085DhZSsuEX4vsOORQWEhtHRHomAEHhEYSnpFF1sP42vXTLBkITkwlNSLICgoNRhwOXswoRQV1O9i+ZR9KwUf6oQh1VhYWU/rKW2EGDa4Wry4k6HKjTictRSXBc3calvjhi10mdVSCCOp0cmD+P+JGNU6dqBcvyiM6IIzo9FoDSgmLyvt9O+7M615smrW864XHhdcKDQoJwVlThcriQIHBVuVj/3mq6XNbbb+V3Z9vSfBIyY4hrZTWMLqdSVeHEVeXCUe4kOrX2SEtFsYMdK3bT4zxrk4Dg0CAiYsOsr0OCqCp34qx0IUGCq8rFimkbGHBl/d8fX8tZUkBCZjTxraLYu7mIVr0SCY0MISgkiNb9U9jwTa2/t1QUO9i+Yg+9zss6XJ+4GvWpqF2fZW9vZPBVHRutPnV4M9R3lMZMVe9X1UxVzQIuAb5R1cuAb4Hqbv9VwCdHJH0Ma8EEWKNQijVcWF/H5GhGYc2D1byGAq1UdZSq5jWUuCZvelIPYo1ZrsKagFPgKhH5G9AfGOhFXkdqoao7AVR1p4hUb/GegdXlrFY9ltpQ+A434Q29Rx0ich1Wb4yQ+ESvKlIw+2NSx0zCVen+JBNH4T7Kd+USkdm23jwOrllJXI++h17H9ejHzg/f4sBPy0gdPYnCHxYS13sAQaFhXpXtWO395BOSJk3CVX64TiHx8cSPHMn2xx9DQkOJ7NSJqM61/3A1FCemb18K3n6b4uXLSDrzTA5+/z0x/QcQFNY4daq27atNtD3j8LZiK59bTO+bBlFV6vA6rzZjs1n88DdsnbWB3jcOYuOHa8ma0JGQiMZ90mPd7O10Hm89SxmbFsmAKzvzyoTPCAkPpu3QlmQNbVkr/oHcEiITw5n98A/sXn+AFl0TGXVPH0IjQ+gyoQ1fPLCYXz7L4bTbevLj9E10O7MtoZGNV6dfZu+g64TWAKRkxzHvhbWUFVYQEh7M5gX5tOyWUCt+4Y4SohLD+eKh5RSsP0DLbgmMvqc3YZEhdJvQmhn3L2XNZ9sYcVsPVkzfTPdJbRq1Pu40wsm89wLvisjjWCv5/nvovUX6wqGDCbHv/Yz1N/aRY3kzVf3uuEpbg8c9KVVdBQwH8oHJWF23m+3bI1R1na8KVUN9Y6nehntFVV9S1QGqOiA4yvNhmuL1awiJjiGiVWu3912VFeS+/zpp484lONz9vKE6qyhZt4bYbn0OhQVHRJJ56R/I+sOdRKRnUrxhLbFde7Hr0/fIff91yrZv9ap+3ihdu5bgmBjCM2vXyVlaSunqNbR+YDJtHnoYraykePlyj+MERUbS8tprybj9DsIyMin9ZS3RvXqx+/3p5L/xBuVb/VenQ+VzOMldkEPr060eRN7CHMITI0jq4tH0ah1hMWEM/+t4xr56HomdUshbuI3Mke344al5LHzgK/b8nO/L4rvldLjY9F0enc6weu7lByvZNDeXaz47k+u+PAtHWRVrP8+plcZV5aLg10J6X9iBK949g9DIYJa++isA4bGhnPeP07hs2hjSuiayZf5OOo7J5MtHl/Hp3d+Tt2qv3+uz8buddDnD+qyZ0j6OwVd34r3rFzD9poWkdYonKLj2nzGXU9n1ayF9L2rP1e+NJjQihMWvrjtUnwtfOIWrpp1Oy64JbJq3i85jMpj5yAo+unsxuX6uT338sAQdVZ2rqpPsrzer6iBVzVbVC1W1oka8lap6TY3Xz6lqd1UdXzOeV/URGSIiF9Vz70IRGezunjtePSelqivsde6xWOOacXbX7ZiOBa4h3x6qw/63wA7fAdT861g9ltpQeKab8Ibew2fKtm+heN0aNj3/GHn/e5PSLRvI++gtANTpJHf668T16Eds11715lG88VfC0zMIiYl1e3/vvC9JPnUMB1evICK9NS3PvoTd33zu66ocUr51C6Vr17D9icfZ/fZblG/cSMG0tynfsIGQ5CSCY2KQ4GCievaq07B4EgegcM4cEkaPoXjlSsIzMkm9+GL2z/zCb3WqtmvRdhI7pRCRZI1o7Pkpn7wF2/j0/HdY9NA3FCzPY/GUb48p7zWvraDb7/qybc4mErukMGjycH5+8QdfFt+tLQt20qJLItHJ1oegbUvyiWsVTVRSOMGhQXQ8PYOdR/whjm0RRWxaJOk9kwHoOCaTgl/rrila/NJaBl3TlV9nbaNF10TGThnIghd+9mt9Ni/YRYsuCYfqA9D7vCx+9+5oLnt1BBFxoSS2qf1BMrZFJLFpkbTqaQ2Xdz4jg/xfCuvkvfDFXxl6bWfWztxOy24JTJzSn3n/WOPX+tTLt6v7TgRPYa0Ed6erfd8jXjVS1VS1XFXzVLX0WNK7MQNrnBRqj5fOAK4UyxDggD1kNxsYKyKJIpIIjAVm2/eK7FZcgCuPyMvde/hM6uhJdLjjYTrc9iCtLriCqHYdaXXe5agquz59j/DUNJKGjmwwj6LVK4jr4X41f+Xe3VQVHSQqKxt1OEAEAbSqym18X0iaeCZtHnyI1pP/TOpllxORnU3apZcRnJBARU4OrspKVJXyDRsIbVF7BNWTOI7du3EePEBkhw6ooxKCrM6wP+tULWfOJtrUGOrrdcMgzv7kUs768LcMffR00vq3YsgU7+fIirYfoGxPKWl903GWW/OIAM4Kp8/KXp91s7bTefzhz2+xLaPY9fM+HGVVqCrblhaQ1K72B6DolAhiW0axb2sRgBWnfVytOPtziijeXUbrAalUlTuRIEHE/3VaO2sHXcdn1gor2Wct3jm4s5T13+TRbULtXn5MSgRxLSPZa9cnZ0kBKUfUZ19OMcW7y2hTXR8REKiqDMAmej6ckzqB9MZavObOUqwHiD3S4ECsiHhzdpTavawGicg7wEisZZQ7sFbpPQ1MF5FrsLbnuNCO/gXW8vONWEvQr7bfaJ+IPAZUfzR9VFWrF2PcwOEl6DPtiwbew+/Ktm/h4E/LCEtLZ+uLzwCQcvpEYjp2Y8e0l2h51sWExMbjclRSsnk9Lc50X7Q9335ByqiJAMT26Evue6+xf+k8UkY2/mYfEW3bEt2rF3nP/g2CggnLyCBuyFAAdr3yMikXXtRgnGr7Z84kccIEAGL69CX/9dc4OH8+ieP8W6eq8iryf8hlwL2neRR/3l2zGHjfaUSmRlu9rJV5VBSWM+OcafS4th/tz+pyKO7PL/5Azz9aU7RtzujAgvu+ZP37q+lxbX0LsnzDUVZFzpJ8xvy5/
6Gw9J7JdByTyVuXfkVQsJDWJYGeF1gP+39483zGPjSAmLRIRt3bl5kPLMFZ5SI+I5pxj9SeYl74z9WcclMPALqMb80nd3zPymkbGHpDfR+WfVOfrYsLGP/nvrXCP75rCWUHKgkKCeKM+/scWhTx/k0LGf9wP2LTIhlzb28+e+AHnA4XCRnRTHy0f6085r2whuE3W2XvOiGTD29fzLJpGzntxm5+q099BPfzFE1cBPV3goIBj+dRxFoEV89NkbnUbrs7Ay2xnjPKB1pgPXe0E1inqqd7+sZNRUSr1pr1h+PZwurEUxnXdD6OeWrQsF8DXQSfy4isO0TV1IWK/3uTje3/+ny4vIEl4UcV1aK1Zl/q+d+Yn5+787jerzGIyHJgmar+0c29F4HBqtqnbsq6GuxJqerIGhmfi/Vg7RBVXVojfDDWQ7LPe1R6wzAMo5ZmuAv6f4AXReQg8DKHV1pfh/XI0o2eZuTNusvHgAdrNlAAqrpERKYAj+OHeR7DMIxmr5k1Uqr6soh0Bu6g9jZ6irXR7Uue5uVNI9UR67x6dwqAunuTGIZhGA3TRnlOqtGp6t0i8m9gDNb+gXuAr1R1szf5eNNIbcE6d2Smm3t/xJqnMgzDMLzVzHpS1VR1E7DpePLwppF6BHhb5P+3d+dhUlTnAod/3/QszL4ywLCvwoAgi4oghqBsenEJJjdGIhKDwS0ajSYxGsG45Saam2iuN14jLiS8jzQAABkUSURBVCERFQwYRTYF2RUQUUFlh4FhHZh96+5z/6iaoWeY7umG7qlh+N7nqYeuqlOnzilm5utz6tQp+QJrxomagRPXY72+48YzKYhSSp2rWto9KRGZAnRu6I3u9u2hXTWTgjcmlBknXseaM68Qaxbbv9j/ngDGGmNmB5uXUkopHy3vOam7AX/TdxwG7gk2o5AmrDLGLAGWiEgUkIU1G28L7E1VSqmm09JaUlhjFPxN37EV6O5n3ylOa1ZFOzCFfVohpZQ655xdLaRgubEaMg0JaXLMJp9xQimlVD0tL0h9DEwD3mhg3zROzhbUqMZaUlGEMONEsCdVSillEVpkd9/jWLeG1gEvAvuxHub9MdZb3kcHm5HOOKGUUk5rYUHKGLNcRK7HeivwX3127QYmGmOWBZuXzjihlFJOMiDeFhalAGPMPGCePfNEJtZAu29CzUdnnFBKKYe1wO6+Wmf6QlydcUIppZzWQoOUiAzAGstwyqvIjTGvBpOHzjihlFIOa2ktKRFJA94FhtZssv/1rWl4g5Qx5nUROYoVrH4FxADVWEMJxxpjlgabl1JKKR8tLEgBT2Ddh7oMWAFchzVb0Y+AS4DvB5uRzjihlFJOMi2vJYU1hd4MTr5CPs8YswFYZs+MfjdwUzAZ6YwTSinltJYXpNoBO40xHhGpAJJ99s0FXg82o4ATzIqIR0Qusj977XV/i/t0aqKUUueymod5g13OEgeBNPvzHqwuvhohjQRvrCX1KNZrf2s+nz2XSCmlzhamxf1pXYkVmP4NvAY8IiJdsOb0mwzMDzajxmacmOHzeXro5VRKKRVQy3wz7wwgx/78e6xBFP8JJGAFqLuCzei07kmdS0ycobxbpdPFCKvoOI/TRQi7xOgqp4sQdlFnUd9OsFpincKhpQUp3zfyGmOqgfvsJWQhBSkRmQzcAHTi1IezjDEm6HeEKKWUsmns9ivoICUiD2M14b4ANgEtq3mhlFIO0Qamf6G0pG4B/mSM+VmkCqOUUuccQ0scOBE2oQSpTOCdSBVEKaXOVdqS8i/gc1L1LAcGRKogSil1zjIhLOeYxl4f7xvE7gHmisgx4D2goH56nSJJKaVC00LfzBs2jXX3uakbuwWY6SetCSI/pZRSvozRe1IBBDPjhF49pZSKoJb2nFQ4NTbjxPQmKodSSp2ztLvPP+2eU0opJxnAq1HKHw1SSinlNI1RfoUyBF0ppVQEhOtVHSLSUUQ+FJGtIvKliNxtb88QkcUiss3+N93ePtFOt0JEMu1t3UUk6Pc9RZoGKaWUclrNCL9glsDcwH3GmD7AUOAOEckFfgksNcb0BJba62BN+joUeBX4gb3tMeDhMNfwtGmQUkoph4WrJWWMyTfGbLQ/FwNbgfbANcArdrJXgGvtz14gDusVGtUiMgLIN8ZsC3slT5Pek1JKKSdFaCYJ+yWDA4F1QBtjTD5YgUxEsu1kM4CFwAFgEvAG8P3wl+b0aZBSSikHCSCekKJUlois91l/wRjzQp08RZKAOcA9xpgiEWkwI2PMYmCxfcxkrNmEzhORnwPHgbuNMWWhFC7cNEgppZTDJLQZJ44aY4b4zUskBitAzTLGzLU3HxKRdnYrqh1wuN4xCVivdR8LLMLqHvwBcCPwf6EULtz0npRSSjkplMllGx/dJ8DfgK3GmGd8ds3HCkLY/86rd+gDWK9iqgbi7TN5se5VOUpbUkop5aiwzt03HPgh8LmIbLK3PQg8BbwhIrcAe4Hv1hwgIjnAEJ8Zhp4G1gInODnAwjEapJRSymHhmhbJGLMS6zZXQy73c8wB4D981t8E3gxPic6cBimllHKazoLuV5MHKRHpiPXgWFusPs8XjDF/EpEMYDbQBdgNfM8Yc9zuY/0TcCVQBtxc8xyAPRrlITvrx4wxr9jbBwMvY/Wtvoc1QsX4O0c467f/vt8RFR8HEoW4omg7/U4AvKXlHJs5l+q8QyCQectE4np0rj2uOv8IR//nn7Xr7iMFpF53BSljL8VTVMLRZ/+Ot6yC1O+MJmFwXwCO/OlV0m+6luj0lHBW4RR77/oDEh+HRAlERdHhidsBKFywmqIP1oOBlFFDSL1y2CnH+kvjKSrl4DOz8JZWkPG9K0i8MBeAg3/4O1k/uprojMjVqXBPIR8+uLx2vfhACYNuvYC+N+RSWVzFqsdXc3zHcRBhxEPDyO6fXef4vDX7Wfv0xxivodc1PRkw+XwAyo9XsPSBD6kqrmLwtIF0HtkJgCU//4BhvxhKQuvIde8X7C7mnQfWnqzj/lKG39aXwZN6sv61b/j87d0g0LpnKuNmDCE6zlXn+A2ztrF57i4w0P87XRk8qScAZQWVzLt3NRXF1Vx6R196jmoPwNv3rGL0g4NIyo6PSH2O7S7mXw98Urt+Iq+UEbf34aJJPfhk1nY2zdkNBgZM7MJFk3qccry/NGUFlcz52Voqiqv51p259BqVA8Bbd69h7K8vIDlC9QnI6CzogTjRkqp5InqjiCQDG0RkMXAz1hPRT4nIL7GeiP4FMB7oaS8XA88DF9sB5xFgCNZNvg0iMt8OOs8Dt2L1q74HjAMWcPKp6/rnCKvsX0zFlZxYZ9vxf7xD/Pm9aH3njRi3G1NZXWd/TLvWtPvtTwEwXi/773myNhiVrfuMxOGDSLh4AIefnknC4L6UfbqV2M45EQ9QNXIe+hGulJN1qtp3iKIP1tP+sWlItIuDT71CwsBexLTLCipNyerNJI8YSNKw/uQ/9QqJF+ZSuuEr4rrkRDRAAaR2TuXaWVcD4PV4mX3Vm7UBZd3TH9N+aA6jnhqJp9qDu8JT51iv
x8ua/1rL2OfGkJidwPzJ79JpREfSu6Wxc9Euel7VnW6ju7Lw7sV0HtmJvSv2kXleRkQDFEBGl2QmvzHaLqPhf8f8mx6jcig+VM7Gf25nytyxxLRyMf/+tXz1/j76XdOl9tgj2wvZPHcXk/4+CldMFG/dsZJuI9qS3jmZr97fS98Jnek9riNv3b6SnqPas2P5Adr0To9YgALI7JLMLW+Mqq3Pc6MXcN6oHI5sK2LTnN3cPGskrpgoZt++mh4j2pLROelkfQKk+XJBHudf3Yk+4zow+/bV9BqVw7Zl+bTpk+ZMgKqhLSm/mnx032k8EX0N8KqxrAXS7CGUY4HFxpgCOzAtBsbZ+1KMMWuMMQar1eabV0PniChveQUVX+8m8TJr1KhERxOV6P8XomLLdqKzM4nOSrc2uFyY6mqM241ECcbjoXjRKpLHX9YUxW9Q1f4jtOrZkai4WMTlolWfrpR+sjXoNOKKwlS7MdVuRKw6FS5YTeqES5u0Hvmf5JPcIZmkdklUlVRx8NND9LrGakW4YlzEJcfWSX/0y6OkdEghpX0yrhgX3cZ0Ze9H+wCIcgnuSg+eag8igtft5ct/buH8H/Zr0jrtXXeItA5JpOZYXyqMx+Cu9OB1e3FXuElq3apO+oKdxeT0zyAmPpqo6Cg6Ds5i2wcHrDpFR+Gu9OCu8iJR4HV72TBrGxdO7tVk9dm97jBpHRNJzUng6K5i2tcr6zd2WWsESuOKEaorvHiqvLX/R5/M2sHQyT2brD4N0tfH++XoEPRAT0QDNX0s7YF9Pofl2dsCbc9rYDsBzhE+Ihz+w0vkP/IsJcs+BsB9uABXciIFL75F/m/+zLGX5uCtrPKbRdm6zSQO7V+7njj0Aso/38aRp2eSeu3llHywlsThA4mKi/WbR1gJ5D/5MnkP/g9FS60umNiO2VRs3Y2nuAxvZRVlm77BfaywzmGB0iQNH0DZZ9vIf+oV0q8fRdHij0kecUHT1cm2c/Fuuo3pCljdfq3S41jx6Cr+NekdVj62muryui3e0iNlJLY52aJMzE6g7EgpAN3HdWP/2v0s/OkSBk69gK1zvqbHld2JbtW0HRZfLcyj9/iOACS3iWfITb14Ydy7PD/638QlxdBlWNs66bN6pJC34SjlJyqpLnezc+VBig9Zz2/2Gd+JXasPMeeOlQyblsumN3aQ+x+diYlvujptfT+P3HEdAGjdI5m9G45SZpd1x8qDFB0sr5M+UJrc8R3ZteYQs29fzYjberNh9i76TejYpPVpiHi9QS/nGsf+Z4J9IpqGR6qY09geStluxeouxJWZFsqhtPn1NKLTU/AUlXD4938jul1rJDaGqj0HSJ80gbjunSiY9Q5F/15G2sQxpxbU7ab8062kXT+2dltUQiuy770ZsO5tFb27nKy7JnHspbl4y8pJGXdpnftb4ZYz/VaiM1LwFJaQ/8TLxORkEd+nK6lXjyD/iZlEtYoltlNbcNX9zhPbPttvmqiEVrT7xU0AeErKOTH/I9rc+wOOvPA23tIKUq8aTqtenSJWJwBPtYe9H+1jyO2DADBuL8e+LmDozy8mu19r1j79MZtf+YLB0waePKjBnyTrRy42KZYxf7wCgMqiSja/9jmX/+7brHx8NVXFVfT7Qe4p97fCXycvO5YfYMRPrdZbRVEV25cdYOq7VxKXHMM7969ly7t7yL3q5M9LZrcULppyHm9OW0FsQjTZvdKIcll1ikuOYeJzl9bm9fHMr7nmmWEsnLGBiuIqLvxhL3IGZEa0PtuWH2Tk3VbXd1a3FC6Z0ovXf7KK2IRo2vRKJSq67q98oDStkmP43nPWfdHyoirWvLSNiX+8mPdmbKSiqJqLbupBhwjWp0E1TySpBjnSkgr0RLS93/eJ6Dygo8/hHbDmmQq0vUMD2wOdow5jzAvGmCHGmCH17y01puYekSslifhBfanauY/o9FRc6SnEdbf+6CYM6UfVngMNHl+++RtiO+fgSk1ucH/hvKWkTPg2ZWs/I7ZLezJvmciJtxaFVMZQ1dwjcqUmkXBhHyp37Acg5dtD6PDkHeQ8MhVXUjwxbU/95Q4mzfG5H5J27UhKVm0mrmt7Wv/kOgpmL45onQDyVu8ns3cG8ZlW12tCdiKJ2Qlk92sNQJdRnTn29bE6xyRmJ1B6qLR2vfRwWYP3mza9+BkDpvRn56JdZPXO5NKHhrH++U8jWBvLrpUHye6dRmKm1aW3Z+1hUtsnkpARhysmip6Xt2f/pmOnHHf+dV256fUr+P5LI2mVEkNap1N//tb8dQtDf9yHrxbspU1uGuOmD2HFs19EtD47Vh6kjU99AAZ8pws/mj2KSTMvo1VqLOmdkk45Lpg0q/76FcOnnseWBfto2yedq2YMYvmft0S0Pg0RDGKCX841TR6kTuOJ6PnATWIZChTaXXULgTEikm6/G2UMsNDeVywiQ+1z3VQvr0BPXZ8Rb2UV3vLK2s8VX24jpn0bXGnJRGemUZ1/BICKLTuIyWn4G3XZ2s9IGDqgwX3VB4/iOVFEq97d8FZVW6PtEEx1dYPpw8Fb4VOniirKN28ntoNVdk9hCQDuoyco/WQLScP6n3J8Y2mq84/iOV5EfG5XTFU1RAmIYKrdEatTjZ2LdtV29QEkZMWTmJ1I4R6rS/LAJ/mkda3bks7KzaJwXxHF+4vxVHvYuWgXnUZ0qJOmcG8RZUfLaTeoLe4KN0QJIoKnsu4gjEjY+v5eeo872QJNaRdP/uYCqsvdGGPYs+4wmd1OHZhSWlABQFF+Gds+OECf8R3r7D++p5iSIxV0HNKa6grrnpsIuKsiW6ctC/LoO77u9S09Zv08FuaX8fXSA+TW2x9MmoI9JZQcrqDTkCyrPlFAE9THr/C9qqPFcaK7L9Qnot/DGn6+HWsI+hQAY0yBiPwWqBmn+qgxpsD+fBsnh6AvsBcCnCMsvIUlHHn2NWvF4yVh6AXE9z8PgPQbJ3Dsr7Mxbg/RrTPI/PH1ABx+ZiYZUyYSnZ5SG9gybr6uwfwL5ywi1e4iTBw6gCN/fo3iRatIvW50OKtRh6ewhEPP/AMA4/GSNLw/CRdYN80P/fGfeErKEJeLrCkTcCVZLZL8371K66nXEp2R4jdNjYLZS8j4T6uLLGlYfw4+PYvCBWtI/26Dzx2GjbvCzYF1+Qz/1SV1tg+9/2KWPbwCr9tLck4SI34zHIBF9yzh0l8PI6F1ApfcfzELf7oE4/XSc0JP0run18ljw/MbGXyb1YXYbUxXlt7/IVte38KgnwwkkqrL3exZe5gxDw2u3dbu/Ex6XdGe125YiriENr3T6D/RCsxz7ljJ2EcGk5Qdz/z71lBeWIUrOorLf3UBrVLq3htc8dyXjLjT6nLrPb4j8+5ZzcZ/bGf47bkRrc+utYcZ93Dd6zb3vnV2WYWxDw4g3i7r7DtWc+UjA0nOjvebpsby57bwrTutsueO68icn61l/awdjLijT8TqE9A5GHyCJUYvTkBxXTuYmmedWoroOIe+LUbQyO7N5vU3YZMRW9p4orNMjLS8n70nB7y9IdCEr41JTcgxQ8+
bGnT6RZsePaPznW10xgmllHLYuXivKVgapJRSymkapPzSIKWUUo46NwdEBEuDlFJKOckAob2Z95yiQUoppRym96T80yCllFJO0yDllwYppZRykgG8GqT80SCllFKO0oETgWiQUkopp2mQ8kuDlFJKOU2DlF8apJRSykl6TyogDVJKKeUoA96WN6dhuGiQUkopJ2lLKiANUkop5TS9J+WXBimllHKaBim/NEgppZSj9DmpQDRIKaWUkwzg9TpdimZLg5RSSjlNW1J+aZBSSimnaZDyS4OUUko5yugQ9AA0SCmllJMMGI8+zOuPBimllHKadvf5pUGqEVW79x/de/Ov9jTBqbKAo01wnqbUZHXa2RQnsej/09mhKevU+YyONkZH9wWgQaoRxpjWTXEeEVlvjBnSFOdqKlqns4PWqRnQlpRfGqSUUsphRltSfmmQUkopR+mME4FokGo+XnC6ABGgdTo7aJ2cpLOgBxTldAGUxRhz9vxSBUnrdHbQOjUDxhv80ggRGSciX4vIdhH5pb1tlohsFpEnfNI9LCLXRLBWYaFBSjUJEZkuIs3262JzL18gInKtiNzrdDnU6TGA8Zqgl0BExAX8BRgP5AI3iEh/AGNMf2CEiKSKSDvgImPMvMjW7sxpkFLK8iJwidOFOE3XAhqkzlbGYDyeoJdGXARsN8bsNMZUAa8DVwHxIhIFxAIe4FHgN5GsVrjoPSnVYolInDGmMpi0xpg8IC/CRQpKKOVWLUQQ3XhBag/s81nPAy4G9gIbgdeAHoAYYz4N10kjyhijiy4RX4Dp1o9bnW0DgPnAcaAcWAWMqJemB9Yv1i47zU7geSC9ofyBfsBCoASY57O9J/CuvX0P1rfIqEbKF9SxdtobgK+ACuBz4GpgGbAsmOtSv9zB1h142T7ed9kdyjXWxfHfjfeB9SEsX9Rbv9Unr+8CL/qs/xB4tt753gFygF8DbwBTnb4GgRZtSSlHiMggYAXwKTAVKAOmAUtEZJgxZoOdNAfr2+A9WH9ouwEPAu/RcPfcPOBvwO8ALzDS3v42MBP4IzABmIH1jXNmEMUNeKyIjAZmYQWD+7BmO/hvoBXwTRD5N1RuCK7uvwVaAxdiBUaASrtcwV5j5SBjzLgwZpcHdPRZ7wAcqFmxB0qsBxKBfsaY74nIRyIyyxhTFsZyhI/TUVKXc2OhXksFWApsBWJ9trnsbf8KkE80cClWi2Fg/fyBuxs6LzCl3vbPgUX+yhfisauxvt2Kz7ZB9rHLgrku9csdYt1fBvIaSH9a11iXs3exf0Z2Al2x7j99BvS198UAHwAJWC3sf9jbVwJpTpfd36IDJ1STE5F44FvAm4BXRKJFJBoQYAlwmU/aWBF5UES+EpFyoBqrdQBwXgPZv+3ntO/WW/8C6BRkkf0ea4+mGgLMMfZvPIAxZiNWN12wTin3adTd99igr7FqOYwxbuBOrK7jrcAbxpgv7d13AK8Yq8W0GRAR+RxYZYw54UiBg6DdfcoJGVjf6B+2l1OISJQxxgs8CdyFNRppNVCM1YUxF6s7rb58P+csqLde6ef4UI/NwvqGeriB4w4FmT80XO5Q6+4rlGusWhBjzHtYXcL1t/+3z2eDdR+12dMgpZxwAuu+y1+AVxtK4PPH8/vAq8aYx2r2iUhSgLyb+lmno1gtnOwG9rXBGlUVjIbKHWrdfYVyjZVqtjRIqSZnjCkVkRVY/eIbG/ljmYAVBHxNiVjhQmSM8YjIemCiiEyv6fITkcFY9wWCDVINCbbulUB8vXKFco2VarY0SCmn3At8BCwUkb9hdXdlYQ04cBljfmmnex+YbPedbwe+AwxzoLyBPAIsAt4WkRew6jEdOMjJkXqnI9i6bwEyROQ2rJFbFcaYzwn+GivVbGmQUo4wxmwUkQux/sD/GUgFjmA9cPi/PknvwrrZ/7i9/h5WX/rHTVfawIwxi0XkRqy6vI0VUO7Dep6q8AyyDrbuLwJDgSeANKxnubqEcI2VarbEZ0CSUipMRKQDVrB63BjzW6fLo9TZSoOUUmfIHu79DNbQ7qNYD90+gDVwoq8xxt+IQ6VUI7S7T6kz5wHaAs8BmUAp1vNM39UApdSZ0ZaUUkqpZktnnFBKKdVsaZBSSinVbGmQUkop1WxpkFJKKdVsaZBSSinVbGmQUkop1WxpkFJKKdVsaZBSSinVbP0/rguhbe/jPhUAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "import argparse # handles arguments\n", "import sys; sys.argv=['']; del sys # required to use parser in jupyter notebooks\n", "\n", "# Training settings\n", "parser = argparse.ArgumentParser(description='PyTorch SUSY Example')\n", "parser.add_argument('--dataset_size', type=int, default=100000, metavar='DS',\n", " help='size of data set (default: 100000)')\n", "parser.add_argument('--high_level_feats', type=bool, default=None, metavar='HLF',\n", " help='toggles high level features (default: None)')\n", "parser.add_argument('--batch-size', type=int, default=100, metavar='N',\n", " help='input batch size for training (default: 64)')\n", "parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',\n", " help='input batch size for testing (default: 1000)')\n", "parser.add_argument('--epochs', type=int, default=10, metavar='N',\n", " help='number of epochs to train (default: 10)')\n", "parser.add_argument('--lr', type=float, default=0.05, metavar='LR',\n", " help='learning rate (default: 0.02)')\n", "parser.add_argument('--momentum', type=float, default=0.8, metavar='M',\n", " help='SGD momentum (default: 0.5)')\n", "parser.add_argument('--no-cuda', action='store_true', default=False,\n", " help='disables CUDA training')\n", "parser.add_argument('--seed', type=int, default=2, metavar='S',\n", " help='random seed (default: 1)')\n", "parser.add_argument('--log-interval', type=int, default=10, metavar='N',\n", " help='how many batches to wait before logging training status')\n", "args = parser.parse_args()\n", "\n", "# set seed of random number generator\n", "torch.manual_seed(args.seed)\n", "\n", "grid_search(args)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.1" } }, "nbformat": 4, "nbformat_minor": 2 }