{ "cells": [ { "cell_type": "markdown", "id": "98289536", "metadata": { "id": "98289536" }, "source": [ "# **過擬合(Overfitting)**\n", "從模型調校當中了解分別需要查看訓練集以及驗證集的模型表現結果,然而在驗證集上若沒有如訓練集表現的,其中一個可能發生的原因即是模型過擬合在訓練集上,此份程式碼會介紹在過擬合情況產生時,如何在模型上做抑制的手段。\n", "\n", "## 本章節內容大綱\n", "* ### [Regularization](#Regularization)\n", "* ### [Early Stopping](#EarlyStopping)\n", "* ### [Dropout](#Dropout)\n", "* ### [Parameter Initialization](#ParameterInitialization)\n", "* ### [Batch Normalization](#BatchNormalization)\n", "-----------------" ] }, { "cell_type": "markdown", "id": "7648b4fe", "metadata": { "id": "7648b4fe" }, "source": [ "## 匯入套件" ] }, { "cell_type": "code", "execution_count": null, "id": "80780447", "metadata": { "id": "80780447" }, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "\n", "# Tensorflow 相關套件\n", "import tensorflow as tf\n", "from tensorflow import keras\n", "from tensorflow.keras import layers" ] }, { "cell_type": "markdown", "id": "39d096f5", "metadata": { "id": "39d096f5" }, "source": [ "## 創建資料集/載入資料集(Dataset Creating / Loading)" ] }, { "cell_type": "code", "source": [ "# 上傳資料\n", "!wget -q https://github.com/TA-aiacademy/course_3.0/releases/download/DL/Data_part3.zip\n", "!unzip -q Data_part3.zip" ], "metadata": { "id": "3tzFaHNZu6GR" }, "id": "3tzFaHNZu6GR", "execution_count": null, "outputs": [] }, { "cell_type": "code", "execution_count": null, "id": "0942624d", "metadata": { "id": "0942624d" }, "outputs": [], "source": [ "train_df = pd.read_csv('./Data/News_train.csv')\n", "test_df = pd.read_csv('./Data/News_test.csv')" ] }, { "cell_type": "code", "execution_count": null, "id": "772c8373", "metadata": { "id": "772c8373" }, "outputs": [], "source": [ "train_df.head()" ] }, { "cell_type": "code", "execution_count": null, "id": "b167f802", "metadata": { "id": "b167f802" }, "outputs": [], "source": [ "X_df = train_df.iloc[:, :-1].values\n", "y_df = train_df.y_category.values" ] }, { "cell_type": "code", "execution_count": null, "id": "1adce20a", "metadata": { "id": "1adce20a" }, "outputs": [], "source": [ "X_test = test_df.iloc[:, :-1].values\n", "y_test = test_df.y_category.values" ] }, { "cell_type": "markdown", "id": "3e803aaa", "metadata": { "id": "3e803aaa" }, "source": [ "## 資料前處理(Data Preprocessing)" ] }, { "cell_type": "code", "execution_count": null, "id": "e096abf4", "metadata": { "id": "e096abf4" }, "outputs": [], "source": [ "from sklearn.preprocessing import StandardScaler, MinMaxScaler\n", "# Feature scaling\n", "sc = StandardScaler()\n", "X_scale = sc.fit_transform(X_df, y_df)\n", "X_test_scale = sc.transform(X_test)" ] }, { "cell_type": "code", "execution_count": null, "id": "a422159d", "metadata": { "id": "a422159d" }, "outputs": [], "source": [ "# Convert to One-Hot encoding\n", "y_onehot = keras.utils.to_categorical(y_df)\n", "y_test_onehot = keras.utils.to_categorical(y_test)" ] }, { "cell_type": "code", "execution_count": null, "id": "b693ebeb", "metadata": { "id": "b693ebeb" }, "outputs": [], "source": [ "# train, valid/test dataset split\n", "from sklearn.model_selection import train_test_split\n", "X_train, X_valid, y_train, y_valid = train_test_split(X_scale, y_onehot,\n", " test_size=0.2,\n", " random_state=17,\n", " stratify=y_df)" ] }, { "cell_type": "code", "execution_count": null, "id": "0dc32d3f", "metadata": { "id": "0dc32d3f" }, "outputs": [], "source": [ "print(f'X_train shape: {X_train.shape}')\n", "print(f'X_valid shape: {X_valid.shape}')\n", "print(f'y_train shape: 
{ "cell_type": "code", "execution_count": null, "id": "a422159d", "metadata": { "id": "a422159d" }, "outputs": [], "source": [ "# Convert to one-hot encoding\n", "y_onehot = keras.utils.to_categorical(y_df)\n", "y_test_onehot = keras.utils.to_categorical(y_test)" ] },
{ "cell_type": "code", "execution_count": null, "id": "b693ebeb", "metadata": { "id": "b693ebeb" }, "outputs": [], "source": [ "# Train / validation split\n", "from sklearn.model_selection import train_test_split\n", "X_train, X_valid, y_train, y_valid = train_test_split(X_scale, y_onehot,\n", "                                                      test_size=0.2,\n", "                                                      random_state=17,\n", "                                                      stratify=y_df)" ] },
{ "cell_type": "code", "execution_count": null, "id": "0dc32d3f", "metadata": { "id": "0dc32d3f" }, "outputs": [], "source": [ "print(f'X_train shape: {X_train.shape}')\n", "print(f'X_valid shape: {X_valid.shape}')\n", "print(f'y_train shape: {y_train.shape}')\n", "print(f'y_valid shape: {y_valid.shape}')" ] },
{ "cell_type": "markdown", "id": "36fa6c65", "metadata": { "id": "36fa6c65" }, "source": [ "## Model Building" ] },
{ "cell_type": "code", "execution_count": null, "id": "3aa4c3be", "metadata": { "id": "3aa4c3be" }, "outputs": [], "source": [ "def build_model(input_shape, output_shape):\n", "    keras.backend.clear_session()\n", "    tf.random.set_seed(17)  # fix the random seed for reproducibility\n", "\n", "    model = keras.models.Sequential()\n", "    model.add(layers.Dense(64,\n", "                           input_shape=input_shape,\n", "                           activation='tanh'))\n", "    model.add(layers.Dense(64,\n", "                           activation='tanh'))\n", "    model.add(layers.Dense(output_shape,\n", "                           activation='softmax'))\n", "\n", "    return model" ] },
{ "cell_type": "code", "execution_count": null, "id": "def4ac11", "metadata": { "id": "def4ac11" }, "outputs": [], "source": [ "model = build_model(X_train[0].shape, y_onehot.shape[1])\n", "model.summary()" ] },
{ "cell_type": "markdown", "id": "8a8c0484", "metadata": { "id": "8a8c0484" }, "source": [ "## Model Training" ] },
{ "cell_type": "code", "execution_count": null, "id": "04261902", "metadata": { "id": "04261902" }, "outputs": [], "source": [ "# Compile the model for training (set the optimizer, loss function, metrics, etc.)\n", "model.compile(loss='categorical_crossentropy',\n", "              optimizer=keras.optimizers.Nadam(0.001),\n", "              metrics=['acc'])" ] },
{ "cell_type": "code", "execution_count": null, "id": "66c1b3ef", "metadata": { "id": "66c1b3ef" }, "outputs": [], "source": [ "history = model.fit(X_train, y_train,\n", "                    epochs=20,\n", "                    batch_size=512,\n", "                    validation_data=(X_valid, y_valid))" ] },
{ "cell_type": "markdown", "id": "c26459a6", "metadata": { "id": "c26459a6" }, "source": [ "## Model Evaluation" ] },
{ "cell_type": "code", "execution_count": null, "id": "70fea77f", "metadata": { "id": "70fea77f" }, "outputs": [], "source": [ "train_loss = history.history['loss']\n", "train_acc = history.history['acc']\n", "\n", "valid_loss = history.history['val_loss']\n", "valid_acc = history.history['val_acc']" ] },
{ "cell_type": "code", "execution_count": null, "id": "23a78ed7", "metadata": { "id": "23a78ed7" }, "outputs": [], "source": [ "plt.figure(figsize=(15, 4))\n", "plt.subplot(1, 2, 1)\n", "plt.plot(range(len(train_loss)), train_loss, label='train_loss')\n", "plt.plot(range(len(valid_loss)), valid_loss, label='valid_loss')\n", "plt.xlabel('Epochs')\n", "plt.ylabel('Loss')\n", "plt.legend()\n", "\n", "plt.subplot(1, 2, 2)\n", "plt.plot(range(len(train_acc)), train_acc, label='train_acc')\n", "plt.plot(range(len(valid_acc)), valid_acc, label='valid_acc')\n", "plt.xlabel('Epochs')\n", "plt.ylabel('Accuracy')\n", "plt.legend()" ] },
{ "cell_type": "code", "execution_count": null, "id": "50017993", "metadata": { "id": "50017993" }, "outputs": [], "source": [ "# Print the results on the testing data (evaluate once, then report both metrics)\n", "test_loss, test_acc = model.evaluate(X_test_scale, y_test_onehot, verbose=0)\n", "print('============================')\n", "print('Testing data')\n", "print('============================')\n", "print(f'loss: {test_loss}')\n", "print(f'acc: {test_acc}')" ] },
{ "cell_type": "markdown", "id": "c2d332c7", "metadata": { "id": "c2d332c7" }, "source": [ "## Strategies for Suppressing Overfitting" ] },
{ "cell_type": "markdown", "id": "a4127de0", "metadata": { "id": "a4127de0" }, "source": [ "![](https://hackmd.io/_uploads/B1rmk5Ubp.png)\n" ] },
{ "cell_type": "markdown", "id": "6ed71fa3", "metadata": { "id": "6ed71fa3" }, "source": [ "<a name=\"Regularization\"></a>\n", "* ## Regularization" ] },
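{ "cell_type": "markdown", "id": "ed1a0003", "metadata": { "id": "ed1a0003" }, "source": [ "As a reminder of what the regularizers below add to the objective (standard definitions, matching Keras' `l1_l2`): with weights $w$ and base loss $L$, the penalized loss is\n", "\n", "$$\\tilde{L}(w) = L(w) + \\alpha_1 \\sum_i |w_i| + \\alpha_2 \\sum_i w_i^2$$\n", "\n", "where $\\alpha_1$ and $\\alpha_2$ correspond to `l1_alpha` and `l2_alpha` in the code. The L1 term pushes weights toward exact zeros (sparsity), while the L2 term shrinks all weights toward zero." ] },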
"execution_count": null, "id": "99c90b9c", "metadata": { "id": "99c90b9c" }, "outputs": [], "source": [ "def build_model_regular(input_shape, output_shape, l1_alpha, l2_alpha):\n", " # 重新建構一個可以新增 Regularizers 的模型\n", "\n", " keras.backend.clear_session()\n", " tf.random.set_seed(17) # 固定隨機產生的數字序列\n", "\n", " model = keras.models.Sequential()\n", " model.add(layers.Dense(64,\n", " input_shape=input_shape,\n", " activation='tanh',\n", " kernel_regularizer=keras.regularizers.l1_l2(\n", " l1=l1_alpha, l2=l2_alpha)))\n", " model.add(layers.Dense(64,\n", " activation='tanh',\n", " kernel_regularizer=keras.regularizers.l1_l2(\n", " l1=l1_alpha, l2=l2_alpha)))\n", "\n", " model.add(layers.Dense(output_shape,\n", " activation='softmax'))\n", "\n", " return model" ] }, { "cell_type": "code", "execution_count": null, "id": "c145bd62", "metadata": { "id": "c145bd62" }, "outputs": [], "source": [ "# 以下放置要比較的 regularizer 數值\n", "l1_l2_list = [(0, 0), (1e-3, 0), (0, 1e-2), (1e-3, 1e-2)]\n", "\n", "batch_size = 512\n", "epochs = 20\n", "\n", "# 建立兩個 list 記錄選用不同 regularizer 數值的訓練結果\n", "train_loss_list = []\n", "train_acc_list = []\n", "\n", "# 建立兩個 list 記錄選用不同 regularizer 數值的驗證結果\n", "valid_loss_list = []\n", "valid_acc_list = []\n", "\n", "# 建立一個 list 紀錄選用不同 regularizer 數值的測試結果\n", "test_eval = []\n", "\n", "# 迭代不同的 regularizer 數值去訓練模型\n", "for l1_alpha, l2_alpha in l1_l2_list:\n", " print('Training a model with regularizer L1: {}, L2: {}'\n", " .format(l1_alpha, l2_alpha))\n", "\n", " # 確保每次都是訓練新的模型,而不是接續上一輪的模型\n", " model = build_model_regular(X_train[0].shape, y_onehot.shape[1],\n", " l1_alpha, l2_alpha)\n", " model.compile(loss='categorical_crossentropy',\n", " optimizer=keras.optimizers.Nadam(0.001),\n", " metrics=['acc'])\n", "\n", " # 確保每次都設定一樣的參數\n", " history = model.fit(X_train, y_train,\n", " batch_size=batch_size,\n", " epochs=epochs,\n", " verbose=0,\n", " validation_data=(X_valid, y_valid))\n", "\n", " # 將訓練過程記錄下來\n", " train_loss_list.append(history.history['loss'])\n", " valid_loss_list.append(history.history['val_loss'])\n", " train_acc_list.append(history.history['acc'])\n", " valid_acc_list.append(history.history['val_acc'])\n", " test_eval.append(model.evaluate(X_test_scale,\n", " y_test_onehot,\n", " verbose=0))\n", "print('----------------- training done! 
-----------------')" ] }, { "cell_type": "code", "execution_count": null, "id": "5f2b62a2", "metadata": { "id": "5f2b62a2" }, "outputs": [], "source": [ "# 視覺化訓練過程\n", "plt.figure(figsize=(15, 7))\n", "\n", "train_line = ()\n", "valid_line = ()\n", "\n", "# 繪製 Training loss\n", "plt.subplot(121)\n", "for k in range(len(l1_l2_list)):\n", " l1, l2 = l1_l2_list[k]\n", " loss = train_loss_list[k]\n", " val_loss = valid_loss_list[k]\n", " train_l = plt.plot(\n", " range(len(loss)), loss,\n", " label=f'Training L1: {l1}, L2: {l2}')\n", " valid_l = plt.plot(\n", " range(len(val_loss)), val_loss, '--',\n", " label=f'Validation L1: {l1}, L2: {l2}')\n", "\n", " train_line += tuple(train_l)\n", " valid_line += tuple(valid_l)\n", "plt.title('Loss')\n", "\n", "# 繪製 Training accuracy\n", "plt.subplot(122)\n", "train_acc_line = []\n", "valid_acc_line = []\n", "for k in range(len(l1_l2_list)):\n", " l1, l2 = l1_l2_list[k]\n", " acc = train_acc_list[k]\n", " val_acc = valid_acc_list[k]\n", " plt.plot(range(len(acc)), acc,\n", " label=f'Training L1: {l1}, L2: {l2}')\n", " plt.plot(range(len(val_acc)), val_acc, '--',\n", " label=f'Validation L1: {l1}, L2: {l2}')\n", "plt.title('Accuracy')\n", "\n", "first_legend = plt.legend(handles=train_line,\n", " bbox_to_anchor=(1.05, 1))\n", "\n", "plt.gca().add_artist(first_legend)\n", "plt.legend(handles=valid_line,\n", " bbox_to_anchor=(1.05, 0.8))\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "800bec37", "metadata": { "id": "800bec37" }, "outputs": [], "source": [ "# Print the results of testing data\n", "for k in range(len(l1_l2_list)):\n", " print('============================')\n", " print(f'(l1, l2) = {l1_l2_list[k]}')\n", " print('============================')\n", " print(f'loss: {test_eval[k][0]}')\n", " print(f'acc: {test_eval[k][1]}\\n')" ] }, { "cell_type": "markdown", "id": "16ee28bf", "metadata": { "id": "16ee28bf" }, "source": [ "\n", "* ## Early Stopping" ] }, { "cell_type": "code", "execution_count": null, "id": "03de87a8", "metadata": { "id": "03de87a8" }, "outputs": [], "source": [ "n_patience = 5 # 訓練過程經過 n_patience 次沒有進步之後停止\n", "early_stopping = keras.callbacks.EarlyStopping(\n", " monitor='val_loss', # 是否進步的指標\n", " patience=n_patience,\n", " verbose=1)" ] }, { "cell_type": "code", "execution_count": null, "id": "63c5ef65", "metadata": { "id": "63c5ef65" }, "outputs": [], "source": [ "model = build_model(X_train[0].shape, y_onehot.shape[1])\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "id": "644bc9da", "metadata": { "id": "644bc9da" }, "outputs": [], "source": [ "# 編譯模型用以訓練 (設定 optimizer, loss function, metrics, 等等)\n", "model.compile(loss='categorical_crossentropy',\n", " optimizer=keras.optimizers.Nadam(0.001),\n", " metrics=['acc'])" ] }, { "cell_type": "code", "execution_count": null, "id": "479bd6a9", "metadata": { "id": "479bd6a9" }, "outputs": [], "source": [ "history = model.fit(X_train, y_train,\n", " epochs=20,\n", " batch_size=512,\n", " validation_data=(X_valid, y_valid),\n", " callbacks=[early_stopping])" ] }, { "cell_type": "code", "execution_count": null, "id": "3d7e2f86", "metadata": { "id": "3d7e2f86" }, "outputs": [], "source": [ "train_loss = history.history['loss']\n", "train_acc = history.history['acc']\n", "\n", "valid_loss = history.history['val_loss']\n", "valid_acc = history.history['val_acc']" ] }, { "cell_type": "code", "execution_count": null, "id": "2acf3b51", "metadata": { "id": "2acf3b51" }, "outputs": [], "source": [ "plt.figure(figsize=(15, 4))\n", 
"plt.subplot(1, 2, 1)\n", "plt.plot(range(len(train_loss)), train_loss, label='train_loss')\n", "plt.plot(range(len(valid_loss)), valid_loss, label='valid_loss')\n", "plt.xlabel('Epochs')\n", "plt.ylabel('Loss')\n", "plt.legend()\n", "\n", "plt.subplot(1, 2, 2)\n", "plt.plot(range(len(train_acc)), train_acc, label='train_acc')\n", "plt.plot(range(len(valid_acc)), valid_acc, label='valid_acc')\n", "plt.xlabel('Epochs')\n", "plt.ylabel('Accuracy')\n", "plt.legend()" ] }, { "cell_type": "code", "execution_count": null, "id": "655d7982", "metadata": { "id": "655d7982" }, "outputs": [], "source": [ "# Print the results of testing data\n", "print('============================')\n", "print('Testing data')\n", "print('============================')\n", "print(f'loss: {model.evaluate(X_test_scale, y_test_onehot, verbose=0)[0]}')\n", "print(f'acc: {model.evaluate(X_test_scale, y_test_onehot, verbose=0)[1]}')" ] }, { "cell_type": "markdown", "id": "1b321f9a", "metadata": { "id": "1b321f9a" }, "source": [ "\n", "* ## Dropout\n", "![](https://hackmd.io/_uploads/HJePycUba.png)\n" ] }, { "cell_type": "code", "execution_count": null, "id": "95f89986", "metadata": { "id": "95f89986" }, "outputs": [], "source": [ "def build_model_dropout(input_shape, output_shape, droprate):\n", " keras.backend.clear_session()\n", " tf.random.set_seed(17) # 固定隨機產生的數字序列\n", "\n", " model = keras.models.Sequential()\n", " model.add(layers.Dense(64,\n", " input_shape=input_shape,\n", " activation='tanh'))\n", " # 加入 Dropout\n", " model.add(layers.Dropout(droprate, seed=17))\n", "\n", " model.add(layers.Dense(64,\n", " activation='tanh'))\n", " # 加入 Dropout\n", " model.add(layers.Dropout(droprate, seed=17))\n", "\n", " model.add(layers.Dense(output_shape,\n", " activation='softmax'))\n", "\n", " return model" ] }, { "cell_type": "code", "execution_count": null, "id": "b629990f", "metadata": { "id": "b629990f" }, "outputs": [], "source": [ "# 以下放置要比較的 dropout rate\n", "dropout_rates = [0, 0.1, 0.2, 0.4]\n", "\n", "batch_size = 512\n", "epochs = 20\n", "\n", "# 建立兩個 list 記錄選用不同 dropout rate 的訓練結果\n", "train_loss_list = []\n", "train_acc_list = []\n", "\n", "# 建立兩個 list 記錄選用不同 dropout rate 的驗證結果\n", "valid_loss_list = []\n", "valid_acc_list = []\n", "\n", "# 建立一個 list 紀錄選用不同 dropout rate 數值的測試結果\n", "test_eval = []\n", "\n", "# 迭代不同的 dropout rate 去訓練模型\n", "for drop_r in dropout_rates:\n", " print('Training a model with dropout rate: {}'\n", " .format(drop_r))\n", "\n", " # 確保每次都是訓練新的模型,而不是接續上一輪的模型\n", " model = build_model_dropout(X_train[0].shape,\n", " y_onehot.shape[1],\n", " drop_r)\n", " model.compile(loss='categorical_crossentropy',\n", " optimizer=keras.optimizers.Nadam(0.001),\n", " metrics=['acc'])\n", "\n", " # 確保每次都設定一樣的參數\n", " history = model.fit(X_train, y_train,\n", " batch_size=batch_size,\n", " epochs=epochs,\n", " verbose=0,\n", " validation_data=(X_valid, y_valid))\n", "\n", " # 將訓練結果記錄下來\n", " train_loss_list.append(history.history['loss'])\n", " train_acc_list.append(history.history['acc'])\n", " valid_loss_list.append(history.history['val_loss'])\n", " valid_acc_list.append(history.history['val_acc'])\n", " test_eval.append(model.evaluate(X_test_scale,\n", " y_test_onehot,\n", " verbose=0))\n", "print('----------------- training done! 
-----------------')" ] }, { "cell_type": "code", "execution_count": null, "id": "7c313846", "metadata": { "id": "7c313846" }, "outputs": [], "source": [ "# 視覺化訓練過程\n", "plt.figure(figsize=(15, 7))\n", "\n", "train_line = ()\n", "valid_line = ()\n", "\n", "# 繪製 Training loss\n", "plt.subplot(121)\n", "for k in range(len(dropout_rates)):\n", " loss = train_loss_list[k]\n", " val_loss = valid_loss_list[k]\n", " train_l = plt.plot(\n", " range(len(loss)), loss,\n", " label=f'Training dropout rate:{dropout_rates[k]}')\n", " valid_l = plt.plot(\n", " range(len(val_loss)), val_loss, '--',\n", " label=f'Validation dropout rate:{dropout_rates[k]}')\n", "\n", " train_line += tuple(train_l)\n", " valid_line += tuple(valid_l)\n", "plt.title('Loss')\n", "\n", "# 繪製 Training accuracy\n", "plt.subplot(122)\n", "train_acc_line = []\n", "valid_acc_line = []\n", "for k in range(len(dropout_rates)):\n", " acc = train_acc_list[k]\n", " val_acc = valid_acc_list[k]\n", " plt.plot(range(len(acc)), acc,\n", " label=f'Training dropout rate:{dropout_rates[k]}')\n", " plt.plot(range(len(val_acc)), val_acc, '--',\n", " label=f'Validation dropout rate:{dropout_rates[k]}')\n", "plt.title('Accuracy')\n", "\n", "first_legend = plt.legend(handles=train_line,\n", " bbox_to_anchor=(1.05, 1))\n", "\n", "plt.gca().add_artist(first_legend)\n", "plt.legend(handles=valid_line,\n", " bbox_to_anchor=(1.05, 0.8))\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "21014c14", "metadata": { "id": "21014c14" }, "outputs": [], "source": [ "# Print the results of testing data\n", "for k in range(len(dropout_rates)):\n", " print('============================')\n", " print(f'dropout_rate = {dropout_rates[k]}')\n", " print('============================')\n", " print(f'loss: {test_eval[k][0]}')\n", " print(f'acc: {test_eval[k][1]}\\n')" ] }, { "cell_type": "markdown", "id": "b6710c34", "metadata": { "id": "b6710c34" }, "source": [ "\n", "* ## Parameter Initialization\n", "tf.keras.initializers: https://www.tensorflow.org/api_docs/python/tf/keras/initializers" ] }, { "cell_type": "code", "execution_count": null, "id": "f8f68a5c", "metadata": { "id": "f8f68a5c" }, "outputs": [], "source": [ "def build_model_init(input_shape, output_shape, init):\n", " keras.backend.clear_session()\n", " tf.random.set_seed(17)\n", "\n", " model = keras.models.Sequential()\n", " model.add(layers.Dense(64,\n", " input_shape=input_shape,\n", " activation='tanh',\n", " kernel_initializer=init)) # 由此更改初始化方式\n", " model.add(layers.Dense(64,\n", " activation='tanh',\n", " kernel_initializer=init)) # 由此更改初始化方式\n", " model.add(layers.Dense(output_shape,\n", " activation='softmax',\n", " kernel_initializer=init)) # 由此更改初始化方式\n", " return model" ] }, { "cell_type": "code", "execution_count": null, "id": "636543a7", "metadata": { "id": "636543a7" }, "outputs": [], "source": [ "# 以下放置要比較的 initializer\n", "init_l = ['glorot_normal',\n", " 'he_normal',\n", " 'lecun_normal',\n", " 'random_normal',\n", " 'truncated_normal']\n", "\n", "batch_size = 512\n", "epochs = 20\n", "\n", "# 建立兩個 list 記錄選用不同 initializer 的訓練結果\n", "train_loss_list = []\n", "train_acc_list = []\n", "\n", "# 建立兩個 list 記錄選用不同 initializer 的驗證結果\n", "valid_loss_list = []\n", "valid_acc_list = []\n", "\n", "# 建立一個 list 紀錄選用不同 initializer 數值的測試結果\n", "test_eval = []\n", "\n", "# 迭代不同的 initializer 去訓練模型\n", "for init in init_l:\n", " print(f'Training model, init = {init}')\n", "\n", " # 確保每次都是訓練新的模型,而不是接續上一輪的模型\n", " model = build_model_init(X_train[0].shape,\n", " y_onehot.shape[1],\n", 
" init)\n", " model.compile(loss='categorical_crossentropy',\n", " optimizer=keras.optimizers.Nadam(0.001),\n", " metrics=['acc'])\n", "\n", " # 確保每次都設定一樣的參數\n", " history = model.fit(X_train, y_train,\n", " batch_size=batch_size,\n", " epochs=epochs,\n", " verbose=0,\n", " validation_data=(X_valid, y_valid))\n", "\n", " # 將訓練結果記錄下來\n", " train_loss_list.append(history.history['loss'])\n", " train_acc_list.append(history.history['acc'])\n", " valid_loss_list.append(history.history['val_loss'])\n", " valid_acc_list.append(history.history['val_acc'])\n", " test_eval.append(model.evaluate(X_test_scale,\n", " y_test_onehot,\n", " verbose=0))\n", "print('----------------- training done! -----------------')" ] }, { "cell_type": "code", "execution_count": null, "id": "1ac5fe3d", "metadata": { "id": "1ac5fe3d" }, "outputs": [], "source": [ "# 視覺化訓練過程\n", "plt.figure(figsize=(15, 7))\n", "\n", "train_line = ()\n", "valid_line = ()\n", "\n", "# 繪製 Training loss\n", "plt.subplot(121)\n", "for k in range(len(init_l)):\n", " loss = train_loss_list[k]\n", " val_loss = valid_loss_list[k]\n", " train_l = plt.plot(\n", " range(len(loss)), loss,\n", " label=f'Training init: {init_l[k]}')\n", " valid_l = plt.plot(\n", " range(len(val_loss)), val_loss, '--',\n", " label=f'Validation init: {init_l[k]}')\n", "\n", " train_line += tuple(train_l)\n", " valid_line += tuple(valid_l)\n", "plt.title('Loss')\n", "\n", "# 繪製 Training accuracy\n", "plt.subplot(122)\n", "train_acc_line = []\n", "valid_acc_line = []\n", "for k in range(len(init_l)):\n", " acc = train_acc_list[k]\n", " val_acc = valid_acc_list[k]\n", " plt.plot(range(len(acc)), acc,\n", " label=f'Training init: {init_l[k]}')\n", " plt.plot(range(len(val_acc)), val_acc, '--',\n", " label=f'Validation init: {init_l[k]}')\n", "plt.title('Accuracy')\n", "\n", "first_legend = plt.legend(handles=train_line,\n", " bbox_to_anchor=(1.05, 1))\n", "\n", "plt.gca().add_artist(first_legend)\n", "plt.legend(handles=valid_line,\n", " bbox_to_anchor=(1.05, 0.75))\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "2c7840e9", "metadata": { "id": "2c7840e9" }, "outputs": [], "source": [ "# Print the results of testing data\n", "for k in range(len(init_l)):\n", " print('============================')\n", " print(f'initializer = {init_l[k]}')\n", " print('============================')\n", " print(f'loss: {test_eval[k][0]}')\n", " print(f'acc: {test_eval[k][1]}\\n')" ] }, { "cell_type": "markdown", "id": "4aed4aa8", "metadata": { "id": "4aed4aa8" }, "source": [ "\n", "* ## Batch Normalization" ] }, { "cell_type": "code", "execution_count": null, "id": "b88dfdec", "metadata": { "id": "b88dfdec" }, "outputs": [], "source": [ "def build_model_bn(input_shape, output_shape, bn=True):\n", " keras.backend.clear_session()\n", " tf.random.set_seed(17) # 固定隨機產生的數字序列\n", "\n", " model = keras.models.Sequential()\n", " model.add(layers.Dense(64,\n", " input_shape=input_shape))\n", " if bn:\n", " model.add(layers.BatchNormalization())\n", " model.add(layers.Activation('tanh'))\n", "\n", " model.add(layers.Dense(64))\n", "\n", " if bn:\n", " model.add(layers.BatchNormalization())\n", " model.add(layers.Activation('tanh'))\n", "\n", " model.add(layers.Dense(output_shape,\n", " activation='softmax'))\n", " return model" ] }, { "cell_type": "code", "execution_count": null, "id": "adf4cb5e", "metadata": { "id": "adf4cb5e" }, "outputs": [], "source": [ "BN = [False, True]\n", "\n", "batch_size = 512\n", "epochs = 20\n", "\n", "# 建立兩個 list 記錄是否加入 BatchNormalization 
{ "cell_type": "code", "execution_count": null, "id": "b88dfdec", "metadata": { "id": "b88dfdec" }, "outputs": [], "source": [ "def build_model_bn(input_shape, output_shape, bn=True):\n", "    keras.backend.clear_session()\n", "    tf.random.set_seed(17)  # fix the random seed for reproducibility\n", "\n", "    model = keras.models.Sequential()\n", "    model.add(layers.Dense(64,\n", "                           input_shape=input_shape))\n", "    if bn:\n", "        model.add(layers.BatchNormalization())\n", "    model.add(layers.Activation('tanh'))\n", "\n", "    model.add(layers.Dense(64))\n", "\n", "    if bn:\n", "        model.add(layers.BatchNormalization())\n", "    model.add(layers.Activation('tanh'))\n", "\n", "    model.add(layers.Dense(output_shape,\n", "                           activation='softmax'))\n", "    return model" ] },
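{ "cell_type": "markdown", "id": "ed1a000c", "metadata": { "id": "ed1a000c" }, "source": [ "A quick illustration (a standalone demo, not part of the comparison below): in training mode a fresh `BatchNormalization` layer maps each feature of a batch to roughly zero mean and unit variance, since $\\gamma$ and $\\beta$ start at 1 and 0." ] },
{ "cell_type": "code", "execution_count": null, "id": "ed1a000d", "metadata": { "id": "ed1a000d" }, "outputs": [], "source": [ "# Illustrative: feed a badly scaled batch through BatchNormalization\n", "bn_demo = layers.BatchNormalization()\n", "x_demo = tf.random.normal((256, 4), mean=5.0, stddev=3.0, seed=17)\n", "y_demo = bn_demo(x_demo, training=True)\n", "print(tf.reduce_mean(y_demo, axis=0).numpy().round(3))     # ~0 per feature\n", "print(tf.math.reduce_std(y_demo, axis=0).numpy().round(3))  # ~1 per feature" ] },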
"name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.12" }, "colab": { "provenance": [] }, "accelerator": "GPU", "gpuClass": "standard" }, "nbformat": 4, "nbformat_minor": 5 }