{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "pa49bUnKyRgF" }, "source": [ "# Time series forecasting" ] }, { "cell_type": "markdown", "metadata": { "id": "TokBlnUhWFw9" }, "source": [ "## Dataset\n", "\n", "此份筆記本會使用由 Max Planck Institute for Biogeochemistry 所記錄的天氣資料集來實作時間序列預測模型。\n", "\n", "此份資料集包含 14 種不同的特徵,例如氣溫、大氣壓力和濕度。每 10 分鐘收集一次數據,全部資料含蓋由 2009 年至 2016 年之間收集的數據。\n", "\n", "下面我們將實作幾組模型進行**每小時**的天氣預測。\n", "\n", "Notebook 分成以下三個部分:\n", "- EDA & Feature Engineering\n", "- Data Windowing\n", "- Models\n", " - Single-step Models: 預測**單個時間點**的數值\n", " - Multi-output Models: 預測**多個特徵**數值\n", " - Multi-step Models: 預測**多個時間點**的數值" ] }, { "cell_type": "markdown", "metadata": { "id": "XVhK72Pu1cJL" }, "source": [ "## Importing Packages" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7rZnJaGTWQw0" }, "outputs": [], "source": [ "import os\n", "\n", "import IPython\n", "import IPython.display\n", "import matplotlib as mpl\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import pandas as pd\n", "import seaborn as sns\n", "import tensorflow as tf\n", "\n", "mpl.rcParams['figure.figsize'] = (8, 6)\n", "mpl.rcParams['axes.grid'] = False" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!wget -q 'https://github.com/TA-aiacademy/course_3.0/releases/download/TSRNN/jena_climate_2009_2016.csv.zip'\n", "!unzip -q 'jena_climate_2009_2016.csv.zip'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "資料集特徵除了 **Date Time** 之外還包含另外 14 種特徵\n", "\n", "其說明可參照 Max Planck Institute for Biogeochemistry 所提供之說明\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.read_csv('jena_climate_2009_2016.csv')\n", "\n", "df.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.head(10)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "TX6uGeeeWIkG" }, 
"outputs": [], "source": [ "# 由 Date Time 欄位可得知貢料為每 10 分鐘收集一筆\n", "# 由於我們要進行預測的時間間隔是以「每小時」為單位\n", "# 所以將資料以小時為單位做切片俿理\n", "# [start:stop:step] 從第 5 篳資料開始,每經過 6 個時間記錄點取一次資料\n", "\n", "df = df[5::6]\n", "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "type(df['Date Time'][5])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 將 Date Time 欄位獨立取出,並將其中資料由 str 轉換為 datetime\n", "\n", "date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## EDA & Feature Engineering" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Vg5XIc5tfNlG" }, "outputs": [], "source": [ "# 針對 'T (degC)', 'p (mbar)', 'rho (g/m**3)' 等特徵畫出它們的時間變化圖\n", "\n", "# 2009~2016\n", "plot_cols = ['T (degC)', 'p (mbar)', 'rho (g/m**3)']\n", "plot_features = df[plot_cols]\n", "plot_features.index = date_time\n", "_ = plot_features.plot(subplots=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 前 480 個小時(20 天)\n", "plot_features = df[plot_cols][:480]\n", "plot_features.index = date_time[:480]\n", "_ = plot_features.plot(subplots=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "h510pgKVrrai" }, "outputs": [], "source": [ "# 查看每個特徵的敘述統計量\n", "\n", "df.describe().transpose()" ] }, { "cell_type": "markdown", "metadata": { "id": "i47LiW5DCVsP" }, "source": [ "從以上的統計量描述可以發現 (`wv (m/s)`) 和 (`max. wv (m/s)`) 的 `min` 值為 `-9999` ,這樣的數值很有可能是錯的\n", "\n", "另外透過 `wd(deg)` 風向這個欄位可判斷風速應該要大於或等於零 (`>=0`),因此我們將 (`wv (m/s)`) 和 (`max. wv (m/s)`) 中的 `-9999` 代換為零:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qFOq0_80vF4d" }, "outputs": [], "source": [ "# 將 'wv (m/s)' 中值等於 -9999.0 的值代換為 0.0\n", "wv = df['wv (m/s)']\n", "bad_wv = wv == -9999.0\n", "wv[bad_wv] = 0.0\n", "\n", "# 將 'max. 
wv (m/s)' 中值等於 -9999.0 的值代換為 0.0\n", "max_wv = df['max. wv (m/s)']\n", "bad_max_wv = max_wv == -9999.0\n", "max_wv[bad_max_wv] = 0.0\n", "\n", "# 最後可再檢查代換後兩個特徵的最三值是否 >= 0.0\n", "df['wv (m/s)'].min(), df['max. wv (m/s)'].min()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Wind" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YO7JGTcWQG2z" }, "outputs": [], "source": [ "# 畫出 'wd (deg)' 與 'wv (m/s)' 的 2D 直方圖\n", "\n", "plt.hist2d(df['wd (deg)'], df['wv (m/s)'], bins=(50, 50), vmax=400)\n", "plt.colorbar()\n", "plt.xlabel('Wind Direction [deg]')\n", "plt.ylabel('Wind Velocity [m/s]')" ] }, { "cell_type": "markdown", "metadata": { "id": "FYyEaqiD6j4s" }, "source": [ "風向 `wd (deg)` 採 0°~360° 為記錄資料。這樣的記錄方式對模型來說不是一件好事,因為 360° 與 0° 彼此應該是靠近且平滑連接的。\n", "\n", "我們可以利用「風向」和「風速」轉換出風向量的「x分量」與「y分量」\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "6GmSTHXw6lI1" }, "outputs": [], "source": [ "wv = df.pop('wv (m/s)')\n", "max_wv = df.pop('max. 
wv (m/s)')\n", "\n", "# 將角度由度度量轉換為弧度度量\n", "wd_rad = df.pop('wd (deg)')*np.pi / 180\n", "\n", "# 計算風的「x分量」與「y分量」 \n", "df['Wx'] = wv*np.cos(wd_rad)\n", "df['Wy'] = wv*np.sin(wd_rad)\n", "\n", "# 計算最大風的「x分量」與「y分量」\n", "df['max Wx'] = max_wv*np.cos(wd_rad)\n", "df['max Wy'] = max_wv*np.sin(wd_rad)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bMgCG5o2SYKD" }, "outputs": [], "source": [ "# 畫出 'Wx', 'Wy' 的 2D 直方圖\n", "\n", "plt.hist2d(df['Wx'], df['Wy'], bins=(50, 50), vmax=400)\n", "plt.colorbar()\n", "plt.xlabel('Wind X [m/s]')\n", "plt.ylabel('Wind Y [m/s]')\n", "ax = plt.gca()\n", "ax.axis('tight')" ] }, { "cell_type": "markdown", "metadata": { "id": "_8im1ttOWlRB" }, "source": [ "### Time" ] }, { "cell_type": "markdown", "metadata": { "id": "EC_pnM1D5Sgc" }, "source": [ "天氣類型的數據可能會與一天當中的時間或是一年當中的時間週期有相關性。\n", "\n", "我們可以通過使用 sine / cosine transform 將「一天的時間」資訊和「一年的時間」資訊編碼起來形成時間資料。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MBfX6CDwax73" }, "outputs": [], "source": [ "# 將 data_time 中的「日期時間」資料轉換成「秒(float)」(ex: 2009-01-01 01:00:00 ---> 1230771600.0)\n", "timestamp_s = date_time.map(pd.Timestamp.timestamp)\n", "\n", "# 分別計算一天與一年之秒數\n", "day = 24*60*60\n", "year = (365.2425)*day\n", "\n", "# 分別進行一天與一年為週期的 sine / cosine 轉換\n", "df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))\n", "df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))\n", "df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))\n", "df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "mXBbTJZfuuTC" }, "outputs": [], "source": [ "# 畫出一天 (24小時之內) 的時間變化\n", "plt.plot(np.array(df['Day sin'])[:25])\n", "plt.plot(np.array(df['Day cos'])[:25])\n", "plt.xlabel('Time [h]')\n", "plt.title('Time of day signal')" ] }, { "cell_type": "markdown", "metadata": { "id": "2rbL8bSGDHy3" }, "source": [ "### Split the data" ] }, { "cell_type": "markdown", "metadata": { "id": 
"qoFJZmXBaxCc" }, "source": [ "將資料以 `(70%, 20%, 10%)` 的比例切分成「訓練」、「驗證」與「測試」集。\n", "\n", "要特別注意,在切分前勿將資料打亂,以避免後續在取 window 時喪失時間相關性。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ia-MPAHxbInX" }, "outputs": [], "source": [ "n = len(df)\n", "train_df = df[0:int(n*0.7)]\n", "val_df = df[int(n*0.7):int(n*0.9)]\n", "test_df = df[int(n*0.9):]\n", "\n", "num_features = df.shape[1]" ] }, { "cell_type": "markdown", "metadata": { "id": "-eFckdUUHWmT" }, "source": [ "### Data Normalization" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Eji6njXvHusN" }, "outputs": [], "source": [ "# 計算訓練集各特徵的平均數與標準差\n", "train_mean = train_df.mean()\n", "train_std = train_df.std()\n", "\n", "# 分別對訓練、驗證、測試資料做 normalization\n", "train_df = (train_df - train_mean) / train_std\n", "val_df = (val_df - train_mean) / train_std\n", "test_df = (test_df - train_mean) / train_std" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "T0UYEnkwm8Fe" }, "outputs": [], "source": [ "# 進行完 normalization 後,可觀察各個特徵的數值分佈\n", "\n", "df_std = (df - train_mean) / train_std\n", "df_std = df_std.melt(var_name='Column', value_name='Normalized') # 將 df_std DataFrame 從 wide format 轉成 long format \n", "\n", "plt.figure(figsize=(16, 8))\n", "ax = sns.violinplot(x='Column', y='Normalized', data=df_std)\n", "_ = ax.set_xticklabels(df.keys(), rotation=90)" ] }, { "cell_type": "markdown", "metadata": { "id": "ZBBmdxZ2HgfJ" }, "source": [ "## Data windowing\n", "\n", "對 DataFrame 中的資料依照時序進行窗口採樣 (Windowing)\n", "\n", "對於 Windowing 而言,以下三個特性是應當考慮到的:\n", "\n", "- input windows 和 lable windows 的時間長度(number of time steps)\n", "- input windows 和 lable windows 的時間差 (time offset)\n", "- 哪些 feature 被用作輸入(input)、標籤(label)或兩者都是 " ] }, { "cell_type": "markdown", "metadata": { "id": "YAhGUVx1jtOy" }, "source": [ "我們可依據任務與使用的模型產生不同的 windows,以下是幾個例子:\n", "\n", "1. 如果我們要利用前 24 小時的歷史資料來預測未來 24 小時後的天氣狀態,我們取 window 的方式應該如下:\n", "\n", "\n", "\n", "2. 
如果我們要利用前 6 小時的歷史資料來預測未來 1 小時的天氣狀態,我們取 window 的方式應該如下:\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": { "id": "sa2BbfNZt8wy" }, "source": [ "接下來我們定義了一個 `WindowGenerator` 類。這個類能夠:\n", "\n", "1. 處理上方圖表中所示的索引(`indexes`)和時間偏移量(`offsets`)。\n", "2. 將 `windows` 分成 `(inputs, labels)` 對。\n", "3. 繪製 `windows` 的內容。\n", "4. 使用 `tf.data.Dataset` 從訓練、評估和測試數據中高效地生成這些窗口的批次。" ] }, { "cell_type": "markdown", "metadata": { "id": "rfx3jGjyziUF" }, "source": [ "### 1. Indexes and offsets\n", "\n", "`WindowGenerator` 類中的 `__init__` 方法包含了 `input` 和 `label` 的**索引**所需的所有邏輯。\n", "\n", "`WindowGenerator` 還接受訓練、驗證和測試集的 `DataFrame` 作為輸入。這些稍後將被轉換為 `windows` 的 `tf.data.Dataset`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Kem30j8QHxyW" }, "outputs": [], "source": [ "class WindowGenerator():\n", " def __init__(self, input_width, label_width, shift,\n", " train_df=train_df, val_df=val_df, test_df=test_df,\n", " label_columns=None):\n", " # 存入原始數據\n", " self.train_df = train_df\n", " self.val_df = val_df\n", " self.test_df = test_df\n", "\n", " # 將要預測的 label column 做索引編號\n", " # 如果 label_columns == None 則將 train_df 中的所有 columns 作索引編號\n", " self.label_columns = label_columns\n", " if label_columns is not None:\n", " self.label_columns_indices = {name: i for i, name in\n", " enumerate(label_columns)}\n", " self.column_indices = {name: i for i, name in\n", " enumerate(train_df.columns)}\n", "\n", " # 進行 windows 的參數設定\n", " self.input_width = input_width # 設定 input window 長度\n", " self.label_width = label_width # 說定 label window 長度\n", " self.shift = shift # 設定 time offset 長度\n", "\n", " self.total_window_size = input_width + shift\n", "\n", " # slice(start, end, step): 返回一個 slice 物件,用於指定如何對一個 sequence 做切片\n", " self.input_slice = slice(0, input_width) # 切分 input window 索引位置的切片對象\n", " self.input_indices = np.arange(self.total_window_size)[self.input_slice]\n", "\n", " self.label_start = self.total_window_size - self.label_width # label window 的起始位置\n", " 
self.labels_slice = slice(self.label_start, None) # 切分 label window 索引位置的切片對象\n", " self.label_indices = np.arange(self.total_window_size)[self.labels_slice]\n", "\n", " # 定義 WindowGenerator() 物件的表現形式,如此可以方便知道一個 WindowGenerator() 物件的相關資訊\n", " def __repr__(self):\n", " return '\\n'.join([\n", " f'Total window size: {self.total_window_size}',\n", " f'Input indices: {self.input_indices}',\n", " f'Label indices: {self.label_indices}',\n", " f'Label column name(s): {self.label_columns}'])" ] }, { "cell_type": "markdown", "metadata": { "id": "yVJgblsYzL1g" }, "source": [ "下面我們創建兩 `WindowGenerator` 物件,分別對應到上面兩張 windows 的圖" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IsM5kRkz0UwK" }, "outputs": [], "source": [ "w1 = WindowGenerator(input_width=24, label_width=1, shift=24,\n", " label_columns=['T (degC)'])\n", "w1" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "viwKsYeAKFUn" }, "outputs": [], "source": [ "w2 = WindowGenerator(input_width=6, label_width=1, shift=1,\n", " label_columns=['T (degC)'])\n", "w2" ] }, { "cell_type": "markdown", "metadata": { "id": "kJaUyTWQJd-L" }, "source": [ "### 2. 
Split\n", "\n", "將 `windows` 分成 `(inputs, labels)` 對。\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "W4KbxfzqkXPW" }, "outputs": [], "source": [ "def split_window(self, windows):\n", " # 定義將 windows 分解成 input_windows 與 label_windows 的方法\n", " # windows 的 shape 分別表示 (batch, time, features)\n", " \n", " input_windows = windows[:, self.input_slice, :]\n", " label_windows = windows[:, self.labels_slice, :]\n", " \n", " # 挑選出有指定做為 label 的特徵\n", " if self.label_columns is not None:\n", " label_windows = tf.stack(\n", " [label_windows[:, :, self.column_indices[name]] for name in self.label_columns],\n", " axis=-1)\n", "\n", " # 做完切片後檢查一次 input_windows 和 label_windows 的 time shape 是否與設定的 input_width 與 label_width 一致\n", " input_windows.set_shape([None, self.input_width, None])\n", " label_windows.set_shape([None, self.label_width, None])\n", "\n", " return input_windows, label_windows\n", "\n", "# 將 split_window 賦值給 WindowGenerator.split_window\n", "WindowGenerator.split_window = split_window" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YeCWbq6KLmL7" }, "outputs": [], "source": [ "# 將 3 個切片疊在一起,其時間長度為 total_window_size\n", "example_window = tf.stack([np.array(train_df[:w2.total_window_size]),\n", " np.array(train_df[100:100+w2.total_window_size]),\n", " np.array(train_df[200:200+w2.total_window_size])])\n", "\n", "example_inputs, example_labels = w2.split_window(example_window)\n", "\n", "print('All shapes are: (batch, time, features)')\n", "print(f'Window shape: {example_window.shape}')\n", "print(f'Inputs shape: {example_inputs.shape}')\n", "print(f'Labels shape: {example_labels.shape}')" ] }, { "cell_type": "markdown", "metadata": { "id": "tFZukGXrJoGo" }, "source": [ "### 3. 
Plot\n", "\n", "針對 `input_windows`, `label_windows`, `模型預測` 視覺化內容" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "fmgd1qkYUWT7" }, "outputs": [], "source": [ "# 先將上面 example_inputs 和 example_labels 賦值給 w2.example\n", "w2.example = example_inputs, example_labels" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "jIrYccI-Hm3B" }, "outputs": [], "source": [ "def plot(self, model=None, plot_col='T (degC)', max_subplots=3):\n", " inputs, labels = self.example # 給定 batch size 個時間區段的 inputs 和 labels\n", " plt.figure(figsize=(12, 8))\n", " plot_col_index = self.column_indices[plot_col]\n", " max_n = min(max_subplots, len(inputs)) # 設定 subplots 的數量\n", " for n in range(max_n):\n", " plt.subplot(max_n, 1, n+1)\n", " plt.ylabel(f'{plot_col} [normed]')\n", " \n", " # 畫出 Plot_col 的 input 資料\n", " # zorder 控制繪圖的順序,數值愈大愈晚畫上表示會在圖的愈上層\n", " plt.plot(self.input_indices, inputs[n, :, plot_col_index],\n", " label='Inputs', marker='.', zorder=-10)\n", "\n", " if self.label_columns:\n", " label_col_index = self.label_columns_indices.get(plot_col, None)\n", " else:\n", " label_col_index = plot_col_index\n", "\n", " if label_col_index is None:\n", " continue\n", "\n", " # 畫出 plot_col 的 label 資料\n", " plt.scatter(self.label_indices, labels[n, :, label_col_index], \n", " edgecolors='k', label='Labels', c='#2ca02c', s=64)\n", " if model is not None:\n", " predictions = model(inputs)\n", " \n", " # 畫出模型的預測結果\n", " plt.scatter(self.label_indices, predictions[n, :, label_col_index],marker='X', \n", " edgecolors='k', label='Predictions',c='#ff7f0e', s=64)\n", "\n", " if n == 0:\n", " plt.legend()\n", "\n", " plt.xlabel('Time [h]')\n", "\n", "WindowGenerator.plot = plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XjTqUnglOOni" }, "outputs": [], "source": [ "w2.plot()" ] }, { "cell_type": "markdown", "metadata": { "id": "UqiqcPOldPG6" }, "source": [ "也可以選其它的 column 來畫資料點,不過 `w2` 在創建時僅設定了 label 為 `T (degC)` 所以圖中只會呈現 
`input_windows` 的資料" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "EBRe4wnlfCH8" }, "outputs": [], "source": [ "w2.plot(plot_col='p (mbar)')" ] }, { "cell_type": "markdown", "metadata": { "id": "xCvD-UaUzYMw" }, "source": [ "### 4. Create `tf.data.Dataset`s" ] }, { "cell_type": "markdown", "metadata": { "id": "kLO3SFR9Osdf" }, "source": [ "下面的 `make_dataset` 方法會接收時間序列 DataFrame 並將其轉換為 `tf.data.Dataset` of `(input_window, label_window)`\n", "\n", "使用 `tf.keras.utils.timeseries_dataset_from_array` 會返回 `tf.data.Dataset` 物件,它的每一個輸出是一個一個的 `window`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "35qoSQeRVfJg" }, "outputs": [], "source": [ "def make_dataset(self, data):\n", " data = np.array(data, dtype=np.float32)\n", " ds = tf.keras.utils.timeseries_dataset_from_array(\n", " data=data,\n", " targets=None,\n", " sequence_length=self.total_window_size,\n", " sequence_stride=1,\n", " shuffle=True,\n", " batch_size=32,)\n", "\n", " ds = ds.map(self.split_window)\n", "\n", " return ds\n", "\n", "WindowGenerator.make_dataset = make_dataset" ] }, { "cell_type": "markdown", "metadata": { "id": "LvsxQwJaCift" }, "source": [ "`WindowGenerator` 對象保存了訓練、驗證和測試數據。 \n", "\n", "使用之前定義的 `make_dataset` 方法添加屬性以將它們作為 `tf.data.Datasets` 訪問。此外,添加一個標準示例批次以便於訪問和繪圖。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2jZ2KkqGCfzu" }, "outputs": [], "source": [ "@property\n", "def train(self):\n", " return self.make_dataset(self.train_df)\n", "\n", "@property\n", "def val(self):\n", " return self.make_dataset(self.val_df)\n", "\n", "@property\n", "def test(self):\n", " return self.make_dataset(self.test_df)\n", "\n", "@property\n", "def example(self):\n", " # 獲取並緩存一批用於繪圖的 “inputs 和 labels 示例\n", " \n", " # getattr() 函数用於返回一个對象的属性值,此處用以返回 `_example` 屬性\n", " result = getattr(self, '_example', None)\n", " if result is None:\n", " \n", " # 未找到示例批次,因此從 `.val` 數據集中獲取一個\n", " result = next(iter(self.val))\n", " \n", " # 
並緩存以備下次使用\n", " self._example = result\n", " return result\n", "\n", "WindowGenerator.train = train\n", "WindowGenerator.val = val\n", "WindowGenerator.test = test\n", "WindowGenerator.example = example" ] }, { "cell_type": "markdown", "metadata": { "id": "fF_Vj6Iw3Y2w" }, "source": [ "現在,`WindowGenerator` 對象可以訪問 `tf.data.Dataset` 對象,因此我們可以輕鬆地迭代數據\n", "\n", "`Dataset.element_spec` 屬性告訴您數據集元素的結構、數據類型和形狀" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "daJ0-U383YVs" }, "outputs": [], "source": [ "# 每個元素都是一個(input,label)序對\n", "w2.train.element_spec" ] }, { "cell_type": "markdown", "metadata": { "id": "XKTx3_Z7ua-n" }, "source": [ "我們也可以從 `Dataset` 中迭代出具體的批次" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "6gtKXEgf4Iml" }, "outputs": [], "source": [ "for example_inputs, example_labels in w2.train.take(1):\n", " print(f'Inputs shape (batch, time, features): {example_inputs.shape}')\n", " print(f'Labels shape (batch, time, features): {example_labels.shape}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Models" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Single-step Models\n", "\n", "僅根據當前條件預測未來某一特定時間 (某一小時)的值\n", "\n", "下面我們以預測**單個特徵** `T (degC)` 的值為例作為說明" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "若是要以 RNN 設計 single-step models , 則訓練方式可分為兩種 `以單個時間點為 Label` 和 `以多個時間點為 Label`:\n", "\n", "#### 以單個時間點為 Label\n", "\n", "其中一種是 RNN 僅返回最後一個時間步的輸出,讓模型有時間在進行單個預測之前預熱其內部狀態, 如下圖:\n", "\n", "\n", "\n", "要使用此種作法須將 Keras RNN layer 中的 `return_sequences` 參數設為 `False`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "window_1h = WindowGenerator(\n", " input_width=24, label_width=1, shift=1,\n", " label_columns=['T (degC)'])\n", "\n", "window_1h" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lstm_model_1h = tf.keras.models.Sequential([\n", " # Shape [batch, time, features] => 
[batch, lstm_units]\n", " tf.keras.layers.LSTM(32, return_sequences=False, input_shape=(24, 19)),\n", " # Shape => [batch, dense_units]\n", " tf.keras.layers.Dense(units=1),\n", " # Shape => [batch, 1, features](使 model prediction shape 與 label shape 一致)\n", " tf.keras.layers.Reshape([1, -1])\n", "])\n", "\n", "lstm_model_1h.summary()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "MAX_EPOCHS = 20\n", "\n", "def compile_and_fit(model, window, patience=2):\n", " early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',\n", " patience=patience,\n", " mode='min')\n", "\n", " model.compile(loss=tf.keras.losses.MeanSquaredError(),\n", " optimizer=tf.keras.optimizers.Adam(),\n", " metrics=[tf.keras.metrics.MeanAbsoluteError()])\n", "\n", " history = model.fit(window.train, epochs=MAX_EPOCHS,\n", " validation_data=window.val,\n", " callbacks=[early_stopping])\n", " return history" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "eZEROCQVYV6q" }, "outputs": [], "source": [ "print('Input shape:', window_1h.example[0].shape)\n", "print('Output shape:', lstm_model_1h(window_1h.example[0]).shape)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "uvdWRl1e9WJl" }, "outputs": [], "source": [ "history = compile_and_fit(lstm_model_1h, window_1h)\n", "\n", "IPython.display.clear_output()\n", "\n", "# tf.keras.Model.evaluate => (loss value, metrics values)\n", "print('Val MAE:', lstm_model_1h.evaluate(window_1h.val, verbose=0)[1])\n", "print('Test MAE:', lstm_model_1h.evaluate(window_1h.test, verbose=0)[1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "window_1h.plot(lstm_model_1h)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 以多個時間點為 Label\n", "\n", "另外一種使用 RNN 設計 single-step models 的方式為讓 RNN 為每個輸入返回一個輸出,如下圖:\n", "\n", "\n", "\n", "要使用此種作法須將 Keras RNN layer 中的 `return_sequences` 參數設為 `True`" ] }, { 
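"cell_type": "markdown", "metadata": {}, "source": [ "As a quick illustrative sketch (toy shapes, not this dataset), the difference between the two settings is visible in the output shapes: with `return_sequences=True` the LSTM returns one vector per input time step, while with `return_sequences=False` it returns only the final step." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Hypothetical toy input: batch of 4 sequences, 6 time steps, 3 features\n", "x = tf.random.normal([4, 6, 3])\n", "per_step = tf.keras.layers.LSTM(8, return_sequences=True)(x)    # shape (4, 6, 8)\n", "final_only = tf.keras.layers.LSTM(8, return_sequences=False)(x)  # shape (4, 8)\n", "print(per_step.shape, final_only.shape)" ] }, {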
"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "window_24h = WindowGenerator(\n", " input_width=24, label_width=24, shift=1,\n", " label_columns=['T (degC)'])\n", "\n", "window_24h" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lstm_model_24h = tf.keras.models.Sequential([\n", " # Shape [batch, time, features] => [batch, time, lstm_units]\n", " tf.keras.layers.LSTM(32, return_sequences=True, input_shape=(24, 19)),\n", " # Shape => [batch, time, features]\n", " tf.keras.layers.Dense(units=1)\n", "])\n", "\n", "lstm_model_24h.summary()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "eZEROCQVYV6q" }, "outputs": [], "source": [ "print('Input shape:', window_24h.example[0].shape)\n", "print('Output shape:', lstm_model_24h(window_24h.example[0]).shape)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "uvdWRl1e9WJl" }, "outputs": [], "source": [ "history = compile_and_fit(lstm_model_24h, window_24h)\n", "\n", "IPython.display.clear_output()\n", "\n", "# tf.keras.Model.evaluate => (loss value, metrics values)\n", "print('Val MAE:', lstm_model_24h.evaluate(window_24h.val, verbose=0)[1])\n", "print('Test MAE:', lstm_model_24h.evaluate(window_24h.test, verbose=0)[1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "window_24h.plot(lstm_model_24h)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Multi-output models\n", "\n", "目前為止,以上模型都預測了**單個時間步長**的**單個輸出特徵** `T (degC)`\n", "\n", "所有這些模型都可以轉換為預測**多個特徵**,只需更改輸出層中的單元數並調整 training windows 讓其包含標籤中的所有特徵" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "multi_output_window_24h = WindowGenerator(\n", " input_width=24, label_width=24, shift=1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "multi_output_lstm_model_24h = 
tf.keras.models.Sequential([\n", " # Shape [batch, time, features] => [batch, time, lstm_units]\n", " tf.keras.layers.LSTM(32, return_sequences=True),\n", " # Shape => [batch, time, features]\n", " tf.keras.layers.Dense(units=num_features)\n", "])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Input shape:', multi_output_window_24h.example[0].shape)\n", "print('Output shape:', multi_output_lstm_model_24h(multi_output_window_24h.example[0]).shape)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "history = compile_and_fit(multi_output_lstm_model_24h, multi_output_window_24h)\n", "\n", "IPython.display.clear_output()\n", "print('Val MAE:', multi_output_lstm_model_24h.evaluate( multi_output_window_24h.val, verbose=0)[1])\n", "print('Test MAE:', multi_output_lstm_model_24h.evaluate( multi_output_window_24h.test, verbose=0)[1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "multi_output_window_24h.plot(multi_output_lstm_model_24h, plot_col='p (mbar)')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Residual connections\n", "\n", "我們也可以設計殘差結構,讓模型預測下個時間點的變化量而非預測下個時間點的數值,這也是時間序列中常見的作法\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7YlfnDQC22TQ" }, "outputs": [], "source": [ "class ResidualWrapper(tf.keras.Model):\n", " def __init__(self, model):\n", " super().__init__()\n", " self.model = model\n", "\n", " def call(self, inputs, *args, **kwargs):\n", " delta = self.model(inputs, *args, **kwargs)\n", "\n", " # 每個時間黸的預測是前一個時間點的輸入(inputs)加上模型計算的差值(delta)。\n", " return inputs + delta" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "multi_output_residual_lstm_24h = ResidualWrapper(\n", " tf.keras.Sequential([\n", " tf.keras.layers.LSTM(32, return_sequences=True),\n", " tf.keras.layers.Dense(\n", " num_features,\n", " 
# delta 的值一開始應該要小\n", " # 因此設定 output layer 初始化權重為零\n", " kernel_initializer=tf.initializers.zeros())\n", "]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NNeH02pspc9B" }, "outputs": [], "source": [ "history = compile_and_fit(multi_output_residual_lstm_24h, multi_output_window_24h)\n", "\n", "IPython.display.clear_output()\n", "print('Val MAE:', multi_output_residual_lstm_24h.evaluate(multi_output_window_24h.val, verbose=0)[1])\n", "print('Test MAE:', multi_output_residual_lstm_24h.evaluate( multi_output_window_24h.test, verbose=0)[1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "multi_output_window_24h.plot(multi_output_residual_lstm_24h, plot_col='p (mbar)')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Multi-step models\n", "\n", "前面做的模型不論做的是 single-output models 還是 multi-output models 所作的都是 **single-step models**
\n", "也就是模型做的預測是**未來一個時間步長**的值
\n", "接下來將著眼於如何擴展這些模型以進行**多時間步長**預測" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "OUT_STEPS = 24\n", "multi_step_window_24h = WindowGenerator(input_width=24, \n", " label_width=OUT_STEPS, \n", " shift=OUT_STEPS)\n", "\n", "multi_step_window_24h" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "multi_step_window_24h.plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### RNN\n", "\n", "RNN 可以學習使用長期的歷史輸入。模型會累積 24 小時的 internal state,然後再對接下來的 24 小時進行單一預測
\n", "在這種單次格式中,LSTM 只需要在最後一個時間步產生一個輸出,所以在 `tf.keras.layers.LSTM` 中設置 `return_sequences=False`\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "multi_step_lstm_model_24h = tf.keras.Sequential([\n", " # Shape [batch, time, features] => [batch, lstm_units].\n", " tf.keras.layers.LSTM(32, return_sequences=False),\n", " # Shape => [batch, out_steps*features].\n", " tf.keras.layers.Dense(OUT_STEPS*num_features,\n", " kernel_initializer=tf.initializers.zeros()),\n", " # Shape => [batch, out_steps, features].\n", " tf.keras.layers.Reshape([OUT_STEPS, num_features])\n", "])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "history = compile_and_fit(multi_step_lstm_model_24h, multi_step_window_24h)\n", "\n", "IPython.display.clear_output()\n", "print('Val MAE:', multi_step_lstm_model_24h.evaluate(multi_step_window_24h.val, verbose=0)[1])\n", "print('Test MAE:', multi_step_lstm_model_24h.evaluate(multi_step_window_24h.test, verbose=0)[1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "multi_step_window_24h.plot(multi_step_lstm_model_24h)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Autoregressive model\n", "構建 Autoregressive RNN 模型,模型從 `OUT_STEPS` 開始進行後面每個時間點的預測,每一步模型會參考前一步的預測再生成下一步的預測
\n", "由於自迴歸的機制在 `tf.keras.layers.LSTM` 並沒有支援,這樣的機制要從較低級別的 `tf.keras.layers.LSTMCell` 進行單時間步運作的設計與管理\n", "\n", "`RNN Layer`可用於一次性的處理整批序列,與 `RNN Layer`不同的是`RNN Cell`僅會進行單個時間步長的訊息處理
\n", "可以把`RNN Layer`想成是將`RNN Cell`以 `for` loop 包裝起來,例如: `RNN(LSTMCell(10))`,如此將 Cell 包裝起來後就可以像 Layer 一樣一次性地處理整批序列\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class FeedBack(tf.keras.Model):\n", " def __init__(self, units, out_steps):\n", " super().__init__()\n", " self.out_steps = out_steps\n", " self.units = units\n", " self.lstm_cell = tf.keras.layers.LSTMCell(units)\n", " # 將 LSTMCell 包裝 RNN Layer 中\n", " self.lstm_rnn = tf.keras.layers.RNN(self.lstm_cell, return_state=True)\n", " self.dense = tf.keras.layers.Dense(num_features)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "feedback_model = FeedBack(units=32, out_steps=OUT_STEPS)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def warmup(self, inputs):\n", " # inputs.shape => (batch, time, features)\n", " # x.shape => (batch, lstm_units)\n", " x, *state = self.lstm_rnn(inputs)\n", "\n", " # predictions.shape => (batch, features)\n", " prediction = self.dense(x)\n", " return prediction, state\n", "\n", "FeedBack.warmup = warmup" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prediction, state = feedback_model.warmup(multi_step_window_24h.example[0])\n", "prediction.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def call(self, inputs, training=None):\n", " # 用以儲存每個時間點的 output\n", " predictions = []\n", " # 將 inputs 給 warmup 作用回傳出 prediction 與 state\n", " prediction, state = self.warmup(inputs)\n", "\n", " # 將第一次的預測結果放入 predictions list 中\n", " predictions.append(prediction)\n", "\n", " # 執行後續的 autoregressive prediction 過程\n", " for n in range(1, self.out_steps):\n", " # 使用上一個 prediction 作為 input\n", " x = prediction\n", " # 執行一次 lstm \n", " x, state = self.lstm_cell(x, states=state,\n", " training=training)\n", " # 將 lstm 的 output 過一次 dense 
layer 進行預測\n", " prediction = self.dense(x)\n", " # Add the prediction to the output.\n", " predictions.append(prediction)\n", "\n", " # predictions.shape => (time, batch, features)\n", " predictions = tf.stack(predictions)\n", " # predictions.shape => (batch, time, features)\n", " predictions = tf.transpose(predictions, [1, 0, 2])\n", " return predictions\n", "\n", "FeedBack.call = call" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Output shape (batch, time, features): ', feedback_model(multi_step_window_24h.example[0]).shape)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "history = compile_and_fit(feedback_model, multi_step_window_24h)\n", "\n", "IPython.display.clear_output()\n", "print('Val MAE:', feedback_model.evaluate(multi_step_window_24h.val, verbose=0)[1])\n", "print('Test MAE:', feedback_model.evaluate(multi_step_window_24h.test, verbose=0)[1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "multi_step_window_24h.plot(feedback_model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References\n", "\n", "TensorFlow 官網 Time Series Forcasting." ] } ], "metadata": { "accelerator": "GPU", "colab": { "name": "time_series.ipynb", "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.12" } }, "nbformat": 4, "nbformat_minor": 4 }