Table/Figure title changes.
b3nk4n committed Oct 17, 2016
1 parent b2b1084 commit 6016364
Showing 3 changed files with 4 additions and 4 deletions.
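For context: each change below edits only the optional short title of a LaTeX `\caption`. The short form in square brackets is what appears in the List of Figures or List of Tables, while the mandatory argument in braces is typeset under the float itself. A minimal sketch (figure path and label are placeholders):

```latex
\begin{figure}[htpb]
  \centering
  \includegraphics[width=1.0\linewidth]{figures/example.png}
  % [Short Title] goes to the List of Figures via \listoffigures;
  % the long text is set below the figure.
  \caption[Short Title]{Long, descriptive caption text shown under the figure.}
  \label{fig:example}
\end{figure}
```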
2 changes: 1 addition & 1 deletion chapters/01_introduction.tex
@@ -23,7 +23,7 @@ \section{Problem Statement}
\begin{figure}[htpb]
\centering
\includegraphics[width=1.0\linewidth]{figures/ucf-intro/serie1.png}
\caption[Example Image Sequence]{Example of an image sequence with an unknown future frame. The sequence starts from the left and is taken from UCF-101.} \label{fig:intro-seq}
\caption[Image Sequence Example]{Example of an image sequence with an unknown future frame. The sequence starts from the left and is taken from UCF-101.} \label{fig:intro-seq}
\end{figure}

As in the first example of object recognition on static pictures, this task might once more sound trivial for humans, since we have already built an intuition about motion and our environment. When we look at the image sequence in Figure \ref{fig:intro-seq}, we have a strong idea of how it might continue, at least for a couple of time steps: the boy in the foreground will probably lift his left foot towards the ball, while the ball continues to fall due to gravity. In contrast, the background will stay almost unchanged.
2 changes: 1 addition & 1 deletion chapters/02_fundamentals.tex
@@ -409,7 +409,7 @@ \subsubsection{Structure}
\caption{}
\label{fig:rnn-many2many}
\end{subfigure}
\caption[Examples of RNN Input-Output Modes]{Visualization of different recurrent network input-output modes: (a) one-to-many, (b) many-to-one, (c) many-to-many. Red squares denote the inputs, gray squares the recurrent cells, and blue squares the outputs. Input and output squares can be understood either as further neural networks or as the direct input and output. (Based on \parencite{rnn-effectiveness})}
\caption[RNN Input-Output Modes]{Visualization of different recurrent network input-output modes: (a) one-to-many, (b) many-to-one, (c) many-to-many. Red squares denote the inputs, gray squares the recurrent cells, and blue squares the outputs. Input and output squares can be understood either as further neural networks or as the direct input and output. (Based on \parencite{rnn-effectiveness})}
\label{fig:rnn-modes}
\end{figure}

4 changes: 2 additions & 2 deletions chapters/06_evaluation.tex
@@ -559,7 +559,7 @@ \subsubsection{Quantitative Results}
3enc-ConvLSTM-SS-3dec(5/64-64-64) & \num{5299841} & \textbf{0.0407} \\
\bottomrule
\end{tabular}
\caption[Results on Moving MNIST]{Comparison with other networks on Moving MNIST. The numbers in brackets identify the \textit{number of hidden units} per layer in the case of FC-LSTMs, while for ConvLSTMs the \textit{hidden-to-hidden kernel size} followed by the \textit{feature maps per layer} is listed. Further, \textit{3enc} denotes three convolutional layers in the spatial encoder. We call our model 3enc-ConvLSTM-SS-3dec.}\label{tab:mm-comparison}
\caption[Test Results on Moving MNIST]{Comparison with other networks on Moving MNIST. The numbers in brackets identify the \textit{number of hidden units} per layer in the case of FC-LSTMs, while for ConvLSTMs the \textit{hidden-to-hidden kernel size} followed by the \textit{feature maps per layer} is listed. Further, \textit{3enc} denotes three convolutional layers in the spatial encoder. We call our model 3enc-ConvLSTM-SS-3dec.}\label{tab:mm-comparison}
\end{table}

\subsubsection{Qualitative Results}
@@ -903,7 +903,7 @@ \subsubsection{Quantitative Results}
MSE+GDL+SSIM & \textbf{0.0023} & \textbf{0.0005} & \textbf{0.0037} & \textbf{0.0014} \\
\bottomrule
\end{tabular}
\caption[Test Errors on MsPacman]{Absolute and squared error test results on the MsPacman dataset using varying loss layers in our 2-layer network.}\label{tab:pac-comparison2}
\caption[Test Results on MsPacman]{Absolute and squared error test results on the MsPacman dataset using varying loss layers in our 2-layer network.}\label{tab:pac-comparison2}
\end{table}


