1. displaymath: single-line displayed math environment, without an equation number.
\begin{displaymath}
This\ is\ the\ displaymath\ environment.\ I\ have\ no\ tag
\end{displaymath}
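For reference, standard LaTeX also provides the shorthand \[ ... \], which produces the same unnumbered display; a minimal sketch:
\[
a^2 + b^2 = c^2
\]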
2. equation: single-line displayed math environment; equations are numbered sequentially through the document.
\begin{equation}
This\ is\ the\ equation\ environment.\ I\ have\ a\ tag
\end{equation}
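Since the point of equation is the number, it is usually combined with \label and \ref so that the number can be cited in the text. A minimal sketch (the label name eq:pyth is purely illustrative):
\begin{equation}
a^2 + b^2 = c^2 \label{eq:pyth}
\end{equation}
As Equation~\ref{eq:pyth} shows, ...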
3. itemize: unordered list environment; items are marked with small bullets.
\begin{itemize}
\item This is
\item itemize environment
\end{itemize}
4. enumerate: ordered list environment; items are numbered sequentially.
\begin{enumerate}
\item This is
\item enumerate environment
\end{enumerate}
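Both list environments can be nested (up to four levels in the standard classes), and the bullet or numbering style changes automatically with the level; a minimal sketch:
\begin{enumerate}
\item First level, numbered
  \begin{itemize}
  \item A bullet nested inside a numbered item
  \end{itemize}
\item Back to the first level
\end{enumerate}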
5. quotation: block-quotation environment; the quoted text is set with a large indent on both sides.
\begin{quotation}
This is the quotation environment. I have a big indent, and I output plain text.
\end{quotation}
6. verbatim: verbatim environment; the input is reproduced literally as plain text, in a typewriter font.
\begin{verbatim}
This is the verbatim environment. I also output plain text.
\end{verbatim}
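For short inline fragments there is also \verb, which reproduces its argument literally without starting a new block; the character right after \verb (here |) serves as the delimiter:
Use \verb|\hline| to draw a horizontal rule inside a table.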
7. tabular: table environment.
\begin{tabular}{l|c|c}
Aloha&This is the tabular environment & I can have many rows\\
\hline
I am BJ&Hello World &I love you\\
&I can make multiple lines & I can even enter $\int$
\end{tabular}
The column specification {l|c|c} in braces declares three columns: l left-aligns a column, c centers it, and r right-aligns it, while | draws a vertical rule between columns. The & symbol separates cells, \\ ends a row, and \hline draws a horizontal rule.
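The example above only uses l and c; here is a minimal sketch exercising all three letters (the cell contents are arbitrary):
\begin{tabular}{l|c|r}
left & centered & right\\
\hline
aaa & bbb & ccc
\end{tabular}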
8. description: description-list environment; each \item takes a label in square brackets, which is typeset in bold.
\begin{description}
\item[This is the description environment.]
\item[It seems cool.]
\end{description}
9. matrix: matrix environment; it requires the amsmath package and must be enclosed in math delimiters (here $$ ... $$). The compiler treats the whole matrix as a single math symbol.
$$
\begin{matrix}
I& am& a\\
Matrix& I& am\\
seen& as& a\ symbol
\end{matrix}
$$
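amsmath also provides delimited variants of the same environment, pmatrix, bmatrix, and vmatrix, which wrap the entries in parentheses, square brackets, or vertical bars; for example:
$$
\begin{bmatrix}
1 & 0\\
0 & 1
\end{bmatrix}
$$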
10. table: floating table environment; as a float, its placement is handled more flexibly by LaTeX.
\begin{table}[hbt]
\begin{tabular}{l|cc}
& I& just\\
\hline
& like & tabular\\
& environment&but more complete
\end{tabular}
\caption{This is a floating table.}
\end{table}
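The [hbt] options are placement hints (here, bottom, top). A float is also where \caption, \label, and \ref pay off, and \centering keeps the tabular centered inside the float; a sketch under these assumptions (the label name tab:demo is purely illustrative):
\begin{table}[hbt]
\centering
\begin{tabular}{l|c}
A & B\\
\hline
1 & 2
\end{tabular}
\caption{A small floating table.}
\label{tab:demo}
\end{table}
Table~\ref{tab:demo} shows ...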
11. preamble / title commands: \title, \author, and \date declare the title information (usually in the preamble), and \maketitle typesets it in the document body.
\title{This is a preamble}
\author{Chester}
\date{\today}
\maketitle
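For context, a minimal complete document showing where these commands live (the article class is only an example):
\documentclass{article}
\title{This is a preamble}
\author{Chester}
\date{\today}
\begin{document}
\maketitle
Body text goes here.
\end{document}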
12. figure: floating figure environment.
\begin{figure}[hbt]
\centering
\includegraphics{lenna.png}
\caption{lenna}
\end{figure}
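\includegraphics comes from the graphicx package (see item 13) and accepts options such as width and scale; a \label placed after \caption lets the figure be referenced. A sketch assuming the same file name:
\begin{figure}[hbt]
\centering
\includegraphics[width=0.6\textwidth]{lenna.png}
\caption{lenna}
\label{fig:lenna}
\end{figure}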
13. A more flexible way to place an image, with centering and scaling; requires the graphicx package.
{\centering\includegraphics[scale=0.85]{test.png} }\\
Note that a blank line must follow here; this is related to how TeX handles alignment and is left for later.
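A less fragile alternative is the center environment, which ends the paragraph by itself, so no trailing blank line is required:
\begin{center}
\includegraphics[scale=0.85]{test.png}
\end{center}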
14. A more detailed table example with merged cells (\multicolumn).
\begin{tabular}{|l|c|c|c|}
\multicolumn{2}{c}{} & \multicolumn{2}{|c|}{Predicted Classes}\\ \cline{3-4}
\multicolumn{2}{c}{} & \multicolumn{1}{|c|}{zero} & nonzero\\
\hline
Real&zero&975&5\\ \cline{2-4}
Class&nonzero&53&927\\ \hline
\end{tabular}\\
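The two-row label Real / Class could also be written as a genuine merged cell with \multirow from the multirow package; a sketch of just the two body rows, assuming \usepackage{multirow} in the preamble:
\multirow{2}{*}{Real Class} & zero & 975 & 5\\ \cline{2-4}
 & nonzero & 53 & 927\\ \hline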
15. A pseudocode example (Naive Bayes); it requires the algorithm and algorithmic packages.
\begin{algorithm}
\caption{Training Naive Bayes Classifier}
\label{alg:train_bayes}
\textbf{Input:} The labelled training set $\mathcal{D}=\{(\mathbf{x}_i,y_i)\}$.
\begin{algorithmic}[1]
\STATE $\mathcal{V}\leftarrow$ the set of distinct words and other tokens found in $\mathcal{D}$
\FOR{each target value $c$ in the label set $\mathcal{C}$}
\STATE $\mathcal{D}_c\leftarrow$ the training samples whose labels are $c$
\STATE $P(c)\leftarrow\frac{|\mathcal{D}_c|}{|\mathcal{D}|}$
\STATE $T_c\leftarrow$ a single document formed by concatenating all training samples in $\mathcal{D}_c$
\STATE $n_c\leftarrow |T_c|$
\FOR{each word $w_k$ in the vocabulary $\mathcal{V}$}
\STATE $n_{c,k}\leftarrow$ the number of times the word $w_k$ occurs in $T_c$
\STATE $P(w_k|c)\leftarrow\frac{n_{c,k}+1}{n_c+|\mathcal{V}|}$
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
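For completeness, a sketch of the preamble lines this example relies on; algorithmic also provides \REQUIRE and \ENSURE as a built-in way to state inputs and outputs instead of the hand-written Input line:
\usepackage{algorithm}
\usepackage{algorithmic}
% Inside algorithmic one could instead write:
% \REQUIRE the training set $\mathcal{D}$
% \ENSURE the estimates $P(c)$ and $P(w_k|c)$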