CWYAlpha

Just another WordPress.com site

Archive for February 2013

Thought this was cool: Xiao Ming accidentally dropped his iPhone 4 into the river and sat on the bank crying. The river god heard…


Xiao Ming accidentally dropped his iPhone 4 into the river and sat on the bank crying bitterly.

Hearing the crying, the river god took pity on him, pulled an iPhone 5 out of the river and asked whether it was his. Xiao Ming shook his head.

The god then brought out an iPhone 4S and asked again; Xiao Ming shook his head once more. Finally the god brought out the iPhone 4, and Xiao Ming nodded: this one is mine.

The river god was delighted: honest child, take all three iPhones with you; they no longer work anyway.

A comment from an anonymous reader in Shenzhen, Guangdong on the news article "TechNode: Why do Chinese users jailbreak?"
from cnBeta.com featured comments: http://www.cnbeta.com/articles/225416.htm

Written by cwyalpha

February 9, 2013 at 3:47 pm

Posted in Uncategorized

Thought this was cool: That Netflix RMSE is way too low or is it ? ( Clustering-Based Matrix Factorization – implementation -)


We’ve seen this type of occurrence on Nuit Blanche before. This one is either a bombshell or a dud. Early on in a discussion in the Advanced Matrix Factorization group, Nima Mirbakhsh shared his thoughts and an interesting, potentially mind-blowing implementation; here is what he said:

Helping to evaluate my proposed extension on matrix factorization.

Hello everyone,
I have a new extension of matrix factorization named “Clustering-Based Matrix Factorization”. I have applied it to several datasets, including Netflix, Movielens, Epinions, and Flixter, and it achieves very good results. For the last three datasets the RMSE is good and plausible, but on the Netflix dataset the result is very interesting. As we all know, the RMSE of the Netflix prize winner was 0.8567; my method achieves an RMSE of 0.8122.
I know that the Netflix prize winner’s method fused the results of many different algorithms, and it is hard to believe that a single algorithm can reach such a good result. That has been my concern over the last couple of months as well. I have checked my source code and my setup several times but cannot find any bug. I also submitted the paper to ICML; apart from one weak accept, the reviewers said that the method makes sense but rejected the work just because of the extraordinary result!
That is why I decided to put the paper and my source code online so that everyone can evaluate them. I kindly ask you to join me in evaluating the paper and the source code more carefully. If my method does work, it would be a new experience for recommendation systems and may show that there are still opportunities to improve RMSE results.
Here are the links to the paper and the source code:
source code: http://goo.gl/Az0lS 
Thanks everyone in advance.
We recently saw some improvement of the Netflix RMSE (Linear Bandits in High Dimension and Recommendation Systems), but this time the code is shared so that everybody can kick the tires on it. As a reminder, we featured the paper earlier:

Recommender systems are emerging technologies that nowadays can be found in many applications such as Amazon, Netflix, and so on. These systems help users find relevant information, recommendations, and their preferred items. Matrix factorization is a popular method in recommender systems, showing promising results in accuracy and complexity. In this paper we propose an extension of matrix factorization that uses the clustering paradigm to group similar users and items into several communities, and we then incorporate the effect of these communities into the prediction model. To the best of our knowledge, our proposed model outperforms all other published recommender methods in accuracy and complexity. For instance, our proposed method achieves an RMSE of 0.8122 on the Netflix dataset, better (lower) than the Netflix prize winner’s RMSE of 0.8567.
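
For readers who want a concrete frame of reference for RMSE figures like these, here is a minimal Python sketch of plain matrix factorization trained with SGD, together with the RMSE metric under discussion. It is not the clustering-based method from the paper; the shapes, hyperparameters, and toy data are illustrative assumptions only.

# Minimal baseline sketch: plain matrix factorization trained with SGD, plus
# the RMSE metric discussed above. NOT the paper's clustering-based method;
# hyperparameters and toy data are assumptions for illustration.
import numpy as np

def train_mf(ratings, n_users, n_items, k=20, lr=0.01, reg=0.05, epochs=20, seed=0):
    """ratings: list of (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return P, Q

def rmse(ratings, P, Q):
    return float(np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings])))

# Toy usage: 3 users, 3 items.
train = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 2, 1.0)]
P, Q = train_mf(train, n_users=3, n_items=3)
print("train RMSE:", rmse(train, P, Q))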


from Nuit Blanche: http://nuit-blanche.blogspot.com/2013/02/that-netflix-rmse-is-way-too-low-or-is.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blogspot%2FwCeDd+%28Nuit+Blanche%29

Written by cwyalpha

February 9, 2013 at 3:30 pm

Posted in Uncategorized

Thought this was cool: What fire-control radar means


A vivid illustration:

> Military ships and planes do not “lock onto other vessels all the time”. Not with targeting radar, and not in international waters – doing so is proscribed under international law as an act of aggression. It is the high-tech equivalent of sticking a gun in someone’s face and cocking the hammer

via

from est's blog: http://blog.est.im/post/42493400735

Written by cwyalpha

February 9, 2013 at 2:58 pm

Posted in Uncategorized

Thought this was cool: LDA-math: Text Modeling


4. Text Modeling

We produce large amounts of text in everyday life. If each text is stored as a document, then from a human reader's point of view every document is an ordered sequence of words $d=(w_1, w_2, \cdots, w_n)$.

[Figure: corpus] A corpus containing $M$ documents

The goal of statistical text modeling is to ask how the word sequences we observe in the corpus were generated. Statistics has been described as the game of guessing what God is doing: all the text humanity produces can be viewed as the outcome of a great God rolling dice in heaven. What we observe is only the result of the game, the corpus of word sequences, while the process by which God plays the game is a black box to us. In statistical text modeling we therefore try to guess how God plays the game. Concretely, the two core questions are:

  • What kinds of dice does God have?
  • How does God roll these dice?

The first question asks what the parameters of the model are: the probability of each face of a die corresponds to a model parameter. The second question asks what the rules of the game are: God may have dice of various types, and may roll them according to certain rules to produce the word sequence.

[Figure: dice-all, god-throw-dice] God rolling dice

4.1 Unigram Model

Suppose our vocabulary contains $V$ words $v_1, v_2, \cdots v_V$. The simplest Unigram Model then assumes that God generates text according to the following game rules.

[Figure: game-unigram-model, the game rules of the Unigram Model]

Denote the face probabilities of God's single die by $\overrightarrow{p} = (p_1, p_2, \cdots, p_V)$. Each roll of the die is then analogous to the Bernoulli trial of flipping a coin, except with $V$ outcomes, written $w\sim Mult(w|\overrightarrow{p})$.

[Figure: unigram-model] God rolls a die with $V$ faces

For a document $d=\overrightarrow{w}=(w_1, w_2, \cdots, w_n)$, the probability that the document is generated is
$$ p(\overrightarrow{w}) = p(w_1, w_2, \cdots, w_n) = p(w_1)p(w_2) \cdots p(w_n) $$
We regard documents as independent of one another, so if the corpus contains several documents $\mathcal{W}=(\overrightarrow{w_1}, \overrightarrow{w_2},\cdots,\overrightarrow{w_m})$, the probability of the corpus is
$$p(\mathcal{W})= p(\overrightarrow{w_1})p(\overrightarrow{w_2})
\cdots p(\overrightarrow{w_m}) $$

In the Unigram Model we assume that documents are independent and exchangeable, and that the words within a document are also independent and exchangeable. A document is therefore just a bag containing some words, and word order carries no information; models of this kind are called bag-of-words models.
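
A tiny Python sketch of the generative process just described: a single $V$-sided die whose faces are words, rolled once per word. The vocabulary and probabilities below are made-up assumptions.

# Sketch of the unigram "bag of words" generative process: one V-sided die
# with face probabilities p, rolled once per word. Toy data, illustration only.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]      # assumed toy vocabulary
p = np.array([0.4, 0.2, 0.15, 0.15, 0.1])       # die face probabilities, sum to 1

def generate_document(n_words):
    # roll the single V-sided die once per word
    return [vocab[i] for i in rng.choice(len(vocab), size=n_words, p=p)]

print(generate_document(10))   # word order carries no information under this model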

Suppose the total word count of the corpus is $N$. If among all $N$ words we keep track of the number of occurrences $n_i$ of each word $v_i$, then $\overrightarrow{n}=(n_1, n_2,\cdots, n_V)$ follows exactly a multinomial distribution
$$ p(\overrightarrow{n}) = Mult(\overrightarrow{n}|\overrightarrow{p}, N)
= \binom{N}{\overrightarrow{n}} \prod_{k=1}^V p_k^{n_k} $$
In this case the probability of the corpus is
\begin{align*}
p(\mathcal{W})= p(\overrightarrow{w_1})p(\overrightarrow{w_2}) \cdots p(\overrightarrow{w_m})
= \prod_{k=1}^V p_k^{n_k}
\end{align*}

An important task, of course, is to estimate the model parameter $\overrightarrow{p}$, that is, to ask what the face probabilities of God's die are. Following the frequentist view in statistics, we use maximum likelihood and maximize $P(\mathcal{W})$, which gives the estimate of $p_i$ as
$$ \hat{p_i} = \frac{n_i}{N} .$$
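
As a quick illustration (toy corpus assumed), the maximum-likelihood estimate is nothing more than relative word frequencies over the whole corpus:

# MLE of the die: p_i = n_i / N, i.e. plain relative frequencies (toy sketch).
from collections import Counter

corpus = [["the", "cat", "sat"], ["the", "mat"], ["cat", "sat", "on", "mat"]]  # toy data
counts = Counter(w for doc in corpus for w in doc)
N = sum(counts.values())
p_hat = {w: n / N for w, n in counts.items()}          # p_i = n_i / N
print(p_hat)                                           # e.g. p_hat["the"] == 2/9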

Bayesian statisticians take issue with the model above; they object that it is unreasonable to assume God owns a single fixed die. In the Bayesian view every parameter is a random variable: the die $\overrightarrow{p}$ in the model above is not unique and fixed, but is itself a random variable. According to the Bayesian school, God plays the game as follows.

[Figure: game-bayesian-unigram-model, the game rules of the Bayesian Unigram Model]
In God's jar there can be infinitely many dice; dice of some types are plentiful while dice of other types are rare. From a probabilistic point of view, the dice $\overrightarrow{p}$ in the jar follow a distribution $p(\overrightarrow{p})$, which is called the prior distribution of the parameter $\overrightarrow{p}$.

[Figure: bayesian-unigram-model] The Unigram Model from the Bayesian point of view

Under the Bayesian game rules above, how do we compute the probability of generating the corpus $\mathcal{W}$? Since we do not know which die $\overrightarrow{p}$ God actually used, every die may have been used, with its probability of being used determined by the prior $p(\overrightarrow{p})$. For each particular die $\overrightarrow{p}$, the probability of producing the data is $p(\mathcal{W}|\overrightarrow{p})$, so the overall probability of the data is obtained by integrating over all dice:
$$ p(\mathcal{W}) = \int p(\mathcal{W}|\overrightarrow{p}) p(\overrightarrow{p})d\overrightarrow{p} $$
Within the framework of Bayesian analysis there are many possible choices for the prior $p(\overrightarrow{p})$. Noting that
$$ p(\overrightarrow{n}) = Mult(\overrightarrow{n}|\overrightarrow{p}, N) $$
is really computing the probability of a multinomial distribution, a good choice of prior is the conjugate distribution of the multinomial, namely the Dirichlet distribution
$$ Dir(\overrightarrow{p}|\overrightarrow{\alpha})=
\frac{1}{\Delta(\overrightarrow{\alpha})} \prod_{k=1}^V p_k^{\alpha_k -1},
\quad \overrightarrow{\alpha}=(\alpha_1, \cdots, \alpha_V) $$
Here $\Delta(\overrightarrow{\alpha})$ is the normalization constant of $Dir(\overrightarrow{p}|\overrightarrow{\alpha})$, that is,
$$ \Delta(\overrightarrow{\alpha}) =
\int \prod_{k=1}^V p_k^{\alpha_k -1} d\overrightarrow{p} . $$

[Figure: dirichlet-multinomial-unigram] The Unigram Model under a Dirichlet prior

[Figure: graph-model-unigram] Probabilistic graphical model of the Unigram Model

Recall from the previous section on the Dirichlet distribution one particularly important fact:

Dirichlet prior + multinomial data $\rightarrow$ the posterior is again a Dirichlet distribution

$$ Dir(\overrightarrow{p}|\overrightarrow{\alpha}) + MultCount(\overrightarrow{n})= Dir(\overrightarrow{p}|\overrightarrow{\alpha}+\overrightarrow{n}) $$

Thus, given the prior $Dir(\overrightarrow{p}|\overrightarrow{\alpha})$ on the parameter $\overrightarrow{p}$, and since the word-count data $\overrightarrow{n} \sim Mult(\overrightarrow{n}|\overrightarrow{p},N)$ follow a multinomial distribution, we can write down the posterior without any computation:
\begin{equation}
p(\overrightarrow{p}|\mathcal{W},\overrightarrow{\alpha})
= Dir(\overrightarrow{p}|\overrightarrow{n}+ \overrightarrow{\alpha})
= \frac{1}{\Delta(\overrightarrow{n}+\overrightarrow{\alpha})}
\prod_{k=1}^V p_k^{n_k + \alpha_k -1}
\end{equation}

How do we estimate the parameter $\overrightarrow{p}$ in the Bayesian framework? Since we now have the posterior distribution of the parameter, reasonable choices are its posterior mode or its posterior mean; in this article we take the mean as the estimate. Using the result from the previous section, since the posterior of $\overrightarrow{p}$ is $Dir(\overrightarrow{p}|\overrightarrow{n} + \overrightarrow{\alpha})$, we have
$$
E(\overrightarrow{p}) = \Bigl(\frac{n_1 + \alpha_1}{\sum_{i=1}^V(n_i + \alpha_i)},
\frac{n_2 + \alpha_2}{\sum_{i=1}^V(n_i + \alpha_i)}, \cdots,
\frac{n_V + \alpha_V}{\sum_{i=1}^V(n_i + \alpha_i)} \Bigr)
$$
That is, for each $p_i$ we use the following estimate:
\begin{equation}
\label{dirichlet-parameter-estimation}
\hat{p_i} = \frac{n_i + \alpha_i}{\sum_{i=1}^V(n_i + \alpha_i)}
\end{equation}
Since the physical meaning of $\alpha_i$ in the Dirichlet distribution is a prior pseudo-count for the corresponding event, this estimate has a very intuitive interpretation: each parameter estimate is the proportion, within the total count, of that event's prior pseudo-count plus its observed count in the data.
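
A minimal sketch of this Bayesian counterpart of the earlier frequency estimate: the posterior-mean ("add-alpha") estimate, with a symmetric prior $\alpha_i = 0.1$ assumed purely for illustration.

# Posterior-mean estimate p_i = (n_i + alpha_i) / sum_j (n_j + alpha_j).
# Symmetric prior alpha_i = 0.1 is an assumption for illustration.
from collections import Counter

corpus = [["the", "cat", "sat"], ["the", "mat"], ["cat", "sat", "on", "mat"]]  # toy data
vocab = sorted({w for doc in corpus for w in doc})
alpha = {w: 0.1 for w in vocab}                        # prior pseudo-counts (assumed)
counts = Counter(w for doc in corpus for w in doc)
denom = sum(counts[w] + alpha[w] for w in vocab)
p_hat = {w: (counts[w] + alpha[w]) / denom for w in vocab}
print(p_hat)                                           # every vocabulary word gets non-zero probability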

Going further, we can compute the probability of generating the text corpus as
\begin{align}
p(\mathcal{W}|\overrightarrow{\alpha}) & = \int p(\mathcal{W}|\overrightarrow{p}) p(\overrightarrow{p}|\overrightarrow{\alpha})d\overrightarrow{p} \notag \\
& = \int \prod_{k=1}^V p_k^{n_k} Dir(\overrightarrow{p}|\overrightarrow{\alpha}) d\overrightarrow{p} \notag \\
& = \int \prod_{k=1}^V p_k^{n_k} \frac{1}{\Delta(\overrightarrow{\alpha})}
\prod_{k=1}^V p_k^{\alpha_k -1} d\overrightarrow{p} \notag \\
& = \frac{1}{\Delta(\overrightarrow{\alpha})}
\int \prod_{k=1}^V p_k^{n_k + \alpha_k -1} d\overrightarrow{p} \notag \\
& = \frac{\Delta(\overrightarrow{n}+\overrightarrow{\alpha})}{\Delta(\overrightarrow{\alpha})}
\label{likelihood-dir-mult}
\end{align}
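
The evidence $p(\mathcal{W}|\overrightarrow{\alpha}) = \Delta(\overrightarrow{n}+\overrightarrow{\alpha})/\Delta(\overrightarrow{\alpha})$ is easy to evaluate in log space, since $\Delta(\overrightarrow{a}) = \prod_k \Gamma(a_k) / \Gamma(\sum_k a_k)$. A short sketch with assumed toy counts and a symmetric prior:

# log p(W | alpha) = log Delta(n + alpha) - log Delta(alpha), evaluated with gamma functions.
import numpy as np
from scipy.special import gammaln

def log_delta(a):
    # log Delta(a) = sum_k log Gamma(a_k) - log Gamma(sum_k a_k)
    return float(np.sum(gammaln(a)) - gammaln(np.sum(a)))

alpha = np.full(5, 0.1)            # assumed symmetric Dirichlet prior over V = 5 words
n = np.array([2, 2, 2, 2, 1])      # assumed word counts
log_evidence = log_delta(n + alpha) - log_delta(alpha)
print(log_evidence)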

4.2 Topic Model and PLSA

The Unigram Model above is a very simple model; its assumptions look overly naive and are rather far from how humans actually produce each word when writing an article. Is there a better model?

Consider how people compose an article in everyday life. To write an article we usually first decide which topics to cover. Say we are planning an article related to natural language processing: perhaps 40% of it will discuss linguistics, 30% probability and statistics, 20% computer science, and the remaining 10% other topics.

  • For linguistics, words that readily come to mind include: grammar, sentence, Chomsky, parsing, subject, ...
  • For probability and statistics, words that readily come to mind include: probability, model, mean, variance, proof, independence, Markov chain, ...
  • For computer science, words that readily come to mind include: memory, hard disk, programming, binary, object, algorithm, complexity, ...

The reason these words come to mind immediately is that they occur with high probability under the corresponding topics. It is thus natural to see that an article is usually composed of several topics, and that each topic can roughly be described by the words that occur most frequently under it.

This intuitive idea was first made mathematically precise in the PLSA (Probabilistic Latent Semantic Analysis) model proposed by Hofmann in 1999. Hofmann assumed that a document can be a mixture of several topics, that each topic is a probability distribution over the vocabulary, and that each word in the document is generated by one fixed topic. The figure below shows a few example topics in English.

[Figure: topic-examples] A topic is a probability distribution over the vocabulary

All human thinking and writing can again be regarded as God's doing, so let us return to the God metaphor. In the PLSA model, Hofmann assumed that God generates text according to the following game rules.

[Figure: game-plsa, the game rules of the PLSA model]

The document generation process of the PLSA model above can be depicted graphically as

[Figure: plsa-doc-topic-word] The document generation process of the PLSA model

Under these game rules, documents are again independent and exchangeable, and so are the words within a document, so this is still a bag-of-words model. Denote the $K$ topic-word dice in the game by $\overrightarrow{\varphi}_1, \cdots, \overrightarrow{\varphi}_K$. For each document $d_m$ in a corpus $C=(d_1, d_2, \cdots, d_M)$ of $M$ documents there is a specific doc-topic die $\overrightarrow{\theta}_m$, and we denote all of them by $\overrightarrow{\theta}_1, \cdots, \overrightarrow{\theta}_M$. For convenience we assume each word $w$ is an index identifying a face of the topic-word dice. Then, in the PLSA model, the probability of generating each word of the $m$-th document $d_m$ is
$$ p(w|d_m) = \sum_{z=1}^K p(w|z)p(z|d_m) = \sum_{z=1}^K \varphi_{zw} \theta_{mz}$$
so the probability of generating the whole document is
$$ p(\overrightarrow{w}|d_m) = \prod_{i=1}^n \sum_{z=1}^K p(w_i|z)p(z|d_m) =
\prod_{i=1}^n \sum_{z=1}^K \varphi_{zw_i} \theta_{mz} $$
Since documents are mutually independent, the probability of generating the whole corpus is also easy to write down. When fitting the PLSA topic model, the parameters are not easy to solve for directly; the well-known EM algorithm can be used to obtain a locally optimal solution. Since solving the model is not the focus of this article, interested readers are referred to Hofmann's original paper; we omit the details here. A small illustrative sketch of the EM updates follows below.
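
For readers who want to see the E and M steps concretely, here is a minimal from-scratch EM sketch for PLSA on a tiny term-count matrix. It only illustrates the updates and is not Hofmann's original implementation; the dimensions, initialization, and iteration count are assumptions.

# Minimal EM sketch for PLSA on a toy (M docs x V words) count matrix.
import numpy as np

def plsa(counts, K=2, iters=50, seed=0):
    """counts: (M documents x V words) matrix of term frequencies."""
    rng = np.random.default_rng(seed)
    M, V = counts.shape
    theta = rng.dirichlet(np.ones(K), size=M)      # p(z | d_m), shape (M, K)
    phi = rng.dirichlet(np.ones(V), size=K)        # p(w | z),   shape (K, V)
    for _ in range(iters):
        # E-step: responsibilities q(z | d, w), shape (M, V, K)
        q = theta[:, None, :] * phi.T[None, :, :]
        q /= q.sum(axis=2, keepdims=True) + 1e-12
        # M-step: re-estimate phi and theta from expected counts n(d, w) * q(z | d, w)
        weighted = counts[:, :, None] * q
        phi = weighted.sum(axis=0).T
        phi /= phi.sum(axis=1, keepdims=True)
        theta = weighted.sum(axis=1)
        theta /= theta.sum(axis=1, keepdims=True)
    return theta, phi

counts = np.array([[4, 3, 0, 0], [3, 4, 1, 0], [0, 0, 5, 4]], dtype=float)  # toy corpus
theta, phi = plsa(counts)
print(np.round(theta, 2))   # doc-topic mixtures theta_m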

Related posts:

  1. LDA-math: Getting to Know the Beta/Dirichlet Distribution (3)
  2. LDA-math: Getting to Know the Beta/Dirichlet Distribution (1)
  3. LDA-math: The Magical Gamma Function (3)
  4. LDA-math: Getting to Know the Beta/Dirichlet Distribution (2)
  5. LDA-math: The Magical Gamma Function (1)
  6. LDA-math: MCMC and Gibbs Sampling (1)
  7. LDA-math: MCMC and Gibbs Sampling (2)
  8. LDA-math: The Magical Gamma Function (2)
  9. Probabilistic Language Models and Their Variants: LDA and Gibbs Sampling
  10. The Past and Present of the Normal Distribution (4)


from 我爱自然语言处理: http://www.52nlp.cn/lda-math-%e6%96%87%e6%9c%ac%e5%bb%ba%e6%a8%a1?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+52nlp+%28%E6%88%91%E7%88%B1%E8%87%AA%E7%84%B6%E8%AF%AD%E8%A8%80%E5%A4%84%E7%90%86%29

Written by cwyalpha

February 3, 2013 at 12:53 am

Posted in Uncategorized

Thought this was cool: HTML5 for TV (Big screen)


I came across a Weibo post:

@刘兴亮
The standout advantages of HTML5 are that it is multi-device and cross-platform. "HTML5 for TV" is an international open-source collaboration project led by Cable Labs in the US to promote the use of HTML5 on television terminals. In 2013 more developers are expected to move into application development for smart TVs; developing applications for the big screen is a "rich vein" to mine. In 2013, the key characteristics of smart-TV applications will be cross-screen and cross-terminal interoperability.

So I went and had a look at their document:

http://www.cablelabs.com/specifications/CL-SP-HTML5-MAP-I02-120510.pdf

My take is that this merely defines a specification for how content providers embed HTML into their programming, so that their Tru2way® devices can push web-based shows to the TV.

It only solves part of the problem. First, it is limited to linear content; second, the format is, surprisingly, locked down to MPEG-2 TS, which looks rather baffling in the era of H.265.

In my view the biggest problem is that it solves HTML5 to TV, but not the problem of embedding an HTML5 rendering engine in the TV itself. A practical example:

I already watch most of my videos on vod.xunlei.com. However,

  • the media keys on a multimedia keyboard do not respond; this is a browser-vendor problem
  • audio and video content cannot be pushed via DLNA to the screens of other devices

To genuinely push the standard and the technology forward, HTML5 should support

<video onControlPlay="function(){}" />

DOM nodes like this.

These are just some superficial thoughts and may well be incomplete.

from est's blog: http://blog.est.im/post/41774190853

Written by cwyalpha

February 2, 2013 at 12:38 pm

Posted in Uncategorized

Thought this was cool: Which generation is the programming language you're familiar with?


1GL (First-generation programming language)

Machine language, the one legendarily invented by that guy who carved optical discs with a pocketknife after he went to America

2GL:

  • Assembly

3GL:

  • Fortran
  • COBOL
  • Pascal
  • C
  • C++
  • C#
  • Java
  • BASIC
  • Delphi
  • Ada

4GL:

  • LabVIEW
  • Progress 4GL (OpenEdge Advanced Business Language)
  • SQL 和 PL/SQL
  • MATLAB
  • R
  • Scilab
  • XQuery
  • XUL
  • ColdFusion

5GL:

  • Prolog
  • Haskell
  • ML
  • Erlang

Although this is an outdated concept, it is still fun to take it out and play with it.

from est's blog: http://blog.est.im/post/41061230598

Written by cwyalpha

February 2, 2013 at 12:38 pm

Posted in Uncategorized

Thought this was cool: Case study: million songs dataset


A couple of days ago I wrote about the Million Songs Dataset. Our man in London, Clive Cox from Rummble Labs, suggested that we implement rankings based on item similarity.

Thanks to Clive's suggestion, we now have an implementation of Fabio Aiolli’s cost function as explained in the paper A Preliminary Study for a Recommender System for the Million Songs Dataset, which is the winning method in this contest.

Below are detailed instructions on how to use the GraphChi CF toolkit on the Million Songs Dataset to compute user ratings from item similarities.
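
Before the concrete commands, here is a back-of-the-envelope Python sketch of the two ideas the toolkit is invoked with below: asymmetric cosine item-item similarity (the alpha = 0.15 and distance flags) and a locality power Q applied when aggregating similarities into a user score. This reflects my reading of Aiolli's paper, not the GraphChi implementation; the toy play history is an assumption.

# Asymmetric cosine similarity and Q-weighted item-based scoring (illustrative only).
def asym_cosine(users_by_item, i, j, alpha=0.15):
    # |intersection of U_i and U_j| / (|U_i|^alpha * |U_j|^(1 - alpha))
    ui, uj = users_by_item[i], users_by_item[j]
    inter = len(ui & uj)
    return inter / (len(ui) ** alpha * len(uj) ** (1 - alpha)) if inter else 0.0

def score(user_items, users_by_item, candidate, q=1.0, alpha=0.15):
    # aggregate the candidate's similarity to the user's items, each raised to the power Q
    return sum(asym_cosine(users_by_item, candidate, j, alpha) ** q for j in user_items)

# Toy play history: item -> set of users who listened to it (assumed data).
users_by_item = {"a": {1, 2, 3}, "b": {1, 2}, "c": {3}}
print(score(user_items={"a", "b"}, users_by_item=users_by_item, candidate="c", q=3))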

Instructions for computing item to item similarities:

1) To obtain the dataset, download the createTrain.sh and createTrain.py scripts.

2) Run createTrain.sh to download the Million Songs Dataset and prepare it in a GraphChi-compatible format.
$ sh createTrain.sh
Note: this operation may take an hour or so to prepare the data.

3) Run GraphChi item-based collaborative filtering to find the top 500 most similar items for each item:

./toolkits/collaborative_filtering/itemcf --training=train --K=500 --asym_cosine_alpha=0.15 --distance=3 --min_allowed_intersection=5
Explanation: --training points to the training file. --K=500 means we compute the top 500 similar items.
--distance=3 selects Aiolli's metric. --min_allowed_intersection=5 means we only take item pairs into account that were rated together by at least 5 users.


Note: this operation requires around 20GB of memory and may take a few hours.

4) Post-process the results to create a single item-to-item ratings file:
$ sh ./toolkits/collaborative_filtering/topk.sh train
Sorting output file train.out0

Merging sorted files:
File written: train-topk

5) Create a matrix market header by saving the bash script below into a file and running it
(or simply copy everything and paste it into a bash/sh shell window):

#!/bin/sh -x
USERS=100000
ITEMS=385371
echo "%%MatrixMarket matrix coordinate real general" > train-topk\:info
echo $ITEMS $ITEMS `wc -l train-topk | awk '{print $1}'` >> train-topk\:info
echo "*********************"
cat train-topk\:info
echo "*********************"


Create user recommendations based on item similarities:

1) Run itemsim2rating to compute recommendations based on item similarities:
$ ./toolkits/parsers/itemsim2rating --training=train --similarity=train-topk --K=500 membudget_mb 50000 --nshards=1 --max_iter=2 --Q=3
Note: this operation may require around 20GB of RAM and may take a couple of hours depending on your machine configuration.

Output file is: train-rec

Evaluating the result

1) Prepare test data:
./toolkits/parsers/topk --training=test --K=500

Output file is: test.ids

2) Prepare training recommendations: 

./toolkits/parsers/topk --training=train-rec --K=500


Output file is: train-rec.ids

3) Compute mean average precision @ 500:
./toolkits/collaborative_filtering/metric_eval --training=train-rec.ids --test=test.ids --K=500


About the performance: Q is the power applied to the item similarity weight.
When Q = 1 we get:

INFO:     metric_eval.cpp(eval_metrics:114): 7.48179 Finished evaluating 100000 instances.
INFO:     metric_eval.cpp(eval_metrics:117): Computed AP@500 metric: 0.151431
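
As a sanity check on the number above, here is a small Python sketch of the mean average precision @ K metric as commonly defined; the GraphChi metric_eval tool may differ in details such as truncation and tie handling, so treat this only as an illustration.

# Mean average precision @ K (standard definition, illustrative sketch).
def average_precision_at_k(recommended, relevant, k=500):
    hits, score = 0, 0.0
    for rank, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank          # precision at each hit position
    return score / min(len(relevant), k) if relevant else 0.0

def map_at_k(rec_lists, rel_sets, k=500):
    aps = [average_precision_at_k(r, s, k) for r, s in zip(rec_lists, rel_sets)]
    return sum(aps) / len(aps)

# Toy usage (assumed data): one user, two relevant items.
print(map_at_k([["a", "b", "c"]], [{"a", "c"}]))   # (1/1 + 2/3) / 2 = 0.8333...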

Acknowledgements:

  • Clive Cox, RummbleLabs.com, for proposing the implementation of item-based recommendations in GraphChi and for support throughout the process of implementing this method.
  • Fabio Aiolli, University of Padova, winner of the Million Songs Dataset contest, for great support regarding the implementation of his metric.


from Large Scale Machine Learning and Other Animals: http://bickson.blogspot.com/2013/02/case-study-million-songs-dataset.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blogspot%2FsYXZE+%28Large+Scale+Machine+Learning+and+Other+Animals%29

Written by cwyalpha

February 2, 2013 at 12:09 pm

Posted in Uncategorized