Problems installing AirParrot on a MacBook Pro

Zhou Haihan, 2015.3.31. A couple of days ago I upgraded to 100 Mbps broadband and the carrier threw in a Damai set-top box (大麦盒子). Seeing that it supports AirPlay, I wanted to try it out. It turned out the formats AirPlay handles are very limited: only .mov video and .jpg images worked, while rmvb and mkv did not, so it is less convenient than just plugging in an HDMI cable. For the test I installed AirParrot, which automatically installed an AirPlay driver and asked me to reboot. After rebooting, the machine went straight to a black screen and would not boot, showing:

no bootable device
-- insert boot disk and press any key
Holding Alt plus the power button for a few seconds brought up the boot picker; choosing the SSD got me back into the OS.
But then the MacBook had no sound: rebooting didn't help, and neither did plugging in headphones. Reports online say AirParrot's driver is buggy and breaks the MacBook's audio.
Go into the extensions directory:
cd /System/Library/Extensions/
and locate the just-installed drivers:
➜  Extensions ls -alt
total 0
drwxr-xr-x  236 root wheel 8024 3 28 08:01 .
drwxr-xr-x@   3 root wheel  102 3 28 08:01 APExtFramebuffer.kext
drwxr-xr-x@   3 root wheel  102 3 28 08:01 AirParrotDriver.kext
drwxr-xr-x   79 root wheel 2686 3 25 17:42 ..
drwxr-xr-x    6 root wheel  204 3 25 17:42 System.kext
drwxr-xr-x    6 root wheel  204 3  8 23:02 AppleBacklightExpert.kext
drwxr-xr-x    6 root wheel  204 3  8 23:02 AudioAUUC.kext
Only two entries are dated March 28: APExtFramebuffer.kext and AirParrotDriver.kext.
Delete them:
➜  Extensions sudo rm -rf APExtFramebuffer.kext AirParrotDriver.kext
Clear the kext caches:
sudo rm -rf /System/Library/Extensions.mkext
sudo rm -rf /System/Library/Extensions.kextcache
sudo rm -R /System/Library/Caches/com.apple.kext.caches
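Before rebooting, you can verify that the AirParrot kexts are really gone; kextstat lists the loaded kernel extensions (the grep pattern is just a guess at the bundle names):
kextstat | grep -i parrot
# no output means the driver is no longer loaded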
Reboot.
If the sound still hasn't come back, go to System Preferences, Sound, choose "Internal Speakers" as the output rather than AirParrot, and make sure Mute isn't checked.
If rebooting still ends in a black screen, boot in with Alt + power again, then open System Preferences, Startup Disk, select "Macintosh HD", and reboot.

Google Code shuts down, moving projects to GitHub

On 2015.3.12 Google published "Farewell to Google Code", announcing that Google Code will close and recommending that projects move to GitHub.

Timeline:

  • March 12, 2015 – New project creation disabled.
  • August 24, 2015 – The site goes read-only. You can still checkout/view project source, issues, and wikis.
  • January 25, 2016 – The project hosting service is closed. You will be able to download a tarball of project source, issues, and wikis. These tarballs will be available throughout the rest of 2016.

In other words, the service shuts down for good on January 25, 2016.

Migration options:

The simplest route is the Google Code to GitHub exporter tool; there are also standalone tools (https://code.google.com/p/support-tools/) that help migrate to GitHub, Bitbucket, or SourceForge.
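If the exporter doesn't fit, a project that already uses git can be moved by hand in a few commands (a rough sketch; the project and user names below are placeholders):
git clone https://code.google.com/p/yourproject/
cd yourproject
git remote add github https://github.com/youruser/yourproject.git
git push --all github
git push --tags github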


Sadly, my own few projects hosted on Google Code will have to move as well; noting it here for the record.


Original announcement (blocked in China, requires a proxy):

http://google-opensource.blogspot.com/2015/03/farewell-to-google-code.html


mysql support for spatial (GIS) queries

Zhou Haihan, 2014.8.21
mysql> create table geom(g geometry);
mysql> desc geom;
+-------+----------+------+-----+---------+-------+
| Field | Type     | Null | Key | Default | Extra |
+-------+----------+------+-----+---------+-------+
| g     | geometry | YES  |     | NULL    |       |
+-------+----------+------+-----+---------+-------+
1 row in set (0.01 sec)
mysql> insert into geom set g=geomfromtext('point(1 1)');
Query OK, 1 row affected (0.00 sec)
mysql> insert into geom set g=geomfromtext('point(1000 1000)');
Query OK, 1 row affected (0.00 sec)
mysql> select * from geom;
+---------------------------+
| g                         |
+---------------------------+
|              ??      ??   |
|             @?@     @?@   |
+---------------------------+
2 rows in set (0.00 sec)
(The binary WKB values print as garbage in the terminal.)
mysql> select astext(g) from geom;
+------------------+
| astext(g)        |
+------------------+
| POINT(1 1)       |
| POINT(1000 1000) |
+------------------+
2 rows in set (0.00 sec)

A POINT column can be added to or dropped from an existing table:
ALTER TABLE geom ADD pt POINT;
ALTER TABLE geom DROP pt;

mysql> select g from geom where g = point(1,1);
+---------------------------+
| g                         |
+---------------------------+
|              ??      ??   |
+---------------------------+
1 row in set (0.00 sec)

mysql> help geometry;
Name: 'GEOMETRY'
Description:
MySQL provides a standard way of creating spatial columns for geometry
types, for example, with CREATE TABLE or ALTER TABLE. Currently,
spatial columns are supported for MyISAM, InnoDB, NDB, and ARCHIVE
tables. See also the annotations about spatial indexes under [HELP
SPATIAL].

URL: http://dev.mysql.com/doc/refman/5.6/en/creating-spatial-columns.html

Examples:
CREATE TABLE geom (g GEOMETRY);

mysql> select astext(g) from geom where g = point(1,1);
+------------+
| astext(g)  |
+------------+
| POINT(1 1) |
+------------+
1 row in set (0.00 sec)

To fill a POINT column from existing longitude/latitude columns:
UPDATE myTable
SET Coord = PointFromText(CONCAT('POINT(', myTable.DLong, ' ', myTable.DLat, ')'));

When creating and populating a table, the point data can be loaded directly from latitude/longitude values:
CREATE TABLE `table_with_a_point` (
`id` bigint(20) not null,
`location` point not NULL,
`latitude` float default NULL,
`longitude` float default NULL,
`value` int(11) not null,
PRIMARY KEY (`id`)
);
create spatial index table_with_a_point_index on table_with_a_point(location);

LOAD DATA LOCAL INFILE 'somedata.txt'
INTO TABLE table_with_a_point
COLUMNS TERMINATED BY ' ' LINES TERMINATED BY '\r\n'
(id, latitude, longitude, value)
set location = PointFromText(CONCAT('POINT(', latitude, ' ', longitude, ')'));

CREATE TABLE geom (g GEOMETRY NOT NULL, SPATIAL INDEX(g)) ENGINE=MyISAM;
ALTER TABLE geom ADD SPATIAL INDEX(g);
CREATE SPATIAL INDEX sp_index ON geom (g);

Find the points contained in a given rectangle:

mysql> set @poly='polygon((0 0,0 1001,1001 1001,1001 0,0 0))';
Query OK, 0 rows affected (0.00 sec)
Note the two levels of parentheses after polygon; with only a single pair the WKT is invalid and the query fails.
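A quick way to see the difference (a sketch; in MySQL 5.6 geomfromtext() returns NULL for WKT it cannot parse, while newer versions raise an error instead):
mysql> select geomfromtext('polygon(0 0,0 1001,1001 1001,1001 0,0 0)') is null;
-- returns 1: with a single pair of parentheses the WKT is rejected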

mysql> select astext(g) from geom where mbrcontains(geomfromtext(@poly),g);
+------------------+
| astext(g)        |
+------------------+
| POINT(1 1)       |
| POINT(1000 1000) |
+------------------+
2 rows in set (0.00 sec)

mysql> set @poly='polygon((0 0,0 1000,1000 1000,1000 0,0 0))';
Query OK, 0 rows affected (0.00 sec)

mysql> select astext(g) from geom where mbrcontains(geomfromtext(@poly),g);
+------------------+
| astext(g)        |
+------------------+
| POINT(1 1)       |
| POINT(1000 1000) |
+------------------+
2 rows in set (0.00 sec)
(MBRContains is boundary-inclusive, so POINT(1000 1000) still matches this 1000x1000 rectangle.)

mysql> set @poly='polygon((0 0,0 100,100 100,100 0,0 0))';
Query OK, 0 rows affected (0.00 sec)

mysql> select astext(g) from geom where mbrcontains(geomfromtext(@poly),g);
+------------+
| astext(g)  |
+------------+
| POINT(1 1) |
+------------+
1 row in set (0.00 sec)


Reference:

http://dev.mysql.com/doc/refman/5.6/en/using-spatial-data.html


linuxmint 16 mirrors in China


http://abloz.com

2014.1.22


First back up /etc/apt/sources.list and the files under /etc/apt/sources.list.d, then edit sources.list to the following and run sudo apt-get update:

deb http://mirrors.oschina.net/linuxmint/ petra main upstream import

deb http://mirrors.oschina.net/ubuntu/ saucy main restricted universe multiverse

deb http://mirrors.oschina.net/ubuntu/ saucy-backports main restricted universe multiverse
deb http://mirrors.oschina.net/ubuntu/ saucy-proposed main restricted universe multiverse
deb http://mirrors.oschina.net/ubuntu/ saucy-security main restricted universe multiverse
deb http://mirrors.oschina.net/ubuntu/ saucy-updates main restricted universe multiverse
deb http://archive.canonical.com/ubuntu/ saucy partner
deb http://ftp.stust.edu.tw/pub/Linux/LinuxMint/packages/ petra main upstream import
deb http://extra.linuxmint.com petra main


The Ubuntu entries can also be swapped for the sohu or 163 mirrors.
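For example, the 163 versions of the Ubuntu lines would look like this (only the host changes):

deb http://mirrors.163.com/ubuntu/ saucy main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ saucy-security main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ saucy-updates main restricted universe multiverse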



Trying out kafka

Zhou Haihan
2013.10.29
kafka is a distributed publish-subscribe messaging system developed at LinkedIn in 2010; it has since been open-sourced and donated to Apache, where it is now a top-level project. kafka is written in Scala and is known for real-time message handling with low I/O overhead, so it is widely used both for real-time big-data pipelines and for offline message processing.
Below is a trial run of kafka on a single machine.
Download the latest kafka release, unpack it, and mv it to a kafka directory.
[andy@s41 kafka]$ ./sbt update
[andy@s41 kafka]$ ./sbt package
[andy@s41 kafka]$ ls -l
total 468
drwxrwxr-x  3 andy andy   4096 Sep 26 17:29 bin
drwxrwxr-x  2 andy andy   4096 Sep 26 17:27 config
drwxrwxr-x  5 andy andy   4096 Sep 26 18:10 contrib
drwxrwxr-x  4 andy andy   4096 Sep 26 18:10 core
-rw-rw-r--  1 andy andy    678 Sep 26 17:27 DISCLAIMER
drwxrwxr-x  5 andy andy   4096 Sep 27 10:48 examples
-rw-rw-r--  1 andy andy   4157 Sep 26 17:27 kafka-patch-review.py
drwxrwxr-x  2 andy andy   4096 Sep 26 17:27 lib
-rw-rw-r--  1 andy andy  12932 Sep 26 17:27 LICENSE
-rw-rw-r--  1 andy andy      0 Oct 29 17:38 log-cleaner.log
drwxrwxr-x  2 andy andy   4096 Oct 29 17:50 logs
-rw-rw-r--  1 andy andy    162 Sep 26 17:27 NOTICE
drwxrwxr-x  5 andy andy   4096 Sep 26 18:10 perf
drwxrwxr-x  5 andy andy   4096 Sep 26 17:43 project
-rw-rw-r--  1 andy andy 387643 Sep 26 17:28 rat.out
-rw-rw-r--  1 andy andy   2293 Sep 26 17:27 README.md
-rwxrwxr-x  1 andy andy    890 Sep 26 17:27 sbt
-rw-rw-r--  1 andy andy    900 Sep 26 17:27 sbt.bat
drwxrwxr-x 10 andy andy   4096 Sep 26 17:27 system_test
drwxrwxr-x  5 andy andy   4096 Oct 29 16:56 target
[andy@s41 kafka]$ ./bin/kafka-server-start.sh config/server.properties
Exception in thread "main" java.lang.NoClassDefFoundError: scala/ScalaObject
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.ClassNotFoundException: scala.ScalaObject
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 25 more
Please build the project using sbt. Documentation is available at http://kafka.apache.org/
The dependencies still need to be packaged:
[andy@s41 kafka]$  ./sbt assembly-package-dependency
If you already have ZooKeeper running you can skip the next command; otherwise start the bundled one:
[andy@s41 kafka]$ ./bin/zookeeper-server-start.sh config/zookeeper.properties
Start the server:
[andy@s41 kafka]$ ./bin/kafka-server-start.sh config/server.properties
Create a topic:
[andy@s41 kafka]$ ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
List topics:
[andy@s41 kafka]$ ./bin/kafka-topics.sh --list --zookeeper localhost:2181
test
Subscribe to the topic:
[andy@s41 kafka]$ ./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
Produce some messages to the topic:
[andy@s41 kafka]$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
hello
This is the second message.
How are you?
hello kafka.
The subscribed messages then show up on the consumer in real time:
[andy@s41 kafka]$ ./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
hello
This is the second message.
How are you?
hello kafka.
Or read the topic from the beginning with:
[andy@s41 kafka]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This is a message.
This is another message.
This is the third message. hello kafka.
hello
This is the second message.
How are you?
hello kafka.

Building the hadoop 1.1.2 eclipse plugin

Zhou Haihan
2013.10.24
Environment
[andy@s41 ~]$ echo $JAVA_HOME
/usr/java/jdk1.6.0_45
[andy@s41 ~]$ uname -a
Linux s41 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[andy@s41 ~]$ cat /etc/redhat-release
CentOS release 6.4 (Final)
Download the latest eclipse:
wget http://mirror.bit.edu.cn/eclipse/technology/epp/downloads/release/kepler/SR1/eclipse-standard-kepler-SR1-linux-gtk-x86_64.tar.gz
or download eclipse SDK 4.3.1.
Unpack it into the /home/andy directory.
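Roughly (using the tarball downloaded above):
[andy@s41 ~]$ tar zxvf eclipse-standard-kepler-SR1-linux-gtk-x86_64.tar.gz -C /home/andy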
Enter the plugin source directory:
[andy@s41 eclipse-plugin]$ pwd

/home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin

Edit build.properties, adding eclipse.home and version:
[andy@s41 eclipse-plugin]$ vi build.properties
output.. = bin/
bin.includes = META-INF/,\
               plugin.xml,\
               resources/,\
               classes/,\
               classes/,\
               lib/

eclipse.home=/home/andy/eclipse/
version=1.1.2

Edit build.xml: add a fileset for the hadoop jars, and copy the two jars from their moved locations:
<path id="eclipse-sdk-jars">
  <fileset dir="../../../">
    <include name="hadoop*.jar"/>
  </fileset>
</path>

<target name="jar" depends="compile" unless="skip.contrib">
  <!--
  <copy file="${hadoop.root}/build/hadoop-core-${version}.jar" tofile="${build.dir}/lib/hadoop-core.jar" verbose="true"/>
  <copy file="${hadoop.root}/build/ivy/lib/Hadoop/common/commons-cli-${commons-cli.version}.jar" todir="${build.dir}/lib" verbose="true"/>
  -->
  <copy file="${hadoop.root}/hadoop-core-${version}.jar" tofile="${build.dir}/lib/hadoop-core.jar" verbose="true"/>
  <copy file="${hadoop.root}/lib/commons-cli-${commons-cli.version}.jar" todir="${build.dir}/lib" verbose="true"/>
[andy@s41 eclipse-plugin]$ ant
[andy@s41 hadoop-1.1.2]$ cp build/contrib/eclipse-plugin/hadoop-eclipse-plugin-1.1.2.jar ~/eclipse/plugins/
Earlier I ran into the errors below, which appear to come from a mismatch between the eclipse release and the SDK version (the org.eclipse.jdt packages were missing):
compile:
[echo] contrib: eclipse-plugin
[javac] Compiling 45 source files to /home/andy/hadoop-1.1.2/build/contrib/eclipse-plugin/classes
[javac] /home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/HadoopPerspectiveFactory.java:22: package org.eclipse.jdt.ui does not exist
[javac] import org.eclipse.jdt.ui.JavaUI;
[javac]                          ^
[javac] /home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/MapReduceNature.java:35: package org.eclipse.jdt.core does not exist
[javac] import org.eclipse.jdt.core.IClasspathEntry;
[javac]                            ^
[javac] /home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/MapReduceNature.java:36: package org.eclipse.jdt.core does not exist
[javac] import org.eclipse.jdt.core.IJavaProject;
[javac]                            ^
[javac] /home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/MapReduceNature.java:37: package org.eclipse.jdt.core does not exist
[javac] import org.eclipse.jdt.core.JavaCore;
[javac]                            ^
[javac] /home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/NewDriverWizard.java:24: package org.eclipse.jdt.core does not exist
[javac] import org.eclipse.jdt.core.IJavaElement;
[javac]                            ^
[javac] /home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/NewDriverWizard.java:25: package org.eclipse.jdt.internal.ui.wizards does not exist
[javac] import org.eclipse.jdt.internal.ui.wizards.NewElementWizard;
[javac]                                           ^
[javac] /home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/NewDriverWizard.java:36: cannot find symbol
[javac] symbol: class NewElementWizard
[javac] public class NewDriverWizard extends NewElementWizard implements INewWizard,
[javac]                                      ^
[javac] /home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/NewDriverWizardPage.java:28: package org.eclipse.jdt.core does not exist
[javac] import org.eclipse.jdt.core.IType;
[javac]                            ^
[javac] /home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/NewDriverWizardPage.java:29: package org.eclipse.jdt.core does not exist
[javac] import org.eclipse.jdt.core.JavaModelException;
[javac]                            ^
[javac] /home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/NewDriverWizardPage.java:30: package org.eclipse.jdt.core.search does not exist
[javac] import org.eclipse.jdt.core.search.SearchEngine;
[javac]                                   ^
[javac] /home/andy/hadoop-1.1.2/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/NewDriverWizardPage.java:31: package org.eclipse.jdt.ui does not exist

Learn programming in an online VM?

Zhou Haihan

2013.10.24

A full web UI, with an editor, a terminal, various programming languages (php, python, java, android), and a virtual machine.

Really powerful, and completely free. I gave it a try and it's quite good.

https://koding.com/R/ablozhou

Another one:

It supports python, nodejs, go, ruby and other languages, plus a shell and an editor.

https://www.nitrous.io/join/RdOR6fa0zRE

Web interfaces are getting more and more powerful.


Building hadoop 2.2.0

Zhou Haihan

2013.10.17

Hadoop 2.2 is the first stable release of Hadoop 2, i.e. YARN, and it also addresses the single-point-of-failure problem.

Install maven

Unpack it and put it under /usr/local.
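For example (a sketch; the version comes from the settings.xml path below, and the tarball name assumes the standard binary distribution):
[andy@s41 ~]$ sudo tar zxvf apache-maven-3.1.1-bin.tar.gz -C /usr/local
[andy@s41 ~]$ export PATH=/usr/local/apache-maven-3.1.1/bin:$PATH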
Add the domestic oschina maven mirror:
[andy@s41 ~]$ sudo vi /usr/local/apache-maven-3.1.1/conf/settings.xml
  <mirror>
<id>nexus-osc</id>
<mirrorOf>*</mirrorOf>
<name>Nexus osc</name>
<url>http://maven.oschina.net/content/groups/public/</url>
</mirror>

Download the hadoop 2.2 source

[andy@s41 ~]$ wget http://mirrors.cnnic.cn/apache/hadoop/common/stable2/hadoop-2.2.0-src.tar.gz
[andy@s41 ~]$ cd hadoop-2.2.0-src
[andy@s41 hadoop-2.2.0-src]$

Build

[andy@s41 hadoop-2.2.0-src]$ mvn clean install -DskipTests
[INFO] --- hadoop-maven-plugins:2.2.0:protoc (compile-protoc) @ hadoop-common ---
[WARNING] [protoc, --version] failed: java.io.IOException: Cannot run program "protoc": java.io.IOException: error=2, No such file or directory
[ERROR] Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.2.0:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 'protoc --version' did not return a version -> [Help 1]

Build and install protobuf

[andy@s41 ~]$ tar jxvf protobuf-2.5.0.tar.bz2
[andy@s41 protobuf-2.5.0]$ ./configure
[andy@s41 protobuf-2.5.0]$ make
[andy@s41 protobuf-2.5.0]$ sudo make install
[andy@s41 protobuf-2.5.0]$ protoc --version
protoc: error while loading shared libraries: libprotobuf.so.8: cannot open shared object file: No such file or directory
[root@s41 protobuf-2.5.0]# ls /usr/local/lib
libhiredis.a        libltdl.a     libltdl.so.3.1.0    libprotobuf-lite.la        libprotobuf.so        libprotoc.la        liby.a
libhiredis.so       libltdl.la    libprotobuf.a       libprotobuf-lite.so        libprotobuf.so.8      libprotoc.so        pkgconfig
libhiredis.so.0     libltdl.so    libprotobuf.la      libprotobuf-lite.so.8      libprotobuf.so.8.0.0  libprotoc.so.8
libhiredis.so.0.10  libltdl.so.3  libprotobuf-lite.a  libprotobuf-lite.so.8.0.0  libprotoc.a           libprotoc.so.8.0.0
The loader can't find the freshly installed library, so add /usr/local/lib to /etc/ld.so.conf and refresh the cache:
[root@s41 protobuf-2.5.0]# vi /etc/ld.so.conf
/usr/local/lib
[root@s41 protobuf-2.5.0]# ldconfig
[andy@s41 protobuf-2.5.0]$ protoc --version
libprotoc 2.5.0
[andy@s41 hadoop-2.2.0-src]$ mvn install -DskipTests

[INFO] Apache Hadoop Main ............................... SUCCESS [0.947s]
[INFO] Apache Hadoop Project POM ........................ SUCCESS [0.294s]
[INFO] Apache Hadoop Annotations ........................ SUCCESS [0.474s]
[INFO] Apache Hadoop Project Dist POM ................... SUCCESS [0.287s]
[INFO] Apache Hadoop Assemblies ......................... SUCCESS [0.106s]
[INFO] Apache Hadoop Maven Plugins ...................... SUCCESS [0.937s]
[INFO] Apache Hadoop Auth ............................... SUCCESS [0.248s]
[INFO] Apache Hadoop Auth Examples ...................... SUCCESS [0.318s]
[INFO] Apache Hadoop Common ............................. SUCCESS [17.582s]
[INFO] Apache Hadoop NFS ................................ SUCCESS [1.364s]
[INFO] Apache Hadoop Common Project ..................... SUCCESS [0.016s]
[INFO] Apache Hadoop HDFS ............................... SUCCESS [39.854s]
[INFO] Apache Hadoop HttpFS ............................. SUCCESS [1.544s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SUCCESS [1.494s]
[INFO] Apache Hadoop HDFS-NFS ........................... SUCCESS [0.189s]
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [0.017s]
[INFO] hadoop-yarn ...................................... SUCCESS [5.859s]
[INFO] hadoop-yarn-api .................................. SUCCESS [2.837s]
[INFO] hadoop-yarn-common ............................... SUCCESS [1.263s]
[INFO] hadoop-yarn-server ............................... SUCCESS [0.045s]
[INFO] hadoop-yarn-server-common ........................ SUCCESS [0.458s]
[INFO] hadoop-yarn-server-nodemanager ................... SUCCESS [0.776s]
[INFO] hadoop-yarn-server-web-proxy ..................... SUCCESS [0.192s]
[INFO] hadoop-yarn-server-resourcemanager ............... SUCCESS [0.952s]
[INFO] hadoop-yarn-server-tests ......................... SUCCESS [0.150s]
[INFO] hadoop-yarn-client ............................... SUCCESS [0.239s]
[INFO] hadoop-yarn-applications ......................... SUCCESS [0.032s]
[INFO] hadoop-yarn-applications-distributedshell ........ SUCCESS [0.155s]
[INFO] hadoop-mapreduce-client .......................... SUCCESS [0.028s]
[INFO] hadoop-mapreduce-client-core ..................... SUCCESS [1.472s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ... SUCCESS [0.124s]
[INFO] hadoop-yarn-site ................................. SUCCESS [0.047s]
[INFO] hadoop-yarn-project .............................. SUCCESS [1.431s]
[INFO] hadoop-mapreduce-client-common ................... SUCCESS [1.460s]
[INFO] hadoop-mapreduce-client-shuffle .................. SUCCESS [0.140s]
[INFO] hadoop-mapreduce-client-app ...................... SUCCESS [0.718s]
[INFO] hadoop-mapreduce-client-hs ....................... SUCCESS [0.320s]
[INFO] hadoop-mapreduce-client-jobclient ................ SUCCESS [1.065s]
[INFO] hadoop-mapreduce-client-hs-plugins ............... SUCCESS [0.104s]
[INFO] Apache Hadoop MapReduce Examples ................. SUCCESS [0.292s]
[INFO] hadoop-mapreduce ................................. SUCCESS [0.035s]
[INFO] Apache Hadoop MapReduce Streaming ................ SUCCESS [0.243s]
[INFO] Apache Hadoop Distributed Copy ................... SUCCESS [31.506s]
[INFO] Apache Hadoop Archives ........................... SUCCESS [0.138s]
[INFO] Apache Hadoop Rumen .............................. SUCCESS [0.296s]
[INFO] Apache Hadoop Gridmix ............................ SUCCESS [0.330s]
[INFO] Apache Hadoop Data Join .......................... SUCCESS [0.132s]
[INFO] Apache Hadoop Extras ............................. SUCCESS [0.182s]
[INFO] Apache Hadoop Pipes .............................. SUCCESS [0.011s]
[INFO] Apache Hadoop Tools Dist ......................... SUCCESS [0.185s]
[INFO] Apache Hadoop Tools .............................. SUCCESS [0.011s]
[INFO] Apache Hadoop Distribution ....................... SUCCESS [0.043s]
[INFO] Apache Hadoop Client ............................. SUCCESS [0.106s]
[INFO] Apache Hadoop Mini-Cluster ....................... SUCCESS [0.054s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:00.410s
[INFO] Finished at: Thu Oct 17 15:26:18 CST 2013
[INFO] Final Memory: 95M/1548M


Online courses for data analysis

Zhou Haihan, 2013.9.24

coursera (http://www.coursera.org) is a free online learning site founded by Andrew Ng (吴恩达), a young Chinese-American professor at Stanford. Ng directs the Stanford AI Lab and is the lead lecturer for the university's machine learning course, with a strong reputation in the field. Recommended if you are interested.
Andrew Ng's instructor page: https://www.coursera.org/instructor/~35

  • Machine Learning: 10/14/2013 (10 weeks). Learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself.
  • Social Network Analysis: 10/07/2013 (9 weeks). This course will use social network analysis, both its theory and computational tools, to make sense of the social and information networks...
  • Computing for Data Analysis: 09/23/2013 (4 weeks). This course is about learning the fundamental computing skills necessary for effective data analysis. You will learn to program in R and to...


linux vpn connection

Zhou Haihan, 2013.09.23

[root@s1 andy]# cat /etc/redhat-release
CentOS release 6.4 (Final)
[root@s1 andy]# uname -a

Linux s1 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Update EPEL:

[root@s1 network-scripts]# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[root@s1 andy]# yum install ppp pptp pptp-setup

1. pptpsetup --create <tunnel> --server <server> [--domain <domain>] --username <user> [--password <password>] [--encrypt] [--start] creates a VPN tunnel.
2. pptpsetup --delete <tunnel> deletes a tunnel.

[root@s1 andy]# pptpsetup --create myvpn --server ip --username myuser --password mypasswd --encrypt --start

[root@s1 andy]# route add -net 10.0.0.0/8 gw 10.10.10.244 dev ppp0

[root@s1 andy]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
114.113.96.80 localhost 255.255.255.255 UGH 0 0 0 eth0
localhost * 255.255.255.255 UH 0 0 0 ppp0
192.168.10.0 * 255.255.255.0 U 0 0 0 eth0
link-local * 255.255.0.0 U 1002 0 0 eth0
10.0.0.0 localhost 255.0.0.0 UG 0 0 0 ppp0
default localhost 0.0.0.0 UG 0 0 0 eth0

[andy@s1 ~]$ ping 10.10.20.11
PING 10.10.20.11 (10.10.20.11) 56(84) bytes of data.
^C
--- 10.10.20.11 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3334ms

The pings fail until the stale default route is removed:
[root@s1 andy]# route del default

[root@s1 andy]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
114.113.96.80 192.168.10.1 255.255.255.255 UGH 0 0 0 eth0
10.10.10.11 * 255.255.255.255 UH 0 0 0 ppp0
192.168.10.0 * 255.255.255.0 U 0 0 0 eth0
link-local * 255.255.0.0 U 1002 0 0 eth0
10.0.0.0 10.10.10.244 255.0.0.0 UG 0 0 0 ppp0

[root@s1 andy]# ping 10.10.20.11
PING 10.10.20.11 (10.10.20.11) 56(84) bytes of data.
64 bytes from 10.10.20.11: icmp_seq=1 ttl=63 time=8.25 ms
64 bytes from 10.10.20.11: icmp_seq=2 ttl=63 time=9.34 ms
64 bytes from 10.10.20.11: icmp_seq=3 ttl=63 time=6.80 ms
64 bytes from 10.10.20.11: icmp_seq=4 ttl=63 time=9.96 ms
^C
--- 10.10.20.11 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3533ms
