Add these switches to the Eclipse launch parameters to clear cached bundle data and persisted window state:

eclipse.exe -clean -refresh -clearPersistedState
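The same switches can be made permanent in eclipse.ini; a minimal sketch (launcher arguments must appear above the -vmargs line):

-clean
-refresh
-clearPersistedState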

Maven mirror site

maven 2016. 3. 5. 15:38

The settings.xml below routes every repository and plugin-repository request through an internal Nexus public group. The profile re-declares central with the conventional dummy URL http://central (the mirror intercepts it anyway) so that snapshot artifacts are also enabled.
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
 
  <mirrors>
    <mirror>
      <id>beany-nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.dev.beany.co.kr/content/groups/public</url>
    </mirror>
  </mirrors>
 
  <profiles>
    <profile>
      <id>beany-nexus</id>
      <repositories>
        <repository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
 
      <pluginRepositories>
        <pluginRepository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </pluginRepository>
      </pluginRepositories>
 
    </profile>
  </profiles>
 
  <activeProfiles>
    <activeProfile>beany-nexus</activeProfile>
  </activeProfiles>
 
</settings>
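To verify that the mirror is actually applied, dump the effective settings with the standard maven-help-plugin goal; every repository in the output should resolve to the Nexus group above:

$ mvn help:effective-settings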



# Create indices

curl -XPUT 'http://localhost:9200/epo_backuplog_meta'

curl -XPUT 'http://localhost:9200/m14_201602011000'
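The PUT body may also carry index settings at creation time; a sketch with illustrative shard and replica counts:

curl -XPUT 'http://localhost:9200/m14_201602011000' -d '{
  "settings" : { "number_of_shards" : 5, "number_of_replicas" : 1 }
}'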



# Delete indices, types, and documents

curl -XDELETE 'http://localhost:9200/epo_backuplog_meta/extract'

curl -XDELETE 'http://localhost:9200/epo_backuplog_meta/archive'

curl -XDELETE 'http://localhost:9200/epo_backuplog_meta/archive/p9_*'

curl -XDELETE 'http://localhost:9200/m14*/'

curl -XDELETE 'http://localhost:9200/info_*/'

curl -XDELETE 'http://localhost:9200/epo_server_info/epo_server_info/M14_defaultwip'


# Cluster shutdown (disable allocation first, re-enable after restart)

curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "cluster.routing.allocation.enable" : "none"
  }
}'


curl -XPOST 'http://localhost:9200/_shutdown'                        # all nodes in the cluster

curl -XPOST 'http://localhost:9200/_cluster/nodes/_local/_shutdown'  # this node only


curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "cluster.routing.allocation.enable" : "all"
  }
}'
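Disabling allocation before the restart keeps the cluster from rebalancing shards while nodes are down; it is re-enabled once they are back. Note that the _shutdown API only exists through Elasticsearch 1.x; from 2.0 onward the process is stopped at the OS level. On 1.6+, a synced flush just before stopping makes shard recovery much faster (a sketch):

curl -XPOST 'http://localhost:9200/_flush/synced'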



# Open / close indices

curl -XPOST 'http://localhost:9200/m14_*/_close'

curl -XPOST 'http://localhost:9200/m14_*/_open'
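The _cat API shows whether each index is open or closed (status column):

curl 'http://localhost:9200/_cat/indices?v'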


# Update a document

curl -XPOST 'http://localhost:9200/epo_backuplog_meta/archive/m14_201507230000/_update' -d '{"doc": {"clusters":{"vmEngine2":{"isStandby":true, "isComplete":false}}}}'


# Optimize an index

curl -XPOST 'http://localhost:9200/m14_*/_optimize'
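For indices that no longer receive writes, merging down to a single segment gives the biggest gain (standard _optimize parameter):

curl -XPOST 'http://localhost:9200/m14_*/_optimize?max_num_segments=1'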


# Move a shard

curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands" : [ {
    "move" : {
      "index" : ".kibana", "shard" : 0,
      "from_node" : "zgjlZCqJQreDYJv28_3JWQ", "to_node" : "MNqXxLywSDKCPniBSuZujg"
    }
  } ]
}'
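The from_node and to_node values are internal node IDs, not node names; they appear as the keys of the nodes map in the nodes info API:

curl 'http://localhost:9200/_nodes?pretty'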


# Install plugins (ES 1.x plugin script)

plugin -install lmenezes/elasticsearch-kopf/1.5.5

plugin -install mobz/elasticsearch-head

plugin -install lukas-vlcek/bigdesk

plugin -install jettro/elasticsearch-gui

plugin -install polyfractal/elasticsearch-inquisitor
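After a node restart, site plugins such as kopf and head are served by the node itself, e.g.:

http://localhost:9200/_plugin/kopf/
http://localhost:9200/_plugin/head/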

############################## Network And HTTP ###############################

# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (The range means that if a port is busy, it will automatically
# try the next one.)

# Set the bind address specifically (IPv4 or IPv6):
network.bind_host: 0.0.0.0

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
network.publish_host: 192.168.10.244

# Set both 'bind_host' and 'publish_host':
#network.host: 192.168.0.1

# Set a custom port for node-to-node communication (9300 by default):
transport.tcp.port: 9300

# Enable compression for all communication between nodes (disabled by default):
#transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
http.bind_host: 0.0.0.0
http.publish_host: 192.168.10.244
http.port: 9200

# Set a custom allowed content length:
#http.max_content_length: 100mb

# Disable HTTP completely:
#http.enabled: false
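A quick sanity check that the node answers on the published address (host as configured above):

curl http://192.168.10.244:9200/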


Installing Apache Zeppelin

Requirements

  • Java 7+
  • Maven
  • Node.js Package Manager
  • Git

Get the source

$ git clone https://github.com/apache/incubator-zeppelin

Register PATH

export ZEPPELIN_HOME={Zeppelin directory}
export PATH=$ZEPPELIN_HOME/bin:$PATH

Change the Maven POMs

  • Edit spark/pom.xml

    $ vi $ZEPPELIN_HOME/spark/pom.xml
    
    <properties>
      <spark.version>1.5.2</spark.version>
      <scala.version>2.11.5</scala.version>
      <scala.binary.version>2.11</scala.binary.version>
      <hadoop.version>2.6.0</hadoop.version>
      <py4j.version>0.8.2.1</py4j.version>
    </properties>
    
    <dependency>
      <groupId>org.elasticsearch</groupId>
      <artifactId>elasticsearch-hadoop</artifactId>
      <version>2.2.0-beta1</version>
      <exclusions>
        <exclusion>
          <groupId>org.slf4j</groupId>
          <artifactId>log4j-over-slf4j</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.apache.spark</groupId>
          <artifactId>spark-core_2.10</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.apache.spark</groupId>
          <artifactId>spark-sql_2.10</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    
  • Edit spark-dependency/pom.xml

    $ vi $ZEPPELIN_HOME/spark-dependency/pom.xml
    
    <properties>
      <spark.version>1.5.2</spark.version>
      <scala.version>2.11.5</scala.version>
      <scala.binary.version>2.11</scala.binary.version>
    
      <hadoop.version>2.6.0</hadoop.version>
      <yarn.version>${hadoop.version}</yarn.version>
      <avro.version>1.7.7</avro.version>
      <avro.mapred.classifier></avro.mapred.classifier>
      <jets3t.version>0.7.1</jets3t.version>
      <protobuf.version>2.4.1</protobuf.version>
    
      <akka.group>org.spark-project.akka</akka.group>
      <akka.version>2.3.4-spark</akka.version>
    
      <spark.download.url>http://archive.apache.org/dist/spark/spark-${spark.version}/spark-${spark.version}.tgz</spark.download.url>
      <py4j.version>0.8.2.1</py4j.version>
    </properties>
    

Maven build

  • Run the build
    $ cd $ZEPPELIN_HOME
    $ mvn clean package install -Pspark-1.5 -Dspark.version=1.5.2 -Phadoop-2.6 -Dhadoop.version=2.6.0 -Pyarn -Ppyspark -DskipTests
    

Change config

  • Create the config files from the templates

    $ mv $ZEPPELIN_HOME/conf/zeppelin-env.sh.template $ZEPPELIN_HOME/conf/zeppelin-env.sh
    $ mv $ZEPPELIN_HOME/conf/zeppelin-site.xml.template $ZEPPELIN_HOME/conf/zeppelin-site.xml
    
  • Edit zeppelin-env.sh

    $ vi $ZEPPELIN_HOME/conf/zeppelin-env.sh
    
    export SPARK_HOME=/home/logvadmin/spark-1.5.2-bin-hadoop2.6
    
  • Edit the Zeppelin web settings

    $ vi $ZEPPELIN_HOME/conf/zeppelin-site.xml

    <property>
      <name>zeppelin.server.addr</name>
      <value>192.168.10.251</value>
      <description>Server address</description>
    </property>
    <property>
      <name>zeppelin.server.port</name>
      <value>9090</value>
      <description>Server port.</description>
    </property>
    

    Change the server address and port to match your environment.

Run Zeppelin

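Zeppelin ships a daemon script for starting and stopping the server. A minimal sketch, assuming $ZEPPELIN_HOME/bin is on the PATH as registered above:

$ zeppelin-daemon.sh start

The web UI should then answer at http://192.168.10.251:9090, the address and port set in zeppelin-site.xml above.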


Installing Apache Spark

Bigdata 2016. 1. 20. 11:06

Requirements

  • Java 7+
  • Scala 2.10.x

Download the release

$ wget http://apache.mirror.cdnetworks.com/spark/spark-1.5.2/spark-1.5.2-bin-hadoop2.6.tgz
$ tar zxvf spark-1.5.2-bin-hadoop2.6.tgz

Register PATH

export SPARK_HOME={Spark Directory}
export PATH=$SPARK_HOME/sbin:$PATH

Change config

  • Copy the config files

    $ cp $SPARK_HOME/conf/spark-defaults.conf.template $SPARK_HOME/conf/spark-defaults.conf
    $ cp $SPARK_HOME/conf/slaves.template $SPARK_HOME/conf/slaves
    
  • Edit spark-defaults.conf

    $ vi $SPARK_HOME/conf/spark-defaults.conf
    
    Property                         Description           Value
    spark.master                     Master URL            spark://192.168.10.251:7077
    spark.eventLog.enabled           Enable event logging  true
    spark.eventLog.dir               Event log directory   $SPARK_HOME/logs
    spark.serializer                 Serializer class      org.apache.spark.serializer.KryoSerializer
    spark.driver.memory              Driver memory         2g
    spark.executor.extraJavaOptions  Extra JVM options     -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  • Edit slaves

    $ vi $SPARK_HOME/conf/slaves
    

    Enter the hostname or IP address of each slave, one per line (see the sketch below).
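A minimal sketch of the two resulting files; the worker hostnames are placeholders, and a literal path is used for spark.eventLog.dir since spark-defaults.conf does not expand shell variables (the directory must exist before starting):

    $ cat $SPARK_HOME/conf/spark-defaults.conf
    spark.master            spark://192.168.10.251:7077
    spark.eventLog.enabled  true
    spark.eventLog.dir      /home/logvadmin/spark-1.5.2-bin-hadoop2.6/logs
    spark.serializer        org.apache.spark.serializer.KryoSerializer
    spark.driver.memory     2g

    $ cat $SPARK_HOME/conf/slaves
    worker-01
    worker-02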

Run the Spark cluster

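The standalone cluster scripts live in $SPARK_HOME/sbin, already on the PATH registered above. start-all.sh launches the master on the local machine and one worker on every host listed in conf/slaves (over passwordless SSH):

$ start-all.sh

The standalone master web UI defaults to port 8080, e.g. http://192.168.10.251:8080.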


Installing Node.js

Linux 2016. 1. 7. 11:01

Requirements

  • apt-get install build-essential

Get the source

$ git clone https://github.com/nodejs/node

Register PATH

export NODEJS_HOME="nodejs_path"
export PATH=$PATH:./:$NODEJS_HOME/bin

Set the install prefix

$ ./configure --prefix=$NODEJS_HOME

Build the source

$ make
$ make install

Test

$ node -v
v6.0.0-pre
$ npm -v
3.3.12



GXT development environment

Etc 2015. 12. 21. 13:07

Install the Eclipse plugin

gwt-plugin - http://storage.googleapis.com/gwt-eclipse-plugin/release

GXT Maven archetype

Archetype Property    Archetype Value
Repository            https://oss.sonatype.org/content/repositories/snapshots
GroupId               com.sencha.gxt.archetypes
ArtifactId            gxt-basic-public-3x-archetype
Version               1.0.0-SNAPSHOT
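These coordinates can be fed straight to the archetype plugin instead of the Eclipse wizard; a sketch of the equivalent command line:

mvn archetype:generate \
  -DarchetypeRepository=https://oss.sonatype.org/content/repositories/snapshots \
  -DarchetypeGroupId=com.sencha.gxt.archetypes \
  -DarchetypeArtifactId=gxt-basic-public-3x-archetype \
  -DarchetypeVersion=1.0.0-SNAPSHOT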


Registering Tomcat 6 as a Windows service

Etc 2015. 12. 21. 12:44

Register the service

cd \apache-tomcat-6.0.39\bin
service.bat install tomcat6

Register PATH for the service (//US// updates the installed service's configuration through the procrun wrapper)

cd \apache-tomcat-6.0.39\bin
tomcat6 //US//tomcat6 --Environment PATH=C:\tibco\tibrv\8.3x86\bin;%PATH%
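Once installed, the service can be started and checked from a console (standard Windows service commands):

net start tomcat6
sc query tomcat6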

