HBase master/slave configuration files | How to import data from one HBase into another HBase

❶ How to set up an HBase development environment with Eclipse and Maven

The steps are as follows:

1. Copy a set of HBase deployment files from the HBase cluster to a directory on the development machine (for example /app/hadoop/hbase096).

2. Create a new Java project named HBase in Eclipse, open the project properties, go to Libraries -> Add External JARs..., and select the relevant JARs under /app/hadoop/hbase096/lib. If it is only for testing, keep it simple and select all of them.

3. Add a folder named conf to the HBase project and copy the cluster's hbase-site.xml into it; then in the project properties go to Libraries -> Add Class Folder and select the conf folder you just added.

4. Add a package chapter12 to the HBase project and a class HBaseTestCase in it, then copy in the chapter 12 code from《Hadoop实战第2版》(Hadoop in Action, 2nd edition) with a few adjustments:

package chapter12;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTestCase {
    // static HBaseConfiguration shared by all methods
    static Configuration cfg = HBaseConfiguration.create();

    // create a table via HBaseAdmin and HTableDescriptor
    public static void creat(String tablename, String columnFamily) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(cfg);
        if (admin.tableExists(tablename)) {
            System.out.println("table exists!");
            System.exit(0);
        } else {
            HTableDescriptor tableDesc = new HTableDescriptor(tablename);
            tableDesc.addFamily(new HColumnDescriptor(columnFamily));
            admin.createTable(tableDesc);
            System.out.println("create table success!");
        }
    }

    // add one cell to an existing table via HTable and Put
    public static void put(String tablename, String row, String columnFamily,
            String column, String data) throws Exception {
        HTable table = new HTable(cfg, tablename);
        Put p1 = new Put(Bytes.toBytes(row));
        p1.add(Bytes.toBytes(columnFamily), Bytes.toBytes(column), Bytes.toBytes(data));
        table.put(p1);
        System.out.println("put '" + row + "','" + columnFamily + ":" + column + "','" + data + "'");
    }

    // read one row by row key
    public static void get(String tablename, String row) throws IOException {
        HTable table = new HTable(cfg, tablename);
        Get g = new Get(Bytes.toBytes(row));
        Result result = table.get(g);
        System.out.println("Get: " + result);
    }

    // show all rows via a full-table Scan
    public static void scan(String tablename) throws Exception {
        HTable table = new HTable(cfg, tablename);
        Scan s = new Scan();
        ResultScanner rs = table.getScanner(s);
        for (Result r : rs) {
            System.out.println("Scan: " + r);
        }
    }

    // disable and drop a table; returns false on failure
    public static boolean delete(String tablename) throws IOException {
        HBaseAdmin admin = new HBaseAdmin(cfg);
        if (admin.tableExists(tablename)) {
            try {
                admin.disableTable(tablename);
                admin.deleteTable(tablename);
            } catch (Exception ex) {
                ex.printStackTrace();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String tablename = "hbase_tb";
        String columnFamily = "cf";
        try {
            HBaseTestCase.creat(tablename, columnFamily);
            HBaseTestCase.put(tablename, "row1", columnFamily, "cl1", "data");
            HBaseTestCase.get(tablename, "row1");
            HBaseTestCase.scan(tablename);
            /*
            if (HBaseTestCase.delete(tablename))
                System.out.println("Delete table:" + tablename + " success!");
            */
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

5. Set up a run configuration and run it. Start the HBase cluster before running.

6. Verify: inspect HBase with the hbase shell and you will find that the table hbase_tb has been created.
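Since the question mentions Maven: instead of hand-adding every jar under lib/ in step 2, the same classpath can be declared in the project's pom.xml. A minimal sketch, assuming an HBase 0.96.x cluster on Hadoop 2 (the version string is an assumption; match it to the jars your cluster ships):

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>0.96.2-hadoop2</version>
</dependency>

The conf folder trick in step 3 still applies either way: keeping hbase-site.xml on the classpath is how the client finds the cluster.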

❷ How to import an hbase .dat file with Java

Development environment
Hardware: three CentOS 6.5 servers (one Master node, two Slave nodes)
Software: Java 1.7.0_71, IDEA, Hadoop-2.6.2, HBase-1.1.4

Part 1: Generate the log file

Assume each log line has six columns separated by spaces, for example:

aaa 20.3.111.3 bbb user nothing 2016-05-01
www 22.3.201.7 ggg user nothing 2016-05-02
...

The log files live in HDFS under /in/.

Part 2: Create the Java project and import the jars from the HBase distribution

To cope with a large volume of data the import is implemented as a MapReduce job, but since it only inserts rows into the table and does no aggregation, only the Map function needs to be implemented.

1. First create a configuration file holding the user-defined parameters, MRDriver.properties:

#mapreduce
hbase.zookeeper.quorum=hadoop101
mapreduce.job.tracker=hadoop101:9001
#HTable
HTable.tableName=hbases4
HTable.tableName.colFamily=logs
#row-key positions read by HConfiguration below (the values here are placeholders)
HTable.rowkey.first=0
HTable.rowkey.second=1
#HDFS file
mapreduce.inputPath=hdfs://hadoop101:9000/in/*

2. Read the configuration, HConfiguration.java:

public class HConfiguration {
    public static String hbase_zookeeper_quorum;
    public static String mapreduce_job_tracker;
    // name of the table and column family to create
    public static String tableName;
    public static String colFamily;
    // position of the row key within a line
    public static int htable_rowkey_first;
    public static int htable_rowkey_second;
    public static String mapreduce_inputPath;

    static {
        try {
            InputStream in = MRDriver.class.getClassLoader().getResourceAsStream("MRDriver.properties");
            Properties props = new Properties();
            props.load(in);
            hbase_zookeeper_quorum = props.getProperty("hbase.zookeeper.quorum");
            mapreduce_job_tracker = props.getProperty("mapreduce.job.tracker");
            tableName = props.getProperty("HTable.tableName");
            colFamily = props.getProperty("HTable.tableName.colFamily");
            htable_rowkey_first = Integer.parseInt(props.getProperty("HTable.rowkey.first"));
            htable_rowkey_second = Integer.parseInt(props.getProperty("HTable.rowkey.second"));
            mapreduce_inputPath = props.getProperty("mapreduce.inputPath");
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }
}

3. Create a singleton class for the HBase operations, MRDriver.java:

public class MRDriver {
    private static MRDriver single = null;
    private static HTable table = null;

    public MRDriver() {
    }

    // static factory method
    public static MRDriver getInstance() {
        if (single == null) {
            single = new MRDriver();
        }
        return single;
    }

    // static shared configuration
    static Configuration conf = null;
    static {
        conf = HBaseConfiguration.create();
        // hbase.zookeeper.quorum: the list of machines in the zookeeper ensemble
        conf.set("hbase.zookeeper.quorum", HConfiguration.hbase_zookeeper_quorum);
        conf.set("hbase.zookeeper.property.clientPort", "2181");
    }

    // reuse one HTable instance to save resources
    public static HTable getHTable(String tableName) {
        try {
            if (table == null) {
                table = new HTable(conf, Bytes.toBytes(tableName));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return table;
    }

    /*
     * Create the table.
     * @tableName table name
     * @family column family
     */
    public void creatTable(String tableName, String family) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor(tableName);
        desc.addFamily(new HColumnDescriptor(family));
        if (admin.tableExists(tableName)) {
            MyMRDriver.logger.info("table exists!");
        } else {
            admin.createTable(desc);
            MyMRDriver.logger.info("create table success!");
        }
    }

    /*
     * Add one row to the table.
     * @rowKey row key
     * @tableName table name
     * @column list of column qualifiers
     * @value list of values, one per qualifier
     */
    @SuppressWarnings({"resource", "deprecation"})
    public void addData(String rowKey, String tableName, ArrayList<String> column, String[] value)
            throws IOException {
        Put put = new Put(Bytes.toBytes(rowKey)); // set the row key
        getHTable(tableName); // obtain the HTable used for reads and writes
        HColumnDescriptor columnFamilies = table.getTableDescriptor() // take the table's first column family
                .getColumnFamilies()[0];
        String familyName = columnFamilies.getNameAsString(); // column family name
        if (familyName.equals(HConfiguration.colFamily)) {
            // put one cell per qualifier into the configured family
            for (int j = 0; j < column.size(); j++) {
                put.add(Bytes.toBytes(familyName), Bytes.toBytes(column.get(j)), Bytes.toBytes(value[j]));
            }
        }
        table.put(put);
        MyMRDriver.logger.info("Add data success!");
    }
}

4. Implement the Map function, HMap.java:

public class HMap extends Mapper<LongWritable, Text, LongWritable, Text> {
    MRDriver myDriver = MRDriver.getInstance();
    // generated column qualifiers
    ArrayList<String> arrayList = new ArrayList<String>();
    // one input line split into fields
    String[] lineValue = new String[]{};
    int lineNum = 0;

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // split on spaces
        lineValue = value.toString().split(" ");
        if (lineNum == 0) { // name the qualifiers log0, log1, ... on the first line
            for (int i = 0; i < lineValue.length; i++) {
                arrayList.add("log" + i);
            }
            lineNum++;
        }
        // insert the row; getTime() is used as row key here for convenience,
        // which is not a good choice in real projects
        Date date = new Date();
        myDriver.addData(date.getTime() + "", HConfiguration.tableName, arrayList, lineValue);
    }
}

5. Implement the job driver's main method, MyMRDriver.java:

public class MyMRDriver {
    public static Logger logger = Logger.getLogger(MRDriver.class);

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // edit MRDriver.properties before running
        MRDriver myDriver = MRDriver.getInstance();
        Job job = new Job(new Configuration(), "HDFS2HBase2");
        job.setJarByClass(MyMRDriver.class);
        try {
            myDriver.creatTable(HConfiguration.tableName, HConfiguration.colFamily);
        } catch (Exception e) {
            e.printStackTrace();
        }
        // set the Map class (no Reduce is needed)
        job.setMapperClass(HMap.class);
        // set the input and output formats
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(NullOutputFormat.class);
        // set the input directory
        FileInputFormat.addInputPath(job, new Path(HConfiguration.mapreduce_inputPath));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

6. Run the main method to test; you can also run count in the hbase shell to check that the rows were inserted.
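As an aside: writing through a shared HTable inside the mapper works, but MapReduce jobs that write to HBase more commonly let the framework manage the table via TableOutputFormat. A hedged sketch of that variant for the same six-column input (HMap2 is a hypothetical class name; 'logs' is the column family from MRDriver.properties above):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class HMap2 extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split(" ");
        // the line's byte offset is used as row key purely for illustration
        Put put = new Put(Bytes.toBytes(Long.toString(key.get())));
        for (int i = 0; i < fields.length; i++) {
            put.add(Bytes.toBytes("logs"), Bytes.toBytes("log" + i), Bytes.toBytes(fields[i]));
        }
        // emit the Put; TableOutputFormat writes it to the table
        context.write(new ImmutableBytesWritable(put.getRow()), put);
    }
}

// in the driver, instead of NullOutputFormat:
job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, HConfiguration.tableName);
job.getConfiguration().set("hbase.zookeeper.quorum", HConfiguration.hbase_zookeeper_quorum); // the job conf must reach zookeeper too
job.setMapperClass(HMap2.class);
job.setOutputFormatClass(TableOutputFormat.class);
job.setNumReduceTasks(0); // map-only: the Puts go straight to the table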

❸ How to use HBase

HBase installation and basic usage

Having installed and verified hadoop 0.20.2 earlier, we move on to installing hbase 0.90.5. Because hbase 0.90.5 only supports jdk1.6, I uninstalled the previous jdk1.8 and installed jdk1.6 before starting.

Step 1:

First download hbase-0.90.5.tar.gz, extract it under /home/hadoop/, and rename the extracted directory to hbase0.90.5.

Step 2:

Replace the hadoop core jar shipped with hbase. The purpose is to avoid compatibility problems between mismatched hbase and hadoop versions, which can make HMaster fail to start.

Back up and then remove hadoop-core-0.20-append-r1056497.jar from the hbase0.90.5/lib directory, and copy hadoop-0.20.2-core.jar from /home/hadoop/hadoop into /home/hadoop/hbase0.90.5/lib in its place.
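For reference, the jar swap as shell commands, assuming the paths above (renaming to .bak serves as the backup, and a non-.jar suffix keeps the old file off HBase's classpath):

cd /home/hadoop/hbase0.90.5/lib
mv hadoop-core-0.20-append-r1056497.jar hadoop-core-0.20-append-r1056497.jar.bak
cp /home/hadoop/hadoop/hadoop-0.20.2-core.jar .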

Step 3:

Edit the configuration files

① /home/hadoop/hbase0.90.5/conf/hbase-env.sh
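The post does not show the edit itself; the usual minimum for this file is pointing it at the JDK and, in a simple setup, letting HBase manage its own ZooKeeper. A sketch with a hypothetical JDK path — substitute the location of your jdk1.6 install:

export JAVA_HOME=/usr/java/jdk1.6.0_45   # hypothetical path; point at your jdk1.6 install
export HBASE_MANAGES_ZK=true             # HBase starts and stops its bundled ZooKeeper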

We assign 24 to column age of column family info in the row with key '1001' (1001:info:age=>24) and insert it twice. The two puts are stored as versions of the same cell, distinguished by their timestamps, and what a read returns is the value of the most recent put.
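In hbase shell terms (the table name 'member' is hypothetical), the behavior described above looks like this:

hbase(main):001:0> put 'member','1001','info:age','24'
hbase(main):002:0> put 'member','1001','info:age','24'
hbase(main):003:0> get 'member','1001',{COLUMN=>'info:age',VERSIONS=>2}

A plain get returns only the cell with the newest timestamp; asking for VERSIONS=>2 shows both puts as separate cells distinguished by timestamp, up to the column family's configured version limit.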

❹ About HBase configuration

The installation failed. Install hadoop first; only once hadoop is working should you move on to hbase. Check the log files in the log folder under the hadoop installation directory.

❺ How HBase automatically loads the configuration files under conf

If you look at the source code, you can see that when an HBaseConfiguration object is created it loads the configuration files under the conf directory (it reads them from the classpath).
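A quick way to confirm this from client code: HBaseConfiguration.create() loads hbase-default.xml and then hbase-site.xml from the classpath, which is why putting the conf directory on the classpath (as in ❶) is enough. A minimal sketch:

// assumes a conf directory containing hbase-site.xml is on the classpath
Configuration conf = HBaseConfiguration.create();
// the printed value comes from hbase-site.xml, not from code
System.out.println(conf.get("hbase.zookeeper.quorum"));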

❻ Setting up HBase on Linux

1. Download and install the HBase database:

[root@tong1 ~]# wget http://mirrors.hust.edu.cn/apache/hbase/stable/hbase-0.98.9-hadoop2-bin.tar.gz
[root@tong1 ~]# tar xvf hbase-0.98.9-hadoop2-bin.tar.gz
[root@tong1 ~]# mv hbase-0.98.9-hadoop2 /usr/local/
[root@tong1 local]# chown -R hadoop:hadoop hbase-0.98.9-hadoop2
[root@tong1 local]# ll hbase-0.98.9-hadoop2
total 352
drwxr-xr-x. 4 hadoop hadoop 4096 Dec 16 14:16 bin
-rw-r--r--. 1 hadoop hadoop 164928 Dec 16 14:20 CHANGES.txt
drwxr-xr-x. 2 hadoop hadoop 4096 Jan 8 12:48 conf
drwxr-xr-x. 4 hadoop hadoop 4096 Dec 16 14:16 dev-support
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:22 hbase-annotations
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-assembly
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:22 hbase-checkstyle
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-client
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:22 hbase-common
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-examples
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:25 hbase-hadoop1-compat
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-hadoop2-compat
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-hadoop-compat
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-it
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-prefix-tree
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-protocol
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-rest
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-server
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-shell
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 16 14:23 hbase-testing-util
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 14:23 hbase-thrift
-rw-r--r--. 1 hadoop hadoop 11358 Dec 2 07:36 LICENSE.txt
drwxrwxr-x. 2 hadoop hadoop 4096 Jan 8 12:01 logs
-rw-r--r--. 1 hadoop hadoop 897 Dec 16 14:16 NOTICE.txt
-rw-r--r--. 1 hadoop hadoop 81667 Dec 16 14:16 pom.xml
-rw-r--r--. 1 hadoop hadoop 1377 Dec 16 14:16 README.txt
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 16 06:37 src
[root@tong1 local]#

2. Edit the HBase configuration files:

[root@tong1 local]# cd /usr/local/hbase-0.98.9-hadoop2/conf/
[root@tong1 conf]# vim hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://tong1:9000/hbase</value> <!-- must match core-site.xml in hadoop -->
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
[root@tong1 conf]# vim hbase-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_25
[root@tong1 conf]#

3. Start the HBase service:

[root@tong1 conf]# su - hadoop
[hadoop@tong1 ~]$ start-hbase.sh
localhost: starting zookeeper, logging to /usr/local/hbase-0.98.9-hadoop2/bin/../logs/hbase-hadoop-zookeeper-tong1.out
starting master, logging to /usr/local/hbase-0.98.9-hadoop2/logs/hbase-hadoop-master-tong1.out
localhost: starting regionserver, logging to /usr/local/hbase-0.98.9-hadoop2/bin/../logs/hbase-hadoop-regionserver-tong1.out
[hadoop@tong1 ~]$ hbase shell
2015-01-08 15:01:36,052 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-01-08 15:01:36,082 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-01-08 15:01:36,109 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-01-08 15:01:36,135 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-01-08 15:01:36,147 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.9-hadoop2, , Mon Dec 15 23:00:20 PST 2014

hbase(main):008:0> create 'tong1','test'
0 row(s) in 0.9120 seconds
=> Hbase::Table - tong1
hbase(main):009:0> scan 'tong1'
ROW COLUMN+CELL
0 row(s) in 0.0390 seconds
hbase(main):010:0>

4. Check the status in a browser (the HBase web UI).

❼ How to connect to an HBase database locally

1. Connect to the server where HBase runs, using xshell, CRT, or a similar tool.
2. Find the hbase directory with ls.
3. cd into the hbase directory.
4. Run bin/start-hbase.sh.
5. Run bin/hbase shell.
6. Run list to show all the tables visible to the current user.
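To do the same from local Java code instead of an SSH session, connect through ZooKeeper. A minimal sketch, assuming an HBase 1.x client and a reachable quorum (the host name 'hbase-host' is hypothetical):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LocalConnect {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "hbase-host");        // hypothetical zookeeper host
        conf.set("hbase.zookeeper.property.clientPort", "2181"); // default zookeeper port
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // equivalent of the shell's 'list'
            for (TableName t : admin.listTableNames()) {
                System.out.println(t.getNameAsString());
            }
        }
    }
}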

❽ How to view HBase's configuration files

HBase has a local (standalone) mode and a distributed mode. The settings below go in hbase-site.xml as <property><name>...</name><value>...</value></property> entries; "production" is the value this answer runs online, "default" is HBase's built-in default.

hbase.tmp.dir
Temporary directory on the local filesystem. It mostly matters in local mode, but set it in any case, since many other paths default to locations beneath it.
Production: /mnt/<path>
Default: ${java.io.tmpdir}/hbase-${user.name}, which ends up under the system /tmp

hbase.rootdir
Directory shared by all RegionServers in the cluster, where HBase persists its data; normally an HDFS path.
Production: hdfs://master:9000/hbasedata
Default: ${hbase.tmp.dir}/hbase

hbase.cluster.distributed
Cluster mode, distributed or standalone. When set to false, the HBase and ZooKeeper processes run in the same JVM.
Production: true
Default: false

hbase.zookeeper.quorum
The ZooKeeper ensemble, multiple hosts separated by commas.
Production: master,slave,slave1
Default: localhost

hbase.zookeeper.property.dataDir
Mirrors the setting in ZooKeeper's zoo.cfg: where snapshots are stored.
Production: /home/hadoop/zookeeperData
Default: ${hbase.tmp.dir}/zookeeper

zookeeper.session.timeout
Timeout of client sessions with ZooKeeper.
Production: 1200000 (20 min)
Default: 180000 (3 min)

hbase.zookeeper.property.tickTime
Interval between client heartbeats to ZooKeeper.
Production: 6000 (6 s)
Default: 6000

hbase.security.authentication
Security authentication mechanism for the cluster; current versions support only kerberos.
Production: kerberos
Default: empty

hbase.security.authorization
Whether security authorization is enabled.
Production: true
Default: false

hbase.regionserver.kerberos.principal
Kerberos principal of the regionserver (three parts: service or user name, instance name, and realm).
Production: hbase/<host>@<realm>
Default: none

hbase.regionserver.keytab.file
Path to the regionserver keytab file.
Production: /home/hadoop/etc/conf/hbase.keytab
Default: none

hbase.master.kerberos.principal
Kerberos principal of the master (same three-part form).
Production: hbase/<host>@<realm>
Default: none

hbase.master.keytab.file
Path to the master keytab file.
Production: /home/hadoop/etc/conf/hbase.keytab
Default: none

hbase.regionserver.handler.count
Number of threads a regionserver uses to handle IO requests.
Production: 50
Default: 10

hbase.regionserver.global.memstore.upperLimit
Condition for a RegionServer to block writes and flush: the memstores of all regions on the node add up to upperLimit * heapsize.
Production: 0.45
Default: 0.4

hbase.regionserver.global.memstore.lowerLimit
One condition for a RegionServer to trigger flushes: the memstores of all regions on the node add up to lowerLimit * heapsize.
Production: 0.4
Default: 0.35

hbase.client.write.buffer
Client-side write buffer; with autoFlush set to false, the client flushes only when the buffer fills.
Production: 8388608 (8 MB)
Default: 2097152 (2 MB)

hbase.hregion.max.filesize
Region size per column family; under ConstantSizeRegionSplitPolicy, exceeding it triggers an automatic split.
Production: 107374182400 (100 GB)
Default: 21474836480 (20 GB)

hbase.hregion.memstore.block.multiplier
Block all writes to a region once its memstore exceeds this multiple of the flush size, as self-protection.
Production: 8 (can be raised when memory is plentiful; when it triggers, clients need adjusting)
Default: 2

hbase.hregion.memstore.flush.size
Memstore size at which a flush to storage happens.
Production: 104857600 (100 MB)
Default: 134217728 (128 MB)

hbase.hregion.memstore.mslab.enabled
Whether to enable MSLAB, which reduces full GCs caused by memory fragmentation and improves overall performance.
Production: true
Default: true

hbase.regionserver.maxlogs
Number of HLogs a regionserver keeps.
Production: 128
Default: 32

hbase.regionserver.hlog.blocksize
HLog size cap; when it is reached, the log is rolled.
Production: 536870912 (512 MB)
Default: the HDFS block size

hbase.hstore.compaction.min
Minimum number of storefiles for entering the minor-compaction queue.
Production: 10
Default: 3

hbase.hstore.compaction.max
Maximum number of files in one minor compaction.
Production: 30
Default: 10

hbase.hstore.blockingStoreFiles
Block writes to a region once its storefile count reaches this value, waiting for compaction.
Production: 100 (can be set very high in production)
Default: 7

hbase.hstore.blockingWaitTime
How long such a block lasts.
Production: 90000 (90 s)
Default: 90000 (90 s)

hbase.hregion.majorcompaction
Period of major compactions.
Production: 0 (disables automatic major compaction)
Default: 86400000 (1 day)

hbase.regionserver.thread.compaction.large
Thread count of the large-compaction pool.
Production: 5
Default: 1

hbase.regionserver.thread.compaction.small
Thread count of the small-compaction pool.
Production: 5
Default: 1

hbase.regionserver.thread.compaction.throttle
Cutoff deciding whether a compaction request (major or minor) goes to the large or the small pool.
Production: 10737418240 (10 GB)
Default: 2 * this.minFilesToCompact * this.region.memstoreFlushSize

hbase.hstore.compaction.max.size
Maximum storefile size admitted to the minor-compaction queue.
Production: 21474836480 (20 GB)
Default: Long.MAX_VALUE

hbase.rpc.timeout
RPC request timeout.
Production: 300000 (5 min)
Default: 60000 (1 min)

hbase.regionserver.region.split.policy
Default split policy.
Production:
org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy (the old policy, keeping splits under manual control)
Default: org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy (as long as a region has not reached maxFileSize, it splits once fileSize reaches regionCount * regionCount * flushSize)

hbase.regionserver.regionSplitLimit
Upper bound on the number of regions per RegionServer.
Production: 150
Default: 2147483647

hbase-env.sh settings

Runtime environment:

export JAVA_HOME=/usr/lib/jvm/java-6-sun/ # JDK home
export HBASE_HOME=/home/hadoop/cdh4/hbase-0.94.2-cdh4.2.1 # HBase install directory
export HBASE_LOG_DIR=/mnt/dfs/11/hbase/hbase-logs # log output path

JVM tuning:

export HBASE_OPTS="-verbose:gc -XX:+PrintGCDetails -Xloggc:${HBASE_LOG_DIR}/hbase-gc.log -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime \
-server -Xmx20480m -Xms20480m -Xmn10240m -Xss256k -XX:SurvivorRatio=4 -XX:MaxPermSize=256m -XX:MaxTenuringThreshold=15 \
-XX:ParallelGCThreads=16 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:CMSFullGCsBeforeCompaction=5 -XX:+UseCMSCompactAtFullCollection \
-XX:+CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSMaxAbortablePrecleanTime=5000"
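Pulling the core keys together, a minimal distributed hbase-site.xml using the production values quoted above (they are this answer's examples, not universal recommendations) would look like:

<configuration>
  <property><name>hbase.rootdir</name><value>hdfs://master:9000/hbasedata</value></property>
  <property><name>hbase.cluster.distributed</name><value>true</value></property>
  <property><name>hbase.zookeeper.quorum</name><value>master,slave,slave1</value></property>
  <property><name>zookeeper.session.timeout</name><value>1200000</value></property>
</configuration>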

❾ How to import data from one HBase into another HBase

1. Create a table in hbase, e.g.: create 'test','info'

2. Configure the environment. Find hadoop-env.sh under the hadoop installation directory and add the lines below. (This export block can also be skipped; once it is configured, hive will fail on startup unless extra configuration is added.)

export HBASE_HOME=/usr/hbase
export HADOOP_CLASSPATH=$HBASE_HOME/hbase-0.94.12.jar:$HBASE_HOME/hbase-0.94.12-test.jar:$HBASE_HOME/conf:${HBASE_HOME}/lib/zookeeper-3.4.5.jar:${HBASE_HOME}/lib/guava-11.0.2.jar

Then copy the jars: copy hbase's hbase-0.94.12.jar and hbase-0.94.12-tests.jar into hadoop's lib directory, and copy hbase's configuration file hbase-site.xml into hadoop's conf directory.

3. Restart hadoop.

4. Upload the input file to HDFS. I used Eclipse, but hadoop fs -put test3.dat /application/logAnalyse/test/ works as well.

5. From the lib directory of your hbase installation run one of:

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=info:userid,HBASE_ROW_KEY,info:netid test2 /application/logAnalyse/test/test3.dat

or:

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:c1,cf:c2 -Dimporttsv.separator=, test2 /application/logAnalyse/test/test3.txt

Now run scan 'test2' in the hbase shell and you will see the data.
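For larger imports, ImportTsv can also write HFiles instead of live Puts and bulk-load them afterwards; and for the literal question here — copying a table from one HBase cluster to another — CopyTable does it directly. Hedged sketches (the /tmp/hfiles staging path and the destination zookeeper quorum are hypothetical):

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:c1,cf:c2 -Dimporttsv.separator=, -Dimporttsv.bulk.output=/tmp/hfiles test2 /application/logAnalyse/test/test3.txt
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/hfiles test2

hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=dst-zk1,dst-zk2,dst-zk3:2181:/hbase test2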

