Contents
1. Environment preparation
2. Install Hive and configure environment variables
3. Install MySQL
4. Create the Hive metastore database in MySQL and grant privileges
5. Install the JDBC jar into Hive
6. Configure hive-site.xml
7. Initialize the metastore
8. Start and verify Hive
9. Errors and fixes
Prepare a Hadoop cluster (see my article "Hadoop distributed cluster installation tutorial"), then start Hadoop:
# cd /opt/hadoop-2.6.4/sbin
# start-all.sh
# cd /opt
# tar zxvf apache-hive-2.1.0-bin.tar.gz
# ln -s apache-hive-2.1.0-bin hive      // create a symbolic link, so future version upgrades are easier
# vim /etc/bashrc                       // configure environment variables
export HIVE_HOME=/opt/hive
export PATH=$PATH:$HIVE_HOME/bin
# source /etc/bashrc                    // apply the changes
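To confirm the environment variables actually took effect, a quick sanity check might look like this (assuming the /opt/hive symlink created above):

```shell
# Sanity-check the Hive environment set up in /etc/bashrc.
echo "$HIVE_HOME"                                      # should print /opt/hive
echo "$PATH" | tr ':' '\n' | grep -x "$HIVE_HOME/bin"  # confirm the bin dir is on PATH
hive --version                                         # resolves only if PATH is correct
```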
# rpm -q mysql-server             // check whether MySQL is already installed
# yum install -y mysql-server     // install via yum
# service mysqld start
# chkconfig mysqld on             // start MySQL on boot
# mysql -uroot
create database if not exists hive_metadata;
grant all privileges on hive_metadata.* to 'root'@'%' identified by '123456';
grant all privileges on hive_metadata.* to 'root'@'localhost' identified by '123456';
grant all privileges on hive_metadata.* to 'root'@'hadoop1' identified by '123456';
flush privileges;                 // make the grants take effect
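Before moving on, it is worth double-checking that the grants above really work by logging back in with the new password (host names and password are the ones from my setup):

```shell
# Verify the metastore database and grants by reconnecting with the password set above.
mysql -uroot -p123456 -e "SHOW DATABASES LIKE 'hive_metadata';"
mysql -uroot -p123456 -e "SHOW GRANTS FOR 'root'@'localhost';"
```

If either command prompts for a password or is denied, the grant statements did not take effect and Hive will fail later with the same credentials.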
# yum install -y mysql-connector-java     // install mysql-connector-java
# cp /usr/share/java/mysql-connector-java-5.1.17.jar /opt/hive/lib/     // copy the MySQL connector into Hive's lib directory
# cd /opt/hive/conf/
# cp hive-default.xml.template hive-site.xml
# vim hive-site.xml
-------------------------------------------------------------------------
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hadoop1:3306/hive_metadata?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <!-- I use the root account here; it must match the grants from step 5.1 -->
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <!-- my install errored out without this; adding it solved the problem -->
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
  <description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description>
</property>
Initialize Hive's metastore database (stored in MySQL):
# cd /opt/hive/bin
# schematool -initSchema -dbType mysql
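If the initialization succeeded, the metastore tables should now exist in MySQL, and schematool can report the schema version itself. A quick check (credentials from the steps above):

```shell
# Confirm the metastore schema landed in MySQL.
mysql -uroot -p123456 hive_metadata -e "SHOW TABLES;" | head

# Or ask schematool to report the installed schema version.
schematool -info -dbType mysql
```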
# hive
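Once the CLI comes up, a quick smoke test confirms the metastore really works end to end (the table name here is just an example):

```shell
# Minimal smoke test: create a table, list it, then drop it again.
# Each DDL statement round-trips through the MySQL metastore.
hive -e "CREATE TABLE IF NOT EXISTS smoke_test (id INT); SHOW TABLES; DROP TABLE smoke_test;"
```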
Done! Not easy at all. I hit a pile of errors and worked through them one by one with a certain search engine, which, let me grumble, returned almost nothing useful. The pitfalls I ran into and their fixes are below.
Addendum 1: remote (separated) mode, client-side configuration file:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://hadoop1:9083</value>
  </property>
</configuration>
On the master, run `hive --service metastore &` to start the metastore service; on the slave, run `hive` to verify.
Addendum 2: Hive Thrift client:
Run on the master:
nohup hive --service hiveserver2 &
Run on the slave:
beeline
!connect jdbc:hive2://hadoop1:10000 root 123456
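The same connection can also be made non-interactively, which is handy for scripting (host and credentials are the ones from my setup; 10000 is hiveserver2's default port):

```shell
# One-shot beeline connection: -u is the JDBC URL, -n the user,
# -p the password, and -e the statement to run.
beeline -u "jdbc:hive2://hadoop1:10000" -n root -p 123456 -e "SHOW DATABASES;"
```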
As the saying goes: when something breaks, read the logs. Hive's log defaults to /tmp/root/hive.log (under /tmp/<user>/ for whichever user runs Hive).
Error 1:
Right after installation, starting hive fails. It turned out to be an slf4j binding conflict: Hadoop's slf4j binding clashes with the one bundled in Hive's lib directory, and removing one of the two fixes it.
Fix:
# rm -rf /opt/hive/lib/log4j-slf4j-impl-2.4.1.jar
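After removing the jar, you can confirm only one slf4j binding remains on the classpath (paths match the install locations used earlier in this post):

```shell
# Hive's lib directory should no longer contain an slf4j binding jar...
ls /opt/hive/lib | grep -i slf4j || echo "no slf4j binding left in hive/lib"

# ...while Hadoop keeps its own slf4j-log4j12 binding.
find /opt/hadoop-2.6.4 -name "slf4j-log4j12*.jar"
```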
Error 2:
Of course it couldn't go that smoothly: the moment error 1 was fixed, another one popped up. Hive couldn't connect to the MySQL database; it turned out the account and password in hive-site.xml didn't match the grants in MySQL.
Fix: make them identical and you're done.
Error 3:
Stay calm. Three errors in a row; by now I'm as used to errors as I am to being single. I found this fix online, and after applying it everything finally worked.
Fix: tracing the Hive source shows the following configuration needs changing:
<property>
  <name>datanucleus.readOnlyDatastore</name>
  <value>false</value>
</property>
<property>
  <name>datanucleus.fixedDatastore</name>
  <value>false</value>
</property>
<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.autoCreateTables</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.autoCreateColumns</name>
  <value>true</value>
</property>
Or change:
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>false</value>
  <description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description>
</property>
to:
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
  <description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description>
</property>
posted on 2017-07-20 15:20 by 鸿雁