@青牛 Teacher, the field is defined as the JSON type in MySQL, but after importing it into Hive (where it is defined as string), the query results come out garbled.
This is the original data in MySQL.
@青牛 Teacher, should it be imported into an internal (managed) table, without creating the table and columns myself?
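For reference, a minimal sketch of such an import, assuming Sqoop is the tool used; mysql-host, mydb, my_table and json_col are hypothetical placeholders, and mapping the JSON column to a Java String is one common way to keep it from arriving garbled on the Hive side:
# minimal sketch, assuming Sqoop; host, database, table and json_col are hypothetical
sqoop import \
  --connect "jdbc:mysql://mysql-host:3306/mydb?useUnicode=true&characterEncoding=utf-8" \
  --username root -P \
  --table my_table \
  --map-column-java json_col=String \
  --hive-import \
  --hive-table my_table \
  -m 1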
@青牛 Teacher, if I run into this kind of problem, can I drop the CM database and re-initialize it? Will that affect critical cluster data? To re-initialize the database:
cd /opt/cloudera-manager/cm-5.15.0/share/cmf/schema/
./scm_prepare_database.sh mysql -hmaster.bd.dp -uroot -p<password> --scm-host master.bd.dp cm scm <password>
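For reference, a minimal sketch of the drop-and-recreate step before re-running the script, assuming the CM metadata database is named cm as in the command above; dumping it first keeps a way back:
# minimal sketch: back up, drop and recreate the cm database, then re-run the prepare script
mysqldump -uroot -p cm > cm_backup.sql
mysql -uroot -p -e "DROP DATABASE IF EXISTS cm; CREATE DATABASE cm DEFAULT CHARACTER SET utf8;"
cd /opt/cloudera-manager/cm-5.15.0/share/cmf/schema/
./scm_prepare_database.sh mysql -hmaster.bd.dp -uroot -p<password> --scm-host master.bd.dp cm scm <password>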
The Aliyun machines were in fact connected to the network. I rebuilt the cluster and now it works; it is really strange that this problem showed up at all.
Thank you, teacher 青牛, the problem is solved, haha 😃
Thank you.
@青牛 Teacher, could you provide the specific documented steps? Thanks.
Yes, that is indeed required. I tried it: the table has to be created in Hive first, and after that it can be worked with from spark-sql. If the table is created in spark-sql and then worked with there, this error appears.
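For reference, a minimal sketch of that working order, using a hypothetical table demo_t: the DDL goes through the Hive CLI first, and the query through spark-sql afterwards:
# minimal sketch with a hypothetical table demo_t
hive -e "CREATE TABLE IF NOT EXISTS demo_t (id INT, name STRING);"
spark-sql -e "SELECT * FROM demo_t;"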
@青牛 Teacher, I deleted some archives and other files and it works now. How would I mount an extra disk to fix this properly? What does that mean? Thanks for explaining.
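For reference, a minimal sketch of attaching an extra disk, assuming a new empty device shows up as /dev/vdb (hypothetical) and is to be mounted at /data:
df -h                                                  # check how full the current disks are
lsblk                                                  # find the new, unmounted device
mkfs.ext4 /dev/vdb                                     # format it (destroys anything on the disk)
mkdir -p /data
mount /dev/vdb /data                                   # mount it at /data
echo "/dev/vdb /data ext4 defaults 0 0" >> /etc/fstab  # keep the mount across reboots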
Solved!
@青牛 Teacher, could you share a link so I can follow it for the configuration? Thanks!!!
@青牛 Teacher, in my hdfs-site.xml I only configured dfs.replication to set the file replication factor to 2. I started 2 slave machines and 1 master in total; is that OK? Are there any requirements on the replication factor?
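For reference, with 2 DataNodes a replication factor of 2 is the practical upper bound, since HDFS will not place two replicas of the same block on one DataNode; a minimal sketch for checking the effective setting and existing files (the path below is a placeholder):
hdfs getconf -confKey dfs.replication     # effective replication factor
hdfs fsck / -files -blocks | head -n 40   # per-file block and replication report
hdfs dfs -setrep -w 2 /some/path          # change replication of already-written files if needed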
@青牛 Teacher, do HADOOP_HDFS_HOME and HADOOP_CONF_DIR, in addition to HADOOP_HOME, need to be configured on every machine?
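For reference, a minimal sketch of those variables, assuming Hadoop is installed under /usr/local/hadoop (hypothetical path); lines like these would typically go into /etc/profile or ~/.bashrc on every node:
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin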
The problem is solved; it was a JDK path issue!!!
@青牛
After running telnet ip 2181, this is what shows up. What is going on?
@青牛 Why is it that after I followed these steps everything else displayed normally, but the final telnet ip 2181 showed nothing at all? Hoping someone can explain.
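For reference, a minimal sketch for checking whether ZooKeeper is actually listening on 2181, with zk-host as a placeholder for the server being tested:
netstat -nltp | grep 2181      # on the server: is anything listening on port 2181?
zkServer.sh status             # on the server: ZooKeeper's own status report
echo ruok | nc zk-host 2181    # from the client: should answer imok (newer versions may require whitelisting four-letter commands)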
Finally the build succeeded! I re-ran the command.
Teacher, this runs but the build fails:
Downloaded: https://repo.maven.apache.org/maven2/org/sonatype/aether/aether-api/1.7/aether-api-1.7.jar (73 KB at 0.5 KB/sec)
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. FAILURE [14:06 min]
[INFO] Apache Hadoop Build Tools .......................... SKIPPED
[INFO] Apache Hadoop Project POM .......................... SKIPPED
[INFO] Apache Hadoop Annotations .......................... SKIPPED
[INFO] Apache Hadoop Assemblies ........................... SKIPPED
[INFO] Apache Hadoop Project Dist POM ..................... SKIPPED
[INFO] Apache Hadoop Maven Plugins ........................ SKIPPED
[INFO] Apache Hadoop MiniKDC .............................. SKIPPED
[INFO] Apache Hadoop Auth ................................. SKIPPED
[INFO] Apache Hadoop Auth Examples ........................ SKIPPED
[INFO] Apache Hadoop Common ............................... SKIPPED
[INFO] Apache Hadoop NFS .................................. SKIPPED
[INFO] Apache Hadoop KMS .................................. SKIPPED
[INFO] Apache Hadoop Common Project ....................... SKIPPED
[INFO] Apache Hadoop HDFS ................................. SKIPPED
[INFO] Apache Hadoop HttpFS ............................... SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS Project ......................... SKIPPED
[INFO] hadoop-yarn ........................................ SKIPPED
................................................
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 18:20 min
[INFO] Finished at: 2018-03-04T21:48:15+08:00
[INFO] Final Memory: 44M/120M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.4:attach-descriptor (attach-descriptor) on project hadoop-main: Execution attach-descriptor of goal org.apache.maven.plugins:maven-site-plugin:3.4:attach-descriptor failed: Plugin org.apache.maven.plugins:maven-site-plugin:3.4 or one of its dependencies could not be resolved: Could not transfer artifact org.sonatype.aether:aether-spi:jar:1.7 from/to central (https://repo.maven.apache.org/maven2): Connect to repo.maven.apache.org:443 [repo.maven.apache.org/151.101.52.215] failed: 拒绝连接 (Connection refused) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
What is going on here? Looking for a fix.
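For reference, the "Connection refused" part means the build host could not reach repo.maven.apache.org; a minimal sketch of pointing Maven at a closer mirror instead, assuming ~/.m2/settings.xml does not already define a <mirrors> section (the Aliyun public repository is one commonly used mirror of Maven Central):
mkdir -p ~/.m2
cat > ~/.m2/settings.xml <<'EOF'
<settings>
  <mirrors>
    <mirror>
      <id>aliyun</id>
      <mirrorOf>central</mirrorOf>
      <url>https://maven.aliyun.com/repository/public</url>
    </mirror>
  </mirrors>
</settings>
EOF
# then re-run the original hadoop build command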
It really works, thank you teacher!!!