Hi folks,

I propose the following RC to be released for the Apache Zeppelin 0.9.0-preview2 release.

The commit id is a74365c0813b451db1bc78def7d1ad1279429224:
https://gitbox.apache.org/repos/asf?p=zeppelin.git;a=commit;h=dd2058395ad4cf08fb6bdc901ec0c426c5095a94

This corresponds to the tag v0.9.0-preview2-rc1:
https://gitbox.apache.org/repos/asf?p=zeppelin.git;a=shortlog;h=refs/tags/v0.9.0-preview2-rc1

The release archives (tgz), signature, and checksums are here:
https://dist.apache.org/repos/dist/dev/zeppelin/zeppelin-0.9.0-preview2-rc1/

The release candidate consists of the following source distribution archive:
zeppelin-v0.9.0-preview2.tgz

In addition, the following supplementary binary distribution is provided for user convenience at the same location:
zeppelin-0.9.0-preview2-bin-all.tgz

The Maven artifacts are here:
https://repository.apache.org/content/repositories/orgapachezeppelin-1279/org/apache/zeppelin/

You can find the KEYS file here:
https://dist.apache.org/repos/dist/release/zeppelin/KEYS

Release notes are available at:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12342692&styleName=&projectId=12316221

The vote will be open for the next 72 hours (closing at 8 am PDT, July 20).

[ ] +1 approve
[ ] 0 no opinion
[ ] -1 disapprove (and reason why)

Best Regards

Jeff Zhang
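For anyone checking the candidate, verification follows the usual Apache procedure with the signature and checksum files published alongside the archive. A minimal sketch of the checksum step is below; the filenames are stand-ins, since the real archive is not bundled here, and the GPG step (shown in comments) assumes the KEYS file linked above has been imported.

```shell
# Hedged sketch of verifying a downloaded release artifact.
# Signature check (assumes `gpg --import KEYS` was run first):
#   gpg --verify zeppelin-0.9.0-preview2-bin-all.tgz.asc zeppelin-0.9.0-preview2-bin-all.tgz
# Checksum check, demonstrated with a stand-in file:
printf 'release payload' > artifact.tgz
sha512sum artifact.tgz > artifact.tgz.sha512   # upstream publishes this .sha512 file
sha512sum -c artifact.tgz.sha512               # prints "artifact.tgz: OK"
```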
Very excited to see preview2. 0.9.0 has been much more stable than previous versions, and the ipynb export makes it easier to use with more clients in my work.
+1 (non-binding)
_______________________
Eric Pugh | Founder & CEO | OpenSource Connections, LLC | 434.466.1467 | http://www.opensourceconnect

This e-mail and all contents, including attachments, are considered to be Company Confidential unless explicitly stated otherwise, regardless of whether attachments are marked as such.
+1

On Sat, Jul 18, 2020 at 12:54 PM Prabhjyot Singh <[hidden email]> wrote:
Thanks for the feedback, [hidden email]. We can wait for your fix, and everyone else can continue to test preview2.

Alex Ott <[hidden email]> wrote on Sat, Jul 18, 2020 at 6:13 PM:
> I'm hitting https://issues.apache.org/jira/browse/ZEPPELIN-4787 in

Best Regards

Jeff Zhang
+1

On Fri, Jul 17, 2020 at 9:05 AM Eric Pugh <[hidden email]> wrote:
+1

On Sun, Jul 19, 2020 at 08:51, Surjan S Rawat <[hidden email]> wrote:
이종열, Jongyoul Lee, 李宗烈
Hi Jeff

I didn't identify the root cause (I'm not an HTML/JavaScript developer), but I fixed the issue. The PR is open: https://github.com/apache/zeppelin/pull/3858. It's primarily HTML template changes, so it could be merged quite quickly, and then we can cut a new RC.

On Sat, Jul 18, 2020 at 3:43 PM Jeff Zhang <[hidden email]> wrote:
Thanks Alex, I also found another blocker issue in the Spark interpreter: https://issues.apache.org/jira/browse/ZEPPELIN-4912

Folks, I'd like to cancel this RC, and will prepare another RC after these 2 blocker issues are fixed.

Alex Ott <[hidden email]> wrote on Mon, Jul 20, 2020 at 6:44 PM:

Best Regards

Jeff Zhang
Sorry, the blocker issue in the Spark interpreter is this one: https://issues.apache.org/jira/browse/ZEPPELIN-4962

Jeff Zhang <[hidden email]> wrote on Mon, Jul 20, 2020 at 9:53 PM:

Best Regards

Jeff Zhang
Hi Jeff

I've found another issue in both rc1 & rc2: if you don't specify SPARK_HOME, then the default Spark interpreter doesn't start, with the following error when I execute code for reading from Cassandra:

%spark
import org.apache.spark.sql.cassandra._
val data = spark.read.cassandraFormat("test", "test").load()
z.show(data)

org.apache.zeppelin.interpreter.InterpreterException: java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:76)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:760)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
	at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)
	at org.apache.zeppelin.scheduler.FIFOScheduler.lambda$runJobInScheduler$0(FIFOScheduler.java:39)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
	at org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:76)
	at org.apache.spark.SparkConf.<init>(SparkConf.scala:71)
	at org.apache.spark.SparkConf.<init>(SparkConf.scala:58)
	at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:80)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
	... 8 more
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 13 more
ERROR

This works just fine in preview1, without any additional configuration. I remember that we had something around this already reported, but I can't find the JIRA. What do you think?

On Mon, Jul 20, 2020 at 3:54 PM Jeff Zhang <[hidden email]> wrote:
Hi Alex,

It seems that Hadoop classes are missing. Did you include the Hadoop jars with "-P include-hadoop"? I think it's related to https://github.com/apache/zeppelin/commit/6fa79a9fc743f2b4321ac9e8713b3380bb4d64c9#diff-600376dffeb79835ede4a0b285078036.

Philipp
On 21.07.20 at 11:28, Alex Ott wrote:
That's right. In that PR, I excluded the Hadoop jars from the Zeppelin distribution, so that we can support both Hadoop 2 and Hadoop 3 (users can set USE_HADOOP=true in zeppelin-env.sh, so that Zeppelin runs the command `hadoop classpath` and puts all the Hadoop jars on Zeppelin's classpath). But for this issue, it seems even setting USE_HADOOP won't work, because we didn't put the Hadoop jars in the Spark dependency jars. Let me fix it quickly.

Philipp Dallig <[hidden email]> wrote on Tue, Jul 21, 2020 at 5:41 PM:

Best Regards

Jeff Zhang
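[Editor's note] The mechanism Jeff describes can be sketched as a zeppelin-env.sh fragment. This is a hedged sketch, not the fix itself: the SPARK_HOME path is hypothetical, and whether USE_HADOOP alone suffices for the embedded Spark interpreter is exactly what this thread is debugging.

```shell
# conf/zeppelin-env.sh -- sketch based on the mechanism described above.
# Pointing at an external Spark install avoids depending on Hadoop jars
# bundled with Zeppelin (the path below is a hypothetical example):
export SPARK_HOME=/opt/spark
# Or, with a local Hadoop install providing the `hadoop` CLI, let Zeppelin
# prepend the output of `hadoop classpath` to its own classpath:
export USE_HADOOP=true
```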
I didn't compile it myself, I just used the binaries that Jeff created for preview2. My point is that it worked out of the box in preview1 and previous versions, and should continue to be the same; otherwise it's a very breaking change that requires that people know about it...

On Tue, Jul 21, 2020 at 11:41 AM Philipp Dallig <[hidden email]> wrote: