Unable to start Zeppelin + Spark

5 messages

Unable to start Zeppelin + Spark

ÐΞ€ρ@Ҝ (๏̯͡๏)
I have Spark 1.4.1 and Zeppelin built from the GitHub source using

mvn clean package -Pspark-1.4 -Dspark.version=1.4.1 -Dhadoop.version=2.7.0 -Phadoop-2.6 -Pyarn -DskipTests


and I see this in the logs:

BlockManagerId(2, datanode-2-9429.phx01.dev.ebayc3.com, 60713)

 INFO [2015-10-01 11:43:50,967] ({sparkDriver-akka.actor.default-dispatcher-18} Logging.scala[logInfo]:59) - Registering block manager datanode-1-8428.phx01.dev.ebayc3.com:36087 with 265.1 MB RAM, BlockManagerId(1, datanode-1-8428.phx01.dev.ebayc3.com, 36087)

 INFO [2015-10-01 11:43:51,062] ({pool-2-thread-3} Logging.scala[logInfo]:59) - Initializing execution hive, version 0.13.1

 INFO [2015-10-01 11:43:51,498] ({pool-2-thread-3} HiveMetaStoreClient.java[open]:297) - Trying to connect to metastore with URI thrift://hive-metastore-8611.phx01.dev.ebayc3.com:9083

 INFO [2015-10-01 11:43:51,954] ({pool-2-thread-3} HiveMetaStoreClient.java[open]:385) - Connected to metastore.

 WARN [2015-10-01 11:43:52,025] ({pool-2-thread-3} SparkInterpreter.java[getSQLContext]:216) - Can't create HiveContext. Fallback to SQLContext

java.lang.reflect.InvocationTargetException

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

at org.apache.zeppelin.spark.SparkInterpreter.getSQLContext(SparkInterpreter.java:211)

at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:476)

at org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)

at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)

at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:92)

at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)

at org.apache.zeppelin.scheduler.Job.run(Job.java:170)

at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)

at java.util.concurrent.FutureTask.run(FutureTask.java:262)

at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)

at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.NoClassDefFoundError: org/apache/tez/dag/api/SessionNotRunning

at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:353)

at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:116)

at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:163)

at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:161)

at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:168)

... 19 more

Caused by: java.lang.ClassNotFoundException: org.apache.tez.dag.api.SessionNotRunning

at java.net.URLClassLoader$1.run(URLClassLoader.java:366)

at java.net.URLClassLoader$1.run(URLClassLoader.java:355)

at java.security.AccessController.doPrivileged(Native Method)

at java.net.URLClassLoader.findClass(URLClassLoader.java:354)

at java.lang.ClassLoader.loadClass(ClassLoader.java:425)

at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)

at java.lang.ClassLoader.loadClass(ClassLoader.java:358)

... 24 more

 INFO [2015-10-01 11:43:59,552] ({pool-2-thread-3} Logging.scala[logInfo]:59) - ensureFreeSpace(211776) called with curMem=0, maxMem=515553361

 INFO [2015-10-01 11:43:59,555] ({pool-2-thread-3} Logging.scala[logInfo]:59) - Block broadcast_0 stored as values in memory (estimated size 206.8 KB, free 491.5 MB)


Hadoop version: 2.7.x
Spark: 1.4.1

Hadoop was installed using Ambari.
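The root cause in the trace above is a ClassNotFoundException for org.apache.tez.dag.api.SessionNotRunning: Hive's execution engine is configured as Tez, but the Tez client jars are not on the interpreter's classpath, so Zeppelin falls back to SQLContext. Two possible remedies, as a sketch only (not from this thread; the jar paths are typical Ambari/HDP locations and may differ on your cluster):

```shell
# Option 1 (sketch): put the Tez client jars on the Spark driver classpath.
# The paths below are examples for an Ambari/HDP layout, not verified here.
echo "spark.driver.extraClassPath /usr/hdp/current/tez-client/*:/usr/hdp/current/tez-client/lib/*" \
  >> "$SPARK_HOME/conf/spark-defaults.conf"

# Option 2 (sketch): switch Hive back to the MapReduce engine in hive-site.xml,
# so the Tez classes are never needed at session startup:
#   <property>
#     <name>hive.execution.engine</name>
#     <value>mr</value>
#   </property>
```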

--
Deepak

Re: Unable to start Zeppelin + Spark

ÐΞ€ρ@Ҝ (๏̯͡๏)
Ignoring this exception, since graphs are being rendered.

--
Deepak

Re: Unable to start Zeppelin + Spark

Randy Gelhausen-2
See https://issues.apache.org/jira/browse/ZEPPELIN-324, which was just merged. We switched the Karma JS unit tests to port 9002, since it is less commonly used than the previous default of 8080.

If you're getting this error, can you check that you don't have another service already using port 9002?
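A quick way to check for a port collision (a generic sketch, not from this thread) is to probe whether anything is already listening on the candidate Karma ports:

```shell
# port_status PORT: reports whether something is listening on localhost:PORT.
# Uses the shell's /dev/tcp redirection so no extra tools are required;
# `lsof -i :PORT` or `netstat -tlnp` will additionally name the owning process.
port_status() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "port $1: in use"
  else
    echo "port $1: free"
  fi
}

port_status 9002   # the new Karma default from ZEPPELIN-324
port_status 8080   # the previous default
```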



Re: Unable to start Zeppelin + Spark

ÐΞ€ρ@Ҝ (๏̯͡๏)
I did have something on port 9002, so I modified the Karma port to 9012.

Logs
====

$ ./grunt build 

Running "clean:server" (clean) task

Cleaning .tmp...OK


Running "wiredep:app" (wiredep) task


Running "wiredep:test" (wiredep) task


Running "concurrent:test" (concurrent) task

    

    Running "copy:styles" (copy) task

    Copied 7 files

    

    Done, without errors.

    

    

    Execution Time (2015-10-02 18:38:44 UTC)

    loading tasks   3ms  ▇▇▇▇▇▇▇▇▇ 18%

    copy:styles    13ms  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 76%

    Total 17ms

    

Running "autoprefixer:dist" (autoprefixer) task

File .tmp/styles/custom-font.css created.

File .tmp/styles/font-awesome.min.css created.

File .tmp/styles/home.css created.

File .tmp/styles/interpreter.css created.

File .tmp/styles/notebook.css created.

File .tmp/styles/paragraph.css created.

File .tmp/styles/simple-line-icons.css created.


Running "connect:test" (connect) task

Started connect web server on http://localhost:9001


Running "karma:unit" (karma) task

INFO [karma]: Karma v0.12.37 server started at http://localhost:9012/

INFO [launcher]: Starting browser PhantomJS

ERROR [launcher]: Cannot start PhantomJS

INFO [launcher]: Trying to start PhantomJS again (1/2).

ERROR [launcher]: Cannot start PhantomJS

INFO [launcher]: Trying to start PhantomJS again (2/2).

ERROR [launcher]: Cannot start PhantomJS

ERROR [launcher]: PhantomJS failed 2 times (cannot start). Giving up.

Warning: Task "karma:unit" failed. Use --force to continue.


Aborted due to warnings.



Execution Time (2015-10-02 18:38:42 UTC)

wiredep:app        205ms  ▇▇▇▇▇▇ 3%

concurrent:test     1.1s  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 19%

autoprefixer:dist  116ms  ▇▇▇▇ 2%

karma:unit          4.4s  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 75%

Total 5.9s


Does that mean the build is complete?



--
Deepak

Re: Unable to start Zeppelin + Spark

Corneau Damien

No, it still fails.
You have a problem with PhantomJS on your machine.
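A quick sanity check (a generic sketch, not from this thread): Karma's launcher fails exactly like the log above when it cannot find or start a PhantomJS binary, so first confirm one is on the PATH and runnable:

```shell
# Report whether a phantomjs binary is on PATH and whether it starts at all.
check_phantomjs() {
  if command -v phantomjs >/dev/null 2>&1; then
    echo "phantomjs found: $(command -v phantomjs)"
    phantomjs --version 2>/dev/null || echo "phantomjs is present but fails to run"
  else
    echo "phantomjs not found on PATH"
  fi
}

check_phantomjs
```

If the binary is missing, installing it (for example via `npm install -g phantomjs`, or your OS package manager) and re-running `./grunt build` is the usual next step.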
