spark job error



shyla
I am running Zeppelin on EMR with the default settings. I am getting the following error; restarting the Zeppelin application fixes the problem.

What default settings do I need to override to fix this error?

org.apache.spark.SparkException: Job aborted due to stage failure: Task 71 in stage 231.0 failed 4 times, most recent failure: Lost task 71.3 in stage 231.0 Reason: Container killed by YARN for exceeding memory limits. 1.4 GB of 1.4 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
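For context, a minimal sketch of the kind of override the error message points at, assuming the properties are applied at cluster launch through the EMR configuration API's spark-defaults classification (the values below are illustrative guesses, not tuned recommendations):

    [
      {
        "Classification": "spark-defaults",
        "Properties": {
          "spark.executor.memory": "4g",
          "spark.yarn.executor.memoryOverhead": "1024"
        }
      }
    ]

The same two properties can also be set in Zeppelin's Spark interpreter settings on a running cluster instead of at launch time.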

Thanks