Setting spark.driver.memory

4 messages

Setting spark.driver.memory

Paul Brenner
We are on Zeppelin 0.7.2, and I suspect that we are unable to set spark.driver.memory through the interpreter settings page.

If I add a property “spark.driver.memory” to my interpreter and set it to 10g, the environment page for my Spark application shows that spark.driver.memory is in fact set to 10g, but when I look at the executors tab I see my driver is only getting 384mb (which looks suspiciously like what I would see if I were actually getting the default 512mb).

Digging around the web, it sounds like people have had luck setting driver memory in conf/zeppelin-env.sh (e.g., http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Can-not-configure-driver-memory-size-td1513.html); see the sketch after the list below.

  • Can anyone confirm whether it is possible to set driver memory in interpreter settings? 
  • If it isn’t possible, how can I find out which settings are settable through interpreter settings and which aren’t? 
  • Also, if it isn’t possible, should I open a ticket for this? We really want users to be able to customize driver memory size. 
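For anyone hitting the same wall in the meantime: the workaround from the thread linked above is to set the driver memory before the interpreter JVM starts, in conf/zeppelin-env.sh. A minimal sketch, assuming Zeppelin 0.7.x with the Spark interpreter in local or yarn-client mode (the 10g values are just the figure from this thread, not defaults):

  # conf/zeppelin-env.sh
  # Driver memory must reach spark-submit at launch time; properties set on
  # the interpreter settings page are applied after the JVM already exists.
  export SPARK_SUBMIT_OPTIONS="--driver-memory 10g"
  # In 0.7.x client mode the driver runs inside the interpreter process, so
  # its heap is also capped by the interpreter JVM options:
  export ZEPPELIN_INTP_MEM="-Xms10g -Xmx10g"

Restart the interpreter (or Zeppelin itself) for either change to take effect.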




Re: Setting spark.driver.memory

Jeff Zhang

This is a known issue, and it is fixed in 0.8.0.
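
One way to confirm the symptom (a sketch, not an official procedure): on the Zeppelin host, inspect the JVM flags of the running interpreter/driver process and compare them with what the interpreter setting claims.

  # RemoteInterpreterServer is the class Zeppelin's interpreter JVM runs.
  ps -ef | grep -v grep | grep RemoteInterpreterServer | grep -o '\-Xmx[^ ]*'
  # If this prints the default heap while the Spark UI's environment page
  # shows spark.driver.memory=10g, the property landed in the SparkConf but
  # never reached the JVM at launch, which is exactly this bug.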




Re: Setting spark.driver.memory

Paul Brenner
Thanks for the quick response!

Seeing as the other live thread in this group is about how we can’t build 0.8.0 anymore due to dependency convergence issues, do I have any options in the near term? Do I just need to wait until 0.8.0 is released, in some unspecified number of months? Have people had success resolving these issues and getting 0.8.0 to build? I hit a dead end the last time I tried, and I have been struggling to find time to dedicate to trying again.




Re: Setting spark.driver.memory

Paul Brenner
Sorry, yes, build issues with CDH.


On Thu, Dec 28, 2017 at 6:12 PM Jeff Zhang <[hidden email]> wrote:

Do you mean the dependency convergence issue with CDH? It should build successfully against Apache Hadoop.
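
For reference, build invocations of the sort Zeppelin’s docs have suggested, one against Apache Hadoop and one against CDH (a sketch; the profiles and version numbers are illustrative and should be checked against the 0.8.0 build instructions):

  # Against Apache Hadoop (illustrative profiles):
  mvn clean package -DskipTests -Pspark-2.2 -Phadoop-2.7 -Pscala-2.11
  # Against CDH: add the vendor repo and pin the Hadoop version
  # (the CDH version below is an example, not a recommendation):
  mvn clean package -DskipTests -Pspark-2.2 -Phadoop-2.6 -Pvendor-repo \
      -Dhadoop.version=2.6.0-cdh5.10.0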

