Re: error: value read is not a member of org.apache.spark.sql.SQLContext

moon
Administrator
Hi,

I tried loading the spark-csv package using %dep.
Let me know if it works.
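
For reference, a minimal sketch of the paragraph I ran (the version number is just the one from your mail; adjust as needed):

    %dep
    z.reset()                                      // clear any previously loaded artifacts
    z.load("com.databricks:spark-csv_2.10:1.0.3")  // resolve from Maven Central, with transitive deps

Note that a %dep paragraph has to run before the Spark interpreter starts, so restart the interpreter first if Spark is already running.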

Thanks,
moon

On Tue, Jun 23, 2015 at 8:41 PM George Koshy <[hidden email]> wrote:
Yes Alex, 
sc.version tells me String = 1.3.1
So it is 1.3, and I used the 1.3 syntax from the spark-csv website, but then I get a class-not-found error:

NoClassDefFoundError: org/apache/commons/csv/CSVFormat

I guess the spark-csv package is not loading properly.

I used
    %dep z.load("com.databricks:spark-csv_2.10:1.0.3")
and, alternatively,
    %dep z.load("/Users/george/Downloads/spark-csv_2.11-1.0.3.jar")
but neither worked. I also put the jar path in the ZEPPELIN_JAVA_OPTS environment variable in zeppelin-env.sh.
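
Given the NoClassDefFoundError on org/apache/commons/csv/CSVFormat, my guess is that the local jar loaded by path does not bring in spark-csv's transitive commons-csv dependency (and that _2.11 jar may also mismatch a Scala 2.10 Spark build). A sketch of what I expected to work; the commons-csv version is a guess on my part:

    %dep
    z.reset()
    // a Maven coordinate resolves transitive dependencies (commons-csv) automatically,
    // unlike z.load on a local jar path:
    z.load("com.databricks:spark-csv_2.10:1.0.3")
    // if the local jar must be used, the missing dependency would need loading too
    // (the 1.1 version number below is a guess):
    // z.load("org.apache.commons:commons-csv:1.1")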


On Tue, Jun 23, 2015 at 8:32 PM, Alexander Bezzubov <[hidden email]> wrote:
Hi George,

does the Spark version you use on the cluster match the one Zeppelin is built with?
The API you are using was introduced only in Spark 1.4, which is not the default version yet.
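
If you are on 1.3.x, the spark-csv README shows a load-based call for Spark 1.3 instead; roughly (the file name is just a placeholder):

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)
    // Spark 1.3 style: pass the data source name and its options to sqlContext.load
    val df = sqlContext.load(
      "com.databricks.spark.csv",
      Map("path" -> "filename.csv", "header" -> "true"))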

You can check by running a simple Scala paragraph with `sc.version`.
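
For example, in a paragraph of its own:

    // prints the Spark version the interpreter is bound to, e.g. res0: String = 1.3.1
    sc.version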

--
Alex

On Wed, Jun 24, 2015 at 11:51 AM, George Koshy <[hidden email]> wrote:
Please help,
I get this error:
error: value read is not a member of org.apache.spark.sql.SQLContext
val df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").load("filename.csv")


My code is as follows:
import org.apache.spark.SparkContext

%dep
com.databricks:spark-csv_2.11:1.0.3

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
val df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").load("fileName.csv")

--
Sincerely!
George Koshy,



--
Kind regards,
Alexander.




--
Sincerely!
George Koshy,