I am loading a CSV file into PySpark as follows (within the pyspark shell):
>>> from pyspark.sql import SQLContext
>>> sqlContext = SQLContext(sc)
>>> df = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('data.csv')
but I am getting this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'SQLContext' object has no attribute 'read'
>>>
I am using Spark 1.3.1 and I am trying to use the spark-csv package. What am I doing wrong here?
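
For reference, my understanding (which may be wrong) is that `sqlContext.read` might not exist yet in Spark 1.3.x, and that the older `load()` API would be used instead, something like the sketch below (assuming spark-csv is on the classpath, e.g. started via `--packages com.databricks:spark-csv_2.10:1.0.3`):

>>> # hypothetical Spark 1.3-style call using SQLContext.load instead of .read
>>> df = sqlContext.load(source='com.databricks.spark.csv', header='true', path='data.csv')

Is that the right approach for 1.3.1, or is there a way to make `sqlContext.read` work on this version?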