I have an HDFS directory with thousands of files. It seems that some of them – and I don’t know which ones –
have a problem with their schema, and that’s causing my Spark application to fail with this error:

Caused by: org.apache.spark.sql.execution.QueryExecutionException: 
Parquet column cannot be converted in file hdfs://*.parquet.
Column: [price], Expected: double, Found: FIXED_LEN_BYTE_ARRAY

The problem is not only that it causes the application to fail, but that every time it does fail, I have to copy that file out of the directory and start the app again.

I thought of trying to use try-except, but I can’t seem to get that to work.


Looks like the schema of some files is unexpected.

Possible solution:

You could either run parquet-tools on each of the files and extract its schema to find the problematic ones:

hdfs dfs -stat "%n" "hdfs://*.parquet" | while read file
do
  echo -n "$file: "
  hadoop jar parquet-tools-1.9.0.jar schema "$file"
done

Or you can use Spark to investigate the parquet files in parallel:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.hadoop.ParquetFileReader
import org.apache.parquet.hadoop.util.HadoopInputFile
import spark.implicits._  // needed for .toDF; spark is the SparkSession (predefined in spark-shell)
spark.sparkContext
  .binaryFiles("hdfs://*.parquet")
  .map { case (path, _) =>
    import collection.JavaConverters._
    val file = HadoopInputFile.fromPath(new Path(path), new Configuration())
    val reader = ParquetFileReader.open(file)
    try {
      // only the footer metadata is read: the schema name and its fields
      val schema = reader.getFileMetaData().getSchema
      (schema.getName, schema.getFields.asScala.map(_.toString).toList)
    } finally {
      reader.close()
    }
  } //map
  .toDF("schema name", "fields")

.binaryFiles gives you all filenames that match the given pattern as an RDD, so the subsequent .map runs on the Spark executors.
The map then opens each parquet file via ParquetFileReader, which gives access to its schema and data.
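
If you also want to know which files carry the unexpected schema, a small extension of the sketch above helps: emit the file path as well, e.g. a (path, schema name, fields) tuple together with .toDF("path", "schema name", "fields"). The snippet below is only a rough sketch under that assumption – the DataFrame name schemas and the column names are illustrative, not from the original code – and it keeps the files whose price field is not a double:

import org.apache.spark.sql.functions._
// assumes schemas = the DataFrame from above, extended with a "path" column,
// and that spark.implicits._ is imported (for the $"..." syntax)
schemas
  .withColumn("field", explode($"fields"))                              // one row per (file, field)
  .filter($"field".contains("price") && !$"field".contains("double"))   // price declared with the wrong type
  .select("path")                                                       // the files that break the job
  .show(false)

Each entry in fields is the toString of a Parquet field and should look roughly like "optional double price", so a simple substring match is usually enough to spot the type mismatch reported in the error above.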

Happy coding.

