Closes the HDFS log files for a write-only table (making them available for MapReduce and HAWQ), and begins writing to new log files.
See How GemFire XD Manages HDFS Data for more information about the default log file rollover behavior for HDFS write-only tables.
SYS.HDFS_FORCE_WRITEONLY_FILEROLLOVER ( IN FQTN VARCHAR(256), IN MIN_SIZE_FOR_ROLLOVER INTEGER )
- FQTN — The fully-qualified name (schemaname.tablename) of the write-only table to roll over. The table that you specify must use HDFS write-only persistence. For HDFS read/write tables, use SYS.HDFS_FORCE_COMPACTION instead.
- MIN_SIZE_FOR_ROLLOVER — The minimum size of HDFS log file that GemFire XD considers for rollover. Any of the table's HDFS log files that are smaller than this value are skipped and not rolled over. Specify a value of 0 to roll over all of the table's log files, regardless of size.
Note: Using SYS.HDFS_FORCE_WRITEONLY_FILEROLLOVER repeatedly can lead to numerous small files in HDFS, which creates pressure in the Hadoop cluster. As a best practice, use the MIN_SIZE_FOR_ROLLOVER procedure parameter to skip rollover for log files that are smaller than a specified minimum size.
Close and roll over all HDFS log files for the write-only FLIGHTSWRITE table in the APP schema. The procedure does not return control until the rollover completes:
gfxd> call sys.hdfs_force_writeonly_filerollover('APP.FLIGHTSWRITE', 0);
Statement executed.
Close and roll over only those HDFS log files that are 200 MB (200,000 KB) or larger:
gfxd> call sys.hdfs_force_writeonly_filerollover('APP.FLIGHTSWRITE', 200000);
Statement executed.
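The procedure can also be invoked from a client application over JDBC. The following is a minimal sketch using java.sql.CallableStatement; the connection details (driver, URL, credentials) are assumptions and depend on your deployment:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class Rollover {

    // The CALL statement for the system procedure, with placeholders
    // for the FQTN and MIN_SIZE_FOR_ROLLOVER parameters.
    static String rolloverSql() {
        return "CALL SYS.HDFS_FORCE_WRITEONLY_FILEROLLOVER(?, ?)";
    }

    // Roll over HDFS log files for a write-only table using an existing
    // JDBC connection (obtaining the connection is left to the caller).
    static void forceRollover(Connection conn, String fqtn, int minSize)
            throws SQLException {
        try (CallableStatement cs = conn.prepareCall(rolloverSql())) {
            cs.setString(1, fqtn);   // e.g. "APP.FLIGHTSWRITE"
            cs.setInt(2, minSize);   // 0 = roll over all log files
            cs.execute();            // blocks until the rollover completes
        }
    }

    public static void main(String[] args) {
        System.out.println(rolloverSql());
    }
}
```

As with the gfxd examples above, the call does not return until the rollover completes, so long-running rollovers should not be issued on a latency-sensitive thread.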