edu.yale.cs.hadoopdb.connector
Class DBInputFormat<T extends DBWritable>
java.lang.Object
edu.yale.cs.hadoopdb.connector.DBInputFormat<T>
- Type Parameters:
- T - the record value type; bounded by DBWritable
- All Implemented Interfaces:
- org.apache.hadoop.mapred.InputFormat<org.apache.hadoop.io.LongWritable,T>, org.apache.hadoop.mapred.JobConfigurable
- Direct Known Subclasses:
- DBJobBase.DBJobBaseInputFormat
public abstract class DBInputFormat<T extends DBWritable>
- extends java.lang.Object
- implements org.apache.hadoop.mapred.InputFormat<org.apache.hadoop.io.LongWritable,T>, org.apache.hadoop.mapred.JobConfigurable
Base DBInputFormat class. Extensions are required to specialize the value class.
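Since DBInputFormat is abstract, a concrete extension specializes the value class that each record is deserialized into. A minimal sketch of that pattern, assuming a hypothetical EmployeeRecord value type; a Map-based row stands in for the java.sql.ResultSet that the real DBWritable reads from:

```java
import java.util.Map;

// Illustrative stand-in for edu.yale.cs.hadoopdb.connector.DBWritable:
// the real interface deserializes fields from a java.sql.ResultSet.
interface RowReadable {
    void readFields(Map<String, Object> row);
}

// Hypothetical value class that a DBInputFormat extension would
// parameterize on (the T in DBInputFormat<T extends DBWritable>).
class EmployeeRecord implements RowReadable {
    long id;
    String name;

    @Override
    public void readFields(Map<String, Object> row) {
        // Copy one database row into typed fields.
        id = (Long) row.get("id");
        name = (String) row.get("name");
    }
}
```

The framework then constructs one such object per row returned by the chunk's database connection.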
Method Summary

void configure(org.apache.hadoop.mapred.JobConf conf)
    Method required by the JobConfigurable interface.

org.apache.hadoop.mapred.RecordReader<org.apache.hadoop.io.LongWritable,T> getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)
    Returns a DBRecordReader for a given split.

org.apache.hadoop.mapred.InputSplit[] getSplits(org.apache.hadoop.mapred.JobConf conf, int numSplits)
    Retrieves the location of chunks for a given relation.

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Field Detail

dbConf
protected DBConfiguration dbConf

Constructor Detail

DBInputFormat
public DBInputFormat()

Method Detail

configure
public void configure(org.apache.hadoop.mapred.JobConf conf)
- Method required by the JobConfigurable interface. Different extensions may read different information from the Hadoop JobConf object.
- Specified by:
configure
in interface org.apache.hadoop.mapred.JobConfigurable
getRecordReader
public org.apache.hadoop.mapred.RecordReader<org.apache.hadoop.io.LongWritable,T> getRecordReader(org.apache.hadoop.mapred.InputSplit split,
org.apache.hadoop.mapred.JobConf job,
org.apache.hadoop.mapred.Reporter reporter)
throws java.io.IOException
- Returns a DBRecordReader for the given split.
- Specified by:
getRecordReader
in interface org.apache.hadoop.mapred.InputFormat<org.apache.hadoop.io.LongWritable,T extends DBWritable>
- Throws:
java.io.IOException
getSplits
public org.apache.hadoop.mapred.InputSplit[] getSplits(org.apache.hadoop.mapred.JobConf conf,
int numSplits)
throws java.io.IOException
- Retrieves the locations of the chunks of a given relation, then creates one split per chunk. Each split is assigned a chunk, which holds connection and location information.
- Specified by:
getSplits
in interface org.apache.hadoop.mapred.InputFormat<org.apache.hadoop.io.LongWritable,T extends DBWritable>
- Throws:
java.io.IOException
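The one-split-per-chunk mapping described above can be sketched as follows. Chunk and ChunkSplit are hypothetical stand-ins for HadoopDB's own chunk metadata and InputSplit implementation; the real getSplits also carries JDBC connection information in each split:

```java
import java.util.List;

// Hypothetical stand-in for a HadoopDB chunk: where one piece of the
// relation lives and which hosts serve it.
record Chunk(String jdbcUrl, String[] hosts) {}

// Hypothetical stand-in for the InputSplit that wraps a single chunk.
record ChunkSplit(Chunk chunk) {
    String[] getLocations() { return chunk.hosts(); }
}

class SplitSketch {
    // One split per chunk: the numSplits hint is effectively ignored,
    // because split placement follows where the database chunks live.
    static ChunkSplit[] getSplits(List<Chunk> chunks) {
        return chunks.stream()
                     .map(ChunkSplit::new)
                     .toArray(ChunkSplit[]::new);
    }
}
```

Hadoop then schedules each map task near its split's reported locations, so computation runs close to the database node holding the chunk.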