Which file format to use depends on what your text file contains.

You can write a custom RecordReader to parse the text log file and return each record in the shape you want; a custom InputFormat class wires that reader in for you. Package these classes into a jar, then use that jar when you create the Hive table and load data into it.

Write a custom InputFormat (extending org.apache.hadoop.mapred.TextInputFormat) that returns a custom RecordReader (implementing org.apache.hadoop.mapred.RecordReader<K, V>). The RecordReader holds the logic to read and parse your files and returns tab-delimited rows.
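The part that is specific to your data is the per-line parsing inside the RecordReader. As an illustration only (the space-delimited "timestamp level message" layout and the class name LogLineParser are hypothetical, not from this mail), that conversion to a tab-delimited row might look like:

```java
// Minimal sketch of the parsing a custom RecordReader would perform
// per line. The log layout assumed here (space-delimited
// "timestamp level message") is a hypothetical example.
public class LogLineParser {

    // Convert one raw log line into the tab-delimited row that the
    // RecordReader would hand back to Hive as its value.
    public static String toTabDelimited(String rawLine) {
        // Split into at most 3 fields so the free-text message
        // keeps its internal spaces.
        String[] parts = rawLine.split(" ", 3);
        return String.join("\t", parts);
    }

    public static void main(String[] args) {
        System.out.println(toTabDelimited("2016-01-01T10:00:00 INFO startup complete"));
    }
}
```

In the real RecordReader, this logic would live in the next(K key, V value) method, reading lines from the split and setting the parsed row as the value.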

CREATE TABLE mytable (
  field1 STRING,
  ..
  fieldN INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS INPUTFORMAT 'namespace.CustomFileInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';

Hive requires an output format to be specified whenever you specify a custom input format, so I chose one of the built-in output formats (HiveIgnoreKeyTextOutputFormat writes plain text and discards the key).
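Before the CREATE TABLE statement will work, the jar containing the custom classes has to be visible to the Hive session. A typical sequence (the jar path and input path below are placeholders) would be:

```sql
-- Make the custom InputFormat/RecordReader classes available.
ADD JAR /path/to/custom-inputformat.jar;

-- Create the table as shown above, then load the raw log files.
LOAD DATA INPATH '/logs/raw/' INTO TABLE mytable;
```

ADD JAR only affects the current session; for permanent availability, place the jar in Hive's auxiliary library path.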

For reference, sample code for a custom input format is attached to this mail.