That's what I got yesterday while running my shiny new MapReduce job:
java.lang.OutOfMemoryError: GC overhead limit exceeded
An OutOfMemoryError in Java can have several causes: no more heap memory available, the GC being called too often (my case), no more free PermGen space, etc.
To get more information about what is going on inside the JVM, we have to tune how the JVM is run. I'm using the Hortonworks distribution, so I went to Ambari, opened the MapReduce configuration tab and found mapreduce.reduce.java.opts. This property is responsible for the reducer's JVM configuration. Let's add GarbageCollector logging:
-verbose:gc -Xloggc:/tmp/@taskid@.gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
This writes the GC log to the local filesystem under /tmp; the file name is the task id with a .gc extension.
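In Ambari the new flags are simply appended to the existing value of mapreduce.reduce.java.opts. A minimal sketch of the resulting value, assuming a 2 GB reducer heap (-Xmx2048m is only an illustration, keep whatever heap size your cluster already uses):

-Xmx2048m -verbose:gc -Xloggc:/tmp/@taskid@.gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps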
In general, the following properties are important for JVM tuning; a sample configuration snippet follows the list:
- mapred.child.java.opts - Provides JVM options to pass to map and reduce tasks. Usually includes the -Xmx option to set the maximum heap size, and may also include -Xms to set the initial heap size.
- mapreduce.map.java.opts - Overrides mapred.child.java.opts for map tasks.
- mapreduce.reduce.java.opts - Overrides mapred.child.java.opts for reduce tasks.
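For clusters configured by hand rather than through Ambari, the same properties live in mapred-site.xml. A hedged sketch showing how the task-specific options override the generic ones (the heap sizes are placeholders, not recommendations):

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2048m -verbose:gc -Xloggc:/tmp/@taskid@.gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps</value>
</property>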
It's a bit difficult to read the raw log, but fortunately several UI tools exist on the market. I prefer the open-source GCViewer, which is a Java application and doesn't require installation. It supports a wide range of JVMs; moreover, it has a command-line interface for generating reports, so report generation can be automated.
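As a sketch of that automation (the jar version, log file name and output names below are only examples; check the GCViewer documentation for the exact options your version supports), a summary report can be produced from the shell:

java -jar gcviewer-1.36.jar /tmp/mytask.gc summary.csv chart.png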
Opening the GC log in GCViewer gives a detailed overview of the memory state:
Legend:
- Green line that shows the length of all GCs
- Magenta area that shows the size of the tenured generation (not available without PrintGCDetails)
- Orange area that shows the size of the young generation (not available without PrintGCDetails)
- Blue line that shows used heap size