This page compares the performance of the following SML compilers on a range of benchmarks: MLton, the ML Kit, Moscow ML, Poly/ML, and SML/NJ. There are tables comparing run time, compile time, and code size.

All of the benchmarks are available in the benchmark directory of the MLton sources (see below); some of them were obtained from other sources. Some of the benchmarks use data files in the DATA subdirectory.

Setup

All benchmarks were compiled and run on a 1.6 GHz Pentium 4 with 512 MB of physical memory. The benchmarks were compiled with the default settings for all the compilers, except for Moscow ML, which was passed the -orthodox -standalone -toplevel switches. The Poly/ML executables were produced by use-ing the file, followed by a PolyML.commit. The SML/NJ executables were produced by wrapping the entire program in a local declaration whose body performs an SMLofNJ.exportFn.
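
A minimal sketch of that SML/NJ wrapping is shown below, assuming a benchmark whose entry point is a function run : unit -> unit; the function name, the heap-image name "benchmark", and the call to run are placeholders for illustration, not part of the actual benchmark sources:

   local
      (* run : unit -> unit stands in for the benchmark's entry point. *)
      fun main (_ : string, _ : string list) =
         (run (); OS.Process.success)
   in
      (* Write a heap image named "benchmark"; when the image is loaded,
         SML/NJ applies main to the command name and its arguments. *)
      val _ = SMLofNJ.exportFn ("benchmark", main)
   end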

For more details, or if you want to run the benchmarks yourself, please see the benchmark directory of the MLton sources.

Run time ratio

The following table gives, for each benchmark, the ratio of its run time when compiled by another compiler to its run time when compiled by MLton. The larger the number, the slower the generated code runs relative to MLton; an entry less than one means that compiler's executable ran faster than MLton's. For example, the 4.7 entry for barnes-hut under ML-Kit means that the ML Kit executable took 4.7 times as long to run as the MLton executable. A * entry means that the corresponding compiler failed to compile the benchmark or that the benchmark failed to run.

benchmark ML-Kit Moscow-ML Poly/ML SML/NJ
barnes-hut 4.7 25.2 * 0.9
checksum * * * 5.5
count-graphs 12.0 47.3 3.9 2.3
DLXSimulator 1.5 * * *
fft 6.2 18.4 11.5 0.8
fib 0.8 4.9 1.1 1.0
hamlet * 14.2 2.0 1.8
imp-for 7.7 51.0 5.3 5.1
knuth-bendix 6.8 17.4 3.7 2.5
lexgen 2.8 6.6 2.1 1.4
life 9.1 32.7 8.8 1.4
logic 3.1 9.4 1.2 1.4
mandelbrot 6.7 72.8 87.7 2.4
matrix-multiply 27.1 122.3 30.7 8.2
md5 * * * 15.6
merge * * 1.3 10.3
mlyacc 2.6 8.6 1.3 1.7
mpuz 7.9 50.5 4.2 2.6
nucleic 2.8 22.2 27.2 0.4
peek 16.5 143.9 22.2 10.6
psdes-random 9.0 * * 3.5
ratio-regions 14.9 61.7 2.8 6.5
ray 5.9 27.3 34.6 1.0
raytrace * * 38.5 2.4
simple 2.4 17.7 6.6 1.4
smith-normal-form * * 20.1 101.2
tailfib 1.2 37.5 3.5 1.4
tak 1.7 12.9 1.2 1.7
tensor * * * 5.2
tsp 4.5 44.5 * 2.0
tyan * 20.3 1.3 1.3
vector-concat 14.0 25.4 2.2 6.5
vector-rev 17.5 86.7 4.6 34.7
vliw 3.8 12.3 1.7 1.9
wc-input1 18.3 * 8.2 11.0
wc-scanStream 32.8 * 163.5 5.9
zebra 22.1 48.7 6.0 11.1
zern * * * 1.6

Compile time

The following table gives the compile time of each benchmark in seconds. A * in an entry means that the compiler failed to compile the benchmark.

benchmark MLton ML-Kit Moscow-ML Poly/ML SML/NJ
barnes-hut 1.4 4.9 0.5 * 1.0
checksum 0.4 * * * 0.2
count-graphs 1.0 1.6 0.2 0.1 0.7
DLXSimulator 2.7 6.9 * * *
fft 0.8 1.3 0.1 0.1 0.5
fib 0.3 0.6 0.0 0.0 0.1
hamlet 26.9 * 26.1 10.5 48.1
imp-for 0.4 0.6 0.1 0.0 0.2
knuth-bendix 1.2 3.2 0.2 0.2 1.1
lexgen 3.7 6.0 0.5 0.3 2.9
life 0.8 1.7 0.1 0.1 0.4
logic 1.6 3.9 0.3 0.1 1.1
mandelbrot 0.4 0.7 0.0 0.0 0.2
matrix-multiply 0.4 0.7 0.1 0.0 0.2
md5 0.7 * * * 1.1
merge 0.4 0.6 0.1 0.0 0.1
mlyacc 12.8 36.0 4.6 1.4 14.2
mpuz 0.5 0.8 0.1 0.1 0.3
nucleic 1.4 14.8 1.3 0.3 1.4
peek 0.5 0.6 0.1 0.0 0.2
psdes-random 0.4 0.6 * * 0.2
ratio-regions 1.4 2.6 0.2 0.1 1.1
ray 2.2 2.2 0.2 0.1 0.7
raytrace 6.1 * * 0.6 4.0
simple 3.9 8.5 0.6 0.2 2.9
smith-normal-form 3.8 * * 0.1 1.9
tailfib 0.3 0.5 0.0 0.0 0.1
tak 0.3 0.5 0.0 0.0 0.1
tensor 1.8 * * * 2.1
tsp 0.9 1.6 0.2 * 0.5
tyan 2.2 * 0.4 0.2 1.7
vector-concat 0.4 0.6 0.0 0.0 0.2
vector-rev 0.4 0.6 0.0 0.0 0.2
vliw 7.9 22.2 1.9 1.3 10.7
wc-input1 1.0 0.6 0.1 0.0 0.2
wc-scanStream 1.1 0.6 0.1 0.0 0.2
zebra 3.2 1.6 0.1 0.1 0.5
zern 0.7 * * * 0.4

Code size

The following table gives the code size of each benchmark in bytes. The sizes for MLton and the ML Kit are the sum of the text and data segments of the standalone executable, as reported by the size command. The size for Moscow ML is the size in bytes of the executable a.out. The size for Poly/ML is the growth in the size of the database from before the session started to after the commit. The size for SML/NJ is the size of the heap file created by exportFn and does not include the SML/NJ runtime system (approximately 100 KB). A * in an entry means that the compiler failed to compile the benchmark.

benchmark MLton ML-Kit Moscow-ML Poly/ML SML/NJ
barnes-hut 56,959 179,112 94,990 * 328,696
checksum 22,733 * * * 333,528
count-graphs 44,205 109,232 84,575 98,304 355,376
DLXSimulator 88,285 178,376 * * *
fft 32,765 106,576 84,095 65,536 329,736
fib 22,765 67,936 79,878 49,152 307,896
hamlet 1,105,596 * 277,168 5,316,608 1,259,720
imp-for 22,765 68,240 80,040 57,344 308,920
knuth-bendix 64,358 114,520 88,439 180,224 322,528
lexgen 152,165 226,384 104,883 188,416 392,184
life 39,501 99,472 83,390 90,112 318,424
logic 79,917 135,312 87,252 114,688 331,744
mandelbrot 22,797 100,808 81,341 57,344 313,016
matrix-multiply 23,341 117,752 82,419 57,344 339,656
md5 32,550 * * * 332,816
merge 24,013 68,144 80,091 49,152 308,928
mlyacc 471,365 524,544 148,286 2,908,160 690,216
mpuz 27,309 88,816 82,381 81,920 321,208
nucleic 61,677 233,208 207,154 204,800 352,240
peek 31,494 76,752 81,618 57,344 312,040
psdes-random 24,237 83,864 * * 314,040
ratio-regions 42,605 110,576 87,485 73,728 334,832
ray 85,692 122,584 89,860 147,456 383,048
raytrace 237,321 * * 524,288 502,888
simple 181,661 193,216 94,397 475,136 637,984
smith-normal-form 137,519 * * 131,072 484,424
tailfib 22,477 67,824 79,939 57,344 307,896
tak 22,861 67,712 79,928 57,344 303,800
tensor 57,006 * * * 342,048
tsp 37,894 114,928 86,140 * 332,784
tyan 85,798 * 91,587 204,800 368,656
vector-concat 23,725 77,016 80,191 49,152 318,152
vector-rev 23,661 77,208 80,073 57,344 318,152
vliw 299,861 417,144 135,386 696,320 618,576
wc-input1 48,638 143,616 86,900 49,152 313,016
wc-scanStream 49,598 144,064 87,076 49,152 314,040
zebra 109,670 84,744 83,419 90,112 323,560
zern 30,396 * * * 365,296