
This page compares the performance of the following SML compilers on a range of benchmarks.

There are tables for run time ratio, compile time, and code size.

All of the benchmarks are available here. Some were obtained from the SML/NJ benchmark suite, and some use data files in the DATA subdirectory.

Setup

All benchmarks were compiled and run on a 1.6 GHz Pentium 4 with 4 GB of RAM. The benchmarks were compiled with the default settings for all the compilers, except for Moscow ML, which was passed the -orthodox -standalone -toplevel switches. The Poly/ML executables were produced by use-ing the file, followed by a PolyML.commit. The SML/NJ executables were produced by wrapping the entire program in a local declaration whose body performs an SMLofNJ.exportFn.

For more details, or if you want to run the benchmarks yourself, please see the benchmark directory of the MLton sources.

Run time ratio

The following table gives the ratio of the run time of each benchmark when compiled by another compiler to the run time when compiled by MLton. The larger the number, the slower the generated code runs; a ratio greater than one means that the corresponding compiler produces code that runs more slowly than MLton's. A * entry means that the compiler failed to compile the benchmark or that the benchmark failed to run.

benchmark ML-Kit Moscow-ML Poly/ML SML/NJ
barnes-hut * 10.77 11.33 0.90
boyer * 8.22 2.09 2.76
checksum 9.08 * * 4.10
count-graphs 6.48 38.15 6.73 2.43
DLXSimulator 1.89 * * *
fft 2.55 * 46.32 1.08
fib 1.32 5.45 0.89 1.37
hamlet * 11.19 2.65 1.90
imp-for 4.84 76.22 12.49 6.89
knuth-bendix * 16.75 8.47 3.37
lexgen 1.87 5.18 1.90 1.43
life 2.19 22.75 9.66 1.27
logic * 5.25 1.28 0.84
mandelbrot 14.08 40.12 63.92 1.29
matrix-multiply * 67.89 22.12 5.04
md5 * * * 5.68
merge * * 1.32 5.68
mlyacc * 5.39 1.20 1.22
model-elimination * 6.99 3.13 1.59
mpuz 3.50 59.14 6.39 3.73
nucleic * * 20.33 0.57
peek 23.36 137.72 23.57 16.79
psdes-random 9.10 * * 6.51
ratio-regions 2.76 31.42 3.62 6.04
ray * 21.83 35.84 1.34
raytrace * * 37.31 2.61
simple 1.66 13.09 7.02 1.50
smith-normal-form * * * >1000
tailfib 1.45 35.23 2.35 1.43
tak 2.16 9.71 0.85 1.58
tensor * * * *
tsp 2.69 20.32 * 17.72
tyan * 11.56 1.47 0.78
vector-concat 1.60 13.57 1.89 9.03
vector-rev 2.03 18.18 2.83 62.72
vliw 1.95 9.36 2.13 1.41
wc-input1 17.16 * 7.63 9.41
wc-scanStream 18.59 * 305.46 7.73
zebra 5.54 24.12 5.69 6.90
zern * * * 2.80

For SML/NJ, the smith-normal-form benchmark was killed after running for over 36,000 seconds.
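For reference, each entry in the table above is a simple quotient of wall-clock run times. The sketch below shows the computation; the timings in the example are hypothetical, not taken from the table:

```python
def run_time_ratio(other_seconds: float, mlton_seconds: float) -> float:
    """Ratio of another compiler's run time to MLton's for the same benchmark.
    Values above 1.0 mean the other compiler's code ran slower than MLton's."""
    return other_seconds / mlton_seconds

# Hypothetical timings: if a benchmark runs in 21.5 s under some compiler
# and 2.0 s under MLton, the reported ratio would be 10.75.
print(run_time_ratio(21.5, 2.0))  # → 10.75
```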

Compile time

The following table gives the compile time of each benchmark in seconds. A * in an entry means that the compiler failed to compile the benchmark.

benchmark MLton ML-Kit Moscow-ML Poly/ML SML/NJ
barnes-hut 1.79 6.10 0.43 0.22 1.09
boyer 3.26 9.23 0.38 0.15 3.21
checksum 0.53 0.94 * * 0.16
count-graphs 1.16 2.02 0.11 0.07 0.71
DLXSimulator 2.67 7.40 * * *
fft 1.09 1.71 0.10 0.08 0.66
fib 0.60 1.10 0.04 0.03 0.19
hamlet 41.70 * 21.76 9.90 54.46
imp-for 0.64 1.22 0.05 0.03 0.20
knuth-bendix 1.88 4.95 0.19 0.17 1.50
lexgen 4.27 7.50 0.40 0.36 3.73
life 1.27 2.94 0.08 0.07 0.52
logic 2.13 5.61 0.22 0.14 1.60
mandelbrot 0.58 1.20 0.06 0.03 0.26
matrix-multiply 0.71 1.31 0.06 0.04 0.29
md5 1.12 2.72 * * 1.97
merge 0.68 1.15 0.02 0.02 0.26
mlyacc 19.35 42.67 3.82 1.58 16.80
model-elimination 16.80 * 2.42 2.95 27.78
mpuz 0.83 1.52 0.07 0.05 0.37
nucleic 59.42 36.22 * 0.54 2.69
peek 0.92 1.20 0.05 0.03 0.21
psdes-random 0.67 1.21 * * 0.24
ratio-regions 2.02 4.45 0.20 0.15 1.60
ray 3.09 3.79 0.15 0.12 0.99
raytrace 8.58 * * 0.65 5.44
simple 5.10 13.79 0.46 0.29 3.55
smith-normal-form 6.09 * * 0.14 2.46
tailfib 0.62 1.12 0.05 0.03 0.24
tak 0.61 1.10 0.05 0.03 0.19
tensor 2.57 * * * *
tsp 1.34 2.90 0.16 * 0.59
tyan 2.95 6.93 0.30 0.23 2.20
vector-concat 0.70 1.13 0.05 0.03 0.22
vector-rev 0.64 1.13 0.06 0.04 0.24
vliw 9.53 26.81 1.45 1.25 12.22
wc-input1 1.09 0.94 0.05 0.03 0.19
wc-scanStream 1.12 0.97 0.05 0.04 0.20
zebra 2.59 2.60 0.09 0.07 0.59
zern 0.84 * * * 0.48

Code size

The following table gives the code size of each benchmark in bytes. The size for MLton and the ML Kit is the sum of text and data for the standalone executable, as reported by size. The size for Moscow ML is the size in bytes of the executable a.out. The size for Poly/ML is the growth in the size of the database from the start of the session to after the commit. The size for SML/NJ is the size of the heap file created by exportFn and does not include the size of the SML/NJ runtime system (approximately 100K). A * in an entry means that the compiler failed to compile the benchmark.
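As a concrete illustration of the MLton and ML Kit measurement (text plus data as reported by size), here is a minimal sketch that sums those two columns from Berkeley-style size output; the sample numbers are made up, not drawn from the table below:

```python
def text_plus_data(size_output: str) -> int:
    """Return text + data (in bytes) from the default (Berkeley-style)
    output of the `size` utility."""
    header_line, value_line = size_output.strip().splitlines()[:2]
    columns = dict(zip(header_line.split(), value_line.split()))
    return int(columns["text"]) + int(columns["data"])

# Hypothetical `size` output for a standalone executable.
sample = """   text    data     bss     dec     hex filename
 116000    5200    1024  122224   1dd70 a.out"""

print(text_plus_data(sample))  # → 121200
```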

benchmark MLton ML-Kit Moscow-ML Poly/ML SML/NJ
barnes-hut 121,200 171,033 94,945 507,904 332,792
boyer 137,864 156,737 116,301 122,880 421,856
checksum 51,496 72,413 * * 320,512
count-graphs 69,784 88,617 84,613 98,304 359,472
DLXSimulator 107,497 162,261 * * *
fft 60,156 85,689 84,094 65,536 332,808
fib 51,496 16,125 79,878 49,152 294,880
hamlet 1,233,161 * 277,168 5,316,608 1,275,080
imp-for 51,488 16,853 80,041 57,344 295,904
knuth-bendix 92,681 97,177 88,440 180,224 326,624
lexgen 163,414 215,745 104,884 188,416 396,280
life 71,496 79,253 83,390 65,536 309,216
logic 110,720 115,217 87,252 114,688 335,840
mandelbrot 51,608 77,905 81,340 57,344 300,000
matrix-multiply 52,064 96,121 82,417 57,344 326,640
md5 60,361 91,649 * * 322,560
merge 52,848 25,585 80,091 49,152 295,912
mlyacc 480,566 502,081 148,287 2,850,816 701,480
model-elimination 611,248 * 175,885 2,146,304 896,136
mpuz 56,440 75,925 82,383 81,920 308,192
nucleic 198,408 268,253 * 221,184 386,032
peek 58,537 60,829 81,619 57,344 303,120
psdes-random 52,200 25,617 * * 301,024
ratio-regions 69,896 98,345 87,485 73,728 337,904
ray 112,105 112,309 89,860 147,456 387,144
raytrace 281,638 * * 524,288 508,008
simple 193,828 202,561 94,397 475,136 643,104
smith-normal-form 196,972 * * 131,072 488,520
tailfib 51,296 16,285 79,939 57,344 294,880
tak 51,704 16,041 79,926 57,344 290,784
tensor 119,179 * * * *
tsp 65,345 99,497 86,147 * 323,576
tyan 111,577 146,117 91,587 196,608 371,728
vector-concat 52,664 24,485 80,193 49,152 305,136
vector-rev 51,872 24,681 80,075 57,344 305,136
vliw 323,906 477,129 135,387 704,512 628,816
wc-input1 70,142 132,749 85,772 49,152 300,000
wc-scanStream 70,630 133,261 85,948 49,152 301,024
zebra 123,961 44,709 83,419 90,112 314,352
zern 57,427 * * * 339,992


MLton
Last modified: Mon Jul 7 16:21:01 PDT 2003