MS MARCO V1 Passage

Results are reported on three query sets: TREC 2019 DL (AP, nDCG@10, R@1K), TREC 2020 DL (AP, nDCG@10, R@1K), and the MS MARCO dev subset (RR@10, R@1K).
BM25, default parameters. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --topics dl19-passage \
  --index msmarco-v1-passage-slim \
  --output run.msmarco-v1-passage.bm25-default.dl19.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-default.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-default.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-default.dl19.txt
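The --k1 0.9 --b 0.4 flags set the two BM25 parameters: k1 controls term-frequency saturation and b controls document-length normalization. As a reference for what they do, here is a toy sketch of per-term scoring in the Lucene-style BM25 variant (all numbers hypothetical):

```python
import math

def bm25_term_score(tf, df, num_docs, doc_len, avg_doc_len, k1=0.9, b=0.4):
    # Lucene-style BM25: dampened IDF, tf saturated by k1,
    # length normalization controlled by b.
    idf = math.log(1 + (num_docs - df + 0.5) / (df + 0.5))
    norm = k1 * (1 - b + b * doc_len / avg_doc_len)
    return idf * tf * (k1 + 1) / (tf + norm)

# Term frequency saturates: a second occurrence adds less than the first.
s1 = bm25_term_score(tf=1, df=10, num_docs=1000, doc_len=50, avg_doc_len=60)
s2 = bm25_term_score(tf=2, df=10, num_docs=1000, doc_len=50, avg_doc_len=60)
```

With b=0 length normalization is disabled entirely; larger k1 lets repeated occurrences of a term keep adding score for longer.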
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --topics dl20 \
  --index msmarco-v1-passage-slim \
  --output run.msmarco-v1-passage.bm25-default.dl20.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-default.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-default.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-default.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-passage-dev-subset \
  --index msmarco-v1-passage-slim \
  --output run.msmarco-v1-passage.bm25-default.dev.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-default.dev.txt
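On the dev queries, -M 10 truncates each ranking to depth 10 before recip_rank is computed, yielding RR@10. The metric itself is simple; a sketch with made-up query and document ids:

```python
def rr_at_10(run, qrels):
    # run: qid -> ranked list of docids; qrels: qid -> set of relevant docids.
    total = 0.0
    for qid, ranking in run.items():
        for rank, docid in enumerate(ranking[:10], start=1):
            if docid in qrels.get(qid, set()):
                total += 1.0 / rank  # reciprocal rank of first relevant hit
                break
    return total / len(run)  # mean over all queries

run = {"q1": ["d3", "d1", "d2"], "q2": ["d9", "d8"]}
qrels = {"q1": {"d1"}, "q2": {"d7"}}
score = rr_at_10(run, qrels)  # (1/2 + 0) / 2 = 0.25
```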
BM25+RM3, default parameters. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --topics dl19-passage \
  --index msmarco-v1-passage-full \
  --output run.msmarco-v1-passage.bm25-rm3-default.dl19.txt \
  --bm25 --k1 0.9 --b 0.4 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rm3-default.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rm3-default.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rm3-default.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --topics dl20 \
  --index msmarco-v1-passage-full \
  --output run.msmarco-v1-passage.bm25-rm3-default.dl20.txt \
  --bm25 --k1 0.9 --b 0.4 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rm3-default.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rm3-default.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rm3-default.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-passage-dev-subset \
  --index msmarco-v1-passage-full \
  --output run.msmarco-v1-passage.bm25-rm3-default.dev.txt \
  --bm25 --k1 0.9 --b 0.4 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-default.dev.txt
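The --rm3 flag enables RM3 pseudo-relevance feedback, which is why these runs use the -full index rather than -slim: RM3 needs the stored document vectors. Conceptually, RM3 estimates a term distribution from the top-ranked documents and interpolates it with the original query model (Pyserini's defaults are 10 feedback documents, 10 feedback terms, and original-query weight 0.5). A toy sketch of the interpolation step, with invented term weights:

```python
def rm3_expand(query_model, feedback_model, orig_weight=0.5, fb_terms=2):
    # Keep the top fb_terms feedback terms, renormalize them, then
    # interpolate with the original (already-normalized) query model.
    top = dict(sorted(feedback_model.items(), key=lambda kv: -kv[1])[:fb_terms])
    z = sum(top.values())
    top = {t: w / z for t, w in top.items()}
    terms = set(query_model) | set(top)
    return {t: orig_weight * query_model.get(t, 0.0)
               + (1 - orig_weight) * top.get(t, 0.0) for t in terms}

expanded = rm3_expand({"hubble": 1.0},
                      {"telescope": 0.6, "hubble": 0.3, "nasa": 0.1})
```

The expanded query is still a distribution (weights sum to 1), with original query terms dominating but feedback terms like "telescope" now contributing.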
BM25 with doc2query-T5 expansion, default parameters. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --topics dl19-passage \
  --index msmarco-v1-passage-d2q-t5 \
  --output run.msmarco-v1-passage.bm25-d2q-t5-default.dl19.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --topics dl20 \
  --index msmarco-v1-passage-d2q-t5 \
  --output run.msmarco-v1-passage.bm25-d2q-t5-default.dl20.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-passage-dev-subset \
  --index msmarco-v1-passage-d2q-t5 \
  --output run.msmarco-v1-passage.bm25-d2q-t5-default.dev.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-d2q-t5-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-d2q-t5-default.dev.txt
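The -d2q-t5 indexes are built over passages expanded with doc2query-T5: a T5 model predicts queries each passage might answer, and the predictions are appended to the passage text before indexing, so plain BM25 runs over the expanded text. A toy illustration (passage and predictions invented):

```python
from collections import Counter

def expand_passage(passage, predicted_queries):
    # doc2query: concatenate model-predicted queries onto the passage
    # before indexing, boosting the frequency of likely query terms.
    return " ".join([passage] + predicted_queries)

passage = "the manhattan project developed the first atomic bomb"
predictions = ["who developed the atomic bomb",
               "what was the manhattan project"]
tf = Counter(expand_passage(passage, predictions).split())
```

Expansion both reweights terms already in the passage and (in the non-noexp setting) injects new vocabulary, which is what lifts BM25 recall.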
BM25+RM3 with doc2query-T5 expansion, default parameters. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --topics dl19-passage \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl19.txt \
  --bm25 --rm3 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --topics dl20 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl20.txt \
  --bm25 --rm3 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-passage-dev-subset \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dev.txt \
  --bm25 --rm3 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dev.txt
uniCOIL (noexp), pre-encoded queries. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics dl19-passage-unicoil-noexp \
  --output run.msmarco-v1-passage.unicoil-noexp.dl19.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.unicoil-noexp.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.unicoil-noexp.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.unicoil-noexp.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics dl20-unicoil-noexp \
  --output run.msmarco-v1-passage.unicoil-noexp.dl20.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.unicoil-noexp.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.unicoil-noexp.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.unicoil-noexp.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics msmarco-passage-dev-subset-unicoil-noexp \
  --output run.msmarco-v1-passage.unicoil-noexp.dev.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-noexp.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-noexp.dev.txt
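The --impact flag switches Lucene from BM25 scoring to summing precomputed term impacts: uniCOIL stores an integer weight per term, and a document's score is simply the dot product of query and document term weights, with no IDF or length normalization. A toy sketch:

```python
def impact_score(query_weights, doc_weights):
    # Learned sparse retrieval: score is the dot product of integer
    # term impacts over the terms the query and document share.
    return sum(w * doc_weights.get(term, 0) for term, w in query_weights.items())

score = impact_score({"cat": 3, "food": 1}, {"cat": 5, "toy": 2})  # 3*5 + 1*0 = 15
```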
uniCOIL, pre-encoded queries. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil \
  --topics dl19-passage-unicoil \
  --output run.msmarco-v1-passage.unicoil.dl19.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.unicoil.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.unicoil.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.unicoil.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil \
  --topics dl20-unicoil \
  --output run.msmarco-v1-passage.unicoil.dl20.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.unicoil.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.unicoil.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.unicoil.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil \
  --topics msmarco-passage-dev-subset-unicoil \
  --output run.msmarco-v1-passage.unicoil.dev.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil.dev.txt
BM25, tuned parameters. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --topics dl19-passage \
  --index msmarco-v1-passage-slim \
  --output run.msmarco-v1-passage.bm25-tuned.dl19.txt \
  --bm25 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-tuned.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --topics dl20 \
  --index msmarco-v1-passage-slim \
  --output run.msmarco-v1-passage.bm25-tuned.dl20.txt \
  --bm25 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-tuned.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-passage-dev-subset \
  --index msmarco-v1-passage-slim \
  --output run.msmarco-v1-passage.bm25-tuned.dev.txt \
  --bm25 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-tuned.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-tuned.dev.txt
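The tuned parameters for the original passage text (k1=0.82, b=0.68, per Pyserini's documentation) come from a grid search over (k1, b) maximizing recall@1000 on the dev queries. The loop itself is trivial; a sketch with a stand-in evaluate function (a real run would retrieve and call trec_eval at each grid point):

```python
def tune_bm25(evaluate, k1_grid, b_grid):
    # Exhaustive grid search; evaluate(k1, b) stands in for
    # "run retrieval on dev with these parameters, return recall@1000".
    best_score, best_params = float("-inf"), None
    for k1 in k1_grid:
        for b in b_grid:
            score = evaluate(k1, b)
            if score > best_score:
                best_score, best_params = score, (k1, b)
    return best_params

# Mock metric peaking at (0.82, 0.68), for illustration only.
mock = lambda k1, b: -((k1 - 0.82) ** 2 + (b - 0.68) ** 2)
best = tune_bm25(mock, [0.6, 0.82, 0.9], [0.4, 0.68, 0.8])
```

Note the tuning target: parameters chosen for dev recall@1000 need not be optimal for nDCG@10 on the TREC query sets.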
BM25+RM3, tuned parameters. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --topics dl19-passage \
  --index msmarco-v1-passage-full \
  --output run.msmarco-v1-passage.bm25-rm3-tuned.dl19.txt \
  --bm25 --rm3 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --topics dl20 \
  --index msmarco-v1-passage-full \
  --output run.msmarco-v1-passage.bm25-rm3-tuned.dl20.txt \
  --bm25 --rm3 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-passage-dev-subset \
  --index msmarco-v1-passage-full \
  --output run.msmarco-v1-passage.bm25-rm3-tuned.dev.txt \
  --bm25 --rm3 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-tuned.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-tuned.dev.txt
BM25 with doc2query-T5 expansion, tuned parameters. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --topics dl19-passage \
  --index msmarco-v1-passage-d2q-t5 \
  --output run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl19.txt \
  --bm25 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --topics dl20 \
  --index msmarco-v1-passage-d2q-t5 \
  --output run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl20.txt \
  --bm25 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-passage-dev-subset \
  --index msmarco-v1-passage-d2q-t5 \
  --output run.msmarco-v1-passage.bm25-d2q-t5-tuned.dev.txt \
  --bm25 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-d2q-t5-tuned.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-d2q-t5-tuned.dev.txt
BM25+RM3 with doc2query-T5 expansion, tuned parameters. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --topics dl19-passage \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl19.txt \
  --bm25 --rm3 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --topics dl20 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl20.txt \
  --bm25 --rm3 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-passage-dev-subset \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dev.txt \
  --bm25 --rm3 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dev.txt
uniCOIL (noexp), on-the-fly query encoding. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics dl19-passage --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-noexp-otf.dl19.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics dl20 --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-noexp-otf.dl20.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics msmarco-passage-dev-subset --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-noexp-otf.dev.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-noexp-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-noexp-otf.dev.txt
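The -otf variants pass --encoder instead of pre-encoded topics: the uniCOIL query encoder runs at search time and produces a float weight per query term. To search these weights against a Lucene impact index they are quantized to integers (scaling by 100 and rounding is the convention used for uniCOIL). A sketch:

```python
def quantize(term_weights, scale=100):
    # Map float encoder weights to the integer impacts Lucene stores;
    # terms that round to zero are dropped.
    quantized = {t: round(w * scale) for t, w in term_weights.items()}
    return {t: w for t, w in quantized.items() if w > 0}

impacts = quantize({"cat": 1.234, "##s": 0.471, "the": 0.001})
```

Because of this rounding, on-the-fly runs can differ very slightly from runs over pre-encoded topics.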
uniCOIL, on-the-fly query encoding. Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil \
  --topics dl19-passage --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-otf.dl19.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.unicoil-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.unicoil-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.unicoil-otf.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil \
  --topics dl20 --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-otf.dl20.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.unicoil-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.unicoil-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.unicoil-otf.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --index msmarco-v1-passage-unicoil \
  --topics msmarco-passage-dev-subset --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-otf.dev.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-otf.dev.txt
TCT_ColBERT-V2-HNP, pre-encoded queries: runs on TREC 2019 and TREC 2020 queries are not available (pre-encoded queries are provided only for the dev set).
TCT_ColBERT-V2-HNP, pre-encoded queries. Command to generate run on dev queries:
python -m pyserini.search.faiss \
  --index msmarco-passage-tct_colbert-v2-hnp-bf \
  --topics msmarco-passage-dev-subset --encoded-queries tct_colbert-v2-hnp-msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.tct_colbert-v2-hnp.dev.txt \
  --batch-size 36 --threads 12
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.tct_colbert-v2-hnp.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.tct_colbert-v2-hnp.dev.txt
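Retrieval here moves from Lucene to Faiss: TCT_ColBERT-V2-HNP encodes queries and passages into dense vectors, and the -bf ("brute force", i.e., flat) index scores every passage by inner product. Stripped of Faiss, the search is just an exhaustive top-k over dot products (vectors below are invented):

```python
def flat_search(query_vec, doc_vecs, k=2):
    # Exhaustive inner-product search over a flat index of dense vectors.
    scored = [(sum(q * d for q, d in zip(query_vec, vec)), docid)
              for docid, vec in doc_vecs.items()]
    scored.sort(reverse=True)
    return [docid for _, docid in scored[:k]]

docs = {"a": [0.9, 0.1], "b": [0.1, 1.0], "c": [0.5, 0.5]}
top = flat_search([1.0, 0.0], docs, k=2)  # ["a", "c"]
```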
TCT_ColBERT-V2-HNP, on-the-fly query encoding. Command to generate run on TREC 2019 queries:
python -m pyserini.search.faiss \
  --index msmarco-passage-tct_colbert-v2-hnp-bf \
  --topics dl19-passage --encoder castorini/tct_colbert-v2-hnp-msmarco \
  --output run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl19.txt \
  --batch-size 36 --threads 12
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.faiss \
  --index msmarco-passage-tct_colbert-v2-hnp-bf \
  --topics dl20 --encoder castorini/tct_colbert-v2-hnp-msmarco \
  --output run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl20.txt \
  --batch-size 36 --threads 12
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.faiss \
  --index msmarco-passage-tct_colbert-v2-hnp-bf \
  --topics msmarco-passage-dev-subset --encoder castorini/tct_colbert-v2-hnp-msmarco \
  --output run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dev.txt \
  --batch-size 36 --threads 12
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dev.txt
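A note on the evaluation flags used throughout: the TREC DL qrels are graded 0-3, so -l 2 binarizes at grade 2 for AP and recall, while ndcg_cut.10 uses the graded labels directly with linear gain. A sketch of the latter (query ids and grades invented):

```python
import math

def ndcg_at_10(ranking, grades):
    # trec_eval's ndcg_cut uses linear gain: gain(d) = grade(d).
    dcg = sum(grades.get(d, 0) / math.log2(i + 2)
              for i, d in enumerate(ranking[:10]))
    ideal = sorted(grades.values(), reverse=True)[:10]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

grades = {"d1": 3, "d2": 1}                # hypothetical graded qrels
perfect = ndcg_at_10(["d1", "d2"], grades)  # ideal ordering -> 1.0
swapped = ndcg_at_10(["d2", "d1"], grades)  # penalized for the swap
```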