MS MARCO V2 Passage

Results table (scores not recovered in this copy): for each retrieval condition below, effectiveness is reported on the TREC 2021 queries (AP, nDCG@10, RR@100, R@100, R@1K) and on the dev and dev2 queries (RR@100, R@1K).
BM25 (default parameters), original passage corpus.

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --topics dl21 \
  --index msmarco-v2-passage-slim \
  --output run.msmarco-v2-passage.bm25-default.dl21.txt \
  --bm25
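The `--output` file is written in standard TREC run format, one retrieved passage per line. A minimal parsing sketch (the docid, score, and run tag below are made up):

```python
# TREC run format: qid Q0 docid rank score run_tag (values here are made up)
line = "2082 Q0 msmarco_passage_01_523 1 11.50 Anserini"
qid, _, docid, rank, score, tag = line.split()
print(qid, docid, int(rank), float(score), tag)
```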
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.bm25-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.bm25-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-default.dl21.txt
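In the commands above, `-l 2` binarizes the graded DL21 judgments at relevance ≥ 2 (the TREC Deep Learning convention) and `-M 100` truncates the run to the top 100 hits before scoring. What `recip_rank` then computes, sketched on toy data (judgments and ranking are made up):

```python
def rr_at_k(ranked_docids, qrels, min_rel=2, k=100):
    """Reciprocal rank of the first hit with graded relevance >= min_rel."""
    for rank, docid in enumerate(ranked_docids[:k], start=1):
        if qrels.get(docid, 0) >= min_rel:
            return 1.0 / rank
    return 0.0

qrels = {'d1': 1, 'd2': 3, 'd3': 0}   # graded judgments for one query
run = ['d3', 'd1', 'd2']              # system ranking
print(rr_at_k(run, qrels))            # d1 (rel=1) is below threshold; d2 counts
```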
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev \
  --index msmarco-v2-passage-slim \
  --output run.msmarco-v2-passage.bm25-default.dev.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-default.dev.txt
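The dev qrels are sparse binary judgments (typically one relevant passage per query), so no `-l 2` threshold appears here. What `recall.1000` computes, on toy data:

```python
def recall_at_k(ranked, relevant, k=1000):
    """Fraction of relevant docids that appear in the top k hits."""
    return len(set(ranked[:k]) & relevant) / len(relevant)

print(recall_at_k(['d1', 'd9', 'd2'], {'d1', 'd2', 'd5'}))   # 2 of 3 found
```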
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev2 \
  --index msmarco-v2-passage-slim \
  --output run.msmarco-v2-passage.bm25-default.dev2.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-default.dev2.txt
BM25 (default parameters), augmented passage corpus.

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --topics dl21 \
  --index msmarco-v2-passage-augmented-slim \
  --output run.msmarco-v2-passage.bm25-augmented-default.dl21.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.bm25-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.bm25-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-augmented-default.dl21.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev \
  --index msmarco-v2-passage-augmented-slim \
  --output run.msmarco-v2-passage.bm25-augmented-default.dev.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-augmented-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-augmented-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev2 \
  --index msmarco-v2-passage-augmented-slim \
  --output run.msmarco-v2-passage.bm25-augmented-default.dev2.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-augmented-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-augmented-default.dev2.txt
BM25 + RM3 pseudo-relevance feedback (default parameters); RM3 needs stored document vectors, hence the "full" index variant rather than "slim".

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --topics dl21 \
  --index msmarco-v2-passage-full \
  --output run.msmarco-v2-passage.bm25-rm3-default.dl21.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-rm3-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-rm3-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.bm25-rm3-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.bm25-rm3-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-rm3-default.dl21.txt
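`--rm3` enables RM3 pseudo-relevance feedback: an expansion term distribution is estimated from the top-ranked passages of an initial BM25 pass and interpolated with the original query model, which is then re-run. A sketch of the interpolation step with made-up term weights (Pyserini's default keeps the original query weight at 0.5):

```python
def rm3_interpolate(query_model, feedback_model, alpha=0.5):
    """Mix the original query term weights with feedback term weights."""
    terms = set(query_model) | set(feedback_model)
    return {t: (1 - alpha) * query_model.get(t, 0.0)
               + alpha * feedback_model.get(t, 0.0) for t in terms}

q = {'hydrogen': 0.5, 'fuel': 0.5}                    # original query model
fb = {'hydrogen': 0.3, 'cell': 0.4, 'energy': 0.3}    # feedback model (made up)
mixed = rm3_interpolate(q, fb)
print(mixed)   # expanded query includes 'cell' and 'energy'
```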
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev \
  --index msmarco-v2-passage-full \
  --output run.msmarco-v2-passage.bm25-rm3-default.dev.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev2 \
  --index msmarco-v2-passage-full \
  --output run.msmarco-v2-passage.bm25-rm3-default.dev2.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-default.dev2.txt
BM25 + RM3 (default parameters), augmented passage corpus.

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --topics dl21 \
  --index msmarco-v2-passage-augmented-full \
  --output run.msmarco-v2-passage.bm25-rm3-augmented-default.dl21.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl21.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev \
  --index msmarco-v2-passage-augmented-full \
  --output run.msmarco-v2-passage.bm25-rm3-augmented-default.dev.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-augmented-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-augmented-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev2 \
  --index msmarco-v2-passage-augmented-full \
  --output run.msmarco-v2-passage.bm25-rm3-augmented-default.dev2.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-augmented-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-augmented-default.dev2.txt
BM25 with doc2query-T5 document expansion (default parameters).

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --topics dl21 \
  --index msmarco-v2-passage-d2q-t5 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-default.dl21.txt \
  --bm25
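The `*-d2q-t5` indexes were built over passages expanded with queries predicted by a doc2query-T5 model: likely query terms are appended to each passage before indexing, while retrieval itself remains plain BM25. Roughly (passage and predictions below are made up):

```python
# doc2query-T5 expansion (sketched): append predicted queries to passage text
passage = "The Manhattan Project was a WWII research effort."
predicted = ["what was the manhattan project", "manhattan project ww2"]
expanded = passage + " " + " ".join(predicted)   # this text gets indexed
print(expanded)
```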
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl21.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev \
  --index msmarco-v2-passage-d2q-t5 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-default.dev.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-d2q-t5-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-d2q-t5-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev2 \
  --index msmarco-v2-passage-d2q-t5 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-default.dev2.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-d2q-t5-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-d2q-t5-default.dev2.txt
BM25 with doc2query-T5 expansion, augmented passage corpus.

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --topics dl21 \
  --index msmarco-v2-passage-augmented-d2q-t5 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl21.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl21.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev \
  --index msmarco-v2-passage-augmented-d2q-t5 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev2 \
  --index msmarco-v2-passage-augmented-d2q-t5 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev2.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev2.txt
BM25 + RM3 over the doc2query-T5 expanded corpus; the "docvectors" index variant stores the document vectors that RM3 needs.

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --topics dl21 \
  --index msmarco-v2-passage-d2q-t5-docvectors \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl21.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl21.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev \
  --index msmarco-v2-passage-d2q-t5-docvectors \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev2 \
  --index msmarco-v2-passage-d2q-t5-docvectors \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev2.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev2.txt
BM25 + RM3 over the doc2query-T5 expanded, augmented passage corpus.

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --topics dl21 \
  --index msmarco-v2-passage-augmented-d2q-t5-docvectors \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl21.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl21.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev \
  --index msmarco-v2-passage-augmented-d2q-t5-docvectors \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --topics msmarco-v2-passage-dev2 \
  --index msmarco-v2-passage-augmented-d2q-t5-docvectors \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev2.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev2.txt
uniCOIL (noexp) zero-shot, with pre-encoded queries.

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics dl21-unicoil-noexp \
  --output run.msmarco-v2-passage.unicoil-noexp.dl21.txt \
  --batch 36 --threads 12 --hits 1000 --impact
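`--impact` switches scoring from BM25 to summed precomputed impact weights: each query term carries an integer weight from the uniCOIL encoder, each matching posting carries a quantized document-side weight, and the score is their dot product. Sketched with made-up weights:

```python
def impact_score(query_weights, doc_weights):
    """Dot product of query and document term impact weights."""
    return sum(w * doc_weights.get(t, 0) for t, w in query_weights.items())

q = {'hydrogen': 3, 'car': 2}                  # encoded query (made up)
doc = {'hydrogen': 5, 'fuel': 4, 'car': 1}     # indexed impacts (made up)
print(impact_score(q, doc))   # 3*5 + 2*1
```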
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.unicoil-noexp.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.unicoil-noexp.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.unicoil-noexp.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.unicoil-noexp.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.unicoil-noexp.dl21.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics msmarco-v2-passage-dev-unicoil-noexp \
  --output run.msmarco-v2-passage.unicoil-noexp.dev.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-noexp.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-noexp.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics msmarco-v2-passage-dev2-unicoil-noexp \
  --output run.msmarco-v2-passage.unicoil-noexp.dev2.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-noexp.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-noexp.dev2.txt
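The `--batch 36 --threads 12` options above process queries in parallel batches rather than one at a time; they affect speed, not scores. The idea, sketched with Python's thread pool and a toy `search` function (a stand-in, not a Pyserini API):

```python
from concurrent.futures import ThreadPoolExecutor

def search(query):                    # stand-in for an index lookup
    return f"hits for {query}"

queries = [f"q{i}" for i in range(36)]            # one batch of queries
with ThreadPoolExecutor(max_workers=12) as pool:  # 12 worker threads
    results = list(pool.map(search, queries))     # order is preserved
print(len(results))
```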
uniCOIL zero-shot, with pre-encoded queries.

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics dl21-unicoil \
  --output run.msmarco-v2-passage.unicoil.dl21.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.unicoil.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.unicoil.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.unicoil.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.unicoil.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.unicoil.dl21.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics msmarco-v2-passage-dev-unicoil \
  --output run.msmarco-v2-passage.unicoil.dev.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics msmarco-v2-passage-dev2-unicoil \
  --output run.msmarco-v2-passage.unicoil.dev2.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil.dev2.txt
uniCOIL (noexp) zero-shot, with on-the-fly query encoding.

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics dl21 --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-noexp-otf.dl21.txt \
  --batch 36 --threads 12 --hits 1000 --impact
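Here `--topics dl21` supplies raw query text and `--encoder` runs the uniCOIL query encoder at search time, producing a term-to-weight mapping (contrast with the pre-encoded `dl21-unicoil-noexp` topics used earlier). A toy stand-in for the encoder's output shape; the real weights come from the named model, not token lengths:

```python
def toy_encode(query):
    """Fake query encoder: maps each token to an integer impact weight."""
    return {tok: len(tok) for tok in query.lower().split()}

weights = toy_encode("hydrogen fuel cars")
print(weights)
```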
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl21.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics msmarco-v2-passage-dev --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-noexp-otf.dev.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-noexp-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-noexp-otf.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics msmarco-v2-passage-dev2 --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-noexp-otf.dev2.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-noexp-otf.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-noexp-otf.dev2.txt
uniCOIL zero-shot, with on-the-fly query encoding.

Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics dl21 --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-otf.dl21.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.unicoil-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.unicoil-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m recip_rank dl21-passage run.msmarco-v2-passage.unicoil-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.100 dl21-passage run.msmarco-v2-passage.unicoil-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.unicoil-otf.dl21.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics msmarco-v2-passage-dev --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-otf.dev.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-otf.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics msmarco-v2-passage-dev2 --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-otf.dev2.txt \
  --batch 36 --threads 12 --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-otf.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-otf.dev2.txt