MS MARCO V1 Passage

The two-click reproduction matrix below provides commands for reproducing experimental results reported in a number of papers, denoted by the references in square brackets.

For each condition, the matrix reports AP, nDCG@10, and R@1K on the TREC 2019 and TREC 2020 query sets, and RR@10 and R@1K on the dev queries.
BM25, default parameters (k1=0.9, b=0.4):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-slim \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-default.dl19.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-default.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-default.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-default.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-slim \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-default.dl20.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-default.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-default.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-default.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-slim \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-default.dev.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-default.dev.txt
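The same BM25 retrieval can also be run through Pyserini's Python API instead of the command-line interface. The snippet below is a minimal sketch, not one of the official reproduction commands: it reuses the prebuilt index, topic set, and BM25 parameters from the commands above, while the output file name and run tag are illustrative choices of our own.

from pyserini.search import get_topics
from pyserini.search.lucene import LuceneSearcher

# Prebuilt index and BM25 parameters as in the commands above.
searcher = LuceneSearcher.from_prebuilt_index('msmarco-v1-passage-slim')
searcher.set_bm25(k1=0.9, b=0.4)

# Retrieve the top 1000 passages for each TREC 2019 Deep Learning query
# and write a TREC-format run file.
topics = get_topics('dl19-passage')
with open('run.msmarco-v1-passage.bm25-default.dl19.api.txt', 'w') as out:
    for qid, topic in topics.items():
        hits = searcher.search(topic['title'], k=1000)
        for rank, hit in enumerate(hits, start=1):
            out.write(f'{qid} Q0 {hit.docid} {rank} {hit.score:.6f} bm25-api\n')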
BM25+RM3, default parameters (k1=0.9, b=0.4):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-rm3-default.dl19.txt \
  --bm25 --k1 0.9 --b 0.4 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rm3-default.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rm3-default.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rm3-default.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-rm3-default.dl20.txt \
  --bm25 --k1 0.9 --b 0.4 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rm3-default.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rm3-default.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rm3-default.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-rm3-default.dev.txt \
  --bm25 --k1 0.9 --b 0.4 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-default.dev.txt
BM25+Rocchio, default parameters (k1=0.9, b=0.4):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-rocchio-default.dl19.txt \
  --bm25 --k1 0.9 --b 0.4 --rocchio
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rocchio-default.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rocchio-default.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rocchio-default.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-rocchio-default.dl20.txt \
  --bm25 --k1 0.9 --b 0.4 --rocchio
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rocchio-default.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rocchio-default.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rocchio-default.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-rocchio-default.dev.txt \
  --bm25 --k1 0.9 --b 0.4 --rocchio
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rocchio-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rocchio-default.dev.txt
BM25, tuned parameters (k1=0.82, b=0.68):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-slim \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-tuned.dl19.txt \
  --bm25 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-tuned.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-slim \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-tuned.dl20.txt \
  --bm25 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-tuned.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-slim \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-tuned.dev.txt \
  --bm25 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-tuned.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-tuned.dev.txt
BM25+RM3, tuned parameters (k1=0.82, b=0.68):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-rm3-tuned.dl19.txt \
  --bm25 --rm3 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-rm3-tuned.dl20.txt \
  --bm25 --rm3 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rm3-tuned.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-rm3-tuned.dev.txt \
  --bm25 --rm3 --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-tuned.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-tuned.dev.txt
BM25+Rocchio, tuned parameters (k1=0.82, b=0.68):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-rocchio-tuned.dl19.txt \
  --bm25 --rocchio --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rocchio-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rocchio-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rocchio-tuned.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-rocchio-tuned.dl20.txt \
  --bm25 --rocchio --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rocchio-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rocchio-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rocchio-tuned.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-full \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-rocchio-tuned.dev.txt \
  --bm25 --rocchio --k1 0.82 --b 0.68
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rocchio-tuned.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rocchio-tuned.dev.txt
BM25 with doc2query-T5 expansion, default parameters (k1=0.9, b=0.4):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5 \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-d2q-t5-default.dl19.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5 \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-d2q-t5-default.dl20.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-default.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5 \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-d2q-t5-default.dev.txt \
  --bm25 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-d2q-t5-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-d2q-t5-default.dev.txt
BM25+RM3 with doc2query-T5 expansion, default parameters (k1=0.9, b=0.4):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl19.txt \
  --bm25 --rm3 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl20.txt \
  --bm25 --rm3 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dev.txt \
  --bm25 --rm3 --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-d2q-t5-default.dev.txt
BM25+Rocchio with doc2query-T5 expansion, default parameters (k1=0.9, b=0.4):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-rocchio-d2q-t5-default.dl19.txt \
  --bm25 --rocchio --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-default.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-default.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-default.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-rocchio-d2q-t5-default.dl20.txt \
  --bm25 --rocchio --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-default.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-default.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-default.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-rocchio-d2q-t5-default.dev.txt \
  --bm25 --rocchio --k1 0.9 --b 0.4
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rocchio-d2q-t5-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rocchio-d2q-t5-default.dev.txt
BM25 with doc2query-T5 expansion, tuned parameters (k1=2.18, b=0.86):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5 \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl19.txt \
  --bm25 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5 \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl20.txt \
  --bm25 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-d2q-t5-tuned.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5 \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-d2q-t5-tuned.dev.txt \
  --bm25 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-d2q-t5-tuned.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-d2q-t5-tuned.dev.txt
BM25+RM3 with doc2query-T5 expansion, tuned parameters (k1=2.18, b=0.86):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl19.txt \
  --bm25 --rm3 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl20.txt \
  --bm25 --rm3 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dev.txt \
  --bm25 --rm3 --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rm3-d2q-t5-tuned.dev.txt
BM25+Rocchio with doc2query-T5 expansion, tuned parameters (k1=2.18, b=0.86):
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-rocchio-d2q-t5-tuned.dl19.txt \
  --bm25 --rocchio --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-tuned.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-tuned.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics dl20 \
  --output run.msmarco-v1-passage.bm25-rocchio-d2q-t5-tuned.dl20.txt \
  --bm25 --rocchio --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-tuned.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.bm25-rocchio-d2q-t5-tuned.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-d2q-t5-docvectors \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25-rocchio-d2q-t5-tuned.dev.txt \
  --bm25 --rocchio --k1 2.18 --b 0.86
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rocchio-d2q-t5-tuned.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.bm25-rocchio-d2q-t5-tuned.dev.txt
uniCOIL (noexp) with pre-encoded queries:
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics dl19-passage-unicoil-noexp \
  --output run.msmarco-v1-passage.unicoil-noexp.dl19.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.unicoil-noexp.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.unicoil-noexp.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.unicoil-noexp.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics dl20-unicoil-noexp \
  --output run.msmarco-v1-passage.unicoil-noexp.dl20.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.unicoil-noexp.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.unicoil-noexp.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.unicoil-noexp.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics msmarco-passage-dev-subset-unicoil-noexp \
  --output run.msmarco-v1-passage.unicoil-noexp.dev.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-noexp.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-noexp.dev.txt
uniCOIL (with doc2query-T5 expansion) with pre-encoded queries:
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil \
  --topics dl19-passage-unicoil \
  --output run.msmarco-v1-passage.unicoil.dl19.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.unicoil.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.unicoil.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.unicoil.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil \
  --topics dl20-unicoil \
  --output run.msmarco-v1-passage.unicoil.dl20.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.unicoil.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.unicoil.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.unicoil.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil \
  --topics msmarco-passage-dev-subset-unicoil \
  --output run.msmarco-v1-passage.unicoil.dev.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil.dev.txt
uniCOIL (noexp) with on-the-fly query encoding:
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics dl19-passage --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-noexp-otf.dl19.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics dl20 --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-noexp-otf.dl20.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.unicoil-noexp-otf.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil-noexp \
  --topics msmarco-passage-dev-subset --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-noexp-otf.dev.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-noexp-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-noexp-otf.dev.txt
uniCOIL (with doc2query-T5 expansion) with on-the-fly query encoding:
Command to generate run on TREC 2019 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil \
  --topics dl19-passage --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-otf.dl19.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.unicoil-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.unicoil-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.unicoil-otf.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil \
  --topics dl20 --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-otf.dl20.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.unicoil-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.unicoil-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.unicoil-otf.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage-unicoil \
  --topics msmarco-passage-dev-subset --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v1-passage.unicoil-otf.dev.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.unicoil-otf.dev.txt
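The uniCOIL conditions can also be reproduced programmatically. The snippet below is a minimal sketch, not one of the official reproduction commands: it uses Pyserini's LuceneImpactSearcher with on-the-fly query encoding, reusing the prebuilt index and query encoder named in the commands above; the query string is an arbitrary example.

from pyserini.search.lucene import LuceneImpactSearcher

# Prebuilt impact index and uniCOIL query encoder as in the commands above.
searcher = LuceneImpactSearcher.from_prebuilt_index(
    'msmarco-v1-passage-unicoil',
    'castorini/unicoil-msmarco-passage')

# Encode the query on the fly and retrieve the top 10 passages.
hits = searcher.search('what is a lobster roll', k=10)
for rank, hit in enumerate(hits, start=1):
    print(f'{rank:2} {hit.docid:8} {hit.score:.2f}')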
ANCE with pre-encoded queries:
Runs on TREC 2019 and TREC 2020 queries: not available.
Command to generate run on dev queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-ance-bf \
  --topics msmarco-passage-dev-subset --encoded-queries ance-msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.ance.dev.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.ance.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.ance.dev.txt
DistilBERT KD with pre-encoded queries:
Runs on TREC 2019 and TREC 2020 queries: not available.
Command to generate run on dev queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-distilbert-dot-margin_mse-T2-bf \
  --topics msmarco-passage-dev-subset --encoded-queries distilbert_kd-msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.distilbert-kd.dev.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.distilbert-kd.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.distilbert-kd.dev.txt
DistilBERT KD TAS-B with pre-encoded queries:
Runs on TREC 2019 and TREC 2020 queries: not available.
Command to generate run on dev queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-distilbert-dot-tas_b-b256-bf \
  --topics msmarco-passage-dev-subset --encoded-queries distilbert_tas_b-msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.distilbert-kd-tasb.dev.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.distilbert-kd-tasb.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.distilbert-kd-tasb.dev.txt
TCT_ColBERT-v2-HN+ with pre-encoded queries:
Runs on TREC 2019 and TREC 2020 queries: not available.
Command to generate run on dev queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-tct_colbert-v2-hnp-bf \
  --topics msmarco-passage-dev-subset --encoded-queries tct_colbert-v2-hnp-msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.tct_colbert-v2-hnp.dev.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.tct_colbert-v2-hnp.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.tct_colbert-v2-hnp.dev.txt
ANCE with on-the-fly query encoding:
Command to generate run on TREC 2019 queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-ance-bf \
  --topics dl19-passage --encoder castorini/ance-msmarco-passage \
  --output run.msmarco-v1-passage.ance-otf.dl19.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.ance-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.ance-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.ance-otf.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-ance-bf \
  --topics dl20 --encoder castorini/ance-msmarco-passage \
  --output run.msmarco-v1-passage.ance-otf.dl20.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.ance-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.ance-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.ance-otf.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-ance-bf \
  --topics msmarco-passage-dev-subset --encoder castorini/ance-msmarco-passage \
  --output run.msmarco-v1-passage.ance-otf.dev.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.ance-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.ance-otf.dev.txt
DistilBERT KD with on-the-fly query encoding:
Command to generate run on TREC 2019 queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-distilbert-dot-margin_mse-T2-bf \
  --topics dl19-passage --encoder sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco \
  --output run.msmarco-v1-passage.distilbert-kd-otf.dl19.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.distilbert-kd-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.distilbert-kd-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.distilbert-kd-otf.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-distilbert-dot-margin_mse-T2-bf \
  --topics dl20 --encoder sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco \
  --output run.msmarco-v1-passage.distilbert-kd-otf.dl20.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.distilbert-kd-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.distilbert-kd-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.distilbert-kd-otf.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-distilbert-dot-margin_mse-T2-bf \
  --topics msmarco-passage-dev-subset --encoder sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco \
  --output run.msmarco-v1-passage.distilbert-kd-otf.dev.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.distilbert-kd-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.distilbert-kd-otf.dev.txt
DistilBERT KD TAS-B with on-the-fly query encoding:
Command to generate run on TREC 2019 queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-distilbert-dot-tas_b-b256-bf \
  --topics dl19-passage --encoder sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco \
  --output run.msmarco-v1-passage.distilbert-kd-tasb-otf.dl19.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.distilbert-kd-tasb-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.distilbert-kd-tasb-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.distilbert-kd-tasb-otf.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-distilbert-dot-tas_b-b256-bf \
  --topics dl20 --encoder sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco \
  --output run.msmarco-v1-passage.distilbert-kd-tasb-otf.dl20.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.distilbert-kd-tasb-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.distilbert-kd-tasb-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.distilbert-kd-tasb-otf.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-distilbert-dot-tas_b-b256-bf \
  --topics msmarco-passage-dev-subset --encoder sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco \
  --output run.msmarco-v1-passage.distilbert-kd-tasb-otf.dev.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.distilbert-kd-tasb-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.distilbert-kd-tasb-otf.dev.txt
TCT_ColBERT-v2-HN+ with on-the-fly query encoding:
Command to generate run on TREC 2019 queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-tct_colbert-v2-hnp-bf \
  --topics dl19-passage --encoder castorini/tct_colbert-v2-hnp-msmarco \
  --output run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl19.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl19-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl19-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl19.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl19-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl19.txt
Command to generate run on TREC 2020 queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-tct_colbert-v2-hnp-bf \
  --topics dl20 --encoder castorini/tct_colbert-v2-hnp-msmarco \
  --output run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl20.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -m map dl20-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl20-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl20.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl20-passage run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dl20.txt
Command to generate run on dev queries:
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --index msmarco-passage-tct_colbert-v2-hnp-bf \
  --topics msmarco-passage-dev-subset --encoder castorini/tct_colbert-v2-hnp-msmarco \
  --output run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dev.txt
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank msmarco-passage-dev-subset run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-passage-dev-subset run.msmarco-v1-passage.tct_colbert-v2-hnp-otf.dev.txt
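The dense conditions can likewise be driven from the Python API. The snippet below is a minimal sketch, not one of the official reproduction commands: it uses Pyserini's FaissSearcher with on-the-fly query encoding, reusing the prebuilt index and query encoder named in the TCT_ColBERT-v2-HN+ commands above; the query string is an arbitrary example.

from pyserini.search.faiss import FaissSearcher, TctColBertQueryEncoder

# Query encoder and prebuilt Faiss index as in the TCT_ColBERT-v2-HN+ commands above.
encoder = TctColBertQueryEncoder('castorini/tct_colbert-v2-hnp-msmarco')
searcher = FaissSearcher.from_prebuilt_index('msmarco-passage-tct_colbert-v2-hnp-bf', encoder)

# Encode the query on the fly and retrieve the top 10 passages.
hits = searcher.search('what is a lobster roll', k=10)
for rank, hit in enumerate(hits, start=1):
    print(f'{rank:2} {hit.docid:8} {hit.score:.5f}')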