A unified platform for sharing, training and evaluating dialogue models across many tasks.

Many popular datasets available all in one place -- from open-domain chitchat to visual question answering.

A wide set of reference models -- from retrieval baselines to Transformers.

Seamless integration with Amazon Mechanical Turk for data collection, training and human evaluation.


What's New

{{{CONTENT}}}

Get Started

Check out our GitHub repository:

Run this command:
git clone https://github.com/facebookresearch/ParlAI.git
cd ParlAI; python setup.py develop

Examples

Display 10 random examples from task 1 of the "1k training examples" bAbI task:

Run this command:
parlai display_data -t babi:task1k:1
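Under the hood, display_data simply iterates over a teacher's examples and prints each one. A minimal sketch of that idea in plain Python (ToyTeacher and its example format are hypothetical stand-ins, not ParlAI's actual Teacher API):

```python
import random

class ToyTeacher:
    """Hypothetical stand-in for a ParlAI teacher: holds (text, label) pairs."""
    def __init__(self, examples):
        self.examples = examples

    def sample(self, n, seed=0):
        # Draw up to n random examples, reproducibly.
        rng = random.Random(seed)
        return rng.sample(self.examples, min(n, len(self.examples)))

def display_data(teacher, n=10):
    # Render n random examples in a [text]/[labels] layout similar to
    # what the real display_data script prints.
    lines = []
    for text, label in teacher.sample(n):
        lines.append(f"[text]: {text}\n[labels]: {label}")
    return "\n".join(lines)

teacher = ToyTeacher([("Where is Mary?", "bathroom"),
                      ("Where is John?", "kitchen")])
print(display_data(teacher, n=2))
```

The real script additionally handles dialogue episodes, candidate labels, and images, but the iterate-and-print loop is the core of it.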

Display 100 random examples from multitasking on the bAbI task and the SQuAD dataset at the same time:

Run this command:
parlai display_data -t babi:task1k:1,squad -n 100
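Multitasking interleaves examples from the tasks named in the comma-separated `-t` list. A simplified round-robin sketch of that mixing (ParlAI's actual scheduler is more sophisticated, e.g. it can weight tasks by size; the function and data here are illustrative only):

```python
from itertools import cycle

def multitask_examples(task_streams, n):
    # Alternate between the given task streams, yielding n examples total.
    # cycle() lets a short task wrap around, mimicking repeated epochs.
    iterators = [cycle(stream) for stream in task_streams]
    return [next(iterators[i % len(iterators)]) for i in range(n)]

babi = ["babi ex1", "babi ex2"]
squad = ["squad ex1", "squad ex2", "squad ex3"]
mixed = multitask_examples([babi, squad], 6)
```

After this call, `mixed` alternates bAbI and SQuAD examples, which is the behavior the combined `-t babi:task1k:1,squad` flag requests.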

Evaluate an IR baseline model on the validation set of the Movies Subreddit dataset:

Run this command:
parlai eval_model -m ir_baseline -t "#moviedd-reddit" -dt valid
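The idea behind an IR (information-retrieval) baseline is to rank candidate responses by lexical similarity to the input. A toy sketch of that ranking using plain word overlap (the real ir_baseline agent is more refined, e.g. it weights words by frequency; the helpers below are illustrative, not ParlAI code):

```python
def score(query, candidate):
    # Jaccard word overlap between query and candidate (a crude
    # stand-in for a frequency-weighted IR score).
    q = set(query.lower().split())
    c = set(candidate.lower().split())
    return len(q & c) / (len(q | c) or 1)

def rank(query, candidates):
    # Order candidates from best-matching to worst.
    return sorted(candidates, key=lambda c: score(query, c), reverse=True)

best = rank("what movie should I watch tonight",
            ["I watched a great movie tonight",
             "the weather is nice today"])[0]
```

Evaluation then amounts to checking how often the gold response is ranked first among the candidates.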

Display the predictions of that same IR baseline model:

Run this command:
parlai display_model -m ir_baseline -t "#moviedd-reddit" -dt valid

Train an attentive LSTM model on the SQuAD dataset with a batch size of 32 examples (requires PyTorch and regex):

Run this command:
parlai train_model -m drqa -t squad -bs 32 -mf /tmp/model_drqa
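The `-bs 32` flag groups examples into fixed-size batches before each update. A minimal sketch of that batching step in plain Python (illustrative only; ParlAI's batching also handles padding and dynamic batch sorting):

```python
def batches(examples, bs=32):
    # Yield successive slices of at most bs examples; the final batch
    # may be smaller when the dataset size is not a multiple of bs.
    for i in range(0, len(examples), bs):
        yield examples[i:i + bs]

sizes = [len(b) for b in batches(list(range(100)), bs=32)]
```

With 100 examples and a batch size of 32, this produces three full batches and one partial batch of 4.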

For more examples, please read our tutorial. To learn more about ParlAI, see the documentation.