---
title: utils
keywords: fastai
sidebar: home_sidebar
summary: "Various utility functions used by the blurr package."
description: "Various utility functions used by the blurr package."
nb_path: "nbs/00_utils.ipynb"
---
```python
import torch

torch.cuda.set_device(1)
print(f'Using GPU #{torch.cuda.current_device()}: {torch.cuda.get_device_name()}')
```
```python
@Singleton
class TestSingleton: pass

a = TestSingleton()
b = TestSingleton()
test_eq(a, b)
```
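The `Singleton` decorator used above isn't defined on this page. As a rough illustration of the idea (a hypothetical sketch, not necessarily blurr's actual implementation), a class-based decorator can cache the first instance and hand it back on every later call:

```python
class Singleton:
    """Hypothetical sketch of a class-based singleton decorator."""
    def __init__(self, cls):
        self._cls = cls
        self._instance = None

    def __call__(self, *args, **kwargs):
        # Build the instance on the first call; return the cached one afterwards
        if self._instance is None:
            self._instance = self._cls(*args, **kwargs)
        return self._instance

@Singleton
class Counter:
    def __init__(self): self.n = 0

a, b = Counter(), Counter()
assert a is b  # both names refer to the same instance
```

Because the decorator replaces the class with a callable wrapper, every "instantiation" goes through `__call__`, which is where the caching happens.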
`ModelHelper` is a singleton: only one instance exists, and that same instance is returned on every subsequent instantiation request. You can access it via the `BLURR_MODEL_HELPER` constant below.
```python
mh = ModelHelper()
mh2 = ModelHelper()
test_eq(mh, mh2)
```
Users of this library can simply use `BLURR_MODEL_HELPER` to access all of `ModelHelper`'s capabilities without having to fetch an instance themselves.
```python
print(BLURR_MODEL_HELPER.get_architectures())
```
We'll also create an enum for the supported architectures.
```python
print(L(HF_ARCHITECTURES))
```
```python
print(BLURR_MODEL_HELPER.get_config('bert'))
```
```python
print(BLURR_MODEL_HELPER.get_tokenizers('electra'))
```
```python
print(BLURR_MODEL_HELPER.get_tasks())
print('')
print(BLURR_MODEL_HELPER.get_tasks('bart'))
```
We'll create an enum for tasks as well: one for all tasks, and another for the tasks available via Hugging Face's `AutoModel` capabilities.
```python
print('--- all tasks ---')
print(L(HF_TASKS_ALL))
print('\n--- auto only ---')
print(L(HF_TASKS_AUTO))
```
```python
HF_TASKS_ALL.Classification
```
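Enums like these can be built dynamically from a list of names using Python's `Enum` functional API. The sketch below is only an illustration of that pattern (the `HFTasks` name and task list are made up for the example, not blurr's actual code):

```python
from enum import Enum

# Build an enum from a plain list of task names; members get
# auto-assigned integer values starting at 1.
task_names = ['Classification', 'LMHead', 'QuestionAnswering', 'TokenClassification']
HFTasks = Enum('HFTasks', task_names)

print(HFTasks.Classification)        # HFTasks.Classification
print([t.name for t in HFTasks])
```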
```python
print(L(BLURR_MODEL_HELPER.get_models()))
print(BLURR_MODEL_HELPER.get_models(arch='bert'))
print(BLURR_MODEL_HELPER.get_models(task='TokenClassification'))
print(BLURR_MODEL_HELPER.get_models(arch='bert', task='TokenClassification'))
```
```python
config, tokenizers, model = BLURR_MODEL_HELPER.get_classes_for_model('RobertaForSequenceClassification')
print(config)
print(tokenizers[0])
print(model)
```
```python
config, tokenizers, model = BLURR_MODEL_HELPER.get_classes_for_model(DistilBertModel)
print(config)
print(tokenizers[0])
print(model)
```
```python
BLURR_MODEL_HELPER.get_model_architecture('RobertaForSequenceClassification')
```
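To give a sense of what mapping a model class name to its architecture involves, here is a hypothetical sketch (the `guess_architecture` helper is invented for illustration and is not blurr's implementation): strip the task suffix from the class name and lowercase what remains.

```python
import re

def guess_architecture(model_name: str) -> str:
    """Hypothetical: derive an architecture name from a model class name
    by cutting at the first task/base suffix and lowercasing the prefix."""
    base = re.split(r'(?:For|With|Model|LMHead)', model_name)[0]
    return base.lower()

print(guess_architecture('RobertaForSequenceClassification'))  # roberta
print(guess_architecture('DistilBertModel'))                   # distilbert
```

A real implementation would more likely look the class up in the library's registry of architectures rather than parse the name, but the sketch shows the shape of the mapping.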
```python
arch, config, tokenizer, model = BLURR_MODEL_HELPER.get_hf_objects("bert-base-cased-finetuned-mrpc",
                                                                   task=HF_TASKS_AUTO.LMHead)
print(arch)
print(type(config))
print(type(tokenizer))
print(type(model))
```
```python
arch, config, tokenizer, model = BLURR_MODEL_HELPER.get_hf_objects("fmikaelian/flaubert-base-uncased-squad",
                                                                   task=HF_TASKS_AUTO.QuestionAnswering)
print(arch)
print(type(config))
print(type(tokenizer))
print(type(model))
```
```python
arch, config, tokenizer, model = BLURR_MODEL_HELPER.get_hf_objects("bert-base-cased-finetuned-mrpc",
                                                                   config=None,
                                                                   tokenizer_cls=BertTokenizer,
                                                                   model_cls=BertForNextSentencePrediction)
print(arch)
print(type(config))
print(type(tokenizer))
print(type(model))
```