How to Perform Transfer Learning with a HanLP NER Model?

Objective: Perform transfer learning and fine-tuning using a pre-trained HanLP NER model.

Problem: I'm not sure how to do this. I tried reading the documentation but got confused.

Code:

# MSRA_NER_TOKEN_LEVEL_SHORT_IOBES_{TRAIN,DEV,TEST} are HanLP's built-in MSRA dataset constants (from hanlp.datasets)
from hanlp.components.ner.transformer_ner import TransformerNamedEntityRecognizer

n_batch_size = 8

NER = TransformerNamedEntityRecognizer()

save_dir = './TestingModel_NER'

NER.fit(MSRA_NER_TOKEN_LEVEL_SHORT_IOBES_TRAIN,
        MSRA_NER_TOKEN_LEVEL_SHORT_IOBES_DEV,
        save_dir,
        'MSRA_NER_ELECTRA_SMALL_ZH',
        word_dropout=0.2,
        lr=5e-05,
        adam_epsilon=1e-08,
        weight_decay=0,
        warmup_steps=0.1,
        reduction='sum',
        batch_size=n_batch_size,
        epochs=1)

NER.evaluate(MSRA_NER_TOKEN_LEVEL_SHORT_IOBES_TEST, save_dir=save_dir)

Please help. Thank you


The only difference between training and fine-tuning is the finetune parameter in fit(), which allows you to specify an existing model as the initial weights for your new model.
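In code, a rough sketch of that pattern might look like the following (the file names are placeholders, and the encoder name is an assumption about the backbone of the checkpoint you start from):

import hanlp
from hanlp.components.ner.transformer_ner import TransformerNamedEntityRecognizer

recognizer = TransformerNamedEntityRecognizer()
save_dir = './finetuned_msra_ner_electra_small'  # where the fine-tuned model will be saved

recognizer.fit(
    'my_train.tsv',  # placeholder: your token<TAB>tag training file
    'my_dev.tsv',    # placeholder: your dev file in the same format
    save_dir,
    epochs=10,
    transformer='hfl/chinese-electra-180g-small-discriminator',  # assumed to match the checkpoint's encoder
    # The only difference from training from scratch: initialize from an existing model.
    finetune=hanlp.pretrained.ner.MSRA_NER_ELECTRA_SMALL_ZH,
)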

Maybe this demo will give you an idea:


Thank you for your quick reply!

Just curious: the example you gave me trains and tests on data at the token level of Chinese characters. I'm wondering if HanLP can also perform NER when reading entire sentences or paragraphs, similar to how spaCy does it? Can you provide me some guidance here? Thank you so much.


Of course. HanLP is designed for all scenarios, and it's your choice to run it on characters, tokens, sentences, or paragraphs, and on Chinese, English, or whatever language. For prediction on raw text, you can wrap EOS, TOK and NER together.

import hanlp

HanLP = hanlp.pipeline().append(hanlp.utils.rules.split_sentence) \
    .append(hanlp.load(hanlp.pretrained.tok.FINE_ELECTRA_SMALL_ZH), output_key='tok') \
    .append(hanlp.load(hanlp.pretrained.ner.MSRA_NER_ELECTRA_SMALL_ZH), output_key='ner')

HanLP('2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。晓美焰来到北京立方庭参观自然语义科技公司。').pretty_print()

I'd suggest you go over our tutorials:


Thank you for the links! I have gone through some of your tutorials before.

So it seems from your response that HanLP would still require the text to be tokenized before performing NER.

Similarly, if I want to call the fit() function and fine-tune a pre-trained NER model, will the training and testing data need to be tokenized and labelled (2 columns: the first column is the token or character, the second column is the label, e.g. 'Person'/'Organization'/'Location')?

Thank you for your patience with me


Right, tokenize into either characters or tokens, as you prefer. Just make sure you apply the same tokenization before prediction.

Yes, BMESO or IOBES notation.
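For instance, a token-level training file in IOBES notation is just two tab-separated columns, token and tag, with a blank line between sentences (the tokens and labels below are purely illustrative):

鄧慧穎	S-PERSON
任职	O
于	O
自然	B-ORGANIZATION
语义	I-ORGANIZATION
科技	I-ORGANIZATION
公司	E-ORGANIZATION
。	O

Character-level data looks the same, except that each row holds a single character.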


I tried to follow the fine-tuning example you provided. Code snippet below:

import hanlp
from hanlp.components.ner.transformer_ner import TransformerNamedEntityRecognizer


recognizer = TransformerNamedEntityRecognizer()
save_dir = 'finetune_ner_albert_base_zh_msra'
DomainTrainData="train.tsv"
DomainTestData="test.tsv"

recognizer.fit(DomainTrainData, 
               DomainTestData,
               save_dir, 
               epochs=100,
               average_subwords=False,
               transformer='albert_base_zh',
               finetune=hanlp.pretrained.ner.MSRA_NER_ALBERT_BASE_ZH)

But I keep getting the KeyError below:

KeyError                                  Traceback (most recent call last)
<ipython-input-38-d2e15272c10a> in <module>
----> 1 recognizer.fit(DomainTrainData, 
      2                DomainTestData,
      3                save_dir,
      4                epochs=100,
      5                average_subwords=False,

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/ner/transformer_ner.py in fit(self, trn_data, dev_data, save_dir, transformer, delimiter_in_entity, merge_types, average_subwords, word_dropout, hidden_dropout, layer_dropout, scalar_mix, grad_norm, lr, transformer_lr, adam_epsilon, weight_decay, warmup_steps, crf, secondary_encoder, reduction, batch_size, sampler_builder, epochs, tagset, token_key, max_seq_len, sent_delimiter, char_level, hard_constraint, transform, logger, seed, devices, **kwargs)
    200             The best metrics on training set.
    201         """
--> 202         return super().fit(**merge_locals_kwargs(locals(), kwargs))
    203 
    204     def build_vocabs(self, trn, logger, **kwargs):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/taggers/transformers/transformer_tagger.py in fit(self, trn_data, dev_data, save_dir, transformer, average_subwords, word_dropout, hidden_dropout, layer_dropout, scalar_mix, mix_embedding, grad_norm, transformer_grad_norm, lr, transformer_lr, transformer_layers, gradient_accumulation, adam_epsilon, weight_decay, warmup_steps, secondary_encoder, extra_embeddings, crf, reduction, batch_size, sampler_builder, epochs, patience, token_key, max_seq_len, sent_delimiter, char_level, hard_constraint, transform, logger, devices, **kwargs)
    250             devices: Union[float, int, List[int]] = None,
    251             **kwargs):
--> 252         return super().fit(**merge_locals_kwargs(locals(), kwargs))
    253 
    254     def feed_batch(self, batch: dict):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/classifiers/transformer_classifier.py in fit(self, trn_data, dev_data, save_dir, transformer, lr, transformer_lr, adam_epsilon, weight_decay, warmup_steps, batch_size, gradient_accumulation, grad_norm, transformer_grad_norm, average_subwords, scalar_mix, word_dropout, hidden_dropout, max_seq_len, ret_raw_hidden_states, batch_max_tokens, epochs, logger, devices, **kwargs)
    106             devices: Union[float, int, List[int]] = None,
    107             **kwargs):
--> 108         return super().fit(**merge_locals_kwargs(locals(), kwargs))
    109 
    110     def on_config_ready(self, **kwargs):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/common/torch_component.py in fit(self, trn_data, dev_data, save_dir, batch_size, epochs, devices, logger, seed, finetune, eval_trn, _device_placeholder, **kwargs)
    246         if finetune:
    247             if isinstance(finetune, str):
--> 248                 self.load(finetune, devices=devices)
    249             else:
    250                 self.load(save_dir, devices=devices)

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/common/torch_component.py in load(self, save_dir, devices, verbose, **kwargs)
    176             flash('Building model [blink][yellow]...[/yellow][/blink]')
    177         self.config.pop('training', None)  # Some legacy versions accidentally put training into config file
--> 178         self.model = self.build_model(
    179             **merge_dict(self.config, **kwargs, overwrite=True, inplace=True), training=False, save_dir=save_dir)
    180         if verbose:

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/taggers/transformers/transformer_tagger.py in build_model(self, training, extra_embeddings, **kwargs)
    145     def build_model(self, training=True, extra_embeddings: Embedding = None, **kwargs) -> torch.nn.Module:
    146         model = TransformerTaggingModel(
--> 147             self.build_transformer(training=training),
    148             len(self.vocabs.tag),
    149             self.config.crf,

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/classifiers/transformer_classifier.py in build_transformer(self, training)
    117     def build_transformer(self, training=True):
    118         transformer = TransformerEncoder(self.config.transformer, self.transformer_tokenizer,
--> 119                                          self.config.average_subwords,
    120                                          self.config.scalar_mix, self.config.word_dropout,
    121                                          ret_raw_hidden_states=self.config.ret_raw_hidden_states,

~/opt/anaconda3/lib/python3.8/site-packages/hanlp_common/structure.py in __getattr__(self, key)
     93         if key.startswith('__'):
     94             return dict.__getattr__(key)
---> 95         return self.__getitem__(key)
     96 
     97     def __setattr__(self, key, value):

KeyError: 'average_subwords'

I tried setting average_subwords to True or False, and also leaving it out entirely. I can't tell from the error message what exactly the issue is. Can you help me?

MSRA_NER_ALBERT_BASE_ZH is a TensorFlow model which can only be fine-tuned with TransformerNamedEntityRecognizerTF, while TransformerNamedEntityRecognizer is a PyTorch component. You cannot mix them. Also, TransformerNamedEntityRecognizerTF doesn't support average_subwords.
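To stay entirely in PyTorch, a corrected sketch of the call above could look like this (the data paths are placeholders and the encoder name is my assumption about what the ELECTRA small checkpoint uses):

import hanlp
from hanlp.components.ner.transformer_ner import TransformerNamedEntityRecognizer

recognizer = TransformerNamedEntityRecognizer()  # PyTorch component
recognizer.fit(
    'train.tsv',                      # placeholder training file
    'test.tsv',                       # placeholder dev/test file
    'finetune_ner_electra_small_zh',  # save_dir
    epochs=100,
    transformer='hfl/chinese-electra-180g-small-discriminator',  # assumed encoder of the checkpoint
    finetune=hanlp.pretrained.ner.MSRA_NER_ELECTRA_SMALL_ZH,      # PyTorch checkpoint, matches the PyTorch class
)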


Thank you for your response. I was able to get the function working now.

Now I'm preparing the train/test files with your pre-trained tokenizer model.

I want to perform word-level processing (e.g. a token like '鄧慧穎' with the label 'S-Person'). I saw that the model I am using, hanlp.pretrained.ner.MSRA_NER_ELECTRA_SMALL_ZH, uses similar tags according to its train.log.

But I am still getting an error saying it cannot load the training file. Sample output below:

{
  "adam_epsilon": 1e-08,
  "average_subwords": false,
  "batch_max_tokens": null,
  "batch_size": 32,
  "char_level": false,
  "classpath": "hanlp.components.ner.transformer_ner.TransformerNamedEntityRecognizer",
  "crf": false,
  "delimiter_in_entity": null,
  "epochs": 100,
  "extra_embeddings": null,
  "finetune": "https://file.hankcs.com/hanlp/ner/msra_ner_electra_small_20220215_205503.zip",
  "grad_norm": 5.0,
  "gradient_accumulation": 1,
  "hanlp_version": "2.1.0-beta.45",
  "hard_constraint": false,
  "hidden_dropout": null,
  "layer_dropout": 0,
  "lr": 5e-05,
  "max_seq_len": null,
  "merge_types": null,
  "mix_embedding": 0,
  "patience": 5,
  "reduction": "sum",
  "ret_raw_hidden_states": false,
  "sampler_builder": null,
  "scalar_mix": null,
  "secondary_encoder": null,
  "seed": 1678702763,
  "sent_delimiter": null,
  "tagset": null,
  "token_key": null,
  "transform": null,
  "transformer": "albert_base_zh",
  "transformer_grad_norm": null,
  "transformer_layers": null,
  "transformer_lr": null,
  "warmup_steps": 0.1,
  "weight_decay": 0,
  "word_dropout": 0.2
}
                                          
Finetune model loaded with 12686113/12686113 trainable/total parameters.
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
~/opt/anaconda3/lib/python3.8/site-packages/hanlp/utils/io_util.py in generate_words_tags_from_tsv(tsv_file_path, lower, gold, max_seq_length, sent_delimiter, char_level, hard_constraint)
    461                 try:
--> 462                     tags = [cells[1] for cells in sent]
    463                 except:

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/utils/io_util.py in <listcomp>(.0)
    461                 try:
--> 462                     tags = [cells[1] for cells in sent]
    463                 except:

IndexError: list index out of range

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-333-03009f045040> in <module>
      8 test_dataset="NER_Test.tsv"
      9 
---> 10 recognizer.fit(train_dataset, 
     11                test_dataset,
     12                save_dir,

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/ner/transformer_ner.py in fit(self, trn_data, dev_data, save_dir, transformer, delimiter_in_entity, merge_types, average_subwords, word_dropout, hidden_dropout, layer_dropout, scalar_mix, grad_norm, lr, transformer_lr, adam_epsilon, weight_decay, warmup_steps, crf, secondary_encoder, reduction, batch_size, sampler_builder, epochs, tagset, token_key, max_seq_len, sent_delimiter, char_level, hard_constraint, transform, logger, seed, devices, **kwargs)
    200             The best metrics on training set.
    201         """
--> 202         return super().fit(**merge_locals_kwargs(locals(), kwargs))
    203 
    204     def build_vocabs(self, trn, logger, **kwargs):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/taggers/transformers/transformer_tagger.py in fit(self, trn_data, dev_data, save_dir, transformer, average_subwords, word_dropout, hidden_dropout, layer_dropout, scalar_mix, mix_embedding, grad_norm, transformer_grad_norm, lr, transformer_lr, transformer_layers, gradient_accumulation, adam_epsilon, weight_decay, warmup_steps, secondary_encoder, extra_embeddings, crf, reduction, batch_size, sampler_builder, epochs, patience, token_key, max_seq_len, sent_delimiter, char_level, hard_constraint, transform, logger, devices, **kwargs)
    250             devices: Union[float, int, List[int]] = None,
    251             **kwargs):
--> 252         return super().fit(**merge_locals_kwargs(locals(), kwargs))
    253 
    254     def feed_batch(self, batch: dict):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/classifiers/transformer_classifier.py in fit(self, trn_data, dev_data, save_dir, transformer, lr, transformer_lr, adam_epsilon, weight_decay, warmup_steps, batch_size, gradient_accumulation, grad_norm, transformer_grad_norm, average_subwords, scalar_mix, word_dropout, hidden_dropout, max_seq_len, ret_raw_hidden_states, batch_max_tokens, epochs, logger, devices, **kwargs)
    106             devices: Union[float, int, List[int]] = None,
    107             **kwargs):
--> 108         return super().fit(**merge_locals_kwargs(locals(), kwargs))
    109 
    110     def on_config_ready(self, **kwargs):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/common/torch_component.py in fit(self, trn_data, dev_data, save_dir, batch_size, epochs, devices, logger, seed, finetune, eval_trn, _device_placeholder, **kwargs)
    253                 f'/{sum(p.numel() for p in self.model.parameters())} trainable/total parameters.')
    254         self.on_config_ready(**self.config, save_dir=save_dir)
--> 255         trn = self.build_dataloader(**merge_dict(config, data=trn_data, batch_size=batch_size, shuffle=True,
    256                                                  training=True, device=first_device, logger=logger, vocabs=self.vocabs,
    257                                                  overwrite=True))

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/taggers/transformers/transformer_tagger.py in build_dataloader(self, data, batch_size, shuffle, device, logger, sampler_builder, gradient_accumulation, extra_embeddings, transform, max_seq_len, **kwargs)
    162             args = dict((k, self.config.get(k, None)) for k in
    163                         ['delimiter', 'max_seq_len', 'sent_delimiter', 'char_level', 'hard_constraint'])
--> 164             dataset = self.build_dataset(data, **args)
    165         if self.config.token_key is None:
    166             self.config.token_key = next(iter(dataset[0]))

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/ner/transformer_ner.py in build_dataset(self, data, transform, **kwargs)
    213 
    214     def build_dataset(self, data, transform=None, **kwargs):
--> 215         dataset = super().build_dataset(data, transform, **kwargs)
    216         if isinstance(data, str):
    217             tagset = self.config.get('tagset', None)

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/taggers/transformers/transformer_tagger.py in build_dataset(self, data, transform, **kwargs)
    189 
    190     def build_dataset(self, data, transform=None, **kwargs):
--> 191         return TSVTaggingDataset(data, transform=transform, **kwargs)
    192 
    193     def last_transform(self):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/datasets/ner/loaders/tsv.py in __init__(self, data, transform, cache, generate_idx, max_seq_len, sent_delimiter, char_level, hard_constraint, **kwargs)
     43         self.sent_delimiter = sent_delimiter
     44         self.max_seq_len = max_seq_len
---> 45         super().__init__(data, transform, cache, generate_idx)
     46 
     47     def load_file(self, filepath):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/common/dataset.py in __init__(self, data, transform, cache, generate_idx)
    126         if generate_idx is None:
    127             generate_idx = isinstance(data, list)
--> 128         data_ = self.load_data(data, generate_idx)
    129         # assert data_, f'No samples loaded from {data}'
    130         if data_:

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/common/dataset.py in load_data(self, data, generate_idx)
    152             if isinstance(data, str):
    153                 data = get_resource(data)
--> 154             data = list(self.load_file(data))
    155         if generate_idx:
    156             for i, each in enumerate(data):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/datasets/ner/loaders/tsv.py in load_file(self, filepath)
     70         filepath = get_resource(filepath)
     71         # idx = 0
---> 72         for words, tags in generate_words_tags_from_tsv(filepath, lower=False):
     73             # idx += 1
     74             # if idx % 1000 == 0:

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/utils/io_util.py in generate_words_tags_from_tsv(tsv_file_path, lower, gold, max_seq_length, sent_delimiter, char_level, hard_constraint)
    462                     tags = [cells[1] for cells in sent]
    463                 except:
--> 464                     raise ValueError(f'Failed to load {tsv_file_path}: {sent}')
    465             else:
    466                 tags = None

ValueError: Failed to load NER_Train.tsv: [['MEMORIAL', 'O'], ['of', 'O'], ['an', 'O'], ['instrument', 'O'], ['to', 'O'], ['be', 'O'], ['registered', 'O'], ['in', 'O'], ['the', 'O'], ['Land', 'O'],

The code I used is below:

recognizer.fit(train_dataset, 
               test_dataset,
               save_dir, 
               epochs=100,
               transformer='albert_base_zh',
               finetune=hanlp.pretrained.ner.MSRA_NER_ELECTRA_SMALL_ZH)

Does it have to do with how I created the tsv file? Is there a recommended way?

Each line in your tsv file must contain exactly 2 columns. Why don't you put a breakpoint here to reveal where things went wrong in your tsv file?
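If it helps, a quick sanity check along these lines (a sketch; adjust the path) will print every non-blank line that does not split into exactly two tab-separated columns:

# Report lines in a CoNLL-style TSV that don't have exactly 2 tab-separated columns.
path = 'NER_Train.tsv'  # placeholder path
with open(path, encoding='utf-8') as f:
    for lineno, line in enumerate(f, 1):
        line = line.rstrip('\n')
        if not line.strip():
            continue  # blank lines just separate sentences
        cells = line.split('\t')
        if len(cells) != 2:
            print(f'line {lineno}: expected 2 columns, got {len(cells)}: {line!r}')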


Thank you for your support. I was able to get the fine-tuning ready for the model, but encountered new issues.

I'm wondering if you have any sample code to share on how to fine-tune a pre-trained model for your tokenizer?

Here is what I wrote so far:

tokenizer.fit(token_train,
              token_test,
              save_dir,
              epochs=2,
              average_subwords=True,
              max_seq_len=510,
              tagging_scheme='BMES',
              transformer='https://file.hankcs.com/hanlp/tok/ctb9_electra_small_20220215_205427.zip')

But it raises this error message:

---------------------------------------------------------------------------
HFValidationError                         Traceback (most recent call last)
<ipython-input-290-37578e6850b6> in <module>
      7 
      8 
----> 9 tokenizer.fit(token_train, 
     10                token_test,
     11                save_dir,

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/tokenizers/transformer.py in fit(self, trn_data, dev_data, save_dir, transformer, average_subwords, word_dropout, hidden_dropout, layer_dropout, scalar_mix, grad_norm, transformer_grad_norm, lr, eval_trn, transformer_lr, transformer_layers, gradient_accumulation, adam_epsilon, weight_decay, warmup_steps, crf, reduction, batch_size, sampler_builder, epochs, patience, token_key, tagging_scheme, delimiter, max_seq_len, sent_delimiter, char_level, hard_constraint, transform, logger, devices, **kwargs)
    293             Best metrics on dev set.
    294         """
--> 295         return super().fit(**merge_locals_kwargs(locals(), kwargs))
    296 
    297     def feed_batch(self, batch: dict):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/taggers/transformers/transformer_tagger.py in fit(self, trn_data, dev_data, save_dir, transformer, average_subwords, word_dropout, hidden_dropout, layer_dropout, scalar_mix, mix_embedding, grad_norm, transformer_grad_norm, lr, transformer_lr, transformer_layers, gradient_accumulation, adam_epsilon, weight_decay, warmup_steps, secondary_encoder, extra_embeddings, crf, reduction, batch_size, sampler_builder, epochs, patience, token_key, max_seq_len, sent_delimiter, char_level, hard_constraint, transform, logger, devices, **kwargs)
    250             devices: Union[float, int, List[int]] = None,
    251             **kwargs):
--> 252         return super().fit(**merge_locals_kwargs(locals(), kwargs))
    253 
    254     def feed_batch(self, batch: dict):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/classifiers/transformer_classifier.py in fit(self, trn_data, dev_data, save_dir, transformer, lr, transformer_lr, adam_epsilon, weight_decay, warmup_steps, batch_size, gradient_accumulation, grad_norm, transformer_grad_norm, average_subwords, scalar_mix, word_dropout, hidden_dropout, max_seq_len, ret_raw_hidden_states, batch_max_tokens, epochs, logger, devices, **kwargs)
    106             devices: Union[float, int, List[int]] = None,
    107             **kwargs):
--> 108         return super().fit(**merge_locals_kwargs(locals(), kwargs))
    109 
    110     def on_config_ready(self, **kwargs):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/common/torch_component.py in fit(self, trn_data, dev_data, save_dir, batch_size, epochs, devices, logger, seed, finetune, eval_trn, _device_placeholder, **kwargs)
    252                 f'Finetune model loaded with {sum(p.numel() for p in self.model.parameters() if p.requires_grad)}'
    253                 f'/{sum(p.numel() for p in self.model.parameters())} trainable/total parameters.')
--> 254         self.on_config_ready(**self.config, save_dir=save_dir)
    255         trn = self.build_dataloader(**merge_dict(config, data=trn_data, batch_size=batch_size, shuffle=True,
    256                                                  training=True, device=first_device, logger=logger, vocabs=self.vocabs,

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/components/classifiers/transformer_classifier.py in on_config_ready(self, **kwargs)
    113             self.transformer_tokenizer = BertTokenizer.from_pretrained(self.config.transformer, use_fast=True)
    114         else:
--> 115             self.transformer_tokenizer = AutoTokenizer_.from_pretrained(self.config.transformer, use_fast=True)
    116 
    117     def build_transformer(self, training=True):

~/opt/anaconda3/lib/python3.8/site-packages/hanlp/layers/transformers/pt_imports.py in from_pretrained(cls, pretrained_model_name_or_path, use_fast, do_basic_tokenize)
     66         if use_fast and not do_basic_tokenize:
     67             warnings.warn('`do_basic_tokenize=False` might not work when `use_fast=True`')
---> 68         tokenizer = cls.from_pretrained(get_tokenizer_mirror(transformer), use_fast=use_fast,
     69                                         do_basic_tokenize=do_basic_tokenize,
     70                                         **additional_config)

~/opt/anaconda3/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
    596 
    597         # Next, let's try to use the tokenizer_config file to get the tokenizer class.
--> 598         tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
    599         if "_commit_hash" in tokenizer_config:
    600             kwargs["_commit_hash"] = tokenizer_config["_commit_hash"]

~/opt/anaconda3/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py in get_tokenizer_config(pretrained_model_name_or_path, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, **kwargs)
    440     ```"""
    441     commit_hash = kwargs.get("_commit_hash", None)
--> 442     resolved_config_file = cached_file(
    443         pretrained_model_name_or_path,
    444         TOKENIZER_CONFIG_FILE,

~/opt/anaconda3/lib/python3.8/site-packages/transformers/utils/hub.py in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
    407     try:
    408         # Load from URL or cache if already cached
--> 409         resolved_file = hf_hub_download(
    410             path_or_repo_id,
    411             filename,

~/opt/anaconda3/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)
    112         ):
    113             if arg_name == "repo_id":
--> 114                 validate_repo_id(arg_value)
    115 
    116             elif arg_name == "token" and arg_value is not None:

~/opt/anaconda3/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py in validate_repo_id(repo_id)
    164 
    165     if repo_id.count("/") > 1:
--> 166         raise HFValidationError(
    167             "Repo id must be in the form 'repo_name' or 'namespace/repo_name':"
    168             f" '{repo_id}'. Use `repo_type` argument if needed."

HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'https://file.hankcs.com/hanlp/tok/ctb9_electra_small_20220215_205427.zip'. Use `repo_type` argument if needed.

I cannot find any information about repo_type in the documentation for the transformer argument.

Many thanks

Use the finetune argument for HanLP models instead. transformer is for models listed on the HF hub.
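In other words, keep transformer as an HF hub model name and move the HanLP zip URL to finetune. A sketch of the corrected call (the tokenizer class and encoder name are my assumptions; paths are placeholders):

from hanlp.components.tokenizers.transformer import TransformerTaggingTokenizer

tokenizer = TransformerTaggingTokenizer()
tokenizer.fit(
    'token_train.tsv',                  # placeholder training file
    'token_test.tsv',                   # placeholder dev/test file
    'finetune_tok_ctb9_electra_small',  # save_dir
    epochs=2,
    average_subwords=True,
    max_seq_len=510,
    tagging_scheme='BMES',
    transformer='hfl/chinese-electra-180g-small-discriminator',  # HF hub name (assumed encoder)
    # The HanLP model zip goes to `finetune`, not `transformer`:
    finetune='https://file.hankcs.com/hanlp/tok/ctb9_electra_small_20220215_205427.zip',
)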

Thank you for your response! I was able to build a model quite well!

Next, I want to ask about hyperparameter tuning. Does TransformerNamedEntityRecognizer() allow for hyperparameter tuning? If so, is there any documentation or sample code you can share for this?

No, there is no automatic tuning function at this moment. You need to tune each hyperparameter manually.
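For what it's worth, a manual sweep can be as simple as a loop over candidate values, each run writing to its own save_dir (a sketch; the data paths and candidate values are placeholders, and the encoder name is an assumption):

import hanlp
from hanlp.components.ner.transformer_ner import TransformerNamedEntityRecognizer

for lr in (1e-5, 3e-5, 5e-5):        # illustrative candidate learning rates
    for batch_size in (16, 32):      # illustrative candidate batch sizes
        save_dir = f'ner_sweep_lr{lr}_bs{batch_size}'
        recognizer = TransformerNamedEntityRecognizer()
        metrics = recognizer.fit(
            'my_train.tsv', 'my_dev.tsv', save_dir,  # placeholder data paths
            lr=lr,
            batch_size=batch_size,
            epochs=10,
            transformer='hfl/chinese-electra-180g-small-discriminator',  # assumed encoder
            finetune=hanlp.pretrained.ner.MSRA_NER_ELECTRA_SMALL_ZH,
        )
        print(save_dir, metrics)     # compare runs and keep the best-performing save_dir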
