When loading some popular embedding models, I am currently running into a Jackson `MismatchedInputException` while parsing `tokenizer.json`. After further investigation, it seems the datatype of the value of the `vocab` key in `tokenizer.json` is not consistent across models on Hugging Face: sometimes the value of `vocab` is a nested map, but in other cases it is an array of arrays. When `vocab` is an array of arrays, `SafeTensorSupport.loadTokenizer` fails at `TokenizerModel model = om.treeToValue(rootNode.get("model"), TokenizerModel.class)`, which cannot bind the `JsonNode` to a `TokenizerModel` because it expects the value of `vocab` to be a `BiMap<String, Long>`. The stack trace is pasted below for reference.

This is quite prevalent among some of the most popular embedding models on Hugging Face, such as `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2` and `intfloat/multilingual-e5-small`, both of which have an array of arrays as the value of the `vocab` key in their `tokenizer.json` file.
```
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize value of type `java.util.LinkedHashMap<java.lang.String,java.lang.Long>` from Array value (token `JsonToken.START_ARRAY`)
 at [Source: UNKNOWN; byte offset: #UNKNOWN] (through reference chain: com.github.tjake.jlama.safetensors.tokenizer.TokenizerModel["vocab"])
	at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:59)
	...
	at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:342)
	at com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:4881)
	at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3035)
	at com.fasterxml.jackson.databind.ObjectMapper.treeToValue(ObjectMapper.java:3499)
	at com.github.tjake.jlama.safetensors.SafeTensorSupport.loadTokenizer(SafeTensorSupport.java:144)
	at com.github.tjake.jlama.safetensors.tokenizer.WordPieceTokenizer.<init>(WordPieceTokenizer.java:50)
	at com.github.tjake.jlama.model.bert.BertTokenizer.<init>(BertTokenizer.java:24)
	at jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62)
	at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:486)
	at com.github.tjake.jlama.model.ModelSupport.loadModel(ModelSupport.java:186)
	at com.github.tjake.jlama.model.ModelSupport.loadEmbeddingModel(ModelSupport.java:93)
```
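For context, the array-of-arrays form comes from Unigram-style tokenizers (both models above use an XLM-RoBERTa/SentencePiece tokenizer): each entry is a `[token, score]` pair, and the token's id is its index in the array, whereas BPE/WordPiece models store `vocab` as a plain `{token: id}` map. Below is a minimal sketch of how both shapes could be normalized into the `Map<String, Long>` form that `TokenizerModel` wants (a Guava `BiMap` could be filled the same way). This is not jlama's actual code; `VocabShapeDemo` and `parseVocab` are made-up names, and it only uses Jackson's standard tree API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class VocabShapeDemo {

    /** Accepts either {"token": id, ...} or [["token", score], ...]. */
    static Map<String, Long> parseVocab(JsonNode vocabNode) {
        Map<String, Long> vocab = new LinkedHashMap<>();
        if (vocabNode.isObject()) {
            // BPE/WordPiece style: {"[PAD]": 0, "[UNK]": 1, ...}
            vocabNode.fields().forEachRemaining(e -> vocab.put(e.getKey(), e.getValue().asLong()));
        } else if (vocabNode.isArray()) {
            // Unigram style: [["<unk>", 0.0], ["▁the", -3.2], ...]
            // The id is the entry's index; the second element is a score, not an id.
            for (int i = 0; i < vocabNode.size(); i++) {
                vocab.put(vocabNode.get(i).get(0).asText(), (long) i);
            }
        } else {
            throw new IllegalArgumentException("Unexpected vocab shape: " + vocabNode.getNodeType());
        }
        return vocab;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper om = new ObjectMapper();
        // Map form, as in BERT-style tokenizer.json files
        JsonNode mapForm = om.readTree("{\"[PAD]\":0,\"[UNK]\":1,\"hello\":2}");
        // Array-of-arrays form, as in the multilingual models mentioned above
        JsonNode arrayForm = om.readTree("[[\"<unk>\",0.0],[\"\\u2581the\",-3.2]]");
        System.out.println(parseVocab(mapForm));   // {[PAD]=0, [UNK]=1, hello=2}
        System.out.println(parseVocab(arrayForm)); // {<unk>=0, ▁the=1}
    }
}
```

Note that for the Unigram case the scores would still need to be retained separately if the tokenizer itself is to be implemented, but for deserializing `TokenizerModel` the token-to-id mapping above is the part that currently fails.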