At present, I do not see how to output more than the top 100 predicted hits on a model. For example:
```r
library(doc2vec)

model <- paragraph2vec(x = df_d2v, type = "PV-DM")
vocab <- summary(model, type = "vocabulary", which = "docs")

sentences <- "my bag of words"
sentences <- setNames(sentences, sentences)
sentences <- strsplit(sentences, split = " ")

model_predictions <- predict(
  model,
  newdata = sentences,
  type = "nearest", which = "sent2doc", top_n = 100)

## dim(model_predictions) is at most 100 rows
```
There appears to be no way to output predictions for a model whose "docs" vocabulary contains more than 100 doc_ids. Is there a workaround for generating predictions against all available doc_ids?
Thank you,
David
I want to calculate and export the similarity between a single sentence and ALL docs in a given model. How do you suggest I go about accomplishing this?
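One possible workaround (a sketch, untested against your data) is to bypass `type = "nearest"` and its `top_n` cap entirely: extract the embedding of the new sentence and the embeddings of every trained document, then score all pairs yourself with `paragraph2vec_similarity()`. This assumes the `model` and tokenised `sentences` objects built in the snippet above.

```r
library(doc2vec)

## embed the new sentence(s); `sentences` is the named list of tokens from above
emb_sent <- predict(model, newdata = sentences, type = "embedding", which = "docs")

## embeddings of every doc_id the model was trained on
emb_docs <- as.matrix(model, which = "docs")

## similarity of each sentence against ALL docs, not just the top 100
sims <- paragraph2vec_similarity(emb_sent, emb_docs, top_n = nrow(emb_docs))
```

The resulting `sims` can then be exported with e.g. `write.csv()`. If you only need the raw similarity matrix rather than ranked pairs, leaving `top_n` at its default should also return scores for every document.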