SEG TOKEN Usage #49
Hi @Ruining0916, thanks for your interest in our work. The 5 in the first question does not actually mean 5 frames. The 5 here means that there are 5 instances in a set of image/video data; that is, 5 [SEG] tokens will generate 5 instance masks. The code you mentioned is mainly a dummy ("empty") execution to support ZeRO-3 during training, so that different GPUs keep executing the same code. During training, the input texts look like this:
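The "empty execution" pattern the author describes can be sketched as follows. This is a minimal illustration, not the repo's actual code: `mask_head`, `forward_masks`, and the toy list-based values are hypothetical stand-ins. The idea is that when a rank's batch contains no [SEG] tokens, it still runs the mask head on a dummy input and zeroes the result, so every GPU executes the same layers and ZeRO-3's collective operations stay in sync.

```python
# Hypothetical sketch of the ZeRO-3 "empty execution" pattern.
# All names and values here are illustrative, not from llava_sam.py.

def mask_head(hidden_states):
    # Stand-in for the real SAM-style mask decoder.
    return [h * 2.0 for h in hidden_states]

def forward_masks(seg_hidden_states, dummy_hidden_states):
    if seg_hidden_states:
        # Real [SEG] embeddings found: decode masks normally.
        return mask_head(seg_hidden_states)
    # No [SEG] token on this rank: run the SAME head on a dummy input
    # and multiply by 0, keeping parameter usage and collectives
    # aligned with the other GPUs without affecting the loss.
    dummy = mask_head(dummy_hidden_states)
    return [d * 0.0 for d in dummy]
```

With ZeRO-3, parameters are gathered on demand during the forward pass, so a rank that skipped the mask head entirely would deadlock the others; the zero-weighted dummy pass avoids that.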
Thanks for your clarification! I wonder if the [SEG] token is guaranteed to be generated in the VLM output — that is, will each image/video instance have a non-empty [SEG]? Thanks,
Hi @Ruining0916, can you explain your question further? I do not understand it.
Oh, I just realized there are some typos in my previous question. I'm wondering whether the [SEG] token is always generated by the VLM module — specifically, whether every image or video instance is guaranteed to have a non-empty [SEG] token. I am asking because I observed that it is possible for seg_token_count to be 0 here.
Ah, I see your question. In some cases, the VLM will not generate the [SEG] token. In this case, we will consider that there is no such object and do not generate a mask. |
Thanks for your clarification! Is this issue caused by the absence of an object in this frame according to the ground truth, or is it due to the VLM's insufficient capability? |
I think both may happen. In some cases, the VLM will output [SEG] even if the object is not in the frame. In other cases, the VLM may not generate [SEG] even though the object is in the frame.
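The "no [SEG] token means no mask" behavior discussed above can be sketched like this. The token id and helper name are made-up for illustration; they are not the repo's real configuration.

```python
# Hypothetical sketch: count [SEG] tokens in the generated ids and skip
# mask decoding when none are present. SEG_TOKEN_ID is an assumed value.
SEG_TOKEN_ID = 32001  # assumed id of the special [SEG] token

def masks_to_decode(generated_ids):
    """Return the number of instance masks to decode for one sample."""
    seg_token_count = sum(1 for tok in generated_ids if tok == SEG_TOKEN_ID)
    if seg_token_count == 0:
        # The VLM emitted no [SEG] token: treat the object as absent
        # and produce no mask for this sample.
        return 0
    return seg_token_count
```

As the author notes, a count of 0 can reflect either a genuinely absent object or a VLM miss, so the model simply produces no mask in both cases.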
Hi Authors,
Thanks for your excellent work! I am a little confused about the [SEG] token design in the script llava_sam.py:
1. When the [SEG] token is invalid, why do you need to add the number 5?
2. If I understand correctly, you put 5 sampled video frames into the input prompt as tokens, so it is supposed to generate 1 [SEG] token for each data entry. However, I observed that the code here extracts the last 5 indices of the hidden states — why not 1? Additionally, for batch_size = 2, if frames_per_batch = [5, 5], then seg_token_counts is [5, 5] instead of [1, 1] with the current model. Since self.seg_token_idx is a single integer, are these five [SEG] tokens the same?
Thanks a lot for your clarification!
Thanks,
Ruining
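Per the author's answer above (5 [SEG] tokens for 5 instances, producing 5 instance masks), the hidden-state extraction being asked about can be sketched as follows. `seg_token_idx`, `gather_seg_embeddings`, and the toy shapes are illustrative assumptions, not the repo's actual implementation. The [SEG] tokens share a single vocabulary id, but each occurrence has its own contextual hidden state, which is what becomes each instance's mask embedding.

```python
# Hypothetical sketch: select the hidden state at every [SEG] position
# of one sequence. seg_token_idx and the toy values are illustrative.
seg_token_idx = 9  # assumed single vocabulary id shared by all [SEG] tokens

def gather_seg_embeddings(input_ids, hidden_states):
    """One embedding per [SEG] occurrence -> one instance mask each."""
    positions = [i for i, tok in enumerate(input_ids) if tok == seg_token_idx]
    return [hidden_states[i] for i in positions]

ids = [1, 9, 3, 9, 9]                       # three [SEG] tokens -> 3 masks
hs = [[0.1], [0.2], [0.3], [0.4], [0.5]]    # per-token hidden states
embeds = gather_seg_embeddings(ids, hs)     # [[0.2], [0.4], [0.5]]
```

So with 5 instances per sample, seg_token_counts of [5, 5] for a batch of 2 is expected: identical token ids, but 5 distinct hidden states per sample.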