Uncovering the Token Splitting Effect in Soft Prompts for Multi-Model LLM Training
Published in SKILL24, 2024
We investigate how multi-model tuning affects the interpretability and transferability of soft prompts.
Published in arXiv, 2024
We evaluate the annotation quality of different labeling sources on a custom task.