Best paper award at SKILL24
Published: 2024
Best paper award for: Uncovering the Token Splitting Effect in Soft Prompts for Multi-Model LLM Training
Published:
We investigate ways to increase the efficiency of BERT pretraining in order to create the most efficient German BERT model possible.
Published in SKILL24, 2024
We investigate how multi-model tuning affects the interpretability and transferability of soft prompts.
Published on arXiv, 2024
We evaluate the annotation quality of different labeling sources on a custom task.
Published:
We benchmark the power consumption and CO2e emissions of pretraining BERT models.
Published:
We investigate how reinforcement learning methods can be used with LLMs for algorithm discovery.