What are the differences between adapter tuning and prefix tuning?
I am trying to understand the concepts of adapter-tuning, prompt-tuning, and prefix-tuning in the context of few-shot learning.
It appears to me that prompt-tuning can be applied to a black-box language model.
I read that for prompt tuning the entire pre-trained language model is frozen. If that's the case, prompt tuning could be applied to an OpenAI model like GPT-3 or Codex.
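To make concrete what I understand prompt tuning to be, here is a minimal sketch using a local open-weight model (GPT-2 via Hugging Face Transformers) as a stand-in, since I don't see how to backpropagate through the OpenAI API. The prompt length, learning rate, and toy training pair are purely illustrative:

```python
# Minimal soft-prompt-tuning sketch (my understanding of the technique),
# using local GPT-2 as a stand-in because the OpenAI API exposes no gradients.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Freeze the entire pre-trained model: only the soft prompt is trained.
for param in model.parameters():
    param.requires_grad = False

n_prompt_tokens = 20                       # illustrative choice
embed_dim = model.config.n_embd
# Trainable soft prompt: n_prompt_tokens "virtual token" embeddings.
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt_tokens, embed_dim) * 0.01)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

# One illustrative training step on a toy (input, target) pair.
text = "Translate to French: cheese =>"
target = " fromage"
input_ids = tokenizer(text + target, return_tensors="pt").input_ids
token_embeds = model.transformer.wte(input_ids)            # (1, seq, dim)
inputs_embeds = torch.cat(
    [soft_prompt.unsqueeze(0), token_embeds], dim=1        # prepend soft prompt
)

# Labels: -100 masks the soft-prompt positions out of the loss.
labels = torch.cat(
    [torch.full((1, n_prompt_tokens), -100), input_ids], dim=1
)

loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
loss.backward()       # gradients flow only into soft_prompt
optimizer.step()
```

If prompt tuning really does require gradient access like this, I don't see how it could work against the Codex API, which is why I'm asking.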
How could I do prompt tuning with OpenAI Codex?
Can anyone please point me in the right direction?