
What are the differences between adapter tuning and prefix tuning?

I am trying to understand the concept of adapter-tuning, prompt-tuning, and prefix-tuning in the context of few-shot learning.

It appears to me I can apply prompt-tuning to a black box language model.

I read that for prompt tuning the entire pre-trained language model is frozen. If that's the case, prompt tuning could be applied to an OpenAI model like GPT-3 or Codex.
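To make the frozen-model idea concrete, here is a toy sketch of what prompt tuning does: a small set of soft-prompt parameters is prepended to the input and trained by gradient descent, while the model's own weights never change. This is a conceptual illustration with a made-up linear "model" and invented numbers, not an actual language model or any OpenAI API:

```python
# Toy illustration of prompt tuning: the "model" weights stay frozen,
# and only the soft-prompt vector is updated by gradient descent.
# (Hypothetical numbers; a real setup would use an LM and its embeddings.)

# Frozen "model": a fixed linear scorer over [soft prompt ; input] features.
w_prompt = [0.5, -0.2]   # weights applied to the soft prompt (frozen)
w_input = [1.0, 0.3]     # weights applied to the input embedding (frozen)

x = [0.8, -0.1]          # frozen input embedding (not trained)
p = [0.0, 0.0]           # soft prompt: the ONLY trainable parameters
target = 1.0
lr = 0.1

def score(p):
    # The model sees the soft prompt prepended to the input.
    return sum(wp * pi for wp, pi in zip(w_prompt, p)) + \
           sum(wi * xi for wi, xi in zip(w_input, x))

losses = []
for _ in range(50):
    err = score(p) - target
    losses.append(err ** 2)
    # Gradient of (score - target)^2 w.r.t. p is 2 * err * w_prompt;
    # note that w_prompt and w_input themselves are never updated.
    p = [pi - lr * 2 * err * wp for pi, wp in zip(p, w_prompt)]

print(losses[0], losses[-1])  # loss drops even though the model is frozen
```

The key point the sketch shows is that training a soft prompt still requires gradients flowing *through* the frozen model back to the prompt parameters, which is why prompt tuning normally assumes white-box access to the model.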

How could I do prompt tuning with OpenAI Codex?

Can anyone please point me in the right direction?
