It hints GPT to skip generating unimportant text, such as filler words, while still keeping the output coherent.
For example, if I ask GPT "How to make bibimbap?" with the prompt "skip prose", it gives a concise list of ingredients and instructions in about 250 tokens [1]. Without the prompt, it first explains what bibimbap is and then gives slightly longer instructions, totaling around 360 tokens.
[1] A token is like a building block of a sentence - it can be a word, a punctuation mark, a number, or even a combination of words. In case you didn't know, Chat API users are charged based on the number of tokens used. So we try to keep it to a minimum.
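To try this yourself, here is a minimal sketch using the OpenAI Python client. The system message wording, the model name, and the environment-variable setup are assumptions, not the exact setup used above; the point is simply that "skip prose" goes in as an instruction and the token count comes back in the response's usage field.

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model name is an assumption
    messages=[
        # The "skip prose" hint goes in as a system instruction.
        {"role": "system", "content": "Skip prose."},
        {"role": "user", "content": "How to make bibimbap?"},
    ],
)

# The reply itself, plus the token count the API bills you for.
print(response.choices[0].message.content)
print("total tokens:", response.usage.total_tokens)
```

Comparing `usage.total_tokens` with and without the system message is an easy way to see how much the hint actually saves for a given question.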