GPT-3: Language Models are Few-Shot Learners - GitHub. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic.
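As a rough illustration of the few-shot setup behind tasks like 3-digit arithmetic, the sketch below assembles a prompt from a handful of worked addition examples followed by an unanswered query that the model is expected to complete. The Q/A template and the helper name `build_few_shot_prompt` are assumptions for illustration, not the paper's verbatim format.

```python
# Few-shot prompting sketch: K worked examples, then the query.
# The Q/A layout is an assumed template, not the paper's exact one.

def build_few_shot_prompt(examples, query):
    """Concatenate worked examples and end with an unanswered query."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
    blocks.append(f"Q: {query}\nA:")  # the model completes this answer
    return "\n\n".join(blocks)

examples = [
    ("What is 248 plus 617?", "865"),
    ("What is 512 plus 139?", "651"),
    ("What is 703 plus 128?", "831"),
]

print(build_few_shot_prompt(examples, "What is 334 plus 459?"))
```

The same pattern covers the paper's other conditioning regimes: zero-shot is the query alone, and one-shot uses a single worked example.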
GitHub - openai/gpt-2: Code for the paper Language Models are ... The dataset our GPT-2 models were trained on contains many texts with biases and factual inaccuracies, and thus GPT-2 models are likely to be biased and inaccurate as well. To avoid having samples mistaken as human-written, we recommend clearly labeling samples as synthetic before wide dissemination.
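A minimal sketch of one way to act on that labeling recommendation: prepend a clearly visible machine-generated notice to every sample before sharing it. The notice wording and the function name `label_as_synthetic` are assumptions for illustration; the repo does not prescribe a specific format.

```python
# Hedged sketch: mark generated text as synthetic before dissemination.
# The notice text and helper name are illustrative assumptions.

SYNTHETIC_NOTICE = "[Synthetic text: generated by a GPT-2 language model.]"

def label_as_synthetic(sample: str) -> str:
    """Prepend a clear machine-generated notice to a model sample."""
    return f"{SYNTHETIC_NOTICE}\n\n{sample}"

print(label_as_synthetic("In a shocking finding, scientists discovered..."))
```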