Language Models Can Teach Themselves to Program Better
Keywords: deep learning, natural language processing, program synthesis, large language models
TL;DR: Language models can be used to generate programming puzzles and solutions, which can be filtered for correctness and used to finetune the LLM to improve its performance.
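The correctness filter is what makes this self-improvement loop mechanical: generated puzzle/solution pairs can be verified by execution rather than by the model's own judgment. Below is a minimal sketch; the f/g puzzle convention (a function f that returns True on a valid solution) follows the paper, but `generate` and the prompts are hypothetical stand-ins for an LLM sampling call.

```python
# Sketch of the generate-filter-finetune loop, assuming a hypothetical
# `generate(prompt)` LLM call. Not the paper's actual implementation.
def verify(puzzle_src: str, solution_src: str) -> bool:
    """Execute puzzle f and solution g; keep the pair only if f(g()) is True.
    A real implementation would sandbox and time-limit this exec call."""
    env = {}
    try:
        exec(puzzle_src, env)    # defines f(x) -> bool
        exec(solution_src, env)  # defines g() -> candidate solution
        return env["f"](env["g"]()) is True
    except Exception:
        return False

def self_training_round(generate, n_samples: int = 1000):
    finetune_set = []
    for _ in range(n_samples):
        puzzle = generate("Write a Python puzzle: def f(x), returning True on a valid x.")
        solution = generate(f"Write def g() whose output solves this puzzle:\n{puzzle}")
        if verify(puzzle, solution):
            finetune_set.append((puzzle, solution))
    return finetune_set  # correctness-filtered pairs for the next finetuning pass
```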
CRITIC: Large Language Models Can Self-Correct with ...
Unlike these models, humans typically use external tools to cross-check and refine their initial content, such as a search engine for fact-checking or a code interpreter for debugging.
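A minimal sketch of that draft-verify-revise pattern with a code interpreter is below, assuming a hypothetical text-in/text-out `llm` helper; the prompts and loop structure are illustrative, not the paper's API.

```python
# Tool-assisted self-correction: draft code, verify with a real interpreter,
# and revise using the interpreter's feedback instead of the LLM's self-judgment.
import subprocess, sys

def run_python(code: str):
    """External tool: execute code in a subprocess, report success and output."""
    try:
        proc = subprocess.run([sys.executable, "-c", code],
                              capture_output=True, text=True, timeout=10)
        return proc.returncode == 0, (proc.stderr or proc.stdout)
    except subprocess.TimeoutExpired:
        return False, "timed out"

def critic_loop(llm, task: str, max_rounds: int = 3) -> str:
    draft = llm(f"Write Python code for: {task}")
    for _ in range(max_rounds):
        ok, feedback = run_python(draft)   # cross-check by execution
        if ok:
            return draft
        draft = llm(f"Task: {task}\nCode:\n{draft}\nIt failed with:\n{feedback}\nRevise it.")
    return draft
```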
TPTU: Task Planning and Tool Usage of Large Language Model-based AI Agents
Abstract: With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their power, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks that necessitate a combination of task planning and the use of external tools. In this paper, we first propose a ...
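The plan-then-call-tools pattern this abstract describes can be sketched in a few lines. The tool registry, the JSON step format, and the one-step-at-a-time dispatch below are illustrative assumptions, and `llm` is any text-in/text-out model call.

```python
# Sketch of task planning followed by tool usage, under assumed helpers.
import json

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only
    "search": lambda q: f"(stub) top result for {q!r}",
}

def solve(llm, task: str) -> str:
    # 1) Task planning: ask the model for an ordered list of tool calls.
    #    Assumes the model returns well-formed JSON.
    plan = json.loads(llm(
        f'Decompose into an ordered JSON list [{{"tool": ..., "input": ...}}]: {task}'
    ))
    # 2) Tool usage: execute each step and accumulate observations.
    observations = []
    for step in plan:
        result = TOOLS[step["tool"]](step["input"])
        observations.append(f"{step['tool']}({step['input']}) -> {result}")
    # 3) Final answer grounded in the tool results.
    return llm(f"Task: {task}\nObservations:\n" + "\n".join(observations))
```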
Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
Quiet-STaR represents a step towards language models that can learn to reason in a general and scalable way. By training on the rich spectrum of reasoning tasks implicit in diverse web text, rather than narrowly specializing for particular datasets, Quiet-STaR points the way to more robust and adaptable LMs.
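The core signal can be sketched under heavy simplification: sample a latent "thought" before a stretch of text and keep thoughts that make the true continuation more likely. `sample` and `logprob` below are hypothetical wrappers around a language model, not the paper's token-level parallel implementation.

```python
# Rough sketch: reward thoughts by how much they improve prediction of real text.
def useful_thoughts(sample, logprob, context: str, continuation: str, k: int = 8):
    base = logprob(continuation, context)          # likelihood without a thought
    kept = []
    for _ in range(k):
        thought = sample(context + "\n<thought>")  # latent rationale
        gain = logprob(continuation, context + thought) - base
        if gain > 0:                               # thought helped predict the web text
            kept.append((thought, gain))           # training signal for the LM
    return kept
```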
Large Language Models Can Plan Your Travels Rigorously with Formal ...
The SMT solvers guarantee the satisfiability of the input constraints, and the LLMs enable language-based interaction with our framework. When the input constraints cannot be satisfied, our LLM-based framework interactively offers suggestions to users to modify their travel requirements via automatic reasoning using the SMT solvers.
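A small concrete example of SMT-based itinerary checking is below, using the Z3 Python bindings (`pip install z3-solver`). The variables, rates, and constraints are illustrative, not the paper's actual encoding; the unsat branch is where an LLM layer would translate solver feedback into a natural-language suggestion.

```python
# Sketch: encode travel requirements as SMT constraints and check feasibility.
from z3 import Int, Solver, sat

budget = Int("budget")       # total trip budget, in dollars
hotel = Int("hotel_cost")
flight = Int("flight_cost")
days = Int("days")

s = Solver()
s.add(hotel == 120 * days)   # hypothetical nightly rate
s.add(flight == 400)
s.add(days >= 5)             # user wants at least 5 days
s.add(hotel + flight <= budget)
s.add(budget == 800)         # user's stated budget

if s.check() == sat:
    print("Itinerary feasible:", s.model())
else:
    # Unsatisfiable here (5 nights at $120 plus a $400 flight exceeds $800);
    # an LLM front end could suggest raising the budget or shortening the trip.
    print("Constraints unsatisfiable; relax budget or days.")
```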
Published as a conference paper at ICLR 2025 - OpenReview
Introduction: A Tool-Augmented Language Model (TALM) is a language model designed to select and call appropriate tools (usually APIs) while interacting with the user to answer the user's query. By leveraging external tools, the TALM can conduct complex tasks beyond its parametric knowledge and adapt its actions based on API results. Recent TALM benchmarks mostly feature single-turn ...
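The select-and-call behavior described here can be sketched as a dialogue turn in which the model either emits a tool call or a final reply, with API results appended to the context. The CALL/FINAL protocol, tool registry, and `llm` helper below are all illustrative assumptions.

```python
# Sketch of one tool-augmented dialogue turn.
def talm_turn(llm, history: str, tools: dict, max_calls: int = 5) -> str:
    context = history
    for _ in range(max_calls):
        action = llm(context + "\nRespond with 'CALL <tool> <args>' or 'FINAL <answer>':")
        if action.startswith("FINAL"):
            return action[len("FINAL"):].strip()
        _, name, args = action.split(maxsplit=2)   # parse "CALL tool args"
        result = tools[name](args)                 # external API call
        context += f"\nCALL {name} {args}\nRESULT {result}"  # adapt to the API result
    return llm(context + "\nGive your best FINAL answer:")
```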
GPT4Tools: Teaching Large Language Model to Use Tools via ... - OpenReview
Abstract: This paper aims to efficiently enable Large Language Models (LLMs) to use multi-modal tools. Advanced proprietary LLMs, such as ChatGPT and GPT-4, have shown great potential for tool usage through sophisticated prompt engineering. Nevertheless, these models typically rely on prohibitive computational costs and publicly inaccessible data.
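The implied recipe is to use a proprietary teacher model to generate tool-use instruction data once, then finetune an open model on it rather than calling the teacher at inference time. The sketch below assumes a hypothetical `teacher` call and a made-up record format; the tool descriptions are placeholders.

```python
# Sketch: distill tool-use behavior into an instruction-tuning dataset.
import json

TOOL_DESCRIPTIONS = """
image_captioner(image): describe an image
object_detector(image, query): locate objects in an image
"""

def build_instruction_dataset(teacher, n: int, out_path: str):
    with open(out_path, "w") as f:
        for _ in range(n):
            record = teacher(
                "Given these tools:\n" + TOOL_DESCRIPTIONS +
                '\nInvent a user request and the correct tool invocation, '
                'as JSON {"instruction": ..., "tool_call": ...}.'
            )
            # Assumes the teacher returns valid JSON; a real pipeline would validate.
            f.write(json.dumps(json.loads(record)) + "\n")
    # The resulting JSONL can then finetune a smaller open LLM (e.g. with LoRA).
```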
Language Models Can Teach Themselves to Program Better
Keywords: deep learning, natural language processing, program synthesis, large language models, reinforcement learning
TL;DR: Language models can use reinforcement learning to generate programming puzzles and solutions, which can be scored for correctness and used to finetune the LLM to improve its performance.