- CLEVER: A Curated Benchmark for Formally Verified Code Generation
TL;DR: We introduce CLEVER, a hand-curated benchmark for verified code generation in Lean. It requires full formal specs and proofs. No few-shot method solves all stages, making it a strong testbed for synthesis and formal reasoning.
- Leaving the barn door open for Clever Hans: Simple features predict LLM benchmark answers
Lorenzo Pacchiardi, Marko Tesic, Lucy G. Cheke, Jose Hernandez-Orallo. 27 Sept 2024 (modified: 05 Feb 2025). Submitted to ICLR 2025.
- Counterfactual Debiasing for Fact Verification
In this paper, we have proposed a novel counterfactual framework, CLEVER, for debiasing fact-checking models. Unlike existing works, CLEVER is augmentation-free and mitigates biases at the inference stage. In CLEVER, the claim-evidence fusion model and the claim-only model are independently trained to capture the corresponding information.
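The snippet describes training a claim-evidence fusion model and a claim-only model independently and debiasing only at inference. One common counterfactual-inference scheme for this is to subtract the claim-only (bias) signal from the fused prediction; the sketch below assumes that subtraction rule and a weight `alpha`, neither of which is stated in the snippet.

```python
import torch

# Hedged sketch of inference-stage counterfactual debiasing: the combination
# rule (logit subtraction) and the weight `alpha` are assumptions, not
# details taken from the paper snippet above.
def debiased_logits(fusion_logits: torch.Tensor,
                    claim_only_logits: torch.Tensor,
                    alpha: float = 1.0) -> torch.Tensor:
    """Remove the claim-only shortcut signal from the fused prediction."""
    return fusion_logits - alpha * claim_only_logits

# Toy usage: 2 claims, 3 verdict labels (SUPPORTS / REFUTES / NOT ENOUGH INFO).
fusion = torch.tensor([[2.0, 0.5, 0.1], [0.3, 1.8, 0.2]])
claim_only = torch.tensor([[1.5, 0.2, 0.1], [0.1, 0.2, 0.1]])
print(debiased_logits(fusion, claim_only).argmax(dim=-1))  # debiased verdicts
```

Because the two models are trained independently, nothing about training changes; the debiasing is purely a post-hoc combination of their outputs.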
- CLEVER: A Curated Benchmark for Formally Verified Code Generation
We introduce CLEVER, the first curated benchmark for evaluating the generation of specifications and formally verified code in Lean. The benchmark comprises 161 programming problems; it evaluates both formal specification generation and implementation synthesis from natural language, requiring formal correctness proofs for both.
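To make the task format concrete, here is a minimal Lean 4 sketch of the specification/implementation/proof triple such a benchmark asks for; the toy problem (`doubleSpec`, `double`) is illustrative and not drawn from CLEVER itself.

```lean
-- Toy task (not from CLEVER): "return twice the input".

-- Formal specification: the output must equal the input added to itself.
def doubleSpec (x y : Nat) : Prop := y = x + x

-- Candidate implementation to be synthesized.
def double (x : Nat) : Nat := 2 * x

-- Formal correctness proof connecting implementation to specification.
theorem double_satisfies_spec (x : Nat) : doubleSpec x (double x) := by
  unfold doubleSpec double
  omega
```

Per the snippet, CLEVER requires both the specification and the implementation to be generated from natural language, each accompanied by a Lean correctness proof.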
- Super Deep Contrastive Information Bottleneck for Multi-modal Clustering - OpenReview
Zhengzheng Lou, Ke Zhang, Yucong Wu, Shizhe Hu
- STAIR: Improving Safety Alignment with Introspective Reasoning
One common approach is training models to refuse unsafe queries, but this strategy can be vulnerable to clever prompts, often referred to as jailbreak attacks, which can trick the AI into providing harmful responses. Our method, STAIR (SafeTy Alignment with Introspective Reasoning), guides models to think more carefully before responding.
- A Universal Prompt Generator for Large Language Models
LLMs are primarily reliant on high-quality and task-specific prompts. However, the prompt engineering process relies on clever heuristics and requires multiple iterations. Some recent works attempt …