Improved Space Bounds for Subset Sum - OpenReview
Their algorithm is a clever combination of a number of previously known techniques with a new reduction and a new algorithm for the Orthogonal Vectors problem. In this paper, we give two new algorithms for Subset Sum.
A survey on Concept-based Approaches For Model Improvement
Explanations in terms of concepts enable detecting spurious correlations, inherent biases, or Clever Hans behaviour. With the advent of concept-based explanations, a range of concept representation methods and automatic concept discovery algorithms have been introduced.
Evaluating the Robustness of Neural Networks: An Extreme Value...
Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks.
A Universal Prompt Generator for Large Language Models
LLMs are primarily reliant on high-quality and task-specific prompts. However, the prompt engineering process relies on clever heuristics and requires multiple iterations. Some recent works attempt...
Submissions | OpenReview
Leaving the barn door open for Clever Hans: Simple features predict LLM benchmark answers
Lorenzo Pacchiardi, Marko Tesic, Lucy G Cheke, Jose Hernandez-Orallo
27 Sept 2024 (modified: 05 Feb 2025), submitted to ICLR 2025. Readers: Everyone
Off-Policy Evaluation under Nonignorable Missing Data
Off-Policy Evaluation (OPE) aims to estimate the value of a target policy using offline data collected from potentially different policies. In real-world applications, however, logged data often...
CLEVER: A Curated Benchmark for Formally Verified Code Generation
TL;DR: We introduce CLEVER, a hand-curated benchmark for verified code generation in Lean. It requires full formal specs and proofs. No few-shot method solves all stages, making it a strong testbed for synthesis and formal reasoning.
STAIR: Improving Safety Alignment with Introspective Reasoning
One common approach is training models to refuse unsafe queries, but this strategy can be vulnerable to clever prompts, often referred to as jailbreak attacks, which can trick the AI into providing harmful responses. Our method, STAIR (SafeTy Alignment with Introspective Reasoning), guides models to think more carefully before responding.