GUI-for-Cores/GUI.for.SingBox - GitHub: a GUI for SingBox. (Public repository; 6.7k stars, 599 forks.)
gui · GitHub Topics · GitHub: GUI stands for graphical user interface. It is a visual representation of communication presented to the user for easy interaction with the machine. It allows users to manipulate elements on the screen using a mouse, a stylus, or even a finger. Actions in a GUI are usually performed through direct manipulation of the graphical elements.
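Since the topic description centers on direct manipulation, here is a minimal sketch of that idea using Python's built-in tkinter toolkit: a visible element (a button) that the user acts on directly, with the program reacting to the event. The widget names and layout are illustrative only.

```python
# Minimal illustration of direct manipulation in a GUI, using Python's
# built-in tkinter toolkit: the user acts on a visible element (a button)
# and the program responds to that event.
import tkinter as tk

def on_click():
    label.config(text="Button clicked!")

root = tk.Tk()
root.title("Direct manipulation demo")

label = tk.Label(root, text="Click the button")
label.pack(padx=20, pady=10)

button = tk.Button(root, text="Click me", command=on_click)
button.pack(padx=20, pady=10)

root.mainloop()
```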
showlab/Awesome-GUI-Agent - GitHub: 💻 A curated list of papers and resources for multi-modal Graphical User Interface (GUI) agents.
ZJU-REAL/Awesome-GUI-Agents - GitHub: A curated collection of resources, tools, and frameworks for developing GUI agents.
GitHub - ritzz-ai/GUI-R1: Official implementation of GUI-R1: A … By leveraging a small amount of carefully curated high-quality data across multiple platforms (including Windows, Linux, macOS, Android, and Web) and employing policy-optimization algorithms such as group relative policy optimization (GRPO) to update the model, GUI-R1 achieves superior performance using only 0.02% of the data (3K vs. 13M).
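As a rough illustration of the group relative policy optimization idea mentioned above (not GUI-R1's actual code), the sketch below computes group-normalized advantages: each sampled rollout's reward is scored relative to the mean and standard deviation of its group, which removes the need for a learned value function. The reward values are made up.

```python
# Hedged sketch of the core idea behind group relative policy optimization
# (GRPO): advantages are computed relative to a group of sampled responses
# for the same prompt, so no learned value function is needed.
import numpy as np

def grpo_advantages(group_rewards, eps=1e-8):
    """Normalize each reward against its group's mean and std."""
    r = np.asarray(group_rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Example: 4 sampled GUI-action rollouts for one instruction, scored 0/1
# by a rule-based checker (e.g., did the click land on the right element?).
rewards = [1.0, 0.0, 0.0, 1.0]
print(grpo_advantages(rewards))  # positive for successes, negative otherwise
```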
GUI-G²: Gaussian Reward Modeling for GUI Grounding - GitHub: Recent studies on human interaction behavior, especially from the AITW dataset, demonstrate that GUI clicks are not random but instead form natural Gaussian-like distributions around the intended targets. Motivated by this, GUI-G² adopts a Gaussian reward framework that reflects these real-world behaviors by rewarding proximity to target centers (Gaussian Point Reward) and encouraging spatial …
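A minimal sketch of what a Gaussian point reward could look like, assuming a reward that decays as a Gaussian of the distance between the predicted click and the target center; the sigma value and exact functional form are illustrative assumptions, not the repository's implementation.

```python
# Hedged sketch of a Gaussian point reward in the spirit of GUI-G²: a
# predicted click earns more reward the closer it lands to the target
# center, falling off as a Gaussian. Sigma is an assumed value.
import math

def gaussian_point_reward(click_xy, target_center_xy, sigma=10.0):
    """Reward in (0, 1]; 1.0 for a click exactly on the target center."""
    dx = click_xy[0] - target_center_xy[0]
    dy = click_xy[1] - target_center_xy[1]
    dist_sq = dx * dx + dy * dy
    return math.exp(-dist_sq / (2.0 * sigma ** 2))

print(gaussian_point_reward((105, 202), (100, 200)))  # near the center -> high
print(gaussian_point_reward((160, 240), (100, 200)))  # far away -> near zero
```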
AgentCPM-GUI: An on-device GUI agent for operating Android apps … AgentCPM-GUI is an open-source on-device LLM agent model jointly developed by THUNLP, Renmin University of China, and ModelBest. Built on MiniCPM-V with 8 billion parameters, it accepts smartphone screenshots as input and autonomously executes user-specified tasks. Key features include high-quality GUI grounding, where pre-training on a large-scale bilingual Android dataset significantly boosts …
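To make the screenshot-in, action-out loop concrete, here is a hedged sketch of the kind of agent loop such a model could drive; every name in it (FakeModel, fake_screenshot, Action) is a hypothetical stand-in, not AgentCPM-GUI's API.

```python
# Hedged sketch of an on-device GUI-agent loop of the kind AgentCPM-GUI
# describes: capture a screenshot, ask the model for the next action,
# execute it, repeat until the task is done. All names below are
# hypothetical placeholders, not AgentCPM-GUI's actual interface.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str        # e.g. "tap", "type", "done"
    x: int = 0
    y: int = 0
    text: str = ""

def fake_screenshot() -> bytes:
    """Stand-in for capturing the phone screen (hypothetical)."""
    return b"<png bytes>"

class FakeModel:
    """Stand-in for a screenshot-to-action model (hypothetical)."""
    def predict_action(self, screenshot: bytes, task: str) -> Action:
        return Action(kind="done")

def run_agent(model, task: str, max_steps: int = 20) -> None:
    for step in range(max_steps):
        screenshot = fake_screenshot()
        action = model.predict_action(screenshot, task)
        print(f"step {step}: {action}")
        if action.kind == "done":
            break
        # a real agent would tap/type on the device here

run_agent(FakeModel(), "Open the settings app")
```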