MATATA: a Weakly Supervised MAthematical Tool-Assisted Reasoning for Tabular Applications Mathematical reasoning capabilities are improving in tool-augmented language agents, but existing methods often rely on closed-source or large models, external data, or extensive prompt engineering. This work introduces MATATA, a novel cost-effective method to train LLM agents for tabular data problems through reasoning, planning, and tool use. With a progressive self-improvement paradigm and iterative weak supervision, it empowers 3.8B/8B Small Language Models (SLMs), which are particularly suited for local hosting and sensitive business contexts where data privacy is crucial. By employing flexible and reusable tools across different datasets, it achieves robust performance and effective scalability across shared tasks. Experiments show that MATATA reaches state-of-the-art performance on FinQA and TAT-QA among reasoning frameworks based on open-source models. Moreover, MATATA models compete with GPT-4-based frameworks on TabMWP, despite being SLMs. 3 authors · Nov 28, 2024
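The progressive self-improvement loop with weak supervision described above can be sketched abstractly. This is an assumption about the general recipe (rejection sampling against weak final-answer labels followed by fine-tuning), not MATATA's actual training code; `sample_fn`, `finetune_fn`, and the `params` placeholder are hypothetical names.

```python
def self_improve(sample_fn, finetune_fn, problems, weak_labels, rounds=3, n=8):
    """Illustrative iterative weak-supervision loop (not MATATA's exact code).

    sample_fn(params, problem, n) -> iterable of (final_answer, trajectory)
    finetune_fn(params, accepted_pairs) -> updated model parameters
    """
    params = None  # stands in for the SLM's weights
    for _ in range(rounds):
        accepted = []
        for prob, label in zip(problems, weak_labels):
            for answer, traj in sample_fn(params, prob, n):
                # Weak supervision: keep a trajectory only if its final
                # answer agrees with the weak label; no step-level labels.
                if answer == label:
                    accepted.append((prob, traj))
        # Fine-tune on self-generated, answer-verified trajectories,
        # then re-sample with the improved model in the next round.
        params = finetune_fn(params, accepted)
    return params
```

The design choice this illustrates is that supervision comes only from cheap final-answer checks, so no human-annotated reasoning traces or external data are required.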
DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer Large Language Models (LLMs) have emerged as dominant tools for various tasks, particularly when tailored for a specific target by prompt tuning. Nevertheless, concerns surrounding data privacy present obstacles due to the tuned prompts' dependency on sensitive private information. A practical solution is to host a local LLM and optimize a soft prompt privately using data. Yet, hosting a local model becomes problematic when model ownership is protected. Alternative methods, like sending data to the model's provider for training, intensify privacy concerns when facing an untrusted provider. In this paper, we present a novel solution called Differentially-Private Offsite Prompt Tuning (DP-OPT) to address this challenge. Our approach involves tuning a discrete prompt on the client side and then applying it to the desired cloud models. We demonstrate that prompts suggested by LLMs themselves can be transferred without significantly compromising performance. To ensure that the prompts do not leak private information, we introduce the first private prompt generation mechanism, via a differentially-private (DP) ensemble of in-context learning with private demonstrations. With DP-OPT, generating privacy-preserving prompts with Vicuna-7b can yield competitive performance compared to non-private in-context learning on GPT-3.5 or local private prompt tuning. Code is available at https://github.com/VITA-Group/DP-OPT. 6 authors · Nov 26, 2023
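The DP ensemble idea can be illustrated with a toy sketch, assuming one standard way to build such an ensemble: partition private demonstrations into disjoint shards, let each shard vote for its preferred candidate prompt, and release a choice via the exponential mechanism. This is not DP-OPT's exact mechanism; `dp_select_prompt` and its scoring scheme are illustrative assumptions.

```python
import math
import random

def dp_select_prompt(candidates, shard_scores, epsilon, rng=random.Random(0)):
    """Pick a candidate prompt via the exponential mechanism (illustrative).

    candidates: list of candidate prompt strings.
    shard_scores[i][j]: score of candidates[j] on disjoint data shard i.
    Each shard casts exactly one vote, so the vote count of any candidate
    changes by at most 1 if one shard's data changes (sensitivity 1).
    """
    votes = [0] * len(candidates)
    for scores in shard_scores:
        votes[max(range(len(candidates)), key=lambda j: scores[j])] += 1
    # Exponential mechanism: sample candidate j with prob ∝ exp(eps * votes[j] / 2).
    weights = [math.exp(epsilon * v / 2) for v in votes]
    r = rng.random() * sum(weights)
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if r <= acc:
            return candidates[j]
    return candidates[-1]
```

With a large privacy budget `epsilon` the best-voted prompt is selected almost surely; as `epsilon` shrinks, the choice approaches uniform sampling, which is what makes the released prompt differentially private with respect to the demonstrations.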
Discovery of a high-velocity cloud of the Milky Way as a potential dark galaxy High-velocity clouds (HVCs) are composed of neutral hydrogen (HI) moving at velocities that deviate from the general rotational motion of the Milky Way. Currently, the origins of the HVCs remain poorly understood due to the difficulty in determining their distances and the lack of any other suitable identification. Here we report the detection of a compact gas clump in HVC AC-I, which displays characteristics typical of a disk galaxy, named AC G185.0-11.5, using HI observations. We estimated the distance of AC G185.0-11.5 to be about 277.7 kpc using the Baryonic Tully-Fisher relation and constrained its HI gas mass to be between 3.0*10^7 and 4.7*10^8 solar masses. The distance determination indicates that the HVC AC-I hosting AC G185.0-11.5 is an extragalactic object in the Local Group. The absence of molecular gas and an optical counterpart for AC G185.0-11.5 implies that it may be a rare dark galaxy. 9 authors · Apr 12
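A Tully-Fisher-style distance estimate like the one above can be sketched as follows. The constants here are assumptions for illustration, not the paper's calibration: the Baryonic Tully-Fisher normalization M_b ≈ 47 · v^4 (McGaugh 2012), the standard HI flux-to-mass coefficient 2.356e5 M_sun Mpc^-2 (Jy km/s)^-1, and a hypothetical gas fraction; the input velocity and flux are made-up numbers.

```python
import math

A_BTFR = 47.0         # M_sun (km/s)^-4, assumed BTFR normalization
FLUX_COEFF = 2.356e5  # M_sun Mpc^-2 (Jy km/s)^-1, standard HI mass-flux relation

def btfr_mass(v_rot_kms):
    """Baryonic mass implied by the rotation velocity via the BTFR (M_sun)."""
    return A_BTFR * v_rot_kms ** 4

def distance_mpc(v_rot_kms, flux_jy_kms, gas_fraction=1.0):
    """Distance at which the observed integrated HI flux matches the BTFR mass.

    M_HI = FLUX_COEFF * D^2 * S is solved for D, after scaling the BTFR
    baryonic mass by an assumed M_HI / M_b ratio (1.0 = purely gaseous).
    """
    m_hi = gas_fraction * btfr_mass(v_rot_kms)
    return math.sqrt(m_hi / (FLUX_COEFF * flux_jy_kms))
```

The logic mirrors the abstract's reasoning: the rotation velocity fixes the expected baryonic mass, and the distance is whatever value reconciles that mass with the measured HI flux.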