
Daily Papers

by AK and the research community

Jan 19

Matters Arising from S. Vaitiekenas et al., "Zero-bias peaks at zero magnetic field in ferromagnetic hybrid nanowires" Nature Physics 2021

In 2021, Nature Physics published a paper by Vaitiekenas, Liu, Krogstrup and Marcus titled "Zero-bias peaks at zero magnetic field in ferromagnetic hybrid nanowires". The paper reports low-temperature transport measurements on semiconducting InAs nanowires with two partly overlapping shells: a shell of EuS, a magnetic insulator, and a shell of Al, a metal that becomes superconducting below 1.2 K. The paper claims (1) that the data are consistent with induced topological superconductivity and Majorana zero modes (MZMs), and (2) that this is facilitated by the breaking of time-reversal symmetry through a direct magnetic interaction with the EuS shell. In this Matters Arising, we present an alternative explanation based on trivial effects that are likely to arise in the reported geometry. Specifically, first, we find that the data the authors present in support of the topological-superconductivity claim can originate from unintended quantum dots in their devices, a well-known alternative explanation that the paper does not discuss. Second, our analysis of the setup, supported by our numerical micromagnetic simulations, shows that similar effects could arise from stray magnetic fields emanating from the region of the EuS shell damaged during Al etching. This basic picture should be considered before the exotic interpretation in terms of a magnetic exchange interaction with a ferromagnetic insulator.

  • 6 authors · Jan 7, 2025
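The stray-field mechanism invoked in this Matters Arising can be illustrated with the textbook point-dipole formula, which captures why a damaged ferromagnetic region produces local fields that fall off rapidly with distance. This is a rough sketch, not the authors' micromagnetic simulation; the magnetic moment and distances below are hypothetical values chosen only for illustration.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def dipole_field(m, r):
    """Magnetic field (tesla) of a point dipole with moment m (A*m^2)
    at displacement r (metres); m and r are 3-vectors."""
    rmag = math.sqrt(sum(c * c for c in r))
    rhat = [c / rmag for c in r]
    mdotr = sum(mi * ri for mi, ri in zip(m, rhat))
    pref = MU0 / (4 * math.pi * rmag ** 3)
    # B = (mu0 / 4*pi*r^3) * (3 (m . rhat) rhat - m)
    return [pref * (3 * mdotr * rh - mi) for rh, mi in zip(rhat, m)]

# Hypothetical moment of a small damaged EuS region, probed on-axis:
b_near = dipole_field([0.0, 0.0, 1e-18], [0.0, 0.0, 50e-9])
b_far = dipole_field([0.0, 0.0, 1e-18], [0.0, 0.0, 100e-9])
```

The 1/r^3 decay means the stray field is strongest precisely near the etched edge of the shell, consistent with the trivial picture proposed above; doubling the distance reduces the on-axis field eightfold.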

A Llama walks into the 'Bar': Efficient Supervised Fine-Tuning for Legal Reasoning in the Multi-state Bar Exam

Legal reasoning tasks present unique challenges for large language models (LLMs) due to the complexity of domain-specific knowledge and reasoning processes. This paper investigates how effectively smaller language models (Llama 2 7B and Llama 3 8B) can be fine-tuned with a limited dataset of 1,514 Multi-state Bar Examination (MBE) questions to improve legal question-answering accuracy. We evaluate these models on the 2022 MBE questions licensed from JD Advising, the same dataset used in the 'GPT-4 passes the Bar exam' study. Our methodology involves collecting approximately 200 questions per legal domain across 7 domains. We distill the dataset using Llama 3 (70B) to transform explanations into a structured IRAC (Issue, Rule, Application, Conclusion) format as a guided reasoning process, to test whether it yields better performance than the non-distilled dataset. We compare the non-fine-tuned models against their supervised fine-tuned (SFT) counterparts, trained on different sample sizes per domain, to study the effect on accuracy and prompt adherence. We also analyse option-selection biases and their mitigation following SFT. In addition, we consolidate performance across multiple variables: prompt type (few-shot vs zero-shot), answer ordering (chosen-option first vs generated-explanation first), response format (numbered list vs Markdown vs JSON), and different decoding temperatures. Our findings show that domain-specific SFT helps some model configurations achieve close to human-baseline performance, despite limited computational resources and a relatively small dataset. We release both the gathered SFT dataset and the family of SFT adapters optimised for MBE performance. This establishes a practical lower bound on the resources needed to achieve effective legal question answering in smaller LLMs.

  • 4 authors · Apr 7, 2025
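The IRAC-structured distillation described in the abstract above can be sketched as a small formatting step that turns a raw MBE question plus its explanation into a supervised training example. This is a minimal illustration; the field names, option labelling, and prompt/completion layout are assumptions, not the paper's released template.

```python
def to_irac_example(question, options, issue, rule, application, conclusion):
    """Format one MBE question into an IRAC-structured SFT example.

    The (A)/(B)/... option labelling and the {"prompt", "completion"} layout
    are illustrative choices, not the paper's exact format.
    """
    prompt = question + "\n" + "\n".join(
        f"({chr(65 + i)}) {opt}" for i, opt in enumerate(options)
    )
    # IRAC: Issue, Rule, Application, Conclusion, in that order.
    completion = (
        f"Issue: {issue}\n"
        f"Rule: {rule}\n"
        f"Application: {application}\n"
        f"Conclusion: {conclusion}"
    )
    return {"prompt": prompt, "completion": completion}
```

Distilling every explanation through a fixed template like this is what lets the fine-tuned model learn a consistent guided-reasoning trace rather than free-form rationales.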