,Unnamed: 0.1,TweetID,AuthorID,AuthorName,Tweets,arxiv_link,Abstract,Title,Thread_length,Tweets_coarse,year,month,tweet_length
0,214,1439985963040526336,97939183,Yuandong Tian,"We introduce CompilerGym, a fast & robust gym-like environment that enables simple integration of existing ML/RL techniques for compiler optimization (i.e., find customized compiler flags to make program smaller / run faster). Many RL baselines included. ",https://arxiv.org/abs/2109.08267,"Interest in applying Artificial Intelligence (AI) techniques to compiler optimizations is increasing rapidly, but compiler research has a high entry barrier. Unlike in other domains, compiler and AI researchers do not have access to the datasets and frameworks that enable fast iteration and development of ideas, and getting started requires a significant engineering investment. What is needed is an easy, reusable experimental infrastructure for real world compiler optimization tasks that can serve as a common benchmark for comparing techniques, and as a platform to accelerate progress in the field. We introduce CompilerGym, a set of environments for real world compiler optimization tasks, and a toolkit for exposing new optimization tasks to compiler researchers. CompilerGym enables anyone to experiment on production compiler optimization problems through an easy-to-use package, regardless of their experience with compilers. We build upon the popular OpenAI Gym interface enabling researchers to interact with compilers using Python and a familiar API. We describe the CompilerGym architecture and implementation, characterize the optimization spaces and computational efficiencies of three included compiler environments, and provide extensive empirical evaluations. Compared to prior works, CompilerGym offers larger datasets and optimization spaces, is 27x more computationally efficient, is fault-tolerant, and capable of detecting reproducibility bugs in the underlying compilers. In making it easy for anyone to experiment with compilers - irrespective of their background - we aim to accelerate progress in the AI and compiler research domains. ","CompilerGym: Robust, Performant Compiler Optimization Environments for
AI Research",1,"['We introduce CompilerGym, a fast & robust gym-like environment that enables simple integration of existing ML/RL techniques for compiler optimization (i.e., find customized compiler flags to make program smaller / run faster). Many RL baselines included. ']",21,09,261
1,65,1106468015040839680,92989497,Robert Haines,"New paper, to be presented at #CHASE/#ICSE2019 (): ""What Makes Research Software Sustainable? An Interview Study With Research Software Engineers"" by Mario Rosado de Souza, @CarolineEJay, @markelvigo and me! @CarolineEJay @markelvigo And I should have also mentioned that full (anonymized) interview transcriptions are available as well: ",http://arxiv.org/abs/1903.06039,"Software is now a vital scientific instrument, providing the tools for data collection and analysis across disciplines from bioinformatics and computational physics, to the humanities. The software used in research is often home-grown and bespoke: it is constructed for a particular project, and rarely maintained beyond this, leading to rapid decay, and frequent `reinvention of the wheel'. Understanding how to develop sustainable research software, such that it is suitable for future reuse, is therefore of interest to both researchers and funders, but how to achieve this remains an open question. Here we report the results of an interview study examining how research software engineers -- the people actively developing software in an academic research environment -- subjectively define software sustainability. Thematic analysis of the data reveals two interacting dimensions: \emph{intrinsic sustainability}, which relates to internal qualities of software, such as modularity, encapsulation and testability, and \emph{extrinsic sustainability}, concerning cultural and organisational factors, including how software is resourced, supported and shared. Research software engineers believe an increased focus on quality and discoverability are key factors in increasing the sustainability of academic research software. ","What Makes Research Software Sustainable? An Interview Study With
Research Software Engineers",2,"['New paper, to be presented at #CHASE/#ICSE2019 ():\n\n""What Makes Research Software Sustainable? An Interview Study With Research Software Engineers"" by Mario Rosado de Souza, @CarolineEJay, @markelvigo and me!\n\n', '@CarolineEJay @markelvigo And I should have also mentioned that full (anonymized) interview transcriptions are available as well: https://t.co/ltwUbQqsPL']",19,03,357
2,3,1422751230464516096,1214215979200172033,Leonard Wong,New paper with Jun Zhang @UMich. We show that Rényi entropy and divergence and the q-exponential family are naturally compatible with a generalized convex duality. It also comes with a logarithmic divergence which generalizes the Bregman divergence. See by @brekelmaniac who nicely summarizes some of the results.,https://arxiv.org/abs/2107.11925,"Tsallis and R\'{e}nyi entropies, which are monotone transformations of each other, are deformations of the celebrated Shannon entropy. Maximization of these deformed entropies, under suitable constraints, leads to the $q$-exponential family which has applications in non-extensive statistical physics, information theory and statistics. In previous information-geometric studies, the $q$-exponential family was analyzed using classical convex duality and Bregman divergence. In this paper, we show that a generalized $\lambda$-duality, where $\lambda = 1 - q$ is the constant information-geometric curvature, leads to a generalized exponential family which is essentially equivalent to the $q$-exponential family and has deep connections with R\'{e}nyi entropy and optimal transport. Using this generalized convex duality and its associated logarithmic divergence, we show that our $\lambda$-exponential family satisfies properties that parallel and generalize those of the exponential family. Under our framework, the R\'{e}nyi entropy and divergence arise naturally, and we give a new proof of the Tsallis/R\'{e}nyi entropy maximizing property of the $q$-exponential family. We also introduce a $\lambda$-mixture family which may be regarded as the dual of the $\lambda$-exponential family, and connect it with other mixture-type families. Finally, we discuss a duality between the $\lambda$-exponential family and the $\lambda$-logarithmic divergence, and study its statistical consequences. ",Tsallis and R\'{e}nyi deformations linked via a new $\lambda$-duality,2,"['New paper with Jun Zhang @UMich. We show that Rényi entropy and divergence and the q-exponential family are naturally compatible with a generalized convex duality. It also comes with a logarithmic divergence which generalizes the Bregman divergence.\n\n', 'See https://t.co/8zWm5GWEpt by @brekelmaniac who nicely summarizes some of the results.']",21,07,327
3,80,1184107699803262983,3150787230,Xiaofan Liang ,"My paper with Santa Fe Institute Researchers on ""The Scalability, Efficiency and Complexity of Universities and Colleges: A New Lens for Assessing the Higher Educational System"" is now on Arxiv! We applied the urban scaling framework to institutions. ",https://arxiv.org/abs/1910.05470,"The growing need for affordable and accessible higher education is a major global challenge for the 21st century. Consequently, there is a need to develop a deeper understanding of the functionality and taxonomy of universities and colleges and, in particular, how their various characteristics change with size. Scaling has been a powerful tool for revealing systematic regularities in systems across a range of topics from physics and biology to cities, and for understanding the underlying principles of their organization and growth. Here, we apply this framework to institutions of higher learning in the United States and show that, like organisms, ecosystems and cities, they scale in a surprisingly systematic fashion following simple power law behavior. We analyze the entire spectrum encompassing 5,802 institutions ranging from large research universities to small professional schools, organized in seven commonly used sectors, which reveal distinct regimes of institutional scaling behavior. Metrics include variation in expenditures, revenues, graduation rates and estimated economic added value, expressed as functions of total enrollment, our fundamental measure of size. Our results quantify how each regime of institution leverages specific economies of scale to address distinct priorities. Taken together, the scaling of features within a sector and shifts in scaling across sectors implies that there are generic mechanisms and constraints shared by all sectors which lead to tradeoffs between their different societal functions and roles. We particularly highlight the strong complementarity between public and private research universities, and community and state colleges, four sectors that display superlinear returns to scale. ","The Scalability, Efficiency and Complexity of Universities and Colleges:
A New Lens for Assessing the Higher Educational System",1,"['My paper with Santa Fe Institute Researchers on ""The Scalability, Efficiency and Complexity of Universities and Colleges: A New Lens for Assessing the Higher Educational System"" is now on Arxiv! We applied the urban scaling framework to institutions. ']",19,10,257
4,136,1365259949585145859,22604662,Florian Tschorsch,"Our paper “IPFS and Friends” is now available as preprint. We believe that due to rather recent advancements a new generation of P2P data networks emerges: In the paper, we extract the building blocks of this new generation of data networks and discuss their similarities and challenges. In particular, we cover @IPFS, @ethswarm, @HypercoreProto, @safenetworktech, @storjproject, and @ArweaveTeam If you have any remarks, please feel to get in touch. The paper is currently under review and we still have room for minor revisions. @paddypisa We briefly cover it in the ""honorable mentions""",https://arxiv.org/abs/2102.12737,"Decentralized, distributed storage offers a way to reduce the impact of data silos as often fostered by centralized cloud storage. While the intentions of this trend are not new, the topic gained traction due to technological advancements, most notably blockchain networks. As a consequence, we observe that a new generation of peer-to-peer data networks emerges. In this survey paper, we therefore provide a technical overview of the next generation data networks. We use select data networks to introduce general concepts and to emphasize new developments. Specifically, we provide a deeper outline of the Interplanetary File System and a general overview of Swarm, the Hypercore Protocol, SAFE, Storj, and Arweave. We identify common building blocks and provide a qualitative comparison. From the overview, we derive future challenges and research goals concerning data networks. ","IPFS and Friends: A Qualitative Comparison of Next Generation
Peer-to-Peer Data Networks",4,"['Our paper “IPFS and Friends” is now available as preprint. We believe that due to rather recent advancements a new generation of P2P data networks emerges: ', 'In the paper, we extract the building blocks of this new generation of data networks and discuss their similarities and challenges. In particular, we cover @IPFS, @ethswarm, @HypercoreProto, @safenetworktech, @storjproject, and @ArweaveTeam https://t.co/SU2UOC5mq0', 'If you have any remarks, please feel to get in touch. The paper is currently under review and we still have room for minor revisions.', '@paddypisa We briefly cover it in the ""honorable mentions""']",21,02,610
5,62,1106930520301150210,2785337469,Sebastian Ruder,"New paper with @mattthemathman & @nlpnoah on adapting pretrained representations: We compare feature extraction & fine-tuning with ELMo and BERT and try to give several guidelines for adapting pretrained representations in practice. @michalwols @mattthemathman @nlpnoah For BERT, we mostly modified the provided Colaboratory notebooks, while for ELMo, we modified scripts in AllenNLP, so both of those should be fairly easily reproducible using existing resources. I might upload the analysis scripts if there's interest.",https://arxiv.org/abs/1903.05987,"While most previous work has focused on different pretraining objectives and architectures for transfer learning, we ask how to best adapt the pretrained model to a given target task. We focus on the two most common forms of adaptation, feature extraction (where the pretrained weights are frozen), and directly fine-tuning the pretrained model. Our empirical results across diverse NLP tasks with two state-of-the-art models show that the relative performance of fine-tuning vs. feature extraction depends on the similarity of the pretraining and target tasks. We explore possible explanations for this finding and provide a set of adaptation guidelines for the NLP practitioner. ","To Tune or Not to Tune? Adapting Pretrained Representations to Diverse
Tasks",2,"['New paper with @mattthemathman & @nlpnoah on adapting pretrained representations: We compare feature extraction & fine-tuning with ELMo and BERT and try to give several guidelines for adapting pretrained representations in practice. ', ""@michalwols @mattthemathman @nlpnoah For BERT, we mostly modified the provided Colaboratory notebooks, while for ELMo, we modified scripts in AllenNLP, so both of those should be fairly easily reproducible using existing resources. I might upload the analysis scripts if there's interest.""]",19,03,535
6,191,1518501490578661377,703555806973845504,Michiel Lambrechts,"In this new study, we (Liu, @astroAnders, me, Bizzarro, Haugbølle ) show how pebble drift is consistent with the emergence of the so-called carbonaceous and non-carbonaceous meteoritic reservoirs. Check this thread to have a full summary: ",https://arxiv.org/abs/2204.10651,"Meteorites display an isotopic composition dichotomy between non-carbonaceous (NC) and carbonaceous (CC) groups, indicating that planetesimal formation in the solar protoplanetary disk occurred in two distinct reservoirs. The prevailing view is that a rapidly formed Jupiter acted as a barrier between these reservoirs. We show a fundamental inconsistency in this model: if Jupiter is an efficient blocker of drifting pebbles, then the interior NC reservoir is depleted by radial drift within a few hundred thousand years. If Jupiter lets material pass it, then the two reservoirs will be mixed. Instead, we demonstrate that the arrival of the CC pebbles in the inner disk is delayed for several million years by the viscous expansion of the protoplanetary disk. Our results support that Jupiter formed in the outer disk (>10 AU) and allowed a considerable amount of CC material to pass it and become accreted by the terrestrial planets. ","Natural separation of two primordial planetary reservoirs in an
expanding solar protoplanetary disk",2,"['In this new study, we (Liu, @astroAnders, me, Bizzarro, Haugbølle ) show how pebble drift is consistent with the emergence of the so-called carbonaceous and non-carbonaceous meteoritic reservoirs. \n ', 'Check this thread to have a full summary:\nhttps://t.co/KaiBvSAlVT']",22,04,266
7,77,1074847023696736256,23104038,Dr Katie Grasha,"The single hardest thing I've ever done my entire life is finally done with. We combine the GMC catalog from the PAWS survey with the LEGUS star cluster catalogs in the Whirlpool galaxy M51. We find a few things: (1) We find that star clusters remain associated with their birth GMCs for about 4-6 Myr. This is a longer timescale than in NGC7793 (Grasha+18, MNRAS, 481, 1016). A result of the higher surface density in M51, which constrains the winds (feedback), thus increasing the timescale. (2) The correlation function of the GMCs are significantly flatter than that of the star clusters. When we tune the star formation efficiency of the GMCs until the correlation function resembles that of the star clusters, we find that the GMCs have a SFE of just a few percent. Thus, only the most massive GMCs are capable of forming the young, massive star clusters we observe. Nothing can describe the work that went into this paper The final, published paper is dramatically different than what I had originally submitted. I encourage people to look up chapter 5 of my dissertation, the paper in its original submitted form. ",https://arxiv.org/abs/1812.06109,"We present a study correlating the spatial locations of young star clusters with those of molecular clouds in NGC~5194, in order to investigate the timescale over which clusters separate from their birth clouds. The star cluster catalogues are from the Legacy ExtraGalactic UV Survey (LEGUS) and the molecular clouds from the Plateau de Bure Interefrometer Arcsecond Whirpool Survey (PAWS). We find that younger star clusters are spatially closer to molecular clouds than older star clusters. The median ages for clusters associated with clouds is 4~Myr whereas it is 50~Myr for clusters that are sufficiently separated from a molecular cloud to be considered unassociated. After $\sim$6~Myr, the majority of the star clusters lose association with their molecular gas. Younger star clusters are also preferentially located in stellar spiral arms where they are hierarchically distributed in kpc-size regions for 50-100~Myr before dispersing. The youngest star clusters are more strongly clustered, yielding a two-point correlation function with $\alpha=-0.28\pm0.04$, than the GMCs ($\alpha=-0.09\pm0.03$) within the same PAWS field. However, the clustering strength of the most massive GMCs, supposedly the progenitors of the young clusters for a star formation efficiency of a few percent, is comparable ($\alpha=-0.35\pm0.05$) to that of the clusters. We find a galactocentric-dependence for the coherence of star formation, in which clusters located in the inner region of the galaxy reside in smaller star-forming complexes and display more homogeneous distributions than clusters further from the centre. This result suggests a correlation between the survival of a cluster complex and its environment. ","The Spatial Relation between Young Star Clusters and Molecular Clouds in
M 51 with LEGUS",5,"[""The single hardest thing I've ever done my entire life is finally done with.\n\nWe combine the GMC catalog from the PAWS survey with the LEGUS star cluster catalogs in the Whirlpool galaxy M51. \n\nWe find a few things:\n\n"", '(1) We find that star clusters remain associated with their birth GMCs for about 4-6 Myr. This is a longer timescale than in NGC7793 (Grasha+18, MNRAS, 481, 1016). A result of the higher surface density in M51, which constrains the winds (feedback), thus increasing the timescale.', '(2) The correlation function of the GMCs are significantly flatter than that of the star clusters. When we tune the star formation efficiency of the GMCs until the correlation function resembles that of the star clusters, we find that the GMCs have a SFE of just a few percent.', 'Thus, only the most massive GMCs are capable of forming the young, massive star clusters we observe. \n\nhttps://t.co/Tg1G1xTe8k', 'Nothing can describe the work that went into this paper\n\nThe final, published paper is dramatically different than what I had originally submitted. I encourage people to look up chapter 5 of my dissertation, the paper in its original submitted form. \n\nhttps://t.co/ox6HCU1chG']",18,12,1145
8,91,1087350548817461250,185910194,Graham Neubig,"#ICLR2019 paper ""Lagging Inference Networks and Posterior Collapse in VAEs"". VAEs collapse to trivial solutions; we find this is because the inference network is poor at the beginning of training, then propose a simple solution of ""aggressive update"": Nice work by @junxian_he, along with @dspoka, me, and @BergKirkpatrick! @sam_havens @poolio I think Section 3.1 of this paper is a reasonably clear explanation: ",https://arxiv.org/abs/1901.05534,"The variational autoencoder (VAE) is a popular combination of deep latent variable model and accompanying variational learning technique. By using a neural inference network to approximate the model's posterior on latent variables, VAEs efficiently parameterize a lower bound on marginal data likelihood that can be optimized directly via gradient methods. In practice, however, VAE training often results in a degenerate local optimum known as ""posterior collapse"" where the model learns to ignore the latent variable and the approximate posterior mimics the prior. In this paper, we investigate posterior collapse from the perspective of training dynamics. We find that during the initial stages of training the inference network fails to approximate the model's true posterior, which is a moving target. As a result, the model is encouraged to ignore the latent encoding and posterior collapse occurs. Based on this observation, we propose an extremely simple modification to VAE training to reduce inference lag: depending on the model's current mutual information between latent variable and observation, we aggressively optimize the inference network before performing each model update. Despite introducing neither new model components nor significant complexity over basic VAE, our approach is able to avoid the problem of collapse that has plagued a large amount of previous work. Empirically, our approach outperforms strong autoregressive baselines on text and image benchmarks in terms of held-out likelihood, and is competitive with more complex techniques for avoiding collapse while being substantially faster. ","Lagging Inference Networks and Posterior Collapse in Variational
Autoencoders",3,"['#ICLR2019 paper ""Lagging Inference Networks and Posterior Collapse in VAEs"". VAEs collapse to trivial solutions; we find this is because the inference network is poor at the beginning of training, then propose a simple solution of ""aggressive update"": ', 'Nice work by @junxian_he, along with @dspoka, me, and @BergKirkpatrick!', '@sam_havens @poolio I think Section 3.1 of this paper is a reasonably clear explanation: https://t.co/2sHCU1atD8']",19,01,433
9,22,1134342275474046979,883039700,Lenka Zdeborova,"Generative models are the new sparsity ... or even better actually as shown in our last paper: @carlonicolini84 You are perfectly right, the prior is based on the whole database, it does not know which particular picture was chosen to be the spike. @DanFrederiksen2 It is not denoising, but we do want to reconstruct the images as in denoising. The point is that the shirt is there better in the lower line than in the upper line which is the standard methods. The noisy data are not shown as they do not come in the form of picture.",https://arxiv.org/abs/1905.12385,"Using a low-dimensional parametrization of signals is a generic and powerful way to enhance performance in signal processing and statistical inference. A very popular and widely explored type of dimensionality reduction is sparsity; another type is generative modelling of signal distributions. Generative models based on neural networks, such as GANs or variational auto-encoders, are particularly performant and are gaining on applicability. In this paper we study spiked matrix models, where a low-rank matrix is observed through a noisy channel. This problem with sparse structure of the spikes has attracted broad attention in the past literature. Here, we replace the sparsity assumption by generative modelling, and investigate the consequences on statistical and algorithmic properties. We analyze the Bayes-optimal performance under specific generative models for the spike. In contrast with the sparsity assumption, we do not observe regions of parameters where statistical performance is superior to the best known algorithmic performance. We show that in the analyzed cases the approximate message passing algorithm is able to reach optimal performance. We also design enhanced spectral algorithms and analyze their performance and thresholds using random matrix theory, showing their superiority to the classical principal component analysis. We complement our theoretical results by illustrating the performance of the spectral algorithms when the spikes come from real datasets. ",The spiked matrix model with generative priors,3,"['Generative models are the new sparsity ... or even better actually as shown in our last paper: ', '@carlonicolini84 You are perfectly right, the prior is based on the whole database, it does not know which particular picture was chosen to be the spike.', '@DanFrederiksen2 It is not denoising, but we do want to reconstruct the images as in denoising. The point is that the shirt is there better in the lower line than in the upper line which is the standard methods. The noisy data are not shown as they do not come in the form of picture.']",19,05,547
10,134,1446162932945985565,1203016782178443264,Jacob Krantz,"How does the choice of action space affect language-guided embodied navigators? 🤖 In our new paper ""Waypoint Models for Instruction-guided Navigation in Continuous Environments"", we discover implications for simulation and reality. 🧵 Oral at #iccv2021! We develop a class of highly configurable waypoint prediction networks to explore a spectrum of action spaces. We vary the ""expressivity"" from coarse-grained heading prediction up to continuous-valued waypoint prediction and train each model with large-scale RL. We find that more expressive waypoint models result in simpler trajectories that are faster to execute on a real robot (by 2-3x!), but lower-level actions can better approximate shortest paths. Along the way, we set a new SotA on the VLN-CE leaderboard! Project page: A big thanks to my collaborators Aaron Gokaslan (@SkyLi0n), Dhruv Batra (@DhruvBatraDB), Stefan Lee (@stefmlee), and Oleksandr Maksymets (@o_maksymets).",https://arxiv.org/abs/2110.02207,"Little inquiry has explicitly addressed the role of action spaces in language-guided visual navigation -- either in terms of its effect on navigation success or the efficiency with which a robotic agent could execute the resulting trajectory. Building on the recently released VLN-CE setting for instruction following in continuous environments, we develop a class of language-conditioned waypoint prediction networks to examine this question. We vary the expressivity of these models to explore a spectrum between low-level actions and continuous waypoint prediction. We measure task performance and estimated execution time on a profiled LoCoBot robot. We find more expressive models result in simpler, faster to execute trajectories, but lower-level actions can achieve better navigation metrics by approximating shortest paths better. Further, our models outperform prior work in VLN-CE and set a new state-of-the-art on the public leaderboard -- increasing success rate by 4% with our best model on this challenging task. ","Waypoint Models for Instruction-guided Navigation in Continuous
Environments",4,"['How does the choice of action space affect language-guided embodied navigators? 🤖\n\nIn our new paper ""Waypoint Models for Instruction-guided Navigation in Continuous Environments"", we discover implications for simulation and reality. 🧵\n\nOral at #iccv2021!\n ', 'We develop a class of highly configurable waypoint prediction networks to explore a spectrum of action spaces.\n\nWe vary the ""expressivity"" from coarse-grained heading prediction up to continuous-valued waypoint prediction and train each model with large-scale RL. https://t.co/1PlM2RhdXA', 'We find that more expressive waypoint models result in simpler trajectories that are faster to execute on a real robot (by 2-3x!), but lower-level actions can better approximate shortest paths.\n\nAlong the way, we set a new SotA on the VLN-CE leaderboard! https://t.co/cyOQF1GLxz', 'Project page: https://t.co/yBK76qERDw\n\nA big thanks to my collaborators Aaron Gokaslan (@SkyLi0n), Dhruv Batra (@DhruvBatraDB), Stefan Lee (@stefmlee), and Oleksandr Maksymets (@o_maksymets).']",21,10,971
11,51,1118514682367741952,1004365363574902784,Kevin J. Kelly,"One more new paper (but this should be it for a while, I promise!) out today, with my collaborators André de Gouvêa, @StenicoVitti, and Pedro Pasquini. Studying tau leptons and tau neutrinos is notoriously hard, but the @DUNEScience experiment should detect hundreds of them! We looked into what kind of physics you can learn by studying tau events. The OPERA Experiment just put out () similar results. Not too surprisingly, studying tau neutrinos isn't as powerful as studying electron and muon neutrinos (which is what DUNE is designed for), but taus can provide an important cross check in seeing whether there's new physics lurking in the neutrino sector. Here's a look at how many events the detector will identify as tau neutrinos in 3.5 years of data collection -- this sample alone will be larger than all existing measurements of tau neutrinos to date! Finally, here's how well you can measure the amplitude and frequency of neutrino oscillations using each channel alone (electrons blue, taus green, muons red). Again, the tau neutrino measurement isn't powerful compared to the others, but it's a cross check that needs to be done! Again, this couldn't have been done without my collaborators. Thanks André, @StenicoVitti, and Pedro!",https://arxiv.org/abs/1904.07265,"We explore the capabilities of the upcoming Deep Underground Neutrino Experiment (DUNE) to measure $\nu_\tau$ charged-current interactions and the associated oscillation probability $P(\nu_\mu \to \nu_\tau)$ at its far detector, concentrating on how such results can be used to probe neutrino properties and interactions. DUNE has the potential to identify significantly more $\nu_\tau$ events than all existing experiments and can use this data sample to nontrivially test the three-massive-neutrinos paradigm by providing complementary measurements to those from the $\nu_e$ appearance and $\nu_\mu$ disappearance channels. We further discuss the sensitivity of the $\nu_\tau$ appearance channel to several hypotheses for the physics that may lurk beyond the three-massive-neutrinos paradigm: a non-unitary lepton mixing matrix, the $3+1$ light neutrinos hypothesis, and the existence of non-standard neutral-current neutrino interactions. Throughout, we also consider the relative benefits of the proposed high-energy tune of the Long-Baseline Neutrino Facility (LBNF) beam-line. ",Physics with Beam Tau-Neutrino Appearance at DUNE,6,"['One more new paper (but this should be it for a while, I promise!) out today, with my collaborators André de Gouvêa, @StenicoVitti, and Pedro Pasquini.\n\n', 'Studying tau leptons and tau neutrinos is notoriously hard, but the @DUNEScience experiment should detect hundreds of them!\n\nWe looked into what kind of physics you can learn by studying tau events. The OPERA Experiment just put out (https://t.co/gV7dSU4kDv) similar results.', ""Not too surprisingly, studying tau neutrinos isn't as powerful as studying electron and muon neutrinos (which is what DUNE is designed for), but taus can provide an important cross check in seeing whether there's new physics lurking in the neutrino sector."", ""Here's a look at how many events the detector will identify as tau neutrinos in 3.5 years of data collection -- this sample alone will be larger than all existing measurements of tau neutrinos to date! 
https://t.co/Tvn9rUYKkj"", ""Finally, here's how well you can measure the amplitude and frequency of neutrino oscillations using each channel alone (electrons blue, taus green, muons red). Again, the tau neutrino measurement isn't powerful compared to the others, but it's a cross check that needs to be done! https://t.co/RZJp4eFdgy"", ""Again, this couldn't have been done without my collaborators. Thanks André, @StenicoVitti, and Pedro!""]",19,04,1272
12,102,1503700997566308352,952949678533849088,Kareem El-Badry,"New paper! We study two “mass gap” black hole candidates in binaries with red giant stars, “the Unicorn” and “the Giraffe”. 1/n We used spectral disentangling to search for possible luminous companions (as opposed to BHs) to the giants. We found them! 2/n What that means, roughly, is that two luminous stars do (much) a better job fitting the spectra than one. In both systems, the disentangled spectra of the companions look like subgiant stars (i.e., cooler and larger than main-sequence stars; warmer and smaller than giants). 4/n Because these subgiants are cooler than main-sequence stars of similar mass, they are faint in the UV and consistent with the observed spectral energy distributions and UV limits. 5/n We can measure the masses of the giants from the observed ellipsoidal variation. In both systems, they are 0.3-0.5 Msun. This is very low for a giant, and implies most of their initial mass was stripped off by a companion. 6/n We can also measure the masses of the subgiants dynamically. The dynamically-inferred values (1.8 and 2.8 Msun) are in reasonably good agreement with what we'd estimate from their temperature and luminosity. 7/n We used binary evolution models to investigate how these systems formed and how they'll evolve in the future. We think the giant are almost completely stripped of their envelopes and will soon contract to become low-mass helium white dwarfs. 8/n This scenario (and the component masses) is almost identical to how we think Regulus, the ~20th brightest star in the sky, formed. It's a main-sequence star with a helium white dwarf companion in a wide orbit. 9/n The fact that the companions are subgiants (i.e, off the main sequence) implies that either the initial mass ratio was very close to 1 (like, q>0.99), or that the companions are temporarily inflated due to rapid accretion. 9/ The second possibility is particularly exciting, but it will take more work (ideally a population model of interacting giant binaries) to test it. The Unicorn and Giraffe join a growing population of mass-transfer binaries recently observed at various stages of the stripping process. Several of these other objects have also been previously interpreted as BHs. Summary: stellar-mass BHs are small needles in a very large haystack. But they haystack contains a lot of other interesting stuff! n/n",https://arxiv.org/abs/2203.06348,"We analyze two binary systems containing giant stars, V723 Mon (""the Unicorn"") and 2M04123153+6738486 (""the Giraffe""). Both giants orbit more massive but less luminous companions, previously proposed to be mass-gap black holes. Spectral disentangling reveals luminous companions with star-like spectra in both systems. Joint modeling of the spectra, light curves, and spectral energy distributions robustly constrains the masses, temperatures, and radii of both components: the primaries are luminous, cool giants ($T_{\rm eff,\,giant} = 3,800\,\rm K$ and $4,000\,\rm K$, $R_{\rm giant}= 22.5\,R_{\odot}$ and $25\,R_{\odot}$) with exceptionally low masses ($M_{\rm giant} \approx 0.4\,M_{\odot}$) that likely fill their Roche lobes. The secondaries are only slightly warmer subgiants ($T_{\rm eff,\,2} = 5,800\,\rm K$ and $5,150\,\rm K$, $R_2= 8.3\,R_{\odot}$ and $9\,R_{\odot}$) and thus are consistent with observed UV limits that would rule out main-sequence stars with similar masses ($M_2 \approx 2.8\,M_{\odot}$ and $\approx 1.8\,M_{\odot}$). 
In the Unicorn, rapid rotation blurs the spectral lines of the subgiant, making it challenging to detect even at wavelengths where it dominates the total light. Both giants have surface abundances indicative of CNO processing and subsequent envelope stripping. The properties of both systems can be reproduced by binary evolution models in which a $1-2\,M_{\odot}$ primary is stripped by a companion as it ascends the giant branch. The fact that the companions are also evolved implies either that the initial mass ratio was very near unity, or that the companions are temporarily inflated due to rapid accretion. The Unicorn and Giraffe offer a window into into a rarely-observed phase of binary evolution preceding the formation of wide-orbit helium white dwarfs, and eventually, compact binaries containing two helium white dwarfs. ","Unicorns and Giraffes in the binary zoo: stripped giants with subgiant
companions",13,"['New paper! We study two “mass gap” black hole candidates in binaries with red giant stars, “the Unicorn” and “the Giraffe”. 1/n ', 'We used spectral disentangling to search for possible luminous companions (as opposed to BHs) to the giants. We found them! 2/n https://t.co/j5WPzU3ikL', 'What that means, roughly, is that two luminous stars do (much) a better job fitting the spectra than one. https://t.co/bogv9ie8nu', 'In both systems, the disentangled spectra of the companions look like subgiant stars (i.e., cooler and larger than main-sequence stars; warmer and smaller than giants). 4/n https://t.co/2cdMaFwDn2', 'Because these subgiants are cooler than main-sequence stars of similar mass, they are faint in the UV and consistent with the observed spectral energy distributions and UV limits. 5/n https://t.co/S20JlBiznj', 'We can measure the masses of the giants from the observed ellipsoidal variation. In both systems, they are 0.3-0.5 Msun. This is very low for a giant, and implies most of their initial mass was stripped off by a companion. 6/n https://t.co/jpx3Pk4kkh', ""We can also measure the masses of the subgiants dynamically. The dynamically-inferred values (1.8 and 2.8 Msun) are in reasonably good agreement with what we'd estimate from their temperature and luminosity. 7/n https://t.co/KSLzPBlDdm"", ""We used binary evolution models to investigate how these systems formed and how they'll evolve in the future. We think the giant are almost completely stripped of their envelopes and will soon contract to become low-mass helium white dwarfs. 8/n https://t.co/NKLLw40ob0"", ""This scenario (and the component masses) is almost identical to how we think Regulus, the ~20th brightest star in the sky, formed. It's a main-sequence star with a helium white dwarf companion in a wide orbit. https://t.co/nk0iFv0E9K 9/n"", 'The fact that the companions are subgiants (i.e, off the main sequence) implies that either the initial mass ratio was very close to 1 (like, q>0.99), or that the companions are temporarily inflated due to rapid accretion. 9/', 'The second possibility is particularly exciting, but it will take more work (ideally a population model of interacting giant binaries) to test it.', 'The Unicorn and Giraffe join a growing population of mass-transfer binaries recently observed at various stages of the stripping process. Several of these other objects have also been previously interpreted as BHs. https://t.co/sxKQRKlkSb', 'Summary: stellar-mass BHs are small needles in a very large haystack. But they haystack contains a lot of other interesting stuff! n/n']",22,03,2420
13,109,1202088914078572544,1147039217534537728,Rohan Chandra,"(1/2) New paper on arXiv: We propose a trajectory prediction algorithm for autonomous driving. We use spectral graph theory to reduce long term RMSE and identify/predict behavior (over-speeding etc). Code,Video, Datasets: (2/2) More research on autonomous driving from our group found here: ",https://arxiv.org/abs/1912.01118,"We present a novel approach for traffic forecasting in urban traffic scenarios using a combination of spectral graph analysis and deep learning. We predict both the low-level information (future trajectories) as well as the high-level information (road-agent behavior) from the extracted trajectory of each road-agent. Our formulation represents the proximity between the road agents using a weighted dynamic geometric graph (DGG). We use a two-stream graph-LSTM network to perform traffic forecasting using these weighted DGGs. The first stream predicts the spatial coordinates of road-agents, while the second stream predicts whether a road-agent is going to exhibit overspeeding, underspeeding, or neutral behavior by modeling spatial interactions between road-agents. Additionally, we propose a new regularization algorithm based on spectral clustering to reduce the error margin in long-term prediction (3-5 seconds) and improve the accuracy of the predicted trajectories. Moreover, we prove a theoretical upper bound on the regularized prediction error. We evaluate our approach on the Argoverse, Lyft, Apolloscape, and NGSIM datasets and highlight the benefits over prior trajectory prediction methods. In practice, our approach reduces the average prediction error by approximately 75% over prior algorithms and achieves a weighted average accuracy of 91.2% for behavior prediction. Additionally, our spectral regularization improves long-term prediction by up to 70%. ","Forecasting Trajectory and Behavior of Road-Agents Using Spectral
Clustering in Graph-LSTMs",2,"['(1/2) New paper on arXiv: \n\nWe propose a trajectory prediction algorithm for autonomous driving. We use spectral graph theory to reduce long term RMSE and identify/predict behavior (over-speeding etc).\n\nCode,Video, Datasets: ', '(2/2) More research on autonomous driving from our group found here: https://t.co/hpBOMaIy0c']",19,12,311
14,131,1402898805373083654,145986026,Erdem Bıyık,"New paper at IEEE Transactions on Control of Network Systems: In traffic, many equilibria exist (see the GIF, the flow is the same in all 5 configs). When all drivers select their routes selfishly, we often end up in congested equilibria. (1/n) Autonomous cars may alleviate this problem, as they can keep shorter headways with the car they are following. Besides, travel times might be further reduced if some vehicles (or drivers) are altruistic. But, does altruism really exist? (2/n) Luckily, financial incentives (through ride-hailing services like Uber & Lyft) can give the same benefits of altruism! These services should give multiple options to the users: ""do you prefer a cheaper but slower route or are you willing to pay more to get there faster?"" (3/n) Using preference-based learning techniques helps learn passengers' price/latency tradeoff. The services should then use this information to optimize route prices to minimize congestion. In simulations, this approach decreased travel times up to 50%. (4/n) This is an extension of our earlier works presented at WAFR 2018 () and CDC 2019 () with Daniel A. Lazar, Ramtin Pedarsani and @DorsaSadigh. (5/5)",https://arxiv.org/abs/2106.04678,"Traffic congestion has large economic and social costs. The introduction of autonomous vehicles can potentially reduce this congestion by increasing road capacity via vehicle platooning and by creating an avenue for influencing people's choice of routes. We consider a network of parallel roads with two modes of transportation: (i) human drivers, who will choose the quickest route available to them, and (ii) a ride hailing service, which provides an array of autonomous vehicle route options, each with different prices, to users. We formalize a model of vehicle flow in mixed autonomy and a model of how autonomous service users make choices between routes with different prices and latencies. Developing an algorithm to learn the preferences of the users, we formulate a planning optimization that chooses prices to maximize a social objective. We demonstrate the benefit of the proposed scheme by comparing the results to theoretical benchmarks which we show can be efficiently calculated. ","Incentivizing Efficient Equilibria in Traffic Networks with Mixed
Autonomy",5,"['New paper at IEEE Transactions on Control of Network Systems: \n\nIn traffic, many equilibria exist (see the GIF, the flow is the same in all 5 configs). When all drivers select their routes selfishly, we often end up in congested equilibria. (1/n) ', 'Autonomous cars may alleviate this problem, as they can keep shorter headways with the car they are following. Besides, travel times might be further reduced if some vehicles (or drivers) are altruistic. But, does altruism really exist? (2/n)', 'Luckily, financial incentives (through ride-hailing services like Uber & Lyft) can give the same benefits of altruism! These services should give multiple options to the users: ""do you prefer a cheaper but slower route or are you willing to pay more to get there faster?"" (3/n)', ""Using preference-based learning techniques helps learn passengers' price/latency tradeoff. The services should then use this information to optimize route prices to minimize congestion. In simulations, this approach decreased travel times up to 50%. (4/n)"", 'This is an extension of our earlier works presented at WAFR 2018 (https://t.co/k9lXsPlF0O) and CDC 2019 (https://t.co/vs2NWzpVgW) with Daniel A. Lazar, Ramtin Pedarsani and @DorsaSadigh. (5/5)']",21,06,1194
15,114,1225157385397985280,4189511729,Marius Millea,"Detecting the CMB polarization B-modes from primordial gravitational waves would revolutionize our understanding of inflation. But this signal is highly obscured by gravitational lensing distortion. Our new paper shows how this can be cleaned optimally. The method is fully Bayesian and so can take the data and reconstruct all of the available information about the projected mass causing the gravitational lensing, the primordial signal, and parameters controlling the statistics of these fields. In practice our code samples the ~million dimensional parameter space, which we can do efficiently thanks to #JuliaLang, GPUs, Hamiltonian Monte Carlo, and some important tricks we discuss in the paper. The software package is ready for anyone to run: Excited to have this work with @EthanAnderes and @bwandelt out so we can build towards using this for current and future CMB experiments.",https://arxiv.org/abs/2002.00965,"The search for primordial gravitational waves in the Cosmic Microwave Background (CMB) will soon be limited by our ability to remove the lensing contamination to $B$-mode polarization. The often-used quadratic estimator for lensing is known to be suboptimal for surveys that are currently operating and will continue to become less and less efficient as instrumental noise decreases. While foregrounds can in principle be mitigated by observing in more frequency bands, progress in delensing hinges entirely on algorithmic advances. We demonstrate here a new inference method that solves this problem by sampling the exact Bayesian posterior of any desired cosmological parameters, of the gravitational lensing potential, and of the delensed CMB maps, given lensed temperature and polarization data. We validate the method using simulated CMB data with non-white noise and masking on up to 650\,deg$^2$ patches of sky. A unique strength of this approach is the ability to jointly estimate cosmological parameters which control both the primordial CMB and the lensing potential, which we demonstrate here for the first time by sampling both the tensor-to-scalar ratio, $r$, and the amplitude of the lensing potential, $A_\phi$. The method allows us to perform the most precise check to-date of several important approximations underlying CMB-S4 $r$ forecasting, and we confirm these yield the correct expected uncertainty on $r$ to better than 10%. ","Bayesian delensing delight: sampling-based inference of the primordial
CMB and gravitational lensing",4,"['Detecting the CMB polarization B-modes from primordial gravitational waves would revolutionize our understanding of inflation. But this signal is highly obscured by gravitational lensing distortion. Our new paper shows how this can be cleaned optimally.\n\n ', 'The method is fully Bayesian and so can take the data and reconstruct all of the available information about the projected mass causing the gravitational lensing, the primordial signal, and parameters controlling the statistics of these fields.', 'In practice our code samples the ~million dimensional parameter space, which we can do efficiently thanks to #JuliaLang, GPUs, Hamiltonian Monte Carlo, and some important tricks we discuss in the paper. The software package is ready for anyone to run:\n\nhttps://t.co/N2QVNJcp47', 'Excited to have this work with @EthanAnderes and @bwandelt out so we can build towards using this for current and future CMB experiments.']",20,02,909
16,189,1359989526001815552,1539866125,Jacy Reese Anthis,"New preprint by Jamie Harris & me reviews the 294 articles on the moral inclusion of AI. We find ""widespread agreement among scholars that some artificial entities could warrant moral consideration."" This motivates our work at @SentienceInst on #AIEthics. ",https://arxiv.org/abs/2102.04215,"Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on artificial entities and concern for the effects on human society. Beyond the conventional consequentialist, deontological, and virtue ethicist ethical frameworks, some scholars encourage ""information ethics"" and ""social-relational"" approaches, though there are opportunities for more in-depth ethical research on the nuances of moral consideration of artificial entities. There is limited relevant empirical data collection, primarily in a few psychological studies on current moral and social attitudes of humans towards robots and other artificial entities. This suggests an important gap for social science research on how artificial entities will be integrated into society and the factors that will determine how the interests of sentient artificial entities are considered. ",The Moral Consideration of Artificial Entities: A Literature Review,1,"['New preprint by Jamie Harris & me reviews the 294 articles on the moral inclusion of AI. We find ""widespread agreement among scholars that some artificial entities could warrant moral consideration."" This motivates our work at @SentienceInst on #AIEthics. ']",21,02,269
17,126,1369671049751629830,3094610676,Pranav Rajpurkar,"How do deep learning models perform in the presence of diseases not labeled for or present during training? Our new paper investigates this key AI+Medicine deployment consideration @siyumd @IshaanMalhi @AndrewYNg @StanfordAILab 1/9 Context: Datasets used to train models typically only provide labels for a limited number of common diseases. It's unknown whether DL models can maintain performance in presence of diseases not seen during training or whether they can detect the presence of such diseases. 2/9 Question 1: Can we detect diseases not seen during training? We first design a controlled experiment to evaluate whether deep learning models trained on a subset of diseases (seen diseases) can detect the presence of any one of a larger set of diseases. 3/9 Finding 1: We find that models tend to falsely classify unseen diseases as “no disease”. We also show that DL models may succeed in identifying “no disease” vs “any disease” when an unseen disease co-occurs with a seen disease, but not when an unseen disease appears alone. 4/9 Question 2: Is there a performance drop on labeled diseases? We evaluate whether models trained on seen diseases can detect seen diseases when co-occurring with diseases outside the subset (unseen diseases). 5/9 Finding 2: We find that models are still able to detect seen diseases even when co-occurring with unseen diseases. Moreover, a model trained with both seen and unseen diseases, but without labels for the unseen diseases, performs better on seen diseases! 6/9 Question 3: Can unseen diseases be detected without explicit training? We evaluate whether feature representations learned by models may be used to detect the presence of unseen diseases given a small labeled set of unseen diseases. 7/9 Finding 3: We find that the penultimate layer of the deep neural network provides useful features for unseen disease detection. Overall, our results can inform the safe clinical deployment of deep learning models trained on a non-exhaustive set of disease classes. 8/9 It was really fun working on this project with a talented team of first authors @siyumd and @IshaanMalhi, and Kevin Tran. Read more details in our paper here: 9/9",https://arxiv.org/abs/2103.04590,"We systematically evaluate the performance of deep learning models in the presence of diseases not labeled for or present during training. First, we evaluate whether deep learning models trained on a subset of diseases (seen diseases) can detect the presence of any one of a larger set of diseases. We find that models tend to falsely classify diseases outside of the subset (unseen diseases) as ""no disease"". Second, we evaluate whether models trained on seen diseases can detect seen diseases when co-occurring with diseases outside the subset (unseen diseases). We find that models are still able to detect seen diseases even when co-occurring with unseen diseases. Third, we evaluate whether feature representations learned by models may be used to detect the presence of unseen diseases given a small labeled set of unseen diseases. We find that the penultimate layer of the deep neural network provides useful features for unseen disease detection. Our results can inform the safe clinical deployment of deep learning models trained on a non-exhaustive set of disease classes. ","CheXseen: Unseen Disease Detection for Deep Learning Interpretation of
Chest X-rays",9,"['How do deep learning models perform in the presence of diseases not labeled for or present during training?\n\nOur new paper investigates this key AI+Medicine deployment consideration\n\n\n@siyumd @IshaanMalhi \n@AndrewYNg @StanfordAILab \n\n1/9 ', ""Context: Datasets used to train models typically only provide labels for a limited number of common diseases.\n\nIt's unknown whether DL models can maintain performance in presence of diseases not seen during training or whether they can detect the presence of such diseases.\n\n2/9"", 'Question 1: Can we detect diseases not seen during training?\n\nWe first design a controlled experiment to evaluate whether deep learning models trained on a subset of diseases (seen diseases) can detect the presence of any one of a larger set of diseases.\n\n3/9', 'Finding 1: We find that models tend to falsely classify unseen diseases as “no disease”.\n\nWe also show that DL models may succeed in identifying “no disease” vs “any disease” when an unseen disease co-occurs with a seen disease, but not when an unseen disease appears alone.\n\n4/9 https://t.co/ew02Mtbmv5', 'Question 2: Is there a performance drop on labeled diseases?\n\nWe evaluate whether models trained on seen diseases can detect seen diseases when co-occurring with diseases outside the subset (unseen diseases).\n\n5/9', 'Finding 2: We find that models are still able to detect seen diseases even when co-occurring with unseen diseases.\n\nMoreover, a model trained with both seen and unseen diseases, but without labels for the unseen diseases, performs better on seen diseases!\n\n6/9 https://t.co/KTomws6J1i', 'Question 3: Can unseen diseases be detected without explicit training? \n\nWe evaluate whether feature representations learned by models may be used to detect the presence of unseen diseases given a small labeled set of unseen diseases.\n\n7/9', 'Finding 3: We find that the penultimate layer of the deep neural network provides useful features for unseen disease detection.\n\nOverall, our results can inform the safe clinical deployment of deep learning models trained on a non-exhaustive set of disease classes.\n\n8/9 https://t.co/7sBa3LFWjB', 'It was really fun working on this project with a talented team of first authors @siyumd and @IshaanMalhi, and Kevin Tran.\n\nRead more details in our paper here:\nhttps://t.co/wjOsxtf1Qk\n\n9/9']",21,03,2228
18,145,1171900619252105216,2286503947,Changhan Wang,"Can we build machine translation models on finer-grained vocabularies than characters? Can they be compacter and faster? Can they be generic and transferable to any languages? In with @kchonyc and @thoma_gu, we study byte-level BPE for these questions. ",http://arxiv.org/abs/1909.03341,"Almost all existing machine translation models are built on top of character-based vocabularies: characters, subwords or words. Rare characters from noisy text or character-rich languages such as Japanese and Chinese however can unnecessarily take up vocabulary slots and limit its compactness. Representing text at the level of bytes and using the 256 byte set as vocabulary is a potential solution to this issue. High computational cost has however prevented it from being widely deployed or used in practice. In this paper, we investigate byte-level subwords, specifically byte-level BPE (BBPE), which is compacter than character vocabulary and has no out-of-vocabulary tokens, but is more efficient than using pure bytes only is. We claim that contextualizing BBPE embeddings is necessary, which can be implemented by a convolutional or recurrent layer. Our experiments show that BBPE has comparable performance to BPE while its size is only 1/8 of that for BPE. In the multilingual setting, BBPE maximizes vocabulary sharing across many languages and achieves better translation quality. Moreover, we show that BBPE enables transferring models between languages with non-overlapping character sets. ",Neural Machine Translation with Byte-Level Subwords,1,"['Can we build machine translation models on finer-grained vocabularies than characters? Can they be compacter and faster? Can they be generic and transferable to any languages?\n\nIn with @kchonyc and @thoma_gu, we study byte-level BPE for these questions. ']",19,09,266
19,98,1161443195479371776,280403336,Sean Welleck,"our new paper: ""Neural Text d̶e̶Generation with Unlikelihood Training"" is now on arxiv! (w/ @uralik1, @stephenroller, Emily Dinan, @kchonyc, @jaseweston) A step towards solving the case of neural text degeneration 🔎 @uralik1 @stephenroller @kchonyc @jaseweston Language models (e.g. GPT-2) tend to produce repetitive and dull ('degenerate') text, especially with greedy or beam search We propose 'unlikelihood training', which augments maximum likelihood with penalties on certain candidate tokens Using unlikelihood loss at the token and sequence levels reduces repetitions, leading to improved generation quality ",https://arxiv.org/abs/1908.04319,"Neural text generation is a key tool in natural language applications, but it is well known there are major problems at its core. In particular, standard likelihood training and decoding leads to dull and repetitive outputs. While some post-hoc fixes have been proposed, in particular top-$k$ and nucleus sampling, they do not address the fact that the token-level probabilities predicted by the model are poor. In this paper we show that the likelihood objective itself is at fault, resulting in a model that assigns too much probability to sequences containing repeats and frequent words, unlike those from the human training distribution. We propose a new objective, unlikelihood training, which forces unlikely generations to be assigned lower probability by the model. We show that both token and sequence level unlikelihood training give less repetitive, less dull text while maintaining perplexity, giving superior generations using standard greedy or beam search. According to human evaluations, our approach with standard beam search also outperforms the currently popular decoding methods of nucleus sampling or beam blocking, thus providing a strong alternative to existing techniques. ",Neural Text Generation with Unlikelihood Training,4,"['our new paper:\n\n""Neural Text d̶e̶Generation with Unlikelihood Training""\n\nis now on arxiv! (w/ @uralik1, @stephenroller, Emily Dinan, @kchonyc, @jaseweston) \n\nA step towards solving the case of neural text degeneration 🔎 ', ""@uralik1 @stephenroller @kchonyc @jaseweston Language models (e.g. GPT-2) tend to produce repetitive and dull ('degenerate') text, especially with greedy or beam search https://t.co/EFLQlZdu3a"", ""We propose 'unlikelihood training', which augments maximum likelihood with penalties on certain candidate tokens https://t.co/kxdQYZzjpH"", 'Using unlikelihood loss at the token and sequence levels reduces repetitions, leading to improved generation quality https://t.co/EeM93TccUd']",19,08,649
20,29,1465168906893541380,976155561522794497,Juliano César Silva Neves,"My new paper is all about violation of an important symmetry in physics, the Lorentz symmetry. A particular field, which breaks that symmetry, could increase the expansion of the universe in a given direction (fingerprints from a quantum gravity). ",https://arxiv.org/abs/2111.13165,"An effect of the Lorentz symmetry breaking is pointed out in the cosmological context. Using a Bianchi I geometry coupled to the Kalb-Ramond field, a consequence of the Lorentz symmetry violation is indicated by a different rate of expansion in a given spatial direction. This article focuses on the coupling constant $\xi_1$, which generates, from the Kalb-Ramond field, all three coefficients that give rise to the Lorentz violation in the gravity sector of the minimal Standard Model Extension. The coupling constant $\xi_1$ increases the rate of expansion of the universe in a given direction during a dark energy era. As a consequence, a range of validity of that coupling constant is also obtained. ",Bianchi type I cosmology with a Kalb-Ramond background field,1,"['My new paper is all about violation of an important symmetry in physics, the Lorentz symmetry. A particular field, which breaks that symmetry, could increase the expansion of the universe in a given direction (fingerprints from a quantum gravity).\n']",21,11,254
21,38,1276534735435632644,1012689495420833792,Simon Powers,"How can we balance electricity demand across households, and flatten the consumption curve, in a fair way? We need to do this to fully exploit renewable energy sources. Our new paper, to appear in @2020ALIFE, develops a multi-agent systems approach: Also shouting out an acknowledgement to @KeeleILAS for supporting this work.",https://arxiv.org/abs/2006.14526,"Reducing the peak energy consumption of households is essential for the effective use of renewable energy sources, in order to ensure that as much household demand as possible can be met by renewable sources. This entails spreading out the use of high-powered appliances such as dishwashers and washing machines throughout the day. Traditional approaches to this problem have relied on differential pricing set by a centralised utility company. But this mechanism has not been effective in promoting widespread shifting of appliance usage. Here we consider an alternative decentralised mechanism, where agents receive an initial allocation of time-slots to use their appliances and can then exchange these with other agents. If agents are willing to be more flexible in the exchanges they accept, then overall satisfaction, in terms of the percentage of agents time-slot preferences that are satisfied, will increase. This requires a mechanism that can incentivise agents to be more flexible. Building on previous work, we show that a mechanism incorporating social capital - the tracking of favours given and received - can incentivise agents to act flexibly and give favours by accepting exchanges that do not immediately benefit them. We demonstrate that a mechanism that tracks favours increases the overall satisfaction of agents, and crucially allows social agents that give favours to outcompete selfish agents that do not under payoff-biased social learning. Thus, even completely self-interested agents are expected to learn to produce socially beneficial outcomes. ",A mechanism to promote social behaviour in household load balancing,2,"['How can we balance electricity demand across households, and flatten the consumption curve, in a fair way? We need to do this to fully exploit renewable energy sources. Our new paper, to appear in @2020ALIFE, develops a multi-agent systems approach: ', 'Also shouting out an acknowledgement to @KeeleILAS for supporting this work.']",20,06,333
22,47,1163302455913795589,913238472357437445,Fuminobu TAKAHASHI,"Our new paper is out today. We showed that the initial position of the QCD axion can be set close to pi, if it has a mixing with another heavy axion which gives a phase shift of pi. It can be the inflaton. So we named it, ""pi-nflation"". The basic idea of giving a phase shift of pi was given by Daido and us in 1702.03284. We were inspired by many interesting talks and lively discussion at CERN-Korea TH workshop on axions () to revisit this idea and study it in detail.",https://arxiv.org/abs/1908.06071,"We show that the initial misalignment angle of the QCD axion (or axion-like particles) can be set very close to $\pi$, if the QCD axion has a mixing with another heavy axion which induces the phase shift $\approx \pi$ after inflation. In the simplest case, the heavy axion plays the role of the inflaton, and we call such inflation as ""$\pi$nflation."" The basic idea was first proposed by Daido and the present authors in Ref. [1702.03284] in 2017 and more recently discussed in Ref. [1903.00462]. We show that the QCD axion with a decay constant $f_a \gtrsim 3 \times 10^9\,$ GeV can explain dark matter by the $\pi$nflation mechanism. A large fraction of the parameter region has an overlap with the projected sensitivity of ORGAN, MADMAX, TOORAD and IAXO. We also study implications for the effective neutrino species and isocurvature perturbations. The $\pi$nflation can provide an initial condition for the hilltop inflation in the axion landscape, and in a certain set-up, a chain of the hilltop inflation may take place. ",QCD Axion on Hilltop by a Phase Shift of $\pi$,2,"['Our new paper is out today. \nWe showed that the initial position of the QCD axion can be set close to pi, if it has a mixing with another heavy axion which gives a phase shift of pi. It can be the inflaton. So we named it, ""pi-nflation"".', 'The basic idea of giving a phase shift of pi was given by Daido and us in 1702.03284. We were inspired by\nmany interesting talks and lively discussion at CERN-Korea TH workshop on axions (https://t.co/Y87xflY5UP) to revisit this idea and study it in detail.']",19,08,484
23,156,1479069371037364225,1215010368759701505,Eleanor D'Arcy,"First paper day🎉 . We propose a methodology for predicting the number and sizes of wildfires in the US, with an emphasis on extreme events🚒🔥 This is joint work with @callumbarltrop, Rob Shooter & Emma Simpson, and motivated by the #EVA2021 data challenge",https://arxiv.org/abs/2112.15372,"This paper details the methodology proposed by the \textit{Lancaster Ducks} team for the EVA 2021 conference data challenge. This aim of this challenge was to predict the number and size of wildfires over the contiguous US between 1993-2015, with more importance placed on extreme events. Our approach proposes separate methods for modelling the bodies and tails of the distributions of both wildfire variables. For the former, a hierarchical clustering technique is proposed to first group similar locations, with a non-parametric approach subsequently used to model the non-extreme data. To capture tail behaviour, separate techniques derived from univariate extreme value theory are proposed for both variables. For the count data, a generalised Pareto distribution with a generalised additive model structure is used to capture effects from covariates on values above a high threshold. For burnt area, a non-stationary generalised Pareto distribution enables us to capture the tail behaviour of proportions obtained through a transformation of observed area data. The resulting predictions are shown to perform reasonably well, improving on the benchmark method proposed in the challenge outline. We also provide a discussion about the limitations of our modelling framework and evaluate ways in which it could be extended. ","A flexible, semi-parametric, cluster-based approach for predicting
wildfire extremes across the contiguous United States",1,"['First paper day🎉 . We propose a methodology for predicting the number and sizes of wildfires in the US, with an emphasis on extreme events🚒🔥 This is joint work with @callumbarltrop, Rob Shooter & Emma Simpson, and motivated by the #EVA2021 data challenge']",21,12,260
24,19,1487451884701073411,801793180899360768,Hendrik Schuff,"Check out our new paper! (w/ @alon_jacovi, Heike Adel, @yoavgo and Thang Vu) Human Interpretation of Saliency-based Explanation Over Text We investigate human perception of heat map explanations and find that intuitive understanding is biased. 1/5 Saliency attribution methods score how important parts of a model’s input are to the model decision and are often visualised using heat maps. Previous work focused on developing and verifying attribution methods. Less is known about how humans interpret these explanations. 2/5 We conduct various user studies to investigate whether superficial and unrelated factors (e.g., word length) influence human self-reported importance ratings. We collect user feedback and statistically analyse importance ratings using a GAMM model. 3/5 We find that numerous factors such as word length, sentence length and learning effects affect human importance ratings. These factors shouldn't affect importance, because the explanation already objectively has importance, but they do. 4/5 We present two bias correction methods and demonstrate their ability to compensate the distorting influence of word length and repeated exposure. Details are in paper. Thanks! 5/5 ",http://arxiv.org/abs/2201.11569,"While a lot of research in explainable AI focuses on producing effective explanations, less work is devoted to the question of how people understand and interpret the explanation. In this work, we focus on this question through a study of saliency-based explanations over textual data. Feature-attribution explanations of text models aim to communicate which parts of the input text were more influential than others towards the model decision. Many current explanation methods, such as gradient-based or Shapley value-based methods, provide measures of importance which are well-understood mathematically. But how does a person receiving the explanation (the explainee) comprehend it? And does their understanding match what the explanation attempted to communicate? We empirically investigate the effect of various factors of the input, the feature-attribution explanation, and visualization procedure, on laypeople's interpretation of the explanation. We query crowdworkers for their interpretation on tasks in English and German, and fit a GAMM model to their responses considering the factors of interest. We find that people often mis-interpret the explanations: superficial and unrelated factors, such as word length, influence the explainees' importance assignment despite the explanation communicating importance directly. We then show that some of this distortion can be attenuated: we propose a method to adjust saliencies based on model estimates of over- and under-perception, and explore bar charts as an alternative to heatmap saliency visualization. We find that both approaches can attenuate the distorting effect of specific factors, leading to better-calibrated understanding of the explanation. ",Human Interpretation of Saliency-based Explanation Over Text,5,"['Check out our new paper! 
(w/ @alon_jacovi, Heike Adel, @yoavgo and Thang Vu)\n\nHuman Interpretation of Saliency-based Explanation Over Text\n\n\nWe investigate human perception of heat map explanations and find that intuitive understanding is biased.\n\n1/5 ', 'Saliency attribution methods score how important parts of a model’s input are to the model decision and are often visualised using heat maps.\nPrevious work focused on developing and verifying attribution methods. Less is known about how humans interpret these explanations.\n\n2/5', 'We conduct various user studies to investigate whether superficial and unrelated factors (e.g., word length) influence human self-reported importance ratings.\nWe collect user feedback and statistically analyse importance ratings using a GAMM model.\n\n3/5 https://t.co/sTaHj1CdOi', ""We find that numerous factors such as word length, sentence length and learning effects affect human importance ratings. These factors shouldn't affect importance, because the explanation already objectively has importance, but they do.\n\n4/5 https://t.co/qnz5PlqZh8"", 'We present two bias correction methods and demonstrate their ability to compensate the distorting influence of word length and repeated exposure.\nDetails are in paper. Thanks!\n\n5/5 https://t.co/MHk36SNAPt']",22,01,1234
25,71,1261100300678492160,1074633382452051969,Kimin,"New work with @young93k, Seunghyun Lee, @honglaklee, and Jinwoo Shin - Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning! Paper: Project Page: Code: Main problem: We study how to learn a global dynamics model that can generalize across different dynamics Method: We separate context encoding and transition inference, and propose various auxiliary tasks to extract contextual information effectively Results 1: The proposed context-aware dynamics model significantly improves the generalization performances of baseline model-based methods. Result 2: You can also utilize the learned context latent vector to improve the generalization abilities of model-free RL methods. ",https://arxiv.org/abs/2005.06800,"Model-based reinforcement learning (RL) enjoys several benefits, such as data-efficiency and planning, by learning a model of the environment's dynamics. However, learning a global model that can generalize across different dynamics is a challenging task. To tackle this problem, we decompose the task of learning a global dynamics model into two stages: (a) learning a context latent vector that captures the local dynamics, then (b) predicting the next state conditioned on it. In order to encode dynamics-specific information into the context latent vector, we introduce a novel loss function that encourages the context latent vector to be useful for predicting both forward and backward dynamics. The proposed method achieves superior generalization ability across various simulated robotics and control tasks, compared to existing RL schemes. ","Context-aware Dynamics Model for Generalization in Model-Based
Reinforcement Learning",5,"['New work with @young93k, Seunghyun Lee, @honglaklee, and Jinwoo Shin - Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning! \n\nPaper: \nProject Page: \nCode: ', 'Main problem:\nWe study how to learn a global dynamics model that can generalize across different dynamics https://t.co/ea4A9XWHIb', 'Method:\nWe separate context encoding and transition inference, and propose various auxiliary tasks to extract contextual information effectively https://t.co/q1R5jfcVYk', 'Results 1:\nThe proposed context-aware dynamics model significantly improves the generalization performances of baseline model-based methods. https://t.co/v4u66E2hdQ', 'Result 2:\nYou can also utilize the learned context latent vector to improve the generalization abilities of model-free RL methods. https://t.co/dSH0wepzsz']",20,05,757
26,114,1318567762407677952,887589711908188161,Mingjie Sun,"Is the backdoor secret? Checkout our new work on ''breaking'' poisoned classifiers, where we use neat ideas in adversarial robustness to analyze backdoored classifiers. Joint work with @agsidd10 & @zicokolter. Paper: Code: For a poisoned classifier, we construct a robustified smoothed classifier. We extract colors or cropped patches from adversarial examples of the smoothed classifier to create new triggers. These new triggers have similar or higher attack success rate than the original backdoor. Open questions: 1. Are there backdoor attacks that can avoid our attack? 2. From our results, it seems that backdoor poisoning creates a spectrum of potential backdoors. It is natural to ask what is actually learnt through the backdoor poisoning process?",https://arxiv.org/abs/2010.09080,"Under a commonly-studied backdoor poisoning attack against classification models, an attacker adds a small trigger to a subset of the training data, such that the presence of this trigger at test time causes the classifier to always predict some target class. It is often implicitly assumed that the poisoned classifier is vulnerable exclusively to the adversary who possesses the trigger. In this paper, we show empirically that this view of backdoored classifiers is incorrect. We describe a new threat model for poisoned classifier, where one without knowledge of the original trigger, would want to control the poisoned classifier. Under this threat model, we propose a test-time, human-in-the-loop attack method to generate multiple effective alternative triggers without access to the initial backdoor and the training data. We construct these alternative triggers by first generating adversarial examples for a smoothed version of the classifier, created with a procedure called Denoised Smoothing, and then extracting colors or cropped portions of smoothed adversarial images with human interaction. We demonstrate the effectiveness of our attack through extensive experiments on high-resolution datasets: ImageNet and TrojAI. We also compare our approach to previous work on modeling trigger distributions and find that our method are more scalable and efficient in generating effective triggers. Last, we include a user study which demonstrates that our method allows users to easily determine the existence of such backdoors in existing poisoned classifiers. Thus, we argue that there is no such thing as a secret backdoor in poisoned classifiers: poisoning a classifier invites attacks not just by the party that possesses the trigger, but from anyone with access to the classifier. ","Poisoned classifiers are not only backdoored, they are fundamentally
broken",3,"[""Is the backdoor secret? Checkout our new work on ''breaking'' poisoned classifiers, where we use neat ideas in adversarial robustness to analyze backdoored classifiers. Joint work with @agsidd10 & @zicokolter.\n\nPaper: \nCode: "", 'For a poisoned classifier, we construct a robustified smoothed classifier. We extract colors or cropped patches from adversarial examples of the smoothed classifier to create new triggers. These new triggers have similar or higher attack success rate than the original backdoor.', 'Open questions:\n1. Are there backdoor attacks that can avoid our attack?\n2. From our results, it seems that backdoor poisoning creates a spectrum of potential backdoors. It is natural to ask what is actually learnt through the backdoor poisoning process?']",20,10,777
27,185,1361560280828846081,1114972720826068992,Danny Horta Darrington 🇺🇦,"What's that, stars in the inner Milky Way that come from GCs and from an accreted origin? Check out some awesome results in a paper led by my PhD sibling Shobhit Kisku where we study the nature of chemically tagged dissolved GC stars in the inner Galaxy @kaosteorin Thanks @kaosteorin !! I will make sure to pass the message on to Shobhit. Let us know if you wanna chat about it!",https://arxiv.org/abs/2102.06720,"Recent evidence based on APOGEE data for stars within a few kpc of the Galactic centre suggests that dissolved globular clusters (GCs) contribute significantly to the stellar mass budget of the inner halo. In this paper we enquire into the origins of tracers of GC dissolution, N-rich stars, that are located in the inner 4 kpc of the Milky Way. From an analysis of the chemical compositions of these stars we establish that about 30% of the N-rich stars previously identified in the inner Galaxy may have an accreted origin. This result is confirmed by an analysis of the kinematic properties of our sample. The specific frequency of N-rich stars is quite large in the accreted population, exceeding that of its in situ counterparts by near an order of magnitude, in disagreement with predictions from numerical simulations. We hope that our numbers provide a useful test to models of GC formation and destruction. ","An enquiry on the origins of N-rich stars in the inner Galaxy basedon
APOGEE chemical compositions",2,"[""What's that, stars in the inner Milky Way that come from GCs and from an accreted origin? Check out some awesome results in a paper led by my PhD sibling Shobhit Kisku where we study the nature of chemically tagged dissolved GC stars in the inner Galaxy "", '@kaosteorin Thanks @kaosteorin !! I will make sure to pass the message on to Shobhit. Let us know if you wanna chat about it!']",21,02,386
28,45,1441444078185496577,202420697,Jeff Carver,"Interested in how peer code review is working in the research software community? Check out this new paper with @nasireisty, which will appear in Empirical Software Engineering. #RSE, #ResearchSoftware, #CodeReview, # SoftwareQuality, @Se4Science ",https://arxiv.org/abs/2109.10971,"Background: Research software is software developed by and/or used by researchers, across a wide variety of domains, to perform their research. Because of the complexity of research software, developers cannot conduct exhaustive testing. As a result, researchers have lower confidence in the correctness of the output of the software. Peer code review, a standard software engineering practice, has helped address this problem in other types of software. Aims: Peer code review is less prevalent in research software than it is in other types of software. In addition, the literature does not contain any studies about the use of peer code review in research software. Therefore, through analyzing developers perceptions, the goal of this work is to understand the current practice of peer code review in the development of research software, identify challenges and barriers associated with peer code review in research software, and present approaches to improve the peer code review in research software. Method: We conducted interviews and a community survey of research software developers to collect information about their current peer code review practices, difficulties they face, and how they address those difficulties. Results: We received 84 unique responses from the interviews and surveys. The results show that while research software teams review a large amount of their code, they lack formal process, proper organization, and adequate people to perform the reviews. Conclusions: Use of peer code review is promising for improving the quality of research software and thereby improving the trustworthiness of the underlying research results. In addition, by using peer code review, research software developers produce more readable and understandable code, which will be easier to maintain. ","Developers Perception of Peer Code Review in Research Software
Development",1,"['Interested in how peer code review is working in the research software community? Check out this new paper with @nasireisty, which will appear in Empirical Software Engineering.\n\n\n\n#RSE, #ResearchSoftware, #CodeReview, # SoftwareQuality, @Se4Science ']",21,09,260
29,267,1313642365631037443,1116002690604130305,Juliette Becker,"See our new paper (led by Tali Khain, now a first year grad at Chicago) on how TNOs move between Planet Nine resonances in the solar system w/ P9: This is the final part of Tali's work which won her the 2019 @APSphysics Apker Award! Tali's website: As an undergrad, Tali led FOUR first author papers (working with me, Fred Adams, @kbatygin, and @dAArkEnergy). I hear she has some exciting results coming from her grad school work, so keep your eyes peeled for that!",https://arxiv.org/abs/2010.02234,"The observed physical clustering of the orbits of small bodies in the distant Kuiper Belt (TNOs) has recently prompted the prediction of an additional planet in the outer solar system. Since the initial posing of the hypothesis, the effects of Planet Nine on the dynamics of the main cluster of TNOs - the objects anti-aligned with its orbit - have been well-studied. In particular, numerical simulations have revealed a fascinating phenomenon, referred to as ""resonance hopping"", in which these objects abruptly transition between different mean-motion commensurabilities with Planet Nine. In this work, we explore this effect in greater detail, with the goal of understanding what mechanism prompts the hopping events to occur. In the process, we elucidate the often underestimated role of Neptune scattering interactions, which leads to diffusion in the semi-major axes of these distant TNOs. In addition, we demonstrate that although some resonant interactions with Planet Nine do occur, the anti-aligned objects are able to survive without the resonances, confirming that the dynamics of the TNOs are predominantly driven by secular, rather than resonant, interactions with Planet Nine. ",The Resonance Hopping Effect in the Neptune-Planet Nine System,2,"[""See our new paper (led by Tali Khain, now a first year grad at Chicago) on how TNOs move between Planet Nine resonances in the solar system w/ P9: This is the final part of Tali's work which won her the 2019 @APSphysics Apker Award! "", ""Tali's website: https://t.co/TILUDG24md As an undergrad, Tali led FOUR first author papers (working with me, Fred Adams, @kbatygin, and @dAArkEnergy). I hear she has some exciting results coming from her grad school work, so keep your eyes peeled for that!""]",20,10,486
30,58,963704037525917696,882303076505456642,Timon Emken,"Can Earth-based detectors observe strongly interacting #DarkMatter? At what point do the Earth crust and atmosphere shield off DM and leave the underground and surface detectors blind? If these questions keep you up at night, check out our new paper: Also check out the recent paper by @DanHooperAstro and Samuel D. McDermott, who used analytic methods to shed light on these questions. Additionally they also focus on DM-cosmic ray interactions: The #MonteCarlo #simulation #code DaMaSCUS-CRUST used in our paper is available at . #OpenScience @JonathanHMDavis We called it high altitude experiments. But yes!",https://arxiv.org/abs/1802.04764,"Above a critical dark matter-nucleus scattering cross section any terrestrial direct detection experiment loses sensitivity to dark matter, since the Earth crust, atmosphere, and potential shielding layers start to block off the dark matter particles. This critical cross section is commonly determined by describing the average energy loss of the dark matter particles analytically. However, this treatment overestimates the stopping power of the Earth crust. Therefore the obtained bounds should be considered as conservative. We perform Monte Carlo simulations to determine the precise value of the critical cross section for various direct detection experiments and compare them to other dark matter constraints in the low mass regime. In this region we find parameter space where typical underground and surface detectors are completely blind to dark matter. This ""hole"" in the parameter space can hardly be closed with an increase in the detector exposure. Dedicated surface or high-altitude experiments may be the only way to directly probe this part of the parameter space. ","How blind are underground and surface detectors to strongly interacting
Dark Matter?",4,"['Can Earth-based detectors observe strongly interacting #DarkMatter? At what point do the Earth crust and atmosphere shield off DM and leave the underground and surface detectors blind? If these questions keep you up at night, check out our new paper: ', 'Also check out the recent paper by @DanHooperAstro and Samuel D. McDermott, who used analytic methods to shed light on these questions. Additionally they also focus on DM-cosmic ray interactions: https://t.co/NHcyWQxzSd', 'The #MonteCarlo #simulation #code DaMaSCUS-CRUST used in our paper is available at https://t.co/Zo5H7bXCKz. #OpenScience', '@JonathanHMDavis We called it high altitude experiments. But yes!']",18,02,637
31,108,1304226115205046272,1148910974218321920,Dr. Isobel Romero-Shaw,"Excited to release this! In my new paper () with @LaskyPaul, @EHThrane and @juan__cb, we find evidence that GW190521 may have come from an *eccentric* binary! This supports the hypothesis that it and other @LIGO @ego_virgo mergers formed *dynamically*!",http://arxiv.org/abs/2009.04771,"Pair instability supernovae are thought to restrict the formation of black holes in the mass range ~50 - 135 solar masses. However, black holes with masses within this ""high mass gap"" are expected to form as the remnants of binary black hole mergers. These remnants can merge again dynamically in densely populated environments such as globular clusters. The hypothesis that the binary black hole merger GW190521 formed dynamically is supported by its high mass. Orbital eccentricity can also be a signature of dynamical formation, since a binary that merges quickly after becoming bound may not circularize before merger. In this work, we measure the orbital eccentricity of GW190521. We find that the data prefer a signal with eccentricity $e \geq 0.1$ at 10 Hz to a non-precessing, quasi-circular signal, with a log Bayes factor $\ln{\cal B}=5.0$. When compared to precessing, quasi-circular analyses, the data prefer a non-precessing, $e \geq 0.1$ signal, with log Bayes factors $\ln{\cal B}\approx2$. Using injection studies, we find that a non-spinning, moderately eccentric ($e = 0.13$) GW190521-like binary can be mistaken for a quasi-circular, precessing binary. Conversely, a quasi-circular binary with spin-induced precession may be mistaken for an eccentric binary. We therefore cannot confidently determine whether GW190521 was precessing or eccentric. Nevertheless, since both of these properties support the dynamical formation hypothesis, our findings support the hypothesis that GW190521 formed dynamically. ","GW190521: orbital eccentricity and signatures of dynamical formation in
a binary black hole merger signal",1,"['Excited to release this! In my new paper () with @LaskyPaul, @EHThrane and @juan__cb, we find evidence that GW190521 may have come from an *eccentric* binary! This supports the hypothesis that it and other @LIGO @ego_virgo mergers formed *dynamically*!']",20,09,258
32,179,1285492962672222208,28378010,Paul A. Strøm,"Our new paper is out on arXiv today: ""Exocomets from a Solar System Perspective"" In this topical review paper we provide an overview of the observational properties of Solar System #comets and #exocomets. (1/10) The paper aims to highlight commonalities and to discuss differences which may aid the communication between the involved research communities and perhaps also avoid misconceptions. (2/10) A major difference between the observations of Solar System #comets and #exocomets is that the former are studied individually, whereas the latter generally cannot be resolved. Compared to Solar System comets, the information we have about exocomets is very limited. (3/10) Yet there are hints that they may not be too different in composition... 😱 (4/10) Observations of gas around main sequence stars, spectroscopic observations of ""polluted"" white dwarf atmospheres and spectroscopic observations of transiting exocomets suggest that exocomets may show compositional similarities with Solar System comets. (5/10) For instance, the CaII lines commonly seen in the spectra of beta Pic and polluted WDs have been detected in the extreme case of the large sungrazing comet C/1965 S1 Ikeya-Seki. (6/10) Solar system comets emit in high energy EUV and X-ray emission through the gradual neutralisation of highly charged solar wind ions. Similar processes are also thought to occur at exocomets encountering stellar winds (as seen by the variations of highly ionised species). (7/10) Observations of interstellar visitors such as 1I/`Oumuamua and 2I/Borisov allow us to learn about the physical and chemical properties of protoplanetary disks of distant stars, although their true systems of origin are unknown to us. (8/10) This raises the tantalising prospect that observations of interstellar comets may help bridge the fields of exocomet and Solar System comets. (9/10) If you have a an interest in debris disks, white dwarf atmospheres and/or (exo)comets, this paper will likely be of interest to you. It is also the first time I publish as a first author under my new surname: Strøm. Enjoy! (10/10)",https://arxiv.org/abs/2007.09155,"Exocomets are small bodies releasing gas and dust which orbit stars other than the Sun. Their existence was first inferred from the detection of variable absorption features in stellar spectra in the late 1980s using spectroscopy. More recently, they have been detected through photometric transits from space, and through far-IR/mm gas emission within debris disks. As (exo)comets are considered to contain the most pristine material accessible in stellar systems, they hold the potential to give us information about early stage formation and evolution conditions of extra Solar Systems. In the Solar System, comets carry the physical and chemical memory of the protoplanetary disk environment where they formed, providing relevant information on processes in the primordial solar nebula. The aim of this paper is to compare essential compositional properties between Solar System comets and exocomets. The paper aims to highlight commonalities and to discuss differences which may aid the communication between the involved research communities and perhaps also avoid misconceptions. Exocomets likely vary in their composition depending on their formation environment like Solar System comets do, and since exocomets are not resolved spatially, they pose a challenge when comparing them to high fidelity observations of Solar System comets. 
Observations of gas around main sequence stars, spectroscopic observations of ""polluted"" white dwarf atmospheres and spectroscopic observations of transiting exocomets suggest that exocomets may show compositional similarities with Solar System comets. The recent interstellar visitor 2I/Borisov showed gas, dust and nuclear properties similar to that of Solar System comets. This raises the tantalising prospect that observations of interstellar comets may help bridge the fields of exocomet and Solar System comets. ",Exocomets from a Solar System Perspective,10,"['Our new paper is out on arXiv today: ""Exocomets from a Solar System Perspective"" \n\nIn this topical review paper we provide an overview of the observational properties of Solar System #comets and #exocomets. (1/10) ', 'The paper aims to highlight commonalities and to discuss differences which may aid the communication between the involved research communities and perhaps also avoid misconceptions. (2/10)', 'A major difference between the observations of Solar System #comets and #exocomets is that the former are studied individually, whereas the latter generally cannot be resolved. Compared to Solar System comets, the information we have about exocomets is very limited. (3/10)', 'Yet there are hints that they may not be too different in composition... 😱 (4/10)', 'Observations of gas around main sequence stars, spectroscopic observations of ""polluted"" white dwarf atmospheres and spectroscopic observations of transiting exocomets suggest that exocomets may show compositional similarities with Solar System comets. (5/10)', 'For instance, the CaII lines commonly seen in the spectra of beta Pic and polluted WDs have been detected in the extreme case of the large sungrazing comet C/1965 S1 Ikeya-Seki. (6/10)', 'Solar system comets emit in high energy EUV and X-ray emission through the gradual neutralisation of highly charged solar wind ions. Similar processes are also thought to occur at exocomets encountering stellar winds (as seen by the variations of highly ionised species). (7/10)', 'Observations of interstellar visitors such as 1I/`Oumuamua and 2I/Borisov allow us to learn about the physical and chemical properties of protoplanetary disks of distant stars, although their true systems of origin are unknown to us. (8/10)', 'This raises the tantalising prospect that observations of interstellar comets may help bridge the fields of exocomet and Solar System comets. (9/10)', 'If you have a an interest in debris disks, white dwarf atmospheres and/or (exo)comets, this paper will likely be of interest to you. It is also the first time I publish as a first author under my new surname: Strøm. Enjoy! (10/10)']",20,07,2115
33,152,1334032097539833856,92966853,Adeel Razi,"New from our lab: ""A Generative Model to Synthesize EEG Data for Epileptic Seizure Prediction"". Paper pre-print is here: Lead by @KhansaRasheed (second paper from her MSc thesis) with @junaidq @levink2 Terence O'Brien @turnerinstitute @MonashNeurosci ",https://arxiv.org/abs/2012.00430,"Prediction of seizure before they occur is vital for bringing normalcy to the lives of patients. Researchers employed machine learning methods using hand-crafted features for seizure prediction. However, ML methods are too complicated to select the best ML model or best features. Deep Learning methods are beneficial in the sense of automatic feature extraction. One of the roadblocks for accurate seizure prediction is scarcity of epileptic seizure data. This paper addresses this problem by proposing a deep convolutional generative adversarial network to generate synthetic EEG samples. We use two methods to validate synthesized data namely, one-class SVM and a new proposal which we refer to as convolutional epileptic seizure predictor (CESP). Another objective of our study is to evaluate performance of well-known deep learning models (e.g., VGG16, VGG19, ResNet50, and Inceptionv3) by training models on augmented data using transfer learning with average time of 10 min between true prediction and seizure onset. Our results show that CESP model achieves sensitivity of 78.11% and 88.21%, and FPR of 0.27/h and 0.14/h for training on synthesized and testing on real Epilepsyecosystem and CHB-MIT datasets, respectively. Effective results of CESP trained on synthesized data shows that synthetic data acquired the correlation between features and labels very well. We also show that employment of idea of transfer learning and data augmentation in patient-specific manner provides highest accuracy with sensitivity of 90.03% and 0.03 FPR/h which was achieved using Inceptionv3, and that augmenting data with samples generated from DCGAN increased prediction results of our CESP model and Inceptionv3 by 4-5% as compared to state-of-the-art traditional augmentation techniques. Finally, we note that prediction results of CESP achieved by using augmented data are better than chance level for both datasets. ","A Generative Model to Synthesize EEG Data for Epileptic Seizure
Prediction",1,"['New from our lab: ""A Generative Model to Synthesize EEG Data for Epileptic Seizure Prediction"". \n\nPaper pre-print is here: \n\nLead by @KhansaRasheed (second paper from her MSc thesis) with @junaidq @levink2 Terence O\'Brien @turnerinstitute @MonashNeurosci ']",20,12,265
34,17,1155870529242333184,307826617,Kev Abazajian ⤷⏳🌎,"New paper today— Hidden Treasures: Sterile Neutrino #darkmatter can be cold or warm, or a fraction of the dark matter through several production mechanisms. 3.55 keV signal could be CDM, WDM or CWDM & σ8 problem could be due to ~80 eV steriles ",https://arxiv.org/abs/1907.11696,"We discuss numerous mechanisms for production of sterile neutrinos, which can account for all or a fraction of dark matter, and which can range from warm to effectively cold dark matter, depending on the cosmological scenario. We investigate production by Higgs boson decay, $(B-L)$ gauge boson production at high temperature, as well as production via resonant and nonresonant neutrino oscillations. We calculate the effects on structure formation in these models, some for the first time. If two populations of sterile neutrinos, one warm and one cold, were produced by different mechanisms, or if sterile neutrinos account for only a fraction of dark matter, while the remainder is some other cold dark matter particle, the resulting multi-component dark matter may alleviate some problems in galaxy formation. We examine the X-ray constraints and the candidate signal at 3.5 keV. Finally, we also show that the $\sigma_8$ problem can be a signature of fractional dark matter in the form of sterile neutrinos in several mechanisms. ","Hidden Treasures: sterile neutrinos as dark matter with miraculous
abundance, structure formation for different production mechanisms, and a
solution to the sigma-8 problem",1,"['New paper today— Hidden Treasures: Sterile Neutrino #darkmatter can be cold or warm, or a fraction of the dark matter through several production mechanisms. 3.55 keV signal could be CDM, WDM or CWDM & σ8 problem could be due to ~80 eV steriles ']",19,07,257
35,7,1511274289852567554,561899047,Aki Vehtari,"New paper ""Robust, Automated, and Accurate Black-box Variational Inference"" with great co-authors @manushivid, @Michael_riis, and @jhhhuggins RAABBVI has a new learning rate adaptation using convergence diagnostics, user-adjustable accuracy parameter, and it predicts decrease in accuracy given additional computation time. Code PR in Viabel Python package ",https://arxiv.org/abs/2203.15945,"Black-box variational inference (BBVI) now sees widespread use in machine learning and statistics as a fast yet flexible alternative to Markov chain Monte Carlo methods for approximate Bayesian inference. However, stochastic optimization methods for BBVI remain unreliable and require substantial expertise and hand-tuning to apply effectively. In this paper, we propose Robust, Automated, and Accurate BBVI (RAABBVI), a framework for reliable BBVI optimization. RAABBVI is based on rigorously justified automation techniques, includes just a small number of intuitive tuning parameters, and detects inaccurate estimates of the optimal variational approximation. RAABBVI adaptively decreases the learning rate by detecting convergence of the fixed--learning-rate iterates, then estimates the symmetrized Kullback--Leiber (KL) divergence between the current variational approximation and the optimal one. It also employs a novel optimization termination criterion that enables the user to balance desired accuracy against computational cost by comparing (i) the predicted relative decrease in the symmetrized KL divergence if a smaller learning were used and (ii) the predicted computation required to converge with the smaller learning rate. We validate the robustness and accuracy of RAABBVI through carefully designed simulation studies and on a diverse set of real-world model and data examples. ","Robust, Automated, and Accurate Black-box Variational Inference",2,"['New paper ""Robust, Automated, and Accurate Black-box Variational Inference"" with great co-authors @manushivid, @Michael_riis, and @jhhhuggins ', 'RAABBVI has a new learning rate adaptation using convergence diagnostics, user-adjustable accuracy parameter, and it predicts decrease in accuracy given additional computation time. Code PR in Viabel Python package https://t.co/hbQbHChBSk https://t.co/bCf6Botors']",22,03,384
36,113,1258034187107434496,48329145,Thomas Kober,"New paper announcement: ""Data Augmentation for Hypernymy Detection"" with brilliant collaborators Julie Weeds, Lorenzo Bertolini and David Weir @SussexUni arXiv: pdf: 1/10 We do data augmentation based on distributional composition and GANs and see some solid improvements for basic LR and FF models, with two different distributional vector space models, on supervised hypernymy detection. 2/10 We compare augmentation by composition and GANs to two ways of _extending_ a given training set with either hyponym-hypernym pairs from WordNet or extracted from a large corpus with Hearst Patterns. 3/10 We expected that extending a dataset with pairs from WordNet will likely be the upper limit of what we can expect from our data augmentation techniques, however we find that augmentation by composition and GANs frequently performs better than WordNet 😱😱😱 4/10 Distributional composition: Given t hyponym-hypernym pair dog-animal in the training data, we collect modifiers for animal and dog (e.g. small, hungry, etc) and add small dog-dog, hungry dog-dog, small dog-animal, and hungry dog-animal as additional pairs to the training set.5/10 We average the vector representations for ""hungry"" and ""dog"" to get a composed vector. Negative examples are created by pairing ""small dog"" with a neighbour of ""animal"", say ""vehicle"", such that the negative pair ""small dog-vehicle"" is added to the training data. 6/10 For the GAN based approach - GANDALF (GAN-based Data Augmentation for Lexical inFerence) - we simply aim to generate vectors that ""look like"" real nouns (i.e. that are close in terms of cosine similarity to actual word vectors). 7/10 Given the pair dog-animal, we simply pick the top n most similar GANDALF-ed vectors to dog and animal and add those as additional positive examples to the training data. Negative examples are created by mimicking observed negative examples, e.g. dog-cat. 8/10 We also introduce a new hand annotated dataset - HP4K - that does not rely on WordNet or other hand curated lexicons. Given that some datasets as well as models make use of WordNet, we think that this dataset will be a neat addition to existing test suites. 9/10 Relevant links: arXiv: pdf: github: The dataset is already on github, the code will follow soon. 10/10",https://arxiv.org/abs/2005.01854,"The automatic detection of hypernymy relationships represents a challenging problem in NLP. The successful application of state-of-the-art supervised approaches using distributed representations has generally been impeded by the limited availability of high quality training data. We have developed two novel data augmentation techniques which generate new training examples from existing ones. First, we combine the linguistic principles of hypernym transitivity and intersective modifier-noun composition to generate additional pairs of vectors, such as ""small dog - dog"" or ""small dog - animal"", for which a hypernymy relationship can be assumed. Second, we use generative adversarial networks (GANs) to generate pairs of vectors for which the hypernymy relation can also be assumed. We furthermore present two complementary strategies for extending an existing dataset by leveraging linguistic resources such as WordNet. Using an evaluation across 3 different datasets for hypernymy detection and 2 different vector spaces, we demonstrate that both of the proposed automatic data augmentation and dataset extension strategies substantially improve classifier performance. 
",Data Augmentation for Hypernymy Detection,10,"['New paper announcement: ""Data Augmentation for Hypernymy Detection"" with brilliant collaborators Julie Weeds, Lorenzo Bertolini and David Weir @SussexUni\n\narXiv: \npdf: \n\n1/10', 'We do data augmentation based on distributional composition and GANs and see some solid improvements for basic LR and FF models, with two different distributional vector space models, on supervised hypernymy detection. \n\n2/10', 'We compare augmentation by composition and GANs to two ways of _extending_ a given training set with either hyponym-hypernym pairs from WordNet or extracted from a large corpus with Hearst Patterns.\n\n3/10', 'We expected that extending a dataset with pairs from WordNet will likely be the upper limit of what we can expect from our data augmentation techniques, however we find that augmentation by composition and GANs frequently performs better than WordNet 😱😱😱\n\n4/10', 'Distributional composition: Given t hyponym-hypernym pair dog-animal in the training data, we collect modifiers for animal and dog (e.g. small, hungry, etc) and add small dog-dog, hungry dog-dog, small dog-animal, and hungry dog-animal as additional pairs to the training set.5/10', 'We average the vector representations for ""hungry"" and ""dog"" to get a composed vector. Negative examples are created by pairing ""small dog"" with a neighbour of ""animal"", say ""vehicle"", such that the negative pair ""small dog-vehicle"" is added to the training data.\n\n6/10', 'For the GAN based approach - GANDALF (GAN-based Data Augmentation for Lexical inFerence) - we simply aim to generate vectors that ""look like"" real nouns (i.e. that are close in terms of cosine similarity to actual word vectors).\n\n7/10', 'Given the pair dog-animal, we simply pick the top n most similar GANDALF-ed vectors to dog and animal and add those as additional positive examples to the training data. Negative examples are created by mimicking observed negative examples, e.g. dog-cat.\n\n8/10', 'We also introduce a new hand annotated dataset - HP4K - that does not rely on WordNet or other hand curated lexicons. Given that some datasets as well as models make use of WordNet, we think that this dataset will be a neat addition to existing test suites.\n\n9/10', 'Relevant links:\narXiv: https://t.co/15L2JgZSfA\npdf: https://t.co/ZXimmHwtrb\ngithub: https://t.co/anC6vR9mlr\n\nThe dataset is already on github, the code will follow soon.\n\n10/10']",20,05,2304
37,13,1199394225676341249,1127518127015772160,J. Enrique Vázquez-Lozano,New paper on arXiv: Towards Chiral Sensing and Spectroscopy Enabled by All-Dielectric Integrated Photonic Waveguides. Here we show the feasibility to perform chiroptical applications in all-dielectric integrated photonic waveguides. Found out more here: @LWLDN @AMartinezUPV @upvntc Thanks Lei !! 😀,https://arxiv.org/abs/1911.11106,"Chiral spectroscopy is a powerful technique that enables to identify the chirality of matter through optical means. So far, experiments to check the chirality of matter or nanostructures have been carried out using free-space propagating light beams. However, for the sake of miniaturization, it would be desirable to perform chiral spectroscopy in photonic integrated platforms, with the additional benefit of massive parallel detection, low-cost production, repeatability, and portability of such a chiroptical device. Here we show that all-dielectric integrated photonic waveguides can support chiral modes under proper combination of the fundamental eigenmodes. In particular, we investigate two mainstream configurations: a dielectric wire with square cross-section and a slotted waveguide. We analyze numerically three different scenarios in which such waveguides could be used for chiral detection: all-dielectric waveguides as near-field probes, evanescent-induced chiral fields, and chiroptical interaction in void slots. In all the cases we consider a metallic nanohelix as a chiral probe, though all the approaches can be extended to other kinds of chiral nanostructures as well as matter. Our results establish that chiral applications such as sensing and spectroscopy could be realized in standard integrated optics, in particular, with silicon-based technology. ","Towards Chiral Sensing and Spectroscopy Enabled by All-Dielectric
Integrated Photonic Waveguides",2,"['New paper on arXiv: Towards Chiral Sensing and Spectroscopy Enabled by All-Dielectric Integrated Photonic Waveguides.\n\nHere we show the feasibility to perform chiroptical applications in all-dielectric integrated photonic waveguides.\n\nFound out more here: ', '@LWLDN @AMartinezUPV @upvntc Thanks Lei !! 😀']",19,11,312
38,74,1516688376480649218,737271507513188352,Boris Goncharov,A new paper with Alex and Jan where we discuss problems in ground-based gravitational-wave astronomy in the next decade and solutions provided by the null stream of Einstein Telescope: Image: illustration of the null stream formed by three interferometers. ,https://arxiv.org/abs/2204.08533,"Among third-generation ground-based gravitational-wave detectors proposed for the next decade, Einstein Telescope provides a unique kind of null stream $\unicode{x2014}$ the signal-free linear combination of data $\unicode{x2014}$ that enables otherwise inaccessible tests of the noise models. We project and showcase challenges in modeling the noise in the 2030-s and how it will affect the performance of third-generation detectors. We find that the null stream of Einstein Telescope is capable of entirely eliminating transient detector glitches that are known to limit current gravitational-wave searches. The techniques we discuss are computationally efficient and do not require a-priori knowledge about glitch models. Furthermore, we show how the null stream can be used to provide an unbiased estimation of the noise power spectrum necessary for online and offline data analyses even with multiple loud signals in band. We overview other approaches to utilizing the null stream. Finally, we comment on the limitations and future challenges of null stream analyses for Einstein Telescope and arbitrary detector networks. ",Utilizing the null stream of Einstein Telescope,1,['A new paper with Alex and Jan where we discuss problems in ground-based gravitational-wave astronomy in the next decade and solutions provided by the null stream of Einstein Telescope: \nImage: illustration of the null stream formed by three interferometers. '],22,04,270
39,34,1232378173993578496,40285266,Stanislav Fort at EAGx Prague ¬(🔥📎🔥📎),"Exciting times! Our new paper /The Break-Even Point on Optimization Trajectories of Deep Neural Networks/ () got accepted as a *spotlight* at @iclr_conf. A break-even point early in training determines properties of the entire optimization trajectory. @iclr_conf Many thanks to the amazing Maciek, Devansh, Jacek, @kchonyc, @kjgeras, and especially @kudkudakpl for leading the project!",http://arxiv.org/abs/2002.09572,"The early phase of training of deep neural networks is critical for their final performance. In this work, we study how the hyperparameters of stochastic gradient descent (SGD) used in the early phase of training affect the rest of the optimization trajectory. We argue for the existence of the ""break-even"" point on this trajectory, beyond which the curvature of the loss surface and noise in the gradient are implicitly regularized by SGD. In particular, we demonstrate on multiple classification tasks that using a large learning rate in the initial phase of training reduces the variance of the gradient, and improves the conditioning of the covariance of gradients. These effects are beneficial from the optimization perspective and become visible after the break-even point. Complementing prior work, we also show that using a low learning rate results in bad conditioning of the loss surface even for a neural network with batch normalization layers. In short, our work shows that key properties of the loss surface are strongly influenced by SGD in the early phase of training. We argue that studying the impact of the identified effects on generalization is a promising future direction. ","The Break-Even Point on Optimization Trajectories of Deep Neural
Networks",2,"['Exciting times! Our new paper /The Break-Even Point on Optimization Trajectories of Deep Neural Networks/ () got accepted as a *spotlight* at @iclr_conf. A break-even point early in training determines properties of the entire optimization trajectory. ', '@iclr_conf Many thanks to the amazing Maciek, Devansh, Jacek, @kchonyc, @kjgeras, and especially @kudkudakpl for leading the project!']",20,02,405
40,59,944296114882088960,2911287964,Thomas Kupfer,In our latest paper led by @Janvanroestel we show the power of combining machine learning and synoptic surveys to find interesting binaries. We more than doubled the number of known ELCVn binaries which represent an interesting phase of binary evolution ,https://arxiv.org/abs/1712.06507,"We report the discovery and analysis of 36 new eclipsing EL CVn-type binaries, consisting of a core helium-composition pre-white dwarf and an early-type main-sequence companion, more than doubling the known population of these systems. We have used supervised machine learning methods to search 0.8 million lightcurves from the Palomar Transient Factory, combined with SDSS, Pan-STARRS and 2MASS colours. The new systems range in orbital periods from 0.46-3.8 d and in apparent brightness from ~14-16 mag in the PTF $R$ or $g^{\prime}$ filters. For twelve of the systems, we obtained radial velocity curves with the Intermediate Dispersion Spectrograph at the Isaac Newton Telescope. We modelled the lightcurves, radial velocity curves and spectral energy distributions to determine the system parameters. The radii (0.3-0.7 $\mathrm{R_{\odot}}$) and effective temperatures (8000-17000 K) of the pre-He-WDs are consistent with stellar evolution models, but the masses (0.12-0.28 $\mathrm{M_{\odot}}$) show more variance than models predicted. This study shows that using machine learning techniques on large synoptic survey data is a powerful way to discover substantial samples of binary systems in short-lived evolutionary stages. ","Discovery of 36 eclipsing EL CVn binaries found by the Palomar Transient
Factory",1,['In our latest paper led by @Janvanroestel we show the power of combining machine learning and synoptic surveys to find interesting binaries. We more than doubled the number of known ELCVn binaries which represent an interesting phase of binary evolution '],17,12,257
41,89,1113972053197983745,30989098,Karin Sandstrom,"Hey interstellar dust fans! Check out the new paper from UCSD postdoc Jérémy Chastenet on polycyclic aromatic hydrocarbons in the Magellanic Clouds! The SMC has a much smaller fraction of its dust in the form of PAHs compared to the LMC. HII regions show up as holes in the PAH fraction map. Even ionized gas in the diffuse ISM seems to have an effect on the PAH population though. And when you separate the Magellanic Clouds up into their ISM phases, you find that the PAH fraction is largest in molecular/diffuse neutral gas and lowest in HII regions. @aprilfollies The lower PAH fraction seems to be related to whatever sources produce ionizing radiation to create the HII regions and diffuse ionized gas. @dr_paul_woods I don’t think we know for the MW very well, the 4.6% is for the diffuse neutral gas, not the global average. The comparison to full galaxy qpah values from SINGS suggests the LMC global average is pretty normal, but there are galaxies with higher PAH fractions.",https://arxiv.org/abs/1904.02705,"We present maps of the dust properties in the Small and Large Magellanic Clouds (SMC, LMC) from fitting Spitzer and Herschel observations with the \citet{DL07} dust model. We derive the abundance of the small carbonaceous grain (or polycyclic aromatic hydrocarbon; PAH) component. The global PAH fraction (q_pah, the fraction of the dust mass in the form of PAHs) is smaller in the SMC (1.0$^{+0.3}_{-0.3}$%) than in the LMC (3.3$^{+1.4}_{-1.3}$%). We measure the PAH fraction in different gas phases (H II regions, ionized gas outside of H II regions, molecular gas, and diffuse neutral gas). H II regions appear as distinctive holes in the spatial distribution of the PAH fraction. In both galaxies, the PAH fraction in the diffuse neutral medium is higher than in the ionized gas, but similar to the molecular gas. Even at equal radiation field intensity, the PAH fraction is lower in the ionized gas than in the diffuse neutral gas. We investigate the PAH life-cycle as a function of metallicity between the two galaxies. The PAH fraction in the diffuse neutral medium of the LMC is similar to that of the Milky Way ($\sim4.6$%), while it is significantly lower in the SMC. Plausible explanations for the higher PAH fraction in the diffuse neutral medium of the LMC compared to the SMC include: a more effective PAH production by fragmentation of large grains at higher metallicity, and/or the growth of PAHs in molecular gas. ","The Polycyclic Aromatic Hydrocarbon Mass Fraction on a 10 pc scale in
the Magellanic Clouds",6,"['Hey interstellar dust fans! Check out the new paper from UCSD postdoc Jérémy Chastenet on polycyclic aromatic hydrocarbons in the Magellanic Clouds! ', 'The SMC has a much smaller fraction of its dust in the form of PAHs compared to the LMC. https://t.co/UGL8QXdAUu', 'HII regions show up as holes in the PAH fraction map. Even ionized gas in the diffuse ISM seems to have an effect on the PAH population though.', 'And when you separate the Magellanic Clouds up into their ISM phases, you find that the PAH fraction is largest in molecular/diffuse neutral gas and lowest in HII regions.', '@aprilfollies The lower PAH fraction seems to be related to whatever sources produce ionizing radiation to create the HII regions and diffuse ionized gas.', '@dr_paul_woods I don’t think we know for the MW very well, the 4.6% is for the diffuse neutral gas, not the global average. The comparison to full galaxy qpah values from SINGS suggests the LMC global average is pretty normal, but there are galaxies with higher PAH fractions.']",19,04,1003
42,121,1006096760744370176,20309837,Michael Veale,"‘Debiasing’/FATML methods assume ML modellers hold sensitive data (eg ethnicity, sexuality). Privacy problem. Our new #ICML2018 paper uses secure multiparty computation to train ‘fair’ models without seeing these, and allows regulators to verify decisions. (it's also important to remember throughout this that 'debiasing' approaches only are appropriate in very narrow situations, and are no silver bullet for socio-technical harms and concerns that may involve machine learning) this might be of interest to @zacharylipton @realjoshkroll @tforcworc @realhamed @Miles_Brundage @mort___ @ruggieris",https://arxiv.org/abs/1806.03281,"Recent work has explored how to train machine learning models which do not discriminate against any subgroup of the population as determined by sensitive attributes such as gender or race. To avoid disparate treatment, sensitive attributes should not be considered. On the other hand, in order to avoid disparate impact, sensitive attributes must be examined, e.g., in order to learn a fair model, or to check if a given model is fair. We introduce methods from secure multi-party computation which allow us to avoid both. By encrypting sensitive attributes, we show how an outcome-based fair model may be learned, checked, or have its outputs verified and held to account, without users revealing their sensitive attributes. ",Blind Justice: Fairness with Encrypted Sensitive Attributes,3,"['‘Debiasing’/FATML methods assume ML modellers hold sensitive data (eg ethnicity, sexuality). Privacy problem. Our new #ICML2018 paper uses secure multiparty computation to train ‘fair’ models without seeing these, and allows regulators to verify decisions. ', ""(it's also important to remember throughout this that 'debiasing' approaches only are appropriate in very narrow situations, and are no silver bullet for socio-technical harms and concerns that may involve machine learning)"", 'this might be of interest to @zacharylipton @realjoshkroll @tforcworc @realhamed @Miles_Brundage @mort___ @ruggieris']",18,06,611
43,13,1488193767685300225,755924666,Brant Robertson,"New paper on the arXiv by @ucsc Comp Astro Research Group PhD student Ryan Hausen, ""FitsMap: A Simple, Lightweight Tool For Displaying Interactive Astronomical Image and Catalog Data""! @BenneHolwerda @ucsc DS9 is great for some things and not others :) I use it all the time. FitsMap is great for sharing images and catalogs with collaborators from a web server :)",https://arxiv.org/abs/2201.12308,"The visual inspection of image and catalog data continues to be a valuable aspect of astronomical data analysis. As the scale of astronomical image and catalog data continues to grow, visualizing the data becomes increasingly difficult. In this work, we introduce FitsMap, a simple, lightweight tool for visualizing astronomical image and catalog data. FitsMap only requires a simple web server and can scale to over gigapixel images with tens of millions of sources. Further, the web-based visualizations can be viewed performantly on mobile devices. FitsMap is implemented in Python and is open source (this https URL). ","FitsMap: A Simple, Lightweight Tool For Displaying Interactive
Astronomical Image and Catalog Data",2,"['New paper on the arXiv by @ucsc Comp Astro Research Group PhD student Ryan Hausen, ""FitsMap: A Simple, Lightweight Tool For Displaying Interactive Astronomical Image and Catalog Data""! ', '@BenneHolwerda @ucsc DS9 is great for some things and not others :) I use it all the time. FitsMap is great for sharing images and catalogs with collaborators from a web server :)']",22,01,371
44,5,1169690389579800581,1071548796,Cătălina Cangea,"Really excited to finally share our new paper! w/ @ebelilov, Pietro Liò, @AaronCourville VideoNavQA: Bridging the Gap between Visual and Embodied Question Answering () + A benchmark in an alternate EQA-like setting + Generalized VQA-style models We tackle the Embodied QA task from a different perspective, where navigation paths are provided and the focus shifts towards answering much more complex and varied questions about the environment. Our motivation: initial EQA studies use IL+RL, but results show that the task might be too challenging for these methods. We propose a novel way of evaluating EQA feasibility, building the VideoNavQA dataset containing pairs of questions and videos generated in House3D. We generalize widely adopted VQA-style models including FiLM & MAC to a temporal and rich visual setting. Dataset and code available, full results in the paper! :-) Work partially carried out during my research internship at @MILAMontreal!",http://arxiv.org/abs/1908.04950,"Embodied Question Answering (EQA) is a recently proposed task, where an agent is placed in a rich 3D environment and must act based solely on its egocentric input to answer a given question. The desired outcome is that the agent learns to combine capabilities such as scene understanding, navigation and language understanding in order to perform complex reasoning in the visual world. However, initial advancements combining standard vision and language methods with imitation and reinforcement learning algorithms have shown EQA might be too complex and challenging for these techniques. In order to investigate the feasibility of EQA-type tasks, we build the VideoNavQA dataset that contains pairs of questions and videos generated in the House3D environment. The goal of this dataset is to assess question-answering performance from nearly-ideal navigation paths, while considering a much more complete variety of questions than current instantiations of the EQA task. We investigate several models, adapted from popular VQA methods, on this new benchmark. This establishes an initial understanding of how well VQA-style methods can perform within this novel EQA paradigm. ","VideoNavQA: Bridging the Gap between Visual and Embodied Question
Answering",5,"['Really excited to finally share our new paper! w/ @ebelilov, Pietro Liò, @AaronCourville\n\nVideoNavQA: Bridging the Gap between Visual and Embodied Question Answering ()\n+ A benchmark in an alternate EQA-like setting\n+ Generalized VQA-style models', 'We tackle the Embodied QA task from a different perspective, where navigation paths are provided and the focus shifts towards answering much more complex and varied questions about the environment. https://t.co/C4svLQUTJC', 'Our motivation: initial EQA studies use IL+RL, but results show that the task might be too challenging for these methods.\n\nWe propose a novel way of evaluating EQA feasibility, building the VideoNavQA dataset containing pairs of questions and videos generated in House3D. https://t.co/IXZNOPEYjv', 'We generalize widely adopted VQA-style models including FiLM & MAC to a temporal and rich visual setting.\n\nDataset and code available, full results in the paper! :-) https://t.co/CWqD3MELO3 https://t.co/96X0Hi8CgG', 'Work partially carried out during my research internship at @MILAMontreal!']",19,08,988
45,61,1116612314159910912,263265637,Dennis Prangle,"New paper! With Sophie Harbisher and @csgillespie . We do high dimensional experimental design using SGD + autodiff. As a quick tractable utility we use the trace of Fisher info, which we prove has a decision theoretic derivation from the Hyvarinen score. ",https://arxiv.org/abs/1904.05703,"Most computational approaches to Bayesian experimental design require making posterior calculations repeatedly for a large number of potential designs and/or simulated datasets. This can be expensive and prohibit scaling up these methods to models with many parameters, or designs with many unknowns to select. We introduce an efficient alternative approach without posterior calculations, based on optimising the expected trace of the Fisher information, as discussed by Walker (2016). We illustrate drawbacks of this approach, including lack of invariance to reparameterisation and encouraging designs in which one parameter combination is inferred accurately but not any others. We show these can be avoided by using an adversarial approach: the experimenter must select their design while a critic attempts to select the least favourable parameterisation. We present theoretical properties of this approach and show it can be used with gradient based optimisation methods to find designs efficiently in practice. ","Bayesian experimental design without posterior calculations: an
adversarial approach",1,"['New paper! With Sophie Harbisher and @csgillespie . We do high dimensional experimental design using SGD + autodiff. As a quick tractable utility we use the trace of Fisher info, which we prove has a decision theoretic derivation from the Hyvarinen score. ']",19,04,268
46,92,1425366685435973636,1339508444764786694,Gregor Kasieczka,"New paper today: ""Symmetries, Safety, and Self-Supervision"" (arXiv: ). We - driven by excellent Heidelberg people including @LorenzVogel - look at how known physical symmetries can be used to learn better representations. 1/3 We use contrastive learning a la #SimCLR and include translation, rotations, and soft+collinear emissions. This figure shows how well rotations are learned. Left is without including rotations, right is with. s(z,z')=1 <-> identical representations 2/3 The goal is to have a better input for #unsupervised learning (coming next..) but we can already test how well the learned representation does as input to a linear classifier. Spoiler: Pretty well (curve is for top tagging w/ a linear network) 3/3 ",https://arxiv.org/abs/2108.04253,"Collider searches face the challenge of defining a representation of high-dimensional data such that physical symmetries are manifest, the discriminating features are retained, and the choice of representation is new-physics agnostic. We introduce JetCLR to solve the mapping from low-level data to optimized observables though self-supervised contrastive learning. As an example, we construct a data representation for top and QCD jets using a permutation-invariant transformer-encoder network and visualize its symmetry properties. We compare the JetCLR representation with alternative representations using linear classifier tests and find it to work quite well. ","Symmetries, Safety, and Self-Supervision",3,"['New paper today: ""Symmetries, Safety, and Self-Supervision"" (arXiv: ). We - driven by excellent Heidelberg people including @LorenzVogel - look at how known physical symmetries can be used to learn better representations. 1/3', ""We use contrastive learning a la #SimCLR and include translation, rotations, and soft+collinear emissions. This figure shows how well rotations are learned. Left is without including rotations, right is with. s(z,z')=1 <-> identical representations 2/3 https://t.co/zFaYjFEiEk"", 'The goal is to have a better input for #unsupervised learning (coming next..) but we can already test how well the learned representation does as input to a linear classifier. Spoiler: Pretty well (curve is for top tagging w/ a linear network) 3/3 https://t.co/YyzlA9srTG']",21,08,752
47,198,1506284963464990733,167708661,Peter Jedlicka,"Which ion channel parameters from a huge theoretically possible parameter space are present in real neurons? In our preprint, we propose that Pareto optimality can serve as a guiding principle for addressing this issue (known as ion channel degeneracy). Pareto optimality could help find models with optimal ion channel configurations performing best for a trade-off between energy efficiency and functional effectiveness. This could reduce the high-dimensional parameter space to geometrically simple low-dimensional manifolds. Pareto optimality might provide insights into neuronal ion channel correlations (e.g., in Patch-seq data). Multi-objective Pareto optimality has been applied to biology by Uri Alon . See also a great review by @DiesPallas et al. ",https://arxiv.org/abs/2203.06391,"Nerve cells encounter unavoidable evolutionary trade-offs between multiple tasks. They must consume as little energy as possible (be energy-efficient or economical) but at the same time fulfil their functions (be functionally effective). Neurons displaying best performance for such multi-task trade-offs are said to be Pareto optimal. However, it is not understood how ion channel parameters contribute to the Pareto optimal performance of neurons. Ion channel degeneracy implies that multiple combinations of ion channel parameters can lead to functionally similar neuronal behavior. Therefore, to simulate functional behavior, instead of a single model, neuroscientists often use populations of valid models with distinct ion conductance configurations. This approach is called population (also database or ensemble) modeling. It remains unclear, which ion channel parameters in a vast population of functional models are more likely to be found in the brain. Here we propose that Pareto optimality can serve as a guiding principle for addressing this issue. The Pareto optimum concept can help identify the subpopulations of conductance-based models with ion channel configurations that perform best for the trade-off between economy and functional effectiveness. In this way, the high-dimensional parameter space of neuronal models might be reduced to geometrically simple low-dimensional manifolds. Therefore, Pareto optimality is a promising framework for improving population modeling of neurons and their circuits. We also discuss how Pareto inference might help deduce neuronal functions from high-dimensional Patch-seq data. Furthermore, we hypothesize that Pareto optimality might contribute to our understanding of observed ion channel correlations in neurons. ","Pareto optimality, economy-effectiveness trade-offs and ion channel
degeneracy: Improving population models of neurons",3,"['Which ion channel parameters from a huge theoretically possible parameter space are present in real neurons? In our preprint, we propose that Pareto optimality can serve as a guiding principle for addressing this issue (known as ion channel degeneracy). ', 'Pareto optimality could help find models with optimal ion channel configurations performing best for a trade-off between energy efficiency and functional effectiveness. This could reduce the high-dimensional parameter space to geometrically simple low-dimensional manifolds.', 'Pareto optimality might provide insights into neuronal ion channel correlations (e.g., in Patch-seq data). Multi-objective Pareto optimality has been applied to biology by Uri Alon https://t.co/5iyPRK41pr. See also a great review by @DiesPallas et al. https://t.co/2aWYVYofci']",22,03,777
48,25,1265458238469713920,549460404,吉田 紅 (Beni Yoshida),"A new paper. This is a new paper. It is about a paradox related to computational complexity in AdS/CFT. There is a question: quantum computers run on quantum effects, but could we speed up computation even further by also using gravitational effects? This is known as the quantum Church-Turing thesis. Applied to the AdS/CFT correspondence, it becomes the claim that everything happening on the gravity side should be (efficiently) simulable on the CFT side. So far there has been no example where gravitational effects speed up computation, but a proposal was recently made that perhaps it might be possible. On the gravity side the length of a wormhole seems easy to measure, but on the CFT side its counterpart (complexity) is not easy to measure. So the computational task of estimating complexity would be very easy on the gravity side but very hard on the quantum side. That would mean the quantum Church-Turing thesis is wrong, which is troubling. That is the paradox (or puzzle?). As for the resolution: the original mistake is assuming that the correspondence rules between the gravity side and the CFT side (the so-called dictionary) stay fixed. When an observer actually tries to enter the black hole interior, a strong backreaction occurs. Each time this happens, the description of the interior degrees of freedom changes dynamically. So the description of the Hawking radiation partners inside also changes depending on the observer. Incidentally, this is a statement from quantum information theory that can be rigorously proven by assuming that the black hole scrambles. To measure the wormhole length efficiently, several observers would need to enter the black hole interior, but since each one causes a backreaction they cannot meet inside. The interior structure itself has already changed. So this is not a counterexample to the quantum Church-Turing thesis. Incidentally, the same argument also lets us avoid the firewall problem. The theorem relating OTOCs and disentangling, and the way of describing the interior degrees of freedom. These two are simple but so powerful that I feel like I keep writing papers using nothing else... If there are any other interesting paradoxes, I would be happy to hear about them... Incidentally, this paradox was first raised by Bouland-Fefferman-Vazirani, and Susskind recently proposed an extended version. And thanks to Morimae-san, who first told me about the BFV paper.",https://arxiv.org/abs/2005.12491,"Recently a certain conceptual puzzle in the AdS/CFT correspondence, concerning the growth of quantum circuit complexity and the wormhole volume, has been identified by Bouland-Fefferman-Vazirani and Susskind. In this note, we propose a resolution of the puzzle and save the quantum Extended Church-Turing thesis by arguing that there is no computational shortcut in measuring the volume due to gravitational backreaction from bulk observers. A certain strengthening of the firewall puzzle from the computational complexity perspective, as well as its potential resolution, is also presented. ",Remarks on Black Hole Complexity Puzzle,14,"['A new paper. \nThis is a new paper.\n', 'It is about a paradox related to computational complexity in AdS/CFT.\nThere is a question: quantum computers run on quantum effects, but could we speed up computation even further by also using gravitational effects?\nThis is known as the quantum Church-Turing thesis.', 'Applied to the AdS/CFT correspondence, it becomes the claim that everything happening on the gravity side should be (efficiently) simulable on the CFT side.', 'So far there has been no example where gravitational effects speed up computation, but a proposal was recently made that perhaps it might be possible.', 'On the gravity side the length of a wormhole seems easy to measure, but on the CFT side its counterpart (complexity) is not easy to measure. So the computational task of estimating complexity would be very easy on the gravity side but very hard on the quantum side.', 'That would mean the quantum Church-Turing thesis is wrong, which is troubling. That is the paradox (or puzzle?).', 'As for the resolution: the original mistake is assuming that the correspondence rules between the gravity side and the CFT side (the so-called dictionary) stay fixed.', 'When an observer actually tries to enter the black hole interior, a strong backreaction occurs. Each time this happens, the description of the interior degrees of freedom changes dynamically. So the description of the Hawking radiation partners inside also changes depending on the observer.', 'Incidentally, this is a statement from quantum information theory that can be rigorously proven by assuming that the black hole scrambles.', 'To measure the wormhole length efficiently, several observers would need to enter the black hole interior, but since each one causes a backreaction they cannot meet inside. The interior structure itself has already changed.', 'So this is not a counterexample to the quantum Church-Turing thesis.', 'Incidentally, the same argument also lets us avoid the firewall problem.', 'The theorem relating OTOCs and disentangling, and the way of describing the interior degrees of freedom.\nThese two are simple but so powerful that I feel like I keep writing papers using nothing else...\nIf there are any other interesting paradoxes, I would be happy to hear about them...', 'Incidentally, this paradox was first raised by Bouland-Fefferman-Vazirani, and Susskind recently proposed an extended version.\nAnd thanks to Morimae-san, who first told me about the BFV paper.']",20,05,1141
49,53,916109761321431040,790887180,David Sand,"My student Paul has written a fun algorithm to find diffuse dwarf galaxies, and we tried it out around M101: Hey found 38 new dwarf candidates, and we are getting HST follow-up of many of them. We put lots of simulated dwarfs into our data, so we know our detection efficiency very well. @AstroBailin Maybe? These objects still have a half-light radius of ~5 arcsec, but if your pixels are that big...",https://arxiv.org/abs/1710.01728,"We have conducted a search of a 9 deg$^{2}$ region of the CFHTLS around the Milky Way analog M101 (D$\sim$7 Mpc), in order to look for previously unknown low surface brightness galaxies. This search has uncovered 38 new low surface brightness dwarf candidates, and confirmed 11 previously reported galaxies, all with central surface brightness $\mu$(g,0)$>$23mag/arcsec$^{2}$, potentially extending the satellite luminosity function for the M101 group by $\sim$1.2 magnitudes. The search was conducted using an algorithm that nearly automates the detection of diffuse dwarf galaxies. The candidates small size and low surface brightness means that the faintest of these objects would likely be missed by traditional visual or computer detection techniques. The dwarf galaxy candidates span a range of $-$7.1 $\geq$ M$_g$ $\geq$ $-$10.2 and half light radii of 118-540 pc at the distance of M101, and they are well fit by simple S\'{e}rsic surface brightness profiles. These properties are consistent with dwarfs in the Local Group, and to match the Local Group luminosity function $\sim$10-20 of these candidates should be satellites of M101. Association with a massive host is supported by the lack of detected star formation and the over density of candidates around M101 compared to the field. The spatial distribution of the dwarf candidates is highly asymmetric, and concentrated to the northeast of M101 and therefore distance measurements will be required to determine if these are genuine members of the M101 group. ",Discovery of diffuse dwarf galaxy candidates around M101,4,"['My student Paul has written a fun algorithm to find diffuse dwarf galaxies, and we tried it out around M101:\n', 'Hey found 38 new dwarf candidates, and we are getting HST follow-up of many of them.', 'We put lots of simulated dwarfs into our data, so we know our detection efficiency very well. https://t.co/bUDWo7wh1f', '@AstroBailin Maybe? These objects still have a half-light radius of ~5 arcsec, but if your pixels are that big...']",17,10,415
50,75,1320657983928487936,786855300322172928,Alkistis Pourtsidou,"Paper alert! A new @EC_Euclid paper led by Matteo Martinelli from @ift_uam_csic focuses on the ""precision vs accuracy"" problem for Euclid's weak lensing probe []. The paper studies the effect of matter nonlinearities as well as baryonic effects on the recovery of the ""true"" cosmological parameters. Worth a read! @Chrisclarkson69 LOL",https://arxiv.org/abs/2010.12382,"Upcoming surveys will map the growth of large-scale structure with unprecented precision, improving our understanding of the dark sector of the Universe. Unfortunately, much of the cosmological information is encoded by the small scales, where the clustering of dark matter and the effects of astrophysical feedback processes are not fully understood. This can bias the estimates of cosmological parameters, which we study here for a joint analysis of mock Euclid cosmic shear and Planck cosmic microwave background data. We use different implementations for the modelling of the signal on small scales and find that they result in significantly different predictions. Moreover, the different nonlinear corrections lead to biased parameter estimates, especially when the analysis is extended into the highly nonlinear regime, with both the Hubble constant, $H_0$, and the clustering amplitude, $\sigma_8$, affected the most. Improvements in the modelling of nonlinear scales will therefore be needed if we are to resolve the current tension with more and better data. For a given prescription for the nonlinear power spectrum, using different corrections for baryon physics does not significantly impact the precision of Euclid, but neglecting these correction does lead to large biases in the cosmological parameters. In order to extract precise and unbiased constraints on cosmological parameters from Euclid cosmic shear data, it is therefore essential to improve the accuracy of the recipes that account for nonlinear structure formation, as well as the modelling of the impact of astrophysical processes that redistribute the baryons. ","Euclid: impact of nonlinear prescriptions on cosmological parameter
estimation from weak lensing cosmic shear",3,"['Paper alert! A new @EC_Euclid paper led by Matteo Martinelli from @ift_uam_csic focuses on the ""precision vs accuracy"" problem for Euclid\'s weak lensing probe [].', 'The paper studies the effect of matter nonlinearities as well as baryonic effects on the recovery of the ""true"" cosmological parameters. Worth a read! https://t.co/ONEvL9UIde', '@Chrisclarkson69 LOL']",20,10,347
51,134,1212528853207408641,4106835983,Daniel Moghimi,"We studied the potential side-channel threat of integrating FPGAs with the CPU memory subsystem in the ongoing collaboration with @IntelSecurity to make these platforms more secure. ""JackHammer: Efficient Rowhammer on Heterogeneous FPGA-CPU Platforms"" @IntelSecurity An FPGA-originated rowhammer can hammer faster and flip more bits compared to the CPU rowhammer on the same platform. @IntelSecurity @ThoreTiemann @defparam @tomcrypt @berksunar",https://arxiv.org/abs/1912.11523,"After years of development, FPGAs are finally making an appearance on multi-tenant cloud servers. These heterogeneous FPGA-CPU architectures break common assumptions about isolation and security boundaries. Since the FPGA and CPU architectures share hardware resources, a new class of vulnerabilities requires us to reassess the security and dependability of these platforms. In this work, we analyze the memory and cache subsystem and study Rowhammer and cache attacks enabled on two proposed heterogeneous FPGA-CPU platforms by Intel: the Arria 10 GX with an integrated FPGA-CPU platform, and the Arria 10 GX PAC expansion card which connects the FPGA to the CPU via the PCIe interface. We show that while Intel PACs currently are immune to cache attacks from FPGA to CPU, the integrated platform is indeed vulnerable to Prime and Probe style attacks from the FPGA to the CPU's last level cache. Further, we demonstrate JackHammer, a novel and efficient Rowhammer from the FPGA to the host's main memory. Our results indicate that a malicious FPGA can perform twice as fast as a typical Rowhammer attack from the CPU on the same system and causes around four times as many bit flips as the CPU attack. We demonstrate the efficacy of JackHammer from the FPGA through a realistic fault attack on the WolfSSL RSA signing implementation that reliably causes a fault after an average of fifty-eight RSA signatures, 25% faster than a CPU rowhammer attack. In some scenarios our JackHammer attack produces faulty signatures more than three times more often and almost three times faster than a conventional CPU rowhammer attack. ",JackHammer: Efficient Rowhammer on Heterogeneous FPGA-CPU Platforms,3,"['We studied the potential side-channel threat of integrating FPGAs with the CPU memory subsystem in the ongoing collaboration with @IntelSecurity to make these platforms more secure. \n\n""JackHammer: Efficient Rowhammer on Heterogeneous FPGA-CPU Platforms"" ', '@IntelSecurity An FPGA-originated rowhammer can hammer faster and flip more bits compared to the CPU rowhammer on the same platform.', '@IntelSecurity @ThoreTiemann @defparam @tomcrypt @berksunar']",19,12,452
52,20,1333498158853222400,846245360,Alex Tamkin,"What happens when you Fourier Transform a BERT neuron? Signal processing can reveal (+manipulate!) multiscale linguistic structure in BERT neurons! New #NeurIPS2020 paper w/ @jurafsky and Noah Goodman Paper: @stanfordnlp & @StanfordAILab 👇 1/ Linguistic phenomena occur at different scales, including – within a word (e.g. morphology) – between words (syntax) – across utterances (discourse) – across paragraphs (topic) But to what extent are these scales captured in the representations of pretrained models? 2/ It turns out that many BERT neurons exhibit *multiscale* structure across an input—this neuron changes both rapidly between adjacent tokens and gradually across the input Spectral analysis lets us disentangle these scales by treating these activations as a digital signal! 3/ After doing this, we show through probing experiments that different parts of a neuron's frequency spectrum capture knowledge of NLP tasks at different scales! Low frequencies correspond to topic, higher frequencies to part of speech, and middle ones to dialog acts. 4/ We can also use spectral filters to *specialize* neurons to different scales of structure during pretraining, using what we call a prism layer. This produces single multiscale representations which perform comparably or better on all tasks. 5/ The prism layer also improves modeling of long-range dependencies—the BERT + Prism model outperforms BERT on masked language modeling problems without local context! 6/ Spectral filters are super easy to incorporate into existing models—it just takes a couple lines of code and an off-the-shelf PyTorch library () 7/ It was great working on this with @jurafsky and Noah Goodman, and we're excited to see what folks do with these methods, both for interpretability and in building better models! Paper: 🌇 8/8",https://arxiv.org/abs/2011.04823,"Language exhibits structure at different scales, ranging from subwords to words, sentences, paragraphs, and documents. To what extent do deep models capture information at these scales, and can we force them to better capture structure across this hierarchy? We approach this question by focusing on individual neurons, analyzing the behavior of their activations at different timescales. We show that signal processing provides a natural framework for separating structure across scales, enabling us to 1) disentangle scale-specific information in existing embeddings and 2) train models to learn more about particular scales. Concretely, we apply spectral filters to the activations of a neuron across an input, producing filtered embeddings that perform well on part of speech tagging (word-level), dialog speech acts classification (utterance-level), or topic classification (document-level), while performing poorly on the other tasks. We also present a prism layer for training models, which uses spectral filters to constrain different neurons to model structure at different scales. Our proposed BERT + Prism model can better predict masked tokens using long-range context and produces multiscale representations that perform better at utterance- and document-level tasks. Our methods are general and readily applicable to other domains besides language, such as images, audio, and video. ","Language Through a Prism: A Spectral Approach for Multiscale Language
Representations",8,"['What happens when you Fourier Transform a BERT neuron?\n\nSignal processing can reveal (+manipulate!) multiscale linguistic structure in BERT neurons!\n\nNew #NeurIPS2020 paper w/ @jurafsky and Noah Goodman \n\nPaper: \n@stanfordnlp & @StanfordAILab\n\n👇 1/ ', 'Linguistic phenomena occur at different scales, including\n– within a word (e.g. morphology)\n– between words (syntax)\n– across utterances (discourse) \n– across paragraphs (topic)\n\nBut to what extent are these scales captured in the representations of pretrained models?\n\n2/ https://t.co/3yj11c1tPK', 'It turns out that many BERT neurons exhibit *multiscale* structure across an input—this neuron changes both rapidly between adjacent tokens and gradually across the input\n\nSpectral analysis lets us disentangle these scales by treating these activations as a digital signal!\n\n3/ https://t.co/b5t2VdkkJ4', ""After doing this, we show through probing experiments that different parts of a neuron's frequency spectrum capture knowledge of NLP tasks at different scales! \n\nLow frequencies correspond to topic, higher frequencies to part of speech, and middle ones to dialog acts.\n\n4/ https://t.co/gOvTR0u304"", 'We can also use spectral filters to *specialize* neurons to different scales of structure during pretraining, using what we call a prism layer.\n\nThis produces single multiscale representations which perform comparably or better on all tasks.\n\n5/ https://t.co/7q1YTx71T1', 'The prism layer also improves modeling of long-range dependencies—the BERT + Prism model outperforms BERT on masked language modeling problems without local context!\n\n6/ https://t.co/5ypM5EhM2e', 'Spectral filters are super easy to incorporate into existing models—it just takes a couple lines of code and an off-the-shelf PyTorch library (https://t.co/NK9ZMyHdiO)\n\n7/ https://t.co/TScZp0SkLt', ""It was great working on this with @jurafsky and Noah Goodman, and we're excited to see what folks do with these methods, both for interpretability and in building better models!\n\nPaper: https://t.co/kTO0WGjUqZ\n\n🌇 8/8""]",20,11,1881
53,55,1507308358717562890,864056888564084736,Pablo Lanillos (🤖🧠),"🎯Check this new mind-blowing paper: ""Reclaiming salience: rhythmic precision-modulated action and perception"" 👇 with AA Mera, F Novicky, T Parr, K Friston and @nsajidt #Neuroscience #Robotics #Attention #Saliency #ActivePerception Have we properly modelled attention and saliency? ✔️We revisit neuroscience findings to propose a new model of attention and salience. ✔️We reclaim salience as an active inference process that relies on 2 basic principles: uncertainty minimisation & rhythmic scheduling Did we properly use and implement saliency in ML and robotics? ✔️We implement a precision-based model going beyond human fixation maps ✔️We showcase numerical experiments for state and noise estimation, system identification and action selection for informative path planning. This work changes our view of attention and saliency going back to its original definition but considering the circular causality between perception and action. We place attention and saliency as integral processes for efficient gathering and processing of sensory information. 🔴Saliency as fixation pixel-wise maps is the past. 🟢Rhythmic precision-modulation is the future: Precision control and uncertainty minimisation that influences the selection of future sensory data and that are synchronised in an oscillatory fashion. ",https://arxiv.org/abs/2203.12652,"Computational models of visual attention in artificial intelligence and robotics have been inspired by the concept of a saliency map. These models account for the mutual information between the (current) visual information and its estimated causes. However, they fail to consider the circular causality between perception and action. In other words, they do not consider where to sample next, given current beliefs. Here, we reclaim salience as an active inference process that relies on two basic principles: uncertainty minimisation and rhythmic scheduling. For this, we make a distinction between attention and salience. Briefly, we associate attention with precision control, i.e., the confidence with which beliefs can be updated given sampled sensory data, and salience with uncertainty minimisation that underwrites the selection of future sensory data. Using this, we propose a new account of attention based on rhythmic precision-modulation and discuss its potential in robotics, providing numerical experiments that showcase advantages of precision-modulation for state and noise estimation, system identification and action selection for informative path planning. ",Reclaiming saliency: rhythmic precision-modulated action and perception,5,"['🎯Check this new mind-blowing paper: ""Reclaiming salience: rhythmic precision-modulated action and perception""\n👇\n\nwith AA Mera, F Novicky, T Parr, K Friston and @nsajidt \n\n#Neuroscience #Robotics #Attention #Saliency #ActivePerception ', 'Have we properly modelled attention and saliency?\n✔️We revisit neuroscience findings to propose a new model of attention and salience.\n✔️We reclaim salience as an active inference process that relies on 2 basic principles: uncertainty minimisation & rhythmic scheduling', 'Did we properly use and implement saliency in ML and robotics?\n✔️We implement a precision-based model going beyond human fixation maps\n✔️We showcase numerical experiments for state and noise estimation, system identification and action selection for informative path planning. 
https://t.co/otE6xkWTcU', 'This work changes our view of attention and saliency going back to its original definition but considering the circular causality between perception and action. We place attention and saliency as integral processes for efficient gathering and processing of sensory information.', '🔴Saliency as fixation pixel-wise maps is the past. 🟢Rhythmic precision-modulation is the future: Precision control and uncertainty minimisation that influences the selection of future sensory data and that are synchronised in an oscillatory fashion. https://t.co/XnzclBAK4S']",22,03,1334
54,2,1280287147157860353,988920586750619649,"Dr. Rocío Joo, PhD","Hey #movementecology community, check our new paper reviewing the last decade in the field, based on text analysis in #RStats of > 8000 papers. Some highlights of our paper in this thread. *OK 1/2 of the paper in this thread (1/n) History. The study of #movement is pretty old (check timeline below), but the term #movementecology was not popular before a special feature on movement ecology in which @ran_nathan and colleagues defined the movement ecology framework (MEF). (2/n) The MEF consisted of: external factors (environmental conditions that affect #movement), internal state (intrinsic factors affecting motivation and readiness to move), navigation (traits enabling the individual to orient), and motion (traits enabling the individual to move) (3/n) The outcome of the interactions between these components would be the observed path. We found that, in the last decade, most studies tackled movement in relation to external factors, while a minority of them studied the processes behind #movement, like motion or navigation. (4/n) Technology. While the MEF has not seemed to have radically changed the field, #biologging and #software use have changed. E.g. GPS, accelerometer and video are more popular. #Rstats has become the undisputed preference in the field among #software tools. (5/n) #Stats. Our analyses revealed that movement, spatial or time-series statistical tools are not the most popular choices in #movementecology studies, but rather generic tools like GLM (that could eventually have a term accounting for time or space in some way). (6/n) #technology vs. concepts. Overall, the results seem to indicate that technology has played a bigger role in #movementecology than #movement concepts. Is the field more data-driven than ideas-driven? Where do we as #movement researchers stand in this trade-off? (7/n) The methods for #TextAnalysis (including quality control) and #rstats codes are described in detail here: (8/n) Did you participate in our #movementecology survey like a year ago? If you did, thank you so much! Here are the results End of the thread. I'd love to get your thoughts on the paper :) (9/n)",https://arxiv.org/abs/2006.00110,"Movement is fundamental to life, shaping population dynamics, biodiversity patterns, and ecosystem structure. Recent advances in tracking technology have enabled fundamental questions about movement to be tackled, leading to the development of the movement ecology framework (MEF), considered a milestone in the field [1]. The MEF introduced an integrative theory of organismal movement, linking internal state, motion capacity and navigation capacity to external factors. Here, a decade later, we investigated the current state of research in the field. Using a text mining approach on >8000 peer-reviewed papers in movement ecology, we explored the main research topics, evaluated the impact of the MEF, and assessed changes in the use of technological devices, software and statistical methods. The number of publications has increased considerably and there have been major technological changes in the past decade (i.e.~increased use of GPS devices, accelerometers and video cameras, and a convergence towards R), yet we found that research focuses on the same questions, specifically, on the effect of environmental factors on movement and behavior. In practice, it appears that movement ecology research does not reflect the MEF. 
We call on researchers to transform the field from technology-driven to embrace interdisciplinary collaboration, in order to reveal key processes underlying movement (e.g.~navigation), as well as evolutionary, physiological and life-history consequences of particular strategies. ",A decade of movement ecology,9,"['Hey #movementecology community, check our new paper reviewing the last decade in the field, based on text analysis in #RStats of > 8000 papers. Some highlights of our paper in this thread. *OK 1/2 of the paper in this thread (1/n) ', 'History. The study of #movement is pretty old (check timeline below), but the term #movementecology was not popular before a special feature on movement ecology in which @ran_nathan and colleagues defined the movement ecology framework (MEF). (2/n) https://t.co/rTvlFVmJrZ', 'The MEF consisted of: external factors (environmental conditions that affect #movement), internal state (intrinsic factors affecting motivation and readiness to move), navigation (traits enabling the individual to orient), and motion (traits enabling the individual to move) (3/n) https://t.co/7rhNCOPaCa', 'The outcome of the interactions between these components would be the observed path. We found that, in the last decade, most studies tackled\nmovement in relation to external factors, while a minority of them studied the processes behind #movement, like motion or navigation. (4/n) https://t.co/77bIJu7rmm', 'Technology. While the MEF has not seemed to have radically changed the field, #biologging and #software use have changed. E.g. GPS, accelerometer and video are more popular. #Rstats has become the undisputed preference in the field among #software tools. (5/n) https://t.co/UGAudH3ODH', '#Stats. Our analyses revealed that movement, spatial or time-series statistical tools are not the most popular choices in #movementecology studies, but rather generic tools like GLM (that could eventually have a term accounting for time or space in some way). (6/n) https://t.co/g1pHBXM8o6', '#technology vs. concepts. Overall, the results seem to indicate that technology has played a bigger role in #movementecology than #movement concepts. Is the field more data-driven than ideas-driven? Where do we as #movement researchers stand in this trade-off? (7/n) https://t.co/DZV8akWnTm', 'The methods for #TextAnalysis (including quality control) and #rstats codes are described in detail here: https://t.co/q7alkm2dQ5 (8/n) https://t.co/32b553RAxv', ""Did you participate in our #movementecology survey like a year ago? If you did, thank you so much! Here are the results https://t.co/D3SKl9j575 End of the thread. I'd love to get your thoughts on the paper :) (9/n)""]",20,06,2218
55,212,1382286857044627457,769142140765167616,Siamak F. Shahandashti,"With George Kampanos, we looked at #cookie notices in 14.5k UK, 3k Greek websites, found widespread violations: no notice, direct 'reject' option rare, 'reject' harder than 'accept', biased info given to users. Paper accepted to @IFIPSEC '21 ePrint: Thanks to @s_englehardt and @random_walker for developing OpenWPM of course and @mozilla for maintaining it.",https://arxiv.org/abs/2104.05750,"Cookie banners are devices implemented by websites to allow users to manage their privacy settings with respect to the use of cookies. They are part of a user's daily web browsing experience since legislation in Europe requires websites to show such notices. In this paper, we carry out a large-scale study of more than 17,000 websites including more than 7,500 cookie banners in Greece and the UK to determine compliance and tracking transparency levels. Our analysis shows that although more than 60% of websites store third-party cookies in both countries, only less than 50% show a cookie notice and hence a substantial proportion do not comply with the law even at the very basic level. We find only a small proportion of the surveyed websites providing a direct opt-out option, with an overwhelming majority either nudging users towards privacy-intrusive choices or making cookie rejection much harder than consent. Our results differ significantly in some cases from previous smaller-scale studies and hence underline the importance of large-scale studies for a better understanding of the big picture in cookie practices. ",Accept All: The Landscape of Cookie Banners in Greece and the UK,2,"[""With George Kampanos, we looked at #cookie notices in 14.5k UK, 3k Greek websites, found widespread violations: no notice, direct 'reject' option rare, 'reject' harder than 'accept', biased info given to users.\nPaper accepted to @IFIPSEC '21\nePrint: "", 'Thanks to @s_englehardt and @random_walker for developing OpenWPM of course and @mozilla for maintaining it.']",21,04,372
56,125,1455943300679180298,877641073350369281,Aharon Brodutch,New paper with the @QuantumAephraim lab. Congrats to Noah and @YBYilmazQO on getting this experiment out and getting a lot of experimental work done under covid restrictions. A pleasure to work with @nicoleyh11 and @DavidArvShu for the first time. ,https://arxiv.org/abs/2111.01194,"Operator noncommutation, a hallmark of quantum theory, limits measurement precision, according to uncertainty principles. Wielded correctly, though, noncommutation can boost precision. A recent foundational result relates a metrological advantage with negative quasiprobabilities -- quantum extensions of probabilities -- engendered by noncommuting operators. We crystallize the relationship in an equation that we prove theoretically and observe experimentally. Our proof-of-principle optical experiment features a filtering technique that we term partially postselected amplification (PPA). Using PPA, we measure a waveplate's birefringent phase. PPA amplifies, by over two orders of magnitude, the information obtained about the phase per detected photon. In principle, PPA can boost the information obtained from the average filtered photon by an arbitrarily large factor. The filter's amplification of systematic errors, we find, bounds the theoretically unlimited advantage in practice. PPA can facilitate any phase measurement and mitigates challenges that scale with trial number, such as proportional noise and detector saturation. By quantifying PPA's metrological advantage with quasiprobabilities, we reveal deep connections between quantum foundations and precision measurement. ","Negative quasiprobabilities enhance phase estimation in quantum-optics
experiment",1,['New paper with the @QuantumAephraim lab. Congrats to Noah and @YBYilmazQO on getting this experiment out and getting a lot of experimental work done under covid restrictions. A pleasure to work with @nicoleyh11 and @DavidArvShu for the first time.\n'],21,11,254
57,29,864823902819823616,3716338821,Mikko Tuomi,"New paper: ""Evidence for at least three planet candidates orbiting #HD20794"". ""We also find a significant signal at a period of about 330 d corresponding to a super-Earth or Neptune in the habitable zone."" There is tentative evidence for as many as 6 planets orbiting #HD20794 but the interpretation of these signals is difficult. It is now possible to detect signals with amplitudes of only 40cm/s - a factor of 4 in excess that required for detection of Earth analogs. The data reduction and modelling approach applied to HARPS data of #HD20794 explained. The paper also applies the ""moving periodogram"", enabling studying the time-invariance of signals. Demonstrating time-invariance is a key when finding Doppler signals of planets - stellar activity will always vary as a function of time.",https://arxiv.org/abs/1705.05124,"We explore the feasibility of detecting Earth analogs around Sun-like stars using the radial velocity method by investigating one of the largest radial velocities datasets for the one of the most stable radial-velocity stars HD20794. We proceed by disentangling the Keplerian signals from correlated noise and activity-induced variability. We diagnose the noise using the differences between radial velocities measured at different wavelength ranges, so-called ""differential radial velocities"". We apply this method to the radial velocities measured by HARPS, and identify four signals at 18, 89, 147 and 330 d. The two signals at periods of 18 and 89 d are previously reported and are better quantified in this work. The signal at a period of about 147 d is reported for the first time, and corresponds to a super-Earth with a minimum mass of 4.59 Earth mass located 0.51 AU from HD20794. We also find a significant signal at a period of about 330 d corresponding to a super-Earth or Neptune in the habitable zone. Since this signal is close to the annual sampling period and significant periodogram power in some noise proxies are found close to this signal, further observations and analyses are required to confirm it. The analyses of the eccentricity and consistency of signals provide weak evidence for the existence of the previously reported 43 d signal and a new signal at a period of about 11.9 d with a semi amplitude of 0.4 m/s. We find that the detection of a number of signals with radial velocity variations around 0.5\,m/s likely caused by low mass planet candidates demonstrates the important role of noise modeling in searching for Earth analogs. ",Evidence for at least three planet candidates orbiting HD20794,7,"['New paper: ""Evidence for at least three planet candidates orbiting #HD20794"". ', '""We also find a significant signal at a period of about 330 d corresponding to a super-Earth or Neptune in the habitable zone.""', 'There is tentative evidence for as many as 6 planets orbiting #HD20794 but the interpretation of these signals is difficult. https://t.co/NnLzG4Io0O', 'It is now possible to detect signals with amplitudes of only 40cm/s - a factor of 4 in excess that required for detection of Earth analogs.', 'The data reduction and modelling approach applied to HARPS data of #HD20794 explained. https://t.co/435JH9FqjJ', 'The paper also applies the ""moving periodogram"", enabling studying the time-invariance of signals. https://t.co/EaM1D1qlVO https://t.co/h5WQkMvIdM', 'Demonstrating time-invariance is a key when finding Doppler signals of planets - stellar activity will always vary as a function of time.']",17,05,829
58,145,1368761254748168194,1049297982170968065,Yutaka Hori, Our new work on machine learning in feedback control is now on arXiv. The paper presents the use of a linear quasi-optimal controller to assist the learning of nonlinear optimal regulator using RL. So many thanks to the great collaborators at Fujitsu Lab.,https://arxiv.org/abs/2103.03808,"Reinforcement learning (RL) provides a model-free approach to designing an optimal controller for nonlinear dynamical systems. However, the learning process requires a considerable number of trial-and-error experiments using the poorly controlled system, and accumulates wear and tear on the plant. Thus, it is desirable to maintain some degree of control performance during the learning process. In this paper, we propose a model-free two-step design approach to improve the transient learning performance of RL in an optimal regulator design problem for unknown nonlinear systems. Specifically, a linear control law pre-designed in a model-free manner is used in parallel with online RL to ensure a certain level of performance at the early stage of learning. Numerical simulations show that the proposed method improves the transient learning performance and efficiency in hyperparameter tuning of RL. ","Model-free two-step design for improving transient learning performance
in nonlinear optimal regulator problems",1,['\nOur new work on machine learning in feedback control is now on arXiv. The paper presents the use of a linear quasi-optimal controller to assist the learning of nonlinear optimal regulator using RL. So many thanks to the great collaborators at Fujitsu Lab.'],21,03,262
59,94,1469235125007306752,16434310,chrislintott,"New paper day! Led by the indefatigable @drbecky_ and @KarenLMasters, along with the @galaxyzoo team, we've written about why, if you're trying to select galaxies by shape you really really shouldn't use colour as a proxy: @drbecky_ @KarenLMasters @galaxyzoo This is somewhat old news, but - especially for people using machine learning to classify images of galaxies - we hope it'll be a useful reminder. Remember: friends don't let friends use colour as a proxy for morphology. @drbecky_ @KarenLMasters @galaxyzoo It’s a good word.",https://arxiv.org/abs/2112.04507,"The galaxy population is strongly bimodal in both colour and morphology, and the two measures correlate strongly, with most blue galaxies being late-types (spirals) and most early-types, typically ellipticals, being red. This observation has led to the use of colour as a convenient selection criteria to make samples which are then labelled by morphology. Such use of colour as a proxy for morphology results in necessarily impure and incomplete samples. In this paper, we make use of the morphological labels produced by Galaxy Zoo to measure how incomplete and impure such samples are, considering optical (ugriz), NUV and NIR (JHK) bands. The best single colour optical selection is found using a threshold of g-r = 0.742, but this still results in a sample where only 56% of red galaxies are smooth and 56% of smooth galaxies are red. Use of the NUV gives some improvement over purely optical bands, particularly for late-types, but still results in low purity/completeness for early-types. No significant improvement is found by adding NIR bands. With any two bands, including NUV, a sample of early-types with greater than two-thirds purity cannot be constructed. Advances in quantitative galaxy morphologies have made colour-morphology proxy selections largely unnecessary going forward; where such assumptions are still required, we recommend studies carefully consider the implications of sample incompleteness/impurity. ","Quantifying the Poor Purity and Completeness of Morphological Samples
Selected by Galaxy Colour",3,"[""New paper day! Led by the indefatigable @drbecky_ and @KarenLMasters, along with the @galaxyzoo team, we've written about why, if you're trying to select galaxies by shape you really really shouldn't use colour as a proxy: "", ""@drbecky_ @KarenLMasters @galaxyzoo This is somewhat old news, but - especially for people using machine learning to classify images of galaxies - we hope it'll be a useful reminder. Remember: friends don't let friends use colour as a proxy for morphology."", '@drbecky_ @KarenLMasters @galaxyzoo It’s a good word.']",21,12,540
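A small sketch of the purity/completeness bookkeeping behind the abstract above, on synthetic data: only the g - r = 0.742 threshold comes from the paper, while the colour distributions, morphology fractions, and sample size are invented for illustration.

```python
# Toy sketch (not the paper's Galaxy Zoo analysis): purity and completeness of an
# "early-type" sample selected purely by colour, using the quoted g - r threshold.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
is_smooth = rng.random(n) < 0.4                       # hypothetical morphology labels
# hypothetical colours: smooth galaxies redder on average, with substantial overlap
g_minus_r = np.where(is_smooth,
                     rng.normal(0.80, 0.15, n),
                     rng.normal(0.55, 0.15, n))

selected_red = g_minus_r > 0.742                      # colour-as-morphology proxy
purity = (is_smooth & selected_red).sum() / selected_red.sum()
completeness = (is_smooth & selected_red).sum() / is_smooth.sum()
print(f"purity={purity:.2f} completeness={completeness:.2f}")  # both well below 1
```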
60,66,1149118095836925952,19510090,Julian Togelius,"How could AI help you design games? Perhaps by giving you helpful suggestions as you design? In a new paper, we introduce Pitako, a recommender system for game design. The system learns from existing games to suggest design elements. Thin of this as something akin to Amazon's, Netflix's, or Spotify's recommendations, but for game design. The system has a library of more than a hundred games, and when it recognizes a design pattern from existing games, it suggests elements to include, and from which game. So, if you are creating a space-themed shooter and have created spaceships and aliens, it suggests that you may want a missile and a firing mechanic. Of course, you may not want that - you may want to make a nonviolent game, so feel free to ignore the suggestion. But Pitako takes its cues from existing games in its database, and those are mostly reimplemented versions of classic arcade games. To make all this possible, we build on the @gvgai framework, which includes loads of games implemented in the Video Game Description Language. The nice thing about building on VGDL is that we can recommend rules as well as sprites and behavior, and we can abstract from specific implementations in individual games to general rules. At the core, we use the classic Apriori algorithm for this. Finally, Pitako also recommends where to place sprites in levels. Again, this is based on patterns extracted from the levels of existing games, but augmented with a few heuristics. The paper, by @Jaspier @dgopstein @nealen and myself, will be presented at @cog2019ieee, which is gearing up to be a really fantastic conference. You should be there. Technically, Pitako is a sub-system of Cicero, a versatile tool for AI-assisted design built on the GVGAI framework and the VGDL language. Some previous work on Cicero can be found here: ",https://arxiv.org/abs/1907.03877,"Recommender Systems are widely and successfully applied in e-commerce. Could they be used for design? In this paper, we introduce Pitako1, a tool that applies the Recommender System concept to assist humans in creative tasks. More specifically, Pitako provides suggestions by taking games designed by humans as inputs, and recommends mechanics and dynamics as outputs. Pitako is implemented as a new system within the mixed-initiative AI-based Game Design Assistant, Cicero. This paper discusses the motivation behind the implementation of Pitako as well as its technical details and presents usage examples. We believe that Pitako can influence the use of recommender systems to help humans in their daily tasks. ",Pitako -- Recommending Game Design Elements in Cicero,8,"['How could AI help you design games? Perhaps by giving you helpful suggestions as you design? In a new paper, we introduce Pitako, a recommender system for game design. The system learns from existing games to suggest design elements.\n ', ""Thin of this as something akin to Amazon's, Netflix's, or Spotify's recommendations, but for game design. The system has a library of more than a hundred games, and when it recognizes a design pattern from existing games, it suggests elements to include, and from which game. https://t.co/f3nwXR2c9s"", 'So, if you are creating a space-themed shooter and have created spaceships and aliens, it suggests that you may want a missile and a firing mechanic. 
Of course, you may not want that - you may want to make a nonviolent game, so feel free to ignore the suggestion.', 'But Pitako takes its cues from existing games in its database, and those are mostly reimplemented versions of classic arcade games. To make all this possible, we build on the @gvgai framework, which includes loads of games implemented in the Video Game Description Language.', 'The nice thing about building on VGDL is that we can recommend rules as well as sprites and behavior, and we can abstract from specific implementations in individual games to general rules. At the core, we use the classic Apriori algorithm for this. https://t.co/KuRTaLisKq', 'Finally, Pitako also recommends where to place sprites in levels. Again, this is based on patterns extracted from the levels of existing games, but augmented with a few heuristics. https://t.co/ChEOs1hjlZ', 'The paper, by @Jaspier @dgopstein @nealen and myself, will be presented at @cog2019ieee, which is gearing up to be a really fantastic conference. You should be there.', 'Technically, Pitako is a sub-system of Cicero, a versatile tool for AI-assisted design built on the GVGAI framework and the VGDL language. Some previous work on Cicero can be found here:\nhttps://t.co/j2Ap9G3jBZ']",19,07,1875
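The thread above says Pitako mines design patterns with the classic Apriori algorithm and recommends elements that co-occur in existing games. A toy association-rule sketch in that spirit (not Pitako's code; the games list and min_support value are invented):

```python
# Minimal frequent-pair / recommendation sketch in the spirit of Apriori.
from itertools import combinations
from collections import Counter

games = [                                        # hypothetical game descriptions
    {"spaceship", "alien", "missile", "shoot"},
    {"spaceship", "alien", "missile"},
    {"avatar", "key", "door", "collect"},
    {"spaceship", "asteroid", "shoot"},
    {"avatar", "enemy", "sword", "shoot"},
]
min_support = 2

pair_counts = Counter(p for g in games for p in combinations(sorted(g), 2))
frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}

def recommend(current_design):
    """Suggest elements that frequently co-occur with the current design elements."""
    scores = Counter()
    for (a, b), count in frequent_pairs.items():
        if a in current_design and b not in current_design:
            scores[b] += count
        if b in current_design and a not in current_design:
            scores[a] += count
    return scores.most_common()

print(recommend({"spaceship", "alien"}))         # e.g. suggests "missile" and "shoot"
```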
61,82,993144530571493378,3433516535,Miguel Hernán,"Data scientists define their work as “gaining insights” or “extracting meaning” from data. That is way too vague. We propose that the contributions of #datascience can be organized into 3 classes of tasks: 1. description 2. prediction 3. causal inference @chrisbboyer Thanks, Christopher. Mullainathan & Speiss do a terrific job at explaining how machine learning works in simple terms. They also explain the distinction between prediction and causal inference, though that is not the main objective of their paper. @kerinalthoff Several epidemiologists, biostatisticians (like Vittinghoff et al.), econometricians... have proposed a similar classification. We are looking for papers or textbooks that explicitly define these 3 categories. Suggestions welcome. The older the references, the better.",https://arxiv.org/abs/1804.10846,"Causal inference from observational data is the goal of many data analyses in the health and social sciences. However, academic statistics has often frowned upon data analyses with a causal objective. The introduction of the term ""data science"" provides a historic opportunity to redefine data analysis in such a way that it naturally accommodates causal inference from observational data. Like others before, we organize the scientific contributions of data science into three classes of tasks: Description, prediction, and counterfactual prediction (which includes causal inference). An explicit classification of data science tasks is necessary to discuss the data, assumptions, and analytics required to successfully accomplish each task. We argue that a failure to adequately describe the role of subject-matter expert knowledge in data analysis is a source of widespread misunderstandings about data science. Specifically, causal analyses typically require not only good data and algorithms, but also domain expert knowledge. We discuss the implications for the use of data science to guide decision-making in the real world and to train data scientists. ","Data science is science's second chance to get causal inference right: A
classification of data science tasks",3,"['Data scientists define their work as “gaining insights” or “extracting meaning” from data. That is way too vague. \n\nWe propose that the contributions of #datascience can be organized into 3 classes of tasks:\n1. description\n2. prediction\n3. causal inference\n ', '@chrisbboyer Thanks, Christopher.\n\nMullainathan & Speiss do a terrific job at explaining how machine learning works in simple terms. They also explain the distinction between prediction and causal inference, though that is not the main objective of their paper. https://t.co/8o6qboakq4', '@kerinalthoff Several epidemiologists, biostatisticians (like Vittinghoff et al.), econometricians... have proposed a similar classification. We are looking for papers or textbooks that explicitly define these 3 categories. Suggestions welcome. The older the references, the better.']",18,04,820
62,161,1481193458161356800,1109044782809206784,Ginette Lafit 💚,"In this new preprint, we (@fjnogales, @ivanmarce1, Ruben Zamar, and me) propose the use of a robust covariance estimator based on multivariate Winsorization for sparse estimation of the precision matrix of a Gaussian graphical model 1/5 We investigate the performance of Glasso under cellwise contamination (right panel) which differs from the classical casewise contamination model (left panel) because each cell of a data has a probability to be independently contaminated 2/5 In the context of cellwise outliers, traditional robust estimators of the covariance matrix are not well suited because they rely on linear combinations of the observations which have a high probability of being contaminated. This is known as outliers propagation. 3/5 In the presence of cellwise outliers, the Graphical lasso is no longer robust. As a result, we cannot use the precision matrix estimated by Glasso to learn about conditional independence in a Gaussian graphical model. 4/5 Thus we propose a robust estimator of Glasso by plugging in a robust estimator of the covariance matrix based on bivariate winsorization. The proposal has a competitive behavior, regarding the recovery of the graph in comparison with existing approaches. 5/5",https://arxiv.org/abs/2201.03659,"We propose the use of a robust covariance estimator based on multivariate Winsorization in the context of the Tarr-Muller-Weber framework for sparse estimation of the precision matrix of a Gaussian graphical model. Likewise Croux-Ollerer's precision matrix estimator, our proposed estimator attains the maximum finite sample breakdown point of 0.5 under cellwise contamination. We conduct an extensive Monte Carlo simulation study to assess the performance of ours and the currently existing proposals. We find that ours has a competitive behavior, regarding the the estimation of the precision matrix and the recovery of the graph. We demonstrate the usefulness of the proposed methodology in a real application to breast cancer data. ",Robust graphical lasso based on multivariate Winsorization,5,"['In this new preprint, we (@fjnogales, @ivanmarce1, Ruben Zamar, and me) propose the use of a robust covariance estimator based on multivariate Winsorization for sparse estimation of the precision matrix of a Gaussian graphical model\n\n1/5', 'We investigate the performance of Glasso under cellwise contamination (right panel) which differs from the classical casewise contamination model (left panel) because each cell of a data has a probability to be independently contaminated\n2/5 https://t.co/MxWNZN9B6k', 'In the context of cellwise outliers, traditional robust estimators of the covariance matrix are not well suited because they rely on linear combinations of the observations which have a high probability of being contaminated. This is known as outliers propagation.\n3/5', 'In the presence of cellwise outliers, the Graphical lasso is no longer robust. As a result, we cannot use the precision matrix estimated by Glasso to learn about conditional independence in a Gaussian graphical model.\n4/5', 'Thus we propose a robust estimator of Glasso by plugging in a robust estimator of the covariance matrix based on bivariate winsorization. The proposal has a competitive behavior, regarding the recovery of the graph in comparison with existing approaches.\n5/5']",22,01,1242
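A simplified stand-in for the pipeline sketched in this thread: winsorize the data, then feed the resulting covariance to the graphical lasso. The paper uses multivariate/bivariate Winsorization; the column-wise winsorization, the alpha value, and the contamination level below are assumptions, and scikit-learn's graphical_lasso is used in place of the authors' implementation.

```python
# Simplified sketch: robustify the covariance via (column-wise) winsorization,
# then plug it into the graphical lasso to get a sparse precision matrix.
import numpy as np
from scipy.stats.mstats import winsorize
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
X[rng.random(X.shape) < 0.02] = 10.0      # sprinkle cellwise outliers into random cells

# 5% winsorization per column as a simplified robustness step (the paper's
# multivariate Winsorization handles pairs of variables jointly)
Xw = np.column_stack([np.asarray(winsorize(X[:, j], limits=(0.05, 0.05)))
                      for j in range(X.shape[1])])
cov_robust = np.cov(Xw, rowvar=False)

covariance, precision = graphical_lasso(cov_robust, alpha=0.1)
print(np.round(precision, 2))             # sparse estimate of the precision matrix
```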
63,25,1121780337061896192,3422471637,Elias Kammoun,"Here is our new paper on NGC 5347: a bona fide Compton-thick AGN just sitting in our backyard! More exciting results from our survey of nearby obscured AGN with @NASANuSTAR are coming soon.. @NASANuSTAR In ~12 years the @AthenaXIFU on board of @AthenaXobs will allow us to look at the finest details in X-ray spectra of obscured AGN, opening a whole new window in X-ray astronomy, going from the current data-quality (left) to the well-resolved emission lines with Athena (right). ",https://arxiv.org/abs/1904.11028,"Current measurements show that the observed fraction of Compton-thick (CT) AGN is smaller than the expected values needed to explain the cosmic X-ray background. Prior fits to the X-ray spectrum of the nearby Seyfert-2 galaxy NGC 5347 ($z=0.00792,\, D =35.5 \rm ~Mpc $) have alternately suggested a CT and Compton-thin source. Combining archival data from $Suzaku$, $Chandra$, and - most importantly - new data from $NuSTAR$, and using three distinct families of models, we show that NGC 5347 is an obscured CTAGN ($N_{\rm H} > 2.23\times 10^{24}~\rm cm^{-2}$). Its 2-30~keV spectrum is dominated by reprocessed emission from distant material, characterized by a strong Fe K$\alpha$ line and a Compton hump. We found a large equivalent width of the Fe K$\alpha$ line ($\rm EW = 2.3 \pm 0.3$ keV) and a high intrinsic-to-observed flux ratio ($\sim 100$). All of these observations are typical for bona fide CTAGN. We estimate a bolometric luminosity of $L_{\rm bol} \simeq 0.014 \pm 0.005~L_{\rm Edd.}$. The $Chandra$ image of NGC 5347 reveals the presence of extended emission dominating the soft X-ray spectrum ($E < 2\,\rm keV$), which coincides with the [O III] emission detected in the $Hubble ~Space~ Telescope$ images. Comparison to other CTAGN suggests that NGC 5347 is broadly consistent with the average properties of this source class. We simulated $XRISM$ and $Athena$/X-IFU spectra of the source, showing the potential of these future missions in identifying CTAGN in the soft X-rays. ",A hard look at NGC 5347: revealing a nearby Compton-thick AGN,2,"['Here is our new paper on NGC 5347: a bona fide Compton-thick AGN just sitting in our backyard!\n\nMore exciting results from our survey of nearby obscured AGN with @NASANuSTAR are coming soon.. \n\n ', '@NASANuSTAR In ~12 years the @AthenaXIFU on board of @AthenaXobs will allow us to look at the finest details in X-ray spectra of obscured AGN, opening a whole new window in X-ray astronomy, going from the current data-quality (left) to the well-resolved emission lines with Athena (right). https://t.co/ohySAjTJkl']",19,04,502
64,128,1111237040195207169,810647071,Andres Olivares,"If a circular polarisation signal is created during the evolution of the universe, would a net circular polarisation reach us today? In we develop a formalism to study such signal and work out the necessary conditions for it to be preserved! @CelineBoehm1 This paper was written in collaboration with @MelizabethQ_ , @PhysYL and @CelineBoehm1",https://arxiv.org/abs/1903.11074,"The polarisation of sunlight after scattering off the atmosphere was first described by Chandrasekhar using a geometrical description of Rayleigh interactions. Kosowsky later extended Chandrasekhar's formalism by using Quantum Field Theory (QFT) to describe the polarisation of the Cosmological Microwave Background radiation. Here we focus on a case that is rarely discussed in the literature, namely the polarisation of high energy radiation after scattering off particles. After demonstrating why the geometrical and low energy QFT approaches fail in this case, we establish the transport formalism that allows to describe the change of polarisation of high energy photons when they propagate through space or the atmosphere. We primarily focus on Compton interactions but our approach is general enough to describe e.g. the scattering of high energy photons off new particles or through new interactions. Finally we determine the conditions for a circularly polarised $\gamma$--ray signal to keep the same level of circular polarisation as it propagates through its environment. ",Polarisation of high energy gamma-rays after scattering,2,"['If a circular polarisation signal is created during the evolution of the universe, would a net circular polarisation reach us today? In we develop a formalism to study such signal and work out the necessary conditions for it to be preserved! @CelineBoehm1 ', 'This paper was written in collaboration with @MelizabethQ_ , @PhysYL and @CelineBoehm1']",19,03,356
65,90,1103858909041868800,18850305,Zachary Lipton,"New work by my student Yifan Wu identifies problems with traditional *deep domain adaptation* objectives & holes in the theory supporting it. Our paper offers new analysis, a new algorithm that escapes one identified failure mode, & experimental validation Looking forward to discussing this work on Monday at Berkeley's ""Trustworthy Deep Learning"" seminar @WilliamWangNLP Our key idea here is that enforcing strict alignment can often be a bad thing. For example, what if source and target distributions have different label distributions? Then alignment is actually lower-bounding your target error. Instead we optimize a relaxed objective.",https://arxiv.org/abs/1903.01689,"Domain adaptation addresses the common problem when the target distribution generating our test data drifts from the source (training) distribution. While absent assumptions, domain adaptation is impossible, strict conditions, e.g. covariate or label shift, enable principled algorithms. Recently-proposed domain-adversarial approaches consist of aligning source and target encodings, often motivating this approach as minimizing two (of three) terms in a theoretical bound on target error. Unfortunately, this minimization can cause arbitrary increases in the third term, e.g. they can break down under shifting label distributions. We propose asymmetrically-relaxed distribution alignment, a new approach that overcomes some limitations of standard domain-adversarial algorithms. Moreover, we characterize precise assumptions under which our algorithm is theoretically principled and demonstrate empirical benefits on both synthetic and real datasets. ",Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment,3,"['New work by my student Yifan Wu identifies problems with traditional *deep domain adaptation* objectives & holes in the theory supporting it. Our paper offers new analysis, a new algorithm that escapes one identified failure mode, & experimental validation ', 'Looking forward to discussing this work on Monday at Berkeley\'s ""Trustworthy Deep Learning"" seminar https://t.co/0il5wVTHef', '@WilliamWangNLP Our key idea here is that enforcing strict alignment can often be a bad thing. For example, what if source and target distributions have different label distributions? Then alignment is actually lower-bounding your target error. Instead we optimize a relaxed objective.']",19,03,656
66,4,1060285853937909765,91865755,Els de Wolf,New @km3net paper on the Astro-Ph arXiv describes the potential of the future #ARCA detector of #KM3NeT to observe #neutrinos from known gamma-ray sources in our galaxy. Also the expected potential of #ARCA to observe extra-Galactic neutrinos is shown. ,https://arxiv.org/abs/1810.08499v1,"KM3NeT will be a network of deep-sea neutrino telescopes in the Mediterranean Sea. The KM3NeT/ARCA detector, to be installed at the Capo Passero site (Italy), is optimised for the detection of high-energy neutrinos of cosmic origin. Thanks to its geographical location on the Northern hemisphere, KM3NeT/ARCA can observe upgoing neutrinos from most of the Galactic Plane, including the Galactic Centre. Given its effective area and excellent pointing resolution, KM3NeT/ARCA will measure or significantly constrain the neutrino flux from potential astrophysical neutrino sources. At the same time, it will test flux predictions based on gamma-ray measurements and the assumption that the gamma-ray flux is of hadronic origin. Assuming this scenario, discovery potential and sensitivity to a selected list of Galactic sources and to generic point sources with an $E^{-2}$ spectrum are presented. These spectra are assumed to be time independent. The results indicate that an observation with $3\sigma$ significance is possible in about six years of operation for the most intense sources, such as Supernovae Remnants RX J1713.7-3946 and Vela Jr. If no signal will be found during this time, the fraction of the gamma-ray flux coming from hadronic processes can be constrained to be below 50% for these two objects. ","Sensitivity of the KM3NeT/ARCA neutrino telescope to point-like neutrino
sources",1,['New @km3net paper on the Astro-Ph arXiv describes the potential of the future #ARCA detector of #KM3NeT to observe #neutrinos from known gamma-ray sources in our galaxy. Also the expected potential of #ARCA to observe extra-Galactic neutrinos is shown.\n'],18,10,259
67,209,1283444494050816002,885528008,William Fedus,"The interplay of RL algorithms with experience replay is poorly understood. We study this and uncover a relationship between n-step returns and replay capacity. ICML '20 paper: Prajit R.*, @agarwl_ , Yoshua, @hugo_larochelle , Mark R., @wwdabney We study two properties in experience replay: 1. Size of replay capacity 2. Oldest policy in the buffer Together, these jointly define a replay ratio: an experience learning to new data acquisition ratio. Since DQN, these have often not varied (1M, learn each 4-steps). The performance of a Rainbow agent (Hessel et al., 2017) varies substantially with these factors: improving with more ""on-policy"" data and capacity. In the easiest Deep RL boost, we find a 29% median improvement in Atari games simply by increasing replay capacity from 1M -> 10M But the story is completely different with a DQN algorithm -- regardless of whether we control for replay ratio or the oldest policy -- there is no change. Why? Through ablative and additive studies, we isolate n-step returns as the crucial factor. Improvements with larger replay capacity are found only when using n-step returns. No clear signal found with prioritized experience replay, optimizer, or distributional learning. The importance of n-step returns even holds in the logical extreme: batch reinforcement learning. This is non-intuitive. Uncorrected n-step returns -- mathematically incorrect -- yield gains in a regime where they are the most incorrect. A bias-variance trade-off partially explains the importance of n-step returns (refer to paper for experimental details), but it's still not complete. The entanglement between data generation and RL algorithms is an important issue in the design of better agents! This was jointly led by Prajit Ramachandran with a great collaboration across Mila, Brain, DeepMind including: @agarwl_ , Yoshua Bengio, @hugo_larochelle , Mark Rowland, @wwdabney Paper: Code: ICML 2020: ",https://arxiv.org/abs/2007.06700,"Experience replay is central to off-policy algorithms in deep reinforcement learning (RL), but there remain significant gaps in our understanding. We therefore present a systematic and extensive analysis of experience replay in Q-learning methods, focusing on two fundamental properties: the replay capacity and the ratio of learning updates to experience collected (replay ratio). Our additive and ablative studies upend conventional wisdom around experience replay -- greater capacity is found to substantially increase the performance of certain algorithms, while leaving others unaffected. Counterintuitively we show that theoretically ungrounded, uncorrected n-step returns are uniquely beneficial while other techniques confer limited benefit for sifting through larger memory. Separately, by directly controlling the replay ratio we contextualize previous observations in the literature and empirically measure its importance across a variety of deep RL algorithms. Finally, we conclude by testing a set of hypotheses on the nature of these performance benefits. ",Revisiting Fundamentals of Experience Replay,8,"[""The interplay of RL algorithms with experience replay is poorly understood. We study this and uncover a relationship between n-step returns and replay capacity.\n\nICML '20 paper: \n\nPrajit R.*, @agarwl_ , Yoshua, @hugo_larochelle , Mark R., @wwdabney "", 'We study two properties in experience replay:\n1. Size of replay capacity\n2. 
Oldest policy in the buffer\n\nTogether, these jointly define a replay ratio: an experience learning to new data acquisition ratio. Since DQN, these have often not varied (1M, learn each 4-steps). https://t.co/0MOKpLQEHn', 'The performance of a Rainbow agent (Hessel et al., 2017) varies substantially with these factors: improving with more ""on-policy"" data and capacity.\n\nIn the easiest Deep RL boost, we find a 29% median improvement in Atari games simply by increasing replay capacity from 1M -> 10M https://t.co/wNVfFttK0h', 'But the story is completely different with a DQN algorithm -- regardless of whether we control for replay ratio or the oldest policy -- there is no change.\n\nWhy? https://t.co/cdJgyz240o', 'Through ablative and additive studies, we isolate n-step returns as the crucial factor.\n\nImprovements with larger replay capacity are found only when using n-step returns. No clear signal found with prioritized experience replay, optimizer, or distributional learning. https://t.co/SlKT7Mnwqx', 'The importance of n-step returns even holds in the logical extreme: batch reinforcement learning. \n\nThis is non-intuitive. Uncorrected n-step returns -- mathematically incorrect -- yield gains in a regime where they are the most incorrect. https://t.co/HubfHztWuW', ""A bias-variance trade-off partially explains the importance of n-step returns (refer to paper for experimental details), but it's still not complete.\n\nThe entanglement between data generation and RL algorithms is an important issue in the design of better agents! https://t.co/9lPfzcadKi"", 'This was jointly led by Prajit Ramachandran with a great collaboration across Mila, Brain, DeepMind including: @agarwl_ , Yoshua Bengio, @hugo_larochelle , Mark Rowland, @wwdabney \n\nPaper: https://t.co/KnGLGapNGU\nCode: https://t.co/xcg5ZI7Tt2\nICML 2020: https://t.co/QSGtPz4Iwk']",20,07,2010
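The thread above varies two replay properties (capacity and the age of the oldest policy) and isolates uncorrected n-step returns as the key ingredient for benefiting from larger buffers. A minimal sketch of those two pieces in plain Python, not the Dopamine/Rainbow code used in the paper:

```python
# Illustrative only: a fixed-capacity replay buffer plus an uncorrected n-step return.
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity FIFO replay: transitions from the oldest policies fall out first."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)
    def add(self, transition):
        self.buffer.append(transition)
    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

def n_step_return(rewards, bootstrap_value, gamma=0.99, n=3):
    """Uncorrected n-step target: sum_{k<n} gamma^k r_{t+k} + gamma^n * bootstrap."""
    g = sum(gamma ** k * r for k, r in enumerate(rewards[:n]))
    return g + gamma ** n * bootstrap_value

buf = ReplayBuffer(capacity=1_000_000)     # e.g. the 1M vs 10M capacities varied in the paper
for i in range(10):
    buf.add((f"s{i}", 0, 1.0, f"s{i+1}"))  # (state, action, reward, next_state)
batch = buf.sample(4)
print(len(batch), n_step_return([1.0, 1.0, 1.0], bootstrap_value=5.0))
```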
68,60,1518587777608265730,69202541,Jonathan Le Roux,"New paper out w/ @ZhongqiuWang, G. Wichern, @shinjiw_at_cmu, ""STFT-Domain Neural Speech Enhancement with Very Low Algorithmic Latency."" We combine a dual window size approach with DNN spectral mapping based enhancement and frame-online beamforming, reaching strong performance on a noisy reverberant SE task with algorithmic latency as low as 2 ms. ",https://arxiv.org/abs/2204.09911,"Deep learning based speech enhancement in the short-term Fourier transform (STFT) domain typically uses a large window length such as 32 ms. A larger window contains more samples and the frequency resolution can be higher for potentially better enhancement. This however incurs an algorithmic latency of 32 ms in an online setup, because the overlap-add algorithm used in the inverse STFT (iSTFT) is also performed based on the same 32 ms window size. To reduce this inherent latency, we adapt a conventional dual window size approach, where a regular input window size is used for STFT but a shorter output window is used for the overlap-add in the iSTFT, for STFT-domain deep learning based frame-online speech enhancement. Based on this STFT and iSTFT configuration, we employ single- or multi-microphone complex spectral mapping for frame-online enhancement, where a deep neural network (DNN) is trained to predict the real and imaginary (RI) components of target speech from the mixture RI components. In addition, we use the RI components predicted by the DNN to conduct frame-online beamforming, the results of which are then used as extra features for a second DNN to perform frame-online post-filtering. The frequency-domain beamforming in between the two DNNs can be easily integrated with complex spectral mapping and is designed to not incur any algorithmic latency. Additionally, we propose a future-frame prediction technique to further reduce the algorithmic latency. Evaluation results on a noisy-reverberant speech enhancement task demonstrate the effectiveness of the proposed algorithms. Compared with Conv-TasNet, our STFT-domain system can achieve better enhancement performance for a comparable amount of computation, or comparable performance with less computation, maintaining strong performance at an algorithmic latency as low as 2 ms. ",STFT-Domain Neural Speech Enhancement with Very Low Algorithmic Latency,2,"['New paper out w/ @ZhongqiuWang, G. Wichern, @shinjiw_at_cmu, ""STFT-Domain Neural Speech Enhancement with Very Low Algorithmic Latency."" \n', 'We combine a dual window size approach with DNN spectral mapping based enhancement and frame-online beamforming, reaching strong performance on a noisy reverberant SE task with algorithmic latency as low as 2 ms. https://t.co/8AoCQLz8Da']",22,04,362
69,219,1281306072930750465,881959726958862337,Yuhuai (Tony) Wu,"Can Neural Networks solve IQ tests? We propose Scattering Compositional Learner (SCL) for RPM Task. SCL improves SOTA from 63.9% to 95.0%. It is even capable of zero-shot generalization and learns disentangled representations! paper: (1/n) SCL is designed to discover the compositional structures of the data. In RAVEN, It learns to discover the compositions of objects, attributes, and relationships. The figure shows an example where SCL learns the concept of “size”. (2/n) By learning compositional structures, it can even generalize to unseen analogies. E.g., After learning (“color”, “constant”), and (“size”, “progression”), the model can generalize to (“color”, “progression”). (3/n) Fun fact: Hu et. al. () found that most of the previous successful neural methods exploited a short-cut solution. After removing the dataset bias, those methods suffered a lot (e.g., CoPINet went from 91.4% -> 46.3%). SCL was not affected at all. (4/n) Last but not the least, this is a joint work with Honghua Dong, @RogerGrosse, and Jimmy Ba. @marbin2050 @cjmaddison Thanks for encouraging words. We're exploring all potential of this work. @FelixHill84 Hi Felix, many thanks for encouraging words! PGM is a dataset of much larger scale, so we were not able to run the task and compare it with baselines by the deadline. But we are intending to try for sure! @iandanforth Hi Ian, thanks for pointing out the Abstraction and Reasoning Challenge. We will take a closer look to see if our model fits!",https://arxiv.org/abs/2007.04212,"In this work, we focus on an analogical reasoning task that contains rich compositional structures, Raven's Progressive Matrices (RPM). To discover compositional structures of the data, we propose the Scattering Compositional Learner (SCL), an architecture that composes neural networks in a sequence. Our SCL achieves state-of-the-art performance on two RPM datasets, with a 48.7% relative improvement on Balanced-RAVEN and 26.4% on PGM over the previous state-of-the-art. We additionally show that our model discovers compositional representations of objects' attributes (e.g., shape color, size), and their relationships (e.g., progression, union). We also find that the compositional representation makes the SCL significantly more robust to test-time domain shifts and greatly improves zero-shot generalization to previously unseen analogies. ","The Scattering Compositional Learner: Discovering Objects, Attributes,
Relationships in Analogical Reasoning",8,"['Can Neural Networks solve IQ tests? We propose Scattering Compositional Learner (SCL) for RPM Task. SCL improves SOTA from 63.9% to 95.0%. It is even capable of zero-shot generalization and learns disentangled representations!\n\npaper: \n\n(1/n) ', 'SCL is designed to discover the compositional structures of the data. In RAVEN, It learns to discover the compositions of objects, attributes, and relationships. The figure shows an example where SCL learns the concept of “size”.\n\n(2/n) https://t.co/DlQk0j2WSE', 'By learning compositional structures, it can even generalize to unseen analogies. E.g., After learning (“color”, “constant”), and (“size”, “progression”), the model can generalize to (“color”, “progression”).\n\n(3/n)', 'Fun fact: Hu et. al. (https://t.co/5uqodKNCAf) found that most of the previous successful neural methods exploited a short-cut solution. After removing the dataset bias, those methods suffered a lot (e.g., CoPINet went from 91.4% -> 46.3%). SCL was not affected at all.\n\n(4/n)', 'Last but not the least, this is a joint work with Honghua Dong, @RogerGrosse, and Jimmy Ba.', ""@marbin2050 @cjmaddison Thanks for encouraging words. We're exploring all potential of this work."", '@FelixHill84 Hi Felix, many thanks for encouraging words! PGM is a dataset of much larger scale, so we were not able to run the task and compare it with baselines by the deadline. But we are intending to try for sure!', '@iandanforth Hi Ian, thanks for pointing out the Abstraction and Reasoning Challenge. We will take a closer look to see if our model fits!']",20,07,1520
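The abstract describes composing shared neural modules so the same small network is reused across objects and attributes. A generic parameter-sharing sketch in that spirit, written in PyTorch; the layer sizes and the 9-panel input are assumptions, not the released SCL architecture.

```python
# Generic parameter-sharing pattern (not the released SCL model): one small MLP
# is shared across all object/panel slots, so features are encoded compositionally.
import torch
import torch.nn as nn

class SharedAttributeEncoder(nn.Module):
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))
    def forward(self, x):                  # x: (batch, num_slots, in_dim)
        return self.mlp(x)                 # same weights applied to every slot

encoder = SharedAttributeEncoder(in_dim=16, hidden=32, out_dim=8)
panels = torch.randn(4, 9, 16)             # hypothetical: 4 puzzles x 9 panels x 16 features
print(encoder(panels).shape)                # torch.Size([4, 9, 8])
```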
70,114,1300811433471598597,891489280861904896,Sir Panda (Zad Rafi),"New paper by @Lester_Domes and me. We discuss why uniformity is central to the validity of P-values and why some Bayesian variants don’t meet this, other units for S-values besides base-2 logs, and relation of S-values to other stat measures of information It’s an extension to our longer paper where we attempt to operationalize surprisals, P-values/S-values for alternative hypotheses, and confidence/surprisal distributions, which is now in press at BMC Medical Research Methodology As always, we welcome all feedback! @ADAlthousePhD @Lester_Domes 👀👀👀👀 @ashtroid22 @Lester_Domes Never heard of em",https://arxiv.org/abs/2008.12991,"An extended technical discussion of $S$-values and unconditional information can be found in Greenland, 2019. Here we briefly cover several technical topics mentioned in our main paper, Rafi & Greenland, 2020: Different units for (scaling of) the $S$-value besides base-2 logs (bits); the importance of uniformity (validity) of the $P$-value for interpretation of the $S$-value; and the relation of the $S$-value to other measures of statistical information about a test hypothesis or model. ","Technical Issues in the Interpretation of S-values and Their Relation to
Other Information Measures",5,"['New paper by @Lester_Domes and me. We discuss why uniformity is central to the validity of P-values and why some Bayesian variants don’t meet this, other units for S-values besides base-2 logs, and relation of S-values to other stat measures of information ', 'It’s an extension to our longer paper where we attempt to operationalize surprisals, P-values/S-values for alternative hypotheses, and confidence/surprisal distributions, which is now in press at BMC Medical Research Methodology https://t.co/ki7QUGQRyF', 'As always, we welcome all feedback!', '@ADAlthousePhD @Lester_Domes 👀👀👀👀', '@ashtroid22 @Lester_Domes Never heard of em']",20,08,613
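The S-value mentioned in this thread is just a log transform of the P-value, S = -log2(p) in bits; other bases (for example natural logs, giving nats) only rescale it. A minimal sketch:

```python
# Minimal sketch: the S-value (surprisal) corresponding to a P-value.
import math

def s_value(p, base=2):
    """Bits (base 2) of information against the test hypothesis; other bases rescale."""
    return -math.log(p, base)

for p in (0.05, 0.005, 0.5):
    print(f"p={p}: S={s_value(p):.2f} bits")   # p=0.05 -> about 4.32 bits
```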
71,112,1011892443443355649,746249674869346304,Dmitry Meshkov,"It is generally accepted that missing of loops inside a smart contract language means that it is not Turing-complete. Our new paper dispels this myth. One of the most interesting practical result is that it is easy to build Turing-complete language without runtime cost analysis (e.g. gas in #ethereum), making smart contracts much more secure.",https://arxiv.org/abs/1806.10116,"Turing-completeness of smart contract languages in blockchain systems is often associated with a variety of language features (such as loops). In opposite, we show that Turing-completeness of a blockchain system can be achieved through unwinding the recursive calls between multiple transactions and blocks instead of using a single one. We prove it by constructing a simple universal Turing machine using a small set of language features in the unspent transaction output (UTXO) model, with explicitly given relations between input and output transaction states. Neither unbounded loops nor possibly infinite validation time are needed in this approach. ",Self-Reproducing Coins as Universal Turing Machine,2,"['It is generally accepted that missing of loops inside a smart contract language means that it is not Turing-complete. Our new paper dispels this myth.', 'One of the most interesting practical result is that it is easy to build Turing-complete language without runtime cost analysis (e.g. gas in #ethereum), making smart contracts much more secure.']",18,06,351
72,209,1313754611770167296,3832040415,Ran Zmigrod,"⚠️ Attention all NLPers, when decoding dependency trees, please mind the root! ⚠️ Check out our new @emnlp2020 short paper about efficient root-constrained decoding for graph-based dependency parsers! 🌲 Joint work with @xtimv and @ryandcotterell Edge-factored non-projective dependency parsing decoding is done using MST algorithms. However, most dependency tree annotation standards do not directly translate to spanning trees! A subtle root-constraint is often required for dependency trees, only one edge may emanate from the root! The current NLP solution to this is to add a factor of n to the runtime. We introduce the NLP community to an algorithm from the 80s that correctly decodes root-constrained dependency trees without sacrificing runtime. 🌴 Our code is available at ",https://arxiv.org/abs/2010.02550,"The connection between dependency trees and spanning trees is exploited by the NLP community to train and to decode graph-based dependency parsers. However, the NLP literature has missed an important difference between the two structures: only one edge may emanate from the root in a dependency tree. We analyzed the output of state-of-the-art parsers on many languages from the Universal Dependency Treebank: although these parsers are often able to learn that trees which violate the constraint should be assigned lower probabilities, their ability to do so unsurprisingly de-grades as the size of the training set decreases. In fact, the worst constraint-violation rate we observe is 24%. Prior work has proposed an inefficient algorithm to enforce the constraint, which adds a factor of n to the decoding runtime. We adapt an algorithm due to Gabow and Tarjan (1984) to dependency parsing, which satisfies the constraint without compromising the original runtime. ",Please Mind the Root: Decoding Arborescences for Dependency Parsing,5,"['⚠️ Attention all NLPers, when decoding dependency trees, please mind the root! ⚠️\nCheck out our new @emnlp2020 short paper about efficient root-constrained decoding for graph-based dependency parsers!\n🌲 \nJoint work with @xtimv and @ryandcotterell ', 'Edge-factored non-projective dependency parsing decoding is done using MST algorithms. However, most dependency tree annotation standards do not directly translate to spanning trees!', 'A subtle root-constraint is often required for dependency trees, only one edge may emanate from the root! The current NLP solution to this is to add a factor of n to the runtime.', 'We introduce the NLP community to an algorithm from the 80s that correctly decodes root-constrained dependency trees without sacrificing runtime.', '🌴 Our code is available at https://t.co/6G7CMgeDcm']",20,10,801
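The root constraint discussed above says exactly one edge may leave the artificial ROOT node. Below is a brute-force reference decoder for tiny sentences that enforces it; the toy scores are invented, and the paper's contribution is doing this efficiently by adapting Gabow & Tarjan (1984), which this sketch does not attempt.

```python
# Brute-force root-constrained decoding for tiny sentences (illustration only).
from itertools import product

def best_rooted_tree(scores):
    """scores[h][d] = score of edge h -> d; node 0 is ROOT, dependents are 1..n."""
    n = len(scores) - 1
    best, best_heads = float("-inf"), None
    for heads in product(range(n + 1), repeat=n):      # heads[d-1] = head of word d
        if sum(h == 0 for h in heads) != 1:            # root constraint: one root edge
            continue
        ok = True                                      # reject cycles: every word must reach ROOT
        for d in range(1, n + 1):
            seen, cur = set(), d
            while cur != 0:
                if cur in seen:
                    ok = False
                    break
                seen.add(cur)
                cur = heads[cur - 1]
            if not ok:
                break
        if not ok:
            continue
        total = sum(scores[h][d] for d, h in enumerate(heads, start=1))
        if total > best:
            best, best_heads = total, heads
    return best, best_heads

toy_scores = [[0, 5, 6, 1], [0, 0, 4, 2], [0, 3, 0, 7], [0, 1, 2, 0]]  # made-up edge scores
print(best_rooted_tree(toy_scores))
```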
73,58,1296044140858191874,460069521,Andrew Francis,"Very stoked with this new little paper, joint with Mike Steel (Canterbury) and Dan Huson (Tübingen): ""Normalising phylogenetic networks”. We show how every phylogenetic network has an associated canonical normal network! @robynaraujo Oh thanks, what a nice thing to say!",https://arxiv.org/abs/2008.07797,"Rooted phylogenetic networks provide a way to describe species' relationships when evolution departs from the simple model of a tree. However, networks inferred from genomic data can be highly tangled, making it difficult to discern the main reticulation signals present. In this paper, we describe a natural way to transform any rooted phylogenetic network into a simpler canonical network, which has desirable mathematical and computational properties, and is based only on the 'visible' nodes in the original network. The method has been implemented and we demonstrate its application to some examples. ",Normalising phylogenetic networks,2,"['Very stoked with this new little paper, joint with Mike Steel (Canterbury) and Dan Huson (Tübingen): ""Normalising phylogenetic networks”.\n\nWe show how every phylogenetic network has an associated canonical normal network! \n\n', '@robynaraujo Oh thanks, what a nice thing to say!']",20,08,278
74,29,1322012306906271744,717162062837719040,Phil Armitage,"New paper! In work led by @sraymond_astro, with Nathan Kaib and @jjfplanet, we quantify how planetesimals ejected from the Solar System differ from those that survive as small bodies in reservoirs such as the Kuiper belt and Oort Cloud. Interstellar objects are the motivation. Only two are known. 'Oumuamua was small, irregularly shaped, and generally weird. Borisov was pretty boringly similar to Solar System comets. A sample of two is a thin gruel, but it's better than one! And there will be (many) more. The unexpected properties of 'Oumuamua inspired many novel ideas. It might be a hydrogen iceberg (Seligman & Laughlin), an ultra-porous aggregate (Moro-Martin), a planet tidally disrupted by a star or compact object (Cuk, Rafikov, Zhang and Lin), etc... To assess these possibilities, our goal was to understand the null hypothesis: what would planetesimals ejected from Solar System-like planetary systems look like? To do this, we tracked the fates of ~18,000 planetesimals in constrained realizations of the early Solar System. We were particularly interested in planetesimals that passed so close to planets that they would be tidally disrupted (and shredded into small pieces), or which got close enough to the Sun that their surfaces might have dried out. These could be 'Oumuamua-like. The result? The frequency of volatile loss is far higher for ejected planetesimals than for surviving ones. Even if all interstellar objects were ejected from Solar System-like systems, their physical properties should be more diverse than those of Solar System survivors. Of course, we know that not all planetary systems resemble the Solar System. As more interstellar objects are found, the goal will be to identify both the truly primordial bodies, and those that have been altered, and link them to known exoplanet populations.",https://arxiv.org/abs/2010.15147,"The orbital architecture of the Solar System is thought to have been sculpted by a dynamical instability among the giant planets. During the instability a primordial outer disk of planetesimals was destabilized and ended up on planet-crossing orbits. Most planetesimals were ejected into interstellar space but a fraction were trapped on stable orbits in the Kuiper belt and Oort cloud. We use a suite of N-body simulations to map out the diversity of planetesimals' dynamical pathways. We focus on two processes: tidal disruption from very close encounters with a giant planet, and loss of surface volatiles from repeated passages close to the Sun. We show that the rate of tidal disruption is more than a factor of two higher for ejected planetesimals than for surviving objects in the Kuiper belt or Oort cloud. Ejected planetesimals are preferentially disrupted by Jupiter and surviving ones by Neptune. Given that the gas giants contracted significantly as they cooled but the ice giants did not, taking into account the thermal evolution of the giant planets decreases the disruption rate of ejected planetesimals. The frequency of volatile loss and extinction is far higher for ejected planetesimals than for surviving ones and is not affected by the giant planets' contraction. Even if all interstellar objects were ejected from Solar System-like systems, our analysis suggests that their physical properties should be more diverse than those of Solar System small bodies as a result of their divergent dynamical histories. 
This is consistent with the characteristics of the two currently-known interstellar objects. ","Survivor bias: divergent fates of the Solar System's ejected vs.
persisting planetesimals",7,"['New paper! In work led by @sraymond_astro, with Nathan Kaib and @jjfplanet, we quantify how planetesimals ejected from the Solar System differ from those that survive as small bodies in reservoirs such as the Kuiper belt and Oort Cloud.\n\n', ""Interstellar objects are the motivation. Only two are known. 'Oumuamua was small, irregularly shaped, and generally weird. Borisov was pretty boringly similar to Solar System comets. A sample of two is a thin gruel, but it's better than one! And there will be (many) more."", ""The unexpected properties of 'Oumuamua inspired many novel ideas. It might be a hydrogen iceberg (Seligman & Laughlin), an ultra-porous aggregate (Moro-Martin), a planet tidally disrupted by a star or compact object (Cuk, Rafikov, Zhang and Lin), etc..."", 'To assess these possibilities, our goal was to understand the null hypothesis: what would planetesimals ejected from Solar System-like planetary systems look like? To do this, we tracked the fates of ~18,000 planetesimals in constrained realizations of the early Solar System.', ""We were particularly interested in planetesimals that passed so close to planets that they would be tidally disrupted (and shredded into small pieces), or which got close enough to the Sun that their surfaces might have dried out. These could be 'Oumuamua-like."", 'The result? The frequency of volatile loss is far higher for ejected planetesimals than for surviving ones. Even if all interstellar objects were ejected from Solar System-like systems, their physical properties should be more diverse than those of Solar System survivors.', 'Of course, we know that not all planetary systems resemble the Solar System. As more interstellar objects are found, the goal will be to identify both the truly primordial bodies, and those that have been altered, and link them to known exoplanet populations.']",20,10,1842
75,24,1432451504569528320,19333650,Vedant Chandra,"I'm pleased to announce our new paper, one of the first scientific results from the fifth-generation Sloan Digital Sky Survey: A 99-minute Double-lined White Dwarf Binary from SDSS-V Searching for binary white dwarfs has been a central theme of my undergraduate research, and we've developed several tools to help us find these systems. Our pipeline flagged this candidate due to variations in the absorption lines across SDSS-V sub-exposures We quickly obtained time-resolved @GeminiObs spectra under the Fast-Turnaround program (shown here), and also got UV fluxes from the @NASAUniverse Swift space observatory. These helped us solve the orbital and stellar parameters of the system The upshot: this is a 99-minute WD+WD binary in which both stars are visible on the spectrum. This 'double-lined' or 'SB2' nature is relatively rare (only ~ 20 such WD+WD systems are known), and it allows us to precisely estimate the masses of both WDs The short period and close distance of 113 pc (from @ESAGaia) imply that this system is a powerful source of millihertz gravitational waves, detectable by future space-based observatories. Due to the precisely determined system parameters, it could even be a verification source. Gravitational wave emission will cause the system's orbit to shrink over time, and we estimate that the two stars will get close enough to interact and merge within ~ 220 million years ('soon' in astronomical terms...) Once the stars merge, they will probably create a 'reborn' helium star that will eventually evolve into a single helium WD. There might be a few thermonuclear explosions along the way, but the system is probably not massive enough to produce a Type Ia supernova. This paper went from discovery to publication in exactly four months, which would not have been possible without the fantastic resources @GeminiObs and @NASAUniverse Swift provide for fast-turnaround proposals. Thanks to the entire @sdssurveys @MilkyWayMapper collaboration for supporting this work, and especially my co-authors, some of whom are on Twitter: @hc_hwang @jotajotahermes @evbauer_astro. We look forward to finding more interesting systems in SDSS-V! @SuperASASSN the thread you requested! @StellarTayar We tried sketching a rough picture in Section 5, but it's quite uncertain. I think it's plausible the original progenitors were ~ 1-1.5 Msun, and also that the 0.32 Msun WD formed first. Let me know if there's anything else I can clarify!",https://arxiv.org/abs/2108.11968,"We report the discovery of SDSS J133725.26+395237.7 (hereafter SDSS J1337+3952), a double-lined white dwarf (WD+WD) binary identified in early data from the fifth generation Sloan Digital Sky Survey (SDSS-V). The double-lined nature of the system enables us to fully determine its orbital and stellar parameters with follow-up Gemini spectroscopy and Swift UVOT ultraviolet fluxes. The system is nearby ($d = 113$ pc), and consists of a $0.51\, M_\odot$ primary and a $0.32\, M_\odot$ secondary. SDSS J1337+3952 is a powerful source of gravitational waves in the millihertz regime, and will be detectable by future space-based interferometers. Due to this gravitational wave emission, the binary orbit will shrink down to the point of interaction in $\approx 220$ Myr. The inferred stellar masses indicate that SDSS J1337+3952 will likely not explode as a Type Ia supernova (SN Ia). 
Instead, the system will probably merge and evolve into a rapidly rotating helium star, and could produce an under-luminous thermonuclear supernova along the way. The continuing search for similar systems in SDSS-V will grow the statistical sample of double-degenerate binaries across parameter space, constraining models of binary evolution and SNe Ia. ",A 99-minute Double-lined White Dwarf Binary from SDSS-V,11,"[""I'm pleased to announce our new paper, one of the first scientific results from the fifth-generation Sloan Digital Sky Survey: A 99-minute Double-lined White Dwarf Binary from SDSS-V "", ""Searching for binary white dwarfs has been a central theme of my undergraduate research, and we've developed several tools to help us find these systems. Our pipeline flagged this candidate due to variations in the absorption lines across SDSS-V sub-exposures https://t.co/U6TRXWFw7p"", 'We quickly obtained time-resolved @GeminiObs spectra under the Fast-Turnaround program (shown here), and also got UV fluxes from the @NASAUniverse Swift space observatory. These helped us solve the orbital and stellar parameters of the system https://t.co/sGq0tbW2ZX', ""The upshot: this is a 99-minute WD+WD binary in which both stars are visible on the spectrum. This 'double-lined' or 'SB2' nature is relatively rare (only ~ 20 such WD+WD systems are known), and it allows us to precisely estimate the masses of both WDs https://t.co/nC1xPScWEI"", 'The short period and close distance of 113 pc (from @ESAGaia) imply that this system is a powerful source of millihertz gravitational waves, detectable by future space-based observatories. Due to the precisely determined system parameters, it could even be a verification source. https://t.co/CLPKJMhkwL', ""Gravitational wave emission will cause the system's orbit to shrink over time, and we estimate that the two stars will get close enough to interact and merge within ~ 220 million years ('soon' in astronomical terms...)"", ""Once the stars merge, they will probably create a 'reborn' helium star that will eventually evolve into a single helium WD. There might be a few thermonuclear explosions along the way, but the system is probably not massive enough to produce a Type Ia supernova."", 'This paper went from discovery to publication in exactly four months, which would not have been possible without the fantastic resources @GeminiObs and @NASAUniverse Swift provide for fast-turnaround proposals.', 'Thanks to the entire @sdssurveys @MilkyWayMapper collaboration for supporting this work, and especially my co-authors, some of whom are on Twitter: @hc_hwang @jotajotahermes @evbauer_astro. We look forward to finding more interesting systems in SDSS-V!', '@SuperASASSN the thread you requested!', ""@StellarTayar We tried sketching a rough picture in Section 5, but it's quite uncertain. I think it's plausible the original progenitors were ~ 1-1.5 Msun, and also that the 0.32 Msun WD formed first. Let me know if there's anything else I can clarify!""]",21,08,2491
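A back-of-the-envelope check of the roughly 220 Myr merger time quoted above, combining Kepler's third law with the Peters (1964) circular-orbit gravitational-wave decay time for the quoted parameters (P = 99 min, M1 = 0.51 Msun, M2 = 0.32 Msun). This is not the paper's calculation, and the physical constants are approximate.

```python
# Order-of-magnitude check: Kepler's third law gives the separation, the Peters (1964)
# circular-orbit formula gives the gravitational-wave inspiral time.
import math

G, c, M_sun, yr = 6.674e-11, 2.998e8, 1.989e30, 3.156e7
m1, m2 = 0.51 * M_sun, 0.32 * M_sun
P = 99 * 60.0                                              # orbital period in seconds

a = (G * (m1 + m2) * P**2 / (4 * math.pi**2)) ** (1 / 3)   # semi-major axis
t_merge = 5 / 256 * c**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2))
print(f"a = {a / 6.957e8:.2f} R_sun, t_merge = {t_merge / yr / 1e6:.0f} Myr")  # a few hundred Myr
```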
76,140,1126332235215294464,4475055297,Ming-Yu Liu,"Check out our new #GAN work on translating images to unseen domains in the test time with few example images. Live demo Project page Paper Video #NVIDIA Brought to you by @xunhuang1995 @arunmallya #TeroKarras, #TimoAila of #StyleGAN, @jaakkolehtinen, and @jankautz @NvidiaAI The web demo might be buggy. I know nothing about Javascript until last week. So please read the instruction carefully for run the demo. It currently works only on Chrome and Firefox and you have to click ""Load unsafe scripts"" or ""Disable protection for now"" buttons. Check out our paper for more results including translating all kinds of foods to Chowmein. PetSwap demo video live demo available at It works for non standard pet too. The PetSwap model is trained using carnivorous animals. It might be funny when you input images of other kinds of animals. @chris_j_beckham Man. I am just having fun. :)",https://arxiv.org/abs/1905.01723,"Unsupervised image-to-image translation methods learn to map images in a given class to an analogous image in a different class, drawing on unstructured (non-registered) datasets of images. While remarkably successful, current methods require access to many images in both source and destination classes at training time. We argue this greatly limits their use. Drawing inspiration from the human capability of picking up the essence of a novel object from a small number of examples and generalizing from there, we seek a few-shot, unsupervised image-to-image translation algorithm that works on previously unseen target classes that are specified, at test time, only by a few example images. Our model achieves this few-shot generation capability by coupling an adversarial training scheme with a novel network design. Through extensive experimental validation and comparisons to several baseline methods on benchmark datasets, we verify the effectiveness of the proposed framework. Our implementation and datasets are available at this https URL . ",Few-Shot Unsupervised Image-to-Image Translation,8,"['Check out our new #GAN work on translating images to unseen domains in the test time with few example images.\nLive demo \nProject page \nPaper \nVideo \n#NVIDIA ', 'Brought to you by @xunhuang1995 @arunmallya #TeroKarras, #TimoAila of #StyleGAN, @jaakkolehtinen, and @jankautz @NvidiaAI', 'The web demo might be buggy. I know nothing about Javascript until last week. So please read the instruction carefully for run the demo. It currently works only on Chrome and Firefox and you have to click ""Load unsafe scripts"" or ""Disable protection for now"" buttons.', 'Check out our paper for more results including translating all kinds of foods to Chowmein. https://t.co/iYwv3wh5Ts', 'PetSwap demo video\nhttps://t.co/CRAoLundVy\n\nlive demo available at https://t.co/KeYHIDpgcx', 'It works for non standard pet too. https://t.co/GcYOPA3Oix', 'The PetSwap model is trained using carnivorous animals. It might be funny when you input images of other kinds of animals. https://t.co/0Lq1BVuJUa', '@chris_j_beckham Man. I am just having fun. :)']",19,05,950
77,158,1394911374212583425,494870213,Thomas Haworth,"New paper that I was involved with out today First detection of a disk free of volatile elements around a young A-type star: A sign of collisions between rocky planets? In this paper we find odd material around a young star. It has lots of refractories elements (stuff usually in rocks) but no volatiles (stuff we usually find as gas in discs around stars). The only other place we see this is planetary debris around much older white dwarfs so what has happened? Its still somewhat uncertain, though one compelling possibility is that two young planets have collided and what we are seeing is the debris from this @davecl42 How had I not thought of that before!",https://arxiv.org/abs/2105.08327,"Aims. We present the first detailed analysis of the astrophysical parameters of the poorly studied Sco-Cen member HD 152384 and its circumstellar environment. Methods. We analyze newly obtained optical-near-IR XSHOOTER spectra, as well as archival TESS data, of HD 152384. In addition, we use literature photometric data to construct a detailed spectral energy distribution (SED) of the star. Results. The photospheric absorption lines in the spectrum of HD 152384 are characteristic of a A0 V star, for which we derive a stellar mass of 2.1 +/- 0.1 M_sun and a stellar age > 4.5 Myr. Superimposed on the photospheric absorption, the optical spectrum also displays double-peaked emission lines of Ca II, Fe I, Mg I and Si I, typical of circumstellar disks. Notably, all Hydrogen and Helium lines appear strictly in absorption. A toy model shows that the observed emission line profiles can be reproduced by emission from a compact (radius < 0.3 au) disk seen at an inclination of ~24 degrees. Further evidence for the presence of circumstellar material comes from the detection of a moderate infrared excess in the SED, similar to those found in extreme debris disk systems. Conclusions. We conclude that HD 152384 is surrounded by a tenuous circumstellar disk which, although rich in refractory elements, is highly depleted of volatile elements. To the best of our knowledge such a disk is unique within the group of young stars. However, it is reminiscent of the disks seen in some white dwarfs, which have been attributed to the disruption of rocky planets. We suggest that the disk around HD 152384 may have a similar origin and may be due to collisions in a newly formed planetary system. ","First detection of a disk free of volatile elements around a young
A-type star: A sign of collisions between rocky planets?",4,"['New paper that I was involved with out today\n\nFirst detection of a disk free of volatile elements around a young A-type star: A sign of collisions between rocky planets?\n\n', 'In this paper we find odd material around a young star. It has lots of refractories elements (stuff usually in rocks) but no volatiles (stuff we usually find as gas in discs around stars). \n\nThe only other place we see this is planetary debris around much older white dwarfs', 'so what has happened? Its still somewhat uncertain, though one compelling possibility is that two young planets have collided and what we are seeing is the debris from this', '@davecl42 How had I not thought of that before!']",21,05,670
78,210,1308492596285779969,1140066312380567553,Bryan Wilder,"I'm excited to share a project long in the making. We designed algorithms to find influential nodes in social networks, applied to HIV prevention for homeless youth. A trial with 713 youth over 2 years showed significant benefits. Paper just posted, (1/9) Homeless youth have up to 10x HIV prevalence vs general population. One intervention is to recruit peer leaders from the youth to promote protective behaviors. But how to choose the most influential peer leaders? (2/9) There's tons of computer science work on finding influential nodes in a social network (""influence maximization""). But, mostly targeted at advertising/online social networks...not easily applicable to community health. (3/9) What are the new challenges? In a word, data. Who's connected to who? How will information diffuse? None of this is known. Gathering network structure = time consuming, face to face interviews with youth. (4/9) We developed algorithms to efficiently subsample the network, only requiring about 20% of the effort in data collection. Then, we designed a robust optimization algorithm to identify influential nodes even under uncertainty. (5/9) It worked in simulation but what about reality? We ran a clinical trial at centers for homeless youth in LA. Trial compared three arms: interventions with our algorithm, selecting highest-degree youth (standard baseline), and no intervention. 713 youth total over 2 years. (6/9) The results just out: in the algorithm arm, statistically significant reduction in key outcome, condomless anal sex (OR = 0.69). No significant change for the other arms. AI helped! (7/9) Key takeaways in the paper (): simple, robust, data-efficient algorithms are critical for public health domains. Beyond the algorithm though, always requires community trust. (8/9) It was truly amazing to work with this close-knit team of social work/AI researchers: @MilindTambe_AI, @EricRicePhD, @onasch_vera, Graham Diguiseppe, @AmulyaYadav19 and many more at @CAIS_USC and @HCRCS. (9/9)",https://arxiv.org/abs/2009.09559,"Youth experiencing homelessness (YEH) are subject to substantially greater risk of HIV infection, compounded both by their lack of access to stable housing and the disproportionate representation of youth of marginalized racial, ethnic, and gender identity groups among YEH. A key goal for health equity is to improve adoption of protective behaviors in this population. One promising strategy for intervention is to recruit peer leaders from the population of YEH to promote behaviors such as condom usage and regular HIV testing to their social contacts. This raises a computational question: which youth should be selected as peer leaders to maximize the overall impact of the intervention? We developed an artificial intelligence system to optimize such social network interventions in a community health setting. We conducted a clinical trial enrolling 713 YEH at drop-in centers in a large US city. The clinical trial compared interventions planned with the algorithm to those where the highest-degree nodes in the youths' social network were recruited as peer leaders (the standard method in public health) and to an observation-only control group. Results from the clinical trial show that youth in the AI group experience statistically significant reductions in key risk behaviors for HIV transmission, while those in the other groups do not. 
This provides, to our knowledge, the first empirical validation of the usage of AI methods to optimize social network interventions for health. We conclude by discussing lessons learned over the course of the project which may inform future attempts to use AI in community-level interventions. ","Clinical trial of an AI-augmented intervention for HIV prevention in
youth experiencing homelessness",9,"[""I'm excited to share a project long in the making. We designed algorithms to find influential nodes in social networks, applied to HIV prevention for homeless youth. A trial with 713 youth over 2 years showed significant benefits. Paper just posted, (1/9)"", 'Homeless youth have up to 10x HIV prevalence vs general population. One intervention is to recruit peer leaders from the youth to promote protective behaviors. But how to choose the most influential peer leaders? (2/9)', 'There\'s tons of computer science work on finding influential nodes in a social network (""influence maximization""). But, mostly targeted at advertising/online social networks...not easily applicable to community health. (3/9)', ""What are the new challenges? In a word, data. Who's connected to who? How will information diffuse? None of this is known. Gathering network structure = time consuming, face to face interviews with youth. (4/9)"", 'We developed algorithms to efficiently subsample the network, only requiring about 20% of the effort in data collection. Then, we designed a robust optimization algorithm to identify influential nodes even under uncertainty. (5/9)', 'It worked in simulation but what about reality? We ran a clinical trial at centers for homeless youth in LA. Trial compared three arms: interventions with our algorithm, selecting highest-degree youth (standard baseline), and no intervention. 713 youth total over 2 years. (6/9)', 'The results just out: in the algorithm arm, statistically significant reduction in key outcome, condomless anal sex (OR = 0.69). No significant change for the other arms. AI helped! (7/9)', 'Key takeaways in the paper (https://t.co/yNK15YDmJ5): simple, robust, data-efficient algorithms are critical for public health domains. Beyond the algorithm though, always requires community trust. (8/9)', 'It was truly amazing to work with this close-knit team of social work/AI researchers: @MilindTambe_AI, @EricRicePhD, @onasch_vera, Graham Diguiseppe, @AmulyaYadav19 and many more at @CAIS_USC and @HCRCS. (9/9)']",20,09,2012
79,98,1128099481587601408,2875482557,Dr Laura McKemmish,"New @Exomol TiO line list, Toto, is now available on exomol website with paper on arxiv My favourite image from the paper... we get the high resolution spectra for TiO correct in the same region as the 2015 paper from @HoeijmakersJens found errors in previous line lists. So many electronic states in TiO... Getting the high accuracy data correct really relied on our previous MARVEL analysis (#openaccess) now also available on the new MARVEL website here #compchem ists: open problem of high importance to Exoplanet astronomers -- get TiO electronic surfaces, transition dipole moments and spin orbit couplings quantiatively and qualitatively correct for high electronic states! Story of how Toto go its name: @TomRivlin sang ""Old MacDonald had a farm: Ti -- Ti -- O"" one too many times. TiTiO --> Tito. I thought this was dog in Wizards of Oz, but this was Toto and I'm an Aussie, so... @exomol ",https://arxiv.org/abs/1905.04587,"Accurate line lists are crucial for correctly modelling a variety of astrophysical phenomena, including stellar photospheres and the atmospheres of extra-solar planets. This paper presents a new line database Toto for the main isotopologues of titanium oxide (TiO): $^{46}$Ti$^{16}$O, $^{47}$Ti$^{16}$O, $^{48}$Ti$^{16}$O, $^{49}$Ti$^{16}$O and $^{50}$Ti$^{16}$O. The TiO line list contains transitions with wave-numbers up to 30,000 cm$^{-1}$ ie long-wards of 0.33 $\mu$m. The Toto line list includes all dipole-allowed transitions between 13 low-lying electronic states (X $^3\Delta$, a $^1\Delta$, d $^1\Sigma^+$, E $^3\Pi$, A $^3\Phi$ B $^3\Pi$, C $^3\Delta$, b $^1\Pi$, c $^1\Phi$, f $^1\Delta$, e $^1\Sigma^+$). Ab initio potential energy curves (PECs) are computed at the icMRCI level and combined with spin-orbit and other coupling curves. These PECs and couplings are iteratively refined to match known empirical energy levels. Accurate line intensities are generated using ab initio dipole moment curves. The Toto line lists are appropriate for temperatures below 5000 K and contain 30 million transitions for TiO; it is made available in electronic form via the CDS data centre and via www.exomol.com. Tests of the line lists show greatly improved agreement with observed spectra for objects such as M-dwarfs GJ876 and GL581. ",ExoMol Molecular linelists -- XXXIII. The spectrum of Titanium Oxide,6,"['New @Exomol TiO line list, Toto, is now available on exomol website with paper on arxiv ', 'My favourite image from the paper... we get the high resolution spectra for TiO correct in the same region as the 2015 paper from @HoeijmakersJens found errors in previous line lists. https://t.co/R01ivrtZOb', 'So many electronic states in TiO... https://t.co/Qjncxi0pwU', 'Getting the high accuracy data correct really relied on our previous MARVEL analysis (#openaccess) https://t.co/93Eoza7ITz now also available on the new MARVEL website here https://t.co/KkwQEPYfED', '#compchem ists: open problem of high importance to Exoplanet astronomers -- get TiO electronic surfaces, transition dipole moments and spin orbit couplings quantiatively and qualitatively correct for high electronic states!', 'Story of how Toto go its name: @TomRivlin sang ""Old MacDonald had a farm: Ti -- Ti -- O"" one too many times. TiTiO --> Tito. I thought this was dog in Wizards of Oz, but this was Toto and I\'m an Aussie, so... @exomol https://t.co/H5r4r4Y8bj']",19,05,956
80,8,1212440767077244929,313814795,M. Sohaib Alam,"My new paper, from the last decade, explores the feasibility of using reinforcement learning for quantum programming, particularly state preparation and gate compilation. (1/n) It does so by forming finite MDPs for single-qubit state prep and gate compilation, exactly solving for these cases using policy iteration, and comparing against brute-force calculations, finding that the two can be made to agree. (2/n) This shows that the reinforcement learning notion of optimality can be made to agree with our intuitive notion of ""optimality"" in the sense of the shortest possible quantum circuit to prep a state, or compile a gate, up to some accuracy (3/n) While policy iteration/dynamic programming is fine to use for the 1q problems set up here, it won't be practical for larger q scenarios, but it does mean that you could throw reinforcement learning techniques at the problem, and hope to find optimally short circuits (4/n) My guess is similar considerations would play a big part for larger q states, in particular choice of good coordinate systems for state and action spaces, the form and scale of the discretization etc (5/n) I suppose one way to express the results in plainer but drastically simplifying words would be that a robot could be trained through rewards to produce the shortest possible quantum circuit necessary to prepare a state or compile a gate, up to some accuracy. (n/n) @razaa_aasad Yep, and it's duly acknowledged in the references! Drawing from your paper, one could similarly hope that RL would discover close-to-optimal sequences for state prep + gate compilation at the logical gate level. @razaa_aasad As my new paper shows, exact-optimality, i.e. shortest length circuits, is a feature (upto caveats) of the underlying MDP for the single-qubit case. RL can therefore be expected to find at least close-to-optimal gate sequences for state prep/compilation in the n-qubit case.",https://arxiv.org/abs/1912.12002,"Reinforcement learning has witnessed recent applications to a variety of tasks in quantum programming. The underlying assumption is that those tasks could be modeled as Markov Decision Processes (MDPs). Here, we investigate the feasibility of this assumption by exploring its consequences for two of the simplest tasks in quantum programming: state preparation and gate compilation. By forming discrete MDPs, focusing exclusively on the single-qubit case, we solve for the optimal policy exactly through policy iteration. We find optimal paths that correspond to the shortest possible sequence of gates to prepare a state, or compile a gate, up to some target accuracy. As an example, we find sequences of H and T gates with length as small as 11 producing ~99% fidelity for states of the form (HT)^{n} |0> with values as large as n=10^{10}. This work provides strong evidence that reinforcement learning can be used for optimal state preparation and gate compilation for larger qubit spaces. ",Quantum Logic Gate Synthesis as a Markov Decision Process,8,"['My new paper, from the last decade, explores the feasibility of using reinforcement learning for quantum programming, particularly state preparation and gate compilation. (1/n)\n\n', 'It does so by forming finite MDPs for single-qubit state prep and gate compilation, exactly solving for these cases using policy iteration, and comparing against brute-force calculations, finding that the two can be made to agree. 
(2/n)', 'This shows that the reinforcement learning notion of optimality can be made to agree with our intuitive notion of ""optimality"" in the sense of the shortest possible quantum circuit to prep a state, or compile a gate, up to some accuracy (3/n)', ""While policy iteration/dynamic programming is fine to use for the 1q problems set up here, it won't be practical for larger q scenarios, but it does mean that you could throw reinforcement learning techniques at the problem, and hope to find optimally short circuits (4/n)"", 'My guess is similar considerations would play a big part for larger q states, in particular choice of good coordinate systems for state and action spaces, the form and scale of the discretization etc (5/n)', 'I suppose one way to express the results in plainer but drastically simplifying words would be that a robot could be trained through rewards to produce the shortest possible quantum circuit necessary to prepare a state or compile a gate, up to some accuracy. (n/n)', ""@razaa_aasad Yep, and it's duly acknowledged in the references! Drawing from your paper, one could similarly hope that RL would discover close-to-optimal sequences for state prep + gate compilation at the logical gate level."", '@razaa_aasad As my new paper shows, exact-optimality, i.e. shortest length circuits, is a feature (upto caveats) of the underlying MDP for the single-qubit case. RL can therefore be expected to find at least close-to-optimal gate sequences for state prep/compilation in the n-qubit case.']",19,12,1920
81,64,1506659756701749251,2235411914,Surya Ganguli,Our new #iclr2022 paper - Towards a foundation model for robotics: one transformer to control many new robot morphologies through large-scale pre-training on another set of morphologies. Expertly lead by @agrimgupta92 & collab w/@drfeifei paper: thread -> ,https://arxiv.org/abs/2203.11931,"Multiple domains like vision, natural language, and audio are witnessing tremendous progress by leveraging Transformers for large scale pre-training followed by task specific fine tuning. In contrast, in robotics we primarily train a single robot for a single task. However, modular robot systems now allow for the flexible combination of general-purpose building blocks into task optimized morphologies. However, given the exponentially large number of possible robot morphologies, training a controller for each new design is impractical. In this work, we propose MetaMorph, a Transformer based approach to learn a universal controller over a modular robot design space. MetaMorph is based on the insight that robot morphology is just another modality on which we can condition the output of a Transformer. Through extensive experiments we demonstrate that large scale pre-training on a variety of robot morphologies results in policies with combinatorial generalization capabilities, including zero shot generalization to unseen robot morphologies. We further demonstrate that our pre-trained policy can be used for sample-efficient transfer to completely new robot morphologies and tasks. ",MetaMorph: Learning Universal Controllers with Transformers,1,['Our new #iclr2022 paper - Towards a foundation model for robotics: one transformer to control many new robot morphologies through large-scale pre-training on another set of morphologies. Expertly lead by @agrimgupta92 & collab w/@drfeifei paper: thread -> '],22,03,272
82,38,1288395405831675904,54849207,Ian Harrison,"New paper with @Tessa_M_Baker on arXiv: Main point is in this figure... When it comes to constraining modified gravity, LIGO standard sirens help out LSS a bit, but LISA standard sirens help a lot! @Tessa_M_Baker (personal note/mea culpa: this paper has had a couple of false starts over the past year or two, so it is great for it to be public finally!) @SeshNadathur @Tessa_M_Baker No, we made the decision not to go through with the machinery for fsigma_8 forecasts. Some insight can be had in the propto O_de case by comparing to Table 1 (although those are Fishers).",https://arxiv.org/abs/2007.13791,"The first multi-messenger gravitational wave event has had a transformative effect on the space of modified gravity models. In this paper we study the enhanced tests of gravity that are possible with a future set of gravitational wave standard siren events. We perform MCMC constraint forecasts for parameters in Horndeski scalar-tensor theories. In particular, we focus on the complementarity of gravitational waves with electromagnetic large-scale structure data from galaxy surveys. We find that the addition of fifty low redshift ($z \lesssim 0.2$) standard sirens from the advanced LIGO network offers only a modest improvement (a factor 1.1 -- 1.3, where 1.0 is no improvement) over existing constraints from electromagnetic observations of large-scale structures. In contrast, high redshift (up to $z \sim 10$) standard sirens from the future LISA satellite will improve constraints on the time evolution of the Planck mass in Horndeski theories by a factor $\sim 5$. By simulating different scenarios, we find this improvement to be robust to marginalisation over unknown merger inclination angles and to variation between three plausible models for the merger source population. ","Constraining Scalar-Tensor Modified Gravity with Gravitational Waves and
Large Scale Structure Surveys",3,"['New paper with @Tessa_M_Baker on arXiv:\n\nMain point is in this figure... When it comes to constraining modified gravity, LIGO standard sirens help out LSS a bit, but LISA standard sirens help a lot! ', '@Tessa_M_Baker (personal note/mea culpa: this paper has had a couple of false starts over the past year or two, so it is great for it to be public finally!)', '@SeshNadathur @Tessa_M_Baker No, we made the decision not to go through with the machinery for fsigma_8 forecasts. Some insight can be had in the propto O_de case by comparing to https://t.co/eCyC8sy3Dt Table 1 (although those are Fishers).']",20,07,592
83,111,1347539745371516936,1908579919,Alexa,"The timing feels a little weird on this but...I have a new paper out the arxiv today looking at the stellar population gradients of the Dragonfly 44! They're pretty weird! Here's how DF44 looks with respect to other dwarf galaxies The upshot of all this is that the internal properties of DF44 (the stellar pops and the kinematics from @DokkumPieter's earlier paper using the same amazing KCWI data) suggest a very different SFH than we see in what I like to call the ""canonical"" dwarf population. @brant_robertson Thanks! It was a lot of fun working on it",https://arxiv.org/abs/2101.02220,"We use the Keck Cosmic Web Imager integral-field unit spectrograph to: 1) measure the global stellar population parameters for the ultra-diffuse galaxy (UDG) Dragonfly 44 (DF44) to much higher precision than previously possible for any UDG, and 2) for the first time measure spatially-resolved stellar population parameters of a UDG. We find that DF44 falls below the mass--metallicity relation established by canonical dwarf galaxies both in and beyond the Local Group. We measure a flat radial age gradient ($m_{\rm age} \sim +0.01_{-0.08}^{+0.07}$ log Gyr kpc$^{-1}$) and a flat-to-positive metallicity gradient ($m_{\rm [Fe/H]} \sim +0.08_{-0.11}^{+0.11}$ dex kpc$^{-1}$), which are inconsistent with the gradients measured in similarly pressure-supported dwarf galaxies. We also measure a flat-to-negative [Mg/Fe] gradient ($m_{\rm [Mg/Fe]} \sim -0.18_{-0.17}^{+0.17}$ dex kpc$^{-1}$) such that the central $1.5$ kpc of DF44 has stellar population parameters comparable to metal-poor globular clusters. Overall, DF44 does not have internal properties similar to other dwarf galaxies and is inconsistent with it having been puffed up through a prolonged, bursty star-formation history, as suggested by some simulations. Rather, the evidence indicates that DF44 experienced an intense epoch of ""inside-out"" star formation and then quenched early and catastrophically, such that star-formation was cut off more quickly than in canonical dwarf galaxies. ","Spatially Resolved Stellar Spectroscopy of the Ultra-diffuse Galaxy
Dragonfly 44. III. Evidence for an Unexpected Star-Formation History",4,"[""The timing feels a little weird on this but...I have a new paper out the arxiv today looking at the stellar population gradients of the Dragonfly 44! They're pretty weird!\n\n "", ""Here's how DF44 looks with respect to other dwarf galaxies https://t.co/VjNr6yaVuF"", 'The upshot of all this is that the internal properties of DF44 (the stellar pops and the kinematics from @DokkumPieter\'s earlier paper using the same amazing KCWI data) suggest a very different SFH than we see in what I like to call the ""canonical"" dwarf population.', '@brant_robertson Thanks! It was a lot of fun working on it']",21,01,577
84,159,1366723676155092994,1366169431496343557,Matthew Whelan,"How do Hippocampal Reverse Replays support Biological Reinforcement Learning? Can they be used in Robotic RL? With @EGVasilaki and @tonyjprescott, we implement a computational model of reverse replays in the biomimetic robot @CqRMiRo to find out. Preprint: ",https://arxiv.org/abs/2102.11914,"Hippocampal reverse replay is thought to contribute to learning, and particularly reinforcement learning, in animals. We present a computational model of learning in the hippocampus that builds on a previous model of the hippocampal-striatal network viewed as implementing a three-factor reinforcement learning rule. To augment this model with hippocampal reverse replay, a novel policy gradient learning rule is derived that associates place cell activity with responses in cells representing actions. This new model is evaluated using a simulated robot spatial navigation task inspired by the Morris water maze. Results show that reverse replay can accelerate learning from reinforcement, whilst improving stability and robustness over multiple trials. As implied by the neurobiological data, our study implies that reverse replay can make a significant positive contribution to reinforcement learning, although learning that is less efficient and less stable is possible in its absence. We conclude that reverse replay may enhance reinforcement learning in the mammalian hippocampal-striatal system rather than provide its core mechanism. ",A Robotic Model of Hippocampal Reverse Replay for Reinforcement Learning,1,"['How do Hippocampal Reverse Replays support Biological Reinforcement Learning? Can they be used in Robotic RL? With @EGVasilaki and @tonyjprescott, we implement a computational model of reverse replays in the biomimetic robot @CqRMiRo to find out. Preprint: ']",21,02,270
85,87,1469385595814232067,1324170692401639424,Robert McGehee,"I'm very excited about my new paper with @GillyElor and Aaron Pierce: ""Maximizing Direct Detection with HYPER Dark Matter."" So, here's your Fri physics🧵1/n In this paper, we addressed 2 Qs: #1 What is the maximum cross section for sub-GeV DM scattering off nucleons? #2 Is there a DM candidate which may be detected at future experiments with a cross section as large as this maximum while still accounting for its relic abundance. 2/n The answer to #1: 10^(-36) - 10^(-30) cm^2 for DM masses from 10 keV - 100 MeV. We estimated this by only including present-day bounds on DM and a scalar mediator, which connects the DM to the visible sector. 3/n To establish the model-independence of this max cross section, we considered variations of our starting simple assumptions. No common vector mediators (visibly and invisibly decaying dark photons, B-L, B) had a larger max cross section! 4/n The answer to #2: yes! We named them HighlY interactive ParticlE Relics (HYPERs). 5/n In HYPER models, a dark sector phase transition causes the mediator to decrease its mass to its present-day value. This occurs after the DM abundance freezes-in and boosts the present-day direct detection cross section. 6/n Since DM-SM interactions get a late-time ""boost,"" we must also verify that the DM abundance doesn't change. For parts of HYPER (parameter) space, DM-number-changing processes must be suppressed, causing the direct detection cross section to be smaller than the maximum one. 7/n But, for many HYPERs, we find that they are at (or fairly close to) the maximum consistent cross section! 8/n This is particularly exciting because HYPERs populate a parameter space which is imminently testable by many future direct detection efforts but has few DM benchmarks. 9/n In the future, we want to do the same analysis and HYPER model building for sub-GeV DM scattering off electrons which would require an even lower dark sector phase transition temperature. 10/n It is an interesting question as to whether such a low phase transition temperature could modify or remove bounds on the mediator from HB stars, which were essential in our derivation of the maximum cross section for this work. End🧵",https://arxiv.org/abs/2112.03920,"We estimate the maximum direct detection cross section for sub-GeV dark matter scattering off nucleons. For dark matter masses in the range of $10 \text{ keV }- 100 \text{ MeV}$, cross sections greater than $10^{-36}$- $10^{-30} \,\text{cm}^2$ seem implausible. We introduce a dark matter candidate which realizes this maximum cross section: HighlY interactive ParticlE Relics (HYPERs). After HYPERs freeze-in, a dark sector phase transition decreases the mass of the mediator which connects HYPERs to the visible sector. This increases the HYPER's direct detection cross section, but in such a way as to leave the HYPER's abundance unaffected and avoid conflict with measurements of Big Bang Nucleosynthesis and the Cosmic Microwave Background. HYPERs present a benchmark for direct detection experiments in a parameter space with few known dark matter models. ",Maximizing Direct Detection with HYPER Dark Matter,11,"['I\'m very excited about my new paper with @GillyElor and Aaron Pierce: ""Maximizing Direct Detection with HYPER Dark Matter."" So, here\'s your Fri physics🧵1/n\n\n ', 'In this paper, we addressed 2 Qs: #1 What is the maximum cross section for sub-GeV DM scattering off nucleons? 
#2 Is there a DM candidate which may be detected at future experiments with a cross section as large as this maximum while still accounting for its relic abundance. 2/n', 'The answer to #1: 10^(-36) - 10^(-30) cm^2 for DM masses from 10 keV - 100 MeV. We estimated this by only including present-day bounds on DM and a scalar mediator, which connects the DM to the visible sector. 3/n', 'To establish the model-independence of this max cross section, we considered variations of our starting simple assumptions. No common vector mediators (visibly and invisibly decaying dark photons, B-L, B) had a larger max cross section! 4/n', 'The answer to #2: yes! We named them HighlY interactive ParticlE Relics (HYPERs). 5/n', 'In HYPER models, a dark sector phase transition causes the mediator to decrease its mass to its present-day value. This occurs after the DM abundance freezes-in and boosts the present-day direct detection cross section. 6/n', 'Since DM-SM interactions get a late-time ""boost,"" we must also verify that the DM abundance doesn\'t change. For parts of HYPER (parameter) space, DM-number-changing processes must be suppressed, causing the direct detection cross section to be smaller than the maximum one. 7/n', 'But, for many HYPERs, we find that they are at (or fairly close to) the maximum consistent cross section! 8/n https://t.co/zZCk75LslK', 'This is particularly exciting because HYPERs populate a parameter space which is imminently testable by many future direct detection efforts but has few DM benchmarks. 9/n', 'In the future, we want to do the same analysis and HYPER model building for sub-GeV DM scattering off electrons which would require an even lower dark sector phase transition temperature. 10/n', 'It is an interesting question as to whether such a low phase transition temperature could modify or remove bounds on the mediator from HB stars, which were essential in our derivation of the maximum cross section for this work. End🧵']",21,12,2206
86,10,1224268965154869248,322636963,Jonathan Berant,"New TACL paper involving a lot of hard work from my twitter-less student Tomer, along with great collab. at AI2 and TAU. Paper/website at @megamor2 @yoavgo @nlpmattg @ankgup2 1/2 We define a mean. rep. (QDMR) that decomposes questions to a sequence of steps that can be executed against any context (image, text, DB), crowdsource >80K question-QDMR pairs using questions from 10 existing datasets, show usefulness for RC and release a QDMR parser. Enjoy! 2/2",https://arxiv.org/abs/2001.11770v1,"Understanding natural language questions entails the ability to break down a question into the requisite steps for computing its answer. In this work, we introduce a Question Decomposition Meaning Representation (QDMR) for questions. QDMR constitutes the ordered list of steps, expressed through natural language, that are necessary for answering a question. We develop a crowdsourcing pipeline, showing that quality QDMRs can be annotated at scale, and release the Break dataset, containing over 83K pairs of questions and their QDMRs. We demonstrate the utility of QDMR by showing that (a) it can be used to improve open-domain question answering on the HotpotQA dataset, (b) it can be deterministically converted to a pseudo-SQL formal language, which can alleviate annotation in semantic parsing applications. Last, we use Break to train a sequence-to-sequence model with copying that parses questions into QDMR structures, and show that it substantially outperforms several natural baselines. ",Break It Down: A Question Understanding Benchmark,2,"['New TACL paper involving a lot of hard work from my twitter-less student Tomer, along with great collab. at AI2 and TAU. Paper/website at @megamor2 @yoavgo @nlpmattg @ankgup2 1/2 ', 'We define a mean. rep. (QDMR) that decomposes questions to a sequence of steps that can be executed against any context (image, text, DB), crowdsource >80K question-QDMR pairs using questions from 10 existing datasets, show usefulness for RC and release a QDMR parser. Enjoy! 2/2']",20,01,482
87,3,1357342173948043277,325448885,Michael Merrifield,"A new @sixtysymbols video on our latest paper (). This neat little result reminded me why I enjoy doing astronomy research so much! @NonZeroCurl @BradyHaran @sixtysymbols It depends! More usually “ln” for base e. And more likely to be a natural log for theoretical work and base 10 for observations. @zkzkz @BradyHaran @sixtysymbols That’s a whole other video! I think it’s because they form their stars more slowly and steadily than higher mass galaxies, giving more time for everything to mix thoroughly as the closed box requires.",http://arxiv.org/abs/2101.11022,"The levels of heavy elements in stars are the product of enhancement by previous stellar generations, and the distribution of this metallicity among the population contains clues to the process by which a galaxy formed. Most famously, the ""G-dwarf problem"" highlighted the small number of low-metallicity G-dwarf stars in the Milky Way, which is inconsistent with the simplest picture of a galaxy formed from a ""closed box"" of gas. It can be resolved by treating the Galaxy as an open system that accretes gas throughout its life. This observation has classically only been made in the Milky Way, but the availability of high-quality spectral data from SDSS-IV MaNGA and the development of new analysis techniques mean that we can now make equivalent measurements for a large sample of spiral galaxies. Our analysis shows that high-mass spirals generically show a similar deficit of low-metallicity stars, implying that the Milky Way's history of gas accretion is common. By contrast, low-mass spirals show little sign of a G-dwarf problem, presenting the metallicity distribution that would be expected if such systems evolved as pretty much closed boxes. This distinction can be understood from the differing timescales for star formation in galaxies of differing masses. ","SDSS-IV MaNGA: the ""G-dwarf problem"" revisited",3,"['A new @sixtysymbols video on our latest paper (). This neat little result reminded me why I enjoy doing astronomy research so much! ', '@NonZeroCurl @BradyHaran @sixtysymbols It depends! More usually “ln” for base e. And more likely to be a natural log for theoretical work and base 10 for observations.', '@zkzkz @BradyHaran @sixtysymbols That’s a whole other video! I think it’s because they form their stars more slowly and steadily than higher mass galaxies, giving more time for everything to mix thoroughly as the closed box requires.']",21,01,553
88,218,1409471296480722946,735281203008376832,Felipe Martins,"Our work (me, @MateusGM, @HansBassani, @pedro_mbraga and Edna S. Barros) is now available on arXiv! We propose a framework for creating reinforcement learning environments for IEEE VSSS and RoboCup Small Size League robot soccer competitions. Very proud of this contribution, which is part of my masters research, we will use this framework as a base for further research on reinforcement learning for robot soccer. Feel free to reach me on here if there is any questions about the work!",https://arxiv.org/abs/2106.12895,"Reinforcement learning is an active research area with a vast number of applications in robotics, and the RoboCup competition is an interesting environment for studying and evaluating reinforcement learning methods. A known difficulty in applying reinforcement learning to robotics is the high number of experience samples required, being the use of simulated environments for training the agents followed by transfer learning to real-world (sim-to-real) a viable path. This article introduces an open-source simulator for the IEEE Very Small Size Soccer and the Small Size League optimized for reinforcement learning experiments. We also propose a framework for creating OpenAI Gym environments with a set of benchmarks tasks for evaluating single-agent and multi-agent robot soccer skills. We then demonstrate the learning capabilities of two state-of-the-art reinforcement learning methods as well as their limitations in certain scenarios introduced in this framework. We believe this will make it easier for more teams to compete in these categories using end-to-end reinforcement learning approaches and further develop this research area. ","rSoccer: A Framework for Studying Reinforcement Learning in Small and
Very Small Size Robot Soccer",2,"['Our work (me, @MateusGM, @HansBassani, @pedro_mbraga and Edna S. Barros) is now available on arXiv!\n\n\n\nWe propose a framework for creating reinforcement learning environments for IEEE VSSS and RoboCup Small Size League robot soccer competitions.', 'Very proud of this contribution, which is part of my masters research, we will use this framework as a base for further research on reinforcement learning for robot soccer. Feel free to reach me on here if there is any questions about the work!']",21,06,494
89,240,1371637851226583041,1167063592941891585,Bonaventure Dossou,"Using Fon language as a case study, we attempted WEB tokenization, a human-involved super-words tokenization strategy to create a better representative vocabulary for training. It showed improvements in the translation downstream task: w/ @ChrisEmezue ",http://arxiv.org/abs/2103.08052,"Building effective neural machine translation (NMT) models for very low-resourced and morphologically rich African indigenous languages is an open challenge. Besides the issue of finding available resources for them, a lot of work is put into preprocessing and tokenization. Recent studies have shown that standard tokenization methods do not always adequately deal with the grammatical, diacritical, and tonal properties of some African languages. That, coupled with the extremely low availability of training samples, hinders the production of reliable NMT models. In this paper, using Fon language as a case study, we revisit standard tokenization methods and introduce Word-Expressions-Based (WEB) tokenization, a human-involved super-words tokenization strategy to create a better representative vocabulary for training. Furthermore, we compare our tokenization strategy to others on the Fon-French and French-Fon translation tasks. ","Crowdsourced Phrase-Based Tokenization for Low-Resourced Neural Machine
Translation: The Case of Fon Language",1,"['Using Fon language as a case study, we attempted WEB tokenization, a human-involved super-words tokenization strategy to create a better representative vocabulary for training. It showed improvements in the translation downstream task:\n\n\nw/ @ChrisEmezue ']",21,03,265
90,222,1499812487193051137,18647972,Richard J. Chen,"Excited to share work with @rahulgk @MSFTResearch, presented at #LMRL #NeurIPS 2021. We pretrained ViTs on histopathology images - and find they learn meaningful visual concepts. Paper Link: Pretrained Weights: Key Findings: 1/ In general CV, DINO by @mcaron31 and extensions can learn interpretable subparts of images, and has been used for object discovery. We highlight interpreting images as subparts, e.g. - part-whole hierarchies, is very natural in histology in learning cell-tissue organization. 2/ Comparing w/ SimCLR and ImageNet features, DINO learns better & more efficient representations, tested on patch- and slide-level tasks. SSL helps w/ domain shift. On raw & stain-normalized CRC100K, global structure of morphological subtypes are better preserved than ImageNet. 3/ Lastly, DINO localizes cell location quite well w/o supervision. Our findings demonstrate ViTs can easily localize visual concepts in histopathology via introspecting the attention heads. 4/ We plan to add more pretrained models + evaluation metrics, with a larger paper coming soon :^). Special thanks to also the BioML Group @MSRNE, @lorin_crawford, @apsoleimany, Kristen Severson, @KevinKaichuang, @nfusi, and @rahulgk again for supporting me over the summer! 5/ @tae_hwang @rahulgk @MSFTResearch Thank you Tae!",http://arxiv.org/abs/2203.00585,"Tissue phenotyping is a fundamental task in learning objective characterizations of histopathologic biomarkers within the tumor-immune microenvironment in cancer pathology. However, whole-slide imaging (WSI) is a complex computer vision in which: 1) WSIs have enormous image resolutions with precludes large-scale pixel-level efforts in data curation, and 2) diversity of morphological phenotypes results in inter- and intra-observer variability in tissue labeling. To address these limitations, current efforts have proposed using pretrained image encoders (transfer learning from ImageNet, self-supervised pretraining) in extracting morphological features from pathology, but have not been extensively validated. In this work, we conduct a search for good representations in pathology by training a variety of self-supervised models with validation on a variety of weakly-supervised and patch-level tasks. Our key finding is in discovering that Vision Transformers using DINO-based knowledge distillation are able to learn data-efficient and interpretable features in histology images wherein the different attention heads learn distinct morphological phenotypes. We make evaluation code and pretrained weights publicly-available at: this https URL ","Self-Supervised Vision Transformers Learn Visual Concepts in
Histopathology",6,"['Excited to share work with @rahulgk @MSFTResearch, presented at #LMRL #NeurIPS 2021.\n\nWe pretrained ViTs on histopathology images - and find they learn meaningful visual concepts.\n\nPaper Link: \nPretrained Weights: \n\nKey Findings: 1/ ', 'In general CV, DINO by @mcaron31 and extensions can learn interpretable subparts of images, and has been used for object discovery.\n\nWe highlight interpreting images as subparts, e.g. - part-whole hierarchies, is very natural in histology in learning cell-tissue organization. 2/ https://t.co/HTL54DtTcC', 'Comparing w/ SimCLR and ImageNet features, DINO learns better & more efficient representations, tested on patch- and slide-level tasks.\n\nSSL helps w/ domain shift. On raw & stain-normalized CRC100K, global structure of morphological subtypes are better preserved than ImageNet. 3/ https://t.co/MxPJk1KaYq', 'Lastly, DINO localizes cell location quite well w/o supervision. Our findings demonstrate ViTs can easily localize visual concepts in histopathology via introspecting the attention heads. 4/ https://t.co/mZ3Xk4UXoJ', 'We plan to add more pretrained models + evaluation metrics, with a larger paper coming soon :^). Special thanks to also the BioML Group @MSRNE, @lorin_crawford, @apsoleimany, Kristen Severson, @KevinKaichuang, @nfusi, and @rahulgk again for supporting me over the summer! 5/', '@tae_hwang @rahulgk @MSFTResearch Thank you Tae!']",22,03,1343
91,119,1323806350841909248,1162181213475540992,Kaze Wong,A paper in my series of O3 papers is out! We use a deep learning enhanced population analysis framework to investigate what the @LIGO new catalogue says about primordial black holes. #BlackHole #GravitationalWave #DeepLearning #GWTC2 #Inference ,https://arxiv.org/abs/2011.01865,"Primordial black holes (PBHs) might be formed in the early Universe and could comprise at least a fraction of the dark matter. Using the recently released GWTC-2 dataset from the third observing run of the LIGO-Virgo Collaboration, we investigate whether current observations are compatible with the hypothesis that all black hole mergers detected so far are of primordial origin. We constrain PBH formation models within a hierarchical Bayesian inference framework based on deep learning techniques, finding best-fit values for distinctive features of these models, including the PBH initial mass function, the fraction of PBHs in dark matter, and the accretion efficiency. The presence of several spinning binaries in the GWTC-2 dataset favors a scenario in which PBHs accrete and spin up. Our results indicate that PBHs may comprise only a fraction smaller than $0.3 \%$ of the total dark matter, and that the predicted PBH abundance is still compatible with other constraints. ","Constraining the primordial black hole scenario with Bayesian inference
and machine learning: the GWTC-2 gravitational wave catalog",1,['A paper in my series of O3 papers is out!\nWe use a deep learning enhanced population analysis framework to investigate what the @LIGO new catalogue says about primordial black holes. \n\n#BlackHole #GravitationalWave #DeepLearning #GWTC2 #Inference '],20,11,258
92,87,1018161901090627584,933084565895286786,Dan Hooper,"In my new paper with Gordan Krnjaic (@GordanKrnjaic), Andrew Long & Sam McDermott, we identify a class of models in which the particle responsible for inflation is also the dark matter. We call it ""WIMPflation"". 2 birds with 1 stone! #DarkMatter #cosmology @Aqeelhmed @GordanKrnjaic We worried about this, of course, but found that such potentials can arise in models in which the inflaton has a non-minimal coupling to gravity (ie McDonald and Learner) or in Kallosh and Linde's alpha-attractor scenario, just to name a couple of examples.",https://arxiv.org/abs/1807.03308,"We propose a class of models in which a stable inflaton is produced as a thermal relic in the early universe and constitutes the dark matter. We show that inflaton annihilations can efficiently reheat the universe, and identify several examples of inflationary potentials that can accommodate all cosmic microwave background observables and in which the inflaton dark matter candidate has a weak scale mass. As a simple example, we consider annihilations that take place through a Higgs portal interaction, leading to encouraging prospects for future direct detection experiments. ",WIMPflation,2,"['In my new paper with Gordan Krnjaic (@GordanKrnjaic), Andrew Long & Sam McDermott, we identify a class of models in which the particle responsible for inflation is also the dark matter. We call it ""WIMPflation"".\n2 birds with 1 stone!\n\n#DarkMatter #cosmology', ""@Aqeelhmed @GordanKrnjaic We worried about this, of course, but found that such potentials can arise in models in which the inflaton has a non-minimal coupling to gravity (ie McDonald and Learner) or in Kallosh and Linde's alpha-attractor scenario, just to name a couple of examples.""]",18,07,547
93,130,1402907147181101059,841031248839618560,Relja Arandjelović,"In our new paper ""NeRF in detail: Learning to sample for view synthesis"" (aka yet another NeRF paper on your to-read list) we replace the heuristic coarse-to-fine strategy of NeRF via a learnt one. Improvements in rendering quality and speed. @SattlerTorsten Thanks Torsten! I only tried it really on the original NeRF paper's datasets, it seems most NeRF improvements use the same data as well. @SattlerTorsten I see, good to know.",https://arxiv.org/abs/2106.05264,"Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis performance. The core approach is to render individual rays by querying a neural network at points sampled along the ray to obtain the density and colour of the sampled points, and integrating this information using the rendering equation. Since dense sampling is computationally prohibitive, a common solution is to perform coarse-to-fine sampling. In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand. We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture. Training the proposal module from scratch can be unstable due to lack of supervision, so an effective pre-training strategy is also put forward. The approach, named `NeRF in detail' (NeRF-ID), achieves superior view synthesis quality over NeRF and the state-of-the-art on the synthetic Blender benchmark and on par or better performance on the real LLFF-NeRF scenes. Furthermore, by leveraging the predicted sample importance, a 25% saving in computation can be achieved without significantly sacrificing the rendering quality. ",NeRF in detail: Learning to sample for view synthesis,3,"['In our new paper ""NeRF in detail: Learning to sample for view synthesis"" (aka yet another NeRF paper on your to-read list) we replace the heuristic coarse-to-fine strategy of NeRF via a learnt one. Improvements in rendering quality and speed. ', ""@SattlerTorsten Thanks Torsten! I only tried it really on the original NeRF paper's datasets, it seems most NeRF improvements use the same data as well."", '@SattlerTorsten I see, good to know.']",21,06,446
94,48,1441405995968864257,1324428524,Rikard Enberg,"New paper, with lots of people: a proposal for a facility for forward experiments at the high luminosity LHC. (I only worked on a very small part of this with implications for astroparticle physics.) If you want to calculate how much neutrinos are produced by cosmic rays that collide with the atmosphere, something that the @uw_icecube experiment is interested in, then you need to know about the particles produced in the flight direction of the beam particle. We have models based on a lot of known physics, but we can't check those models against data as far out as we want, because the LHC experiments aren't sensitive there. Basically they have holes in their detectors where the beamline enters. So we have to extrapolate. That's where the Forward Physics Facility would be useful. It would allow us to better pin down our predictions. Such as this one: But that's only part of the interesting stuff you can do with the FPF. See this thread for example: ",https://arxiv.org/abs/2109.10905,"The Forward Physics Facility (FPF) is a proposal to create a cavern with the space and infrastructure to support a suite of far-forward experiments at the Large Hadron Collider during the High Luminosity era. Located along the beam collision axis and shielded from the interaction point by at least 100 m of concrete and rock, the FPF will house experiments that will detect particles outside the acceptance of the existing large LHC experiments and will observe rare and exotic processes in an extremely low-background environment. In this work, we summarize the current status of plans for the FPF, including recent progress in civil engineering in identifying promising sites for the FPF and the experiments currently envisioned to realize the FPF's physics potential. We then review the many Standard Model and new physics topics that will be advanced by the FPF, including searches for long-lived particles, probes of dark matter and dark sectors, high-statistics studies of TeV neutrinos of all three flavors, aspects of perturbative and non-perturbative QCD, and high-energy astroparticle physics. ","The Forward Physics Facility: Sites, Experiments, and Physics Potential",5,"['New paper, with lots of people: a proposal for a facility for forward experiments at the high luminosity LHC. (I only worked on a very small part of this with implications for astroparticle physics.) ', 'If you want to calculate how much neutrinos are produced by cosmic rays that collide with the atmosphere, something that the @uw_icecube experiment is interested in, then you need to know about the particles produced in the flight direction of the beam particle.', ""We have models based on a lot of known physics, but we can't check those models against data as far out as we want, because the LHC experiments aren't sensitive there. Basically they have holes in their detectors where the beamline enters. So we have to extrapolate."", ""That's where the Forward Physics Facility would be useful. It would allow us to better pin down our predictions. Such as this one: https://t.co/u87lFVwTPM"", ""But that's only part of the interesting stuff you can do with the FPF. See this thread for example: https://t.co/j6gOLbwY13""]",21,09,981
95,34,1453038307546476552,7984662,Clayton Shonkwiler,"New paper: Toric Symplectic Geometry and Full Spark Frames, written with Tom Needham. (1/7) Here’s one way of approaching the basic problem: let d < N and consider d × N complex matrices where we pre-specify the singular values of the matrix and the norms of the columns. Such conditions are common in various signal processing applications. (2/7) Now, for something like compressed sensing, you would really like it to be true that every d × d submatrix is invertible. So the question: given a random matrix with the prescribed data, what is the probability that every d × d minor is invertible? (3/7) Sometimes your data is incompatible, so there are no such matrices. This isn’t the only condition, but to have such matrices the sum of the squares of the singular values must equal the sum of the squares of the column norms. (4/7) It can also happen that the space of matrices with the prescribed data is non-empty, but *all* matrices in the space have some singular d × d minor, and hence the probability is zero. (5/7) Our main theorem characterizes exactly when these two possibilities happen and shows that in all other cases the probability is equal to 1. Along the way we determine when these spaces are manifolds and characterize the local structure of singularities when they're not. (6/7) The main idea is to show that these spaces are closely related to certain highly structured and extremely symmetric manifolds called toric symplectic manifolds (or toric varieties, if you’re more of an algebraic geometer). (7/7)",https://arxiv.org/abs/2110.11295,"The collection of $d \times N$ complex matrices with prescribed column norms and prescribed (nonzero) singular values forms a compact algebraic variety, which we refer to as a frame space. Elements of frame spaces -- i.e., frames -- are used to give robust representations of complex-valued signals, so that geometrical and measure-theoretic properties of frame spaces are of interest to the signal processing community. This paper is concerned with the following question: what is the probability that a frame drawn uniformly at random from a given frame space has the property that any subset of $d$ of its columns gives a basis for $\mathbb{C}^d$? We show that the probability is one, generalizing recent work of Cahill, Mixon and Strawn. To prove this, we first show that frame spaces are related to highly structured objects called toric symplectic manifolds. This relationship elucidates the geometric meaning of eigensteps -- certain spectral invariants of a frame -- and should be a more broadly applicable tool for studying probabilistic questions about the structure of frame spaces. As another application of our symplectic perspective, we completely characterize the norm and spectral data for which the corresponding frame space has singularities, answering some open questions in the frame theory literature. ",Toric Symplectic Geometry and Full Spark Frames,7,"['New paper: Toric Symplectic Geometry and Full Spark Frames, written with Tom Needham.\n\n\n\n(1/7)', 'Here’s one way of approaching the basic problem: let d < N and consider d × N complex matrices where we pre-specify the singular values of the matrix and the norms of the columns. Such conditions are common in various signal processing applications.\n\n(2/7)', 'Now, for something like compressed sensing, you would really like it to be true that every d × d submatrix is invertible. 
So the question: given a random matrix with the prescribed data, what is the probability that every d × d minor is invertible?\n\n(3/7)', 'Sometimes your data is incompatible, so there are no such matrices. This isn’t the only condition, but to have such matrices the sum of the squares of the singular values must equal the sum of the squares of the column norms.\n\n(4/7)', 'It can also happen that the space of matrices with the prescribed data is non-empty, but *all* matrices in the space have some singular d × d minor, and hence the probability is zero.\n\n(5/7)', ""Our main theorem characterizes exactly when these two possibilities happen and shows that in all other cases the probability is equal to 1. Along the way we determine when these spaces are manifolds and characterize the local structure of singularities when they're not.\n\n(6/7)"", 'The main idea is to show that these spaces are closely related to certain highly structured and extremely symmetric manifolds called toric symplectic manifolds (or toric varieties, if you’re more of an algebraic geometer).\n\n(7/7)']",21,10,1540
96,167,1316294941157580801,90131577,Noam Slonim 🟢,"Expanding #ProjectDebater beyond English. New paper by our team @IBMResearch using multi-ling BERT to address stance analysis, evidence detection, and argument quality in 6 languages + new datasets; in Findings of #emnlp2020 #ComputationalArgumentation -- ",https://arxiv.org/abs/2010.06432,"The growing interest in argument mining and computational argumentation brings with it a plethora of Natural Language Understanding (NLU) tasks and corresponding datasets. However, as with many other NLU tasks, the dominant language is English, with resources in other languages being few and far between. In this work, we explore the potential of transfer learning using the multilingual BERT model to address argument mining tasks in non-English languages, based on English datasets and the use of machine translation. We show that such methods are well suited for classifying the stance of arguments and detecting evidence, but less so for assessing the quality of arguments, presumably because quality is harder to preserve under translation. In addition, focusing on the translate-train approach, we show how the choice of languages for translation, and the relations among them, affect the accuracy of the resultant model. Finally, to facilitate evaluation of transfer learning on argument mining tasks, we provide a human-generated dataset with more than 10k arguments in multiple languages, as well as machine translation of the English datasets. ",Multilingual Argument Mining: Datasets and Analysis,1,"['Expanding #ProjectDebater beyond English. New paper by our team @IBMResearch using multi-ling BERT to address stance analysis, evidence detection, and argument quality in 6 languages + new datasets; in Findings of #emnlp2020 #ComputationalArgumentation -- ']",20,10,262
97,48,1017394142736011265,750411947661811712,Dr. Sarah Pearson,"What might the Magellanic Clouds have evolved into, were they far from the Milky Way? We constrain the initial encounter parameters of an isolated analog of the Clouds, NGC 4490/85, and the timescales involved in gas cycling. Read our new paper here: ",https://arxiv.org/abs/1807.03791,"Discoveries of low mass galaxy pairs and groups are increasing. Studies indicate that dwarf galaxy pairs are gas rich in the field and exhibit elevated star formation rates, suggestive of interactions. Lacking are dynamical models of observed dwarf galaxy pairs to disentangle the physical processes regulating their baryon cycles. We present new optical data and the first detailed theoretical model of an observed tidal encounter between two isolated low mass galaxies, NGC 4490 & NGC 4485. This system is an isolated analog of the Magellanic Clouds and is surrounded by a ~50 kpc extended HI envelope. We use hybrid $N$-body and test-particle simulations along with a visualization interface $Identikit$ to simultaneously reproduce the observed present-day morphology and kinematics. Our results demonstrate how repeated encounters between two dwarf galaxies can ""park"" baryons at very large distances, without the aid of environmental effects. Our best match to the data is an 8:1 mass ratio encounter where a one-armed spiral is induced in the NGC 4490-analog, which we postulate explains the nature of diffuse starlight presented in the new optical data. We predict that the pair will fully merge in ~370 Myr, but that the extended tidal features will continue to evolve and return to the merged remnant over ~5 Gyr. This pre-processing of baryons will affect the efficiency of gas stripping if such dwarf pairs are accreted by a massive host. In contrast, in isolated environments this study demonstrates how dwarf-dwarf interactions can create a long-lived supply of gas to the merger remnant. ","Modeling the Baryon Cycle in Low Mass Galaxy Encounters: the Case of NGC
4490 & NGC 4485",1,"['What might the Magellanic Clouds have evolved into, were they far from the Milky Way? We constrain the initial encounter parameters of an isolated analog of the Clouds, NGC 4490/85, and the timescales involved in gas cycling. Read our new paper here: ']",18,07,264
98,90,1021559079901093888,930764003277643777,Matias Quiroz,"Want to find out more about our work on using data subsampling to speed up MCMC algorithms? We just released a textbook style easy-to-read review on arXiv: . @matvil @robertjk59 Just noticed that Figure 3.6 is not rendering properly. Time for some arXiv hacking! The error is now fixed and the new version, with the correct Figure 3.6, should appear on arXiv tomorrow or so.",https://arxiv.org/abs/1807.08409,"The rapid development of computing power and efficient Markov Chain Monte Carlo (MCMC) simulation algorithms have revolutionized Bayesian statistics, making it a highly practical inference method in applied work. However, MCMC algorithms tend to be computationally demanding, and are particularly slow for large datasets. Data subsampling has recently been suggested as a way to make MCMC methods scalable on massively large data, utilizing efficient sampling schemes and estimators from the survey sampling literature. These developments tend to be unknown by many survey statisticians who traditionally work with non-Bayesian methods, and rarely use MCMC. Our article explains the idea of data subsampling in MCMC by reviewing one strand of work, Subsampling MCMC, a so called pseudo-marginal MCMC approach to speeding up MCMC through data subsampling. The review is written for a survey statistician without previous knowledge of MCMC methods since our aim is to motivate survey sampling experts to contribute to the growing Subsampling MCMC literature. ",Subsampling MCMC - An introduction for the survey statistician,3,"['Want to find out more about our work on using data subsampling to speed up MCMC algorithms? We just released a textbook style easy-to-read review on arXiv: . @matvil @robertjk59', 'Just noticed that Figure 3.6 is not rendering properly. Time for some arXiv hacking!', 'The error is now fixed and the new version, with the correct Figure 3.6, should appear on arXiv tomorrow or so.']",18,07,380
99,119,1105918633635569664,503452360,William Wang,Standard predefined labels & train/dev/test setting are suboptimal for streaming data. Our #NAACL2019 paper Sentence Embedding Alignment for Lifelong Relation Extraction introduces a new lifelong IE problem & an efficient SOTA solution. Paper+Code: #NLProc ,https://arxiv.org/abs/1903.02588,"Conventional approaches to relation extraction usually require a fixed set of pre-defined relations. Such requirement is hard to meet in many real applications, especially when new data and relations are emerging incessantly and it is computationally expensive to store all data and re-train the whole model every time new data and relations come in. We formulate such a challenging problem as lifelong relation extraction and investigate memory-efficient incremental learning methods without catastrophically forgetting knowledge learned from previous tasks. We first investigate a modified version of the stochastic gradient methods with a replay memory, which surprisingly outperforms recent state-of-the-art lifelong learning methods. We further propose to improve this approach to alleviate the forgetting problem by anchoring the sentence embedding space. Specifically, we utilize an explicit alignment model to mitigate the sentence embedding distortion of the learned model when training on new data and new relations. Experiment results on multiple benchmarks show that our proposed method significantly outperforms the state-of-the-art lifelong learning approaches. ",Sentence Embedding Alignment for Lifelong Relation Extraction,1,['Standard predefined labels & train/dev/test setting are suboptimal for streaming data. Our #NAACL2019 paper Sentence Embedding Alignment for Lifelong Relation Extraction introduces a new lifelong IE problem & an efficient SOTA solution. Paper+Code: \n#NLProc '],19,03,270
100,129,1300831011878572032,37838307,Justin Caram,"More forays into theory from the Caram Group! Here with the Neuhauser group we show that you can use stochastic methods to rapidly study the excitonic properties of molecular aggregates. Congrats Nadine and @arundhati175 and others not on twitter :) @arundhati175 Excitonic aggregates have two features that make them hard compared to say...semiconductors. Lots of disorder, dipolar coupling that falls of slowly (meaning tight binding misses the details). The frenkel exciton hamiltonian works ok, but requires diagonalization. @arundhati175 Since diagonalization is N^3 with system size, Its really slow for large systems, particularly 2D systems for which the system size grows with edge^2. Stochastic methods scale with NlogN (basically linearly). So much easier to screen disorder, lineshapes etc. @arundhati175 This lets us look at other cool properties...like if energy levels are correlated over any distance range, what does that do (a lot it turns out!). This is a cool application of stochastic methods that are widely used to improve DFT scaling.",https://arxiv.org/abs/2008.13228,"We show that a stochastic approach enables calculations of the optical properties of large 2-dimensional and nanotubular excitonic molecular aggregates. Previous studies of such systems relied on numerically diagonalizing the dense and disordered Frenkel Hamiltonian, which scales approximately as $\mathcal{O}(N^3)$ for $N$ dye molecules. Our approach scales much more efficiently as $\mathcal{O}(N\log(N))$, enabling quick study of systems with a million of coupled molecules on the micron size scale. We calculate several important experimental observable including the optical absorption spectrum and density of states, and develop a stochastic formalism for the participation ratio. Quantitative agreement with traditional matrix diagonalization methods is demonstrated for both small- and intermediate-size systems. The stochastic methodology enables the study of the effects of spatial-correlation in site energies on the optical signatures of large 2D aggregates. Our results demonstrate that stochastic methods present a path forward for screening structural parameters and validating experiments and theoretical predictions in large excitonic aggregates. ",Stochastically Realized Observables for Excitonic Molecular Aggregates,4,"['More forays into theory from the Caram Group! Here with the Neuhauser group we show that you can use stochastic methods to rapidly study the excitonic properties of molecular aggregates. Congrats Nadine and @arundhati175 and others not on twitter :)\n', '@arundhati175 Excitonic aggregates have two features that make them hard compared to say...semiconductors. Lots of disorder, dipolar coupling that falls of slowly (meaning tight binding misses the details). The frenkel exciton hamiltonian works ok, but requires diagonalization.', '@arundhati175 Since diagonalization is N^3 with system size, Its really slow for large systems, particularly 2D systems for which the system size grows with edge^2. Stochastic methods scale with NlogN (basically linearly). So much easier to screen disorder, lineshapes etc.', '@arundhati175 This lets us look at other cool properties...like if energy levels are correlated over any distance range, what does that do (a lot it turns out!). This is a cool application of stochastic methods that are widely used to improve DFT scaling.']",20,08,1065
101,74,1373942001519972353,561899047,Aki Vehtari,"New paper with Teemu Säilynoja and @paulbuerkner ""Graphical Test for Discrete Uniformity and its Applications in Goodness of Fit Evaluation and Multiple Sample Comparison"" We've recently had papers like simulation based calibration and new Rhat paper with rank plots, which involve looking at the uniformity of rank statistics. Thinking more, we realized that histograms are suboptimal and ECDF envelopes we used were based on continuous uniformity. The motivation was not uniformity testing for itself, but this paper belongs to a series of papers on improving diagnostics of diagnostics. The graphical aspect is also important to provide useful information about the shape of discrepancy away from the assumed uniformity. Histograms 1) loose information by binning, 2) the choice of bins affect the result, 3) the confidence band doesn't take into account the dependency. ECDF and ECDF difference plots don't have these problems and thus would be better. The simultaneous confidence band for ECDF (and ECDF difference) doesn't have closed form solution and naive simulation based approach can be very slow. Aldor-Noiman et al proposed more efficient simulation based approach for continuous uniform case. In SBC and MCMC rank plots, the rank statistics are discrete. The rank statistics are also equivalent to probability integral transformation (PIT) using ECDF. We present how we can construct the confidence bands that are correct both in continuous and discrete case. We propose a new optimization based approach for simultaneous confidence band which is much more efficient than Aldor-Noiman simulation approach. We also extend the apporach to comparing rank statistics of multiple samples such as arising from different Markov chains. The code is available at This is Teemu's first paper and I'm very happy with the outcome!",https://arxiv.org/abs/2103.10522,"Assessing goodness of fit to a given distribution plays an important role in computational statistics. The Probability integral transformation (PIT) can be used to convert the question of whether a given sample originates from a reference distribution into a problem of testing for uniformity. We present new simulation and optimization based methods to obtain simultaneous confidence bands for the whole empirical cumulative distribution function (ECDF) of the PIT values under the assumption of uniformity. Simultaneous confidence bands correspond to such confidence intervals at each point that jointly satisfy a desired coverage. These methods can also be applied in cases where the reference distribution is represented only by a finite sample. The confidence bands provide an intuitive ECDF-based graphical test for uniformity, which also provides useful information on the quality of the discrepancy. We further extend the simulation and optimization methods to determine simultaneous confidence bands for testing whether multiple samples come from the same underlying distribution. This multiple sample comparison test is especially useful in Markov chain Monte Carlo convergence diagnostics. We provide numerical experiments to assess the properties of the tests using both simulated and real world data and give recommendations on their practical application in computational statistics workflows. ","Graphical Test for Discrete Uniformity and its Applications in Goodness
of Fit Evaluation and Multiple Sample Comparison",8,"['New paper with Teemu Säilynoja and @paulbuerkner \n""Graphical Test for Discrete Uniformity and its Applications in Goodness of Fit Evaluation and Multiple Sample Comparison""\n ', ""We've recently had papers like simulation based calibration and new Rhat paper with rank plots, which involve looking at the uniformity of rank statistics. Thinking more, we realized that histograms are suboptimal and ECDF envelopes we used were based on continuous uniformity."", 'The motivation was not uniformity testing for itself, but this paper belongs to a series of papers on improving diagnostics of diagnostics. The graphical aspect is also important to provide useful information about the shape of discrepancy away from the assumed uniformity.', ""Histograms 1) loose information by binning, 2) the choice of bins affect the result, 3) the confidence band doesn't take into account the dependency. ECDF and ECDF difference plots don't have these problems and thus would be better. https://t.co/KbT5VQAvNG"", ""The simultaneous confidence band for ECDF (and ECDF difference) doesn't have closed form solution and naive simulation based approach can be very slow. Aldor-Noiman et al proposed more efficient simulation based approach for continuous uniform case."", 'In SBC and MCMC rank plots, the rank statistics are discrete. The rank statistics are also equivalent to probability integral transformation (PIT) using ECDF. We present how we can construct the confidence bands that are correct both in continuous and discrete case. https://t.co/TmfkE5iBd1', 'We propose a new optimization based approach for simultaneous confidence band which is much more efficient than Aldor-Noiman simulation approach. We also extend the apporach to comparing rank statistics of multiple samples such as arising from different Markov chains. https://t.co/AbdCjVczxJ', ""The code is available at https://t.co/WzP6ltmbn4\n\nThis is Teemu's first paper and I'm very happy with the outcome!""]",21,03,1874
102,35,1255790835221872640,1087849177642610690,R.J. Graham☆彡,"A new paper on the arxiv by yours truly and @ClimateBook (the 1st paper that will be in my thesis), examining the consequences of thermodynamic and energetic limits to continental silicate weathering for climate on Earth-like planets 🌋🌧️🏔️🌊 @AndrewIWilliams @ClimateBook lmao i am now going to include an image or gif of Hannibal saying this in all of my future discussions of the WHAK model @AndrewIWilliams @ClimateBook Bye bye Ray!! ",https://arxiv.org/abs/2004.14058,"The ""liquid water habitable zone"" (HZ) concept is predicated on the ability of the silicate weathering feedback to stabilize climate across a wide range of instellations. However, representations of silicate weathering used in current estimates of the effective outer edge of the HZ do not account for the thermodynamic limit on concentration of weathering products in runoff set by clay precipitation, nor for the energetic limit on precipitation set by planetary instellation. We find that when the thermodynamic limit is included in an idealized coupled climate/weathering model, steady-state planetary climate loses sensitivity to silicate dissolution kinetics, becoming sensitive to temperature primarily through the effect of temperature on runoff and to pCO$_2$ through an effect on solute concentration mediated by pH. This increases sensitivity to land fraction, CO$_2$ outgassing, and geological factors such as soil age and lithology, all of which are found to have a profound effect on the position of the effective outer edge of the HZ. The interplay between runoff sensitivity and the energetic limit on precipitation leads to novel warm states in the outer reaches of the HZ, owing to the decoupling of temperature and precipitation. We discuss strategies for detecting the signature of silicate weathering feedback through exoplanet observations in light of insights derived from the revised picture of weathering. ","Thermodynamic and Energetic Limits on Continental Silicate Weathering
Strongly Impact the Climate and Habitability of Wet, Rocky Worlds",3,"['A new paper on the arxiv by yours truly and @ClimateBook (the 1st paper that will be in my thesis), examining the consequences of thermodynamic and energetic limits to continental silicate weathering for climate on Earth-like planets 🌋🌧️🏔️🌊\n\n', '@AndrewIWilliams @ClimateBook lmao i am now going to include an image or gif of Hannibal saying this in all of my future discussions of the WHAK model', '@AndrewIWilliams @ClimateBook Bye bye Ray!! https://t.co/toEIYgKLWL']",20,04,449
103,7,1345161894189928448,1489278174,Claire Edmunds,"Starting off 2021 right with a new paper on arXiv from @ARC_EQUS and @Sydney_Science! Exciting new work reducing the measurement error in Yb 171 ions using electron shelving 1/5 #quantum #trappedions Yb ions are awesome for #quantumcomputing as they store info for a long time and aren't too badly affected by magnetic field noise (both @honeywell @IonQ_Inc use Yb!) But measurement error tends to be larger as the qubit states can mix during detection(""off-resonant scattering"") By shelving the |1> qubit state to a metastable D5/2 level before detection, we achieve single-qubit detection errors of 1.8e-3 on an APD (6x lower than our best efforts previously!) and 7.7e-3 on an EMCCD camera (4x lower) 3/5 We record a detection error as low as 6e-6 and 6.3e-4 on the APD and EMCCD if we shelve to the long-lived F7/2 state (a procedure that is currently shelving-rate limited) An amazing team effort with @MJBiercuk, Ting Rei Tan, Alistair Milne and Ashwin Singh, led by @CHQuant! 4/5 This project goes hand in hand with a precision characterisation on the 411nm transition in Yb that we use for the electron shelving, up on arXiv a few days ago ping @Berkeley_ions @Ion_busters @IonQ_Inc @iqoqi @SandiaLabs",https://arxiv.org/abs/2012.14606,"Qubits encoded in hyperfine states of trapped ions are ideal for quantum computation given their long lifetimes and low sensitivity to magnetic fields, yet they suffer from off-resonant scattering during detection often limiting their measurement fidelity. In ${}^{171}$Yb$^{+}$ this is exacerbated by a low fluorescence yield, which leads to a need for complex and expensive hardware - a problematic bottleneck especially when scaling up the number of qubits. We demonstrate a detection routine based on electron shelving to address this issue in ${}^{171}$Yb$^{+}$ and achieve a 5.6$\times$ reduction in single-ion detection error on an avalanche photodiode to $1.8(2)\times10^{-3}$ in a 100 $\mu$s detection period, and a 4.3$\times$ error reduction on an electron multiplying CCD camera, with $7.7(2)\times10^{-3}$ error in 400 $\mu$s. We further improve the characterization of a repump transition at 760 nm to enable a more rapid reset of the auxiliary $^2$F$_{7/2}$ states populated after shelving. Finally, we examine the detection fidelity limit using the long-lived $^2$F$_{7/2}$ state, achieving a further 300$\times$ and 12$\times$ reduction in error to $6(7)\times10^{-6}$ and $6.3(3)\times10^{-4}$ in 1 ms on the respective detectors. While shelving-rate limited in our setup, we suggest various techniques to realize this detection method at speeds compatible with quantum information processing, providing a pathway to ultra-high fidelity detection in ${}^{171}$Yb$^{+}$. ","Scalable hyperfine qubit state detection via electron shelving in the
${}^2$D$_{5/2}$ and ${}^2$F$_{7/2}$ manifolds in ${}^{171}$Yb$^{+}$",6,"['Starting off 2021 right with a new paper on arXiv from @ARC_EQUS and @Sydney_Science! Exciting new work reducing the measurement error in Yb 171 ions using electron shelving 1/5\n\n#quantum #trappedions \n', 'Yb ions are awesome for #quantumcomputing as they store info for a long time and aren\'t too badly affected by magnetic field noise (both @honeywell @IonQ_Inc use Yb!)\n\nBut measurement error tends to be larger as the qubit states can mix during detection(""off-resonant scattering"")', 'By shelving the |1> qubit state to a metastable D5/2 level before detection, we achieve single-qubit detection errors of 1.8e-3 on an APD (6x lower than our best efforts previously!) and 7.7e-3 on an EMCCD camera (4x lower) 3/5', 'We record a detection error as low as 6e-6 and 6.3e-4 on the APD and EMCCD if we shelve to the long-lived F7/2 state (a procedure that is currently shelving-rate limited)\n\nAn amazing team effort with @MJBiercuk, Ting Rei Tan, Alistair Milne and Ashwin Singh, led by @CHQuant! 4/5 https://t.co/CRAauC8OME', 'This project goes hand in hand with a precision characterisation on the 411nm transition in Yb that we use for the electron shelving, up on arXiv a few days ago\n\nhttps://t.co/k4GSIGnahu\n\nhttps://t.co/Upo8VhGlQw', 'ping @Berkeley_ions @Ion_busters @IonQ_Inc @iqoqi @SandiaLabs']",20,12,1240
104,202,1392285789652926467,130881465,Alex Nitz,"In work led by @CollinCapano we find support for a sought after sub-dominant mode in the 'ringdown' of GW190521. This enables a test of the nature of the final black hole and suggests limits on the mass ratio of the initial binary. General relativity predicts that for a Kerr black hole (spinning, no charge) if it is perturbed, such as when resulting from the merger of two progenitor black holes, it will emit gravitational radiation during this 'ringdown' with a predictable discrete spectrum of frequencies. A key principle is that spectrum of these modes only depends on the total mass and spin of the black hole. (the relative amplitudes may however depend on the initial system). By comparing the observed modes and the prediction from General relativity, we can test the nature of the final black hole and look for signs of new physics. We find the observed signal is very consistent with what you'd expect from a perturbed Kerr black hole. The presence of one particular mode also indicates that the original system was unlikely to have been made of equal mass black holes (NR simulations suggest that unequal masses are required to excite this mode). This allows one to predict what the mass ratio may have been. GW190521 has been somewhat of mystery for the gravitational-wave community. It is the highest mass merger observed to date, there is a possible EM flare observed in coincidence, and several studies have suggested the merger may be eccentric. Likely mysteries remain!",https://arxiv.org/abs/2105.05238,"When two black holes merge, the late stage of gravitational wave emission is a superposition of exponentially damped sinusoids. According to the black hole no-hair theorem, this ringdown spectrum depends only on the mass and angular momentum of the final black hole. An observation of more than one ringdown mode can test this fundamental prediction of general relativity. Here we provide strong observational evidence for a multimode black hole ringdown spectrum using the gravitational wave event GW190521, with a Bayes factor of $\sim 40$ preferring two fundamental modes over one. The dominant mode is the $\ell=m=2$ harmonic, and the sub-dominant mode corresponds to the $\ell=m=3$ harmonic. We estimate the redshifted mass and dimensionless spin of the final black hole as $330^{+30}_{-40}\,\mathrm{M}_\odot$ and $0.87^{+0.05}_{-0.10}$, respectively. The detection of the two modes disfavors a binary progenitor with equal masses; the mass ratio is constrained to $0.4^{+0.2}_{-0.3}$. We find that the final black hole is consistent with the no hair theorem and constrain the fractional deviation from general relativity of the sub-dominant mode's frequency to be $-0.01^{+0.07}_{-0.11}$. ","Observation of a multimode quasi-normal spectrum from a perturbed black
hole",6,"[""In work led by @CollinCapano we find support for a sought after sub-dominant mode in the 'ringdown' of GW190521. This enables a test of the nature of the final black hole and suggests limits on the mass ratio of the initial binary.\n\n "", ""General relativity predicts that for a Kerr black hole (spinning, no charge) if it is perturbed, such as when resulting from the merger of two progenitor black holes, it will emit gravitational radiation during this 'ringdown' with a predictable discrete spectrum of frequencies."", 'A key principle is that spectrum of these modes only depends on the total mass and spin of the black hole. (the relative amplitudes may however depend on the initial system).', ""By comparing the observed modes and the prediction from General relativity, we can test the nature of the final black hole and look for signs of new physics. We find the observed signal is very consistent with what you'd expect from a perturbed Kerr black hole. https://t.co/EQAHsF6AGa"", 'The presence of one particular mode also indicates that the original system was unlikely to have been made of equal mass black holes (NR simulations suggest that unequal masses are required to excite this mode). This allows one to predict what the mass ratio may have been. https://t.co/JvZoDqpflt', 'GW190521 has been somewhat of mystery for the gravitational-wave community. It is the highest mass merger observed to date, there is a possible EM flare observed in coincidence, and several studies have suggested the merger may be eccentric. Likely mysteries remain!']",21,05,1517
105,109,1324555770537467911,2162991392,Shichao,Our new paper about NSBH GW/EM detectability in 2nd/2.5th/3rd GW era: Kilonova Emission From Black Hole-Neutron Star Mergers. II. Luminosity Function and Implications for Target-of-opportunity Observations of Gravitational-wave Triggers and Blind Searches ,https://arxiv.org/abs/2011.02717,"We present detailed simulations of black hole-neutron star (BH-NS) mergers kilonova and gamma-ray burst (GRB) afterglow and kilonova luminosity function, and discuss the detectability of electromagnetic (EM) counterpart in connection with gravitational wave (GW) detections, GW-triggered target-of-opportunity observations, and time-domain blind searches. The predicted absolute magnitude of the BH-NS kilonovae at $0.5\,{\rm days}$ after the merger falls in $[-10,-15.5]$. The simulated luminosity function contains the potential viewing-angle distribution information of the anisotropic kilonova emission. We simulate the GW detection rates, detectable distances and signal duration, for the future networks of 2nd/2.5th/3rd-generation GW detectors. BH-NSs tend to produce brighter kilonovae and afterglows if the BH has a higher aligned-spin, and a less massive NS with a stiffer EoS. The detectability of kilonova is especially sensitive to the BH spin. If BHs typically have low spins, the BH-NS EM counterparts are hard to discover. For the 2nd generation GW detector networks, a limiting magnitude of $m_{\rm limit}\sim23-24\,{\rm mag}$ is required to detect the kilonovae even if BH high spin is assumed. Thus, a plausible explanation for the lack of BH-NS associated kilonova detection during LIGO/Virgo O3 is that either there is no EM counterpart (plunging events), or the current follow-ups are too shallow. These observations still have the chance to detect the on-axis jet afterglow associated with an sGRB or an orphan afterglow. Follow-up observations can detect possible associated sGRB afterglows, from which kilonova signatures may be studied. For time-domain observations, a high-cadence search in redder filters is recommended to detect more BH-NS associated kilonovae and afterglows. ","Kilonova Emission From Black Hole-Neutron Star Mergers. II. Luminosity
Function and Implications for Target-of-opportunity Observations of
Gravitational-wave Triggers and Blind Searches",1,['Our new paper about NSBH GW/EM detectability in 2nd/2.5th/3rd GW era: Kilonova Emission From Black Hole-Neutron Star Mergers. II. Luminosity Function and Implications for Target-of-opportunity Observations of Gravitational-wave Triggers and Blind Searches '],20,11,262
106,161,1316460693609013248,2906950303,Yana Safonova,"Check out our new review paper on applications of the trace reconstruction problems in two fields of computational biology: immunogenomics and DNA storage! I am proud to be a part of this collaboration: Vinnu Bhardwaj, Pavel Pevzner, and @CyrusRashtchian ",https://arxiv.org/abs/2010.06083,"The problem of reconstructing a string from its error-prone copies, the trace reconstruction problem, was introduced by Vladimir Levenshtein two decades ago. While there has been considerable theoretical work on trace reconstruction, practical solutions have only recently started to emerge in the context of two rapidly developing research areas: immunogenomics and DNA data storage. In immunogenomics, traces correspond to mutated copies of genes, with mutations generated naturally by the adaptive immune system. In DNA data storage, traces correspond to noisy copies of DNA molecules that encode digital data, with errors being artifacts of the data retrieval process. In this paper, we introduce several new trace generation models and open questions relevant to trace reconstruction for immunogenomics and DNA data storage, survey theoretical results on trace reconstruction, and highlight their connections to computational biology. Throughout, we discuss the applicability and shortcomings of known solutions and suggest future research directions. ",Trace Reconstruction Problems in Computational Biology,1,"['Check out our new review paper on applications of the trace reconstruction problems in two fields of computational biology: immunogenomics and DNA storage!\n\nI am proud to be a part of this collaboration: Vinnu Bhardwaj, Pavel Pevzner, and @CyrusRashtchian\n\n ']",20,10,268
107,66,1383090332401799171,2485053080,Swarnadeep Saha,"Excited to share our new work on ""ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning""! Has been a long effort and a great learning experience too 🙂 Joint work w. @prateeky2806 @lbauer119 @mohitban47 @uncnlp Paper: 1/5 Commonsense reasoning tasks are usually discriminative, thus failing to eval. model's ability to reason+explain preds with underlying knowledge(+allowing shortcuts). We propose a new ""generative+structured"" task for generating ""commonsense expl. graphs"" for stance prediction. 2/5 Graphs are structured, so explicitly explain+eval model reasoning by visually laying out relevant context & commonsense knowldg edges/chains/subgraphs. We collect a dataset of non-trivial, complete, unambiguous explanations thru Collect-Judge-Refine graph-annotation framework 3/5 We also propose a multi-level evaluation framework that checks for generated graph's structural + semantic correctness (as judged by stance inference capability given the belief and graph) + predicted graph’s plausibility wrt gold graphs + each individual edge's importance. 4/5 Initial baselines w. BART+T5 show that they fail to generate meaningful expln graphs, leaving large gap with human performance; & we hope this will encourage future work by community on better structured models for challenging new commonsense graph-based expln generation task 5/5 @aman_madaan @prateeky2806 @lbauer119 @mohitban47 @uncnlp Thanks, @aman_madaan for the appreciation and the pointer (we'll cite it in our next version).",https://arxiv.org/abs/2104.07644,"Recent commonsense-reasoning tasks are typically discriminative in nature, where a model answers a multiple-choice question for a certain context. Discriminative tasks are limiting because they fail to adequately evaluate the model's ability to reason and explain predictions with underlying commonsense knowledge. They also allow such models to use reasoning shortcuts and not be ""right for the right reasons"". In this work, we present ExplaGraphs, a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction. Specifically, given a belief and an argument, a model has to predict if the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as non-trivial, complete, and unambiguous explanation for the predicted stance. We collect explanation graphs through a novel Create-Verify-And-Refine graph collection framework that improves the graph quality (up to 90%) via multiple rounds of verification and refinement. A significant 79% of our graphs contain external commonsense nodes with diverse structures and reasoning depths. Next, we propose a multi-level evaluation framework, consisting of automatic metrics and human evaluation, that check for the structural and semantic correctness of the generated graphs and their degree of match with ground-truth graphs. Finally, we present several structured, commonsense-augmented, and text generation models as strong starting points for this explanation graph generation task, and observe that there is a large gap with human performance, thereby encouraging future work for this new challenging task. ExplaGraphs will be publicly available at this https URL ","ExplaGraphs: An Explanation Graph Generation Task for Structured
Commonsense Reasoning",6,"['Excited to share our new work on ""ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning""! Has been a long effort and a great learning experience too 🙂\n\nJoint work w. @prateeky2806 @lbauer119 @mohitban47 @uncnlp\nPaper: \n1/5 ', 'Commonsense reasoning tasks are usually discriminative, thus failing to eval. model\'s ability to reason+explain preds with underlying knowledge(+allowing shortcuts). We propose a new ""generative+structured"" task for generating ""commonsense expl. graphs"" for stance prediction. 2/5', 'Graphs are structured, so explicitly explain+eval model reasoning by visually laying out relevant context & commonsense knowldg edges/chains/subgraphs. We collect a dataset of non-trivial, complete, unambiguous explanations thru Collect-Judge-Refine graph-annotation framework 3/5', ""We also propose a multi-level evaluation framework that checks for generated graph's structural + semantic correctness (as judged by stance inference capability given the belief and graph) + predicted graph’s plausibility wrt gold graphs + each individual edge's importance. 4/5"", 'Initial baselines w. BART+T5 show that they fail to generate meaningful expln graphs, leaving large gap with human performance; & we hope this will encourage future work by community on better structured models for challenging new commonsense graph-based expln generation task 5/5', ""@aman_madaan @prateeky2806 @lbauer119 @mohitban47 @uncnlp Thanks, @aman_madaan for the appreciation and the pointer (we'll cite it in our next version).""]",21,04,1542
108,32,1065609019887374338,797888987675365377,Tom Rainforth,"Statistical Verification of Neural Networks: a new approach to verification that provides an informative notion of how robust a network is, rather than just a binary SAT/UNSAT assertion. New paper from @stefan_webb, @yeewhye, M. Pawan Kumar, and myself ",https://arxiv.org/abs/1811.07209,"We present a new approach to assessing the robustness of neural networks based on estimating the proportion of inputs for which a property is violated. Specifically, we estimate the probability of the event that the property is violated under an input model. Our approach critically varies from the formal verification framework in that when the property can be violated, it provides an informative notion of how robust the network is, rather than just the conventional assertion that the network is not verifiable. Furthermore, it provides an ability to scale to larger networks than formal verification approaches. Though the framework still provides a formal guarantee of satisfiability whenever it successfully finds one or more violations, these advantages do come at the cost of only providing a statistical estimate of unsatisfiability whenever no violation is found. Key to the practical success of our approach is an adaptation of multi-level splitting, a Monte Carlo approach for estimating the probability of rare events, to our statistical robustness framework. We demonstrate that our approach is able to emulate formal verification procedures on benchmark problems, while scaling to larger networks and providing reliable additional information in the form of accurate estimates of the violation probability. ",A Statistical Approach to Assessing Neural Network Robustness,1,"['Statistical Verification of Neural Networks: a new approach to verification that provides an informative notion of how robust a network is, rather than just a binary SAT/UNSAT assertion. New paper from @stefan_webb, @yeewhye, M. Pawan Kumar, and myself ']",18,11,259
109,55,1016732820763529217,3716338821,Mikko Tuomi,"We believe there are three planets orbiting #LHS1140 based on a careful bias-minimising analysis of HARPS radial velocity data. Our new submitted paper: ""Minimizing the bias in exoplanet detection - application to radial velocities of LHS 1140"" ",https://arxiv.org/abs/1807.02483,"A rocky planet orbiting LHS 1140 with a period of 24.7d has been found based on the discovery of transits in its light and high precision radial velocity data (Dittmann et al. 2017). This discovery by two independent methods is an observational tour-de-force, however, we find that a conservative analysis of the data gives a different solution. A three planet system is apparent in the radial velocity data based on our diagnosis of stellar activity. We encourage further targeted photometric and radial velocity observations in order to constrain the mini-Neptune and super-Earth mass objects apparently causing the 3.8 and 90 day radial velocity signals. We use our package Agatha (this https URL) to provide a comprehensive strategy to disentangle planetary signals from stellar activity in radial velocity data. ","Minimizing the bias in exoplanet detection - application to radial
velocities of LHS 1140",1,"['We believe there are three planets orbiting #LHS1140 based on a careful bias-minimising analysis of HARPS radial velocity data.\n\nOur new submitted paper: ""Minimizing the bias in exoplanet detection - application to radial velocities of LHS 1140"" ']",18,07,258
110,196,1369232600720670720,494870213,Thomas Haworth,"New paper today where I take a look at how dust in discs gets warmed near massive stars. If this isn't accounted for we end up overestimating how massive the disc is... 1/2 This means that if radiation from massive stars reduces the disc mass, we could be suppressing the signature of that if we assume that the dust in discs is relatively cold (or assume the same dust temperature for all discs) ",https://arxiv.org/abs/2103.03950,"Dust plays a key role in the formation of planets and its emission also provides one of our most accessible views of protoplanetary discs. If set by radiative equilibrium with the central star, the temperature of dust in the disc plateaus at around $10-20$K in the outer regions. However sufficiently nearby massive stars can heat the outer disc to substantially higher temperatures. In this paper we study the radiative equilibrium temperature of discs in the presence of massive external sources and gauge the effect that it has on millimetre dust mass estimates. Since millimetre grains are not entrained in any wind we focus on geometrically simple 2D-axisymmetric disc models using radiative transfer calculations with both the host star and an external source. Recent surveys have searched for evidence of massive stars influencing disc evolution using disc properties as a function of projected separation. In assuming a disc temperature of $20$K for a disc a distance $D$ from a strong radiation source, disc masses are overestimated by a factor that scales with $D^{-1/2}$ interior to the separation that external heating becomes important. This could significantly alter dust mass estimates of discs in close proximity to $\theta^1$C in the Orion Nebular Cluster. We also make an initial assessment of the effect upon snow lines. Within a parsec of an O star like $\theta^1$C a CO snow line no longer exists, though the water snow line is virtually unaffected except for very close separations of $\leq0.01\,$pc. ",Warm millimetre dust in protoplanetary discs near massive stars,2,"[""New paper today where I take a look at how dust in discs gets warmed near massive stars. If this isn't accounted for we end up overestimating how massive the disc is... 1/2 \n\n "", 'This means that if radiation from massive stars reduces the disc mass, we could be suppressing the signature of that if we assume that the dust in discs is relatively cold (or assume the same dust temperature for all discs) https://t.co/3LPC4gxt19']",21,03,418
111,176,1443062326295506944,1235230714922184712,Nupoor Gandhi,"Happy to share our new paper: Improving Span Representation for Domain-adapted Coreference Resolution () in the Fourth Workshop on Computational Models of Reference, Anaphora and Coreference #EMNLP2021 ! joint work with @anjalie_f and Yulia Tsvetkov We propose two new losses to incorporate external knowledge for more data-efficient fine-tuning of coreference models.",https://arxiv.org/abs/2109.09811,"Recent work has shown fine-tuning neural coreference models can produce strong performance when adapting to different domains. However, at the same time, this can require a large amount of annotated target examples. In this work, we focus on supervised domain adaptation for clinical notes, proposing the use of concept knowledge to more efficiently adapt coreference models to a new domain. We develop methods to improve the span representations via (1) a retrofitting loss to incentivize span representations to satisfy a knowledge-based distance function and (2) a scaffolding loss to guide the recovery of knowledge from the span representation. By integrating these losses, our model is able to improve our baseline precision and F-1 score. In particular, we show that incorporating knowledge with end-to-end coreference models results in better performance on the most challenging, domain-specific spans. ",Improving Span Representation for Domain-adapted Coreference Resolution,2,"['Happy to share our new paper:\xa0Improving Span Representation for Domain-adapted Coreference Resolution () in the\xa0Fourth Workshop on Computational Models of Reference, Anaphora and Coreference #EMNLP2021 !\n\njoint work with @anjalie_f and Yulia Tsvetkov', 'We propose two new losses to incorporate external knowledge for more data-efficient fine-tuning of coreference models.']",21,09,374
112,14,1466003548144082944,456819625,Lawrence Bull,"Our new paper - using a mixture of interpretable #GaussianProcesses to model overlapping power trends in wind farm data Rather than removing data, we automatically model different power relationships #windenergy #machinelearning Thanks to everyone involved! Those on Twitter: @drgTim @dervilisTheDRG @lizzyintheDRG",http://arxiv.org/abs/2111.15496,"Power curves capture the relationship between wind speed and output power for a specific wind turbine. Accurate regression models of this function prove useful in monitoring, maintenance, design, and planning. In practice, however, the measurements do not always correspond to the ideal curve: power curtailments will appear as (additional) functional components. Such multivalued relationships cannot be modelled by conventional regression, and the associated data are usually removed during pre-processing. The current work suggests an alternative method to infer multivalued relationships in curtailed power data. Using a population-based approach, an overlapping mixture of probabilistic regression models is applied to signals recorded from turbines within an operational wind farm. The model is shown to provide an accurate representation of practical power data across the population. ","Bayesian Modelling of Multivalued Power Curves from an Operational Wind
Farm",2,"['Our new paper - using a mixture of interpretable #GaussianProcesses to model overlapping power trends in wind farm data\n\n\n\n\nRather than removing data, we automatically model different power relationships\n#windenergy #machinelearning ', 'Thanks to everyone involved! Those on Twitter: @drgTim @dervilisTheDRG @lizzyintheDRG']",21,11,335
113,0,1525124399673884673,1705189098,Ani Eloyan,New paper by now former student Dr. @KUNMENG2 on statistical inference of shapes via the Smooth Euler Characteristic Transform with @lorin_crawford. We provide mathematical foundations for randomness of shapes and present methods for hypothesis testing ,https://arxiv.org/abs/2204.12699,"In this paper, we provide the mathematical foundations for the randomness of shapes and the distributions of smooth Euler characteristic transform. Based on these foundations, we propose an approach for testing hypotheses on random shapes. Simulation studies are provided to support our mathematical derivations and show the performance of our proposed hypothesis testing framework. Our discussions connect the following fields: algebraic and computational topology, probability theory and stochastic processes, Sobolev spaces and functional analysis, statistical inference, and medical imaging. ","Randomness and Statistical Inference of Shapes via the Smooth Euler
Characteristic Transform",1,['New paper by now former student Dr. \n@KUNMENG2 on statistical inference of shapes via the Smooth Euler Characteristic Transform with \n@lorin_crawford. We provide mathematical foundations for randomness of shapes and present methods for hypothesis testing '],22,04,266
114,4,1379084866906583040,838292815,Ofir Nachum,"Policy eval/selection is a super impactful but woefully overlooked area of RL research. In our new paper ( accepted to ICLR'21) we build an OPE benchmark on top of D4RL & RLUnplugged, which we hope will encourage deep RL researchers to look at this problem. Link for data and code: ",https://arxiv.org/abs/2103.16596,"Off-policy evaluation (OPE) holds the promise of being able to leverage large, offline datasets for both evaluating and selecting complex policies for decision making. The ability to learn offline is particularly important in many real-world domains, such as in healthcare, recommender systems, or robotics, where online data collection is an expensive and potentially dangerous process. Being able to accurately evaluate and select high-performing policies without requiring online interaction could yield significant benefits in safety, time, and cost for these applications. While many OPE methods have been proposed in recent years, comparing results between papers is difficult because currently there is a lack of a comprehensive and unified benchmark, and measuring algorithmic progress has been challenging due to the lack of difficult evaluation tasks. In order to address this gap, we present a collection of policies that in conjunction with existing offline datasets can be used for benchmarking off-policy evaluation. Our tasks include a range of challenging high-dimensional continuous control problems, with wide selections of datasets and policies for performing policy selection. The goal of our benchmark is to provide a standardized measure of progress that is motivated from a set of principles designed to challenge and test the limits of existing OPE methods. We perform an evaluation of state-of-the-art algorithms and provide open-source access to our data and code to foster future research in this area. ",Benchmarks for Deep Off-Policy Evaluation,2,"[""Policy eval/selection is a super impactful but woefully overlooked area of RL research. In our new paper ( accepted to ICLR'21) we build an OPE benchmark on top of D4RL & RLUnplugged, which we hope will encourage deep RL researchers to look at this problem."", 'Link for data and code: https://t.co/bJtDKwpdjB']",21,03,294
115,25,1288833900991729666,1284439222187892736,Enrico Fontana,"First paper out! With the wonderful people at @LosAlamosNatLab Quantum Computing Summer School we just released a paper on a new phenomenon that we called Noise Induced Barren Plateaus. This has implications for Quantum Neural Networks on NISQ devices. Technical summary: for a model of Pauli noise, with sufficient noise both the cost function and its gradient in any direction decay exponentially in depth. This suggests that deep noisy QNNs are untrainable. Compared to noiseless BPs, this is not a statistical phenomenon! Understandable summary: under certain conditions, the noise of current quantum computer builds up extremely quickly as computations become more complicated. This means that we must make better quantum computers or risk ending up with garbage results. Shoutout to the first author @samson_wang for the truly impressive work, and to the magic duo @kunal_phy and @MvsCerezo for the invaluable effort and guidance. Thanks to @SoneAkira @LCincio for the contributions and to the big boss @ColesQuantum for having made QCSS possible. Check out @MvsCerezo 's thread for an excellent memesplanation of the results! 👇 ",https://arxiv.org/abs/2007.14384,"Variational Quantum Algorithms (VQAs) may be a path to quantum advantage on Noisy Intermediate-Scale Quantum (NISQ) computers. A natural question is whether noise on NISQ devices places fundamental limitations on VQA performance. We rigorously prove a serious limitation for noisy VQAs, in that the noise causes the training landscape to have a barren plateau (i.e., vanishing gradient). Specifically, for the local Pauli noise considered, we prove that the gradient vanishes exponentially in the number of qubits $n$ if the depth of the ansatz grows linearly with $n$. These noise-induced barren plateaus (NIBPs) are conceptually different from noise-free barren plateaus, which are linked to random parameter initialization. Our result is formulated for a generic ansatz that includes as special cases the Quantum Alternating Operator Ansatz and the Unitary Coupled Cluster Ansatz, among others. For the former, our numerical heuristics demonstrate the NIBP phenomenon for a realistic hardware noise model. ",Noise-Induced Barren Plateaus in Variational Quantum Algorithms,5,"['First paper out!\n\nWith the wonderful people at @LosAlamosNatLab Quantum Computing Summer School we just released a paper on a new phenomenon that we called Noise Induced Barren Plateaus.\n\nThis has implications for Quantum Neural Networks on NISQ devices.\n\n', 'Technical summary: for a model of Pauli noise, with sufficient noise both the cost function and its gradient in any direction decay exponentially in depth. This suggests that deep noisy QNNs are untrainable.\n\nCompared to noiseless BPs, this is not a statistical phenomenon!', 'Understandable summary: under certain conditions, the noise of current quantum computer builds up extremely quickly as computations become more complicated. \n\nThis means that we must make better quantum computers or risk ending up with garbage results.', 'Shoutout to the first author @samson_wang for the truly impressive work, and to the magic duo @kunal_phy and @MvsCerezo for the invaluable effort and guidance.\n\nThanks to @SoneAkira @LCincio for the contributions and to the big boss @ColesQuantum for having made QCSS possible.', ""Check out @MvsCerezo 's thread for an excellent memesplanation of the results! 👇\n\nhttps://t.co/zOzhTZy7Zw""]",20,07,1149
116,179,1454147943955513345,45675087,Devi Parikh,"A study led by @safinahaali on whether AI models can inspire creativity across modalities! We find that generated visuals inspire human creativity in storytelling. Specifically, they benefit divergent aspects of creativity but hinder convergent thinking. ",https://arxiv.org/abs/2110.14810,"Can visual artworks created using generative visual algorithms inspire human creativity in storytelling? We asked writers to write creative stories from a starting prompt, and provided them with visuals created by generative AI models from the same prompt. Compared to a control group, writers who used the visuals as story writing aid wrote significantly more creative, original, complete and visualizable stories, and found the task more fun. Of the generative algorithms used (BigGAN, VQGAN, DALL-E, CLIPDraw), VQGAN was the most preferred. The control group that did not view the visuals did significantly better in integrating the starting prompts. Findings indicate that cross modality inputs by AI can benefit divergent aspects of creativity in human-AI co-creation, but hinders convergent thinking. ",Telling Creative Stories Using Generative Visual Aids,1,"['A study led by @safinahaali on whether AI models can inspire creativity across modalities! We find that generated visuals inspire human creativity in storytelling. Specifically, they benefit divergent aspects of creativity but hinder convergent thinking.\n\n ']",21,10,268
117,199,1346017587495383041,2444302555,Ludovic Denoyer,"Happy new year ! I am happy to share this exciting work made with @TomVeniat and @MarcRanzato where we propose new metrics and a benchmark to evaluate multiple dimensions of transfer in continual learning. 1/2 In addition, we also propose a simple (MNTDP) but very effective NAS-based approach that outperforms most of existing methods on multiple of these dimensions. 2/2",https://arxiv.org/abs/2012.12631,"Existing literature in Continual Learning (CL) has focused on overcoming catastrophic forgetting, the inability of the learner to recall how to perform tasks observed in the past. There are however other desirable properties of a CL system, such as the ability to transfer knowledge from previous tasks and to scale memory and compute sub-linearly with the number of tasks. Since most current benchmarks focus only on forgetting using short streams of tasks, we first propose a new suite of benchmarks to probe CL algorithms across these new axes. Finally, we introduce a new modular architecture, whose modules represent atomic skills that can be composed to perform a certain task. Learning a task reduces to figuring out which past modules to re-use, and which new modules to instantiate to solve the current task. Our learning algorithm leverages a task-driven prior over the exponential search space of all possible ways to combine modules, enabling efficient learning on long streams of tasks. Our experiments show that this modular architecture and learning algorithm perform competitively on widely used CL benchmarks while yielding superior performance on the more challenging benchmarks we introduce in this work. ","Efficient Continual Learning with Modular Networks and Task-Driven
Priors",2,"['Happy new year !\n\nI am happy to share this exciting work made with @TomVeniat and @MarcRanzato where we propose new metrics and a benchmark to evaluate multiple dimensions of transfer in continual learning. \n\n1/2 ', 'In addition, we also propose a simple (MNTDP) but very effective NAS-based approach that outperforms most of existing methods on multiple of these dimensions. 2/2']",20,12,386
118,134,1228253588771737601,971035265933479936,Josu C. Aurrekoetxea,"New paper with Thomas Helfer (@Thomas_Italy) and Eugene Lim (@tukohbin)! Coherent Gravitational Waveforms and Memory from Cosmic String Loops @Thomas_Italy @tukohbin We construct, for the first time, the time-domain gravitational wave strain waveform from the collapse of a strongly gravitating Abelian Higgs cosmic string loop in full general relativity. Here is a summary video we put on YouTube: @Thomas_Italy @tukohbin We found that the strain exhibits a large memory effect during merger, ending with a burst and the characteristic ringdown as a black hole is formed. @Thomas_Italy @tukohbin We think the nature of this memory arises from the fact that post-merger, there is a loss of matter emitted axially in ultra-relativistic jets – and hence is highly aspherical. Check these jets in this video: @Thomas_Italy @tukohbin We also investigated the waveform and energy emitted as a function of string width, loop radius and string tension Gμ and we found that while it doesnt show a strong dependence on the width and loop radius, the lighter the strings (lower Gμ), the **more** GWs! @Thomas_Italy @tukohbin These are ultra-relativistic events, the BH forms when the loop is moving at speed v > 0.99c (see the Lorentz contraction in the previous video). We believe that E_GW is dominated by kinematics since lower tension loops collapse at higher velocities (\gamma > 40!)",https://arxiv.org/abs/2002.05177,"We construct, for the first time, the time-domain gravitational wave strain waveform from the collapse of a strongly gravitating Abelian Higgs cosmic string loop in full general relativity. We show that the strain exhibits a large memory effect during merger, ending with a burst and the characteristic ringdown as a black hole is formed. Furthermore, we investigate the waveform and energy emitted as a function of string width, loop radius and string tension $G\mu$. We find that the mass normalized gravitational wave energy displays a strong dependence on the inverse of the string tension $E_{\mathrm{GW}}/M_0\propto 1/G\mu$, with $E_{\mathrm{GW}}/M_0 \sim {\cal O}(1)\%$ at the percent level, for the regime where $G\mu\gtrsim10^{-3}$. Conversely, we show that the efficiency is only weakly dependent on the initial string width and initial loop radii. Using these results, we argue that gravitational wave production is dominated by kinematical instead of geometrical considerations. ",Coherent Gravitational Waveforms and Memory from Cosmic String Loops,6,"['New paper with Thomas Helfer (@Thomas_Italy) and Eugene Lim (@tukohbin)! Coherent Gravitational Waveforms and Memory from Cosmic String Loops \n \n ', '@Thomas_Italy @tukohbin We construct, for the first time, the time-domain gravitational wave strain waveform from the collapse of a strongly gravitating Abelian Higgs cosmic string loop in full general relativity.\n \nHere is a summary video we put on YouTube: https://t.co/lLoi5r7JHl', '@Thomas_Italy @tukohbin We found that the strain exhibits a large memory effect during merger, ending with a burst and the characteristic ringdown as a black hole is formed. 
https://t.co/idZkuyBOio', '@Thomas_Italy @tukohbin We think the nature of this memory arises from the fact that post-merger, there is a loss of matter emitted axially in ultra-relativistic jets – and hence is highly aspherical.\n\nCheck these jets in this video: https://t.co/qPJoIwzBnx', '@Thomas_Italy @tukohbin We also investigated the waveform and energy emitted as a function of string width, loop radius and string tension Gμ and we found that while it doesnt show a strong dependence on the width and loop radius, the lighter the strings (lower Gμ), the **more** GWs! https://t.co/lCUZDISJ5h', '@Thomas_Italy @tukohbin These are ultra-relativistic events, the BH forms when the loop is moving at speed v > 0.99c (see the Lorentz contraction in the previous video). We believe that E_GW is dominated by kinematics since lower tension loops collapse at higher velocities (\\gamma > 40!)']",20,02,1429
119,193,1357717456475705344,830120476282408960,Nienke van der Marel,"Really proud of @BrodieNorfolk, whose paper on transition disks with @almaobs and ATCA just got accepted! In this multi-wavelength study we compare the dust cavities in mm vs cm emission for 15 transition disks. #proudsupervisor #prettydiskimages",https://arxiv.org/abs/2102.02316,"The origin of the inner dust cavities observed in transition discs remains unknown. The segregation of dust and size of the cavity is expected to vary depending on which clearing mechanism dominates grain evolution. We present the results from the Discs Down Under program, an 8.8 mm continuum Australia Telescope Compact Array (ATCA) survey targeting 15 transition discs with large (> 20 au) cavities, and compare the resulting dust emission to Atacama Large millimetre/sub-millimetre Array (ALMA) observations. Our ATCA observations resolve the inner cavity for 8 of the 14 detected discs. We fit the visibilities and reconstruct 1D radial brightness models for 10 sources with a S/N > 5sigma. We find that, for sources with a resolved cavity in both wavebands, the 8.8 mm and sub-mm brightness distributions peak at the same radius from the star. We suggest that a similar cavity size for 8.8 mm and sub-mm dust grains is due to a dust trap induced by the presence of a companion. ","Dust Traps and the Formation of Cavities in Transition Discs: A
millimetre to sub-millimetre comparison survey",1,"['Really proud of @BrodieNorfolk, whose paper on transition disks with @almaobs and ATCA just got accepted! In this multi-wavelength study we compare the dust cavities in mm vs cm emission for 15 transition disks.\n\n#proudsupervisor #prettydiskimages']",21,02,253
120,49,1205496598433689600,119837224,Jason Baldridge,"New paper on extending ML models toward human-level language understanding! It's a joint effort with Jay McClelland, @FelixHill84, Maja Rudolph, and Hinrich Schütze that integrates our diverse perspectives on cognition, grounding, modeling and language. Key takeaway: we have seen tremendous progress with estimation of deep, contextualized models, and we argue for a renewed focus on modeling situations and objects, inspired by cognitive models and driven by grounding in active environments. Both Jay and I are speaking today at the ViGIL workshop and will both cover aspects of this and our own perspectives in these general topics! @FelixHill84 Thanks! @volkancirik Will post later! Bug me if you don’t see them. :-) @texastacos @Lextremist @FelixHill84 Wombats are also so cute! And they have cube-shaped poop!",https://arxiv.org/abs/1912.05877,"Language is crucial for human intelligence, but what exactly is its role? We take language to be a part of a system for understanding and communicating about situations. The human ability to understand and communicate about situations emerges gradually from experience and depends on domain-general principles of biological neural networks: connection-based learning, distributed representation, and context-sensitive, mutual constraint satisfaction-based processing. Current artificial language processing systems rely on the same domain general principles, embodied in artificial neural networks. Indeed, recent progress in this field depends on \emph{query-based attention}, which extends the ability of these systems to exploit context and has contributed to remarkable breakthroughs. Nevertheless, most current models focus exclusively on language-internal tasks, limiting their ability to perform tasks that depend on understanding situations. These systems also lack memory for the contents of prior situations outside of a fixed contextual span. We describe the organization of the brain's distributed understanding system, which includes a fast learning system that addresses the memory problem. We sketch a framework for future models of understanding drawing equally on cognitive neuroscience and artificial intelligence and exploiting query-based attention. We highlight relevant current directions and consider further developments needed to fully capture human-level language understanding in a computational system. ","Extending Machine Language Models toward Human-Level Language
Understanding",6,"[""New paper on extending ML models toward human-level language understanding! It's a joint effort with Jay McClelland, @FelixHill84, Maja Rudolph, and Hinrich Schütze that integrates our diverse perspectives on cognition, grounding, modeling and language.\n\n "", 'Key takeaway: we have seen tremendous progress with estimation of deep, contextualized models, and we argue for a renewed focus on modeling situations and objects, inspired by cognitive models and driven by grounding in active environments. https://t.co/CQxNk2EGhn', 'Both Jay and I are speaking today at the ViGIL workshop and will both cover aspects of this and our own perspectives in these general topics!\n\nhttps://t.co/cI8ScrNKWV', '@FelixHill84 Thanks!', '@volkancirik Will post later! Bug me if you don’t see them. :-)', '@texastacos @Lextremist @FelixHill84 Wombats are also so cute! And they have cube-shaped poop!']",19,12,844
121,37,705205683554156544,2337598033,Geraint F. Lewis,"Ace new paper on arxiv on Major Substructure in the M31 Outer Halo with @nfmartin1980 & @pascaljelahi @nfmartin1980 @pascaljelahi @ickbat @dougalmackey sorry dudes, my excuse was that I was a cloud computing conference and tweeting on the fly @nfmartin1980 @pascaljelahi @ickbat @dougalmackey you can do things like that ""in the cloud""!!!!",http://arxiv.org/abs/1603.00528,"We present a renewed look at M31's Giant Stellar Stream along with the nearby structures Stream C and Stream D, exploiting a new algorithm capable of fitting to the red giant branch (RGB) of a structure in both colour and magnitude space. Using this algorithm, we are able to generate probability distributions in distance, metallicity and RGB width for a series of subfields spanning these structures. Specifically, we confirm a distance gradient of approximately 20 kpc per degree along a 6 degree extension of the Giant Stellar Stream, with the farthest subfields from M31 lying ~ 120 kpc more distant than the inner-most subfields. Further, we find a metallicity that steadily increases from -0.7^{+0.1}_{-0.1} dex to -0.2^{+0.2}_{-0.1} dex along the inner half of the stream before steadily dropping to a value of -1.0^{+0.2}_{-0.2} dex at the farthest reaches of our coverage. The RGB width is found to increase rapidly from 0.4^{+0.1}_{-0.1} dex to 1.1^{+0.2}_{-0.1} dex in the inner portion of the stream before plateauing and decreasing marginally in the outer subfields of the stream. In addition, we estimate Stream C to lie at a distance between 794 and 862 kpc and Stream D between 758 kpc and 868 kpc. We estimate the median metallicity of Stream C to lie in the range -0.7 to -1.6 dex and a metallicity of -1.1^{+0.3}_{-0.2} dex for Stream D. RGB widths for the two structures are estimated to lie in the range 0.4 to 1.2 dex and 0.3 to 0.7 dex respectively. In total, measurements are obtained for 19 subfields along the Giant Stellar Stream, 4 along Stream C, 5 along Stream D and 3 general M31 spheroid fields for comparison. We thus provide a higher resolution coverage of the structures in these parameters than has previously been available in the literature. ","Major Substructure in the M31 Outer Halo: Distances and Metallicities
along the Giant Stellar Stream",3,"['Ace new paper on arxiv on Major Substructure in the M31 Outer Halo with @nfmartin1980 & @pascaljelahi \n', '@nfmartin1980 @pascaljelahi @ickbat @dougalmackey sorry dudes, my excuse was that I was a cloud computing conference and tweeting on the fly', '@nfmartin1980 @pascaljelahi @ickbat @dougalmackey you can do things like that ""in the cloud""!!!!']",16,03,348
122,254,1367819732641054729,1115299382113517568,Matthew Ware,Mid-circuit measurement is critical for QEC as well as near-term applications. Here we @gribeill @Luke_Govia extend GST to include quantum instruments and study our mid-circuit readout . Always fun working with Sandia! I didn't tag @KRudinger in this because I couldn't find his Twitter handle this morning! I am an idiot as advertised,https://arxiv.org/abs/2103.03008,"Measurements that occur within the internal layers of a quantum circuit -- mid-circuit measurements -- are an important quantum computing primitive, most notably for quantum error correction. Mid-circuit measurements have both classical and quantum outputs, so they can be subject to error modes that do not exist for measurements that terminate quantum circuits. Here we show how to characterize mid-circuit measurements, modelled by quantum instruments, using a technique that we call quantum instrument linear gate set tomography (QILGST). We then apply this technique to characterize a dispersive measurement on a superconducting transmon qubit within a multiqubit system. By varying the delay time between the measurement pulse and subsequent gates, we explore the impact of residual cavity photon population on measurement error. QILGST can resolve different error modes and quantify the total error from a measurement; in our experiment, for delay times above 1000 ns we measured a total error rate (i.e., half diamond distance) of $\epsilon_{\diamond} = 8.1 \pm 1.4 \%$, a readout fidelity of $97.0 \pm 0.3\%$, and output quantum state fidelities of $96.7 \pm 0.6\%$ and $93.7 \pm 0.7\%$ when measuring $0$ and $1$, respectively. ","Characterizing mid-circuit measurements on a superconducting qubit using
gate set tomography",2,"['Mid-circuit measurement is critical for QEC as well as near-term applications. Here we @gribeill @Luke_Govia extend GST to include quantum instruments and study our mid-circuit readout . Always fun working with Sandia! ', ""I didn't tag @KRudinger in this because I couldn't find his Twitter handle this morning! I am an idiot as advertised""]",21,03,348
123,45,1152030965872513024,1019760963569049601,Almog Yalinewich,"Our new paper is on the arxiv. We study the optical transient from an explosion close to the surface of a star. Such an explosion can occur due to super Eddington accretion of a compact companion in a common envelope, and can be a precursor to a supernova ",https://arxiv.org/abs/1907.07689,"We study the hydrodynamic evolution of an explosion close to the stellar surface, and give predictions for the radiation from such an event. We show that such an event will give rise to a multi-wavelength transient. We apply this model to describe a precursor burst to the peculiar supernova iPTF14hls, which occurred in 1954, sixty year before the supernova. We propose that the new generation of optical surveys might detect similar transients, and they can be used to identify supernova progenitors well before the explosion. ",Optical Transient from an Explosion Close to the Stellar Surface,1,"['Our new paper is on the arxiv. We study the optical transient from an explosion close to the surface of a star. Such an explosion can occur due to super Eddington accretion of a compact companion in a common envelope, and can be a precursor to a supernova ']",19,07,269
124,90,1227573049488052225,96779364,Arnab Bhattacharyya,"New paper out! ""Efficiently learning and sampling interventional distributions from observations"" with Gayen, @Saravanan_CU, Maran and Vinodchandran: #causalML #causalinference Causal inference is about predicting what happens in an imagined world that you don't have access to. E.g.: Am I more likely to get better if I take the medicine versus if I don't? How is the sale for product X affected if ads for X are slashed by 20%? @yudapearl and collaborators have thought long and hard about such questions. They realized that to properly formulate causal problems, one needs a model to describe how variables causally depend on each other. What they proposed is using Bayes nets to encode causal info. A basic question in this setup: given a Bayes net P on a set of variables, infer how the distribution would change if a particular variable is externally set (""intervened"") to a fixed value. @yudapearl & Jin Tian characterized the class of graphs for which this task is possible. What we do in our paper is make their result algorithmic. We show conditions (nearly tight) under which there are efficient algorithms (both in terms of samples & time) to infer the interventional distribution using samples from the observational distribution. TCS meets CI! Some thoughts from behind-the-curtain: reasoning about interventions is really very, very slippery! In particular, efficiently generating samples from the interventional distribution was unexpectedly quite tricky to do. Open problems! Efficient non-parametric algorithms for inferring interventions on several variables, estimating individual causal effects, estimating transportability error, etc. Can we do better if we add realistic parametric assumptions? Efficiently mitigating selection bias?",https://arxiv.org/abs/2002.04232,"We study the problem of efficiently estimating the effect of an intervention on a single variable (atomic interventions) using observational samples in a causal Bayesian network. Our goal is to give algorithms that are efficient in both time and sample complexity in a non-parametric setting. Tian and Pearl (AAAI `02) have exactly characterized the class of causal graphs for which causal effects of atomic interventions can be identified from observational data. We make their result quantitative. Suppose P is a causal model on a set $\vec{V}$ of n observable variables with respect to a given causal graph G with observable distribution $P$. Let $P_x$ denote the interventional distribution over the observables with respect to an intervention of a designated variable X with x. Assuming that $G$ has bounded in-degree, bounded c-components ($k$), and that the observational distribution is identifiable and satisfies certain strong positivity condition, we give an algorithm that takes $m=\tilde{O}(n\epsilon^{-2})$ samples from $P$ and $O(mn)$ time, and outputs with high probability a description of a distribution $\hat{P}$ such that $d_{\mathrm{TV}}(P_x, \hat{P}) \leq \epsilon$, and: 1. [Evaluation] the description can return in $O(n)$ time the probability $\hat{P}(\vec{v})$ for any assignment $\vec{v}$ to $\vec{V}$ 2. [Generation] the description can return an iid sample from $\hat{P}$ in $O(n)$ time. We also show lower bounds for the sample complexity showing that our sample complexity has an optimal dependence on the parameters $n$ and $\epsilon$, as well as if $k=1$ on the strong positivity parameter. 
",Learning and Sampling of Atomic Interventions from Observations,7,"['New paper out! ""Efficiently learning and sampling interventional distributions from observations"" with Gayen, @Saravanan_CU, Maran and Vinodchandran: #causalML #causalinference', ""Causal inference is about predicting what happens in an imagined world that you don't have access to. E.g.: Am I more likely to get better if I take the medicine versus if I don't? How is the sale for product X affected if ads for X are slashed by 20%?"", '@yudapearl and collaborators have thought long and hard about such questions. They realized that to properly formulate causal problems, one needs a model to describe how variables causally depend on each other. What they proposed is using Bayes nets to encode causal info.', 'A basic question in this setup: given a Bayes net P on a set of variables, infer how the distribution would change if a particular variable is externally set (""intervened"") to a fixed value. @yudapearl & Jin Tian characterized the class of graphs for which this task is possible.', 'What we do in our paper is make their result algorithmic. We show conditions (nearly tight) under which there are efficient algorithms (both in terms of samples & time) to infer the interventional distribution using samples from the observational distribution. TCS meets CI!', 'Some thoughts from behind-the-curtain: reasoning about interventions is really very, very slippery! In particular, efficiently generating samples from the interventional distribution was unexpectedly quite tricky to do.', 'Open problems! Efficient non-parametric algorithms for inferring interventions on several variables, estimating individual causal effects, estimating transportability error, etc. Can we do better if we add realistic parametric assumptions? Efficiently mitigating selection bias?']",20,02,1763
125,2,1267437876905738240,2283510367,Eamonn Kerins,"New paper on arXiv () led by my PhD student, David Specht, describing our shiny new MaBulS-2 microlensing simulator. Try for yourself at We've tested it against the massive 8,000 event dataset from OGLE-IV. Spot the difference? The really amazing thing about MaBulS-2 is that it allows anyone to request a bespoke calculation in a few seconds that would take days of high-performance parallel computing to calculate directly! With MaBulS-2 we have a theoretical tool that is fit the the era of large-scale microlensing datasets. We're now much better placed to optimize how future billion-dollar surveys like @NASARoman and @ESA_Euclid can best look for cool exoplanets. #microlensing #exoplanets",https://arxiv.org/abs/2005.14668,"Galactic microlensing datasets now comprise in excess of $10^4$ events, and with the advent of next generation microlensing surveys that may be undertaken with facilities such as the Rubin Observatory (formerly LSST) and Roman Space Telescope (formerly WFIRST), this number will increase significantly. So too will the fraction of events with measurable higher order information such as finite source effects and lens-source relative proper motion. Analysing such data requires a more sophisticated Galactic microlens modeling approach. We present a new second-generation Manchester-Besan\c{c}on Microlensing Simulator (MaB$\mu$lS-2), which uses a version of the Besan\c{c}on population synthesis Galactic model that provides good agreement with stellar kinematics observed by HST towards the bulge. MaB$\mu$lS-2 provides high-fidelity signal-to-noise limited maps of the microlensing optical depth, rate and average timescale towards a 400 sq. degree region of the Galactic bulge in several optical to near-infrared pass-bands. The maps take full account of the unresolved stellar background as well as limb-darkened source profiles. Comparing MaB$\mu$lS-2 to the efficiency-corrected OGLE-IV 8,000 event sample shows a much improved agreement over the previous version of MaB$\mu$lS, and succeeds in matching even small-scale structural features in the OGLE-IV event rate map. However, there remains evidence for a small under-prediction in the event rate per source and over-prediction in timescale. MaB$\mu$lS-2 is available online () to provide on-the-fly maps for user supplied cuts in survey magnitude, event timescale and relative proper motion. ","MaB$\mu$lS-2: high-precision microlensing modelling for the large-scale
survey era",3,"[""New paper on arXiv () led by my PhD student, David Specht, describing our shiny new MaBulS-2 microlensing simulator. Try for yourself at \n\nWe've tested it against the massive 8,000 event dataset from OGLE-IV. Spot the difference? "", 'The really amazing thing about MaBulS-2 is that it allows anyone to request a bespoke calculation in a few seconds that would take days of high-performance parallel computing to calculate directly!', ""With MaBulS-2 we have a theoretical tool that is fit the the era of large-scale microlensing datasets. We're now much better placed to optimize how future billion-dollar surveys like @NASARoman and @ESA_Euclid can best look for cool exoplanets. #microlensing #exoplanets""]",20,05,717
126,140,1301112675754221570,131879500,John Ilee,"Excited to announce our new paper out today. We examine how the Square Kilometre Array (SKA, @SKA_telescope) will be able to observe and characterise planet-hosting young discs: We show that SKA1-MID will be able to observe the emission of cm-sized pebbles in such a disc, and particularly any gap/ring structure that is carved by planets forming. This is thanks to its extremely high resolution (and sensitivity). Just look at that uv coverage... 🤤 ALMA has made amazing progress at this over the past few years, but it is becoming clear that (sub)mm observations may not trace the bulk of the disc material. Moving to the cm overcomes this, and SKA will be key in determining accurate disc properties (watch this space). Huge thanks to @cassidentprone, @cwalshastrochem, Izaskun Jiménez-Serra, Christophe Pinte, Jason Terry, Tyler Bourke, and Melvin Hoare for helping pull this together (and of course @SKA_telescope!). We're planning a few more papers in this series, so keep an eye out for those. Finally, my curiosity got the better of me. I decided to see what something like HL Tau might look like with the SKA. Result: It's VERY cool (though a little artistic freedom applies here). The late 2020's will be exciting! ",https://arxiv.org/abs/2009.00562,"High angular resolution observations of discs at mm wavelengths (on scales of a few au) are now commonplace, but there is a current lack of a comparable angular resolution for observations at cm wavelengths. This presents a significant barrier to improving our understanding of planet formation, in particular how dust grains grow from mm to cm sizes. In this paper, we examine the ability of the Square Kilometre Array (SKA) to observe dust substructure in a young, planet-forming disc at cm wavelengths. We use dusty hydrodynamics and continuum radiative transfer to predict the distribution and emission of 1 cm dust grains (or pebbles) within the disc, and simulate continuum observations with the current SKA1-MID design baseline at frequencies of 12.5 GHz (Band 5b, ~2.4 cm) on 5-10 au scales. The SKA will provide high-fidelity observations of the cm dust emission substructure in discs for integration times totalling 100's of hours. Radial structure can be obtained at a sufficient resolution and S/N from shorter (10's of hours) integration times by azimuthal averaging in the image plane. By modelling the intensity distribution directly in the visibility plane, it is possible to recover a similar level of (axisymmetric) structural detail from observations with integration times 1-2 orders of magnitude lower than required for high-fidelity imaging. Our results demonstrate that SKA1-MID will provide crucial constraints on the distribution and morphology of the raw material for building planets, the pebbles in protoplanetary discs. ","Observing protoplanetary discs with the Square Kilometre Array -- I.
Characterising pebble substructure caused by forming planets",5,"['Excited to announce our new paper out today. We examine how the Square Kilometre Array (SKA, @SKA_telescope) will be able to observe and characterise planet-hosting young discs: ', 'We show that SKA1-MID will be able to observe the emission of cm-sized pebbles in such a disc, and particularly any gap/ring structure that is carved by planets forming. This is thanks to its extremely high resolution (and sensitivity). \n\nJust look at that uv coverage... 🤤 https://t.co/vUq0gr1Q0S', 'ALMA has made amazing progress at this over the past few years, but it is becoming clear that (sub)mm observations may not trace the bulk of the disc material. Moving to the cm overcomes this, and SKA will be key in determining accurate disc properties (watch this space).', ""Huge thanks to @cassidentprone, @cwalshastrochem, Izaskun Jiménez-Serra, Christophe Pinte, Jason Terry, Tyler Bourke, and Melvin Hoare for helping pull this together (and of course @SKA_telescope!). We're planning a few more papers in this series, so keep an eye out for those."", ""Finally, my curiosity got the better of me. I decided to see what something like HL Tau might look like with the SKA. Result: It's VERY cool (though a little artistic freedom applies here). The late 2020's will be exciting! https://t.co/8MrRzfoDUv""]",20,09,1253
127,80,1394355141432709129,3100596960,Walter Scheirer,"Lots of projects (especially in the DH world) can benefit from good automated handwriting analysis. A major confound (surprise) is when new things are encountered. My lab takes a look at this in our ICDAR 2021 paper ""Handwriting Recognition with Novelty"" Here we formalize the problem, introduce an agent-centric approach as a baseline solution, and introduce a new dataset and evaluation protocols. This is work coming out of the @DARPA SAIL-ON program, which has a terrific stable of problem domains with aspects of novelty.",https://arxiv.org/abs/2105.06582,"This paper introduces an agent-centric approach to handle novelty in the visual recognition domain of handwriting recognition (HWR). An ideal transcription agent would rival or surpass human perception, being able to recognize known and new characters in an image, and detect any stylistic changes that may occur within or across documents. A key confound is the presence of novelty, which has continued to stymie even the best machine learning-based algorithms for these tasks. In handwritten documents, novelty can be a change in writer, character attributes, writing attributes, or overall document appearance, among other things. Instead of looking at each aspect independently, we suggest that an integrated agent that can process known characters and novelties simultaneously is a better strategy. This paper formalizes the domain of handwriting recognition with novelty, describes a baseline agent, introduces an evaluation protocol with benchmark data, and provides experimentation to set the state-of-the-art. Results show feasibility for the agent-centric approach, but more work is needed to approach human-levels of reading ability, giving the HWR community a formal basis to build upon as they solve this challenging problem. ",Handwriting Recognition with Novelty,2,"['Lots of projects (especially in the DH world) can benefit from good automated handwriting analysis. A major confound (surprise) is when new things are encountered. My lab takes a look at this in our ICDAR 2021 paper ""Handwriting Recognition with Novelty""\n\n ', 'Here we formalize the problem, introduce an agent-centric approach as a baseline solution, and introduce a new dataset and evaluation protocols. This is work coming out of the @DARPA SAIL-ON program, which has a terrific stable of problem domains with aspects of novelty.']",21,05,540
128,49,1339752653681610753,1920417332,Ryan Glasser,Our new paper on the feasibility of #MachineLearning in experimental #quantum state reconstruction is on the arxiv! Fun using #ibmq quantum computer! @slohani_ai @ProfTSearles @ArmyResearchLab @USArmy @Tulane @TulaneSSE @HowardUniv @IBMResearch @qiskit ,https://arxiv.org/abs/2012.09432,"We determine the resource scaling of machine learning-based quantum state reconstruction methods, in terms of inference and training, for systems of up to four qubits when constrained to pure states. Further, we examine system performance in the low-count regime, likely to be encountered in the tomography of high-dimensional systems. Finally, we implement our quantum state reconstruction method on an IBM Q quantum computer, and compare against both unconstrained and constrained MLE state reconstruction. ","On the experimental feasibility of quantum state reconstruction via
machine learning",1,['Our new paper on the feasibility of #MachineLearning in experimental #quantum state reconstruction is on the arxiv! Fun using #ibmq quantum computer!\n\n\n\n@slohani_ai @ProfTSearles @ArmyResearchLab @USArmy @Tulane @TulaneSSE @HowardUniv @IBMResearch @qiskit '],20,12,266
129,62,1204826730734571520,308361234,Murray Brightman,"New paper on arXiv today! What we found when we asked @NASASwift to look at the M51 galaxies repeatedly over the last year and a half - M51 is a pair of merging galaxies, each with an accreting supermassive black hole (SMBH) at its center, plus several ultraluminous X-ray sources (ULXs). We know 2 of these ULXs are powered by neutron stars, a million times less massive then the SMBHs! We got great long-term X-ray lightcurves of all these sources The X-ray lightcurve of one of these neutron star ULXs shows huge swings in brightness, on a period of 38 days. The orbit of the neutron star and its companion is known to be only 2 days, so this is a super-orbital modulation, possibly caused by an extreme disk precession. We also found a new ULX, that appeared and disappeared during the year and a half. Maybe also a neutron star, but maybe something more exotic (red line shows the typical decline of a tidal disruption event...). Now looking for more of these in other Swift observations! @AstroGnomie @NASASwift That wasn’t me :-) It is weird what’s happening with that source though!",https://arxiv.org/abs/1912.04431,"We present the results from a monitoring campaign made with the Neil Gehrels Swift Observatory of the M51 galaxies, which contain several variable ultraluminous X-ray sources (ULXs). The ongoing campaign started in May 2018, and we report here on $\sim1.5$ years of observations. The campaign, which consists of 105 observations, has a typical cadence of 3--6 days, and has the goal of determining the long-term X-ray variability of the ULXs. Two of the most variable sources were ULX7 and ULX8, both of which are known to be powered by neutron stars that are exceeding their isotropic Eddington luminosities by factors of up to 100. This is further evidence that neutron star powered ULXs are the most variable. Our two main results are, first, that ULX7 exhibits a periodic flux modulation with a period of 38 days varying over a magnitude and a half in flux from peak to trough. Since the orbital period of the system is known to be 2 days, the modulation is super-orbital, which is a near-ubiquitous property of ULX pulsars. Secondly we identify a new transient ULX, M51 XT-1, the onset of which occurred during our campaign, reaching a peak luminosity of $\sim10^{40}$ erg s$^{-1}$, before gradually fading over the next $\sim200$ days until it slipped below the detection limit of our observations. Combined with the high-quality Swift/XRT lightcurve of the transient, serendipitous observations made with Chandra and XMM-Newton provide insights into the onset and evolution of a likely super-Eddington event. ","Swift monitoring of M51: A 38-day super-orbital period for the pulsar
ULX7 and a new transient ULX",6,"['New paper on arXiv today! What we found when we asked @NASASwift to look at the M51 galaxies repeatedly over the last year and a half - ', 'M51 is a pair of merging galaxies, each with an accreting supermassive black hole (SMBH) at its center, plus several ultraluminous X-ray sources (ULXs). We know 2 of these ULXs are powered by neutron stars, a million times less massive then the SMBHs! https://t.co/jRBnELffbW', 'We got great long-term X-ray lightcurves of all these sources https://t.co/F7XlzXls3P', 'The X-ray lightcurve of one of these neutron star ULXs shows huge swings in brightness, on a period of 38 days. The orbit of the neutron star and its companion is known to be only 2 days, so this is a super-orbital modulation, possibly caused by an extreme disk precession. https://t.co/Clfx9K9H41', 'We also found a new ULX, that appeared and disappeared during the year and a half. Maybe also a neutron star, but maybe something more exotic (red line shows the typical decline of a tidal disruption event...). Now looking for more of these in other Swift observations! https://t.co/7i1sa9j24O', '@AstroGnomie @NASASwift That wasn’t me :-) It is weird what’s happening with that source though!']",19,12,1125
130,11,1178835353303695361,876274407995527169,David Madras,"New paper! ""Causal Modeling for Fairness in Dynamical Systems"" w/ Elliot Creager, Toni Pitassi & Rich Zemel TLDR: Through a series of case studies, we show causal DAGs can act as a unifying framework for the literature on long-term unfairness. 1/n Recently, many papers have presented models of unfairness in long-term systems. We show causal DAGs can be used as a unifying framework for this literature, + give several case studies. Eg. this graphical formulation of “Delayed Impact of Fair Machine Learning” (Liu et al.) 2/n We give 3 advantages of causal DAGs in this domain. First is visualization: graphical models are a compact way to communicate models to non-technical stakeholders. Eg. this formulation of Hashimoto et al’s “Fairness w/o Demographics in Repeated Loss Minimization” 3/n … or this graphical formulation of an intricate, two-stage labor market model in Hu et al’s “A Short-term Intervention for Long-term Fairness in the Labor Market”. 4/n The second reason to use these graphical models is introspection: these formulations make clear many implicit causal assumptions, and present straightforward methods for discussion and modification. 5/n The third is evaluation: many long-term fairness papers must be evaluated using ""policy evaluation"" — we give examples in a case study. 6/n Furthermore, causal models allow for counterfactual-based policy evaluation methods, which are more practical/robust. 7/n In conclusion, we suggest a range of future questions at the intersection of various fields (e.g. RL, fairness, causal inference) which follow naturally from our formulation. n/n PS: Since finishing this paper, I've become fairly sure there's a literature on structural econometrics which is very related to all of this. If anyone has points on good places to read up on that (#econtwitter? does that work?) that would be swell @jvmancuso This looks (tangentially) awesome! I definitely thought this field did not exist. @adversariel will enjoy I think @jvmancuso @adversariel Haha it's a small Twitter after all @jvmancuso Sounds cool! Fairness through awareness has a special place in my heart, I'll check this out",https://arxiv.org/abs/1909.09141,"In many application areas---lending, education, and online recommenders, for example---fairness and equity concerns emerge when a machine learning system interacts with a dynamically changing environment to produce both immediate and long-term effects for individuals and demographic groups. We discuss causal directed acyclic graphs (DAGs) as a unifying framework for the recent literature on fairness in such dynamical systems. We show that this formulation affords several new directions of inquiry to the modeler, where causal assumptions can be expressed and manipulated. We emphasize the importance of computing interventional quantities in the dynamical fairness setting, and show how causal assumptions enable simulation (when environment dynamics are known) and off-policy estimation (when dynamics are unknown) of intervention on short- and long-term outcomes, at both the group and individual levels. ",Causal Modeling for Fairness in Dynamical Systems,12,"['New paper! ""Causal Modeling for Fairness in Dynamical Systems"" w/ Elliot Creager, Toni Pitassi & Rich Zemel\n\nTLDR: Through a series of case studies, we show causal DAGs can act as a unifying framework for the literature on long-term unfairness. 1/n ', 'Recently, many papers have presented models of unfairness in long-term systems. 
We show causal DAGs can be used as a unifying framework for this literature, + give several case studies.\n\nEg. this graphical formulation of “Delayed Impact of Fair Machine Learning” (Liu et al.) 2/n https://t.co/ynUh7FHgkW', 'We give 3 advantages of causal DAGs in this domain. First is visualization: graphical models are a compact way to communicate models to non-technical stakeholders.\n\nEg. this formulation of Hashimoto et al’s “Fairness w/o Demographics in Repeated Loss Minimization” 3/n https://t.co/IVj6jwAWwR', '… or this graphical formulation of an intricate, two-stage labor market model in Hu et al’s “A Short-term Intervention for Long-term Fairness in the Labor Market”. 4/n https://t.co/RgtTjQmg3V', 'The second reason to use these graphical models is introspection: these formulations make clear many implicit causal assumptions, and present straightforward methods for discussion and modification. 5/n', 'The third is evaluation: many long-term fairness papers must be evaluated using ""policy evaluation"" — we give examples in a case study. 6/n https://t.co/sJrw4zYlvm', 'Furthermore, causal models allow for counterfactual-based policy evaluation methods, which are more practical/robust. 7/n https://t.co/pIeQ2HX6ad', 'In conclusion, we suggest a range of future questions at the intersection of various fields (e.g. RL, fairness, causal inference) which follow naturally from our formulation. n/n https://t.co/FDJx7WRpP6', ""PS: Since finishing this paper, I've become fairly sure there's a literature on structural econometrics which is very related to all of this. If anyone has points on good places to read up on that (#econtwitter? does that work?) that would be swell"", '@jvmancuso This looks (tangentially) awesome! I definitely thought this field did not exist. @adversariel will enjoy I think', ""@jvmancuso @adversariel Haha it's a small Twitter after all"", ""@jvmancuso Sounds cool! Fairness through awareness has a special place in my heart, I'll check this out""]",19,09,2200
131,108,1250598901591109632,15861003,Jon McCormack,"The @Evostar2020 conference is currently underway (virtually). I have a new #CreativeAI paper with Andy Lomas on ""Understanding Aesthetic Evaluation using Deep Learning"" that has been nominated for best paper. You can download a preprint here: And also another paper with @SimonGColton and colleagues on ""Adapting and Enhancing Evolutionary Art for Casual Creation"" ",https://arxiv.org/abs/2004.06874,"A bottleneck in any evolutionary art system is aesthetic evaluation. Many different methods have been proposed to automate the evaluation of aesthetics, including measures of symmetry, coherence, complexity, contrast and grouping. The interactive genetic algorithm (IGA) relies on human-in-the-loop, subjective evaluation of aesthetics, but limits possibilities for large search due to user fatigue and small population sizes. In this paper we look at how recent advances in deep learning can assist in automating personal aesthetic judgement. Using a leading artist's computer art dataset, we use dimensionality reduction methods to visualise both genotype and phenotype space in order to support the exploration of new territory in any generative system. Convolutional Neural Networks trained on the user's prior aesthetic evaluations are used to suggest new possibilities similar or between known high quality genotype-phenotype mappings. ",Understanding Aesthetic Evaluation using Deep Learning,2,"['The @Evostar2020 conference is currently underway (virtually). I have a new #CreativeAI paper with Andy Lomas on ""Understanding Aesthetic Evaluation using Deep Learning"" that has been nominated for best paper. You can download a preprint here: ', 'And also another paper with @SimonGColton and colleagues on ""Adapting and Enhancing Evolutionary Art for Casual Creation"" https://t.co/pkcdKDXd9G']",20,04,379
132,117,1403298820872675329,835146121144041472,André Biedenkapp,"TempoRL has gone deep. In our new #ICML2021 paper we extend TempoRL to work in the Deep RL case and showed improved learning speed and better guided temporal exploration. If you're interested in learning *when* to act in deep RL, check out the paper . This was a joint work with @RaghuSpaceRajan, @FrankRHutter and @LindauerMarius",https://arxiv.org/abs/2106.05262,"Reinforcement learning is a powerful approach to learn behaviour through interactions with an environment. However, behaviours are usually learned in a purely reactive fashion, where an appropriate action is selected based on an observation. In this form, it is challenging to learn when it is necessary to execute new decisions. This makes learning inefficient, especially in environments that need various degrees of fine and coarse control. To address this, we propose a proactive setting in which the agent not only selects an action in a state but also for how long to commit to that action. Our TempoRL approach introduces skip connections between states and learns a skip-policy for repeating the same action along these skips. We demonstrate the effectiveness of TempoRL on a variety of traditional and deep RL environments, showing that our approach is capable of learning successful policies up to an order of magnitude faster than vanilla Q-learning. ",TempoRL: Learning When to Act,2,"[""TempoRL has gone deep. In our new #ICML2021 paper we extend TempoRL to work in the Deep RL case and showed improved learning speed and better guided temporal exploration. If you're interested in learning *when* to act in deep RL, check out the paper . "", 'This was a joint work with @RaghuSpaceRajan, @FrankRHutter and @LindauerMarius']",21,06,343
133,75,1253205923511877638,1177063549606203394,Tommi Tenkanen,"A new paper out! Here me and Erwin Tanin, a PhD student at Johns Hopkins, calculated a new upper limit on how much our observable universe could have expanded during an event called cosmic inflation. A preprint is available here: 1/n Cosmic inflation, an era of accelerated expansion of the universe before the Big Bang epoch, generates gravitational waves which propagate through the space. 2/n If the energy these gravitational waves carry was too high, they would mess up the formation of lights elements (the so-called Big Bang Nucleosynthesis or BBN), so that the amount of observed elements would not match to the amount predicted from theory. 3/n The energy of gravitational waves at the time of BBN is not known but one can calculate that if the universe expanded in some funny way between inflation and BBN, it is bigger than what one would normally expect. 4/n By using the limits on the amount of gravitational waves at the time of BBN, we infer an upper limit on how much the universe could have expanded between end of inflation and BBN - and how much during inflation itself. 5/n We also showed that even the most optimistic future ground- or space-based gravitational wave observatories are unlikely to be able to improve this limit. 6/n All in all, this was a nice project which I enjoyed a lot. This was also a new opening for me, as I had not really worked on gravitational waves before. It was great and a lot of fun to learn new things. Hope you'll like the outcome! 7/7",https://arxiv.org/abs/2004.10702,"Gravitational waves (GW) produced in the early Universe contribute to the number of relativistic degrees of freedom, $N_{\rm eff}$, during Big Bang Nucleosynthesis (BBN). By using the constraints on $N_{\rm eff}$, we present a new bound on how much the Universe could have expanded between horizon exit of the largest observable scales today and the end of inflation. We discuss the implications on inflationary models and show how the new constraints affect model selection. We also discuss the sensitivities of the current and planned GW observatories such as LIGO and LISA, and show that the constraints they could impose are always less stringent than the BBN bound. ",Gravitational wave constraints on the observable inflation,7,"['A new paper out! Here me and Erwin Tanin, a PhD student at Johns Hopkins, calculated a new upper limit on how much our observable universe could have expanded during an event called cosmic inflation. A preprint is available here: 1/n ', 'Cosmic inflation, an era of accelerated expansion of the universe before the Big Bang epoch, generates gravitational waves which propagate through the space. 2/n', 'If the energy these gravitational waves carry was too high, they would mess up the formation of lights elements (the so-called Big Bang Nucleosynthesis or BBN), so that the amount of observed elements would not match to the amount predicted from theory. 3/n', 'The energy of gravitational waves at the time of BBN is not known but one can calculate that if the universe expanded in some funny way between inflation and BBN, it is bigger than what one would normally expect. 4/n', 'By using the limits on the amount of gravitational waves at the time of BBN, we infer an upper limit on how much the universe could have expanded between end of inflation and BBN - and how much during inflation itself. 
5/n', 'We also showed that even the most optimistic future ground- or space-based gravitational wave observatories are unlikely to be able to improve this limit. 6/n', ""All in all, this was a nice project which I enjoyed a lot. This was also a new opening for me, as I had not really worked on gravitational waves before. It was great and a lot of fun to learn new things. Hope you'll like the outcome! 7/7""]",20,04,1504
134,13,1300512560341549061,999856806,Sergei V. Gleyzer 🇺🇸🇺🇦,A new paper on unsupervised deep learning techniques for identifying dark matter substructure using strong gravitational lensing Continuing an exciting collaboration with @stephstem @emanuele_usai @ReddyPranath and others on this interdisciplinary project This is a follow-up on our earlier supervised learning work in this area The paper contains excellent contributions from @ReddyPranath who took part in the 2020 Google Summer of Code with us at CERN-HSF @gsoc @GoogleOSS,https://arxiv.org/abs/2008.12731,"The identity of dark matter remains one of the most pressing questions in physics today. While many promising dark matter candidates have been put forth over the last half-century, to date the true identity of dark matter remains elusive. While it is possible that one of the many proposed candidates may turn out to be dark matter, it is at least equally likely that the correct physical description has yet to be proposed. To address this challenge, novel applications of machine learning can help physicists gain insight into the dark sector from a theory agnostic perspective. In this work we demonstrate the use of unsupervised machine learning techniques to infer the presence of substructure in dark matter halos using galaxy-galaxy strong lensing simulations. ",Decoding Dark Matter Substructure without Supervision,3,"['A new paper on unsupervised deep learning techniques for identifying dark matter substructure using strong gravitational lensing \nContinuing an exciting collaboration with @stephstem @emanuele_usai @ReddyPranath and others on this interdisciplinary project', 'This is a follow-up on our earlier supervised learning work in this area https://t.co/w7O6rGGZCR', 'The paper contains excellent contributions from @ReddyPranath who took part in the 2020 Google Summer of Code with us at CERN-HSF @gsoc @GoogleOSS']",20,08,489
135,81,1439975250565816323,935991962460721153,Aishik Ghosh,"New paper! Led by @BPNachman, a Cautionary Tale of Decorrelating Theory Uncertainties: Could making ourselves insensitive to uncertainties just be hiding the truth? 1/n What if decorrelation shrinks only our ad-hoc estimate of the uncertainty, while the actual uncertainty remains large? Intuitive illustration: 2/n Example 1: We painstakingly sacrifice separation power for reduced difference between Pythia and Herwig. Alas, the difference to a third generator (Sherpa) remains large. 3/n Example 2: Decorrelating scale uncertainty at LO reduces the error bands, but we’re only fooling ourselves, the difference to NLO remains large. 4/n Message: Decorrelation only does what it is trained to do, doesn’t solve the general problem. Until we have a better description of these uncertainties, think carefully before decorrelating! 5/5 Learning twitter from the boss @DanielWhiteson :P",https://arxiv.org/abs/2109.08159,"A variety of techniques have been proposed to train machine learning classifiers that are independent of a given feature. While this can be an essential technique for enabling background estimation, it may also be useful for reducing uncertainties. We carefully examine theory uncertainties, which typically do not have a statistical origin. We will provide explicit examples of two-point (fragmentation modeling) and continuous (higher-order corrections) uncertainties where decorrelating significantly reduces the apparent uncertainty while the actual uncertainty is much larger. These results suggest that caution should be taken when using decorrelation for these types of uncertainties as long as we do not have a complete decomposition into statistically meaningful components. ",A Cautionary Tale of Decorrelating Theory Uncertainties,6,"['New paper! Led by @BPNachman, a Cautionary Tale of Decorrelating Theory Uncertainties:\n\n\nCould making ourselves insensitive to uncertainties just be hiding the truth?\n1/n', 'What if decorrelation shrinks only our ad-hoc estimate of the uncertainty, while the actual uncertainty remains large? Intuitive illustration:\n2/n https://t.co/jrV8JCVs8h', 'Example 1: We painstakingly sacrifice separation power for reduced difference between Pythia and Herwig. Alas, the difference to a third generator (Sherpa) remains large.\n3/n https://t.co/LVdS1hCEa7', 'Example 2: Decorrelating scale uncertainty at LO reduces the error bands, but we’re only fooling ourselves, the difference to NLO remains large.\n4/n https://t.co/VxiDHwdtCJ', 'Message: Decorrelation only does what it is trained to do, doesn’t solve the general problem. Until we have a better description of these uncertainties, think carefully before decorrelating!\n\n5/5', 'Learning twitter from the boss @DanielWhiteson :P']",21,09,912
136,94,1326361594603970560,1192861416535085056,Libby Tolman,"My latest paper is out on arXiv: . In it, Peter Catto and I develop a new analytic method for calculating alpha particle heat fluxes in a tokamak that has small perturbations in its electric and magnetic fields. Our work suggests that alpha particle transport in SPARC (one tokamak being designed) due to the TAE (one type of perturbation to the tokamak fields) might be small. It will be interesting to see if other methods for predicting such transport agree with this result. The paper's math is heavy, so I'll be explaining it with lots of pictures and diagrams during a Friday #apsdpp talk (session ZI02). ",https://arxiv.org/abs/2011.04920,"Upcoming tokamak experiments fueled with deuterium and tritium are expected to have large alpha particle populations. Such experiments motivate new attention to the theory of alpha particle confinement and transport. A key topic is the interaction of alphas with perturbations to the tokamak fields, including those from ripple and magnetohydrodynamic modes like Alfv\'{e}n eigenmodes. These perturbations can transport alphas, leading to changed localization of alpha heating, loss of alpha power, and damage to device walls. Alpha interaction with these perturbations is often studied with single particle theory. In contrast, we derive a drift kinetic theory to calculate the alpha heat flux resulting from arbitrary perturbation frequency and periodicity (provided these can be studied drift kinetically). Novel features of the theory include the retention of a large effective collision frequency resulting from the resonant alpha collisional boundary layer, correlated interactions over many poloidal transits, and finite orbit effects. Heat fluxes are considered for the example cases of ripple and the toroidal Alfv\'{e}n eigenmode (TAE). The ripple heat flux is small. The TAE heat flux is significant and scales with the square of the perturbation amplitude, allowing the derivation of constraints on mode amplitude for avoidance of significant alpha depletion. A simple saturation condition suggests that TAEs in one upcoming experiment will not cause significant alpha transport via the mechanisms in this theory. However, saturation above the level suggested by the simple condition, but within numerical and experimental experience, which could be accompanied by the onset of stochasticity, could cause significant transport. ",Drift kinetic theory of alpha transport by tokamak perturbations,3,"['My latest paper is out on arXiv: . In it, Peter Catto and I develop a new analytic method for calculating alpha particle heat fluxes in a tokamak that has small perturbations in its electric and magnetic fields. ', 'Our work suggests that alpha particle transport in SPARC (one tokamak being designed) due to the TAE (one type of perturbation to the tokamak fields) might be small. It will be interesting to see if other methods for predicting such transport agree with this result. https://t.co/KSySypfeFQ', ""The paper's math is heavy, so I'll be explaining it with lots of pictures and diagrams during a Friday #apsdpp talk (session ZI02). https://t.co/DZtFKl48RH""]",20,11,637
137,87,1337385549271879681,1296555996303761408,Zemel Group,Check out our new paper Flexible Few-Shot Learning -- the same object can belong to different classes depending on context. We found unsupervised representation is better than supervised. A short version at NeurIPS metalearn workshop today at 10 EST. Joint work by @mengyer @Eleni30fillou @kcjacksonwang @james_r_lucas Jake Snell @xaqlab @AToliasLab and Rich Zemel,https://arxiv.org/abs/2012.05895,"Semantic concepts are frequently defined by combinations of underlying attributes. As mappings from attributes to classes are often simple, attribute-based representations facilitate novel concept learning with zero or few examples. A significant limitation of existing attribute-based learning paradigms, such as zero-shot learning, is that the attributes are assumed to be known and fixed. In this work we study the rapid learning of attributes that were not previously labeled. Compared to standard few-shot learning of semantic classes, in which novel classes may be defined by attributes that were relevant at training time, learning new attributes imposes a stiffer challenge. We found that supervised learning with training attributes does not generalize well to new test attributes, whereas self-supervised pre-training brings significant improvement. We further experimented with random splits of the attribute space and found that predictability of test attributes provides an informative estimate of a model's generalization ability. ",Few-Shot Attribute Learning,2,"['Check out our new paper Flexible Few-Shot Learning -- the same object can belong to different classes depending on context. We found unsupervised representation is better than supervised. A short version at NeurIPS metalearn workshop today at 10 EST. ', 'Joint work by @mengyer @Eleni30fillou @kcjacksonwang @james_r_lucas Jake Snell @xaqlab @AToliasLab and Rich Zemel']",20,12,378
138,157,1306410949792858114,4666231375,Konstantin Batygin,"The unraveling of the sol system has fascinated mathematicians for centuries. Newton himself believed the Jupiter-Saturn ""great inequality"" (5:2 near-resonance) held the key to the sol system's demise. In a new paper led by @jonKzink, we show he was right: ",https://arxiv.org/abs/2009.07296,"Using an ensemble of N-body simulations, this paper considers the fate of the outer gas giants (Jupiter, Saturn, Uranus, and Neptune) after the Sun leaves the main sequence and completes its stellar evolution. Due to solar mass-loss -- which is expected to remove roughly half of the star's mass -- the orbits of the giant planets expand. This adiabatic process maintains the orbital period ratios, but the mutual interactions between planets and the width of mean-motion resonances (MMR) increase, leading to the capture of Jupiter and Saturn into a stable 5:2 resonant configuration. The expanded orbits, coupled with the large-amplitude librations of the critical MMR angle, make the system more susceptible to perturbations from stellar flyby interactions. Accordingly, within about 30 Gyr, stellar encounters perturb the planets onto the chaotic sub-domain of the 5:2 resonance, triggering a large-scale instability, which culminates in the ejections of all but one planet over the subsequent $\sim10$ Gyr. After an additional $\sim50$ Gyr, a close stellar encounter (with a perihelion distance less than $\sim200$ AU) liberates the final planet. Through this sequence of events, the characteristic timescale over which the solar system will be completely dissolved is roughly 100 Gyr. Our analysis thus indicates that the expected dynamical lifetime of the solar system is much longer than the current age of the universe, but is significantly shorter than previous estimates. ","The Great Inequality and the Dynamical Disintegration of the Outer Solar
System",1,"['The unraveling of the sol system has fascinated mathematicians for centuries. Newton himself believed the Jupiter-Saturn ""great inequality"" (5:2 near-resonance) held the key to the sol system\'s demise. In a new paper led by @jonKzink, we show he was right: ']",20,09,270
139,86,1337386427940802560,56113666,Mengye Ren,"In standard few-shot learning (FSL), an elephant is always an elephant no matter which episode it is. Check out our new paper that extends FSL to more flexible classification criteria --> @cimonisasi Thank you! We are still working on the release of the code base, hopefully in a couple months.",https://arxiv.org/abs/2012.05895,"Semantic concepts are frequently defined by combinations of underlying attributes. As mappings from attributes to classes are often simple, attribute-based representations facilitate novel concept learning with zero or few examples. A significant limitation of existing attribute-based learning paradigms, such as zero-shot learning, is that the attributes are assumed to be known and fixed. In this work we study the rapid learning of attributes that were not previously labeled. Compared to standard few-shot learning of semantic classes, in which novel classes may be defined by attributes that were relevant at training time, learning new attributes imposes a stiffer challenge. We found that supervised learning with training attributes does not generalize well to new test attributes, whereas self-supervised pre-training brings significant improvement. We further experimented with random splits of the attribute space and found that predictability of test attributes provides an informative estimate of a model's generalization ability. ",Few-Shot Attribute Learning,2,"['In standard few-shot learning (FSL), an elephant is always an elephant no matter which episode it is. Check out our new paper that extends FSL to more flexible classification criteria --> ', '@cimonisasi Thank you! We are still working on the release of the code base, hopefully in a couple months.']",20,12,311
140,62,1061976636906786816,341126513,Francesca Fragkoudi,"Read something cool today: Our new paper on how inner bars also buckle (or if you want to get poetic about it: about galaxies within galaxies, bars within bars and peanuts within peanuts 😎) as uncovered using #VLT-MUSE data And check out this awesome animation and press release related to the paper: ",https://arxiv.org/abs/1811.03855v1,"Double bars are thought to be important features for secular evolution in the central regions of galaxies. However, observational evidence about their origin and evolution is still scarce. We report on the discovery of the first Box-Peanut (B/P) structure in an inner bar detected in the face-on galaxy NGC 1291. We use the integral field data obtained from the MUSE spectrograph within the TIMER project. The B/P structure is detected as bi-symmetric minima of the $h_4$ moment of the line-of-sight velocity distribution along the major axis of the inner bar, as expected from numerical simulations. Our observations demonstrate that inner bars can follow a similar evolutionary path as outer bars, undergoing buckling instabilities. They also suggest that inner bars are long-lived structures, thus imposing tight constraints to their possible formation mechanisms ","Inner bars also buckle. The MUSE TIMER view of the double-barred galaxy
NGC 1291",2,"['Read something cool today: Our new paper on how inner bars also buckle (or if you want to get poetic about it: about galaxies within galaxies, bars within bars and peanuts within peanuts 😎) as uncovered using #VLT-MUSE data \n', 'And check out this awesome animation and press release related to the paper:\nhttps://t.co/CGR05yB4Pf\nhttps://t.co/4e5R37ouBg']",18,11,321
141,41,1231989978793742336,1012717203324657665,Marco Pegoraro is at CAiSE 2022,"It's #Rosenmontag! I can't throw sweets from here, but maybe I can hit you with other kinds of goodies! New paper accepted at @BISconf: ""Efficient Construction of Behavior Graphs for Uncertain Event Data"". With M. S. Uysal and @wvdaalst! Preprint at (1/2) @BISconf @wvdaalst But there's more! Want a quick summary? Head to the PADS blogpost: Want to give it a spin? Full code with experiments at #Alaaf! #ProcessMining #ProcessScience #RWTH (2/2) ",https://arxiv.org/abs/2002.08225,"The discipline of process mining deals with analyzing execution data of operational processes, extracting models from event data, checking the conformance between event data and normative models, and enhancing all aspects of processes. Recently, new techniques have been developed to analyze event data containing uncertainty; these techniques strongly rely on representing uncertain event data through graph-based models capturing uncertainty. In this paper we present a novel approach to efficiently compute a graph representation of the behavior contained in an uncertain process trace. We present our new algorithm, analyze its time complexity, and report experimental results showing order-of-magnitude performance improvements for behavior graph construction. ",Efficient Construction of Behavior Graphs for Uncertain Event Data,2,"['It\'s #Rosenmontag! I can\'t throw sweets from here, but maybe I can hit you with other kinds of goodies! New paper accepted at @BISconf: ""Efficient Construction of Behavior Graphs for Uncertain Event Data"". With M. S. Uysal and @wvdaalst! Preprint at (1/2)', ""@BISconf @wvdaalst But there's more! Want a quick summary? Head to the PADS blogpost: https://t.co/6epmaZnwHD Want to give it a spin? Full code with experiments at https://t.co/uY4R2Scbv3\n#Alaaf! #ProcessMining #ProcessScience #RWTH (2/2) https://t.co/PT9IJg2Cs7""]",20,02,474
142,146,1280949432612278272,892997634813710336,Adam Fisch,"New paper on efficient set-valued predictions for tasks with many candidates (where multiple can be correct)! We extend Conformal Prediction, a principled method for making predictions with perf. guarantees. With @TalSchuster, Tommi and @BarzilayRegina. ",https://arxiv.org/abs/2007.03114,"In this paper, we present a novel approach for conformal prediction (CP), in which we aim to identify a set of promising prediction candidates -- in place of a single prediction. This set is guaranteed to contain a correct answer with high probability, and is well-suited for many open-ended classification tasks. In the standard CP paradigm, the predicted set can often be unusably large and also costly to obtain. This is particularly pervasive in settings where the correct answer is not unique, and the number of total possible answers is high. We first expand the CP correctness criterion to allow for additional, inferred ""admissible"" answers, which can substantially reduce the size of the predicted set while still providing valid performance guarantees. Second, we amortize costs by conformalizing prediction cascades, in which we aggressively prune implausible labels early on by using progressively stronger classifiers -- again, while still providing valid performance guarantees. We demonstrate the empirical effectiveness of our approach for multiple applications in natural language processing and computational chemistry for drug discovery. ","Efficient Conformal Prediction via Cascaded Inference with Expanded
Admission",1,"['New paper on efficient set-valued predictions for tasks with many candidates (where multiple can be correct)! We extend Conformal Prediction, a principled method for making predictions with perf. guarantees.\n\nWith @TalSchuster, Tommi and @BarzilayRegina.\n\n']",20,07,260
143,4,1446264438340816923,240851505,Tomas Pfister,"""Fast Sample Reweighting"" is a new paper from our research group @GoogleCloud that allows you to re-weight training samples effectively without the need for additional unbiased reward data. PS: We’re hiring! @GoogleAI @googlecloud #ML #research #ICCV2021 ",https://arxiv.org/abs/2109.03216,"Training sample re-weighting is an effective approach for tackling data biases such as imbalanced and corrupted labels. Recent methods develop learning-based algorithms to learn sample re-weighting strategies jointly with model training based on the frameworks of reinforcement learning and meta learning. However, depending on additional unbiased reward data is limiting their general applicability. Furthermore, existing learning-based sample re-weighting methods require nested optimizations of models and weighting parameters, which requires expensive second-order computation. This paper addresses these two problems and presents a novel learning-based fast sample re-weighting (FSR) method that does not require additional reward data. The method is based on two key ideas: learning from history to build proxy reward data and feature sharing to reduce the optimization cost. Our experiments show the proposed method achieves competitive results compared to state of the arts on label noise robustness and long-tailed recognition, and does so while achieving significantly improved training efficiency. The source code is publicly available at this https URL ",Learning Fast Sample Re-weighting Without Reward Data,1,"['""Fast Sample Reweighting"" is a new paper from our research group @GoogleCloud that allows you to re-weight training samples effectively without the need for additional unbiased reward data. PS: We’re hiring! @GoogleAI\xa0@googlecloud\xa0#ML\xa0#research\xa0#ICCV2021 ']",21,09,268
144,35,1443022428951785475,61623544,Dr./Prof. Renée Hložek,"New paper alert! In this work led by awesome student Gerrit Farren we estimate the kSZ signature of ultra light axions. If you care about dark matter, you know the name of the game is distinguishing axions from dark matter - so we either want to detect it or we want to rule out a universe that contains a lot of axions! This plot shows how our limits will keep getting tighter #darkmatter Future experiments like @SimonsObs, CMBS4 and @desisurvey will be pivotal for these constraints It’s a great time to be a cosmologist!",https://arxiv.org/abs/2109.13268,"Measurements of secondary cosmic microwave background (CMB) anisotropies, such as the Sunyaev-Zel'dovich (SZ) effect, will enable new tests of neutrino and dark sector properties. The kinetic SZ (kSZ) effect is produced by cosmological flows, probing structure growth. Ultra-light axions (ULAs) are a well-motivated dark-matter candidate. Here the impact of ULA dark matter (with mass $10^{-27}~{\rm eV}$ to $10^{-23}~{\rm eV}$) on kSZ observables is determined, applying new analytic expressions for pairwise cluster velocities and Ostriker-Vishniac signatures in structure-suppressing models. For the future CMB-S4 and ongoing DESI galaxy surveys, the kSZ effect (along with primary anisotropies) will probe ULA fractions $\eta_a = \Omega_{\rm{axion}}/\Omega_{\rm DM}$ as low as $\sim 5\%$ if $m_{a}\simeq 10^{-27}~{\rm eV}$ (at 95\% C.L.), with sensitivity extending up to $m_{a}\simeq 10^{-25}~{\rm eV}$. If reionization and the primary CMB can be adequately modeled, Ostriker-Vishniac measurements could probe values $\eta_{a}\simeq 10^{-3}$ if $10^{-27}~{\rm eV}\lesssim m_{a}\lesssim 10^{-24}~{\rm eV}$, or $\eta_{a}\simeq 1$ if $m_{a}\simeq 10^{-22}~{\rm eV}$, within the fuzzy dark matter window. ",Ultra-light axions and the kinetic Sunyaev-Zel'dovich Effect,4,"['New paper alert! In this work led by awesome student Gerrit Farren we estimate the kSZ signature of ultra light axions. ', 'If you care about dark matter, you know the name of the game is distinguishing axions from dark matter - so we either want to detect it or we want to rule out a universe that contains a lot of axions! This plot shows how our limits will keep getting tighter #darkmatter https://t.co/vUlB1jrANB', 'Future experiments like @SimonsObs, CMBS4 and @desisurvey will be pivotal for these constraints', 'It’s a great time to be a cosmologist!']",21,09,538
145,101,1234671558854995971,383494771,Ian Williamson,Our new paper and software package (led by @momchilmm) for inverse design of photonic crystals (or any periodic optical structure) via automatic differentiation are now online! 🔴🟢🔵 GitHub: Docs: Paper: @momchilmm That sweet sweet logo was designed by @NadineGilmer,https://arxiv.org/abs/2003.00379,"Gradient-based inverse design in photonics has already achieved remarkable results in designing small-footprint, high-performance optical devices. The adjoint variable method, which allows for the efficient computation of gradients, has played a major role in this success. However, gradient-based optimization has not yet been applied to the mode-expansion methods that are the most common approach to studying periodic optical structures like photonic crystals. This is because, in such simulations, the adjoint variable method cannot be defined as explicitly as in standard finite-difference or finite-element time- or frequency-domain methods. Here, we overcome this through the use of automatic differentiation, which is a generalization of the adjoint variable method to arbitrary computational graphs. We implement the plane-wave expansion and the guided-mode expansion methods using an automatic differentiation library, and show that the gradient of any simulation output can be computed efficiently and in parallel with respect to all input parameters. We then use this implementation to optimize the dispersion of a photonic crystal waveguide, and the quality factor of an ultra-small cavity in a lithium niobate slab. This extends photonic inverse design to a whole new class of simulations, and more broadly highlights the importance that automatic differentiation could play in the future for tracking and optimizing complicated physical models. ",Inverse design of photonic crystals through automatic differentiation,2,"['Our new paper and software package (led by @momchilmm) for inverse design of photonic crystals (or any periodic optical structure) via automatic differentiation are now online! 🔴🟢🔵 \nGitHub: \nDocs: \nPaper: ', '@momchilmm That sweet sweet logo was designed by @NadineGilmer']",20,03,285
146,104,1479430336035672066,213005784,Josué Tonelli-Cueto,First preprint of 2022 is out! New bounds for the best rank-one approx. ratio of [partially] symmetric tensors—aka [multi]homogeneus polynomials—using probability. #research #math #paper #preprint #tensors #polynomials #approximaton #probability #arxiv @_pbrdng Out of an e-mail asking Khazhgali if a probabilistic technique in another paper would be interesting in the world of symmetric tensors.,https://arxiv.org/abs/2201.02191,"We provide new upper and lower bounds on the minimum possible ratio of the spectral and Frobenius norms of a (partially) symmetric tensor. In the particular case of general tensors our result recovers a known upper bound. For symmetric tensors our upper bound unveils that the ratio of norms has the same order of magnitude as the trivial lower bound $1/\sqrt{n^{d-1}}$, when the order of a tensor $d$ is fixed and the dimension of the underlying vector space $n$ tends to infinity. However, when $n$ is fixed and $d$ tends to infinity, our lower bound is better than $1/\sqrt{n^{d-1}}$. ",Probabilistic bounds on best rank-one approximation ratio,2,"['First preprint of 2022 is out! New bounds for the best rank-one approx. ratio of [partially] symmetric tensors—aka [multi]homogeneus polynomials—using probability.\n\n\n\n#research #math #paper #preprint #tensors #polynomials #approximaton #probability\n#arxiv', '@_pbrdng Out of an e-mail asking Khazhgali if a probabilistic technique in another paper would be interesting in the world of symmetric tensors.']",22,01,404
147,68,1493496708869005321,400026483,Oem Trivedi,"I'm very happy to share that my new paper on Type V singularities with Prof. Maxim Khlopov is now out ! We show that the occurence conditions of these singularities are almost the same in loads of non-standard theories as they are in a GR cosmology 1/n This is very surprising, because usually other types of singularities (Type I-IV) have a significant difference in their occurence conditions in non-standard cosmologies from the conditions in general relativistic models. However we showed that 2/n for 2 different types of scale factor ansatz, there is no difference in the occurence conditions when one considers an RS-II brane cosmology and particular types of modified area-entropy, generalized uncertainty principle, Chern-Simons and Holographically renormalized theories3/n Only a type of f(R) gravity cosmology displays a bit of departure from a GR cosmology in the occurence conditions for w-singularities, that too only for one form of the scale factor ansatz. This goes to show that w-singularities remain largely untouched by the background theory! Thank you for reading the thread and I hope you like the paper too !🙏😇",https://arxiv.org/abs/2202.06093,"Interest in cosmological singularities has remarkably grown in recent times, particularly on future singularities with the discovery of late-time acceleration of the universe and dark energy. Recent work has seen a proper classification of such singularities into strong and weak based on their strength, with weak singularities being the likes of sudden, w and big freeze singularities and strong singularities like the big rip. While there has been an expansive literature which has discussed the occurrence of Type I-IV in many non-standard cosmologies, w-singularities have not yet been explored in exotic cosmological settings. So in this work we pursue the same and discuss the status quo of w-singularities in a variety of non-standard cosmologies. We consider the RS-II Braneworld cosmology, an F(R) gravity cosmology which gives viable late time acceleration. We also consider cosmologies due to modified area-entropy relations, generalized uncertainty principles, holographic renormalization and Chern-Simons gravity( all of which can be coincidentally described by the same form of the modified Friedmann equation). We show that w-singularities will occur in exactly the same conditions in all these vividly different cosmological settings as they do in the usual general relativistic cosmology if one considers a power series expansion ansatz for the scale factor. We also show that if one considers an exponential form of the scale factor then while Type V singularities in the RS-II Braneworld and Chern-Simons cosmologies occur in the same conditions as in the standard cosmology case, there is a significant difference in the conditions when one considers the f(R) gravity case. These results are surprising overall, as one would usually not expect cosmological singularities to occur in almost the same conditions in non-standard cosmologies as they do in the usual standard cosmology. ",Type V singularities in non-standard cosmological backgrounds,5,"[""I'm very happy to share that my new paper on Type V singularities with Prof. Maxim Khlopov is now out ! 
We show that the occurence conditions of these singularities are almost the same in loads of non-standard theories as they are in a GR cosmology 1/n"", 'This is very surprising, because usually other types of singularities (Type I-IV) have a significant difference in their occurence conditions in non-standard cosmologies from the conditions in general relativistic models. However we showed that 2/n', 'for 2 different types of scale factor ansatz, there is no difference in the occurence conditions when one considers an RS-II brane cosmology and particular types of modified area-entropy, generalized uncertainty principle, Chern-Simons and Holographically renormalized theories3/n', 'Only a type of f(R) gravity cosmology displays a bit of departure from a GR cosmology in the occurence conditions for w-singularities, that too only for one form of the scale factor ansatz. This goes to show that w-singularities remain largely untouched by the background theory!', 'Thank you for reading the thread and I hope you like the paper too !🙏😇']",22,02,1140
148,17,1198997678501310471,738769492122214400,Johannes Lischner,"In our new paper, we demonstrate how electronic phases (correlated insulator states, superconductivity) can be switched on and off by changing the thickness of the dielectric spacer layer between twisted bilayer #graphene and metallic gates. Read here: ",https://arxiv.org/abs/1911.08464,"The effective interaction between electrons in two-dimensional materials can be modified by their environment, enabling control of electronic correlations and phases. Here, we study the dependence of electronic correlations in twisted bilayer graphene (tBLG) on the separation to the metallic gate(s) in two device configurations. Using an atomistic tight-binding model, we determine the Hubbard parameters of the flat bands as a function of gate separation, taking into account the screening from the metallic gate(s), the dielectric spacer layers and the tBLG itself. We determine the critical gate separation at which the Hubbard parameters become smaller than the critical value required for a transition from a correlated insulator state to a (semi-)metallic phase. We show how this critical gate separation depends on twist angle, doping and the device configuration. These calculations may help rationalise the reported differences between recent measurements of tBLG's phase diagram and suggests that correlated insulator states can be screened out in devices with thin dielectric layers. ","Critical role of device geometry for the phase diagram of twisted
bilayer graphene",1,"['In our new paper, we demonstrate how electronic phases (correlated insulator states, superconductivity) can be switched on and off by changing the thickness of the dielectric spacer layer between twisted bilayer #graphene and metallic gates. Read here: ']",19,11,266
149,27,941243638193016832,521162744,David Goldsby,"AlphaZero achieves superhuman performance in Chess, Shogi, and Go beating best software and number one players in the world. Starting from random play, and given no domain knowledge. #AI #Deepmind #AlphaZero @davearon @stroker New paper from DeepMind: @stroker @davearon haha... im there brother!!! DeathChess 2000! Has to sound like something from robot wars. I bet it cant beat me at Monopoly ;-)",https://arxiv.org/abs/1712.01815,"The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case. ","Mastering Chess and Shogi by Self-Play with a General Reinforcement
Learning Algorithm",2,"['AlphaZero achieves superhuman performance in Chess, Shogi, and Go beating best software and number one players in the world. Starting from random play, and given no domain knowledge. #AI #Deepmind #AlphaZero @davearon @stroker New paper from DeepMind: ', '@stroker @davearon haha... im there brother!!! DeathChess 2000! Has to sound like something from robot wars. I bet it cant beat me at Monopoly ;-)']",17,12,405
150,98,1004304559433486336,933826478038544384,Mark Williams,"New LHCb paper searches for (undiscovered) CP Violation in charm, here in D⁰ → KS⁰ KS⁰ which can have large (~1%) CPV from SM sources alone ⇒ good discovery mode. Sadly, measurement is consistent with CP symmetry - more data needed! @LHCbExperiment ",https://arxiv.org/abs/1806.01642,"A measurement of the time-integrated $CP$ asymmetry in $D^0\rightarrow K^0_S K^0_S$ decays is reported. The data correspond to an integrated luminosity of about $2$ fb$^{-1}$ collected in 2015-2016 by the LHCb collaboration in $pp$ collisions at a centre-of-mass energy of $13$ TeV. The $D^0$ candidate is required to originate from a $D^{\ast +} \rightarrow D^0 \pi^+$ decay, allowing the determination of the flavour of the $D^0$ meson using the pion charge. The $D^0 \rightarrow K^{+}K^{-}$ decay, which has a well measured $CP$ asymmetry, is used as a calibration channel. The $CP$ asymmetry for $D^0\rightarrow K^0_S K^0_S$ is measured to be \begin{equation*} \mathcal{A}^{CP}(D^0\rightarrow K^0_S K^0_S) = (4.3\pm 3.4\pm 1.0)\%, \end{equation*} where the first uncertainty is statistical and the second is systematic. This result is combined with the previous LHCb measurement at lower centre-of-mass energies to obtain \begin{equation*} \mathcal{A}^{CP}(D^0\rightarrow K^0_S K^0_S) = (2.3\pm 2.8\pm 0.9)\%. \end{equation*} ","Measurement of the time-integrated $CP$ asymmetry in $D^0 \rightarrow
K^0_S K^0_S$ decays",1,"['New LHCb paper searches for (undiscovered) CP Violation in charm, here in D⁰ → KS⁰ KS⁰ which can have large (~1%) CPV from SM sources alone ⇒ good discovery mode. Sadly, measurement is consistent with CP symmetry - more data needed! @LHCbExperiment ']",18,06,262
151,4,1466813039139864577,1159548392839753729,Hugo Yeche,"Happy to share ""HiRID-ICU-Benchmark"" (HiB), a new benchmark for patient monitoring tasks in the ICU. From the HiRID database, we define 6 diverse and clinically relevant tasks for +40K patients and +15M time-points. paper: code: We provide an easy-to-use pipeline for data processing and labels extraction. This shared and reproducible pipeline will improve the comparison of future works. This benchmark compares methods on a diverse set of clinically relevant tasks as summarized below. Finally, we compared recent sequence DL architectures with conventional ML algorithms. In line with the original HiRID paper, we observe that lightGBM with hand-extracted features outperforms all DL methods. This work is the result of a great collaboration with Rita Kuznetsova, Marc Zimmermann, @mhueser_, @xinruilyu under Martin Faltys, and @gxr supervision. Feel free to reach out during NeurIPS Datasets and Benchmarks poster session 4",https://arxiv.org/abs/2111.08536,"The recent success of machine learning methods applied to time series collected from Intensive Care Units (ICU) exposes the lack of standardized machine learning benchmarks for developing and comparing such methods. While raw datasets, such as MIMIC-IV or eICU, can be freely accessed on Physionet, the choice of tasks and pre-processing is often chosen ad-hoc for each publication, limiting comparability across publications. In this work, we aim to improve this situation by providing a benchmark covering a large spectrum of ICU-related tasks. Using the HiRID dataset, we define multiple clinically relevant tasks in collaboration with clinicians. In addition, we provide a reproducible end-to-end pipeline to construct both data and labels. Finally, we provide an in-depth analysis of current state-of-the-art sequence modeling methods, highlighting some limitations of deep learning approaches for this type of data. With this benchmark, we hope to give the research community the possibility of a fair comparison of their work. ","HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on
High-resolution ICU Data",5,"['Happy to share ""HiRID-ICU-Benchmark"" (HiB), a new benchmark for patient monitoring tasks in the ICU.\n\nFrom the HiRID database, we define 6 diverse and clinically relevant tasks for +40K patients and +15M time-points.\n\npaper: \ncode: ', 'We provide an easy-to-use pipeline for data processing and labels extraction. This shared and reproducible pipeline will improve the comparison of future works. https://t.co/eWhF3cHT7P', 'This benchmark compares methods on a diverse set of clinically relevant tasks as summarized below. https://t.co/HlAXrr5GEL', 'Finally, we compared recent sequence DL architectures with conventional ML algorithms. In line with the original HiRID paper, we observe that lightGBM with hand-extracted features outperforms all DL methods. https://t.co/X55CIg5hqK', 'This work is the result of a great collaboration with Rita Kuznetsova, Marc Zimmermann, @mhueser_, @xinruilyu under Martin Faltys, and @gxr supervision. \n\nFeel free to reach out during NeurIPS Datasets and Benchmarks poster session 4']",21,11,971
152,129,1282592569243971584,1192152664412475393,Fulvio Gesmundo,"A lot of examples of strict subadditivity of tensor border rank under direct sum! Check out our new paper with M. Christandl, M. Michałek and J. Zuiddam (@jzuiddam): ""Border rank non-additivity for higher order tensors"" In 1981, Schönhage provided examples of strict subadditivity of border rank under direct sum for tensors of order three. His example was a stepping stone to all subsequent progress on upper bounds on the matrix multiplication exponent until today. We provide examples of strict subadditivity for higher order tensors with connections to tensor network geometry and the complexity theory of generalizations of the matrix multiplication tensor.",https://arxiv.org/abs/2007.05458,"Whereas matrix rank is additive under direct sum, in 1981 Sch\""onhage showed that one of its generalizations to the tensor setting, tensor border rank, can be strictly subadditive for tensors of order three. Whether border rank is additive for higher order tensors has remained open. In this work, we settle this problem by providing analogs of Sch\""onhage's construction for tensors of order four and higher. Sch\""onhage's work was motivated by the study of the computational complexity of matrix multiplication; we discuss implications of our results for the asymptotic rank of higher order generalizations of the matrix multiplication tensor. ",Border rank non-additivity for higher order tensors,3,"['A lot of examples of strict subadditivity of tensor border rank under direct sum! Check out our new paper with M. Christandl, M. Michałek and J. Zuiddam (@jzuiddam):\n\n""Border rank non-additivity for higher order tensors""\n\n', 'In 1981, Schönhage provided examples of strict subadditivity of border rank under direct sum for tensors of order three. His example was a stepping stone to all subsequent progress on upper bounds on the matrix multiplication exponent until today.', 'We provide examples of strict subadditivity for higher order tensors with connections to tensor network geometry and the complexity theory of generalizations of the matrix multiplication tensor.']",20,07,669
153,29,1043036765680889856,776765039726460929,Carlo Felice Manara,"1/4 New paper based on @almaobs @ESO VLT/X-Shooter #GaiaDR2 data: ""Why do protoplanetary disks appear not massive enough to form the known exoplanet population?"" with A LOT of help from Alessandro Morbidelli and Tristan Guillot 2/4 Take home: we do not have enough mass in disks at 1-3 Myr to explain exoplanetary systems, even when considering the mass in solids in disks vs the mass of solids in planets. This is now true for a large range of stellar masses covered both in disks and exoplanet surveys 3/4 Possible solutions: 0) protoplanetary disk masses are underestimated (but I have reasons to believe this is not the case) 1) cores of planets have formed very rapidly (<0.1-1 Myr) 2) disks are continuously replenished of fresh planet-forming material from the environment 4/4 I want to also point to (some of the) relevant references for the various points: 0) 1) 2) P.S. the right link is ..... @exohugh Assuming at least 100% efficiency, which is not expected by models?",https://arxiv.org/abs/1809.07374,"When and how planets form in protoplanetary disks is still a topic of discussion. Exoplanet detection surveys and protoplanetary disk surveys are now providing results that allow us to have new insights. We collect the masses of confirmed exoplanets and compare their dependence with stellar mass with the same dependence for protoplanetary disk masses measured in ~1-3 Myr old star-forming regions. The latter are recalculated by us using the new estimates of their distances derived from Gaia DR2 parallaxes. We note that single and multiple exoplanetary systems form two different populations, probably pointing to a different formation mechanism for massive giant planets around very low mass stars. While expecting that the mass in exoplanetary systems is much lower than the measured disk masses, we instead find that exoplanetary systems masses are comparable or higher than the most massive disks. This same result is found also by converting the measured planet masses into heavy-element content (core masses for the giant planets and full masses for the super-Earth systems) and by comparing this value with the disk dust masses. Unless disk dust masses are heavily underestimated, this is a big conundrum. An extremely efficient recycling of dust particles in the disk cannot solve this conundrum. This implies that either the cores of planets have formed very rapidly (<0.1-1 Myr) and large amount of gas is expelled on the same timescales from the disk, or that disks are continuously replenished of fresh planet-forming material from the environment. These hypotheses can be tested by measuring disk masses in even younger targets and by better understanding if and how the disks are replenished by their surroundings. ","Why do protoplanetary disks appear not massive enough to form the known
exoplanet population?",6,"['1/4 New paper based on @almaobs @ESO VLT/X-Shooter #GaiaDR2 data: \n""Why do protoplanetary disks appear not massive enough to form the known exoplanet population?""\n\nwith A LOT of help from Alessandro Morbidelli and Tristan Guillot ', '2/4 Take home: we do not have enough mass in disks at 1-3 Myr to explain exoplanetary systems, even when considering the mass in solids in disks vs the mass of solids in planets. \nThis is now true for a large range of stellar masses covered both in disks and exoplanet surveys https://t.co/ebtdU5l6eN', '3/4 Possible solutions:\n0) protoplanetary disk masses are underestimated (but I have reasons to believe this is not the case)\n1) cores of planets have formed very rapidly (<0.1-1 Myr) \n2) disks are continuously replenished of fresh planet-forming material from the environment', '4/4 I want to also point to (some of the) relevant references for the various points:\n0) https://t.co/i1hlQnitOx https://t.co/bvRTlRGZzn\n1) https://t.co/mISVRLILyh https://t.co/nphETM5oYV https://t.co/mb87Cek9xN \n2) https://t.co/Zy6lHNfjhX https://t.co/3pZFSDZoJS', 'P.S. the right link is https://t.co/yElqOqW9EI.....', '@exohugh Assuming at least 100% efficiency, which is not expected by models?']",18,09,1066
154,49,1418410337695506432,96779364,Arnab Bhattacharyya,"New paper: (Learning Sparse Fixed-Structure Gaussian Bayesian Networks) with Davin Choo, @rrgajjala, Sutanu Gayen, and @Yohanna49592977. We look at a basic model used to specify causal dependencies among continuous variables. You have n variables that are ordered in some way, and each variable is generated as a linear combination of the previous variables plus an independent gaussian noise. Simple, right? E.g.: These are called Gaussian Bayes nets. The dependency structure of the variables is naturally encoded by a DAG. For the example above: Suppose you have a distribution P generated as a Gaussian Bayes net over a DAG G. The distribution learning problem is: given samples from P, infer parameters of a distribution Q such that TV(P,Q)<ε with good enough probability. There are actually two problems here. The first is the ""structure learning"" problem where G is not known (but maybe you only know that it is sparse). This problem is quite hard, and there are essentially no general algorithmic results. In this paper, we look at the easier ""fixed-structure"" problem where G is already given. Amazingly to us, we could say something new about this basic problem! The obvious thing to try is to learn the coefficients of each equation by lin regression at each node. If you run least squares with O~(n/ε) equations at each node, then you learn a Bayes net with KL div ε from P. But this isn't the only option! At each node, you can run several batches of least squares, where each batch is a ""small"" system of equations. Each batch solution gives you an estimate of the coefficients at that node, and then you can take the average across batch solutions. In the extreme case, if a node has p parents, you can solve several batches of pxp systems (with gaussian elim). Here, we show that each batch solution is distributed as Cauchy (!), not gaussian. It then makes more sense to take the median of the solutions rather than average. The advantage of these other algorithms is that they allow each batch to be processed parallelly. Also, in experiments (), they perform better when there's noise or the DAG is mis-specified. #causalinference #Statistics #MachineLearning #Algorithms",https://arxiv.org/abs/2107.10450,"Gaussian Bayesian networks (a.k.a. linear Gaussian structural equation models) are widely used to model causal interactions among continuous variables. In this work, we study the problem of learning a fixed-structure Gaussian Bayesian network up to a bounded error in total variation distance. We analyze the commonly used node-wise least squares regression (LeastSquares) and prove that it has a near-optimal sample complexity. We also study a couple of new algorithms for the problem: - BatchAvgLeastSquares takes the average of several batches of least squares solutions at each node, so that one can interpolate between the batch size and the number of batches. We show that BatchAvgLeastSquares also has near-optimal sample complexity. - CauchyEst takes the median of solutions to several batches of linear systems at each node. We show that the algorithm specialized to polytrees, CauchyEstTree, has near-optimal sample complexity. Experimentally, we show that for uncontaminated, realizable data, the LeastSquares algorithm performs best, but in the presence of contamination or DAG misspecification, CauchyEst/CauchyEstTree and BatchAvgLeastSquares respectively perform better. 
",Learning Sparse Fixed-Structure Gaussian Bayesian Networks,11,"['New paper: (Learning Sparse Fixed-Structure Gaussian Bayesian Networks) with Davin Choo, @rrgajjala, Sutanu Gayen, and @Yohanna49592977.', 'We look at a basic model used to specify causal dependencies among continuous variables. You have n variables that are ordered in some way, and each variable is generated as a linear combination of the previous variables plus an independent gaussian noise. Simple, right? E.g.: https://t.co/MPLTH7fugA', 'These are called Gaussian Bayes nets. The dependency structure of the variables is naturally encoded by a DAG. For the example above: https://t.co/hO6gWyXvtX', 'Suppose you have a distribution P generated as a Gaussian Bayes net over a DAG G. \n\nThe distribution learning problem is: given samples from P, infer parameters of a distribution Q such that TV(P,Q)<ε with good enough probability.', 'There are actually two problems here. The first is the ""structure learning"" problem where G is not known (but maybe you only know that it is sparse). This problem is quite hard, and there are essentially no general algorithmic results.', 'In this paper, we look at the easier ""fixed-structure"" problem where G is already given. Amazingly to us, we could say something new about this basic problem!', 'The obvious thing to try is to learn the coefficients of each equation by lin regression at each node. If you run least squares with O~(n/ε) equations at each node, then you learn a Bayes net with KL div ε from P.', 'But this isn\'t the only option! At each node, you can run several batches of least squares, where each batch is a ""small"" system of equations. Each batch solution gives you an estimate of the coefficients at that node, and then you can take the average across batch solutions.', 'In the extreme case, if a node has p parents, you can solve several batches of pxp systems (with gaussian elim). Here, we show that each batch solution is distributed as Cauchy (!), not gaussian. It then makes more sense to take the median of the solutions rather than average.', ""The advantage of these other algorithms is that they allow each batch to be processed parallelly. Also, in experiments (https://t.co/tZZoMYXnCE), they perform better when there's noise or the DAG is mis-specified."", '#causalinference #Statistics #MachineLearning #Algorithms']",21,07,2221
155,96,1284151359491788800,1069184356533583872,Ekaterina Lobacheva,"Our new paper On Power Laws in Deep Ensembles is on arXiv: Credits to @nadiinchi, Maxim Kodryan, Dmitry Vetrov @bayesgroup We investigate asymptotic properties of CNLL as a function of ensemble size n, network size s, and the number of parameters B. 1/4 CNLL and NLL of deep ensemble follow power law w.r.t. ensemble size n: CNLL_n = c + b n^a Moreover, CNNL follows power law w.r.t. network size s and the total number of parameters B. 2/4 Memory Split Advantage effect: our practically important funding is that one large network may perform worse than an ensemble of several medium-size networks with the same total number of parameters. 3/4 Given relatively small number of trained networks, we can use the discovered power laws to predict: - NLL and CNLL of large ensembles - optimal memory split given a memory budget. 4/4 ",http://arxiv.org/abs/2007.08483,"Ensembles of deep neural networks are known to achieve state-of-the-art performance in uncertainty estimation and lead to accuracy improvement. In this work, we focus on a classification problem and investigate the behavior of both non-calibrated and calibrated negative log-likelihood (CNLL) of a deep ensemble as a function of the ensemble size and the member network size. We indicate the conditions under which CNLL follows a power law w.r.t. ensemble size or member network size, and analyze the dynamics of the parameters of the discovered power laws. Our important practical finding is that one large network may perform worse than an ensemble of several medium-size networks with the same total number of parameters (we call this ensemble a memory split). Using the detected power law-like dependencies, we can predict (1) the possible gain from the ensembling of networks with given structure, (2) the optimal memory split given a memory budget, based on a relatively small number of trained networks. We describe the memory split advantage effect in more details in arXiv:2005.07292 ",On Power Laws in Deep Ensembles,4,"['Our new paper On Power Laws in Deep Ensembles is on arXiv: \nCredits to @nadiinchi, Maxim Kodryan, Dmitry Vetrov @bayesgroup\n\nWe investigate asymptotic properties of CNLL as a function of ensemble size n, network size s, and the number of parameters B. 1/4 ', 'CNLL and NLL of deep ensemble follow power law w.r.t. ensemble size n: CNLL_n = c + b n^a\nMoreover, CNNL follows power law w.r.t. network size s and the total number of parameters B.\n2/4 https://t.co/HO8aeVCgHj', 'Memory Split Advantage effect: our practically important funding is that one large network may perform worse than an ensemble of several medium-size networks with the same total number of parameters. 3/4 https://t.co/g3fD9ZWk7c', 'Given relatively small number of trained networks, we can use the discovered power laws to predict:\n- NLL and CNLL of large ensembles \n- optimal memory split given a memory budget. 4/4 https://t.co/pFK2gumGPf']",20,07,863
156,85,1252654952658411525,82497649,Moin Nadeem,"As pretrained language models grow more common in #NLProc, it is crucial to evaluate their societal biases. We launch a new task, evaluation metrics, and a large dataset to measure stereotypical biases in LMs: Paper: Site: Thread👇 [2/] The task and metrics are based on an ideal language model (LM). An ideal LM should perform well at language modeling, but not have a preference for stereotypes or anti-stereotypes. We create the Context Association Test (CAT), which measures LM ability and stereotype ability [3/] We measure LM ability based on the model's preferences for meaningful contexts over meaningless contexts. The LM score of an ideal model is 100. Similarly, we measure stereotype bias based on how often the model prefers stereotypical contexts vs anti-stereotypical contexts. [4/] The stereotypical bias score of an idealistic model is 50. The combination of these two scores gives the Idealized CAT score, which measures the unbiased LM ability. [5/] We choose to measure bias in four domains: gender, profession, race, and religion, and collect 16,995 sentences that characterize the human stereotypical biases for these domains. [6/] We find that as a model size (# parameters) increases, so does it’s LM ability and stereotypical behavior! However, we find that this isn’t necessarily correlated with idealistic LM ability. [7/] We find that GPT2 is relatively more idealistic than BERT, XLNet and RoBERTa. We conjecture this is due to nature of pretraining data (Reddit data is likely to see more stereotypes and anti-stereotypes. c.f. Section 8). However, GPT is still 27 ICAT points behind an ideal LM [8/] We also study an ensemble of BERT-large, GPT2-large, and GPT2-medium, and conjecture that the most biased terms are the ones that have well-established stereotypes in society (but with some surprising exceptions). [End] Joint work with @data_beth and @sivareddyg Code: Happy to answer any questions as well!",https://arxiv.org/abs/2004.09456,"A stereotype is an over-generalized belief about a particular group of people, e.g., Asians are good at math or Asians are bad drivers. Such beliefs (biases) are known to hurt target groups. Since pretrained language models are trained on large real world data, they are known to capture stereotypical biases. In order to assess the adverse effects of these models, it is important to quantify the bias captured in them. Existing literature on quantifying bias evaluates pretrained language models on a small set of artificially constructed bias-assessing sentences. We present StereoSet, a large-scale natural dataset in English to measure stereotypical biases in four domains: gender, profession, race, and religion. We evaluate popular models like BERT, GPT-2, RoBERTa, and XLNet on our dataset and show that these models exhibit strong stereotypical biases. We also present a leaderboard with a hidden test set to track the bias of future language models at this https URL ",StereoSet: Measuring stereotypical bias in pretrained language models,9,"['As pretrained language models grow more common in #NLProc, it is crucial to evaluate their societal biases. We launch a new task, evaluation metrics, and a large dataset to measure stereotypical biases in LMs: \nPaper: \nSite: \nThread👇 ', '[2/] The task and metrics are based on an ideal language model (LM). An ideal LM should perform well at language modeling, but not have a preference for stereotypes or anti-stereotypes. 
We create the Context Association Test (CAT), which measures LM ability and stereotype ability', ""[3/] We measure LM ability based on the model's preferences for meaningful contexts over meaningless contexts. The LM score of an ideal model is 100. Similarly, we measure stereotype bias based on how often the model prefers stereotypical contexts vs anti-stereotypical contexts."", '[4/] The stereotypical bias score of an idealistic model is 50. The combination of these two scores gives the Idealized CAT score, which measures the unbiased LM ability.', '[5/] We choose to measure bias in four domains: gender, profession, race, and religion, and collect 16,995 sentences that characterize the human stereotypical biases for these domains. https://t.co/e94qk00s2E', '[6/] We find that as a model size (# parameters) increases, so does it’s LM ability and stereotypical behavior! However, we find that this isn’t necessarily correlated with idealistic LM ability. https://t.co/UzSWFEIAwF', '[7/] We find that GPT2 is relatively more idealistic than BERT, XLNet and RoBERTa. We conjecture this is due to nature of pretraining data (Reddit data is likely to see more stereotypes and anti-stereotypes. c.f. Section 8). However, GPT is still 27 ICAT points behind an ideal LM', '[8/] We also study an ensemble of BERT-large, GPT2-large, and GPT2-medium, and conjecture that the most biased terms are the ones that have well-established stereotypes in society (but with some surprising exceptions). https://t.co/EBU4rx7UTL', '[End] Joint work with @data_beth and @sivareddyg \nCode: https://t.co/rHa9UDn59I\n\nHappy to answer any questions as well!']",20,04,1986
157,97,1469018586073247749,1240430202255261696,Amaury Hayat,"More math and computational biology with neural networks ! We predict the equilibriums of metabolic networks with a deep language model. Our new paper with @f_charton, Benedetto Piccoli, Nate Merrill, and Sean McQuade @Rutgers_Camden @RutgersCCIB ",https://arxiv.org/abs/2112.03588,"We show that deep learning models, and especially architectures like the Transformer, originally intended for natural language, can be trained on randomly generated datasets to predict to very high accuracy both the qualitative and quantitative features of metabolic networks. Using standard mathematical techniques, we create large sets (40 million elements) of random networks that can be used to train our models. These trained models can predict network equilibrium on random graphs in more than 99% of cases. They can also generalize to graphs with different structure than those encountered at training. Finally, they can predict almost perfectly the equilibria of a small set of known biological networks. Our approach is both very economical in experimental data and uses only small and shallow deep-learning model, far from the large architectures commonly used in machine translation. Such results pave the way for larger use of deep learning models for problems related to biological networks in key areas such as quantitative systems pharmacology, systems biology, and synthetic biology. ",A deep language model to predict metabolic network equilibria,1,"['More math and computational biology with neural networks ! We predict the equilibriums of metabolic networks with a deep language model. Our new paper with @f_charton, Benedetto Piccoli, Nate Merrill, and Sean McQuade @Rutgers_Camden @RutgersCCIB ']",21,12,260
158,144,1236913875804536834,1211825303388995584,Russell Tsuchida,"Back in February, @Tea_Pearce, Chris van der Heide, Fred Roosta, @marcus_marcusg and I did some work on the kernels of infinitely wide deep neural networks with GELU and ELU activations. We also studied the fixed points of these kernels. Check it out here: ",https://arxiv.org/abs/2002.08517,"Analysing and computing with Gaussian processes arising from infinitely wide neural networks has recently seen a resurgence in popularity. Despite this, many explicit covariance functions of networks with activation functions used in modern networks remain unknown. Furthermore, while the kernels of deep networks can be computed iteratively, theoretical understanding of deep kernels is lacking, particularly with respect to fixed-point dynamics. Firstly, we derive the covariance functions of multi-layer perceptrons (MLPs) with exponential linear units (ELU) and Gaussian error linear units (GELU) and evaluate the performance of the limiting Gaussian processes on some benchmarks. Secondly, and more generally, we analyse the fixed-point dynamics of iterated kernels corresponding to a broad range of activation functions. We find that unlike some previously studied neural network kernels, these new kernels exhibit non-trivial fixed-point dynamics which are mirrored in finite-width neural networks. The fixed point behaviour present in some networks explains a mechanism for implicit regularisation in overparameterised deep models. Our results relate to both the static iid parameter conjugate kernel and the dynamic neural tangent kernel constructions. Software at github.com/RussellTsuchida/ELU_GELU_kernels. ","Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite
Networks",1,"['Back in February, @Tea_Pearce, Chris van der Heide, Fred Roosta, @marcus_marcusg and I did some work on the kernels of infinitely wide deep neural networks with GELU and ELU activations. We also studied the fixed points of these kernels. Check it out here:\n']",20,02,263
159,31,1196712613339639808,409901833,Peter Boorman,"🚨 New @NASANuSTAR paper time! Work led by Steph LaMassa with new NuSTAR and archival @chandraxray data of the spiral galaxy NGC 4968 revealed the accreting supermassive black hole at its center to be obscured by very thick thunderclouds of gas ⛈ Pivotal to this work was the high-energy data from @NASANuSTAR which gave us powerful ""X-ray vision"" to separately study the material that reflects stray X-rays towards us from the material absorbing X-rays travelling along our line of sight. This tells us not only how obscured the black hole is along our line of sight, but also how much material surrounds the black hole on average. One would expect these amounts to be different for a clumpy configuration, for instance. But why should you care? Well... NGC 4968 is very close to us (just 7x as far as the Whirlpool), yet the supermassive black hole would have gone unnoticed in high-energy X-rays due to that pesky obscuration if it weren't for the sensitivity of @NASANuSTAR. This is known to be a big problem for detecting heavily obscured black holes with X-ray vision, since large amounts of material can severley hinder our chances of even the highest energy X-rays from reaching our telescopes. But finding them is very important. Evidence suggests that supermassive black holes grow most rapidly during heavily obscured phases, so studying them could be the key to understanding how these monsters have grown so huge since their birth near the beginning of the Universe!",https://arxiv.org/abs/1911.05813,"We present the analysis of Chandra and NuSTAR spectra of NGC 4968, a local (D$\sim$44 Mpc) 12$\mu$m-selected Seyfert 2 galaxy, enshrouded within Compton-thick layers of obscuring gas. We find no evidence of variability between the Chandra and NuSTAR observations (separated by 2 years), and between the two NuSTAR observations (separated by 10 months). Using self-consistent X-ray models, we rule out the scenario where the obscuring medium is nearly spherical and uniform, contradicting the results implied by the $<$10 keV Chandra spectrum. The line-of-sight column density, from intervening matter between the source and observer that intercepts the intrinsic AGN X-ray emission, is well within the Compton-thick regime, with a minimum column density of $2\times10^{24}$ cm$^{-2}$. The average global column density is high ($> 3\times10^{23}$ cm$^{-2}$), with both Compton-thick and Compton-thin solutions permitted depending on the X-ray spectral model. The spectral models provide a range of intrinsic AGN continuum parameters and implied 2-10 keV luminosities ($L_{\rm 2-10keV,intrinsic}$), where the higher end of $L_{\rm 2-10keV,intrinsic}$ is consistent with expectations from the 12$\mu$m luminosity ($L_{\rm 2-10keV,intrinisc} \sim 7\times10^{42}$ erg s$^{-1}$). Compared with Compton-thick AGN previously observed by {\it NuSTAR}, NGC 4968 is among the most intrinsically X-ray luminous. However, despite its close proximity and relatively high intrinsic X-ray luminosity, it is undetected by the 105 month Swift-BAT survey, underscoring the importance of multi-wavelength selection for obtaining the most complete census of the most hidden black holes. ",NuSTAR Uncovers an Extremely Local Compton-thick AGN in NGC 4968,6,"['🚨 New @NASANuSTAR paper time! 
Work led by Steph LaMassa with new NuSTAR and archival @chandraxray data of the spiral galaxy NGC 4968 revealed the accreting supermassive black hole at its center to be obscured by very thick thunderclouds of gas ⛈ ', 'Pivotal to this work was the high-energy data from @NASANuSTAR which gave us powerful ""X-ray vision"" to separately study the material that reflects stray X-rays towards us from the material absorbing X-rays travelling along our line of sight.', 'This tells us not only how obscured the black hole is along our line of sight, but also how much material surrounds the black hole on average. One would expect these amounts to be different for a clumpy configuration, for instance.', ""But why should you care? Well... NGC 4968 is very close to us (just 7x as far as the Whirlpool), yet the supermassive black hole would have gone unnoticed in high-energy X-rays due to that pesky obscuration if it weren't for the sensitivity of @NASANuSTAR."", 'This is known to be a big problem for detecting heavily obscured black holes with X-ray vision, since large amounts of material can severley hinder our chances of even the highest energy X-rays from reaching our telescopes.', 'But finding them is very important. Evidence suggests that supermassive black holes grow most rapidly during heavily obscured phases, so studying them could be the key to understanding how these monsters have grown so huge since their birth near the beginning of the Universe!']",19,11,1485
160,95,1201853727059132416,987061319378587649,Maksym Andriushchenko 🇺🇦,"Excited to share our new black-box attack based on simple *random search*! Despite its simplicity it outperforms the recent SOTA by several times in terms of query efficiency. There are some interesting ideas behind Paper: Code: 1/n We use the simplest form of random search from 1960s. The only thing we modify is the sampling distribution. We select it in a way so that each iterate always stays at the boundary of the feasible set. This simple idea significantly improves query efficiency. 2/n As a result, the Square Attack outperforms all existing methods by a large margin with a simple random search scheme. The attack achieves both best query efficiency (*2x - 7x better*, depending on the model) and success rate (also in the low-query regime) on ImageNet. 3/n Square Attack is also useful for robustness evaluation of new defenses. There are cases (post-averaging, CLP, LSQ models) where it can significantly outperform even white-box PGD attack *with random restarts*. Thus, we recommend to use it to prevent false robustness claims. 4/n ",https://arxiv.org/abs/1912.00049,"We propose the Square Attack, a score-based black-box $l_2$- and $l_\infty$-adversarial attack that does not rely on local gradient information and thus is not affected by gradient masking. Square Attack is based on a randomized search scheme which selects localized square-shaped updates at random positions so that at each iteration the perturbation is situated approximately at the boundary of the feasible set. Our method is significantly more query efficient and achieves a higher success rate compared to the state-of-the-art methods, especially in the untargeted setting. In particular, on ImageNet we improve the average query efficiency in the untargeted setting for various deep networks by a factor of at least $1.8$ and up to $3$ compared to the recent state-of-the-art $l_\infty$-attack of Al-Dujaili & O'Reilly. Moreover, although our attack is black-box, it can also outperform gradient-based white-box attacks on the standard benchmarks achieving a new state-of-the-art in terms of the success rate. The code of our attack is available at this https URL ","Square Attack: a query-efficient black-box adversarial attack via random
search",4,"['Excited to share our new black-box attack based on simple *random search*! Despite its simplicity it outperforms the recent SOTA by several times in terms of query efficiency.\nThere are some interesting ideas behind\nPaper: \nCode: \n1/n ', 'We use the simplest form of random search from 1960s. The only thing we modify is the sampling distribution. We select it in a way so that each iterate always stays at the boundary of the feasible set. This simple idea significantly improves query efficiency. \n2/n https://t.co/lVQWsJkQCx', 'As a result, the Square Attack outperforms all existing methods by a large margin with a simple random search scheme. The attack achieves both best query efficiency (*2x - 7x better*, depending on the model) and success rate (also in the low-query regime) on ImageNet.\n3/n https://t.co/7MVi0QULBb', 'Square Attack is also useful for robustness evaluation of new defenses. There are cases (post-averaging, CLP, LSQ models) where it can significantly outperform even white-box PGD attack *with random restarts*. Thus, we recommend to use it to prevent false robustness claims.\n4/n https://t.co/ZCMbLlQiWT']",19,12,1090
161,37,1288395874192830464,196749454,Natalie Hogg,"Paper day! Ever wondered if you've discovered some beyond LCDM physics by accident? Could there be some hidden modified gravity effects lurking in gravitational wave detections? Have we found a new smoking gun for modified gravity? Read on to find out! Matteo (@matmartinelli1), Savvas and I created and used mock standard siren datasets to forecast the ability of the Einstein Telescope (ET), LSST and DESI to constrain the distance duality relation (DDR), which relates luminosity distances to angular diameter distances. We used a toy model in which photons decay into axions to break the DDR, and found that the combination of SNIa + GW events is competitive with the more commonly used SNIa + BAO when constraining deviations from DDR. (Paging the chair of the axion fan club @duetosymmetry 😉) But it pays to be careful when using a probe of the gravitational sector as modified gravity (MG) effects could be at play! By including a generic MG model in our mock datasets, we found that the DDR analysis became extremely biased, leading to a false detection of DDR violation! However, the problem can be resolved by explicitly including the modified gravity in your analysis. Here, the full combination of mock datasets broke the degeneracies in parameter space and correctly recovered the fiducial cosmology. No more false detection of DDR violation! Of course, if you have modified gravity, you will likely have a screening mechanism to go with it. GW events and SNIa are both events in which MG could be screened -- how does this affect our results? If GW are screened, we find another false detection of DDR violation... but if the SNIa are screened, we find that the cosmological parameters are also biased away from the fiducial cosmology, with only the combination LSST + DESI correctly recovering the fiducial. If this is seen in real data, it's a smoking gun for MG with this screening behaviour! Savvas also applied his Genetic Algorithm machine learning code to reconstruct the DDR as a function of redshift, finding that it can correctly distinguish between the LCDM and MG mock datasets and finds the same biases as in the parameterised case, nicely confirming our results. Final tweet! This is the first time I've shared a paper draft with people other than fellow authors before posting it on arXiv and we're really grateful for all the comments and feedback we received -- shout out to Ian, Kazuya, Carlos, Isaac and Bill (@BillWrightCosmo)!",https://arxiv.org/abs/2007.14335,"We use gravitational wave (GW) standard sirens, in addition to Type Ia supernovae (SNIa) and baryon acoustic oscillation (BAO) mock data, to forecast constraints on the electromagnetic and gravitational distance duality relations (DDR). We make use of a parameterised approach based on a specific DDR violation model, along with a machine learning reconstruction method based on the Genetic Algorithms. We find that GW provide an alternative to the use of BAO data to constrain violations of the DDR, reaching $3\%$ constraints on the violation parameter we consider when combined with SNIa, which is only improved by a factor of $\approx1.4$ if one instead considers the combination of BAO and SNIa. We also investigate the possibility that a neglected modification of gravity might lead to a false detection of DDR violations, even when screening mechanisms are active. 
We find that such a false detection can be extremely significant, up to $\approx10\sigma$ for very extreme modified gravity scenarios, while this reduces to $\approx4\sigma$ in a more realistic case. False detections can also provide a smoking gun for the modified gravity mechanism at play, as a result of the tension introduced between the SNIa+GW and SNIa+BAO combinations. ",Constraints on the distance duality relation with standard sirens,9,"[""Paper day! \n\nEver wondered if you've discovered some beyond LCDM physics by accident? Could there be some hidden modified gravity effects lurking in gravitational wave detections? Have we found a new smoking gun for modified gravity? Read on to find out!"", 'Matteo (@matmartinelli1), Savvas and I created and used mock standard siren datasets to forecast the ability of the Einstein Telescope (ET), LSST and DESI to constrain the distance duality relation (DDR), which relates luminosity distances to angular diameter distances.', 'We used a toy model in which photons decay into axions to break the DDR, and found that the combination of SNIa + GW events is competitive with the more commonly used SNIa + BAO when constraining deviations from DDR.\n\n(Paging the chair of the axion fan club @duetosymmetry 😉)', 'But it pays to be careful when using a probe of the gravitational sector as modified gravity (MG) effects could be at play! By including a generic MG model in our mock datasets, we found that the DDR analysis became extremely biased, leading to a false detection of DDR violation! https://t.co/etHLlyjjT7', 'However, the problem can be resolved by explicitly including the modified gravity in your analysis. Here, the full combination of mock datasets broke the degeneracies in parameter space and correctly recovered the fiducial cosmology. No more false detection of DDR violation! https://t.co/xLLRczzwIb', 'Of course, if you have modified gravity, you will likely have a screening mechanism to go with it. GW events and SNIa are both events in which MG could be screened -- how does this affect our results? If GW are screened, we find another false detection of DDR violation...', ""but if the SNIa are screened, we find that the cosmological parameters are also biased away from the fiducial cosmology, with only the combination LSST + DESI correctly recovering the fiducial. If this is seen in real data, it's a smoking gun for MG with this screening behaviour! https://t.co/wai5D8oxgK"", 'Savvas also applied his Genetic Algorithm machine learning code to reconstruct the DDR as a function of redshift, finding that it can correctly distinguish between the LCDM and MG mock datasets and finds the same biases as in the parameterised case, nicely confirming our results.', ""Final tweet! This is the first time I've shared a paper draft with people other than fellow authors before posting it on arXiv and we're really grateful for all the comments and feedback we received -- shout out to Ian, Kazuya, Carlos, Isaac and Bill (@BillWrightCosmo)!""]",20,07,2489
162,61,954018861208109056,881959726958862337,Yuhuai (Tony) Wu,"If you use K-FAC you only need to do 1 update (ACKTR), but if you use first order optimizer, you need to do 320 updates (PPO). AND 1 update by K-FAC still wins. This is what we (with @baaadas) find by comparing ACKTR vs. PPO vs. PPOKFAC. ",https://arxiv.org/abs/1801.05566,"In this technical report, we consider an approach that combines the PPO objective and K-FAC natural gradient optimization, for which we call PPOKFAC. We perform a range of empirical analysis on various aspects of the algorithm, such as sample complexity, training speed, and sensitivity to batch size and training epochs. We observe that PPOKFAC is able to outperform PPO in terms of sample complexity and speed in a range of MuJoCo environments, while being scalable in terms of batch size. In spite of this, it seems that adding more epochs is not necessarily helpful for sample efficiency, and PPOKFAC seems to be worse than its A2C counterpart, ACKTR. ","An Empirical Analysis of Proximal Policy Optimization with
Kronecker-factored Natural Gradients",1,"['If you use K-FAC you only need to do 1 update (ACKTR), but if you use first order optimizer, you need to do 320 updates (PPO). AND 1 update by K-FAC still wins. This is what we (with @baaadas) find by comparing ACKTR vs. PPO vs. PPOKFAC. ']",18,01,251
163,54,941628002307313664,1868950795,Ruth Misener,New @arxiv paper with @ICComputing PhD student @Georgiakouyiali discusses data structures for representing symmetry in quadratic optimisation problems. Our work was funded by the @EPSRC Doctoral Training Partnership and Early Career Fellowship schemes,https://arxiv.org/abs/1712.05222,"Symmetry in mathematical programming may lead to a multiplicity of solutions. In nonconvex optimisation, it can negatively affect the performance of the branch-and-bound algorithm. Symmetry may induce large search trees with multiple equivalent solutions, i.e. with the same optimal value. Dealing with symmetry requires detecting and classifying it first. This work develops methods for detecting groups of symmetry in the formulation of quadratically constrained quadratic optimisation problems via adjacency matrices. Using graph theory, we transform these matrices into Binary Layered Graphs (BLG) and enter them into the software package nauty. Nauty generates important symmetric properties of the original problem. ","Symmetry Detection for Quadratically Constrained Quadratic Programs
Using Binary Layered Graphs",1,['New @arxiv paper with @ICComputing PhD student @Georgiakouyiali discusses data structures for representing symmetry in quadratic optimisation problems. Our work was funded by the @EPSRC Doctoral Training Partnership and Early Career Fellowship schemes'],17,12,258
164,123,1114191978021904384,19658565,Corey Lynch,"We want our robots to learn complete visual representations, e.g. all object variation in a scene, not just pose. In , led by the amazing @sherjilozair, we find MI-based repr suffer from a ""completeness"" problem, and propose a fix based on Wasserstein dist. ",http://arxiv.org/abs/1903.11780,"Mutual information maximization has emerged as a powerful learning objective for unsupervised representation learning obtaining state-of-the-art performance in applications such as object recognition, speech recognition, and reinforcement learning. However, such approaches are fundamentally limited since a tight lower bound of mutual information requires sample size exponential in the mutual information. This limits the applicability of these approaches for prediction tasks with high mutual information, such as in video understanding or reinforcement learning. In these settings, such techniques are prone to overfit, both in theory and in practice, and capture only a few of the relevant factors of variation. This leads to incomplete representations that are not optimal for downstream tasks. In this work, we empirically demonstrate that mutual information-based representation learning approaches do fail to learn complete representations on a number of designed and real-world tasks. To mitigate these problems we introduce the Wasserstein dependency measure, which learns more complete representations by using the Wasserstein distance instead of the KL divergence in the mutual information estimator. We show that a practical approximation to this theoretically motivated solution, constructed using Lipschitz constraint techniques from the GAN literature, achieves substantially improved results on tasks where incomplete representations are a major challenge. ",Wasserstein Dependency Measure for Representation Learning,1,"['We want our robots to learn complete visual representations, e.g. all object variation in a scene, not just pose. In , led by the amazing @sherjilozair, we find MI-based\nrepr suffer from a ""completeness"" problem, and propose a fix based on Wasserstein dist. ']",19,03,270
165,184,1324775339545882624,1028632965121626114,Jiaxin Pei,"Excited to share my #emnlp2020 paper with @david__jurgens. We build an NLP model to estimate intimacy in language and study social norms in interpersonal communications Paper: Pip: Model @huggingface: 1/11 We build a new model to quantify the intimacy of questions and examine social norms in interpersonal communications in a variety of settings: How do these norms shape the way we communicate with each other? 2/11 One famous example from Psychology of a social norm around intimacy is who you can ask an intimate question to. Close friends are fine—but so are strangers! The low social cost of potentially offending a stranger means you can engage in a more intimate discussion. 3/11 But acquaintances are in this middle ground; if you get too intimate and they get offended, they know people you know, which could have a real cost. 4/11 There are plenty of friends, acquaintances, and strangers online. Do we see the same trend there? Yes! In fact, using a 1.1B edge social network on Twitter, we see people are highly sensitive to social distance in their level of question intimacy. 5/11 What about gender norms around intimacy communication? Across books, movies, and social media, we consistently find that male-male interactions are the least intimate. 6/11 Looking at book author’s gender, surprisingly, we also see these norms be upheld by women authors too, underscoring how ingrained these social expectations are! 7/11 Could all of these differences be due to the topic? Actually, no! While topic is moderately correlated with intimacy, questions about a topic often fall along a broad range of the intimacy spectrum. 8/11 The paper has more fun results on the effects of anonymity on intimacy and different linguistic strategies people use in phrasing their questions. 9/11 As a fun teaser, would swearing make a question more or less intimate—and why? Theory and answers in the paper! 10/11 The website for the paper has a longer summary of results and more details on how you can get the data and python libraries to do your own studies of intimacy. 11/11",https://arxiv.org/abs/2011.03020,"Intimacy is a fundamental aspect of how we relate to others in social settings. Language encodes the social information of intimacy through both topics and other more subtle cues (such as linguistic hedging and swearing). Here, we introduce a new computational framework for studying expressions of the intimacy in language with an accompanying dataset and deep learning model for accurately predicting the intimacy level of questions (Pearson's r=0.87). Through analyzing a dataset of 80.5M questions across social media, books, and films, we show that individuals employ interpersonal pragmatic moves in their language to align their intimacy with social settings. Then, in three studies, we further demonstrate how individuals modulate their intimacy to match social norms around gender, social distance, and audience, each validating key findings from studies in social psychology. Our work demonstrates that intimacy is a pervasive and impactful social dimension of language. ",Quantifying Intimacy in Language,11,"['Excited to share my #emnlp2020 paper with @david__jurgens. 
We build an NLP model to estimate intimacy in language and study social norms in interpersonal communications\nPaper: \nPip: \nModel @huggingface: \n1/11', 'We build a new model to quantify the intimacy of questions and examine social norms in interpersonal communications in a variety of settings: How do these norms shape the way we communicate with each other? 2/11', 'One famous example from Psychology of a social norm around intimacy is who you can ask an intimate question to. Close friends are fine—but so are strangers! The low social cost of potentially offending a stranger means you can engage in a more intimate discussion. 3/11', 'But acquaintances are in this middle ground; if you get too intimate and they get offended, they know people you know, which could have a real cost. 4/11', 'There are plenty of friends, acquaintances, and strangers online. Do we see the same trend there? Yes! In fact, using a 1.1B edge social network on Twitter, we see people are highly sensitive to social distance in their level of question intimacy. 5/11 https://t.co/2vIhwBQFsK', 'What about gender norms around intimacy communication? Across books, movies, and social media, we consistently find that male-male interactions are the least intimate. 6/11 https://t.co/r48ApCijAF', 'Looking at book author’s gender, surprisingly, we also see these norms be upheld by women authors too, underscoring how ingrained these social expectations are! 7/11 https://t.co/NKRspkC25i', 'Could all of these differences be due to the topic? Actually, no! While topic is moderately correlated with intimacy, questions about a topic often fall along a broad range of the intimacy spectrum. 8/11 https://t.co/tYNeFfNzsy', 'The paper has more fun results on the effects of anonymity on intimacy and different linguistic strategies people use in phrasing their questions. 9/11', 'As a fun teaser, would swearing make a question more or less intimate—and why? Theory and answers in the paper! 10/11', 'The website for the paper has a longer summary of results and more details on how you can get the data and python libraries to do your own studies of intimacy. https://t.co/CGIjsj41It \n11/11']",20,11,2129
166,195,1272702484096532481,1071113100739387392,Guanya Shi,"How valuable are predictions in online control? How many predictions are needed to achieve performance with O(1) dynamic regret? How well does MPC perform? We answer these in our new paper Joint with Chenkai Yu, @yisongyue, Soon-Jo Chung and Adam Wierman. We focus on online LQR control with stochastic and adversarial disturbances in dynamics. We characterize the cost-optimal and dynamic regret minimizing policies with k predictions of future disturbance, and show the marginal benefit of an extra prediction exponentially decays. We show that the greedy MPC is near-optimal - it only needs O(logT) predictions to reach O(1) dynamic regret (the same order as the required prediction horizon for O(1) regret). The power of predictions reduces the need for algorithmic sophistication due to the structure of LQR.",https://arxiv.org/abs/2006.07569,"We study the impact of predictions in online Linear Quadratic Regulator control with both stochastic and adversarial disturbances in the dynamics. In both settings, we characterize the optimal policy and derive tight bounds on the minimum cost and dynamic regret. Perhaps surprisingly, our analysis shows that the conventional greedy MPC approach is a near-optimal policy in both stochastic and adversarial settings. Specifically, for length-$T$ problems, MPC requires only $O(\log T)$ predictions to reach $O(1)$ dynamic regret, which matches (up to lower-order terms) our lower bound on the required prediction horizon for constant regret. ",The Power of Predictions in Online Control,3,"['How valuable are predictions in online control? How many predictions are needed to achieve performance with O(1) dynamic regret? How well does MPC perform? \nWe answer these in our new paper Joint with Chenkai Yu, @yisongyue, Soon-Jo Chung and Adam Wierman. ', 'We focus on online LQR control with stochastic and adversarial disturbances in dynamics. We characterize the cost-optimal and dynamic regret minimizing policies with k predictions of future disturbance, and show the marginal benefit of an extra prediction exponentially decays.', 'We show that the greedy MPC is near-optimal - it only needs O(logT) predictions to reach O(1) dynamic regret (the same order as the required prediction horizon for O(1) regret). The power of predictions reduces the need for algorithmic sophistication due to the structure of LQR.']",20,06,827
167,16,1012135946894970880,3018751880,Prof. Katelin Schutz,"On a positive note, I have a new paper out today about a thermal #darkmatter production mechanism w Yonit Hochberg, Eric Kuflik, Robert McGehee, and Hitoshi Murayama (@sleptogenesis). Really fun working with the SIMP crew doing some real #particlephysics! ",https://arxiv.org/abs/1806.10139,"Dark matter could be a thermal relic comprised of strongly interacting massive particles (SIMPs), where $3 \rightarrow 2$ interactions set the relic abundance. Such interactions generically arise in theories of chiral symmetry breaking via the Wess-Zumino-Witten term. In this work, we show that an axion-like particle can successfully maintain kinetic equilibrium between the dark matter and the visible sector, allowing the requisite entropy transfer that is crucial for SIMPs to be a cold dark matter candidate. Constraints on this scenario arise from beam dump and collider experiments, from the cosmic microwave background, and from supernovae. We find a viable parameter space when the axion-like particle is close in mass to the SIMP dark matter, with strong-scale masses of order a few hundred MeV. Many planned experiments are set to probe the parameter space in the near future. ",SIMPs through the axion portal,1,"['On a positive note, I have a new paper out today about a thermal #darkmatter production mechanism w Yonit Hochberg, Eric Kuflik, Robert McGehee, and Hitoshi Murayama (@sleptogenesis). Really fun working with the SIMP crew doing some real #particlephysics! ']",18,06,262
168,111,1357513543080431619,1278881046839398401,Mma Ikwut-Ukwa,"New paper on the arXiv today! We confirm and characterize two new massive, short-period Jupiters, TOI-558 and TOI-559, found in the @TESSatMIT Full Frame Images. This work is the final form of my senior thesis! 🙌 These planets were originally identified as candidates by two high school students, Asma Ali and Katya Bunten, working with George Zhou at @CenterForAstro to search through some of the early sectors of the TESS data. Then TESS recently reobserved them at higher cadence We globally modeled these systems with plentiful follow-up transits from @LCO_Global and PEST, and RVs from PFS and CHIRON TOI-558 b: Mp = 3.61 Mj, P = 14.57 days, e = 0.298 TOI-559 b: Mp = 6.01 Mj, P = 6.98 days, e = 0.151 We also examine the current sample of known transiting hot Jupiters--could there be multiple distinct mass-period distributions within this population? We’ll know more as TESS eventually delivers a nearly complete sample of hot Jupiters transiting nearby, bright stars Huge thank you to the best advisor @Astro_JRod + coauthors @samuelnquinn @amvanderburg @exofastupdates @bsgaudi @Therbaer @abieryla @lkreidberg @Jonmjenkins @ProfSaraSeager @twitspek @astrojennb @johannateske @JoshuaSchlieder and all the others who I have yet to find on twitter! @astroshrey @TESSatMIT thank you Shreyas!! @exoplamets @TESSatMIT thanks Samantha!!!",https://arxiv.org/abs/2102.02222,"We report the discovery of two short-period massive giant planets from NASA's Transiting Exoplanet Survey Satellite (TESS). Both systems, TOI-558 (TIC 207110080) and TOI-559 (TIC 209459275), were identified from the 30-minute cadence Full Frame Images and confirmed using ground-based photometric and spectroscopic follow-up observations from TESS's Follow-up Observing Program Working Group. We find that TOI-558 b, which transits an F-dwarf ($M_{*}=1.349^{+0.064}_{-0.065}\ M_{\odot}$, $R_{*}=1.496^{+0.042}_{-0.040}\ R_{\odot}$, $T_{eff}=6466^{+95}_{-93}\ K$, age $1.79^{+0.91}_{-0.73}\ Gyr$) with an orbital period of 14.574 days, has a mass of $3.61\pm0.15\ M_{\rm J}$, a radius of $1.086^{+0.041}_{-0.038}\ R_{\rm J}$, and an eccentric (e=$0.300^{+0.022}_{-0.020}$) orbit. TOI-559 b transits a G-dwarf ($M_{*}=1.026\pm0.057\ M_{\odot}$, $R_{*}=1.233^{+0.028}_{-0.026}\ R_{\odot}$, $T_{eff}=5925^{+85}_{-76}\ K$, age $6.8^{+2.5}_{-2.0}\ Gyr$) in an eccentric (e=$0.151\pm0.011$) 6.984-day orbit with a mass of $6.01^{+0.24}_{-0.23}\ M_{\rm J}$ and a radius of $1.091^{+0.028}_{-0.025}\ R_{\rm J}$. Our spectroscopic follow-up also reveals a long-term radial velocity trend for TOI-559, indicating a long-period companion. The statistically significant orbital eccentricity measured for each system suggests that these planets migrated to their current location through dynamical interactions. Interestingly, both planets are also massive ($>3\ M_{\rm J}$), adding to the population of massive giant planets identified by TESS. Prompted by these new detections of high-mass planets, we analyzed the known mass distribution of hot and warm Jupiters but find no significant evidence for multiple populations. TESS should provide a near magnitude-limited sample of transiting hot Jupiters, allowing for future detailed population studies. ",Two Massive Jupiters in Eccentric Orbits from the TESS Full Frame Images,7,"['New paper on the arXiv today! We confirm and characterize two new massive, short-period Jupiters, TOI-558 and TOI-559, found in the @TESSatMIT Full Frame Images. 
This work is the final form of my senior thesis! 🙌\n\n ', 'These planets were originally identified as candidates by two high school students, Asma Ali and Katya Bunten, working with George Zhou at @CenterForAstro to search through some of the early sectors of the TESS data. Then TESS recently reobserved them at higher cadence https://t.co/29SPzJMxls', 'We globally modeled these systems with plentiful follow-up transits from @LCO_Global and PEST, and RVs from PFS and CHIRON\nTOI-558 b: Mp = 3.61 Mj, P = 14.57 days, e = 0.298\nTOI-559 b: Mp = 6.01 Mj, P = 6.98 days, e = 0.151 https://t.co/nECcHVcR2p', 'We also examine the current sample of known transiting hot Jupiters--could there be multiple distinct mass-period distributions within this population? We’ll know more as TESS eventually delivers a nearly complete sample of hot Jupiters transiting nearby, bright stars https://t.co/BhgvhvSFk3', 'Huge thank you to the best advisor @Astro_JRod + coauthors @samuelnquinn @amvanderburg @exofastupdates @bsgaudi @Therbaer @abieryla @lkreidberg @Jonmjenkins @ProfSaraSeager @twitspek @astrojennb @johannateske @JoshuaSchlieder and all the others who I have yet to find on twitter!', '@astroshrey @TESSatMIT thank you Shreyas!!', '@exoplamets @TESSatMIT thanks Samantha!!!']",21,02,1375
169,188,1435664887007506433,2437293979,Nathan Ratledge,"Pleased to share our working paper on the economic effects of grid-based electrification. A few thoughts to complement @MarshallBBurke's great thread. 1. Satellite imagery and machine learning represent a new frontier for economics & policy evaluation. 2. Huge thanks to @atlasai_co, especially co-author Gabe Cadamuro, for their ground breaking work using satellite imagery & AI to literally create new data in really challenging settings. Here, we use imputed outcomes to create a balanced dataset suitable for causal inference. 3. One takeaway for other researches using similar methods is to be cognizant of the downstream task. Optimizing for r2 in the CNN process may not be the best metric, as we show. Thanks to Marshall, Brandon de la Cuesta and Matthieu Stigler for their brainpower on this. 4. Ultimately, we find that new grid-based electrification had a substantial, statistically significant positive effect on asset wealth in Uganda, with electrified communities growing nearly 2x as fast as un-electrified comms. These results are robust to varying parameters. 5. Stepping away from the paper & putting on my policy hat, our results suggest that investments in grid extensions (while costly) have measurable, near-term impacts, even in low income settings. As 600m people in SSA lack electricity, we should accelerate grid extensions.",https://arxiv.org/abs/2109.02890,"In many regions of the world, sparse data on key economic outcomes inhibits the development, targeting, and evaluation of public policy. We demonstrate how advancements in satellite imagery and machine learning can help ameliorate these data and inference challenges. In the context of an expansion of the electrical grid across Uganda, we show how a combination of satellite imagery and computer vision can be used to develop local-level livelihood measurements appropriate for inferring the causal impact of electricity access on livelihoods. We then show how ML-based inference techniques deliver more reliable estimates of the causal impact of electrification than traditional alternatives when applied to these data. We estimate that grid access improves village-level asset wealth in rural Uganda by 0.17 standard deviations, more than doubling the growth rate over our study period relative to untreated areas. Our results provide country-scale evidence on the impact of a key infrastructure investment, and provide a low-cost, generalizable approach to future policy evaluation in data sparse environments. ","Using Satellite Imagery and Machine Learning to Estimate the Livelihood
Impact of Electricity Access",5,"[""Pleased to share our working paper on the economic effects of grid-based electrification. A few thoughts to complement @MarshallBBurke's great thread. \n\n1. Satellite imagery and machine learning represent a new frontier for economics & policy evaluation.\n\n"", '2. Huge thanks to @atlasai_co, especially co-author Gabe Cadamuro, for their ground breaking work using satellite imagery & AI to literally create new data in really challenging settings. \n\nHere, we use imputed outcomes to create a balanced dataset suitable for causal inference.', '3. One takeaway for other researches using similar methods is to be cognizant of the downstream task. Optimizing for r2 in the CNN process may not be the best metric, as we show. \n\nThanks to Marshall, Brandon de la Cuesta and Matthieu Stigler for their brainpower on this.', '4. Ultimately, we find that new grid-based electrification had a substantial, statistically significant positive effect on asset wealth in Uganda, with electrified communities growing nearly 2x as fast as un-electrified comms. \n\nThese results are robust to varying parameters.', '5. Stepping away from the paper & putting on my policy hat, our results suggest that investments in grid extensions (while costly) have measurable, near-term impacts, even in low income settings. \n\nAs 600m people in SSA lack electricity, we should accelerate grid extensions.']",21,09,1363
170,34,1364465902134165505,1968365508,Samaya Nissanke (she/her) 💙,"Group news: new paper by PhD student @GRaaymakers on “the challenges ahead for multi messenger analyses of gravitational waves and kilonova: a case study on GW190425.”V proud of Geert & paper after 2 + yrs of developing the framework & analyses. 22 pages long, quite a lot of physics in it, & an end to end analysis developed by Geert and pause for reflection by the group & hopefully useful for the community! Well done @GRaaymakers for this comprehensive & challenging paper, and thank you to our excellent collaborators, @FrancoisFoucart @BullaMattia @astro_rafernan @ameliahenkel @tedwards2412 @AntierSarah & many others not on twitter who all contributed wonderfully! & thanks to @NWO_Science for supporting this work through a VIDI. Written originally in 2014, then reapplied in 2015, awarded in 2016 but the actual discovery of gravitational waves and multi messenger members, plus becoming new mum, had us occupied. Mission completed! And code will be made open source shortly!",https://arxiv.org/abs/2102.11569,"In recent years, there have been significant advances in multi-messenger astronomy due to the discovery of the first, and so far only confirmed, gravitational wave event with a simultaneous electromagnetic (EM) counterpart, as well as improvements in numerical simulations, gravitational wave (GW) detectors, and transient astronomy. This has led to the exciting possibility of performing joint analyses of the GW and EM data, providing additional constraints on fundamental properties of the binary progenitor and merger remnant. Here, we present a new Bayesian framework that allows inference of these properties, while taking into account the systematic modeling uncertainties that arise when mapping from GW binary progenitor properties to photometric light curves. We extend the relative binning method presented in Zackay et al. (2018) to include extrinsic GW parameters for fast analysis of the GW signal. The focus of our EM framework is on light curves arising from r-process nucleosynthesis in the ejected material during and after merger, the so called kilonova, and particularly on black hole - neutron star systems. As a case study, we examine the recent detection of GW190425, where the primary object is consistent with being either a black hole (BH) or a neutron star (NS). We show quantitatively how improved mapping between binary progenitor and outflow properties, and/or an increase in EM data quantity and quality are required in order to break degeneracies in the fundamental source parameters. ","The Challenges Ahead for Multimessenger Analyses of Gravitational Waves
and Kilonova: a Case Study on GW190425",5,"['Group news: new paper by PhD student @GRaaymakers on “the challenges ahead for multi messenger analyses of gravitational waves and kilonova: a case study on GW190425.”V proud of Geert & paper after 2 + yrs of developing the framework & analyses. ', '22 pages long, quite a lot of physics in it, & an end to end analysis developed by Geert and pause for reflection by the group & hopefully useful for the community!', 'Well done @GRaaymakers for this comprehensive & challenging paper, and thank you to our excellent collaborators, @FrancoisFoucart @BullaMattia @astro_rafernan @ameliahenkel @tedwards2412 @AntierSarah & many others not on twitter who all contributed wonderfully!', '& thanks to @NWO_Science for supporting this work through a VIDI. Written originally in 2014, then reapplied in 2015, awarded in 2016 but the actual discovery of gravitational waves and multi messenger members, plus becoming new mum, had us occupied. Mission completed!', 'And code will be made open source shortly!']",21,02,992
171,167,1474089465408888872,1304363050791772160,Florio M. Ciaglia,"What can Lie algebras tell us about Jordan algebras? Among other things which I surely do not know, they can tell us how to find the Fisher-Rao and the Bures-Helstrom metric tensor as if we were looking for the canonical symplectic form on coadjoint orbits ",https://arxiv.org/abs/2112.09781,"Inspired by Kirillov's theory of coadjoint orbits, we develop a structure theory for finite dimensional Jordan algebras. Given a Jordan algebra ${\mathcal{J}}$, we define a generalized distribution $\mathcal{H}^{{\mathcal{J}}}$ on its dual space ${\mathcal{J}}^\star$ which is canonically determined by the Jordan product in ${\mathcal{J}}$, is invariant under the action of what we call the structure group of ${\mathcal{J}}$, and carries a naturally-defined pseudo-Riemannian bilinear form ${\mathcal{G}}_{\xi}$ at each point. We then turn to the case of positive Jordan algebras and classify the orbits of ${\mathcal{J}}^\star$ under the structure group action. We show that the only orbits which are also leaves of $\mathcal{H}^{{\mathcal{J}}}$ are those in the closure of the cone of squares or its negative, and these are the only orbits where this pseudo-Riemannian bilinear form determines a Riemannian metric tensor ${\mathcal{G}}$. We discuss applications of our construction to both classical and quantum information geometry by showing that, for appropriate choices of ${\mathcal{J}}$, the Riemannian metric tensor ${\mathcal{G}}$ coincides with the Fisher-Rao metric on non-normalized probability distributions on a finite sample space, or with the Bures-Helstrom metric for non-normalized, faithful quantum states of a finite-level quantum system. ",What Lie algebras can tell us about Jordan algebras,1,"['What can Lie algebras tell us about Jordan algebras? Among other things which I surely do not know, they can tell us how to find the Fisher-Rao and the Bures-Helstrom metric tensor as if we were looking for the canonical symplectic form on coadjoint orbits\n']",21,12,263
172,131,1369359280009224194,114562472,Prof. Emily Levesque 🤓✨🔭📚,New paper alert! This SUPER-cool exploration of detecting gravitational wave signatures from Thorne-Zytkow objects (🤯) was led by Lindsay DeMarchi (at @NUCIERA) along with @JaxYellsAtLaser and myself! It's now in press with ApJ; go check it out at 🤩,https://arxiv.org/abs/2103.03887,"Thorne-\.Zytkow objects (T\.ZOs) are a class of stellar object comprised of a neutron star core surrounded by a large and diffuse envelope. Their exterior appearance is identical to red supergiants; the distinctive electromagnetic signature of a T\.ZO is a suite of unusual chemical abundance patterns, including excesses of Li, Rb, Mo, and Ca. However, electromagnetic observations cannot unambiguously identify the presence of a neutron star core. Detection of continuous gravitational wave emission from a rotating neutron star core would provide strong supporting evidence for the existence of T\.ZOs. We present a model for gravitational wave detector confirmation of T\.ZOs and demonstrate that these objects should be detectable with Advanced LIGO. We also investigate possible targets for joint optical and gravitational searches, and comment on prospects for detectability in both current and future gravitational wave detector networks. ",Prospects for Multimessenger Observations of Thorne-\.Zytkow Objects,1,"[""New paper alert! This SUPER-cool exploration of detecting gravitational wave signatures from Thorne-Zytkow objects (🤯) was led by Lindsay DeMarchi (at @NUCIERA) along with @JaxYellsAtLaser and myself! It's now in press with ApJ; go check it out at 🤩""]",21,03,256
173,227,1445469040944848896,135114281,Ameya Joshi,Our NeurIPS-2021 paper on Differentiable Spline Approximations is now on @arxiv_org : . We propose gradient based optimization for splines and show some really cool applications in 3D reconstruction and PDEs! @chomd90 @BabuaSpeaks @baskinscience @adarshk,https://arxiv.org/abs/2110.01532,"The paradigm of differentiable programming has significantly enhanced the scope of machine learning via the judicious use of gradient-based optimization. However, standard differentiable programming methods (such as autodiff) typically require that the machine learning models be differentiable, limiting their applicability. Our goal in this paper is to use a new, principled approach to extend gradient-based optimization to functions well modeled by splines, which encompass a large family of piecewise polynomial models. We derive the form of the (weak) Jacobian of such functions and show that it exhibits a block-sparse structure that can be computed implicitly and efficiently. Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable ""layer"" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis. ",Differentiable Spline Approximations,1,['Our NeurIPS-2021 paper on Differentiable Spline Approximations is now on @arxiv_org : . We propose gradient based optimization for splines and show some really cool applications in 3D reconstruction and PDEs!\n@chomd90 @BabuaSpeaks @baskinscience @adarshk'],21,10,260
174,93,1336930498887684096,1007218217633361920,Fredrik K. Gustafsson,"New paper: Accurate 3D Object Detection using Energy-Based Models. arXiv: Code: Project page: We apply energy-based models p(y|x; theta) to the task of 3D bounding box regression, extending the recent energy-based regression approach from 2D to 3D object detection. This is achieved by designing a differentiable pooling operator for 3D bounding boxes y, and adding an extra network branch to the state-of-the-art 3D object detector SA-SSD. We evaluate our proposed detector on the KITTI dataset and consistently outperform the SA-SSD baseline, demonstrating the potential of energy-based models for 3D object detection.",https://arxiv.org/abs/2012.04634,"Accurate 3D object detection (3DOD) is crucial for safe navigation of complex environments by autonomous robots. Regressing accurate 3D bounding boxes in cluttered environments based on sparse LiDAR data is however a highly challenging problem. We address this task by exploring recent advances in conditional energy-based models (EBMs) for probabilistic regression. While methods employing EBMs for regression have demonstrated impressive performance on 2D object detection in images, these techniques are not directly applicable to 3D bounding boxes. In this work, we therefore design a differentiable pooling operator for 3D bounding boxes, serving as the core module of our EBM network. We further integrate this general approach into the state-of-the-art 3D object detector SA-SSD. On the KITTI dataset, our proposed approach consistently outperforms the SA-SSD baseline across all 3DOD metrics, demonstrating the potential of EBM-based regression for highly accurate 3DOD. Code is available at this https URL ",Accurate 3D Object Detection using Energy-Based Models,4,"['New paper: Accurate 3D Object Detection using Energy-Based Models.\n\narXiv: \nCode: \nProject page: \n\n', 'We apply energy-based models p(y|x; theta) to the task of 3D bounding box regression, extending the recent energy-based regression approach from 2D to 3D object detection.', 'This is achieved by designing a differentiable pooling operator for 3D bounding boxes y, and adding an extra network branch to the state-of-the-art 3D object detector SA-SSD.', 'We evaluate our proposed detector on the KITTI dataset and consistently outperform the SA-SSD baseline, demonstrating the potential of energy-based models for 3D object detection.']",20,12,648
175,20,1487004492734377988,1173944192822927361,astrid.eichhorn,New paper out with my postdoc Gustavo P. de Brito: We strengthen the evidence for the predictive power of asymptotic safety for #quantumgravity and matter. This could enable tests of quantum gravity with already existing data from particle physics. ,https://arxiv.org/abs/2201.11402,"We explore the effect of quantum gravity on matter within a Renormalization Group framework. First, our results provide an explicit example of how misleading conclusions can be drawn by analyzing the gravitational contributions to beta functions, instead of analyzing universal quantities, such as critical exponents, that can be extracted from the beta functions. This could be key to explain differences between perturbative studies and Functional Renormalization Group studies. Second, we strengthen the evidence that asymptotically safe gravity could generate a predictive ultraviolet completion for matter theories with gauge interactions, even in the limit of vanishing dimensionful regulator function. We also find that the situation can be more subtle with higher-order, gravity-induced matter interactions. ","Nonvanishing gravitational contribution to matter beta functions for
vanishing dimensionful regulators",1,['New paper out with my postdoc Gustavo P. de Brito: We strengthen the evidence for the predictive power of asymptotic safety for #quantumgravity and matter. This could enable tests of quantum gravity with already existing data from particle physics. '],22,01,255
176,149,1336601225228410881,2868753520,Earl T Campbell,"Today's paper disco dance for our @awscloud fault-tolerant quantum computer design: from cat-code qubits with realistic noise, through biased noise error correction, two new Toffoli prep protocols, then resource counting the Hubbard model Due to high levels of juicy content, it may take some time for your device to download the pdf.",https://arxiv.org/abs/2012.04108,"We present a comprehensive architectural analysis for a proposed fault-tolerant quantum computer based on cat codes concatenated with outer quantum error-correcting codes. For the physical hardware, we propose a system of acoustic resonators coupled to superconducting circuits with a two-dimensional layout. Using estimated physical parameters for the hardware, we perform a detailed error analysis of measurements and gates, including CNOT and Toffoli gates. Having built a realistic noise model, we numerically simulate quantum error correction when the outer code is either a repetition code or a thin rectangular surface code. Our next step toward universal fault-tolerant quantum computation is a protocol for fault-tolerant Toffoli magic state preparation that significantly improves upon the fidelity of physical Toffoli gates at very low qubit cost. To achieve even lower overheads, we devise a new magic-state distillation protocol for Toffoli states. Combining these results together, we obtain realistic full-resource estimates of the physical error rates and overheads needed to run useful fault-tolerant quantum algorithms. We find that with around 1,000 superconducting circuit components, one could construct a fault-tolerant quantum computer that can run circuits which are currently intractable for classical computers. Hardware with 18,000 superconducting circuit components, in turn, could simulate the Hubbard model in a regime beyond the reach of classical computing. ",Building a fault-tolerant quantum computer using concatenated cat codes,2,"[""Today's paper disco dance for our @awscloud fault-tolerant quantum computer design: from cat-code qubits with realistic noise, through biased noise error correction, two new Toffoli prep protocols, then resource counting the Hubbard model "", 'Due to high levels of juicy content, it may take some time for your device to download the pdf.']",20,12,341
177,159,1270333283755270144,2799887322,Robert Dadashi,"New paper out: PWIL ! A simple imitation learning method, which reinforces a reward signal based on a distance to expert demonstrations. Makes Humanoid walk with a single demonstration (below). 1/ Idea: at the start of the episode all expert state-action pairs are available. As the agent takes action a in state s, look for the closest expert state-action pair (s*, a*), pop it, and define a reward r = exp(- d(s, a, s*, a*) ). 2/ Conceptually, PWIL defines a suboptimal transport between the agent state-action pairs and the expert state-action pairs. The approach relies on a distance in an MDP; in our case we use expert demonstrations to derive a distance. 3/ Contrary to adversarial IL methods, we bypass the minmax optimization problem and reinforce a non-stationary reward function that is not re-parameterized with interactions with the environment, and that relies on 2 hyperparameters. 4/ We compare PWIL with DAC, and show results for the original return of the task (not available in real settings) but also in terms of the Wasserstein distance between the agent and the expert. 5/ We recover near-optimal expert behaviour on all tasks considered. Joint work with my great collaborators: @leonardhussenot, Matthieu Geist and Olivier Pietquin ! 6/ with hopefully a sharper version of our humanoid :) ",http://arxiv.org/abs/2006.04678,"Imitation Learning (IL) methods seek to match the behavior of an agent with that of an expert. In the present work, we propose a new IL method based on a conceptually simple algorithm: Primal Wasserstein Imitation Learning (PWIL), which ties to the primal form of the Wasserstein distance between the expert and the agent state-action distributions. We present a reward function which is derived offline, as opposed to recent adversarial IL algorithms that learn a reward function through interactions with the environment, and which requires little fine-tuning. We show that we can recover expert behavior on a variety of continuous control tasks of the MuJoCo domain in a sample efficient manner in terms of agent interactions and of expert interactions with the environment. Finally, we show that the behavior of the agent we train matches the behavior of the expert with the Wasserstein distance, rather than the commonly used proxy of performance. ",Primal Wasserstein Imitation Learning,7,"['New paper out: PWIL ! A simple imitation learning method, which reinforces a reward signal based on a distance to expert demonstrations. Makes Humanoid walk with a single demonstration (below). 1/\n\n ', 'Idea: at the start of the episode all expert state-action pairs are available. As the agent takes action a in state s, look for the closest expert state-action pair (s*, a*), pop it, and define a reward r = exp(- d(s, a, s*, a*) ). 2/', 'Conceptually, PWIL defines a suboptimal transport between the agent state-action pairs and the expert state-action pairs. The approach relies on a distance in an MDP; in our case we use expert demonstrations to derive a distance. 3/', 'Contrary to adversarial IL methods, we bypass the minmax optimization problem and reinforce a non-stationary reward function that is not re-parameterized with interactions with the environment, and that relies on 2 hyperparameters. 4/', 'We compare PWIL with DAC, and show results for the original return of the task (not available in real settings) but also in terms of the Wasserstein distance between the agent and the expert. 
5/', 'We recover near-optimal expert behaviour on all tasks considered. Joint work with my great collaborators: @leonardhussenot, Matthieu Geist and Olivier Pietquin ! 6/', 'with hopefully a sharper version of our humanoid :) https://t.co/YsrfLl6hiX']",20,06,1332
178,88,1171326547569106944,1153187867897860096,Nikita Nikolaev,New Paper: I construct Levelt filtration for singularly perturbed linear systems of #ODE (in rank 2 at a regular singular point) maintaining very tight #asymptotic control by upper-triangularising such system in a singular #perturbation families. I put this paper out whilst attending the @NCCRSwissMAP General Meeting in Villars-sur-Ollon. Hit submit with the Alps in my view.,https://arxiv.org/abs/1909.04011,"We study singularly perturbed linear systems of rank two of ordinary differential equations of the form $\hbar x\partial_x \psi (x, \hbar) + A (x, \hbar) \psi (x, \hbar) = 0$, with a regular singularity at $x = 0$, and with a fixed asymptotic regularity in the perturbation parameter $\hbar$ of Gevrey type in a fixed sector. We show that such systems can be put into an upper-triangular form by means of holomorphic gauge transformations which are also Gevrey in the perturbation parameter $\hbar$ in the same sector. We use this result to construct a family in $\hbar$ of Levelt filtrations which specialise to the usual Levelt filtration for every fixed nonzero value of $\hbar$; this family of filtrations recovers in the $\hbar \to 0$ limit the eigen-decomposition for the $\hbar$-leading-order of the matrix $A (x, \hbar)$, and also recovers in the $x \to 0$ limit the eigen-decomposition of the residue matrix $A (0, \hbar)$. ","Triangularisation of Singularly Perturbed Logarithmic Differential
Systems of Rank 2",2,"['New Paper: I construct Levelt filtration for singularly perturbed linear systems of #ODE (in rank 2 at a regular singular point) maintaining very tight #asymptotic control by upper-triangularising such system in a singular #perturbation families.\n\n', 'I put this paper out whilst attending the @NCCRSwissMAP General Meeting in Villars-sur-Ollon. Hit submit with the Alps in my view.']",19,09,384
179,104,1457525905711042570,901266828655284225,Brian Metzger," Perfect timing for new paper on a new type of transient. Punchline: while kilonovae signal the birth of light black holes (from NS mergers), ""Super-kilonovae"" may accompany the birth of the most massive (stellar mass) LIGO BHs. @astroVAV @amanagarawal20 Basic idea: scale up collapsars (progenitors of long GRBs) to extremely massive stars above the pair-instability mass gap. When trying to feed a newly formed BH at such high rates, much of the in-falling star doesn't make it in, instead being ejected in accretion disk outflows. We estimate these neutron-rich outflows generate r-process elements, with yields ~10s of Msun (~100 times higher than in neutron star mergers like GW170817 and ~10 times higher than ""ordinary"" low-mass collapsars). This results in a lower final mass BH than one would predict from the progenitor He core, allowing to fill-in the pair-instability mass-gap ""from above,"" and providing a speculative channel for generating massive BBH like the components GW190521. This large mass ejection powers a radioactively-powered transient much brighter and longer-lasting than ordinary kilonovae, similar to SNe but much redder (near-IR peak); hence ""Super-kilonova"". Roman Space Telescope could potentially discover these transients out to z ~ 1. The nominal BH accretion rates achieved are also significantly higher than ordinary collapsars, allowing to power particularly energetic gamma-ray bursts. SuperKN should be searched for following the most energetic GRBs by JWST. Gravitational instabilities in these massive disks could also generate GW emission accessible to 3G detectors; unlike in the ""chirp"" in CO mergers, the GW signal decreases in frequency with time as the disk grows in radius (we term ""sad trombone"", to borrow a term from the FRBs). Overall a fun ""ideas"" paper to work out, which puts together some ideas already germinating out there in the community, and completed by a talented group of young researchers, most of which overlapped here at Columbia/CCA over past few years.",https://arxiv.org/abs/2111.03094,"The core collapse of rapidly rotating massive ~10 Msun stars (""collapsars""), and resulting formation of hyper-accreting black holes, are a leading model for the central engines of long-duration gamma-ray bursts (GRB) and promising sources of r-process nucleosynthesis. Here, we explore the signatures of collapsars from progenitors with extremely massive helium cores >130 Msun above the pair-instability mass gap. While rapid collapse to a black hole likely precludes a prompt explosion in these systems, we demonstrate that disk outflows can generate a large quantity (up to >50 Msun) of ejecta, comprised of >5-10 Msun in r-process elements and ~0.1-1 Msun of $^{56}$Ni, expanding at velocities ~0.1c. Radioactive heating of the disk-wind ejecta powers an optical/infrared transient, with a characteristic luminosity $\sim 10^{42}$ erg s$^{-1}$ and spectral peak in the near-infrared (due to the high optical/UV opacities of lanthanide elements) similar to kilonovae from neutron star mergers, but with longer durations $\gtrsim$ 1 month. These ""super-kilonovae"" (superKNe) herald the birth of massive black holes >60 Msun, which, as a result of disk wind mass-loss, can populate the pair-instability mass gap 'from above' and could potentially create the binary components of GW190521. 
SuperKNe could be discovered via wide-field surveys such as those planned with the Roman Space Telescope or via late-time infrared follow-up observations of extremely energetic GRBs. Gravitational waves of frequency ~0.1-50 Hz from non-axisymmetric instabilities in self-gravitating massive collapsar disks are potentially detectable by proposed third-generation intermediate and high-frequency observatories at distances up to hundreds of Mpc; in contrast to the ""chirp"" from binary mergers, the collapsar gravitational-wave signal decreases in frequency as the disk radius grows (""sad trombone""). ","""Super-Kilonovae"" from Massive Collapsars as Signatures of Black-Hole
Birth in the Pair-instability Mass Gap",8,"[' Perfect timing for new paper on a new type of transient. Punchline: while kilonovae signal the birth of light black holes (from NS mergers), ""Super-kilonovae"" may accompany the birth of the most massive (stellar mass) LIGO BHs. @astroVAV @amanagarawal20', ""Basic idea: scale up collapsars (progenitors of long GRBs) to extremely massive stars above the pair-instability mass gap. When trying to feed a newly formed BH at such high rates, much of the in-falling star doesn't make it in, instead being ejected in accretion disk outflows."", 'We estimate these neutron-rich outflows generate r-process elements, with yields ~10s of Msun (~100 times higher than in neutron star mergers like GW170817 and ~10 times higher than ""ordinary"" low-mass collapsars).', 'This results in a lower final mass BH than one would predict from the progenitor He core, allowing to fill-in the pair-instability mass-gap ""from above,"" and providing a speculative channel for generating massive BBH like the components GW190521. https://t.co/txzEAL7dEe', 'This large mass ejection powers a radioactively-powered transient much brighter and longer-lasting than ordinary kilonovae, similar to SNe but much redder (near-IR peak); hence ""Super-kilonova"". Roman Space Telescope could potentially discover these transients out to z ~ 1. https://t.co/omoLHbqK11', 'The nominal BH accretion rates achieved are also significantly higher than ordinary collapsars, allowing to power particularly energetic gamma-ray bursts. SuperKN should be searched for following the most energetic GRBs by JWST.', 'Gravitational instabilities in these massive disks could also generate GW emission accessible to 3G detectors; unlike in the ""chirp"" in CO mergers, the GW signal decreases in frequency with time as the disk grows in radius (we term ""sad trombone"", to borrow a term from the FRBs). https://t.co/QSmoC79fML', 'Overall a fun ""ideas"" paper to work out, which puts together some ideas already germinating out there in the community, and completed by a talented group of young researchers, most of which overlapped here at Columbia/CCA over past few years.']",21,11,2050
180,1,1017373588033298432,927837253,Emtiyaz Khan,"I will talk about our new work on ""Bayesian deep learning using weight-perturbation in Adam"" at #icml2018 in ""Deep Learning (Bayesian) 2"" session at 4:50pm in room A4. Paper here Slides here Code here 1/6 Short summary: Gaussian mean-field variational inference by running Adam on the MLE objective and making the following changes: perturb the weights. Second, add a contribution from the prior, and use a small minibatch size. 2/6 This result is a direct consequence of using natural-gradients instead of gradients. The mean is equal to the parameter returned by Adam, and the variance can be obtained from the scale vector. Perturbation is due to the sampling from variational distribution. 3/6 Small minibatches are due to 'a square of sum of gradients' approximation in Adam for the second-order information. See theorem 1 in the paper. 4/6 We also propose VadaGrad and Variational Adaptive Newton (VAN) method for variational optimization (or what @beenwrekt calls Random search). This work is cool because the variance of the search distribution is automatically adapted. Also see 5/6 Also check out a very similar work by @Guodzh @DavidDuvenaud @RogerGrosse They have done some interesting things with KFAC. 6/6 7/6 Tweeting is hard.",https://arxiv.org/abs/1806.04854,"Uncertainty computation in deep learning is essential to design robust and reliable systems. Variational inference (VI) is a promising approach for such computation, but requires more effort to implement and execute compared to maximum-likelihood methods. In this paper, we propose new natural-gradient algorithms to reduce such efforts for Gaussian mean-field VI. Our algorithms can be implemented within the Adam optimizer by perturbing the network weights during gradient evaluations, and uncertainty estimates can be cheaply obtained by using the vector that adapts the learning rate. This requires lower memory, computation, and implementation effort than existing VI methods, while obtaining uncertainty estimates of comparable quality. Our empirical results confirm this and further suggest that the weight-perturbation in our algorithm could be useful for exploration in reinforcement learning and stochastic optimization. ",Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam,7,"['I will talk about our new work on ""Bayesian deep learning using weight-perturbation in Adam"" at #icml2018 in ""Deep Learning (Bayesian) 2"" session at 4:50pm in room A4. Paper here Slides here Code here 1/6 ', 'Short summary: Gaussian mean-field variational inference by running Adam on the MLE objective and making the following changes: perturb the weights. Second, add a contribution from the prior, and use a small minibatch size. 2/6', 'This result is a direct consequence of using natural-gradients instead of gradients. The mean is equal to the parameter returned by Adam, and the variance can be obtained from the scale vector. Perturbation is due to the sampling from variational distribution. 3/6', ""Small minibatches are due to 'a square of sum of gradients' approximation in Adam for the second-order information. See theorem 1 in the paper. 4/6 https://t.co/Bff8Grqprd"", 'We also propose VadaGrad and Variational Adaptive Newton (VAN) method for variational optimization (or what @beenwrekt calls Random search). This work is cool because the variance of the search distribution is automatically adapted. 
Also see https://t.co/0Izj1HsmKI 5/6 https://t.co/axKWYyHgnH', 'Also check out a very similar work by @Guodzh @DavidDuvenaud @RogerGrosse https://t.co/vuPtmVm26o They have done some interesting things with KFAC. 6/6', '7/6 Tweeting is hard.']",18,06,1297
181,128,1243238142397812736,572479189,Manlio De Domenico,"Break from #COVID19 to share this great work led by @GiuliaTtt in coll. w/ @ricgallotti We propose a physically grounded way to calculate efficiency of network flows, and show its relevance for functional percolation as opposed to structural percolation. In parallel, our model w/ @egaltmann for bursts in collective attention just published! We make publicly available the multiplex networks: 26M+ social links among 10M+ users during 9 exceptional events. Enjoy! Github: ",https://arxiv.org/abs/2003.11374,"Network science enables the effective analysis of real interconnected systems, characterized by a complex interplay between topology and interconnections strength. It is well-known that the topology of a network affects its resilience to failures or attacks, as well as its functions. Exchanging information is crucial for many real systems: the internet, transportation networks and the brain are key examples. Despite the introduction of measures of efficiency to analyze network flows, i.e. topologies characterized by weighted connectivity, here we show that they fail to capture combined information of link existence and link weight. In this letter we propose a physically-grounded estimator of flow efficiency which can be computed for every weighted network, regardless from the scale and nature of weights and from any (missing) metadata. Remarkably, results show that our estimator captures the heterogeneity of flows along with topological differences and its complement information obtained from percolation analysis of several empirical systems, including transportation, trade, migrations, and brain networks. We show that cutting the heaviest connections may increase the average communication efficiency of the system and hence, counterintuively, a sparser network is not necessarily less efficient. Remarkably, our estimator enables the comparison of communication efficiency of networks arising from different fields, without the possible pitfalls deriving from the scale of flow. ",Quantifying efficient information exchange in real network flows,2,"['Break from #COVID19 to share this great work led by @GiuliaTtt in coll. w/ @ricgallotti \nWe propose a physically grounded way to calculate efficiency of network flows, and show its relevance for functional percolation as opposed to structural percolation.\n\n ', 'In parallel, our model w/ @egaltmann for bursts in collective attention just published! https://t.co/FE399RzTwm\n\nWe make publicly available the multiplex networks: 26M+ social links among 10M+ users during 9 exceptional events. Enjoy!\nGithub: https://t.co/fZBoZ1Ky4g https://t.co/Qm77r63w2G']",20,03,507
182,47,1418491800264744960,2915749124,Dhiraj Hazra,"Our new paper 'Dark Twilight Joined with the Light of Dawn to Unveil the Reionization History' with Daniela Paoletti, Fabio Finelli and @georgesmoot — — an extended analysis of the reionization history based on recent cosmological and astrophysical data.",https://arxiv.org/abs/2107.10693,"Improved measurement of the Cosmic Microwave Background polarization from Planck allows a detailed study of reionization beyond the average optical depth. The lower value of the optical depth disfavours an early onset and an early completion of reionization in favour of a redsfhit range where different astrophysical probes provide sensible information on the sources of reionization and the status of the intergalactic medium. In this work we extend our previous study in which we constrained reionization by combining three different probes - CMB, UV luminosity density and neutral hydrogen fraction data - in both treatment and data: we first allow variation in the UV source term varying the product of the efficiency of conversion of UV luminosity into ionizing photons and the escape fraction together with the reionization and cosmological parameters, and then we investigate the impact of a less conservative cut for the UV luminosity function. We find that the estimate for the efficiency is consistent within 95% C.L. with the fixed value we considered in our previous results and is mostly constrained by the QHII data. We find that allowing the efficiency to vary does not affect significantly our results for the average optical depth for monotonic reionization histories, recovering $\tau=0.0519_{-0.0008}^{+0.0010}$ at 68% CL , consistent with our previous studies. Using a less conservative cut for the UV luminosity function, we find $\tau=0.0541_{-0.0016}^{+0.0013}$ at 68% CL, due to the faint end of the luminosity function in the data we use, that also prefers a larger contribution from higher redshifts. ","Dark Twilight Joined with the Light of Dawn to Unveil the Reionization
History",1,"[""Our new paper 'Dark Twilight Joined with the Light of Dawn to Unveil the Reionization History' with Daniela Paoletti, Fabio Finelli and @georgesmoot — — an extended analysis of the reionization history based on recent cosmological and astrophysical data.""]",21,07,261
183,216,1276889796863045632,1011585253423665152,Marija Slavkovik,We (@guribye + tweeterless Than and Oda) studied cookie consent notices of news outlets. We looked for dark patterns & found A PLENTY. Regulating consent without regulating interface specs is pointless.. paper accepted @nordichi2020 can read early version @Cristianapt @guribye @nordichi2020 @nataliabielova Pls send us comments and your work that can be cited. This is such a rapid field we need to avoid duplicating effort. We are looking at automatic detection of dark patterns next and would appreciate data. @Cristianapt @guribye @nordichi2020 @nataliabielova @CelestinMatte good idea! just drop me an email and we can set it up (it is on the arxiv paper) @Cristianapt @guribye @nordichi2020 @nataliabielova I am so sh** with names. Of course we read your paper :) It is very nice work. Looking forward to learning more. @soheilhuman @guribye @nordichi2020 Thanks for this!,https://arxiv.org/abs/2006.13985,"To ensure that users of online services understand what data are collected and how they are used in algorithmic decision-making, the European Union's General Data Protection Regulation (GDPR) specifies informed consent as a minimal requirement. For online news outlets consent is commonly elicited through interface design elements in the form of a pop-up. We have manually analyzed 300 data collection consent notices from news outlets that are built to ensure compliance with GDPR. The analysis uncovered a variety of strategies or dark patterns that circumvent the intent of GDPR by design. We further study the presence and variety of these dark patterns in these ""cookie consents"" and use our observations to specify the concept of dark pattern in the context of consent elicitation. ","Circumvention by design -- dark patterns in cookie consents for online
news outlets",5,"['We (@guribye + tweeterless Than and Oda) studied cookie consent notices of news outlets. We looked for dark patterns & found A PLENTY. Regulating consent without regulating interface specs is pointless.. paper accepted @nordichi2020 can read early version ', '@Cristianapt @guribye @nordichi2020 @nataliabielova Pls send us comments and your work that can be cited. This is such a rapid field we need to avoid duplicating effort. We are looking at automatic detection of dark patterns next and would appreciate data.', '@Cristianapt @guribye @nordichi2020 @nataliabielova @CelestinMatte good idea! just drop me an email and we can set it up (it is on the arxiv paper)', '@Cristianapt @guribye @nordichi2020 @nataliabielova I am so sh** with names. Of course we read your paper :) It is very nice work. Looking forward to learning more.', '@soheilhuman @guribye @nordichi2020 Thanks for this!']",20,06,892
184,1,1367648813364649989,2492016278,Adrian Raftery,"New paper on arXiv: ""Estimating SARS-CoV-2 Infections from Deaths, Confirmed Cases, Tests, and Random Surveys"" w Nick Irons: 1/4 Most data sources for estimating Covid incidence & prevalence are biased or delayed: cases underestimate, positivity rate overestimates, deaths data are delayed, hospitalizations aren't comparable between states. Random testing surveys are the least biased, but rare & delayed 2/4 We propose a Bayesian estimation method for all states that bias-corrects and combines number of cases, test positivity rates & deaths, and anchors them with the few random testing surveys that have been done, in Indiana & Ohio. 3/4 Results for USA to Feb 25: - Cumulative undercount factor: 2.2. - Initial undercount (to April 15, 2020): 11.1. - Cumulative incidence 18.4% of population (61M) - current reproductive rate R: 0.87 - decline of infections since peak of cases on Jan 8: 70% 4/4",https://arxiv.org/abs/2102.10741,"There are many sources of data giving information about the number of SARS-CoV-2 infections in the population, but all have major drawbacks, including biases and delayed reporting. For example, the number of confirmed cases largely underestimates the number of infections, deaths lag infections substantially, while test positivity rates tend to greatly overestimate prevalence. Representative random prevalence surveys, the only putatively unbiased source, are sparse in time and space, and the results come with a big delay. Reliable estimates of population prevalence are necessary for understanding the spread of the virus and the effects of mitigation strategies. We develop a simple Bayesian framework to estimate viral prevalence by combining the main available data sources. It is based on a discrete-time SIR model with time-varying reproductive parameter. Our model includes likelihood components that incorporate data of deaths due to the virus, confirmed cases, and the number of tests administered on each day. We anchor our inference with data from random sample testing surveys in Indiana and Ohio. We use the results from these two states to calibrate the model on positive test counts and proceed to estimate the infection fatality rate and the number of new infections on each day in each state in the USA. We estimate the extent to which reported COVID cases have underestimated true infection counts, which was large, especially in the first months of the pandemic. We explore the implications of our results for progress towards herd immunity. ","Estimating SARS-CoV-2 Infections from Deaths, Confirmed Cases, Tests,
and Random Surveys",4,"['New paper on arXiv: ""Estimating SARS-CoV-2 Infections from Deaths, Confirmed Cases, Tests, and Random Surveys"" w Nick Irons: 1/4 ', ""Most data sources for estimating Covid incidence & prevalence are biased or delayed: cases underestimate, positivity rate overestimates, deaths data are delayed, hospitalizations aren't comparable between states. Random testing surveys are the least biased, but rare & delayed 2/4"", 'We propose a Bayesian estimation method for all states that bias-corrects and combines number of cases, test positivity rates & deaths, and anchors them with the few random testing surveys that have been done, in Indiana & Ohio. 3/4', 'Results for USA to Feb 25: \n- Cumulative undercount factor: 2.2. \n- Initial undercount (to April 15, 2020): 11.1.\n- Cumulative incidence 18.4% of population (61M)\n- current reproductive rate R: 0.87\n- decline of infections since peak of cases on Jan 8: 70%\n4/4']",21,02,915
185,144,1360197339953045510,4018882938,Marcus Lower,New paper alert! The census paper describing the relativistic binary program on the #MeerKAT @SKA_telescope precursor: was (finally) accepted! Includes a sneak preview of one of the most exciting experiments I've been involved in to date. ,https://arxiv.org/abs/2102.05160,"We describe the ongoing Relativistic Binary programme (RelBin), a part of the MeerTime large survey project with the MeerKAT radio telescope. RelBin is primarily focused on observations of relativistic effects in binary pulsars to enable measurements of neutron star masses and tests of theories of gravity. We selected 25 pulsars as an initial high priority list of targets based on their characteristics and observational history with other telescopes. In this paper, we provide an outline of the programme, present polarisation calibrated pulse profiles for all selected pulsars as a reference catalogue along with updated dispersion measures. We report Faraday rotation measures for 24 pulsars, twelve of which have been measured for the first time. More than a third of our selected pulsars show a flat position angle swing confirming earlier observations. We demonstrate the ability of the Rotating Vector Model (RVM), fitted here to seven binary pulsars, including the Double Pulsar (PSR J0737$-$3039A), to obtain information about the orbital inclination angle. We present a high time resolution light curve of the eclipse of PSR J0737$-$3039A by the companion's magnetosphere, a high-phase resolution position angle swing for PSR J1141$-$6545, an improved detection of the Shapiro delay of PSR J1811$-$2405, and pulse scattering measurements for PSRs J1227$-$6208, J1757$-$1854, and J1811$-$1736. Finally, we demonstrate that timing observations with MeerKAT improve on existing data sets by a factor of, typically, 2-3, sometimes by an order of magnitude. ","The Relativistic Binary Programme on MeerKAT: Science objectives and
first results",1,"[""New paper alert! The census paper describing the relativistic binary program on the #MeerKAT @SKA_telescope precursor: was (finally) accepted! \n\nIncludes a sneak preview of one of the most exciting experiments I've been involved in to date. ""]",21,02,253
186,119,1224877393887629312,326843207,Yuta Notsu,"Our new paper ""Temporal Evolution of Spatially-Resolved Individual Star Spots on a Planet-Hosting Solar-type Star: Kepler 17"" is accepted to ApJ and now in arXiv !! Authors: @KosOlo8, @jradavenport, @brettmor @astronomy_stars , and many ! @KosOlo8 @jradavenport @brettmor Using exoplanet transits and rotational modulations of Kepler-17,we investigated number of spots, spot locations, and the temporal evolution. Although the temporal evolution derived from the rotational modulation differs from those of in-transit spots to a certain degree, ..... @KosOlo8 @jradavenport @brettmor .... the emergence/decay rates of in-transit spots are within an order of magnitude of those derived for sunspots as well as our previous research based only on rotational modulations. This supports a hypothesis that .... @KosOlo8 @jradavenport @brettmor .... that the emergence/decay of sunspots and extremely-large star spots on solar-type stars occur through the same underlying processes. Also, we can say large star spots having a potential to produce superflares are found to survive more than 100 days (up to 1 year)...😲 ",https://arxiv.org/abs/2002.01086,"Star spot evolution is visible evidence of the emergence/decay of the magnetic field on stellar surface, and it is therefore important for the understanding of the underlying stellar dynamo and consequential stellar flares. In this paper, we report the temporal evolution of individual star spot area on the hot-Jupiter-hosting active solar-type star Kepler 17 whose transits occur every 1.5 days. The spot longitude and area evolution are estimated (1) from the stellar rotational modulations of Kepler data and (2) from the brightness enhancements during the exoplanet transits caused by existence of large star spots. As a result of the comparison, number of spots, spot locations, and the temporal evolution derived from the rotational modulations is largely different from those of in-transit spots. We confirm that although only two light curve minima appear per rotation, there are clearly many spots present on the star. We find that the observed differential intensity changes are sometimes consistent with the spot pattern detected by transits, but they sometimes do not match with each other. Although the temporal evolution derived from the rotational modulation differs from those of in-transit spots to a certain degree, the emergence/decay rates of in-transit spots are within an order of magnitude of those derived for sunspots as well as our previous research based only on rotational modulations. This supports a hypothesis that the emergence/decay of sunspots and extremely-large star spots on solar-type stars occur through the same underlying processes. ","Temporal Evolution of Spatially-Resolved Individual Star Spots on a
Planet-Hosting Solar-type Star: Kepler 17",4,"['Our new paper ""Temporal Evolution of Spatially-Resolved Individual Star Spots on a Planet-Hosting Solar-type Star: Kepler 17"" is accepted to ApJ and now in arXiv !! \n\nAuthors: @KosOlo8, @jradavenport, @brettmor @astronomy_stars , and many !', '@KosOlo8 @jradavenport @brettmor Using exoplanet transits and rotational modulations of Kepler-17,we investigated number of spots, spot locations, and the temporal evolution. Although the temporal evolution derived from the rotational modulation differs from those of in-transit spots to a certain degree, ..... https://t.co/p0kcZKulA8', '@KosOlo8 @jradavenport @brettmor .... the emergence/decay rates of in-transit spots are within an order of magnitude of those derived for sunspots as well as our previous research based only on rotational modulations. This supports a hypothesis that .... https://t.co/vyS3JAIkvu', '@KosOlo8 @jradavenport @brettmor .... that the emergence/decay of sunspots and extremely-large star spots on solar-type stars occur through the same underlying processes.\n\nAlso, we can say large star spots having a potential to produce superflares are found to survive more than\n100 days (up to 1 year)...😲 https://t.co/LQ4TMDwgWr']",20,02,1140
187,154,1291565580827336705,2577596593,Chelsea Finn,"Want your robot to explore intelligently? We study how to learn to explore & introduce a *efficient* meta-learning method that can lead to optimal exploration. Paper: w Evan Liu, Raghunathan, Liang @StanfordAILab Thread👇🏼(1/5) Prior meta-RL methods either (a) optimize exploration & execution end-to-end w.r.t. reward (e.g. RL^2, VariBAD), or (b) leverage principled but suboptimal strategies (e.g. PEARL). The former is particularly hard, as it leads to a chicken-and-egg optimization problem. (2/5) Turns out you can break this coupling by training a task-conditioned execution policy, and training the exploration policy to recover task-relevant information. This is consistent with the end-to-end objective *and* substantially more efficient! (3/5) With this approach, DREAM can learn an exploration strategy that navigates a 3D environment from pixels to go “read” a sign that carries info about the task. (and then execute the task using that info) (4/5) In comparison with state-of-the-art meta-RL methods, this approach can better scale to challenging meta-RL problems such as 3D visual object navigation. See the paper for more experiments & theoretical analysis. (5/5) ",https://arxiv.org/abs/2008.02790,"The goal of meta-reinforcement learning (meta-RL) is to build agents that can quickly learn new tasks by leveraging prior experience on related tasks. Learning a new task often requires both exploring to gather task-relevant information and exploiting this information to solve the task. In principle, optimal exploration and exploitation can be learned end-to-end by simply maximizing task performance. However, such meta-RL approaches struggle with local optima due to a chicken-and-egg problem: learning to explore requires good exploitation to gauge the exploration's utility, but learning to exploit requires information gathered via exploration. Optimizing separate objectives for exploration and exploitation can avoid this problem, but prior meta-RL exploration objectives yield suboptimal policies that gather information irrelevant to the task. We alleviate both concerns by constructing an exploitation objective that automatically identifies task-relevant information and an exploration objective to recover only this information. This avoids local optima in end-to-end training, without sacrificing optimal exploration. Empirically, DREAM substantially outperforms existing approaches on complex meta-RL problems, such as sparse-reward 3D visual navigation. Videos of DREAM: this https URL ","Decoupling Exploration and Exploitation for Meta-Reinforcement Learning
without Sacrifices",5,"['Want your robot to explore intelligently? We study how to learn to explore & introduce a *efficient* meta-learning method that can lead to optimal exploration.\n\nPaper: \nw Evan Liu, Raghunathan, Liang @StanfordAILab\n\nThread👇🏼(1/5)\n', 'Prior meta-RL methods either (a) optimize exploration & execution end-to-end w.r.t. reward (e.g. RL^2, VariBAD), or (b) leverage principled but suboptimal strategies (e.g. PEARL).\n\nThe former is particularly hard, as it leads to a chicken-and-egg optimization problem.\n(2/5) https://t.co/0fge7AGySO', 'Turns out you can break this coupling by training a task-conditioned execution policy, and training the exploration policy to recover task-relevant information. \n\nThis is consistent with the end-to-end objective *and* substantially more efficient!\n(3/5) https://t.co/b48gB1VgPW', 'With this approach, DREAM can learn an exploration strategy that navigates a 3D environment from pixels to go “read” a sign that carries info about the task. (and then execute the task using that info)\n(4/5) https://t.co/4teggu5vXP', 'In comparison with state-of-the-art meta-RL methods, this approach can better scale to challenging meta-RL problems such as 3D visual object navigation.\n\nSee the paper for more experiments & theoretical analysis.\n(5/5) https://t.co/HeC6C3JSz2']",20,08,1221
188,84,1285380994166595587,939589825602228224,Leah Jenks,New paper on the arXiv this evening! Looking at gravitational wave and binary pulsar constraints on noncommutative gravity We find that GW constraints are an order of magnitude more stringent than those from the pulsar system and that the time scale of the normalized NC tensor is constrained to be of order unity With my wonderful advisor Stephon Alexander and awesome collaborator Kent Yagi!,https://arxiv.org/abs/2007.09714,"Noncommutative gravity is a natural method of quantizing spacetime by promoting the spacetime coordinates themselves to operators which do not commute. This approach is motivated, for example, from a quantum gravity perspective, among others. Noncommutative gravity has been tested against the binary black hole merger event GW150914. Here, we extend and improve upon such a previous analysis by (i) relaxing an assumption made on the preferred direction due to noncommutativity, (ii) using posterior samples produced by the LIGO/Virgo Collaborations, (iii) consider other gravitational wave events, namely GW151226, GW170608, GW170814 and GW170817, and (iv) consider binary pulsar observations. Using Kepler's law that contains the noncommutative effect at second post-Newtonian order, we derive corrections to the gravitational waveform phase and the pericenter precession. Using the gravitational wave and double pulsar binary observations, we find bounds on a space-time noncommutative tensor $\theta^{0i}$ in terms of the preferred frame direction with respect to the orientation of each binary. We find that the gravitational wave bounds are stronger than the binary pulsar one by an order of magnitude and the noncommutative tensor normalized by the Planck length and time is constrained to be of order unity. ","Probing Noncommutative Gravity with Gravitational Wave and Binary Pulsar
Observations",3,"['New paper on the arXiv this evening! Looking at gravitational wave and binary pulsar constraints on noncommutative gravity ', 'We find that GW constraints are an order of magnitude more stringent than those from the pulsar system and that the time scale of the normalized NC tensor is constrained to be of order unity', 'With my wonderful advisor Stephon Alexander and awesome collaborator Kent Yagi!']",20,07,409
189,7,1279239019541532672,384900803,Shantanu Basu,New paper on gravitational collapse and star formation with misaligned magnetic and rotation axes. Misalignment leads to larger disks and weaker outflows and complex magnetic field patterns. @westernuPhysAst @KyushuUniv_EN #KagoshimaUniversity ,https://arxiv.org/abs/2006.13233,"The formation of circumstellar disks is investigated using three-dimensional resistive magnetohydrodynamic simulations, in which the initial prestellar cloud has a misaligned rotation axis with respect to the magnetic field. We examine the effects of (i) the initial angle difference between the global magnetic field and the cloud rotation axis ($\theta_0$) and (ii) the ratio of the thermal to gravitational energy ($\alpha_0$). We study $16$ models in total and calculate the cloud evolution until $\sim \! 5000$ yr after protostar formation. Our simulation results indicate that an initial non-zero $\theta_0$ ($> 0$) promotes the disk formation but tends to suppress the outflow driving, for models that are moderately gravitationally unstable, $\alpha_0 \lesssim 1$. In these models, a large-sized rotationally-supported disk forms and a weak outflow appears, in contrast to a smaller disk and strong outflow in the aligned case ($\theta_0 = 0$). Furthermore, we find that when the initial cloud is highly unstable with small $\alpha_0$, the initial angle difference $\theta_0$ does not significantly affect the disk formation and outflow driving. ","The Effect of Misalignment between Rotation Axis and Magnetic Field on
Circumstellar Disk",1,['New paper on gravitational collapse and star formation with misaligned magnetic and rotation axes. Misalignment leads to larger disks and weaker outflows and complex magnetic field patterns. @westernuPhysAst @KyushuUniv_EN #KagoshimaUniversity '],20,06,257
190,41,1041778144020242433,45105022,Riccardo Sapienza,"New paper! Nanoscale design of the local density of optical states, or how to make an emitter shine 800 times more, from the creative mind of @sandromignuzzi, with Stefano Vezzoli & Stefan Maier @ImperialPhysics and Bill Barnes & Simon Horsley @UniofExeter ",https://arxiv.org/abs/1809.05514,"We propose a design concept for tailoring the local density of optical states (LDOS) in dielectric nanostructures, based on the phase distribution of the scattered optical fields induced by point-like emitters. First we demonstrate that the LDOS can be expressed in terms of a coherent summation of constructive and destructive contributions. By using an iterative approach, dielectric nanostructures can be designed to effectively remove the destructive terms. In this way dielectric Mie resonators, featuring low LDOS for electric dipoles, can be reshaped to enable enhancements of three orders of magnitude. To demonstrate the generality of the method, we also design nanocavities that enhance the radiated power of a circular dipole, a quadrupole and an arbitrary collection of coherent dipoles. Our concept provides a powerful tool for high-performance dielectric resonators, and affords fundamental insights into light-matter coupling at the nanoscale. ",Nanoscale design of the local density of optical states,1,"['New paper! Nanoscale design of the local density of optical states, or how to make an emitter shine 800 times more, from the creative mind of @sandromignuzzi, with Stefano Vezzoli & Stefan Maier @ImperialPhysics and Bill Barnes & Simon Horsley @UniofExeter ']",18,09,270
191,197,1301869927410794496,882307001451069440,Francisco J. Mercado,"SO SO excited bc it's my first paper day!! We use a suite of isolated dwarf galaxy (FIRE) simulations to study stellar metallicity gradients and their origins: We predict that dwarf galaxies follow a gradient-strength-galaxy-age relationship such that galaxies with older stellar populations tend to have stronger (more negative) stellar metallicity gradients. We also use published results for 10 existing Local Group dwarf galaxies to show that they, too, follow a VERY similar gradient-strength-galaxy-age relationship. Interestingly, most of these observed galaxies are satellites of the MW while our simulated galaxies are ISOLATED systems... Yet they follow this very similar relationship! This suggests that the environment of a dwarf galaxy likely plays a secondary role in shaping stellar metallicity gradients. Check the paper out to learn about what drives these gradients! I dedicate this paper to José A. Flores Velázquez and Perla Maritza Mercado. I wish they were both still here to celebrate with me but I know they're both proud ❤️❤️ It's important to remember that none of this would be possible w/o support from the community around me. My thanks go out to my co-authors especially @jbprime and @jorgito__moreno. And thanks to @DarthLazar for always being willing to answer my endless onslaught of questions! @j_tharindu thanks Tharindu! @astrochicana Muchísimas gracias!! 😊 @astroarianna 💜💜💜 @8minutesold Thank you!! 😃 @ynxmonica Thanks Monica!! 😃 @Naj_Astro Thanks Najmeh!!",https://arxiv.org/abs/2009.01241,"We explore the origin of stellar metallicity gradients in simulated and observed dwarf galaxies. We use FIRE-2 cosmological baryonic zoom-in simulations of 26 isolated galaxies as well as existing observational data for 10 Local Group dwarf galaxies. Our simulated galaxies have stellar masses between $10^{5.5}$ and $10^{8.6} \msun$. Whilst gas-phase metallicty gradients are generally weak in our simulated galaxies, we find that stellar metallicity gradients are common, with central regions tending to be more metal-rich than the outer parts. The strength of the gradient is correlated with galaxy-wide median stellar age, such that galaxies with younger stellar populations have flatter gradients. Stellar metallicty gradients are set by two competing processes: (1) the steady ""puffing"" of old, metal-poor stars by feedback-driven potential fluctuations, and (2) the accretion of extended, metal-rich gas at late times, which fuels late-time metal-rich star formation. If recent star formation dominates, then extended, metal-rich star formation washes out pre-existing gradients from the ""puffing"" process. We use published results from ten Local Group dwarf galaxies to show that a similar relationship between age and stellar metallicity-gradient strength exists among real dwarfs. This suggests that observed stellar metallicity gradients may be driven largely by the baryon/feedback cycle rather than by external environmental effects. ","A Relationship Between Stellar Metallicity Gradients and Galaxy Age in
Dwarf Galaxies",13,"[""SO SO excited bc it's my first paper day!! We use a suite of isolated dwarf galaxy (FIRE) simulations to study stellar metallicity gradients and their origins: "", 'We predict that dwarf galaxies follow a gradient-strength-galaxy-age relationship such that galaxies with older stellar populations tend to have stronger (more negative) stellar metallicity gradients. https://t.co/rtvwu3QmxY', 'We also use published results for 10 existing Local Group dwarf galaxies to show that they, too, follow a VERY similar gradient-strength-galaxy-age relationship. https://t.co/oRmWjqyGCR', 'Interestingly, most of these observed galaxies are satellites of the MW while our simulated galaxies are ISOLATED systems... Yet they follow this very similar relationship!', 'This suggests that the environment of a dwarf galaxy likely plays a secondary role in shaping stellar metallicity gradients. Check the paper out to learn about what drives these gradients! https://t.co/2wLirxBaNF', ""I dedicate this paper to José A. Flores Velázquez and Perla Maritza Mercado. I wish they were both still here to celebrate with me but I know they're both proud ❤️❤️"", ""It's important to remember that none of this would be possible w/o support from the community around me. My thanks go out to my co-authors especially @jbprime and @jorgito__moreno. And thanks to @DarthLazar for always being willing to answer my endless onslaught of questions!"", '@j_tharindu thanks Tharindu!', '@astrochicana Muchísimas gracias!! 😊', '@astroarianna 💜💜💜', '@8minutesold Thank you!! 😃', '@ynxmonica Thanks Monica!! 😃', '@Naj_Astro Thanks Najmeh!!']",20,09,1529
192,64,1230103939506483205,127058544,Russell Smith,"New paper by @phdwcollier on searching for nearby lenses in new and archival data from MUSE: @phdwcollier There's a couple of new z<0.05 strong-lensing *clusters* (with lensing mass dominated by DM not stars). @phdwcollier Among the field galaxies, the big find (J0403-0239) was summarised in Will's previous paper. The new paper has some nice HST follow-up imaging and blue spectroscopy to address the age-vs-IMF degeneracy. There are no more multiply-imaged sources among the sample, but quite a number of close-projected singly-imaged cases, which can add some information on the distribution of IMF variation. One of these has *three* singly-imaged background sources within 5 arcsec, at different redshifts, any which could have had detectable counter-images if the stellar M/L were high enough (i.e. IMF heavy enough). ",https://arxiv.org/abs/2002.07191v1,"Low-redshift strong-lensing galaxies can provide robust measurements of the stellar mass-to-light ratios in early-type galaxies (ETG), and hence constrain variations in the stellar initial mass function (IMF). At present, only a few such systems are known. Here, we report the first results from a blind search for gravitationally-lensed emission line sources behind 52 massive $z$ $<$ 0.07 ETGs with MUSE integral field spectroscopy. For 16 galaxies, new observations were acquired, whilst the other 36 were analysed from archival data. This project has previously yielded one confirmed galaxy-scale strong lens (J0403-0239) which we report in an earlier paper. J0403-0239 has since received follow-up observations, presented here, which indicate support for our earlier IMF results. Three cluster-scale, and hence dark-matter-dominated, lensing systems were also discovered (central galaxies of A4059, A2052 and AS555). For nine further galaxies, we detect a singly-imaged but closely-projected source within 6 arcsec (including one candidate with sources at three different redshifts); such cases can be exploited to derive upper limits on the IMF mass-excess factor, $\alpha$. Combining the new lens and new upper limits, with the previously-discovered systems, we infer an average $\langle \alpha \rangle$ = 1.06 $\pm$ 0.08 (marginalised over the intrinsic scatter), which is inconsistent with a Salpeter-like IMF ($\alpha$ = 1.55) at the 6$\sigma$ level. We test the detection threshold in these short-exposure MUSE observations with the injection and recovery of simulated sources, and predict that one in twenty-five observations is expected to yield a new strong-lens system. Our observational results are consistent with this expected yield. ",MNELLS: The MUSE Nearby Early-Type Galaxy Lens Locator Survey,5,"['New paper by @phdwcollier on searching for nearby lenses in new and archival data from MUSE: ', ""@phdwcollier There's a couple of new z<0.05 strong-lensing *clusters* (with lensing mass dominated by DM not stars). https://t.co/uNKFUpP7aq"", ""@phdwcollier Among the field galaxies, the big find (J0403-0239) was summarised in Will's previous paper. \n\nThe new paper has some nice HST follow-up imaging and blue spectroscopy to address the age-vs-IMF degeneracy. 
https://t.co/z7YMZSQYzx"", 'There are no more multiply-imaged sources among the sample, but quite a number of close-projected singly-imaged cases, which can add some information on the distribution of IMF variation.', 'One of these has *three* singly-imaged background sources within 5 arcsec, at different redshifts, any which could have had detectable counter-images if the stellar M/L were high enough (i.e. IMF heavy enough). https://t.co/uGmAWLZN9C']",20,02,857
193,3,991348190245982208,991338306481909760,Artem Sevastopolsky 🇺🇦,"Stack-U-Net: Refinement Network for Image Segmentation on the Example of Optic Disc and Cup Our new paper that shows that stacking m̶o̶r̶e̶ ̶l̶a̶y̶e̶r̶s̶ U-Net's in the refining manner can be very beneficial for segmentation, even with small datasets. ",https://arxiv.org/abs/1804.11294,"In this work, we propose a special cascade network for image segmentation, which is based on the U-Net networks as building blocks and the idea of the iterative refinement. The model was mainly applied to achieve higher recognition quality for the task of finding borders of the optic disc and cup, which are relevant to the presence of glaucoma. Compared to a single U-Net and the state-of-the-art methods for the investigated tasks, very high segmentation quality has been achieved without a need for increasing the volume of datasets. Our experiments include comparison with the best-known methods on publicly available databases DRIONS-DB, RIM-ONE v.3, DRISHTI-GS, and evaluation on a private data set collected in collaboration with University of California San Francisco Medical School. The analysis of the architecture details is presented, and it is argued that the model can be employed for a broad scope of image segmentation problems of similar nature. ","Stack-U-Net: Refinement Network for Image Segmentation on the Example of
Optic Disc and Cup",1,"[""Stack-U-Net: Refinement Network for Image Segmentation on the Example of Optic Disc and Cup\nOur new paper that shows that stacking m̶o̶r̶e̶ ̶l̶a̶y̶e̶r̶s̶ U-Net's in the refining manner can be very beneficial for segmentation, even with small datasets. \n ""]",18,04,265
194,3,1259857065843056641,2455538305,Seth J. Hill,"New version of paper with Fowler, Obradovich, @RemyLevin Extensive revisions in response to feedback. We now use the counties that never issued stay-at-home as explicit controls, rather than the variable-treatment-timing diff-in-diff before. Plots speak more than words. ",https://arxiv.org/abs/2004.06098,"Governments issue ""stay at home"" orders to reduce the spread of contagious diseases, but the magnitude of such orders' effectiveness is uncertain. In the United States these orders were not coordinated at the national level during the coronavirus disease 2019 (COVID-19) pandemic, which creates an opportunity to use spatial and temporal variation to measure the policies' effect with greater accuracy. Here, we combine data on the timing of stay-at-home orders with daily confirmed COVID-19 cases and fatalities at the county level in the United States. We estimate the effect of stay-at-home orders using a difference-in-differences design that accounts for unmeasured local variation in factors like health systems and demographics and for unmeasured temporal variation in factors like national mitigation actions and access to tests. Compared to counties that did not implement stay-at-home orders, the results show that the orders are associated with a 30.2 percent (11.0 to 45.2) reduction in weekly cases after one week, a 40.0 percent (23.4 to 53.0) reduction after two weeks, and a 48.6 percent (31.1 to 61.7) reduction after three weeks. Stay-at-home orders are also associated with a 59.8 percent (18.3 to 80.2) reduction in weekly fatalities after three weeks. These results suggest that stay-at-home orders reduced confirmed cases by 390,000 (170,000 to 680,000) and fatalities by 41,000 (27,000 to 59,000) within the first three weeks in localities where they were implemented. ","The effect of stay-at-home orders on COVID-19 cases and fatalities in
the United States",3,"['New version of paper with Fowler, Obradovich, @RemyLevin \n\nExtensive revisions in response to feedback. We now use the counties that never issued stay-at-home as explicit controls, rather than the variable-treatment-timing diff-in-diff before.', 'Plots speak more than words. https://t.co/UuOpvaY3dc', 'https://t.co/vZZjyCNTsv']",20,04,291
195,126,1302993041926619138,1175368802458120193,Andreas Sander,"In case you haven't visited arXiv today, here is my newest paper with @jorick73 about the nature of massive He star mass loss: As often in #astrophysics and in science, it provides quite some new insights, while at the same time being only the beginning.",https://arxiv.org/abs/2009.01849,"The mass-loss rates of massive helium stars are one of the major uncertainties in modern astrophysics. Regardless of whether they were stripped by a binary companion or managed to peel off their outer layers by themselves, the influence and final fate of helium stars -- in particular the resulting black hole mass -- highly depends on their wind mass loss as stripped-envelope objects. While empirical mass-loss constraints for massive helium stars have improved over the last decades, the resulting recipes are limited to metallicities with the observational ability to sufficiently resolve individual stars. Yet, theoretical efforts have been hampered by the complexity of Wolf-Rayet (WR) winds arising from the more massive helium stars. In an unprecedented effort, we calculate next-generation stellar atmosphere models resembling massive helium main sequence stars with Fe-bump driven winds up to $500\,M_\odot$ over a wide metallicity range between $2.0$ and $0.02\,Z_\odot$. We uncover a complex $\Gamma_\text{e}$-dependency of WR-type winds and their metallicity-dependent breakdown. The latter can be related to the onset of multiple scattering, requiring higher $L/M$-ratios at lower metallicity. Based on our findings, we derive the first ever theoretically-motivated mass-loss recipe for massive helium stars. We also provide estimates for LyC and He II ionizing fluxes, finding stripped helium stars to contribute considerably at low metallicity. In sharp contrast to OB-star winds, the mass loss for helium stars scales with the terminal velocity. While limited to the helium main sequence, our study marks a major step towards a better theoretical understanding of helium star evolution. ",On the nature of massive helium star winds and Wolf-Rayet-type mass loss,1,"[""In case you haven't visited arXiv today, here is my newest paper with @jorick73 about the nature of massive He star mass loss: \nAs often in #astrophysics and in science, it provides quite some new insights, while at the same time being only the beginning.""]",20,09,261
196,101,1415554652804947974,2377407248,Daniel Whiteson,"New paper! Measuring how well a smartphone camera can detect cosmic muons! Led by Jeff Swaney and Mike Mulhearn, with @cshimmin What? Your phone can see particles? When a muon passed through your phone camera, it frees up electrons, just like when a photon does. So the camera sees that pixel as on. If you cover the lens and put the phone in a muon beam, presto, you see tracks! We wanted to do much more: to turn the network of smartphones into a world-wide detector for cosmic particles. To do that, we needed to measure how often the phone sees or misses a particle. So we put some phones between two scintillators: And measured how often we spotted the muon in the phone. Along the way, we had to reverse engineer how the phone turns electrons into digitized values, so we could measure the pure response: This will help us understand how well a network of phones can act as a global detector () TLDR: smartphones are about 70-80% efficient at detecting muons! @y0b1byte Yes, if the flux is very high, but that's not a concern for cosmic rays. More of an issue is that the performance degrades if phone is kept at high temperature for too long. @Antony_Clements @SeamusBlackley @cshimmin Maybe! A lot of it is noise from badly-behaving pixels. We had to filter out the hot pixels to get a reliable muon signal.",https://arxiv.org/abs/2107.06332,"A measurement of the efficiency of CMOS sensors in smartphone cameras to cosmic ray muons is presented. A coincidence in external scintillators indicates the passage of a cosmic ray muon, allowing the measurement of the efficiency of the CMOS sensor. The observed flux is consistent with well-established values, and efficiencies are presented as a function of the number of photo-electrons collected from the CMOS silicon photodiode pixels. These efficiencies are vital to understanding the feasibility of large-scale smartphone networks operating as air-shower observatories. ",Measurement of Smartphone Sensor Efficiency to Cosmic Ray Muons,9,"['New paper!\n\nMeasuring how well a smartphone camera can detect cosmic muons!\n\n\n\nLed by Jeff Swaney and Mike Mulhearn, with @cshimmin', 'What? Your phone can see particles?\n\nWhen a muon passed through your phone camera, it frees up electrons, just like when a photon does. So the camera sees that pixel as on. If you cover the lens and put the phone in a muon beam, presto, you see tracks! https://t.co/l8AwyMsQlp', 'We wanted to do much more: to turn the network of smartphones into a world-wide detector for cosmic particles. https://t.co/SFtUZQwVdy', 'To do that, we needed to measure how often the phone sees or misses a particle. So we put some phones between two scintillators: https://t.co/W1FCNyvUu9', 'And measured how often we spotted the muon in the phone.\n\nAlong the way, we had to reverse engineer how the phone turns electrons into digitized values, so we could measure the pure response: https://t.co/cMOPXF6drO', 'This will help us understand how well a network of phones can act as a global detector (https://t.co/FWCchA1MX5)', 'TLDR: smartphones are about 70-80% efficient at detecting muons!', ""@y0b1byte Yes, if the flux is very high, but that's not a concern for cosmic rays. More of an issue is that the performance degrades if phone is kept at high temperature for too long."", '@Antony_Clements @SeamusBlackley @cshimmin Maybe! A lot of it is noise from badly-behaving pixels. We had to filter out the hot pixels to get a reliable muon signal.']",21,07,1356
197,35,1153836209908547585,131782092,Jeffrey Simpson,"New paper that I helped write on the arXiv today: ""The Southern Stellar Stream Spectroscopic Survey (S⁵): Overview, Target Selection, Data Reduction, Validation, and Early Science” We have been observing with the Anglo-Australian Telescope's AAOmega spectrograph stars in recently identified streams within the footprint of the Dark Energy Survey. So far we have mapped 12 streams, observed about 35000 stars, 3000 nearby dwarf galaxies, and 1700 quasars! Our website can be found at: (I made the website thanks to a template from @templatedco) We also have a second paper today from Nora Ship that measures the proper motions of these streams We also have an upcoming paper on a █████ star that is ███████████ and with which we were able to ███████!!",https://arxiv.org/abs/1907.09481,"We introduce the Southern Stellar Stream Spectroscopy Survey (${S}^5$), an on-going program to map the kinematics and chemistry of stellar streams in the Southern Hemisphere. The initial focus of ${S}^5$ has been spectroscopic observations of recently identified streams within the footprint of the Dark Energy Survey (DES), with the eventual goal of surveying streams across the entire southern sky. Stellar streams are composed of material that has been tidally striped from dwarf galaxies and globular clusters and hence are excellent dynamical probes of the gravitational potential of the Milky Way, as well as providing a detailed snapshot of its accretion history. Observing with the 3.9-m Anglo-Australian Telescope's 2-degree-Field fibre positioner and AAOmega spectrograph, and combining the precise photometry of DES DR1 with the superb proper motions from $Gaia$ DR2, allows us to conduct an efficient spectroscopic survey to map these stellar streams. So far ${S}^5$ has mapped 9 DES streams and 3 streams outside of DES; the former are the first spectroscopic observations of these recently discovered streams. In addition to the stream survey, we use spare fibres to undertake a Milky Way halo survey and a low-redshift galaxy survey. This paper presents an overview of the ${S}^5$ program, describing the scientific motivation for the survey, target selection, observation strategy, data reduction and survey validation. Finally, we describe early science results on stellar streams and Milky Way halo stars drawn from the survey. Updates on ${S}^5$, including future public data release, can be found at \url{this http URL}. ","The Southern Stellar Stream Spectroscopic Survey (${S}^5$): Overview,
Target Selection, Data Reduction, Validation, and Early Science",5,"['New paper that I helped write on the arXiv today:\n\n""The Southern Stellar Stream Spectroscopic Survey (S⁵): Overview, Target Selection, Data Reduction, Validation, and Early Science”\n\n ', ""We have been observing with the Anglo-Australian Telescope's AAOmega spectrograph stars in recently identified streams within the footprint of the Dark Energy Survey. So far we have mapped 12 streams, observed about 35000 stars, 3000 nearby dwarf galaxies, and 1700 quasars!"", 'Our website can be found at:\n\nhttps://t.co/dPme3TOqRP\n\n(I made the website thanks to a template from @templatedco)', 'We also have a second paper today from Nora Ship that measures the proper motions of these streams\n\nhttps://t.co/D4nJUCbIls', 'We also have an upcoming paper on a █████ star that is ███████████ and with which we were able to ███████!!']",19,07,779
198,6,1433789954375753731,72781449,Nikos Aletras,New #EMNLP2021 paper w/ @soon1otis: Simple and neat method for improving explanation faithfulness of transformer models for text clf. The idea is to bring close the attention distribution to salient information (computed w/ TextRank) during training 👇 ,https://arxiv.org/abs/2108.13759,"Pretrained transformer-based models such as BERT have demonstrated state-of-the-art predictive performance when adapted into a range of natural language processing tasks. An open problem is how to improve the faithfulness of explanations (rationales) for the predictions of these models. In this paper, we hypothesize that salient information extracted a priori from the training data can complement the task-specific information learned by the model during fine-tuning on a downstream task. In this way, we aim to help BERT not to forget assigning importance to informative input tokens when making predictions by proposing SaLoss; an auxiliary loss function for guiding the multi-head attention mechanism during training to be close to salient information extracted a priori using TextRank. Experiments for explanation faithfulness across five datasets, show that models trained with SaLoss consistently provide more faithful explanations across four different feature attribution methods compared to vanilla BERT. Using the rationales extracted from vanilla BERT and SaLoss models to train inherently faithful classifiers, we further show that the latter result in higher predictive performance in downstream tasks. ","Enjoy the Salience: Towards Better Transformer-based Faithful
Explanations with Word Salience",1,['New #EMNLP2021 paper w/ @soon1otis: \n\nSimple and neat method for improving explanation faithfulness of transformer models for text clf. The idea is to bring close the attention distribution to salient information (computed w/ TextRank) during training 👇 '],21,08,265
199,126,1356682139127877634,1152296594,Swabha Swayamdipta,"Can we rid language representations of pernicious social (racial) biases, in a hate speech detection setting? Not so easily ☹️ Investing in better data collection is probably a better route. Check out our new work at EACL to learn more 👇 Paper: ",https://arxiv.org/abs/2102.00086,"Biased associations have been a challenge in the development of classifiers for detecting toxic language, hindering both fairness and accuracy. As potential solutions, we investigate recently introduced debiasing methods for text classification datasets and models, as applied to toxic language detection. Our focus is on lexical (e.g., swear words, slurs, identity mentions) and dialectal markers (specifically African American English). Our comprehensive experiments establish that existing methods are limited in their ability to prevent biased behavior in current toxicity detectors. We then propose an automatic, dialect-aware data correction method, as a proof-of-concept. Despite the use of synthetic labels, this method reduces dialectal associations with toxicity. Overall, our findings show that debiasing a model trained on biased toxic language data is not as effective as simply relabeling the data to remove existing biases. ",Challenges in Automated Debiasing for Toxic Language Detection,1,"['Can we rid language representations of pernicious social (racial) biases, in a hate speech detection setting? Not so easily ☹️\n\nInvesting in better data collection is probably a better route. Check out our new work at EACL to learn more 👇\n\nPaper: ']",21,02,258
200,162,1400877275357270016,54910963,Clara Vania,"Many NLU datasets have been created to evaluate various aspects of language, but which datasets are still effective to measure future progress? Check out our new paper, to appear at #acl2021nlp #NLProc (1/8) Taking inspiration from psychometric studies which often use Item Response Theory (IRT) to evaluate test items in educational assessment, we apply it to evaluate test examples from 29 English datasets. (2/8) We use predictions from 18 Transformer-based models with varying degrees of abilities, and estimate how difficult and discriminative a test example is relative to other examples. (3/8) We introduce a new metric called Locally Estimated Headroom (LEH) to estimate how much a dataset is still useful to measure near-future progress. We find that Quoref, HellaSwag, and MC-TACO are still effective, while SNLI, MNLI, and BoolQ seem to be saturated. (4/8) We also find that span-based QA is the most effective task format to discriminate between strong and weak models. (5/8) However, datasets that contain many discriminative examples do not always have examples that are the most difficult. SQuAD2.0, QuAIL, and ANLI appear to have many examples with the highest difficulty levels. (6/8) Please see the paper for more details. We hope this work can give insights to future dataset creation and model development in NLP, and we argue that this evaluation should be done periodically over time. (7/8) Joint work with amazing co-authors: @phu_pmh, @WillHuang93, Dhara Mungra, @yzpang97, @zhansheng, @liu_haokun, @kchonyc, and @sleepinyourhat. (8/8)",https://arxiv.org/abs/2106.00840,"Recent years have seen numerous NLP datasets introduced to evaluate the performance of fine-tuned models on natural language understanding tasks. Recent results from large pretrained models, though, show that many of these datasets are largely saturated and unlikely to be able to detect further progress. What kind of datasets are still effective at discriminating among strong models, and what kind of datasets should we expect to be able to detect future improvements? To measure this uniformly across datasets, we draw on Item Response Theory and evaluate 29 datasets using predictions from 18 pretrained Transformer models on individual test examples. We find that Quoref, HellaSwag, and MC-TACO are best suited for distinguishing among state-of-the-art models, while SNLI, MNLI, and CommitmentBank seem to be saturated for current strong models. We also observe span selection task format, which is used for QA datasets like QAMR or SQuAD2.0, is effective in differentiating between strong and weak models. ",Comparing Test Sets with Item Response Theory,8,"['Many NLU datasets have been created to evaluate various aspects of language, but which datasets are still effective to measure future progress? Check out our new paper, to appear at #acl2021nlp #NLProc (1/8) ', 'Taking inspiration from psychometric studies which often use Item Response Theory (IRT) to evaluate test items in educational assessment, we apply it to evaluate test examples from 29 English datasets. (2/8)', 'We use predictions from 18 Transformer-based models with varying degrees of abilities, and estimate how difficult and discriminative a test example is relative to other examples. (3/8)', 'We introduce a new metric called Locally Estimated Headroom (LEH) to estimate how much a dataset is still useful to measure near-future progress. 
We find that Quoref, HellaSwag, and MC-TACO are still effective, while SNLI, MNLI, and BoolQ seem to be saturated. (4/8) https://t.co/N0fw6xF6d7', 'We also find that span-based QA is the most effective task format to discriminate between strong and weak models. (5/8)', 'However, datasets that contain many discriminative examples do not always have examples that are the most difficult. SQuAD2.0, QuAIL, and ANLI appear to have many examples with the highest difficulty levels. (6/8) https://t.co/l1E9Qgfktz', 'Please see the paper for more details. We hope this work can give insights to future dataset creation and model development in NLP, and we argue that this evaluation should be done periodically over time. (7/8)', 'Joint work with amazing co-authors: @phu_pmh, @WillHuang93, Dhara Mungra, @yzpang97, @zhansheng, @liu_haokun, @kchonyc, and @sleepinyourhat. (8/8)']",21,06,1587
201,15,1100867398683504640,700704725826572290,"Joe Guinness, valued customer",New paper on arxiv! An alternative to the derived motion winds algorithm for estimating winds from geostationary satellite images. This was part of Indranil Sahoo's thesis. He's just accepted a faculty position at Virginia Commonwealth University.,https://arxiv.org/abs/1902.09653,"Geostationary satellites collect high-resolution weather data comprising a series of images which can be used to estimate wind speed and direction at different altitudes. The Derived Motion Winds (DMW) Algorithm is commonly used to process these data and estimate atmospheric winds by tracking features in images taken by the GOES-R series of the NOAA geostationary meteorological satellites. However, the wind estimates from the DMW Algorithm are sparse and do not come with uncertainty measures. This motivates us to statistically model wind motions as a spatial process drifting in time. We propose a covariance function that depends on spatial and temporal lags and a drift parameter to capture the wind speed and wind direction. We estimate the parameters by local maximum likelihood. Our method allows us to compute standard errors of the estimates, enabling spatial smoothing of the estimates using a Gaussian kernel weighted by the inverses of the estimated variances. We conduct extensive simulation studies to determine the situations where our method performs well. The proposed method is applied to the GOES-15 brightness temperature data over Colorado and reduces prediction error of brightness temperature compared to the DMW Algorithm. ","Estimating Atmospheric Motion Winds from Satellite Image Data using
Space-time Drift Models",2,"['New paper on arxiv! An alternative to the derived motion winds algorithm for estimating winds from geostationary satellite images.\n', ""This was part of Indranil Sahoo's thesis. He's just accepted a faculty position at Virginia Commonwealth University.""]",19,02,254
202,17,1311138912274993152,2541941466,Alba Cervera-Lierta,"New paper out! We present the Meta-VQE, an algorithm that learns the ground state energy profile of a parametrized Hamiltonian. Check it out 👇 @JakobKottmann @A_Aspuru_Guzik #matterlab @chemuoft @UofTCompSci @VectorInst @CIFAR_News I will publish a thread about it in a few hours 😃 @gpassosgomes @JakobKottmann @A_Aspuru_Guzik @chemuoft @UofTCompSci @VectorInst @CIFAR_News Thanks! Great week for #matterlab! 😃",https://arxiv.org/abs/2009.13545,"We present the meta-VQE, an algorithm capable to learn the ground state energy profile of a parametrized Hamiltonian. By training the meta-VQE with a few data points, it delivers an initial circuit parametrization that can be used to compute the ground state energy of any parametrization of the Hamiltonian within a certain trust region. We test this algorithm with a XXZ spin chain, an electronic H$_{4}$ Hamiltonian and a single-transmon quantum simulation. In all cases, the meta-VQE is able to learn the shape of the energy functional and, in some cases, resulted in improved accuracy in comparison to individual VQE optimization. The meta-VQE algorithm introduces both a gain in efficiency for parametrized Hamiltonians, in terms of the number of optimizations, and a good starting point for the quantum circuit parameters for individual optimizations. The proposed algorithm proposal can be readily mixed with other improvements in the field of variational algorithms to shorten the distance between the current state-of-the-art and applications with quantum advantage. ","The Meta-Variational Quantum Eigensolver (Meta-VQE): Learning energy
profiles of parameterized Hamiltonians for quantum simulation",3,"['New paper out! We present the Meta-VQE, an algorithm that learns the ground state energy profile of a parametrized Hamiltonian. Check it out 👇\n\n\n@JakobKottmann @A_Aspuru_Guzik #matterlab @chemuoft @UofTCompSci @VectorInst @CIFAR_News', 'I will publish a thread about it in a few hours 😃', '@gpassosgomes @JakobKottmann @A_Aspuru_Guzik @chemuoft @UofTCompSci @VectorInst @CIFAR_News Thanks! Great week for #matterlab! 😃']",20,09,424
203,80,1481551888587886596,2906135523,JudeCroston,"*New paper!* @OgNimaeb’s latest, epic, @LOFAR radio galaxies paper is out today: . We dug deep into the disconnect between morphology and accretion mode and what controls each of these… (1/5) This plot is my favourite - it shows **only jets of similar power (a single decade in 150MHz luminosity)** and what I’m fairly sure is the very clearest evidence that host galaxy (stellar) mass strongly influences jet disruption and so FR class… (2/5) And we see a beautifully clear connection between specific star-formation rate and accretion mode, presumably both driven by availability of dense cold gas…(3/5) We find a complete disconnect in the drivers of accretion class and morphology, and indistinguishable FR2 structures of both accretion modes (this is a slightly more twitter-friendly v. of paper Fig 6)… (4/5) Lots more in the paper, & thanks to the many people in the LOFAR deep fields project who made it possible (inc @nudomarinero, @dunkenj, @cygnus_ww - apols if I missed anyone else on twitter) (5/5)",https://arxiv.org/abs/2201.04433,"Radio-loud active galaxies have two accretion modes [radiatively inefficient (RI) and radiatively efficient (RE)], with distinct optical and infrared signatures, and two jet dynamical behaviours, which in arcsec- to arcmin-resolution radio surveys manifest primarily as centre- or edge-brightened structures [Fanaroff-Riley (FR) class I and II]. The nature of the relationship between accretion mode and radio morphology (FR class) has been the subject of long debate. We present a comprehensive investigation of this relationship for a sample of 286 well-resolved radio galaxies in the LOFAR Two-metre Sky Survey Deep Fields (LoTSS-Deep) first data release, for which robust morphological and accretion mode classifications have been made. We find that two-thirds of luminous FRII radio galaxies are RI, and identify no significant differences in the visual appearance or source dynamic range (peak/mean surface brightness) of the RI and RE FRIIs, demonstrating that both RI and RE systems can produce FRII structures. We also find a significant population of low-luminosity FRIIs (predominantly RI), supporting our earlier conclusion that FRII radio structures can be produced at all radio luminosities. We demonstrate that in the luminosity range where both morphologies are present, the probability of producing FRI or FRII radio morphology is directly linked to stellar mass, while across all morphologies and luminosities, RE accretion occurs in systems with high specific star formation rate, presumably because this traces fuel availability. In summary, the relationship between accretion mode and radio morphology is very indirect, with host-galaxy environment controlling these two key parameters in different ways. ",Accretion mode versus radio morphology in the LOFAR Deep Fields,5,"['*New paper!* @OgNimaeb’s latest, epic, @LOFAR radio galaxies paper is out today: . 
We dug deep into the disconnect between morphology and accretion mode and what controls each of these… (1/5)', 'This plot is my favourite - it shows **only jets of similar power (a single decade in 150MHz luminosity)** and what I’m fairly sure is the very clearest evidence that host galaxy (stellar) mass strongly influences jet disruption and so FR class… (2/5) https://t.co/8liuQ1zzII', 'And we see a beautifully clear connection between specific star-formation rate and accretion mode, presumably both driven by availability of dense cold gas…(3/5) https://t.co/5WSGeq5BPE', 'We find a complete disconnect in the drivers of accretion class and morphology, and indistinguishable FR2 structures of both accretion modes (this is a slightly more twitter-friendly v. of paper Fig 6)… (4/5) https://t.co/s2T4AhigSX', 'Lots more in the paper, & thanks to the many people in the LOFAR deep fields project who made it possible (inc @nudomarinero, @dunkenj, @cygnus_ww - apols if I missed anyone else on twitter) (5/5)']",22,01,1038
204,7,980754967660294145,958312958593064961,Mikayel Samvelyan,I'm excited to share the preprint of our new paper “QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning” . Very fortunate for the chance to work with such talented people at @whi_rl : @j_foerst @greg_far @shimon8282,http://arxiv.org/abs/1803.11485,"In many real-world settings, a team of agents must coordinate their behaviour while acting in a decentralised way. At the same time, it is often possible to train the agents in a centralised fashion in a simulated or laboratory setting, where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations. We structurally enforce that the joint-action value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning, and guarantees consistency between the centralised and decentralised policies. We evaluate QMIX on a challenging set of StarCraft II micromanagement tasks, and show that QMIX significantly outperforms existing value-based multi-agent reinforcement learning methods. ","QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent
Reinforcement Learning",1,"[""I'm excited to share the preprint of our new paper “QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning” . Very fortunate for the chance to work with such talented people at @whi_rl : @j_foerst @greg_far @shimon8282""]",18,03,258
205,51,1262793302388113411,2933022295,Alex Beatson,"New paper: ""Learning Composable Energy Surrogates for PDE Order Reduction"", with Jordan Ash, @geoffrey_roeder, @TianjuXue, @ryan_p_adams: . We use NNs to amortize solving hard mechanical meta-material PDEs, training only on data from small subdomains. 1/8 How? Solutions to hyperelasticity PDEs (and many others!) are minimizers of a total energy, which is an integral over the domain. We train NNs to predict the minimal energy in a component from a reduced-basis solution (here, a displacement field) on the component boundary. 2/8 When components are composed into a larger domain, the total energy is a sum of component energies. We solve the PDE in a reduced basis of component boundaries, minimizing the sum of NN surrogate energies. We only generate supervised data (via FEA) on the small components. 3/8 When solving PDEs with FEA, fine geometric features (such as metamaterial pores) require a fine mesh. Part of the trick here is feeding parametric representations of geometry to our component-level energy surrogate, avoiding representing geometry with a fine mesh. 4/8 To get accurate surrogates, we embed known structure (invariance to rigid-body transforms and a linear-elastic structural prior) into the NN surrogate, and perform Sobolev training on energy derivatives and Hessian-vector products as well as energy values. 5/8 The learned surrogate can solve composed systems orders-of-magnitude faster than a finite element model of similar accuracy, run on the same CPU. 6/8 Limitation: we use data augmentation to capture displacements the components encounter in practice, so you have to pick a distribution of macroscopic problems to amortize. Perhaps inevitable, but we’re looking into relaxing this via active learning and learning robust models. 7/8 Mechanical meta-materials like the lattices we study hold great promise for engineering design (see ), but are hard to simulate due to geometric complexity and nonlinear behavior. We think there’s a lot of room for ML to help harness their potential. 8/8",https://arxiv.org/abs/2005.06549,"Meta-materials are an important emerging class of engineered materials in which complex macroscopic behaviour--whether electromagnetic, thermal, or mechanical--arises from modular substructure. Simulation and optimization of these materials are computationally challenging, as rich substructures necessitate high-fidelity finite element meshes to solve the governing PDEs. To address this, we leverage parametric modular structure to learn component-level surrogates, enabling cheaper high-fidelity simulation. We use a neural network to model the stored potential energy in a component given boundary conditions. This yields a structured prediction task: macroscopic behavior is determined by the minimizer of the system's total potential energy, which can be approximated by composing these surrogate models. Composable energy surrogates thus permit simulation in the reduced basis of component boundaries. Costly ground-truth simulation of the full structure is avoided, as training data are generated by performing finite element analysis with individual components. Using dataset aggregation to choose training boundary conditions allows us to learn energy surrogates which produce accurate macroscopic behavior when composed, accelerating simulation of parametric meta-materials. 
",Learning Composable Energy Surrogates for PDE Order Reduction,8,"['New paper: ""Learning Composable Energy Surrogates for PDE Order Reduction"", with Jordan Ash, @geoffrey_roeder, @TianjuXue, @ryan_p_adams: . We use NNs to amortize solving hard mechanical meta-material PDEs, training only on data from small subdomains. 1/8 ', 'How? Solutions to hyperelasticity PDEs (and many others!) are minimizers of a total energy, which is an integral over the domain. We train NNs to predict the minimal energy in a component from a reduced-basis solution (here, a displacement field) on the component boundary. 2/8', 'When components are composed into a larger domain, the total energy is a sum of component energies. We solve the PDE in a reduced basis of component boundaries, minimizing the sum of NN surrogate energies. We only generate supervised data (via FEA) on the small components. 3/8 https://t.co/0NrQfQitq9', 'When solving PDEs with FEA, fine geometric features (such as metamaterial pores) require a fine mesh. Part of the trick here is feeding parametric representations of geometry to our component-level energy surrogate, avoiding representing geometry with a fine mesh. 4/8', 'To get accurate surrogates, we embed known structure (invariance to rigid-body transforms and a linear-elastic structural prior) into the NN surrogate, and perform Sobolev training on energy derivatives and Hessian-vector products as well as energy values. 5/8', 'The learned surrogate can solve composed systems orders-of-magnitude faster than a finite element model of similar accuracy, run on the same CPU. 6/8 https://t.co/Qmwu01ao9M', 'Limitation: we use data augmentation to capture displacements the components encounter in practice, so you have to pick a distribution of macroscopic problems to amortize. Perhaps inevitable, but we’re looking into relaxing this via active learning and learning robust models. 7/8', 'Mechanical meta-materials like the lattices we study hold great promise for engineering design (see https://t.co/GmrQEiwwuA), but are hard to simulate due to geometric complexity and nonlinear behavior. We think there’s a lot of room for ML to help harness their potential. 8/8']",20,05,2060
206,137,1435891550303793152,494212643,Aayush Saxena,"After over a year of being in the works (counting the pandemic), I'm excited that our paper reporting the discovery of 11 new spectroscopically confirmed candidate Lyman continuum (LyC) leakers in the GOODS-S field is out now! Short thread below (1/5) We use ground based LyC imaging and compile spectra from publicly available surveys, mainly from VANDELS (LBGs) and MUSE (blind) to measure the LyC escape fractions. only 6% of galaxies have any LyC leakage, with the majority of the sample giving an upper limit of fesc<7% (2/5) Interestingly, we do not find any strong dependence of the measured LyC escape fraction for 11 new candidate leakers on their stellar masses or specific star-formation rates, as can be seen in the plots below: (3/5) At these redshifts, the [OIII]+Hb lines fall in the observed K band, and we measure these line strengths by including nebular emission in SED fitting. Once again, no strong dependence of fesc is found on the [OIII]+Hb line strengths: (4/5) It remains observationally unclear which physical property regulates high LyC leakage. We argue that orientation and timescales may play a role in actually detecting LyC leakage, and the presence of young clusters within galaxies could be important. (5/5) @jorryt_m Cheers! Typical MUV is ~ -21, so comparable to Steidel+2018 I'd say and its encouraging to get similar results. Indeed IGM stochasticity may play a role in masking correlations, which we touch upon in the paper too! @astrobellatrix Ooh so sorry about that -including MUSE redshifts thanks to your and your team's amazing work was just too tempting ;) I hope we'll arrive at similar conclusions and will keep an eye out on your results too! Thanks :) @astrobellatrix Yes that was a really cool paper. MUSE results are generally absolutely fantastic I must say! @maximetrebitsch That is an excellent point actually - we haven't explored the dust content angle here, but it is indeed something that could definitely play a role in an orientation-based scenario... Kind of similar to IGM stochasticity in effect(?) More zoom-in simulations please!!",https://arxiv.org/abs/2109.03662,"We present Lyman continuum (LyC) radiation escape fraction $f_{\rm{esc}}$ measurements for 183 spectroscopically confirmed star-forming galaxies in the redshift range $3.11 < z < 3.53$ in the \textit{Chandra} Deep Field South. We use ground-based imaging to measure $f_{\rm{esc}}$, and use ground- and space-based photometry to derive galaxy physical properties using spectral energy distribution (SED) fitting. We additionally derive [O III]+H$\beta$ equivalent widths (that fall in the observed K band) by including nebular emission in the SED fitting. After removing foreground contaminants, we report the discovery of 11 new candidate LyC leakers, with absolute LyC escape fractions, $f_{\rm{esc}}$ in the range $0.14-0.85$. From non-detections, we place $1\sigma$ upper limits of $f_{\rm{esc}}<0.12$, where the Lyman-break selected galaxies have $f_{\rm{esc}} < 0.11$ and `blindly' discovered galaxies with no prior photometric selection have $f_{\rm{esc}}<0.13$. We find a slightly higher $1\sigma$ limit of $f_{\rm{esc}}<0.20$ for extreme emission line galaxies with rest-frame [O III]+H$\beta$ equivalent widths $>300$A. 
For candidate LyC leakers, we find a weak negative correlation between $f_{\rm{esc}}$ and galaxy stellar masses, no correlation between $f_{\rm{esc}}$ specific star-formation rates (sSFRs) and a positive correlation between $f_{\rm{esc}}$ and EW$_0$([O III]+H$\beta$). The weak/no correlations between stellar mass and sSFRs may be explained by misaligned viewing angles and/or non-coincident timescales of starburst activity and periods of high $f_{\rm{esc}}$. Alternatively, escaping radiation may predominantly occur in highly localised star-forming regions, or $f_{\rm{esc}}$ measurements may be impacted by stochasticity of the intervening neutral medium, obscuring any global trends with galaxy properties. These hypotheses have important consequences for models of reionisation. ","No strong dependence of Lyman continuum leakage on physical properties
of star-forming galaxies at $\mathbf{3.1 \lesssim z \lesssim 3.5}$",9,"[""After over a year of being in the works (counting the pandemic), I'm excited that our paper reporting the discovery of 11 new spectroscopically confirmed candidate Lyman continuum (LyC) leakers in the GOODS-S field is out now!\n\n\n\nShort thread below (1/5)"", 'We use ground based LyC imaging and compile spectra from publicly available surveys, mainly from VANDELS (LBGs) and MUSE (blind) to measure the LyC escape fractions. only 6% of galaxies have any LyC leakage, with the majority of the sample giving an upper limit of fesc<7% (2/5)', 'Interestingly, we do not find any strong dependence of the measured LyC escape fraction for 11 new candidate leakers on their stellar masses or specific star-formation rates, as can be seen in the plots below: (3/5) https://t.co/a5e89E4pzS', 'At these redshifts, the [OIII]+Hb lines fall in the observed K band, and we measure these line strengths by including nebular emission in SED fitting. Once again, no strong dependence of fesc is found on the [OIII]+Hb line strengths: (4/5) https://t.co/srfjTSN0nO', 'It remains observationally unclear which physical property regulates high LyC leakage. We argue that orientation and timescales may play a role in actually detecting LyC leakage, and the presence of young clusters within galaxies could be important. (5/5)', ""@jorryt_m Cheers! Typical MUV is ~ -21, so comparable to Steidel+2018 I'd say and its encouraging to get similar results.\nIndeed IGM stochasticity may play a role in masking correlations, which we touch upon in the paper too!"", ""@astrobellatrix Ooh so sorry about that -including MUSE redshifts thanks to your and your team's amazing work was just too tempting ;) I hope we'll arrive at similar conclusions and will keep an eye out on your results too! Thanks :)"", '@astrobellatrix Yes that was a really cool paper. MUSE results are generally absolutely fantastic I must say!', ""@maximetrebitsch That is an excellent point actually - we haven't explored the dust content angle here, but it is indeed something that could definitely play a role in an orientation-based scenario... Kind of similar to IGM stochasticity in effect(?) More zoom-in simulations please!!""]",21,09,2121
207,108,1245370187567816708,2983164057,Mohsen Fayyaz,"#CVPR2020 Learning To Temporally Segment Untrimmed Videos from Set of Actions: ""SCT: Set Constrained Temporal Transformer for Set Supervised Action Segmentation"" A new paper by me and my supervisor Juergen Gall. In a set supervised action segmentation problem, for each training video, only the list of actions is given that occur in the video, but not when, how often, and in which order they occur. Our network divides a video into smaller temporal regions. For each region, the network estimates its length and the corresponding action label. Then the upsampling module uses the lengths and the action probabilities of all regions to estimate the framewise probabilities. Although it is possible to do the upsampling by linear interpolation this operation is not differentiable w.r.t predicted lengths. Therefore, we use our novel upsampling module which is differentiable w.r.t. predicted lengths. Since we do not know the ground-truth lengths and orders but only the set of present actions, we cannot directly use the predicted frame-wise probabilities of the network. Therefore, we introduce a novel temporal transformation method that transforms a temporal sequence to the set of action probabilities w.r.t predicted lengths and temporal locations of actions. Using the set of action probabilities we can use the GT to train the model end-to-end. We will release the source code as soon as possible. Thanks @yassersouri for his valuable comments on our work.",https://arxiv.org/abs/2003.14266,"Temporal action segmentation is a topic of increasing interest, however, annotating each frame in a video is cumbersome and costly. Weakly supervised approaches therefore aim at learning temporal action segmentation from videos that are only weakly labeled. In this work, we assume that for each training video only the list of actions is given that occur in the video, but not when, how often, and in which order they occur. In order to address this task, we propose an approach that can be trained end-to-end on such data. The approach divides the video into smaller temporal regions and predicts for each region the action label and its length. In addition, the network estimates the action labels for each frame. By measuring how consistent the frame-wise predictions are with respect to the temporal regions and the annotated action labels, the network learns to divide a video into class-consistent regions. We evaluate our approach on three datasets where the approach achieves state-of-the-art results. ","SCT: Set Constrained Temporal Transformer for Set Supervised Action
Segmentation",7,"['#CVPR2020 Learning To Temporally Segment Untrimmed Videos from Set of Actions:\n""SCT: Set Constrained Temporal Transformer for Set Supervised Action Segmentation""\nA new paper by me and my supervisor Juergen Gall.\n ', 'In a set supervised action segmentation problem, for each training video, only the list of actions is given that\noccur in the video, but not when, how often, and in which order they occur.', 'Our network divides a video into smaller temporal regions. For each region, the network estimates its length and the corresponding action label. Then the upsampling module uses the lengths and the action probabilities of all regions to estimate the framewise probabilities.', 'Although it is possible to do the upsampling by linear interpolation this operation is not differentiable w.r.t predicted lengths. Therefore, we use our novel upsampling module which is differentiable w.r.t. predicted lengths.', 'Since we do not know the ground-truth lengths and orders but only the set of present actions, we cannot directly use the predicted frame-wise probabilities of the network.', 'Therefore, we introduce a novel temporal transformation method that transforms a temporal sequence to the set of action probabilities w.r.t predicted lengths and temporal locations of actions. Using the set of action probabilities we can use the GT to train the model end-to-end.', 'We will release the source code as soon as possible.\nThanks @yassersouri for his valuable comments on our work.']",20,03,1479
208,17,1100817646520418304,1092693586263457792,Greg Yang,"[1/4] Everybody knows adversarial examples are a problem, and a lot of people tried to provably verify NN robustness. But seems convex relaxation alone runs into a theoretical and empirical barrier --- not tight enough! See our new paper [2/4] This may be obvious to some, but at least 10 papers last year kept pushing convex relaxation, like @RICEric22 and @zicokolter’s LP method and recently @pushmeet’s January paper on interval bound propagation. [3/4] Algorithms that bypass this barrier include Raghunathan’s SDP formulations, MILP from Tedrake’s group, SMT (reluplex), Lipschitz constant bound, and hybrid approaches ← we encourage folks to explore these ideas further! [4/4] Collaboration with our super duper awesome AI resident @hadisalman94, along with Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. Also special thanks to @ilyaraz2! @zicokolter Thanks Zico! You are right, and indeed we discuss this more carefully in the conclusion of the paper: ""In general, none of [the above] are strictly better than the convex relaxation approach, sacrificing either speed or accuracy"" --- so 100% on the same page here! @zicokolter Yep big fans of the randomized smoothing @deepcohen (very impressed by this) and Wasserstein robustness @RICEric22 papers! :)",http://arxiv.org/abs/1902.08722,"Verification of neural networks enables us to gauge their robustness against adversarial attacks. Verification algorithms fall into two categories: exact verifiers that run in exponential time and relaxed verifiers that are efficient but incomplete. In this paper, we unify all existing LP-relaxed verifiers, to the best of our knowledge, under a general convex relaxation framework. This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification. We further prove strong duality between the primal and dual problems under very mild conditions. Next, we perform large-scale experiments, amounting to more than 22 CPU-years, to obtain exact solution to the convex-relaxed problem that is optimal within our framework for ReLU networks. We find the exact solution does not significantly improve upon the gap between PGD and existing relaxed verifiers for various networks trained normally or robustly on MNIST and CIFAR datasets. Our results suggest there is an inherent barrier to tight verification for the large class of methods captured by our framework. We discuss possible causes of this barrier and potential future directions for bypassing it. Our code and trained models are available at this http URL . ","A Convex Relaxation Barrier to Tight Robustness Verification of Neural
Networks",6,"['[1/4] Everybody knows adversarial examples are a problem, and a lot of people tried to provably verify NN robustness. But seems convex relaxation alone runs into a theoretical and empirical barrier --- not tight enough! See our new paper ', '[2/4] This may be obvious to some, but at least 10 papers last year kept pushing convex relaxation, like @RICEric22 and @zicokolter’s LP method and recently @pushmeet’s January paper on interval bound propagation.', '[3/4] Algorithms that bypass this barrier include Raghunathan’s SDP formulations, MILP from Tedrake’s group, SMT (reluplex), Lipschitz constant bound, and hybrid approaches ← we encourage folks to explore these ideas further!', '[4/4] Collaboration with our super duper awesome AI resident @hadisalman94, along with Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. Also special thanks to @ilyaraz2!', '@zicokolter Thanks Zico! You are right, and indeed we discuss this more carefully in the conclusion of the paper: ""In general, none of [the above] are strictly better than the convex relaxation approach, sacrificing either speed or accuracy"" --- so 100% on the same page here!', '@zicokolter Yep big fans of the randomized smoothing @deepcohen (very impressed by this) and Wasserstein robustness @RICEric22 papers! :)']",19,02,1275
209,85,1319688341118455809,967806578425516032,Dr Jemima Tabeart,"We (@DrSarahDance, @amoslawless, @J_A_Waller and Nancy Nichols) have a new paper on arxiv this week! As is my new tradition, here's an as-non-technical-as-possible overview (this time with GIFs!) 🧵: In my PhD I studied how introducing correlated observation error covariance matrices (OECs) alters mathematical properties of variational data assimilation (DA) problems. DA appears everywhere, but is most well known for its use in weather forecasting. In this new paper we consider the preconditioned problem - this is where we solve a different but related problem that is computationally cheaper. The standard preconditioner for DA terms puts the OEC matrix in the same term as the background error covariance (BEC) matrix. Some maths: we study how cheap/expensive our problem is using a condition number. Large condition number = expensive! Eigenvalues are properties of a matrix and relate to their condition number - if all the eigenvalues are close the condition no. is small, spread out = large Previously we found for the UNpreconditioned system (BEC + OEC) that the smallest eigenvalue of the OEC matrix is important theoretically. If the eigenvalue is small, the DA problem is expensive! Numerics show that if we increase this eigenvalue, we can make the DA problem faster For the preconditioned problem (OECxBEC) we found that although the smallest eigenvalue is still important, numerically the DA problem is solved fastest when BEC and OEC have similar properties (e.g. related eigenvalues). So if all we care about is a fast solution, we should make BEC and OEC as similar as possible, right? Usually we don't get to choose BEC/OEC - they come from the underlying physics. However, in practice we often have to adapt OEC matrices before we use them. Our new theory and results could help us choose how to do this modification - and mean even faster convergence! This is great: OEC information is really important, but can be expensive to use. Better OEC matrices = better weather forecasts (NB I'm not promising better weather!) Thanks for making it this far! Finishing this paper during COVID-19 times has been difficult. Collaborating virtually has taken some adjustment, and I've found focusing hard (not ideal for editing manuscripts!). But I'm going to celebrate this small win and keep plodding on! ",https://arxiv.org/abs/2010.08416,"Data assimilation algorithms combine prior and observational information, weighted by their respective uncertainties, to obtain the most likely posterior of a dynamical system. In variational data assimilation the posterior is computed by solving a nonlinear least squares problem. Many numerical weather prediction (NWP) centres use full observation error covariance (OEC) weighting matrices, which can slow convergence of the data assimilation procedure. Previous work revealed the importance of the minimum eigenvalue of the OEC matrix for conditioning and convergence of the unpreconditioned data assimilation problem. In this paper we examine the use of correlated OEC matrices in the preconditioned data assimilation problem for the first time. We consider the case where there are more state variables than observations, which is typical for applications with sparse measurements e.g. NWP and remote sensing. We find that similarly to the unpreconditioned problem, the minimum eigenvalue of the OEC matrix appears in new bounds on the condition number of the Hessian of the preconditioned objective function. 
Numerical experiments reveal that the condition number of the Hessian is minimised when the background and observation lengthscales are equal. This contrasts with the unpreconditioned case, where decreasing the observation error lengthscale always improves conditioning. Conjugate gradient experiments show that in this framework the condition number of the Hessian is a good proxy for convergence. Eigenvalue clustering explains cases where convergence is faster than expected. ","New bounds on the condition number of the Hessian of the preconditioned
variational data assimilation problem",9,"[""We (@DrSarahDance, @amoslawless, @J_A_Waller and Nancy Nichols) have a new paper on arxiv this week! As is my new tradition, here's an as-non-technical-as-possible overview (this time with GIFs!) 🧵: "", 'In my PhD I studied how introducing correlated observation error covariance matrices (OECs) alters mathematical properties of variational data assimilation (DA) problems. DA appears everywhere, but is most well known for its use in weather forecasting. https://t.co/6giUJaexol', 'In this new paper we consider the preconditioned problem - this is where we solve a different but related problem that is computationally cheaper. The standard preconditioner for DA terms puts the OEC matrix in the same term as the background error covariance (BEC) matrix. https://t.co/t2ey0ZXSQW', 'Some maths: we study how cheap/expensive our problem is using a condition number. Large condition number = expensive! \nEigenvalues are properties of a matrix and relate to their condition number - if all the eigenvalues are close the condition no. is small, spread out = large https://t.co/D4jVsBLJh9', 'Previously we found for the UNpreconditioned system (BEC + OEC) that the smallest eigenvalue of the OEC matrix is important theoretically. If the eigenvalue is small, the DA problem is expensive! Numerics show that if we increase this eigenvalue, we can make the DA problem faster https://t.co/9MC6DImF2I', 'For the preconditioned problem (OECxBEC) we found that although the smallest eigenvalue is still important, numerically the DA problem is solved fastest when BEC and OEC have similar properties (e.g. related eigenvalues). https://t.co/pYm5kCJSZ5', ""So if all we care about is a fast solution, we should make BEC and OEC as similar as possible, right? Usually we don't get to choose BEC/OEC - they come from the underlying physics. However, in practice we often have to adapt OEC matrices before we use them. https://t.co/IwnKvrZPuq"", ""Our new theory and results could help us choose how to do this modification - and mean even faster convergence! This is great: OEC information is really important, but can be expensive to use. Better OEC matrices = better weather forecasts (NB I'm not promising better weather!) https://t.co/dPRXKR38G4"", ""Thanks for making it this far! Finishing this paper during COVID-19 times has been difficult. Collaborating virtually has taken some adjustment, and I've found focusing hard (not ideal for editing manuscripts!). But I'm going to celebrate this small win and keep plodding on! https://t.co/NJ4fAI6x05""]",20,10,2389
210,0,1337259170114850819,3375538456,Ali Mottaghi,"Check out our work on medical symptom recognition at #ML4H workshop #NeurIPS2020 tomorrow. We developed a new active learning method for long-tailed multilabel distributions. Joint with Prathusha Sarma, @xamat, @syeung10, and @anithakan at @CuraiHQ. Paper: ",https://arxiv.org/abs/2011.06874,"We study the problem of medical symptoms recognition from patient text, for the purposes of gathering pertinent information from the patient (known as history-taking). A typical patient text is often descriptive of the symptoms the patient is experiencing and a single instance of such a text can be ""labeled"" with multiple symptoms. This makes learning a medical symptoms recognizer challenging on account of i) the lack of availability of voluminous annotated data as well as ii) the large unknown universe of multiple symptoms that a single text can map to. Furthermore, patient text is often characterized by a long tail in the data (i.e., some labels/symptoms occur more frequently than others for e.g ""fever"" vs ""hematochezia""). In this paper, we introduce an active learning method that leverages underlying structure of a continually refined, learned latent space to select the most informative examples to label. This enables the selection of the most informative examples that progressively increases the coverage on the universe of symptoms via the learned model, despite the long tail in data distribution. ","Medical symptom recognition from patient text: An active learning
approach for long-tailed multilabel distributions",1,"['Check out our work on medical symptom recognition at #ML4H workshop #NeurIPS2020 tomorrow. We developed a new active learning method for long-tailed multilabel distributions. Joint with Prathusha Sarma, @xamat, @syeung10, and @anithakan at @CuraiHQ.\nPaper: ']",20,11,270
211,43,1119588717059104769,1090429279513522178,Vitor Possebom,"💡Working Paper Update💡I just uploaded a new version of ""Sharp Bounds for the Marginal Treatment Effect (MTE) with Sample Selection"" (). Using my partial identification strategy, I analyze the Job Corps Training Program (JCTP) and bound its MTE (figure). The lower bound is positive, but it is statistically significant only when the latent heterogeneity is between 0.35 and 0.73. Since the ATT is between $.33 and $.99 and the ATU between $.71 and $3.00, there is some unobserved constraint blocking agents who would benefit. Since my paper is mostly theoretical, analyzing why agents who would benefit from attending the JCTP are not doing so is beyond its scope. But it is an important policy question! Chen, Flores and @A_FloresLagunes (2017, ) argue that it may be due to lack of childcare services, incomplete information, overconfidence or personal preferences for non-enrollment If you like Monte Carlos, the proposed confidence intervals are (mostly) conservative for the MTE function when the estimated parametric model is correctly specified (Designs 1-3). When the estimated model is misspecified (Designs 4-6), there is undercoverage, ",https://arxiv.org/abs/1904.08522,"I analyze treatment effects in situations when agents endogenously select into the treatment group and into the observed sample. As a theoretical contribution, I propose pointwise sharp bounds for the marginal treatment effect (MTE) of interest within the always-observed subpopulation under monotonicity assumptions. Moreover, I impose an extra mean dominance assumption to tighten the previous bounds. I further discuss how to identify those bounds when the support of the propensity score is either continuous or discrete. Using these results, I estimate bounds for the MTE of the Job Corps Training Program on hourly wages for the always-employed subpopulation and find that it is decreasing in the likelihood of attending the program within the Non-Hispanic group. For example, the Average Treatment Effect on the Treated is between \$.33 and \$.99 while the Average Treatment Effect on the Untreated is between \$.71 and \$3.00. ",Sharp Bounds for the Marginal Treatment Effect with Sample Selection,5,"['💡Working Paper Update💡I just uploaded a new version of ""Sharp Bounds for the Marginal Treatment Effect (MTE) with Sample Selection"" (). Using my partial identification strategy, I analyze the Job Corps Training Program (JCTP) and bound its MTE (figure). ', 'The lower bound is positive, but it is statistically significant only when the latent heterogeneity is between 0.35 and 0.73. Since the ATT is between $.33 and $.99 and the ATU between $.71 and $3.00, there is some unobserved constraint blocking agents who would benefit.', 'Since my paper is mostly theoretical, analyzing why agents who would benefit from attending the JCTP are not doing so is beyond its scope. But it is an important policy question!', 'Chen, Flores and @A_FloresLagunes (2017, https://t.co/dQe0Ae2FJy) argue that it may be due to lack of childcare services, incomplete information, overconfidence or personal preferences for non-enrollment', 'If you like Monte Carlos, the proposed confidence intervals are (mostly) conservative for the MTE function when the estimated parametric model is correctly specified (Designs 1-3). When the estimated model is misspecified (Designs 4-6), there is undercoverage, https://t.co/7zZ4P9Q2Ww']",19,04,1172
212,9,1499207309083361285,3061733236,Dong Gong,"Our new Continual Learning work got accepted by #CVPR2022. We introduced a new Bayesian sparse regularization on the network neurons, with novel memory-based full experience reply. Lucky to work with a great team making this done! An early-version paper: The sparse regularization encourages to use less model capacity for each task and thus reserves capacity for the subsequent tasks in the continual learning streams. The hierarchical Bayesian modeling optimizes the sparsity. The full experience reply involves the intermediate features for more effective and flexible knowledge sharing and transferring. It also directly guide sparsity modeling at the intermediate layers. The technical details of the sparse regularization are related to our previous work about Bayesian variational dropout: ",http://arxiv.org/abs/2202.10203,"Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered. Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal. Despite their performance, they still suffer from interference across tasks which leads to catastrophic forgetting. To ameliorate this problem, we propose to only activate and select sparse neurons for learning current and past tasks at any stage. More parameters space and model capacity can thus be reserved for the future tasks. This minimizes the interference between parameters for different tasks. To do so, we propose a Sparse neural Network for Continual Learning (SNCL), which employs variational Bayesian sparsity priors on the activations of the neurons in all layers. Full Experience Replay (FER) provides effective supervision in learning the sparse activations of the neurons in different layers. A loss-aware reservoir-sampling strategy is developed to maintain the memory buffer. The proposed method is agnostic as to the network structures and the task boundaries. Experiments on different datasets show that our approach achieves state-of-the-art performance for mitigating forgetting. ","Learning Bayesian Sparse Networks with Full Experience Replay for
Continual Learning",4,"['Our new Continual Learning work got accepted by #CVPR2022. We introduced a new Bayesian sparse regularization on the network neurons, with novel memory-based full experience reply. \nLucky to work with a great team making this done! An early-version paper: ', 'The sparse regularization encourages to use less model capacity for each task and thus reserves capacity for the subsequent tasks in the continual learning streams. The hierarchical Bayesian modeling optimizes the sparsity.', 'The full experience reply involves the intermediate features for more effective and flexible knowledge sharing and transferring. It also directly guide sparsity modeling at the intermediate layers.', 'The technical details of the sparse regularization are related to our previous work about Bayesian variational dropout: https://t.co/icdIIF90sE']",22,02,817
213,11,1388549201819312129,1661813766,Mehdi Kamani,"New paper! We introduce a first-order algorithm: • Converges to a point on the #ParetoFrontier with the desired level of trade-offs in #MultiobjectiveOptimization • Traces other points on the Pareto frontier • SOTA results on #Fairness aware learning Using the proposed Preference-based Pareto Descent Optimization, unlike other approaches we can trace other points on the Pareto frontier using only first-order information while converging to the desired point on that set. #MultiobjectiveOptimization #Pareto The application of this approach to the #fairness problem shows SOTA performance with finding many points on the Pareto frontier for better and enhanced decision making in terms of fairness. Solutions found by our algorithm mostly dominate other SOTA's solutions. Please RT! ",https://arxiv.org/abs/2104.01634,"As algorithmic decision-making systems are becoming more pervasive, it is crucial to ensure such systems do not become mechanisms of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. Moreover, due to the inherent trade-off between fairness measures and accuracy, it is desirable to learn fairness-enhanced models without significantly compromising the accuracy. In this paper, we propose Pareto efficient Fairness (PEF) as a suitable fairness notion for supervised learning, that can ensure the optimal trade-off between overall loss and other fairness criteria. The proposed PEF notion is definition-agnostic, meaning that any well-defined notion of fairness can be reduced to the PEF notion. To efficiently find a PEF classifier, we cast the fairness-enhanced classification as a bilevel optimization problem and propose a gradient-based method that can guarantee the solution belongs to the Pareto frontier with provable guarantees for convex and non-convex objectives. We also generalize the proposed algorithmic solution to extract and trace arbitrary solutions from the Pareto frontier for a given preference over accuracy and fairness measures. This approach is generic and can be generalized to any multicriteria optimization problem to trace points on the Pareto frontier curve, which is interesting by its own right. We empirically demonstrate the effectiveness of the PEF solution and the extracted Pareto frontier on real-world datasets compared to state-of-the-art methods. ","Pareto Efficient Fairness in Supervised Learning: From Extraction to
Tracing",3,"['New paper! We introduce a first-order algorithm:\n• Converges to a point on the #ParetoFrontier with the desired level of trade-offs in #MultiobjectiveOptimization\n• Traces other points on the Pareto frontier\n• SOTA results on #Fairness aware learning\n ', 'Using the proposed Preference-based Pareto Descent Optimization, unlike other approaches we can trace other points on the Pareto frontier using only first-order information while converging to the desired point on that set.\n#MultiobjectiveOptimization #Pareto https://t.co/ylx4Lqgd7F', ""The application of this approach to the #fairness problem shows SOTA performance with finding many points on the Pareto frontier for better and enhanced decision making in terms of fairness. Solutions found by our algorithm mostly dominate other SOTA's solutions.\n\nPlease RT! https://t.co/c3f1c8HaZn""]",21,04,813
214,262,1376434656330014725,268337552,Nicolas Kourtellis,"With @yelenamejova, we study the increase in cross-platform posting activity of Twitter+YouTube during 1st #COVID19 lockdown across 100+ countries: a proxy for users following restrictions in mobility! Powered by @TEFresearch, @concordiah2020 #Mobility We are also releasing a bunch of data and results, so check out the paper for your own follow-up studies! #data #reproducibleresearch #transparency",https://arxiv.org/abs/2103.14601,"Compliance with public health measures, such as restrictions on movement and socialization, is paramount in limiting the spread of diseases such as the severe acute respiratory syndrome coronavirus 2 (also referred to as COVID-19). Although large population datasets, such as phone-based mobility data, may provide some glimpse into such compliance, it is often proprietary, and may not be available for all locales. In this work, we examine the usefulness of video sharing on social media as a proxy of the amount of time Internet users spend at home. In particular, we focus on the number of people sharing YouTube videos on Twitter before and during COVID-19 lockdown measures were imposed by 109 countries. We find that the media sharing behavior differs widely between countries, in some having immediate response to the lockdown decrees - mostly by increasing the sharing volume dramatically - while in others having a substantial lag. We confirm that these insights correlate strongly with mobility, as measured using phone data. Finally, we illustrate that both media sharing and mobility behaviors change more drastically around mandated lockdowns, and less so around more lax recommendations. We make the media sharing volume data available to the research community for continued monitoring of behavior change around public health measures. ","YouTubing at Home: Media Sharing Behavior Change as Proxy for
Mobility Around COVID-19 Lockdowns",2,"['With @yelenamejova, we study the increase in cross-platform posting activity of Twitter+YouTube during 1st #COVID19 lockdown across 100+ countries: a proxy for users following restrictions in mobility! \nPowered by @TEFresearch, @concordiah2020 \n#Mobility ', 'We are also releasing a bunch of data and results, so check out the paper for your own follow-up studies!\n#data #reproducibleresearch #transparency']",21,03,414
215,58,1239547436948996101,769142140765167616,Siamak F. Shahandashti,"In our work on the #security of 5 top password managers, @MikeyJonCarr and I found - many reported vulnerabilities persisting - one new attack allowing a malicious app to steal another app's saved password - 3 other issues Paper at @IFIP_SEC_2020 All vulnerabilities already reported to @dashlane @lastpass @keepersecurity @1Password and @roboform",https://arxiv.org/abs/2003.01985,"In this work we analyse five popular commercial password managers for security vulnerabilities. Our analysis is twofold. First, we compile a list of previously disclosed vulnerabilities through a comprehensive review of the academic and non-academic sources and test each password manager against all the previously disclosed vulnerabilities. We find a mixed picture of fixed and persisting vulnerabilities. Then we carry out systematic functionality tests on the considered password managers and find four new vulnerabilities. Notably, one of the new vulnerabilities we identified allows a malicious app to impersonate a legitimate app to two out of five widely-used password managers we tested and as a result steal the user's password for the targeted service. We implement a proof-of-concept attack to show the feasibility of this vulnerability in a real-life scenario. Finally, we report and reflect on our experience of responsible disclosure of the newly discovered vulnerabilities to the corresponding password manager vendors. ",Revisiting Security Vulnerabilities in Commercial Password Managers,2,"[""In our work on the #security of 5 top password managers, @MikeyJonCarr and I found\n- many reported vulnerabilities persisting \n- one new attack allowing a malicious app to steal another app's saved password\n- 3 other issues \nPaper at @IFIP_SEC_2020\n "", 'All vulnerabilities already reported to @dashlane @lastpass @keepersecurity @1Password and @roboform']",20,03,361
216,73,1320735090511618048,573729628,"Steve Taylor, PhD","Great new paper led by Vanderbilt postdoc, Dr Nihan Pol! We peek into the future and establish some scientific milestones for the field of PTA GW Astronomy. First detection of the GW background, then unveiling its origin, then digging into SMBH astro. ",https://arxiv.org/abs/2010.11950,"The NANOGrav Collaboration reported strong Bayesian evidence for a common-spectrum stochastic process in its 12.5-yr pulsar timing array dataset, with median characteristic strain amplitude at periods of a year of $A_{\rm yr} = 1.92^{+0.75}_{-0.55} \times 10^{-15}$. However, evidence for the quadrupolar Hellings \& Downs interpulsar correlations, which are characteristic of gravitational wave signals, was not yet significant. We emulate and extend the NANOGrav dataset, injecting a wide range of stochastic gravitational wave background (GWB) signals that encompass a variety of amplitudes and spectral shapes, and quantify three key milestones: (I) Given the amplitude measured in the 12.5 yr analysis and assuming this signal is a GWB, we expect to accumulate robust evidence of an interpulsar-correlated GWB signal with 15--17 yrs of data, i.e., an additional 2--5 yrs from the 12.5 yr dataset; (II) At the initial detection, we expect a fractional uncertainty of $40\%$ on the power-law strain spectrum slope, which is sufficient to distinguish a GWB of supermassive black-hole binary origin from some models predicting more exotic origins;(III) Similarly, the measured GWB amplitude will have an uncertainty of $44\%$ upon initial detection, allowing us to arbitrate between some population models of supermassive black-hole binaries. In addition, power-law models are distinguishable from those having low-frequency spectral turnovers once 20~yrs of data are reached. Even though our study is based on the NANOGrav data, we also derive relations that allow for a generalization to other pulsar-timing array datasets. Most notably, by combining the data of individual arrays into the International Pulsar Timing Array, all of these milestones can be reached significantly earlier. ","Astrophysics Milestones For Pulsar Timing Array Gravitational Wave
Detection",1,"['Great new paper led by Vanderbilt postdoc, Dr Nihan Pol! We peek into the future and establish some scientific milestones for the field of PTA GW Astronomy. First detection of the GW background, then unveiling its origin, then digging into SMBH astro. ']",20,10,258
217,147,1445363547039428612,1223527910503403521,Diptimoy Ghosh,Our new paper We show that Super-Kamiokande provides the strongest constraint on Dark Matter - Neutrino cross-section for DM masses below a few MeV when we utilize the boost of DM particles due to scattering with the diffuse supernova neutrino background.,https://arxiv.org/abs/2110.00025,"We derive new constraints on combination of dark matter - electron cross-section ($\sigma_{\chi e}$) and dark matter - neutrino cross-section ($\sigma_{\chi \nu}$) utilising the gain in kinetic energy of the dark matter (DM) particles due to scattering with the cosmic ray electrons and the diffuse supernova neutrino background (DSNB). Since the flux of the DSNB neutrinos is comparable to the CR electron flux in the energy range $\sim 1\,{\rm MeV} - 50 \,{\rm MeV}$, scattering with the DSNB neutrinos can also boost low-mass DM significantly in addition to the boost due to interaction with the cosmic ray electrons. We use the XENON1T as well as the Super-Kamiokande data to derive bounds on $\sigma_{\chi e}$ and $\sigma_{\chi \nu}$. While our bounds for $\sigma_{\chi e}$ are comparable with those in the literature, we show that the Super-Kamiokande experiment provides the strongest constraint on $\sigma_{\chi \nu}$ for DM masses below a few MeV. ",Exclusion limits on Dark Matter-Neutrino Scattering Cross-section,1,['Our new paper \nWe show that Super-Kamiokande provides the strongest constraint on Dark Matter - Neutrino cross-section for DM masses below a few MeV when we utilize the boost of DM particles due to scattering with the diffuse supernova neutrino background.'],21,10,262
218,5,1003222274684661760,57793813,Teppei Katori (香取哲平),"People asked me what I think about new #MiniBooNE paper (). I think we need new simulation. 2 benefits; 1st, we can confirm there is really excess. 2nd, we can produce covariance matrices for all #nuxsec data for global fit #BananaIsDead #ZombieBanana @ClareBurrage Geometry and neutrino interaction. We (I am one of authors) need to check all materials around the detector are correctly defined, then make sure all interactions (especially pi0 productions) are simulated correctly. Background simulation need to be checked more carefully.",https://arxiv.org/abs/1805.12028,"The MiniBooNE experiment at Fermilab reports results from an analysis of $\nu_e$ appearance data from $12.84 \times 10^{20}$ protons on target in neutrino mode, an increase of approximately a factor of two over previously reported results. A $\nu_e$ charged-current quasielastic event excess of $381.2 \pm 85.2$ events ($4.5 \sigma$) is observed in the energy range $200). I think we need new simulation. 2 benefits; 1st, we can confirm there is really excess. 2nd, we can produce covariance matrices for all #nuxsec data for global fit #BananaIsDead #ZombieBanana', '@ClareBurrage Geometry and neutrino interaction. We (I am one of authors) need to check all materials around the detector are correctly defined, then make sure all interactions (especially pi0 productions) are simulated correctly. Background simulation need to be checked more carefully.']",18,05,545
219,294,1320561969796161536,2506570218,Julian,We recently open-sourced SMARTS (); a simulation platform for RL and multi-agent research on autonomous driving. Focus is on realistic + diverse interactions. The associated paper () was accepted to CoRL. Hope you find it interesting! ,https://arxiv.org/abs/2010.09776,"Multi-agent interaction is a fundamental aspect of autonomous driving in the real world. Despite more than a decade of research and development, the problem of how to competently interact with diverse road users in diverse scenarios remains largely unsolved. Learning methods have much to offer towards solving this problem. But they require a realistic multi-agent simulator that generates diverse and competent driving interactions. To meet this need, we develop a dedicated simulation platform called SMARTS (Scalable Multi-Agent RL Training School). SMARTS supports the training, accumulation, and use of diverse behavior models of road users. These are in turn used to create increasingly more realistic and diverse interactions that enable deeper and broader research on multi-agent interaction. In this paper, we describe the design goals of SMARTS, explain its basic architecture and its key features, and illustrate its use through concrete multi-agent experiments on interactive scenarios. We open-source the SMARTS platform and the associated benchmark tasks and evaluation metrics to encourage and empower research on multi-agent learning for autonomous driving. Our code is available at this https URL ","SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for
Autonomous Driving",1,['We recently open-sourced SMARTS (); a simulation platform for RL and multi-agent research on autonomous driving. Focus is on realistic + diverse interactions. The associated paper () was accepted to CoRL. Hope you find it interesting! '],20,10,253
220,30,987044488009932800,185910194,Graham Neubig,"Our new #NAACL2018 paper examines ""When and Why are Pre-trained Embeddings Useful for NMT?"" Some conclusions intuitive (embeddings help most when systems are bad, but not too bad), and some surprising (explicit bilingual training of embeddings unnecessary) ",https://arxiv.org/abs/1804.06323,"The performance of Neural Machine Translation (NMT) systems often suffers in low-resource scenarios where sufficiently large-scale parallel corpora cannot be obtained. Pre-trained word embeddings have proven to be invaluable for improving performance in natural language analysis tasks, which often suffer from paucity of data. However, their utility for NMT has not been extensively explored. In this work, we perform five sets of experiments that analyze when we can expect pre-trained word embeddings to help in NMT tasks. We show that such embeddings can be surprisingly effective in some cases -- providing gains of up to 20 BLEU points in the most favorable setting. ","When and Why are Pre-trained Word Embeddings Useful for Neural Machine
Translation?",1,"['Our new #NAACL2018 paper examines ""When and Why are Pre-trained Embeddings Useful for NMT?"" \nSome conclusions intuitive (embeddings help most when systems are bad, but not too bad), and some surprising (explicit bilingual training of embeddings unnecessary) ']",18,04,270
221,1,1503117089778851843,2956121356,Russ Salakhutdinov,"New #ICLR2022 paper: Learning Weakly-Supervised Contrastive Representations using auxiliary information. As for performance, the auxiliary-information-infused self-supervised learning comes closer to supervised learning Paper Code with Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, and Louis-Philippe Morency",https://arxiv.org/abs/2202.06670,"We argue that a form of the valuable information provided by the auxiliary information is its implied data clustering information. For instance, considering hashtags as auxiliary information, we can hypothesize that an Instagram image will be semantically more similar with the same hashtags. With this intuition, we present a two-stage weakly-supervised contrastive learning approach. The first stage is to cluster data according to its auxiliary information. The second stage is to learn similar representations within the same cluster and dissimilar representations for data from different clusters. Our empirical experiments suggest the following three contributions. First, compared to conventional self-supervised representations, the auxiliary-information-infused representations bring the performance closer to the supervised representations, which use direct downstream labels as supervision signals. Second, our approach performs the best in most cases, when comparing our approach with other baseline representation learning methods that also leverage auxiliary data information. Third, we show that our approach also works well with unsupervised constructed clusters (e.g., no auxiliary information), resulting in a strong unsupervised representation learning approach. ",Learning Weakly-Supervised Contrastive Representations,2,"['New #ICLR2022 paper: Learning Weakly-Supervised Contrastive Representations using auxiliary information.\n\nAs for performance, the auxiliary-information-infused self-supervised learning comes closer to supervised learning\n\nPaper \nCode ', 'with Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, and Louis-Philippe Morency']",22,02,343
222,209,1448459267346821127,929633835309981698,mmatsuo,"Our paper is now on arXiv😊We propose spin-motive force driven by a surface acoustic wave, resulting in both dc & second harmonic voltage. In contrast to the conventional ones, it requires no sophisticated device structures or strong spin-orbit materials. ",https://arxiv.org/abs/2110.06552,"The spin-motive force (SMF) in a simple ferromagnetic monolayer caused by a surface acoustic wave is studied theoretically via spin-vorticity coupling (SVC). The SMF has two mechanisms. The first is the SVC-driven SMF, which produces the first harmonic electromotive force, and the second is the interplay between the SVC and the magnetoelastic coupling, which produces the d.c. and second harmonic electromotive forces. We show that these electric voltages induced by a Rayleigh-type surface acoustic wave can be detected in polycrystalline nickel. No sophisticated device structures, non-collinear magnetic structures, or strong spin-orbit materials are used in our approach. Consequently, it is intended to broaden the spectrum of SMF applications considerably. ",Spin elastodynamic motive force,1,"['Our paper is now on arXiv😊We propose spin-motive force driven by a surface acoustic wave, resulting in both dc & second harmonic voltage. In contrast to the conventional ones, it requires no sophisticated device structures or strong spin-orbit materials. \n']",21,10,261
223,137,1389878084543860736,513464916,Carlos Sánchez Muñoz,"New paper on the arxiv today, in colaboration with G. Frascella and Frank Schlawin: ""Quantum metrology of two-photon absorption"" What is this about? A short thread 👇🧵 The simultaneous absorption of 2 photons by a quantum system is very important tool for spectroscopy and microscopy. E.g., 2-photon microscopy in life sciences allows to get images with higher spatial resolution, deeper tissue penetration, and less damage to the sample! 💥💥 Here we tackle the following question: if we use light to shine a system that absorbs photons in pairs, and analyse the resulting state of the light, how much can we learn about that system? Is our learning much better if we drive the system with *quantum* states of light? To answer this, we computed the precision of estimation of two-photon absorption cross sections. In the limit of very small cross sections, we find that squeezed states have no fundamental limits to the precision you can achieve! 🤯 (In more technical words, the quantum Fisher information diverges). We can’t get too excited about this though, since the measurement one would need to do to achieve this precision is not realizable in practice, at least not easily 😅 Nevertheless, looking at the precision you achieve by standard homodyne measurements, we find that squeezed yield an inverse quadratic scaling of precision with photon number 1/N²! (some might call this super-Heisenberg scaling). 🚀🚀 This is much better than coherent states, that scale as 1/N^(3/2) (which is not bad either).",https://arxiv.org/abs/2105.01561,"Two-photon absorption (TPA) is of fundamental importance in super-resolution imaging and spectroscopy. Its nonlinear character allows for the prospect of using quantum resources, such as entanglement, to improve measurement precision or to gain new information on, e.g., ultrafast molecular dynamics. Here, we establish the metrological properties of nonclassical squeezed light sources for precision measurements of TPA cross sections. We find that there is no fundamental limit for the precision achievable with squeezed states in the limit of very small cross sections. Considering the most relevant measurement strategies -- namely photon counting and quadrature measurements -- we determine the quantum advantage provided by squeezed states as compared to coherent states. We find that squeezed states outperform the precision achievable by coherent states when performing quadrature measurements, which provide improved scaling of the Fisher information with respect to the mean photon number $\sim n^4$. Due to the interplay of the incoherent nature and the nonlinearity of the TPA process, unusual scaling can also be obtained with coherent states, which feature a $\sim n^3$ scaling in both quadrature and photon-counting measurements. ",Quantum metrology of two-photon absorption,7,"['New paper on the arxiv today, in colaboration with G. Frascella and Frank Schlawin:\n\n""Quantum metrology of two-photon absorption""\n\n\nWhat is this about? A short thread 👇🧵', 'The simultaneous absorption of 2 photons by a quantum system is very important tool for spectroscopy and microscopy. E.g., 2-photon microscopy in life sciences allows to get images with higher spatial resolution, deeper tissue penetration, and less damage to the sample! 
💥💥 https://t.co/RnXXtp6OrN', 'Here we tackle the following question: if we use light to shine a system that absorbs photons in pairs, and analyse the resulting state of the light, how much can we learn about that system? Is our learning much better if we drive the system with *quantum* states of light? https://t.co/fR8lNrjk0x', 'To answer this, we computed the precision of estimation of two-photon absorption cross sections. In the limit of very small cross sections, we find that squeezed states have no fundamental limits to the precision you can achieve! 🤯', '(In more technical words, the quantum Fisher information diverges). We can’t get too excited about this though, since the measurement one would need to do to achieve this precision is not realizable in practice, at least not easily 😅', 'Nevertheless, looking at the precision you achieve by standard homodyne measurements, we find that squeezed yield an inverse quadratic scaling of precision with photon number 1/N²! (some might call this super-Heisenberg scaling). 🚀🚀 https://t.co/zW3fVFGC5h', 'This is much better than coherent states, that scale as 1/N^(3/2) (which is not bad either).']",21,05,1534
224,1,1104075634593095681,4870078413,Sam Schoenholz,"1/3 Our new paper analyzing batch normalization in neural networks at initialization is out (and will be at @iclr2019). We find that batch norm + MLPs always feature exploding gradients for any choice of nonlinearity and batch size. 2/3 We propose several schemes to ameliorate this by careful parameter tuning. The formalism here also opens the door to performing Bayesian inference on Gaussian Process that correspond to neural networks with batch normalization. 3/3 As always, this was a really fun collaboration with @TheGregYang, Jeffrey Pennington, @vinaysrao, @jaschasd.",https://arxiv.org/abs/1902.08129,"We develop a mean field theory for batch normalization in fully-connected feedforward neural networks. In so doing, we provide a precise characterization of signal propagation and gradient backpropagation in wide batch-normalized networks at initialization. Our theory shows that gradient signals grow exponentially in depth and that these exploding gradients cannot be eliminated by tuning the initial weight variances or by adjusting the nonlinear activation function. Indeed, batch normalization itself is the cause of gradient explosion. As a result, vanilla batch-normalized networks without skip connections are not trainable at large depths for common initialization schemes, a prediction that we verify with a variety of empirical simulations. While gradient explosion cannot be eliminated, it can be reduced by tuning the network close to the linear regime, which improves the trainability of deep batch-normalized networks without residual connections. Finally, we investigate the learning dynamics of batch-normalized networks and observe that after a single step of optimization the networks achieve a relatively stable equilibrium in which gradients have dramatically smaller dynamic range. Our theory leverages Laplace, Fourier, and Gegenbauer transforms and we derive new identities that may be of independent interest. ",A Mean Field Theory of Batch Normalization,3,"['1/3 Our new paper analyzing batch normalization in neural networks at initialization is out (and will be at @iclr2019). We find that batch norm + MLPs always feature exploding gradients for any choice of nonlinearity and batch size. ', '2/3 We propose several schemes to ameliorate this by careful parameter tuning. The formalism here also opens the door to performing Bayesian inference on Gaussian Process that correspond to neural networks with batch normalization.', '3/3 As always, this was a really fun collaboration with @TheGregYang, Jeffrey Pennington, @vinaysrao, @jaschasd.']",19,02,591
225,113,1052329766534008832,41280228,Peter J. Liu,"Most abstractive summarization models based on neural networks require many (expensive to obtain) document-summary pairs to train. In our recent paper, we propose a neural architecture to do abstractive multi-document summarization with no examples: ",https://arxiv.org/abs/1810.05739,"Abstractive summarization has been studied using neural sequence transduction methods with datasets of large, paired document-summary examples. However, such datasets are rare and the models trained from them do not generalize to other domains. Recently, some progress has been made in learning sequence-to-sequence mappings with only unpaired examples. In our work, we consider the setting where there are only documents (product or business reviews) with no summaries provided, and propose an end-to-end, neural model architecture to perform unsupervised abstractive summarization. Our proposed model consists of an auto-encoder where the mean of the representations of the input reviews decodes to a reasonable summary-review while not relying on any review-specific features. We consider variants of the proposed architecture and perform an ablation study to show the importance of specific components. We show through automated metrics and human evaluation that the generated summaries are highly abstractive, fluent, relevant, and representative of the average sentiment of the input reviews. Finally, we collect a reference evaluation dataset and show that our model outperforms a strong extractive baseline. ","MeanSum: A Neural Model for Unsupervised Multi-document Abstractive
Summarization",1,"['Most abstractive summarization models based on neural networks require many (expensive to obtain) document-summary pairs to train. In our recent paper, we propose a neural architecture to do abstractive multi-document summarization with no examples: ']",18,10,256
226,55,1217459930954981376,53464710,Eric Wong,"1/ New paper on an old topic: turns out, FGSM works as well as PGD for adversarial training!* *Just avoid catastrophic overfitting, as seen in picture Paper: Code: Joint work with @_leslierice and @zicokolter to be at #ICLR2020 2/ Summary: Changing the initialization to be uniformly random is the main contributor towards successful FGSM adversarial training. Generated adversarial examples need to be able to actually span the entire threat model, but otherwise don't need to be that strong for training. 3/ Save your valuable time with cyclic learning rates and mixed precision! These techniques can train robust CIFAR10 and ImageNet in 6 min and 12 hrs using FGSM adv training. Super easy to incorporate (just add a 3-4 lines of code), and can accelerate any training method. 4/ Did you try FGSM before and it didn't work? It probably failed due to ""catastrophic overfitting"": plotting the learning curves reveals that, if done incorrectly, FGSM adv training learns a robust classifier, up until it suddenly and rapidly deteriorates within a single epoch.",https://arxiv.org/abs/2001.03994,"Adversarial training, a method for learning robust deep networks, is typically assumed to be more expensive than traditional training due to the necessity of constructing adversarial examples via a first-order method like projected gradient decent (PGD). In this paper, we make the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach that was previously believed to be ineffective, rendering the method no more costly than standard training in practice. Specifically, we show that adversarial training with the fast gradient sign method (FGSM), when combined with random initialization, is as effective as PGD-based training but has significantly lower cost. Furthermore we show that FGSM adversarial training can be further accelerated by using standard techniques for efficient training of deep networks, allowing us to learn a robust CIFAR10 classifier with 45% robust accuracy to PGD attacks with $\epsilon=8/255$ in 6 minutes, and a robust ImageNet classifier with 43% robust accuracy at $\epsilon=2/255$ in 12 hours, in comparison to past work based on ""free"" adversarial training which took 10 and 50 hours to reach the same respective thresholds. Finally, we identify a failure mode referred to as ""catastrophic overfitting"" which may have caused previous attempts to use FGSM adversarial training to fail. All code for reproducing the experiments in this paper as well as pretrained model weights are at this https URL ",Fast is better than free: Revisiting adversarial training,4,"['1/ New paper on an old topic: turns out, FGSM works as well as PGD for adversarial training!* \n\n*Just avoid catastrophic overfitting, as seen in picture\n\nPaper: \nCode: \n\nJoint work with @_leslierice and @zicokolter to be at #ICLR2020 ', ""2/ Summary: Changing the initialization to be uniformly random is the main contributor towards successful FGSM adversarial training.\n\nGenerated adversarial examples need to be able to actually span the entire threat model, but otherwise don't need to be that strong for training."", '3/ Save your valuable time with cyclic learning rates and mixed precision! 
These techniques can train robust CIFAR10 and ImageNet in 6 min and 12 hrs using FGSM adv training.\n\nSuper easy to incorporate (just add a 3-4 lines of code), and can accelerate any training method.', '4/ Did you try FGSM before and it didn\'t work? It probably failed due to ""catastrophic overfitting"": plotting the learning curves reveals that, if done incorrectly, FGSM adv training learns a robust classifier, up until it suddenly and rapidly deteriorates within a single epoch.']",20,01,1081
227,82,1372922793382322179,140691162,Dr. Wasikul Islam,"Check out our new Phenomenology paper ""Model-independent searches for new physics in multi-body invariant masses"" : . Happy to be part of some experimental explorations of the same on behalf of ATLAS Experiment at CERN. :) @SaschaCaron Thanks ! Will check.",https://arxiv.org/abs/2103.10217,"Model-independent searches for physics beyond the Standard Model typically focus on invariant masses of two objects (jets, leptons or photons). In this study we explore opportunities for similar model-agnostic searches in multi-body invariant masses. In particular, we focus on the situations when new physics can be observed in a model-independent way in three- and four-body invariant masses of jets and leptons. Such searches may have good prospects in finding new physics in the situations when two-body invariant masses, that have been extensively explored at collider experiments in the past, cannot provide sufficient signatures for experimental observations. ","Model-independent searches for new physics in multi-body invariant
masses",2,"['Check out our new Phenomenology paper ""Model-independent searches for new physics in multi-body invariant masses"" : .\n\nHappy to be part of some experimental explorations of the same on behalf of ATLAS Experiment at CERN. :)', '@SaschaCaron Thanks ! Will check.']",21,03,263
228,26,1277888145267331072,841031248839618560,Relja Arandjelović,"Our new paper ""Self-Supervised MultiModal Versatile Networks"" learns from vision, audio and (ASR) language, achieves SOTA self-supervised video and audio representations, and we can deflate nets trained on videos to apply them on images. @jalayrac, @arecasens, Rosalia Schneider, myself, @jramapuram, @JeffreyDeFauw, Lucas Smaira, @sedielem, Andrew Zisserman @rosaliags",https://arxiv.org/abs/2006.16228,"Videos are a rich source of multi-modal supervision. In this work, we learn representations using self-supervision by leveraging three modalities naturally present in videos: visual, audio and language streams. To this end, we introduce the notion of a multimodal versatile network -- a network that can ingest multiple modalities and whose representations enable downstream tasks in multiple modalities. In particular, we explore how best to combine the modalities, such that fine-grained representations of the visual and audio modalities can be maintained, whilst also integrating text into a common embedding. Driven by versatility, we also introduce a novel process of deflation, so that the networks can be effortlessly applied to the visual data in the form of video or a static image. We demonstrate how such networks trained on large collections of unlabelled video data can be applied on video, video-text, image and audio tasks. Equipped with these representations, we obtain state-of-the-art performance on multiple challenging benchmarks including UCF101, HMDB51, Kinetics600, AudioSet and ESC-50 when compared to previous self-supervised work. Our models are publicly available. ",Self-Supervised MultiModal Versatile Networks,3,"['Our new paper ""Self-Supervised MultiModal Versatile Networks"" learns from vision, audio and (ASR) language, achieves SOTA self-supervised video and audio representations, and we can deflate nets trained on videos to apply them on images. ', '@jalayrac, @arecasens, Rosalia Schneider, myself, @jramapuram, @JeffreyDeFauw, Lucas Smaira, @sedielem, Andrew Zisserman', '@rosaliags']",20,06,376
229,32,1032431626431746049,65137727,Jonathan Mboyo Esole,"My new paper on ""Characteristic numbers of elliptic fibrations with non-trivial Mordell-Weil groups"" in collaboration with Monica Kang is out! I would like to dedicate it to my students from @MalaikaDRC as they get ready for a new year. @NextEinsteinFor ",https://arxiv.org/abs/1808.07054,"We compute characteristic numbers of elliptically fibered fourfolds with multisections or non-trivial Mordell-Weil groups. We first consider the models of type E$_{9-d}$ with $d=1,2,3,4$ whose generic fibers are normal elliptic curves of degree $d$. We then analyze the characteristic numbers of the $Q_7$-model, which provides a smooth model for elliptic fibrations of rank one and generalizes the E$_5$, E$_6$, and E$_7$-models. Finally, we examine the characteristic numbers of $G$-models with $G=\text{SO}(n)$ with $n=3,4,5,6$ and $G=\text{PSU}(3)$ whose Mordell-Weil groups are respectively $\mathbb{Z}/2\mathbb{Z}$ and $\mathbb{Z}/3 \mathbb{Z}$. In each case, we compute the Chern and Pontryagin numbers, the Euler characteristic, the holomorphic genera, the Todd-genus, the L-genus, the A-genus, and the eight-form curvature invariant from M-theory. ","Characteristic numbers of elliptic fibrations with non-trivial
Mordell-Weil groups",1,"['My new paper on ""Characteristic numbers of elliptic fibrations with non-trivial Mordell-Weil groups"" in collaboration with Monica Kang is out! I would like to dedicate it to my students from @MalaikaDRC as they get ready for a new year. @NextEinsteinFor ']",18,08,267
230,96,1469138089708855297,1252993183686025219,Oliver Philcox,"New paper! Misha Ivanov & I present the first joint full-shape analysis of the galaxy power spectrum and bispectrum using @sdssurveys data. We find sigma8 = 0.72+-0.03, H0 = 68.3+-0.8, S8 = 0.75+-0.04, with the bispectrum improving sigma8 by 13%! The analysis uses the power spectrum multipoles, the real-space power spectrum extension, the reconstructed power spectrum, and the bispectrum model. For the first time, spectra are measured using *unwindowed* estimators, so the mask doesn't need to be included in the theory! Our LCDM constraints are mostly consistent with Planck, but we find a slightly low S8, matching weak lensing probes. We also get strong constraints on galaxy bias parameters! All the data products are publicly available () and the pipeline can be easily reapplied to @desisurvey and @ESA_Euclid data. Coming soon: bispectrum multipoles, neutrino masses, primordial non-Gaussianity...",http://arxiv.org/abs/2112.04515,"We present a full $\Lambda$CDM analysis of the BOSS DR12 dataset, including information from the power spectrum multipoles, the real-space power spectrum, the reconstructed power spectrum and the bispectrum monopole. This is the first analysis to feature a complete treatment of the galaxy bispectrum, including a consistent theoretical model and without large-scale cuts. Unlike previous works, the statistics are measured using window-free estimators: this greatly reduces computational costs by removing the need to window-convolve the theory model. Our pipeline is tested using a suite of high-resolution mocks and shown to be robust and precise, with systematic errors far below the statistical thresholds. Inclusion of the bispectrum yields consistent parameter constraints and shrinks the $\sigma_8$ posterior by $13\%$ to reach $<5\%$ precision; less conservative analysis choices would reduce the error-bars further. Our constraints are broadly consistent with Planck: in particular, we find $H_0 = 69.6^{+1.1}_{-1.3}\,\mathrm{km}\,\mathrm{s}^{-1}\mathrm{Mpc}^{-1}$, $\sigma_8 = 0.692^{+0.035}_{-0.041}$ and $n_s=0.870^{+0.067}_{-0.064}$, including a BBN prior on the baryon density. When $n_s$ is set by Planck, we find $H_0 = 68.31^{+0.83}_{-0.86}\,\mathrm{km}\,\mathrm{s}^{-1}\mathrm{Mpc}^{-1}$ and $\sigma_8 = 0.722^{+0.032}_{-0.036}$. Our $S_8$ posterior, $0.751\pm0.039$, is consistent with weak lensing studies, but lower than Planck. Constraints on the higher-order bias parameters are significantly strengthened from the inclusion of the bispectrum, and we find no evidence for deviation from the dark matter halo bias relations. These results represent the most complete full-shape analysis of BOSS DR12 to-date, and the corresponding spectra will enable a variety of beyond-$\Lambda$CDM analyses, probing phenomena such as the neutrino mass and primordial non-Gaussianity. ","The BOSS DR12 Full-Shape Cosmology: $\Lambda$CDM Constraints from the
Large-Scale Galaxy Power Spectrum and Bispectrum Monopole",4,"['New paper! Misha Ivanov & I present the first joint full-shape analysis of the galaxy power spectrum and bispectrum using @sdssurveys data.\n\nWe find sigma8 = 0.72+-0.03, H0 = 68.3+-0.8, S8 = 0.75+-0.04, with the bispectrum improving sigma8 by 13%!\n\n ', ""The analysis uses the power spectrum multipoles, the real-space power spectrum extension, the reconstructed power spectrum, and the bispectrum model. For the first time, spectra are measured using *unwindowed* estimators, so the mask doesn't need to be included in the theory!"", 'Our LCDM constraints are mostly consistent with Planck, but we find a slightly low S8, matching weak lensing probes. We also get strong constraints on galaxy bias parameters! https://t.co/LZ2AnQ1Ikt', 'All the data products are publicly available (https://t.co/NHYkXag5rx) and the pipeline can be easily reapplied to @desisurvey and @ESA_Euclid data.\n\nComing soon: bispectrum multipoles, neutrino masses, primordial non-Gaussianity...']",21,12,934
231,152,1511156712388005888,15327263,Carl-Johan Haster,"New paper on the ArXiv led by @MITKavli grad student @sylvia_bisco. We look at the population of observed binary black holes (BBHs) to see if there are any correlations between the spin-mass-redshift properties of the population, and we found something! There is a robust correlation between the effective binary spin (chiEff) and redshift, in that the width of the chiEff distribution increases with increasing redshift. Apart from Sylvia and myself, this project includes Tom Callister (from @FlatironCCA), Ken Ng (also a grad student at @MITKavli) as well as @sasomao and @farrwill. And it continues on the exciting @FlatironCCA-@MITKavli ""correlation-collaboration"" that was started last year with where we found a correlation between chiEff and the mass ratio, again for the population of observed BBHs.",https://arxiv.org/abs/2204.01578,"The population-level distributions of the masses, spins, and redshifts of binary black holes (BBHs) observed using gravitational waves can shed light on how these systems form and evolve. Because of the complex astrophysical processes shaping the inferred BBH population, models allowing for correlations among these parameters will be necessary to fully characterize these sources. We hierarchically analyze the BBH population detected by LIGO and Virgo with a model allowing for correlations between the effective aligned spin and the primary mass and redshift. We find that the width of the effective spin distribution grows with redshift at 98.6% credibility. We determine this trend to be robust under the application of several alternative models and additionally verify that such a correlation is unlikely to be spuriously introduced using a simulated population. We discuss the possibility that this correlation could be due to a change in the natal black hole spin distribution with redshift. ",The binary black hole spin distribution likely broadens with redshift,4,"['New paper on the ArXiv led by @MITKavli grad student @sylvia_bisco.\nWe look at the population of observed binary black holes (BBHs) to see if there are any correlations between the spin-mass-redshift properties of the population, and we found something!\n', 'There is a robust correlation between the effective binary spin (chiEff) and redshift, in that the width of the chiEff distribution increases with increasing redshift.', 'Apart from Sylvia and myself, this project includes Tom Callister (from @FlatironCCA), Ken Ng (also a grad student at @MITKavli) as well as @sasomao and @farrwill.', 'And it continues on the exciting @FlatironCCA-@MITKavli ""correlation-collaboration"" that was started last year with https://t.co/9axMQU2o5U where we found a correlation between chiEff and the mass ratio, again for the population of observed BBHs.']",22,04,822
232,59,1395250551097593860,1334580500749553665,Miguel Vioque,"We have a new paper out! ""First detection of a disk free of volatile elements around a young A-type star: A sign of collisions between rocky planets?"" We conclude that HD 152384 is surrounded by a tenuous circumstellar disk. We suggest that this disk may be due to collisions in a newly formed planetary system.",https://arxiv.org/abs/2105.08327,"Aims. We present the first detailed analysis of the astrophysical parameters of the poorly studied Sco-Cen member HD 152384 and its circumstellar environment. Methods. We analyze newly obtained optical-near-IR XSHOOTER spectra, as well as archival TESS data, of HD 152384. In addition, we use literature photometric data to construct a detailed spectral energy distribution (SED) of the star. Results. The photospheric absorption lines in the spectrum of HD 152384 are characteristic of a A0 V star, for which we derive a stellar mass of 2.1 +/- 0.1 M_sun and a stellar age > 4.5 Myr. Superimposed on the photospheric absorption, the optical spectrum also displays double-peaked emission lines of Ca II, Fe I, Mg I and Si I, typical of circumstellar disks. Notably, all Hydrogen and Helium lines appear strictly in absorption. A toy model shows that the observed emission line profiles can be reproduced by emission from a compact (radius < 0.3 au) disk seen at an inclination of ~24 degrees. Further evidence for the presence of circumstellar material comes from the detection of a moderate infrared excess in the SED, similar to those found in extreme debris disk systems. Conclusions. We conclude that HD 152384 is surrounded by a tenuous circumstellar disk which, although rich in refractory elements, is highly depleted of volatile elements. To the best of our knowledge such a disk is unique within the group of young stars. However, it is reminiscent of the disks seen in some white dwarfs, which have been attributed to the disruption of rocky planets. We suggest that the disk around HD 152384 may have a similar origin and may be due to collisions in a newly formed planetary system. ","First detection of a disk free of volatile elements around a young
A-type star: A sign of collisions between rocky planets?",2,"['We have a new paper out! \n\n""First detection of a disk free of volatile elements around a young A-type star: A sign of collisions between rocky planets?""', 'We conclude that HD 152384 is surrounded by a tenuous circumstellar disk. We suggest that this disk may be due to collisions in a newly formed planetary system.']",21,05,318
233,90,1425394215958241285,952949678533849088,Kareem El-Badry,"New paper! We present first results from a survey of compact binary stars with ongoing and just-terminated mass transfer. 1/ We select targets from below the main-sequence in the #GaiaMission color-magnitude diagram that have @ztfsurvey light curves dominated by ellipsoidal variability (due to tidal deformation). 2/ This selects objects that (a) are hotter and smaller than normal main-sequence stars, and (b) are dominated a star (the ""donor""), not an accretion disk. We vet targets with spectroscopic follow-up. 3/ Our final sample of objects lives in a previously (almost) empty region of the HR diagram, between extremely low-mass white dwarfs and main-sequence stars (and ""normal"" cataclysmic variable donors). 4/ Most of the objects we find are hotter than any previously identified similar objects. We think this is because they have undergone more nuclear evolution. That is,... 5/ ... these objects form when mass is stripped off the outside of a star by a white dwarf companion. We think mass transfer began late in these objects, after a helium core had started to form. Today, just the helium core and a thin envelope are left. 6/ All the hottest (Teff >~ 7000 K) objects appear to have just ended mass transfer, while the cooler ones still have ongoing mass transfer. We think this means magnetic braking becomes inefficient at Teff >~ 7000 K, when stars lose their convective envelopes, slowing inspiral. 7/ This systematic survey allows us to derive a space density and birth rate for these evolved-CVs-turning-into-extremely-low-mass-white-dwarfs. Our inferred birth rate is about half that of the birth rate of ultracompact mass-transferring ""AM-CVn"" binaries. 8/ We think many of these objects will turn into AM-CVn binaries within a few Gyr. The birth rate suggests this channel may be (is likely to be?) one of the dominant formation channels for AM-CVn binaries. We have more observations and analysis of these objects in the works! 9/9.",https://arxiv.org/abs/2108.04255,"We present a systematic survey for mass-transferring and recently-detached cataclysmic variables (CVs) with evolved secondaries, which are progenitors of extremely low-mass white dwarfs (ELM WDs), AM CVn systems, and detached ultracompact binaries. We select targets below the main sequence in the Gaia color-magnitude diagram with ZTF light curves showing large-amplitude ellipsoidal variability and orbital period $P_{\rm orb} < 6$ hr. This yields 51 candidates brighter than G=18, of which we have obtained many-epoch spectra for 21. We confirm all 21 to be completely -- or nearly -- Roche lobe filling close binaries. 13 show evidence of ongoing mass transfer, which has likely just ceased in the other 8. Most of the secondaries are hotter than any previously known CV donors, with temperatures $4700 ', 'We select targets from below the main-sequence in the #GaiaMission color-magnitude diagram that have @ztfsurvey light curves dominated by ellipsoidal variability (due to tidal deformation). 2/ https://t.co/HVLOtbWmob', 'This selects objects that (a) are hotter and smaller than normal main-sequence stars, and (b) are dominated a star (the ""donor""), not an accretion disk. We vet targets with spectroscopic follow-up. 3/ https://t.co/zuRSxwUMgC', 'Our final sample of objects lives in a previously (almost) empty region of the HR diagram, between extremely low-mass white dwarfs and main-sequence stars (and ""normal"" cataclysmic variable donors). 
4/ https://t.co/mOp0Cft2kt', 'Most of the objects we find are hotter than any previously identified similar objects. We think this is because they have undergone more nuclear evolution. That is,... 5/ https://t.co/WGMSybcN3Z', '... these objects form when mass is stripped off the outside of a star by a white dwarf companion. We think mass transfer began late in these objects, after a helium core had started to form. Today, just the helium core and a thin envelope are left. 6/', 'All the hottest (Teff >~ 7000 K) objects appear to have just ended mass transfer, while the cooler ones still have ongoing mass transfer. We think this means magnetic braking becomes inefficient at Teff >~ 7000 K, when stars lose their convective envelopes, slowing inspiral. 7/', 'This systematic survey allows us to derive a space density and birth rate for these evolved-CVs-turning-into-extremely-low-mass-white-dwarfs. Our inferred birth rate is about half that of the birth rate of ultracompact mass-transferring ""AM-CVn"" binaries. 8/ https://t.co/tsq5vhKKan', 'We think many of these objects will turn into AM-CVn binaries within a few Gyr. The birth rate suggests this channel may be (is likely to be?) one of the dominant formation channels for AM-CVn binaries. \nWe have more observations and analysis of these objects in the works! 9/9.']",21,08,2015
234,61,1395182893102624772,2377407248,Daniel Whiteson,"New paper with Aishik Ghosh and @BPNachman about how networks learn when your training samples have uncertainties: What if you are learning the wrong thing? It’s a big deal. What if your data is controlled by some unknown nuisance parameter. You might train on simulated data like this (dots) and learn a NN (shading): But what if the real data are actually like this? Then you have the wrong NN. You want this one: Recently many people approach this by trying to make your NN insensitive to the unknown parameter. That doesn’t work here. There’s no network that works for all rotations. In our paper, we use a parameterized network, where the network output is a function of the unknown parameter z. Then we can treat it statistically and profile over it. This gives the optimal performance! @kratsg At training, you give it examples with true values of z. At inference, you give it data and get a value as a function of z. @kratsg Technically you just treat z as an additional input, so to evaluate the NN, you need to specify z. I'll find @Aishik_Ghosh_'s code and post a link. @HEPfeickert @kratsg @Aishik_Ghosh_ Absolutely.",https://arxiv.org/abs/2105.08742,"Machine learning techniques are becoming an integral component of data analysis in High Energy Physics (HEP). These tools provide a significant improvement in sensitivity over traditional analyses by exploiting subtle patterns in high-dimensional feature spaces. These subtle patterns may not be well-modeled by the simulations used for training machine learning methods, resulting in an enhanced sensitivity to systematic uncertainties. Contrary to the traditional wisdom of constructing an analysis strategy that is invariant to systematic uncertainties, we study the use of a classifier that is fully aware of uncertainties and their corresponding nuisance parameters. We show that this dependence can actually enhance the sensitivity to parameters of interest. Studies are performed using a synthetic Gaussian dataset as well as a more realistic HEP dataset based on Higgs boson decays to tau leptons. For both cases, we show that the uncertainty aware approach can achieve a better sensitivity than alternative machine learning strategies. ",Uncertainty Aware Learning for High Energy Physics,10,"['New paper with Aishik Ghosh and @BPNachman about how networks learn when your training samples have uncertainties:\n\n\n\nWhat if you are learning the wrong thing? It’s a big deal.', 'What if your data is controlled by some unknown nuisance parameter. You might train on simulated data like this (dots) and learn a NN (shading): https://t.co/ha16BptgyF', 'But what if the real data are actually like this? https://t.co/tEeJILWY02', 'Then you have the wrong NN. You want this one: https://t.co/DBfrHtwDAy', 'Recently many people approach this by trying to make your NN insensitive to the unknown parameter. That doesn’t work here. There’s no network that works for all rotations.', 'In our paper, we use a parameterized network, where the network output is a function of the unknown parameter z. Then we can treat it statistically and profile over it. https://t.co/K6qR0uPsN8', 'This gives the optimal performance! https://t.co/ZrATUCaTaL', '@kratsg At training, you give it examples with true values of z. \n\nAt inference, you give it data and get a value as a function of z.', ""@kratsg Technically you just treat z as an additional input, so to evaluate the NN, you need to specify z. 
I'll find @Aishik_Ghosh_'s code and post a link."", '@HEPfeickert @kratsg @Aishik_Ghosh_ Absolutely.']",21,05,1171
235,94,1146735850136449024,1192577568,Daniel Worrall,"NEW PAPER @miccai2019! ""Supervised Uncertainty Quantification for Segmentation with Multiple Annotations"". We adapt Prob Unet to output epistemic & CALIBRATED aleatoric uncertainties . Work w. Shi Hu, Stefan Knegt, @BasVeeling, Henkjan Huisman & @wellingmax @stefanknegt",https://arxiv.org/abs/1907.01949,"The accurate estimation of predictive uncertainty carries importance in medical scenarios such as lung node segmentation. Unfortunately, most existing works on predictive uncertainty do not return calibrated uncertainty estimates, which could be used in practice. In this work we exploit multi-grader annotation variability as a source of 'groundtruth' aleatoric uncertainty, which can be treated as a target in a supervised learning problem. We combine this groundtruth uncertainty with a Probabilistic U-Net and test on the LIDC-IDRI lung nodule CT dataset and MICCAI2012 prostate MRI dataset. We find that we are able to improve predictive uncertainty estimates. We also find that we can improve sample accuracy and sample diversity. ","Supervised Uncertainty Quantification for Segmentation with Multiple
Annotations",2,"['NEW PAPER @miccai2019! ""Supervised Uncertainty Quantification for Segmentation with Multiple Annotations"". We adapt Prob Unet to output epistemic & CALIBRATED aleatoric uncertainties . Work w. Shi Hu, Stefan Knegt, @BasVeeling, Henkjan Huisman & @wellingmax ', '@stefanknegt']",19,07,283
236,58,1506998476005126152,484490200,Ilan Price,"Can ML help us produce cheap, reliable, high-resolution stochastic rain forecasts? In our new #AISTATS2022 paper we develop a promising approach using deep generative models. (with @raspstephan, done while at @climateai) 🧵 1/7 Weather forecasts are typically produced by numerical weather models (NWMs). These are very expensive to run, and so global NWMs are run at relatively low spatial resolution. Only richer countries can run their own regional, high-res NWMs. 2/7 We propose and train a GAN model - CorrectorGAN - to map from low resolution NWM ensembles to distributions of high-resolution forecasts. 3/7 This goal is to combine (1) bias-correction, (2) super-resolution, and (3) uncertainty generation and calibration, producing reliable high resolution stochastic forecasts without running high-resolution NWMs. 4/7 We compare against an operational regional NWM (HREF), as well as DL based methods. CorrectorGAN approaches HREF’s performance on CRPS, Brier scores, and reliability, but its forecasts are generated at a tiny fraction of the cost (just generator forward passes). 5/7 The architecture and training were informed by the specifics of the task - for details on these, and on the experimental setup and evaluation, check out the paper. 6/7 Big thanks @raspstephan and @climateai for the great collaboration. 7/7",https://arxiv.org/abs/2203.12297,"Accurately forecasting extreme rainfall is notoriously difficult, but is also ever more crucial for society as climate change increases the frequency of such extremes. Global numerical weather prediction models often fail to capture extremes, and are produced at too low a resolution to be actionable, while regional, high-resolution models are hugely expensive both in computation and labour. In this paper we explore the use of deep generative models to simultaneously correct and downscale (super-resolve) global ensemble forecasts over the Continental US. Specifically, using fine-grained radar observations as our ground truth, we train a conditional Generative Adversarial Network -- coined CorrectorGAN -- via a custom training procedure and augmented loss function, to produce ensembles of high-resolution, bias-corrected forecasts based on coarse, global precipitation forecasts in addition to other relevant meteorological fields. Our model outperforms an interpolation baseline, as well as super-resolution-only and CNN-based univariate methods, and approaches the performance of an operational regional high-resolution model across an array of established probabilistic metrics. Crucially, CorrectorGAN, once trained, produces predictions in seconds on a single machine. These results raise exciting questions about the necessity of regional models, and whether data-driven downscaling and correction methods can be transferred to data-poor regions that so far have had no access to high-resolution forecasts. ","Increasing the accuracy and resolution of precipitation forecasts using
deep generative models",7,"['Can ML help us produce cheap, reliable, high-resolution stochastic rain forecasts? In our new #AISTATS2022 paper we develop a promising approach using deep generative models. (with @raspstephan, done while at @climateai) 🧵 1/7 ', 'Weather forecasts are typically produced by numerical weather models (NWMs). These are very expensive to run, and so global NWMs are run at relatively low spatial resolution. Only richer countries can run their own regional, high-res NWMs. 2/7', 'We propose and train a GAN model - CorrectorGAN - to map from low resolution NWM ensembles to distributions of high-resolution forecasts. 3/7 https://t.co/4KsLNimtbu', 'This goal is to combine (1) bias-correction, (2) super-resolution, and (3) uncertainty generation and calibration, producing reliable high resolution stochastic forecasts without running high-resolution NWMs. 4/7', 'We compare against an operational regional NWM (HREF), as well as DL based methods. CorrectorGAN approaches HREF’s performance on CRPS, Brier scores, and reliability, but its forecasts are generated at a tiny fraction of the cost (just generator forward passes). 5/7 https://t.co/uG7TnYRRF4', 'The architecture and training were informed by the specifics of the task - for details on these, and on the experimental setup and evaluation, check out the paper. 6/7', 'Big thanks @raspstephan and @climateai for the great collaboration. 7/7']",22,03,1360
237,16,1100818380892692481,1023452666712666113,Hadi Salman,"[1/6] How tight can convex-relaxed robustness verification for neural networks be in practice? We thoroughly investigate this in our new paper ! In collaboration w\ @TheGregYang, Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. Special thanks to @ilyaraz2! [2/6] We unify all existing LP-relaxed verifiers under a general convex relaxation framework. [3/6] We perform extensive experiments, amounting to more than 22 CPU-Years, to obtain exact solution to the convex-relaxed problem that is optimal within our framework for ReLU networks. [4/6] We find the exact solution does not significantly improve upon the gap between exact verifiers and existing relaxed verifiers for various networks trained normally or robustly on MNIST and CIFAR-10 datasets. [5/6] Our results suggest there is an inherent barrier to tight robustness verification for the large class of methods captured by our framework. [6/6] Finally, we discuss possible causes of this barrier and potential future directions for bypassing it.",http://arxiv.org/abs/1902.08722,"Verification of neural networks enables us to gauge their robustness against adversarial attacks. Verification algorithms fall into two categories: exact verifiers that run in exponential time and relaxed verifiers that are efficient but incomplete. In this paper, we unify all existing LP-relaxed verifiers, to the best of our knowledge, under a general convex relaxation framework. This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification. We further prove strong duality between the primal and dual problems under very mild conditions. Next, we perform large-scale experiments, amounting to more than 22 CPU-years, to obtain exact solution to the convex-relaxed problem that is optimal within our framework for ReLU networks. We find the exact solution does not significantly improve upon the gap between PGD and existing relaxed verifiers for various networks trained normally or robustly on MNIST and CIFAR datasets. Our results suggest there is an inherent barrier to tight verification for the large class of methods captured by our framework. We discuss possible causes of this barrier and potential future directions for bypassing it. Our code and trained models are available at this http URL . ","A Convex Relaxation Barrier to Tight Robustness Verification of Neural
Networks",6,"['[1/6] How tight can convex-relaxed robustness verification for neural networks be in practice? We thoroughly investigate this in our new paper !\n\nIn collaboration w\\ @TheGregYang, Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. Special thanks to @ilyaraz2! ', '[2/6] We unify all existing LP-relaxed verifiers under a general convex relaxation framework.', '[3/6] We perform extensive experiments, amounting to more than 22 CPU-Years, to obtain exact solution to the convex-relaxed problem that is optimal within our framework for ReLU networks.', '[4/6] We find the exact solution does not significantly improve upon the gap between exact verifiers and existing relaxed verifiers for various networks trained normally or robustly on MNIST and CIFAR-10 datasets.', '[5/6] Our results suggest there is an inherent barrier to tight robustness verification for the large class of methods captured by our framework.', '[6/6] Finally, we discuss possible causes of this barrier and potential future directions for bypassing it.']",19,02,1018
238,185,1282473344428908546,1001049754787368960,Dr. Yu-Dai Tsai,"Another new paper: A new neutrino explanation to Xenon 1T, that is not constrained by astrophysical bounds! Great thanks to @TheoristIan and Jason Wyenberg for this exciting collaboration. @QuantumMessage @TheoristIan Hi Djuna, it is indeed very interesting to think about other effects of these exotic sterile neutrinos. I am happy to chat more about this!",https://arxiv.org/abs/2007.05513,"In this short letter, we find that a magnetic transition dipole moment between tau and sterile neutrinos can account for the XENON1T excess events. Unlike the ordinary neutrino dipole moment, the introduction of the new sterile mass scale allows for astrophysical bounds to be suppressed. Interestingly, the best-fit regions that are compatible with the SN1987A imply either boron-8 or CNO neutrinos as the source flux. We find that sterile neutrinos of either $\sim$ 260 keV or in the $\sim$(500 - 800) keV mass range are capable of evading astrophysical constraints while being able to successfully explain the XENON1T event rate. The sterile neutrino in the best fit parameter space may have significant effects on big bang nucleosynthesis (BBN). We show the region in which a low reheating temperature of the Universe may allow the BBN constraints to be alleviated. ","An Active-to-Sterile Neutrino Transition Dipole Moment and the XENON1T
Excess",2,"['Another new paper: \nA new neutrino explanation to Xenon 1T, that is not constrained by astrophysical bounds!\nGreat thanks to @TheoristIan and Jason Wyenberg for this exciting collaboration.', '@QuantumMessage @TheoristIan Hi Djuna, it is indeed very interesting to think about other effects of these exotic sterile neutrinos. I am happy to chat more about this!']",20,07,364
239,202,1514911641762156546,220480514,Cihan Okay,"Very excited to share the first publication , of my group @Bilkent_math, joint with my postdocs Selman Ipek and Aziz Kharoof. Combining simplicial sets and probabilities we introduce simplicial distributions and study a fundamental quantum phenomenon.",https://arxiv.org/abs/2204.06648,"We introduce a new framework for contextuality based on simplicial sets, combinatorial models of topological spaces that play a prominent role in modern homotopy theory. Our approach extends measurement scenarios to consist of spaces (rather than sets) of measurements and outcomes, and thereby generalizes nonsignaling distributions to simplicial distributions, which are distributions on spaces modeled by simplicial sets. Using this formalism we present a topologically inspired new proof of Fine's theorem for characterizing noncontextuality in Bell scenarios. Strong contextuality is generalized suitably for simplicial distributions, allowing us to define cohomological witnesses that extend the earlier topological constructions restricted to algebraic relations among quantum observables to the level of probability distributions. Foundational theorems of quantum theory such as the Gleason's theorem and Kochen-Specker theorem can be expressed naturally within this new language. ",Simplicial quantum contextuality,1,"['Very excited to share the first publication , of my group @Bilkent_math, joint with my postdocs Selman Ipek and Aziz Kharoof. Combining simplicial sets and probabilities we introduce simplicial distributions and study a fundamental quantum phenomenon.']",22,04,257
240,46,1484373941078552579,96135022,Mark Riedl,"LaMDA paper is finally out. This is Google’s new big conversational dialogue model and successor to Meena One of the more interesting things about it is that they use multi-task training to train it to generate language responses but also to generate knowledge retrieval queries. They train it on crowd workers using a fact checking tool. Like Meena, they use classifiers for inappropriate content to filter the dialogue training data and to filter generated outputs. They have spent considerable effort on this classifier. Overall, a huge engineering effort. Nothing terribly out of the ordinary for large language models at this particular moment of time (even incorporating retrieval into generative language models)",https://arxiv.org/abs/2201.08239,"We present LaMDA: Language Models for Dialog Applications. LaMDA is a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text. While model scaling alone can improve quality, it shows less improvements on safety and factual grounding. We demonstrate that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements towards the two key challenges of safety and factual grounding. The first challenge, safety, involves ensuring that the model's responses are consistent with a set of human values, such as preventing harmful suggestions and unfair bias. We quantify safety using a metric based on an illustrative set of human values, and we find that filtering candidate responses using a LaMDA classifier fine-tuned with a small amount of crowdworker-annotated data offers a promising approach to improving model safety. The second challenge, factual grounding, involves enabling the model to consult external knowledge sources, such as an information retrieval system, a language translator, and a calculator. We quantify factuality using a groundedness metric, and we find that our approach enables the model to generate responses grounded in known sources, rather than responses that merely sound plausible. Finally, we explore the use of LaMDA in the domains of education and content recommendations, and analyze their helpfulness and role consistency. ",LaMDA: Language Models for Dialog Applications,4,"['LaMDA paper is finally out. This is Google’s new big conversational dialogue model and successor to Meena ', 'One of the more interesting things about it is that they use multi-task training to train it to generate language responses but also to generate knowledge retrieval queries. They train it on crowd workers using a fact checking tool.', 'Like Meena, they use classifiers for inappropriate content to filter the dialogue training data and to filter generated outputs. They have spent considerable effort on this classifier.', 'Overall, a huge engineering effort. Nothing terribly out of the ordinary for large language models at this particular moment of time (even incorporating retrieval into generative language models)']",22,01,726
241,26,1408469060052946951,790033937531703296,Yi Tay,"Excited to share our new work from @GoogleAI and @DeepMind. ""Charformer: Fast Character Transformers via Gradient-based Subword Tokenization (paper: ) We introduce a new inductive bias that learns latent subwords in an e2e fashion. Charformer is fast, often much faster than other byte-level baselines/subword models while achieving very competitive performance. No more building SPMs / re-pretraining for every new task/domain! Charformer works well both on monolingual English standard benchmarks (GLUE) and also multilingual datasets. We also evaluate Charformer on Jigsaw's toxicity detection sets from social media. Joint work with amazing collaborators. @vqctran (co-first author), @seb_ruder (DeepMind), @_jai_gupta @hwchung27 @dara_bahri @pierceqin @sens3 @congyu and @metzlerd. Code link is already in the paper but release ETA should be in about 1 week. :) @christopher Mesh Tensorflow, similar to the T5 library. We're also working on a JAX version which will be released sometime later.",https://arxiv.org/abs/2106.12672,"State-of-the-art models in natural language processing rely on separate rigid subword tokenization algorithms, which limit their generalization ability and adaptation to new settings. In this paper, we propose a new model inductive bias that learns a subword tokenization end-to-end as part of the model. To this end, we introduce a soft gradient-based subword tokenization module (GBST) that automatically learns latent subword representations from characters in a data-driven fashion. Concretely, GBST enumerates candidate subword blocks and learns to score them in a position-wise fashion using a block scoring network. We additionally introduce Charformer, a deep Transformer model that integrates GBST and operates on the byte level. Via extensive experiments on English GLUE, multilingual, and noisy text datasets, we show that Charformer outperforms a series of competitive byte-level baselines while generally performing on par and sometimes outperforming subword-based models. Additionally, Charformer is fast, improving the speed of both vanilla byte-level and subword-level Transformers by 28%-100% while maintaining competitive quality. We believe this work paves the way for highly performant token-free models that are trained completely end-to-end. ","Charformer: Fast Character Transformers via Gradient-based Subword
Tokenization",6,"['Excited to share our new work from @GoogleAI and @DeepMind. ""Charformer: Fast Character Transformers via Gradient-based Subword Tokenization (paper: ) ', 'We introduce a new inductive bias that learns latent subwords in an e2e fashion. Charformer is fast, often much faster than other byte-level baselines/subword models while achieving very competitive performance. No more building SPMs / re-pretraining for every new task/domain!', ""Charformer works well both on monolingual English standard benchmarks (GLUE) and also multilingual datasets. We also evaluate Charformer on Jigsaw's toxicity detection sets from social media."", 'Joint work with amazing collaborators. @vqctran (co-first author), @seb_ruder (DeepMind), @_jai_gupta @hwchung27 @dara_bahri @pierceqin @sens3 @congyu and @metzlerd.', 'Code link is already in the paper but release ETA should be in about 1 week. :)', ""@christopher Mesh Tensorflow, similar to the T5 library. We're also working on a JAX version which will be released sometime later.""]",21,06,1011
242,2,1357442374239023104,14754639,Kai Lukoff,"Ever get lost on YouTube? We have a new CHI 2021 paper just for you: “How the Design of YouTube Influences User Sense of Agency” Co-authored w/ @ulyngs @himanshuzade Vera Liao, James Choi, Kaiyue Fan, @smunson & Alexis Hiniker [1/5] We identify which design mechanisms users say affect their sense of control over the time they spend in the YouTube mobile app. Less control: recommendations, ads, autoplay. More control: Playlists, search, subscriptions, play controls, watch history [2/5] The mechanism called out most often is recommendations. YouTube is wickedly good at the local optimization problem: Out of millions of videos, which one is the user most likely to watch? But the user lacks the ability to align these video recs w/ their long-term goals. [3/5] On the flipside, a design idea to support greater control: microplaning, i.e., making a lightweight plan to guide behavior for a short time. For example, encourage the user to create a short video playlist for their current session of use. [4/5] Our paper builds on fantastic earlier work by @UichinLee @EricPSB @gezakovacs @youngho_yhkim @gratydesign @AnnaCox_ @jcccf @elena_agapie and many others! [5/5]",https://arxiv.org/abs/2101.11778,"In the attention economy, video apps employ design mechanisms like autoplay that exploit psychological vulnerabilities to maximize watch time. Consequently, many people feel a lack of agency over their app use, which is linked to negative life effects such as loss of sleep. Prior design research has innovated external mechanisms that police multiple apps, such as lockout timers. In this work, we shift the focus to how the internal mechanisms of an app can support user agency, taking the popular YouTube mobile app as a test case. From a survey of 120 U.S. users, we find that autoplay and recommendations primarily undermine sense of agency, while search and playlists support it. From 13 co-design sessions, we find that when users have a specific intention for how they want to use YouTube they prefer interfaces that support greater agency. We discuss implications for how designers can help users reclaim a sense of agency over their media use. ",How the Design of YouTube Influences User Sense of Agency,5,"['Ever get lost on YouTube? We have a new CHI 2021 paper just for you: “How the Design of YouTube Influences User Sense of Agency” Co-authored w/ @ulyngs @himanshuzade Vera Liao, James Choi, Kaiyue Fan, @smunson & Alexis Hiniker [1/5]', 'We identify which design mechanisms users say affect their sense of control over the time they spend in the YouTube mobile app. Less control: recommendations, ads, autoplay. More control: Playlists, search, subscriptions, play controls, watch history [2/5] https://t.co/Ltps7vbcT4', 'The mechanism called out most often is recommendations. YouTube is wickedly good at the local optimization problem: Out of millions of videos, which one is the user most likely to watch? But the user lacks the ability to align these video recs w/ their long-term goals. [3/5]', 'On the flipside, a design idea to support greater control: microplaning, i.e., making a lightweight plan to guide behavior for a short time. For example, encourage the user to create a short video playlist for their current session of use. [4/5]', 'Our paper builds on fantastic earlier work by @UichinLee @EricPSB @gezakovacs @youngho_yhkim @gratydesign @AnnaCox_ @jcccf @elena_agapie and many others! [5/5]']",21,01,1185
243,43,1263499423683883009,160687843,Alexey Melnikov,"Our new paper on designing better Bell test experiments, where we propose new unintuitive experiments with Bell inequality violations higher than the best known setups. Here we go towards device-independent quantum information processing @UniBasel ",https://arxiv.org/abs/2005.01697,"Finding optical setups producing measurement results with a targeted probability distribution is hard as a priori the number of possible experimental implementations grows exponentially with the number of modes and the number of devices. To tackle this complexity, we introduce a method combining reinforcement learning and simulated annealing enabling the automated design of optical experiments producing results with the desired probability distributions. We illustrate the relevance of our method by applying it to a probability distribution favouring high violations of the Bell-CHSH inequality. As a result, we propose new unintuitive experiments leading to higher Bell-CHSH inequality violations than the best currently known setups. Our method might positively impact the usefulness of photonic experiments for device-independent quantum information processing. ",Setting up experimental Bell test with reinforcement learning,1,"['Our new paper on designing better Bell test experiments, where we propose new unintuitive experiments with Bell inequality violations higher than the best known setups. Here we go towards device-independent quantum information processing @UniBasel ']",20,05,261
244,49,1484259890939969539,1202973751316537348,Marco Fenucci,"📢📢 New #paper available on Partial Differential Equations and Applications, by @SpringerNature! This is the fourth paper extracted from my #PhD thesis, and a preprint is freely available on @arxiv #AcademicTwitter This work has been possible also thanks to the support from @Stardust_H2020 #MSCA",https://arxiv.org/abs/2201.01205,"We first take into account variational problems with periodic boundary conditions, and briefly recall some sufficient conditions for a periodic solution of the Euler-Lagrange equation to be either a directional, a weak, or a strong local minimizer. We then apply the theory to circular orbits of the Kepler problem with potentials of type $1/r^\alpha, \, \alpha > 0$. By using numerical computations, we show that circular solutions are strong local minimizers for $\alpha > 1$, while they are saddle points for $\alpha \in (0,1)$. Moreover, we show that for $\alpha \in (1,2)$ the global minimizer of the action over periodic curves with degree $2$ with respect to the origin could be achieved on non-collision and non-circular solutions. After, we take into account the figure-eight solution of the 3-body problem, and we show that it is a strong local minimizer over a particular set of symmetric periodic loops. ","Local minimality properties of circular motions in $1/r^\alpha$
potentials and of the figure-eight solution of the 3-body problem",2,"['📢📢 New #paper available on Partial Differential Equations and Applications, by @SpringerNature! This is the fourth paper extracted from my #PhD thesis, and a preprint is freely available on @arxiv #AcademicTwitter \n ', 'This work has been possible also thanks to the support from @Stardust_H2020 #MSCA']",22,01,309
245,60,1351360048988123136,32965031,Sami Douba,"New paper on arXiv 🙃 A corollary of the main result is that the fundamental group of the mapping torus of a Dehn twist on a closed oriented surface of positive genus does not embed in a compact Lie group. Comments welcome. @littmath If the mapping class is pseudo-Anosov (and the surface has genus > 1), the mapping torus is a closed hyperbolic 3-manifold. The fundamental group of such a manifold is virtually special (Agol) and hence embeds in a compact Lie group (also Agol): @ryleealanza Thanks 💜 @littmath Yes; these groups are residually finite. However, the image of an element representing that curve under any representation into a compact Lie group will have finite order (and that's how I concluded that no such representation is faithful) @littmath Thanks for the interest! @AndyPutmanMath @MarissaKawehi Thank you! @agolian Button's sequence of papers in which he proves that fact and several similar ones were my starting point for this project. It's not immediately clear to me how the two facts are related, or whether there is a unified way to approach/prove them. @MachineInf @littmath I think it's pretty cool! @agolian What I show in the case of the mapping torus of a Dehn twist about a curve [c] is that the image of c under any finite-dimensional representation of pi_1(fibration) is quasi-unipotent. This won't be true for the monodromy (the cyclic subgroup generated by the latter is a retract) @agolian That detail is not contained in the abstract (it's buried somewhere in the introduction; maybe I should change that). Thanks!",https://arxiv.org/abs/2101.06797,"Let $M$ be a graph manifold containing a single JSJ torus $T$ and whose JSJ blocks are of the form $\Sigma \times S^1$, where $\Sigma$ is a compact orientable surface with boundary. We show that if $M$ does not admit a Riemannian metric of everywhere nonpositive sectional curvature, then there is an essential curve on $T$ such that any finite-dimensional linear representation of $\pi_1(M)$ maps an element representing that curve to a matrix all of whose eigenvalues are roots of $1$. In particular, this shows that $\pi_1(M)$ does not admit a faithful finite-dimensional unitary representation, and gives a new proof that $\pi_1(M)$ is not linear over any field of positive characteristic. ",Virtually unipotent curves in some non-NPC graph manifolds,10,"['New paper on arXiv 🙃 A corollary of the main result is that the fundamental group of the mapping torus of a Dehn twist on a closed oriented surface of positive genus does not embed in a compact Lie group. Comments welcome. ', '@littmath If the mapping class is pseudo-Anosov (and the surface has genus > 1), the mapping torus is a closed hyperbolic 3-manifold. The fundamental group of such a manifold is virtually special (Agol) and hence embeds in a compact Lie group (also Agol): https://t.co/MtyOFpPoU0', '@ryleealanza Thanks 💜', ""@littmath Yes; these groups are residually finite. However, the image of an element representing that curve under any representation into a compact Lie group will have finite order (and that's how I concluded that no such representation is faithful)"", '@littmath Thanks for the interest!', '@AndyPutmanMath @MarissaKawehi Thank you!', ""@agolian Button's sequence of papers in which he proves that fact and several similar ones were my starting point for this project. 
It's not immediately clear to me how the two facts are related, or whether there is a unified way to approach/prove them."", ""@MachineInf @littmath I think it's pretty cool!"", ""@agolian What I show in the case of the mapping torus of a Dehn twist about a curve [c] is that the image of c under any finite-dimensional representation of pi_1(fibration) is quasi-unipotent. This won't be true for the monodromy (the cyclic subgroup generated by the latter is a retract)"", ""@agolian That detail is not contained in the abstract (it's buried somewhere in the introduction; maybe I should change that). Thanks!""]",21,01,1571
246,121,1224762472814600192,737878789636694016,Kshitij Jain,"Our new paper on Unsupervised Multilingual Alignment using Wasserstein Barycenter is now online on Arxiv . In this work, we show that using Wasserstein Barycenter as a pivot/mean language, we are able to achieve state-of-the-art translation accuracies. This work was done in collaboration with Xin Lian (intern at @BorealisAI), Jakub Truszkowski, Pascal Poupart, and Yaoliang Yu.",https://arxiv.org/abs/2002.00743,"We study unsupervised multilingual alignment, the problem of finding word-to-word translations between multiple languages without using any parallel data. One popular strategy is to reduce multilingual alignment to the much simplified bilingual setting, by picking one of the input languages as the pivot language that we transit through. However, it is well-known that transiting through a poorly chosen pivot language (such as English) may severely degrade the translation quality, since the assumed transitive relations among all pairs of languages may not be enforced in the training process. Instead of going through a rather arbitrarily chosen pivot language, we propose to use the Wasserstein barycenter as a more informative ""mean"" language: it encapsulates information from all languages and minimizes all pairwise transportation costs. We evaluate our method on standard benchmarks and demonstrate state-of-the-art performances. ",Unsupervised Multilingual Alignment using Wasserstein Barycenter,2,"['Our new paper on Unsupervised Multilingual Alignment using Wasserstein Barycenter is now online on Arxiv . In this work, we show that using Wasserstein Barycenter as a pivot/mean language, we are able to achieve state-of-the-art translation accuracies.', 'This work was done in collaboration with Xin Lian (intern at @BorealisAI), Jakub Truszkowski, Pascal Poupart, and Yaoliang Yu.']",20,02,385
247,60,956590679580463107,3321159693,rediet abebe,Our work on Opinion Dynamics with Varying Susceptibility to Persuasion is on the arXiv! We find that targeting agents at the level of susceptibility leads to an interesting family of questions and can effectively optimize network opinions @NetSciPhDs,https://arxiv.org/abs/1801.07863,"A long line of work in social psychology has studied variations in people's susceptibility to persuasion -- the extent to which they are willing to modify their opinions on a topic. This body of literature suggests an interesting perspective on theoretical models of opinion formation by interacting parties in a network: in addition to considering interventions that directly modify people's intrinsic opinions, it is also natural to consider interventions that modify people's susceptibility to persuasion. In this work, we adopt a popular model for social opinion dynamics, and we formalize the opinion maximization and minimization problems where interventions happen at the level of susceptibility. We show that modeling interventions at the level of susceptibility lead to an interesting family of new questions in network opinion dynamics. We find that the questions are quite different depending on whether there is an overall budget constraining the number of agents we can target or not. We give a polynomial-time algorithm for finding the optimal target-set to optimize the sum of opinions when there are no budget constraints on the size of the target-set. We show that this problem is NP-hard when there is a budget, and that the objective function is neither submodular nor supermodular. Finally, we propose a heuristic for the budgeted opinion optimization and show its efficacy at finding target-sets that optimize the sum of opinions compared on real world networks, including a Twitter network with real opinion estimates. ",Opinion Dynamics with Varying Susceptibility to Persuasion,1,['Our work on Opinion Dynamics with Varying Susceptibility to Persuasion is on the arXiv! We find that targeting agents at the level of susceptibility leads to an interesting family of questions and can effectively optimize network opinions @NetSciPhDs'],18,01,257
248,98,1326102438303174656,234888240,konstantin herbst,"Fantastic news! Our new paper ""From Starspots to Stellar Coronal Mass Ejections - Revisiting Empirical Stellar Relations"" together with Athanasios Papaioannou, @VladimirAirape1, and @cosmicatri has been accepted. Sneak peek: in press at ApJ 🥳",https://arxiv.org/abs/2011.03761,"Upcoming missions, including the James Webb Space Telescope, will soon characterize the atmospheres of terrestrial-type exoplanets in habitable zones around cool K- and M-type stars searching for atmospheric biosignatures. Recent observations suggest that the ionizing radiation and particle environment from active cool planet hosts may be detrimental for exoplanetary habitability. Since no direct information on the radiation field is available, empirical relations between signatures of stellar activity, including the sizes and magnetic fields of starspots, are often used. Here, we revisit the empirical relation between the starspot size and the effective stellar temperature and evaluate its impact on estimates of stellar flare energies, coronal mass ejections, and fluxes of the associated stellar energetic particle events. ","From Starspots to Stellar Coronal Mass Ejections -- Revisiting Empirical
Stellar Relations",1,"['Fantastic news! \n\nOur new paper ""From Starspots to Stellar Coronal Mass Ejections - Revisiting Empirical Stellar Relations"" together with Athanasios Papaioannou, @VladimirAirape1, and @cosmicatri has been accepted. \n\nSneak peek: \nin press at ApJ 🥳']",20,11,251
249,175,1458280561286397957,1065179437568712704,Michael Girard,"Interestingly, the central retinal vessel trunk (and its branches) appears to be a stronger biomarker for #glaucoma than RNFL thickness! We propose a paradigm where the major retinal vessels may act as a protective skeleton for the optic disc. @arxiv: . ",https://arxiv.org/abs/2111.03997,"Purpose: To assess whether the three-dimensional (3D) structural configuration of the central retinal vessel trunk and its branches (CRVT&B) could be used as a diagnostic marker for glaucoma. Method: We trained a deep learning network to automatically segment the CRVT&B from the B-scans of the optical coherence tomography (OCT) volume of the optic nerve head (ONH). Subsequently, two different approaches were used for glaucoma diagnosis using the structural configuration of the CRVT&B as extracted from the OCT volumes. In the first approach, we aimed to provide a diagnosis using only 3D CNN and the 3D structure of the CRVT&B. For the second approach, we projected the 3D structure of the CRVT&B orthographically onto three planes to obtain 2D images, and then a 2D CNN was used for diagnosis. The segmentation accuracy was evaluated using the Dice coefficient, whereas the diagnostic accuracy was assessed using the area under the receiver operating characteristic curves (AUC). The diagnostic performance of the CRVT&B was also compared with that of retinal nerve fiber layer (RNFL) thickness. Results: Our segmentation network was able to efficiently segment retinal blood vessels from OCT scans. On a test set, we achieved a Dice coefficient of 0.81\pm0.07. The 3D and 2D diagnostic networks were able to differentiate glaucoma from non-glaucoma subjects with accuracies of 82.7% and 83.3%, respectively. The corresponding AUCs for CRVT&B were 0.89 and 0.90, higher than those obtained with RNFL thickness alone. Conclusions: Our work demonstrated that the diagnostic power of the CRVT&B is superior to that of a gold-standard glaucoma parameter, i.e., RNFL thickness. Our work also suggested that the major retinal blood vessels form a skeleton -- the configuration of which may be representative of major ONH structural changes as typically observed with the development and progression of glaucoma. ","The Three-Dimensional Structural Configuration of the Central Retinal
Vessel Trunk and Branches as a Glaucoma Biomarker",1,"['Interestingly, the central retinal vessel trunk (and its branches) appears to be a stronger biomarker for #glaucoma than RNFL thickness! We propose a paradigm where the major retinal vessels may act as a protective skeleton for the optic disc. @arxiv: . ']",21,11,266
250,142,1247903624694337536,2427184074,Christopher Berry,"Fun new paper led by @ScooperF1 @annacgreen & @hannahmidd8 on our mini-Michelson interferometer, currently living in @thinktankmuseum How did we build a gravitational-wave exhibit for the @thinktankmuseum? @annacgreen explained in this @LIGO Magazine article #scicomm Our mini-Michelson was designed so it could be part of a stand-alone museum exhibit, as well as part of a science fair exhibit. We used it as part of the @royalsociety summer exhibition @Listen2Universe If you would like to play with lasers and build your own mini-@LIGO demonstrator, we have a parts list online and building instructions will be coming soon!",https://arxiv.org/abs/2004.03052,"In 2015 the first observation of gravitational waves marked a breakthrough in astrophysics, and in technological research and development. The discovery of a gravitational-wave signal from the collision of two black holes, a billion light-years away, received considerable interest from the media and public. We describe the development of a purpose-built exhibit explaining this new area of research to a general audience. The core element of the exhibit is a working Michelson interferometer: a scaled-down version of the key technology used in gravitational-wave detectors. The Michelson interferometer is integrated into a hands-on exhibit, which allows for user interaction and simulated gravitational-wave observations. An interactive display provides a self-guided explanation of gravitational-wave-related topics through video, animation, images and text. We detail the hardware and software used to create the exhibit and discuss two installation variants: an independent learning experience in a museum setting (the Thinktank Birmingham Science Museum), and a science-festival with the presence of expert guides (the 2017 Royal Society Summer Science Exhibition). We assess audience reception in these two settings, describe the improvements we have made given this information, and discuss future public-engagement projects resulting from this work. The exhibit is found to be effective in communicating the new and unfamiliar field of gravitational-wave research to general audiences. An accompanying website provides parts lists and information for others to build their own version of this exhibit. ",An Interactive Gravitational-Wave Detector Model for Museums and Fairs,4,"['Fun new paper led by @ScooperF1 @annacgreen & @hannahmidd8 on our mini-Michelson interferometer, currently living in @thinktankmuseum \n ', 'How did we build a gravitational-wave exhibit for the @thinktankmuseum? @annacgreen explained in this @LIGO Magazine article https://t.co/WhbfhlWco4 #scicomm https://t.co/olMdMkLGtD', 'Our mini-Michelson was designed so it could be part of a stand-alone museum exhibit, as well as part of a science fair exhibit. We used it as part of the @royalsociety summer exhibition @Listen2Universe \nhttps://t.co/0ml5G4pSw5', 'If you would like to play with lasers and build your own mini-@LIGO demonstrator, we have a parts list online https://t.co/DKMG3BQObY and building instructions will be coming soon!']",20,04,669
251,206,1313816344580808704,1141006043218108419,Clara Isabel Meister,"Beam search is a hack -- we all know it. So, why does it work so damn well? It’s a SOTA algorithm for decoding neural text generators! Our new EMNLP paper presents a framing of beam search that demonstrates it has a cognitive inductive bias. Formally, we cast beam search as the solution to a regularized decoding problem. Analysis of our “beam search regularizer” reveals a concrete link between beam search and the UID hypothesis from cognitive science. We find that beam search encourages an even distribution of information across the generated text, which is cognitively plausible! In our machine translation experiments, we show that BLEU correlates with an operationalization of the UID hypothesis. This gives us a simple explanation about why beam search works so well with small beam sizes: It enforces a posited cognitive bias from the linguistics literature -- to wit, the UID hypothesis. We also develop a set of novel regularizers, inspired by further work on the UID hypothesis, and decode with them in the regularized decoding framework. Experimentally, we find that our novel regularizers behave in a similar manner to beam search with a small beam size. We conclude the paper arguing that our experimental results give us a plausible, cognitive explanation for beam search’s success as a decoding heuristic for neural text generators, even when the algorithm is far from exact in practice. Joint work with @ryandcotterell and @xtimv.",https://arxiv.org/abs/2010.02650,"Quite surprisingly, exact maximum a posteriori (MAP) decoding of neural language generators frequently leads to low-quality results. Rather, most state-of-the-art results on language generation tasks are attained using beam search despite its overwhelmingly high search error rate. This implies that the MAP objective alone does not express the properties we desire in text, which merits the question: if beam search is the answer, what was the question? We frame beam search as the exact solution to a different decoding objective in order to gain insights into why high probability under a model alone may not indicate adequacy. We find that beam search enforces uniform information density in text, a property motivated by cognitive science. We suggest a set of decoding objectives that explicitly enforce this property and find that exact decoding with these objectives alleviates the problems encountered when decoding poorly calibrated language generation models. Additionally, we analyze the text produced using various decoding strategies and see that, in our neural machine translation experiments, the extent to which this property is adhered to strongly correlates with BLEU. ","If beam search is the answer, what was the question?",6,"['Beam search is a hack -- we all know it. So, why does it work so damn well? It’s a SOTA algorithm for decoding neural text generators! Our new EMNLP paper presents a framing of beam search that demonstrates it has a cognitive inductive bias. ', 'Formally, we cast beam search as the solution to a regularized decoding problem. Analysis of our “beam search regularizer” reveals a concrete link between beam search and the UID hypothesis from cognitive science.', 'We find that beam search encourages an even distribution of information across the generated text, which is cognitively plausible! 
In our machine translation experiments, we show that BLEU correlates with an operationalization of the UID hypothesis.', 'This gives us a simple explanation about why beam search works so well with small beam sizes: It enforces a posited cognitive bias from the linguistics literature -- to wit, the UID hypothesis.', 'We also develop a set of novel regularizers, inspired by further work on the UID hypothesis, and decode with them in the regularized decoding framework. Experimentally, we find that our novel regularizers behave in a similar manner to beam search with a small beam size.', 'We conclude the paper arguing that our experimental results give us a plausible, cognitive explanation for beam search’s success as a decoding heuristic for neural text generators, even when the algorithm is far from exact in practice. Joint work with @ryandcotterell and @xtimv.']",20,10,1464
252,146,1247697430285447168,1324039292,Hanxiao Liu,"New paper: Evolving Normalization-Activation Layers. We use evolution to design new layers called EvoNorms, which outperform BatchNorm-ReLU on many tasks. A promising use of AutoML to discover fundamental ML building blocks. Joint work with @DeepMind Key ideas: (1) to start from low-level primitives, and (2) to evolve the layers' generalization over multiple architectures. EvoNorms achieved promising results on ResNets, MobileNets, EfficientNets, Mask R-CNN and BigGAN. Pseudocode available in the appendix. ",http://arxiv.org/abs/2004.02967,"Normalization layers and activation functions are fundamental components in deep networks and typically co-locate with each other. Here we propose to design them using an automated approach. Instead of designing them separately, we unify them into a single tensor-to-tensor computation graph, and evolve its structure starting from basic mathematical functions. Examples of such mathematical functions are addition, multiplication and statistical moments. The use of low-level mathematical functions, in contrast to the use of high-level modules in mainstream NAS, leads to a highly sparse and large search space which can be challenging for search methods. To address the challenge, we develop efficient rejection protocols to quickly filter out candidate layers that do not work well. We also use multi-objective evolution to optimize each layer's performance across many architectures to prevent overfitting. Our method leads to the discovery of EvoNorms, a set of new normalization-activation layers with novel, and sometimes surprising structures that go beyond existing design patterns. For example, some EvoNorms do not assume that normalization and activation functions must be applied sequentially, nor need to center the feature maps, nor require explicit activation functions. Our experiments show that EvoNorms work well on image classification models including ResNets, MobileNets and EfficientNets but also transfer well to Mask R-CNN with FPN/SpineNet for instance segmentation and to BigGAN for image synthesis, outperforming BatchNorm and GroupNorm based layers in many cases. ",Evolving Normalization-Activation Layers,2,"['New paper: Evolving Normalization-Activation Layers.\n\nWe use evolution to design new layers called EvoNorms, which outperform BatchNorm-ReLU on many tasks. A promising use of AutoML to discover fundamental ML building blocks.\n\n\n\nJoint work with @DeepMind ', ""Key ideas: (1) to start from low-level primitives, and (2) to evolve the layers' generalization over multiple architectures. EvoNorms achieved promising results on ResNets, MobileNets, EfficientNets, Mask R-CNN and BigGAN. Pseudocode available in the appendix. https://t.co/W2jwvKJwoQ""]",20,04,532
253,49,1321462223097790468,2180768821,Erik Hoel,"How do artificial neural networks generalize? The answer may be in their causal structure. In this new paper we use information theory to track nodes’ causal relationships becoming more sensitive or degenerate. Training traces a path in this “causal plane” Some really interesting results of this collaboration with Simon Mattsson and @ericjmichaud_ : the informativeness of a causal relationship between two nodes peaks at a certain characteristic edge weight, no matter what bin size you use (there is a manifold for multiple edges) We can even measure a variant of the integrated information using these techniques. Normally the phi of a feedforward network is zero... but there are still integrated joint effects of one layer to another. Here we introduce a measure to capture those: ""phi feedfoward"" There's a great github package for those who want to try out these techniques. Ultimately we are hoping to offer a kind of alternative to the information bottleneck approach (although its not contradictory) that focuses more on causation @BlaiseLucey00 @ericjmichaud_ I recently learned how to make gifs so get used to it",https://arxiv.org/abs/2010.13871,"Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets. Yet DNNs can also be examined at the level of causation, exploring ""what does what"" within the layers of the network itself. Historically, analyzing the causal structure of DNNs has received less attention than understanding their responses to input. Yet definitionally, generalizability must be a function of a DNN's causal structure since it reflects how the DNN responds to unseen or even not-yet-defined future inputs. Here, we introduce a suite of metrics based on information theory to quantify and track changes in the causal structure of DNNs during training. Specifically, we introduce the effective information (EI) of a feedforward DNN, which is the mutual information between layer input and output following a maximum-entropy perturbation. The EI can be used to assess the degree of causal influence nodes and edges have over their downstream targets in each layer. We show that the EI can be further decomposed in order to examine the sensitivity of a layer (measured by how well edges transmit perturbations) and the degeneracy of a layer (measured by how edge overlap interferes with transmission), along with estimates of the amount of integrated information of a layer. Together, these properties define where each layer lies in the ""causal plane"" which can be used to visualize how layer connectivity becomes more sensitive or degenerate over time, and how integration changes during training, revealing how the layer-by-layer causal structure differentiates. These results may help in understanding the generalization capabilities of DNNs and provide foundational tools for making DNNs both more generalizable and more explainable. ","Examining the causal structures of deep neural networks using
information theory",5,"['How do artificial neural networks generalize? The answer may be in their causal structure. In this new paper we use information theory to track nodes’ causal relationships becoming more sensitive or degenerate. Training traces a path in this “causal plane” ', 'Some really interesting results of this collaboration with Simon Mattsson and @ericjmichaud_ : the informativeness of a causal relationship between two nodes peaks at a certain characteristic edge weight, no matter what bin size you use (there is a manifold for multiple edges) https://t.co/dLN3gKAOVK', 'We can even measure a variant of the integrated information using these techniques. Normally the phi of a feedforward network is zero... but there are still integrated joint effects of one layer to another. Here we introduce a measure to capture those: ""phi feedfoward"" https://t.co/JkB253oeut', ""There's a great github package for those who want to try out these techniques. Ultimately we are hoping to offer a kind of alternative to the information bottleneck approach (although its not contradictory) that focuses more on causation\n\nhttps://t.co/zp3x3nd9L5"", '@BlaiseLucey00 @ericjmichaud_ I recently learned how to make gifs so get used to it']",20,10,1161
254,176,1339253278459240449,1117093805499355136,Marilena Loverde,"First paper by my student Charuhas Shiveshwarkar! (and @sciencedrew ) We study scale-dependent halo bias and bispectrum induced by the horizon-scale perturbations in radiation. That is, in CMB photons and massless neutrinos. A small but nonzero effect! ",https://arxiv.org/abs/2012.04691,"We investigate the gravitational effect of large-scale radiation perturbations on small-scale structure formation. In addition to making the growth of matter perturbations scale dependent, the free-streaming of radiation also affects the coupling between structure formation at small and large scales. We study this using Separate Universe N-body simulations to compute the (isotropized) squeezed-limit matter bispectrum and the linear halo bias. Our results show that the scale dependence in the growth of long-wavelength matter perturbations, caused by radiation, translates into these quantities acquiring a non-trivial scale-dependence at $k\lesssim 0.05$ Mpc$^{-1}$. In a universe with radiation composed of cosmic microwave background photons and three species of massless neutrinos, the bias of halos with $b = 2$ at high $k$ will decrease by $0.29\%,\ 0.45\%$ and $0.8\%$ between $k = 0.05$ Mpc$^{-1}$ and $k = 0.0005$ Mpc$^{-1}$ at redshifts $z=0,\ 1$, and $3$ respectively. For objects with $b\gg1$, these differences approach $0.43\%,\ 0.68\%$ and $1.2\%$ respectively. ","Scale-dependent halo bias and the squeezed limit bispectrum in the
presence of radiation",1,"['First paper by my student Charuhas Shiveshwarkar! (and @sciencedrew ) We study scale-dependent halo bias and bispectrum induced by the horizon-scale perturbations in radiation. That is, in CMB photons and massless neutrinos. A small but nonzero effect! ']",20,12,266
255,58,971754575349932032,610427323,Desika Narayanan,"New paper! . We postprocess massive zooms with RT to make 'observables' -- Non-parametric merger indicators G-M20 and C-A work super well at low-z, but the complex morphology from highly clustered evirons makes measures fail at high-z 4 massive gals. So for things that might end up as centrals in z=0 groups/clusters, our current non-parametric methods don't perform much better than randomly guessing. Extra fun fact: paper is my first-ever student led one, and by a super awesome former @haverfordedu undergraduate! @conselice Thanks for the comments Chris! The replies are somewhat multifaceted. First, these simulations are cosmological zooms, which means we follow individual galaxies at high-res, but don't have statistics. So we can't generate statistics for merger rates. @conselice Second, part of the point of the paper is that G-M20 and C-A may not have success at determining merger rates, at least for very massive systems at high-z. Of course there are other methods for determining merger rates, though I can't speak to their accuracy in this regime. @conselice The point being -- it's not clear that we can compare the rates from the simulations to the observations when the false positive rate from observations may be significant. @conselice Finally, a bit of a philosophical point: should it matter what the rate normalization is? Ideally, a merger identifier should identify mergers regardless of the rate, with a minimal false positive rate. @conselice On the Petrosian radii, I'll point you to the 7 pages of methods! Not to defer, but it's nontrivial to summarize in N characters or less :) But will be very happy to hear your comments on them.",https://arxiv.org/abs/1803.02374,"Non-parametric morphology measures are a powerful tool for identifying galaxy mergers at low redshifts. We employ cosmological zoom simulations using Gizmo with the Mufasa feedback scheme, post-processed using 3D dust radiative transfer into mock observations, to study whether common morphological measures Gini G, M20, concentration C, and asymmetry A are effective at identifying major galaxy mergers at z ~ 2 - 4, i.e. ""Cosmic Noon"". Our zoom suite covers galaxies with 10^8.6 < M_* < 10^11 M_sun at z ~ 2, and broadly reproduces key global galaxy observations. Our primary result is that these morphological measures are unable to robustly pick out galaxies currently undergoing mergers during Cosmic Noon, typically performing no better than a random guess. This improves only marginally if we consider whether galaxies have undergone a merger within the last Gyr. When also considering minor mergers, galaxies display no trend of moving towards the merger regime with increasing merger ratio. From z = 4 -> 2, galaxies move from the non-merger towards the merger regime in all statistics, but this is primarily an effect of mass: Above a given noise level, higher mass galaxies display a more complex outer morphology induced by their clustered environment. We conclude that during Cosmic Noon, these morphological statistics are of limited value in identifying galaxy mergers. ","Identifying Mergers Using Quantitative Morphologies in Zoom Simulations
of High-Redshift Galaxies",8,"[""New paper! . We postprocess massive zooms with RT to make 'observables' -- Non-parametric merger indicators G-M20 and C-A work super well at low-z, but the complex morphology from highly clustered evirons makes measures fail at high-z 4 massive gals."", ""So for things that might end up as centrals in z=0 groups/clusters, our current non-parametric methods don't perform much better than randomly guessing."", 'Extra fun fact: paper is my first-ever student led one, and by a super awesome former @haverfordedu undergraduate!', ""@conselice Thanks for the comments Chris! The replies are somewhat multifaceted. First, these simulations are cosmological zooms, which means we follow individual galaxies at high-res, but don't have statistics. So we can't generate statistics for merger rates."", ""@conselice Second, part of the point of the paper is that G-M20 and C-A may not have success at determining merger rates, at least for very massive systems at high-z. Of course there are other methods for determining merger rates, though I can't speak to their accuracy in this regime."", ""@conselice The point being -- it's not clear that we can compare the rates from the simulations to the observations when the false positive rate from observations may be significant."", '@conselice Finally, a bit of a philosophical point: should it matter what the rate normalization is? Ideally, a merger identifier should identify mergers regardless of the rate, with a minimal false positive rate.', ""@conselice On the Petrosian radii, I'll point you to the 7 pages of methods! Not to defer, but it's nontrivial to summarize in N characters or less :) But will be very happy to hear your comments on them.""]",18,03,1677
256,199,1301383661489618945,1177063549606203394,Tommi Tenkanen,"My last paper is now out! Together with my collaborator Catarina Cosme we studied a scenario where the observed #DarkMatter is produced in the early universe by amplification of quantum fluctuations of a scalar field during cosmic inflation. 1/3 In particular, we studied how the (free or self-interacting) dark matter abundance and its perturbation spectrum change if the early universe was not purely radiation dominated but there was a period of e.g. slow reheating after inflation, as is indeed possible. 2/3 We also discussed how the scenario could be further tested through primordial dark matter isocurvature and non-Gaussianity. It's not hopeless to test even purely gravitationally-interacting #DarkMatter! 3/3",https://arxiv.org/abs/2009.01149,"It has been shown that the observed dark matter (DM) abundance can be produced by amplification of quantum fluctuations of an energetically subdominant scalar field during inflation. In this paper, we study the robustness of this ""spectator dark matter"" scenario to changes in the expansion rate of the early Universe. Compared to the standard radiation-dominated (RD) scenario, two aspects will change: the DM energy density evolves differently as a function of time, and also the DM isocurvature perturbation spectrum will be different from the result in the RD case. These can impose sizeable changes to the values of model parameters which allow the field to constitute all DM while simultaneously satisfying all observational constraints. We study both free and self-interacting DM in scenarios with non-standard expansion and quantify the changes to the cases with a standard cosmological history. We also discuss testability of the scenario through primordial DM isocurvature and non-Gaussianity. ",Spectator dark matter in non-standard cosmologies,3,"['My last paper is now out! Together with my collaborator Catarina Cosme we studied a scenario where the observed #DarkMatter is produced in the early universe by amplification of quantum fluctuations of a scalar field during cosmic inflation. 1/3', 'In particular, we studied how the (free or self-interacting) dark matter abundance and its perturbation spectrum change if the early universe was not purely radiation dominated but there was a period of e.g. slow reheating after inflation, as is indeed possible. 2/3', ""We also discussed how the scenario could be further tested through primordial dark matter isocurvature and non-Gaussianity. It's not hopeless to test even purely gravitationally-interacting #DarkMatter! 3/3""]",20,09,726
257,29,1176881065941721088,532752544,Michael Lopez,"I'm going to ask a question that no one expects me to: Did we get fourth down analysis (partially) wrong? A new paper about the challenges of analyzing traditional NFL data, and where player tracking data can help. Here's why: teams that went for it on 4th-and-1's were, on average, 20% closer to the line to gain than teams that did not go for it on 4th-and-1. We weren't comparing apples to apples when looking at fourth down strategy, even when we thought we were. The article I've shared above is (hopefully) the introduction to this special JQAS issue. If you are reading the article and wondering why there is some blank space, it depends on which articles are accepted @Mike_Champagne Player-level data would potentially help strategy, although it's not the once I focused on here @rtelmore That was the idea of an anonymous but intelligent reviewer. Idea is to imagine offense moving forward @zbinney_NFLinj Not something to ignore, and likely a sensitivity analysis would be appropriate. I used a second identification strategy (footnote 2, page 4) and the differences were magnified. Additionally, I'm assuming measurement error differences are independent of yardage remaining @903124S In the paper, those same contextual factors are accounted for",https://arxiv.org/abs/1909.10631,"Most historical National Football League (NFL) analysis, both mainstream and academic, has relied on public, play-level data to generate team and player comparisons. Given the number of oft omitted variables that impact on-field results, such as play call, game situation, and opponent strength, findings tend to be more anecdotal than actionable. With the release of player tracking data, however, analysts can better ask and answer questions to isolate skill and strategy. In this article, we highlight the limitations of traditional analyses, and use a decades-old punching bag for analysts, fourth-down strategy, as a microcosm for why tracking data is needed. Specifically, we assert that, in absence of using the precise yardage needed for a first down, past findings supporting an aggressive fourth down strategy may have been overstated. Next, we synthesize recent work that comprises this special Journal of Quantitative Analysis in Sports issue into player tracking data in football. Finally, we conclude with some best practices and limitations regarding usage of this data. The release of player tracking data marks a transition for the league and its' analysts, and we hope this issue helps guide innovation in football analytics for years to come. ","Bigger data, better questions, and a return to fourth down behavior: an
introduction to a special issue on tracking data in the National football
League",7,"[""I'm going to ask a question that no one expects me to: Did we get fourth down analysis (partially) wrong? \n\nA new paper about the challenges of analyzing traditional NFL data, and where player tracking data can help. "", ""Here's why: teams that went for it on 4th-and-1's were, on average, 20% closer to the line to gain than teams that did not go for it on 4th-and-1. We weren't comparing apples to apples when looking at fourth down strategy, even when we thought we were. https://t.co/zbPfWaeNVS"", ""The article I've shared above is (hopefully) the introduction to this special JQAS issue. If you are reading the article and wondering why there is some blank space, it depends on which articles are accepted https://t.co/uEMFD4lJwu"", ""@Mike_Champagne Player-level data would potentially help strategy, although it's not the once I focused on here"", '@rtelmore That was the idea of an anonymous but intelligent reviewer. Idea is to imagine offense moving forward', ""@zbinney_NFLinj Not something to ignore, and likely a sensitivity analysis would be appropriate. I used a second identification strategy (footnote 2, page 4) and the differences were magnified. Additionally, I'm assuming measurement error differences are independent of yardage remaining"", '@903124S In the paper, those same contextual factors are accounted for']",19,09,1287
258,83,1405565545693560840,1938536035,Huy V. Vo,"Our new paper ""Large-Scale Unsupervised Object Discovery"" is on arxiv: . We propose to use ranking methods for object discovery and show that our approach scales better than the baselines while yielding state-of-the-art results on COCO ... ... and the large OpenImages dataset with 1.7M images.",https://arxiv.org/abs/2106.06650,"Existing approaches to unsupervised object discovery (UOD) do not scale up to large datasets without approximations that compromise their performance. We propose a novel formulation of UOD as a ranking problem, amenable to the arsenal of distributed methods available for eigenvalue problems and link analysis. Through the use of self-supervised features, we also demonstrate the first effective fully unsupervised pipeline for UOD. Extensive experiments on COCO and OpenImages show that, in the single-object discovery setting where a single prominent object is sought in each image, the proposed LOD (Large-scale Object Discovery) approach is on par with, or better than the state of the art for medium-scale datasets (up to 120K images), and over 37% better than the only other algorithms capable of scaling up to 1.7M images. In the multi-object discovery setting where multiple objects are sought in each image, the proposed LOD is over 14% better in average precision (AP) than all other methods for datasets ranging from 20K to 1.7M images. Using self-supervised features, we also show that the proposed method obtains state-of-the-art UOD performance on OpenImages. Our code is publicly available at this https URL ",Large-Scale Unsupervised Object Discovery,2,"['Our new paper ""Large-Scale Unsupervised Object Discovery"" is on arxiv: . We propose to use ranking methods for object discovery and show that our approach scales better than the baselines while yielding state-of-the-art results on COCO ... ', '... and the large OpenImages dataset with 1.7M images.']",21,06,307
259,49,983927083800784896,899136914531393536,Tuomas Haarnoja,"New paper on how hierarchies emerge naturally from maximum entropy policies with a latent space. These policies achieve state-of-the-art performance on standard benchmark tasks and can solve spare reward tasks. w/ @kristianhartika, @pabbeel & S. Levine. ",http://arxiv.org/abs/1804.02808,"We address the problem of learning hierarchical deep neural network policies for reinforcement learning. In contrast to methods that explicitly restrict or cripple lower layers of a hierarchy to force them to use higher-level modulating signals, each layer in our framework is trained to directly solve the task, but acquires a range of diverse strategies via a maximum entropy reinforcement learning objective. Each layer is also augmented with latent random variables, which are sampled from a prior distribution during the training of that layer. The maximum entropy objective causes these latent variables to be incorporated into the layer's policy, and the higher level layer can directly control the behavior of the lower layer through this latent space. Furthermore, by constraining the mapping from latent variables to actions to be invertible, higher layers retain full expressivity: neither the higher layers nor the lower layers are constrained in their behavior. Our experimental evaluation demonstrates that we can improve on the performance of single-layer policies on standard benchmark tasks simply by adding additional layers, and that our method can solve more complex sparse-reward tasks by learning higher-level policies on top of high-entropy skills optimized for simple low-level objectives. ",Latent Space Policies for Hierarchical Reinforcement Learning,1,"['New paper on how hierarchies emerge naturally from maximum entropy policies with a latent space. These policies achieve state-of-the-art performance on standard benchmark tasks and can solve spare reward tasks. w/ @kristianhartika, @pabbeel & S. Levine. ']",18,04,267
260,71,1493324906443902976,60893773,James Bullock,"A quick summary of thoughts on our new FIRE et al. paper showing that dark-matter-free low-mass galaxies arise naturally and fairly frequently around massive galaxies in a cosmological-volume simulation. Paper led by @jorgito__moreno using FIREbox sim van Dokkum and @DanieliShany discovery of low-mass dm-poor galaxies DF2 and DF4 (red bars) v. surprising, since low-mass galaxies are usually DM-dominated. Remarkably we find several low-mass galaxies (yellow) in these sim have less DM than stars within their stellar radii. Every one of them is a satellite of a massive (~1.e11 Mstar) host that is on an orbit that brought it within the core of the galaxy (~10 kpc from the center). Much more DM lost than stars. Most of them have very faint tidal features This work follows a long line of work that has shown that close encounters between low-mass galaxies and massive hosts could do this kind of thing: Haslbauer et al., Carleton et al., Sales et al., etc. Our sims do a very good job reproducing many properties of DF2 and DF4: velocity dispersion, sizes, etc. We predict that ~30% of massive hosts should have a satellite that is DM deficient. One thing that still concerns me about our work is that we find our objects are still fairly metal rich compared to DF2 and DF4. Could be scatter in Fe/H vs. Mstar -- will need to discover more DM-def. galaxies to test these things! @azifattahi @jorgito__moreno Yes, basically. Here is relevant section of table. ",https://arxiv.org/abs/2202.05836,"The standard cold dark matter plus cosmological constant model predicts that galaxies form within dark-matter haloes, and that low-mass galaxies are more dark-matter dominated than massive ones. The unexpected discovery of two low-mass galaxies lacking dark matter immediately provoked concerns about the standard cosmology and ignited explorations of alternatives, including self-interacting dark matter and modified gravity. Apprehension grew after several cosmological simulations using the conventional model failed to form adequate numerical analogues with comparable internal characteristics (stellar masses, sizes, velocity dispersions and morphologies). Here we show that the standard paradigm naturally produces galaxies lacking dark matter with internal characteristics in agreement with observations. Using a state-of-the-art cosmological simulation and a meticulous galaxy-identification technique, we find that extreme close encounters with massive neighbours can be responsible for this. We predict that approximately 30 percent of massive central galaxies (with at least 1e11 solar masses in stars) harbour at least one dark-matter-deficient satellite (with 1e8 - 1e9 solar masses in stars). This distinctive class of galaxies provides an additional layer in our understanding of the role of interactions in shaping galactic properties. Future observations surveying galaxies in the aforementioned regime will provide a crucial test of this scenario. ","Galaxies lacking dark matter produced by close encounters in a
cosmological simulation",8,"['A quick summary of thoughts on our new FIRE et al. paper showing that dark-matter-free low-mass galaxies arise naturally and fairly frequently around massive galaxies in a cosmological-volume simulation. Paper led by @jorgito__moreno using FIREbox sim\n ', 'van Dokkum and @DanieliShany discovery of low-mass dm-poor galaxies DF2 and DF4 (red bars) v. surprising, since low-mass galaxies are usually DM-dominated. Remarkably we find several low-mass galaxies (yellow) in these sim have less DM than stars within their stellar radii. https://t.co/h0FF88if6W', 'Every one of them is a satellite of a massive (~1.e11 Mstar) host that is on an orbit that brought it within the core of the galaxy (~10 kpc from the center). Much more DM lost than stars. https://t.co/P6vyzZsBxy', 'Most of them have very faint tidal features https://t.co/8zlzdGkiKX', 'This work follows a long line of work that has shown that close encounters between low-mass galaxies and massive hosts could do this kind of thing: Haslbauer et al., Carleton et al., Sales et al., etc.', 'Our sims do a very good job reproducing many properties of DF2 and DF4: velocity dispersion, sizes, etc. We predict that ~30% of massive hosts should have a satellite that is DM deficient.', 'One thing that still concerns me about our work is that we find our objects are still fairly metal rich compared to DF2 and DF4. Could be scatter in Fe/H vs. Mstar -- will need to discover more DM-def. galaxies to test these things!', '@azifattahi @jorgito__moreno Yes, basically. Here is relevant section of table. https://t.co/1TLuruwFFc']",22,02,1505
261,47,1384764614412492800,850306776352395265,Mario Reig,"(1/ 5) New paper today! Due to the complexity of the extra-dim. space, String Theory generically suggests the existence of many axion particles, the Axiverse. The number of these particles can be rather large, easily O(100) in most compactifications. (2/5) It is generically believed that some of these axions behave as dark matter (DM) while others might describe our current epoch of accelerated expansion as dynamical dark energy (DE). However, with such a plethora of axions one usually finds... (3/5) .. an overproduction of DM and a picture of DE which is not fully consistent. In this work I show that the same physics that regulates the axion DM overproduction is an excellent candidate to set the initial conditions for a consistent picture of dynamical axion DE. (4/5) The framework offers indications about the maximal temperature that our Universe reached in the past and about its fundamental scale. Also gives hints about the fate of the Universe, it is predicted that it will reenter an era of matter domination after the current epoch. (5/5) Finally and most important, it gives us homework to do: one is challenged to obtain a consistent mechanism of long and low-scale inflation. This is left as an exercise for the reader 😜",https://arxiv.org/abs/2104.09923,"In addition to spectacular signatures such as black hole superradiance and the rotation of CMB polarization, the plenitude of axions appearing in the string axiverse may have potentially dangerous implications. An example is the cosmological overproduction of relic axions and moduli by the misalignment mechanism, more pronounced in regions where the signals mentioned above may be observable, that is for large axion decay constant. In this work, we study the minimal requirements to soften this problem and show that the fundamental requirement is a long period of low-scale inflation. However, in this case, if the inflationary Hubble scale is lower than around $O(100)$ eV, no relic DM axion is produced in the early Universe. Cosmological production of some axions may be activated, via the misalignment mechanism, if their potential minimum changes between inflation and today. As a particular example, we study in detail how the maximal-misalignment mechanism dilutes the effect of dangerous axions and allows the production of axion DM in a controlled way. In this case, the potential of the axion that realises the mechanism shifts by a factor $\Delta\theta=\pi$ between the inflationary epoch and today, and the axion starts to oscillate from the top of its potential. We also show that axions with masses $m_a\sim O(1-100)\, H_0$ realising the maximal-misalignment mechanism generically behave as dark energy with a decay constant that can take values well below the Planck scale, avoiding problems associated to super-Planckian scales. Finally, we briefly study the basic phenomenological implications of the mechanism and comment on the compatibility of this type of maximally-misaligned quintessence with the swampland criteria. ",The Stochastic Axiverse,5,"['(1/ 5) New paper today!\n\n\n\nDue to the complexity of the extra-dim. space, String Theory generically suggests the existence of many axion particles, the Axiverse. 
The number of these particles can be rather large, easily O(100) in most compactifications.', '(2/5) It is generically believed that some of these axions behave as dark matter (DM) while others might describe our current epoch of accelerated expansion as dynamical dark energy (DE). However, with such a plethora of axions one usually finds...', '(3/5) .. an overproduction of DM and a picture of DE which is not fully consistent. In this work I show that the same physics that regulates the axion DM overproduction is an excellent candidate to set the initial conditions for a consistent picture of dynamical axion DE.', '(4/5) The framework offers indications about the maximal temperature that our Universe reached in the past and about its fundamental scale. Also gives hints about the fate of the Universe, it is predicted that it will reenter an era of matter domination after the current epoch.', '(5/5) Finally and most important, it gives us homework to do: one is challenged to obtain a consistent mechanism of long and low-scale inflation. This is left as an exercise for the reader 😜']",21,04,1250
262,95,1392737009601503235,2640805367,Milan Gritta 🇬🇧 🇸🇰 🇺🇦,Hey! New paper with @iiacobacNLP called 'XeroAlign: Zero-Shot Cross-lingual Transformer Alignment' to be published in Findings of ACL 2021 :) Read the preprint at #huawei #NLProc Use this simple technique to align the representations of XLM-RoBERTa (or other pretrained #multilingual transformers) across languages for strong zero-shot transfer. #noahsArkLab #Huawei XeroAlign achieves SOTA scores on 3 cross-lingual task-oriented natural language understanding datasets. It also works well for text classification tasks like paraphrase detection. Simple and effective! #ACL2021 ,https://arxiv.org/abs/2105.02472,"The introduction of pretrained cross-lingual language models brought decisive improvements to multilingual NLP tasks. However, the lack of labelled task data necessitates a variety of methods aiming to close the gap to high-resource languages. Zero-shot methods in particular, often use translated task data as a training signal to bridge the performance gap between the source and target language(s). We introduce XeroAlign, a simple method for task-specific alignment of cross-lingual pretrained transformers such as XLM-R. XeroAlign uses translated task data to encourage the model to generate similar sentence embeddings for different languages. The XeroAligned XLM-R, called XLM-RA, shows strong improvements over the baseline models to achieve state-of-the-art zero-shot results on three multilingual natural language understanding tasks. XLM-RA's text classification accuracy exceeds that of XLM-R trained with labelled data and performs on par with state-of-the-art models on a cross-lingual adversarial paraphrasing task. ",XeroAlign: Zero-Shot Cross-lingual Transformer Alignment,3,"[""Hey! New paper with @iiacobacNLP called 'XeroAlign: Zero-Shot Cross-lingual Transformer Alignment' to be published in Findings of ACL 2021 :) Read the preprint at #huawei #NLProc "", 'Use this simple technique to align the representations of XLM-RoBERTa (or other pretrained #multilingual transformers) across languages for strong zero-shot transfer. #noahsArkLab #Huawei https://t.co/vIZ7iLZ6fC', 'XeroAlign achieves SOTA scores on 3 cross-lingual task-oriented natural language understanding datasets. It also works well for text classification tasks like paraphrase detection. Simple and effective! #ACL2021 https://t.co/XqYqUsmk9i']",21,05,606
263,256,1269921582228611073,1091644491969298432,DARWIN Observatory,"We studied the sensitivity of DARWIN to solar neutrinos via elastic neutrino-electron scattering. The measurement of the pp-flux will allow us to precisely infer the electron-neutrino survival probability below 200 keV: . Work supported by @ERC_Research @animatedphysics @ERC_Research indeed and we never know what nature may have in stake for us ;-) here we describe a measurement of solar neutrinos with the goal of improving the understanding of our Sun, and also of neutrino oscillation parameter and the weak mixing angle at low energies",https://arxiv.org/abs/2006.03114,"We detail the sensitivity of the liquid xenon (LXe) DARWIN observatory to solar neutrinos via elastic electron scattering. We find that DARWIN will have the potential to measure the fluxes of five solar neutrino components: $pp$, $^7$Be, $^{13}$N, $^{15}$O and $pep$. The precision of the $^{13}$N, $^{15}$O and $pep$ components is hindered by the double-beta decay of $^{136}$Xe and, thus, would benefit from a depleted target. A high-statistics observation of $pp$ neutrinos would allow us to infer the values of the weak mixing angle, $\sin^2\theta_w$, and the electron-type neutrino survival probability, $P_e$, in the electron recoil energy region from a few keV up to 200 keV for the first time, with relative precision of 5% and 4%, respectively, at an exposure of 300 ty. An observation of $pp$ and $^7$Be neutrinos would constrain the neutrino-inferred solar luminosity down to 0.2%. A combination of all flux measurements would distinguish between the high (GS98) and low metallicity (AGS09) solar models with 2.1-2.5$\sigma$ significance, independent of external measurements from other experiments or a measurement of $^8$B neutrinos through coherent elastic neutrino-nucleus scattering in DARWIN. Finally, we demonstrate that with a depleted target DARWIN may be sensitive to the neutrino capture process of $^{131}$Xe. ",Solar Neutrino Detection Sensitivity in DARWIN via Electron Scattering,2,"['We studied the sensitivity of DARWIN to solar neutrinos via elastic neutrino-electron scattering. The measurement of the pp-flux will allow us to precisely infer the electron-neutrino survival probability below 200 keV: . Work supported by @ERC_Research ', '@animatedphysics @ERC_Research indeed and we never know what nature may have in stake for us ;-) here we describe a measurement of solar neutrinos with the goal of improving the understanding of our Sun, and also of neutrino oscillation parameter and the weak mixing angle at low energies']",20,06,555
264,60,1139477376864260097,892059194240532480,Mikel Artetxe,"1/4 New @ACL2019_Italy paper by our awesome student @aormazabalo on the limitations of cross-lingual word embedding mappings (w/ @glabaka, @Aitor57, @eagirre & myself) Thread 👇 2/4 It was shown that the isomorphism assumption in cross-lingual embeddings doesn't fully hold. But is this a consequence of aligning separately trained embeddings (so an inherent limitation of mapping methods)? Or a more general issue caused by divergences across languages? 3/4 We try to answer this question by comparing mapping to joint learning on parallel corpora. In these ideal conditions, joint learning yields more isomorphic embeddings, is less sensitive to hubness, and better at bilingual lexicon induction, especially for distant languages. 4/4 Mapping methods still have the advantage of requiring less (or no) supervision, but this shows that they also have strong limitations, calling for further research to jointly learn cross-lingual embeddings with a weaker cross-lingual signal.",https://arxiv.org/abs/1906.05407,"Recent research in cross-lingual word embeddings has almost exclusively focused on offline methods, which independently train word embeddings in different languages and map them to a shared space through linear transformations. While several authors have questioned the underlying isomorphism assumption, which states that word embeddings in different languages have approximately the same structure, it is not clear whether this is an inherent limitation of mapping approaches or a more general issue when learning cross-lingual embeddings. So as to answer this question, we experiment with parallel corpora, which allows us to compare offline mapping to an extension of skip-gram that jointly learns both embedding spaces. We observe that, under these ideal conditions, joint learning yields to more isomorphic embeddings, is less sensitive to hubness, and obtains stronger results in bilingual lexicon induction. We thus conclude that current mapping methods do have strong limitations, calling for further research to jointly learn cross-lingual embeddings with a weaker cross-lingual signal. ",Analyzing the Limitations of Cross-lingual Word Embedding Mappings,4,"['1/4 New @ACL2019_Italy paper by our awesome student @aormazabalo on the limitations of cross-lingual word embedding mappings (w/ @glabaka, @Aitor57, @eagirre & myself) \n\nThread 👇 ', ""2/4 It was shown that the isomorphism assumption in cross-lingual embeddings doesn't fully hold. But is this a consequence of aligning separately trained embeddings (so an inherent limitation of mapping methods)? Or a more general issue caused by divergences across languages?"", '3/4 We try to answer this question by comparing mapping to joint learning on parallel corpora. In these ideal conditions, joint learning yields more isomorphic embeddings, is less sensitive to hubness, and better at bilingual lexicon induction, especially for distant languages.', '4/4 Mapping methods still have the advantage of requiring less (or no) supervision, but this shows that they also have strong limitations, calling for further research to jointly learn cross-lingual embeddings with a weaker cross-lingual signal.']",19,06,992
265,202,1252978714784251904,953616889,Justin Read,"Another fantastic paper from Martin Rey and the EDGE collaboration out today! We find that isolated low mass dwarf galaxies can grow in mass to accrete gas and reignite their star formation after reionisation: [1/3] We predict that at M200~3e9 Msun, dwarfs will transition from being gas rich quiescent ""ultra-faints"" to ""Leo-T""-like dwarfs that form stars at a rate of just ~1e-5 Msun/year! [2/3] This may solve a long-standing puzzle as to how galaxies like Leo-T survive reionisation to remain star forming today. If so, a population of ""gas rich ultra-faints"" will be uncovered by up-coming surveys. Indeed, some may have already been found: [3/3]",https://arxiv.org/abs/2004.09530v1,"We study how star formation is regulated in low-mass field dwarf galaxies ($10^5 \leq M_{\star} \leq 10^6 \, \text{M}_{\odot}$), using cosmological high-resolution ($3 \, \text{pc}$) hydrodynamical simulations. Cosmic reionization quenches star formation in all our simulated dwarfs, but three galaxies with final dynamical masses of $3 \times 10^{9} \,\text{M}_{\odot}$ are subsequently able to replenish their interstellar medium by slowly accreting gas. Two of these galaxies re-ignite and sustain star formation until the present day at an average rate of $10^{-5} \, \text{M}_{\odot} \, \text{yr}^{-1}$, highly reminiscent of observed low-mass star-forming dwarf irregulars such as Leo T. The resumption of star formation is delayed by several billion years due to residual feedback from stellar winds and Type Ia supernovae; even at $z=0$, the third galaxy remains in a temporary equilibrium with a large gas content but without any ongoing star formation. Using the ""genetic modification'' approach, we create an alternative mass growth history for this gas-rich quiescent dwarf and show how a small $(0.2\,\mathrm{dex})$ increase in dynamical mass can overcome residual stellar feedback, re-igniting star formation. The interaction between feedback and mass build-up produces a diversity in the stellar ages and gas content of low-mass dwarfs, which will be probed by combining next-generation HI and imaging surveys. ",EDGE: From quiescent to gas-rich to star-forming low-mass dwarf galaxies,3,"['Another fantastic paper from Martin Rey and the EDGE collaboration out today! We find that isolated low mass dwarf galaxies can grow in mass to accrete gas and reignite their star formation after reionisation:\n\n\n\n[1/3]', 'We predict that at M200~3e9 Msun, dwarfs will transition from being gas rich quiescent ""ultra-faints"" to ""Leo-T""-like dwarfs that form stars at a rate of just ~1e-5 Msun/year!\n\n[2/3] https://t.co/M1PKu8tFCB', 'This may solve a long-standing puzzle as to how galaxies like Leo-T survive reionisation to remain star forming today. If so, a population of ""gas rich ultra-faints"" will be uncovered by up-coming surveys. Indeed, some may have already been found:\n\nhttps://t.co/YWrr1djLFL\n\n[3/3]']",20,04,672
266,40,1364217858641829888,1140025148004810752,Pierre Arthuis,"🗞 New paper alert! 🗞 We propose a new many-body expansion formalism for open-shell mid-mass nuclei. Additional perk: it comes with contributions derived at all orders! In-Medium Similarity Renormalization Group has been a theory of choice for ab initio many-body practitioners, and with its Multi-Reference and Valence-Space counterparts have been instrumental in recent progress. Figure from H.Hergert, Front. Phys. 8:379, Though MR-IMSRG and VS-IMSRG are already able to tackle open-shell nuclei, they are pretty costly methods. Here we propose a single-reference, symmetry-breaking alternative, similar to the recently successful Bogoliubov MBPT. Figure from Tichai et al., Because Bogoliubov IMSRG inherently relies on a simple commutator, the structure of its contributions is pretty well constrained. This makes for an easy automated generation of diagrams and expressions from the get go. Figure taken from our new paper, So with this new paper, we have updated the Automated Diagram Generator ADG to v3.0.0. It is now able to generate BIMSRG expressions at arbitrary orders and for traditional or exotic truncations. ",https://arxiv.org/abs/2102.10889,"The goal of the present paper is twofold. First, a novel expansion many-body method applicable to superfluid open-shell nuclei, the so-called Bogoliubov in-medium similarity renormalization group (BIMSRG) theory, is formulated. This generalization of standard single-reference IMSRG theory for closed-shell systems parallels the recent extensions of coupled cluster, self-consistent Green's function or many-body perturbation theory. Within the realm of IMSRG theories, BIMSRG provides an interesting alternative to the already existing multi-reference IMSRG (MR-IMSRG) method applicable to open-shell nuclei. The algebraic equations for low-order approximations, i.e., BIMSRG(1) and BIMSRG(2), can be derived manually without much difficulty. However, such a methodology becomes already impractical and error prone for the derivation of the BIMSRG(3) equations, which are eventually needed to reach high accuracy. Based on a diagrammatic formulation of BIMSRG theory, the second objective of the present paper is thus to describe the third version (v3.0.0) of the ADG code that automatically (1) generates all valid BIMSRG(n) diagrams and (2) evaluates their algebraic expressions in a matter of seconds. This is achieved in such a way that equations can easily be retrieved for both the flow equation and the Magnus expansion formulations of BIMSRG. Expanding on this work, the first future objective is to numerically implement BIMSRG(2) (eventually BIMSRG(3)) equations and perform ab initio calculations of mid-mass open-shell nuclei. ","ADG: Automated generation and evaluation of many-body diagrams III.
Bogoliubov in-medium similarity renormalization group formalism",5,"['🗞 New paper alert! 🗞\n\nWe propose a new many-body expansion formalism for open-shell mid-mass nuclei. Additional perk: it comes with contributions derived at all orders!\n\n ', 'In-Medium Similarity Renormalization Group has been a theory of choice for ab initio many-body practitioners, and with its Multi-Reference and Valence-Space counterparts have been instrumental in recent progress.\n\nFigure from H.Hergert, Front. Phys. 8:379, https://t.co/w21k8sXZ3d https://t.co/phNvwqC5CV', 'Though MR-IMSRG and VS-IMSRG are already able to tackle open-shell nuclei, they are pretty costly methods. Here we propose a single-reference, symmetry-breaking alternative, similar to the recently successful Bogoliubov MBPT.\n\nFigure from Tichai et al., https://t.co/4aNMvq0kCa https://t.co/R2tYsYlwQV', 'Because Bogoliubov IMSRG inherently relies on a simple commutator, the structure of its contributions is pretty well constrained. This makes for an easy automated generation of diagrams and expressions from the get go.\n\nFigure taken from our new paper, https://t.co/JjraJoxNKN https://t.co/iQtNKd46DQ', 'So with this new paper, we have updated the Automated Diagram Generator ADG to v3.0.0. It is now able to generate BIMSRG expressions at arbitrary orders and for traditional or exotic truncations.\n\nhttps://t.co/JDRNyc89Er']",21,02,1187
267,2,1003936697682448390,24603962,Eneko Agirre,"New paper accepted at #NLPOSS @acl2018 workshop on ""The risk of sub-optimal use of Open Source NLP Software"" with lessons for releasing #NLProc research software (thread 1/5) UKB is an open source collection of programs for performing, among other tasks, knowledge-based Word Sense Disambiguation (WSD). (thread 2/5) Since it was released in 2009 it has been often used by third parties out-of-the-box in sub-optimal settings. We show that nine years later it is the state-of-the-art on knowledge-based WSD. (thread 3/5) This case shows the pitfalls of releasing open source NLP software without optimal default settings and precise instructions for reproducibility. Authors should not rely on other researchers reading the papers with care. (thread 4/5) It is in the interest of authors to include end-to-end scripts that download all resources, perform any necessary pre-processing and reproduce the results. We fixed this for UKB in version 3.1 (thread 5/5)",https://arxiv.org/abs/1805.04277,"UKB is an open source collection of programs for performing, among other tasks, knowledge-based Word Sense Disambiguation (WSD). Since it was released in 2009 it has been often used out-of-the-box in sub-optimal settings. We show that nine years later it is the state-of-the-art on knowledge-based WSD. This case shows the pitfalls of releasing open source NLP software without optimal default settings and precise instructions for reproducibility. ","The risk of sub-optimal use of Open Source NLP Software: UKB is
inadvertently state-of-the-art in knowledge-based WSD",5,"['New paper accepted at #NLPOSS @acl2018 workshop on ""The risk of sub-optimal use of Open Source NLP Software"" with lessons for releasing #NLProc research software (thread 1/5)', 'UKB is an open source collection of programs for performing, among other tasks, knowledge-based Word Sense Disambiguation (WSD). https://t.co/GGePy40ZLs (thread 2/5)', 'Since it was released in 2009 it has been often used by third parties out-of-the-box in sub-optimal settings. We show that nine years later it is the state-of-the-art on knowledge-based WSD. (thread 3/5)', 'This case shows the pitfalls of releasing open source NLP software without optimal default settings and precise instructions for reproducibility. Authors should not rely on other researchers reading the papers with care. (thread 4/5)', 'It is in the interest of authors to include end-to-end scripts that download all resources, perform any necessary pre-processing and reproduce the results. We fixed this for UKB in version 3.1 https://t.co/bvXvV1FwJ6 (thread 5/5)']",18,05,981
268,24,1322211983161094145,29000998,Debashis Ghosh,"For the statisticians in the audience, wanted to quickly discuss a new paper: This deals with a field called sufficient dimension reduction. It started in the 1980s and 1990s and had this seemingly restrictive condition called the linearity condition. Lots of methods were developed in the 1990s and early 2000s for sufficient dimension reduction (SDR). They required the linearity condition. More recently, people have moved into nonlinear SDR methods, primarily using kernels, which are popular in support vector machines. Our paper shows that the linearity condition induces kernels just like those in the nonlinear SDR literature. This involves using 100-year old math results (my favorite kind). This is joint work with my co-author Youngjoo Cho (not on Twitter), who is at the University of Texas, El Paso.",https://arxiv.org/abs/2010.15009,"There has been a lot of interest in sufficient dimension reduction (SDR) methodologies as well as nonlinear extensions in the statistics literature. In this note, we use classical results regarding metric spaces and positive definite functions to link linear SDR procedures to their nonlinear counterparts. ",Bridging linearity-based and kernel-based sufficient dimension reduction,5,"['For the statisticians in the audience, wanted to quickly discuss a new paper: ', 'This deals with a field called sufficient dimension reduction. It started in the 1980s and 1990s and had this seemingly restrictive condition called the linearity condition.', 'Lots of methods were developed in the 1990s and early 2000s for sufficient dimension reduction (SDR). They required the linearity condition. More recently, people have moved into nonlinear SDR methods, primarily using kernels, which are popular in support vector machines.', 'Our paper shows that the linearity condition induces kernels just like those in the nonlinear SDR literature. This involves using 100-year old math results (my favorite kind).', 'This is joint work with my co-author Youngjoo Cho (not on Twitter), who is at the University of Texas, El Paso.']",20,10,822
269,76,1417047877881417728,995032097806082049,Tarje NM,"New preprint by PhD student Ben Moseley on ""Finite-basis physics-informed neural networks"" (FBPINNs); a novel framework for solving a variety of differential equations with a focus on scalability for multi-/large- problems. Paper and code to follow soon! ",https://arxiv.org/abs/2107.07871,"Recently, physics-informed neural networks (PINNs) have offered a powerful new paradigm for solving problems relating to differential equations. Compared to classical numerical methods PINNs have several advantages, for example their ability to provide mesh-free solutions of differential equations and their ability to carry out forward and inverse modelling within the same optimisation problem. Whilst promising, a key limitation to date is that PINNs have struggled to accurately and efficiently solve problems with large domains and/or multi-scale solutions, which is crucial for their real-world application. Multiple significant and related factors contribute to this issue, including the increasing complexity of the underlying PINN optimisation problem as the problem size grows and the spectral bias of neural networks. In this work we propose a new, scalable approach for solving large problems relating to differential equations called Finite Basis PINNs (FBPINNs). FBPINNs are inspired by classical finite element methods, where the solution of the differential equation is expressed as the sum of a finite set of basis functions with compact support. In FBPINNs neural networks are used to learn these basis functions, which are defined over small, overlapping subdomains. FBINNs are designed to address the spectral bias of neural networks by using separate input normalisation over each subdomain, and reduce the complexity of the underlying optimisation problem by using many smaller neural networks in a parallel divide-and-conquer approach. Our numerical experiments show that FBPINNs are effective in solving both small and larger, multi-scale problems, outperforming standard PINNs in both accuracy and computational resources required, potentially paving the way to the application of PINNs on large, real-world problems. ","Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable
domain decomposition approach for solving differential equations",1,"['New preprint by PhD student Ben Moseley on ""Finite-basis physics-informed neural networks"" (FBPINNs); a novel framework for solving a variety of differential equations with a focus on scalability for multi-/large- problems. Paper and code to follow soon!\n\n']",21,07,261
270,11,1410327465164111872,1273068367092445190,Alex Bergman,"Check out our new paper ""Fast Training of Neural Lumigraph Representations using Meta-learning"": We propose MetaNLR++, which is able to train and render neural scene representations in a fraction of the time that competing methods require! This is joint work with my awesome collaborators from the Stanford Computational Imaging Group - Petr Kellnhofer and Gordon Wetzstein (@GordonWetzstein)",https://arxiv.org/abs/2106.14942,"Novel view synthesis is a long-standing problem in machine learning and computer vision. Significant progress has recently been made in developing neural scene representations and rendering techniques that synthesize photorealistic images from arbitrary views. These representations, however, are extremely slow to train and often also slow to render. Inspired by neural variants of image-based rendering, we develop a new neural rendering approach with the goal of quickly learning a high-quality representation which can also be rendered in real-time. Our approach, MetaNLR++, accomplishes this by using a unique combination of a neural shape representation and 2D CNN-based image feature extraction, aggregation, and re-projection. To push representation convergence times down to minutes, we leverage meta learning to learn neural shape and image feature priors which accelerate training. The optimized shape and image features can then be extracted using traditional graphics techniques and rendered in real time. We show that MetaNLR++ achieves similar or better novel view synthesis results in a fraction of the time that competing methods require. ",Fast Training of Neural Lumigraph Representations using Meta Learning,2,"['Check out our new paper ""Fast Training of Neural Lumigraph Representations using Meta-learning"":\n\n\nWe propose MetaNLR++, which is able to train and render neural scene representations in a fraction of the time that competing methods require! ', 'This is joint work with my awesome collaborators from the Stanford Computational Imaging Group - Petr Kellnhofer and Gordon Wetzstein (@GordonWetzstein)']",21,06,406
271,39,1419953782721495060,956539964795301889,Jacopo Bertolotti,"New paper on @arXiv! This work was done by a former PhD student in my group (Alba Paniagua-Diaz, currently working with @pablo_artal), but while it was a chapter in her thesis, we never found the time to put it into a paper shape. Until now. 🧵 1/ This work marks the point where Alba moved from ""I am doing what my supervisor tells me to do"" to ""I have these ideas I want to try out"", which is probably the most important moment in a PhD 🙂 So, what is it about? 2/ ""Wavefront shaping"": by controlling the input wavefront, you can control the output even when the beam is completely scrambled by multiple scattering. How good is your control over the beam determines how good is your control over the output 3/ In this paper we look at the problem from the opposite direction: what if our input beam is already completely scrambled (i.e. it looks like a speckle pattern)? Can we use wavefront shaping to recover a ""good"" beam? 4/ If you had perfect control over the beam, and you could change its amplitude and phase however you wanted, the answer would be a trivial ""yes"". But you don't have perfect control over the beam. 5/ Apart from technical limitations, you just can't take a point in the speckle pattern where there is no intensity, and multiply it by infinity to get a finite one. It just doesn't work. So the question becomes: how well can you do it ""in practice""? 6/ But first: who cares? We care because there are many cases where we would like to refocus the light coming out of a multimode fibre (e.g. because we have a high-power fibre laser), and the output of a multimode fibre is at best a speckle pattern. 7/ So in this paper we study how well we can take the output of a multimode fibre and form a nice Gaussian focus with it, with particular attention to the various possible limiting factors. It is a very ""practical"" paper, which hopefully will be useful to people 🙂 8/8 @sylvaingigan @arxiv @pablo_artal Let's see what the referees have to say 😉 (One big problem with very delayed papers, is that if referees start insisting in new measurements, things get tricky)",https://arxiv.org/abs/2107.10601,"A perfectly collimated beam can be spread out by multiple scattering, creating a speckle pattern and increasing the etendue of the system. Standard optical systems conserve etendue, and thus are unable to reverse the process by transforming a speckle pattern into a collimated beam or, equivalently, into a sharp focus. Wavefront shaping is a technique that is able to manipulate the amplitude and/or phase of a light beam, thus controlling its propagation through such media. Wavefront shaping can thus break the conservation of etendue and, in principle, reduce it. In this work we study how much of the energy contained in a fully developed speckle pattern can be converted into a high quality (low M2) beam, and discuss the advantages and limitations of this approach, with special attention given to the inherent variability in the quality of the output due to the multiple scattering. ","Wavefront shaping to improve beam quality: converting a speckle pattern
into a Gaussian spot",9,"['New paper on @arXiv! \nThis work was done by a former PhD student in my group (Alba Paniagua-Diaz, currently working with @pablo_artal), but while it was a chapter in her thesis, we never found the time to put it into a paper shape. Until now.\n🧵 1/', 'This work marks the point where Alba moved from ""I am doing what my supervisor tells me to do"" to ""I have these ideas I want to try out"", which is probably the most important moment in a PhD 🙂\n\nSo, what is it about?\n2/', '""Wavefront shaping"": by controlling the input wavefront, you can control the output even when the beam is completely scrambled by multiple scattering. How good is your control over the beam determines how good is your control over the output\nhttps://t.co/9HYk85ZKUc\n3/', 'In this paper we look at the problem from the opposite direction: what if our input beam is already completely scrambled (i.e. it looks like a speckle pattern)? Can we use wavefront shaping to recover a ""good"" beam?\n4/', 'If you had perfect control over the beam, and you could change its amplitude and phase however you wanted, the answer would be a trivial ""yes"". But you don\'t have perfect control over the beam.\n5/', 'Apart from technical limitations, you just can\'t take a point in the speckle pattern where there is no intensity, and multiply it by infinity to get a finite one. It just doesn\'t work.\nSo the question becomes: how well can you do it ""in practice""?\n6/', 'But first: who cares?\nWe care because there are many cases where we would like to refocus the light coming out of a multimode fibre (e.g. because we have a high-power fibre laser), and the output of a multimode fibre is at best a speckle pattern.\n7/', 'So in this paper we study how well we can take the output of a multimode fibre and form a nice Gaussian focus with it, with particular attention to the various possible limiting factors.\nIt is a very ""practical"" paper, which hopefully will be useful to people 🙂\n8/8', ""@sylvaingigan @arxiv @pablo_artal Let's see what the referees have to say 😉\n\n(One big problem with very delayed papers, is that if referees start insisting in new measurements, things get tricky)""]",21,07,2101
272,182,1453727027723341825,726837554000084993,Jeremy Bailin,"Paper day! We've studied the effect of supernova self-enrichment in globular clusters (GCs) to figure out what the Milky Way's GCs looked like when they formed based on their current metallicity and iron abundance spread and/or mass. 1/3 Self enrichment (a) makes GCs more metal-rich than the gas they originally formed from, and (b) introduces star-to-star variation in iron abundance. We can measure (b) and use it to correct for (a)! The correction is usually small, but can be up to 0.5 dex. 2/3 This allows us to get a more accurate view of how individual pieces of the Milky Way enriched themselves with heavy elements over time. ""We"" = me + excellent undergrad researcher Ryker von Klar. Keep an eye out for his grad school apps next year! 3/3 ",https://arxiv.org/abs/2110.14571,"Intrinsic iron abundance spreads in globular clusters, although usually small, are very common, and are signatures of self enrichment: some stars within the cluster have been enriched by supernova ejecta from other stars within the same cluster. We use the Bailin (2018) self enrichment model to predict the relationship between properties of the protocluster -- its mass and the metallicity of the protocluster gas cloud -- and the final observable properties today -- its current metallicity and the internal iron abundance spread. We apply this model to an updated catalog of Milky Way globular clusters where the initial mass and/or the iron abundance spread is known to reconstruct their initial metallicities. We find that with the exception of the known anomalous bulge cluster Terzan 5 and three clusters strongly suspected to be nuclear star clusters from stripped dwarf galaxies, the model provides a good lens for understanding their iron spreads and initial metallicities. We then use these initial metallicities to construct age-metallicity relations for kinematically-identified major accretion events in the Milky Way's history. We find that using the initial metallicity instead of the current metallicity does not alter the overall picture of the Milky Way's history, since the difference is usually small, but does provide information that can help distinguish which accretion event some individual globular clusters with ambiguous kinematics should be associated with, and points to potential complexity within the accretion events themselves. ","Globular Cluster Intrinsic Iron Abundance Spreads: II. Protocluster
Metallicities and the Age-Metallicity Relations of Milky Way Progenitors",3,"[""Paper day! We've studied the effect of supernova self-enrichment in globular clusters (GCs) to figure out what the Milky Way's GCs looked like when they formed based on their current metallicity and iron abundance spread and/or mass. 1/3 "", 'Self enrichment (a) makes GCs more metal-rich than the gas they originally formed from, and (b) introduces star-to-star variation in iron abundance. We can measure (b) and use it to correct for (a)! The correction is usually small, but can be up to 0.5 dex. 2/3 https://t.co/b06jmwKAAF', 'This allows us to get a more accurate view of how individual pieces of the Milky Way enriched themselves with heavy elements over time.\n\n""We"" = me + excellent undergrad researcher Ryker von Klar. Keep an eye out for his grad school apps next year! 3/3 https://t.co/iUslGfcytc']",21,10,771
273,48,1375439934623002625,1324428524,Rikard Enberg,"New paper today on cosmology in the very early universe, with my postdoc and previous PhD student. The paper is about the abrupt phase transition a few picoseconds after the big bang, where the Higgs field switched on and particles stopped being massless. You can't have abrupt phase transitions in the Standard Model because the Higgs is too heavy. In theories beyond you can. We look at this in effective field theory and find that with heavy new physics at the TeV scale it can be possible given existing constraints on parameters Why do we want such an abrupt (also called strongly first-order) phase transition? Two reasons: Reason 1. Because then you might be able to explain why there aren't equal amounts of matter and antimatter in the universe. This is called electroweak baryogenesis and requires something very cool called sphalerons. Which you can't have if the transition is too weak. This article by @jonmbutterworth talks a bit about sphalerons. Reason 2. Because an abrupt transition is like boiling – bubbles are formed that can make huge amounts of gravitational waves. Which then make up the stochastic gravitational wave background that the space-based GW observatory LISA will search for. @davidjamesweir has done some very cool simulations and movies of such bubbles So it's experimentally testable, by an insanely cool experiment (@LISACommunity) that will have three spacecraft in a triangle with sides of 2.5 million km, orbiting the Sun in the same orbit as Earth, with a laser interferometer using of laser beams between these spacecraft. It's also testable by the high luminosity LHC, by looking for pair production of Higgs bosons. If our scenario in is correct the cross section is modified in a specific way.",https://arxiv.org/abs/2103.14022,"A first-order Electroweak Phase Transition (EWPT) could explain the observed baryon-antibaryon asymmetry and its dynamics could yield a detectable gravitational wave signature, while the underlying physics would be within the reach of colliders. The Standard Model, however, predicts a crossover transition. We therefore study the EWPT in the Standard Model Effective Field Theory (SMEFT) including dimension-six operators. A first-order EWPT has previously been shown to be possible in the SMEFT. Phenomenology studies have focused on scenarios with a tree-level barrier between minima, which requires a negative Higgs quartic coupling and a new physics scale low enough to raise questions about the validity of the EFT approach. In this work we stress that a first-order EWPT is also possible when the barrier between minima is generated radiatively, the quartic coupling is positive, the scale of new physics is higher, and there is good agreement with experimental bounds. Our calculation is done in a consistent, gauge-invariant way, and we carefully analyze the scaling of parameters necessary to generate a barrier in the potential. We perform a global fit in the relevant parameter space and explicitly find the points with a first-order transition that agree with experimental data. We also briefly discuss the prospects for probing the allowed parameter space using di-Higgs production in colliders. ","A new perspective on the electroweak phase transition in the Standard
Model Effective Field Theory",9,"['New paper today on cosmology in the very early universe, with my postdoc and previous PhD student. The paper is about the abrupt phase transition a few picoseconds after the big bang, where the Higgs field switched on and particles stopped being massless. ', ""You can't have abrupt phase transitions in the Standard Model because the Higgs is too heavy. In theories beyond you can. We look at this in effective field theory and find that with heavy new physics at the TeV scale it can be possible given existing constraints on parameters"", 'Why do we want such an abrupt (also called strongly first-order) phase transition? Two reasons:', ""Reason 1. Because then you might be able to explain why there aren't equal amounts of matter and antimatter in the universe. This is called electroweak baryogenesis and requires something very cool called sphalerons. Which you can't have if the transition is too weak."", 'This article by @jonmbutterworth talks a bit about sphalerons. https://t.co/gKW2BHumVD', 'Reason 2. Because an abrupt transition is like boiling – bubbles are formed that can make huge amounts of gravitational waves. Which then make up the stochastic gravitational wave background that the space-based GW observatory LISA will search for.', '@davidjamesweir has done some very cool simulations and movies of such bubbles https://t.co/h2ngUmzPWb', ""So it's experimentally testable, by an insanely cool experiment (@LISACommunity) that will have three spacecraft in a triangle with sides of 2.5 million km, orbiting the Sun in the same orbit as Earth, with a laser interferometer using of laser beams between these spacecraft."", ""It's also testable by the high luminosity LHC, by looking for pair production of Higgs bosons. If our scenario in https://t.co/QUYVoIPp1k is correct the cross section is modified in a specific way.""]",21,03,1768
274,171,1339899331533541377,2445322540,Pascal Fua,Measuring the uncertainty of deep net results is a challenge. Ensembles are one of the most reliable approaches but are computationally demanding. We propose an approach that is much faster while preserving the reliability of ensembles. #DeepLearning ,https://arxiv.org/abs/2012.08334,"Deep neural networks have amply demonstrated their prowess but estimating the reliability of their predictions remains challenging. Deep Ensembles are widely considered as being one of the best methods for generating uncertainty estimates but are very expensive to train and evaluate. MC-Dropout is another popular alternative, which is less expensive, but also less reliable. Our central intuition is that there is a continuous spectrum of ensemble-like models of which MC-Dropout and Deep Ensembles are extreme examples. The first uses an effectively infinite number of highly correlated models while the second relies on a finite number of independent models. To combine the benefits of both, we introduce Masksembles. Instead of randomly dropping parts of the network as in MC-dropout, Masksemble relies on a fixed number of binary masks, which are parameterized in a way that allows to change correlations between individual models. Namely, by controlling the overlap between the masks and their density one can choose the optimal configuration for the task at hand. This leads to a simple and easy to implement method with performance on par with Ensembles at a fraction of the cost. We experimentally validate Masksembles on two widely used datasets, CIFAR10 and ImageNet. ",Masksembles for Uncertainty Estimation,1,['Measuring the uncertainty of deep net results is a challenge. Ensembles are one of the most reliable approaches but are computationally demanding. We propose an approach that is much faster while preserving the reliability of ensembles. #DeepLearning '],20,12,264
275,129,1171555903952150529,503452360,William Wang,"In our new #EMNLP2019 paper, we relax the fully-factored mean-field assumption, and propose a new Gaussian Copula Variational Autoencoder (VAE) to deal with posterior collapse. Neural Gaussian Copula for Variational Autoencoder: #NLProc @kingofspace0wzz",https://arxiv.org/abs/1909.03569,"Variational language models seek to estimate the posterior of latent variables with an approximated variational posterior. The model often assumes the variational posterior to be factorized even when the true posterior is not. The learned variational posterior under this assumption does not capture the dependency relationships over latent variables. We argue that this would cause a typical training problem called posterior collapse observed in all other variational language models. We propose Gaussian Copula Variational Autoencoder (VAE) to avert this problem. Copula is widely used to model correlation and dependencies of high-dimensional random variables, and therefore it is helpful to maintain the dependency relationships that are lost in VAE. The empirical results show that by modeling the correlation of latent variables explicitly using a neural parametric copula, we can avert this training difficulty while getting competitive results among all other VAE approaches. ",Neural Gaussian Copula for Variational Autoencoder,1,"['In our new #EMNLP2019 paper, we relax the fully-factored mean-field assumption, and propose a new Gaussian Copula Variational Autoencoder (VAE) to deal with posterior collapse. Neural Gaussian Copula for Variational Autoencoder: #NLProc @kingofspace0wzz']",19,09,260
276,18,1290012137906110466,2176486874,Steven Thomson,"Really nice discovery today - while reading this (excellent) new paper from the @ITensorLib team, it was a fantastic surprise to stumble on our group's PhD student Jan Schneider in the acknowledgements. Lovely gesture from the team to all the contributors! ",https://arxiv.org/abs/2007.14822,"ITensor is a system for programming tensor network calculations with an interface modeled on tensor diagram notation, which allows users to focus on the connectivity of a tensor network without manually bookkeeping tensor indices. The ITensor interface rules out common programming errors and enables rapid prototyping of tensor network algorithms. After discussing the philosophy behind the ITensor approach, we show examples of each part of the interface including Index objects, the ITensor product operator, tensor factorizations, tensor storage types, algorithms for matrix product state (MPS) and matrix product operator (MPO) tensor networks, quantum number conserving block-sparse tensors, and the NDTensors library. We also review publications that have used ITensor for quantum many-body physics and for other areas where tensor networks are increasingly applied. To conclude we discuss promising features and optimizations to be added in the future. ",The ITensor Software Library for Tensor Network Calculations,1,"[""Really nice discovery today - while reading this (excellent) new paper from the @ITensorLib team, it was a fantastic surprise to stumble on our group's PhD student Jan Schneider in the acknowledgements. Lovely gesture from the team to all the contributors! ""]",20,07,270
277,77,1215258112384475136,19149703,Karina Voggel ✨🔭🏃🏼♀️,"Today our new paper on how we can find globular clusters with the help of @ESAGaia is out on the arxiv! with @anilcseth @caprastro & @sand_dave @ESAGaia @anilcseth @caprastro @sand_dave We tried to find globular clusters in Centaurus A by using colour and astrometric excess factors in Gaia DR2. We realized that star clusters in nearby galaxies do not appear like standard point sources in Gaia. @ESAGaia @anilcseth @caprastro @sand_dave Normally these excess factors are used as a quality assessment tool in Gaia to purge out bad sources! @ESAGaia @anilcseth @caprastro @sand_dave We have followe up a few candidates, and identified 5 brand new GCs in the outskirts of CenA. They actually are clearly visible as star clusters in good seeing imaging but we had not found them because the outer Halo of Gaia spans several square degrees on the sky. @ESAGaia @anilcseth @caprastro @sand_dave These newly confirmed clusters (Blue datapoints) are now the record holders for the three most distant known GCs in CenA! This shows the power of this Gaia method to identify good candidates in the distant outskirts. @ESAGaia @anilcseth @caprastro @sand_dave And we expect that this @ESAGaia method is applicable to find bright GCs in most Local Volume galaxies out to 25Mpc. Especially in those sparse outer Halos of galaxies! @ESAGaia @anilcseth @caprastro @sand_dave On top of that we also show that the excess factors are directly correlated to the physical size of the GCs. Meaning that you can use the excess factors to get a rough size estimate of their sizes without the need for high-resolution HST imaging. ",https://arxiv.org/abs/2001.02243,"Tidally stripped galaxy nuclei and luminous globular clusters (GCs) are important tracers of the halos and assembly histories of nearby galaxies, but are difficult to reliably identify with typical ground-based imaging data. In this paper we present a new method to find these massive star clusters using Gaia DR2, focusing on the massive elliptical galaxy Centaurus A (Cen A). We show that stripped nuclei and globular clusters are partially resolved by Gaia at the distance of Cen A, showing characteristic astrometric and photometric signatures. We use this selection method to produce a list of 632 new candidate luminous clusters in the halo of Cen A out to a projected radius of 150 kpc. Adding in broadband photometry and visual examination improves the accuracy of our classification. In a spectroscopic pilot program we have confirmed 5 new luminous clusters, which includes the 7th and 10th most luminous GC in Cen\,A. Three of the newly discovered GCs are further away from Cen A in than all previously known GCs. Several of these are compelling candidates for stripped nuclei. We show that our novel Gaia selection method retains at least partial utility out to distances of 25 Mpc and hence is a powerful tool for finding and studying star clusters in the sparse outskirts of galaxies in the local universe. ","A Gaia-based catalog of candidate stripped nuclei and luminous globular
clusters in the halo of Centaurus A",7,"['Today our new paper on how we can find globular clusters with the help of @ESAGaia is out on the arxiv! with @anilcseth @caprastro & @sand_dave ', '@ESAGaia @anilcseth @caprastro @sand_dave We tried to find globular clusters in Centaurus A by using colour and astrometric excess factors in Gaia DR2. We realized that star clusters in nearby galaxies do not appear like standard point sources in Gaia. https://t.co/nkpMAxKJZ9', '@ESAGaia @anilcseth @caprastro @sand_dave Normally these excess factors are used as a quality assessment tool in Gaia to purge out bad sources!', '@ESAGaia @anilcseth @caprastro @sand_dave We have followe up a few candidates, and identified 5 brand new GCs in the outskirts of CenA. They actually are clearly visible as star clusters in good seeing imaging but we had not found them because the outer Halo of Gaia spans several square degrees on the sky. https://t.co/xDCaSyUi7X', '@ESAGaia @anilcseth @caprastro @sand_dave These newly confirmed clusters (Blue datapoints) are now the record holders for the three most distant known GCs in CenA! This shows the power of this Gaia method to identify good candidates in the distant outskirts. https://t.co/YmA7KTnz3X', '@ESAGaia @anilcseth @caprastro @sand_dave And we expect that this @ESAGaia method is applicable to find bright GCs in most Local Volume galaxies out to 25Mpc. Especially in those sparse outer Halos of galaxies! https://t.co/8xVH1LvTF3', '@ESAGaia @anilcseth @caprastro @sand_dave On top of that we also show that the excess factors are directly correlated to the physical size of the GCs. Meaning that you can use the excess factors to get a rough size estimate of their sizes without the need for high-resolution HST imaging. https://t.co/qlkSS1SIHx']",20,01,1649
278,5,1324325297290842113,1175514763347943424,Hirofumi Inaguma,"Our new preprint on non-autoregressive E2E speech translation is out. We present Orthros, which has AR/NAR decoders on a shared speech encoder. Rescoring outputs from the NAR decoder by the AR decoder brings out the full potential of CMLM/SMART. Paper: ",https://arxiv.org/abs/2010.13047,"Fast inference speed is an important goal towards real-world deployment of speech translation (ST) systems. End-to-end (E2E) models based on the encoder-decoder architecture are more suitable for this goal than traditional cascaded systems, but their effectiveness regarding decoding speed has not been explored so far. Inspired by recent progress in non-autoregressive (NAR) methods in text-based translation, which generates target tokens in parallel by eliminating conditional dependencies, we study the problem of NAR decoding for E2E-ST. We propose a novel NAR E2E-ST framework, Orthros, in which both NAR and autoregressive (AR) decoders are jointly trained on the shared speech encoder. The latter is used for selecting better translation among various length candidates generated from the former, which dramatically improves the effectiveness of a large length beam with negligible overhead. We further investigate effective length prediction methods from speech inputs and the impact of vocabulary sizes. Experiments on four benchmarks show the effectiveness of the proposed method in improving inference speed while maintaining competitive translation quality compared to state-of-the-art AR E2E-ST systems. ","Orthros: Non-autoregressive End-to-end Speech Translation with
Dual-decoder",1,"['Our new preprint on non-autoregressive E2E speech translation is out.\nWe present Orthros, which has AR/NAR decoders on a shared speech encoder.\nRescoring outputs from the NAR decoder by the AR decoder brings out the full potential of CMLM/SMART.\n\nPaper: ']",20,10,266
279,94,1167308433382469632,911474423412219904,Julien Tierny,Need to compare in-situ simulations to an acquired ground-truth? Checkout our new #ldav 2019 application paper using #TopologicalDataAnalysis and #OptimalTransport #TopologyToolKit #datascience #visualization #machinelearning @INS2I_CNRS @Kitware ,https://arxiv.org/abs/1908.07841,"This application paper presents a novel framework based on topological data analysis for the automatic evaluation and ranking of viscous finger simulation runs in an ensemble with respect to a reference acquisition. Individual fingers in a given time-step are associated with critical point pairs in the distance field to the injection point, forming persistence diagrams. Different metrics, based on optimal transport, for comparing time-varying persistence diagrams in this specific applicative case are introduced. We evaluate the relevance of the rankings obtained with these metrics, both qualitatively thanks to a lightweight web visual interface, and quantitatively by studying the deviation from a reference ranking suggested by experts. Extensive experiments show the quantitative superiority of our approach compared to traditional alternatives. Our web interface allows experts to conveniently explore the produced rankings. We show a complete viscous fingering case study demonstrating the utility of our approach in the context of porous media fluid flow, where our framework can be used to automatically discard physically-irrelevant simulation runs from the ensemble and rank the most plausible ones. We document an in-situ implementation to lighten I/O and performance constraints arising in the context of parametric studies. ","Ranking Viscous Finger Simulations to an Acquired Ground Truth with
Topology-aware Matchings",1,['Need to compare in-situ simulations to an acquired ground-truth? Checkout our new #ldav 2019 application paper using #TopologicalDataAnalysis and #OptimalTransport #TopologyToolKit #datascience #visualization #machinelearning @INS2I_CNRS @Kitware '],19,08,260
280,196,1380066781990109186,1215589379256786947,Ilja Behnke,Connecting embedded devices with realtime requirements to IP networks is risky. The impact of network-generated interrupts and networking overhead quickly becomes a threat to realtime. @edge_sys '21 we propose and evaluate 4 mitigation techniques #iot 1/2 Our approaches adaptively mitigate deadline-misses caused by high network loads. Real-time and network receive metrics are monitored to react to high loads and keep the potentially critical real-time MCU running. 2/2,https://arxiv.org/abs/2104.02393,"Manufacturing, automotive, and aerospace environments use embedded systems for control and automation and need to fulfill strict real-time guarantees. To facilitate more efficient business processes and remote control, such devices are being connected to IP networks. Due to the difficulty in predicting network packets and the interrelated workloads of interrupt handlers and drivers, devices controlling time critical processes stand under the risk of missing process deadlines when under high network loads. Additionally, devices at the edge of large networks and the internet are subject to a high risk of load spikes and network packet overloads. In this paper, we investigate strategies to detect network packet overloads in real-time and present four approaches to adaptively mitigate local deadline misses. In addition to two strategies mitigating network bursts with and without hysteresis, we present and discuss two novel mitigation algorithms, called Budget and Queue Mitigation. In an experimental evaluation, all algorithms showed mitigating effects, with the Queue Mitigation strategy enabling most packet processing while preventing lateness of critical tasks. ","Detecting and Mitigating Network Packet Overloads on Real-Time Devices
in IoT Systems",2,"[""Connecting embedded devices with realtime requirements to IP networks is risky. The impact of network-generated interrupts and networking overhead quickly becomes a threat to realtime. @edge_sys '21 we propose and evaluate 4 mitigation techniques #iot 1/2"", 'Our approaches adaptively mitigate deadline-misses caused by high network loads. Real-time and network receive metrics are monitored to react to high loads and keep the potentially critical real-time MCU running. 2/2']",21,04,479
281,15,1045009636175335425,759249,Dean Eckles,"Want to ""seed"" a behavior in a network without observing the network? Our new paper studies how to evaluate stochastic seeding strategies, such as taking ""one-hop"" from random starting nodes. @ajwchin @jugander The one-hop seeding strategy is designed to exploit a version of the friendship paradox (your friends have more friends than you do). It puts probability on many different seed sets, but more probability on seed sets with higher normalized in-degree. Experiments studying this strategy have randomized villages to targeting with one-hop or (uniform) random seeding. But because these strategies are stochastic, the random selected seeds can have higher in-degree than the one-hop seeds! Here in 3/8 cases in The estimators we propose exploit that we know the probability of some seed set under one-hop seeding. They can dramatically increase precision and power compared with a simple difference-in-means. You can also use these methods ""off-policy"" with existing field experiments that measure a network and randomize a few nodes to treatment. Our results so far are cautionary: one-step seeding does not seem to outperform random seeding, and might even be less effective. One exciting this about this work is making novel reuse of data from ambitious field experiments — enabled by public data from Cai et al. and data sharing by @betsylevyp et al. Know any other experiments we could apply this to?",https://arxiv.org/abs/1809.09561,"When trying to maximize the adoption of a behavior in a population connected by a social network, it is common to strategize about where in the network to seed the behavior, often with an element of randomness. Selecting seeds uniformly at random is a basic but compelling strategy in that it distributes seeds broadly throughout the network. A more sophisticated stochastic strategy, one-hop targeting, is to select random network neighbors of random individuals; this exploits a version of the friendship paradox, whereby the friend of a random individual is expected to have more friends than a random individual, with the hope that seeding a behavior at more connected individuals leads to more adoption. Many seeding strategies have been proposed, but empirical evaluations have demanded large field experiments designed specifically for this purpose and have yielded relatively imprecise comparisons of strategies. Here we show how stochastic seeding strategies can be evaluated more efficiently in such experiments, how they can be evaluated ""off-policy"" using existing data arising from experiments designed for other purposes, and how to design more efficient experiments. In particular, we consider contrasts between stochastic seeding strategies and analyze nonparametric estimators adapted from policy evaluation and importance sampling. We use simulations on real networks to show that the proposed estimators and designs can increase precision while yielding valid inference. We then apply our proposed estimators to two field experiments, one that assigned households to an intensive marketing intervention and one that assigned students to an anti-bullying intervention. ",Evaluating stochastic seeding strategies in networks,6,"['Want to ""seed"" a behavior in a network without observing the network? 
Our new paper studies how to evaluate stochastic seeding strategies, such as taking ""one-hop"" from random starting nodes.\n\n@ajwchin @jugander ', 'The one-hop seeding strategy is designed to exploit a version of the friendship paradox (your friends have more friends than you do). It puts probability on many different seed sets, but more probability on seed sets with higher normalized in-degree. https://t.co/ogOkmf6f7P', 'Experiments studying this strategy have randomized villages to targeting with one-hop or (uniform) random seeding. But because these strategies are stochastic, the random selected seeds can have higher in-degree than the one-hop seeds! Here in 3/8 cases in https://t.co/nQ3yuwoLGN https://t.co/l1YB2EQAIh', 'The estimators we propose exploit that we know the probability of some seed set under one-hop seeding. They can dramatically increase precision and power compared with a simple difference-in-means. https://t.co/vspuCznv4o', 'You can also use these methods ""off-policy"" with existing field experiments that measure a network and randomize a few nodes to treatment. Our results so far are cautionary: one-step seeding does not seem to outperform random seeding, and might even be less effective. https://t.co/N8S29cuWyI', 'One exciting this about this work is making novel reuse of data from ambitious field experiments — enabled by public data from Cai et al. https://t.co/Nc0j14Xt6e and data sharing by @betsylevyp et al. https://t.co/Q023CUd87L\nKnow any other experiments we could apply this to?']",18,09,1476
282,84,1100137278683385856,14093970,Esteban Moro,"A new version of our paper on communication strategies between agents in Reinforcement Learning hit the arXiv : It seems that agents that interact through Erdos-Renyi networks over-perform traditional fully-connected networks. It is funny that human networks deviate from Erdos-Renyi in many characteristics. But in this case, they are better and faster to find a solution. Maybe we are doing it all-wrong?",https://arxiv.org/abs/1902.06740,"A common technique to improve learning performance in deep reinforcement learning (DRL) and many other machine learning algorithms is to run multiple learning agents in parallel. A neglected component in the development of these algorithms has been how best to arrange the learning agents involved to improve distributed search. Here we draw upon results from the networked optimization literatures suggesting that arranging learning agents in communication networks other than fully connected topologies (the implicit way agents are commonly arranged in) can improve learning. We explore the relative performance of four popular families of graphs and observe that one such family (Erdos-Renyi random graphs) empirically outperforms the de facto fully-connected communication topology across several DRL benchmark tasks. Additionally, we observe that 1000 learning agents arranged in an Erdos-Renyi graph can perform as well as 3000 agents arranged in the standard fully-connected topology, showing the large learning improvement possible when carefully designing the topology over which agents communicate. We complement these empirical results with a theoretical investigation of why our alternate topologies perform better. Overall, our work suggests that distributed machine learning algorithms could be made more effective if the communication topology between learning agents was optimized. ","Leveraging Communication Topologies Between Learning Agents in Deep
Reinforcement Learning",2,"['A new version of our paper on communication strategies between agents in Reinforcement Learning hit the arXiv : It seems that agents that interact through Erdos-Renyi networks over-perform traditional fully-connected networks. ', 'It is funny that human networks deviate from Erdos-Renyi in many characteristics. But in this case, they are better and faster to find a solution. Maybe we are doing it all-wrong?']",19,02,420
283,19,1075317829564542983,175921010,Earl Patrick Bellinger,"[1/6] New paper out! Even though the title kind of gives it away, here's a quick rundown of the paper... #asteroseismology #exoplanets #machinelearning [2/6] Asteroseismology is the best way to determine the properties of stars. This diagram shows the precision with which asteroseismology constrains the ages, masses, radii, and other properties of ~100 solar-type stars that were observed by the NASA Kepler mission. [3/6] The great precision with which asteroseismology constrains stellar parameters is useful for many purposes, including studying exoplanets. This is because our ability to measure the properties of exoplanets hinges on our ability to measure the properties of their hosts. [4/6] But what if our measurements of the host stars are wrong? This is what we looked at in this paper. This diagram shows how much (percentage-wise) each stellar property changes if the measured temperature (x-axis) and/or metallicity (y-axis) are wrong. It's not much! [5/6] All in all, this result is not too shocking. Here are some of the stars that we studied shown in the ""classical"" (left) and ""asteroseismic"" (right) HR diagrams. If you've never seen the kind of diagram on the right before, it serves to reveal the power of asteroseismology. [6/6] So for all these results and more --- including comparisons of asteroseismology with Gaia DR2 data, and similar investigations into under-reported uncertainties, as well as a table of all confirmed exoplanets for these stars --- go check out the paper!",https://arxiv.org/abs/1812.06979,"The search for twins of the Sun and Earth relies on accurate characterization of stellar and exoplanetary parameters: i.e., ages, masses, and radii. In the modern era of asteroseismology, parameters of solar-like stars are derived by fitting theoretical models to observational data, which include measurements of their oscillation frequencies, metallicity [Fe/H], and effective temperature Teff. Combining this information with transit data furthermore yields the corresponding parameters for their exoplanets. While [Fe/H] and Teff are commonly stated to a precision of ~0.1 dex and ~100 K, the impact of errors in their measurement has not been studied in practice within the context of the parameters derived from them. Here we use the Stellar Parameters in an Instant (SPI) pipeline to estimate the parameters of nearly 100 stars observed by Kepler and Gaia, many of which are confirmed planet hosts. We adjust the reported spectroscopic measurements of these stars by introducing faux systematic errors and artificially increasing the reported uncertainties, and quantify the differences in the resulting parameters. We find that a systematic error of 0.1 dex in [Fe/H] translates to differences of only 4%, 2%, and 1% on average in the resulting stellar ages, masses, and radii, which are well within their uncertainties (~11%, 3.5%, 1.4%) as derived by SPI. We also find that increasing the uncertainty of [Fe/H] measurements by 0.1 dex increases the uncertainties by only 0.01 Gyr, 0.02 M_sun, and 0.01 R_sun, which are again well below their reported uncertainties (0.5 Gyr, 0.04 M_sun, 0.02 R_sun). The results for Teff at 100 K are similar. Stellar parameters from SPI are unchanged within uncertainties by errors of up to 0.14 dex or 175 K, and are even more robust to errors in Teff than the seismic scaling relations. Consequently, the parameters for their exoplanets are robust as well. 
","Stellar ages, masses and radii from asteroseismic modeling are robust to
systematic errors in spectroscopy",6,"[""[1/6] New paper out! Even though the title kind of gives it away, here's a quick rundown of the paper... \n\n\n\n#asteroseismology #exoplanets #machinelearning"", '[2/6] Asteroseismology is the best way to determine the properties of stars. This diagram shows the precision with which asteroseismology constrains the ages, masses, radii, and other properties of ~100 solar-type stars that were observed by the NASA Kepler mission. https://t.co/nPnNFsB4Yr', '[3/6] The great precision with which asteroseismology constrains stellar parameters is useful for many purposes, including studying exoplanets. This is because our ability to measure the properties of exoplanets hinges on our ability to measure the properties of their hosts. https://t.co/xPtl9RbqtX', ""[4/6] But what if our measurements of the host stars are wrong? This is what we looked at in this paper. This diagram shows how much (percentage-wise) each stellar property changes if the measured temperature (x-axis) and/or metallicity (y-axis) are wrong. It's not much! https://t.co/EHY5QTb7Cv"", '[5/6] All in all, this result is not too shocking. Here are some of the stars that we studied shown in the ""classical"" (left) and ""asteroseismic"" (right) HR diagrams. If you\'ve never seen the kind of diagram on the right before, it serves to reveal the power of asteroseismology. https://t.co/FD11eOrODL', '[6/6] So for all these results and more --- including comparisons of asteroseismology with Gaia DR2 data, and similar investigations into under-reported uncertainties, as well as a table of all confirmed exoplanets for these stars --- go check out the paper!']",18,12,1541
284,74,1075603767112851457,2896282038,Vijay Varma,"Looking for some Christmas reading? We built a new surrogate model for aligned-spin binary black hole waveforms. Head over to to find out why it is the greatest model in all the land! Learn more abt surrogates at , see teaser below. P.S. This work was done in collaboration with several members of the @SXSProject. P.P.S. I'm clearly biased, there are other great models as well 😀, some of which are mentioned in the paper. @mattkenworthy Glad you liked it! 🙂",http://arxiv.org/abs/1812.07865,"Numerical relativity (NR) simulations provide the most accurate binary black hole gravitational waveforms, but are prohibitively expensive for applications such as parameter estimation. Surrogate models of NR waveforms have been shown to be both fast and accurate. However, NR-based surrogate models are limited by the training waveforms' length, which is typically about 20 orbits before merger. We remedy this by hybridizing the NR waveforms using both post-Newtonian and effective one body waveforms for the early inspiral. We present NRHybSur3dq8, a surrogate model for hybridized nonprecessing numerical relativity waveforms, that is valid for the entire LIGO band (starting at $20~\text{Hz}$) for stellar mass binaries with total masses as low as $2.25\,M_{\odot}$. We include the $\ell \leq 4$ and $(5,5)$ spin-weighted spherical harmonic modes but not the $(4,1)$ or $(4,0)$ modes. This model has been trained against hybridized waveforms based on 104 NR waveforms with mass ratios $q\leq8$, and $|\chi_{1z}|,|\chi_{2z}| \leq 0.8$, where $\chi_{1z}$ ($\chi_{2z}$) is the spin of the heavier (lighter) BH in the direction of orbital angular momentum. The surrogate reproduces the hybrid waveforms accurately, with mismatches $\lesssim 3\times10^{-4}$ over the mass range $2.25M_{\odot} \leq M \leq 300 M_{\odot}$. At high masses ($M\gtrsim40M_{\odot}$), where the merger and ringdown are more prominent, we show roughly two orders of magnitude improvement over existing waveform models. We also show that the surrogate works well even when extrapolated outside its training parameter space range, including at spins as large as 0.998. Finally, we show that this model accurately reproduces the spheroidal-spherical mode mixing present in the NR ringdown signal. ","Surrogate model of hybridized numerical relativity binary black hole
waveforms",3,"['Looking for some Christmas reading? We built a new surrogate model for aligned-spin binary black hole waveforms. Head over to to find out why it is the greatest model in all the land! Learn more abt surrogates at , see teaser below. ', ""P.S. This work was done in collaboration with several members of the @SXSProject.\nP.P.S. I'm clearly biased, there are other great models as well 😀, some of which are mentioned in the paper."", '@mattkenworthy Glad you liked it! 🙂']",18,12,479
285,131,1488788646966800385,1141006043218108419,Clara Isabel Meister,"Neural language models are really good at explaining held-out data. So when we sample from them, why do they yield dull and degenerate text? Our paper analyzes this behavior using information theory, and corrects for it with a new sampling principle: Humans use natural language as a vehicle for communication—exactly the concept info theory studies. Results from psycholinguistics tell us we use it in an efficient and somewhat predictable way. One consequence is that we pack an expected amount of information into each word. Can we decode from a probabilistic language generator in a manner that mimics this process? Yes! Our paper proposes a new decoding principle: Instead of sampling only high probability words, we should instead sample from the set of words whose information content is close to the expected information content, i.e., the model's conditional entropy. We find that our decoding method leads to fewer repetitions than nucleus or top-k sampling while also performing strongly in quality ratings. Joint work with @tpimentelms @ryandcotterell and Gian! ",https://arxiv.org/abs/2202.00666,"Despite achieving incredibly low perplexities on myriad natural language corpora, today's language models still often underperform when used to generate text. This dichotomy has puzzled the language generation community for the last few years. In this work, we posit that the abstraction of natural language as a communication channel (\`a la Shannon, 1948) can provide new insights into the behaviors of probabilistic language generators, e.g., why high-probability texts can be dull or repetitive. Humans use language as a means of communicating information, and do so in a simultaneously efficient and error-minimizing manner; they choose each word in a string with this (perhaps subconscious) goal in mind. We propose that generation from probabilistic models should mimic this behavior. Rather than always choosing words from the high-probability region of the distribution--which have a low Shannon information content--we sample from the set of words with information content close to the conditional entropy of our model, i.e., close to the expected information content. This decision criterion can be realized through a simple and efficient implementation, which we call typical sampling. Automatic and human evaluations show that, in comparison to nucleus and top-k sampling, typical sampling offers competitive performance in terms of quality while consistently reducing the number of degenerate repetitions. ",Typical Decoding for Natural Language Generation,5,"['Neural language models are really good at explaining held-out data. So when we sample from them, why do they yield dull and degenerate text? \n\nOur paper analyzes this behavior using information theory, and corrects for it with a new sampling principle: ', 'Humans use natural language as a vehicle for communication—exactly the concept info theory studies. Results from psycholinguistics tell us we use it in an efficient and somewhat predictable way. One consequence is that we pack an expected amount of information into each word.', 'Can we decode from a probabilistic language generator in a manner that mimics this process?', ""Yes! 
Our paper proposes a new decoding principle: Instead of sampling only high probability words, we should instead sample from the set of words whose information content is close to the expected information content, i.e., the model's conditional entropy."", 'We find that our decoding method leads to fewer repetitions than nucleus or top-k sampling while also performing strongly in quality ratings.\n\nJoint work with @tpimentelms @ryandcotterell and Gian! https://t.co/R2XO2usU0g']",22,02,1095
286,104,1415475602832781315,901142962758758400,Hang-Hyun Jo,"Our new paper with Yohsuke Murase @yohm13, Janos Török, Janos Kertesz @janos_kertesz, and Kimmo Kaski @kimmokaski is out in the arXiv: ""Deep learning based parameter search for an agent based social network model"", see . Since 2014 we have studied the social network modeling based on Kumpula et al.'s model (). Our new paper is the synthesis of (almost) all of our previous works: , , and . I omitted the figure in the paper. By the way, you can watch the simulation video for the original Kumpula et al.'s model and its multilayer version (our work) , both of which are made by @yohm13. ",https://arxiv.org/abs/2107.06507,"Interactions between humans give rise to complex social networks that are characterized by heterogeneous degree distribution, weight-topology relation, overlapping community structure, and dynamics of links. Understanding such networks is a primary goal of science due to serving as the scaffold for many emergent social phenomena from disease spreading to political movements. An appropriate tool for studying them is agent-based modeling, in which nodes, representing persons, make decisions about creating and deleting links, thus yielding various macroscopic behavioral patterns. Here we focus on studying a generalization of the weighted social network model, being one of the most fundamental agent-based models for describing the formation of social ties and social networks. This Generalized Weighted Social Network (GWSN) model incorporates triadic closure, homophilic interactions, and various link termination mechanisms, which have been studied separately in the previous works. Accordingly, the GWSN model has an increased number of input parameters and the model behavior gets excessively complex, making it challenging to clarify the model behavior. We have executed massive simulations with a supercomputer and using the results as the training data for deep neural networks to conduct regression analysis for predicting the properties of the generated networks from the input parameters. The obtained regression model was also used for global sensitivity analysis to identify which parameters are influential or insignificant. We believe that this methodology is applicable for a large class of complex network models, thus opening the way for more realistic quantitative agent-based modeling. ","Deep learning based parameter search for an agent based social network
model",3,"['Our new paper with Yohsuke Murase @yohm13, Janos Török, Janos Kertesz @janos_kertesz, and Kimmo Kaski @kimmokaski is out in the arXiv: ""Deep learning based parameter search for an agent based social network model"", see .', ""Since 2014 we have studied the social network modeling based on Kumpula et al.'s model (https://t.co/xqUt51F5UB). Our new paper is the synthesis of (almost) all of our previous works: https://t.co/U8pqFp4Ud8, https://t.co/r1y1x0mr7n, and https://t.co/QjMrGNnkBf."", ""I omitted the figure in the paper. By the way, you can watch the simulation video for the original Kumpula et al.'s model https://t.co/54UspkHS6N and its multilayer version (our work) https://t.co/54UspkHS6N, both of which are made by @yohm13. https://t.co/20cvI9QCe5""]",21,07,638
287,9,1345109123042603010,1077995761487568896,Jon Miller,"New paper day! A year and 100+ observations of the black hole GRS 1915+105 by @mayuishungry reveals its faint state is driven by variable, sometimes Compton-thick obscuration. The pic shows before & after spectra at the same source luminosity. Madness. ",https://arxiv.org/abs/2012.15033,"GRS 1915$+$105 is a stellar-mass black hole that is well known for exhibiting at least 12 distinct classes of X-ray variability and correlated multi-wavelength behavior. Despite such extraordinary variability, GRS 1915$+$105 remained one of the brightest sources in the X-ray sky. However, in early 2019, the source became much fainter, apparently entering a new accretion state. Here, we report the results of an extensive, year-long monitoring campaign of GRS 1915$+$105 with the Neil Gehrels Swift Observatory. During this interval, the flux of GRS 1915$+$105 gradually diminished; the observed count rate eventually dropped by two orders of magnitude. Simple but robust spectral fits to these monitoring observations show that this new state results from the combination of a dramatic and persistent increase in internal obscuration, and a reduced mass accretion rate. The internal obscuration is the dominant effect, with a median value of $N_{H} = 7\times 10^{23}~{\rm cm}^{-2}$. In a number of observations, the source appears to be Compton-thick. We suggest that this state should be identified as the ""obscured state,"" and discuss the implications of this new (or rarely observed) accretion mode for black holes across the mass scale. ",The Novel Obscured State of Stellar-mass Black Hole GRS 1915+105,1,"['New paper day! \nA year and 100+ observations of the black hole GRS 1915+105 by @mayuishungry reveals its faint state is driven by variable, sometimes Compton-thick obscuration.\nThe pic shows before & after spectra at the same source luminosity. Madness.\n ']",20,12,267
288,189,1455533679934164996,1658897460,Tim Davis,"Today we released the VERTICO (The Virgo Environment Traced In CO) survey paper! Survey led by @DrTobyBrown using beautiful @almaobs CO data for ~50 Virgo Cluster galaxies to reveal how galaxies in dense environments evolve. The survey involves a lot of current and past @cardiffuni @cardiffPHYSX people, including myself, Nikki Zabel and @astroquokka",https://arxiv.org/abs/2111.00937,"We present the Virgo Environment Traced in CO (VERTICO) survey, a new effort to map $^{12}$CO($2-1$), $^{13}$CO($2-1$), and C$^{18}$O($2-1$) in 51 Virgo Cluster galaxies with the Atacama Compact Array, part of the Atacama Large Millimeter/submillimeter Array (ALMA). The primary motivation of VERTICO is to understand the physical mechanisms that perturb molecular gas disks, and therefore star formation and galaxy evolution, in dense environments. This first paper contains an overview of VERTICO's design and sample selection, $^{12}$CO($2-1$) observations, and data reduction procedures. We characterize global $^{12}$CO($2-1$) fluxes and molecular gas masses for the 49 detected VERTICO galaxies, provide upper limits for the two non-detections, and produce resolved $^{12}$CO($2-1$) data products (median resolution $= 8^{\prime\prime} \approx 640~{\rm pc}$). Azimuthally averaged $^{12}$CO($2-1$) radial intensity profiles are presented along with derived molecular gas radii. We demonstrate the scientific power of VERTICO by comparing the molecular gas size--mass scaling relation for our galaxies with a control sample of field galaxies, highlighting the strong effect that radius definition has on this correlation. We discuss the drivers of the form and scatter in the size--mass relation and highlight areas for future work. VERTICO is an ideal resource for studying the fate of molecular gas in cluster galaxies and the physics of environment-driven processes that perturb the star formation cycle. Upon public release, the survey will provide a homogeneous legacy dataset for studying galaxy evolution in our closest cluster. ",VERTICO: The Virgo Environment Traced In CO Survey,2,"['Today we released the VERTICO (The Virgo Environment Traced In CO) survey paper! Survey led by @DrTobyBrown using beautiful @almaobs CO data for ~50 Virgo Cluster galaxies to reveal how galaxies in dense environments evolve. \n ', 'The survey involves a lot of current and past @cardiffuni @cardiffPHYSX people, including myself, Nikki Zabel and @astroquokka']",21,11,372
289,178,1471796813204234241,957689165902118912,Alexandre Dauphin,"Fresh from the arxivs🙂 We study an interaction-induced topological phase in the extended Fermi-Hubbard model. We characterize both its bulk and edge topology. Finally, we discuss how to engineer and detect such a phase with dipolar gases. ➡️ ",https://arxiv.org/abs/2112.08785,"We investigate the topological properties of the bond order wave phase arising in the extended Fermi-Hubbard model. In particular, we uncover a topological sector, which remained elusive in previous finite-size numerical studies due to boundary effects. We first show that, for an infinite system, the bond order wave regime is characterized by two degenerate bulk states corresponding to the trivial and topological sectors. The latter turns out to be indeed characterized by an even degeneracy of the entanglement spectrum and longe-range order of a string correlation function. For finite size systems, we show that the topological sector can be stabilized by imposing a suitable border potential. This therefore provides a concrete protocol for the observation of topologically protected degenerate edge modes in finite-size systems. Furthermore, we show that the bulk of the system is characterized by exotic solitonic solutions interpolating between the trivial and topological sectors. Finally, we propose an implementation and detection scheme of this strongly-correlated topological phase in a quantum simulator based on dipolar Fermi gases in optical lattices. ","Revealing the topological nature of the bond order wave in a strongly
correlated quantum system",1,"['Fresh from the arxivs🙂 We study an interaction-induced topological phase in the extended Fermi-Hubbard model. We characterize both its bulk and edge topology. Finally, we discuss how to engineer and detect such a phase with dipolar gases.\n\n➡️ ']",21,12,254
290,213,1313650656993923072,972932719503081472,Akira Sone,"Check our new paper ""A Generalized Measure of Quantum Fisher Information"": A great collaboration with Marco (@MvsCerezo) Jacob (@JacobBeckey) and Patrick (@ColesQuantum). We present an efficiently computable lower bound on the QFI! @endo_suguru @MvsCerezo @JacobBeckey @ColesQuantum ありがとー",https://arxiv.org/abs/2010.02904,"In this work, we present a lower bound on the quantum Fisher information (QFI) which is efficiently computable on near-term quantum devices. This bound itself is of interest, as we show that it satisfies the canonical criteria of a QFI measure. Specifically, it is essentially a QFI measure for subnormalized states, and hence it generalizes the standard QFI in this sense. Our bound employs the generalized fidelity applied to a truncated state, which is constructed via the $m$ largest eigenvalues and their corresponding eigenvectors of the probe quantum state $\rho_{\theta}$. Focusing on unitary families of exact states, we analyze the properties of our proposed lower bound, and demonstrate its utility for efficiently estimating the QFI. ",Generalized Measure of Quantum Fisher Information,2,"['Check our new paper ""A Generalized Measure of Quantum Fisher Information"": \nA great collaboration with Marco (@MvsCerezo) Jacob (@JacobBeckey) and Patrick (@ColesQuantum). We present an efficiently computable lower bound on the QFI!', '@endo_suguru @MvsCerezo @JacobBeckey @ColesQuantum ありがとー']",20,10,295
291,191,1314581402441089024,262191481,Mark Stevenson,New paper: “Robustness and Reliability of Gender Bias Assessment in WordEmbeddings: The Role of Base Pairs” to appear at AACL-IJCNLP '20. Shows popular measures of bias are sensitive to the base pairs used as input. Work w/ @OOhaiyangOO and Alison Sneyd ,https://arxiv.org/abs/2010.02847,"It has been shown that word embeddings can exhibit gender bias, and various methods have been proposed to quantify this. However, the extent to which the methods are capturing social stereotypes inherited from the data has been debated. Bias is a complex concept and there exist multiple ways to define it. Previous work has leveraged gender word pairs to measure bias and extract biased analogies. We show that the reliance on these gendered pairs has strong limitations: bias measures based off of them are not robust and cannot identify common types of real-world bias, whilst analogies utilising them are unsuitable indicators of bias. In particular, the well-known analogy ""man is to computer-programmer as woman is to homemaker"" is due to word similarity rather than societal bias. This has important implications for work on measuring bias in embeddings and related work debiasing embeddings. ","Robustness and Reliability of Gender Bias Assessment in Word Embeddings:
The Role of Base Pairs",1,"[""New paper: “Robustness and Reliability of Gender Bias Assessment in WordEmbeddings: The Role of Base Pairs” to appear at AACL-IJCNLP '20. Shows popular measures of bias are sensitive to the base pairs used as input. Work w/ @OOhaiyangOO and Alison Sneyd ""]",20,10,260
292,151,1309300896208171009,734677275216470016,Guodong Zhang,"New paper alert: We provide a unified and automated method to analyze first-order methods for smooth & strongly-monotone games. The convergence rate for any first-order method can be obtained via a mechanical procedure of deriving and solving an SDP. Using this framework, we are able to recover or even improve known convergence bounds for a variety of algorithms. For example, we can recover the rate bound of the gradient method, proximal point method, and optimistic gradient method. We can also gain new insights and derive new results that were previously unknown. For example, we for the first time provide the global convergence result of negative momentum, which is difficult to obtain using existing approaches. Finally, we are able to extend this framework to a stochastic setting with multiplicative noise. See more details in the paper. Joint work with @XuchanB, @LaurentLessard and @RogerGrosse .",https://arxiv.org/abs/2009.11359,"The theory of integral quadratic constraints (IQCs) allows the certification of exponential convergence of interconnected systems containing nonlinear or uncertain elements. In this work, we adapt the IQC theory to study first-order methods for smooth and strongly-monotone games and show how to design tailored quadratic constraints to get tight upper bounds of convergence rates. Using this framework, we recover the existing bound for the gradient method~(GD), derive sharper bounds for the proximal point method~(PPM) and optimistic gradient method~(OG), and provide \emph{for the first time} a global convergence rate for the negative momentum method~(NM) with an iteration complexity $\mathcal{O}(\kappa^{1.5})$, which matches its known lower bound. In addition, for time-varying systems, we prove that the gradient method with optimal step size achieves the fastest provable worst-case convergence rate with quadratic Lyapunov functions. Finally, we further extend our analysis to stochastic games and study the impact of multiplicative noise on different algorithms. We show that it is impossible for an algorithm with one step of memory to achieve acceleration if it only queries the gradient once per batch (in contrast with the stochastic strongly-convex optimization setting, where such acceleration has been demonstrated). However, we exhibit an algorithm which achieves acceleration with two gradient queries per batch. ","A Unified Analysis of First-Order Methods for Smooth Games via Integral
Quadratic Constraints",5,"['New paper alert: \n\nWe provide a unified and automated method to analyze first-order methods for smooth & strongly-monotone games. The convergence rate for any first-order method can be obtained via a mechanical procedure of deriving and solving an SDP. ', 'Using this framework, we are able to recover or even improve known convergence bounds for a variety of algorithms. For example, we can recover the rate bound of the gradient method, proximal point method, and optimistic gradient method.', 'We can also gain new insights and derive new results that were previously unknown. For example, we for the first time provide the global convergence result of negative momentum, which is difficult to obtain using existing approaches.', 'Finally, we are able to extend this framework to a stochastic setting with multiplicative noise. See more details in the paper.', 'Joint work with @XuchanB, @LaurentLessard and @RogerGrosse .']",20,09,924
293,93,1063712263696236544,312448486,Dr. Karan Jani,New paper in collaboration with @CNRS: a CubeSat gravitational wave space-mission to find the elusive intermediate-sized black holes in our universe. A cost effective mission that can be launched within next few years! Submitted to @CQGplus: The mission will hunt for black holes 100 to Million times the mass of our Sun to cosmological distances. A sweet spot that is currently being missed by other proposed experiments. This new kid in the block is called #SAGE: SagnAc interferometer Gravitational wavE space observatory. It will have peak sensitivity in the gravitational wave spectrum between @LISACommunity and @LIGO-@ego_virgo. ,https://arxiv.org/abs/1811.04743,"SAGE (SagnAc interferometer for Gravitational wavE) is a project for a space observatory based on multiple 12-U CubeSats in geosynchronous orbit. The objective is a fast track mission which would fill the observational gap between LISA and ground based observatories. With albeit a lower sensitivity, it would allow early investigation of the nature and event rate of intermediate-mass black hole (IMBH) mergers, constraining our understanding of the universe formation by probing the building up of IMBH up to supermassive black holes. Technically, the CubeSats would create a triangular Sagnac interferometer with 140.000km roundtrip arm length, optimized to be sensitive to gravitational waves at frequencies between 10mHz and 2Hz. The nature of the Sagnac measurement makes it almost insensitive to position error, enabling the use of spacecrafts in ballistic trajectories. The light source and recombination units of the interferometer are based on compact fibered technologies without bulk optics. A peak sensitivity of 23 pm/sqrt(Hz) is expected at 1Hz assuming a 200mW internal laser source and 10-centimeter diameter apertures. Because of the absence of a test mass, the main limitation would come from the non-gravitational forces applied on the spacecrafts. However, conditionally upon our ability to partially post-process the effect of solar wind and solar pressure, SAGE would allow detection of gravitational waves with strains as low as a few 1e-19 within the 0.1 to 1Hz range. Averaged over the entire sky, and including the antenna gain of the Sagnac interferometer, the SAGE observatory would sense equal mass black hole mergers in the 1e4 to 1e6 solar masses range up to a luminosity distance of 800Mpc. Additionally, coalescence of stellar black holes (10Msun) around SMBH (IMBH) forming extreme (intermediate) mass ratio inspirals could be detected within a sphere of radius 200Mpc. ",SAGE: finding IMBH in the black hole desert,3,"['New paper in collaboration with @CNRS: a CubeSat gravitational wave space-mission to find the elusive intermediate-sized black holes in our universe. \n\nA cost effective mission that can be launched within next few years! \n\nSubmitted to @CQGplus: ', 'The mission will hunt for black holes 100 to Million times the mass of our Sun to cosmological distances. \n\nA sweet spot that is currently being missed by other proposed experiments. https://t.co/n2DjAPI974', 'This new kid in the block is called #SAGE: SagnAc interferometer Gravitational wavE space observatory.\n\nIt will have peak sensitivity in the gravitational wave spectrum between @LISACommunity and @LIGO-@ego_virgo. https://t.co/5ZsfmGuoMn']",18,11,666
294,41,1430803949439930371,1134375290581524480,Kai Schmitz,"New paper on the arXiv, together with Valerie Domcke and @Tevong You: We show how a rolling ""relaxion"" can be trapped by gauge field friction, highlight the importance of Schwinger pair production, and provide a theory motivation for the dark axion portal. ",https://arxiv.org/abs/2108.11295,"The dark axion portal is a coupling of an axion-like particle to a dark photon kinetically mixed with the visible photon. We show how this portal, when applied to the relaxion, can lead to cosmological relaxation of the weak scale using dark photon production. The key backreaction mechanism involves the Schwinger effect: As long as electroweak symmetry is unbroken, Schwinger production of massless Standard Model fermions, which carry dark millicharges, suppresses the dark photon production. Once the electroweak symmetry is broken, the fermions acquire mass and the suppression is lifted. An enhanced dark photon dissipation then traps the relaxion at a naturally small weak scale. Our model thus provides a novel link between the phenomenological dark axion portal, dark photons, and the hierarchy problem of the Higgs mass. ",Cosmological Relaxation through the Dark Axion Portal,1,"['New paper on the arXiv, together with Valerie Domcke and @Tevong You: We show how a rolling ""relaxion"" can be trapped by gauge field friction, highlight the importance of Schwinger pair production, and provide a theory motivation for the dark axion portal. ']",21,08,270
295,64,1483449137387483140,1038120916117606400,Benjamin Remy,"New Cosmo ∩ ML paper out! We propose a new method to solve the mass-mapping inverse problem by sampling from the posterior distribution with a neural prior. Work with François Lanusse, @Niall_Jeffrey, Jia Liu, @JLStarck, Ken Osato & Tim Schrabback (1/n) We show that we are able to sample convergence maps with the expected power spectrum using highly efficient annealed HMC sampling The prior was learned with denoising score matching over high resolution hydrodynamical simulations \kappaTNG (Osato et al. 2021) We are thus able to provide unprecedented resolution of mass-map reconstruction, alongside uncertainty quantification through the posterior distribution 🤩 Find the code and data on the associated github repo: using JAX, Haiku & TFP",https://arxiv.org/abs/2201.05561,"Weak lensing mass-mapping is a useful tool to access the full distribution of dark matter on the sky, but because of intrinsic galaxy ellipticies and finite fields/missing data, the recovery of dark matter maps constitutes a challenging ill-posed inverse problem. We introduce a novel methodology allowing for efficient sampling of the high-dimensional Bayesian posterior of the weak lensing mass-mapping problem, and relying on simulations for defining a fully non-Gaussian prior. We aim to demonstrate the accuracy of the method on simulations, and then proceed to applying it to the mass reconstruction of the HST/ACS COSMOS field. The proposed methodology combines elements of Bayesian statistics, analytic theory, and a recent class of Deep Generative Models based on Neural Score Matching. This approach allows us to do the following: 1) Make full use of analytic cosmological theory to constrain the 2pt statistics of the solution. 2) Learn from cosmological simulations any differences between this analytic prior and full simulations. 3) Obtain samples from the full Bayesian posterior of the problem for robust Uncertainty Quantification. We demonstrate the method on the $\kappa$TNG simulations and find that the posterior mean significantly outperfoms previous methods (Kaiser-Squires, Wiener filter, Sparsity priors) both on root-mean-square error and in terms of the Pearson correlation. We further illustrate the interpretability of the recovered posterior by establishing a close correlation between posterior convergence values and SNR of clusters artificially introduced into a field. Finally, we apply the method to the reconstruction of the HST/ACS COSMOS field and yield the highest quality convergence map of this field to date. ",Probabilistic Mass Mapping with Neural Score Estimation,5,"['New Cosmo ∩ ML paper out! We propose a new method to solve the mass-mapping inverse problem by sampling from the posterior distribution with a neural prior.\n\n\n\nWork with François Lanusse, @Niall_Jeffrey, Jia Liu, @JLStarck, Ken Osato & Tim Schrabback\n(1/n) ', 'We show that we are able to sample convergence maps with the expected power spectrum using highly efficient annealed HMC sampling https://t.co/Cyql6rEXcr', 'The prior was learned with denoising score matching over high resolution hydrodynamical simulations \\kappaTNG (Osato et al. 2021) https://t.co/L9kGcGuwow', 'We are thus able to provide unprecedented resolution of mass-map reconstruction, alongside uncertainty quantification through the posterior distribution 🤩 https://t.co/mazKE2niDe', 'Find the code and data on the associated github repo: https://t.co/VF5Yzgk4dy using JAX, Haiku & TFP']",22,01,787
296,50,973542264067805185,3301643341,Roger Grosse,"Flipout makes weight perturbations (evolution strategies, variational BNNs) as mini-batch-friendly as activation perturbations (dropout, batch norm). New paper with Yeming Wen, Paul Vicol, Jimmy Ba, and @dustinvtran @roydanroy @dustinvtran They need to be independent and symmetric around 0. But you could extend it to, e.g., matrix variate Gaussian perturbations, by rotating to a coordinate system that satisfies this.",https://arxiv.org/abs/1803.04386,"Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies. Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches. We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs. We find significant speedups in training neural networks with multiplicative Gaussian perturbations. We show that flipout is effective at regularizing LSTMs, and outperforms previous methods. Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services. ","Flipout: Efficient Pseudo-Independent Weight Perturbations on
Mini-Batches",2,"['Flipout makes weight perturbations (evolution strategies, variational BNNs) as mini-batch-friendly as activation perturbations (dropout, batch norm). New paper with Yeming Wen, Paul Vicol, Jimmy Ba, and @dustinvtran \n\n', '@roydanroy @dustinvtran They need to be independent and symmetric around 0. But you could extend it to, e.g., matrix variate Gaussian perturbations, by rotating to a coordinate system that satisfies this.']",18,03,428
297,79,1360251013765214214,1078236938669379584,Daniel Arteaga,"We just released a new paper, together with @jordiponsdotme: ""Multichannel-based learning for audio object extraction"" Accepted for presentation in #ICASSP2021. We show how to train a system to extract objects (audio + spatial location) out a multichannel mix without ever comparing with the reference objects. After having worked part-time in the field for 2-3 years behind closed doors, this is my first paper in deep learning.",https://arxiv.org/abs/2102.06142,"The current paradigm for creating and deploying immersive audio content is based on audio objects, which are composed of an audio track and position metadata. While rendering an object-based production into a multichannel mix is straightforward, the reverse process involves sound source separation and estimating the spatial trajectories of the extracted sources. Besides, cinematic object-based productions are often composed by dozens of simultaneous audio objects, which poses a scalability challenge for audio object extraction. Here, we propose a novel deep learning approach to object extraction that learns from the multichannel renders of object-based productions, instead of directly learning from the audio objects themselves. This approach allows tackling the object scalability challenge and also offers the possibility to formulate the problem in a supervised or an unsupervised fashion. Since, to our knowledge, no other works have previously addressed this topic, we first define the task and propose an evaluation methodology, and then discuss under what circumstances our methods outperform the proposed baselines. ",Multichannel-based learning for audio object extraction,3,"['We just released a new paper, together with @jordiponsdotme:\n\n""Multichannel-based learning for audio object extraction""\n\nAccepted for presentation in #ICASSP2021.\n\n ', 'We show how to train a system to extract objects (audio + spatial location) out a multichannel mix without ever comparing with the reference objects.', 'After having worked part-time in the field for 2-3 years behind closed doors, this is my first paper in deep learning.']",21,02,443
298,17,1111229665547612162,972586737871572997,Shota Gugushvili,New preprint: #Bayesian decompounding for discrete distributions. Joint with Frank van der Meulen and Ester Mariucci. The paper is a nice mixture of theory and practice. #julialang implementation is available on GitHub. Computer code and datasets are here: ,https://arxiv.org/abs/1903.11142,"Suppose that a compound Poisson process is observed discretely in time and assume that its jump distribution is supported on the set of natural numbers. In this paper we propose a non-parametric Bayesian approach to estimate the intensity of the underlying Poisson process and the distribution of the jumps. We provide a MCMC scheme for obtaining samples from the posterior. We apply our method on both simulated and real data examples, and compare its performance with the frequentist plug-in estimator proposed by Buchmann and Gr\""ubel. On a theoretical side, we study the posterior from the frequentist point of view and prove that as the sample size $n\rightarrow\infty$, it contracts around the `true', data-generating parameters at rate $1/\sqrt{n}$, up to a $\log n$ factor. ",Decompounding discrete distributions: A non-parametric Bayesian approach,2,"['New preprint: #Bayesian decompounding for discrete distributions. Joint with Frank van der Meulen and Ester Mariucci. The paper is a nice mixture of theory and practice. #julialang implementation is available on GitHub.\n\n', 'Computer code and datasets are here: https://t.co/thnCi9xbG1']",19,03,270
299,47,1297692630097301505,1169068112177745922,Alexis Plascencia,"New paper out 😀 @fileviez and I have studied electric dipole moments in gauge theories where a dark matter candidate is predicted by the cancellation of gauge anomalies A tale of EDMs and dark matter 1/6 In BSM theories where baryon and/or lepton number are promoted to local gauge symmetries, we need to introduce new fields to cancel all gauge anomalies 2/6 One of these fermions is neutral and automatically stable which makes it a good DM candidate. In addition, these models can naturally accommodate new sources of CP violation! 3/6 The new charged states lead to EDMs of the electron and the neutron via two-loop Barr-Zee diagrams. Namely, the charged fermions run in the loop in this diagram: 4/6 A crucial point is that not overproducing dark matter gives an upper bound on the symmetry breaking scale of the new U(1). Since the new charged fermions get their mass from this scale, this represents an upper bound on their masses. 5/6 We computed the EDMs and showed that, for large values of the CP-violating phase, future experiments that search for the EDM of the electron such as ACME will fully probe these theories 😀 6/6 ",https://arxiv.org/abs/2008.09116,"New sources of CP violation beyond the Standard Model are crucial to explain the baryon asymmetry in the Universe. We discuss the impact of new CP violating interactions in theories where a dark matter candidate is predicted by the cancellation of gauge anomalies. In these theories, the constraint on the dark matter relic density implies an upper bound on the new symmetry breaking scale from which all new states acquire their masses. We investigate in detail the predictions for electric dipole moments and show that if the relevant CP-violating phase is large, experiments such as the ACME collaboration will be able to fully probe the theory. ","Electric Dipole Moments, New Forces and Dark Matter",6,"['New paper out 😀 @fileviez and I have studied electric dipole moments in gauge theories where a dark matter candidate is predicted by the cancellation of gauge anomalies\n\n\nA tale of EDMs and dark matter 1/6', 'In BSM theories where baryon and/or lepton number are promoted to local gauge symmetries, we need to introduce new fields to cancel all gauge anomalies 2/6', 'One of these fermions is neutral and automatically stable which makes it a good DM candidate. In addition, these models can naturally accommodate new sources of CP violation! 3/6', 'The new charged states lead to EDMs of the electron and the neutron via two-loop Barr-Zee diagrams. Namely, the charged fermions run in the loop in this diagram: 4/6 https://t.co/B5cnzPqS7c', 'A crucial point is that not overproducing dark matter gives an upper bound on the symmetry breaking scale of the new U(1). Since the new charged fermions get their mass from this scale, this represents an upper bound on their masses. 5/6', 'We computed the EDMs and showed that, for large values of the CP-violating phase, future experiments that search for the EDM of the electron such as ACME will fully probe these theories 😀 6/6 https://t.co/7wLTnrmcON']",20,08,1157
300,183,1471070813637984257,804069495253962752,David Martínez Delgado,"We have posted our new MEGARA @GTCtelescope study of the blue stellar stream of NGC 7241, possibly one of the lowest mass streams detected beyond the Local Group. And we find the stream's progenitor is suffering a star-formation burst! (Credit: @ngc1535) ",https://arxiv.org/abs/2112.07029,"We study the striking case of a blue narrow stream with a possible globular cluster-like progenitor around the Milky Way-size galaxy NGC 7241 and its foreground dwarf companion. We present a follow-up spectroscopic study of this stream based on data taken with the MEGARA instrument at the 10.4-m Gran Telescopio Canarias using the integral field spectroscopy mode. Although our data suggest that this compact object in the stream is actually a foreground Milky Way halo star, we detect emission lines overlapping a less compact, bluer and fainter blob of the stream that is clearly visible in both ultra-violet and optical deep images. From its heliocentric systemic radial velocity derived from the [OIII] 5007A lines (V_syst= 1548.58+/-1.80 km\s^-1) and new UV and optical broad-band photometry, we conclude that this over-density could be the actual core of the stream, with an absolute magnitude of Mg~ -10 and a g-r = 0.08+/- 0.11, consistent with a remnant of a low-mass dwarf satellite undergoing a current episode of star formation. From the width of the stream, we calculate that the progenitor mass is between 6.4 x 10^6 Mo -2.7 x 10^7 Mo, which is typical of a dwarf galaxy. These estimates suggest that this is one of the lowest mass streams detected so far beyond the Local Group. We find that blue stellar streams containing star formation regions are commonly predicted by high-resolution cosmological simulations of galaxies lighter than the Milky Way. This scenario is consistent with the processes explaining the bursty star formation history of some dwarf satellites, which are followed by a gas depletion and a fast quenching once they enter within the virial radius of their host galaxies. Thus, it is likely that the stream's progenitor is suffering a star-formation burst comparable to those that have shaped the star-formation history of several Local Group dwarfs in the last few Gigayears. ","Once in a blue stream: Detection of recent star formation in the NGC
7241 stellar stream with MEGARA",1,"[""We have posted our new MEGARA @GTCtelescope study of the blue stellar stream of NGC 7241, possibly one of the lowest mass streams detected beyond the Local Group. And we find the stream's progenitor is suffering a star-formation burst!\n (Credit: @ngc1535) ""]",21,12,268
301,233,1435513485178384387,1166800207427883009,Pierpaolo Vivo,"🚨 Newest on the arXiv 🚨 We use Random Matrix Theory to study instabilities of complex fluids with many constituents (liquid-liquid phase separation), with an application to membraneless organelles in the cytoplasm. @KCLDisSyst @kclmathematics @CANES_CDT",https://arxiv.org/abs/2109.03164,"We develop a theory for thermodynamic instabilities of complex fluids composed of many interacting chemical species organised in families. This model includes partially structured and partially random interactions and can be solved exactly using tools from random matrix theory. The model exhibits three kinds of fluid instabilities: one in which the species form a condensate with a local density that depends on their family (family condensation); one in which species demix in two phases depending on their family (family demixing); and one in which species demix in a random manner irrespective of their family (random demixing). We determine the critical spinodal density of the three types of instabilities and find that the critical spinodal density is finite for both family condensation and family demixing, while for random demixing the critical spinodal density grows as the square root of the number of species. We use the developed framework to describe phase-separation instability of the cytoplasm induced by a change in pH. ","Instabilities of complex fluids with partially structured and partially
random interactions",1,"['🚨 Newest on the arXiv 🚨 We use Random Matrix Theory to study instabilities of complex fluids with many constituents (liquid-liquid phase separation), with an application to membraneless organelles in the cytoplasm. @KCLDisSyst @kclmathematics @CANES_CDT']",21,09,260
302,55,1205147135802560512,1232021550,Karen Levy,"I am *really* excited about our new paper, Roles for Computing in Social Change, on its way to @fatconference: This paper comes from *years* of thinking and talking among ourselves (me + @red_abebe @s010n @manish_raghavan @dgrobinson Jon Kleinberg) ... @fatconference @red_abebe @s010n @manish_raghavan @dgrobinson We wanted to think about ways computing research can support, not supplant, other forms of action toward a more just society -- taking advantage of computing's unique capabilities, while also recognizing what it can't do. @fatconference @red_abebe @s010n @manish_raghavan @dgrobinson We describe 4 ways computing researchers can position their work in the service of broad change: to diagnose and measure problems; to explicitly specify general policy goals; to clarify the limits of what technology can do; and to foreground long-standing social problems anew. @fatconference @red_abebe @s010n @manish_raghavan @dgrobinson In working on the paper, we thought hard about what we like so much about some of our favorite work, by folks like @latanyasweeney @PopTechWorks @jovialjoy @timnitGebru @annaeveryday @niftyc @mmitchell_ai @aylin_cim @random_walker @j2bryson and many others. @fatconference @red_abebe @s010n @manish_raghavan @dgrobinson @LatanyaSweeney @PopTechWorks @jovialjoy @timnitGebru @annaeveryday @niftyc @mmitchell_ai @aylin_cim @random_walker @j2bryson I learned so much from my co-authors in the course of writing this, and I'm really happy to have it out in the world! ",https://arxiv.org/abs/1912.04883,"A recent normative turn in computer science has brought concerns about fairness, bias, and accountability to the core of the field. Yet recent scholarship has warned that much of this technical work treats problematic features of the status quo as fixed, and fails to address deeper patterns of injustice and inequality. While acknowledging these critiques, we posit that computational research has valuable roles to play in addressing social problems -- roles whose value can be recognized even from a perspective that aspires toward fundamental social change. In this paper, we articulate four such roles, through an analysis that considers the opportunities as well as the significant risks inherent in such work. Computing research can serve as a diagnostic, helping us to understand and measure social problems with precision and clarity. As a formalizer, computing shapes how social problems are explicitly defined --- changing how those problems, and possible responses to them, are understood. Computing serves as rebuttal when it illuminates the boundaries of what is possible through technical means. And computing acts as synecdoche when it makes long-standing social problems newly salient in the public eye. We offer these paths forward as modalities that leverage the particular strengths of computational work in the service of social change, without overclaiming computing's capacity to solve social problems on its own. 
",Roles for Computing in Social Change,5,"['I am *really* excited about our new paper, Roles for Computing in Social Change, on its way to @fatconference: \nThis paper comes from *years* of thinking and talking among ourselves (me + @red_abebe @s010n @manish_raghavan @dgrobinson Jon Kleinberg) ...', ""@fatconference @red_abebe @s010n @manish_raghavan @dgrobinson We wanted to think about ways computing research can support, not supplant, other forms of action toward a more just society -- taking advantage of computing's unique capabilities, while also recognizing what it can't do."", '@fatconference @red_abebe @s010n @manish_raghavan @dgrobinson We describe 4 ways computing researchers can position their work in the service of broad change:\nto diagnose and measure problems;\nto explicitly specify general policy goals;\nto clarify the limits of what technology can do;\nand to foreground long-standing social problems anew.', '@fatconference @red_abebe @s010n @manish_raghavan @dgrobinson In working on the paper, we thought hard about what we like so much about some of our favorite work, by folks like @latanyasweeney @PopTechWorks @jovialjoy @timnitGebru @annaeveryday @niftyc @mmitchell_ai @aylin_cim @random_walker @j2bryson and many others.', ""@fatconference @red_abebe @s010n @manish_raghavan @dgrobinson @LatanyaSweeney @PopTechWorks @jovialjoy @timnitGebru @annaeveryday @niftyc @mmitchell_ai @aylin_cim @random_walker @j2bryson I learned so much from my co-authors in the course of writing this, and I'm really happy to have it out in the world! https://t.co/UgGLBleipL""]",19,12,1516
303,57,1339550155738128384,1189378867,Clément Moulin-Frier,"We've just written a first position paper as a kick-off of the ORIGINS project! Grounding #AI in the origins of human behavior This is an Exploratory Action funded by @Inria, allowing the recruitment of @nisioti_eleni as a new post-doc @FlowersINRIA ",https://arxiv.org/abs/2012.08564,"Recent advances in Artificial Intelligence (AI) have revived the quest for agents able to acquire an open-ended repertoire of skills. However, although this ability is fundamentally related to the characteristics of human intelligence, research in this field rarely considers the processes that may have guided the emergence of complex cognitive capacities during the evolution of the species. Research in Human Behavioral Ecology (HBE) seeks to understand how the behaviors characterizing human nature can be conceived as adaptive responses to major changes in the structure of our ecological niche. In this paper, we propose a framework highlighting the role of environmental complexity in open-ended skill acquisition, grounded in major hypotheses from HBE and recent contributions in Reinforcement learning (RL). We use this framework to highlight fundamental links between the two disciplines, as well as to identify feedback loops that bootstrap ecological complexity and create promising research directions for AI researchers. ",Grounding Artificial Intelligence in the Origins of Human Behavior,1,"[""We've just written a first position paper as a kick-off of the ORIGINS project! \n\nGrounding #AI in the origins of human behavior\n\n\nThis is an Exploratory Action funded by @Inria, allowing the recruitment of @nisioti_eleni as a new post-doc @FlowersINRIA ""]",20,12,264
304,65,1037252348702347264,946726588200218624,Laurent Bétermin,"New preprint titled ""Minimal Soft Lattice Theta Functions"" is now available on @arxiv. In this paper, the optimality properties of a new d-dimensional lattice theta function for mass interaction (i.e. condensed matter interaction) are investigated. ",https://arxiv.org/abs/1809.00473,"We study the minimality properties of a new type of ""soft"" theta functions. For a lattice $L\subset \mathbb{R}^d$, a $L$-periodic distribution of mass $\mu_L$ and an other mass $\nu_z$ centred at $z\in \mathbb{R}^d$, we define, for all scaling parameter $\alpha>0$, the translated lattice theta function $\theta_{\mu_L+\nu_z}(\alpha)$ as the Gaussian interaction energy between $\nu_z$ and $\mu_L$. We show that any strict local or global minimality result that is true in the point case $\mu=\nu=\delta_0$ also holds for $L\mapsto \theta_{\mu_L+\nu_0}(\alpha)$ and $z\mapsto \theta_{\mu_L+\nu_z}(\alpha)$ when the measures are radially symmetric with respect to the points of $L\cup \{z\}$ and sufficiently rescaled around them (i.e. at a low scale). The minimality at all scales is also proved when the radially symmetric measures are generated by a completely monotone kernel. The method is based on a generalized Jacobi transformation formula, some standard integral representations for lattice energies and an approximation argument. Furthermore, for the honeycomb lattice $\mathsf{H}$, the center of any primitive honeycomb is shown to minimize $z\mapsto \theta_{\mu_{\mathsf{H}}+\nu_z}(\alpha)$ and many applications are stated for other particular physically relevant lattices including the triangular, square, cubic, orthorhombic, body-centred-cubic and face-centred-cubic lattices. ",Minimal Soft Lattice Theta Functions,1,"['New preprint titled ""Minimal Soft Lattice Theta Functions"" is now available on @arxiv. In this paper, the optimality properties of a new d-dimensional lattice theta function for mass interaction (i.e. condensed matter interaction) are investigated. ']",18,09,256
305,22,1166325024371957760,621147651,zpenoyre,"New paper out today / we've invented a new kind of space elevator / one that we can build with modern materials / please consider sharing, it's a project worth considering in earnest: @hippke @EmSandford Happy to try - this was an independent genesis of an idea which may well have more overlap with the existing literature than I'm aware (though I've tried my best to find out what already exists) @hippke @EmSandford The basic concept - extending an elevator from the lunar surface, and in doing so being able to construct it relatively cheaply out of materials already available - was new to me, though I have since found some other mentions of lunar-space elevators (there's a wiki page) @hippke @EmSandford What surprised me was how achievable a protect this is - and how much could be gained from doing it - the existing literature seems to focus on future materials and vast translunar transport systems @hippke @EmSandford But a minimal cable - primarily used to support a Lagrange point base camp - which could be deployed in years not decades, for billions not trillions - seems to be unexplored as a concept @hippke @EmSandford (also this is very much the theorist in me talking - but I think an independent derivation of a set of equations is always worth putting out there) @hippke @EmSandford It may be that there are communities who have explored this in detail - and I look forward to hearing from them - the academic web is incomplete and hard to traverse - who knows what lurks out there",https://arxiv.org/abs/1908.09339,"Perhaps the biggest hurdle to mankind's expansion throughout the Solar System is the prohibitive cost of escaping Earth's gravitational pull. In its many forms, the space-elevator provides a way to circumvent this cost, allowing payloads to traverse along a cable extending from Earth to orbit. However, modern materials are not strong enough to build a cable capable of supporting its own weight. In this work we present an alternative to the classic space elevator, within reach of modern technology: The Spaceline. By extending a line, anchored on the moon, to deep within Earth's gravity well, we can construct a stable, traversable cable allowing free movement from the vicinity of Earth to the Moon's surface. With current materials, it is feasible to build a cable extending to close to the height of geostationary orbit, allowing easy traversal and construction between the Earth and the Moon. ","The Spaceline: a practical space elevator alternative achievable with
current technology",7,"[""New paper out today / we've invented a new kind of space elevator / one that we can build with modern materials / please consider sharing, it's a project worth considering in earnest: "", ""@hippke @EmSandford Happy to try - this was an independent genesis of an idea which may well have more overlap with the existing literature than I'm aware (though I've tried my best to find out what already exists)"", ""@hippke @EmSandford The basic concept - extending an elevator from the lunar surface, and in doing so being able to construct it relatively cheaply out of materials already available - was new to me, though I have since found some other mentions of lunar-space elevators (there's a wiki page)"", '@hippke @EmSandford What surprised me was how achievable a protect this is - and how much could be gained from doing it - the existing literature seems to focus on future materials and vast translunar transport systems', '@hippke @EmSandford But a minimal cable - primarily used to support a Lagrange point base camp - which could be deployed in years not decades, for billions not trillions - seems to be unexplored as a concept', '@hippke @EmSandford (also this is very much the theorist in me talking - but I think an independent derivation of a set of equations is always worth putting out there)', '@hippke @EmSandford It may be that there are communities who have explored this in detail - and I look forward to hearing from them - the academic web is incomplete and hard to traverse - who knows what lurks out there']",19,08,1519
306,79,1470466107672825861,1346905158706483203,Connor Lawless,"Excited to announce that our new #AAAI2022 paper 'Interpretable clustering via Multi-Polytope Machines' is up on arxiv! We present a MINLP formulation for interpretable clustering that jointly clusters points and constructs polytopes around each cluster. The secret sauce in our approach are additional constraints on the hyperplanes in each polytope that allow us to make cluster explanations more interpretable (recovering popular model classes like rule sets and score cards). This was work with my amazing (twitter-less) colleagues Lam, Dzung, Jayant, and Chandra at IBM Research from an epic virtual summer internship.",https://arxiv.org/abs/2112.05653,"Clustering is a popular unsupervised learning tool often used to discover groups within a larger population such as customer segments, or patient subtypes. However, despite its use as a tool for subgroup discovery and description - few state-of-the-art algorithms provide any rationale or description behind the clusters found. We propose a novel approach for interpretable clustering that both clusters data points and constructs polytopes around the discovered clusters to explain them. Our framework allows for additional constraints on the polytopes - including ensuring that the hyperplanes constructing the polytope are axis-parallel or sparse with integer coefficients. We formulate the problem of constructing clusters via polytopes as a Mixed-Integer Non-Linear Program (MINLP). To solve our formulation we propose a two phase approach where we first initialize clusters and polytopes using alternating minimization, and then use coordinate descent to boost clustering performance. We benchmark our approach on a suite of synthetic and real world clustering problems, where our algorithm outperforms state of the art interpretable and non-interpretable clustering algorithms. ",Interpretable Clustering via Multi-Polytope Machines,3,"[""Excited to announce that our new #AAAI2022 paper 'Interpretable clustering via Multi-Polytope Machines' is up on arxiv! We present a MINLP formulation for interpretable clustering that jointly clusters points and constructs polytopes around each cluster. \n\n"", 'The secret sauce in our approach are additional constraints on the hyperplanes in each polytope that allow us to make cluster explanations more interpretable (recovering popular model classes like rule sets and score cards). https://t.co/Q0KCUDZtwA', 'This was work with my amazing (twitter-less) colleagues Lam, Dzung, Jayant, and Chandra at IBM Research from an epic virtual summer internship.']",21,12,638
307,78,1105569383085158401,2427184074,Christopher Berry,"New on the arXiv today: an #Astro2020 white paper on the awesomeness of extreme-mass-ratio inspirals, a unique #GravitationalWave source for @LISACommunity Extreme-mass-ratio inspirals are when one smaller stellar-mass black hole orbits a supermassive one. The resulting orbits are extremely complicated (as illustrated in this rather lovely figure). It is this structure which encodes lots of information into the #GravitationalWaves We're not sure how many inspirals we'll see—measuring the number would teach us lots about galactic cores. We estimated ~1–2000 per year, so they're a bankable source across a 4 year @LISACommunity mission! I describe more in my blog #Astro2020 Since the orbits are so complicated, we'll be able to measure the properties of the source *really* well. Black hole spins are hard to measure with @LIGO, but here we'll get them to 1 part in 10,000–1,000,000! The spins encode lots of information about how black holes grow Because we'll get such ridiculously detailed measurements of the properties, we can really test the structure of the massive black hole. If there's a missing piece to @AlbertEinstein's theory of general relativity, here is an excellent place to look! #Astro2020 Putting together a few extreme-mass-ratio inspirals, we can start to reconstruct the mass distribution of massive black holes. @LISACommunity is sensitive to black holes with masses 10^4–10^7 times the mass of our Sun. We don't know what the distribution is like at the lower end! If we can cross-correlate the location of the source we work out from the #GravitaitonalWave signal with galaxy catalogues, we can also measure the Hubble constant, perhaps to 1% after 20 detections! This is a great check of other methods #Astro2020 Extreme-mass-ratio inspirals are a wonder opportunity for #GravitationalWave astronomy. There is only one currently scheduled mission that can detect them: @LISACommunity 💞💓❤️ #Astro2020",https://arxiv.org/abs/1903.03686,"The inspiral of a stellar-mass compact object into a massive ($\sim 10^{4}$-$10^{7} M_{\odot}$) black hole produces an intricate gravitational-wave signal. Due to the extreme-mass ratios involved, these systems complete $\sim 10^{4}$-$10^{5}$ orbits, most of them in the strong-field region of the massive black hole, emitting in the frequency range $\sim10^{-4}-1~$Hz. This makes them prime sources for the space-based observatory LISA (Laser Interferometer Space Antenna). LISA observations will enable high-precision measurements of the physical characteristics of these extreme-mass-ratio inspirals (EMRIs): redshifted masses, massive black hole spin and orbital eccentricity can be determined with fractional errors $\sim 10^{-4}$-$10^{-6}$, the luminosity distance with better than $\sim 10\%$ precision, and the sky localization to within a few square degrees. EMRIs will provide valuable information about stellar dynamics in galactic nuclei, as well as precise data about massive black hole populations, including the distribution of masses and spins. They will enable percent-level measurements of the multipolar structure of massive black holes, precisely testing the strong-gravity properties of their spacetimes. EMRIs may also provide cosmographical data regarding the expansion of the Universe if inferred source locations can be correlated with galaxy catalogs. ","The unique potential of extreme mass-ratio inspirals for
gravitational-wave astronomy",8,"['New on the arXiv today: an #Astro2020 white paper on the awesomeness of extreme-mass-ratio inspirals, a unique #GravitationalWave source for @LISACommunity ', 'Extreme-mass-ratio inspirals are when one smaller stellar-mass black hole orbits a supermassive one. The resulting orbits are extremely complicated (as illustrated in this rather lovely figure). It is this structure which encodes lots of information into the #GravitationalWaves https://t.co/T39r9yPVij', ""We're not sure how many inspirals we'll see—measuring the number would teach us lots about galactic cores. We estimated ~1–2000 per year, so they're a bankable source across a 4 year @LISACommunity mission! I describe more in my blog https://t.co/N0XOHYqXId #Astro2020"", ""Since the orbits are so complicated, we'll be able to measure the properties of the source *really* well. Black hole spins are hard to measure with @LIGO, but here we'll get them to 1 part in 10,000–1,000,000! The spins encode lots of information about how black holes grow"", ""Because we'll get such ridiculously detailed measurements of the properties, we can really test the structure of the massive black hole. If there's a missing piece to @AlbertEinstein's theory of general relativity, here is an excellent place to look! #Astro2020"", ""Putting together a few extreme-mass-ratio inspirals, we can start to reconstruct the mass distribution of massive black holes. @LISACommunity is sensitive to black holes with masses 10^4–10^7 times the mass of our Sun. We don't know what the distribution is like at the lower end!"", 'If we can cross-correlate the location of the source we work out from the #GravitaitonalWave signal with galaxy catalogues, we can also measure the Hubble constant, perhaps to 1% after 20 detections! This is a great check of other methods #Astro2020', 'Extreme-mass-ratio inspirals are a wonder opportunity for #GravitationalWave astronomy. There is only one currently scheduled mission that can detect them: @LISACommunity https://t.co/wJEjE7ORi6 💞💓❤️ #Astro2020']",19,03,1968
308,99,991727874657796096,52381876,Edward Frenkel,New paper with Davide Gaiotto in which we apply powerful tools of quantum field theory to create a kind of backdoor to the geometric Langlands Program: @chrobertew I wrote an introduction to this general subject for non-mathematicians in Chapters 16 and 17 of LOVE & MATH. That's a good place to start.,https://arxiv.org/abs/1805.00203,"We review and extend the vertex algebra framework linking gauge theory constructions and a quantum deformation of the Geometric Langlands Program. The relevant vertex algebras are associated to junctions of two boundary conditions in a 4d gauge theory and can be constructed from the basic ones by following certain standard procedures. Conformal blocks of modules over these vertex algebras give rise to twisted D-modules on the moduli stacks of G-bundles on Riemann surfaces which have applications to the Langlands Program. In particular, we construct a series of vertex algebras for every simple Lie group G which we expect to yield D-module kernels of various quantum Geometric Langlands dualities. We pay particular attention to the full duality group of gauge theory, which enables us to extend the standard qGL duality to a larger duality groupoid. We also discuss various subtleties related to the spin and gerbe structures and present a detailed analysis for the U(1) and SU(2) gauge theories. ","Quantum Langlands dualities of boundary conditions, D-modules, and
conformal blocks",2,"['New paper with Davide Gaiotto in which we apply powerful tools of quantum field theory to create a kind of backdoor to the geometric Langlands Program:\n ', ""@chrobertew I wrote an introduction to this general subject for non-mathematicians in Chapters 16 and 17 of LOVE & MATH. That's a good place to start.""]",18,05,316
309,200,1301382109148266496,801743,Neil Ernst,"Together with @JeffCarver32, @mendezfe and @mtorchiano we have written up a study on peer review in software engineering. We looked at how people conduct reviews and what qualities reviewers look for in a paper. Paper: Replication: @siccegge @zacchiro @JeffCarver32 @mendezfe @mtorchiano Seeing you are in crypto, I would guess that the validation is easier to parse than a proof. But variance is quite high in the estimates.",https://arxiv.org/abs/2009.01209,"Peer review is a key activity intended to preserve the quality and integrity of scientific publications. However, in practice it is far from perfect. We aim at understanding how reviewers, including those who have won awards for reviewing, perform their reviews of software engineering papers to identify both what makes a good reviewing approach and what makes a good paper. We first conducted a series of in-person interviews with well-respected reviewers in the software engineering field. Then, we used the results of those interviews to develop a questionnaire used in an online survey and sent out to reviewers from well-respected venues covering a number of software engineering disciplines, some of whom had won awards for their reviewing efforts. We analyzed the responses from the interviews and from 175 reviewers who completed the online survey (including both reviewers who had won awards and those who had not). We report on several descriptive results, including: 45% of award-winners are reviewing 20+ conference papers a year, while 28% of non-award winners conduct that many. 88% of reviewers are taking more than two hours on journal reviews. We also report on qualitative results. To write a good review, the important criteria were it should be factual and helpful, ranked above others such as being detailed or kind. The most important features of papers that result in positive reviews are clear and supported validation, an interesting problem, and novelty. Conversely, negative reviews tend to result from papers that have a mismatch between the method and the claims and from those with overly grandiose claims. The main recommendation for authors is to make the contribution of the work very clear in their paper. In addition, reviewers viewed data availability and its consistency as being important. ",Understanding Peer Review of Software Engineering Papers,2,"['Together with @JeffCarver32, @mendezfe and @mtorchiano we have written up a study on peer review in software engineering. We looked at how people conduct reviews and what qualities reviewers look for in a paper. Paper: Replication: ', '@siccegge @zacchiro @JeffCarver32 @mendezfe @mtorchiano Seeing you are in crypto, I would guess that the validation is easier to parse than a proof. But variance is quite high in the estimates.']",20,09,439
310,161,1186852474679783425,869862586610851840,Jeannette Bohg,"Understanding dynamic 3D environment is crucial for robotic agents. We propose MeteorNet for learning representations of dynamic 3D point cloud sequences (Oral @ICCV19). Project Page: Arxiv: We achieve SOTA on a variety of 3D recognition tasks including action recognition, semantic segmentation and scene flow estimation. Kudos to Xingyu Liu @xing_yu_liu and Mengyuan Yan @StanfordIPRL",https://arxiv.org/abs/1910.09165,"Understanding dynamic 3D environment is crucial for robotic agents and many other applications. We propose a novel neural network architecture called $MeteorNet$ for learning representations for dynamic 3D point cloud sequences. Different from previous work that adopts a grid-based representation and applies 3D or 4D convolutions, our network directly processes point clouds. We propose two ways to construct spatiotemporal neighborhoods for each point in the point cloud sequence. Information from these neighborhoods is aggregated to learn features per point. We benchmark our network on a variety of 3D recognition tasks including action recognition, semantic segmentation and scene flow estimation. MeteorNet shows stronger performance than previous grid-based methods while achieving state-of-the-art performance on Synthia. MeteorNet also outperforms previous baseline methods that are able to process at most two consecutive point clouds. To the best of our knowledge, this is the first work on deep learning for dynamic raw point cloud sequences. ",MeteorNet: Deep Learning on Dynamic 3D Point Cloud Sequences,2,"['Understanding dynamic 3D environment is crucial for robotic agents. \n\nWe propose MeteorNet for learning representations of dynamic 3D point cloud sequences (Oral @ICCV19).\n\nProject Page: \n\nArxiv: ', 'We achieve SOTA on a variety of 3D recognition tasks including action recognition, semantic segmentation and scene flow estimation.\n\nKudos to Xingyu Liu @xing_yu_liu and Mengyuan Yan @StanfordIPRL']",19,10,408
311,43,1072725398285377541,823277120944242689,Will Kinney,"New paper out today. Fun idea: cosmology where the speed of sound is faster than the speed of light. @RobJLow You would think, but remarkably enough the answer appears to be no, at least classically. Babichev, Mukhanov, and Vikman wrote a wonderful paper exploring this question about ten tears ago: @RobJLow The trick is that these are not tachyons, but a superluminal ether. This means that it it not possible, for example, to build a tachyonic antitelephone. (Or, in less technical terms, to go back in time and kill baby Hilter.) @RobJLow Yes, exactly! Theories of this type can always be re-written as equivalent bimetric theories. @RobJLow One of the very intriguing things about these solutions is that at the moment of the Big Bang singularity, the speed of sound diverges, and the effective metric becomes purely spacelike. That is, the 3+1 spacetime is reduced to an effective 3+0 spacetime. @RobJLow In this (somewhat narrow) sense, the time dimension of the spacetime is an emergent property. @RobJLow I'm actually not completely positive whether or not tachyacoustic theories obey the Dominant Energy Condition in all cases. But then cosmologists violate the DEC three times before breakfast these days anyway. It's the Null Energy Condition that's a killer.",https://arxiv.org/abs/1812.04447,"Recent studies show that there is tension between the de Sitter swampland conjectures proposed by Obeid, et al. and inflationary cosmology. In this paper, we consider an alternative to inflation, `tachyacoustic' cosmology, in light of swampland conjectures. In tachyacoustic models, primordial perturbations are generated by a period of superluminal sound speed instead of accelerating expansion. We show that realizations of tachyacoustic Lagrangians can be consistent with the de Sitter swampland conjectures, and therefore can in principle be consistent with a UV-complete theory. We derive a general condition for models with $c_S > 1$ to be consistent with swampland conjectures. ","Consistency of Tachyacoustic Cosmology with de Sitter Swampland
Conjectures",7,"['New paper out today. Fun idea: cosmology where the speed of sound is faster than the speed of light.\n\n ', '@RobJLow You would think, but remarkably enough the answer appears to be no, at least classically. Babichev, Mukhanov, and Vikman wrote a wonderful paper exploring this question about ten tears ago:\n\nhttps://t.co/EdDfHkArsk', '@RobJLow The trick is that these are not tachyons, but a superluminal ether. This means that it it not possible, for example, to build a tachyonic antitelephone. (Or, in less technical terms, to go back in time and kill baby Hilter.)\n\nhttps://t.co/qshhUuqzqA', '@RobJLow Yes, exactly! Theories of this type can always be re-written as equivalent bimetric theories.', '@RobJLow One of the very intriguing things about these solutions is that at the moment of the Big Bang singularity, the speed of sound diverges, and the effective metric becomes purely spacelike. That is, the 3+1 spacetime is reduced to an effective 3+0 spacetime.', '@RobJLow In this (somewhat narrow) sense, the time dimension of the spacetime is an emergent property.', ""@RobJLow I'm actually not completely positive whether or not tachyacoustic theories obey the Dominant Energy Condition in all cases. But then cosmologists violate the DEC three times before breakfast these days anyway. It's the Null Energy Condition that's a killer.""]",18,12,1299
312,92,1285176923920965632,91420905,Alex Smith,"My new paper ""The Completed SDSS-IV Extended Baryon Oscillation Spectroscopic Survey: N-body Mock Challenge for the Quasar Sample"" is on the arXiv today, which is part of the release of the final @eBOSSurvey cosmology results The aim of the mock challenge was to test the models used in the eBOSS quasar clustering analysis on a wide range of mock catalogues. We include observational effects, use different models to add quasars to the mocks, and cover a range of different cosmologies By looking at the scatter in the results between the different mocks, we can estimate a systematic uncertainty in our measurements These uncertainties are included in the errors for the measurements in configuration space () and in Fourier space () @BillWrightCosmo No, we didn't look at the effect of cosmologies with non-zero neutrino mass. But we made mocks with a wide range of cosmological parameters to get a conservative estimate of the systematic uncertainty @BillWrightCosmo I think it's the same for the other tracers. But even taking the conservative errors due to cosmology that we use, this is only a small part of the total error in the measurements, so shouldn't affect the neutrino mass constraints. This will be more important to test for DESI",https://arxiv.org/abs/2007.09003,"The growth rate and expansion history of the Universe can be measured from large galaxy redshift surveys using the Alcock-Paczynski effect. We validate the Redshift Space Distortion models used in the final analysis of the Sloan Digital Sky Survey (SDSS) extended Baryon Oscillation Spectroscopic Survey (eBOSS) Data Release 16 quasar clustering sample, in configuration and Fourier space, using a series of HOD mock catalogues generated using the OuterRim N-body simulation. We test three models on a series of non-blind mocks, in the OuterRim cosmology, and blind mocks, which have been rescaled to new cosmologies, and investigate the effects of redshift smearing and catastrophic redshifts. We find that for the non-blind mocks, the models are able to recover $f\sigma_8$ to within 3% and $\alpha_\parallel$ and $\alpha_\bot$ to within 1%. The scatter in the measurements is larger for the blind mocks, due to the assumption of an incorrect fiducial cosmology. From this mock challenge, we find that all three models perform well, with similar systematic errors on $f\sigma_8$, $\alpha_\parallel$ and $\alpha_\bot$ at the level of $\sigma_{f\sigma_8}=0.013$, $\sigma_{\alpha_\parallel}=0.012$ and $\sigma_{\alpha_\bot}=0.008$. The systematic error on the combined consensus is $\sigma_{f\sigma_8}=0.011$, $\sigma_{\alpha_\parallel}=0.008$ and $\sigma_{\alpha_\bot}=0.005$, which is used in the final DR16 analysis. For BAO fits in configuration and Fourier space, we take conservative systematic errors of $\sigma_{\alpha_\parallel}=0.010$ and $\sigma_{\alpha_\bot}=0.007$. ","The Completed SDSS-IV Extended Baryon Oscillation Spectroscopic Survey:
N-body Mock Challenge for the Quasar Sample",6,"['My new paper ""The Completed SDSS-IV Extended Baryon Oscillation Spectroscopic Survey: N-body Mock Challenge for the Quasar Sample"" is on the arXiv today, which is part of the release of the final @eBOSSurvey cosmology results ', 'The aim of the mock challenge was to test the models used in the eBOSS quasar clustering analysis on a wide range of mock catalogues. We include observational effects, use different models to add quasars to the mocks, and cover a range of different cosmologies', 'By looking at the scatter in the results between the different mocks, we can estimate a systematic uncertainty in our measurements', 'These uncertainties are included in the errors for the measurements in configuration space (https://t.co/aTG2zk9Nes) and in Fourier space (https://t.co/qfpxUDkn3a)', ""@BillWrightCosmo No, we didn't look at the effect of cosmologies with non-zero neutrino mass. But we made mocks with a wide range of cosmological parameters to get a conservative estimate of the systematic uncertainty"", ""@BillWrightCosmo I think it's the same for the other tracers. But even taking the conservative errors due to cosmology that we use, this is only a small part of the total error in the measurements, so shouldn't affect the neutrino mass constraints. This will be more important to test for DESI""]",20,07,1273
313,15,1432880589963763712,4438354094,Tom Wong,"New paper with @Creighton undergraduate Jacob Rapoza! ""Search by Lackadaisical Quantum Walk with Symmetry Breaking."" Jacob started doing research with me the summer before his freshman year, and he is now a junior. There's something incredibly rewarding about mentoring students in their first research project, to say, ""You've contributed to the body of scientific knowledge!"" Or, ""You've solved a problem that no one else has solved before!"" Or, ""You're the world expert on this topic!""",https://arxiv.org/abs/2108.13856,"The lackadaisical quantum walk is a lazy version of a discrete-time, coined quantum walk, where each vertex has a weighted self-loop that permits the walker to stay put. They have been used to speed up spatial search on a variety of graphs, including periodic lattices, strongly regular graphs, Johnson graphs, and the hypercube. In these prior works, the weights of the self-loops preserved the symmetries of the graphs. In this paper, we show that the self-loops can break all the symmetries of vertex-transitive graphs while providing the same computational speedups. Only the weight of the self-loop at the marked vertex matters, and the remaining self-loop weights can be chosen randomly, as long as they are small compared to the degree of the graph. ",Search by Lackadaisical Quantum Walk with Symmetry Breaking,2,"['New paper with @Creighton undergraduate Jacob Rapoza! ""Search by Lackadaisical Quantum Walk with Symmetry Breaking."" Jacob started doing research with me the summer before his freshman year, and he is now a junior. ', 'There\'s something incredibly rewarding about mentoring students in their first research project, to say, ""You\'ve contributed to the body of scientific knowledge!"" Or, ""You\'ve solved a problem that no one else has solved before!"" Or, ""You\'re the world expert on this topic!""']",21,08,502
314,152,1445007474327793665,732494566545203201,David Klindt,"New paper on score-based generative classifiers (SBGCs) Diffusion models have produced impressive results We show how they can be used as classifiers on CIFAR-10. Work w/ @zimmerrol @schott_lukas @YsongStanford @adric_dunn (1/6) While previous methods have shown a trade-off between generative and classification performance, our SBGC model achieves new state-of-the-art performances both in likelihoods and classification accuracy for generative classifiers on CIFAR-10. (2/6) In the past, generative classifiers (analysis-by-synthesis) have been shown to increase adversarial robustness on MNIST However, so far these results have not been extended to complex natural image datasets such as CIFAR-10. (3/6) Previous work showed that interpolating between images increases likelihoods, suggesting model failure on out-of-distribution data @jh_jacobsen By contrast, our SBGC model correctly produces convex interpolation curves. (4/6) Nevertheless, we find that our model spectacularly fails against gradient-based adversarial attacks. We argue that SBGCs have no structural advantage over discriminative classifiers and that analysis-by-synthesis alone is not sufficient for out-of-distribution robustness. (5/6) Still, our work shows that SBGCs can achieve very competitive likelihoods and classification accuracies which encourage further research! Thanks for fun discussions and feedback @poolio @wgrathwohl @yash_j_sharma @wielandbr @dylanpaiton @eero_simoncelli (6/6)",http://arxiv.org/abs/2110.00473,"The tremendous success of generative models in recent years raises the question whether they can also be used to perform classification. Generative models have been used as adversarially robust classifiers on simple datasets such as MNIST, but this robustness has not been observed on more complex datasets like CIFAR-10. Additionally, on natural image datasets, previous results have suggested a trade-off between the likelihood of the data and classification accuracy. In this work, we investigate score-based generative models as classifiers for natural images. We show that these models not only obtain competitive likelihood values but simultaneously achieve state-of-the-art classification accuracy for generative classifiers on CIFAR-10. Nevertheless, we find that these models are only slightly, if at all, more robust than discriminative baseline models on out-of-distribution tasks based on common image corruptions. Similarly and contrary to prior results, we find that score-based are prone to worst-case distribution shifts in the form of adversarial perturbations. Our work highlights that score-based generative models are closing the gap in classification accuracy compared to standard discriminative models. While they do not yet deliver on the promise of adversarial and out-of-domain robustness, they provide a different approach to classification that warrants further research. 
",Score-Based Generative Classifiers,6,"['New paper on score-based generative classifiers (SBGCs) \n\nDiffusion models have produced impressive results \n\nWe show how they can be used as classifiers on CIFAR-10.\n\nWork w/ @zimmerrol @schott_lukas @YsongStanford @adric_dunn\n\n(1/6)', 'While previous methods have shown a trade-off between generative and classification performance, our SBGC model achieves new state-of-the-art performances both in likelihoods and classification accuracy for generative classifiers on CIFAR-10.\n\n(2/6) https://t.co/iRl9XCTRAK', 'In the past, generative classifiers (analysis-by-synthesis) have been shown to increase adversarial robustness on MNIST https://t.co/OSMjtHjHX1\n\nHowever, so far these results have not been extended to complex natural image datasets such as CIFAR-10.\n\n(3/6)', 'Previous work showed that interpolating between images increases likelihoods, suggesting model failure on out-of-distribution data https://t.co/WBwvfWUSsK @jh_jacobsen\n\nBy contrast, our SBGC model correctly produces convex interpolation curves.\n\n(4/6) https://t.co/KWWUDkRBSI', 'Nevertheless, we find that our model spectacularly fails against gradient-based adversarial attacks.\n\nWe argue that SBGCs have no structural advantage over discriminative classifiers and that analysis-by-synthesis alone is not sufficient for out-of-distribution robustness.\n\n(5/6) https://t.co/TCZTbPiuIy', 'Still, our work shows that SBGCs can achieve very competitive likelihoods and classification accuracies which encourage further research!\n\nThanks for fun discussions and feedback @poolio @wgrathwohl @yash_j_sharma @wielandbr @dylanpaiton @eero_simoncelli\n\n(6/6)']",21,10,1522
315,50,1194335282734227456,23000769,Christopher Conselice,"On the @arxiv on Monday, Amy Whitney (Notts PhD student) et al. released a new paper on the evolution of differential galaxy sizes, accepted to ApJ. A number of things are worth noting in this paper. A thread. (1/n) We develop a new method of distinguishing high-z galaxies from foreground galaxies. Because of the Lyman-break, we are able to remove contamination. We call this ""2D Lyman-break imaging"" Example here, 2nd from left is original image, far right, cleaned of foreground. (2/n) Using the Petrosian radius, which is independent of distances and surface brightness dimming we can measure how 'inner' and 'outer' parts of galaxies grow with time. The 'eta' values are the ratio of the surface brightness at a radius to the surface brightness within a radius. Taking the difference in the outer and inner radii it is clear that the outer radii are growing at a fast rate than the inner radii at z < 7. We confirm that this is an actual effect and not a bias by carrying out extensive simulations. (4/n) There is more detail in the paper (have a look) but in general this demonstrates that inside-out growth of galaxies is the dominate process for forming galaxies at z=7 down to z=1. How this happens is another question which we discuss, but minor mergers is one good way. (5/n)",https://arxiv.org/abs/1911.02589,"We present a size analysis of a sample of $\sim$ 49,000 galaxies from the CANDELS GOODS North and South fields using redshift-independent relative surface brightness metrics to determine an unbiased measure of the differential size evolution of galaxies at $1 \leq z \leq 7$. We introduce a novel method of removing foreground objects from distant galaxy ($z > 3$) images that makes use of the Lyman-break at 912{\AA}, in what we call `2-D Lyman-Break Imaging'. The images used are in the rest-frame optical at $z < 3$ and progressively bluer bands at $z > 3$. They are therefore subject to K-correction and cosmological dimming effects which are tested and corrected for. We separately consider a mass-selected sample (with masses in the range 10$^9$M$_{\odot}$$\leq$M$_*$$\leq$10$^{10.5}$M$_{\odot}$) and a number density selected sample (using a constant number density of $n = 1\times10^{-4}$Mpc$^{-3}$). Instead of utilising the commonly used, but potentially biased, effective radii for size measurements, we measure the redshift-independent Petrosian radius, defined by the parameter $\eta$, for each galaxy for three values of $\eta$ and use this as a proxy for size. The evolution of the measured radii can be described by a power-law of the form $R_{Petr} = \alpha(1+z)^\beta$kpc where $\beta < 0$. We find that the outer radius increases more rapidly, suggesting that as a galaxy grows mass is added to its outer regions via an inside-out growth. This growth is stronger for the number density selected sample, with a growth rate of nearly three in the outer radii compared to the inner. We test and confirm these results using a series of image simulations. ","Unbiased Differential Size Evolution and the Inside-Out Growth of
Galaxies in the Deep CANDELS GOODS Fields at $1 \leq z \leq 7$",5,"['On the @arxiv on Monday, Amy Whitney (Notts PhD student) et al. released a new paper on the evolution of differential galaxy sizes, accepted to ApJ. A number of things are worth noting in this paper. A thread. (1/n)\n\n', 'We develop a new method of distinguishing high-z galaxies from foreground galaxies. Because of the Lyman-break, we are able to remove contamination. We call this ""2D Lyman-break imaging"" \n\nExample here, 2nd from left is original image, far right, cleaned of foreground. (2/n) https://t.co/mLS3LRMbzJ', ""Using the Petrosian radius, which is independent of distances and surface brightness dimming we can measure how 'inner' and 'outer' parts of galaxies grow with time. The 'eta' values are the ratio of the surface brightness at a radius to the surface brightness within a radius. https://t.co/8GSqk37aEt"", 'Taking the difference in the outer and inner radii it is clear that the outer radii are growing at a fast rate than the inner radii at z < 7. We confirm that this is an actual effect and not a bias by carrying out extensive simulations. (4/n) https://t.co/z4SgezyTok', 'There is more detail in the paper (have a look) but in general this demonstrates that inside-out growth of galaxies is the dominate process for forming galaxies at z=7 down to z=1. How this happens is another question which we discuss, but minor mergers is one good way. (5/n)']",19,11,1322
316,92,1202178956063064064,806058672619212800,Guillaume Lample,"Our new paper, Deep Learning for Symbolic Mathematics, is now on arXiv We added *a lot* of new results compared to the original submission. With @f_charton (1/7) Although neural networks struggle on simple arithmetic tasks such as addition and multiplication, we show that transformers perform surprisingly well on difficult mathematical problems such as function integration and differential equations. (2/7) We define a general framework to adapt seq2seq models to various mathematical problems, and present different techniques to generate arbitrarily large datasets of functions with their integrals, and differential equations with their solutions. (3/7) On samples of randomly generated functions, we show that transformers achieve state-of-the-art performance and outperform computer algebra systems such as Mathematica. (4/7) We show that beam search can generate alternative solutions for a differential equation, all equivalent, but written in very different ways. The model was never trained to do this, but managed to figure out that different expressions correspond to the same mathematical object 5/7 We also observe that a transformer trained on functions that SymPy can integrate, is able at test time to integrate functions that SymPy is not able to integrate, i.e. the model was able to generalize beyond the set of functions integrable by SymPy. (6/7) A purely neural approach is not sufficient, since it still requires a symbolic framework to check generated hypotheses. Yet, our models perform best on very long inputs, where computer algebra systems struggle. Symbolic computation may benefit from hybrid approaches. (7/7) @AndrewTouchet @f_charton Yes, we will open source our datasets and models soon! @ogrisel We used to visualize what is happening in the network, but we only tried very quickly and we did not observe anything concrete. Some papers like may be useful to get insights about the hidden layer activations / attention weights. @leloykun @f_charton Not much. We ran experiments on 8 GPUs because it's faster, but even with 1 GPU you get most of the performance in a few hours.",https://arxiv.org/abs/1912.01412,"Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborated tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica. ",Deep Learning for Symbolic Mathematics,10,"['Our new paper, Deep Learning for Symbolic Mathematics, is now on arXiv \nWe added *a lot* of new results compared to the original submission. With @f_charton (1/7) ', 'Although neural networks struggle on simple arithmetic tasks such as addition and multiplication, we show that transformers perform surprisingly well on difficult mathematical problems such as function integration and differential equations. (2/7)', 'We define a general framework to adapt seq2seq models to various mathematical problems, and present different techniques to generate arbitrarily large datasets of functions with their integrals, and differential equations with their solutions. 
(3/7)', 'On samples of randomly generated functions, we show that transformers achieve state-of-the-art performance and outperform computer algebra systems such as Mathematica. (4/7)', 'We show that beam search can generate alternative solutions for a differential equation, all equivalent, but written in very different ways. The model was never trained to do this, but managed to figure out that different expressions correspond to the same mathematical object 5/7', 'We also observe that a transformer trained on functions that SymPy can integrate, is able at test time to integrate functions that SymPy is not able to integrate, i.e. the model was able to generalize beyond the set of functions integrable by SymPy. (6/7)', 'A purely neural approach is not sufficient, since it still requires a symbolic framework to check generated hypotheses. Yet, our models perform best on very long inputs, where computer algebra systems struggle. Symbolic computation may benefit from hybrid approaches. (7/7)', '@AndrewTouchet @f_charton Yes, we will open source our datasets and models soon!', '@ogrisel We used https://t.co/XvZQbwOpQj to visualize what is happening in the network, but we only tried very quickly and we did not observe anything concrete. Some papers like https://t.co/iGqnO5ArJO may be useful to get insights about the hidden layer activations / attention weights.', ""@leloykun @f_charton Not much. We ran experiments on 8 GPUs because it's faster, but even with 1 GPU you get most of the performance in a few hours.""]",19,12,2142
317,216,1248156986191024128,794145137068830720,Potestio Lab,"There are many ways to construct a coarse-grained protein, here we tell you how to find out the best one! At the X-ing of stat physics, biology, and information theory - a great collaboration between @r_potestio lab @UniTrento and Scott Shell @UCSBChE! ",https://arxiv.org/abs/2004.03988,"In the theoretical modelling of a physical system a crucial step consists in the identification of those degrees of freedom that enable a synthetic, yet informative representation of it. While in some cases this selection can be carried out on the basis of intuition and experience, a straightforward discrimination of the important features from the negligible ones is difficult for many complex systems, most notably heteropolymers and large biomolecules. We here present a thermodynamics-based theoretical framework to gauge the effectiveness of a given simplified representation by measuring its information content. We employ this method to identify those reduced descriptions of proteins, in terms of a subset of their atoms, that retain the largest amount of information from the original model; we show that these highly informative representations share common features that are intrinsically related to the biological properties of the proteins under examination, thereby establishing a bridge between protein structure, energetics, and function. ","An information theory-based approach for optimal model reduction of
biomolecules",1,"['There are many ways to construct a coarse-grained protein, here we tell you how to find out the best one!\n\nAt the X-ing of stat physics, biology, and information theory - a great\xa0collaboration between @r_potestio lab @UniTrento and Scott Shell @UCSBChE!\n\n ']",20,04,266
318,35,1385396035195858944,3377160202,Djuna Croon,"New paper! Non-perturbative methods for false vacuum decay with the amazing Eleanor Hall (@quarkygirl) and Hitoshi Muruyama (@sleptogenesis) We propose a new (non-perturbative!) technique to calculate false vacuum decay rates. Important, because... ...accurate false vacuum decay calculations are needed to predict the resulting gravitational wave spectra. I'll try to give a brief explanation below, but I also highly recommend Nell's excellent slides on the topic: at #DarkSectorRainbow last month... ...So, accurately calculating the gravitational wave spectrum from a first order phase transition is a big challenge. Existing methods struggle particularly with strong coupling. Why? The usual false vacuum decay formalism is well-defined for tree-level bounces, but... ...for radiatively induced phase-transitions, it needs modifications. A related issue is that an all-orders calculation of the effective action is manifestly convex. What does that mean? A convex potential doesn't have more than one minimum -> no first order phase transition... ...""coarse-graining"" in momentum scale is a useful solution. However, the accuracy of coarse graining usually depends on a large ratio of scales or a weak coupling. We propose an alternative, where we enforce locality in field space (see fig) rather than in momentum space... ...that can be done in the language of the functional renormalization group, as we show. Moreover, we work out a simple example, which we compare to the result found in other methods. As expected, the difference creep in at stronger coupling... ...We hope to develop this method further, and eventually study things like confinement / chiral symmetry breaking and the resulting gravitational wave spectra. I can't wait to learn more. A big thank you to my wonderful collaborators! It has been an absolute joy ❤️ Oops, MurAyama, apologies!!",https://arxiv.org/abs/2104.10687,"We propose a simple non-perturbative formalism for false vacuum decay using functional methods. We introduce the quasi-stationary effective action, a bounce action that non-perturbatively incorporates radiative corrections and is robust to strong couplings. The quasi-stationary effective action obeys an exact flow equation in a modified functional renormalization group with a motivated regulator functional. We demonstrate the use of this formalism in a simple toy model and compare our result with that obtained in perturbation theory. ",Non-perturbative methods for false vacuum decay,8,"['New paper! \n\nNon-perturbative methods for false vacuum decay\n\nwith the amazing Eleanor Hall (@quarkygirl) and Hitoshi Muruyama (@sleptogenesis) \n\nWe propose a new (non-perturbative!) technique to calculate false vacuum decay rates. Important, because...', ""...accurate false vacuum decay calculations are needed to predict the resulting gravitational wave spectra. \n\nI'll try to give a brief explanation below, but I also highly recommend Nell's excellent slides on the topic: https://t.co/PlOhCeZI4H at #DarkSectorRainbow last month..."", '...So, accurately calculating the gravitational wave spectrum from a first order phase transition is a big challenge.\n\nExisting methods struggle particularly with strong coupling. Why?\n\nThe usual false vacuum decay formalism is well-defined for tree-level bounces, but...', ""...for radiatively induced phase-transitions, it needs modifications. A related issue is that an all-orders calculation of the effective action is manifestly convex. 
\n\nWhat does that mean? A convex potential doesn't have more than one minimum -> no first order phase transition..."", '...""coarse-graining"" in momentum scale is a useful solution. However, the accuracy of coarse graining usually depends on a large ratio of scales or a weak coupling. \n\nWe propose an alternative, where we enforce locality in field space (see fig) rather than in momentum space... https://t.co/peSC3QKVDP', '...that can be done in the language of the functional renormalization group, as we show. \n\nMoreover, we work out a simple example, which we compare to the result found in other methods. As expected, the difference creep in at stronger coupling... https://t.co/tY9KJU24Mo', ""...We hope to develop this method further, and eventually study things like confinement / chiral symmetry breaking and the resulting gravitational wave spectra. I can't wait to learn more. \n\nA big thank you to my wonderful collaborators! It has been an absolute joy ❤️"", 'Oops, MurAyama, apologies!!']",21,04,1905
319,215,1501609588042444805,1701409680,Suhail Dhawan,"Paper day! We present the pilot study of measuring H0 from a uniform DL with both rungs of SNe Ia from ZTF. The luminosity is calibrated by the TRGB, an exciting route to percent level H0. Big thanks to Ariel, Joel, In Sung and the ZTF / CCHP teams. ",https://arxiv.org/abs/2203.04241,"The current Cepheid-calibrated distance ladder measurement of $H_0$ is reported to be in tension with the values inferred from the cosmic microwave background (CMB), assuming standard model cosmology. However, the tip of the red giant branch (TRGB) reports an estimate of $H_0$ in better agreement with the CMB. Hence, it is critical to reduce systematic uncertainties in local measurements to understand the origin of the Hubble tension. In this paper, we propose a uniform distance ladder, combining SNe~Ia observed by the Zwicky Transient Facility (ZTF) with a TRGB calibration of their absolute luminosity. A large, volume-limited, sample of both calibrator and Hubble flow SNe Ia from the \emph{same} survey minimizes two of the largest sources of systematics: host-galaxy bias and non-uniform photometric calibration. We present results from a pilot study using existing TRGB distance to the host galaxy of ZTF SN Ia SN 2021rhu (aka ZTF21abiuvdk). Combining the ZTF calibrator with a volume-limited sample from the first data release of ZTF Hubble flow SNe Ia, we infer $H_0 = 76.94 \pm 6.4\, {\rm km}\,{\rm s^{-1}}\,{\rm Mpc^{-1}}$, an $8.3 \%$ measurement. The error budget is dominated by the single object calibrating the SN Ia luminosity in this pilot study. However, the ZTF sample includes already five other SNe Ia within $\sim$ 20 Mpc for which TRGB distances can be obtained with HST. Finally, we present the prospects of building this distance ladder out to 80 Mpc with JWST observations of more than one hundred SNe Ia. ","A Uniform Type Ia Supernova Distance Ladder with the Zwicky Transient
Facility: Absolute Calibration Based on the Tip of the Red Giant Branch
(TRGB) Method",1,"['Paper day! We present the pilot study of measuring H0 from a uniform DL with both rungs of SNe Ia from ZTF. The luminosity is calibrated by the TRGB, an exciting route to percent level H0. Big thanks to Ariel, Joel, In Sung and the ZTF / CCHP teams. \n']",22,03,256
320,284,1401745472713236480,957685323198164992,Ziwei Liu,"""Semi-Supervised Domain Generalization with Stochastic StyleMatch"": Paper: Code: - We study semi-supervised DG (SSDG), a more realistic and practical setting for DG. - StyleMatch is surprisingly effective in OOD generalization. @Hossein_SHN Thanks for the nice suggestion, Hossein! We will definitely discuss more in our updated version.",https://arxiv.org/abs/2106.00592,"Ideally, visual learning algorithms should be generalizable, for dealing with any unseen domain shift when deployed in a new target environment; and data-efficient, for reducing development costs by using as little labels as possible. To this end, we study semi-supervised domain generalization (SSDG), which aims to learn a domain-generalizable model using multi-source, partially-labeled training data. We design two benchmarks that cover state-of-the-art methods developed in two related fields, i.e., domain generalization (DG) and semi-supervised learning (SSL). We find that the DG methods, which by design are unable to handle unlabeled data, perform poorly with limited labels in SSDG; the SSL methods, especially FixMatch, obtain much better results but are still far away from the basic vanilla model trained using full labels. We propose StyleMatch, a simple approach that extends FixMatch with a couple of new ingredients tailored for SSDG: 1) stochastic modeling for reducing overfitting in scarce labels, and 2) multi-view consistency learning for enhancing domain generalization. Despite the concise designs, StyleMatch achieves significant improvements in SSDG. We hope our approach and the comprehensive benchmarks can pave the way for future research on generalizable and data-efficient learning systems. The source code is released at \url{this https URL}. ",Semi-Supervised Domain Generalization with Stochastic StyleMatch,2,"['""Semi-Supervised Domain Generalization with Stochastic StyleMatch"":\n\nPaper: \nCode: \n\n- We study semi-supervised DG (SSDG), a more realistic and practical setting for DG.\n- StyleMatch is surprisingly effective in OOD generalization. ', '@Hossein_SHN Thanks for the nice suggestion, Hossein! We will definitely discuss more in our updated version.']",21,06,358
321,164,1341805376166047749,1117093805499355136,Marilena Loverde,"Do you wonder about CMB+BAO constraints on self-interacting neutrinos if only some of them self-interact? We studied a bunch of examples in this paper led by Thejs Brinckmann. Major congrats to Thejs @ThejsBrinckmann and Jae Hyeok Chang on this nice work! Some punchlines: we find no strong evidence for self-interactions or larger/smaller values of Neff. On the other hand, there are some additional modes in parameter space with neutrinos decoupling much later than SM ones should, but they are not preferred. We also weigh in on the H0 tension and find (as others have!), that self-interacting neutrinos are not a solution unless you throw out high-ell CMB polarization data, which we have no reason to do. Interestingly, if you did throw it out we found H0 = 74 km/s/Mpc.",https://arxiv.org/abs/2012.11830,"We perform a comprehensive study of cosmological constraints on non-standard neutrino self-interactions using cosmic microwave background (CMB) and baryon acoustic oscillation data. We consider different scenarios for neutrino self-interactions distinguished by the fraction of neutrino states allowed to participate in self-interactions and how the relativistic energy density, N$_{\textrm{eff}}$, is allowed to vary. Specifically, we study cases in which: all neutrino states self-interact and N$_{\textrm{eff}}$ varies; two species free-stream, which we show alleviates tension with laboratory constraints, while the energy in the additional interacting states varies; and a variable fraction of neutrinos self-interact with either the total N$_{\textrm{eff}}$ fixed to the Standard Model value or allowed to vary. In no case do we find compelling evidence for new neutrino interactions or non-standard values of N$_{\textrm{eff}}$. In several cases we find additional modes with neutrino decoupling occurring at lower redshifts $z_{\textrm{dec}} \sim 10^{3-4}$. We do a careful analysis to examine whether new neutrino self-interactions solve or alleviate the so-called $H_0$ tension and find that, when all Planck 2018 CMB temperature and polarization data is included, none of these examples ease the tension more than allowing a variable N$_{\textrm{eff}}$ comprised of free-streaming particles. Although we focus on neutrino interactions, these constraints are applicable to any light relic particle. ","Self-interacting neutrinos, the Hubble parameter tension, and the Cosmic
Microwave Background",3,"['Do you wonder about CMB+BAO constraints on self-interacting neutrinos if only some of them self-interact? We studied a bunch of examples in this paper led by Thejs Brinckmann. Major congrats to Thejs @ThejsBrinckmann and Jae Hyeok Chang on this nice work! \n', 'Some punchlines: we find no strong evidence for self-interactions or larger/smaller values of Neff. On the other hand, there are some additional modes in parameter space with neutrinos decoupling much later than SM ones should, but they are not preferred.', 'We also weigh in on the H0 tension and find (as others have!), that self-interacting neutrinos are not a solution unless you throw out high-ell CMB polarization data, which we have no reason to do. Interestingly, if you did throw it out we found H0 = 74 km/s/Mpc.']",20,12,782
322,276,1405179533335072768,636167919,Andreas Loukas,"Are well-generalizing neural nets (NNs) easier to train?🤔Aiming to shed light on this hypothesis, we studied the relation between the complexity of the learned NN and the training behavior. 📰with @MarinosPoiitis & @StefanieJegelka 👇 Some evidence for our hypothesis already exists: e.g., it is known that training (shallow) NNs is more tedious for noisy data and easier for more separable classes. Moreover, the beautiful theory of stability says that NNs trained for few epochs have bounded sample complexity. Differently, we connect aspects of the optim. trajectory with the NN Lipschitz constant (wrt input) close and far from the training data: we find that the trajectory of high complexity NNs is longer, veers further from initialization, and exhibits higher variance near convergence Intriguingly, we also find that steady training with Dropout implies a training- and data-dependent generalization bound that grows poly-logarithmically with the number of parameters (typical Lipschitz constant-based generalization bounds grow exponentially 🚀with NN depth). Overall, our results support the hypothesis that good training behavior can be a useful bias towards good generalization",https://arxiv.org/abs/2106.04186,"This work explores the Benevolent Training Hypothesis (BTH) which argues that the complexity of the function a deep neural network (NN) is learning can be deduced by its training dynamics. Our analysis provides evidence for BTH by relating the NN's Lipschitz constant at different regions of the input space with the behavior of the stochastic training procedure. We first observe that the Lipschitz constant close to the training data affects various aspects of the parameter trajectory, with more complex networks having a longer trajectory, bigger variance, and often veering further from their initialization. We then show that NNs whose 1st layer bias is trained more steadily (i.e., slowly and with little variation) have bounded complexity even in regions of the input space that are far from any training point. Finally, we find that steady training with Dropout implies a training- and data-dependent generalization bound that grows poly-logarithmically with the number of parameters. Overall, our results support the intuition that good training behavior can be a useful bias towards good generalization. ",What training reveals about neural network complexity,5,"['Are well-generalizing neural nets (NNs) easier to train?🤔Aiming to shed light on this hypothesis, we studied the relation between the complexity of the learned NN and the training behavior.\n\n📰with @MarinosPoiitis & @StefanieJegelka\n👇 ', 'Some evidence for our hypothesis already exists: e.g., it is known that training (shallow) NNs is more tedious for noisy data and easier for more separable classes. Moreover, the beautiful theory of stability says that NNs trained for few epochs have bounded sample complexity.', 'Differently, we connect aspects of the optim. 
trajectory with the NN Lipschitz constant (wrt input) close and far from the training data: we find that the trajectory of high complexity NNs is longer, veers further from initialization, and exhibits higher variance near convergence', 'Intriguingly, we also find that steady training with Dropout implies a training- and data-dependent generalization bound that grows poly-logarithmically with the number of parameters (typical Lipschitz constant-based generalization bounds grow exponentially 🚀with NN depth).', 'Overall, our results support the hypothesis that good training behavior can be a useful bias towards good generalization']",21,06,1200
323,230,1374373986117644301,1120650694644596737,Paris Avgeriou,We know that software engineers use search engines in their daily practice. But what exactly do they google and does it actually help them? We studied this particularly for software architecture tasks. Attend our talk @ICSAconf or get the pre-print. ,https://arxiv.org/abs/2103.11705,"Software engineers need relevant and up-to-date architectural knowledge (AK), in order to make well-founded design decisions. However, finding such AK is quite challenging. One pragmatic approach is to search for AK on the web using traditional search engines (e.g. Google); this is common practice among software engineers. Still, we know very little about what AK is retrieved, from where, and how useful it is. In this paper, we conduct an empirical study with 53 software engineers, who used Google to make design decisions using the Attribute-Driven-Design method. Based on how the subjects assessed the nature and relevance of the retrieved results, we determined how effective web search engines are to find relevant architectural information. Moreover, we identified the different sources of AK on the web and their associated AK concepts. ",Exploring Web Search Engines to Find Architectural Knowledge,1,['We know that software engineers use search engines in their daily practice. But what exactly do they google and does it actually help them? We studied this particularly for software architecture tasks. Attend our talk @ICSAconf or get the pre-print. '],21,03,256
324,208,1376887783206232065,67043272,Rafael Martínez Galarza,"We submitted a paper: we use radiative transfer, hydro sims. to investigate AGN role in heating of galaxy-scale cold dust, whose FIR emission is usually used as a tracer of star formation (hint: we find that the AGN contribution can be significant). ",https://arxiv.org/abs/2103.12747,"It is widely assumed that long-wavelength infrared (IR) emission from cold dust (T~20-40K) is a reliable tracer of star formation even in the presence of a bright active galactic nucleus (AGN). Based on radiative transfer (RT) models of clumpy AGN tori, hot dust emission from the torus contributes negligibly to the galaxy spectral energy distribution (SED) at $\lambda\ga100$ \micron. However, these models do not include AGN heating of host-galaxy-scale diffuse dust, which may have far-IR (FIR) colors comparable to cold diffuse dust heated by stars. To quantify the contribution of AGN heating to host-galaxy-scale cold dust emission at $\lambda\ga100$ \micron, we perform dust RT calculations on a simulated galaxy merger both including and excluding the bright AGN that it hosts. By differencing the SEDs yielded by RT calculations with and without AGN that are otherwise identical, we quantify the FIR cold dust emission arising solely from re-processed AGN photons. In extreme cases, AGN-heated host-galaxy-scale dust can increase galaxy-integrated FIR flux densities by factors of 2-4; star formation rates calculated from the FIR luminosity assuming no AGN contribution can overestimate the true value by comparable factors. Because the FIR colors of such systems are similar to those of purely star-forming galaxies and redder than torus models, broadband SED decomposition may be insufficient for disentangling the contributions of stars and heavily dust-enshrouded AGN in the most IR-luminous galaxies. We demonstrate how kpc-scale resolved observations can be used to identify deeply dust-enshrouded AGN with cool FIR colors when spectroscopic and/or X-ray detection methods are unavailable. ",Dust-Enshrouded AGN can Dominate Host-Galaxy-Scale Cold-Dust Emission,1,"['We submitted a paper: we use radiative transfer, hydro sims. to investigate AGN role in heating of galaxy-scale cold dust, whose FIR emission is usually used as a tracer of star formation (hint: we find that the AGN contribution can be significant). ']",21,03,263
325,3,913333810556792832,2427184074,Christopher Berry,"New @LIGO/@ego_virgo paper (not #GW170814), search for non-tensorial gravitational waves from pulsars #nondetection @LIGO @ego_virgo Continuous gravitational waves from rotating neutron stars give a better insight into #GravitaitonalWave polarizations than short signals @LIGO @ego_virgo As the Earth rotates, the position of the source relative to the detector changes, giving us a way to look for different polarizations @LIGO @ego_virgo How a detector responds to a polarization depends upon where the source is (we call this the antenna pattern) ",https://arxiv.org/abs/1709.09203,"We present results from the first directed search for nontensorial gravitational waves. While general relativity allows for tensorial (plus and cross) modes only, a generic metric theory may, in principle, predict waves with up to six different polarizations. This analysis is sensitive to continuous signals of scalar, vector or tensor polarizations, and does not rely on any specific theory of gravity. After searching data from the first observation run of the advanced LIGO detectors for signals at twice the rotational frequency of 200 known pulsars, we find no evidence of gravitational waves of any polarization. We report the first upper limits for scalar and vector strains, finding values comparable in magnitude to previously-published limits for tensor strain. Our results may be translated into constraints on specific alternative theories of gravity. ",First search for nontensorial gravitational waves from known pulsars,4,"['New @LIGO/@ego_virgo paper (not #GW170814), search for non-tensorial gravitational waves from pulsars #nondetection ', '@LIGO @ego_virgo Continuous gravitational waves from rotating neutron stars give a better insight into #GravitaitonalWave polarizations than short signals', '@LIGO @ego_virgo As the Earth rotates, the position of the source relative to the detector changes, giving us a way to look for different polarizations', '@LIGO @ego_virgo How a detector responds to a polarization depends upon where the source is (we call this the antenna pattern) https://t.co/UxAPaS8jik']",17,09,570
326,48,1396912380039286791,2693638267,Arjun (Raj) Manrai,"Deep learning approaches for semantic image segmentation are often data hungry or difficult to train. Thrilled to share our new approach called PixMatch, our #CVPR21 paper now on arXiv. Led by the singular @lukemelas: Paper: Code: PixMatch is a new approach to unsupervised domain adaptation for semantic segmentation. It exploits the idea that in order to perform well on the target domain, a model’s output should be consistent with respect to small perturbations of inputs in the target domain. Read more here: and keep an eye on @lukemelas 🚀, a wonderful member of my group and a truly exceptional scientist. Check out some of @lukemelas' other work here: ",https://arxiv.org/abs/2105.08128,"Unsupervised domain adaptation is a promising technique for semantic segmentation and other computer vision tasks for which large-scale data annotation is costly and time-consuming. In semantic segmentation, it is attractive to train models on annotated images from a simulated (source) domain and deploy them on real (target) domains. In this work, we present a novel framework for unsupervised domain adaptation based on the notion of target-domain consistency training. Intuitively, our work is based on the idea that in order to perform well on the target domain, a model's output should be consistent with respect to small perturbations of inputs in the target domain. Specifically, we introduce a new loss term to enforce pixelwise consistency between the model's predictions on a target image and a perturbed version of the same image. In comparison to popular adversarial adaptation methods, our approach is simpler, easier to implement, and more memory-efficient during training. Experiments and extensive ablation studies demonstrate that our simple approach achieves remarkably strong results on two challenging synthetic-to-real benchmarks, GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes. Code is available at: this https URL ","PixMatch: Unsupervised Domain Adaptation via Pixelwise Consistency
Training",3,"['Deep learning approaches for semantic image segmentation are often data hungry or difficult to train. Thrilled to share our new approach called PixMatch, our #CVPR21 paper now on arXiv. Led by the singular @lukemelas:\n\nPaper: \nCode: ', 'PixMatch is a new approach to unsupervised domain adaptation for semantic segmentation. It exploits the idea that in order to perform well on the target domain, a model’s output should be consistent with respect to small perturbations of inputs in the target domain.', ""Read more here: https://t.co/bMqw14GiAZ and keep an eye on @lukemelas 🚀, a wonderful member of my group and a truly exceptional scientist. Check out some of @lukemelas' other work here: https://t.co/gFywTBOLsV\nhttps://t.co/epQJt3paZZ""]",21,05,701
327,212,1409815029340332033,1294542807441514496,Victor Valera Baca 🧐🔭," Finally online! Together with Alexei Smirnov we developed further on the topic of my diploma thesis at @ictpnews. We study in depth the phenomenon of resonance refraction, through which neutrino oscillations are enhanced in an energy localized region.(1/4) Coherence forward scattering of nu on a cold backgrounds (e.g. DM) induces a potential that affects neutrino oscillations. Resonance in the s-channel leads to an enhanced potential that manifest as perturbations of oscillation probability measured by neutrino experiments. (2/4) Interplay of the background potential with vacuum and standard matter effects leads to new features: New MSW resonances, shift of the standard MSW resonance point, differences in the values of the neutrino square mass difference at high and low energies, etc. (3/4) This mechanism might be used to explore signatures of BSM physics, as well as to constraint models of non-standard neutrino interactions. Excess of events in an specefic energy region would be a smoking gun for resonance refraction. (4/4) We show that this mechanism as an explanation for the low energy MiniBooNE excess is excluded from measurements at lower and higher energies, as well as astrophysical, cosmological, and laboratory bound on neutrino non-standard couplings. (5/4 (?)) ",https://arxiv.org/abs/2106.13829,"The refraction index and matter potential depend on neutrino energy and this dependence has a resonance character associated to the production of the mediator in the $s-$channel. For light mediators and light particles of medium (background) the resonance can be realized at energies accessible to laboratory experiments. We study properties of the energy dependence of the potential for different C-asymmetries of background. Interplay of the background potential and the vacuum term leads to (i) bump in the oscillation probability in the resonance region, (ii) dip related to the MSW resonance in the background, (iii) substantial deviation of the effective $\Delta m^2$ above the resonance from the low energy value, etc. We considered generation of mixing in the background. Interactions with background shifts the energy of usual MSW resonance and produces new MSW resonances. Searches of the background effects allow us to put bounds on new interactions of neutrinos and properties of the background. We show that explanation of the MiniBooNE excess, as the bump due to resonance refraction, is excluded. ",Resonance refraction and neutrino oscillations,5,"['\nFinally online! Together with Alexei Smirnov we developed further on the topic of my diploma thesis at @ictpnews. We study in depth the phenomenon of resonance refraction, through which neutrino oscillations are enhanced in an energy localized region.(1/4)', 'Coherence forward scattering of nu on a cold backgrounds (e.g. DM) induces a potential that affects neutrino oscillations. Resonance in the s-channel leads to an enhanced potential that manifest as perturbations of oscillation probability measured by neutrino experiments. (2/4) https://t.co/IqVaojRBW6', 'Interplay of the background potential with vacuum and standard matter effects leads to new features: New MSW resonances, shift of the standard MSW resonance point, differences in the values of the neutrino square mass difference at high and low energies, etc. 
(3/4) https://t.co/ELU0N7AeoP', 'This mechanism might be used to explore signatures of BSM physics, as well as to constraint models of non-standard neutrino interactions. Excess of events in an specefic energy region would be a smoking gun for resonance refraction. (4/4) https://t.co/ktH6H4VtI9', 'We show that this mechanism as an explanation for the low energy MiniBooNE excess is excluded from measurements at lower and higher energies, as well as astrophysical, cosmological, and laboratory bound on neutrino non-standard couplings. (5/4 (?)) https://t.co/0mofx1lBjm']",21,06,1324
328,29,1443208642203947015,2530947115,Max Tegmark,"Our new #AI paper shows how physics can improve #MachineLearning by complementing physics-informed learning (#PIL) with physics-augmented learning (#PAL), taking advantage of simplifying data properties that are easier to generate than test. The paper: ",https://arxiv.org/abs/2109.13901,"Integrating physical inductive biases into machine learning can improve model generalizability. We generalize the successful paradigm of physics-informed learning (PIL) into a more general framework that also includes what we term physics-augmented learning (PAL). PIL and PAL complement each other by handling discriminative and generative properties, respectively. In numerical experiments, we show that PAL performs well on examples where PIL is inapplicable or inefficient. ","Physics-Augmented Learning: A New Paradigm Beyond Physics-Informed
Learning",1,"['Our new #AI paper shows how physics can improve #MachineLearning by complementing physics-informed learning (#PIL) with physics-augmented learning (#PAL), taking advantage of simplifying data properties that are easier to generate than test.\nThe paper: ']",21,09,266
329,68,1483052005119700997,77592002,Ahmad,"New paper on arxiv! Investigating stellar winds in a galactic spiral arm. In this case, the initial conditions were extracted from a galaxy simulation. No more isolated spherical clouds - clouds interact together and feel the galactic potential. (1/2) In short, winds don't do much to the gas compared with photoionisation - but they create small bubbles and affect how star formation is distributed over clusters. (2/2) ",https://arxiv.org/abs/2201.04141,"The role of different stellar feedback mechanisms in giant molecular clouds is not well understood. This is especially true for regions with many interacting clouds as would be found in a galactic spiral arm. In this paper, building on previous work by Bending et al., we extract a $500\times500\times100$ pc section of a spiral arm from a galaxy simulation. We use smoothed particle hydrodynamics (SPH) to re-simulate the region at higher resolution (1 M$_\odot$ per particle). We present a method for momentum-driven stellar winds from main sequence massive stars, and include this with photoionization, self-gravity, a galactic potential, and ISM heating/cooling. We also include cluster-sink particles with accretion radii of 0.78 pc to track star/cluster formation. The feedback methods are as robust as previous models on individual cloud scales (e.g. Dale et al.). We find that photoionization dominates the disruption of the spiral arm section, with stellar winds only producing small cavities (at most $\sim$ 30 pc). Stellar winds do not affect the resulting cloud statistics or the integrated star formation rate/efficiency, unlike ionization, which produces more stars, and more clouds of higher density and higher velocity dispersion compared to the control run without feedback. Winds do affect the sink properties, distributing star formation over more low-mass sinks ($\sim 10^2$ M$_\odot$) and producing fewer high-mass sinks ($\sim 10^3$ M$_\odot$). Overall, stellar winds play at best a secondary role compared to photoionization, and on many measures, they have a negligible impact. ",Stellar winds and photoionization in a spiral arm,2,"['New paper on arxiv! Investigating stellar winds in a galactic spiral arm. \n\nIn this case, the initial conditions were extracted from a galaxy simulation. No more isolated spherical clouds - clouds interact together and feel the galactic potential. (1/2) ', ""In short, winds don't do much to the gas compared with photoionisation - but they create small bubbles and affect how star formation is distributed over clusters. (2/2) https://t.co/2R9SGTagoH""]",22,01,441
330,211,1382432437632790529,1242135548040548352,Xi Ye,"Excited to share our (w/ Rohan Nair, @gregd_nlp ) new pre-print ""Evaluating Explanations for Reading Comprehension with Realistic Counterfactuals"": We propose a methodology to evaluate explanations An explanation should allow us to understand the reading comprehension model's high-level behavior with respect to a set of realistic counterfactual input scenarios. As an example, we show explanations for a HotpotQA example (blue) generated by several methods. At first glance, it seems token-level attributions are more plausible than pairwise interactions, as they highlight both movies being documentaries. But does that reflect the model's true reasoning? We profile the model behaviors with the predictions on realistic counterfactual input. The model always makes the same predictions on the counterfactuals -- and the pairwise explanation actually conveys this more accurately! We annotate realistic counterfactuals on multiple settings (HotpotQA, SQuAD, and synthetic) and evaluate several explanation methods including token-level attribution techniques and pairwise interaction techniques in terms of whether they can give insights about model behavior. Our analysis suggests that pairwise explanation techniques are better suited to analyzing the behavior of RC models, which fundamentally involves complex interaction between questions and contexts. code and data: ",https://arxiv.org/abs/2104.04515,"When a model attribution technique highlights a particular part of the input, a user might understand this highlight as making a statement about counterfactuals (Miller, 2019): if that part of the input were to change, the model's prediction might change as well. This paper investigates how well different attribution techniques align with this assumption on realistic counterfactuals in the case of reading comprehension (RC). RC is a particularly challenging test case, as token-level attributions that have been extensively studied in other NLP tasks such as sentiment analysis are less suitable to represent the reasoning that RC models perform. We construct counterfactual sets for three different RC settings, and through heuristics that can connect attribution methods' outputs to high-level model behavior, we can evaluate how useful different attribution methods and even different formats are for understanding counterfactuals. We find that pairwise attributions are better suited to RC than token-level attributions across these different RC settings, with our best performance coming from a modification that we propose to an existing pairwise attribution method. ","Connecting Attributions and QA Model Behavior on Realistic
Counterfactuals",6,"['Excited to share our (w/ Rohan Nair, @gregd_nlp ) new pre-print ""Evaluating Explanations for Reading Comprehension with Realistic Counterfactuals"":\n\n\nWe propose a methodology to evaluate explanations ', ""An explanation should allow us to understand the reading comprehension model's high-level behavior with respect to a set of realistic counterfactual input scenarios.\xa0\nAs an example, we show explanations for a HotpotQA example (blue) generated by several methods."", ""At first glance, it seems token-level attributions are more plausible than pairwise interactions, as they highlight both movies being documentaries. But does that reflect the model's true reasoning?"", 'We profile the model behaviors with the predictions on realistic counterfactual input. The model always makes the same predictions on the counterfactuals \xa0-- and the pairwise explanation actually conveys this more accurately!', 'We annotate realistic counterfactuals on multiple settings (HotpotQA, SQuAD, and synthetic) and evaluate several explanation methods including token-level attribution techniques and pairwise interaction techniques\xa0in terms of whether they can give insights about model behavior.', 'Our analysis suggests that pairwise explanation techniques are better suited to\xa0analyzing the behavior\xa0of RC models, which fundamentally involves complex interaction between questions and contexts.\n\ncode and data: https://t.co/4uxg53GX6r']",21,04,1398
331,104,1181929973654917120,608502805,THOMAS Guillaume,"Hey my new CFIS paper is on arrive today, which include among other collaborator @benfamaey and @nfmartin1980 : . It present a new method to classify dwarfs/giants stars get their [Fe/H] and their distance. This method could be use in the future with @LSST",https://arxiv.org/abs/1910.03076,"We present a new fully data-driven algorithm that uses photometric data from the Canada-France-Imaging-Survey (CFIS; $u$), Pan-STARRS 1 (PS1; $griz$), and Gaia ($G$) to discriminate between dwarf and giant stars and to estimate their distances and metallicities. The algorithm is trained and tested using the SDSS/SEGUE spectroscopic dataset and Gaia photometric/astrometric dataset. At [Fe/H]$<-1.2$, the algorithm succeeds in identifying more than 70% of the giants in the training/test set, with a dwarf contamination fraction below 30% (with respect to the SDSS/SEGUE dataset). The photometric metallicity estimates have uncertainties better than 0.2 dex when compared with the spectroscopic measurements. The distances estimated by the algorithm are valid out to a distance of at least $\sim 80$ kpc without requiring any prior on the stellar distribution, and have fully independent uncertainities that take into account both random and systematic errors. These advances allow us to estimate these stellar parameters for approximately 12 million stars in the photometric dataset. This will enable studies involving the chemical mapping of the distant outer disc and the stellar halo, including their kinematics using the Gaia proper motions. This type of algorithm can be applied in the Southern hemisphere to the first release of LSST data, thus providing an almost complete view of the external components of our Galaxy out to at least $\sim 80$ kpc. Critical to the success of these efforts will be ensuring well-defined spectroscopic training sets that sample a broad range of stellar parameters with minimal biases. A catalogue containing the training/test set and all relevant parameters within the public footprint of CFIS is available online. ","Dwarfs or giants? Stellar metallicities and distances in the
Canada-France-Imaging-Survey from $ugrizG$ multi-band photometry",1,"['Hey my new CFIS paper is on arrive today, which include among other collaborator @benfamaey and @nfmartin1980 : . It present a new method to classify dwarfs/giants stars get their [Fe/H] and their distance. This method could be use in the future with @LSST']",19,10,262
332,39,1442522254051405829,1116002690604130305,Juliette Becker,"New on arXiv last night (and accepted to AJ last week): undergraduate Lucas Brefka’s first first-author paper: In this paper, he studied how multi-planet systems change as their stars evolve and secular resonances sweep through the systems. In this work started as part of the @UROPumich program, he used a combination of secular theory and numerical simulations to show that for systems with ultra-short-period planets and extra outer planets, this dynamical process can recreate the observed geometry of the system. Lucas went from first-day @UROPumich freshman to published author in less than two years. Lucas is applying to grad school this fall, even though he is only a third-year undergrad (graduating a year early). If you’re looking for grad students this fall, watch for his application!",https://arxiv.org/abs/2109.12054,"Ultra-short period (USP) planets are exoplanets which have orbital periods of less than one day and are unique because they orbit inside the nominal magnetic truncation gap of their host stars. In some cases, USP planets have also been observed to exhibit unique dynamical parameters such as significant misalignments in inclination angle with respect to nearby planets. In this paper, we explore how the geometry of a multi-planet system hosting a USP planet can be expected to evolve as a star ages. In particular, we explore the relationship between the mutual inclination of the USP planet and the quadrupole moment ($J_2$) of the host star. We use secular perturbation theory to predict the past evolution of the example TOI-125 system, and then confirm the validity of our results using long-term N-body simulations. Through investigating how the misalignment between the candidate USP planet and the three other short-period planets in the TOI-125 system arose, we intend to derive a better understanding of the population of systems with misaligned USP planets and how their observed parameters can be explained in the context of their dynamical histories. ","A General Origin for Multi-Planetary Systems With Significantly
Misaligned USP Planets",3,"['New on arXiv last night (and accepted to AJ last week): undergraduate Lucas Brefka’s first first-author paper: In this paper, he studied how multi-planet systems change as their stars evolve and secular resonances sweep through the systems.', 'In this work started as part of the @UROPumich program, he used a combination of secular theory and numerical simulations to show that for systems with ultra-short-period planets and extra outer planets, this dynamical process can recreate the observed geometry of the system.', 'Lucas went from first-day @UROPumich freshman to published author in less than two years. Lucas is applying to grad school this fall, even though he is only a third-year undergrad (graduating a year early). If you’re looking for grad students this fall, watch for his application!']",21,09,805
333,141,1427678621058027529,1193288437052260352,Laurel Orr,"Ever woke up thinking “what will ML pipelines look like in the next few years?”. Come to our tutorial “ML Pipelines: Feature Stores and the Coming Wave of Embedding Ecosystems” 8/18/21 7:15am PT at VLDB to find out what we think the future will hold. Our take: self-supervised ecosystems, where embedding representations are learned over massive corpora and integrated into hundreds of downstream systems, are shifting the ML pipeline from the manual feature curation and data labeling of Feature Stores to hands-free training. In this new self-supervised paradigm, engineers face challenges with respect to overcoming potential biases (e.g., popularity bias) in the uncurated training data, managing embedding stability, and continually monitoring and maintaining models. We will discuss these challenges and exciting open problems tomorrow! This tutorial would not have been possible without my amazing collaborators @atinsanyal @lingxiao @m_leszczy @krandiash",https://arxiv.org/abs/2108.05053,"The industrial machine learning pipeline requires iterating on model features, training and deploying models, and monitoring deployed models at scale. Feature stores were developed to manage and standardize the engineer's workflow in this end-to-end pipeline, focusing on traditional tabular feature data. In recent years, however, model development has shifted towards using self-supervised pretrained embeddings as model features. Managing these embeddings and the downstream systems that use them introduces new challenges with respect to managing embedding training data, measuring embedding quality, and monitoring downstream models that use embeddings. These challenges are largely unaddressed in standard feature stores. Our goal in this tutorial is to introduce the feature store system and discuss the challenges and current solutions to managing these new embedding-centric pipelines. ","Managing ML Pipelines: Feature Stores and the Coming Wave of Embedding
Ecosystems",4,"['Ever woke up thinking “what will ML pipelines look like in the next few years?”. Come to our tutorial “ML Pipelines: Feature Stores and the Coming Wave of Embedding Ecosystems” 8/18/21 7:15am PT at VLDB to find out what we think the future will hold.\n\n', 'Our take: self-supervised ecosystems, where embedding representations are learned over massive corpora and integrated into hundreds of downstream systems, are shifting the ML pipeline from the manual feature curation and data labeling of Feature Stores to hands-free training.', 'In this new self-supervised paradigm, engineers face challenges with respect to overcoming potential biases (e.g., popularity bias) in the uncurated training data, managing embedding stability, and continually monitoring and maintaining models.', 'We will discuss these challenges and exciting open problems tomorrow! This tutorial would not have been possible without my amazing collaborators @atinsanyal @lingxiao @m_leszczy @krandiash']",21,08,969
334,72,1427210038347714561,1214477029053214720,Jan Kukačka 🇺🇦,"Self-supervised learning has gained lots of attention recently. But what can a network really learn from unlabeled images of the retina? And is it useful for image segmentation? Good thing our new paper is out, with all the answers 1/6 We used contrastive self-supervised learning to train a small convnet on Kaggle-DR dataset. Fascinatingly, without being provided any labels at all, the network learned to recognize various anatomical/pathological structures in the fundus images. 2/6 Using this pre-trained network as the encoder of a U-Net led to improvements in image segmentation performance, compared to a U-Net trained from scratch. 3/6 The improvement was greater in few-shot scenarios. Moreover, the pre-trained networks converged significantly faster. 4/6 What is it good for? Glad you ask! It is impossible to have a large annotated dataset for every camera, pathology, and population. Self-supervised learning seems as an approach that could scale well with abundant unlabeled data and produce representations 5/6 which are robust and can be adapted with few annotated samples to new devices etc. Finally, big shout-out to 👩🔬 Anja whose master's thesis is behind large portion of this paper! 6/6",https://arxiv.org/abs/2108.02798,"Fundus photography is the primary method for retinal imaging and essential for diabetic retinopathy prevention. Automated segmentation of fundus photographs would improve the quality, capacity, and cost-effectiveness of eye care screening programs. However, current segmentation methods are not robust towards the diversity in imaging conditions and pathologies typical for real-world clinical applications. To overcome these limitations, we utilized contrastive self-supervised learning to exploit the large variety of unlabeled fundus images in the publicly available EyePACS dataset. We pre-trained an encoder of a U-Net, which we later fine-tuned on several retinal vessel and lesion segmentation datasets. We demonstrate for the first time that by using contrastive self-supervised learning, the pre-trained network can recognize blood vessels, optic disc, fovea, and various lesions without being provided any labels. Furthermore, when fine-tuned on a downstream blood vessel segmentation task, such pre-trained networks achieve state-of-the-art performance on images from different datasets. Additionally, the pre-training also leads to shorter training times and an improved few-shot performance on both blood vessel and lesion segmentation tasks. Altogether, our results showcase the benefits of contrastive self-supervised pre-training which can play a crucial role in real-world clinical applications requiring robust models able to adapt to new devices with only a few annotated samples. ","Self-Supervised Learning from Unlabeled Fundus Photographs Improves
Segmentation of the Retina",6,"['Self-supervised learning has gained lots of attention recently. But what can a network really learn from unlabeled images of the retina? And is it useful for image segmentation? Good thing our new paper is out, with all the answers 1/6 ', 'We used contrastive self-supervised learning to train a small convnet on Kaggle-DR dataset. Fascinatingly, without being provided any labels at all, the network learned to recognize various anatomical/pathological structures in the fundus images. 2/6 https://t.co/LNJpBotlax', 'Using this pre-trained network as the encoder of a U-Net led to improvements in image segmentation performance, compared to a U-Net trained from scratch. 3/6 https://t.co/yFjICKcS6Y', 'The improvement was greater in few-shot scenarios. Moreover, the pre-trained networks converged significantly faster. 4/6 https://t.co/rU9KAF3NhO', 'What is it good for? Glad you ask! It is impossible to have a large annotated dataset for every camera, pathology, and population. Self-supervised learning seems as an approach that could scale well with abundant unlabeled data and produce representations 5/6', ""which are robust and can be adapted with few annotated samples to new devices etc. Finally, big shout-out to 👩\u200d🔬 Anja whose master's thesis is behind large portion of this paper! 6/6""]",21,08,1244
335,179,1362479759255605255,1888564675,Chivukula Sai Shruthi,Ethics-focused methods! How many are there? What are they? How can you describe them for design action? Where can I find them? We tried to address these questions in our paper and collection “Surveying the Landscape of Ethics-Focused Design Methods.” ,https://arxiv.org/abs/2102.08909,"Over the past decade, HCI researchers, design researchers, and practitioners have increasingly addressed ethics-focused issues through a range of theoretical, methodological and pragmatic contributions to the field. While many forms of design knowledge have been proposed and described, we focus explicitly on knowledge that has been codified as ""methods,"" which we define as any supports for everyday work practices of designers. In this paper, we identify, analyze, and map a collection of 63 existing ethics-focused methods intentionally designed for ethical impact. We present a content analysis, providing a descriptive record of how they operationalize ethics, their intended audience or context of use, their ""core"" or ""script,"" and the means by which these methods are formulated, articulated, and languaged. Building on these results, we provide an initial definition of ethics-focused methods, identifying potential opportunities for the development of future methods to support design practice and research. ",Surveying the Landscape of Ethics-Focused Design Methods,1,['Ethics-focused methods! \nHow many are there? What are they? How can you describe them for design action? Where can I find them?\n\nWe tried to address these questions in our paper and collection “Surveying the Landscape of Ethics-Focused Design Methods.” '],21,02,264
336,27,1377154324657184774,69282116,Fotios Petropoulos,"In this new paper, we consider the environmental tendencies of forecasting models when selecting and combining across models for a particular time series. This is a very simple case of cross-learning based on precision and sensitivity. @Vspiliotis1 ",https://arxiv.org/abs/2103.16157,"Standard selection criteria for forecasting models focus on information that is calculated for each series independently, disregarding the general tendencies and performances of the candidate models. In this paper, we propose a new way to statistical model selection and model combination that incorporates the base-rates of the candidate forecasting models, which are then revised so that the per-series information is taken into account. We examine two schemes that are based on the precision and sensitivity information from the contingency table of the base rates. We apply our approach on pools of exponential smoothing models and a large number of real time series and we show that our schemes work better than standard statistical benchmarks. We discuss the connection of our approach to other cross-learning approaches and offer insights regarding implications for theory and practice. ",Model combinations through revised base-rates,1,"['In this new paper, we consider the environmental tendencies of forecasting models when selecting and combining across models for a particular time series. This is a very simple case of cross-learning based on precision and sensitivity. \n@Vspiliotis1\n']",21,03,255
337,11,1521291232080760833,278791721,Maria De-Arteaga,"📣 New #facct2022 paper “Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms”. Led by 1st year @UTexasMcCombs PhD student @TerrenceNeumann (say hi to him in Seoul!), and joint with @sinafazelpur 🧵 As misinformation detection pipelines increasingly incorporate algorithms, justice of these systems become a central concern. Misinformation detection pipelines consist of varied tasks, each of which can give rise to many ethical concerns, and involve multiple stakeholders. 2/5 Commonly, algorithmic fairness considers cases where each data instance pertains a single direct stakeholder that occupies one role: decision subject. In the case of informational items, in contrast, multiple stakeholders are directly implicated in each informational item. 3/5 We employ and extend upon the notion of informational justice to develop a framework for explicating issues of justice relating to misinformation detection systems, considering representation, participation, allocation, and credibility affecting different stakeholders. 4/5 Drawing on the framework: (1) we show how injustices materialize for stakeholders across three algorithmic stages in the pipeline; (2) we suggest empirical measures for assessing these injustices; and (3) we identify potential sources of these harms. 5/5",https://arxiv.org/abs/2204.13568,"Faced with the scale and surge of misinformation on social media, many platforms and fact-checking organizations have turned to algorithms for automating key parts of misinformation detection pipelines. While offering a promising solution to the challenge of scale, the ethical and societal risks associated with algorithmic misinformation detection are not well-understood. In this paper, we employ and extend upon the notion of informational justice to develop a framework for explicating issues of justice relating to representation, participation, distribution of benefits and burdens, and credibility in the misinformation detection pipeline. Drawing on the framework: (1) we show how injustices materialize for stakeholders across three algorithmic stages in the pipeline; (2) we suggest empirical measures for assessing these injustices; and (3) we identify potential sources of these harms. This framework should help researchers, policymakers, and practitioners reason about potential harms or risks associated with these algorithms and provide conceptual guidance for the design of algorithmic fairness audits in this domain. ","Justice in Misinformation Detection Systems: An Analysis of Algorithms,
Stakeholders, and Potential Harms",5,"['📣 New #facct2022 paper “Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms”.\nLed by 1st year @UTexasMcCombs PhD student @TerrenceNeumann (say hi to him in Seoul!), and joint with @sinafazelpur 🧵\n ', 'As misinformation detection pipelines increasingly incorporate algorithms, justice of these systems become a central concern. Misinformation detection pipelines consist of varied tasks, each of which can give rise to many ethical concerns, and involve multiple stakeholders. 2/5 https://t.co/ZkeacaP5tS', 'Commonly, algorithmic fairness considers cases where each data instance pertains a single direct stakeholder that occupies one role: decision subject. In the case of informational items, in contrast, multiple stakeholders are directly implicated in each informational item. 3/5 https://t.co/NlgHksCQFa', 'We employ and extend upon the notion of informational justice to develop a framework for explicating issues of justice relating to misinformation detection systems, considering representation, participation, allocation, and credibility affecting different stakeholders. 4/5', 'Drawing on the framework: (1) we show how injustices materialize for stakeholders across three algorithmic stages in the pipeline; (2) we suggest empirical measures for assessing these injustices; and (3) we identify potential sources of these harms. 5/5']",22,04,1363
338,164,1474353824877842432,791705191175360512,Niels Warburton,"One last paper for the year, but we've saved the best for last. Here we present gravitational waveforms computed using second-order (in the mass ratio) self-force theory. We find remarkable agreement with NR waveforms for mass ratios of 10:1 or smaller. This work is in collaboration with @barry_wardell, Adam Pound, Jeremy Miller, Leanne Durkan, and Alexandre Le Tiec.",https://arxiv.org/abs/2112.12265,"We produce gravitational waveforms for nonspinning compact binaries undergoing a quasicircular inspiral. Our approach is based on a two-timescale expansion of the Einstein equations in second-order self-force theory, which allows first-principles waveform production in milliseconds. Although the approach is designed for extreme mass ratios, our waveforms agree remarkably well with those from full numerical relativity, even for comparable-mass systems. Our results will be invaluable in accurately modelling extreme-mass-ratio inspirals for the LISA mission and intermediate-mass-ratio systems currently being observed by the LIGO-Virgo-KAGRA Collaboration. ","Gravitational waveforms for compact binaries from second-order
self-force theory",2,"[""One last paper for the year, but we've saved the best for last. Here we present gravitational waveforms computed using second-order (in the mass ratio) self-force theory. We find remarkable agreement with NR waveforms for mass ratios of 10:1 or smaller. "", 'This work is in collaboration with @barry_wardell, Adam Pound, Jeremy Miller, Leanne Durkan, and Alexandre Le Tiec.']",21,12,383
339,91,1392951171636154376,1068545181576773632,Kenneth Brown,"Our new paper ""Optimizing Stabilizer Parities for Improved Logical Qubit Memories"" was posted to the arXiv yesterday . Joint work with Chris Monroe's group @JQInews @DukeEngineering #DukeQuantumCenter My latest PhD graduate @DriptoDebroy (now @GoogleQuantumAI ) and I have been thinking about coherent rotation errors on quantum error codes for the past year and a bit. We noticed that for Shor codes you could reduce this error on one axis by changing the sign of the stabilizers. For even-distance Shor codes you can make the error vanish. This is an example of a broader set of codes developed by our colleagues, Jingzhen Hu, Qingzhong Liang, @nrenga92, and Robert Calderbank #DukeQuantumCenter These codes are an example of weak collective decoherence free subspaces , but are not concatenations of DFS with stabilizer codes like . The DFS is part of the code. Our codes are not decoherence free subspaces just decoherence reducing. From a physics standpoint, we replace the very phase sensitive GHZ states (|000>+|111>) with the less phase sensitive states (|010>+|101>). Our experimental collaborators (Laird Egan, @crystalMIT13 , Andrew Risinger, Daiwei Zhu , Debopriyo Biswas, Marko Cetina, and Chris Monroe), tested it on 9-qubit states and saw a 4-fold increase in the logical T_2. For the weight-2 Z checks, changing the stabilizer sign is equivalent to switching from stabilizing ferromagnetic states to stabilizing antiferromagnetic states. In the appendix, we show numerically that switching the signs for weight-6 Z checks on the dual Shor code also helps. More broadly, for any stabilizer code, there are n-k subspaces that could be the code space. For independent random Pauli errors, it doesn't matter which subspace you use for your code. For correlated noise, it can make a huge difference. @cgranade @nrenga92 Yes. Exactly. I think what's cool about Jingzhen's et al. construction is that they arrived at it from a totally different direction and the DFS is just part of the code. edit: should be 2^(n-k) subspaces, where n is the number of data qubits and k is the number of logical qubits.",http://arxiv.org/abs/2105.05068,"We study variants of Shor's code that are adept at handling single-axis correlated idling errors, which are commonly observed in many quantum systems. By using the repetition code structure of the Shor's code basis states, we calculate the logical channel applied to the encoded information when subjected to coherent and correlated single qubit idling errors, followed by stabilizer measurement. Changing the signs of the stabilizer generators allows us to change how the coherent errors interfere, leading to a quantum error correcting code which performs as well as a classical repetition code of equivalent distance against these errors. We demonstrate a factor of 4 improvement of the logical memory in a distance-3 logical qubit implemented on a trapped-ion quantum computer. Even-distance versions of our Shor code variants are decoherence-free subspaces and fully robust to identical and independent coherent idling noise. ",Optimizing Stabilizer Parities for Improved Logical Qubit Memories,10,"['Our new paper ""Optimizing Stabilizer Parities for Improved Logical Qubit Memories"" was posted to the arXiv yesterday \n . 
Joint work with Chris Monroe\'s group @JQInews @DukeEngineering #DukeQuantumCenter', 'My latest PhD graduate @DriptoDebroy (now @GoogleQuantumAI ) and I have been thinking about coherent rotation errors on quantum error codes for the past year and a bit. We noticed that for Shor codes you could reduce this error on one axis by changing the sign of the stabilizers.', 'For even-distance Shor codes you can make the error vanish. This is an example of a broader set of codes developed by our colleagues, Jingzhen Hu, Qingzhong Liang, @nrenga92, and Robert Calderbank https://t.co/Z0hYvVWQmq #DukeQuantumCenter', 'These codes are an example of weak collective decoherence free subspaces https://t.co/VHDc6r1dui, but are not concatenations of DFS with stabilizer codes like https://t.co/SppPLCHQUG. The DFS is part of the code.', 'Our codes are not decoherence free subspaces just decoherence reducing. From a physics standpoint, we replace the very phase sensitive GHZ states (|000>+|111>) with the less phase sensitive states (|010>+|101>).', 'Our experimental collaborators (Laird Egan, @crystalMIT13 , Andrew Risinger, Daiwei Zhu , Debopriyo Biswas, Marko Cetina, and Chris Monroe), tested it on 9-qubit states and saw a 4-fold increase in the logical T_2.', 'For the weight-2 Z checks, changing the stabilizer sign is equivalent to switching from stabilizing ferromagnetic states to stabilizing antiferromagnetic states. In the appendix, we show numerically that switching the signs for weight-6 Z checks on the dual Shor code also helps.', ""More broadly, for any stabilizer code, there are n-k subspaces that could be the code space. For independent random Pauli errors, it doesn't matter which subspace you use for your code. For correlated noise, it can make a huge difference."", ""@cgranade @nrenga92 Yes. Exactly. I think what's cool about Jingzhen's et al. construction is that they arrived at it from a totally different direction and the DFS is just part of the code."", 'edit: should be 2^(n-k) subspaces, where n is the number of data qubits and k is the number of logical qubits.']",21,05,2150
340,0,1404497507774865409,1014263782493818880,Pierre Ablin,"New paper: Kernel Stein Discrepancy Descent ! A method to sample from unnormalized densities by optimization of the Kernel Stein Discrepancy (KSD)🎯🎯🎯 Paper: With @Korba_Anna, P-C Aubin & @sjm_majewski Accepted for a long talk @icmlconf 🍾 1/8🧵👇 Kernel Stein Discrepancy is a measure of distances between densities. It is a Maximum Mean Discrepancy (MMD) for a special Kernel, the Stein Kernel. 2/8 The Stein Kernel only involves the score s (derivative of the log) of the target distribution. We can therefore compute the KSD in the formula above using only the score of the target: unlike most MMD's, we do not need samples. 3/8 As a consequence, it is easy to evaluate the KSD between the target and a discrete measure of particles. In order to sample from the target, we can simply optimize this cost function w.r.t. the positions of the particles 4/8 We then have a simple objective function of the positions of the samples, which can then be minimized using (stochastic) gradient descent or the fast and robust algorithm L-BFGS. Here, particles reach an equilibrium on a simple Gaussian problem 5/8 Yet, we also uncover bad behaviors when the problem is not log-concave : some particles tend to get stuck in spurious local minima, for instance here on a mixture of Gaussian with low variances 6/8 We give a mathematical analysis for these phenomena, and also show surprising results like lack of exponential convergence. 7/8 If you want to try it yourself, we have made a python package: pip install ksddescent Site: Source: It also features pytorch/numpy code for the awesome SVGD algorithm () 8/8 @umutsimsekli @Korba_Anna @sjm_majewski @icmlconf Of course it's here ! For Bayesian ICA, KSD descent does not really work, because it is a hard non convex problem with many spurious local minima ",https://arxiv.org/abs/2105.09994,"Among dissimilarities between probability distributions, the Kernel Stein Discrepancy (KSD) has received much interest recently. We investigate the properties of its Wasserstein gradient flow to approximate a target probability distribution $\pi$ on $\mathbb{R}^d$, known up to a normalization constant. This leads to a straightforwardly implementable, deterministic score-based method to sample from $\pi$, named KSD Descent, which uses a set of particles to approximate $\pi$. Remarkably, owing to a tractable loss function, KSD Descent can leverage robust parameter-free optimization schemes such as L-BFGS; this contrasts with other popular particle-based schemes such as the Stein Variational Gradient Descent algorithm. We study the convergence properties of KSD Descent and demonstrate its practical relevance. However, we also highlight failure cases by showing that the algorithm can get stuck in spurious local minima. ",Kernel Stein Discrepancy Descent,9,"['New paper: Kernel Stein Discrepancy Descent !\n\nA method to sample from unnormalized densities by optimization of the Kernel Stein Discrepancy (KSD)🎯🎯🎯\n\nPaper: \n\nWith @Korba_Anna, P-C Aubin & @sjm_majewski \n\nAccepted for a long talk @icmlconf 🍾\n\n1/8🧵👇 ', 'Kernel Stein Discrepancy is a measure of distances between densities. 
It is a Maximum Mean Discrepancy (MMD) for a special Kernel, the Stein Kernel.\n\n2/8 https://t.co/B9AZGpafys', ""The Stein Kernel only involves the score s (derivative of the log) of the target distribution.\n\nWe can therefore compute the KSD in the formula above using only the score of the target: unlike most MMD's, we do not need samples.\n\n3/8 https://t.co/4tYJfGAk5v"", 'As a consequence, it is easy to evaluate the KSD between the target and a discrete measure of particles. In order to sample from the target, we can simply optimize this cost function w.r.t. the positions of the particles\n\n4/8 https://t.co/DkeDjpP5ok', 'We then have a simple objective function of the positions of the samples, which can then be minimized using (stochastic) gradient descent or the fast and robust algorithm L-BFGS.\n\nHere, particles reach an equilibrium on a simple Gaussian problem\n\n5/8 https://t.co/ZUEIF7ShxS', 'Yet, we also uncover bad behaviors when the problem is not log-concave : some particles tend to get stuck in spurious local minima, for instance here on a mixture of Gaussian with low variances\n\n6/8 https://t.co/7MMd8DsQvc', 'We give a mathematical analysis for these phenomena, and also show surprising results like lack of exponential convergence.\n\n7/8', 'If you want to try it yourself, we have made a python package:\n\npip install ksddescent\n\nSite: https://t.co/ckGZtfGYWN\nSource: https://t.co/rrrwBE67Ew\n\nIt also features pytorch/numpy code for the awesome SVGD algorithm (https://t.co/L9mJFciENx)\n\n8/8', ""@umutsimsekli @Korba_Anna @sjm_majewski @icmlconf Of course it's here ! \n\nFor Bayesian ICA, KSD descent does not really work, because it is a hard non convex problem with many spurious local minima https://t.co/BB5nfDdJ5l""]",21,05,1876
341,209,1357350848792301575,460489687,Juan Mateos Garcia,"""The privatisation of AI researchers"" In our latest working paper we study AI researcher career transitions between academic & industry. Opening q: How do we preserve a public interest AI research sphere in the face of strong industry demand for talent? 1. We study career transitions since 2020 using data from Microsoft Academic Graph. We find a growing flow of researchers from academia to industry [1], particularly from elite universities [2], and particularly to tech companies [3] 2. Our survival analysis shows that industry tends to recruit highly cited researchers with a specialism in deep learning. We also find evidence that female researchers are less likely to transition into industry after we account for other variables. 3. We also compare the ""productivity""* of researchers who transition into industry with peers who stay in academia. ""Switchers"" see a bump in productivity which is offset over time. Are they burning out, over-specialising, being captured...? 🤷 ____ * Proxied w/ citation rank Conclusion: Our results raise concerns about a potential hollowing out AI public research & knowledge as talented researchers transition into industry and become less productive over time. This provides a rationale to invest in public AI research & preserve its independence. PS. Lots of potential avenues to explore in further work eg: It has bee fun to work with @kstathou, @Daniel_S_Hain & @RJurowetzki on this one :-) * typo in # 2: since 2000",https://arxiv.org/abs/2102.01648,"The private sector is playing an increasingly important role in basic Artificial Intelligence (AI) R&D. This phenomenon, which is reflected in the perception of a brain drain of researchers from academia to industry, is raising concerns about a privatisation of AI research which could constrain its societal benefits. We contribute to the evidence base by quantifying transition flows between industry and academia and studying its drivers and potential consequences. We find a growing net flow of researchers from academia to industry, particularly from elite institutions into technology companies such as Google, Microsoft and Facebook. Our survival regression analysis reveals that researchers working in the field of deep learning as well as those with higher average impact are more likely to transition into industry. A difference-in-differences analysis of the effect of switching into industry on a researcher's influence proxied by citations indicates that an initial increase in impact declines as researchers spend more time in industry. This points at a privatisation of AI knowledge compared to a counterfactual where those high-impact researchers had remained in academia. Our findings highlight the importance of strengthening the public AI research sphere in order to ensure that the future of this powerful technology is not dominated by private interests. ","The Privatization of AI Research(-ers): Causes and Potential
Consequences -- From university-industry interaction to public research
brain-drain?",7,"['""The privatisation of AI researchers""\n\nIn our latest working paper we study AI researcher career transitions between academic & industry.\n\n\nOpening q: How do we preserve a public interest AI research sphere in the face of strong industry demand for talent? ', '1. We study career transitions since 2020 using data from Microsoft Academic Graph. \n\nWe find a growing flow of researchers from academia to industry [1], particularly from elite universities [2], and particularly to tech companies [3] https://t.co/JT8I2dUCub', '2. Our survival analysis shows that industry tends to recruit highly cited researchers with a specialism in deep learning. We also find evidence that female researchers are less likely to transition into industry after we account for other variables. https://t.co/VO1ietz8HB', '3. We also compare the ""productivity""* of researchers who transition into industry with peers who stay in academia. ""Switchers"" see a bump in productivity which is offset over time. Are they burning out, over-specialising, being captured...? 🤷\n\n____\n* Proxied w/ citation rank https://t.co/FownbBZ4oe', 'Conclusion: Our results raise concerns about a potential hollowing out AI public research & knowledge as talented researchers transition into industry and become less productive over time.\n\nThis provides a rationale to invest in public AI research & preserve its independence. https://t.co/3nh0YqG25h', 'PS. Lots of potential avenues to explore in further work eg:\n\nIt has bee fun to work with @kstathou, @Daniel_S_Hain & @RJurowetzki on this one :-) https://t.co/xdMnvLhQIY', '* typo in # 2: since 2000']",21,02,1512
342,71,1275042834127687680,4639078397,John Wise,"New paper day on Powderday, led by @desikanarayanan. It calculates mock spectra and imaging using Hyperion and FSPS with @yt_astro as the glue. It's also the first paper for my 1st year grad student @Snickersnmocha ! A big thanks to all of my co-authors since I only joined the group late. On twitter (that I know of): @powersoffour @astrofrog @AshKelly0 @chrisclovell I missed @gfsnyder !",https://arxiv.org/abs/2006.10757,"We present Powderday, a flexible, fast, open-source dust radiative transfer package designed to interface with galaxy formation simulations. Powderday builds on FSPS population synthesis models, Hyperion dust radiative transfer, and employs yt to interface between different software packages. We include our stellar population synthesis modeling on the fly, which allows for significant run-time flexibility in the assumed stellar physics. We include a model for nebular line emission that can employ either precomputed Cloudy lookup tables (for efficiency), or direct photoionization calculations for all young stars (for flexibility). The dust content follows either observationally-motivated prescriptions, direct modeling from galaxy formation simulations, or a novel approach that includes the dust content via learning-based algorithms from the SIMBA cosmological galaxy formation simulation. AGN can additionally be included via a range of prescriptions. The output of these models are broadband SEDs, as well as filter-convolved images. Powderday is designed to eliminate last-mile efforts by researchers that employ different hydrodynamic galaxy formation models, and seamlessly interfaces with GIZMO, AREPO, GASOLINE, CHANGA, and ENZO. We demonstrate the capabilities of the code via three applications: a model for the star formation rate (SFR) - infrared luminosity relation in galaxies (including the impact of AGN); the impact of circumstellar dust around AGB stars on the mid-infrared emission from galaxy SEDs; and the impact of galaxy inclination angle on dust attenuation laws. ",Powderday: Dust Radiative Transfer for Galaxy Simulations,3,"[""New paper day on Powderday, led by @desikanarayanan. It calculates mock spectra and imaging using Hyperion and FSPS with @yt_astro as the glue. It's also the first paper for my 1st year grad student @Snickersnmocha ! "", 'A big thanks to all of my co-authors since I only joined the group late. On twitter (that I know of): @powersoffour @astrofrog @AshKelly0 @chrisclovell', 'I missed @gfsnyder !']",20,06,403
343,121,1484126049864716291,62115778,Sven Apel,Pretty happy with our brand new @ICSEconf paper introducing the notion of feature causality in configurable systems (building on the seminal work by Halpern and Pearl). @BNuseibeh @ICSEconf Thanks a lot @BNuseibeh! That makes me happy again 😀,https://arxiv.org/abs/2201.07280,"Detecting and understanding reasons for defects and inadvertent behavior in software is challenging due to their increasing complexity. In configurable software systems, the combinatorics that arises from the multitude of features a user might select from adds a further layer of complexity. We introduce the notion of feature causality, which is based on counterfactual reasoning and inspired by the seminal definition of actual causality by Halpern and Pearl. Feature causality operates at the level of system configurations and is capable of identifying features and their interactions that are the reason for emerging functional and non-functional properties. We present various methods to explicate these reasons, in particular well-established notions of responsibility and blame that we extend to the feature-oriented setting. Establishing a close connection of feature causality to prime implicants, we provide algorithms to effectively compute feature causes and causal explications. By means of an evaluation on a wide range of configurable software systems, including community benchmarks and real-world systems, we demonstrate the feasibility of our approach: We illustrate how our notion of causality facilitates to identify root causes, estimate the effects of features, and detect feature interactions. ",Causality in Configurable Software Systems,2,"['Pretty happy with our brand new @ICSEconf paper introducing the notion of feature causality in configurable systems (building on the seminal work by Halpern and Pearl). \n ', '@BNuseibeh @ICSEconf Thanks a lot @BNuseibeh! That makes me happy again 😀']",22,01,256
344,253,1367839174469029899,370409954,Gael Varoquaux,"New preprint: Accounting for Variance in Machine Learning Benchmarks Lead by @bouthilx and @Mila_Quebec friends We show that ML benchmarks contain multiple sources of uncontrolled variation, not only inits. We propose procedure for reliable conclusion 1/8 Data split and hyper-parameter selection (even with fancy hyper-parameter optimization) appear as the leading source of arbitrary variations in ML benchmarks, beyond random weight init. These must be sampled to give empirical evidence on algorithm comparison that generalize 2/8 Even in deep-learning benchmarks, performed on large datasets, the variance of the observed performance is limited by the set of the test set 3/8 Altogether, these variances are not small compared to observed improvements in the literature. Hence, there is a risk that published findings may be due to chance, for instance finding better hyper-parameters from one algorithm than another 4/8 Sampling the variance of hyper-parameter tuning is very costly, eg in deep learning. We measure it in clean (and costly) experiments, but also study imperfect estimators. We show that it is best to sample all sources of variation to minimize error on estimation of performance 5/8 Variance of performance results must be accounted for to conclude on whether or not there is evidence that an algorithm is an improvement. To avoid accepting trivial differences, we use to use ""non-inferiority"", or Neyman-Pearson, tests, used in clinical trials 6/8 Our recommendations (based on 5 case studies, deep learning & classical ML): • Randomize as many sources of variations as possible (weight inits, data order, data splitting) • Multiple validation splits • P(A > B) > 0.75 (comparing improvement to variance) 7/8 This work relies on extensive experiments, with multiple datasets and ML pipelines, including many hyper-parameter optimization of deep learning pipelines. Thanks to an amazing teams, comprising @bouthilx @AssyaTrofimov @EdwardRaffML and many more 8/8",https://arxiv.org/abs/2103.03098,"Strong empirical evidence that one machine-learning algorithm A outperforms another one B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameters choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization and hyperparameter choice impact markedly the results. We analyze the predominant comparison methods used today in the light of this variance. We show a counter-intuitive result that adding more sources of variation to an imperfect estimator approaches better the ideal estimator at a 51 times reduction in compute cost. Building on these results, we study the error rate of detecting improvements, on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons. ",Accounting for Variance in Machine Learning Benchmarks,8,"['New preprint: Accounting for Variance in Machine Learning Benchmarks\n\nLead by @bouthilx and @Mila_Quebec friends\n\nWe show that ML benchmarks contain multiple sources of uncontrolled variation, not only inits. 
We propose procedure for reliable conclusion 1/8', 'Data split and hyper-parameter selection (even with fancy hyper-parameter optimization) appear as the leading source of arbitrary variations in ML benchmarks, beyond random weight init.\n\nThese must be sampled to give empirical evidence on algorithm comparison that generalize 2/8 https://t.co/kE8VemNrPY', 'Even in deep-learning benchmarks, performed on large datasets, the variance of the observed performance is limited by the set of the test set 3/8 https://t.co/PAvhOpJ86O', 'Altogether, these variances are not small compared to observed improvements in the literature. Hence, there is a risk that published findings may be due to chance, for instance finding better hyper-parameters from one algorithm than another 4/8 https://t.co/qwCdfCmPj5', 'Sampling the variance of hyper-parameter tuning is very costly, eg in deep learning. We measure it in clean (and costly) experiments, but also study imperfect estimators.\nWe show that it is best to sample all sources of variation to minimize error on estimation of performance 5/8 https://t.co/uOsyCzM8C9', 'Variance of performance results must be accounted for to conclude on whether or not there is evidence that an algorithm is an improvement.\n\nTo avoid accepting trivial differences, we use to use ""non-inferiority"", or Neyman-Pearson, tests, used in clinical trials 6/8 https://t.co/5CsC13AsRb', 'Our recommendations (based on 5 case studies, deep learning & classical ML):\n• Randomize as many sources of variations as possible (weight inits, data order, data splitting)\n• Multiple validation splits\n• P(A > B) > 0.75 (comparing improvement to variance)\n\n7/8', 'This work relies on extensive experiments, with multiple datasets and ML pipelines, including many hyper-parameter optimization of deep learning pipelines.\n\nThanks to an amazing teams, comprising @bouthilx @AssyaTrofimov @EdwardRaffML and many more\n\nhttps://t.co/StDk21LCTI\n8/8']",21,03,2040
345,11,1333768498099548161,167395734,Daiki Nishiguchi,"Cute flocking colloids in our new paper with @jiwasawa ! They show long-range order, algebraic correlations, giant fluctuations, enhanced diffusion etc. Many implications and challenges on the connection between Vicsek World & Active Brownian Physics. ",https://arxiv.org/abs/2011.14548,"We study the polar collective dynamics of Janus colloidal particles fueled by an AC electric field. When the density is high enough, the polar interactions between the particles induce a polar orientationally ordered state which exhibits features reminiscent of the Vicsek model such as true long-range order and giant number fluctuations. Independent measurements of the polarity and velocity at the single particle level allowed us to investigate the single particle dynamics within the ordered state. We discovered theoretically-unaddressed statistical properties of the ordered state such as the asymmetric relation of polarity and velocity, enhanced rotational diffusion stronger than in the disordered state, and an algebraic auto-correlation of the polarity. Our experimental findings, at the crossroad of the Vicsek physics and the Active Brownian Particles physics, shed light on the so-far-unexplored physics arising from the interplay between the polarity and the velocity. ","Algebraic correlations and anomalous fluctuations in ordered flocks of
Janus particles fueled by an AC electric field",1,"['Cute flocking colloids in our new paper with @jiwasawa !\nThey show long-range order, algebraic correlations, giant fluctuations, enhanced diffusion etc. Many implications and challenges on the connection between Vicsek World & Active Brownian Physics.\n ']",20,11,265
346,7,756214204122603520,23165990,Seán Bartz 🏁🏎,"It feels hot enough outside to restore chiral symmetry (which my new paper predicts is 151 MeV ~ a trillion degrees) When you say ""about a trillion degrees,"" it doesn't matter if you use Fahrenheit or Celsius. Switching to Kelvin is a rounding error. Chirality literally refers to ""handedness"" of particles. Do they seem to be spinning clockwise or counterclockwise as they move toward you? Imagine a football thrown by a right-handed player vs thrown by a lefty. They both move forward, but spin opposite ways. Chirality. A video of a left-handed throw would look the same as video of a right-handed throw that had been reflected in a mirror. Chiral symmetry. Some particles made of quarks look the same as their reflection, and some look reversed. If they were like footballs, this wouldn't matter. But quarks are different. How they look under reflection affects how they interact w/ empty space. Thus, the particles have different masses This is chiral symmetry breaking. A symmetry (mirror reflection) is almost true, but not quite. A mirror universe is different from ours. Chiral symmetry breaking accounts for most of the mass of ordinary matter. Quark mass accounts for about 10%, rest from this interaction This has all been at zero temperature. At high temp and density (like when we collide gold ions together), chiral symmetry is restored. My paper looks at how chiral symmetry is restored as temperature and density increase. Share w/ your friends! This paper written in collaboration with a @Macalester undergrad, Theo Jacobson! #heymac Also, shout out to @TheOnlyMasSquad whose question inspired the chiral football analogy ",http://arxiv.org/abs/1607.05751,"We investigate the in-medium behavior of mesons at finite temperature and baryon chemical potential within a soft-wall model of AdS/QCD. We use a quartic scalar potential to obtain the correct form of chiral symmetry breaking. At zero quark mass the chiral phase transition is second-order, becoming a crossover at physical quark mass. At zero baryon chemical potential, we find a chiral transition temperature of 155 MeV in the chiral limit and a pseudo-transition temperature of 151 MeV at physical quark mass, consistent with lattice results. In the low-temperature limit, the second-order transition occurs at a baryon chemical potential of 566 MeV while the rapid crossover occurs at 559 MeV. A new parameterization of the dilaton profile results in improved meson spectra. Meson melting occurs at a lower temperature and chemical potential than the chiral phase transition, so the vector-axial vector mass splitting remains constant until the bound states melt. ",Chiral Phase Transition and Meson Melting from AdS/QCD,13,"['It feels hot enough outside to restore chiral symmetry (which my new paper predicts is 151 MeV ~ a trillion degrees) ', 'When you say ""about a trillion degrees,"" it doesn\'t matter if you use Fahrenheit or Celsius. Switching to Kelvin is a rounding error.', 'Chirality literally refers to ""handedness"" of particles. Do they seem to be spinning clockwise or counterclockwise as they move toward you?', 'Imagine a football thrown by a right-handed player vs thrown by a lefty. They both move forward, but spin opposite ways. Chirality.', 'A video of a left-handed throw would look the same as video of a right-handed throw that had been reflected in a mirror. Chiral symmetry.', ""Some particles made of quarks look the same as their reflection, and some look reversed. 
If they were like footballs, this wouldn't matter."", 'But quarks are different. How they look under reflection affects how they interact w/ empty space. Thus, the particles have different masses', 'This is chiral symmetry breaking. A symmetry (mirror reflection) is almost true, but not quite. A mirror universe is different from ours.', 'Chiral symmetry breaking accounts for most of the mass of ordinary matter. Quark mass accounts for about 10%, rest from this interaction', 'This has all been at zero temperature. At high temp and density (like when we collide gold ions together), chiral symmetry is restored.', 'My paper looks at how chiral symmetry is restored as temperature and density increase. Share w/ your friends! https://t.co/M1JkpJ5Rnv', 'This paper written in collaboration with a @Macalester undergrad, Theo Jacobson! #heymac https://t.co/M1JkpJ5Rnv', 'Also, shout out to @TheOnlyMasSquad whose question inspired the chiral football analogy https://t.co/IcRcb8vfSw']",16,07,1667
347,91,1491715766147862530,1397091478460116993,Laura Colzi,"Check out our new paper: Colzi et al. (2022), ApJL in press, arXiv:2202.0411 @L_Colzi @ryvendel Thanks to deuterated molecules we have spotted the presence of two different gas components towards the Galactic Centre source G+0.693-0.027. ",https://arxiv.org/abs/2202.04111,"The Central Molecular Zone (CMZ) contains most of the mass of our Galaxy but its star formation rate is one order of magnitude lower than in the Galactic disc. This is likely related to the fact that the bulk of the gas in the CMZ is in a warm ($>$100 K) and turbulent phase with little material in the pre-stellar phase. We present in this Letter observations of deuterium fractionation (D/H ratios) of HCN, HNC, HCO$^{+}$, and N$_{2}$H$^{+}$ towards the CMZ molecular cloud G+0.693-0.027. These observations clearly show, for the first time, the presence of a colder, denser, and less turbulent narrow component, with a line width of $\sim$9 km s$^{-1}$, in addition to the warm, less dense and turbulent broad component with a line width of $\sim$20 km s$^{-1}$. The very low D/H ratio $\le$6$\times$10$^{-5}$ for HCO$^{+}$ and N$_{2}$H$^{+}$, close to the cosmic value ($\sim$2.5$\times$10$^{-5}$), and the high D/H ratios $>$4$\times$10$^{-4}$ for HCN and HNC derived for the broad component, confirm the presence of high-temperatures deuteration routes for nitriles. For the narrow component we have derived D/H ratios $>$10$^{-4}$ and excitation temperatures of $7$ K for all molecules, suggesting kinetic temperatures $\le$30 K and H$_2$ densities $\ge$5$\times$10$^{4}$ cm$^{-3}$, at least one order of magnitude larger than for the broad component. The method presented in this Letter allows to identify clouds on the verge of star formation, i.e. under pre-stellar conditions, towards the CMZ. This method can also be used for the identification of such clouds in external galaxies. ","Deuterium fractionation as a multi-phase component tracer in the
Galactic Centre",1,"['Check out our new paper: Colzi et al. (2022), ApJL in press, arXiv:2202.0411 @L_Colzi @ryvendel \n\nThanks to deuterated molecules we have spotted the presence of two different gas components towards the Galactic Centre source G+0.693-0.027. ']",22,02,252
348,14,811039425123651586,460069521,Andrew Francis,"New paper out with @jezlurch and Peter Jarvis, using representation theory to calc inversion distance in bacteria! Very nice to be able to use the group algebra in algebraic biology! And some irreducible characters, thanks to power work by @jezlurch",https://arxiv.org/abs/1612.06035,"In the context of bacteria and models of their evolution under genome rearrangement, we explore a novel application of group representation theory to the inference of evolutionary history. Our contribution is to show, in a very general maximum likelihood setting, how to use elementary matrix algebra to sidestep intractable combinatorial computations and convert the problem into one of eigenvalue estimation amenable to standard numerical approximation techniques. ","A representation-theoretic approach to the calculation of evolutionary
distance in bacteria",2,"['New paper out with @jezlurch and Peter Jarvis, using representation theory to calc inversion distance in bacteria!\n', 'Very nice to be able to use the group algebra in algebraic biology! And some irreducible characters, thanks to power work by @jezlurch']",16,12,256
349,141,1349297718695505921,978638177048162305,Mirko Signorelli,"New #arXiv preprint -> We propose #PenalizedRegressionCalibration (PRC), a statistical method that makes it possible to predict #survival using #longitudinal AND #highdimensional predictors. Implemented in the #RStats package #pencal, available on #CRAN ",https://arxiv.org/abs/2101.04426,"Longitudinal and high-dimensional measurements have become increasingly common in biomedical research. However, methods to predict survival outcomes using covariates that are both longitudinal and high-dimensional are currently missing. In this article, we propose penalized regression calibration (PRC), a method that can be employed to predict survival in such situations. PRC comprises three modeling steps: First, the trajectories described by the longitudinal predictors are flexibly modeled through the specification of multivariate mixed effects models. Second, subject-specific summaries of the longitudinal trajectories are derived from the fitted mixed models. Third, the time to event outcome is predicted using the subject-specific summaries as covariates in a penalized Cox model. To ensure a proper internal validation of the fitted PRC models, we furthermore develop a cluster bootstrap optimism correction procedure that allows to correct for the optimistic bias of apparent measures of predictiveness. PRC and the CBOCP are implemented in the R package pencal, available from CRAN. After studying the behavior of PRC via simulations, we conclude by illustrating an application of PRC to data from an observational study that involved patients affected by Duchenne muscular dystrophy, where the goal is predict time to loss of ambulation using longitudinal blood biomarkers. ","Penalized regression calibration: a method for the prediction of
survival outcomes using complex longitudinal and high-dimensional data",1,"['New #arXiv preprint -> \nWe propose #PenalizedRegressionCalibration (PRC), a statistical method that makes it possible to predict #survival using #longitudinal AND #highdimensional predictors. Implemented in the #RStats package #pencal, available on #CRAN ']",21,01,270
350,57,1217435303184650240,4639078397,John Wise,"New paper day! D. Skinner (@drenniks) has her 1st 1st-author paper! Pop III multiplicity is the norm and their host halos are pretty resistant to UV backgrounds. They may be more common than previously thought. More metal, more BHs. Check it out! @toomanyspectra @drenniks Thanks! Next on the list: r-process enrichment from NSMs!",https://arxiv.org/abs/2001.04480,"The formation of Population III (Pop III) stars is a critical step in the evolution of the early universe. To understand how these stars affected their metal-enriched descendants, the details of how, why and where Pop III formation takes place needs to be determined. One of the processes that is assumed to greatly affect the formation of Pop III stars is the presence of a Lyman-Werner (LW) radiation background, that destroys H$_2$, a necessary coolant in the creation of Pop III stars. Self-shielding can alleviate the effect the LW background has on the H$_2$ within haloes. In this work, we perform a cosmological simulation to study the birthplaces of Pop III stars, using the adaptive mesh refinement code Enzo. We investigate the distribution of host halo masses and its relationship to the LW background intensity. Compared to previous work, haloes form Pop III stars at much lower masses, up to a factor of a few, due to the inclusion of H$_2$ self-shielding. We see no relationship between the LW intensity and host halo mass. Most haloes form multiple Pop III stars, with a median number of four, up to a maximum of 16, at the instance of Pop III formation. Our results suggest that Pop III star formation may be less affected by LW radiation feedback than previously thought and that Pop III multiple systems are common. ","Cradles of the first stars: self-shielding, halo masses, and
multiplicity",2,"['New paper day! D. Skinner (@drenniks) has her 1st 1st-author paper! Pop III multiplicity is the norm and their host halos are pretty resistant to UV backgrounds. They may be more common than previously thought. More metal, more BHs. Check it out! ', '@toomanyspectra @drenniks Thanks! Next on the list: r-process enrichment from NSMs!']",20,01,344
351,94,1448620347033587716,1138762581164855298,Christoph Ternes,"New paper, We discuss the current status of the reactor antineutrino anomaly. Using the newest flux models, the RAA is resolved. We also show that reactor experiments exclude the region of sterile neutrino parameters preferred by Gallium experiments.",https://arxiv.org/abs/2110.06820,"We study the status of the reactor antineutrino anomaly in light of recent reactor flux models obtained with the conversion and summation methods. We present a new improved calculation of the IBD yields of the standard Huber-Mueller (HM) model and those of the new models. We show that the reactor rates and the fuel evolution data are consistent with the predictions of the Kurchatov Institute (KI) conversion model and with those of the Estienne-Fallot (EF) summation model, leading to a plausible robust demise of the reactor antineutrino anomaly. We show that the results of several goodness of fit tests favor the KI and EF models over other models that we considered. We also discuss the implications of the new reactor flux models for short-baseline neutrino oscillations due to active-sterile oscillations. We show that reactor data give upper bounds on active-sterile neutrino mixing that are not very different for the reactor flux models under consideration and are in tension with the large mixing required by the Gallium anomaly that has been refreshed by the recent results of the BEST experiment. ",Reactor antineutrino anomaly in light of recent flux model refinements,1,"['New paper, We discuss the current status of the reactor antineutrino anomaly. Using the newest flux models, the RAA is resolved. We also show that reactor experiments exclude the region of sterile neutrino parameters preferred by Gallium experiments.']",21,10,257
352,26,1243551404297342982,888216099757490176,Maithra Raghu,"A Survey of Deep Learning for Scientific Discovery To help facilitate using DL in science, we survey a broad range of deep learning methods, new research results, implementation tips & many links to code/tutorials Paper Work with @ericschmidt Thread⬇️ We begin with some high level considerations: (i) template ways in which deep learning can be used in scientific problems (ii) overviews of the entire end-to-end deep learning design process (iii) highlights of (when to use) key alternate machine learning methods We provide links to incredible resources developed by the community: software packages & high level APIs, freely available DL tutorials, sites with summaries/discussions/code of new research, repositories of DL pipelines & pretrained models, data curation & analysis packages We then describe core models/tasks/methods, including CNNs (detection, segmentation, registration, many others), graph NNs, sequence models + tasks (RNNs, transformers, attention, embeddings, Q&A, seq2seq, etc) with links to science use cases, tutorials and code throughout We overview other powerful methods for training neural networks, such as transfer learning, domain adaptation, multitask learning and weak supervision. In many (scientific) use cases not much data may be available to train machine learning models. Requiring less (labelled) data is a very active research area, and we overview new advances in (i) self-supervision (ii) semi-supervised learning (iii) data augmentation (iv)denoising Central to many scientific problems is going from *predictions* to *understanding*: identifying underlying mechanisms & key data features. We survey results in interpretability & representation analysis enabling data feature attribution & insights on model hidden representations We also (i) highlight core ideas and possible use cases of deep generative models and deep reinforcement learning (ii) provide implementation tips for getting started (explore data, try simple methods, start with known models/algorithms) and for debugging/improving performance The best part of writing this survey was learning even more about the incredible work being done by the community across research, teaching courses, developing/opensourcing code, in-depth tutorials. Was very hard to reference it all! We welcome pointers to other related work!",https://arxiv.org/abs/2003.11755,"Over the past few years, we have seen fundamental breakthroughs in core problems in machine learning, largely driven by advances in deep neural networks. At the same time, the amount of data collected in a wide array of scientific domains is dramatically increasing in both size and complexity. Taken together, this suggests many exciting opportunities for deep learning applications in scientific settings. But a significant challenge to this is simply knowing where to start. The sheer breadth and diversity of different deep learning techniques makes it difficult to determine what scientific problems might be most amenable to these methods, or which specific combination of methods might offer the most promising first approach. In this survey, we focus on addressing this central issue, providing an overview of many widely used deep learning models, spanning visual, sequential and graph structured data, associated tasks and different training methods, along with techniques to use deep learning with less data and better interpret these complex models --- two central considerations for many scientific use cases. 
We also include overviews of the full design process, implementation tips, and links to a plethora of tutorials, research summaries and open-sourced deep learning pipelines and pretrained models, developed by the community. We hope that this survey will help accelerate the use of deep learning across different scientific domains. ",A Survey of Deep Learning for Scientific Discovery,9,"['A Survey of Deep Learning for Scientific Discovery\n\nTo help facilitate using DL in science, we survey a broad range of deep learning methods, new research results, implementation tips & many links to code/tutorials\n\nPaper \n\nWork with @ericschmidt\n \nThread⬇️ ', 'We begin with some high level considerations: (i) template ways in which deep learning can be used in scientific problems (ii) overviews of the entire end-to-end deep learning design process (iii) highlights of (when to use) key alternate machine learning methods', 'We provide links to incredible resources developed by the community: software packages & high level APIs, freely available DL tutorials, sites with summaries/discussions/code of new research, repositories of DL pipelines & pretrained models, data curation & analysis packages https://t.co/JloKcxtAO4', 'We then describe core models/tasks/methods, including CNNs (detection, segmentation, registration, many others), graph NNs, sequence models + tasks (RNNs, transformers, attention, embeddings, Q&A, seq2seq, etc) with links to science use cases, tutorials and code throughout https://t.co/ADfMsZ0W0k', 'We overview other powerful methods for training neural networks, such as transfer learning, domain adaptation, multitask learning and weak supervision. https://t.co/QPdWPCOjgS', 'In many (scientific) use cases not much data may be available to train machine learning models. Requiring less (labelled) data is a very active research area, and we overview new advances in (i) self-supervision (ii) semi-supervised learning (iii) data augmentation (iv)denoising https://t.co/Ag028UbRI9', 'Central to many scientific problems is going from *predictions* to *understanding*: identifying underlying mechanisms & key data features. We survey results in interpretability & representation analysis enabling data feature attribution & insights on model hidden representations https://t.co/oCIkJaqgE2', 'We also (i) highlight core ideas and possible use cases of deep generative models and deep reinforcement learning (ii) provide implementation tips for getting started (explore data, try simple methods, start with known models/algorithms) and for debugging/improving performance', 'The best part of writing this survey was learning even more about the incredible work being done by the community across research, teaching courses, developing/opensourcing code, in-depth tutorials. Was very hard to reference it all! We welcome pointers to other related work!']",20,03,2382
353,115,1324175572268797953,326843207,Yuta Notsu,"Great news! Our new paper is accepted !! ""Statistical Properties of Superflares on Solar-type Stars: Results Using All of the Kepler Primary Mission Data” Okamoto, Notsu, Maehara, Namekata, Honda, Ikuta, Nogami, and Shibata, ApJ in press We report the latest statistical analyses of superflares on solar-type stars using ""all"" of the (4-year) Kepler primary mission data, and Gaia-DR2catalog. The sample size of solar-type stars and Sun-like stars are ∼4 and ∼12 times, respectively, compared with Notsu+2019. We found 2341 superflares on 265 solar-type stars, and 26 superflares on 15 Sun-like stars: the former increased from 527 to 2341 and the latter from 3 to 26 events compared with Notsu+2019. This enabled us to have a more well-established view on stat properties of superflares. We updated the flare detection method from our previous studies by using high-pass filter to remove rotational variations caused by starspots. We also examined the sample biases on the frequency of superflares, taking into account gyrochronology and flare detection completeness. The observed upper limit of the flare energy decreases as the rotation period increases in solar-type stars. The frequency of superflares decreases as the stellar rotation period increases. The maximum energy we found on Sun-like stars (P_rot>20 day) is 4×10^34 erg. One of the important conclusion from all Kepler data: Our analysis of Sun-like stars suggest that the Sun can cause superflares with energies of ∼7×10^33 erg (∼X700-class flares) and ∼1×10^34 erg (∼X1000-class flares) once every ∼3,000 years and ∼6,000 years, respectively. [Figure] Comparison between the frequency distribution of superflares on Sun-like stars and solar flares. ",https://arxiv.org/abs/2011.02117,"We report the latest statistical analyses of superflares on solar-type (G-type main-sequence; effective temperature is 5100 - 6000 K) stars using all of the $Kepler$ primary mission data, and $Gaia$-DR2 (Data Release 2) catalog. We updated the flare detection method from our previous studies by using high-pass filter to remove rotational variations caused by starspots. We also examined the sample biases on the frequency of superflares, taking into account gyrochronology and flare detection completeness. The sample size of solar-type stars and Sun-like stars (effective temperature is 5600 - 6000 K and rotation period is over 20 days in solar-type stars) are $\sim$4 and $\sim$12 times, respectively, compared with Notsu et al. (2019, ApJ, 876, 58). As a result, we found 2341 superflares on 265 solar-type stars, and 26 superflares on 15 Sun-like stars: the former increased from 527 to 2341 and the latter from 3 to 26 events compared with our previous study. This enabled us to have a more well-established view on the statistical properties of superflares. The observed upper limit of the flare energy decreases as the rotation period increases in solar-type stars. The frequency of superflares decreases as the stellar rotation period increases. The maximum energy we found on Sun-like stars is $4 \times 10^{34}$ erg. Our analysis of Sun-like stars suggest that the Sun can cause superflares with energies of $\sim 7 \times 10^{33}$ erg ($\sim$X700-class flares) and $\sim 1 \times 10^{34}$ erg ($\sim$X1000-class flares) once every $\sim$3,000 years and $\sim$6,000 years, respectively. ","Statistical Properties of Superflares on Solar-type Stars: Results Using
All of the Kepler Primary Mission Data",7,"['Great news! Our new paper is accepted !!\n\n""Statistical Properties of Superflares on Solar-type Stars: Results Using All of the Kepler Primary Mission Data”\n\n\nOkamoto, Notsu, Maehara, Namekata, Honda, Ikuta, Nogami, and Shibata, ApJ in press', 'We report the latest statistical analyses of superflares on solar-type stars using ""all"" of the (4-year) Kepler primary mission data, and Gaia-DR2catalog. \n\nThe sample size of solar-type stars and Sun-like stars are ∼4 and ∼12 times, respectively, compared with Notsu+2019.', 'We found 2341 superflares on 265 solar-type stars, and 26 superflares on 15 Sun-like stars: the former increased from 527 to 2341 and the latter from 3 to 26 events compared with Notsu+2019. This enabled us to have a more well-established view on stat properties of superflares.', 'We updated the flare detection method from our previous studies by using high-pass filter to remove rotational variations caused by starspots. We also examined the sample biases on the frequency of superflares, taking into account gyrochronology and flare detection completeness.', 'The observed upper limit of the flare energy decreases as the rotation period increases in solar-type stars. The frequency of superflares decreases as the stellar rotation period increases. The maximum energy we found on Sun-like stars (P_rot>20 day) is 4×10^34 erg.', 'One of the important conclusion from all Kepler data:\n\nOur analysis of Sun-like stars suggest that the Sun can cause superflares with energies of ∼7×10^33 erg (∼X700-class flares) and ∼1×10^34 erg (∼X1000-class flares) once every ∼3,000 years and ∼6,000 years, respectively.', '[Figure] Comparison between the frequency distribution of superflares on Sun-like stars and solar flares. https://t.co/y9Ockacyfb']",20,11,1733
354,129,1488436542372786177,1228373893758562310,Aliaksandr Hubin 🤍❤️🤍 💙💛,Proud to share the first version of preprint of the first paper by my first master student (based on thesis) . We introduce and study the idea of combining SGD to compute the marginal likelihood with MCMC to explore the model space and show convergence.,https://arxiv.org/abs/2201.13198,"It is common practice to use Laplace approximations to compute marginal likelihoods in Bayesian versions of generalised linear models (GLM). Marginal likelihoods combined with model priors are then used in different search algorithms to compute the posterior marginal probabilities of models and individual covariates. This allows performing Bayesian model selection and model averaging. For large sample sizes, even the Laplace approximation becomes computationally challenging because the optimisation routine involved needs to evaluate the likelihood on the full set of data in multiple iterations. As a consequence, the algorithm is not scalable for large datasets. To address this problem, we suggest using a version of a popular batch stochastic gradient descent (BSGD) algorithm for estimating the marginal likelihood of a GLM by subsampling from the data. We further combine the algorithm with Markov chain Monte Carlo (MCMC) based methods for Bayesian model selection and provide some theoretical results on the convergence of the estimates. Finally, we report results from experiments illustrating the performance of the proposed algorithm. ",A subsampling approach for Bayesian model selection,1,['Proud to share the first version of preprint of the first paper by my first master student (based on thesis) . We introduce and study the idea of combining SGD to compute the marginal likelihood with MCMC to explore the model space and show convergence.'],22,01,259
355,64,1238390910846742528,3021399517,Jean-Baptiste Mouret,"New paper: we designed an algorithm that allows miniature underground UAVs to establish communication relays in tunnels. Paper: We use the nice Crazyflies by @Bitcraze_se. @Bitcraze_se Thanks. If you are interested, we could write a blog post about this (in particular, we have actual p2p communication and everyting runs on the crazyflies)",https://arxiv.org/abs/2003.04409,"Miniature multi-rotors are promising robots for navigating subterranean networks, but maintaining a radio connection underground is challenging. In this paper, we introduce a distributed algorithm, called U-Chain (for Underground-chain), that coordinates a chain of flying robots between an exploration drone and an operator. Our algorithm only uses the measurement of the signal quality between two successive robots as well as an estimate of the ground speed based on an optic flow sensor. We evaluate our approach formally and in simulation, and we describe experimental results with a chain of 3 real miniature quadrotors (12 by 12 cm) and a base station. ","Signal-based self-organization of a chain of UAVs for subterranean
exploration",2,"['New paper: we designed an algorithm that allows miniature underground UAVs to establish communication relays in tunnels. Paper: We use the nice Crazyflies by @Bitcraze_se. ', '@Bitcraze_se Thanks. If you are interested, we could write a blog post about this (in particular, we have actual p2p communication and everyting runs on the crazyflies)']",20,03,354
356,49,1494525180462338048,1069244826,Preetum Nakkiran,"New paper, short and sweet: ""Limitations of Neural Collapse for Understanding Generalization in Deep Learning"" with Like Hui, Misha Belkin. Neural collapse is often claimed to be, in some way, deeply relevant for generalization. But is it? 1/3 The literature is muddled because most work does not carefully distinguish between behavior on the train set (an *optimization* property) vs. behavior at test time (a *generation* property). We clarify these issues by introducing more precise definitions of ""Neural Collapse"" 2/3 From these definitions, it's clear that Train Collapse may occur, but Test Collapse is often impossible. Thus Neural Collapse is primarily an *optimization* phenomena, with unclear connections to generalization. See paper for more, incl cases where Collapse is undesirable. 3/3 @machinaut Good question, we argue in the paper that the right definition of ""collapse"" should be finite-sample, but infinite-time. (Because for infinite both, things just converge to bayes optimal, and are not interesting). @machinaut I don't know what happens to the representation-layer in grokking -- but now that you mention it, this sounds like a great experiment to try. @XYHan_ @GalantiTomer @weijie444 Thanks for the nice comments! And for the additional refs— I’ll take a look.",https://arxiv.org/abs/2202.08384,"The recent work of Papyan, Han, & Donoho (2020) presented an intriguing ""Neural Collapse"" phenomenon, showing a structural property of interpolating classifiers in the late stage of training. This opened a rich area of exploration studying this phenomenon. Our motivation is to study the upper limits of this research program: How far will understanding Neural Collapse take us in understanding deep learning? First, we investigate its role in generalization. We refine the Neural Collapse conjecture into two separate conjectures: collapse on the train set (an optimization property) and collapse on the test distribution (a generalization property). We find that while Neural Collapse often occurs on the train set, it does not occur on the test set. We thus conclude that Neural Collapse is primarily an optimization phenomenon, with as-yet-unclear connections to generalization. Second, we investigate the role of Neural Collapse in feature learning. We show simple, realistic experiments where training longer leads to worse last-layer features, as measured by transfer-performance on a downstream task. This suggests that neural collapse is not always desirable for representation learning, as previously claimed. Finally, we give preliminary evidence of a ""cascading collapse"" phenomenon, wherein some form of Neural Collapse occurs not only for the last layer, but in earlier layers as well. We hope our work encourages the community to continue the rich line of Neural Collapse research, while also considering its inherent limitations. ","Limitations of Neural Collapse for Understanding Generalization in Deep
Learning",6,"['New paper, short and sweet:\n""Limitations of Neural Collapse for Understanding Generalization in Deep Learning""\n\nwith Like Hui, Misha Belkin.\n\nNeural collapse is often claimed to be, in some way, deeply relevant for generalization. But is it? 1/3 ', 'The literature is muddled because most work does not carefully distinguish between behavior on the train set (an *optimization* property) vs. behavior at test time (a *generation* property).\n\nWe clarify these issues by introducing more precise definitions of ""Neural Collapse"" 2/3 https://t.co/JZsr1bgMau', ""From these definitions, it's clear that Train Collapse may occur, but Test Collapse is often impossible. \n\nThus Neural Collapse is primarily an *optimization* phenomena, with unclear connections to generalization. See paper for more, incl cases where Collapse is undesirable. 3/3 https://t.co/dZqVjhIelz"", '@machinaut Good question, we argue in the paper that the right definition of ""collapse"" should be finite-sample, but infinite-time.\n(Because for infinite both, things just converge to bayes optimal, and are not interesting). https://t.co/xUqI5ySou3', ""@machinaut I don't know what happens to the representation-layer in grokking -- but now that you mention it, this sounds like a great experiment to try."", '@XYHan_ @GalantiTomer @weijie444 Thanks for the nice comments! And for the additional refs— I’ll take a look.']",22,02,1325
357,216,1375311361316634624,2647128003,Mogens Fosgerau,"A perturbed utility route choice model We propose a model in which a utility maximizing traveler assigns flow across an entire network under a flow conservation constraint. 1/ #EconTwitter 2/ Substitution between routes depends on how much they overlap. This model can be estimated from route choice data, where the full set of route alternatives is included and no choice set generation is required. 3/3 Nevertheless, estimation requires only linear regression and is very fast. Predictions from the model can be computed using convex optimization and is straightforward even for large networks. @JRehbeck @ERC_Research @cykelnorden @ThomasKjrRasmu2",https://arxiv.org/abs/2103.13784,"We propose a route choice model in which traveler behavior is represented as a utility maximizing assignment of flow across an entire network under a flow conservation constraint}. Substitution between routes depends on how much they overlap. {\tr The model is estimated considering the full set of route alternatives, and no choice set generation is required. Nevertheless, estimation requires only linear regression and is very fast. Predictions from the model can be computed using convex optimization, and computation is straightforward even for large networks. We estimate and validate the model using a large dataset comprising 1,337,096 GPS traces of trips in the Greater Copenhagen road network. ",A perturbed utility route choice model,6,"['A perturbed utility route choice model\n\nWe propose a model in which a utility maximizing traveler assigns flow across an entire network under a flow conservation constraint. 1/ \n #EconTwitter', '2/ Substitution between routes depends on how much they overlap. This model can be estimated from route choice data, where the full set of route alternatives is included and no choice set generation is required.', '3/3 Nevertheless, estimation requires only linear regression and is very fast. Predictions from the model can be computed using convex optimization and is straightforward even for large networks.', '@JRehbeck', '@ERC_Research', '@cykelnorden @ThomasKjrRasmu2']",21,03,657
358,221,1247441503565348865,776322668509429761,Svenja Boberg,"Our study on “Pandemic Populism” is out today with my awesome colleagues @Kudusch, @thorstenquandt & @Lenafrescamente! We analyzed Corona-related Fb-posts of alternative news media and the actors, topics, fake news and conspiracy theories they addressed: ",https://arxiv.org/abs/2004.02566,"The COVID-19 pandemic has not only had severe political, economic, and societal effects, it has also affected media and communication systems in unprecedented ways. While traditional journalistic media has tried to adapt to the rapidly evolving situation, alternative news media on the Internet have given the events their own ideological spin. Such voices have been criticized for furthering societal confusion and spreading potentially dangerous ""fake news"" or conspiracy theories via social media and other online channels. The current study analyzes the factual basis of such fears in an initial computational content analysis of alternative news media's output on Facebook during the early Corona crisis, based on a large German data set from January to the second half of March 2020. Using computational content analysis, methods, reach, interactions, actors, and topics of the messages were examined, as well as the use of fabricated news and conspiracy theories. The analysis revealed that the alternative news media stay true to message patterns and ideological foundations identified in prior research. While they do not spread obvious lies, they are predominantly sharing overly critical, even anti-systemic messages, opposing the view of the mainstream news media and the political establishment. With this pandemic populism, they contribute to a contradictory, menacing, and distrusting worldview, as portrayed in detail in this analysis. ","Pandemic Populism: Facebook Pages of Alternative News Media and the
Corona Crisis -- A Computational Content Analysis",1,"['Our study on “Pandemic Populism” is out today with my awesome colleagues @Kudusch, @thorstenquandt & @Lenafrescamente! We analyzed Corona-related Fb-posts of alternative news media and the actors, topics, fake news and conspiracy theories they addressed: ']",20,04,261
359,139,1271709221206151168,1037195648636989442,Hidenori Tanaka,"Q. Can we find winning lottery tickets, or sparse trainable deep networks at initialization without ever looking at data? A. Yes, by conserving ""Synaptic Flow"" via our new SynFlow algorithm. co-led with Daniel Kunin & @dyamins, @SuryaGanguli paper: 1/ We can potentially reduce the cost of training if we can prune neural networks at initialization. The key challenge is ""layer-collapse,"" the premature pruning of an entire layer making a network untrainable. 2/ To better understand the phenomena, we first mathematically formulate and experimentally verify a conservation law. This conservation law explains why existing gradient-based pruning algorithms at initialization suffer from layer-collapse. 3/ We then hypothesize that the conservative scoring combined with ""iterative"" re-evaluation can avoid layer collapse. This insight also explains how iterative magnitude pruning avoids layer-collapse to identify ""winning-lottery ticket ""subnetworks at initialization. 4/ We prove that layer-collapse can be entirely avoided by designing an algorithm with iterative, positive, conservative scoring. We design SynFlow satisfying the key requirements and show that it reaches the theoretical limit of max compression without collapsing a network. 5/ Notably, SynFlow makes no reference to the training data and consistently outperforms existing state-of-the-art pruning algorithms at initialization on 12 distinct combinations of models and datasets. 6/ Overall, our data-agnostic pruning algorithm challenges the existing paradigm that data must be used to quantify which synapses are important. Please check out the paper for more details 7/ @tingwuc Yes, we are working to incorporate them into our codebase. In the meantime, this paper did very careful work on how pruning at initialization methods (SNIP, GraSP) compare with ""train-prune"" methods, including IMP and others. @xaqlab Thank you for the question. SynFlow naturally avoids layer-bottlenecking that starts well before the eventual collapse. This is why we see a significant gain in performance compared to other methods that don't reach max compression.",http://arxiv.org/abs/2006.05467,"Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization. This raises a foundational question: can we identify highly sparse trainable subnetworks at initialization, without ever training, or indeed without ever looking at the data? We provide an affirmative answer to this question through theory driven algorithm design. We first mathematically formulate and experimentally verify a conservation law that explains why existing gradient-based pruning algorithms at initialization suffer from layer-collapse, the premature pruning of an entire layer rendering a network untrainable. This theory also elucidates how layer-collapse can be entirely avoided, motivating a novel pruning algorithm Iterative Synaptic Flow Pruning (SynFlow). This algorithm can be interpreted as preserving the total flow of synaptic strengths through the network at initialization subject to a sparsity constraint. 
Notably, this algorithm makes no reference to the training data and consistently competes with or outperforms existing state-of-the-art pruning algorithms at initialization over a range of models (VGG and ResNet), datasets (CIFAR-10/100 and Tiny ImageNet), and sparsity constraints (up to 99.99 percent). Thus our data-agnostic pruning algorithm challenges the existing paradigm that, at initialization, data must be used to quantify which synapses are important. ","Pruning neural networks without any data by iteratively conserving
synaptic flow",9,"['Q. Can we find winning lottery tickets, or sparse trainable deep networks at initialization without ever looking at data?\n\nA. Yes, by conserving ""Synaptic Flow"" via our new SynFlow algorithm.\n\nco-led with Daniel Kunin\n& @dyamins, @SuryaGanguli\n\npaper: \n1/ ', 'We can potentially reduce the cost of training if we can prune neural networks at initialization.\n\nThe key challenge is ""layer-collapse,"" the premature pruning of an entire layer making a network untrainable.\n2/ https://t.co/hTB5jEdeuD', 'To better understand the phenomena, we first mathematically formulate and experimentally verify a conservation law.\n\nThis conservation law explains why existing gradient-based pruning algorithms at initialization suffer from layer-collapse.\n3/ https://t.co/a8jBLZ5Uoh', 'We then hypothesize that the conservative scoring combined with ""iterative"" re-evaluation can avoid layer collapse. \n\nThis insight also explains how iterative magnitude pruning avoids layer-collapse to identify ""winning-lottery ticket ""subnetworks at initialization.\n4/ https://t.co/kl7sXpTHN2', 'We prove that layer-collapse can be entirely avoided by designing an algorithm with iterative, positive, conservative scoring.\n\nWe design SynFlow satisfying the key requirements and show that it reaches the theoretical limit of max compression without collapsing a network.\n5/ https://t.co/3eLdvdifXB', 'Notably, SynFlow makes no reference to the training data and consistently outperforms existing state-of-the-art\npruning algorithms at initialization on 12 distinct combinations of models and datasets.\n\n6/ https://t.co/q7KjKMK3pd', 'Overall, our data-agnostic pruning algorithm challenges the existing paradigm that data must be used to quantify which synapses are important.\n\nPlease check out the paper for more details\nhttps://t.co/AAHQchcRKC\n7/', '@tingwuc Yes, we are working to incorporate them into our codebase.\n\nIn the meantime, this paper https://t.co/b5KoA2vPnk did very careful work on how pruning at initialization methods (SNIP, GraSP) compare with ""train-prune"" methods, including IMP and others.', ""@xaqlab Thank you for the question.\nSynFlow naturally avoids layer-bottlenecking that starts well before the eventual collapse.\nThis is why we see a significant gain in performance compared to other methods that don't reach max compression.""]",20,06,2183
360,117,1239928547369799681,2332157006,Federico Bianchi,"New paper about e-commerce in NLP (workshop at @TheWebConf)! See how we combine product representations, language modeling and images to support type-ahead personalization in e-commerce! With @christineyyuu, @jacopotagliabue and @GreCo_CiRo from @coveo. @vibhavagarwal5 @debora_nozza @TheWebConf @christineyyuu @jacopotagliabue @GreCo_CiRo @coveo Thanks a lot :) :) :)",https://arxiv.org/abs/2003.07160,"We address the problem of personalizing query completion in a digital commerce setting, in which the bounce rate is typically high and recurring users are rare. We focus on in-session personalization and improve a standard noisy channel model by injecting dense vectors computed from product images at query time. We argue that image-based personalization displays several advantages over alternative proposals (from data availability to business scalability), and provide quantitative evidence and qualitative support on the effectiveness of the proposed methods. Finally, we show how a shared vector space between similar shops can be used to improve the experience of users browsing across sites, opening up the possibility of applying zero-shot unsupervised personalization to increase conversions. This will prove to be particularly relevant to retail groups that manage multiple brands and/or websites and to multi-tenant SaaS providers that serve multiple clients in the same space. ","""An Image is Worth a Thousand Features"": Scalable Product
Representations for In-Session Type-Ahead Personalization",2,"['New paper about e-commerce in NLP (workshop at @TheWebConf)! See how we combine product representations, language modeling and images to support type-ahead personalization in e-commerce! With @christineyyuu, @jacopotagliabue and @GreCo_CiRo from @coveo. ', '@vibhavagarwal5 @debora_nozza @TheWebConf @christineyyuu @jacopotagliabue @GreCo_CiRo @coveo Thanks a lot :) :) :)']",20,03,389
361,331,1312120926985576449,446694758,Julian Eisenschlos,"Entailment has been studied in depth for textual premises, but the case with structured data like tables or even HTML can have many applications in the wild. We tackle this in our latest #EMNLP2020 Findings paper with Syrine Krichene and @muelletm 1/5 We extend TAPAS (Herzig et al, 2020), originally pretrained with MLM, to predict if a table entails or refutes a sentence and eval on TabFact (Chen et al, 2020). We introduce 2 novel pretraining binary-classification tasks called Counterfactual and Synthetic, shown in image. 2/5 Counterfactual examples are created by swapping entities that appear in both a table and a sentence for a plausible alternative: they are realistic but simple. Synthetic ones are sampled from a small pCFG based on the values of a real table: they improve numerical reasoning. 3/5 Pretraining with these 2 tasks, we improve SOTA by ~10pts on TabFact and, interestingly, also get a new SOTA on the table QA task SQA (Iyyer et al, 2017). This results hold even with fraction of the data, and is only 2 points below a strong baseline with no data at all! 4/5 Finally, we investigate how to deal with large tables by selecting which parts of the input to pass through the model using simple heuristics. We can can get 2x speed-ups with ~1pt acc drop, or 4x still above prior art. Code and models coming soon at 5/5",http://arxiv.org/abs/2010.00571,"Table entailment, the binary classification task of finding if a sentence is supported or refuted by the content of a table, requires parsing language and table structure as well as numerical and discrete reasoning. While there is extensive work on textual entailment, table entailment is less well studied. We adapt TAPAS (Herzig et al., 2020), a table-based BERT model, to recognize entailment. Motivated by the benefits of data augmentation, we create a balanced dataset of millions of automatically created training examples which are learned in an intermediate step prior to fine-tuning. This new data is not only useful for table entailment, but also for SQA (Iyyer et al., 2017), a sequential table QA task. To be able to use long examples as input of BERT models, we evaluate table pruning techniques as a pre-processing step to drastically improve the training and prediction efficiency at a moderate drop in accuracy. The different methods set the new state-of-the-art on the TabFact (Chen et al., 2020) and SQA datasets. ",Understanding tables with intermediate pre-training,5,"['Entailment has been studied in depth for textual premises, but the case with structured data like tables or even HTML can have many applications in the wild. \n\nWe tackle this in our latest #EMNLP2020 Findings paper with Syrine Krichene and @muelletm\n\n1/5', 'We extend TAPAS (Herzig et al, 2020), originally pretrained with MLM, to predict if a table entails or refutes a sentence and eval on TabFact (Chen et al, 2020). We introduce 2 novel pretraining binary-classification tasks called Counterfactual and Synthetic, shown in image.\n\n2/5 https://t.co/4s84vzcut1', 'Counterfactual examples are created by swapping entities that appear in both a table and a sentence for a plausible alternative: they are realistic but simple. 
Synthetic ones are sampled from a small pCFG based on the values of a real table: they improve numerical reasoning.\n\n3/5 https://t.co/f17pVpWqiu', 'Pretraining with these 2 tasks, we improve SOTA by ~10pts on TabFact and, interestingly, also get a new SOTA on the table QA task SQA (Iyyer et al, 2017). This results hold even with fraction of the data, and is only 2 points below a strong baseline with no data at all!\n\n4/5 https://t.co/RAakCIRKGn', 'Finally, we investigate how to deal with large tables by selecting which parts of the input to pass through the model using simple heuristics. We can can get 2x speed-ups with ~1pt acc drop, or 4x still above prior art. Code and models coming soon at https://t.co/1VxCSrSunr\n\n5/5']",20,10,1377
362,158,1499097939435667457,1010536067886387200,Ananya Kumar,"How should you fine-tune a large pretrained model (CLIP, SimCLR) robustly? We find that standard fine-tuning can do poorly out-of-distribution (test data ≠ fine-tuning data). Our analysis leads to a simple fix, higher accuracy on 10 datasets. (ICLR Oral) (2/n) Joint work with Aditi Raghunathan, @rmjones96, and my advisors @tengyuma and @percyliang (3/n) We find that full fine-tuning (updating all model parameters) can be worse than linear probing (updating only the last layer) on out-of-distribution test examples, when the distribution shift is large and the pretrained features are good (4/n) We prove theoretically that this phenomenon arises even in simple and natural settings. One line explanation: while full fine-tuning learns the head, the lower layers of the neural network change simultaneously and distort the pretrained features (5/n) This suggests the easy two-step strategy of linear probing then full fine-tuning (LP-FT). Intuition: head doesn't change as much, so features get distorted less (6/n) LP-FT gives large gains OOD: 10% better OOD, 1% better ID than full fine-tuning. Also outperforms linear probing both ID and OOD (7/n) Caption for Figure in Tweet 1/n: (a) full fine-tuning does better in-distribution (ID), (b) linear probing can do better out-of-distribution (OOD), (c) LP-FT does better on both, especially OOD (8/n) This work is part of a broader trend (e.g., prompt tuning, composed fine-tuning, prefix tuning), where tuning a small part of a pretrained model can be better than full fine-tuning, especially for robustness @CyrusMaher Yup! And to clarify we cited this and other papers, and mention in our abstract + intro that LP-FT is sometimes used as a fine-tuning heuristic (though not for robustness). Hopefully our analysis popularizes it, and explains when it can be particularly useful (OOD)",http://arxiv.org/abs/2202.10054,"When transferring a pretrained model to a downstream task, two popular methods are full fine-tuning (updating all the model parameters) and linear probing (updating only the last linear layer -- the ""head""). It is well known that fine-tuning leads to better accuracy in-distribution (ID). However, in this paper, we find that fine-tuning can achieve worse accuracy than linear probing out-of-distribution (OOD) when the pretrained features are good and the distribution shift is large. On 10 distribution shift datasets (Breeds-Living17, Breeds-Entity30, DomainNet, CIFAR $\to$ STL, CIFAR10.1, FMoW, ImageNetV2, ImageNet-R, ImageNet-A, ImageNet-Sketch), fine-tuning obtains on average 2% higher accuracy ID but 7% lower accuracy OOD than linear probing. We show theoretically that this tradeoff between ID and OOD accuracy arises even in a simple setting: fine-tuning overparameterized two-layer linear networks. We prove that the OOD error of fine-tuning is high when we initialize with a fixed or random head -- this is because while fine-tuning learns the head, the lower layers of the neural network change simultaneously and distort the pretrained features. Our analysis suggests that the easy two-step strategy of linear probing then full fine-tuning (LP-FT), sometimes used as a fine-tuning heuristic, combines the benefits of both fine-tuning and linear probing. Empirically, LP-FT outperforms both fine-tuning and linear probing on the above datasets (1% better ID, 10% better OOD than full fine-tuning). ","Fine-Tuning can Distort Pretrained Features and Underperform
Out-of-Distribution",9,"['How should you fine-tune a large pretrained model (CLIP, SimCLR) robustly? We find that standard fine-tuning can do poorly out-of-distribution (test data ≠ fine-tuning data). Our analysis leads to a simple fix, higher accuracy on 10 datasets. (ICLR Oral) ', '(2/n) Joint work with Aditi Raghunathan, @rmjones96, and my advisors @tengyuma and @percyliang', '(3/n) We find that full fine-tuning (updating all model parameters) can be worse than linear probing (updating only the last layer) on out-of-distribution test examples, when the distribution shift is large and the pretrained features are good', '(4/n) We prove theoretically that this phenomenon arises even in simple and natural settings. One line explanation: while full fine-tuning learns the head, the lower layers of the neural network change simultaneously and distort the pretrained features', ""(5/n) This suggests the easy two-step strategy of linear probing then full fine-tuning (LP-FT). Intuition: head doesn't change as much, so features get distorted less"", '(6/n) LP-FT gives large gains OOD: 10% better OOD, 1% better ID than full fine-tuning. Also outperforms linear probing both ID and OOD', '(7/n) Caption for Figure in Tweet 1/n: (a) full fine-tuning does better in-distribution (ID), (b) linear probing can do better out-of-distribution (OOD), (c) LP-FT does better on both, especially OOD', '(8/n) This work is part of a broader trend (e.g., prompt tuning, composed fine-tuning, prefix tuning), where tuning a small part of a pretrained model can be better than full fine-tuning, especially for robustness', '@CyrusMaher Yup! And to clarify we cited this and other papers, and mention in our abstract + intro that LP-FT is sometimes used as a fine-tuning heuristic (though not for robustness). Hopefully our analysis popularizes it, and explains when it can be particularly useful (OOD)']",22,02,1854
363,173,1473457226601963523,1444381431195648000,Felix A. Palm,"In our latest work, we propose a way to extract the central charge from snapshots. We also discuss other signatures for the bosonic Laughlin state in coupled chains. Special thanks to @aBohrdt for her advice on creating even nicer figures ;) ",https://arxiv.org/abs/2112.10763,"Experimental realizations of topologically ordered states of matter, such as fractional quantum Hall states, with cold atoms are now within reach. In particular, optical lattices provide a promising platform for the realization and characterization of such states, where novel detection schemes enable an unprecedented microscopic understanding. Here we show that the central charge can be directly measured in current cold atom experiments using the number entropy as a proxy for the entanglement entropy. We perform density-matrix renormalization-group simulations of Hubbard-interacting bosons on coupled chains subject to a magnetic field with $\alpha=\frac{1}{4}$ flux quanta per plaquette. Tuning the inter-chain hopping, we find a transition from a trivial quasi-one dimensional phase to the topologically ordered Laughlin state at magnetic filling factor $\nu=\frac{1}{2}$ for systems of three or more chains. We resolve the transition using the central charge, on-site correlations, momentum distributions and the many-body Chern number. Additionally, we propose a scheme to experimentally estimate the central charge from Fock basis snapshots. The model studied here is experimentally realizable with existing cold atom techniques and the proposed observables pave the way for the detection and classification of a larger class of interacting topological states of matter. ","Snapshot-based detection of $\frac{1}{2}$-Laughlin states: coupled
chains and central charge",1,"['In our latest work, we propose a way to extract the central charge from snapshots. We also discuss other signatures for the bosonic Laughlin state in coupled chains. \n\nSpecial thanks to @aBohrdt for her advice on creating even nicer figures ;) ']",21,12,255
364,151,1388033706967904258,268337552,Nicolas Kourtellis,"Preprint of our new paper proposing the 1st Privacy-Preserving Federated Learning Framework with TEEs, accepted @ACMMobiSys 2021, here: @VincentMo6,@realhamed,@minoskt,@_EduardMarin_,@Diego_Perino Powered by @TEFresearch,@concordiah2020, @accordion_h2020 ",https://arxiv.org/abs/2104.14380,"We propose and implement a Privacy-preserving Federated Learning ($PPFL$) framework for mobile systems to limit privacy leakages in federated learning. Leveraging the widespread presence of Trusted Execution Environments (TEEs) in high-end and mobile devices, we utilize TEEs on clients for local training, and on servers for secure aggregation, so that model/gradient updates are hidden from adversaries. Challenged by the limited memory size of current TEEs, we leverage greedy layer-wise training to train each model's layer inside the trusted area until its convergence. The performance evaluation of our implementation shows that $PPFL$ can significantly improve privacy while incurring small system overheads at the client-side. In particular, $PPFL$ can successfully defend the trained model against data reconstruction, property inference, and membership inference attacks. Furthermore, it can achieve comparable model utility with fewer communication rounds (0.54$\times$) and a similar amount of network traffic (1.002$\times$) compared to the standard federated learning of a complete model. This is achieved while only introducing up to ~15% CPU time, ~18% memory usage, and ~21% energy consumption overhead in $PPFL$'s client-side. ","PPFL: Privacy-preserving Federated Learning with Trusted Execution
Environments",1,"['Preprint of our new paper proposing the 1st Privacy-Preserving Federated Learning Framework with TEEs, accepted @ACMMobiSys 2021, here: \n@VincentMo6,@realhamed,@minoskt,@_EduardMarin_,@Diego_Perino \nPowered by @TEFresearch,@concordiah2020,\n@accordion_h2020 ']",21,04,268
365,24,1177382708164644866,907621053547126785,Pratyush Tiwary,"Preprint of new review just submitted to Current Opinion in Structural Biology. Not the easiest review to write given word limits & a field changing so rapidly. Please let us know if we missed something crucial you did so mayb we can update actual paper @MicheleCeriotti @JimPfaendtner Thanks, this is exactly why we put this up as a preprint @olexandr awwwwww. but look @olexandr how could I have forgotten you and @adrian_roitberg ? ",https://arxiv.org/abs/1909.11748,"Molecular dynamics (MD) has become a powerful tool for studying biophysical systems, due to increasing computational power and availability of software. Although MD has made many contributions to better understanding these complex biophysical systems, there remain methodological difficulties to be surmounted. First, how to make the deluge of data generated in running even a microsecond long MD simulation human comprehensible. Second, how to efficiently sample the underlying free energy surface and kinetics. In this short perspective, we summarize machine learning based ideas that are solving both of these limitations, with a focus on their key theoretical underpinnings and remaining challenges. ","Machine learning approaches for analyzing and enhancing molecular
dynamics simulations",3,"['Preprint of new review just submitted to Current Opinion in Structural Biology. Not the easiest review to write given word limits & a field changing so rapidly. Please let us know if we missed something crucial you did so mayb we can update actual paper ', '@MicheleCeriotti @JimPfaendtner Thanks, this is exactly why we put this up as a preprint', '@olexandr awwwwww. but look @olexandr how could I have forgotten you and @adrian_roitberg ? https://t.co/UumRLB6fIZ']",19,09,448
366,141,1475577901324115976,850415526602059777,Vikram Dwarkadas,"Our paper on long-term study of 3 new ULXs in NGC 891, including a new Chandra source, accepted. . Work started by undergrad Victoria Cirillo (Fordham), then taken over and completed by Nicholas Earley (UChicago). Nicholas is now applying to grad school. ",https://arxiv.org/abs/2112.12212,"We perform empirical fits to the \emph{Chandra} and \emph{XMM-Newton} spectra of three ultraluminous X-ray sources (ULXs) in the edge-on spiral galaxy NGC 891, monitoring the region over a seventeen year time window. One of these sources has been visible since the early 1990s with \emph{ROSAT} and has been observed multiple times with \emph{Chandra} and \emph{XMM-Newton}. Another has been visible since 2011. We build upon prior analyses of these sources by analyzing all available data at all epochs. Where possible \emph{Chandra} data is used, since its superior spatial resolution allows for more effective isolation of the emission from each individual source, thus providing a better determination of their spectral properties. We also identify a new transient ULX, CXOU J022230.1+421937, which faded from view over the course of a two month period from Nov 2016 to Jan 2017. Modeling of each source at every epoch was conducted using six different models ranging from thermal bremsstrahlung to accretion disk models. Unfortunately, but as is common with many ULXs, no single model yielded a much better fit than the others. The two known sources had unabsorbed luminosities that remained fairly consistent over five or more years. Various possibilities for the new transient ULX are explored. ",A Long-term Study of Ultraluminous X-ray Sources in NGC 891,1,"['Our paper on long-term study of 3 new ULXs in NGC 891, including a new Chandra source, accepted. . Work started by undergrad Victoria Cirillo (Fordham), then taken over and completed by Nicholas Earley (UChicago). Nicholas is now applying to grad school. ']",21,12,267
367,116,1007618336329551873,933084565895286786,Dan Hooper,"(1/6) I just put out a new paper, in which I indulge in some wide-eyed futurism. Let me walk you though the idea. @physicspod Sure, I'd be happy to chat. @physicspod Twitter is fine, but feel free to use email if you prefer. @dwsNY @JenLucPiquant Guilty!",https://arxiv.org/abs/1806.05203,"The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would chose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of $M\sim (0.2-1) M_{\odot}$, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting. ","Life Versus Dark Energy: How An Advanced Civilization Could Resist the
Accelerating Expansion of the Universe",4,"['(1/6) I just put out a new paper, in which I indulge in some wide-eyed futurism. Let me walk you though the idea.\n ', ""@physicspod Sure, I'd be happy to chat."", '@physicspod Twitter is fine, but feel free to use email if you prefer.', '@dwsNY @JenLucPiquant Guilty!']",18,06,268
368,133,1257293116345921537,17373048,Rodrigo Nemmen,"New paper out: Jet efficiencies and black hole spins in jetted quasars, where we estimate quasar powers from @NASAFermi observations, constrain how efficiently BHs convert accreted mass into outflows and estimate how fast they are spinning 1/4 Gamma-ray luminosities correlate with BH masses, so one could try to use gamma photons to have an idea of how massive a jetted quasar is. Good way to cross-check virial estimates 2/4 Here we plot the lower limit on the jet efficiencies (what comes out/what goes in) as a function of BH mass. A few of the blazars are ""overpowered"" meaning that according to existing BH models they should not be possible: those four points above the line 3/4 Finally, here is a distribution of BH spin lower limits for the sample. The spin is one of the two fundamental parameters of BH spacetimes besides mass (charge is unimportant). These guys are, unsurprisingly, rotating fast! 4/4 paper led by @outflows (very appropriate user name!)",https://arxiv.org/abs/2005.00381,"The mechanisms responsible for the production of relativistic jets from supermassive black holes (SMBHs) accreting at near-Eddington rates are not well-understood. Simple theoretical expectations indicate that SMBHs in quasars accrete via thin discs which should produce at most very weak jets. This is contradicted by observations of powerful jets in flat-spectrum radio quasars (FSRQs). We use gamma-ray luminosities observed with the \emph{fermi} Large Area Telescope as a proxy of the jet power for a population of 154 FSRQs. Assuming typical quasar accretion rates and using black hole mass measurements from a variety of methods, we find a mean jet production efficiency of about 10 per cent for FSRQs, with values as high as 222 per cent. We find that this is consistent with FSRQs hosting moderately thin, magnetically arrested accretion discs around rapidly spinning black holes (BHs). Modeling our observations using general relativistic magnetohydrodynamic (GRMHD) simulations of jets from thin discs, we find an average lower limit of $a_* = 0.59$ for the SMBH spins of FSRQs, with tendency for the spins to decrease as the black hole mass increases. Our results are consistent with the merger-driven evolution of SMBHs. 3 per cent of the sample cannot be explained by current GRMHD models of jet production from Kerr BHs due to the high efficiencies. Along the way, we find a correlation between BH masses and $L_\gamma$ which may be an useful mass estimator in blazar gamma-ray studies. ",Jet efficiencies and black hole spins in jetted quasars,5,"['New paper out: Jet efficiencies and black hole spins in jetted quasars, \nwhere we estimate quasar powers from @NASAFermi observations, constrain how efficiently BHs convert accreted mass into outflows and estimate how fast they are spinning 1/4', 'Gamma-ray luminosities correlate with BH masses, so one could try to use gamma photons to have an idea of how massive a jetted quasar is. Good way to cross-check virial estimates 2/4 https://t.co/n0ZmQACUVR', 'Here we plot the lower limit on the jet efficiencies (what comes out/what goes in) as a function of BH mass. A few of the blazars are ""overpowered"" meaning that according to existing BH models they should not be possible: those four points above the line 3/4 https://t.co/zCJotZT3xm', 'Finally, here is a distribution of BH spin lower limits for the sample. The spin is one of the two fundamental parameters of BH spacetimes besides mass (charge is unimportant). 
These guys are, unsurprisingly, rotating fast! 4/4 https://t.co/mZpHwuZvf3', 'paper led by @outflows (very appropriate user name!)']",20,05,994
369,99,1291626371081547776,159796963,Manuel Burghardt,"Our (@poke8192) new OCR paper ""On the Accuracy of CRNNs for Line-Based OCR: A Multi-Parameter Evaluation"" is out now! We question the role of binarization and found that 10k lines for training are usually quite sufficient. Any feedback appreciated!",https://arxiv.org/abs/2008.02777,"We investigate how to train a high quality optical character recognition (OCR) model for difficult historical typefaces on degraded paper. Through extensive grid searches, we obtain a neural network architecture and a set of optimal data augmentation settings. We discuss the influence of factors such as binarization, input line height, network width, network depth, and other network training parameters such as dropout. Implementing these findings into a practical model, we are able to obtain a 0.44% character error rate (CER) model from only 10,000 lines of training data, outperforming currently available pretrained models that were trained on more than 20 times the amount of data. We show ablations for all components of our training pipeline, which relies on the open source framework Calamari. ","On the Accuracy of CRNNs for Line-Based OCR: A Multi-Parameter
Evaluation",1,"['Our (@poke8192) new OCR paper ""On the Accuracy of CRNNs for Line-Based OCR: A Multi-Parameter Evaluation"" is out now! We question the role of binarization and found that 10k lines for training are usually quite sufficient. Any feedback appreciated!']",20,08,255
370,15,1209747112042409984,96253726,Navin Sridhar,"Holiday special! New paper out – ! This is a paper that I started working on during my undergrad @CaltechSURF project with @jaj_garcia, which later grew on to become a wonderful collaboration with @vicgrinberg, Jack Steiner, Riley Connors among others (1/8) In this paper, we investigate the evolution of certain properties of black hole accretion disk/corona viz., inner disk radius, ionization parameter, temperatures of—inner disk, corona, and its optical depth, etc., across the bright hard to soft state transition of GX 339–4 (2/8) By employing a set of relativistic reflection models, we deduce that the inner disk truncation radius approaches R_in~ISCO during the early onset of bright hard state, and the disk inner edge remains small (<9 Gravitational radii) throughout the hard to soft state transition(3/8) We compare the disk properties (mentioned in 2/8) between outbursts with state transitions occurring at *different luminosities*, and find identical evolutionary trends in the disk properties (including R_in~ISCO), with differences seen only in the temp. and optical depth (4/8) By applying a self-consistent Comptonized accretion disk model accounting for the scatter of disk photons by corona, we find R_in~ISCO, using the temperature dependent values of spectral hardening factor—thereby independently confirming our results from reflection analysis (5/8) With the inner disk barely moving towards the black hole during the bright hard to soft state transition, the changes seen in the disk/coronal properties can be attributed to factors like coronal compactification, increase in accretion rate, spectral hardening factor, etc. (6/8) In the end, we also establish that for ~Kerr black holes, data from RXTE/PCA along with relxill family of relativistic reflection models is capable of discerning Fe K fluorescent features, narrow enough to being able to constrain disk inner radius as large as R_in~120*ISCO (7/8) For a broader discussion of even more model parameters, and for a detailed analysis procedure with the underlying statistical footing (MCMC), please go through our paper: . Feel free to share this work, and your questions/comments if any :) (8/8)",https://arxiv.org/abs/1912.11447,"We present the analysis of several observations of the black hole binary GX 339--4 during its bright intermediate states from two different outbursts (2002 and 2004), as observed by RXTE/PCA. We perform a consistent study of its reflection spectrum by employing the relxill family of relativistic reflection models to probe the evolutionary properties of the accretion disk including the inner disk radius ($R_{\rm in}$), ionization parameter ($\xi$), temperatures of the inner disk ($T_{\rm in}$), corona ($kT_{\rm e}$), and its optical depth ($\tau$). Our analysis indicates that the disk inner edge approaches the inner-most stable circular orbit (ISCO) during the early onset of bright hard state, and that the truncation radius of the disk remains low ($\lesssim 14 R_{\rm g}$) throughout the transition from hard to soft state. This suggests that the changes observed in the accretion disk properties during the state transition are driven by variation in accretion rate, and not necessarily due to changes in the inner disk's radius. 
We compare the aforementioned disk properties in two different outbursts, with state transitions occurring at dissimilar luminosities, and find identical evolutionary trends in the disk properties, with differences only seen in corona's $kT_{\rm e}$ and $\tau$. We also perform an analysis by employing a self-consistent Comptonized accretion disk model accounting for the scatter of disk photons by the corona, and measure low inner disk truncation radius across the bright intermediate states, using the temperature dependent values of spectral hardening factor, thereby independently confirming our results from the reflection spectrum analysis. ","Evolution of the accretion disk-corona during bright hard-to-soft state
transition: A reflection spectroscopic study with GX 339-4",8,"['Holiday special!\nNew paper out – ! This is a paper that I started working on during my undergrad @CaltechSURF project with @jaj_garcia, which later grew on to become a wonderful collaboration with @vicgrinberg, Jack Steiner, Riley Connors among others (1/8)', 'In this paper, we investigate the evolution of certain properties of black hole accretion disk/corona viz., inner disk radius, ionization parameter, temperatures of—inner disk, corona, and its optical depth, etc., across the bright hard to soft state transition of GX 339–4 (2/8)', 'By employing a set of relativistic reflection models, we deduce that the inner disk truncation radius approaches R_in~ISCO during the early onset of bright hard state, and the disk inner edge remains small (<9 Gravitational radii) throughout the hard to soft state transition(3/8)', 'We compare the disk properties (mentioned in 2/8) between outbursts with state transitions occurring at *different luminosities*, and find identical evolutionary trends in the disk properties (including R_in~ISCO), with differences seen only in the temp. and optical depth (4/8)', 'By applying a self-consistent Comptonized accretion disk model accounting for the scatter of disk photons by corona, we find R_in~ISCO, using the temperature dependent values of spectral hardening factor—thereby independently confirming our results from reflection analysis (5/8)', 'With the inner disk barely moving towards the black hole during the bright hard to soft state transition, the changes seen in the disk/coronal properties can be attributed\nto factors like coronal compactification, increase in accretion rate, spectral hardening factor, etc. (6/8)', 'In the end, we also establish that for ~Kerr black holes, data from RXTE/PCA along with relxill family of relativistic reflection models is capable of discerning Fe K fluorescent features, narrow enough to being able to constrain disk inner radius as large as R_in~120*ISCO (7/8)', 'For a broader discussion of even more model parameters, and for a detailed analysis procedure with the underlying statistical footing (MCMC), please go through our paper: https://t.co/Js9xltjc6B. Feel free to share this work, and your questions/comments if any :) (8/8)']",19,12,2199
371,115,1067370794911756289,426509606,Yamir Moreno,"In our last work, out today (), we dissect the dynamics of collective social behavior in a crowd-controlled game (Twitch Plays Pokémon). We found both crowd and swarm like behaviors. Work with A. Aleta. @dgarcia_eu @ciro Yes, that's the data we used!",https://arxiv.org/abs/1811.09730,"Despite many efforts, the behavior of a crowd is not fully understood. The advent of modern communication media has made it an even more challenging problem, as crowd dynamics could be driven by both human-to-human and human-technology interactions. Here, we study the dynamics of a crowd controlled game (Twitch Plays Pok\'emon), in which nearly a million players participated during more than two weeks. We dissect the temporal evolution of the system dynamics along the two distinct phases that characterized the game. We find that players who do not follow the crowd average behavior are key to succeed in the game. The latter finding can be well explained by an n-$th$ order Markov model that reproduces the observed behavior. Secondly, we analyze a phase of the game in which players were able to decide between two different modes of playing, mimicking a voting system. Our results suggest that under some conditions, the collective dynamics can be better regarded as a swarm-like behavior instead of a crowd. Finally, we discuss our findings in the light of the social identity theory, which appears to describe well the observed dynamics. ",Collective social behavior in a crowd controlled game,2,"['In our last work, out today (), we dissect the dynamics of collective social behavior in a crowd-controlled game (Twitch Plays Pokémon). We found both crowd and swarm like behaviors. Work with A. Aleta. ', ""@dgarcia_eu @ciro Yes, that's the data we used!""]",18,11,263
372,166,1279060276189581313,1661813766,Mehdi Kamani,"Our new paper is out! We investigate the impacts of Compression on #FederatedLearning and present FedCOMGATE, which improves SOTA. We provide sharp guarantees under different settings in FL with Compression. @Farzinhaddadpou @AryanMokhtari @mehrdadmahdavi We show that even without compression, our algorithm matches the SOTA results with no extra control variable. We provide guarantees for general nonconvex, PL/strongly convex, and general convex objective functions, as well as, homogeneous and heterogenous data distributions. We accompanied our theoretical results with extensive experimental results. We investigate both Quantization and Sparsification as the compression method in our algorithm. #Quantization #Sparsification #Compression We will have a major code release in the coming weeks for our Distributed Learning and Federated Learning setups, including FedCOMGATE and many more. Stay tuned. For getting updates you can follow my GitHub handle: #MachineLearning #DistributedOptimization @aminkarbasi @AryanMokhtari @Farzinhaddadpou @mehrdadmahdavi Thanks 🙏",https://arxiv.org/abs/2007.01154,"In federated learning, communication cost is often a critical bottleneck to scale up distributed optimization algorithms to collaboratively learn a model from millions of devices with potentially unreliable or limited communication and heterogeneous data distributions. Two notable trends to deal with the communication overhead of federated algorithms are gradient compression and local computation with periodic communication. Despite many attempts, characterizing the relationship between these two approaches has proven elusive. We address this by proposing a set of algorithms with periodical compressed (quantized or sparsified) communication and analyze their convergence properties in both homogeneous and heterogeneous local data distribution settings. For the homogeneous setting, our analysis improves existing bounds by providing tighter convergence rates for both strongly convex and non-convex objective functions. To mitigate data heterogeneity, we introduce a local gradient tracking scheme and obtain sharp convergence rates that match the best-known communication complexities without compression for convex, strongly convex, and nonconvex settings. We complement our theoretical results and demonstrate the effectiveness of our proposed methods by several experiments on real-world datasets. ","Federated Learning with Compression: Unified Analysis and Sharp
Guarantees",5,"['Our new paper is out! We investigate the impacts of Compression on #FederatedLearning and present FedCOMGATE, which improves SOTA. We provide sharp guarantees under different settings in FL with Compression. \n@Farzinhaddadpou @AryanMokhtari @mehrdadmahdavi ', 'We show that even without compression, our algorithm matches the SOTA results with no extra control variable. We provide guarantees for general nonconvex, PL/strongly convex, and general convex objective functions, as well as, homogeneous and heterogenous data distributions.', 'We accompanied our theoretical results with extensive experimental results. We investigate both Quantization and Sparsification as the compression method in our algorithm.\n#Quantization #Sparsification #Compression https://t.co/OYVzNyFprN', 'We will have a major code release in the coming weeks for our Distributed Learning and Federated Learning setups, including FedCOMGATE and many more. Stay tuned. For getting updates you can follow my GitHub handle: https://t.co/9Tei2W1QXt\n#MachineLearning #DistributedOptimization', '@aminkarbasi @AryanMokhtari @Farzinhaddadpou @mehrdadmahdavi Thanks 🙏']",20,07,1101
373,42,1309224864276975616,4365927557,Dr. Jake Turner 🌅,"****New Paper Alert** Today my group at @Cornell @CSInst (@AstroAndrew123, @DrRayJay, & I) published a new paper using @NASA_TESS data ""TESS Observations of the Hot Jupiter Exoplanet XO-6b: No Evidence of Transit Timing Variations"" THREAD 1/8 XO-6b is a typical hot Jupiter that orbits a F5V-type star. Previous ground-based observations by Garai et al. 2020 () find transit timing variations (TTvs) with an amplitude of 14 min & period of 450 days 2/8 Inspired by the possible TTVs from XO-6b, we looked at the system with NASA's TESS (@NASA_TESS) mission. TESS is perfect for this study because it has really good photometric and timing precision. More on the timing verification from TESS: 2/8 The @NASA_TESS light curves of XO-6b were exquisite in precision allowing for us to characterize the system with much greater detail than ever before. 3/8 We fit the individual and combined light curves with EXOMOP, a transit fitting code I developed in my PhD. All the fits individual fits were consistent with each other. More details on code: 4/8 The main result of our paper: - We find no evidence for TTVs: we can rule out TTVs > 2.5 minutes at the 3σ level. - We rule out the previous claim of TTVs by 10σ 5/8 The cause of the tension between our results & those of Garai et al. (2020) is not clear but it may be due to unknown timing errors in their ground-based data. - A few of the smaller TTVs could be related to barycentric corrections, the larger ones must have other causes. 6/8 Careful absolute telescope clock calibrations are important to adequately schedule future atmospheric characterization observations on JWST, etc.. Our study shows we need to be careful cause ground-based telescopes will definitely play a role (see ) 7/8 In conclusion: Our findings highlight @NASA_TESS's capabilities for robust follow-up, and confirm that TTVs are rarely seen in hot Jupiters, unlike is the case with small planets. You can find the paper free here: 8/8",https://arxiv.org/abs/2009.10781,"From previous ground-based observations, the hot Jupiter exoplanet XO-6b was reported to exhibit apparently periodic transit timing variations (TTVs), with a semi-amplitude of 14 minutes and a period of about 450 days. These variations were interpreted as being due to a resonant perturbation between XO-6b and a hitherto unknown low-mass planet orbiting the same star. To understand this enigmatic planetary system better, we analysed three sectors of data, spanning over seven months, from the Transiting Exoplanet Survey Satellite (TESS), which produces high-quality light curves that are well suited to characterizing exoplanets and searching for TTVs. Here we present an updated orbital period of 3.7649893 $\pm$ 0.0000037 days and a transit epoch of 2456652.7157 $\pm$ 0.0022 BJD$_{TDB}$. The planetary parameters we report, while consistent with their discovery values, have greatly improved precision. Notably, we find no evidence for TTVs: we can rule out TTVs $\gtrsim$ 2.5 minutes at the 3$\sigma$ level. Therefore, the TESS data have sufficient precision and time baseline to reveal readily the previously reported TTVs of approximately 10 minutes. Our findings highlight TESS's capabilities for robust follow-up, and confirm that TTVs are rarely seen in hot Jupiters, unlike is the case with small planets. ","TESS Observations of the Hot Jupiter Exoplanet XO-6b: No Evidence of
Transit Timing Variations",9,"['****New Paper Alert** \n\nToday my group at @Cornell @CSInst (@AstroAndrew123, @DrRayJay, & I) published a new paper using @NASA_TESS data \n\n""TESS Observations of the Hot Jupiter Exoplanet XO-6b: No Evidence of Transit Timing Variations"" \n\n\n\nTHREAD 1/8 ', 'XO-6b is a typical hot Jupiter that orbits a F5V-type star.\n\nPrevious ground-based observations by Garai et al. 2020 (https://t.co/hHQWrbQTM1) find transit timing variations (TTvs) with an amplitude of 14 min & period of 450 days\n\n2/8 https://t.co/6BSYRupp2u', ""Inspired by the possible TTVs from XO-6b, we looked at the system with NASA's TESS (@NASA_TESS) mission. \n\nTESS is perfect for this study because it has really good photometric and timing precision. \n\nMore on the timing verification from TESS: https://t.co/fmyIMPQCK6 2/8 https://t.co/8qLcDXeK3w"", 'The @NASA_TESS light curves of XO-6b were exquisite in precision allowing for us to characterize the system with much greater detail than ever before. 3/8 https://t.co/XTeqpqkY3G', 'We fit the individual and combined light curves with EXOMOP, a transit fitting code I developed in my PhD. All the fits individual fits were consistent with each other. \n\nMore details on code: https://t.co/P1dJrGCzS3\n 4/8 https://t.co/g9FkJPU7ac', 'The main result of our paper: \n- We find no evidence for TTVs: we can rule out TTVs > 2.5 minutes at the 3σ level. \n- We rule out the previous claim of TTVs by 10σ \n5/8 https://t.co/cX5fQ4CEnG', 'The cause of the tension between our results & those of Garai et al. (2020) is not clear but it may be due to unknown timing errors in their ground-based data. \n\n- A few of the smaller TTVs could be related to barycentric corrections, the larger ones must have other causes.\n\n6/8 https://t.co/QXXIeSKx9r', 'Careful absolute telescope clock calibrations are important to adequately schedule future atmospheric characterization observations on JWST, etc..\n\nOur study shows we need to be careful cause ground-based telescopes will definitely play a role (see https://t.co/0YheFgnuDd)\n7/8', ""In conclusion: \nOur findings highlight @NASA_TESS's capabilities for robust follow-up, and confirm that TTVs are rarely seen in hot Jupiters, unlike is the case with small planets. \n\nYou can find the paper free here: https://t.co/E1gDd0vzxm 8/8""]",20,09,2061
374,95,1405100087555104768,302547719,Craig Glastonbury,"Our new paper is now out on Arxiv! Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness. A project started by Adam Foster a fantastic intern last year at @benevolent_ai who continued to work with us on this. In the paper we demonstrate that counterfactual inference, batch correction, data integration and learning fair representations in a CVAE framework, can all be seen as learning representations that are conditionally independent of a covariate. Previous methods have tried to tackle this problem, such as the CVAE, FairVAE & TrVAE from @fabian_theis lab. Whilst the CVAE does condition on a known label (c), either the encoder or decoder can choose to ignore this, leading to a latent space that separates on condition (c) TrVAE and FairVAE do better than CVAE by introducing an MMD penalty. For CoMP, inspired by the VaMP prior and contrastive learning, we remove the need for any external discrepancy metric (e.g MMD) and use mixtures of the variational posterior alone, demonstrating better mixing! We apply CoMP to three problems: 1. Aligning cancer cell lines with tumors (RNA-seq data - Celligner problem). CoMP can align latent representations of cancer cell lines and their tumour equivalents. CoMP preserves tumour type and subtype clustering and has better sensitivity. 2. Counterfactual inference. We use PBMC scRNA-seq data treated and untreated with IFNg to ask: ""What would an untreated cell look like if perturbed with IFNg?"" CoMP successfully infers this - Whilst other methods overestimate underexpression or underestimate over expression. 3. Fair representation learning Whilst solving these biological problems, we noticed parallels with fair representation learning and demonstrate that we can learn fair representations of biased income data that are invariant to Sex, yet still expressive (able to predict income). The paper has many other neat findings including nice theoretical results demonstrating that CoMP is actually equivalent to an upper bound on a weighted sum of KL-divergences p(c)KL[q(z|c) | q(z|c')]. Check it out: @jmtomczak VamP prior getting some love.",https://arxiv.org/abs/2106.08161,"Learning meaningful representations of data that can address challenges such as batch effect correction and counterfactual inference is a central problem in many domains including computational biology. Adopting a Conditional VAE framework, we show that marginal independence between the representation and a condition variable plays a key role in both of these challenges. We propose the Contrastive Mixture of Posteriors (CoMP) method that uses a novel misalignment penalty defined in terms of mixtures of the variational posteriors to enforce this independence in latent space. We show that CoMP has attractive theoretical properties compared to previous approaches and we prove counterfactual identifiability of CoMP under additional assumptions. We demonstrate state of the art performance on a set of challenging tasks including aligning human tumour samples with cancer cell-lines, predicting transcriptome-level perturbation responses, and batch correction on single-cell RNA sequencing data. We also find parallels to fair representation learning and demonstrate that CoMP is competitive on a common task in the field. ","Contrastive Mixture of Posteriors for Counterfactual Inference, Data
Integration and Fairness",9,"['Our new paper is now out on Arxiv! Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness. A project started by Adam Foster a fantastic intern last year at @benevolent_ai who continued to work with us on this.', 'In the paper we demonstrate that counterfactual inference, batch correction, data integration and learning fair representations in a CVAE framework, can all be seen as learning representations that are conditionally independent of a covariate.', 'Previous methods have tried to tackle this problem, such as the CVAE, FairVAE & TrVAE from @fabian_theis lab. Whilst the CVAE does condition on a known label (c), either the encoder or decoder can choose to ignore this, leading to a latent space that separates on condition (c)', 'TrVAE and FairVAE do better than CVAE by introducing an MMD penalty. For CoMP, inspired by the VaMP prior and contrastive learning, we remove the need for any external discrepancy metric (e.g MMD) and use mixtures of the variational posterior alone, demonstrating better mixing! https://t.co/2FhCSLCYUy', 'We apply CoMP to three problems:\n\n1. Aligning cancer cell lines with tumors (RNA-seq data - Celligner problem). CoMP can align latent representations of cancer cell lines and their tumour equivalents. CoMP preserves tumour type and subtype clustering and has better sensitivity. https://t.co/09N22aDq5U', '2. Counterfactual inference.\nWe use PBMC scRNA-seq data treated and untreated with IFNg to ask: ""What would an untreated cell look like if perturbed with IFNg?"" CoMP successfully infers this - Whilst other methods overestimate underexpression or underestimate over expression. https://t.co/bntzSJP4Ri', '3. Fair representation learning \nWhilst solving these biological problems, we noticed parallels with fair representation learning and demonstrate that we can learn fair representations of biased income data that are invariant to Sex, yet still expressive (able to predict income). https://t.co/BHoLqhR1fC', ""The paper has many other neat findings including nice theoretical results demonstrating that CoMP is actually equivalent to an upper bound on a weighted sum of KL-divergences p(c)KL[q(z|c) | q(z|c')].\nCheck it out: https://t.co/7WSnrz8Qvf"", '@jmtomczak VamP prior getting some love.']",21,06,2182
375,0,1148352637689049090,48712353,Sungjin Ahn 🇺🇦,"Glad to introduce our new paper on ""Sequential Neural Processes"" by Gautam Singh, Jaesik Yoon et. al! This is a meta-transfer learning framework. We demonstrate that it can model dynamic 3d scenes using temporal GQN! 3D multi-objects 2d moving object 1d regression ",https://arxiv.org/abs/1906.10264,"Neural Processes combine the strengths of neural networks and Gaussian processes to achieve both flexible learning and fast prediction in stochastic processes. However, a large class of problems comprises underlying temporal dependency structures in a sequence of stochastic processes that Neural Processes (NP) do not explicitly consider. In this paper, we propose Sequential Neural Processes (SNP) which incorporates a temporal state-transition model of stochastic processes and thus extends its modeling capabilities to dynamic stochastic processes. In applying SNP to dynamic 3D scene modeling, we introduce the Temporal Generative Query Networks. To our knowledge, this is the first 4D model that can deal with the temporal dynamics of 3D scenes. In experiments, we evaluate the proposed methods in dynamic (non-stationary) regression and 4D scene inference and rendering. ",Sequential Neural Processes,4,"['Glad to introduce our new paper on ""Sequential Neural Processes"" by Gautam Singh, Jaesik Yoon et. al! This is a meta-transfer learning framework. We demonstrate that it can model dynamic 3d scenes using temporal GQN!\n\n ', '3D multi-objects https://t.co/crpXDd633l', '2d moving object https://t.co/UchwSY6RAo', '1d regression https://t.co/H5jyjFsMzR']",19,06,306
376,133,1356420877114572802,1092114266952531968,Graeme Addison,"My new paper looking at Hubble constant constraints from CMB E-mode data sets: Recently, the @SPTelescope paper Dutcher+ showed that Planck, ACTPol, SPTpol & SPT-3G all get higher H0 in EE than the Planck TT LCDM constraint (see their Fig 13). [1/6] What happens if we *combine* the different EE spectra? Does this reinforce the preference for a higher H0? If yes, perhaps it could be some clue that to resolve the Hubble tension we want a model that departs from LCDM more strongly in temperature than polarization. [2/6] I ran the fit… and found that combining Planck EE + ACTPol EE + SPTpol EE actually gives 68.7 +/- 1.3 km/s/Mpc, 2.4 sigma lower than the latest SH0ES distance ladder (73.2 +/- 1.3). So how can you combine three values that are all >=70 and get 68.7? [3/6] The answer lies in the different degeneracy directions across the full LCDM param space (look at n_s vs Obh2), related to sensitivity to different multipole ranges. To reach a consensus on Obh2 between Planck and ACTPol/SPTpol you end up shifting lower in H0, n_s. [4/6] Are the constraints from different EE data sets consistent with one another? Yes - difference at most 1.4 sigma across the LCDM space (see Table 2). Also consistent with Planck TT LCDM at 0.8 sigma. [5/6] NB - The SPT-3G likelihood isn’t public yet, but since the degeneracy directions are going to be pretty similar between SPTpol and SPT-3G EE I expect that combining Planck EE + SPT-3G EE will similarly lead to a lower H0 / lower n_s. [6/6]",https://arxiv.org/abs/2102.00028,"The E-mode (EE) CMB power spectra measured by Planck, ACTPol, and SPTpol constrain the Hubble constant to be $70.0\pm2.7$, $72.4^{+3.9}_{-4.8}$, and $73.1^{+3.3}_{-3.9}$ km s$^{-1}$ Mpc$^{-1}$ within the standard $\Lambda$CDM model (posterior mean and central 68% interval bounds). These values are higher than the constraints from the Planck temperature (TT) power spectrum, and consistent with the Cepheid-supernova distance ladder measurement $H_0=73.2\pm1.3$ km s$^{-1}$ Mpc$^{-1}$. If this preference for a higher value was strengthened in a joint analysis it could provide an intriguing hint at the resolution of the Hubble disagreement. We show, however, that combining the Planck, ACTPol, and SPTpol EE likelihoods yields $H_0=68.7\pm1.3$ km s$^{-1}$ Mpc$^{-1}$, $2.4\sigma$ lower than the distance ladder measurement. This is due to different degeneracy directions across the full parameter space, particularly involving the baryon density, $\Omega_bh^2$, and scalar tilt, $n_s$, arising from sensitivity to different multipole ranges. We show that the E-mode $\Lambda$CDM constraints are consistent across the different experiments within $1.4\sigma$, and with the Planck TT results at $0.8\sigma$. Combining the Planck, ACTPol, and SPTpol EE data constrains the phenomenological lensing amplitude, $A_L=0.89\pm0.10$, consistent with the expected value of unity. ","High $H_0$ Values from CMB E-mode Data: A Clue for Resolving the Hubble
Tension?",6,"['My new paper looking at Hubble constant constraints from CMB E-mode data sets: \n\nRecently, the @SPTelescope paper Dutcher+ showed that Planck, ACTPol, SPTpol & SPT-3G all get higher H0 in EE than the Planck TT LCDM constraint (see their Fig 13). [1/6] ', 'What happens if we *combine* the different EE spectra? Does this reinforce the preference for a higher H0? If yes, perhaps it could be some clue that to resolve the Hubble tension we want a model that departs from LCDM more strongly in temperature than polarization. [2/6]', 'I ran the fit… and found that combining Planck EE + ACTPol EE + SPTpol EE actually gives 68.7 +/- 1.3 km/s/Mpc, 2.4 sigma lower than the latest SH0ES distance ladder (73.2 +/- 1.3).\n\nSo how can you combine three values that are all >=70 and get 68.7? [3/6]', 'The answer lies in the different degeneracy directions across the full LCDM param space (look at n_s vs Obh2), related to sensitivity to different multipole ranges. To reach a consensus on Obh2 between Planck and ACTPol/SPTpol you end up shifting lower in H0, n_s. [4/6] https://t.co/m620SQwkiN', 'Are the constraints from different EE data sets consistent with one another? Yes - difference at most 1.4 sigma across the LCDM space (see Table 2). Also consistent with Planck TT LCDM at 0.8 sigma. [5/6] https://t.co/fOeSK26c9b', 'NB - The SPT-3G likelihood isn’t public yet, but since the degeneracy directions are going to be pretty similar between SPTpol and SPT-3G EE I expect that combining Planck EE + SPT-3G EE will similarly lead to a lower H0 / lower n_s. [6/6]']",21,02,1525
377,114,1334962057964367873,753984416293216256,Michele Celebrano @sNOm Lab,This is the most challenging paper I wrote. Many results to be cross-checked and no knowledge about graphene to start. But also a great experience that allowed me to learn new things (I hope)... this is thanks to an incredible crew of people! @GrapheneEU @michebad @GrapheneEU @GrapheneUCam @CerulloLab @polini_marco Noooo... I would say it was a pleasant off-piste detour. Though quite difficult given my ignorance of the field. But on Wednesday I am already measuring another 2D material. But always just for fun... Nothing serious! 😉,https://arxiv.org/abs/2012.01779,"Graphene is an ideal material for integrated nonlinear optics thanks to its strong light-matter interaction and large nonlinear optical susceptibility. Graphene has been used in optical modulators, saturable absorbers, nonlinear frequency converters, and broadband light emitters. For the latter application, a key requirement is the ability to control and engineer the emission wavelength and bandwidth, as well as the electronic temperature of graphene. Here, we demonstrate that the emission wavelength of graphene$'$ s broadband hot carrier photoluminescence can be tuned by integration on photonic cavities, while thermal management can be achieved by out-of-plane heat transfer to hexagonal boron nitride. Our results pave the way to graphene-based ultrafast broadband light emitters with tunable emission. ",Tunable broadband light emission from graphene,2,"['This is the most challenging paper I wrote. Many results to be cross-checked and no knowledge about graphene to start. But also a great experience that allowed me to learn new things (I hope)... this is thanks to an incredible crew of people! @GrapheneEU ', '@michebad @GrapheneEU @GrapheneUCam @CerulloLab @polini_marco Noooo... I would say it was a pleasant off-piste detour. Though quite difficult given my ignorance of the field. But on Wednesday I am already measuring another 2D material. But always just for fun... Nothing serious! 😉']",20,12,534
378,75,987272269612318720,218250514,Heiko Hamann,"our new #GECCO2018 paper A Robot to Shape your Natural Plant complex #ML approach, #LSTM network as plant model, evolutionary computation to train a controller, a real plant does collision avoidance #robotics #evolution #plants #ai ",https://arxiv.org/abs/1804.06682,"Bio-hybrid systems---close couplings of natural organisms with technology---are high potential and still underexplored. In existing work, robots have mostly influenced group behaviors of animals. We explore the possibilities of mixing robots with natural plants, merging useful attributes. Significant synergies arise by combining the plants' ability to efficiently produce shaped material and the robots' ability to extend sensing and decision-making behaviors. However, programming robots to control plant motion and shape requires good knowledge of complex plant behaviors. Therefore, we use machine learning to create a holistic plant model and evolve robot controllers. As a benchmark task we choose obstacle avoidance. We use computer vision to construct a model of plant stem stiffening and motion dynamics by training an LSTM network. The LSTM network acts as a forward model predicting change in the plant, driving the evolution of neural network robot controllers. The evolved controllers augment the plants' natural light-finding and tissue-stiffening behaviors to avoid obstacles and grow desired shapes. We successfully verify the robot controllers and bio-hybrid behavior in reality, with a physical setup and actual plants. ","A Robot to Shape your Natural Plant: The Machine Learning Approach to
Model and Control Bio-Hybrid Systems",1,"['our new #GECCO2018 paper\nA Robot to Shape your Natural Plant\ncomplex #ML approach, #LSTM network as plant model, evolutionary computation to train a controller, a real plant does collision avoidance\n\n\n\n#robotics #evolution #plants #ai ']",18,04,252
379,123,1249504000979931140,15327263,Carl-Johan Haster,"New paper led by @sylvia_bisco (together with me, @sasomao and Jonathan Davies who visited @MITKavli from @ImperialPhysics last summer). We look at assumptions of known noise behaviour in GW data, and present a method to account for these uncertainties. ",https://arxiv.org/abs/2004.05149,"In order to perform Bayesian parameter estimation to infer the source properties of gravitational waves from compact binary coalescences (CBCs), the noise characteristics of the detector must be understood. It is typically assumed that the detector noise is stationary and Gaussian, characterized by a power spectral density (PSD) that is measured with infinite precision. We present a new method to incorporate the uncertainty in the power spectral density estimation into the Bayesian inference of the binary source parameters and apply it to the first 11 CBC detections reported by the LIGO- Virgo Collaboration. We find that incorporating the PSD uncertainty only leads to variations in the positions and widths of the binary parameter posteriors on the order of a few percent. Our results are publicly available for download on git [1]. ","Quantifying the Effect of Power Spectral Density Uncertainty on
Gravitational-Wave Parameter Estimation for Compact Binary Sources",1,"['New paper led by @sylvia_bisco (together with me, @sasomao and Jonathan Davies who visited @MITKavli from @ImperialPhysics last summer).\nWe look at assumptions of known noise behaviour in GW data, and present a method to account for these uncertainties.\n']",20,04,260
380,40,966127721360355328,1214528593,Miles Brundage,"arXiv copy of our new paper, ""The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,"" by 26 authors at 14 institutions: 🤖🤔🧐 @AntonioGrzt I’m unfortunately not but CCing colleagues in case they are @HaydnBelfield @CSERCambridge @HeidyKhlaaf Thanks! That'd be awesome. I'm in Oxford most of the time but also drop by London occasionally :)",https://arxiv.org/abs/1802.07228,"This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders. ","The Malicious Use of Artificial Intelligence: Forecasting, Prevention,
and Mitigation",3,"['arXiv copy of our new paper, ""The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,"" by 26 authors at 14 institutions: \n\n🤖🤔🧐', '@AntonioGrzt I’m unfortunately not but CCing colleagues in case they are @HaydnBelfield @CSERCambridge', ""@HeidyKhlaaf Thanks! That'd be awesome. I'm in Oxford most of the time but also drop by London occasionally :)""]",18,02,376
381,146,1402366621419720711,1311974157853261824,Mher Safaryan,"New paper on Smoothness-Aware Quantization Techniques (), which improves upon the results obtained for sparsification and broadens the use of smoothness matrices in communication efficient distributed methods. Joint work w/ Bokun Wang and @peter_richtarik. ",https://arxiv.org/abs/2106.03524,"Distributed machine learning has become an indispensable tool for training large supervised machine learning models. To address the high communication costs of distributed training, which is further exacerbated by the fact that modern highly performing models are typically overparameterized, a large body of work has been devoted in recent years to the design of various compression strategies, such as sparsification and quantization, and optimization algorithms capable of using them. Recently, Safaryan et al (2021) pioneered a dramatically different compression design approach: they first use the local training data to form local {\em smoothness matrices}, and then propose to design a compressor capable of exploiting the smoothness information contained therein. While this novel approach leads to substantial savings in communication, it is limited to sparsification as it crucially depends on the linearity of the compression operator. In this work, we resolve this problem by extending their smoothness-aware compression strategy to arbitrary unbiased compression operators, which also includes sparsification. Specializing our results to quantization, we observe significant savings in communication complexity compared to standard quantization. In particular, we show theoretically that block quantization with $n$ blocks outperforms single block quantization, leading to a reduction in communication complexity by an $\mathcal{O}(n)$ factor, where $n$ is the number of nodes in the distributed system. Finally, we provide extensive numerical evidence that our smoothness-aware quantization strategies outperform existing quantization schemes as well the aforementioned smoothness-aware sparsification strategies with respect to all relevant success measures: the number of iterations, the total amount of bits communicated, and wall-clock time. ",Smoothness-Aware Quantization Techniques,1,"['New paper on Smoothness-Aware Quantization Techniques (), which improves upon the results obtained for sparsification and broadens the use of smoothness matrices in communication efficient distributed methods. \nJoint work w/ Bokun Wang and @peter_richtarik. ']",21,06,276
382,91,1227512201579257857,1196494921,Steve McCormick,"Woo!! Slipped one through with the ol' cross-list switcharoo: (New paper on arXiv today, with Po-Ning Chen: Quasi-local Penrose inequalities with electric charge) @gregeganSF I'm so glad I'm not the only one 😂 And all dates are (yy)yymmdd in Sweden, so I should really be used to it by now... @CreeepyJoe Oh yeh, I forgot there was a big UCR contingency here on mathtwitter! :)",https://arxiv.org/abs/2002.04557,"The Riemannian Penrose inequality is a remarkable geometric inequality between the ADM mass of an asymptotically flat manifold with non-negative scalar curvature and the area of its outermost minimal surface. A version of the Riemannian Penrose inequality has also been established for the Einstein-Maxwell equations, where the lower bound on the mass also depends on the electric charge. In the context of quasi-local mass, one is interested in determining if, and for which quasi-local mass definitions, a quasi-local version of these inequalities also holds. It is known that the Brown-York quasi-local mass satisfies a quasi-local Riemannian Penrose inequality, however in the context of the Einstein-Maxwell equations, one expects that a quasi-local Riemannian Penrose inequality should also include a contribution from the electric charge. This article builds on ideas of Lu and Miao and of the first-named author to prove some charged quasi-local Penrose inequalities for a class of compact manifolds with boundary. In particular, we impose that the boundary is isometric to a closed surface in a suitable Reissner-Nordstr\""om manifold, which serves as a reference manifold for the quasi-local mass that we work with. In the case where the reference manifold has zero mass and non-zero electric charge, the lower bound on quasi-local mass is exactly the lower bound on the ADM mass given by the charged Riemannian Penrose inequality. ",Quasi-local Penrose inequalities with electric charge,3,"[""Woo!! Slipped one through with the ol' cross-list switcharoo: \n\n(New paper on arXiv today, with Po-Ning Chen: Quasi-local Penrose inequalities with electric charge) "", ""@gregeganSF I'm so glad I'm not the only one 😂\n\nAnd all dates are (yy)yymmdd in Sweden, so I should really be used to it by now..."", '@CreeepyJoe Oh yeh, I forgot there was a big UCR contingency here on mathtwitter! :)']",20,02,391
383,128,1501623912920199173,3209362451,Burton Lab,"New paper from our group that uses supervised machine learning to extract forces from dynamics. We use real, noisy experimental data: the 3D motion of micron-sized particles in a dusty plasma. The effort was led by graduate student Wentao Yu! ",https://arxiv.org/abs/2203.03740,"Extracting environmental forces from noisy data is a common yet challenging task in complex physical systems. Machine learning represents a robust approach to this problem, yet is mostly tested on simulated data with known parameters. Here we use supervised machine learning to extract the electrostatic, hydrodynamic, and stochastic forces acting on micron-sized charged particles levitated in an argon plasma. Trained on simulated particle trajectories using more than 100 dynamical and statistical features, the model predicts system parameters with 50\% better accuracy than conventional methods, and provides non-contact measurements of the particle charge and Debye length. ",Extracting Forces from Noisy Dynamics in Dusty Plasmas,1,"['New paper from our group that uses supervised machine learning to extract forces from dynamics. We use real, noisy experimental data: the 3D motion of micron-sized particles in a dusty plasma. The effort was led by graduate student Wentao Yu!\n ']",22,03,256
384,208,1278500500841721858,314395154,Tengyu Ma,"DL models tend to struggle with heteroskedastic and imbalanced datasets, where long-tailed labels have varying levels of uncertainty, partly bc it's hard to distinguish mislabeled, ambiguous, and rare examples. We propose a new regularization technique: The main principle is to regularize more strongly for those data that are rare and noisy. Joint work with @caokd8888, Yining Chen, @lu_junwei, Nikos Arechiga, @adnothing. Conceptually, this can be viewed as a follow-up work of our last year NeurIPS paper on learning imbalanced datasets . In this paper, we need to deal with the interaction of the heteroskedasticity and imbalance more carefully. @huyhcmut1997 Thanks for the comments/questions. Yes, the idea can also be used for regression. Actually, the demonstrating examples in the intro are for the regression setting. To use it for regression, one can take equation (6) as the objective and choose $\tau_i=\simga_i^a/q_i^b$ @huyhcmut1997 where a,b > 0 are constants and \sigma_i is the estimated standard deviation of the noise for that example (e.g., a=4/5, b=2/5 would the best choice predicted by the theory, but in fact, we think the exact choice does not matter that much as long as you tune lambda.) @huyhcmut1997 We will add a section on it in the next revision! Thanks! @chupvl @huyhcmut1997 Thanks for the comments. Hopefully my answer above to @huyhcmut also answers your question?",http://arxiv.org/abs/2006.15766,"Real-world large-scale datasets are heteroskedastic and imbalanced -- labels have varying levels of uncertainty and label distributions are long-tailed. Heteroskedasticity and imbalance challenge deep learning algorithms due to the difficulty of distinguishing among mislabeled, ambiguous, and rare examples. Addressing heteroskedasticity and imbalance simultaneously is under-explored. We propose a data-dependent regularization technique for heteroskedastic datasets that regularizes different regions of the input space differently. Inspired by the theoretical derivation of the optimal regularization strength in a one-dimensional nonparametric classification setting, our approach adaptively regularizes the data points in higher-uncertainty, lower-density regions more heavily. We test our method on several benchmark tasks, including a real-world heteroskedastic and imbalanced dataset, WebVision. Our experiments corroborate our theory and demonstrate a significant improvement over other methods in noise-robust deep learning. ","Heteroskedastic and Imbalanced Deep Learning with Adaptive
Regularization",7,"[""DL models tend to struggle with heteroskedastic and imbalanced datasets, where long-tailed labels have varying levels of uncertainty, partly bc it's hard to distinguish mislabeled, ambiguous, and rare examples. We propose a new regularization technique: "", 'The main principle is to regularize more strongly for those data that are rare and noisy. Joint work with @caokd8888, Yining Chen, @lu_junwei, Nikos Arechiga, @adnothing.', 'Conceptually, this can be viewed as a follow-up work of our last year NeurIPS paper on learning imbalanced datasets https://t.co/IHbopWrwCD. In this paper, we need to deal with the interaction of the heteroskedasticity and imbalance more carefully.', '@huyhcmut1997 Thanks for the comments/questions. Yes, the idea can also be used for regression. Actually, the demonstrating examples in the intro are for the regression setting. To use it for regression, one can take equation (6) as the objective and choose $\\tau_i=\\simga_i^a/q_i^b$', '@huyhcmut1997 where a,b > 0 are constants and \\sigma_i is the estimated standard deviation of the noise for that example (e.g., a=4/5, b=2/5 would the best choice predicted by the theory, but in fact, we think the exact choice does not matter that much as long as you tune lambda.)', '@huyhcmut1997 We will add a section on it in the next revision! Thanks!', '@chupvl @huyhcmut1997 Thanks for the comments. Hopefully my answer above to @huyhcmut also answers your question?']",20,06,1425
385,116,1138658726716399616,18262687,Rushil,"New preprint: We find that decoupling domain alignment from the final task improves domain adaptation. A simple subspace based alignment consistently outperforms adversarial DA like CDAN etc. Exciting work from @kowshik0808, @jjayaram7 & @pturaga1 Paper: ",https://arxiv.org/abs/1906.04338,"Unsupervised domain adaptation aims to transfer and adapt knowledge learned from a labeled source domain to an unlabeled target domain. Key components of unsupervised domain adaptation include: (a) maximizing performance on the target, and (b) aligning the source and target domains. Traditionally, these tasks have either been considered as separate, or assumed to be implicitly addressed together with high-capacity feature extractors. When considered separately, alignment is usually viewed as a problem of aligning data distributions, either through geometric approaches such as subspace alignment or through distributional alignment such as optimal transport. This paper represents a hybrid approach, where we assume simplified data geometry in the form of subspaces, and consider alignment as an auxiliary task to the primary task of maximizing performance on the source. The alignment is made rather simple by leveraging tractable data geometry in the form of subspaces. We synergistically allow certain parameters derived from the closed-form auxiliary solution, to be affected by gradients from the primary task. The proposed approach represents a unique fusion of geometric and model-based alignment with gradients from a data-driven primary task. Our approach termed SALT, is a simple framework that achieves comparable or sometimes outperforms state-of-the-art on multiple standard benchmarks. ","SALT: Subspace Alignment as an Auxiliary Learning Task for Domain
Adaptation",1,"['New preprint: We find that decoupling domain alignment from the final task improves domain adaptation. A simple subspace based alignment consistently outperforms adversarial DA like CDAN etc. Exciting work from @kowshik0808, @jjayaram7 & @pturaga1\n\nPaper: ']",19,06,268
386,173,1400478101855817730,171674815,Mark Marley,"I want to highlight our new paper, led by @exoEhsan, with @NatashaBatalha and @ChannonVisscher about Lithium in brown dwarfs: In low mass stars and the most massive brown dwarfs Lithium is lost to fusion in the core. Li is a little bit easier to 'burn' than H but not as easy as deuterium. So while the minimum mass to steadily burn H is around 75 Jupiter masses, the limit for Li burning is around 65 M_J. So brown dwarfs and stars older than around 250 Myr and more massive than 65 MJ do not have Li visible in their atmospheres. There is a strong Li line in the optical that makes Li detection feasible. This led Rafael Rebolo in the 90s to suggest the ""Li test"" as a way of distinguishing brown dwarfs from stars. If you could detect Li you knew you had to have a brown dwarf on your hand since it would be below 65 MJ. For objects which were hot enough to be either this was helpful If you have a cooler object, say one that has CH4, you KNOW you don't have a star on your hands, so the Li test seems less important. Also atomic Li goes into various other molecules and is removed, thus muddying the waters. However there aren't many spectral gravity indicators for brown dwarfs and it would be nice to have some marker for the most massive objects, those between ~65 MJ and ~75 MJ. Missing Li would serve nicely, but you have the chemistry removing Li as well as the nuclear fires. So the question becomes, can we detect the other Li molecules that show up at lower Teff? If so then we have a new lithium test for identifying the most massive brown dwarfs. But we need molecular opacities for species such as LiH, LiF, LiOH, and LiCl. This is where Ehsan comes in. As part of his NPP postdoc at Ames Ehsan computed himself some of these opacities and compiled others so that we could investigate the detectability of the various Li species, allowing us to follow the lithium through the brown dwarf cooling sequence You can read the paper for details, but some of these other Li species should be detectable in the mid-IR. Unfortunately the molecular signatures are more subtle than the atomic Li feature, but the 30m telescopes and perhaps JWST should be able to search for them. I'm hopeful that someday Li could be used to resolve some puzzles, such as the mass of Gl 229 B. Special thanks to Channon for handling all of the Li-species chemistry for us. Our spectra were computed with PICASO.",https://arxiv.org/abs/2106.00781,"Lithium is an important element for the understanding of ultracool dwarfs because it is lost to fusion at masses above $\sim 68\, M_{\rm J}$. Hence, the presence or absence of atomic Li has served as an indicator of the nearby H-burning boundary at about $75\,M_{\rm J}$ between brown-dwarfs and very low-mass stars. Historically the ""Lithium test"", a search for the presence and strength of the Li line at 670.8 nm, has been a marker if an object has a substellar mass with stellar-like spectral energy distribution (e.g., a late-type M dwarf). While the Li test could in principle also be used to distinguish masses of later-type L-T dwarfs, Li is predominantly no longer found as an atomic gas, but rather a molecular species such as LiH, LiF, LiOH, and LiCl in their cooler atmospheres. L- and T-type brown dwarfs are also quite faint at 670 nm and thus challenging targets for high resolution spectroscopy. But only recently have experimental molecular line lists become available for the molecular Li species, allowing molecular Li mass discrimination. 
In this study, we generated the latest opacity of each of these Li-bearing molecules and performed thermochemical equilibrium atmospheric composition calculation of the abundance of these molecules. Finally, we computed thermal emission spectra for a series of radiative-convective equilibrium models of cloudy and cloudless brown dwarf atmospheres (with $T_{\rm eff}=$ 500--2400~K, and $\log g$=4.0, 4.5, 5.0) to understand where the presence or absence of atmospheric lithium-bearing species is most easily detected as a function of brown dwarf mass and age. After atomic Li, the best spectral signatures were found to be LiF at $10.5-12.5$~\micron and LiCl at $14.5-18.5$ $\micron$. LiH also shows a narrow feature at $\sim 9.38$ $\micron$. ","Following the Lithium: Tracing Li-bearing Molecules Across Age, Mass,
and Gravity in Brown Dwarfs",10,"['I want to highlight our new paper, led by @exoEhsan, with @NatashaBatalha and @ChannonVisscher about Lithium in brown dwarfs: ', ""In low mass stars and the most massive brown dwarfs Lithium is lost to fusion in the core. Li is a little bit easier to 'burn' than H but not as easy as deuterium. So while the minimum mass to steadily burn H is around 75 Jupiter masses, the limit for Li burning is around 65 M_J."", 'So brown dwarfs and stars older than around 250 Myr and more massive than 65 MJ do not have Li visible in their atmospheres. There is a strong Li line in the optical that makes Li detection feasible.', 'This led Rafael Rebolo in the 90s to suggest the ""Li test"" as a way of distinguishing brown dwarfs from stars. If you could detect Li you knew you had to have a brown dwarf on your hand since it would be below 65 MJ. For objects which were hot enough to be either this was helpful', ""If you have a cooler object, say one that has CH4, you KNOW you don't have a star on your hands, so the Li test seems less important. Also atomic Li goes into various other molecules and is removed, thus muddying the waters."", ""However there aren't many spectral gravity indicators for brown dwarfs and it would be nice to have some marker for the most massive objects, those between ~65 MJ and ~75 MJ. Missing Li would serve nicely, but you have the chemistry removing Li as well as the nuclear fires."", 'So the question becomes, can we detect the other Li molecules that show up at lower Teff? If so then we have a new lithium test for identifying the most massive brown dwarfs. But we need molecular opacities for species such as LiH, LiF, LiOH, and LiCl.', 'This is where Ehsan comes in. As part of his NPP postdoc at Ames Ehsan computed himself some of these opacities and compiled others so that we could investigate the detectability of the various Li species, allowing us to follow the lithium through the brown dwarf cooling sequence', 'You can read the paper for details, but some of these other Li species should be detectable in the mid-IR. Unfortunately the molecular signatures are more subtle than the atomic Li feature, but the 30m telescopes and perhaps JWST should be able to search for them.', ""I'm hopeful that someday Li could be used to resolve some puzzles, such as the mass of Gl 229 B. Special thanks to Channon for handling all of the Li-species chemistry for us. Our spectra were computed with PICASO.""]",21,06,2415
387,67,981443382399643653,50343115,Thomas Kipf,"New paper on learning hyperspherical latent spaces: Hyperspherical Variational Auto-Encoders (with @im_td, L. Falorsi, @nicola_decao, @jmtomczak). Useful trick for learning node embeddings in graphs and for semi-supervised learning. @riceasphait @im_td @nicola_decao @jmtomczak Thanks! All credit goes to @im_td, Luca Falorsi & @nicola_decao - and to @jmtomczak for co-supervising. The paper is currently under review at UAI. @riceasphait @im_td @nicola_decao @jmtomczak Thanks! All the best for your submission as well :-)",https://arxiv.org/abs/1804.00891,"The Variational Auto-Encoder (VAE) is one of the most used unsupervised machine learning models. But although the default choice of a Gaussian distribution for both the prior and posterior represents a mathematically convenient distribution often leading to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure. To address this issue we propose using a von Mises-Fisher (vMF) distribution instead, leading to a hyperspherical latent space. Through a series of experiments we show how such a hyperspherical VAE, or $\mathcal{S}$-VAE, is more suitable for capturing data with a hyperspherical latent structure, while outperforming a normal, $\mathcal{N}$-VAE, in low dimensions on other data types. ",Hyperspherical Variational Auto-Encoders,3,"['New paper on learning hyperspherical latent spaces: Hyperspherical Variational Auto-Encoders (with @im_td, L. Falorsi, @nicola_decao, @jmtomczak). Useful trick for learning node embeddings in graphs and for semi-supervised learning. ', '@riceasphait @im_td @nicola_decao @jmtomczak Thanks! All credit goes to @im_td, Luca Falorsi & @nicola_decao - and to @jmtomczak for co-supervising. The paper is currently under review at UAI.', '@riceasphait @im_td @nicola_decao @jmtomczak Thanks! All the best for your submission as well :-)']",18,04,537
388,69,1073063838814154753,321794593,José G. Fernández-Trincado,"Check out our new APOGEE paper published today on ArXiv: ""APOGEE [C/N] Abundances Across the Galaxy ..."" -- Great Sten! ""However, there have been discoveries of chemically anomalous N-enhanced field stars in the APOGEE data (see, e.g., Fernández-Trincado et al. 2016, 2017; Schiavon et al. 2017), which might show up in our sample as low-[C/N] stars... ""These stars are few in number (11 found in the disk), and generally exhibit [Fe/H] < −0.5, so their contribution to the gradient and migration analysis is likely insignificant.""",https://arxiv.org/abs/1812.05092,"We present [C/N]-[Fe/H] abundance trends from the SDSS-IV Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey, Data Release 14 (DR14), for red giant branch stars across the Milky Way Galaxy (MW, 3 kpc $<$ R $<$ 15 kpc). The carbon-to-nitrogen ratio (often expressed as [C/N]) can indicate the mass of a red giant star, from which an age can be inferred. Using masses and ages derived by Martig et al., we demonstrate that we are able to interpret the DR14 [C/N]-[Fe/H] abundance distributions as trends in age-[Fe/H] space. Our results show that an anti-correlation between age and metallicity, which is predicted by simple chemical evolution models, is not present at any Galactic zone. Stars far from the plane ($|$Z$|$ $>$ 1 kpc) exhibit a radial gradient in [C/N] ($\sim$ $-$0.04 dex/kpc). The [C/N] dispersion increases toward the plane ($\sigma_{[C/N]}$ = 0.13 at $|$Z$|$ $>$ 1 kpc to $\sigma_{[C/N]}$ = 0.18 dex at $|$Z$|$ $<$ 0.5 kpc). We measure a disk metallicity gradient for the youngest stars (age $<$ 2.5 Gyr) of $-$0.060 dex/kpc from 6 kpc to 12 kpc, which is in agreement with the gradient found using young CoRoGEE stars by Anders et al. Older stars exhibit a flatter gradient ($-$0.016 dex/kpc), which is predicted by simulations in which stars migrate from their birth radii. We also find that radial migration is a plausible explanation for the observed upturn of the [C/N]-[Fe/H] abundance trends in the outer Galaxy, where the metal-rich stars are relatively enhanced in [C/N]. ","APOGEE [C/N] Abundances Across the Galaxy: Migration and Infall from Red
Giant Ages",3,"['Check out our new APOGEE paper published today on ArXiv: ""APOGEE [C/N] Abundances Across the Galaxy ..."" -- Great Sten!', '""However, there have been discoveries of chemically anomalous N-enhanced field stars in the APOGEE data (see, e.g., Fernández-Trincado et al. 2016, 2017; Schiavon et al. 2017), which might show up in our sample as low-[C/N] stars...', '""These stars are few in number (11 found in the disk), and generally exhibit [Fe/H] < −0.5, so their contribution to the gradient and migration analysis is likely insignificant.""']",18,12,540
389,186,1336515249063800832,2783180568,Max Radin,"Excited to share our study comparing state-of-the-art VQE to classical quantum-chemistry methods! We found that for small organic molecules, classical methods are much faster. Very grateful to have worked with @jfgonthier_qc, @RomeroFontalvoJ, and @bp_plc ",https://arxiv.org/abs/2012.04001,"Recent advances in Noisy Intermediate-Scale Quantum (NISQ) devices have brought much attention to the potential of the Variational Quantum Eigensolver (VQE) and related techniques to provide practical quantum advantage in computational chemistry. However, it is not yet clear whether such algorithms, even in the absence of device error, could achieve quantum advantage for systems of practical interest and how large such an advantage might be. To address these questions, we have performed an exhaustive set of benchmarks to estimate number of qubits and number of measurements required to compute the combustion energies of small organic molecules to within chemical accuracy using VQE as well as state-of-the-art classical algorithms. We consider several key modifications to VQE, including the use of Frozen Natural Orbitals, various Hamiltonian decomposition techniques, and the application of fermionic marginal constraints. Our results indicate that although Frozen Natural Orbitals and low-rank factorizations of the Hamiltonian significantly reduce the qubit and measurement requirements, these techniques are not sufficient to achieve practical quantum computational advantage in the calculation of organic molecule combustion energies. This suggests that new approaches to estimation leveraging quantum coherence, such as Bayesian amplitude estimation [arxiv:2006.09350, arxiv:2006.09349], may be required in order to achieve practical quantum advantage with near-term devices. Our work also highlights the crucial role that resource and performance assessments of quantum algorithms play in identifying quantum advantage and guiding quantum algorithm design. ","Identifying challenges towards practical quantum advantage through
resource estimation: the measurement roadblock in the variational quantum
eigensolver",1,"['Excited to share our study comparing state-of-the-art VQE to classical quantum-chemistry methods! We found that for small organic molecules, classical methods are much faster. Very grateful to have worked with @jfgonthier_qc, @RomeroFontalvoJ, and @bp_plc ']",20,12,269
390,85,1023999501651066880,1248692233,Tobias Marriage,Our new paper! Jesse Rivera studies a galaxy from 11 billion years ago forming 100s of stars per year. (To compare: our Milky Way forms only 1 star per year!) The plot shows how Jesse resolves the flow of gas in this gravitationally lensed galaxy. ,https://arxiv.org/abs/1807.08895,"We report Northern Extended Millimeter Array (NOEMA) CO($J = 3 - 2$) observations of the dusty star-forming galaxy ACT-S\,J020941+001557 at $z = 2.5528$, which was detected as an unresolved source in the Atacama Cosmology Telescope (ACT) equatorial survey. Our spatially resolved spectral line data support the derivation of a gravitational lens model from 37 independent velocity channel maps using a pixel-based algorithm, from which we infer a velocity-dependent magnification factor $\mu \approx 7-22$ with a luminosity-weighted mean $\left<\mu\right>\approx 13$. The resulting source-plane reconstruction is consistent with a rotating disk, although other scenarios cannot be ruled out by our data. After correction for lensing, we derive a line luminosity $L^{\prime}_{\rm CO(3-2)}= (5.53\pm 0.69) \times 10^{10}\,{\rm \,K\,km\,s^{-1}\,pc^{2}}$, a cold gas mass $M_{{\rm gas}}= (3.86 \pm 0.33) \times 10^{10}\,M_{\odot}$, a dynamical mass $M_{\rm dyn}\,{\rm sin}^2\,i = 3.9^{+1.8}_{-1.5} \times 10^{10}\,M_{\odot}$, and a gas mass fraction $f_{\rm gas}\,{\rm csc}^2\,i = 1.0^{+0.8}_{-0.4}$. The line brightness temperature ratio of $r_{3,1}\approx 1.6$ relative to a Green Bank Telescope CO($J=1-0$) detection may be elevated by a combination of external heating of molecular clouds, differential lensing, and/or pointing errors. ","The Atacama Cosmology Telescope: CO(J = 3 - 2) mapping and lens modeling
of an ACT-selected dusty star-forming galaxy",1,['Our new paper! Jesse Rivera studies a galaxy from 11 billion years ago forming 100s of stars per year. (To compare: our Milky Way forms only 1 star per year!) The plot shows how Jesse resolves the flow of gas in this gravitationally lensed galaxy. '],18,07,261
391,126,1248599767619420160,892059194240532480,Mikel Artetxe,"Check out our new paper on ""Translation Artifacts in Cross-lingual Transfer Learning"" (w/ @glabaka & @eagirre) We show that translation can alter spurious patterns in data, which requires reconsidering previous findings in cross-lingual transfer learning ",https://arxiv.org/abs/2004.04721,"Both human and machine translation play a central role in cross-lingual transfer learning: many multilingual datasets have been created through professional translation services, and using machine translation to translate either the test set or the training set is a widely used transfer technique. In this paper, we show that such translation process can introduce subtle artifacts that have a notable impact in existing cross-lingual models. For instance, in natural language inference, translating the premise and the hypothesis independently can reduce the lexical overlap between them, which current models are highly sensitive to. We show that some previous findings in cross-lingual transfer learning need to be reconsidered in the light of this phenomenon. Based on the gained insights, we also improve the state-of-the-art in XNLI for the translate-test and zero-shot approaches by 4.3 and 2.8 points, respectively. ",Translation Artifacts in Cross-lingual Transfer Learning,1,"['Check out our new paper on ""Translation Artifacts in Cross-lingual Transfer Learning"" (w/ @glabaka & @eagirre)\n\nWe show that translation can alter spurious patterns in data, which requires reconsidering previous findings in cross-lingual transfer learning\n\n']",20,04,261
392,16,1344346476617592833,825458566689648645,Sajad Sotudeh,"I'm excited to announce our new paper: ""On Generating Extended Summaries of Long Documents"" w/ @armancohan and Nazli Goharian. arXiv: Code and Datasets: (1/4) We propose a novel multi-tasking approach that exploits the hierarchical structure of long scientific documents to aid the extractive summarization model in selecting summary-worthy sentences, and finally, form the ""extended"" summaries of the given long documents. (2/4) In order to support this task, we additionally collect two extended summarization datasets: arXiv-Long, and PubMed-Long. The experimental results indicate that the multi-tasking model either outperforms or matches the performance of the prior baseline (i.e., BertSumExt). (3/4) The extrinsic analysis: 1) our model improves consistently as the summary length increases; 2) our model adjusts the extraction probability of sentences toward salient sentences across diverse sections of the source document and pick those with higher confidence. (4/4) ",https://arxiv.org/abs/2012.14136,"Prior work in document summarization has mainly focused on generating short summaries of a document. While this type of summary helps get a high-level view of a given document, it is desirable in some cases to know more detailed information about its salient points that can't fit in a short summary. This is typically the case for longer documents such as a research paper, legal document, or a book. In this paper, we present a new method for generating extended summaries of long papers. Our method exploits hierarchical structure of the documents and incorporates it into an extractive summarization model through a multi-task learning approach. We then present our results on three long summarization datasets, arXiv-Long, PubMed-Long, and Longsumm. Our method outperforms or matches the performance of strong baselines. Furthermore, we perform a comprehensive analysis over the generated results, shedding insights on future research for long-form summary generation task. Our analysis shows that our multi-tasking approach can adjust extraction probability distribution to the favor of summary-worthy sentences across diverse sections. Our datasets, and codes are publicly available at this https URL ",On Generating Extended Summaries of Long Documents,4,"['I\'m excited to announce our new paper: ""On Generating Extended Summaries of Long Documents"" w/ @armancohan and Nazli Goharian. \n\narXiv: \nCode and Datasets: \n(1/4) ', 'We propose a novel multi-tasking approach that exploits the hierarchical structure of long scientific documents to aid the extractive summarization model in selecting summary-worthy sentences, and finally, form the ""extended"" summaries of the given long documents. (2/4)', 'In order to support this task, we additionally collect two extended summarization datasets: arXiv-Long, and PubMed-Long. The experimental results indicate that the multi-tasking model either outperforms or matches the performance of the prior baseline (i.e., BertSumExt). (3/4)', 'The extrinsic analysis: 1) our model improves consistently as the summary length increases; 2) our model adjusts the extraction probability of sentences toward salient sentences across diverse sections of the source document and pick those with higher confidence. (4/4) https://t.co/otpKom9Lov']",20,12,1006
393,38,1519685299399450629,1074795788331569152,Hayley Beltz,"Just in time for #Exo4, it's new paper day! If you've ever thought to yourself, ""Gee, I wonder what the high resolution emission spectra of an ultrahot Jupiter looks like"" you're in luck! First, some background: ultrahot Jupiters (UHJs) have HUGE day-night temperature gradients meaning that we expect their spectra to vary strongly throughout the planet's orbit. (For more info about their atmospheric structure, see my last paper!) In fact, depending on the phase, the spectra will show emission features, absorption features, or both. (3D effects!) Check out this little movie showing how the spectra changes as different parts of the planet come into view: We also explored the effects of our different magnetic drag treatments. When we look at the net Doppler shifts as a function of phase, we see that on the nightside our dragged models return slight net redshifts compared to the blueshifts seen in the drag-free model. Now the exact values of the shifts are wavelength dependent, but this shows that our differences in the atmospheric structure of magnetic models can show up (and potentially be detected) in high resolution emission spectra. Finally, we end the paper with a warning to those who want to use a single 1D model for their entire observation. Especially near the quadratures, where both day and nightsides are present, a dayside model can retrieve Doppler shifts many km/s away from the true value. Anyways, I hope you check out the paper and I am more than happy to chat about it with anyone interested! @brettmor Thank you!! @_astronomay Thank you!!!!",https://arxiv.org/abs/2204.12996,"Ultrahot Jupiters are ideal candidates to explore with high-resolution emission spectra. Detailed theoretical studies are necessary to investigate the range of spectra we can expect to see from these objects throughout their orbit, because of the extreme temperature and chemical longitudinal gradients that exist across day and nightside regions. Using previously published 3D GCM models of WASP-76b with different treatments of magnetic drag, we post-process the 3D atmospheres to generate high-resolution emission spectra for two wavelength ranges and throughout the planet's orbit. We find that the high-resolution emission spectra vary strongly as a function of phase, at times showing emission features, absorption features, or both, which are a direct result of the 3D structure of the planet. At phases exhibiting both emission and absorption features, the Doppler shift differs in direction between the two spectral features, making them differentiable instead of canceling each other out. Through the use of cross-correlation, we find different patterns in net Doppler shift for models with different treatments of drag: the nightside spectra show opposite signs in their Doppler shift, while the dayside phases have a reversal in the trend of net shift with phase. Finally, we caution researchers from using a single spectral template throughout the planet's orbit; this can bias the corresponding net Doppler shift returned, as it can pick up on a bright region on the edge of the planet disk that is highly red- or blue-shifted. ","Magnetic Drag and 3-D Effects in Theoretical High-Resolution Emission
Spectra of Ultrahot Jupiters: the Case of WASP-76b",9,"['Just in time for #Exo4, it\'s new paper day! If you\'ve ever thought to yourself, ""Gee, I wonder what the high resolution emission spectra of an ultrahot Jupiter looks like"" you\'re in luck! ', ""First, some background: ultrahot Jupiters (UHJs) have HUGE day-night temperature gradients meaning that we expect their spectra to vary strongly throughout the planet's orbit. (For more info about their atmospheric structure, see my last paper!) https://t.co/wC4FOMTi97"", 'In fact, depending on the phase, the spectra will show emission features, absorption features, or both. (3D effects!) Check out this little movie showing how the spectra changes as different parts of the planet come into view:\nhttps://t.co/vRY2DwznIp', 'We also explored the effects of our different magnetic drag treatments. When we look at the net Doppler shifts as a function of phase, we see that on the nightside our dragged models return slight net redshifts compared to the blueshifts seen in the drag-free model.', 'Now the exact values of the shifts are wavelength dependent, but this shows that our differences in the atmospheric structure of magnetic models can show up (and potentially be detected) in high resolution emission spectra.', 'Finally, we end the paper with a warning to those who want to use a single 1D model for their entire observation. Especially near the quadratures, where both day and nightsides are present, a dayside model can retrieve Doppler shifts many km/s away from the true value.', 'Anyways, I hope you check out the paper and I am more than happy to chat about it with anyone interested!', '@brettmor Thank you!!', '@_astronomay Thank you!!!!']",22,04,1597
394,18,1399666122140626947,2872853608,Paula Soares,"New paper! In we showed for the first time that Gaussian Process Regression (GPR) can be used as a foreground removal technique in single-dish, low redshift 21cm intensity mapping. With @CunningtonSD, @CatAstro_Phy, and @AlkistisPou. [thread] We also released some user-friendly code for running GPR as a foreground removal technique, which has a lot of introductory Jupyter notebooks and easy to follow examples: Astrophysical foregrounds dominate over the cosmological 21cm signal so need to be removed. GPR has already been successfully used as a foreground cleaning technique in the context of EoR (see e.g. arXiv:1711.10834), so we wanted to try it in the case of large-scale structure! We used MeerKAT-like simulations of foregrounds, 21cm signal and instrumental noise, and compared the performance of GPR to another widely used foreground cleaning technique: Principal Component Analysis (PCA). We find that GPR is very good at recovering the 21cm power spectrum, especially the radial power spectrum. It is in many cases better suited to recover the power spectrum than PCA, especially on small scales. Check out how well it performs when there is no polarisation leakage: We also tested the case of including polarisation leakage, trimming the bandwidth, removing frequency channels, and tested if it is possible to implement a foreground transfer function with GPR (spoiler: it is!) In summary: it works!! And you can use our code to try it out yourself 🙂 we also describe how GPR works in detail in the paper, so if you're interested check that out too! @AmelieSaintonge @CunningtonSD @CatAstro_Phy @AlkistisPou Thank you Amélie! 🙂 @guadalupecah Thank you!! 🙂",http://arxiv.org/abs/2105.12665,"We apply for the first time Gaussian Process Regression (GPR) as a foreground removal technique in the context of single-dish, low redshift HI intensity mapping, and present an open-source Python toolkit for doing so. We use MeerKAT and SKA1-MID-like simulations of 21cm foregrounds (including polarisation leakage), HI cosmological signal and instrumental noise. We find that it is possible to use GPR as a foreground removal technique in this context, and that it is better suited in some cases to recover the HI power spectrum than Principal Component Analysis (PCA), especially on small scales. GPR is especially good at recovering the radial power spectrum, outperforming PCA when considering the full bandwidth of our data. Both methods are worse at recovering the transverse power spectrum, since they rely on frequency-only covariance information. When halving our data along frequency, we find that GPR performs better in the low frequency range, where foregrounds are brighter. It performs worse than PCA when frequency channels are missing, to emulate RFI flagging. We conclude that GPR is an excellent foreground removal option for the case of single-dish, low redshift HI intensity mapping in the absence of missing frequency channels. Our Python toolkit gpr4im and the data used in this analysis are publicly available on GitHub. ","Gaussian Process Regression for foreground removal in HI intensity
mapping experiments",9,"['New paper! In we showed for the first time that Gaussian Process Regression (GPR) can be used as a foreground removal technique in single-dish, low redshift 21cm intensity mapping. With @CunningtonSD, @CatAstro_Phy, and @AlkistisPou. [thread]', 'We also released some user-friendly code for running GPR as a foreground removal technique, which has a lot of introductory Jupyter notebooks and easy to follow examples: https://t.co/jT5EU3V41c', 'Astrophysical foregrounds dominate over the cosmological 21cm signal so need to be removed. GPR has already been successfully used as a foreground cleaning technique in the context of EoR (see e.g. arXiv:1711.10834), so we wanted to try it in the case of large-scale structure!', 'We used MeerKAT-like simulations of foregrounds, 21cm signal and instrumental noise, and compared the performance of GPR to another widely used foreground cleaning technique: Principal Component Analysis (PCA).', 'We find that GPR is very good at recovering the 21cm power spectrum, especially the radial power spectrum. It is in many cases better suited to recover the power spectrum than PCA, especially on small scales. Check out how well it performs when there is no polarisation leakage: https://t.co/jgNmuRPhz8', 'We also tested the case of including polarisation leakage, trimming the bandwidth, removing frequency channels, and tested if it is possible to implement a foreground transfer function with GPR (spoiler: it is!)', ""In summary: it works!! And you can use our code to try it out yourself 🙂 we also describe how GPR works in detail in the paper, so if you're interested check that out too!"", '@AmelieSaintonge @CunningtonSD @CatAstro_Phy @AlkistisPou Thank you Amélie! 🙂', '@guadalupecah Thank you!! 🙂']",21,05,1692
395,138,1369042014084296705,1276310243123720192,Daniel Filan research-tweets,"New paper is up about about clusterability in neural networks, authored by myself, Shlomi Hod, @StephenLCasper, @decodyng, Andrew Critch, and Stuart Russell! Link to paper: (1/9) An old version with a somewhat more clickbaity title has been online for a while, but this is a more final version that I feel comfortable publicly promoting. So what's in the paper? (2/9) We're interested in dividing the neurons of neural nets into groups such that there's a lot of connection within the groups, but not much between the groups. We do this with graph clustering algorithms, and call the groups 'clusters'. (3/9) (The hope is that if you can do this well, then one day you'll be able to analyze neural net structure group-wise to learn facts about the network that don't depend on knowledge of the deployment distribution. But that's pretty far away...) (4/9) We find that in many conditions, if you train neural networks with pruning and dropout, basically every net you train will be significantly more clusterable than a random neural network with the same distribution of weights. (5/9) This is true for small MLPs and VGG-scale CNNs trained on various image classification tasks (but not small CNNs that we train on MNIST and Fashion-MNIST 😳). (6/9) We also find that when we cluster big CNNs other people have trained for ImageNet classification (some ResNets, some VGGs, and Inception-v3), they also are more clusterable than if their weights were arranged randomly. (7/9) Another thing we do is figure out ways of changing the training process to produce nets that are more neatly divisible. We find that regularization manages to succeed at doing this with little cost in accuracy for MLPs! (8/9) The paper has a bunch of stuff to dig into. If you want to look at code, it's available here: Thanks to my co-authors, and the support of folks at @CHAI_Berkeley, for helping make this paper real! (9/9) @robpmcadam @StephenLCasper @decodyng Nope. But I think some co-authors are interested in that.",https://arxiv.org/abs/2103.03386,"The learned weights of a neural network have often been considered devoid of scrutable internal structure. In this paper, however, we look for structure in the form of clusterability: how well a network can be divided into groups of neurons with strong internal connectivity but weak external connectivity. We find that a trained neural network is typically more clusterable than randomly initialized networks, and often clusterable relative to random networks with the same distribution of weights. We also exhibit novel methods to promote clusterability in neural network training, and find that in multi-layer perceptrons they lead to more clusterable networks with little reduction in accuracy. Understanding and controlling the clusterability of neural networks will hopefully render their inner workings more interpretable to engineers by facilitating partitioning into meaningful clusters. ",Clusterability in Neural Networks,10,"['New paper is up about about clusterability in neural networks, authored by myself, Shlomi Hod, @StephenLCasper, @decodyng, Andrew Critch, and Stuart Russell! Link to paper: (1/9) ', ""An old version with a somewhat more clickbaity title has been online for a while, but this is a more final version that I feel comfortable publicly promoting. So what's in the paper? 
(2/9)"", ""We're interested in dividing the neurons of neural nets into groups such that there's a lot of connection within the groups, but not much between the groups. We do this with graph clustering algorithms, and call the groups 'clusters'. (3/9)"", ""(The hope is that if you can do this well, then one day you'll be able to analyze neural net structure group-wise to learn facts about the network that don't depend on knowledge of the deployment distribution. But that's pretty far away...) (4/9)"", 'We find that in many conditions, if you train neural networks with pruning and dropout, basically every net you train will be significantly more clusterable than a random neural network with the same distribution of weights. (5/9)', 'This is true for small MLPs and VGG-scale CNNs trained on various image classification tasks (but not small CNNs that we train on MNIST and Fashion-MNIST 😳). (6/9)', 'We also find that when we cluster big CNNs other people have trained for ImageNet classification (some ResNets, some VGGs, and Inception-v3), they also are more clusterable than if their weights were arranged randomly. (7/9)', 'Another thing we do is figure out ways of changing the training process to produce nets that are more neatly divisible. We find that regularization manages to succeed at doing this with little cost in accuracy for MLPs! (8/9)', ""The paper has a bunch of stuff to dig into. If you want to look at code, it's available here: https://t.co/SBOG603OHy\n\nThanks to my co-authors, and the support of folks at @CHAI_Berkeley, for helping make this paper real! (9/9)"", '@robpmcadam @StephenLCasper @decodyng Nope. But I think some co-authors are interested in that.']",21,03,2021
396,154,1247198777624018944,1002606609527443462,Dr. Angela Collier,Paper day! Dynamics explaining observed clustering in the outer solar system--removes the need for #planet9 () This is my first paper with my new group AND my first time venturing away from galactic dynamics! Very exciting! #AcademicTwitter ,https://arxiv.org/abs/2004.01198,"Disks of low-mass bodies on high-eccentricity orbits in near-Keplerian potentials can be dynamically unstable to buckling out of the plane. In this letter, we present $N$-body simulations of the long-term behavior of such a system, finding apsidal clustering of the orbits in the disk plane. The timescale over which the clustering is maintained increases with number of particles, suggesting that lopsided configurations are stable at large $N$. This discovery may explain the observed apsidal ($\varpi$) clustering of extreme trans-Neptunian Objects in the outer solar system. ",Apsidal Clustering following the Inclination Instability,1,['Paper day! Dynamics explaining observed clustering in the outer solar system--removes the need for #planet9 () This is my first paper with my new group AND my first time venturing away from galactic dynamics! Very exciting! #AcademicTwitter '],20,04,253
397,191,1374079898512465922,1133565755637657601,Aaron M. Lattanzi,"New paper up on @arxiv with Vahid Tavanashad, Shankar Subramaniam & @jesse_caps! We close and implement a stochastic model for drag perturbations induced by neighboring particles. The new stochastic EL framework stacks up well to PR-DNS simulations.",https://arxiv.org/abs/2103.10581,"Standard Eulerian--Lagrangian (EL) methods generally employ drag force models that only represent the mean hydrodynamic force acting upon a particle-laden suspension. Consequently, higher-order drag force statistics, arising from neighbor-induced flow perturbations, are not accounted for; with implications on predictions for particle velocity variance and dispersion. We develop a force Langevin (FL) model that treats neighbor-induced drag fluctuations as a stochastic force within an EL framework. The stochastic drag force follows an Ornstein-Uhlenbeck process and requires closure of the integral time scale for the fluctuating hydrodynamic force and the standard deviation in drag. The former is closed using the mean-free time between successive collisions, derived from the kinetic theory of non-uniform gases. For the latter, particle-resolved direct numerical simulation (PR--DNS) of fixed particle assemblies is utilized to develop a correlation. The stochastic EL framework specifies unresolved drag force statistics, leading to the correct evolution and sustainment of particle velocity variance over a wide range of Reynolds numbers and solids volume fractions when compared to PR--DNS of freely-evolving homogeneous suspensions. By contrast, standard EL infers drag statistics from variations in the resolved flow and thus under-predicts the growth and steady particle velocity variance in homogeneous suspensions. Velocity statistics from standard EL approaches are found to depend on the bandwidth of the projection function used for two-way momentum coupling, while results obtained from the stochastic EL approach are insensitive to the projection bandwidth. ","A stochastic model for the hydrodynamic force in Euler--Lagrange
simulations of particle-laden flows",1,"['New paper up on @arxiv with Vahid Tavanashad, Shankar Subramaniam & @jesse_caps! We close and implement a stochastic model for drag perturbations induced by neighboring particles. The new stochastic EL framework stacks up well to PR-DNS simulations.']",21,03,256
398,100,1404902019169587206,606388721,Dida Markovič,"Exciting new paper by Peter Taylor that makes it possible to directly cross-correlate RSD x WL, which opens up amazing possibilities for the next-gen of cosmological surveys! (Full disclosure: am co-author😋.) @EC_Euclid @NASARoman @desisurvey ",https://arxiv.org/abs/2106.05293,"Future data sets will enable cross-correlations between redshift space distortions (RSD) and weak lensing (WL). While photometric lensing and clustering cross-correlations have provided some of the tightest cosmological constraints to date, it is not well understood how to optimally perform similar RSD/WL joint analyses in a lossless way. RSD is typically measured in $3D$ redshift space, but WL is inherently a projected signal, making angular statistics a natural choice for the combined analysis. Thus, we determine the amount of RSD information that can be extracted using projected statistics. Specifically we perform a Fisher analysis to forecast constraints and model bias comparing two different Fingers-of-God (FoG) models using both, the $3D$ power spectrum, $P(k, \mu)$, and tomographic $C(\ell)$. We find that because na\""ive tomographic projection mixes large scales with poorly modelled nonlinear radial modes, it does not provide competitive constraints to the $3D$ RSD power spectrum without the model bias becoming unacceptably large. This is true even in the limit of narrow tomographic bins. In light of this we propose a new radial weighting scheme which unmixes radial RSD scales in projection yielding competitive constraints to the $3D$ RSD power spectrum, while keeping the model bias small. This work lays the groundwork for optimal joint analyses of RSD and cosmic shear. ",The RSD Sorting Hat: Unmixing Radial Scales in Projection,1,"['Exciting new paper by Peter Taylor that makes it possible to directly cross-correlate RSD x WL, which opens up amazing possibilities for the next-gen of cosmological surveys! (Full disclosure: am co-author😋.) \n@EC_Euclid @NASARoman @desisurvey \n ']",21,06,256
399,91,1235794722683146240,117917587,Tatsuro KAWAMOTO,"New detectability paper (mainly done by Chihiro Noguchi) is on arxiv. “Fragility of spectral clustering for networks with an overlapping structure” (continued) Whereas the spectral clustering is known to be nearly optimal for some random graph models, it’s also known to be fragile against noise. (2/3) We investigated how an overlapping structure in the stochastic block model affects the spectrum (isolated eigenvalue and spectral band). Interestingly, the effects are qualitatively different depending on the way the groups are overlapped. (3/3)",https://arxiv.org/abs/2003.02463,"Communities commonly overlap in real-world networks. This is a motivation to develop overlapping community detection methods, because methods for non-overlapping communities may not perform well. However, deterioration mechanism of the detection methods used for non-overlapping communities have rarely been investigated theoretically. Here, we analyze an accuracy of spectral clustering, which does not consider overlapping structures, by using the replica method from statistical physics. Our analysis on an overlapping stochastic block model reveals how the structural information is lost from the leading eigenvector because of the overlapping structure. ","Fragility of spectral clustering for networks with an overlapping
structure",3,"['New detectability paper (mainly done by Chihiro Noguchi) is on arxiv. \n“Fragility of spectral clustering for networks with an overlapping structure” \n \n\n(continued) ', 'Whereas the spectral clustering is known to be nearly optimal for some random graph models, it’s also known to be fragile against noise. (2/3)', 'We investigated how an overlapping structure in the stochastic block model affects the spectrum (isolated eigenvalue and spectral band). Interestingly, the effects are qualitatively different depending on the way the groups are overlapped. (3/3)']",20,03,563
400,68,1361577810553356292,270481144,Antoine Tilloy,"I put on arxiv a new paper where I apply the variational method in relativistic quantum field theory in 1+1 dimensions. The novelty is that there is no cutoff, infrared or ultraviolet, as is usually the case, and so the results are truly variational The idea is to modify the continuous matrix product states (CMPS) introduced by @fverstraete and Ignacio Cirac in 2010. The CMPS already do the job in the non-relativistic case, but still require a UV cutoff for relativistic QFT. The modification consists in changing of operator basis, to work in one that solves the short distance behavior exactly, and fits the singular UV behavior. This allows to bypass the last of Feynman's objections of 1987 against the variational method in relativistic QFT. The idea is simple, and had been in my head for a while, but I only got the courage to do the (semi-tedious) computations recently. In my opinion, they are not super enlightening except for experts, so I dropped them in a second paper @fverstraete Thanks, means a lot! For the entanglement entropy, since I softly break locality, I wouldn't know how to compute the usual one. But it does define a new analogous quantity that should be finite (but not sure if physically meaningful though) @fverstraete Thanks for the reference, I'll have a look @mmanuF In principle, CMPS have a generalization, continuous tensor networks states but it's much more difficult to use and do computations in practice. The renormalization of the Hamiltonian is also trickier in higher dimensions (need more than normal ordering) @mmanuF So to be transparent, right now, I know only how to deal with 1+1 dimensions. There are a number of hurdles to lift before doing higher dimensions. I discuss it a bit at the end of the papers (short and long)",https://arxiv.org/abs/2102.07733,"The variational method is a powerful approach to solve many-body quantum problems non perturbatively. However, in the context of relativistic quantum field theory (QFT), it needs to meet 3 seemingly incompatible requirements outlined by Feynman: extensivity, computability, and lack of UV sensitivity. In practice, variational methods break one of the 3, which translates into the need to have an IR or UV cutoff. In this letter, I introduce a relativistic modification of continuous matrix product states that satisfies the 3 requirements jointly in 1+1 dimensions. I apply it to the self-interacting scalar field, without UV cutoff and directly in the thermodynamic limit. Numerical evidence suggests the error decreases faster than any power law in the number of parameters, while the cost remains only polynomial. ",Variational method in relativistic quantum field theory without cutoff,8,"['I put on arxiv a new paper where I apply the variational method in relativistic quantum field theory in 1+1 dimensions. The novelty is that there is no cutoff, infrared or ultraviolet, as is usually the case, and so the results are truly variational\n\n', 'The idea is to modify the continuous matrix product states (CMPS) introduced by @fverstraete and Ignacio Cirac in 2010. The CMPS already do the job in the non-relativistic case, but still require a UV cutoff for relativistic QFT.', ""The modification consists in changing of operator basis, to work in one that solves the short distance behavior exactly, and fits the singular UV behavior. 
This allows to bypass the last of Feynman's objections of 1987 against the variational method in relativistic QFT."", 'The idea is simple, and had been in my head for a while, but I only got the courage to do the (semi-tedious) computations recently. In my opinion, they are not super enlightening except for experts, so I dropped them in a second paper\n\nhttps://t.co/K0AMyG5tVp', ""@fverstraete Thanks, means a lot! For the entanglement entropy, since I softly break locality, I wouldn't know how to compute the usual one. But it does define a new analogous quantity that should be finite (but not sure if physically meaningful though)"", ""@fverstraete Thanks for the reference, I'll have a look"", ""@mmanuF In principle, CMPS have a generalization, continuous tensor networks states but it's much more difficult to use and do computations in practice. The renormalization of the Hamiltonian is also trickier in higher dimensions (need more than normal ordering)"", '@mmanuF So to be transparent, right now, I know only how to deal with 1+1 dimensions. There are a number of hurdles to lift before doing higher dimensions. I discuss it a bit at the end of the papers (short and long)']",21,02,1789
401,146,1426453540709355520,1310552063999438849,Hauke Group,"👨🏫In our study, we reveal a universality in the equilibration dynamics of the Sachdev-Ye-Kitaev model by employing state-of-the-art numerical methods for disorder averaged evolution. Read the full article @ERC_Research @HaukeGroup and Alessio Paviglianiti ",https://arxiv.org/abs/2108.01718,"Equilibrium quantum many-body systems in the vicinity of phase transitions generically manifest universality. In contrast, limited knowledge has been gained on possible universal characteristics in the non-equilibrium evolution of systems in quantum critical phases. In this context, universality is generically attributed to the insensitivity of observables to the microscopic system parameters and initial conditions. Here, we present such a universal feature in the equilibration dynamics of the Sachdev-Ye-Kitaev (SYK) Hamiltonian -- a paradigmatic system of disordered, all-to-all interacting fermions that has been designed as a phenomenological description of quantum critical regions. We drive the system far away from equilibrium by performing a global quench, and track how its ensemble average relaxes to a steady state. Employing state-of-the-art numerical simulations for the exact evolution, we reveal that the disorder-averaged evolution of few-body observables, including the quantum Fisher information and low-order moments of local operators, exhibit within numerical resolution a universal equilibration process. Under a straightforward rescaling, data that correspond to different initial states collapse onto a universal curve, which can be well approximated by a Gaussian throughout large parts of the evolution. To reveal the physics behind this process, we formulate a general theoretical framework based on the Novikov--Furutsu theorem. This framework extracts the disorder-averaged dynamics of a many-body system as an effective dissipative evolution, and can have applications beyond this work. The exact non-Markovian evolution of the SYK ensemble is very well captured by Bourret--Markov approximations, which contrary to common lore become justified thanks to the extreme chaoticity of the system, and universality is revealed in a spectral analysis of the corresponding Liouvillian. ",Universal equilibration dynamics of the Sachdev-Ye-Kitaev model,1,"['👨\u200d🏫In our study, we reveal a universality in the equilibration dynamics of the Sachdev-Ye-Kitaev model by employing state-of-the-art numerical methods for disorder averaged evolution.\nRead the full article \n@ERC_Research \n@HaukeGroup and Alessio Paviglianiti ']",21,08,270
402,125,1403140600480747522,147951210,David Berthelot,"New paper: AdaMatch - Unifying Unsupervised Domain Adaptation (UDA) and Semi-Supervised Learning (SSL) and SSDA. Nearly doubles SotA accuracy for UDA on non-pretrained DomainNet. 1/3 he technique itself is an extension of FixMatch: the two biggest differences being random logit interpolation and relative confidence thresholding. 2/3 There are plenty of tables comparing the effects of applying SSL techniques to UDA, SSDA and UDA techniques such as MCD to SSL and SSDA. The picture I like best is the one that compares convergence between MCD and AdaMatch over time. 3/3 Work done with @BeccaRoelofs (co-first author), Kihyuk Sohn, Nicholas Carlini, @alexey2004",https://arxiv.org/abs/2106.04732,"We extend semi-supervised learning to the problem of domain adaptation to learn significantly higher-accuracy models that train on one data distribution and test on a different one. With the goal of generality, we introduce AdaMatch, a method that unifies the tasks of unsupervised domain adaptation (UDA), semi-supervised learning (SSL), and semi-supervised domain adaptation (SSDA). In an extensive experimental study, we compare its behavior with respective state-of-the-art techniques from SSL, SSDA, and UDA on vision classification tasks. We find AdaMatch either matches or significantly exceeds the state-of-the-art in each case using the same hyper-parameters regardless of the dataset or task. For example, AdaMatch nearly doubles the accuracy compared to that of the prior state-of-the-art on the UDA task for DomainNet and even exceeds the accuracy of the prior state-of-the-art obtained with pre-training by 6.4% when AdaMatch is trained completely from scratch. Furthermore, by providing AdaMatch with just one labeled example per class from the target domain (i.e., the SSDA setting), we increase the target accuracy by an additional 6.1%, and with 5 labeled examples, by 13.6%. ","AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain
Adaptation",4,"['New paper: AdaMatch - Unifying Unsupervised Domain Adaptation (UDA) and Semi-Supervised Learning (SSL) and SSDA. Nearly doubles SotA accuracy for UDA on non-pretrained DomainNet. \n1/3', 'he technique itself is an extension of FixMatch: the two biggest differences being random logit interpolation and relative confidence thresholding.\n2/3 https://t.co/uMQYnZ0mGY', 'There are plenty of tables comparing the effects of applying SSL techniques to UDA, SSDA and UDA techniques such as MCD to SSL and SSDA.\nThe picture I like best is the one that compares convergence between MCD and AdaMatch over time.\n3/3 https://t.co/FPKoDxTOqp', 'Work done with @BeccaRoelofs (co-first author), Kihyuk Sohn, Nicholas Carlini, @alexey2004']",21,06,684
403,142,1301013157398245376,1611666830,Arne Løhre Grimsmo 🧡,"New paper with Thomas Smith, @maja_cassidy, @Prof_D_Reilly and @BartlettQuantum on readout of Majorana qubits: . Main takeaways: 1/3 1. Dispersive readout looks very promising. 2. But, there is some fine-print: The ""QNDness"" depends on the details of the Majorana qubit design and the readout protocol. In some cases the measured Majorana parity is manifestly conserved by the light-matter interaction. 2/3 This leads to a stronger notion of QND readout than for conventional superconducting qubits. 3. The ""longitudinal readout"" scheme we came up with in 2019 also looks extremely promising. Manifestly QND and might be faster and higher fidelity than dispersive. 3/3 @tahantech @maja_cassidy @Prof_D_Reilly @BartlettQuantum Yeah I believe that's true, the paper is about gates but he proposes longitudinal readout too in the last two sentences of his paper",https://arxiv.org/abs/2009.00027,"We analyze a readout scheme for Majorana qubits based on dispersive coupling to a resonator. We consider two variants of Majorana qubits: the Majorana transmon and the Majorana box qubit. In both cases, the qubit-resonator interaction can produce sizeable dispersive shifts in the MHz range for reasonable system parameters, allowing for submicrosecond readout with high fidelity. For Majorana transmons, the light-matter interaction used for readout manifestly conserves Majorana parity, which leads to a notion of quantum nondemolition (QND) readout that is stronger than for conventional charge qubits. In contrast, Majorana box qubits only recover an approximately QND readout mechanism in the dispersive limit where the resonator detuning is large. We also compare dispersive readout to longitudinal readout for the Majorana box qubit. We show that the latter gives faster and higher fidelity readout for reasonable parameters, while having the additional advantage of being manifestly QND, and so may prove to be a better readout mechanism for these systems. ",Dispersive readout of Majorana qubits,4,"['New paper with Thomas Smith, @maja_cassidy, @Prof_D_Reilly and @BartlettQuantum on readout of Majorana qubits: . Main takeaways:\n\n1/3', '1. Dispersive readout looks very promising.\n\n2. But, there is some fine-print: The ""QNDness"" depends on the details of the Majorana qubit design and the readout protocol. In some cases the measured Majorana parity is manifestly conserved by the light-matter interaction.\n\n2/3', 'This leads to a stronger notion of QND readout than for conventional superconducting qubits. \n\n3. The ""longitudinal readout"" scheme we came up with in 2019 also looks extremely promising. Manifestly QND and might be faster and higher fidelity than dispersive.\n\n3/3', ""@tahantech @maja_cassidy @Prof_D_Reilly @BartlettQuantum Yeah I believe that's true, the paper is about gates but he proposes longitudinal readout too in the last two sentences of his paper""]",20,09,865
404,112,1283372671498178566,4639078397,John Wise,"New paper day! Led by JSPS Fellow G. Chiaki. We studied the formation of 2nd gen stars, enriched by a faint Pop III supernova (3 cases with 13, 50, and 80 Msun). We find that the C dust grains produced in the 13 Msun SN induce fragmentation at [Fe/H] = -9. Stars with such low iron abundances may be detected in larger surveys. The record holders are [Fe/H] < -7.1 (Keller+) and a detected [Fe/H] = -6.2 (Nordlander+) by @annafrebel, @AstroRana, and collab. These ancient stars are imprinted from the first stars in the Universe. ",https://arxiv.org/abs/2007.06657,"Carbon-enhanced metal-poor (CEMP) stars are the living fossils holding records of chemical enrichment from early generations of stars. In this work, we perform a set of numerical simulations of the enrichment from a supernova (SN) of a first generation of metal-free (Pop III) star and the gravitational collapse of the enriched cloud, considering all relevant cooling/heating processes and chemical reactions as well as the growth of dust grains. We adopt faint SN models for the first time with progenitor masses $M_{\rm PopIII} = 13$--$80 \ {\rm M}_{\bigodot}$, which yield C-enhanced abundance patterns (${\rm [C/Fe]} = 4.57$--$4.75$) through mixing and fallback of innermost layers of the ejecta. This model also considers the formation and destruction of dust grains. We find that the metals ejected by the SN can be partly re-accreted by the same dark matter minihalo, and carbon abundance of the enriched cloud $A({\rm C}) = 3.80$--$5.06$ is lower than the abundance range of observed CEMP stars ($A({\rm C}) \gtrsim 6$) because the mass of the metals ejected by faint SNe is smaller than normal core-collapse SNe due to extensive fallback. We also find that cloud fragmentation is induced by gas cooling from carbonaceous grains for $M_{\rm PopIII} = 13 \ {\rm M}_{\bigodot}$ even with the lowest iron abundance ${\rm [Fe/H]} \sim -9$. This leads to the formation of low-mass stars, and these ``giga metal-poor'' stars can survive until the present-day Universe and may be found by future observations. ","Seeding the second star -- II. CEMP star formation enriched from faint
supernovae",2,"['New paper day! Led by JSPS Fellow G. Chiaki. We studied the formation of 2nd gen stars, enriched by a faint Pop III supernova (3 cases with 13, 50, and 80 Msun). We find that the C dust grains produced in the 13 Msun SN induce fragmentation at [Fe/H] = -9. ', 'Stars with such low iron abundances may be detected in larger surveys. The record holders are [Fe/H] < -7.1 (Keller+) and a detected [Fe/H] = -6.2 (Nordlander+) by @annafrebel, @AstroRana, and collab.\n\nThese ancient stars are imprinted from the first stars in the Universe. https://t.co/FcFueT4pFZ']",20,07,553
405,107,1169698492316405760,503452360,William Wang,"BLEU/ROUGE only captures exact lexical overlap. Can one optimize distributional semantic rewards using REINFORCE for abstractive summarization? In our #EMNLP2019 paper, we show that optimizing BERTscore is a viable new solution for deep RL in #NLProc. ",https://arxiv.org/abs/1909.00141,"Deep reinforcement learning (RL) has been a commonly-used strategy for the abstractive summarization task to address both the exposure bias and non-differentiable task issues. However, the conventional reward Rouge-L simply looks for exact n-grams matches between candidates and annotated references, which inevitably makes the generated sentences repetitive and incoherent. In this paper, instead of Rouge-L, we explore the practicability of utilizing the distributional semantics to measure the matching degrees. With distributional semantics, sentence-level evaluation can be obtained, and semantically-correct phrases can also be generated without being limited to the surface form of the reference sentences. Human judgments on Gigaword and CNN/Daily Mail datasets show that our proposed distributional semantics reward (DSR) has distinct superiority in capturing the lexical and compositional diversity of natural language. ","Deep Reinforcement Learning with Distributional Semantic Rewards for
Abstractive Summarization",1,"['BLEU/ROUGE only captures exact lexical overlap. Can one optimize distributional semantic rewards using REINFORCE for abstractive summarization? In our #EMNLP2019 paper, we show that optimizing BERTscore is a viable new solution for deep RL in #NLProc. ']",19,09,258
406,3,1446326187416961026,990433714948661250,Sergey Levine,"Is there a principled way to adapt a model to distributional shift without labels? In our new paper, ""Training on Test Data"", we propose a Bayesian adaptation strategy based on BNNs and entropy minimization. w/ Aurick Zhou: A thread: First: what information do we gain from observing unlabeled datapoints from a new distribution? We can draw a graphical model for this: x is input, y is label, theta is classifier params, phi parameterizes x distro. Unfortunately, if y is unobserved, x tells nothing about theta! We need a better graphical model. What if we assume new datapoints are not arbitrary: if we are asked to classify a new OOD, it likely belongs to *one of* the classes, we just don't know which one! Now there is a relationship between theta and phi for each distribution! This naturally leads to an entropy minimization procedure at test time: get some unlabeled points, and then update the parameter posterior to get lower entropy on test points, but don't stray too far from parameter distribution on training set! To avoid needing to store all training data, we can learn posterior q(theta) using any BNN approach, and then incorporate this as a regularizer when minimizing entropy at test time on unlabeled data. This leads to better accuracy *and* better calibration on unlabeled test points. Inspired by some classics on entropy minimization: Y. Grandvalet, Y. Bengio. Semi-supervised Learning by Entropy Minimization M. Seeger. Input-dependent Regularization of Conditional Density Models",https://arxiv.org/abs/2109.12746,"When faced with distribution shift at test time, deep neural networks often make inaccurate predictions with unreliable uncertainty estimates. While improving the robustness of neural networks is one promising approach to mitigate this issue, an appealing alternate to robustifying networks against all possible test-time shifts is to instead directly adapt them to unlabeled inputs from the particular distribution shift we encounter at test time. However, this poses a challenging question: in the standard Bayesian model for supervised learning, unlabeled inputs are conditionally independent of model parameters when the labels are unobserved, so what can unlabeled data tell us about the model parameters at test-time? In this paper, we derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters, and show how approximate inference in this model can be instantiated with a simple regularized entropy minimization procedure at test-time. We evaluate our method on a variety of distribution shifts for image classification, including image corruptions, natural distribution shifts, and domain adaptation settings, and show that our method improves both accuracy and uncertainty estimation. ",Training on Test Data with Bayesian Adaptation for Covariate Shift,6,"['Is there a principled way to adapt a model to distributional shift without labels? In our new paper, ""Training on Test Data"", we propose a Bayesian adaptation strategy based on BNNs and entropy minimization. w/ Aurick Zhou: \n\nA thread:', 'First: what information do we gain from observing unlabeled datapoints from a new distribution? We can draw a graphical model for this: x is input, y is label, theta is classifier params, phi parameterizes x distro. Unfortunately, if y is unobserved, x tells nothing about theta! https://t.co/E2w5hxaYyV', ""We need a better graphical model. 
What if we assume new datapoints are not arbitrary: if we are asked to classify a new OOD, it likely belongs to *one of* the classes, we just don't know which one! Now there is a relationship between theta and phi for each distribution! https://t.co/2BPaQCR0yh"", ""This naturally leads to an entropy minimization procedure at test time: get some unlabeled points, and then update the parameter posterior to get lower entropy on test points, but don't stray too far from parameter distribution on training set! https://t.co/TdgRlHmEZQ"", 'To avoid needing to store all training data, we can learn posterior q(theta) using any BNN approach, and then incorporate this as a regularizer when minimizing entropy at test time on unlabeled data. https://t.co/cHlCQLDQRu', 'This leads to better accuracy *and* better calibration on unlabeled test points.\n\nInspired by some classics on entropy minimization:\n\nY. Grandvalet, Y. Bengio. Semi-supervised Learning by Entropy Minimization\nM. Seeger. Input-dependent Regularization of Conditional Density Models']",21,09,1543
407,65,1308043002816933888,759028003976470528,Dr. Kevin Cooke,Our new paper using @SOFIAtelescope data to model a rare cold quasar (an AGN hosting galaxy that still has its cold dust component) is now accepted and out on arXiv! Check it out to learn how galaxies and their supermassive black holes grow in tandem! ,http://arxiv.org/abs/2009.08465,"Cold quasars are a rare subpopulation observed to host unobscured, X-ray luminous active galactic nuclei (AGN) while also retaining a cold gas supply fueling high star formation rates. These objects are interpreted as AGN early in their evolution. We present new SOFIA HAWC+ far-infrared observations, FUV-FIR photometry, and optical spectroscopy to characterize the accretion and star formation behavior in a cold quasar at z ~ 0.405 (CQ 4479). CQ 4479 is a starburst galaxy with a predominantly young stellar population and a high gas mass fraction of ~50-70%. The AGN component has yet to become the dominant component of the FIR emission. We also find AGN bolometric luminosity that varies as a function of observation method and AGN region probed. Finally, we identify a candidate outflow feature corroborating the hypothesis that cold quasars have energetic feedback. This object presents an intriguing look into the early stages of AGN feedback and probes the rare phase where an AGN and cold gaseous component co-exist. ",Dying of the Light: An X-ray Fading Cold Quasar at z ~ 0.405,1,['Our new paper using @SOFIAtelescope data to model a rare cold quasar (an AGN hosting galaxy that still has its cold dust component) is now accepted and out on arXiv! Check it out to learn how galaxies and their supermassive black holes grow in tandem!\n\n'],20,09,258
408,48,999082765848055808,99270209,Guillermo Valle,"Finally!!! We released the paper with the work I've been doing for the last months! We give a new perspective on why deep neural networks generalize, which I think it's quite interesting, and I think more promising than other approaches. I should write … ",https://arxiv.org/abs/1805.08522,"Deep neural networks (DNNs) generalize remarkably well without explicit regularization even in the strongly over-parametrized regime where classical learning theory would instead predict that they would severely overfit. While many proposals for some kind of implicit regularization have been made to rationalise this success, there is no consensus for the fundamental reason why DNNs do not strongly overfit. In this paper, we provide a new explanation. By applying a very general probability-complexity bound recently derived from algorithmic information theory (AIT), we argue that the parameter-function map of many DNNs should be exponentially biased towards simple functions. We then provide clear evidence for this strong simplicity bias in a model DNN for Boolean functions, as well as in much larger fully connected and convolutional networks applied to CIFAR10 and MNIST. As the target functions in many real problems are expected to be highly structured, this intrinsic simplicity bias helps explain why deep networks generalize well on real world problems. This picture also facilitates a novel PAC-Bayes approach where the prior is taken over the DNN input-output function space, rather than the more conventional prior over parameter space. If we assume that the training algorithm samples parameters close to uniformly within the zero-error region then the PAC-Bayes theorem can be used to guarantee good expected generalization for target functions producing high-likelihood training sets. By exploiting recently discovered connections between DNNs and Gaussian processes to estimate the marginal likelihood, we produce relatively tight generalization PAC-Bayes error bounds which correlate well with the true error on realistic datasets such as MNIST and CIFAR10 and for architectures including convolutional and fully connected networks. ","Deep learning generalizes because the parameter-function map is biased
towards simple functions",1,"[""Finally!!! We released the paper with the work I've been doing for the last months! \nWe give a new perspective on why deep neural networks generalize, which I think it's quite interesting, and I think more promising than other approaches. I should write … ""]",18,05,261
409,42,950750788845961216,23000769,Christopher Conselice,Our new paper is out on a VLT/MUSE study of a new compact lensing massive cluster - the Clio Cluster - which is one of our JWST GTO targets. Has low intracluster light - making high-z galaxies easier to study in the next generation of deep searches: ,https://arxiv.org/abs/1801.01140,"We present the results of a VLT MUSE/FORS2 and Spitzer survey of a unique compact lensing cluster CLIO at z = 0.42, discovered through the GAMA survey using spectroscopic redshifts. Compact and massive clusters such as this are understudied, but provide a unique prospective on dark matter distributions and for finding background lensed high-z galaxies. The CLIO cluster was identified for follow up observations due to its almost unique combination of high mass and dark matter halo concentration, as well as having observed lensing arcs from ground based images. Using dual band optical and infra-red imaging from FORS2 and Spitzer, in combination with MUSE optical spectroscopy we identify 89 cluster members and find background sources out to z = 6.49. We describe the physical state of this cluster, finding a strong correlation between environment and galaxy spectral type. Under the assumption of a NFW profile, we measure the total mass of CLIO to be M$_{200} = (4.49 \pm 0.25) \times 10^{14}$ M$_\odot$. We build and present an initial strong-lensing model for this cluster, and measure a relatively low intracluster light (ICL) fraction of 7.21 $\pm$ 1.53% through galaxy profile fitting. Due to its strong potential for lensing background galaxies and its low ICL, the CLIO cluster will be a target for our 110 hour JWST 'Webb Medium-Deep Field' (WMDF) GTO program. ","MUSE spectroscopy and deep observations of a unique compact JWST target,
lensing cluster CLIO",1,['Our new paper is out on a VLT/MUSE study of a new compact lensing massive cluster - the Clio Cluster - which is one of our JWST GTO targets. Has low intracluster light - making high-z galaxies easier to study in the next generation of deep searches:\n\n'],18,01,256
410,60,1161460501991182336,9674682,Kohei Hayashi,"Happy to share our new paper w/ Yamaguchi, Sugawara & Maeda. We show a lot of light-weight CNN modules are graphically representable as tensor networks. Architecture search in the graphs finds dense Pareto solutions for the accuracy/efficiency tradeoff. ",https://arxiv.org/abs/1908.04471,"Tensor decomposition methods are widely used for model compression and fast inference in convolutional neural networks (CNNs). Although many decompositions are conceivable, only CP decomposition and a few others have been applied in practice, and no extensive comparisons have been made between available methods. Previous studies have not determined how many decompositions are available, nor which of them is optimal. In this study, we first characterize a decomposition class specific to CNNs by adopting a flexible graphical notation. The class includes such well-known CNN modules as depthwise separable convolution layers and bottleneck layers, but also previously unknown modules with nonlinear activations. We also experimentally compare the tradeoff between prediction accuracy and time/space complexity for modules found by enumerating all possible decompositions, or by using a neural architecture search. We find some nonlinear decompositions outperform existing ones. ","Einconv: Exploring Unexplored Tensor Network Decompositions for
Convolutional Neural Networks",1,"['Happy to share our new paper w/ Yamaguchi, Sugawara & Maeda. We show a lot of light-weight CNN modules are graphically representable as tensor networks. Architecture search in the graphs finds dense Pareto solutions for the accuracy/efficiency tradeoff. \n ']",19,08,267
411,63,1261378663137906691,937127267850846208,Mohammad Javad Amiri,"Check out our new paper: ""SEPAR: A Privacy-Preserving Blockchain-based System for Regulating Multi-Platform Crowdworking Environments"". SEPAR is a multi-platform crowdworking system that enforces global constraints on distributed independent entities. ",https://arxiv.org/abs/2005.01038,"Crowdworking platforms provide the opportunity for diverse workers to execute tasks for different requesters. The popularity of the ""gig"" economy has given rise to independent platforms that provide competing and complementary services. Workers as well as requesters with specific tasks may need to work for or avail from the services of multiple platforms resulting in the rise of multi-platform crowdworking systems. Recently, there has been increasing interest by governmental, legal and social institutions to enforce regulations, such as minimal and maximal work hours, on crowdworking platforms. Platforms within multi-platform crowdworking systems, therefore, need to collaborate to enforce cross-platform regulations. While collaborating to enforce global regulations requires the transparent sharing of information about tasks and their participants, the privacy of all participants needs to be preserved. In this paper, we propose an overall vision exploring the regulation, privacy, and architecture dimensions for the future of work multi-platform crowdworking environments. We then present SEPAR, a multi-platform crowdworking system that enforces a large sub-space of practical global regulations on a set of distributed independent platforms in a privacy-preserving manner. SEPAR, enforces privacy using lightweight and anonymous tokens, while transparency is achieved using fault-tolerant blockchains shared across multiple platforms. The privacy guarantees of SEPAR against covert adversaries are formalized and thoroughly demonstrated, while the experiments reveal the efficiency of SEPAR in terms of performance and scalability. ","SEPAR: Towards Regulating Future of Work Multi-Platform Crowdworking
Environments with Privacy Guarantees",1,"['Check out our new paper: ""SEPAR: A Privacy-Preserving Blockchain-based System for Regulating Multi-Platform Crowdworking Environments"". SEPAR is a multi-platform crowdworking system that enforces global constraints on distributed independent entities.\n\n']",20,05,258
412,11,1300674685378744320,1248290263698718721,Peter Dueben,"Our new paper on post-processing of precipitation predictions over the UK is out on the arxiv: We tackle scale interactions of the atmosphere in space and time using fused temporal cross attention in combination with ConvGrus and improve results. Great collaboration between @warwickuni with Rilwan Adewoyin, Ritabrata Dutta and @Yulanhe, @BristolUni with @PeterAGWatson and @ECMWF.",https://arxiv.org/abs/2008.09090,"Climate models (CM) are used to evaluate the impact of climate change on the risk of floods and strong precipitation events. However, these numerical simulators have difficulties representing precipitation events accurately, mainly due to limited spatial resolution when simulating multi-scale dynamics in the atmosphere. To improve the prediction of high resolution precipitation we apply a Deep Learning (DL) approach using an input of CM simulations of the model fields (weather variables) that are more predictable than local precipitation. To this end, we present TRU-NET (Temporal Recurrent U-Net), an encoder-decoder model featuring a novel 2D cross attention mechanism between contiguous convolutional-recurrent layers to effectively model multi-scale spatio-temporal weather processes. We use a conditional-continuous loss function to capture the zero-skewed %extreme event patterns of rainfall. Experiments show that our model consistently attains lower RMSE and MAE scores than a DL model prevalent in short term precipitation prediction and improves upon the rainfall predictions of a state-of-the-art dynamical weather model. Moreover, by evaluating the performance of our model under various, training and testing, data formulation strategies, we show that there is enough data for our deep learning approach to output robust, high-quality results across seasons and varying regions. ","TRU-NET: A Deep Learning Approach to High Resolution Prediction of
Rainfall",2,"['Our new paper on post-processing of precipitation predictions over the UK is out on the arxiv: \nWe tackle scale interactions of the atmosphere in space and time using fused temporal cross attention in combination with ConvGrus and improve results. ', 'Great collaboration between @warwickuni with Rilwan Adewoyin, Ritabrata Dutta and @Yulanhe, @BristolUni with @PeterAGWatson and @ECMWF.']",20,08,396
413,100,1438215447783030786,346719335,Keyon Vafa,"New paper: Consider a sequence generated by a language model. Which words were most important for generating each word? We propose greedy rationalization: greedily finding the smallest subset of words that would make the same prediction as the full text. Consider a sequence generated by GPT-2: ""The court struck down the law because it was unconstitutional"" Which words were most important for predicting ""unconstitutional""? The greedy algorithm starts with an empty set and adds words until ""unconstitutional"" is the top prediction How do we evaluate sequential rationales? There are some datasets with annotated rationales for classification, but these don't extend to sequence models. So we collected our own sequential rationale dataset based on Lambada. Paper: Github: Demo: With: @yuntiandeng, @blei_lab, @srush_nlp",https://arxiv.org/abs/2109.06387,"Sequence models are a critical component of modern NLP systems, but their predictions are difficult to explain. We consider model explanations though rationales, subsets of context that can explain individual model predictions. We find sequential rationales by solving a combinatorial optimization: the best rationale is the smallest subset of input tokens that would predict the same output as the full sequence. Enumerating all subsets is intractable, so we propose an efficient greedy algorithm to approximate this objective. The algorithm, which is called greedy rationalization, applies to any model. For this approach to be effective, the model should form compatible conditional distributions when making predictions on incomplete subsets of the context. This condition can be enforced with a short fine-tuning step. We study greedy rationalization on language modeling and machine translation. Compared to existing baselines, greedy rationalization is best at optimizing the combinatorial objective and provides the most faithful rationales. On a new dataset of annotated sequential rationales, greedy rationales are most similar to human rationales. ",Rationales for Sequential Predictions,4,"['New paper: \n\nConsider a sequence generated by a language model. Which words were most important for generating each word?\n\nWe propose greedy rationalization: greedily finding the smallest subset of words that would make the same prediction as the full text. ', 'Consider a sequence generated by GPT-2: ""The court struck down the law because it was unconstitutional""\n\nWhich words were most important for predicting ""unconstitutional""?\n\nThe greedy algorithm starts with an empty set and adds words until ""unconstitutional"" is the top prediction https://t.co/a9NXHFTjuR', ""How do we evaluate sequential rationales? \n\nThere are some datasets with annotated rationales for classification, but these don't extend to sequence models.\n\nSo we collected our own sequential rationale dataset based on Lambada. https://t.co/liPVCfNoLM"", 'Paper: https://t.co/8CjhtqB1j8\nGithub: https://t.co/4NzpZ255bg\nDemo: https://t.co/C9dmp255VV\n\nWith: @yuntiandeng, @blei_lab, @srush_nlp']",21,09,872
414,82,1394264179993808907,369569444,Takahiro TERADA (寺田 隆広),"Our new paper on ""Massless Preheating and Electroweak Vacuum Metastability"" Precise measurements of Standard Model parameters suggest that the electroweak vacuum is metastable. We need to ensure the (meta)stability throughout cosmological history. Quantum fluctuations of the Higgs field during cosmic inflation and parametric/tachyonic instability after that might destabilize the electroweak vacuum. We can stabilize the Higgs by introducing an effective mass term in the form of Higgs-inflaton and/or gravitational couplings. However, the stability during and after inflation is typically in a trade-off relation. It is important to study the condition that ensures stability throughout these epochs. In our work, we focus on (quasi) scale-invariant models with quartic potentials and non-minimal gravitational couplings. (Why scale invariance? 1. It can explain the flatness of the inflaton potential consistently with observations. 2. It might explain the hierarchy problem.) Naively, scale invariance implies unimpeded growth of resonance and hence unavoidable destabilization of the vacuum, which is a cosmological catastrophe. However, we find this is not the case taking into account the perturbative Higgs decay and backreaction of produced particles. We find nontrivial and dynamical interplays between the effects of quartic and curvature couplings, which can partially cancel each other in some cases. We finally find disjoint ""islands of (meta)stability"" in the couplings parameter space. ",http://arxiv.org/abs/2105.06939,"Current measurements of Standard Model parameters suggest that the electroweak vacuum is metastable. This metastability has important cosmological implications, because large fluctuations in the Higgs field could trigger vacuum decay in the early universe. For the false vacuum to survive, interactions which stabilize the Higgs during inflation -- e.g., inflaton-Higgs interactions or non-minimal couplings to gravity -- are typically necessary. However, the post-inflationary preheating dynamics of these same interactions could also trigger vacuum decay, thereby recreating the problem we sought to avoid. This dynamics is often assumed catastrophic for models exhibiting scale invariance since these generically allow for unimpeded growth of fluctuations. In this paper, we examine the dynamics of such ""massless preheating"" scenarios and show that the competing threats to metastability can nonetheless be balanced to ensure viability. We find that fully accounting for both the backreaction from particle production and the effects of perturbative decays reveals a large number of disjoint ""islands of (meta)stability"" over the parameter space of couplings. Ultimately, the interplay among Higgs-stabilizing interactions plays a significant role, leading to a sequence of dynamical phases that effectively extend the metastable regions to large Higgs-curvature couplings. ",Massless Preheating and Electroweak Vacuum Metastability,6,"['Our new paper on ""Massless Preheating and Electroweak Vacuum Metastability"" \nPrecise measurements of Standard Model parameters suggest that the electroweak vacuum is metastable. We need to ensure the (meta)stability throughout cosmological history.', 'Quantum fluctuations of the Higgs field during cosmic inflation and parametric/tachyonic instability after that might destabilize the electroweak vacuum. 
We can stabilize the Higgs by introducing an effective mass term in the form of Higgs-inflaton and/or gravitational couplings.', 'However, the stability during and after inflation is typically in a trade-off relation. It is important to study the condition that ensures stability throughout these epochs. https://t.co/yXRKN1rELr', 'In our work, we focus on (quasi) scale-invariant models with quartic potentials and non-minimal gravitational couplings. (Why scale invariance? 1. It can explain the flatness of the inflaton potential consistently with observations. 2. It might explain the hierarchy problem.) https://t.co/PX66DX1WKU', 'Naively, scale invariance implies unimpeded growth of resonance and hence unavoidable destabilization of the vacuum, which is a cosmological catastrophe. However, we find this is not the case taking into account the perturbative Higgs decay and backreaction of produced particles. https://t.co/onEH7axRcP', 'We find nontrivial and dynamical interplays between the effects of quartic and curvature couplings, which can partially cancel each other in some cases. We finally find disjoint ""islands of (meta)stability"" in the couplings parameter space. https://t.co/7SnuXWK6NK']",21,05,1537
415,6,1446143854588035083,280403336,Sean Welleck,"new paper: ""Symbolic Brittleness in Sequence Models: on Systematic Generalization in Symbolic Mathematics"" Sequence models show amazing performance on many tasks. Does perfect test accuracy tell the full story? w/ @PeterWestTM, @JizeCao, @YejinChoinka We consider symbolic integration, as it requires generalizing systematically beyond the test set and is verifiable. Despite high test accuracy, we find deficiencies in robustness, compositionality, and OOD generalization in a state-of-the-art MLE seq2seq model for this task. We develop a genetic algorithm 🧬 (SAGGA) which automatically discovers (thousands of) failures that highlight each type of generalization, and test suites that perturb and compose validation problems & simple functions. Robustness tells us whether the model systematically solves all problems in a neighborhood, typically governed by a generalizable pattern. The model is surprisingly brittle when test problems or simple functions are slightly changed. Regarding compositionality, successfully integrating two functions did not imply that the model learned to integrate their sum (recall the sum rule of integration ) Moving further from the training distribution: Performance degrades for integers and problem sizes larger than those typically encountered in training (extrapolation) And functions not covered in the training set (""exploits"") We also study the effect of increasing the search budget and whether it is a search problem alone -- check out the paper! Stay tuned for code, which we plan to release.",https://arxiv.org/abs/2109.13986,"Neural sequence models trained with maximum likelihood estimation have led to breakthroughs in many tasks, where success is defined by the gap between training and test performance. However, their ability to achieve stronger forms of generalization remains unclear. We consider the problem of symbolic mathematical integration, as it requires generalizing systematically beyond the test set. We develop a methodology for evaluating generalization that takes advantage of the problem domain's structure and access to a verifier. Despite promising in-distribution performance of sequence-to-sequence models in this domain, we demonstrate challenges in achieving robustness, compositionality, and out-of-distribution generalization, through both carefully constructed manual test suites and a genetic algorithm that automatically finds large collections of failures in a controllable manner. Our investigation highlights the difficulty of generalizing well with the predominant modeling and learning approach, and the importance of evaluating beyond the test set, across different aspects of generalization. ","Symbolic Brittleness in Sequence Models: on Systematic Generalization in
Symbolic Mathematics",8,"['new paper:\n\n""Symbolic Brittleness in Sequence Models: on Systematic Generalization in Symbolic Mathematics""\n\nSequence models show amazing performance on many tasks. Does perfect test accuracy tell the full story?\n\nw/ @PeterWestTM, @JizeCao, @YejinChoinka \n\n ', 'We consider symbolic integration, as it requires generalizing systematically beyond the test set and is verifiable.\n\nDespite high test accuracy, we find deficiencies in robustness, compositionality, and OOD generalization in a state-of-the-art MLE seq2seq model for this task. https://t.co/ErGkN7ve9w', 'We develop a genetic algorithm 🧬 (SAGGA) which automatically discovers (thousands of) failures that highlight each type of generalization, and test suites that perturb and compose validation problems & simple functions. https://t.co/ofikc0H0HQ', 'Robustness tells us whether the model systematically solves all problems in a neighborhood, typically governed by a generalizable pattern.\n\nThe model is surprisingly brittle when test problems or simple functions are slightly changed. https://t.co/qWAQ51ZPIC', 'Regarding compositionality, successfully integrating two functions did not imply that the model learned to integrate their sum \n\n(recall the sum rule of integration https://t.co/Oo2XOVaUKv) https://t.co/78QyBDe1sf', 'Moving further from the training distribution:\n\nPerformance degrades for integers and problem sizes larger than those typically encountered in training (extrapolation)\n\nAnd functions not covered in the training set (""exploits"") https://t.co/I57XDM5KrH', 'We also study the effect of increasing the search budget and whether it is a search problem alone -- check out the paper! https://t.co/wIpDcDRL0r', 'Stay tuned for code, which we plan to release.']",21,09,1605
416,78,1274066323920822273,841499391508779008,Zico Kolter,"Another new DEQ-related paper this week! @ezra_winston develops a framework for equilibrium models with unique fixed points convergent solvers, based upon monotone operator theory. Paper: Code: Talk: At a high level, the work illustrates a strong connection between the infinite-depth limit of some ""simple"" deep networks, and monotone operator splitting methods. The result generally connects implicit-depth models to this well-studied paradigm. Works better than the ""pure"" version of other implicit-layer models like (augmented) Neural ODEs, while still providing existence, uniqueness, and stability guarantees (unlike traditional DEQ models). Works well with convolutional networks using FFT-based techniques. ",https://arxiv.org/abs/2006.08591,"Implicit-depth models such as Deep Equilibrium Networks have recently been shown to match or exceed the performance of traditional deep networks while being much more memory efficient. However, these models suffer from unstable convergence to a solution and lack guarantees that a solution exists. On the other hand, Neural ODEs, another class of implicit-depth models, do guarantee existence of a unique solution but perform poorly compared with traditional networks. In this paper, we develop a new class of implicit-depth model based on the theory of monotone operators, the Monotone Operator Equilibrium Network (monDEQ). We show the close connection between finding the equilibrium point of an implicit network and solving a form of monotone operator splitting problem, which admits efficient solvers with guaranteed, stable convergence. We then develop a parameterization of the network which ensures that all operators remain monotone, which guarantees the existence of a unique equilibrium point. Finally, we show how to instantiate several versions of these models, and implement the resulting iterative solvers, for structured linear operators such as multi-scale convolutions. The resulting models vastly outperform the Neural ODE-based models while also being more computationally efficient. Code is available at this http URL ",Monotone operator equilibrium networks,3,"['Another new DEQ-related paper this week! @ezra_winston develops a framework for equilibrium models with unique fixed points convergent solvers, based upon monotone operator theory.\n\nPaper: \nCode: \nTalk: ', 'At a high level, the work illustrates a strong connection between the infinite-depth limit of some ""simple"" deep networks, and monotone operator splitting methods. The result generally connects implicit-depth models to this well-studied paradigm. https://t.co/Tg3ijIOwvA', 'Works better than the ""pure"" version of other implicit-layer models like (augmented) Neural ODEs, while still providing existence, uniqueness, and stability guarantees (unlike traditional DEQ models). Works well with convolutional networks using FFT-based techniques. https://t.co/uY8k5KwCjr']",20,06,749
417,12,1311328604756860931,966760075074461697,Brian Thomas,"Our new paper is out on the arXiv today, revisiting the question of the threat posed by gamma-ray bursts in light of recent detections of TeV photons - check it out! @XimenaAbrevaya There's a couple of recent ones on supernovae in case you missed them: ",http://arxiv.org/abs/2009.14078,"We analyze the additional effect on planetary atmospheres of recently detected gamma-ray burst afterglow photons in the range up to 1 TeV. For an Earth-like atmosphere we find that there is a small additional depletion in ozone versus that modeled for only prompt emission. We also find a small enhancement of muon flux at the planet surface. Overall, we conclude that the additional afterglow emission, even with TeV photons, does not result in a significantly larger impact over that found in past studies. ",Gamma Ray Bursts: Not so Much Deadlier than We Thought,2,"['Our new paper is out on the arXiv today, revisiting the question of the threat posed by gamma-ray bursts in light of recent detections of TeV photons - check it out!\n', ""@XimenaAbrevaya There's a couple of recent ones on supernovae in case you missed them:\nhttps://t.co/f2RBXbXR5V\nhttps://t.co/rCWs4wUL7p""]",20,09,273
418,85,1372576318374563842,21611239,Sean Carroll,"All of reality can be modeled as just a vector looping around in a really-big Hilbert space. New semi-technical paper from me: Roughly speaking this is the perspective advocated by @ashmeetastro and me in our Mad-Dog Everettianism paper. The new one is more aimed at philosophers. And yes, this is the paper that I wrote first, before putting any references in. Hopefully I didn't forget anyone. But I'm confident they will remind me if I did. The paper leans into the idea that the fundamental nature of reality could be *radically* different from our familiar world of objects moving around in space and interacting with each other. All that stuff is a higher-level emergent approximation. It's perfectly okay to be skeptical precisely because of that radical divergence between theory and experience. But it's also worth considering. Who says the fundamental nature of reality should be anything at all like our everyday experience? @LizardOrman Actually only quantum systems can be represented that way, not classical ones. The question is whether the vector is fundamental, or is built on top of some ontology such as particles/fields propagating in space. @henry_maxfield I presume there are any number of equivalent ways to represent the structure of a quantum theory; in the Schrödinger picture it's the Hamiltonian. You use a time parameter in the construction, but it's not necessarily ""preferred,"" as I discuss in the paper. @watsona4 Thanks! @henry_maxfield Yes, that would be how boosts work in the standard Hamiltonian formulation of QFT. But here it might only be an approximate symmetry. @henry_maxfield I think a physical realization should be thought of as an equivalence class of solutions, each corresponding to a choice of time variable. @henry_maxfield This might not be a Twitterable conversation. Maybe in person someday.",https://arxiv.org/abs/2103.09780,"I defend the extremist position that the fundamental ontology of the world consists of a vector in Hilbert space evolving according to the Schr\""odinger equation. The laws of physics are determined solely by the energy eigenspectrum of the Hamiltonian. The structure of our observed world, including space and fields living within it, should arise as a higher-level emergent description. I sketch how this might come about, although much work remains to be done. ",Reality as a Vector in Hilbert Space,11,"['All of reality can be modeled as just a vector looping around in a really-big Hilbert space. New semi-technical paper from me:\n', 'Roughly speaking this is the perspective advocated by @ashmeetastro and me in our Mad-Dog Everettianism paper. The new one is more aimed at philosophers.\nhttps://t.co/R3SolxesK1', ""And yes, this is the paper that I wrote first, before putting any references in. Hopefully I didn't forget anyone. But I'm confident they will remind me if I did."", 'The paper leans into the idea that the fundamental nature of reality could be *radically* different from our familiar world of objects moving around in space and interacting with each other. All that stuff is a higher-level emergent approximation.', ""It's perfectly okay to be skeptical precisely because of that radical divergence between theory and experience. But it's also worth considering. Who says the fundamental nature of reality should be anything at all like our everyday experience?"", '@LizardOrman Actually only quantum systems can be represented that way, not classical ones. 
The question is whether the vector is fundamental, or is built on top of some ontology such as particles/fields propagating in space.', '@henry_maxfield I presume there are any number of equivalent ways to represent the structure of a quantum theory; in the Schrödinger picture it\'s the Hamiltonian. You use a time parameter in the construction, but it\'s not necessarily ""preferred,"" as I discuss in the paper.', '@watsona4 Thanks!', '@henry_maxfield Yes, that would be how boosts work in the standard Hamiltonian formulation of QFT. But here it might only be an approximate symmetry.', '@henry_maxfield I think a physical realization should be thought of as an equivalence class of solutions, each corresponding to a choice of time variable.', '@henry_maxfield This might not be a Twitterable conversation. Maybe in person someday.']",21,03,1859
419,5,1213889108751241217,2800204849,Andrew Gordon Wilson,"“What I cannot create, I do not understand”. We develop normalizing flows for end-to-end fully generative semi-supervised classification (with code)! Our new paper, with @Pavel_Izmailov, @polkirichenko, @m_finzi: (1/8) The discriminative approach to classification models the probability of a class label given an input p(y|x) directly. The generative, approach, by contrast, models the class conditional density p(x|y), then finds p(y|x) with Bayes rule. 2/8 Nearly all classifiers are discriminative. Even approaches that use a generator typically involve a discriminator in the pipeline. For example, sometimes one learns a generator on unlabelled data, then recycles the representation as part of a discriminative classifier. 3/8 Generative models are compelling because we are trying to create an object of interest. The challenge in generative modelling is that standard approaches to density estimation are poor descriptions of high-dimensional natural signals. 4/8 For example, a Gaussian mixture directly over images, while highly flexible for density estimation, would specify similarities between images as related to Euclidean distances between pixel intensities, which is a poor inductive bias for translation and other invariances. 5/8 Normalizing flows provide a pleasingly simple approach to generative modelling. By transforming a latent distribution through an invertible network, we have both an exact likelihood for the data, and useful inductive biases from a convolutional neural network. 6/8 FlowGMM models the latent space as a Gaussian mixture, where each mixture component is associated with a class label. This approach specifies an exact joint likelihood over both labelled and unlabelled data for end-to-end training. 7/8 FlowGMM has broad applicability. We consider text, tabular, and image data. FlowGMM can also discover interpretable structure, provide real-time optimization-free feature visualizations, and specify well calibrated predictive distributions. 8/8 @matvil @Pavel_Izmailov @polkirichenko @m_finzi Normalizing flows require an invertible NN, which imposes some constraints on speed and architectural design. However, invertible NNs are rapidly improving, such that approaches based on normalizing flows are becoming increasingly compelling.",http://arxiv.org/abs/1912.13025,"Normalizing flows transform a latent distribution through an invertible neural network for a flexible and pleasingly simple approach to generative modelling, while preserving an exact likelihood. We propose FlowGMM, an end-to-end approach to generative semi supervised learning with normalizing flows, using a latent Gaussian mixture model. FlowGMM is distinct in its simplicity, unified treatment of labelled and unlabelled data with an exact likelihood, interpretability, and broad applicability beyond image data. We show promising results on a wide range of applications, including AG-News and Yahoo Answers text data, tabular data, and semi-supervised image classification. We also show that FlowGMM can discover interpretable structure, provide real-time optimization-free feature visualizations, and specify well calibrated predictive distributions. ",Semi-Supervised Learning with Normalizing Flows,9,"['“What I cannot create, I do not understand”. We develop normalizing flows for end-to-end fully generative semi-supervised classification (with code)! 
Our new paper, with @Pavel_Izmailov, @polkirichenko, @m_finzi: (1/8) ', 'The discriminative approach to classification models the probability of a class label given an input p(y|x) directly. The generative, approach, by contrast, models the class conditional density p(x|y), then finds p(y|x) with Bayes rule. 2/8', 'Nearly all classifiers are discriminative. Even approaches that use a generator typically involve a discriminator in the pipeline. For example, sometimes one learns a generator on unlabelled data, then recycles the representation as part of a discriminative classifier. 3/8', 'Generative models are compelling because we are trying to create an object of interest. The challenge in generative modelling is that standard approaches to density estimation are poor descriptions of high-dimensional natural signals. 4/8', 'For example, a Gaussian mixture directly over images, while highly flexible for density estimation, would specify similarities between images as related to Euclidean distances between pixel intensities, which is a poor inductive bias for translation and other invariances. 5/8', 'Normalizing flows provide a pleasingly simple approach to generative modelling. By transforming a latent distribution through an invertible network, we have both an exact likelihood for the data, and useful inductive biases from a convolutional neural network. 6/8', 'FlowGMM models the latent space as a Gaussian mixture, where each mixture component is associated with a class label. This approach specifies an exact joint likelihood over both labelled and unlabelled data for end-to-end training. 7/8', 'FlowGMM has broad applicability. We consider text, tabular, and image data. FlowGMM can also discover interpretable structure, provide real-time optimization-free feature visualizations, and specify well calibrated predictive distributions. 8/8', '@matvil @Pavel_Izmailov @polkirichenko @m_finzi Normalizing flows require an invertible NN, which imposes some constraints on speed and architectural design. However, invertible NNs are rapidly improving, such that approaches based on normalizing flows are becoming increasingly compelling.']",19,12,2300
420,128,1248598759350599682,1068545181576773632,Kenneth Brown,"New paper on QCCD ion trap architectures for near-term devices (). A great collaboration with Prakash Murali (@MartonosiGroup) , @margmartonosi, & @DriptoDebroy (@DukePhysics). Paper accepted at @ISCAConfOrg. #DukeQuantum @DukeEngineering @Princeton In the ion trap community there is a bit of a divide between what is the right size of an ion chain. On the one hand, small ion chains have been shown to have incredible fidelities. On the other hand, long ion chains allow you to grow Hilbert space more easily. There are two main camps: (1) shuttling with 2-4 ion chains is the best and (2) the longest chain that works is the best. Here we study what is the best ion chain length in an architecture that has long chains and shuttling. It turns out to depend on the application. The collaboration between Duke and Princeton made possible by @NSF @EPiQCExpedition.",http://arxiv.org/abs/2004.04706,"Trapped ions (TI) are a leading candidate for building Noisy Intermediate-Scale Quantum (NISQ) hardware. TI qubits have fundamental advantages over other technologies such as superconducting qubits, including high qubit quality, coherence and connectivity. However, current TI systems are small in size, with 5-20 qubits and typically use a single trap architecture which has fundamental scalability limitations. To progress towards the next major milestone of 50-100 qubits, a modular architecture termed the Quantum Charge Coupled Device (QCCD) has been proposed. In a QCCD-based TI device, small traps are connected through ion shuttling. While the basic hardware components for such devices have been demonstrated, building a 50-100 qubit system is challenging because of a wide range of design possibilities for trap sizing, communication topology and gate implementations and the need to match diverse application resource requirements. Towards realizing QCCD systems with 50-100 qubits, we perform an extensive architectural study evaluating the key design choices of trap sizing, communication topology and operation implementation methods. We built a design toolflow which takes a QCCD architecture's parameters as input, along with a set of applications and realistic hardware performance models. Our toolflow maps the applications onto the target device and simulates their execution to compute metrics such as application run time, reliability and device noise rates. Using six applications and several hardware design points, we show that trap sizing and communication topology choices can impact application reliability by up to three orders of magnitude. Microarchitectural gate implementation choices influence reliability by another order of magnitude. From these studies, we provide concrete recommendations to tune these choices to achieve highly reliable and performant application executions. ",Architecting Noisy Intermediate-Scale Trapped Ion Quantum Computers,4,"['New paper on QCCD ion trap architectures for near-term devices ().\nA great collaboration with Prakash Murali (@MartonosiGroup) , @margmartonosi, & @DriptoDebroy (@DukePhysics). Paper accepted at @ISCAConfOrg. #DukeQuantum @DukeEngineering @Princeton', 'In the ion trap community there is a bit of a divide between what is the right size of an ion chain. On the one hand, small ion chains have been shown to have incredible fidelities. 
On the other hand, long ion chains allow you to grow Hilbert space more easily.', 'There are two main camps: (1) shuttling with 2-4 ion chains is the best and (2) the longest chain that works is the best. Here we study what is the best ion chain length in an architecture that has long chains and shuttling. It turns out to depend on the application.', 'The collaboration between Duke and Princeton made possible by @NSF @EPiQCExpedition.']",20,04,870
421,208,1313760473125445632,2603024598,Ricardo Pérez-Marco,"New paper with @CGrunspan! We define the notion of ""profit lag"" and compute the profitability of the new ""alternate mining strategy"" that exploits forks with the same PoW.#Bitcoin (We also correct gross errors in a paper by @PeterRizun and @el33th4xor) ",https://arxiv.org/abs/2010.02671,"For a mining strategy we define the notion of ""profit lag"" as the minimum time it takes to be profitable after that moment. We compute closed forms for the profit lag and the revenue ratio for the strategies ""selfish mining"" and ""intermittent selfish mining"". This confirms some earlier numerical simulations and clarifies misunderstandings on profitability in the literature. We also study mining pairs of PoW cryptocurrencies, often coming from a fork, with the same mining algorithm. This represents a vector of attack that can be exploited using the ""alternate network mining"" strategy that we define. We compute closed forms for the profit lag and the revenue ratio for this strategy that is more profitable than selfish mining and intermittent selfish mining. It is also harder to counter since it does not rely on a flaw in the difficulty adjustment formula that is the reason for profitability of the other strategies. ",Profit lag and alternate network mining,1,"['New paper with @CGrunspan! We define the notion of ""profit lag"" and compute the profitability of the new ""alternate mining strategy"" that exploits forks with the same PoW.#Bitcoin\n\n(We also correct gross errors in a paper by @PeterRizun and @el33th4xor)\n\n ']",20,10,266
422,19,1209288887400570880,312448486,Dr. Karan Jani,NEW PAPER - we revisit an intermediate mass black hole trigger in @LIGO. We find it to be consistent with binary black hole merger of 150 SOLAR MASSES. Heard right - BLACK HOLES that are one-hundred-fifty times heavier than Sun ! Beat that 2019🖖🏻 @alexandernitz @LIGO Indeed we focused on the new PE machinery. This trigger was very marginal for the matched filtering searches. Was loud (significant) only for the Burst search. Not surprising though 🙌🏻,https://arxiv.org/abs/1912.10533,"Gravitational wave (GW) measurements provide the most robust constraints of the mass of astrophysical black holes. Using state-of-the-art GW signal models and a unique parameter estimation technique, we infer the source parameters of the loudest marginal trigger, GW170502, found by LIGO from 2015 to 2017. If this trigger is assumed to be a binary black hole merger, we find it corresponds to a total mass in the source frame of $157^{+55}_{-41}~\rm{M}_\odot$ at redshift $z=1.37^{+0.93}_{-0.64}$. The primary and secondary black hole masses are constrained to $94^{+44}_{-28}~\rm{M}_{\odot}$ and $62^{+30}_{-25}~\rm{M}_{\odot}$ respectively, with 90\% confidence. Across all signal models, we find $\gtrsim 70\%$ probability for the effective spin parameter $\chi_\mathrm{eff}>0.1$. Furthermore, we find that the inclusion of higher-order modes in the analysis narrows the confidence region for the primary black hole mass by 10\%, however, the evidence for these modes in the data remains negligible. The techniques outlined in this study could lead to robust inference of the physical parameters for all intermediate-mass black hole binary candidates $(\gtrsim100~\mathrm{M}_\odot)$ in the current GW network. ","Inferring Parameters of GW170502: The Loudest Intermediate-mass Black
Hole Trigger in LIGO's O1/O2 data",2,"['NEW PAPER - we revisit an intermediate mass black hole trigger in @LIGO.\n\nWe find it to be consistent with binary black hole merger of 150 SOLAR MASSES. \n\nHeard right - BLACK HOLES that are one-hundred-fifty times heavier than Sun ! \n\nBeat that 2019🖖🏻\n\n ', '@alexandernitz @LIGO Indeed we focused on the new PE machinery. This trigger was very marginal for the matched filtering searches. Was loud (significant) only for the Burst search. Not surprising though 🙌🏻']",19,12,468
423,69,1306934928765079554,1282679296843288577,Jonathan Ullman,"Really pleased with this new paper with my PhD student Albert Cheu. We prove strong lower bounds for two ""intermediate models"" of differential privacy: the shuffle model and the pan-private model. 1/3 Our work builds in an essential way on Albert's awesome paper with Victor Balcer, @mgtjoseph, and Jieming Mao. 2/3 It's awesome to have such a productive and independent PhD student! I can't believe he's graduating in the Spring. I'm jealous of whoever gets to be his next boss and/or post-doc advisor! *wink wink* 3/3",https://arxiv.org/abs/2009.08000,"There has been a recent wave of interest in intermediate trust models for differential privacy that eliminate the need for a fully trusted central data collector, but overcome the limitations of local differential privacy. This interest has led to the introduction of the shuffle model (Cheu et al., EUROCRYPT 2019; Erlingsson et al., SODA 2019) and revisiting the pan-private model (Dwork et al., ITCS 2010). The message of this line of work is that, for a variety of low-dimensional problems -- such as counts, means, and histograms -- these intermediate models offer nearly as much power as central differential privacy. However, there has been considerably less success using these models for high-dimensional learning and estimation problems. In this work, we show that, for a variety of high-dimensional learning and estimation problems, both the shuffle model and the pan-private model inherently incur an exponential price in sample complexity relative to the central model. For example, we show that, private agnostic learning of parity functions over $d$ bits requires $\Omega(2^{d/2})$ samples in these models, and privately selecting the most common attribute from a set of $d$ choices requires $\Omega(d^{1/2})$ samples, both of which are exponential separations from the central model. Our work gives the first non-trivial lower bounds for these problems for both the pan-private model and the general multi-message shuffle model. ","The Limits of Pan Privacy and Shuffle Privacy for Learning and
Estimation",3,"['Really pleased with this new paper with my PhD student Albert Cheu. We prove strong lower bounds for two ""intermediate models"" of differential privacy: the shuffle model and the pan-private model. 1/3\n\n', ""Our work builds in an essential way on Albert's awesome paper with Victor Balcer, @mgtjoseph, and Jieming Mao. 2/3\n\nhttps://t.co/F1jA6pN8KZ"", ""It's awesome to have such a productive and independent PhD student! I can't believe he's graduating in the Spring. I'm jealous of whoever gets to be his next boss and/or post-doc advisor! *wink wink* 3/3""]",20,09,533
424,164,1514786754812780560,366380609,Evan Rosenman,"New paper with @LMiratrix just hit arXiv: ""Designing Experiments Toward Shrinkage Estimation"" considers how to design an RCT when the goal is to merge its causal estimates with those from an observational study to yield greater accuracy. (1/4) We operate in a stratified setting, and propose using an Empirical Bayes shrinker to combine the estimates, such that we have strong guarantees of risk reduction. We proceed using \kappa_2, a shrinker proposed in . (2/4) We show the exact risk of \kappa_2 can be computed via a numerical integral discussed in Bao and Kan (2013). The RCT design is optimized over the value of this integral. We propose three heuristics -- Neyman, naive, and robust allocations -- for designing under uncertainty. (3/4) Lastly, we show in a simulation study that the resultant designs outperform, whether or not there is unmeasured confounding in the observational study. We hope these results can help researchers to better leverage obs. data while retaining risk reduction guarantees. (4/4) Addendum: I am deeply disappointed that I did not start this thread with: ""A paper for Pesach!""",https://arxiv.org/abs/2204.06687,"We consider how increasingly available observational data can be used to improve the design of randomized controlled trials (RCTs). We seek to design a prospective RCT, with the intent of using an Empirical Bayes estimator to shrink the causal estimates from our trial toward causal estimates obtained from an observational study. We ask: how might we design the experiment to better complement the observational study in this setting? We propose using an estimator that shrinks each component of the RCT causal estimator toward its observational counterpart by a factor proportional to its variance. First, we show that the risk of this estimator can be computed efficiently via numerical integration. We then propose algorithms for determining the best allocation of units to strata (the best ""design""). We consider three options: Neyman allocation; a ""naive"" design assuming no unmeasured confounding in the observational study; and a ""defensive"" design accounting for the imperfect parameter estimates we would obtain from the observational study with unmeasured confounding. We also incorporate results from sensitivity analysis to establish guardrails on the designs, so that our experiment could be reasonably analyzed with and without shrinkage. We demonstrate the superiority of these experimental designs with a simulation study involving causal inference on a rare, binary outcome. ",Designing Experiments Toward Shrinkage Estimation,5,"['New paper with @LMiratrix just hit arXiv: \n""Designing Experiments Toward Shrinkage Estimation"" considers how to design an RCT when the goal is to merge its causal estimates with those from an observational study to yield greater accuracy. (1/4)', 'We operate in a stratified setting, and propose using an Empirical Bayes shrinker to combine the estimates, such that we have strong guarantees of risk reduction. We proceed using \\kappa_2, a shrinker proposed in https://t.co/Gle8NlpGto. (2/4)', 'We show the exact risk of \\kappa_2 can be computed via a numerical integral discussed in Bao and Kan (2013). The RCT design is optimized over the value of this integral. We propose three heuristics -- Neyman, naive, and robust allocations -- for designing under uncertainty. 
(3/4)', 'Lastly, we show in a simulation study that the resultant designs outperform, whether or not there is unmeasured confounding in the observational study. We hope these results can help researchers to better leverage obs. data while retaining risk reduction guarantees. (4/4)', 'Addendum: I am deeply disappointed that I did not start this thread with: ""A paper for Pesach!""']",22,04,1127
425,63,1440605094068703234,15242431,André Meyer-Vitali 👁️👁️,Pre-print of our new paper to extend Modular Design Patterns for Hybrid Learning and Reasoning to Actors. #NeurIPS2021 #neurosymbolicai #multiagentsystems #mas #ai #patterns #softwareengineering #hmi #trustworthyai #trust #transparency #hybrid #agents ,https://arxiv.org/abs/2109.09331,"Recently, a boxology (graphical language) with design patterns for hybrid AI was proposed, combining symbolic and sub-symbolic learning and reasoning. In this paper, we extend this boxology with actors and their interactions. The main contributions of this paper are: 1) an extension of the taxonomy to describe distributed hybrid AI systems with actors and interactions; and 2) showing examples using a few design patterns relevant in multi-agent systems and human-agent interaction. ",Modular Design Patterns for Hybrid Actors,1,['Pre-print of our new paper to extend Modular Design Patterns for Hybrid Learning and Reasoning to Actors. #NeurIPS2021 #neurosymbolicai #multiagentsystems #mas #ai #patterns #softwareengineering #hmi #trustworthyai #trust #transparency #hybrid #agents '],21,09,258
426,162,1450376093425360899,1063986211881009153,Bashar Alhafni,"🚨 New Dataset Alert 🚨 We are releasing the Arabic Parallel Gender Corpus v2.0 (APGC v2.0) for gender identification and rewriting in contexts involving one or two target users. Joint work with @nyhabash and @hbouamor at @CamelNlp Paper: (1/n) This corpus expands on its previous version (APGC v1.0) which was introduced by @nyhabash et al. in: by adding second person targets as well as increasing the total number of sentences over 6.5 times (~80K sentences), reaching over 590K words (2/n) We annotated ~63K Arabic sentences from the English-Arabic OpenSubtitles 2018 based on the genders of their first and second person references. In case a gendered reference exists, we introduce all the possible opposite gender forms (3/n) We also provide word-level gender annotations for all the Arabic sentences in the corpus (4/n) The corpus has multiple parallel components: four combinations of 1st and 2nd person in feminine and masculine grammatical genders, as well as English (as we got it from OpenSubtitles 2018) (5/n) We show that our corpus can also be used to detect and quantify bias in gender-unaware machine translation systems targeting Arabic. We machine translated the English sentences into Arabic and evaluated them based on different gender specificity factors in Arabic and English(6/n) So if you're working on gender bias, post-editing MT output, or personalization in general, you should check our corpus out! (n/n)",https://arxiv.org/abs/2110.09216,"Gender bias in natural language processing (NLP) applications, particularly machine translation, has been receiving increasing attention. Much of the research on this issue has focused on mitigating gender bias in English NLP models and systems. Addressing the problem in poorly resourced, and/or morphologically rich languages has lagged behind, largely due to the lack of datasets and resources. In this paper, we introduce a new corpus for gender identification and rewriting in contexts involving one or two target users (I and/or You) -- first and second grammatical persons with independent grammatical gender preferences. We focus on Arabic, a gender-marking morphologically rich language. The corpus has multiple parallel components: four combinations of 1st and 2nd person in feminine and masculine grammatical genders, as well as English, and English to Arabic machine translation output. This corpus expands on Habash et al. (2019)'s Arabic Parallel Gender Corpus (APGC v1.0) by adding second person targets as well as increasing the total number of sentences over 6.5 times, reaching over 590K words. Our new dataset will aid the research and development of gender identification, controlled text generation, and post-editing rewrite systems that could be used to personalize NLP applications and provide users with the correct outputs based on their grammatical gender preferences. We make the Arabic Parallel Gender Corpus (APGC v2.0) publicly available. ",The Arabic Parallel Gender Corpus 2.0: Extensions and Analyses,7,"['🚨 New Dataset Alert 🚨\n\nWe are releasing the Arabic Parallel Gender Corpus v2.0 (APGC v2.0) for gender identification and rewriting in contexts involving one or two target users. Joint work with @nyhabash and @hbouamor at @CamelNlp \n\nPaper: (1/n)', 'This corpus expands on its previous version (APGC v1.0) which was introduced by @nyhabash et al. 
in: https://t.co/FTvNPDEFhQ by adding second person targets as well as increasing the total number of sentences over 6.5 times (~80K sentences), reaching over 590K words (2/n)', 'We annotated ~63K Arabic sentences from the English-Arabic OpenSubtitles 2018 based on the genders of their first and second person references. In case a gendered reference exists, we introduce all the possible opposite gender forms (3/n) https://t.co/sp4nK1H7Pq', 'We also provide word-level gender annotations for all the Arabic sentences in the corpus (4/n) https://t.co/1dYkIxTeb7', 'The corpus has multiple parallel components: four combinations of 1st and 2nd person in feminine and masculine grammatical genders, as well as English (as we got it from OpenSubtitles 2018) (5/n) https://t.co/hQ6LhyACbY', 'We show that our corpus can also be used to detect and quantify bias in gender-unaware machine translation systems targeting Arabic. We machine translated the English sentences into Arabic and evaluated them based on different gender specificity factors in Arabic and English(6/n) https://t.co/ioZVWwQggJ', ""So if you're working on gender bias, post-editing MT output, or personalization in general, you should check our corpus out! (n/n)""]",21,10,1476
427,42,1397239818636169224,1243544508983279617,Horng Sheng Chia,New paper led by Javier! We found no evidence for binary black holes with spins that are anti-aligned with the orbit in the current BBH population. This suggests that BBHs cannot be formed solely through dynamical capture in dense stellar environments. ,https://arxiv.org/abs/2105.10580,"The distribution of effective spin $\chi_{\rm eff}$, a parameter that encodes the degree of spin-orbit alignment in a binary system, has been widely regarded as a robust discriminator between the isolated and dynamical formation pathways for merging binary black holes. Until the recent release of the GWTC-2 catalog, such tests have yielded inconclusive results due to the small number of events with measurable nonzero spins. In this work, we study the $\chi_{\rm eff}$ distribution of the binary black holes detected in the LIGO-Virgo O1-O3a observing runs. Our focus is on the degree to which the $\chi_{\rm eff}$ distribution is symmetric about $\chi_{\rm eff} = 0$ and whether the data provides support for a population of negative-$\chi_{\rm eff}$ systems. We find that the $\chi_{\rm eff}$ distribution is asymmetric at 95% credibility, with an excess of aligned-spin binary systems ($\chi_{\rm eff}>0$) over anti-aligned ones. Moreover, we find that there is no evidence for negative-$\chi_{\rm eff}$ systems in the current population of binary black holes. Thus, based solely on the $\chi_{\rm eff}$ distribution, dynamical formation is disfavored as being responsible for the entirety of the observed merging binary black holes, while isolated formation remains viable. We also study the mass distribution of the current binary black hole population, confirming that a single truncated power law distribution in the primary source-frame mass, $m_1^{\rm src}$, fails to describe the observations. Instead, we find that the preferred models have a steep feature at $m_1^{\rm src} \sim 40 \,\rm M_\odot$ consistent with a step and an extended, shallow tail to high masses. ","Distribution of Effective Spins and Masses of Binary Black Holes from
the LIGO and Virgo O1-O3a Observing Runs",1,['New paper led by Javier!\n\n\n\nWe found no evidence for binary black holes with spins that are anti-aligned with the orbit in the current BBH population. This suggests that BBHs cannot be formed solely through dynamical capture in dense stellar environments. '],21,05,266
428,17,1410050371502972930,1357124221009334272,David Jaz Myers,"I have a new paper, all about the relation between quantity and quality: Modal Fracture of Higher Groups. … @SchreiberUrs’ differential cohomology hexagon done synthetically in Shulman’s cohesive HoTT, and classifiers for circle k-gerbes with connection. ",https://arxiv.org/abs/2106.15390,"In this paper, we examine the modal aspects of higher groups in Shulman's Cohesive Homotopy Type Theory. We show that every higher group sits within a modal fracture hexagon which renders it into its discrete, infinitesimal, and contractible components. This gives an unstable and synthetic construction of Schreiber's differential cohomology hexagon. As an example of this modal fracture hexagon, we recover the character diagram characterizing ordinary differential cohomology by its relation to its underlying integral cohomology and differential form data, although there is a subtle obstruction to generalizing the usual hexagon to higher types. Assuming the existence of a long exact sequence of differential form classifiers, we construct the classifiers for circle k-gerbes with connection and describe their modal fracture hexagon. ",Modal Fracture of Higher Groups,2,"['I have a new paper, all about the relation between quantity and quality: Modal Fracture of Higher Groups. … @SchreiberUrs’ differential cohomology hexagon done synthetically in Shulman’s cohesive HoTT, and classifiers for circle k-gerbes with connection.', 'https://t.co/f8SubGxVDL']",21,06,268
429,208,1251056123584622594,543247607,Marcel Neunhoeffer,What is useful private synthetic data? And how to measure it? @chrisguarnold and I hope to help improve differentially private synthetic data. Find our working paper: We love to hear from you! #differentialprivacy #syntheticdata #datascience #gan ,https://arxiv.org/abs/2004.07740,"Recent advances in generating synthetic data that allow to add principled ways of protecting privacy -- such as Differential Privacy -- are a crucial step in sharing statistical information in a privacy preserving way. But while the focus has been on privacy guarantees, the resulting private synthetic data is only useful if it still carries statistical information from the original data. To further optimise the inherent trade-off between data privacy and data quality, it is necessary to think closely about the latter. What is it that data analysts want? Acknowledging that data quality is a subjective concept, we develop a framework to evaluate the quality of differentially private synthetic data from an applied researcher's perspective. Data quality can be measured along two dimensions. First, quality of synthetic data can be evaluated against training data or against an underlying population. Second, the quality of synthetic data depends on general similarity of distributions or specific tasks such as inference or prediction. It is clear that accommodating all goals at once is a formidable challenge. We invite the academic community to jointly advance the privacy-quality frontier. ","Really Useful Synthetic Data -- A Framework to Evaluate the Quality of
Differentially Private Synthetic Data",1,['What is useful private synthetic data? And how to measure it? @chrisguarnold and I hope to help improve differentially private synthetic data. Find our working paper: We love to hear from you! #differentialprivacy #syntheticdata #datascience #gan '],20,04,260
430,107,1225698131280457730,944291984675614721,Tobias de Jong,"New paper on arXiv 😁 In a collaboration with (amongst others) University of Geneva, @ICFOnians, @elettrasincro and @LeidenPhysics, we use STM and LEEM to aid ARPES to study the band structure of (near) magic-angle bilayer graphene and observe a flat band We (i.e. @sensemolen , @j_jobst and myself) used LEEM to map out the device of stacked 2D flakes at high resolution, identifying areas of monolayer, multilayer, normal bilayer and near-magic-angle bilayer graphene, guiding ARPES and STM collaborators where to measure.",https://arxiv.org/abs/2002.02289,"Transport experiments in twisted bilayer graphene revealed multiple superconducting domes separated by correlated insulating states. These properties are generally associated with strongly correlated states in a flat mini-band of the hexagonal moir\'e superlattice as it was predicted by band structure calculations. Evidence for such a flat band comes from local tunneling spectroscopy and electronic compressibility measurements, reporting two or more sharp peaks in the density of states that may be associated with closely spaced van Hove singularities. Direct momentum resolved measurements proved difficult though. Here, we combine different imaging techniques and angle resolved photoemission with simultaneous real and momentum space resolution (nano-ARPES) to directly map the band dispersion in twisted bilayer graphene devices near charge neutrality. Our experiments reveal large areas with homogeneous twist angle that support a flat band with spectral weight that is highly localized in momentum space. The flat band is separated from the dispersive Dirac bands which show multiple moir\'e hybridization gaps. These data establish the salient features of the twisted bilayer graphene band structure. ","Direct evidence for flat bands in twisted bilayer graphene from
nano-ARPES",2,"['New paper on arXiv 😁 In a collaboration with (amongst others) University of Geneva, @ICFOnians, @elettrasincro and @LeidenPhysics, we use STM and LEEM to aid ARPES to study the band structure of (near) magic-angle bilayer graphene and observe a flat band', 'We (i.e. @sensemolen , @j_jobst and myself) used LEEM to map out the device of stacked 2D flakes at high resolution, identifying areas of monolayer, multilayer, normal bilayer and near-magic-angle bilayer graphene, guiding ARPES and STM collaborators where to measure.']",20,02,530
431,100,991723765720481792,70874545,Josh Lothringer,"As @V_Parmentier and @lkreidberg have been describing, ultra-hot Jupiters are weird and unique for several reasons, from dissociation of molecules to the presence of H- opacity. Here's just a couple of the things we found in my new paper: Once the planet gets above a Teq of about 2500 K, TiO/VO become dissocated as well, so you might expect the atmosphere to becomes non-inverted. We find that that's not the case; something is doing the work of TiO/VO after those molecules have disappeared. The short-wavelength radiation being pumped out by early-type host stars is absorbed by atomic metals, metal hydrides, SiO, and bound-free opacities. This, combined with a dearth of IR-active molecules to cool the atmosphere, causes the atmosphere heats up. This plot is showing where the stellar flux is being absorbed: look at all that short wavelength flux being absorbed right where the temperature inversion is beginning, around 10 mbar! We also show the first self-consistent model of the hottest known jovian planet, KELT-9b (Tdayside = 4600 K). Nearly all molecules are dissocated, even CO, and most atoms are ionized. But the spectrum doesn't look like a blackbody... H- is the main opacity source whose bound-free opacity varies smoothly with wavelength such that the brightness temperature of KELT-9b varies by about 1000 K across the JWST spectral range! There's much more in the paper, but, in short, ultra-hot/extremely irradiated hot Jupiters are fascinating and unique astrophysical objects, worthy of further characterization. This is good because their hot dayside atmosphere and their inflated radii make these great targets. See also: Mansfield+ @V_Parmentier+ @lkreidberg+ ",https://arxiv.org/abs/1805.00038,"Extremely irradiated hot Jupiters, exoplanets reaching dayside temperatures ${>}$2000 K, stretch our understanding of planetary atmospheres and the models we use to interpret observations. While these objects are planets in every other sense, their atmospheres reach temperatures at low pressures comparable only to stellar atmospheres. In order to understand our \textit{a priori} theoretical expectations for the nature of these objects, we self-consistently model a number of extreme hot Jupiter scenarios with the PHOENIX model atmosphere code. PHOENIX is well-tested on objects from cool brown dwarfs to expanding supernovae shells and its expansive opacity database from the UV to far-IR make PHOENIX well-suited for understanding extremely irradiated hot Jupiters. We find several fundamental differences between hot Jupiters at temperatures ${>}$2500 K and their cooler counterparts. First, absorption by atomic metals like Fe and Mg, molecules including SiO and metal hydrides, and continuous opacity sources like H$^-$ all combined with the short-wavelength output of early-type host stars result in strong thermal inversions, without the need for TiO or VO. Second, many molecular species, including H$_2$O, TiO, and VO are thermally dissociated at pressures probed by eclipse observations, biasing retrieval algorithms that assume uniform vertical abundances. We discuss other interesting properties of these objects, as well as future prospects and predictions for observing and characterizing this unique class of astrophysical object, including the first self-consistent model of the hottest known jovian planet, KELT-9b. ","Extremely Irradiated Hot Jupiters: Non-Oxide Inversions, H- Opacity, and
Thermal Dissociation of Molecules",8,"[""As @V_Parmentier and @lkreidberg have been describing, ultra-hot Jupiters are weird and unique for several reasons, from dissociation of molecules to the presence of H- opacity. Here's just a couple of the things we found in my new paper: "", ""Once the planet gets above a Teq of about 2500 K, TiO/VO become dissocated as well, so you might expect the atmosphere to becomes non-inverted. We find that that's not the case; something is doing the work of TiO/VO after those molecules have disappeared. https://t.co/6ojhfnzvry"", 'The short-wavelength radiation being pumped out by early-type host stars is absorbed by atomic metals, metal hydrides, SiO, and bound-free opacities. This, combined with a dearth of IR-active molecules to cool the atmosphere, causes the atmosphere heats up.', 'This plot is showing where the stellar flux is being absorbed: look at all that short wavelength flux being absorbed right where the temperature inversion is beginning, around 10 mbar! https://t.co/mdKZIWDBEh', ""We also show the first self-consistent model of the hottest known jovian planet, KELT-9b (Tdayside = 4600 K). Nearly all molecules are dissocated, even CO, and most atoms are ionized. But the spectrum doesn't look like a blackbody..."", 'H- is the main opacity source whose bound-free opacity varies smoothly with wavelength such that the brightness temperature of KELT-9b varies by about 1000 K across the JWST spectral range! https://t.co/d61L1YPqnv', ""There's much more in the paper, but, in short, ultra-hot/extremely irradiated hot Jupiters are fascinating and unique astrophysical objects, worthy of further characterization. This is good because their hot dayside atmosphere and their inflated radii make these great targets."", 'See also: \nMansfield+ https://t.co/5p3b7Mob1x\n@V_Parmentier+ https://t.co/ReHRrUXo8H\n@lkreidberg+ https://t.co/oc3ZyzrnDC']",18,05,1737
432,46,1287927441852207105,1196266674954985472,Nirmal Raj,"New paper with @DjunaCroon, @davemckeen, & Zihui Wang! ""Subaru through a different lens"": . Thread follows. For a flash review of gravitational microlensing see my previous thread on it: . You'd think a star in the Andromeda Galaxy (10^19 km away) would look like a point from here, but so sharp-eyed is the Subaru Telescope that when it looks for the brief dazzle of starlight bent by a passing object, the star's spatial extent (~10^7 km) actually matters to it! If the passing object is a dark matter structure such as a subhalo or a boson star, *its* spatial extent -- and unique mass distribution -- must figure in the lensing signal as well. That gives us a brand new constraint to the hunt for #darkmatter. ",https://arxiv.org/abs/2007.12697,"We investigate gravitational microlensing signals produced by a spatially extended object transiting in front of a finite-sized source star. The most interesting features arise for lens and source sizes comparable to the Einstein radius of the setup. Using this information, we obtain constraints from the Subaru-HSC survey of M31 on the dark matter populations of NFW subhalos and boson stars of asteroid to Earth masses. These lens profiles capture the qualitative behavior of a wide range of dark matter substructures. We find that deviations from constraints on point-like lenses (e.g. primordial black holes and MACHOs) become visible for lenses of radius 0.1 $R_\odot$ and larger, with the upper bound on lens masses weakening with increasing lens size. ","Subaru through a different lens: microlensing by extended dark matter
structures",3,"['New paper with @DjunaCroon, \n@davemckeen, & Zihui Wang! ""Subaru through a different lens"": . \nThread follows. For a flash review of gravitational microlensing see my previous thread on it: . ', ""You'd think a star in the Andromeda Galaxy (10^19 km away) would look like a point from here, but so sharp-eyed is the Subaru Telescope that when it looks for the brief dazzle of starlight bent by a passing object, the star's spatial extent (~10^7 km) actually matters to it! https://t.co/Arn4clsnMc"", 'If the passing object is a dark matter structure such as a subhalo or a boson star, *its* spatial extent -- and unique mass distribution -- must figure in the lensing signal as well. That gives us a brand new constraint to the hunt for #darkmatter. https://t.co/djtQwqiXcP']",20,07,747
433,148,1146367776308809729,4111874585,Manuel Rigger,"A preprint of our @FSEconf paper ""Understanding GCC Builtins to Develop Better Tools"" is now online at . We took care to make the results replicable; check out the repository at . Work with @smarr, Bram Adams, and @moessenboeck. @FSEconf @smarr @moessenboeck CC @gnutools, @llvmweekly, @llvmorg",https://arxiv.org/abs/1907.00863,"C programs can use compiler builtins to provide functionality that the C language lacks. On Linux, GCC provides several thousands of builtins that are also supported by other mature compilers, such as Clang and ICC. Maintainers of other tools lack guidance on whether and which builtins should be implemented to support popular projects. To assist tool developers who want to support GCC builtins, we analyzed builtin use in 4,913 C projects from GitHub. We found that 37% of these projects relied on at least one builtin. Supporting an increasing proportion of projects requires support of an exponentially increasing number of builtins; however, implementing only 10 builtins already covers over 30% of the projects. Since we found that many builtins in our corpus remained unused, the effort needed to support 90% of the projects is moderate, requiring about 110 builtins to be implemented. For each project, we analyzed the evolution of builtin use over time and found that the majority of projects mostly added builtins. This suggests that builtins are not a legacy feature and must be supported in future tools. Systematic testing of builtin support in existing tools revealed that many lacked support for builtins either partially or completely; we also discovered incorrect implementations in various tools, including the formally verified CompCert compiler. ",Understanding GCC Builtins to Develop Better Tools,2,"['A preprint of our @FSEconf paper ""Understanding GCC Builtins to Develop Better Tools"" is now online at . We took care to make the results replicable; check out the repository at . Work with @smarr, Bram Adams, and @moessenboeck. ', '@FSEconf @smarr @moessenboeck CC @gnutools, @llvmweekly, @llvmorg']",19,07,313
434,118,1390709745909211138,785763100712665088,Yaroslav Ganin,"New paper day. Our take on generation of structured objects: Protocol Buffers + Transformers + Pointer Nets We showcase the method on 2D CAD sketches (geometric primitives & relations between them). Should work for other domains too. Mandatory samples: A close-up view for those who got dizzy Look at this guy - he is so happy to get vectorized! (we do bitmap to sketch translation too). This one is out of distribution - I just doodled it in an app (see the image in the bottom left corner) (joint work w/ @sbos, @liyuajia, Ethan Keller and Stefano Saliceti) 1/ Some details. Thread 2D sketches are at the heart of mechanical CAD. Each sketch is a collections of entities (lines, arcs, splines) and constraints (""this line is parallel to that"", ""this point and that point are coincident"" and so on). The latter defines the design intent. 2/ Both entities and constraints are structured objects and can be described using JSON, XML or Protocol Buffers (the path we take). Below are two examples. Since constraints are applied to entities they employ pointers to refer their arguments (that's why we need Pointer Nets). 3/ Our approach is to let an external interpreter handle the structure and use Transformers only to generate missing bits (e.g., the field values). We encode each missing bit as a triplet: (discrete value, continuous value, boolean flag). The flag is needed to handle loops. 4/ This Transformer+Interpreter tandem is flexible enough to generate pretty much anything that we can represent as a PB message. That's why we can synthesize both entities and constraints in the same sequence without resorting to using specialized multi-stage architectures.",http://arxiv.org/abs/2105.02769,"Computer-Aided Design (CAD) applications are used in manufacturing to model everything from coffee mugs to sports cars. These programs are complex and require years of training and experience to master. A component of all CAD models particularly difficult to make are the highly structured 2D sketches that lie at the heart of every 3D construction. In this work, we propose a machine learning model capable of automatically generating such sketches. Through this, we pave the way for developing intelligent tools that would help engineers create better designs with less effort. Our method is a combination of a general-purpose language modeling technique alongside an off-the-shelf data serialization protocol. We show that our approach has enough flexibility to accommodate the complexity of the domain and performs well for both unconditional synthesis and image-to-sketch translation. ",Computer-Aided Design as Language,8,"['New paper day.\n\nOur take on generation of structured objects:\nProtocol Buffers + Transformers + Pointer Nets\n\nWe showcase the method on 2D CAD sketches (geometric primitives & relations between them). Should work for other domains too.\n\n\n\nMandatory samples: ', 'A close-up view for those who got dizzy https://t.co/myjvPfhO2s', 'Look at this guy - he is so happy to get vectorized! (we do bitmap to sketch translation too). This one is out of distribution - I just doodled it in an app (see the image in the bottom left corner) https://t.co/CsGlVLa8qv', '(joint work w/ @sbos, @liyuajia, Ethan Keller and Stefano Saliceti)', '1/ Some details. Thread\n\n2D sketches are at the heart of mechanical CAD. 
Each sketch is a collections of entities (lines, arcs, splines) and constraints (""this line is parallel to that"", ""this point and that point are coincident"" and so on). The latter defines the design intent. https://t.co/cPCg6QODBg', ""2/ Both entities and constraints are structured objects and can be described using JSON, XML or Protocol Buffers (the path we take). Below are two examples. Since constraints are applied to entities they employ pointers to refer their arguments (that's why we need Pointer Nets). https://t.co/a9hjG5fk6f"", '3/ Our approach is to let an external interpreter handle the structure and use Transformers only to generate missing bits (e.g., the field values). We encode each missing bit as a triplet: (discrete value, continuous value, boolean flag). The flag is needed to handle loops. https://t.co/2t231CdtXq', ""4/ This Transformer+Interpreter tandem is flexible enough to generate pretty much anything that we can represent as a PB message. That's why we can synthesize both entities and constraints in the same sequence without resorting to using specialized multi-stage architectures.""]",21,05,1718
435,78,1184176079432499200,40285266,Stanislav Fort at EAGx Prague ¬(🔥📎🔥📎),Excited to announce our new paper Emergent properties of the local geometry of neural loss landscapes with my great advisor @SuryaGanguli! We used a simple model to explain 4 surprising effects of local geometry of neural network landscapes. ,https://arxiv.org/abs/1910.05929,"The local geometry of high dimensional neural network loss landscapes can both challenge our cherished theoretical intuitions as well as dramatically impact the practical success of neural network training. Indeed recent works have observed 4 striking local properties of neural loss landscapes on classification tasks: (1) the landscape exhibits exactly $C$ directions of high positive curvature, where $C$ is the number of classes; (2) gradient directions are largely confined to this extremely low dimensional subspace of positive Hessian curvature, leaving the vast majority of directions in weight space unexplored; (3) gradient descent transiently explores intermediate regions of higher positive curvature before eventually finding flatter minima; (4) training can be successful even when confined to low dimensional {\it random} affine hyperplanes, as long as these hyperplanes intersect a Goldilocks zone of higher than average curvature. We develop a simple theoretical model of gradients and Hessians, justified by numerical experiments on architectures and datasets used in practice, that {\it simultaneously} accounts for all $4$ of these surprising and seemingly unrelated properties. Our unified model provides conceptual insights into the emergence of these properties and makes connections with diverse topics in neural networks, random matrix theory, and spin glasses, including the neural tangent kernel, BBP phase transitions, and Derrida's random energy model. ",Emergent properties of the local geometry of neural loss landscapes,1,['Excited to announce our new paper Emergent properties of the local geometry of neural loss landscapes with my great advisor @SuryaGanguli! We used a simple model to explain 4 surprising effects of local geometry of neural network landscapes. '],19,10,255
436,57,1193967646477127680,548718054,Marc Khoury,"How good is a triangulation as an approximation to a smooth surface? In a new paper, we prove sharp bounds on the interpolation and normal error for points clouds sampled from smooth surfaces and manifolds. With Jonathan Shewchuk. @paul_pearce Depends on the triangulation and the density of the samples. The bounds are statements of the type if a triangle is ""small"", wrt a certain measure, and the sample is dense, then the triangle normal closely approximates the true normal at the vertices. @paul_pearce Given a triangulation these statements allow you to measure its quality. A lot of the effort is in establish the sharpest possible bound wrt constants. Many of these results are central to provable manifold reconstruction and haven't been improved in years. @paul_pearce Even better: arbitrarily good.",https://arxiv.org/abs/1911.03424,"How good is a triangulation as an approximation of a smooth curved surface or manifold? We provide bounds on the {\em interpolation error}, the error in the position of the surface, and the {\em normal error}, the error in the normal vectors of the surface, as approximated by a piecewise linearly triangulated surface whose vertices lie on the original, smooth surface. The interpolation error is the distance from an arbitrary point on the triangulation to the nearest point on the original, smooth manifold, or vice versa. The normal error is the angle separating the vector (or space) normal to a triangle from the vector (or space) normal to the smooth manifold (measured at a suitable point near the triangle). We also study the {\em normal variation}, the angle separating the normal vectors (or normal spaces) at two different points on a smooth manifold. Our bounds apply to manifolds of any dimension embedded in Euclidean spaces of any dimension, and our interpolation error bounds apply to simplices of any dimension, although our normal error bounds apply only to triangles. These bounds are expressed in terms of the sizes of suitable medial balls (the {\em empty ball size} or {\em local feature size} measured at certain points on the manifold), and have applications in Delaunay triangulation-based algorithms for provably good surface reconstruction and provably good mesh generation. Our bounds have better constants than the prior bounds we know of---and for several results in higher dimensions, our bounds are the first to give explicit constants. ","Approximation Bounds for Interpolation and Normals on Triangulated
Surfaces and Manifolds",4,"['How good is a triangulation as an approximation to a smooth surface? In a new paper, we prove sharp bounds on the interpolation and normal error for points clouds sampled from smooth surfaces and manifolds. With Jonathan Shewchuk. \n\n', '@paul_pearce Depends on the triangulation and the density of the samples. The bounds are statements of the type if a triangle is ""small"", wrt a certain measure, and the sample is dense, then the triangle normal closely approximates the true normal at the vertices.', ""@paul_pearce Given a triangulation these statements allow you to measure its quality. A lot of the effort is in establish the sharpest possible bound wrt constants. Many of these results are central to provable manifold reconstruction and haven't been improved in years."", '@paul_pearce Even better: arbitrarily good.']",19,11,818
437,140,1435567098739269635,322636963,Jonathan Berant,"Another piece of evidence in the quest for compositional generalization: New paper by Inbar Oren and w/ @jonherzig (to appear in #emnlp2021): sampling examples with high structural diversity across examples from a SCFG dramatically improves comp. gen 1/2 We double accuracy on a compositional split of Schema2QA by sampling only 5K synthetic examples from the synchronous grammar, check it out! One more #emnlp2021 paper on compositional generalization in a visually grounded setup coming up next week, stay tuned. Oh and Shana Tova!",https://arxiv.org/abs/2109.02575,"Modern semantic parsers suffer from two principal limitations. First, training requires expensive collection of utterance-program pairs. Second, semantic parsers fail to generalize at test time to new compositions/structures that have not been observed during training. Recent research has shown that automatic generation of synthetic utterance-program pairs can alleviate the first problem, but its potential for the second has thus far been under-explored. In this work, we investigate automatic generation of synthetic utterance-program pairs for improving compositional generalization in semantic parsing. Given a small training set of annotated examples and an ""infinite"" pool of synthetic examples, we select a subset of synthetic examples that are structurally-diverse and use them to improve compositional generalization. We evaluate our approach on a new split of the schema2QA dataset, and show that it leads to dramatic improvements in compositional generalization as well as moderate improvements in the traditional i.i.d setup. Moreover, structurally-diverse sampling achieves these improvements with as few as 5K examples, compared to 1M examples when sampling uniformly at random -- a 200x improvement in data efficiency. ","Finding needles in a haystack: Sampling Structurally-diverse Training
Sets from Synthetic Data for Compositional Generalization",3,"['Another piece of evidence in the quest for compositional generalization: \nNew paper by Inbar Oren and w/ @jonherzig (to appear in #emnlp2021): \nsampling examples with high structural diversity across examples from a SCFG dramatically improves comp. gen 1/2 ', 'We double accuracy on a compositional split of Schema2QA by sampling only 5K synthetic examples from the synchronous grammar, check it out!\nOne more #emnlp2021 paper on compositional generalization in a visually grounded setup coming up next week, stay tuned.', 'Oh and Shana Tova!']",21,09,547
438,1,1413626275655159813,744256054465302532,Anurag Kumar,New Paper Out! We try to convert some state-of-the-art methods for speaker separation into online real time systems while trying to retain performance. We do this for both monoaural and binaural speech. Paper . Speech samples 🎧 - #speech #speechseparation #speakerseparation,https://arxiv.org/abs/2106.13493,"Deep neural networks have recently shown great success in the task of blind source separation, both under monaural and binaural settings. Although these methods were shown to produce high-quality separations, they were mainly applied under offline settings, in which the model has access to the full input signal while separating the signal. In this study, we convert a non-causal state-of-the-art separation model into a causal and real-time model and evaluate its performance under both online and offline settings. We compare the performance of the proposed model to several baseline methods under anechoic, noisy, and noisy-reverberant recording conditions while exploring both monaural and binaural inputs and outputs. Our findings shed light on the relative difference between causal and non-causal models when performing separation. Our stateful implementation for online separation leads to a minor drop in performance compared to the offline model; 0.8dB for monaural inputs and 0.3dB for binaural inputs while reaching a real-time factor of 0.65. Samples can be found under the following link: this https URL ",Online Self-Attentive Gated RNNs for Real-Time Speaker Separation,2,"['New Paper Out! We try to convert some state-of-the-art methods for speaker separation into online real time systems while trying to retain performance. We do this for both monoaural and binaural speech. Paper . \nSpeech samples 🎧 - ', '#speech #speechseparation #speakerseparation']",21,06,288
439,55,1275773294168473600,572479189,Manlio De Domenico,"Happy to briefly discuss the new work led by @valedand about exploiting information theory to compress the phase space of nonlinear dynamical systems and detect state changes. Paper 👉 Thread 👇 1 / Since the '80s we know how to reconstruct the phase space of a dynamical system from the observation of the series it generates. Methods mostly depend on two unknowns: embedding dimension and time delay. We propose compressibility of the dynamics to solve this issue: it works! 2/ So we started to analyze more complex dynamics, such as coupled chaotic oscillators of different nature and varied their coupling. We observe how the effective embedding dimension change & reveal changes that Lyapunov or Correlation Dim analyses do not reveal separately. 3/ When we do the same analysis for coupled chaotic maps, results are similar: we have to use *both* Lyapunov and CorDim to understand what's going on, whereas our embedding dimension is self-consistent. Why that's relevant? 4/ Because calculating Lyapunov exponents and CorDim requires long + possibly non-noisy time series, whereas we demonstrate that our method works faster and with smaller observational size: we exploit that for causal interactions information entropy grows non-extensively. 5/5",https://arxiv.org/abs/2006.12842,"Equations governing the nonlinear dynamics of complex systems are usually unknown and indirect methods are used to reconstruct their manifolds. In turn, they depend on embedding parameters requiring other methods and long temporal sequences to be accurate. In this paper, we show that an optimal reconstruction can be achieved by lossless compression of system's time course, providing a self-consistent analysis of its dynamics and a measure of its complexity, even for short sequences. Our measure of complexity detects system's state changes such as weak synchronization phenomena, characterizing many systems, in one step, integrating results from Lyapunov and fractal analysis. ","Compressing phase space detects state changes in nonlinear dynamical
systems",5,"['Happy to briefly discuss the new work led by @valedand about exploiting information theory to compress the phase space of nonlinear dynamical systems and detect state changes. Paper 👉 \n\nThread 👇 1 / ', ""Since the '80s we know how to reconstruct the phase space of a dynamical system from the observation of the series it generates. Methods mostly depend on two unknowns: embedding dimension and time delay. We propose compressibility of the dynamics to solve this issue: it works! 2/ https://t.co/TbGrNT0Fjr"", 'So we started to analyze more complex dynamics, such as coupled chaotic oscillators of different nature and varied their coupling. We observe how the effective embedding dimension change & reveal changes that Lyapunov or Correlation Dim analyses do not reveal separately. 3/ https://t.co/EqneJN1KWo', ""When we do the same analysis for coupled chaotic maps, results are similar: we have to use *both* Lyapunov and CorDim to understand what's going on, whereas our embedding dimension is self-consistent. Why that's relevant? 4/ https://t.co/LMEFs8n7d3"", 'Because calculating Lyapunov exponents and CorDim requires long + possibly non-noisy time series, whereas we demonstrate that our method works faster and with smaller observational size: we exploit that for causal interactions information entropy grows non-extensively. 5/5']",20,06,1286
440,92,1128573331958112257,15719460,didier_schwab,"Our paper Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation with @_Loic_Vial, Benjamin Lecouteux has been accepted at the 10th Global WordNet Conference - GWC 2019 - new state of the art for WSD @laurent_besacie @_Loic_Vial On peut pas gagner à tous les coups...",https://arxiv.org/abs/1905.05677,"In this article, we tackle the issue of the limited quantity of manually sense annotated corpora for the task of word sense disambiguation, by exploiting the semantic relationships between senses such as synonymy, hypernymy and hyponymy, in order to compress the sense vocabulary of Princeton WordNet, and thus reduce the number of different sense tags that must be observed to disambiguate all words of the lexical database. We propose two different methods that greatly reduces the size of neural WSD models, with the benefit of improving their coverage without additional training data, and without impacting their precision. In addition to our method, we present a WSD system which relies on pre-trained BERT word vectors in order to achieve results that significantly outperform the state of the art on all WSD evaluation tasks. ","Sense Vocabulary Compression through the Semantic Knowledge of WordNet
for Neural Word Sense Disambiguation",2,"['Our paper Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation with @_Loic_Vial, Benjamin Lecouteux has been accepted at the 10th Global WordNet Conference - GWC 2019 - new state of the art for WSD ', '@laurent_besacie @_Loic_Vial On peut pas gagner à tous les coups...']",19,05,327
441,174,1486674972508663815,1364749022,Haitham Bou Ammar,🚨Robotics and Planning Ppl🚨 We formalise constraint primitives via geometric backtracking. We propose an efficient BO algorithm based on constraint primitives. Even more! ☝️ We devise a transfer learning mechanism across tasks with zero effort. ,https://arxiv.org/abs/2201.09612,"Searching for bindings of geometric parameters in task and motion planning (TAMP) is a finite-horizon stochastic planning problem with high-dimensional decision spaces. A robot manipulator can only move in a subspace of its whole range that is subjected to many geometric constraints. A TAMP solver usually takes many explorations before finding a feasible binding set for each task. It is favorable to learn those constraints once and then transfer them over different tasks within the same workspace. We address this problem by representing constraint knowledge with transferable primitives and using Bayesian optimization (BO) based on these primitives to guide binding search in further tasks. Via semantic and geometric backtracking in TAMP, we construct constraint primitives to encode the geometric constraints respectively in a reusable form. Then we devise a BO approach to efficiently utilize the accumulated constraints for guiding node expansion of an MCTS-based binding planner. We further compose a transfer mechanism to enable free knowledge flow between TAMP tasks. Results indicate that our approach reduces the expensive exploration calls in binding search by 43.60to 71.69 when compared to the baseline unguided planner. ",Learning Geometric Constraints in Task and Motion Planning,1,['🚨Robotics and Planning Ppl🚨 \n\nWe formalise constraint primitives via geometric backtracking. We propose an efficient BO algorithm based on constraint primitives. \nEven more! ☝️ We devise a transfer learning mechanism across tasks with zero effort.\n\n '],22,01,259
442,43,1353066623045799939,818436810,Ehsan Hosseini-Asl,"New paper on, - Analyzing calibration in NLU models - How to improve it using noise contrastive estimation training Accepted to EACL 2021 () @SFResearch [1/6] We explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., Roberta) for natural language understanding (NLU) tasks. [2/6] In most tasks, all three EBM variants get substantial improvement in ECE with little or no loss in accuracy comparing to the (strong) baseline methods. [3/6] We plot how test-set ECE changes during training. It is shown as the training reaches the high-accuracy area, the calibration for baseline model becomes worse, while EBM training is able to reach a better trade-off between accuracy and calibration. [4/6] How does the model get better calibration? It is shown that models trained with the hidden and sharp-hidden variants tend to assign more conservative predictions (reflected by higher entropy) for higher energy (less likely) samples. [5/6] We suspect this is due to the strong coupling between the energy function and the classification logits. We provide concrete examples here. However, we need to mention that we do not observe this interesting trend (Figure 4) in all datasets (e.g., QNLI) [6/6] ",https://arxiv.org/abs/2101.06829,"In this work, we explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., Roberta) for natural language understanding (NLU) tasks. Our experiments show that EBM training can help the model reach a better calibration that is competitive to strong baselines, with little or no loss in accuracy. We discuss three variants of energy functions (namely scalar, hidden, and sharp-hidden) that can be defined on top of a text encoder, and compare them in experiments. Due to the discreteness of text data, we adopt noise contrastive estimation (NCE) to train the energy-based model. To make NCE training more effective, we train an auto-regressive noise model with the masked language model (MLM) objective. ","Joint Energy-based Model Training for Better Calibrated Natural Language
Understanding Models",6,"['New paper on,\n- Analyzing calibration in NLU models\n- How to improve it using noise contrastive estimation training \nAccepted to EACL 2021 \n()\n@SFResearch [1/6] ', 'We explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., Roberta) for natural language understanding (NLU) tasks. [2/6] https://t.co/a4vzRQb3K0', 'In most tasks, all three EBM variants get substantial improvement in ECE with little or no loss in accuracy comparing to the (strong) baseline methods. [3/6] https://t.co/1nIvYVAYWZ', 'We plot how test-set ECE changes during training. It is shown as the training reaches the high-accuracy area, the calibration for baseline model becomes worse, while EBM training is able to reach a better trade-off between accuracy and calibration. [4/6] https://t.co/ZLtuWiikYq', 'How does the model get better calibration? It is shown that models trained with the hidden and sharp-hidden variants tend to assign more conservative predictions (reflected by higher entropy) for higher energy (less likely) samples. [5/6] https://t.co/VcWZFCwual', 'We suspect this is due to the strong coupling between the energy function and the classification logits. We provide concrete examples here. \nHowever, we need to mention that we do not observe this interesting trend (Figure 4) in all datasets (e.g., QNLI) [6/6] https://t.co/7MTVdpldWR']",21,01,1289
443,64,971310121203781633,721931072,Shimon Whiteson,Our latest paper is the fruit of a new collaboration with @oxfordrobots: hierarchical learning from demonstration: weakly supervising the demonstrations enables zero-shot learning for robots @KyriacosShiarli @IngmarPosner @markus_with_k @whi_rl,https://arxiv.org/abs/1803.01840,"Many advanced Learning from Demonstration (LfD) methods consider the decomposition of complex, real-world tasks into simpler sub-tasks. By reusing the corresponding sub-policies within and between tasks, they provide training data for each policy from different high-level tasks and compose them to perform novel ones. Existing approaches to modular LfD focus either on learning a single high-level task or depend on domain knowledge and temporal segmentation. In contrast, we propose a weakly supervised, domain-agnostic approach based on task sketches, which include only the sequence of sub-tasks performed in each demonstration. Our approach simultaneously aligns the sketches with the observed demonstrations and learns the required sub-policies. This improves generalisation in comparison to separate optimisation procedures. We evaluate the approach on multiple domains, including a simulated 3D robot arm control task using purely image-based observations. The results show that our approach performs commensurately with fully supervised approaches, while requiring significantly less annotation effort. ",TACO: Learning Task Decomposition via Temporal Alignment for Control,1,['Our latest paper is the fruit of a new collaboration with @oxfordrobots: hierarchical learning from demonstration: weakly supervising the demonstrations enables zero-shot learning for robots @KyriacosShiarli @IngmarPosner @markus_with_k @whi_rl'],18,03,251
444,2,1524255968011321344,23024823,Yang Cai,"Very excited about our new paper with @ArgyrisOikonom1 @WeiqiangZheng3. We obtain the tight last-iterate convergence rates for the Extragradient (EG) and Optimistic Gradient (OG) algorithms, settling an open problem raised by @KonstDaskalakis. We study the convex-concave min-max optimization (and more generally the monotone variation inequalities) in the constrained setting. The EG algorithm by Korpelevich '1976 and the OG algorithm by Popov '1980 are among the most classical and popular algorithms for such problems. For both EG and OG, we know that the last-iterate asymptotically converges, but the rate was not known despite having been studied for a long time. We obtain tight last-iterate convergence rates for both EG and OG. Our proof builds on a new natural potential function, whose monotonicity is established using a sum-of-squares programming based computer-aided proof. For more details, see my talk at the Simons Institute for our result on EG. ",https://arxiv.org/abs/2204.09228,"The monotone variational inequality is a central problem in mathematical programming that unifies and generalizes many important settings such as smooth convex optimization, two-player zero-sum games, convex-concave saddle point problems, etc. The extragradient algorithm by Korpelevich [1976] and the optimistic gradient descent-ascent algorithm by Popov [1980] are arguably the two most classical and popular methods for solving monotone variational inequalities. Despite its long history, the following major problem remains open. What is the last-iterate convergence rate of the extragradient algorithm or the optimistic gradient descent-ascent algorithm for monotone and Lipschitz variational inequalities with constraints? We resolve this open problem by showing that both the extragradient algorithm and the optimistic gradient descent-ascent algorithm have a tight $O\left(\frac{1}{\sqrt{T}}\right)$ last-iterate convergence rate for arbitrary convex feasible sets, which matches the lower bound by Golowich et al. [2020a, b]. Our rate is measured in terms of the standard gap function. At the core of our results lies a new performance measure -- the tangent residual, which can be viewed as an adaptation of the norm of the operator that takes the local constraints into account. We use the tangent residual (or a slight variation of the tangent residual) as the performance measure in our analysis of the extragradient algorithm (or the optimistic gradient descent-ascent algorithm). To establish the monotonicity of these performance measures, we develop a new approach that combines the power of the sum-of-squares programming with the low dimensionality of the update rule of the extragradient or the optimistic gradient descent-ascent algorithm. We believe our approach has many additional applications in the analysis of iterative methods. ","Tight Last-Iterate Convergence of the Extragradient and the Optimistic
Gradient Descent-Ascent Algorithm for Constrained Monotone Variational
Inequalities",4,"['Very excited about our new paper with @ArgyrisOikonom1 @WeiqiangZheng3.\n\nWe obtain the tight last-iterate convergence rates for the Extragradient (EG) and Optimistic Gradient (OG) algorithms, settling an open problem raised by @KonstDaskalakis.\n', ""We study the convex-concave min-max optimization (and more generally the monotone variation inequalities) in the constrained setting. The EG algorithm by Korpelevich '1976 and the OG algorithm by Popov '1980 are among the most classical and popular algorithms for such problems."", 'For both EG and OG, we know that the last-iterate asymptotically converges, but the rate was not known despite having been studied for a long time. We obtain tight last-iterate convergence rates for both EG and OG.', 'Our proof builds on a new natural potential function, whose monotonicity is established using a sum-of-squares programming based computer-aided proof.\n\nFor more details, see my talk at the Simons Institute for our result on EG.\nhttps://t.co/eMr8KJmdtT']",22,04,978
445,15,1034711166835216387,319518346,Jose Camacho-Collados,"Humans are very good at distinguishing senses given different contexts but computers (including SotA models like #ELMo, sense embeddings...) still struggle with it. Check out our new dataset on this topic! #NLProc Dataset: Paper: With @tpilehvar @yogarshi Thank you for the pointer, very interesting! And yes, not sure whether the conclusions of these studies are actually very positive... 😕 But at least they should encourage further research on modeling meaning in context, as there definitely seems to be room for improvement",https://arxiv.org/abs/1808.09121,"By design, word embeddings are unable to model the dynamic nature of words' semantics, i.e., the property of words to correspond to potentially different meanings. To address this limitation, dozens of specialized meaning representation techniques such as sense or contextualized embeddings have been proposed. However, despite the popularity of research on this topic, very few evaluation benchmarks exist that specifically focus on the dynamic semantics of words. In this paper we show that existing models have surpassed the performance ceiling of the standard evaluation dataset for the purpose, i.e., Stanford Contextual Word Similarity, and highlight its shortcomings. To address the lack of a suitable benchmark, we put forward a large-scale Word in Context dataset, called WiC, based on annotations curated by experts, for generic evaluation of context-sensitive representations. WiC is released in this https URL ","WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive
Meaning Representations",3,"['Humans are very good at distinguishing senses given different contexts but computers (including SotA models like #ELMo, sense embeddings...) still struggle with it. Check out our new dataset on this topic! #NLProc\n\nDataset: \nPaper: ', 'With @tpilehvar', '@yogarshi Thank you for the pointer, very interesting! And yes, not sure whether the conclusions of these studies are actually very positive... 😕 But at least they should encourage further research on modeling meaning in context, as there definitely seems to be room for improvement']",18,08,549
446,9,990793583883042817,930224996785332224,"Jacob White, PhD",Check out my new paper on measuring the radio emission of A-type stars! This work has important ramifications for characterizing debris disks (think exo-asteroid belts) so that we can more accurately study the amount and distribution of debris around stars ,https://arxiv.org/abs/1804.10206,"In the early stages of planet formation, small dust grains grow to become mm sized particles in debris disks around stars. These disks can in principle be characterized by their emission at submillimeter and millimeter wavelengths. Determining both the occurrence and abundance of debris in unresolved circumstellar disks of A-type main-sequence stars requires that the stellar photospheric emission be accurately modeled. To better constrain the photospheric emission for such systems, we present observations of Sirius A, an A-type star with no known debris, from the JCMT, SMA, and VLA at 0.45, 0.85, 0.88, 1.3, 6.7, and 9.0 mm. We use these observations to inform a PHOENIX model of Sirius A's atmosphere. We find the model provides a good match to these data and can be used as a template for the submm/mm emission of other early A-type stars where unresolved debris may be present. The observations are part of an ongoing observational campaign entitled Measuring the Emission of Stellar Atmospheres at Submm/mm wavelengths (MESAS) ","MESAS: Measuring the Emission of Stellar Atmospheres at Submm/mm
wavelengths",1,['Check out my new paper on measuring the radio emission of A-type stars! This work has important ramifications for characterizing debris disks (think exo-asteroid belts) so that we can more accurately study the amount and distribution of debris around stars\n'],18,04,263
447,207,1359520140438691841,268337552,Nicolas Kourtellis," We find great discrepancies in cookie syncing, pixel tracking & device fingerprinting within 100+ Left, Center&Right-leaning Indian news sites & third-parties. @vibhor711,@vekariayash,@pk_plus_plus,@sangeetamptra,@shounakset,@sakthibalanm,@nishanthsastry ",https://arxiv.org/abs/2102.03656,"India is experiencing intense political partisanship and sectarian divisions. The paper performs, to the best of our knowledge, the first comprehensive analysis on the Indian online news media with respect to tracking and partisanship. We build a dataset of 103 online, mostly mainstream news websites. With the help of two experts, alongside data from the Media Ownership Monitor of the Reporters without Borders, we label these websites according to their partisanship (Left, Right, or Centre). We study and compare user tracking on these sites with different metrics: numbers of cookies, cookie synchronizations, device fingerprinting, and invisible pixel-based tracking. We find that Left and Centre websites serve more cookies than Right-leaning websites. However, through cookie synchronization, more user IDs are synchronized in Left websites than Right or Centre. Canvas fingerprinting is used similarly by Left and Right, and less by Centre. Invisible pixel-based tracking is 50% more intense in Centre-leaning websites than Right, and 25% more than Left. Desktop versions of news websites deliver more cookies than their mobile counterparts. A handful of third-parties are tracking users in most websites in this study. This paper, by demonstrating intense web tracking, has implications for research on overall privacy of users visiting partisan news websites in India. ",Under the Spotlight: Web Tracking in Indian Partisan News Websites,1,"[' We find great discrepancies in cookie syncing, pixel tracking & device fingerprinting within 100+ Left, Center&Right-leaning Indian news sites & third-parties.\n@vibhor711,@vekariayash,@pk_plus_plus,@sangeetamptra,@shounakset,@sakthibalanm,@nishanthsastry ']",21,02,268
448,92,1415928833488785409,1515424688,Armen Aghajanyan,"I'm excited to announce our new pre-training paper: HTLM: Hyper-Text Pre-Training and Prompting of Language Models () where we unlock new ways of priming and automatically generating prompts by pre-training on simplified HTML. Modeling HTML has a ton of advantages: it is easily gathered at scale, it provides rich end-task like supervision (i.e. id attributes encode category information) and allows for structured prompting that follows semantics of HTML, i.e. zero-shot summarization by infilling <title> We train a BART-like model on over 20T of simplified HTML with stochastic size hints over the masks to allow for more fine-grained control during prompting. We call this model HTLM (Hyper-Text Language Model). Turns out that we can represent a good amount of NLP tasks as HTML prompts. A prototypical example is doing summarization by infilling a <title>. Below we ask HTLM to do exactly this with a hint that our generated mask should be roughly 12 tokens. Furthermore, by doing clever masking and asking HTLM to generate the most likely hyper-text formatting for any available training data, we can automatically generate valid prompts from very few examples. We measure the zero-shot performance of manually found prompts and auto-generated prompts (with and without size hints) and discover that we’re able to consistently outperform previous SOTA zero-shot summarization. Size hints turn out to be quite important when doing manual prompt tuning as traditionally we would have to find a prompt that both communicates the semantics of the task as well as the length of the generated output, whereas size-hints allow you to focus only on the former. We also get competitive results on zero-shot classification results. Furthermore we are able to do k=1-shot table-to-text generation by directly representing the tables in HTML. We do well on fine-tuning HTLM on these tasks as well. We follow up by doing an analysis from Scao and Rush, “How Many Data Points is a Prompt for HTLM and find that HTML prompts are generally worth more to HTLM than NL prompts to other pre-trained models. This was work done with amazing co-authors @diametralis @ml_perception @mandarjoshi_ @Hu_Hsu Gargi Ghosh and @LukeZettlemoyer",https://arxiv.org/abs/2107.06955,"We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling title tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hyper-text formatting for any available training data. 
We will release all code and models to support future HTLM research. ",HTLM: Hyper-Text Pre-Training and Prompting of Language Models,11,"[""I'm excited to announce our new pre-training paper: HTLM: Hyper-Text Pre-Training and Prompting of Language Models () where we unlock new ways of priming and automatically generating prompts by pre-training on simplified HTML."", 'Modeling HTML has a ton of advantages: it is easily gathered at scale, it provides rich end-task like supervision (i.e. id attributes encode category information) and allows for structured prompting that follows semantics of HTML, i.e. zero-shot summarization by infilling <title>', 'We train a BART-like model on over 20T of simplified HTML with stochastic size hints over the masks to allow for more fine-grained control during prompting. We call this model HTLM (Hyper-Text Language Model).', 'Turns out that we can represent a good amount of NLP tasks as HTML prompts. A prototypical example is doing summarization by infilling a <title>. Below we ask HTLM to do exactly this with a hint that our generated mask should be roughly 12 tokens. https://t.co/XC7IPmWdmd', 'Furthermore, by doing clever masking and asking HTLM to generate the most likely hyper-text formatting for any available training data, we can automatically generate valid prompts from very few examples. https://t.co/JebH5Z1kfa', 'We measure the zero-shot performance of manually found prompts and auto-generated prompts (with and without size hints) and discover that we’re able to consistently outperform previous SOTA zero-shot summarization. https://t.co/dRmClKphg6', 'Size hints turn out to be quite important when doing manual prompt tuning as traditionally we would have to find a prompt that both communicates the semantics of the task as well as the length of the generated output, whereas size-hints allow you to focus only on the former. https://t.co/rNQSp4M6xQ', 'We also get competitive results on zero-shot classification results. https://t.co/QImoolZ4DD', 'Furthermore we are able to do k=1-shot table-to-text generation by directly representing the tables in HTML. We do well on fine-tuning HTLM on these tasks as well. https://t.co/pBjtwDgPO4', 'We follow up by doing an analysis from Scao and Rush, “How Many Data Points is a Prompt for HTLM and find that HTML prompts are generally worth more to HTLM than NL prompts to other pre-trained models. https://t.co/JKIzuje5x5', 'This was work done with amazing co-authors @diametralis @ml_perception @mandarjoshi_ @Hu_Hsu Gargi Ghosh and @LukeZettlemoyer']",21,07,2288
449,127,1446518686152593409,335130183,Minqi Jiang,"🏎️ Replay-Guided Adversarial Environment Design Prioritized Level Replay (PLR) is secretly a form of unsupervised environment design. This leads to new theory improving PLR + impressive zero-shot transfer, like driving the Nürburgring Grand Prix. paper: Like living organisms, RL agents are shaped by their environment. How can we improve RL agents by designing the environment instead of the agent? We show that random search, as done by PLR, is surprisingly effective for designing useful environments. PLR prioritizes randomly-sampled levels with higher learning potential, leading to auto-curricula that improve generalization. We view PLR as a ""curriculum game"" between a student and two teachers: a generator choosing random levels and a curator selecting for learning potential. This theoretically unifies PLR with other unsupervised environment design (UED) methods like PAIRED, resulting in a version called Robust PLR (PLR⊥), and a replay-based version of PAIRED, called REPAIRED—each provably resulting in a minimax regret policy at equilibrium. The modification to robustify PLR is counterintuitive: In addition to replacing the value loss priority scores with a regret-approximating score, we only train the agent on trajectories from replayed levels. That is, we improve our agents by training on *less* data. We test the generalization of agents trained via our “replay-guided” methods to those trained via replay-free baselines in a maze domain, and show improved zero-shot transfer to challenging human-designed mazes. But our agents quickly grew bored of mazes, so we took them to the speedway. We trained them on racetracks dynamically generated by each method, and test zero-shot transfer to twenty Formula 1 tracks, where Robust PLR agents take home the trophy. Despite its effectiveness, level replay is only half of the game. More sophisticated versions of the generator, whose levels the curator (PLR) selects for replay, should enable even further gains. Stay tuned for some developments in this direction. I owe much of these exciting results to the contributions of my collaborators, who braved with me an, at times, obscure road full of twists and turns. Congrats to @MichaelD1729, @jparkerholder, @j_foerst, @egrefen, and @_rockt. You can chat with us about this work @NeurIPSConf.",https://arxiv.org/abs/2110.02439,"Deep reinforcement learning (RL) agents may successfully generalize to new settings if trained on an appropriately diverse set of environment and task configurations. Unsupervised Environment Design (UED) is a promising self-supervised RL paradigm, wherein the free parameters of an underspecified environment are automatically adapted during training to the agent's capabilities, leading to the emergence of diverse training environments. Here, we cast Prioritized Level Replay (PLR), an empirically successful but theoretically unmotivated method that selectively samples randomly-generated training levels, as UED. We argue that by curating completely random levels, PLR, too, can generate novel and complex levels for effective training. This insight reveals a natural class of UED methods we call Dual Curriculum Design (DCD). Crucially, DCD includes both PLR and a popular UED algorithm, PAIRED, as special cases and inherits similar theoretical guarantees. This connection allows us to develop novel theory for PLR, providing a version with a robustness guarantee at Nash equilibria. 
Furthermore, our theory suggests a highly counterintuitive improvement to PLR: by stopping the agent from updating its policy on uncurated levels (training on less data), we can improve the convergence to Nash equilibria. Indeed, our experiments confirm that our new method, PLR$^{\perp}$, obtains better results on a suite of out-of-distribution, zero-shot transfer tasks, in addition to demonstrating that PLR$^{\perp}$ improves the performance of PAIRED, from which it inherited its theoretical framework. ",Replay-Guided Adversarial Environment Design,9,"['🏎️ Replay-Guided Adversarial Environment Design\n\nPrioritized Level Replay (PLR) is secretly a form of unsupervised environment design. This leads to new theory improving PLR + impressive zero-shot transfer, like driving the Nürburgring Grand Prix.\n\npaper: ', 'Like living organisms, RL agents are shaped by their environment. How can we improve RL agents by designing the environment instead of the agent? We show that random search, as done by PLR, is surprisingly effective for designing useful environments. https://t.co/LPg360XjRx', 'PLR prioritizes randomly-sampled levels with higher learning potential, leading to auto-curricula that improve generalization. We view PLR as a ""curriculum game"" between a student and two teachers: a generator choosing random levels and a curator selecting for learning potential. https://t.co/w2R08zu3tm', 'This theoretically unifies PLR with other unsupervised environment design (UED) methods like PAIRED, resulting in a version called Robust PLR (PLR⊥), and a replay-based version of PAIRED, called REPAIRED—each provably resulting in a minimax regret policy at equilibrium. https://t.co/JzvBc4T3g5', 'The modification to robustify PLR is counterintuitive: In addition to replacing the value loss priority scores with a regret-approximating score, we only train the agent on trajectories from replayed levels. That is, we improve our agents by training on *less* data. https://t.co/Tr7faNlkWL', 'We test the generalization of agents trained via our “replay-guided” methods to those trained via replay-free baselines in a maze domain, and show improved zero-shot transfer to challenging human-designed mazes. https://t.co/KyawCnXQeL', 'But our agents quickly grew bored of mazes, so we took them to the speedway. We trained them on racetracks dynamically generated by each method, and test zero-shot transfer to twenty Formula 1 tracks, where Robust PLR agents take home the trophy. https://t.co/w71LoXMBBE', 'Despite its effectiveness, level replay is only half of the game. More sophisticated versions of the generator, whose levels the curator (PLR) selects for replay, should enable even further gains. Stay tuned for some developments in this direction.', 'I owe much of these exciting results to the contributions of my collaborators, who braved with me an, at times, obscure road full of twists and turns. Congrats to @MichaelD1729, @jparkerholder, @j_foerst, @egrefen, and @_rockt. You can chat with us about this work @NeurIPSConf.']",21,10,2366
450,46,1276250079356170240,1133088397273313282,francesco croce,"Check out our new paper on sparse black-box perturbations! New SOTA in query efficiency and success rate. For L0 attacks, changing *only 0.1% pixels* is sufficient to break the models. @maksym_andr Paper: Code: (1/n) Sparse perturbations are challenging for gradient-based methods because of the combinatorial constraints. To tackle this problem, we propose a flexible framework based on random search that naturally handles complicated constraints and leads to query-efficient attacks. (2/n) Our framework leads to the first black-box patch and frame attacks that don’t rely on extra knowledge such as a surrogate model. Having access to a surrogate model is a very strong assumption and it can be very expensive to obtain. (3/n) Moreover, although transfer attacks rely on surrogate models, they in general *do not perform well* and patches/frames are no exceptions. Sparse-RS outperforms transfer attacks (Tr-PGD) by a large margin in terms of the success rate. (4/n) The resulting adversarial patches are very different from the patches found with white-box PGD and some of them are quite interpretable (see the patch for class Peacock). (5/n) We show the versatility of Sparse-RS framework by generating both image-specific and universal patches/frames + L0 perturbations for images and malware. Sparse-RS outperforms other methods on all these threat models including **white-box** L0-PGD attack, see PGD_0 (wb). (6/n) Takeaway message: don’t spend queries on estimating the gradient, do random search with a properly chosen sampling distribution! More details are in the paper: (7/n)",https://arxiv.org/abs/2006.12834,"We propose a versatile framework based on random search, Sparse-RS, for score-based sparse targeted and untargeted attacks in the black-box setting. Sparse-RS does not rely on substitute models and achieves state-of-the-art success rate and query efficiency for multiple sparse attack models: $l_0$-bounded perturbations, adversarial patches, and adversarial frames. The $l_0$-version of untargeted Sparse-RS outperforms all black-box and even all white-box attacks for different models on MNIST, CIFAR-10, and ImageNet. Moreover, our untargeted Sparse-RS achieves very high success rates even for the challenging settings of $20\times20$ adversarial patches and $2$-pixel wide adversarial frames for $224\times224$ images. Finally, we show that Sparse-RS can be applied to generate targeted universal adversarial patches where it significantly outperforms the existing approaches. The code of our framework is available at this https URL ","Sparse-RS: a versatile framework for query-efficient sparse black-box
adversarial attacks",7,"['Check out our new paper on sparse black-box perturbations! New SOTA in query efficiency and success rate. For L0 attacks, changing *only 0.1% pixels* is sufficient to break the models. @maksym_andr \n\nPaper: \nCode: \n(1/n) ', 'Sparse perturbations are challenging for gradient-based methods because of the combinatorial constraints. To tackle this problem, we propose a flexible framework based on random search that naturally handles complicated constraints and leads to query-efficient attacks.\n(2/n) https://t.co/1bjL34J7uJ', 'Our framework leads to the first black-box patch and frame attacks that don’t rely on extra knowledge such as a surrogate model. Having access to a surrogate model is a very strong assumption and it can be very expensive to obtain.\n(3/n) https://t.co/xUDwvkXzpM', 'Moreover, although transfer attacks rely on surrogate models, they in general *do not perform well* and patches/frames are no exceptions. Sparse-RS outperforms transfer attacks (Tr-PGD) by a large margin in terms of the success rate.\n(4/n) https://t.co/z4Ijtk6mcM', 'The resulting adversarial patches are very different from the patches found with white-box PGD and some of them are quite interpretable (see the patch for class Peacock).\n(5/n) https://t.co/mWatfd4Ybc', 'We show the versatility of Sparse-RS framework by generating both image-specific and universal patches/frames + L0 perturbations for images and malware. Sparse-RS outperforms other methods on all these threat models including **white-box** L0-PGD attack, see PGD_0 (wb).\n(6/n) https://t.co/X5bFdrWbT6', 'Takeaway message: don’t spend queries on estimating the gradient, do random search with a properly chosen sampling distribution! \nMore details are in the paper: https://t.co/8MvRlRrdr7\n(7/n)']",20,06,1654
451,41,1407963590515118083,1140222123006472194,Kasper Elm Heintz,"New paper on @arxiv_org today, lead by W. Fong @FongGroup at: Here we identify and characterize the host galaxy of the repeating, super-bursting FRB 20201124A We found that the galaxy is a dusty, relatively modest star-formning galaxy (in good agreement with last weeks result by Vikram Ravi and collaborators: ) with a potential hot MIR dust component contributing ~10-30% to the SED. Perhaps most intriguing, is that we could constrain the star-formation history of the galaxy finding that >90% of its mass (and thus likely the progenitor star/object) was formed 1 Gyr ago, putting strong constraints on the likely progenitor channels of this FRB. ",https://arxiv.org/abs/2106.11993,"We present the Australian Square Kilometre Array Pathfinder (ASKAP) localization and follow-up observations of the host galaxy of the repeating fast radio burst (FRB) source, FRB20201124A, the fifth such extragalactic repeating FRB with an identified host. From spectroscopic observations using the 6.5-m MMT Observatory, we derive a redshift of $z=0.0979 \pm 0.0001$, a star formation rate inferred from H$\alpha$ emission of SFR(H$\alpha$) $\approx 2.1 M_{\odot}$ yr$^{-1}$, and a gas-phase metallicity of 12+log(O/H)$\approx 9.0$. By jointly modeling the 12-filter optical-mid-infrared (MIR) photometry and spectroscopy of the host, we infer a median stellar mass of $\approx 2 \times 10^{10} M_{\odot}$, internal dust extinction of $A_V\approx 1-1.5$ mag, and a mass-weighted stellar population age of $\approx 5-6$ Gyr. Connecting these data to the radio and X-ray observations, we cannot reconcile the broad-band behavior with strong AGN activity and instead attribute the dominant source of persistent radio emission to star formation, likely originating from the circumnuclear region of the host. The modeling also indicates a hot dust component contributing to the MIR luminosity at a level of $\approx 10-30\%$. We model the host galaxy's star formation and mass assembly histories, finding that the host assembled $>90\%$ of its mass by 1 Gyr ago and exhibited a fairly constant SFR for most of its existence, with no clear evidence of past star-burst activity. ","Chronicling the Host Galaxy Properties of the Remarkable Repeating FRB
20201124A",3,"['New paper on @arxiv_org today, lead by W. Fong @FongGroup at: \n\nHere we identify and characterize the host galaxy of the repeating, super-bursting FRB 20201124A ', 'We found that the galaxy is a dusty, relatively modest star-formning galaxy (in good agreement with last weeks result by Vikram Ravi and collaborators: https://t.co/KZLlitr5fm) with a potential hot MIR dust component contributing ~10-30% to the SED. https://t.co/eEClt2bb3T', 'Perhaps most intriguing, is that we could constrain the star-formation history of the galaxy finding that >90% of its mass (and thus likely the progenitor star/object) was formed 1 Gyr ago, putting strong constraints on the likely progenitor channels of this FRB. https://t.co/QJAutE2fBF']",21,06,686
452,82,1138455205094338563,2785337469,Sebastian Ruder,"In our new paper (my first collaboration at DeepMind, yay!) with Cyprien, @ikekong, & @DaniYogatama, we leverage episodic memory during training (sparse replay) and inference (local adaptation) for continual learning (on QA and classification tasks). ",https://arxiv.org/abs/1906.01076,We introduce a lifelong language learning setup where a model needs to learn from a stream of text examples without any dataset identifier. We propose an episodic memory model that performs sparse experience replay and local adaptation to mitigate catastrophic forgetting in this setup. Experiments on text classification and question answering demonstrate the complementary benefits of sparse experience replay and local adaptation to allow the model to continuously learn from new datasets. We also show that the space complexity of the episodic memory module can be reduced significantly (~50-90%) by randomly choosing which examples to store in memory with a minimal decrease in performance. We consider an episodic memory component as a crucial building block of general linguistic intelligence and see our model as a first step in that direction. ,Episodic Memory in Lifelong Language Learning,1,"['In our new paper (my first collaboration at DeepMind, yay!) with Cyprien, @ikekong, & @DaniYogatama, we leverage episodic memory during training (sparse replay) and inference (local adaptation) for continual learning (on QA and classification tasks).\n ']",19,06,264
453,19,1521055445024223236,3403213937,Paul Mollière,"Hi all! Here a thread on my new paper (). This project was born when we derived atmospheric C/Os of directly imaged planets with GRAVITY. What does that tell us about formation? Can we constrain formation locations, or modes? One of our conclusions is that planet formation is so complex that it may be difficult to invert the process for a given planet, based on its composition. E.g., The plot below shows HR 8799e's inferred formation location when adding chemical evolution to the Öberg+11 disk model. What is more, quite a lot of different planet formation assumptions lead to basically identical compositional outcomes. As an example, we study the Öberg disk, adding chemical evolution, or adding pebbles. They all do a good job at fitting the planets' C/O and [Fe/H], mostly. What does this mean for the future? The power of atmospheric compositions lies in the trends that may emerge when analyzing the atmospheres of a large planet population. High-res spectra, JWST, and Ariel in the future will certainly help. Look at the wealth of visible species: If many planets have a very high metallicity, this could be difficult to reproduce with pebbles. But high C/O and intermediate enrichment may point toward pebbles being important. The refractory content of an atmosphere could distinguish between planetesimals and pebbles, etc... In summary: the more absorber species we probe for the planet population (carrying carbon, oxygen, nitrogen, refractory species), the more we will be able to say about the broad strokes of planet formation. Truly inverting formation for a single planet may stay a challenge!",https://arxiv.org/abs/2204.13714,"Constraining planet formation based on the atmospheric composition of exoplanets is a fundamental goal of the exoplanet community. Existing studies commonly try to constrain atmospheric abundances, or to analyze what abundance patterns a given description of planet formation predicts. However, there is also a pressing need to develop methodologies that investigate how to transform atmospheric compositions into planetary formation inferences. In this study we summarize the complexities and uncertainties of state-of-the-art planet formation models and how they influence planetary atmospheric compositions. We introduce a methodology that explores the effect of different formation model assumptions when interpreting atmospheric compositions. We apply this framework to the directly imaged planet HR 8799e. Based on its atmospheric composition, this planet may have migrated significantly during its formation. We show that including the chemical evolution of the protoplanetary disk leads to a reduced need for migration. Moreover, we find that pebble accretion can reproduce the planet's composition, but some of our tested setups lead to too low atmospheric metallicities, even when considering that evaporating pebbles may enrich the disk gas. We conclude that the definitive inversion from atmospheric abundances to planet formation for a given planet may be challenging, but a qualitative understanding of the effects of different formation models is possible, opening up pathways for new investigations. ","Interpreting the atmospheric composition of exoplanets: sensitivity to
planet formation assumptions",6,"['Hi all! Here a thread on my new paper (). This project was born when we derived atmospheric C/Os of directly imaged planets with GRAVITY. What does that tell us about formation? Can we constrain formation locations, or modes?', ""One of our conclusions is that planet formation is so complex that it may be difficult to invert the process for a given planet, based on its composition. E.g., The plot below shows HR 8799e's inferred formation location when adding chemical evolution to the Öberg+11 disk model. https://t.co/uD7tAnorqs"", ""What is more, quite a lot of different planet formation assumptions lead to basically identical compositional outcomes. As an example, we study the Öberg disk, adding chemical evolution, or adding pebbles. They all do a good job at fitting the planets' C/O and [Fe/H], mostly. https://t.co/9TLDqndAl8"", 'What does this mean for the future? The power of atmospheric compositions lies in the trends that may emerge when analyzing the atmospheres of a large planet population. High-res spectra, JWST, and Ariel in the future will certainly help. Look at the wealth of visible species: https://t.co/pAcGXBPQVL', 'If many planets have a very high metallicity, this could be difficult to reproduce with pebbles. But high C/O and intermediate enrichment may point toward pebbles being important. The refractory content of an atmosphere could distinguish between planetesimals and pebbles, etc...', 'In summary: the more absorber species we probe for the planet population (carrying carbon, oxygen, nitrogen, refractory species), the more we will be able to say about the broad strokes of planet formation. Truly inverting formation for a single planet may stay a challenge!']",22,04,1642
454,53,1286777571468808194,3031558614,Zivvy Ξpstein,"Another day another generator. But are the artifacts produced rote statistical averages or ""alien play"" that transcends expectations? Finding those rare gems is the topic of our new paper & the below primer on how this generativity maps to intelligence ",https://arxiv.org/abs/2007.11119,"The latent space modeled by generative adversarial networks (GANs) represents a large possibility space. By interpolating categories generated by GANs, it is possible to create novel hybrid images. We present ""Meet the Ganimals,"" a casual creator built on interpolations of BigGAN that can generate novel, hybrid animals called ganimals by efficiently searching this possibility space. Like traditional casual creators, the system supports a simple creative flow that encourages rapid exploration of the possibility space. Users can discover new ganimals, create their own, and share their reactions to aesthetic, emotional, and morphological characteristics of the ganimals. As users provide input to the system, the system adapts and changes the distribution of categories upon which ganimals are generated. As one of the first GAN-based casual creators, Meet the Ganimals is an example how casual creators can leverage human curation and citizen science to discover novel artifacts within a large possibility space. ",Interpolating GANs to Scaffold Autotelic Creativity,1,"['Another day another generator.\n\nBut are the artifacts produced rote statistical averages or ""alien play"" that transcends expectations? Finding those rare gems is the topic of our new paper & the below primer on how this generativity maps to intelligence\n\n ']",20,07,266
455,104,1448360409224843270,1377629674432438272,Jake Lustig-Yaeger,"New paper! Although most known exoplanets are detected because they transit, the vast majority of exoplanets don’t transit. These are their stories. {Duh Dun}. Last year, @kevinbstevenson wrote a paper introducing the concept of Planetary Infrared Excess, or PIE, as a novel method to characterize the atmospheres of transiting and non-transiting planets alike. Exoplanets are MUCH cooler than stars. Fact. Because of this planets and stars emit light with a much different spectrum, and the PIE technique uses broad wavelength spectra to uniquely resolve both light sources, despite them being spatially unresolved. In collaboration with @kevinbstevenson @mayorgalc @NorBidTheStars @_astronomay @izenplanet @mommascientist, we set out to determine how well the PIE technique will work to study the atmospheres of hot Jupiters. First, we cooked up a PIE model to investigate whether it’s possible to retrieve information about exoplanet atmospheres while simultaneously modeling the light emitted by the star and the much fainter planet. Second, we examined whether or not stellar parameters can masquerade as planetary parameters or vice versa (i.e., are there significant planet-star degeneracies?). With broad enough wavelength coverage, we found the two sources to be separable and not degenerate. We then repeated our analyses using different JWST instruments that cover different wavelength ranges and using different exposure times to identify the optimal use of JWST time/data. We found that a combination of NIRISS+NIRSpec+MIRI performs best. Then we forgot everything we knew about the planet’s radius and tried to see if we could constrain it using PIE. Again, broad wavelength data using all three instruments is optimal. All in all, it looks like PIE is on the menu for follow-up study and validation using JWST ERS and Cycle 1 observations! For more details including how we handled potential pitfalls, like exozodi dust and absolute flux calibration, check out the paper on the arxiv nearest you. And here's a link to the accepted version of the paper on arxiv: ",http://arxiv.org/abs/2110.02247,"To increase the sample size of future atmospheric characterization efforts, we build on the planetary infrared excess (PIE) technique that has been proposed as a means to detect and characterize the thermal spectra of transiting and non-transiting exoplanets using sufficiently broad wavelength coverage to uniquely constrain the stellar and planetary spectral components from spatially unresolved observations. We performed simultaneous retrievals of stellar and planetary spectra for the archetypal planet WASP-43b in its original configuration and a non-transiting configuration to determine the efficacy of the PIE technique for characterizing the planet's nightside atmospheric thermal structure and composition using typical out-of-transit JWST observations. We found that using PIE with JWST should enable the stellar and planetary spectra to be disentangled with no degeneracies seen between the two flux sources, thus allowing robust constraints on the planet's nightside thermal structure and water abundance to be retrieved. The broad wavelength coverage achieved by combining spectra from NIRISS, NIRSpec, and MIRI enables PIE retrievals that are within 10% of the precision attained using traditional secondary eclipse measurements, although mid-IR observations with MIRI alone may face up to 3.5 times lower precision on the planet's irradiation temperature. 
For non-transiting planets with unconstrained radius priors, we were able to identify and break the degeneracy between planet radius and irradiation temperature using data that resolved the peak of both the stellar and planetary spectra, thus potentially increasing the number of planets amenable to atmospheric characterization with JWST and other future mission concepts. ","Retrieving Exoplanet Atmospheres using Planetary Infrared Excess:
Prospects for the Nightside of WASP-43 b and other Hot Jupiters",10,"['New paper! \n\nAlthough most known exoplanets are detected because they transit, the vast majority of exoplanets don’t transit. These are their stories. {Duh Dun}. \n\n ', 'Last year, @kevinbstevenson wrote a paper introducing the concept of Planetary Infrared Excess, or PIE, as a novel method to characterize the atmospheres of transiting and non-transiting planets alike. https://t.co/1TEds1UUnW', 'Exoplanets are MUCH cooler than stars. Fact. Because of this planets and stars emit light with a much different spectrum, and the PIE technique uses broad wavelength spectra to uniquely resolve both light sources, despite them being spatially unresolved. https://t.co/ewLt9MDcHn', 'In collaboration with @kevinbstevenson @mayorgalc @NorBidTheStars @_astronomay @izenplanet @mommascientist, we set out to determine how well the PIE technique will work to study the atmospheres of hot Jupiters.', 'First, we cooked up a PIE model to investigate whether it’s possible to retrieve information about exoplanet atmospheres while simultaneously modeling the light emitted by the star and the much fainter planet. https://t.co/sp41I845Wd', 'Second, we examined whether or not stellar parameters can masquerade as planetary parameters or vice versa (i.e., are there significant planet-star degeneracies?). With broad enough wavelength coverage, we found the two sources to be separable and not degenerate. https://t.co/77oGnbRSPR', 'We then repeated our analyses using different JWST instruments that cover different wavelength ranges and using different exposure times to identify the optimal use of JWST time/data. We found that a combination of NIRISS+NIRSpec+MIRI performs best. https://t.co/7KMbmWE1dE', 'Then we forgot everything we knew about the planet’s radius and tried to see if we could constrain it using PIE. Again, broad wavelength data using all three instruments is optimal. https://t.co/tNogMy58nd', 'All in all, it looks like PIE is on the menu for follow-up study and validation using JWST ERS and Cycle 1 observations!\n\nFor more details including how we handled potential pitfalls, like exozodi dust and absolute flux calibration, check out the paper on the arxiv nearest you. https://t.co/Sy4Jgr3rAL', ""And here's a link to the accepted version of the paper on arxiv: https://t.co/osFAg7Rhzw https://t.co/3JnlHci6Xf""]",21,10,2156
456,152,1247369972948430848,288623330,Aaron Hertzmann,"Check out our new paper on discovering of controls for making images with GANs, without any supervision (other than optionally naming the controls afterward), by Erik Härkönen, with @jaakkolehtinen, @sylvain_paris, and myself. The basic idea is to use PCA to identify the important modes of variation that have been learned by a GAN. For such a simple algorithm, it’s surprisingly effective. Restricting controls to modify only a few network layers makes them even more useful. Our methods give you a way to explore the types of variations that your GAN has learned. Pushing the sliders far past their normal ranges can make some intriguingly-unrealistic images. We also show a very simple way to add StyleGAN-like style mixing to BigGAN. The applications to owl customization are endless. @c_dan4th I was, of course, thinking the exact same thing :)",https://arxiv.org/abs/2004.02546,"This paper describes a simple technique to analyze Generative Adversarial Networks (GANs) and create interpretable controls for image synthesis, such as change of viewpoint, aging, lighting, and time of day. We identify important latent directions based on Principal Components Analysis (PCA) applied either in latent space or feature space. Then, we show that a large number of interpretable controls can be defined by layer-wise perturbation along the principal directions. Moreover, we show that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner. We show results on different GANs trained on various datasets, and demonstrate good qualitative matches to edit directions found through earlier supervised approaches. ",GANSpace: Discovering Interpretable GAN Controls,6,"['Check out our new paper on discovering of controls for making images with GANs, without any supervision (other than optionally naming the controls afterward), by Erik Härkönen, with @jaakkolehtinen, @sylvain_paris, and myself.\n ', 'The basic idea is to use PCA to identify the important modes of variation that have been learned by a GAN. For such a simple algorithm, it’s surprisingly effective. Restricting controls to modify only a few network layers makes them even more useful. https://t.co/BgpgAcyPsj', 'Our methods give you a way to explore the types of variations that your GAN has learned. Pushing the sliders far past their normal ranges can make some intriguingly-unrealistic images. https://t.co/DL93xN7P38', 'We also show a very simple way to add StyleGAN-like style mixing to BigGAN. https://t.co/3JbBHKm14S', 'The applications to owl customization are endless. https://t.co/eACEg3kpLm', '@c_dan4th I was, of course, thinking the exact same thing :)']",20,04,892
457,99,1258627924279939078,1138762581164855298,Christoph Ternes,"New paper today with @ntinaValentina and André de Gouvêa, . We set lower bounds on the neutrino wave packet width using data from RENO and Daya Bay and discuss sensitivities to either measure decoherence or improve this bound at the future JUNO experiment.",https://arxiv.org/abs/2005.03022,"We explore how well reactor antineutrino experiments can constrain or measure the loss of quantum coherence in neutrino oscillations. We assume that decoherence effects are encoded in the size of the neutrino wave-packet, $\sigma$. We find that the current experiments Daya Bay and the Reactor Experiment for Neutrino Oscillation (RENO) already constrain $\sigma>1.0\times 10^{-4}$ nm and estimate that future data from the Jiangmen Underground Neutrino Observatory (JUNO) would be sensitive to $\sigma<2.1\times 10^{-3}$ nm. If the effects of loss of coherence are within the sensitivity of JUNO, we expect $\sigma$ to be measured with good precision. The discovery of nontrivial decoherence effects in JUNO would indicate that our understanding of the coherence of neutrino sources is, at least, incomplete. ",Probing neutrino quantum decoherence at reactor experiments,1,"['New paper today with @ntinaValentina and André de Gouvêa, . We set lower bounds on the neutrino wave packet width using data from RENO and Daya Bay and discuss sensitivities to either measure decoherence or improve this bound at the future JUNO experiment.']",20,05,263
458,5,1046784816060936192,3423739275,Felix Leditzky,"New paper on ""Asymptotic performance of port-based teleportation"", in which we determine the leading-order asymptotics of PBT in a few different settings. Joint work with M. Christandl, @cmajenz, @quantum_graeme, F. Speelman, and @michael_quantum ",https://arxiv.org/abs/1809.10751,"Quantum teleportation is one of the fundamental building blocks of quantum Shannon theory. While ordinary teleportation is simple and efficient, port-based teleportation (PBT) enables applications such as universal programmable quantum processors, instantaneous non-local quantum computation and attacks on position-based quantum cryptography. In this work, we determine the fundamental limit on the performance of PBT: for arbitrary fixed input dimension and a large number $N$ of ports, the error of the optimal protocol is proportional to the inverse square of $N$. We prove this by deriving an achievability bound, obtained by relating the corresponding optimization problem to the lowest Dirichlet eigenvalue of the Laplacian on the ordered simplex. We also give an improved converse bound of matching order in the number of ports. In addition, we determine the leading-order asymptotics of PBT variants defined in terms of maximally entangled resource states. The proofs of these results rely on connecting recently-derived representation-theoretic formulas to random matrix theory. Along the way, we refine a convergence result for the fluctuations of the Schur-Weyl distribution by Johansson, which might be of independent interest. ",Asymptotic performance of port-based teleportation,1,"['New paper on ""Asymptotic performance of port-based teleportation"", in which we determine the leading-order asymptotics of PBT in a few different settings.\nJoint work with M. Christandl, @cmajenz, @quantum_graeme, F. Speelman, and @michael_quantum \n']",18,09,253
459,324,1313747256957263874,1189826682263179265,Yuji Kanagawa,"Exploration is generally difficult, but if there are only a few rewarding states, we can use a simple inductive bias: just visit diverse regions! We propose IMOC, an option-learning method that learns diverse options. paper: code: ",https://arxiv.org/abs/2010.02756,"In this paper, we study the problem of autonomously discovering temporally abstracted actions, or options, for exploration in reinforcement learning. For learning diverse options suitable for exploration, we introduce the infomax termination objective defined as the mutual information between options and their corresponding state transitions. We derive a scalable optimization scheme for maximizing this objective via the termination condition of options, yielding the InfoMax Option Critic (IMOC) algorithm. Through illustrative experiments, we empirically show that IMOC learns diverse options and utilizes them for exploration. Moreover, we show that IMOC scales well to continuous control tasks. ",Diverse Exploration via InfoMax Options,1,"['Exploration is generally difficult, but if there are only a few rewarding states, we can use a simple inductive bias: just visit diverse regions!\nWe propose IMOC, an option-learning method that learns diverse options.\npaper: \ncode: ']",20,10,251
460,81,1285400841810145280,913238472357437445,Fuminobu TAKAHASHI,Our new paper is out today. The ALP mass of keV and the decay constant of 10^9 GeV suggested by the XENON1T excess satisfy the consistency relation predicted by the ALP inflation model. We studied the implication for thermal history after inflation. ,https://arxiv.org/abs/2007.10311,"The recent XENON1T excess in the electron recoil data can be explained by anomaly-free axion-like particle (ALP) dark matter with mass $m_\phi = 2.3 \pm 0.2\,$keV and the decay constant $f_\phi/q_e \simeq 2 \times 10^{10} \sqrt{\Omega_\phi/\Omega_{\rm DM}}\,{\rm GeV}$. Intriguingly, the suggested mass and decay constant are consistent with the relation, $f_\phi \sim 10^3 \sqrt{m_\phi M_p}$, predicted in a scenario where the ALP plays the role of the inflaton. This raises a possibility that the ALP dark matter responsible for the XENON1T excess also drove inflation in the very early universe. We study implications of the XENON1T excess for the ALP inflation and thermal history of the universe after inflation. ",What if ALP dark matter for the XENON1T excess is the inflaton,1,['Our new paper is out today. The ALP mass of keV and the decay constant of 10^9 GeV suggested by the XENON1T excess satisfy the consistency relation predicted by the ALP inflation model. We studied the implication for thermal history after inflation.\n\n'],20,07,256
461,199,1367646761494450176,1047899041311412224,Francois Grondin,"Here's the preprint of our new paper ""Audio scene monitoring using redundant un-localized microphone arrays"". I had the privilege to collaborate with researchers at University of California San Diego, including Pr. Peter Gerstoft and Pr. Yoav Freund. ",https://arxiv.org/abs/2103.01830,"We present a system for localizing sound sources in a room with several ad-hoc microphone arrays. Each circular array performs direction of arrival (DOA) estimation independently using commercial software. The DOAs are fed to a fusion center, concatenated, and used to perform the localization based on two proposed methods, which require only few labeled source locations (anchor points) for training. The first proposed method is based on principal component analysis (PCA) of the observed DOA and does not require any knowledge of anchor points. The array cluster can then perform localization on a manifold defined by the PCA of concatenated DOAs over time. The second proposed method performs localization using an affine transformation between the DOA vectors and the room manifold. The PCA has fewer requirements on the training sequence, but is less robust to missing DOAs from one of the arrays. The methods are demonstrated with five IoT 8-microphone circular arrays, placed at unspecified fixed locations in an office. Both the PCA and the affine method can easily map out a rectangle based on a few anchor points with similar accuracy. The proposed methods provide a step towards monitoring activities in a smart home and require little installation effort as the array locations are not needed. ",Audio scene monitoring using redundant ad-hoc microphone array networks,1,"['Here\'s the preprint of our new paper ""Audio scene monitoring using redundant un-localized microphone arrays"". I had the privilege to collaborate with researchers at University of California San Diego, including Pr. Peter Gerstoft and Pr. Yoav Freund.\n\n']",21,03,257
462,197,1287791917644865542,392413519,Matt Hall,"New @ncats_nih_gov pre-print - we mined almost 1,000 HTS drug repurposing datasets to find biological response correlations with #COVID19 screens. Found correlations with AP-1, and autophagy assays. The pre-print can be found here: Mining of high throughput screening database reveals AP-1 and autophagy pathways as potential targets for COVID-19 therapeutics Perhaps not surprising to learn that Ebola and MERS screening datasets also correlated with SARS-CoV-2 - that was reassuring!! ",https://arxiv.org/abs/2007.12242,"The recent global pandemic of Coronavirus Disease 2019 (COVID-19) caused by the new coronavirus SARS-CoV-2 presents an urgent need for new therapeutic candidates. Many efforts have been devoted to screening existing drug libraries with the hope to repurpose approved drugs as potential treatments for COVID-19. However, the antiviral mechanisms of action for the drugs found active in these phenotypic screens are largely unknown. To deconvolute the viral targets for more effective anti-COVID-19 drug development, we mined our in-house database of approved drug screens against 994 assays and compared their activity profiles with the drug activity profile in a cytopathic effect (CPE) assay of SARS-CoV-2. We found that the autophagy and AP-1 signaling pathway activity profiles are significantly correlated with the anti-SARS-CoV-2 activity profile. In addition, a class of neurology/psychiatry drugs was found significantly enriched with anti-SARS-CoV-2 activity. Taken together, these results have provided new insights into SARS-CoV-2 infection and potential targets for COVID-19 therapeutics. ","Mining of high throughput screening database reveals AP-1 and autophagy
pathways as potential targets for COVID-19 therapeutics",3,"['New @ncats_nih_gov pre-print - we mined almost 1,000 HTS drug repurposing datasets to find biological response correlations with #COVID19 screens. Found correlations with AP-1, and autophagy assays. \n\n ', 'The pre-print can be found here:\n\nMining of high throughput screening database reveals AP-1 and autophagy pathways as potential targets for COVID-19 therapeutics\n\nhttps://t.co/9dgHsWgUBJ', 'Perhaps not surprising to learn that Ebola and MERS screening datasets also correlated with SARS-CoV-2 - that was reassuring!! https://t.co/qb32Qi5Z6P']",20,07,515
463,2,1435417799665537026,16614440,Jeremy Foote,"🚨 New paper🚨 This was fun to work on. @sohwng and I argue that small communities deserve more attention, so we interviewed users in small subreddits. See the thread from @sohwng (and read the whole paper at ) but I wanted to highlight two findings: 1. We found that small communities help to partition the information space - giving people control of what they see and _who_ they interact with. 2. This one is what we *didn't* find. I thought we'd find that people participated in small communities because it was easier to make friends. But that was really rare - even in communities of a few hundred people, people didn't really know each other personally (or seek to). There was still a sense of belonging, of being part of a tribe and of sharing personal experiences; these just didn't often lead to one-to-one-relationships. @sohwng called this being interested in ""the personal but not the person"" There's a lot more to the paper, and @sohwng or I would love to talk more about it. Finally, @sohwng was an amazing first author and the driver of this project. She is doing lots of other amazing work that you'll be hearing about soon. You should definitely follow her here and on Google Scholar, and you should read and cite our paper! :)",https://arxiv.org/abs/2108.04282,"Many benefits of online communities---such as obtaining new information, opportunities, and social connections---increase with size. Thus, a ``successful'' online community often evokes an image of hundreds of thousands of users, and practitioners and researchers alike have sought to devise methods to achieve growth and thereby, success. On the other hand, small online communities exist in droves and many persist in their smallness over time. Turning to the highly popular discussion website Reddit, which is made up of hundreds of thousands of communities, we conducted a qualitative interview study examining how and why people participate in these persistently small communities, in order to understand why these communities exist when popular approaches would assume them to be failures. Drawing from twenty interviews, this paper makes several contributions: we describe how small communities provide unique informational and interactional spaces for participants, who are drawn by the hyperspecific aspects of the community; we find that small communities do not promote strong dyadic interpersonal relationships but rather promote group-based identity; and we highlight how participation in small communities is part of a broader, ongoing strategy to curate participants' online experience. We argue that online communities can be seen as nested niches: parts of an embedded, complex, symbiotic socio-informational ecosystem. We suggest ways that social computing research could benefit from more deliberate considerations of interdependence between diverse scales of online community sizes. ",Why do people participate in small online communities?,6,"['🚨 New paper🚨\nThis was fun to work on. @sohwng and I argue that small communities deserve more attention, so we interviewed users in small subreddits.\n\nSee the thread from @sohwng (and read the whole paper at ) but I wanted to highlight two findings: ', '1. We found that small communities help to partition the information space - giving people control of what they see and _who_ they interact with.', ""2. This one is what we *didn't* find. I thought we'd find that people participated in small communities because it was easier to make friends. 
But that was really rare - even in communities of a few hundred people, people didn't really know each other personally (or seek to)."", 'There was still a sense of belonging, of being part of a tribe and of sharing personal experiences; these just didn\'t often lead to one-to-one-relationships. @sohwng called this being interested in ""the personal but not the person""', ""There's a lot more to the paper, and @sohwng or I would love to talk more about it."", ""Finally, @sohwng was an amazing first author and the driver of this project. She is doing lots of other amazing work that you'll be hearing about soon. You should definitely follow her here and on Google Scholar, and you should read and cite our paper! :)""]",21,08,1256
464,33,1098507567393849344,995269493403373568,AlessandroIlBello,"Our new paper on ""#Quantum Storage of Frequency-Multiplexed Heralded Single Photons"" is finally online! We demonstrate storage of the whole spectrum of a photon composed by 15 different frequency modes. This is the link to the @arxiv version: ",https://arxiv.org/abs/1902.06657,"We report on the quantum storage of a heralded frequency-multiplexed single photon in an integrated laser-written rare-earth doped waveguide. The single photon contains 15 discrete frequency modes separated by 261 MHz and spaning across 4 GHz. It is obtained from a non-degenerate photon pair created via cavity-enhanced spontaneous down conversion, where the heralding photon is at telecom wavelength and the heralded photon is at 606 nm. The frequency-multimode photon is stored in a praseodymium-doped waveguide using the atomic frequency comb (AFC) scheme, by creating multiple combs within the inhomogeneous broadening of the crystal. Thanks to the intrinsic temporal multimodality of the AFC scheme, each spectral bin includes 9 temporal modes, such that the total number of stored modes is about 130. We demonstrate that the storage preserves the non-classical properties of the single photon, and its normalized frequency spectrum. ",Quantum Storage of Frequency-Multiplexed Heralded Single Photons,1,"['Our new paper on ""#Quantum Storage of Frequency-Multiplexed Heralded Single Photons"" is finally online!\n\nWe demonstrate storage of the whole spectrum of a photon composed by 15 different frequency modes. \n\nThis is the link to the @arxiv version:\n ']",19,02,257
465,162,1392057546567991296,915988922831863808,Ron Litman,"Happy to share our new paper named “TextAdaIN: Fine-Grained AdaIN for Robust Text Recognition”. This is a join work with Oren Nuriel and Sharon Fogel. See more details in our paper: In this paper we reveal that state-of-the-art text recognizers are prone to overly rely on local image statistics. We propose a simple normalization-based technique for moderating the reliance on local statistics, which enhances the performance of text recognizers.",https://arxiv.org/abs/2105.03906,"Leveraging the characteristics of convolutional layers, neural networks are extremely effective for pattern recognition tasks. However in some cases, their decisions are based on unintended information leading to high performance on standard benchmarks but also to a lack of generalization to challenging testing conditions and unintuitive failures. Recent work has termed this ""shortcut learning"" and addressed its presence in multiple domains. In text recognition, we reveal another such shortcut, whereby recognizers overly depend on local image statistics. Motivated by this, we suggest an approach to regulate the reliance on local statistics that improves text recognition performance. Our method, termed TextAdaIN, creates local distortions in the feature map which prevent the network from overfitting to local statistics. It does so by viewing each feature map as a sequence of elements and deliberately mismatching fine-grained feature statistics between elements in a mini-batch. Despite TextAdaIN's simplicity, extensive experiments show its effectiveness compared to other, more complicated methods. TextAdaIN achieves state-of-the-art results on standard handwritten text recognition benchmarks. Additionally, it generalizes to multiple architectures and to the domain of scene text recognition. Furthermore, we demonstrate that integrating TextAdaIN improves robustness towards more challenging testing conditions. ",TextAdaIN: Paying Attention to Shortcut Learning in Text Recognizers,2,"['Happy to share our new paper named “TextAdaIN: Fine-Grained AdaIN for Robust Text Recognition”. \n\nThis is a join work with Oren Nuriel and Sharon Fogel. \n\nSee more details in our paper: ', 'In this paper we reveal that state-of-the-art text recognizers are prone to overly rely on local image statistics. We propose a simple normalization-based technique for moderating the reliance on local statistics, which enhances the performance of text recognizers.']",21,05,463
466,17,1498669784745009158,1047899041311412224,Francois Grondin,"Our new paper where we introduce SmartBelt, a belt equipped with 8 microphones and 15 haptic motors that provides the user with haptic feedback to indicate the direction of arrival of sound. This could eventually benefit people with hearing impairment. ",https://arxiv.org/abs/2202.13974,"This paper introduces SmartBelt, a wearable microphone array on a belt that performs sound source localization and returns the direction of arrival with respect to the user waist. One of the haptic motors on the belt then vibrates in the corresponding direction to provide useful feedback to the user. We also introduce a simple calibration step to adapt the belt to different waist sizes. Experiments are performed to confirm the accuracy of this wearable sound source localization system, and results show a Mean Average Error (MAE) of 2.90 degrees, and a correct haptic motor selection with a rate of 92.3%. Results suggest the device can provide useful haptic feedback, and will be evaluated in a study with people having hearing impairments. ","SmartBelt: A Wearable Microphone Array for Sound Source Localization
with Haptic Feedback",1,"['Our new paper where we introduce SmartBelt, a belt equipped with 8 microphones and 15 haptic motors that provides the user with haptic feedback to indicate the direction of arrival of sound. This could eventually benefit people with hearing impairment.\n\n']",22,02,259
467,167,1387929898099134473,2800204849,Andrew Gordon Wilson,"What are Bayesian neural network posteriors really like? With high fidelity HMC, we study approximate inference quality, generalization, cold posteriors, priors, and more. With @Pavel_Izmailov, @sharadvikram, and Matthew D. Hoffman. 1/10 We show that Bayesian neural networks reassuringly provide good generalization, outperforming deep ensembles, standard training, and many approximate inference procedures, even with a single chain. 2/10 However, we find that BNNs are surprisingly poor at OOD generalization, even worse than SGD, despite the popularity of approximate inference in this setting, and the relatively good performance of BNNs for OOD detection. 3/10 Even though deep ensembles are often talked about as a ""non-Bayesian"" alternative to standard approximate inference, we find they approximate the HMC predictive distribution better than MFVI, and about as well as standard SGLD. 4/10 There has been much attention lately on ""cold posteriors"" in BDL, where the posterior raised to a power 1/T with T<1 can lead to better results. We see little evidence for a general cold posterior effect, which we find is largely due to data augmentation. 5/10 We explored Gaussian, mixture of Gaussian, and heavy-tailed logistic priors, which performed similarly, although the heavy-tailed priors did slightly better. We also found performance relatively insensitive to the scale of the Gaussian prior... 6/10 ...these results highlight the relative importance of the architecture compared to the distribution over weights in defining the induced prior over functions. Indeed, other work shows that even standard Gaussian priors have many useful properties: . 7/10 We present many other results, including mixing in function space vs. weight space, posterior geometry and mode connecting paths, single chain vs. multi-chain...! 8/10 Many of the results, both positive and negative for BDL, are contrary to conventional wisdom. 9/10 We worked hard to obtain these HMC samples, which we plan to release as a public resource, as a reference for evaluating more practical alternatives to HMC, and for researchers to explore their own questions around approximate inference in BDL. 10/10",https://arxiv.org/abs/2104.14421,"The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex. For computational reasons, researchers approximate this posterior using inexpensive mini-batch methods such as mean-field variational inference or stochastic-gradient Markov chain Monte Carlo (SGMCMC). To investigate foundational questions in Bayesian deep learning, we instead use full-batch Hamiltonian Monte Carlo (HMC) on modern architectures. 
We show that (1) BNNs can achieve significant performance gains over standard training and deep ensembles; (2) a single long HMC chain can provide a comparable representation of the posterior to multiple shorter chains; (3) in contrast to recent studies, we find posterior tempering is not needed for near-optimal performance, with little evidence for a ""cold posterior"" effect, which we show is largely an artifact of data augmentation; (4) BMA performance is robust to the choice of prior scale, and relatively similar for diagonal Gaussian, mixture of Gaussian, and logistic priors; (5) Bayesian neural networks show surprisingly poor generalization under domain shift; (6) while cheaper alternatives such as deep ensembles and SGMCMC methods can provide good generalization, they provide distinct predictive distributions from HMC. Notably, deep ensemble predictive distributions are similarly close to HMC as standard SGLD, and closer than standard variational inference. ",What Are Bayesian Neural Network Posteriors Really Like?,10,"['What are Bayesian neural network posteriors really like? With high fidelity HMC, we study approximate inference quality, generalization, cold posteriors, priors, and more. \n\nWith @Pavel_Izmailov, @sharadvikram, and Matthew D. Hoffman. 1/10 ', 'We show that Bayesian neural networks reassuringly provide good generalization, outperforming deep ensembles, standard training, and many approximate inference procedures, even with a single chain. 2/10 https://t.co/sSeW6NGSRX', 'However, we find that BNNs are surprisingly poor at OOD generalization, even worse than SGD, despite the popularity of approximate inference in this setting, and the relatively good performance of BNNs for OOD detection. 3/10 https://t.co/7EYqt1JIDe', 'Even though deep ensembles are often talked about as a ""non-Bayesian"" alternative to standard approximate inference, we find they approximate the HMC predictive distribution better than MFVI, and about as well as standard SGLD. 4/10 https://t.co/Wmbp4rFCnJ', 'There has been much attention lately on ""cold posteriors"" in BDL, where the posterior raised to a power 1/T with T<1 can lead to better results. We see little evidence for a general cold posterior effect, which we find is largely due to data augmentation. 5/10 https://t.co/eXkRopCpRs', 'We explored Gaussian, mixture of Gaussian, and heavy-tailed logistic priors, which performed similarly, although the heavy-tailed priors did slightly better. We also found performance relatively insensitive to the scale of the Gaussian prior... 6/10 https://t.co/REonWDwHvJ', '...these results highlight the relative importance of the architecture compared to the distribution over weights in defining the induced prior over functions. Indeed, other work shows that even standard Gaussian priors have many useful properties: https://t.co/midasGNPYn. 7/10 https://t.co/yrDjvkO20m', 'We present many other results, including mixing in function space vs. weight space, posterior geometry and mode connecting paths, single chain vs. multi-chain...! 8/10 https://t.co/efyU00gjIz', 'Many of the results, both positive and negative for BDL, are contrary to conventional wisdom. 9/10 https://t.co/8naizY9p35', 'We worked hard to obtain these HMC samples, which we plan to release as a public resource, as a reference for evaluating more practical alternatives to HMC, and for researchers to explore their own questions around approximate inference in BDL. 10/10']",21,04,2262
468,58,1085133191936200709,3228486315,Daniele Grattarola,"On Arxiv, new paper in collab with @UiTromso! We introduce a new type of spectral convolution on graphs, based on ARMA filters (better response w.r.t. poly ones). Great results on node/graph signals/whole graphs classification. These things rock! @UiTromso We also formulate and explore the application of a principled pooling strategy for graphs, improving over previous solutions. The combo of the two results in a slim model and fast training times. @UiTromso Results on node classification (citation nets), graph signal classification (20 news), and graph classification (graph kernel database). Code will be released as #Keras layers + utils + experiments, as soon the paper is published. Will also be integrated as part of Spektral, our (yet to be released) framework for GNNs. HUGE shoutout to @Slackericida for coming up with the idea, it was incredibly fun to work together on this paper! @Slackericida CC @m_deff @thomaskipf @PetarV_93 @PeterWBattaglia @KleineBottleM @mys_007 all cited in the paper",http://arxiv.org/abs/1901.01343,"Popular graph neural networks implement convolution operations on graphs based on polynomial spectral filters. In this paper, we propose a novel graph convolutional layer inspired by the auto-regressive moving average (ARMA) filter that, compared to polynomial ones, provides a more flexible frequency response, is more robust to noise, and better captures the global graph structure. We propose a graph neural network implementation of the ARMA filter with a recursive and distributed formulation, obtaining a convolutional layer that is efficient to train, localized in the node space, and can be transferred to new graphs at test time. We perform a spectral analysis to study the filtering effect of the proposed ARMA layer and report experiments on four downstream tasks: semi-supervised node classification, graph signal classification, graph classification, and graph regression. Results show that the proposed ARMA layer brings significant improvements over graph neural networks based on polynomial filters. ",Graph Neural Networks with convolutional ARMA filters,6,"['On Arxiv, new paper in collab with @UiTromso!\nWe introduce a new type of spectral convolution on graphs, based on ARMA filters (better response w.r.t. poly ones). \nGreat results on node/graph signals/whole graphs classification. These things rock!\n\n ', '@UiTromso We also formulate and explore the application of a principled pooling strategy for graphs, improving over previous solutions. The combo of the two results in a slim model and fast training times. https://t.co/5l4Nl1DLwF', '@UiTromso Results on node classification (citation nets), graph signal classification (20 news), and graph classification (graph kernel database). https://t.co/47MplxHoW7', 'Code will be released as #Keras layers + utils + experiments, as soon the paper is published. \nWill also be integrated as part of Spektral, our (yet to be released) framework for GNNs.', 'HUGE shoutout to @Slackericida for coming up with the idea, it was incredibly fun to work together on this paper!', '@Slackericida CC @m_deff @thomaskipf @PetarV_93 @PeterWBattaglia @KleineBottleM @mys_007 \nall cited in the paper']",19,01,1037
469,114,1423557886983168004,328430286,Jad C. Halimeh,"New paper : Instead of constructing complex gauge-symmetry generators, one can just build much simpler local pseudo generators identical to them in the physical sector in implementations of gauge theories. @HaukeGroup @MCQST_cluster @ERC_Research ",https://arxiv.org/abs/2108.02203,"The postulate of gauge invariance in nature does not lend itself directly to implementations of lattice gauge theories in modern setups of quantum synthetic matter. Unavoidable gauge-breaking errors in such devices require gauge invariance to be enforced for faithful quantum simulation of gauge-theory physics. This poses major experimental challenges, in large part due to the complexity of the gauge-symmetry generators. Here, we show that gauge invariance can be reliably stabilized by employing simplified \textit{local pseudo generators} designed such that within the physical sector they act identically to the actual local generator. Dynamically, they give rise to emergent exact gauge theories up to timescales polynomial and even exponential in the protection strength. This obviates the need for implementing often complex multi-body full gauge symmetries, thereby further reducing experimental overhead in physical realizations. We showcase our method in the $\mathbb{Z}_2$ lattice gauge theory, and discuss experimental considerations for its realization in modern ultracold-atom setups. ","Stabilizing Lattice Gauge Theories Through Simplified Local Pseudo
Generators",1,"['New paper : Instead of constructing complex gauge-symmetry generators, one can just build much simpler local pseudo generators identical to them in the physical sector in implementations of gauge theories.\n@HaukeGroup \n@MCQST_cluster \n@ERC_Research ']",21,08,259
470,165,1245634642146820097,384080522,Dr. Daniela Castro-Camilo,"New paper 📰: We propose a general method for probabilistic prediction of extreme hot-spots in a spatio-temporal setting, with an application to the Sea Surface Temperature anomalies provided in a data challenge. We were late 🙁but got the best score 😌 Codes to implement the models available here: ",https://arxiv.org/abs/2004.00386,"We develop a method for probabilistic prediction of extreme value hot-spots in a spatio-temporal framework, tailored to big datasets containing important gaps. In this setting, direct calculation of summaries from data, such as the minimum over a space-time domain, is not possible. To obtain predictive distributions for such cluster summaries, we propose a two-step approach. We first model marginal distributions with a focus on accurate modeling of the right tail and then, after transforming the data to a standard Gaussian scale, we estimate a Gaussian space-time dependence model defined locally in the time domain for the space-time subregions where we want to predict. In the first step, we detrend the mean and standard deviation of the data and fit a spatially resolved generalized Pareto distribution to apply a correction of the upper tail. To ensure spatial smoothness of the estimated trends, we either pool data using nearest-neighbor techniques, or apply generalized additive regression modeling. To cope with high space-time resolution of data, the local Gaussian models use a Markov representation of the Mat\'ern correlation function based on the stochastic partial differential equations (SPDE) approach. In the second step, they are fitted in a Bayesian framework through the integrated nested Laplace approximation implemented in R-INLA. Finally, posterior samples are generated to provide statistical inferences through Monte-Carlo estimation. Motivated by the 2019 Extreme Value Analysis data challenge, we illustrate our approach to predict the distribution of local space-time minima in anomalies of Red Sea surface temperatures, using a gridded dataset (11315 days, 16703 pixels) with artificially generated gaps. In particular, we show the improved performance of our two-step approach over a purely Gaussian model without tail transformations. ","Bayesian space-time gap filling for inference on extreme hot-spots: an
application to Red Sea surface temperatures",2,"['New paper 📰: \nWe propose a general method for probabilistic prediction of extreme hot-spots in a spatio-temporal setting, with an application to the Sea Surface Temperature anomalies provided in a data challenge. We were late 🙁but got the best score 😌', 'Codes to implement the models available here: https://t.co/ihm3vHXoMl']",20,04,310
471,143,1338889592548827136,20515971,Erkut Erdem,"Today, we are releasing MSVD-Turkish, a new benchmark dataset for integrated vision and language research in Turkish. Dataset is available at , and the accompanying paper is on arXiv, This is a joint effort of Hacettepe University, Koç University and Imperial College London by Begum Citamak, @Ozan__Caglayan, Menekse Kuyu, myself, @aykuterdemml, @foobarin and @lspecia. @TT20833837 Çok büyük ihtimalle videodaki seslerden. Tabi açıklama üretirken ses bilgisinden yararlanılmıyor oluşu veri kümesinin bu tarz önyargılara (dataset bias) sahip olmasını getiriyor. Bu da kendi başına bir araştırma konusu aslında.",https://arxiv.org/abs/2012.07098,"Automatic generation of video descriptions in natural language, also called video captioning, aims to understand the visual content of the video and produce a natural language sentence depicting the objects and actions in the scene. This challenging integrated vision and language problem, however, has been predominantly addressed for English. The lack of data and the linguistic properties of other languages limit the success of existing approaches for such languages. In this paper we target Turkish, a morphologically rich and agglutinative language that has very different properties compared to English. To do so, we create the first large scale video captioning dataset for this language by carefully translating the English descriptions of the videos in the MSVD (Microsoft Research Video Description Corpus) dataset into Turkish. In addition to enabling research in video captioning in Turkish, the parallel English-Turkish descriptions also enables the study of the role of video context in (multimodal) machine translation. In our experiments, we build models for both video captioning and multimodal machine translation and investigate the effect of different word segmentation approaches and different neural architectures to better address the properties of Turkish. We hope that the MSVD-Turkish dataset and the results reported in this work will lead to better video captioning and multimodal machine translation models for Turkish and other morphology rich and agglutinative languages. ","MSVD-Turkish: A Comprehensive Multimodal Dataset for Integrated Vision
and Language Research in Turkish",3,"['Today, we are releasing MSVD-Turkish, a new benchmark dataset for integrated vision and language research in Turkish. Dataset is available at , and the accompanying paper is on arXiv, ', 'This is a joint effort of Hacettepe University, Koç University and Imperial College London by Begum Citamak, @Ozan__Caglayan, Menekse Kuyu, myself, @aykuterdemml, @foobarin and @lspecia.', '@TT20833837 Çok büyük ihtimalle videodaki seslerden. Tabi açıklama üretirken ses bilgisinden yararlanılmıyor oluşu veri kümesinin bu tarz önyargılara (dataset bias) sahip olmasını getiriyor. Bu da kendi başına bir araştırma konusu aslında.']",20,12,630
472,95,1382078749802389512,791705191175360512,Niels Warburton,"Our new paper out today details the first fully relativistic EMRI template model fast enough to do parameter estimation with. As a demonstration we explore the accuracy of the semi-relativistic kludge waveform generation methods. For certain regions of the parameter space, using semi-relativistic amplitudes to recover an injected full relativistic waveform can lead to biases in the posterior distribution The fully relativistic model is part of a new FastEMRIWaveform framework that provides a unified Python interface to generate EMRI waveform templates in the solar system barycenter frame. This makes it very easy for data analysts to upgrade to new models as they become available. The fully relativistic model is fast. Using GPU acceleration you can generate a year-long EMRI waveform in less than 100ms. All the code is open source in the BHPToolkit: The fully relativistic model, first introduced in our letter, , is only for inspirals into non-rotating black holes. The methods we used should extend to Kerr and incorporate GSF corrections as they become available. For the LISA Data Challenge it is useful to have waveform models that are extensive in parameter space so whilst we wait for fully relativistic Kerr waveforms we also include an updated GPU accel'd Augment Analytic Kludge model in the FEW framework.",https://arxiv.org/abs/2104.04582,"We present the FastEMRIWaveforms (FEW) package, a collection of tools to build and analyze extreme mass ratio inspiral (EMRI) waveforms. Here, we expand on the Physical Review Letter that introduced the first fast and accurate fully-relativistic EMRI waveform template model. We discuss the construction of the overall framework; constituent modules; and the general methods used to accelerate EMRI waveforms. Because the fully relativistic FEW model waveforms are for now limited to eccentric orbits in the Schwarzschild spacetime, we also introduce an improved Augmented Analytic Kludge (AAK) model that describes generic Kerr inspirals. Both waveform models can be accelerated using graphics processing unit (GPU) hardware. With the GPU-accelerated waveforms in hand, a variety of studies are performed including an analysis of EMRI mode content, template mismatch, and fully Bayesian Markov Chain Monte Carlo-based EMRI parameter estimation. We find relativistic EMRI waveform templates can be generated with fewer harmonic modes ($\sim10-100$) without biasing signal extraction. However, we show for the first time that extraction of a relativistic injection with semi-relativistic amplitudes can lead to strong bias and anomalous structure in the posterior distribution for certain regions of parameter space. ","FastEMRIWaveforms: New tools for millihertz gravitational-wave data
analysis",6,"['Our new paper out today details the first fully relativistic EMRI template model fast enough to do parameter estimation with. As a demonstration we explore the accuracy of the semi-relativistic kludge waveform generation methods. ', 'For certain regions of the parameter space, using semi-relativistic amplitudes to recover an injected full relativistic waveform can lead to biases in the posterior distribution https://t.co/VigCMuXLwY', 'The fully relativistic model is part of a new FastEMRIWaveform framework that provides a unified Python interface to generate EMRI waveform templates in the solar system barycenter frame. This makes it very easy for data analysts to upgrade to new models as they become available. https://t.co/7VS6ylgVWQ', 'The fully relativistic model is fast. Using GPU acceleration you can generate a year-long EMRI waveform in less than 100ms. All the code is open source in the BHPToolkit: https://t.co/ie7SABWp3U', 'The fully relativistic model, first introduced in our letter, https://t.co/TiAckbhy88, is only for inspirals into non-rotating black holes. The methods we used should extend to Kerr and incorporate GSF corrections as they become available. https://t.co/xcRJYsWxlz', ""For the LISA Data Challenge it is useful to have waveform models that are extensive in parameter space so whilst we wait for fully relativistic Kerr waveforms we also include an updated GPU accel'd Augment Analytic Kludge model in the FEW framework.""]",21,04,1374
473,142,1368849730474754048,493582529,Michele Lucente 🇺🇦,"New paper out! I show that an overlooked production mechanism within the minimal Type-I Seesaw model can account for the observed dark matter abundance in the form of a keV sterile neutrino. @TTK_RWTH @RWTH @AvHStiftung @UCLouvain_be 1/3 This population can be produced by the decay of the heavier neutral leptons, with masses above the Higgs mass scale, while they are in thermal equilibrium in the early Universe (freeze-in). 2/3 Moreover, the implementation of the relevant phenomenological constraints (relic abundance, indirect detection and structure formation) on this model automatically selects a region of the parameter space featuring an approximate lepton number symmetry! 3/3 ",https://arxiv.org/abs/2103.03253,"We show that the minimal Type-I Seesaw mechanism can successfully account for the observed dark matter abundance in the form of a keV sterile neutrino. This population can be produced by the decay of the heavier neutral leptons, with masses above the Higgs mass scale, while they are in thermal equilibrium in the early Universe (freeze-in). Moreover, the implementation of the relevant phenomenological constraints (relic abundance, indirect detection and structure formation) on this model automatically selects a region of the parameter space featuring an approximate lepton number symmetry. ",Freeze-In Dark Matter within the Seesaw mechanism,3,"['New paper out! \n\nI show that an overlooked production mechanism within the minimal Type-I Seesaw model can account for the observed dark matter abundance in the form of a keV sterile neutrino.\n\n@TTK_RWTH @RWTH @AvHStiftung @UCLouvain_be \n\n1/3', 'This population can be produced by the decay of the heavier neutral leptons, with masses above the Higgs mass scale, while they are in thermal equilibrium in the early Universe (freeze-in).\n\n2/3 https://t.co/gl1GQ41SoM', 'Moreover, the implementation of the relevant phenomenological constraints (relic abundance, indirect detection and structure formation) on this model automatically selects a region of the parameter space featuring an approximate lepton number symmetry!\n\n3/3 https://t.co/4g1UHJwbMx']",21,03,710
474,233,1312947499141750786,6642132,Ikuya Yamada,"Our @emnlp2020 paper “LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention” is now available on arXiv! We present new pretrained contextualized representations that achieve SOTA on five datasets including SQuAD and CoNLL-2003. LUKE is based on bidirectional Transformer, treats words and entities in a text as independent tokens, and outputs contextualized representations of them. The representations can be used to address downstream tasks similarly to BERT. LUKE is trained using a novel pretraining task that involves predicting randomly masked words (equivalent to BERT’s masked language model) and entities in an entity-annotated corpus obtained from Wikipedia. LUKE also uses a new *entity-aware* self-attention mechanism that considers the types of tokens (words or entities) when computing attention scores. The source code and pretrained models are available at . The documentation will be available soon!😃",https://arxiv.org/abs/2010.01057,"Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). Our source code and pretrained representations are available at this https URL ","LUKE: Deep Contextualized Entity Representations with Entity-aware
Self-attention",4,"['Our @emnlp2020 paper “LUKE: Deep Contextualized Entity Representations with Entity-aware\nSelf-attention” is now available on arXiv! We present new pretrained contextualized representations that achieve SOTA on five datasets including SQuAD and CoNLL-2003.\n', 'LUKE is based on bidirectional Transformer, treats words and entities in a text as independent tokens, and outputs contextualized representations of them. The representations can be used to address downstream tasks similarly to BERT. https://t.co/s7KZCxBi5D', 'LUKE is trained using a novel pretraining task that involves predicting randomly masked words (equivalent to BERT’s masked language model) and entities in an entity-annotated corpus obtained from Wikipedia.', 'LUKE also uses a new *entity-aware* self-attention mechanism that considers the types of tokens (words or entities) when computing attention scores.\nThe source code and pretrained models are available at https://t.co/k5koMphw7n. The documentation will be available soon!😃']",20,10,965
475,89,1337255175996596227,3013822602,Eric Michaud,"Excited to share my new paper “Understanding Learned Reward Functions” with co-authors @ARGleave and Stuart Russell, presented at the Deep RL Workshop at #NeurIPS2020 Paper: Code: Presentation: How can you tell if a learned reward function captures user preferences? We apply some standard ML interpretability techniques towards understanding what learned reward functions are doing in a few RL environments. For instance, in this gridworld environment, where the agent (blue) tries to get to the goal (green), we find that our learned reward function simply detects whether a goal block is visible, not whether the agent has reached it. In Atari environments like Seaquest, our learned reward function here seems to pay the most attention to the game's score display (areas highlighted in green are most salient). To reliably predict reward, the model can simply learn to detect when the score changes. This is a good reminder that when benchmarking reward learning algorithms on games, the score should not be displayed in the environment -- it can make the task of predicting reward too easy. Why does any of this matter? Well, for many real-world tasks, it is not possible to manually design a good reward function for an RL agent -- human desires, which the agent is tasked with realizing, are just too complicated. Reward functions must instead be *learned*. However, current algorithms for reward learning can fail silently. Absent perfect reward learning, we therefore need techniques for auditing learned reward functions -- for scrutinizing a machine's understanding of human preferences. Our paper is a tentative step in this direction. We hope that more advanced interpretability techniques will someday allow researchers to more comprehensively open up AI systems and verify that such systems understand and are aligned with human values. As a closing thought, I also wonder whether future interpretability techniques, coupled with sophisticated reward learning, could be a kind of ""microscope AI"" for improving our understanding of human values and human well-being. @ch402 @nickcammarata @SamHarrisOrg This work was done during my internship with @CHAI_Berkeley. Many thanks to everyone at CHAI for your support!",http://arxiv.org/abs/2012.05862,"In many real-world tasks, it is not possible to procedurally specify an RL agent's reward function. In such cases, a reward function must instead be learned from interacting with and observing humans. However, current techniques for reward learning may fail to produce reward functions which accurately reflect user preferences. Absent significant advances in reward learning, it is thus important to be able to audit learned reward functions to verify whether they truly capture user preferences. In this paper, we investigate techniques for interpreting learned reward functions. In particular, we apply saliency methods to identify failure modes and predict the robustness of reward functions. We find that learned reward functions often implement surprising algorithms that rely on contingent aspects of the environment. We also discover that existing interpretability techniques often attend to irrelevant changes in reward output, suggesting that reward interpretability may need significantly different methods from policy interpretability. 
",Understanding Learned Reward Functions,10,"['Excited to share my new paper “Understanding Learned Reward Functions” with co-authors @ARGleave and Stuart Russell, presented at the Deep RL Workshop at #NeurIPS2020\n\nPaper: \nCode: \nPresentation: ', 'How can you tell if a learned reward function captures user preferences? We apply some standard ML interpretability techniques towards understanding what learned reward functions are doing in a few RL environments.', 'For instance, in this gridworld environment, where the agent (blue) tries to get to the goal (green), we find that our learned reward function simply detects whether a goal block is visible, not whether the agent has reached it. https://t.co/NyiF5NnAwy', ""In Atari environments like Seaquest, our learned reward function here seems to pay the most attention to the game's score display (areas highlighted in green are most salient). To reliably predict reward, the model can simply learn to detect when the score changes. https://t.co/gAYDtRgTUE"", 'This is a good reminder that when benchmarking reward learning algorithms on games, the score should not be displayed in the environment -- it can make the task of predicting reward too easy.', 'Why does any of this matter? Well, for many real-world tasks, it is not possible to manually design a good reward function for an RL agent -- human desires, which the agent is tasked with realizing, are just too complicated. Reward functions must instead be *learned*.', ""However, current algorithms for reward learning can fail silently. Absent perfect reward learning, we therefore need techniques for auditing learned reward functions -- for scrutinizing a machine's understanding of human preferences."", 'Our paper is a tentative step in this direction. We hope that more advanced interpretability techniques will someday allow researchers to more comprehensively open up AI systems and verify that such systems understand and are aligned with human values.', 'As a closing thought, I also wonder whether future interpretability techniques, coupled with sophisticated reward learning, could be a kind of ""microscope AI"" for improving our understanding of human values and human well-being. @ch402 @nickcammarata @SamHarrisOrg', 'This work was done during my internship with @CHAI_Berkeley. Many thanks to everyone at CHAI for your support!']",20,12,2269
476,144,1281116198202195968,1021360423,Mubrak A Alqahtani,"We have a new paper out where we calculated the elliptic flow of bottomonia produced in Pb-Pb collisions at 5.02 TeV. This work is done in collaboration with Partha Bhaduri, Nicolas Borghini, Amaresh Jaiswal, and Michael Strickland.",https://arxiv.org/abs/2007.03939,"We calculate the elliptic flow of bottomonia produced in Pb$\,+\,$Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV. We consider temperature-dependent decay widths for the anisotropic escape of various bottomonium states and observe that the transverse momentum dependence of bottomonia elliptic flow provides a tomographic information about the QGP fireball at different stages of its evolution. For the space-time evolution of the fireball, we employ simulation results from the 3+1D quasiparticle anisotropic hydrodynamic model. We find that our results for transverse momentum dependence of bottomonia elliptic flow are in reasonable agreement with experimental results from the ALICE and CMS collaborations. ","Fireball tomography from bottomonia elliptic flow in relativistic
heavy-ion collisions",2,"['We have a new paper out where we calculated the elliptic flow of bottomonia produced in Pb-Pb collisions at 5.02 TeV.\n ', 'This work is done in collaboration with Partha Bhaduri, Nicolas Borghini, Amaresh Jaiswal, and Michael Strickland.']",20,07,254
477,92,1087295802421194752,71332740,Dr Gwenllian Williams,A paper I worked on is out today!🌟 We studied how a number of different techniques used to study the break-up of real filaments into cores behave when applied to fake filaments with a known fragmentation scale. We found some methods better than others! > ,https://arxiv.org/abs/1901.06205,"Theories suggest that filament fragmentation should occur on a characteristic fragmentation length-scale. This fragmentation length-scale can be related to filament properties, such as the width and the dynamical state of the filament. Here we present a study of a number of fragmentation analysis techniques applied to filaments, and their sensitivity to characteristic fragmentation length-scales. We test the sensitivity to both single-tier and two-tier fragmentation, i.e. when the fragmentation can be characterised with one or two fragmentation length-scales respectively. The nearest neighbour separation, minimum spanning tree separation and two-point correlation function are all able to robustly detect characteristic fragmentation length-scales. The Fourier power spectrum and the Nth nearest neighbour technique are both poor techniques, and require very little scatter in the core spacings for the characteristic length-scale to be successfully determined. We develop a null hypothesis test to compare the results of the nearest neighbour and minimum spanning tree separation distribution with randomly placed cores. We show that a larger number of cores is necessary to successfully reject the null hypothesis if the underlying fragmentation is two-tier, N>20. Once the null is rejected we show how one may decide if the observed fragmentation is best described by single-tier or two-tier fragmentation, using either Akaike's information criterion or the Bayes factor. The analysis techniques, null hypothesis tests, and model selection approaches are all included in a new open-source Python/C library called FragMent. ","Determining the presence of characteristic fragmentation length-scales
in filaments",1,['A paper I worked on is out today!🌟 We studied how a number of different techniques used to study the break-up of real filaments into cores behave when applied to fake filaments with a known fragmentation scale. We found some methods better than others! > '],19,01,271
478,92,1438543852109897734,2853379350,Siddhant Garg,"🚨#EMNLP2021 Paper New💡for Efficient QA➡️ Filter questions that will not be answered by QA system Interesting 🔎: Transformer-based QA scores can be approximated only using question text via ""partial-input distillation"" @AmazonScience @amoschitti1 📰: @AmazonScience @amoschitti1 Practical QA systems operate at high Prec. (for customer req.) and end up not answering a large % of ques by failing system threshold on answer confidence score We train filters to preemptively remove ques that are not answered by the system, saving (retrieval+answering) compute @AmazonScience @amoschitti1 We propose 2 loss objectives for learning 2 types of filters: regression & classification head by distilling knowledge of QA system scores Training does not require any human labels, only system generated scores Different from KD since teacher & student use different inputs @AmazonScience @amoschitti1 Experimental results show: (i) Question filters can approximate Pr/Re of QA system very well (ii) Filters can provide large efficiency gains, with only a small drop in Recall (user-tunable tradeoff) @AmazonScience @amoschitti1 Code to be released soon at Paper #AmazonScience: Feel free to reach out to us in case of any questions 😀",https://arxiv.org/abs/2109.07009,"In this paper we propose a novel approach towards improving the efficiency of Question Answering (QA) systems by filtering out questions that will not be answered by them. This is based on an interesting new finding: the answer confidence scores of state-of-the-art QA systems can be approximated well by models solely using the input question text. This enables preemptive filtering of questions that are not answered by the system due to their answer confidence scores being lower than the system threshold. Specifically, we learn Transformer-based question models by distilling Transformer-based answering models. Our experiments on three popular QA datasets and one industrial QA benchmark demonstrate the ability of our question models to approximate the Precision/Recall curves of the target QA system well. These question models, when used as filters, can effectively trade off lower computation cost of QA systems for lower Recall, e.g., reducing computation by ~60%, while only losing ~3-4% of Recall. ","Will this Question be Answered? Question Filtering via Answer Model
Distillation for Efficient Question Answering",5,"['🚨#EMNLP2021 Paper\n\nNew💡for Efficient QA➡️\nFilter questions that will not be answered by QA system\n\nInteresting 🔎: Transformer-based QA scores can be approximated only using question text via ""partial-input distillation""\n\n@AmazonScience @amoschitti1\n\n📰: ', '@AmazonScience @amoschitti1 Practical QA systems operate at high Prec. (for customer req.) and end up not answering a large % of ques by failing system threshold on answer confidence score\n\nWe train filters to preemptively remove ques that are not answered by the system, saving (retrieval+answering) compute', '@AmazonScience @amoschitti1 We propose 2 loss objectives for learning 2 types of filters: regression & classification head by distilling knowledge of QA system scores\n\nTraining does not require any human labels, only system generated scores\n\nDifferent from KD since teacher & student use different inputs https://t.co/dqjrB8Hncz', '@AmazonScience @amoschitti1 Experimental results show: \n\n(i) Question filters can approximate Pr/Re of QA system very well\n\n(ii) Filters can provide large efficiency gains, with only a small drop in Recall (user-tunable tradeoff) https://t.co/WxV4o5ecr6', '@AmazonScience @amoschitti1 Code to be released soon at https://t.co/ucyWnAqajH\n\nPaper #AmazonScience: https://t.co/PnHOgoaL68\n\nFeel free to reach out to us in case of any questions 😀']",21,09,1263
479,116,1060211883112841216,885528008,William Fedus,"With the careful investigative work of @masscaccia and @LucasPCaccia, we find that NLP GAN models still aren't improving over a simple maximum-likelihood baseline with reduced softmax temperature as assessed on (local/global) quality-diversity spectrum! This was a funny paper for me as a past author of an NLP GAN paper (MaskGAN: ). In MaskGAN, we demonstrated that mode-collapse and loss of diversity was occurring Sec 5.4, appendix C.4). However, the recent *global* diversity advances by the Zurich Brain Group, S. Semeniuta, A. Severyn, S. Gelly helped us make this comparison more rigorous. @sylvain_gelly",https://arxiv.org/abs/1811.02549,"Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks. Maximum-Likelihood (MLE) models trained with teacher forcing have consistently been reported as weak baselines, where poor performance is attributed to exposure bias (Bengio et al., 2015; Ranzato et al., 2015); at inference time, the model is fed its own prediction instead of a ground-truth token, which can lead to accumulating errors and poor samples. This line of reasoning has led to an outbreak of adversarial based approaches for NLG, on the account that GANs do not suffer from exposure bias. In this work, we make several surprising observations which contradict common beliefs. First, we revisit the canonical evaluation framework for NLG, and point out fundamental flaws with quality-only evaluation: we show that one can outperform such metrics using a simple, well-known temperature parameter to artificially reduce the entropy of the model's conditional distributions. Second, we leverage the control over the quality / diversity trade-off given by this parameter to evaluate models over the whole quality-diversity spectrum and find MLE models constantly outperform the proposed GAN variants over the whole quality-diversity space. Our results have several implications: 1) The impact of exposure bias on sample quality is less severe than previously thought, 2) temperature tuning provides a better quality / diversity trade-off than adversarial training while being easier to train, easier to cross-validate, and less computationally expensive. Code to reproduce the experiments is available at github.com/pclucas14/GansFallingShort ",Language GANs Falling Short,3,"[""With the careful investigative work of @masscaccia and @LucasPCaccia, we find that NLP GAN models still aren't improving over a simple maximum-likelihood baseline with reduced softmax temperature as assessed on (local/global) quality-diversity spectrum! \n\n"", 'This was a funny paper for me as a past author of an NLP GAN paper (MaskGAN: https://t.co/YhuUvXui5i). In MaskGAN, we demonstrated that mode-collapse and loss of diversity was occurring Sec 5.4, appendix C.4).', 'However, the recent *global* diversity advances by the Zurich Brain Group, S. Semeniuta, A. Severyn, S. Gelly https://t.co/8kTzSDTJ4E helped us make this comparison more rigorous. @sylvain_gelly']",18,11,632
480,12,1156802754662338560,901142962758758400,Hang-Hyun Jo,"Our paper ""Burst-tree decomposition of time series reveals the structure of temporal correlations"" (with @takayukihir & @bolozna) is available at | We propose a new method of analyzing the bursty event sequences that turns event sequences into trees. ",https://arxiv.org/abs/1907.13556,"Comprehensive characterization of non-Poissonian, bursty temporal patterns observed in various natural and social processes is crucial to understand the underlying mechanisms behind such temporal patterns. Among them bursty event sequences have been studied mostly in terms of interevent times (IETs), while the higher-order correlation structure between IETs has gained very little attention due to the lack of a proper characterization method. In this paper we propose a method of decomposing an event sequence into a set of IETs and a burst tree, which exactly captures the structure of temporal correlations that is entirely missing in the analysis of IET distributions. We apply the burst-tree decomposition method to various datasets and analyze the structure of the revealed burst trees. In particular, we observe that event sequences show similar burst-tree structure, such as heavy-tailed burst size distributions, despite of very different IET distributions. The burst trees allow us to directly characterize the preferential and assortative mixing structure of bursts responsible for the higher-order temporal correlations. We also show how to use the decomposition method for the systematic investigation of such higher-order correlations captured by the burst trees in the framework of randomized reference models. Finally, we devise a simple kernel-based model for generating event sequences showing appropriate higher-order temporal correlations. Our method is a tool to make the otherwise overwhelming analysis of higher-order correlations in bursty time series tractable by turning it into the analysis of a tree structure. ","Burst-tree decomposition of time series reveals the structure of
temporal correlations",1,"['Our paper ""Burst-tree decomposition of time series reveals the structure of temporal correlations"" (with @takayukihir & @bolozna) is available at | We propose a new method of analyzing the bursty event sequences that turns event sequences into trees. ']",19,07,264
481,57,1417874378067308544,10471882,matt brehmer,"Last year, @eagereyes and I interviewed people about live presentations of data + 📊 in their organizations. Our #ieeevis '21 paper documents our findings with a musical performance metaphor, along with some new ideas for presenting data. 🎶 pre-print: ",https://arxiv.org/abs/2107.09042,"Prior research on communicating with visualization has focused on public presentation and asynchronous individual consumption, such as in the domain of journalism. The visualization research community knows comparatively little about synchronous and multimodal communication around data within organizations, from team meetings to executive briefings. We conducted two qualitative interview studies with individuals who prepare and deliver presentations about data to audiences in organizations. In contrast to prior work, we did not limit our interviews to those who self-identify as data analysts or data scientists. Both studies examined aspects of speaking about data with visual aids such as charts, dashboards, and tables. One study was a retrospective examination of current practices and difficulties, from which we identified three scenarios involving presentations of data. We describe these scenarios using an analogy to musical performance: small collaborative team meetings are akin to jam session, while more structured presentations can range from semi-improvisational performances among peers to formal recitals given to executives or customers. In our second study, we grounded the discussion around three design probes, each examining a different aspect of presenting data: the progressive reveal of visualization to direct attention and advance a narrative, visualization presentation controls that are hidden from the audience's view, and the coordination of a presenter's video with interactive visualization. Our distillation of interviewees' responses surfaced twelve themes, from ways of authoring presentations to creating accessible and engaging audience experiences. ","From Jam Session to Recital: Synchronous Communication and Collaboration
Around Data in Organizations",1,"[""Last year, @eagereyes and I interviewed people about live presentations of data + 📊 in their organizations. \n\nOur #ieeevis '21 paper documents our findings with a musical performance metaphor, along with some new ideas for presenting data. 🎶 \n\npre-print: ""]",21,07,266
482,19,1189246252267003907,374233623,Shane Barratt,"Excited to release our new paper ""Minimizing a Sum of Clipped Convex Functions"", joint work w/ @GuilleAngeris and Stephen Boyd. Paper: Code: @GuilleAngeris The paper provides a good heuristic for minimizing sums of clipped convex functions, as well as a computational lower bound based on the perspective transform. The figure below gives a simple 1-d example of a sum of clipped convex functions. @GuilleAngeris Applications include clipped empirical risk minimization, for example, clipped regression @GuilleAngeris And clipped control, where, for example, the cost encourages us to be in one of the two lanes @GuilleAngeris Our algorithm requires solving roughly 5-20 convex optimization problems, and we have implemented it as a CVXPY extension, making it easy to (approximately) solve such problems @GuilleAngeris To get a lower bound, we first convert the problem into a mixed-integer convex program using the perspective formulation () and relax the integral constraint to get a lower bound. (See the paper for more details.) Here is an example of the lower bound. ",https://arxiv.org/abs/1910.12342,"We consider the problem of minimizing a sum of clipped convex functions; applications include clipped empirical risk minimization and clipped control. While the problem of minimizing the sum of clipped convex functions is NP-hard, we present some heuristics for approximately solving instances of these problems. These heuristics can be used to find good, if not global, solutions and appear to work well in practice. We also describe an alternative formulation, based on the perspective transformation, which makes the problem amenable to mixed-integer convex programming and yields computationally tractable lower bounds. We illustrate one of our heuristic methods by applying it to various examples and use the perspective transformation to certify that the solutions are relatively close to the global optimum. This paper is accompanied by an open-source implementation. ",Minimizing a Sum of Clipped Convex Functions,6,"['Excited to release our new paper ""Minimizing a Sum of Clipped Convex Functions"", joint work w/ @GuilleAngeris and Stephen Boyd.\n\nPaper: \nCode: ', '@GuilleAngeris The paper provides a good heuristic for minimizing sums of clipped convex functions, as well as a computational lower bound based on the perspective transform. The figure below gives a simple 1-d example of a sum of clipped convex functions. https://t.co/Xm6NF4BSUX', '@GuilleAngeris Applications include clipped empirical risk minimization, for example, clipped regression https://t.co/dT9PbGj3ZE', '@GuilleAngeris And clipped control, where, for example, the cost encourages us to be in one of the two lanes https://t.co/Dc5L8iTpkv', '@GuilleAngeris Our algorithm requires solving roughly 5-20 convex optimization problems, and we have implemented it as a CVXPY extension, making it easy to (approximately) solve such problems https://t.co/n3azp19CWs', '@GuilleAngeris To get a lower bound, we first convert the problem into a mixed-integer convex program using the perspective formulation (https://t.co/JKaDSTMzJT) and relax the integral constraint to get a lower bound.\n(See the paper for more details.) Here is an example of the lower bound. https://t.co/y4moDiDO4N']",19,10,1126
483,203,1400387018098630657,67936420,Rahul,"Check out our findings of @aclmeeting paper where we introduce two new flavors of MAML! (1/4) MAML assumes that source and target 'tasks' are i.i.d which is not realistic during cross-lingual transfer. High-resource languages belong to a few families, geographical areas, and typological features and do not reflect the majority of the world's languages (2/4) With the aim of better transfer across distant language families, we propose (i) Minimax criterion: which minimizes the maximum risk across languages, and (ii) Neyman-Pearson criterion: which upper-bounds the risk for any subset of languages. (3/4) We perform experiments on POS-tagging and QA and show that the new criteria significantly improve performance over vanilla MAML and an MTL baseline. This is joint work with @PontiEdoardo, @DishaShrivasta9, @sivareddyg, and Anders Søgaard. (4/4) ",https://arxiv.org/abs/2106.01051,"Model-agnostic meta-learning (MAML) has been recently put forth as a strategy to learn resource-poor languages in a sample-efficient fashion. Nevertheless, the properties of these languages are often not well represented by those available during training. Hence, we argue that the i.i.d. assumption ingrained in MAML makes it ill-suited for cross-lingual NLP. In fact, under a decision-theoretic framework, MAML can be interpreted as minimising the expected risk across training languages (with a uniform prior), which is known as Bayes criterion. To increase its robustness to outlier languages, we create two variants of MAML based on alternative criteria: Minimax MAML reduces the maximum risk across languages, while Neyman-Pearson MAML constrains the risk in each language to a maximum threshold. Both criteria constitute fully differentiable two-player games. In light of this, we propose a new adaptive optimiser solving for a local approximation to their Nash equilibrium. We evaluate both model variants on two popular NLP tasks, part-of-speech tagging and question answering. We report gains for their average and minimum performance across low-resource languages in zero- and few-shot settings, compared to joint multi-source transfer and vanilla MAML. ",Minimax and Neyman-Pearson Meta-Learning for Outlier Languages,4,"['Check out our findings of @aclmeeting paper where we introduce two new flavors of MAML!\n (1/4)', ""MAML assumes that source and target 'tasks' are i.i.d which is not realistic during cross-lingual transfer. High-resource languages belong to a few families, geographical areas, and typological features and do not reflect the majority of the world's languages (2/4) https://t.co/Mmc1Si52yx"", 'With the aim of better transfer across distant language families, we propose (i) Minimax criterion: which minimizes the maximum risk across languages, and (ii) Neyman-Pearson criterion: which upper-bounds the risk for any subset of languages. (3/4)', 'We perform experiments on POS-tagging and QA and show that the new criteria significantly improve performance over vanilla MAML and an MTL baseline. This is joint work with @PontiEdoardo, @DishaShrivasta9, @sivareddyg, and Anders Søgaard. (4/4) https://t.co/u3gGx4aLTE']",21,06,874
484,70,1383054369097011204,952949678533849088,Kareem El-Badry,"New paper. We found a new (sort of) type of interacting binary star! It's a white dwarf and a tidally distorted, bloated, stripped helium core. It looks like a cataclysmic variable (white dwarf accreting from a normal star), but the donor star is much hotter than any known cataclysmic variable donor. Conversely, it's cooler, more bloated, and more tidally distorted than known low-mass white dwarfs. There are a few other known stars with similar temperature and luminosity (“sdA” stars) but they are not close being mass transferring and are mostly in wider binaries. We think this binary was a cataclysmic variable with a donor that started transferring mass to its white dwarf companion *just* at the end of the main sequence. Now it's becoming an extremely low-mass white dwarf. In a few Gyr, the binary will shrink to extremely short periods, where it appear as an ultracompact white dwarf binary or maybe an ""AM CVn"" (ultra short period, mass-transferring) system. There are more of these transitional systems to be characterized! We're looking forward to mapping the population. thanks to @kenjshen, @thomkupfer, @bigticketdw, and other coauthors not on twitter. @PNeunteufel @kenjshen In the best-fit models, there is still a ~0.005 Msun H-burning envelope (which is why the contraction is slow). There are no shell flashes for the best-fit mass of 0.15 Msun, but there would be for >0.18 Msun or so. Thanks!",https://arxiv.org/abs/2104.07033,"We present LAMOST J0140355+392651 (hereafter J0140), a close ($P_{\rm orb} = 3.81$ hours) binary containing a bloated, low-mass ($M \approx 0.15 M_{\odot}$) proto-white dwarf (WD) and a massive ($M\approx 0.95\,M_{\odot}$) WD companion. The system's optical light curve is dominated by large-amplitude ellipsoidal variability but also exhibits additional scatter, likely driven by pulsations. The proto-WD is cooler ($T_{\rm eff} = 6800\pm 100$ K) and more puffy ($\log\left[g/\left({\rm cm\,s^{-2}}\right)\right]=4.74\pm0.07$) than any known extremely low mass (ELM) WD, but hotter than any known cataclysmic variable (CV) donor. It either completely or very nearly fills its Roche lobe ($R/R_{{\rm Roche\,lobe}}=0.99\pm0.01$), suggesting ongoing or recently terminated mass transfer. No dwarf nova-like outbursts have been observed. The spectrum is dominated by the proto-WD but shows tentative hints of H$\alpha$ emission, perhaps due to accretion onto the massive WD. The properties of the system are well-matched by MESA binary evolution models of CVs with donors that underwent significant nuclear evolution before the onset of mass transfer. In these models, the bloated proto-WD is either still losing mass via stable Roche lobe overflow or was doing so until very recently. In either case, it is evolving toward higher temperatures at near-constant luminosity to become an ELM WD. If the system is detached, mass transfer likely ended when the donor became too hot for magnetic braking to remain efficient. Evolutionary models predict that the binary will shrink to $P_{\rm orb}\lesssim 10$ minutes within a few Gyr, when it will either merge or become an AM CVn binary. J0140 provides an observational link between the formation channels of CVs, ELM WDs, detached ultracompact WD binaries, and AM CVn systems. ","LAMOST J0140355+392651: An evolved cataclysmic variable donor
transitioning to become an extremely low mass white dwarf",8,"['New paper. We found a new (sort of) type of interacting binary star! ', ""It's a white dwarf and a tidally distorted, bloated, stripped helium core. It looks like a cataclysmic variable (white dwarf accreting from a normal star), but the donor star is much hotter than any known cataclysmic variable donor. https://t.co/fPC1gbUn9Z"", ""Conversely, it's cooler, more bloated, and more tidally distorted than known low-mass white dwarfs. https://t.co/e4Gr95qqBl"", 'There are a few other known stars with similar temperature and luminosity (“sdA” stars) but they are not close being mass transferring and are mostly in wider binaries.', ""We think this binary was a cataclysmic variable with a donor that started transferring mass to its white dwarf companion *just* at the end of the main sequence. Now it's becoming an extremely low-mass white dwarf. https://t.co/Qb6EDLUKOF"", 'In a few Gyr, the binary will shrink to extremely short periods, where it appear as an ultracompact white dwarf binary or maybe an ""AM CVn"" (ultra short period, mass-transferring) system. https://t.co/9N7zR1hZHW', ""There are more of these transitional systems to be characterized! We're looking forward to mapping the population. \n\nthanks to @kenjshen, @thomkupfer, @bigticketdw, and other coauthors not on twitter."", '@PNeunteufel @kenjshen In the best-fit models, there is still a ~0.005 Msun H-burning envelope (which is why the contraction is slow). There are no shell flashes for the best-fit mass of 0.15 Msun, but there would be for >0.18 Msun or so. Thanks!']",21,04,1464
485,2,1413432101257367558,636899073,Dr Ella Peltonen,"A new paper by @wiebketous, @aaronyiding, me, et al. shared in Arxiv: ""Design Considerations for Data Daemons: Co-creating Design Futures to Explore Ethical Personal Data Management"". Comments are welcome! #AIEthics #AI #Ethics #EthicalAI #data ",https://arxiv.org/abs/2106.14975,"Mobile applications and online service providers track our virtual and physical behaviour more actively and with a broader scope than ever before. This has given rise to growing concerns about ethical personal data management. Even though regulation and awareness around data ethics are increasing, end-users are seldom engaged when defining and designing what a future with ethical personal data management should look like. We explore a participatory process that uses design futures, the Future workshop method and design fictions to envision ethical personal data management with end-users and designers. To engage participants effectively, we needed to bridge their differential expertise and make the abstract concepts of data and ethics tangible. By concretely presenting personal data management and control as fictitious entities called Data Daemons, we created a shared understanding of these abstract concepts, and empowered non-expert end-users and designers to become actively engaged in the design process. ","Design Considerations for Data Daemons: Co-creating Design Futures to
Explore Ethical Personal Data Management",1,"['A new paper by @wiebketous, @aaronyiding, me, et al. shared in Arxiv: ""Design Considerations for Data Daemons: Co-creating Design Futures to Explore Ethical Personal Data Management"". Comments are welcome! #AIEthics #AI #Ethics #EthicalAI #data ']",21,06,251
486,153,1489321348237119496,577537524,Pete Florence,"New 🤖 paper led by the awesome @WiYoungsun! The paper is essentially ""using the Force* to deform neural fields"" (In this case, DeepSDF-style representations.) A cool thing here is that robots can have tactile (e.g., force-torque) sensing... So we can do perception 👀 that fuses shape knowledge together with tactile sensing 👏, even for deformable objects. This can help do things like predict full shape deformation state, with only partial information. Note these are *multi-modal (multi-sensory)*--input neural fields, capable of fusing both: - visual data (in this case point cloud data), and - tactile data (in this case force data) If you've ever thought about how good humans are at fusing shape information and touch sensing, including of objects that are deforming... this work takes some initial steps towards that direction. Just accepted to ICRA. Super happy to see @WiYoungsun's hard work pay off on the project. Led out of @NimaFazeli7's lab at Michigan, where they know a lot about tactile stuff :) . @andyzengtweets and I were lucky to be able to help out. Also @WiYoungsun just made her Twitter account like yesterday.... if you're looking for cool new researchers to follow, she is awesome. * yes not actually the Force from Star Wars. (Btw, is it just me or are somehow Episodes 5 and 6 of Boba Fett just way better than the previous episodes?)",https://arxiv.org/abs/2202.00868,"Deformable object manipulation requires computationally efficient representations that are compatible with robotic sensing modalities. In this paper, we present VIRDO:an implicit, multi-modal, and continuous representation for deformable-elastic objects. VIRDO operates directly on visual (point cloud) and tactile (reaction forces) modalities and learns rich latent embeddings of contact locations and forces to predict object deformations subject to external contacts.Here, we demonstrate VIRDOs ability to: i) produce high-fidelity cross-modal reconstructions with dense unsupervised correspondences, ii) generalize to unseen contact formations,and iii) state-estimation with partial visio-tactile feedback ",VIRDO: Visio-tactile Implicit Representations of Deformable Objects,7,"['New 🤖 paper led by the awesome @WiYoungsun! \n\nThe paper is essentially ""using the Force* to deform neural fields"" (In this case, DeepSDF-style representations.)\n\nA cool thing here is that robots can have tactile (e.g., force-torque) sensing... ', 'So we can do perception 👀 that fuses shape knowledge together with tactile sensing 👏, even for deformable objects.\n\nThis can help do things like predict full shape deformation state, with only partial information. https://t.co/VAk2mACO24', 'Note these are *multi-modal (multi-sensory)*--input neural fields, capable of fusing both:\n- visual data (in this case point cloud data), and\n- tactile data (in this case force data) https://t.co/oqD23UGWYG', ""If you've ever thought about how good humans are at fusing shape information and touch sensing, including of objects that are deforming...\n\nthis work takes some initial steps towards that direction. https://t.co/JRvIXSHZpV"", ""Just accepted to ICRA.\n\nSuper happy to see @WiYoungsun's hard work pay off on the project.\n\nLed out of @NimaFazeli7's lab at Michigan, where they know a lot about tactile stuff :) . @andyzengtweets and I were lucky to be able to help out. https://t.co/7ITlV88Q5K"", ""Also @WiYoungsun just made her Twitter account like yesterday.... 
if you're looking for cool new researchers to follow, she is awesome."", '* yes not actually the Force from Star Wars.\n\n(Btw, is it just me or are somehow Episodes 5 and 6 of Boba Fett just way better than the previous episodes?)']",22,02,1404
487,27,1143548242963001344,610427323,Desika Narayanan,"hey astrotwitter we've got a new paper out led by @UFastro grad student Qi Li! we put in a cool new model for dust formation/growth/destruction in cosmo sims to ask what drives variations in the dust to gas ratio/dust to metals ratios in galaxies [1/] In short - the dust to gas ratio is driven in large part by the metallicity of galaxies (due in part to the dependence of metallicity in dust growth rates in the ISM), [2/] There are secondary factors the dust to gas ratio or dust to metals ratio depend on that we tease out with some machine learning techniques, but to first order is mostly set by the metallicity. [3/] For observers and theorists who want a dust[gas] mass from their observed/simulated galaxy, we provide a public code to make it happen! ",https://arxiv.org/abs/1906.09277v1,"We present predictions for the evolution of the galaxy dust-to-gas (DGR) and dust-to-metal (DTM) ratios from z=0 to 6, using a model for the production, growth, and destruction of dust grains implemented into the \simba\ cosmological hydrodynamic galaxy formation simulation. In our model, dust forms in stellar ejecta, grows by the accretion of metals, and is destroyed by thermal sputtering and supernovae. Our simulation reproduces the observed dust mass function at z=0, but modestly under-predicts the mass function by ~x3 at z ~ 1-2. The z=0 DGR vs metallicity relationship shows a tight positive correlation for star-forming galaxies, while it is uncorrelated for quenched systems. There is little evolution in the DGR-metallicity relationship between z=0-6. We use machine learning techniques to search for the galaxy physical properties that best correlate with the DGR and DTM. We find that the DGR is primarily correlated with the gas-phase metallicity, though correlations with the depletion timescale, stellar mass and gas fraction are non-negligible. We provide a crude fitting relationship for DGR and DTM vs. the gas-phase metallicity, along with a public code package that estimates the DGR and DTM given a set of galaxy physical properties. ",] The Dust-to-Gas and Dust-to-Metals Ratio in Galaxies from z=0-6,4,"[""hey astrotwitter we've got a new paper out led by @UFastro grad student Qi Li! we put in a cool new model for dust formation/growth/destruction in cosmo sims to ask what drives variations in the dust to gas ratio/dust to metals ratios in galaxies [1/]"", 'In short - the dust to gas ratio is driven in large part by the metallicity of galaxies (due in part to the dependence of metallicity in dust growth rates in the ISM), [2/]', 'There are secondary factors the dust to gas ratio or dust to metals ratio depend on that we tease out with some machine learning techniques, but to first order is mostly set by the metallicity. [3/]', 'For observers and theorists who want a dust[gas] mass from their observed/simulated galaxy, we provide a public code to make it happen! https://t.co/fdMWpMDFpA']",19,06,774
488,61,1506756429285191682,265421900,Eric Heiden,"We introduce a new method to learn simulators from depth and RGB videos. The ""URDF"" of an articulated rigid-body mechanism is reconstructed, and the parameters of the simulator inferred through Bayesian inference. Website: Paper: Our pipeline leverages inverse rendering (nvdiffrast) and differentiable physics (Tiny Differentiable Simulator) to track objects in the scene, find articulations via a RANSAC approach, and infer the distribution over simulation parameters. Our approach finds a digital twin for articulated mechanisms from real depth or RGB video. Check out our paper for more details! Joint work w/ Ziang Liu, @VibhavVineet, @erwincoumans, @gauravsukhatme ",https://arxiv.org/abs/2203.10488,"Being able to reproduce physical phenomena ranging from light interaction to contact mechanics, simulators are becoming increasingly useful in more and more application domains where real-world interaction or labeled data are difficult to obtain. Despite recent progress, significant human effort is needed to configure simulators to accurately reproduce real-world behavior. We introduce a pipeline that combines inverse rendering with differentiable simulation to create digital twins of real-world articulated mechanisms from depth or RGB videos. Our approach automatically discovers joint types and estimates their kinematic parameters, while the dynamic properties of the overall mechanism are tuned to attain physically accurate simulations. Control policies optimized in our derived simulation transfer successfully back to the original system, as we demonstrate on a simulated system. Further, our approach accurately reconstructs the kinematic tree of an articulated mechanism being manipulated by a robot, and highly nonlinear dynamics of a real-world coupled pendulum mechanism. Website: this https URL ",Inferring Articulated Rigid Body Dynamics from RGBD Video,3,"['We introduce a new method to learn simulators from depth and RGB videos. The ""URDF"" of an articulated rigid-body mechanism is reconstructed, and the parameters of the simulator inferred through Bayesian inference.\n\nWebsite: \nPaper: ', 'Our pipeline leverages inverse rendering (nvdiffrast) and differentiable physics (Tiny Differentiable Simulator) to track objects in the scene, find articulations via a RANSAC approach, and infer the distribution over simulation parameters. https://t.co/ccgVoJ9wWZ', 'Our approach finds a digital twin for articulated mechanisms from real depth or RGB video.\nCheck out our paper for more details!\n\nJoint work w/ Ziang Liu, @VibhavVineet, @erwincoumans, @gauravsukhatme https://t.co/8kehFh80AF']",22,03,705
489,135,1467934898728148995,2894532745,Priya L. Donti,"In our new #NeurIPS2021 paper, we provide one of the first approaches to address N-k SCOPF, a core problem for the operation of power grids, at realistic scale. Paper: Joint work with Aayushya Agarwal, @neerajssp2, @LarryPileggi, and @zicokolter 1/ N-k SCOPF aims to schedule power generation in a way that is robust to k potential equipment failures. While it's become increasingly important to solve (see, e.g., recent blackout events in the UK and Texas), N-k SCOPF is in practice prohibitively expensive to solve at scale. 2/ We propose a heuristic approach to address N-k SCOPF at scale. Our approach entails rewriting N-k SCOPF as a continuous minimax (attacker-defender) optimization problem, and solving it efficiently using insights from adversarial robustness and implicit layers in deep learning. 3/ We use our approach to address N-3 SCOPF on a realistic-size (4622 bus) system, and show that it significantly reduces the number of feasibility violations (by a factor of 3-4x) compared to state-of-the-art baselines, while taking only 21 minutes to run on a standard laptop. 4/ If you’re interested in chatting further about this work, Aayushya and I will be presenting our poster at #NeurIPS2021 on Tue, Dec 7 from 11:30am-1pm Eastern. Hope to see you there! 5/5 @nandofioretto @neerajssp2 @LarryPileggi @zicokolter Thanks! And yes, exactly :) It's a bit of a play on the name of our method, which is CAN∂Y",https://arxiv.org/abs/2111.06961,"In recent years, the ML community has seen surges of interest in both adversarially robust learning and implicit layers, but connections between these two areas have seldom been explored. In this work, we combine innovations from these areas to tackle the problem of N-k security-constrained optimal power flow (SCOPF). N-k SCOPF is a core problem for the operation of electrical grids, and aims to schedule power generation in a manner that is robust to potentially k simultaneous equipment outages. Inspired by methods in adversarially robust training, we frame N-k SCOPF as a minimax optimization problem - viewing power generation settings as adjustable parameters and equipment outages as (adversarial) attacks - and solve this problem via gradient-based techniques. The loss function of this minimax problem involves resolving implicit equations representing grid physics and operational decisions, which we differentiate through via the implicit function theorem. We demonstrate the efficacy of our framework in solving N-3 SCOPF, which has traditionally been considered as prohibitively expensive to solve given that the problem size depends combinatorially on the number of potential outages. ","Adversarially Robust Learning for Security-Constrained Optimal Power
Flow",6,"['In our new #NeurIPS2021 paper, we provide one of the first approaches to address N-k SCOPF, a core problem for the operation of power grids, at realistic scale.\n \nPaper: \n\nJoint work with Aayushya Agarwal, @neerajssp2, @LarryPileggi, and @zicokolter\n\n1/ ', ""N-k SCOPF aims to schedule power generation in a way that is robust to k potential equipment failures. While it's become increasingly important to solve (see, e.g., recent blackout events in the UK and Texas), N-k SCOPF is in practice prohibitively expensive to solve at scale. 2/"", 'We propose a heuristic approach to address N-k SCOPF at scale. Our approach entails rewriting N-k SCOPF as a continuous minimax (attacker-defender) optimization problem, and solving it efficiently using insights from adversarial robustness and implicit layers in deep learning. 3/ https://t.co/ijpTFX8hcg', 'We use our approach to address N-3 SCOPF on a realistic-size (4622 bus) system, and show that it significantly reduces the number of feasibility violations (by a factor of 3-4x) compared to state-of-the-art baselines, while taking only 21 minutes to run on a standard laptop. 4/ https://t.co/nfCtGUoOSi', 'If you’re interested in chatting further about this work, Aayushya and I will be presenting our poster at #NeurIPS2021 on Tue, Dec 7 from 11:30am-1pm Eastern. Hope to see you there! \n\nhttps://t.co/ocgvWwJSTj \n\n5/5 https://t.co/pT3pgCiX1s', ""@nandofioretto @neerajssp2 @LarryPileggi @zicokolter Thanks! And yes, exactly :) It's a bit of a play on the name of our method, which is CAN∂Y""]",21,11,1464
490,80,1129148265184862208,45724845,Swarat Chaudhuri,"New #ICML2019 paper: ""Control Regularization for Reduced Variance Reinforcement Learning"". Moral: by regularizing DeepRL with a symbolic ""control prior"", you can: 1) learn more efficiently with lower-variance gradients; 2) get provably stable policies. ",https://arxiv.org/abs/1905.05380,"Dealing with high variance is a significant challenge in model-free reinforcement learning (RL). Existing methods are unreliable, exhibiting high variance in performance from run to run using different initializations/seeds. Focusing on problems arising in continuous control, we propose a functional regularization approach to augmenting model-free RL. In particular, we regularize the behavior of the deep policy to be similar to a policy prior, i.e., we regularize in function space. We show that functional regularization yields a bias-variance trade-off, and propose an adaptive tuning strategy to optimize this trade-off. When the policy prior has control-theoretic stability guarantees, we further show that this regularization approximately preserves those stability guarantees throughout learning. We validate our approach empirically on a range of settings, and demonstrate significantly reduced variance, guaranteed dynamic stability, and more efficient learning than deep RL alone. ",Control Regularization for Reduced Variance Reinforcement Learning,1,"['New #ICML2019 paper: ""Control Regularization for Reduced Variance Reinforcement Learning"". Moral: by regularizing DeepRL with a symbolic ""control prior"", you can: 1) learn more efficiently with lower-variance gradients; 2) get provably stable policies. ']",19,05,259
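The control-regularization entry above regularizes a deep RL policy toward a symbolic control prior in function space. One common way to realize this idea — a sketch under an assumed feedback gain and mixing weight, not necessarily the paper's exact formulation — is to deploy a convex combination of the learned action and a stabilizing state-feedback prior:

```python
# Functional regularization toward a control prior: deployed action is a weighted
# blend of the learned policy and a linear state-feedback prior. Gain K, the
# stand-in "deep" policy, and the mixing weight lam are all hypothetical.
import numpy as np

K = np.array([[2.0, 1.0]])             # hypothetical stabilizing feedback gain

def control_prior(state):
    return -K @ state                   # linear prior u = -Kx

def learned_policy(state, weights):
    return np.tanh(weights @ state)     # stand-in for a deep policy network

def regularized_action(state, weights, lam=2.0):
    # lam -> infinity recovers the prior; lam -> 0 recovers pure deep RL.
    u_rl = learned_policy(state, weights)
    u_prior = control_prior(state)
    return (u_rl + lam * u_prior) / (1.0 + lam)

state = np.array([0.5, -0.2])
weights = np.random.default_rng(0).normal(size=(1, 2))
print(regularized_action(state, weights))
```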
491,105,1513811805906382853,1214911964356452353,Sebastian Lerch,"New paper ""Convolutional autoencoders for spatially-informed ensemble post-processing"" accepted at the AI for Earth and Space Science Workshop at #ICLR2022 - available at . Joint work with @AstroInformatix 🧵 Motivation: Station-based post-processing models require localized predictors interpolated from the NWP model's spatial forecast fields to the target locations. Predictability information contained in large-scale spatial structures is potentially lost in this interpolation step. We propose the use of convolutional autoencoders to learn compact representations of spatial input fields which can then be used to augment location-specific information as additional inputs to post-processing models. Convolutional autoencoders are applied to spatial forecast fields of different variables, and generally do a good job at reconstructing the mean forecast fields. Using AE representations as additional inputs improves the performance of post-processing models, but the results vary substantially across variables and dimensionality of the latent representations. @vitusbenson Thanks! This was done by others for example here: . We mainly wanted to try an alternative approach that appeared to be computationally more affordable.",https://arxiv.org/abs/2204.05102,"Ensemble weather predictions typically show systematic errors that have to be corrected via post-processing. Even state-of-the-art post-processing methods based on neural networks often solely rely on location-specific predictors that require an interpolation of the physical weather model's spatial forecast fields to the target locations. However, potentially useful predictability information contained in large-scale spatial structures within the input fields is potentially lost in this interpolation step. Therefore, we propose the use of convolutional autoencoders to learn compact representations of spatial input fields which can then be used to augment location-specific information as additional inputs to post-processing models. The benefits of including this spatial information is demonstrated in a case study of 2-m temperature forecasts at surface stations in Germany. ","Convolutional autoencoders for spatially-informed ensemble
post-processing",6,"['New paper ""Convolutional autoencoders for spatially-informed ensemble post-processing"" accepted at the AI for Earth and Space Science Workshop at #ICLR2022 - available at . Joint work with @AstroInformatix 🧵', ""Motivation: Station-based post-processing models require localized predictors interpolated from the NWP model's spatial forecast fields to the target locations. Predictability information contained in large-scale spatial structures is potentially lost in this interpolation step. https://t.co/yPk5wg9wVK"", 'We propose the use of convolutional autoencoders to learn compact representations of spatial input fields which can then be used to augment location-specific information as additional inputs to post-processing models. https://t.co/en6AmO8pMj', 'Convolutional autoencoders are applied to spatial forecast fields of different variables, and generally do a good job at reconstructing the mean forecast fields. https://t.co/8aHbXTeQrI', 'Using AE representations as additional inputs improves the performance of post-processing models, but the results vary substantially across variables and dimensionality of the latent representations. https://t.co/AMVxnnpdES', '@vitusbenson Thanks! This was done by others for example here: https://t.co/NP46aanoao. We mainly wanted to try an alternative approach that appeared to be computationally more affordable.']",22,04,1273
492,86,1481167298253803521,1150810215023091712,Dr. Emrah Tiras,Our new paper is on arXiv (with @Kandemir__M and Dr. Fischer): We developed a joint-simulation framework for segmented neutrino detectors around the globe. --- We brought together the segmented neutrino detectors currently in use into a joint simulation.,https://arxiv.org/abs/2201.03689,"NuSD: Neutrino Segmented Detector is a Geant4-based user application that simulates inverse beta decay event in a variety of segmented scintillation detectors developed by different international collaborations. This simulation framework uses a combination of cross-programs and libraries including Geant4, ROOT and CLHEP developed and used by high energy physics community. It will enable the neutrino physics community to simulate and study neutrino interactions within different detector concepts using a single program. In addition to neutrino simulations in segmented detectors, this program can also be used for various research projects that use of scintillation detectors for different physics purposes. ","NuSD: A Geant4 based simulation framework for segmented anti-neutrino
detectors",1,['Our new paper is on arXiv (with @Kandemir__M and Dr. Fischer): \nWe developed a joint-simulation framework for segmented neutrino detectors around the globe. \n--- \nWe brought together the segmented neutrino detectors currently in use into a joint simulation.'],22,01,261
493,79,1215216368196431873,4665536483,James Grant,Paper with @DLeslieLancs accepted to AISTATS! “On Thompson Sampling for Smoother-than-Lipschitz Bandits” provides new regret bounds in bandit problems with smooth reward functions Excited to present this work in Palermo in June! @storiLucy @DLeslieLancs Thanks Lucy!! @Boukouva1Alexis @DLeslieLancs Thanks Alexis!!,https://arxiv.org/abs/2001.02323,"Thompson Sampling is a well established approach to bandit and reinforcement learning problems. However its use in continuum armed bandit problems has received relatively little attention. We provide the first bounds on the regret of Thompson Sampling for continuum armed bandits under weak conditions on the function class containing the true function and sub-exponential observation noise. Our bounds are realised by analysis of the eluder dimension, a recently proposed measure of the complexity of a function class, which has been demonstrated to be useful in bounding the Bayesian regret of Thompson Sampling for simpler bandit problems under sub-Gaussian observation noise. We derive a new bound on the eluder dimension for classes of functions with Lipschitz derivatives, and generalise previous analyses in multiple regards. ",On Thompson Sampling for Smoother-than-Lipschitz Bandits,3,"['Paper with @DLeslieLancs accepted to AISTATS! “On Thompson Sampling for Smoother-than-Lipschitz Bandits” provides new regret bounds in bandit problems with smooth reward functions Excited to present this work in Palermo in June!', '@storiLucy @DLeslieLancs Thanks Lucy!!', '@Boukouva1Alexis @DLeslieLancs Thanks Alexis!!']",20,01,321
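The bandit entry above concerns Thompson Sampling on continuum-armed bandits with smooth reward functions. As a concrete but hypothetical instance, the sketch below runs Thompson Sampling with Bayesian linear regression over polynomial features on a discretized arm space; the reward function, feature map, noise level, and prior are all assumptions, and the paper's analysis covers far more general smoother-than-Lipschitz function classes.

```python
# Thompson Sampling on a discretised continuum of arms with a Bayesian linear
# model over polynomial features (a linear-TS-style sketch, not the paper's setup).
import numpy as np

rng = np.random.default_rng(0)
arms = np.linspace(0.0, 1.0, 200)                 # discretised continuum of arms
features = np.vander(arms, N=5, increasing=True)  # polynomial feature map, d = 5
true_theta = np.array([0.1, 1.0, -0.5, -1.0, 0.3])
noise = 0.1

d = features.shape[1]
precision = np.eye(d)            # posterior precision (prior roughly N(0, I))
b = np.zeros(d)

for t in range(500):
    cov = np.linalg.inv(precision)
    mean = cov @ b
    theta_sample = rng.multivariate_normal(mean, noise**2 * cov)  # posterior sample
    a = int(np.argmax(features @ theta_sample))                   # greedy w.r.t. sample
    reward = features[a] @ true_theta + rng.normal(0, noise)
    precision += np.outer(features[a], features[a])               # Bayesian update
    b += reward * features[a]

post_mean = np.linalg.inv(precision) @ b
print("chosen arm after learning:", arms[int(np.argmax(features @ post_mean))])
print("true best arm:", arms[int(np.argmax(features @ true_theta))])
```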