
The Stellar Populations and Rest-Frame Colors of Star-Forming Galaxies at $z \approx 8$: Exploring the Impact of Filter Choice and Star Formation History Assumption with JADES

Our understanding of the physical properties of star-forming galaxies during the Epoch of Reionization (EoR, at $z > 6$) suffers from degeneracies among the apparent properties of the stars, the nebular gas, and the dust. These degeneracies are most prominent with photometry, which has insufficient (1) spectral resolution and (2) rest-frame spectral coverage. We explore ways to break these degeneracies with a sample of $N = 22$ high-redshift star-forming galaxies at $7 < z_\mathrm{phot} \leq 9$, using some of the deepest existing imaging from JWST/NIRCam and JWST/MIRI with JADES. Key to this study is the imaging from JWST/MIRI at 7.7 $\mu$m, which provides coverage of the rest-frame $I$-band at the observed redshifts. We infer stellar population properties and rest-frame colors using a variety of filter sets and star formation history assumptions to explore the impact of these choices. Evaluating these quantities both with and without the 7.7 $\mu$m data point shows that dense spectral coverage with JWST/NIRCam (eight or more filters, including at least one medium band) can compensate for the lack of rest-frame $I$-band coverage for the vast majority ($\approx 80\%$) of our sample. Furthermore, these galaxy properties are most consistently determined by assuming the delayed-$\tau$ star formation history, which yields the smallest offsets, and the smallest scatter around those offsets, when the JWST/MIRI data are included. Within extragalactic surveys like JADES and CEERS, our findings suggest that robust characterization of the stellar population properties and rest-frame colors of high-redshift star-forming galaxies is possible with JWST/NIRCam alone at $z \approx 8$.
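For reference, the delayed-$\tau$ star formation history assumed above is conventionally parametrized as follows (this is the standard textbook form, not an equation quoted from the paper):

$$\mathrm{SFR}(t) \propto \frac{t - t_0}{\tau^2}\, \exp\!\left(-\frac{t - t_0}{\tau}\right), \qquad t \geq t_0,$$

where $t_0$ marks the onset of star formation and $\tau$ sets the timescale of the initial rise and subsequent exponential decline.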

An Atlas of Color-selected Quiescent Galaxies at $z>3$ in Public $JWST$ Fields

We present the results of a systematic search for candidate quiescent galaxies in the distant Universe in eleven JWST fields with publicly available observations collected during the first three months of operations, covering an effective sky area of $\sim$145 arcmin$^2$. We homogeneously reduce the new JWST data and combine them with existing observations from the Hubble Space Telescope. We select a robust sample of $\sim$80 candidate quiescent and quenching galaxies at $3 < z < 5$ using two methods: (1) selection based on their rest-frame $UVJ$ colors, and (2) a novel quantitative approach based on Gaussian Mixture Modeling of the $NUV-U$, $U-V$, and $V-J$ rest-frame color space, which is more sensitive to recently quenched objects. We measure comoving number densities of massive ($M_\star \geq 10^{10.6}\,M_\odot$) quiescent galaxies consistent with previous estimates relying on ground-based observations, after homogenizing the literature results to our mass and redshift intervals. However, we find significant field-to-field variations in the number densities, up to a factor of 2-3, highlighting the effect of cosmic variance and suggesting the presence of overdensities of red quiescent galaxies at $z>3$, as would be expected for highly clustered massive systems. Importantly, JWST enables the robust identification of quenching/quiescent galaxy candidates at lower masses and higher redshifts than before, challenging standard formation scenarios. All data products, including the literature compilation, are made publicly available.
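As a rough illustration of the second selection method, a Gaussian mixture can be fit to the three-dimensional rest-frame color space and used to assign membership probabilities. The snippet below is a minimal sketch with scikit-learn, assuming a `colors` array of shape (N, 3) holding the NUV-U, U-V, and V-J colors; the component count and probability threshold are illustrative choices, not the paper's values:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# colors: (N, 3) array of rest-frame NUV-U, U-V, V-J colors
# (placeholder random data here; a real catalogue would be loaded instead)
rng = np.random.default_rng(0)
colors = rng.normal(size=(500, 3))

# Fit a mixture with a handful of components; one (or more) components
# would trace the red quiescent locus, the rest the star-forming cloud.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(colors)

# Posterior probability that each galaxy belongs to each component.
membership = gmm.predict_proba(colors)

# Suppose component 0 turned out to be the quiescent locus (in practice it
# would be identified from its mean colors); keep confident members only.
quiescent = membership[:, 0] > 0.7
print(f"{quiescent.sum()} candidate quiescent galaxies")
```

Because the mixture models the full color distribution rather than a hard box in color space, recently quenched galaxies sitting near the boundary can still receive a high membership probability.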

The Tale of Two Telescopes: How Hubble Uniquely Complements the James Webb Space Telescope: Galaxies

In this paper, we present a simple but compelling argument, focusing on galaxy science, for preserving the main imagers and operational modes of the Hubble Space Telescope (HST) for as long as is technically feasible. While star formation started at redshifts $z \gtrsim 10-13$, when the universe was less than 300-500 Myr old, the cosmic star-formation history (CSFH) did not peak until $z \simeq 1.9$, and has steadily declined since that time. Hence, at least half of all stars in the universe formed in the era where HST provides its unique rest-frame UV view of unobscured young, massive stars tracing cosmic star formation. By rendering a subset of the 556.3 hours of available HST images in 12 filters of the Hubble Ultra Deep Field (HUDF) in an appropriate mix of colors, we illustrate the unique capabilities of HST for galaxy science, emphasizing that rest-frame UV-optical wavelength range. We then contrast this with the 52.7 publicly available hours of JWST/NIRCam images in 8 filters of the same HUDF area from the JADES project, rendering these at the redder near-IR wavelengths to illustrate the unique capabilities of JWST to detect older stellar populations at higher redshifts, as well as very dusty stellar populations and Active Galactic Nuclei (AGN). HST uniquely probes (unobscured) young, hot, massive stars in galaxies, while JWST reveals more advanced stages of older stellar populations, as well as relatively short-lived phases where galaxies produce and shed a lot of dust from intense star formation, and the very high redshift universe ($z \gtrsim 10-11$) not accessible to HST. We conclude that HST and JWST are highly complementary facilities that took decades to build and should be ensured decades of operation. To maximize the return on investment in both HST and JWST, ways will need to be found to operate HST's imaging instruments in all relevant modes for as long as possible into the JWST mission.

RUBIES: a complete census of the bright and red distant Universe with JWST/NIRSpec

We present the Red Unknowns: Bright Infrared Extragalactic Survey (RUBIES), providing JWST/NIRSpec spectroscopy of red sources selected across $\sim$150 arcmin$^2$ from public JWST/NIRCam imaging in the UDS and EGS fields. The novel RUBIES observing strategy offers a well-quantified selection function: the survey is optimised to reach high ($>70\%$) completeness for bright and red (F150W$-$F444W $> 2$) sources that are very rare. To place these rare sources in context, we simultaneously observe a reference sample of the $2 < z < 7$ galaxy population, sampling sources at a rate that is inversely proportional to their number density in the 3D space of F444W magnitude, F150W$-$F444W colour, and photometric redshift. In total, RUBIES observes $\sim$3000 targets across $1 < z_\mathrm{phot} < 10$ with both the PRISM and G395M dispersers, and $\sim$1500 targets at $z_\mathrm{phot} > 3$ using only the G395M disperser. The RUBIES data reveal a highly diverse population of red sources that span a broad redshift range ($z_\mathrm{spec} \sim 1-9$), with a photometric redshift scatter and outlier fraction that are 3 times higher than for similarly bright sources that are less red. This diversity is not apparent from the photometric SEDs alone. Only spectroscopy reveals that the SEDs encompass a mixture of galaxies with dust-obscured star formation, extreme line emission, a lack of star formation indicating early quenching, and luminous active galactic nuclei. As a first demonstration of our broader selection function, we compare the stellar masses and rest-frame $U-V$ colours of the red sources and our reference sample: red sources are typically more massive ($M_\star \sim 10^{10-11.5}\,M_\odot$) across all redshifts. However, we find that the most massive systems span a wide range in $U-V$ colour. We describe our data reduction procedure and data quality, and publicly release the reduced RUBIES data and vetted spectroscopic redshifts for the first half of the survey through the DJA.
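A toy version of that inverse-density target selection can be written as a weighted draw over a coarse 3D histogram. The sketch below illustrates the general idea only, not the RUBIES implementation; the bin counts, catalogue columns, and sample sizes are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical catalogue: F444W magnitude, F150W-F444W colour, photo-z.
n = 10_000
f444w = rng.uniform(20, 28, n)
colour = rng.normal(1.0, 0.8, n)
zphot = rng.uniform(1, 10, n)

# Estimate the local number density with a coarse 3D histogram.
sample = np.column_stack([f444w, colour, zphot])
hist, edges = np.histogramdd(sample, bins=(8, 8, 9))

# Look up each source's bin occupancy and weight by its inverse,
# so rare corners of the space are sampled as often as common ones.
idx = [np.clip(np.digitize(sample[:, i], edges[i]) - 1, 0, hist.shape[i] - 1)
       for i in range(3)]
density = hist[idx[0], idx[1], idx[2]]
weights = 1.0 / np.maximum(density, 1)
weights /= weights.sum()

# Draw a reference sample with probability inversely proportional to density.
targets = rng.choice(n, size=500, replace=False, p=weights)
```

Weighting by inverse number density flattens the selection across magnitude, colour, and redshift, which is what makes the resulting selection function straightforward to quantify and invert.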

Paint Bucket Colorization Using Anime Character Color Design Sheets

Line art colorization plays a crucial role in hand-drawn animation production, where digital artists manually colorize segments using a paint bucket tool, guided by RGB values from character color design sheets. This process, often called paint bucket colorization, involves two main tasks: keyframe colorization, where colors are applied according to the character's color design sheet, and consecutive frame colorization, where these colors are replicated across adjacent frames. Current automated colorization methods primarily focus on reference-based and segment-matching approaches. However, reference-based methods often fail to accurately assign specific colors to each region, while matching-based methods are limited to consecutive frame colorization and struggle with issues like significant deformation and occlusion. In this work, we introduce inclusion matching, which allows the network to understand the inclusion relationships between segments, rather than relying solely on direct visual correspondences. By integrating this approach with segment parsing and color warping modules, our inclusion matching pipeline significantly improves performance in both keyframe colorization and consecutive frame colorization. To support our network's training, we have developed a unique dataset named PaintBucket-Character, which includes rendered line arts alongside their colorized versions and shading annotations for various 3D characters. To replicate industry animation data formats, we also created color design sheets for each character, with semantic information for each color and standard pose reference images. Experiments highlight the superiority of our method, demonstrating accurate and consistent colorization across both our proposed benchmarks and hand-drawn animations.
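To make the idea of inclusion matching concrete, here is a minimal sketch (not the paper's pipeline) that derives inclusion relationships between segments from binary masks; such relations could then guide matching instead of raw visual correspondence. The `segments` input and the overlap threshold are assumptions for illustration:

```python
import numpy as np

def inclusion_relations(segments, thresh=0.95):
    """Return pairs (i, j) where segment i is (nearly) contained in segment j.

    segments: list of boolean masks of equal shape (hypothetical input).
    """
    relations = []
    for i, inner in enumerate(segments):
        area = inner.sum()
        if area == 0:
            continue
        for j, outer in enumerate(segments):
            if i == j:
                continue
            # Fraction of the inner segment covered by the outer one.
            overlap = np.logical_and(inner, outer).sum() / area
            if overlap >= thresh:
                relations.append((i, j))
    return relations

# Toy example: a small patch sitting inside a larger region.
big = np.zeros((8, 8), bool); big[1:7, 1:7] = True
small = np.zeros((8, 8), bool); small[3:5, 3:5] = True
print(inclusion_relations([small, big]))  # [(0, 1)]
```

A relation like "the pupil segment lies inside the eye segment" survives deformation and partial occlusion far better than pixel-level appearance does, which is the intuition behind matching on inclusion structure.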

Video Colorization with Pre-trained Text-to-Image Diffusion Models

Video colorization is a challenging task that involves inferring plausible and temporally consistent colors for grayscale frames. In this paper, we present ColorDiffuser, an adaptation of a pre-trained text-to-image latent diffusion model for video colorization. With the proposed adapter-based approach, we repurpose the pre-trained text-to-image model to accept input grayscale video frames, with an optional text description, for video colorization. To enhance temporal coherence and maintain the vividness of colorization across frames, we propose two novel techniques: Color Propagation Attention and an Alternated Sampling Strategy. Color Propagation Attention enables the model to refine its colorization decisions based on a reference latent frame, while the Alternated Sampling Strategy captures spatiotemporal dependencies by using the next and previous adjacent latent frames alternately as references during the generative diffusion sampling steps. This encourages bidirectional color information propagation between adjacent video frames, leading to improved color consistency across frames. We conduct extensive experiments on benchmark datasets, and the results demonstrate the effectiveness of our proposed framework. The evaluations show that ColorDiffuser achieves state-of-the-art performance in video colorization, surpassing existing methods in terms of color fidelity, temporal consistency, and visual quality.
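The alternated sampling idea can be sketched as a reverse-diffusion loop in which the reference latent alternates between the previous and next frame on successive steps, so color information flows in both directions. This is an illustrative structure under assumed names, not the authors' code; `denoise_step` and the latent objects are stand-ins:

```python
def alternated_sampling(latents, denoise_step, num_steps):
    """Denoise a list of per-frame latents, alternating the reference
    frame between the previous and next neighbour at each step.

    latents: list of per-frame latent tensors (stand-in objects).
    denoise_step(z, ref, t): one reverse-diffusion update (assumed API;
        must tolerate ref=None at sequence boundaries).
    """
    for t in range(num_steps):
        use_prev = (t % 2 == 0)  # even steps look backward, odd steps forward
        new_latents = []
        for i, z in enumerate(latents):
            if use_prev:
                ref = latents[i - 1] if i > 0 else None
            else:
                ref = latents[i + 1] if i + 1 < len(latents) else None
            new_latents.append(denoise_step(z, ref, t))
        latents = new_latents
    return latents
```

Because every frame conditions on its backward neighbour on half the steps and its forward neighbour on the other half, color decisions made anywhere in the clip can diffuse across the whole sequence over the course of sampling.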

ColorMNet: A Memory-based Deep Spatial-Temporal Feature Propagation Network for Video Colorization

Effectively exploiting spatial-temporal features is important for video colorization. Instead of stacking multiple frames along the temporal dimension or recurrently propagating estimated features, which accumulates errors and cannot exploit information from far-apart frames, we develop a memory-based feature propagation module that establishes reliable connections with features from far-apart frames and alleviates the influence of inaccurately estimated features. To extract better features from each frame for this propagation, we use features from large pre-trained visual models to guide the per-frame feature estimation, so that the estimated features can model complex scenarios. In addition, we note that adjacent frames usually contain similar content. To exploit this property for better spatial and temporal feature utilization, we develop a local attention module that aggregates features from adjacent frames within a spatial-temporal neighborhood. We formulate the memory-based feature propagation module, the feature estimation module guided by large pre-trained visual models, and the local attention module into an end-to-end trainable network (named ColorMNet) and show that it performs favorably against state-of-the-art methods on both benchmark datasets and real-world scenarios. The source code and pre-trained models will be available at https://github.com/yyang181/colormnet.
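A bare-bones version of memory-based feature propagation is a cross-attention read from a bank of features accumulated over earlier frames. The NumPy sketch below is a schematic of that mechanism, not ColorMNet's architecture; the shapes and the softmax read are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_read(query, mem_keys, mem_values):
    """Attend from current-frame features to a bank of stored features.

    query:      (N, d) current-frame feature vectors
    mem_keys:   (M, d) keys accumulated from earlier (possibly far-apart) frames
    mem_values: (M, d) the features to propagate
    """
    attn = softmax(query @ mem_keys.T / np.sqrt(query.shape[1]))  # (N, M)
    return attn @ mem_values  # propagated features, shape (N, d)

# Toy run: a memory of 2 past frames, 16 locations each, 8-dim features.
rng = np.random.default_rng(2)
keys = rng.normal(size=(32, 8))
values = rng.normal(size=(32, 8))
current = rng.normal(size=(16, 8))
out = memory_read(current, keys, values)
print(out.shape)  # (16, 8)
```

Reading from a persistent bank rather than only the previous frame is what lets such a module reach far-apart frames directly, and the soft attention weights naturally down-weight stored features that match the current frame poorly.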