We investigate the sample complexity of recovering tensors with low symmetric rank from symmetric rank-one measurements, a setting particularly motivated by the study of higher-order interactions in statistics and the analysis of two-layer polynomial neural networks. Using a covering number argument, we analyze the performance of the symmetric rank minimization program and establish near-optimal sample complexity bounds when the underlying distribution is log-concave. Our measurement model involves random symmetric rank-one tensors, leading to involved probability calculations. To address these challenges, we employ the Carbery-Wright inequality, a powerful tool for studying anti-concentration properties of random polynomials, and leverage orthogonal polynomial expansions. Additionally, we provide a sample complexity lower bound via Fano’s inequality, and discuss broader implications of our results for two-layer polynomial networks.
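As a concrete, hedged reading of the setup (our notation, which may differ from the paper's exact formulation): with random measurement vectors $a_i$ drawn from a log-concave distribution, the observations and the symmetric rank minimization program can be written as

```latex
% Illustrative formalization of the measurement model and the rank-minimization
% program; notation and details are assumptions made for exposition only.
\[
  y_i \;=\; \big\langle \mathcal{T},\, a_i^{\otimes d} \big\rangle,
  \qquad i = 1, \dots, m,
\]
\[
  \widehat{\mathcal{T}} \;=\; \arg\min_{\mathcal{X}} \ \operatorname{rank}_{\mathrm{sym}}(\mathcal{X})
  \quad \text{subject to} \quad
  \big\langle \mathcal{X},\, a_i^{\otimes d} \big\rangle = y_i, \;\; i = 1, \dots, m.
\]
```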
Much of learning theory is concerned with the design and analysis of probably approximately correct (PAC) learners. The closely related transductive model of learning has recently seen more scrutiny, with its learners often used as precursors to PAC learners. Our goal in this work is to understand and quantify the exact relationship between these two models. First, we observe that modest extensions of existing results show the models to be essentially equivalent for realizable learning for most natural loss functions, up to low order terms in the error and sample complexity. The situation for agnostic learning appears less straightforward, with sample complexities potentially separated by a $\frac{1}{\epsilon}$ factor. This is therefore where our main contributions lie. Our results are two-fold: 1. For agnostic learning with bounded losses (including, for example, multiclass classification), we show that PAC learning reduces to transductive learning at the cost of low-order terms in the error and sample complexity. This is via an adaptation of the reduction of Aden-Ali et al. (2023a) to the agnostic setting. 2. For agnostic binary classification, we show the converse: transductive learning is essentially no more difficult than PAC learning. Together with our first result this implies that the PAC and transductive models are essentially equivalent for agnostic binary classification. This is our most technical result, and involves two key steps: (a) A symmetrization argument on the agnostic one-inclusion graph (OIG) of Asilis et al. (2024) to derive the worst-case agnostic transductive instance, and (b) expressing the error of the agnostic OIG algorithm for this instance in terms of the empirical Rademacher complexity of the class. We leave as an intriguing open question whether our second result can be extended beyond binary classification to show the transductive and PAC models equivalent more broadly.
Decision trees are commonly used predictive models due to their flexibility and interpretability. This paper is directed at quantifying the uncertainty of decision tree predictions by employing a Bayesian inference approach. This is challenging because these approaches need to explore both the tree structure space and the space of decision parameters associated with each tree structure. Importantly, the structure and the decision parameters are tightly coupled; small changes in the tree structure can demand vastly different decision parameters to provide accurate predictions. A challenge for existing sample-based approaches is proposing joint changes in both the tree structure and the decision parameters that result in efficient sampling. This paper takes a different approach, where each distinct tree structure is associated with a unique set of decision parameters. The proposed approach, entitled DCC-Tree, is inspired by the work in Zhou et al. (2020) for probabilistic programs and Cochrane et al. (2023) for Hamiltonian Monte Carlo (HMC) based sampling for decision trees. Results show that DCC-Tree performs comparably to other HMC-based methods and better than existing Bayesian tree methods while improving on consistency and reducing the per-proposal complexity.
Transformers have become a standard architecture in machine learning, demonstrating strong in-context learning (ICL) abilities that allow them to learn from the prompt at inference time. However, uncertainty quantification for ICL remains an open challenge, particularly in noisy regression tasks. This paper investigates whether ICL can be leveraged for distribution-free uncertainty estimation, proposing a method based on conformal prediction to construct prediction intervals with guaranteed coverage. While traditional conformal methods are computationally expensive due to repeated model fitting, we exploit ICL to efficiently generate confidence intervals in a single forward pass. Our empirical analysis compares this approach against ridge regression-based conformal methods, showing that conformal prediction with in-context learning (*CP with ICL*) achieves robust and scalable uncertainty estimates. Additionally, we evaluate its performance under distribution shifts and establish scaling laws to guide model training. These findings bridge ICL and conformal prediction, providing a new, theoretically grounded framework for uncertainty quantification in transformer-based models.
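To make the single-forward-pass idea concrete, here is a minimal split conformal sketch in which the in-context learner supplies point predictions for the calibration and test queries; the absolute-residual score and the way predictions are obtained are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal split conformal sketch on top of in-context predictions. yhat_cal / yhat_test are
# assumed to come from a single transformer forward pass over the prompt; the absolute-residual
# score is a common default, not necessarily the paper's choice.
import numpy as np

def conformal_intervals(y_cal, yhat_cal, yhat_test, alpha=0.1):
    """Split conformal prediction intervals with absolute-residual nonconformity scores."""
    n = len(y_cal)
    scores = np.abs(np.asarray(y_cal) - np.asarray(yhat_cal))   # calibration nonconformity scores
    k = int(np.ceil((n + 1) * (1 - alpha)))                     # finite-sample corrected rank
    q = np.sort(scores)[min(k, n) - 1]                          # conformal quantile (clamped if n < k)
    yhat_test = np.asarray(yhat_test)
    return yhat_test - q, yhat_test + q                         # coverage >= 1 - alpha when k <= n

# Example: lo, hi = conformal_intervals(y_cal, yhat_cal, yhat_test, alpha=0.1)
```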
Bayesian inference with computationally expensive likelihood evaluations remains a significant challenge in many scientific domains. We propose normalizing flow regression (NFR), a novel offline inference method for approximating posterior distributions. Unlike traditional surrogate approaches that require additional sampling or inference steps, NFR directly yields a tractable posterior approximation through regression on existing log-density evaluations. We introduce training techniques specifically for flow regression, such as tailored priors and likelihood functions, to achieve robust posterior and model evidence estimation. We demonstrate NFR's effectiveness on synthetic benchmarks and real-world applications from neuroscience and biology, showing superior or comparable performance to existing methods. NFR represents a promising approach for Bayesian inference when standard methods are computationally prohibitive or existing model evaluations can be recycled.
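The core regression idea can be illustrated with a toy sketch: fit a density model so that its log-density matches the stored (unnormalized) log-posterior evaluations up to a learned constant $\log Z$. For brevity the "flow" below is a diagonal affine flow, i.e., a Gaussian; NFR itself uses richer flows with tailored priors and likelihoods, so treat this as a schematic only.

```python
# Toy sketch of flow regression: match the model's log-density to stored log-posterior
# evaluations up to an unknown constant log Z. The "flow" here is just a diagonal Gaussian
# stand-in; the actual method uses normalizing flows with tailored priors and likelihoods.
import torch

def fit_by_log_density_regression(theta, log_p, steps=2000, lr=1e-2):
    """theta: (n, d) evaluation points; log_p: (n,) unnormalized log-posterior values."""
    d = theta.shape[1]
    mu = torch.zeros(d, requires_grad=True)
    log_sigma = torch.zeros(d, requires_grad=True)
    log_Z = torch.zeros(1, requires_grad=True)                   # learned normalizing constant
    opt = torch.optim.Adam([mu, log_sigma, log_Z], lr=lr)
    for _ in range(steps):
        q = torch.distributions.Normal(mu, log_sigma.exp())
        log_q = q.log_prob(theta).sum(-1)                        # model log-density at the points
        loss = ((log_q + log_Z - log_p) ** 2).mean()             # squared error in log space
        opt.zero_grad(); loss.backward(); opt.step()
    return mu.detach(), log_sigma.exp().detach(), log_Z.detach() # log_Z approximates log evidence
```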
Despite significant recent advances in probabilistic meta-learning, it is common for practitioners to avoid using deep learning models due to a comparative lack of interpretability. Instead, many practitioners simply use non-meta-models such as Gaussian processes with interpretable priors, and conduct the tedious procedure of training their model from scratch for each task they encounter. While this is justifiable for tasks with a limited number of data points, the cubic computational cost of exact Gaussian process inference renders this prohibitive when each task has many observations. To remedy this, we introduce a family of models that meta-learn sparse Gaussian process inference. Not only does this enable rapid prediction on new tasks with sparse Gaussian processes, but since our models have clear interpretations as members of the neural process family, it also allows manual elicitation of priors in a neural process for the first time. In meta-learning regimes for which the number of observed tasks is small or for which expert domain knowledge is available, this offers a crucial advantage.
Bayesian inference for hierarchical models can be very challenging. MCMC methods have difficulty scaling to large models with many observations and latent variables. While variational inference (VI) and reweighted wake-sleep (RWS) can be more scalable, they are gradient-based methods and so often require many iterations to converge. Our key insight is that modern massively parallel importance weighting methods (Bowyer et al., 2024) give fast and accurate posterior moment estimates, and we can use these moment estimates to rapidly learn an approximate posterior. Specifically, we propose using expectation maximization to fit the approximate posterior, which we call QEM. The expectation step involves computing the posterior moments using high-quality massively parallel estimates from Bowyer et al. (2024). The maximization step involves fitting the approximate posterior using these moments, which can be done straightforwardly for simple approximate posteriors such as Gaussian, Gamma, Beta, Dirichlet, Binomial, Multinomial, Categorical, etc. (or combinations thereof). We show that QEM is faster than state-of-the-art, massively parallel variants of RWS and VI, and is invariant to reparameterizations of the model that dramatically slow down gradient-based methods.
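A minimal sketch of the QEM loop for a single multivariate Gaussian approximate posterior is given below; plain self-normalized importance sampling stands in for the massively parallel moment estimators of Bowyer et al. (2024), so this conveys only the E-step / M-step structure, not the paper's implementation.

```python
# Illustrative QEM loop for a Gaussian approximate posterior. Moments are estimated with
# ordinary self-normalized importance sampling; the paper instead uses massively parallel
# estimators.
import numpy as np
from scipy.stats import multivariate_normal

def qem_gaussian(log_joint, dim, iters=50, n_samples=10_000, seed=0):
    """log_joint: callable mapping an (n, dim) array of samples to (n,) log-joint values."""
    rng = np.random.default_rng(seed)
    mu, cov = np.zeros(dim), np.eye(dim)                         # initial approximate posterior
    for _ in range(iters):
        z = rng.multivariate_normal(mu, cov, size=n_samples)     # draw from current approximation
        log_w = log_joint(z) - multivariate_normal.logpdf(z, mu, cov)
        w = np.exp(log_w - log_w.max()); w /= w.sum()            # self-normalized importance weights
        mu = w @ z                                               # E-step: estimated posterior mean
        diff = z - mu
        cov = diff.T @ (w[:, None] * diff) + 1e-6 * np.eye(dim)  # E-step: estimated covariance
        # M-step: for a Gaussian, fitting to these moments simply sets (mu, cov) directly.
    return mu, cov
```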
We present a method to improve the calibration of deep ensembles in the small data regime in the presence of unlabeled data. Our approach, which we name $U$-ensembles, is extremely easy to implement: given an unlabeled set, for each unlabeled data point, we simply fit a different randomly selected label with each ensemble member. We provide a theoretical analysis based on a PAC-Bayes bound which guarantees that for such a labeling we obtain low negative log-likelihood and high ensemble diversity on test samples. Empirically, through detailed experiments, we find that for small to moderately sized training sets, $U$-ensembles are more diverse and provide better calibration than standard ensembles.
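The labeling scheme itself takes only a few lines; the sketch below uses a generic scikit-learn classifier as a placeholder for the deep ensemble members and is only meant to show how the random labels are assigned per member.

```python
# Sketch of the U-ensembles labeling scheme: each member is trained on the labeled data plus
# the unlabeled data with its own independently drawn random labels. The MLP classifier is a
# placeholder for the deep models used in the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

def fit_u_ensemble(X_lab, y_lab, X_unlab, n_classes, n_members=5, seed=0):
    rng = np.random.default_rng(seed)
    members = []
    for m in range(n_members):
        y_rand = rng.integers(0, n_classes, size=len(X_unlab))   # fresh random labels per member
        X = np.concatenate([X_lab, X_unlab])
        y = np.concatenate([y_lab, y_rand])
        members.append(MLPClassifier(max_iter=300, random_state=m).fit(X, y))
    return members

def ensemble_proba(members, X):
    return np.mean([m.predict_proba(X) for m in members], axis=0)  # averaged predictive distribution
```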
Motivated by deep neural networks, the deep Gaussian process (DGP) generalizes the standard GP by stacking multiple layers of GPs. Despite the enhanced expressiveness, the GP, as an $L_2$ regularization prior, tends to be over-smooth and sub-optimal for inhomogeneous objects, such as images with edges. Recently, the Q-exponential process (Q-EP) has been proposed as an $L_q$ relaxation of the GP and shown to have more desirable regularization properties, governed by a parameter $q>0$ with $q=2$ corresponding to the GP. Sharing the same tractability of posterior and predictive distributions as the GP, the Q-EP can also be stacked to improve its modeling flexibility. In this paper, we generalize the Q-EP to a deep Q-EP to model inhomogeneous data with improved expressiveness. We introduce a shallow Q-EP as a latent variable model and then build a hierarchy of shallow Q-EP layers. Sparse approximation by inducing points and a scalable variational strategy are applied to facilitate the inference. We demonstrate the numerical advantages of the proposed deep Q-EP model by comparing it with multiple state-of-the-art deep probabilistic models.
User interfaces for robotic systems often force users to shift focus between digital interfaces and their physical surroundings, which leads to inefficiencies and potential safety issues. In this paper we present a novel Mixed Reality system which, by seamlessly integrating holographic elements with the physical world, seeks to overcome these limitations. Our proposed system connects the Microsoft HoloLens2 with an overhead camera and a ground rover to enable mixed reality-based control for robotic navigation. The system allows users to set waypoints for an autonomous rover using a holographic interface displayed through the HoloLens2, providing an intuitive and immersive control experience. The interface is engineered with the objective of augmenting user awareness of both the environmental context and system dynamics, and delivers real-time visual feedback. With this proposed design, we address the challenge of enhancing user multitasking and situational awareness in complex environments. To the best of our knowledge, this is the first open-source mixed reality human supervisory control system supporting waypoint multi-robot control through HoloLens2. The system has been tested by many users from our department and demonstrated during educational and outreach activities on campus (e.g., during lab tours). This paper discusses the system’s design, implementation, and user experience, and provides insights into future improvements and applications of mixed reality in robotic control systems.
This paper presents MiXR-Interact, a dataset providing motion tracking data for users’ interactions in mixed reality (MR) environments, focusing on tracking their gaze, upper body movements, and hand gestures. The dataset is based on the Meta Quest Pro headset, offering an easy-to-use resource for researchers and developers working in MR and human-computer interaction (HCI). MiXR-Interact focuses on collecting natural and precise interactions with virtual objects, with three core interaction types: pushing, pointing, and grasping. To ensure robustness and generalization, each interaction is performed across six distinct directions, reflecting a diverse range of movement trajectories relative to the user’s body. This directional diversity provides critical insights into how users approach and engage with virtual objects from multiple angles. In addition, to precisely track contact points during interactions, 17 key contact points are defined for each direction and are labeled. These contact points are used as reference markers to accurately localize and quantify the joint-to-object contact points for each interaction type and direction. In addition to providing the dataset, this paper evaluates the quality and precision of the collected dataset in MR through a set of evaluation metrics. These metrics assess critical aspects of interaction performance, including Trajectory Similarity, Joint Orientation, and Joint-to-Contact Alignment. It also details the theoretical and implementation considerations for dataset collection, offering valuable insights for applications in MR and human-robot interaction (HRI).
Human-Robot Collaboration (HRC) in manufacturing requires a balance between physical task execution and cognitive decision making. This study investigates the integration of Augmented Reality (AR) and robotics to enhance task performance in a real manufacturing setting. The research focuses on a gasket room, a critical part of the enclosure production process responsible for sealing enclosures. Initial observations identified a complex 24-step workflow involving physically demanding tasks, such as panel transportation and alignment, and cognitively demanding tasks including decision making. To enhance efficiency and quality, robots will be employed to handle repetitive and physically demanding processes, while humans will focus on the cognitively demanding processes. AR will be used both to design an HRC system in which robots collaborate with humans in a real workspace and to assist human decision making by providing real-time guidance. The proposed AR-based HRC system will be evaluated through user testing, measuring improvements in efficiency, accuracy, and cognitive workload. Despite the study’s limitation of a small participant pool, future work will expand testing to a broader user group and explore scalability across different industrial settings.
Mixed reality (MR) presents a promising frontier in rehabilitation sciences, particularly for addressing motor impairments associated with Parkinson’s disease (PD). This paper explores the integration of an MR-based interactive rehabilitation system with an ergonomic hand-support device, designed to facilitate tremor stabilization and enhance fine motor control. By leveraging real-time sensor feedback and dynamic exercise progression, the system fosters an adaptive and engaging therapeutic environment. Conventional rehabilitation methods have demonstrated limited efficacy in addressing the progressive nature of PD-related tremors, with studies reporting that over 60% of patients experience difficulty in executing daily motor tasks. Through an iterative development process informed by expert consultation and patient feedback, the proposed approach emphasizes usability and clinical feasibility. Although preliminary in scope, this study provides a foundational framework for MR-assisted rehabilitation and suggests future pathways for incorporating artificial intelligence (AI) to refine adaptive therapeutic interventions. The findings contribute to the growing discourse on digital health innovation, underscoring MR’s potential in augmenting traditional rehabilitation strategies through immersive, user-centered methodologies.
Learning from demonstrations (LfD) is a key component of state-of-the-art robot learning approaches that enables robots to learn complex tasks by observing and imitating human actions. While there is a large body of work focused on developing effective algorithms, demonstration quality remains a bottleneck in LfD, mostly due to suboptimal interfaces for collecting demonstrations. This paper addresses this gap specifically in the context of bimanual tasks by proposing a VR setup for demonstration data collection in which we compare two conditions: one in which the user teleoperates the robot with the robot always visible (teleoperation condition), and another where the user demonstrates the task independently without visual feedback (egocentric condition). The task involves two Panda robot arms working collaboratively to pick up a tray stacked with cubes and place it at a designated goal. Performance is measured based on success rate and completion time. Additionally, we conducted a user study to evaluate the user experience within VR environments. The collected data was then fed into a behavior cloning algorithm, where we analyzed training loss, validation performance, and error metrics such as Mean Squared Error (MSE) and Mean Absolute Error (MAE). Results suggest that the teleoperation system performed better in basic tasks, whereas the egocentric condition performed slightly better in complex tasks. The behavior cloning algorithm demonstrated that the teleoperation system had stronger generalization across all tasks compared to the egocentric system.
Teleoperation in robotic systems encompasses three primary modes of control: full teleoperation, shared control, and autonomous operation. Full teleoperation allows human operators to have complete control over the robot, enabling real-time manipulation and decision making. Shared control, a hybrid approach, integrates elements of both teleoperation and autonomous control, permitting human intervention in specific scenarios while maintaining a degree of autonomous functionality. Autonomous operation relies entirely on the robot's decision-making algorithms to perform tasks without human input. Although shared control has proven effective in static environments, recent studies indicate that its benefits diminish in dynamic settings due to the increased cognitive load on the human operator and the frequent need to switch between modes. The advent of multimodal large language models (LLMs) such as GPT-4 and Gemini has significantly advanced visual scene understanding and language-based reasoning. These capabilities can enhance shared control systems by allowing operators to act as global planners and provide natural language commands, reducing the need for constant switching. This paper proposes a novel approach that combines language-driven machine learning models with shared control frameworks to improve human-robot interaction in both static and dynamic environments. We develop a language-model-guided shared control mechanism and evaluate its performance across various settings. Results from both qualitative feedback and quantitative metrics demonstrate that our LLM-based shared controller successfully reduces operator cognitive burden while improving overall task performance.
Integrating robotic manipulators into everyday households faces the significant challenge of allowing them to be taught skills in a natural and humanly understandable way. Although learning-from-demonstration (LFD) shows promise, its reliance on quality data and cumbersome demonstration methods limits its broader application. This paper presents a comparison study of the performance of machine learning models trained using task demonstrations carried out via two traditional methods, two traditional methods augmented with augmented reality (AR), and one purely AR-based method. We compare the performance of these input methods across four ML models and two input data modalities. The results demonstrate the advantage of using AR-augmented methods for LFD data collection: the pure AR method nearly matches the performance of the highest-performing AR-augmented traditional method while having none of the drawbacks of the traditional methods.
Real-time fault detection and error diagnosis are crucial to enhance trust and reduce process delays in the emerging landscape of single-human multiple-robot systems (SHMRS) within the manufacturing industry. In this paper, we propose a hybrid reality interface that utilizes real-time VR-AR transitions alongside an action sequence storage system for robot error replay. An interactive digital twin of the robotic platform, complemented by visualizations of recorded sensor data, enables operators to troubleshoot faults and adjust behaviors in an immersive environment. We also outline a future user study designed to compare this user-centric interface with traditional control methods. This work offers significant potential for advancing human-robot collaboration by facilitating a comprehensive, retrospective analysis of robot behavior.
This paper explores how large language model-based robots assist in detecting anomalies in high-risk environments and how users perceive their usability and reliability in a safe virtual environment. We present a system where a robot using a state-of-the-art vision-language model autonomously annotates potential hazards in a virtual world. The system provides users with contextual safety information via a VR interface. We conducted a user study to evaluate the system's performance across metrics such as trust, user satisfaction, and efficiency. Results demonstrated high user satisfaction and clear hazard communication, while trust remained moderate.
Soft robotic teleoperation offers unique advantages for bimanual manipulation, but users often struggle with visual feedback during operation, particularly when robot fingers are occluded by objects. We introduce SoftBiT, a teleoperation interface that enhances user awareness through real-time soft robot finger shape visualization in extended reality (XR). Our system combines proprioceptive sensing with an XR headset (Meta Quest 2) to provide users with intuitive visual feedback about finger deformations during manipulation tasks. SoftBiT's key innovation is a real-time sim-to-real pipeline that estimates and visualizes soft finger shapes, helping users better understand robot-object interactions even when direct visual feedback is limited. Through three representative tasks (pick-and-place, assembly, and object deformation), we demonstrate how augmented proprioceptive feedback supports user decision-making during manipulation. Our shape estimation system achieves 42.55 FPS, enabling smooth real-time visualization. This work lays the foundation for future user studies investigating how proprioceptive augmentation impacts teleoperation performance and user experience, with potential extensions to more complex multi-fingered manipulation tasks.
Typical fog machines need manual activation and human monitoring. This prevents robots from interfacing with such fog machines to control them autonomously for potential augmented reality (AR) applications, e.g., content augmented onto a fog screen. To solve this issue, we replaced the fog machine's manual remote with a custom PCB containing an Arduino microcontroller, on which we implemented a programming interface that can be used by ROS-enabled robots. Besides the Arduino, the board has a latching relay and a rectifier circuit. The latching relay effectively ``presses the button'' to emit fog, while the rectifier reads the machine's signals to detect when it is hot and ready to use. The electrical design carefully separates high-voltage lines from control signals, and a 3D-printed enclosure keeps everything safe and accessible. The programming interface allows ROS to control the fog machine seamlessly, letting the robot toggle the fog output automatically. As a result, researchers can quickly adapt an off-the-shelf fog machine for various human-robot interaction studies, especially in settings where traditional projection surfaces are unavailable. The code, 3D models, PCB files, and documentation are available on GitHub at https://bit.ly/4b1Mq8j.
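A hypothetical sketch of the ROS side of such an interface is shown below: a node that subscribes to a Bool topic and forwards on/off commands to the Arduino over serial. The topic name, serial port, and single-byte protocol are illustrative assumptions, not the released interface.

```python
# Hypothetical ROS bridge for the fog machine. Topic name, serial port/baud, and the
# single-byte protocol ('1' = emit fog, '0' = stop) are assumptions for illustration.
import rospy
import serial
from std_msgs.msg import Bool

def main():
    ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)   # assumed port/baud for the Arduino
    rospy.init_node("fog_machine_bridge")

    def on_cmd(msg):
        ser.write(b"1" if msg.data else b"0")              # forward the on/off command

    rospy.Subscriber("/fog_machine/enable", Bool, on_cmd)
    rospy.spin()

if __name__ == "__main__":
    main()
```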
We address the problem of generating realistic 3D human-object interactions (HOIs) driven by textual prompts. To this end, we take a modular design and decompose the complex task into simpler subtasks. We first develop a dual-branch diffusion model (DBDM) to generate both human and object motions conditioned on the input text, and encourage coherent motions by a cross-attention communication module between the human and object motion generation branches. We also develop an affordance prediction diffusion model (APDM) to predict the contacting area between the human and object during the interactions driven by the textual prompt. The APDM is independent of the results of the DBDM and thus can correct potential errors made by the latter. Moreover, it stochastically generates the contacting points to diversify the generated motions. Finally, we incorporate the estimated contacting points into the classifier guidance to achieve accurate and close contact between humans and objects. To train and evaluate our approach, we annotate the BEHAVE dataset with text descriptions. Experimental results on BEHAVE and OMOMO demonstrate that our approach produces realistic HOIs with various interactions and different types of objects.
This paper introduces a Multi-modal Diffusion model for Motion Prediction (MDMP) that integrates and synchronizes skeletal data and textual descriptions of actions to generate refined long-term motion predictions with quantifiable uncertainty. Existing methods for motion forecasting or motion generation rely solely on either prior motions or text prompts, facing limitations with precision or control, particularly over extended durations. The multi-modal nature of our approach enhances the contextual understanding of human motion, while our graph-based transformer framework effectively captures both spatial and temporal motion dynamics. As a result, our model consistently outperforms existing generative techniques in accurately predicting long-term motions. Additionally, by leveraging diffusion models' ability to capture different modes of prediction, we estimate uncertainty, significantly improving spatial awareness in human-robot interactions by incorporating zones of presence with varying confidence levels.
Motion in-betweening is a crucial tool for animators, enabling intricate control over pose-level details in each keyframe. Recent machine learning solutions for motion in-betweening rely on complex models, incorporating skeleton-aware architectures or requiring multiple modules and training steps. In this work, we introduce a simple yet effective Transformer-based framework, employing a single Transformer encoder to synthesize realistic motions in motion in-betweening tasks. We find that data modeling choices play a significant role in improving in-betweening performance. Among others, we show that increasing data volume can yield equivalent or improved motion transitions, that the choice of pose representation is vital for achieving high-quality results, and that incorporating velocity input features enhances animation performance. These findings challenge the assumption that model complexity is the primary determinant of animation quality and provide insights into a more data-centric approach to motion interpolation. Additional videos and supplementary material are available at \url{https://silk-paper.github.io}.
Negation is a fundamental linguistic concept used by humans to convey information that they do not desire. Despite this, minimal research has focused on negation within text-guided image editing. This lack of research means that vision-language models (VLMs) for image editing may struggle to understand negation, implying that they struggle to provide accurate results. One barrier to achieving human-level intelligence is the lack of a standard collection by which research into negation can be evaluated. This paper presents the first large-scale dataset, Negative Instruction (NeIn), for studying negation within instruction-based image editing. Our dataset comprises 366,957 quintuplets in total, i.e., source image, original caption, selected object, negative sentence, and target image, including 342,775 queries for training and 24,182 queries for benchmarking image editing methods. Specifically, we automatically generate NeIn based on a large, existing vision-language dataset, MS-COCO, via two steps: generation and filtering. During the generation phase, we leverage two VLMs, BLIP and InstructPix2Pix (fine-tuned on the MagicBrush dataset), to generate NeIn's samples and the negative clauses that express the content of the source image. In the subsequent filtering phase, we apply BLIP and LLaVA-NeXT to remove erroneous samples. Additionally, we introduce an evaluation protocol to assess the negation understanding of image editing models. Extensive experiments using our dataset across multiple VLMs for text-guided image editing demonstrate that even recent state-of-the-art VLMs struggle to understand negative queries.
Fine-tuning Stable Diffusion enables subject-driven image synthesis by adapting the model to generate images containing specific subjects. However, existing fine-tuning methods suffer from two key issues: underfitting, where the model fails to reliably capture subject identity, and overfitting, where it memorizes the subject image and reduces background diversity. To address these challenges, we propose two auxiliary consistency losses for diffusion fine-tuning. First, a prior consistency regularization loss ensures that the predicted diffusion noise for prior (non-subject) images remains consistent with that of the pretrained model, improving fidelity. Second, a subject consistency regularization loss enhances the fine-tuned model's robustness to latent codes modulated by multiplicative noise, helping to preserve subject identity while maintaining diversity. Our experimental results demonstrate that incorporating these losses into fine-tuning not only preserves subject identity but also enhances image diversity, outperforming DreamBooth in terms of CLIP scores, background variation, and overall visual quality.
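As a hedged illustration, the two auxiliary terms can be written as simple functions of the predicted diffusion noise; the `model(z, t, cond)` signature, the perturbation scale, and the plain MSE weighting are assumptions for exposition, not the paper's exact formulation.

```python
# Illustrative versions of the two auxiliary consistency losses. Only the high-level idea
# (match the frozen model on prior images; be robust to multiplicative latent noise) follows
# the abstract; the signatures and scales below are assumptions.
import torch
import torch.nn.functional as F

def prior_consistency_loss(eps_finetuned_prior, eps_pretrained_prior):
    # Keep noise predictions on prior (non-subject) images close to the frozen pretrained model.
    return F.mse_loss(eps_finetuned_prior, eps_pretrained_prior.detach())

def subject_consistency_loss(model, z_subject, t, cond, sigma=0.05):
    # Encourage similar noise predictions for the clean latent and a multiplicatively perturbed copy.
    z_perturbed = z_subject * (1.0 + sigma * torch.randn_like(z_subject))
    return F.mse_loss(model(z_subject, t, cond), model(z_perturbed, t, cond))
```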
Human motion generation is essential for fields such as animation, robotics, and virtual reality, requiring models that effectively capture motion dynamics from text descriptions. Existing approaches often rely on Contrastive Language-Image Pretraining (CLIP)-based text encoders, but their training on text-image pairs constrains their ability to understand temporal and kinematic structures inherent in motion and motion generation. This work introduces MoCLIP, a fine-tuned CLIP model with an additional motion encoding head, trained on motion sequences using contrastive learning and tethering loss. By explicitly incorporating motion-aware representations, MoCLIP enhances motion fidelity while remaining compatible with existing CLIP-based pipelines and seamlessly integrating into various CLIP-based methods. Experiments demonstrate that MoCLIP improves Top-1, Top-2, and Top-3 accuracy while maintaining competitive FID, leading to improved text-to-motion alignment results. These results highlight MoCLIP’s versatility and effectiveness, establishing it as a robust framework for enhancing motion generation.
The demand for high-quality synthetic data for model training and augmentation has never been greater in medical imaging. However, current evaluations predominantly rely on computational metrics that fail to align with human expert recognition. This leads to synthetic images that may appear realistic numerically but lack clinical authenticity, posing significant challenges in ensuring the reliability and effectiveness of AI-driven medical tools. To address this gap, we introduce GazeVal, a practical framework that synergizes expert eye-tracking data with direct radiological evaluations to assess the quality of synthetic medical images. GazeVal leverages gaze patterns of radiologists as they provide a deeper understanding of how experts perceive and interact with synthetic data in different tasks (i.e., diagnostic or Turing tests). Experiments with sixteen radiologists revealed that 96.6% of the generated images (by the most recent state-of-the-art AI algorithm) were identified as fake, demonstrating the limitations of generative AI in producing clinically accurate images.
Dataset distillation has demonstrated remarkable effectiveness in high-compression scenarios for image datasets. While video datasets inherently contain greater redundancy, existing video dataset distillation methods primarily focus on compression in the pixel space, overlooking advances in the latent space that have been widely adopted in modern text-to-image and text-to-video models. In this work, we bridge this gap by introducing a novel video dataset distillation approach that operates in the latent space using a state-of-the-art variational encoder. Furthermore, we employ a diversity-aware data selection strategy to select both representative and diverse samples. Additionally, we introduce a simple, training-free method to further compress the distilled latent dataset. By combining these techniques, our approach achieves a new state-of-the-art performance in dataset distillation, outperforming prior methods on all datasets, e.g. on HMDB51 IPC 1, we achieve a 2.6\% performance increase; on MiniUCF IPC 5, we achieve a 7.8\% performance increase.
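One common way to instantiate a diversity-aware selection in latent space is greedy farthest-point sampling over the encoded clips, sketched below; this is an illustrative stand-in, not necessarily the exact criterion used in the paper.

```python
# Greedy farthest-point sampling as an illustrative diversity-aware selection heuristic in
# latent space; the paper's exact selection strategy may differ.
import numpy as np

def farthest_point_selection(latents, k, seed=0):
    """Pick k mutually distant latent vectors; returns indices into `latents`."""
    z = np.asarray(latents).reshape(len(latents), -1)
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(z)))]                      # random first pick
    dist = np.linalg.norm(z - z[chosen[0]], axis=1)           # distance to the chosen set
    for _ in range(k - 1):
        nxt = int(dist.argmax())                              # farthest point from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(z - z[nxt], axis=1))
    return chosen
```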
Generating realistic dyadic human motion from text descriptions presents significant challenges, particularly for extended interactions that exceed typical training sequence lengths. While recent transformer-based approaches have shown promising results for short-term dyadic motion synthesis, they struggle with longer sequences due to inherent limitations in positional encoding schemes. In this paper, we introduce Dyadic Mamba, a novel approach that leverages State-Space Models (SSMs) to generate high-quality dyadic human motion of arbitrary length. Our method employs a simple yet effective architecture that facilitates information flow between individual motion sequences through concatenation, eliminating the need for complex cross-attention mechanisms. We demonstrate that Dyadic Mamba achieves competitive performance on standard short-term benchmarks while significantly outperforming transformer-based approaches on longer sequences. Additionally, we propose a new benchmark for evaluating long-term motion synthesis quality, providing a standardized framework for future research. Our results demonstrate that SSM-based architectures offer a promising direction for addressing the challenging task of long-term dyadic human motion synthesis from text descriptions.
We propose a framework for goal-driven human motion generation, which can synthesize interaction-rich scenarios. Given the goal positions for key joints, our pipeline automatically generates natural full-body motion that approaches the target in cluttered environments. Our pipeline solves the complex constraints in a tractable formulation by disentangling the process of motion generation into two stages. The first stage computes the trajectory of the key joints like hands and feet to encourage the character to naturally approach the target position while avoiding possible physical violation. We demonstrate that diffusion-based guidance sampling can flexibly adapt to the local scene context while satisfying goal conditions. Then the subsequent second stage can easily generate plausible full-body motion that traverses the key joint trajectories. The proposed pipeline applies to various scenarios that have to concurrently account for 3D scene geometry and body joint configurations.
In text-to-motion generation, controllability, as well as generation quality and speed, has become increasingly critical. The controllability challenges include generating a motion of a length that matches the given textual description and editing the generated motions according to control signals, such as the start-end positions and the pelvis trajectory. In this paper, we propose MoLA, which provides fast, high-quality, variable-length motion generation and can also deal with multiple editing tasks in a single framework. Our approach revisits the motion representation used as inputs and outputs in the model, incorporating an activation variable to enable variable-length motion generation. Additionally, we integrate a variational autoencoder and a latent diffusion model, further enhanced through adversarial training, to achieve high-quality and fast generation. Moreover, we apply a training-free guided generation framework to achieve various editing tasks with motion control inputs. We quantitatively show the effectiveness of adversarial learning in text-to-motion generation, and demonstrate the applicability of our editing framework to multiple editing tasks in the motion domain.
Creating high-quality animatable 3D human avatars from a single image remains a significant challenge in computer vision due to the inherent difficulty of reconstructing complete 3D information from a single viewpoint. Current approaches face a clear limitation: 3D Gaussian Splatting (3DGS) methods produce high-quality results but require multiple views or video sequences, while video diffusion models can generate animations from single images but struggle with consistency and identity preservation. We present SVAD, a novel approach that addresses these limitations by leveraging complementary strengths of existing techniques. Our method generates synthetic training data through video diffusion, enhances it with identity preservation and image restoration modules, and utilizes this refined data to train 3DGS avatars. Comprehensive evaluations demonstrate that SVAD outperforms state-of-the-art (SOTA) single-image methods in maintaining identity consistency and fine details across novel poses and viewpoints, while enabling real-time rendering capabilities. Through our data augmentation pipeline, we overcome the dependency on dense monocular or multi-view training data typically required by traditional 3DGS approaches. Extensive quantitative and qualitative comparisons show our method achieves superior performance across multiple metrics against baseline models. By effectively combining the generative power of diffusion models with both the high-quality results and rendering efficiency of 3DGS, our work establishes a new approach for high-fidelity avatar generation from a single image input.
This study aims to investigate the challenge of insufficient three-dimensional context in synthetic datasets for scene text rendering. Although recent advances in diffusion models and related techniques have improved certain aspects of scene text generation, most existing approaches continue to rely on 2D data, sourcing authentic training examples from movie posters and book covers, which limits their ability to capture the complex interactions between spatial layout and visual effects in real-world scenes. In particular, traditional 2D datasets do not provide the necessary geometric cues for accurately embedding text into diverse backgrounds. To address this limitation, we propose a novel standard for constructing synthetic datasets that incorporates surface normals to enrich three-dimensional scene characteristics. By adding surface normals to conventional 2D data, our approach aims to enhance the representation of spatial relationships and provide a more robust foundation for future scene text rendering methods. Extensive experiments demonstrate that datasets built under this new standard offer improved geometric context, facilitating further advancements in text rendering under complex 3D-spatial conditions.
Generative methods for image and video editing use generative models as priors to perform edits despite incomplete information, such as changing the composition of 3D objects shown in a single image. Recent methods have shown promising composition editing results in the image setting, but in the video setting, editing methods have focused on editing an object's appearance and motion, or camera motion, and as a result, methods to edit object composition in videos are still missing. We propose \name as a method for editing 3D object compositions in videos of static scenes with camera motion. Our approach allows editing the 3D position of a 3D object across all frames of a video in a temporally consistent manner. This is achieved by lifting intermediate features of a generative model to a 3D reconstruction that is shared between all frames, editing the reconstruction, and projecting the features on the edited reconstruction back to each frame. To the best of our knowledge, this is the first generative approach to edit object compositions in videos. Our approach is simple and training-free, while outperforming state-of-the-art image editing baselines.
Composed image retrieval (CIR) enables users to search images using a reference image combined with textual modifications. Recent advances in vision-language models have improved CIR, but dataset limitations remain a barrier. Existing datasets often rely on simplistic, ambiguous, or insufficient manual annotations, hindering fine-grained retrieval. We introduce good4cir, a structured pipeline leveraging vision-language models to generate high-quality synthetic annotations. Our method involves: (1) extracting fine-grained object descriptions from query images, (2) generating comparable descriptions for target images, and (3) synthesizing textual instructions capturing meaningful transformations between images. This reduces hallucination, enhances modification diversity, and ensures object-level consistency. Applying our method improves existing datasets and enables creating new datasets across diverse domains. Results demonstrate improved retrieval accuracy for CIR models trained on our pipeline-generated datasets. We release our dataset construction framework to support further research in CIR and multi-modal retrieval.
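The three steps map onto a short pipeline skeleton; `vlm(prompt, image=None)` is a hypothetical callable standing in for whatever vision-language model is used, and the prompt wording is purely illustrative.

```python
# Hypothetical skeleton of the three-step annotation pipeline. `vlm(prompt, image=None)` is a
# placeholder for a vision-language model call; prompts are illustrative, not the released ones.
def annotate_pair(vlm, query_image, target_image):
    object_prompt = "List the salient objects in this image with fine-grained attributes."
    query_desc = vlm(object_prompt, image=query_image)        # step 1: describe the query image
    target_desc = vlm(object_prompt, image=target_image)      # step 2: describe the target image
    instruction = vlm(                                        # step 3: synthesize the modification text
        "Given these two object lists:\n"
        f"QUERY: {query_desc}\nTARGET: {target_desc}\n"
        "Write a concise editing instruction that transforms the query into the target."
    )
    return instruction
```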
Creating paired nighttime-to-daytime translation datasets remains a challenging and impractical task, as keeping every object static at different times is impossible. While 2D generative models can synthesize paired data for appearance and style translation, they often fail to maintain geometric consistency. In this paper, we propose a novel paired synthetic dataset creation pipeline that leverages 3D editing techniques to convert daytime 3D datasets into nighttime degraded scenes, generating geometrically consistent high-quality image pairs. Through this approach, we construct the first paired synthetic dataset for nighttime-to-daytime translation with geometric consistency. The synthesized data pairs can effectively enhance nighttime-to-daytime editing performance of various 2D generative models both qualitatively and quantitatively, demonstrating the advantages of using 3D editing for paired synthetic visual dataset generation. Code and Dataset are available at github.com/massyzs/3DEdting4Translation.git.
Anomaly generation is an effective way to mitigate data scarcity for the anomaly detection task. Most existing works excel at industrial anomaly generation with multiple specialists or large generative models, but rarely generalize to anomalies in other applications. In this paper, we present AnomalyHybrid, a domain-agnostic framework designed to generate authentic and diverse anomalies simply by combining a reference and a target image. AnomalyHybrid is a Generative Adversarial Network (GAN)-based framework with two decoders that integrate the appearance of the reference image into the depth and edge structures of the target image, respectively. With the help of the depth decoder, AnomalyHybrid achieves authentic generation, especially for anomalies with changing depth values, such as protrusions and dents. Moreover, it relaxes the fine-grained structural control of the edge decoder and brings more diversity. Without using annotations, AnomalyHybrid is easily trained with sets of color, depth, and edge maps of the same images under different augmentations. Extensive experiments carried out on the HeliconiusButterfly, MVTecAD and MVTec3D datasets demonstrate that AnomalyHybrid surpasses the GAN-based state-of-the-art on anomaly generation and its downstream anomaly classification, detection and segmentation tasks. On the MVTecAD dataset, AnomalyHybrid achieves 2.06/0.32 IS/LPIPS for anomaly generation, 52.6 Acc for anomaly classification with ResNet34, and 97.3/72.9 AP for image/pixel-level anomaly detection with a simple UNet.