Vol. MMXXVI · Issue 079 · Daily Edition

Artificial
Indifference

Published March 20, 2026
APOD: Spring Equinox at Teide Observatory
arXiv: 8 papers filed
Wire: 500 edits

Spring Equinox at Teide Observatory

The defining astronomical moment of today's equinox arrives at 14:46 UTC (March 20). That's when the Sun crosses the celestial equator moving north in its yearly journey through planet Earth's sky, marking the beginning of spring in the northern hemisphere and fall in the southern hemisphere. Then, day and night are nearly equal around the globe. In fact, both day and nighttime exposures from a spring equinox at the Observatorio del Teide in Tenerife, Canary Islands, Spain, are used in this composited skyscape. Over 1,000 images were taken with a fisheye lens and merged in th...

2026-03-20 · © Juan Carlos Casado · NASA APOD ↗

Research Filed Today

Preprints submitted to arXiv on March 20, 2026. Science before peer review.

01
While Multimodal Large Language Models demonstrate impressive semantic capabilities, they often suffer from spatial blindness, struggling with fine-grained geometric reasoning and physical dynamics. Existing solutions typically rely on explicit 3D modalities or complex geometric ...
Xianjin Wu, Dingkang Liang, Tianrui Feng et al. (+5)
02
The ability to render scenes at adjustable fidelity from a single model, known as level of detail (LoD), is crucial for practical deployment of 3D Gaussian Splatting (3DGS). Existing discrete LoD methods expose only a limited set of operating points, while concurrent continuous L...
Zhilin Guo, Boqiao Zhang, Hakan Aktas et al. (+10)
03
Visual generation with discrete tokens has gained significant attention as it enables a unified token prediction paradigm shared with language models, promising seamless multimodal architectures. However, current discrete generation methods remain limited to low-dimensional laten...
Yuqing Wang, Chuofan Ma, Zhijie Lin et al. (+7)
04
Reconstructing articulated 3D objects from a single image requires jointly inferring object geometry, part structure, and motion parameters from limited visual evidence. A key difficulty lies in the entanglement between motion cues and object structure, which makes direct articul...
Haitian Li, Haozhe Xie, Junxiang Xu et al. (+3)
05
There are two major categories of embodied navigation: Vision-Language Navigation (VLN), where agents navigate by following natural language instructions; and Object-Goal Navigation (OGN), where agents navigate to a specified target object. However, existing work primarily evalua...
Huaide Jiang, Yash Chaudhary, Yuping Wang et al. (+8)
06
Prior motion generation largely follows two paradigms: continuous diffusion models that excel at kinematic control, and discrete token-based generators that are effective for semantic conditioning. To combine their strengths, we propose a three-stage framework comprising conditio...
Chenyang Gu, Mingyuan Zhang, Haozhe Xie et al. (+3)
07
Current instruction-guided video editing models struggle to simultaneously balance precise semantic modifications with faithful motion preservation. While existing approaches rely on injecting explicit external priors (e.g., VLM features or structural conditions) to mitigate thes...
Xinyao Zhang, Wenkai Dong, Yuxin Song et al. (+10)
08
We introduce Multi-Object Generative Perception (MultiGP), a generative inverse rendering method for stochastic sampling of all radiometric constituents -- reflectance, texture, and illumination -- underlying object appearance from a single image. Our key idea to solve this inher...
Nobuo Yoshii, Xinran Nicole Han, Ryo Kawahara et al. (+2)

Source: arXiv.org · Cornell University

Wikipedia in Motion

500 edits recorded in the most recent sample. Most-edited topics:

Cyprus · List · Mosque · United Nations · Assembly · World · Airport