FOD#93: When AI meant Ambient Intelligence
and other stories from the past future, when assistants lived in desks and the future ran on buttons – a look back at the digital dreams that shaped today
This Week in Turing Post:
- Wednesday, AI 101, Technique: More Attention: 3 types to discover – Slim attention, Kolmogorov attention and Xattention
- Friday, Agentic Workflow: Human-AI communication and Human-in-the-Loop (HITL) integration
Imagined Futures, Remembered
It’s always a curious kind of fun to look back at the futures we once imagined – to sift through the wild sketches, grand claims, and ambitious prototypes, and see what stuck. What did we think knowledge machines would look like? How did we picture our schools, our offices, our cities, when AI still lived mostly in diagrams and sci-fi dreams?
This picture inspired me for this Monday’s edition:
So let’s trace a century of imagined digital futures. It turns out the past was remarkably good at anticipating the world we’re now building.
Start with Vannevar Bush’s Memex (introduced in the article As We May Think), that 1945 vision of an electromechanical desk that could pull up documents on microfilm and link ideas at the speed of thought. It was bulky, mechanical, and analog – yet its spirit lives on in hypertext, personal knowledge bases, and even in the way we now use AI to summarize and connect our information flows. Bush didn’t invent the internet, but he helped imagine it.
In the 1950s, the future arrived with buttons. From 1958 to 1963, Arthur Radebaugh's Sunday comic Closer Than We Think predicted the future, to the sheer enjoyment of its readers. In one of the first issues, he drew classrooms with console desks and teacher broadcasts, with students responding via push-buttons and cameras. The “Push-Button School of Tomorrow” may have looked kitschy, but its premise – personalized, machine-aided learning – is at the heart of today’s edtech and intelligent tutoring systems. His direction was eerily on track!
Then came the 1960s, when the World’s Fair gave the public a taste of interactive computing. Auto-Tutors. Fingertip shopping. Consoles for remote learning and video calls. HAL 9000 made his debut in 1968 in “2001: A Space Odyssey”, personifying disembodied AI – a concept that still shapes how we think about assistant technologies. Behind the fiction were serious thinkers like J.C.R. Licklider, envisioning “man-computer symbiosis” (the paper) before most homes even had a TV remote.
It looks like it was created by Midjourney, but it’s a real photo of an Auto-Tutor
By the 1970s, Xerox PARC was designing the Dynabook — a proto-tablet for kids to learn, create, and explore. It never shipped, but it lit the path for the iPad, the laptop, and the digital classroom. Here is the paper, “A Personal Computer for Children of All Ages”, in which Alan Kay envisioned how it would work.
In the 1980s, Apple released its Knowledge Navigator video — a folding tablet with a bow-tied, conversational AI that helped a professor prep for a lecture. It featured voice recognition, touch input, and seamless video calling. It looked fantastical then. Now it looks charmingly dated – and a little too square.
In the early ’90s, AT&T’s iconic “You Will” campaign wrapped corporate futurism in sleek, cinematic charm. Narrated by Tom Selleck, the ads posed a simple question: “Have you ever…?” – followed by eerily prescient glimpses of life powered by invisible intelligence. Borrow a book from 1,000 miles away? Navigate cross-country without asking for directions? Pay a toll without stopping? Send a fax from the beach? You will. No robots, no androids – just everyday people using disembodied, networked intelligence. The campaign accurately forecast e-books, GPS, telemedicine, video calls, even smartwatches – long before any of it existed.
And by the early 2000s, Ambient Intelligence entered the scene. This was the era of the smart home, the digital city, and the intelligent billboard. MIT’s Project Oxygen described AI as freely available and always-on – like oxygen itself. The big shift was subtle: intelligence moved from the foreground (desks, gadgets, screens) into the background. It became environmental. It became invisible.
What’s striking, in all these retro visions, is how many of the core ideas have persisted. The interfaces changed. The form factors shrank. But the goals – augmenting memory, easing knowledge work, making environments responsive – remain steady.
Some ideas, of course, still haven’t landed. The fully automated teacher-less classroom? Still pedagogically thorny. The intelligent city that responds to our every need? A work in progress, often with more bureaucracy than brilliance. And that charming digital butler who anticipates your needs without being asked? Well, it’s complicated. Right, Apple?
But these old visions matter. Not because they got every detail right, but because they dared to imagine what digital assistance could mean at a human level. They gave designers, engineers, and researchers something to shoot for – a vocabulary of the possible.
We smile now at push-button classrooms and bow-tied agents – with a mix of affection, admiration, and a sense that the future isn’t built from scratch. It’s composed, recomposed, and refined from the futures we once imagined. History, you are an endless source of inspiration.
I personally would like to see more ambient intelligence – the kind of AI we all might really need.
Welcome to Monday. Let’s build the next one. (And wonder what people 40 years from now will smile at when they look back at us.)
Curated Collections
We are reading/watching:
- Palatable Conceptions of Disembodied Being: Terra Incognita in the Space of Possible Minds by Murray Shanahan
- Foundation Model for Personalized Recommendation – Netflix blog
- What Can We Learn about Engineering and Innovation from Half a Century of the Game of Life Cellular Automaton? by Stephen Wolfram
- An Interview with OpenAI CEO Sam Altman About Building a Consumer Tech Company by Stratechery
- Managing Frontier Model Training Organizations (or Teams) by Nathan Lambert
News from The Usual Suspects ©
OpenAI shares its voice(s)
- OpenAI has launched next-gen audio models for speech-to-text and text-to-speech, now available in its API. The new models set a benchmark for transcription accuracy—especially in noisy, accented, or fast-speech scenarios. On the flip side, developers can now instruct synthetic voices to sound sympathetic, professional, or bedtime-story calm. Voice agents just got a lot more... characterful. Expect call centers and narrators to get eerily better.
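To make the new style control concrete, here is a minimal sketch of steering a synthetic voice. It assumes OpenAI’s Python SDK and the announced `gpt-4o-mini-tts` model with its `instructions` parameter; treat the exact model and voice names as illustrative.

```python
# Sketch: asking a text-to-speech model for a specific speaking style.
# Assumes OpenAI's Python SDK; the model name and "instructions"
# parameter follow OpenAI's audio-models announcement, but the
# specifics here are illustrative, not authoritative.

def build_tts_request(text: str, style: str) -> dict:
    """Assemble keyword arguments for client.audio.speech.create()."""
    return {
        "model": "gpt-4o-mini-tts",   # new text-to-speech model
        "voice": "coral",             # one of the built-in voices
        "input": text,
        # The new part: free-text direction for how the voice should sound.
        "instructions": f"Speak in a {style} tone.",
    }

request = build_tts_request("Once upon a time...", "bedtime-story calm")

# With an API key configured, you would pass these straight to the SDK:
# from openai import OpenAI
# audio = OpenAI().audio.speech.create(**request)
```

The interesting design choice is that style lives in plain language rather than in enumerated voice presets, so “sympathetic”, “professional”, or “bedtime-story calm” are all just strings.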
Claude learns to pause
- Anthropic has introduced a deceptively simple feature for its Claude models: the “think” tool. Designed for complex reasoning in multi-step tasks, the tool gives Claude structured moments to stop and reflect mid-process—especially when juggling tools, policies, or high-stakes decisions. In benchmarks like τ-Bench, the results speak for themselves: up to 54% performance gains in tricky domains like airline support. A quiet but potent step toward more reliable AI agents.
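The trick is that “think” is a tool that does nothing: calling it gives the model a sanctioned place to write down reasoning without fetching new information. Here is a sketch of such a tool definition and a stub handler; the schema follows Anthropic’s published description of the think tool, while the handler is our own illustrative stand-in.

```python
# Sketch of the "think" tool idea: a no-op tool whose only job is to
# give the model a structured pause for reasoning mid-task.
# The schema mirrors Anthropic's description; the handler is a stub.

THINK_TOOL = {
    "name": "think",
    "description": (
        "Use the tool to think about something. It will not obtain new "
        "information or change anything; it just records the thought."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "thought": {
                "type": "string",
                "description": "A thought to think about.",
            }
        },
        "required": ["thought"],
    },
}

def handle_think(tool_input: dict, log: list) -> str:
    """Record the model's thought; return nothing new to the model."""
    log.append(tool_input["thought"])
    return ""  # empty result on purpose – the pause itself is the point

thoughts: list = []
result = handle_think(
    {"thought": "Check the airline's change-fee policy first."}, thoughts
)
```

In a real agent loop you would pass `THINK_TOOL` in the `tools` list of a Claude API call and route any `think` tool-use blocks to a handler like this one.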
xAI goes deeper
- Elon’s xAI is upgrading Grok with “DeeperSearch,” a more refined and patient sibling of DeepSearch, favoring credibility over speed. Also debuting: text-based image editing – think Photoshop with a prompt. Next step? EvenMoreDeeperSearch (I guess).
Lee Kai-Fu pivots to DeepSeek
- China’s AI icon Lee Kai-fu is shifting 01.AI from model builder to enterprise problem-solver, betting big on DeepSeek’s open-source momentum. With Chinese firms clamoring for GenAI post-January, Lee calls the strategy shift “as clear as the writing on the wall.” 01.AI now rides the DeepSeek wave, offering sector-specific solutions – starting with finance, law, and gaming.
Hugging Face gets analytical
- Hugging Face just gave its analytics dashboard a nice upgrade. With real-time metrics, custom time ranges, and detailed replica lifecycle views, developers can now keep a much closer eye on their Inference Endpoints. It’s a quality-of-life boost for AI teams juggling latency, errors, and scale.
NVIDIA’s GPU Diplomacy
- NVIDIA and its venture arm NVentures are consistently – and recently, more aggressively – wiring the future of AI, one compute-hungry startup at a time. In the past month, it backed Generalist AI, a stealth robotics company from ex-DeepMind researcher Pete Florence, and acquired synthetic-data startup Gretel for $320M. Generalist is building universal robots. Gretel makes synthetic data – the essential fuel when real-world data falls short. Together, they show how NVIDIA is stacking the AI pipeline, from hardware to training data. Call it vertical integration, AI edition.

Figure for humanoids. Perplexity for search with citations. Moon Surgical for precision robotics. And many, many others. The companies in NVIDIA’s portfolio don’t just use GPUs – they stretch them, stress them, and show why H100s, DGXs, and Jetsons are built for the new AI frontier. And they don’t stop at hardware – they’re plugged into the full NVIDIA infrastructure: CUDA, Omniverse, TensorRT, NeMo, Isaac, and more.

While traditional VCs cycle through trends, NVIDIA and NVentures focus on long-haul bets across genAI, robotics, and biotech – domains where compute is the engine. These startups are power users shaping the next wave of demand. Led by Sid Siddeek, the team has invested in 24+ AI startups in the past year alone – from Hugging Face and Mistral to AI21 Labs and Carbon Robotics. Every move expands the reach of NVIDIA’s ecosystem, steering AI’s direction through compute, data, and capital. This is GPU diplomacy – where chips drive strategy and funding builds influence. If AI is the future, NVentures is wiring it to run green and black.

By the way, I played with ChatGPT Deep Research to explore NVIDIA’s investment strategy in more depth. I didn’t have time to verify any of it, but still – an interesting read. (And if you haven’t experienced Deep Research yet, it’s worth checking out.)
As for GTC 2025: NVIDIA is responding aggressively to ASIC threats with new hardware innovations (Blackwell GPUs, the Vera Rubin architecture) and advanced software (Dynamo), aiming to maintain its lead by emphasizing flexibility, energy efficiency, and the growing computational demands of reasoning-based AI models. For a deep analysis, check out SemiAnalysis.
Models to pay attention to:
- Microsoft’s KBLaM integrates structured knowledge into LLMs with rectangular attention for low-latency, hallucination-resistant answers →read more
- Fin-R1 trains a finance-specific LLM using CoT and RL to outperform larger models on reasoning benchmarks →read more
- NVIDIA’s Cosmos-Reason1 builds a physical reasoning LLM to model space, time, and causality across embodied agents →read more
- NVIDIA’s Cosmos-Transfer1 generates controllable simulated worlds using multimodal diffusion and spatiotemporal inputs →read more
- M3 fuses 3D Gaussian splatting with foundation models for multimodal memory and rendering →read more
- Roblox’s Cube tokenizes 3D geometry for text-to-shape and scene generation in interactive environments →read more
- Tencent’s T1 launches a fast, low-hallucination reasoning model to compete in China’s LLM race →read more
There were quite a few TOP research papers this week; we mark them with 🌟 in each section.
LLM Architectures and Efficiency Enhancements
- RWKV-7 "Goose" by EleutherAI and Tsinghua University improves RNN-based LLMs with dynamic state evolution and constant-memory training →read the paper
- ϕ-Decoding by Shanghai AI Lab and collaborators simulates future reasoning to balance exploration and exploitation at inference time →read the paper
- Frac-connections by ByteDance Seed reduces redundancy by splitting hidden states instead of duplicating them in MoE models →read the paper
- 🌟 Xattention by MIT, Tsinghua University, SJTU, and NVIDIA speeds up inference with block-sparse attention and antidiagonal scoring →read the paper
- 🌟 Inside-Out by Technion and Google Research reveals that internal LLM knowledge often surpasses what is actually generated →read the paper
Reasoning, RL, and Fine-Tuning Techniques
- DAPO by ByteDance & Tsinghua University fine-tunes LLMs with reinforcement learning on math tasks using open-source tools →read the paper
- Reinforcement learning for reasoning in small LLMs by VNU University of Science trains small LLMs with GRPO to outperform larger models using minimal compute →read the paper
- MetaLadder by Shanghai AI Lab transfers analogical problem-solving patterns to improve math reasoning →read the paper
- Measuring AI ability to complete long tasks by METR tracks progress by measuring the time AI takes to reach human-like task success →read the paper
Multi-Agent and Agentic Systems
- Why do multi-agent LLM systems fail? by UC Berkeley & Intesa Sanpaolo identifies structural flaws across popular multi-agent LLM frameworks →read the paper
- Agents play thousands of 3D video games by Tencent uses LLMs to create agents for 3D games via behavior tree generation and feedback →read the paper
- GKG-LLM by Xi’an Jiaotong University & NUS builds a unified system for knowledge graph construction across various domains →read the paper
Privacy, Synthetic Data, and Security
- Generating synthetic data with differentially private LLM inference by Google creates DP synthetic datasets using off-the-shelf LLMs at inference time →read the paper
Diffusion and Generation Techniques
- Scale-wise distillation of diffusion models by Yandex Research distills diffusion models by scaling image resolution progressively during generation →read the paper
Surveys and Meta-Analyses
- Multimodal chain-of-thought reasoning by NUS maps out methods and challenges for reasoning across visual and textual modalities →read the paper
- Survey on evaluation of LLM-based agents by IBM, Yale & Hebrew University examines benchmarks and methods to assess agent capabilities and frameworks →read the paper
- Stop overthinking: A survey on efficient reasoning by Rice University analyzes methods to reduce LLM overthinking in reasoning tasks →read the paper
- Aligning multimodal LLM with human preference by Chinese Academy of Sciences reviews alignment techniques across modalities and identifies key challenges →read the paper
That’s all for today. Thank you for reading! Please send this newsletter to your colleagues if it can help them enhance their understanding of AI and stay ahead of the curve.