Nvidia’s RTX shows how Neil Armstrong would appear if Apollo 11 landed today

Apollo Moon Landing with RTX Technology; courtesy of Nvidia

What was one small lunar step for man in 1969 has turned into a giant leap forward for computer graphics a half-century later. To commemorate the 50th anniversary of the Apollo 11 moon landing, Nvidia is showing off the capabilities of its GeForce RTX graphics in a new demo that illustrates how the grainy images from the moon that we’ve come to know would look if they had been captured today. Thanks to the power of real-time ray tracing, Nvidia has remastered those images, applying the physics of light to re-create the moon landing with cinematic realism in a newly released demo of its RTX technology.

“With RTX, each pixel on the screen is generated by tracing, in real time, the path of a beam of light backwards into the camera (your viewing point), picking up details from the objects it interacts with,” Nvidia said of its new moon landing demo in a blog post. Nvidia claims that this project took its researchers more than five years. The team researched the details of the lunar lander, identified the properties of dust particles on the moon, and measured the reflectivity of the materials of the spacesuits worn by the astronauts.
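The backward-tracing idea Nvidia describes can be sketched in a few lines of code. The following is a minimal, illustrative example (not Nvidia's implementation): for each pixel, a ray is traced from the camera into the scene, and if it hits an object (here, a single hypothetical sphere standing in for scene geometry), the surface's brightness is computed from how directly it faces the light. All scene values are assumptions chosen for the sketch.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return distance along the ray to the nearest sphere hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is normalized, so the quadratic's a == 1
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(normal, light_dir, albedo=0.8):
    """Lambertian (diffuse) shading: brightness depends on the angle
    between the surface normal and the direction to the light."""
    return albedo * max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

def render_pixel(px, py, width, height):
    """Trace one backward ray from the eye through pixel (px, py)."""
    # Camera sits at the origin looking down -z; map the pixel into
    # [-1, 1] screen space, then normalize the ray direction.
    x = 2.0 * (px + 0.5) / width - 1.0
    y = 1.0 - 2.0 * (py + 0.5) / height
    length = math.sqrt(x * x + y * y + 1.0)
    direction = (x / length, y / length, -1.0 / length)

    # Hypothetical scene: one sphere, one directional light (normalized).
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    light_dir = (0.0, 0.7071, 0.7071)

    t = intersect_sphere((0.0, 0.0, 0.0), direction, sphere_center, sphere_radius)
    if t is None:
        return 0.0  # ray escaped into the background
    hit = tuple(t * d for d in direction)
    normal = tuple((h - c) / sphere_radius for h, c in zip(hit, sphere_center))
    return shade(normal, light_dir)
```

A real renderer like Nvidia's adds reflection, refraction, shadows, and measured material properties on top of this loop, but the core step is the same: one ray per pixel, traced backward from the viewer.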

That knowledge was then combined with the real-time ray tracing capabilities of Nvidia’s RTX graphics cards to re-create the original 1969 scene, showing the sun’s rays emerging from behind the lunar lander, reflecting off the surface of the moon, and being partially absorbed by the spacesuits worn by astronauts Neil Armstrong and Buzz Aldrin.

Real-time ray tracing remains in its infancy today, with limited support in game titles despite a big, splashy debut late last year when Nvidia launched its GeForce RTX cards. Even so, the technology is often credited with bringing cinematic realism to scenes thanks to the way it can simulate how light is absorbed, reflected, or refracted by surfaces in real time. By using ray tracing, Nvidia was able to simulate how the sun’s rays react to every surface, the company said in a YouTube video highlighting the project, noting that the dynamic light and shadows give us new perspectives on the moon landing. Nvidia’s latest effort to remaster the images from the 1969 Apollo 11 moon landing is not unlike its previous efforts to bring more realistic detail to older games, like Quake 2, using ray tracing.

Apollo Moon Landing when seen with real-time ray tracing on RTX graphics; source: Nvidia

The graphics silicon-maker even showed Aldrin the re-created video. The second human to walk on the surface of the moon appeared to be impressed with the technology. “I’ve got a photo that you can work on next — it’s called the first selfie in space,” Aldrin joked.

Chuong Nguyen