
Law students in the U.K. will dive headfirst into a virtual reality murder case

The U.K.’s University of Westminster is trying to appeal to the Pokémon Go generation of criminal law students. The school created an Oculus Rift virtual reality scenario designed to let students get to grips with case studies using current technology.

“Instead of students only learning from books, the idea was to give students the chance to understand criminology by actually interacting with a crime scene environment,” Markos Mentzelopoulos, a senior lecturer in the university’s computer science department, told Digital Trends. “It’s a way for them to explore case studies in different ways, taking advantage of VR’s immersive properties.”


Planned to be trialed by law students in November, REal and Virtual Reality Law (REVRLaw) lets students participate in a scenario, rendered in loving detail using the Unity game engine. Within the scenario, they can analyze pieces of evidence and even interact with participants. The case study involves two brothers who run into financial problems while shooting a film — leading to the demise of one of the siblings.

“That’s where the action scenario starts,” Mentzelopoulos said. “You are inside the house and your goal is to try and come to a conclusion about whether the surviving brother is guilty of murder, or whether it was a case of self-defense. By examining the evidence, students can become acquainted with the case study.”
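Digital Trends hasn’t seen REVRLaw’s source, and the project itself is built in Unity, so the following is only a minimal, hypothetical sketch (in Python, for readability) of the kind of state a scenario like the one Mentzelopoulos describes might track: a set of examinable evidence items and a record of what the student has inspected, with the guilty-versus-self-defense verdict left to the student rather than the system. Every class, field, and item name here is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """One examinable item in the virtual crime scene (hypothetical)."""
    name: str
    description: str

@dataclass
class CrimeScene:
    """Tracks which clues a student has examined so far (hypothetical)."""
    evidence: list[Evidence]
    examined: set[str] = field(default_factory=set)

    def examine(self, name: str) -> Evidence:
        # Look the item up, mark it examined, and hand it back for display.
        item = next(e for e in self.evidence if e.name == name)
        self.examined.add(name)
        return item

    def all_examined(self) -> bool:
        return len(self.examined) == len(self.evidence)

# Hypothetical usage: a student works through the scene clue by clue.
scene = CrimeScene(evidence=[
    Evidence("camera", "The brothers' film camera, its lens cracked"),
    Evidence("ledger", "Accounts showing the production's mounting debts"),
    Evidence("knife", "Found near the victim, with prints from both men"),
])
clue = scene.examine("knife")
print(clue.description)
print("All evidence reviewed:", scene.all_examined())
```

The design point such a model illustrates is the one Mentzelopoulos makes: the software surfaces the facts, and reaching a conclusion about murder versus self-defense remains the student’s job.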

While the project is still in its testing phase, the research paper behind it has already received a Best Paper award from the Immersive Learning Research Network (iLRN).

Of course, a tool like this is not going to be valuable at all stages of law school. While understanding the specifics of cases is important, applying the legal process is far less about TV detective-style investigations than it is about knowing how to formulate arguments around facts. Still, as a way of engaging new students and providing a jumping-off point for discussion, REVRLaw sounds promising.

“This isn’t a way of replacing existing teaching materials; it’s about supplementing them,” Mentzelopoulos said. “Books and interactive lectures are still very important. But younger students are also more willing to try out new technologies. This is a chance for them to do that.”

Luke Dormehl