
Midrange Nvidia GTX 1660 Ti graphics card may be 20 percent faster than GTX 1060

RTX 2080
Riley Young/Digital Trends

Rumors over the past month have suggested that Nvidia could soon debut a new GTX 1660 Ti graphics card to entice gamers with midrange performance at a lower price point. In the latest development, alleged leaked benchmarks show that the new offering could be as much as 20 percent faster than the older GTX 1060.

AOTS – GTX 1660 Ti High (1080p) Score 7400 ( Laptop )
GTX 1060 High (1080p) Score 6200 ( Laptop ) pic.twitter.com/wdQ1lgFJ1C

— APISAK (@TUM_APISAK) January 21, 2019

The leak comes from TUM_APISAK on Twitter, who discovered Ashes of the Singularity benchmark results in an online database. In the scoring, the yet-to-be-announced GTX 1660 Ti netted 7,400 points at 1080p on high settings, topping the GTX 1060's 6,200. That roughly 1,200-point (or 20 percent) difference hints that the GTX 1660 Ti could be notably more powerful than the GTX 1060. It also suggests that the graphics card may already be in testing at Nvidia and could be coming soon.
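As a quick sanity check, the percentage gain implied by those leaked scores works out like this (numbers taken from the tweet above):

```python
# Leaked AOTS (1080p, High) laptop scores from the tweet above
gtx_1660_ti_score = 7400
gtx_1060_score = 6200

# Relative improvement of the 1660 Ti over the 1060
gain = (gtx_1660_ti_score - gtx_1060_score) / gtx_1060_score
print(f"{gain:.1%}")  # prints "19.4%", i.e. roughly 20 percent
```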

Pricing is not yet known for the GTX 1660 Ti, but if midrange pricing holds true, Nvidia could win over gamers scared off by the $600 cost of the RTX 2070 and the newly announced $350 RTX 2060. A release date is also unknown, but Videocardz reports that the GTX 1660 Ti could arrive in February alongside a non-Ti model with slower GDDR5X memory. Still, it is hard to judge the card's value without knowing its price.


For now, it could be an enticing upgrade, as the older GTX 1060 remains very popular, accounting for 15.39 percent of the overall share on Steam in December 2018. Very early rumors have indicated that the GTX 1660 Ti could ship with a less powerful TU116 Turing GPU that lacks support for ray tracing, setting it apart from the RTX 2060. It was also rumored to come with a total of 1,536 CUDA cores, 20 percent more than what is found on the GTX 1060. As for video memory, that was pegged at 6GB of GDDR6, the same amount as the RTX 2060.
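The rumored core count lines up with that 20 percent figure. A quick check (assuming the GTX 1060 6GB's known 1,280 CUDA cores, a spec not stated in the article):

```python
# Known spec (assumption, not from the article): GTX 1060 6GB has 1,280 CUDA cores
gtx_1060_cores = 1280
# Rumored spec from the article
gtx_1660_ti_cores = 1536

ratio = gtx_1660_ti_cores / gtx_1060_cores
print(ratio)  # prints 1.2 -> exactly 20 percent more cores
```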

That is powerful enough for most midrange gaming and would nicely balance out Nvidia's lineup, falling in between the older GTX 1060 and the ray tracing-enabled RTX 2060. And with AMD just taking the wraps off the Radeon VII, this newer and cheaper card would only spice up the graphics card competition, which is always better for the wallets of gamers on a budget.

Arif Bacchus