Google makes cryptography more secure with open-sourced Project Wycheproof

Google security engineers Daniel Bleichenbacher and Thai Duong announced Project Wycheproof on Monday, a set of security tests that look for known weaknesses and check for expected behaviors in cryptographic software. It’s named after Mount Wycheproof, the smallest mountain in the world, because “the smaller the mountain the easier it is to climb it.” Project Wycheproof is available as open source on GitHub, and it can be used to test implementations of popular cryptographic algorithms such as AES-EAX and AES-GCM, along with the software libraries that provide them.

Overall, Project Wycheproof includes more than 80 test cases that have already uncovered more than 40 security bugs. However, a portion of these bugs and tests is not included on GitHub for the moment, as many vendors are still addressing the issues Google reported. The project also includes tools to check Java Cryptography Architecture (JCA) providers, such as the default providers in OpenJDK and Bouncy Castle.
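
For readers unfamiliar with the term, a JCA provider is a pluggable implementation of cryptographic algorithms that applications reach through one standard Java API. The short sketch below is purely illustrative, not code from Project Wycheproof: it registers Bouncy Castle alongside the OpenJDK defaults (assuming the Bouncy Castle jar is on the classpath) and prints whichever providers are installed.

    // Illustrative only -- not Project Wycheproof code.
    // Assumes the Bouncy Castle library is on the classpath.
    import java.security.Provider;
    import java.security.Security;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;

    public class ListProviders {
        public static void main(String[] args) {
            // Register a third-party provider alongside the OpenJDK defaults.
            Security.addProvider(new BouncyCastleProvider());

            // Every registered provider is reachable through the same JCA calls,
            // which is what lets one test suite exercise all of them.
            for (Provider p : Security.getProviders()) {
                System.out.println(p.getName() + ": " + p.getInfo());
            }
        }
    }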

The project stems from the need to address the mistakes that appear “too often” in open source cryptographic software. Cryptography is what secures data as it travels across local networks, the internet, and wireless connections, and what protects it while at rest. As Monday’s announcement points out, a single mistake in cryptography can have “catastrophic consequences,” so there needs to be a way to find and prevent such issues. Providing a batch of unit tests should help address the problem.

“Our first set of tests are written in Java, because Java has a common cryptographic interface,” Monday’s blog states. “This allowed us to test multiple providers with a single test suite. While this interface is somewhat low level, and should not be used directly, we still apply a ‘defense in depth’ argument and expect that the implementations are as robust as possible.”
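
To illustrate what testing “multiple providers with a single test suite” can look like in practice, here is a minimal sketch in the same spirit; it is not Wycheproof’s actual test code, and the class name is made up. It uses the standard JCA interface to encrypt a message with AES-GCM, flips one bit of the ciphertext, and confirms that decryption rejects the tampering. Because the code only talks to the JCA, it runs unchanged against the default OpenJDK provider or a third party such as Bouncy Castle.

    // A minimal sketch, not Project Wycheproof's actual tests.
    import java.nio.charset.StandardCharsets;
    import java.security.GeneralSecurityException;
    import java.security.SecureRandom;
    import javax.crypto.AEADBadTagException;
    import javax.crypto.Cipher;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    public class GcmTamperCheck {
        public static void main(String[] args) throws GeneralSecurityException {
            byte[] key = new byte[16];   // AES-128 key
            byte[] iv = new byte[12];    // 96-bit nonce, standard for GCM
            SecureRandom random = new SecureRandom();
            random.nextBytes(key);
            random.nextBytes(iv);
            SecretKeySpec keySpec = new SecretKeySpec(key, "AES");
            GCMParameterSpec params = new GCMParameterSpec(128, iv);

            // The same calls work for any registered provider (e.g. SunJCE or Bouncy Castle).
            Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
            enc.init(Cipher.ENCRYPT_MODE, keySpec, params);
            byte[] ciphertext = enc.doFinal("test message".getBytes(StandardCharsets.UTF_8));

            ciphertext[0] ^= 1;  // flip one bit to simulate tampering

            Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
            dec.init(Cipher.DECRYPT_MODE, keySpec, params);
            try {
                dec.doFinal(ciphertext);
                System.out.println("FAIL: tampered ciphertext was accepted");
            } catch (AEADBadTagException expected) {
                System.out.println("OK: tampering was detected");
            }
        }
    }

A library that accepted the altered ciphertext here would be failing exactly the kind of expected-behavior check the announcement describes.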

Cryptographic software relies on a “library,” a collection of prewritten code that implements the underlying algorithms so that applications don’t have to build cryptography from scratch. The tests let cryptographic software vendors check these libraries for problems, but a passing result doesn’t mean a library is 100-percent secure. It simply means the library isn’t vulnerable to the attacks Project Wycheproof is targeting.

Project Wycheproof checks the most popular cryptographic algorithms and the software libraries that implement them. The library tests look for vulnerabilities such as invalid curve attacks, weaknesses in digital signature schemes, all of Bleichenbacher’s attacks, and many more.

Ultimately, the goal of Project Wycheproof is to let developers and vendors easily check the security of their libraries without having to become cryptographers themselves, or to pore over “hundreds of academic papers” to verify library integrity. Still, Google acknowledges that Project Wycheproof isn’t complete and remains a work in progress. Those who want to contribute can read Google’s requirements on the project’s GitHub page.

To use the new open-source tests, users will first need to install Google’s Bazel tool for building software. The Java Cryptography Extension Unlimited Strength Jurisdiction Policy Files will need to be installed as well. The GitHub listing provides full instructions to get started.
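
As a quick way to confirm that the unlimited-strength policy files took effect, a snippet along these lines (purely illustrative; the class name is made up) asks the JCA for the maximum AES key length it will allow. An unrestricted installation reports Integer.MAX_VALUE; a capped one typically reports 128.

    // Illustrative check, not part of the official setup instructions.
    import java.security.NoSuchAlgorithmException;
    import javax.crypto.Cipher;

    public class PolicyCheck {
        public static void main(String[] args) throws NoSuchAlgorithmException {
            int maxAes = Cipher.getMaxAllowedKeyLength("AES");
            if (maxAes == Integer.MAX_VALUE) {
                System.out.println("Unlimited strength policy is active");
            } else {
                System.out.println("AES key length is capped at " + maxAes + " bits");
            }
        }
    }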
