
Why the world needs quantum-resistant encryption, and how we do it

Quantum computing is an ever-hotter topic in the IT sector today, with industry players like Google and IBM focusing heavily on it. This attention is well justified: attack vectors previously considered purely theoretical are now surfacing and being proven in practice day after day, so it is no wonder that the U.S. is also doubling its budget on the topic.

Attack vectors

The biggest attack vector targets those who fail to react to the threat in time, or who deny the issue exists and trust that “someone with greater influence” will address it anyway.

Quantum algorithms exist that speed up previously intractable mathematical problems by many orders of magnitude (e.g., turning problems whose difficulty grows exponentially with key size into ones that grow only polynomially).

The widely known attack algorithms in circulation are Grover’s and Shor’s (detailed below).

What not many people know is that these algorithms have advanced a lot over the past few years, and newer, even more efficient attacks have also evolved lately.

Before (or instead of) diving deep into mathematical formulas, let’s see what these mean in real-life scenarios.

Where are vulnerable algorithms used today?

Simply? Everywhere. The mainstream Public Key Infrastructures (PKIs) all use either Rivest–Shamir–Adleman (RSA) or Elliptic Curve (EC) cryptography, both of them vulnerable to Quantum computing-related attacks.

Since mainstream PKIs verify their chains of trust with signature schemes based on these two algorithms, such attacks are a direct threat to the infrastructure itself.

RSA (the company) estimated that 1024-bit keys were likely to become crackable some time between 2006 and 2010, and that 2048-bit keys would be sufficient until 2030. NIST still recommends 2048-bit keys for RSA even today, although they are expected to remain secure only until around 2030, when they may become crackable by traditional computers.

Another important misconception is that this will turn into a “bit race”: that we could simply keep increasing key sizes to stay protected against quantum computers with an ever-increasing number of qubits. This idea fails miserably for two reasons:

1. Incrementing the key size of RSA also greatly increases the computational cost of encryption and decryption (or signing and verification), up to the point where it becomes unusable for its intended purpose, as OpenSSL benchmarks show.
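The original benchmark graphic did not survive here, but the effect is easy to reproduce. As a rough, self-contained proxy (a sketch in plain Python rather than OpenSSL itself), the snippet below times the modular exponentiation that dominates an RSA private-key operation at several key sizes:

```python
import time, random

def priv_op_time(bits: int, reps: int = 5) -> float:
    """Time `reps` modular exponentiations with a full-size exponent,
    the dominant cost of an RSA private-key operation."""
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # full-size odd modulus
    d = random.getrandbits(bits) | (1 << (bits - 1))      # full-size exponent
    m = random.getrandbits(bits - 1)                      # "message"
    start = time.perf_counter()
    for _ in range(reps):
        pow(m, d, n)
    return (time.perf_counter() - start) / reps

for bits in (1024, 2048, 4096, 8192):
    print(f"{bits:5d}-bit: {priv_op_time(bits) * 1000:8.2f} ms per operation")
```

On a typical machine the per-operation time grows far faster than linearly in the key size, which is exactly why very large RSA keys are impractical for interactive protocols. `openssl speed rsa2048 rsa4096` gives the real benchmark.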


2. Remember that incrementing the number of key bits no longer hardens the problem exponentially. Against quantum attacks the difficulty grows only polynomially, which creates great uncertainty about which key size will hold out until when.

Anyone still doubting the seriousness of the issue, or the danger it poses to future-proofing secret channels, should note that as early as 2015 the NSA announced it was planning a transition from the widely used elliptic-curve cryptography to algorithms resistant to future quantum computing-related attacks.


A great example is a well-known type of attack called Man-in-the-Middle (MITM), in which an attacker disguises itself, sits transparently between two endpoints, and sniffs the data sent back and forth, while both genuine endpoints wrongly believe they are communicating directly with each other.

Although the technique itself is quite old, it keeps resurfacing and is still in use today. SSL and TLS are said to solve the problem, yet tools like sslstrip were created back in the day to evade these protections successfully once a slight touch of social engineering is added to the mixture.

Requirements for a successful attack

In order to accomplish a MITM setup successfully, one of the following is required:
  • The attacker convinces the victim to install and trust the attacker’s CA (Certificate Authority) on their local machine
  • The victim ignores the “invalid certificate” error message when it is presented. This only applies to applications with a Graphical User Interface (GUI), though; background processes will simply fail silently if the attacker’s CA is not among the trusted ones.

Difficulty of MITM attacks

Even without knowing each and every step of a MITM flow in detail, we can firmly say that, without deceiving the user through social engineering, performing a MITM attack on services protected by SSL/TLS is practically impossible if a proper PKI was set up.


Simply put, if the certificate issued on the fly by the attacker is not signed by a CA the target trusts, the attack will fail and might trigger alerts on the victim’s side as well.


Sounds like a relief, doesn’t it?

BUT: What if we did not need to tamper with certificates at all?

What if I told you that you do not need to strip SSL at all? Or, even worse: that there is no need to interact with the victims in advance to perform any CA-related setup on their computers?

This becomes possible once we are able to calculate the private keys of the genuine CAs from their public keys. The problem is easy to demonstrate if we follow these steps:

1. Create a new CA: generate an RSA key pair with a 256-bit key size and self-sign it (note the intentionally small key size).


2. Now let’s make this CA’s cert a trusted one on the client computer.


3. Launch some web service (easiest to demonstrate and debug, but any TLS-based protocol would work) on a computer, generate a certificate and sign it with the CA created in step 1.


4. Navigate to the website from the client computer, which should render nicely as the site’s certificate is signed by a CA trusted by the client. Just as it should be.


5. Now if we inject a MITM type of attack, the client will be presented with a warning like:

  • Either the certificate is self-signed or it is signed by an untrusted CA (the attacker)
  • Or if SSL stripping was used, then most probably it does not load the site at all due to HSTS rules (this depends on whether the “Strict-Transport-Security” header was set previously by the server)

  • This is all expected and normal: it is exactly what happens today if you attempt a MITM attack without social engineering and without injecting a trusted CA into the client’s computer.

6. But here, since we generated an RSA key pair with an intentionally small key size, we can factor the modulus and act in the name of the genuinely trusted CA. This means the following:

  • We can use their identity (private key) to sign certificates on the fly as specific domains are requested over SSL/TLS
  • We didn’t need to tamper with the client’s device at all (zero touch attack) since they trust the genuine CA by default.
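The key-recovery step above can be sketched end to end. The toy below uses a deliberately tiny modulus (two ~31-bit primes instead of a 256-bit key, so it finishes instantly; real attacks would need far heavier factoring machinery): it builds a weak RSA public key, factors it with Pollard’s rho, and reconstructs the private exponent, after which we can sign or decrypt in the name of the “CA”:

```python
import math, random

def pollard_rho(n: int) -> int:
    """Find a nontrivial factor of a composite n (Pollard's rho)."""
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n            # tortoise: one step
            y = (y * y + c) % n            # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                         # retry with a new constant on failure
            return d

# Build a deliberately weak "CA" public key from two known small primes.
p_true, q_true = 2147483647, 1000000007
n = p_true * q_true                        # the public modulus an attacker sees
e = 65537                                  # the public exponent

p = pollard_rho(n)                         # attacker factors the modulus...
q = n // p
d = pow(e, -1, (p - 1) * (q - 1))          # ...and reconstructs the private exponent

msg = 123456789
assert pow(pow(msg, e, n), d, n) == msg    # acting as the CA now works
print("recovered private exponent:", d)
```

With a 2048-bit modulus this factoring step is what remains classically infeasible, and it is exactly the step Shor’s algorithm makes cheap.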



The widely known attack algorithms in circulation are Grover’s and Shor’s:

  • Grover’s algorithm mainly targets symmetric cryptosystems (including AES and SHA-256 hashing), but its leverage is much lower: it only reduces the attack complexity to the square root (or, for collision search, roughly the cube root) of the classical effort.

  • Shor’s original algorithm, on the other hand, targets asymmetric cryptosystems, which are often used as part of a Hybrid Cryptosystem or an Integrated Encryption Scheme.
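In concrete numbers, the square-root (and cube-root) leverage translates into halving (or roughly thirding) the effective security level, which is simple arithmetic:

```python
# Effective post-quantum security levels for symmetric primitives.
# Grover's algorithm: key search drops to the square root of the
# classical effort, i.e. the effective bit strength is halved.
# Quantum collision search (Brassard-Hoyer-Tapp): roughly a third
# of the hash output length remains.
for name, bits in [("AES-128", 128), ("AES-256", 256)]:
    print(f"{name}: {bits}-bit key -> ~{bits // 2} bits vs Grover")
print(f"SHA-256 collisions: 128 bits classically -> ~{256 // 3} bits vs BHT")
```

This is why the usual mitigation for symmetric crypto is simply doubling the key size (AES-256), while asymmetric schemes hit by Shor’s algorithm cannot be saved that way.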

Usually this is the point where CTOs dismiss the issue, as it is well known that if you start using a larger key size, then step 6), where we factor the CA’s private key, becomes infeasible, because the problem is exponential. Well, the bad news is that quantum computing attacks make it polynomial.

The following graph compares the complexity of the problem when solved on a traditional computer with regular factoring versus on a quantum computer executing Shor’s algorithm:
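The graph itself did not survive the page, but the gap is easy to restate numerically. The sketch below uses rough standard models (assumptions: the GNFS L-notation heuristic for classical factoring and an O((log n)³) gate count for Shor’s algorithm, both ignoring constant factors):

```python
import math

def gnfs_ops(bits: int) -> float:
    """Rough operation count for classical factoring via the General
    Number Field Sieve: L_n[1/3, (64/9)^(1/3)] with n ~ 2^bits."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def shor_ops(bits: int) -> float:
    """Rough gate count for Shor's algorithm: O((log n)^3)."""
    return float(bits) ** 3

for bits in (1024, 2048, 4096):
    print(f"{bits:4d}-bit RSA: GNFS ~2^{math.log2(gnfs_ops(bits)):.0f} ops, "
          f"Shor ~2^{math.log2(shor_ops(bits)):.0f} gates")
```

The classical cost still explodes with key size while the quantum cost barely moves, which is why the “bit race” described earlier cannot be won.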

And it keeps on getting worse…

Let’s take a closer look at TLS

Transport Layer Security promises a safe haven against the attacks documented above, and as far as the scheme itself is concerned, it still delivers.


Using TLS, however, gives implementers the very false confidence that their channels are securely encrypted and future-proofed.


Well, (at least) the second part of the above statement is very wrong, as we break down in the following sections. The problems lie within the mainstream algorithms used in conjunction with TLS today.

Vulnerability of TLS based communication

Almost every educated IT professional would say that communication going through a TLS-protected channel is considered secure. But let’s dive into what the phrase “secure” really means here: the statement is valid only if we care about the data sent through the wire for the present moment. The reasoning is simple:
  1. Today there is no quantum computer in operation that could break the standards (2048-bit for RSA, 256-bit for EC), or at least none that we are aware of.
    • This statement suggests that data sent through the wire using TLS is considered secure at the moment it is sent and received.
  2. Even NIST confirms that current standards are considered safe only until 2030; after that they might be successfully broken by traditional computers.
    • This statement suggests that we have AT MOST until 2030 before today’s standards will have been broken by traditional computers. It DOES NOT take quantum computing into consideration.
    • This statement also gives the completely wrong feeling that we MUST be safe until 2030 as per the original guidelines. See below why.

The illusion of TRUE forward secrecy

Forward secrecy, sometimes also called “Perfect Forward Secrecy”, is far from perfect.
The general principle of setting up initial long-term keys and then using ephemeral, unique keys for each session seems valid, because the logic behind it says that even if a private key is compromised in the process, it does not expose all the messages sent between the endpoints, just the single session whose key leaked.

Sounds smart, doesn’t it? Well it is also made useless by Quantum computing attacks for the following reason:
If we MITM a TLS-secured channel with no intention of decrypting the data flowing through, just letting it pass by, the entire handshake can be dissected with freely available tools like Wireshark.


We do nothing else but listen to the traffic and save it for a later purpose. That purpose is this: once quantum computers with the required number of qubits and enough stability become available (which is really not an IF but a WHEN), we will be able to read everything like plaintext. Remember, you can already start experimenting with quantum computing today.


Do you remember the above deadline of NIST for this, scheduled AT MOST to 2030, with traditional computers? Now you might start realizing how late we already are …


Disclaimer: By no means did I want to insult the original inventors of Forward Secrecy in this section, because (especially at the time when it was drafted) the idea was marvelous, and back then there was no Quantum threat at all to be afraid of in practice. In fact I would rather like to thank them for focusing on this problem so early and starting to raise awareness of it.

Now that we start to grasp how practical the “theoretical” Quantum computing threat is, let’s see what we can do to keep ourselves, our company and our clients safe and sleeping well at night.


Replace vulnerable algorithms

It might seem straightforward to go with the “well then, let’s replace those algorithms” approach. It is not as easy as that.
Although RSA and EC-based cryptography are proven to be vulnerable in the near future, they have been truly battle-tested and have done an amazing job. It is hard to let go of something that is guaranteed to work well for now.

There is one thing quantum-resistant algorithms don’t have in comparison with RSA and EC: practical implementation history. There is not much of a chance that so many mathematicians are wrong, including brains like Craig Gentry, Oded Regev, Chris Peikert, Shafi Goldwasser and Vinod Vaikuntanathan, who publish on this subject, but still.
Everyone paying close attention to the issue is “hedging their bets”: let go of something proven safe in the past but bound to be broken in the future, or implement something considered safe both at the moment and in the future.

The swap will be rolled out incrementally, starting with “hobbyist” projects that have little to lose but are willing to experiment. Quantum computing-resistant algorithms will get battle-tested, one project at a time.

But for organizations with a lot at stake, this issue is far more complicated, and it is hard to know what to be afraid of. For them, we recommend the following approach:

Wrap vulnerable algorithms

To be on the safe side, it is possible to “wrap” the ciphertext output by today’s encryption standards into a quantum-resistant layer to have the best of both worlds.

While this might sound “easy” and logical, the implementation is not trivial for the following reasons:

  • Cryptosystems are not designed with cross-implementation in mind

  • There are known exploits based on “nesting” certain encryption algorithms inside one another, so the algorithms to combine must be chosen professionally and carefully
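A minimal sketch of the wrapping idea, assuming an HMAC-SHA-256 “extract”-style combiner: the session key is derived from both a classical shared secret and a quantum-resistant one, so it stays safe as long as either layer holds. The two `os.urandom` inputs below are placeholders for real ECDH and post-quantum KEM outputs, not actual key exchanges:

```python
import hashlib, hmac, os

def combine_secrets(classical_ss: bytes, pq_ss: bytes, info: bytes) -> bytes:
    """Hybrid combiner: the derived key depends on BOTH shared secrets,
    so breaking one component alone is not enough to recover it."""
    return hmac.new(info, classical_ss + pq_ss, hashlib.sha256).digest()

# Stand-ins for real key-exchange outputs (hypothetical placeholders):
classical_ss = os.urandom(32)  # e.g. an ECDH shared secret (quantum-vulnerable)
pq_ss = os.urandom(32)         # e.g. an ML-KEM shared secret (quantum-resistant)

session_key = combine_secrets(classical_ss, pq_ss, b"demo hybrid v1")
print(session_key.hex())
```

Real hybrid designs (e.g. the X25519 plus ML-KEM combinations being trialled for TLS) are specified much more carefully, precisely because of the nesting pitfalls mentioned above.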

Key Encapsulation Mechanism

A Key Encapsulation Mechanism (KEM) uses public-key cryptography to establish a shared secret; quantum-resistant KEMs can replace the vulnerable key-exchange step of a TLS handshake.

Going this route, we get a professional solution to armor the weak points of TLS that could be attacked in the near future. There has been a lot of traction in this direction lately, with KEM support appearing in experimental builds of major browsers, and in the future no additional client software will be needed, as this will be supported natively.
