Next chapter of the book is about hard problems. This section will be a quicker write-up bc I’ve spent so much time with this theory before.
Before we run an algorithm we want a sense of how long it will take. The longer an algorithm takes to run, the harder it is to compute the result (leading us to the idea of computational hardness).
To measure how long an algorithm will take, we figure out how many simple steps are needed (e.g., adding 2 numbers is a step, reading from memory is a step). The more steps, the more computation is needed. (Of course, what counts as a step can vary depending on how you want to analyze the algorithm. Maybe reading from memory is 1 step. Maybe you want to split reading from memory into reading from cache and reading from disk.) Using similar logic we can also figure out how much memory an algorithm will use.
The number of steps will depend on the size of the input. The time it takes to sort a list is proportional to the number of elements I need to sort. We’d say there are n elements and it would take O(n log n) time. When counting steps we assume a MASSIVE input. So massive that we don’t care about any constants. 10^100000000000 ~ 10^100000000000 * 2. That’s what the O does for us (don’t hate me, math loving people, for that simple explanation).
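To make step counting concrete, here’s a toy sketch (my own example, not from any particular textbook) that counts comparisons in a simple selection-style sort:

```python
# Toy step counter: treat each comparison as one "step" and count them.
def count_sort_steps(items):
    items = list(items)
    steps = 0
    # Selection-sort style: compare every pair once, about n^2 / 2 comparisons.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            steps += 1  # one comparison = one step
            if items[j] < items[i]:
                items[i], items[j] = items[j], items[i]
    return steps

for n in [10, 100, 1000]:
    print(n, count_sort_steps(range(n, 0, -1)))
# 10x the input -> roughly 100x the steps, which is how we'd land on O(n^2).
```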
Algorithms that take a polynomial number of steps (e.g., O(n^3)) are considered easy and can be solved even if n is large. If an algorithm takes exponential time (e.g., O(2^n)) it will take way too long when n is large. These types of hard problems are what we want for cryptography.
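A quick sketch of the gap (plain arithmetic, no assumptions beyond Python’s big integers):

```python
# Polynomial (n^3) vs. exponential (2^n) step counts at a few sizes.
for n in [10, 50, 100]:
    print(n, n ** 3, 2 ** n)
# At n = 100, n^3 is a million steps; 2^n is over 10^30.
# Hardware speedups don't rescue the exponential case.
```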
Hard problems on classical computers aren’t necessarily hard on quantum computers. It’s not that quantum computers run the same EXACT algorithms faster. Rather, there are algorithms that can only run on quantum computers and solve the same problems in fewer steps. This doesn’t apply to every problem: some have quantum algorithms that solve them faster, some don’t.
So not going to get too deep here. Look it up if you have more questions.
P is the set of all problems that can be solved in polynomial time. It’s a subset of NP.
NP is the set of all problems whose solutions can be verified in polynomial time.
NP-complete is a subset of NP whose problems are all equivalent: any NP problem can be mapped to an NP-complete one in polynomial time, so a fast solution to one NP-complete problem would solve them all (we don’t know of one). They are the hardest NP problems to solve. These usually take exponential+ time to solve but make for bad bases for cryptography. Sometimes, if the input is just right, they can be super easy to solve.
NP-hard problems are problems at least as hard as any NP-complete problem BUT they don’t have to be in NP. Weird, right. NP-complete problems are NP-hard, and if a polynomial solution to an NP-hard problem were to exist, we’d solve NP-complete and P = NP.
It’s worth noting we strongly believe P ≠ NP but haven’t proved it. If P = NP, ciphers would break and hashes could be reversed.
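To see the solve-vs-verify gap in code, here’s a toy subset-sum example (subset sum is a classic NP-complete problem; the function names are my own):

```python
from itertools import combinations

# Subset sum: given numbers and a target, is there a subset summing to target?

def verify(numbers, subset, target):
    # Checking a proposed answer is cheap (polynomial time):
    # confirm the elements belong and add them up.
    return all(x in numbers for x in subset) and sum(subset) == target

def solve(numbers, target):
    # Finding an answer by brute force tries up to 2^n subsets (exponential).
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)
print(answer, verify(nums, answer, 9))  # [4, 5] True
```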
Factoring the product of two large primes is hard (probably not NP-hard; we can’t prove that, but we’re pretty sure it’s not).
N = p*q where p and q are large primes. There are lots of cool ways to factor N using classical computers that still take superpolynomial time. (quantum does polynomial 👀). The fastest is the GNFS (general number field sieve), which takes roughly e^(1.92 × (ln N)^(1/3) × (ln ln N)^(2/3)) steps.
Factoring a 1024-bit number takes roughly 2^70 steps, a 2048-bit number roughly 2^90 steps, and a 4096-bit number about 2^128 steps (128-bit security). Researchers don’t all agree on these estimates. Interestingly, in experiments using LOADS of parallel computing over 1.5-2 years, researchers have factored 663-bit and 768-bit numbers. Because of this, 1024 bits is considered unsafe: it’s close enough that someone with near-infinite resources (NSA) might be able to factor it.
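For a feel of why bit length matters, here’s naive trial division (NOT GNFS, just the simplest possible attack) on a toy modulus:

```python
# Trial division takes ~sqrt(N) steps, i.e. about 2^(bits/2):
# exponential in the bit length of N.
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

# A tiny "RSA-style" modulus falls instantly...
print(factor(61 * 53))  # (53, 61)
# ...but every 2 extra bits roughly doubles the work,
# which is why real moduli are 2048+ bits.
```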
We can use fancy math to make another computationally hard problem. Specifically, we can use groups: consider the group Z_p*, which has the integers [1, p) where p is a prime number. When we multiply numbers together, we take the result mod p.
We then take advantage of generators g in cyclic groups (cyclic groups have at least 1 generator). If a generator is raised to successive powers (g^1, g^2, g^3, …) it spans every element in the group.
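A tiny concrete check (p = 7 and g = 3 are a small example I picked; easy to verify by hand):

```python
# Successive powers of a generator hit every element of Z_p* before cycling.
p, g = 7, 3  # 3 generates Z_7*
powers = [pow(g, k, p) for k in range(1, p)]
print(powers)  # [3, 2, 6, 4, 5, 1] -- all of {1, ..., 6}, each exactly once
```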
The hard problem: given x and g, find the y such that g^y = x. This is the discrete log problem (discrete because it involves discrete/non-continuous values, log because you want to find log_g x). It’s computationally really hard. Note, this is not NP-complete, and there are some groups where it’s easy.
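A brute-force sketch of the attacker’s job (the parameters are toy values I picked; real deployments use primes thousands of bits long):

```python
# Brute-force discrete log: given g, x, p, find y with g^y = x (mod p)
# by walking through the powers of g. Up to p - 1 steps: fine for p = 23,
# hopeless for a 2048-bit prime.
def discrete_log(g, x, p):
    acc = 1
    for y in range(p - 1):
        if acc == x:
            return y
        acc = (acc * g) % p
    return None

p, g = 23, 5           # 5 is a generator of Z_23*
x = pow(g, 13, p)      # the "public" value; 13 is the secret exponent
print(discrete_log(g, x, p))  # recovers 13
```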
For 40+ years the math has held up; however, the implementation of that math doesn’t always.
For factoring, the number chosen might not be N = pq but rather N = p1 × p2 × p3 × … (a product of many smaller primes), which makes it easy to factor. It’s also easy when some bits of p or q are known. Or when N = p^r q^s with r > log p.
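A quick demo of the many-small-primes failure mode (the primes below are ones I picked for illustration):

```python
# Full trial-division factorization: the work is bounded by the square root
# of the LARGEST prime factor, not of N itself.
def factor_all(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Eight ~10-bit primes give an ~80-bit N, but it factors instantly.
primes = [1009, 1013, 1019, 1021, 1031, 1033, 1039, 1049]
N = 1
for q in primes:
    N *= q
print(factor_all(N) == primes)  # True
```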
When implementing a scheme that relies on factoring (RSA), picking p and q is super important.
Another problem is when the numbers are small. Even a super hard problem is solvable with small numbers: if your key is small, it can be brute forced.
Ayyy done on day 2. Let’s go,
Lucas