Tiny Bytes: RSA
tldr: RSA works by exploiting the fact that we can't easily factor the product of two large prime numbers, plus some group theory, to make a trapdoor permutation, aka a function that turns x into y, where y can't easily be turned back into x without a secret. However, implementing RSA gets tricky because there are lots of subtle attacks.
Math
RSA takes advantage of the group Z^*_{n} (the multiplicative group of integers modulo n). This is the set of non-negative integers less than n that have an inverse modulo n. 1 x 1 mod n = 1. 0 x int = 0, so ...
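The trapdoor idea above can be sketched with toy numbers. This is my own illustrative snippet, not code from the post: real RSA needs large random primes, padding such as OAEP, and a vetted library, so treat this strictly as a demo of the easy direction (x to y) versus the secret-requiring direction (y to x).

```python
# Toy RSA sketch (illustrative only -- never use for real security).
from math import gcd

def make_keys(p, q, e=65537):
    """Build a toy RSA keypair from two small primes p and q."""
    n = p * q
    phi = (p - 1) * (q - 1)      # related to the order of Z*_n for n = p*q
    assert gcd(e, phi) == 1      # e must be invertible mod phi
    d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi), Python 3.8+
    return (n, e), d

def encrypt(pub, m):
    n, e = pub
    return pow(m, e, n)          # easy direction: anyone can turn x into y

def decrypt(pub, d, c):
    n, _ = pub
    return pow(c, d, n)          # trapdoor: turning y back into x needs the secret d

pub, d = make_keys(61, 53)       # tiny primes, chosen just for illustration
c = encrypt(pub, 42)
assert decrypt(pub, d, c) == 42  # round-trips only with the private exponent
```

Without knowing d (which requires factoring n into p and q), recovering 42 from c is the hard problem RSA rests on.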
Tiny Bytes: Chilling
Hi, Just chilling tonight. Aiming to finish up chapter tomorrow. Night, Lucas
Tiny Bytes: Quickie
Hi, Did much more writing on RSA. Will finish soon. Bye, Lucas
Hola,
I want to follow up on the 'why you should care about privacy' discussion from the metaphors post. There, I mainly wrote about why it's creepy not to have privacy and the vague sense of harm that comes with losing it. Today, I want to dive deeper into what that harm actually is.
The danger from a lack of privacy often feels more ominous than tangible. I get a bad feeling in my stomach when a private company owns my DNA sequence, or the government knows my location 24/7. However, defining EXACTLY what that harm is can be tricky. In this post, I'll quickly discuss the harms that can come from a lack of privacy across several different categories of data.
Before I jump in, I want to take a second to mention the idea of scale. I find that the impact of scale isn’t discussed enough when talking about privacy (and many other issues from security to racism). The potential for harm is astronomically different if someone overhears a single conversation versus every single conversation. The same is true if someone were to have just my data versus everyone’s data. I’ll try to highlight how scale changes the damage a lack of privacy can cause.
Communication data (e.g., text and voice)
What we communicate and with whom we communicate reveal a ton about who we are. What we communicate is obviously impactful data. It can reveal everything from what movies I like to what bank I use. It's clear how someone knowing what I am communicating can be used to influence my behavior (selling me products, or phishing me because they know what I bought online). Similarly, it is obvious why people would want this information to check whether I am doing something I'm not allowed to do (abortion, drugs).
A little less obvious is the information about who we communicate with. This reveals our social graphs -- the people we are close to and how close we are. People we communicate with more tend to be people we are closer to. And people I am close to are people who influence me more (if everyone around me buys red shoes from company X, I'll probably buy red shoes from company X). Social media companies exist because they know who you are close to and influenced by. If my friend is getting into golf, I'll see more videos about golf. If someone wants to influence my behavior (e.g., to buy certain products or dislike a specific policy), they can influence my close connections. And someone worried that people are doing something they don't like (getting an abortion, buying drugs, joining a protest) can check who is messaging with people tied to those organizations.
From a scale perspective, someone can cause harm to me even if they only have this data on me. Scams, like phishing, thrive on this information. I am much more likely to trust a message from my mom's account about my cellphone bill than one from defNotAScam@trustMePlz.com. It can also help refine targeted ads. And if I've been arrested or have made someone in power angry at me, this information can be instrumental in criminal or reputational damage (phone records showing someone called the mob boss a lot, or that a priest messaged on Grindr).
However, where this power really shines is when it is scaled up to as many people as possible. More users and stronger network effects are crucial to social media companies, because that's what makes their ads more profitable (I wonder what the math is relating network effects to the valuation of these companies). I could target a lot more people with scams if a large number of messages were leaked. And I'm much more afraid of someone using my messages for harm if I know there's a 100% chance the government sees them versus a 10% chance.
The ability for a government to know what I am messaging is the part I find trickiest (companies can fuck off and not know anything). There is a real problem with child porn on the internet. People do illegal things that the internet amplifies. But there are also way too many people doing normal things who should not be spied on, and there is danger in someone listening to my conversations. Clearly, there is some balance that lets people keep their privacy while still preventing bad things from happening. While I don't have much more time, the answer still involves E2E-encrypted messaging, because a back door or middleman means no privacy. E2E has to be the starting point. (The TLDR on what I currently believe: we have other methods for checking for bad behavior, AND governments already have the ability to break into our messaging apps. It's just secret, and the CIA/FBI/others don't want to reveal those vulnerabilities. For example, the FBI eventually broke into the iPhone of the San Bernardino shooter.)
Because of the power to look into messages and track their metadata, whoever can do it should also be tracked. When that power is exercised, we, the people, should know. There's a difference between the FBI breaking into Osama bin Laden's phone and the local police looking at who has messaged an abortion clinic. Transparency around that power is the only way to hold it in check (plus, we need the ability to punish misuse).
Well, that's all, folks. I spent all the time on communication data.
Have a nice day!
Lucas