# Security and Data Privacy

**Published by:** [Centuries](https://paragraph.com/@centuries/)
**Published on:** 2022-01-20
**URL:** https://paragraph.com/@centuries/security-and-data-privacy

## Content

When we think of the most controversial topics surrounding social networks over the last decade, it all comes down to data privacy and data protection. Facebook and its subsidiaries in particular have been scrutinized for their handling of user data and personal information. Yet most people do not realize or understand the roles that algorithms and big data play in consumer businesses. It might come as a surprise to some, but Mark Zuckerberg and his army of engineers are not sitting in front of their screens all day monitoring the messages you send on WhatsApp or the pictures you post on Instagram. Instead, every piece of information is queried and fed into training the various algorithms that Facebook employs to make its platforms work. Generally, these algorithms can be divided into three categories: recommendation engines, content screening algorithms, and A/B user testing.

Let us start with the recommendation engines that make up the core of Facebook's proprietary IP and that made it the tech behemoth it is today. Most people know that Facebook collects data, but recommendation engines rely on specific kinds of it. Facebook tracks your interactions: what content you like, how long you view a specific type of content, what type of content you view, which accounts you follow, what types of posts you engage with (comments, likes, views), and how long each of your sessions on the app lasts. This information is fed into machine learning models that decide which accounts to recommend to you, how your newsfeed is structured, and what content to show you to keep you engaged as long as possible. Even more important for Facebook is the data about how you interact with their ads.
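The interaction-to-ranking loop described above can be illustrated with a toy scoring function. This is a minimal sketch, not Facebook's actual system: every signal name and weight here is invented for illustration, and real systems learn their weights from the interaction data rather than hard-coding them.

```python
from dataclasses import dataclass

# Hypothetical engagement signals for one candidate post.
# The field names are illustrative, not Facebook's real feature set.
@dataclass
class PostSignals:
    post_id: str
    topic_affinity: float   # 0..1, how much this user engages with the topic
    avg_view_seconds: float # how long similar content held this user
    follows_author: bool    # does the user follow the account?

def engagement_score(s: PostSignals) -> float:
    # Toy linear model with made-up weights.
    score = 2.0 * s.topic_affinity + 0.1 * s.avg_view_seconds
    if s.follows_author:
        score += 1.5
    return score

def rank_feed(candidates: list[PostSignals]) -> list[str]:
    # Highest predicted engagement first: this ordering is the "newsfeed".
    ranked = sorted(candidates, key=engagement_score, reverse=True)
    return [p.post_id for p in ranked]
```

The point is only the shape of the pipeline: tracked interactions become numeric features, features become a score, and the score orders the feed.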
They track whether you view an ad, how long you view it, whether you click on it, and what kinds of interests you demonstrate on the platform. They then assign you to different segments and sell their valuable newsfeed real estate (around every fourth post you see) to advertisers in real time via their incredibly sophisticated auction platform. All of this happens constantly, millions of times a day, without you noticing. It has made them the second-biggest player in the ads industry after Google.

The second big application is content screening. Facebook uses complex image recognition algorithms and natural language processing to filter out as much illegal content as possible before it even gets uploaded. In the early days, before they had these algorithms, they had to manually go through every flagged post to weed out the bad apples, an impossible task at scale, as you might imagine. Today they employ over 15,000 people to help with content moderation because, as we all know, even the most sophisticated algorithms struggle with the nuances of human interaction. Despite what some people might say about political biases and freedom of speech, this is actually a good thing. The moderation team has seen incredible churn, with an average tenure of less than two years, due to PTSD and other effects on mental health. They remove everything from videos of executions and rape to hate speech, misinformation, and propaganda. They are basically doing the best they can to keep our platforms safe and friendly and to keep out as much negativity as possible. And while they are far from perfect and do not catch 100% of bad content, Facebook has arguably done the best job of content screening and moderation among the big social networks, by a wide margin. Just to give you perspective: in Q1 2018, Facebook removed over 821 million posts that contained violence, nudity, and other offensive material.
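The screening pipeline described above, automated filters backed by human moderators, can be sketched in miniature. Simple keyword matching stands in for the image recognition and NLP models the article mentions, and all term lists, names, and statuses below are invented placeholders.

```python
# Toy pre-upload screening pipeline: an automated pass rejects obvious
# violations, anything merely suspicious is queued for human moderators,
# and everything else is published. The term sets are illustrative.
BLOCKED_TERMS = {"execution_video", "hate_slur"}   # auto-remove
SUSPECT_TERMS = {"propaganda", "misinformation"}   # route to human review

def screen_post(text: str) -> str:
    tokens = set(text.lower().split())
    if tokens & BLOCKED_TERMS:
        return "rejected"        # never reaches the feed
    if tokens & SUSPECT_TERMS:
        return "needs_review"    # handed to the moderation team
    return "published"
```

The two-tier outcome mirrors the article's point: algorithms handle the clear-cut cases at scale, while ambiguous content still needs human judgment.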
This is an important factor to understand when it comes to data screening and collection. **Our world is not black and white, and the platforms we use are not as nice and friendly by nature as some people think they are.** The internet has a tendency to bring out more deviant behavior in humans, and it is an incredibly hard job to keep these platforms safe to use.

How will we approach things at Solomon? At the beginning, we will mostly rely on our users reporting inappropriate content, which we will then evaluate and remove from the platform. Later on, it will be necessary to use content screening algorithms to ensure the safety of the platform; there is simply no other way to filter content once you reach a certain user base.

We will never collect or store any personal data or information about our users. As you have learned, the main reason Web2 social networks collect data is that their core business, advertising, relies on it: the more accurately they can match ads to users, the more advertisers are willing to pay the platform. At Solomon, however, advertisements will not be our business model. Instead, we will rely on micro-transactions and transaction fees. This allows us to give you full control over your data and means there is no need for us to collect any personal data about you.

We will track app usage patterns to allow for A/B testing so we can keep optimizing our features and our platform. But for that, it is not necessary to collect any personal data; instead, we can anonymize user profiles and group them into cohorts based on how they use certain features. We will not collect any content or personal data. Instead, we will analyze the general usage patterns of the cohort you are assigned to: for example, how many people fail to connect their wallet, how many people use a certain feature, or how long and how often people use the platform.
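The anonymized cohort approach sketched above might look something like the following. This is a speculative sketch, not Solomon's actual implementation: the cohort count, event names, and hashing scheme are all assumptions chosen for illustration.

```python
import hashlib
from collections import Counter

COHORTS = 8  # illustrative number of anonymous cohorts

def cohort_of(user_id: str) -> int:
    # One-way hash: the cohort label cannot be reversed back to a user id,
    # and many users share each label.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % COHORTS

class UsageStats:
    """Stores only aggregate (cohort, event) counts: no content,
    no user ids, no personal data."""
    def __init__(self) -> None:
        self.events: Counter = Counter()

    def record(self, user_id: str, event: str) -> None:
        self.events[(cohort_of(user_id), event)] += 1

    def total(self, event: str) -> int:
        return sum(n for (_, ev), n in self.events.items() if ev == event)
```

Questions like "how many people fail to connect their wallet" then become queries over these aggregate counters rather than over individual profiles.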
Later on, we will also enable you to choose whether or not you want to turn on content recommendations. This is important because recommendations help increase engagement, and in our experience, non-algorithmic feeds tend to get boring quickly: you end up missing a lot of the content from the accounts and people that matter most to you. Nevertheless, those features will be optional and strictly opt-in rather than opt-out. This means we will actively ask every user whether they want personal recommendations or not.

Today we only discussed in-app data collection, but there is another big side to the picture: third-party, cross-platform data collection using cookies, web crawlers, data mining, and retail data. If you are interested in learning more about the topic, we can do a part two of this article.