A Quick Primer on Why “SignalGate” is Worse than You Think


First, enjoy some humor to warm you up to the topic.

I assume you already know the basics of the “SignalGate” scandal, as of March 27, 2025. If not, catch up to the present by checking out this detailed and, as far as I can tell, factually-accurate Wikipedia entry and this interview of Jeffrey Goldberg by The Bulwark’s Tim Miller.

Alright, if you have any MAGA or pro-Trump acquaintances and this topic comes up, you are likely to hear some version of the defense CIA Director John Ratcliffe used in his testimony to the Senate Intelligence Committee for using the Signal app to conduct a high-level discussion of military operations. He said that the Biden administration had issued guidance allowing use of the Signal app. He failed to mention that the allowance applied to specific types of communication in specific circumstances, none of which applied to the discussion that Jeffrey Goldberg overheard. Some members of the administration and members of Congress have tried to back Ratcliffe up by claiming that Signal was approved for classified communications during the Biden administration. This Snopes article refutes their claims. Read it for details, and you can yell at me in the comments if you have good evidence that the Snopes article is incorrect.

In fact, up until the Trump regime took office, the Signal app was not allowed on any electronic equipment owned or leased by the federal government, nor were government employees permitted to use it for any government-related communications even on their own personal devices, if those communications included any classified information.

You may have heard that the Signal app, like WhatsApp or Telegram, encrypts messages sent between devices, so a third party who happens to intercept the messages in transit can’t read them. All well and good, as far as that goes. Even better, Signal stores messages on your device in an encrypted database, and the encryption key needed to decrypt those local messages is kept in your device’s keychain. If you’re smart, you protected that keychain with a strong PIN, passphrase, or face or fingerprint ID. Unless someone unlocks your keychain and uses the Signal app’s encryption key to decrypt the database, the messages are unreadable by humans.
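To make that at-rest protection concrete, here is a toy Python sketch of the same pattern: the message database is encrypted with a random data key, and that data key never sits on disk in the clear, because it is itself wrapped with a key derived from the user’s PIN (the keychain’s job). Everything here is illustrative and assumed, not Signal’s actual code: the PIN, the names, and especially the XOR “cipher,” which stands in for the real AES/SQLCipher encryption Signal uses.

```python
import hashlib
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher (Signal actually uses AES via SQLCipher).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# 1. A random "data key" encrypts the local message database.
data_key = os.urandom(32)
message_db = b"meet at 14:00, package inbound"
encrypted_db = xor(message_db, data_key)

# 2. The data key is wrapped with a key derived from the user's PIN,
#    so it is never stored in usable form on its own.
pin = b"4821"  # hypothetical PIN for illustration
salt = os.urandom(16)
kek = hashlib.pbkdf2_hmac("sha256", pin, salt, 200_000)  # key-encryption key
wrapped_data_key = xor(data_key, kek)

# 3. Unlocking: only someone who supplies the PIN (or compromises the
#    device while it is unlocked) can recover the data key and read messages.
unwrapped = xor(wrapped_data_key, hashlib.pbkdf2_hmac("sha256", pin, salt, 200_000))
assert xor(encrypted_db, unwrapped) == message_db
```

The weak point the rest of this post describes is step 3: malware running on the device doesn’t have to break the encryption at all; it can simply wait until the user unlocks the key and read the plaintext right alongside them.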

That sure sounds secure, so what’s all the fuss from these “scaredy-cat libtards?” The big problem here is the overall security of the devices running Signal. Many (most?) of the members of the group who participated in the “SignalGate” group chat were running Signal on their personal cell phones. Those phones would be using some version of either Apple’s iOS operating system or some potentially customized version of Google’s Android operating system. These operating systems are designed for general use by the public, which means that the developers write in fewer security protections to make the devices easier to use. Most people wouldn’t put up with the many obstacles to ease of use that a tightly-secured operating system requires. While iOS and Android both have security features meant to protect users from easy and common security compromises, they are not secure enough to resist every attack highly-skilled hackers can throw at them.

Some of these hackers belong to private hacking groups, like “Anonymous.” Others are employees of government agencies, such as the NSA or Russia’s GRU. These groups are dedicated and in many cases have vast resources to investigate and exploit vulnerabilities in products like iOS and Android. Their techniques range from crude to sophisticated. At the crude end there’s the classic fake tech support email or phone call: the “tech support” representative informs the user that a piece of malware has been discovered on their device and asks the user to install a piece of “security software” to clean it up, or asks for a remote support session, or asks for the user’s login credentials. Once the user agrees, the hackers have a foot in the door and can start making changes to the device that let them steal or alter the information on it.

Then there is the discovery and exploitation of “zero-day” vulnerabilities. A “zero-day” vulnerability is a bug in the operating system that compromises the device’s security and that nobody knows about except the group of people who discovered it. There are thousands of professional and hobbyist security researchers who probe operating systems and applications for security vulnerabilities, and the vast majority of these are reported to the developers of the operating systems and applications so they can be fixed. That’s why the general public is encouraged to update their operating systems and applications regularly.

Some hacker groups, especially those employed by governments, will purposely not disclose to anyone else certain types of security vulnerabilities they discover. They are especially interested in vulnerabilities that basically overthrow the entire security architecture of the operating system or application and allow the hackers to steal data or take control of the device without the end user ever knowing it happened. In the case of the Signal application, the hackers would be able to access the device remotely, gain access to the Signal database encryption key when the user types in their PIN or shows their face to the camera to open the Signal database, and read all the user’s Signal messages, contacts, etc. while the user happily goes about chatting with their Signal contacts.

When one of these entities discovers such a vulnerability, doesn’t disclose it to anyone else, develops software to exploit it and then attacks devices using that software, we call that a “zero-day” attack. One of the most well-known pieces of software used to exploit cell phone vulnerabilities is Pegasus, a product of the Israeli firm NSO Group Technologies. Government agencies likely have their own in-house software that can perform similar functions, but we wouldn’t know about those and may never learn.

Once the organization who maintains the operating system or application that was exploited by this zero-day attack learns about the attack, figures out a fix and puts out updated software, the attack is no longer a zero-day attack. Trouble is, people using the devices vulnerable to this attack have to update the operating system or application on their devices to be protected. So let me ask you, when was the last time you updated iOS or Android or any of the applications on your cell phone? Wait, you don’t even know how to do that? Now you see why the government refused to allow Signal to be installed on its devices and told employees not to use Signal to transmit any classified information.

The most recent versions of iOS and Android are also vulnerable to exploits. How do I know that? Simple. Every operating system ever used in production by the general public has had vulnerabilities. Every single one. Have some hacker groups already discovered vulnerabilities in the current versions of iOS and Android that nobody else knows about? Most likely.

So the members of this SignalGate group chat have been downplaying the classification level of the information shared by Pete Hegseth, denying that their use of Signal violates current government policy, and denying that they used Signal while in hostile territory, as if they had to be in Russia or China for foreign government hackers to read their Signal messages. Besides, some of them have said, the attack against the Houthis was successful, so what harm was done to US security?

This is all extremely bad. It means that not only have they been using Signal for some time already, but also that they have no intention to stop using it on their personal devices. Nor have they issued any public reassurances about the security protections they have applied to their personal devices, which means they don’t intend to do anything about that either. There is dedicated third-party software that can be installed on iOS and Android devices that offers some protection against these types of vulnerabilities. Is any of that software installed on the personal devices used by the parties in this group chat? None of them has offered any specifics, most likely because the answer is “no.”

Worse, they are apparently either too stupid or too arrogant to realize that since this information has become public, hackers of all types will be redoubling efforts to exploit their personal cell phones, assuming hackers haven’t done it already. Again, government hackers especially are not interested in letting their victims know that their devices have been compromised. If Russian agents, for example, had already gained access to one or more of the cell phones used during this chat, they probably wouldn’t have come to the aid of the Houthis. Why would they give the US government a reason to suspect a security compromise by saving the Houthis? They’d rather reap the future benefits of an open invitation to top-level chats about US military action (or non-action) in Europe.

In short, this situation is a security nightmare. By continuing these practices our top-level government officials are putting our entire country at risk.

If not Facebook, What? Part 1

This is the third post in a series. Since it has been months since the last post you can review the previous posts here and here.

Is it possible to have a Facebook-like experience on the internet for free without becoming the raw material for a massive data-gathering and analysis operation and then becoming the ignorant target of manipulation by entities using the results of that operation? In short, no. There is no economic model that would enable a site trying to implement Facebook’s user experience — minus the manipulation — while offering the service for free.

There is a movement afoot to force the hands of large corporations that provide web-based internet services so that they can no longer collect behavioral data from users without their knowledge. In 2018 California enacted a law, the California Consumer Privacy Act, that gives consumers the right to obtain from internet service providers any personal information the provider has collected about them, including inferences about the consumer derived from analysis; to opt out of having that data collected or sold to third parties; or to have any or all of the existing data deleted. That law technically went into effect on New Year’s Day, but California is still working on the regulatory framework.

This law is likely to have a significant impact on the business models employed by most of the large companies that provide internet services. First, even though it is only a state law, many popular websites are likely to change their privacy policies nationwide and perhaps globally so they don’t have to customize the site’s code based on the legal residence of the connecting user. Second, revealing the results of their proprietary analysis algorithms to consumers will at least partially undercut the competitive advantages enjoyed by Google and Facebook, for example, in the war to develop the most accurate predictions about future consumer behavior. This will likely force the companies to modify their business model. It may even topple them from their dominant place in the field. Many other companies that specialize in consumer behavior analysis or the (re)sale of this proprietary data could take major financial hits as well.

Needless to say, the companies affected by this legislation will not take the change lying down. Furthermore, as more states adopt similar laws, Congress will face increasing pressure to preempt them with a federal law that standardizes data privacy rights so that internet service providers aren’t swamped trying to comply with several different requirements. Don’t expect a federal law with any teeth to pass in the current political environment, however. There may be hope for something substantial in 2021, depending on the outcome of this year’s election, so if you are concerned about data privacy and aren’t already motivated to get to the polls this year, find out the positions of candidates for the House, Senate, and presidency and vote accordingly.

In any event, unless you happen to live in California, you may not be able to manage data collected about you any better this coming year than in the past, depending on what internet services you use. Some consultants are already telling internet service provider companies not to guarantee any enhanced data privacy protections to users who are not residents of California. Otherwise, they are likely to face unnecessary lawsuits for violation of the terms of service from people who live outside of California. That doesn’t mean they customize their site code; it just means that their terms of service specify the enhanced privacy protections only apply to residents of California and if you try to use those protections they will ignore your requests.

Absent the type of legal changes that would force major internet service providers into a different economic model that did not depend on exploiting our online behavior to make it easier for others to manipulate us, how can you protect yourself from Facebook, Google, etc? The most effective method is to stop using their platforms, and the rest of this series will provide you with several ways to do just that.

Since this series focuses on Facebook’s abuses, I will concentrate on ways to replace the Facebook experience. What to do about Google, Twitter, Bing, YouTube, etc. will have to wait. First, I assume you still want to have some kind of presence online. Is there any way to replicate the features of Facebook you like while avoiding the drawbacks? People’s reasons for being on Facebook differ, of course, and alternatives may meet some people’s needs and not others. In general, though, the short answer is “Not easily.”

For that reason I intend to present a staged series of alternatives, starting in this post with those that involve the least effort and most closely resemble Facebook. The closest Facebook competitors I can find that offer privacy and freedom from the behind-your-back personal data manipulation practiced by Facebook are Diaspora pods, MeWe, Minds, and Sociall.io, listed in alphabetical order. The major drawback to these alternative platforms is that probably none of your Facebook friends has an account on any of them. Since these are social networking sites, no account means no privileged access: even if you direct your friends to your page on one of these alternate platforms, they won’t be able to see it unless you make it available to the general public. Of course you are already limiting who gets to see what you post on Facebook, right? Right … ? (Face palm!) How about if we wait while you take care of that ….

All good now? Let’s say you create an account on one of these platforms and just as on Facebook you create rules to limit access to your posts. The only way your Facebook friends will get to see your posts is if they create an account and you let that account into one of your privileged access groups, whether it’s called “friend,” “contact,” etc. Now you have to convince your Facebook friends to create and use that account. They don’t have to move their Facebook presence onto the other platform, but they do have to maintain that account and log in to it to see your posts. As long as your friends maintain their pages on Facebook you will all have to split time between the two platforms. Some people won’t stand for the inconvenience, friend or not.

Moving to an alternate social networking platform would work best if an entire group of people who wanted to keep in touch did it together, or if you, as the group leader/inspiration/provocateur, convinced the bulk of the group to migrate over. And the move is not just a matter of following the leader’s pages but each member of the group moving their Facebook presence to the other platform. That way the entire group can abandon Facebook, at least as a means for interacting with the other members of the group.

Still considering this option? Here’s a quick, beginner-level review of the platforms mentioned above. Diaspora is not really a social networking platform; it’s a software suite used to create social networking “pods.” A “pod” is a group of people who share some common interest or connection that motivates them to form an online group so they can share information with each other. Members of the “pod” can communicate with one another online because someone with technical skill has installed the Diaspora software and used it to create a social networking platform on one or more physical or virtual computers accessible to members of the “pod.” There are many, many existing Diaspora “pods.” Whether you and your Facebook friends would find any of them to be a suitable online home is anyone’s guess. If not, you would need to create a new “pod,” and there is the major drawback: you need someone with technical skill to create and maintain the “pod.” No, I won’t do it for you, unless you and your friends all commit to pay for it. If you don’t want to investigate costs yourself, wait for the following posts, where I will address costs.

MeWe is purpose-built as a Facebook replacement. Its major selling point is that it intentionally eschews doing anything with data about you except using it to improve its own services: no sales of your behavioral data to third parties, no behind-your-back research on what makes you tick, no targeted advertising. The site has been around for about three years and is attracting more users, but it is still dwarfed by Facebook, and many of its new users were kicked off Facebook for promoting racism, violence, conspiracy theories, or terrorism. Fortunately, you can quite easily isolate yourself from their absurdities. MeWe doesn’t push unrequested material into your feeds, and it doesn’t run algorithms against the material you do request in order to proactively feed you other content its “algorithms” say you might be interested in. (I put “algorithms” in quotes because many of us suspect some posts show up in our Facebook news feeds because someone paid Facebook big bucks, not because Facebook’s algorithms actually singled us out as interested parties.)

Minds and Sociall.io are also meant to be Facebook replacements, but with a twist. Both of these social networking platforms use a technology called “blockchain.” In Sociall.io’s case, the “blockchain” technology is used to eliminate the need for a centralized server/server farm to host the platform. Instead, the entire social network community’s activities are processed distributively by the computers of the members. This is no place to go into the details of how “blockchain” works, except to voice one fundamental objection to the entire project. The computers participating in most blockchain networks consume massive amounts of energy, because the “proof-of-work” mining that secures them rewards whoever throws the most computing power at cryptographic puzzles, and the puzzles get harder as more computing power joins the competition. All that computing power would be put to better use trying to solve more critical problems.
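To see why blockchain mining burns so much energy, here is a minimal sketch of the “proof-of-work” puzzle most blockchain networks use. This is a generic illustration, not any particular platform’s code: the miner just hashes candidate values until one hash happens to start with enough zeros, and each additional required zero multiplies the expected work by roughly sixteen.

```python
import hashlib
import itertools

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Find a nonce whose SHA-256 hash starts with `difficulty` zero hex digits.

    The only way to win is brute force: try nonce after nonce. At the
    difficulty levels real networks use, this search consumes enormous
    amounts of electricity across all competing miners.
    """
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

# A low difficulty finishes in milliseconds; each extra zero is ~16x more work.
nonce, digest = mine("toy block", difficulty=4)
print(nonce, digest)
```

Verifying a solution is cheap (one hash), which is the asymmetry the scheme relies on; the waste comes from the millions of failed attempts that finding a solution requires.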

Minds uses blockchain for the limited purpose of generating crypto-currency. Why? The founders of Minds wanted to develop a business model that rewards users. As I pointed out in my earlier post, Facebook extracts valuable information from your online behavior and sells it, but gives you no cut. Minds also gets value from your online behavior, but they pay you for it. That’s where the crypto-currency comes in. Rather than sending you a check in the mail, they deposit crypto-currency in your online account and let you use that crypto-currency to “buy” advanced features or exclusive content. They are working on integrating their crypto-currency with other existing crypto-currencies, such as Bitcoin, so someday the crypto-currency from Minds may be convertible to dollars. This positive feature of the Minds platform is offset by the fact that crypto-currencies are built on “blockchain” technology and are subject to the same criticism regarding wasteful energy use. My advice? Don’t go there.

If none of these options appeal to you, wait for my following posts where I offer more alternatives.

Why I won’t post on Facebook, part 2

A few years ago Erika invited a horse trainer to teach her how to train our horses to be more cooperative. I witnessed some of his training sessions and learned enough to try using the same techniques on our horses afterwards. I won’t bore you with all the details, but much of what we learned is directly relevant to Facebook’s relation to its users.

Our trainer pointed out that the goal of training is to get the horse to do things that benefit us, despite the fact that the horse has no natural inclination to do these things because they would be of no benefit to a horse in its ancestral environment. He pointed out how we can use the horse’s natural inclinations to teach it to do things that we want it to do. For example, we learned how to position ourselves in a circular arena in such a way that the horse would continuously trot around the inside edge of the arena to avoid having us in a location that made it nervous, and then reposition ourselves in such a way that the horse would stop or fluidly reverse course and trot in the opposite direction. Developing this trotting behavior benefitted us because Philip would be competing in horse events requiring sustained arena trotting. It just so happens that the exercise also benefitted the horse.

When it comes to Facebook, and by extension much more of the online world, we users are in the position of a horse. Most of the websites that most of us visit are designed at least partly to get us to do things that benefit the owners of the website, some of which we would otherwise not be inclined to do. Shoshana Zuboff’s book, The Age of Surveillance Capitalism, is again the best thorough examination of how the web became like this. You can find brief introductions to her work at many places on the web, including here.

But the critical issue with Facebook — and most other websites or platforms that offer you internet services for “free” — is not just that the company is trying to manipulate you, but also that you must be kept ignorant of what is happening to you. We are all very familiar with some of the manipulative methods used by older forms of media, such as pictures of rugged cowboys smoking cigarettes while riding their horses on the range or shapely young women in bikinis draped over cars. It doesn’t take a genius to figure out the implicit message being sent. These forms of influence may be crass and tasteless, but most of us don’t regard them as a threat because we can tell we are being manipulated, even if in the end we do what the advertisers want. Likewise, few of us are likely to be horrified by news sites loaded with articles with headlines like, “Newly-released Video May Show Trump Propositioning Mrs. Putin,” because we have been seeing that kind of clickbait in grocery store check-out lines for decades.

What if you are no more aware of how you are being manipulated by Facebook than my horse was when I got it to continue trotting in a circle? What if cues you are receiving in the design, layout, graphics, workflow, and interactions with Facebook and its other users are meant to provoke behaviors that you would never in your wildest dreams imagine could be stimulated by those cues? Well, there is no “what if.” It happens online all the time. For comparison, consider the use of color and other atmospheric modifications in restaurants to stimulate hunger in patrons so they will order more food. The difference with online platforms like Facebook is that they are also collecting volumes of data on your behavior and using that data to customize your online environment, in the expectation that you will be motivated to do things like buy items advertised on the site, click on links that take you to places on the web favored by advertisers, or do things in your real life that will increase your exposure to products favored by advertisers.

Google had discovered the power of using data culled from users of its websites to manipulate their behavior before Facebook even existed. I say this at least partly to stop you from speculating that I have some personal grudge with Facebook, but also because lessening exposure to Facebook doesn’t solve the problem. It happens across the web now. And it is fundamentally immoral, and should be illegal.

If that strikes you as an extreme reaction, maybe you need to consider in more detail how this works. Let’s say that you are a consultant with Cambridge Analytica, to take just one known example, and using data obtained from Facebook you determine that there is a class of potential voters with the following online behavioral characteristics:

  1. When perusing their timeline, they are almost guaranteed to like or comment on any post from friends more recent than two days ago, and almost guaranteed not to view any posts older than that.
  2. When commenting, they are almost guaranteed to post no more than two sentences, and they use an emoji or accompanying image over 60% of the time.
  3. They tend to browse their “People You May Know” list at least once per week.
  4. When browsing that list, they are almost guaranteed to a.) send a friend request to at least one person in the list with whom they already have two or more mutual friends and b.) have a list long enough to span at least three pages, of which they browse through at least the first two.

Furthermore, you learn that members of this class are 63% more likely to get to a polling place and vote for the candidate Cambridge Analytica wants to promote if, for at least two months before the election, they are shown a set of advertisements and links to online content promoting anti-bacterial lotions and sprays or female country-music artists, placed on their Facebook timelines between 5:00-6:30 AM or 9:45-11:00 PM. These promotions must occur in combination with other online content attacking Nancy Pelosi, the Federal Reserve Board, or the Walt Disney Company, placed in such a way that the targeted users are likely to click on at least 75% of the content and links provided, based on their color preferences, mouse-moving habits, and instant reactions to specific topics and themes.

Even this example is too generic. For you personally, they also know that you have a persistent habit of over-clicking (for example, clicking on a link more times than is necessary to follow it), are more likely to click on links highlighted in dark purple than other dark colors, and spend on average less than two minutes away from Facebook after clicking on a link before returning and performing further actions on Facebook. They have already learned from longitudinal studies of people exhibiting this combination of behaviors how to tailor the presentation of the content mentioned in the previous paragraph to increase the odds of you voting for their desired candidate. All they need now is sufficient resources to make enough changes to the appearance of your timeline to get the results they desire. After all, this is what they are paying Facebook for, and without that money Facebook wouldn’t exist.
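Mechanically, this kind of targeting boils down to feature-matching: encode each user’s observed behavior as numbers, test whether they fall inside a segment, and pick the presentation tuned to their personal quirks. Here is a toy sketch of that logic in Python, where every field name, threshold, and campaign label is invented to echo the made-up example above; none of it reflects real Facebook or Cambridge Analytica data structures.

```python
from dataclasses import dataclass

@dataclass
class UserBehavior:
    # All fields and numbers are hypothetical, mirroring the invented example.
    comment_rate_recent: float    # engagement rate with posts under two days old
    avg_comment_sentences: float  # average sentences per comment
    emoji_rate: float             # fraction of comments with emoji or image
    pymk_visits_per_week: float   # "People You May Know" browsing frequency
    purple_link_click_lift: float # extra click rate on dark-purple links

def in_target_segment(u: UserBehavior) -> bool:
    # Segment membership is just a conjunction of cheap behavioral tests.
    return (u.comment_rate_recent > 0.9
            and u.avg_comment_sentences <= 2
            and u.emoji_rate > 0.6
            and u.pymk_visits_per_week >= 1)

def pick_campaign(u: UserBehavior) -> str:
    if not in_target_segment(u):
        return "no-op"
    # Within the segment, individual quirks tune the presentation details.
    if u.purple_link_click_lift > 0.2:
        return "dark-purple links, 5:00-6:30 AM slot"
    return "standard links, 9:45-11:00 PM slot"

user = UserBehavior(0.95, 1.8, 0.7, 2.0, 0.3)
print(pick_campaign(user))  # prints "dark-purple links, 5:00-6:30 AM slot"
```

Real targeting systems replace these hand-written thresholds with models trained on millions of users, but the pipeline is the same shape: observe, segment, tailor.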

In my case, at least, I have nowhere near the level of self-awareness necessary to recognize that these types of subtle cues could be manipulating my behavior. In fact, it is entirely possible that at least some groups with access to Facebook data were attempting to manipulate me to get off Facebook! Certainly they wouldn’t tell Facebook that’s what they were using the data they paid for to do, but I can imagine certain right-wing groups with that data considering me a lost cause and wanting to limit the exposure of my ideas on the Facebook platform, placing content on my timeline to chase me off the platform. No doubt Facebook has means to catch and limit this type of customer behavior; after all, they are still in business.

Regardless, why should I expose myself to an environment where I am in effect a lab rat? Where I participate in a game in which not only do I not know the goal or the rules but I don’t even know what game is being played? If you have a good answer to this question you are welcome to post a comment. It had better be a very good answer if you want it to change my mind about Facebook.

Why I won’t post on Facebook, part 1

Here is the bargain Facebook offers you:


  1. You get space to post whatever you want about your life on a popular, online platform. You get to specify who can see what you post. You get this space for free.
  2. Facebook gets control, not only of the material you post, but also of whatever other information about you it can gather by analysis of that material, and gets to sell that information to third parties. You give them all this for free.

For several reasons I no longer consider this trade-off to be in my, or your, best interest, and over the next several posts will lay out these reasons. In a comment on my initial announcement about leaving Facebook, Erika mentioned my reaction to the book The Age of Surveillance Capitalism as a driving force behind this decision, and she was partly right. Anyone who wants a thorough assessment of the social impact of “Surveillance Capitalism” ought to read that book.

On the other hand, many of the arguments in that book only clarified and lent greater support to suspicions of Facebook that had been growing on me for a few years. The first suspicion, now amply confirmed, was that Facebook’s enforcement of user data privacy has been embarrassingly sloppy since the company’s founding. In most cases the public exposure of user data was not a result of Facebook being victimized by malicious hackers, but a side effect of its own policies and employees’ activities.

Some of you are probably thinking, “But why should I care? I don’t post anything on Facebook that I need to keep private. Anybody who does that is an idiot and deserves every bad thing that happens to them as a result of exposure.” OK, I’m not being fair. I have no idea what you are thinking, but I know what I used to think and just quoted it. The trouble with this line of thinking is that it assumes you are fully conscious of everything you reveal about yourself when you post personal information on sites like Facebook, and that the only significant facts people can learn about you from the information you post is what is on the surface.

It is obviously not true that in face-to-face interactions we expose to others only what we consciously intend to reveal. People learn a lot about us from the way we interact with them, much of which we’d rather they didn’t know. We all know this is true, and it profoundly affects our behavior in everyday social gatherings. That’s why many people felt liberated by their early experiences online: it appeared to them that much of the “negative” information about them available in face-to-face interactions was no longer accessible. It is now a truism that one of the main reasons people behave so differently online is that they believe they can, if they choose, be completely anonymous. That belief was never entirely true, and it is far less true today than in the early days of the internet, when the meme “on the internet nobody knows you’re a dog” was coined. Today, that unqualified claim is hopelessly naive. On the internet you may yet be able to hide the fact that you are a dog, but that is little comfort when savvy actors can determine whether you like biscuits better than soup bones, which leg you prefer to scratch yourself with, how often your master takes you for a walk, and whether you’ll charge the door or hide under the couch when a burglar tries breaking in.

Many major internet companies, most notably Google and Facebook, base their entire business model on learning as much about you as they can from your interactions on their platforms and then selling that information to other commercial entities. This goes far beyond the details of your personal life that you choose to post on their platforms, whether it be photos, documents, emails, videos, messages, phone calls, comments, or even simple things such as “likes” and votes in online polls. These companies also use analytic tools to determine how you react to content posted by advertisers and other users and develop a profile of your personality that can be used to more effectively sell you things or otherwise influence your decisions. You may remember the scandal over the data collected about millions of Facebook users that Cambridge Analytica used to influence voters to support Donald Trump for president. Even though publicly Facebook execs acted as if Cambridge Analytica’s use of their data was an embarrassment for the company, behind the scenes they tell their customers this is a feature, not a bug. The more effectively one of Facebook’s customers achieves its objectives by the use of accurate information about how you will behave when exposed to the customer’s message, the more valuable Facebook’s data about you becomes, and the more money Facebook’s customers are willing to pay Facebook to get their hands on this data.

Notice that the more Facebook knows about you, the more money they make. Do they pay you for this information? How about Facebook’s customers? Does this greater knowledge of your behavior produce a more satisfying online experience for you? Does it improve your life more than if Facebook and their customers left you alone? Ha. Ha. Ha. Now you know my first reason for moving much of my online activity off of Facebook. If they want to sell details of my life, let them pay me for it first. Since that deal isn’t in the cards, I’ll take my life elsewhere, thank you.