A Quick Primer on Why “SignalGate” is Worse than You Think


First, enjoy some humor to warm you up to the topic.

I assume you already know the basics of the “SignalGate” scandal, as of March 27, 2025. If not, catch up to the present by checking out this detailed and, as far as I can tell, factually accurate Wikipedia entry and this interview of Jeffrey Goldberg by The Bulwark’s Tim Miller.

Alright, if you have any MAGA or pro-Trump acquaintances and this topic happens to come up, you are likely to hear some version of the defense CIA Director John Ratcliffe offered in his testimony to the Senate Intelligence Committee for using the Signal app to conduct a high-level discussion of military operations. He said that the Biden administration had issued guidance that allowed for the use of the Signal app. He failed to mention that the allowance applied to specific types of communication in specific circumstances, none of which applied to the discussion Jeffrey Goldberg witnessed. Some members of the administration and members of Congress have tried to back up Ratcliffe by claiming that Signal was approved for classified communications during the Biden administration. This Snopes article refutes their claims. Read it for the details, and you can yell at me in the comments if you have good evidence that the Snopes article is incorrect.

In fact, up until the Trump regime took office, the Signal app was not allowed on any electronic equipment owned or leased by the federal government, nor were government employees permitted to use it for any government-related communications, even on their own personal devices, if those communications included any classified information.

You may have heard that the Signal app, like WhatsApp or Telegram, encrypts messages sent between devices. That way, a third party who happens to intercept the messages in transit can’t read them. All well and good, as far as that goes. Even better, Signal stores messages on your device in an encrypted database, and the encryption key needed to decrypt those local messages is kept in your device’s keychain, the operating system’s secure key store. If you’re smart, you protected that keychain with a strong PIN, passphrase, or face or fingerprint ID. Unless the keychain is unlocked and Signal’s encryption key is used to decrypt the database, the messages are unreadable by humans.
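To make the “encrypted at rest, key in the keychain” idea concrete, here is a minimal sketch in Python. This is not Signal’s actual code, and the FakeKeychain class is an invented stand-in for the platform keychain/keystore; the point it illustrates is just that what sits on disk is ciphertext, and it only becomes readable when the key is fetched from the keychain and used to decrypt it.

```python
# Conceptual sketch of "encrypted at rest, key in the keychain" -- not Signal's real code.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet


class FakeKeychain:
    """Stand-in for the platform keychain/keystore, which the OS unlocks
    only after the user presents a PIN, passphrase, or biometric."""

    def __init__(self):
        self._secrets = {}

    def store(self, name: str, value: bytes) -> None:
        self._secrets[name] = value

    def fetch(self, name: str) -> bytes:
        # On a real phone, this call fails unless the device/keychain is unlocked.
        return self._secrets[name]


keychain = FakeKeychain()

# When the app is first set up, it generates a random database key
# and parks it in the keychain rather than on disk next to the data.
db_key = Fernet.generate_key()
keychain.store("signal_db_key", db_key)

# Messages are written to storage only as ciphertext.
message = b"example message text"
ciphertext = Fernet(db_key).encrypt(message)
print(ciphertext[:40])  # unreadable without the key

# Reading them back requires pulling the key out of the (unlocked) keychain.
plaintext = Fernet(keychain.fetch("signal_db_key")).decrypt(ciphertext)
print(plaintext.decode())
```

The takeaway from the sketch is that the data on disk is useless by itself; everything hinges on who can get at the key.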

That sure sounds secure, so what’s all the fuss from these “scaredy-cat libtards?” The big problem here is the overall security of the devices running Signal. Many (most?) of the members of the “SignalGate” group chat were running Signal on their personal cell phones. Those phones would be running some version of either Apple’s iOS or a possibly customized version of Google’s Android. These operating systems are designed for general use by the public, which means the developers trade away some security protections to make the devices easier to use. Most people wouldn’t put up with the many obstacles to ease of use that a tightly secured operating system requires. While iOS and Android both have security features meant to protect users from easy and common security compromises, they are not secure enough to resist every attack that highly skilled hackers can throw at them.

Some of these hackers are members of private hacking groups, like “Anonymous.” Others are employees of government agencies, such as the NSA or Russia’s GRU. These groups are dedicated and in many cases have vast resources to investigate and exploit vulnerabilities in products like iOS and Android. Their techniques range from crude social engineering to sophisticated software exploits. There’s the classic fake tech support email or phone call: the “tech support” representative informs the user that a piece of malware has been discovered on their device and then asks the user to install a piece of “security software” to clean it up, grant a remote support session, or hand over their login credentials. Once the user agrees, the hackers have a foot in the door and can start making changes that allow them to steal or alter information on the device.

Then there is the discovery and exploitation of “zero-day” vulnerabilities. A “zero-day” vulnerability is a bug in an operating system or application that compromises the device’s security and that nobody knows about except the people who discovered it; the name refers to the fact that the developer has had zero days to fix it. There are thousands of professional and hobbyist security researchers who probe operating systems and applications for security vulnerabilities, and the vast majority of the flaws they find are reported to the developers so they can be fixed. That’s why the general public is encouraged to update their operating systems and applications regularly.

Some hacker groups, especially those employed by governments, will deliberately not disclose certain types of security vulnerabilities they discover to anyone else. They are especially interested in vulnerabilities that effectively defeat the entire security architecture of the operating system or application and allow the hackers to steal data or take control of the device without the end user ever knowing it happened. In the case of the Signal application, the hackers would be able to access the device remotely, capture the Signal database encryption key at the moment the user types in their PIN or shows their face to the camera to open the Signal database, and then read all of the user’s Signal messages, contacts, and so on while the user happily goes on chatting with their Signal contacts.
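To make that concrete, here is a continuation of the hypothetical sketch from earlier. Again, this is purely illustrative: the names are invented, it is not Signal’s real storage format, and it is not any actual exploit. It simply shows that once the key has been captured, reading the “protected” messages is trivial.

```python
# Continuation of the earlier sketch -- purely illustrative, not a real exploit.
# 'stolen_key' stands in for a database key that malware lifted from memory
# or the keychain at the moment the user unlocked it.
from cryptography.fernet import Fernet


def read_messages_with_stolen_key(stolen_key: bytes,
                                  encrypted_blobs: list[bytes]) -> list[str]:
    """Once the key is in an attacker's hands, 'encrypted at rest' means little:
    the same one-line decrypt the app uses works for anyone holding the key."""
    f = Fernet(stolen_key)
    return [f.decrypt(blob).decode() for blob in encrypted_blobs]
```

The encryption is only as strong as the device guarding the key, which is why the security of the phone itself, not the Signal app, is what matters here.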

When one of these entities discovers such a vulnerability, doesn’t disclose it to anyone else, develops software to exploit it, and then attacks devices using that software, we call that a “zero-day” attack. One of the best-known pieces of software used to exploit cell phone vulnerabilities is Pegasus, a product of the Israeli firm NSO Group Technologies. Government agencies likely have their own in-house software that performs similar functions, but we don’t know about those tools and may never learn of them.

Once the organization that maintains the exploited operating system or application learns about the attack, figures out a fix, and puts out updated software, the attack is no longer a zero-day attack. Trouble is, people using the vulnerable devices still have to install that update to be protected. So let me ask you: when was the last time you updated iOS or Android or any of the applications on your cell phone? Wait, you don’t even know how to do that? Now you see why the government refused to allow Signal to be installed on its devices and told employees not to use Signal to transmit any classified information.

The most recent versions of iOS and Android are also vulnerable to exploits. How do I know that? Simple. Every operating system ever used in production by the general public has had vulnerabilities. Every single one. Have some hacker groups already discovered vulnerabilities in the current versions of iOS and Android that nobody else knows about? Most likely.

So, the members of this SignalGate group chat have been downplaying the classification level of the information Pete Hegseth shared, denying that their use of Signal violates current government policy, and denying that they used Signal while in hostile territory, as if they would have to be in Russia or China for foreign government hackers to read their Signal messages. Besides, some of them have said, the attack against the Houthis was successful, so what harm was done to US security?

This is all extremely bad. It means not only that they have been using Signal for some time already, but also that they have no intention of stopping its use on their personal devices. Nor have they issued any public reassurances about the security protections they have applied to their personal devices, which means they don’t intend to do anything about that either. There is dedicated third-party software that can be installed on iOS and Android devices that offers some protection against these types of vulnerabilities. Is any of that software installed on the personal devices used by the parties in this group chat? None of them has offered any specifics, most likely because the answer is “no.”

Worse, they are apparently either too stupid or too arrogant to realize that now that this information has become public, hackers of all types will be redoubling their efforts to exploit their personal cell phones, assuming the hackers haven’t done so already. Again, government hackers especially are not interested in letting their victims know that their devices have been compromised. If Russian agents, for example, had already gained access to one or more of the cell phones used during this chat, they probably wouldn’t have come to the aid of the Houthis. Why would they give the US government a reason to suspect a security compromise by saving the Houthis? They’d rather reap the future benefits of an open invitation to top-level chats about US military action (or inaction) in Europe.

In short, this situation is a security nightmare. By continuing these practices, our top-level government officials are putting our entire country at risk.