Greetings!

A friend of mine wants to be more secure and private in light of recent events in the USA.

They originally told me they were going to use Telegram, so I explained why Telegram is considered compromised and why Signal is far more secure to use.

But they want more detailed explanations than what I provided verbally. Please help me explain things better to them! ✨

I am going to forward this thread to them, so they can see all your responses! And if you can, please cite!

Thank you! ✨

  • The Hobbyist@lemmy.zip · 3 days ago

    From what I understand, sealed sender is implemented on the client side, and that’s what’s in the GitHub repo.

    • Aria@lemmygrad.ml · 3 days ago

      How does that work? I wasn’t able to find this. Can you find documentation or code that explains how the client can obscure where it came from?

        • Aria@lemmygrad.ml · 2 days ago

          Okay. But this method doesn’t address the fact that the service doesn’t need the message to include the sender in order to know who the sender is. The sender (or rather, their unique device) can be appended to the message by the server, with 100% accuracy, after it’s received. Even if we trust them on the parts that require trust, the setup described in the blog post does nothing to prevent social graphs from being derived, since the sender is identified at the start of every conversation.
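
          To sketch what I mean, here’s a toy illustration in Python (purely hypothetical names, not Signal’s actual server code) of a service attaching connection-level metadata to an envelope that doesn’t name its sender:

```python
# Toy sketch of the concern above (not Signal's actual server code; all
# names are made up): even if the submitted envelope omits the sender,
# the service handling the connection could attach whatever it observed
# about that connection before storing or forwarding the message.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class IncomingEnvelope:
    recipient_id: str
    ciphertext: bytes                       # sealed payload; sender not readable here
    server_metadata: dict = field(default_factory=dict)


def handle_submission(envelope: IncomingEnvelope, connection: dict) -> IncomingEnvelope:
    """Annotate a sender-less envelope with connection-level observations."""
    envelope.server_metadata.update({
        "source_ip": connection["remote_ip"],           # transport-level observation
        "account": connection.get("authenticated_as"),  # only if the socket was authenticated
        "received_at": datetime.now(timezone.utc).isoformat(),
    })
    return envelope


# Example: a sealed-sender submission arriving over a connection the
# server can still describe.
msg = handle_submission(
    IncomingEnvelope(recipient_id="bob", ciphertext=b"..."),
    connection={"remote_ip": "203.0.113.7"},
)
print(msg.server_metadata)
```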

          If we trust them not to store any logs (unverifiable), then this method means they can’t precisely know how long a conversation lasted or how many messages were exchanged. But they can still know precisely when and how many messages each participant received; there’s just a chance that those participants are talking to multiple people. Though if we’re trusting them not to store logs (unverifiable), then there shouldn’t be any data to cross-reference to begin with. So if we can’t trust them, why are we trusting them not to take note of the sender?

          The upside is that if the message is leaked to a third party, there’s less info in it now. I’m ignoring the GitHub link, not because I don’t appreciate you finding it, but because I take the blog post to be the mission statement for the code, and the blog doesn’t promise a system that comprehensively hides the sender’s identity. I trust their code to do what is described.

          • hedgehog@ttrpg.network · 23 hours ago

            The sender (or rather, their unique device) can be appended to the message by the server, with 100% accuracy, after it’s received.

            How?

            If I share an IP with 100 million other Signal users and I send a sealed sender message, how does Signal distinguish between me and the other 100 million users? My sender certificate is encrypted and can only be decrypted by the recipient.
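
            To make that concrete, here’s a rough sketch of the general shape of the idea, using libsodium sealed boxes via PyNaCl as a stand-in. This is not Signal’s wire format or libsignal’s API, and the names are made up; it just shows that everything identifying the sender can live inside a ciphertext only the recipient can open:

```python
# Conceptual sketch using libsodium sealed boxes via PyNaCl -- NOT Signal's
# actual protocol or wire format, just the general shape of the idea:
# everything identifying the sender lives inside a ciphertext that only
# the recipient can open; the outer envelope names only the recipient.
import json
from nacl.public import PrivateKey, SealedBox

# Recipient's long-term keypair (in Signal this role is played by the
# recipient's identity/profile keys; names here are illustrative).
recipient_key = PrivateKey.generate()

inner = json.dumps({
    "sender_certificate": "alice.1@example",    # hypothetical placeholder
    "message_ciphertext": "already E2E-encrypted content",
}).encode()

# Anyone can seal to the recipient's public key without authenticating.
sealed = SealedBox(recipient_key.public_key).encrypt(inner)

# What the server sees: a recipient address plus an opaque blob.
envelope = {"recipient": "bob@example", "payload": sealed}

# Only the recipient can open it and learn who the sender was.
opened = json.loads(SealedBox(recipient_key).decrypt(envelope["payload"]))
assert opened["sender_certificate"] == "alice.1@example"
```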

            If I’m the only user with my IP address, then sure, Signal could identify me. I can use a VPN or similar technology if I’m concerned about this, of course. Signal doesn’t consider obscuring IPs to be in scope for their mission; they said as much in response to a recent Cloudflare vulnerability that impacted Signal. From https://www.404media.co/cloudflare-issue-can-leak-chat-app-users-broad-location/

            404 Media asked daniel to demonstrate the issue by learning the location of multiple Signal users with their consent. In one case, daniel sent a user an image. Soon after, daniel sent a link to a Google Maps page showing the city the user was likely in.

            404 Media first asked Signal for comment in early December. The organization did not provide a statement in time for publication, but daniel shared their response to his bug report.

            “What you’re describing (observing cache hits and misses) is a generic property of how Content Distribution Networks function. Signal’s use of CDNs is neither unique nor alarming, and also doesn’t impact Signal’s end-to-end encryption. CDNs are utilized by every popular application and website on the internet, and they are essential for high-performance and reliability while serving a global audience,” Signal’s security team wrote.

            “There is already a large body of existing work that explores this topic in detail, but if someone needs to completely obscure their network location (especially at a level as coarse and imprecise as the example that appears in your video) a VPN is absolutely necessary. That functionality falls outside of Signal’s scope. Signal protects the privacy of your messages and calls, but it has never attempted to fully replicate the set of network-layer anonymity features that projects like Wireguard, Tor, and other open-source VPN software can provide,” it added.
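
            To illustrate what “observing cache hits and misses” means in practice, here’s a rough sketch. The attachment URL is hypothetical; cf-cache-status and cf-ray are standard Cloudflare response headers, and the real technique in the article involved deliberately reaching many specific Cloudflare datacenters rather than making a single request:

```python
# Rough sketch of the "observe cache hits and misses" idea from the
# 404 Media report: after the target's client auto-fetches an attachment,
# the edge datacenter that served them will have it cached. Probing the
# same URL and reading Cloudflare's response headers reveals which
# datacenter answered and whether it was a cache HIT. (Illustrative only.)
import requests

ATTACHMENT_URL = "https://cdn.example.org/attachments/abc123"  # hypothetical

resp = requests.get(ATTACHMENT_URL, timeout=10)
cache_status = resp.headers.get("cf-cache-status", "unknown")  # HIT / MISS / ...
colo = resp.headers.get("cf-ray", "").rsplit("-", 1)[-1]       # e.g. "SJC"

print(f"served by datacenter {colo!r}, cache status {cache_status!r}")
# A HIT at a given datacenter suggests someone near that datacenter
# recently requested the same object.
```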

            I saw a post about this recently on Lemmy (and Reddit), so there’s probably more discussion there.

            since the sender is identified at the start of every conversation.

            What do you mean when you say “conversation” here? Do you mean when you first access a user’s profile key, which is required to send a sealed sender message to them if they haven’t enabled “Allow From Anyone” in their settings? If so, then yes, the sender’s identity when requesting the contact would necessarily be exposed. If the recipient has that option enabled, that’s not necessarily true, but I don’t know for sure.

            Even if we trust Signal, Sealed Sender without any sort of random delay in message delivery means a nation-state-level adversary could observe inbound and outbound network activity and derive high-confidence information about who’s contacting whom.
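
            As a toy illustration of that concern (made-up numbers, nothing measured from Signal):

```python
# Toy sketch of the timing-correlation concern (made-up data): an observer
# who can see when envelopes arrive at and leave the service can pair
# near-simultaneous events even when the envelopes themselves carry no
# sender. Random delivery delays widen the window and add false matches.
from itertools import product

# (seconds, observed source or destination) -- hypothetical observations
inbound = [(100.00, "ip-A"), (100.02, "ip-B"), (153.40, "ip-A")]
outbound = [(100.05, "bob"), (100.06, "carol"), (153.45, "bob")]

WINDOW = 0.2  # pair events closer together than this

pairs = [
    (src, dst, round(t_out - t_in, 3))
    for (t_in, src), (t_out, dst) in product(inbound, outbound)
    if 0 <= t_out - t_in <= WINDOW
]
print(pairs)
# With tight timing, "ip-A talks to bob" shows up twice; jittering delivery
# with a random delay makes these pairings far less certain.
```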

            All of that said, my understanding is that contact discovery is a bigger vulnerability than Sealed Sender if we don’t trust Signal’s servers. Here’s the blog post from 2017 where Moxie describes their approach. (See also this blog post where they talk about improvements to “Oblivious RAM,” though it doesn’t have more information on SGX.) He basically said “This solution isn’t great if you don’t trust that the servers are running verified code.”

            This method of contact discovery isn’t ideal because of these shortcomings, but at the very least the Signal service’s design does not depend on knowledge of a user’s social graph in order to function. This has meant that if you trust the Signal service to be running the published server source code, then the Signal service has no durable knowledge of a user’s social graph if it is hacked or subpoenaed.

            He then continued on to describe their use of SGX and remote attestation over a network, which was touched on in the Sealed Sender post. Specifically:

            Modern Intel chips support a feature called Software Guard Extensions (SGX). SGX allows applications to provision a “secure enclave” that is isolated from the host operating system and kernel, similar to technologies like ARM’s TrustZone. SGX enclaves also support a feature called remote attestation. Remote attestation provides a cryptographic guarantee of the code that is running in a remote enclave over a network.

            Later in that blog post, Moxie says “The enclave code builds reproducibly, so anyone can verify that the published source code corresponds to the MRENCLAVE value of the remote enclave.” But how do we actually perform this remote attestation? And is it as secure and reliable as Signal attests?

            In the docs for the “auditee” application, the Examples page provides some additional information and describes how to use their tool to verify the MRENCLAVE value. Note that they also say that the tool is a work in progress and shouldn’t be trusted. The Intel SGX documentation likely has information as well, but most of the links that I found were dead, so I didn’t investigate further.
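
            Whatever tooling is used, the check conceptually reduces to: rebuild the enclave reproducibly from the published source, obtain a signed attestation quote from the running service, verify the quote’s signature chain back to Intel, and compare MRENCLAVE values. Here’s a sketch of that last step, with hypothetical helpers (not the Intel SDK, DCAP, or Signal’s tooling):

```python
# Conceptual outline of the verification step -- the types and helpers here
# are hypothetical stand-ins, not the Intel SGX SDK, DCAP, or Signal tooling.
from dataclasses import dataclass


@dataclass
class AttestationQuote:
    mrenclave: bytes        # measurement of the code running in the enclave
    signature_chain: bytes  # signed by an Intel-rooted attestation key


def verify_enclave(quote: AttestationQuote,
                   expected_mrenclave: bytes,
                   intel_root_of_trust) -> bool:
    """Return True iff the quote is genuine and matches our own build."""
    # 1. Check the quote really comes from genuine SGX hardware
    #    (in practice: validate the certificate/signature chain).
    if not intel_root_of_trust.verify(quote.signature_chain):
        return False
    # 2. Compare the attested measurement against the MRENCLAVE we computed
    #    ourselves from a reproducible build of the published source.
    return quote.mrenclave == expected_mrenclave
```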

            A blog post titled Enhancing trust for SGX enclaves raised some concerns with SGX’s current implementation, specifically mentioning Signal’s usage, and suggested (and implemented) some improvements.

            I haven’t personally verified the MRENCLAVE values for any of Signal’s services and I’m not aware of anyone who has (successfully, at least), but I also haven’t seen any security experts stating that the technology is unsound or doesn’t actually do what’s claimed.

            Finally, I recommend you check out https://community.signalusers.org/t/overview-of-third-party-security-audits/13243 - some of the issues noted there involve the social graph and at least one involves Sealed Sender specifically (though the link is dead; I didn’t check to see if the Internet Archive has a backup).