
I am so excited about this news. I understand that some people are pessimistic, and view it as a "giving up" on complete security against nation-states. I think that's the wrong way to analyze the situation.

The dream I have is someone making a phone that is purpose-built to be secure against state actors. Unfortunately, this makes very little economic sense, and probably won't happen (maybe if some rich person started a foundation or something?). The phone would need to have pretty restricted functionality and would not be generally appealing to mass market consumers.

As it stands, securing a mass market modern smartphone, even from just remote attacks, is just intractable. We should not bury our heads in the sand and wishfully think that if they just spend a little more money, close a few more bugs, and make the sandboxing a little better, somehow iOS 16 or Android 13 will finally be completely secure against state actors. The set of features being shipped will grow fast enough that security mitigations will not someday 'catch up'.

This is the next best thing! The more we can give users the freedom to lock down their devices, the more the vision of an actual solution comes into view. This is the first step towards perhaps our only hope of solving this someday - applying formal methods and lots of public scrutiny to a small 'trusted code base', and finally telling NSO group to fuck off.

Even this dream may not pan out, but at least we can have hope.



I would suspect any phone designed to resist a state-level actor, that is made available to me (a regular citizen) would 100% be a honeypot for a state level actor.


In fact, several phones which have been advertised as such have been honeypots from state level actors.


Which ones? Not challenging you, just curious.



That's crazy! Straight out of The Wire.


Australian Federal Police did it as well: https://www.theguardian.com/australia-news/2021/sep/11/insid...


anyone big like samsung, lg, or apple? I'd love to see those articles and teardowns.


Security as a service is going to be a honeypot 100% of the time.


This comment feels disingenuous to me, but maybe I'm misinterpreting. Security features are always a service, but there are real apps that provide real security. Signal and Matrix provide real encryption for communication. There are even mainstream products that do, like iMessage or Gmail, though these tend to be more selective about what is secure and what isn't (typically through walled gardens). Apple and Google both use federated learning, which is at least a step better than your typical data "anonymization." I agree that there's not enough push for serious security, especially as a default, but I'm also not pessimistic on the subject.


Signal wants your PSTN ID = real world ID, wants contacts from your phonebook which on Google phones generally means already cloudified, and is itself distributed through Google Play. Further, IIRC it's US-based so subject to acts of intervention from on high. I would be strongly suspicious of any metadata security claims, even if it nominally provides message or session-level encryption. Metadata is bad news.


> IIRC it's US-based so subject to acts of intervention from on high.

Sure, and they have been open about what information they give [0]. If you're talking about being forced to introduce compromised code, I'm not aware of the US government being able to force a company to do that. Signal has said before that they'll shut down and move if this becomes a requirement [1], and on top of that, the code is open source and constantly scrutinized by the security community. So it sounds like a pretty difficult thing to pull off.

I don't think handing your phone number to Signal is as big of a security issue as you're making it out to be.

[0] https://signal.org/bigbrother/

[1] https://www.wired.com/story/signal-earn-it-ransomware-securi...


I have a ton of concerns with Signal. They started collecting and storing user data in the cloud while being deceptive/unclear about it in their communications, leading to a ton of confusion among users. In fact, they're now storing exactly the same data that they once bragged about not being able to turn over, since at that time they weren't keeping it. Pretty much as soon as it was clear Signal was going to start keeping user data, users raised objections, asked for a way to opt out of the data collection, and brought up security concerns, but those objections were ignored.

To this day they're violating their own privacy policy because after they started storing user data in the cloud they never bothered to update the policy.

Currently it states, "Signal is designed to never collect or store any sensitive information," while in practice they store your name, your photo, your phone number, and a list of everyone you're in contact with, which is pretty damn sensitive, especially if you're an activist or a whistleblower.

I've stopped using/recommending it. To this day I run into posts where people think Signal isn't collecting any user data. I hope every user who has to learn what signal is really collecting from some random internet comment thinks long and hard about what that says about how transparent and trustworthy signal is.


I recommend Session now.

https://getsession.org/

It doesn't require creating an account and giving up your phone number.

It uses the Signal protocol with different trade-offs in terms of security and privacy [0].

My only concern is that they are based in Australia.

[0] https://getsession.org/session-protocol-technical-informatio...


I'll give Session a look! Right now I'm using Silence for unsecured texting and Jami for secure communication, but both lack polish, and going from Signal to Silence was rough. It really needs a search function.


> They started collecting and storing user data in the cloud

> they're now storing exactly the same data that they've bragged about not being able to turn over

Can you provide me a source on this? This is the first time I've heard of this.


> This is the first time I've heard of this.

Doesn't surprise me. You're my new example of folks still unaware.

My old one was here (none of the answers this guy got tell the truth of the situation): https://old.reddit.com/r/signal/comments/q5tlg1/what_info_do...

Here's an early discussion on the user forum: https://community.signalusers.org/t/proper-secure-value-secu...

It was a total mess with tons of posts there and on the subreddit too. Here's an example: https://old.reddit.com/r/signal/comments/htmzrr/psa_disablin...

Anyone not following all the drama at the time wouldn't have a clue, and a bunch of people who did still came away with incorrect information anyway, because Signal didn't make it clear at all what they were doing. They've gone out of their way to avoid answering direct questions in a clear way ever since, instead keeping alive the myth that they don't collect user data.

There's no reason they couldn't have provided a simple opt-out for the data collection and avoided the issue entirely. The fact that they wouldn't do that was red flag enough, but the mess of confusion their communications caused and their refusal to update their privacy policy should be all the evidence we need that they're not to be trusted. To be fair to the folks at Signal, they may actually be trying to communicate that very message to their users as loudly as they're legally able to.

Additional links you might not enjoy:

https://community.signalusers.org/t/dont-want-pin-dont-want-...

https://community.signalusers.org/t/can-signal-please-update...

https://community.signalusers.org/t/wiki-faq-signal-pin-svr-...

https://community.signalusers.org/t/sgx-cacheout-sgaxe-attac...


The whole cloud data collection thing, and the fact that their privacy policy has now been verifiably incorrect for over two years, certainly make it plausible there's more they're keeping from us.


Sure. Aside from the issue of Google phones uploading contacts to the cloud, and the encouraging of contact additions, there are two clear problems, both metadata:

(1) It's the network of phone numbers: who knows whom, and when they were added, starts to draw a picture.

(2) If they have any infrastructure at all (update checks, contact additions, whatever) that phones home or is polled or contacted in any way, particularly anything that can facilitate a network response (generating network traffic when an ID is added), then the app effectively acts as an element that can be used for identity verification, even if all traffic is encrypted. This is not a small issue.

These issues are not unique to Signal, but they should not be swept under the rug. FWIW, I do not claim to have read or audited their code; I just feel the use of PSTN IDs (i.e., a highly available link to personal identification) is a total farce that introduces huge risk for nearly no benefit to users, and is fundamentally incompatible with their stated public goals (again, I haven't read the official text) of end-user security, if that security is supposed to be best-effort.
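The second problem, an adversary triggering events and watching for the resulting traffic, can be sketched as a toy timing-correlation attack. This is a purely hypothetical simulation (all names and numbers made up), not any real app's behavior:

```python
# Toy illustration: even with all payloads encrypted, an observer who can
# trigger events (e.g. adding a target's ID) and watch traffic *timing*
# can confirm which network endpoint belongs to the target.
import random

random.seed(1)

def observed_traffic(trigger_times, is_target, jitter=0.05):
    """Traffic timestamps seen for one endpoint: the target's device
    responds shortly after each trigger; others send at random times."""
    if is_target:
        return [t + random.uniform(0, jitter) for t in trigger_times]
    return sorted(random.uniform(0, 100) for _ in trigger_times)

def correlation_score(trigger_times, traffic, window=0.1):
    """Fraction of triggers followed by traffic within `window` seconds."""
    return sum(
        any(0 <= x - t <= window for x in traffic) for t in trigger_times
    ) / len(trigger_times)

triggers = sorted(random.uniform(0, 100) for _ in range(20))
target = observed_traffic(triggers, is_target=True)
bystander = observed_traffic(triggers, is_target=False)

print(correlation_score(triggers, target))     # close to 1.0
print(correlation_score(triggers, bystander))  # far lower
```

The endpoint whose traffic consistently follows the attacker-controlled triggers stands out, which is why "we only see encrypted bytes" is not, by itself, a metadata defense.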


> Sure. Aside from the Google phones upload contacts to cloud issue

You can add contacts through Signal that aren't synced with Google. I've just understood this process as a way to bootstrap the social graph. You can just not give Signal access and start from scratch, but I don't think that accomplishes much.

Also, as far as I'm aware, Signal doesn't actually know your phone number.


The thing is, some percentage of your contacts will accidentally or knowingly grant permission for their contacts to go to Google. So by linking to that infrastructure Signal is making this problem worse, whether or not they actually facilitate the spying themselves.


I assume you're an FBI agent trying to encourage people to install your real cooler encrypted app that's not on the store and only available via sideloading.

https://nymag.com/intelligencer/2021/06/fbi-snooped-on-crimi...


Heh, nice one. Not that it's my area, but in case the above was not decodable as sarcasm to other readers: following evidence-based / defense-in-depth strategies, I'd personally recommend not using phones at all (far too little control in general) and instead seeking out auditable (open-source) software on actual machines you have a hope of controlling for secure communications. It's a deep rabbit hole with diminishing returns, though.


It's definitely tin-foil-hat level. Obviously if you're a spy you're gonna need next-level stuff, but most of us aren't Jason Bourne, even if we'd like to think we are.


There are a lot of bad actors in the security space. DDG, for example. Companies like Perimeter 81 I don't trust, based solely on the fact that Israel regularly and frequently acts nefariously. BitLocker replaces good drive encryption you control with something that can be unlocked by authorities. Plenty of PRISM-compromised companies offer security...


SMS and email are insecure-by-default protocols. Gmail/iMessage extend them, which necessarily creates vendor lock-in when the extension relies on some centralized service, the extensions are proprietary, and the implementations are closed source.

Matrix fixes this, but only in the sense that it replaces the whole protocol without backward compatibility.


This comment is especially true for the majority of the VPN companies plaguing YouTube ads/sponsorships right now. It's interesting that they've all pivoted more towards "get Netflix content from any country" than security, and also interesting that none of the streaming services have gone after them for doing so.


Gotta trust somebody at some point. Otherwise you have to live off the grid in the woods, eating squirrels and mushrooms.



And yet we got TOR because it was required for National Security.


TOR is no magic bullet


No, but it was a layer of security required by DoD so it was created and continues to exist.

The same need for modern communications (phones) exists.


IMO Bunnie has the technical skills and the reputation to pull it off though.

I think it has about zero chance of withstanding physical attacks, which is important to me in a phone, but it's a nice effort.


Most of the people in charge only care about which state the "bad"/"good" actors are from; preferably, "our guys" should be able to do everything, and "theirs" nothing.


Bunnie Huang is working on Betrusted [1], a communications device that is designed to be secure from state actors. The first step is Precursor (about: [2], purchase:[3]) the hardware and OS that will be the platform for the communications device.

It's designed to be secure even though it communicates via insecure wifi, for instance via tethering or at home. The CPU and most peripherals are in an FPGA with an auditable bitstream to program the device to ensure there are no back doors. Hardware and software are all open source. It has anti-tamper capability.

It looks well-thought-out.

1. https://betrusted.io/

2. https://www.bunniestudios.com/blog/?p=5921

3. https://www.crowdsupply.com/sutajio-kosagi/precursor


Unless you design the FPGA in-house and make it in your own fab, how would you know it's secure? Taiwan and Korea owe the US a lot of favors...


It's not rigorously provable, but to a large extent a "backdoored FPGA" is complete nonsense and not even worth considering.

The manufacturer/adversary knows nothing about your core design or where you'll place logic. Synthesis tools literally randomize routing and placement on each run, a natural consequence of routing being NP-hard. Further, once you add in the fact that FPGAs are often fairly high-volume goods, since the same chip is sold to thousands of different companies, it makes even less sense: now you need a backdoor that activates only on specific random designs but not on any other design in regular industry use, since an activation would lead to incorrect circuit behavior there. You'd also need this behavior to not show up under automated verification (you're running a verification suite against your chips, right??), which is verging on science fiction. While I suppose you could do something like this, it'd be wildly impractical in every sense of the word.


FPGAs just have a much lower essential complexity.

Adding one undocumented latch is enough to undermine an ASIC CPU. To do that to an FPGA, you'd have to know where the layout engine is putting the circuit you intend to pwn, and good luck with that staying still under any revision.

If this did become a problem, a technique analogous to memory randomization could be employed to make any given kernel unique from the hardware's perspective.
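The randomized-placement argument above can be illustrated with a toy model (hypothetical numbers chosen for illustration, nothing like real EDA tooling): a backdoor fabricated into one specific fabric site only matters if the security-critical logic happens to land on that exact site, and placement changes on every build.

```python
# Toy model: a fabricated backdoor is tied to one specific fabric site.
# Place-and-route effectively randomizes placement each run, so the odds
# that a security-critical net lands on the trapped site are small and
# differ on every rebuild.
import random

N_SITES = 10_000          # hypothetical number of logic sites in the FPGA
TRAPPED_SITE = 1234       # site the adversary backdoored at the fab

def place_design(n_nets, seed):
    """Randomized placement: each net gets a distinct site (toy model)."""
    rng = random.Random(seed)
    return rng.sample(range(N_SITES), n_nets)

# Across 2,000 rebuilds of a 50-net design, how often does any net land
# on the trapped site?  Expected rate is roughly 50 / 10,000 = 0.5%.
hits = sum(
    TRAPPED_SITE in place_design(n_nets=50, seed=s) for s in range(2_000)
)
print(hits / 2_000)
```

In a real flow the trigger would also have to survive verification and not disturb the thousands of unrelated designs sharing the same silicon, which is the stronger argument made above.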


You can't of course know, but modifying the mask of a modern chip (millions of dollars by itself) and slipping those masks (you need many, one per layer of material) into production to target a subset of devices, in a way that lets you inject faults and own the design the FPGA is emulating, is a nuclear-program level of effort. And I would imagine they would not risk it very often, if at all, due to the fallout it could cause.

A microcontroller on 130nm? Different story, probably. Still crazy hard.


I remember this talk at CCC two years ago. Has this device moved forward? I haven't heard anything since that CCC talk.


I want deniability. After watching the videos from Ukraine of Russians pulling citizens from cars and forcing them to unlock their phones at gunpoint -- I want a way to hand someone a phone, unlock it, and STILL be protected. I want my private things in a volume with deniability. TrueCrypt was close.


I would pay a good premium for an iPhone with a distress code that unlocks the phone into an environment with some fake but plausible contents. Bonus points if it optionally wipes the real user partition upon entering the code.


That sort of exists, but only sort of. If you press the lock button on the side of the iPhone five consecutive times, it will then require your passcode to unlock (hopefully a high entropy passphrase), and will disable biometric authentication until unlocked with a passcode. You can set the phone to wipe after 10 failed attempts to unlock.

You can also say "Hey Siri, whose phone is this?" and your phone will lock down the same way as described above.

Of course, this doesn't protect against the $5 wrench attack, but plausible deniability only goes so far in a targeted attack anyway. Depending on your local laws, law enforcement may not be able to compel you to provide your passphrase, but they can easily force you to use your biometric data, so this protects against that.


Buy a second phone; that's what some do in China due to an "anti-fraud" app, with slightly softer enforcement.


>>The dream I have is someone making a phone that is purpose-built to be secure against state actors

I just don't see how anyone could build such a thing. State-level actors have the tools necessary to force you or your company to build in any backdoor they want, and to prevent you from ever talking about it to anyone. The US certainly does, and could just force Apple to add a backdoor to this lockdown mode, and Apple could never even hint at its existence under legal threat.


Not just the US; so do the EU, any Five Eyes country, China, Korea, Taiwan. The US doesn't have a monopoly on backdoors, so let's always remember that and not exclude others or act like it's an island of corruption in a world of benevolent state actors.


I don't think Korea or Australia have the power to force Apple to build backdoors into their products. Maybe they'd get to use the US one if they asked nicely.


Australian law requires that Apple enable a "backdoor" when issued a Technical Capability Notice.

I don't know if one has been issued. But Apple still sells devices in Australia, so I'm assuming it has complied when asked.

See https://www.techtarget.com/searchsecurity/definition/Austral... for an overview.


TCNs can only be issued to CSPs, Apple isn't one.

There are enough issues with the spying bills introduced in the last few years without creating ones that don't exist.


Unless it was some kind of false flag to encourage trust, the US government asked less than nicely via the FBI, and Apple told them to pound sand.


Or they could just add an implant at the factory.

Why anyone allows their devices to be manufactured overseas is beyond me.


Looking forward to when Apple manufactures all iPhones in Sweden. Or did you mean the US, which remains stubbornly overseas and scary to the majority of the world's population?


I meant "not abroad".


I don't recall getting a vote. Do you even know of a single device made in a relatively "benevolent" state actor country? I would love to know. I would love it if there was a provably secure device manufactured in some remote Pacific island that has never projected itself as a malevolent international threat like 100% of the first world countries have.


We recently discovered one of our biggest geo-political enemies manufactures all our medicines. So that's crazy.


That's because you are unwilling to buy a $1500 phone when there is the same phone for $800.


Compare the price of the Librem 5 ($1,299) vs. the Librem 5 USA ($1,999).

The former is assembled in China, the latter in the US.


Might want to update those prices. Highest priced iPhone is $1,600.


I don't think the intent was to capture the price of the most-expensive model.


>Why anyone allows their devices to be manufactured overseas is beyond me

$$$$


Realistically you cannot win against a resourceful adversary every time. But merely painting the situation through the lens of premature surrender is also a disservice.

It will be interesting to see what third-party researchers discover about these new protections. You might remember that Apple rewrote the format parsers for iMessage in a memory-safe language with sandboxing (BlastDoor), and it was later discovered there was still plenty of attack surface in the unprotected parsers.


It might just be better to not rely on a phone, rather than rely on something achieving perfect security against the most malicious and capable of actors.

If I was really concerned about targeted cyber attacks against me, I think that I would exclusively use computers that I would buy from random people on Craigslist, take the hard drives out and only boot with live CDs using ram disks, and only connect via random public Wi-Fi locations.


> If I was really concerned about targeted cyber attacks against me, I think that I would exclusively use computers that I would buy from random people on Craigslist, take the hard drives out and only boot with live CDs using ram disks, and only connect via random public Wi-Fi locations.

Excellent precautions if you live and work in average middle-class suburbia and never go anywhere or do anything dangerous, controversial, or politically unpopular.

Lockdown Mode is not for you. It's for other people with different lives.


My point is that lockdown mode won't be good enough, which is why there is still a big bounty on it. And those wouldn't be excellent precautions if you weren't doing anything dangerous, because they would be a huge burden over just operating normally, above board.

How exactly does this method stop working in cities? You could have provided some content instead of a weirdly vitriolic dismissal.


The parent was simply explaining that Lockdown Mode is not intended for a person who buys computers from Craigslist to enforce security.

Your mitigation is not a mitigation against being individually targeted. There are so many attack vectors in a computer outside of the boot disk. Computers sold on Craigslist should not be considered secure, since there is no trust in the supply chain or the state of the hardware.

For example: if you are being directly targeted, a nation-state can purchase the computers from your local Craigslist, rewrite their BIOS, and list them for you to purchase. Then flood Craigslist with 100 other compromised machines.


Sure, they can do that, if they know that's what you're actually doing, and you just do the same thing stupidly on repeat in the same area.

All of that certainly sounds much more involved than sending a zero-day zero-click iMessage to the well known phone number of a dissident.


I was explaining why your use case of purchasing computers from Craigslist does not secure against nation-state targeted attacks. Now you are changing the conversation and saying there are other ways to attack. Of course there are many other attack vectors; I mentioned that. However, the conversation was about the true level of security provided by your mitigation.


I'm not changing the conversation, I'm pointing out the simple, currently-used-against-dissident attacks that are not possible if there isn't a clear connection between dissident and device. It certainly provides pretty good protection compared to having an always connected device with a unique ID carried on you at all times. Security is oftentimes about making reasonable tradeoffs based on your risk levels.

And I think you may be overestimating even the resources and capabilities of nations.

Let's say you lived in Philadelphia. You could drive down to Baltimore or up to NYC in 90 minutes. Within that range, there are literally over 10,000 individuals selling 1 or more laptops on craigslist and other sites that I did a cursory search over. And that's not even counting all of the small mom and pop shops that are selling laptops, as well as the big box stores.

How should the adversary state figure out which of those people you're going to purchase from? Should they purchase literally every laptop in the region? Okay then...what about when people start selling more laptops they had in storage because the market is red hot?

What do they even do when they have the laptops? Do they have exploits for every BIOS for every type of laptop for the past 15 years? How do they sell the laptop to me? Do they have their agents sell them? Do they have hundreds of agents who are deep undercover in America, who could lure me in?

I just don't see "buy every laptop in a region, exploit it, and resell it, hope your target picks one up" as a viable strategy, even for the wealthiest of nations, assuming you need to do it discreetly.


This is a fantasy that could only come from someone who doesn't actually need it. The people who actually need Lockdown Mode -- dissidents, organizers, journalists, etc. -- also need to communicate with normal people, and that means having a phone. If you're so unimportant that you can get away with your proposed computing scheme, you're not going to be the recipient of targeted cyber-attacks.


Well, I don't need it, but the people who do need it usually don't have much of a clue about infosec or cyber security.

What means of communication are available to you via a phone but not via an internet connected computer?

There isn't even anything intrinsically wrong with a cell phone, other than the fact that it encourages you to carry it everywhere and merge all communications with everyone onto a single device that is connected to the internet by default.


> "...a 'giving up' on complete security against nation-states..."

DEFINE:

State actor: [0]

One who is acting on "behalf" of a government...

What if said government were actually an arm of corporate entities, with the state acting at their behest?

Crazy, I know.

[0] https://en.wikipedia.org/wiki/State_actor


The dream I have is that they do not saddle us with taxes that they later use to violate our rights.

The first thing is to remove a lot of the economic and legislative power states have; hardening security in devices is also good news. But the problem is also that they have so much money and power that they can misspend money to target people and violate their rights, just because.


The potential a phone like that would have, if you explained to people how states can and do put their noses into their lives, is quite big IMHO. It is just that people have no idea how much can be taken from them through the info on a phone.


In general, I'm much more concerned with private actors than state actors. I'm aware of multiple ways in which companies use information to try to extract money from me, and they actively make my life worse in the attempt.

I have a much harder time thinking about how giving states access to my information has been harmful for me. I can think of potential harms if the state started doing religious or ethnic persecution (not trying to diminish the chance of this, but it's not a problem today), so I'm aware of potential threats. But other than that... what exactly should I be worried about?


The problem is what you say: political and religious persecution. Not today, but hey, who knows when, given the situation. So it is better that they cannot have our data, right? I mean, that is the safe side of the fence.

I am concerned about any actor as well, but take into account that a state has a huge amount of resources and, if motivated enough, can make your life worse than almost any private actor. It has happened in history; this is not something fictional.

The reach at which a private actor can harm you is much more limited in the general case IMHO.


The problem in 90% of cases is the user himself. Advanced attacks, such as spyware-for-hire with zero-days and the like, only affect a minority of users. For the vast majority, the vulnerabilities are much simpler: password reuse/carelessness, malware on other devices (laptop, etc.) that also have access to their data, willingly sharing too much information, and so on.

You don't need a special phone or hardened OS to defend against that, and users vulnerable to this will remain just as vulnerable regardless of how much hardening there is.


Most people couldn’t grasp the important ramifications even if you walked them through it from first principles. I’m not sure I can despite being very interested in information entropy my whole life.

A lot of people really don’t understand much at all about anything that they don’t constantly see and touch their whole lives. A lot of people truly just live in the moment constantly and use their higher order thinking for social navigation and sex.


>The dream I have is someone making a phone that is purpose-built to be secure against state actors

Here you go: https://puri.sm/products/librem-5.

FAQ: https://source.puri.sm/Librem5/community-wiki/-/wikis/Freque....


I feel like the closest you can come to the dream of a phone that is secure against state actors today would be a Google Pixel phone running GrapheneOS.



