
All I can say is GOOD.

If a person is suspected of committing a crime, and police obtain a specific, pointed warrant for information pertaining to that individual, tech companies have a moral obligation to comply, in the best interests of humanity.

If law enforcement or a spy agency asks for a dragnet warrant like "find me all of the people that might be guilty of XYZ" or "find me something this individual might be guilty of," tech companies have a moral obligation to resist, in the best interests of humanity.

The first is an example of the justice system working correctly in a free society; the second is an example of a totalitarian government seeking to frame individuals.



Not good. These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them. I already think too much about everything I put into ChatGPT, since my default assumption is it will all be made public. Now I also have to consider the possibility that random discussions will be used against me and taken out of context if I'm ever accused of committing a crime. (Like all the weird questions I ask about anonymous communications and encryption!) So everything I do with these tools will be with an eye towards the fact that it's all preserved and I'll have to explain it, which has a huge chilling effect on using the system. Just make it easy for me not to log history.


> These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them.

But you do, just like you have confidentiality in what you write in your diary.


> Not good. These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them.

Don't expect that from products with advertising business models


OpenAI and Anthropic do not have advertising business models


OpenAI is clearly moving in that direction; look at their recent verbiage and hiring.


yet, but surely they will move that way over time?


If you're not the customer, you're most likely the product.


I love the saying, but there's something of an exception here. Both companies very openly have singularity business models.


Business models make money. If your business model is theoretical, we call that "research" instead.

Real research is done on prior art. We research things because prior studies tell us that they are feasible; otherwise you are wasting valuable time on a snipe hunt. There is no reproducible or substantial evidence that AGI or singularities exist. It is another Big Tech marketing lie, no different from the reneged-on "don't be evil" or "privacy is a human right" mottos.


Thanks for the interesting response! I disagree on a few points, though:

  If your business model is theoretical, we call that "research" instead.
Are/were Uber and Lyft "research" companies, then? Is Reddit a research company? Edison Electric?

  There is no reproducible or substantial evidence that AGI or singularities exist
There is also no substantial evidence that the sun will rise tomorrow, that climate change will continue, or a million other things that are critical to science. Physical science is empirical in that it inherently requires physical experiments, but that is not the only cognitive tool in play by a long shot.

Regardless: tell them, not me! I'm just reporting what I'd say is an objective fact: they are planning based on scientific predictions of an intelligence explosion -- at least a soft/cybernetic one if not a scarily-fast/purely-digital one.

  It is another Big Tech marketing lie
I think there's a single fact that counters this common sentiment: there is no way in hell that they'd break ground on the largest private infrastructure projects in human history as a marketing stunt. Companies are woefully shortsighted these days, but that would be another level of foolishness altogether.

They very well may be wrong of course, but I think you're doing yourself a disservice to assume they're lying about it.


> There is also no substantial evidence that the sun will rise tomorrow

You are wasting my time with facetious arguments. There is no point having a rational discussion about the future potential of AI if we cannot take things like reality for granted.

If you want to argue in defense of AI, do it. Pointing to authority is one third of rhetoric, the other two thirds are emotional investment and logical coherence. If you don't have real proof that AGI exists, you're trying to make a point with emotions that people don't empathize with and authority that isn't authoritative. Cite sources, dammit.


> There is also no substantial evidence that the sun will rise tomorrow

the boosters are getting desperate


It’s just basic epistemology… like literally day one stuff. Too advanced for HN, I guess :(


name a similar sized tech company that hasn't


yet


Serious question. Why should someone have more privacy in a software system than they do within their home?


I have enormous privacy in my home. I can open up any book and read it with nobody logging what I read. I can destroy any notes I take and know they'll stay destroyed. I can even visit the library and do all these things in an environment with massive information access; only the card catalog usage might get logged, and I probably still don't have to tie usage to my identity because once upon a time it was totally normal to make knowledge tools publicly-accessible without the need for authentication credentials.


They maybe (not taking a stance) shouldn't, but I don't think this argument is as simple as it seems. Doing surveillance on someone's home generally requires a court order beforehand. And depending on the country (I don't believe this applies to the US), words spoken at home also enjoy extended legal protection, i.e. they can't subpoena a friend you had a discussion with.

Now the real question is, do you consider it a conversation or a letter. Any opened¹ letters you have lying around at home can be grabbed with a court-ordered search warrant. But a conversation—you might need the warrant beforehand? It's tricky.

(Again, exact legal situation depends on the country.)

¹ Secrecy of correspondence frequently only applies to letters in sealed envelopes. But then you can get another warrant for the correspondence…


Honest question: why consider the personal home, letters, or spoken words at all, considering most countries around the world already have ample and far more applicable laws/precedent for cloud-hosted private documents?

For the LLM input, that maps 1:1 to documents a person has written and uploaded to cloud storage. And I don't see how generated output could weigh into that at all.


A simple answer to this is: I use local storage or end-to-end encrypted cloud backup for private stuff, and I don't for work stuff. And I make those decisions on a document-by-document basis, since I have the choice of using both technologies.

The question you are asking is: should I approach my daily search tasks with the same degree of thoughtfulness and caution that I do with my document storage choices, and do I have the same options? And the answers I would give are:

* As a consumer I don't want to have to think about this. I want to be able to answer some private questions or have conversations with a trusted confidant without those conversations being logged to my identity.

* As an OpenAI executive, I would also probably not want my users to have to think about this risk, since a lot of the future value in AI assistants is the knowledge that you can trust them like members of your family. If OpenAI can't provide that, something else will.

* As a member of a society, I really do not love the idea that we're using legal standards developed for 1990s email to protect citizens from privacy violations involving technologies that can think and even testify against you.


> [...] should I approach my daily search tasks with the same degree of thoughtfulness and caution that I do with my document storage choices [...]

Then treat them with the same degree of thoughtfulness and caution you have treated web searches on Google, Bing, DuckDuckGo or Kagi for the last decade.

Again, there is no confidant or entity here, no more so than the search algorithms we have been using for decades are at least.

> I really do not love the idea that we're using legal standards developed for 1990s email to protect citizens [...]

Fair, but again, that is in no way connected to LLMs. I still see no reason presented why LLM input should be treated any differently to cloud hosted files or web search requests.

You want better privacy? Me too, but that is not in any way connected to or changed by LLMs being common place. Same logic I find any attempt to restrict a specific social media company for privacy and algorithmic concerns laughable, if the laws remain so that any local competitors are allowed to do the same invasions.


It's not at all clear how easy it is to obtain a user's search history, when users don't explicitly log in to those services (e.g., incognito/Private browsing), and don't keep history on their local device. I've been trying to find a single example of a court case where this happened, and my Google/ChatGPT searches are coming up completely empty. Tell me if you can find one.

The closest I can find is "keyword warrants" where police ask for users who searched on a given term, but that's not quite the same thing as an exhaustive search history.

Certainly my personal intuition is that historically there has been a lot of default privacy for non-logged in "incognito" web search, which used to be most search -- and is also I think why we came to trust search so much. I expect that will change going forward, and most LLMs require user logins right from the jump.

As far as the "I can see no reason" why LLMs should be treated differently than email, well, there are plenty of good reasons why we should. If you're saying "we can't change the law," you clearly aren't paying attention to how the law has been changing around tech priorities like cryptocurrency recently. AI is an even bigger priority, so a lot of opportunity for big legal changes. Now's the time to make proposals.


> [..] single example of a court case where this happened, and my Google/ChatGPT searches are coming up completely empty.

A massive amount; part of why I am both surprised and starting to feel like this discussion stems from some being unaware of the tracking they have tolerated for decades. These have been discussed to no end, covered by the usual suspects like the EFF, and constantly get (re)reported across the media in "Incognito mode is not incognito" pieces.

Heck, some I know from memory [0], the rest one could find with a simple ten sec search [1].

> [...] my personal intuition is that historically there has been a lot of default privacy for non-logged in "incognito" web search [...]

There has not. No need for intuition or to believe me, just read the privacy information Google provides [2] whenever you access their sites (whether in an incognito instance or otherwise) as part of the cookie banner (and in the decade beforehand if one looked for it).

> As far as the "I can see no reason" why LLMs should be treated differently than email, well, there are plenty of good reasons why we should.

Not email. Never said email. If you are going to use quotation marks, please quote accurately ("I still see no reason presented why LLM input should be treated any differently to cloud hosted files or web search requests." is what I wrote and means something very different), I do the same to you.

Neither you, nor anyone else has provided a reason why LLM input is inherently different to other files hosted online. Happy to read those "plenty of good reasons", but they have yet to be shared.

> If you're saying "we can't change the law," [...]

I did not. I asked why existing laws should be applied differently in case of LLM input and/or changes are somehow needed for LLMs specifically or suddenly.

This really seems like a case where LLMs, because they can be anthropomorphized, "feel" different to some, and that somehow warrants different treatment, when that is an illusion.

Considering your belief that "historically there has been a lot of default privacy for non-logged in "incognito" web search", it honestly sounds like you believe there is less room for stricter regulation than my long-immersed-in-this-topic self does, if I am being fully honest.

If I could implement any change, I would start with more consistent and transparent information of users at all times, which might dispel some misconceptions and help users make more informed decisions, even if they don't read the privacy policy.

Always liked a traffic light system as a concept. Then again, that is what Chrome already tells you when opening incognito mode and somehow there still seem to be assumptions that are not accurate about what that actually does and doesn't do.

TL;DR:

Yes, Search engine providers are able to identify users in incognito mode. Such tracking has always been public information, not least because they have to include it in their privacy policy.

Yes, such tracking has been used in court cases, in the US and elsewhere, to identify users and link them to their search requests done whilst using such modes.

No, LLM input is no different to search requests or files hosted online. Or at least, no one has said why LLM input is different, happy to hear arguments to the contrary though.

[0] https://www.classaction.org/media/brown-et-al-v-google-llc-e... (Google was forced to remediate billions (yes, with a b) of “Incognito” browsing records which according to plaintiffs precisely identified users at the time including being able to link them to their existing, not logged in, Google accounts. Note that this is one of two (US specific) cases I knew of the top of my head, the other was the Walshe murder, though there is no (public) information on whether incognito was used in that case: https://www.youtube.com/watch?v=cnA6XwVQUHY)

[1] https://law.justia.com/cases/colorado/supreme-court/2023/23s... and https://www.documentcloud.org/documents/23794040-j-s10032-22...

[2] https://policies.google.com/privacy?hl=en-US ("When you’re not signed in [...] we store the information we collect with unique identifiers tied to the browser, application, or device you’re using.", "This information is collected regardless of which browser or browser mode you use [...] third party sites and apps that integrate our services may still share information with Google.", I think you get the point. There never was any "default privacy for non-logged in "incognito" web search" and I can assure you, that data has always been more than sufficient to fingerprint a unique user)
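The "more than sufficient to fingerprint a unique user" claim in the last footnote can be made concrete with some back-of-envelope arithmetic. The per-attribute bit counts below are illustrative assumptions (roughly in the spirit of the EFF's Panopticlick-style estimates), not measured values:

```python
import math

# Each observable browser attribute contributes log2(1/p) bits of
# identifying entropy. These specific numbers are assumed for
# illustration, not measured.
attributes = {
    "user_agent": 10.0,
    "timezone": 3.0,
    "screen_resolution": 5.0,
    "installed_fonts": 13.0,
    "canvas_hash": 8.0,
}
total_bits = sum(attributes.values())  # 39 bits combined

# 2^39 distinct fingerprints vastly exceeds the number of internet
# users (~5 billion, about 2^32.2), so combined attributes can often
# single out one browser even with no login and no cookies.
print(total_bits)                              # 39.0
print(2 ** total_bits > 5_000_000_000)         # True
```

The point is that no single attribute identifies you; the combination does, which is why "not signed in" is a much weaker guarantee than it feels like.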


I was retained as an expert witness in some of the cases involving Google, so of course I’m aware that Google keeps logs. (In general on HN I’ve found it’s always helpful to assume the person you’re arguing with might be a domain expert on the topic you’re arguing about; it’s saved me some time in the past.)

But Google’s internal logging is not the question I’m asking. I’m saying: can you find a single criminal case in the literature where police caused Google to disgorge a complete browsing history on someone who took even modest steps not to record it (ie browsed logged out.) Other than keyword search warrants, there doesn’t seem to be much. This really surprised me, since as an expert I “know” that Google has enough internal data to reconstruct this information. Yet from the outside — the experience that matters to people - they’ve managed to operate a product where real-world privacy expectations have been pretty high if you take even modest steps. I think this is where we get many of our privacy expectations from: the actual real-world lived expectations of privacy are much closer to what we want than what’s theoretically possible, or what will be possible in a future LLM-enabled surveilled world.


> [...] can you find a single criminal case in the literature where police caused Google to disgorge a complete browsing history on someone who took even modest steps not to record it (ie browsed logged out.).

Can you first point to me making a claim that would require such a case? Or can you, alternatively, point to why there is a need for change rather than just continue to apply the same level of legal protections to LLM service providers?

The fact that this started with a report about a user's ChatGPT account and you felt the need to move us towards people using commercially hosted LLMs without an account (cause getting five queries in before OpenAI forces you to sign in is a realistic use case) I let slide up to this point, because whether we are talking about incognito mode access or a user with an account doesn't change that no one here says why using Chat.com is different to Google.com. I just wanted to call it into memory, cause it is not very expert-like, same with the (mis)quoting.

To make it simple, this is my question, the only thing I'd like to have answered:

When self hosted websites first became a thing, governments across the globe did not write new editorial legislation specific for these. They did just apply what was already established for print media/speech.

In this context, why should LLM input be handled differently to data hosted online?

Use file sharing services with no login requirement and the legal requirements there if this completely irrelevant red herring is absolutely necessary for you. Doesn't change anything about the question.


I think there is a non-zero chance they had no idea about this guy until OpenAI employees uncovered this, reported it, and additional cell phone data backed up the entire thing.


Why do employees need to be involved? It's AI. It is entirely capable of doing the surveillance, monitoring and reporting entirely by itself. If not now, then in the near future.


Just give the AI-to-user relationship a protection like attorney-client privilege.

Edit: AI has already passed the bar exam.


It only "passes the bar exam" when AI, or some other flawed process, is the examiner. See e.g. https://doi.org/10.1007/s10506-024-09396-9 for a debunk.


That's not a debunk. "Calls into question" does not equal "in truth, it failed the exam."


No, it’s a debunk. ChatGPT-4 scored in the 48th percentile (15th percentile in essays) amongst individuals that passed the bar exam. That’s very poor performance.


Thus it scored higher than almost half the humans who passed the test. In other words it too passed the bar.


Attorney-client privilege has limits. For obvious reasons I haven’t read any affidavits associated with the warrant, but it sure sounds like this would fall outside the bounds of attorney-client privilege.


With an attorney you have a clear sense of when you pass outside of that privilege. With a friend or colleague you have a social sense of what's going to remain confidential, plus memories aren't perfect. "Preserving, recording and reporting every word" is not the same as any of these things. This cannot be the world we all have to live in going forward; it's not safe or healthy.


Seems natural to extend privilege here. People are using it as a therapist.


There are a lot of counterarguments I could bring up, but just off the top, plainly: just because people use LLMs as therapists, lawyers, doctors, deities, doesn't make LLMs such.

My personal beliefs (we should not rely on models for such things at this stage, let's not anthropomorphize, etc.) to one side, let me ask: do you think if I used my friend Steve, who is not a lawyer but sounds very convincingly like one, to advise me on a legal dispute, that should be covered by attorney-client privilege?

Cause, even given the scenario that LLMs suddenly become perfectly reliable enough to verifiably carry out legal/medical/etc. services to a point where they can actually be accepted into day-to-day practice by actual professionals and the companies are willing to take on the financial risks of any malpractice for using their models in such areas (as part of enterprise offerings for an extra fee of course), that still wouldn't and shouldn't mean that your run-of-the-mill private ChatGPT instance has the same privileges or protections that we afford to e.g. patient data when handled digitally as part of medical practice. At best (again, I dislike anthropomorphizing models, but it is easier to talk about such a scenario this way), a hypothetical ChatGPT that provides 100% accurate legal information would be akin to a private person who just happens to know a lot about the law, but never got accredited and does not have the same responsibilities.

Again though, we are far from that hypothetical anyway; "people" using LLMs that way does not change this fact. I know, unfortunately, there are people who are convinced that current-day LLMs have already attained Godhood and are merely biding their time, and that doesn't become real either, just because they act according to their assumptions.

I really struggle to understand, and I have not seen any cogent arguments across this comment section for, why current-day LLMs in such a scenario should be treated differently to e.g. a PKM software or cloud-hosted diary, rather than being afforded the same legal protections (or lack thereof, depending on viewpoint, personal stance and your local data privacy laws).


You'll find these laws privileging certain folks are contoured and controlled by the individuals who have already been granted such privilege to discourage and limit competition. Not because it's good in any way for the client.

Protectionism hurts all of society to benefit a few.


Perhaps this is a language barrier, but I genuinely do not understand what is meant by this. Like, what does this have to do with protectionism, who are the "folks" in this case, etc. Honestly asking.


Doctors control who can be a doctor, what is required to be a doctor, what doctors can and can't do, and that people are forced to go to them for healthcare... all to protect their personal income. Not to better healthcare. Not to expand access to healthcare. But precisely to make it cost more to get. They are hurting society to benefit themselves.

Milton Friedman explains it to doctors here: https://m.youtube.com/watch?v=ss5PxPlnmFk


Yeah, politely, respectfully, no.

Don't know where to start, but I want to assure you, no matter where on this planet you live, Medical Doctors are generally not at fault for high costs of care. Depending on which health care system we are talking about, the particulars may be different, but no, MDs are not interested in worsening patient care for their own benefit. Kinda difficult considering the amount of uncompensated labor and stress compared to other higher paying occupations. Ask a trainee/resident/equivalent for your local health care system if you want some details.

And people are "forced" to go to an MD for medical treatment in the same way they are "forced" to go to any other domain specific expert, it is where the experience and liability lie because they have undertaken the time, training and exams to ideally assure a specific level of care.

Incidentally, this has absolutely zero to do with LLMs and the fact that this is cloud-hosted software, not an entity, being or anything of the sort, so it shouldn't receive any special considerations beyond what we afford to cloud-hosted content. I couldn't find anything on patient data processing in that Milton Friedman collection you linked, and as that was his area of work, it was purely US-centric. Medical care is however the purview of medical professionals outside the US as well, including in countries with far better patient outcomes. If there is an applicable argument, just quote it directly over linking a collection of clips.

To bring this back to the topic at hand, LLMs can and are being used in Medical Practice already. And neither did Doctors prevent that, nor did that require a law change, because, as stated before, it is merely data being input and processed. There are EU MDRd apps for skin cancer, there are on-prem LLM solutions that adhere to existing patient privacy regulations, etc.

Basically, Doctors do not stand in the way of LLM usage (neither could they, nor do they have the time) and even if they wanted to, LLM input and output is just data and gets treated accordingly.


I can represent myself in court, but I can't prescribe my own medication. If one does not go to the doctor to get those drugs they'll die, so yes: forced.

All you assured me of is that you didn't watch the video.


How do you square this with Apple's pushback a few years back against the FBI, who asked for a specific individual's details?

I'm not taking sides, but it sounds like if ChatGPT cooperating with LE is a Good Thing (TM), then Apple making a public spectacle of how they are not going to cooperate is .. bad?

I'm fully aware that Apple might not even be able to provide them the information, which is a separate conversation.


>How do you square this with Apple's pushback few years back against FBI who asked for a specific individual's details.

See: https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...

>Most of these seek to compel Apple "to use its existing capabilities to extract data like contacts, photos and calls from locked iPhones running on operating systems iOS 7 and older" in order to assist in criminal investigations and prosecutions. A few requests, however, involve phones with more extensive security protections, which Apple has no current ability to break. These orders would compel Apple to write new software that would let the government bypass these devices' security and unlock the phones.[3]

That's very different from OpenAI dumping some rows from their database. If ChatGPT were end-to-end encrypted and they wanted OpenAI to backdoor their app, I would be equally opposed.


Interesting that it wound up not being Cellebrite, I thought for years it was, I wonder if Cellebrite had people lie to the press that it was them. Really effective marketing.

I agree, the line is at messing with end-to-end encryption. If your E2EE has a backdoor, IT'S NOT END-TO-END ENCRYPTION. Thanks.


It's not exactly E2EE. iPhone storage is locked with a 6-digit numeric passcode in most cases, which is basically no entropy. The whole thing relies on hardware security (the enclave). At least in older phones, that just meant security through obscurity since Apple's trade secrets were enough to unlock it, but maybe newer ones can't be unlocked even by Apple.
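The "basically no entropy" point is easy to quantify. A rough sketch (the guess rate here is a made-up illustrative figure; in practice it is the Secure Enclave's rate-limiting and wipe-after-N-attempts policy, not the PIN itself, that makes brute force impractical):

```python
import math

# A 6-digit numeric passcode has only 10^6 possible values.
combinations = 10 ** 6
entropy_bits = math.log2(combinations)  # ~19.9 bits, tiny by key standards

# Hypothetical, illustrative guess rate with hardware throttling
# removed (the assumption the whole dispute was about):
guesses_per_second = 1000
worst_case_seconds = combinations / guesses_per_second

print(f"{entropy_bits:.1f} bits")              # 19.9 bits
print(f"{worst_case_seconds / 60:.0f} minutes")  # 17 minutes worst case
```

Which is why the security of the whole scheme lives in the enclave's throttling rather than in the passcode's keyspace.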


With CALEA and related laws, companies that don't keep logs can be compelled to surveil certain users from that point forward, even if that means installing hardware/software that keeps logs on them.


The difference is that in this case OpenAI was able to produce the requested information without compromising security for their other customers.


Right, for the OpenAI case to be analogous, they would have to switch to a system where your chats are homomorphically encrypted -- i.e. OpenAI does all its operations without knowing either the input or output plaintext. In that case, they'd only have encrypted chats to begin with, and would have to somehow get your key to comply with a warrant for the plaintext.

And note: the above scenario is not likely anywhere in the near future, because homomorphic encryption has something like a million times overhead, and requires you to hit the entire database on every request, when state-of-the-art LLM systems are already pushing the limits of computation.
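What "operating without knowing the plaintext" means can be shown with a toy additively homomorphic scheme. This is a miniature Paillier cryptosystem with tiny primes, purely for readability and nowhere near secure; the only point is that the party multiplying the ciphertexts never sees a plaintext, yet the result decrypts to the sum:

```python
import math
import random

# Toy Paillier setup (tiny primes, NOT secure).
p, q = 61, 53
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # standard generator choice
lam = math.lcm(p - 1, q - 1)    # private key component
# mu = (L(g^lam mod n^2))^-1 mod n, with L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1, c2 = encrypt(12), encrypt(30)
# Multiplying ciphertexts adds the underlying plaintexts:
assert decrypt((c1 * c2) % n2) == 42
```

Evaluating an entire LLM forward pass this way requires *fully* homomorphic encryption (arbitrary additions and multiplications), which is where the enormous overhead mentioned above comes from; this additive toy is only the simplest corner of that picture.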


Yes. I'm glad the FBI was able to crack the phone without Apple's help in that San Bernardino case, which humiliated Apple as a little bonus.

Apple also tried to freak the public out saying the FBI wanted a backdoor added, which was inaccurate. You can't retroactively add a backdoor, that's the whole point of it. FBI wanted Apple to unlock a specific phone, which Apple said they were capable of doing already.


With my current knowledge of the case, I'd say Apple was clearly in the moral wrong and it's a pretty dark mark in their past.

My understanding is the suspect was detained and law enforcement was not asking for a dragnet (at least that's what they stated publicly), and they were asking for a tool for a specific phone. Apple stated the FBI was asking them to backdoor all iPhones, then the FBI countered and said that's not what they were asking for. Apple then marched triumphantly into the moral sunset over the innocent victims; meanwhile the FBI then sent funds to a dubious group with questionable ethics and ties to authoritarian regimes.

In my opinion, Apple should have expediently helped here, if for no other reason than to prevent the funding of groups that support dragnets, but also out of moral obligation to the victims.


Seeing how strained your good-faith interpretation is has further entrenched my belief that San Bernardino was a false flag operation by the FBI.

There is no world in which a post-PRISM compliant Apple cannot be coerced by the feds for an investigation. It's just a matter of how much pressure the FBI wanted to apply; Apple's colossal marketing win is the sort of thing that you would invent if you wanted to manufacture consumer trust, not "prove" anything to cryptographers. Playing devil's advocate, "authoritarian regimes" are exactly the sort of place you would send the iPhone to if you already had the information and wanted to pretend like it was hard to access.

If we assume a worst-case-scenario where Apple was already under coercion by the FBI, everything they did covers up any potential wrongdoing. It was all talk, no walk. Neither side had to show any accountability, and everyone can go on happily using their devices for private purposes.


Are you certain Apple could unlock this phone (short of making a software change that compromised all iPhones)?


And why would it matter? Even if the capability to create a magic key that unlocked a specific phone remained entirely within a company's hands for future use, why wouldn't the courts just continue to ask them to use it? It's not like the victims of all sorts of other crimes don't similarly deserve justice.

Law enforcement at the time was even admitting (which we'd later find out to be correct) that there likely was nothing of value on the phone. It seems fairly obvious that the FBI was trying to use a high profile case to force a paradigm shift. Perhaps we can argue it'd be a good and just one, but arguing that they weren't seems not right.


Apple said they could do it. And they didn't tell the FBI they can't do it, they said they don't want to.


I make no claim either way nor do I have insider knowledge of what they could and could not do.


Neither do I have inside knowledge.

Instead I am only aware from what has been published that there is the so-called "Secure Enclave" chip in the iPhone hardware manifest that will only give up its secrets to a biometric match, or a user password. That would seem to leave Apple's hands tied?


> If law enforcement or spy agency asked for a dragnet warrant like "find me all of the people that might be guilty of XYZ" or "find me something this individual might be guilty of"; tech companies have a moral obligation to resist, in the best interest of humanity.

There is more evidence they will do this than that they won't. ChatGPT is a giant dragnet, and 15 years ago I would've argued it's probably entirely operated and funded by the NSA. The police can already obtain a "geofenced warrant" today. We're not more than one senator up for re-election away from having a new law forced down our throats "for the children" that enables them to mine OpenAI data. That is, if they don't already have a Room 641A located in their HQ.

People pour their lives out into these fuzzy word predictors. OpenAI is holding a treasure trove of personal data, personality data, and other data that could be used for all kinds of intelligence work.

This is objectively bad regardless of how bad the criminal is. The last nearly 40 years of history, and especially the post-9/11 world, show that if we don't stand up for these people, the government will tread all over our most fundamental rights in the name of children/security/etc.

Basic rights aren't determined by how "good people" use them. They are entirely determined by how we treat "bad people" under them.


Just wait until AI is advanced enough that you can buy an AI best friend who will be with you all your life. I'm reminded of K's AI hologram companion Joi in Blade Runner 2049. The only thing they got wrong was that she was not collecting data for the megacorp.

Thinking again, the AI will certainly be "free".


I don't think anyone has a moral obligation to do the state's bidding, and if you think these tools will only be used morally against "bad guys", you have not been paying attention to recent events.

I also don't think the interests of the state are "in the best interests of humanity".

Sometimes the price of having nice things and them remaining nice means that people you don't like can use them, too.


Does this imply that the tech company has the moral obligation to evaluate the merits of each warrant on a case-by-case basis?


They should resist fishing expeditions. I don't think that's that hard.


Is that only a function of the number of individuals targeted by a group of warrants? What determines "group membership" of a warrant? It seems like it actually is hard to determine, both for the legal system (there are many controversies in the U.S. about whether dragnet warrants are constitutional and what constitutes a dragnet warrant) and for a company receiving these warrants.


Not fair. It means idiots who type "how do I hide a body" get caught, while smart types from HN can hide their traces. In a fair society, both dumb and smart criminals should have an equal chance of getting caught. Imagine, for example, if you could only use the Internet after identification. And maybe there should be reduced punishment for the dumb types, for humanitarian reasons.


Absurd. Unrealistic.

Does everyone have the same earning potential regardless of their skill? The same goes for stealing potential.

Edit: on the flip side, white-collar crimes leave a paper trail that traditional smash-and-grab crimes do not, so more white-collar criminals should be getting caught and convicted now.


There are many routes that the government has to court order/warrant/subpoena information from tech companies.

The tech companies have just about zero ability to resist.

There should likely be legislation enacted that raises chat logs to the level of psychotherapist-patient privilege.


> There should likely be legislation enacted that raises chat logs to the level of psychotherapist-patient privilege.

Medical records are up for grabs when it comes to investigations/discovery/subpoenas/warrants/etc, it's one of the privacy exceptions in HIPAA.

They can also be used to try to identify alleged criminals, missing people, fugitives, etc. For example, if they have DNA samples or bite impressions (even though the latter are BS), law enforcement can even demand matching medical records without warrants.


> the Justice Department’s allegations against Rinderknecht are supported by evidence found on his phone

Sounds like they got the info from his phone, not taken from any servers, so this is likely not an example of a tech company "complying".


Administrative or judicial warrant? What if they deceived the judge?


The issue is that law enforcement personnel are going to do combinations of both. Corruption is real, and companies are just made of people. Things can be done in relative secret without arousing controversy. This is the same logic libertarian shitheads use for why we shouldn't provide kids a school lunch.

Human nature doesn't follow your shitfuck ideal-driven rules, friend. I guarantee that some day you'll find that out the hard way.



