As a corporation, Facebook truly seems to be trying to improve its behavior for the benefit of society at large, but the company is finding it very difficult to do The Right Thing, because it would reduce future revenue growth, shrink long-term profitability, and hurt the company's competitive standing against the many other companies that are trying to eat Facebook's lunch every day.
In the extreme, Facebook's choices appear to be: (a) act in the best interest of society and get f#cked by competitors; or (b) remain a dominant force in the market, but as a side effect, f#ck everybody. All options for Facebook appear to be a mix of those two horrible choices.
> As a corporation, Facebook truly seems to be trying to improve its behavior for the benefit of society at large
Sorry, that's a hard disagree from me and I think you couldn't be further off the mark.
Facebook's entire raison d'etre from day 1 is to take as much data from as many sources as possible, then use the most powerful computers, programmed by the brightest minds, to use that data to maximise their profit with no regard for the damage that may do to you or society at large. Look at what Jonathan Haidt says about the link between social media and mental health problems.
They use the dirtiest psychological tactics to ensure that you never put down your phone and to ensure that you only see what they deem to have maximum engagement (whatever the f that means) and only put their hands up to any nefarious shit when a spotlight is on them for it.
I can understand Facebook wanting to clean up their image from a PR perspective but it's nothing to do with altruism or wanting to serve the public better... if they can make more money from looking like a decent bunch, they'll do it.
> Facebook's entire raison d'etre from day 1 is to take as much data from as many sources as possible, then use the most powerful computers, programmed by the brightest minds, to use that data to maximise their profit
My impression was that on Day 1 it was really just to rate the attractiveness of the coeds at Harvard.
My point is that Facebook can improve its behavior only by putting its business at risk.
If Zuckerberg, Sandberg, et al could improve Facebook's behavior without putting the business at risk, they would do it in a heartbeat. But it appears they can't.
Their efforts are thus sincere but highly constrained: They will never voluntarily do anything that would put the business -- their life's work -- at risk.
If I may use an imperfect analogy: Facebook is a "polluter of society" that can't afford to stop polluting until all its competitors are forced to stop polluting society too.
> If Zuckerberg, Sandberg, et al could improve Facebook's behavior without putting the business at risk, they would do it in a heartbeat. But it appears they can't.
I think the point is they literally can't improve Facebook's behavior because their behavior is their business. They don't have any products that can function without their panopticon and Skinner boxes.
It's not that they couldn't compete if they changed their behavior; it's that they would cease to be a viable business that can even operate.
Of course it’s profitable to be a monopoly, and doing the “right thing” might mean allowing competitors in, but one could basically never truly do that while the bottom line is the most important thing.
It’s so strange to me that so many truly believe a profit motive is all that’s needed to have good outcomes. It was never so; only starting in the ’80s did companies put shareholder value over everything else.
> It’s so strange to me that so many truly believe a profit motive is all that’s needed to have good outcomes.
Not strange. In the U.S. at least, we're acculturated to this ideology our whole lives through education and media.
That said, many or most of the wealthiest and most influential market participants, Fortune 500 CEOs, and academics from top business/econ programs understand the importance of trust in the economy and the role that an effective government (contracts, the rule of law, and regulation) plays in enabling that trust.
If you or your industry are the target of regulation, though, government BAD, regulation BAD, regardless of what you philosophically believe.
People still think they’re getting a good deal, which is mostly laughable. I’ve been in industries where I wanted more regulation. It’s always shocking to me how much unreturned loyalty businesses get from their employees.
> Facebook can improve its behavior only by putting its business at risk. ... They will never voluntarily do anything that would put the business -- their life's work -- at risk.
I don't understand how, given this, you could possibly sincerely open your original post with
> As a corporation, Facebook truly seems to be trying to improve its behavior for the benefit of society at large
I can't tell what your position is. Your opening sentence sounds like you think Facebook should be given the benefit of the doubt because they mean well, when the rest of what you're saying is that Facebook needs to have costs imposed on it in order to enable it to improve its behaviour.
Your original point says that they are looking to change for the benefit of mankind...
They aren't. They have no sincerity. They will do what makes money. Period.
They have shown time and again they don't give two fucks about humanity, mental health, regulators etc until they are about to generate bad PR from it.
The government seems to be aiming at them right now although I suspect that once the brown envelopes stuffed with cash start passing around that will be diluted down to "honest gov, we'll start doing right!".
Their clock is about to be cleaned by Apple when it rolls out the changes requiring apps to tell people what data is being harvested... Facebook will quite rightly be worried right now.
If Google did the same... well, we'd see some folks bailing quick-sharp I reckon... rats and sinking ships and all that.
I'm finding these two sentences hard to reconcile.
> If Zuckerberg, Sandberg, et al could improve Facebook's behavior without putting the business at risk, they would do it in a heartbeat. But it appears they can't.
> Facebook truly seems to be trying to improve its behavior for the benefit of society at large
Imagine yourself as the CEO of a manufacturer that pollutes rivers, and you sincerely want to stop polluting, but if you stop polluting, the company's costs would increase to the point it would no longer be able to compete against all the other companies that continue polluting -- and they're trying to eat your lunch every day. So, if you stop polluting you would quickly lose relevance, be forced to shut down plants, be forced to fire lots of decent people, and eventually go out of business.
Moreover, when the company was started, no one anywhere realized that polluting rivers was so bad for everyone. No one knew back then; no one thought of it as a problem.
Your choices are: (a) act in the best interest of society and get f#cked by competitors; or (b) remain a dominant force in the market, but as a side effect, f#ck everybody. All your options appear to be a mix of those two horrible choices.
Not just a company that pollutes rivers, but Filthy Frank's River Wreckers Pollution Distribution Specialists LLC, a company whose entire core mission and reason for existing is the polluting of rivers.
I can imagine it, of course, but can't see parallels to Mark Zuckerberg. He hasn't done a substantive thing to show society's health is a priority. A tax-break foundation that works on ways to spread Facebook further is not it.
Aggressively lobby for criminal penalties (as in, all the CXOs go to prison) for any company that continues to pollute after <date the law passes + 1 year or so>, while loudly telling everyone that you will stop polluting as soon as your competitors are forced to do likewise.
Please cite any privacy legislation supported by Facebook/Zuckerberg under which CEOs or other responsible parties (not disposable middle managers) actually end up in prison (not pittance fines) for violations.
I think as many companies have started to do today, one can spin green manufacturing as a PR thing, and possibly market your product towards customers who are willing to pay more for greener manufacturing practices. Along the way, hopefully you could invest in green manufacturing improvements to make the tech cheaper at scale.
I don’t think it has to be an a or b situation. I think the best and brightest could solve the problem without decimating their profits. Perhaps I am not that smart, but surely Facebook is. (They have significantly more resources than their competitors, I imagine.)
Is it really true that Facebook would go bankrupt by being more ethical? I’m not so sure. They have a captive user base. A lot of older folks who aren’t great with tech are on Facebook, and they won’t be going anywhere that quickly. With as many users as they have — a seventh of the world’s population - I can’t imagine people will leave in droves that quickly. One of Facebook’s biggest advantages is the network effect of “everyone you know is already here”.
My opinion is that Facebook does in fact have the resources to be more ethical without losing so much profit that they go out of business.
I think the problem partially is maximizing revenue at the cost of everything else. I’m not sure I buy into the idea that they must maximize revenue. Couldn’t they be more ethical at the cost of some money, and then that new revenue amount still is enough to cover expenses?
> I don’t think it has to be an a or b situation. I think the best and brightest could solve the problem without decimating their profits.
I hope you're right! But so far, it appears no one at Facebook has figured out how to escape this "tyranny of horrible choices."
> I think the problem partially is maximizing revenue at the cost of everything else.
I disagree. I think the problem, from the perspective of Facebook, is figuring out how to do The Right Thing while remaining relevant and competitive against the many companies trying to dislodge Facebook from its dominant position. Many of Facebook's users are addicted to the social-media-crack; if Facebook stops providing it, they will migrate to other social networks that provide it. And many of Facebook's customers -- advertisers and propagandists -- want Facebook to continue to modify user behavior on their behalf; and if Facebook stops doing that, those customers will migrate to the competition.
> the many companies trying to dislodge Facebook from its dominant position
Such as? Can you find me one company that provides a similar feature set to Facebook (cross-platform messaging & calling, personal & business pages with unlimited media uploads, groups, marketplace, dating and the network effects of everyone you know already being on it with their real name and no usernames to worry about)?
Furthermore, if Facebook stops or tones down paid advertising and unpaid spam/clickbait it will be yet another reason for users to prefer them versus the competition.
> Facebook's users are addicted to the social-media-crack
Are they? Facebook users are primarily there for keeping in touch with their friends, and happen to get sucked down the rabbit hole of bullshit by Facebook's algorithms which prioritizes engagement. Removing the engagement-generating crap won't suddenly remove the need for people to socialize.
> many of Facebook's customers -- advertisers and propagandists -- want Facebook to continue to modify user behavior on their behalf; and if Facebook stops doing that, those customers will migrate to the competition.
These customers want to go where the users are. If Facebook stops advertising but all the users remain (partly because of the lack of advertising), advertisers do not have a magic wand to move people across to another platform where they can advertise, short of paying those people to move (in which case it would be a win-win situation as people would be compensated for their time & attention).
I would choose to use my skills working for a different company in a different industry.
If Zuckerberg really had a problem with what Facebook was doing, but didn’t feel he could ethically risk the company’s growth and financial performance by changing its direction, he could quit and sell all his shares. He might take a financial hit, but he would still be one of the world’s richest people.
> If Zuckerberg really had a problem with what Facebook was doing, but didn’t feel he could ethically risk the company’s growth and financial performance by changing its direction, he could quit and sell all his shares. He might take a financial hit, but he would still be one of the world’s richest people.
He is one of the world's richest people. He seems to have concern (or at least feigns it) for the problems Facebook is causing. If he resigns and allows someone else, who is more hungry and motivated by money to take over, you believe Facebook's behavior would improve?
Imagine that you invent the idea of polluting rivers, and you set up a company to monopolize polluting rivers, and you tell people for decades that you want to stop polluting rivers, but every year the rivers get polluted by you.
The logical conclusion of your argument is this - Facebook cannot be operated safely and still make a profit and should shut down as soon as possible.
> The logical conclusion of your argument is this - Facebook cannot be operated safely and still make a profit and should shut down as soon as possible.
Personally, I wouldn't even start or be part of such a company, simple as that. I cannot imagine somebody polluting rivers on purpose just to make money, but those people exist regardless. So this question is moot for quite a few people (me included) who could never get into this mindset or predict what they would do.
It doesn’t matter to the people forced to drink the polluted river water whether the person doing the polluting feels bad about it or doesn’t. Feeling bad does not absolve the CEO of anything.
This analogy also ignores that Facebook is putting huge amounts of money into lobbying efforts to ensure that they continue to be able to figuratively pollute the river.
> Moreover, when the company was started, no one anywhere realized that polluting rivers was so bad for everyone. No one knew back then; no one thought of it as a problem.
Zuckerberg called early users “dumb fucks” for trusting him with their data. That’s the demeanour of someone with bad (selfish) intentions from the start. Just because the damage he ended up doing is worse than the initial damage he predicted, it doesn’t excuse his continued morally bankrupt behaviour.
When you are built on a certain core, you can't change who/what you are.
Facebook is built on getting / using user data to determine what to show.
Google is built on getting / using user data to determine what to show.
To betray those goals wouldn't make sense. How they go about it can change. Facebook has always gone hard and fast. They treat you more like a raw piece of meat. They will run A/B tests on you and treat you like a variable in an ongoing experiment. Google has such reach that they can make minor changes and capture vast amounts of data.
Other companies do the same, but instead of using the data to determine what to show you, they use it to determine what ads look like so you'll buy their product.
I don't think the oil industry's behavior is justifiable on risk-mitigation grounds. What they can do more of is invest in R&D for moonshot energy projects, or invest in existing green-energy areas. I believe there are just better short-term returns on PR (deceiving the public as much as possible), buying help and protection from regulators, and the status quo generally. I also believe the powers that be in that industry, like many others, are old, uninspired, and unreasonably resistant to change. Like Zuckerberg, they're more afraid of lost profit than of destroying the world, whether by a lot or a little.
I think you are making broad statements because you don't like Facebook. That's fine, but it fails to add anything.
So here's the thing, Facebook has a few big issues they can see:
1) the FB brand is toxic
2) the Facebook app/site is being seen as a ghetto for extremists and arseholes
3) Instagram is a fragile cash cow.
4) AR is the next platform, which they have to nail to stay in the game, which requires trust.
We all know that Facebook proper is full of arseholes. It's profitable for now, but if it continues to be (or be seen as) a ghetto for Karens and racist Kevs, then people won't advertise. They also know that content moderation is fucking hard.
However, those are the excuses. They have good, clear, well written content guidelines. The issue is, they are not enforced equally. Trump broke the rules, he should have had his pubic hair pulled out. However because he's a politician they don't want to be accused of editorialising content.
This is because the management team are moral cowards. They want to do the right thing, but they are scared of the blowback from politicians, who have the power to really shit on their parade. Add to this mix a strong, hysterical bunch of activists shouting at them, inside and out.
This causes the management team to withdraw into their shell. They see themselves as an island of reasonableness. The oversight committee is a step toward being a useful tool to measure policy change. However, it requires trust.
The bottom line is this: being a mirror of society is tough, because society has a whole bunch of toxic noise shitbags. Modelling yourself on free expression means allowing these dicks to tarnish you.
Also - you could say the same and yet much much more about Google.
Google represents 10x the threat of FB because we all use it and essentially need it - and it's more broadly deployed.
FB is just FB. Use it or not.
FB can 'have its cake and eat it'.
There's nothing wrong with using learned user behaviours to place some ads. There are reasonably narrow contexts in which privacy really isn't invaded, there's no harm really.
Where it 'gets bad' is when they follow you across the internet (like Google does ...), or when they use 'nasty algs' for interactivity (I don't think this is as bad as it seems).
Google is reading all your email and knows every search you ever made, I find that far more invasive.
FB has overstepped their bounds, but there's no reason they can't step back inside them.
As far as 'anti trust' - it makes very little difference that FB owns WhatsApp and Insta. Break them up - very little will change.
The 'anti trust' issue is almost entirely with Google and Apple.
Google uses their search to promote their own products over others, rips off content for their search summaries, and uses Chrome and Android as a kind of 'market dumping' to ensure Search success.
Apple's 'App Store Only' rule for content distribution is questionable.
> As a corporation, Facebook truly seems to be trying to improve its behavior for the benefit of society at large.
No, it's not. Their social network is engineered entirely and unsurprisingly in support of their bottom line. A social network does not need to be a centralized free-for-all like Facebook is, but Facebook is that way because that is what works best for their ad revenue. The rapid proliferation of disinformation and hate speech is a consequence of this broken system, but the company has always treated those very real problems as a necessary evil, a nuisance to be patched up with as little effort/cost as possible to keep the ball rolling. This does not benefit anybody but them.
I see you're getting some downvotes, but I agree with you. Put another way, though: the problem is not Zuckerberg per se, it's the business model of advertising-supported, general-purpose social networks. If you took everybody out of Facebook and replaced them all with other equally qualified people, they would behave the same, as an organization. The problem is not Who, it's What. You can't "fix" Facebook, because the problem is pretty fundamental to what they are.
Zuckerberg holds ~90% of the Class B shares, which carry 10x the voting rights of normal shares and give him about 4 billion votes. There are roughly 2.4 billion Class A shares. He has stacked the board with loyalist yes-men. It is a dictatorship bent on maximizing profit. The business model and how they operate are defined by Zuckerberg. It seems to me that he should hold the majority of the blame.
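The comment's figures are rough approximations, not exact filings, but a quick back-of-the-envelope calculation shows why they imply majority control either way:

```python
# Back-of-the-envelope voting math using the approximate figures above
# (illustrative only, not exact SEC-filing numbers): Zuckerberg's Class B
# stake is said to carry ~4 billion votes at 10 votes/share, against
# ~2.4 billion one-vote-each Class A shares.
zuck_class_b_votes = 4.0e9                      # ~90% of all Class B votes
total_class_b_votes = zuck_class_b_votes / 0.9  # remaining 10% held by others
class_a_votes = 2.4e9                           # ~2.4B Class A shares, 1 vote each
total_votes = total_class_b_votes + class_a_votes
zuck_share = zuck_class_b_votes / total_votes
print(f"Zuckerberg's voting share: {zuck_share:.0%}")  # prints ~58%
```

Even if the exact share counts are off by a fair margin, the 10x multiplier keeps the result comfortably above 50%, which is the point about unilateral control.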
Strong disagree that a Facebook staffed with different folks would behave the same. Zuckerberg has a majority of the votes due to his super-voting shares. Leadership matters; individual actions matter. Twitter, for all its faults, has a leader who is leading the company down a different path than pure money-chasing and dominance, and I can imagine a differently led Twitter making more money and being worse for society. Zuckerberg is motivated by dominance, nothing else, and he has the ability to change course. Their actions are constrained along a set of possible outcomes, but the leaders of both companies are choosing where in that set they want to land.
Exactly, except that we don't have to imagine anything. If Facebook were to disappear today, there already exist many companies with similar business models willing to take its place. Some more willing to cross the line than Facebook.
This is the critical point virtually all criticism of Facebook often fails to address. Sure, you could regulate to death/kill Facebook tomorrow with legislation in country X. All that happens is a clone launches immediately overnight from a country with less onerous regulation, one that anglosphere legal systems will have even less direct control over than the Facebook we have today.
FB, for all its flaws, is at least still based in a democratic nation and operates within a (_relatively speaking_) fair legal system. That the FEC is able to demand (and force implementation of!) regulation already at FB is evidence this works, at least a little. Better the devil you know as they say...
We can't remove the natural human desire to connect to one another on the internet (and associated problems). For me personally, the cat is out of the bag - you can't rewind time and uninvent the underlying communication infrastructure. If people want a social network, the internet will make it for them again and again and host from whatever polity/region allows.
Regulations are never for particular companies, that would be legally untenable. Whatever regulation a country comes up with for Facebook will also affect any other company trying to get into their footsteps. Regulation is the only way to prevent companies from abusing their positions of power. The idea is illusory that they would do it voluntarily even if they could make a profit. Some of them might under some leadership, but not in general and not all of them.
Yes. This is surveillance capitalism taken to its extreme. Facebook didn’t invent it, they’re just doing it in a way that makes the consequences more difficult to ignore than Google, which has been able to largely sidestep the blowback by being mission critical to so many people and also having massive goodwill projects that don’t directly point to being profit driven.
It’s up to citizens of the US and EU to rein this in. We can hate the player, but we gotta hate the game even more.
I would argue they basically invented it. A lot of the dirty tactics in play today are because companies feel the need to catch up to Facebook, who set the ecosystem as it is by continually being dishonest and predatory
>Surveillance capitalism was invented around 2001 as the solution to financial emergency in the teeth of the dotcom bust when the fledgling company faced the loss of investor confidence. As investor pressure mounted, Google’s leaders abandoned their declared antipathy toward advertising. Instead they decided to boost ad revenue by using their exclusive access to user data logs (once known as “data exhaust”) in combination with their already substantial analytical capabilities and computational power, to generate predictions of user click-through rates, taken as a signal of an ad’s relevance.
>Operationally this meant that Google would both repurpose its growing cache of behavioural data, now put to work as a behavioural data surplus, and develop methods to aggressively seek new sources of this surplus.
>The company developed new methods of secret surplus capture that could uncover data that users intentionally opted to keep private, as well as to infer extensive personal information that users did not or would not provide. And this surplus would then be analysed for hidden meanings that could predict click-through behaviour. The surplus data became the basis for new predictions markets called targeted advertising.
>Here was the origin of surveillance capitalism in an unprecedented and lucrative brew: behavioural surplus, data science, material infrastructure, computational power, algorithmic systems, and automated platforms. As click-through rates skyrocketed, advertising quickly became as important as search. Eventually it became the cornerstone of a new kind of commerce that depended upon online surveillance at scale.
>The success of these new mechanisms only became visible when Google went public in 2004. That’s when it finally revealed that between 2001 and its 2004 IPO, revenues increased by 3,590%.
>Surveillance capitalism is no more limited to advertising than mass production was limited to the fabrication of the Ford Model T. It quickly became the default model for capital accumulation in Silicon Valley, embraced by nearly every startup and app.
I mean, GDPR is a step in the right direction. Many websites, and by extension people, seem to think that you comply with GDPR by putting up a stupid cookie banner.
But real compliance is not storing PII in the first place; then you don't even need a cookie banner!
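To make the point concrete, here's a toy sketch (purely illustrative; the field names and allow-list are made up, and real GDPR compliance is a legal question, not a code snippet) of discarding PII at ingestion instead of collecting everything and gating it behind a consent banner:

```python
# Toy illustration: reduce each analytics event to an assumed allow-list
# of non-identifying fields before it is ever persisted. If PII never
# reaches storage, there is nothing to consent-banner about.
def strip_pii(event: dict) -> dict:
    allowed = {"page", "timestamp", "country"}  # hypothetical non-PII fields
    return {k: v for k, v in event.items() if k in allowed}

raw_event = {
    "page": "/pricing",
    "timestamp": 1700000000,
    "country": "DE",
    "ip": "203.0.113.7",          # PII: dropped before storage
    "email": "user@example.com",  # PII: dropped before storage
}
stored = strip_pii(raw_event)
# stored == {"page": "/pricing", "timestamp": 1700000000, "country": "DE"}
```

An allow-list (keep only known-safe fields) rather than a block-list (strip known-bad fields) is the safer default here: new fields are excluded until someone argues them in.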
Asking companies not to retain PII is like asking a crack addict to please ignore the crack pipe and torch while you step out for an hour. The only solution is to make PII radioactive. Tax it. Burn to the ground companies that abuse it or leak it. HIPAA is a fucking nightmare, but companies still figure it out.
GDPR is mostly that; the penalties for data breaches are essentially a tax on PII. GDPR also restricts how you can process data and the user should always be informed and has the right to object.
The problem is that the GDPR is not being enforced seriously.
I think you're on the right track, but it's not just because it's an "advertising-supported" business model. It's the fundamental laws that govern our society: profit. Replace Zuckerberg, Bezos, et al. with anyone else, and the new CEOs' decisions will be bound by the same constraints.
Thought experiment: Cory Doctorow becomes CEO of Facebook with Zuckerberg's entire stock allocation and equivalent voting control. Do you stand by your assertion that nothing changes?
One of two things happens:
1) Cory Doctorow gradually morphs into Zuckerberg
2) Another social network, run more like Facebook is currently, replaces the one that Cory Doctorow is running. The reason Facebook didn't go the way of Friendster, Livejournal, and MySpace, is that he figured out how to play the game (as it currently exists) better than anyone else. Cory Doctorow would be like someone trying to win the Tour de France when everyone else was doping. In this analogy, Zuckerberg is Lance Armstrong, playing the existing game the way you need to play it to win that game.
Reddit isn't perfect, but it is run totally differently from Facebook, including: offering paid subscriptions, having an open API, and not trying to justify widespread surveillance as an ad-targeting tool.
>"As a corporation, Facebook truly seems to be trying to improve its behavior for the benefit of society at large, ..."
Could you provide some examples of how "FB truly seems to be trying to improve its behavior for the benefit of society at large"? Just using one very recent example: how do you reconcile that outlook with FB's threats against the Ad Observatory[1]?
You’re apologizing for a company that makes money hand over fist and has a single competitor in its space (online advertising), and that’s Google.
There is no “getting fucked” when it’s a monopoly controlling its market. Right now, legislation is helping Facebook by increasing the barrier to entry to compete with them.
So what you’re saying is that Facebook created this entire situation but would not have if, you know, it didn’t have to. The thing is, we are only as shitty as we let ourselves be. Stop accepting shitty behavior from people and stop trying to justify it for no reason.
I'll be honest, judging from my experience with Facebook's ads system, I'd wager they have accumulated some technical debt and their content evaluation (aka censorship) systems don't really work or don't scale. There are numerous reports of incorrect flagging of business accounts and ads managers and insanely long review processes (which, by the way, never result in an apology) on forums outside Facebook. Advertisers were moving to other platforms because Facebook became unpredictable and ads costs were skyrocketing just before the US elections.
It's a bit better now but they still seem to have problems identifying objectionable content. If the system doesn't work for ads, it won't work for orders of magnitude more user posts either.
Based on your own set of possibilities, it sounds like they chose (b), and "f#ck everyone else" contradicts "Facebook truly seems to be trying to improve its behavior for the benefit of society at large".
>As a corporation, Facebook truly seems to be trying to improve its behavior for the benefit of society at large
Based on what? Lip service? Empty gestures? Those are worth as much as Google's "Don't be evil" motto and Apple's and Nike's social justice campaigns...
Facebook could use its wealth and influence to lobby for government regulation that would rein in bad behaviour while ensuring a level playing field so less-ethical competitors would be penalized.
As a corporation with a huge amount of investment money involved, they will bend every rule and law to maximize the ROI for those investors. Also, these companies will extract every bit of data from their customers (product) that they can, in obscurity, to accomplish their goal.
Isn't there really no choice? Doesn't the corporate responsibility force them to only consider the greatest financial gain, regardless of the downsides for society?
It's almost as if capitalism requires outcomes which are exploitative. Whether that is the labor force, the environment, minority populations, civil society, population health.
Too bad no one has written a book or three looking into this. I'd read it.