
This is not true: you often find surprising things that are not general which ARE NOT GOSSIP. To stay with the physics theme, the Schwarzschild solution of general relativity is a very special one and I don't think anybody thinks that black holes are gossip.

And it is certainly not surprising that amateurs in general forget that F = ma is only valid if the mass does not change. The more general expression is F = dp/dt, where p is the momentum. But this, of course, is also only valid in inertial systems. It's not really important to the article but it does kind of annoy me that he uses the most special case of an expression in an argument about it being general.
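To make that concrete, here's a quick sketch (in Python with sympy, purely my own illustration) of the product rule hiding inside F = dp/dt when the mass varies:

```python
import sympy as sp

t = sp.symbols('t')
m = sp.Function('m')(t)  # mass, allowed to vary with time (think rocket)
v = sp.Function('v')(t)  # velocity

p = m * v          # momentum
F = sp.diff(p, t)  # the general F = dp/dt

# F expands to m*dv/dt + v*dm/dt; only when dm/dt = 0 does it collapse to F = ma
print(sp.simplify(F - m * sp.diff(v, t)))  # the leftover v*dm/dt term
```

Nothing deep, but it shows exactly which term F = ma throws away.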

He could have actually made the point about generality by comparing this expression to the most general one for the force (in a frame of reference that accelerates). That would have also shown why generality can quickly become infeasible in practice. If he knew how many approximations people make in the real world, not because they want to, but because they HAVE TO, his worldview might be a different one.

I feel like he's trying to make a point about a very specific scenario but doesn't mention it explicitly. Instead he tries to be general and therefore fails to understand that his view doesn't actually apply in general.


Theoretical entities called "black holes" cannot exist in a finite universe (no matter what perspective changes are applied). The result (if I understand it correctly) applies to a universe in which there is a single massive body and the universe is eternal.

Hence, if time dilation exists (under increasing levels of gravity), a "black hole" cannot form in any finite time. Since there are many massive bodies in our universe, the solution to the problem is not applicable.

So, in that context, "black holes" ARE GOSSIP!!!!

Let the "GOSSIP" wars commence.

Other than that, I think you make some very salient points, especially about the approximations that we make and the validity of those approximations to the specific applications/situations being studied.


Yes, the current understanding is that general relativity breaks down inside of black holes and, to understand how it really works, we need a quantum theory of gravity.

However, I would not call it gossip since, by your reasoning, anything could be gossip then. I understand gossip as being something unimportant, but the Schwarzschild solution was a major milestone in the understanding of general relativity. Moreover, all sci-fi movies considering wormholes and such can be traced back to the usual visualization of a black hole distorting spacetime. Pretty consequential discovery, I'd say.


You miss the point that the Schwarzschild solution was for a very specific universe that does not in any way match ours. Hence, saying that the solution is applicable to our universe and WILL give rise to the theoretical entity known as a "black hole" is pushing the solution well beyond its actual applicability.

Much as I like good SF (and even mediocre SF), I believe there is enough evidence to say that our understanding is very incomplete and that GR (though it gives some good approximations in general) may be a complete furphy.

Our problem at this time is that any time conflicting evidence comes up, it tends to get either buried or ignored, or those bringing it up get attacked ad hominem.

If the evidence is obviously incorrect then it should be a simple matter of showing exactly how it is incorrect. My observations of the actions of the proponents of GR are that they tend towards dogmatism and not discussion, ridicule and not rebuttal.

If odd things are found, then the prevailing theory (in this case GR) should be able to fairly handle these discrepancies.

There are no "stupid" questions.


He wrote "e.g. gossip", not "i.e. gossip."


You should have read the draft of this, instead of the people who did.


No. Usually you talk to your colleagues and you help each other -- especially in the same department.


If I remember correctly, there are needle-like implants with around a thousand contacts and it is quite a difficult task to get the signals out of the brain. Either you have the ADCs directly at the contacts, which means you can't get your density of contacts up, or you have the ADCs outside which will give you a nightmare of wiring. In either case the technology to actually have an interface read out individual neurons is still quite far off, as far as I know.

I'm not quite sure about all of this, so maybe someone with up to date information on the technology can help me out here?


If I may add some details....

The most popular implant is probably Blackrock Microsystems' "Utah Array", which has 96 electrodes arranged in a 10x10 grid (minus the corners). It looks like this: http://aerobe.com/wp-content/uploads/2016/11/utah-3.jpg For scale, the entire electrode grid is about 4mm on a side and the electrodes are between 0.5 and 1.5mm long (depending on the model).

There are a few other models (and similar stuff from other companies), but I'd be surprised if anything with thousands of contacts is in regular in vivo use. There are some in vitro (i.e., cells or tissue slices in a dish) systems with more contacts, but the signal quality isn't nearly as good.

We can read out the activity of single neurons--people have been doing it for single electrodes since the 1960s. It's slightly easier with a single (movable) electrode since you can creep up on the cell until its action potentials are fairly large and well-isolated from the background noise (here, large means about ±150 µV). You can't move the array or its individual electrodes, so you're stuck hoping that the individual shanks end up in good positions. Then, data is recorded at a fairly high sampling rate (say, 30 kHz) and the "spikes" are clustered based on their shapes to get individual neurons' responses.
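The thresholding step can be sketched in a few lines (a toy illustration of mine, not any lab's actual pipeline; clustering the spike shapes comes after this):

```python
import numpy as np

FS = 30_000  # sampling rate in Hz, as mentioned above

def detect_spikes(trace, n_sigmas=5.0):
    """Return sample indices where the trace first crosses a negative threshold."""
    # Robust noise estimate via the median absolute deviation
    sigma = np.median(np.abs(trace - np.median(trace))) / 0.6745
    below = trace < -n_sigmas * sigma
    # Keep only the first sample of each downward crossing
    return np.flatnonzero(below[1:] & ~below[:-1]) + 1

# Fake recording: ~10 µV Gaussian noise with two injected -150 µV "spikes"
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 10e-6, FS)  # one second of data
trace[5000] -= 150e-6
trace[20000] -= 150e-6
print(detect_spikes(trace))
```

A real sorter then extracts a window of samples around each crossing and clusters those waveforms to assign spikes to neurons.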

The ADCs aren't directly at the contacts, but you want the amplifiers and ADCs as close to the electrode as possible to avoid all sorts of weird EMI from the mains, other equipment, etc. Getting the grounding and shielding right is a bit of a black art and eats up tons of researcher time. (You'd think "throw it all in a Faraday cage" would work, but...it doesn't).

What else do you want to know? :-)


I've done ephys in mice and gerbils. Spike sorting is nontrivial, and the effects on local tissue from jamming long shank electrodes into cortex are nothing i'd like done to me.

Unfortunately, less invasive recording techniques will never give you the ability to record from single units.

edit: pulled your google scholar and boy am I preaching to the choir...


And none of my array stuff is published yet (grrr!)

I did single-electrode experiments for my PhD and those definitely mess up the brain after a while. The Utah array stuff strikes me as "less bad" in that there's only one big insult to the brain, but it is a pretty bad one: the arrays are inserted with a pneumatic "gun".

I think you're right that non-invasive techniques will never give us single unit data, though I hope we can get some longer-lasting implantable electrodes soon.


I'm interested in the subject and appreciated your post, thanks. A couple of questions:

How does the brain adapt to having the electrodes in there, how long before the probes are accepted as being "part of" the brain?

Do you envision we need lots more sensors than in your example above, or is said number enough for precision input (say, text/words, or navigation in a 3 dimensional position & rotation plus a temporal dimension interface)? I guess the brain would work around the rough edges (or lack of sensor resolution) just like it already does with keyboards, mice, bodies, and language.


It depends!

For most electrodes, the brain doesn't really incorporate the implant. When a single electrode is inserted, you can start recording as soon as the contacts are inside the brain (in practice, you wait a few minutes since the brain is slightly elastic and stuff moves around). In humans, this is how deep brain stimulation is done--the surgeons use the neural activity to figure out when the electrode is in the right place. For larger implants like the Utah arrays, the insertion is a bit more traumatic. Allegedly, you get a pretty good signal right away, then inflammation makes it degrade for a while, and after ~12 hrs, the signal returns. However, the animal/patient is usually recovering during this time, so it's moot.

These electrodes are usually silicone and metal (typically tungsten, platinum/platinum-iridium, or iridium oxide), so the brain doesn't really "accept" them. In fact, it tries to encapsulate and reject them, which limits the lifespan of the electrodes. In my experience, a two-week-old array might have nice, well-isolated neurons on more than half of the channels; after two years, you'd be lucky to get single units on more than a handful of the 96 channels.

However, there's a lot of interest in developing coatings that inhibit this immune response or actually encourage neurons to grow into the array. There's a lot of promising research on this, but nothing (as far as I know) that's commercially available.

As for the number of sensors...it also depends. You can do a lot with a 96 channel array implanted in the right spot, including spelling (https://elifesciences.org/content/6/e18554#bib27) and control of a robotic arm (http://www.jhuapl.edu/prosthetics/scientists/neural.asp) though neither of these is anywhere near "native" performance yet. More electrodes might help, but there's also probably some low-hanging fruit in figuring out the right control paradigms, decoding algorithms, and even where the electrodes are placed.

For research though, more and better arrays would be great. Many brain areas have a spatial structure. In visual areas, for example, cells representing neighboring spots in the visual world[0] are also near each other. Motor and sensory cortex also have a foot-to-face progression. Bigger arrays might let us sample from a more diverse population of neurons at the same time, which could be scientifically interesting and useful for BCI. Denser arrays might also allow for better recordings from single neurons. If you have a sufficiently dense array, you can record the activity of one neuron from multiple sites--this lets you isolate its responses better (this trick is commonly used with bundles of four wires, called tetrodes).

I would also love to get my hands on arrays made from multiple materials. Platinum is great for recording the activity of single units, but lousy for stimulation; its low charge injection capacity means that high stimulation currents damage the electrodes and/or nearby cells. Iridium oxide has a much higher charge injection capacity, but lower impedance and thus, fewer well-isolated cells. A "checkerboard" pattern of Pt and IrOx electrodes would be awesome, but is apparently difficult to make (no chance you run a fab, is there?)

The flip side of all of this is that amplifiers and ADCs are expensive (though much cheaper than they used to be), and adding channels rapidly increases the data files' size too. My experiments generate about 1 GB/minute, and we record 5-8 hours a day, 6-7 days a week.

What else? :-)

[0] The organization is really "retinotopic", meaning that cells receiving inputs from the adjacent parts of the retina are near each other in the brain.


Assuming that the 1 GB/min figure is because you're saving broadband (back of the envelope math assuming you have two Utah arrays with 32 bits/sample at 30 kHz), you can get enormous space savings with compression. Field potentials are highly correlated across channels, as you no doubt know. Depending on your amplifier you may be able to pack everything into 16 bits/sample too.
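The arithmetic checks out, for what it's worth (my numbers, assuming the two-array/32-bit/30 kHz setup above):

```python
channels = 2 * 96          # two Utah arrays
sample_rate = 30_000       # Hz
bytes_per_sample = 4       # 32-bit samples

gb_per_min = channels * sample_rate * bytes_per_sample * 60 / 1e9
print(round(gb_per_min, 2))      # 1.38 -- in the ballpark of the quoted ~1 GB/minute

# Repacking into 16 bits/sample halves that before any real compression kicks in
print(round(gb_per_min / 2, 2))  # 0.69
```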


Thanks for your answer! I'm afraid I'm not a fabricator of platinum electrodes... I hope you find someone who is!

I'm thinking about the immune response from the brain, and whether an array of sensors like this could be implanted somewhere other than the actual brain, connecting to nerves instead of neurons and essentially creating a virtual limb. I guess it kind of misses the point of this whole thing, and can't reach as far as brain implementations, but it has the advantage of being more feasible as a solution. I think what I'm getting at is whether we need lots and lots of sensors with very high resolution data, or whether we can instead ensure the computer interface is consistent enough that a "muscle memory" can be formed for controlling the virtual limb. I guess I don't have a question really haha, thanks for your time and replies!


You are thinking of Ted Berger's work at USC, among MANY other researchers' efforts. Here is a link to the class: https://classes.usc.edu/term-20161/course/bme-552/

Also, the main issue with their work is glial scarring inside the central nervous system; the body ensures the implants are time limited.


If anything you're underselling the existing hurdles. It was once thought that the brain was immunoprivileged, but now it's known to have its own immune system. As a result implants are prone to having scar tissue form around them, and after some years it starts to inhibit their ability to perform.


You're mixing up programming with computer science. The former is a task that does not necessarily need any math (e.g. web development), the latter is literally math (e.g. category algebra).

Edit: clarification


The questions are actually pretty clear. The problem is that you lack a general understanding of mathematical concepts and therefore you don't know what to do with them. But that's fine, and that's actually what the exam is for, i.e. only those who understand these things may pass.

>This is probably not a view shared by many here, but if math problems were to be communicated in more natural ways, far more people would be interested in the sciences.

I would argue the opposite. If more problems could be as well posed as mathematical problems, we wouldn't have people bullshitting their way through arguments that are in dire need of some scientific rigor (see climate science, psychology, and other disciplines that are too complex to isolate phenomena completely). You can't handwave your way through a math exam and you shouldn't be able to do it in other fields of science.


> I would argue the opposite. If more problems could be as well posed as mathematical problems, we wouldn't have people bullshitting their way through arguments that are in dire need of some scientific rigor (see climate science, psychology, and other disciplines that are too complex to isolate phenomena completely). You can't handwave your way through a math exam and you shouldn't be able to do it in other fields of science.

Amen. I have been saying that for years. But I think you misunderstood my original post.

I am all for 100% scientific rigor and less bullshit in society. But what you seem to classify as "clear" is not so clear to others. It's actually about how condensed the information is in each question. To unpack those questions, you need years of experience within that field to get anywhere.

I bet you anything that each one of these questions could be posed in a different, yet still mathematical, manner that would make most non-mathy people (such as myself) at least understand the gist of what is being asked. But that is not something mathematicians are interested in. Which is what I was trying to say.


The issue is that each provable discovery in mathematics is named; it has to have a unique name. These discoveries are exact with at least one proof for their validity and for their limitations and assumptions; all that information is tied up in a name, say for example eigenvalues. These proofs are built on top of previous proofs (which of course are named as well), names and phrases built upon each other.

So this ever growing collection of proofs and names is an accretion (or in our moments of grander hubris a pyramid). What you are asking for is the tip of the pyramid (with its wonderful view) without the stones below, in effect what you are asking for is magic, a stone floating in the air with nothing to support it.


>in effect what you are asking for is magic, a stone floating in the air with nothing to support it.

Isn't that how we all learn math? In fact, isn't that how we all learn anything, really?

When you are in 2nd grade, you don't start by learning about the Riemann hypothesis. You start at the tip of the pyramid and drill downwards into complexity.

I am not asking for something magical here. I just wish that more math was communicated in a simplified manner that would help bring in people from all walks of life into the world of science.


The simple way is to follow the steps, start at the bottom and work your way up. Eventually you will get to the point where it makes sense. The terminology and the understanding is opaque to a neophyte because ideas are non-trivial.

Any sufficiently advanced technology is indistinguishable from magic -- Clarke


> You can't handwave your way through a math exam

This reminds me of a calculus exam a long time ago. One of the problems was "given the ellipsoid defined by formula ______ and a certain line, find the point on the line closest to the ellipsoid". The goal of this problem was for us to use Lagrange multipliers, and I hadn't studied that one particular subject. However... they made a mistake writing the exam. The ellipsoid was symmetrical, its intersection with the xy plane was a circle, and the line was also in the xy plane. I only needed high school math to successfully handwave it. Lots of facepalms from my TAs.
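For fun, that kind of shortcut is easy to replay numerically. Below is a made-up ellipsoid with a circular xy cross-section and a line in the xy plane (my invention, not the actual exam problem); the general constrained optimization and the high-school symmetry argument agree:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: ellipsoid x^2/4 + y^2/4 + z^2 = 1 and the line {(t, 3, 0)}.
# Minimize the squared distance between a line point and a surface point,
# with variables w = (t, x, y, z).
def sq_dist(w):
    t, x, y, z = w
    return (t - x)**2 + (3 - y)**2 + z**2

on_surface = {"type": "eq", "fun": lambda w: w[1]**2/4 + w[2]**2/4 + w[3]**2 - 1}
res = minimize(sq_dist, x0=[0.5, 0.5, 1.0, 0.1], constraints=[on_surface])

# Symmetry shortcut: in the xy plane the cross-section is the circle
# x^2 + y^2 = 4, so the closest line point is the one nearest the origin:
# t = 0, i.e. (0, 3, 0), at distance 3 - 2 = 1 from the surface.
print(np.round(res.x[0], 2), np.round(res.fun, 2))  # t ~ 0, squared distance ~ 1
```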


You've just described the vast majority of my experiences in my EE E&M class. Very hard multi-variate calculus problems, but as soon as you can find the symmetry they get dramatically easier.


The water in the US is chlorinated (at least where I was in Washington, DC). It smells and tastes really terrible.


"Water in the US" varies greatly from place to place. You see, the United States is a very large and varied place.

Amazing tasting well water pumped directly from old glaciers turned into underground reservoirs is "water in the US."

Water so brown and contaminated you can't even drink it is "water in the US."

Using fluoride, chlorine, or chloramine is "water in the US." So is desalinated water along coastlines that tastes as such, and streams and springs that are safe to drink from.

Having lived in the area for years, I can say that even throughout the D.C. metro area you have different tasting water as well.


Something a simple filter takes care of easily.


I never needed to bother with a filter where I live. So does everybody use them where the water is chlorinated?


It depends on how chlorinated the water tastes. I usually use the filtered water that comes out of the tap on the fridge, but if that is taking too long to fill a bottle, I just use the regular tap water. Our water tastes fine. It doesn't have a strong scent or flavor and the water is very clear. Checking out the water quality reports for my county, we meet or exceed all EPA regulations regarding certain materials in the water.

Some people in my county still rely on well water because they were too far from the water main when their house was built. Well water tastes gross and you pretty much have to filter it unless you just hate yourself. As for the quality of the well water in my county, I can't find this information, as I'm guessing everyone's well water is different and no one is going around testing homeowners' wells.

Chlorinated water is used in many places around the US as it's cheap and effective. It reduces things like microbes and bacteria in the water supply. It can produce some nasty by-products, but the risk from consuming chlorinated water is much lower than consuming water that is potentially swimming in microbes and bacteria.


Depends where you are. It doesn't need to be at tasteable levels; the US apparently uses rather a lot and has poor quality control (Flint passim).

http://www.dailymail.co.uk/health/article-2242094/Chlorine-t... (apologies for poor source)


I use it in London, not because the water is too chlorinated, but because it really improves the taste.


Well, people who don't want to taste chlorine, yes.


I strongly believe that we should not easily segregate people into a smart kind and a not-so-smart kind. Intelligence comes in many ways and the genetics and upbringing of a person will make their intelligence manifest in different ways.

Someone being good at something does not automatically equate to a high level of intelligence or smartness, but rather a high degree of familiarity with the topic. Familiarity can be acquired either through a lot of practice or from a predisposition to understand quickly, which is intelligence or smartness.

However, what can be said is that a person who is good at almost everything necessarily has to be smart, because there wouldn't be any time to practice everything in depth. Conversely, I'd never call a person smart who's good at one thing but doesn't understand anything else.

The author says that smart, successful people are cursed with over-confidence due to them knowing one thing very well. But how can you call a person smart if they do not even possess the ability to properly self-reflect? Is that not the one thing that should define smartness?

Rationalizing things away, ignoring signs that interfere with one's world-view, and being over-confident are all traits of not-so-smart people. Just because you know how to code, does not mean you are smart.

I'd say the result of this anecdote should've been that it turns out that people can be good at their jobs and be idiots at the same time.


> The author says that smart, successful people are cursed with over-confidence due to them knowing one thing very well. But how can you call a person smart if they do not even possess the ability to properly self-reflect. Is that not the one thing that should define smartness?

I've more often heard the opposite stated, that people who are smart have a tendency to see multiple sides of every issue and struggle to come to decisions, whereas people who are less smart are more prone to see complex things in black and white. If only we had a readily available example of that phenomenon...


The Dunning–Kruger effect?


I always thought that the "accelerated" programs in public schools perpetuate this issue. There were some very sharp kids there for sure, but it might as well have been called the "conscientious track" since it was full of children who proved that they could sit quietly, learn from reading on their own, navigate institutions, etc. All the things rich parents make their children do anyway.

I've met too many individuals since who are intelligent but disorganized, energetic, or lack a stable family, and who might have actually benefitted from some special attention and encouragement, unlike all the kids who were going to be just fine anyway.


The problem with that is: why should the kids who can behave themselves (which used to be not just 'rich kids') be penalized by being stuck in an environment that has to slow down because half the class didn't hear the teacher the first four times, having chosen not to pay attention?

In high school, I just stopped showing up. Classes were going too slow for me and didn't hold my attention. I'm not super intelligent but I do like learning and am fully capable of sitting down and focusing.

One size fits all doesn't work anywhere else, why do we expect it to work with educating children that have nothing in common except their age?


If they were advocating for the removal of the programs then it was an implication I missed. I think the poster's only concern was that the classes contribute to the common misinterpretation of what smart is.

Maybe it would be better to just advertise them for what they are (according to that poster) and call them the "conscientious track"? That way kids who want in have something obvious and actionable to work towards other than "well, if you're smart you'll get accepted and if you're not, welp, enjoy burger flipping!"


That's exactly what I was getting at. Special treatment and exclusivity breeds feelings of superiority and inferiority where none need exist. Starting this at a young age tells kids that they haven't been "chosen," and kids are too young for us to blame them for not trying to take college classes etc. when they have been structurally told that those classes aren't for them.


Nope, not fake. Just another way to look at it. From the paper: "One of the deepest insights about quantum gravity that emerged in recent times is that it is expected to be holographic [1–3], meaning that there should be an equivalent description of the bulk physics using a quantum field theory with no gravity in one dimension less. One may thus seek to use holography to model the very early Universe."


I think you wanted to say: "Are we simplifying things in software development?" All of the points you have made are actually simplifications of what might be the optimal solution.

Imagine the solution space as some multidimensional space where there is somewhere an optimal solution. The dimensions include the habits of your programmers, the problem you are trying to solve, and the phase of the moon. Microservices, a special form of redundancy, continuous integration, agile development are all extreme solutions to specific problems. Solutions which are extreme in that they are somewhere in the corner of your multidimensional solution space.

They are popular because they are radical in the way they conceptualize the shape of the problem and attempt to solve it. Therefore they seem like optimal solutions at first glance, when really they only apply well to specific toy models.

Take e.g. microservices. Yes, it's really nice if you can split up your big problem into small problems and define nice and clean interfaces. But it becomes a liability if you need too much communication between the services, up until the point where you merge your microservices back together to take advantage of shared memory.

Don't believe any claims that there is a categorically better way to do everything. Most often, when you see an article about something like that, it is "proved" by showing it solves a toy model very well. But actual problems are rarely like toy models. Therefore the optimal solution to an actual problem is never a definite answer from one of the "simplified corner case scenarios" but it is actually just as complex as the problem you are trying to solve.


Once you own data centers, energy consumption becomes a major consideration.

I'm not doing stuff like that, but I assume the train of thought concerning the economics of energy consumption goes something like this: you buy new hardware and from experience you know it's going to last on average a few years. During the lifetime of your new hardware you can save some amount of money on electricity because your new hardware is more efficient than the old one. So it would make sense to hit the buy button for the new hardware when you can save money:

(Savings in electricity over the lifetime) - (Price of new hardware) > 0

I assume after a few years the savings may become significant.
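As a toy version of that inequality (all numbers invented for illustration):

```python
HOURS_PER_YEAR = 24 * 365

def electricity_savings(old_watts, new_watts, lifetime_years, price_per_kwh):
    """Money saved on power over the new hardware's expected lifetime."""
    delta_kw = (old_watts - new_watts) / 1000
    return delta_kw * HOURS_PER_YEAR * lifetime_years * price_per_kwh

def worth_replacing(old_watts, new_watts, lifetime_years, price_per_kwh, hw_price):
    # (Savings in electricity over the lifetime) - (Price of new hardware) > 0
    return electricity_savings(old_watts, new_watts, lifetime_years, price_per_kwh) - hw_price > 0

# A 400 W box replaced by a 250 W one, 4-year lifetime, $0.15/kWh:
savings = electricity_savings(400, 250, 4, 0.15)
print(round(savings, 2))                                 # 788.4
print(worth_replacing(400, 250, 4, 0.15, hw_price=600))  # True: savings exceed the price
```

Of course this leaves out migration, deprovisioning, and repair costs.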


It's not so simple. Deprovisioning hardware takes work to move the services away (yes, even in Google) and breaks things. It also costs to physically remove the hardware and ship it and refurbish the DC to host new hardware.

Big tech companies are often running hardware for longer than you think - just not using old hw for cloud hosting.


I assume the costs of repairs and spare parts would also enter that equation.


An aside: it can be a good exercise to use the above reasoning to decide when to buy a new car.


That reasoning will push buying new cars away to the moment the old ones completely fail, and not a second before.

For cars it is much more important to calculate the risk of failure at a moment you need them, and the costs of being suddenly carless. Also, for cars that go on a road, the most relevant factor is safety.


Having done the maths, and living in a country with very high fuel prices, I have to say that everything seems to be in favour of putting up with small old cars as long as possible!


It's also a good reason to update major appliances regularly, from washers that use too much water to fridges that are inadequately insulated.

