Hacker News | homarp's comments

RMS on copyright: "This means that copyright no longer fits in with the technology as it used to. Even if the words of copyright law had not changed, they wouldn't have the same effect. Instead of an industrial regulation on publishers controlled by authors, with the benefits set up to go to the public, it is now a restriction on the general public, controlled mainly by the publishers, in the name of the authors.

In other words, it's tyranny. It's intolerable and we can't allow it to continue this way.

As a result of this change, [copyright] is no longer easy to enforce, no longer uncontroversial, and no longer beneficial"

from https://www.gnu.org/philosophy/copyright-versus-community.en...


First, if we assume Stallman is human, we have to grant that he will not be right about everything (impossible on logical grounds, and supported by the fact that he has publicly changed his views on certain things in the past).

Second, when it comes to action, he only argues that copyright should have reduced power, which we can all agree with; he does not appear to argue for the death of copyright. The death of copyright would seem counterproductive, unless it also implied the death of the corporate ability to withhold source code from users, among many other things.

You will note that the very text you linked to is copyrighted. There’s a reason for that.


And yet he is.


Coding Agent Adaptation Lets a 9B LLM Outperform 10x Larger Models on Aider Polyglot Benchmark

and as an output, a coding agent optimized for smaller LLMs: https://github.com/itayinbarr/little-coder


Interesting that using AI models from China is not discussed.

e.g., Apple buying Moonshot or z.ai


like meteorites?


Craft?




LlamaBarn is the macOS app, not the HTTP API server, which is "llama-server".

On non-Apple PCs, "llama-server" is what you use, and you can connect to it either with a browser or with an application compatible with the OpenAI API.

Perhaps using "llama-server" as the name of the project would have been less confusing for newbies than "llama.cpp".

I confess that when I first heard about "llama.cpp", I assumed it was just a library and that I would have to write my own program to get a complete LLM inference backend.


This looks nice, but it is macOS-only.


Check the same port; there is an OpenAI API: https://github.com/ggml-org/llama.cpp/tree/master/tools/serv...


Good stuff, thanks!


As someone said above: brew install llama.cpp

llama-server -hf ggml-org/gemma-3n-E4B-it-GGUF --port 8000 (with MCP support and a web chat interface)

and you have the OpenAI API on the same port, 8000. (https://github.com/ggml-org/llama.cpp/tree/master/tools/serv... lists the endpoints)
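To illustrate what "OpenAI API on the same port" buys you, here is a minimal sketch of a chat-completions request to a local llama-server, using only the Python standard library. It assumes the server above is running on localhost:8000; the payload field names follow the OpenAI chat-completions schema, and the actual network call is commented out so the snippet stands alone without a running server:

```python
import json
import urllib.request

# Build an OpenAI-compatible chat-completions payload.
# llama-server serves this at POST /v1/chat/completions.
payload = {
    "model": "local",  # llama-server serves whatever model it loaded
    "messages": [
        {"role": "user", "content": "Say hello in one word."}
    ],
    "temperature": 0.2,
}

body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment once llama-server is listening on port 8000:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

Because the request shape is the standard OpenAI one, any OpenAI-compatible client library pointed at http://localhost:8000/v1 should work the same way.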


And why do I use ggml-org/gemma-3n-E4B-it-GGUF instead of one of the 162 other models under the ggml-org namespace? And how would I even know that this is the namespace to look at?

That's what I meant by model management. I'm too tired to scroll through a bazillion models that all have very cryptic names and abbreviations just to find the one that works well on my system with my software stack.

I want a simple interface that a tool like me can scroll through easily, click on, and then have a model that works well enough. If I put in that much brain power to get my LLM working, I might as well do the work myself instead of using an LLM in the first place.


1. Go to HF

2. Choose the model they recommend

3. Run the one-liner the site gives you

Bonus: faster access to latest models and better memory usage


The first model I see on the HF homepage is this one: MiniMaxAI/MiniMax-M2

Do you think that this 229B parameter model will work on my consumer PC?

Stop pretending that HF is in any way beginner-friendly.


https://www.quantamagazine.org/about/ says "launched by the Simons Foundation in 2012"

and https://www.simonsfoundation.org/about/ has "Since its founding in 1994 by Jim and Marilyn Simons"

https://en.wikipedia.org/wiki/Jim_Simons explains how Jim Simons got rich.

The book 'The Man Who Solved the Market' - https://www.gregoryzuckerman.com/the-books/the-man-who-solve... is a nice read.

HN discussion on a review of the book - https://news.ycombinator.com/item?id=29392041


It is an Ethereum fork, named after Jan Zurich (a cousin of the famous Niklaus Emil Wirth). Jan Zurich discovered a small moon of Uranus and named it Blaise.

see timeline on https://ethereum.org/ethereum-forks/


It’s also worth noting that Jan (who strictly uses the pronouns var / val) belongs to one of the most historically marginalized groups in modern tech: One-Pass Compiler Enthusiasts. They were repeatedly ostracized by the bloated LLVM cabal for stating that any build process taking longer than 50 milliseconds is a toxic social construct. The ETH fork was actually meant to fund a decentralized safe space where nobody is ever forced to use a borrow checker.

