I just want to say that this project is amazing. At the risk of sounding hyperbolic, I think Rust is the most exciting thing happening in computing today. A project like this, which plausibly replaces software traditionally written only in C/C++ with something at performance parity, in a language where contributing is relatively accessible and safe, is the most exciting thing even within an already intriguing ecosystem.
As someone who is especially concerned about the performance of my tooling these days due to what seems to be a generally infinite willingness to accept web apps that are slower than desktop apps from decades ago, and which seem to continually demand more resources year over year, I really appreciate that such a discerning eye has been given to Alacritty's speed and resource usage. Some contemporary alternatives like Electron-based terminals are academically interesting, but are programs I'd never want to use due to the huge step backwards in these areas.
One question: do you have any plans to use Alacritty to try and advance the state of terminal emulators more generally? e.g. Displaying images, richer interfaces that don't depend on ASCII bar characters, graphs, properly tabulated results, etc. This is a direction that I wish we were going, but it's not clear to me how to get there without many sacrifices.
> do you have any plans to use Alacritty to try and advance the state of terminal emulators more generally?
I hadn't replied to this because others had already provided all of the info I have. To summarize, the author of notty[0] and I are talking about a collaboration[1]. notty has done a ton of pathfinding in this area, identifying how to add many of these features in a backwards-compatible way. I'm really looking forward to seeing where it goes!
Is adding those features in a backwards compatible way really that important? Couldn't you just have a program send an escape code telling your term to go into "new" mode, and implement some completely different standard?
Or I suppose use terminfo, but I like the idea of dealing with text streams better.
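To make the escape-code idea concrete, here's a hedged Python sketch of the kind of in-band sequence a program could emit. DEC private modes already work this way (`ESC [ ? n h` to enable, `ESC [ ? n l` to disable); mode 1049 (alternate screen) is real, while 7777 below is purely made up as a stand-in for a hypothetical "new mode":

```python
# Terminals already negotiate features via in-band escape sequences.
# DEC private modes use "ESC [ ? <n> h" to enable and "ESC [ ? <n> l"
# to disable a mode; a new standard could claim an unused number.

ESC = "\x1b"

def private_mode(n: int, enable: bool) -> str:
    """Build a DEC-style private-mode escape sequence."""
    return f"{ESC}[?{n}{'h' if enable else 'l'}"

# Real: switch to the alternate screen buffer (what vim/tmux use).
enter_alt_screen = private_mode(1049, True)

# Hypothetical: a "rich output" mode a new protocol might define.
# The number 7777 is invented for this example.
enter_rich_mode = private_mode(7777, True)

print(repr(enter_alt_screen))
```

A terminal that doesn't recognize the mode number simply ignores it, which is part of why this style of extension tends to stay backwards compatible.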
notty author here. notty is basically way down a yakstack for me - I wanted better CLI/TUI tools, so I wanted to write a framework for writing CLI/TUI tools, but some of the features I wanted aren't supported by terminals, so I started writing a new terminal. But I don't know anything about graphics programming & this is really far away from what I actually wanted to be doing - so when jwilm showed me alacritty & mentioned implementing notty with it I was pretty stoked.
That is not how copyleft software licenses work at all. All the AGPL guarantees is that 'Bigco' must contribute back to the community any modifications they make to the software.
Exactly! How many bigcos have a terminal emulator incorporated into a product? If you're just using this to run tmux, vim, etc., the AGPL's strengthened sharing provisions aren't going to affect you at all :-)
I was just pointing out that regardless of modification/distribution/whatever, bigco policy is to not allow ANY AGPL code within a 10 mile radius of any computer owned by said company.
The author(s) are free to use AGPL, but there are significant downsides if they care about adoption.
These aren't "weird corporate policies", they're very sensible. If they wish to use such software they need to be very careful in how, and track its use, and they just don't think having such a framework is worth it.
It's a downside if it leads to general non-adoption, either directly or because a competitor with a different license gets the market share.
I'm all for the moral stance, but moral purity in a vacuum is essentially irrelevant. Effective morality is about impact on the world. A morality that's only about the good feelings of the purist is sterile self-indulgence.
> bigco policy is to not allow ANY AGPL code within a 10 mile radius of any computer owned by said company
Wait, that seems extremely paranoid, even if only meant figuratively... Can you explain the thinking on restricting the use of AGPL'd licensed applications?
It's a very common company policy, because 'never use GPL' is a much easier rule to follow than 'only use GPL when it doesn't expose the company to risk'. Programmers aren't lawyers.
You might be thinking of Lesser GPL? It should be immediately obvious why any Bigco would treat the AGPL like an exploding canister of infected blood and sharps.
The AGPL treats web publishing the same as binary distribution. If a bigco (e.g. Google) used AGPL code as part of a web service (e.g. a web-based email client) there is a risk that they'd be required to comply with requests for source code. It's a pretty scary license. I wouldn't touch it... and I run a teeny tiny little speck of a website by comparison.
This is the mindset that led to people not realizing the impact of Shellshock. If your web service shells out to any other tools (ImageMagick, for instance), the shell is now part of your app.
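A minimal Python illustration of the point, using the standard subprocess module (the command here is just a stand-in for something like an ImageMagick invocation):

```python
# Why "we don't ship a shell" can be wrong: shelling out pulls
# /bin/sh into your application's attack surface.

import subprocess

# Goes through `/bin/sh -c '...'`: the shell is now part of the app.
# This is exactly the code path Shellshock-era services exposed.
out_shell = subprocess.run("echo processed", shell=True,
                           capture_output=True, text=True)

# Invokes the binary directly with an argument vector; no shell involved.
out_direct = subprocess.run(["echo", "processed"],
                            capture_output=True, text=True)

print(out_shell.stdout.strip(), out_direct.stdout.strip())
```

Both produce the same output here, but only the first one makes the shell (and any bug in it) a dependency of your service.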
Which then raises the question of why the author chose AGPL over regular GPL, if it's unlikely to ever apply in practice. What was the author worried about?
Meanwhile, it's much easier for a BigCo to have a blanket policy for a license which has incredibly high theoretical dangers and little clarity around its scope. And I don't blame them.
Is that true? My understanding of the AGPL was that any software product which uses it as a component becomes subject to the AGPL - it has the linking semantics of the GPL, not the LGPL. If that's not the case, please do disabuse me of my misconception!
Yes, it is like the GPL. But in this case, where the product is a standalone application, that distinction shouldn't matter unless you're actually planning on bundling it into your own product somehow.
The problem is that "outside of the company" can be murky. What if the company outsources? What if the company hires contractors? What if the company employs an intern - does the intern now have the right to distribute the software?
These are the legal landmines that BigCos want to avoid, mainly because they're questions that have not really been decided.
That only addresses part of the reason for the policy. Please be aware that the legal world around companies is very complicated, and smart people spend a lot of time analyzing this.
There's a big difference between "should be fine" and "will be fine". When the stakes are small, the former can be enough. But as the stakes get larger, people favor the latter.
At a large company, the stakes get large in two ways. One is that all the numbers are just larger. But more important is that an individual decision maker's career success can become dependent on a relatively small number of things. E.g., if a lawyer approves a license that should be fine but actually isn't, that could substantially harm career prospects. It still may be a small problem overall for a major company, but if it means somebody gets fired, those are pretty big stakes.
"Integration into your products" is too narrow. For example, AGPL may mean that contractors who use company internal web services must be given access to the source code of those services. That's a frightening prospect for companies.
I meant use more in "install and use, maybe fix a few bugs", which is what I'd primarily expect for a terminal emulator. It's not the kind of software you're likely going to specially interface with your systems, unless you ship it with your own OS.
Software can be "shipped" to users in ways you may not anticipate. Even a lowly terminal emulator might find its way into a POS system, factory line, etc. (These interfaces are often shockingly primitive).
And even if not, developers might use internal code search, find what they want, and then copy and paste. The pushback on AGPL code (and GPL code even) comes from the difficulty of establishing internal policies to keep the code segregated. Much easier to have simple-to-understand policies enforced at the boundaries, e.g. "no AGPL, period", instead of "AGPL code is OK for software that won't interface with our systems, as determined by either biased engineers or technically-shaky lawyers."
I think you make a good point in this instance, so I upvoted. The follow-on discussion establishes the policy of bigcos pretty well, and I don't think Alacritty much benefits from the AGPL since actual integration into other software bases is unlikely (except for some of the sub-libs perhaps, but personally I feel that fundamental libraries flourish better under less-restrictive licenses anyway)
Hah, yeah I'm aware, but the potential of Rust is huge.
We talk a lot about open source these days, but meanwhile the tools that we all use are sitting on huge substrates that the vast majority of us aren't contributing to and probably never will due to the complexity hurdle that needs to be overcome.
This includes our web browsers, our terminals, our editors/IDEs, our operating systems, our security software (OpenSSL, NaCl, OpenSSH), and if you're a developer, things like our databases. Although I can ostensibly write C and C++, I still don't contribute to these projects because oftentimes a whole new set of local conventions around build tools and utility libraries needs to be learned for every project, and there's a high bar of experience required before contribution is possible without the risk of introducing a memory leak or security problem.
Rust has the potential to change all of this, and that's really, really big.
> Your last paragraph suggests what you really want is a notebook style interface (in the style of mathematica) rather than a terminal.
I definitely appreciate Mathematica and its sort of rich prompt is probably closer to what a terminal should look like rather than what we have today. But most of what I'm doing all day is text editing and using companion tools like Git, which isn't a good fit for it. I'd much rather that those Mathematica utilities come to my terminal rather than me having to go to Mathematica.
Reading his comment, his concern isn't with C++ as a language; it's with the tooling and development practices around it. He suggests that because of the generally project-specific nature of the tooling and development idioms, a higher bar of experience is required to contribute.
I can't argue with him there, and I'm about as much of a C and C++ fan as it's possible to be.
That is far from what I meant, and the suggestion that Rust is "salvation" is the stuff of infantile delusion. I'm a big fan of Rust, too, but it isn't going to be the "salvation".
I personally use Hacker News Enhancement Suite for Chrome and tag users whenever I find myself mentally rolling my eyes. Then when I'm reading a page with lots of comments and see the "SKIP" tag I collapse the thread.
It is an amazing project! One thing to note is that the terminal emulator itself isn't GPU-accelerated (no parallel computations run on the GPU); only the UI graphics are rendered by the GPU (much like in the Chrome browser).
"... generally infinite willingness to accept web apps..."
Interesting.
I stay in textmode. Hence I do not need an emulator.
If I need graphics I access the files over VLAN from another computer designed for mindless consumption of graphics, like the locked-down ones they sell today with touchscreens, etc.
My understanding is that emulators like xterm can redraw the screen faster than VGA. I remember this can make textual interfaces feel snappier.
But I doubt that jobs execute any faster in X11/Wayland/whatever than they do in textmode. I cannot see how the processes would be completing any sooner by virtue of using a graphics-accelerated emulator.
But I could be wrong.
I sometimes use tmux for additional virtual consoles because on the computers I control (custom kernel, devices and userland) I do not use multiple ttys, just /dev/console.
I rarely ever work within tmux. I only use it to run detached jobs. I view screen output from my tty with something like
case $1 in
-B|-E|-S|-t)
    tmux capturep "$@" --
    exec tmux showb "$@" --
    ;;
esac
I'm not a seasoned tmux user. I was a very early adopter. tmux is useful high quality software IMHO.
Not sure why I would ever need these slow "web apps".
I guess the third parties controlling the endpoints might be able to utilise the data they gather about users. And I am sure some users appreciate the help. Thus it is a symbiotic relationship.
I am continually making my "tooling" faster by eliminating unnecessary resource consumption. It is an obsession of sorts. Constant improvement.
But given that I am working with text, graphics processing is not something I need. I would not mind being able to run my non-graphical jobs on a fast GPU, but my understanding is that the companies making these processors are not very open.
For example, the GPU in the Raspberry Pi.
Always interesting to hear how others are meeting their computing needs.
The Web is two things. First, it's the promise that a certain runtime with a specific minimum set of capabilities is available almost anywhere. Secondly, it's a staggeringly-huge installed base of stuff written for that runtime.
I don't think there's anything out there that matches the volume of deployed HTML, CSS and JS in the wild.
The horribly sad part is that HTML, CSS and JS are a gigantic Rube Goldberg implementation of "run arbitrary code in a safe sandbox," because the Web is also the world's biggest collection of legacy dependency.
IMHO, the source of the engineering cringe making everything so much sadder and less than what it could be is that the W3C/WHATWG/IETF/etc are made up of consortiums of large, foghorn-equipped corporations - corporations that have vested interests in advertising, consumer retention, and strong guarantees of indefinite consumption.
I've never really gotten the reasoning behind the technical directions the Web's gone in; a lot of things have stuck and worked, but so many more have flopped, yet the associated implementations for both the successes and failures have to be maintained going forward indefinitely.
The iterative pace on the various Web standards is another problem - things go so fast that the implementations can never get really really good, and Chrome uses literally all of your memory (whether you have 2GB or 20GB, apparently!) as a result.
---
Regarding $terminal_emulator being faster than VGA, I can emphatically state that virtually all of them are disastrously slow. aterm had some handcoded SSE circa 2001 to support fake window shadowing (fast-copy the portion of the root window image underneath the terminal window whenever the window is moved; use SSE to darken the snagged area; apply as the terminal window background), but besides that sort of thing, terminal emulators have more or less never been bastions of speed.
If by VGA you mean true textmode (the 720x400 kind, generated entirely by the video card), I don't think there's much that's faster than that. Throw setfont and the Cyr_a8x8 font in there to get 80x50 (I think it is, or 80x43) and you have something hard to beat, since spamming ASCII characters at the video card's memory will always be faster than addressing pixels in a framebuffer.
Which is why GPU-accelerated terminal emulators are so interesting: they're eliminating as many software/architectural bottlenecks as possible to make those expensive framebuffer updates as quick as possible. It's definitely the way to go; games are generally rated on their ability to push GPUs to >60fps at 1080p (and increasingly 2K/4K/8K), so the capacity is really there.
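To make the textmode arithmetic above concrete, here's a rough Python sketch of the VGA text-buffer cell layout, using a plain bytearray as a stand-in for the real hardware buffer at 0xB8000:

```python
# Why textmode is cheap: the VGA text buffer is just cells of
# (character byte, attribute byte) and the card draws the glyphs.
# Putting a character on screen is a 2-byte store.

COLS, ROWS = 80, 25
buf = bytearray(COLS * ROWS * 2)      # 4000 bytes for a full screen

def put_char(col, row, ch, attr=0x07):  # 0x07: light grey on black
    off = (row * COLS + col) * 2
    buf[off] = ord(ch)                  # character byte
    buf[off + 1] = attr                 # attribute byte

for i, ch in enumerate("hello"):
    put_char(i, 0, ch)

# Compare: the same glyph rendered as 8x16 pixels in a 32-bit
# framebuffer costs 8 * 16 * 4 = 512 bytes of pixel writes instead of 2.
```

That 2-bytes-vs-512-bytes gap per character is the bottleneck a GPU-accelerated emulator is trying to close from the other direction.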
The i3 window manager could be considered one of many comparable alternatives to tmux. It's not perfect (it's not as configurable as I'd prefer), but it'd get you X and the ability to view media more easily.
I do really appreciate the tendency to want to view a computer as an industrial terminal appliance though. Task switching is still best done by associating tasks with different objects in physical space, so keeping the computer for terminal work and keeping tablets (et al) for other tasks does make legitimate sense.
---
Regarding data usage, that's a tricky one - most successful Internet companies provide some kind of service that necessarily requires the collection of arguably private information in exchange for a novel convenience. As an example, mapping services don't truly need your realtime location but having that means that they can stream the most relevant tiles of an always-up-to-date map to you. The alternative is storing an entire world map, or subsetted map(s) for the locations you think you'll need, but that'll kill almost all the storage on phones without massive SD cards.
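Some rough back-of-the-envelope Python for the storage claim; the 15 KB average tile size and max zoom of 15 are assumptions for illustration, not measured figures:

```python
# Why "store the whole world map" hurts: slippy maps use 256px tiles,
# and zoom level z has 4**z tiles. Average tile size and zoom depth
# below are assumptions.

AVG_TILE_KB = 15          # assumed average compressed tile size
MAX_ZOOM = 15             # roughly street-level detail

tiles = sum(4 ** z for z in range(MAX_ZOOM + 1))   # geometric series
total_tb = tiles * AVG_TILE_KB / 1024 ** 3

print(f"{tiles:,} tiles ~= {total_tb:.1f} TB")
```

Even with generous pruning (oceans compress to nearly nothing, and most users only ever need a few regions), the full-planet figure lands in the tens of terabytes, which is why offline maps are always subsetted.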
---
I find elimination of unnecessary resource consumption a fun concept to explore, almost to the point of obsession. In this regard I often come back to Forth. I was reading this yesterday - http://yosefk.com/blog/my-history-with-forth-stack-machines.... - and it explores how Forth is essentially the mindset of eliminating ALL but the smallest functional expression of the irreducible complexity of an idea, often to the point of insanity. It's not a register-based language so it's never going to beat machine code for any modern processor, but it's a very very interesting concept to seriously explore, at least. (And I say that as someone interested in actually using Forth for something practical, as described in that article.)
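For flavor, a toy Python sketch of the stack-machine model the article describes: no registers, no named locals, just words reading and writing an implicit data stack (the word set here is a tiny invented subset, not real Forth):

```python
# A minimal Forth-style evaluator: every "word" pops its inputs from
# and pushes its results to one shared data stack.

def run(program, stack=None):
    stack = stack if stack is not None else []
    for word in program.split():
        if word == "+":
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif word == "*":
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif word == "dup":       # duplicate top of stack
            stack.append(stack[-1])
        elif word == "swap":      # exchange top two items
            stack[-1], stack[-2] = stack[-2], stack[-1]
        else:                     # anything else is a number literal
            stack.append(int(word))
    return stack

# "3 dup *" squares the top of the stack: 3 3 * -> 9, then 4 + -> 13
print(run("3 dup * 4 +"))
```

The whole interpreter is a dispatch loop over a list; that irreducible smallness is exactly the quality the article is chasing.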
---
AFAIK, the RPi actually boots off the GPU, or at least the older credit-card-sized ones did. I'm not sure about the current versions.
ATI released some documentation about their designs a while back with the subtext of enabling open-source driver development. I don't think that panned out as much as was hoped.
My understanding is that Intel has both NVIDIA and AMD beat nowadays when it comes to Linux graphics support; the two former vendors still heavily rely on proprietary drivers (on Linux) for a lot of functionality.
Sadly, since they both have to compete successfully in the market, they're unlikely to release their hardware designs in significant detail anytime soon. (Even if they did like the idea of merging, a single gigantic monopoly carries a lot of risk, and the resulting behemoth would likely be impossible for Intel to compete with.)
So, learning OpenCL and CUDA (depending on the GPU you have) is likely your best bet. There are extant established ecosystems of resources and domain knowledge for both implementations, and the relevant code is not too tragically licensed AFAIK.
> The alternative is storing an entire world map, or subsetted map(s) for the locations you think you'll need, but that'll kill almost all the storage on phones without massive SD cards.
And that's what HERE Maps does best, without needing massive storage.
Do you know of any forks or clean implementations of browsers which cut out legacy support more aggressively and/or are tuned for performance? Something like Chrome with less overhead because it doesn't bother to support deprecated features.
Unfortunately there's currently nothing out there that generally meets all of the points you've touched on. There are some projects that tick one or two boxes, but not all of them.
Dillo parses a ridiculously tiny subset of HTML and CSS, and I used it to browse the Web between 2012 and 2014 when my main workstation was an 800MHz Duron. Yes, I used it as my main browser for two years. Yes, I was using a 19 year old computer 2-4 years ago. :P
Its main issue was that it would crash at inopportune times :D taking all my open tabs with it...
The one thing it DID do right (by design) was that the amount of memory it needed to access to display a given tab was largely isolated per tab, and it didn't need to thrash around the entire process space like Chrome does, meaning 5GB+ of process image could live in swap while the program remained entirely usable. This meant I could open 1000+ tabs even though I only had 320MB RAM; switching to a tab I'd last looked at three weeks ago might take 10-15 seconds (because 100MHz SDRAM) but once the whole tab was swapped in everything would be butter-smooth again. (By "butter-smooth" I mean "20 times faster than Chrome" - on a nearly-20-year-old PC.)
I will warn you that the abstract art that the HTML/CSS parser turns webpages into is an acquired taste.
---
Another interesting project in a significantly more developed state is NetSurf, a browser that aims to target HTML5, CSS3 and JS using pure C. The binary is about 3MB right now. The renderer's quality is MUCH higher than Dillo's, but it's perceptibly laggier. This may just be because it's using GTK instead of something like FLTK; I actually suspect firmly kicking GTK out the window will improve responsiveness very significantly, particularly on older hardware.
I have high hopes for this project, but progress is very slow because it's something like a 3-6 man team; Servo has technically already superseded it and is being developed faster too. (Servo has a crash-early policy, instead of trying to be a usable browser, which is why I haven't mentioned it.)
---
The most canonical interpretation of what you've asked for that doesn't completely violate the principle of least surprise ("where did all the CSS go?!?! why is the page like THAT? ...wait, no JS!?? nooo") would have to be stock WebKit.
There are sadly very few browsers that integrate canonical WebKit; GNOME's Epiphany apparently does, as does Midori. Thing is, you lose WebRTC and a few other goodies, and you have to lug around a laundry list of "yeah, Safari doesn't do that" (since you're using Safari's engine), but I keep hearing stories of people who switch from Chrome back to Safari on macOS with unilaterally positive noises about their battery life and system responsiveness.
I've been seriously think-tanking how to build a WebKit-based web browser that's actually decent, but at this exact moment I'm keeping a close eye on Firefox. If FF manages to keep the bulk of its extension repository in functioning order and go fully multiprocess, the browser may see a bit of a renaissance, which would be really nice to witness.
I generally use rxvt because xterm is too slow. It doesn't have menus or URL highlighting and scrollbars are nonfunctional in the presence of screen, tmux, Emacs or generally any interesting terminal app.
On Windows I use mintty, and I turn off scrollbars there. I simply don't use the mouse to interact with the terminal other than to select text, and that's via the selection buffer for copying.
Speed is highly relevant to me. Most modern terminal emulators are very slow, most noticeably when you get a lot of output in a pane in something like tmux.
(Simplistic benchmarks that test full-screen scrolling usually hand the crown to terminals that don't bother to refresh the screen with every bit of output, but that's not the only part of a terminal emulator that can be slow.)
Interesting. For me, there isn't enough integration between the GUI and terminal ... they shouldn't seem separate, but should be bridged to create a coherent experience.
As a suckless terminal user I would say it is not the goal of the terminal emulator to provide scrollback, gnu screen or tmux are far better tools for that purpose.
In the case of tmux, it's only a single line change to activate mouse scrolling. It's nice to have the times when I switch over from a browser and instinctively try to scroll around.
Why do people still say C/C++? They are two different languages with different purposes and strengths/weaknesses. Rust might be a worthy competitor with C++, assuming many improvements down the line, but it's not even in the same category as C, no matter how much the enthusiasts like to claim otherwise.