“I made an operating system UI within Unity” (reddit.com)
316 points by mariuz on Feb 10, 2020 | hide | past | favorite | 176 comments


Years ago, when I was adapting field manuals into Unity games -- I got asked to create some training for a piece of field kit that relied heavily on a Windows Mobile device (very early Windows Mobile: it had a registry, could run some Win32 apps, and had a noticeable delay of about 350ms between every input and the UI reacting...)

I had that new-job-gotta-impress energy, so went _hard_ at this project. I added so many tiny details to make it _exactly like the Windows Mobile device_. I had simulated battery indicators, signal strength that was calculated by ray-crawling to the nearest in-game "tower" and attenuating based on the materials it encountered along the way (why?!) -- every menu, function, and submenu that I had access to, I implemented. The power button worked, and the restart button would play the Windows Mobile boot animation. No stone left unturned.
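The tower-signal simulation described above can be sketched roughly like this. This is a hypothetical Python illustration, not the original Unity/C# code; the material names, dB values, and function names are all made up for the example:

```python
# Illustrative sketch of ray-based signal attenuation (hypothetical,
# not the actual project code): a raycast from the device toward the
# nearest in-game "tower" reports the materials it passes through,
# and each material knocks some dB off the signal.
ATTENUATION_DB = {"air": 0.0, "drywall": 3.0, "brick": 8.0, "concrete": 12.0}

def signal_strength(materials_along_ray, base_db=-40.0):
    """materials_along_ray: materials the raycast hit between device and tower."""
    loss = sum(ATTENUATION_DB.get(m, 0.0) for m in materials_along_ray)
    return base_db - loss

def bars(db):
    # Map dB to a 0-4 bar count, like the simulated status-bar indicator.
    thresholds = [-100, -90, -80, -70]
    return sum(db > t for t in thresholds)
```

In the actual game the material list would come from the engine's physics raycast; here it is just passed in as strings.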

When I finished it, I assume 6 people saw it, but they never said anything to me about it. That was probably my first adulthood jolt of ennui. Ah, memories.


I worked for the same field-manual-into-Unity employer for a few years, first doing DoDAF architectures on a contract at Ft. Huachuca, then switching to the serious games division for a few weeks before I politely quit in frustration.

We spent years building what amounted to UML diagrams of existing Army units. Great effort went into them. Coordination with active-duty officers, with GS civilian stakeholders, training from certified DoDAF-training instructors; we had TS-SCI clearances.

Then the resulting products went into a database somewhere, and nobody ever looked at them.

I imagine that sort of thing is rife in the government contracting world, but it seems like that particular firm has a real knack for finding truly futile things to get paid for.


It was a rough situation at times. I really took advantage of how self-directed things were in the beginning. It was basically a master's program in whatever technology I chose to learn. Some people used the time to watch Netflix. I wrote a 3D engine for Android almost entirely in OpenGL ES 2.0. After that, Unity was what I chose.

I did find a few things I could do that made an impact. One project I did on a lark even resulted in a little recognition.

I overheard an instructor complaining about an onerous, all-hands-on-deck, eternal data-wrangling task. It seemed like an easy thing to automate. For bureaucracy-dodging reasons, I wrote it in ActionScript 3. All it did was eat files and spit out xlsx files.
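The original tool was written in ActionScript 3; purely as a hedged illustration of the "eat files, spit out xlsx" idea, here is a minimal stdlib-only Python sketch. An .xlsx file is just a zip archive of XML parts, so no spreadsheet library is strictly required for a proof of concept (the function names here are invented, and a real tool would want a proper library):

```python
# Minimal illustrative xlsx writer (hypothetical sketch, not the original
# tool): build the handful of XML parts a spreadsheet reader expects and
# zip them up, with cell values stored as inline strings.
import zipfile
from xml.sax.saxutils import escape

def rows_to_sheet_xml(rows):
    cells = "".join(
        "<row>" + "".join(
            f'<c t="inlineStr"><is><t>{escape(str(v))}</t></is></c>' for v in row
        ) + "</row>"
        for row in rows
    )
    return ('<?xml version="1.0"?>'
            '<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">'
            f'<sheetData>{cells}</sheetData></worksheet>')

def write_xlsx(rows, path):
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("[Content_Types].xml",
            '<?xml version="1.0"?>'
            '<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">'
            '<Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>'
            '<Default Extension="xml" ContentType="application/xml"/>'
            '<Override PartName="/xl/workbook.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.main+xml"/>'
            '<Override PartName="/xl/worksheets/sheet1.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml"/>'
            '</Types>')
        z.writestr("_rels/.rels",
            '<?xml version="1.0"?>'
            '<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">'
            '<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="xl/workbook.xml"/>'
            '</Relationships>')
        z.writestr("xl/workbook.xml",
            '<?xml version="1.0"?>'
            '<workbook xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" '
            'xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships">'
            '<sheets><sheet name="Sheet1" sheetId="1" r:id="rId1"/></sheets></workbook>')
        z.writestr("xl/_rels/workbook.xml.rels",
            '<?xml version="1.0"?>'
            '<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">'
            '<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/worksheet" Target="worksheets/sheet1.xml"/>'
            '</Relationships>')
        z.writestr("xl/worksheets/sheet1.xml", rows_to_sheet_xml(rows))
```

The payoff of a tool like this is exactly what the comment describes: whatever files the instructors were hand-copying get parsed into `rows` once, and the re-keying step disappears behind one button.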

Turns out it wiped out XX% of the manual labor every instructor on post had to do in a single button press.

A GS megaboss said it was going to save the government millions of dollars a year. Was a little disappointed I never personally got a challenge coin for it. My boss did :)

I'm grateful for the numbers, though; I could leave that job confident I saved the taxpayer more than I cost.

Hopefully some of the things I'm working on for my current employer will get some visibility in the outside world. Software is ephemera, but I'd like to make a token effort to be remembered.


Awesome mindset and perspective. Wish I could say the same about my co-workers. I'd work with you in a heartbeat... In my experience, this kind of intrinsic motivation is highly contagious, and I wish I could be around it more often.


I had an intern who whipped up a new set of training walkthrough slides, using MS Paint to draw color-coded boxes and arrows on screenshots to teach you how to use the system. It was like a day of work for him. People with a C in their job title saw that PowerPoint. It was a really damn good training presentation.

I call it Schrödinger's work. Maybe it matters to someone, maybe it doesn't. You don't know until you do it. The stuff you spend a month on winds up being worthless, and the stuff you spend an hour on winds up being used everywhere. There's no telling which is which. Such is life creating support documentation for systems that live and die by things outside your team's control.


Honestly, this kind of thing is still super valuable. It's really nice when you go digging up some old resource and have it be there when it's needed, and it's amazing when it was done right.

What sucks is if it was half-assed when you finally do find it. Or worse, it gets forgotten about and the work gets duplicated every time someone needs that info.


This must have been a huge project, because I had a friend in Charlottesville, Virginia who got hired in 2008 to do almost the same for the Navy, "what amounted to UML diagrams of existing Navy units." They paid him $125 an hour and he took the work because it was the middle of the 2008 recession, but it was boring work. A year later they asked him to do more, but he turned it down.


It's basically rife in any organisation with more than ~2K total employees, regardless of private or public sector. The only thing that changes is the scope of how much funding and talent get wasted.


Hey, I worked at a really dirty, ahem, company a few years back after leaving the army. I appreciate the details you put into your work.


Thank you! :)


Well, no fancy UI, but I did port a Z80 CPU which could run a full operating system (CP/M 2) to Unreal Engine:

https://i.imgur.com/2nagJS4.png

I did implement graphics hardware as well, so in theory you could write a GUI along the lines of Contiki or something:

https://i.imgur.com/DmnVA2I.png

The hard drives were configurable (hard drive images up to 4MB supported) and, as per the pic above, the lights on the drives light up when accessing the disk.

ANSI/VT100 support on the terminal as well:

https://i.imgur.com/DlftREp.png

A Telnet+ZMODEM client was working, so you could download software directly from Internet-connected BBSes:

https://i.imgur.com/K44GT1W.png

Or do word processing in space with WordStar 4:

https://i.imgur.com/rIY1he8.png

I wanted to make something of a cross between FTL and 0x10C, but in 3D, and where your spaceship controls were linked up to software running on the Z80. I had up to four Z80 instances running (could have supported more), and wanted to be able to network them to each other as well as to the spaceship control plane.


OK, this is actually quite impressive. Geeky beyond belief, and the question is how deep would you care to go... emulating an x86 which is running your app emulating an x86... Still impressed.


Funnily enough, I'm also working on a Motorola 68K emulation component as well, and I did consider trying to compile the Z80 emulator to run on it.


Okay, what was this used in, and where/how can we play with it? Preferably in test/debug mode so we can directly poke at the different features. (But source not necessary.)

Also - the ANSI graphics example... it honestly looks bitmapped, not like a bunch of colored character cells. But then the graphics example (link 2) also looks bitmapped, so I think I'm missing something.


The ANSI is on the screen on the right on that example, the Blocktronics original file is here:

https://16colo.rs/pack/blocktronics_16colors/k1-shack.ans

Colors are a little off in retrospect, not sure whether I ever fixed that. The screen on the left shows the bitmap graphics mode with the alien from Alien: Covenant.

I should upload it somewhere for people to play with and give feedback. The UE4 project build is about 1GB; I'll have to search around on my hard drive to find where the hell I put the Z80 floppy/hard disk images to boot the Z80 VMs, though. They contained some fun stuff, like a working port of Rogue, the Infocom Zork games, a C compiler, etc.


Ah, it was a bitmap image. But TIL about 16colo.rs, which is really cool.

The screenshots you've posted definitely look interesting to poke around with. Do the keys press as you type? :D

1GB is sizable but not too out of reach for those interested. Sounds like there are a lot of assets/interesting stuff elsewhere in the project?

I wondered about a good "somewhere" to put it online for a bit - I don't have a good answer for this. I checked itch.io, which apparently has a 500MB limit that can be expanded to 2GB on request. (https://itch.io/t/324861/need-help-with-size-limit-on-game-u..., https://itch.io/t/46291/uploading-large-games-faq). Archive.org would probably work, despite the most likely reaction being at least one raised eyebrow.

(And... hm, user-specifiable disk images are probably something you've already thought of.)


> Ah, it was a bitmap image. But TIL about 16colo.rs, which is really cool.

No, it's not. That ANSI image on the right is rendered in-engine via the VT100 component of the Z80 computer, which I wrote from scratch. You can take the .ans file from the 16 colors website (which is what I did) and dump it to the terminal and that's how I got that screenshot.

Getting it to look right means not only emulating the ANSI color codes, but implementing the IBM CP437 character set. CP/M purists would attack me with a pointed stick, since only the IBM PC had that character set by default, and it would have only been available on CP/M-86, not CP/M-80. Otherwise, ANSI files like that would have only displayed correctly on MS-DOS with ANSI.SYS loaded. BTW the emulated graphics card supports loading different character glyphs, so it's totally possible to load a "ye olde" fancy character set and play Zork with it, as should be done.
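As a rough illustration of the two pieces described above -- decoding IBM CP437 bytes into Unicode glyphs and interpreting ANSI escape codes -- here is a toy Python sketch, not the actual UE4/VT100 component (it handles only SGR color sequences; a real terminal handles cursor movement, clearing, and much more):

```python
# Toy sketch of .ans rendering (hypothetical, not the project's code):
# strip out ANSI SGR sequences while recording the color codes, then
# decode the remaining bytes with Python's built-in cp437 codec, which
# maps IBM PC box-drawing and shade bytes to their Unicode glyphs.
import re

SGR = re.compile(rb"\x1b\[([0-9;]*)m")  # ESC [ params m  -- colors only

def render(ansi_bytes):
    """Return (text, colors): decoded CP437 text and the SGR codes seen."""
    colors = []
    def grab(match):
        colors.extend(int(p) for p in match.group(1).split(b";") if p)
        return b""  # remove the escape sequence from the byte stream
    stripped = SGR.sub(grab, ansi_bytes)
    return stripped.decode("cp437"), colors
```

For example, the byte 0xB0 is the light-shade character in CP437, which is why dumping a 16colo.rs .ans file through a CP437-aware terminal produces the classic block art.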

> The screenshots you've posted definitely look interesting to poke around with. Do the keys press as you type? :D

Not really; the hand animations flail around as you type and it looks kind of funny. It's something I'd like to improve, maybe with three keyboard proficiency levels: hunt-and-peck, touch typing, and something in between.


I really love this.


Thank you!


That's dope. I'm sure the original post in this thread was a solid learning experience for its creator, but unlike what you just showed, it doesn't do anything particularly novel or clever. You should write up a blog post and submit it to HN!


Thank you for the kind words. It was a lot of fun to work on and I learned a lot; I hope to get back to it when time permits. The last build I did was in March 2018.


Looks like a generic Linux distro. Am I the only one getting "visual fatigue" from seeing flat design? I just feel like we need a trend change. It's been way too long. I find myself looking at the old Windows Vista, Windows XP, early Mac OS X, or even Windows 3.1 designs and longing for that simplicity.


I don't think flat design is the issue (although I agree that it's getting to be as overdone as skeuomorphism was back in the early '00s). The real issue is that it's discarded all of our carefully built-up visual design language (i.e. what makes things feel 'intuitive') in favour of a churning mishmash of beautiful new impediments to understanding. The disappearance of top menus. Inscrutable icons instead of text. Hamburger menus, arcane X-finger-slide-from-Y-edge style secret handshakes; it just keeps getting worse.


> Looks like a generic Linux distro. Am I the only one getting "visual fatigue" over seeing flat design?

I have `comment fatigue` over flat design complaints.


I have meta-comment fatigue and I hate myself.


Fatigue comments considered harmful.


I feel like you missed the point of the post. It's less about the aesthetics and more about the fact that someone used Unity, a game development tool, to simulate an OS UI.


Thanks for that, I was trying to figure out what "Unity" was in this context, given that Ubuntu had something with the same name as its main UI for about 7 years...


Interestingly enough, Unity the game engine has been around for 5 years longer than Unity the DE.


There are several things called Unity, which I suppose is why the game engine is often referred to as Unity3d.


Yeah, same here.

Like, what's the point of an "operating system UI" in a Linux desktop environment?

But then, I also don't get the point of doing that in some game engine called Unity.

I mean, is that a viable way to do a desktop?


You're literally on Hacker news. Not everything needs to be viable. Fun is enough.


I guess.

Maybe it's just satire about flashy but useless eye candy.

When you're running lots of VMs, you end up using as little RAM as possible. It used to bug the hell out of me that Whonix came with KDE. Now finally they switched to XFCE. My Debian VMs get 1-2GB, and some script-laden sites can freeze them.


As stated in the link (although it mightn't be clear if you're not a game dev), it's so you can make games with operating system UI, such as Emily Is Away[1].

The linked one is particularly polished; it would have taken ages to implement all those little details.

[1] https://store.steampowered.com/app/417860/Emily_is_Away/


Ah. I missed that. Thanks.


The point of doing an operating system UI in a game engine is so that you can use that operating system UI within games written in that engine!

There are several examples -- Simulacra, Emily is Away, Hypnospace Outlaw -- that use the interface of an (imaginary) computer as their primary framing device.

Or it could be useful in an immersive-sim type game where there happen to be computers you can interact with; you could immediately see the applications in some kind of hacking-themed game.


Huh. I never got into gaming. Except that my life has become more and more LARP.

So yeah, I get it. Of course there'd be computers in games.

But once you have an OS in a game, why not just run a VM? Or would that use too many resources?


I thought the point of posting it to HN was about the comments, about the deep hostility to open source and sharing work/code that is prevalent in the game dev community.


Pinstriped OS X, now that was a time to be alive.


That and brushed metal. Such thrill.



UI designs come and go, but this era of OS X design will always look unbelievably good.


Disgusting.


I respect your opinion. There was some (OK, a lot) of the skeuomorphism of the time, and even some parts of the old OS X, that were terrible, but in my own personal opinion there was some good too.


I don't know why you mentioned Linux when it has a ton of window managers which provide real alternatives to any specific UI design. Window Maker looks different from fvwm95 looks different from IceWM looks different from <insert tiling WM here> looks different from TWM... and all of those window managers are usable, to the extent applications don't care what they're running in and the OS itself doesn't care about window managers at all.

So you can have the UI of your dreams, or at least a UI which is honestly different from what you're railing against, if you just open your eyes a little and see beyond the defaults.


But ICCCM!


This https://xpq4.sourceforge.io/ is for https://q4os.org/ which is based on http://trinitydesktop.org/ which in turn is based on https://en.wikipedia.org/wiki/K_Desktop_Environment_3 but adapted to more modern systems.

Rip it apart, integrate it into your system, be happy. Maybe do it in a generalized way and "Show HN" it. But why? It is out there, more or less ready to use.

But to be honest, I think this nostalgia is overrated. It is not good, just what you were used to for a long time, because you had no other options, except for using alternative Windows shells which broke with every update -- and most users shied away from such deep modding anyway.

Personally I can't stand the XP look, but can live with "classic", which is based on that Windows 2000/98 style. But it is too dark! And I dislike some of the common icons and widgets.

Anyway, I've been using something "Material" for about 20 years now, with lighter color palettes, but not that low-contrast stuff which is so common now. Very small padding, and small but visible borders. You can tune and modify that into blissful oblivion, and have it applied across Qt3/4/5 and GTK2/3.

It's up to you.

Or just choose another distro because of the nice desktop background!1!!

(Hop! Hop! Hop!)


> I just feel like we need a trend change.

Why would you like a change? So you want to see the common UIs constantly redesigned every few years?


[flagged]


I think Material UI is a good baseline for most applications, especially those whose developers do not have the resources to design their own UI.


[flagged]


Well, it literally is metaphorical cancer, in that it's a case of a part (interface design) of a system (software development) going rogue and sabotaging both the function it's supposed to carry out and the system as a whole in an effort to enrich itself, but it's not that great a metaphor.


Flat design is simpler than skeuomorphism, almost by definition.

What you seem to want is complexity.


There's plenty of middle ground between the current extreme flatness and, say, Mac OS X Tiger. An interface with a simple 3D bevel look (such as in Motif or the toolkit in NeXTSTEP) is no more skeuomorphic than a flat interface, especially considering how prevalent touchscreen controls are today.

Flatness to the current point of minimalism is, to me, very taxing and complex. Things blend together so much it's hard to tell where one UI element begins and another one ends. I frequently misclick in Windows 10/Office 365 even though I've used it for years now, simply because the flatness makes me misinterpret the layout and function of the UI.


I miss the look of MacOS X Tiger so much. It was a thing of beauty.


Complexity is good if it allows you to see affordances more easily.

Maybe buttons should look different from non-clickable icons. That requires some complexity.


I don't disagree that complexity can be good! I was just correcting the "longing for that simplicity" statement, which was incorrect.


Flat design is way too simple.

I liked the 3Dness of the buttons in Windows 95-98.

You knew if something was clickable right away.


Easy isn’t always so simple.


Correct, but flat design is objectively simpler than skeuomorphism.

I don't even mind skeuomorphism -- I quite like it -- but it's just bullheaded error that has people claiming it's simpler than flat design.

Complexity isn't a sin or something.


From the comments:

This is a remake of Project Glass, which is a simulated operating system UI, this time implemented in Unity. It's not a full OS, but every simulated app and widget inside is fully functional.

There is currently only a Windows demo [1] available, and he's undecided whether he'll release it on GitHub or not.

[1] https://drive.google.com/file/d/1p3AACMd6KuRnHctsq42uJfMq3Bw...


What is Project Glass in this context? Forgive my ignorance, but search is confusing here.


The same project. This version is a remake of the original version that the user had made two years ago. The full comment is clearer:

> About Project Glass:

> Glass is a simulated operating system user interface (UI) project and it is being made with Unity 2018.4. It is not a real OS, although everything in the package is functional and can be changed easily. This is basically a remake of the original project which was released 2 years ago. If you're wondering why I'm working on this project, you can search for Simulacra, Welcome to the Game or Emily is Away and you'll get the idea (plus it's fun). You can even count GTA for its phones and browser. This project is much much bigger than the examples though.

> If you want, you can try it yourself by downloading this Windows demo: https://drive.google.com/file/d/1p3AACMd6KuRnHctsq42uJfMq3Bw...

> 60fps showcase: https://www.youtube.com/watch?v=Yu_2e0LebNY


Basically the same thing, done a year and a half ago.

https://github.com/Michsky/glassos


Maybe a reference to Sun's Looking Glass project?

https://www.youtube.com/watch?v=JXv8VlpoK_g


Looking Glass was a real window manager though.


In the 1970s and 1980s, GUIs devoted a large portion of the machine's power (and other resources such as memory and disk space) to improving usability. This reduced raw application performance and power efficiency but it was considered a worthwhile tradeoff.

For years we've been optimizing battery life, but now that mobile devices can run for 8-10 hours or more on a single charge, perhaps we should reconsider spending more of the power budget (and implementation effort) on usability.

On a side note, it baffles me that devices like the PS4 that are amazingly responsive during gameplay have extremely clunky and unresponsive system GUIs. Can't we get some smart game programmers to rewrite the PS4 system GUI to make it fast and responsive?


Maybe it's because it uses (at least at launch; dunno if they ever changed it for something else) web tech for its interface?


It's garbage. The PSN store in particular feels like a very unresponsive web page.


Reddit's modern userbase is really interesting. There are some incredibly interesting people who use it, but you also find a bunch of accounts that are under a year old with lines like this:

> Please don't. I would like to see you get rewarded for your hard work. Screw the parasites.

Their new UI seems to be doing its job, but I wonder if the decline in thread quality was worth it.


> reddit's modern userbase is really interesting. ... you also find a bunch of accounts that are under a year old with lines like this:

You should visit /r/cscareerquestions and sort by New some time. Many of the younger posters have some wild expectations about CS careers, compensation, and the workplace.

90% of the young people I interview, hire, and work with are wonderful these days. However, a small minority arrive with unchecked entitlement and bizarre expectations, and I think sites like Reddit and Twitter are to blame.

I think there's a small subset of people who grew up watching that "Fuck you, pay me" video on YouTube, combined with a hot job market and an economy that hasn't seen a recession in their adult years. Combine that with stories of Google engineers making $200K+, and you end up with a small minority of developers with an extreme sense of entitlement.

I've seen developers try to invoice companies for time spent in on-site interviews, junior developers who couldn't pass basic FizzBuzz-level coding challenges demanding $250K compensation, and a surprising number of /r/cscareerquestions threads justifying "ghosting" your employer. Not to mention the meltdowns that happen when people get put on performance improvement programs. And of course, the never ending comments encouraging people to sue their employers for trivial things.

Reddit has some gems, but it's a breeding ground for keyboard warriors. Upvotes tend to follow what people want to hear rather than the best advice. It's easy to be an internet tough guy, but a lot of that grumpiness and entitlement breaks down when they meet the real world.


That kind of drive-by low-content posting is the inevitable result of lax and/or overwhelmed moderation combined with a massive userbase and ultra-ephemeral posting. It's possible to suppress it, but it takes either a community small enough to have some organic buy-in of posting standards (see HN, for example), aggressive moderation, or both.


It's also interesting how most of the comments disagree with that one, yet it still has positive upvotes.


I made another comment about this: this opinion is actually commonly held, and especially vocal, in the «hobby» game dev community. I have encountered similar hostility towards open source and sharing many times, e.g. on Unreal Engine forums; it is not a Reddit thing.


For context, the OP has a few other similar UI packs for sale.


Could many AI chat bot developers be live testing their bots on Reddit?


As if the quality of HN threads is better by leaps and bounds.


Thank you for linking Old Reddit.


The animations are just amazing. It's the exact field in which I still find modern applications lacking, but games exceptionally good (including game UI).

I think UI frameworks should take more from game engines, which I think has happened over the last two decades, albeit slowly. My best current example is probably Qt Quick, which builds completely on OpenGL and uses a bunch of game engine terminology in its documentation -- but I have also found that it's possible, and not too hard, to make working application mockups using Godot.


Looks really cool! Unfortunately it looks like there is only a Windows version available to play around with.

If this could run on Linux and have support for running arbitrary X11 apps, then it would be very interesting! I don't have a lot of personal knowledge about Unity3d, but I do hope the creator puts this up on GitHub or something. I'd love to take a closer look.


I am not sure if there is prior art for X11 proper, but there is at least one Wayland compositor built on top of the Godot engine that can run X programs through XWayland: https://github.com/SimulaVR/Simula

And yes, with a bit of hacking you can do silly things like put physics on the windows and shoot holes in them. It won't get you close to what is in that Unity demo though. That would require re-designing all the apps. In my experience it's a massive pain in the ass to get eye-candy like that working correctly for arbitrary UIs and it's not worth the effort for most apps.


It's probably not hard to get it running on Linux, but it's not a shell; it's just a toy with built-in 'apps'. Running arbitrary apps rendered to a texture in this app and forwarding user input back to them is probably out of scope.


> but it's not a shell. It's just a toy with built in 'apps.'

Yes that's what I figured. For sure not what it's designed to do as of yet, but if that gap could be bridged in a practical way then this seems like a good head start.


As someone who has never used Unity, knowing it can be used for cross platform development, I would like to know how difficult it is to target as many platforms as possible. Let’s say I made a pretty standard 2D indie platformer. Not particularly graphically intensive. Is it possible to just check the box for each platform you would like and it works pretty well, or is there a lot of conversion work?


It is roughly that simple. You'll have to do OS-specific stuff for things like metadata, icons, any alternate control methods (mouse/keyboard vs touch), etc, but in my limited trials I've never had to do any platform-specific stuff that wasn't 100% justified. This is one of Unity's most important and valuable features, IMO.


I tried making the original Macintosh OS in Godot, and my takeaway was that it was really enjoyable to make UIs in a game engine. It would be interesting to see some "avant-garde" movement of making not only games but also native apps using game engines.


This isn't quite that, but I can never not link to https://arcan-fe.com/ in cases like this. He even updated today!


This is like Windows Vista Aero UI on crack.

There is a reason that UIs don't look like this. The transparency alone is a major issue for usability, accessibility, and productivity.

It's an interesting project. I wish this person had pushed themselves to think of new paradigms, rather than applying more effects to very staid-looking interaction design concepts.


Most games of the sort the creator is targeting here (games with a built-in fake OS) have their UI look fairly familiar, since it's more about creating a believable world than doing something exciting and new with the OS itself.


...Which then excludes the demographic of people for whom "believable world" means "exciting and new".

As in, tell me something I don't know - I'll be able to believe [in] that much more effectively than a revamp of some old dead thing I've been staring at boredly for the past X amount of time.


Am I the only one who thought it was going to be an OS navigated in an FPS environment, like the old Doom sysadmin port?

Looks pretty slick though.


Kinda ironic that when they were actually using real PC tech, people were claiming they weren't, whereas now Hollywood sins the crap outta the hacker-type, and nobody bats an eye.

https://en.wikipedia.org/wiki/Fsn_(file_manager)

EDIT: Come to think of it... back then, the tech was probably just about mature enough to make such a thing possible; certainly today it can beautifully be done, and can maybe even be more efficient (for the user/work, not hardware) than classic terminal/GUI approaches. I've never used a third-party file explorer, apart maybe from WinSCP, but what sounded ridiculous could actually be pretty genius.

Anyone here using a 3D filesystem explorer?


I'm sure there are plenty of usability issues but damn it looks slick.

It's sad that a kid can make this with Unity3D, probably in his/her free time, while Microsoft with all its billions is not capable of producing a good UI for Windows.


It was very clean and impressive work, but the longer I watched the more I realized how disconnected I feel from all these weird "apps".

- Music player without connection to the real world (spotify/youtube)

- Notes app without connection to the real world (the file system/other platforms)

- Messages app without connection to the real world (sms / messenger / whatsapp / etc etc)

- Photos app without connection to the real world (some kind of cloud synced file system)

- Notifications ... I really still have not found a use for them, except for IM on my phone

Stop bundling these things :( It looks nice, but the real-world use is so limited. Focus on really enabling other stuff instead.


I'm not sure if you realize, but this isn't intended as a "real OS" for practical use.

It's intended for use inside of computer games; whether for simulation of using computers inside the game-world, or for games where the framing device is a (simulated) desktop interface.

In which case, the "limited real world use" is hardly a flaw!


1. This is more of a design study than anything else; having Unity provide your OS shell is ridiculous.

2. You shouldn't consider your own needs the standard. I for one would be very happy with:

- A music player that doesn't integrate with user-hostile, walled-garden DRM garbage

- Ditto for messaging

- Just using your own file sync for documents and pictures

Also, well, notes without filesystem access feels really useless, we can agree on that one.


It's a nice looking display, and it's running at a nice 60FPS. However, operating systems don't generally run in full GPU mode all the time -- it's a huge amount of power to be redrawing the screen that often, and that translates to poor battery life on mobile and expensive power costs on servers.

So that's one of many reasons practical OSes don't do their UIs with GPU shaders when they can otherwise get away with less.


Your phrasing of this is a bit unclear. I think you mean that no "real" UI redraws when nothing has changed.

But otherwise, most operating systems do run in full GPU mode all the time. The true-mobile ones, i.e. iOS and Android, almost always use dedicated hardware to handle composition without spinning up the GPU (although both use the GPU to render, so whenever something does change it's immediately back on the GPU), but "desktop" ones are generally always doing GPU composition as long as the display is on -- with some hand-wavy exceptions, primarily for video layers in select circumstances with proper hardware support.


Windows is actually pretty good about doing incremental present, only writing the composited regions that have actually changed. But macOS is bad at this, so people have to fake it in ugly ways to reduce power consumption. It's especially bad when all you want to do is blink a cursor.

Whether apps make use of these capabilities is another question. It's pretty easy to get lazy and blat out the whole window every time when you know the GPU can handle the pixel bandwidth, and things like imgui explicitly make this tradeoff to keep logic simple.
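The incremental-present idea being discussed can be sketched abstractly. This is a toy Python model -- not any real compositor API -- where the compositor tracks damage rectangles and copies only those regions on present, rather than rewriting the whole frame:

```python
# Toy model of damage-tracked (incremental) present: only the rectangles
# invalidated since the last frame get copied from the back buffer, so a
# blinking cursor rewrites a few pixels instead of the entire screen.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x: int
    y: int
    w: int
    h: int

class Compositor:
    def __init__(self, width, height):
        self.front = [[0] * width for _ in range(height)]
        self.damage = []  # rects invalidated since the last present

    def invalidate(self, rect):
        self.damage.append(rect)

    def present(self, back):
        # Copy only damaged pixels; everything else keeps its previous
        # contents. Returns the number of pixels actually written.
        copied = 0
        for r in self.damage:
            for y in range(r.y, r.y + r.h):
                for x in range(r.x, r.x + r.w):
                    self.front[y][x] = back[y][x]
                    copied += 1
        self.damage.clear()
        return copied
```

On a 1080p screen, presenting a 16x32 cursor rect this way touches ~512 pixels instead of ~2 million, which is the power saving the comment is getting at.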


TIL hardware planes (the "dedicated hardware" you refer to?) != the GPU.

I presume these are inside the GPU die, though? Probably near the display controller logic?


Whether or not it's inside the GPU die would depend on whether the display controller is part of the GPU die or not. On a mobile SoC everything tends to be on the one die anyway so... sure. On a discrete desktop GPU I have absolutely no idea. They might not even bother having one at all?

The mobile ones are pretty good, though. They can do something like ~6 composition layers with alpha blending, cropping, rotation, and apply a color matrix for display calibration. All in the dedicated HWC silicon; the GPU can be powered off (or doing something else, like rendering a game).
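As a rough per-pixel illustration of what that composition step does, here is source-over blending of a couple of layers followed by a color matrix. The layer colors, alphas, and matrix are made up for the example:

```python
# Sketch of hardware-composer-style output for one pixel: blend layers
# back-to-front with source-over, then apply a 3x3 color calibration
# matrix. All values are invented for illustration.

def over(dst, src, alpha):
    """Source-over blend of RGB triples: src with given alpha on dst."""
    return tuple(alpha * s + (1 - alpha) * d for s, d in zip(src, dst))

def apply_matrix(rgb, m):
    """Multiply an RGB triple by a 3x3 color matrix."""
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

# Two layers over a black background: opaque wallpaper, 50% white UI.
pixel = (0.0, 0.0, 0.0)
pixel = over(pixel, (0.2, 0.4, 0.8), 1.0)   # wallpaper layer
pixel = over(pixel, (1.0, 1.0, 1.0), 0.5)   # translucent UI layer

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_matrix(pixel, identity))  # approx. (0.6, 0.7, 0.9)
```

The point is that this math is cheap, fixed-function work, so dedicated silicon can do it without waking the GPU.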


This is not true. OSes generally blit their entire UI with GPU shaders, for power efficiency as much as for performance. This may be mixed with scanout compositing, depending on the system.


There are now immediate-mode GUI projects such as Dear ImGui and Nuklear that are just that: GPU-accelerated UI.

Here is something I am coding with Dear ImGui:

https://www.youtube.com/watch?v=vbZsE7ACXrw

Even on an old MacBook, it keeps running at 60 FPS.


- Redrawing the entire screen on every frame is indeed disastrous

- Redrawing entire application windows every time something gets moved around a bit and something gets a paint event is also higher load than necessary

- Drawing things that won't change every frame into textures (not shaders), and then compositing those textures together on every frame, is by far the most efficient approach today, on all platforms
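That third approach can be sketched like so (the window names, the string "texture" stand-in, and the render-call accounting are purely illustrative, not any real compositor's implementation):

```python
# Sketch of cache-and-composite: each window's content is rendered into
# a cached "texture" only when it is marked dirty; composition every
# frame just reuses the caches. Illustrative toy, not a real API.

render_calls = {"terminal": 0, "clock": 0}

class Window:
    def __init__(self, name):
        self.name = name
        self.dirty = True
        self.texture = None

    def render(self):
        """Expensive step: repaint the window's contents into its cache."""
        render_calls[self.name] += 1
        self.texture = f"pixels of {self.name}"
        self.dirty = False

def composite(windows):
    """Re-render only dirty windows, then blend all cached textures."""
    for w in windows:
        if w.dirty:
            w.render()
    return [w.texture for w in windows]  # stand-in for GPU blending

wins = [Window("terminal"), Window("clock")]
composite(wins)            # frame 1: both windows rendered once
wins[1].dirty = True       # only the clock changed
composite(wins)            # frame 2: clock re-rendered, terminal reused
print(render_calls)        # {'terminal': 1, 'clock': 2}
```

The composite step still runs every frame, but the expensive per-window repaints only happen on change.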

I am genuinely interested to hear the definition of "less" that you refer to. It's possible some piece of assumed context is being lost in translation.


> Drawing things that won't change every frame into textures (not shaders), and then compositing those textures together on every frame, is by far the most efficient approach today, on all platforms

Not really true. It's a lot more efficient to draw quite a few things from scratch than it is to use a cache texture. Texture composition requires a lot of memory bandwidth, which tends to be a rather constrained resource. It is very effective to do things like put each window in its own texture, yes. Within a given window using caching textures tends to be a net-loss, though.


OSX and iOS would probably disagree


How does wayland work?


I would love a 60 fps fully graphically-accelerated UI. This one looks beautiful. Cool concept to prototype in Unity too.


Such a UI is not insanely hard to implement in Unity, as it would be if it were HTML/CSS.


I remember working on a Visual Basic "OS" back in 2005 with something similar in mind. I don't remember going further than implementing a basic desktop and a calculator. Fun times.


He did a great job, but I wonder if Unity optimizes the runtime for components that don't need updating every frame?


I believe Unity does this if the scene is split into several separate UI canvases:

https://unity3d.com/how-to/unity-ui-optimization-tips
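The gist, as a toy model (not actual Unity code, just the rebuild accounting): changing any element dirties its whole canvas, so isolating frequently-changing elements on their own canvas avoids re-batching the static ones:

```python
# Toy model of per-canvas UI batching: a change to any element dirties
# its whole canvas, which must then rebuild geometry for every element
# it contains. Splitting static and animated elements across canvases
# keeps the static one's rebuild cost at zero. Illustrative only.

class Canvas:
    def __init__(self, elements):
        self.elements = list(elements)
        self.dirty = False
        self.rebuild_cost = 0  # total elements re-batched so far

    def change(self, element):
        assert element in self.elements
        self.dirty = True  # any change dirties the entire canvas

    def rebuild(self):
        if self.dirty:
            self.rebuild_cost += len(self.elements)
            self.dirty = False

# One big canvas: a ticking clock forces rebatching 100 static icons.
big = Canvas(["clock"] + [f"icon{i}" for i in range(100)])
big.change("clock"); big.rebuild()

# Split canvases: the clock lives alone, static icons stay untouched.
hud = Canvas(["clock"])
icons = Canvas([f"icon{i}" for i in range(100)])
hud.change("clock"); hud.rebuild(); icons.rebuild()

print(big.rebuild_cost, hud.rebuild_cost + icons.rebuild_cost)  # 101 1
```

Same visual result, two orders of magnitude less rebuild work in this contrived case.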


Next create a mock OS inside that.


That can play Unity games...


Time to make Unity UI inside it!


Now hook it up to a VR headset so I can have 6 screens?


Am I the only one who hates most animation? If the animation doesn't serve a clear purpose in signalling something to the user, I want it gone. I'm not convinced that we've seen any meaningful advance in OS UI design since Windows 95.


Type-to-search is significant— Ubuntu has it, MacOS sort of has it (Spotlight), Windows 10 definitely has it. It wasn't until it mysteriously broke last week [1] that I suddenly realised how dependent upon it I had become; I felt like a caveman manually paging through the list of installed software or seeking out the proper icon on my desktop.

[1] https://www.forbes.com/sites/gordonkelly/2020/02/09/windows-...


You lost me at MacOS "sort of has it". MacOS's Spotlight works the best of all the ones you listed. AFAIK, in Gnome, you can't search for text in files with the search function. I believe you could in Unity. Windows' search mostly works, but I've had issues with it searching for text in files.


As far as the filesystem goes, navigating in Windows with the keyboard is WAY easier.

I can't seem to figure out how to effectively navigate the file system on MacOS without resorting to the terminal. I routinely have trouble locating mounted file systems, can't figure out how to type in a specific path I want to go to, etc. These things are trivial on Windows, but seem not to be possible on MacOS. I'll give it the benefit of the doubt and assume that I'm just too used to Windows to discover this functionality, but it's a real PITA.


Try Cmd+Shift+G in finder or open dialogues.


Very helpful. Thanks :)

Wish this were a bit more discoverable on MacOS - I think that's what Windows' file explorer is particularly good at. Though, I'm almost certainly biased.


I'm back on Windows 10 now after a decade of MacOS.

But my recollection is that it's fine as a search feature, and not that great as a launcher— if you just blindly type a prefix and press enter (as you can on Windows or Ubuntu), you end up launching a window of extended search results. Maybe I was doing it wrong somehow, but whenever I used Spotlight to search for an app, I needed to type it in, wait for the search to completely resolve, and then use the mouse to select and launch the application out of the results.


Yeah, Windows' default search is kinda bad. It's only really OK for applications and settings. For everything else file-related, voidtools' Everything is a godsend.


Why does MacOS only "sort of" have type-to-search?


That seems like an inaccurate statement; I would argue OSX was the first one to do type-to-search properly... Not a huge Ubuntu user, admittedly.


AFAIK, it only searches through your apps, not files.


Spotlight is the OG OS-integrated full-text search, originating in 2004/5:

https://en.wikipedia.org/wiki/Spotlight_(software)


Spotlight has not been useful to me in over half a decade. It never finds the files I need, despite the fact that I know they're there, I know I typed them right, and I know its settings haven't hidden or ignored them. And instead it almost always shows useless files in completely obscure parts of the OS that shouldn't even show up in Spotlight. I turned it off like a year ago and haven't regretted it since.


Maybe your md index is borked. Look into the mdutil (maintenance) and mdfind (search) commands.


Interesting- I find it very useful. It even searches my email that is locally cached. Lots of apps create hooks into it. Major part of the ecosystem that makes MacOS what it is.

Not sure why it's borked for you...


Spotlight on macOS most definitely searches files; much of the system is in fact built on file system metadata attributes. (It was originally designed by Dominic Giampaolo, who also architected BeOS's file system.)


Recently I discovered that Xcode uses Spotlight in an interesting way — when you want to convert crash logs from your apps into readable stack traces, you only need to place the relevant symbol files anywhere on the disk where Spotlight can find them. No need to import them directly into the IDE.


Not just FS metadata, it can index ID3 tags, EXIF, and whatnot, and it’s fairly extensible, although underused.

Kind of like the underpinning concept of AppleScript and app dictionaries, awesome tech and concepts, but it’s sad to see the promise of the extensible, composable desktop slowly dying.


> It was originally designed by Dominic Giampaolo

This is the first that I'm hearing of this. That is absolutely sick.


I just tried searching for a file with Spotlight and it found the file by name.


Lots of animations do serve a clear purpose. Humans are not used to things popping up into existence immediately out of nowhere. Yes, you may argue that you and me are tech savvy and trained our brains to not care as much. But that leaves everyone else.

I think a bunch of animations shown in this piece are over the top. But quick animations can help immensely with usability. The best ones are so subtle that most people don't even notice. Interestingly, lots of game UIs have subtle animations, I guess because they are easy to do. OSX, iOS and even Android have animations _everywhere_, but they barely register unless you are looking for them (exception for windows getting minimized, but that does serve a function).

> I'm not convinced that we've seen any meaningful advance in OS UI design since Windows 95.

If that's the bar, then we haven't seen any meaningful advancements since Xerox PARC.


> Humans are not used to things popping up into existence immediately out of nowhere

I'm not convinced of this, humans come with some impressive visual diffing hardware and we rely on this to detect animations in the first place. We also know that the more animation going on the more likely we are to miss things.

We also seem to be stuck in the notion of animations happening before something is shown, a slide in or a fade in or something like that. If animations are really necessary why can't the action be instant with a subtle post animation to highlight what's changed?


I'm not so sure today's animations really have a clear purpose. Remember when the animations on the iPhone calculator were so long that you often got the wrong result, because it did not register each pressed button? It can also be very frustrating if you are in a hurry.

And in Win 95 you could still choose all the colors yourself. Now you can choose light or dark, and almost no app listens to it.


> because it did not register each pressed button (because the animation was so long).

No, it was because the animation was blocking.


>If that's the bar, then we haven't seen any meaningful advancements ever since the Xerox PARC.

Yeah, probably so. It just so happens that I built my first PC sometime around 1994 or 95 so Windows 95 is the first graphical OS that I can personally remember.


My understanding is that animations came about as a way to make slow loading operations on web pages appear faster. Because they looked so elegant and fancy, designers started putting animations on software interfaces that should have otherwise been quick which leads to where we are today.

Users also started demanding "smoothness" over speed, smoothness simply being how consistently the UI interactions behaved. On Android I remember people would complain it was laggy compared to iOS; I think part of the reason was animation speeds. Increasing animation durations allows you to make your interface more consistently slow.

I think animations are fine when they don't increase the time until I can do the next thing. If something takes around 1s to load and the UI shows an animation during that time, I am okay with that and I think it actually enhances the experience. But if something takes only 1ms and the designers display a 1s animation blocking me from doing the next thing until the animation has played then that is a major annoyance.


Animations far predate such loading tricks.

They provide a sense of spatialisation and context. E.g. minimising a window with an animation (such as the Genie effect, but even earlier, cruder ones) helps in understanding where it went and finding it again, instead of having it just vanish. Over time the user builds a model of how it operates by observing such cues. To an advanced user it makes no sense because one already understands that, so the animation becomes an inconvenient delay.

Some animations, though, are just there for the heck of it, and don’t help (or even confuse) the user. They’re like the high-contrast demo mode on TVs: aside from being perceptually satisfying, they carry no meaning, and are actively revolting in their cognitive dissonance to users that already have an understanding of interaction models; but you’d be hard pressed to explain that to users that don’t have that understanding.


Android has system-wide animation speed settings intended for development testing. I set them all to double speed, so now my phone has nice animations and a decent response time. Results may vary with slower phones.


No, you most certainly are not. I cannot express how happy I was when I saw that the software my bank uses is built on top of ncurses. Animations don't make your work any more productive; I'd argue they reduce your productivity in the form of a distraction. All the flashiness and the UI/UX bullcrap is what made the unholy mess that is the web today. As a Gnome user, I've disabled every single animation and effect I've seen and I couldn't be happier.


> I'm not convinced that we've seen any meaningful advance in OS UI design since Windows 95.

I completely agree there. There have been some small improvements — XP added recent items to the Start Menu, Vista added search to the Start Menu, 7 improved the taskbar somewhat. But overall, things are still built on what 95 established, and the only overhaul since that was actually released (Windows 8) was a clear regression and Microsoft was forced to revert much of it.

Unfortunately though, while the 95-era concepts are still around, the presentation of them is an inconsistent mess now. Windows 10 is full of different UI stylings that look horrible and make the system confusing. I often suspect that people only know how to use “modern” flat UI through sheer inertia, because it's close enough to the old skeuomorphic UI that what people learned back then is still useful. But how do new starters figure out what the shapes and colours mean? They don't convey meaning any more.


> completely agree there. There have been some small improvements — XP added recent items to the Start Menu, Vista added search to the Start Menu

Windows 10 added ads.


I think it's just important that every animation be justifiable as providing some sort of information. Like "where did this come from" or "something is happening here" or "if you click here, this will happen".

Any time someone's design blog starts talking about how an animation makes people feel, it's time to run screaming.


In the case of animations that provide visual clues, I wonder if animations could be adaptive to slowly get out of people’s way as they get more familiar.


Couldn't agree more, especially with apps. Airbnb, Google Maps, Snapchat etc all have these fucking awful animations for each action. Makes the whole experience clunky on my mid level Moto phone. What was the point of all this advancing compute power if everything is still slow!!!


It is 2020 and everything is slow even on current gen technology. That is the most disappointing fact about the current state of technology.


The disappointing fact about the current state of mobile apps is that companies no longer think it's necessary to build two native apps.


It shouldn’t be. We never want to build the same app multiple times. What I (as a developer) want is for OS companies to stop with all the proprietary crap and cooperate so I can write an app once. Web apps and web-as-native (and previously, cross platform GUI frameworks) are our reaction to this lack of cooperation.

See also: compilers, libraries, ORMs, autotools, Docker, ... We as an industry go to extreme lengths to work around crap that’s incompatible for no good reason.


It's incompatible for a good reason; the reasons just don't benefit us. It makes Windows/various vendors extremely wealthy to have bizarre lock-in arrangements.


Am I the only one who hates graphics on t-shirts? If the design doesn't serve a clear purpose in signalling something about the user, I want it gone. I'm not convinced that we've seen any meaningful advance in t-shirt design since Windows 95.

Ok, ok, I'm being dumb. And yeah, I tend to agree, but as a counterpoint: sometimes it's also just cool to have cool stuff. There is certainly a modernist fetish for superminimalism – HN being a locally favored example of a great social media website with no unnecessary frills – but I think your qualifier "serve a clear purpose" is not objective enough to be a good rule. What you and I think is a clear purpose are bound to be very different. I think a fade in/scaling animation for windows to appear and disappear is a better experience than just flickering them on and off – I think the clear purpose there is that it's cognitively less demanding when a window appears over the course of 200ms rather than 1ms. But hey, that's me.

What is the "clear purpose" of MacOS UI to have rounded corners, or the status bar to be translucent, or for the "genie" effect of minimizing an app. The truth is, animations have almost zero impact on the usability of an app or experience and only add to the perception of quality. Much like contrast stitching on the leather seat of your BMW, the reason it's there is simply because people think it's kinda nice to have and doesn't hurt anyone. Otherwise the only pants we'd be able to buy would be cargo pants... and no one likes cargo pants.

It's OK for stuff to exist that is "just neat".


The genie animation is actually important. It illustrates to the user that the window is being physically relocated somewhere else. This matters for a new user who doesn't know what "minimize" is and could accidentally click it wondering how to get the window back. The animation shows you exactly where it went and what you need to click to get it back.


>Am I the only one who hates graphics on t-shirts? If the design doesn't serve a clear purpose in signalling something about the user, I want it gone. I'm not convinced that we've seen any meaningful advance in t-shirt design since Windows 95.

I don't know if this helps or hurts your point, but I do in fact also hate graphical T's. All of my T-shirts are solid colors and most of them are black.

>What is the "clear purpose" of MacOS UI to have rounded corners, or the status bar to be translucent, or for the "genie" effect of minimizing an app.

Thankfully I don't have to use macOS very often, but I don't think any of those serve a purpose and they constantly (but mildly) annoy me when I do have to boot up my mac. The minimization effect is important, but it should still be fast.


Me too. I dislike both t-shirts with useless text on them and apps with useless animations in them.


the genie animation has a very clear UX purpose - where did my app just go?


Of course, the app didn't actually "go" anywhere. Minimizing it just tells the windowing system to stop drawing it. But humans aren't used to thinking about things that pop in and out of existence, so it's easier to understand the application as a physical thing that goes somewhere when hidden.


try staring at some dude's graphic tee for 8 hours trying to do some work. This is not a good analogy.


It's not almost zero impact. It's a clear negative impact to anyone who's mindful of things like latency, workflow, muscle memory, gaze patterns, information density, attention, spatial recall, and working memory. These design patterns are nothing but a frustrating, dazzling interruption designed to hijack your emotional faculties instead of providing you with a useful tool. They turn real concentration into mush.

Nothing comes for free and if you're optimizing for "just neat" then you're paying in usability.


They’re great when they’re done well and obnoxious when done poorly. It’s easy to do them poorly and it’s a very fine line.

I think this particular example could use some fine tuning but for the most part does a pretty killer job of delivering low latency effects with consistent easing.


Sadly, some animation is necessary for humans to perceive that a change has occurred. If a change is instantaneous, we often miss it and keep staring at the screen, waiting for a change that has already happened.

Of course, this can be eliminated in many cases, including where the change is an immediate reaction to user input, since we would then expect the change, and would not miss it.


Transition animations do serve a purpose. It's a lot less jarring to have something move away than it just disappearing (for most people, I reckon).


Windows 95 wasn't a good UI design, why are all the window buttons next to each other, to make it easier to hit the wrong one? Why can't I right click on a down arrow to make it scroll up? or right drag on a scrollbar to drag in 2d? Why click to raise and no drag and drop save? Why full screen windows? Why does it have menu bars?


Awesome! If it's a hobby, who cares about the platform. Have fun and learn stuff.

This reminds me of being a kid, and always having the "hacker" mentality. Use whatever you have available to make something awesome.

I remember having my first "computer" based project around 2001 for school. We had to create some kind of presentation "using any tool" available to us on our home computer (or available in the school library).

I opted to create my presentation using RPG Maker 2000. It had animations, sound effects, and even a little quiz at the end. I handed the project in to the teacher on a 1.44MB floppy disk containing only the compiled Windows binary.

The teacher loved it and placed me in a regional school advanced IT program where I was first introduced to QBASIC.


> [...] where I was first introduced to QBASIC.

Not sure if this is supposed to be a happy end or not.

Cool story!


Sorry, I should have specified "introduced to my first actual programming language, QBASIC".

Funny thing was, I already knew how to program (in terms of breaking problems down and using components to model data - probably from building games in RPG Maker), I just had never been shown a programming language. It was an "AHA!" moment.


Wow. Could be an interesting game. Could also be an interesting learning tool. It's a shame that it probably wouldn't be ported to Linux though.


It could trivially be built for Linux if the author decided to (or released the code). But it's also more of a tech demo than a real utility; there's no indication it runs any existing GUI applications.


It's not supposed to be a real utility, or run real applications! It's intended for use in computer games; there are several such games that use a virtual "OS" as their framing device and core interface.


I missed that part in the text

I honestly think it's more interesting as:

- A demonstration of what's possible within Unity (it's extremely impressive, because Unity's built-in GUI system is pretty terrible and it's clear that there's some level of composability, etc. to the UI system being demonstrated)

- An experiment in creating a totally virtual, cross-platform (gorgeous) desktop environment within a single app that could be trivially run on any OS

But of course it's still a long way from the latter, if someone even wanted to take it that direction


Wine?


I bet it runs smoother and uses less resources than anything written in Electron...


Here is an analysis of Unity's performance: https://www.youtube.com/watch?v=tInaI3pU19Y

My take on your comment is that the problem with Electron is that it uses HTML.


Electron apps are fat in so many dimensions. From time to time I make them at work. I like the ergonomics of writing desktop javascript, but those blob sizes...

Super simple, zero-asset Electron app? ~300MB.

Relatively complicated binary created with Unity that includes high-resolution textures, models and sounds? 35MB.

So far, everything I've used electron for has had good enough performance though.


You are most likely correct, because the Unity developers actually care about performance.



