I'm partially deaf and have worn hearing aids since I was around five years old. I used to think it didn't affect me, but as I get older I increasingly suspect it's impacted my life more negatively than I thought. Just the added difficulty understanding speech in noisy environments affected my social life and development, and who knows what knock-on effects that's had.
Anyway, any progress in treating similar conditions is great news.
It definitely speeds the effects of dementia and similar conditions, because your brain insists on filling in what you didn't hear, and what it fills in tends to be wildly negative, at least in my two experiences of seeing it happen.
Scale matters. Using Photoshop took vastly more time and skill to pull off realistic images, limiting how many could be made. With image generation there's no practical limit. Some of it will be used for relatively innocuous purposes like making joke images for friends or menus for restaurants. But the floodgates are open for more socially negative uses.
If you're the only one in the world with an internal combustion engine, the environmental impact doesn't matter at all. When they're as common as they are now, we should start thinking about large-scale effects.
I actually love the idea of opening the game and then not playing resulting in a negative score. To quote Garfunkel & Oates, "It's better to be a loser than a spectator."
I responded with a mix of mostly B and C answers and got “advanced.” Yet, as pointed out by another commenter, selecting all D answers (which would make you an expert!) gets you called a beginner.
I can only assume the quiz itself was vibe-coded and not tested. What an incredible time we live in.
Or maybe it's taking the Dunning-Kruger effect into account: if you think you're an expert at everything, you're really a beginner at everything.
For anyone wondering, you can access this by tapping the button showing a 3D cube at the bottom left of the 3D viewer. The button may be cut off if you're viewing in a web view in another app like I was.
The AR viewer runs at a much higher frame rate and lets you get closer to the model. However, the lighting is significantly worse, which ruins the appeal. The in-browser viewer is choppy and I can feel my phone getting a little warm, but it looks a lot more like viewing the real artifacts.
The AR viewer uses ARKit on iOS, which is a default system "app". I don't believe Google provides the same kind of built-in viewer experience, with ARCore surfaced as an app.
You're thinking of The History of England podcast, not The History of English. The History of English Podcast does cover English history, often going deeper than is strictly necessary for tracing the evolution of the language, but its primary focus is the language itself. It's also very cozy, something you could listen to while sipping tea by a warm fire, and its consistency, clarity, and depth have made it my favorite podcast.
I've been working on an iOS camera app to take natural-looking photos with reduced post-processing. The goal is to take photos that look like what you see.
I just updated the RAW pipeline and I'm really happy with how the resulting photos look, plus there's this cool "RAW+ProRAW" capture mode I introduced recently.
I initially released it early last year and have been using it as my main camera app since, but I haven't mentioned it in one of these threads before. Unfortunately this post has come just a bit too early for my most recent update to be approved; there are some nice improvements coming.
Curious what motivated you to create this camera app when there are a handful of well established apps that already do unprocessed imaging. What does Unpro do differently?
At the time I created it there weren't any that did the end-to-end process with ProRAW in a way that I liked. And I got really tired of manually editing every photo I took, so I built the app for me.
Plus, since I have full control over how the photos look, I've customized the output to match my taste; I don't think there's any other camera app that produces photos quite like mine.
Thank you! I just released the new update on a slow rollout, and you can get it ahead of schedule by updating manually. I recommend trying out RAW photography with the Extended Dynamic Range setting set to on :)
Which is a pleasant read, and I like the pictures. Has the Librem 5's automatic JPEG output improved since you wrote the post about photography in Croatia (https://dosowisko.net/l5/photos/)?
Yes, these are quite old. I've written a GLSL shader that acts as a simple ISP capable of real-time video processing and described it in detail here: https://source.puri.sm/-/snippets/1223
It's still pretty basic compared to state-of-the-art hardware-accelerated ISPs, but I think it produces decent output in a fraction of a second on the device itself, which isn't exactly a powerhouse: https://social.librem.one/@dos/115091388610379313
Before that, I had an app for offline processing that was calling darktable-cli on the phone, but it took about 30 seconds to process a single photo with it :)
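For anyone curious what "a simple ISP" actually does per pixel, here is a minimal sketch in Python of two typical stages, white balance and gamma correction. The gains and gamma value are illustrative assumptions, not values taken from the linked shader:

```python
def isp_pixel(rgb, gains=(1.8, 1.0, 1.5), gamma=2.2):
    """Apply white-balance gains and display gamma to one linear RGB pixel.

    rgb: floats in [0, 1], already debayered sensor values.
    gains/gamma: illustrative assumptions, not the shader's real parameters.
    """
    out = []
    for value, gain in zip(rgb, gains):
        v = min(value * gain, 1.0)       # white balance, clipped to valid range
        out.append(v ** (1.0 / gamma))   # gamma: brighten midtones for display
    return tuple(out)

# A dark, greenish raw pixel comes out brighter and closer to neutral.
print(isp_pixel((0.05, 0.09, 0.04)))
```

A real ISP shader also handles debayering, denoising, and tone mapping, but each stage is still just this kind of cheap per-pixel (or small-neighborhood) math, which is why it runs in real time even on modest hardware.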