Fed into a clean Claude Code max-effort session with: "Inspect waldo2.png, and give me the pixel location of a raccoon holding a ham radio." It sliced the image into small sections and gave:
"Found the raccoon holding a ham radio in waldo2.png (3840×2160).
- Raccoon center: roughly (460, 1680)
- Ham radio (walkie-talkie) center: roughly (505, 1650) — antenna tip around (510, 1585)
- Bounding box (raccoon + radio): approx x: 370–540, y: 1550–1780
It's in the lower-left area of the image, just right of the red-and-white striped souvenir umbrella, wearing a green vest. "
We would need a larger sample size than just myself, but the raccoon was in the very first spot I looked. I found it literally immediately, as if that's where my eyes naturally gravitated first. Hopefully that's just luck and not an indictment of the image-generation ability, i.e. that some element that would normally make Waldo hard to find is missing from this "Where's Waldo" image.
There have already been several attempts to procedurally generate Where’s Waldo? style images since the early Stable Diffusion days, including experiments that used a YOLO filter on each face and then processed them with ADetailer.
It's a difficult test for genai to pass. As I mentioned in a different thread, it requires a holistic understanding (in that there can be only one Waldo, Highlander-style), while also holding up to scrutiny when you examine any individual, ordinary figure.
I've actually been feeding them into Claude Opus 4.7 with its new high-resolution image inputs, with mixed results. In one case there was no raccoon, but it was SURE there was one: it told me it was definitely there, it just couldn't find it.
It's probably the wrong place in the stack to implement this: the firmware runs on very low-cost commodity microcontrollers, and flight-controller software is designed around timing guarantees and reliability.
With the exception of low-cost consumer drones, most larger drones have at least a "Flight Controller" (an embedded MCU handling guidance, navigation, and control) and a "Flight Computer" (a higher-level *nix-based computer running autonomy software), and the flight computer is IMO a more appropriate place to put this.
You could encrypt any MAVLink or proprietary protocol at the application layer if you're using an IP link, or you could just rely on the telemetry radio to perform encryption between the drone and your ground station.
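For what it's worth, a minimal sketch of what that application-layer approach could look like, assuming a pre-shared key and the chacha20poly1305 crate (0.10-style API); the frame handling, nonce scheme, and ground-station address here are made up for illustration, not taken from any real stack:

    // Sketch: wrap each outgoing MAVLink (or proprietary) frame in an AEAD
    // before it leaves over the IP link. Assumes the chacha20poly1305 crate
    // (0.10-style API) and a pre-shared 32-byte key; everything else is
    // illustrative.
    use chacha20poly1305::aead::{Aead, KeyInit};
    use chacha20poly1305::{ChaCha20Poly1305, Nonce};
    use std::net::UdpSocket;

    fn send_encrypted(
        frame: &[u8],   // serialized MAVLink frame
        key: &[u8; 32], // pre-shared key
        counter: u64,   // monotonically increasing, never reused
        sock: &UdpSocket,
    ) -> std::io::Result<usize> {
        let cipher = ChaCha20Poly1305::new_from_slice(key).expect("32-byte key");

        // 96-bit nonce derived from the counter (nonce reuse would be catastrophic).
        let mut nonce_bytes = [0u8; 12];
        nonce_bytes[4..].copy_from_slice(&counter.to_be_bytes());
        let nonce = Nonce::from_slice(&nonce_bytes);

        // Ciphertext = frame + 16-byte Poly1305 tag.
        let ciphertext = cipher.encrypt(nonce, frame).expect("encryption failure");

        // Send counter || ciphertext so the ground station can rebuild the nonce.
        let mut packet = counter.to_be_bytes().to_vec();
        packet.extend_from_slice(&ciphertext);
        sock.send_to(&packet, "192.0.2.1:14550") // placeholder ground-station address
    }

The ground station does the mirror-image decrypt and rejects anything whose counter isn't strictly increasing, which also buys you replay protection.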
BetaFlight doesn't deal with over-the-air bits, it just receives PWM/PPM/S-Bus/whatever signals your receiver provides. There is no point in having encryption in the firmware, because the connection between the RX and the FC is hardwired and can be trusted.
The lack of OTA frame encryption, as far as I can tell, is mostly due to legacy reasons. In DIY FPV there are only a couple of transmission standards, most of them using 2.4GHz FHSS or some CC2500 clone, so you can mix and match transmitters and receivers as you wish. If you use custom TX/RX devices, you are pretty much locked in to that specific vendor. Also, designing a nice transmitter UX-wise requires quite a different skillset than designing a nice transmitter RF-wise, so manufacturers tend to choose off-the-shelf RF modules.
The threat model for most FPV pilots (either hobbyists or people in Ukraine) doesn't really include hijacking of the air link. It's trivial to just shoot something down with interference, sometimes inadvertently.
Pretty much everyone in FPV is now using ExpressLRS, which is an open protocol. If you want an encrypted air link, then the best option I'm aware of is the proprietary TBS Crossfire protocol.
Betaflight doesn't really care about what radio receiver you're using, as long as it can talk to it over UART (or SPI) via one of its supported protocols like CRSF, IBUS, SBUS, etc.
If you really want encryption, you can simply use a Pi Zero that talks CRSF to Betaflight and has an encrypted channel to your ground station over 4G LTE/Wi-Fi/wfb-ng/whatnot.
But if you're dealing with 4G and a Pi Zero, you might as well use ArduPilot + MAVLink. Those tools already support this use case much better.
Betaflight is more for the proximity/racing drone kind of use case. Only recently did its GPS return-to-home functionality get some improvements.
Crossfire supports encryption.
Mainline ELRS can't add encryption support because the whole idea of ELRS was to reduce the LoRa packet size to the bare minimum needed for 4 full-res channels, plus a bit extra for arming and time-multiplexed aux channels. There's some discussion on protocol security and scope here [0].
I'm sure these days there are multiple LoRa based links (independent and ELRS forks) that support authenticated encryption.
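To make the size argument concrete, here's a toy back-of-the-envelope (not the actual ELRS frame format, and the numbers are only illustrative): four 10-bit channels plus a handful of arm/aux bits fit in roughly 6 bytes, so even a single 16-byte authentication tag would roughly triple the over-the-air payload and wreck the packet-rate/range trade-off.

    // Toy illustration, not the real ELRS OTA layout: pack four 10-bit
    // full-resolution channels plus 6 aux/arm bits into 46 bits (~6 bytes).
    // Appending a 16-byte AEAD tag to something this small roughly triples it.
    fn pack_channels(ch: [u16; 4], aux: u8) -> [u8; 6] {
        let mut bits: u64 = 0;
        for (i, &c) in ch.iter().enumerate() {
            bits |= u64::from(c & 0x3FF) << (10 * i); // 10 bits per channel
        }
        bits |= u64::from(aux & 0x3F) << 40; // 6 aux/arm bits on top
        let mut out = [0u8; 6];
        out.copy_from_slice(&bits.to_le_bytes()[..6]);
        out
    }

    fn main() {
        let frame = pack_channels([512, 1023, 0, 768], 0b10_0001);
        println!("{} payload bytes vs. +16 bytes for an auth tag", frame.len());
    }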
I think encryption is not always allowed on remote-control hobbyist bands. Some jurisdictions allow stronger radio output in exchange for such restrictions.
That, and lack of demand. Most people are nice, key management is a PITA, and losing an expensive toy to a crypto-library bug would be frustrating.
WPA2 should still be strong enough for most purposes too (threat_model != CIA).
Do any of these attacks matter for single-tenant computers where all network packets are sent on a hardware timer (say, at 10 kHz), independent of the crypto computation?
Doesn't that mitigate any side-channel timing attacks from the start?
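If the question is about on-the-wire timing specifically, the intuition can be sketched like this (a toy, not taken from any real firmware; an MCU would drive this from a hardware timer interrupt rather than thread sleeps): the packet leaves on the tick whether or not the crypto finished early, so a network observer only ever sees the 10 kHz schedule. It only hides the network-timing channel, though; cache, power, and EM side channels on the box itself are a separate matter.

    // Toy sketch of a fixed-rate transmit schedule: do the variable-time work
    // first, then hold the packet until the next tick, so observable send
    // times carry no information about how long the crypto took.
    use std::time::{Duration, Instant};

    fn transmit_loop(mut next_frame: impl FnMut() -> Option<Vec<u8>>, send: impl Fn(&[u8])) {
        let tick = Duration::from_micros(100); // 10 kHz, as in the example above
        let mut deadline = Instant::now() + tick;
        loop {
            // Variable-time work (crypto etc.) happens here...
            let packet = next_frame().unwrap_or_else(|| vec![0u8; 64]); // dummy frame if nothing is ready
            // ...but the send is pinned to the schedule.
            let now = Instant::now();
            if deadline > now {
                std::thread::sleep(deadline - now);
            }
            send(&packet);
            deadline += tick;
        }
    }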
Oh cool, I get to celebrate his birthday twice per year now. That said, strangely enough, if the calendar changed after my death and people were still celebrating my birthday, I'd expect people to celebrate the day on the calendar I used rather than the accurate day.
What's the status of Rust usage inside NASA? I am currently writing software that, hopefully, will be sent to the Moon one day, and I am considering my options for software technologies.
I do not really know. I heard that it's generally perceived as a good option, but I don't know if any teams are actively using it for any missions.
My suspicion is that once it's generally accepted as part of the kernel, things may change.
If the Rust community wants Rust to be used in safety-critical systems, my personal view is that they need to prioritize robustness and stability of the Rust ecosystem as a whole over frequent changes to the language or libraries that save 1-2 lines of code.
Breaking changes to APIs and tool changes are a big issue in general, so they are best avoided (almost every time someone introduces a breaking change that we are forced to adopt, we have to spend thousands of dollars (in time) to adapt). It's best to take longer to release a tool, but when you do release it, make sure it'll work for a long time.
We recently had a case like this: our project was delayed by several weeks because a tool's version in Homebrew was replaced with one that introduced a breaking change. We hit multiple bugs during the upgrade.
A mission that flies won't depend on Homebrew, but it is very plausible for a bug to be fixed only in a version that pulls in a dependency with a higher version number, and for it to be impossible or impractical to upgrade only one or two packages. In particular (and I don't "speak" Rust), if your compiler comes with core libraries that necessarily need a specific version of the compiler to work, you want those decoupled and the core API not to change.
Please be aware that this is just my personal opinion. I don't speak on behalf of any agencies.
> Breaking changes to APIs and tool changes are a big issue in general, so they are best avoided
There seem to be two different camps in the Rust world. Any crates that have to do with the web (orthogonal to, but not distinct from, async) or some GUI-related aspects seem to be constantly breaking in major ways between releases, way too often to actually keep up with. Everything else in the Rust world, the kind of thing someone coming from a traditional C/C++ background would be interacting with, has a much more mature ecosystem with saner breakage (i.e. only when/where necessary, and typically only changing a name or restructuring a type, not a "let's wholly re-architect both our own code and the code of anyone using this in the wild" kind of thing).
Thanks Ivan, makes sense. With the ongoing work of integrating Rust into Linux, I hope we'll see some of that sought-after stability in its tooling. I appreciate the insight. I also wish more code was written with the long term in mind, and by long term I mean decades, not years.
I've been betting on creating good software that survives for decades, but most people just want quick fixes and new features, and few are willing to put in the time to "do the homework" (that is, clean up the code, debug it, benchmark it, reduce it, modularize it, etc.). Over time, with more and more fixes and features, code rots and the maintenance burden increases.
People seem to be creating open source like it's free, with no regard for the time and effort put in before. Every solution we create adds to the global maintenance burden of the community. We need to put processes in place that make open source code better over time, not bigger.
Not sure if you're aware, but in Rust there's no dependency hell. Component A can depend on version X of a library, component B can depend on an incompatible version Y, and you can still link components A & B into your program without any hassle/correctness/safety issues. That doesn't solve the "but I want to upgrade to the latest version & for it to be compatible" case, but that's typically an untenable position in any environment when relying on OSS. Perhaps, if it's that mission critical, try to work out arrangements with those projects to backport fixes instead, or live with developing processes to stay on top of updates like the rest of us?
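For anyone who hasn't seen what that looks like in practice, here's a small sketch; the crate choice (rand 0.7 vs 0.8) is just an example. Cargo resolves incompatible transitive versions on its own, and a single crate can even depend on two major versions directly by renaming one of them in Cargo.toml:

    // Assuming a Cargo.toml along these lines (crate choice is just an example):
    //
    //     [dependencies]
    //     rand = "0.8"
    //     rand_old = { package = "rand", version = "0.7" }
    //
    // both major versions are compiled and linked side by side, and in code
    // they are simply two different crates.
    use rand::Rng as _;     // rand 0.8
    use rand_old::Rng as _; // rand 0.7, renamed in Cargo.toml

    fn main() {
        let a: u32 = rand::thread_rng().gen();
        let b: u32 = rand_old::thread_rng().gen();
        println!("from 0.8: {a}, from 0.7: {b}");
    }

The catch is that types from the two versions are distinct and can't be mixed, which is exactly the "but I want to upgrade to the latest version" situation again.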
One day I'll learn rust and maybe then I'll understand.
> live with developing processes to stay on top of updates like the rest of us?
NASA follows very robust software engineering processes (even for research projects like, e.g., Copilot and, to a lesser extent, Ogma). It would not be able to do what it does if it didn't.
This is a topic for a longer discussion and definitely not one to have here, but I will say that it's not conducive to a constructive discussion to see it as a problem with our processes, or with us ("developing processes to stay on top of updates", "like the rest of us").
The people who work on these things are smart. This is a topic we've had long discussions on. If it was obvious or viable to fix internally, we would have done it already.
I have been programming, in Haskell in particular, for 20 years. I've worked for all kinds of companies and organizations, big and small, for the last 18 years. I am like the rest of us. The problem is not exclusive to NASA, and NASA's processes are not to blame here.
It's a problem with how to build languages and ecosystems.
My comment was not meant to disparage the work that NASA does, so apologies since that's the way it landed. The engineers working at NASA are really good. I was just trying to convey that the requirements you have are very different from the general ecosystem, and thus you will always have a greater cost to do engineering. Where possible, it's always cheaper to relax constraints at the program level, not at the individual software component level (e.g. auxiliary components that have a recovery path in the case of SW faults). My impression is that NASA generally strives for highly reliable systems, although I think they're getting better about that, e.g. with the Mars copter experiment. SpaceX is also doing good work trying to drive the cost down by making launches less expensive (that way SW faults aren't as critical in most systems, and payloads themselves don't need high reliability because they can just retry).
On the dependency front, Rust solves this about as well as you can hope for at the language level since dependencies between components don’t imply anything else about the dependency chain. I was just trying to convey that at that point there’s no way I can think of to reduce the cost of upgrading unless you make agreements with your exact SW dependencies about what versioning and changes look like for them (for general OSS that’s not generally tenable as NASA is likely to be a very small use case compared with the number of environments a popular package might get deployed to). That works in some cases but there’s no way to enforce that and nothing any language can do about it.
Generally I’ve found that organizations ossify their dependency chain on the assumption of “if it ain’t broke don’t fix it”. I’m not sure I buy that because that’s just tech debt that starts accruing and it’s better to just always pay a little bit of money along the way. Of course I don’t have any experience running teams on the kinds of problem domains NASA focuses on so I can’t speak to which development process is better for that use case. All I can note is that using off the shelf software and reducing the reliability requirements on as many components as possible generally results in a cheaper outcome (eg the Mars drone). When you’re in that domain you’re out of the high reliability domain of expensive space rocket launches and into more of the traditional SW development processes. Generally I’ve seen Rust libraries do semver better than most since that’s culturally the expectation. Even with Semver though you’re stuck if the library authors decide to go to the next major version.
"A review published last year in the journal Bioelectromagnetics found no evidence that hypersensitive individuals had an improved ability to detect EMFs, and the study found evidence of the nocebo effect in those same people."
Every time I see someone claiming to have extremely clear symptoms of EMF sensitivity, I wonder why they don't do a double-blind test to prove to the whole world that they can actually detect radio waves. It should be a trivial test to perform properly, and it would clearly help the case of hypersensitive people, so why hasn't it happened yet?
The published article already shows studies have been performed and quite conclusively found these people to be liars. No point in wasting further money or time on a self suppressing group.
All my recent searches for old movies have been in vain. Last Sunday, I wanted to watch 'Little Miss Sunshine (2006)'; result: not available (in Canada, anyway). Unsubscribing has started to cross my mind.
When I subscribed to Netflix, the motivation was a large catalog of old movies, cool stuff from other countries (I remember watching great TV series from Iceland), and no ads. But the greatest selling point: it was more convenient than pirating.
Now, it's all about pushing Netflix produced content down my throat using whatever trick possible, and pirating is again more convenient (large catalog, no ads) even if less user-friendly.
So yeah, "outsmarted" is really pushing it. I'm not the only one getting bored by the current Netflix direction.
> I'm not the only one getting bored by the current Netflix direction
The current Netflix direction is the only one possible; one big catalog only worked before they had proven the viability of mass streaming. Once that happened, competitors bidding for exclusives (and content owners reserving material for their own services) was inevitable.
"Found the raccoon holding a ham radio in waldo2.png (3840×2160).
Which is correct!reply