etik's comments

Great work! Here's some prior art in the (torch) space: https://github.com/vccimaging/DiffOptics

A few notes: though paraxial approximations are "dumb", they are very useful tools for lens designers and for understanding/constraining the design space - calculating the F/#, aperture stop, and principal planes is critical in some approaches. This pushes what autodiff tools are capable of, because you need to get Hessians of your surfaces. There's also a rich history in objective function definition and the quadrature integration techniques behind it, which you can work to implement, and you may like to have users be able to specify explicit parametric constraints.
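To make the autodiff point concrete, here is a minimal sketch (my own toy example, not from either repo) of pulling gradients and Hessians of a paraxial quantity out of thin-lens ABCD matrices with ForwardDiff:

    using ForwardDiff

    thin_lens(f) = [1.0 0.0; -1/f 1.0]   # paraxial power matrix for a thin lens
    gap(d)       = [1.0 d; 0.0 1.0]      # free-space propagation by distance d

    # Effective focal length of a two-lens system from the system matrix:
    # for M = [A B; C D] acting on rays [y; u], EFL = -1/C.
    function efl(p)
        f1, d, f2 = p
        M = thin_lens(f2) * gap(d) * thin_lens(f1)
        -1 / M[2, 1]
    end

    p = [50.0, 10.0, 75.0]               # f1, spacing, f2 (mm, say)
    g = ForwardDiff.gradient(efl, p)     # sensitivity of EFL to each parameter
    H = ForwardDiff.hessian(efl, p)      # curvature of the merit landscape

Real merit functions (spot size, wavefront error) sit on top of a full ray trace, but the differentiation pattern is the same.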


Yes, that DiffOptics paper was one of my main inspirations for this project. It's a very cool paper.

> There's also a rich history in objective function definition and quadrature integration techniques thereof which you can work to implement, and you may like to have users be able to specify explicit parametric constraints.

Yes, this is definitely the direction I want to take the project in. If you have any reference material to share I'd be interested!


Gaussian quadrature integration for rms spot size or wavefront error:

> Forbes, G. W. (1989). Optical system assessment for design: numerical ray tracing in the Gaussian pupil. Journal of the Optical Society of America A, 6(8), 1123. https://doi.org/10.1364/josaa.6.001123
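For flavor, a hedged sketch of the idea (my own toy, not Forbes' actual quadrature scheme): RMS spot size as a pupil integral, evaluated with Gauss-Legendre nodes in radius and equally spaced rays in azimuth, where trace_spot(ρ, θ) stands in for your ray trace returning the transverse ray error (εx, εy) at the image plane:

    using FastGaussQuadrature   # provides gausslegendre

    function rms_spot(trace_spot; nr = 6, nθ = 8)
        x, w = gausslegendre(nr)            # nodes/weights on [-1, 1]
        ρ  = (x .+ 1) ./ 2                  # map to pupil radius in (0, 1)
        wρ = w ./ 2
        θs = range(0, 2π; length = nθ + 1)[1:nθ]
        s = 0.0
        for (ρi, wi) in zip(ρ, wρ), θj in θs
            εx, εy = trace_spot(ρi, θj)
            s += wi * (2π / nθ) * ρi * (εx^2 + εy^2) / π   # ∫∫ ε² ρ dρ dθ / π
        end
        sqrt(s)
    end

    # e.g. a toy coma-like aberration:
    rms_spot((ρ, θ) -> (0.01ρ^3 * cos(θ), 0.01ρ^3 * sin(θ)))

The point of the paper is that well-chosen quadrature gets you accurate integrals with very few traced rays, which matters when every ray is differentiated.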

In general, you'll want to look at MTF calculation (look at Zemax's manual for explanation/how-to). There is also a technique to target optimization at particular spatial frequencies:

> K. E. Moore, E. Elliott, et al., "Digital Contrast Optimization - A faster and better method for optimizing system MTF," in Optical Design and Fabrication 2017 (Freeform, IODC, OFT), OSA Technical Digest (online) (Optical Society of America, 2017), paper IW1A.3.
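Before anything fancy like DCO, the generic textbook route (a hedged sketch, not Zemax's implementation): MTF is the magnitude of the OTF, i.e. the Fourier transform of the PSF, normalized to 1 at DC. Targeting particular spatial frequencies then amounts to weighting particular entries of this array in the merit function:

    using FFTW

    function mtf(psf::AbstractMatrix)
        otf = fft(ifftshift(psf))              # centered PSF -> OTF, DC at [1, 1]
        fftshift(abs.(otf) ./ abs(otf[1, 1]))  # normalize at DC, recenter
    end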


A big question here: do you think it would be possible to self-study optics, and what would it take?


I received my PhD in applied physics and had colleagues, collaborators, and some published work in this (very broad) field. Although I'm familiar with the programs, my advice here is general rather than program-specific.

The question you have to answer first is: why do you want a PhD? Is it to do science for as long as possible? Is it to contribute to the frontier of human knowledge? Is it to participate in a global research community? Is it to land a tenure-track job? Or do you not know the questions and their answers (which is fine!)?

I'll offer myself as a case study. I knew I wanted to make something tangible (hence device physics / photonics), I knew I wanted to explore the possibility of continuing in academia, and I knew that I would enjoy working in industry. I structured my PhD around a high-risk/high-reward research topic (with the thinking that if it panned out, academia would be viable without having to go through an extended PhD and multiple postdocs, which was off the table for me). I also set up to consult on industry projects, and started poking around the local startup incubators and B-school entrepreneurial offerings. My school (and PI) choice was really motivated by these factors: how I judged the impact of the potential research being done by the team, how plugged-in and amenable the environment was to extracurricular work, and how supported I would be in a transition to start-ups/industry.

Figure out what you want, and treat your PhD itself as an experiment w/ testable hypotheses. If you're not sure about something, how can you build into the experience a way to find out? Is it a class, a side project, the local community that can help? There are many factors to take into account when choosing a school because we all weigh those factors differently - once you decide what's really important to you, you'll get better-tuned advice.


> They may use absorbing substrate, or may add a backside coating.

Yup, since the optic is planar it can integrate with backend coating processes.

> Contact image sensors ... No clue how this relates to meta-lenses.

I'm also not sure how "contact" got into the copy.

> I suspect it's just a bad diagram. Their barrel design is impossible to manufacture.

Yea, the barrels end up looking like more traditional barrels.

Source: Metalenz CTO


Neat, thanks for the reply! Impressive tech you've got.


Metalenz | Boston, MA | Software Engineering and Computer Vision Roles | Onsite & Remote | https://www.metalenz.com

Metalenz is a growing, venture-backed start-up that is the first to commercialize meta-optics, enabling the next generation of 3D sensing in the consumer electronics, automotive, and industrial robotics markets. Unlike traditional optics, the company’s metasurface technology provides complex, multifunctional optical performance in a single semiconductor layer, relocating large-scale production of optics to semiconductor foundries, which print lenses like computer chips.

We are looking for engineers across our company; please reach out if any of the following sound interesting to you:

* Computer vision models and algorithms using our unique hardware, leveraging polarization degrees of freedom of light (from deep learning models to low-level computational imaging)

* Performant, hardware-secure imaging on Android

* Instrumentation and automation of optical metrology systems

* TCAD-equivalent tooling for optical system design

We offer competitive salary and equity. Benefits include full medical, dental, and vision coverage, and a flexible vacation policy. If you have any questions or want to apply, please reach out to Pawel at pawel.latawiec@metalenz.com, or apply at our website here: https://www.metalenz.com/careers.


We (metalenz.com) use Julia extensively in our optical design/simulation/modeling workflows:

1. The design team keeps live sessions during interactive work, and otherwise launches on virtualized servers (and things take sufficiently long to compute that TTFX is a rounding error)

2. The above-mentioned team is exclusively Julia. Julia shows up in other things we do (low-level computer vision), but it is neither dominant nor exclusive there

3. High performance & expressive programming and flexible autodiff system for scientific computation


Ah, that's too bad, but I can't really blame them. Going from the fully-featured Python codebase they already had in https://github.com/brandondube/prysm and translating it into Julia would need to be motivated by much more than speed - after all, the limiting factor here is FFTs and heavy computations like cis, so I'm pleasantly surprised they reported gains from their simple port compared to numpy. It's also much easier to go from C/C++ computational code to Julia than from Python, because the mental performance models are more similar.

For reference, we (metalenz.com) have a large Julia codebase centered around optical design, simulation, and analysis. The motivation for that is more along the lines of composability and clarity of abstractions (aided by multiple-dispatch). We can differentiate through our physical optics solver code (forward or reverse) for optimization/ML, plug in our designs across a hierarchy of different E&M solvers, run on GPU, and write very efficient code when our profiling identifies a bottleneck. If we just had to perform one thing (physical optics simulations), then our investment wouldn't be as justified.
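For a flavor of the autodiff piece, here's a toy stand-in (emphatically not our production solver): reverse-mode AD with Zygote through an FFT-based angular-spectrum propagation step, taking the gradient of a focusing loss with respect to a phase profile ϕ, with a fixed placeholder transfer function H:

    using FFTW, Zygote

    # Toy loss: power landing on a target mask after propagating the field
    # cis.(ϕ) through one angular-spectrum step with transfer function H.
    function captured_power(ϕ, H, target)
        u = cis.(ϕ)                   # field just after the phase mask
        U = ifft(fft(u) .* H)         # propagate to the sensor plane
        sum(abs2, U .* target)
    end

    n = 64
    ϕ = zeros(n, n)
    H = ones(ComplexF64, n, n)        # placeholder transfer function
    target = zeros(n, n); target[n ÷ 2, n ÷ 2] = 1.0
    g, = Zygote.gradient(p -> captured_power(p, H, target), ϕ)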


There is a recent effort [1] to provide low-level support for faster operations by transforming user code to take advantage of the CPU's instruction set, memory packing, etc. This is being expanded upon to essentially provide a Julia-native BLAS. Some of the benchmarks are even competitive with or beat Intel MKL (calibrate that statement appropriately to your level of trust in benchmarks). I wouldn't count out a Julia ARPACK implementation just yet.

[1] LoopVectorization: https://github.com/chriselrod/LoopVectorization.jl Announcement post and discussion: https://discourse.julialang.org/t/ann-loopvectorization/3284...
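For a taste of what it looks like in use (the macro was @avx at announcement time; current releases call it @turbo):

    using LoopVectorization

    # Vectorized dot product; assumes length(a) == length(b).
    function dot_turbo(a, b)
        s = zero(eltype(a))
        @turbo for i in eachindex(a)
            s += a[i] * b[i]
        end
        s
    end

    dot_turbo(rand(10^6), rand(10^6))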


You're probably confusing ARPACK with BLAS / LAPACK.

Pure julia ARPACK already exists, e.g. https://github.com/haampie/ArnoldiMethod.jl/.

A competitive BLAS gemm is implemented here https://github.com/YingboMa/MaBLAS.jl/blob/master/src/gemm.j... (single-threaded).

A LAPACK-like library could be https://github.com/JuliaLinearAlgebra/GenericLinearAlgebra.j...
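For example, ArnoldiMethod.jl is used much like ARPACK's eigs (API as the package documents it; the matrix below is just a stand-in):

    using ArnoldiMethod, SparseArrays

    # Symmetric tridiagonal stand-in (1-D Laplacian)
    A = spdiagm(0 => 2ones(1000), 1 => -ones(999), -1 => -ones(999))
    decomp, history = partialschur(A; nev = 6, tol = 1e-8, which = SR())
    λ, X = partialeigen(decomp)      # six smallest-real-part eigenpairs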


I was mostly referring to the parent comment's suggestion that low-level numerical libraries wouldn't benefit from a pure Julia implementation, specifically to the statement that it was still better to write optimized C/FORTRAN and call it from Julia. Indeed, MaBLAS, which you linked, is built on top of LoopVectorization.jl.

I don't know how well ArnoldiMethod.jl compares with ARPACK, but if there is a gap my suggestion is simply that these recent developments might help bridge it :)


FWIW, MaBLAS currently does not depend on LoopVectorization.jl; the code to generate kernels is all handwritten.


This is great - a lot of the performance in BLAS comes from memory management. It does not solve my problem, though. Sometimes you want to manually control memory, and Julia does not make it easy to do so. In particular, when interfacing with BLAS/LAPACK through Julia, memory management is quite ugly, mainly due to the need for allocating work arrays, and a pure Julia implementation of BLAS is unlikely to fix this. I don't think writing e.g. ARPACK performantly in Julia is impossible, just painful to the point that writing it in FORTRAN starts to make sense (to me).


> I don't think writing e.g. ARPACK performantly in Julia is impossible

Not sure what you mean - ARPACK just wraps a bunch of calls to LAPACK and BLAS; that's all. It does not have any low-level linalg kernels of its own. Also, its main bottleneck is typically outside of the algorithm, namely matrix-vector products. ArnoldiMethod.jl is pretty much a pure Julia implementation of ARPACK without any dependency on LAPACK (only BLAS when used with Float32/Float64/ComplexF32/ComplexF64). Finally, note that you can easily beat ARPACK by adding GPU support, since they don't provide it.


The person I was responding to seemed to have read my comment and thought I had an issue with raw performance. My point above is that iterative code making many LAPACK calls becomes difficult to write in Julia, because there is no way to manage the memory in this situation other than ccall-ing everything yourself, at which point I would rather write FORTRAN. I work on eigenvalue solvers, so it is all more or less just wrapping a bunch of calls to BLAS/LAPACK. As you say, the main bottleneck is in the BLAS calls anyway, but the excess allocation in Julia can really slow you down. ARPACK is maybe a bad example because it's all just mat-vec; when you need to do something like compute an SVD every iteration, that's where you run into issues.
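To make the allocation point concrete, a sketch: gesvd! is about the lowest-level wrapper LinearAlgebra exposes, and it still has no argument for passing a reusable work buffer through:

    using LinearAlgebra
    using LinearAlgebra: LAPACK

    function svd_iterate!(buf, A, iters)
        for _ in 1:iters
            copyto!(buf, A)              # gesvd! destroys its input
            # Overwrites buf, but internally does a workspace query and
            # allocates a fresh `work` array (plus U/VT) on every call.
            LAPACK.gesvd!('N', 'N', buf)
        end
    end

    A = rand(200, 200); buf = similar(A)
    @time svd_iterate!(buf, A, 100)      # watch the allocation count

Reusing the workspace across iterations means dropping down to raw ccall, which is the ugliness I'm describing.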


Julia version here: https://github.com/StatisticalRethinkingJulia/StatisticalRet...

with implementations in CmdStan (calling Stan from Julia), Turing, Mamba, and DynamicHMC (which requires hand-coding the log-density function of the posterior).
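For a flavor of the difference (a toy of my own, not from that repo): Turing builds the log-density from the model block for you, while DynamicHMC wants it by hand:

    using Turing

    @model function coin(y)
        p ~ Beta(1, 1)
        for i in eachindex(y)
            y[i] ~ Bernoulli(p)
        end
    end

    chain = sample(coin([1, 0, 1, 1, 0]), NUTS(), 1_000)

    # The DynamicHMC route: you supply the log-posterior yourself, e.g.
    #   ℓ(p) = logpdf(Beta(1, 1), p) + sum(logpdf.(Bernoulli(p), y))
    # plus a transformation to unbounded space (e.g. via TransformVariables).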


Any advice about which one of those packages to pick?


I haven't used the others but Stan is great.

