Authors: Zhang, Peiyi; Ma, Donghan; Cheng, Xi; Tsai, Andy P.; Tang, Yu; Gao, Hao-Cheng; Fang, Li; Bi, Cheng; Landreth, Gary E.; Chubykin, Alexander A.; Huang, Fang

Date issued: 2023
Date available: 2024-04-16

Citation: Zhang P, Ma D, Cheng X, et al. Deep learning-driven adaptive optics for single-molecule localization microscopy. Nat Methods. 2023;20(11):1748-1758. doi:10.1038/s41592-023-02029-0

Handle: https://hdl.handle.net/1805/40054

Abstract: The inhomogeneous refractive indices of biological tissues blur and distort single-molecule emission patterns, generating image artifacts and decreasing the achievable resolution of single-molecule localization microscopy (SMLM). Conventional sensorless adaptive optics methods rely on iterative mirror changes guided by image-quality metrics. However, these metrics respond inconsistently to aberrations, fundamentally limiting their efficacy for aberration correction in tissues. To bypass the iterative trial-then-evaluate process, we developed deep learning-driven adaptive optics for SMLM, allowing direct inference of wavefront distortion and near real-time compensation. Our trained deep neural network monitors the individual emission patterns from single-molecule experiments, infers their shared wavefront distortion, feeds the estimates through a dynamic filter and drives a deformable mirror to compensate sample-induced aberrations. We demonstrated that our method simultaneously estimates and compensates 28 wavefront deformation shapes and improves the resolution and fidelity of three-dimensional SMLM through >130-µm-thick brain tissue specimens.

Language: en-US
License: Attribution 4.0 International
Subjects: Super-resolution microscopy; Fluorescence imaging; Deep learning; Microscopy
Title: Deep learning-driven adaptive optics for single-molecule localization microscopy
Type: Article
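The closed loop the abstract describes (infer shared wavefront distortion from emission patterns → temporally filter the estimates → drive a deformable mirror to apply the compensation) can be caricatured numerically. The sketch below is a toy, not the paper's implementation: the "network" is a hypothetical stand-in that returns a noisy estimate of the residual aberration, the dynamic filter is a plain exponential moving average, and the mirror is modeled as a direct command over 28 modal coefficients (the number of wavefront shapes the method corrects).

```python
import numpy as np

N_MODES = 28  # number of wavefront deformation shapes corrected in the paper


class DynamicFilter:
    """Exponential moving average; a simple stand-in for the paper's dynamic filter."""

    def __init__(self, n_modes, alpha=0.5):
        self.alpha = alpha
        self.state = np.zeros(n_modes)

    def update(self, estimate):
        self.state = self.alpha * estimate + (1 - self.alpha) * self.state
        return self.state


def infer_wavefront(residual, rng):
    """Hypothetical stand-in for the trained network: a noisy estimate of the
    residual aberration visible in the single-molecule emission patterns."""
    return residual + rng.normal(0.0, 0.02, residual.shape)


rng = np.random.default_rng(1)
sample_aberration = rng.normal(0.0, 0.3, N_MODES)  # fixed sample-induced modes (toy)
mirror = np.zeros(N_MODES)                         # deformable-mirror modal command
filt = DynamicFilter(N_MODES)

for _ in range(20):
    residual = sample_aberration + mirror          # wavefront after current correction
    est = infer_wavefront(residual, rng)           # direct inference, no trial-and-error
    mirror -= 0.5 * filt.update(est)               # damped update toward cancellation

final_residual = np.linalg.norm(sample_aberration + mirror)
print(final_residual)  # residual aberration norm, small after convergence
```

The key contrast with sensorless schemes is in the loop body: each iteration uses a direct estimate of the distortion rather than probing mirror shapes and scoring an image-quality metric, so compensation converges without trial-and-evaluate cycles. The 0.5 gain and EMA are illustrative damping choices, not values from the paper.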