Hackathon Submissions

Project Description Links Team
Image-Driven Analysis of Bubble Dynamics and Foam Stability
ID: H-001
This project presents a fully image-driven and chemistry-agnostic analysis of foam stability using time-resolved microscopy images. Interpretable bubble-scale features—such as population dynamics, coarsening rates, size heterogeneity, and spatial organization—are extracted to quantify and rank foam stability. An explainable machine-learning model demonstrates that early microscopic dynamics can reliably predict long-term macroscopic foam behavior. Code
Video
Doc
Hesham Eina Abdalla
Contrastive Micrograph-Metadata Pre-Training
ID: H-002
We implement CLIP (Contrastive Language-Image Pre-Training), but for HAADF-STEM images and paired metadata vectors. We show that it can effectively learn a shared embedding space in which metadata that fits the style of an image is embedded to vectors with high cosine similarity to the image's embedded vector, while metadata combinations that do not fit the image style are pushed away. We also discuss the usefulness of the learned embedding space for a possible lightweight physics-informed denoiser. Code
Video
Doc
Henrik Eliasson
Angus Lothian
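The contrastive objective this entry describes can be sketched as follows: a minimal NumPy illustration of a symmetric CLIP-style (InfoNCE) loss over paired image/metadata embeddings, not the team's implementation, with all array sizes and values hypothetical.

```python
import numpy as np

def clip_style_loss(img_emb, meta_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss over a batch of paired
    image and metadata embeddings, CLIP-style."""
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    meta = meta_emb / np.linalg.norm(meta_emb, axis=1, keepdims=True)
    logits = img @ meta.T / temperature   # (N, N); true pairs on the diagonal
    labels = np.arange(len(img))

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # cross-entropy in both directions (image->metadata and metadata->image)
    return 0.5 * (xent(logits) + xent(logits.T))

# Matched pairs should score a lower loss than shuffled (mismatched) pairs
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
loss_matched = clip_style_loss(emb, emb + 0.01 * rng.normal(size=(8, 16)))
loss_shuffled = clip_style_loss(emb, np.roll(emb, 1, axis=0))
```

In a real pipeline the embeddings would come from trained image and metadata encoders; here random vectors stand in to show the loss behavior.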
Automated Crystalline Domain Segmentation in Polycrystalline TEM Images via Machine Learning-Based FFT Analysis
ID: H-003
This project introduces a fully automated pipeline for crystallographic analysis of HRTEM images, effectively eliminating the subjectivity and inefficiency of manual FFT peak selection. By utilizing Gaussian Mixture Models (GMM) for automated signal classification and integrating them with DBSCAN clustering, we enable consistent, high-throughput segmentation of crystalline domains. This workflow offers a widely accessible and reproducible alternative to resource-intensive 4D-STEM techniques for characterizing complex heterogeneous nanomaterials. Code
Video
Doc
Woojin Bae
Jinho Rhee
Shihyun Kim
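The GMM-plus-DBSCAN idea above can be sketched with scikit-learn on toy data (not the team's pipeline; the intensity distributions, coordinates, and parameters are hypothetical stand-ins for FFT peak statistics):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Toy stand-in for FFT peak intensities: noise floor vs. true Bragg peaks
intensities = np.concatenate([rng.normal(1.0, 0.1, 200),   # background
                              rng.normal(5.0, 0.3, 40)])   # real peaks
gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(intensities.reshape(-1, 1))
signal_comp = int(np.argmax(gmm.means_.ravel()))   # higher-mean component = signal
labels = gmm.predict(intensities.reshape(-1, 1))
n_peaks = int((labels == signal_comp).sum())

# Cluster (toy) peak coordinates into crystalline domains with DBSCAN
coords = np.vstack([rng.normal([0, 0], 0.5, (20, 2)),
                    rng.normal([10, 10], 0.5, (20, 2))])
domains = DBSCAN(eps=2.0, min_samples=3).fit_predict(coords)
n_domains = len(set(domains) - {-1})               # -1 marks DBSCAN noise points
```

The GMM replaces manual thresholding of peak intensities; DBSCAN then groups the surviving peaks into spatial domains without a preset cluster count.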
Alexa for AFM
ID: H-004
We demonstrate how to operate a microscope with voice control, by connecting a speech-to-text module with an instrument application programming interface. Code
Video
Doc
Gunstheimer
Corte-LeĂłn
Towards Automated Materials Analysis: Deep Learning Denoising and Phase Identification from 4D-STEM
ID: H-005
This project develops a deep learning-assisted 4D-STEM pipeline that denoises low signal-to-noise diffraction patterns and automates phase identification, enabling reliable detection of O1 and O3 phases in NMC811. By making nanoscale phase mapping robust to complex background noise and weak reflections, it supports systematic investigation of structural deviations linked to degradation in Ni-rich layered cathodes. Code
Video
Doc
Ethan Lo
Fanzhi Su
stem-denoising-hackathon
ID: H-006
Deep learning for ultra-low dose STEM denoising Code
Video
Doc
Avital Wagner
Willem de Kleijne
Jay te Beest
Akshaya Kumarjaishankar
AFM-SPARK
ID: H-007
AFM-SPARK is a system-specific super-resolution framework for Atomic Force Microscopy (AFM) imaging. The method reconstructs a global high-resolution (HR) AFM image from a fast low-resolution (LR) scan and a small number of selectively acquired HR patches, significantly reducing acquisition time while preserving nanoscale fidelity. Code
Video
Doc
Youngwoo Choi
Seungwhan Ryu
Junho Yang
Sanggil Park
Jihui Won
Chaeyul Kang
LLM-Assisted Structural Interpretation of Microscopy Images
ID: H-008
Recent advances in machine learning and large language models (LLMs) offer new opportunities to assist scientific interpretation. However, directly applying LLMs to microscopy images risks hallucination and loss of scientific rigor if not carefully constrained by quantitative evidence. This project addresses this gap by designing a lightweight, explainable pipeline that combines classical image analysis with evidence-grounded LLM-ready interpretation, without relying on opaque end-to-end black-box models. Code
Video
Doc
Shyam Sundar Debsarkar
Iman Chatterjee
Ugochukwu Philip Ochieze
MASK-ViT: Mask-Free Nanoscale Feature Detection
ID: H-009
MASK-ViT leverages self-supervised Vision Transformer embeddings and saliency-driven patch pooling to automatically detect and characterize nanoscale features in microscopy images without pixel-level annotations. The approach enables scalable, annotation-efficient materials characterization through embedding-based analysis and unsupervised discovery. Code
Video
Doc
Gowtham Nimmal Haribabu
Gaaurav Lath
SmartScan
ID: H-010
SmartScan is an AI-powered adaptive AFM scanning system that uses machine learning to dynamically optimize scan parameters (speed, resolution, force) in real-time, achieving 37% faster scanning while simultaneously improving image quality by balancing thermal drift against tracking error. The system learns from physics-based simulations on real AFM data to discover that faster scanning (10-12 ”m/s) actually produces better results than traditional conservative speeds (5 ”m/s), a counterintuitive finding that challenges conventional AFM operation practices. Code
Video
Doc
Abdulrhman Mohamed
Ahmed Mahmoud
Syed Ahmed Khaderi
Digital Bubble Stability tracking microscope
ID: H-011
Digital foam simulator: Random Forest Regressor (Aiscia Platform) predicts bubble metrics from lab inputs, then nearest-neighbor mapping returns realistic microscope frames/animations—no live microscope needed. Code
Video
Doc
Aziza Mohammed
Ahyen Mostofa
Dr. Usman Chaudhry
Yasmin Abdelkarim
Souhil Sid
Rehaan Hussain
Dr. Fadwa El Mellouhi
Dr. Harris Rabbani
The Gold Standard for low-dose STEM
ID: H-012
This project presents a deep-learning workflow for ultra-low-dose STEM imaging, where low dwell times introduce severe scan artifacts that limit image quality. By combining Fourier-domain pre-processing with a U-Net trained on real paired data and a custom loss that preserves high-frequency detail, the method aims to remove these artifacts while maintaining resolution at low doses. Code
Video
Doc
Jay te Beest
Avital Wagner
Willem de Kleijne
Akshaya Kumar Jaishankar
AIScientist4Microscopy
ID: H-013
AIScientist4Microscopy is a proof-of-concept “AI Scientist” workflow tailored to microscopy. The goal is to demonstrate an end-to-end pipeline that can (1) generate credible, research-grade hypotheses in AI-for-microscopy, (2) translate each hypothesis into a runnable experimental plan with code, and (3) produce research artifacts (figures, metrics, and paper-style writeups) in a way that is easy to demo and extend. Code
Video
Doc
Adib Bazgir
Mahule Roy
Yuwen Zhang
VLRIMM - Vision-Language Retrieval for Identical Materials Morphology
ID: H-014
VLRIMM is a multi-modal RAG pipeline designed to bridge the gap between raw visual data and textual knowledge, transforming a single micrograph into an interactive dialogue with current global scientific knowledge by aligning visual morphology with textual research. Pairing state-of-the-art foundation models (Meta's DINOv3 and OpenAI's text embeddings) with RAG, VLRIMM is designed for the future of Self-Driving Labs. It provides an autonomous link between observed microstructures and peer-reviewed information such as synthesis methods, eliminating the knowledge lag and manual research bottlenecks. Code
Video
Doc
Kevin Zhang
Sartaaj Khan
Mohammad Taha
Thomas Pruyn
Unsupervised Microstructure Image Analysis and Question Answering Using CLIP and VLMs
ID: H-015
We explore automated understanding of materials microscopy images using multimodal AI models. By combining scalable zero-shot classification with CLIP and reasoning-based analysis using vision language models, our approach identifies imaging techniques and material categories from real-world microscopy data, enabling efficient and interpretable microscopy analysis at scale. Code
Video
Doc
Subham Ghosh
Shubham Tiwari
Ayush Kumar Pandey
Pranjul Chandra Bhatt
MCP Servers for Theory - Experimental Matching
ID: H-016
We used the existing FerroSIM simulation and developed an MCP server to control FerroSIM through agents, then combined this with the digital twin AFM investigating a ferroelectric sample to perform rudimentary theory–experiment matching through digital–digital agentic AI workflows. Code
Video
Doc
Vani Nigam
Achuth Chandrashekhar
MARVL: Materials-Science Aware Reasoning Dataset for Vision–Language Models
ID: H-017
We introduce MARVL, a comprehensive, materials-science-aware reasoning dataset designed to train and benchmark vision–language models on microscopic imaging tasks across AFM, SEM, and TEM modalities. Using automated literature mining and curated experimental data, MARVL captures real imaging conditions, tip or instrument-induced artifacts, and scientifically grounded explanations. This dataset enables next-generation VLMs to perform accurate feature interpretation with transparent reasoning, expanding AI capabilities across nanoscale materials characterization. Code
Video
Doc
Mohd Zaki
Indrajeet Mandal
Prince Sharma
Megha Mondal
Sudhakar Kumar
Vishal Bhaskar
Arjun Chand
Shivnandi
”Stack: An AI-Powered Platform for Atomic Surface Analysis and Microscopy Simulation
ID: H-018
”Stack is an open-source, AI-driven platform that simplifies atomic surface structure research by combining machine learning potentials, multi-agent workflows, and natural language interfaces into a unified system. The platform allows researchers to generate, relax, and analyze atomic surfaces through conversational queries, automatically producing visualizations, standardized data outputs, and AI-generated summary reports. It supports multiple microscopy simulation modalities (STM, AFM, IETS) and is designed for applications in catalysis, semiconductor research, and 2D materials science. Code
Video
Doc
Aritra Roy
Kevin Shen
Ben Blaiszik
Piyush Ranjan Maharana
Latent-Constrained Autoencoder for 4D-STEM Clustering
ID: H-019
This project applies unsupervised deep learning to identify crystalline and amorphous domains in 4D-STEM diffraction data from metallographic samples. By training a rotation-invariant autoencoder to compress diffraction patterns into low-dimensional latent representations and clustering them with HDBSCAN, the method robustly detects grain boundaries while reducing spurious cluster fragmentation. Incorporating physically motivated rotational invariance significantly improves boundary localization and accurately captures the finite width of grain boundaries in experimental data. Code
Video
Doc
Martin Eriksen
JoaquĂ­n OtĂłn
Mauricio Matta
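The rotational invariance this entry builds into its autoencoder can be illustrated with a much simpler classical descriptor: the azimuthal average of a diffraction pattern, which is unchanged by rotation about the pattern center. A minimal NumPy sketch (purely illustrative, not the team's model; the ring pattern and grid size are hypothetical):

```python
import numpy as np

def radial_profile(pattern):
    """Azimuthal average about the pattern center: a simple
    rotation-invariant descriptor for 2D diffraction patterns."""
    h, w = pattern.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - (w - 1) / 2, y - (h - 1) / 2).astype(int)
    sums = np.bincount(r.ravel(), weights=pattern.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)   # avoid division by empty bins

# Toy diffraction "ring" on an odd-sized grid (so the center is a pixel)
n = 65
y, x = np.indices((n, n))
rr = np.hypot(x - (n - 1) / 2, y - (n - 1) / 2)
ring = np.exp(-((rr - 15) ** 2) / 4.0)    # Debye-Scherrer-like ring at r = 15

prof = radial_profile(ring)
prof_rot = radial_profile(np.rot90(ring))  # same pattern rotated by 90 degrees
```

A learned rotation-invariant latent space generalizes this idea: patterns that differ only by in-plane rotation map to (nearly) the same representation, which is what makes the downstream HDBSCAN clustering robust.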
HAADF-Segment: Bridging Supervised Segmentation and Unsupervised Anomaly Detection for Automated Defect Analysis in 2D Transition Metal Dichalcogenides
ID: H-020
This project establishes an automated deep learning pipeline for HAADF-STEM analysis, utilizing a custom U-Net architecture to perform precise multi-class segmentation of atomic species in 2D transition metal dichalcogenides. By coupling this supervised model with unsupervised manifold learning techniques, the framework enables the robust identification and quantification of point defects and structural instabilities without relying on exhaustive manual labeling. Code
Video
Doc
Cobi Allen
Mostafa A. Mostafa
Nannan Zhang nz257@exeter.ac.uk
Navid Haghmoradi
4Denoising
ID: H-021
We adapted a state-of-the-art unsupervised deep-learning model (UDVD) to work on 4D-STEM datacubes. To train the model we created semi-synthetic low-dose data by modifying open-source datacubes. Our approach generalizes to other high-dimensionality data streams and, after training, can potentially be deployed in live applications. Code
Video
Doc
Leonardo Cancellara
William Talbott
Ankit Shrivastava
Jordi Weingard
Dingqiao Ji
Ian MacLaren
DeepScan Pro: Intelligent Microscopy Agent
ID: H-022
DeepScan Pro is an interactive software pipeline that transforms passive scanning electron microscopes (SEM/STEM) into intelligent discovery agents. Addressing the inefficiency of blind raster scanning, it utilizes a Multi-Stage Active Learning architecture. Code
Video
Doc
Md Habibur Rahman
Anomaly Detection and Clustering of Atomic-Resolution STEM Images Using Semantic Segmentation
ID: H-023
This project develops a machine-learning framework for automated defect detection, localisation, and clustering in atomic-resolution HAADF-STEM images. It combines supervised image classification, weakly supervised U-Net segmentation, and unsupervised clustering to identify multiple, coexisting defect types without pixel-level annotations. Demonstrated on CdTe and SrTiO₃ datasets, the framework achieves high classification accuracy and produces quantitative, material-independent heatmaps, providing a scalable and reproducible workflow for high-throughput, data-driven electron microscopy analysis. Code
Video
Doc
Claudia Sosa Espada
Surabhi Gunjur Sathish
Thomas Karagiannis
U-Net-based pipeline for high-throughput quantification of crack growth during in situ TEM tensile testing
ID: H-024
This project analyzes crack growth and propagation in materials using computer vision and deep learning techniques applied to microscopy images. The system tracks crack tip positions, measures crack geometry parameters (length, area, width, CTOD), and monitors temporal evolution across multiple experimental datasets captured at different magnifications. Code
Video
Doc
Vivek Devulapalli
Integrating AFM and STEM Digital Twins with LLMs for Automated Data Interpretation
ID: H-025
This project introduces Microscopy-LLM, a framework that bridges Claude AI with AFM and STEM digital twins via specialized MCP servers. It enables researchers to perform automated image segmentation and data interpretation through natural language, transforming complex binary datasets into an intuitive, conversational workflow. This 'AI co-pilot' effectively lowers the technical barrier to advanced materials characterization and provides a scalable blueprint for the future of autonomous laboratories. Code
Video
Doc
Josep Cruañes Giner
Fanzhi Su
Graph-O-Foam: ActiveScan Copilot
ID: H-026
The Graph-O-Foam project forecasts foam stability by converting lab data into synthetic microscopy frames and extracting bubble metrics with computer vision. Energy-based models (EBMs) showed the best accuracy, physics-informed transformers balanced realism and predictive power, and a Streamlit dashboard enables real-time visualization and analysis. Code
Video
Doc
Vedashree Yemula
Mohamed Bouchekouf
Param Trimbake
Mohammed Tanvir
Muntasir Mahmud
Phase Mapping Ni-based alloys with EBSD Using Deep Learning
ID: H-027
We applied convolutional neural networks to classify EBSD patterns into different Ni-based alloys. Both contrastive and cross-entropy learning were tested, and their accuracies were compared. Code
Video
Doc
Alfred Yan
Gabriel Trindade dos Santos
Ramandeep Mandia
Roberto dos Reis
Vinayak Dravid
Proof of Concept for Automated STEM Tilting Using Image Analysis for Quantifoil Grids
ID: H-028
Quantifoil grids with circular holes provide an interesting test case for automated tilting in STEM: the holes of the grid appear elliptical when mistilted and perfectly circular when perpendicular to the beam direction. The goal of this project was to develop an algorithm that simulates the automated tilting of a Quantifoil grid by examining a simulated micrograph, quantifying the circularity of the hole, and adjusting the tilt to maximize circularity. Code
Video
Doc
Elizabeth Heon
Sai Venkata Gayathri Ayyagari
Sita Sirisha Madugula
Darel Pates
Lynnicia Massenburg
Andrew Balog
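The geometry behind this entry is compact enough to sketch: a circular hole tilted by an angle Ξ projects to an ellipse whose minor/major axis ratio is cos Ξ, so the mistilt can be read off an ellipse fit directly. A minimal illustration (not the team's algorithm; the 100 × 86.6 px measurement is a hypothetical example):

```python
import numpy as np

def hole_circularity(major, minor):
    """Circularity of a projected Quantifoil hole (1.0 = perfectly circular)."""
    return minor / major

def implied_tilt_deg(major, minor):
    """A circular hole tilted by theta projects to an ellipse with
    minor/major = cos(theta), so the mistilt follows from arccos."""
    return np.degrees(np.arccos(np.clip(minor / major, -1.0, 1.0)))

# Simulated micrograph measurement: hole appears as a 100 x 86.6 px ellipse
circ = hole_circularity(100.0, 86.6)
tilt = implied_tilt_deg(100.0, 86.6)   # cos(30 deg) ~ 0.866, so ~30 deg mistilt
```

An automated tilting loop would repeat this measurement after each stage move and drive the tilt toward circularity = 1.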
CAVA: A Causal Analysis and Validation Agent for Microscopy-Informed Materials Discovery
ID: H-029
CAVA is an agentic causal reasoning framework designed to support microscopy-informed materials discovery by integrating data-driven causal analysis with literature-based validation. The system first identifies robust causal relationships in microscopy-derived data using established causal discovery algorithms and regime-aware consensus, then employs a large language model agent to contextualize and validate these relationships against prior scientific studies. By combining automated causal discovery with agent-based literature reasoning, CAVA provides a transparent and interpretable foundation for causal understanding and decision-making in experimental materials science. Code
Video
Kamyar Barakati
Elyar Tourani
Vivek Chawla
Mohammad Amin Moradi
eJect
ID: H-030
We developed a digital twin STEM framework to simulate atom manipulation experiments, then validated the approach on a WS₂ monolayer using a ThermoFisher Spectra 300. Survival analysis of atoms under point and blast irradiation revealed both ballistic and non-ballistic damage contributions, highlighting differences between simulated and experimental behavior. Code
Video
Doc
Austin C. Houston
Dominick L. Pelaia
Levi M. Dunn
RHEED universal translator
ID: H-031
RHEED Universal Translator is a bidirectional deep-learning tool that links in-situ RHEED diffraction patterns with ex-situ XPS stoichiometry for SrTiO₃ thin films. It learns a shared latent representation to predict stoichiometry from a RHEED image or generate a RHEED-like pattern from a target stoichiometry, and it’s packaged in a Streamlit no-code app with pretrained weights for easy demos and extension. Code
Video
Doc
Asraful Haque
Jawad Chowdhury
Sumner Harris
Bio-STEMGPT: An AI Assistant for TEM Data Analysis
ID: H-032
We present Bio-STEMGPT, a chatbot that integrates segmentation tools from a user's chosen segmentation model to help users identify features in TEM images and explore the resulting segmented data. We envision this as a tool for scientists integrating TEM data into their projects who may be unfamiliar with analyzing this type of data. Code
Video
Doc
Bridget Vincent
Arielle Rothman
Saurin Savla
Identifying the Spatial Domains of Sm-doped BiFeO3 using STEM and Machine Learning
ID: H-033
Spatial Domains of Sm-doped BiFeO3 using various Sm concentrations and window sizes are investigated using STEM, principal component analysis, k-means clustering, and supervised machine learning. The results suggest that the ideal window size is 40, with the domains being well distinguished for both k-means clustering and supervised machine learning using random forests. These methods have the potential to create a more efficient and automated method to determine the best window size to distinguish between spatial domains of ferroelectric materials. Code
Video
Doc
Fatima Anwar
Hayden Dennison
Julie Schlanz
Emily Stump
SPM–(S)TEM correlative studies for nanoscale characterisation and data curation for implied transfer learning.
ID: H-034
In electron microscopy and scanning probe microscopy, the scarcity of curated training data and the low throughput of atomic-resolution techniques often limit the adoption of supervised learning methods. Although automation is improving through manufacturer-provided software interfaces, e.g. 'Autoscript' from Thermo Fisher Scientific in electron microscopy, and digital twin software, human oversight remains essential during operation. Transfer learning could enable a combinatorial approach to generating sufficient labelled data. Here we present a framework for correlative SPM–(S)TEM studies based on physics-informed digital twins. Code
Video
Doc
Timothy Lambden
Angle-Dependent Morphologies of Ferroelectric Domain Walls
ID: H-035
The morphology of ferroelectric domain walls with different characteristic angles was analyzed by neural networks and computer vision algorithms. Neural networks could differentiate between high-angle (>130 degree) and lower-angle (<130 degree) domain walls from their morphology. Further, higher-angle domain walls were found to prefer a curved morphology, while lower-angle domain walls preferred a straight morphology, in line with expectations for ferroelectric–ferroelastic domain walls. Code
Video
Doc
Laurel Washburn
Ehtesham Anwar
Sho Watanabe
Aidan Cotton
Kaurab Gautam
Nabin Khanal
Ralph Bulanadi
Moiré Fringes in High-Resolution Transmission Electron Microscopy
ID: H-036
Our project involved separating HRTEM moiré fringes into their two lattice layers. This was done using a machine learning model that predicted what the two layers should look like based on the moiré fringe. Code
Video
Doc
Yuval Noiman
Hank Perry
Jackie Cheng
Automated Detection of Ice Artifacts in Cryo-EM Using Machine Learning
ID: H-037
Cryogenic electron microscopy (Cryo-EM) enables structural analysis of biological macromolecules by reconstructing three-dimensional structures from two-dimensional images of rapidly frozen samples. However, Cryo-EM micrographs suffer from low signal-to-noise ratio, poor contrast, and contamination from ice artifacts, making it difficult to distinguish true biological signal from noise. To address this challenge, we propose an automated ice artifact detection approach using a deep convolutional neural network. Code
Video
Doc
Tharushi Rajaguru
Shehani Kahawatte
Elif Dursun
Diana Oliveira
Automated Region-of-Interest Detection in Microscopy Images Using VGG-16 and Patch-Based Classification
ID: H-038
This project presents a patch-based deep-learning framework for automated region-of-interest detection in atomic-resolution STEM images. Using a VGG-16–based model, the approach combines supervised defect classification with unsupervised clustering to identify defect-bearing regions, extract their spatial coordinates, and enable scalable, reproducible, and automated microscopy analysis across different material systems. Code
Video
Doc
R. A. W. Ayyubi
Shoaib Masood
Fatemeh Karimi
Zahira El Khalidi
Jaeyeon Jo
Automated Defect Detection in STEM Images Using Machine Learning
ID: H-039
We developed an ensemble deep learning framework for automated defect detection in high-resolution scanning transmission electron microscopy (STEM) images of CdTe and SrTiO₃ (STO). The approach uses patch-level classification with custom convolutional neural networks (CNNs) trained on frequency-enhanced image representations to identify crystalline and defect regions. Code
Video
Doc
Noah Holt
Joshua Marvin
Ramji Subedi
Kamal Khanal
Mohd Tauhid Khan
Machine learning–enabled SEM-driven surface morphometrics
ID: H-040
We first apply a classical computer-vision approach for quantitative SEM analysis, defining microcracks as a feature class and estimating crack count, length distribution, and area within the field of view. To streamline analysis, the framework was extended with neural-network segmentation, improving generalization, reducing preprocessing demands, and enabling pixel-level crack detection alongside the classical approach. To demonstrate the implementation, plasma-treated, cobalt-sputtered samples were imaged at multiple fields of view. Code
Video
Doc
Vineet Kumar
Yogesh Paul
Himanshu Mishra PhD.
Saurabh Sudan
Machine Learning Denoising of Reciprocal Space Maps for realistic center-of-mass evaluation
ID: H-041
This project utilizes the self-supervised Noise2Self method to denoise reciprocal space maps from nano-XRD experiments. It enables rapid denoising and more accurate center-of-mass calculations without requiring clean reference data. The approach effectively reduces noise while preserving subtle structural features. Further optimization is needed for very weak diffraction signals, such as those from quantum wells. Code
Video
Doc
Leonardo Oliveira
Soroush Motahari
Navyanth Kusampudi
Kartik Umate
Meizhong Lyu
EM-Caddie
ID: H-042
EM-Caddie is a web-based, no-coding-required platform that enables microscopy researchers to analyze and process images using ML tools. Users can describe desired operations in plain English, and the system applies pre-trained models for tasks like super-resolution, segmentation, FFT analysis, and line profile extraction, all through an interactive interface. The platform streamlines workflows, reduces reliance on specialized software or scripts, and supports extension with additional models and tools. Code
Video
Doc
Jessica Gerac
Kohen Goble
Kyle Hollars
Automated particle detection and quantitative analysis from electron microscopy images
ID: H-043
We developed NanORange, an AI-assisted workflow that automatically detects nanoparticles in noisy cryo-TEM images by combining denoising/contrast enhancement, adaptive thresholding, and instance-aware boundary separation to prevent merged detections in crowded regions. The segmented particles are then fit with circles/ellipses to extract quantitative metrics and generate particle-size distributions for datasets. This pipeline improves consistency across variable-contrast micrographs, and outputs analysis-ready tables for rapid, reproducible vesicle statistics. Code
Video
Doc
Seyed Aref Golsorkhi
Mohammad Javad Raei
Frances Joan Alvarez
Marc Mamak
3-D Membrane Reconstruction from a Single 2-D FIB-SEM Micrograph
ID: H-044
This project introduces an open-source computational framework that statistically reconstructs 3D membrane microstructures from a single 2D FIB-SEM slice using Gaussian Random Fields. This tool enables rapid, understandable prediction of critical transport properties like permeability and pore connectivity without the need for expensive 3D imaging. Code
Video
Doc
Antonio M. Lancuentra
Bill Yinqi Wang
Ahmad Ali
Thomas Zhang
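The core of the Gaussian Random Field reconstruction above can be sketched in a few lines: smooth white noise with a spectral filter, then threshold the field at a level set chosen to reproduce the porosity measured on the 2D slice. A minimal NumPy sketch (illustrative only, not the project's framework; the grid size, correlation length, and 0.35 porosity are hypothetical):

```python
import numpy as np

def gaussian_random_field(shape, corr_len, seed=0):
    """Isotropic Gaussian random field: white noise smoothed by a
    Gaussian spectral filter with correlation length corr_len (voxels)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=shape)
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    kk = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)
    filt = np.exp(-0.5 * (kk * corr_len * 2 * np.pi) ** 2)
    field = np.fft.ifftn(np.fft.fftn(noise) * filt).real
    return (field - field.mean()) / field.std()   # standardize to N(0, 1)

# Target porosity measured on the single 2D FIB-SEM slice (hypothetical value)
porosity_2d = 0.35
field3d = gaussian_random_field((32, 32, 32), corr_len=3)

# Level-set threshold chosen so the 3D structure reproduces the 2D porosity
threshold = np.quantile(field3d, 1 - porosity_2d)
pores = field3d > threshold
porosity_3d = pores.mean()
```

Matching higher-order statistics (e.g. the two-point correlation of the slice) constrains the filter further; the sketch only matches the first moment, the volume fraction.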
GridScope: LLM assistant for automated microscopy
ID: H-045
GridScope is an AI-powered automation platform for Scanning Transmission Electron Microscopy (STEM) that bridges the gap between experimental design and instrument execution. Researchers describe imaging objectives in natural language—such as 'acquire a 5×5 grid at 3 ”m spacing' or 'explore tilt angles from 0° to 60°'—and receive executable Python scripts validated against a physics-based Digital Twin. Code
Video
Doc
Shuchi Sanandiya
Alexander Pattison
Sanchit Bansal
iPotNET
ID: H-046
This project introduces a physics-informed deep learning framework to solve the inverse scattering problem in Scanning Transmission Electron Microscopy (STEM). By fusing visual detector data with physical metadata (thickness, rotation) into a SwinUNETR backbone, the model reconstructs quantitative atomic electrostatic potentials with high fidelity (>54 dB PSNR). This approach explicitly mitigates non-linear dynamical scattering artifacts, significantly outperforming traditional iDPC methods, which depend on idealized conditions and break down for optically thick or tilted crystalline samples. Code
Video
Doc
Haipei Shao
Sridurgesh Ravichandran
Zeyu Wang
Mattia Lizzano
Andrea Cicconardi
Nanoscale Structure Disentanglement-KFUPM
ID: H-047
NanoscaleAnalyzer is a React component that provides a UI for uploading microscopy images and running a simulated ML pipeline to extract nanoscale structural features (grains, domains, defects, and optional STM spectroscopy). It is built with React and uses Tailwind CSS for styling and lucide-react for icons. Code
Video
Doc
Abbas Adamu Abdullahi
Aminu Rabiu Doguwa
Mauliady Satria
Dr Abdurahman Aliyu
Abubakar Dahiru Shuaibu
Ahmad Abbas Dalhatu
Sara Isabel Gracia Uribe
Dr Mariah Batool
RAG for Microscopy Data Analysis
ID: H-048
We developed a Retrieval-Augmented Generation (RAG) agent for microscopy data analysis. The RAG agent interfaces with process-specific programs and generates code based on user-specified prompts. Code
Video
Doc
Ganesh Narasimha
Zijie Wu
Learning Atomic Defects Without Real Data: YOLO Trained on Fully Synthetic STEM Imagery
ID: H-049
We present a framework for generating fully synthetic STEM images of atomic lattices containing vacancy, interstitial, and grain boundary defects and using this data to train an object detection model. A YOLO-based detector trained exclusively on synthetic data accurately identifies defects in experimentally acquired STEM images, achieving up to 99.5% accuracy on labeled test data and strong qualitative performance on additional datasets. This approach enables real-time, scalable defect detection while reducing reliance on manually labeled experimental data. Code
Video
Doc
Victoria Augoustides
Jed Doman
Mahmoud Hawary
Tatiana Proksch
Jingyun Yang
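The synthetic-data idea above, rendering lattices with known defects so labels come for free, can be sketched in NumPy (a toy generator, not the team's code; the lattice spacing, noise level, and vacancy count are hypothetical):

```python
import numpy as np

def synthetic_lattice(n=128, spacing=16, sigma=2.0, vacancies=3, seed=0):
    """Render a toy HAADF-like lattice as a sum of Gaussian atom columns,
    deleting a few random sites to create labeled vacancy defects."""
    rng = np.random.default_rng(seed)
    ys, xs = np.meshgrid(np.arange(spacing // 2, n, spacing),
                         np.arange(spacing // 2, n, spacing), indexing="ij")
    sites = np.stack([ys.ravel(), xs.ravel()], axis=1)
    missing = rng.choice(len(sites), size=vacancies, replace=False)
    keep = np.delete(np.arange(len(sites)), missing)

    img = np.zeros((n, n))
    yy, xx = np.indices((n, n))
    for cy, cx in sites[keep]:
        img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    img += rng.normal(0, 0.05, img.shape)   # noise stand-in for low dose

    boxes = sites[missing]                  # vacancy centers = detector labels
    return img, boxes

img, boxes = synthetic_lattice()
```

In the full pipeline, each rendered image and its vacancy coordinates would be converted to YOLO-format bounding boxes for training; the generator's randomness provides unlimited labeled data.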
Cell Checker
ID: H-050
My project develops a basic cell-checker app that analyzes a user's AFM data and evaluates whether the surface is suitable for culturing cells of the selected type. Code
Video
Doc
Dale Herzog
Interpretable Digital Twins for Autonomous STEM Aberration Correction
ID: H-051
We present a machine-learning-assisted framework for automated aberration correction in STEM, addressing the nonlinear and strongly coupled nature of corrector tuning that limits conventional, operator-dependent workflows. By combining LLM-based log parsing, symbolic regression, a corrector digital twin, and reinforcement learning, the framework learns aberration–response relationships to enable faster, more stable, and reproducible correction. Code
Video
Doc
Yingheng Tang
Kang’an Wang
Haozhi Sha
Juhyeok Lee
Peter Ercius
Multi-Pass Preprocessing for Robust Hysteresis Loop Fitting in Ferroelectric Materials
ID: H-052
Hysteresis loops are experimental signatures representing the characteristics of ferroelectric materials. Consequently, extracting useful features from hysteresis loops is crucial for describing ferroelectric properties and switching behavior. In this study, we aim to improve hysteresis loop fitting using a 50x50 hysteresis loop dataset on a PbTiO3 thin film, acquired by piezoresponse force spectroscopy. Code
Video
Doc
Mingxin Zhang
Thanakrit Yoongsomporn
Hardik Tankaria
Sivakorn Kanharattanachai
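The fitting step this entry aims to improve can be sketched with an idealized tanh branch model fit via SciPy (purely illustrative, not the team's model or preprocessing; the parameter values and noise level are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def loop_branch(E, a, Ec, w, off):
    """One branch of an idealized ferroelectric hysteresis loop:
    saturation a, coercive field Ec, transition width w, offset off."""
    return a * np.tanh((E - Ec) / w) + off

# Synthetic noisy branch standing in for one piezoresponse spectroscopy sweep
E = np.linspace(-5, 5, 200)
rng = np.random.default_rng(0)
y = loop_branch(E, 1.0, 0.8, 0.5, 0.1) + rng.normal(0, 0.05, E.size)

# Nonlinear least-squares fit recovers the loop parameters from noisy data
popt, _ = curve_fit(loop_branch, E, y, p0=[1, 0, 1, 0])
a_fit, Ec_fit, w_fit, off_fit = popt
```

Multi-pass preprocessing (outlier rejection, smoothing, branch separation) would run before this fit; robust initial guesses like `p0` are what such preprocessing is meant to supply at each of the 50 × 50 grid points.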
Combined Approaches for Drift Correction and Domain Dynamics Analysis in AFM Imaging
ID: H-053
Atomic Force Microscopy is inherently a time-consuming imaging technique, with data acquisition times on the order of minutes. This makes the instrument sensitive to small positional drifts caused by thermal fluctuations, piezoelectric scanner instabilities, or mechanical perturbations of the AFM head or sample. This project therefore aims to distinguish changes due to instrumental (positional) drift from genuine sample dynamics, and to correct for that drift. Code
Video
Doc
Aadarsh Kumar
Yevhen Brych
Score-Based Super-Resolution for Atomic-Scale MoS2 Imaging
ID: H-054
We propose Score-Based Super-Resolution (SBSR), a conditional diffusion framework for atomic-scale reconstruction from low-dose HAADF-STEM images. By combining physics-informed degradation modeling with score-based generative learning, our approach restores high-fidelity atomic lattice structures while reducing reliance on high electron doses. This enables more robust imaging of beam-sensitive nanomaterials without introducing artificial structures. Code
Video
Doc
Xinyuan Wang
Mingli Huang
Jingyuan Sun
Fanzhi
Uncertainty-Driven AFM Sampling with LLM Guidance
ID: H-055
AFM provides amazing detail, but it is incredibly slow. Scanning every single pixel is inefficient, especially when large parts of a sample might be flat or empty. Our motivation was simple: can we scan fewer lines, saving time, and still recover an essentially perfect image? Code
Video
Doc
Shuting Xie
Wenyi Yao
Defect Classification for 2D-Material STEM Datasets
ID: H-056
This project presents a physics-guided, machine learning–based framework for automated identification and classification of atomic-scale defects in HAADF-STEM images of Janus MoWSSe. By combining interpretable Z-contrast descriptors with a two-stage convolutional neural network, the approach enables scalable defect localization, classification, and statistical analysis across large experimental datasets. Code
Video
Doc
Vikas Reddy Paduri
Nirmal Singh
Prabhat Prajapati
Mehran Yasir
Vinayak Srivastava
Karishma Begum
Abinava Yeshwanth K.J
Mapping 2D-flexibility and large conformational transitions from HS-AFM with parsimonious data-driven models
ID: H-057
The problem we tackle in this hackathon is to develop a lightweight and self-contained ML tool that helps dissect, interpret, and identify both 2D-flexibility and large conformational transitions of biomacromolecules. In addition, HS-AFM can require external control and/or deep knowledge of the system, which we attempt to substitute with a conceptual CG model that reflects interfacial dynamics with surfaces of different hydropathy. The difference between our simulations and existing fitting tools is that in our case the effects of the surface are included and the defined observables are reduced to 2D; hence, tackling the 2D-flexibility of domains becomes possible. Code
Video
Doc
Asst. Prof. Horacio V. Guzman
M.Sc.c. Ian Addison-Smith
M.Sc.c. Celica Krigul
Ph.D.c. Willy Menacho
VantaScope 5090 Pro
ID: H-058
A deep learning ensemble with fuzzy logic and explainability layers that automatically characterizes graphene samples with linguistic, human-understandable outputs. The tool then provides predictive analytics on material properties based on the structure. Code
Video
Doc
Haidar Bin Hamid
Juhyeon Park
MCP for simulator
ID: H-059
MCP for simulator Code
Video
Doc
Guanlin He
Yongwen Sun
Automated Atom-Resolved Defect and Element Classification in 2D MoWSSe HAADF-STEM Dataset
ID: H-060
2D material defect classification and identification Code
Video
Doc
Cheng-Yu Chen
Swarnendu Das
George Hollyer
Pawan Vedanti
IQ-X: Image Quality Assessment via Cross-Modal Validation
ID: H-061
IQ-X is a reference-based, physics-informed framework for assessing AFM image quality using SEM as a stability baseline. Code
Video
Doc
Aditya Raghavan
Chiranjib Chakrabarti
ANCHOR: Registration by Alignment
ID: H-062
We developed an automated, landmark-based workflow for aligning multi-round microscopy datasets. The method uses cellular landmarks to match features across imaging rounds, fits smooth axial mismatches, and assigns local confidence scores to indicate alignment reliability. This approach enables reproducible integration of multiple imaging rounds while avoiding the typical failure modes of manual or global registration methods. Code
Video
Doc
Saven Denha
Dima Traboulsi
Jonah Wilbur
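As an illustration of the landmark-fitting step, here is a minimal least-squares alignment between matched landmarks from two rounds. ANCHOR fits smooth axial mismatches; this sketch simplifies to a single translation, and the per-landmark confidence formula is a hypothetical stand-in.

```python
import numpy as np

def fit_translation(landmarks_a, landmarks_b):
    """Least-squares translation mapping round-B landmarks onto round A,
    with a hypothetical residual-based local confidence score."""
    a = np.asarray(landmarks_a, float)
    b = np.asarray(landmarks_b, float)
    t = (a - b).mean(axis=0)                 # optimal translation in least squares
    residuals = np.linalg.norm(a - (b + t), axis=1)
    confidence = 1.0 / (1.0 + residuals)     # 1.0 = perfect local agreement
    return t, confidence
```

Low-confidence landmarks flag regions where a single global transform is inadequate and a local (smooth) correction is needed.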
MicroSeg Lab: One-Shot Microscopy Segmentation with LLM-Guided Hybrid Refinement
ID: H-063
MicroSeg is a one-shot microscopy segmentation pipeline that produces instance and union masks from a single uploaded image. It runs fast classical segmentation first, then uses review-gated SAM refinement and optional LLM planning to improve results only when needed. Reference images/masks and minimal user hints provide a lightweight way to steer segmentation toward the intended phase on hard cases. Code
Video
Doc
Shakti P. Padhy
Sushant Sinha
Chase Katz
RONIN - Ronchigram based Optical Neural Inference for aberration detectioN
ID: H-064
Aberration correction is critical for achieving high spatial resolution in scanning transmission electron microscopy (STEM). Conventional correction relies on expert interpretation of ronchigrams, making the process slow and difficult to automate. In this project, we present RONIN, a physics-informed deep learning framework that predicts dominant electron-optical aberrations directly from ronchigram images. Using synthetically generated ronchigrams and a ResNet-based regression model, RONIN demonstrates accurate inference of several low-order aberrations, highlighting its potential for closed-loop and autonomous microscope alignment. Code
Video
Doc
Sriram Sankar
Manikandan Sundararaman
Aditya Raghavan
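Synthetic ronchigram generation of the kind RONIN trains on starts from an aberration phase function. A minimal sketch with defocus (C1) and twofold astigmatism (A1) terms, using a common low-order convention that may differ in detail from the project's implementation:

```python
import numpy as np

def aberration_phase(kx, ky, wavelength, C1=0.0, A1=0.0, phi_A1=0.0):
    """Aberration phase chi(k) with defocus C1 and twofold astigmatism
    A1 at azimuth phi_A1 (parameter names assumed, common convention)."""
    k2 = kx**2 + ky**2
    theta = np.arctan2(ky, kx)
    return np.pi * wavelength * k2 * (C1 + A1 * np.cos(2 * (theta - phi_A1)))
```

Evaluating chi on a 2D grid of spatial frequencies and propagating the resulting aberrated wave yields the synthetic ronchigrams used as regression targets.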
Autonomous Identification of Metal Microstructural Features via Latent Space Mapping-Based Microscope Control
ID: H-065
We develop an autonomous microscopy framework that uses machine learning to identify metal microstructural features directly during EBSD acquisition, without prior knowledge or human intervention. By encoding Kikuchi diffraction patterns into a latent space and adaptively guiding data acquisition, the approach efficiently resolves multiscale microstructural heterogeneity while minimizing the number of required measurements. This work provides a foundation for autonomous microstructure mapping and data-driven alloy design. Code
Video
Doc
Pierre BELAMRI-REGENPIED
Martin COURTOIS
Mathieu CALVAT
Neal BRODNIK
JC STINVILLE
Henry PROUDHON
MicrosCopilot: An Agentic, Physics‑Aware AI Framework for Confocal Microscopy
ID: H-066
MicrosCopilot combines classical image processing, particle tracking, and a configurable digital twin of Brownian motion with large-language-model agents that guide the user through the full workflow: simulating or loading movies, detecting and tracking particles, extracting physical parameters (such as diffusion coefficients), and explaining the results in plain language for materials and tribology researchers. The system is organized as multiple cooperating agents (data loading, detection/tracking, physics analysis, and explanation) behind a simple UI, so that experimentalists can quickly go from raw z-stacks or time-series images to quantitative, reproducible results and narrative reports. Code
Video
Doc
Abhishek Gupta
Samuel Hee
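The diffusion-coefficient extraction step above can be sketched via the mean squared displacement: for Brownian motion MSD(tau) = 2·d·D·tau, so D follows from the short-lag slope. A minimal illustration, not MicrosCopilot's actual analysis code:

```python
import numpy as np

def diffusion_coefficient(track, dt, dims=2):
    """Estimate D from one trajectory via the MSD short-lag slope.
    track: (N, dims) positions sampled every dt seconds."""
    track = np.asarray(track, float)
    lags = range(1, min(len(track) // 2, 10) + 1)  # short lags are less biased
    taus = np.array([k * dt for k in lags])
    msd = np.array([np.mean(np.sum((track[k:] - track[:-k]) ** 2, axis=1))
                    for k in lags])
    slope = np.polyfit(taus, msd, 1)[0]            # linear fit MSD vs tau
    return slope / (2 * dims)
```

In practice one averages the MSD over many tracks before fitting, and checks that the MSD is actually linear in tau before interpreting the slope as free diffusion.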
Generative Topographical Interpretation of AFM Data
ID: H-067
Nano-Constellations is an automated pipeline that reconstructs raw AFM data into high-fidelity 3D topographical maps of TiO2 surfaces. By implementing asymmetric cropping to eliminate edge noise and utilizing Delaunay triangulation for structural reconstruction, we transform raw TiO2 intensity data into a 3D geometric web. The results demonstrate a high-fidelity 3D reconstruction that allows for perspective flipping and real-time visualization of interatomic strain. Code
Video
Doc
Sinny J. Trivedi
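The Delaunay-based lifting of an AFM height map to a 3D mesh can be sketched with SciPy. This assumes unit pixel spacing and omits the asymmetric-cropping step described above; it is a sketch, not the project's pipeline.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_afm(height_map):
    """Turn a 2D AFM height map into a triangulated 3D surface mesh:
    triangulate the (x, y) grid in 2D, then lift vertices by height."""
    h, w = height_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xy = np.column_stack([xs.ravel(), ys.ravel()])
    tri = Delaunay(xy)                                    # 2D triangulation
    vertices = np.column_stack([xy, height_map.ravel()])  # lift to 3D
    return vertices, tri.simplices
```

The returned vertices and triangle indices can be handed directly to a mesh viewer for the perspective flipping described above.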
TwinSpec: A Digital Twin Framework for GIWAXS Data, Geometry, and Physics-Aware ML
ID: H-068
TwinSpec is a modular digital twin framework for grazing-incidence wide-angle X-ray scattering (GIWAXS) that integrates a data digital twin, an interactive lab console, and a geometry-driven visual twin. By converting literature-derived GIWAXS measurements into reproducible, instrument-aware representations and coupling them to an interactive control surface, TwinSpec enables exploration of how experimental geometry and processing conditions influence scattering outcomes without requiring synchrotron access. A lightweight, physics-aware ML module demonstrates that the resulting data infrastructure is directly usable for downstream machine learning workflows. https://www.twinspec.org Code
Video
Doc
Tajah Trapier

Use the resources below to prepare your final hackathon submission.
Each section includes links and instructions to help you navigate the process.


📩 Access the Hackathon Dataset

Find all microscopy datasets used in the Digital Twin Microscope and the Hackathon. This includes raw STEM data, metadata, and other related files.

Open Dataset

🧪 Digital Twin Microscope – Demo Notebooks

Explore the Digital Twin Microscope through interactive notebooks. These show how to simulate scans, load data, and work with the digital twin environment.

Open Demo Notebooks

📁 Preparing Your Data For Submission

Use this notebook to properly format your datasets and prepare files for submission. It demonstrates how to create clean, well-structured datasets from raw microscopy data.

Open Preparation Notebook

📬 Need Help?

For more details or assistance with the hackathon datasets, please contact:

Rama Vasudevan: vasudevanrk@ornl.gov

Utkarsh Pratiush: upratius@vols.utk.edu