Submissions
Hackathon Submissions
| Project | Description | Links | Team |
|---|---|---|---|
|
Image-Driven Analysis of Bubble Dynamics and Foam Stability ID: H-001 |
This project presents a fully image-driven and chemistry-agnostic analysis of foam stability using time-resolved microscopy images. Interpretable bubble-scale features, such as population dynamics, coarsening rates, size heterogeneity, and spatial organization, are extracted to quantify and rank foam stability. An explainable machine-learning model demonstrates that early microscopic dynamics can reliably predict long-term macroscopic foam behavior. |
Code Video Doc |
Name: Hesham Eina Abdalla | Email: hesham.eina@gmail.com | Affiliation: Qatar University (Alumni) | Position: Electrical Engineer / Graduate Student |
|
Contrastive Micrograph-Metadata Pre-Training ID: H-002 |
We implement CLIP (Contrastive Language-Image Pre-Training), but for HAADF-STEM images and paired metadata vectors. We show that it can effectively find a shared embedding space where metadata that fits the style of an image is embedded to vectors with high cosine similarity to the embedded vector of the image. Similarly, metadata combinations that do not fit the image style are pushed away. We also discuss the usefulness of the learned embedding space for a possible lightweight physics-informed denoiser. |
Code Video Doc |
Henrik Eliasson, haoel@dtu.dk, Technical University of Denmark, Postdoc Angus Lothian, angus.lothian@hotmail.com, Stemson (stemson.ai), Research collaborator |
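As context for the CLIP-style training that H-002 describes, the snippet below sketches the symmetric contrastive (InfoNCE) objective for paired image and metadata embeddings. It is an illustrative stand-in, not the team's code: the encoder outputs are replaced by random tensors and the temperature value is an assumption.

```python
# Minimal sketch of a CLIP-style contrastive objective for paired
# HAADF-STEM images and metadata vectors (illustrative, not the H-002 code).
import torch
import torch.nn.functional as F

def clip_loss(img_emb, meta_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)    # unit-norm image embeddings
    meta_emb = F.normalize(meta_emb, dim=-1)  # unit-norm metadata embeddings
    logits = img_emb @ meta_emb.t() / temperature  # cosine-similarity matrix
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # Matched image/metadata pairs sit on the diagonal; everything else is a negative.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random tensors standing in for encoder outputs.
img_emb = torch.randn(8, 128)
meta_emb = torch.randn(8, 128)
print(clip_loss(img_emb, meta_emb).item())
```

In practice the image embedding would come from an image encoder over the HAADF-STEM micrograph and the metadata embedding from a small network over the metadata vector.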
|
Automated Crystalline Domain Segmentation in Polycrystalline TEM Images via Machine Learning-Based FFT Analysis ID: H-003 |
This project introduces a fully automated pipeline for crystallographic analysis of HRTEM images, effectively eliminating the subjectivity and inefficiency of manual FFT peak selection. By utilizing Gaussian Mixture Models (GMM) for automated signal classification and integrating them with DBSCAN clustering, we enable consistent, high-throughput segmentation of crystalline domains. This workflow offers a widely accessible and reproducible alternative to resource-intensive 4D-STEM techniques for characterizing complex heterogeneous nanomaterials. |
Code Video Doc |
Woojin Bae, usmebbb@snu.ac.kr, School of Chemical and Biological Engineering, Seoul National University, Undergraduate Student Jinho Rhee, jhrhee01@snu.ac.kr , School of Chemical and Biological Engineering, Seoul National University, PhD Student Shihyun Kim, shihyun00@snu.ac.kr , School of Chemical and Biological Engineering, Seoul National University, MS Student |
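To make the H-003 workflow concrete, here is a minimal sketch of the two stages it names: a Gaussian Mixture Model separating strong (crystalline) from weak (amorphous) patch-wise FFT signal, and DBSCAN grouping the crystalline patches into spatial domains. The patch size, the single-peak feature, and the DBSCAN parameters are illustrative assumptions rather than the project's actual choices.

```python
# Illustrative sketch of the GMM + DBSCAN idea in H-003: classify patch-wise
# FFT signal strength with a Gaussian mixture, then cluster the "crystalline"
# patch coordinates with DBSCAN. Thresholds and patch size are arbitrary here.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import DBSCAN

def patch_fft_feature(img, patch=64, step=32):
    """Return patch centers and a simple FFT-peak feature for each patch."""
    centers, feats = [], []
    for y in range(0, img.shape[0] - patch, step):
        for x in range(0, img.shape[1] - patch, step):
            spec = np.abs(np.fft.fftshift(np.fft.fft2(img[y:y+patch, x:x+patch])))
            spec[patch//2-2:patch//2+3, patch//2-2:patch//2+3] = 0  # suppress DC peak
            feats.append(np.log1p(spec.max()))                      # strongest Bragg-like peak
            centers.append((y + patch//2, x + patch//2))
    return np.array(centers), np.array(feats).reshape(-1, 1)

img = np.random.rand(512, 512)                 # stand-in for an HRTEM image
centers, feats = patch_fft_feature(img)
gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
crystalline = gmm.predict(feats) == np.argmax(gmm.means_.ravel())  # higher-mean component
labels = DBSCAN(eps=48, min_samples=3).fit_predict(centers[crystalline])
print(f"{crystalline.sum()} crystalline patches in {labels.max() + 1} domains")
```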
|
Alexa for AFM ID: H-004 |
We demonstrate how to operate a microscope with voice control, by connecting a speech-to-text module with an instrument application programming interface. |
Code Video Doc |
Gunstheimer, Hans, gunstheimer@nanosurf.com, 1) Institute of Microstructure Technology, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany 2) Nanosurf AG, Liestal, Switzerland, student and industry Corte-León, Héctor, corte@nanosurf.com, Nanosurf AG, Liestal, Switzerland, industry |
|
Towards Automated Materials Analysis: Deep Learning Denoising and Phase Identification from 4D-STEM ID: H-005 |
This project develops a deep learning-assisted 4D-STEM pipeline that denoises low signal-to-noise diffraction patterns and automates phase identification, enabling reliable detection of O1 and O3 phases in NMC811. By making nanoscale phase mapping robust to complex background noise and weak reflections, it supports systematic investigation of structural deviations linked to degradation in Ni-rich layered cathodes. |
Code Video Doc |
Ethan Lo, University of Cambridge, Student Fanzhi Su, University of Cambridge, Student |
|
stem-denoising-hackathon ID: H-006 |
Deep learning for ultra-low dose STEM denoising |
Code Video Doc |
Avital Wagner(avital.wagner@radboudumc.nl), postdoc at Radboud University Medical Center Willem de Kleijne(w.p.m.dekleijne@tudelft.nl), student Jay te Beest(j.t.te.beest@liacs.leidenuniv.nl), student Akshaya Kumarjaishankar(akshaya.kumarjaishankar@ru.nl), student research assistant in The Radboudumc |
|
AFM-SPARK ID: H-007 |
AFM-SPARK is a system-specific super-resolution framework for Atomic Force Microscopy (AFM) imaging. The method reconstructs a global high-resolution (HR) AFM image from a fast low-resolution (LR) scan and a small number of selectively acquired HR patches, significantly reducing acquisition time while preserving nanoscale fidelity. |
Code Video Doc |
Youngwoo Choi, slpdavid@kaist.ac.kr, KAIST, Post Doc. Seungwhan Ryu, victoryhwan@kaist.ac.kr, KAIST, Student Junho Yang, ddid904@kaist.ac.kr, KAIST, Student Sanggil Park, sanggil201@kaist.ac.kr, KAIST, Student Jihui Won, wjh530@kaist.ac.kr, KAIST, Student Chaeyul Kang, braveyul0416@kaist.ac.kr, KAIST, Student |
|
LLM-Assisted Structural Interpretation of Microscopy Images ID: H-008 |
Recent advances in machine learning and large language models (LLMs) offer new opportunities to assist scientific interpretation. However, directly applying LLMs to microscopy images risks hallucination and loss of scientific rigor if not carefully constrained by quantitative evidence. This project addresses this gap by designing a lightweight, explainable pipeline that combines classical image analysis with evidence-grounded LLM-ready interpretation, without relying on opaque end-to-end black-box models. |
Code Video Doc |
Shyam Sundar Debsarkar, debsarss@mail.uc.edu , Final year PhD student in CS (AI) at the University of Cincinnati. Major areas - Medical imaging, microscopy, deep learning, LLM, VLMs etc. Iman Chatterjee, chattein@mail.uc.edu, University of Cincinnati, PhD student Ugochukwu Philip Ochieze, ugochuop@mail.uc.edu, Msc student, Materials and Metallurgical Engineering, University of Cincinnati |
|
MASK-ViT: Mask-Free Nanoscale Feature Detection ID: H-009 |
MASK-ViT leverages self-supervised Vision Transformer embeddings and saliency-driven patch pooling to automatically detect and characterize nanoscale features in microscopy images without pixel-level annotations. The approach enables scalable, annotation-efficient materials characterization through embedding-based analysis and unsupervised discovery. |
Code Video Doc |
Gowtham Nimmal Haribabu, g.nimmalharibabu@tudelft.nl, TUDelft, PostDoc Gaaurav Lath, g.m.k.lath@student.tue.nl, TU/e, Student |
|
Smartscan ID: H-010 |
SmartScan is an AI-powered adaptive AFM scanning system that uses machine learning to dynamically optimize scan parameters (speed, resolution, force) in real-time, achieving 37% faster scanning while simultaneously improving image quality by balancing thermal drift against tracking error. The system learns from physics-based simulations on real AFM data to discover that faster scanning (10-12 µm/s) actually produces better results than traditional conservative speeds (5 µm/s), a counterintuitive finding that challenges conventional AFM operation practices. |
Code Video Doc |
Abdulrhman Mohamed, 60311969@udst.edu.qa, College of computing and IT, Data anlayst and Computer Vision Engineer, 1st year BSc in Information technology Ahmed Mahmoud, 60310418@udst.edu.qa, College of computing and IT, AI engineer, 2nd year BSc in AI and Data Science Syed Ahmed Khaderi, 60317695@udst.edu.qa, College of computing and IT, ML and Data science engineer , 1st year BSc in Software Engineering |
|
Digital Bubble Stability tracking microscope ID: H-011 |
Digital foam simulator: Random Forest Regressor (Aiscia Platform) predicts bubble metrics from lab inputs, then nearest-neighbor mapping returns realistic microscope frames/animations; no live microscope needed. |
Code Video Doc |
Aziza Mohammed, aziza@aiscia.com, Aiscia, Software Engineer Ahyen Mostofa, ahyenmostofa@aiscia.com, Aiscia, Operations Manager Dr. Usman Chaudhry, chusman@hbku.edu.qa, HBKU, Associate Researcher Yasmin Abdelkarim, Yassmin@aiscia.com, Aiscia, Chemical Engineer Souhil Sid, souhilsid@aiscia.com, Aiscia, R&D Engineer Rehaan Hussain, rehaan@aiscia.com, Aiscia, AI and ML Specialist Dr. Fadwa El Mellouhi, felmellouhi@aiscia.com, Aiscia, CEO of Aiscia and Professor at HBKU Dr. Harris Rabbani, hrabbani@hbku.edu.qa, HBKU, Professor |
|
The Gold Standard for low-dose STEM ID: H-012 |
This project presents a deep-learning workflow for ultra-low-dose STEM imaging, where low dwell times introduce severe scan artifacts that limit image quality. By combining Fourier-domain pre-processing with a U-Net trained on real paired data and a custom loss that preserves high-frequency detail, the method aims to remove these artifacts while maintaining resolution at low doses. |
Code Video Doc |
Jay te Beest (j.t.te.beest@liacs.leidenuniv.nl) Leiden University PhD Avital Wagner (avital.wagner@radboudumc.nl) Radboud University Nijmegen Postdoc Willem de Kleijne (w.p.m.dekleijne@tudelft.nl) TU Delft PhD Akshaya Kumar Jaishankar (akshaya.kumarjaishankar@ru.nl) Radboud University Nijmegen MSc |
|
AIScientist4Microscopy ID: H-013 |
AIScientist4Microscopy is a proof-of-concept 'AI Scientist' workflow tailored to microscopy. The goal is to demonstrate an end-to-end pipeline that can (1) generate credible, research-grade hypotheses in AI-for-microscopy, (2) translate each hypothesis into a runnable experimental plan with code, and (3) produce research artifacts (figures, metrics, and paper-style writeups) in a way that is easy to demo and extend. |
Code Video Doc |
Adib Bazgir, abwbw@missouri.edu, Department of Mechanical and Aerospace Engineering, University of Missouri-Columbia, PhD Student Mahule Roy, roymahule26@gmail.com, Institute of Biomedical Engineering, University of Oxford, MSc Student Yuwen Zhang, zhangyu@missouri.edu, Department of Mechanical and Aerospace Engineering, University of Missouri-Columbia, Professor |
|
VLRIMM - Vision-Language Retrieval for Identical Materials Morphology ID: H-014 |
VLRIMM is a multi-modal RAG pipeline designed to bridge the gap between raw visual data and textual knowledge, transforming a single micrograph into an interactive dialogue with current global scientific knowledge by aligning visual morphology with textual research. Pairing state-of-the-art foundation models (Meta's DINOv3 and OpenAI's text embeddings) with RAG, VLRIMM is designed for the future of Self-Driving Labs. It provides an autonomous link between observed microstructures and peer-reviewed information such as synthesis methods, eliminating the knowledge lag and manual research bottlenecks. |
Code Video Doc |
Kevin Zhang, kevinzy.zhang@mail.utoronto.ca, University of Toronto Department of Material Science & Engineering, MASc student Sartaaj Khan, sartaaj.khan@mail.utoronto.ca, University of Toronto Department of Chemical Engineering & Applied Chemistry, PhD student Mohammad Taha, mohammad.taha@mail.utoronto.ca, University of Toronto Department of Materials Science & Engineering, MASc student Thomas Pruyn, tom.pruyn@mail.utoronto.ca, University of Toronto Department of Chemical Engineering & Applied Chemistry, PhD student |
|
Unsupervised Microstructure Image Analysis and Question Answering Using CLIP and VLMs ID: H-015 |
We explore automated understanding of materials microscopy images using multimodal AI models. By combining scalable zero-shot classification with CLIP and reasoning-based analysis using vision language models, our approach identifies imaging techniques and material categories from real-world microscopy data, enabling efficient and interpretable microscopy analysis at scale. |
Code Video Doc |
Subham Ghosh, subham_g1@mfs.iitr.ac.in, IIT Roorkee, PhD student Shubham Tiwari, shubham_t@mt.iitr.ac.in, IIT Roorkee, PhD student Ayush Kumar Pandey, ayush_p@ph.iitr.ac.in, IIT Roorkee, PhD student Pranjul Chandra Bhatt, pranjul_b@mt.iitr.ac.in, IIT Roorkee, PhD student |
|
MCP Servers for Theory - Experimental Matching ID: H-016 |
We used the existing FerroSIM simulation and developed an MCP server to control FerroSIM through agents, then combined this with the digital twin AFM investigating a ferroelectric sample to perform rudimentary theory-experiment matching through digital-digital agentic AI workflows. |
Code Video Doc |
Vani Nigam, vnigam@andrew.cmu.edu, Carnegie Mellon University, MS Student Achuth Chandrashekhar,achuthc@andrew.cmu.edu, Carnegie Mellon University, PhD Student |
|
MARVL: Materials-Science Aware Reasoning Dataset for Vision-Language Models ID: H-017 |
We introduce MARVL, a comprehensive, materials-science-aware reasoning dataset designed to train and benchmark vision-language models on microscopic imaging tasks across AFM, SEM, and TEM modalities. Using automated literature mining and curated experimental data, MARVL captures real imaging conditions, tip- or instrument-induced artifacts, and scientifically grounded explanations. This dataset enables next-generation VLMs to perform accurate feature interpretation with transparent reasoning, expanding AI capabilities across nanoscale materials characterization. |
Code Video Doc |
Mohd Zaki, mzaki4@jh.edu, Johns Hopkins University, USA, Postdoc Indrajeet Mandal, srz228569@iitd.ac.in, Indian Institute of Technology Delhi, India, PhD student Prince Sharma, psharma47@jh.edu, Johns Hopkins University, USA, Postdoc Megha Mondal, ms1221898@mse.iitd.ac.in, Indian Institute of Technology Delhi, India, student Sudhakar Kumar, msm242549@mse.iitd.ac.in, Indian Institute of Technology Delhi, India, student Vishal Bhaskar, msz238268@mse.iitd.ac.in, Indian Institute of Technology Delhi, India, PhD student Arjun Chand, ms1221886@mse.iitd.ac.in, Indian Institute of Technology Delhi, India, student Shivnandi, msz228065@mse.iitd.ac.in, Indian Institute of Technology Delhi, India, PhD student |
|
µStack: An AI-Powered Platform for Atomic Surface Analysis and Microscopy Simulation ID: H-018 |
µStack is an open-source, AI-driven platform that simplifies atomic surface structure research by combining machine learning potentials, multi-agent workflows, and natural language interfaces into a unified system. The platform allows researchers to generate, relax, and analyze atomic surfaces through conversational queries, automatically producing visualizations, standardized data outputs, and AI-generated summary reports. It supports multiple microscopy simulation modalities (STM, AFM, IETS) and is designed for applications in catalysis, semiconductor research, and 2D materials science. |
Code Video Doc |
Aritra Roy, pgr.aritra.roy@lsbu.ac.uk, London South Bank University, Student Kevin Shen, kevin.shen.3.14159@gmail.com, NobleAI, staff Ben Blaiszik, blaiszik@uchicago.edu, UChicago/Argonne, staff Piyush Ranjan Maharana, piyushmaharana15@gmail.com, CSIR, student |
|
Latent-Constrained Autoencoder for 4D-STEM Clustering ID: H-019 |
This project applies unsupervised deep learning to identify crystalline and amorphous domains in 4D-STEM diffraction data from metallographic samples. By training a rotation-invariant autoencoder to compress diffraction patterns into low-dimensional latent representations and clustering them with HDBSCAN, the method robustly detects grain boundaries while reducing spurious cluster fragmentation. Incorporating physically motivated rotational invariance significantly improves boundary localization and accurately captures the finite width of grain boundaries in experimental data. |
Code Video Doc |
Martin Eriksen, eriksen@pic.es, IFAE-PIC, Senior Research Scientist (fixed-term appointment) JoaquĂn OtĂłn, joton@cells.es, CELLS-ALBA, Staff researcher Mauricio Matta, mauricio.matta@icn2.cat, ICN2, PhD student |
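The H-019 entry pairs a rotation-invariant encoder with HDBSCAN. The sketch below substitutes a simple rotation-invariant descriptor (the azimuthally averaged radial profile) plus PCA for the team's autoencoder, purely to illustrate the clustering stage; the data shapes and parameters are assumptions, and the HDBSCAN import assumes scikit-learn >= 1.3.

```python
# Sketch of a rotation-invariant embedding + HDBSCAN clustering in the spirit of
# H-019. The azimuthal average + PCA below is a simple stand-in for the team's
# rotation-invariant autoencoder. Requires scikit-learn >= 1.3 for HDBSCAN.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import HDBSCAN

def radial_profile(pattern):
    """Azimuthally averaged intensity: invariant to in-plane rotation."""
    h, w = pattern.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2).astype(int)
    total = np.bincount(r.ravel(), weights=pattern.ravel())
    counts = np.bincount(r.ravel())
    return total / np.maximum(counts, 1)

# Stand-in 4D-STEM data: (n_scan_positions, ny, nx) diffraction patterns.
patterns = np.random.rand(500, 64, 64)
profiles = np.stack([radial_profile(p) for p in patterns])
latent = PCA(n_components=8).fit_transform(profiles)      # low-dimensional representation
labels = HDBSCAN(min_cluster_size=20).fit_predict(latent)  # -1 marks noise/boundary points
print(np.unique(labels, return_counts=True))
```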
|
HAADF-Segment: Bridging Supervised Segmentation and Unsupervised Anomaly Detection for Automated Defect Analysis in 2D Transition Metal Dichalcogenides ID: H-020 |
This project establishes an automated deep learning pipeline for HAADF-STEM analysis, utilizing a custom U-Net architecture to perform precise multi-class segmentation of atomic species in 2D transition metal dichalcogenides. By coupling this supervised model with unsupervised manifold learning techniques, the framework enables the robust identification and quantification of point defects and structural instabilities without relying on exhaustive manual labeling. |
Code Video Doc |
Cobi Allen, cja69@cam.ac.uk, University of Cambridge, PhD Student Mostafa A. Mostafa, mostafa.ahmed1292002@gmail.com, University of Girona, Master's Student Nannan Zhang nz257@exeter.ac.uk, University of Exeter, PhD Student Navid Haghmoradi, navid.haghmoradi@kit.edu, Karlsruhe Institute of Technology, Postdoctoral Researcher |
|
4Denoising ID: H-021 |
We adapted a state-of-the-art unsupervised deep-learning model (UDVD) to work on 4D-STEM datacubes. To train the model we created semi-synthetic low-dose data by modifying open-source datacubes. Our approach is generalizable to other types of high-dimensionality data streams, and after training it can potentially be deployed in live applications. |
Code Video Doc |
Leonardo Cancellara, leonardo.cancellara@mpikg.mpg.de, Max Planck Institute of Colloids and Interfaces, Postdoc William Talbott, william.talbott@manchester.ac.uk, University of Manchester, PhD student Ankit Shrivastava, shrivastavaa@ornl.gov, Oak Ridge National Laboratory, Postdoc Jordi Weingard, jordi.weingard@postgrad.manchester.ac.uk, University of Manchester, Postgraduate Dingqiao Ji, Dingqiao.Ji@mpikg.mpg.de, Max Planck Institute of Colloids and Interfaces, PhD student Ian MacLaren, Ian.MacLaren@glasgow.ac.uk, School of Physics and Astronomy, University of Glasgow, Associate Professor |
|
DeepScan Pro: Intelligent Microscopy Agent ID: H-022 |
DeepScan Pro is an interactive software pipeline that transforms passive scanning electron microscopes (SEM/STEM) into intelligent discovery agents. Addressing the inefficiency of blind raster scanning, it utilizes a Multi-Stage Active Learning architecture. |
Code Video Doc |
Md Habibur Rahman, rahma103@purdue.edu, Purdue University, Student |
|
Anomaly Detection and Clustering of Atomic-Resolution STEM Images Using Semantic Segmentation ID: H-023 |
This project develops a machine-learning framework for automated defect detection, localisation, and clustering in atomic-resolution HAADF-STEM images. It combines supervised image classification, weakly supervised U-Net segmentation, and unsupervised clustering to identify multiple, coexisting defect types without pixel-level annotations. Demonstrated on CdTe and SrTiO₃ datasets, the framework achieves high classification accuracy and produces quantitative, material-independent heatmaps, providing a scalable and reproducible workflow for high-throughput, data-driven electron microscopy analysis. |
Code Video Doc |
Claudia Sosa Espada, claudia.sosaespada@ucdconnect, University College Dublin, Postgraduate student Surabhi Gunjur Sathish, ss5216@bath.ac.uk , University of Bath, Undergraduate Student Thomas Karagiannis, thomas.karagiannis@ucdconnect.ie , University College Dublin, PhD student |
|
U-Net-based pipeline for high-throughput quantification of crack growth during in situ TEM tensile testing ID: H-024 |
This project analyzes crack growth and propagation in materials using computer vision and deep learning techniques applied to microscopy images. The system tracks crack tip positions, measures crack geometry parameters (length, area, width, CTOD), and monitors temporal evolution across multiple experimental datasets captured at different magnifications. |
Code Video Doc |
Vivek Devulapalli, Laboratory for Mechanics of Materials & Nanostructures, Empa Thun, Switzerland, Postdoc |
|
Integrating AFM and STEM Digital Twins with LLMs for Automated Data Interpretation ID: H-025 |
This project introduces Microscopy-LLM, a framework that bridges Claude AI with AFM and STEM digital twins via specialized MCP servers. It enables researchers to perform automated image segmentation and data interpretation through natural language, transforming complex binary datasets into an intuitive, conversational workflow. This 'AI co-pilot' effectively lowers the technical barrier to advanced materials characterization and provides a scalable blueprint for the future of autonomous laboratories. |
Code Video Doc |
Josep Cruañes Giner, josep.cruanes@icn2.cat, ICN2, PhD Student Fanzhi Su, fs521@cam.ac.uk, University of Cambridge, PhD student |
|
Graph-O-Foam: ActiveScan Copilot ID: H-026 |
The Graph-O-Foam project forecasts foam stability by converting lab data into synthetic microscopy frames and extracting bubble metrics with computer vision. Energy-based models (EBM) showed top accuracy, physics-informed transformers balanced realism and predictions, and a Streamlit dashboard enables real-time visualization and analysis. |
Code Video Doc |
Vedashree Yemula, vedashreeyemula07@gmail.com, DY Patil University, student Mohamed Bouchekouf, mohamed0bouchekouf@gmail.com, undergrad, freelancer Param Trimbake, param.trimbake@gmail.com, Oryx International School, student Mohammed Tanvir, 2kemo2sabe@gmail.com, MoF, System Engineer Muntasir Mahmud, muntasirsiltan123@gmail.com, University of Doha for Science and Technology, student |
|
Phase Mapping Ni-based alloys with EBSD Using Deep Learning ID: H-027 |
We applied convolutional neural networks to classify EBSD patterns into different Ni-based alloys. Both contrastive and cross-entropy learning were tested, and their accuracies were compared. |
Code Video Doc |
Alfred Yan, alfredyan2027@u.northwestern.edu, Northwestern University, PhD student Gabriel Trindade dos Santos, gabriel.tsantos@northwestern.edu, Northwestern University, postdoc Ramandeep Mandia, ramandeep.mandia@northwestern.edu, Northwestern University, postdoc Roberto dos Reis, roberto.reis@northwestern.edu, Northwestern University, research assistant professor Vinayak Dravid, v-dravid@northwestern.edu, Northwestern University, PI |
|
Proof of Concept for Automated STEM Tilting Using Image Analysis for Quantifoil Grids ID: H-028 |
Quantifoil grids with circular holes provide an interesting test case for automated tilting in STEM - the holes of the grid will appear elliptical when mistilted, and perfectly circular when perpendicular to the beam direction. The goal of this project was to develop an algorithm to simulate the automated tilting of a Quantifoil grid by examining a simulated micrograph, quantifying the circularity of the hole, and adjusting the tilt to optimize for maximum circularity. |
Code Video Doc |
Elizabeth Heon, eheon@vols.utk.edu, University of Tennessee, PhD Sai Venkata Gayathri Ayyagari, sfa5683@psu.edu, Pennsylvania State University, PhD student Sita Sirisha Madugula, madugulas@ornl.gov, Oak Ridge National Laboratory, Postdoc Darel Pates, Dpates@vols.utk.edu, PhD, University of Tennessee Knoxville Lynnicia Massenburg, massenburgln@ornl.gov, Oak Ridge National Laboratory, Postdoc Andrew Balog, arb6059@psu.edu, Pennsylvania State University, PhD student |
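A minimal sketch of the circularity-based tilt optimization described in H-028: a toy "microscope" renders a hole foreshortened by the residual mistilt, circularity is measured as 4πA/P², and a grid search over tilt picks the most circular view. The simulated-hole function, angles, and radii are hypothetical stand-ins for the project's simulated micrographs.

```python
# Minimal sketch of the H-028 idea: quantify how circular a Quantifoil hole
# appears and pick the tilt that maximizes circularity. The "microscope" here
# is a toy function that draws an ellipse foreshortened by the mistilt angle.
import numpy as np
from skimage.draw import ellipse
from skimage.measure import label, regionprops

def simulated_hole(tilt_deg, true_tilt_deg=12.0, size=256, radius=60):
    """Binary image of a circular hole foreshortened by the residual mistilt."""
    img = np.zeros((size, size), dtype=bool)
    residual = np.deg2rad(tilt_deg - true_tilt_deg)
    rr, cc = ellipse(size // 2, size // 2, radius * np.cos(residual), radius)
    img[rr, cc] = True
    return img

def circularity(mask):
    """4*pi*Area / Perimeter^2: equals 1 for a perfect circle."""
    props = regionprops(label(mask))[0]
    return 4 * np.pi * props.area / props.perimeter ** 2

tilts = np.arange(0, 30, 0.5)
scores = [circularity(simulated_hole(t)) for t in tilts]
print("best tilt estimate:", tilts[int(np.argmax(scores))], "deg")
```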
|
CAVA: A Causal Analysis and Validation Agent for Microscopy-Informed Materials Discovery ID: H-029 |
CAVA is an agentic causal reasoning framework designed to support microscopy-informed materials discovery by integrating data-driven causal analysis with literature-based validation. The system first identifies robust causal relationships in microscopy-derived data using established causal discovery algorithms and regime-aware consensus, then employs a large language model agent to contextualize and validate these relationships against prior scientific studies. By combining automated causal discovery with agent-based literature reasoning, CAVA provides a transparent and interpretable foundation for causal understanding and decision-making in experimental materials science. |
Code Video |
Kamyar Barakati, Elyar Tourani, Vivek Chawla, Mohammad Amin Moradi |
|
eJect ID: H-030 |
We developed a digital twin STEM framework to simulate atom manipulation experiments, then validated the approach on a WS₂ monolayer using a ThermoFisher Spectra 300. Survival analysis of atoms under point and blast irradiation revealed both ballistic and non-ballistic damage contributions, highlighting differences between simulated and experimental behavior. |
Code Video Doc |
Austin C. Houston, ahoust17@vols.utk.edu, University of Tennessee - Knoxville, PhD Student Dominick L. Pelaia, domptech2@gmail.com, L&N STEM Academy, High School Student Levi M. Dunn, whittlegears@gmail.com, L&N STEM Academy, High School Student |
|
RHEED universal translator ID: H-031 |
RHEED Universal Translator is a bidirectional deep-learning tool that links in-situ RHEED diffraction patterns with ex-situ XPS stoichiometry for SrTiO₃ thin films. It learns a shared latent representation to predict stoichiometry from a RHEED image or generate a RHEED-like pattern from a target stoichiometry, and it's packaged in a Streamlit no-code app with pretrained weights for easy demos and extension. |
Code Video Doc |
Asraful Haque, Jawad Chowdhury, Sumner Harris |
|
Bio-STEMGPT: An AI Assistant for TEM Data Analysis ID: H-032 |
We present Bio-STEMGPT, a chatbot that integrates segmentation tools from a user's chosen segmentation model to help users identify features in TEM images and explore the resulting segmented data. We envision this as a tool for scientists integrating TEM data into their projects who may be unfamiliar with analyzing this type of data. |
Code Video Doc |
Bridget Vincent, bridgetvincent@ucsb.edu, University of California Santa Barbara, student Arielle Rothman, ariellerothman537@gmail.com, University of Toronto, Student Saurin Savla, srs7054@psu.edu, The Pennsylvania State University, Student |
|
Identifying the Spatial Domains of Sm-doped BiFeO3 using STEM and Machine Learning ID: H-033 |
Spatial domains of Sm-doped BiFeO3 are investigated for various Sm concentrations and window sizes using STEM, principal component analysis, k-means clustering, and supervised machine learning. The results suggest that the ideal window size is 40, with the domains being well distinguished for both k-means clustering and supervised machine learning using random forests. These methods have the potential to create a more efficient and automated way to determine the best window size for distinguishing between spatial domains of ferroelectric materials. |
Code Video Doc |
Fatima Anwar, anwarfm@mail.uc.edu, University of Cincinnati, Graduate Student Hayden Dennison, dennisty@mail.uc.edu, University of Cincinnati, Graduate Student Julie Schlanz, schlanje@mail.uc.edu, University of Cincinnati, Graduate Student Emily Stump, stumpjb@mail.uc.edu, University of Cincinnati, Graduate Student |
|
SPM-(S)TEM correlative studies for nanoscale characterisation and data curation for implied transfer learning. ID: H-034 |
In electron microscopy and scanning probe microscopy, the scarcity of curated training data and the low throughput of atomic-resolution techniques often limit the adoption of supervised learning methods. Although automation is improving through manufacturer-provided software interfaces, e.g. 'Autoscript' from Thermo Fisher Scientific in electron microscopy, and digital twin software, human oversight remains essential during operation. One approach that could enable the combinatorial generation of sufficient labelled data is transfer learning. Here we present a framework for correlative SPM-(S)TEM studies based on physics-informed digital twins. |
Code Video Doc |
Timothy Lambden, tpgl3@cam.ac.uk, University of Cambridge, PhD Student |
|
Angle-Dependent Morphologies of Ferroelectric Domain Walls ID: H-035 |
The morphologies of ferroelectric domain walls with different characteristic angles were analyzed by neural networks and computer vision algorithms. Neural networks could differentiate between high-angle (>130 degree) and lower-angle (<130 degree) domain walls from their morphology. Further, higher-angle domain walls were found to prefer a curved morphology, while lower-angle domain walls preferred a straight morphology, in line with expectations for ferroelectric-ferroelastic domain walls. |
Code Video Doc |
Laurel Washburn (University of Tennessee, Chattanooga, Bachelor Graduate) Ehtesham Anwar (CSIR National Chemical Laboratory, PhD Student) Sho Watanabe (University of Tokyo, Masters Student) Aidan Cotton (North Carolina State University, PhD Student Kaurab Gautam (University of Cincinnati, PhD Student) Nabin Khanal (University of Cincinnati, PhD Student) Ralph Bulanadi (Oak Ridge National Laboratory, Postdoctoral Researcher), bulanadira@ornl.gov |
|
Moiré Fringes in High-Resolution Transmission Electron Microscopy ID: H-036 |
Our project involved separating HRTEM moiré fringes into their two lattice layers. This was done using a machine learning model that predicted what the two layers should look like based on the moiré fringe. |
Code Video Doc |
Yuval Noiman, noimanyl@mail.uc.edu, University of Cincinnati College of Engineering and Applied Science, MS student Hank Perry, perryhy@mail.uc.edu, University of Cincinnati College of Engineering and Applied Science, ME student Jackie Cheng, chengjz@mail.uc.edu, University of Cincinnati College of Engineering and Applied Science, ME student |
|
Automated Detection of Ice Artifacts in Cryo-EM Using Machine Learning ID: H-037 |
Cryogenic electron microscopy (Cryo-EM) enables structural analysis of biological macromolecules by reconstructing three-dimensional structures from two-dimensional images of rapidly frozen samples. However, Cryo-EM micrographs suffer from low signal-to-noise ratio, poor contrast, and contamination from ice artifacts, making it difficult to distinguish true biological signal from noise. To address this challenge, we propose an automated ice artifact detection approach using a deep convolutional neural network. |
Code Video Doc |
Tharushi Rajaguru, tharushirajaguru@gmail.com, University of Cincinnati, Graduate Student Shehani Kahawatte, kahawads@mail.uc.edu, University of Cincinnati, Graduate Student Elif Dursun, sahineb@mail.uc.edu, University of Cincinnati, PhD student Diana Oliveira, diana.oliveira@inl.int, International Iberian Nanotechnology Laboratory, Research Assistant |
|
Automated Region-of-Interest Detection in Microscopy Images Using VGG-16 and Patch-Based Classification ID: H-038 |
This project presents a patch-based deep-learning framework for automated region-of-interest detection in atomic-resolution STEM images. Using a VGG-16-based model, the approach combines supervised defect classification with unsupervised clustering to identify defect-bearing regions, extract their spatial coordinates, and enable scalable, reproducible, and automated microscopy analysis across different material systems. |
Code Video Doc |
R. A. W. Ayyubi, rayyub2@uic.edu, University of Illinois Chicago, Student Shoaib Masood, smasoo20@uic.edu, University of Illinois Chicago, Student Fatemeh Karimi, fkarim4@uic.edu, University of Illinois Chicago, Student Zahira El Khalidi, zahira@uic.edu, University of Illinois Chicago, Visiting research assistant professor Jaeyeon Jo, jaejo@uic.edu, University of Illinois Chicago, Postdoc |
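To illustrate the patch-level classification stage named in H-038, the following sketch wraps a frozen torchvision VGG-16 backbone with a small two-class head applied to 224x224 patches. The head size, the class definitions, and the decision to freeze the backbone are assumptions made for illustration, not details taken from the submission.

```python
# Sketch of a patch-based VGG-16 classifier in the spirit of H-038: a frozen
# VGG-16 backbone with a small binary "defect / no defect" head applied to
# image patches. Patch size, head, and labels are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg16

backbone = vgg16(weights=None)          # load VGG16_Weights.DEFAULT in practice
backbone.classifier = nn.Identity()     # keep the flattened 25088-dim conv features
for p in backbone.parameters():
    p.requires_grad = False             # freeze the backbone, train only the head

head = nn.Sequential(nn.Linear(25088, 256), nn.ReLU(), nn.Linear(256, 2))

def predict_patch(patch_rgb):
    """patch_rgb: float tensor of shape (3, 224, 224), normalized like ImageNet."""
    feats = backbone(patch_rgb.unsqueeze(0))
    return head(feats).softmax(dim=-1)  # [p(no defect), p(defect)]

print(predict_patch(torch.rand(3, 224, 224)))
```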
|
Automated Defect Detection in STEM Images Using Machine Learning ID: H-039 |
We developed an ensemble deep learning framework for automated defect detection in high-resolution scanning transmission electron microscopy (STEM) images of CdTe and SrTiO₃ (STO). The approach performs patch-level classification using custom convolutional neural networks (CNNs) trained on frequency-enhanced image representations to identify crystalline and defect regions. |
Code Video Doc |
Noah Holt, noah.holt@okstate.edu, Oklahoma State University, student Joshua Marvin, joshua.marvin@okstate.edu, Oklahoma State University, Student Ramji Subedi, ramji.subedi@okstate.edu, Oklahoma State University, Student Kamal Khanal, kamal.khanal@okstate.edu, Oklahoma State University, Student Mohd Tauhid Khan, mohd_tauhid.khan@okstate.edu, Oklahoma State University, Student |
|
Machine learning-enabled SEM-driven surface morphometrics ID: H-040 |
A classical computer-vision approach for quantitative SEM analysis is first applied, defining microcracks as a feature class and estimating crack count, length distribution, and area within the field of view. To streamline analysis, the framework was extended with neural-network segmentation, improving generalization, reducing preprocessing demands, and enabling additional pixel-level crack detection alongside the classical approach. For this purpose, plasma-treated, cobalt-sputtered samples were imaged at multiple fields of view to demonstrate the implementation. |
Code Video Doc |
Vineet Kumar, vineet05k@gmail.com, Dept. of Surface and Plasma Science, Faculty of Mathematics and Physics, Charles University in Prague, Czech Republic, PhD student Yogesh Paul, yogeshpaul@ymail.com, Institute for Neuromodulation and Neurotechnology, University Hospital and University of Tuebingen, Tuebingen, Germany, PhD student Himanshu Mishra PhD., hmishra022@gmail.com, Dept. of Surface and Plasma Science, Faculty of Mathematics and Physics, Charles University in Prague, Czech Republic, Researcher Saurabh Sudan, sorav.sudan@gmail.com, Researcher |
|
Machine Learning Denoising of Reciprocal Space Maps for realistic center-of-mass evaluation ID: H-041 |
This project utilizes the self-supervised Noise2Self method to denoise reciprocal space maps from nano-XRD experiments. It enables rapid denoising and more accurate center-of-mass calculations without requiring clean reference data. The approach effectively reduces noise while preserving subtle structural features. Further optimization is needed for very weak diffraction signals, such as those from quantum wells. |
Code Video Doc |
Leonardo Oliveira, leonardo.oliveira@fysik.lu.se, Lund University and MAX IV, Sweden, postdoc Soroush Motahari, s.motahari@mpi-susmat.de, MPI SusMat, Germany, PhD Candidate Navyanth Kusampudi, n.kusampudi@mpi-susmat.de, MPI SusMat, Germany, postdoc Kartik Umate, k.umate@mpi-susmat.de, MPI SusMat, Germany, PhD Candidate Meizhong Lyu, lvlvmei@umich.edu, University of Michigan, USA, Postdoc |
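For readers unfamiliar with Noise2Self, which H-041 builds on, here is a minimal sketch of the masked self-supervised training step: a random subset of pixels is hidden from the network's input and the loss is evaluated only on those pixels, so no clean reference maps are required. The toy convolutional model, mask fraction, and pixel-replacement rule are illustrative choices, not the project's implementation.

```python
# Minimal sketch of a Noise2Self-style masked training step as used in H-041:
# hide a random subset of pixels from the network's input and compute the loss
# only on those pixels, so no clean reference data is needed.
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy denoiser; a real one could be a U-Net
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def noise2self_step(noisy, mask_frac=0.03):
    """One self-supervised step on a batch of noisy maps (B, 1, H, W)."""
    mask = (torch.rand_like(noisy) < mask_frac).float()
    # Replace masked pixels with neighboring "blind" values (here: a shifted copy).
    blind_input = noisy * (1 - mask) + torch.roll(noisy, shifts=1, dims=-1) * mask
    pred = model(blind_input)
    loss = ((pred - noisy) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

noisy_maps = torch.rand(4, 1, 64, 64)       # stand-in for reciprocal space maps
print(noise2self_step(noisy_maps))
```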
|
EM-Caddie ID: H-042 |
EM-Caddie is a web-based, no-coding-required platform that enables microscopy researchers to analyze and process images using ML tools. Users can describe desired operations in plain English, and the system applies pre-trained models for tasks like super-resolution, segmentation, FFT analysis, and line profile extraction, all through an interactive interface. The platform streamlines workflows, reduces reliance on specialized software or scripts, and supports extension with additional models and tools. |
Code Video Doc |
Jessica Gerac, NC State University, Graduate Student Kohen Goble, NC State University, Graduate Student Kyle Hollars, NC State University, Undergraduate Student |
|
Automated particle detection and quantitative analysis from electron microscopy images ID: H-043 |
We developed NanORange, an AI-assisted workflow that automatically detects nanoparticles in noisy cryo-TEM images by combining denoising/contrast enhancement, adaptive thresholding, and instance-aware boundary separation to prevent merged detections in crowded regions. The segmented particles are then fit with circles/ellipses to extract quantitative metrics and generate particle-size distributions for datasets. This pipeline improves consistency across variable-contrast micrographs, and outputs analysis-ready tables for rapid, reproducible vesicle statistics. |
Code Video Doc |
Seyed Aref Golsorkhi, Golsorsf@mail.uc.edu, PhD candidate, Graduate assistant at Advanced Materials Characterization Center Mohammad Javad Raei, raei.mohammadjavad@gmail.com, Software Developer Frances Joan Alvarez, alvarez.fd@pg.com, P&G, TEM expert Marc Mamak,mamak.m@pg.com , P&G, R&D Director/Principal Scientist |
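The NanORange description in H-043 maps onto a classical scikit-image pipeline; the sketch below shows one hedged version of it: Gaussian denoising, local adaptive thresholding, watershed splitting of touching particles via a distance transform, and ellipse-fit statistics from regionprops. All filter sizes and thresholds are placeholders rather than the team's tuned values.

```python
# Sketch of a NanORange-style pipeline (H-043): denoise, threshold, split
# touching particles with a watershed, then report ellipse-fit size statistics.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, segmentation, feature

def particle_stats(img):
    smooth = filters.gaussian(img, sigma=2)                     # denoise / enhance contrast
    binary = smooth > filters.threshold_local(smooth, block_size=51)
    distance = ndi.distance_transform_edt(binary)
    peaks = feature.peak_local_max(distance, min_distance=10, labels=measure.label(binary))
    markers = np.zeros_like(binary, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = segmentation.watershed(-distance, markers, mask=binary)  # split merged particles
    props = measure.regionprops(labels)
    return [(p.major_axis_length, p.minor_axis_length, p.area) for p in props]

img = np.random.rand(512, 512)          # stand-in for a cryo-TEM micrograph
print(len(particle_stats(img)), "particles detected")
```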
|
3-D Membrane Reconstruction from a Single 2-D FIB-SEM Micrograph ID: H-044 |
This project introduces an open-source computational framework that statistically reconstructs 3D membrane microstructures from a single 2D FIB-SEM slice using Gaussian Random Fields. This tool enables rapid, understandable prediction of critical transport properties like permeability and pore connectivity without the need for expensive 3D imaging. |
Code Video Doc |
Antonio M. Lancuentra, antonio.m.lancuentra@gmail.com, Engineer in career transition to ML/AI Bill Yinqi Wang, billyq.wang@mail.utoronto.ca, University of Toronto, Recent Undergrad Graduate Ahmad Ali, a14ali@torontomu.ca, Toronto Metropolitan University, Undergraduate Student Thomas Zhang, thomasz61821@gmail.com, University of Toronto, Undergraduate Student |
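As a worked illustration of the Gaussian-random-field reconstruction idea in H-044, the sketch below correlates white noise with a Gaussian filter and thresholds the field so that the 3-D volume fraction matches the 2-D phase fraction of the input slice. The correlation length, grid size, and the stand-in input slice are assumptions; the actual project estimates these statistics from the FIB-SEM data.

```python
# Sketch of the H-044 idea: build a statistically similar 3-D microstructure
# from a single 2-D slice by thresholding a Gaussian random field so that the
# 3-D volume fraction matches the 2-D phase fraction.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

def reconstruct_3d(slice_2d, size=64, corr_length=4.0):
    porosity = slice_2d.mean()                           # phase fraction of the 2-D slice
    field = np.random.normal(size=(size, size, size))
    field = gaussian_filter(field, sigma=corr_length)    # impose spatial correlation
    field = (field - field.mean()) / field.std()
    threshold = norm.ppf(1.0 - porosity)                 # keep the top `porosity` fraction
    return field > threshold

binary_slice = np.random.rand(256, 256) > 0.7            # stand-in segmented FIB-SEM slice
volume = reconstruct_3d(binary_slice.astype(float))
print("target porosity:", binary_slice.mean(), "reconstructed:", volume.mean())
```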
|
GridScope: LLM assistant for automated microscopy ID: H-045 |
GridScope is an AI-powered automation platform for Scanning Transmission Electron Microscopy (STEM) that bridges the gap between experimental design and instrument execution. Researchers describe imaging objectives in natural language, such as 'acquire a 5×5 grid at 3 µm spacing' or 'explore tilt angles from 0° to 60°', and receive executable Python scripts validated against a physics-based Digital Twin. |
Code Video Doc |
Shuchi Sanandiya, ss185@illinois.edu, UIUC, PhD Student Alexander Pattison, ajpattison@lbl.gov, Lawrence Berkeley National Laboratory, Postdoc Sanchit Bansal, sanchitbansal2019@gmail.com, UIUC, Alumni |
|
iPotNET ID: H-046 |
This project introduces a physics-informed deep learning framework to solve the inverse scattering problem in Scanning Transmission Electron Microscopy (STEM). By fusing visual detector data with physical metadata (thickness, rotation) into a SwinUNETR backbone, the model reconstructs quantitative atomic electrostatic potentials with high fidelity (>54 dB PSNR). This approach explicitly mitigates non-linear dynamical scattering artifacts, significantly outperforming traditional iDPC methods, which rely on idealized conditions and assumptions that break down for optically thick and tilted crystalline samples. |
Code Video Doc |
Haipei Shao , haipei.shao@gmail.com Department of Chemistry, Faculty of Science, National University of Singapore, 3 Science Drive 3, Singapore 117543, Singapore PhD student Sridurgesh Ravichandran , e1554212@u.nus.edu , National University of Singapore School of Computing ,Master of Computing in Artificial Intelligence Zeyu Wang , zeyu.wang@iit.it , Istituto Italiano di Tecnologia , Postdoc Mattia Lizzano ,mattia.lizzano@iit.it , Istituto Italiano di Tecnologia, PHD Andrea Cicconardi, andrea.cicconardi@iit.it , Istituto Italiano di Tecnologia, PhD student |
|
Nanoscale Structure Disentanglement-KFUPM ID: H-047 |
NanoscaleAnalyzer is a React component that provides a UI for uploading microscopy images and running a simulated ML pipeline to extract nanoscale structural features (grains, domains, defects, and optional STM spectroscopy). It is built with React and uses Tailwind CSS for styling and lucide-react for icons. |
Code Video Doc |
Abbas Adamu Abdullahi (PhD Chemistry KFUPM) Aminu Rabiu Doguwa (PhD Material Sci. KFUPM) Mauliady Satria (PhD Chemistry KFUPM) Dr Abdurahman Aliyu (Robotics, EmbodiedAI, Mechatronic Systems, PDF KFUPM) KSA Abubakar Dahiru Shuaibu (PhD Chemistry KFUPM) Ahmad Abbas Dalhatu (PhD Chemistry KFUPM) Sara Isabel Gracia Uribe (MS Chemistry KFUPM) Dr Mariah Batool (AI-Driven Image & Data Analysis, PDF; University of Connecticut) Greater Hartford |
|
RAG for Microscopy Data Analysis ID: H-048 |
Developed Retrieval Augmented Generation (RAG) for Microscopy data analysis. Here the RAG agent interfaces with process-specific programs and generates code based on user-specified prompts. |
Code Video Doc |
Ganesh Narasimha, ggn@ornl.gov, ORNL, Post-doc Zijie Wu, wuz2@ornl.gov, ORNL, Post-doc |
|
Learning Atomic Defects Without Real Data: YOLO Trained on Fully Synthetic STEM Imagery ID: H-049 |
We present a framework for generating fully synthetic STEM images of atomic lattices containing vacancy, interstitial, and grain boundary defects and using this data to train an object detection model. A YOLO-based detector trained exclusively on synthetic data accurately identifies defects in experimentally acquired STEM images, achieving up to 99.5% accuracy on labeled test data and strong qualitative performance on additional datasets. This approach enables real-time, scalable defect detection while reducing reliance on manually labeled experimental data. |
Code Video Doc |
Victoria Augoustides, victaugo@gmail.com, University of North Carolina at Chapel Hill, PhD Candidate Jed Doman, tjjdoman@gmail.com, Ramona Optics, Industry Mahmoud Hawary, myhawary@ncsu.edu, Nuclear Engineering Department, NC State University, PhD student. Tatiana Proksch, prokschtania@gmail.com, Materials Science and Engineering Department, North Carolina State University, PhD Candidate Jingyun Yang, jingyun.yang@duke.edu, Department of Electrical and Computer Engineering, Duke University, PhD Student |
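To show what "fully synthetic training data" can look like in the H-049 setting, the snippet below renders a toy lattice of Gaussian atoms, removes one site to create a vacancy, and emits the corresponding YOLO-format annotation line. Lattice spacing, atom width, noise model, and box size are illustrative assumptions, not the project's generator.

```python
# Sketch of the synthetic-data idea in H-049: render a simple lattice of
# Gaussian "atoms", delete one site to create a vacancy, and write the
# YOLO-format label (class x_center y_center width height, all normalized).
import numpy as np

def synthetic_lattice(size=256, spacing=16, sigma=2.0, vacancy=(8, 8)):
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for i in range(1, size // spacing):
        for j in range(1, size // spacing):
            if (i, j) == vacancy:
                continue                                    # skip one site -> vacancy defect
            cy, cx = i * spacing, j * spacing
            img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    img += np.random.poisson(5, img.shape) / 50.0           # shot-noise-like background
    cy, cx = vacancy[0] * spacing, vacancy[1] * spacing
    box = spacing / size                                     # normalized box width/height
    label = f"0 {cx / size:.4f} {cy / size:.4f} {box:.4f} {box:.4f}"
    return img, label

image, yolo_label = synthetic_lattice()
print(yolo_label)   # one training annotation; many of these would train the detector
```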
|
Cell Checker ID: H-050 |
My project develops a basic cell checker app that analyzes a user's AFM data and evaluates whether the surface is healthy for culturing cells of the selected type. |
Code Video Doc |
Dale Herzog, Fresno Ideaworks Makerspace, Volunteer-Member |
|
Interpretable Digital Twins for Autonomous STEM Aberration Correction ID: H-051 |
We present a machine-learning-assisted framework for automated aberration correction in STEM, addressing the nonlinear and strongly coupled nature of corrector tuning that limits conventional, operator-dependent workflows. By combining LLM-based log parsing, symbolic regression, a corrector digital twin, and reinforcement learning, the framework learns aberration-response relationships to enable faster, more stable, and reproducible correction. |
Code Video Doc |
Yingheng Tang, ytang4@lbl.gov, Lawrence Berkeley National Laboratory, postdoc Kang'an Wang, KAWang@lbl.gov, UC Berkeley, graduate student Haozhi Sha, hsha@lbl.gov, UCLA, postdoc Juhyeok Lee, jhlee0667@lbl.gov, Lawrence Berkeley National Laboratory, postdoc Peter Ercius, percius@lbl.gov, Lawrence Berkeley National Laboratory, staff scientist |
|
Multi-Pass Preprocessing for Robust Hysteresis Loop Fitting in Ferroelectric Materials ID: H-052 |
Hysteresis loops are experimental signatures that represent the characteristics of ferroelectric materials. Consequently, extracting useful features from hysteresis loops is crucial for describing ferroelectric properties and switching behavior. In this study, we aim to improve hysteresis loop fitting using a 50x50 hysteresis loop dataset on a PbTiO3 thin film acquired by piezoresponse force spectroscopy. |
Code Video Doc |
Mingxin Zhang, z08040992048@gmail.com, MI-6 Ltd, Data Scientist Thanakrit Yoongsomporn, yoongsomporn.t@gmail.com, MI-6 Ltd, Data Science Intern Hardik Tankaria, hardiktankaria1406@gmail.com, MI-6 Ltd, ML Intern Sivakorn Kanharattanachai, sivakorn.jb@gmail.com, MI-6 Ltd, Data Scientist |
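For the loop-fitting step that H-052 aims to make robust, a common baseline is fitting each branch of the piezoresponse loop with a shifted tanh and reading off the coercive voltage and saturation response; the sketch below does this with scipy.optimize.curve_fit on a synthetic branch. The functional form and initial guesses are generic assumptions, and the project's multi-pass preprocessing would normally run before such a fit.

```python
# Sketch of a baseline loop-fitting step in the spirit of H-052: fit one branch
# of a piezoresponse hysteresis loop with a shifted tanh and extract coercive
# voltage and saturation response. The synthetic branch stands in for one pixel
# of the 50x50 PbTiO3 dataset.
import numpy as np
from scipy.optimize import curve_fit

def branch(v, amplitude, v_coercive, width, offset):
    return amplitude * np.tanh((v - v_coercive) / width) + offset

v = np.linspace(-10, 10, 200)
response = branch(v, 1.0, 2.5, 1.5, 0.1) + np.random.normal(0, 0.05, v.size)

p0 = [1.0, 0.0, 1.0, 0.0]                               # rough initial guess
params, cov = curve_fit(branch, v, response, p0=p0)
amp, vc, width, offset = params
print(f"coercive voltage ~ {vc:.2f} V, saturation response ~ {amp:.2f}")
```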
|
Combined Approaches for Drift Correction and Domain Dynamics Analysis in AFM Imaging ID: H-053 |
Atomic Force Microscopy is inherently a time-consuming imaging technique, with data acquisition times on the order of minutes. This makes the instrument sensitive to small positional drifts caused by thermal fluctuations, piezoelectric scanner instabilities, or mechanical perturbations of the AFM head or sample. This project therefore aims to distinguish changes due to instrumental (positional) drift from genuine sample dynamics, as well as to correct for that drift. |
Code Video Doc |
Aadarsh Kumar(aadarsh.kumar@ucdconnect.ie, University College Dublin, PhD Student) Yevhen Brych(yevhen.brych@ucdconnect.ie, University College Dublin, PhD Student) |
|
Score-Based Super-Resolution for Atomic-Scale MoS2 Imaging ID: H-054 |
We propose Score-Based Super-Resolution (SBSR), a conditional diffusion framework for atomic-scale reconstruction from low-dose HAADF-STEM images. By combining physics-informed degradation modeling with score-based generative learning, our approach restores high-fidelity atomic lattice structures while reducing reliance on high electron doses. This enables more robust imaging of beam-sensitive nanomaterials without introducing artificial structures. |
Code Video Doc |
Xinyuan Wang, xinyuan.wang@cwi.nl, Centrum Wiskunde & Informatica (CWI), Student Mingli Huang, mhuang28@sheffield.ac.uk, University of Sheffield, Student Jingyuan Sun, jingyuansun@tudelft.nl, Delft University of Technology, Student Fanzhi (Clark) Su, fs521@cam.ac.uk, The University of Cambridge, Student |
|
Uncertainty-Driven AFM Sampling with LLM Guidance ID: H-055 |
AFM provides amazing detail, but it is incredibly slow. Scanning every single pixel is inefficient, especially when large parts of a sample might be flat or empty. Our motivation was simple: can we scan fewer lines, saving time, but still get a perfect image? |
Code Video Doc |
Shuting Xie( rowan.xie1011@gmail.com, University of Toronto, Master's student) Wenyi Yao(wyao45@uwo.ca, Western University, Master's student) |
|
Defect Classification for 2D-Material STEM Datasets ID: H-056 |
This project presents a physics-guided, machine learning-based framework for automated identification and classification of atomic-scale defects in HAADF-STEM images of Janus MoWSSe. By combining interpretable Z-contrast descriptors with a two-stage convolutional neural network, the approach enables scalable defect localization, classification, and statistical analysis across large experimental datasets. |
Code Video Doc |
Vikas Reddy Paduri Nirmal Singh Prabhat Prajapati Mehran Yasir Vinayak Srivastava Karishma Begum Abinava Yeshwanth K.J |
|
Mapping 2D-flexibility and large conformational transitions from HS-AFM with parsimonious data-driven models ID: H-057 |
The problem we tackle in this hackathon is to develop a lightweight and self-contained ML tool that helps dissect, interpret, and identify both 2D flexibility and large conformational transitions of biomacromolecules. In addition, HS-AFM can require external control and/or deep knowledge of the system, which we attempt to substitute with a conceptual CG model that reflects interfacial dynamics with surfaces of different hydropathy. The difference between our simulations and existing fitting tools is that in our case the effects of the surface are included and the defined observables are reduced to 2D. Hence, tackling the 2D flexibility of domains is possible. |
Code Video Doc |
Assist. Prof. Horacio V. Guzman, Institut de Ciència de Materials de Barcelona, CSIC, 08193 Barcelona, Spain, Professor M.Sc.c. Ian Addison-Smith, Institut de Ciència de Materials de Barcelona, CSIC, 08193 Barcelona, Spain, student M.Sc.c. Celica Krigul, Institut de Ciència de Materials de Barcelona, CSIC, 08193 Barcelona, Spain, student Ph.D.c. Willy Menacho, Institut de Ciència de Materials de Barcelona, CSIC, 08193 Barcelona, Spain, student |
|
VantaScope 5090 Pro ID: H-058 |
Deep learning ensemble with Fuzzy Logic and Explainability Layers that automatically characterizes graphene samples with linguistic and human-understandable outputs. The tool then provides predictive analytics on the material properties based on the structure. |
Code Video Doc |
Haidar Bin Hamid, binhamhr@mail.uc.edu, University of Cincinnati, Student Juhyeon Park, park4jk@mail.uc.edu. PhD in Chemistry. |
|
MCP for simulator ID: H-059 |
MCP for simulator |
Code Video Doc |
Guanlin He, Yongwen Sun |
|
Automated Atom-Resolved Defect and Element Classification in 2D MoWSSe HAADF-STEM Dataset ID: H-060 |
2D material defect classification and identification |
Code Video Doc |
Cheng-Yu Chen, Swarnendu Das, George Hollyer, Pawan Vedanti |
|
IQ-X: Image Quality Assessment via Cross-Modal Validation ID: H-061 |
IQ-X is a reference-based, physics-informed framework for assessing AFM image quality using SEM as a stability baseline. |
Code Video Doc |
Aditya Raghavan (University of Tennessee Knoxville), graduate student; Chiranjib Chakrabarti (University College Dublin), Postdoc |
|
ANCHOR: Registration by Alignment ID: H-062 |
We developed an automated, landmark-based workflow for aligning multi-round microscopy datasets. The method uses cellular landmarks to match features across imaging rounds, fits smooth axial mismatches, and assigns local confidence scores to indicate alignment reliability. This approach enables reproducible integration of multiple imaging rounds while avoiding common failures of commonly used manual or global registration methods. |
Code Video Doc |
Saven Denha, denhas@mcmaster.ca, McMaster University, PhD student Dima Traboulsi, trabould@mcmaster.ca, McMaster University, MSc student Jonah Wilbur, wilbuj1@mcmaster.ca, McMaster University, BSc student |
|
MicroSeg Lab: One-Shot Microscopy Segmentation with LLM-Guided Hybrid Refinement ID: H-063 |
MicroSeg is a one-shot microscopy segmentation pipeline that produces instance and union masks from a single uploaded image. It runs fast classical segmentation first, then uses review-gated SAM refinement and optional LLM planning to improve results only when needed. Reference images/masks and minimal user hints provide a lightweight way to steer segmentation toward the intended phase on hard cases. |
Code Video Doc |
Shakti P. Padhy, Sushant Sinha, Chase Katz |
|
RONIN - Ronchigram based Optical Neural Inference for aberration detectioN ID: H-064 |
Aberration correction is critical for achieving high spatial resolution in scanning transmission electron microscopy (STEM). Conventional correction relies on expert interpretation of ronchigrams, making the process slow and difficult to automate. In this project, we present RONIN, a physics-informed deep learning framework that predicts dominant electron-optical aberrations directly from ronchigram images. Using synthetically generated ronchigrams and a ResNet-based regression model, RONIN demonstrates accurate inference of several low-order aberrations, highlighting its potential for closed-loop and autonomous microscope alignment. |
Code Video Doc |
Sriram Sankar (ASU), Manikandan Sundararaman (ASU), Aditya Raghavan (UTK) |
|
Autonomous Identification of Metal Microstructural Features via Latent Space Mapping-Based Microscope Control ID: H-065 |
We develop an autonomous microscopy framework that uses machine learning to identify metal microstructural features directly during EBSD acquisition, without prior knowledge or human intervention. By encoding Kikuchi diffraction patterns into a latent space and adaptively guiding data acquisition, the approach efficiently resolves multiscale microstructural heterogeneity while minimizing the number of required measurements. This work provides a foundation for autonomous microstructure mapping and data-driven alloy design. |
Code Video Doc |
Pierre BELAMRI-REGENPIED, MinesParis, Ph.D. Student Martin COURTOIS, MinesParis, Ph.D. Student Mathieu CALVAT, University of Illinois at Urbana-Champaign, Postdoctoral researcher Neal BRODNIK, MinesParis, Postdoctoral researcher JC STINVILLE, University of Illinois at Urbana-Champaign, Assistant Professor Henry PROUDHON, MinesParis, Researcher |
|
MicrosCopilot: An Agentic, Physics-Aware AI Framework for Confocal Microscopy ID: H-066 |
The project is a 'Microscopilot' that combines classical image processing, particle tracking, and a configurable digital twin of Brownian motion with large language model agents that guide the user through the full workflow: simulating or loading movies, detecting and tracking particles, extracting physical parameters (such as diffusion coefficients), and explaining the results in plain language for materials and tribology researchers. The system is organized as multiple cooperating agents (data loading, detection/tracking, physics analysis, and explanation) behind a simple UI so that experimentalists can quickly go from raw z-stacks or time-series images to quantitative, reproducible results and narrative reports during the hackathon. |
Code Video Doc |
Abhishek Gupta, a.kumargupta@uva.nl, University of Amsterdam, Phd student Samuel Hee, samuelheeyi0309@gmail.com, NTU Singapore, undergrad student |
|
Generative Topographical Interpretation of AFM Data ID: H-067 |
Nano-Constellations is an automated pipeline that reconstructs raw AFM data into high-fidelity 3D topographical maps of TiO2 surfaces. By implementing asymmetric cropping to eliminate edge-noise and utilizing Delaunay Triangulation for structural reconstruction, we transform raw TiO2 intensity data into a 3D geometric web. The results demonstrate a high-fidelity 3D reconstruction that allows for perspective flipping and real-time visualization of interatomic strain. |
Code Video Doc |
Sinny J. Trivedi (sinny.trivedi1@ucd.ie), Postdoctoral Researcher at University College Dublin, Ireland |
|
TwinSpec: A Digital Twin Framework for GIWAXS Data, Geometry, and Physics-Aware ML ID: H-068 |
TwinSpec is a modular digital twin framework for grazing-incidence wide-angle X-ray scattering (GIWAXS) that integrates a data digital twin, an interactive lab console, and a geometry-driven visual twin. By converting literature-derived GIWAXS measurements into reproducible, instrument-aware representations and coupling them to an interactive control surface, TwinSpec enables exploration of how experimental geometry and processing conditions influence scattering outcomes without requiring synchrotron access. A lightweight, physics-aware ML module demonstrates that the resulting data infrastructure is directly usable for downstream machine learning workflows. https://www.twinspec.org |
Code Video Doc |
Tajah Trapier, tctrapie@ncsu.edu, Materials Science and Engineering Department, North Carolina State University, Student |
Use the resources below to prepare your final hackathon submission.
Each section includes links and instructions to help you navigate the process.
Access the Hackathon Dataset
Find all microscopy datasets used in the Digital Twin Microscope and the Hackathon. This includes raw STEM data, metadata, and other related files.
Open Dataset
Digital Twin Microscope – Demo Notebooks
Explore the Digital Twin Microscope through interactive notebooks. These show how to simulate scans, load data, and work with the digital twin environment.
Open Demo Notebooks
Preparing Your Data For Submission
Use this notebook to properly format your datasets and prepare files for submission. It demonstrates how to create clean, well-structured datasets from raw microscopy data.
Open Preparation Notebook
Need Help?
For more details or assistance with the hackathon datasets, please contact:
Rama Vasudevan: vasudevanrk@ornl.gov
Utkarsh Pratiush: upratius@vols.utk.edu