Live tracking and analysis of the dynamics of chimeric antigen receptor (CAR) T-cells targeting cancer cells can open new avenues for the development of cancer immunotherapy. However, imaging via conventional microscopy approaches can result in cellular damage, and assessments of cell-to-cell interactions are extremely difficult and labor-intensive. When researchers applied deep learning and 3D holographic microscopy to the task, they not only avoided these difficulties but found that the AI outperformed human experts.
The human immune system must learn to respond not just generally to any invader (such as a pathogen or cancer cell) but specifically to that particular type of invader, and to remember it should it attempt to invade again. A critical stage in developing this ability is the formation of a junction between an immune cell called a T-cell and a cell that presents it with the antigen, the part of the invader that is causing the problem. This process is like sending a picture of a suspect to a police car so that the officers can recognize the criminal they are trying to track down. The junction between the two cells, called the immunological synapse, or IS, is the key process in teaching the immune system how to recognize a specific type of invader.
Since the formation of the IS junction is such a critical step for the initiation of an antigen-specific immune response, various techniques allowing researchers to observe the process as it happens have been used to study its dynamics. Most of these live imaging techniques rely on fluorescence microscopy, where genetic tweaking causes part of a protein from a cell to fluoresce, in turn allowing the subject to be tracked via fluorescence rather than via the reflected light used in many conventional microscopy techniques.
However, fluorescence-based imaging can suffer from effects such as photo-bleaching and photo-toxicity, preventing the assessment of dynamic changes in the IS junction process over the long term. Fluorescence-based imaging still involves illumination, whereupon the fluorophores (chemical compounds that cause the fluorescence) emit light of a different color. Photo-bleaching or photo-toxicity occurs when the subject is exposed to too much illumination, resulting in chemical alteration or cellular damage.
One recent option that does away with fluorescent labelling, and thereby avoids such problems, is 3D holographic microscopy, also called holotomography (HT). In this technique, the refractive index (the degree to which light changes direction when it passes into a substance of a different density, which is why a straw looks bent in a glass of water) is recorded in 3D as a hologram.
Until now, HT has been used to study single cells, but never cell-cell interactions involved in immune responses. One of the main reasons is the difficulty of “segmentation,” or distinguishing the different parts of a cell and thus distinguishing between the interacting cells; in other words, deciphering which part belongs to which cell.
Manual segmentation, or marking out the different parts manually, is one option, but it is difficult and time-consuming, especially in three dimensions. To overcome this problem, automatic segmentation has been developed in which simple computer algorithms perform the identification.
“But these basic algorithms often make mistakes,” explained Professor YongKeun Park, “particularly with respect to adjoining segmentation, which of course is exactly what is occurring here in the immune response we’re most interested in.”
So, the researchers applied a deep learning framework to the HT segmentation problem. Deep learning is a type of machine learning in which artificial neural networks, loosely inspired by the human brain, recognize patterns in a way similar to how humans do. Conventional machine learning requires input data that has already been labelled; the AI “learns” from the labeled data and can then recognize the labelled concept when it is fed novel data. For example, an AI trained on a thousand images of cats labelled “cat” should be able to recognize a cat the next time it encounters an image containing one. Deep learning stacks many layers of artificial neurons and can be applied to much larger, even unlabeled, datasets, with the AI developing its own internal ‘labels’ for the concepts it encounters.
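The labeled-data learning described above can be sketched with a single artificial neuron, the smallest building block of the networks in question (the data, the “cat vs. not-cat” framing, and all variable names are illustrative assumptions; real image models use many stacked layers):

```python
import numpy as np

# Toy supervised learning: one logistic neuron learns from labeled
# two-feature examples ("cat" = 1, "not cat" = 0), then classifies
# a novel point it never saw during training.
X = np.array([[2.0, 2.0], [2.5, 1.8], [0.2, 0.1], [0.1, 0.4]])
y = np.array([1.0, 1.0, 0.0, 0.0])  # labels supplied by a human

w = np.zeros(2)
b = 0.0
for _ in range(500):  # gradient descent on the cross-entropy loss
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad = p - y                        # error signal per example
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

novel = np.array([2.2, 2.1])  # unseen example resembling the "cat" class
pred = 1 / (1 + np.exp(-(novel @ w + b)))
print(pred > 0.5)  # the trained neuron recognizes the familiar pattern
```

A deep network repeats this idea across many stacked layers, which is what lets it form its own intermediate representations rather than relying solely on human-supplied labels.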
In essence, the deep learning framework that KAIST researchers developed, called DeepIS, came up with its own concepts by which it distinguishes the different parts of the IS junction process. To validate this method, the research team applied it to the dynamics of a particular IS junction formed between chimeric antigen receptor (CAR) T-cells and target cancer cells. They then compared the results to what they would normally have done: the laborious process of performing the segmentation manually. They found not only that DeepIS was able to define areas within the IS with high accuracy, but that the technique was even able to capture information about the total distribution of proteins within the IS that may not have been easily measured using conventional techniques.
“In addition to allowing us to avoid the drudgery of manual segmentation and the problems of photo-bleaching and photo-toxicity, we found that the AI actually did a better job,” Professor Park added.
The next step will be to combine the technique with methods of measuring how much physical force is applied by different parts of the IS junction, such as holographic optical tweezers or traction force microscopy.
< (Clockwise from top-left) Professor YongKeun Park, Professor Chan Hyuk Kim, Dr. Young-Ho Lee, and PhD Candidate Moosung Lee >
-Profile
Professor YongKeun Park
Department of Physics
Biomedical Optics Laboratory
KAIST