▶ Watch On-Demand | 55 Minutes

Linking Cardiac Nanostructure and Molecular Organization to Cardiac Function

Learn about a novel approach for studying the nano-structural underpinnings of electrical signal propagation
presented by Rengasayee (Sai) Veeraraghavan, Ph.D., and Heather Struckman, BS, MS, of Ohio State University's Nanocardiology Lab (August 25, 2021)

PRESENTATION HIGHLIGHTS:

  • [00:02:49] Introduction: Taking an engineering approach to cardiology
  • [00:09:27] How heart cells communicate
  • [00:12:37] The flaws in existing models & the phenomena they cannot explain
  • [00:16:23] The iCLEM method for understanding protein distribution along with structural context
  • [00:31:24] Quantifying protein organization at the nano-scale with the STORM-RLA method
  • [00:40:41] Probing structure-function relationships
  • [00:44:52] Live audience Q&A

 

Q&A with the Speakers

Can I use iCLEM on any kind of sample?

You can, provided you can find a structural fiducial to exploit. The whole concept of iCLEM relies on exploiting such a fiducial in the sample. What you need is a structure that is well defined and easily identifiable with just basic heavy metal staining on the electron microscope, and that can also be easily immunolabeled for the light microscope. If you can find that, then this entire pipeline can be applied to any sample.

Were the experiments shown here, especially any showing STORM data, performed on cells or on tissue samples?

Except for the confocal images of isolated cells, all of our STORM data is collected in tissue, because intercalated disks are cell-cell junctions; if we separate the cells, that is the first thing to break. So we have to study them in tissue, which presents some interesting challenges because cardiac muscle is one of the most optically dense tissues. This is one of the reasons we spent a lot of time developing our protocols. It is also why we like the wide-field approach to STORM that Vutara has, as it has enabled us to collect these kinds of images in a very optically dense sample.

If you use iCLEM, do you not need to perform CLEM?

At the risk of potentially offending the experts in CLEM, I would say yes, but sort of.

I would not throw away CLEM in the conventional sense because, let's face it, iCLEM does a somewhat different set of things from conventional CLEM. Ideally, you would want access to both and use them to address different kinds of questions. If you want to characterize a system across multiple scales, with a rich quantitative data set coming out of it, you might prefer iCLEM. But if you just want to directly see exactly how structures and proteins are laid out, you need conventional CLEM.

Do you have any plans, goals, or ideas on how to constrain the generation of convex hulls (shown in the STORM data) based on parameters from the EM-derived data?

We've been thinking about that. At this point, we don't have any specific efforts in that direction because, to really effectively do that, we would need to better automate the segmentation and analysis on the EM work. And that's what we're trying to tackle first before we get into that.

We already have the spatial relationships between the mechanical junctions and the gap junctions established within the plicate and interplicate regions. So what we could do, based on the staining, is define the convex hull based on the length of the gap junction, with the gap junction and the mechanical junction linked via the fiducial points.

In other words, the EM-derived parameters would help constrain not only the convex hull fitting but the actual cluster analysis itself. It would also put it relative to the [?], because right now we only have it set at 50 nm in the STORM data, so we can only roughly say, across the whole distribution, where our cut-offs fall based on those relationships. With the EM-derived constraints, we could call that more precisely.
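
As a rough sketch of the idea, the snippet below clusters STORM localizations with a simple distance cutoff and fits a convex hull to each cluster. It uses a generic DBSCAN/SciPy approach rather than the lab's actual STORM-RLA pipeline; the `cluster_hulls` helper, the 50 nm cutoff, and the synthetic data are illustrative assumptions. An EM-derived length could simply be passed in place of the 50 nm value.

```python
# Illustrative sketch only (not the STORM-RLA pipeline): distance-based clustering
# of localizations followed by convex hull fitting, with the cutoff as a parameter
# that an EM-derived length could replace.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import DBSCAN

def cluster_hulls(localizations_nm, cutoff_nm=50.0, min_points=10):
    """Group localizations (N x 2 or N x 3 array, in nm) and fit one convex hull per cluster."""
    labels = DBSCAN(eps=cutoff_nm, min_samples=min_points).fit_predict(localizations_nm)
    hulls = {}
    for label in set(labels) - {-1}:                    # -1 marks unclustered noise points
        points = localizations_nm[labels == label]
        if len(points) > localizations_nm.shape[1]:     # need more than dim points for a hull
            hulls[label] = ConvexHull(points)
    return hulls

# Hypothetical usage on synthetic 2D localizations (coordinates in nm)
rng = np.random.default_rng(0)
locs = rng.normal(0.0, 30.0, size=(200, 2))
for label, hull in cluster_hulls(locs, cutoff_nm=50.0).items():
    print(f"cluster {label}: hull area = {hull.volume:.0f} nm^2")  # ConvexHull.volume is area in 2D
```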

Have you considered using serial section array or serial block-face imaging to study the intercalated disk structure?

We're actually working on it. We're collaborating with some very talented electron microscopists and it's a work in progress. Again, there are some methodological challenges vis-à-vis optimizing the heavy metal staining for those experiments.

How is the labeling for STORM done? What is your accuracy in determining the position of the molecules?

The STORM labeling is basic, conventional antibody labeling: we're using a primary antibody and a secondary antibody carrying the fluorophore.

Now, we do understand that this somewhat limits our precision, and we take that into account while analyzing the data, but we also validate both localization precision and resolution by looking at multicolored beads and the like. We can also take approaches like point-splatting the STORM data and measuring the full width at half maximum (FWHM) of some of those objects to get a sense of what size objects we are resolving with these techniques. We particularly do this for the fiducial proteins, because we know a lot more about their ultrastructural layout.
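
The snippet below sketches the point-splatting and FWHM check described above, under assumed parameters; the `splat_1d` and `fwhm` helpers and the synthetic test object are illustrative, not the speakers' actual code.

```python
# Illustrative sketch: render ("splat") localizations onto a grid with a Gaussian
# kernel, then estimate the FWHM of the resulting profile to gauge the object size
# being resolved. All values are assumptions for demonstration.
import numpy as np

def splat_1d(positions_nm, grid_nm, sigma_nm=10.0):
    """Sum a Gaussian of width sigma_nm at each localization onto a 1D grid."""
    profile = np.zeros_like(grid_nm)
    for x in positions_nm:
        profile += np.exp(-0.5 * ((grid_nm - x) / sigma_nm) ** 2)
    return profile

def fwhm(grid_nm, profile):
    """Estimate FWHM as the span of grid points at or above half the peak intensity."""
    above = grid_nm[profile >= 0.5 * profile.max()]
    return above.max() - above.min()

grid = np.arange(-200.0, 200.0, 1.0)                     # 1 nm sampling
locs = np.random.default_rng(1).normal(0.0, 25.0, 300)   # a ~25 nm-wide synthetic object
print(f"measured FWHM ~ {fwhm(grid, splat_1d(locs, grid)):.0f} nm")
```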

It looked like some of the STORM images were 3D. How did you get 3D STORM?

That's one of the reasons we prefer the wide-field based approach with the bi-plane [in the Vutara system]. What we're able to do is have a five micron section of tissue on the sample slide, and we're able to collect z-stacks on the Vutara, and that's how we're able to reconstruct intercalated disks. From a STORM standpoint, the intercalated disk is a tricky target in that it has nanoscale structural features, but in z it can extend five to ten microns depending on how it is folded in space.

What is the quantitative nature of this data?

So when I say "quantitative," I should put a caveat on that. We don't ever really reach for a microscope if we want to measure how much of a protein is present as an absolute count; in our opinion, microscopy is not the right tool for that. We use it to quantitatively address "where" questions. If I want to know where something is located and how it is distributed in space, then a microscope is my ideal tool. In terms of relative distribution, we can get quantitative data and say "X percent of a given signal falls within a certain radius of the fiducial compared to other regions," and so on. We can get quite precise relative measurements in terms of how much, and absolute measurements in terms of where.
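
As an illustration of such a relative "where" measurement, the sketch below computes the fraction of one channel's localizations that fall within a chosen radius of the nearest fiducial localization; the `fraction_near_fiducial` helper, the 50 nm radius, and the synthetic coordinates are assumptions for illustration only.

```python
# Illustrative sketch: a relative "where" metric, i.e. what fraction of a protein's
# localizations lie within a given radius of any fiducial point.
import numpy as np
from scipy.spatial import cKDTree

def fraction_near_fiducial(signal_nm, fiducial_nm, radius_nm=50.0):
    """Return the fraction of signal localizations within radius_nm of the nearest fiducial point."""
    distances, _ = cKDTree(fiducial_nm).query(signal_nm)
    return np.mean(distances <= radius_nm)

# Hypothetical usage on synthetic 3D coordinates (nm)
rng = np.random.default_rng(2)
fiducials = rng.uniform(0, 1000, size=(50, 3))
signal = rng.uniform(0, 1000, size=(5000, 3))
print(f"{100 * fraction_near_fiducial(signal, fiducials):.1f}% of signal within 50 nm of a fiducial")
```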

Is there a particular image analysis approach?

In general, I will take that question to mean "is there a particular analysis approach that we like?"

Our answer is "no." A basic premise of our research approach is that we want to customize the image analysis to specifically answer the question at hand. That doesn't necessarily mean developing a new tool every time, but rather selecting the right tool for the job and, if needed, developing a new one. In the case of the confocal images, we couldn't find a previous approach that answered the question we had, so we developed a new tool. The same happened with the STORM data, but now we're finding ways to take those same tools and use them differently depending on what the research question involves.

Have you considered looking at changes in normal cells under phosphorylation?

Yes. We collaborate with molecular biologists. None of these problems can be solved at any one research layer. We are the structural physiology people; we collaborate with functional physiology and computational folks as well as molecular biologists. Our molecular biology collaborators are able to give us insights into changes in proteins, like phosphorylation and so on. We can then start probing how those molecular changes correlate with structural and functional changes.