Radiopaedia Blog

19th Nov 2019 01:00 UTC

Letters to Radiopaedia

As part of our 2019 December Supporter Drive, we will be sharing a number of letters that we have received from colleagues from around the world, telling us how Radiopaedia has helped them personally and their community. 

These letters mean a great deal to all of us who volunteer our time to the project, and we are thrilled that their authors have allowed us to reproduce them here. We hope that they will mean as much to you as they did to us. 

As the month progresses, we'll be incrementally adding more letters here. 

 

BECOME A SUPPORTER

 


Over the past six months, we have been working on a new section of the website centered on artificial intelligence in radiology. In the spirit of Radiopaedia, we wanted to create a free resource that is both reliable and accessible to the non-computer-science crowd. Although artificial intelligence may seem to have emerged abruptly in our profession, it is worth remembering that these concepts are already at work behind the scenes in many technologies utilized by radiologists (e.g. voice recognition and other natural language processing applications), and as our professions evolve and improve, so will we. 

A sound understanding of the basic concepts of AI is a great tool to have in your arsenal, and as this branch of medical science becomes more embedded in contemporary practice, it will be an essential asset. Radiopaedia will continue to evolve to include relevant content in this field to ensure its users are up to date, informed and, most of all, able to access this information for free. 

Creating a new section of the website to accommodate the influx of curious readers was phase one of our project. We are also working on an up-to-date record of publicly available image datasets for researchers to peruse, with (at the time of writing this blog) over 100 links. 

We hope you enjoy this new section of the website. It has been a rewarding project, and I hope our users take as much away from it as I did creating it. 

You can find our new articles here, and our image databases here.

Project type: create a new section of the website

Outcome: 57 new artificial intelligence articles

Team: Andrew Murphy (lead), Candace Moore, James Condon

 

Andrew is a Radiopaedia senior editor and an Australian-trained radiographer based in Vancouver, Canada. He is currently leading the artificial intelligence sub-council of the Canadian Association of Medical Radiation Technologists Professional Practices Advisory Council.

 

 

 

Radiopaedia.org and the American Society of Neuroradiology (ASNR) are again collaborating to give you all the opportunity to submit an adult brain case for the ASNR 2019 Case of the Day. 

Each day during the ASNR 57th Annual Meeting (May 18-23) in Boston, MA, USA, a case will be shown as the official Case of the Day. This has traditionally been 'invite only' but, just like last year, one of this year's cases will be chosen from the cases you submit to Radiopaedia.org. 

In addition to one ASNR 2019 case of the day winner, we will also be showcasing a number of the best submissions as our very own Radiopaedia.org 'cases of the day' on our home page and through social media. And, even better, you will be contributing to your personal case library and making Radiopaedia.org even better! 

Prizes

There are a number of prizes available: 

Winner

The winner gets two awesome prizes:

  1.  Hotel Room: Two (2) nights at the meeting hotel (Sheraton Boston), including complimentary daily in-room WiFi and health club access (value of USD$660).

    The prize is courtesy of the American Society of Neuroradiology (ASNR). The reservation can be used at any point during the ASNR 57th Annual Meeting dates from Saturday, May 18 through Thursday, May 23.

    Winners must plan on attending (and registering for) the ASNR Annual Meeting.

    Alternatively, you can use this prize in the next two years (58th/2020 or 59th/2021 ASNR Annual Meetings). The prize is not, however, transferable. 

    For any questions, please contact Anna Smith at the ASNR office, 630-574-0220, Ext. 231, or email: [email protected] 
     
  2. 12-month all-access pass to Radiopaedia's online courses valued at USD$480.

Runner-up

The Radiopaedia.org editorial team will be selecting a runner-up who will receive a 12-month all-access pass to Radiopaedia's online courses valued at USD$480.  

Last year's cases

Have a look at the 2018 winning submissions and notable mentions here.

Submitting a case

To make your case eligible for the ASNR 2019 Case of the Day, simply:

  1. upload an awesome Adult Brain Case (see below)
  2. add the tag "ASNR2019" in the right-hand column of the case edit page

Please make sure that your case is fully fleshed out (see our case publishing guidelines).

Submitting a case is easy, especially if you are using one of our case uploaders. If not, then you can do it the old-fashioned browser-based way. If you are not already familiar with how this works, this short video will help. 

Dates

Submissions close on February 28th 2019, and the winner will be chosen by the ASNR committee in the following couple of weeks. The winner will then be contacted by email, so please make sure the email address listed in your Radiopaedia.org profile is correct. 

Poster

The winner will then be asked to take a few choice images from their case and make a two-slide PowerPoint poster (Question/Answer), which will be shown at the actual conference. This is not an onerous task, and the template will be provided to you. Here is an example. 

A physical poster will also be printed from your slides (by ASNR) and shown. This will be done for you, so if you are not attending, it is not a problem.  

Contact

If you have any questions, please write to [email protected].

20th Dec 2018 05:19 UTC

New Feature: UK and US spelling

Because Radiopaedia started in Australia, we have until now supported only UK spelling. That's right: oedema, haemorrhage and colour.

We know, however, that many of you prefer US spelling and are completely freaked out by all these extra letters. We can empathise. To us, spelling oesophagus as esophagus is, well, just a bit creepy.

And we also know how much some of you like z instead of s. Oh my, look at how zany organize looks with a z! I love it! 

Thanks to our amazing (or should that be amasing?) Radiopaedia Supporters we will now be changing all the spelling to be just right for you. No more flamewars among the editors about the pros and cons of oedema vs edema, aluminium vs aluminum or haemorrhage vs hemorrhage. You get the spelling you prefer. 

We'll take a punt based on your browser language setting, but to get it perfect, just update the language setting of your free Radiopaedia profile and we'll do the rest. You can read more about it here.
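
For the technically curious, the behaviour boils down to a simple preference lookup. Here is a minimal sketch (the setting names, header handling and word list are my own illustrative assumptions, not Radiopaedia's actual code) of a profile setting overriding a guess from the browser's Accept-Language header:

```python
# Minimal sketch of spelling-variant resolution; names and logic are
# illustrative assumptions only, not Radiopaedia's implementation.
UK_TO_US = {"oedema": "edema", "haemorrhage": "hemorrhage", "colour": "color"}

def resolve_variant(profile_language=None, accept_language=""):
    """Prefer the profile setting; otherwise take a punt on the browser header."""
    if profile_language:                       # e.g. "en-GB" or "en-US" from the user's profile
        return profile_language
    return "en-US" if "en-US" in accept_language else "en-GB"

def localise(text, variant):
    """Swap UK spellings for US ones when the reader prefers US English."""
    if variant != "en-US":
        return text                            # UK spelling remains the stored default
    for uk, us in UK_TO_US.items():
        text = text.replace(uk, us)
    return text

print(localise("Vasogenic oedema with haemorrhage",
               resolve_variant(accept_language="en-US,en;q=0.9")))
# -> "Vasogenic edema with hemorrhage"
```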

Update your profile settings

 

Features like this are, however, only possible because of Radiopaedia Supporters. If you are a supporter, thank you. Seriously. This is the sort of feature that you allow us to build.

If you are not a supporter but would like to become one, then this is the perfect time to do so. It will make you feel warm and tingly all over. Nothing quite compares, especially when one of the perks is no ads!

Become a Supporter

 

 

This month I was fortunate enough to co-author a really interesting paper in Radiology entitled Chest Radiographs in Congestive Heart Failure: Visualizing Neural Network Learning 1. We described a novel use for GANs (more about these shortly) in helping to visualize disease predictions made by AI - and the results were quite literally revealing. 

Like it or not, artificial intelligence has become a big deal in radiology of late, and while it is almost certainly over-hyped, it is likely that we’ll soon see some integration into clinical practice. In this post, I want to briefly describe our research, show some animated GIFs (always fun) and speculate on the future.

First, a little background on GANs…

What do the three above images have in common? You probably can't tell instantly, but the answer is that none of them are real. Each image was artificially created by a GAN, a Generative Adversarial Network 2,3. The x-ray, the bedroom, and the celebrity dude are all totally fake. You could argue that every celebrity is fake anyway, but that's another issue.

GANs are a fascinating form of deep learning where two neural networks compete against each other (adversarially) to learn how to create fake data. The generator network is tasked with creating fake data (in our case fake chest x-rays) and the discriminator network is tasked with detecting fake data from amongst real data (detecting fake chest x-rays).

Initially, the generator is terrible at producing fake x-rays and the discriminator spots them all. But the generator learns from these rejections and over many cycles it gets better and better at making x-rays that appear realistic. Likewise, the discriminator gets better and better at spotting even subtle forgeries. Eventually, the generator learns how to create fake data that is indistinguishable from real data (within the limits of its architecture).
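
For anyone who wants to see what that loop looks like in code, here is a minimal PyTorch sketch. The tiny fully connected networks are placeholders for illustration and are not the architecture used in our paper, but the alternating generator/discriminator updates are the essence of a GAN:

```python
# A minimal adversarial training loop in PyTorch. The toy fully connected
# networks are illustrative placeholders, not the paper's architecture.
import torch
import torch.nn as nn

latent_dim = 100
image_size = 128 * 128                       # flattened 128 x 128 chest x-ray

generator = nn.Sequential(                   # latent vector -> fake x-ray
    nn.Linear(latent_dim, 1024), nn.ReLU(),
    nn.Linear(1024, image_size), nn.Tanh())
discriminator = nn.Sequential(               # x-ray -> "is this real?" logit
    nn.Linear(image_size, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_xrays):                  # real_xrays: (batch, image_size) tensor
    batch = real_xrays.size(0)
    fakes = generator(torch.randn(batch, latent_dim))

    # The discriminator learns to call real x-rays real and the fakes fake.
    d_loss = (bce(discriminator(real_xrays), torch.ones(batch, 1)) +
              bce(discriminator(fakes.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # The generator learns to fool the discriminator into calling its fakes real.
    g_loss = bce(discriminator(fakes), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```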

Unlike fake news, fake data is a good thing and can be really, really useful... tremendously useful. I know that seems counterintuitive at first (and at second and at third) but it is true. There are already hundreds of applications for GANs that have been described in the scientific literature, across many disparate fields. So far their use in radiology, however, has been relatively small.

Now on to our real fake research... and GIFs!

Our idea was to use the example of heart failure prediction to see if a chest x-ray GAN could help reveal the image features learned by a neural network. We basically asked, “okay AI, if you’re so confident that this chest has heart failure, show me what you would change on the x-ray to remove the disease?”. The expectation would be that a well-trained model would highlight traditional features of cardiac failure like cardiomegaly (arrowheads), pleural effusions (arrow) and airspace opacity (star) - which is exactly what it did.

The full technical details are in the paper and supplement 4, but the quick summary is that we used ~100,000 chest x-rays to create a generator capable of producing low-resolution fakes (128 x 128 pixels) from a latent space. We then encoded ~7,000 real chest x-rays into the latent space, trained a smaller neural network to predict heart failure (BNP levels) on these representations, statistically manipulated them to remove the heart failure prediction, and then decoded the result into a fake “healthy” version of the original x-ray.
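
A loose sketch of that manipulation step may help. The `encoder`, `generator` and `bnp_model` names below are hypothetical stand-ins, and simple gradient descent on the latent code is used purely for illustration (the paper's statistical manipulation differs), but it conveys the idea of nudging the encoded x-ray until the heart failure prediction falls away:

```python
# Illustrative sketch only: hypothetical encoder/generator/bnp_model, and a
# gradient-descent stand-in for the paper's latent-space manipulation.
import torch

def make_healthy_fake(xray, encoder, generator, bnp_model, steps=50, lr=0.1):
    """Encode an x-ray, push its latent code towards a low heart failure (BNP)
    prediction, and decode a fake 'healthy' version of the same image."""
    z = encoder(xray).detach().clone().requires_grad_(True)
    optimiser = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        prediction = bnp_model(z)            # heart failure score from the latent code
        optimiser.zero_grad()
        prediction.sum().backward()          # gradient of the score w.r.t. z
        optimiser.step()                     # descent step lowers the predicted score
    return generator(z).detach()             # decoded counterfactual "healthy" x-ray
```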

By superimposing the predicted change over the original x-ray, we create what we call a Generative Visual Rationale (GVR). The orange represents density that the model would remove and purple density that the model would add in order to remove the prediction of heart failure. Here’s an animated GIF (as promised) showing the model dynamically lowering its heart failure prediction and the associated GVR.  
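
The overlay itself is, in essence, the colour-coded difference between the original x-ray and its decoded "healthy" fake. A rough NumPy illustration (the colour mapping here is approximate, not the exact rendering used in the paper):

```python
# Approximate illustration of the GVR overlay, not the paper's exact rendering.
import numpy as np

def generative_visual_rationale(original, healthy_fake):
    """Overlay the predicted change on the original x-ray: orange where the model
    would remove density, purple where it would add density.
    Both inputs are float arrays in [0, 1] with shape (H, W)."""
    diff = original - healthy_fake
    removed = np.clip(diff, 0, None)              # density the model would take away
    added = np.clip(-diff, 0, None)               # density the model would add
    overlay = np.stack([original] * 3, axis=-1)   # grayscale -> RGB
    overlay[..., 0] += removed + 0.5 * added      # orange = red + some green
    overlay[..., 1] += 0.5 * removed              # purple = red + blue
    overlay[..., 2] += added
    return np.clip(overlay, 0.0, 1.0)
```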

  

Seeing beyond the expected

However, heart failure was not all that the GVRs revealed. You’ll note above that the chest wall highlights purple and breast tissue orange. That's odd, right? But not when you consider that we used B-type natriuretic peptide blood levels (BNP) as our label for heart failure and that BNP has a known independent negative association with obesity and positive association with female gender 5,6. So the model was, in fact, using image features not associated with heart failure to improve its BNP predictions, and the GVRs conveyed this.

Side markers were another predictive factor that the GVRs exposed. The model would often add a conventional (non-digital) side marker when attempting to remove a heart failure prediction, probably because at our institution conventional side markers are primarily used in non-urgent settings, where patients are more likely to be well with a low pre-test probability for heart failure. So the AI was using the external marker to help game its predictions. Look back at the first GIF to see this happen on the patient's right. 

We also took normal chest x-rays and asked the model to give them heart failure (inverse GVRs). These confirmed again that cardiomegaly, pleural effusions and airspace opacity had been learned as signs of heart failure, but also that pacemakers had been learned - materializing as if from nowhere in another GIF!

  

You might ask - were we simply imposing our own preconceived notions on the GVRs? To test this, we compared GVRs from our well-trained model to a deliberately overfitted model that had seen the test data during training (a big deep learning no-no). Our hypothesis was that the overfitted model would perform extremely well on the test data (because of memorization) but that it would not produce very meaningful GVRs. Sure enough, blinded GVR assessment by a radiologist and radiology registrar confirmed this, with only 36% highlighting potential heart failure features compared to 80% from the well-trained model.

So, what does this mean for the future?

Well, arguably for the first time we now have a method for visualizing AI predictions in medical imaging that goes beyond identifying which image patches contribute to the final prediction. We have a technique that can reveal global image features in combination. From a safety perspective, this is a welcome advance, as it allows radiologists to confirm that individual predictions are reasonable, and to better detect AI faults, cheating, and biases.

The major current limitation to our method is GAN resolution, although it seems likely that this will be overcome 3. The architecture needed for GVRs is also different to commonly used neural networks and so this may further limit use, especially if the predictive power of GVR-friendly techniques is inferior.

Extrapolating further, it is conceivable that GVRs could soon be used to uncover imaging signs of disease previously unknown to humans. It's also conceivable that instead of visually predicting disease, the technique could be used to visually predict the future. “Hey AI, show me what you think this lesion/mass/bleed will look like tomorrow? Or next year?”. The amount of follow-up imaging performed on our patients is so large, and time is such an accessible and definite label, that training a radiology "pre-cognition" system is possibly not that far-fetched.

VIEW THE RESEARCH PAPER

About The Authors: Dr. Andrew Dixon (last author, blog author) is a radiologist and Co-Director of Radiology Training at the Alfred Hospital in Melbourne. He is Academic Director for Radiopaedia. Dr. Jarrel Seah (first author) is a radiology registrar at the Alfred Hospital in Melbourne. Dr. Jennifer Tang (second author) is a radiology registrar at the Royal Melbourne Hospital. Andy Kitchen (third author) is a machine learning researcher and organizer of the Melbourne Machine Learning & AI Meetup. Associate Professor Frank Gaillard (fourth author) is a neuroradiologist and Director of Research in the University of Melbourne Department of Radiology and Royal Melbourne Hospital. He is Founder and Editor in Chief of Radiopaedia. 
References

1. Seah JCY, Tang JSN, Kitchen A, Gaillard F, Dixon AF. Chest Radiographs in Congestive Heart Failure: Visualizing Neural Network Learning. (2018) Radiology. doi:10.1148/radiol.2018180887 - Pubmed

2. Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative Adversarial Networks. (2014) arxiv.org/abs/1406.2661

3. Karras T, Aila T, Laine S, Lehtinen J. Progressive Growing of GANs for Improved Quality, Stability, and Variation. (2017) arxiv.org/abs/1710.10196

4. Seah J, Tang J, Kitchen A, Seah J. Generative Visual Rationales. (2018) arxiv.org/abs/1804.04539

5. Clerico A, Giannoni A, Vittorini S, Emdin M. The paradox of low BNP levels in obesity. (2012) Heart failure reviews. 17 (1): 81-96. doi:10.1007/s10741-011-9249-z - Pubmed

6. Hsich EM, Grau-Sepulveda MV, Hernandez AF, Eapen ZJ, Xian Y, Schwamm LH, Bhatt DL, Fonarow GC. Relationship between sex, ejection fraction, and B-type natriuretic peptide levels in patients hospitalized with heart failure and associations with inhospital outcomes: findings from the Get With The Guideline-Heart Failure Registry. (2013) American heart journal. 166 (6): 1063-1071.e3. doi:10.1016/j.ahj.2013.08.029 - Pubmed
