Radiopaedia Blog

 

Radiopaedia.org and the American Society of Neuroradiology (ASNR) are again collaborating to give you all the opportunity to submit an adult brain case to the ASNR 2019 Case of the Day. 

Each day during the ASNR 57th Annual Meeting (May 18-23) in Boston, MA, USA, a case will be shown as the official Case of the Day. This has traditionally been 'invite only', but just like last year, one of this year's cases will be chosen from the cases you submit to Radiopaedia.org. 

In addition to one ASNR 2019 case of the day winner, we will also be showcasing a number of the best submissions as our very own Radiopaedia.org 'cases of the day' on our home page and through social media. And, even better, you will be contributing to your personal case library and making Radiopaedia.org even better! 

Prizes

There are a number of prizes available: 

Winner

The winner gets two awesome prizes:

  1.  Hotel Room: Two (2) nights at the meeting hotel (Sheraton Boston), including complimentary daily in-room WiFi and health club access (value of USD$660).

    The prize is courtesy of the American Society of Neuroradiology (ASNR). The reservation can be used at any point during the ASNR 57th Annual Meeting dates from Saturday, May 18 through Thursday, May 23.

    Winners must plan on attending (and registering for) the ASNR Annual Meeting.

    Alternatively, you can use this prize in the next two years (58th/2020 or 59th/2021 ASNR Annual Meetings). The prize is not, however, transferable. 

    For any questions, please contact Anna Smith at the ASNR office, 630-574-0220, Ext. 231, or email: asmith@asnr.org 
     
  2. 12-month all-access pass to Radiopaedia's online courses valued at USD$480.

Runner-up

The Radiopaedia.org editorial team will be selecting a runner-up who will receive a 12-month all-access pass to Radiopaedia's online courses valued at USD$480.  

Last year's cases

Have a look at the 2018 winning submissions and notable mentions here.

Submitting a case

To make your case eligible for the ASNR 2019 Case of the Day, simply:

  1. upload an awesome Adult Brain Case (see below)
  2. add the tag "ASNR2019" in the right-hand column of the case edit page

Please make sure that your case is fully fleshed out (see our case publishing guidelines).

Submitting a case is easy, especially if you are using one of our case uploaders. If not, then you can do it the old-fashioned browser-based way. If you are not already familiar with how this works, this short video will help. 

Dates

Submissions close on February 28th 2019, and the winner will be chosen by the ASNR committee in the following couple of weeks. The winner will then be contacted by email, so please make sure the email address listed in your Radiopaedia.org profile is correct. 

Poster

The winner will then be asked to take a few choice images from their case and make a two-slide PowerPoint poster (Question/Answer), which will be shown at the actual conference. This is not an onerous task, and the template will be provided to you. Here is an example. 

A physical poster will also be printed from your slides (by ASNR) and shown. This will be done for you, so if you are not attending, it is not a problem.  

Contact

If you have any questions, please write to general@radiopaedia.org.


Because Radiopaedia started in Australia, we have until now supported only UK spelling. That's right: oedema, haemorrhage and colour.

We know, however, that many of you prefer US spelling and are completely freaked out by all these extra letters. We can empathise. To us, spelling oesophagus as esophagus is, well, just a bit creepy.

And we also know how much some of you like z instead of s. Oh my, look at how zany organise looks with a z! I love it! 

Thanks to our amazing (or should that be amasing?) Radiopaedia Supporters we will now be changing all the spelling to be just right for you. No more flamewars among the editors about the pros and cons of oedema vs edema, aluminium vs aluminum or haemorrhage vs hemorrhage. You get the spelling you prefer. 

We'll take a punt based on your browser language setting, but to get it perfect, just update the language setting of your free Radiopaedia profile and we'll do the rest. You can read more about it here.

Update your profile settings

 

Features like this are, however, only possible because of Radiopaedia Supporters. If you are a supporter, thank you. Seriously. This is the sort of feature that you allow us to build.

If you are not a supporter but would like to become one, then this is the perfect time to do so. It will make you feel warm and tingly all over. Nothing quite compares, especially when one of the perks is no ads!

Become a Supporter

 

 


This month I was fortunate enough to co-author a really interesting paper in Radiology entitled Chest Radiographs in Congestive Heart Failure: Visualizing Neural Network Learning 1. We described a novel use for GANs (more about these shortly) in helping to visualize disease predictions made by AI - and the results were quite literally revealing. 

Like it or not, artificial intelligence has become a big deal in radiology of late, and while it is almost certainly over-hyped, it is likely that we’ll soon see some integration into clinical practice. In this post, I want to briefly describe our research, show some animated GIFs (always fun) and speculate on the future.

First, a little background on GANs…

What do the three above images have in common? You probably can't tell instantly, but the answer is that none of them are real. Each image was artificially created by a GAN, a Generative Adversarial Network 2,3. The x-ray, the bedroom, and the celebrity dude are all totally fake - although you could argue that every celebrity is fake, but that’s another issue.

GANs are a fascinating form of deep learning where two neural networks compete against each other (adversarially) to learn how to create fake data. The generator network is tasked with creating fake data (in our case fake chest x-rays) and the discriminator network is tasked with detecting fake data from amongst real data (detecting fake chest x-rays).

Initially, the generator is terrible at producing fake x-rays and the discriminator spots them all. But the generator learns from these rejections and over many cycles it gets better and better at making x-rays that appear realistic. Likewise, the discriminator gets better and better at spotting even subtle forgeries. Eventually, the generator learns how to create fake data that is indistinguishable from real data (within the limits of its architecture).
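To make that adversarial loop concrete, here is a minimal training-step sketch in PyTorch. It is purely illustrative: the tiny fully-connected networks, image size, and hyperparameters are arbitrary assumptions and bear no relation to the architecture actually used in the paper.

```python
# Minimal GAN training sketch (illustrative only; sizes and settings are assumptions).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 128 * 128   # hypothetical noise and image sizes

generator = nn.Sequential(              # maps random noise -> fake "x-ray"
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(          # maps image -> probability it is real
    nn.Linear(image_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update; real_images has shape (batch, image_dim)."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated fakes.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator into calling fakes real.
    noise = torch.randn(batch, latent_dim)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Over many such steps the two networks drive each other to improve, which is the whole trick.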

Unlike fake news, fake data is a good thing and can be really, really useful... tremendously useful. I know that seems counterintuitive at first (and at second and at third) but it is true. Hundreds of applications for GANs have already been described in the scientific literature, across many disparate fields. So far, however, their use in radiology has been relatively limited.

Now on to our real fake research... and GIFs!

Our idea was to use the example of heart failure prediction to see if a chest x-ray GAN could help reveal the image features learned by a neural network. We basically asked, “okay AI, if you’re so confident that this chest has heart failure, show me what you would change on the x-ray to remove the disease?”. The expectation would be that a well-trained model would highlight traditional features of cardiac failure like cardiomegaly (arrowheads), pleural effusions (arrow) and airspace opacity (star) - which is exactly what it did.

The full technical details are in the paper and supplement 4, but the quick summary is that we used ~100,000 chest x-rays to create a generator capable of producing low-resolution fakes (128 x 128 pixels) from a latent space. We then encoded ~7,000 real chest x-rays into the latent space, trained a smaller neural network to predict heart failure (BNP levels) on these representations, statistically manipulated them to remove the heart failure prediction, and then decoded the result into a fake “healthy” version of the original x-ray.
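For readers who like to see the shape of that last manipulation step in code, here is a rough sketch. The encoder, decoder and predictor are placeholders standing in for the trained networks, and optimising the latent code by gradient descent is an assumption made for illustration; it is not necessarily the exact statistical manipulation described in the paper.

```python
# Sketch of "remove the heart failure prediction in latent space" (assumed approach).
import torch

def make_healthy_version(xray, encoder, decoder, predictor,
                         steps=100, lr=0.05, target=0.0):
    """Return a fake 'healthy' version of `xray` plus its adjusted latent code."""
    z = encoder(xray).detach().clone().requires_grad_(True)   # encode into latent space
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        score = predictor(z)                      # predicted heart-failure (BNP) score
        loss = (score - target).pow(2).mean()     # push the prediction towards 'healthy'
        opt.zero_grad(); loss.backward(); opt.step()
    return decoder(z).detach(), z.detach()        # decode the adjusted latent code
```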

By superimposing the predicted change over the original x-ray, we create what we call a Generative Visual Rationale (GVR). The orange represents density that the model would remove and purple density that the model would add in order to remove the prediction of heart failure. Here’s an animated GIF (as promised) showing the model dynamically lowering its heart failure prediction and the associated GVR.  
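If you want a feel for how such an overlay could be assembled once you have the original x-ray and its generated "healthy" counterpart, here is a small sketch. The colour coding and threshold are assumptions chosen for illustration rather than the paper's implementation.

```python
# Sketch of building a GVR-style overlay from the original and generated images.
import numpy as np
import matplotlib.pyplot as plt

def plot_gvr(original, healthy, threshold=0.05):
    """original, healthy: 2D arrays scaled to [0, 1]."""
    diff = original - healthy
    removed = np.clip(diff, 0, None)          # density the model would remove
    added = np.clip(-diff, 0, None)           # density the model would add
    overlay = np.stack([original] * 3, axis=-1)
    overlay[removed > threshold] = [1.0, 0.5, 0.0]   # orange: remove
    overlay[added > threshold] = [0.5, 0.0, 0.5]     # purple: add
    plt.imshow(overlay)
    plt.axis("off")
    plt.show()
```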

  

Seeing beyond the expected

However, heart failure was not all that the GVRs revealed. You’ll note above that the chest wall highlights purple and breast tissue orange. That's odd, right? But not when you consider that we used B-type natriuretic peptide blood levels (BNP) as our label for heart failure and that BNP has a known independent negative association with obesity and positive association with female gender 5,6. So the model was, in fact, using image features not associated with heart failure to improve its BNP predictions, and the GVRs conveyed this.

Side markers were another predictive factor that the GVRs exposed. The model would often add a conventional (non-digital) side marker when attempting to remove a heart failure prediction, probably because at our institution conventional side markers are primarily used in non-urgent settings where patients are more likely to be well with a low pre-test probability for heart failure. So the AI was using the external marker to help game its predictions. Look back at this first GIF to see this happen on the patient's right. 

We also took normal chest x-rays and asked the model to give them heart failure (inverse GVRs). These confirmed again that cardiomegaly, pleural effusions and airspace opacity had been learned as signs of heart failure, but also that pacemakers had been learned - materializing as if from nowhere in another GIF!

  

You might ask - were we simply imposing our own preconceived notions on the GVRs? To test this, we compared GVRs from our well-trained model to a deliberately overfitted model that had seen the test data during training (a big deep learning no-no). Our hypothesis was that the overfitted model would perform extremely well on the test data (because of memorization) but that it would not produce very meaningful GVRs. Sure enough, blinded GVR assessment by a radiologist and radiology registrar confirmed this, with only 36% highlighting potential heart failure features compared to 80% from the well-trained model.

So, what does this mean for the future?

Well, arguably for the first time we now have a method for visualizing AI predictions in medical imaging that goes beyond identifying which image patches contribute to the final prediction. We have a technique that can reveal global image features in combination. From a safety perspective, this is a welcome advance, as it allows radiologists to confirm that individual predictions are reasonable, and to better detect AI faults, cheating, and biases.

The major current limitation of our method is GAN resolution, although it seems likely that this will be overcome 3. The architecture needed for GVRs also differs from commonly used neural networks, which may further limit uptake, especially if the predictive power of GVR-friendly techniques is inferior.

Extrapolating further, it is conceivable that GVRs could soon be used to uncover imaging signs of disease previously unknown to humans. It's also conceivable that instead of visually predicting disease, the technique could be used to visually predict the future. “Hey AI, show me what you think this lesion/mass/bleed will look like tomorrow. Or next year?” The amount of follow-up imaging performed on our patients is so large, and time is such an accessible and definite label, that training a radiology "pre-cognition" system is possibly not that far-fetched.

VIEW THE RESEARCH PAPER

About The Authors: Dr. Andrew Dixon (last author, blog author) is a radiologist and Co-Director of Radiology Training at the Alfred Hospital in Melbourne. He is Academic Director for Radiopaedia. Dr. Jarrel Seah (first author) is a radiology registrar at the Alfred Hospital in Melbourne. Dr. Jennifer Tang (second author) is a radiology registrar at the Royal Melbourne Hospital. Andy Kitchen (third author) is a machine learning researcher and organizer of the Melbourne Machine Learning & AI Meetup. Associate Professor Frank Gaillard (fourth author) is a neuroradiologist and Director of Research in the University of Melbourne Department of Radiology and Royal Melbourne Hospital. He is Founder and Editor in Chief of Radiopaedia. 
References

1. Seah JCY, Tang JSN, Kitchen A, Gaillard F, Dixon AF. Chest Radiographs in Congestive Heart Failure: Visualizing Neural Network Learning. (2018) Radiology. doi:10.1148/radiol.2018180887 - Pubmed

2. Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative Adversarial Networks. (2014) arxiv.org/abs/1406.2661

3. Karras T, Aila T, Laine S, Lehtinen J. Progressive Growing of GANs for Improved Quality, Stability, and Variation. (2017) arxiv.org/abs/1710.10196

4. Seah JCY, Tang JSN, Kitchen A, Seah J. Generative Visual Rationales. (2018) arxiv.org/abs/1804.04539

5. Clerico A, Giannoni A, Vittorini S, Emdin M. The paradox of low BNP levels in obesity. (2012) Heart failure reviews. 17 (1): 81-96. doi:10.1007/s10741-011-9249-z - Pubmed

6. Hsich EM, Grau-Sepulveda MV, Hernandez AF, Eapen ZJ, Xian Y, Schwamm LH, Bhatt DL, Fonarow GC. Relationship between sex, ejection fraction, and B-type natriuretic peptide levels in patients hospitalized with heart failure and associations with inhospital outcomes: findings from the Get With The Guideline-Heart Failure Registry. (2013) American heart journal. 166 (6): 1063-1071.e3. doi:10.1016/j.ahj.2013.08.029 - Pubmed


30th Jul 2018 09:00 UTC

Using Playlists to Teach

I use Radiopaedia playlists to teach all the time. And, they are awesome! 

My use of playlists has ranged from small group tutorials with undergraduate medical students to 1-to-1 vivas with radiology registrars coming up to their fellowship examinations. I have used playlists to present to an international audience at conferences and as part of radiology workshops for non-radiologists.

I love the way that playlists allow me to teach radiology with scrollable stacks the way I use PACS on a day-to-day basis. The frustration of trying to explain a topic or concept with single images from a CT or MRI is gone!

Playlists are flexible and allow me to remove parts of a case that I don't want to use in the teaching session. And I can intersperse slides exported from PowerPoint in between scrollable stacks of images.

There’s nothing like teaching radiology using a tool that mirrors the way we interact with images on a day-to-day basis.

I get to choose from the 30,000 cases on Radiopaedia when I'm creating my teaching session and if I can't find the case I'm looking for, I can always upload one of my own.

I always have a playlist ready for a reporting-room teaching session if time allows and if I'm caught out, I can rapidly throw one together on the site.

Radiopaedia is an awesome teaching resource. Playlists supercharge the teaching experience and allow me to communicate effectively with my students, trainees and colleagues.

Plus, I get to send them the link to the teaching session afterwards so they can look at the cases again.

Here are some examples:

Playlist for a lecture
Playlist for teaching

If you want a bit of a helping hand pulling together a playlist, have a look at this 3-minute video.

Dr Jeremy Jones


Radiopaedia.org and UK Radiology Congress (UKRC) are delighted to be bringing the case contest to the UKRCO 2018 meeting in Liverpool, UK.

Case contest

After the success of UKRC 2017, we are back with even more cases for radiologists, radiographers and anyone involved in medical imaging to try their hand at.

We have six case streams (covering neuro, MSK, chest, abdominal, pediatrics and plain film imaging), each with its own prizes. If you are on site at UKRCO 2018, you can win access to next year's conference and a Radiopaedia All-Access Course Pass (valued at USD$480). The overall winner will get a 128 GB iPad, kindly sponsored by MEDICA.

Online entries, as always, are welcome, with six 12-month Radiopaedia All-Access Course Passes up for grabs!

More will be available via our social media streams (Facebook, Twitter, Instagram).

Click here for the course page with all questions, answer forms, and more prize details:

UKRCO 2018 Case Contest

Answer deadline: Tuesday 03 July, 2359 BST

We look forward to seeing those of you at UKRCO 2018, and to everyone else - GOOD LUCK!

