AI-based multimodal analysis of neurology work-up data for dementia assessment

00:58:28
https://www.youtube.com/watch?v=Urp8xtWZFGQ

Summary

TL;DR: In this seminar, VJ Kachala, a scientist at Boston University, presented his work on using AI and machine learning to improve the diagnosis and assessment of Alzheimer's disease and other dementias. The discussion centered on integrating varied clinical data types, such as demographics, MRI scans, and cognitive tests, into robust predictive models. Kachala emphasized the rigorous validation processes his team employs to ensure these models are useful in clinical settings. He also illustrated the importance of collaboration between computer scientists and clinicians in addressing the challenges of dementia diagnostics, underscoring the need for assistive tools in routine neurology practice. The seminar concluded with insights into how these innovations could transform patient assessment and improve clinical outcomes in dementia care.

Takeaways

  • 🤖 AI integration enhances dementia diagnosis.
  • 🧠 Multimodal data improves predictive accuracy.
  • ⏳ Models validate on diverse patient datasets.
  • 🩻 MRI plays a critical role in assessment.
  • 🧪 Collaboration between researchers and clinicians is vital.
  • 📈 AI models can adapt to missing data.
  • 🔍 Tools show potential for clinical application.
  • 💡 Understanding individual data contributions is key.
  • 📊 Ongoing research aims to quantify treatment effects.
  • 🚀 Future tools aim for broader applicability in neurology.
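Two of the takeaways above are that the models fuse multiple data modalities and can still produce an assessment when some modalities are missing. As a rough illustration of the general idea of late fusion with missing inputs (the modality names and scores below are hypothetical, not from the talk), here is a minimal Python sketch:

```python
import numpy as np

def fuse_predictions(modality_probs):
    """Average the predicted risk probabilities from whichever
    modalities are available, ignoring missing ones (None)."""
    available = [p for p in modality_probs.values() if p is not None]
    if not available:
        raise ValueError("at least one modality is required")
    return float(np.mean(available))

# Hypothetical per-modality risk scores for one patient;
# the MRI-based score is unavailable at this visit.
probs = {"demographics": 0.62, "cognitive_tests": 0.71, "mri": None}
risk = fuse_predictions(probs)
```

This is only a conceptual sketch; the models described in the talk learn the fusion jointly rather than averaging fixed per-modality scores.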

Timeline

  • 00:00:00 - 00:05:00

    The session opens with host Andrew Stern, a cognitive behavioral neurologist at Brigham and Women's, introducing VJ Kachala, a computer scientist at Boston University who applies AI and machine learning to Alzheimer's disease research.

  • 00:05:00 - 00:10:00

    VJ Kachala discusses the ongoing collaboration between computer science and neurology in addressing Alzheimer's disease diagnostics, emphasizing the importance of multimodal data collection in enhancing the understanding of dementia.

  • 00:10:00 - 00:15:00

    He reflects on how the shortage of dementia experts, particularly in places like India, motivates the development of assistive tools to diagnose Alzheimer's and related dementias more efficiently.

  • 00:15:00 - 00:20:00

    Kachala emphasizes the relevance and challenges of integrating various data types from clinical, demographic, and neuroimaging sources to train machine learning models for dementia assessment effectively.

  • 00:20:00 - 00:25:00

    Key barriers in clinical trials for Alzheimer’s disease are outlined, with Kachala noting how machine learning can enhance eligibility screening and trial designs.

  • 00:25:00 - 00:30:00

    The discussion transitions to the research team's focus on creating interpretable machine learning models using diverse datasets while addressing the complexities of diagnosing different forms of dementia.

  • 00:30:00 - 00:35:00

    Kachala provides an overview of the team's approach to validating models, ensuring their accuracy and generalizability across varied datasets.

  • 00:35:00 - 00:40:00

    An outline of their data collection efforts and collaborations with well-known institutions such as Stanford and the Framingham Heart Study is shared, focusing on harmonizing diverse data to ensure quality input for machine learning.

  • 00:40:00 - 00:45:00

    He explains the mechanics of neural networks and convolutional networks in data processing, particularly in analyzing MRI images for patterns indicative of Alzheimer’s.

  • 00:45:00 - 00:50:00

    The talk highlights how deep learning models can discern different types of dementia and their associated features, alongside comparative assessments with clinical data and expert evaluations.

  • 00:50:00 - 00:58:28

    Final summary points are shared concerning the ongoing efforts to incorporate AI tools into clinical practices, emphasizing the model’s capability to handle different types of data while aiming for practical integration in real-world medical assessments.
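The 00:40:00 segment of the timeline describes convolution as a generic filter slid across an image organized as a simple 3D grid. The minimal NumPy sketch below illustrates that operation on a toy volume; it is an illustration of the concept only, not the speaker's actual pipeline, and the 8×8×8 "scan" is synthetic:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D convolution (really cross-correlation, as in
    most deep-learning libraries): slide the kernel over the volume
    and take a weighted sum of each local patch."""
    vz, vy, vx = volume.shape
    kz, ky, kx = kernel.shape
    out = np.zeros((vz - kz + 1, vy - ky + 1, vx - kx + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                patch = volume[z:z + kz, y:y + ky, x:x + kx]
                out[z, y, x] = np.sum(patch * kernel)
    return out

# Toy 8x8x8 "scan" and a 3x3x3 averaging filter; under 'valid'
# padding the output shrinks to 6x6x6.
scan = np.random.rand(8, 8, 8)
smoothed = conv3d_valid(scan, np.full((3, 3, 3), 1 / 27))
```

In a convolutional neural network the kernel weights are learned rather than fixed, and many such filters are stacked into the hierarchical layers the speaker describes.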


Video Q&A

  • Who is VJ Kachala?

    VJ Kachala is a researcher at Boston University specializing in AI and machine learning applications in dementia research.

  • What is the focus of VJ Kachala's research?

    His research focuses on utilizing AI and machine learning to improve the diagnosis and assessment of Alzheimer's disease and related dementias.

  • What types of data are used in Kachala's models?

    The models utilize multimodal data, including clinical demographics, neuroimaging, and cognitive testing.

  • How do Kachala's models predict dementia?

    The models leverage machine learning techniques to analyze data and provide predictions about a patient's cognitive status.

  • What is the significance of validation in Kachala's research?

    Validation is crucial to ensure the reliability and clinical applicability of the developed AI models.

  • What clinical tools are being developed?

    The research aims to create assistive tools for neurologists to facilitate dementia assessments.

  • How does the model handle missing data?

    The model can still make predictions even if certain data modalities are unavailable.

  • What progress has been made in clinical trials?

    The research has shown promising results in predicting biomarker positivity for participants in clinical trials.

  • What was a key takeaway from the seminar?

    The integration of AI in clinical neurology holds the potential to enhance diagnostic accuracy and efficiency.

  • Will Kachala's tools be available in clinical settings?

    Yes, pilot studies are being conducted to evaluate the tools in various clinical centers.
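Several answers above stress validation: models are trained on one cohort and their reliability checked on external datasets. A standard way to quantify discrimination on a held-out cohort is the area under the ROC curve; the sketch below computes it from scratch (the cohort labels and scores are hypothetical, for illustration only):

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney pairwise-ranking formula: the
    fraction of (positive, negative) pairs in which the positive
    case receives the higher score, counting ties as half."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical model scores on an external validation cohort
# (e.g., a model trained on one dataset, scored on another).
external_labels = [0, 0, 0, 1, 1, 1]
external_scores = [0.2, 0.4, 0.6, 0.5, 0.7, 0.9]
external_auc = auc(external_labels, external_scores)
```

An AUC near 0.5 on the external cohort would indicate the model does not generalize beyond its training data, which is why cross-cohort checks of this kind matter.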

Transcript (en)
  • 00:00:03
    I look forward to your talk VJ thank
  • 00:00:05
    [Music]
  • 00:00:10
    you
  • 00:00:12
    right now look at that everybody's
  • 00:00:14
    coming
  • 00:00:17
    in hi everybody we'll just uh wait a few
  • 00:00:21
    more minutes for people to come in
  • 00:00:35
    you can see my full screen right not the
  • 00:00:37
    zoom okay yeah no it's uh it's just
  • 00:00:41
    slides okay it's good to see some names
  • 00:00:44
    from MGH here as well it's great
  • 00:01:22
    okay Tracy is 12:02 should I should we
  • 00:01:24
    start sounds great that's good okay
  • 00:01:28
    great well yeah I'm sure people will
  • 00:01:29
    come
  • 00:01:30
    um trickle in so um welcome everyone um
  • 00:01:34
    not everybody knows me I'm Andrew Stern
  • 00:01:36
    I'm a cognitive behavioral neurologist
  • 00:01:38
    at Brigham and Women's and um I'm um very
  • 00:01:42
    happy to introduce VJ Kachala um from
  • 00:01:45
    BU to give us a very interesting talk
  • 00:01:47
    and I was telling Tracy that um VJ and I
  • 00:01:50
    met um playing squash together um and by
  • 00:01:53
    chance after the game um it so happened
  • 00:01:56
    that we asked what each other did and we
  • 00:01:58
    both happened to be Alzheimer's
  • 00:01:59
    researchers
  • 00:02:00
    um and from there I've struck up a lot
  • 00:02:02
    of conversations about uh diagnosing
  • 00:02:04
    Alzheimer's disease and related
  • 00:02:05
    dementias and it turns out that VJ is uh
  • 00:02:08
    trained and is a bona fide card
  • 00:02:10
    carrying computer scientist and came to
  • 00:02:13
    dementia research um after a long
  • 00:02:15
    journey through um artificial
  • 00:02:17
    intelligence and machine learning and is
  • 00:02:19
    um is is the real deal and he's
  • 00:02:22
    wonderfully decided to use his powers
  • 00:02:24
    for good and has some very interesting
  • 00:02:26
    data on the use of of of AI and machine
  • 00:02:29
    learning for for everything from
  • 00:02:31
    diagnosing Alzheimer disease to uh
  • 00:02:33
    digital pathology of cancer as well um
  • 00:02:36
    so uh he's over at BU in that really
  • 00:02:39
    cool looking building with the stack of
  • 00:02:41
    looks like a stack of books uh with a
  • 00:02:42
    beautiful view of Back Bay and uh holds
  • 00:02:46
    numerous grants from NIH and the Gates
  • 00:02:48
    Foundation and various other uh sources
  • 00:02:51
    and so um we're really excited to have
  • 00:02:53
    him and uh take it away VJ yes thank you
  • 00:02:57
    Andrew for the nice intro and thank you
  • 00:02:59
    for inviting me to give this talk uh I
  • 00:03:02
    guess today you know my goal is to sort
  • 00:03:04
    of really talk about some of the work
  • 00:03:06
    that we've been doing recently in the
  • 00:03:07
    context of building some interesting uh
  • 00:03:10
    AI based methods for looking at
  • 00:03:13
    multimodal data collected in a
  • 00:03:15
    routine neurology setting for broadly
  • 00:03:18
    dementia
  • 00:03:19
    assessment uh I think our lab's goal has
  • 00:03:22
    been mainly to really think about
  • 00:03:24
    building assistive tools for neurology
  • 00:03:26
    and I think we've been kind of slowly
  • 00:03:28
    making some progress toward dementia uh
  • 00:03:31
    I've been asked to basically share two
  • 00:03:34
    of these slides so I'll probably pause
  • 00:03:37
    here I mean I don't know if I have to
  • 00:03:38
    pause for like few
  • 00:03:40
    seconds one and there is one more slide
  • 00:03:44
    I think this
  • 00:03:45
    one uh so I'll move
  • 00:03:47
    forward uh so uh this is actually an
  • 00:03:50
    important slide for me before I go into
  • 00:03:52
    the topic I just want to talk about who
  • 00:03:55
    we are uh just as a quick introduction
  • 00:03:57
    you know our lab is a mixture of you
  • 00:03:59
    know PhD students in computer science as
  • 00:04:02
    well as MD students uh from the medical
  • 00:04:05
    school at
  • 00:04:06
    bu uh we collaborate very closely with
  • 00:04:08
    some practicing neurologists
  • 00:04:10
    neuroradiologists neuropsychologists and
  • 00:04:13
    neuropathologists uh some of the names
  • 00:04:15
    might sound quite familiar uh we learn
  • 00:04:18
    clinically relevant aspects obviously
  • 00:04:19
    Andrew also taught us a lot of things
  • 00:04:21
    more recently when we talk with the
  • 00:04:23
    clinicians so it's a it's great that we
  • 00:04:25
    are kind of interacting together uh
  • 00:04:28
    because the kind of questions that we
  • 00:04:29
    are attempting to address require
  • 00:04:32
    basically a multidisciplinary
  • 00:04:34
    team uh so in this talk I'm not going to
  • 00:04:36
    spend too much time to discuss why I'm
  • 00:04:38
    doing this but rather focus
  • 00:04:40
    on how we are trying to really solve some of
  • 00:04:42
    the clinical questions uh using some
  • 00:04:45
    methodologies we are trying to develop
  • 00:04:47
    uh mainly related to machine learning
  • 00:04:50
    and applying them to multimodal data
  • 00:04:54
    such as neuroimaging and other uh
  • 00:04:56
    routinely collected clinical information
  • 00:04:59
    um so I back in 2018 I saw this paper
  • 00:05:04
    which I think kind of somewhat inspired
  • 00:05:05
    me to really think about it and I think
  • 00:05:07
    they make made some statements I think
  • 00:05:10
    this is a sobering reality here which is
  • 00:05:12
    the the there is a shortfall of experts
  • 00:05:15
    around the world I mean obviously in
  • 00:05:17
    Boston we're very lucky to have all the
  • 00:05:19
    experts but overall if I go to India
  • 00:05:21
    where I'm from uh it's really bad uh the
  • 00:05:25
    time it takes for an appointment is very
  • 00:05:27
    long uh and I think for that reason I
  • 00:05:30
    think I'm really motivated to build some
  • 00:05:33
    assistive tools uh using routinely
  • 00:05:35
    collected clinical data that can serve
  • 00:05:37
    as you know hopefully someday uh be
  • 00:05:41
    practice uh for broadly thinking about
  • 00:05:43
    Alzheimer's disease and related
  • 00:05:45
    dementias uh and as we learned from the
  • 00:05:48
    clinicians I I've tried to understand
  • 00:05:50
    that in order to build these
  • 00:05:51
    sophisticated systems we got to be able
  • 00:05:53
    to leverage all this multimodal data in
  • 00:05:57
    native formats uh and when I say
  • 00:05:59
    multimodal data I'm talking about
  • 00:06:01
    clinical demographic data anything that
  • 00:06:03
    comes from EHR such as patient history
  • 00:06:06
    bedside cognitive tests
  • 00:06:07
    neuropsychological tests neuroimaging EEG
  • 00:06:11
    Etc and as we build these models one of
  • 00:06:14
    the painstaking things that we do over
  • 00:06:17
    time in fact I would say 80% of our time
  • 00:06:19
    we spend on really validating our
  • 00:06:22
    Frameworks and we approach that using
  • 00:06:24
    all these different ways one is
  • 00:06:26
    obviously thinking about computational
  • 00:06:28
    validation
  • 00:06:29
    uh we are also trying to see if we can
  • 00:06:31
    understand how these models have some
  • 00:06:33
    corresponding biomarker evidence uh we
  • 00:06:36
    also bring clinical experts to do some
  • 00:06:38
    expert level
  • 00:06:39
    comparison uh and then finally uh if on
  • 00:06:42
    some cases if you have some postmortem
  • 00:06:44
    data we also think about postmortem
  • 00:06:46
    validation as well there is another
  • 00:06:50
    interesting um statistic that I found
  • 00:06:53
    which I I also thought was very
  • 00:06:54
    interesting uh some of you may be
  • 00:06:55
    involved with um dementia trials so
  • 00:06:58
    clearly it's a very expensive Endeavor
  • 00:07:01
    uh and this was a paper I think was it's
  • 00:07:03
    a white paper I think was published by
  • 00:07:06
    the group at USC where they kind of
  • 00:07:08
    really listed all these different
  • 00:07:09
    barriers for clinical trials for
  • 00:07:12
    Alzheimer disease but I think broadly
  • 00:07:13
    applies to dementia so clearly the the
  • 00:07:17
    screen failure rate is extremely high at
  • 00:07:19
    least as as mentioned in that document
  • 00:07:22
    so partly because I think the the
  • 00:07:25
    clinical diagnostic criteria may not
  • 00:07:27
    necessarily meet the clinical trial
  • 00:07:29
    enrollment criteria so clearly there are
  • 00:07:31
    these pet scans and other things that
  • 00:07:33
    are expected to do and you normally
  • 00:07:34
    don't do them in a routine work-up so
  • 00:07:39
    there is
  • 00:07:41
    a so there is a need I think also to
  • 00:07:44
    really think about how to build these
  • 00:07:45
    sophisticated approaches to think about
  • 00:07:47
    clinical trials as well uh so with those
  • 00:07:51
    two sort of really motivating questions
  • 00:07:53
    I think our research questions have been
  • 00:07:55
    kind of really to think about how to
  • 00:07:57
    build uh effective machine learning
  • 00:08:00
    models broadly deep learning models that
  • 00:08:01
    are interpretable so that we can explain
  • 00:08:04
    what's going on with these models and
  • 00:08:06
    how to leverage these deep learning
  • 00:08:08
    approaches to process routinely
  • 00:08:09
    collected multimodal data to assess
  • 00:08:12
    all kinds of
  • 00:08:13
    dementias uh and um finally like I said
  • 00:08:17
    earlier we need to really think about
  • 00:08:18
    how to validate these things because
  • 00:08:19
    unless we validate them it's hard to
  • 00:08:21
    really think about
  • 00:08:23
    translation um so I want to quickly talk
  • 00:08:26
    a little bit about the data set that we
  • 00:08:28
    have access to over the past I would say
  • 00:08:30
    maybe seven years eight years or so
  • 00:08:32
    we've been slowly collecting data across
  • 00:08:35
    multiple different cohorts uh some of
  • 00:08:37
    them obviously are publicly available
  • 00:08:40
    like ADNI and UK Biobank and other
  • 00:08:42
    things so so we've been kind of really
  • 00:08:44
    collecting all the data from them and I
  • 00:08:46
    think the one in the center that you see
  • 00:08:47
    here is coming from the the Framingham
  • 00:08:50
    heart study and BU is the headquarters
  • 00:08:52
    for the Framingham heart study so we
  • 00:08:54
    have access to uh the FHS data I'm on
  • 00:08:57
    the executive committee there so there's
  • 00:08:59
    a lot of in interesting data that they
  • 00:09:01
    are collecting as well uh and we also
  • 00:09:04
    have some collaboration
  • 00:09:07
    with folks at uh Stanford so it's also the
  • 00:09:12
    Pacific Udall Center so they also have
  • 00:09:14
    provided some data on mainly Lewy body
  • 00:09:17
    dementia so one of the things as we
  • 00:09:20
    collect this data is we really think
  • 00:09:22
    about how to harmonize all this data
  • 00:09:24
    because we are talking about multimodal
  • 00:09:25
    data not just MRI scans but we have to
  • 00:09:28
    come up with automated pipelines that
  • 00:09:31
    can allow us to harmonize the data
  • 00:09:33
    across all these different participants
  • 00:09:34
    coming in from all these different
  • 00:09:35
    cohorts so there is clearly a lot of
  • 00:09:38
    work to do even before we think of
  • 00:09:40
    machine learning and I think I'm kind of
  • 00:09:44
    really outlining here an overview of
  • 00:09:46
    what it means to spend time to build all
  • 00:09:49
    these pipelines so clearly we talk about
  • 00:09:52
    data collection data processing
  • 00:09:54
    normalization
  • 00:09:55
    harmonization and then in the context of
  • 00:09:57
    Imaging data we are talking about
  • 00:10:00
    registration uh normalization and other
  • 00:10:03
    U alignment techniques to sort of really
  • 00:10:05
    think about how to put all these data
  • 00:10:07
    together to make the data amenable for
  • 00:10:10
    machine learning I think that's kind of
  • 00:10:11
    really the significant amount of time
  • 00:10:13
    that we spend and then maybe I would say
  • 00:10:16
    25% of the time we actually really think
  • 00:10:20
    about
  • 00:10:22
    um so with that I just want to give a
  • 00:10:25
    brief overview on exactly what I mean by
  • 00:10:29
    machine learning so I want to start off
  • 00:10:32
    by talking about a generic
  • 00:10:35
    Concept in modeling a task um so let's
  • 00:10:39
    say we consider a problem of deciding
  • 00:10:41
    whether a person has cognitive
  • 00:10:42
    impairment you know when given a large
  • 00:10:44
    number of numeric input variables that
  • 00:10:48
    kind of represent the characteristics of
  • 00:10:49
    that person uh one standard approach uh
  • 00:10:53
    is to use let's say logistic regression
  • 00:10:55
    that estimates how to weight each of
  • 00:10:57
    those input uh variables so that the
  • 00:11:00
    weighted sum is a good indicator of
  • 00:11:02
    cognitive
  • 00:11:03
    impairment but as you all know more than
  • 00:11:06
    I do uh dementia is very complex to
  • 00:11:09
    diagnose there are many kinds of things
  • 00:11:10
    going on and often involves complex
  • 00:11:12
    interactions so if you want to really
  • 00:11:14
    model this correctly then we can add
  • 00:11:17
    extra inputs uh known as these
  • 00:11:19
    interaction terms each represents
  • 00:11:22
    basically a product of two or more input
  • 00:11:25
    variables but if multi-way interactions
  • 00:11:28
    are kind of need to be modeled the
  • 00:11:30
    number of interaction terms basically
  • 00:11:32
    increase exponentially so the neural
  • 00:11:35
    network alternative is to add a layer of
  • 00:11:37
    hidden factors or hidden
  • 00:11:40
    layers uh so the basically the first
  • 00:11:42
    step here is to sort of determine which
  • 00:11:44
    hidden factors are active and then the
  • 00:11:46
    active ones are used to determine when
  • 00:11:48
    when the disease is present or absent um
  • 00:11:51
    so in the context of images as inputs uh
  • 00:11:55
    there are certain you know deep learning
  • 00:11:57
    approaches such as convolution neural
  • 00:11:59
    networks that basically exploit the the
  • 00:12:01
    structure of the image uh which is
  • 00:12:03
    organized in this simple XYZ format or a
  • 00:12:06
    grid format uh so in addition the
  • 00:12:09
    hierarchies or the intermediate layers
  • 00:12:11
    that I talked about before of the neural
  • 00:12:13
    network they are created by performing
  • 00:12:15
    these certain operations called
  • 00:12:17
    convolutions so the convolution operator
  • 00:12:20
    is basically a generic filter that can
  • 00:12:22
    be applied uh on these images uh and
  • 00:12:26
    it's it's it's a it's a deep learning
  • 00:12:28
    architecture that can be used used or
  • 00:12:29
    developed when images are often the
  • 00:12:31
    inputs uh so for example uh if I want to
  • 00:12:34
    learn from a brain
  • 00:12:36
    MRI uh a scan of hundreds of individuals
  • 00:12:39
    to predict you know who have signs of
  • 00:12:41
    brain atrophy for instance corresponding
  • 00:12:43
    to let's say Alzheimer's then my goto
  • 00:12:46
    deep learning neural network would be
  • 00:12:47
    this convolution neural network and just
  • 00:12:50
    to
  • 00:12:51
    clarify uh the convolution neural
  • 00:12:53
    network is just one among many many deep
  • 00:12:55
    learning approaches that allow us to
  • 00:12:57
    process this intrinsic uh structure of
  • 00:13:00
    uh complex data such as
  • 00:13:02
    image uh in the recent years there has
  • 00:13:05
    been tremendous progress on this field
  • 00:13:07
    and there are many many modern machine
  • 00:13:09
    learning approaches that I think are in
  • 00:13:11
    play uh so again in order to again build
  • 00:13:14
    some image processing pipeline clearly
  • 00:13:17
    like you can see here one of the key
  • 00:13:19
    steps is to think about volumetric
  • 00:13:21
    registration uh I'm talking about
  • 00:13:24
    100,000 participants which means I have
  • 00:13:28
    very very large number of volumetric
  • 00:13:30
    scans so I can't simply rely on
  • 00:13:32
    something like FreeSurfer or some
  • 00:13:34
    other tool that's out there because we
  • 00:13:36
    are trying to make sure that the data is
  • 00:13:37
    actually amenable for some of these
  • 00:13:39
    sophisticated methods so we have
  • 00:13:41
    internally built these pipelines to
  • 00:13:43
    really think about registration so here
  • 00:13:45
    in this case you're seeing the source
  • 00:13:47
    image and Target image so we are trying
  • 00:13:49
    to really align the images in all these
  • 00:13:51
    three planes sagittal coronal and axial
  • 00:13:55
    planes for each case and then we have to
  • 00:13:57
    do that across all these 100,000 cases
  • 00:13:59
    so I would say it's not 100% automated
  • 00:14:03
    but I think we're trying to get there to
  • 00:14:05
    make sure that we are doing this in a
  • 00:14:07
    most automated fashion possible and then
  • 00:14:10
    once the registration part is done we
  • 00:14:12
    also have to think about removing
  • 00:14:15
    certain things that we probably think
  • 00:14:17
    are probably going to
  • 00:14:18
    interfere with the model development or
  • 00:14:20
    maybe not relevant in the context of
  • 00:14:22
    looking at what's actually inside the
  • 00:14:23
    brain so we have this um again a neural
  • 00:14:27
    network approach which automatically
  • 00:14:29
    takes each slice in all the three planes
  • 00:14:32
    and then tries to remove the the skull
  • 00:14:34
    so that the output of that is actually
  • 00:14:37
    an image which is something that can be
  • 00:14:38
    useful for the next step um so one of
  • 00:14:44
    our main motivations was to see if you
  • 00:14:46
    could basically build and develop and
  • 00:14:49
    actually validate an
  • 00:14:51
    interpretable uh deep learning framework
  • 00:14:54
    that could ready that could use readily
  • 00:14:56
    available data such as you know
  • 00:14:57
    demographics and bedside cognitive
  • 00:14:59
    testing and imaging such as
  • 00:15:01
    MRI uh to predict if a person is at risk
  • 00:15:04
    of
  • 00:15:05
    Alzheimer's so it all started when a PhD
  • 00:15:08
    student uh named Shangran who is now at
  • 00:15:10
    Microsoft uh is the first author of this
  • 00:15:13
    paper that was published in brain came
  • 00:15:15
    up with an idea of creating a a very
  • 00:15:17
    computationally efficient uh deep neural
  • 00:15:20
    network to process the entire raw MRI
  • 00:15:23
    scans of the brain on hundreds and on
  • 00:15:25
    thousands of those cases uh so this is a
  • 00:15:28
    sort of a variant of the convolutional
  • 00:15:30
    neural network that I talked before it's
  • 00:15:32
    it's basically uh a framework where we
  • 00:15:35
    take these volumetric patches
  • 00:15:37
    automatically and then it
  • 00:15:39
    outputs basically these volumetric heat
  • 00:15:41
    maps that can nicely highlight sub
  • 00:15:44
    regions within the brain that point to a
  • 00:15:47
    high degree of association with disease risk
  • 00:15:50
    which was then used to make the final
  • 00:15:52
    prediction uh with this kind of
  • 00:15:54
    framework um one can visualize these
  • 00:15:57
    high-risk sub regions and that was and
  • 00:16:00
    this was actually an innovation that was
  • 00:16:01
    appreciated by the reviewers uh from the
  • 00:16:05
    from the
  • 00:16:06
    computational uh standpoint uh this
  • 00:16:09
    framework can very efficiently work
  • 00:16:12
    because it can process these volumetric
  • 00:16:14
    scans quickly uh you know as the model
  • 00:16:17
    was trained to sort of infer these local
  • 00:16:19
    patterns of the cerebral structure that
  • 00:16:22
    suggested an overall disease
  • 00:16:25
    State um and after we build the model
  • 00:16:28
    models one of the things we did was we
  • 00:16:31
    basically trained the model on one
  • 00:16:32
    cohort such as the ADNI cohort and then
  • 00:16:36
    we sort of use the other cohort such as
  • 00:16:38
    the Framingham heart study and the
  • 00:16:40
    national Alzheimer's coordinating center data
  • 00:16:42
    coming in from 37 ADCs and also the
  • 00:16:45
    Australian cohort to sort of really
  • 00:16:46
    validate how the model performs on those
  • 00:16:48
    external cohorts and I think this was
  • 00:16:50
    kind of really an important task that we
  • 00:16:52
    keep in mind to do computational
  • 00:16:55
    validation uh and and that's how at
  • 00:16:58
    least I we're trying to sort of really
  • 00:17:00
    think about creating these robust
  • 00:17:02
    pipelines that are not just you know one
  • 00:17:04
    fancy method and one data set but
  • 00:17:06
    hopefully generalizable enough uh across
  • 00:17:09
    many different
  • 00:17:11
    cohorts uh so as an extension what we
  • 00:17:14
    did uh was to now in addition to
  • 00:17:17
    thinking about MRI scans alone or just
  • 00:17:19
    demographics we then collected a lot of
  • 00:17:22
    data coming in from um other
  • 00:17:25
    neurology workup data for example you
  • 00:17:27
    know neuropsychological testing medical history
  • 00:17:30
    functional assessments so all this data
  • 00:17:33
    was then combined with the MRIs to ask a
  • 00:17:36
    slightly more sophisticated question and
  • 00:17:38
    in this case what we asked was to really
  • 00:17:40
    think about okay given a person's
  • 00:17:43
    information let's say demographics
  • 00:17:45
    patient history functional assessments
  • 00:17:47
    neuropsychological assessments and MRI scans can you
  • 00:17:50
    the first pass at least assess if the
  • 00:17:53
    person has normal cognition or healthy
  • 00:17:54
    cognition or some sort of mild cognitive
  • 00:17:57
    impairment or dementia
  • 00:17:59
    and if the model tries to predict if the
  • 00:18:01
    person has likelihood of dementia then
  • 00:18:03
    is this dementia due to Alzheimer's or
  • 00:18:05
    some other
  • 00:18:07
    etiology so in this case again we did a
  • 00:18:10
    lot of work on getting data from
  • 00:18:12
    multiple different cohorts as you can
  • 00:18:14
    see here there are some numbers where we
  • 00:18:16
    were able to get at least a decent
  • 00:18:18
    number of cases for even non-alzheimer's
  • 00:18:21
    uh cases non dementia cases and the this
  • 00:18:25
    multimodal neural network sort of was
  • 00:18:27
    able to process both the Imaging data as
  • 00:18:30
    well as the non-imaging data the
  • 00:18:32
    neurology workup data and then combine
  • 00:18:34
    that information to make this kind of a
  • 00:18:36
    two-tier
  • 00:18:37
    assessment um one of the advantages of
  • 00:18:41
    some of the things that because we've
  • 00:18:42
    been building internally is to sort of
  • 00:18:44
    really think about okay now the model is
  • 00:18:46
    built so what is the model actually
  • 00:18:48
    looking at and how is the model trying
  • 00:18:50
    to assess a person's let's say status on
  • 00:18:52
    whether they have healthy cognition MCI
  • 00:18:54
    or
  • 00:18:56
    dementia and again after that is done
  • 00:18:58
    then is that dementia due to
  • 00:18:59
    Alzheimer's or non-alzheimer's and you
  • 00:19:02
    are seeing here is this rank-ordered
  • 00:19:04
    list of features or input information
  • 00:19:07
    of personal level information that
  • 00:19:09
    turned out to be important in terms of
  • 00:19:11
    making that specific assessment so this
  • 00:19:14
    is another way to sort of really bring
  • 00:19:16
    that kind of
  • 00:19:17
    explainability uh to the model and see
  • 00:19:20
    how the model is actually trying to
  • 00:19:22
    assess these different um
  • 00:19:25
    questions um and then uh in about 100
  • 00:19:29
    cases or so uh we had basically
  • 00:19:32
    neuropathology data so what we did was
  • 00:19:35
    we took the model predicted
  • 00:19:37
    probabilities or model predicted
  • 00:19:38
    assessments and then we saw how they
  • 00:19:41
    sort of aligned with neuropathology
  • 00:19:43
    grades on some of these um cases and the
  • 00:19:46
    data was obtained from uh the NACC
  • 00:19:49
    which is the national Alzheimer's
  • 00:19:50
    coordinating Center the ADNI about 25
  • 00:19:53
    cases and some from even the Framingham
  • 00:19:55
    heart study and what you're seeing here
  • 00:19:58
    is just basically you know using the
  • 00:20:00
    oneway um Anova test uh we rejected
  • 00:20:03
    basically the null hypothesis of there
  • 00:20:05
    being no significant uh differences in
  • 00:20:07
    the model predicted scores uh between
  • 00:20:10
    the semi-quantitative neuropathology
  • 00:20:12
    scores as well uh so here we have the
  • 00:20:15
    ABC scores listed here and the model
  • 00:20:17
    seems to at least do a decent job in
  • 00:20:19
    terms of uh predicting the severity of
  • 00:20:22
    the disease if you
  • 00:20:23
    will um and then uh again just to add
  • 00:20:28
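As a rough sketch of the ANOVA check described above, the comparison of model-predicted scores across neuropathology grades might look like the following. The data here are synthetic and purely illustrative; the grade groupings and group sizes are assumptions, not the study's.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Synthetic model-predicted AD probabilities, grouped by a semi-quantitative
# neuropathology grade (0 = none ... 3 = severe). Higher grades are drawn
# from distributions with higher means, mimicking the trend described above.
scores_by_grade = [
    rng.beta(2, 8, size=30),  # grade 0
    rng.beta(4, 6, size=30),  # grade 1
    rng.beta(6, 4, size=30),  # grade 2
    rng.beta(8, 2, size=30),  # grade 3
]

# One-way ANOVA: the null hypothesis is that all grade groups share the
# same mean model-predicted score.
f_stat, p_value = f_oneway(*scores_by_grade)
print(f"F = {f_stat:.2f}, p = {p_value:.2e}")
```

A small p-value rejects the null of equal means across grades, which is the statement made about the ABC-score comparison above.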
Then, again to add that interpretability to the model, we took those heat maps, the interpretability maps coming from the model, and tried to spatially align the prediction heat maps with the MRIs themselves. We were asking the simple question of whether the model actually highlights the regions of interest that correspond with the disease. This is a case from the Framingham Heart Study where the neuropathology report was available, so we took that report and aligned it. This subject had clinically confirmed Alzheimer's disease, with affected regions including, I think, bilateral asymmetric temporal lobes (I always make a mistake identifying that), the right hippocampus, the cingulate cortex, the corpus callosum, and parts of the parietal and frontal lobes. The first column shows just the MRI, the second column is the model-predicted heat map on that MRI, the third column is the alignment of the model predictions on the MRI, and the fourth and fifth columns show the neuropathology grades assigned in each of those specific regions.
After doing these kinds of assessments, the second-tier question was to think about NC, MCI, and dementia, and then, if dementia, whether it is due to Alzheimer's or non-Alzheimer's causes. Then I met Andrew and a few other people, who gave me a good lecture that that's not how things are done in a neurology clinic. So here is a brief summary of what I learned from Andrew (correct me if I'm wrong, Andrew) about how neurologists evaluate patients today. Maybe a patient walks into the clinic with a family member, with some kind of chief complaint involving memory-related issues. The neurologist then becomes a data scientist, gathering information from multiple different sources, such as demographics, personal history, medications, and so on; perhaps administering some tests, including neurological exams and bedside cognitive exams; possibly referring for a neuropsychological assessment; and sometimes even ordering an MRI. So technically, when you have all this multimodal data, the goal is to assess the cognitive status, where the first pass is again healthy cognition, MCI, or dementia, but then to go much deeper to understand the underlying cause of the dementia. Is it Alzheimer's disease? Is it depression? Is it Lewy body disease? What is going on? And often, at least in the cases we have seen, there are multiple factors simultaneously involved, which means mixed dementias. So that is probably a slightly better picture than what I presented before.
With this as the goal, we went back into the datasets we had, collected a lot more data, and asked the more sophisticated question: can we perform a more effective differential diagnosis of dementia? Here is a list of all the things we collected across these subjects; this is a dataset of about 50,000 cases. Based on the advice we received from a group of neurologists, we started to think about different categories of etiologies. On the left side is a table; I'll go slowly through the list. The first three are obviously normal cognition, MCI, and dementia, but underneath them are ten distinct groups of etiologies. Alzheimer's disease is obviously one of them. We grouped the Lewy body dementias, which is dementia with Lewy bodies plus Parkinson's disease dementia, into one category. There are vascular contributions; there is prion disease, which includes CJD; there is FTD and its variants; and there is NPH, which is seemingly a distinct category. There are systemic factors, such as infectious diseases, substance abuse, alcohol abuse, medications, and so on; a psychiatric component, which includes schizophrenia, depression, bipolar disorder, and others; and a traumatic brain injury component, which could also possibly cause dementia. The tenth category is other dementia conditions, which I think was not a perfect definition, but it includes things such as neoplasms, Huntington's disease, and so on.
With this very broad goal, we trained a more sophisticated neural network that combined all this multimodal data and started to address these different things in a systematic way. That means we are still asking the two-tier question, whether the person has healthy cognition, MCI, or dementia, but after that the model tries to understand the root cause of the dementia in this specific individual. In fact, because we have these ten etiologies listed, we can potentially also identify mixed dementias in this context. Here are some performance curves; the model was able to get almost 90% accuracy in assessing these conditions.
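The two cognitive-status tiers plus the ten etiologies amount to thirteen outputs, and since each etiology can be flagged independently, the setup behaves like multi-label classification. Below is a minimal sketch of such an output head; the short label codes, the toy logits, and the 0.5 threshold are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

# Hypothetical short codes for the 13 outputs: NC/MCI/dementia plus the
# ten etiology groups described in the talk.
LABELS = ["NC", "MCI", "DE", "AD", "LBD", "VD", "PRD",
          "FTD", "NPH", "SEF", "PSY", "TBI", "ODE"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_labels(logits, threshold=0.5):
    """Each label gets its own independent probability, so several labels
    (e.g. dementia + AD + vascular) can be flagged at once."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    flagged = [lab for lab, p in zip(LABELS, probs) if p >= threshold]
    return dict(zip(LABELS, probs.round(3))), flagged

# Toy logits for a case with dementia driven by both AD and vascular disease.
logits = [-3, -2, 2.5, 1.8, -1, 1.2, -4, -2, -3, -1.5, -2.5, -3, -2]
probs, flagged = predict_labels(logits)
print(flagged)  # ['DE', 'AD', 'VD'], a mixed-dementia call
```

The key design point is that the labels are not mutually exclusive: unlike a softmax over one diagnosis, independent sigmoids let co-occurring etiologies surface together.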
After the computational aspect, we started to ask more questions related to validating the model. In the top plot, after the model predicted the probability that a person is likely to have Alzheimer's disease, we divided the participants into two groups, partly to understand whether the model even has the ability to predict Alzheimer's in the context of MCI; so MCI due to Alzheimer's is one category, and obviously, if the person has dementia, the underlying cause could include AD as a contributing factor. Interestingly, our model was able to assess the differences between these groups, including people with prodromal disease due to Alzheimer's, which was a really interesting finding. At the bottom, you're seeing that the model probability increased as the CDR ratings increased across all these different cohorts: on the bottom left is the NACC cohort, in the center is ADNI, and the third is the Framingham cohort, where the probability of the model predicting that the person has dementia increased as the CDR rating increased. Please note that CDR was not used as an input to the model, but only to think about validation here.
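One simple way to quantify the trend described here, model-predicted dementia probability rising with CDR even though CDR was never an input, is a rank correlation. The numbers below are made up for illustration, and the choice of Spearman's rho is an assumption; the talk does not specify which statistic was used.

```python
from scipy.stats import spearmanr

# Illustrative (synthetic) data: clinician-assigned CDR ratings and the
# model's predicted dementia probability for the same ten subjects.
cdr = [0, 0, 0.5, 0.5, 0.5, 1, 1, 2, 2, 3]
p_dementia = [0.05, 0.10, 0.22, 0.30, 0.35, 0.55, 0.60, 0.80, 0.85, 0.95]

# Spearman's rho tests for a monotonic relationship without assuming
# linearity, which suits an ordinal scale like CDR.
rho, p_value = spearmanr(cdr, p_dementia)
print(f"rho = {rho:.2f}, p = {p_value:.4f}")
```

A rho near 1 with a small p-value is the kind of held-out monotonic agreement the plots are showing.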
Here is a slide that summarizes the model's ability to assess both single and co-occurring, or mixed, dementias. The bottom part shows the number of cases, while the colored table shows the performance of the model in assessing these dual pathologies, like AD and LBD, AD and FTD, and so on. I think that was one of the key innovations here: the neural networks, or the technology, have evolved so much that we can now ask these more interesting, or probably more relevant, questions in the context of ADRD broadly.
After this, what we also did was try to understand how the model predictions aligned with some kind of biomarker evidence. In the top row, you're seeing information coming from PET: on the top left is the amyloid PET data, in the center is the tau PET, and on the top right is the FDG PET. Here we again took the model predictions; because there are those 13 categories, the model actually makes the prediction on each of those categories independently, so I can take the model's prediction of whether the person has, say, Alzheimer's disease, take that probability, and see if the model can effectively differentiate between those who are amyloid positive and those who are not, and similarly for tau and FDG. The top row shows the probability of Alzheimer's disease on the y-axis, with the two cohorts split on the x-axis. Statistically, the model was able to identify the differences between those who were PET positive and those who were not. At the bottom, we expanded beyond Alzheimer's and looked at the probability of frontotemporal degeneration with MRIs and FDG PET, and in the context of Lewy body dementia we looked at DaT scans; we were also able to show that the model predictions aligned with the presence of disease in the context of DaT scans for LBD.
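The PET-positive versus PET-negative comparison described above is a two-group test on model outputs. A sketch of one such check follows; the data are synthetic, and the choice of a Mann-Whitney U test (rather than whatever statistic the study actually used) is an assumption.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Synthetic model-predicted AD probabilities for amyloid-PET-positive and
# amyloid-PET-negative participants (80 each).
p_ad_pet_pos = rng.beta(7, 3, size=80)
p_ad_pet_neg = rng.beta(3, 7, size=80)

# One-sided test: are the PET-positive predictions stochastically larger?
u_stat, p_value = mannwhitneyu(p_ad_pet_pos, p_ad_pet_neg,
                               alternative="greater")
print(f"U = {u_stat:.0f}, p = {p_value:.2e}")
```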
One more thing we did, in about 150 cases or so, was to look again at the neuropathology data and ask the same question: how well do the model's outputs align with the neuropathology evidence? The top row here shows the model predictions of Alzheimer's disease against the ABC scores. The second row shows the model probability of Alzheimer's against cerebral amyloid angiopathy and arteriosclerosis. The bottom row shows the model predictions of vascular dementia in the first two panels and of FTD on the bottom right. I think this gives us evidence that whatever the model is doing by processing all this multimodal information, it is somehow able to align with the kind of evidence of disease that exists in neuropathology.
And finally, one last validation step was clinical validation, which was really the icing on the cake. We invited about 12 neurologists and seven radiologists, all of them practicing, took about 100 randomly selected cases, and gave them all those cases to review. They were asked the same question: identify whether the person has healthy cognition, MCI, or dementia, and if they think the person has dementia, identify all the etiologies that might be causing it. Each of them did the same exercise independently on those 100 cases. For the radiologists, we gave them the cases that had already been diagnosed with dementia, so they were not doing the NC/MCI/dementia assessment; they went through those cases and identified which of the ten etiologies were actually contributing to the condition. We averaged all that information, and what I'm showing you here is a summary result, which was really encouraging for us: on those 100 randomly selected cases, the neurologists' assessments augmented by the AI model exceeded the neurologist-only evaluations by about 26.25%. I have a similar result for the radiologists, which I didn't get a chance to show here, but the assessment was also done independently on the group of radiologists, and we also observed an increase in the assessments.
This was published recently in Nature Medicine. Again, this is a summary of the work we have done, starting from a very simple question whose goal was mainly to create an interpretable machine-learning method that could do some sort of binary classification. That may not be clinically very relevant, but it at least got us going in terms of building those very sophisticated, interpretable neural networks. Then we started to engage more clinicians and understood more about the clinical question. The second question was about differentiating between NC, MCI, and dementia, and then, if dementia, thinking about Alzheimer's as the cause versus non-Alzheimer's dementia. As we expanded further, we were able to go deeper and look at multiple different etiologies, including mixed dementias; that was the latest work that was published.
    published um we are right now I think we
  • 00:33:24
    have this tool in the form of like a
  • 00:33:26
    software so what we doing right now is
  • 00:33:27
    we are doing some pilot studies at uh a
  • 00:33:30
    few medical centers one is a a medical
  • 00:33:33
    center in Arizona Brain and Spine Center
  • 00:33:35
    they see about 10,000 patients a year
  • 00:33:38
    they have three places three clinics uh
  • 00:33:41
    and they are sort of really integrating
  • 00:33:42
    this tool in their their clinic today
  • 00:33:45
    because they want to they also do a lot
  • 00:33:47
    of clinical trials uh but they also want
  • 00:33:49
    to sort of really test it out and see
  • 00:33:51
    how this is going to potentially help
  • 00:33:53
    them because most of the times at least
  • 00:33:55
    in in their case uh the neurologist I
  • 00:33:59
    think the number of neurologists are not
  • 00:34:00
    that many so they are relying on nurse
  • 00:34:03
    practitioners and others to sort of make
  • 00:34:04
    those initial assessments so they feel
  • 00:34:06
    like this can potentially help their
  • 00:34:09
    workflow uh we don't have any FDA
  • 00:34:11
    approval yet so this is basically a
  • 00:34:12
    research study just to understand
  • 00:34:14
    exactly what's going on in that in that
  • 00:34:16
    setting uh we also recently started a uh
  • 00:34:19
    collaboration at Carl Health which is uh
  • 00:34:22
    connected with the University of
  • 00:34:23
    Illinois Arana champagne where it's it's
  • 00:34:27
    a new medical Center they're also trying
  • 00:34:28
    to understand how these things can help
  • 00:34:31
    in the context of medical Wellness
  • 00:34:32
    programs so he even they see that
  • 00:34:34
    there's a potential need there so we're
  • 00:34:36
    trying to really evaluate primarily in
  • 00:34:38
    the context of research setting but at
  • 00:34:40
    least now we feel good that the
  • 00:34:42
    clinicians are willing to embrace uh
  • 00:34:44
    such tools um in their
  • 00:34:47
    practice um so I guess this is kind of
  • 00:34:51
So I guess this is really the summary of what I wanted to share. My preference is to work with data in its native format, because I think there is a lot of value in processing and harnessing that information. Obviously it's very tedious, but there's a lot of value in taking the raw data, whether it's an MRI scan, an EEG recording, a CT scan, or even a PET scan. There are many tools, such as FreeSurfer, that can give you derived measures, and clearly you can do something with those, but at least personally we feel there is exciting work to be done in leveraging this raw, multimodal data.
Because I'm a computer scientist, I really focus on validation, because I think it's very important to think about translation, and the only way to do that is to make sure the models we are building have meaningful value. The only way to show that value is not just by reporting some accuracy or performance number, but by taking those comprehensive steps: bringing clinicians on board, showing the alignment of the model with biomarker evidence or neuropathology evidence, and doing all of those things comprehensively. Only then, I think, can these tools hopefully be translated. If anyone's interested, I'm happy to talk more about the tool. With that, I just want to thank my sponsors, and I'm happy to take any questions.
All right, thank you, VJ. Very interesting work. I'll ask the audience if there are any questions. We've got a hand up from Mike Fox.

Yeah, I'm blown away; absolutely fantastic talk. A question for you: in these analyses where you threw all the data together, the MRI data, the clinical assessments, did it back out which source of data was most useful? In other words, if you didn't have all that data, would it rely mostly on the MRI, mostly on age, or on the clinical assessments?
Yeah, it's a great question. There is a way for us to go back into each person's case and rank-order the list of things that were useful in making that prediction. One thing I didn't get a chance to talk about is that our model has the capability to make a prediction even if one data modality is missing. For example, we trained it on all the things I just described, MRIs, EEGs, and so on, but if you introduce a new case where the EEG or the MRI is not available, it will take whatever is available and still make a prediction, with a confidence value. I think that's probably the primary reason the Carle health system is interested: they're mainly looking at patients in a primary-care setting, or more upstream neurology settings, and they feel that if the model can do an initial pass on the prediction with whatever information is available, they can use that to decide what kind of test to recommend, for example whether to order an MRI. So I think there's value in creating these resilient systems that take whatever is available and make a prediction. And to your point, yes, we are able to rank-order the important inputs at an individual level.
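The missing-modality behavior described here can be sketched as masked fusion: embed whichever inputs exist, average only those, and report how much of the input space was observed. Everything below (the fusion rule, the confidence heuristic, the modality names and embedding size) is an illustrative assumption, not the actual model.

```python
import numpy as np

def fuse(embeddings):
    """Masked-mean fusion over whichever modalities are present (not None)."""
    present = [e for e in embeddings.values() if e is not None]
    if not present:
        raise ValueError("at least one modality is required")
    fused = np.mean(present, axis=0)
    # Toy confidence heuristic: fraction of modalities actually observed.
    confidence = len(present) / len(embeddings)
    return fused, confidence

# A case where only demographics and cognitive tests were collected.
case = {
    "demographics":    np.array([0.2, 0.1, 0.4, 0.3]),
    "cognitive_tests": np.array([0.5, 0.7, 0.2, 0.1]),
    "mri": None,  # not acquired
    "eeg": None,  # not acquired
}
fused, conf = fuse(case)
print(fused, conf)
```

A downstream classifier would consume `fused`, while `conf` (or a learned uncertainty) is the kind of signal that could help a clinic decide whether ordering an MRI is worthwhile, as described above.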
And do you do that for everybody? Maybe you haven't done it yet, but I'm just curious: is it more dependent on the MRI scan or more dependent on the clinical data, just to put it in two big buckets?

Based on my knowledge and what I've seen, if I were doing simple NC/MCI assessments, MRI doesn't turn out to be the most important modality. But if I have to look at certain distinctions, say Alzheimer's versus traumatic brain injury, or Alzheimer's versus some other mixed dementia, then MRI seems to play a bigger role in making that kind of assessment. So it depends on what the question is.

It relies on the MRI for some questions and on the clinical data for others. Yes. Brilliant talk; I'll let other people ask.

Yeah, thank you, Mike.
David? We can't hear you. I don't know; you're not muted, but we still can't hear you. It's in the chat, David. You wrote it down: the largest increase in diagnostic accuracy from adding the model to clinical judgment is in Chi. I'm not sure what Chi is, but do you know what data or features underlie the improvement? In other words, what's so good about the AI that it improves upon the neurologists?
Based on what I've seen, in the context of the radiologists, for instance: when I gave them all the cases that had been clinically diagnosed with dementia and asked them to list the contributing factors, they seemed to do great at identifying the primary contributing factor, like finding an atrophy pattern on the brain MRI, but there was often not much consensus on the other contributing factors. So when I group all their scores together, there is a kappa value which is not that high. But clearly, I think, augmenting with this kind of standardized assessment helps, because the computer is only going to do it one way, that's how we trained the model, whereas you have these experts who are probably thinking slightly differently, so maybe there is not much consensus. The way I think about this whole thing is how AI can help standardize some of these aspects, which hopefully can then be brought into the practice; that's how I see the value. There is going to be variability, because an expert in Boston may think differently from an expert in India. For example, I'm trying to work with a hospital in India, and one of the most frequent conditions there is B-vitamin deficiency, which is not observed so much here in the US. So they're not so much biased, but they very much want to ask that question as the first question, because clearly a lot of people there have that vitamin deficiency, and it could hopefully be a reversible cause of dementia. So depending on the practice, the culture, and the location, there may be some differences in how they assess. Hopefully, in an ideal world, I feel AI could help standardize some of these assessments.
    assessments that's great the other
  • 00:42:01
    question which I think David have which
  • 00:42:02
    I have have too is sort of when
  • 00:42:05
    understanding the prediction of
  • 00:42:07
    different underlying ideologic diagnosis
  • 00:42:09
    to patients dementia sounds like for
  • 00:42:11
    some patients you had um pathologic
  • 00:42:14
    validation but broadly speaking what
  • 00:42:16
    would be the gold standard for decid for
  • 00:42:18
    for validating the model um so you mean
  • 00:42:22
    to say that we would want to validate on
  • 00:42:24
    every condition or every patient
  • 00:42:27
    I guess like to to determine whe how
  • 00:42:29
    good the model was like the model plus
  • 00:42:31
    neurologist versus neurologist alone was
  • 00:42:33
    for classifying patients into different
  • 00:42:36
    diagnostic categories so had what's the
  • 00:42:38
    gold standard for which a good question
  • 00:42:41
    for that for the diagnostic for those
  • 00:42:43
    cases yeah so for those cases that we
  • 00:42:45
    selected the we had a consensus
  • 00:42:47
    diagnosis coming from reports like Knack
  • 00:42:50
    or other things so clearly there is a
  • 00:42:52
    there is a team that is I think
  • 00:42:54
    evaluating the person as opposed to a
  • 00:42:55
    single clinician so that's I think what
  • 00:42:58
    we are relying so far as a consensus
  • 00:43:00
    diagnosis or a clinical goal standard if
  • 00:43:02
    you will yeah I was particularly
  • 00:43:04
    impressed by the fact that it could that
  • 00:43:06
    it could predict you know hard pathology
  • 00:43:09
    data even which is even better than uh
  • 00:43:14
    yeah consensus diagnosis from Knack or
  • 00:43:15
    whatever I I had one question I guess
  • 00:43:18
    I had one question, and I guess Shabani has one too, but maybe I can quickly ask. As a cognitive neurologist, a very common question I get from my patients that I can't answer is: what's going to happen to me in five years, or one year, or two years? That is, the rate of cognitive decline, somewhat irrespective of the etiologic diagnosis. The etiologic diagnosis matters if you're going to prescribe someone an anti-amyloid antibody or enroll them in a clinical trial, but most patients don't really care whether they have Lewy bodies or amyloid plaques in their brain; they want to know what they can expect in the next few years. There's probably a lot of longitudinal data in these cohorts, so can you take a cross-sectional, single-time-point, or even historical dataset and use the model to predict the rate of cognitive decline over, say, the next three years?
  • 00:43:58
    I think it's a great question, and definitely a very important one as well. I don't think we have enough data for that, as much as we have collected for the cross-sectional work, but it's definitely a great thing to do in the future; we haven't done it yet.
  • 00:44:18
    Anecdotally, and I'm sure people agree, it's a big question we get from our patients that we just can't answer.
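The cross-sectional decline-prediction idea raised above can be sketched in a few lines. Everything below is a stand-in: the feature set, the synthetic cohort, and the gradient-boosting choice are hypothetical, not the speakers' actual data or pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical single-visit baseline features:
# age, baseline MMSE, hippocampal volume z-score, APOE e4 carrier status
X = np.column_stack([
    rng.normal(72, 8, n),
    rng.normal(24, 4, n),
    rng.normal(0, 1, n),
    rng.integers(0, 2, n).astype(float),
])
# Synthetic target: MMSE points lost per year over the next three years
y = 0.05 * (X[:, 0] - 70) - 0.3 * X[:, 2] + 0.8 * X[:, 3] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
# Predicted annual decline for held-out "patients" seen only at baseline
pred_slopes = model.predict(X_te)
```

With real registry data, the target would be the per-patient slope fitted from follow-up visits, and the inputs would be the same routine workup features the diagnostic model already uses.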
  • 00:44:24
    Shabani, go ahead.
  • 00:44:27
    Thank you, Andrew; that was actually my question, beautifully asked. The question I had is how far in advance you can predict, and that's probably the key question. My other question, and I really liked your talk, is about that value-add: I'm curious what your plan is to prove the value-add. Would you envision implementing this with, for example, people randomized to the AI-generated diagnosis versus the neurologist, and then actually showing that there's a benefit, say a cost benefit? What are the benefits you would look for?
  • 00:45:06
    It's a great question that's been on my mind for some time now. I think I can give a pretty decent answer from the clinical trial angle, because of the pilot study that we completed in Arizona; like I said, they are more interested in clinical trials. What we observed was that our model was able to predict biomarker positivity about 33% better. I haven't published the work yet, but the plan is to do that. By biomarker I mean specifically PET positivity, because they don't do PET scans often; the sponsor usually pays for the PET scans for the participants who are enrolled. So we took the data that is collected in the routine neurology workup and predicted who might turn out to be PET positive, both amyloid and tau, and we were on average about 33% better. So I think there is a potential economic benefit there in terms of screening patients for these trials.
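The screening-enrichment idea described here (rank candidates by predicted PET positivity so fewer scans are spent on likely-negative participants) can be illustrated with a toy simulation. The features, the model, and the effect sizes below are invented for illustration and are not taken from the Arizona pilot.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
# Hypothetical routine-workup features (demographics, cognitive scores, MRI-derived inputs)
X = rng.normal(size=(n, 5))
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2]
pet_positive = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, pet_positive, random_state=1)
clf = LogisticRegression().fit(X_tr, y_tr)

base_rate = y_te.mean()                 # positivity rate if everyone were scanned
scores = clf.predict_proba(X_te)[:, 1]
top100 = np.argsort(scores)[-100:]      # scan only the 100 highest-risk candidates
enriched_rate = y_te[top100].mean()     # should exceed base_rate: fewer wasted scans
```

The gap between `enriched_rate` and `base_rate` is the kind of screening enrichment a trial sponsor would care about, since each avoided negative scan is money saved.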
  • 00:46:07
    In the context of clinical diagnosis, we're just starting the study at Carle Health. Randomization was one of the topics we discussed, but I think they are more aligned with Medicare wellness visits, where primary care physicians or geriatricians are really doing these annual checkups. First of all they don't have time, so they want this first-pass tool, but the economic benefit is absolutely the key to understanding how this eventually becomes a value proposition. I think there are a few ways to do it: what you suggested, really randomizing and really evaluating the cost and everything, is a very good one. We aren't there yet, but hopefully in the future we can do that. From the clinical trial standpoint, at least, there is a quantitative value that a sponsor might hopefully like, as opposed to just the clinic.
  • 00:47:12
    More questions? I have lots if nobody else has one. Edinson, I think, has a question.
  • 00:47:16
    Yes, hi, thank you for the talk. I have a question related to the use of diffusion MRI data. How would you use diffusion MRI data in the future to address information about brain connectivity changes due to Parkinson's or Alzheimer's disease?
  • 00:47:42
    I think it's a very important modality. Unfortunately, we have not used diffusion MRI in any of the work we have done, primarily because, again, I've been taught by the neurologists that it's not normally done in a clinical setting; not everywhere, maybe in some places they do. The whole motivation has been for us to really think about routinely collected clinical data, a routinely collected neurology workup, and in fact that's the reason why I did not use PET scans as input to the model. We were also debating whether to use CSF as an input, and we did for some questions. So we are trying to separate what's practically available from what's a research question, which I think is very interesting but separate. In this context, we have not used diffusion MRI.
  • 00:48:38
    Thanks.
  • 00:48:41
    Great. Other questions? Mike, do you have another one?
  • 00:48:44
    I do, but I want to make sure people have time. I loved the heat map you showed, I think from your Brain paper, of the interpretable AI, where it actually gave you a voxel-wise map of which voxels it was using to make its decisions. Can that be individualized? In other words, you could make a single-subject heat map of which voxels it's looking at when it's making its differential diagnosis. I'm wondering whether you've explored producing those single-subject maps and showing them to a neuroradiologist or a neurologist (we pull up these scans all the time and argue about where the atrophy is or isn't), and whether those single-subject heat maps are actually useful as an augmentation when you're interpreting the raw radiology data?
  • 00:49:35
    Yes, absolutely. I think the framework is capable of creating individualized maps. The example that I actually showed you was one single case from the Framingham Heart Study, which had postmortem data available.
  • 00:49:46
    Oh, so that heat map was one patient, not a population average?
  • 00:49:48
    One single person. We could obviously average them, but the case I was showing was a single person.
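One common way to produce the kind of single-subject, voxel-wise heat map discussed here is occlusion: zero out a patch of the input, re-score the model, and record how much the prediction drops. The sketch below is a 2-D toy with a linear stand-in for a trained classifier; the actual attribution method used in the paper may differ.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Slide a zeroed patch over the image; the score drop at each
    position marks the regions the model relies on for this subject."""
    base = score_fn(image)
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i:i + patch, j:j + patch] = base - score_fn(occluded)
    return heat

# Toy stand-in for a trained model: it only "looks at" one medial region
weights = np.zeros((16, 16))
weights[6:10, 6:10] = 1.0
score = lambda img: float((img * weights).sum())

rng = np.random.default_rng(2)
subject = rng.random((16, 16))          # one subject's (toy) scan
heat = occlusion_map(subject, score)    # individualized heat map
hot_i, hot_j = np.unravel_index(np.argmax(heat), heat.shape)
```

Applied per patient, this yields exactly the individualized map being discussed: the hot patches fall where the model's score actually depends on that patient's image.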
  • 00:49:57
    Understood. And you emphasized the importance of raw data going into the pipeline, and you brought up FreeSurfer multiple times. Everybody has their favorite way that they think the data should be processed. Have you actually done a head-to-head of the raw data versus something like FreeSurfer, some advanced analysis of that MR data?
  • 00:50:17
    We haven't, though we've thought about it. Just to be clear, FreeSurfer is fantastic and does a lot of interesting things, but first of all it takes a lot of time, so I'm not saying everybody should do it.
  • 00:50:31
    It was just more of: is it an assumption that it won't add value, or is it proven that it won't add value? Because I agree there are a lot of downsides.
  • 00:50:38
    I haven't done the study, so I can't comment on it. But I guess, coming from a non-neuroscience background, I was never a big fan of derived measures. Clearly there's so much more information in the MRI, and if you simplify it to a hundred scalar values you're effectively condensing the information. But you're right, maybe an interesting study to do would be to compare the FreeSurfer-derived measures and...
  • 00:51:16
    Sorry, not the FreeSurfer-derived measures, not those scalars that are lists of atrophy; I'm talking more about the voxel-wise or vertex-wise maps. You're taking the raw MRI data, but you still have to warp it into a common atlas space so that your AI algorithm can compare each subject to the others. How you warp the raw MRI data into atlas space is why FreeSurfer got so popular: they said, hey, our standard way of warping into a single space isn't good enough, we're going to align each sulcus and gyrus. So it's more about the normalization step, not the derived measures; I agree 100% on the derived-measure point. It's more that your platform could provide a really cool test of this debate about how you normalize MRI data into atlas space so you can compare across subjects.
  • 00:52:04
    So in that case I think FreeSurfer is a very good tool, and there are a few other tools as well: McGill University, I think, has created a lot of tools, MNI space is something that we use, there's the Hammersmith atlas, and there's also the Killiany atlas. There are many different atlases that you can leverage. For practical reasons, for building that pipeline, we did not use FreeSurfer, because we feel other pipelines are slightly more computationally efficient. My understanding is that FreeSurfer is used not just to align, or not just to register, but also to do the next step, which is to derive those measures. So for those practical reasons we just relied on building our own pipeline.
  • 00:52:52
    Thank you. Very cool.
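The normalization step being debated, warping each subject's scan onto a shared atlas grid so voxels line up across subjects, reduces at its core to resampling through an estimated transform. Below is a minimal nearest-neighbour sketch in plain NumPy, assuming the atlas-to-subject affine has already been estimated by a registration tool (FreeSurfer, MNI tooling, or an in-house pipeline); real pipelines add nonlinear warps and proper interpolation.

```python
import numpy as np

def resample_to_atlas(subject_vol, affine, atlas_shape):
    """Fill an atlas-shaped grid by pulling values from the subject volume.
    `affine` (4x4, homogeneous) maps atlas voxel coords to subject voxel coords."""
    zi, yi, xi = np.indices(atlas_shape)
    coords = np.stack([zi, yi, xi, np.ones(atlas_shape)], axis=-1)
    src = coords @ affine.T                     # atlas voxel -> subject voxel
    src = np.rint(src[..., :3]).astype(int)     # nearest-neighbour lookup
    for axis, limit in enumerate(subject_vol.shape):
        src[..., axis] = np.clip(src[..., axis], 0, limit - 1)
    return subject_vol[src[..., 0], src[..., 1], src[..., 2]]

# Toy subject volume with one bright voxel, and a pure-translation "registration"
subject = np.zeros((8, 8, 8))
subject[2, 2, 2] = 1.0
affine = np.eye(4)
affine[:3, 3] = [1, 1, 1]                       # atlas (z,y,x) -> subject (z+1,y+1,x+1)
atlas_vol = resample_to_atlas(subject, affine, (8, 8, 8))
```

After this step every subject lives on the same grid, which is what lets a voxel-wise model compare subjects directly.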
  • 00:52:57
    Great. More questions? I have one more if nobody else does, which is another selfish question about my own practice: treating patients with anti-amyloid immunotherapy, which is new. There are a lot of things we face, for example predicting ARIA, which is a side effect, but even more important in my opinion is predicting drug response. Another question we often get from patients who are on these new therapies is: how do we know if the drug is working? Because the expectation is that a patient, even on the new medication, is going to decline longitudinally, just perhaps at a slower rate than if they weren't treated.
  • 00:53:39
    So if you have hundreds of thousands of cases, would there be a way to put in, as one variable, amyloid immunotherapy duration of treatment or something like that, and try to quantify what its contribution is to a patient's status in a huge model? To derive, at the patient level, how much of their current status is affected by the fact that they're on this treatment? Does that make sense? Because we can't measure an improvement from the drug, it just doesn't work that way, but can we quantify what they might have been like if they hadn't been on the treatment?
  • 00:54:21
    I see, it's like a what-if condition, basically.
  • 00:54:23
    Yeah, like: to what degree is the presence of this drug likely to be mitigating the patient's cognitive status?
  • 00:54:32
    It's a very interesting question, but it seems like a bit of a risky one. I don't know if you can ask these kinds of questions today; in the age of AI, I think AI is facing a lot of heat, there's some scrutiny going on. So I appreciate the question, but I don't know if I'm educated enough to comment on that.
  • 00:54:57
    I guess, more broadly speaking, not just about amyloid but generally: what modifiable factors are affecting a patient's cognition? Whether it's being on a certain drug, or their smoking status, so that you could tell a patient, this is what you need to do to make your memory better.
  • 00:55:16
    Okay, so let's talk about modifiable factors. Let's say, again, I'm deriving a hundred features; some of them are BMI values or other things, or their diabetes status or smoking status. These are inputs to the model, so I can in theory create a random perturbation on these values, input that to the model, and run it a thousand times, and I actually get different predictions every time. Effectively, what I'm trying to do is keep everything else constant, change one of these modifiable risk factors, and see how the predictions come out; then perhaps think about recommendations or suggestions on potentially changing a modifiable risk factor. So I think in theory it's totally possible, because effectively the model is just a mapping: you put the data in and it gives you the output. I like the angle of modifiable risk factors, but I don't know if I can talk about immunotherapy.
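The perturbation procedure just described (hold every other input fixed, change one modifiable factor, re-run the model) is straightforward to sketch. The features, the synthetic labels, and the logistic model below are all hypothetical stand-ins for the real multimodal model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
# Hypothetical inputs: [age, BMI, diabetes (0/1), smoking (0/1)]
X = np.column_stack([
    rng.normal(70, 8, n),
    rng.normal(27, 4, n),
    rng.integers(0, 2, n).astype(float),
    rng.integers(0, 2, n).astype(float),
])
# Synthetic impairment label: risk rises with age and smoking
logit = 0.08 * (X[:, 0] - 70) + 1.0 * X[:, 3] - 0.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

def counterfactual_risk(model, x, feature_idx, new_value):
    """Keep everything else constant, set one modifiable factor to a new
    value, and re-run the model: the 'what if' described in the talk."""
    x_cf = x.copy()
    x_cf[feature_idx] = new_value
    return model.predict_proba(x_cf.reshape(1, -1))[0, 1]

patient = np.array([75.0, 28.0, 0.0, 1.0])      # a 75-year-old smoker
risk_now = counterfactual_risk(model, patient, 3, 1.0)
risk_quit = counterfactual_risk(model, patient, 3, 0.0)
# With this synthetic model, quitting smoking lowers the predicted risk
```

Repeating `counterfactual_risk` over many random perturbations of one feature, as described, would trace out how sensitive the prediction is to that factor for a given patient.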
  • 00:56:15
    Okay. Well, I mean, if we did this change, say you stop smoking, then we would expect this change in your trajectory, or whatever. I have a trainee who is interested in modifiable risk factors; he's thinking along those lines, actually.
  • 00:56:32
    Great, all right. Well, maybe you'll come back in another five to ten years and you won't be talking to neurologists, because we won't be needed anymore; you'll just be talking to some AI bots.
  • 00:56:42
    So I think there's one question in the chat. Okay, all right, sorry Shabani. Are you there to ask it, or did you have to go? Could the pipeline generate a hypothetical person based on a person's data and see if their trajectory differs from what would be expected based on their demographic and other data?
  • 00:57:02
    Yeah, that kind of aligns with your question, which is about not just thinking about prediction but also trajectories. So yes, I think these are possible.
  • 00:57:14
    Just to be honest, coming from a completely different field, the biggest learning experience for me was aligning with how clinicians speak; I think that took a while, because I don't want to randomly create some fancy models. One small anecdote: the paper that was published in Brain was initially reviewed, I think, by Nature Medicine, and after four rounds of revision one of the reviewers commented that it was not clinically relevant, and it got rejected. That was a learning experience. So I'm trying to understand what the actual problem is and hopefully build tools after that, as opposed to starting by building tools and going in that direction. I'm still learning.
  • 00:58:06
    It's been terrific. These tools are obviously going to be very helpful to us, and maybe even to primary care as well. Okay, thank you, VJ, and I'm sure people can reach out and pepper you with more questions if they so desire. Thank you so much for joining us.
  • 00:58:22
    Thank you for having me, I appreciate it. Bye, take care.
Tags
  • Alzheimer's disease
  • Dementia
  • AI
  • Machine learning
  • Neuroimaging
  • Clinical trials
  • Cognitive assessment
  • Predictive modeling
  • Validation
  • Boston University