<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:DengXian;
panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
{font-family:Aptos;}
@font-face
{font-family:"\@DengXian";
panose-1:2 1 6 0 3 1 1 1 1 1;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:12.0pt;
font-family:"Aptos",sans-serif;
mso-ligatures:standardcontextual;}
p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph
{mso-style-priority:34;
margin-top:0in;
margin-right:0in;
margin-bottom:0in;
margin-left:.5in;
font-size:12.0pt;
font-family:"Aptos",sans-serif;
mso-ligatures:standardcontextual;}
p.xxxmsonormal, li.xxxmsonormal, div.xxxmsonormal
{mso-style-name:x_xxmsonormal;
margin:0in;
font-size:12.0pt;
font-family:"Aptos",sans-serif;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;
mso-ligatures:none;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
/* List Definitions */
@list l0
{mso-list-id:1016613435;
mso-list-template-ids:-569483720;}
@list l1
{mso-list-id:2007173633;
mso-list-type:hybrid;
mso-list-template-ids:-2112034774 1121208646 67698713 67698715 67698703 67698713 67698715 67698703 67698713 67698715;}
@list l1:level1
{mso-level-text:"\(%1\)";
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-.25in;}
@list l1:level2
{mso-level-number-format:alpha-lower;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-.25in;}
@list l1:level3
{mso-level-number-format:roman-lower;
mso-level-tab-stop:none;
mso-level-number-position:right;
text-indent:-9.0pt;}
@list l1:level4
{mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-.25in;}
@list l1:level5
{mso-level-number-format:alpha-lower;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-.25in;}
@list l1:level6
{mso-level-number-format:roman-lower;
mso-level-tab-stop:none;
mso-level-number-position:right;
text-indent:-9.0pt;}
@list l1:level7
{mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-.25in;}
@list l1:level8
{mso-level-number-format:alpha-lower;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-.25in;}
@list l1:level9
{mso-level-number-format:roman-lower;
mso-level-tab-stop:none;
mso-level-number-position:right;
text-indent:-9.0pt;}
ol
{margin-bottom:0in;}
ul
{margin-bottom:0in;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="#467886" vlink="#96607D" style="word-wrap:break-word">
<div class="WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt">The next BIC Postdocs and Students seminar will take place at 11:30 AM on Monday, February 10.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt">Venue: de Grandpré Communications Centre at the Neuro.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt">We will have two speakers and two corresponding discussions on:<o:p></o:p></span></p>
<ol style="margin-top:0in" start="1" type="1">
<li class="MsoListParagraph" style="color:blue;margin-left:0in;mso-list:l1 level1 lfo3">
<span style="font-size:11.0pt">AI-based deformable registration of structural MRI, and <o:p>
</o:p></span></li><li class="MsoListParagraph" style="color:blue;margin-left:0in;mso-list:l1 level1 lfo3">
<span style="font-size:11.0pt">fMRI-based representation of speech conversations in the human auditory cortex<o:p></o:p></span></li></ol>
<p class="MsoNormal"><span style="font-size:11.0pt;color:blue"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt">Pizza and soft drinks will be provided </span><span style="font-size:14.0pt;color:blue">after the seminar, at 12:30 PM</span><span style="font-size:11.0pt">, courtesy of the BIC director, for attendees of the seminar.<o:p></o:p></span></p>
<p class="MsoListParagraph"><span style="font-size:11.0pt"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;color:blue">Presenter: Gurucharan Marthi Krishna Kumar<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;color:blue">Shmuel Lab<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;color:blue">Title: </span><i><span lang="EN-CA" style="font-size:11.0pt;color:blue">NestedMorph</span></i><span lang="EN-CA" style="font-size:11.0pt;color:blue">: Enhancing Deformable Medical Image Registration
with Nested Attention Mechanisms</span><span style="font-size:11.0pt;color:blue"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt">Summary: Deformable image registration is crucial for aligning medical images in a non-linear fashion across different modalities, allowing for precise spatial correspondence between varying anatomical structures.
This paper presents NestedMorph, a novel network utilizing a Nested Attention Fusion approach to improve intra-subject deformable registration between T1-weighted (T1w) MRI and diffusion MRI (dMRI) data. NestedMorph integrates high-resolution spatial details
from an encoder with semantic information from a decoder using a multi-scale framework, enhancing both local and global feature extraction. Our model notably outperforms existing methods, including CNN-based approaches like VoxelMorph, MIDIR, and CycleMorph,
as well as Transformer-based models such as TransMorph and ViT-V-Net, and traditional techniques like NiftyReg and SyN. Evaluations on the HCP dataset demonstrate that NestedMorph achieves superior performance across key metrics, including SSIM, HD95, and
SDlogJ, with the highest SSIM (0.89) and the lowest HD95 (2.5) and SDlogJ (0.22). These results highlight NestedMorph’s ability to capture both local and global image features effectively, leading to superior registration performance. The promising outcomes of this study underscore NestedMorph’s
potential to significantly advance deformable medical image registration, providing a robust framework for future research and clinical applications.<o:p></o:p></span></p>
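For readers unfamiliar with the smoothness metric quoted above, SDlogJ is the standard deviation of the log Jacobian determinant of the estimated deformation. A minimal NumPy sketch follows; it is illustrative only, not NestedMorph's implementation, and the field layout and the folding guard are assumptions:

```python
import numpy as np

def sdlogj(disp):
    """Standard deviation of the log Jacobian determinant (SDlogJ) of the
    mapping phi(x) = x + u(x), a common smoothness metric for deformable
    registration (lower means a smoother, more regular deformation).
    disp: displacement field u, shape (3, D, H, W)."""
    # grads[i, j] = d u_i / d x_j, via finite differences along each axis
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)])
    # Jacobian of phi = I + grad(u), identity broadcast over the volume
    jac = grads + np.eye(3).reshape(3, 3, 1, 1, 1)
    det = (jac[0, 0] * (jac[1, 1] * jac[2, 2] - jac[1, 2] * jac[2, 1])
         - jac[0, 1] * (jac[1, 0] * jac[2, 2] - jac[1, 2] * jac[2, 0])
         + jac[0, 2] * (jac[1, 0] * jac[2, 1] - jac[1, 1] * jac[2, 0]))
    det = np.clip(det, 1e-9, None)  # guard: det <= 0 would mean folding
    return float(np.std(np.log(det)))

# An identity (zero-displacement) field has det J = 1 everywhere, so SDlogJ = 0.
print(sdlogj(np.zeros((3, 8, 8, 8))))  # -> 0.0
```

A spatially uniform transform (pure translation or scaling) also yields SDlogJ of 0; the metric only penalizes local variation in volume change.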
<p class="MsoNormal"><span style="font-size:11.0pt"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;color:blue">Presenter: Etienne Abassi<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;color:blue">Zatorre Lab<o:p></o:p></span></p>
<p class="xxxmsonormal"><span style="font-size:11.0pt;color:blue">Title: <span style="mso-ligatures:standardcontextual">
The representation of speech conversations in the human auditory cortex<o:p></o:p></span></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal" style="text-align:justify;text-indent:.5in"><span style="color:black">Auditory perception is shaped by human social nature, and we rely heavily on hearing to navigate our social world through conversations, not just by participating
in them but also by listening to others’ conversations. While the neural basis of speech understanding has been studied at the word or sentence level, the influence of social context on the processing of entire conversations, and its interaction with semantics, remains underexplored.
To address this, we conducted a 7T fMRI study using AI-generated auditory stimuli to examine how the brain processes conversations from a third-person perspective. Healthy male and female young adults listened to our stimuli while we manipulated social context
(two-speaker dialogues vs. one-speaker monologues) and semantic context (intact vs. sentence-scrambled conversations). Whole-brain analyses revealed significant effects of semantic context in the left superior temporal sulcus (STS), with stronger activity
for scrambled over intact conversations. While social context alone had no direct effect, it interacted with semantic context: the left STS showed greater differences in activity between scrambled and intact dialogues compared to monologues. ROI analysis in
the functionally localized speech-selective auditory cortex supported these findings. A multivariate classifier trained on neural data demonstrated better discrimination of individual sentences when embedded in dialogues rather than monologues, suggesting
that social context sharpens the perceptual representation of sentences. Overall, the study highlights the influence of both semantic and social contexts on neural speech processing. It suggests specialized mechanisms in the left STS for processing prototypical
conversations, such as intact dialogues, emphasizing the importance of considering social and semantic factors in understanding speech processing at the large-scale level of a whole conversation. These findings raise questions about the predictive or other
neural mechanisms active during naturalistic speech perception.<o:p></o:p></span></p>
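The abstract does not specify the multivariate classifier, so as a hedged sketch of the general approach (cross-validated decoding of sentence identity from voxel activity patterns), here is a leave-one-out nearest-centroid example on synthetic data; the trial and voxel counts and the 0.8 signal scale are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for single-trial voxel patterns: 40 trials x 200 voxels,
# 4 sentence identities, each a fixed identity-specific pattern plus noise.
n_trials, n_voxels, n_classes = 40, 200, 4
labels = np.repeat(np.arange(n_classes), n_trials // n_classes)
templates = rng.normal(size=(n_classes, n_voxels))
X = 0.8 * templates[labels] + rng.normal(size=(n_trials, n_voxels))

def loo_nearest_centroid(X, labels, n_classes):
    """Leave-one-out nearest-centroid decoding accuracy: classify each trial
    by the closest class-mean pattern computed from all other trials."""
    correct = 0
    for i in range(len(labels)):
        mask = np.arange(len(labels)) != i
        centroids = np.stack([X[mask & (labels == c)].mean(axis=0)
                              for c in range(n_classes)])
        pred = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
        correct += pred == labels[i]
    return correct / len(labels)

acc = loo_nearest_centroid(X, labels, n_classes)
print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_classes:.2f})")
```

In the study, comparing decoding accuracy of this kind between dialogue and monologue trials is what supports the claim that social context sharpens the perceptual representation of individual sentences.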
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><span style="font-size:11.0pt"><o:p> </o:p></span></p>
<p class="MsoNormal">See you all,<o:p></o:p></p>
<p class="MsoNormal">Amir.<o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:11.0pt"><o:p> </o:p></span></p>
</div>
</body>
</html>