Themes of interest
This is my new thing! It became my new thing simply because I was deeply frustrated with the current state of psychiatry. On the one hand, we have diagnostic manuals that define mental disorders as collections of symptoms. This is problematic because of how we lump symptoms together: many are heterogeneous, and there is considerable overlap between disorders. On the other hand, we take these labels as given and search for biomarkers in the brain, only to find not much. I'd like to address both problems at once by removing all the labels first, looking at the brain, and seeing how different brains actually are. From there, I want to go back to psychiatric symptoms and rebuild the taxonomy from the bottom up. I expect plenty of complications because there's no one-to-one mapping, but I think starting from the brain gives us more to work with, as it has in other fields such as neurology.
Why does the human brain look the way it does? It is far from a collection of randomly connected nodes; it shows many complex network features such as modularity and small-worldness. A very prolific line of work has studied these features to generate hypotheses about this why question. For example, small-worldness implies a cost-benefit trade-off, so maybe that's a factor. Complementary to this approach, other works have devised optimization algorithms to see whether the resulting networks look like the brain. For example, if you wanted to optimally balance the trade-off, would you end up with a brain-like connectivity pattern? I'm interested in both approaches, but lean towards the latter.
I started looking into this issue during my PhD, when I realized our game theoretical approach is well suited to settle some of the debates there. Game theory is a normative modeling framework, so it is built to answer exactly these types of questions. Concretely, I went ahead and tested the trade-off idea in two game theoretical works. The main question is simple but important: Is the brain wired to optimize communication efficiency given the cost of wiring? In the first paper, I started from the premise that the brain's structure supports optimal communication and asked what optimal communication on top of that network would look like. In the second, I went even deeper and asked what the network would look like if all nodes were set to optimize their communication. Together, these works show that communication in brain networks is not optimal, and neither is the network structure itself. There are other factors (obviously), such as reliability of computation and perhaps even evolutionary history, that keep brains from being optimal in any single metric (or two, in this case).
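To make the trade-off concrete, here is a minimal toy sketch (my own illustration, not the models from the papers): for a small, spatially embedded network I made up, take wiring cost as the total Euclidean length of the edges and communication efficiency as the average inverse shortest-path length (global efficiency). Adding or removing long-range shortcuts moves you along exactly this trade-off.

```python
from itertools import combinations

# Hypothetical spatially embedded network: node -> 2D coordinate
coords = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1), 4: (2, 0.5)}
edges = {(0, 1), (1, 2), (2, 3), (3, 0), (1, 4), (2, 4)}

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def wiring_cost(edges):
    # Total Euclidean length of all wires
    return sum(dist(a, b) for a, b in edges)

def global_efficiency(edges, n):
    # All-pairs shortest path lengths (hop counts) via Floyd-Warshall
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for a, b in edges:
        d[a][b] = d[b][a] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    pairs = list(combinations(range(n), 2))
    # Average inverse shortest path length over all node pairs
    return sum(1 / d[i][j] for i, j in pairs) / len(pairs)

print(f"cost = {wiring_cost(edges):.2f}, efficiency = {global_efficiency(edges, 5):.2f}")
```

A fully connected network would maximize efficiency (every pair one hop apart) at maximal wiring cost; a sparse lattice does the opposite. The empirical question is where on this frontier real brains sit.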
Relevant works:
Causality weaves the fabric of modern science and, consequently, understanding how the brain works and fails requires causal inference, both among brain regions and between brain and behavior. Over the past decade, neuroscience has enjoyed rapid technological advancement in the tools needed to record and manipulate an ever-growing number of neural elements. Yet the logical foundations of how we characterize what caused what are as old as the early days of neuroscience. During my PhD, I undertook an ambitious project to first identify the methodological issues the field never really examined, and then to come up with a better framework. The framework I settled on is called multi-perturbation Shapley value analysis, or MSA for short. It builds upon a simple yet rigorous and axiomatic concept: fairness.
Intuitively, one can see an effect (say, some cognitive function) as the product of a complex web of contributing causes (e.g., brain regions). Each region contributes in its own way, just as people with widely different expertise work together to build a house, or an orchestra coordinates to perfect a piece. Eventually, though, we should be able to compensate everyone for their contribution, and we should have a fair system for this compensation so that every player (person or brain region) gets what it deserves. This is very intuitive, but how exactly should we define fairness here? Game theory does that for us. A body of research going back to the 1950s eventually produced a mathematically sound answer: a unique division of the returns among the players, meaning there is no other way to give everyone exactly what they deserve.
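The fair division in question is the Shapley value. Here is a minimal sketch of the idea itself on a made-up three-region "game" (a toy illustration, not MSA's estimation machinery): each player's payoff is its marginal contribution averaged over every order in which the coalition could be assembled.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average marginal contribution over all orderings."""
    shapley = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            # Marginal contribution of p given who joined before it
            shapley[p] += value(frozenset(coalition)) - before
    return {p: total / len(orders) for p, total in shapley.items()}

# Hypothetical game: performance of the system for each subset of regions,
# e.g. measured after lesioning everything outside the subset
performance = {
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 0, frozenset("C"): 0,
    frozenset("AB"): 30, frozenset("AC"): 10, frozenset("BC"): 0,
    frozenset("ABC"): 40,
}
sv = shapley_values("ABC", performance.__getitem__)
print(sv)
```

Note how B gets credit even though it does nothing alone: its value is entirely synergistic with A. The values always sum to the full coalition's performance (40 here), which is one of the fairness axioms. Exact computation scales factorially, so in practice MSA-style approaches sample orderings instead.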
In a couple of papers using in-silico models (from simple networks of 19 neurons to LLMs), I then show where the conventional conceptual framework for causal inference falls apart and how MSA gives us a better picture of who's doing what in the system. With my friend Shrey Dixit, we also built a fast, general-purpose Python package for it. He then went even further and built a dedicated tool for lesion-symptom mapping.
Relevant works:
What's so special about brain networks, if anything at all? There's an intuitive feeling that, since they keep the animal (and us) alive, what you see in them might also be useful for AI models. I have an interest in developing brain-inspired AI models, but mainly to understand brains rather than to improve AI. One feature we found very interesting but neglected (mainly because of methodological limitations) is reciprocity in brain networks. There are mixed results about how much reciprocal connectivity there is in the brain, mainly because it is very difficult to measure, but there are logical deductions. For example, there shouldn't be strong loops by which information returns to a neural unit immediately, and this seems to be the case in the brain. The reasons: neurons have refractory periods, so a spike is simply wasted if what goes around comes around right after it left. It also means not much computation happened along the way, as opposed to long loops where information goes around and comes back after many steps of transformation. So, even though neuroimaging methods don't help here, we expect this to be the case, and we can show it using neural networks. This is what we did in two papers: one came up with an algorithm that modulates reciprocity without disturbing other network features too much, and the other checked how reciprocity impacts computation. Bottom line: reciprocity (or strong loops) is bad, tada!
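For the curious, the quantity being modulated can be sketched in a few lines (this is the standard graph-reciprocity definition, not the algorithm from the paper): the fraction of directed edges whose reverse edge also exists.

```python
def reciprocity(edges):
    """Fraction of directed edges (u, v) for which (v, u) is also present."""
    edge_set = set(edges)
    return sum((v, u) in edge_set for u, v in edge_set) / len(edge_set)

# Toy directed network: one strong (two-node) loop 0 <-> 1,
# plus a long loop 1 -> 2 -> 3 -> 0 that transforms information over many steps
edges = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 0)]
print(reciprocity(edges))  # 2 of 5 edges are reciprocated -> 0.4
```

Only the `0 <-> 1` pair counts toward reciprocity; the long loop does not, even though information eventually returns, which is exactly the distinction between strong loops and multi-step recurrence made above.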
Relevant works: