Marcus Ghosh (@marcusghosh)'s Twitter Profile
Marcus Ghosh

@marcusghosh

Postdoctoral Fellow (@ImperialX_AI, @SchmidtFutures).

Working on multisensory integration with @neuralreckoning.

🐘 @[email protected]

ID: 1140273182651428865

Link: https://profiles.imperial.ac.uk/m.ghosh · Joined: 16-06-2019 15:01:26

356 Tweets

947 Followers

568 Following

Dan Goodman (@neuralreckoning)'s Twitter Profile Photo

New article: thoughts on a non-hierarchical science where decisions are made by junior scientists rather than senior ones, who have an (optional) advisory role. Socially and scientifically better. thesamovar.github.io/zavarka/invert…

Danyal (@danakarca)'s Twitter Profile Photo

*Fully funded PhD position opening*💥 I have an opening for my first PhD student to come to Imperial College London, co-supervised with Dan Goodman, working on new projects in neuro-AI 🧠 Must be UK home fees status (3 yrs in UK prior). Plz share! Further details below 👇👇

Kayson Fakhar (@kaysonfakhar)'s Twitter Profile Photo

A reminder that we have this seminar series for next week. You can register using the links in the mentioned thread. Keep in mind the correct dates (the previous post is off by a day).

Friedemann Zenke (@hisspikeness)'s Twitter Profile Photo

1/6 Surrogate gradients (SGs) are empirically successful at training spiking neural networks (SNNs). But why do they work so well, and what is their theoretical basis? In our new preprint led by Julia Gygax, we provide the answers: arxiv.org/abs/2404.14964
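For readers new to the idea, here is a minimal, illustrative sketch of what a surrogate gradient looks like in practice (PyTorch-style; this is not code from the preprint, and the fast-sigmoid surrogate with sharpness beta = 10 is an assumed, commonly used choice): the neuron keeps its hard spiking threshold in the forward pass but substitutes a smooth pseudo-derivative in the backward pass so gradients can propagate through the otherwise non-differentiable spike.

```python
# Illustrative sketch only (not from the preprint): a typical surrogate-gradient
# spike nonlinearity implemented with PyTorch's custom autograd.
import torch


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Emit a spike (1.0) wherever the membrane potential crosses threshold 0.
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        beta = 10.0  # surrogate sharpness (assumed, illustrative value)
        # Replace the zero-almost-everywhere Heaviside derivative with a
        # fast-sigmoid surrogate: 1 / (beta * |u| + 1)^2.
        surrogate = 1.0 / (beta * membrane_potential.abs() + 1.0) ** 2
        return grad_output * surrogate


# Example: gradients flow through the spike despite its hard threshold.
v = torch.randn(5, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(v.grad)
```

Because the forward pass is untouched, the network still emits binary spikes; only the backward credit assignment is smoothed.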

Marcus Ghosh (@marcusghosh)'s Twitter Profile Photo

Awesome to see two Zebrafish Rock! papers out from my old colleagues @uclcdb this week (Anya Suppermpool, The Rihel lab, Stephen Wilson etc.):
⚪️ Sleep pressure and synapses (doi.org/10.1038/s41586…)
⚪️ Neuronal asymmetry (doi.org/10.1126/scienc…)

Marcus Ghosh (@marcusghosh)'s Twitter Profile Photo

Really enjoyed this thoughtful pair of articles, from Máté Lengyel & Jonathan Pillow, on the usefulness of Marr's three levels in #neuroscience:
❤️ doi.org/10.1113/JP2795…
❤️‍🩹 doi.org/10.1113/JP2795…

Kayson Fakhar (@kaysonfakhar)'s Twitter Profile Photo

2 years in the making, the last chapter of my PhD thesis, with a set of wonderful collaborators: @m00rcheh, @ShreyDixitAI, Bratislav Misic, Caio Seguin, G. Zamora-López, Claus C. Hilgetag.

🧵 What can we learn about communication in brain networks from 7000 million virtual lesions:

Danyal (@danakarca)'s Twitter Profile Photo

I hadn’t really thought about the variety of algorithms that *could* optimally underpin multi-modal processing/integration, until this awesome work by Marcus Ghosh & team. Clear findings w/ experimentally testable predictions on nonlinear fusion in agents & organisms! 👌