Brainhack Warsaw

Let’s Brainhack

25-27.03.2022 @ University of Warsaw

❤️❤️❤️



About Brainhack Warsaw 2022

On the last weekend of March 2022, the fourth edition of Brainhack Warsaw will take place. During this three-day event dedicated to students and PhD students, we will work in teams on neuroscience-related projects. This year’s edition is organized by the Neuroinformatics Student Club operating at the Faculty of Physics, University of Warsaw.

The aim of the event is to meet new, enthusiastic researchers, make new friendships in academia, learn and share knowledge on data mining, machine learning and brain research, and also to promote open science in the spirit of the whole Brainhack community (Craddock et al., 2016). Attendees of various backgrounds are welcome to join!

By submitting your own original research project, you can gain priceless leadership experience, as you will manage a group of researchers at our three-day event. Go ahead, let your creativity bloom and share your idea with us!

Deadline for project proposals: 20.01.2022

Announcement of projects: 27.01.2022

Participant registration starts: 28.01.2022

Deadline for participant registration: 17.03.2022

Please send all related questions to the mailing address:

One of the main commitments of Brainhack Warsaw is to provide a safe and comfortable experience for everyone, regardless of gender, gender identity and expression, sexual orientation, disability, physical appearance, body size, race, age or religion. Please make sure to familiarize yourself with the Brainhack Global Code of Conduct. If you witness any form of harassment at our event, please inform us at our (e-mail address) so we can react and prevent similar incidents in the future.

Pandemic situation

We are well aware of the COVID-19 pandemic situation. However, we will try our best to hold this year’s edition on-site. It is crucial to us that the environment during the whole event is as safe as possible. Therefore, Brainhack will take place in compliance with all sanitary and epidemiological restrictions.

Unfortunately, to protect your safety, we can only accept applications for team leadership and participation from those of you who declare their willingness to show us a COVID vaccination certificate. That way we can guarantee you a wonderful, as well as safe, time with us!

Past Editions

This is the fourth time that Brainhack Warsaw is happening. We are proud that our initiative has come so far, and happy to share with you what has happened before. You can read about all the projects conducted previously, engage with our previous guests, and get a glimpse of what you might expect this year.

Remember that reading about what we experienced before might be a great start on your journey with neuroinformatics, machine learning and brain sciences. Reading about previous editions may also inspire your Brainhack Warsaw 2022 project idea. Although the last edition did not take place due to the coronavirus pandemic, we encourage you to familiarize yourself with its really inspiring projects.

We believe that the drive for exploration, curiosity and reaching out is what makes our community a great place for everyone. See what it was and is like to be a part of this initiative, and do not hesitate to contact us in case of any further questions concerning previous editions of our hackathon conference.

Click on our brain and find out more about Brainhack Warsaw!



Brainhack 2017

2017

Brainhack 2019

2019

Brainhack 2020

2020


Venue

Brainhack will take place at the University of Warsaw, Faculty of Physics, Pasteura 5, 02-093 Warsaw, Poland.

Speakers

Dynamic Eye-Brain Imaging: from eye anatomy to brain function.

The lecture will be streamed in real time

Abstract: Vision is arguably the most important of our senses and it relies on the synchronous functioning of the eyes and the brain. These organs are highly interdependent: pathologies of the eyes can impact brain functionality [1], and brain impairments affect how visual information is encoded at the eye level [2,3]. While today’s ophthalmic biomedical devices are able to extract high-resolution anatomical and behavioral measurements of the eyes, no technology is able to perform anatomical assessments of the eye while it moves, yet eye movements are a behavioral readout encompassing valuable biomarkers in brain disorders [4,5]. Magnetic Resonance Imaging (MRI) is a particularly promising non-invasive and versatile technique because it provides measurements related both to tissue/organ structure and to regional neural activity. However, the image artefacts arising from eye motion prevent the applicability of MR techniques to eye imaging, thereby impeding the investigation of the interplay between anatomical structures and their motion. In this talk I will present our patented structural MRI protocol [6,7] that allows dynamic acquisitions of the eye while it moves during quasi-naturalistic vision. To test the efficacy of this method, eye movements and eye axial lengths, as extracted from the MR images, were compared with eye-tracker measurements and optical biometry, respectively. This new non-invasive technology can estimate the rotation axes from the MR images with up to 97% accuracy with respect to the eye-tracker hardware. The high-resolution MRI scans of the human eye (1 mm³), acquired during natural movement, permit quantifying the optical axial length with an accuracy of the same order of magnitude as the one obtained with ocular biometry [6,7]. Furthermore, I will present our follow-up of the technique in the field of Blood Oxygen Level Dependent functional MRI.
I will show how these works are interconnected and discuss the new frontiers these techniques open, both in the field of ophthalmic MRI and in vision neuroscience.

Bio: Dr. Benedetta Franceschiello is a research scientist in the EEG Section CHUV-UNIL of the Center for Biomedical Imaging (CIBM) and in the Laboratory for Investigative Neurophysiology (LINE), in the Radiology Department of the University Hospital Center (CHUV) and University of Lausanne (UNIL), Switzerland.


Deciphering neurodegenerative pathology using deep learning

The lecture will be streamed in real time

Abstract: Neurodegenerative diseases represent a diverse and devastating group of disorders such as Alzheimer’s and Parkinson’s disease. Despite their diversity, these disorders are all characterized by the abnormal aggregation of select proteins in the brain and progressive cognitive decline. While neurodegenerative diseases are known to exhibit certain clinical behaviors such as memory and speech loss, these symptoms often overlap between diseases and thus prevent accurate diagnosis. Therefore, in order to better understand these disorders, pathological assessment of post-mortem brain tissue is performed and serves as the current gold standard for disease diagnosis.

Disease pathology found in brain tissue is rich with information and we believe that current manual approaches can only uncover a small fraction due to the tissue’s immense size and heterogeneity. In previous work from our lab and others, deep learning approaches have proven to be powerful tools for overcoming such challenges to reveal new insight into cancer pathology. In this talk, I will present different deep learning approaches that we developed in our lab to decipher the heterogeneity of neurodegenerative pathology. I will also discuss how our findings may inform on the underlying biology of neurodegeneration.

Bio: Tony received his bachelor’s degree in Physics at St. Mary’s University and his PhD in Molecular Biophysics from the University of Texas Southwestern Medical Center (UTSW). In his graduate work, Tony developed various computer vision tools to characterize protein interactions and organization at the cell membrane at the single-molecule level. In his current position as a Post-Doctoral Scientist in the labs of Marc Diamond and Satwik Rajaram at UTSW, Tony’s research focuses on using machine learning to elucidate the pathological features of neurodegenerative diseases.

Projects


Project 1: Generating Sequences of Rat Poses

Project 2: Automatic detection of zebrafinch vocalizations using TinyML

Project 3: Talking with Machine Learning models using chatbots

Project 4: Learning resting-state EEG data analysis through software development

Project 5: Isometric Latent Space Representation of EEG signals for bootstrapping and classification

Project 6: Workflow for automated classification of sMRI images of psychiatric disorders using neural networks

Project 7: Artificial intelligence-based techniques for neglect identification


Project 1: Generating Sequences of Rat Poses

Authors: Paweł Pierzchlewicz 1

  1. University of Göttingen & University of Tübingen

Abstract: The goal of understanding behaviour has been the backbone of research in various areas of neuroscience. Recently, machine learning has provided a set of tools which allows us to study behaviour in previously unimaginable ways. Specifically, deep learning helps us shine some light on high-dimensional data such as behaviour. Particularly interesting for behaviour is the subfield of generative modelling, where one strives to model and sample from some target distribution. A prominent example of such models is generative adversarial networks (GANs), which have presented impressive results in generating novel entities (faces, cats, memes, etc.). However, their implicit nature makes it harder to intuitively understand the generative process. Thankfully, a different, equally exciting method called Normalising Flows (NFs) allows us to explicitly transform one distribution (e.g. standard normal) into another (e.g. faces) through a series of invertible transformations, providing us with an easier-to-understand generative model.

In this project we will attempt to learn to generate temporally coherent sequences of rat poses based on the Rat7m dataset using NFs. To achieve this we will explore variants of, and constraints on, the latent space to best capture the distribution of pose sequences. One possible direction would be to constrain the latent space such that following a linear path between two points generates a coherent “pose movie” between the two corresponding poses. Finally, we will analyse the learned latent space to show that NFs can serve as a powerful analysis tool for behavioural neuroscience researchers. Meaningful structure is expected to emerge in the latent space, indicating clusters of actions, temporal similarity of poses, or some other interpretable object.
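Since NFs hinge on the change-of-variables formula, a toy example helps make the idea concrete. The sketch below (plain numpy, with a hypothetical 1-D affine flow standing in for a deep invertible network) shows how an invertible map turns a standard normal base distribution into another distribution while keeping an exact log-density:

```python
import numpy as np

# Toy 1-D normalising flow: an invertible affine map f(z) = a*z + b
# transforms a standard normal base distribution into N(b, a^2).
# Change of variables gives the density of x = f(z):
#   log p_x(x) = log p_z(f^{-1}(x)) - log|df/dz|
a, b = 2.0, 1.0                      # hypothetical flow parameters

def forward(z):
    return a * z + b                 # the invertible transformation

def log_prob(x):
    z = (x - b) / a                  # inverse transformation
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))  # standard normal log-density
    log_det = np.log(abs(a))         # log|df/dz| of the affine map
    return log_base - log_det

# Sampling: push base samples through the flow.
rng = np.random.default_rng(0)
samples = forward(rng.standard_normal(100_000))
print(samples.mean(), samples.std())   # close to 1.0 and 2.0
```

Real NFs such as RealNVP or Glow (papers 1 and 4 below) stack many such invertible layers, each with a tractable log-determinant, so the same two operations (sample, evaluate log-density) remain exact.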

A list of 1-5 key papers/materials summarising the subject:

  1. https://arxiv.org/abs/1605.08803
  2. https://arxiv.org/pdf/1907.01108.pdf
  3. https://www.nature.com/articles/s41592-021-01106-6
  4. https://arxiv.org/abs/1807.03039

A list of requirements for taking part in the project:

  • Understanding of probability theory, linear algebra and calculus.
  • Prior experience with machine learning.
  • PyTorch.
  • Communicative English.

A maximal number of participants: 6

What other professions (other than programmers) are desirable in the project?

  • animator
  • artist

Skills and competences you can learn during the project:

  1. Probabilistic machine learning techniques.
  2. Generative modelling. Machine learning project teamwork.
  3. Insight into the interaction between machine learning and neuroscience.

Is there a plan for extending this work to a paper in case the results are promising? Yes


Project 2: Automatic detection of zebrafinch vocalizations using TinyML

Authors: Mateusz Kostecki, MSc 1 / Cezary Paziewski 2

This project has been cancelled

  1. Nencki Institute
  2. Faculty of Physics, University of Warsaw & Nencki Institute

Abstract: Zebrafinch song learning is now a popular model system for the study of vocal development and motor control. Researchers studying the neural basis of birdsong are taking advantage of many tools of systems neuroscience to elucidate the neural mechanisms guiding the production of vocalizations. One of them is optogenetics, a method that uses laser pulses to switch on or off neurons in different brain areas. In studies of birdsong one often wants to achieve a high level of stimulation precision, activating neurons only at specific moments of a bird’s song. The problem here lies in timing: using a computer to analyze the incoming audio stream and deliver stimulation based on the results introduces delays, while using microcontrollers usually allows detecting vocalizations only by amplitude, which is highly imprecise. We would like to take advantage of recent developments in TinyML to create a system in which a neural network implemented on an Arduino microcontroller would detect the vocalizations of a zebrafinch, a popular bird in the study of birdsong, and trigger a laser using a TTL pulse.
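To see why amplitude-only detection is imprecise, here is a minimal sketch (numpy, synthetic audio, hypothetical threshold values) of the short-time-energy detector a bare microcontroller would typically run: it fires on any loud frame, regardless of whether the sound is actually a song syllable.

```python
import numpy as np

# Minimal amplitude-based detector of the kind the abstract calls imprecise:
# it flags any frame whose short-time energy exceeds a threshold, so it
# cannot tell a song syllable from a cage noise of similar loudness.
# All numbers below are hypothetical illustration values.
fs = 8000                                        # sample rate (Hz)
t = np.arange(fs) / fs                           # 1 s of audio
call = np.sin(2 * np.pi * 600 * t) * (t > 0.5)   # "vocalization" in second half
noise = 0.05 * np.random.default_rng(1).standard_normal(fs)
audio = call + noise

frame = 256
n_frames = len(audio) // frame
energy = np.array([np.mean(audio[i*frame:(i+1)*frame] ** 2)
                   for i in range(n_frames)])
detected = energy > 0.1                          # threshold on loudness alone

# A TinyML model would instead classify each frame's spectrum, so it could
# trigger the laser only on specific song syllables, not on any loud sound.
print(detected.sum(), "loud frames detected")
```

The project's point is to replace the `energy > threshold` line with a tiny neural network running on the Arduino itself, keeping the low latency while gaining selectivity.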

A list of 1-5 key papers/materials summarising the subject:

  1. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0181992,
  2. https://www.science.org/doi/full/10.1126/science.aaw4226

A list of requirements for taking part in the project:

  • fluent Python, Arduino,
  • basic knowledge of TinyML is recommended

A maximal number of participants: 10

What other professions (other than programmers) are desirable in the project?

  • artists
  • biologists
  • musicians

Skills and competences you can learn during the project:

  1. How to implement TinyML in the development of behavioural paradigms in systems neuroscience

Is there a plan for extending this work to a paper in case the results are promising? Yes


Project 3: Talking with Machine Learning models using chatbots

Authors: Michał Kuźba, MSc 1

  1. Expedia Group

Abstract: Machine Learning models are often black boxes: their predictions are hard to interpret and trust. However, there is ongoing research in the area of interpretable/explainable Machine Learning.

Why don’t we talk to the model to ask and understand more about the decisions it makes? We can use an existing chatbot framework such as Dialogflow, plug in a blackbox ML model and interact with the model in a conversational way.

Imagine interrogating the ML model for making biased, wrong or weird decisions.

What can we do?

  1. Train some “controversial” ML models (medical, COVID, financial, legal, including biases, etc.)
  2. Create a dialogue system as an interface for the model
  3. Work on explaining model decisions
  4. Discover questions to ask the ML models
  5. Deploy our chatbot to a larger audience
  6. Anything that comes to your mind and we might do using a chatbot as an interface for the Machine Learning model (All ideas are welcome!)
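As a rough illustration of the idea (not the actual project code), the sketch below wires a hypothetical toy linear model to a rule-based intent handler; a real version would replace the handler with a Dialogflow agent and the toy model with a trained classifier.

```python
# A minimal sketch of a "conversational explainer": user intents are mapped to
# explanations of a model's decision. The model and its weights are
# hypothetical stand-ins for a trained blackbox classifier.
weights = {"age": 0.8, "income": -0.3, "prior_visits": 0.1}   # toy linear model

def predict(features):
    score = sum(weights[k] * v for k, v in features.items())
    return "approved" if score > 0 else "rejected"

def answer(intent, features):
    if intent == "why":
        # "Why" -> report the feature with the largest contribution.
        top = max(features, key=lambda k: abs(weights[k] * features[k]))
        return f"The decision was driven mostly by '{top}'."
    if intent == "what-if":
        # "What if" -> a simple counterfactual: zero out one feature.
        flipped = {**features, "age": 0.0}
        return f"Without 'age', the decision would be: {predict(flipped)}."
    return "I can answer 'why' and 'what-if' questions about this decision."

case = {"age": 1.0, "income": 1.0, "prior_visits": 1.0}
print(predict(case))            # -> approved  (0.8 - 0.3 + 0.1 = 0.6 > 0)
print(answer("why", case))
print(answer("what-if", case))  # without age: -0.3 + 0.1 = -0.2 -> rejected
```

The interesting research question (papers 2-5 below) is exactly which intents users actually ask, and how to generate faithful answers for a genuinely blackbox model rather than a transparent linear one.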

A list of 1-5 key papers/materials summarising the subject:

  1. Book about interpretable ML https://christophm.github.io/interpretable-ml-book/
  2. Kuźba, Michał. “Conversational explanations of Machine Learning models using chatbots.”
  3. Kuźba, Michał, and Przemysław Biecek. “What Would You Ask the Machine Learning Model? Identification of User Needs for Model Explanations Based on Human-Model Conversations.” Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Cham, 2020.
  4. Miller, Tim. “Explanation in artificial intelligence: Insights from the social sciences.” Artificial intelligence 267 (2019): 1-38.
  5. Sokol, Kacper, and Peter A. Flach. “Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements.” IJCAI. 2018.

List of useful skills in the project:

  • Neurobiological knowledge
  • Signal analysis
  • Data visualization
  • EEG signal analysis and modelling
  • Network analysis
  • Programming in Python, Igraph (python library)
  • English (reading, writing)
  • BSc program, or higher
  • Bring computer

A maximal number of participants: 8

What other professions (other than programmers) are desirable in the project? Specialists in Cognitive Science, Human-Computer Interaction, Social Sciences and chatbot UX are also welcome. Additionally, part of the chatbot development is a matter of design and not only coding. Ideally, though, the majority of participants will be programming. If you feel like joining the project and are unsure about your contribution, email me at kuzba[dot]michal[at]gmail[dot]com.

Skills and competences to be acquired during the project:

  1. Participate in the process of developing a chatbot – from design and programming, to testing, deploying and talking to it
  2. Plan and conduct some creative experiments
  3. Learn about interpretability, work with Cloud and a chatbot framework

Is there a plan for extending this work to a paper in case the results are promising? Yes


Project 4: Learning resting-state EEG data analysis through software development

Authors: Marcin Koculak, MSc 1

THIS PROJECT IS FULLY OCCUPIED

  1. C-Lab, Institute of Psychology, Jagiellonian University

Abstract: Electroencephalography (EEG) is a popular method to investigate how our brains work and process information. Most researchers rely on software written by others to do their analyses, following example pipelines from the published literature. This creates a situation where users have little practical knowledge about what is actually happening with their data and how the choices made at each step of the analysis impact the final outcome. This is especially noticeable when using proprietary software, where the source code is unavailable and analysis options are usually limited. Open-source projects can be easily inspected and the code tweaked, but documentation rarely provides implementation details or compares similar functions across different software. The main goal of this project is to help participants understand what EEG data represents, how it is collected, and how it can be analysed. We will take a mostly practical approach and acquire the knowledge through coding our own software. This will force us to understand every step of the process, so we can arrive at the proper result at the end. We will focus mainly on the most common preprocessing steps, but we should have a working pipeline for a simple analysis at the end of the hackathon. We will be programming in Julia – a relatively new language that draws inspiration from C, Matlab, and Python, with a strong focus on scientific computing. The project assumes participants have no experience with Julia, but familiarizing yourself with it before the brainhack, or having experience with other languages (especially Python and Matlab), will definitely help. The author of the project will also try to arrange EEG equipment, so participants will be able to analyse their own data.
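As a taste of the "code it yourself" approach (the project itself will use Julia; numpy is used here purely for illustration), the sketch below hand-rolls one common preprocessing step, a naive FFT-based band-pass filter, on a synthetic two-component signal with hypothetical parameters:

```python
import numpy as np

# One of the preprocessing steps usually hidden inside EEG toolboxes, written
# out by hand: a naive FFT-based band-pass filter that keeps only 8-12 Hz
# (the alpha band). Signal parameters below are hypothetical.
fs = 250                                   # sampling rate (Hz)
t = np.arange(2 * fs) / fs                 # 2 s of synthetic "EEG"
signal = np.sin(2*np.pi*10*t) + np.sin(2*np.pi*50*t)   # alpha + line noise

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1/fs)
spectrum[(freqs < 8) | (freqs > 12)] = 0   # zero everything outside the band
filtered = np.fft.irfft(spectrum, n=len(signal))

# The 50 Hz component is removed while the 10 Hz component survives.
print(np.allclose(filtered, np.sin(2*np.pi*10*t), atol=1e-6))  # -> True
```

Toolboxes like MNE-Python or Fieldtrip use more careful FIR/IIR filters with controlled edge effects; comparing such implementations against a hand-rolled version like this is exactly the kind of exercise the project proposes.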

A list of 1-5 key papers/materials summarising the subject: Instead of typical research papers, we will mostly work with other software to see how similar steps are handled there, so going through documentation of at least one of these projects will be very useful:

  1. MNE-Python https://mne.tools/stable/overview/index.html
  2. Fieldtrip https://www.fieldtriptoolbox.org/documentation/
  3. EEGLab https://eeglab.org/
Apart from that, reading or watching a tutorial introducing programming in Julia will be useful, e.g.:
  4. https://www.youtube.com/playlist?list=PLZLVmS9rf3nOlvvbN9zTAFc7aujnvuFTV
  5. https://julialang.org/learning/

List of useful skills in the project:

  • Conversational level of English
  • Not being scared too much by technical language and math
  • Willingness to learn to code in a new language
  • Interest in EEG analysis

What other professions (other than programmers) are desirable in the project? Psychologists, biologists, medical professionals or cognitive science students should fit well if interested in EEG. As for artists, apart from learning to code, they might help in designing graphical elements of the software. Up to half of the participants.

Skills and competences to be acquired during the project: Experience with programming in Julia, better understanding of EEG data and analyses, working in a team on a software development project.

A maximal number of participants: 16

Is there a plan for extending this work to a paper in case the results are promising? Yes


Project 5: Isometric Latent Space Representation of EEG signals for bootstrapping and classification

Authors: Adam Sobieszek 1

  1. MISMaP, University of Warsaw

THIS PROJECT IS FULLY OCCUPIED

Abstract: We will train a generative adversarial network (GAN) in order to construct a latent representation of EEG signals from some domain (a representation, where one point in a vector space corresponds to one EEG signal the network can generate). The dataset we’ll use will either be data from experiments on emotional word processing of K. Imbir, J. Żygierewicz, myself and others, or some other publicly available dataset, e.g. of BCI data. The goal of the project is to develop a GAN capable of generating EEG signals similar to a given dataset of EEG samples, that learns a latent space representation of those signals that is isometric to the output domain. This means that a distance in latent space between two points representing two signals approximately corresponds to a measure of distance between these two signals. This is useful as it (a) adds smoothness to the representation, such that signals that are similar correspond to points that are near each other, (b) directions in latent space start to correspond to useful features of the signals, which makes classification much easier, (c) you can use such a latent space to generate, for example, a typical signal from some category, or easily bootstrap new signals similar to a set of signals (which can be used, for example, in data augmentation or bootstrap statistical tests).

We will code and train this network, design a distance metric (in order to design a regularization term, based on path-length regularization, that makes the latent space isometric) and perform a preliminary investigation of the usefulness of this latent representation of EEG signals for bootstrapping and classification.
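The isometry requirement can be made concrete with a toy check: for an isometric generator, the ratio of output distance to latent distance is 1 for every pair of latent points. The sketch below (numpy, with a hypothetical linear "generator" whose columns are orthonormal, standing in for a trained GAN generator) verifies this by finite differences; roughly speaking, the path-length regularisation of StyleGAN2 (paper 1 below) penalises variability in this kind of ratio.

```python
import numpy as np

# Isometry in the sense of the abstract: distances between latent points
# should match distances between the generated signals.
# Toy "generator" G(z) = W z with orthonormal columns, which is exactly
# distance-preserving: ||W z1 - W z2|| = ||z1 - z2||.
rng = np.random.default_rng(0)
W = np.linalg.qr(rng.standard_normal((64, 8)))[0]   # 64-dim output, 8-dim latent

def G(z):
    return W @ z

# Finite-difference check of the distance ratio over random latent pairs.
ratios = []
for _ in range(100):
    z1, z2 = rng.standard_normal(8), rng.standard_normal(8)
    ratios.append(np.linalg.norm(G(z1) - G(z2)) / np.linalg.norm(z1 - z2))
ratios = np.array(ratios)
print(ratios.mean(), ratios.std())   # -> 1.0 and ~0 for this orthonormal W
```

A real GAN generator is nonlinear, so such ratios vary; turning their variability into a differentiable penalty (via the Jacobian, as in path-length regularization) is the part of the project that needs the multi-variate calculus listed in the requirements.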

A list of 1-5 key papers/materials summarising the subject: Path-length regularization in GANs:

  1. https://arxiv.org/pdf/1912.04958.pdf
  2. https://paperswithcode.com/method/path-length-regularization
    Examples of use of GANs for generation of EEG signals:
  3. https://www.sciencedirect.com/science/article/pii/S0208521621001273?via%3Dihub
  4. https://iopscience.iop.org/article/10.1088/1741-2552/abecc5/pdf

A list of requirements for taking part in the project:

  • Either knowledge of python (we’ll use PyTorch for the neural net) or mathematics (linear algebra, multi-variate calculus), as we’ll spend some time designing a regularization term for the network.
  • It is not required to be proficient in the topics discussed in the abstract (GANs, path-length regularization, latent space representations), as we will spend some time at the beginning of the project acquainting ourselves with them.

A maximal number of participants on the project: 8

What other professions (other than programmers) are desirable in the project? Not all participants need to know how to code; however, an understanding of either EEG signal characteristics, neural networks, or linear algebra and multi-variate calculus would be valuable. So mathematicians, psychologists, or biomedical scientists.

Skills and competences you can learn during the project: Experience in writing generative adversarial networks, possibly a publication later, if the results are promising.

Is there a plan for extending this work to a paper in case the results are promising? Yes


Project 6: Workflow for automated classification of sMRI images of psychiatric disorders using neural networks

The team leader of this project will join us remotely!

Authors: Sara Khalil 1

THIS PROJECT IS FULLY OCCUPIED

  1. Faculty of Life Sciences, University of Bradford

Abstract: Despite advances in medicine, there are still no objective diagnostic methods for psychiatric disorders; diagnosis depends mainly on the patient’s subjective description of the condition. Among the different neuroimaging techniques, structural MRI is considered the most convenient method for diagnosing psychiatric disorders because it is widely available and less biased. Using deep learning for the diagnosis of psychiatric disorders is widely explored; in this study, we will test the possibility of classifying different disorders using neural networks. We will develop a workflow for sMRI images that automatically performs the processing, segmentation into gray and white matter using FSL, conversion into video, classification using ResNet, and identification of significant regions of the brain. This workflow will be deployed as a website to which a psychiatrist can upload an sMRI scan and get the most probable diagnosis. The model will undergo continuous updates. We will use data from the OpenNeuro dataset (https://openneuro.org/datasets/ds000030/versions/1.0.0); other datasets will be discussed with participants.

A list of 1-5 key papers summarising the subject:

  1. Latest advances in this field https://www.nature.com/articles/s41380-019-0365-9/
  2. https://academic.oup.com/pcm/article/3/3/202/5898685/
  3. https://www.sciencedirect.com/science/article/pii/S2213158221000280/
  4. one of the packages we will use https://github.com/miykael/gif_your_nifti/
  5. nextflow resources https://www.nextflow.io/docs/latest/script.html

A list of requirements for taking part in the project: As long as you can code, you can join; I will provide instructions for each step. English will be the language of instruction.

A maximum number of participants: 20

What other professions (other than programmers) are desirable in the project? Clinicians and neurobiologists

What can the participant gain from the project? This project consists of several small steps, through which we will implement:

  1. a workflow through Nextflow or a similar tool
  2. MRI image processing and voxel-based morphometry using FSL or ANTs
  3. transformation of the processed gray and white matter into video using NIFTI-to-GIF conversion
  4. artificial neural networks for classification (most probably ResNet)
  5. deployment of the workflow
  6. This work will be submitted to a peer-reviewed journal.
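The steps above form a chain of swappable stages, which can be sketched as follows. Every stage here is a hypothetical stub; the real implementation would call FSL/ANTs, the NIFTI-to-GIF converter and a ResNet, orchestrated by Nextflow rather than plain Python.

```python
# A minimal sketch of the workflow's shape: each ministep is a function, and
# the pipeline chains them so any stage can be swapped out (e.g. FSL vs. ANTs).
# All stages are hypothetical stand-ins for the real tools.
def preprocess(scan):
    return {"scan": scan, "normalized": True}          # stub for sMRI processing

def segment(data):
    return {**data, "gray": "GM", "white": "WM"}       # stub for FSL/ANTs segmentation

def to_video(data):
    return {**data, "frames": 10}                      # stub for NIFTI-to-GIF conversion

def classify(data):
    return {**data, "diagnosis": "most probable label"}  # stub for the ResNet step

def run_pipeline(scan, stages=(preprocess, segment, to_video, classify)):
    result = scan
    for stage in stages:        # each stage consumes the previous stage's output
        result = stage(result)
    return result

out = run_pipeline("sub-01_T1w.nii.gz")   # hypothetical BIDS-style filename
print(out["diagnosis"])
```

Keeping stages independent like this is also what makes the planned continuous model updates and website deployment tractable: only the `classify` stage changes when the model is retrained.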

Is there a plan for extending this work to a peer-reviewed paper in case the results are promising? Yes


Project 7: Artificial intelligence-based techniques for neglect identification

The team leader of this project will join us remotely!

Authors: Benedetta Franceschiello, PhD 1

  1. CIBM Center for Biomedical Imaging, EEG CHUV-UNIL Section.
  2. LINE, Laboratory for Investigative Neurophysiology Radiology Department, Lausanne University Hospital (CHUV) and University of Lausanne

Abstract: Background and Objective: Eye-movement trajectories are rich behavioral data, providing a window on how the brain processes information. We address the challenge of characterizing signs of visuo-spatial neglect from saccadic eye trajectories recorded in brain-damaged patients with spatial neglect, as well as in healthy controls, during a visual search task. Methods: In a previous study, we established a standardized pre-processing pipeline adaptable to other task-based eye-tracker measurements. By using a convolutional neural network, we automatically analysed 1-dimensional eye trajectories (x-projections) and found that we could classify brain-damaged patients vs. healthy individuals with an accuracy of 86±5%. Moreover, the algorithm scores correlate with the degree of severity of neglect signs estimated with standardized paper-and-pencil tests, and with white matter tract impairment assessed via Diffusion Tensor Imaging (DTI). Interestingly, the latter showed a clear correlation with the third branch of the superior longitudinal fasciculus (SLF), which is especially damaged in neglect. Data are already pre-processed in a standardised fashion and ready to be analysed. Aim: The purpose of this project is to extend these analyses from 1D trajectories (x-projections) to 2D images, i.e. by representing the eye-tracking trajectories in 2D. The goal is to verify whether adding one dimension and applying recent computer vision techniques yields greater sensitivity than we achieve at present. Furthermore, we would like to pinpoint the neural mechanisms lying behind the results.
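The 1D-to-2D extension the abstract proposes can be illustrated in a few lines: render a gaze trajectory as a 2-D occupancy image, the kind of input a computer-vision network consumes. The trajectory below is synthetic; real data would come from the pre-processed eye-tracker recordings.

```python
import numpy as np

# Instead of feeding a network only the 1-D x-projection of a trajectory,
# render the full (x, y) trajectory as a 2-D occupancy image.
# The random-walk "gaze" below is a synthetic stand-in for real data.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(1000))     # toy gaze coordinates over time
y = np.cumsum(rng.standard_normal(1000))

# 2-D histogram of where the gaze spent time = one training image.
image, _, _ = np.histogram2d(x, y, bins=32)
image = image / image.max()                  # normalise for the network
print(image.shape)                           # -> (32, 32)
```

Each recording then becomes one image per subject (or per trial), and the existing CNN classification setup carries over with a 2-D convolutional front end.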

A list of 1-5 key papers summarising the subject:

  1. https://www.medrxiv.org/content/10.1101/2020.07.02.20143941v2;
  2. Bourgeois, A., Chica, A.B., Migliaccio, R., Bayle, D.J., Duret, C., Pradat-Diehl, P., Lunven, M., Pouget, P., Bartolomeo, P.: Inappropriate rightward saccades after right hemisphere damage: Oculomotor analysis and anatomical correlates. Neuropsychologia 73, 1–11 (2015);
  3. de Schotten, M.T., Urbanski, M., Duffau, H., Volle, E., Lévy, R., Dubois, B., Bartolomeo, P.: Direct evidence for a parietal-frontal pathway subserving spatial awareness in humans. Science

A list of requirements for taking part in the project:

  • Access to a supercomputer to run the analysis.
  • Good English level.
  • At least part of the crew enrolling in this project should have good programming skills and familiarity with machine learning techniques.

Maximum number of participants: 7

What other professions (other than programmers) are desirable in the project?

  • Medical
  • Artist
  • Psychologist

What can the participant gain from the project? This is an interdisciplinary project that combines different expertise:

  1. machine learning and programming,
  2. computer vision, applied statistics (from a psychological perspective) and neuroscience. Participants will learn how to speak a common language and cooperate towards a common goal (early diagnosis of the disease).

Is there a plan for extending this work to a peer-reviewed paper in case the results are promising? Yes

Preliminary schedule

 

Friday, 25th March 2022

  15:00  Opening & BrainTech presentation
  16:00  5-min blitz project opening presentations
  17:00  Break
  17:30  Ignite talk (streamed in real time): Dr. Benedetta Franceschiello, “Dynamic Eye-Brain Imaging: from eye anatomy to brain function”
  18:30-20:00  Brainstorming
  21:00  Late-night social

Saturday, 26th March 2022

  9:00-13:00  Brain hacking
  14:00  Lunch
  15:00  Neurotalk (streamed in real time): Anthony Vega, PhD, “Deciphering neurodegenerative pathology using deep learning”
  16:00  Brain hacking and the beginning of the BrainTech contest
  17:00-18:00  Brain hacking
  19:00  Dinner
  20:00-22:00  Brain hacking
  23:00  Brain hacking and the end of the contest

Sunday, 27th March 2022

  9:00-12:00  Brain hacking
  13:00  Lunch
  14:00  Brain hacking
  15:00  Preparing final presentations
  16:00  A round of 10-min final presentations
  17:30  Goodbye drinks

Participant Registration

The registration for Brainhack Warsaw 2022 is closed.

See you soon!

Registration took place in two rounds and there was a small registration fee (to cover the catering during the event) for the project participants:

  • EARLY -> 28.01 - 06.02 -> 135 PLN
  • REGULAR -> 07.02 - 17.03 -> 150 PLN

Partners & sponsors


Media patron


Committee

The Brainhack Warsaw 2022 committee:



Advisory board: