What excites me is that we know of at least one exceptional vision system: the brain - which, in theory, gives Artificial Intelligence researchers a system to mimic. Interestingly, some neuroscientists have shown how critical motion is when our visual cortex learns to recognize patterns. This is why I work with video data. We humans have developed our perception from a continuous stream of frames - not from individual images - so why should the machines we build be any different? Over the coming years I would like to build algorithms that can perceive video through machine learning. As a secondary aim, I would like my systems to extend to comprehending single images as well.
Biography: I graduated with a Master's in CG, Vision & Imaging from UCL in late 2010. There, I worked with Gabriel Brostow (aka Gabe) on detecting regions of occlusion in consecutive video frames. I also did a brief stint at The University of Warwick with Nasir Rajpoot, developing registration and dimensionality reduction techniques for cancerous tissue examined under the Toponome Imaging System. Previously, I was based at LUMS SSE, where I worked with Sohaib Khan. During my three-year stay, I collaborated with biologists at LUMS SSE and MRC NIMR on developing tracking techniques for fluorescence microscopy.
Over the years, my work has mainly tackled the problem of occlusion. In the overall scheme of things, I believe time is the missing dimension in most vision research. I am currently intrigued by problems where learning can help reveal more information about video sequences.
Apart from Computer Vision, I have had interludes in Systems research - working on Google's MapReduce with Umar Saif.