News
Invited Speakers
Call for Papers
Important Dates
Organizers
Program Committee
Submission
Contacts
VECTaR2010
VECTaR2009
Workshop program is available (link to PDF)
Prof. Shaogang Gong, Queen Mary University of London, UK
URL: http://www.eecs.qmul.ac.uk/~sgg/
Title: Multi-Camera Re-Identification
Abstract:
Effective correlation of multiple independent data sources is
critical for multi-camera visual surveillance. Video data from a
distributed camera network can often be huge and widely disparate in
nature, with meaningful object associations often being not only
previously unknown, but also highly context dependent and sparse. In
this talk, I describe a holistic approach to multi-source object
association among disjoint camera views, known as the problem of
re-identification. The model utilises three distinct components: (1)
the use of relative ranking to replace the more conventional metric
distance matching method for comparing data from two different
sources; (2) learning spatio-temporal activity profiles to reduce the
search space for discovering meaningful associations between camera
views; and (3) the use of interactive data mining to quickly and
effectively guide the system to discover useful and/or previously
unknown associations. The system is extensively evaluated against the
UK Home Office i-LIDS multi-camera dataset. The results show that the
combination of these three components permits the system to operate
more quickly and effectively, with a significantly higher success rate than previous techniques.
Bio:
Shaogang Gong is Professor of Visual Computation at Queen Mary
University of London, a Fellow of the Institution of Electrical
Engineers and a Fellow of the British Computer Society. He received
his D.Phil in computer vision from Keble College, Oxford University in 1989.
He has published over 250 papers in computer vision and machine
learning, and two books, "Visual Analysis of Behaviour: From Pixels to
Semantics" (2011) and "Dynamic Vision: From Images to Face
Recognition" (2000). His work focuses on behaviour recognition, video
semantic content analysis, object detection and tracking, face and
expression analysis, and gesture and action recognition.
Dr. Ivan Laptev, INRIA, France
URL: http://www.irisa.fr/vista/Equipe/People/Ivan.Laptev.html
Registration: the paper code for registration is 'WS18-yy', where yy is your paper ID in the VECTaR workshop. (updated on 27/09/2011)
The conference ID for IEEE PDF eXpress is 'iccv2011x'. The camera-ready submission deadline is extended to 14/09/2011. (updated on 12/09/2011)
With the rapid growth of Internet capacity and speed, as well as the wide adoption of media technologies in people's daily lives, there is a pressing demand to efficiently process and organize the video events rapidly emerging from the Internet (e.g., YouTube), wider surveillance networks, mobile devices, smart cameras, etc. The human visual system can, without difficulty, interpret and recognize thousands of events in videos, despite high levels of object clutter, different types of scene context, variability in motion scale, appearance changes, occlusions and object interactions. For a computer vision system, however, automatic video event understanding has remained a challenge for decades. Broadly speaking, these challenges include robust detection of events under motion clutter, event interpretation in complex scenes, multi-level semantic event inference, placing events in context and across multiple cameras, and event inference from object interactions.
In recent years, steady progress has been made towards better models for video event categorization and recognition, e.g., from modelling events with bags of spatio-temporal features to discovering event context, from detecting events with a single camera to inferring events through a distributed camera network, and from low-level event feature extraction and description to high-level semantic event classification and recognition. However, current progress in video event analysis still falls far short of its promise. It remains very difficult to retrieve or categorise a specific video segment based on its content in a real multimedia system or in surveillance applications. Existing techniques are usually tested on simplified scenarios, such as the KTH dataset, whereas real-life applications are much more challenging and require special attention. To make further progress, we must adapt recent and existing approaches or find new solutions for intelligent video event understanding.
The goal of this workshop is to provide a forum for recent research advances in the area of video event categorisation, tagging and retrieval. The workshop seeks original, high-quality submissions from leading researchers and practitioners in academia as well as industry, dealing with theories, applications and databases of visual event recognition. Real-life applications in the context of multimedia metadata, e.g. event analysis and recognition of videos from the Internet, surveillance cameras and mobile devices, will be the theme of this year's workshop. Topics of interest include, but are not limited to:
- Motion interpretation and grouping
- Human action representation and recognition
- Abnormal event detection
- Contextual event inference
- Event recognition among a distributed camera network
- Multi-modal event recognition
- Spatio-temporal features for event categorization
- Hierarchical event recognition
- Probabilistic graph models for event reasoning
- Machine learning for event recognition
- Global/local event descriptors
- Metadata construction for event recognition
- Bottom up and top down approaches for event recognition
- Event-based video segmentation and summarization
- Video event database gathering and annotation
- Efficient indexing and concepts modeling for video event retrieval
- Semantic-based video event retrieval
- On-line video event tagging
- Evaluation methodologies for event-based systems
- Event-based applications (security, sports, news, etc.)
- Submission Deadline
July 11th, 2011 (extended to July 18th, 2011)
- Notification of Acceptance
August 22nd, 2011 (extended to August 26th, 2011)
- Camera-Ready Submission
September 10th, 2011 (extended to September 14th, 2011)
- Workshop
November 13th, 2011
- Prof. Tieniu Tan, Chinese Academy of Sciences, China
- Prof. Thomas S. Huang, University of Illinois at Urbana-Champaign, USA
Program Chairs
- Prof. Liang Wang, Chinese Academy of Sciences, China
- Dr. Jianguo Zhang, University of Dundee, UK
- Dr. Ling Shao, The University of Sheffield, UK
- Rama Chellappa, University of Maryland, USA
- Chekuri S. Choudary, RNET Technologies, Dayton, OH, USA
- James W. Davis, Ohio State University, USA
- Ling-Yu Duan, Peking University, China
- Tim Ellis, Kingston University, UK
- James Ferryman, University of Reading, UK
- Gian Luca Foresti, University of Udine, Italy
- Shaogang Gong, Queen Mary University of London, UK
- Jungong Han, Centrum Wiskunde & Informatica, The Netherlands
- Kaiqi Huang, Chinese Academy of Sciences, China
- Winston Hsu, National Taiwan University
- Ran He, Chinese Academy of Sciences, China
- Yu-Gang Jiang, Columbia University, USA
- Graeme A. Jones, Kingston University, UK
- Ivan Laptev, INRIA, France
- Jianmin Li, Tsinghua University, China
- Xuelong Li, Chinese Academy of Sciences, China
- Zhu Li, Hong Kong Polytechnic University, China
- Marcin Marszalek, University of Oxford, UK
- Tao Mei, Microsoft Research Asia
- Paul Miller, Queen's University Belfast, UK
- Ram Nevatia, University of Southern California, USA
- Yanwei Pang, Tianjin University, China
- Federico Pernici, Università di Firenze, Italy
- Carlo Regazzoni, University of Genoa, Italy
- Shin'ichi Satoh, National Institute of Informatics, Japan
- Dan Schonfeld, University of Illinois at Chicago, USA
- Ling Shao, The University of Sheffield, UK
- Yan Song, University of Science and Technology of China
- Peter Sturm, INRIA, France
- Dacheng Tao, University of Technology, Sydney, Australia
- Xin-Jing Wang, Microsoft Research Asia
- Quan Wang, Ericsson R&D Division, USA
- Tao Xiang, Queen Mary University of London, UK
- Dong Xu, Nanyang Technological University, Singapore
- Hongbin Zha, Peking University, China
- Zhang Zhang, Chinese Academy of Sciences, China
- Jianguo Zhang, University of Dundee, UK
- Lei Zhang, Microsoft Research Asia
- Liang Wang, Chinese Academy of Sciences, China
- Pingkun Yan, Chinese Academy of Sciences, China
- Yuan Yuan, Chinese Academy of Sciences, China
- When submitting manuscripts to this workshop, the authors acknowledge that manuscripts substantially similar in content have NOT been submitted to another conference, workshop, or journal. However, dual submission to the ICCV 2011 main conference and VECTaR'11 is allowed.
- The format of a paper submission is the same as for the ICCV main conference. Please follow the instructions on the ICCV 2011 website: http://www.iccv2011.org/paper-submission.
- For paper submission, please go to the submission website (https://cmt.research.microsoft.com/VECTAR2011/).
- Note: to obtain a paper ID, please first register an initial submission through the submission website.
Each submission will be reviewed by at least three reviewers, drawn from the program committee and external reviewers, for originality, significance, clarity, soundness, relevance and technical content.
Accepted papers will be published together with the proceedings of ICCV 2011 (included in the main conference DVD and in IEEE Xplore). High-quality papers will be invited for submission to a special issue of a leading computer vision journal after the conference.