We are happy to announce the list of accepted papers. We received a large number of submissions and, unfortunately, some high-quality scientific works had to be rejected for lack of space.
We accepted 52 papers (45% acceptance rate).
The deadline for early registration is July 10, 2017. For further information please click here.
NOTE: The reservation cut-off date is June 30, 2017. Reservations received after this date will be accepted on a space-available basis. Lecce is a popular tourist destination, so we recommend that you reserve as soon as possible. For further information please click here.
| # | Title | Authors | Presentation |
|---|-------|---------|--------------|
| 8 | Structured LSTM for Human-Object Interaction Detection and Anticipation | Anh Truong, Atsuo Yoshitaka | ORAL |
| 9 | Latent Embeddings for Collective Activity Recognition | Yongyi Tang, Peizhen Zhang, Jian-Fang Hu, Wei-Shi Zheng | POSTER |
| 15 | Semantic Annotation of Surveillance Videos for Abnormal Crowd Behaviour Search and Analysis | Melike Sah, Cem Direkoglu | POSTER |
| 18 | Building an Intelligent Video and Image Analysis Evaluation Platform for Public Security | Chuanping Hu, Gengjian Xue, Lin Mei, Li Qi, Jie Shao, Yanfeng Shang, Jian Wang | POSTER |
| 19 | People Detection in Top-View Fisheye Imaging | Oded Krams, Nahum Kiryati | ORAL |
| 20 | ADM-HIPaR: An Efficient Background Subtraction Approach | Thien Huynh-The, Sungyoung Lee, Cam-Hao Hua | ORAL |
| 23 | Motion Compensation of Submillimeter Wave 3D Imaging Radar Data for Security Screening | Maria Axelsson, Mikael Karlsson, Henrik Peterson | POSTER |
| 27 | Background Modeling using Adaptive Properties of Hybrid Features | Jaemyun Kim, Adin Ramirez Rivera, Byeongwoo Kim, Kaushik Roy, Oksam Chae | POSTER |
| 28 | Action Recognition from Extremely Low-Resolution Thermal Image Sequence | Takayuki Kawashima, Yasutomo Kawanishi, Daisuke Deguchi, Ichiro Ide, Hiroshi Murase, Tomoyoshi Aizawa, Masato Kawade | POSTER |
| 37 | Background Subtraction Using Encoder-Decoder Structured Convolutional Neural Network | Kyungsun Lim, Won-Dong Jang, Chang-Su Kim | ORAL |
| 38 | Action Recognition based on a mixture of RGB and Depth based skeleton | Srijan Das, Michal Koperski, Francois Bremond, Gianpiero Francesca | POSTER |
| 41 | Triplet CNN and Pedestrian Attribute Recognition for Improved Person Re-identification | Yiqiang Chen, Stefan Duffner, Andrei Stoian, Jean-Yves Dufour, Atilla Baskurt | ORAL |
| 44 | Active Collaborative Ensemble Tracking | Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba, Shin Ishii | POSTER |
| 46 | Deep Spatial Pyramid for Person Re-identification | Slawomir Bak, Peter Carr | ORAL |
| 48 | Attributes Co-occurrence Pattern Mining for Video-based Person Re-identification | Xiu Zhang, Federico Pala, Bir Bhanu | POSTER |
| 49 | Inferring State Transition from Bystander to Participant in Free-style Conversational Interaction | Tatsuya Era, Hiroki Yoshimura, Masashi Nishiyama, Yoshio Iwai | ORAL |
| 51 | PASS: Privacy Aware Secure Signature Scheme for Surveillance Systems | Jihye Kim, Seunghwa Lee, Jungjun Yoon, Hankyung Ko, Seungri Kim, Hyunok Oh | POSTER |
| 54 | Movies Tags Extraction Using Deep Learning | Umair Khan, Miguel Amor, Naveed Ejaz, Heiko Sparenberg | POSTER |
| 56 | Generative Adversarial Models for People Attribute Recognition in Surveillance | Matteo Fabbri, Simone Calderara, Rita Cucchiara | ORAL |
| 57 | Aerial Video Surveillance System for Small-Scale UAV Environment Monitoring | Danilo Avola, Gian Luca Foresti, Niki Martinel, Christian Micheloni, Daniele Pannone, Claudio Piciarelli | POSTER |
| 58 | A 3D-Autism Dataset for Repetitive Behaviours with Kinect Sensor | Omar Rihawi, Djemal Merad, Jean-Luc Damoiseaux | POSTER |
| 59 | Fast gender recognition in videos using a novel descriptor based on the gradient magnitudes of facial landmarks | George Azzopardi, Antonio Greco, Alessia Saggese, Mario Vento | POSTER |
| 61 | Exploiting Gaussian Mixture Importance for Person Re-identification | Xiangping Zhu, Amran Bhuiyan, Mohamed Lamine Mekhalfi, Vittorio Murino | ORAL |
| 67 | Enhancing audio surveillance with Deep Neural Networks | Federico Colangelo, Federica Battisti, Marco Carli, Alessandro Neri, Francesco Calabrò | ORAL |
| 71 | Approximate License Plate String Matching for Vehicle Re-Identification | Nattachai Watcharapinchai, Sitapa Rujikietgumjorn | POSTER |
| 72 | An Evidential Framework for Pedestrian Detection in High-Density Crowds | Jennifer Vandoni, Emanuel Aldea, Sylvie Le Hégarat | ORAL |
| 77 | Multi-region Bilinear Convolutional Neural Networks for Person Re-Identification | Evgeniya Ustinova, Victor Lempitsky | ORAL |
| 78 | Multi-scale Histogram Tone Mapping Algorithm Enables Better Object Detection in Wide Dynamic Range Images | Jie Yang | POSTER |
| 79 | An Adaptive Fusion Scheme of Color and Edge Features for Background Subtraction | Kaushik Roy, Jaemyun Kim, Md Tauhid Bin Iqbal, Farkhod Makhmudkhujaev, Byungyong Ryu, Oksam Chae | ORAL |
| 80 | CNN-based Cascaded Multi-task Learning of High-level Prior and Density Estimation for Crowd Counting | Vishwanath Sindagi, Vishal Patel | ORAL |
| 84 | Learning to Detect Violent Videos using Convolutional Long Short-Term Memory | Swathikiran Sudhakaran, Oswald Lanz | POSTER |
| 91 | Active visual tracking in multi-agent scenarios | Yiming Wang, Andrea Cavallaro | ORAL |
| 94 | A Signal Detection Theory Approach for Camera Tamper Detection | Pranav Mantini, Shishir K. Shah | POSTER |
| 96 | Video-Based Single Sample Face Recognition Using Face Frontalization via Autoencoders Deep Neural Networks | Saman Bashbaghi, Mostafa Parchami, Eric Granger | ORAL |
| 97 | Multi-Object tracking using Multi-Channel Part Appearance Representation | Thi Lan Anh Nguyen, Francois Bremond, Furqan Muhammed Khan, Farhood Negin | ORAL |
| 99 | Combining Spatial and Temporal Features for Crowd Counting with Point Supervision | Haiying Jiang | POSTER |
| 100 | Abnormal behavior detection in LWIR surveillance of railway platforms | Kristof Van Beeck, Kristof Van Engeland, Joost Vennekens, Toon Goedemé | POSTER |
| 102 | Effective Heart Rate Estimation Using Deep Learning on Time-Frequency Representations | Gee-Sern Hsu, ArulMurugan Ambikapa, Ming-Shiang Chen | POSTER |
| 103 | Action Localization in Video using a Graph-based Feature Representation | Iveel Jargalsaikhan, Noel O'Connor, Suzanne Little | ORAL |
| 104 | A knowledge-based approach for video event detection using spatio-temporal sliding windows | Danilo Cavaliere, Sabrina Senatore, Pierluigi Ritrovato, Luca Greco | POSTER |
| 105 | Learning Feature Representation for Face Verification | Sangwoo Park, Jongmin Yu, Moongu Jeon | POSTER |
| 106 | Robust License Plate Detection In The Wild | Gee-Sern Hsu, ArulMurugan Ambikapa, Sheng-Luen Chung | POSTER |
| 109 | A batch asynchronous tracker for wireless smart-camera networks | Sandeep Katragadda, Andrea Cavallaro | POSTER |
| 111 | Analytics of Deep Neural Network in Change Detection | Tsubasa Minematsu, Atsushi Shimada, Rin-Ichiro Taniguchi | POSTER |
| 115 | Suspected Vehicle Detection for Driving without License Plate Using Symmelets and Edge Connectivity | Jun-Wei Hsieh | POSTER |
| 116 | An efficient and effective method for people detection from top-view depth cameras | Vincenzo Carletti, Luca Del Pizzo, Gennaro Percannella, Mario Vento | ORAL |
| 118 | Hyper-optimization tools comparison for parameter tuning applications | Camille Maurice, Jorge Francisco Madrigal Diaz, Frédéric Lerasle | POSTER |
| 119 | Applying Audio Description for Context Understanding of Surveillance Videos by People With Visual Impairments | Virginia Campos, Luiz Goncalves, Tiago Araujo | POSTER |
| 126 | Convolutional NNs for Face Recognition in Video Surveillance Using a Single Training Sample Per Person | Mostafa Parchami, Saman Bashbaghi, Eric Granger | ORAL |
| 128 | Modeling and classification of trajectories based on a Gaussian process decomposition into discrete components | Damian Campo, Mohamad Baydoun, Lucio Marcenaro, Andrea Cavallaro, Carlo Regazzoni | POSTER |
| 130 | Joint Cost Minimization for Multi-Object Tracking | Abhijeet Boragule, Moongu Jeon | POSTER |
| 131 | Activity Recognition Using a Panoramic Camera for Homecare | Oscal T.-C. Chen, Ching-Han Tsai, Hung Ha Manh, Wei-Chih Lai | POSTER |
- Presentation format: Each oral talk is allotted 20 minutes plus 5 minutes for questions; this includes the time needed for switching between speakers and the introduction. You will be asked to leave the podium once your time is up, so please make sure you do not exceed the time limit.
- Arrival time: Oral presenters should be at the podium at least 20 minutes before the start of the session. Please introduce yourself to your session chair in the presentation room, and either upload your slides from a USB drive to the conference computer or test your own laptop (if you use the conference computer, remember to embed, not link, all videos in your presentation).
- Facilities: The projector supports the 4:3 format. A Logitech presenter and a microphone are provided to speakers. The conference laptop supports Adobe Acrobat and Microsoft PowerPoint 2010/2013.
- NOTE: All oral presentations have also been allocated a poster presentation, scheduled in the poster session following the talk. For detailed poster-preparation instructions, see below (a spotlight presentation is not required for posters associated with oral presentations).
- Presentation format: The poster format is A0 portrait. Poster boards are 841 mm wide x 1189 mm tall (equivalent to 33.1 inches wide x 46.8 inches tall). Adhesive material and/or pins will be provided for mounting posters to the boards. The board for your poster is identified by the paper number from the program. If you have special requirements, please contact the local organizers as soon as possible; we will try to accommodate your requests as much as possible.
- Arrival time: Poster presenters are asked to install their posters during the coffee break prior to the poster session and to remove them promptly at the conclusion of the session.
- Poster printing: We can support you with poster printing; if you need to print your poster on-site, please send an email to email@example.com.
- Spotlight Presentation:
- Each poster will also have an oral spotlight presentation right at the beginning of each poster session.
- Each spotlight presenter will give a very short (2 minutes max) “teaser” talk to advertise their poster. Note that the spotlights are intended to draw people’s attention to the posters and are not meant to be mini-talks. Think of it as a commercial for your paper that motivates people to learn more (most commercials are only 30-60 seconds long).
- Feel free to include pictures. However, do not include any videos or clickable animations in the slide.
- DEADLINE: Slides for the spotlight are due by August 10th (11:59 PM CET). This is a hard deadline! Slides must be sent in PPT format to firstname.lastname@example.org
Poster boards are pre-assigned; please refer to the tables attached below to find your board. Presenters should mention their assigned board number in their oral or spotlight presentation so that conference attendees can easily find the poster.