VBS2018 Program

VBS2018 will consist of two sessions:

  • A private session on February 4, 2018 at Chulalongkorn University, in which textual KIS tasks will be evaluated extensively, with the goal of collecting enough data for a detailed evaluation.
  • The official VBS session on February 5, 2018 from 16:00 – 19:00, which will partially overlap with the Welcome Reception of MMM 2018 (starting at 17:00). The competition will include Known-Item Search (KIS) and Ad-hoc Video Search (AVS) tasks with the following runs:
    • Expert Run
      • 5 textual KIS tasks
      • 5 visual KIS tasks
      • 5 AVS tasks
    • Novice Run
      • 5 visual KIS tasks
      • 5 AVS tasks

There will be a time limit of 5 minutes per task.

Posters

As in previous years, every team needs to prepare a poster about their VBS system (format: A0 in portrait layout). The posters should serve as information for the audience and should include all relevant details; you do not need to actively present them, nor give an oral presentation about your system this year.


Interaction Logging

Our motivation is to collect additional information for a planned journal publication summarizing the event and analyzing the results. Therefore, with each submission, teams also need to submit a high-level sequence of the actions performed before that submission. For example, if you use keyword search, browse a few pages or browse an imagemap, then try a color sketch and browse again, your system could log a high-level sequence K; P; …; P; C; P; …; B; … and submit it together with the video and shot ID to the VBS server. The log should be cleared only at the beginning of each task (the VBS server will send a reminder). Since several users from one team can control different tools, the log text must start with a unique tool ID. For multiple submissions during one task, each subsequence should end with the actual time (see the examples below).
As each tool has its own set of query initialization, filtering, and browsing options, the vocabulary of logged actions consists of a unified (mandatory) part and a tool-specific (optional) part. Each unified mandatory action is represented by a capital letter:
K = keyword search in automatically detected annotations (most teams use DCNNs); use this letter also for automatically detected concepts
A = using extracted audio data (usually also searched by keywords)
O = using optical character recognition (usually also searched by keywords)
C = search by color-sketch
E = search by edge-sketch
M = search by motion-sketch
S = query-by-example similarity search (when users pick an example query object from the results or an external source)
F = filtering using various attributes or the actual dataset ordering; part of the database is cut off
P = paging, i.e., visiting the next/previous page in the actual ordering
B = browsing using a tool-specific browsing system, e.g., zooming in/out in a hierarchical imagemap
T = the tool used external results from other team members
X = whenever you stop the actual search strategy and start from scratch (reset all)
- = turning off just one action; for example, if you turn off the edge-sketch you print -E
In parentheses, additional brief tool-specific details can be provided for each letter, for example K(value, position, annotation source, …); P(next); B(zoomout); … If you perform several actions in one step, you can simply concatenate them: …; CE; … represents using a color and an edge sketch at once, while …; CF(0.2cr); … represents using only 20% of the ranking by color.
A few examples of such log messages:
“TID1;K(horse)F(0.3kr);P(next);P(next);time 23:45” means that tool ID 1 submitted at 23:45 after the following sequence: keyword search for “horse” considering just 30% of the dataset, then twice to the next page
“TID1;K(horse)F(0.3kr);C(rerank);time 23:45” means that tool ID 1 submitted at 23:45 after the following sequence: keyword search for “horse” considering just 30% of the dataset, then reranking by color
“TID1;M;B(zoomin);B(pan);time 23:45” means that tool ID 1 submitted at 23:45 after the following sequence: a motion-sketch query and then two specific browsing actions
“TID1;M;B(zoomin);B(pan);time 23:45;X;C;P(next);P(next);time 23:47” is an example of the log for two submissions during one task; the user cancelled all previous actions with X
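
To make the expected format concrete, here is a minimal sketch of such a logger in Python. The class and method names are hypothetical illustrations, not an official API; each team will need to adapt the idea to their own tool. For simplicity it logs each action as a separate step, although concatenating actions performed in one step (as in K(horse)F(0.3kr) above) is equally valid:

    from datetime import datetime

    class ActionLogger:
        """Hypothetical helper for building VBS log messages: actions are
        separated by semicolons, brief tool-specific details go in
        parentheses (separated by commas), and each submission appends
        the actual time."""

        def __init__(self, tool_id):
            self.tool_id = tool_id   # unique tool ID, e.g. "TID1"
            self.actions = []

        def reset_task(self):
            # Clear the log only at the beginning of each task.
            self.actions = []

        def log(self, letters, *details):
            # Record one step; concatenated letters such as "CE" are allowed.
            entry = letters + ("(" + ",".join(details) + ")" if details else "")
            self.actions.append(entry)

        def submit_string(self):
            # Build the log text sent with a submission. The history is kept,
            # so a later submission in the same task extends the sequence.
            self.actions.append("time " + datetime.now().strftime("%H:%M"))
            return ";".join([self.tool_id] + self.actions)

    # Reproducing the structure of the first example above:
    logger = ActionLogger("TID1")
    logger.reset_task()
    logger.log("K", "horse")   # keyword search for "horse"
    logger.log("F", "0.3kr")   # keep only 30% of the keyword ranking
    logger.log("P", "next")    # next page
    logger.log("P", "next")    # next page again
    print(logger.submit_string())
    # -> TID1;K(horse);F(0.3kr);P(next);P(next);time 23:45 (with the real time)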
Note 1 – the tool-specific information in parentheses will be used mainly to resolve ambiguities (e.g., if two annotation sources are used), BUT most of the planned statistics will consider only the high-level actions (percentage of browsing actions, how many teams used a color-sketch in visual tasks, etc.). Therefore, if it is difficult for you to implement the logging of the tool-specific information, please provide at least the letter sequences.
Note 2 – semicolons, instead of commas, are used to separate actions in the log. Tool-specific items are still separated by commas.
Note 3 – the tool performance AND the ability to submit the logs will play an important role for participation in the planned journal publication.
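
As an illustration of Note 1, the following Python sketch (again hypothetical, not part of the official tooling) shows how the high-level letter sequence could be recovered from a log message for such statistics:

    import re

    def high_level_actions(log_text):
        """Extract the high-level action letters from a log message,
        ignoring the tool ID, the timestamps, and the tool-specific
        details in parentheses. Concatenated steps such as "CE" and
        negations such as "-E" are handled as well."""
        letters = []
        for item in log_text.split(";")[1:]:       # drop the tool ID
            if item.startswith("time"):
                continue                           # skip "time hh:mm" entries
            item = re.sub(r"\([^)]*\)", "", item)  # strip details in (...)
            letters.extend(re.findall(r"-?[A-Z]", item))
        return letters

    print(high_level_actions("TID1;K(horse)F(0.3kr);P(next);P(next);time 23:45"))
    # -> ['K', 'F', 'P', 'P']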
If you have any further questions, please don’t hesitate to contact Jakub Lokoc for more information.


One-Page Reports

Over the years it has turned out that many systems used during the actual VBS event differ more or less from the systems described in the VBS papers, since teams continue to develop and optimize their systems even after paper submission. Therefore, this year we want to try something new:

Every VBS team needs to submit a one-page report (in ACM style) to the VBS organizers, which should include the following information (a style-sheet will be provided):

  • Does the system use components or optimizations that are not described in the VBS paper? If so, these should be summarized.
  • How was the experience with the tool during the VBS – what worked well or worse than expected, and what were the challenges, problems, and surprises?
  • How did the team interact with the system during the VBS – which features were used most, how helpful were the interaction means, and how successfully did they work?

These one-page reports will be published on arXiv.org and referenced on the VBS website.