IEEE Visualization 2008 Design Contest
The goal of this year's contest is to design a visualization that is effective at answering real domain-science questions on real data sets. The use of existing visualization tools and research prototypes, or combinations of such tools, is perfectly acceptable in arriving at an effective design. You will of course want to give credit to those who built the tools you used, but the focus is on effective visualization.
We asked the scientists what they want to know about the data -- not how to display it, but what they hope to learn from the visualizations. Here's what they want to know, with each question tagged by its relative importance. The questions are ordered from simpler to more difficult to visualize:
Notes on questions 1 and 3: Ambient gas is very cool (72 K). Shocked gas is around 2,000-3,000 K. Ionized gas is much hotter (around 20,000 K). Temperature thus indicates where shock waves and radiation are present.
Notes on questions 4 and 5: There is no straightforward "turbulence" calculation, but it is known that areas of high turbulence will have high curl (at the scale of the turbulence). Curl can be computed from the velocity field. A description of how to compute curl magnitude and an example curl-magnitude-computing program can be found on the data description page. Teams are welcome to come up with their own turbulence-estimating data set derived from velocity; be sure to document the calculation being used.
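The curl-magnitude idea above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the official program from the data description page: it assumes the three velocity components are sampled on a regular grid as 3-D arrays with axes ordered (x, y, z) and uniform grid spacing.

```python
import numpy as np

def curl_magnitude(vx, vy, vz, spacing=1.0):
    """Magnitude of the curl of a velocity field on a regular grid.

    vx, vy, vz: 3-D arrays with axes (x, y, z) -- an assumed layout;
    check the contest's data description page for the actual format.
    """
    # Partial derivatives via central differences (one-sided at edges).
    dvx_dy = np.gradient(vx, spacing, axis=1)
    dvx_dz = np.gradient(vx, spacing, axis=2)
    dvy_dx = np.gradient(vy, spacing, axis=0)
    dvy_dz = np.gradient(vy, spacing, axis=2)
    dvz_dx = np.gradient(vz, spacing, axis=0)
    dvz_dy = np.gradient(vz, spacing, axis=1)
    # curl v = (dvz/dy - dvy/dz, dvx/dz - dvz/dx, dvy/dx - dvx/dy)
    cx = dvz_dy - dvy_dz
    cy = dvx_dz - dvz_dx
    cz = dvy_dx - dvx_dy
    return np.sqrt(cx**2 + cy**2 + cz**2)

# Sanity check on synthetic data: solid-body rotation v = (-y, x, 0)
# has constant curl (0, 0, 2), so the magnitude is 2 everywhere.
x, y, z = np.meshgrid(np.arange(8.0), np.arange(8.0), np.arange(8.0),
                      indexing="ij")
mag = curl_magnitude(-y, x, np.zeros_like(x))
```

Because the test field is linear, the finite differences are exact and `mag` is 2.0 at every grid point, including the edges.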
There are two metrics for evaluation: the effectiveness of the visualization and its completeness. An effective solution clearly communicates the variables under display; such a display clearly tells the story of what occurred within the data set and helps an expert viewer answer the domain questions. A complete solution discusses the significant features of the data and how they are depicted by the visualization. It includes legends and color maps to indicate quantitative measurements to an uninformed viewer. It describes the techniques and software systems used to produce the visualization. The effectiveness measure counts for 80% of the total points and the completeness measure for 20%.
Evaluating Effectiveness (80% of total score)
The judges for this part of the score will be the domain scientists who submitted the data and questions.
Effectiveness on each of the five questions will be evaluated on a five-point scale, with 5 being "I could see the answer immediately and clearly" and 1 being "I know the answer already, but I still can't see it in the visualization." To control for learning effects, we intend to have each judge view the submissions in a different, randomly selected order. Each judge will read the PDF file accompanying the submission before judging the video and/or still-image submissions, so that they will be familiar with the techniques and with how the authors believe the visualization is best viewed to answer the questions.
The total effectiveness score will be the sum of the individual scores, weighted by the relative-importance values placed on the questions (the point scores). These point scores reflect the relative importance of the questions to the scientists, not the relative ease with which each can be displayed.
The mean total score from all judges will be used as the effectiveness score.
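The effectiveness calculation described above is a weighted sum per judge, then a mean across judges. A minimal sketch, using hypothetical weights and ratings (the actual per-question point values are on the contest questions list, not invented here):

```python
# Hypothetical relative-importance weights for the five questions
# (illustrative only -- see the contest page for the real point values).
weights = [30, 25, 20, 15, 10]

# Each judge rates every question on the 1-5 scale.
ratings_by_judge = [
    [5, 4, 3, 4, 2],
    [4, 4, 4, 3, 3],
]

def judge_total(ratings, weights):
    # Sum of per-question ratings, weighted by relative importance.
    return sum(w * r for w, r in zip(weights, ratings))

totals = [judge_total(r, weights) for r in ratings_by_judge]
effectiveness = sum(totals) / len(totals)  # mean over all judges
```

With these sample numbers the two judges' weighted totals are 390 and 375, giving an effectiveness score of 382.5.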
Evaluating Completeness (20% of total score)
The judges for this part of the score will be practicing visualization researchers.
Completeness will be evaluated on a five-point scale, with 5 being "I could implement this and get these same pictures and know what settings to put on all of the parameters" and 1 being "I have no idea how to make this picture."
The mean total score from all judges will be used as the completeness score.
Determining winning entries
The final score for each team will be determined by adding 80% of the effectiveness score to 20% of the completeness score. The scores will be sorted from highest to lowest.
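The 80/20 combination and the ranking can be written out directly; team names and scores below are hypothetical:

```python
def final_score(effectiveness, completeness):
    # 80% effectiveness + 20% completeness, as described above.
    return 0.8 * effectiveness + 0.2 * completeness

# Illustrative entries (not real teams or scores).
scores = {
    "team A": final_score(4.5, 3.0),  # 0.8*4.5 + 0.2*3.0 = 4.2
    "team B": final_score(4.0, 4.0),  # 0.8*4.0 + 0.2*4.0 = 4.0
}

# Sort from highest to lowest final score.
ranking = sorted(scores, key=scores.get, reverse=True)
```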
The highest-scoring entry will be evaluated by a group consisting of the current judges, the conference chair, and judges of past contests to determine if it is of sufficient merit to deserve an IEEE Visualization award. If so, the first-place prize will be awarded to the team that submitted this entry.
The entry with the second-highest score will be similarly evaluated and if it is of sufficient merit the team that submitted it will be awarded second place.
The entry with the third-highest score will be similarly evaluated and if it is of sufficient merit the team that submitted it will be awarded third place.
Breaking Ties: In the case of identical numerical final scores, the team with the higher effectiveness score will be selected. In the case of identical total and effectiveness scores, the team with the higher score on the question with the largest relative-importance score will be selected. In the case of identical scores on all questions, a coin toss will be used.
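This tie-breaking procedure amounts to comparing teams on a tuple of scores in priority order, falling back to a random draw only when every component is equal. A sketch with hypothetical team records (field names are illustrative):

```python
import random

# Two hypothetical teams tied on final score.
teams = [
    {"name": "A", "final": 4.2, "effectiveness": 4.0, "top_question": 5},
    {"name": "B", "final": 4.2, "effectiveness": 4.5, "top_question": 3},
]

def rank_key(team):
    # Compare final score first, then effectiveness, then the score on
    # the highest-importance question; random() stands in for the coin
    # toss when all three are identical.
    return (team["final"], team["effectiveness"],
            team["top_question"], random.random())

ranked = sorted(teams, key=rank_key, reverse=True)
```

Here both teams share a final score of 4.2, so team B wins on the higher effectiveness score (4.5 vs. 4.0) and the later components are never consulted.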