This is Part 3 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.
Visual Analytics (VA) tools need to integrate and visualize different data types. But the integration of this data needs to be “based on their meaning rather than the original data type” in order to “facilitate knowledge discovery through information synthesis.” However, “many existing visual analytics systems are data-type-centric. That is, they focus on a particular type of data [...].”
We know that different types of data are regularly required to conduct solid analysis, so developing a data synthesis capability is particularly important. This means the ability to “bring data of different types together in a single environment [...] to concentrate on the meaning of the data rather than on the form in which it was originally packaged.”
To be sure, information synthesis needs to “extend beyond the current data-type modes of analysis to permit the analyst to consider dynamic information of all types in a seamless environment.” So we need to “eliminate the artificial constraints imposed by data type so that we can aid the analyst in reaching deeper analytical insight.”
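One way to picture this idea of synthesis “based on meaning rather than the original data type” is a shared semantic schema that items of any type are mapped into, so the analyst can group by entity and time instead of by format. The sketch below is purely illustrative; the names (`SemanticRecord`, `normalize`) are my own and do not come from the report.

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical sketch: a common "semantic record" that strips away the
# original data type, keeping it only as provenance, so analysis can
# organize around meaning (entity, time) rather than format.

@dataclass
class SemanticRecord:
    entity: str       # who/what the item is about
    timestamp: str    # when it was observed
    content: Any      # the meaningful payload
    source_type: str  # provenance only, not the organizing axis

def normalize(item: dict) -> SemanticRecord:
    """Map a type-specific item into the shared semantic schema."""
    if item["kind"] == "text":
        return SemanticRecord(item["about"], item["date"], item["body"], "text")
    if item["kind"] == "image":
        return SemanticRecord(item["subject"], item["taken"], item["caption"], "image")
    raise ValueError(f"unknown kind: {item['kind']}")

items = [
    {"kind": "text", "about": "bridge-12", "date": "2009-06-01",
     "body": "cracks observed"},
    {"kind": "image", "subject": "bridge-12", "taken": "2009-06-02",
     "caption": "close-up of crack"},
]
records = [normalize(i) for i in items]

# Group by entity rather than by data type:
by_entity = {}
for r in records:
    by_entity.setdefault(r.entity, []).append(r)
print(sorted(by_entity))  # ['bridge-12']
```

The point of the sketch is that once both items live in one schema, a text report and a photograph of the same bridge land in the same analytic bucket, which is the “single environment” the report calls for.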
To this end, we need breakthroughs in “automatic or semi-automatic approaches for identifying [and coding] content of imagery and video data.” A semi-automatic approach could draw on crowdsourcing, much like Ushahidi’s Swift River.
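A minimal sketch of what “semi-automatic” coding could look like, assuming a common triage pattern rather than Swift River’s actual pipeline: a machine classifier proposes a label with a confidence score, and only low-confidence items are routed to crowd workers, whose votes are resolved by simple majority. All names and the threshold are illustrative.

```python
from collections import Counter

# Assumed triage pattern (not from the report or Swift River):
# confident machine labels pass through; uncertain items go to the crowd.
CONFIDENCE_THRESHOLD = 0.8

def code_item(machine_label, machine_confidence, crowd_votes):
    """Return the final label for one image/video item, or None if it
    still needs human review."""
    if machine_confidence >= CONFIDENCE_THRESHOLD:
        return machine_label  # fully automatic path
    if not crowd_votes:
        return None           # queue the item for crowd workers
    label, _count = Counter(crowd_votes).most_common(1)[0]
    return label              # semi-automatic: majority vote decides

print(code_item("flood", 0.95, []))                       # flood
print(code_item("fire", 0.40, ["riot", "fire", "fire"]))  # fire
```

The design choice here is that crowdsourcing absorbs exactly the cases the automatic coder is unsure about, which is where human perceptual judgment adds the most value.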
In other words, we need to develop visual analytics tools that do not force the analyst to “perceptually and cognitively integrate multiple elements. [...] Systems that force a user to view sequence after sequence of information are time-consuming and error-prone.” New techniques are also needed to “do away with the separation of ‘what I want and the act of doing it.’”