One can only agree with Euan Adie that “the way we present genomic and proteomic data on the web sucks” (read post on Nascent). And this holds for biological networks: depicting protein-protein interactions as colorful hairballs makes for impressive figures but is not necessarily very useful. While the network is a powerful abstract representation of biological processes, it is trivial to say that a graph (with its jungle of nodes and edges) does not even remotely resemble an actual living cell as seen under the microscope… In the crude visualization of biological processes as simple graphs, space, time, multi-scale structure and biological context are all missing.
Charles DeLisi attempts to tackle the problem of visualizing complex multi-scale biological networks by introducing metagraphs (Hu et al, 2007, Nature Biotech 25:547). Metagraphs have so-called metanodes in addition to simple nodes. A metanode contains a subgraph composed of child (meta)nodes, which are revealed only when the metanode is in its “expanded” state. Edges link simple nodes, while metaedges link “contracted” metanodes and are inferred from the links carried by the nodes of the underlying subgraph. A key distinctive feature of metagraphs is that several instances (carrying different “labels”) of the same node can be shared between distinct metanodes (eg when a protein belongs to different complexes).
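The metagraph idea can be sketched in a few lines of code. The following is a minimal illustration, not VisANT’s actual implementation: all class and function names are invented here. It shows the two points above, namely that a metaedge between contracted metanodes is inferred from the simple edges of their underlying subgraphs, and that instances of the same node can appear in several metanodes.

```python
class Node:
    """A simple node; `label` distinguishes instances of the same
    underlying entity shared between different metanodes."""
    def __init__(self, name, label=None):
        self.name = name
        self.label = label

class MetaNode:
    """A metanode holding a subgraph of child (meta)nodes, revealed
    only when `expanded` is True."""
    def __init__(self, name, children):
        self.name = name
        self.children = children   # list of Node or MetaNode
        self.expanded = False

    def leaves(self):
        """All simple nodes of the underlying subgraph, recursively."""
        for child in self.children:
            if isinstance(child, MetaNode):
                yield from child.leaves()
            else:
                yield child

def has_metaedge(m1, m2, simple_edges):
    """A metaedge links two contracted metanodes whenever any simple
    edge connects their underlying subgraphs."""
    names1 = {n.name for n in m1.leaves()}
    names2 = {n.name for n in m2.leaves()}
    return any((a in names1 and b in names2) or
               (a in names2 and b in names1)
               for a, b in simple_edges)

# Two hypothetical complexes sharing protein P2 (one instance each,
# with different labels):
complex_a = MetaNode("complex A", [Node("P1"), Node("P2", label="in A")])
complex_b = MetaNode("complex B", [Node("P2", label="in B"), Node("P3")])

# A single protein-protein interaction between the two subgraphs is
# enough to induce a metaedge between the contracted metanodes:
print(has_metaedge(complex_a, complex_b, [("P1", "P3")]))
```

Expanding a metanode would then simply switch `expanded` to True and draw the children with their simple edges instead of the inferred metaedge.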
Metanodes can directly represent the multi-scale modular hierarchy of a network, incorporate biological context (eg sets of proteins sharing the same GO annotation) or even represent groups of orthologous genes. With this representation, implemented in the software VisANT (http://visant.bu.edu/), “semantic zooming” into the network becomes possible. This is similar to zooming into Google Maps, where not only the scale of the map changes but also the resolution of the labels and various abstract annotations, as is best seen in the “hybrid” mode superposing annotations on the satellite picture.
This analogy with Google Maps also illustrates the limits of current network representations as “maps” of cellular processes. There is still a long way to go before the graphs representing biological networks can really be mapped onto cellular structures, yielding not only better visualization tools but also more realistic computational models of the whole cell. In a sense, a “Google Cell” should also have a “hybrid” mode, where the abstract representation can be superposed onto the “satellite image” version of the biological object being visualized. As if tiny networks were folded inside each voxel of a full 3D reconstruction of a cell, such as the one recently published by Antony and colleagues (Höög et al, 2007, see post). Something like integrating interaction networks, “ORFeome”-like datasets and electron tomography…