How do you handle terabytes of data? That is a question more and more investigators must face on a weekly basis.
Are you one of them? Light-sheet fluorescence imaging, for example, generates so much data in each experimental run that handling and storing the raw output is a challenge. Next-generation sequencing is another, far more ubiquitous, case.
Read the July issue editorial “Byte-ing off more than you can chew” and let us know about your own experience, problems and practical (or impractical) solutions.
This is a paramount problem in everyday updating as well as in research. Besides all the difficulties the article refers to, there is another problem, unfortunately overlooked, which plays a central role: neither all humans nor all animals are born equal (e.g. from my website: 115. Stagnaro Sergio. Single Patient Based Medicine: its paramount role in Future Medicine. Public Library of Science. https://medicine.plosjournals.org/perlserv/?request=read-response 2005). In a few words, despite a tremendous amount of data, it is really difficult to apply them to a single individual!