Eight teams had answers to that question that were good enough to get them each part of $2M from the Joint Information Systems Committee (JISC) in the United Kingdom, the National Endowment for the Humanities (NEH) and the National Science Foundation (NSF) in the US, and the Social Sciences and Humanities Research Council (SSHRC) in Canada. The winners were announced last Thursday night in Ottawa.
The Digging into Data Challenge is a grant competition designed to spur the use of data mining and advanced computational techniques in the humanities, where much research is still carried out by real people touching real books.
“Trying to manage a deluge of data and turn bits of information into useful knowledge is a problem that affects almost everyone in today’s digital age,” said NEH Chairman Jim Leach. “With this international grant program, NEH is hoping to seed projects that will not only benefit researchers in the humanities, but also lead to shared cultural understanding.”
The winning proposers aren’t all looking at the same data — some are searching for patterns in a body of 53,000 18th-century letters, while others (led by members from Tufts University) will be digging through a literal pile of a million books.
The Tufts-led project supports the creation of a framework for producing “dynamic variorum” editions of classical texts, which let the reader automatically link not only to variant editions but also to relevant citations, quotations, people, and places found in a digital library of more than one million primary and secondary source texts.