Tuesday, December 15, 2009

Comments

Commented on Preserving Tim's blog (which has a good summary of this week's readings)

Monday, December 14, 2009

Reading Notes, Dec. 15

Cloud Computing:
There are different forms of cloud computing, e.g., utility computing (offering storage and virtual servers); web services (rather than full applications); platforms for building your own services; and managed service providers (MSPs), such as remote virus scanning and anti-spam. The article was particularly useful in conjunction with a recent article in OCLC Systems & Services on "Library in the Clouds," discussing the pros and cons of cloud computing for libraries.

The Future of the Library
The article argues that as more people are able to find information online, they are less likely to visit libraries, and libraries will have to redefine their roles. I found the article a bit over-general and would have liked to see more discussion of the differences between different types of libraries. I also did not quite understand why the author thought libraries weren't culture-based in the past.

Tuesday, December 8, 2009

Muddiest Point Dec. 2

I don't have a muddiest point this week.

Monday, November 30, 2009

Comments

Commented on Tiffany Brand's blog and Preserving Tim's blog...

Muddiest Point

Are there any non-commercial, open source search engines that provide an alternative to Google?

Reading Notes Dec. 1, 2009

Using a Wiki to Manage a Library Instruction Program
This article discusses a program to integrate wikis into library instruction at East Tennessee State University, where the author works as a reference librarian. Wikis are used to plan and expand on library instruction sessions: professors can expand on specific questions they have during information sessions, or can help students specify assignments for an instruction session.
Wikis can also be used as a collaborative tool for exchanging information among instruction librarians. In addition, the article includes some helpful links to commercial sites with software that lets you create your own wiki, as well as some background literature on wikis.

Creating the Academic Library Folksonomy

The article discusses how users can create folksonomies -- taxonomies of keywords created and shared by many users -- and how such folksonomies, which can help catalog useful internet sites, might be integrated into academic libraries. This can provide quality indices of web sites and also bring "grey" literature to light. The article gives a few examples of libraries that are trying out social tagging, e.g., the University of Pennsylvania and Stanford University. Services such as Delicious or CiteULike allow users to share these tags.

The article also discusses, very briefly, some of the risks and concerns around social tagging, for example "spagging" (spam tagging), the variation of tags, and the lack of controlled vocabulary. There are other concerns the article does not mention, for example that tagging might give institutions a rationale to cut cataloging jobs -- a discussion that has been going on at the Library of Congress for a while -- and the ongoing debate over whether it is useful to have controlled vocabularies to describe collections.

Weblogs: Their Use and Application in Science and Technology Libraries


Discusses the history of weblogs, beginning in the early 1990s. Weblogs are web sites that resemble personal journals, allow visitors to comment on entries, and include an archive of past posts. The authors highlight that weblogs really took off in the late 1990s, when software packages became available that allowed users to build their own blogs. The article presents weblogs as a collaborative tool in science and technology libraries, with some advantages over, for example, email, since a blog is more easily searchable and can be organized by subject. RSS feeds can serve as a reminder to users to visit blogs (see the little sketch below). The article also explores reference blogs as an alternative to email. In blog instruction, librarians can play an important role and train students in setting up and maintaining blogs. The article is from 2004 -- since then, many libraries have integrated blogs into their services.
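
Here is a tiny sketch of my own (not from the article, and the feed address is made up) of how a program could read a blog's RSS feed and list new posts, using only Python's standard library:

# Minimal RSS reader sketch: fetch a feed and print each item's title and link.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.org/library-blog/rss.xml"  # hypothetical feed address

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 nests <item> elements inside <channel>; each item carries a title and a link.
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="(untitled)")
    link = item.findtext("link", default="")
    print(title, "->", link)

This is roughly what a feed reader does on a schedule, which is why RSS works as a reminder mechanism.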

Wikipedia

Interview with Jimmy Wales on Wikipedia -- the question here is to what extent the ideal of Wikipedia matches the reality. There was an article in Time magazine, which Donna Guerin recommended for our LIS 2000 class, about some of the discrepancies: "Wikipedia: A Victim of Its Own Success?"

http://www.time.com/time/magazine/article/0,9171,1924492-1,00.html

These discrepancies suggest taking Jimmy Wales's claims about Wikipedia as a diverse, global, bottom-up project with a grain of salt, and reflecting instead on the persistent hierarchies in the way knowledge is created and organized.

Monday, November 16, 2009

Comments

Commented on Kristine Harveaux-Lundeen's blog and Letisha Goerner's 2600 blog

Sunday, November 15, 2009

Reading Notes November 17

Shreeves, S. L., Habing, T. O., Hagedorn, K., & Young, J. A. (2005). Current developments and future trends for the OAI protocol for metadata harvesting. Library Trends, 53(4), 576-589.

This text was not easy to understand, but here are some excerpts:

"The mission of the Open Archives Initiative (...) is to "develop and promote interoperability standards that aim to facilitate the efficient dissemination of content" (Open Archives Initiative, n.d. a). The Protocol for Metadata Harvesting, a tool developed through the OAI, facilitates interoperability between disparate and diverse collections of metadata through a relatively simple protocol based on common standards (XML, HTTP, and Dublin Core)."

"The OAI protocol requires that data providers expose metadata in at least unqualified Dublin Core; however, the use of other metadata schmas is possible and encouraged. The protocol can provide access to parts of the "invisible Web" that are not easily accessible to search engines
(such as resources within databases) (Sherman & Price, 2003) and can provide ways for communities of interest to aggregate resources from geographically diffuse collections."

-- The data is searchable and browsable without any manual cataloging of the various OAI repositories.
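
To make this concrete for myself, here is a minimal sketch (my own, not from the article; the repository address is made up) of a single OAI-PMH harvesting request in Python, asking a repository for records in unqualified Dublin Core:

# Minimal OAI-PMH harvesting sketch: one ListRecords request, then print the dc:title values.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "http://example.org/oai"  # hypothetical OAI-PMH base URL
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}  # unqualified Dublin Core

with urllib.request.urlopen(BASE_URL + "?" + urllib.parse.urlencode(params)) as response:
    root = ET.parse(response).getroot()

# Namespaces defined by the OAI-PMH and Dublin Core specifications.
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

for record in root.iter(OAI + "record"):
    for title in record.iter(DC + "title"):
        print(title.text)

A real harvester would repeat requests like this (following resumption tokens) across many repositories and aggregate the results.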

-- I could not pull up the text on how search engines work today, and will continue the reading tomorrow, once I get access to another computer.

Michael K. Bergman, White Paper: The Deep Web: Surfacing Hidden Value
Journal of Electronic Publishing, vol. 7, no. 1, August 2001
DOI: http://dx.doi.org/10.3998/3336451.0007.104

-- This article is very enlightening -- it covers a technology (DeepPlanet) that can search the deep web

-- Most of the Web's information is buried far down on dynamically generated sites, and standard search engines never find it -- it is about 500 times the size of the surface web

-- Traditional search engines create their indices by spidering or crawling surface Web pages. To be discovered, a page must be static and linked to other pages. Traditional search engines cannot "see" or retrieve content in the deep Web

--The deep Web is qualitatively different from the surface Web. Deep Web sources store their content in searchable databases that only produce results dynamically in response to a direct request.

-- The deep web is a much-coveted commodity

-- A study by the NEC Research Institute found that search engines such as Google and Northern Light crawl only about 16% of the Web's content

-- Search engines obtain their listings in two ways: authors may submit their own Web pages, or the search engines "crawl" or "spider" documents by following one hypertext link to another. The latter returns the bulk of the listings. Crawlers work by recording every hypertext link in every page they index while crawling.

-- The crawls used to be indiscriminate, but "the most recent generation of search engines (notably Google) have replaced the random link-following approach with directed crawling and indexing based on the "popularity" of pages. In this approach, documents more frequently cross-referenced than other documents are given priority both for crawling and in the presentation of results. This approach provides superior results when simple queries are issued, but exacerbates the tendency to overlook documents with few links."

-- The problem here: without a link from another Web document, a page will never be discovered (see the little crawler sketch after these notes).

-- They don't use the term invisible web -- it is not invisible, but rather unindexable

-- the article continues to describe the study in more detail

--it is impossible to completely index the deep content, but new technologies need to be developed to search the complete web

Searching must evolve to encompass the complete Web.
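
To make the link-following idea concrete, here is a minimal toy crawler of my own (not the technology described in the white paper, and the seed page is hypothetical); it only ever reaches pages that some other page links to, which is exactly why form-based, dynamically generated content stays out of reach:

# Minimal crawler sketch: breadth-first link following from a seed page.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects the href targets of <a> tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, limit=20):
    """Record every page reachable through static hyperlinks, up to a limit."""
    seen, queue = {seed}, deque([seed])
    while queue and len(seen) < limit:
        url = queue.popleft()
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except OSError:
            continue
        parser = LinkCollector()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute not in seen:  # a page nobody links to is never discovered
                seen.add(absolute)
                queue.append(absolute)
    return seen

# print(crawl("http://example.org/"))  # hypothetical seed page

A database-backed page that only appears in response to a filled-in search form never shows up in the crawl, no matter how long it runs.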

Muddiest Point November 10

I wasn't quite sure whether the University of Pittsburgh has an institutional repository -- if it does, how can you access it?

Monday, November 2, 2009

Comments

Commented on Preserving Tim's blog and Letty's LIS 2600 blog.

Muddiest Point, Week 10

I don't have a muddiest point for last week, but I do have several for this week. Here is one:
The article by Bergholz on XML was very helpful. It was, however, somewhat dated (from 2000). What has happened since the article was published -- have XML Schemas replaced DTDs? How has the development of RDF and the DOM advanced since 2000?

Reading Notes Week 10 (November 10)

Bergholz, XML Tutorial
The article by Andre Bergholz is the clearest of all the assigned articles, and I understood the other articles better after reading it. I really liked that he included examples, and used library catalog examples in particular, so you can see how XML is applied in the library environment. It is, however, somewhat dated (from 2000), so it would be nice to read an up-to-date treatment of XML.
-- makes clear that HTML is layout oriented, whereas XML is structure oriented
-- both SGML and HTML influenced the development of XML
-- XML is about meaningful annotation
-- syntactically, XML looks like HTML
-- a well-formed document begins with a prologue and has exactly one root element; additional processing instructions can be added (see the small sketch after this list)
-- DTDs define the structure of XML documents
-- DTD elements can be nonterminal or terminal
-- elements can have zero or more attributes
-- extensions to XML include namespaces and addressing and linking capabilities
-- namespaces and DTDs do not work well together
-- in HTML, links only point to a document
-- HTML links are one-way
-- XML extends HTML's linking capabilities
-- extended links connect more than one document
-- there is also an Extensible Stylesheet Language (XSL), which consists of two languages: a transformation language and a formatting language
-- DTDs have disadvantages, and XML Schemas address these disadvantages
-- an XML Schema is itself well-formed XML
-- the goal is to replace DTDs
-- the development of RDF and the DOM also affects XML
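
To keep Bergholz's main points straight, here is a small sketch of my own (not from the tutorial): a well-formed, structure-oriented XML fragment in the spirit of his library catalog examples, parsed with Python's standard library.

# A well-formed document: a prologue followed by exactly one root element.
import xml.etree.ElementTree as ET

catalog = """<?xml version="1.0"?>
<catalog>
  <book id="b1">
    <title>Introducing XML</title>
    <author>Martin Bryan</author>
    <year>1998</year>
  </book>
</catalog>"""
# A DTD line such as  <!ELEMENT book (title, author, year)>  would declare this
# structure; "id" is an attribute, and the tags describe meaning, not layout.

root = ET.fromstring(catalog)
for book in root.findall("book"):
    print(book.get("id"), book.findtext("title"), book.findtext("author"))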

Ogbuji, A Survey of XML Standards

-- the world of XML is growing, with a huge variety of standards and technologies that interact in complex ways
-- points out that it can be difficult for beginners to navigate the most important aspects of XML, and for users to keep track of new entries and changes in the space
-- includes a list of resources if we want to explore this in more detail

XML Schema Tutorial

-- XML Schemas are the successors of DTDs (document type definitions)
-- XML Schemas secure data communications
-- they are extensible
-- the schema element is the root of every schema
-- XML Schemas define the elements of XML files
-- elements with attributes are complex elements
-- restrictions on element and attribute values are called facets
-- a complex element contains other elements and/or attributes (see the sketch after this list)
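
And here is a small sketch of my own showing an XML Schema at work. It assumes the third-party lxml library is installed, since Python's standard library parses XML but does not validate against schemas:

# An XML Schema is itself well-formed XML; it declares the structure and types
# that instance documents must follow.
from lxml import etree

xsd = etree.XMLSchema(etree.fromstring("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="book">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="year" type="xs:integer"/>
      </xs:sequence>
      <xs:attribute name="id" type="xs:string"/>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""))

good = etree.fromstring('<book id="b1"><title>Introducing XML</title><year>1998</year></book>')
bad = etree.fromstring('<book id="b2"><year>not-a-number</year></book>')

print(xsd.validate(good))  # True: matches the declared structure and types
print(xsd.validate(bad))   # False: <title> is missing and <year> is not an integer
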
Martin Bryan. Introducing the Extensible Markup Language (XML)

This article was very difficult to follow for a non-programmer. What I understood from it, however, was that XML defines the structure of a document rather than its appearance (as HTML does). Therefore, XML is ideal for databases, since it can pull different elements from different locations together. “By storing data in the clearly defined format provided by XML you can ensure that your data will be transferable to a wide range of hardware and software environments. New techniques in programming and processing data will not affect the logical structure of your document's message.”

Sunday, October 25, 2009

Reading Notes Week 9 (Oct. 27)

Goans, D., Leach, G., & Vogel, T. M. (2006). Beyond HTML: Developing and re-imagining library web guides in a content management system.

This article addresses and evaluates a common problem at many libraries -- a multiplicity of static HTML content guides put together by different liaison librarians. By developing a CMS, the library at GSU was able to separate content, which was collected in a MySQL database, from layout and design, and create a more coherent, dynamic system. The system continued to give librarians flexibility when working with their content guides. The article also highlighted that training both users and librarians is instrumental when implementing a CMS.

HTML Cheatsheet -- this is a very straightforward site that gives you an overview of HTML code so you can fix the code in your web pages; working directly with HTML sometimes gives you more control over your site than working with design software like Dreamweaver alone.

W3Schools Cascading Style Sheets Tutorial: helpful for a quick introduction to CSS.

Muddiest Point Week 8/9

How do the ranking systems of different search engines -- e.g., Yahoo or Chrome -- differ from Google's algorithm?

Sunday, October 18, 2009

Muddiest Point (Readings Oct. 20)

Where does Google find the information for its indices of web sites -- on the individual servers?

Reading Notes, Week 7/8 (Oct. 20)

Brin and Page on Google

I don’t like much of Google’s pompous rhetoric in general, and you can’t expect the founders of Google to be more modest or critical about their company, but beyond saying that Google is great, they didn’t really say much of substance about the way the search engine works. In terms of understanding the inner life of Google, the text by Brin and Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine” (a text from LIS 2000), was more enlightening. Still, there is the fundamental problem of the lack of transparency about Google’s algorithm and the way pages are ranked, which contradicts this rhetoric of openness and all-inclusiveness. That rhetoric also glosses over the fact that, while Google may index millions of sites, a lot of other sites are missed because of the way pages are ranked. Google “will never exhaust the store and meaning of the Web,” wrote Terrence Brooks in a text we read in LIS 2000, “The nature of meaning in the age of Google” (Information Research, vol. 9, no. 3, April 2004).

I was somewhat puzzled by their statement that Google is pretty much everywhere there is power, which leaves out a critical piece of the chain: the phone and cable companies, the internet service providers, on which almost all access – and lack of access – depends. Aggregated on a screen at Google headquarters in Mountain View it may look as if Google is everywhere, but of course this is not true, certainly not in the US, and not in Europe, where a lot of people simply do not have, and cannot afford, internet access through expensive providers. And the way they talked about the lack of Google use in Africa, it sounded as if it were one big market that can and should be conquered – not much reflection about the many digital divides here, either.

How Internet Infrastructure Works

-- network hierarchy, connected through POPs and NAPs
-- the internet as a collection of huge corporate networks that all communicate with each other at NAPs
-- they rely on backbones (fiber optic trunk lines) and routers to talk to each other
-- who pays for the backbones?
-- you can identify the network and the host through the IP address
-- DNS (domain name servers) convert domain names into IP addresses (see the little sketch after these notes)
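
Here is a tiny sketch of my own (the host names are just examples I picked) of that last step, resolving a domain name to an IP address with Python's standard library:

# DNS lookup sketch: ask the resolver for the IP address behind each domain name.
import socket

for host in ("www.pitt.edu", "www.example.org"):
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror as error:
        print(host, "-> lookup failed:", error)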

Andrew Pace, Dismantling Integrated Library Systems, Library Journal, 2/1, 2004
-- The Web is fueling changes in the ILS
-- Problems with the interoperability of ILSs: "Today, interoperability in library automation is more myth than reality. Some of us wonder if we may lose more than we gain in this newly dismantled world."
-- There are many vendors and many systems in many libraries, and only pieces operate together
-- There are some attempts to start from scratch
-- Better systems cost more -- libraries save at the wrong spot
-- Some of the best ideas have come from librarians
-- open source is a possibility, but not fully developed (as of 2004)
-- the future lies in integration and re-integration
-- has this changed?

Comments Week 7 or 8 (Oct. 20)

Commented on:
http://preservingtim.blogspot.com/
And on:
http://jonwebsterslis2600blog.blogspot.com

Saturday, October 3, 2009

Assignment 3: CiteUlike and Zotero

My citeulike url:
http://www.citeulike.org/user/khering/

The import from Zotero/Google Books is called zotero, or file-import-09-10-03 (generic), and the one from CiteULike is called citeulike_import.
My three main collections are digital-preservation, audio-preservation, and curating-oral-history, plus additional tags.

Friday, October 2, 2009

Week 5/6 (Sept. 29-Oct. 6)

Muddiest Point:
This is an issue regarding compression and preservation of different file types that I have encountered when working with multimedia files in an archive. Is it possible for a computer to detect previous versions of a file that was saved as a different file type? For example, if you download an image as a .jpg file, import it into Photoshop, alter it, and then save it as a .gif or .tiff file, can a computer theoretically detect that the .gif file used to be a .jpg file? Such a trail is important for preservation, because the extension might obscure the fact that a file was compressed in a lossy format at some earlier point; at first sight it might look as if a file is compressed in a lossless format, even though it was compressed in a lossy format before...

Reading Notes:

This week was the week of networks, both in LIS 2000 and in this class, and I am feeling a bit networked-out. Like many people, I have become acquainted with LANs at work while crawling under tables to disentangle amorphous cable masses and figure out why a printer got stuck. As far as I understand it, LANs still depend on hard wires (coaxial cables, as we learned). What is important here is that LANs do not depend on leased telecommunication networks, which gives their owners more control. As far as I understood, Ethernet is a technology that enables LANs (the history of Ethernet was actually fairly interesting, too -- I was not aware it was invented by R. Metcalfe at Xerox).

The article on the variety of computer networks was dizzying. I was particularly interested in MANs -- who has control over MANs: cities, towns, or private companies? I didn't know that "internet" is short for "internetwork." I want to know more about the physical infrastructure of the internet -- where are the hubs located? Important is the mix of private and public networks, which politicizes the whole issue: who controls access to these networks? And how are libraries connected to them? Does the University of Pittsburgh have a CAN, by the way?

RFID

This was a very informative article with a pragmatic perspective on RFID: as a technology, it brings advantages, but also new pressures to increase efficiency (and potential job cuts). I also found her reflections on the rationale of libraries for introducing new technology very enlightening: a technology gets introduced, it is around, and libraries have to adapt and deal with these changes. With RFID it sounds as if the development is going in this direction, whether libraries like it or not...

Commented on:
Kristine Harveaux-Lundeen's blog and rsj2600's blog

Friday, September 25, 2009

Week 4 (or 5) Sept. 22-29

Muddiest Point

I would like to learn more about the underlying structure of the various databases I use as a researcher. What does a keyword search in a database look like from a technical perspective? Do keyword searches differ -- from a technical perspective -- from searches based on controlled subject headings, such as the LC subject headings? And how do databases such as JSTOR rank results -- what is the basis for the ranking?

Reading notes:

I enjoyed reading the behind-the-scenes report on the production of the digital Imagining Pittsburgh collection, produced under the lead of the DRL with an IMLS grant. It was not only interesting from a technical perspective, but was also a refreshingly candid project description (often, project reports are rather self-congratulatory and don’t mention the difficulties larger digitization projects pose: collaborating, and dealing with technical challenges, content, and different organizational backgrounds all at once). It highlighted the challenge for the three collaborating institutions of agreeing on shared standards while also serving the individual interests of each institution. A major challenge for many digitization projects is selecting the images to digitize, and Galloway underlined that the subject headings the project created were key for selecting which images to digitize. After last week’s readings, the paragraphs on metadata were interesting; they highlighted how the Dublin Core elements were critical in ensuring the interoperability of the metadata of the individual institutions. From a variety of options, the project participants agreed on using the LC subject headings for description. Galloway also addressed different workflow challenges and the difficulties of working with different databases in different projects. The participants agreed on a quality standard for the production masters (600 dpi) that ensured the uniform quality of the images. (The quality of the production master also makes it possible to view an image at different sizes and to magnify parts of individual images when exploring the collection online.) Finally, the report outlined the challenge of letting users find different ways to explore the collections as a whole as well as individual images.

I also looked at the site, and the reader can do subject searches, keyword searches, and searches by collection. You can explore by time, location, collection, or theme. You can also look at images with just captions or with the full record. It really offers a lot of ways to search and explore.

Has anyone looked at the experimental visualization prototype, the Bungee View, in more detail?

Compression
Compression is a huge issue in multimedia collections, so the articles were very enlightening – I didn’t understand all the details about the different algorithms, but they clarified the principles of compression and, in the section on video compression, the differences between a video file and a video stream. Unfortunately, the link to the part of the article on lossy compression did not work... The advantages of compression are clear – it saves space on expensive storage devices. On the other hand, it also creates huge problems for archives, which have to deal with files in countless formats, many of them compressed, often proprietary, so they aren’t archival quality to begin with. The pressure to compress video files is even greater than for audio files, because they are so big – uncompressed video would take up an enormous amount of server space. Many archives just don’t have the money to buy all that server space and have no choice but to save the files in a compressed format. So, in a different way than for paper, space continues to be a huge problem.
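
To convince myself about the lossless case, here is a tiny sketch of my own (not from the readings): zlib compression is lossless, so decompressing returns exactly the original bytes, which is precisely what a lossy audio or video codec gives up in exchange for smaller files.

# Lossless compression round-trip with zlib from the standard library.
import zlib

original = ("This sentence repeats itself. " * 200).encode("utf-8")
compressed = zlib.compress(original, 9)  # 9 = highest compression level

print(len(original), "bytes before,", len(compressed), "bytes after")
print(zlib.decompress(compressed) == original)  # True: nothing was thrown away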

Just wanted to double-check: once a file is in a compressed, lossy format, you cannot just uncompress it – the missing data is gone, isn’t it?

Comments

Commented on Tiffany J. Brand's blog:
http://tiffanybrandlis2600.blogspot.com/

And Letisha Goerner's blog:

http://letishagoerner2600.blogspot.com/

Friday, September 18, 2009

Week 3 (or 4), Sept. 15-Sept. 22

Muddiest Point

We touched on this only briefly during the lecture, but I'd like to better understand DSpace: how can libraries use it?

Reading Notes

I was glad I was able to read some of the reading notes from other students -- it helped clarify things a bit -- the Dublin Core Data Model article was bewildering.

My ideas about metadata had been rather foggy, so the article on metadata was very enlightening. Metadata can reflect content, context, and structure. In libraries, bibliographic metadata provides access to content, e.g., through indexes and catalogs. In archives and museums, metadata often describes the context of records and enables the authentication of records and objects (e.g., accession records). Metadata is not only description; it also relates to the administration, accession, preservation, and use of collections. Representing the structure of objects is central to metadata development. Systems should reflect content, but should also include additional information about that content. This also helps specify the intellectual integrity of objects and maintain the relationships between objects, which has become a central aim in the preservation of digital objects. It was particularly interesting to read about the role of metadata in digital preservation, as it ensures that digital information -- or information objects, as she calls them -- will survive migrations of hardware and software and will be preserved.

It was helpful to get a sense of the different types of databases, but the article was only slightly clearer to me than the Dublin Core Data Model one (I tried reading both from various directions, but it didn't do much good) -- I'd like to see illustrations of the various database models; maybe that would make things a bit clearer.

Comments

I commented on Annie's LIS 2600 blog:
http://annie-lis2600-at-pitt-blog.blogspot.com/

Thursday, September 10, 2009

Week 2 (Sept. 8-Sept. 15), Assignment 2

Digitizing Images and Uploading them on Flickr

Here is the URL (I think) to my Flickr account. I will add more comments on the digitization process soon.

Flickr URL:
http://www.flickr.com/photos/42354457@N07/sets/72157622333487728/

Notes on the digitization:
I created this mini exhibit on the history of public transportation planning by Pittsburgh citizens in the early 1920s. I don’t have a car, and am often frustrated with public transport options, so I am always interested in learning about people's suggestions for improving public transport in the past. Usually, these suggestions never materialized. So, I found this report in the collections of the university library and scanned it based on the guidelines – master images scanned at 600 dpi. I used greyscale for the text images (much better than black and white) and color for the diagrams. I then used Photoshop, reduced the image size twice to create an image for the screen and an image for the thumbnail, and saved both as JPGs. I then uploaded everything to Flickr and added tags and comments – it was the first time I had created a Flickr account, so it was all new to me. I thought I was the first one to digitize the report, but as always happens when you think you are the first, you are certainly not: I found out that the report had already been digitized as part of the Historic Pittsburgh collections. So much for being the pioneer digitizer.
Some notes and problems: What struck me when creating the master image was how big the file gets when you make a really good scan (it exceeded 100 MB). That is a pretty big file, and it points to the storage problems you run into on your hard drive or storage unit if you make really high-quality scans. And drive space costs money. So, with limited money and hard drive space, you definitely cannot save everything.
I also had some problems writing the tags for Flickr. Flickr kept combining my keywords into one big word, so it looks really strange, like: 60yearsbeforethesubwaytherewasaplan. Weird, isn’t it?
Also, I wasn’t sure whether it is possible to arrange the thumbnails where they belong (on the first page of the exhibit in Flickr). Instead, all my thumbnails show up among the images that are part of the exhibit, and Flickr creates its own thumbnails. Does anyone have any suggestions?
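
For my own notes, here is roughly what that two-step reduction looks like in code -- a minimal sketch that assumes the third-party Pillow library and uses a made-up file name for my master scan:

# Make a screen-sized derivative and a thumbnail from a high-resolution master.
from PIL import Image

master = Image.open("report_master_600dpi.tif")  # hypothetical master scan

screen = master.copy()
screen.thumbnail((1024, 1024))  # shrink in place, keeping the aspect ratio
screen.save("report_screen.jpg", "JPEG", quality=85)

thumb = master.copy()
thumb.thumbnail((150, 150))
thumb.save("report_thumb.jpg", "JPEG", quality=85)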

Reading Notes
I was roughly familiar with the story of Linux, but it was interesting to read how it evolved as a spin-off from UNIX and has been developed as open source software ever since by a large group of programmers. It was also interesting that Linux has become more user-friendly over time. I still find it difficult to understand, though: can you just install it on a Windows machine? I browsed through the Mac OS X/Linux article (I suppose we are not supposed to read these technical pieces line by line), and what struck me was Singh's critical but dispassionate position toward Windows as a client computer and his acknowledgement that most people use Windows, not least because it is cheaper than a Mac. The update on the Windows roadmap reads like a promotion piece – everything is moving forward, steadily improving, if you only trust Microsoft!
Overall, the readings about the different operating systems highlight that compatibility is a huge problem – compatibility between older and newer operating systems, and also compatibility between Mac and Windows – I am not sure about Linux. It also creates a lot of practical problems if you work at a place that has both Macs and PCs – at the archives where I used to work, we often had to transfer multimedia files back and forth between Windows and Macs, which was not always easy.

Muddiest point
The lectures were all very clear; I don’t really have a muddy point. I am just generally wondering about the implications of the constant development and updating of hardware for libraries and archives – should libraries keep old hardware models (e.g., 286 PCs with 5 ¼ inch floppy drives) so that they will be able to read files on old floppies that may contain important files and re-emerge one day?


Comments

I am a little unclear on this, but are we supposed to document where we have left comments?
I commented on Kristine Harveaux-Lundeen's blog:
http://2600kristineharveaux-lundeen.blogspot.com/
And on Letisha Goerner's blog:
http://letishagoerner2600.blogspot.com/

Friday, September 4, 2009

Week 1, September 1st, 2009

Muddiest Point:

One of the muddiest points of this week concerns the issue of defining “information” and “information technology,” and distinguishing these definitions from “knowledge” and “wisdom” (in the DIKW pyramid). I am wondering if it might help to highlight that the definition of “information” and “information technology” depends on the context, on the way the term is used, and on its history. Does information really always have to be “true” or “new,” as one of the definitions implied? Doesn’t it also facilitate the dissemination of “old” information? And, as Tim Notari has pointed out in his blog entry, isn’t it difficult to determine what is “true” information? And does “information” automatically lead to “knowledge”?

Readings

Lied Library @ Four Years, 2005

While not the most scintillating read, I liked the practical perspective on the experience of maintaining the technology at Lied Library and the realistic tone of the article. The article highlighted the function of the library as a gateway providing access to computers. It also hints at the challenges and costs associated with this function and the need to constantly update the technology in the library (the three-year replacement cycle for PCs). I was interested to read about the restrictions on computer usage for community users of the library (who I suppose are users who are not students), which hints at possible consequences of making the expensive technology accessible to students – what is open to some becomes restricted to others. I would be curious to hear how the library is doing now, and how it is dealing with budget cuts and limited resources – how can technology be maintained at this level with limited resources?

Content, Not Containers, OCLC report, 2004

I think among the most interesting points of the report is the statement that libraries should help provide context for content, or information, and should help establish the authenticity and provenance of content. For example, libraries can provide information to patrons on how search engines work, which group or company developed them and for what purpose, what kind of information they include, and what information they may exclude. In this respect, reference librarians can not only help patrons find information, but also explain different ways of accessing information. Who finances OCLC, by the way?

Clifford Lynch, Information Literacy, 1998

While more than ten years old, Clifford Lynch’s appeal to focus information technology literacy not only on the knowledge of computers, and of specific applications, but also on the infrastructure that supports the technology, and on economic, social, political, and historical problems, still resonates today. Beyond becoming familiar with specific applications, this is also what I hope to get out of this class.