
Research Methods in Information Science/Printable version

From Wikibooks, open books for an open world


Research Methods in Information Science

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Research_Methods_in_Information_Science

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.

Identifying research problems

Literature review process


Formulating answerable research questions


Booth's SPICE structure


[1]

References

  1. Kloda, Lorie (2016-03-04). "Asking the Right Question". Evidence Based Library and Information Practice. 11 (1(S)): 7–9. doi:10.18438/B8XG9C. ISSN 1715-720X.


Research design

Reliability


Validity


Operationalization


Coding


The time dimension


Choosing a method



The historical method

The historical method employs the systematic study of historical facts to explain human political and social behavior. This method uses comparison to recapture details, personalities, and ideas.

"Although there is a difference of opinion regarding acceptance of historical research as a truly scientific research, as it does not permit enough precision and objectivity, yet there is a consensus that historical research has much to contribute in the field of library and information science."[1]

Historical research can be seen as ethically imperative for LIS practitioners.[2]

The research process might proceed as follows:

  1. Identification and delineation of the problem of historical significance;
  2. Collection of data and information through primary and secondary sources;
  3. Formulation of a hypothesis, if possible;
  4. Verifying the gathered data in terms of authenticity of sources and the validity of their contents;
  5. Organization and analysis of the pertinent data; and
  6. Presentation of facts in a readable form, with proper organization, composition, exposition, and interpretation.

Identifying the problem


Collecting data


Archival materials, newspapers, eyewitness accounts, literary writings, catalogs, non-documents (e.g. archaeological and geological remains)

Primary vs. secondary vs. tertiary sources


Formulating a hypothesis


Formulating a hypothesis can be very beneficial in reducing researcher bias. However, hypotheses in historical research can be very hard to test; causal hypotheses are particularly important and particularly difficult to test.

Verifying data


Organizing and analyzing pertinent data


Limitations of the historical method

  • A tendency to over-rely on limited evidence or on secondary sources
  • Arguments from silence
  • A tendency to choose overly broad research problems
  • A tendency to read the present into the past

References



The research survey

Surveys are valued because their results are generalizable and their methods replicable.

Instrumentation


Knight, G. P., Tein, J. Y., Shell, R., & Roosa, M. (1992). The cross‐ethnic equivalence of parenting and family interaction measures among Hispanic and Anglo‐American families. Child Development, 63(6), 1392-1403.

Limitations


Surveys can be obtrusive. Survey respondents can be self-selecting.


Examples



The descriptive survey

More and more librarians are called upon to justify the educational and cultural significance of library service. Limited resources often force librarians to focus critical eyes on library activities, technologies, procedures, and expenditures. Revised concepts of the functions of the library demand new policies and procedures, as well as re-examination of the old ones.

Besides their usefulness in producing generalizable, replicable knowledge, surveys can also aid in this process. A survey can serve as a means to a critical examination of a library and its services.

The survey method


Goals


A survey has four goals:

  1. To collect all facts pertinent to the problem under investigation. Observations and opinions are useful, but they must be based upon factual evidence in order to be valid.
  2. The survey must be a critical analysis. All facts should be explored with an attitude of healthy curiosity, an attitude which is essential to progress. The library's services, its routines, its organization -- all should be questioned.
  3. The survey should be a means of interpreting the library to the public. While people are often more attentive to the outside expert than to the employees of the library, an impartial investigation, even if carried on by the library itself, is likely to carry weight with administrators and public alike. If it is objective, thorough, and impartial, it will provide effective publicity material, besides being a means of publicity itself.
  4. The fourth purpose of the survey is the analysis and interpretation of facts. It is at this point that the survey ceases to be a mere collection of data and becomes the foundation for future improvements.

Types of surveys


Criteria for the survey


A good survey must satisfy several criteria. First, the survey must be an exact and impartial analysis of facts. All pertinent information must be reported impartially and exactly, regardless of the conclusions pointed to. It is just as dishonest to conceal pertinent data as it is to alter them, and either procedure will make the survey a piece of vicious and misleading propaganda.

A second criterion is that the facts presented must be typical if the survey purports to reveal typical situations. This does not mean that unusual facts must be excluded, but it does mean that they should be clearly labeled as nonrepresentative. An entirely false picture may easily be presented by an accumulation of nontypical items, and it is, therefore, especially important to select those items which have a direct bearing on the points in question.

Third, the data presented must be reliable, and those which cannot be verified should be omitted. In cases where much material is obtained by sampling, it is important to see that a representative and reliable sample is secured.

Finally, the mere assembly of facts does not constitute a survey. Facts are useful only to the degree with which they are assimilated and organized into a logical and systematic whole. There is no substitute for the application of sound judgment and intelligence, and the absence of these factors cannot be compensated for by the completeness of the data or involved statistical manipulation.

Planning the survey


Administering the survey


Some surveys may not require a great amount of publicity for their success. Of this type is the study of the library's administrative processes, designed primarily to aid the librarian in planning the work of the library more effectively. In most cases, however, the survey will receive better cooperation and will attain its aims only if it is widely advertised. One of the best methods for ensuring cooperation and attracting attention is the wise selection of those in charge. To give tone and authority to the investigation, it has sometimes proved satisfactory to have an advisory committee carefully selected from leaders in the community. While little actual service is expected of such a group, their very presence ensures a favorable attitude and reception. Every community has leaders whose appointment to such a committee would be news.

A second means of stimulating community interest in the survey is by the use of local talent. Local specialists in fields to be touched by the survey may sometimes be of great help, and the more people concerned with the survey, the better the foundation for a favorable reception later by the community at large.

Organized groups in the community present a third avenue through which interest in the survey may be aroused. In some communities, such agencies have been the actual sponsors of the survey, and in most communities they can be of service in stimulating interest. They may also provide valuable feedback on the survey and on its ability to reach every segment of the target population.

The usual methods of library publicity should be utilized while the survey is being planned and while it is under way (the newspaper, radio, posters and bulletins). No opportunity should be lost to keep survey news continually before the community through these means. Ordinarily, news items will be well received by both the newspapers and the public.

The importance of preparing a favorable public opinion cannot be overstressed, since a sympathetic attitude is essential if the survey is to seek data from outside the library. Since reliable and authentic information is hard to obtain from an apathetic community, publicity is a legitimate concern of the library embarking upon a survey. Moreover, acceptance of the findings and implementation of recommendations will depend upon the interest and enthusiasm of the community, whatever program is recommended.

Finally, the survey should never be considered the end but merely a means to an end: achievement of the library's social purpose. Facts and figures, no matter how carefully collected or thoroughly presented, have no meaning unless directed toward a central objective and analyzed and interpreted with sound judgment. This should be clearly understood at the beginning, while the survey is in progress, and when it is presented finally to the community. It is when each fact is analyzed, compared with other items and applied to the problems at hand that the survey achieves its real aim, the basis of future action and a more effective library program.

Special considerations for online surveys


The Community Backgrounds for Library Service


In order to answer the question, "What type of library service is needed in the community?" a great deal must be known regarding the area to be served. What are the important factors in the library's community environment? What social changes are emerging in this environment? These are questions which require historical, geographical and social data and, hence, an important part of an effective library survey is a study of the community itself.

Much information may already have been collected by other agencies, and while the sources for such information will vary with different communities, a few may be suggested:

  • Reports of the United States census
  • Reports of the local school census
  • Zoning commission
  • Local historical societies
  • Community groups
  • Oral histories
  • City tax authority
  • Law enforcement departments
  • Welfare agencies
  • Institutional research departments

Geographical factors


History


Populations of patrons and potential patrons


Demographic factors


Economic structure


Educational facilities


Local groups


Library Finance


Administration of the Library


Library Personnel


Library Use


Surveying the Community for Potential Library Use


Surveys of Larger Areas


Preparing the Report and Disseminating the Findings


[3]


Observational methods

The interview


The focus group


The Delphi study


The oral history



The collection assessment

Collection assessments typically serve one of two purposes:

  • to inform librarians about current holdings so that they can make better decisions regarding purchases, subscriptions, resource sharing, or collection policies.[4]
  • as part of a larger assessment of a library, such as for a grant or accreditation effort.

Researchers typically undertake collection assessments to support major decisions. To support librarians and administrators in making these decisions, collection assessments typically employ a number of different methods. Triangulation can be very important in these projects: each of the measures described below can suggest facts about the collection, but none provides much certainty on its own.

Usage-based methods


Metrics


Circulation


For many years, the most popular measure of library service has been the number of items circulated. Hence, a few general observations on circulation figures should be made. First of all, circulation and use are not synonymous terms. The fact that a book is taken out of the library does not necessarily mean that it has been read. Second, circulation items must be defined if circulation figures are to be comparable. Variants in the reporting of circulation figures include: the length of the circulation period, type of material included, the inclusion of renewals, etc. A careful definition of circulation and a thorough catalog of the items included will remove many of the dangers of possible misinterpretation of circulation statistics.

A simple count of books loaned is one of the most widely used measures of library service. The measure is easy to secure (such figures are kept by nearly all libraries), is intrinsically simple and, therefore, easily understood. Barring certain limitations cited above, most people know what is meant by 250 circulations in the children's picture book section. But gross circulation figures are in most cases so simple that they obscure important information. They provide no evidence as to the type of reader, and thus somewhat indirectly imply equal use of various types of material by all reader groups—a situation which is rarely, if ever, true.

Of more interest are the percentage of items in a section that have actually circulated and the number of unique borrowers in that section.
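As a concrete illustration, here is a minimal sketch of computing both figures from an exported list of holdings and circulation transactions. The field names (item_id, section, borrower_id) are hypothetical; a real ILS export will differ.

```python
# Minimal sketch: percentage of items circulated and unique borrowers per
# section. Field names are hypothetical stand-ins for a real ILS export.
from collections import defaultdict

def circulation_metrics(holdings, transactions):
    items_per_section = defaultdict(set)
    for item in holdings:
        items_per_section[item["section"]].add(item["item_id"])

    circulated = defaultdict(set)  # section -> item_ids that went out
    borrowers = defaultdict(set)   # section -> unique borrower_ids
    for t in transactions:
        circulated[t["section"]].add(t["item_id"])
        borrowers[t["section"]].add(t["borrower_id"])

    return {
        section: {
            "pct_circulated": 100 * len(circulated[section]) / len(items),
            "unique_borrowers": len(borrowers[section]),
        }
        for section, items in items_per_section.items()
    }
```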

In-house usage


Library rules and regulations often place many obstacles in the way of measuring the use of periodicals and reference materials. Since these items often do not circulate outside the library, it is difficult to produce even gross circulation figures for periodicals.

Most integrated library systems (ILSs) include a method for recording "in-house use". While this can help to estimate collection use, it too is somewhat problematic. By this means, one cannot tell whether a volume has actually been used or merely set aside as inadequate. And this method depends to a certain extent upon users' disposition to leave volumes unshelved rather than return them to their regular places. Finally, certain ILSs store in-house usage in a database table separate from circulations proper, which makes these data harder to access and synthesize.
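Where the two tables can at least be queried together, combining them is straightforward. The sketch below assumes a SQLite export with tables named circulations and inhouse_use, each holding one row per use; both names are hypothetical, and real ILS schemas vary.

```python
# Sketch: total use (circulations + in-house uses) per item, assuming two
# hypothetical event tables in a SQLite export of ILS data.
import sqlite3

def total_use_by_item(db_path):
    con = sqlite3.connect(db_path)
    counts = {}
    for table in ("circulations", "inhouse_use"):
        for item_id, n in con.execute(
            f"SELECT item_id, COUNT(*) FROM {table} GROUP BY item_id"
        ):
            counts[item_id] = counts.get(item_id, 0) + n
    con.close()
    return counts
```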

Interlibrary loan requests


Aguilar, W. (1986). The application of relative use and interlibrary demand in collection development. Collection Management, 8(1), 15-24.

  • The only published use of this method appears to be Ochola, J. N. (2003). Use of circulation statistics and interlibrary loan data in collection management. Collection Management, 27(1), 1-13.

Byrd, G. D., Thomas, D. A., & Hughes, K. E. (1982). Collection development using interlibrary loan borrowing and acquisitions statistics. Bulletin of the Medical Library Association, 70(1), 1.

Log data


Interpreting usage data


Borin and Yi divide usage indicators into three levels:

  • Access-level indicators show how a user tries to access a resource. This is the most superficial level, as it doesn't demonstrate whether or not the patron even used a resource in any meaningful way. These indicators include database logins, article hits, link resolver logins, gate counts, and search logs from catalogs and discovery layers.
  • Use-level indicators are at an intermediate level, showing that a patron actually made some attempt to use a resource. These indicators include ILL requests, article downloads and views, and circulation statistics.
  • Impact-level indicators show evidence that patrons actually found a resource valuable for their learning or research. The primary indicator at this level is citation analysis of student or faculty writing.[5]

Collection depth and breadth-based methods


Collection assessments that rely on collection data require that the data be accurate and current. A complete, up-to-date inventory of the collection (whether of physical or electronic holdings) is therefore crucial.[6]

Note that these metrics are heuristics for the imprecise art of collection development.[7]

Metrics


Collection size


Collection size is simply a report of how many items are located in each section.

The acquisition rate is the number of items added to a particular section in a given period of time.

Expenditures can be broken down by section for a simple metric.

These metrics are easy to understand and collect, but reveal little about how a library's collections meet patron needs. They can be somewhat illuminating when compared with a mapping of high-enrollment or flagship programs, such as the University of Michigan's mapping.[8]
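All three metrics can be computed in one pass over a holdings export, as in this sketch; the record fields (section, acquired_on, cost) are assumptions for illustration.

```python
# Sketch: collection size, acquisition count, and expenditure per section
# for a given period. Record fields are hypothetical.
from collections import defaultdict
from datetime import date

def section_metrics(items, period_start, period_end):
    size = defaultdict(int)
    acquired = defaultdict(int)
    spent = defaultdict(float)
    for item in items:
        sec = item["section"]
        size[sec] += 1
        if period_start <= item["acquired_on"] <= period_end:
            acquired[sec] += 1
            spent[sec] += item["cost"]
    return {sec: {"size": size[sec],
                  "acquisitions": acquired[sec],
                  "expenditure": round(spent[sec], 2)}
            for sec in size}

# e.g. fiscal year 2023:
# section_metrics(items, date(2022, 7, 1), date(2023, 6, 30))
```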


Representation of standard lists


This method is often referred to as the checklist method. In this method, a librarian will select a standard list of titles. The library's catalog is then checked to see how many of the titles are either owned by or accessible via the library. This metric is reported as a total number or a percentage of titles found.
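The comparison itself is mechanical, as this sketch suggests. Here catalog_lookup stands in for whatever matching the library's catalog actually supports (ISBN match, title/author search, an API call); it is an assumption, not a real interface.

```python
# Sketch of the checklist method: what share of a standard list does the
# library hold? `catalog_lookup` is a hypothetical function that returns
# True if a title is owned by or accessible via the library.
def checklist_coverage(standard_list, catalog_lookup):
    held = [title for title in standard_list if catalog_lookup(title)]
    return {
        "titles_checked": len(standard_list),
        "titles_held": len(held),
        "pct_held": 100 * len(held) / len(standard_list),
    }
```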

For topic-specific assessments: professional and scholarly associations put together standards for their disciplines, which can be used. Another technique for developing a checklist in a special field has been suggested. An important textbook or several textbooks are examined, and each reference to a book is noted. These references are then tabulated and summarized, and the books mentioned as references are listed in order of frequency of mention. By this means, a reputable list of books in a given field is compiled, though the time required to prepare such a list throws some doubt on its economy. This procedure was used by librarians as early as 1937.[9]
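The tabulation step of this textbook-based technique is easy to automate, as in the following sketch, which assumes each bibliography has already been reduced to a list of normalized title strings.

```python
# Sketch: rank works by how often they are cited across several textbooks.
from collections import Counter

def ranked_checklist(bibliographies):
    counts = Counter()
    for bib in bibliographies:  # each bib: list of normalized title strings
        counts.update(bib)
    return counts.most_common()  # most frequently cited works first
```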

Librarians can also consult Choice reviews, Publishers Weekly, and citation reports.

Lists can be subjective and arbitrary. Using several bibliographies in the same field guards against the bias of any single list, but can add greatly to the effort required in such a project. Lists can also become outdated quickly or otherwise be unsuitable for a library's patrons.

Some authors have argued that this method is more appropriate to smaller or specialized libraries.[7]

Goldhor's inductive method

Overlap analysis


Wilson, F. L., & McGrath, W. E. (1990). Cluster analysis of title overlap in twenty-one library collections in western New York.

Percentage of available titles that a library has purchased


Choose a representative press (e.g. Columbia University Press) and see how many of its publications the library has purchased.
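A sketch of the computation, assuming a list of the press's titles and a set of titles the library holds (both inputs hypothetical):

```python
# Sketch: percentage of a representative press's output that the library
# has purchased.
def press_coverage(press_titles, library_titles):
    titles = set(press_titles)
    held = titles & set(library_titles)
    return 100 * len(held) / len(titles)

# e.g. press_coverage(columbia_up_titles, catalog_titles)
```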

Interpreting collection depth data


Contextualizing


There are several environmental factors that we need to consider:

  • Nature and goals of the institution
  • Budget
  • Peer institutions
  • Consortial memberships
  • Patron demographics

For public libraries:

  • Population in service area: public libraries are often asked to report the number of items per capita, as sketched below. For example, the ALA used to issue a standard of how many books per capita a public library should own.[10]
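The per-capita calculation itself is trivial; a one-line sketch for completeness:

```python
# Sketch: items per capita for a public library's service area.
def items_per_capita(collection_size, service_area_population):
    return collection_size / service_area_population

# e.g. items_per_capita(180_000, 60_000) -> 3.0
```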

For academic institutions:

  • Credentials offered
  • Enrollment in various areas

Incorporating faculty expertise


Whaley Jr, J. H. (1981). An Approach to Collection Analysis. Library Resources and Technical Services, 25(3), 330-38.

Conspectus models


[11]

RLG collection levels

Physical assessment methods


Format of a collection assessment report


Intner (2003) recommends that a collection assessment report include the following sections at minimum:

  1. Executive summary
  2. Introduction including background and collecting goals
  3. Profile of the collection
  4. Comparisons of selected features of the profile with the same features at peer institutions
  5. Quality measures such as request fill rates, waiting time, use, etc., arranged by subject or department
  6. Comparisons of selected quality measures with those of peer libraries
  7. Conclusions about collection performance
  8. Recommendations
  9. Appendices containing a bibliography of sources, notes on methodology, raw data, and other supporting documents.[6]


References

  1. Bhatt, R. K.; Bhatt, S. C. (1994). "Application of historical method of research in the study of library and information science: an overview". Annals of library science and documentation. 41 (4).
  2. Wiegand, W. A. (1999). Tunnel vision and blind spots: What the past tells us about the present; Reflections on the twentieth-century history of American librarianship. The Library Quarterly, 1-32.
  3. This chapter is based in large part on McDiarmid, Errett (1940). The library survey: problems and methods. Chicago: American Library Association. Retrieved 10 November 2015.
  4. Agee, Jim (September 2005). "Collection evaluation: a foundation for collection development". Collection Building. 24 (3): 92–95. doi:10.1108/01604950510608267.
  5. Borin, Jacqueline; Yi, Hua (3 October 2008). "Indicators for collection evaluation: a new dimensional framework". Collection Building. 27 (4): 136–143. doi:10.1108/01604950810913698.
  6. a b Intner, Sheila S. (September 2003). "Making your collections work for you: collection evaluation myths & realities". Library Collections, Acquisitions, and Technical Services. 27 (3): 339–350. doi:10.1016/S1464-9055(03)00067-8.
  7. a b Lundin, Anne (1989). "List-checking in collection development: An imprecise art". Collection Management. 11 (3–4): 103–112.
  8. "Categories". University of Michigan Library. Retrieved 30 November 2015.
  9. Dalziel, Charles (1937). "Evaluation of periodicals for electrical engineers". The Library Quarterly: 354–372.
  10. McDiarmid, Errett (1940). The library survey: problems and methods. Chicago: American Library Association. p. 102. Retrieved 10 November 2015.
  11. McAbee, Sonja L.; Hubbard, William J. (8 June 2003). "The Current Reality of National Book Publishing Output and Its Effect on Collection Assessment". Collection Management. 28 (4): 67–78. doi:10.1300/J105v28n04_05.


The ethnography

Sandstrom, A. R., & Sandstrom, P. E. (1995). The use and misuse of anthropological methods in library and information science research. The Library Quarterly, 161-199.

Chatman, Elfreda A. "Field Research: Methodological Themes." Library and Information Science Research 6 (1984): 425-38.


The case study

Case studies are very common in LIS. They can be very helpful for generating hypotheses.

Action research


Limitations

  • Though they can be very suggestive or illuminating, they are not generalizable.
  • A strong publication bias toward positive outcomes


Bibliometric methods

Bibliometrics is the statistical analysis of publications to identify patterns in the decisions made by their authors and readers. Citation analysis is a particularly well-known bibliometric method, although a number of other bibliometric methods are also used. Researchers use bibliometric methods to explore the impact of a field, a set of researchers, or a particular paper. Bibliometrics also has a wide range of other applications, such as descriptive linguistics, the development of thesauri, and the evaluation of reader usage.

Citation counts


Key metrics
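One widely used citation metric is the h-index: the largest h such that h of the papers considered have at least h citations each. A minimal sketch, offered as an illustration of computing such a metric from raw citation counts:

```python
# Sketch: h-index from a list of per-paper citation counts.
def h_index(citation_counts):
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i  # the i most-cited papers all have >= i citations
        else:
            break
    return h

# e.g. h_index([10, 8, 5, 4, 3]) -> 4
```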


Issues and limitations



Usability and user experience studies

Diary studies


There are two types of diary studies:

  1. Elicitation studies, where participants capture media that are then used as prompts for discussion in interviews. The method is a way to trigger the participant’s memory.
  2. Feedback studies, where participants answer predefined questions about events. This is a way of getting immediate answers from the participants.[1]

Advantages


They allow:

  • collecting longitudinal and temporal information;
  • reporting events and experiences in context;
  • determining the antecedents, correlates, and consequences of daily experiences.

Limitations

  • Diary studies might generate inaccurate recall, especially in elicitation-type studies.
  • Interpreting the expressed emotions and experiences is highly challenging. It can require special training in psychology, especially when participants record their experiences in multiple formats (e.g. text and pictures).[2]
  • Low control
  • User participation tends to decline, especially without investigator involvement.[3]
  • Risk of disrupting the activity being recorded.
  • The instrument (e.g. paper format diary) is often disconnected from the evaluated application (e.g. a smartphone app). This leads to situations in which the instrument is unavailable and the user cannot record their experiences.[2]

Think aloud protocol


Walkthroughs


Cognitive walkthroughs


Pluralistic walkthroughs


Remote usability testing


References

  1. Carter and Mankoff (2005). When participants do the capturing: the role of media in diary studies. CHI '05 Proceedings of the SIGCHI conference on Human factors in computing systems.
  2. a b Isomursu, Minna; Tähti, Marika; Väinämö, Soili; Kuutti, Kari (April 2007). "Experimental evaluation of five methods for collecting emotions in field settings with mobile applications". International Journal of Human-Computer Studies. 65 (4): 404–418. doi:10.1016/j.ijhcs.2006.11.007.
  3. Palen, Leysia; Salzman, Marilyn (2002). "Voice-Mail Diary Studies for Naturalistic Data Capture under Mobile Conditions". Computer supported cooperative work: 87–95.


Technical services and cataloging studies

Scope of this chapter


In Mugridge's 2014 study, the most common goal of technical services assessment was to streamline processes, and almost all of the decisions informed by these analyses involved the reallocation of staff.[1] While the authors of this book recognize the importance of such analyses, this chapter does not include such methods; they will instead be found in the chapter on Management and operational research. Likewise, numerous papers calculate the costs of catalog work or perform cost-benefit analyses. That type of study is also excluded so that we may concentrate on methods that illuminate the quality of technical services work, particularly catalog records.

Identifying values


One issue with current cataloging research is that there is not much consensus on values and metrics. Catalogers can generally identify a good record when they see one, but it is otherwise difficult to define exactly what a "good" record or practice is in this field. The following publications may be useful in identifying the values you would like to study:

  • Ranganathan, S. R. (1955). Heading and canons: Comparative study of five catalogue codes. Madras: S. Viswanathan.
  • Hider, P., & Tan, K. C. (2008). Constructing record quality measures based on catalog use. Cataloging & Classification Quarterly, 46(4), 338-361.

Additionally, Van Wyk offers four "performance indicators":

  1. Timeliness
  2. Accuracy
  3. Completeness
  4. Consistency[2]

It seems that cataloging values may rely on extrinsic features as well. Gorman has suggested that the value of a catalog record is related to the value of the resource cataloged. (In Stalberg, E., & Cronin, C. (2011). Assessing the Cost and Value of Bibliographic Control. Library Resources & Technical Services, 55(3), 124.)

There are several values that can be quantitatively measured (a measurement sketch follows this list), but which contribute most to discovery?

  • Level of authority control
  • Level of typographical errors
  • Process vs. Big picture focus[3]
  • ROI vs. service vs. innovation focus[3]
  • Facilitation of the FRBR user tasks
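As a sketch of how two of these values might be measured over a batch of records: the record structure (dicts with "headings" and "title" fields) and the reference lists are assumptions for illustration.

```python
# Sketch: two quantitative record-quality measures. Record structure and
# reference lists are hypothetical.
def authority_control_level(records, authority_file):
    """Share of headings that match an entry in the authority file."""
    headings = [h for r in records for h in r["headings"]]
    controlled = sum(1 for h in headings if h in authority_file)
    return 100 * controlled / len(headings)

def typo_rate(records, known_words):
    """Share of title words not found in a reference word list."""
    words = [w.lower() for r in records for w in r["title"].split()]
    unknown = sum(1 for w in words if w not in known_words)
    return 100 * unknown / len(words)
```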

Another place to check: Conway, M. (2010). Research in Cataloging and Classification. Cataloging & Classification Quarterly, 19(1).

This has some assessment ideas: http://downloads.alcts.ala.org/ce/11202013AssessmentStrategiesCatalogingSlides.pdf

Identifying high-impact fields: Carrie Preston, “High Speed Cataloging Without Sacrificing Subject Access or Authority Control: A Case Study,” in Radical Cataloging: Essays at the Front, ed. K. R. Roberto (Jefferson, NC: McFarland & Co., 2008)

Radio, E. (2016). Semiotic Principles for Metadata Auditing and Evaluation. Cataloging & Classification Quarterly, 1-19.

Discovery success (http://connect.ala.org/files/7981/costvaluetaskforcereport2010_06_18_pdf_77542.pdf)

Display understanding (http://connect.ala.org/files/7981/costvaluetaskforcereport2010_06_18_pdf_77542.pdf)

A/B testing


Note that this method can also be used to assess actual content, as in http://journal.code4lib.org/articles/7738
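For the analysis itself, click-through on the two variants can be compared with a two-proportion z-test; a minimal sketch (the counts are hypothetical, and in practice a statistics package would be used):

```python
# Sketch: two-proportion z-test for an A/B test of two record displays.
from math import sqrt, erf

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# e.g. two_proportion_z(120, 1000, 95, 1000)
```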

Pre/post comparisons


Measure user discovery before and after an enhancement project.
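If the discovery measure is sampled per session or per user, the before and after samples can be compared with Welch's t-test. A sketch, assuming scipy is available:

```python
# Sketch: pre/post comparison of a discovery measure with Welch's t-test.
from scipy import stats

def pre_post_test(before, after):
    t, p = stats.ttest_ind(after, before, equal_var=False)
    return {"t": t, "p": p,
            "mean_before": sum(before) / len(before),
            "mean_after": sum(after) / len(after)}
```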

Surveys


Surveys can be administered when users request interlibrary loans or stored materials, to identify which pieces of catalog data they needed before determining whether an item would be useful.

Balanced scorecard method


Example: Kim, D. S. (2010). Using the balanced scorecard for strategic operation of the cataloging department. Cataloging & Classification Quarterly, 48(6-7), 572-584.

Total quality management method


Example: Khurshid, Z. (1997). The application of TQM in cataloguing. Library Management, 18(6), 274-279.


References

  1. Mugridge, Rebecca L. (27 May 2014). "Technical Services Assessment". Library Resources & Technical Services. 58 (2): 100–110. doi:10.5860/lrts.58n2.100. ISSN 2159-9610. Retrieved 18 January 2016.
  2. Van Wyk, A. C. (1997). The development of performance indicators to measure cataloguing quality in the Technical Services Division of the Unisa Library with special reference to item throughput time. MOUSAION, 15, 53-67.
  3. a b Schomberg, Jessica (18 December 2015). "Examination of Cataloging Assessment Values Using the Q Sort Method". Cataloging & Classification Quarterly. 54 (1): 1–22. doi:10.1080/01639374.2015.1072864.


The log analysis

Peters, T. A. (1993). The history and development of transaction log analysis. Library hi tech, 11(2), 41-66.

Chapter 1 of http://www.oclc.org/content/dam/research/publications/library/2010/2010-06.pdf


Theoretical approaches

Many papers in Library and Information Science take theoretical approaches that synthesize prior research. Chu (2015) found that almost 40% of the articles published between 2001 and 2010 in the Journal of Documentation represented a theoretical, rather than an empirical, method.[1]

Conceptual analysis


Model building


Theory development


References

  1. Heting Chu (2015). "Research methods in library and information science: A content analysis". Library and Information Science Research. Elsevier. 37 (1).


Management and operational research

Benchmarking


Time studies


Workflow analysis


Workplace culture


Including responses to concepts such as social justice (suggested by Schomberg, J. (2015). Examination of Cataloging Assessment Values Using the Q Sort Method. Cataloging & Classification Quarterly, 1-22.)