Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals

Business articles from across Nature Portfolio

Latest research and reviews.

research paper in business sciences

Companies inadvertently fund online misinformation despite consumer backlash

Many companies unknowingly advertise on websites that publish misinformation despite the reputational and financial risks, and increased transparency for consumers and advertisers could counter unintended ad revenue going to misinformation websites.

  • Wajeeha Ahmad
  • Erik Brynjolfsson

research paper in business sciences

Alternative protein sources: science powered startups to fuel food innovation

Harnessing the potential of considerable food security efforts requires the ability to translate them into commercial applications. In this Perspective, the author explores the alternative protein source start-up landscape.

  • Elena Lurie-Luke

research paper in business sciences

Facing the storm: Developing corporate adaptation and resilience action plans amid climate uncertainty

  • Katharina Hennes
  • David Bendig
  • Andreas Löschel

research paper in business sciences

A new commercial boundary dataset for metropolitan areas in the USA and Canada, built from open data

  • Byeonghwa Jeong
  • Karen Chapple

research paper in business sciences

Model-based financial regulations impair the transition to net-zero carbon emissions

As the financial system is increasingly important in catalysing the green transition, it is critical to assess the impediments it may face. This study shows that existing financial regulations may impair the shift of financial resources from high-carbon to low-carbon assets.

  • Matteo Gasparini
  • Matthew C. Ives
  • Eric Beinhocker

research paper in business sciences

Compound Matrix-Based Project Database (CMPD)

  • Zsolt T. Kosztyán
  • Gergely L. Novák

Advertisement

News and Comment

research paper in business sciences

Green and greening jobs

Scaling up adoption of green technologies in energy, mobility, construction, manufacturing and agriculture is imperative to set countries on a sustainable development path, but that hinges on having the right workforce, argues Jonatan Pinkse.

  • Jonatan Pinkse

research paper in business sciences

I fell out of love with the lab, and in love with business

The COVID-19 pandemic changed Karolina Makovskytė’s career ambitions, propelling her to a business development role in her home nation of Lithuania.

  • Jacqui Thornton

research paper in business sciences

Model-based financial regulation challenges for the net-zero transition

Current model-based financial regulations favour carbon-intensive investments. This is likely to disincentivize banks from investing in new low-carbon assets, impairing the transition to net zero. Financial regulators and policymakers should consider how this bias may impact financial system stability and broader societal objectives.

  • Matthew Ives

research paper in business sciences

The potential of DAOs for funding and collaborative development in the life sciences

VitaDAO funds longevity research through a blockchain-based decentralized autonomous organization (DAO), showcasing the potential of collaborative, transparent and alternative systems while also highlighting the challenges of coordination, regulation, biases and skepticism in reshaping traditional research financing methods.

  • Simone Fantaccini
  • Laura Grassi
  • Andrea Rampoldi

research paper in business sciences

Growing demand for environmental science expertise in the corporate sector

Growing awareness of environmental risks and mounting regulatory and consumer pressure have driven unprecedented demand for environmental science expertise in the corporate sector. Recruiting skilled individuals with academic backgrounds and fostering collaboration among businesses, research institutions, universities and environmental professionals are vital for enhancing environmental knowledge and capability in companies.

  • Alexey K. Pavlov
  • Daiane G. Faller
  • Jane E. Collins

research paper in business sciences

Can non-profits beat antibiotic resistance and soaring drug costs?

Effective, affordable antimicrobial drugs aren’t moneymakers, despite being desperately needed. Can non-profit organizations pick up the slack?

  • Maryn McKenna

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

research paper in business sciences

Information

  • Author Services

Initiatives

You are accessing a machine-readable page. In order to be human-readable, please install an RSS reader.

All articles published by MDPI are made immediately available worldwide under an open access license. No special permission is required to reuse all or part of the article published by MDPI, including figures and tables. For articles published under an open access Creative Common CC BY license, any part of the article may be reused without permission provided that the original article is clearly cited. For more information, please refer to https://www.mdpi.com/openaccess .

Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications.

Feature papers are submitted upon individual invitation or recommendation by the scientific editors and must receive positive feedback from the reviewers.

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

Original Submission Date Received: .

  • Active Journals
  • Find a Journal
  • Proceedings Series
  • For Authors
  • For Reviewers
  • For Editors
  • For Librarians
  • For Publishers
  • For Societies
  • For Conference Organizers
  • Open Access Policy
  • Institutional Open Access Program
  • Special Issues Guidelines
  • Editorial Process
  • Research and Publication Ethics
  • Article Processing Charges
  • Testimonials
  • Preprints.org
  • SciProfiles
  • Encyclopedia

admsci-logo

Journal Menu

  • Administrative Sciences Home
  • Aims & Scope
  • Editorial Board
  • Instructions for Authors
  • Special Issues
  • Article Processing Charge
  • Indexing & Archiving
  • Editor’s Choice Articles
  • Most Cited & Viewed
  • Journal Statistics
  • Journal History
  • Journal Awards
  • Editorial Office

Journal Browser

  • arrow_forward_ios Forthcoming issue arrow_forward_ios Current issue
  • Vol. 14 (2024)
  • Vol. 13 (2023)
  • Vol. 12 (2022)
  • Vol. 11 (2021)
  • Vol. 10 (2020)
  • Vol. 9 (2019)
  • Vol. 8 (2018)
  • Vol. 7 (2017)
  • Vol. 6 (2016)
  • Vol. 5 (2015)
  • Vol. 4 (2014)
  • Vol. 3 (2013)
  • Vol. 2 (2012)
  • Vol. 1 (2011)

Find support for a specific problem in the support section of our website.

Please let us know what you think of our products and services.

Visit our dedicated information section to learn more about MDPI.

What Is in the Future of Business Research and Management? Emerging Issues after COVID-19 Time

  • Print Special Issue Flyer

Special Issue Editors

Special issue information.

  • Published Papers

A special issue of Administrative Sciences (ISSN 2076-3387).

Deadline for manuscript submissions: closed (1 October 2022) | Viewed by 36306

Share This Special Issue

research paper in business sciences

Dear Colleagues,

The recent experience of the COVID-19 pandemic has forever marked our experience, perspective, and attitude at the individual and organizational level. Being positioned in a new reality, a need to reshape and restructure, as well as continuously adapt, has arisen in order to face the unpredicted sorrounding business environment. To survive under restrictive government policies and challenging market behaviours, business organizations had to efficiently and effectively respond the recent pandemic. This involved the necessity of flexibility, reflection, and resilient adaption, so that businesses remained in equilibrium.

In this situation, management scholars need to reshape and question the theoretical frameworks which have been in place for the past decades, providing a supplement to the existing literature and exteding it beyond—trying to comply with the so-called “new normal” of the business environment.  

The aim of this Special Issue is to discuss the most important managerial and organizational implications of the pandemic and the future challenges that public and private organizations will have to face in the coming years; we are interested in future-oriented business implications deriving from the occurred pandemic.

Theoretical, conceptual, and empirical contributions in the field of business research and management linked to, but not limited to, the following topics are welcomed: business modeling and planning; change management; big data and business analytics; innovation and technology management; business ethics; corporate governance and accountability; corporate social responsibility; human and intellectual capital management; corporate finance and investments; accounting, auditing, and budgeting; financial analysis and reporting; international management; and public management and governance.

All the publications of the papers in this issue will be presented in the “1st Conference in Business Research and Management” organized by the University of Castilla-La Mancha and the University of Rome “Tor Vergata”.

Dr. Matteo Cristofaro Dr. Pablo Ruiz-Palomino Dr. Fiorella Pia Salvatore Dr. Pedro Jiménez Estevez Dr. Andromahi Kufo Dr. Ricardo Martínez-Cañas Guest Editors

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website . Once you are registered, click here to go to the submission form . Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Administrative Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

  • business research
  • organizations
  • future challenges

Published Papers (5 papers)

research paper in business sciences

Further Information

Mdpi initiatives, follow mdpi.

MDPI

Subscribe to receive issue release notifications and newsletters from MDPI journals

A review of data science in business and industry and a future view

  • October 2019
  • Applied Stochastic Models in Business and Industry 36(4)

Grazia Vicario at Politecnico di Torino

  • Politecnico di Torino

Shirley Coleman at Newcastle University

  • Newcastle University

Discover the world's research

  • 25+ million members
  • 160+ million publication pages
  • 2.3+ billion citations

Sahil Sharma

  • Rendi Aprijal
  • Iqbal Wiranata Siregar
  • Andysah Putera Utama Siahaan
  • Leni Marlina

María del Carmen Romero

  • Dilek Özdemir Yılmaz

Simon Schütte

  • Shirley COLEMAN

A. Al-Khowarizmi

  • Fitria Wulandari Ramlan
  • Syahril Efendi
  • Yoshida Sary

Karina Sopp

  • Wayne S. Smith

Shirley Coleman

  • J. Bacardit

Kayvan Pazouki

  • STAT PROBABIL LETT

Paolo Giudici

  • Gareth James

Daniela Witten

  • Recruit researchers
  • Join for free
  • Login Email Tip: Most researchers use their institutional email address as their ResearchGate login Password Forgot password? Keep me logged in Log in or Continue with Google Welcome back! Please log in. Email · Hint Tip: Most researchers use their institutional email address as their ResearchGate login Password Forgot password? Keep me logged in Log in or Continue with Google No account? Sign up

Business Analytics and Data Science: Once Again?

  • Published: 20 December 2016
  • Volume 59 , pages 77–79, ( 2017 )

Cite this article

research paper in business sciences

  • Martin Bichler 1 ,
  • Armin Heinzl 2 &
  • Wil M. P. van der Aalst 3  

9469 Accesses

30 Citations

Explore all metrics

Avoid common mistakes on your manuscript.

1 Introduction

“Everything has already been said, but not yet by everyone”. We wouldn’t be surprised if the title of this editorial reminds you of this famous quote attributed to Karl Valentin. Business Analytics is a relatively new term and there does not seem to be an established academic definition. Holsapple et al. ( 2014 ) write, “a crucial observation, on which the paper is based, is that ‘the definition’ of analytics does not exist”. They describe Business Analytics as “evidence-based problem recognition and solving that happen within the context of business situations”, and also highlight that mathematical and statistical techniques have long been studied in business schools under such titles as Operations Research and Management Science, Simulation Analysis, Econometrics, and Financial Analysis. However, their article shows that the availability of large data sets in business has made these techniques much more important in all fields of management.

At the same time, Data Science has become a very popular term describing an interdisciplinary field about processes and systems to extract knowledge or insights from data. Data Science is a broader term, but also closely related to Business Analytics. In a recent book by one of the authors, Data Science is defined as follows: “Data science is an interdisciplinary field aiming to turn data into real value. Data may be structured or unstructured, big or small, static or streaming. Value may be provided in the form of predictions, automated decisions, models learned from data, or any type of data visualization delivering insights. Data science includes data extraction, data preparation, data exploration, data transformation, storage and retrieval, computing infrastructures, various types of mining and learning, presentation of explanations and predictions, and the exploitation of results taking into account ethical, social, legal, and business aspects” (Van der Aalst 2016 ).

Dhar ( 2013 ) starts his article asking why we need a new term and whether Data Science is different from statistics and gives an affirmative answer. He mentions new types of data being analyzed, new methods, and new questions being asked. Clearly, both definitions are overlapping. Both Business Analytics and Data Science want to “turn data into value”.

It is not surprising, that these terms have been adopted quickly by those in our community who are close to Operations Research and the Management Sciences (OR/MS). Data analysis and optimization have always been at the core of the INFORMS, and the INFORMS Information Systems Society (ISS) is one of the large INFORMS sub-communities. Many sessions at the INFORMS Annual Meeting or the Conference on Information Systems and Technology (CIST), which is organized by the INFORMS ISS just before the Annual Meeting every year, devoted to predictive or prescriptive analytics. But the range topics is much wider than that.

Process mining has become an important direction in Business Process Management (BPM). About half of the papers presented at the International Conference on Business Process Management are about data-driven process management. A successful track on Data Science and Business Analytics has been established at ICIS in the recent years, and many papers combining data analysis, information technology, and optimization are submitted to our BISE department “Computational Methods and Decision Support Systems” and the department “IS Engineering and Technology”. For the remainder of this editorial we will only talk about Analytics for brevity, and leave the question open which term may be adopted in which community in the in the future.

As many recent topics in Information Systems, Analytics is multi-disciplinary. There are colleagues in Econometrics, in Machine Learning, and Operations Research who contribute significantly. In this brief editorial, we want to discuss how Analytics contributes to our field, and how our profession can contribute to developments in Analytics. Given the vast amounts of data that we are collecting, this topic will most likely stay with us for a long time and eventually have a significant impact on both our research and teaching.

Internet-based systems generate huge amounts of data, which allow us to better understand how people interact on markets or in social media. Many new types of information systems draw on the availability of data about user behavior or sensor data about the environment. Such information systems adapt to the users and provide better ways to coordinate. Let’s pick a few examples to make the point.

Recommender systems are probably among the most well-known types of analytics-based information systems. They collect data about user preferences to then provide tailor-made recommendations for books, movies, or other products. By now, there is a large body of literature about mathematical methods such as matrix factorization or collaborative filtering, which allow for effective recommendations. At the same time, there is a growing behavioral literature analyzing the impact of these systems on human decision making. Thus, the topic addresses both, design and user behavior, a combination that was always at the core of information systems research.

Interactive marketing is another field which heavily draws on analytics. Real-time bidding (RTB) is a means by which display advertising is bought and sold on a per-impression basis via auction. With real-time bidding, advertising buyers bid on an impression and, if the bid is won, the buyer’s ad is instantly displayed on the publisher’s site. This is the fastest growing segment in the digital advertising market and it combines predictive models to better estimate the preferences and tastes of users and the bidding in a highly automated fashion. The topic addresses a wealth of problems ranging from distributed systems to auction theory, machine learning, and, last but not least, data privacy!

Also, the Internet-of-Things has led to many new applications which require data analysis, distributed systems, and optimization to go hand in hand in ever growing applications. We have all seen case studies on smart mobility solutions, intelligent ports and transportation systems, or smart home solutions where sensors communicate and coordinate with humans in real-time, often with a substantial increase in economic efficiency. Consider for example condition-based maintenance making use of the analysis of sensors data: maintenance is performed when analytical techniques suggest that the system is going to fail or that performance is deteriorating.

These are just a few examples. Analytics are used in many other domains ranging from customer journeys, call-centers, and credit rating to staffing, e-government, and delivery services. Moreover, there is no reason to assume that the trend towards more data and evidence-based management and analytics-based information systems will end.

In the BISE editorial statement “information systems are understood as socio-technical systems comprising people, tasks, and information technology”. The above examples suggest that Analytics opens up many new and exciting research questions for our community, eventually leading to a new class of analytics - based information systems , which sense their environment and respond to the users that they ultimately serve. This requires an integrated view addressing privacy concerns, engineering challenges, and a thorough “social science” analysis of the impact of new systems. In summary, Analytics provides many new opportunities for our field and describes an almost natural progression of many lines of information systems research.

Information Systems, as it is taught at most universities, is already well-prepared for the design and analysis of this new breed of analytics-based information systems. On the one hand, Information Systems programs typically include topics such as data engineering, data mining, software engineering, distributed systems, and operations research, which are essential for their design.

On the other hand, our field is also a social science and as such most curricula have courses on Econometrics and empirical methods. Techniques for causal inference and discrete choice models have always been important ingredients of a social scientist’s education, and they prove to be incredibly valuable for business analysts. Econometrics and Machine Learning will be essential elements of new curricula in most schools in the future, if this is not already the case. Overall, the fact that we address “design and behavior” in our education can be a significant advantage for our students in the job market and in research.

While our Bachelor and Master programs are well positioned, there are many new developments in education in Analytics. There are so many new Bachelor and Master programs in either Business Analytics or Data Science at various universities that we do not even start to list specific programs. The European Data Science Academy ( http://edsa-project.eu/ ) is an EU Horizon 2020 funded program providing useful information. For our field, it is important to stay abreast of these new developments.

4 Conclusion

Analytics is not just a short-term trend. The availability of more and more data from sensor networks or human–computer interaction leads to new types of information systems in which data analysis plays an important role. Analytics will help us to better understand the environment and to adapt to the needs of users and organizations, when we design new systems. It is important to reflect this development also in our curricula. Analytics is inherently linked with our field, and we are looking forward to a growing number of submissions in this area.

Dhar V (2013) Data science and prediction. Commun ACM 56(12):64–73

Article   Google Scholar  

Holsapple C, Lee-Post A, Pakath R (2014) A unified foundation for business analytics. Decis Support Syst 64C(August):130–141

Van der Aalst W (2016) Process mining: data science in action. Springer, Heidelberg

Book   Google Scholar  

Download references

Author information

Authors and affiliations.

Department of Informatics, Decision Sciences and Systems, Technical University of Munich (TUM), Boltzmannstr 3, 85748, Munich, Germany

Martin Bichler

Chair of General Management and Information Systems, University of Mannheim, 68161, Mannheim, Germany

Armin Heinzl

Department of Mathematics and Computer Science (MF 7.103), Eindhoven University of Technology, PO Box 513, 5600 MB, Eindhoven, The Netherlands

Wil M. P. van der Aalst

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Martin Bichler .

Rights and permissions

Reprints and permissions

About this article

Bichler, M., Heinzl, A. & van der Aalst, W.M.P. Business Analytics and Data Science: Once Again?. Bus Inf Syst Eng 59 , 77–79 (2017). https://doi.org/10.1007/s12599-016-0461-1

Download citation

Published : 20 December 2016

Issue Date : April 2017

DOI : https://doi.org/10.1007/s12599-016-0461-1

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Find a journal
  • Publish with us
  • Track your research
  • Tools and Resources
  • Customer Services
  • Business Education
  • Business Law
  • Business Policy and Strategy
  • Entrepreneurship
  • Human Resource Management
  • Information Systems
  • International Business
  • Negotiations and Bargaining
  • Operations Management
  • Organization Theory
  • Organizational Behavior
  • Problem Solving and Creativity
  • Research Methods
  • Social Issues
  • Technology and Innovation Management
  • Share This Facebook LinkedIn Twitter

Article contents

Qualitative designs and methodologies for business, management, and organizational research.

  • Robert P. Gephart Robert P. Gephart Alberta School of Business, University of Alberta
  •  and  Rohny Saylors Rohny Saylors Carson College of Business, Washington State University
  • https://doi.org/10.1093/acrefore/9780190224851.013.230
  • Published online: 28 September 2020

Qualitative research designs provide future-oriented plans for undertaking research. Designs should describe how to effectively address and answer a specific research question using qualitative data and qualitative analysis techniques. Designs connect research objectives to observations, data, methods, interpretations, and research outcomes. Qualitative research designs focus initially on collecting data to provide a naturalistic view of social phenomena and understand the meaning the social world holds from the point of view of social actors in real settings. The outcomes of qualitative research designs are situated narratives of peoples’ activities in real settings, reasoned explanations of behavior, discoveries of new phenomena, and creating and testing of theories.

A three-level framework can be used to describe the layers of qualitative research design and conceptualize its multifaceted nature. Note, however, that qualitative research is a flexible and not fixed process, unlike conventional positivist research designs that are unchanged after data collection commences. Flexibility provides qualitative research with the capacity to alter foci during the research process and make new and emerging discoveries.

The first or methods layer of the research design process uses social science methods to rigorously describe organizational phenomena and provide evidence that is useful for explaining phenomena and developing theory. Description is done using empirical research methods for data collection including case studies, interviews, participant observation, ethnography, and collection of texts, records, and documents.

The second or methodological layer of research design offers three formal logical strategies to analyze data and address research questions: (a) induction to answer descriptive “what” questions; (b) deduction and hypothesis testing to address theory oriented “why” questions; and (c) abduction to understand questions about what, how, and why phenomena occur.

The third or social science paradigm layer of research design is formed by broad social science traditions and approaches that reflect distinct theoretical epistemologies—theories of knowledge—and diverse empirical research practices. These perspectives include positivism, interpretive induction, and interpretive abduction (interpretive science). There are also scholarly research perspectives that reflect on and challenge or seek to change management thinking and practice, rather than producing rigorous empirical research or evidence based findings. These perspectives include critical research, postmodern research, and organization development.

Three additional issues are important to future qualitative research designs. First, there is renewed interest in the value of covert research undertaken without the informed consent of participants. Second, there is an ongoing discussion of the best style to use for reporting qualitative research. Third, there are new ways to integrate qualitative and quantitative data. These are needed to better address the interplay of qualitative and quantitative phenomena that are both found in everyday discourse, a phenomenon that has been overlooked.

  • qualitative methods
  • research design
  • methods and methodologies
  • interpretive induction
  • interpretive science
  • critical theory
  • postmodernism
  • organization development

Introduction

Qualitative research uses linguistic symbols and stories to describe and understand actual behavior in real settings (Denzin & Lincoln, 1994 ). Understanding requires describing “specific instances of social phenomena” (Van Maanen, 1998 , p. xi) to determine what this behavior means to lay participants and to scientific researchers. This process produces “narratives-non-fiction division that link events to events in storied or dramatic fashion” to uncover broad social science principles at work in specific cases (p. xii).

A research design and/or proposal is often created at the outset of research to act as a guide. But qualitative research is not a rule-governed process and “no one knows” the rules to write memorable and publishable qualitative research (Van Maanen, 1998 , p. xxv). Thus qualitative research “is anything but standardized, or, more tellingly, impersonal” (p. xi). Design is emergent and is often created as it is being done.

Qualitative research is also complex. This complexity is addressed by providing a framework with three distinct layers of knowledge creation resources that are assembled during qualitative research: the methods layer, the logic layer, and the paradigmatic layer. Research methods are addressed first because “there is no necessary connection between research strategies and methods of data collection and analysis” (Blaikie, 2010 , p. 227). Research methods (e.g., interviews) must be adapted for use with the specific logical strategies and paradigmatic assumptions in mind.

The first, or methods, layer uses qualitative methods to “collect data.” That is, to observe phenomena and record written descriptions of observations, often through field notes. Established methods for description include participant and non-participant observation, ethnography, focus groups, individual interviews, and collection of documentary data. The article explains how established methods have been adapted and used to answer a range of qualitative research questions.

The second, or logic, layer involves selecting a research strategy—a “logic, or set of procedures, for answering research questions” (Blaikie, 2010 , p. 18). Research strategies link research objectives, data collection methods, and logics of analysis. The three logical strategies used in qualitative organizational research are inductive logic, deductive logic and abductive logic (Blaikie, 2010 , p. 79). 1 Each logical strategy makes distinct assumptions about the nature of knowledge (epistemology), the nature of being (ontology), and how logical strategies and assumptions are used in data collection and analysis. The task is to describe important methods suitable for each logical strategy, factors to consider when selecting methods (Blaikie, 2010 ), and illustrates how data collection and analysis methods are adapted to ensure for consistency with specific logics and paradigms.

The third, or paradigms, layer of research design addresses broad frameworks and scholarly traditions for understanding research findings. Commitment to a paradigm or research tradition entails commitments to theories, research strategies, and methods. Three paradigms that do empirical research and seek scientific knowledge are addressed first: positivism, interpretive induction, and interpretive abduction. Then, three scholarly and humanist approaches that critique conventional research and practice to encourage organizational change are discussed: critical theory and research, postmodern perspectives, and organization development (OD). Paradigms or traditions provide broad scholarly contexts that make specific studies comprehensible and meaningful. Lack of grounding in an intellectual tradition limits the ability of research to contribute: contributions always relate to advancing the state of knowledge in specific unfolding research traditions that also set norms for assessing research quality. The six research designs are explained to show how consistency in design levels can be achieved for each of the different paradigms. Further, qualitative research designs must balance the need for a clear plan to achieve goals with the need for adaptability and flexibility to incorporate insights and overcome obstacles that emerge during research.

Our general goal has been to provide a practical guide to inspire and assist readers to better understand, design, implement, and publish qualitative research. We conclude by addressing future challenges and trends in qualitative research.

The Substance of Research Design

A research design is a written text that can be prepared prior to the start of a research project (Blaikie, 2010 , p. 4) and shared or used as “a private working document.” Figure 1 depicts the elements of a qualitative research design and research process. Interest in a topic or problem leads researchers to pose questions and select relevant research methods to fulfill research purposes. Implementation of the methods requires use of logical strategies in conjunction with paradigms of research to specify concepts, theories, and models. The outcomes, depending on decisions made during research, are scientific knowledge, scholarly (non-scientific) knowledge, or applied knowledge useful for practice.

Figure 1. Elements of qualitative research design.

Research designs describe a problem or research question and explain how to use specific qualitative methods to collect and analyze qualitative data that answer a research question. The purposes of design are to describe and justify the decisions made during the research process and to explain how the research outcomes can be produced. Designs are thus future-oriented plans that specify research activities, connect activities to research goals and objectives, and explain how to interpret the research outcomes using paradigms and theories.

In contrast, a research proposal is “a public document that is used to obtain necessary approvals for a research proposal to proceed” (Blaikie, 2010 , p. 4). Research designs are often prepared prior to creating a research proposal, and research proposals often require the inclusion of research designs. Proposals also require greater formality when they are the basis for a legal contract between a researcher and a funding agency. Thus, designs and proposals are mutually relevant and have considerable overlap but are addressed to different audiences. Table 1 provides the specific features of designs and proposals. This discussion focuses on designs.

Table 1. Decisions Necessitated by Research Designs and Proposals

RESEARCH DESIGNS

Title or topic of project

Research problem and rationale for exploring problem

Research questions to address problem: purpose of study

Choice of logic of inquiry to investigate each research question

Statement of ontological and epistemological assumptions made

Statement or description of research paradigms used

Explanation of relevant concepts and role in research process

Statement of hypotheses to be tested (positivist), orienting proposition to be examined (interpretive) or mechanisms investigated (critical realism)

Description of data sources

Discussion of methods used to select data from sources

Description of methods of data collection, summarization, and analysis

Discussion of problems and limitations

RESEARCH PROPOSALS: add the items below to items above

Statement of aims and research significance

Background on need for research

Budget and justification for each item

Timetable or stages of research process

Specification of expected outcomes and benefits

Statement of ethical issues and how they can be managed

Explanation of how new knowledge will be disseminated

Source: Based on Blaikie ( 2010 ), pp. 12–34.

The “real starting point” for a research design (or proposal) is “the formulation of the research question” (Blaikie, 2010 , p. 17). There are three types of research questions: “what” questions seek descriptions; “why” questions seek answers and understanding; and “how” questions address conditions where certain events occur, underlying mechanisms, and conditions necessary for change interventions (p. 17). It is useful to start with research questions rather than goals, and to explain what the research is intended to achieve (p. 17) in a technical way.

The process of finding a topic and formulating a useful research question requires several considerations (Silverman, 2014 , pp. 31–33, 34–40). Researchers must avoid settings where data collection will be difficult (pp. 31–32); specify an appropriate scope for the topic—neither too wide or too narrow—that can be addressed (pp. 35–36); fit research questions into a relevant theory (p. 39); find the appropriate level of theory to address (p. 42); select appropriate designs and research methods (pp. 42–44); ensure the volume of data can be handled (p. 48); and do an effective literature review (p. 48).

A literature review is an important way to link the proposed research to current knowledge in the field, and to explain what was previously known or what theory suggests to be the case (Blaikie, 2010 , p. 17). Research questions can used to bound and frame the literature review while the literature review often inspires research questions. The review may also provide bases for creating new hypotheses and for answering some of the initial research questions (Blaikie, 2010 , p. 18).

Layers of Research Design

There are three layers of research design. The first layer focuses on research methods for collecting data. The second layer focuses on the logical frameworks used for analyzing data. The third layer focuses on the paradigm used to create a coherent worldview from research methods and logical frameworks.

Layer One: Design as Research Methods

Qualitative research addresses the meanings people have for phenomena. It collects narratives of organizational activity, uses analytical induction to create coherent representations of the truths and meanings in organizational contexts, and then creates explanations of this conduct and its prevalence (Van Maanan, 1998 , pp. xi–xii). Thus qualitative research involves “doing research with words” (Gephart, 2013 , title) in order to describe the linguistic symbols and stories that members use in specific settings.

There are four general methods for collecting qualitative data and creating qualitative descriptions (see Table 2 ). The in-depth case study approach provides a history of an event or phenomenon over time using multiple data sources. Observational strategies use the researcher to observe and describe behavior in actual settings. Interview strategies use a format where a researcher asks questions of an informant. And documentary research collects texts, documents, official records, photographs, and videos as data—formally written or visually recorded evidence that can be replayed and reviewed (Creswell, 2014 , p. 190). These methods are adapted to fit the needs of specific projects.

Table 2. Qualitative Data Collection Methods

Type

Brief Description

Key Example(s) and Reference Source(s)

Provides thick description of a single event or phenomenon unfolding over time

Perlow ( ); Mills, Duerpos, and Wiebe ( ); Stake ( ); Piekkari and Welch ( )

Participant Observation

Observe, participate in, and describe actual settings and behaviors

McCall and Simmons ( )

Barker ( )

Graham ( )

Ethnography

Insider description of micro-culture developed through active participation in the culture

Van Maanen ( ); Ybema, Yanow, Wels, and Kamsteeg ( ); Cunliffe ( ); Van Maanen ( )

Systematic Self-Observation

Strategy for training lay informants to observe and immediately record selected experiences

Rodrguez, Ryave, and Tracewell ( ); Rodriguez and Ryave ( )

Single-Informant Interviews

Traditional structured interview

Pose preset and fixed questions and record answers to produce (factual) information on phenomena, explore concepts and test theory

Easterby-Smith, Thorpe, and Jackson et al. ( )

Unstructured interview

Use interview guide with themes to develop and pose in situ questions that fit unfolding interview

Easterby-Smith et al. ( )

Active interview

Unstructured interview with questions and answers co-constructed with informant that reveals the co-construction of meaning

Holstein and Gubrium ( )

Ethnographic interview

Meeting where researcher meets informant to pose systematic questions that teach the researcher about the informant’s questions

Spradley ( )

McCurdy, Spradley, and Shandy ( )

Long interview

Extended use of structured interview method that includes demographic and open-ended questions. Designed to efficiently uncover the worldview of informants without prolonged field involvement

McCracken ( )

Gephart and Richardson ( )

Focus Group

A group interview used to collect data on a predetermined topic (focus) and mediated by the researcher

Morgan ( )

Records and Texts

Photographic and visual methods

Produce accurate visual images of physical phenomena in field settings that can be analyzed or used to elicit informant reports

Ray and Smith ( )

Greenwood, Jack, and Haylock ( )

Video methods

Produce “different views’ of activity and permanent record that can be repeatedly examined and used to verify accuracy and validity of research claims

LeBaron, Jarzabkowski, Pratt, and Fetzer ( )

Textual data and documentary data collection

Hodder ( )

The In-Depth Case Study Method

The in-depth case study is a key strategy for qualitative research (Piekkari & Welch, 2012 ). It was the most common qualitative method used during the formative years of the field, from 1956 to 1965 , when 48% of qualitative papers published in the Administrative Science Quarterly used the case study method (Van Maanen, 1998 , p. xix). The case design uses one or more data collection strategies to describe in detail how a single event or phenomenon, selected by a researcher, has changed over time. This provides an understanding of the processes that underlie changes to the phenomenon. In-depth case study methods use observations, documents, records, and interviews that describe the events in the case unfolded and their implications. Case studies contextualize phenomena by studying them in actual situations. They provide rich insights into multiple dimensions of a single phenomenon (Campbell, 1975 ); offer empirical insights into what, how, and why questions related to phenomena; and assist in the creation of robust theory by providing diverse data collected over time (Gephart & Richardson, 2008 , p. 36).

Maniha and Perrow ( 1965 ) provide an example of a case study concerned with organizational goal displacement, an important issue in early organizational theorizing that proposed organizations emerge from rational goals. Organizational rationality was becoming questioned at the time that the authors studied a Youth Commission with nine members in a city of 70,000 persons (Maniha & Perrow, 1965 ). The organization’s activities were reconstructed from interviews with principals and stakeholders of the organization, minutes from Youth Commission meetings, documents, letters, and newspaper accounts (Maniha & Perrow, 1965 ).

The account that emerged from the data analysis is a history of how a “reluctant organization” with “no goals to guide it” was used by other aggressive organizations for their own ends. It ultimately created its own mission (Maniha & Perrow, 1965 ). Thus, an organization that initially lacked rational goals developed a mission through the irrational process of goal slippage or displacement. This finding challenged prevailing thinking at the time.

Observational Strategies

Observational strategies involve a researcher present in a situation who observes and records, the activities and conversations that occur in the setting, usually in written field notes. The three observational strategies in Table 2 —participant observation, ethnography, and systematic self-observation—differ in terms of the role of the researcher and in the data collection approach.

Participant observation . This is one of the earliest qualitative methods (McCall & Simmons, 1969 ). One gains access to a setting and an informant holding an appropriate social role, for example, client, customer, volunteer, or researcher. One then observes and records what occurs in the setting using field notes. Many features or topics in a setting can become a focus for participant observers. And observations can be conducted using continuum of different roles from the complete participant, observer as participant, and participant observer, to the complete observer who observes without participation (Creswell, 2014 , Table 9.2, p. 191).

Ethnography . An ethnography is “a written representation of culture” (Van Maanen, 1988 ) produced after extended participation in a culture. Ethnography is a form of participant observation that focuses on the cultural aspects of the group or organization under study (Van Maanen, 1988 , 2010 ). It involves prolonged and close contact with group members in a role where the observer becomes an apprentice to an informant to learn about a culture (Agar, 1980 ; McCurdy, Spradley, & Shandy, 2005 ; Spradley, 1979 ).

Ethnography produces fine-grained descriptions of a micro-culture, based on in-depth cultural participation (McCurdy et al., 2005 ; Spradley, 1979 , 2016 ). Ethnographic observations seek to capture cultural members’ worldviews (see Perlow, 1997 ; Van Maanen, 1988 ; Watson, 1994 ). Ethnographic techniques for interviewing informants have been refined into an integrated developmental research strategy—“the ethno-semantic method”—for undertaking qualitative research (Spradley, 1979 , 2016 ; Van Maanen, 1981 ). The ethnosemantic method uses a structured approach to uncover and confirm key cultural features, themes, and cultural reasoning processes (McCurdy et al., 2005 , Table 3 ; Spradley, 1979 ).

Systematic Self-Observation . Systematic self-observation (SSO) involves “training informants to observe and record a selected feature of their own everyday experience” (Rodrigues & Ryave, 2002 , p. 2; Rodriguez, Ryave, & Tracewell, 1998 ). Once aware that they are experiencing the target phenomenon, informants “immediately write a field report on their observation” (Rodrigues & Ryave, 2002 , p. 2) describing what was said and done, and providing background information on the context, thoughts, emotions, and relationships of people involved. SSO generates high-quality field notes that provide accurate descriptions of informants’ experiences (pp. 4–5). SSO allows informants to directly provide descriptions of their personal experiences including difficult to capture emotions.

Interview Strategies

Interviews are conversations between researchers and research participants—termed “subjects” in positivist research and informants in “interpretive research.” Interviews can be conducted as individual face-to-face interactions (Creswell, 2014 , p. 190) or by telephone, email, or through computer-based media. Two broad types of interview strategies are (a) the individual interview and (b) the group interview or focus group (Morgan, 1997 ). Interviews elicit informants’ insights into their culture and background information, and obtain answers and opinions. Interviews typically address topics and issues that occur outside the interview setting and at previous times. Interview data are thus reconstructions or undocumented descriptions of action in past settings (Creswell, 2014 , p. 191) that provide descriptions that are less accurate and valid descriptions than direct, real-time observations of settings.

Structured and unstructured interviews. Structured interviews pose a standardized set of fixed, closed-ended questions (Easterby-Smith, Thorpe, & Jackson, 2012 ) to respondents whose responses are recorded as factual information. Responses may be forced choice or open ended. However, most qualitative research uses unstructured or partially structured interviews that pose open-ended questions in a flexible order that can be adapted. Unstructured interviews allow for detailed responses and clarification of statements (Easterby-Smith et al., 2012 ; McLeod, 2014 )and the content and format can be tailored to the needs and assumptions of specific research projects (Gephart & Richardson, 2008 , p. 40).

The informant interview (Spradley, 1979 ) poses questions to informants to elicit and clarify background information about their culture, and to validate ethnographic observations. In interviews, informants teach the researcher their culture (Spradley, 1979 , pp. 24–39). The informant interview is part of a developmental research sequence (McCurdy et al., 2005 ; Spradley, 1979 ) that begins with broad “grand tour” questions that ask an informant to describe an important domain in their culture. The questions later narrow to focus on details of cultural domains and members’ folk concepts. This process uncovers semantic relationships among concepts of members and deeper cultural themes (McCurdy et al., 2005 ; Spradley, 1979 ).

The long interview (McCracken, 1988 ) involves a lengthy, quasi-structured interview sessions with informants to acquire rapid and efficient access to cultural themes and issues in a group. Long interviews differ ethnographic interviews by using a “more efficient and less obtrusive format” (p. 7). This creates a “sharply focused, rapid and highly intense interview process” that avoids indeterminate and redundant questions and pre-empts the need for observation or involvement in a culture. There are four stages in the long interview: (a) review literature to uncover analytical categories and design the interview; (b) review cultural categories to prepare the interview guide; (c) construct the questionnaire; and (d) analyze data to discover analytical categories (p. 30, fig. 1 ).

The active interview is a dynamic process where the researcher and informant co-construct and negotiate interview responses (Holstein & Gubrium, 1995 ). The goal is to uncover the subjective meanings that informants hold for phenomenon, and to understand how meaning is produced through communication. The active approach is common in interpretive, critical, and postmodern research that assumes a negotiated order. For example, Richardson and McKenna ( 2000 ) explored how ex-patriate British faculty members themselves interpreted and explained their expatriate experience. The researchers viewed the interview setting as one where the researchers and informants negotiated meanings between themselves, rather than a setting where prepared questions and answers were shared.

Documentary, Photographic, and Video Records as Data

Documents, records, artifacts, photographs, and video recordings are physically enduring forms of data that are separable from their producers and provide mute evidence with no inherent meaning until they are read, written about, and discussed (Hodder, 1994 , p. 393). Records (e.g., marriage certificate) attest to a formal transaction, are associated with formal governmental institutions, and may have legally restricted access. In contrast, documents are texts prepared for personal reasons with fewer legal restrictions but greater need for contextual interpretation. Several approaches to documentary and textual data analysis have been developed (see Table 3 ). Documents that researchers have found useful to collect include public documents and minutes of meetings; detailed transcripts of public hearings; corporate and government press releases; annual reports and financial documents; private documents such as diaries of informants; and news media reports.

Photographs and videos are useful for capturing “accurate” visual images of physical phenomena (Ray & Smith, 2012 ) that can be repeatedly reexamined and used as evidence to substantiate research claims (LeBaron, Jarzabkowski, Pratt, & Fetzer, 2018 ). Photos taken from different positions in space may also reveal different features of phenomena. Videos show movement and reveal activities as processes unfolding over time and space. Both photos and videos integrate and display the spatiotemporal contexts of action.

Layer Two: Design as Logical Frameworks

The second research design layer links data collection and analysis methods (Tables 2 and 3 ) to three logics of enquiry that answer specific questions: inductive, deductive, and abductive logical strategies (see Table 4 ). Each logical strategy focuses on producing different types of knowledge using distinctive research principles, processes, and types of research questions they can address.

Table 3. Data Analysis and Integrated Data Collection and Analysis Strategies

Strategy

Brief Explanation

Key References

Compassionate Research Methods

Immersive and experimental approach to using ethnographic understanding to enhancing care for others

Dutton, Workman, and Hardin ( )

Hansen and Trank ( )

Computer-Aided Interpretive Textual Analysis

Strategy for computer supported interpretive textual analysis of documents and discourse that capture members’ first-order meanings

Kelle ( )

Gephart ( , )

Content Analysis

Establishing categories for a document or text then counting the occurrences of categories and showing concern with issues of reliability and validity

Sonpar and Golden-Biddle ( )

Duriau, Reger, and Pfarrer ( )

Greckhamer, Misngyi, Elms, and Lacey ( )

Silverman ( )

Document, Record and Artifact Analysis

Uses many procedures for contemporary, non-document data analysis

Hodder ( )

Dream Analysis

Technique for detecting countertransference of emotions from researcher to informant to uncover how researchers are tacitly and unconsciously embedded in their own observations and interpretations

de Rond and Tuncalp ( )

Ethnomethodology

A sociological approach to analysis of sensemaking practices used in face to face communication

Coulon ( )

Garfinkel ( , )

Gephart ( , )

Whittle ( )

Ethnosemantic Analysis

Systematic approach to uncover first-order concepts and terms of members, verify their meaning, and construct folk taxonomies for meaningful cultural domains

Spradley ( )

McCurdy, Spradley, and Shandy ( )

Akeson ( )

Van Maanen ( )

Expansion Analysis

Form of discourse analysis that produces a detailed, line by line, data-driven interpretation of a text or transcript

Cicourel ( )

Gephart, Topal, and Zhang ( )

Grounded Theorizing

Inductive development of theory from systematically obtained and analyzed observations

Glaser and Strauss ( )

Gephart ( )

Locke ( , )

Smith ( )

Walsh et al. ( )

Interpretive Science

A methodology for doing scientific research using abduction that provides discovery oriented replicable scientific knowledge that is interpretive and not positivist

Schutz ( , )

Garfinkel ( )

Gephart ( )

Pattern matching

Unspecified process of matching/finding patterns in qualitative data, often confirmed by subjects’ verbal reports and quantitative analysis

Lee and Mitchell ( )

Lee, Mitchell, Wise, and Fireman ( )

Yan and Gray ( )

Phenomenological Analysis

Methodology/ies for examining individuals’ experiences

Gill ( )

Storytelling Inquiry

Six distinct approaches to storytelling useful for eliciting fine-grained and detailed stories from informants

Boje ( )

Rosile, Boje, Carlon, Downs, and Saylors ( )

Boje and Saylors ( )

Narrative and Textual Analysis

Analysis of written and spoken verbal behavior and documents using techniques from literary criticism, rhetoric, and sociolinguistic analysis to understand discourse

McCloskey ( )

Boje ( )

Gephart ( , , )

Ganzin, Gephart, and Suddaby ( )

Martin ( )

Calas and Smircich ( )

Pollach ( )

Organization Development/Action Research

Approaches to improving organizational structure and functioning through practice-based interventions

Cummings and Worley ( )

Buono and Savall ( )

Worley, Zardet, Bonnet, and Savall ( )

Table 4. Logical Strategies for Answering Qualitative Research Questions with Evidence

Feature

Inductive

Deductive

Abductive

Ontology

Realist

Realist/Objectivist

Interpretive/Constructionist

Assumptions

Objective world that is perceived subjectively; hence perceptions of reality can differ

Single objective reality independent of people’s perceptions

Questions

What—describe and explain phenomena

Why—explain associations between/among phenomena

What, why, and how—describe and explain conditions for occurrence of phenomena from lay and scientific perspectives

Aim

Logic

Linear: Begin with singular statements and conclude via induction with generalizations

Linear: Establish associations via induction or abduction then test them using deductive reasoning

Spiral processes: Analytical process moves from lay actors’ accounts to technical descriptions using scientific accounts

Scientist makes an hypothesis that appears to explain observations then proposes what gave rise to it (Blaikie, , p. 164)

Primary Focus

Objective features of settings described through subjective, personal perspectives

Objective features of broad realities described from objective, unbiased perspectives

Intersubjective meanings and interpretations used in everyday life to construct objective features and reveal subjective meanings

Principles

Facts gained by unbiased observations

Elimination method

Hypotheses are not used to compare facts

Borrow or invent a theory, express it as a deductive argument, deduce a conclusion, test the conclusion. If it passes, treat the conclusion as the explanation.

Construct second-order scientific theories by generalization/induction and inference from observations of actors’ activities, terms, meanings, and theories.

Incorporate members’ meanings—phenomena left out of inductive and deductive research.

Outcomes

Describes features of domain of social action and infers from one set of facts to another: hence can confirm existence of phenomena in initial domain but cannot discover phenomena outside of previously known domain

Scientist has great freedom to propose theory but nature decides on the validity of conclusions: knowledge limited to prior hypotheses, no discovery possible (Blaikie, , p. 144)

, p. 165)

Based in part on Blaikie ( 1993 ), ch. 5 & 6; Blaikie ( 2010 ), p. 84, table 4.1

The Inductive Strategy

Induction is the scientific method for many scholars (Blaikie, 1993 , p. 134), and an essential logic for qualitative management research (Pratt, 2009 , p. 856). Inductive strategies ask “what” questions to explore a domain to discover unknown features of a phenomenon (Blaikie, 2010 , p. 83). There are four stages to the inductive strategy: (a) observe and record all facts without selection or anticipating their importance; (b) analyze, compare, and classify facts without employing hypotheses; (c) develop generalizations inductively based on the analyses; and (d) subject generalizations to further testing (Blaikie, 1993 , p. 137).

Inductive research assumes a real world outside human thought that can be directly sensed and described (Blaikie, 2010 ). Principles of inductive research reflect a realist and objectivist ontology. The selection, definition, and measurement of characteristics to be studied are developed from an objective, scientific point of view. Facts about organizational features need to be obtained using unbiased measurement. Further, the elimination method is used to find “the characteristics present in all the positive cases, which are absent in all the negative cases, and which vary in appropriate degrees” (Blaikie, 1993 , p. 135). This requires data collection methods that provide unbiased evidence of the objective facts without pre-supposing their importance.

Induction can establish limited generalizations about phenomena based solely on the observations collected. Generalizations need to be based on the entire sample of data, not on selected observations from large data sets, to establish their validity. The scope of generalization is limited to the sample of data itself. Induction creates evidence to increase our confidence in a conclusion, but the conclusions do not logically follow from premises (Blaikie, 1993 , p. 164). Indeed, inferences from induction cannot be extended beyond the original set of observations and no logical or formal process exists to establish the universality of inferences.

Key data collection methods for inductive designs include observational strategies that allow the researcher to view behavior without making a priori hypotheses, to describe behavior that occurs “naturally” in settings, and to record non-impressionistic descriptions of behavior. Interviews can also elicit descriptions of settings and behavior for inductive qualitative research. Data analysis methods need to describe actual interactions in real settings including discourse among members. These methods include ethnosemantic analysis to uncover key terms and validate actual meanings used by members; analyses of conversational practices that show how meaning is negotiated through sequential turn taking in discourse; and grounded theory-based concept coding and theory development that use the constant comparative method.

Facts or descriptions of events can be compared to one another and generalizations can be made about the world using induction (Blaikie, 2010 ). Outcomes from inductive analysis include descriptions of features in a limited domain of social action that are inferred to exist in other similar settings. Propositions and broader insights can be developed inductively from these descriptions.

The Deductive Strategy

Deductive logic (Blaikie, 1993 , 2010 ) addresses “why” questions to explain associations between concepts that represent phenomena of interest. Researchers can use induction, abduction, or any means, to develop then test the hypotheses to see if they are valid. Hypotheses that are not rejected are temporarily corroborated. The outcomes from deduction are tested hypotheses. Researchers can thus be very creative in hypothesis construction but they cannot discover new phenomena with deduction that is based only on phenomena known in advance (Blaikie, 2010 ). And there is also no purely logical or mechanical process to establish “the validity of [inductively constructed] universal statements from a set of singular statements” from which deductive hypotheses were formed (Hempel, 1966 , p. 15 cited in Blaikie, 1993 , p. 140).

The deductive strategy uses a realist and objectivist ontology and imitates natural science methods. Useful data collection methods include observation, interviewing, and collection of documents that contain facts. Deduction addresses the assumedly objective features of settings and interactions. Appropriate data analysis methods include content coding to identify different types, features, and frequencies of observed phenomena; grounded theory coding and analytical induction to create categories in data, determine how categories are interrelated, and induce theory from observations; and pattern recognition to compare current data to prior models and samples. Content analysis and non-parametric statistics can be used to quantify qualitative data and make it more amenable to analysis, although quantitative analysis of qualitative data is not, strictly speaking, qualitative research (Gephart, 2004 ).

The Abductive Strategy

Abduction is “the process used to produce social scientific accounts of social life by drawing on the concepts and meanings used by social actors, and the activities in which they engage” (Blaikie, 1993 , p. 176). Abductive reasoning assumes that the socially meaningful world is the world experienced by members. The first abductive task is to discover the insider view that is basic to the actions of social actors (p. 176) by uncovering the subjective meanings held by social actors. Subjective meaning (Schutz, 1973a , 1973b ) refers to the meaning that actions hold for the actors themselves and that they can express verbally. Subjective meaning is not inexpressible ideas locked in one’s mind. Abduction starts with lay descriptions of social life, then moves to technical, scientific descriptions of social life (Blaikie, 1993 , p. 177) (see Table 4 ). Abduction answers “what” questions with induction, why questions with deduction, and “how” questions with hypothesized processes that explain how, and under what conditions, phenomena occur. Abduction involves making a logical leap that infers an explanatory process to explain an outcome in an oscillating logic. Deductive, inductive, and inferential processes move recursively from actors’ accounts to social science accounts and back again in abduction (Gephart, 2018 ). This process enables all theory and second-order scientific concepts to be grounded in actors’ first-order meanings.

The abductive strategy contains four layers: (a) everyday concepts and meanings of actors, used for (b) social interaction, from which (c) actors provide accounts, from which (d) social scientific descriptions are made, or theories are generated and applied, to interpret phenomena (Blaikie, 1993 , p. 177). The multifaceted research process, described in Table 4 , requires locating and comprehending members’ important everyday concepts and theories before observing or creating disruptions that force members to explain the unstated knowledge behind their action. The researcher then integrates members’ first-order concepts into a general, second-order scientific theory that makes first-order understandings recoverable.

Abduction emerged from Weber’s interpretive sociology ( 1978 ) and Peirce’s ( 1936 ) philosophy. But Alfred Schutz ( 1973a , 1973b ) is the contemporary scholar who did the most to extend our understanding of abduction, although he never used the term “abduction” (Blaikie, 1993 , 2010 ; Gephart, 2018 ). Schutz conceived abduction as an approach to verifiable interpretive knowledge that is scientific and rigorous (Blaikie, 1993 ; Gephart, 2018 ). Abduction is appropriate for research that seeks to go beyond description to explanation and prediction (Blaikie, 1993 , p. 163) and discovery (Gephart, 2018 ). It employs an interpretive ontology (Schutz, 1973a , 1973b ) and social constructionist epistemology (Berger & Luckmann, 1966 ), using qualitative methods to discover “why people do what they do” (Blaikie, 1993 ).

Dynamic data collection methods are needed for abductive research to capture descriptions of interactions in actual settings and their meanings to members. Observational and interview approaches that elicit members’ concepts and theories are particularly relevant to abductive understanding (see Table 2 ). Data analysis methods must analyze situated, first-order (common sense) discourse as it unfolds in real settings and then systematically develop second-order concepts or theories from data. Relevant approaches to produce and validate findings include ethnography, ethnomethodology, and grounded theorizing (see Table 3 ). The combination of what, why, and how questions used in abduction produces a broader understanding of phenomena than do what and why deductive and inductive questions.

Layer Three: Paradigms of Research

Scholarly paradigms integrate methods, logics, and intellectual worldviews into coherent theoretical perspectives and form the most abstract level of research design. Six paradigms are widely used in management research (Burrell & Morgan, 1979 ; Cunliffe, 2011 ; Gephart, 2004 , 2013 ; Gephart & Richardson, 2008 ; Hassard, 1993 ). The first three perspectives—positivism, interpretive induction, and interpretive abduction—build on logics of design and seek to produce rigorous empirical research that constitutes evidence (see Table 5 ). Three additional perspectives pursue philosophical, critical, and practical knowledge: critical theory, postmodernism, and organization development (see Table 6 ). Tables 5 and 6 describe important features of each research design to show similarities and differences in the processes through which theoretical meaning is bestowed on research results in management and organization studies.

Table 5. Paradigms, Logical Strategies, and Methodologies for Empirical Research

DIMENSION

Positivism

Interpretive Induction

Interpretive Science

Nature of Reality

Realism: Single objective, durable, knowable reality independent of people

Socially constructed reality with subjective and objective features

Material reality socially constructed through inter-subjective practices that link objective to subjective meanings

Goal

Discover facts and causal interrelationships among facts (variables)

Provide descriptive accounts, theories and data-based understandings of members’ practices

Develop second-order scientific theories from lay members’ first-order concepts and everyday understandings

Research Questions

Why questions

What questions

What, why, and how questions

Methods Foci

Facts

Variables, hypotheses, associations, and correlations

Meanings: Describe language use in real life contexts, communication, meaning during organizational action

Meaning: Describe how members construct and maintain a sense of shared meaning and social structure (intersubjectivity)

Methods Orientation

Logical strategies

Induction

Abduction

Induction

Deduction

Data Collection Methods

Observation

Interviews

Audio and video records

Field notes

Document collection

Ethnography Participant observation

Interviewing

Audio or video tape recording

Field notes Document collection

Ethnography

Participant observation

Informant interviewing

Audio or video with detailed transcriptions of conversation and recording

Field notes

Document collection

Data Analysis Methods

Pattern matching

Content analysis

Grounded

Theory

Analytical induction

Grounded theory coding

Gioia method

Schutz’s abductive method

Expansion analysis

Conversation analysis

Ethnomethodogy

Interpretive textual analysis

Research Process

, p. 90)

Research Design Stages

Research Outcomes

Assessing knowledge

Types of Knowledge Sought

Scientific knowledge

Scholarly knowledge that is interpretive and has scientific features

Scientific knowledge that is replicable, reliable and valid

Practice-oriented knowledge of members’ gained based on first-order understandings

Sources: Based on and adapted and extended from Blaikie ( 1993 , pp. 137, 145, & 152); Blaikie ( 2010 , Table 4.1, p. 84); Gephart ( 2013 , Table 9.1, p. 291) and Gephart ( 2018 , Table 3.1, pp. 38–39).

Table 6. Alternative Paradigms, Logical Strategies, and Methodologies

Dimension

Critical Research

Postmodern Perspectives

Organization Development Research

Dialectical reality with objective contradictions and reified structures that produce power-based inequities

Uncover, dereify, and challenge taken-for-granted meanings and practices to reduce power inequities, enable emancipation, and motivate social change

Reduce hidden costs

Enhance value added for humans

Actions and ideologies that create reified, objective social structures that are oppressive—OR—disrupt reified structures

Analysis of texts and discourse that shape and bestow power to show their value-laden nature

Describe and uncover sources of oppression and discord

Produce accounts that enable or encourage social action and change

Emphasis on description, unveiling of reified structure, change

Describe and uncover sources of oppression and discord

Produce accounts that enable or encourage social action and change

Emphasis on description, unveiling of reified structure, change

Reflection,

Critical reflexivity

Dialectical methods

Reflection

Deconstruction

Linguistic play

Deduction

Induction

Abduction

All methods possibly useful

Case descriptions

Document collection

Collect documents and texts

Observations, interviews

All qualitative methods are possibly useful

Dialogical Inquiry

Critical ethnography

Storytelling inquiry

Critical discourse analysis

Narrative and rhetorical analysis

Deconstruction

Pattern matching

Storytelling

Qualimetrics

Hidden cost analysis

Unmasking of oppression

Development of political strategies for action

Trigger actions that produce change

Trace the conflictual role of power in organizational life

Create texts that disrupt the readers’ conceptions and viewpoints

Challenge status quo knowledge

Expose hidden knowledge and hidden interests

Motivate action to resist categorizations

Qualitative and quantitative improvements in organizational functioning and performance

Reduction of hidden costs

Quality of theory developed

Positive impacts on management policies and practices to reduce oppression, inequities

Novel research to

produce novelinsights

Examineperformance outcomes

Political knowledge, historical knowledge, change orientation

Disruptive knowledge, change orientation, philosophical, literary, and rhetorical texts

Practical knowledge

Actionable knowledge

Based in part on Gephart ( 2004 , 2013 , 2018 ).

The Positivist Approach

The qualitative positivist approach makes assumptions equivalent to those of quantitative research (Gephart, 2004 , 2018 ). It assumes the world is objectively describable and comprehensible using inductive and deductive logics. And rigor is important and achieved by reliability, validity, and generalizability of findings (Kirk & Miller, 1986 ; Malterud, 2001 ). Qualitative positivism mimics natural science logics and methods using data recorded as words and talk rather than numerals.

Positivist research (Bitektine, 2008 ; Su, 2018 ) starts with a hypothesis. This can, but need not, be based in data or inductive theory. The research process, aimed at publication in peer-reviewed journals, requires researchers to (a) identify variables to measure, (b) develop operational definitions of the variables, (c) measure (describe) the variables and their inter-relationships, (d) pose hypotheses to test relationships among variables, then (e) compare observations to hypotheses for testing (Blaikie, 2010 ). When data are consistent with theory, theory passes the test. Otherwise the theory fails. This theory is also assessed for its logical correctness and value for knowledge. The positivist approach can assess deductive and inductive generalizations and provide evidence concerning why something occurs—if proposed hypotheses are not rejected.

Positivists view qualitative research as highly subject to biases that must be prevented to ensure rigor, and 23 methodological steps are recommended to enhance rigor and prevent bias (Gibbert & Ruigrok, 2010 , p. 720). Replicability is another concern because methodology descriptions in qualitative publications “insufficiently describe” how methods are used (Lee, Mitchell, & Sablynski, 1999 , p. 182) and thereby prevent replication. To ensure replicability, a qualitative “article’s description of the method must be sufficiently detailed to allow a reader . . . to replicate that reported study either in a hypothetical or actual manner.”

Qualitative research allows positivists to observe naturally unfolding behavior in real settings and allow “the real world” of work to inform research and theory (Locke & Golden-Biddle, 2004 ). Encounters with the actual world provide insights into meaning construction by members that cannot be captured with outsider (etic) approaches. For example, past quantitative research provided inconsistent findings on the importance of pre- and post-recruitment screening interviews for job choices of recruits. A deeper investigation was thus designed to examine how recruitment impacts job selection (Rynes, Bretz, & Gerhart, 1991 ). To do so, students undergoing recruitment were asked to “tell us in their own words” how their recruiting and decision processes unfolded (Rynes et al., 1991 , p. 399). Using qualitative evidence, the researchers found that, in contrast to quantitative findings, “people do make choices based on how they are treated” (p. 509), and the choices impact recruitment outcomes. Rich descriptions of actual behavior can disconfirm quantitative findings and produce new findings that move the field forward.

An important limitation of positivism is its common emphasis on outsiders’ or scientific observers’ objective conceptions of the world. This limits the attention positivist research gives to members’ knowledge and allows positivist research to impose outsiders’ meanings on members’ everyday behavior, leading to a lack of understanding of what the behavior means to members. Another limitation is that no formal, logical, or proven techniques exist to assess the strength of “relationships” among qualitative variables, although such assessments can be formally done using well-formed quantitative data and techniques. Thus, qualitative positivists often provide ambiguous or inexplicit quantitative depictions of variable relations (e.g., “strong relationship”). Alternatively, the analysts quantify qualitative data by assigning numeric codes to categories (Greckhamer, Misngyi, Elms, & Lacey, 2008 ), using non-parametric statistics, or quantitative content analysis (Sonpar & Golden-Biddle, 2008 ) to create numerals that depict associations among variables.

An illustrative example of positivist research . Cole ( 1985 ) studied why and how organizations change their working structures from bureaucratic forms to small, self-supervised work teams that allow for worker participation in shop floor activities. Cole found that existing research on workplace change focused on the micropolitical level of organizations. He hypothesized that knowledge could be advanced differently, by examining the macropolitical change in industries or nations. Next, a testable conclusion was deduced: a macro analysis of the politics of change can better predict the success of work team implementation, measured as the spread of small group work structures, than an examination of the micropolitics of small groups ( 1985 ). Three settings were selected for the research: Japan, Sweden, and the United States. Japanese data were collected from company visits and interviews with employment officials and union leaders. Swedish documentary data on semiautonomous work groups were used and supplemented by interviews at Volvo and Saab, and prior field research in Sweden. U.S. data were collected through direct observations and a survey of early quality circle adopters.

Extensive change was observed in Sweden and Japan but changes to small work groups were limited in the United States (Cole, 1985 ). This conclusion was verified using records of the experiences of the three nations in work reform, compared across four dimensions: timing and scope of changes, managerial incentives to innovate, characteristics of mobilization, and political dimensions of change. Data revealed the United States had piecemeal experimentation and resistance to reform through the 1970s; diffusion emerged in Japan in the early 1960s and became extensive; and Swedish workplace reform started in the 1960s and was widely and rapidly diffused.

Cole then answered the questions of “why” and “how” the change occurred in some countries but not others. Regarding why Japanese and Swedish managers were motivated to introduce workplace change due to perceived managerial problems and the changing national labor market. Differences in the political processes also influenced change. Management, labor, and government interest in workplace change was evident in Japan and Sweden but not in the United States where widespread resistance occurred. As to how, the change occurred through macropolitical processes (Cole, 1985 , p. 120), specifically, the commitment of the national business leadership to the change and whether or not the change was contested or uncontested by labor impacted the adoption of change. Organizational change usually occurs through broad macropolitical processes, hence “the importance of macro-political variables in explaining these outcomes” (p. 122).

Interpretive Induction

Two streams of qualitative research claim the label of “interpretive research” in management and organization studies. The first stream, interpretive induction, emphasizes induction as its primary logical strategy (e.g., Locke, 2001 , 2002 ; Pratt, 2009 ). It assumes a “real world” that is inherently objective but interpreted through subjective lenses, hence different people can perceive or report different things. This research is interpretive because it addresses the meanings and interpretations people give to organizational phenomena, and how this meaning is provided and used. Interpretive induction contributes to scientific knowledge by providing empirical descriptions, generalizations, and low-level theories about specific contexts based on thick descriptions of members’ settings and interactions (first-order understandings) as data.

The interpretive induction paradigm addresses “what” questions that describe and explain the existence and features of phenomena. It seeks to uncover the subjective, personal knowledge that subjects have of the objective world and does so by creating descriptive accounts of the activities of organizational members. Interpretive induction creates inductive theories based on limited samples that provide low-scope, abstract theory. Limitations (Table 5 ) include the fact that inductive generalizations are limited to the sample used for induction and need to be subjected to additional tests and comparisons for substantiation. Second, research reports often fail to provide details to allow replication of the research. Third, formal methods for assessing the accuracy and validity of results and findings are limited. Fourth, while many features of scientific research are evident in interpretive induction research, the research moves closer to humanistic knowledge than to science when the basic assumptions of inductive analysis are relaxed—a common occurrence.

An illustrative example of interpretive induction research . Adler and Adler ( 1988 , 1998 ) undertook a five-year participant-observation study of a college basketball program (Adler, 1998 , p. 32). They sought to “examine the development of intense loyalty in one organization.” Intense loyalty evokes “devotional commitment of . . . (organizational) members through a subordination that sometime borders on subservience” (p. 32). The goal was to “describe and analyze the structural factors that emerged as most related” to intense loyalty (p. 32).

The researchers divided their roles. Peter Adler was the active observer and “expert” who undertook direct observations while providing counsel to players (p. 33). Patricia Adler took the peripheral role of “wife” and debriefed the observer. Two research questions were posed: (a) “what” kinds of organizational characteristics foster intense loyalty? (b) “how” do organizations with intense loyalty differ structurally from those that lack intense loyalty?

The first design stage (Table 5 ) recorded unbiased observations in extensive field notes. Detailed “life history” accounts were obtained from 38 team members interviewed (Adler & Adler, 1998 , p. 33). Then analytical induction and the constant comparative method (Glaser & Strauss, 1967 ) were used to classify and compare observations (p. 33). Once patterns emerged, informants were questioned about variations in patterns (p. 34) to develop “total patterns” (p. 34) reflecting the collective belief system of the group. This process required a “careful and rigorous means of data collection and analysis” that was “designed to maximize both the reliability and validity of our findings” (p. 34). The study found five conceptual elements were essential to the development of intense loyalty: domination, identification, commitment, integration, and goal alignment (p. 35).

The “what” question was answered by inducing a generalization (stage 3): paternalistic organizations with charismatic leadership seek people who “fit” the organization’s style and these people require extensive socialization to foster intense loyalty. This description contrasts with rational bureaucratic organizations that seek people who fit specific, generally known job descriptions and require limited socialization (p. 46). The “how” question is answered by inductive creation of another generalization: organizations that control the extra-organizational activities of members are more likely to evoke intense loyalty by forcing members to subordinate all other interests to those of the organization (p. 46).

The Interpretive Abduction Approach

The second stream of interpretive research—interpretive abduction—produces scientific knowledge using qualitative methods (Gephart, 2018 ). The approach assumes that commonsense knowledge is foundational to how actors know the world. Abductive theory is scientifically built from, and refers to, everyday life meanings, in contrast to positivist and interpretive induction research that omits concern with the worldview of members. Further, interpretive abduction produces second-order or scientific theory and concepts from members’ first-order commonsense concepts and meanings (Gephart, 2018 , p. 34; Schutz, 1973a , 1973b ).

The research process, detailed in Table 5 (process and stages), focuses on collecting thick descriptive data on organizations, identifying and interpreting first-order lay concepts, and creating abstract second-order technical constructs of science. The second-order concepts describe the first-order principles and terms social actors use to organize their experience. They compose scientific concepts that form a theoretical system to objectively describe, predict, and explain social organization (Gephart, 2018 , p. 35). This requires researchers to understand the subjective view of the social actors they study, and to develop second-order theory based on actors’ subjective meanings. Subjective meaning can be shared with others through language use and communication and is not private knowledge.

A central analytical task for interpretive abduction is creating second-order, ideal-type models of social roles, motives, and interactions that describe the behavioral trajectories of typical actors. Ideal-type models can be objectively compared to one another and are the special devices that social science requires to address differences between social phenomena and natural phenomena (Schutz, 1973a , 1973b ). The models, once built, are refined to preserve actors’ subjective meanings, to be logically consistent, and to present human action from the actor’s point of view. Researchers can then vary and compare the models to observe the different outcomes that emerge. Scientific descriptions can then be produced, and theories can be created. Interpretive abduction (Gephart, 2018 , p. 35) allows one to addresses what, why, and how questions in a holistic manner, to describe relationships among scientific constructs, and to produce “empirically ascertainable” and verifiable relations among concepts (Schutz, 1973b , p. 65) that are logical, hold practical meaning to lay actors, and provide abstract, objective meaning to interpretive scientists (Gephart, 2018 , p. 35). Abduction produces knowledge about socially shared realities by observing interactions, uncovering members’ first-order meanings, and then developing technical second-order or scientific accounts from lay accounts.

Interpretive abduction (Gephart, 2018 ) uses well-developed methods to create, refine, test, and verify second-order models, and it provides well-developed tools to support technical, second-level analyses. Research using the interpretive abduction approach includes a study of how technology change impacts sales automobile practices (Barley, 2015 ) and an investigation study of how abduction was used to develop new prescription drugs (Dunne & Dougherty, 2016 ).

An illustrative example of the interpretive abduction approach . Perlow ( 1997 ) studied time management among software engineers facing a product launch deadline. Past research verified the widespread belief that long working hours for staff are necessary for organizational success. This belief has adversely impacted work life and led to the concept of a “time bind” faced by professionals (Hochschild, 1997 ). One research question that subsequently emerged was, “what underlies ‘the time bind’ experienced by engineers who face constant deadlines and work interruptions?” (Perlow, 1997 , p. xvii). This is an inductive question about the causes and consequences of long working hours not answered in prior research that is hard to address using induction or deduction. Perlow then explored assumption underlying the hypothesis, supported by lay knowledge and management literature, that even if long working hours cause professionals to destroy their life style, long work hours “further the goals of our organizations” and “maximize the corporation’s bottom line” (Perlow, 1997 , p. 2).

The research commenced (Table 5 , step 1) when Perlow gained access to “Ditto,” a leader in implementing flexible work policies (Perlow, 1997 , p. 141) and spent nine months doing participant observation four days a week. Perlow collected descriptive data by walking around to observe and converse with people, attended meetings and social events, interviewed engineers at work and home and spouses at home, asked participants to record activities they undertook on selected working days (Perlow, 1997 , p. 143), and made “thousands of pages of field notes” (p. 146) to uncover trade-offs between work and home life.

Perlow ( 1997 , pp. 146–147) analyzed first-order concepts uncovered through his observations and interviews from 17 stories he wrote for each individual he had studied. The stories described workstyles, family lives, and traits of individuals; provided objective accounts of subjective meanings each held for work and home; offered background information; and highlighted first-order concepts. Similarities and differences in informant accounts were explored with an empirically grounded scheme for coding observations into categories using grounded theory processes (Gioia, Corley, & Hamilton, 2012 ). The process allowed Perlow to find key themes in stories that show work patterns and perceptions of the requirements of work success, and to create ideal-type models of workers (step 3). Five stories were selected for detailed analysis because they reveal important themes Perlow ( 1997 , p. 147). For example, second-order, ideal-type models of different “roles” were constructed in step 3 including the “organizational superstar” (pp. 15–21) and “ideal female employee” (pp. 22–32) based on first-order accounts of members. The second-order ideal-type scientific models were refined to include typical motives. The models were compared to one another (step 4) to describe and understand how the actions of these employee types differed from other employee types and how these variations produced different outcomes for each trajectory of action (steps 4 and 5).

Perlow ( 1997 ) found that constant help-seeking led engineers to interrupt other engineers to get solutions to problems. This observation led to the abductively developed hypothesis that interruptions create a time crisis atmosphere for engineers. Perlow ( 1997 ) then created a testable, second-order ideal-type (scientific) model of “the vicious working cycle” (p. 96), developed from first-order data, that explains the productivity problems that the firm (and other research and development firms)—commonly face. Specifically, time pressure → crisis mentality → individual heroics → constant interruptions of others’ work to get help → negative consequences for individual → negative consequences for the organization.

Perlow ( 1997 ) then tested the abductive hypothesis that the vicious work cycle caused productivity problems (stage 5). To do so, the vicious work cycle was transformed into a virtuous cycle using scheduling quiet times to prevent work interruptions: relaxed work atmosphere → individuals focus on own work completion → few interruptions → positive consequences for individual and organization. To test the hypothesis, an experiment was conducted (research process 2 in Table 5 ) with engineers given scheduled quiet times each morning with no interruptions. The experiment was successful: the project deadline was met. The hypothesis about work interruptions and the false belief that long hours are needed for success were supported (design stage 6). Unfortunately, the change was not sustained and engineers reverted to work interruptions when the experiment ended.

There are three additional qualitative approaches used in management research that pursue objectives other than producing empirical findings and developing or testing theories. These include critical theory and research, postmodernism, and change intervention research (see Table 6 ).

The Critical Theory and Research Approach

The term “critical” has many meanings including (a) critiques oriented to uncovering ideological manifestations in social relations (Gephart, 2013 , p. 284); (b) critiques of underlying assumptions of theories; and (c) critique as self-reflection that reflexively encapsulates the investigator (Morrow, 1994 , p. 9). Critical theory and critical management studies bring these conceptions of critical to bear on organizations and employees.

Critical theory and research extend the theories Karl Marx, and the Frankfurt School in Germany (Gephart & Kulicki, 2008 ; Gephart & Pitter, 1995 ; Habermas, 1973 , 1979 ; Morrow, 1994 ; Offe, 1984 , 1985 ). Critical theory and research assume that social science research differs from natural science research because social facts are human creations and social phenomena cannot be controlled as readily as natural phenomena (Gephart, 2013 , p. 284; Morrow, 1994 , p. 9). As a result, critical theory often uses a historical approach to explore issues that arise from the fundamental contradictions of capitalism. Critical research explores ongoing changes within capitalist societies and organizations, and analyzes the objective structures that constrain human imagination and action (Morrow, 1994 ). It seeks to uncover the contradictions of advanced capitalism that emerge from the fundamental contradiction of capitalism: owners of capital have the right to appropriate the surplus value created by workers. This basic contradiction produces further contradictions that become sources of workplace oppression and resistance that create labor issues. Thus contradictions reveal how power creates consciousness (Poutanen & Kovalainen, 2010 ). Critical reflection is used to de-reify taken-for-granted structures that create power inequities and to motivate resistance and critique and escape from dominant structures (see Table 6 ).

Critical management studies build on critical theory in sociology. It seeks to transform management and provide alternatives to mainstream theory (Adler, Forbes, & Willmott, 2007 ). The focus is “the social injustice and environmental destruction of the broader social and economic systems” served by conventional, capitalist managers (Adler et al., 2007 , p. 118). Critical management research examines “the systemic corrosion of moral responsibility when any concern for people or for the environment . . . requires justification in terms of its contribution to profitable growth” (p. 4). Critical management studies goes beyond scientific skepticism to undertake a radical critique of socially divisive and environmentally destructive patterns and structures (Adler et al., 2007 , p. 119). These studies use critical reflexivity to uncover reified capitalist structures that allow certain groups to dominate others. Critical reflection is used to de-reify and challenge the facts of social life that are seen as immutable and inevitable (Gephart & Richardson, 2008 , p. 34). The combination of dialogical inquiry, critical reflection, and a combination of qualitative and quantitative methods and data are common in this research (Gephart, 2013 , p. 285). Some researchers use deductive logics to build falsifiable theories while other researchers do grounded theory building (Blaikie, 2010 ). Validity of critical research is assessed as the capability the research has to produce critical reflexivity that comprehends dominant ideologies and transforms repressive structures into democratic processes and institutions (Gephart & Richardson, 2008 ).

An illustrative example of critical research . Barker ( 1998 , p. 130) studied “concertive control” in self-managed work teams in a small manufacturing firm. Concertive control refers to how workers collaborate to engage in self-control. Barker sought to understand how control practices in the self-managed team setting, established to allow workers greater control over their work, differed from previous bureaucratic processes. Interviews, observations, and documents were used as data sources. The resultant description of work activities and control shows that rather than allowing workers greater control, the control process enacted by workers themselves became stronger: “The iron cage becomes stronger” and almost invisible “to the workers it incarcerates” (Barker, 1998 , p. 155). This study shows how traditional participant observation methods can be used to uncover and contest reified structures and taken-for-granted truths, and to reveal the hidden managerial interests served.

Postmodern Perspectives

The postmodern perspective (Boje, Gephart, & Thatchenkery, 1996 ) is based in philosophy, the humanities, and literary criticism. Postmodernism, as an era, refers to the historical stage following modernity that evidences a new cultural worldview and style of intellectual production (Boje et al., 1996 ; Jameson, 1991 ; Rosenau, 1992 ). Postmodernism offers a humanistic approach to reconceptualize our experience of the social world in an era where it is impossible to establish any foundational underpinnings for knowledge. The postmodern perspective assumes that realities are contradictory in nature and value-laden (Gephart & Richardson, 2008 ; Rosenau, 1992 , p. 6). It addresses the values and contradictions of contemporary settings, how hidden power operates, and how people are categorized (Gephart, 2013 ). Postmodernism also challenges the idea that scientific research is value free, and asks “whose values are served by research?”

Postmodern essays depart from concerns with systematic, replicable research methods and designs (Calas, 1987 ). They seek instead to explore the values and contradictions of contemporary organizational life (Gephart, 2013 , p. 289). Research reports have the character of essays that seek to reconceptualize how people experience the world (Martin, 1990 ; Rosenau, 1992 ) and to disrupt this experience by producing “reading effects” that unsettle a community (Calas & Smircich, 1991 ).

Postmodernism examines intertextual relations—how texts become embedded in other texts—rather than causal relations. It assumes there are no singular realities or truths, only multiple realities and multiple truths, none of which are superior to other truths (Gephart, 2013 ). Truth is conceived as the outcome of language use in a context where power relations and multiple realities exist.

From a methodological view, postmodern research tends to focus on discourse: texts and talk. Data collection (in so far as it occurs) focuses on records of discourse—texts of spoken and written verbal communication (Fairclough, 1992 ). Use of formal or official records including recordings, texts and transcripts is common. Analytically, scholars tend to use critical discourse analysis (Fairclough, 1992 ), narrative analysis (Czarniawska, 1998 ; Ganzin, Gephart, & Suddaby, 2014 ), rhetorical analysis (Culler, 1982 ; Gephart, 1988 ; McCloskey, 1984 ) and deconstruction (Calais & Smircich, 1991 ; Gephart, 1988 ; Kilduff, 1993 ; Martin, 1990 ) to understand how categories are shaped through language use and come to privilege or subordinate individuals.

Postmodernism challenges models of knowledge production by showing how political discourses produce totalizing categories, showing how categorization is a tool for social control, and attempting to create opportunities for alternative representations of the world. It thus provides a means to uncover and expose discursive features of domination, subordination, and resistance in society (Locke & Golden-Biddle, 2004 ).

An illustrative example of postmodern research . Martin ( 1990 ) deconstructed a conference speech by a company president. The president was so “deeply concerned” about employee well-being and involvement at work that he encouraged a woman manager “to have her Caesarian yesterday” so she could participate in an upcoming product launch. Martin deconstructs the story to reveal the suppression of gender conflict in the dialogue and how this allows gender conflict and subjugation to continue. This research established the existence of important domains of organizational life, such as tacit gender conflict, that have not been adequately addressed and explored the power dynamics therein.

The Organization Development Approach

OD involves a planned and systematic diagnosis and intervention into an organizational system, supported by top management, with the intent of improving the organization’s effectiveness (Beckhard, 1969 ; Palmer, Dunford, & Buchanan, 2017 , p. 282). OD research (termed “clinical research” by Schein, 1987 ) is concerned with changing attitudes and behaviors to instantiate fundamental values in organizations. OD research often follows the general process of action research (Lalonde, 2019 ) that involves working with actors in an organization to help improve the organization. OD research involves a set of stages the OD practitioner (the leader of the intervention) uses: (a) problem identification; (b) consultation between OD practitioner and client; (c) data collection and problem diagnosis; (d) feedback; (e) joint problem diagnosis; (f) joint action planning; (g) change actions; and (h) further data gathering to move recursively to a refined step 1.

An illustrative example of the organization development approach . Numerous OD techniques exist to help organizations change (Palmer et al., 2017 ). The OD approach is illustrated here by the socioeconomic approach to management (SEAM) (Buono & Savall, 2007 ; Savall, 2007 ). SEAM provides a scientific approach to organizational intervention consulting that integrates qualitative information on work practices and employee and customer needs (socio) with quantitative and financial performance measures (economics). The socioeconomic intervention process commences by uncovering dysfunctions that require attention in an organization. SEAM assumes that organizations produce both (a) explicit benefits and costs and (b) hidden benefits and costs. Hidden costs refer to economic implications of organizational dysfunctions (Worley, Zardet, Bonnet, & Savall, 2015 , pp. 28–29). These include problems in working conditions; work organization; communication, co-ordination, and co-operation; time management; integrated training; and strategy implementation (Savall, Zardet, & Bonnet, 2008 , p. 33). Explicit costs are emphasized in management decision-making but hidden costs are ignored. Yet hidden costs from dysfunctions often greatly outstrip explicit costs.

For example, a fishing company sought to protect its market share by reducing the price and quality of products, leading to the purchase of poor-quality fish (Savall et al., 2008 , pp. 31–32). This reduced visible costs by €500,000. However, some customers stopped purchasing because of the lower-quality product, producing a loss of sales of €4,000,000 in revenue or an overall drop in economic performance of €3,500,000. The managers then changed their strategy to focus on health and quality. They implemented the SEAM approach, assessed the negative impact of the hidden costs on value added and revenue received, and purchased higher-quality fish. Visible costs (expenses) increased by €1,000,000 due to the higher cost for a better-quality product, but the improved quality (performance) cut the hidden costs by increasing loyalty and increased sales by €5,000,000 leaving an increased profit of €4,000,000.

SEAM allows organizations to uncover hidden costs in their operations and to convert these costs into value-added human potential through a process termed “qualimetrics.” Qualimetrics assesses the nature of hidden costs and organizational dysfunctions, develops estimates of the frequencies and amounts of hidden costs in specific organizational domains, and develops actions to reduce the hidden costs and thereby release additional value added for the organization (Savall & Zardet, 2011 ). The qualimetric process is participative and involves researchers who use observations, interviews and focus groups of employees to (a) describe, qualitatively, the dysfunctions experienced at work (qualitative data); (b) estimate the frequencies with which dysfunctions occur (quantitative data); and (c) estimate the costs of each dysfunction (financial data). Then, strategic change actions are developed to (a) identify ways to reduce or overcome the dysfunction, (b) estimate how frequently the dysfunction can be remedied, and (c) estimate the overall net costs of removing the hidden costs to enhance value added. The economic balance is then assessed for changes to transform the hidden costs into value added.

OD research creates actionable knowledge from practice (Lalonde, 2019 ). OD intervention consultants use multistep processes to change organizations that are flexible practices not fixed research designs. OD plays an important role in developing evidence-based practices to improve organizational functioning and performance. Worley et al. ( 2015 ) provide a detailed example of the large-scale implementation of the SEAM OD approach in a large, international firm.

Here we discuss implication of qualitative research designs for covert research, reporting qualitative work and novel integrations of qualitative and quantitative work.

Covert Research

University ethics boards require researchers who undertake research with human participants to obtain informed consent from the participants. Consent requires that all participants must be informed of details of the research procedure in which they will be involved and any risks of participation. Researchers must protect subjects’ identities, offer safeguards to limit risks, and insure informant anonymity. This consent must be obtained in the form of a signed agreement from the participant, obtained prior to the commencement of research observations (McCurdy et al., 2005 , pp. 29–32).

Covert research that fails to fully disclose research purposes or practices to participants, or that is otherwise deceptive by design or tacit practice, has long been considered “suspect” in the field (Graham, 1995 ; Roulet, Gill, Stenger, & Gill, 2017 ). This is changing. Research methodologists have shown that the over/covert dimension is a continuum, not a dichotomy, and that unintended covert elements occur in many situations (Roulet et al., 2017 ). Thus all qualitative observation involves some degree of deception due practical constraints on doing observations since it is difficult to do fully overt research, particularly in observational contexts with many people, and to gain advance consent from everyone in the organization one might encounter.

There are compelling benefits to covert research. It can provide insights not possible if subjects are fully informed of the nature or existence of the research. For example, the year-long, covert observational study of an asylum as a “total institution” (Goffman, 1961 ) showed how ineffective the treatment of mental illness was at the time. This opened the field of mental health to social science research (Roulet et al., 2017 , p. 493). Covert research can also provide access to institutions that researchers would otherwise be excluded from, including secretive and secret organizations (p. 492). This could allow researchers to collect data as an insider and to better see and experience the world from members’ perspective. It could also reduce “researcher demand effects” that occur when informants obscure their normal behavior to conform to research expectations. Thus, the inclusion of covert research data collection in research designs and proposals is an emerging trend and realistic possibility. Ethics applications can be developed that allow for aspects of covert research, and observations in many public settings do not require informed consent.

The Appropriate Style for Reporting Qualitative Work

The appropriate style for reporting qualitative research has become an issue of concern. For example, editors of the influential Academy of Management Journal have noted the emergence of an “AMJ style” for qualitative work (Bansal & Corley, 2011 , p. 234). They suggest that all qualitative work should use this style so that qualitative research can “benefit” from: “decades of refinement in the style of quantitative work.” The argument is that most scholars can assess the empirical and theoretical contributions of quantitative work but find it difficult to do so for qualitative research. It is easier for quantitatively trained editors and scholars “to spot the contribution of qualitative work that mimics the style of quantitative research.” Further, “the majority of papers submitted to . . . AMJ tend to subscribe to the paradigm of normal science that aims to find relationships among valid constructs that can be replicated by anyone” (Bansal, Smith, & Vaara, 2018 , p. 1193). These recommendations appear to explicitly encourage the reporting of qualitative results as if they were quantitatively produced and interpreted and highlights the advantage of conformity to the prevailing positivist perspective to gain publication in AMJ.

Yet AMJ editors have also called for researchers to “ensure that the research questions, data, and analysis are internally consistent ” (Bansal et al., 2018 , p. 1193) and to “Be authentic , detailed and clear in argumentation” (emphasis added) (Bansal et al., 2018 , p. 1193). These calls for consistency appear to be inconsistent with suggestions to present all qualitative research using a style that mimics quantitative, positivist research. Adopting the quantitative or positivist style for all qualitative reports may also confuse scholars, limit research quality, and hamper efforts to produce innovative, non-positivist research. This article provides six qualitative research designs to ensure a range of qualitative research publications are internally consistent in methods, logics, paradigmatic commitments, and writing styles. These designs provide alternatives to positivist mimicry in non-positivist scholarly texts.

Integrating Qualitative and Quantitative Research in New Ways

Qualitative research often omits consideration of the naturally occurring uses of numbers and statistics in everyday discourse. And quantitative researchers tend to ignore qualitative evidence such as stories and discourse. Yet knowledge production processes in society “rely on experts and laypeople and, in so doing, make use of both statistics and stories in their attempt to represent and understand social reality” (Ainsworth & Hardy, 2012 , p. 1649). Numbers and statistics are often used in stories to create legitimacy, and stories provide meaning to numbers (Gephart, 1988 ). Hence stories and statistics cannot be separated in processes of knowledge production (Ainsworth & Hardy, 2012 , p. 1697). The lack of attention to the role of quantification in everyday life means a huge domain of organizational discourse—all talk that uses numbers, quantities, and statistics—is largely unexplored in organizational research.

Qualitative research has, however, begun to study how words and numbers are mutually used for organizational storytelling (Ainsworth & Hardy, 2012 ; Gephart, 2016 ). This focus offers the opportunity to develop research designs to explore qualitative features and processes involved in quantitative phenomena such as financial crises (Gephart, 2016 ), to address how stories and numbers need to work together to create legitimate knowledge (Ainsworth & Hardy, 2012 ), and to show how statistics are used rhetorically to convince others of truths in organizational research (Gephart, 1988 ).

Ethnostatistics (Gephart, 1988 ; Gephart & Saylors, 2019 ) provides one example of how to integrate qualitative and quantitative research. Ethnostatistics examines how statistics are constructed and used by professionals. It explores how statistics are constructed in real settings, how violations of technical assumptions impact statistical outcomes, and how statistics are used rhetorically to convince others of the truth of research outcomes. Ethnostatistics has been used to reinterpret data from four celebrated network studies that themselves were reanalyzed (Kilduff & Oh, 2006 ). The ethnostatistical reanalyses revealed how ad hoc practices, including judgment calls and the imputation of new data into old data set for reanalysis, transformed the focus of network research from diffusion models to structural equivalence models.

Another innovative study uses a Bayesian ethnostatistical approach to understand how the pressure to produce sophisticated and increasingly complex theoretical narratives for causal models has impacted the quantitative knowledge generated in top journals (Saylors & Trafimow, 2020 ). The use of complex causal models has increased substantially over time due to a qualitative and untested belief that complex models are true. Yet statistically speaking, as the number of variables in a model increase, the likelihood the model is true rapidly decreases (Saylors & Trafimow, 2020 , p. 3).

The authors test the previously untested (qualitative) belief that complex causal models can be true. They found that “the joint probability of a six variable model is about 3.5%” (Saylors & Trafimow, 2020 , p. 1). They conclude that “much of the knowledge generated in top journals is likely false” hence “not reporting a (prior) belief in a complex model” should be relegated to the set of questionable research practices. This study shows how qualitative research that explores the lay theories and beliefs of statisticians and quantitative researchers can challenge and disrupt conventions in quantitative research, improve quantitative practices, and contribute qualitative foundations to quantitative research. Ethnostatistics thus opens the qualitative foundations of quantitative research to critical qualitative analyses.

The six qualitative research design processes discussed in this article are evident in scholarly research on organizations and management and provide distinct qualitative research designs and approaches to use. Qualitative research can provide research insights from several theoretical perspectives, using well-developed methods to produce scientific and scholarly insights into management and organizations. These approaches and designs can also inform management practice by creating actionable knowledge. The intended contribution of this article is to describe these well-developed methods, articulate key practices, and display core research designs. The hope is both to better equip researchers to do qualitative research, and to inspire them to do so.

Acknowledgments

The authors wish to acknowledge the assistance of Karen Lund at The University of Alberta for carefully preparing Figure 1 . Thanks also to Beverly Zubot for close reading of the manuscript and helpful suggestions.

  • Adler, P. A. , & Adler, P. (1988). Intense loyalty in organizations: A case study of college athletics. Administrative Science Quarterly , 401–417.
  • Adler, P. A. , & Adler, P. (1998). Intense loyalty in organizations: A case study of college athletics. In J. Van Maanen (Ed.), Qualitative studies of organizations (pp. 31–50). Thousand Oaks, CA: SAGE.
  • Adler, P. S. , Forbes, L. C. , & Willmott, H. (2007). Critical management studies. Academy of Management Annals , 1 (1), 119–180.
  • Agar, M. H. (1980). The professional stranger: An informal introduction to ethnography . New York, NY: Academic Press.
  • Ainsworth, S. , & Hardy, C. (2012). Subjects of inquiry: Statistics, stories, and the production of knowledge. Organization Studies , 33 (12), 1693–1714.
  • Akeson, C. (2005). Getting the truth: The police detective and the art of interviewing. In D. W. McCurdy , J. P. Spradley , & D. J. Shandy (Eds.), The cultural experience: Ethnography in complex society (2nd ed., pp. 103–111). Long Grove, IL: Waveland Press.
  • Avenier, M.-J. , & Thomas, C. (2015). Finding one’s way around various methodological guidelines for doing rigorous case studies: A comparison of four epistemological frameworks. Systèmes d’information & Management , 20 (1), 61–98.
  • Bansal, P. , & Corley, K. (2011). From the editors—The coming of age for qualitative research: Embracing the diversity of qualitative methods. Academy of Management Journal , 54 (2), 233–237.
  • Bansal, P. , Smith, W. K. , & Vaara, E. (2018). From the editors: New ways of seeing through qualitative research. Academy of Management Journal , 61 (4), 1189–1195.
  • Barker, J. R. (1993). Tightening the iron cage: Concertive control in self-managing teams. Administrative Science Quarterly , 38 (3), 408–437.
  • Barker, J. (1998). Tightening the iron cage: Concertive control in self-managing teams. In J. Van Maanen (Ed.), Qualitative studies of organizations (pp. 126–158). Thousand Oaks, CA: SAGE.
  • Barley, S. R. (2015). Why the internet makes buying a car less loathsome: How technologies change role relations. Academy of Management Discoveries , 1 (1), 31–60.
  • Beckhard, R. (1969). Organization development: Strategies and models . Boston, MA: Addison-Wesley.
  • Berger, P. L. , & Luckmann, T. (1966). The social construction of reality: A treatise in the sociology of knowledge . New York, NY: Doubleday.
  • Bhaskar, R. (1978). A realist theory of science . Sussex: Harvester Press; Atlantic Highlands, NJ: Humanities Press.
  • Bitektine, A. (2008). Prospective case study design: Qualitative method for deductive theory testing. Organizational Research Methods , 11 (1), 160–180.
  • Blaikie, N. (1993). Approaches to social enquiry (1st ed.). Cambridge, UK: Polity Press.
  • Blaikie, N. (2010). Designing social research: The logic of anticipation (2nd ed.). Cambridge, UK: Polity Press.
  • Boje, D. M. (2001). Narrative methods for organizational and communication research . Thousand Oaks, CA: SAGE.
  • Boje, D. M. (2008). Storytelling organizations . Thousand Oaks, CA: SAGE.
  • Boje, D. M. , Gephart, R. P., Jr. , & Thatchenkery, T. J. (Eds.). (1996). Postmodern management and organization theory . Thousand Oaks, CA: SAGE.
  • Boje, D. M. , & Saylors, R. (2014). Quantum storytelling: An ontological perspective on process. In F. Cooren , E. Vaara , A. Langley , & H. Tsoukas (Eds.), Language and communication at work: Discourse, narrativity, and organizing (pp. 197–217). Oxford, UK: Oxford University Press.
  • Buono, A. F. , & Savall, H. (Eds.). (2007). Socio-economic intervention in organizations: The intervener-researcher and the SEAM approach to organizational analysis . Charlotte, NC: Information Age.
  • Burrell, G. , & Morgan, G. (1979). Sociological paradigms and organisational analysis: Elements of the sociology of corporate life . London, UK: Heinemann.
  • Calás, M. B. (1987). Organizational science/fiction: The postmodern in the management disciplines (Unpublished doctoral dissertation). University of Massachusetts, Amherst.
  • Calás, M. B. , & Smircich, L. (1991). Voicing seduction to silence leadership. Organization Studies , 12 (4), 567–601.
  • Campbell, D. T. (1975). Degrees of freedom and the case study. Comparative Political Studies , 8 (2), 178–193.
  • Cicourel, A. V. (1980). Three models of discourse analysis: The role of social structure. Discourse Processes , 3 (2), 101–132.
  • Cole, R. E. (1985). The macropolitics of organizational change: A comparative analysis of the spread of small-group activities. Administrative Science Quarterly , 30 (4), 560–585.
  • Coulon, A. (1995). Ethnomethodology . Thousand Oaks, CA: SAGE.
  • Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). Thousand Oaks, CA: SAGE.
  • Culler, J. (1982). On deconstruction: Theory and criticism after structuralism . Ithaca, NY: Cornell University Press.
  • Cummings, T. G. , & Worley, C. G. (2015). Organization development and change (10th ed.). Stamford, CT: Cengage Learning.
  • Cunliffe, A. L. (2010). Retelling tales of the field: In search of organizational ethnography 20 years on. Organizational Research Methods , 13 (2), 224–239.
  • Cunliffe, A. L. (2011). Crafting qualitative research: Morgan and Smircich 30 years on. Organizational Research Methods , 14 (4), 647–673.
  • Czarniawska, B. (1998). A narrative approach to organization studies . Thousand Oaks, CA: SAGE.
  • de Rond, M. , & Tuncalp, D. (2017). Where the wild things are: How dreams can help identify countertransference in organizational research. Organizational Research Methods , 20 (3), 413–437.
  • Denzin, N. K. , & Lincoln, Y. S. (1994). Introduction: Entering the field of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (1st ed., pp. 1–7). Thousand Oaks, CA: SAGE.
  • Dunne, D. D. , & Dougherty, D. (2016). Abductive reasoning: How innovators navigate in the labyrinth of complex innovation. Organization Studies , 37 (2), 131–159.
  • Duriau, V. J. , Reger, R. K. , & Pfarrer, M. D. (2007). A content analysis of the content analysis literature in organization studies: Research themes, data sources, and methodological refinements. Organizational Research Methods , 10 (1), 5–34.
  • Dutton, J. E. , Workman, K. M. , & Hardin, A. E. (2014). Compassion at work. Annual Review of Organizational Psychology and Organizational Behavior , 1 (1), 277–304.
  • Easterby-Smith, M. , Thorpe, R. , & Jackson, P. (2012). Management research (4th ed.). Thousand Oaks, CA: SAGE.
  • Fairclough, N. (1992). Discourse and social change . Cambridge, UK: Polity Press.
  • Ganzin, M. , Gephart, R. P., Jr. , & Suddaby, R. (2014). Narrative and the construction of myths in organizations. In F. Cooren , E. Vaara , A. Langley , & H. Tsoukas (Eds.), Language and communication at work: Discourse, narrativity, and organizing (pp. 219–260). Oxford, UK: Oxford University Press.
  • Garfinkel, H. (1964). Studies of the routine grounds of everyday activities. Social Problems , 11 (3), 225–250.
  • Garfinkel, H. (1967). Studies in ethnomethodology . Englewood Cliffs, NJ: Prentice-Hall.
  • Gephart, R. P., Jr. (1978). Status degradation and organizational succession: An ethnomethodological approach. Administrative Science Quarterly , 23 (4), 553–581.
  • Gephart, R. P. (1986). Deconstructing the defense for quantification in social science: A content analysis of journal articles on the parametric strategy. Qualitative Sociology , 9 (2), 126–144.
  • Gephart, R. P., Jr. (1988). Ethnostatistics: Qualitative foundations for quantitative research . Thousand Oaks, CA: SAGE.
  • Gephart, R. P., Jr. (1993). The textual approach: Risk and blame in disaster sensemaking. Academy of Management Journal , 36 (6), 1465–1514.
  • Gephart, R. P., Jr. (1997). Hazardous measures: An interpretive textual analysis of quantitative sensemaking during crises. Journal of Organizational Behavior , 18 (S1), 583–622.
  • Gephart, R. P., Jr. (2004). From the editors: Qualitative research and the Academy of Management Journal . Academy of Management Journal , 47 (4), 454–462.
  • Gephart, R. P., Jr. (2013). Doing research with words: Qualitative methodologies and industrial/organizational psychology. In J. M. Cortina & R. S. Landis (Eds.), Modern research methods for the study of behavior in organizations (pp. 265–317). New York, NY: Routledge.
  • Gephart, R. P., Jr. (2016). Counter-narration with numbers: Understanding the interplay of words and numerals in fiscal storytelling. European Journal of Cross-Cultural Competence and Management , 4 (1), 21–40.
  • Gephart, R. P., Jr. (2018). Qualitative research as interpretive social science. In C. Cassell , A. L. Cunliffe , & G. Grandy (Eds.), The SAGE handbook of qualitative business and management research methods: History and traditions (pp. 33–53). Thousand Oaks, CA: SAGE.
  • Gephart, R. P., Jr. , & Kulicki, M. (2008). Environmental ethics and business: Toward a Habermasian perspective. In D. M. Boje (Ed.), Critical theory ethics for business and public administration (pp. 373–393). Charlotte, NC: Information Age.
  • Gephart, R. P., Jr. , & Pitter, R. (1995). Textual analysis in technology research: An investigation of the management of technology risk. Technology Studies , 2 (2), 325–354.
  • Gephart, R. P., Jr. , & Richardson, J. (2008). Qualitative research methodologies and international human resource management. In M. M. Harris (Ed.), Handbook of research in international human resource management (pp. 29–52). New York, NY: Erlbaum.
  • Gephart, R. P., Jr. , & Saylors, R. (2019). Ethnostatistics . In P. Atkinson , S. Delamont , A. Cernat , J. W. Sakshaug , & R. A. Williams (Eds.), SAGE research methods foundations . Thousand Oaks, CA: SAGE.
  • Gephart, R. P., Jr. , Topal, C. , & Zhang, Z. (2010). Future-oriented sensemaking: Temporalities and institutional legitimation. In T. Hernes & S. Maitlis (Eds.), Process, sensemaking, & organizing (pp. 275–311). Oxford, UK: Oxford University Press.
  • Gibbert, M. , & Ruigrok, W. (2010). The “what” and “how” of case study rigor: Three strategies based on published work. Organizational Research Methods , 13 (4), 710–737.
  • Gill, G. J. (2017). Dynamics of democratization: Elites, civil society and the transition process . New York, NY: Macmillan International Higher Education.
  • Gioia, D. A. , Corley, K. G. , & Hamilton, A. L. (2012). Seeking qualitative rigor in inductive research: Notes on the Gioia methodology. Organizational Research Methods , 16 (1), 15–31.
  • Glaser, B. G. , & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research . Chicago, IL: Aldine.
  • Goffman, E. (1961). Asylums: Essays on the social situation of mental patients and other inmates . New York, NY: Anchor Books.
  • Graham, L. (1995). On the line at Subaru-Isuzu: The Japanese model and the American worker . Ithaca, NY: Cornell University Press.
  • Greckhamer, T. , Misangyi, V. F. , Elms, H. , & Lacey, R. (2008). Using qualitative comparative analysis in strategic management research: An examination of combinations of industry, corporate, and business-unit effects. Organizational Research Methods , 11 (4), 695–726.
  • Greenwood, M. , Jack, G. , & Haylock, B. (2019). Toward a methodology for analyzing visual rhetoric in corporate reports. Organizational Research Methods , 22 (3), 798–827.
  • Habermas, J. (1973). Legitimation crisis ( T. McCarthy , Trans.). Frankfurt, Germany: Suhrkamp Verlag.
  • Habermas, J. (1979). Communication and the evolution of society ( T. McCarthy , Trans.). Boston, MA: Beacon Press.
  • Hansen, H. , & Trank, C. Q. (2016). This is going to hurt: Compassionate research methods. Organizational Research Methods , 19 (3), 352–375.
  • Hassard, J. (1993). Sociology and organization theory: Positivism, paradigms and postmodernity . Cambridge, UK: Cambridge University Press.
  • Hempel, C. (1966). Philosophy of natural science . Upper Saddle River, NJ: Prentice-Hall.
  • Hochschild, A. R. (1997). The time bind: When work becomes home and home becomes work . New York, NY: Metropolitan Books.
  • Hodder, I. (1994). The interpretation of documents and material culture. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (1st ed., pp. 393–402). Thousand Oaks, CA: SAGE.
  • Holstein, J. A. , & Gubrium, J. F. (1995). The active interview . Thousand Oaks, CA: SAGE.
  • Jameson, F. (1991). Postmodernism, or, the cultural logic of late capitalism . Durham, NC: Duke University Press.
  • Kelle, U. (Ed.). (1995). Computer-aided qualitative data analysis: Theory, methods and practice . London, UK: SAGE.
  • Kilduff, M. (1993). Deconstructing organizations. Academy of Management Review , 18 (1), 13–31.
  • Kilduff, M. , & Oh, H. (2006). Deconstructing diffusion: An ethnostatistical examination of Medical Innovation network data reanalyses. Organizational Research Methods , 9 (4), 432–455.
  • Kirk, J. , & Miller, M. L. (1986). Reliability and validity in qualitative research . Beverly Hills, CA: SAGE.
  • Lalonde, C. (2019). The development of actionable knowledge in crisis management. In R. P. Gephart, Jr. , C. C. Miller , & K. Svedberg Helgesson (Eds.), The Routledge companion to risk, crisis and emergency management (pp. 431–446). New York, NY: Routledge.
  • LeBaron, C. , Jarzabkowski, P. , Pratt, M. G. , & Fetzer, G. (2018). An introduction to video methods in organizational research. Organizational Research Methods , 21 (2), 239–260.
  • Lee, T. W. , & Mitchell, T. R. (1994). An alternative approach: The unfolding model of voluntary employee turnover. Academy of Management Review , 19 (1), 51–89.
  • Lee, T. W. , Mitchell, T. R. , & Sablynski, C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999. Journal of Vocational Behavior , 55 (2), 161–187.
  • Lee, T. W. , Mitchell, T. R. , Wise, L. , & Fireman, S. (1996). An unfolding model of voluntary employee turnover. Academy of Management Journal , 39 (1), 5–36.
  • Locke, K. (2001). Grounded theory in management research . Thousand Oaks, CA: SAGE.
  • Locke, K. (2002). The grounded theory approach to qualitative research. In F. Drasgow & N. Schmitt (Eds.), Measuring and analyzing behavior in organizations: Advances in measurement and data analysis (pp. 17–43). San Francisco, CA: Jossey-Bass.
  • Locke, K. , & Golden-Biddle, K. (2004). An introduction to qualitative research: Its potential for industrial and organizational psychology. In S. G. Rogelberg (Ed.), Handbook of research methods in industrial and organizational psychology (pp. 99–118). Oxford, UK: Blackwell.
  • Malterud, K. (2001). Qualitative research: Standards, challenges, and guidelines. The Lancet , 358 (9280), 483–488.
  • Maniha, J. , & Perrow, C. (1965). The reluctant organization and the aggressive environment. Administrative Science Quarterly , 10 (2), 238–257.
  • Martin, J. (1990). Deconstructing organizational taboos: The suppression of gender conflict in organizations. Organization Science , 1 (4), 339–359.
  • McCall, G. J. , & Simmons, J. L. (1969). (Eds.). Issues in participant observation: A text and reader . Reading, MA: Addison-Wesley.
  • McCloskey, H. J. (1984). Ecological ethics and politics .Lanham, MD: Rowman and Littlefield
  • McCloskey, D. N. (1985). The rhetoric of economics (1st ed.). Madison: University of Wisconsin Press.
  • McCracken, G. (1988). The long interview . Thousand Oaks, CA: SAGE.
  • McCurdy, D. , Spradley, J. , & Shandy, D. (2005). The cultural experience: Ethnography in complex society (2nd ed.). Long Grove, IL: Waveland Press.
  • McLeod, S. A. (2014, February 5). The interview research method . Simply Psychology.
  • Mills, A. J. , Durepos, G. , & Wiebe, E. (Eds.). (2010). Encyclopedia of case study research (Vol. 1). Thousand Oaks, CA: SAGE.
  • Morgan, D. L. (1997). Focus groups as qualitative research (2nd ed.). Thousand Oaks, CA: SAGE.
  • Morrow, R. A. (with Brown, D. D. ). (1994). Critical theory and methodology (1st ed.). Thousand Oaks, CA: SAGE.
  • Offe, C. (1984). Das Wachstum der Dienstleistungsarbeit: Vier soziologische Erklärungsansätze. In C. Offe (Ed.), Arbeitsgeselleschaft. Strukturprobleme und Zukunftsperspektiven (pp. 291–319). Frankfurt am Main, Germany: Campus Verlag.
  • Offe, C. (1985). New social movements: Challenging the boundaries of institutional politics. Social Research , 52 (4), 817–868.
  • Palmer, I. , Dunford, R. , & Buchanan, D. A. (2017). Managing organizational change: A multiple perspectives approach (3rd ed.). New York, NY: McGraw-Hill Education.
  • Perlow, L. A. (1997). Finding time: How corporations, individuals, and families can benefit from new work practices . Ithaca, NY: Cornell University Press.
  • Piekkari, R. , & Welch, C. (2012). Pluralism in international business and international management research: Making the case. In R. Piekkari & C. Welch (Eds.), Rethinking the case study in international business and management research (pp. 3–23). Cheltenham, UK: Edward Elgar.
  • Peirce, C. S. (1936). The Collected Papers . Eds. Charles Hartshorne and Paul Weiss . Cambridge M.A.: Harvard University Press.
  • Pollach, I. (2012). Taming textual data: The contribution of corpus linguistics to computer-aided text analysis. Organizational Research Methods , 15 (2), 263–287.
  • Poutanen, S. , & Kovalainen, A. (2010). Critical theory. In A. J. Mills , G. Durepos , & E. Wiebe (Eds.), Encyclopedia of case study research (Vol. 1, pp. 260–264). Thousand Oaks, CA: SAGE.
  • Pratt, M. G. (2009). For the lack of a boilerplate: Tips on writing up (and reviewing) qualitative research. Academy of Management Journal , 52 (5), 856–862.
  • Ray, J. L. , & Smith, A. D. (2012). Using photographs to research organizations: Evidence, considerations, and application in a field study. Organizational Research Methods , 15 (2), 288–315.
  • Richardson, J. , & McKenna, S. (2000). Metaphorical “types” and human resource management: Self-selecting expatriates. Industrial and Commercial Training , 32 (6), 209–218.
  • Rodriguez, N. , & Ryave, A. (2002). Systematic self-observation . Thousand Oaks, CA: SAGE.
  • Rodriguez, N. , Ryave, A. , & Tracewell, J. (1998). Withholding compliments in everyday life and the covert management of disaffiliation. Journal of Contemporary Ethnography , 27 (3), 323–345.
  • Rosenau, P. M. (1992). Post-modernism and the social sciences: Insights, inroads, and intrusions . Princeton, NJ: Princeton University Press.
  • Rosile, G. A. , Boje, D. M. , Carlon, D. M. , Downs, A. , & Saylors, R. (2013). Storytelling diamond: An antenarrative integration of the six facets of storytelling in organization research design. Organizational Research Methods , 16 (4), 557–580.
  • Roulet, T. J. , Gill, M. J. , Stenger, S. , & Gill, D. J. (2017). Reconsidering the value of covert research: The role of ambiguous consent in participant observation. Organizational Research Methods , 20 (3), 487–517.
  • Rynes, S. L. , Bretz, R. D. , & Gerhart, B. (1991). The importance of recruitment in job choice: A different way of looking. Personnel Psychology , 44 (3), 487–521.
  • Savall, H. (2007). ISEOR’s socio-economic method: A case of scientific consultancy. In A. F. Buono & H. Savall (Eds.), Socio-economic intervention in organizations: The intervener-researcher and the SEAM approach to organizational analysis (pp. 1–31). Charlotte, NC: Information Age.
  • Savall, H. , & Zardet, V. (2011). The qualimetrics approach: Observing the complex object . Charlotte, NC: Information Age.
  • Savall, H. , Zardet, V. , & Bonnet, M. (2008). Releasing the untapped potential of enterprises through socio-economic management . Geneva, Switzerland: International Labour Organization.
  • Saylors, R. , & Trafimow, D. (2020). Why the increasing use of complex causal models is a problem: On the danger sophisticated theoretical narratives pose to truth . Organizational Research Methods , 1094428119893452.
  • Schein, E. H. (1987). The clinical perspective in fieldwork . Thousand Oaks, CA: SAGE.
  • Schutz, A. (1973a). Common-sense and scientific interpretation of human action. In M. Natanson (Ed.), Collected papers, volume I: The problem of social reality (pp. 3–47). The Hague, The Netherlands: Martinus Nijhoff.
  • Schutz, A. (1973b). Concept and theory formation in the social sciences. In M. Natanson (Ed.), Collected papers, volume I: The problem of social reality (pp. 48–66). The Hague, The Netherlands: Martinus Nijhoff.
  • Silverman, D. (2014). Interpreting qualitative data (5th ed.). Thousand Oaks, CA: SAGE.
  • Smith, A. (2015). What grounded theory is . . . Organizational Research Methods , 18 (4), 578–580.
  • Sonpar, K. , & Golden-Biddle, K. (2008). Using content analysis to elaborate adolescent theories of organization. Organizational Research Methods , 11 (4), 795–814.
  • Spradley, J. P. (1979). The ethnographic interview . Belmont, CA: Wadsworth.
  • Spradley, J. P. (2016). Participant observation . Long Grove, IL: Waveland Press.
  • Stake, R. E. (2005). Qualitative case studies. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (3rd ed., pp. 443–466). Thousand Oaks, CA: SAGE.
  • Su, N. (2018). Positivist qualitative methods. In C. Cassell , A. L. Cunliffe , & G. Grandy (Eds.), The SAGE handbook of qualitative business and management research methods: History and traditions (pp. 17–32). Thousand Oaks, CA: SAGE.
  • Van Maanen, J. (1973). Observations on the making of policemen. Human Organization , 32 (4), 407–418.
  • Van Maanen, J. (1981, March). Fieldwork on the beat: An informal introduction to organizational ethnography . Paper presented at the workshop on Innovations in Methodology for Organizational Research, Center for Creative Leadership, Greensboro, NC.
  • Van Maanen, J. (1988). Tales of the field: On writing ethnography . Chicago, IL: University of Chicago Press.
  • Van Maanen, J. (1998). Different strokes: Qualitative research on the Administrative Science Quarterly from 1956 to 1996. In J. Van Maanen (Ed.), Qualitative studies of organizations (pp. ix–xxxii). Thousand Oaks, CA: SAGE.
  • Van Maanen, J. (2010). A song for my supper: More tales of the field. Organizational Research Methods , 13 (2), 240–255.
  • Walsh, I. , Holton, J. A. , Bailyn, L. , Fernandez, W. , Levina, N. , & Glaser, B. (2015). What grounded theory is . . . A critically reflective conversation among scholars. Organizational Research Methods , 18 (4), 581–599.
  • Watson, T. J. (1994) . In search of management: Culture, chaos, and control in managerial work . London, UK: Routledge.
  • Weber, M. (1978). Economy and society: An outline of interpretive sociology . Berkeley: University of California Press.
  • Whittle, A. (2018). Ethnomethodology. In C. Cassell , A. L. Cunliffe , & G. Grandy (Eds.), The SAGE handbook of qualitative business and management research methods: History and traditions (pp. 217–232). Thousand Oaks, CA: SAGE.
  • Worley, C. G. , Zardet, V. , Bonnet, M. , & Savall, A. (2015). Becoming agile: How the SEAM approach to management builds adaptability . Hoboken, NJ: Jossey-Bass.
  • Yan, A. , & Gray, B. (1994). Bargaining power, management control, and performance in United States-China joint ventures: A comparative case study. Academy of Management Journal , 37 (6), 1478–1517.
  • Ybema, S. , Yanow, D. , Wels, H. , & Kamsteeg, F. (2009). Organizational ethnography: Studying the complexity of everyday life . Thousand Oaks, CA: SAGE.

1. The fourth logic is retroduction. This refers to the process of building hypothetical models of structures and mechanisms that are assumed to produce empirical phenomena. It is the primary logic used in the critical realist approach to scientific research (Avenier & Thomas, 2015 ; Bhaskar, 1978 ). Retroduction requires the use of inductive or abductive strategies to discover the mechanisms that explain regularities (Blaikie, 2010 , p. 87). There is no evident logic for discovering mechanisms and this requires disciplined scientific thinking aided by creative imagination, intuition, and guesswork (Blaikie, 2010 ). Retroduction is likr deduction in asking “what” questions and differs from abduction because it produces explanations rather than understanding, causes rather than reasons, and hypothetical conceptual mechanisms rather than descriptions of behavioral processes as outcomes. Retroduction is becoming important in the field but has not as yet been extensively used in management and organization studies (for examples of uses, see Avenier & Thomas, 2015 ); hence, we do not address it at length in this article.

Related Articles

  • Agency Theory in Business and Management Research
  • Assessing the State of Top Management Teams Research
  • Case Study Research: A State-of-the-Art Perspective
  • Advances in Team Creativity Research

Printed from Oxford Research Encyclopedias, Business and Management. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 10 June 2024

  • Cookie Policy
  • Privacy Policy
  • Legal Notice
  • Accessibility
  • [81.177.182.136]
  • 81.177.182.136

Character limit 500 /500

Current Selection: All SSRN Networks Modify

To use this feature, you must allow cookies or be signed in.

MEMBER SIGN IN

First-time user? Free Registration

USER ID: 
PASSWORD: 

Forgot ID or Password? | Contact Us

Impossible? Let’s see.

Whether we're shaping the future of sustainability, or optimizing algorithms, or even exploring epidemiological studies, Google Research strives to continuously progress science, advance society, and improve the lives of billions of people.

Person looking up at screen

Advancing the state of the art

Our teams advance the state of the art through research, systems engineering, and collaboration across Google. We publish hundreds of research papers each year across a wide range of domains, sharing our latest developments in order to collaboratively progress computing and science.

Learn more about our philosophy.

Watch the film

Link to Youtube Video

Read the latest

GRatIO2024-1-logo

May 24 · BLOG

PrivateSyntheticData-0-Hero

MAY 16 · BLOG

Med-Gemini-0-Hero

MAY 15 · BLOG

Model-explorer-hero

MAY 14 · BLOG

Connectomics2024-1a-ExcitatoryNeurons

MAY 02 · BLOG

Scaling-hierarchical-clustering-hero

MAY 01 · BLOG

Our research drives real-world change

MedPalm2

Improving our LLM designed for the medical domain

  • Large language models encode clinical knowledge Publication
  • Towards Expert-Level Medical Question Answering with Large Language Models Publication
  • Our latest health AI research updates Article
  • Med-PaLM 2, our expert-level medical LLM Video

Project Contrails

Project Contrails

A cost-effective and scalable way AI is helping to mitigate aviation’s climate impact

  • A human-labeled Landsat-8 contrails dataset Dataset
  • Can Google AI make flying more sustainable? Video
  • Estimates of broadband upwelling irradiance fromm GOES-16 ABI Publication
  • How AI is helping airlines mitigate the climate impact of contrails Blog

See our impact across other projects

open building

Open Buildings

Project Relate

Project Relate

Flood Forcasting

Flood Forecasting

We work across domains

Our vast breadth of work covers AI/ML foundations, responsible human-centric technology, science & societal impact, computing paradigms, and algorithms & optimization. Our research teams impact technology used by people all over the world.

One research paper started it all

The research we do today becomes the Google of the future. Google itself began with a research paper, published in 1998, and was the foundation of Google Search. Our ongoing research over the past 25 years has transformed not only the company, but how people are able to interact with the world and its information.

Legacy

Responsible research is at the heart of what we do

The impact we create from our research has the potential to reach billions of people. That's why everything we do is guided by methodology that is grounded in responsible practices and thorough consideration.

responsible-ai

Help us shape the future

Academic community

We've been working alongside the academic research community since day one. Explore the ways that we collaborate and provide resources and support through a variety of student and faculty programs.

Career Opportunities

From Accra to Zürich, to our home base in Mountain View, we’re looking for talented scientists, engineers, interns, and more to join our teams not only at Google Research but all research projects across Google.

Explore our other teams and product areas

Google Cloud

Google DeepMind

LABS.GOOGLE

The economic potential of generative AI: The next productivity frontier

research paper in business sciences

AI has permeated our lives incrementally, through everything from the tech powering our smartphones to autonomous-driving features on cars to the tools retailers use to surprise and delight consumers. As a result, its progress has been almost imperceptible. Clear milestones, such as when AlphaGo, an AI-based program developed by DeepMind, defeated a world champion Go player in 2016, were celebrated but then quickly faded from the public’s consciousness.

Generative AI applications such as ChatGPT, GitHub Copilot, Stable Diffusion, and others have captured the imagination of people around the world in a way AlphaGo did not, thanks to their broad utility—almost anyone can use them to communicate and create—and preternatural ability to have a conversation with a user. The latest generative AI applications can perform a range of routine tasks, such as the reorganization and classification of data. But it is their ability to write text, compose music, and create digital art that has garnered headlines and persuaded consumers and households to experiment on their own. As a result, a broader set of stakeholders are grappling with generative AI’s impact on business and society but without much context to help them make sense of it.

About the authors

This article is a collaborative effort by Michael Chui , Eric Hazan , Roger Roberts , Alex Singla , Kate Smaje , Alex Sukharevsky , Lareina Yee , and Rodney Zemmel , representing views from QuantumBlack, AI by McKinsey; McKinsey Digital; the McKinsey Technology Council; the McKinsey Global Institute; and McKinsey’s Growth, Marketing & Sales Practice.

The speed at which generative AI technology is developing isn’t making this task any easier. ChatGPT was released in November 2022. Four months later, OpenAI released a new large language model, or LLM, called GPT-4 with markedly improved capabilities. 1 “Introducing ChatGPT,” OpenAI, November 30, 2022; “GPT-4 is OpenAI’s most advanced system, producing safer and more useful responses,” OpenAI, accessed June 1, 2023. Similarly, by May 2023, Anthropic’s generative AI, Claude, was able to process 100,000 tokens of text, equal to about 75,000 words in a minute—the length of the average novel—compared with roughly 9,000 tokens when it was introduced in March 2023. 2 “Introducing Claude,” Anthropic PBC, March 14, 2023; “Introducing 100K Context Windows,” Anthropic PBC, May 11, 2023. And in May 2023, Google announced several new features powered by generative AI, including Search Generative Experience and a new LLM called PaLM 2 that will power its Bard chatbot, among other Google products. 3 Emma Roth, “The nine biggest announcements from Google I/O 2023,” The Verge , May 10, 2023.

To grasp what lies ahead requires an understanding of the breakthroughs that have enabled the rise of generative AI, which were decades in the making. For the purposes of this report, we define generative AI as applications typically built using foundation models. These models contain expansive artificial neural networks inspired by the billions of neurons connected in the human brain. Foundation models are part of what is called deep learning, a term that alludes to the many deep layers within neural networks. Deep learning has powered many of the recent advances in AI, but the foundation models powering generative AI applications are a step-change evolution within deep learning. Unlike previous deep learning models, they can process extremely large and varied sets of unstructured data and perform more than one task.

Never just tech

Creating value beyond the hype

Let’s deliver on the promise of technology from strategy to scale.

Foundation models have enabled new capabilities and vastly improved existing ones across a broad range of modalities, including images, video, audio, and computer code. AI trained on these models can perform several functions; it can classify, edit, summarize, answer questions, and draft new content, among other tasks.

All of us are at the beginning of a journey to understand generative AI’s power, reach, and capabilities. This research is the latest in our efforts to assess the impact of this new era of AI. It suggests that generative AI is poised to transform roles and boost performance across functions such as sales and marketing, customer operations, and software development. In the process, it could unlock trillions of dollars in value across sectors from banking to life sciences. The following sections share our initial findings.

For the full version of this report, download the PDF .

Key insights

Generative AI’s impact on productivity could add trillions of dollars in value to the global economy. Our latest research estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases we analyzed—by comparison, the United Kingdom’s entire GDP in 2021 was $3.1 trillion. This would increase the impact of all artificial intelligence by 15 to 40 percent. This estimate would roughly double if we include the impact of embedding generative AI into software that is currently used for other tasks beyond those use cases.

About 75 percent of the value that generative AI use cases could deliver falls across four areas: Customer operations, marketing and sales, software engineering, and R&D. Across 16 business functions, we examined 63 use cases in which the technology can address specific business challenges in ways that produce one or more measurable outcomes. Examples include generative AI’s ability to support interactions with customers, generate creative content for marketing and sales, and draft computer code based on natural-language prompts, among many other tasks.

Generative AI will have a significant impact across all industry sectors. Banking, high tech, and life sciences are among the industries that could see the biggest impact as a percentage of their revenues from generative AI. Across the banking industry, for example, the technology could deliver value equal to an additional $200 billion to $340 billion annually if the use cases were fully implemented. In retail and consumer packaged goods, the potential impact is also significant at $400 billion to $660 billion a year.

Generative AI has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities. Current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70 percent of employees’ time today. In contrast, we previously estimated that technology has the potential to automate half of the time employees spend working. 4 “ Harnessing automation for a future that works ,” McKinsey Global Institute, January 12, 2017. The acceleration in the potential for technical automation is largely due to generative AI’s increased ability to understand natural language, which is required for work activities that account for 25 percent of total work time. Thus, generative AI has more impact on knowledge work associated with occupations that have higher wages and educational requirements than on other types of work.

The pace of workforce transformation is likely to accelerate, given increases in the potential for technical automation. Our updated adoption scenarios, including technology development, economic feasibility, and diffusion timelines, lead to estimates that half of today’s work activities could be automated between 2030 and 2060, with a midpoint in 2045, or roughly a decade earlier than in our previous estimates.

Generative AI can substantially increase labor productivity across the economy, but that will require investments to support workers as they shift work activities or change jobs. Generative AI could enable labor productivity growth of 0.1 to 0.6 percent annually through 2040, depending on the rate of technology adoption and redeployment of worker time into other activities. Combining generative AI with all other technologies, work automation could add 0.5 to 3.4 percentage points annually to productivity growth. However, workers will need support in learning new skills, and some will change occupations. If worker transitions and other risks can be managed, generative AI could contribute substantively to economic growth and support a more sustainable, inclusive world.

The era of generative AI is just beginning. Excitement over this technology is palpable, and early pilots are compelling. But a full realization of the technology’s benefits will take time, and leaders in business and society still have considerable challenges to address. These include managing the risks inherent in generative AI, determining what new skills and capabilities the workforce will need, and rethinking core business processes such as retraining and developing new skills.

Where business value lies

Generative AI is a step change in the evolution of artificial intelligence. As companies rush to adapt and implement it, understanding the technology’s potential to deliver value to the economy and society at large will help shape critical decisions. We have used two complementary lenses to determine where generative AI, with its current capabilities, could deliver the biggest value and how big that value could be (Exhibit 1).

The first lens scans use cases for generative AI that organizations could adopt. We define a “use case” as a targeted application of generative AI to a specific business challenge, resulting in one or more measurable outcomes. For example, a use case in marketing is the application of generative AI to generate creative content such as personalized emails, the measurable outcomes of which potentially include reductions in the cost of generating such content and increases in revenue from the enhanced effectiveness of higher-quality content at scale. We identified 63 generative AI use cases spanning 16 business functions that could deliver total value in the range of $2.6 trillion to $4.4 trillion in economic benefits annually when applied across industries.

That would add 15 to 40 percent to the $11 trillion to $17.7 trillion of economic value that we now estimate nongenerative artificial intelligence and analytics could unlock. (Our previous estimate from 2017 was that AI could deliver $9.5 trillion to $15.4 trillion in economic value.)

Our second lens complements the first by analyzing generative AI’s potential impact on the work activities required in some 850 occupations. We modeled scenarios to estimate when generative AI could perform each of more than 2,100 “detailed work activities”—such as “communicating with others about operational plans or activities”—that make up those occupations across the world economy. This enables us to estimate how the current capabilities of generative AI could affect labor productivity across all work currently done by the global workforce.

Some of this impact will overlap with cost reductions in the use case analysis described above, which we assume are the result of improved labor productivity. Netting out this overlap, the total economic benefits of generative AI —including the major use cases we explored and the myriad increases in productivity that are likely to materialize when the technology is applied across knowledge workers’ activities—amounts to $6.1 trillion to $7.9 trillion annually (Exhibit 2).

How we estimated the value potential of generative AI use cases

To assess the potential value of generative AI, we updated a proprietary McKinsey database of potential AI use cases and drew on the experience of more than 100 experts in industries and their business functions. 1 ” Notes from the AI frontier: Applications and value of deep learning ,” McKinsey Global Institute, April 17, 2018.

Our updates examined use cases of generative AI—specifically, how generative AI techniques (primarily transformer-based neural networks) can be used to solve problems not well addressed by previous technologies.

We analyzed only use cases for which generative AI could deliver a significant improvement in the outputs that drive key value. In particular, our estimates of the primary value the technology could unlock do not include use cases for which the sole benefit would be its ability to use natural language. For example, natural-language capabilities would be the key driver of value in a customer service use case but not in a use case optimizing a logistics network, where value primarily arises from quantitative analysis.

We then estimated the potential annual value of these generative AI use cases if they were adopted across the entire economy. For use cases aimed at increasing revenue, such as some of those in sales and marketing, we estimated the economy-wide value generative AI could deliver by increasing the productivity of sales and marketing expenditures.

Our estimates are based on the structure of the global economy in 2022 and do not consider the value generative AI could create if it produced entirely new product or service categories.

While generative AI is an exciting and rapidly advancing technology, the other applications of AI discussed in our previous report continue to account for the majority of the overall potential value of AI. Traditional advanced-analytics and machine learning algorithms are highly effective at performing numerical and optimization tasks such as predictive modeling, and they continue to find new applications in a wide range of industries. However, as generative AI continues to develop and mature, it has the potential to open wholly new frontiers in creativity and innovation. It has already expanded the possibilities of what AI overall can achieve (see sidebar “How we estimated the value potential of generative AI use cases”).

In this section, we highlight the value potential of generative AI across business functions.

Generative AI could have an impact on most business functions; however, a few stand out when measured by the technology’s impact as a share of functional cost (Exhibit 3). Our analysis of 16 business functions identified just four—customer operations, marketing and sales, software engineering, and research and development—that could account for approximately 75 percent of the total annual value from generative AI use cases.

Notably, the potential value of using generative AI for several functions that were prominent in our previous sizing of AI use cases, including manufacturing and supply chain functions, is now much lower. 5 Pitchbook. This is largely explained by the nature of generative AI use cases, which exclude most of the numerical and optimization applications that were the main value drivers for previous applications of AI.

In addition to the potential value generative AI can deliver in function-specific use cases, the technology could drive value across an entire organization by revolutionizing internal knowledge management systems. Generative AI’s impressive command of natural-language processing can help employees retrieve stored internal knowledge by formulating queries in the same way they might ask a human a question and engage in continuing dialogue. This could empower teams to quickly access relevant information, enabling them to rapidly make better-informed decisions and develop effective strategies.

In 2012, the McKinsey Global Institute (MGI) estimated that knowledge workers spent about a fifth of their time, or one day each work week, searching for and gathering information. If generative AI could take on such tasks, increasing the efficiency and effectiveness of the workers doing them, the benefits would be huge. Such virtual expertise could rapidly “read” vast libraries of corporate information stored in natural language and quickly scan source material in dialogue with a human who helps fine-tune and tailor its research, a more scalable solution than hiring a team of human experts for the task.

In other cases, generative AI can drive value by working in partnership with workers, augmenting their work in ways that accelerate their productivity. Its ability to rapidly digest mountains of data and draw conclusions from it enables the technology to offer insights and options that can dramatically enhance knowledge work. This can significantly speed up the process of developing a product and allow employees to devote more time to higher-impact tasks.

Following are four examples of how generative AI could produce operational benefits in a handful of use cases across the business functions that could deliver a majority of the potential value we identified in our analysis of 63 generative AI use cases. In the first two examples, it serves as a virtual expert, while in the following two, it lends a hand as a virtual collaborator.

Customer operations: Improving customer and agent experiences

Generative AI has the potential to revolutionize the entire customer operations function, improving the customer experience and agent productivity through digital self-service and enhancing and augmenting agent skills. The technology has already gained traction in customer service because of its ability to automate interactions with customers using natural language. Research found that at one company with 5,000 customer service agents, the application of generative AI increased issue resolution by 14 percent an hour and reduced the time spent handling an issue by 9 percent. 1 Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond, Generative AI at work , National Bureau of Economic Research working paper number 31161, April 2023. It also reduced agent attrition and requests to speak to a manager by 25 percent. Crucially, productivity and quality of service improved most among less-experienced agents, while the AI assistant did not increase—and sometimes decreased—the productivity and quality metrics of more highly skilled agents. This is because AI assistance helped less-experienced agents communicate using techniques similar to those of their higher-skilled counterparts.

The following are examples of the operational improvements generative AI can have for specific use cases:

  • Customer self-service. Generative AI–fueled chatbots can give immediate and personalized responses to complex customer inquiries regardless of the language or location of the customer. By improving the quality and effectiveness of interactions via automated channels, generative AI could automate responses to a higher percentage of customer inquiries, enabling customer care teams to take on inquiries that can only be resolved by a human agent. Our research found that roughly half of customer contacts made by banking, telecommunications, and utilities companies in North America are already handled by machines, including but not exclusively AI. We estimate that generative AI could further reduce the volume of human-serviced contacts by up to 50 percent, depending on a company’s existing level of automation.
  • Resolution during initial contact. Generative AI can instantly retrieve data a company has on a specific customer, which can help a human customer service representative more successfully answer questions and resolve issues during an initial interaction.
  • Reduced response time. Generative AI can cut the time a human sales representative spends responding to a customer by providing assistance in real time and recommending next steps.
  • Increased sales. Because of its ability to rapidly process data on customers and their browsing histories, the technology can identify product suggestions and deals tailored to customer preferences. Additionally, generative AI can enhance quality assurance and coaching by gathering insights from customer conversations, determining what could be done better, and coaching agents.

We estimate that applying generative AI to customer care functions could increase productivity at a value ranging from 30 to 45 percent of current function costs.

Our analysis captures only the direct impact generative AI might have on the productivity of customer operations. It does not account for potential knock-on effects the technology may have on customer satisfaction and retention arising from an improved experience, including better understanding of the customer’s context that can assist human agents in providing more personalized help and recommendations.

Marketing and sales: Boosting personalization, content creation, and sales productivity

Generative AI has taken hold rapidly in marketing and sales functions, in which text-based communications and personalization at scale are driving forces. The technology can create personalized messages tailored to individual customer interests, preferences, and behaviors, as well as do tasks such as producing first drafts of brand advertising, headlines, slogans, social media posts, and product descriptions.

Introducing generative AI to marketing functions requires careful consideration. For one thing, mathematical models trained on publicly available data without sufficient safeguards against plagiarism, copyright violations, and branding recognition risks infringing on intellectual property rights. A virtual try-on application may produce biased representations of certain demographics because of limited or biased training data. Thus, significant human oversight is required for conceptual and strategic thinking specific to each company’s needs.

Potential operational benefits from using generative AI for marketing include the following:

  • Efficient and effective content creation. Generative AI could significantly reduce the time required for ideation and content drafting, saving valuable time and effort. It can also facilitate consistency across different pieces of content, ensuring a uniform brand voice, writing style, and format. Team members can collaborate via generative AI, which can integrate their ideas into a single cohesive piece. This would allow teams to significantly enhance personalization of marketing messages aimed at different customer segments, geographies, and demographics. Mass email campaigns can be instantly translated into as many languages as needed, with different imagery and messaging depending on the audience. Generative AI’s ability to produce content with varying specifications could increase customer value, attraction, conversion, and retention over a lifetime and at a scale beyond what is currently possible through traditional techniques.
  • Enhanced use of data. Generative AI could help marketing functions overcome the challenges of unstructured, inconsistent, and disconnected data—for example, from different databases—by interpreting abstract data sources such as text, image, and varying structures. It can help marketers better use data such as territory performance, synthesized customer feedback, and customer behavior to generate data-informed marketing strategies such as targeted customer profiles and channel recommendations. Such tools could identify and synthesize trends, key drivers, and market and product opportunities from unstructured data such as social media, news, academic research, and customer feedback.
  • SEO optimization. Generative AI can help marketers achieve higher conversion and lower cost through search engine optimization (SEO) for marketing and sales technical components such as page titles, image tags, and URLs. It can synthesize key SEO tokens, support specialists in SEO digital content creation, and distribute targeted content to customers.
  • Product discovery and search personalization. With generative AI, product discovery and search can be personalized with multimodal inputs from text, images, and speech, and a deep understanding of customer profiles. For example, technology can leverage individual user preferences, behavior, and purchase history to help customers discover the most relevant products and generate personalized product descriptions. This would allow CPG, travel, and retail companies to improve their e-commerce sales by achieving higher website conversion rates.

We estimate that generative AI could increase the productivity of the marketing function with a value between 5 and 15 percent of total marketing spending.

Our analysis of the potential use of generative AI in marketing doesn’t account for knock-on effects beyond the direct impacts on productivity. Generative AI–enabled synthesis could provide higher-quality data insights, leading to new ideas for marketing campaigns and better-targeted customer segments. Marketing functions could shift resources to producing higher-quality content for owned channels, potentially reducing spending on external channels and agencies.

Generative AI could also change the way both B2B and B2C companies approach sales. The following are two use cases for sales:

  • Increase probability of sale. Generative AI could identify and prioritize sales leads by creating comprehensive consumer profiles from structured and unstructured data and suggesting actions to staff to improve client engagement at every point of contact. For example, generative AI could provide better information about client preferences, potentially improving close rates.
  • Improve lead development. Generative AI could help sales representatives nurture leads by synthesizing relevant product sales information and customer profiles and creating discussion scripts to facilitate customer conversation, including up- and cross-selling talking points. It could also automate sales follow-ups and passively nurture leads until clients are ready for direct interaction with a human sales agent.

Our analysis suggests that implementing generative AI could increase sales productivity by approximately 3 to 5 percent of current global sales expenditures.

This analysis may not fully account for additional revenue that generative AI could bring to sales functions. For instance, generative AI’s ability to identify leads and follow-up capabilities could uncover new leads and facilitate more effective outreach that would bring in additional revenue. Also, the time saved by sales representatives due to generative AI’s capabilities could be invested in higher-quality customer interactions, resulting in increased sales success.

Software engineering: Speeding developer work as a coding assistant

Treating computer languages as just another language opens new possibilities for software engineering. Software engineers can use generative AI in pair programming and to do augmented coding and train LLMs to develop applications that generate code when given a natural-language prompt describing what that code should do.

Software engineering is a significant function in most companies, and it continues to grow as all large companies, not just tech titans, embed software in a wide array of products and services. For example, much of the value of new vehicles comes from digital features such as adaptive cruise control, parking assistance, and IoT connectivity.

According to our analysis, the direct impact of AI on the productivity of software engineering could range from 20 to 45 percent of current annual spending on the function. This value would arise primarily from reducing time spent on certain activities, such as generating initial code drafts, code correction and refactoring, root-cause analysis, and generating new system designs. By accelerating the coding process, generative AI could push the skill sets and capabilities needed in software engineering toward code and architecture design. One study found that software developers using Microsoft’s GitHub Copilot completed tasks 56 percent faster than those not using the tool. 1 Peter Cihon et al., The impact of AI on developer productivity: Evidence from GitHub Copilot , Cornell University arXiv software engineering working paper, arXiv:2302.06590, February 13, 2023. An internal McKinsey empirical study of software engineering teams found those who were trained to use generative AI tools rapidly reduced the time needed to generate and refactor code—and engineers also reported a better work experience, citing improvements in happiness, flow, and fulfillment.

Our analysis did not account for the increase in application quality and the resulting boost in productivity that generative AI could bring by improving code or enhancing IT architecture—which can improve productivity across the IT value chain. However, the quality of IT architecture still largely depends on software architects, rather than on initial drafts that generative AI’s current capabilities allow it to produce.

Large technology companies are already selling generative AI for software engineering, including GitHub Copilot, which is now integrated with OpenAI’s GPT-4, and Replit, used by more than 20 million coders. 2 Michael Nuñez, “Google and Replit join forces to challenge Microsoft in coding tools,” VentureBeat, March 28, 2023.

Product R&D: Reducing research and design time, improving simulation and testing

Generative AI’s potential in R&D is perhaps less well recognized than its potential in other business functions. Still, our research indicates the technology could deliver productivity with a value ranging from 10 to 15 percent of overall R&D costs.

For example, the life sciences and chemical industries have begun using generative AI foundation models in their R&D for what is known as generative design. Foundation models can generate candidate molecules, accelerating the process of developing new drugs and materials. Entos, a biotech pharmaceutical company, has paired generative AI with automated synthetic development tools to design small-molecule therapeutics. But the same principles can be applied to the design of many other products, including larger-scale physical products and electrical circuits, among others.

While other generative design techniques have already unlocked some of the potential to apply AI in R&D, their cost and data requirements, such as the use of “traditional” machine learning, can limit their application. Pretrained foundation models that underpin generative AI, or models that have been enhanced with fine-tuning, have much broader areas of application than models optimized for a single task. They can therefore accelerate time to market and broaden the types of products to which generative design can be applied. For now, however, foundation models lack the capabilities to help design products across all industries.

In addition to the productivity gains that result from being able to quickly produce candidate designs, generative design can also enable improvements in the designs themselves, as in the following examples of the operational improvements generative AI could bring:

  • Enhanced design. Generative AI can help product designers reduce costs by selecting and using materials more efficiently. It can also optimize designs for manufacturing, which can lead to cost reductions in logistics and production.
  • Improved product testing and quality. Using generative AI in generative design can produce a higher-quality product, resulting in increased attractiveness and market appeal. Generative AI can help to reduce testing time of complex systems and accelerate trial phases involving customer testing through its ability to draft scenarios and profile testing candidates.

We also identified a new R&D use case for nongenerative AI: deep learning surrogates, the use of which has grown since our earlier research, can be paired with generative AI to produce even greater benefits. To be sure, integration will require the development of specific solutions, but the value could be significant because deep learning surrogates have the potential to accelerate the testing of designs proposed by generative AI.

While we have estimated the potential direct impacts of generative AI on the R&D function, we did not attempt to estimate the technology’s potential to create entirely novel product categories. These are the types of innovations that can produce step changes not only in the performance of individual companies but in economic growth overall.

Industry impacts

Across the 63 use cases we analyzed, generative AI has the potential to generate $2.6 trillion to $4.4 trillion in value across industries. Its precise impact will depend on a variety of factors, such as the mix and importance of different functions, as well as the scale of an industry’s revenue (Exhibit 4).

For example, our analysis estimates generative AI could contribute roughly $310 billion in additional value for the retail industry (including auto dealerships) by boosting performance in functions such as marketing and customer interactions. By comparison, the bulk of potential value in high tech comes from generative AI’s ability to increase the speed and efficiency of software development (Exhibit 5).

In the banking industry, generative AI has the potential to improve on efficiencies already delivered by artificial intelligence by taking on lower-value tasks in risk management, such as required reporting, monitoring regulatory developments, and collecting data. In the life sciences industry, generative AI is poised to make significant contributions to drug discovery and development.

We share our detailed analysis of these industries below.

Generative AI supports key value drivers in retail and consumer packaged goods

The technology could generate value for the retail and consumer packaged goods (CPG) industry by increasing productivity by 1.2 to 2.0 percent of annual revenues, or an additional $400 billion to $660 billion. 1 Vehicular retail is included as part of our overall retail analysis. To streamline processes, generative AI could automate key functions such as customer service, marketing and sales, and inventory and supply chain management. Technology has played an essential role in the retail and CPG industries for decades. Traditional AI and advanced analytics solutions have helped companies manage vast pools of data across large numbers of SKUs, expansive supply chain and warehousing networks, and complex product categories such as consumables. In addition, the industries are heavily customer facing, which offers opportunities for generative AI to complement previously existing artificial intelligence. For example, generative AI’s ability to personalize offerings could optimize marketing and sales activities already handled by existing AI solutions. Similarly, generative AI tools excel at data management and could support existing AI-driven pricing tools. Applying generative AI to such activities could be a step toward integrating applications across a full enterprise.

Generative AI at work in retail and CPG

Reinvention of the customer interaction pattern.

Consumers increasingly seek customization in everything from clothing and cosmetics to curated shopping experiences, personalized outreach, and food—and generative AI can improve that experience. Generative AI can aggregate market data to test concepts, ideas, and models. Stitch Fix, which uses algorithms to suggest style choices to its customers, has experimented with DALL·E to visualize products based on customer preferences regarding color, fabric, and style. Using text-to-image generation, the company’s stylists can visualize an article of clothing based on a consumer’s preferences and then identify a similar article among Stitch Fix’s inventory.

Retailers can create applications that give shoppers a next-generation experience, creating a significant competitive advantage in an era when customers expect to have a single natural-language interface help them select products. For example, generative AI can improve the process of choosing and ordering ingredients for a meal or preparing food—imagine a chatbot that could pull up the most popular tips from the comments attached to a recipe. There is also a big opportunity to enhance customer value management by delivering personalized marketing campaigns through a chatbot. Such applications can have human-like conversations about products in ways that can increase customer satisfaction, traffic, and brand loyalty. Generative AI offers retailers and CPG companies many opportunities to cross-sell and upsell, collect insights to improve product offerings, and increase their customer base, revenue opportunities, and overall marketing ROI.

Accelerating the creation of value in key areas

Generative AI tools can facilitate copy writing for marketing and sales, help brainstorm creative marketing ideas, expedite consumer research, and accelerate content analysis and creation. The potential improvement in writing and visuals can increase awareness and improve sales conversion rates.

Rapid resolution and enhanced insights in customer care

The growth of e-commerce also elevates the importance of effective consumer interactions. Retailers can combine existing AI tools with generative AI to enhance the capabilities of chatbots, enabling them to better mimic the interaction style of human agents—for example, by responding directly to a customer’s query, tracking or canceling an order, offering discounts, and upselling. Automating repetitive tasks allows human agents to devote more time to handling complicated customer problems and obtaining contextual information.

Disruptive and creative innovation

Generative AI tools can enhance the process of developing new versions of products by digitally creating new designs rapidly. A designer can generate packaging designs from scratch or generate variations on an existing design. This technology is developing rapidly and has the potential to add text-to-video generation.

Factors for retail and CPG organizations to consider

As retail and CPG executives explore how to integrate generative AI in their operations, they should keep in mind several factors that could affect their ability to capture value from the technology:

  • External inference. Generative AI has increased the need to understand whether generated content is based on fact or inference, requiring a new level of quality control.
  • Adversarial attacks. Foundation models are a prime target for attack by hackers and other bad actors, increasing the variety of potential security vulnerabilities and privacy risks.

To address these concerns, retail and CPG companies will need to strategically keep humans in the loop and ensure security and privacy are top considerations for any implementation. Companies will need to institute new quality checks for processes previously handled by humans, such as emails written by customer reps, and perform more-detailed quality checks on AI-assisted processes such as product design.

Why banks could realize significant value

Generative AI could have a significant impact on the banking industry , generating value from increased productivity of 2.8 to 4.7 percent of the industry’s annual revenues, or an additional $200 billion to $340 billion. On top of that impact, the use of generative AI tools could also enhance customer satisfaction, improve decision making and employee experience, and decrease risks through better monitoring of fraud and risk.

Banking, a knowledge and technology-enabled industry, has already benefited significantly from previously existing applications of artificial intelligence in areas such as marketing and customer operations. 1 “ Building the AI bank of the future ,” McKinsey, May 2021. Generative AI applications could deliver additional benefits, especially because text modalities are prevalent in areas such as regulations and programming language, and the industry is customer facing, with many B2C and small-business customers. 2 McKinsey’s Global Banking Annual Review , December 1, 2022.

Several characteristics position the industry for the integration of generative AI applications:

  • Sustained digitization efforts along with legacy IT systems. Banks have been investing in technology for decades, accumulating a significant amount of technical debt along with a siloed and complex IT architecture. 3 Akhil Babbar, Raghavan Janardhanan, Remy Paternoster, and Henning Soller, “ Why most digital banking transformations fail—and how to flip the odds ,” McKinsey, April 11, 2023.
  • Large customer-facing workforces. Banking relies on a large number of service representatives such as call-center agents and wealth management financial advisers.
  • A stringent regulatory environment. As a heavily regulated industry, banking has a substantial number of risk, compliance, and legal needs.
  • White-collar industry. Generative AI’s impact could span the organization, assisting all employees in writing emails, creating business presentations, and other tasks.

Generative AI at work in banking

Banks have started to grasp the potential of generative AI in their front lines and in their software activities. Early adopters are harnessing solutions such as ChatGPT as well as industry-specific solutions, primarily for software and knowledge applications. Three uses demonstrate its value potential to the industry.

A virtual expert to augment employee performance

A generative AI bot trained on proprietary knowledge such as policies, research, and customer interaction could provide always-on, deep technical support. Today, frontline spending is dedicated mostly to validating offers and interacting with clients, but giving frontline workers access to data as well could improve the customer experience. The technology could also monitor industries and clients and send alerts on semantic queries from public sources. For example, Morgan Stanley is building an AI assistant using GPT-4, with the aim of helping tens of thousands of wealth managers quickly find and synthesize answers from a massive internal knowledge base. 4 Hugh Son, “Morgan Stanley is testing an OpenAI-powered chatbot for its 16,000 financial advisors,” CNBC, March 14, 2023. The model combines search and content creation so wealth managers can find and tailor information for any client at any moment.

One European bank has leveraged generative AI to develop an environmental, social, and governance (ESG) virtual expert by synthesizing and extracting from long documents with unstructured information. The model answers complex questions based on a prompt, identifying the source of each answer and extracting information from pictures and tables.

Generative AI could reduce the significant costs associated with back-office operations. Such customer-facing chatbots could assess user requests and select the best service expert to address them based on characteristics such as topic, level of difficulty, and type of customer. Through generative AI assistants, service professionals could rapidly access all relevant information such as product guides and policies to instantaneously address customer requests.

Code acceleration to reduce tech debt and deliver software faster

Generative AI tools are useful for software development in four broad categories. First, they can draft code based on context via input code or natural language, helping developers code more quickly and with reduced friction while enabling automatic translations and no- and low-code tools. Second, such tools can automatically generate, prioritize, run, and review different code tests, accelerating testing and increasing coverage and effectiveness. Third, generative AI’s natural-language translation capabilities can optimize the integration and migration of legacy frameworks. Last, the tools can review code to identify defects and inefficiencies in computing. The result is more robust, effective code.

Production of tailored content at scale

Generative AI tools can draw on existing documents and data sets to substantially streamline content generation. These tools can create personalized marketing and sales content tailored to specific client profiles and histories as well as a multitude of alternatives for A/B testing. In addition, generative AI could automatically produce model documentation, identify missing documentation, and scan relevant regulatory updates to create alerts for relevant shifts.

Factors for banks to consider

When exploring how to integrate generative AI into operations, banks can be mindful of a number of factors:

  • The level of regulation for different processes. These vary from unregulated processes such as customer service to heavily regulated processes such as credit risk scoring.
  • Type of end user. End users vary widely in their expectations and familiarity with generative AI—for example, employees compared with high-net-worth clients.
  • Intended level of work automation. AI agents integrated through APIs could act nearly autonomously or as copilots, giving real-time suggestions to agents during customer interactions.
  • Data constraints. While public data such as annual reports could be made widely available, there would need to be limits on identifiable details for customers and other internal data.

Pharmaceuticals and medical products could see benefits across the entire value chain

Our analysis finds that generative AI could have a significant impact on the pharmaceutical and medical-product industries—from 2.6 to 4.5 percent of annual revenues across the pharmaceutical and medical-product industries, or $60 billion to $110 billion annually. This big potential reflects the resource-intensive process of discovering new drug compounds. Pharma companies typically spend approximately 20 percent of revenues on R&D, 1 Research and development in the pharmaceutical industry , Congressional Budget Office, April 2021. and the development of a new drug takes an average of ten to 15 years. With this level of spending and timeline, improving the speed and quality of R&D can generate substantial value. For example, lead identification—a step in the drug discovery process in which researchers identify a molecule that would best address the target for a potential new drug—can take several months even with “traditional” deep learning techniques. Foundation models and generative AI can enable organizations to complete this step in a matter of weeks.

Generative AI at work in pharmaceuticals and medical products

Drug discovery involves narrowing the universe of possible compounds to those that could effectively treat specific conditions. Generative AI’s ability to process massive amounts of data and model options can accelerate output across several use cases:

Improve automation of preliminary screening

In the lead identification stage of drug development, scientists can use foundation models to automate the preliminary screening of chemicals in the search for those that will produce specific effects on drug targets. To start, thousands of cell cultures are tested and paired with images of the corresponding experiment. Using an off-the-shelf foundation model, researchers can cluster similar images more precisely than they can with traditional models, enabling them to select the most promising chemicals for further analysis during lead optimization.

Enhance indication finding

An important phase of drug discovery involves the identification and prioritization of new indications—that is, diseases, symptoms, or circumstances that justify the use of a specific medication or other treatment, such as a test, procedure, or surgery. Possible indications for a given drug are based on a patient group’s clinical history and medical records, and they are then prioritized based on their similarities to established and evidence-backed indications.

Researchers start by mapping the patient cohort’s clinical events and medical histories—including potential diagnoses, prescribed medications, and performed procedures—from real-world data. Using foundation models, researchers can quantify clinical events, establish relationships, and measure the similarity between the patient cohort and evidence-backed indications. The result is a short list of indications that have a better probability of success in clinical trials because they can be more accurately matched to appropriate patient groups.

Pharma companies that have used this approach have reported high success rates in clinical trials for the top five indications recommended by a foundation model for a tested drug. This success has allowed these drugs to progress smoothly into Phase 3 trials, significantly accelerating the drug development process.

Factors for pharmaceuticals and medical products organizations to consider

Before integrating generative AI into operations, pharma executives should be aware of some factors that could limit their ability to capture its benefits:

  • The need for a human in the loop. Companies may need to implement new quality checks on processes that shift from humans to generative AI, such as representative-generated emails, or more detailed quality checks on AI-assisted processes, such as drug discovery. The increasing need to verify whether generated content is based on fact or inference elevates the need for a new level of quality control.
  • Explainability. A lack of transparency into the origins of generated content and traceability of root data could make it difficult to update models and scan them for potential risks; for instance, a generative AI solution for synthesizing scientific literature may not be able to point to the specific articles or quotes that led it to infer that a new treatment is very popular among physicians. The technology can also “hallucinate,” or generate responses that are obviously incorrect or inappropriate for the context. Systems need to be designed to point to specific articles or data sources, and then do human-in-the-loop checking.
  • Privacy considerations. Generative AI’s use of clinical images and medical records could increase the risk that protected health information will leak, potentially violating regulations that require pharma companies to protect patient privacy.

Work and productivity implications

Technology has been changing the anatomy of work for decades. Over the years, machines have given human workers various “superpowers”; for instance, industrial-age machines enabled workers to accomplish physical tasks beyond the capabilities of their own bodies. More recently, computers have enabled knowledge workers to perform calculations that would have taken years to do manually.

These examples illustrate how technology can augment work through the automation of individual activities that workers would have otherwise had to do themselves. At a conceptual level, the application of generative AI may follow the same pattern in the modern workplace, although as we show later in this chapter, the types of activities that generative AI could affect, and the types of occupations with activities that could change, will likely be different as a result of this technology than for older technologies.

The McKinsey Global Institute began analyzing the impact of technological automation of work activities and modeling scenarios of adoption in 2017. At that time, we estimated that workers spent half of their time on activities that had the potential to be automated by adapting technology that existed at that time, or what we call technical automation potential. We also modeled a range of potential scenarios for the pace at which these technologies could be adopted and affect work activities throughout the global economy.

Technology adoption at scale does not occur overnight. The potential of technological capabilities in a lab does not necessarily mean they can be immediately integrated into a solution that automates a specific work activity—developing such solutions takes time. Even when such a solution is developed, it might not be economically feasible to use if its costs exceed those of human labor. Additionally, even if economic incentives for deployment exist, it takes time for adoption to spread across the global economy. Hence, our adoption scenarios, which consider these factors together with the technical automation potential, provide a sense of the pace and scale at which workers’ activities could shift over time.

About the research

This analysis builds on the methodology we established in 2017. We began by examining the US Bureau of Labor Statistics O*Net breakdown of about 850 occupations into roughly 2,100 detailed work activities. For each of these activities, we scored the level of capability necessary to successfully perform the activity against a set of 18 capabilities that have the potential for automation.

We also surveyed experts in the automation of each of these capabilities to estimate automation technologies’ current performance level against each of these capabilities, as well as how the technology’s performance might advance over time. Specifically, this year, we updated our assessments of technology’s performance in cognitive, language, and social and emotional capabilities based on a survey of generative AI experts.

Based on these assessments of the technical automation potential of each detailed work activity at each point in time, we modeled potential scenarios for the adoption of work automation around the world. First, we estimated a range of time to implement a solution that could automate each specific detailed work activity, once all the capability requirements were met by the state of technology development. Second, we estimated a range of potential costs for this technology when it is first introduced, and then declining over time, based on historical precedents. We modeled the beginning of adoption for a specific detailed work activity in a particular occupation in a country (for 47 countries, accounting for more than 80 percent of the global workforce) when the cost of the automation technology reaches parity with the cost of human labor in that occupation.

Based on a historical analysis of various technologies, we modeled a range of adoption timelines from eight to 27 years between the beginning of adoption and its plateau, using sigmoidal curves (S-curves). This range implicitly accounts for the many factors that could affect the pace at which adoption occurs, including regulation, levels of investment, and management decision making within firms.

The modeled scenarios create a time range for the potential pace of automating current work activities. The “earliest” scenario flexes all parameters to the extremes of plausible assumptions, resulting in faster automation development and adoption, and the “latest” scenario flexes all parameters in the opposite direction. The reality is likely to fall somewhere between the two.

The analyses in this paper incorporate the potential impact of generative AI on today’s work activities. The new capabilities of generative AI, combined with previous technologies and integrated into corporate operations around the world, could accelerate the potential for technical automation of individual activities and the adoption of technologies that augment the capabilities of the workforce. They could also have an impact on knowledge workers whose activities were not expected to shift as a result of these technologies until later in the future (see sidebar “About the research”).

Automation potential has accelerated, but adoption to lag

Based on developments in generative AI, technology performance is now expected to match median human performance and reach top-quartile human performance earlier than previously estimated across a wide range of capabilities (Exhibit 6). For example, MGI previously identified 2027 as the earliest year when median human performance for natural-language understanding might be achieved in technology, but in this new analysis, the corresponding point is 2023.

As a result of these reassessments of technology capabilities due to generative AI, the total percentage of hours that could theoretically be automated by integrating technologies that exist today has increased from about 50 percent to 60–70 percent. The technical potential curve is quite steep because of the acceleration in generative AI’s natural-language capabilities.

Interestingly, the range of times between the early and late scenarios has compressed compared with the expert assessments in 2017, reflecting a greater confidence that higher levels of technological capabilities will arrive by certain time periods (Exhibit 7).

Our analysis of adoption scenarios accounts for the time required to integrate technological capabilities into solutions that can automate individual work activities; the cost of these technologies compared with that of human labor in different occupations and countries around the world; and the time it has taken for technologies to diffuse across the economy. With the acceleration in technical automation potential that generative AI enables, our scenarios for automation adoption have correspondingly accelerated. These scenarios encompass a wide range of outcomes, given that the pace at which solutions will be developed and adopted will vary based on decisions that will be made on investments, deployment, and regulation, among other factors. But they give an indication of the degree to which the activities that workers do each day may shift (Exhibit 8).

As an example of how this might play out in a specific occupation, consider postsecondary English language and literature teachers, whose detailed work activities include preparing tests and evaluating student work. With generative AI’s enhanced natural-language capabilities, more of these activities could be done by machines, perhaps initially to create a first draft that is edited by teachers but perhaps eventually with far less human editing required. This could free up time for these teachers to spend more time on other work activities, such as guiding class discussions or tutoring students who need extra assistance.

Our previously modeled adoption scenarios suggested that 50 percent of time spent on 2016 work activities would be automated sometime between 2035 and 2070, with a midpoint scenario around 2053. Our updated adoption scenarios, which account for developments in generative AI, models the time spent on 2023 work activities reaching 50 percent automation between 2030 and 2060, with a midpoint of 2045—an acceleration of roughly a decade compared with the previous estimate. 6 The comparison is not exact because the composition of work activities between 2016 and 2023 has changed; for example, some automation has occurred during that time period.

Adoption is also likely to be faster in developed countries, where wages are higher and thus the economic feasibility of adopting automation occurs earlier. Even if the potential for technology to automate a particular work activity is high, the costs required to do so have to be compared with the cost of human wages. In countries such as China, India, and Mexico, where wage rates are lower, automation adoption is modeled to arrive more slowly than in higher-wage countries (Exhibit 9).

Generative AI’s potential impact on knowledge work

Previous generations of automation technology were particularly effective at automating data management tasks related to collecting and processing data. Generative AI’s natural-language capabilities increase the automation potential of these types of activities somewhat. But its impact on more physical work activities shifted much less, which isn’t surprising because its capabilities are fundamentally engineered to do cognitive tasks.

As a result, generative AI is likely to have the biggest impact on knowledge work, particularly activities involving decision making and collaboration, which previously had the lowest potential for automation (Exhibit 10). Our estimate of the technical potential to automate the application of expertise jumped 34 percentage points, while the potential to automate management and develop talent increased from 16 percent in 2017 to 49 percent in 2023.

Generative AI’s ability to understand and use natural language for a variety of activities and tasks largely explains why automation potential has risen so steeply. Some 40 percent of the activities that workers perform in the economy require at least a median level of human understanding of natural language.

As a result, many of the work activities that involve communication, supervision, documentation, and interacting with people in general have the potential to be automated by generative AI, accelerating the transformation of work in occupations such as education and technology, for which automation potential was previously expected to emerge later (Exhibit 11).

Labor economists have often noted that the deployment of automation technologies tends to have the most impact on workers with the lowest skill levels, as measured by educational attainment, or what is called skill biased. We find that generative AI has the opposite pattern—it is likely to have the most incremental impact through automating some of the activities of more-educated workers (Exhibit 12).

Another way to interpret this result is that generative AI will challenge the attainment of multiyear degree credentials as an indicator of skills, and others have advocated for taking a more skills-based approach to workforce development in order to create more equitable, efficient workforce training and matching systems. 7 A more skills-based approach to workforce development predates the emergence of generative AI. Generative AI could still be described as skill-biased technological change, but with a different, perhaps more granular, description of skills that are more likely to be replaced than complemented by the activities that machines can do.

Previous generations of automation technology often had the most impact on occupations with wages falling in the middle of the income distribution. For lower-wage occupations, making a case for work automation is more difficult because the potential benefits of automation compete against a lower cost of human labor. Additionally, some of the tasks performed in lower-wage occupations are technically difficult to automate—for example, manipulating fabric or picking delicate fruits. Some labor economists have observed a “hollowing out of the middle,” and our previous models have suggested that work automation would likely have the biggest midterm impact on lower-middle-income quintiles.

However, generative AI’s impact is likely to most transform the work of higher-wage knowledge workers because of advances in the technical automation potential of their activities, which were previously considered to be relatively immune from automation (Exhibit 13).

Generative AI could propel higher productivity growth

Global economic growth was slower from 2012 to 2022 than in the two preceding decades. 8 Global economic prospects , World Bank, January 2023. Although the COVID-19 pandemic was a significant factor, long-term structural challenges—including declining birth rates and aging populations—are ongoing obstacles to growth.

Declining employment is among those obstacles. Compound annual growth in the total number of workers worldwide slowed from 2.5 percent in 1972–82 to just 0.8 percent in 2012–22, largely because of aging. In many large countries, the size of the workforce is already declining. 9 Yaron Shamir, “Three factors contributing to fewer people in the workforce,” Forbes , April 7, 2022. Productivity, which measures output relative to input, or the value of goods and services produced divided by the amount of labor, capital, and other resources required to produce them, was the main engine of economic growth in the three decades from 1992 to 2022 (Exhibit 14). However, since then, productivity growth has slowed in tandem with slowing employment growth, confounding economists and policy makers. 10 “The U.S. productivity slowdown: an economy-wide and industry-level analysis,” Monthly Labor Review, US Bureau of Labor Statistics, April 2021; Kweilin Ellingrud, “ Turning around the productivity slowdown ,” McKinsey Global Institute, September 13, 2022.

The deployment of generative AI and other technologies could help accelerate productivity growth, partially compensating for declining employment growth and enabling overall economic growth. Based on our estimates, the automation of individual work activities enabled by these technologies could provide the global economy with an annual productivity boost of 0.5 to 3.4 percent from 2023 to 2040, depending on the rate of automation adoption—with generative AI contributing 0.1 to 0.6 percentage points of that growth—but only if individuals affected by the technology were to shift to other work activities that at least match their 2022 productivity levels (Exhibit 15). In some cases, workers will stay in the same occupations, but their mix of activities will shift; in others, workers will need to shift occupations.

Considerations for business and society

History has shown that new technologies have the potential to reshape societies. Artificial intelligence has already changed the way we live and work—for example, it can help our phones (mostly) understand what we say, or draft emails. Mostly, however, AI has remained behind the scenes, optimizing business processes or making recommendations about the next product to buy. The rapid development of generative AI is likely to significantly augment the impact of AI overall, generating trillions of dollars of additional value each year and transforming the nature of work.

But the technology could also deliver new and significant challenges. Stakeholders must act—and quickly, given the pace at which generative AI could be adopted—to prepare to address both the opportunities and the risks. Risks have already surfaced, including concerns about the content that generative AI systems produce: Will they infringe upon intellectual property due to “plagiarism” in the training data used to create foundation models? Will the answers that LLMs produce when questioned be accurate, and can they be explained? Will the content generative AI creates be fair or biased in ways that users do not want by, say, producing content that reflects harmful stereotypes?

Using generative AI responsibly

Generative AI poses a variety of risks. Stakeholders will want to address these risks from the start.

Fairness: Models may generate algorithmic bias due to imperfect training data or decisions made by the engineers developing the models.

Intellectual property (IP): Training data and model outputs can generate significant IP risks, including infringing on copyrighted, trademarked, patented, or otherwise legally protected materials. Even when using a provider’s generative AI tool, organizations will need to understand what data went into training and how it’s used in tool outputs.

Privacy: Privacy concerns could arise if users input information that later ends up in model outputs in a form that makes individuals identifiable. Generative AI could also be used to create and disseminate malicious content such as disinformation, deepfakes, and hate speech.

Security: Generative AI may be used by bad actors to accelerate the sophistication and speed of cyberattacks. It also can be manipulated to provide malicious outputs. For example, through a technique called prompt injection, a third party gives a model new instructions that trick the model into delivering an output unintended by the model producer and end user.

Explainability: Generative AI relies on neural networks with billions of parameters, challenging our ability to explain how any given answer is produced.

Reliability: Models can produce different answers to the same prompts, impeding the user’s ability to assess the accuracy and reliability of outputs.

Organizational impact: Generative AI may significantly affect the workforce, and the impact on specific groups and local communities could be disproportionately negative.

Social and environmental impact: The development and training of foundation models may lead to detrimental social and environmental consequences, including an increase in carbon emissions (for example, training one large language model can emit about 315 tons of carbon dioxide). 1 Ananya Ganesh, Andrew McCallum, and Emma Strubell, “Energy and policy considerations for deep learning in NLP,” Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , June 5, 2019.

There are economic challenges too: the scale and the scope of the workforce transitions described in this report are considerable. In the midpoint adoption scenario, about a quarter to a third of work activities could change in the coming decade. The task before us is to manage the potential positives and negatives of the technology simultaneously (see sidebar “Using generative AI responsibly”). Here are some of the critical questions we will need to address while balancing our enthusiasm for the potential benefits of the technology with the new challenges it can introduce.

Companies and business leaders

How can companies move quickly to capture the potential value at stake highlighted in this report, while managing the risks that generative AI presents?

How will the mix of occupations and skills needed across a company’s workforce be transformed by generative AI and other artificial intelligence over the coming years? How will a company enable these transitions in its hiring plans, retraining programs, and other aspects of human resources?

Do companies have a role to play in ensuring the technology is not deployed in “negative use cases” that could harm society?

How can businesses transparently share their experiences with scaling the use of generative AI within and across industries—and also with governments and society?

Policy makers

What will the future of work look like at the level of an economy in terms of occupations and skills? What does this mean for workforce planning?

How can workers be supported as their activities shift over time? What retraining programs can be put in place? What incentives are needed to support private companies as they invest in human capital? Are there earn-while-you-learn programs such as apprenticeships that could enable people to retrain while continuing to support themselves and their families?

What steps can policy makers take to prevent generative AI from being used in ways that harm society or vulnerable populations?

Can new policies be developed and existing policies amended to ensure human-centric AI development and deployment that includes human oversight and diverse perspectives and accounts for societal values?

Individuals as workers, consumers, and citizens

How concerned should individuals be about the advent of generative AI? While companies can assess how the technology will affect their bottom lines, where can citizens turn for accurate, unbiased information about how it will affect their lives and livelihoods?

How can individuals as workers and consumers balance the conveniences generative AI delivers with its impact in their workplaces?

Can citizens have a voice in the decisions that will shape the deployment and integration of generative AI into the fabric of their lives?

Technological innovation can inspire equal parts awe and concern. When that innovation seems to materialize fully formed and becomes widespread seemingly overnight, both responses can be amplified. The arrival of generative AI in the fall of 2022 was the most recent example of this phenomenon, due to its unexpectedly rapid adoption as well as the ensuing scramble among companies and consumers to deploy, integrate, and play with it.

All of us are at the beginning of a journey to understand this technology’s power, reach, and capabilities. If the past eight months are any guide, the next several years will take us on a roller-coaster ride featuring fast-paced innovation and technological breakthroughs that force us to recalibrate our understanding of AI’s impact on our work and our lives. It is important to properly understand this phenomenon and anticipate its impact. Given the speed of generative AI’s deployment so far, the need to accelerate digital transformation and reskill labor forces is great.

These tools have the potential to create enormous value for the global economy at a time when it is pondering the huge costs of adapting and mitigating climate change. At the same time, they also have the potential to be more destabilizing than previous generations of artificial intelligence. They are capable of that most human of abilities, language, which is a fundamental requirement of most work activities linked to expertise and knowledge as well as a skill that can be used to hurt feelings, create misunderstandings, obscure truth, and incite violence and even wars.

We hope this research has contributed to a better understanding of generative AI’s capacity to add value to company operations and fuel economic growth and prosperity as well as its potential to dramatically transform how we work and our purpose in society. Companies, policy makers, consumers, and citizens can work together to ensure that generative AI delivers on its promise to create significant value while limiting its potential to upset lives and livelihoods. The time to act is now. 11 The research, analysis, and writing in this report was entirely done by humans.

Michael Chui is a partner in McKinsey’s Bay Area office, where Roger Roberts is a partner and Lareina Yee is a senior partner; Eric Hazan is a senior partner in McKinsey’s Paris office; Alex Singla is a senior partner in the Chicago office; Kate Smaje and Alex Sukharevsky are senior partners in the London office; and Rodney Zemmel is a senior partner in the New York office.

The authors wish to thank Pedro Abreu, Rohit Agarwal, Steven Aronowitz, Arun Arora, Charles Atkins, Elia Berteletti, Onno Boer, Albert Bollard, Xavier Bosquet, Benjamin Braverman, Charles Carcenac, Sebastien Chaigne, Peter Crispeels, Santiago Comella-Dorda, Eleonore Depardon, Kweilin Ellingrud, Thierry Ethevenin, Dmitry Gafarov, Neel Gandhi, Eric Goldberg, Liz Grennan, Shivani Gupta, Vinay Gupta, Dan Hababou, Bryan Hancock, Lisa Harkness, Leila Harouchi, Jake Hart, Heiko Heimes, Jeff Jacobs, Begum Karaci Deniz, Tarun Khurana, Malgorzata Kmicinska, Jan-Christoph Köstring, Andreas Kremer, Kathryn Kuhn, Jessica Lamb, Maxim Lampe, John Larson, Swan Leroi, Damian Lewandowski, Richard Li, Sonja Lindberg, Kerin Lo, Guillaume Lurenbaum, Matej Macak, Dana Maor, Julien Mauhourat, Marco Piccitto, Carolyn Pierce, Olivier Plantefeve, Alexandre Pons, Kathryn Rathje, Emily Reasor, Werner Rehm, Steve Reis, Kelsey Robinson, Martin Rosendahl, Christoph Sandler, Saurab Sanghvi, Boudhayan Sen, Joanna Si, Alok Singh, Gurneet Singh Dandona, François Soubien, Eli Stein, Stephanie Strom, Michele Tam, Robert Tas, Maribel Tejada, Wilbur Wang, Georg Winkler, Jane Wong, and Romain Zilahi for their contributions to this report.

For the full list of acknowledgments, see the downloadable PDF .

Explore a career with us

Related articles.

Moving illustration of wavy blue lines that was produced using computer code

What every CEO should know about generative AI

Circular hub element virtual reality of big data, technology concept.

Exploring opportunities in the generative AI value chain

A green apple split into 3 parts on a gray background. Half of the apple is made out of a digital blue wireframe mesh.

What is generative AI?

Skip to Content

CU Boulder, industry partner on space docking and satellite AI research

Hanspeter Schaub standing in front of a vacuum chamber.

Docking with a satellite orbiting Earth is delicate business, with one wrong move spelling disaster. A team of industry and University of Colorado Boulder researchers is trying to make it easier.

The work is part of two major business-university grant partnerships that include the lab of Hanspeter Schaub, a professor and chair of the Ann and H.J. Smead Department of Aerospace Engineering Sciences.

“The goal with these grants is very much tech transfer,” Schaub said. “We’re combining university research with business goals and initiatives to develop a product or service.”

The first project is a U.S. Space Force Small Business Technology Transfer grant with In Orbit Aerospace Inc. The goal is to use electro adhesive forces to ease docking between satellites, future space cargo vehicles, or orbital debris. Electro adhesion uses short-range strong electric fields to hold together adjacent bodies, even if they are not made of magnetic materials.

“Docking in space is surprisingly difficult. If servicer bumps target vehicle in an unexpected manner, it’s going to bounce off and fly away. Electro adhesion has been used a lot already with manufacturing on Earth. With electric fields, you can create attractive forces to grab stuff. They’re not huge forces, but they’re nice,” Schaub said.

The team completed early work on the project last year and has now advanced to a second stage, which began in May.

Schaub’s portion of the grant is worth about $500,000 over 18 months, and includes numerical modeling and atmospheric experiments as well as the creation of samples to test in the lab’s vacuum chamber that approximates orbital conditions.

It is not the only business development grant in Schaub’s lab. He and Associate Professor Nisar Ahmed are also in the process of setting up a contract with Trusted Space, Inc. on a U.S. Air Force STTR grant to advance autonomous satellite fault identification. CU Boulder’s portion of this project is worth roughly $300,000 over 18 months.

Like all electronics and machines, satellites sometimes fail. The goal of the effort with Trusted Space is to develop an AI that can automatically identify likely sources of errors.

“If a satellite isn’t tracking in orbit, maybe something bumped into it, maybe the rate gyroscope is off, maybe everything is fine but a sensor is giving bad information. There might be 10 different reasons why and we’re trying to down select in an automated way so a human doesn’t have to scour through datasets manually,” Schaub said.

The team has completed proof of concept work on a Phase 1 grant and is now advancing to Phase 2, modeling dozens of potential errors.

Both grants make extensive use of Basilisk, a piece of software developed by Schaub’s lab to conduct spacecraft mission simulations.

Although many of Schaub’s grants are directly with government agencies or multi-university initiatives, he said conducting work with a business partner offers unique opportunities for advancing science and additional potential for students.

“Students get exposure to industry and are excited because suddenly people outside the research community are interested in what they’re doing,” Schaub said. “They attend meetings and see how projects are run, what challenges industry is trying to solve. It helps influence their dissertations and gives more focus. I see a lot of benefits and companies also often want to hire the students.”

Related News

Capstone Satellite orbiting the moon.

CU Boulder leading $5 million multi-university project to advance the space economy

Mahmoud Hussein with students in his lab.

CU Engineering faculty land prestigious multidisciplinary Department of Defense projects

Rendering of a satellite orbiting the Earth

CU Boulder developing space wargames simulation facility

  • Colorado Center for Astrodynamics Research (CCAR)
  • Hanspeter Schaub News
  • Nisar Ahmed News

Apply   Visit   Give

Departments

  • Ann and H.J. Smead Aerospace Engineering Sciences
  • Chemical & Biological Engineering
  • Civil, Environmental & Architectural Engineering
  • Computer Science
  • Electrical, Computer & Energy Engineering
  • Paul M. Rady Mechanical Engineering
  • Applied Mathematics
  • Biomedical Engineering
  • Creative Technology & Design
  • Engineering Education
  • Engineering Management
  • Engineering Physics
  • Environmental Engineering
  • Integrated Design Engineering
  • Materials Science & Engineering

Affiliates & Partners

  • ATLAS Institute
  • BOLD Center
  • Colorado Mesa University
  • Colorado Space Grant Consortium
  • Discovery Learning
  • Engineering Honors
  • Engineering Leadership
  • Entrepreneurship
  • Herbst Program for Engineering, Ethics & Society
  • Integrated Teaching and Learning
  • Global Engineering
  • Mortenson Center for Global Engineering
  • National Center for Women & Information Technology
  • Western Colorado University

IMAGES

  1. ⛔ Writing a science research paper. Guide: Writing the Scientific Paper

    research paper in business sciences

  2. 🌷 Research papers online. What is a Research Paper?. 2022-11-01

    research paper in business sciences

  3. How to Write a Scientific Paper

    research paper in business sciences

  4. 38+ Research Paper Samples

    research paper in business sciences

  5. Research Paper Format

    research paper in business sciences

  6. 🎉 Part of the research paper. Components of a Research Paper. 2019-01-27

    research paper in business sciences

VIDEO

  1. 4.4 MARKET RESEARCH / IB BUSINESS MANAGEMENT / primary, secondary, sampling, quantitative, qual

  2. Business studies Class 11

  3. Web scraping using ParseHub

  4. Exploring the Intersection of Science and Business

  5. Missouri University of Science And Technology Scholarships in Uk/Study in Uk 2024-2025

  6. Business Studies, business studies question paper 2024 class 12, mp board business studies paper

COMMENTS

  1. Journal of Business Research

    The Journal of Business Research aims to publish research that is rigorous, relevant, and potentially impactful. Recognizing the intricate relationships between the many areas of business activity, JBR examines a wide variety of business decision contexts, processes and activities, developing …. View full aims & scope.

  2. Journal of Management: Sage Journals

    Journal of Management (JOM) peer-reviewed and published bi-monthly, is committed to publishing scholarly empirical and theoretical research articles that have a high impact on the management field as a whole.JOM covers domains such as business strategy and policy, entrepreneurship, human resource management, organizational behavior, organizational theory, and research methods.

  3. Artificial Intelligence in Business: From Research and Innovation to

    The paper investigates the wide range of implications of artificial intelligence (AI), and delves deeper into both positive and negative impacts on governments, communities, companies, and individuals. ... The extended version of the manuscript has been submitted to Journal of Business Research, Elsevier for consideration as a journal research ...

  4. 15845 PDFs

    Explore the latest full-text research PDFs, articles, conference papers, preprints and more on BUSINESS RESEARCH. Find methods information, sources, references or conduct a literature review on ...

  5. Business & Management

    Business & Management. As an independent publisher, Sage Business & Management has been at the forefront of research and scholarship, marked by our influential journals, textbooks, and digital resources that unite theory and practice. We are committed to informing researchers and educating students to build a thriving global society and make a ...

  6. Research in International Business and Finance

    Methodologies and conceptualization issues related to finance. Research in International Business and Finance (RIBAF) seeks to consolidate its position as a premier scholarly vehicle of academic finance. The Journal publishes high quality, insightful, well-written papers that explore current and new issues in international finance.

  7. Business Perspectives and Research: Sage Journals

    Business Perspectives and Research (BPR) aims to publish conceptual, empirical and applied research. The empirical research published in BPR focuses on testing, extending and building management theory. The goal is to expand and enhance the understanding of business and management through empirical investigation and theoretical analysis.

  8. Data science: a game changer for science and innovation

    This paper shows data science's potential for disruptive innovation in science, industry, policy, and people's lives. We present how data science impacts science and society at large in the coming years, including ethical problems in managing human behavior data and considering the quantitative expectations of data science economic impact. We introduce concepts such as open science and e ...

  9. Business

    Scaling up adoption of green technologies in energy, mobility, construction, manufacturing and agriculture is imperative to set countries on a sustainable development path, but that hinges on ...

  10. 12399 PDFs

    Explore the latest full-text research PDFs, articles, conference papers, preprints and more on BUSINESS ANALYTICS. Find methods information, sources, references or conduct a literature review on ...

  11. The impact of COVID-19 on small business outcomes and ...

    Abstract. To explore the impact of coronavirus disease 2019 (COVID-19) on small businesses, we conducted a survey of more than 5,800 small businesses between March 28 and April 4, 2020. Several themes emerged. First, mass layoffs and closures had already occurred—just a few weeks into the crisis.

  12. Research challenges and opportunities in business analytics

    In particular, network science coupled with machine learning is a very promising approach for research in business analytics. It has been successfully demonstrated in constructing and analysing large-scale networks from social media to predict audience selection and targeting Zhang, Bhattacharya, & Ram, Citation 2016 ).

  13. What Is in the Future of Business Research and Management ...

    Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications. ... science parks; business ethics ...

  14. Full article: What do we know about business and economics research

    2.1. Data selection strategy. For selecting the data, we relied on the Scopus database. It is the largest multidisciplinary database in social sciences, economics, finance and business studies and is widely used for conducting bibliometric studies (Baker et al., Citation 2021; Donthu et al., Citation 2020).Scopus is considered a middle choice in terms of the rigorousness of vetting research ...

  15. 41223 PDFs

    Explore the latest full-text research PDFs, articles, conference papers, preprints and more on MANAGEMENT SCIENCE. Find methods information, sources, references or conduct a literature review on ...

  16. Business analytics and big data research in information systems

    Business analytics summarises all methods, processes, technologies, applications, skills, and organisational structures necessary to analyse past or current data to manage and plan business performance. While in the past, business intelligence was rather focused on data integration and reporting descriptive analytics, business analytics is ...

  17. A review of data science in business and industry and a future view

    Abstract. The aim of this paper is to frame Data Science, a fashion and emerging topic nowadays in the context. of business and industry. W e open with a discussion about the origin of Data ...

  18. Business Analytics and Data Science: Once Again?

    Both Business Analytics and Data Science want to "turn data into value". It is not surprising, that these terms have been adopted quickly by those in our community who are close to Operations Research and the Management Sciences (OR/MS). Data analysis and optimization have always been at the core of the INFORMS, and the INFORMS Information ...

  19. Qualitative Designs and Methodologies for Business, Management, and

    Critical theory and research assume that social science research differs from natural science research because social facts are human creations and social phenomena cannot be controlled as readily as natural phenomena (Gephart, 2013, p. 284; Morrow, 1994, p. 9). As a result, critical theory often uses a historical approach to explore issues ...

  20. BComHons Business Sciences (Management)

    The programme comprises the following courses: Research Paper in Business Sciences. Statistical Research Design and Analysis (Coursework) AND Statistical Research Design and Analysis (Project) Advanced Studies in Entrepreneurship. Advanced Studies in Managerial Decision Making and Business Behaviour. Advanced Studies in Organisational Theory.

  21. Digital transformation in business and management research: An overview

    This study is divided into two parts: (1) mapping the thematic evolution of the DT research in the areas of business and management by focusing on papers that were published in the Chartered Association of Business Schools' (ABS) ≥ 2-star journals during the period 2010-2020; (2) based on the findings of the first part, proposing a ...

  22. Business Sciences

    School of Business Sciences. We are situated in Johannesburg in the economic and commercial heart of the country. Multi-disciplinary engagement ensures we remain relevant to South Africa's changing landscape and informs our teaching and research. Research Our research enables us to transfer new knowledge to our curricula and keeps our ...

  23. Search eLibrary :: SSRN

    Definitions of Measures Associated with References, Cites, and Citations. Total References: Total number of references to other papers that have been resolved to date, for papers in the SSRN eLibrary. Total Citations: Total number of cites to papers in the SSRN eLibrary whose links have been resolved to date. Note: The links for the two pages containing a paper's References and Citation links ...

  24. Google Research

    Advancing the state of the art. Our teams advance the state of the art through research, systems engineering, and collaboration across Google. We publish hundreds of research papers each year across a wide range of domains, sharing our latest developments in order to collaboratively progress computing and science. Learn more about our philosophy.

  25. Researchers plan to retract landmark Alzheimer's paper ...

    A version of this story appeared in Science, Vol 384, Issue 6700. Authors of a landmark Alzheimer's disease research paper published in Nature in 2006 have agreed to retract the study in response to allegations of image manipulation. University of Minnesota (UMN) Twin Cities neuroscientist Karen Ashe, the paper's senior author, acknowledged ...

  26. Software that detects 'tortured acronyms' in research ...

    In 2022, IOPP retracted nearly 500 papers from conference proceedings after the PPS flagged tortured phrases in the papers. When Eggleton and her team investigated, they found reams of other problems—fake identity, citation cartels in which researchers insert irrelevant references to one another, and even entirely fabricated research.

  27. Emerging trends and impact of business intelligence & analytics in

    The objective of this paper is to understand characteristics of organizations in different clusters, where is the effectiveness of BI&A seen and which areas or functions is it used for. ... data science & machine learning. ... Journal of Business Research 96: 228-237. Crossref. Google Scholar. Baesens B, Bapna R, Marsden JR, et al. (2016 ...

  28. The state of AI in early 2024: Gen AI adoption spikes and starts to

    If 2023 was the year the world discovered generative AI (gen AI), 2024 is the year organizations truly began using—and deriving business value from—this new technology.In the latest McKinsey Global Survey on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago.

  29. Economic potential of generative AI

    Research found that at one company with 5,000 customer service agents, the application of generative AI increased issue resolution by 14 percent an hour and reduced the time spent handling an issue by 9 percent. 1 Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond, Generative AI at work, National Bureau of Economic Research working paper ...

  30. CU Boulder, industry partner on space docking and satellite AI research

    The work is part of two major business-university grant partnerships that include the lab of Hanspeter Schaub, a professor and chair of the Ann and H.J. Smead Department of Aerospace Engineering Sciences. "The goal with these grants is very much tech transfer," Schaub said.