Human-Computer Interaction (HCI)

The Human-Computer Interaction Group in EECS studies interaction in current and future computing environments, spanning workplaces, homes, public spaces, and beyond. The group collaborates with scholars and designers across campus, drives research presented at venues such as CHI, UIST, DIS, VIS, and CSCW, and creates novel artifacts with real-world impact that benefit end users beyond the academic community. Our research takes place in several spaces on campus. In addition to core EECS facilities, we work in the Berkeley Institute of Design (BID), a 4,000-square-foot research lab designed to foster interdisciplinary collaboration. Researchers also have access to the Jacobs Institute for Design Innovation, which provides extensive prototyping and fabrication resources. Many of our faculty have close ties to the School of Information, which offers master's and PhD programs focused on HCI.

Context-aware computing:

Activity analysis, Embodied and Wearable Computing, Smart Spaces, Location-aware systems, Privacy technologies, Affective Computing.

Perceptual Interfaces:

Virtual reality (VR) and Augmented reality (AR), Vision-based interfaces, Conversational interfaces

Collaboration and Learning:

Tutorial and instruction systems, Crowdsourcing, Pattern-based authoring tools, Learning at scale, Remote group collaboration technologies, Citizen science

Digital Design and Fabrication:

Prototyping tools, DIY and Maker Culture, Computational Design, Creativity-support tools, Sensing technologies

Human-Centered Artificial Intelligence:

Human-robot interaction, Explainable AI, Interactive Machine Learning, Responsible AI, Multimedia retrieval and understanding, Recommender Systems

Interactive Data Exploration and Presentation:

Visualization and visual analytics, Sketch-based and direct manipulation interfaces, Computational notebooks

Optometry and Human Vision Simulation:

Computer aided cornea modeling and visualization, Medical imaging, Virtual environments for surgical simulation, Vision realistic rendering

Usable Programming:

Usable programming languages, Programming environments, Program synthesizers, Programming by demonstration, Tools for non-programmers, novices and end-user programmers

Research Centers

  • Algorithms and Computing for Education
  • Berkeley Artificial Intelligence Research Lab
  • Berkeley Equity and Access in Algorithms, Mechanisms, and Optimization
  • Berkeley Institute of Design
  • Berkeley Laboratory for Automation Science and Engineering
  • Center for Augmented Cognition
  • Center for Information Technology Research in the Interest of Society - The Banatao Institute
  • CITRIS Connected Communities
  • CITRIS Health
  • CITRIS People and Robots
  • EPIC Data lab
  • FHL Vive Center for Enhanced Reality
  • Human-Assistive Robotic Technologies Lab
  • Jacobs Institute for Design Innovation
  • Tele-Immersion
  • Verified Human Interfaces, Control, and Learning for Semi-Autonomous Systems
  • Visual Computing Lab

Faculty

  • Anca Dragan
  • Björn Hartmann
  • Marti Hearst
  • Aditya Parameswaran
  • Eric Paulos
  • Niloufar Salehi
  • Gopala Krishna Anumanchipalli
  • Ruzena Bajcsy
  • Michael Ball
  • Brian A. Barsky
  • John F. Canny (coordinator)
  • Sarah Chasins
  • Armando Fox
  • Ken Goldberg
  • Susan L. Graham
  • Preeya Khanna
  • Michael Lustig
  • James O'Brien
  • Carlo H. Séquin

Faculty Awards

  • National Academy of Engineering (NAE) Member: Ruzena Bajcsy, 1997. Susan L. Graham, 1993.
  • American Academy of Arts and Sciences Member: Ruzena Bajcsy, 2007. Susan L. Graham, 1995.
  • Berkeley Citation: Ruzena Bajcsy, 2023. Carlo H. Séquin, 2016. Susan L. Graham, 2009.
  • Sloan Research Fellow: Preeya Khanna, 2024. Aditya Parameswaran, 2020. Anca Dragan, 2018. Björn Hartmann, 2013. Michael Lustig, 2013. James O'Brien, 2003.

Related Courses

  • CS 160. User Interface Design and Development
  • CS 260A. User Interface Design and Development
  • CS 260B. Human-Computer Interaction Research

Recently Published Documents on Human-Computer Interaction


Project-based learning in human–computer interaction: a service‐dominant logic approach

Purpose: This study aims to propose a service-dominant logic (S-DL)-informed framework for teaching innovation in the context of human–computer interaction (HCI) education involving large industrial projects.

Design/methodology/approach: This study combines S-DL from the field of marketing with experiential and constructivist learning to enable value co-creation as the primary method of connecting diverse actors within the service ecology. The approach aligns with the current conceptualization of central university activities as a triad of research, education and innovation.

Findings: The teaching framework based on the S-DL enabled ongoing improvements to the course (a project-based, bachelor’s-level HCI course in the computer science department), easier management of stakeholders and learning experiences through students’ participation in real-life projects. The framework also helped to provide an understanding of how value co-creation works and brought a new dimension to HCI education.

Practical implications: The proposed framework and the authors’ experience described herein, along with examples of projects, can be helpful to educators designing and improving project-based HCI courses. It can also be useful for partner companies and organizations to realize the potential benefits of collaboration with universities. Decision-makers in industry and academia can benefit from these findings when discussing approaches to addressing sustainability issues.

Originality/value: While HCI has successfully contributed to innovation, HCI education has made only moderate efforts to include innovation as part of the curriculum. The proposed framework considers multiple service ecosystem actors and covers a broader set of co-created values for the involved partners and society than just learning benefits.

Recommender Systems: Past, Present, Future

The origins of modern recommender systems date back to the early 1990s when they were mainly applied experimentally to personal email and information filtering. Today, 30 years later, personalized recommendations are ubiquitous and research in this highly successful application area of AI is flourishing more than ever. Much of the research in the last decades was fueled by advances in machine learning technology. However, building a successful recommender system requires more than a clever general-purpose algorithm. It requires an in-depth understanding of the specifics of the application environment and the expected effects of the system on its users. Ultimately, making recommendations is a human-computer interaction problem, where a computerized system supports users in information search or decision-making contexts. This special issue contains a selection of papers reflecting this multi-faceted nature of the problem and puts open research challenges in recommender systems to the forefront. It features articles on the latest learning technology, reflects on the human-computer interaction aspects, reports on the use of recommender systems in practice, and it finally critically discusses our research methodology.

Research on the Construction of Human-Computer Interaction System Based on a Machine Learning Algorithm

In this paper, we use machine learning algorithms to conduct in-depth research and analysis on the construction of human-computer interaction systems and propose a simple and effective method for extracting salient features based on contextual information. The method retains both the dynamic and static information of gestures, which results in a richer and more robust feature representation. Secondly, this paper proposes a dynamic-programming algorithm based on feature matching, which uses the consistency and accuracy of feature matching to measure the similarity of two frames and then finds the optimal matching distance between two gesture sequences. The algorithm ensures the continuity and accuracy of the gesture description and makes full use of the spatiotemporal location information of the features. The features and limitations of common moving-target detection methods in gesture detection and of common machine learning tracking methods in gesture tracking are first analyzed; the kernel correlation filter method is then improved by designing a confidence model and introducing a scale filter, and comparison experiments on a self-built gesture dataset verify the effectiveness of the improved method. During training and validation of the model on the corpus, the complementary feature extraction methods are ablated, and the corresponding results are compared with three baseline methods. Because Gaussian mixture models do not capture temporal structure, they are not suitable when the time structure of the signal must be modeled; support vector machines, which are widely used in classification tasks, instead use a kernel function to transform the original input into a high-dimensional feature space. In experiments, the speech emotion recognition method proposed in this paper outperforms the baseline methods, demonstrating the effectiveness of complementary feature extraction and the advantage of the deep learning model. Speech is used as input to the system, emotion recognition is performed on it, and the recognized emotion is applied to a human-computer dialogue system in combination with online speech recognition, showing that speech emotion recognition has practical research value for human-computer dialogue systems.
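
The dynamic-programming matching step above is described only at a high level. As a purely illustrative sketch of the general idea, the following classic dynamic time warping (DTW) routine aligns two sequences of per-frame feature vectors; the feature layout and toy data are assumptions for illustration, not the authors' implementation (Python):

    import numpy as np

    def dtw_distance(seq_a, seq_b):
        """Dynamic-programming alignment cost between two gesture sequences.
        seq_a, seq_b: arrays of shape (n_frames, n_features), one feature vector
        per frame. A smaller cost means more similar sequences."""
        n, m = len(seq_a), len(seq_b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # frame-to-frame distance
                # extend the cheapest of the three allowed alignment moves
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    # toy example: a template gesture and a noisy, subsampled copy of it
    rng = np.random.default_rng(0)
    template = rng.normal(size=(30, 4))
    query = template[::2] + 0.05 * rng.normal(size=(15, 4))
    print(dtw_distance(query, template))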

Human–Computer Interaction-Oriented African Literature and African Philosophy Appreciation

African literature has played a major role in changing and shaping perceptions about African people and their way of life for the longest time. Unlike Western cultures that are associated with advanced forms of writing, African literature is largely oral in nature, meaning it has to be recited and even performed. Although Africa has an old tribal culture, African philosophy is a comparatively new and unfamiliar idea. Although the question of the “universality” of African philosophy really asks whether Africa has a philosophy in the Western sense, the philosophy bred by Africa’s native culture must clearly be acknowledged. Therefore, a human–computer interaction-oriented (HCI-oriented) method is proposed to appreciate African literature and African philosophy. To begin with, a physical tablet-aid object is designed, and a depth camera is used to track the user’s hand and the tablet-aid and map them into the virtual scene. Then, a tactile redirection method is proposed to meet the user’s requirement of tactile consistency in a head-mounted-display virtual reality environment. Finally, electroencephalogram (EEG) emotion recognition, based on a convolutional neural network with multiscale convolution kernels, is proposed to appreciate the reflection of African philosophy in African literature. The experimental results show that the proposed method offers strong immersion and a good interactive experience in navigation, selection, and manipulation. The proposed HCI method is not only easy to use but also improves interaction efficiency and accuracy during appreciation. In addition, the EEG emotion recognition experiments show that the classification accuracy with 33 channels is 90.63%, close to the accuracy with all channels, and that the proposed algorithm outperforms three baselines in classification accuracy.
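
The abstract names a CNN with multiscale convolution kernels but gives no architecture details. Below is a minimal PyTorch sketch of that general pattern (parallel temporal convolutions with different kernel sizes whose outputs are concatenated before classification); the channel counts, kernel sizes, window length, and three emotion classes are arbitrary assumptions for illustration:

    import torch
    import torch.nn as nn

    class MultiScaleEEGNet(nn.Module):
        """Toy multiscale-kernel CNN: parallel temporal convolutions at several
        kernel sizes, concatenated and pooled, then a linear classifier."""

        def __init__(self, n_channels=33, n_classes=3, kernel_sizes=(3, 7, 15)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv1d(n_channels, 16, k, padding=k // 2),
                    nn.BatchNorm1d(16),
                    nn.ReLU(),
                )
                for k in kernel_sizes
            ])
            self.pool = nn.AdaptiveAvgPool1d(1)          # collapse the time axis
            self.classifier = nn.Linear(16 * len(kernel_sizes), n_classes)

        def forward(self, x):                             # x: (batch, channels, time)
            feats = [self.pool(b(x)).squeeze(-1) for b in self.branches]
            return self.classifier(torch.cat(feats, dim=1))

    model = MultiScaleEEGNet()
    dummy_eeg = torch.randn(8, 33, 256)                   # 8 windows, 33 channels, 256 samples
    print(model(dummy_eeg).shape)                          # torch.Size([8, 3])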

Wearable devices in diving: A systematic review (Preprint)

BACKGROUND: Wearable devices have grown enormously in importance in recent years. While wearables have generally been well studied, they have not yet been discussed in the underwater environment. OBJECTIVE: The aim of this systematic review was to systematically search the scientific literature for wearables used for underwater operation, to map their capabilities and features comprehensively, and to discuss the general direction of development. METHODS: In September 2021, we conducted an extensive keyword search of the largest databases. Only peer-reviewed articles in English that described a wearable or device usable in diving were included. RESULTS: In the 36 relevant studies found, four device categories could be identified: safety devices, underwater communication devices, head-up displays, and underwater human-computer interaction devices. CONCLUSIONS: The possibilities and challenges of the respective technologies were considered and evaluated separately. Underwater communication has the most significant influence on future developments. Another topic that has not received enough attention is human-computer interaction.

Analyzing the mental states of the sports student based on augmentative communication with human–computer interaction

Topics covered include: recognition of facial expressions and its application to human-computer interaction; a physical education system and training framework based on human–computer interaction for augmentative and alternative communication; enhancing human-computer interaction through artificial intelligence, machine learning, and data mining; and applications of human-computer interaction for improving ERP usability in education systems.


Human-Computer Interaction (HCI)

What is human-computer interaction (HCI)?

Human-computer interaction (HCI) is a multidisciplinary field of study focusing on the design of computer technology and, in particular, the interaction between humans (the users) and computers. While initially concerned with computers, HCI has since expanded to cover almost all forms of information technology design.


Here, Professor Alan Dix explains the roots of HCI and which areas are particularly important to it.

The Meteoric Rise of HCI

HCI surfaced in the 1980s with the advent of personal computing, just as machines such as the Apple Macintosh, IBM PC 5150 and Commodore 64 started turning up in homes and offices in society-changing numbers. For the first time, sophisticated electronic systems were available to general consumers for uses such as word processors, games units and accounting aids. Consequently, as computers were no longer room-sized, expensive tools exclusively built for experts in specialized environments, the need to create human-computer interaction that was also easy and efficient for less experienced users became increasingly vital. From its origins, HCI would expand to incorporate multiple disciplines, such as computer science, cognitive science and human-factors engineering.


HCI soon became the subject of intense academic investigation. Those who studied and worked in HCI saw it as a crucial instrument to popularize the idea that the interaction between a computer and the user should resemble a human-to-human, open-ended dialogue. Initially, HCI researchers focused on improving the usability of desktop computers (i.e., practitioners concentrated on how easy computers are to learn and use). However, with the rise of technologies such as the Internet and the smartphone, computer use would increasingly move away from the desktop to embrace the mobile world. Also, HCI has steadily encompassed more fields:

“…it no longer makes sense to regard HCI as a specialty of computer science; HCI has grown to be broader, larger and much more diverse than computer science itself. HCI expanded from its initial focus on individual and generic user behavior to include social and organizational computing, accessibility for the elderly, the cognitively and physically impaired, and for all people, and for the widest possible spectrum of human experiences and activities. It expanded from desktop office applications to include games, learning and education, commerce, health and medical applications, emergency planning and response, and systems to support collaboration and community. It expanded from early graphical user interfaces to include myriad interaction techniques and devices, multi-modal interactions, tool support for model-based user interface specification, and a host of emerging ubiquitous, handheld and context-aware interactions.” — John M. Carroll, author and a founder of the field of human-computer interaction.

The UX Value of HCI and Its Related Realms

HCI is a broad field which overlaps with areas such as user-centered design (UCD), user interface (UI) design and user experience (UX) design. In many ways, HCI was the forerunner to UX design.


Despite that, some differences remain between HCI and UX design. Practitioners of HCI tend to be more academically focused. They're involved in scientific research and developing empirical understandings of users. Conversely, UX designers are almost invariably industry-focused and involved in building products or services—e.g., smartphone apps and websites. Regardless of this divide, the practical considerations for products that we as UX professionals concern ourselves with have direct links to the findings of HCI specialists about users’ mindsets. With the broader span of topics that HCI covers, UX designers have a wealth of resources to draw from, although much research remains suited to academic audiences. Those of us who are designers also lack the luxury of time which HCI specialists typically enjoy. So, we must stretch beyond our industry-dictated constraints to access these more academic findings. When you do that well, you can leverage key insights into achieving the best designs for your users. By “collaborating” in this way with the HCI world, designers can drive impactful changes in the market and society.

Learn More about Human-Computer Interaction

The Interaction Design Foundation’s encyclopedia chapter on Human-Computer Interaction, by John M. Carroll, a founder of HCI, is an ideal source for gaining a solid understanding of HCI as a field of study.

Keep up to date with the latest developments in HCI at the international society for HCI, SIGCHI.

Learn the tools of HCI with our courses on HCI, taught by Professor Alan Dix, author of one of the most well-known textbooks on HCI:

Human-Computer Interaction: The Foundations of UX Design

Perception and Memory in HCI and UX

Design for Thought and Emotion

Questions related to Human-Computer Interaction (HCI)

Cognition in human-computer interaction includes the mental processes occurring between humans and computers. This encompasses perceiving inputs from the computer, processing them in the brain, and producing outputs like physical actions, speech, and facial expressions. 

The video above looks at cognition as a continuous input-output loop that runs from action, through perception (input through our senses), to cognition (mental processing), and back to action (the output). Although one might think of this process as starting with perception, and perceptions do often trigger actions, at their core humans and animals are focused on performing activities in the world, and perception serves those activities. This understanding is crucial for the design of effective digital interactions.

Design in human-computer interaction, as discussed in the video, is about achieving goals within constraints. It involves understanding the purpose or goal, like enjoyment or work efficiency, and navigating the constraints, such as medium, platform, time, and money, to achieve that purpose. 

It is essential to understand the materials, both digital and human, and to make trade-offs between different goals and constraints. Ultimately, the central message is that the user is at the heart of what you do as a designer. Understanding the users and the technology you work with is crucial for successful design.

Ergonomics in Human-Computer Interaction (HCI) refers to the design and implementation of interfaces that ensure user comfort, efficiency, and effectiveness. In this video, HCI expert Prof Alan Dix discusses touch and haptics in user interfaces, highlighting the importance of ergonomics in device design.


For example, mobile phones and cars use haptic feedback to provide users with intuitive and engaging experiences. However, poorly implemented haptic feedback can confuse users. This underscores the importance of ergonomics in HCI to ensure that interfaces are user-friendly, intuitive, and do not cause strain or discomfort, ultimately enhancing the user's overall experience with a device or application.

Human-Computer Interaction (HCI) is crucial due to its direct impact on the user experience. 

As highlighted in the video, the shift towards service orientation, prompted by the internet and digital goods, has made usability and user experience increasingly important. Users now have multiple choice points and can easily swap services if they are not satisfied, which underscores the criticality of user experience. Prof Alan Dix uses the analogy of Maslow’s hierarchy of needs in the context of user interfaces, stating that once the basic needs of functionality and usability are addressed, user experience becomes the key differentiator. 

User experience is the factor that will make someone choose your product over another. Therefore, optimizing the HCI is paramount to ensure the success and competitiveness of a product or service.

HCI does not require any knowledge of coding. While coding can be a part of the design process and implementation, it is not necessary for understanding and applying the principles of human-computer interaction.

The first computer, as we know it today, was invented in the 1950s. At that time, computers were room-sized and cost millions of dollars or pounds or euros in current terms. Thomas Watson of IBM famously mispredicted that the world would only ever need about five computers, reflecting the sentiment of the time. Over the decades, the cost and size of computers have drastically reduced, making them accessible to the general public. By the mid-70s, the first personal computers were coming through, and today, the total number of computers and smartphones exceeds the number of people in the world. 

For a detailed evolution of computer technology, watch the video below:


If you are looking to study Human-Computer Interaction (HCI), the Interaction Design Foundation (IxDF) is the most authoritative online learning platform. IxDF offers three comprehensive online HCI courses:

HCI: Foundations of UX Design: This course provides a solid foundation in HCI principles and how they apply to UX design.

HCI: Design for Thought and Emotion: Unlock the secrets of the human mind and learn how to apply these insights to your work.

HCI: Perception and Memory: Learn about the role of perception and memory in HCI and how to design interfaces that align with human cognitive capabilities.

Enroll in these courses to enhance your HCI knowledge and skills from the comfort of your home.



Learn more about Human-Computer Interaction (HCI)

Take a deep dive into Human-Computer Interaction (HCI) with our course Human-Computer Interaction: The Foundations of UX Design.

Interactions between products/designs/services on one side and humans on the other should be as intuitive as conversations between two humans—and yet many products and services fail to achieve this. So, what do you need to know so as to create an intuitive user experience? Human psychology? Human-centered design? Specialized design processes? The answer is, of course, all of the above, and this course will cover them all.

Human-Computer Interaction (HCI) will give you the skills to properly understand, and design, the relationship between the “humans”, on one side, and the “computers” (websites, apps, products, services, etc.), on the other side. With these skills, you will be able to build products that work more efficiently and therefore sell better. In fact, the Bureau of Labor Statistics predicts that IT and design-related occupations will grow by 12% from 2014 to 2024, faster than the average for all occupations. This goes to show the immense demand in the market for professionals equipped with the right design skills.

Whether you are a newcomer to the subject of HCI or a professional, by the end of the course you will have learned how to implement user-centered design for the best possible results.

In the “Build Your Portfolio: Interaction Design Project”, you’ll find a series of practical exercises that will give you first-hand experience of the methods we’ll cover. If you want to complete these optional exercises, you’ll create a series of case studies for your portfolio which you can show your future employer or freelance customers.

This in-depth, video-based course is created with the amazing Alan Dix, co-author of the internationally best-selling textbook Human-Computer Interaction and a superstar in the field of Human-Computer Interaction. Alan is currently a professor and Director of the Computational Foundry at Swansea University.

All open-source articles on Human-Computer Interaction (HCI)

  • Human-Computer Interaction - Brief Intro
  • Interaction Design - Brief Intro
  • Data Visualization for Human Perception
  • Design Iteration Brings Powerful Results. So, Do It Again, Designer
  • Usability Evaluation
  • Affordances
  • Visual Representation
  • Disruptive Innovation
  • Contextual Design
  • How to Use Mental Models in UX Design
  • Visual Aesthetics
  • Activity Theory
  • Wearable Computing
  • Card Sorting
  • 3D User Interfaces
  • End-User Development
  • Context-Aware Computing
  • Social Computing
  • Human-Robot Interaction

Open Access—Link to Us

We believe in Open Access and the democratization of knowledge. Unfortunately, world-class educational materials such as this page are normally hidden behind paywalls or in expensive textbooks.

If you want this to change, cite this page, link to us, or join us to help us democratize design knowledge!



Research Areas: Human-Computer Interaction

Human-Computer Interaction (HCI) is a rapidly expanding area of research and development that has transformed the way we use computers over the last thirty years. Research topics and areas include augmented reality, collective action, computer-mediated communication, computer-supported collaborative work, crowdsourcing and social computing, cyberlearning and future learning technologies, inclusive technologies and accessibility, interactive audio, mixed-initiative systems, mobile interaction design, multi-touch interaction, social media, social networks, tangible user interfaces, ubiquitous computing, and user-centered design.

Northwestern hosts a vibrant HCI community across schools, with faculty and students involved in a wide range of projects. Students in HCI are enrolled in programs in Computer Science, Communication, Learning Sciences, and Technology & Social Behavior. Students also take courses and attend seminars through the Segal Design Institute.

Faculty

  • Nabil Alshurafa, Associate Professor of Preventive Medicine and (by courtesy) Computer Science and Electrical and Computer Engineering
  • Sruti Bhagavatula, Assistant Professor of Instruction
  • Larry Birnbaum, Professor of Computer Science
  • Jeremy Birnholtz, Associate Professor, Communication Studies; Associate Professor, Department of Computer Science
  • Nick Diakopoulos, Assistant Professor, Northwestern School of Communications
  • Elizabeth Gerber, Professor of Mechanical Engineering and (by courtesy) Computer Science; Professor of Communication Studies; Co-Director, Center for Human Computer Interaction + Design
  • Darren Gergle, Professor, Communication Studies and (by courtesy) Computer Science
  • Kristian Hammond, Bill and Cathy Osborn Professor of Computer Science; Director, Master of Science in Artificial Intelligence Program; Director, Center for Advancing Safety of Machine Intelligence (CASMI)
  • Michael Horn, Professor of Education and Social Policy
  • Ian Horswill, Associate Professor of Computer Science
  • Jessica Hullman, Ginni Rometty Professor
  • Matthew Kay, Associate Professor of Communication Studies
  • Eleanor O'Rourke, Assistant Professor of Computer Science; Assistant Professor of Education and Social Policy
  • Bryan Pardo
  • Sarah Van Wart, Adjunct Assistant Professor
  • Uri Wilensky, Lorraine Morton Professor
  • Marcelo Worsley, Karr Family Associate Professor of Computer Science; Associate Professor of Learning Sciences, School of Education and Social Policy
  • Haoqi Zhang



HCII Summer Undergraduate Research Projects

Project #1 A New Bridge to the Digital Economy: Integrated AI-Augmented Learning and Collaboration

Mentor: Carolyn Rose, faculty

Description: This three-year NSF Future of Work (FOW) project, which started in October of 2022, seeks to address shortages of IT workers, while creating cost-effective, accessible pathways to living wage digital economy jobs for workers who previously lacked those opportunities. The tools and knowledge created by the project could eventually be applied to other STEM-focused community college degree programs across the nation, potentially impacting the lives of millions. This interdisciplinary project offers numerous opportunities to embed undergraduate research experiences related to advances in AI to enable learning interventions, design of interventions motivated by learning sciences principles, or development of extensions to an AI-augmented learning platform. The project will tackle five research questions: First, how can the Knowledge-Learning-Instruction (KLI) framework developed by learning scientists be used to align knowledge components in community college IT courses with the most effective AI-driven educational technologies to enhance and accelerate learning of those components? Second, to what extent do intelligent tutoring systems (ITS) and computer-supported collaborative learning (CSCL) experiences increase mastery and decrease the time needed to achieve it? Third, to what extent and in what ways do forms of example-based learning, used together with ITS and CSCL, further support learning and enable a wider range of learners to succeed? Fourth, to what extent can CSCL technology foster effective collaboration between community college students in 2-year information technology degree programs and the professional staff of partner firms on real-world (cloud computing) problems in the context of capstone projects and internships? Fifth, how successfully do students in the AI-augmented curricular pathway created by this project move into IT jobs relative to students in the standard course pathways?

Nature of Student Involvement: The two REU interns will work closely with McLaren, Rosé, and Teffera to achieve the project goals described above. The REU interns will be involved in weekly project meetings with McLaren’s or Rosé’s groups, in which the research goals of the project and progress will be discussed. The REUs will also have the opportunity to work closely and exchange experiences with many other undergraduate summer interns who annually work on human-computer interaction (HCI) projects within Carnegie Mellon’s Human Computer Interaction Institute. A variety of learning opportunities will arise within the summer program at CMU, including a poster session and research talks by CMU faculty that the REU interns are encouraged to attend.

Skills we are interested in (not all required):

  • Experience with curriculum design
  • Running user studies, programming in Python
  • Artificial intelligence/NLP/Machine Learning

Project #2 Active learning in STEM education

Mentor: Paulo Carvalho, systems scientist

Description: Mastering material in STEM classes requires students to learn and memorize large amounts of new knowledge in a short period of time. One way that has long been argued to improve such learning is by having students practice new knowledge by spacing questions over time (spaced retrieval practice). However, the evidence for the benefits of spaced retrieval practice in STEM contexts is limited. How can learning by doing be optimized in STEM classes, and what computational algorithms best capture this learning?
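
As a toy illustration of the kind of computational model the last question asks about, the sketch below simulates an exponential forgetting curve in which each retrieval strengthens memory, with longer gaps giving larger boosts; the functional form and parameter values are assumptions for illustration, not the project's model (Python):

    import math

    def recall_probability(days_since_review, stability):
        """Exponential forgetting curve: recall decays with time since the last review."""
        return math.exp(-days_since_review / stability)

    def simulate(review_days, horizon=60, stability=1.0):
        """Review on the given days; assume longer-delayed retrievals strengthen memory more."""
        probs, last_review = [], 0
        for day in range(horizon + 1):
            if day in review_days and day > 0:
                gap = day - last_review
                stability *= 1.0 + 0.5 * gap  # assumed spacing-effect rule
                last_review = day
            probs.append(recall_probability(day - last_review, stability))
        return probs

    massed = simulate({1, 2, 3})    # three back-to-back practice sessions
    spaced = simulate({2, 7, 21})   # the same number of sessions, spread out
    print(f"recall at day 60: massed={massed[60]:.2f}, spaced={spaced[60]:.2f}")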

Nature of Student Involvement: Students will be involved in all steps of experimental research, data analytics, and student modeling.

  • Quantitative data analyses
  • Experience with experimental design and data collection
  • Data science/learning analytics/student modeling experience preferred but not required

Project #3 Advancing Metamaterials by exploring novel structures, developing design tools and fabrication methods

Mentor: Alexandra Ion, faculty

Description: We are looking to push the boundaries of mechanical metamaterials by unifying material and device. Metamaterials are advanced materials that can be designed to exhibit unusual properties and complex behavior. Their function is defined by their cell structure, i.e., their geometry. Such materials can incorporate entire mechanisms, computation, or re-configurable properties within their compliant cell structure, and have applications in product design, shape-changing interfaces, prosthetics, aerospace and many more.

In this project, we will develop design tools that allow novice users and makers to design their own complex materials and fabricate them using 3D printing or laser cutting. This may involve playfully exploring new cell designs, creating novel application examples by physical prototyping and developing open source software.
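
As a toy example of what such a design tool might output for a laser cutter, the sketch below writes a rectangular array of diamond-shaped cut-outs to an SVG file; the cell shape, pitch, and sizes are placeholders for illustration, not an actual metamaterial cell design from this project (Python):

    # Toy "design tool": write a grid of diamond-shaped cut-outs as an SVG file
    # that a laser cutter could accept. All dimensions are arbitrary placeholders.

    def diamond(cx, cy, r):
        """SVG path for a diamond (rotated square) centred at (cx, cy)."""
        pts = [(cx, cy - r), (cx + r, cy), (cx, cy + r), (cx - r, cy)]
        return "M " + " L ".join(f"{x:.1f},{y:.1f}" for x, y in pts) + " Z"

    def lattice_svg(cols=10, rows=6, pitch=12.0, r=4.0):
        w, h = cols * pitch, rows * pitch
        paths = [
            f'<path d="{diamond((i + 0.5) * pitch, (j + 0.5) * pitch, r)}" '
            'fill="none" stroke="red" stroke-width="0.1"/>'
            for i in range(cols) for j in range(rows)
        ]
        return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{w}mm" height="{h}mm" '
                f'viewBox="0 0 {w} {h}">' + "".join(paths) + "</svg>")

    with open("lattice.svg", "w") as f:
        f.write(lattice_svg())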

Nature of Student Involvement: Students will be part of all stages of research and will be fully embedded within the lab. We aim to give students a good insight into the nature of research and make it a fun summer!

  • CS skills: software development, background in geometry, optimization, and/or simulation
  • 3D modeling basics (CAD tools, e.g., Autodesk Fusion 360 or similar)
  • Basic knowledge of classical mechanics or material science

Project #4 AI in the Accessible Kitchen: Supporting Blind and Visually Impaired People in Performing Activities of Daily Living

Mentor: Patrick Carrington, faculty

Description: Poor nutrition is prevalent among people with vision impairments. Studies have shown that this poor nutritional status is due to a number of factors, including social and structural issues, financial barriers, as well as independent meal preparation challenges. Previously reported aversions to cooking have led to dietary choices that include eating at restaurants over 40% of the time. The significant financial burden of these choices, combined with the aversion to cooking “from-scratch meals” leads to the alternative option of buying and cooking frozen or prepared foods which are costly, unhealthy, and calorie-rich. Challenges associated with preparing meals include sensory, procedural, and physical challenges. Our research has aimed to address this gap by developing systems that bridge the digital and physical challenges faced by vision impaired people to enable more independent meal preparation. This project would involve hardware and software prototyping as well as user studies.

Nature of Student Involvement: The student would be involved in hardware and software prototyping as well as early user tests.

Skills we are interested in (not all required): The ideal student has some experience with:

  • Sensors 
  • UI design/development
  • Qualitative methods/data analysis

Project #5 AI Privacy

Mentor: Sauvik Das, faculty

Description: This project focuses on advancing our vision of "Privacy through Design" in the development of AI products and services. REUs will collaborate with PIs Das and Forlizzi, as well as their Ph.D. students. The project entails co-designing materials aimed at helping AI practitioners prioritize privacy in consumer-facing AI products. It builds upon two key research efforts: interviews with 35 industry AI professionals to understand their privacy practices, and the development of a taxonomy of consumer AI privacy harms based on AI incidents and failures. The participating undergraduates will have the opportunity to brainstorm, create, and evaluate tools and methods designed to mitigate privacy risks in AI product design, contributing directly to the evolving landscape of AI and privacy.

Nature of Student Involvement: Students will be involved in ideating, creating and/or evaluating tools and resources to help AI practitioners mitigate privacy risks.

  • Programming experience to build prototype systems
  • Experience with qualitative methods (e.g., interviews)
  • Prototyping systems with LLMs

Project #6 AI-CARING: Agents, Care Coordination, Trust and Affiliation

Mentor: Mai Lee Chang, postdoctoral fellow

Description: AI-CARING is an NSF AI Institute that is committed to both doing foundational AI research and developing technology that is useful and beneficial for society. The overall project focuses on developing AI systems that help older adults, including those who experience cognitive decline, continue living in their homes longer.

Our thrust focuses on how AI systems can learn the structures and forms of people's interpersonal relationships and how agents can provide support for tasks of daily living that support people's performance of self and reinforce their close familial bonds, friendships, and relationships with professionals like their doctors and other care providers.

Over the summer, we want to explore what an agent will need to learn about older adults’ and their informal caregivers’ (e.g., spouses, adult children, neighbors) day-to-day activities coordination (e.g., getting to doctor's appointments, getting food, arranging services, picking up meds) in order to provide support that is robust to uncertainty and changes in goals, care network structure, and environment. This work will capture the practices, priorities, values, triggers, and breakdowns in coordination. We are also interested in identifying critical moments or triggers that lead to changes in trust and affiliation (i.e., who the agent works for) when an agent or robot interacts with an older adult and their surroundings to design more trustworthy agents/robots for successful long-term support.

Nature of Student Involvement: Student research assistants will aid researchers in designing and executing studies to understand the needs and challenges that older adults and their informal caregivers face when making care coordination plans and how they adapt to changes. This will take a retrospective approach, interviewing stakeholders and reviewing their communication logs, calendars, and other coordination artifacts to reconstruct how they accomplished various care tasks. Student researchers will also conduct a field study to discover triggers of changes in trust and affiliation and to understand what information an AI agent or robot needs in order to learn about those changes.

  • Fieldwork including observations, interviews, and directed storytelling
  • Brainstorming
  • Design of conversational interfaces
  • Understanding of social psychology: social interaction, trust, affiliation
  • Design and execution of user studies

Project #7 Chemistry Tutor Machine Learning Programmer

Mentor: Bruce McLaren, faculty

Description: Learn about intelligent tutoring systems and how to apply machine learning to them! You will be responsible for the development of machine learned detectors designed to identify specific student behaviors for the Stoich Tutor ( https://stoichtutor.andrew.cmu.edu/ ). You will first complete work on the development of a detector for “gaming the system” and then move on to additional detectors. You will work with the research team that is investigating the link between behavioral, cognitive, and affective aspects of students and their engagement with the Stoich Tutor. You should have a computer science background with skills in HTML5/CSS3/JavaScript, Python, and an optional background in chemistry and familiarity or interest in machine learning. You will work with Prof. Bruce McLaren and Research Programmers Hayden Stec and Leah Teffera, with Stec and Teffera as the primary mentors.
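
The detector design itself is not specified above; a common approach in the literature is to compute features over short windows of tutor log data and train a simple classifier on hand-labeled windows. The sketch below illustrates that pattern with made-up features and labels; the feature set, window length, and use of scikit-learn are assumptions for illustration, not the Stoich Tutor's detectors (Python):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-window features derived from tutor log data:
    # [attempts in last 20 s, hint requests in last 20 s, mean pause between actions (s)]
    X = np.array([
        [6, 3, 0.8],   # fast guessing plus heavy hint use: likely gaming
        [1, 0, 9.0],   # slow, deliberate work
        [5, 2, 1.1],
        [2, 1, 6.5],
        [7, 4, 0.6],
        [1, 0, 12.0],
    ])
    y = np.array([1, 0, 1, 0, 1, 0])   # 1 = window hand-labeled as "gaming the system"

    detector = LogisticRegression().fit(X, y)
    new_window = np.array([[4, 3, 1.0]])
    print(detector.predict_proba(new_window)[0, 1])   # probability the new window is gaming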

Nature of Student Involvement: Programming and using machine learning to develop detectors of student behavior

  • Computer Science background 
  • Skills in HTML5/CSS3/JavaScript 
  • Python is required 
  • Background in chemistry and familiarity or interest in machine learning are optional

Project #8 Chemistry Tutor Programmer and Data Scientist

Description: Learn about intelligent tutoring systems and data science! You will be responsible for extending the Stoich Tutor ( https://stoichtutor.andrew.cmu.edu/ ) and its associated grading script. The specific way in which the tutor and grading script will be extended will be determined by results from a study conducted during 2023-2024. Additionally, you will analyze data from prior studies to guide extensions to the tutor as well as assist in the overall project, in which the research team is investigating the link between behavioral, cognitive, and affective aspects of students and their engagement with the tutor. You should have a computer science background with skills in HTML5/CSS3/JavaScript, Python, and an optional background in chemistry. You will work with Prof. Bruce McLaren and Research Programmers Hayden Stec and Leah Teffera, with Stec and Teffera as the primary mentors.

Nature of Student Involvement: The student will program a chemistry tutor as well as maintain and extend a grading script that assesses log data from the program.

  • A Computer Science background 
  • Skills in HTML5/CSS3/JavaScript
  • Python is required
  • A background in chemistry is optional but highly desirable.

Project #9 Cloud Administrator Intelligent Tutor Programmer

Description: In this project you will bring your knowledge of computer science and cloud computing to the task of building intelligent tutoring systems (ITSs) to help community college students learn about cloud computing. You will design and write code to develop intelligent tutoring systems that will be embedded in the SAIL Cloud Administrator course. The tutors will support local community college students in better understanding programming and computational thinking. You will learn about code repositories and good software engineering methodologies and practices. You should have a computer science background with skills in HTML5/CSS3/JavaScript, and an optional background in education (e.g. TA-ing a course) and familiarity or interest in cloud administration/computing. You will work with Prof. Bruce McLaren and Leah Teffera, with Teffera as the primary mentor.

Nature of Student Involvement: The student will develop intelligent tutors for an online Cloud Administrator course.

  • Must have a computer science background
  • Optional background in education (e.g. TA-ing a course)
  • Familiarity or interest in cloud administration/computing

Project #10 Codespec: a computer programming environment

Mentor: Carl Haynes-Magyar, Presidential Postdoctoral Fellow

Description: Despite the potential of Parsons problems, few environments offer a seamless transition between different problem types. Codespec supports learners in practicing how to solve a programming problem as a Pseudocode Parsons problem, a Parsons problem, a Faded Parsons problem, a fix-code problem, or a write-code problem. The goal of the project is to develop, evaluate, and implement algorithms for knowledge tracing and adaptive problem sequencing, and to produce publication-ready results that include learning curve analysis.
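
The knowledge-tracing algorithm is not specified above; classic Bayesian Knowledge Tracing (BKT) is one representative choice, sketched below with arbitrary parameter values rather than Codespec's actual model (Python):

    def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
        """One Bayesian Knowledge Tracing step.
        p_know: prior probability the student already knows the skill.
        correct: whether the latest attempt on the skill was correct.
        Returns the posterior probability of knowing the skill after this attempt."""
        if correct:
            evidence = p_know * (1 - p_slip) / (p_know * (1 - p_slip) + (1 - p_know) * p_guess)
        else:
            evidence = p_know * p_slip / (p_know * p_slip + (1 - p_know) * (1 - p_guess))
        # after observing the attempt, the student may also learn from it
        return evidence + (1 - evidence) * p_learn

    p = 0.3                                   # initial estimate for a Parsons-problem skill
    for outcome in [False, True, True, True]:
        p = bkt_update(p, outcome)
        print(f"P(know) = {p:.2f}")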

Nature of Student Involvement: Write code to develop back-end features for Codespec.

Skills we are interested in (not all required): 

  • Experience or interest in computing education research, tools and environments.
  • Experience with data science, learning analytics, student modeling, large language models.
  • Preferred: CS (or other technical) major, HTML/CSS/JavaScript, Python, Django, VueJS, Figma skills.

Project #11 Computational Understanding of User Interfaces

Mentor: Jeffrey Bigham, faculty

Description: The goal of our UI Understanding project is to build machine learning technologies that can learn to computationally understand and interact with user interfaces designed to be used by people. We have built technologies that understand graphical user interfaces from pixels, generate custom user interface code automatically, graphically reflow existing user interfaces to personalize them for specific abilities, and automate actions across different devices. Many of these projects target applications for people with disabilities who use user interfaces in ways other than those assumed by developers.
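
As a toy illustration of the reflow capability mentioned above (not the project's pipeline), the sketch below takes detected UI elements, however they were obtained, and re-lays them out in a single enlarged column; the element schema and scaling rule are assumptions for illustration (Python):

    from dataclasses import dataclass

    @dataclass
    class UIElement:
        role: str          # e.g. "button", "label", "text_field"
        text: str
        x: int; y: int; w: int; h: int

    def reflow_single_column(elements, column_width=360, scale=1.5, gap=12):
        """Re-lay out detected elements top-to-bottom in one enlarged column,
        preserving the original reading order (top-left to bottom-right)."""
        ordered = sorted(elements, key=lambda e: (e.y, e.x))
        out, y = [], gap
        for e in ordered:
            h = int(e.h * scale)
            out.append(UIElement(e.role, e.text, x=gap, y=y, w=column_width, h=h))
            y += h + gap
        return out

    detected = [
        UIElement("label", "Username", 40, 30, 120, 20),
        UIElement("text_field", "", 180, 30, 200, 24),
        UIElement("button", "Sign in", 180, 70, 90, 28),
    ]
    for e in reflow_single_column(detected):
        print(e)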

Nature of Student Involvement: Students will be involved in all aspects of research under the mentorship of senior graduate students and faculty, including defining project goals and scope, training or adapting large computer vision and natural language models, and writing about and presenting results. Many of our past REU students have submitted their work to peer-reviewed venues and many have gone on to PhD programs in the area.

Skills we are interested in (not all required): Prefer students with familiarity with the following broad technical areas (will also get a chance to learn more during the REU!)

  • Experience training/fine-tuning modern machine learning pipelines (computer vision, language, multimodal)
  • Experience with one or more UI and interaction toolkits (e.g., SwiftUI, React, others)
  • Familiarity with LLM APIs and best practices (e.g., GPT-4, Claude, etc.)
  • Knowledge of human-AI interaction, designing for ML systems, human-centered AI, etc.

Project #12 Designing algorithms for adaptive Extended Reality (XR)

Mentor: David Lindlbauer, faculty

Description: Extended Reality (XR) interfaces allow users to interact with the digital world anywhere and anytime. By embedding interfaces directly into users' environments, XR interfaces can both enhance productivity and be less distracting than traditional computing devices such as smartphones.

In this research, we aim to design algorithms that create context-aware XR interfaces, i.e., interfaces that automatically adjust when, where, and how to display XR interfaces. These algorithms can be optimization-based or learning-based, and form the backbone of XR systems that continuously adapt to users' needs and requirements when they interact with XR systems in a wide range of applications, from productivity, entertainment, manufacturing, or healthcare.

The research involves developing a novel algorithm for adaptive XR, and evaluating the approach in a comparative user study.
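
A minimal sketch of the optimization-based flavor of such an algorithm: score candidate placements of a virtual panel with a weighted cost (occlusion of the task area plus distance from gaze) and keep the best one. The cost terms, weights, and 2-D field-of-view proxy are illustrative assumptions, not the lab's algorithm (Python):

    import itertools

    def placement_cost(panel, task_region, gaze, w_occlude=5.0, w_reach=1.0):
        """Lower is better: penalize overlapping the task region and being far from gaze."""
        px, py, pw, ph = panel
        tx, ty, tw, th = task_region
        overlap_w = max(0, min(px + pw, tx + tw) - max(px, tx))
        overlap_h = max(0, min(py + ph, ty + th) - max(py, ty))
        occlusion = overlap_w * overlap_h
        gx, gy = gaze
        distance = ((px + pw / 2 - gx) ** 2 + (py + ph / 2 - gy) ** 2) ** 0.5
        return w_occlude * occlusion + w_reach * distance

    # candidate anchor positions on a 2-D proxy of the user's field of view
    candidates = [(x, y, 30, 20) for x, y in itertools.product(range(0, 100, 10), range(0, 60, 10))]
    task_region = (35, 20, 30, 20)      # area the user is currently working in
    gaze = (50, 30)

    best = min(candidates, key=lambda p: placement_cost(p, task_region, gaze))
    print("place panel at", best)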

Nature of Student Involvement: Students will collaborate with other undergraduate and graduate students in creating the concept and planning of the research, and lead the implementation of the algorithm and the creation of the user study platform.

  • Strong programming skills (C# or similar)
  • Experience with Unity or Unreal
  • Interest in XR

Project #13 Designing for workers’ experiences of health & wellbeing

Mentor: Franchesca Spektor, graduate student advised by faculty Sarah Fox and Jodi Forlizzi

Description: Low wage workers are at increased risk for injury and disablement while on the job. However, traditional ways of understanding, tracking, and reporting occupational injury may be insufficient. While formal reporting requirements from OSHA may address a torn ACL, regulatory bodies provide few avenues for reporting on the repetitive chronic strain which results in injuries over time. This project aims to conduct participatory design research with local service workers in the Pittsburgh region – from custodial staff to home care workers – to learn if new tools and technologies may help bridge the gap between health & safety policies and workers’ needs on the ground. We will explore training needs, threat of retaliation, administrative barriers to reporting, and health data governance.

Nature of Student Involvement: The REU students will be closely involved in the project, focusing specifically on understanding the health & safety needs of a local service context. The students will be responsible for conducting a literature review and may have the chance to:

  • Conduct interviews, diary studies, and workshops with local workers
  • Contribute to data analysis using thematic analysis
  • Develop design prototypes to support health & safety reporting and worker wellbeing

Skills we are interested in:

  • Care and curiosity about worker wellbeing, workplace technologies, and labor issues
  • Programming and technical prototyping skills for web and mobile applications
  • Experience with user research, design thinking, and UX
  • Prior experience reading academic literature and conducting literature reviews

Project #14 Designing Inclusive Collaboration Environments

Mentor: Laura Dabbish , faculty 

Description: Open source software is important to sustaining the world’s infrastructure, and millions of volunteers help maintain it. However, growing evidence shows that people of different genders, particularly women, face distinctive barriers when contributing to open source software. Our research interviews people of diverse genders who have made significant open source contributions to understand how they became highly involved in open source, the barriers they face, and how they overcome them. We will also perform statistical analysis on GitHub trace data to understand the extent to which our findings generalize, and the wider effects of the barriers we uncover. Finally, we are exploring interventions for enhancing inclusion in open collaboration environments.

Nature of Student Involvement: Students on this project will help us develop web based interventions for encouraging inclusive open source project collaboration environments, carry out interviews with open source contributors, and perform statistical analysis using data science on GitHub trace data.

  • Front end web programming and UX design skills would be helpful for this project
  • Strong organizational and interpersonal skills are important, other skills can be learned
  • Any of the following skills helpful: experience conducting interviews, experience with data science pipelines (e.g., using Python, SQL, or R)
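
As a rough illustration of the trace-data analysis described above, the sketch below compares pull-request acceptance rates across contributor groups and tests whether the difference is statistically significant; the file name and column names are hypothetical, not the group's actual data.

    # Rough sketch of trace-data analysis: compare pull request acceptance rates across
    # contributor groups and test whether the difference is statistically significant.
    # "gh_pull_requests.csv" and its columns (contributor_gender, merged) are hypothetical.
    import pandas as pd
    from scipy.stats import chi2_contingency

    prs = pd.read_csv("gh_pull_requests.csv")                      # one row per pull request
    table = pd.crosstab(prs["contributor_gender"], prs["merged"])  # counts of merged vs. not

    acceptance_rate = table.div(table.sum(axis=1), axis=0)
    print(acceptance_rate)                                         # proportion merged per group

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")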

Project #15 Developing Novel Interfaces for Live Streaming

Mentor: Noor Hammad , graduate student advised by Jessica Hammer

Description: This project explores how to build live streaming interfaces that afford new capabilities to viewers, streamers, and game developers. We will use a system architecture built by the Center for Transformational Play that enables the creation of “game-aware” overlays for any Unity game streamed on the Twitch platform. You will be working closely with an interdisciplinary team to improve the system, add new features, and provide technical support for research studies on Twitch.

Nature of Student Involvement: The student will be collaborating with the research team on activities such as weekly meetings, low-fidelity prototyping, study design, and pilot tests. They will be expected to complete software development tasks independently.

Skills we are interested in: 

  • Web development, particularly advanced JavaScript
  • Interest in live streaming and/or games
  • Prepared to use good collaborative software development practices (e.g. documentation, Git)
  • Some experience with Unity game development
  • Experience working with AWS or other cloud services.

Project #16 Digital Learning Game Programmer

Mentor: Bruce McLaren , faculty 

Description: Decimal Point and Ocean Adventure are learning games developed in the McLearn Lab at CMU to help late elementary and middle school students learn about decimals and decimal operations. For this project, you will write and revise code to alter and extend the two games to prepare them for new classroom studies. You will learn about code repositories, state-of-the-art software engineering methodology, and good software practices. You should have a computer science background with skills in HTML5/CSS3/JavaScript; a background in mathematics or education (e.g., TA-ing a class) is optional but valuable. You will work with Prof. Bruce McLaren and Research Programmer Hayden Stec, with Stec as the primary mentor.

Nature of Student Involvement: The student will be involved in design, development, testing, and revising of code for digital learning games.

  • HTML5/CSS3/JavaScript
  • optional background in mathematics or education (e.g. TA-ing a class)

Project #17 Digital Learning Games Quality Assurance Programmer

Description: Decimal Point and Ocean Adventure are learning games developed in the McLearn Lab at CMU to help late elementary and middle school students learn about decimals and decimal operations, while Angle Jungle is a learning game to help middle school students learn about angles. For this position you will perform software quality assurance engineering to improve all of the McLearn Lab game materials — various decimal/angle tests, surveys, and three learning games (Decimal Point, Ocean Adventure, Angle Jungle) — testing the materials on all of the devices used in schools (an Apple laptop, an iPad, and a Chromebook) and addressing any bugs that you find or that have been previously reported. You will learn about code repositories, state-of-the-art software engineering and quality assurance methodology, and good software practices. You should have a computer science background with skills in HTML5/CSS3/JavaScript. You will work with Prof. Bruce McLaren and Research Programmer Hayden Stec, with Stec as the primary mentor.

Nature of Student Involvement: The student will be involved in design, development, testing, and revising of code for digital learning games.

  • Computer science background 
  • An interest in mathematics education is also valuable.

Project #18 Evaluate Impact of Transparency in Ride-Sharing Algorithm

Mentor: Seyun Kim , graduate student advised by faculty Motahhare Eslami and Haiyi Zhu

Description: Gig economy platforms such as Uber and Lyft rely on algorithmic decision-making that is black-box and lacks transparency for users of the system, including drivers. One way to evaluate and test an algorithm’s impact and output is to conduct an algorithmic audit. For this project, we will develop an intervention to assess whether transparency in an algorithm’s decision-making process influences perceptions of equity. We will also assess what types of information users consider most impactful.

Nature of Student Involvement: The students will be responsible for building an intervention in the form of a software interface that engages gig economy workers. The software interface will be an add-on feature to an existing platform (project) with a research group at another institution. The student will be responsible for data collection and for quantitative and qualitative data analysis to understand the impact of the intervention. The data collection will involve engaging with participants in the wild.

  • Experience in software development (front end and back end) 
  • Running experiments in the real world (data collection in the wild) 
  • Statistical analysis (quantitative analysis). The student does not necessarily need to have qualitative analysis experience.
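
For the quantitative side, a first-pass analysis might resemble the sketch below, which tests whether perceived-equity ratings differ between participants who saw the transparency intervention and a control group; the file name, column names, and choice of test are assumptions for illustration.

    # Minimal sketch of the quantitative analysis: compare perceived-equity ratings between
    # participants who received the transparency intervention and a control group.
    # The CSV name and columns (condition, equity_rating) are hypothetical placeholders.
    import pandas as pd
    from scipy.stats import mannwhitneyu

    data = pd.read_csv("intervention_responses.csv")
    treated = data.loc[data["condition"] == "transparent", "equity_rating"]
    control = data.loc[data["condition"] == "control", "equity_rating"]

    print(treated.mean(), control.mean())
    stat, p_value = mannwhitneyu(treated, control, alternative="two-sided")
    print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")  # nonparametric test suits Likert-style ratings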

Project #19 Human-Centered Data Science and Visualization

Mentor: Adam Perer , faculty 

Description: The Data Interaction Group (DIG) has a mission to empower everyone to analyze and communicate data with interactive systems. Our group conducts research in computer science at the intersection of human-computer interaction, machine learning, data science, programming languages, and data management.

This summer, we plan to build new tools for data scientists to help them better understand their data, which will hopefully result in better downstream machine-learning models derived from the data.

Nature of Student Involvement: Research, coding, user studies

  • Programming experience
  • Interest in data science and machine learning
  • Web development skills
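
As a small example of the kind of 'understand your data' support the group builds tools around, the sketch below profiles a dataframe for common quality issues before any modeling; the dataset name is a placeholder and this is not one of DIG's actual tools.

    # Tiny data-profiling sketch: surface common data-quality issues (missingness, constant
    # columns, extreme outliers) that often degrade downstream models. Dataset name is a placeholder.
    import pandas as pd

    def profile(df: pd.DataFrame) -> None:
        print("Missing values per column (fraction):")
        print(df.isna().mean().sort_values(ascending=False))

        constant = [c for c in df.columns if df[c].nunique(dropna=True) <= 1]
        print("Constant columns:", constant)

        numeric = df.select_dtypes("number")
        z = (numeric - numeric.mean()) / numeric.std()
        print("Rows with any |z| > 4 (possible outliers):", int((z.abs() > 4).any(axis=1).sum()))

    if __name__ == "__main__":
        profile(pd.read_csv("training_data.csv"))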

Project #20 Mixed-reality AI STEM Learning

Mentor: Nesra Yannier , senior systems scientist 

Description: This project focuses on developing a mixed-reality educational system and Intelligent Science Stations bridging physical and virtual worlds to improve children's STEM learning and enjoyment in a collaborative way. It uses depth camera sensing and computer vision to detect physical objects and provide personalized immediate feedback to children as they experiment and make discoveries in their physical environment. NoRILLA Intelligent Science Stations ( www.norilla.org ) are being used in many school districts, after school programs, children's museums and science centers (e.g., Carnegie Science Center, Children's Museum of Atlanta, Please Touch Museum, CaixaForum AI Museum in Spain). Research with hundreds of children has shown that it improves children's learning by 5 times compared to equivalent tablet or computer games. This REU project will focus on extending Intelligent Science Stations to different content areas, creating new modules and curriculum, designing new games and interfaces as well as collecting and analyzing data in schools and museums of children interacting with Intelligent Science Stations and Exhibits.

Nature of Student Involvement: The student will work closely with the project lead and will be involved in different aspects of the project including design, development and research. The student will help take the project further by developing new modules/games, computer vision algorithms/tools and AI enhancements on the platform and deployment of upcoming installations.

  • Familiarity with software and hardware components
  • Familiarity with computer vision, interface development, Java/Processing, robotics and/or game design is a plus
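
To give a flavor of the depth-camera sensing involved, the sketch below segments objects raised above a table plane in a single depth frame and reports their bounding boxes. The synthetic frame, thresholds, and units are assumptions for illustration; this is not NoRILLA's actual vision pipeline.

    # Simplified sketch of depth-based object detection: anything closer to the camera than the
    # table plane by a margin is treated as a physical object, then connected blobs are located.
    # The synthetic depth frame below stands in for a real depth-camera capture (values in mm).
    import numpy as np
    import cv2

    def detect_objects(depth_mm: np.ndarray, table_depth_mm: float, margin_mm: float = 30):
        """Return (x, y, w, h) bounding boxes of blobs raised above the table plane."""
        mask = ((table_depth_mm - depth_mm) > margin_mm).astype(np.uint8) * 255
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # remove speckle
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]

    if __name__ == "__main__":
        frame = np.full((240, 320), 800, dtype=np.float32)  # flat table ~800 mm from the camera
        frame[100:140, 150:200] = 730                       # a block sitting on the table
        print(detect_objects(frame, table_depth_mm=800))    # one bounding box around the block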

Project #21 Scrolling Technique Library

Mentor: Brad Myers , faculty and HCII Director

Description: We have developed a new way to test how well a scrolling technique works, and we need to re-implement some older techniques to see how they compare. For example, the original Macintosh scrollbars from 1984 had arrows at the top and bottom, and a draggable indicator in the middle. Even earlier scrollbars worked entirely differently. I am hoping to recruit one good programmer to help recreate some old scrolling techniques, and possibly try out some brand-new ones, such as techniques for Virtual Reality applications, to test how well they do compared to regular scrolling techniques like two fingers on a touchpad or smartphone screen. If there is time, the project will include running user tests on the implemented techniques and writing a CHI paper based in part on the results.

Nature of Student Involvement: The student on this project will be implementing all of the techniques as web applications.

  • The student on this project must be an excellent programmer in JavaScript or TypeScript
  • Preferably with expertise in React or another web framework 
  • Experience with running user studies would be a plus.

Project #22 Studying Novel Live Streaming Interfaces

Description: This project explores how people use live streaming interfaces to support shared attention in streamed experiences, including entertainment games and educational content. As part of this work, we will use novel live streaming “game-aware” interfaces developed by the Center for Transformational Play that allow viewers to customize their live streaming experience. You will be working closely with an interdisciplinary team to develop and execute studies into how viewers make use of these interfaces to support their streaming experiences.

Nature of Student Involvement: This student will collaborate with the research team on activities such as weekly meetings, participant recruitment, data collection, and data analysis. They will be expected to complete problem solving and logistical tasks independently.

Skills we are interested in:

  • Comfortable with independent work
  • Interest in mixed methods research
  • Good organizational skills
  • Experience with mixed research methods

Project #23 Supporting Designers in Learning to Co-create with AI for Complex Computational Design Tasks

Mentor: Nikolas Martelaro , faculty 

Description: Advancements in generative AI (GenAI) are rapidly disrupting creative professionals' work across a range of domains. To ensure that GenAI benefits creative professionals, rather than devaluing their labor, it is critical that we prepare the workforce to work with these technologies to effectively leverage their comparative advantages as humans. However, recent studies indicate that creative professionals face significant challenges in adopting GenAI successfully into their workflows. In this project, we will explore novel interactive interfaces and interaction patterns that allow professional designers to work more effectively with GenAI tools across different domains. First, we will iteratively build interactive prototypes and then evaluate their effectiveness through user studies. The results of this work will contribute to advancing future AI-augmented creative work.

Nature of Student Involvement: Attending weekly research meetings, supporting prototyping, and supporting preparation and facilitation of user studies

  • Programming
  • User research

Project #24 Supporting End-users in Auditing Harmful Algorithmic Behaviors in Generative AI

Mentor: Wesley Deng , graduate student advised by faculty Motahhare Eslami and Ken Holstein

Description: Despite impressive capabilities, text-to-image (TTI) generative AI also carries risks of biases and discrimination. Traditional algorithm audits done by small groups of AI experts often miss harmful biases due to the AI team's cultural blind spots and the difficulty of predicting the range of ways TTI systems will be used once deployed widely. Recent works from our research group have highlighted the effectiveness of end users in detecting biases overlooked by experts, demonstrating the value of engaging end users in algorithmic audits. Despite this potential, there is a lack of structured support and tools for public participation in auditing generative AI. In this project, you will build upon an existing interface ( https://taiga.weaudit.org/ ) we developed to further design, develop, and evaluate new tools and mechanisms to better support end users in auditing and red teaming Stable Diffusion, an open source TTI model. Overall, this project aims to empower end users in the auditing process and enhance public involvement in creating a responsible and ethical generative AI landscape.

Nature of Student Involvement: Research assistants (RAs) will work closely with the research team and will be involved in the design, development, and evaluation of the system. This means the RA will get exposure to project ideation, rapid prototyping, front end web development (using Javascript/HTML/CSS/other web technologies), and conducting user evaluations.

  • Experience in software development, and in particular the ability to learn new technologies
  • Experience with web technologies such as JavaScript is preferred
  • Experience in designing and conducting user studies evaluating interactive systems is preferred
  • Familiarity with text-to-image generative AI (DALL-E, Stable Diffusion) is encouraged
  • Interests in exploring topics such as Responsible AI, Human-AI Interaction, Algorithmic Fairness and Transparency
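
One simple building block for this kind of audit is generating images for systematically varied prompts so auditors can compare outputs side by side. The sketch below uses the Hugging Face diffusers library with a Stable Diffusion checkpoint; the model ID, prompt template, and audit dimensions are placeholders, and the actual WeAudit tooling is considerably more involved.

    # Minimal sketch of a prompt-variation audit: generate images for systematically varied
    # prompts with Stable Diffusion so auditors can compare outputs for stereotyped differences.
    # Requires the `diffusers` and `torch` packages and a GPU; model ID and prompts are placeholders.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    template = "a portrait photo of a {} software engineer at work"
    variations = ["", "female", "male", "older", "young"]  # placeholder audit dimensions

    for variant in variations:
        prompt = template.format(variant).replace("  ", " ")
        image = pipe(prompt, num_inference_steps=30).images[0]
        image.save(f"audit_{variant or 'baseline'}.png")    # compare outputs across variants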

Project #25 Supporting middle school math homework with novel parent support tools

Mentor: Conrad Borchers , graduate student advised by Vincent Aleven and Ken Koedinger

Description: This project aims to design, implement, and test AI-based support tools that provide tailored recommendations to parents for how they might support their children's homework. The student will engage in high-fidelity prototyping and design research, studying parental engagement and student learning during interactions with the tool. The research output will contribute new scientific understanding of how cognitive and socio-emotional support can be merged productively. Prior work has identified that differences in parent styles during homework support relate to achievement gaps. Therefore, one potential research question is how to design interactive systems powered by AI that help parents and students adopt more favorable attitudes and approaches to homework.

The research extends prior project activities in the context of a grant on smart middle school mathematics homework support. We have conducted prototyping sessions with several students and their parents, identifying key needs for more effective and equitable homework support. We are also planning to pilot an initial prototype of the tool in late November. Therefore, the REU intern will contribute to design research activities at a stage of high fidelity.

Nature of Student Involvement: The REU intern will be tasked with conducting in-depth interview sessions and interactive usability studies with parents and children to refine the tool's design, including potential involvement in programming.

  • Required: Demonstrated track record of conducting high-fidelity prototyping, usability testing, and user experience research, ideally in close collaboration with software engineers
  • Desirable but not required: Experience in deploying machine learning applications in a web application, including experience with frontend engineering; experience with learning technologies

Project #26 Supporting Upper Extremity Health Monitoring and Management for Wheelchair Users

Mentor: Patrick Carrington , faculty

Description: Upper extremity (UE) health issues are a common concern among wheelchair users and have a large impact on their independence, social participation, and quality of life. However, despite the well-documented prevalence and negative impacts, these issues remain unresolved. Existing solutions (e.g. surgical repair, conservative treatments) often fail to promote sustained UE health improvement in wheelchair users’ day-to-day lives. In this project, we explore how health tracking technologies could support wheelchair users’ UE health self-care, including movement sensing, modeling body mechanics, and developing appropriate user interfaces for feedback and data analysis.

Nature of Student Involvement: The student will be involved in development, prototyping, and/or user studies.

Skills we are interested in (not all required): Candidates should ideally have programming experience; some experience with machine learning is helpful, and experience with hardware, specifically IMUs, is also a plus.
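
To illustrate the movement-sensing component, the sketch below counts candidate push strokes from wrist-worn accelerometer data by detecting peaks in the acceleration magnitude; the file name, column names, sampling rate, and thresholds are assumptions rather than project specifics.

    # Simple sketch of IMU-based movement sensing: estimate candidate push strokes from
    # accelerometer data by finding peaks in the acceleration magnitude.
    # The CSV name, column names, sampling rate, and thresholds are illustrative assumptions.
    import numpy as np
    import pandas as pd
    from scipy.signal import find_peaks

    FS_HZ = 50  # assumed IMU sampling rate

    imu = pd.read_csv("wrist_imu.csv")  # columns: ax, ay, az (acceleration in g)
    magnitude = np.sqrt(imu["ax"]**2 + imu["ay"]**2 + imu["az"]**2)
    magnitude = magnitude - magnitude.rolling(FS_HZ, min_periods=1).mean()  # remove gravity/drift

    # A push stroke should appear as a distinct peak; enforce a minimum spacing of 0.5 s.
    peaks, _ = find_peaks(magnitude.to_numpy(), height=0.3, distance=FS_HZ // 2)
    print(f"Estimated pushes: {len(peaks)} over {len(imu) / FS_HZ:.0f} s of data")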

Project #27 Tangible Privacy & Security

Description: This project aims to address the persistent challenges of human error and negligence in cybersecurity and privacy by leveraging tangible computing. Building upon our lab's previous research (Spidey Sense, Bit Whisperer, and Smart Webcam Cover), we aim to overcome barriers related to awareness, ability, and motivation among end users. Through the strategic introduction of tangible computing, our goal is to empower users with greater control and an enhanced understanding of privacy-invasive sensors in the physical world, thereby fostering proactive engagement in security and privacy practices. In pursuit of these goals, we are exploring two key ideas: 1) the development of a tangible control for privacy preferences in shared spaces, especially in large environments such as buildings or city-wide spaces, and 2) the creation of privacy-invasive sensors that provide clear indications of the data being captured and the range of data being collected, such as a webcam that visually communicates its field of view in the real world.

Nature of Student Involvement: The student will be involved in ideation, prototyping, and conducting interviews.

  • Hardware prototyping skills
  • Conducting interviews and analyzing response data
  • Programming skills

Project #28 Technologies for Training Everyday Mindfulness and Emotional Regulation

Mentor: Anna Fang , graduate student advised by Haiyi Zhu

Description: Practices like mindfulness and meditation, which help people feel present without judgment and reduce stress, have become emerging topics of interest in HCI research. Technologies for improving people’s ability to be self-aware, emotionally self-regulate, or self-transcend often ask users to imagine themselves in environments like a quiet forest or watching calm ocean waves, meant to feel separate and isolated from their everyday lives. However, the primary goal of these practices is for people to generalize the skills to their day-to-day lives, in which most people do not live in these natural environments, sit in silent atmospheres, or hold meditation postures.

As a result, in this project, we will be exploring a novel technological space for supporting practice of self-care skills in daily environments. We will not only explore and design for people’s needs, but also build and evaluate HCI technology (e.g. immersive technologies like VR or mixed reality, wearable devices, social computing systems) for practicing things like mindfulness, calm breathing, lowering anxiety and stress, or other self-regulation techniques. Our goal is to help people be better prepared for mental and emotional regulation when challenges arise.

Nature of Student Involvement: The student will work closely with the project leads. Students may participate in coding/development, organizing and carrying out interviews or other evaluation techniques for the project, organizing and analyzing findings, etc. Students will get a breadth of experience, such as learning things from need-finding to system building to paper writing. We welcome students to contribute their own ideas and feel ownership over guiding this project along with the faculty and student advisors!

  • Experience in programming or software development (e.g. Python, Java)
  • Experience or interest in interviewing or conducting user studies
  • Interest in developing for or applying artificial intelligence, VR/AR technologies, wearables, or other technical HCI
  • Passion and excitement about novel technologies for mental health!

Research Topic: Human-Computer Interaction

The robot rights and responsibilities scale: development and validation of a….

The discussion and debates surrounding the robot rights topic demonstrate vast differences in the possible philosophical, ethical, and legal approaches…

Excitation Transfer Across Displays of Different Immersive Quality

Full Title: Excitation Transfer Across Displays of Different Immersive Quality: Investigating the Temporal Dynamics of Intra-Stimulus Arousal Escalation and Decay.…

Technology, Privacy, and Sexting: Mediated Sex

Technology, Privacy, and Sexting: Mediated Sex takes a scientific approach to sexting, using both quantitative and qualitative methods to investigate why…

The Power of Personal Ontologies: Individual Traits Prevail Over Robot Traits…

This study examines facets of robot humanization, defined as how people think of robots as social and human-like entities through…

Channel Affordances for Sexting: Social Presence Relates to Improved Self-Esteem, Sexual…

Sexting involves the sharing of sexually explicit material, including photos and text-based messages, with another person via smartphones and computers.…

Effects of Congruity on the State of User Presence in Virtual…

The present study investigates how the user state of presence is affected by contingencies in the design of virtual environments.…

Capturing Social Presence: Concept Explication Through an Empirical Analysis of Social…

Initially the province of telecommunication and early computer-mediated communication (CMC) literature, multiple systematic reviews suggest “social presence” is now used…

Screenertia: Understanding “Stickiness” of Media Through Temporal Changes in Screen Use

Descriptions of moment-by-moment changes in attention contribute critical elements to theory and practice about how people process media. We introduce…

Human-Computer Interaction and Visualization

HCI researchers at Google have enormous potential to impact the experience of Google users as well as conduct innovative research. Grounded in user behavior understanding and real use, Google’s HCI researchers invent, design, build and trial large-scale interactive systems in the real world. We declare success only when we positively impact our users and user communities, often through new and improved Google products. HCI research has fundamentally contributed to the design of Search, Gmail, Docs, Maps, Chrome, Android, YouTube, serving over a billion daily users. We are engaged in a variety of HCI disciplines such as predictive and intelligent user interface technologies and software, mobile and ubiquitous computing, social and collaborative computing, interactive visualization and visual analytics. Many projects heavily incorporate machine learning with HCI, and current projects include predictive user interfaces; recommenders for content, apps, and activities; smart input and prediction of text on mobile devices; user engagement analytics; user interface development tools; and interactive visualization of complex data.

Human-computer interaction

Redefining human experiences through innovations in research, design, and technology.

Redefining accessibility to build more inclusive technologies

Making remote and hybrid meetings work in the new future of work

Webinar: Designing computer vision algorithms to describe the visual world to people who are blind or low vision

Camera-based non-contact health sensing

RUBICON: Rubric-based Evaluation of Domain Specific Human-AI Conversations  

Param Biyani, Yasharth Bajpai , Arjun Radhakrishna , Gustavo Soares , Sumit Gulwani

AIware 2024 | July 2024

Explaining CLIP’s performance disparities on data from blind/low vision users  

Daniela Massiceti , Camilla Longden , Agnieszka Slowik , Samuel Wills, Martin Grayson , Cecily Morrison

2024 Computer Vision and Pattern Recognition | June 2024

Meet MicroCode: a Live and Portable Programming Tool for the BBC micro:bit  

Kobi Hartley, Elisa Rubegni, Lorraine Underwood, Joe Finney, Thomas Ball , Steve Hodges , Eric Anderson, Peli de Halleux , James Devine , Michal Moskal

23rd annual ACM Interaction Design and Children (IDC) Conference | June 2024

Inclusive Digital Maker Futures Workshop  

Upcoming: June 16, 2024  |  Delft, Netherlands

Host conference: 23rd ACM Interaction Design and Children Conference (opens in new tab) June 17-20, 2024 This workshop will bring together researchers and educators to imagine a future of low-cost, widely available digital making for children,…

What’s Your Story: Jacki O’Neill  

May 16, 2024

Jacki O’Neill saw an opportunity to expand Microsoft research efforts to Africa. She now leads Microsoft Research Africa, Nairobi (formerly MARI). O’Neill talks about the choices that got her there, the lab’s impact, and how…

SharedNeRF: Leveraging Photorealistic and View-dependent Rendering for Real-time and Remote Collaboration  

Mose Sakashita, Bala Kumaravel, Nicolai Marquardt , Andrew D. Wilson

CHI 2024 | May 2024

Honorable Mention, CHI 2024

Research Focus: Week of May 13, 2024  

May 15, 2024  |  Leonardo Nunes , Sara Malvar , Bruno Silva , Ranveer Chandra , Serena Hillman , Thomas Ball , Peli de Halleux , James Devine , Michal Moskal , Vivek Seshadri , Manohar Swaminathan , Sunayana Sitaram , Hanna Wallach , Jennifer Wortman Vaughan , Qi Chen , Xiubo Geng , Corby Rosset , Carolyn Buractaon , Jingwen Lu , Yeyun Gong , Nick Craswell , Xing Xie , Fan Yang , Bryan Tower , Jason (Zengzhong) Li , Rangan Majumder , Jennifer Neville , Harsha Simhadri , Manik Varma , Mao Yang , Bonnie Kruft

Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft. Large language models (LLMs) have shown remarkable performance…

Microsoft at CHI 2024: Innovations in human-centered design  

May 15, 2024  |  Jeevana Priya Inala , Chenglong Wang , Lev Tankelevitch , Advait Sarkar , Abigail Sellen , Sean Rintel , Q. Vera Liao , Yun Wang , Michel Pahud , Judith Amores , Jaron Lanier , Emre Kiciman , Gagan Bansal , Adam Fourney , Eric Horvitz , Nicolai Marquardt , Andy Wilson , Mary Czerwinski

From immersive virtual experiences to interactive design tools, Microsoft Research is at the frontier of exploring how people engage with technology. Discover our latest breakthroughs in human-computer interaction research at CHI 2024.

Big or Small, It’s All in Your Head: Visuo-Haptic Illusion of Size-Change Using Finger-Repositioning  

Myung Jin Kim, Eyal Ofek, Michel Pahud , Mike Sinclair, Andrea Bianchi

CodeAid: Evaluating a Classroom Deployment of an LLM-based Programming Assistant that Balances Student and Educator Needs  

Majeed Kazemitabaar, Runlong Ye, Xiaoning Wang, Austin Z. Henley, Paul Denny, Michelle Craig, Tovi Grossman

EDITORIAL article

This article is part of the Research Topic: Affective Computing and Mental Workload Assessment to Enhance Human-Machine Interaction

Editorial: Affective Computing and Mental Workload Assessment to Enhance Human-Machine Interaction (Provisionally Accepted)

  • 1 Department of Information Engineering, Faculty of Engineering, Marche Polytechnic University, Italy
  • 2 Department of Engineering and Geology, G. d'Annunzio University of Chieti and Pescara, Italy
  • 3 Lega del Filo d'Oro ONLUS, Italy

In recent years, the use of interactive systems and adaptive interfaces has increased considerably due to growing interest in human-machine interaction (HMI). Advances in computer technology and its integration into robot platforms have facilitated HMI during interactive tasks. Human-Computer Interaction (HCI), described as a discipline concerned with the design, evaluation, and implementation of interactive computing systems for human use, is the basis for Human-Robot Interaction (HRI) Yanco and Drury (2002). Overall, incorporating monitoring of human cognitive and affective states into interactive systems can lead to more efficient and personalized user experiences when controlling a robot or machine during real-time interaction. In particular, information about mental workload (MWL), especially in challenging situations, makes it possible to automatically adjust the difficulty level of tasks or provide additional support in response to a detected change in mental state. Moreover, affective computing techniques hold great potential for revolutionizing the way people interact with technology and enhancing the overall quality of user experiences.

Given the technological improvements in mobile and wearable sensors (e.g., wearable and portable electroencephalographic (EEG) and functional near-infrared spectroscopy (fNIRS) systems, and smart devices for physiological signal collection) and contactless devices (e.g., compact infrared thermal and RGB-D cameras), it is possible to better understand the user's mental condition and respond to the estimated cognitive states. With respect to EEG frequency bands, it is possible to map mental states onto attentional, affective, and reward-related components elicited by visual stimuli, while accounting for the high variability within and between subjects Welter and Lotte (2024). By applying artificial intelligence and machine learning algorithms to physiological signals, the authors found that certain EEG patterns were associated with different levels of cognitive load, highlighting the potential of using EEG signals to assess users' mental workload in real time. In particular, neuroadaptive systems can be designed to control a machine's behavior according to the user's current cognitive and affective state. Additionally, a Brain-Computer Interface (BCI) that provides feedback on the user's emotional state, estimated through EEG, may offer an advantageous framework for affective HCI and emotional-cognitive regulation.

In this context, the Research Topic aims to collect recent advances in monitoring cognitive and affective states, emphasizing the technological improvements that enhance HMI in a user-oriented way. The Research Topic, which ended in December 2023, is a collection of four articles written by 16 international authors, and it presents an overview of MWL assessment in HMI scenarios, mainly using EEG data analysis.

Federer et al. (2023) captured users' brain activity associated with motor responses to different sensory events, extracting event-related potentials (ERPs), which reflect the brain's response to a particular event, from continuous EEG recordings. In this study, participants reacted to auditory events presented under two different paradigms in a virtual-reality (VR) setting: entering codes on virtual keypads to open doors, and perceiving externally generated sound stimuli. The study identified a modulation of ERP amplitude related to the different paradigms. Specifically, the results showed a relationship between ERP amplitude and system latency, which should be close to zero to mimic real-world physical interaction. The observed reduction in ERP intensity when events were externally generated indicates a potential difference in cognitive processing between active engagement and passive perception of external stimuli.

Similarly, Gallegos Ayala et al. (2023) proposed a BCI approach that provides real-time, continuous MWL assessment in different scenarios and can be used to optimize HCI. In this work, a novel classification algorithm was presented to extract frontal theta oscillations from EEG recordings and detect MWL during different cognitive tasks. In detail, a published data set that investigated subject-dependent task transfer was analyzed with Filter Bank Common Spatial Patterns. The proposed approach enabled binary classification of MWL with performances of 92.00% and 92.35% for low and high workload, respectively, versus an initial no-MWL condition.

Vukelic et al. (2023) combined a BCI with deep reinforcement learning (RL) for robot training in a 3-D, physically realistic simulation environment. The authors applied this method to EEG signals acquired with both wet- and dry-electrode systems for automatic classification of perceived errors during a robot task. The results indicated that BCI-based deep RL in combination with the dry EEG system can significantly accelerate the learning process, offering promising opportunities for training robots and improving their performance in complex tasks.

Welter and Lotte (2024) presented an original study in which the perception of visual artifacts coincided with aesthetic experiences (AEs) that could positively affect health and wellbeing, paving the way for innovative passive BCI systems for enhancing art appreciation. Scientific understanding of the neural dynamics behind AEs, which comprise complex cognitive, affective, and physiological states, would allow personalized art presentation to improve AEs without requiring explicit user feedback. The authors reviewed the literature on the relationship between EEG rhythm modulation and the attentional, affective, and reward components of AEs. This research has important implications for neuroaesthetics, cognitive neuroscience, and human-computer interaction, offering new possibilities for using technology to improve health and wellbeing through art. The paper summarized the state of the art in oscillatory-EEG-based visual neuroaesthetics and sketched a road map toward ecologically valid neuroaesthetic passive BCI systems that could optimize AEs as well as their beneficial consequences. The authors showed that oscillatory EEG features can carry information about aesthetic preferences and can be highlighted by machine learning approaches.

The works collected in this Research Topic describe a wide range of methods for MWL detection, mainly based on EEG signals. Further studies are needed to improve current knowledge of the physiological mechanisms associated with the neurophysiological condition of the user. Moreover, technological enhancements of sensing systems could be crucial for developing more effective HMI frameworks.
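
As a rough illustration of the EEG band-power approach running through these papers, the sketch below estimates frontal theta (4-7 Hz) power with Welch's method and applies a simple threshold as a stand-in for a workload classifier; the synthetic signal, channel choice, band limits, and threshold are illustrative assumptions, not the methods of the cited studies.

    # Illustrative sketch of EEG-based mental workload estimation: compute frontal theta
    # (4-7 Hz) band power with Welch's method and compare it to a calibration threshold.
    # The synthetic signal, channel choice, band limits, and threshold are assumptions only.
    import numpy as np
    from scipy.signal import welch

    FS = 250  # sampling rate in Hz

    def theta_power(eeg_channel: np.ndarray) -> float:
        """Average power spectral density in the 4-7 Hz (theta) band."""
        freqs, psd = welch(eeg_channel, fs=FS, nperseg=FS * 2)
        band = (freqs >= 4) & (freqs <= 7)
        return float(psd[band].mean())

    if __name__ == "__main__":
        t = np.arange(0, 10, 1 / FS)
        # Synthetic "frontal" signal: a theta rhythm plus noise stands in for a real Fz recording.
        fz = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
        power = theta_power(fz)
        threshold = 1.0  # would come from a per-user calibration phase in a real system
        print(f"theta power = {power:.2f} -> {'high' if power > threshold else 'low'} workload")
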
In view of the strong interest in the academic and industrial fields, we believe that the present Research Topic will increase rigor and reproducibility in mental workload evaluation research.

Keywords: Human Machine Interaction (HMI), Cognitive and affective states, Mental workload (MWL), Affective Computing, artificial intelligence

Received: 05 Apr 2024; Accepted: 13 May 2024.

Copyright: © 2024 Iarlori, Monteriù, Perpetuini, Filippini and Cardone. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Dr. Sabrina Iarlori, Department of Information Engineering, Faculty of Engineering, Marche Polytechnic University, Ancona, Italy

Research Project Review: Human-computer Interactions

A review of a body of research conducted by Dr. Gain Park, an assistant professor in the Department of Journalism and Media Studies at New Mexico State University. This review contains a summary of Dr. Park’s research on human-computer interactions, commentary on its contributions and significance, as well as insights from Dr. Park.

With artificial intelligence (AI) technology growing in popularity, the need for research and understanding of this topic has become pressing. The unknown can be scary, but researchers like Dr. Gain Park are helping to bridge the gap by exploring human-computer communication, its impact on users, and its implications for the future of communication.

Dr. Park, an assistant professor in the Department of Journalism and Media Studies at New Mexico State University (NMSU), initially became interested in human-computer interactions when she and an NMSU student worked on a study titled, “Hey Siri, I’ll Have What You’re Having: Chatbot Pressure on Food Choices.” They found that chatbots are social actors, meaning they can play a meaningful and participatory role in a communication interaction, and that the same social pressure and influence that exists in human-human interactions is also possible in human-computer interactions. This is supported by the Computers Are Social Actors (CASA) Paradigm, which says that humans apply human social rules and scripts to computers. This served as a foundation for many of Dr. Park’s future studies.

Dr. Park has worked with a team of researchers and a developer from other universities and countries to create an impressive body of research that has evolved over the past few years. This body of research includes numerous professionally published scholarly articles, many of which have been presented at national conferences. Dr. Park’s research journey has explored AI in areas of consumer services, mental health, and fundraising.

In the consumer services field, Dr. Park and others studied how to improve human-computer interactions so that these interactions would be helpful to humans and not triggering. The COVID-19 pandemic and an increase in mental health struggles during that time prompted Dr. Park’s studies to shift to the mental health field. The focus became how chatbots could be a resource for those who feel that their struggles are moderate and just need some emotional and social support and someone to talk to. More recently, the studies shifted to fundraising efforts, which have also struggled since COVID-19, and how positive human-computer interactions could impact users’ willingness to donate.

According to Dr. Park, “A big part of future human communication will include human-computer interaction.” Human-computer interactions have already begun integrating into many workplaces, organizations, and institutions. According to Dr. Park, people are turning to chatbots for both professional and interpersonal interactions. That is why it is important to study these interactions and their impact on human behaviors and attitudes.

This is what differentiates Dr. Park’s research from other AI research; it is human-centered and user-centered. “The current focus in the scholar world, in my opinion, is too focused on the machine side,” said Dr. Park. “But I wanted to move the focus to the human side. What do these functions do to the human to change their behavior and perception?”

Another reason Dr. Park’s research is so important is because it encourages and highlights the need for AI literacy. There are entire classes dedicated to teaching college students about media literacy and ways they can wisely engage with the whirlwind of media around them. AI is both a new media agent and a new social agent that is being increasingly interacted with in professional and personal contexts. This research emphasizes the importance of users being aware of the social factors at play, how they may be impacted or influenced, and how they can benefit from the interaction.

Dr. Park’s research demonstrates that AI technologies can be a helpful tool for struggling and overworked consumer service, mental health, and nonprofit organizations – not to take people’s jobs, but to support them by taking simple tasks off their plates so they can focus on bigger tasks. AI can also be helpful to individuals, including people with moderate mental health struggles or college students. Dr. Park’s research exemplifies that two things can be true – one, that AI can be a positive and helpful tool and two, that it is a new and evolving agent that requires some understanding and literacy to interact with.

“This is just a machine, like computers and internet and other media, and having firm literacy skills, so that I will have this conversation and actually be benefitted from this conversation, will be important,” explained Dr. Park. “People are using it, people will rely on this technology more and more, and now is the time to look at its social effect on general users.”

Dr. Park’s research aims to bring information about AI technology and human-computer interactions to general users, so that AI will not just be a tool understood and used by organizations and specialized groups but will be a tool understood and used by the general public.

“It’s about making the world a better place,” affirmed Dr. Park. “That’s my actual belief. I want to make the world a better place by finding connections within the area of AI media agents and people/users.”

SBIR Phase I: Narrative interface technology to support two-way human-computer interaction for the disabled community

Project Number: 2304553. Agency/Funding Organization: NSF. Funding Year: 2023.

Spring 2008

CS376: Research Topics in Human-Computer Interaction

Tuesday & Thursday, 12:50PM – 2:05PM, Wallenberg 124

Scott Klemmer, Gates 384, Office Hours: Tuesdays 11:15am - 12:15pm

TA: Joel Brandt, Gates 372, Office Hours: Thursdays 3:30pm - 5:30pm (except May 15 and May 22), and by email appointment

Syllabus · Submit and View Assignments · Grading · Project Information and Ideas · Staff E-mail (cs376@cs)

New: View research project papers and presentations!

This course is a broad graduate-level introduction to HCI research. The course begins with seminal work on interactive systems, and moves through current and future research areas in interaction techniques and the design, prototyping, and evaluation of user interfaces. Topics include computer-supported cooperative work; audio, speech, and multimodal interfaces; user interface toolkits; design methods; evaluation methods; ubiquitous and context-aware computing; tangible interfaces; haptic interaction; and mobile interfaces.

Students in this course are encouraged to attend CS547, the HCI seminar, on Fridays from 12:30 - 2:00.

Course Structure

The course consists of two components: the reading and discussion of research papers, and a quarter-long research project.

For each class period, students will submit short critiques of the assigned readings (submitted online in this format by 7am on the day of class). After 7am on the day of class, all critiques will be made available for other students to read (again, through the online submission system). The discussion leader and course staff will all read these before class to prepare for discussion. Students are expected to do all of the readings, but critiques are only required for those marked on the syllabus.

In addition to critiques, students will be asked to lead one class discussion. For details on how to structure a discussion, go here. On their discussion day, students should submit their materials instead of their critique using the online submission system. The discussant should read all student critiques before class and integrate them into the discussion.

Printed copies of readings through April 8 will be handed out on the first day of class. Readers for the remainder of the course will be available for purchase in class on April 8.

Note: Stanford students can use the Stanford Library proxy for off-campus access to the readings posted on ACM Portal.

Spring 2012

CS376: Research Topics in Human-Computer Interaction

Monday & Wednesday, 1:15PM – 3:05PM, GSB Littlefield 107

Scott Klemmer, Gates 384, Office Hours: Fridays 4:30‑5:30pm

TA: Chinmay Kulkarni, Gates 377, Office Hours: Mondays 3:05pm - 4:30pm (office hours are held in Bytes Cafe)

Final Presentations

Date: Friday June 3rd, 12:15PM – 3:15PM

Jurors: TBA

Location: Wallenberg 124

Visitor Parking: near Cantor Art Center (map); look for yellow and black 'P's on the map.

Come see final project presentations on Fri Jun 8! Free and open to the public!

This is a 4-unit course, open to all graduate students. For undergraduates, earning an A- or better in cs147 is a prerequisite. (Graduate students with a unit cap may enroll for 3 units; the workload is the same.) Students registered for the class will receive a letter grade—the "credit/no credit" option is not available.

Students in this course are encouraged to attend CS547, the HCI seminar; Fridays 12:50 - 2:05pm.

Course Structure

The course comprises two pieces: reading and discussing research papers, and a quarter-long research project.

For each class period, students will submit short commentaries on the assigned readings (submitted online in this format by 7am on the day of class). After 7am on the day of class, all commentaries will be made available for other students to read (again, through the online submission system). The discussion leader and course staff will all read these before class to prepare for discussion. Students are expected to do all of the readings; commentaries are only required for those marked on the syllabus.

Students will lead one class discussion each. For details on how to structure a discussion, go here. The discussant(s) should meet with Scott and Jesse at the end of the previous class - come to this meeting with a plan for your discussion. On discussion day, students submit their materials instead of their commentary using the online submission system. The discussant should read all student commentaries before class and integrate them into the discussion. Finally, the discussant is responsible for grading the student commentaries.

Note: Stanford students can use the Stanford Library proxy for off-campus access to the readings posted on ACM Portal.

IMAGES

  1. Human-computer interaction and related research fields

    human computer interaction research topics

  2. What is Human Computer Interaction? and what are HCI applications

    human computer interaction research topics

  3. Research Topics/Trends in Human Computer Interaction: Introduction

    human computer interaction research topics

  4. What is Human Computer Interaction? and what are HCI applications

    human computer interaction research topics

  5. Human-Computer Interaction (HCI)

    human computer interaction research topics

  6. Human-Computer Interaction (HCI): Importance and Applications

    human computer interaction research topics

VIDEO

  1. Introduction to HCI

  2. Human Computer Interaction

  3. HUMAN COMPUTER INTERACTION

  4. Human Computer Interaction

  5. Human Computer Interaction

  6. CS408 Human Computer Interaction Quiz 4 Fall 2023 Virtual University of Pakistan

COMMENTS

  1. Research Area: HCI

    The Human-Computer Interaction Group in EECS studies interaction in current and future computing environments, spanning workplaces, homes, public spaces, and beyond. The HCI group engages in collaborations with scholars and designers across campus, driving research presented at venues such as CHI, UIST, DIS, VIS, and CSCW, and creates novel ...

  2. human computer interaction Latest Research Papers

    Computer Interaction. Purpose This study aims to propose a service-dominant logic (S-DL)-informed framework for teaching innovation in the context of human-computer interaction (HCI) education involving large industrial projects. Design/methodology/approach This study combines S-DL from the field of marketing with experiential and ...

  3. What is Human-Computer Interaction (HCI)?

    Human-computer interaction (HCI) is a multidisciplinary field of study focusing on the design of computer technology and, in particular, the interaction between humans (the users) and computers. While initially concerned with computers, HCI has since expanded to cover almost all forms of information technology design. Show video transcript ...

  4. Mapping Human-Computer Interaction Research Themes and Trends from Its

    Human-computer interaction (HCI) is an interdisciplinary field of research and practice that focuses on both the interaction between computers and users (human) and the design of interfaces that... Mapping Human-Computer Interaction Research Themes and Trends from Its Existence to Today: A Topic Modeling-Based Review of past 60 Years ...

  5. CS376: Research Topics in Human-Computer Interaction

    This course is a broad graduate-level introduction to HCI research. The course begins with seminal work on interactive systems, and moves through current and future research areas in interaction techniques and the design, prototyping, and evaluation of user interfaces. Topics include computer-supported cooperative work; audio, speech, and ...

  6. Human-Computer Interaction

    Human-Computer Interaction (HCI) is a rapidly expanding area of research and development that has transformed the way we use computers in the last 30 years. Northwestern hosts a vibrant HCI community across schools with faculty and students involved in a wide range of projects. Research topics and areas include augmented-reality, collective action, computer-mediated communication, computer ...

  7. CS376

    1: The research question is absent or trivial (the answer is obvious). 2: There is a promising question but it is not clearly stated. 3: The question is clearly stated but has only minor impact on the field. 4: The question is clearly stated and its answer has major impact on the field. Hypothesis. 4 points.

  8. CS347

    CS 347 — Human-Computer Interaction Research. This course is an advanced survey of HCI research. We cover foundations and frontiers: seminal work on interactive systems, and recent advances. Core topics include interaction, social computing, and design; breadth topics include AI+HCI, media tools, programming tools, and accessibility.

  9. CS376

    CS 376 — Human-Computer Interaction Research. This course is a broad graduate-level introduction to HCI research. We cover seminal work on interactive systems, moving through recent contributions in interaction, social computing, and design. This is a 4-unit course. For undergraduates or masters students in CS or SymSys, earning an A- or ...

  10. Research Projects

    Our group conducts research in computer science at the intersection of human-computer interaction, machine learning, data science, programming languages, and data management. ... mindfulness and meditation that help people feel present without judgment and for reducing stress have become emerging topics of interest in HCI research. Technologies ...

  11. Human Computer Interaction

    Human-Computer Interaction. J. May, in International Encyclopedia of the Social & Behavioral Sciences, 2001 Human-computer interaction (HCI) is the study of how people use technological artifacts, and their design. Unified cognitive architectures such as GOMS and Soar, derived from artificial intelligence, have proven useful theoretically, but too detailed for general application in design.

  12. Research Topic: Human-Computer Interaction

    Screenertia: Understanding "Stickiness" of Media Through Temporal Changes in Screen Use. Descriptions of moment-by-moment changes in attention contribute critical elements to theory and practice about how people process media. We introduce…. Human-Computer Interaction. February 11, 2022.

  13. A Review on Human-Computer Interaction (HCI)

    Abstract: Human-Computer Interaction (HCI) has risen to prominence as a cutting-edge research area in recent years. Human-computer interaction has made significant contributions to the development of hazard recognition over the last 20 years, as well as spawned a slew of new research topics, including multimodal data analysis in hazard recognition experiments, the development of efficient ...

  14. Human-Computer Interaction and Visualization

    Human-Computer Interaction and Visualization. HCI researchers at Google have enormous potential to impact the experience of Google users as well as conduct innovative research. Grounded in user behavior understanding and real use, Google's HCI researchers invent, design, build and trial large-scale interactive systems in the real world.

  15. Course: CS279: Research Topics in Human-Computer Interaction

    ... will select and present papers on PL topics including type systems, program synthesis, and metaprogramming. Students enrolled in 279r will select and present systems HCI papers about communicating intent between humans and computers, such as programming by demonstration and representing transformations on large piles of data.

  16. Human-computer interaction

    A computer monitor provides a visual interface between the machine and the user. Human-computer interaction (HCI) is research in the design and the use of computer technology, which focuses on the interfaces between people and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways.

  17. Human-computer interaction

    Inclusive Digital Maker Futures Workshop. Upcoming: June 16, 2024 | Delft, Netherlands. Host conference: 23rd ACM Interaction Design and Children Conference, June 17-20, 2024. This workshop will bring together researchers and educators to imagine a future of low-cost, widely available digital making for children ...

  18. CS376: Research Topics in Human-Computer Interaction

    In this course, you will complete a quarter-long research project. This project will be completed in groups of two. At a high level, successful projects will raise an important research question, and plan and execute a methodology for answering that question. Often, this methodology will include building and evaluating a prototype system, but ...

  19. Frontiers

    Human-Computer Interaction (HCI), described as a discipline concerned with the design, evaluation, and implementation of interactive computing systems for human use, is the basis for Human-Robot Interaction (HRI) (Yanco and Drury, 2002). ... The Research Topic, which ended in December 2023, is a collection of four articles written by 16 ...

  20. Research Project Review: Human-computer Interactions

    A review of a body of research conducted by Dr. Gain Park, an assistant professor in the Department of Journalism and Media Studies at New Mexico State University. This review contains a summary of Dr. Park's research on human-computer interactions, commentary on its contributions and significance, as well as insights from Dr. Park.

  21. SBIR Phase I: Narrative interface technology to support two-way human ...

    The Interagency Rehabilitation and Disability Research Portfolio (IRAD), identified by the National Institutes of Health Library, is free of known copyright restrictions. Site created and maintained by the Eunice Kennedy Shriver National Institute of Child Health and Human Development and the NIH Library as a government-created work.
