Hardware and Architecture

The machinery that powers many of our interactions today — Web search, social networking, email, online video, shopping, game playing — is built from both the smallest and the most massive computers. At the small end is your smartphone, a machine that is over ten times faster than the iconic Cray-1 supercomputer. The capabilities of these remarkable mobile devices are amplified by orders of magnitude through their connection to Web services running on building-sized computing systems that we call warehouse-scale computers (WSCs).

Google’s engineers and researchers have been pioneering both WSC and mobile hardware technology with the goal of providing Google programmers and our Cloud developers with a computing infrastructure that is unique in its scale, cost-efficiency, energy efficiency, resiliency, and speed. The tight collaboration among software, hardware, mechanical, electrical, environmental, thermal, and civil engineers results in some of the most impressive and efficient computers in the world.


We design the next generation of computer systems. Working at the intersection of hardware and software, our research studies how best to implement computation in the physical world. We design processors that are faster, more efficient, easier to program, and more secure. Our research covers systems of all scales, from tiny Internet-of-Things devices with ultra-low power consumption to high-performance servers and datacenters that power planet-scale online services. We design both general-purpose processors and accelerators that are specialized to particular application domains, such as machine learning and storage. We also design Electronic Design Automation (EDA) tools to facilitate the development of such systems.

Advances in computer architecture create quantum leaps in the capabilities of computers, enabling new applications and driving the creation of entirely new classes of computer systems. For example, deep learning, which has transformed many areas of computer science, was made practical by hardware accelerators (initially GPUs and later more specialized designs); and advances in computer performance have also made self-driving cars and autonomous drones possible.

Computer architecture spans many layers of the hardware and software stack, and as a result we collaborate with researchers in many other areas. For example, several of our current projects focus on the design of domain-specific architectures, and involve researchers in programming languages and compilers to ensure that our systems are broadly useful, as well as domain experts. In addition, the waning of Moore’s Law is making emerging technologies, like CN-FETs, photonics, or resistive memories, an attractive way to implement computation, sparking collaborations with experts in these areas.


Hardware / Software Systems

Computational techniques are now a major innovation catalyst for all aspects of human endeavor. Our research aims to develop tomorrow’s information technology that supports innovative applications, from big data analytics to the Internet of Things.

Professors Philip Levis and Balaji Prabhakar

Hardware/Software Systems covers all aspects of information technology, including energy-efficient and robust hardware systems, software-defined networks, secure distributed systems, data science, and integrated circuits and power electronics.

  • Energy-Efficient Hardware Systems
  • Software Defined Networking
  • Mobile Networking
  • Secure Distributed Systems
  • Data Science
  • Embedded Systems
  • Integrated Circuits and Power Electronics



How remouldable computer hardware is speeding up science

Jeffrey M. Perkel, Nature Technology Feature, 07 December 2021

This virtual-reality arena for flies tests the insects’ reaction times. Credit: Matthew Isaacson

Michael Reiser is, as he puts it, “fanatical about timing”. A neuroscientist at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia, Reiser studies fly vision. Some of his experiments involve placing flies in an immersive virtual-reality arena and seamlessly redrawing the scene while tracking how the insects respond. Modern PCs, with their complex operating systems and multitasking central processing units (CPUs), cannot guarantee the temporal precision required. So Reiser, together with engineers at Sciotex, a technology firm in Newtown Square, Pennsylvania, found a piece of computing hardware that could: an FPGA.
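To make the timing problem concrete, here is a minimal sketch (Python, run on an ordinary desktop operating system) that measures how far a requested 1 ms sleep overshoots. The millisecond-scale jitter a multitasking OS typically shows in a test like this is exactly the kind of non-determinism that rules out a standard PC for such experiments and motivates an FPGA, whose logic runs on fixed clock cycles.

    # Minimal sketch: measure how much a general-purpose OS overshoots a
    # requested 1 ms sleep. The jitter observed here is the kind of
    # non-determinism that motivates moving tight experimental control
    # loops onto an FPGA.
    import time

    def sleep_jitter_us(target_s=0.001, trials=1000):
        worst = 0.0
        for _ in range(trials):
            start = time.perf_counter()
            time.sleep(target_s)
            elapsed = time.perf_counter() - start
            worst = max(worst, elapsed - target_s)
        return worst * 1e6  # worst-case overshoot in microseconds

    if __name__ == "__main__":
        print(f"worst-case overshoot: {sleep_jitter_us():.1f} microseconds")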

Nature 600, 348–349 (2021). doi: https://doi.org/10.1038/d41586-021-03627-8


A systematic literature review on hardware implementation of artificial intelligence algorithms

Manar Abu Talib, Sohaib Majzoub, Qassim Nasir and Dina Jamal (University of Sharjah)

The Journal of Supercomputing 77, 1897–1938 (2021). Published: 28 May 2020. doi: https://doi.org/10.1007/s11227-020-03325-8

Artificial intelligence (AI) and machine learning (ML) tools play a significant role in the recent evolution of smart systems. AI solutions are driving a significant shift in many fields, such as healthcare, autonomous airplanes and vehicles, security, and marketing customer profiling, among other diverse areas. One of the main challenges hindering AI's potential is the demand for high-performance computation resources. Recently, hardware accelerators have been developed to provide the computational power that AI and ML tools need. In the literature, hardware accelerators are built using FPGAs, GPUs, and ASICs to accelerate computationally intensive tasks. These accelerators provide high performance while preserving the required accuracy. In this work, we present a systematic literature review that explores the available hardware accelerators for AI and ML tools. More than 169 research papers published between 2009 and 2019 are studied and analysed.
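As a software-level illustration (not drawn from the reviewed papers), the sketch below contrasts an interpreted matrix-multiply loop with the same computation dispatched to NumPy's optimized, parallel BLAS backend. The gap it exposes on even a small problem is the same gap that motivates offloading AI workloads to GPUs, FPGAs, and ASICs.

    # Minimal sketch: the speedup from handing a dense computation to an
    # optimized parallel backend, using NumPy's BLAS routine as a software
    # stand-in for a GPU/FPGA/ASIC accelerator.
    import time
    import numpy as np

    def matmul_naive(a, b):
        n, k, m = a.shape[0], a.shape[1], b.shape[1]
        out = np.zeros((n, m))
        for i in range(n):
            for j in range(m):
                s = 0.0
                for p in range(k):
                    s += a[i, p] * b[p, j]
                out[i, j] = s
        return out

    if __name__ == "__main__":
        a = np.random.rand(200, 200)
        b = np.random.rand(200, 200)

        t0 = time.perf_counter()
        ref = matmul_naive(a, b)          # interpreted triple loop
        t_naive = time.perf_counter() - t0

        t0 = time.perf_counter()
        fast = a @ b                      # optimized parallel backend
        t_blas = time.perf_counter() - t0

        assert np.allclose(ref, fast)     # same result, very different cost
        print(f"interpreted loop: {t_naive:.3f} s, optimized backend: {t_blas:.5f} s")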




Keywords: hardware accelerators, artificial intelligence, machine learning, AI on hardware, real-time AI

Computer Hardware

Topic outline:

  • Chips and transistors; Moore's Law
  • Computers in everyday life: control systems
  • Computer hardware basics: CPU, RAM, and persistent storage
  • CPU "cores" and CPU examples; the GPU (graphics processing unit) as a CPU variant
  • RAM examples
  • Persistent storage: hard drives and flash drives; newer flash technology
  • The file system, which organizes the bytes of persistent storage
  • Persistent storage examples
  • Pictures of hardware: the microcontroller (a cheap computer chip) and the Arduino
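To make the point that the file system organizes the bytes of persistent storage concrete, here is a minimal sketch (the file name is arbitrary) that writes a few bytes through the file system and reads them back:

    # Minimal sketch: the file system presents persistent storage as named
    # files of bytes. Writing and reading a small file shows the byte-level
    # view that sits on top of a hard drive or flash device.
    from pathlib import Path

    path = Path("demo.bin")
    path.write_bytes(bytes([0x48, 0x69, 0x21]))   # store three bytes: "Hi!"
    data = path.read_bytes()                      # read them back
    print(len(data), "bytes:", data)              # -> 3 bytes: b'Hi!'
    path.unlink()                                 # remove the demo file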


An Introduction to hardware, software, and other information technology needs of biomedical biobanks

William H. Yong

Affiliations: Brain Tumor Translational Resource, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095; Department of Pathology and Laboratory Medicine, UC Davis School of Medicine, Sacramento, CA 95817; Department of Pathology and Laboratory Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095; Jonsson Comprehensive Cancer Center, UCLA, Los Angeles, CA 90024

Biobanks support medical research by facilitating access to biospecimens. Biospecimens that are linked to clinical and molecular information are particularly useful for translational biomedical research. Tracking and managing the biospecimens and their associated data are therefore crucial tasks in the functioning of a biobank. Adequate computing hardware, efficient and comprehensive biobanking software, and cost-effective data storage are needed for proper management of biospecimens. As biobanks build up extensive stores of specimens and patient data, ethical considerations also inevitably arise. Herein, we describe some basic considerations for establishing a biobanking information technology infrastructure that a beginning biobanker needs. Finally, we also discuss trends and future needs in biobanking informatics.

1. Introduction

Biobanks are an essential aspect of biomedical research in this new era of targeted therapy or personalized medicine. These biorepositories serve as a ready source of high quality tissue and blood specimens, sourced and stored in coordination with surgeons and pathology staff. The biospecimens may contain genomic, epigenomic, transcriptomic, proteomic or metabolomic changes that characterize the patient’s disorder or cancer. One of the most valuable uses of biobanks arises from the linking of patient clinical information with the aforementioned changes in the biospecimens. These linkages can be analyzed to determine whether specific genetic or other changes might predict response to a specific therapy. A sufficiently large number of biospecimens can provide statistical power for answering research questions. The appropriate computing or informatics infrastructure is critical for managing the data and performing analyses for these biospecimen-based studies that are a fundamental component of many modern clinical trials.

2. Hardware and basic software requirements

To manage data, a computer with the correct operating system to run the software, sufficient memory, and reasonable speed for efficient operations is necessary. An operating system should be chosen primarily for broad compatibility with the software to be used. Currently, most computers can operate satisfactorily with 8–16 gigabytes of RAM and a processor speed of 1 GHz or more. In addition, a hard drive of at least 1 terabyte (TB) should be adequate for daily work. However, these hardware requirements are likely to change every few years as software evolves, typically requiring greater processing power and more storage capacity. External drives and other forms of data storage may be necessary for additional storage and backup. Cloud storage is emerging as a dynamic and cost-effective alternative to physical forms of data storage and will be discussed later in this chapter.

These computers require office and security software. An adequate office program should have word processing, spreadsheet, and presentation functionalities; other useful software includes e-mail and note-taking applications. To protect sensitive patient information, security software is needed to guard against malware, that is, software that can damage a computer or cause unwanted actions on it. In the general public, the term malware may be used interchangeably with the term virus. For those in the information technology (IT) world, however, malware encompasses a number of often distinct but sometimes overlapping sub-types, including viruses, spyware, adware, and ransomware. Viruses are programs that can replicate on your computer and spread to other linked computers, damaging the software on them and sometimes completely incapacitating the computer. Spyware can be used to steal passwords and private information. Adware is unwanted software that displays advertisements and sometimes also has virus capabilities. Ransomware is malicious software that can lock out the end user unless a ransom is paid. Anti-malware or antivirus software can search for the relevant malware or viruses.

A complementary protective element is a firewall. A firewall monitors network traffic, i.e., data coming into and leaving the computer over the network, and if it detects anomalies known to be malicious, it can stop the transmission of data. Firewalls can be hardware-based or software-based. Larger organizations with sophisticated IT staff and infrastructure can deploy hardware firewalls that protect their entire network; for smaller entities with modest budgets, a software firewall can be purchased together with the anti-malware or antivirus product. The security programs should also support regularly scheduled system scans and updates. In addition, computer hardware and software must be updated periodically to ensure compatibility and maintain efficiency. Consequently, computing choices must remain scalable and financially feasible. A checklist of these requirements can be found in Table 1.

Table 1. Computing requirements in establishing a biobank. Each category includes minimum and optimal considerations as well as common functionalities. See Table 2 for BIMS requirements.

  • Computer: desktop or laptop
  • Operating system: Windows 10 or Apple OS X; must be compatible with the biobanking software
  • RAM: minimum 8 GB; better 16 GB
  • Hard drive capacity: minimum 500 GB; better 1–2 TB
  • Hard drive speed: minimum 7,200 RPM; better 10,000–15,000 RPM
  • Internet connection: minimum 10 Mbps; better 100–200 Mbps
  • Backup storage: minimum 500 GB; better 1–2 TB
  • Cloud storage: security and encryption, third-party audit, redundancy, stable customer base/finances
  • Office software: word processor, spreadsheets, presentations, e-mail and note log
  • Security software: firewall, anti-malware/anti-virus, scheduled scans, protection against phishing

2.1. Backing up data

Direct Attached Storage (DAS), Network Attached Storage (NAS), and file servers are the three major ways to back up data on your computer. Direct attached storage is typically an external drive attached to the computer via a Universal Serial Bus (USB) connection. Currently, external drives with USB 3.0 connections, which allow significantly faster data transfers than the original USB connections, are common. One should ensure that the drive connection is compatible with the computer's ports. The external drives should be encrypted and password-protected. Network attached storage is, as the name implies, storage accessible over a network: essentially a collection of hard drives connected to a network that the biobank staff's computers can access. NAS is ideal for simple file storage. File servers, or servers, are similar to network attached storage except that they are essentially computers with hard drives, giving them more capabilities to partition storage, control different tiers of access, and run shared programs. In short, NAS is less complex to manage than servers but has less functionality. The IT staff at your institution will likely have a preferred mode of providing backup storage. We use an encrypted external drive to back up data in our own laboratory space and also store data in folders on servers at a remote location provided by our departmental IT staff. Having a local drive is helpful because, when the network is down, one can still work from local files. In addition, if the computer it is attached to is not functioning, the external drive can easily be moved to another computer.
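As a minimal sketch of the back-up routine described above (the source and destination paths are hypothetical), the following copies a working folder to an attached external drive and verifies each copied file with a SHA-256 checksum:

    # Minimal sketch (paths are hypothetical): copy a working folder to an
    # attached external drive and verify each copy with a checksum.
    import hashlib
    import shutil
    from pathlib import Path

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def backup(source_dir, backup_dir):
        source_dir, backup_dir = Path(source_dir), Path(backup_dir)
        for src in source_dir.rglob("*"):
            if src.is_file():
                dst = backup_dir / src.relative_to(source_dir)
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)             # copy file and metadata
                assert sha256(src) == sha256(dst)  # verify the copy

    if __name__ == "__main__":
        backup("C:/biobank_data", "E:/biobank_backup")  # hypothetical drive letters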

2.2. Redundant Servers

In times of power outages or main server failures, redundant servers are necessary to maintain biobank server functionality. With the same specifications and applications as the original servers, redundant servers come online when the dedicated servers are down and continue to provide support until normal server function can be restored (1). Typically, data must be encrypted en route to the server and on the server, and governmental privacy and security requirements must be met. The secondary server should ideally be at a different location from the primary server and can be set up to mirror it. A server status page is used to check the primary server on a regular schedule for an expected response, and a failover service should automatically switch to the secondary server when the primary server fails to return that response, then switch back to the primary server once it is functional again. Multiple backup servers can also be strung together to provide multiple levels of redundancy, in case even the secondary server is out of service. This redundancy should be provided by your IT department.
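A minimal sketch of that status-page polling and failover logic might look like the following (the URLs and polling interval are hypothetical; in practice this would be configured by the IT department):

    # Minimal sketch (URLs are hypothetical): poll the primary server's status
    # page on a schedule and fail over to the secondary when it stops
    # returning the expected response, switching back once it recovers.
    import time
    import urllib.request

    PRIMARY = "https://bims-primary.example.org/status"
    SECONDARY = "https://bims-secondary.example.org/status"

    def healthy(url, timeout=5):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def monitor(poll_seconds=30):
        active = PRIMARY
        while True:
            if active == PRIMARY and not healthy(PRIMARY):
                active = SECONDARY      # fail over to the mirror
            elif active == SECONDARY and healthy(PRIMARY):
                active = PRIMARY        # switch back once the primary recovers
            print("serving from:", active)
            time.sleep(poll_seconds)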

3. Biobank Information Management System (BIMS)

3.1. Biobank information management system (BIMS), a form of laboratory information management system (LIMS)

Given the immense amounts of data associated with biobanks, biobank information management systems (BIMS) are powerful, if not necessary, tools. LIMS are data management software programs that manage the various types of information in laboratory environments, and a BIMS is essentially a LIMS adapted for biobanks. As each biobank, from the informatics point of view, is essentially a large workflow, a BIMS supports the multiple processes involved to assist personnel in tracking and managing samples. However, not all BIMS are the same, as each configuration is designed to best support the processes of a particular laboratory with its own unique workflows and data set types. In general, a BIMS serves a set of core functions: storage and registration of a sample and its corresponding data, tracking of the sample throughout the laboratory workflow, management of storage locations, organization and analysis of data, and auditing of sample data. It is important that the BIMS keep a running custody log for each sample: a chronological record of the staff handling each specimen at each step of the workflow. In case of a missing biospecimen, the custody trail can help in tracking it down and provide a window into how to improve the standard operating procedure.
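As a rough illustration of the kind of record and custody log a BIMS might keep per sample, here is a minimal sketch in Python; the SampleRecord and CustodyEvent classes and their field names are hypothetical and far simpler than any real system.

```python
# Illustrative data model for a BIMS sample record with a running custody log.
# Field names are hypothetical; real systems are far richer.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class CustodyEvent:
    timestamp: datetime
    staff_member: str
    action: str                      # e.g. "received", "aliquoted", "released"

@dataclass
class SampleRecord:
    research_id: str                 # research identifier, not a patient identifier
    specimen_type: str               # e.g. "tissue", "blood", "cerebrospinal fluid"
    source: str                      # e.g. "lung", "brain"
    storage_location: str
    custody_log: List[CustodyEvent] = field(default_factory=list)

    def log_custody(self, staff_member: str, action: str) -> None:
        """Append a chronological custody entry for this specimen."""
        self.custody_log.append(CustodyEvent(datetime.now(), staff_member, action))

sample = SampleRecord("R-0001", "tissue", "brain", "Freezer 3 / Shelf 2 / Box 14")
sample.log_custody("J. Smith", "received from pathology")
sample.log_custody("A. Lee", "aliquoted into 5 cryovials")
```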

3.2. Considerations for biospecimen labeling and registration in the BIMS

In order to effectively navigate through this extensive database, the ideal BIMS requires a user-friendly front end offering flexible search criteria ( 2 ). An index of labels (categories) and their respective abbreviations should be available for anyone using the BIMS to be able to properly categorize biospecimens and to conduct efficient searches. With any new categories, a standard abbreviation should be chosen and included in the index for future use. A well-established BIMS often has a large and practical ontology or hierarchical nomenclature for categorizing biospecimens typically available from pick lists. For example, the BIMS would have options for type of biospecimen such as tissue, blood, and cerebrospinal fluid as well as source such as lung, brain, heart etc. There might be further options to characterize source such as left upper lobe, right kidney or left temporal lobe. In addition, the BIMS should have the capability to specify materials derived from the biospecimens such as cell lines, DNA, RNA, and protein etc., and analytical data such as quality assurance metrics like RNA integrity number (RIN).

A BIMS should be able to integrate multiple types of inputs into a single searchable framework. For example, whole slide digital images, photos, molecular data, and scanned documents may be attached to a biospecimen. Integration into the single framework also eliminates duplicates to streamline data access ( 3 ). Free-text entry for categorization should be avoided, as it may lead to inconsistencies through typographical or formatting errors. When any new data are entered into the BIMS, ideally a second party should be present to audit all the newly entered data. In practice, a total and contemporaneous audit is difficult, and only a subset of entered data is typically audited. Some software requires entry of an important data element in duplicate; i.e., the data must be entered twice and the two entries must match. If a biospecimen already has associated data embedded in bar codes or radio frequency identification (RFID) tags, bar code or RFID scanners linked to the BIMS can be used to capture the associated data. These steps limit simple data entry errors that can have profound consequences due to biospecimen misidentification.

If the appropriate consent for research has been obtained, the patient’s name, date of birth, medical record number, diagnosis, and other clinical information can be collected and associated with the biospecimen. For the best protection of the patient’s privacy, all specimens should be assigned a research identifier that can add a layer of separation from the patient’s name and clinical identifiers (date of birth, medical record number etc.). An identifier unique to the patient and a second identifier unique to the biospecimen are necessary. Having only a patient identifier is inadequate as the patient may have more than one biospecimen over time. Each specimen must have a date of collection in order to begin creating a chronological record for the specimen. Under some protocols, deidentified tissue is collected such that only basic information such as tissue or cancer type is provided to the biobank.
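To make the separation between clinical identifiers and research identifiers concrete, here is a minimal sketch; the ID formats, the in-memory registry, and the method names are invented for illustration and are not part of any particular BIMS.

```python
# Sketch of assigning research identifiers that separate specimens from
# clinical identifiers. ID formats and the registry itself are hypothetical.
import itertools

class IdentifierRegistry:
    def __init__(self) -> None:
        self._patient_ids = {}            # MRN -> research patient ID (access restricted)
        self._specimens = {}              # specimen ID -> metadata
        self._patient_counter = itertools.count(1)
        self._specimen_counter = itertools.count(1)

    def patient_id(self, medical_record_number: str) -> str:
        """Return a stable research ID for a patient, creating one if needed."""
        if medical_record_number not in self._patient_ids:
            self._patient_ids[medical_record_number] = f"P-{next(self._patient_counter):05d}"
        return self._patient_ids[medical_record_number]

    def new_specimen_id(self, medical_record_number: str, collection_date: str) -> str:
        """Each specimen gets its own ID; one patient may have many specimens over time."""
        pid = self.patient_id(medical_record_number)
        sid = f"R-{next(self._specimen_counter):05d}"
        self._specimens[sid] = {"patient": pid, "collected": collection_date}
        return sid

registry = IdentifierRegistry()
print(registry.new_specimen_id("MRN X-01234", "2016-01-01"))   # e.g. R-00001
```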

3.3. Aliquots, chain of custody, and location tracking

Once in the biobank, each biospecimen is tracked by the biobanking software with constant updating of biospecimen quantity, storage location, storage method, and storage conditions. It is imperative that a custody log be maintained meticulously ( 4 ). Every specimen should also have a genealogy that records its aliquots and derivatives and their quantities. Aliquots are smaller volumes of the original specimen; the original specimen is often divided into several aliquots to store in suitably sized containers, to create several different derivatives, or to provide to researchers. Derivatives may be thought of as materials extracted or derived from the original biospecimen. Examples of derivatives include nucleic acids extracted from tissue, cell lines grown out from cancer biospecimens, formalin-fixed paraffin-embedded blocks made from tissue, and white blood cells or serum collected from blood. Maintaining a comprehensive genealogy also allows researchers to track the availability of each specimen so as to prevent depleting irreplaceable and unique biospecimens.
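A specimen genealogy of aliquots and derivatives is naturally a tree. The sketch below is a hypothetical, minimal structure (the SpecimenNode class and its fields are invented) showing how remaining quantities and parent-child relationships could be traced.

```python
# Sketch of a specimen "genealogy": each node is an aliquot or derivative of
# its parent, so availability can be traced before a sample is depleted.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SpecimenNode:
    sample_id: str
    kind: str                               # "original", "aliquot", or "derivative"
    quantity: float                         # remaining amount, in the unit below
    unit: str
    children: List["SpecimenNode"] = field(default_factory=list)

    def add_child(self, node: "SpecimenNode") -> "SpecimenNode":
        self.children.append(node)
        return node

    def total_descendants(self) -> int:
        return sum(1 + child.total_descendants() for child in self.children)

# A 5 ml blood draw split into five 1 ml aliquots, one of which yields DNA.
blood = SpecimenNode("R-0002", "original", 5.0, "ml")
aliquots = [blood.add_child(SpecimenNode(f"R-0002.A{i}", "aliquot", 1.0, "ml"))
            for i in range(1, 6)]
aliquots[0].add_child(SpecimenNode("R-0002.A1.DNA", "derivative", 20.0, "µg"))
print(blood.total_descendants())            # 6
```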

Each specimen also carries a research history, as the inherent value of any biobank comes from the variety of research efforts it is able to support. A typical research history for a specimen would include the specific proposed research study in need of the sample, the grant funding the proposal has received, IRB approval, experimental procedures performed on the sample, relevant data and results from those experiments, any consequent publications, and clinical trials supported by the research. Finally, as biobanks often work with other institutions, samples must be sent out for collaborative research efforts; this requires recording industry-standard hazard classifications, the destination, the courier service employed, and tracking numbers. Table 2 organizes these multiple layers of data for each biospecimen.

Table 2. General information for each biospecimen to be stored in a biobank

Preliminary Information
  • Name; date of birth
  • Medical record number; diagnosis
  • Research consent; date of specimen release

Specimen type
  • Tissue; blood
  • Organ; cerebrospinal fluid
  • Other solid; other bodily fluid

Biobank Specimen Information
  • History of custody; current location
  • Current method of storage; storage conditions
  • Derivatives (materials derived from the biospecimen): FFPE blocks and slides; DNA, RNA, protein, cell lines
  • Aliquots (smaller samples of the original biospecimen, e.g., a 5 ml tube of blood may be aliquoted into five 1 ml cryovials)

Research Information
  • Proposed research study; grant funding
  • IRB approval; experimental procedures
  • Data obtained; results
  • Publication history; trials supported

Shipping Information
  • Hazard classifications; destination
  • Courier service; tracking number

3.4. Freezer Maps

Freezer mapping creates comprehensive, up-to-date location inventories of biospecimens and their corresponding aliquots. Freezer maps can greatly expedite research efforts by reducing time spent finding specific samples. The freezer software should also allow users to create their own defined fields as searchable categories to navigate the variety of specimens available in storage; at a minimum, there should be localization at the level of a freezer shelf or a rack in a liquid nitrogen vat. Fig. 1 shows a typical grid displayed by the freezer software when searching for a specific specimen or aliquot according to particular criteria among multiple freezers and multiple divisions within each freezer. In addition, tracking storage conditions such as temperature and humidity is desirable to ensure specimen integrity. A sensor (or multiple sensors) within each freezer tracks and logs internal conditions, which are recorded into the freezer software, often via a wireless network connection. In case of freezer failure, specimen degradation can occur very quickly as temperatures rise and samples are exposed to moisture. For timely response in transferring affected samples to functioning freezers, alarms should be present to notify appropriate staff of any malfunction. Each freezer can be equipped with a physical audible alarm, and the freezer software can be configured to provide notifications if freezer conditions deviate from the norm. Certain freezer software programs also feature labeling functions along with label printers for marking vials and slides. Labels should remain adherent and be waterproof to prevent loss of identification under freezing and thawing conditions, and they can be printed according to set templates.
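A freezer map plus condition monitoring can be pictured as a lookup table of storage positions and a stream of sensor readings checked against a setpoint. The sketch below is a toy illustration; the positions, sample IDs, setpoint, and tolerance are made-up values, not recommendations.

```python
# Toy freezer-map sketch: locate a sample in a grid of freezer positions and
# flag temperature excursions from logged sensor readings. All data hypothetical.
FREEZER_MAP = {
    ("Freezer 1", "Shelf 2", "Rack B", "Box 3", "A5"): "R-0001",
    ("Freezer 2", "Shelf 1", "Rack A", "Box 7", "C2"): "R-0002",
}

def locate(sample_id: str):
    """Return the (freezer, shelf, rack, box, well) position of a sample, if stored."""
    for position, stored_id in FREEZER_MAP.items():
        if stored_id == sample_id:
            return position
    return None

def check_temperature(readings_celsius, setpoint=-80.0, tolerance=5.0):
    """Yield readings that deviate from the setpoint by more than the tolerance."""
    for timestamp, temp in readings_celsius:
        if abs(temp - setpoint) > tolerance:
            yield timestamp, temp            # a real system would page staff here

print(locate("R-0002"))
alarms = list(check_temperature([("08:00", -79.2), ("09:00", -61.0)]))
print(alarms)                                # [('09:00', -61.0)]
```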

Fig. 1. Example of freezer software interface displaying sample locations, relevant specimen information, and options for adding new specimens.

3.5. Radio frequency identification (RFID) tags

Radio Frequency Identification (RFID) technology offers numerous advantages in reducing errors while identifying, tracking, and archiving biospecimens ( 5 ). RFID tags can be scanned without direct alignment, multiple tags can be scanned simultaneously, and each tag possesses a relatively high data storage capacity compared to most bar codes. Furthermore, RFID systems are capable of data transmission, essential in tracking storage conditions such as temperature, and multiple read-write cycles can be performed on each tag to keep a running log of any changes. However, implementing RFID systems can be difficult in terms of high equipment and software setup costs, as well as inevitable technological obsolescence necessitating periodic software updates and new hardware. There are also security concerns in employing RFID systems, since radio communication channels remain open and vulnerable to unwarranted access. This privacy concern can be mitigated with encryption, use of research identifiers, shielding, and limiting access to biospecimen storage areas. A cost-benefit analysis is advised prior to implementation.

3.6. Biobanking Example

Mr. John Doe is a patient who has been diagnosed with glioblastoma multiforme (GBM). His records show his birthdate to be 1/1/1970, and he has been assigned a medical record number: MRN X-01234. Mr. Doe has given research consent ahead of time for the tumor to be used in research studies. He then undergoes surgery on 1/1/2016, and the GBM is removed. Up to this point, patient and surgical information is logged by the electronic health record program authorized by the institution. When the GBM is obtained by clinical pathology personnel, the BIMS assigns the specimen its research identifier: R-5678. Since it is a tumor, specimen R-5678 is categorized as a solid biologic obtained from brain, right parietal lobe, and it then undergoes quality control histologic assessments, such as tumor percentage, necrosis percentage, or cancer biomarkers. The quality control results are logged before the specimen is stored away. Once released by pathology staff to researchers, the history of custody for specimen R-5678 begins within the BIMS database. After proper labeling, the specimen is assigned a slot within a specific storage freezer. The method of storage (frozen) and specific storage conditions (temperature and humidity) are tracked by the freezer software. As samples of specimen R-5678 are requested, its genealogy of derived samples within the BIMS keeps track of all FFPE slide and block requests from various research staff. If research personnel decide to use specimen R-5678 for glioblastoma research, there must first exist records of the research project proposal, proper grant funding, and IRB approval for the project linked to the specimen within the BIMS database. Any experimental procedures performed on the tumor specimen or any of its derivatives, and all data and results obtained from those procedures, are logged into the BIMS as well. Furthermore, any publications resulting from the research project, as well as clinical trials developed in accordance with the research, are continuously logged for specimen R-5678 ( Fig. 2 ).

Fig. 2. Flowchart mapping data movement into and out of a typical biobanking system, showing possible types of data input to the biobanking software as well as different types of cloud and physical data storage.

4. Biobank collaborations and web-based portals

Perhaps the greatest value of biobanks lies in access to large numbers of biospecimens from multiple centers, where each center alone often would not have sufficient material to power a study. For researchers at different sites around the world to access specimens, a consortium of biobanks can provide a single web-based portal that permits searching of their libraries of biospecimens. If the biobanks use the same BIMS, access to the shared data is facilitated. Often, however, a separate central database is created, requiring data entry from the diverse biobanks, and the web portal provides access to that central database. Regardless, through a web-based portal, collaborators can obtain pertinent information on the variety of samples available in a given biobank consortium. With proper access privileges granted ahead of time (based on an application and a relevant documented IRB-approved protocol), researchers can search the content of the BIMS through most web browsers, providing the liberty to acquire data from anywhere with an internet connection. Different levels of search access can be provided depending on the approved research protocol. Once researchers have identified a set of biospecimens of interest, they can submit a request to a central oversight committee that coordinates with the individual biobanks for shipping. A slightly different model is one in which the researcher submits a request for biospecimens, e.g., lung carcinomas from patients treated with a specific drug, to a central site that then runs a search across the associated biobanks' BIMS databases, either directly or by requesting that the individual centers run the searches.
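The federated-search model can be sketched as a portal that fans the same query out to each member biobank's BIMS and aggregates the responses. The endpoints, query fields, and response format below are hypothetical; real consortia define their own APIs, authentication, and governance.

```python
# Sketch of a consortium portal fanning a biospecimen query out to member
# biobanks. Endpoints, query format, and response schema are hypothetical.
import json
import urllib.request

MEMBER_BIOBANKS = {
    "site_a": "https://biobank-a.example.org/api/search",   # hypothetical
    "site_b": "https://biobank-b.example.org/api/search",   # hypothetical
}

def federated_search(query: dict) -> dict:
    """Send the same query to every member BIMS and collect per-site results."""
    body = json.dumps(query).encode()
    results = {}
    for site, endpoint in MEMBER_BIOBANKS.items():
        request = urllib.request.Request(
            endpoint, data=body, headers={"Content-Type": "application/json"}
        )
        try:
            with urllib.request.urlopen(request, timeout=10) as resp:
                results[site] = json.load(resp)
        except OSError:
            results[site] = {"error": "site unreachable"}
    return results

# e.g., lung carcinomas from patients treated with a specific drug
hits = federated_search({"diagnosis": "lung carcinoma", "treatment": "drug X"})
```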

5. Commercial Cloud Data Storage

Biobanking data can be stored in cloud-based infrastructure. That is, instead of storing the data on local servers, the BIMS data can be stored remotely “in the cloud” with servers provided by the BIMS vendor or with a commercial data storage entity. There are several criteria by which a good cloud storage provider might be selected. A reliable provider should have an expansive customer base of business clients that can attest to the provider’s trusted cloud infrastructure as well as its profitable and stable finances, ensuring the provider is successful in handling large databases. To establish databases of sensitive information, cloud providers need to have security programs and multiple levels of encryption methods to prevent data breaches and ensure privacy of patient information. Most cloud storage providers are validated by third-party auditors to ensure security protocols meet international industry standards ( 6 ). Moreover, providers should be approved by the institution and have a strong record of services operating under HIPAA or relevant national privacy compliance requirements. Cloud storage offers numerous advantages over conventional practices of maintaining physical on-site data servers. In terms of financial cost and scalability, cloud storage is attractive. Establishing physical on-site data servers requires institutions to make large financial and personnel investments in acquiring data storage hardware and dedicating IT staff to set up such extensive data systems. Also, the institution may have to periodically purchase new hardware or schedule major overhauls to roll out new software. On the other hand, cloud licenses can be obtained with relative ease as the physical infrastructure and relevant software are already established by the service provider. In addition, expansions and updates are managed and executed by the commercial service providers that typically have efficiencies of scale. Consequently, some authors argue that cloud storage is the best solution to cater to rapidly expanding biobanking needs ( 7 ).

6. Patient Privacy and Ethical Considerations

As with any collection of patient information, biobanks must follow strict legal and ethical guidelines. Foremost, patient data can only be obtained from participants who have given consent for the corresponding specimens to be used in research studies. All samples should be anonymized, using a coding system with research identifiers, so that no one can trace a specimen back to the original patient. Such research identifiers should only be given out to collaborators on a need-to-know basis, to further reduce the risk of patient information being compromised. Under the direction of the institution, all personnel should receive computer security and HIPAA compliance training to be prepared against potential security breaches and phishing attacks that may compromise the privacy of patient data. This may include learning to recognize spam e-mails, create strong passwords, report suspicious notifications, and adhere to privacy and ethical guidelines. Protecting patient privacy in this manner is not only a legal requirement of research compliance but also helps maintain research integrity: restricting researchers from matching specimens to individuals also ensures that researchers cannot manipulate their results to produce expected outcomes in support of their clinical procedures or experiments ( 8 ). However, unique complications arise with genomic data. Even with de-identification, the risk of privacy breach and information exposure still exists because genotypes are very specific to each individual. While these issues can be mitigated by completely dissociating data from patient identities, this significantly detracts from the value of the biobank, as it does not allow any way of updating clinical records. The utility of specimens increases with the amount of data with which they can be associated, and if that database becomes too disjointed and partitioned for the sake of privacy, the biobank's value to research, society, and the biospecimen donor is diminished or even lost ( 9 ).

7. Biobanking Going Forward

Though increasing numbers of biobanks are emerging, there are still no widely accepted industry-wide standards or international registries. Establishing standardized protocols should greatly increase the efficiency of mining clinical data, as various institutions would employ identical methodologies in characterizing and annotating the specimens stored in their respective biobanks. Standardization can be expedited with automated systems that organize specimens into predetermined categories, whether based on tissue type, molecular markers, or preservation methods ( 10 ). Researchers would then be able to trace specimens with ease between multiple projects using a single standardized code system. Currently, most biobanks work in complete independence, each operating under its own standards of specimen organization and data collection. This can make collaborations between biobanks difficult, requiring additional methods to convert different data catalogues into a single registry for comparison or for pooling data. A number of initiatives are underway to increase interoperability. A global registry to which all biobanks can subscribe would usher in a promising future for biobanking, defined by comprehensive metadata and extensive collaboration in furthering medical research.

Acknowledgement

This work was supported in part by NIH:NCI P50-CA211015, NIH:NIMH U24 MH100929, the Art of the Brain Foundation, and the Henry E. Singleton Brain Cancer Research Program.


Modular, scalable hardware architecture for a quantum computer

Rendering shows the four layers of a semiconductor chip, with the top layer a vibrant burst of light.

Quantum computers hold the promise of being able to quickly solve extremely complex problems that might take the world’s most powerful supercomputer decades to crack.

But achieving that performance involves building a system with millions of interconnected building blocks called qubits. Making and controlling so many qubits in a hardware architecture is an enormous challenge that scientists around the world are striving to meet.

Toward this goal, researchers at MIT and MITRE have demonstrated a scalable, modular hardware platform that integrates thousands of interconnected qubits onto a customized integrated circuit. This “quantum-system-on-chip” (QSoC) architecture enables the researchers to precisely tune and control a dense array of qubits. Multiple chips could be connected using optical networking to create a large-scale quantum communication network.

By tuning qubits across 11 frequency channels, this QSoC architecture allows for a new proposed protocol of “entanglement multiplexing” for large-scale quantum computing.

The team spent years perfecting an intricate process for manufacturing two-dimensional arrays of atom-sized qubit microchiplets and transferring thousands of them onto a carefully prepared complementary metal-oxide semiconductor (CMOS) chip. This transfer can be performed in a single step.

“We will need a large number of qubits, and great control over them, to really leverage the power of a quantum system and make it useful. We are proposing a brand new architecture and a fabrication technology that can support the scalability requirements of a hardware system for a quantum computer,” says Linsen Li, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this architecture.

Li’s co-authors include Ruonan Han, an associate professor in EECS, leader of the Terahertz Integrated Electronics Group, and member of the Research Laboratory of Electronics (RLE); senior author Dirk Englund, professor of EECS, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE; as well as others at MIT, Cornell University, the Delft University of Technology, the U.S. Army Research Laboratory, and the MITRE Corporation. The paper appears today in Nature.

Diamond microchiplets

While there are many types of qubits, the researchers chose to use diamond color centers because of their scalability advantages. They previously used such qubits to produce integrated quantum chips with photonic circuitry.

Qubits made from diamond color centers are “artificial atoms” that carry quantum information. Because diamond color centers are solid-state systems, the qubit manufacturing is compatible with modern semiconductor fabrication processes. They are also compact and have relatively long coherence times, which refers to the amount of time a qubit’s state remains stable, due to the clean environment provided by the diamond material.

In addition, diamond color centers have photonic interfaces, which allow them to be remotely entangled, or connected, with other qubits that aren’t adjacent to them.

“The conventional assumption in the field is that the inhomogeneity of the diamond color center is a drawback compared to identical quantum memory like ions and neutral atoms. However, we turn this challenge into an advantage by embracing the diversity of the artificial atoms: Each atom has its own spectral frequency. This allows us to communicate with individual atoms by voltage tuning them into resonance with a laser, much like tuning the dial on a tiny radio,” says Englund.

This is especially difficult because the researchers must achieve this at a large scale to compensate for the qubit inhomogeneity in a large system.

To communicate across qubits, they need to have multiple such “quantum radios” dialed into the same channel. Achieving this condition becomes near-certain when scaling to thousands of qubits. To this end, the researchers surmounted that challenge by integrating a large array of diamond color center qubits onto a CMOS chip which provides the control dials. The chip can be incorporated with built-in digital logic that rapidly and automatically reconfigures the voltages, enabling the qubits to reach full connectivity.

“This compensates for the inhomogeneous nature of the system. With the CMOS platform, we can quickly and dynamically tune all the qubit frequencies,” Li explains.
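As a toy illustration of the idea of voltage-tuning qubits with scattered natural frequencies onto a shared channel (and emphatically not the actual device physics), one can assume a simple linear tuning coefficient and compute the bias each qubit would need; every number below is invented.

```python
# Toy model only: each color-center qubit has its own optical frequency;
# assume a hypothetical linear voltage tuning rate and compute the bias
# needed to pull every qubit onto a shared target channel.
import random

TUNING_GHZ_PER_VOLT = 2.0          # hypothetical linear tuning rate
TARGET_CHANNEL_GHZ = 470_400.0     # hypothetical shared channel frequency

random.seed(0)
qubit_frequencies = [TARGET_CHANNEL_GHZ + random.uniform(-20, 20) for _ in range(8)]

bias_voltages = [(TARGET_CHANNEL_GHZ - f) / TUNING_GHZ_PER_VOLT
                 for f in qubit_frequencies]
for i, (f, v) in enumerate(zip(qubit_frequencies, bias_voltages)):
    print(f"qubit {i}: {f:.1f} GHz -> bias {v:+.2f} V")
```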

Lock-and-release fabrication

To build this QSoC, the researchers developed a fabrication process to transfer diamond color center “microchiplets” onto a CMOS backplane at a large scale.

They started by fabricating an array of diamond color center microchiplets from a solid block of diamond. They also designed and fabricated nanoscale optical antennas that enable more efficient collection of the photons emitted by these color center qubits in free space.

Then, they designed and mapped out the chip from the semiconductor foundry. Working in the MIT.nano cleanroom, they post-processed a CMOS chip to add microscale sockets that match up with the diamond microchiplet array.

They built an in-house transfer setup in the lab and applied a lock-and-release process to integrate the two layers by locking the diamond microchiplets into the sockets on the CMOS chip. Since the diamond microchiplets are weakly bonded to the diamond surface, when they release the bulk diamond horizontally, the microchiplets stay in the sockets.

“Because we can control the fabrication of both the diamond and the CMOS chip, we can make a complementary pattern. In this way, we can transfer thousands of diamond chiplets into their corresponding sockets all at the same time,” Li says.

The researchers demonstrated a 500-micron by 500-micron area transfer for an array with 1,024 diamond nanoantennas, but they could use larger diamond arrays and a larger CMOS chip to further scale up the system. In fact, they found that with more qubits, tuning the frequencies actually requires less voltage for this architecture.

“In this case, if you have more qubits, our architecture will work even better,” Li says.

The team tested many nanostructures before they determined the ideal microchiplet array for the lock-and-release process. However, making quantum microchiplets is no easy task, and the process took years to perfect.

“We have iterated and developed the recipe to fabricate these diamond nanostructures in MIT cleanroom, but it is a very complicated process. It took 19 steps of nanofabrication to get the diamond quantum microchiplets, and the steps were not straightforward,” he adds.

Alongside their QSoC, the researchers developed an approach to characterize the system and measure its performance on a large scale. To do this, they built a custom cryo-optical metrology setup.

Using this technique, they demonstrated an entire chip with over 4,000 qubits that could be tuned to the same frequency while maintaining their spin and optical properties. They also built a digital twin simulation that connects the experiment with digitized modeling, which helps them understand the root causes of the observed phenomenon and determine how to efficiently implement the architecture.

In the future, the researchers could boost the performance of their system by refining the materials they used to make qubits or developing more precise control processes. They could also apply this architecture to other solid-state quantum systems.

This work was supported by the MITRE Corporation Quantum Moonshot Program, the U.S. National Science Foundation, the U.S. Army Research Office, the Center for Quantum Networks, and the European Union’s Horizon 2020 Research and Innovation Program.



Computer hardware

By Rahul Awati and Linda Rosencrance

What is computer hardware?

Computer hardware is a collective term used to describe any of the physical components of an analog or digital computer. The term hardware distinguishes the tangible aspects of a computing device from software, which consists of written, machine-readable instructions or programs that tell physical components what to do and when to execute the instructions.

Hardware and software are complementary. A computing device can function efficiently and produce useful output only when both hardware and software work together appropriately.

Computer hardware can be categorized as being either internal or external components. Generally, internal hardware components are those necessary for the proper functioning of the computer, while external hardware components are attached to the computer to add or enhance functionality.

What are internal computer hardware components?

Internal components collectively process or store the instructions delivered by the program or operating system (OS). These include the following:

  • Motherboard. This is a printed circuit board that holds the central processing unit (CPU) and other essential internal hardware and functions as the central hub that all other hardware components run through.
  • CPU. The CPU is the brain of the computer that processes and executes digital instructions from various programs; its clock speed determines the computer's performance and efficiency in processing data.
  • RAM. RAM, or dynamic RAM, is temporary memory storage that makes information immediately accessible to programs; RAM is volatile memory, so stored data is cleared when the computer powers off.
  • Hard drive. Hard disk drives are physical storage devices that store both permanent and temporary data in different formats, including programs, OSes, device files, photos, etc.
  • Solid-state drive (SSD). SSDs are solid-state storage devices based on NAND flash memory technology; SSDs are non-volatile, so they can safely store data even when the computer is powered down.
  • Optical drive. Optical drives typically reside in an on-device drive bay; they enable the computer to read and interact with nonmagnetic external media, such as compact disc read-only memory or digital video discs.
  • Heat sink. This is a passive piece of hardware that draws heat away from components to regulate/reduce their temperature and help ensure they continue to function properly. Typically, a heat sink is installed directly atop the CPU, which produces the most heat among internal components.
  • Graphics processing unit. This chip-based device processes graphical data and often functions as an extension to the main CPU.
  • Network interface card (NIC). A NIC is a circuit board or chip that enables the computer to connect to a network; also known as a network adapter or local area network adapter, it typically supports connection to an Ethernet network.

Other computing components, such as USB ports, power supplies, transistors and chips, are also types of internal hardware.
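For a hands-on view of several of the internal components listed above, a short script can report what a machine actually contains. This sketch uses Python's standard library plus the third-party psutil package, which must be installed separately.

```python
# Quick inventory of some internal components described above, using the
# standard library plus the third-party psutil package (pip install psutil).
import platform
import psutil

print("CPU:", platform.processor() or platform.machine())
print("Logical cores:", psutil.cpu_count(logical=True))
print("RAM:", round(psutil.virtual_memory().total / 2**30, 1), "GiB")

for part in psutil.disk_partitions(all=False):
    usage = psutil.disk_usage(part.mountpoint)
    print(f"Drive {part.device}: {round(usage.total / 2**30, 1)} GiB total")
```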

[Chart: typical internal computer hardware components]

What are external hardware components?

External hardware components, also called peripheral components, are those items that are often externally connected to the computer to control either input or output functions. These hardware devices are designed to either provide instructions to the software (input) or render results from its execution (output).

Common input hardware components include the following:

  • Mouse. A mouse is a hand-held pointing device that moves a cursor around a computer screen and enables interaction with objects on the screen. It may be wired or wireless.
  • Keyboard. A keyboard is an input device featuring a standard QWERTY keyset that enables users to input text, numbers or special characters.
  • Microphone. A microphone is a device that translates sound waves into electrical signals and supports computer-based audio communications.
  • Camera. A camera captures visual images and streams them to the computer or through a computer to a network device.
  • Touchpad. A touchpad is an input device, external or built into a laptop, used to control the pointer on a display screen. It is typically an alternative to an external mouse.
  • USB flash drive. A USB flash drive is an external, removable storage device that uses flash memory and interfaces with a computer through a USB port.
  • Memory card. A memory card is a type of portable external storage media, such as a CompactFlash card, used to store media or data files.

Other input hardware components include joysticks, styluses and scanners.

Examples of output hardware components include the following:

  • Monitor. A monitor is an output device similar to a TV screen that displays information, documents or images generated by the computing device.
  • Printer. Printers render electronic data from a computer into printed material.
  • Speaker. A speaker is an external audio output device that connects to a computer to generate a sound output.
  • Headphones, earphones, earbuds. Similar to speakers, these devices provide audio output that's audible only to a single listener.

Hardware vs. software

Hardware refers to the computer's tangible components or delivery systems that store and run the written instructions provided by the software. The software is the intangible part of the device that lets the user interact with the hardware and command it to perform specific tasks. Computer software includes the following:

  • OS and related utilities;
  • programs that control certain computer functions; and
  • applications that usually perform operations on user-supplied data.

On mobile devices and laptop computers, virtual keyboards are also considered software because they're not physical.

Since the software and hardware depend on each other to enable a computer to produce useful output, the software must be designed to work properly with the hardware.

The presence of malicious software, or malware, such as viruses, Trojan horses, spyware and worms, can have a huge effect on computer programs and a system's OS. Hardware is not affected by malware, though.

However, malware can affect the system in other ways. For example, it can consume a large portion of the computer's memory or even replicate itself to fill the device's hard drive. This slows down the computer and may also prevent legitimate programs from running. Malware can also prevent users from accessing the files in the computer's hardware storage.


What is hardware virtualization?

Hardware virtualization is the abstraction of physical computing resources from the software that uses those resources. Simply put, when software is used to create virtual versions of hardware, instead of using physical, tangible hardware components for some computing function, it is known as hardware virtualization.

Sometimes referred to as platform or server virtualization, hardware virtualization is executed on a particular hardware platform by host software. It requires a virtual machine manager called a hypervisor that creates virtual versions of internal hardware. This enables the hardware resources of one physical machine to be shared among OSes and applications and to be used more efficiently.
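One small, practical check related to hardware virtualization is whether the CPU advertises the relevant extensions at all. On Linux, Intel VT-x appears as the vmx flag and AMD-V as the svm flag in /proc/cpuinfo; the sketch below simply looks for them (Linux-only).

```python
# Check whether the CPU advertises hardware virtualization extensions on Linux
# (Intel VT-x shows up as the "vmx" flag, AMD-V as "svm" in /proc/cpuinfo).
def virtualization_flags(cpuinfo_path: str = "/proc/cpuinfo") -> set:
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}

if __name__ == "__main__":
    found = virtualization_flags()
    print("Hardware virtualization support:", ", ".join(sorted(found)) or "not advertised")
```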

In cloud computing, hardware virtualization is often associated with infrastructure as a service (IaaS), a delivery model that provides hardware resources over high-speed internet. A cloud service provider (CSP), such as Amazon Web Services or Microsoft Azure, hosts all the hardware components that are traditionally present in an on-premises data center, including servers, storage and networking hardware, as well as the software that makes virtualization possible.

This makes IaaS and CSPs different from a hardware as a service (HaaS) provider, which hosts only hardware but not software. Typically, an IaaS provider also supplies a range of services to accompany infrastructure components, such as the following:

  • load balancing

Some CSPs also provide storage resiliency services, such as automated backup, replication and disaster recovery.

What is hardware as a service?

While it's common for individuals or businesses to purchase computer hardware and then periodically replace or upgrade it, they can also lease physical and virtual hardware from a service provider. The provider then becomes responsible for keeping hardware up to date, including its various physical components and the software running on it.

This is known as the HaaS model.

The biggest advantage of HaaS is that it reduces the costs of hardware purchases and maintenance, enabling organizations to shift from a capital expense budget to a generally less expensive operating expense budget. Also, since most HaaS offerings are based on a pay-as-you-go model, it is easier for organizations to control costs while still having access to the hardware they need for their operations and business continuity.

In HaaS, physical components that belong to a managed service provider (MSP) are installed at a customer's site. A service-level agreement (SLA) defines the responsibilities of both parties.


The customer may either pay a monthly fee for using the MSP's hardware, or its use may be incorporated into the MSP's fee structure for installing, monitoring and maintaining the hardware. Either way, if the hardware breaks down or becomes outdated, the MSP is responsible for repairing or replacing it.

Depending upon the terms of the SLA, decommissioning hardware may include wiping proprietary data, physically destroying hard drives and certifying that old equipment has been recycled legally.


What is Quantum Computing?

Demystifying quantum mechanics.


Quantum computing is the next great frontier in human technological advancement. The transistor's revolution is plain to see, and its achievements for classical computing are everywhere: from the CPUs and GPUs that allow us to suspend disbelief, through the smartphones keeping us connected, and ultimately, the Internet, the fabric that has become an indelible element of our reality. While the transistor allowed for the programmable automation and digitization of human work (and play), quantum computing and its transistor analog, the qubit, will open doors that were previously closed while revealing new ones that we previously had no idea were even there. Here's an explanation of what quantum computing is, why we need it, and a high-level look at how it works.

Quantum computing is an analog to the computing we know and love. But while classical computing leverages the transistor, quantum computing takes advantage of the world of the infinitely small, the quantum world, to run calculations on specialized hardware known as Quantum Processing Units (QPUs). Qubits are the quantum equivalent of transistors. And while the transistor's development is increasingly constrained by quantum effects and difficulties in further miniaturization, quantum computing already thrives in this world. A quantum is the smallest indivisible unit of a physical quantity, which is why quantum computing's unit, the qubit, is usually made from single atoms or even from subatomic particles such as electrons and photons. But while transistors can only ever represent two states (either 1 or 0, which gave way to the binary world within our tech), qubits can represent 0, 1, and all combinations of both states at the same time. This ability is referred to as superposition, one of the phenomena behind quantum computing's prowess.

Qubits allow much more information to be considered and processed simultaneously, opening the door to solving problems with degrees of complexity that would stall even the most powerful present – and future – supercomputers. Problems with multiple variables, such as airplane traffic control (which takes into account speed, tonnage, and the multitude of simultaneous planes, flying or not, within an airspace); sensor placement (such as the BMW Sensor Placement Challenge, which was recently solved in mere minutes with quantum computing); the age-old optimization problem of the traveling salesman (attempting to find the shortest route connecting multiple sale locations); and protein folding (which attempts to foresee any of the trillions of ways an amino acid chain can arrange itself) are examples of workloads where quantum computers shine. Quantum computing also threatens to render currently used public-key cryptographic algorithms moot – protection that would take even the most powerful supercomputers too long to break at the human time scale could fall in a fraction of that time to a sufficiently powerful quantum computer. This frames another element of the race for quantum computers – the ability to create cryptographic algorithms that can withstand them. Institutions such as the National Institute of Standards and Technology (NIST) have been putting new post-quantum solutions through their paces to find ones that can guarantee security in the post-quantum future. Materials science, chemistry, cryptography, and multivariate problem solving are quantum computing's proverbial home, and more applications are sure to materialize as we grasp this technology's capabilities.

What Is Quantum Superposition?

If you were to imagine the flip of a coin, classical computing would divide its result into a 0 or a 1 according to the flip ending in either heads or tails. In the qubit world, however, you’d be able to see both heads and tails simultaneously, as well as the different positions the coin takes while it spins before your eyes as it rotates between both outcomes. While classical computers work with deterministic outcomes, quantum computing thus leverages the field of probabilities. This abundance of possible states allows quantum computers to process much more information than a binary system ever could. Other important quantum computing concepts besides superposition are entanglement and quantum interference.
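The coin-flip picture above can be made concrete with a tiny state-vector simulation: prepare an equal superposition and sample measurement outcomes according to the Born rule. This is a generic textbook sketch using NumPy, not tied to any particular quantum hardware or SDK.

```python
# Minimal state-vector sketch of superposition: a qubit in (|0> + |1>)/sqrt(2)
# yields 0 or 1 with equal probability when measured.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
psi = (ket0 + ket1) / np.sqrt(2)          # equal superposition ("spinning coin")

probabilities = np.abs(psi) ** 2           # Born rule: |amplitude|^2
rng = np.random.default_rng(seed=1)
samples = rng.choice([0, 1], size=1000, p=probabilities)
print(probabilities)                       # [0.5 0.5]
print(np.bincount(samples))                # roughly 500 zeros and 500 ones
```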

What Is Quantum Entanglement?

Entanglement happens when two qubits have been inextricably connected in such a way that you can’t describe the state of one of them without describing the state of the other. As a result, they’ve become a single system and influence one another — even though they are separate qubits. Their states are correlated, meaning that according to the entanglement type, both particles can be in the same or even opposite states, but knowing the state of one allows you to know the state of the other. This happens across any distance: entangled particles don’t really have a physical limit to how far away they can be from each other. This is why Einstein called entanglement “spooky action at a distance.” Imagine that you're watching a tennis match. The two players are correlated – the movements of one lead to a countermovement from the other. If you were to describe why tennis player A moved to one point of the court and hit the ball towards one area of its opponent’s field, you’d have to consider the previous actions of tennis player B; their current position; the quality and variables of their game, and several other factors. To describe the actions (or, in the qubit sense, the state) of one means you can’t ignore the actions (or state) of the other.
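Entanglement can be illustrated the same way with a two-qubit Bell state: the amplitudes only allow the outcomes 00 and 11, so measuring one qubit immediately fixes the other. Again, this is a generic state-vector sketch, not a description of any specific device.

```python
# Two-qubit Bell state (|00> + |11>)/sqrt(2): measuring one qubit immediately
# tells you the other's outcome, illustrating correlated (entangled) states.
import numpy as np

bell = np.zeros(4)
bell[0b00] = 1 / np.sqrt(2)
bell[0b11] = 1 / np.sqrt(2)

rng = np.random.default_rng(seed=2)
outcomes = rng.choice(4, size=5, p=np.abs(bell) ** 2)
for o in outcomes:
    q0, q1 = (o >> 1) & 1, o & 1
    print(f"qubit A = {q0}, qubit B = {q1}")   # always equal: 0,0 or 1,1
```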

What Is Quantum Noise?

Any system that’s trying to be balanced and coherent must withstand outside interference. This is why many computer components, such as audio cards, feature EMI (ElectroMagnetic Interference) shielding, or your house has insulation that tries to keep its environment stabler than what the world actually looks like outside your windows.

In quantum computing, coherence is a much, much more fickle affair. Qubit states and qubit entanglement are especially prone to environmental interference (noise) and can collapse in a microsecond (a millionth of a second). This noise can take the form of radiation; temperature (which is why some qubit designs need to be cooled to near absolute zero, or −273.15 °C); activity from neighboring qubits (much as closely packed transistors interfere with one another today); and even impacts from other subatomic particles invisible to the naked eye. And these are just some of the possible causes of noise that then introduce errors into the quantum computation, compromising the results. In classical computing, errors usually flip a bit (from 0 to 1 or vice-versa), but in quantum computing, as we've seen, there are many intermediate states of information. So errors can affect these states, of which there are orders of magnitude more than just a 1 or a 0.

This puts practical limitations on the amount of time a quantum computer’s qubits are operational, how long their entangled states last, and how accurate their results are.

More noise means that the qubit’s states can change or collapse (decohere) before a given workload is finished, generating a wrong result. Quantum computing thus tries to reduce environmental noise as much as possible by implementing error correction that checks and adapts to environmental interference or by trying to accelerate the speed at which qubits operate so that they can produce more work before the qubits’ coherence is lost.
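A crude classical analogy for this loss of fidelity is a stored bit that flips with a small probability at every time step; after enough steps the stored value is barely better than a coin toss. The sketch below uses made-up parameters purely to illustrate how quickly errors accumulate (real quantum error models are richer, since qubit states are continuous rather than binary).

```python
# Toy classical analogy for decoherence: each time step, a stored bit flips
# with small probability; after enough steps the value is unreliable.
# Parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(seed=3)
p_flip_per_step = 0.01
steps, trials = 200, 10_000

flips = rng.random((steps, trials)) < p_flip_per_step    # True where an error hits
final_value = flips.sum(axis=0) % 2                       # bit after all flips (started at 0)
print("fraction still correct:", (final_value == 0).mean())   # ~0.51, barely better than chance
```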

What Are the Current Challenges in Quantum Computing?


Quantum computing research is one of the most complex topics known to humankind, which places an immediate barrier on who can pursue it. Typically, only the wealthiest institutions or Big Tech companies have dipped their toes into it in any significant manner. Only a few scientists can (and want to) work in this field, and its infancy means significant investment in materials, iterative development, and research funding. The field is in its early stages, too, which is a challenge (or a playground, depending on how you see it). Currently, multiple companies are following their own, disparate roads towards building a functional quantum computer. IBM has chosen the superconducting qubit as its weapon of choice; Quantum Brilliance works with diamond-based qubits that can operate at ambient temperatures; QCI has gone the Entropy Quantum Computing (EQC) route, which tries to take environmental interference into account; Xanadu's Borealis QPU leverages photonics; Microsoft is still pursuing topological qubits that haven't even materialized yet. Each of these companies extols the merits of its chosen approach – and each has reasons to invest in it, borne from thousands of hours of work and millions of dollars invested. It's important to frame this not so much as a race; it simply means there are multiple avenues of exploration. But there is, in fact, a race towards additional funding and market share. The company that first breaks through towards quantum advantage (the point where a quantum computer provably outpaces any existing or future supercomputer in solving a particular problem or set of problems) will be the first to reap the benefits. And being the first to walk the next step for humanity's computing sciences has indisputable advantages in shaping its future.

What’s the Outlook for Quantum Computing?

Currently, quantum computers are still in the Noisy Intermediate-Scale Quantum (NISQ) era. Scientists are struggling to scale to the higher qubit counts and more complex arrangements of qubits that are necessary to unlock more powerful quantum computers. This is mostly due to the issue of noise and environmental interference, which we alluded to earlier. However, solving this problem is likely only a matter of time: post-NISQ quantum devices will eventually come, even if the absence of a specific name for that era is itself a reminder of the long road ahead. Expectations for quantum computing market growth vary, but most projections seem to point towards a market worth $20 billion to $30 billion by 2030. But this is an ecosystem that's seeing daily breakthroughs; all it takes is for one of those to accelerate the road towards the coveted age of quantum supremacy to throw those projections by the wayside. As the state of quantum computing currently stands, we can expect an acceleration in the pace of development and in the number of qubits being deployed in quantum processing units. IBM's roadmap is one of the clearest – the company expects to have as many as 433 operational qubits this year through its Osprey QPU, more than triple those found in its 2021 QPU, Eagle. The company aims to have a 1,121-qubit QPU (Condor) by 2023, and projects that its QPUs will house more than 1 million qubits from 2026 forward. That said, the exact number of qubits needed to leave the NISQ era behind is unclear; different qubits have different capabilities and can produce different amounts of work. Going forward, standardization is the name of the game: IBM's proposed CLOPS standard of quantum performance is one such example in a still nascent industry that's trying to coalesce. Concerted industry efforts to standardize comparisons between different QPUs are also underway and are a prerequisite for the healthy future of the space. It's a whole, wide world in the quantum computing space. And we're just getting started.

Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.


McKinsey Technology Trends Outlook 2023

After a tumultuous 2022 for technology investment and talent, the first half of 2023 has seen a resurgence of enthusiasm about technology’s potential to catalyze progress in business and society. Generative AI deserves much of the credit for ushering in this revival, but it stands as just one of many advances on the horizon that could drive sustainable, inclusive growth and solve complex global challenges.

To help executives track the latest developments, the McKinsey Technology Council has once again identified and interpreted the most significant technology trends unfolding today. While many trends are in the early stages of adoption and scale, executives can use this research to plan ahead by developing an understanding of potential use cases and pinpointing the critical skills needed as they hire or upskill talent to bring these opportunities to fruition.

Our analysis examines quantitative measures of interest, innovation, and investment to gauge the momentum of each trend. Recognizing the long-term nature and interdependence of these trends, we also delve into underlying technologies, uncertainties, and questions surrounding each trend. This year, we added an important new dimension for analysis—talent. We provide data on talent supply-and-demand dynamics for the roles of most relevance to each trend. (For more, please see the sidebar, “Research methodology.”)

New and notable

All of last year’s 14 trends remain on our list, though some experienced accelerating momentum and investment, while others saw a downshift. One new trend, generative AI, made a loud entrance and has already shown potential for transformative business impact.

Research methodology

To assess the development of each technology trend, our team collected data on five tangible measures of activity: search engine queries, news publications, patents, research publications, and investment. For each measure, we used a defined set of data sources to find occurrences of keywords associated with each of the 15 trends, screened those occurrences for valid mentions of activity, and indexed the resulting numbers of mentions on a 0–1 scoring scale that is relative to the trends studied. The innovation score combines the patents and research scores; the interest score combines the news and search scores. (While we recognize that an interest score can be inflated by deliberate efforts to stimulate news and search activity, we believe that each score fairly reflects the extent of discussion and debate about a given trend.) Investment measures the flows of funding from the capital markets into companies linked with the trend. Data sources for the scores include the following:

  • Patents. Data on patent filings are sourced from Google Patents.
  • Research. Data on research publications are sourced from the Lens (www.lens.org).
  • News. Data on news publications are sourced from Factiva.
  • Searches. Data on search engine queries are sourced from Google Trends.
  • Investment. Data on private-market and public-market capital raises are sourced from PitchBook.
  • Talent demand. Number of job postings is sourced from McKinsey’s proprietary Organizational Data Platform, which stores licensed, de-identified data on professional profiles and job postings. Data is drawn primarily from English-speaking countries.
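
As a rough illustration of the indexing and score-combining described above (each measure scaled to a 0–1 range relative to the trends studied, with patents and research averaged into an innovation score and news and search into an interest score), the short Python sketch below walks through the arithmetic. The trend names, raw counts, and equal weighting are illustrative assumptions, not McKinsey's actual pipeline.

    # Minimal sketch of 0-1 indexing and score combining as described in the
    # methodology sidebar. Trend names, counts, and weights are hypothetical.

    def index_scores(raw_counts):
        """Scale raw mention counts to a 0-1 range relative to the trends studied."""
        lo, hi = min(raw_counts.values()), max(raw_counts.values())
        span = (hi - lo) or 1  # avoid division by zero if all counts are equal
        return {trend: (count - lo) / span for trend, count in raw_counts.items()}

    # Hypothetical raw mention counts per trend, for each measure.
    patents  = index_scores({"generative_ai": 1200, "quantum": 800,  "web3": 300})
    research = index_scores({"generative_ai": 900,  "quantum": 1100, "web3": 200})
    news     = index_scores({"generative_ai": 5000, "quantum": 700,  "web3": 900})
    search   = index_scores({"generative_ai": 4000, "quantum": 600,  "web3": 1500})

    for trend in patents:
        innovation = (patents[trend] + research[trend]) / 2  # patents + research
        interest = (news[trend] + search[trend]) / 2         # news + search
        print(f"{trend}: innovation={innovation:.2f}, interest={interest:.2f}")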

In addition, we updated the selection and definition of trends from last year’s study to reflect the evolution of technology trends:

  • The generative-AI trend has been added since last year’s study.
  • We adjusted the definitions of electrification and renewables (previously called future of clean energy) and climate technologies beyond electrification and renewables (previously called future of sustainable consumption).
  • Data sources were updated. This year, we included only closed deals in PitchBook data, which revised downward the investment numbers for 2018–22. For future of space technologies investments, we used research from McKinsey’s Aerospace & Defense Practice.

This new entrant represents the next frontier of AI. Building upon existing technologies such as applied AI and industrializing machine learning, generative AI has high potential and applicability across most industries. Interest in the topic (as gauged by news and internet searches) increased threefold from 2021 to 2022. As we recently wrote, generative AI and other foundation models change the AI game by taking assistive technology to a new level, reducing application development time, and bringing powerful capabilities to nontechnical users. Generative AI is poised to add as much as $4.4 trillion in economic value from a combination of specific use cases and more diffuse uses, such as assisting with email drafts, that increase productivity. Still, while generative AI can unlock significant value, firms should not underestimate the economic significance and the growth potential that underlying AI technologies and industrializing machine learning can bring to various industries.

Investment in most tech trends tightened year over year, but the potential for future growth remains high, as further indicated by the recent rebound in tech valuations. Indeed, absolute investments remained strong in 2022, at more than $1 trillion combined, indicating great faith in the value potential of these trends. Trust architectures and digital identity grew the most out of last year’s 14 trends, increasing by nearly 50 percent as security, privacy, and resilience become increasingly critical across industries. Investment in other trends—such as applied AI, advanced connectivity, and cloud and edge computing—declined, but that is likely due, at least in part, to their maturity. More mature technologies can be more sensitive to short-term budget dynamics than more nascent technologies with longer investment time horizons, such as climate and mobility technologies. Also, as some technologies become more profitable, they can often scale further with lower marginal investment. Given that these technologies have applications in most industries, we have little doubt that mainstream adoption will continue to grow.

Organizations shouldn’t focus too heavily on the trends that are garnering the most attention. By focusing on only the most hyped trends, they may miss out on the significant value potential of other technologies and hinder the chance for purposeful capability building. Instead, companies seeking longer-term growth should focus on a portfolio-oriented investment across the tech trends most important to their business. Technologies such as cloud and edge computing and the future of bioengineering have shown steady increases in innovation and continue to have expanded use cases across industries. In fact, more than 400 edge use cases across various industries have been identified, and edge computing is projected to see double-digit growth globally over the next five years. Additionally, nascent technologies, such as quantum, continue to evolve and show significant potential for value creation. Our updated analysis for 2023 shows that the four industries likely to see the earliest economic impact from quantum computing (automotive, chemicals, financial services, and life sciences) stand to potentially gain up to $1.3 trillion in value by 2035. By carefully assessing the evolving landscape and considering a balanced approach, businesses can capitalize on both established and emerging technologies to propel innovation and achieve sustainable growth.

Tech talent dynamics

We can’t overstate the importance of talent as a key source of competitive advantage. A lack of talent is a top issue constraining growth. There’s a wide gap between the demand for people with the skills needed to capture value from the tech trends and available talent: our survey of 3.5 million job postings in these tech trends found that many of the skills in greatest demand have less than half as many qualified practitioners per posting as the global average. Companies should be on top of the talent market, ready to respond to notable shifts and to deliver a strong value proposition to the technologists they hope to hire and retain. For instance, recent layoffs in the tech sector may present a silver lining for other industries that have struggled to win the attention of attractive candidates and retain senior tech talent.

In addition, some of these technologies will accelerate the pace of workforce transformation. In the coming decade, 20 to 30 percent of the time that workers spend on the job could be transformed by automation technologies, leading to significant shifts in the skills required to be successful. And companies should continue to look at how they can adjust roles or upskill individuals to meet their tailored job requirements. Job postings in fields related to tech trends grew at a very healthy 15 percent between 2021 and 2022, even though global job postings overall decreased by 13 percent. Applied AI and next-generation software development together posted nearly one million jobs between 2018 and 2022. Next-generation software development saw the most significant growth in number of jobs (exhibit).

Job postings for fields related to tech trends grew by 400,000 between 2021 and 2022, with generative AI growing the fastest.

Image description:

Small multiples of 15 slope charts show the number of job postings in different fields related to tech trends from 2021 to 2022. Overall growth of all fields combined was about 400,000 jobs, with applied AI having the most job postings in 2022 and experiencing a 6% increase from 2021. Next-generation software development had the second-highest number of job postings in 2022 and had 29% growth from 2021. Other categories shown, from most job postings to least in 2022, are as follows: cloud and edge computing, trust architecture and digital identity, future of mobility, electrification and renewables, climate tech beyond electrification and renewables, advanced connectivity, immersive-reality technologies, industrializing machine learning, Web3, future of bioengineering, future of space technologies, generative AI, and quantum technologies.

End of image description.

This bright outlook for practitioners in most fields highlights the challenge facing employers who are struggling to find enough talent to keep up with their demands. The shortage of qualified talent has been a persistent limiting factor in the growth of many high-tech fields, including AI, quantum technologies, space technologies, and electrification and renewables. The talent crunch is particularly pronounced for trends such as cloud computing and industrializing machine learning, which are required across most industries. It’s also a major challenge in areas that employ highly specialized professionals, such as the future of mobility and quantum computing (see interactive).

Michael Chui is a McKinsey Global Institute partner in McKinsey’s Bay Area office, where Mena Issler is an associate partner, Roger Roberts is a partner, and Lareina Yee is a senior partner.

The authors wish to thank the following McKinsey colleagues for their contributions to this research: Bharat Bahl, Soumya Banerjee, Arjita Bhan, Tanmay Bhatnagar, Jim Boehm, Andreas Breiter, Tom Brennan, Ryan Brukardt, Kevin Buehler, Zina Cole, Santiago Comella-Dorda, Brian Constantine, Daniela Cuneo, Wendy Cyffka, Chris Daehnick, Ian De Bode, Andrea Del Miglio, Jonathan DePrizio, Ivan Dyakonov, Torgyn Erland, Robin Giesbrecht, Carlo Giovine, Liz Grennan, Ferry Grijpink, Harsh Gupta, Martin Harrysson, David Harvey, Kersten Heineke, Matt Higginson, Alharith Hussin, Tore Johnston, Philipp Kampshoff, Hamza Khan, Nayur Khan, Naomi Kim, Jesse Klempner, Kelly Kochanski, Matej Macak, Stephanie Madner, Aishwarya Mohapatra, Timo Möller, Matt Mrozek, Evan Nazareth, Peter Noteboom, Anna Orthofer, Katherine Ottenbreit, Eric Parsonnet, Mark Patel, Bruce Philp, Fabian Queder, Robin Riedel, Tanya Rodchenko, Lucy Shenton, Henning Soller, Naveen Srikakulam, Shivam Srivastava, Bhargs Srivathsan, Erika Stanzl, Brooke Stokes, Malin Strandell-Jansson, Daniel Wallance, Allen Weinberg, Olivia White, Martin Wrulich, Perez Yeptho, Matija Zesko, Felix Ziegler, and Delphine Zurkiya.

They also wish to thank the external members of the McKinsey Technology Council.

This interactive was designed, developed, and edited by McKinsey Global Publishing’s Nayomi Chibana, Victor Cuevas, Richard Johnson, Stephanie Jones, Stephen Landau, LaShon Malone, Kanika Punwani, Katie Shearer, Rick Tetzeli, Sneha Vats, and Jessica Wang.
