Scientific Computing with Case Studies

Dianne P. O'Leary, University of Maryland. SIAM, 2009. 395 pages. ISBN-13: 978-0-898716-66-5. Language: English.

Written for advanced undergraduate and early graduate courses in numerical analysis and scientific computing, this book provides a practical guide to the numerical solution of linear and nonlinear equations, differential equations, optimization problems, and eigenvalue problems. The book treats standard problems and introduces important variants such as sparse systems, differential-algebraic equations, constrained optimization, Monte Carlo simulations, and parametric studies.

In addition, a supplemental set of MATLAB code files is available for download, and use of the Optimization Toolbox is discussed.



Contents

  • Part I. Preliminaries: Mathematical Modeling, Errors, Hardware, and Software
  • 1. Errors and arithmetic
  • 2. Sensitivity analysis: when a little means a lot
  • 3. Computer memory and arithmetic: a look under the hood
  • 4. Design of computer programs: writing your legacy
  • Part II. Dense Matrix Computations: 5. Matrix factorizations
  • 6. Case study: image deblurring: I can see clearly now
  • 7. Case study: updating and downdating matrix factorizations: a change in plans
  • 8. Case study: the direction-of-arrival problem
  • Part III. Optimization and Data Fitting: 9. Numerical methods for unconstrained optimization
  • 10. Numerical methods for constrained optimization
  • 11. Case study: classified information: the data clustering problem
  • 12. Case study: achieving a common viewpoint: yaw, pitch, and roll
  • 13. Case study: fitting exponentials: an interest in rates
  • 14. Case study: blind deconvolution: errors, errors, everywhere
  • 15. Case study: blind deconvolution: a matter of norm
  • Part IV. Monte Carlo Computations: 16. Monte Carlo principles
  • 17. Case study: Monte-Carlo minimization and counting one, two, too many
  • 18. Case study: multidimensional integration: partition and conquer
  • 19. Case study: models of infections: person to person
  • Part V. Ordinary Differential Equations: 20. Solution of ordinary differential equations
  • 21. Case study: more models of infection: it's epidemic
  • 22. Case study: robot control: swinging like a pendulum
  • 23. Case study: finite differences and finite elements: getting to know you
  • Part VI. Nonlinear Equations and Continuation Methods: 24. Nonlinear systems
  • 25. Case study: variable-geometry trusses
  • 26. Case study: beetles, cannibalism, and chaos
  • Part VII. Sparse Matrix Computations with Application to Partial Differential Equations: 27. Solving sparse linear systems: taking the direct approach
  • 28. Iterative methods for linear systems
  • 29. Case study: elastoplastic torsion: twist and stress
  • 30. Case study: fast solvers and Sylvester equations: both sides now
  • 31. Case study: eigenvalues: valuable principles
  • 32. Multigrid methods: managing massive meshes
  • Bibliography
  • (source: Nielsen Book Data)




PLoS Biology, vol. 12(1), January 2014

Best Practices for Scientific Computing

Greg Wilson

1 Mozilla Foundation, Toronto, Ontario, Canada

D. A. Aruliah

2 University of Ontario Institute of Technology, Oshawa, Ontario, Canada

C. Titus Brown

3 Michigan State University, East Lansing, Michigan, United States of America

Neil P. Chue Hong

4 Software Sustainability Institute, Edinburgh, United Kingdom

5 Space Telescope Science Institute, Baltimore, Maryland, United States of America

Richard T. Guy

6 University of Toronto, Toronto, Ontario, Canada

Steven H. D. Haddock

7 Monterey Bay Aquarium Research Institute, Moss Landing, California, United States of America

Kathryn D. Huff

8 University of California Berkeley, Berkeley, California, United States of America

Ian M. Mitchell

9 University of British Columbia, Vancouver, British Columbia, Canada

Mark D. Plumbley

10 Queen Mary University of London, London, United Kingdom

11 University College London, London, United Kingdom

Ethan P. White

12 Utah State University, Logan, Utah, United States of America

Paul Wilson

13 University of Wisconsin, Madison, Wisconsin, United States of America

The author(s) have made the following declarations about their contributions: Wrote the paper: GVW DAA CTB NPCH MD RTG SHDH KH IMM MDP BW EPW PW.

We describe a set of best practices for scientific software development, based on research and experience, that will improve scientists' productivity and the reliability of their software.

Introduction

Scientists spend an increasing amount of time building and using software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. We describe a set of best practices for scientific software development that have solid foundations in research and experience, and that improve scientists' productivity and the reliability of their software.

Software is as important to modern scientific research as telescopes and test tubes. From groups that work exclusively on computational problems, to traditional laboratory and field scientists, more and more of the daily operation of science revolves around developing new algorithms, managing and analyzing the large amounts of data that are generated in single research projects, combining disparate datasets to assess synthetic problems, and other computational tasks.

Scientists typically develop their own software for these purposes because doing so requires substantial domain-specific knowledge. As a result, recent studies have found that scientists typically spend 30% or more of their time developing software [1] , [2] . However, 90% or more of them are primarily self-taught [1] , [2] , and therefore lack exposure to basic software development practices such as writing maintainable code, using version control and issue trackers, code reviews, unit testing, and task automation.

We believe that software is just another kind of experimental apparatus [3] and should be built, checked, and used as carefully as any physical apparatus. However, while most scientists are careful to validate their laboratory and field equipment, most do not know how reliable their software is [4] , [5] . This can lead to serious errors impacting the central conclusions of published research [6] : recent high-profile retractions, technical comments, and corrections because of errors in computational methods include papers in Science [7] , [8] , PNAS [9] , the Journal of Molecular Biology [10] , Ecology Letters [11] , [12] , the Journal of Mammalogy [13] , Journal of the American College of Cardiology [14] , Hypertension [15] , and The American Economic Review [16] .

In addition, because software is often used for more than a single project, and is often reused by other scientists, computing errors can have disproportionate impacts on the scientific process. This type of cascading impact caused several prominent retractions when an error from another group's code was not discovered until after publication [6] . As with bench experiments, not everything must be done to the most exacting standards; however, scientists need to be aware of best practices both to improve their own approaches and for reviewing computational work by others.

This paper describes a set of practices that are easy to adopt and have proven effective in many research settings. Our recommendations are based on several decades of collective experience both building scientific software and teaching computing to scientists [17], [18], reports from many other groups [19]–, guidelines for commercial and open source software development [26], and on empirical studies of scientific computing [28]–[31] and software development in general (summarized in [32]). None of these practices will guarantee efficient, error-free software development, but used in concert they will reduce the number of errors in scientific software, make it easier to reuse, and save the authors of the software time and effort that can be spent on the underlying scientific questions.

Our practices are summarized in Box 1 ; labels in the main text such as “(1a)” refer to items in that summary. For reasons of space, we do not discuss the equally important (but independent) issues of reproducible research, publication and citation of code and data, and open science. We do believe, however, that all of these will be much easier to implement if scientists have the skills we describe.

Box 1. Summary of Best Practices

  • A program should not require its readers to hold more than a handful of facts in memory at once.
  • Make names consistent, distinctive, and meaningful.
  • Make code style and formatting consistent.
  • Make the computer repeat tasks.
  • Save recent commands in a file for re-use.
  • Use a build tool to automate workflows.
  • Work in small steps with frequent feedback and course correction.
  • Use a version control system.
  • Put everything that has been created manually in version control.
  • Every piece of data must have a single authoritative representation in the system.
  • Modularize code rather than copying and pasting.
  • Re-use code instead of rewriting it.
  • Add assertions to programs to check their operation.
  • Use an off-the-shelf unit testing library.
  • Turn bugs into test cases.
  • Use a symbolic debugger.
  • Use a profiler to identify bottlenecks.
  • Write code in the highest-level language possible.
  • Document interfaces and reasons, not implementations.
  • Refactor code in preference to explaining how it works.
  • Embed the documentation for a piece of software in that software.
  • Use pre-merge code reviews.
  • Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems.
  • Use an issue tracking tool.

Write Programs for People, Not Computers

Scientists writing software need to write code that both executes correctly and can be easily read and understood by other programmers (especially the author's future self). If software cannot be easily read and understood, it is much more difficult to know that it is actually doing what it is intended to do. To be productive, software developers must therefore take several aspects of human cognition into account: in particular, that human working memory is limited, human pattern matching abilities are finely tuned, and human attention span is short [33] – [37] .

First, a program should not require its readers to hold more than a handful of facts in memory at once (1a) . Human working memory can hold only a handful of items at a time, where each item is either a single fact or a “chunk” aggregating several facts [33] , [34] , so programs should limit the total number of items to be remembered to accomplish a task. The primary way to accomplish this is to break programs up into easily understood functions, each of which conducts a single, easily understood, task. This serves to make each piece of the program easier to understand in the same way that breaking up a scientific paper using sections and paragraphs makes it easier to read.
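
As an illustrative sketch (the functions and names here are our own, not from the paper), a small analysis script might be decomposed into single-purpose functions, each of which can be remembered as one chunk:

```python
def read_measurements(path):
    """Read one measurement per line from a text file."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def mean(values):
    """Arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

def summarize(values):
    """Return (minimum, mean, maximum) for a list of numbers."""
    return min(values), mean(values), max(values)
```

A reader of `summarize` need not hold the details of `mean` in memory, only its one-line purpose.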

Second, scientists should make names consistent, distinctive, and meaningful (1b) . For example, using non-descriptive names, like a and foo, or names that are very similar, like results and results2, is likely to cause confusion.
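
A minimal before-and-after sketch (the variable names and the computation are invented for illustration):

```python
# Hard to read: names carry no meaning and are easily confused.
a = 9.81
foo = 2.0
results = 0.5 * a * foo ** 2
results2 = results * 2  # which "results" is which?

# Easier to read: each name says what the value is, including its units.
gravity_m_per_s2 = 9.81
fall_time_s = 2.0
fall_distance_m = 0.5 * gravity_m_per_s2 * fall_time_s ** 2
```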

Third, scientists should make code style and formatting consistent (1c) . If different parts of a scientific paper used different formatting and capitalization, it would make that paper more difficult to read. Likewise, if different parts of a program are indented differently, or if programmers mix CamelCaseNaming and pothole_case_naming, code takes longer to read and readers make more mistakes [35] , [36] .

Let the Computer Do the Work

Science often involves repetition of computational tasks such as processing large numbers of data files in the same way or regenerating figures each time new data are added to an existing analysis. Computers were invented to do these kinds of repetitive tasks but, even today, many scientists type the same commands in over and over again or click the same buttons repeatedly [17] . In addition to wasting time, sooner or later even the most careful researcher will lose focus while doing this and make mistakes.

Scientists should therefore make the computer repeat tasks (2a) and save recent commands in a file for re-use (2b) . For example, most command-line tools have a “history” option that lets users display and re-execute recent commands, with minor edits to filenames or parameters. This is often cited as one reason command-line interfaces remain popular [38] , [39] : “do this again” saves time and reduces errors.
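
As a sketch of saving commands for re-use without any special tooling (the filenames and parameters below are invented for illustration), a short driver script replaces retyping the same steps:

```python
"""Driver script: rerunning this file repeats the whole analysis identically."""

def clean(raw):
    """Drop obviously invalid (negative) readings."""
    return [v for v in raw if v >= 0]

def analyze(raw, threshold=0.5):
    """One repeatable pipeline step: clean, then count values above threshold."""
    cleaned = clean(raw)
    return sum(1 for v in cleaned if v > threshold)

# The exact commands, recorded once instead of retyped each session.
raw_data = [0.2, -1.0, 0.9, 0.7, -0.3, 0.4]
count_above = analyze(raw_data, threshold=0.5)
```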

A file containing commands for an interactive system is often called a script, though there is no real difference between this and a program. When such scripts are repeatedly used in the same way, or in combination, a workflow management tool can be used. The paradigmatic example is compiling and linking programs in languages such as Fortran, C++, Java, and C# [40]. The most widely used tool for this task is probably Make ( http://www.gnu.org/software/make ), although many alternatives are now available [41]. All of these allow people to express dependencies between files, i.e., to say that if A or B has changed, then C needs to be updated using a specific set of commands. These tools have been successfully adopted for scientific workflows as well [42].

To avoid errors and inefficiencies from repeating commands manually, we recommend that scientists use a build tool to automate workflows (2c) , e.g., specify the ways in which intermediate data files and final results depend on each other, and on the programs that create them, so that a single command will regenerate anything that needs to be regenerated.

In order to maximize reproducibility, everything needed to re-create the output should be recorded automatically in a format that other programs can read. (Borrowing a term from archaeology and forensics, this is often called the provenance of data.) There have been some initiatives to automate the collection of this information, and standardize its format [43] , but it is already possible to record the following without additional tools:

  • unique identifiers and version numbers for raw data records (which scientists may need to create themselves);
  • unique identifiers and version numbers for programs and libraries;
  • the values of parameters used to generate any given output; and
  • the names and version numbers of programs (however small) used to generate those outputs.
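
A sketch of how such a provenance record might be written without additional tools, assuming a JSON file and a field schema of our own choosing:

```python
import json
import platform

def record_provenance(path, data_version, program_version, parameters):
    """Write a machine-readable provenance record alongside an output file.

    The field names here are illustrative; any consistent schema works.
    """
    record = {
        "data_version": data_version,        # identifier for the raw data
        "program_version": program_version,  # version of the analysis code
        "parameters": parameters,            # values used for this run
        "python_version": platform.python_version(),
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2, sort_keys=True)
    return record

rec = record_provenance(
    "run_provenance.json",
    data_version="survey-2013-09-14",
    program_version="1.2.0",
    parameters={"threshold": 0.05, "iterations": 1000},
)
```

Because the record is itself machine-readable, later programs can verify exactly how an output was produced.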

Make Incremental Changes

Unlike traditional commercial software developers, but very much like developers in open source projects or startups, scientific programmers usually don't get their requirements from customers, and their requirements are rarely frozen [31] , [44] . In fact, scientists often can't know what their programs should do next until the current version has produced some results. This challenges design approaches that rely on specifying requirements in advance.

Programmers are most productive when they work in small steps with frequent feedback and course correction (3a) rather than trying to plan months or years of work in advance. While the details vary from team to team, these developers typically work in steps that are sized to be about an hour long, and these steps are often grouped in iterations that last roughly one week. This accommodates the cognitive constraints discussed in the first section, and acknowledges the reality that real-world requirements are constantly changing. The goal is to produce working (if incomplete) code after each iteration. While these practices have been around for decades, they gained prominence starting in the late 1990s under the banner of agile development [45] , [46] .

Two of the biggest challenges scientists and other programmers face when working with code and data are keeping track of changes (and being able to revert them if things go wrong), and collaborating on a program or dataset [23] . Typical solutions are to email software to colleagues or to copy successive versions of it to a shared folder, e.g., Dropbox ( http://www.dropbox.com ). However, both approaches are fragile and can lead to confusion and lost work when important changes are overwritten or out-of-date files are used. It's also difficult to find out which changes are in which versions or to say exactly how particular results were computed at a later date.

The standard solution in both industry and open source is to use a version control system (3b) (VCS) [27] , [47] . A VCS stores snapshots of a project's files in a repository (or a set of repositories). Programmers can modify their working copy of the project at will, then commit changes to the repository when they are satisfied with the results to share them with colleagues.

Crucially, if several people have edited files simultaneously, the VCS highlights the differences and requires them to resolve any conflicts before accepting the changes. The VCS also stores the entire history of those files, allowing arbitrary versions to be retrieved and compared, together with metadata such as comments on what was changed and the author of the changes. All of this information can be extracted to provide provenance for both code and data.

Many good VCSes are open source and freely available, including Git ( http://git-scm.com ), Subversion ( http://subversion.apache.org ), and Mercurial ( http://mercurial.selenic.com ). Many free hosting services are available as well, with GitHub ( https://github.com ), BitBucket ( https://bitbucket.org ), SourceForge ( http://sourceforge.net ), and Google Code ( http://code.google.com ) being the most popular. As with coding style, the best one to use is almost always whatever your colleagues are already using [27] .

Reproducibility is maximized when scientists put everything that has been created manually in version control (3c) , including programs, original field observations, and the source files for papers. Automated output and intermediate files can be regenerated as needed. Binary files (e.g., images and audio clips) may be stored in version control, but it is often more sensible to use an archiving system for them, and store the metadata describing their contents in version control instead [48] .

Don't Repeat Yourself (or Others)

Anything that is repeated in two or more places is more difficult to maintain. Every time a change or correction is made, multiple locations must be updated, which increases the chance of errors and inconsistencies. To avoid this, programmers follow the DRY Principle [49] , for “don't repeat yourself,” which applies to both data and code.

For data, this maxim holds that every piece of data must have a single authoritative representation in the system (4a) . Physical constants ought to be defined exactly once to ensure that the entire program is using the same value; raw data files should have a single canonical version, every geographic location from which data has been collected should be given an ID that can be used to look up its latitude and longitude, and so on.
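
As a sketch of this idea (the site IDs and coordinates below are invented for illustration), constants and lookup tables can each live in exactly one place:

```python
# One authoritative table of measurement-site locations; every lookup
# goes through it, so a correction is made exactly once.
SITE_LOCATIONS = {
    "MLO": (19.536, -155.576),  # illustrative site ID and coordinates
    "BRW": (71.323, -156.611),
}

SPEED_OF_LIGHT_M_PER_S = 299_792_458  # defined exactly once, in SI units

def site_latitude(site_id):
    """Look up a site's latitude from the single authoritative table."""
    return SITE_LOCATIONS[site_id][0]
```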

The DRY Principle applies to code at two scales. At small scales, modularize code rather than copying and pasting (4b) . Avoiding “code clones” has been shown to reduce error rates [50] : when a change is made or a bug is fixed, that change or fix takes effect everywhere, and people's mental model of the program (i.e., their belief that “this one's been fixed”) remains accurate. As a side effect, modularizing code allows people to remember its functionality as a single mental chunk, which in turn makes code easier to understand. Modularized code can also be more easily repurposed for other projects.
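
For example (the function below is our own illustration): once a computation lives in one function rather than in several pasted copies, a fix to its edge cases takes effect everywhere it is called.

```python
def normalize(values):
    """Scale values so they sum to 1.

    The handling of an all-zero input lives in exactly one place; if it
    were copied and pasted, a later fix could easily miss a copy.
    """
    total = sum(values)
    if total == 0:
        return [0.0 for _ in values]
    return [v / total for v in values]

weights = normalize([2, 3, 5])
```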

At larger scales, it is vital that scientific programmers re-use code instead of rewriting it (4c) . Tens of millions of lines of high-quality open source software are freely available on the web, and at least as much is available commercially. It is typically better to find an established library or package that solves a problem than to attempt to write one's own routines for well established problems (e.g., numerical integration, matrix inversions, etc.).

Plan for Mistakes

Mistakes are inevitable, so verifying and maintaining the validity of code over time is immensely challenging [51] . While no single practice has been shown to catch or prevent all mistakes, several are very effective when used in combination [47] , [52] , [53] .

The first line of defense is defensive programming. Experienced programmers add assertions to programs to check their operation (5a) because experience has taught them that everyone (including their future self) makes mistakes. An assertion is simply a statement that something holds true at a particular point in a program; assertions can be used to ensure that inputs are valid, outputs are consistent, and so on.
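
A sketch of the kind of assertions described (the rescaling function is our own illustration, not the paper's example):

```python
def rescale(values, lower=0.0, upper=1.0):
    """Linearly rescale values to lie in the interval [lower, upper]."""
    assert len(values) > 0, "cannot rescale an empty list"   # valid input
    assert lower < upper, "lower bound must be below upper"  # valid input
    lo, hi = min(values), max(values)
    assert hi > lo, "values must not all be equal"           # valid input
    result = [lower + (v - lo) / (hi - lo) * (upper - lower) for v in values]
    assert min(result) >= lower and max(result) <= upper     # consistent output
    return result
```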

Assertions can make up a sizeable fraction of the code in well-written applications, just as tools for calibrating scientific instruments can make up a sizeable fraction of the equipment in a lab. These assertions serve two purposes. First, they ensure that if something does go wrong, the program will halt immediately, which simplifies debugging. Second, assertions are executable documentation , i.e., they explain the program as well as checking its behavior. This makes them more useful in many cases than comments since the reader can be sure that they are accurate and up to date.

The second layer of defense is automated testing . Automated tests can check to make sure that a single unit of code is returning correct results ( unit tests ), that pieces of code work correctly when combined ( integration tests ), and that the behavior of a program doesn't change when the details are modified ( regression tests ). These tests are conducted by the computer, so that they are easy to rerun every time the program is modified. Creating and managing tests is easier if programmers use an off-the-shelf unit testing library (5b) to initialize inputs, run tests, and report their results in a uniform way. These libraries are available for all major programming languages including those commonly used in scientific computing [54] – [56] .
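
A minimal sketch using Python's built-in unittest library (the function under test is invented for illustration):

```python
import unittest

def running_total(values):
    """Cumulative sums of a list: [1, 2, 3] -> [1, 3, 6]."""
    totals, s = [], 0
    for v in values:
        s += v
        totals.append(s)
    return totals

class TestRunningTotal(unittest.TestCase):
    def test_simple(self):
        self.assertEqual(running_total([1, 2, 3]), [1, 3, 6])

    def test_empty(self):
        self.assertEqual(running_total([]), [])
```

Tests written this way can be rerun uniformly with Python's built-in runner (`python -m unittest`) every time the code changes.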

Tests check to see whether the code matches the researcher's expectations of its behavior, which depends on the researcher's understanding of the problem at hand [57] – [59] . For example, in scientific computing, tests are often conducted by comparing output to simplified cases, experimental data, or the results of earlier programs that are trusted. Another approach for generating tests is to turn bugs into test cases (5c) by writing tests that trigger a bug that has been found in the code and (once fixed) will prevent the bug from reappearing unnoticed. In combination these kinds of testing can improve our confidence that scientific code is operating properly and that the results it produces are valid. An additional benefit of testing is that it encourages programmers to design and build code that is testable (i.e., self-contained functions and classes that can run more or less independently of one another). Code that is designed this way is also easier to understand and more reusable.
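
A sketch of turning a (hypothetical) bug into a permanent regression test:

```python
def interval_overlap(a, b):
    """Length of the overlap between intervals a = (lo, hi) and b = (lo, hi).

    Imagine an earlier version returned a negative length for disjoint
    intervals; the regression test below pins the fixed behavior.
    """
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

# Regression test for the (hypothetical) disjoint-interval bug: once this
# assertion is in the test suite, the bug cannot reappear unnoticed.
assert interval_overlap((0.0, 1.0), (2.0, 3.0)) == 0.0
```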

No matter how good one's computational practice is, reasonably complex code will always initially contain bugs. Fixing bugs that have been identified is often easier if you use a symbolic debugger (5d) to track them down. A better name for this kind of tool would be “interactive program inspector” since a debugger allows users to pause a program at any line (or when some condition is true), inspect the values of variables, and walk up and down active function calls to figure out why things are behaving the way they are. Debuggers are usually more productive than adding and removing print statements or scrolling through hundreds of lines of log output [60] , because they allow the user to see exactly how the code is executing rather than just snapshots of state of the program at a few moments in time. In other words, the debugger allows the scientist to witness what is going wrong directly, rather than having to anticipate the error or infer the problem using indirect evidence.

Optimize Software Only after It Works Correctly

Today's computers and software are so complex that even experts find it hard to predict which parts of any particular program will be performance bottlenecks [61] . The most productive way to make code fast is therefore to make it work correctly, determine whether it's actually worth speeding it up, and—in those cases where it is—to use a profiler to identify bottlenecks (6a) .
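
A minimal profiling sketch using Python's built-in cProfile and pstats modules (the profiled function is a deliberately naive stand-in):

```python
import cProfile
import io
import pstats

def slow_sum_of_squares(n):
    """Deliberately naive loop, used here as the profiling target."""
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum_of_squares(100_000)
profiler.disable()

# Report the top entries by cumulative time; this is where the time went.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

Measuring first in this way avoids spending effort optimizing code that was never a bottleneck.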

This strategy also has interesting implications for choice of programming language. Research has confirmed that most programmers write roughly the same number of lines of code per unit time regardless of the language they use [62] . Since faster, lower level, languages require more lines of code to accomplish the same task, scientists are most productive when they write code in the highest-level language possible (6b) , and shift to low-level languages like C and Fortran only when they are sure the performance boost is needed. (Using higher-level languages also helps program comprehensibility, since such languages have, in a sense, “pre-chunked” the facts that programmers need to have in short-term memory.)

Taking this approach allows more code to be written (and tested) in the same amount of time. Even when it is known before coding begins that a low-level language will ultimately be necessary, rapid prototyping in a high-level language helps programmers make and evaluate design decisions quickly. Programmers can also use a high-level prototype as a test oracle for a high-performance low-level reimplementation, i.e., compare the output of the optimized (and usually more complex) program against the output from its unoptimized (but usually simpler) predecessor in order to check its correctness.
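
A sketch of the oracle idea with two invented implementations of the same computation:

```python
def mean_prototype(values):
    """Straightforward high-level version, trusted as the test oracle."""
    return sum(values) / len(values)

def mean_optimized(values):
    """Stand-in for a faster reimplementation (here, a one-pass loop)."""
    total, count = 0.0, 0
    for v in values:
        total += v
        count += 1
    return total / count

# Oracle check: the optimized version must agree with the prototype.
data = [0.5, 1.5, 2.0, 4.0]
assert abs(mean_optimized(data) - mean_prototype(data)) < 1e-12
```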

Document Design and Purpose, Not Mechanics

In the same way that a well-documented experimental protocol makes research methods easier to reproduce, good documentation helps people understand code. This makes the code more reusable and lowers maintenance costs [47]. Well-documented code also eases the hand-off when the graduate students and postdocs who have been writing code in a lab move on to the next phase of their careers. Reference documentation and descriptions of design decisions are key to improving the understandability of code; inline documentation that merely recapitulates the code is not. We therefore recommend that scientific programmers document interfaces and reasons, not implementations (7a). For example, a clear description at the beginning of a function of what it does, its inputs, and its outputs is useful, whereas a comment that simply restates an adjacent line of code does nothing to aid comprehension.
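A small, hypothetical Python illustration of the distinction between a useful interface comment and a redundant inline one:

```python
def rescale(values, lower=0.0, upper=1.0):
    """Map values linearly onto the interval [lower, upper].

    Inputs:  values -- a nonempty sequence of numbers
             lower, upper -- bounds of the target interval
    Output:  a new list; a constant input maps everything to lower.
    """
    lo, hi = min(values), max(values)
    span = hi - lo
    if span == 0:
        return [lower for _ in values]
    return [lower + (v - lo) * (upper - lower) / span for v in values]

# By contrast, a comment like the one below merely restates the code:
i = 0
i += 1  # add one to i
```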

If a substantial description of the implementation of a piece of software is needed, it is better to refactor code in preference to explaining how it works (7b), i.e., rather than write a paragraph to explain a complex piece of code, reorganize the code itself so that it doesn't need such an explanation. This may not always be possible—some pieces of code are intrinsically difficult—but the onus should always be on the author to convince his or her peers of that.

The best way to create and maintain reference documentation is to embed the documentation for a piece of software in that software (7c). Doing this increases the probability that when programmers change the code, they will update the documentation at the same time.

Embedded documentation usually takes the form of specially formatted and placed comments. Typically, a documentation generator such as Javadoc, Doxygen, or Sphinx extracts these comments and generates well-formatted web pages and other human-friendly documents (http://en.wikipedia.org/wiki/Comparison_of_documentation_generators). Alternatively, code can be embedded in a larger document that includes information about what the code is doing (i.e., literate programming). Common approaches to this include the use of knitr [63] and IPython Notebooks [64].
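Python's `doctest` module offers a lightweight version of this idea: the usage example embedded in the docstring below is rendered by documentation generators and can also be executed as a test, which helps keep documentation and code in sync. The function is our illustration:

```python
def fahrenheit_to_celsius(f):
    """Convert a temperature from degrees Fahrenheit to degrees Celsius.

    The example below is both documentation and a test: tools like Sphinx
    render it, and `python -m doctest <thisfile>` checks that it still
    matches what the code actually does.

    >>> fahrenheit_to_celsius(212)
    100.0
    """
    return (f - 32) * 5.0 / 9.0

print(fahrenheit_to_celsius(98.6))
```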

Collaborate

In the same way that having manuscripts reviewed by other scientists can reduce errors and make research easier to understand, reviews of source code can eliminate bugs and improve readability. A large body of research has shown that code reviews are the most cost-effective way of finding bugs in code [65], [66]. They are also a good way to spread knowledge and good practices around a team. In projects with shifting membership, such as most academic labs, code reviews help ensure that critical knowledge isn't lost when a student or postdoc leaves the lab.

Code can be reviewed either before or after it has been committed to a shared version control repository. Experience shows that if reviews don't have to be done in order to get code into the repository, they will soon not be done at all [27]. We therefore recommend that projects use pre-merge code reviews (8a).

An extreme form of code review is pair programming, in which two developers sit together while writing code. One (the driver) actually writes the code; the other (the navigator) provides real-time feedback and is free to track larger issues of design and consistency. Several studies have found that pair programming improves productivity [67], but many programmers find it intrusive. We therefore recommend that teams use pair programming when bringing someone new up to speed and when tackling particularly tricky problems (8b).

Once a team grows beyond a certain size, it becomes difficult to keep track of what needs to be reviewed, or of who's doing what. Teams can avoid a lot of duplicated effort and dropped balls if they use an issue tracking tool (8c) to maintain a list of tasks to be performed and bugs to be fixed [68]. This also makes it easier to transfer tasks to different people. Free repository hosting services like GitHub include issue tracking tools, and many good standalone tools exist as well, such as Trac (http://trac.edgewall.org).

We have outlined a series of recommended best practices for scientific computing based on extensive research, as well as our collective experience. These practices can be applied to individual work as readily as group work; separately and together, they improve the productivity of scientific programming and the reliability of the resulting code, and therefore the speed with which we produce results and our confidence in them. They are also, we believe, prerequisites for reproducible computational research: if software is not version controlled, readable, and tested, the chances of its authors (much less anyone else) being able to re-create results are remote.

Our 24 recommendations are a beginning, not an end. Individuals and groups who have incorporated them into their work will find links to more advanced practices at Software Carpentry (http://software-carpentry.org).

Research suggests that the time cost of implementing these kinds of tools and approaches in scientific computing is almost immediately offset by the gains in productivity of the programmers involved [17]. Even so, the recommendations described above may seem intimidating to implement. Fortunately, the different practices reinforce and support one another, so the effort required is less than the sum of adding each component separately. Nevertheless, we do not recommend that research groups attempt to implement all of these recommendations at once, but instead suggest that these tools be introduced incrementally over a period of time.

The recommended practices can be learned from the many excellent tutorials available online or through workshops and classes organized by groups like Software Carpentry. This type of training has proven effective at driving adoption of these tools in scientific settings [17], [69].

For computing to achieve the level of rigor that is expected throughout other parts of science, it is necessary for scientists to begin to adopt the tools and approaches that are known to improve both the quality of software and the efficiency with which it is produced. To facilitate this adoption, universities and funding agencies need to support the training of scientists in the use of these tools and the investment of time and money in building better scientific software. Investment in these approaches by both individuals and institutions will improve our confidence in the results of computational science and will allow us to make more rapid progress on important scientific questions than would otherwise be possible.

Acknowledgments

We are grateful to Joel Adamson, Aron Ahmadia, Roscoe Bartlett, Erik Bray, Stephen Crouch, Michael Jackson, Justin Kitzes, Adam Obeng, Karthik Ram, Yoav Ram, and Tracy Teal for feedback on this paper.

Funding Statement

Neil Chue Hong was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Grant EP/H043160/1 for the UK Software Sustainability Institute. Ian M. Mitchell was supported by NSERC Discovery Grant #298211. Mark Plumbley was supported by EPSRC through a Leadership Fellowship (EP/G007144/1) and a grant (EP/H043101/1) for SoundSoftware.ac.uk. Ethan White was supported by a CAREER grant from the US National Science Foundation (DEB 0953694). Greg Wilson was supported by a grant from the Sloan Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.


Mathematics LibreTexts

1: Introduction to Scientific Computing


This page is a draft and is under active development. 

  • Peter Staab
  • Fitchburg State University

So you picked up (or, more likely, clicked on) this text, and maybe you wonder what scientific computing actually is. This chapter gives a brief overview of the field and explains it with some concrete examples. In short, scientific computing is a field of study that solves problems from the sciences and mathematics that are generally too difficult to solve with standard analytic techniques, so that one must resort to writing computer code to get an answer.

  • 1.1: Examples of Scientific Computing
  • 1.2: Modeling
  • 1.3: Computing
  • 1.4: Ideas needed to do Effective Scientific Computing
  • 1.5: Examples of Scientific Computing
  • 1.6: Writing Code for Scientific Computing
  • 1.7: Let's get started



Technical University of Munich

Case Studies

Photo: fauxels / pexels.com

Combining study and practice: To help our students apply mathematics during their education, we offer case studies as a module for master's students. Here, you put into practice what you learn in the lectures – in real projects with a high degree of personal responsibility and freedom in how you design your solutions.

The first step is to understand a practical challenge and then model and analyze it in small teams. The students then develop and implement suitable solutions. In the process, our department also cooperates with external partners.

We offer case studies in the areas of Life Sciences, Optimization, and Scientific Computing.

Case study procedure

1. Project planning

Students start their project work in small teams. They get to know the project partners, jointly plan the project goals and the concrete procedure, draw up a schedule and assign responsibilities. Regular consultations with the supervisors ensure that the mutual expectations of the project are aligned and that the goals set are realistic.

2. Poster presentation

While the participants are already working on the problem, they prepare a presentation of their project and first results in the form of a poster. In doing so, they show that they are able to present scientific facts in a generally understandable way – without sacrificing any precision in their statements. The teams present their posters at a special event.

3. Intensive support and soft skills

The supervisors, of course, support the students on the technical side. The groups regularly report on their progress and plan the next steps; they discuss technical issues and receive support where needed. In addition, the students learn how to plan, structure and deliver presentations in an appealing way.

4. Interim presentation

In an interim presentation, each team reports on the current status of their project. Afterwards, participants and supervisors give their feedback.

5. Final workshop

The highlight and conclusion of the case studies is a scientific workshop. Here, the students present their results in a lecture followed by a discussion. In the case-study fields "Scientific Computing" and "Life Science Mathematics", each team also explains its approach and the core results of the project work in a final report.

Become a cooperation partner

Do you have a project that is suitable for students? We welcome partners from universities, research institutions and companies interested in research and development in the fields of optimization, scientific computing or life sciences. We will be happy to check whether a cooperation is suitable. Please contact Dr. Florian Lindemann .

Here you can find some of our cooperation partners from past case studies:

Optimization

  • Audi, BMW, car2go, DB, DLR, Flixbus, Framos, HAWE Hydraulik, iABG, IAV, Logivations, risklab GmbH, Siemens, the World Food Programme
  • various research institutes from other departments at TUM

Scientific Computing

  • Continental
  • High-Performance Computing Center Stuttgart HLRS
  • IBMI Institute of Biological and Medical Imaging

Life Science Mathematics

  • Helmholtz Center Munich
  • Clinic "Rechts der Isar"
  • TUM School of Life Sciences

Case Studies: Life Science Mathematics

In the Case Studies: Life Science Mathematics module, students work largely independently in small groups of 2 to 4 to model a biological system using methods we teach in our biomathematics and biostatistics classes. Within these groups, students receive intensive, individualized mentoring and support.

We offer introductions to soft skills such as presentation techniques and literature research. In addition, participants acquire and practice skills in organization and project work.

Module description:  Case Studies: Life Science Mathematics  (MA5616)

Focus and goals

We place a special emphasis on combining mathematics with a concrete application from the life sciences, and on the corresponding training. Most projects are current interdisciplinary ones, which the students carry out in collaboration with groups from different TUM departments, other research institutions, and clinics.

As a biomathematician, you will always work in interdisciplinary teams, which means engaging with and learning the technical languages of other fields. We therefore offer you the opportunity to try out your own ideas and implement them in practice-relevant projects during your mathematics studies. To this end, we have organized practice-oriented seminars in the life sciences for many years; in 2016, we adopted the case-studies concept from optimization, which implements exactly these ideas.


Prof. Christina Kuttler

kuttler(at)ma.tum.de


Prof. Johannes Müller

johannes.mueller(at)mytum.de

Case Studies: Optimization

Case studies in optimization  include two courses: "Case Studies: Discrete Optimization" and "Case Studies: Nonlinear Optimization". During every summer semester, students in these courses experience what it means

  • to apply complex mathematical methods to real-world problems,
  • to work together with companies from research and industry, and
  • to present their ideas and results in public.

This involves first understanding a practical challenge, then modeling and analyzing it in small teams. The students then develop and implement suitable solutions. In the process, our department usually cooperates with external partners.

Module description:

  • Case Studies: Discrete Optimization  (MA4512)
  • Case Studies: Nonlinear Optimization  (MA4513)


Dr. Michael Ritter

michael.ritter(at)tum.de


Dr. Florian Lindemann

lindemann(at)tum.de

Case Studies: Scientific Computing

The module "Case Studies: Scientific Computing" prepares students for future work in an interdisciplinary environment. For this purpose, they work on a practical project task from the natural sciences and technology. The case studies normally take place once a year. We cooperate with other TUM departments, non-university research institutions and industry partners. The topics of the projects – from areas such as autonomous driving, the simulation of novel materials, and the analysis of real-time-capable control systems – are determined by the cooperation partners.

Module description:   Case Studies: Scientific Computing  (MA4306)

Procedure and goals

Groups of 3 to 4 students work on a project task. To practice interdisciplinary cooperation, at least one participant should not be a mathematics student. In the case studies, the teams work on their individual projects in all their aspects. This includes

  • a precise formulation of the problem,
  • determining the resources required for the process in the sense of project management,
  • the mathematical modeling and selection of the mathematical techniques required for solving the problem,
  • their practical application and adaptation to the concrete problem, and
  • the professional presentation of the achieved results in a final report and the associated oral presentation to a larger audience.

All steps of the project work promote the students' ability to communicate and cooperate through teamwork. Together they find a common language and reflect on group processes. At the same time, they update already acquired specialist knowledge and skills and apply them practically and concretely – especially from an interdisciplinary point of view.


Prof. Rainer Callies

rainer.callies(at)tum.de

Practice and career for mathematics students

The Department of Mathematics offers you further opportunities to gain practical experience during your studies.

TUM Data Innovation Lab

The TUM Data Innovation Lab (TUM-DI-LAB)  is aimed at Master's students who want to research data-driven methods for interdisciplinary practical tasks. Every semester, the Lab offers new projects.

Case Studies: Bachelor Mathematics

In order for our students to learn how to work in an application-oriented way during their bachelor studies, we offer them Case Studies of Mathematical Modelling.

Published: 26 March 2024

The role of computational science in digital twins

Karen Willcox (ORCID: orcid.org/0000-0003-2156-9338) and Brittany Segundo

Nature Computational Science, volume 4, pages 147–149 (2024)


  • Applied mathematics
  • Computational science

Digital twins hold immense promise in accelerating scientific discovery, but the publicity currently outweighs the evidence base of success. We summarize key research opportunities in the computational sciences to enable digital twin technologies, as identified by a recent National Academies of Sciences, Engineering, and Medicine consensus study report.


The National Academies of Sciences, Engineering, and Medicine (NASEM) recently published a report on the Foundational Research Gaps and Future Directions for Digital Twins [1]. Driven by broad federal interest in digital twins, the authoring committee explored digital twin definitions, use cases, and needed mathematical, statistical, and computational research. This Comment highlights the report’s main messages, with a focus on those aspects relevant to the intersection of computational science and digital twins.

Digital twin definition and elements

The NASEM report [1] proposes the following definition of a digital twin, modified from a definition published by the American Institute of Aeronautics and Astronautics [2]:

“A digital twin is a set of virtual information constructs that mimics the structure, context, and behavior of a natural, engineered, or social system (or system-of-systems), is dynamically updated with data from its physical twin, has a predictive capability, and informs decisions that realize value. The bidirectional interaction between the virtual and the physical is central to the digital twin.”

The refined definition refers to “a natural, engineered, or social system (or system-of-systems)” to describe digital twins of physical systems in the broadest sense possible, including the engineered world, natural phenomena, biological entities, and social systems. The definition introduces the phrase “predictive capability” to emphasize that a digital twin must be able to issue predictions beyond the available data to drive decisions that realize value. Finally, the definition highlights the bidirectional interaction that comprises feedback flows of information from the physical system to the virtual representation to update the latter, and from the virtual back to the physical system to enable decision making, either automatic or with humans in the loop (Fig. 1 ). The notion of a digital twin goes beyond simulation to include tighter integration between models, data, and decisions.

Figure 1: Information flows bidirectionally between the virtual representation and the physical counterpart. These information flows may be through automated processes, human-driven processes, or a combination of the two. Adapted with permission from ref. [1], The National Academies Press.

The bidirectional interaction forms a feedback loop that comprises dynamic data-driven model updating (for instance, sensor fusion, inversion, data assimilation) and optimal decision making (for instance, control and sensor steering). The dynamic, bidirectional interaction tailors the digital twin to a particular physical counterpart and supports the evolution of the virtual representation as the physical counterpart changes or is better characterized. Data from the physical counterpart are used to update the virtual models, and the virtual models are used to drive changes in the physical system. This feedback loop may occur in real time, such as for dynamic control of an autonomous vehicle or a wind farm, or it may occur on slower time scales, such as post-imaging updating of a digital twin and subsequent treatment planning for a cancer patient. The digital twin may provide decision support to a human, or decision making may be shared jointly between the digital twin and a human as a human–agent team. Human–digital twin interactions may rely on a human to design, manage, and/or operate elements of the digital twin, such as selecting sensors and data sources, managing the models underlying the virtual representation, and implementing algorithms and analytics tools.
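The loop can be caricatured in a few lines of code. The toy "digital twin" below (our illustration, not from the report) tracks a single temperature: each step assimilates a noisy measurement into the virtual state (the physical-to-virtual flow) and uses the updated estimate to choose a control action (the virtual-to-physical flow). The model, gains, and noise levels are all arbitrary stand-ins:

```python
import random

random.seed(1)
true_temp, twin_temp = 90.0, 70.0   # physical state vs. virtual estimate (deg C)
setpoint, gain = 50.0, 0.4

for step in range(50):
    # physical -> virtual: assimilate a noisy sensor reading into the twin
    measurement = true_temp + random.gauss(0.0, 0.5)
    twin_temp += 0.5 * (measurement - twin_temp)
    # virtual -> physical: the twin's updated estimate informs a control action
    cooling = gain * (twin_temp - setpoint)
    true_temp -= cooling

print(round(true_temp, 1), round(twin_temp, 1))  # both settle near the setpoint
```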

An important theme that runs throughout the report is the notion that the digital twin be “fit for purpose,” meaning that model types, fidelity, resolution, parameterization, frequency of updates, and quantities of interest be chosen, and in many cases dynamically adapted, to fit the particular decision task and computational constraints at hand. Implicit in tailoring a digital twin to a task is the notion that an exact replica of a physical asset is not always necessary or desirable. Instead, digital twins should support complex tradeoffs of risk, performance, computation time, and cost in decision making. An additional consideration is the complementary role of models and data — a digital twin is distinguished from traditional modeling and simulation in the way that models and data work together to drive decision making. In cases in which an abundance of data exists and the decisions fall largely within the realm of conditions represented by the data, a data-centric view of a digital twin is appropriate. In cases that are data-poor and call upon the digital twin to issue predictions in extrapolatory regimes that go well beyond the available data, a model-centric view of a digital twin is appropriate — a mathematical model and its associated numerical model form the core of the digital twin, and data are assimilated through these models.

Computational science challenges and opportunities

Substantial foundational mathematical, statistical, and computational research is needed to bridge the gap between the current state of the art and aspirational digital twins.

Virtual representation

A fundamental challenge for digital twins is the vast range of spatial and temporal scales that the virtual representation may need to address. In many applications, the scale at which computations are feasible falls short in resolving key phenomena and does not achieve the fidelity needed to support decisions. Different applications of digital twins drive different requirements for modeling fidelity, data, precision, accuracy, visualization, and time-to-solution, yet many of the potential uses of digital twins are currently intractable with existing computational resources. Investments in both computing resources and mathematical/algorithmic advances are necessary elements for closing the gap between what can be simulated and what is needed to achieve trustworthy digital twins. Particular areas of importance include multiscale modeling, hybrid modeling, and surrogate modeling. Hybrid modeling entails a combination of empirical and mechanistic modeling approaches that leverage the best of both data-driven and model-driven formulations. Combining data-driven models with mechanistic models requires effective coupling techniques to facilitate the flow of information while understanding the inherent constraints and assumptions of each model. More generally, models of different fidelity may be employed across various subsystems, assumptions may need to be reconciled, and multimodal data from different sources must be synchronized. Overall, simulations for digital twins will likely require a federation of individual simulations rather than a single, monolithic software system, necessitating their integration for a full digital twin ecosystem. Aggregating risk measures and quantifying uncertainty across multiple, dynamic systems is nontrivial and requires the scaling of existing methods.
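As a toy illustration of hybrid modeling (ours, not the report's), the sketch below corrects an incomplete mechanistic model with a data-driven residual term; a simple least-squares line stands in for whatever empirical model a real digital twin would use:

```python
def mechanistic(x):
    # physics-based but incomplete: misses part of the true response
    return 2.0 * x

def fit_residual(xs, ys):
    """Least-squares line through the residuals y - mechanistic(x)."""
    rs = [y - mechanistic(x) for x, y in zip(xs, ys)]
    n = len(xs)
    xbar, rbar = sum(xs) / n, sum(rs) / n
    b = (sum((x - xbar) * (r - rbar) for x, r in zip(xs, rs))
         / sum((x - xbar) ** 2 for x in xs))
    a = rbar - b * xbar
    return lambda x: a + b * x

# "Observations" follow y = 2.5x + 1; the residual model learns the 0.5x + 1
# that the mechanistic model misses.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.5 * x + 1.0 for x in xs]
residual = fit_residual(xs, ys)

def hybrid(x):
    return mechanistic(x) + residual(x)

print(hybrid(10.0))  # close to 26.0 = 2.5*10 + 1
```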

Physical counterpart

Digital twins rely on the real-time (or near real-time) processing of accurate and reliable data that is often heterogeneous, large-scale, and multiresolution. While significant literature has been devoted to best practices around gathering and preparing data for use, several important opportunities merit further exploration. Handling outlier or anomalous data is critical to data quality assurance; robust methods are needed to identify and ignore spurious outliers while accurately representing salient rare events. On the other hand, constraints on resources, time, and accessibility may hinder gathering data at the frequency or resolution needed to adequately capture system dynamics. This under-sampling, particularly in complex systems with large spatiotemporal variability, could lead to overlooking critical events or significant features. Innovative sampling approaches should be used to optimize data collection. Artificial intelligence (AI) and machine learning (ML) methods that focus on maximizing average-case performance may yield large errors on scarce events, so new loss functions and performance metrics are needed. Improvements in sensor integrity, performance and reliability, as well as the ability to detect and mitigate adversarial attacks, are crucial in advancing the trustworthiness of digital twins. To address the vast amounts of data, such as large-scale streaming data, needed for digital twins in certain applications, data assimilation methods that leverage optimized ML models, architectures, and computational frameworks must be developed.

Ethics, privacy, data governance, and security

Digital twins in certain settings may rely on identifiable (or re-identifiable) data, while others may contain proprietary or sensitive information. Protecting individual privacy requires proactive consideration within each element of the digital twin ecosystem. In sensitive or high-risk settings, digital twins necessitate heightened levels of security, particularly around the transmission of information between the physical and virtual counterparts. In some cases, an automated controller may issue commands directly to the physical counterpart based on results from the virtual counterpart; securing these communications from interference is paramount.

Physical-to-virtual feedback flow

Inverse problem methodologies and data assimilation are required to combine physical observations and virtual models. Digital twins require calibration and updating on actionable time scales, which highlights foundational gaps in inverse problem and data assimilation theory, methodology, and computational approaches. ML and AI could have large roles to play in addressing these challenges, such as through online learning techniques for continuously updating models using streaming data. Additionally, in settings where data are limited, approaches such as active learning and reinforcement learning can help guide the collection of additional data most salient to the digital twin’s objectives.

Virtual-to-physical feedback flow

The digital twin may drive changes in the physical counterpart itself (for instance, through control) or in the observational systems associated with the physical counterpart (for instance, through sensor steering), through an automatic controller or a human. Mathematically and statistically sophisticated formulations exist for optimal experimental design (OED), but few approaches scale to the kinds of high-dimensional problems anticipated for digital twins. In the context of digital twins, OED must be tightly integrated with data assimilation and control or decision-support tasks to optimally design and steer data collection. Real-time digital twin computations may require edge computing under constraints on computational precision, power consumption, and communication. ML models that can be executed rapidly are well-suited to meet these requirements, but their black-box nature is a barrier to establishing trust. Additional work is needed to develop trusted ML and surrogate models that perform well under the computational and temporal conditions required. Dynamic adaptation needs for digital twins may benefit from reinforcement learning approaches, but there is a gap between theoretical performance guarantees and efficacious methods in practical domains.

Verification, validation, and uncertainty quantification (VVUQ)

VVUQ must play a role in all elements of the digital twin ecosystem and is critical to the responsible development, use, and sustainability of digital twins. Evolution of the physical counterpart in real-world use conditions, changes in data collection, noisiness of data, changes in the distribution of the data shared with the virtual twin, changes in the prediction and/or decision tasks posed to the digital twin, and updates to the digital twin virtual models all have consequences for VVUQ. Verification and validation help build trustworthiness in the virtual representation, while uncertainty quantification informs the quality of its predictions. Novel challenges of VVUQ for digital twins arise from model discrepancies, unresolved scales, surrogate modeling, AI, hybrid modeling, and the need to issue predictions in extrapolatory regimes. However, digital twin VVUQ must also address the uncertainties associated with the physical counterpart, including changes to sensors or data collection equipment, and the evolution of the physical counterpart. Applications that require real-time updating also require continual VVUQ, and this is not yet computationally feasible. VVUQ also plays a role in understanding the impact of mechanisms used to pass information between the physical and virtual. These include challenges arising from parameter uncertainty and ill-posed or indeterminate inverse problems, as well as uncertainty introduced by the inclusion of the human-in-the-loop.

Conclusions

Digital twins are emerging as enablers for significant, sustainable progress across multiple domains of science, engineering, and medicine. However, realizing these benefits requires a sustained and holistic commitment to an integrated research agenda that addresses foundational challenges across mathematics, statistics, and computing. Within the virtual representation, advancing the models themselves is necessarily domain specific, but advancing hybrid modeling and surrogate modeling embodies shared challenges that crosscut domains. Similarly, many of the physical counterpart challenges around sensor technologies and data are domain specific, but fusing multimodal data, ensuring data interoperability, and advancing data curation practices embody shared challenges that crosscut domains. When it comes to the bidirectional flows, dedicated efforts are needed to advance data assimilation, inverse methods, control, and sensor-steering methodologies that are applicable across domains, while at the same time recognizing the domain-specific nature of decision making. Finally, there is substantial opportunity to develop innovative digital twin VVUQ methods that translate across domains. For more detail on research directions for the computational sciences in digital twins, we refer the reader to the full 2023 report from NASEM 1 .

1. National Academies of Sciences, Engineering, and Medicine. Foundational Research Gaps and Future Directions for Digital Twins (The National Academies Press, 2023); https://doi.org/10.17226/26894

2. Digital Twin: Definition & Value — An AIAA and AIA Position Paper (AIAA, 2021); https://www.aia-aerospace.org/publications/digital-twin-definition-value-an-aiaa-and-aia-position-paper/


Acknowledgements

The 2023 NASEM report 1 to which this Comment refers was authored by K.W. (chair), D. Bingham, J. Chung, C. Chung, C. Cruz-Neira, C. Grant, J. Kinter III, R. Leung, P. Moin, L. Ohno-Machado, C. Parris, I. Qualters, I. Thiele, C. Tucker, R. Willett, and X. Ye. The report was sponsored by the Department of Energy (Advanced Scientific Computing Research, Biological and Environmental Research), the National Institutes of Health (National Cancer Institute, Office of Data Science Strategy, National Institute of Biomedical Imaging and Bioengineering), the National Science Foundation (Engineering Directorate, Mathematical and Physical Sciences Directorate), and the Department of Defense (Air Force Office of Scientific Research, Defense Advanced Research Projects Agency).

Author information

Authors and Affiliations

University of Texas at Austin, Austin, TX, USA

Karen Willcox

The National Academies of Sciences, Engineering, and Medicine, Washington DC, USA

Brittany Segundo


Contributions

K.W. was the committee chair and B.S. was the study director for the NASEM study committee that produced the 2023 report 1 .

Corresponding author

Correspondence to Karen Willcox.

Ethics declarations

Competing interests.

The authors declare no competing interests.


About this article

Cite this article.

Willcox, K., Segundo, B. The role of computational science in digital twins. Nat Comput Sci 4 , 147–149 (2024). https://doi.org/10.1038/s43588-024-00609-4


Published : 26 March 2024

Issue Date : March 2024




Universities Have a Computer-Science Problem

The case for teaching coders to speak French

[Photo: college students working at their computers during a hackathon at Berkeley in 2018]


Updated at 5:37 p.m. ET on March 22, 2024

Last year, 18 percent of Stanford University seniors graduated with a degree in computer science, more than double the proportion of just a decade earlier. Over the same period at MIT, that rate went up from 23 percent to 42 percent. These increases are common everywhere: The average number of undergraduate CS majors at universities in the U.S. and Canada tripled in the decade after 2005, and it keeps growing. Students’ interest in CS is intellectual—culture moves through computation these days—but it is also professional. Young people hope to access the wealth, power, and influence of the technology sector.

That ambition has created both enormous administrative strain and a competition for prestige. At Washington University in St. Louis, where I serve on the faculty of the Computer Science & Engineering department, each semester brings another set of waitlists for enrollment in CS classes. On many campuses, students may choose to study computer science at any of several different academic outposts, strewn throughout various departments. At MIT, for example, they might get a degree in “Urban Studies and Planning With Computer Science” from the School of Architecture, or one in “Mathematics With Computer Science” from the School of Science, or they might choose from among four CS-related fields within the School of Engineering. This seepage of computing throughout the university has helped address students’ booming interest, but it also serves to bolster their demand.

Another approach has gained in popularity. Universities are consolidating the formal study of CS into a new administrative structure: the college of computing. MIT opened one in 2019. Cornell set one up in 2020. And just last year, UC Berkeley announced that its own would be that university’s first new college in more than half a century. The importance of this trend—its significance for the practice of education, and also of technology—must not be overlooked. Universities are conservative institutions, steeped in tradition. When they elevate computing to the status of a college, with departments and a budget, they are declaring it a higher-order domain of knowledge and practice, akin to law or engineering. That decision will inform a fundamental question: whether computing ought to be seen as a superfield that lords over all others, or just a servant of other domains, subordinated to their interests and control. This is, by no happenstance, also the basic question about computing in our society writ large.

When I was an undergraduate at the University of Southern California in the 1990s, students interested in computer science could choose between two different majors: one offered by the College of Letters, Arts and Sciences, and one from the School of Engineering. The two degrees were similar, but many students picked the latter because it didn’t require three semesters’ worth of study of a (human) language, such as French. I chose the former, because I like French.

An American university is organized like this, into divisions that are sometimes called colleges , and sometimes schools . These typically enjoy a good deal of independence to define their courses of study and requirements as well as research practices for their constituent disciplines. Included in this purview: whether a CS student really needs to learn French.

The positioning of computer science at USC was not uncommon at the time. The first academic departments of CS had arisen in the early 1960s, and they typically evolved in one of two ways: as an offshoot of electrical engineering (where transistors got their start), housed in a college of engineering; or as an offshoot of mathematics (where formal logic lived), housed in a college of the arts and sciences. At some universities, including USC, CS found its way into both places at once.

The contexts in which CS matured had an impact on its nature, values, and aspirations. Engineering schools are traditionally the venue for a family of professional disciplines, regulated with licensure requirements for practice. Civil engineers, mechanical engineers, nuclear engineers, and others are tasked to build infrastructure that humankind relies on, and they are expected to solve problems. The liberal-arts field of mathematics, by contrast, is concerned with theory and abstraction. The relationship between the theoretical computer scientists in mathematics and the applied ones in engineering is a little like the relationship between biologists and doctors, or physicists and bridge builders. Keeping applied and pure versions of a discipline separate allows each to focus on its expertise, but limits the degree to which one can learn from the other.


By the time I arrived at USC, some universities had already started down a different path. In 1988, Carnegie Mellon University created what it says was one of the first dedicated schools of computer science. Georgia Institute of Technology followed two years later. “Computing was going to be a big deal,” says Charles Isbell, a former dean of Georgia Tech’s college of computing and now the provost at the University of Wisconsin-Madison. Emancipating the field from its prior home within the college of engineering gave it room to grow, he told me. Within a decade, Georgia Tech had used this structure to establish new research and teaching efforts in computer graphics, human-computer interaction, and robotics. (I spent 17 years on the faculty there, working for Isbell and his predecessors, and teaching computational media.)

Kavita Bala, Cornell University’s dean of computing, told me that the autonomy and scale of a college allows her to avoid jockeying for influence and resources. MIT’s computing dean, Daniel Huttenlocher, says that the speed at which computing evolves justifies the new structure.

But the computing industry isn’t just fast-moving. It’s also reckless. Technology tycoons say they need space for growth, and warn that too much oversight will stifle innovation. Yet we might all be better off, in certain ways, if their ambitions were held back even just a little. Instead of operating with a deep understanding or respect for law, policy, justice, health, or cohesion, tech firms tend to do whatever they want. Facebook sought growth at all costs, even if its take on connecting people tore society apart. If colleges of computing serve to isolate young, future tech professionals from any classrooms where they might imbibe another school’s culture and values—engineering’s studied prudence, for example, or the humanities’ focus on deliberation—this tendency might only worsen.


When I raised this concern with Isbell, he said that the same reasoning could apply to any influential discipline, including medicine and business. He’s probably right, but that’s cold comfort. The mere fact that universities allow some other powerful fiefdoms to exist doesn’t make computing’s centralization less concerning. Isbell admitted that setting up colleges of computing “absolutely runs the risk” of empowering a generation of professionals who may already be disengaged from consequences to train the next one in their image. Inside a computing college, there may be fewer critics around who can slow down bad ideas. Disengagement might redouble. But he said that dedicated colleges could also have the opposite effect. A traditional CS department in a school of engineering would be populated entirely by computer scientists, while the faculty for a college of computing like the one he led at Georgia Tech might also house lawyers, ethnographers, psychologists, and even philosophers like me. Huttenlocher repeatedly emphasized that the role of the computing college is to foster collaboration between CS and other disciplines across the university. Bala told me that her college was established not to teach CS on its own but to incorporate policy, law, sociology, and other fields into its practice. “I think there are no downsides,” she said.

Mark Guzdial is a former faculty member in Georgia Tech’s computing college, and he now teaches computer science in the University of Michigan’s College of Engineering. At Michigan, CS wasn’t always housed in engineering—Guzdial says it started out inside the philosophy department, as part of the College of Literature, Science and the Arts. Now that college “wants it back,” as one administrator told Guzdial. Having been asked to start a program that teaches computing to liberal-arts students, Guzdial has a new perspective on these administrative structures. He learned that Michigan’s Computer Science and Engineering program and its faculty are “despised” by their counterparts in the humanities and social sciences. “They’re seen as arrogant, narrowly focused on machines rather than people, and unwilling to meet other programs’ needs,” he told me. “I had faculty refuse to talk to me because I was from CSE.”

In other words, there may be downsides just to placing CS within an engineering school, let alone making it an independent college. Left entirely to themselves, computer scientists can forget that computers are supposed to be tools that help people. Georgia Tech’s College of Computing worked “because the culture was always outward-looking. We sought to use computing to solve others’ problems,” Guzdial said. But that may have been a momentary success. Now, at Michigan, he is trying to rebuild computing education from scratch, for students in fields such as French and sociology. He wants them to understand it as a means of self-expression or achieving justice—and not just a way of making software, or money.

Early in my undergraduate career, I decided to abandon CS as a major. Even as an undergraduate, I already had a side job in what would become the internet industry, and computer science, as an academic field, felt theoretical and unnecessary. Reasoning that I could easily get a job as a computer professional no matter what it said on my degree, I decided to study other things while I had the chance.

I have a strong memory of processing the paperwork to drop my computer-science major in college, in favor of philosophy. I walked down a quiet, blue-tiled hallway of the engineering building. All the faculty doors were closed, although the click-click of mechanical keyboards could be heard behind many of them. I knocked on my adviser’s door; she opened it, silently signed my paperwork without inviting me in, and closed the door again. The keyboard tapping resumed.

The whole experience was a product of its time, when computer science was a field composed of oddball characters, working by themselves, and largely disconnected from what was happening in the world at large. Almost 30 years later, their projects have turned into the infrastructure of our daily lives. Want to find a job? That’s LinkedIn. Keep in touch? Gmail, or Instagram. Get news? A website like this one, we hope, but perhaps TikTok. My university uses a software service sold by a tech company to run its courses. Some things have been made easier with computing. Others have been changed to serve another end, like scaling up an online business.


The struggle to figure out the best organizational structure for computing education is, in a way, a microcosm of the struggle under way in the computing sector at large. For decades, computers were tools used to accomplish tasks better and more efficiently. Then computing became the way we work and live. It became our culture, and we began doing what computers made possible, rather than using computers to solve problems defined outside their purview. Tech moguls became famous, wealthy, and powerful. So did CS academics (relatively speaking). The success of the latter—in terms of rising student enrollments, research output, and fundraising dollars—both sustains and justifies their growing influence on campus.

If computing colleges have erred, it may be in failing to exert their power with even greater zeal. For all their talk of growth and expansion within academia, the computing deans’ ambitions seem remarkably modest. Martial Hebert, the dean of Carnegie Mellon’s computing school, almost sounded like he was talking about the liberal arts when he told me that CS is “a rich tapestry of disciplines” that “goes far beyond computers and coding.” But the seven departments in his school correspond to the traditional, core aspects of computing plus computational biology. They do not include history, for example, or finance. Bala and Isbell talked about incorporating law, policy, and psychology into their programs of study, but only in the form of hiring individual professors into more traditional CS divisions. None of the deans I spoke with aspires to launch, say, a department of art within their college of computing, or one of politics, sociology, or film. Their vision does not reflect the idea that computing can or should be a superordinate realm of scholarship, on the order of the arts or engineering. Rather, they are proceeding as though it were a technical school for producing a certain variety of very well-paid professionals. A computing college deserving of the name wouldn’t just provide deeper coursework in CS and its closely adjacent fields; it would expand and reinvent other, seemingly remote disciplines for the age of computation.

Near the end of our conversation, Isbell mentioned the engineering fallacy, which he summarized like this: Someone asks you to solve a problem, and you solve it without asking if it’s a problem worth solving. I used to think computing education might be stuck in a nesting-doll version of the engineer’s fallacy, in which CS departments have been asked to train more software engineers without considering whether more software engineers are really what the world needs. Now I worry that they have a bigger problem to address: how to make computer people care about everything else as much as they care about computers.

This article originally mischaracterized the views of MIT’s computing dean, Daniel Huttenlocher. He did not say that computer science would be held back in an arts-and-science or engineering context, or that it needs to be independent.


USC team selected for NSF project to address equitable water solutions

Testing for safe water

According to the National Institutes of Health, 2.2 billion people worldwide lacked safely managed drinking water in 2022, and two billion more lacked a basic handwashing facility. These numbers are estimated to keep growing through 2030 unless proactive solutions are developed.

To help solve these pressing issues, a team led by the University of South Carolina’s College of Engineering and Computing (CEC) faculty was one of 15 multidisciplinary teams selected for phase one of the National Science Foundation (NSF) Convergence Accelerator program’s Track K: Equitable Water Solutions. The NSF is investing $9.8 million toward the track topic, which aims to develop innovative technologies and solutions to improve U.S. freshwater systems.

Track K builds upon an investment in foundational research from two NSF-funded workshops. Based on findings from both workshops, there is an urgent need to combine existing knowledge with advancements in areas such as engineering, computing and environmental sciences to create new technologies and solutions. Some of the challenges that will be addressed include freshwater supply and management, and resiliency against rising temperatures, drought and pollution.

"Ensuring safe and equitable water resources while incorporating environmentally sustainable practices is imperative to our future," says Erwin Gianchandani, NSF assistant director for Technology, Innovation and Partnerships. "Through programs like the Convergence Accelerator, NSF is harnessing the nation's diverse talent to stimulate innovation, technologies and solutions to address fit-for-purpose needs across the nation."

Civil and Environmental Engineering Professor Jasim Imran, an expert in water resources engineering, is principal investigator for USC’s $650,000 project, “COMPASS: Comprehensive Prediction, Assessment and Equitable Solutions for Storm-Induced Contamination of Freshwater Systems.” Co-principal investigators are CEC professors Austin Downey (mechanical engineering), Erfan Goharian (civil and environmental engineering) and Jason Bakos (computer science and engineering). Also involved in the project are Etienne Toussaint (USC School of Law), Mohammed Baalousha (USC Arnold School of Public Health), Meeta Banerjee and Thomas Crawford (USC College of Arts and Sciences), and Sadik Khan (Jackson State University). The project began this past January.


The project addresses the challenges of freshwater quality and quantity by implementing next-generation sensors, advanced flood modeling and co-generated policy knowledge to enhance community resiliency. Extreme weather events often result in the release of toxic chemicals, sewage, and agricultural wastes, which disproportionately affect underserved communities with outdated infrastructure and limited resources. 

The research will utilize a modular sensor system, including unpiloted aerial vehicles and low-cost nuclear magnetic resonance (NMR) spectrometers, to monitor and assess contaminants in watersheds. The interdisciplinary team, which includes expertise in social sciences, public policy and environmental justice, aims to empower communities to implement equitable and sustainable solutions.

“Developing an equitable solution for access to safe water requires a diversity of expertise, viewpoints and lived experiences,” Imran says. “In collaboration with industry and government agencies, the team will leverage next-generation sensors to develop an enhanced understanding of the interactions between storm-induced contaminants and communities. A key outcome is that it will drive new knowledge on policy and planning that will be co-generated by scientists, engineers and policy experts.”

As part of the project, low-cost, field-deployable NMR sensor systems will be developed to integrate data collection methods with hydrologic modeling. NMR also provides optimal sensing technology for a contaminant detection, quantification, and tracking system without restricting sensor development to a specific contaminant.

“The team will develop and deploy semi-permanent sensing systems consisting of NMR spectroscopy,” Imran says. “These systems will take two forms: an unmanned aerial vehicle-deployable smart buoy for aquatic environments, and a pump-through system that would sit next to a body of water and pump sampled water through a tube to the NMR system.”

The project will establish the groundwork for a potential second phase, which would lead to an enhanced understanding of community vulnerability to storm-induced contaminants, advancements in acquiring data for real-time flood and contaminant tracking, and equipping communities with tools to design adaptive, active and sustainable next-generation water infrastructure.
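The contaminant-tracking goal described above rests on classical transport modeling. The project's own models are not published in this article; as a purely illustrative sketch, the snippet below steps a 1-D advection-diffusion equation for a contaminant pulse moving down a stream reach, using explicit finite differences and invented parameter values.

```python
import numpy as np

# Illustrative 1-D advection-diffusion of a contaminant plume in a stream:
#   dc/dt + u dc/dx = D d2c/dx2
# All parameters below are invented for illustration only.
nx, dx = 200, 1.0          # 200 m reach, 1 m grid spacing
u, D = 0.3, 0.5            # flow speed (m/s) and dispersion (m^2/s)
dt = 0.5                   # time step within stability limits (CFL = 0.15)

c = np.zeros(nx)
c[20:30] = 1.0             # initial contaminant pulse near the upstream end

for _ in range(200):       # simulate 100 s of transport
    # Upwind advection plus central diffusion; zero-gradient boundaries.
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    c = c + dt * (adv + dif)
    c[0], c[-1] = c[1], c[-2]

peak = int(np.argmax(c))
print(f"plume peak near x = {peak} m, max concentration {c.max():.3f}")
```

In a real system of the kind the article describes, the sensor network would supply boundary and source terms, and the model would run in two or three dimensions over real hydrology rather than a uniform channel.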

"Access to clean water, and especially equitable access, is and will be a challenge that is on top of mind, and one which requires a truly convergent approach, covering engineering, scientific, political, and social dimensions,” says CEC Dean Hossein Haj-Hariri. “To be one of a small handful of teams to receive a planning grant for this topic is a testament that we have the thought-leading minds to be on top of this challenge.”



Scientific Consensus


It’s important to remember that scientists always focus on the evidence, not on opinions. Scientific evidence continues to show that human activities (primarily the human burning of fossil fuels) have warmed Earth’s surface and its ocean basins, which in turn have continued to impact Earth’s climate. This is based on over a century of scientific evidence forming the structural backbone of today's civilization.

NASA Global Climate Change presents the state of scientific knowledge about climate change while highlighting the role NASA plays in better understanding our home planet. This effort includes citing multiple peer-reviewed studies from research groups across the world, 1 illustrating the accuracy and consensus of research results (in this case, the scientific consensus on climate change) consistent with NASA’s scientific research portfolio.

With that said, multiple studies published in peer-reviewed scientific journals 1 show that climate-warming trends over the past century are extremely likely due to human activities. In addition, most of the leading scientific organizations worldwide have issued public statements endorsing this position. The following is a partial list of these organizations, along with links to their published statements and a selection of related resources.

American Scientific Societies

Statement on climate change from 18 scientific associations.

"Observations throughout the world make it clear that climate change is occurring, and rigorous scientific research demonstrates that the greenhouse gases emitted by human activities are the primary driver." (2009) 2

American Association for the Advancement of Science

"Based on well-established evidence, about 97% of climate scientists have concluded that human-caused climate change is happening." (2014) 3


American Chemical Society

"The Earth’s climate is changing in response to increasing concentrations of greenhouse gases (GHGs) and particulate matter in the atmosphere, largely as the result of human activities." (2016-2019) 4


American Geophysical Union

"Based on extensive scientific evidence, it is extremely likely that human activities, especially emissions of greenhouse gases, are the dominant cause of the observed warming since the mid-20th century. There is no alternative explanation supported by convincing evidence." (2019) 5


American Medical Association

"Our AMA ... supports the findings of the Intergovernmental Panel on Climate Change’s fourth assessment report and concurs with the scientific consensus that the Earth is undergoing adverse global climate change and that anthropogenic contributions are significant." (2019) 6


American Meteorological Society

"Research has found a human influence on the climate of the past several decades ... The IPCC (2013), USGCRP (2017), and USGCRP (2018) indicate that it is extremely likely that human influence has been the dominant cause of the observed warming since the mid-twentieth century." (2019) 7


American Physical Society

"Earth's changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe. While natural sources of climate variability are significant, multiple lines of evidence indicate that human influences have had an increasingly dominant effect on global climate warming observed since the mid-twentieth century." (2015) 8


The Geological Society of America

"The Geological Society of America (GSA) concurs with assessments by the National Academies of Science (2005), the National Research Council (2011), the Intergovernmental Panel on Climate Change (IPCC, 2013) and the U.S. Global Change Research Program (Melillo et al., 2014) that global climate has warmed in response to increasing concentrations of carbon dioxide (CO2) and other greenhouse gases ... Human activities (mainly greenhouse-gas emissions) are the dominant cause of the rapid warming since the middle 1900s (IPCC, 2013)." (2015) 9


Science Academies

International Academies: Joint Statement

"Climate change is real. There will always be uncertainty in understanding a system as complex as the world’s climate. However there is now strong evidence that significant global warming is occurring. The evidence comes from direct measurements of rising surface air temperatures and subsurface ocean temperatures and from phenomena such as increases in average global sea levels, retreating glaciers, and changes to many physical and biological systems. It is likely that most of the warming in recent decades can be attributed to human activities (IPCC 2001)." (2005, 11 international science academies) 10

U.S. National Academy of Sciences

"Scientists have known for some time, from multiple lines of evidence, that humans are changing Earth’s climate, primarily through greenhouse gas emissions." 11


U.S. Government Agencies

U.S. Global Change Research Program

"Earth’s climate is now changing faster than at any point in the history of modern civilization, primarily as a result of human activities." (2018, 13 U.S. government departments and agencies) 12


Intergovernmental Bodies

Intergovernmental Panel on Climate Change

“It is unequivocal that the increase of CO2, methane, and nitrous oxide in the atmosphere over the industrial era is the result of human activities and that human influence is the principal driver of many changes observed across the atmosphere, ocean, cryosphere, and biosphere.” “Since systematic scientific assessments began in the 1970s, the influence of human activity on the warming of the climate system has evolved from theory to established fact.” 13-17


Other Resources

List of Worldwide Scientific Organizations

The following page lists the nearly 200 worldwide scientific organizations that hold the position that climate change has been caused by human action. http://www.opr.ca.gov/facts/list-of-scientific-organizations.html

U.S. Agencies

The following page contains information on what federal agencies are doing to adapt to climate change. https://www.c2es.org/site/assets/uploads/2012/02/climate-change-adaptation-what-federal-agencies-are-doing.pdf

Technically, a “consensus” is a general agreement of opinion, but the scientific method steers us away from this to an objective framework. In science, facts or observations are explained by a hypothesis (a statement of a possible explanation for some natural phenomenon), which can then be tested and retested until it is refuted (or disproved).

As scientists gather more observations, they will build off one explanation and add details to complete the picture. Eventually, a group of hypotheses might be integrated and generalized into a scientific theory, a scientifically acceptable general principle or body of principles offered to explain phenomena.

1. K. Myers et al., "Consensus revisited: quantifying scientific agreement on climate change and climate expertise among Earth scientists 10 years later", Environmental Research Letters Vol. 16, No. 10, 104030 (20 October 2021); DOI: 10.1088/1748-9326/ac2774
M. Lynas et al., "Greater than 99% consensus on human caused climate change in the peer-reviewed scientific literature", Environmental Research Letters Vol. 16, No. 11, 114005 (19 October 2021); DOI: 10.1088/1748-9326/ac2966
J. Cook et al., "Consensus on consensus: a synthesis of consensus estimates on human-caused global warming", Environmental Research Letters Vol. 11, No. 4 (13 April 2016); DOI: 10.1088/1748-9326/11/4/048002
J. Cook et al., "Quantifying the consensus on anthropogenic global warming in the scientific literature", Environmental Research Letters Vol. 8, No. 2 (15 May 2013); DOI: 10.1088/1748-9326/8/2/024024
W. R. L. Anderegg, "Expert Credibility in Climate Change", Proceedings of the National Academy of Sciences Vol. 107, No. 27, 12107-12109 (21 June 2010); DOI: 10.1073/pnas.1003187107
P. T. Doran & M. K. Zimmerman, "Examining the Scientific Consensus on Climate Change", Eos Transactions American Geophysical Union Vol. 90, Issue 3, 22 (2009); DOI: 10.1029/2009EO030002
N. Oreskes, "Beyond the Ivory Tower: The Scientific Consensus on Climate Change", Science Vol. 306, No. 5702, 1686 (3 December 2004); DOI: 10.1126/science.1103618

2. Statement on climate change from 18 scientific associations (2009)

3. AAAS Board Statement on Climate Change (2014)

4. ACS Public Policy Statement: Climate Change (2016-2019)

5. Society Must Address the Growing Climate Crisis Now (2019)

6. Global Climate Change and Human Health (2019)

7. Climate Change: An Information Statement of the American Meteorological Society (2019)

8. American Physical Society (2021)

9. GSA Position Statement on Climate Change (2015)

10. Joint science academies' statement: Global response to climate change (2005)

11. Climate at the National Academies

12. Fourth National Climate Assessment: Volume II (2018)

13. IPCC Fifth Assessment Report, Summary for Policymakers, SPM 1.1 (2014)

14. IPCC Fifth Assessment Report, Summary for Policymakers, SPM 1 (2014)

15. IPCC Sixth Assessment Report, Working Group 1 (2021)

16. IPCC Sixth Assessment Report, Working Group 2 (2022)

17. IPCC Sixth Assessment Report, Working Group 3 (2022)


Colby Bodtorf

IBM Cloud Case Study

In today’s digital world, many companies are turning to cloud computing to improve their operations. Cloud computing offers various benefits, but it also comes with its challenges. In this blog post, we’ll examine a case study from the IBM Cloud Case Studies site to understand how one company benefited from using IBM Cloud, along with the advantages and disadvantages of cloud computing.

Chosen Case Study: “Transforming Financial Operations with IBM Cloud”

The chosen case study focuses on how a financial company improved its operations by adopting IBM Cloud. The company struggled to manage large amounts of data and needed a solution to make its operations more efficient. By implementing IBM Cloud services, it was able to streamline its processes, improve data security, and enhance customer service, ultimately making its operations much smoother.

Advantages of Cloud Computing:

  • Scalability: Companies can easily scale their resources up or down based on their needs, allowing for flexibility and cost savings.
  • Cost-Efficiency: Cloud computing eliminates the need for companies to invest in expensive hardware and infrastructure upfront, reducing overall IT costs.
  • Accessibility: Cloud services can be accessed from anywhere with an internet connection, enabling remote work and collaboration.
  • Automatic Updates: Cloud providers regularly update their services with new features and security patches, relieving companies of the burden of manual updates.
  • Disaster Recovery: Cloud platforms offer built-in disaster recovery solutions, ensuring that companies can quickly recover their data in case of emergencies.
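The scalability bullet can be made concrete with a toy autoscaling rule: grow or shrink the instance count in proportion to observed load. The sketch below is a generic illustration of the proportional formula used by many autoscalers (for example, Kubernetes' Horizontal Pod Autoscaler), not IBM Cloud's actual API; the function name is made up:

```python
import math

def desired_replicas(current_replicas, observed_cpu_pct, target_cpu_pct=60.0):
    """Proportional autoscaling: scale the instance count by the ratio of
    observed load to target load, rounding up and never dropping below 1."""
    if observed_cpu_pct <= 0:
        return 1
    return max(1, math.ceil(current_replicas * observed_cpu_pct / target_cpu_pct))

# Load at double the target -> instance count doubles; light load -> scale in.
print(desired_replicas(4, 120.0))  # 8
print(desired_replicas(4, 30.0))   # 2
```

Cost-efficiency follows from the same rule run in reverse: when load falls, the replica count shrinks and idle capacity is released instead of being paid for.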

Disadvantages of Cloud Computing:

  • Security Concerns: Storing data on third-party servers raises concerns about data security and privacy breaches.
  • Downtime Risks: Companies are reliant on their cloud service provider’s uptime, and any downtime can disrupt operations and affect productivity.
  • Internet Dependency: Cloud computing requires a stable internet connection, which may not be available in all locations.
  • Vendor Lock-In: Migrating from one cloud provider to another can be challenging and costly, leading to vendor lock-in.
  • Compliance Challenges: Companies must ensure that they comply with industry regulations and data protection laws when using cloud services.

If I were working for the company in the case study, I would carefully consider the pros and cons of the IBM Cloud solution. While IBM Cloud offers many benefits such as scalability, cost-efficiency, and security features, it’s essential to assess whether it meets the company’s specific needs and requirements.

Factors such as data sensitivity, regulatory compliance, and budget constraints would influence my decision. If the IBM Cloud solution aligns with the company’s goals and addresses its challenges effectively, I would agree with its implementation. However, if there are significant concerns regarding security, downtime risks, or compliance issues, I might seek alternative solutions or additional safeguards.

In conclusion, while IBM Cloud, and cloud computing in general, offer numerous advantages, it's crucial to weigh the benefits against the potential drawbacks and make informed decisions based on each organization's specific context and requirements.



Use of Abortion Pills Has Risen Significantly Post Roe, Research Shows


By Pam Belluck

Pam Belluck has been reporting about reproductive health for over a decade.


On the eve of oral arguments in a Supreme Court case that could affect future access to abortion pills, new research shows the fast-growing use of medication abortion nationally and the many ways women have obtained access to the method since Roe v. Wade was overturned in June 2022.

The Details

[Image: A person pours pills out of a bottle into a gloved hand.]

A study, published on Monday in the medical journal JAMA, found that the number of abortions using pills obtained outside the formal health system soared in the six months after the national right to abortion was overturned. Another report, published last week by the Guttmacher Institute, a research organization that supports abortion rights, found that medication abortions now account for nearly two-thirds of all abortions provided by the country's formal health system, which includes clinics and telemedicine abortion services.

The JAMA study evaluated data from overseas telemedicine organizations, online vendors and networks of community volunteers that generally obtain pills from outside the United States. Before Roe was overturned, these avenues provided abortion pills to about 1,400 women per month, but in the six months afterward, the average jumped to 5,900 per month, the study reported.

Overall, the study found that while abortions in the formal health care system declined by about 32,000 from July through December 2022, much of that decline was offset by about 26,000 medication abortions from pills provided by sources outside the formal health system.

“We see what we see elsewhere in the world in the U.S. — that when anti-abortion laws go into effect, oftentimes outside of the formal health care setting is where people look, and the locus of care gets shifted,” said Dr. Abigail Aiken, who is an associate professor at the University of Texas at Austin and the lead author of the JAMA study.

The co-authors were a statistics professor at the university; the founder of Aid Access, a Europe-based organization that helped pioneer telemedicine abortion in the United States; and a leader of Plan C, an organization that provides consumers with information about medication abortion. Before publication, the study went through the rigorous peer review process required by a major medical journal.

The telemedicine organizations in the study evaluated prospective patients using written medical questionnaires, issued prescriptions from doctors who were typically in Europe and had pills shipped from pharmacies in India, generally charging about $100. Community networks typically asked for some information about the pregnancy and either delivered or mailed pills with detailed instructions, often for free.

Online vendors, which supplied a small percentage of the pills in the study and charged between $39 and $470, generally did not ask for women’s medical history and shipped the pills with the least detailed instructions. Vendors in the study were vetted by Plan C and found to be providing genuine abortion pills, Dr. Aiken said.

The Guttmacher report, focusing on the formal health care system, included data from clinics and telemedicine abortion services within the United States that provided abortion to patients who lived in or traveled to states with legal abortion between January and December 2023.

It found that pills accounted for 63 percent of those abortions, up from 53 percent in 2020. The total number of abortions in the report was over a million for the first time in more than a decade.

Why This Matters

Overall, the new reports suggest how rapidly the provision of abortion has adjusted amid post-Roe abortion bans in 14 states and tight restrictions in others.

The numbers may be an undercount and do not reflect the most recent shift: shield laws in six states allowing abortion providers to prescribe and mail pills to tens of thousands of women in states with bans without requiring them to travel. Since last summer, for example, Aid Access has stopped shipping medication from overseas and operating outside the formal health system; it is instead mailing pills to states with bans from within the United States with the protection of shield laws.

What’s Next

In the case that will be argued before the Supreme Court on Tuesday, the plaintiffs, who oppose abortion, are suing the Food and Drug Administration, seeking to block or drastically limit the availability of mifepristone, the first pill in the two-drug medication abortion regimen.

The JAMA study suggests that such a ruling could prompt more women to use avenues outside the formal American health care system, such as pills from other countries.

“There’s so many unknowns about what will happen with the decision,” Dr. Aiken said.

She added: “It’s possible that a decision by the Supreme Court in favor of the plaintiffs could have a knock-on effect where more people are looking to access outside the formal health care setting, either because they’re worried that access is going away or they’re having more trouble accessing the medications.”

Pam Belluck is a health and science reporter, covering a range of subjects, including reproductive health, long Covid, brain science, neurological disorders, mental health and genetics. More about Pam Belluck


COMMENTS

  1. Scientific Computing with Case Studies

    Scientific Computing with Case Studies. Learning through doing is the foundation of this book, which allows readers to explore case studies as well as expository material. The book provides a practical guide to the numerical solution of linear and nonlinear equations, differential equations, optimization problems, and eigenvalue problems.

  2. Scientific Computing with Case Studies, by Dianne P. O'Leary

Scientific Computing with Case Studies by Dianne P. O'Leary. SIAM Press, 2009. Learning through doing is the foundation of this book, which allows readers to explore case studies as well as expository material. The book provides a practical guide to the numerical solution of linear and nonlinear equations, differential equations, optimization ...

  3. Scientific Computing with Case Studies

    Tracking objects, controlling the navigation of a spacecraft, assessing the quality of machined parts, and identifying proteins seem to have little in common, but all of these problems (and many more problems in computer vision and computational geometry) share a core computational task: rotating and translating two objects so that they have a common coordinate system. In this case study, we ...

  4. Scientific Computing with Case Studies

Chemistry, Computer Science, 2010. This dissertation introduces a new, computationally efficient method called PATI for predicting the molecular alignment tensor based on the three-dimensional structure of the molecule and presents two computational methods for rigid docking based on long range NMR constraints.

  5. Scientific Computing with Case Studies

    Scientific Computing with Case Studies. Dianne P. O'Leary. SIAM, Mar 19, 2009 - Mathematics - 399 pages. This book is a practical guide to the numerical solution of linear and nonlinear equations, differential equations, optimization problems, and eigenvalue problems. It treats standard problems and introduces important variants such as sparse ...

  6. Scientific Computing with Case Studies

    In this chapter we study some algorithms for solving various types of ordinary differential equation (ODE) problems. We'll consider problems in which data values are known at a single initial time (Section 20.1) as well as those for which values are known at two different points in time or space (Section 20.5). In between, we'll discuss problems that have algebraic constraints (Section 20.4 ...

  7. Scientific Computing with Case Studies, by Dianne P. O'Leary

    Scientific Computing with Case Studies. by Dianne P. O'Leary. SIAM Press , 2009. This website contains additional notes, exercises, case studies, etc. No solutions to exercises or challenges are provided. Unit I. Preliminaries: Mathematical Modeling, Errors, Hardware, and Software. Supplementary exercises. Sample lecture notes: Slides and Handout.

  8. Scientific Computing with Case Studies

    Scientific Computing with Case Studies. Dianne P. O'Leary, University of Maryland. SIAM , 2009. ISBN: 978--898716-66-5; Language: English. Written for advanced undergraduate and early graduate courses in numerical analysis and scientific computing, this book provides a practical guide to the numerical solution of linear and nonlinear equations ...

  9. Biomedical Visual Computing: Case Studies and Challenges

    This case study's visualization challenges involved creating interactive, visual design, and analysis tools for large-scale complex geometries for use by our clinical collaborators. To develop effective visualization tools, we engaged in considerable discussion, interaction, and iteration with our collaborators.

  10. PDF Computing Applications with TPUs Case Studies on Accelerating Scientific

    Through the few case studies, we explore using TPU for scientific computing. The case studies include Fourier transform (DFT, FFT, NUFFT), linear system solver (CG), numerical optimization (ADMM), and their applications in medical imaging. We formulate the problem and design the algorithms in accordance with TPU's

  11. Scientific computing with case studies [electronic resource]

    Scientific computing with case studies [electronic resource] Responsibility Dianne P. O'Leary Imprint Philadelphia, Pa. : Society for Industrial and Applied Mathematics (SIAM, 3600 Market Street, Floor 6, Philadelphia, PA 19104), 2009 Physical description 1 electronic text (xvi, 383 p.) : ill. (chiefly col.), digital file

  12. Scientific computing case studies

    Products and services. Our innovative products and services for learners, authors and customers are based on world-class research and are relevant, exciting and inspiring.

  13. Scientific Computing with Case Studies

    The case studies illustrate mathematical modeling and algorithm design, for problems in physics, engineering, epidemiology, chemistry, and biology. For advanced undergraduates and beginning graduate students, and scientists whose work involves numerical computing. Read more. Product details ...

  14. Scientific Computing with Case Studies, by Dianne P. O'Leary

Scientific Computing with Case Studies, by Dianne P. O'Leary. Book review, Contemporary Physics, Volume 52, 2011, Issue 1.

  15. Best Practices for Scientific Computing

    We have outlined a series of recommended best practices for scientific computing based on extensive research, as well as our collective experience. ... (2007) Software development environments for scientific and engineering software: a series of case studies. In: Proceedings 29th International Conference on Software Engineering. pp. 550-559 ...

  16. Case study: Navigating the AI surge

    From generative AI, LLM training and inference, data science, 3D design and collaboration, simulation, industrial digitalisation, to rendering and 3D graphics, and video processing - a wide range of workloads is supported. Selecting computing systems that anticipate future growth helps to avoid premature upgrades.

  17. 1: Introduction to Scientific Computing

In short, scientific computing is a field of study that solves problems from the sciences and mathematics that are generally too difficult to solve using standard techniques, so one must resort to writing computer code to get an answer. 1.1: Examples of Scientific Computing

  18. Case study

    Case study: Navigating the AI Surge - Advancements, Challenges, and the Sustainable Paths Forward

  19. Case Studies

    Case Studies: Scientific Computing. The module "Case Studies: Scientific Computing" prepares students for future work in an interdisciplinary environment. For this purpose, they work on a project in the form of a practical task from the natural sciences and technology. The case studies normally take place once a year.

  20. The role of computational science in digital twins

    Digital twins hold immense promise in accelerating scientific discovery, but the publicity currently outweighs the evidence base of success. We summarize key research opportunities in the ...

  21. Scientific Computing with Case Studies

    To begin, recall that an eigenvector of a matrix is a vector w with the property that multiplication of the vector by the matrix simply scales the vector. The scale factor λ is called an eigenvalue, or principal value of the matrix. The eigensystem (eigenvalues and eigenvectors) of A has several nice properties, summarized in Pointers 5.1 and 5.6.

  22. Universities Have a Computer-Science Problem

    The case for teaching coders to speak French. Updated at 5:37 p.m. ET on March 22, 2024. Last year, 18 percent of Stanford University seniors graduated with a degree in computer science, more than ...

  23. College of Engineering and Computing

    To help solve these pressing issues, a team led by the University of South Carolina's College of Engineering and Computing (CEC) faculty was one of 15 multidisciplinary teams selected for phase one of the National Science Foundation Convergence Accelerator program's Track K: Equitable Water Solutions. The NSF is investing $9.8 million ...

  24. Scientific Consensus

It's important to remember that scientists always focus on the evidence, not on opinions. Scientific evidence continues to show that human activities (primarily the burning of fossil fuels) have warmed Earth's surface and its ocean basins, which in turn have continued to impact Earth's climate. This is based on over a century of scientific evidence forming the structural backbone of ...

  25. Land

    Territorial spatial planning requires thoughtful consideration of the scientific layout and synergistic control of production, living, and ecological spaces (PLESs). However, research in this field often neglects the human perspective and fails to account for people's demands and behavioral characteristics. This study evaluates the level and spatial characteristics of residents' production ...

  26. IBM Cloud Case Study

    In today's digital world, many companies are turning to cloud computing to improve their operations. Cloud computing offers various benefits, but it also comes with its challenges. ... In this blog post, we'll examine a case study from the IBM Cloud Case Studies site to understand how one company benefited from using IBM Cloud, along with ...

  27. Scientific Computing with Case Studies

    We seldom get it right the first time. Whether we are composing an important e-mail, seasoning a stew, painting a picture, or planning an experiment, we almost always make improvements on our original thought. The same is true of engineering design; we draft a plan, but changes are almost always made. Perhaps the customer changes the performance specifications, or perhaps a substitution of ...

  28. Scientific Computing with Case Studies

Abstract. 24.1 The Problem. In this chapter we focus on solving nonlinear systems of equations. Nonlinear system of equations: Given a function F : ℝⁿ → ℝⁿ, find a point x ∈ ℝⁿ such that F(x) = 0. We call such a point a solution of the equation or a zero of F. We'll assume that n > 1; see Pointer 24.1 for methods for solving nonlinear ...

  29. Scientific Computing with Case Studies

    computational science textbook; scientific computing textbook; case studies in scientific computing; numerical computing; computational linear algebra; optimization; Monte Carlo; differential equations; solution of nonlinear equations; Authors Affiliations

  30. Use of Abortion Pills Has Risen Significantly Post Roe, Research Shows

    The News. On the eve of oral arguments in a Supreme Court case that could affect future access to abortion pills, new research shows the fast-growing use of medication abortion nationally and the ...
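The ODE excerpt above (item 6) surveys solvers for initial-value problems, where data are known at a single initial time. The simplest such method, forward Euler, can be sketched in a few lines; this is a Python illustration in the spirit of the chapter, not the book's MATLAB code:

```python
def euler(f, t0, y0, t_end, n):
    """Forward Euler for y' = f(t, y), y(t0) = y0, taking n equal steps."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)   # step along the tangent line at (t, y)
        t = t + h
    return y

# Model problem y' = -y, y(0) = 1; the exact answer at t = 1 is e^-1 ≈ 0.3679.
approx = euler(lambda t, y: -y, 0.0, 1.0, 1.0, 1000)
print(approx)  # ≈ 0.3677
```

Halving the step size roughly halves the error, the signature of a first-order method; production codes use higher-order and adaptive schemes instead.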
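The eigenvalue excerpt above (item 21) defines an eigenvector as a vector that matrix multiplication merely scales. That definition suggests the classic power iteration: multiply repeatedly by A and normalize, and the iterate lines up with the dominant eigenvector. A small self-contained Python sketch for a 2×2 matrix (illustrative only, not taken from the book):

```python
def power_iteration(A, x, iters=100):
    """Estimate the dominant eigenpair of a 2x2 matrix A by repeated
    multiplication and normalization (power iteration)."""
    for _ in range(iters):
        y = (A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1])
        norm = max(abs(y[0]), abs(y[1]))
        x = (y[0] / norm, y[1] / norm)
    # With x nearly an eigenvector, A x ≈ lambda x; read off the ratio.
    ax = (A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1])
    lam = ax[0] / x[0] if abs(x[0]) >= abs(x[1]) else ax[1] / x[1]
    return lam, x

A = ((2.0, 1.0), (1.0, 2.0))            # eigenvalues 3 and 1
lam, v = power_iteration(A, (1.0, 0.0))
print(round(lam, 6))  # 3.0
```

Convergence is geometric with ratio |λ₂/λ₁| (here 1/3 per step), which is why 100 iterations are far more than enough for this matrix.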
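The nonlinear-systems excerpt above (item 28) poses the problem of finding a zero of F : ℝⁿ → ℝⁿ. The workhorse for such problems is Newton's method, which linearizes F at the current iterate and solves J(x) Δx = −F(x) for the step. A minimal Python sketch for a 2×2 system (a hypothetical helper, not the book's code):

```python
def newton_2x2(F, J, x, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0 with F: R^2 -> R^2.
    Each step solves J(x) dx = -F(x) via the explicit 2x2 inverse."""
    for _ in range(max_iter):
        f1, f2 = F(x)
        if abs(f1) + abs(f2) < tol:
            break
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        dx1 = (-d * f1 + b * f2) / det   # first component of -J^{-1} F
        dx2 = (c * f1 - a * f2) / det    # second component of -J^{-1} F
        x = (x[0] + dx1, x[1] + dx2)
    return x

# Intersect the unit circle with the line x = y: solution (1/sqrt(2), 1/sqrt(2)).
F = lambda p: (p[0]**2 + p[1]**2 - 1.0, p[0] - p[1])
J = lambda p: ((2.0 * p[0], 2.0 * p[1]), (1.0, -1.0))
root = newton_2x2(F, J, (1.0, 0.5))
```

For n larger than 2 the explicit inverse gives way to a linear solve (LU factorization) at each step, but the structure of the iteration is the same.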