A Glossary of Special Terms
Term | Meaning |
---|---|
EPUB | A standardized format for digital books. |
FTP | FTP stands for File Transfer Protocol. It is a protocol used to transfer files from one computer to another via a wired or wireless network. |
Gantt chart | A type of bar chart used for project schedules, in which the tasks to be completed are shown as bars on the vertical axis, and time is shown on the horizontal axis, with the width of a given bar indicating the length of a given task. This facilitates planning by automating the tracking of milestone schedules and dependencies. |
GTD | GTD stands for Getting Things Done. It is a productivity method created by productivity consultant David Allen that allows users to focus on those tasks that should be addressed in a given context and at the right timescale of planning, from current activities to life-long goals. |
IP | IP stands for Intellectual Property, such as inventions and work products that are often patented or copyrighted. |
Linux | Linux is a family of open-source operating systems, built around the kernel created by Linus Torvalds in 1991, that serves as an alternative to commercial operating systems. |
MTA | MTA stands for Material Transfer Agreement—a contract that governs the transfer of research materials (e.g., DNA plasmids, cell lines) between institutions. |
MySQL | MySQL is an open-source database management system, consisting of a server back end that houses the data and a front end that allows users to query the database in very flexible ways. |
OCR | OCR stands for Optical Character Recognition—a process by which text is automatically recognized in an image, for example, converting a FAX or photo of a document into an editable text file. |
PDF | PDF stands for Portable Document Format, which serves as a standard format allowing many different types of devices and operating systems to display (and sometimes edit) documents. |
PMID | PMID stands for PubMed ID—the unique identifier used in the PubMed database to refer to published papers. |
SFTP | SFTP stands for SSH File Transfer Protocol but is often also referred to as Secure File Transfer Protocol. Its purpose is to transfer data over a network, similarly to FTP, but with added security (encryption). |
SSH | SSH stands for Secure Shell, a protocol that allows a user to connect remotely to a computer's operating system via a terminal-like interface. |
SSD | SSD stands for Solid State Drive. An SSD is a type of storage device for a computer that uses flash memory instead of a spinning disk, as in a typical hard drive. Compared with spinning hard drives, these are smaller, require less power, generate less heat, are less likely to break during routine use, and, crucially, enable vastly faster read and write speeds. |
TB | TB stands for Terabyte—a unit of file and storage size on a computer. One terabyte is equivalent to one thousand gigabytes, one million megabytes, or one trillion bytes. |
VNC | VNC stands for Virtual Network Computing—a desktop sharing system that transmits video signal and commands from one computer to another, allowing a user to interact with a remote computer the same way as if it were the computer they were currently using. |
VPN | VPN stands for Virtual Private Network. A virtual private network allows connections to internet-based resources with high security (encryption of data). |
WYSIWYG | WYSIWYG stands for What You See Is What You Get. This refers to applications in which text or other data being edited appears the same on-screen as it will in the finished product, such as a sheet of paper with formatted text (Microsoft Word and Scrivener are WYSIWYG editors, whereas LaTeX is not). |
Windows | Windows refers to the operating system Microsoft Windows. It is one of the most common operating systems in use today and is compatible with the vast majority of applications and hardware. |
XML | XML stands for Extensible Markup Language. Extensible Markup Language is a markup language used to encode documents such that they are readable by both humans and a variety of software. |
Although there is a huge variety of different types of scientific enterprises, most of them contain one or more activities that can be roughly subsumed by the conceptual progression shown in Figure 1. This life cycle progresses from brainstorming and ideation through planning, execution of research, and then creation of work products. Each stage requires unique activities and tools, and it is crucial to establish a pipeline and best practices that enable the results of each phase to effectively facilitate the next phase. All of the recommendations given below are designed to support the following basic principles:
The Life Cycle of Research Activity
Various projects occupy different places along a typical timeline. The life cycle extends from creative ideation to gathering information, formulating a plan, executing the plan, and then producing a work product such as a grant or paper based on the results. Many of these phases necessitate feedback to a prior phase, shown as thinner arrows (for example, information discovered during a literature search, or attempts to formalize the work plan, may require fresh brainstorming). This diagram shows the product (end result) of each phase and typical tools used to accomplish them.
These basic principles can be used as the skeleton around which specific strategies and new software products can be deployed. Whenever possible, they can be implemented via external administration (i.e., by a dedicated project manager or administrator inside the group), but this is not always compatible with budgetary constraints, in which case they can readily be deployed by each principal investigator. PIs also have to decide whether to suggest (or insist) that other people in the group use these strategies, and perhaps monitor their execution. In our experience, it is most essential for anyone leading one or more complex projects (typically, a faculty member or senior staff scientist) to adopt these methods, whereas people tightly focused on a single project with few concurrent tasks involving others (e.g., Ph.D. students) need not adopt the entire system (although, for example, backup systems should absolutely be implemented by all knowledge workers in the group). The following are some of the methods that have proven most effective in our own experience.
Several key elements should be pillars of your Information Technology (IT) infrastructure (Figure 2). You should be familiar enough with computer technology to implement these yourself, as it is rare for an institutional IT department to be able to offer this level of assistance. Your primary disk should be a large (currently, ∼2 TB) SSD or, better, an NVMe PCIe SSD card, for fast access and minimal waiting time. Your computer should be so fast that you spend no time waiting for anything (except during calculations or data processing)—your typing and mouse movement should be the rate-limiting step. If you find yourself waiting for windows or files to open, obtain a better machine.
Schematic of Data Flow and Storage
Three types of information interact with each other to define a region of work space for a given research project: data (facts and datasets), action plans (schedules and to-do lists), and work product (documents). All of this should be hosted on a single PC (personal computer). It is protected by a set of regular backups of several types and accessed by the user, who can interact with raw files through the file system or with organized data through a variety of client applications that organize information, schedules, and email. See Table 2 for definitions of special terms.
One key element is backups: redundant copies of your data. Disks fail; it is not a question of whether your laptop or hard drive will die, but when. Storage space is inexpensive and researchers' time is precious: team members should not tolerate time lost to computer snafus. The backup and accessibility system should be such that data are immediately recoverable following any sort of disaster; it only has to be set up once, and it takes only one disaster to appreciate the value of paranoia about data. This extends to laboratory inventory systems: it is useful to keep (and back up) lists of significant equipment and reagents in the laboratory, should they be needed for insurance claims after loss or damage.
The main drive should be big enough to keep all key information (not primary laboratory data, such as images or video) in one volume; this facilitates cloning. You should have an extra internal drive (which can be a regular disk) of the same size or bigger. Use something like Carbon Copy Cloner or SuperDuper to set up a nightly clone operation. When the main disk fails (e.g., the night before a big grant is due), boot from the clone and your exact, functioning system is ready to go. For Macs, another internal drive set up as a Time Machine volume enables keeping versions of files as they change. You should also have an external drive, likewise a Time Machine volume or a clone: you can quickly unplug it and take it with you if the laboratory has to be evacuated (fire alarm or chemical emergency), or if something happens to your computer and you need to use one elsewhere. Set a calendar reminder once a month to check that the Time Machine is accessible and searchable and that your clone is actually updated and bootable. A Passport-type portable drive is ideal when traveling to conferences: if something happens to the laptop, you can boot a fresh (or borrowed) machine from the portable drive and continue working. For people who routinely install software or operating system updates, I also recommend keeping one disk that is a clone of the entire system and applications, set to clone nightly the data only, leaving the operating system files unchanged. This guarantees a usable system with the latest data files, useful in case an update or a new piece of software renders the system unstable or unbootable and overwrites the regular clone before you notice the problem. Consider off-site storage. CrashPlan Pro is a reasonable choice for backing up laboratory data to the cloud. One solution for a single person's digital content is to have two extra external hard drives.
One gets a clone of your office computer, and one is a clone of your home computer, and then you swap—bring the office one home and the home one to your office. Update them regularly, and keep them swapped, so that should a disaster strike one location, all of the data are available. Finally, pay careful attention (via timed reminders) to how your laboratory machines and your people's machines are being backed up; a lot of young researchers, especially those who have not been through a disaster yet, do not make backups. One solution is to have a system like CrashPlan Pro installed on everyone's machines to do automatic backup.
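At its core, the nightly clone described above is a one-way mirror: copy anything that is missing or has changed since the last pass. The Python sketch below illustrates only that incremental logic (the `mirror` function and its mtime-based comparison are assumptions for illustration; real tools like Carbon Copy Cloner also handle deletions, permissions, and bootable volumes):

```python
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> list:
    """Copy each file from src to dst if it is missing there or newer.

    Illustrative sketch of one incremental clone pass; not a substitute
    for a full cloning tool.
    """
    copied = []
    for f in sorted(src.rglob("*")):
        if f.is_dir():
            continue
        rel = f.relative_to(src)
        target = dst / rel
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves modification times
            copied.append(str(rel))
    return copied
```

Because `copy2` preserves modification times, an unchanged file is skipped on the next pass; running such a pass on a schedule (cron or launchd) approximates the nightly clone.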
Another key element is accessibility of information. Everyone should be working on files (e.g., Microsoft Word documents) that are inside a Dropbox or Box folder: whatever you are working on this month, the files should be inside a folder synchronized by one of these services. That way, if anything happens to your machine, you can access your files from anywhere in the world. It is critical that whatever service is chosen, it is one that synchronizes a local copy of the data that lives on your local machine (not one that simply keeps files in the cloud); that way, you have what you need even if the internet is down or connectivity is poor. Tools that help connect to your resources while on the road include a VPN (especially useful for secure connections while traveling), SFTP (to transfer files; turn on the SFTP, not FTP, service on your office machine), and Remote Desktop (or VNC). All of these exist for cell phones and tablets as well as laptops, enabling access to anything from anywhere. All files (including scans of paper documents) should be processed by OCR (optical character recognition) software to render their contents searchable. This can be done in batch (on a schedule) by Adobe Acrobat's OCR function, which can be pointed to an entire folder of PDFs, for example, and left to run overnight. The result, especially with Apple's Spotlight feature, is that one can easily retrieve information that might be written inside a scanned document.
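A scheduled OCR pass needs some way to know which PDFs still await processing. One simple approach, sketched below under the assumption that each processed PDF gets a `.txt` sidecar of extracted text (a convention invented here for illustration, not Acrobat's actual behavior), is to queue only the files without a sidecar:

```python
from pathlib import Path

def pdfs_needing_ocr(folder: Path) -> list:
    """List PDFs that have no .txt sidecar and so still need OCR.

    The sidecar convention is an assumption for this sketch; an
    overnight job would OCR each returned file and write its sidecar.
    """
    return sorted(p for p in folder.rglob("*.pdf")
                  if not p.with_suffix(".txt").exists())
```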
Here, we focus on work product and the thought process, not management of the raw data as it emerges from equipment and experimental apparatus. However, mention should be made of electronic laboratory notebooks (ELNs), which are becoming an important aspect of research. ELNs are a rapidly developing field, because they face a number of challenges. A laboratory that abandons paper notebooks entirely has to provide computer interfaces anywhere in the facility where data might be generated; having screens, keyboards, and mice at every microscope or other apparatus station, for example, can be expensive, and it is not trivial to find an ergonomically equivalent digital substitute for writing things down in a notebook as ideas or data appear. On the other hand, keeping both paper notebooks for immediate recording, and ELNs for organized official storage, raises problems of wasted effort during the (perhaps incomplete) transfer of information from paper to the digital version. ELNs are also an essential tool to prevent loss of institutional knowledge as team members move up to independent positions. ELN usage will evolve over time as input devices improve and best practices are developed to minimize the overhead of entering meta-data. However, regardless of how primary data are acquired, the researcher will need specific strategies for transitioning experimental findings into research product in the context of a complex set of personal, institutional, and scientific goals and constraints.
The pipeline begins with ideas, which must be cultivated and then harnessed for subsequent implementation (Altshuller, 1984). This step consists of two components: identifying salient new information and arranging it in a way that facilitates novel ideas, associations, hypotheses, and strategic plans for making impact.
For the first step, we suggest an automated weekly PubCrawler search, which allows Boolean searches of the literature. Good searches to save include ones focusing on specific keywords of interest, as well as names of specific people whose work one wants to follow. The resulting weekly email of new papers matching specific criteria complements manual searches done via ISI's Web of Science, Google Scholar, and PubMed. The papers of interest should be immediately imported into a reference manager, such as Endnote, along with useful Keywords and text in the Notes field of each one that will facilitate locating them later. Additional tools include DevonAgent and DevonSphere, which enable smart searches of web and local resources, respectively.
Brainstorming can take place on paper or digitally (see later discussion). We have noticed that the rate of influx of new ideas increases when one habituates to never losing a new idea. This can be accomplished by establishing a voicemail contact in your cell phone leading to your own office voicemail (which allows voice recordings of idea fragments while driving or on the road, hands-free) and/or setting up Evernote or a similar server-synchronized application to record (and ideally transcribe) notes. It has been our experience that the more one records ideas arising in a non-work setting, the more often they pop up automatically. For notes or schematics written on paper during dedicated brainstorming, one tool that ensures nothing is lost is an electronic pen. For example, the Livescribe products are well integrated with Evernote and ensure that no matter where you are, anything you write down is captured in a form accessible from anywhere and is safe no matter what happens to the original notebook in which it was written.
Enhancing scientific thought, creative brainstorming, and strategic planning is facilitated by the creation of mind maps: visual representations of the spatial structure of links between concepts, or of the mapping of planned activity onto goals of different timescales. There are many mind map software packages available, including MindNode; their goal is to let one quickly set down relationships between concepts with a minimum of time spent on formatting. Examples are shown in Figures 3A and 3B. The process of creating these mind maps (which can then be put on one's website or discussed with laboratory members) helps refine fuzzy thinking and clarifies the relationships between concepts or activities. Mind mappers are an excellent tool because their light, freeform nature allows unimpeded brainstorming and fluid changes of idea structure, but at the same time forces one to explicitly test out specific arrangements of plans or ideas.
Mind Mapping
(A and B) The task of schematizing concepts and ideas spatially, based on their hierarchical relationships with each other, is a powerful technique for organizing the creative thought process. Examples include (A), which shows how the different projects in our laboratory relate to each other. Importantly, it can also reveal imbalances or gaps in coverage of specific topics, as well as help identify novel relationships between sub-projects by placing them on axes (B), or even identify novel hypotheses suggested by symmetry.
(C) Relationships between the central nervous system (CNS) and regeneration, cancer, and embryogenesis. The connecting lines in black show typical projects (relationships) already being pursued by our laboratory, and the lack of a project in the space between CNS and embryogenesis suggests a straightforward hypothesis and project to examine the role of the brain in embryonic patterning.
It is important to note that mind maps can serve a function beyond explicit organization. In a well-mapped structure, one can look for symmetries (revealing relationships that are otherwise not obvious) between the concepts involved. An obvious geometric pattern with a missing link or node can help one think about what could possibly go there, often identifying new relationships or items that had not been considered (Figure 3C), in much the same way that gaps in the periodic table of the elements helped identify novel elements.
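This kind of gap-hunting can even be made mechanical: treat concepts as nodes and existing projects as edges, and enumerate the pairs with no connecting project. A small Python sketch (the function name and data layout are invented for illustration):

```python
from itertools import combinations

def missing_links(concepts, projects):
    """Return concept pairs not yet connected by any project.

    concepts: iterable of node names; projects: iterable of 2-tuples
    naming connected concepts. Each returned pair is a candidate for
    a new hypothesis or project.
    """
    have = {frozenset(p) for p in projects}
    return [pair for pair in combinations(sorted(concepts), 2)
            if frozenset(pair) not in have]
```

For a map like the one in Figure 3C, the only unconnected pair is CNS and embryogenesis, exactly the gap that suggests a new project.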
The input and output of the feedback process between brainstorming and literature mining is information. Static information not only consists of the facts, images, documents, and other material needed to support a train of thought but also includes anything needed to support the various projects and activities. It should be accessible in three ways, as it will be active during all phases of the work cycle. Files should be arranged on your disk in a logical hierarchical structure appropriate to the work. Everything should also be searchable and indexed by Spotlight. Finally, some information should be stored as entries in a data management system, like Evernote or DevonThink, which have convenient client applications that make the data accessible from any device.
Notes in these systems should include useful lists and how-to's, including, for example:
Each note can have attachments, which include manuals, materials safety sheets, etc. DevonThink needs a little more setup but is more robust and also allows keeping the server on one's own machine (nothing gets uploaded to company servers, unlike with Evernote, which might be a factor for sensitive data). Scientific papers should be kept in a reference manager, whereas books (such as epub files and PDFs of books and manuscripts) can be stored in a Calibre library.
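The "searchable and indexed" requirement ultimately reduces to a word-to-documents map, which is what Spotlight, Evernote, and DevonThink maintain internally. A toy version over a folder of plain-text notes (the file layout and function name are assumptions for illustration):

```python
import re
from collections import defaultdict
from pathlib import Path

def build_index(notes_dir: Path) -> dict:
    """Map each lowercased word to the set of note filenames containing it.

    A miniature stand-in for the full-text indexes that Spotlight or
    DevonThink keep; real indexes also rank results and handle stemming.
    """
    index = defaultdict(set)
    for note in notes_dir.glob("*.txt"):
        for word in set(re.findall(r"[a-z0-9]+", note.read_text().lower())):
            index[word].add(note.name)
    return index
```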
A special case of static information is email, especially informative and/or actionable emails from team members, external collaborators, reviewers, and funders. Because the influx of email is ever-increasing, it is important to (1) establish a good infrastructure for its management and (2) establish policies for responding to emails and using them to facilitate research. The first step is to ensure that one only sees useful emails, by training a good Bayesian spam filter such as SpamSieve. We suggest a triage system in which, at specific times of day (so that it does not interfere with other work), the Inbox is checked and each email is (1) forwarded to someone better suited to handling it, (2) responded to quickly for urgent matters that need a simple answer, or (3) started as a Draft for those that require a thoughtful reply. Once a day, or a couple of times per week when circumstances permit focused thought, the Draft folder should be revisited and those emails answered. We suggest a "zero Inbox" policy whereby, at the end of the day, the Inbox is essentially empty, with everything either delegated, answered, or set to answer later.
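The three-way triage can be pictured as a small decision function. The rules below (delegating by sender domain, answering short direct questions immediately) are invented thresholds purely to make the flow concrete, not a recommended policy:

```python
def triage(sender: str, subject: str, delegate_domains: set) -> str:
    """Classify an email as 'forward', 'reply-now', or 'draft'.

    Toy rules for illustration only: mail from domains someone else
    handles is forwarded; short direct questions get a quick reply;
    everything else becomes a draft to answer during focused time.
    """
    if sender.split("@")[-1] in delegate_domains:
        return "forward"
    if subject.rstrip().endswith("?") and len(subject) < 60:
        return "reply-now"
    return "draft"
```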
We also suggest creating subfolders in the main account (keeping them on the mail server, not local to a computer, so that they can be searched and accessed from anywhere) as follows:
Incoming emails belonging to those categories (for example, an email from an NIH program officer acknowledging a grant submission, a collaborator who emailed a plan of what they will do next, or someone who promised to answer a specific question) should be sorted from the Inbox to the relevant folder. Every couple of weeks (according to a calendar reminder), those folders should be checked, and those items that have since been dealt with can be saved to a Saved Messages folder archive, whereas those that remain can be Replied to as a reminder to prod the relevant person.
In addition, as most researchers now exchange a lot of information via email, the email trail preserves a record of relationships among colleagues and collaborators. It can be extremely useful, even years later, to be able to go back and see who said what to whom, what was the last conversation in a collaboration that stalled, who sent that special protocol or reagent and needs to be acknowledged, etc. It is imperative that you know where your email is being stored, by whom, and their policy on retention, storage space limits, search, backup, etc. Most university IT departments keep a mail server with limited storage space and will delete your old emails (even more so if you move institutions). One way to keep a permanent record with complete control is with an application called MailSteward Pro. This is a front-end client for a freely available MySQL server, which can run on any machine in your laboratory. It will import your mail and store unlimited quantities indefinitely. Unlike a mail server, this is a real database system and is not as susceptible to data corruption or loss as many other methods.
A suggested strategy is as follows. Keep every single email, sent and received. Every month (set a timed reminder), have MailSteward Pro import them into the MySQL database. Once a year, prune them from the mail server (or let IT do it on their own schedule). This allows rapid search (and reply) from inside a mail client for anything less than one year old (most searches), whereas anything older can be found with MailSteward Pro's very versatile Boolean search function. Over time, in addition to finding specific emails, this allows some informative data mining. Results of searches in MailSteward Pro can be exported to Excel to, for example, identify the people with whom you most frequently communicate or make histograms of the frequency of specific keywords as a function of time throughout your career.
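Because the archive is a real database, this kind of data mining is a one-line SQL aggregate. The sketch below uses Python's built-in sqlite3 in place of MySQL; the schema and function name are assumptions for illustration (MailSteward Pro's actual schema differs):

```python
import sqlite3

def messages_per_sender_year(rows):
    """Count archived emails per (sender, year) with a GROUP BY query.

    rows: iterable of (sender, year) pairs standing in for an email
    archive table; the same aggregate works on a MySQL mail database.
    """
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE mail (sender TEXT, year INTEGER)")
    con.executemany("INSERT INTO mail VALUES (?, ?)", rows)
    counts = {(s, y): n for s, y, n in con.execute(
        "SELECT sender, year, COUNT(*) FROM mail GROUP BY sender, year")}
    con.close()
    return counts
```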
With ideas, mind maps, and the necessary information in hand, one can consider what aspects of the current operations plan can be changed to incorporate plans for new, impactful activity.
A very useful strategy involves breaking down everything according to the timescales of decision-making, as in the Getting Things Done (GTD) philosophy (Figure 4) (Allen, 2015). Activities range from immediate (daily) tasks through intermediate goals all the way to career-scale (or life-long) mission statements. As with mind maps, being explicit about these categories not only forces one to think hard about important aspects of the work but also facilitates the transmission of this information to others on the team. The different categories are to be revisited and revised at different rates, according to their position in the hierarchy. This enables you to make sure that effort and resources are being spent according to priorities.
Scales of Activity Planning
Activities should be assigned to a level of planning with a temporal scale, based on how often the goals of that level get re-evaluated. This ranges from core values, which can span an entire career or lifetime, all the way to tactics that guide day-to-day activities. Each level should be re-evaluated at a reasonable time frame to ensure that its goals are still consistent with the bigger picture of the level(s) above it and to help re-define the plans for the levels below it.
We also strongly recommend a yearly personal scientific retreat. This is not meant to be a vacation to “forget about work” but rather an opportunity for freedom from everyday minutiae to revisit, evaluate, and potentially revise future activity (priorities, action items) for the next few years. Every few years, take more time to re-map even higher levels on the pyramid hierarchy; consider what the group has been doing—do you like the intellectual space your group now occupies? Are your efforts having the kind of impact you realistically want to make? A formal diagram helps clarify the conceptual vision and identify gaps and opportunities. Once a correct level of activity has been identified, it is time to plan specific activities.
A very good tool for this purpose, which enables hierarchical storage of tasks and subtasks and their scheduling, is OmniFocus (Figure 5). OmniFocus also enables inclusion of files (or links to files or to Evernote notes) together with each Action. It additionally allows each action to be marked as "Done" once it is complete, providing not only a current action plan but a history of every past activity. Another useful feature is that individual actions can be linked with specific contexts: visualizing the database from the perspective of contexts enables efficient focus of attention on those tasks that are relevant in a specific scenario. OmniFocus allows setting reminders for specific actions and can be used to add a time component to the activity.
Project Planning
This figure shows a screenshot of the OmniFocus application, illustrating the nested hierarchy of projects and sub-projects, arranged into larger groups.
The best way to manage time relative to activity (and to manage the people responsible for each activity) is to construct Gantt charts (Figure 6), which can be used to plan out project timelines and help keep grant and contract deliverables on time. A critical feature of Gantt charts is that they make dependencies explicit, so that it is clear which items have to be solved or completed before something else can be accomplished. Gantt charts are essential for complex, multi-person, and/or multi-step projects with strict deadlines (such as grant deliverables and progress reports). Software such as OmniPlan can also be used to link resources (equipment, consumables, living material, etc.) with specific actions and timelines. Updating and evaluation of a Gantt chart for a specific project should take place on a time frame appropriate to the length of the next immediate phase; weekly or biweekly is typical.
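The automatic end-date propagation that makes Gantt software valuable is a simple recursion: a task's earliest start is the latest finish among its prerequisites. A minimal sketch, with task durations in days and an encoding of dependencies assumed for illustration:

```python
def schedule(tasks, deps):
    """Compute (start, finish) day offsets for each task.

    tasks: {name: duration_in_days}; deps: {name: [prerequisite names]}.
    Each task starts when its last prerequisite finishes, which is the
    dependency propagation a Gantt chart performs automatically.
    """
    finish = {}

    def resolve(name):
        if name not in finish:
            start = max((resolve(p) for p in deps.get(name, [])), default=0)
            finish[name] = start + tasks[name]
        return finish[name]

    return {n: (resolve(n) - tasks[n], resolve(n)) for n in tasks}
```

If a subtask's duration changes, re-running the computation shifts every downstream start and finish date accordingly.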
Timeline Planning
This figure shows a screenshot of a typical Gantt chart, in OmniPlan software, illustrating the timelines of different project steps, their dependencies, and specific milestones (such as a due date for a site visit or grant submission). Note that Gantt software automatically moves the end date for each item if its subtasks' timing changes, enabling one to see a dynamically correct up-to-date temporal map of the project that adjusts for the real-world contingencies of research.
In addition to the comprehensive work plan in OmniFocus or similar, it is helpful to use a calendar that synchronizes to a server, such as a Microsoft Office calendar with an Exchange server. For yourself, make a task every day called "Monday tasks," etc., containing all the individual things to be accomplished (which do not warrant their own calendar reminder). First thing in the morning, take a look at the day's tasks to see what needs to be done. Whatever does not get done that day is copied onto another day's tasks. For each of the people on your team, make a timed reminder (weekly, for example, for those with whom you meet once a week) containing their immediate next steps and the next thing they are supposed to produce for your meeting. Have it with you when you meet, and give them a copy, updating the next occurrence as needed based on what was decided at the meeting. This scheme makes it easy to remember precisely what needs to be covered in the discussion, serves as a record of the project and of what you talked about with whom on any given day (which can be consulted years later to reconstruct events if needed), and, if the team member gets a copy after the meeting, helps keep everyone on the same page.
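The copy-forward rule for day lists is simple to state precisely, which is part of its appeal: nothing silently disappears. A two-rule sketch, with tasks represented as a list of strings (an assumption for illustration):

```python
def carry_forward(todays_tasks, done):
    """Split a day's task list at day's end into (completed, carried).

    Completed items stay on today's record; anything unfinished is
    carried onto the next day's task list, so no item is ever lost.
    """
    completed = [t for t in todays_tasks if t in done]
    carried = [t for t in todays_tasks if t not in done]
    return completed, carried
```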
Writing, to disseminate results and analysis, is a central activity for scientists. One section of the OmniFocus library should contain lists of upcoming grants to write, primary papers being worked on, and reviews/hypothesis papers planned. Microsoft Word is the most popular tool for writing papers; its major advantage is compatibility with others for collaborative manuscripts (its Track Changes feature is also very well implemented, enabling collaboration as a master document is passed from one co-author to another). But Scrivener should be seriously considered: it is an excellent tool that facilitates complex projects and documents because it enables WYSIWYG text editing in the context of a hierarchical structure, allowing you to work on a detailed piece of text while simultaneously seeing the whole outline of the project (Figure 7).
Writing Complex Materials
This figure shows a screenshot from the Scrivener software. The panel on the left facilitates logical and hierarchical organization of a complex writing project (by showing where in the overall structure any given text would fit), while the editing pane on the right allows the user to focus on writing a specific subsection without having to scroll through (but still being able to see) the major categories within which it must fit.
It is critical to learn to use a reference manager; there are numerous ones, including, for example, Endnote, which will make it much easier to collaborate with others on papers with many citations. One specific tip to make collaboration easier is to ask all of the co-authors to set the reference manager to use the PMID accession number in the temporary citations in the text, instead of the arbitrary record number it uses by default. That way, a document can have its bibliography formatted by any of the co-authors, even if they have completely different libraries. Although some prefer collaborative editing of a Google Doc file, we have found a "master document" system useful, in which a file is passed around among collaborators by email but only one person can make (Tracked) edits at a time (i.e., one person has the master document, and everyone makes edits on top of that).
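To see why shared accession numbers help, consider what a temporary citation looks like in the text: Endnote's default delimiters are curly braces with an ID after a '#'. The sketch below pulls those IDs out so they could, for instance, be cross-checked against PubMed; the exact delimiter format is configurable, so treat the pattern as an assumption:

```python
import re

def temp_citation_ids(text: str) -> list:
    """Extract the ID after '#' in {Author, Year #ID} temporary citations.

    If every co-author's library uses the PMID as the accession number,
    these IDs match across libraries, so any co-author can format the
    final bibliography.
    """
    return re.findall(r"\{[^{}#]*#\s*(\d+)\}", text)
```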
One task most scientists regularly undertake is writing reviews of a specific subfield (or Whitepapers). It is often difficult, when one has an assignment to write, to remember all of the important papers that were seen in the last few years that bear on the topic. One method to remedy this is to keep standing document files, one for each topic that one might plausibly want to cover and update them regularly. Whenever a good paper is found, immediately enter it into the reference manager (with good keywords) and put a sentence or two about its main point (with the citation) into the relevant document. Whenever you decide to write the review, you will already have a file with the necessary material that only remains to be organized, allowing you to focus on conceptual integration and not combing through literature.
The life cycle of research can be viewed through the lens of the tools used at different stages. First come the conceptual ideas; many are interconnected, and a mind mapper is used to flesh out the structure of ideas, topics, and concepts; make it explicit; and share it within the team and with external collaborators. Then there is the knowledge: facts, data, documents, protocols, and pieces of information that relate to the various concepts. Kept in a combination of EndNote (for papers), Evernote (for information fragments and lists), and the file system (for documents), everything is linked and cross-referenced to facilitate the projects. Activities are action items, based on the mind map, specifying what to do, who is doing it, and for which purpose or grant. OmniFocus stores the subtasks within tasks within goals for the PI and everyone in the laboratory. During meetings with team members, these lists and calendar entries are used to synchronize objectives and keep activity optimized toward the next-step goals. The product (discovery and synthesis) is embodied in publications via a word processor and reference manager. A calendar structure is used to manage the trajectory from idea to publication or grant.
The tools are currently good enough to enable individual components in this pipeline. Because new tools are continuously developed and improved, we recommend a yearly overview and analysis of how well the tools are working (e.g., which component of the management plan takes the most time or is the most difficult to make invisible relative to the actual thinking and writing), coupled to a web search for new software and updated versions of existing programs within each of the categories discussed earlier.
A major opportunity exists for software companies in creating tools that provide all of these functions in a single integrated system. In future years, a single platform will surely appear that lets the user visualize the same research structure from the perspective of an idea mind map, a schedule, a list of action items, or a knowledge system to be queried. Subsequent development may even include artificial-intelligence tools for knowledge mining, to help the researcher extract novel relationships from the content. These will also need to dovetail with ELN platforms, enabling more seamless integration of project management with primary data, and may eventually become part of the suite of tools being developed to improve larger group dynamics (e.g., Microsoft Teams). One challenge in such endeavors is ensuring the compatibility of formats and management procedures across groups and collaborators, which can be mitigated by explicitly discussing the choice of software and process at the beginning of any serious collaboration.
Regardless of the specific software products used, a researcher needs to put systems in place for managing information, plans, schedules, and work products. These digital objects need to be maximally accessible and backed up, to optimize productivity. A core principle is to make these systems robust and lightweight enough to serve as an “external brain” (Menary, 2010), maximizing creativity and deep thought by ensuring all the details are recorded and available when needed. Although the above discussion focused on the needs of a single researcher (perhaps running a team), future work will address the unique needs of collaborative projects with more lateral interactions among significant numbers of participants.
We thank Joshua Finkelstein for helpful comments on a draft of the manuscript. M.L. gratefully acknowledges support by an Allen Discovery Center award from the Paul G. Allen Frontiers Group (12171) and the Barton Family Foundation.
Management Decision
ISSN : 0025-1747
Article publication date: 1 March 1974
Introduction
The last decade or so has seen an immense growth in the amount of research carried out across a wide range of management topics. Currently, the position is one of continuing growth, as a review of the many journals of management, organisation and allied subjects will testify. Not only has research become more widespread, it has also become much more sophisticated. A look at those same journals in the early 1960s will reveal methods of reporting very different from those of today. Now the use of statistical methods for data testing, especially multivariate techniques, is commonplace, if not always intelligible to the layman or even to fellow academics and research workers. Research designs have become more elaborate, and the topics of research are often elevated to such a level of academic sophistication as to seem irrelevant to the manager. While it would be unfair to suggest that all research in management takes this form, there is sufficient to make the manager ask what it all has to do with him, as Bennett et al. and Gee make clear.
Bennett, R. (1974), "The Role of Research in Management Decision Making", Management Decision , Vol. 12 No. 3, pp. 189-198. https://doi.org/10.1108/eb001050
Copyright © 1974, MCB UP Limited
Businesses perform research for various purposes, including acquiring vital information about their consumers and business clients. Management's primary duty is to make decisions, and without study and analysis of the current situation and forecasts of the future, those decisions may be ineffective. Research therefore supports better decision making: based on it, management can make sound and well-informed judgments.
Businesses conduct research to determine the effectiveness of their advertising. For example, a dairy manufacturer would wish to know how many individuals saw its most recent television commercial. The dairy company may discover that the longer the television ad runs, the more people become aware of its advertising. If few individuals have seen the ads, the corporation may elect to display them at other times.
Because of research, a company can make well-informed decisions. During the research process, the company gathers information about critical business areas, analyzes it, develops a strategy, and distributes the resulting business information. Reports presented to top management frequently include information on customer and employee preferences and on all available channels for sales, marketing, finance, and production. Management uses these details to select the optimal plan. Research is required at all stages and phases of business operations: initial research is needed to determine whether entering a given type of business would be lucrative and whether there is a market for the proposed product.
Concerning the employees, properly conducted research can provide vital facts about their satisfaction quotient, the difficulties they face, and how they can address problems related to workplace relationships. An analysis of the results would allow management to make changes to improve the overall effectiveness of the organization and its personnel. Workers can be trained and guided to meet the demands of the organization. This would benefit both personal and professional development while increasing overall organizational effectiveness.
Research is essential for managerial decision-making. All strategic business sectors are researched and appraised before developing more efficient procedures. All businesses usually have many ways of doing an activity. The organization picks the most effective, productive, and profitable through proper research.
Research can answer questions across a range of problems: getting a grip on industry trends, identifying new products to produce and deliver to the market, deciding where to locate an outlet, and better understanding what is needed to fulfill customer demands. Research can also help evaluate whether a product is accepted in the market, and it aids expansion into new markets.
Research helps test the potential success of new products. Before marketing them, businesses must understand what kinds of products consumers would like. For example, a restaurant may begin with focus groups, in which small groups of consumers try different varieties of burgers to determine which ones customers prefer. Ultimately, the company may validate the results through surveys of larger groups of people.
by Anam Ahmed
Published on 21 Nov 2018
Knowledge is power, as the saying goes. Conducting thorough research in business is an excellent strategy to learn more about your market, customers and competition. With that information in hand, you can make innovative and well-thought-out decisions to help grow your business. Research helps companies to plan new products, develop advertising campaigns and compete with direct competitors. Without research, companies would be left in silos, trying to navigate the market in the dark. When you’re in a management function, you’re in a key decision-making position in the company. As a result, it’s imperative to rely on solid research to determine your organization’s next steps.
Conducting research to better understand the industry your company operates in is integral to success. Knowing who your competition is, who your customers are and what products or services to offer will help you to develop a solid plan. In addition, business research helps organizations avoid future failures. Organizations can determine whether they should expand operations or scale back based on how the industry is doing as a whole. They can even decide if they should apply for a new loan or pay back debts sooner based on current interest rates. Understanding the industry also helps businesses price their products or services effectively, ensuring they are in line with market rates and competitors.
Your customers are the reason your business exists. As a result, it’s vital to know who they are, how they think, how they feel and why they might need your products or services. Organizations conduct market research in various ways, such as through phone or online surveys, and can also purchase research that has already been published for their industry. It’s a great way to understand what your customers’ biggest challenges are so that you can determine how to help them. Market research is also vital to new product development. Research helps to reduce risk when making a big investment in creating a new product or offering a new service.
Knowing your customers also helps to fine-tune marketing campaigns. This way, you can target customers effectively, homing in on their pain points and offering your organization as a viable solution. Brand research helps organizations understand how their customers view them and shows any changes needed to improve the business's overall image.
Every business has some kind of competition; no one operates alone. As a result, it’s important to know who your true competitors are and how you compare. Companies that are honest about their strengths and weaknesses as compared to their competitors have a higher chance of success. Through effective competitor analysis and research, organizations can determine if they need to develop new products or services, whether they should consider new marketing strategies or if their pricing plan needs some tweaks. By understanding the competition better, organizations also can develop new ways to increase market share.
During the research process, a business acquires key information about its different areas, which it can analyze and use to build strategy and improve efficiency and performance. Reports sent to top-level management usually contain information about employee preferences, consumer likes and dislikes, and the channels available for effective sales, finance, production, and marketing. The information gathered helps determine the strategy best suited to the organization. For instance, before starting a business, research helps evaluate whether the venture would be profitable and whether demand really exists for the product the company would manufacture. Effective research thus supports good decision making at every phase of business operations.
Properly conducted research not only uncovers but also builds a thorough understanding of staff satisfaction. Through it, management learns of the difficulties the staff experience and gains a clearer picture of how to handle situations in the workplace. Well-conducted research therefore helps the management and the organization make the changes needed for efficient, smooth, and successful functioning and for employee satisfaction at work. Employees' motivation increases as they are coached and trained where they need it, improving their personal and professional performance and, with it, the overall performance of the organization. By undertaking research across different areas of the business, each area is analyzed and evaluated, helping the organization pick more efficient techniques that increase productivity and profitability. In short, effective research goes a long way toward answering the problems a business faces.
New research framework will help AI chatbots better mimic human conversation
By Kevin Manne
Release Date: July 29, 2024
BUFFALO, N.Y. — As artificial intelligence increasingly impacts our daily lives, researchers in the University at Buffalo School of Management have developed a new framework to transform AI chatbots into more intuitive, human-like conversation partners.
Forthcoming in AIS Transactions on Human-Computer Interaction, the study introduces the Chatbot Discourse Design Framework, which helps find key conversation patterns in discussions between humans and connects them to chatbot designs to enhance their conversational abilities.
“Chatbots are everywhere, from customer service to health care, and their success hinges upon their ability to understand what you’re saying and provide meaningful responses,” says study co-author Raj Sharman, PhD, professor of management science and systems in the UB School of Management. “Our framework will allow all types of organizations to improve their operations and overall customer experience.”
To build their framework, the researchers conducted a comprehensive search of publications spanning more than two decades, from 2000 to 2022. Their search yielded nearly 100 articles focused on the discourse analysis used in chatbot design, and also included papers from before 2000 that offered foundational insights.
Through this investigation, they identified three distinct chatbot types: informative, task-based and conversational, each requiring tailored conversation strategies.
“The recent failure of some well-known chatbots shows how important it is for a chatbot to understand the kind of conversation you want to have with it,” says Sharman. “By building chatbots that recognize different types of discussions, businesses can make them more efficient, engaging and trustworthy for users.”
Sharman collaborated on the study with UB School of Management graduates Sagarika Suresh Thimmanayakanapalya, PhD ’20, senior quantitative researcher at JPMorganChase, who was lead author; and Pavankumar Mulgund, PhD ’20, assistant professor of management information systems at the University of Memphis Fogelman College of Business and Economics, who was second author.
Now in its 100th year, the UB School of Management is recognized for its emphasis on real-world learning, community and impact, and the global perspective of its faculty, students and alumni. The school also has been ranked by Bloomberg Businessweek, Forbes and U.S. News & World Report for the quality of its programs and the return on investment it provides its graduates. For more information about the UB School of Management, visit management.buffalo.edu.
Mental health includes emotional, psychological, and social well-being. It is more than the absence of a mental illness—it’s essential to your overall health and quality of life. Self-care can play a role in maintaining your mental health and help support your treatment and recovery if you have a mental illness.
Self-care means taking the time to do things that help you live well and improve both your physical health and mental health. This can help you manage stress, lower your risk of illness, and increase your energy. Even small acts of self-care in your daily life can have a big impact.
Self-care looks different for everyone, and it is important to find what you need and enjoy. It may take trial and error to discover what works best for you.
Learn more about healthy practices for your mind and body.
Seek professional help if you are experiencing severe or distressing symptoms that have lasted 2 weeks or more.
If you have concerns about your mental health, talk to a primary care provider. They can refer you to a qualified mental health professional, such as a psychologist, psychiatrist, or clinical social worker, who can help you figure out the next steps. Find tips for talking with a health care provider about your mental health.
You can learn more about getting help on the NIMH website. You can also learn about finding support and locating mental health services in your area on the Substance Abuse and Mental Health Services Administration website.
If you or someone you know is struggling or having thoughts of suicide, call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org. This service is confidential, free, and available 24 hours a day, 7 days a week. In life-threatening situations, call 911.
Suicide is preventable—learn about warning signs of suicide and action steps for helping someone in emotional distress.
GREAT: Helpful Practices to Manage Stress and Anxiety: Learn about helpful practices to manage stress and anxiety. GREAT was developed by Dr. Krystal Lewis, a licensed clinical psychologist at NIMH.
Getting to Know Your Brain: Dealing with Stress: Test your knowledge about stress and the brain. Also learn how to create and use a “stress catcher” to practice strategies to deal with stress.
Guided Visualization: Dealing with Stress: Learn how the brain handles stress and practice a guided visualization activity.
Mental Health Minute: Stress and Anxiety in Adolescents: Got 60 seconds? Take a mental health minute to learn about stress and anxiety in adolescents.
Last Reviewed: February 2024
Published on 30.7.2024 in Vol 26 (2024)
Authors of this article:
1 State Key Laboratory of Cardiovascular Disease, Fuwai Hospital, National Center for Cardiovascular Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
2 Ping An Healthcare and Technology, Beijing, China
Yuejin Yang, MD, PhD
State Key Laboratory of Cardiovascular Disease
Fuwai Hospital, National Center for Cardiovascular Diseases
Chinese Academy of Medical Sciences & Peking Union Medical College
No 167, Beilishi Road
Xicheng District
Beijing, 100037
Phone: 86 13701151408
Email: [email protected]
Background: Machine learning (ML) risk prediction models, although much more accurate than traditional statistical methods, are inconvenient to use in clinical practice due to their nontransparency and requirement of a large number of input variables.
Objective: We aimed to develop a precise, explainable, and flexible ML model to predict the risk of in-hospital mortality in patients with ST-segment elevation myocardial infarction (STEMI).
Methods: This study recruited 18,744 patients enrolled in the 2013 China Acute Myocardial Infarction (CAMI) registry and 12,018 patients from the China Patient-Centered Evaluative Assessment of Cardiac Events (PEACE)-Retrospective Acute Myocardial Infarction Study. The Extreme Gradient Boosting (XGBoost) model was derived from 9616 patients in the CAMI registry (2014, 89 variables) with 5-fold cross-validation and validated on both the 9125 patients in the CAMI registry (89 variables) and the independent China PEACE cohort (10 variables). The Shapley Additive Explanations (SHAP) approach was employed to interpret the complex relationships embedded in the proposed model.
Results: In the XGBoost model for predicting all-cause in-hospital mortality, the 8 most important variables were age, left ventricular ejection fraction, Killip class, heart rate, creatinine, blood glucose, white blood cell count, and use of angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin II receptor blockers (ARBs). The area under the curve (AUC) on the CAMI validation set was 0.896 (95% CI 0.884-0.909), significantly higher than that of previous models: the AUC for the Global Registry of Acute Coronary Events (GRACE) model was 0.809 (95% CI 0.790-0.828), and for the TIMI model it was 0.782 (95% CI 0.763-0.800). Although the China PEACE validation set had only 10 available variables, the AUC reached 0.840 (95% CI 0.829-0.852), a substantial improvement over the GRACE (0.762, 95% CI 0.748-0.776) and TIMI (0.789, 95% CI 0.776-0.803) scores. Several novel and nonlinear relationships were discovered between patients’ characteristics and in-hospital mortality, including a U-shaped pattern for high-density lipoprotein cholesterol (HDL-C).
Conclusions: The proposed ML risk prediction model was highly accurate in predicting in-hospital mortality. Its flexible and explainable characteristics make the model convenient to use in clinical practice and could help guide patient management.
Trial Registration: ClinicalTrials.gov NCT01874691; https://clinicaltrials.gov/study/NCT01874691
Acute myocardial infarction (AMI) is a major cause of hospitalization and mortality in China, and ST-segment elevation myocardial infarction (STEMI) accounts for over 80% of myocardial infarctions [ 1 - 3 ]. It is critical to accurately predict the risk of in-hospital mortality for patients with STEMI to improve prognosis. Traditionally, most risk prediction models have been based on generalized linear regression methods [ 4 , 5 ]. Although straightforward to understand and apply, these models require parametric assumptions [ 6 , 7 ]. For example, using the logistic regression (LR) method, the Global Registry of Acute Coronary Events (GRACE) [ 4 ] and Thrombolysis in Myocardial Infarction (TIMI) risk scores [ 5 ] oversimplified the complexity of the real association between variables and outcome, resulting in poor predictive accuracy [ 8 , 9 ]. Recently, machine learning (ML) techniques have been increasingly used for predicting different clinical events in cardiovascular disease [ 10 - 12 ] and have achieved higher accuracy than traditional models. However, ML models, often built on a large number of variables [ 14 - 16 ], are difficult to use in clinical practice due to the need for extensive input data and the challenge of identifying specific therapeutic targets. The complexity and opacity of ML models require a shift toward explainable artificial intelligence (XAI) methods to guarantee that model outputs are comprehensible to end users [ 13 ]. Moreover, in clinical practice, where many scenarios are unknown, a significant challenge is how to apply a model flexibly when some variables are missing. Therefore, we aimed to develop an ML risk prediction model for in-hospital mortality in patients with STEMI that is not only highly accurate but also explainable and flexible in the number of input variables (tolerant of missing variables), making it easy to use in clinical practice.
The patients included in this study were from the China Acute Myocardial Infarction (CAMI) registry [ 3 ], organized and conducted by the Fuwai Hospital, National Center for Cardiovascular Diseases, China, from January 2013 to September 2014. The methodology of the CAMI registry (NCT01874691) has been previously described [ 3 ]. In short, the CAMI registry was a prospective, nationwide, multicenter observational study for patients with AMI. The registry included 3 levels of hospitals (provincial, prefecture, and county), reflecting the typical Chinese governmental and administrative model and providing broad geographic representation across all provinces and municipalities across mainland China. Patients with AMI were consecutively enrolled, and data were collected upon their arrival and throughout their hospital stay until discharge. Data were collected, validated, and submitted by trained clinical cardiologists or cardiovascular fellows to ensure accuracy and reliability at each participating site. Patients diagnosed as non-STEMI (NSTEMI) or lack of in-hospital mortality status were excluded from the study.
The CAMI registry data were used for model derivation and internal validation. Patients with STEMI hospitalized in 2014 (n=9616, 51.3%) were used to derive the model, while those hospitalized in 2013 (n=9125, 48.7%) were used for internal validation. An independent cohort of patients from the China Patient-Centered Evaluative Assessment of Cardiac Events (PEACE)-Retrospective Acute Myocardial Infarction Study [ 2 ], another nationally representative sample of patients with STEMI spanning from 2001 to 2011 (N=12,108), was also used to externally validate the proposed risk prediction model. We only selected 10 important variables to carry out the validation, with the aim of assessing the proposed risk prediction model’s flexibility when applied in daily clinical practice. The internal validation set sampled at a different time point, along with the independent external validation set, were both used to assess the model’s reproducibility and generalizability to new and different patients.
Both study protocols conformed to the ethical guidelines of the 1975 Declaration of Helsinki and were approved by the ethics review board committee of Fuwai Hospital (431) [ 2 , 3 ]. Written informed consent was obtained from eligible patients before registration. All data were anonymized.
The main outcome was all-cause in-hospital mortality, defined as death for any reason during hospitalization.
The patients with STEMI included in the CAMI cohort were characterized by a total of 89 variables (Table S1 in Multimedia Appendix 1 ), including social demographics, presentation characteristics, laboratory tests, treatment, medical history, and more [ 3 ]. The patients with STEMI included in the China PEACE-Retrospective Acute Myocardial Infarction Study [ 2 ] were characterized by 10 variables, including age, weight, Killip class, heart rate, systolic blood pressure (SBP), glucose, creatinine, white blood cell (WBC) count, high-density lipoprotein cholesterol (HDL-C), and use of angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin II receptor blockers (ARBs).
Model construction.
The predictive model was developed using the Extreme Gradient Boosting (XGBoost) [ 17 ] approach based on the CAMI derivation set. XGBoost [ 18 ] sequentially ensembles a series of relatively weak base classifiers (typically decision trees) into a stronger one and has achieved state-of-the-art results in many clinical challenges [ 10 , 19 ]. Apart from its highly competitive predictive performance, we chose the XGBoost method for its ability to handle missing data automatically [ 17 ]: users do not need to impute missing values when deriving, validating, or applying the model. XGBoost provides an importance score for each variable, representing the frequency with which that variable is used across all trees. The hyperparameters of the XGBoost model were tuned by 5-fold cross-validation on the derivation set.
The Shapley Additive Explanations (SHAP) method [ 20 ] was used to interpret the derived XGBoost model. It offers explanations on how the XGBoost model makes predictions and interprets the complex nonlinear relationship among the predictors and outcomes [ 19 ]. This method has been applied recently in other clinical studies [ 10 , 19 ]. SHAP represents the predicted risk as a cumulative effect of contributing variables for each prediction. The variable impact values that SHAP computes essentially represent the change in the predicted risk of the XGBoost model when we observe a feature (such as the weight of a patient) versus when we do not observe the feature (such as not knowing a patient’s weight).
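The “observe a feature versus not observe it” interpretation above is the Shapley value definition that SHAP approximates efficiently for trees. For a tiny model it can be computed exactly by averaging marginal contributions over all feature orderings; the two-feature linear risk model and the population means below are hypothetical stand-ins for the paper’s XGBoost model.

```python
from itertools import permutations
from math import factorial

def model(features):
    # Hypothetical risk model. Unobserved features fall back to assumed
    # population means (a simplification of SHAP's conditioning).
    age = features.get("age", 60)          # assumed mean age
    hr = features.get("heart_rate", 80)    # assumed mean heart rate
    return 0.002 * age + 0.001 * hr

def shapley_values(patient):
    """Exact Shapley values: average marginal contribution over orderings."""
    names = list(patient)
    phi = {n: 0.0 for n in names}
    for order in permutations(names):
        revealed, prev = {}, model({})
        for n in order:
            revealed[n] = patient[n]       # "observe" this feature
            cur = model(revealed)
            phi[n] += cur - prev           # its marginal effect here
            prev = cur
    nperm = factorial(len(names))
    return {n: v / nperm for n, v in phi.items()}

patient = {"age": 75, "heart_rate": 110}
base = model({})            # baseline risk with nothing observed
phi = shapley_values(patient)
# Additivity: base value plus all contributions recovers the prediction.
```

This additive decomposition is exactly what the force plots in the paper visualize: each bar is one feature’s phi value pushing the prediction away from the baseline risk.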
XGBoost’s ability to handle missing values automatically makes it a robust and flexible choice for dealing with input variables. Users are free to input any number of available variables and leave other unrecorded ones as “N/A” (not available) values. Several experiments were conducted to assess the XGBoost model’s flexibility. First, we retained the top 20, 15, and 10 most important variables and replaced the others with “N/A” values on the CAMI derivation set. Second, we randomly reduced the number of available variables from 89 to 10 in the CAMI validation set ( Multimedia Appendix 1 ). Third, we included 10 variables from the independent China PEACE data set for our analysis.
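The masking step in these experiments can be sketched as follows: keep only the top-k variables in a record and mark the rest as unavailable, mirroring the “N/A” inputs XGBoost tolerates. The record values are invented; the importance ordering follows the top predictors the study reports (age, LVEF, Killip class, heart rate, creatinine).

```python
def mask_to_top_k(record, importance_order, k):
    """Keep the k most important variables; mark the rest unavailable."""
    keep = set(importance_order[:k])
    return {name: (val if name in keep else None)
            for name, val in record.items()}

# Ordering per the study's reported top predictors; values are hypothetical.
importance_order = ["age", "lvef", "killip", "heart_rate", "creatinine"]
record = {"age": 75, "lvef": 40, "killip": 2,
          "heart_rate": 110, "creatinine": 1.3}
masked = mask_to_top_k(record, importance_order, 3)
```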
Descriptive statistics were estimated as mean (SD) for the continuous variables and frequency (percentage) for the categorical ones. The missing rates for each variable were also calculated. Missing values were imputed using the chained equation method proposed in the Multiple Imputation by Chained Equations (MICE) algorithm [ 21 ], as the models being compared—namely, lasso LR, random forest, TIMI scores, and GRACE scores—cannot handle missing data automatically. The discrimination ability was estimated by the area under the curve (AUC). Isotonic regression [ 22 ] was used downstream of the XGBoost model to adjust the predictions [ 23 , 24 ]. The calibration was assessed using the Hosmer-Lemeshow goodness-of-fit test [ 25 ] on the CAMI derivation set. Additionally, a decile plot of observed versus predicted risk was used to visualize the calibration.
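Isotonic regression fits a nondecreasing step function mapping raw model scores to calibrated probabilities. The paper does not specify its implementation; a common one is the pool-adjacent-violators algorithm (PAVA), sketched here on toy score-label pairs.

```python
def isotonic_fit(pairs):
    """Pool-adjacent-violators on (raw_score, binary_label) pairs.

    Returns blocks (lo_score, hi_score, calibrated_probability) whose
    probabilities are nondecreasing in the raw score.
    """
    pts = sorted(pairs)                    # sort by raw score
    merged = []                            # block: [label_sum, n, lo, hi]
    for s, y in pts:
        merged.append([y, 1, s, s])
        # Merge while the previous block's mean exceeds the current one's
        # (compared via cross-multiplication to stay in exact arithmetic).
        while len(merged) > 1 and merged[-2][0] * merged[-1][1] > merged[-1][0] * merged[-2][1]:
            hi = merged.pop()
            lo = merged.pop()
            merged.append([lo[0] + hi[0], lo[1] + hi[1], lo[2], hi[3]])
    return [(b[2], b[3], b[0] / b[1]) for b in merged]

# Toy example: one out-of-order label at score 0.4 gets pooled away.
blocks = isotonic_fit([(0.1, 0), (0.2, 0), (0.3, 1),
                       (0.4, 0), (0.7, 1), (0.9, 1)])
```

Downstream of a classifier, each block’s probability replaces the raw scores falling in its range, which is how the monotone calibration mapping is applied at prediction time.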
The in-hospital mortality rate was 6.9% (n=663), 6.8% (n=621), and 9.3% (n=1132) in the CAMI derivation, validation, and China PEACE sets, respectively. The descriptive statistics of the CAMI and China PEACE data set are illustrated in Table S2 in Multimedia Appendix 1 , while the missing rates are listed in Table S3 in Multimedia Appendix 1 .
Figure 1 illustrates the receiver operating characteristic (ROC) curves of all the compared models. XGBoost produced the highest discrimination performance for in-hospital mortality, with an AUC of 0.896 (95% CI 0.884-0.909) on the CAMI validation set, better than the 2 compared ML methods: random forest (AUC 0.861, 95% CI 0.845-0.876) and LR with lasso penalty (AUC 0.850, 95% CI 0.834-0.866). The XGBoost model also exhibited a significant improvement over the 2 well-established models: GRACE score (AUC 0.809, 95% CI 0.790-0.828) and TIMI score (AUC 0.782, 95% CI 0.763-0.800). All comparisons were statistically significant at P<.05.
The Hosmer-Lemeshow statistic for the XGBoost model was 2.378 ( P =.97), indicating a very good calibration result. The decile plot further confirmed strong agreement between XGBoost predicted probability and the observed in-hospital mortality risk ( Figure 2 ).
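The Hosmer-Lemeshow procedure groups patients by predicted risk and compares observed with expected event counts per group. A minimal sketch on invented data (the statistic only; the P value additionally requires a chi-square CDF, omitted here):

```python
def hosmer_lemeshow(preds, labels, groups=10):
    """Sum of (observed - expected)^2 / variance over risk-ordered groups."""
    pairs = sorted(zip(preds, labels))     # order patients by predicted risk
    n = len(pairs)
    stat = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        exp = sum(p for p, _ in chunk)     # expected events in group
        obs = sum(y for _, y in chunk)     # observed events in group
        pbar = exp / len(chunk)
        denom = len(chunk) * pbar * (1 - pbar)
        if denom > 0:
            stat += (obs - exp) ** 2 / denom
    return stat

# Perfectly calibrated toy data: observed rate matches prediction per group.
preds = [0.2] * 10 + [0.5] * 10
labels = [1, 1] + [0] * 8 + [1] * 5 + [0] * 5
stat = hosmer_lemeshow(preds, labels, groups=2)
```

A small statistic (large P value) indicates agreement between predicted and observed risk, which is what the paper reports for the calibrated XGBoost model.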
The hyperparameters for XGBoost and random forest, tuned by 5-fold cross-validation, are listed in Tables S4 and S5 in Multimedia Appendix 1 .
Figure 3 illustrates the variable importance score in the XGBoost model, reflecting the frequency with which a variable was used across all trees. Age was the most important predictor of in-hospital mortality, followed by left ventricular ejection fraction (LVEF), Killip class, heart rate, creatinine, and blood glucose.
Figure 4 explains the rationale behind the model’s prediction of an individual’s risk. It displays the relative contributions of all features toward the predicted risk of in-hospital mortality. For instance, a predicted risk value of 0.01 for illustrated patient A was influenced by variables such as Killip class, LVEF, age, weight, and use of ACEI/ARB, among others. The red bars in Figure 4 indicate variables that increase the risk (pushing to the right), while the blue bars indicate variables that decrease the risk (pushing to the left). The length of each bar corresponds to the magnitude of its effect.
Figure 5 shows important novel and nonlinear relationships between individual variables and in-hospital mortality risk captured by the XGBoost model. For example, when age was less than 56 years, its attribution to in-hospital mortality remained constant, and it increased linearly after age exceeded 56 (J-shaped relationship). The heart rate variable displayed an S-shaped relationship with in-hospital mortality risk: the risk increased linearly once the heart rate exceeded 73 bpm and almost doubled by the time it reached 125 bpm. LVEF followed an inverted S-shaped pattern. Creatinine’s attribution increased linearly until 26 and became constant after that (inverted J-shaped relationship), similar to WBC. Higher blood glucose reflected an increased in-hospital mortality risk. Variables like total cholesterol, SBP, and weight showed an L-shaped pattern. An N-shaped relationship was shown for neutrophilic granulocytes: patients with neutrophilic granulocytes between 77% and 90% were predicted to have a higher in-hospital mortality risk. HDL-C displayed a U-shaped pattern. For potassium, a value between 4.13 and 4.49 mmol/L predicted the lowest in-hospital mortality risk.
When we retained the top 20, 15, and 10 most important variables ( Figure 2 ) and replaced the others with “N/A” values in the CAMI validation set, the XGBoost model still achieved AUCs of 0.892 (95% CI 0.879-0.905), 0.885 (95% CI 0.872-0.899), and 0.877 (95% CI 0.862-0.891), respectively. When the number of retained variables was reduced randomly from 89, the AUC decreased from 0.896 to 0.825 (SD 0.020) with 20 available variables and to 0.810 (SD 0.011) with 10 available variables (Figure S1 in Multimedia Appendix 1 ). When the XGBoost model was validated on the China PEACE data set with the top 10 available variables ( Figure 2 ), it achieved an AUC of 0.840 (95% CI 0.829-0.852). For comparison, the TIMI score and GRACE score applied to the China PEACE data set gained AUCs of 0.762 (95% CI 0.748-0.776) and 0.789 (95% CI 0.776-0.803), respectively. The XGBoost model still significantly outperformed the conventional TIMI and GRACE risk score models.
For practical convenience, we embedded the XGBoost prediction model in a web-based calculator that required only the top 10 most important variables as inputs [ 19 ].
In this study, we proposed a risk model that predicted in-hospital mortality for patients with STEMI by incorporating the ML method XGBoost and the model interpretation approach SHAP. The model we constructed had excellent performance in terms of high predictive accuracy, high tolerance to missing values (flexibility), and good clinical interpretability. Importantly, we identified the top 7 clinical factors affecting in-hospital mortality as age, LVEF, Killip class, heart rate, creatinine, glucose, and WBC. Among these, LVEF, glucose, and WBC were not included in the current traditional predictive models. Although creatinine is also included in the GRACE score, its relationship with mortality is not a simple linear one. The predictive value of glucose and WBC exceeds that of other variables in traditional predictive models, such as blood pressure, weight, and medical history (hypertension, diabetes, and angina). We believe that these findings can help doctors understand the value of ML models and uncover the pathophysiological significance of certain clinical variables in myocardial infarction.
While traditional statistical models such as TIMI and GRACE, as recommended by current guidelines [ 26 ], are useful and user-friendly, their overly simplified nature may result in inadequate predictive accuracy for risk classification and decision-making [ 8 ]. First, these models are developed based on a limited number of variables and may not encompass comprehensive information. Second, the LR method used by these models requires strong assumptions, including a linear relationship under the logit function, independence of observations, and no multicollinearity among variables [ 7 , 8 , 25 , 27 ]. This results in underestimating the complexity of the real association among variables and outcomes.
In contrast, ML methods can handle a larger number of variables, require no parametric assumptions, and can learn the complex relationships hidden in the data automatically [ 9 ]. The XGBoost method overcomes these limitations by generating a series of classification and regression trees (CARTs) with each one learning the residuals of its predecessors. The boosting mechanism gives the model a strong predictive power. As observed, the XGBoost model achieved an impressive AUC of 0.896 (95% CI 0.884-0.909) on the CAMI validation set, outperforming the other methods and proving to be a more powerful and effective tool for clinical risk prediction.
The XGBoost model’s ability to tolerate missing values makes it well-suited for clinical applications, where incomplete variables are frequent [ 28 - 30 ]. While most ML methods achieve accuracy and precision by learning from a large number of variables, they often lose practicality because it is usually difficult to collect all the predictors used in the model in clinical practice. In such cases, missing values must be imputed if clinicians still want to apply the model. The proposed XGBoost model overcomes this weakness thanks to its ability to deal with missing values. We demonstrated that the XGBoost model’s performance is relatively robust when faced with incomplete data compared to the traditional LR model. Even with only the top 10 important variables, the XGBoost model achieved an AUC of 0.877 (95% CI 0.862-0.891) on the CAMI validation set. On the independent China PEACE set with only the top 10 important variables available, XGBoost gained an AUC of 0.840 (95% CI 0.829-0.852) compared to TIMI 0.762 (95% CI 0.748-0.776) and GRACE 0.789 (95% CI 0.776-0.803). These results demonstrated the XGBoost model’s flexibility and generalization ability, which could alleviate concerns about the feasibility of applying complex ML models in clinical practice.
Another concern about the complex ML approaches applied in clinical practice is their lack of transparency. Unlike the widely employed LR method, whose coefficients clearly indicate the effect of predictive factors on the outcome, the black-box nature of complex ML algorithms applied in medical tasks has been seriously criticized and doubted in recent years [ 8 , 9 ]. To address this issue, our study used SHAP to interpret how the predicted risk was determined for individual patients and uncover the complex relationship between predictors and outcomes embedded in the XGBoost model.
Our results showed that HDL-C displayed a U-shaped relationship with in-hospital mortality among patients with STEMI. In previous studies, Madsen et al [ 31 ] reported a U-shaped association between HDL-C and mortality, using data from 52,268 men and 64,240 women enrolled in 2 prospective population-based studies. Similarly, Bowe et al [ 32 ] found a U-shaped relationship between HDL-C and the risk of all-cause mortality in patients with kidney disease. For the variable potassium, our result showed that the patients with STEMI with potassium levels ranging from 4.13 to 4.49 mmol/L had the lowest in-hospital mortality risk, while levels greater than 4.5 mmol/L increased the mortality risk. Clinical practice guidelines recommend maintaining serum potassium levels between 4.0 and 5.0 mmol/L in patients with acute myocardial infarction (AMI) [ 33 , 34 ]. However, recent studies have challenged these guidelines, reporting that potassium levels greater than 4.5 mmol/L are associated with increased mortality [ 35 - 37 ]. Our study found that creatinine >1.1 mg/dL (94.5 μmol/L) contributed to a higher in-hospital mortality risk. A previous study [ 38 ] reported that an elevated serum creatinine level (defined as creatinine ≥1.2 mg/dL) predicted a higher long-term mortality risk in patients with AMI.
For the variable blood glucose, our results showed that levels less than 8.15 mmol/L were safer for patients with STEMI. Another study reported that the best cutoff values for 30-day mortality among patients with STEMI were 149 mg/dL (8.27 mmol/L) for those without diabetes, 231 mg/dL (12.82 mmol/L) for those with diabetes, and 169 mg/dL (9.38 mmol/L) for all patients [ 39 ]. For the variable WBC, our result showed that a higher WBC count was associated with higher in-hospital mortality risk, with a safer threshold being less than 10.77 × 10⁹/L. Cannon et al [ 40 ] reported that mortality at 30 days showed a curvilinear increase with increasing WBC count, with mortality rising in patients with WBC count >10,000/μL (P<.0001). Previous studies often investigated this relationship by categorizing or binning continuous variables and regressing the outcome on the categorical variables. However, this approach is heavily influenced by predefined cutoffs and cannot provide a continuous picture of the relationship. In contrast, our model offered more thorough and quantitative insights into the exact change in risk induced by specific patient characteristics. By interpreting how each variable contributed to in-hospital mortality, our study could help clinicians identify specific therapeutic targets and further guide patient management.
Our research has certain guiding significance for clinical implementation. First, the new model is significantly superior to the traditional GRACE and TIMI models, helping doctors predict patient prognosis. Second, ML identified several variables not included in past models, such as WBC and blood glucose, which may serve as potential targets for clinical intervention or provide further understanding of the pathophysiology of disease development. Third, while clinicians often find it difficult to understand the variables selected by ML, pairing the XGBoost model with the SHAP interpretation approach captures the nonlinear relationships between predictors and outcomes that underlie its accuracy gains, offering a clear explanation for why ML can improve predictive efficiency and thus enhancing clinicians’ understanding of the performance improvement. Methodologically, we used internal validation and a large independent external validation sample, all leading to consistent conclusions.
However, despite the superior performance of the proposed XGBoost model, several limitations still exist. First, the proposed XGBoost model was derived and validated on Chinese STEMI patient cohorts; further validation is needed to confirm its efficiency on more general cohorts. Second, although the registries were designed prospectively, this research is a retrospective analysis, so the variables available to our study may be limited; the model might be more powerful if more informative variables were added.
In conclusion, the proposed ML model in our paper demonstrated strong advantages in predictive ability, flexibility, and interpretability. Although some results need further study and verification, we have shown the benefits of complex models in the field of disease predictions. We offered a web calculator for convenient application, and we hope our study can help augment and extend the effectiveness of cardiologists to improve patient care and promote incorporating ML into daily practice.
This work was supported by the Twelfth Five-Year Planning Project of the Scientific and Technological Department of China (2011BAI11B02) and the Chinese Academy of Medical Sciences (CAMS) Innovation Fund for Medical Sciences (CIFMS; 2016-I2M-1-009 and 2020-I2M-C&T-B-050).
YY conceived the study. JY contributed to the literature search and the development of the manuscript under the supervision of YY. YL and XL contributed to the data analysis. ST contributed to literature screening. YZ, TC, and GX contributed to data extraction and assessment. HX and XG contributed to the revision. All authors contributed to the critical review of the manuscript and approved the final draft. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. YY is the guarantor of the study.
None declared.
Additional information and tables.
ACEI | angiotensin-converting enzyme inhibitor |
AUC | area under the curve |
ARB | angiotensin II receptor blocker |
AMI | acute myocardial infarction |
CAMI | China Acute Myocardial Infarction |
CART | classification and regression tree |
GRACE | Global Registry in Acute Coronary Events |
HDL-C | high-density lipoprotein cholesterol |
LVEF | left ventricular ejection fraction |
LR | logistic regression |
MICE | Multiple Imputation by Chained Equations |
ML | machine learning |
NSTEMI | non–ST-segment elevation myocardial infarction |
PEACE | Patient-Centered Evaluative Assessment of Cardiac Events |
ROC | receiver operating characteristic |
SBP | systolic blood pressure |
SHAP | Shapley Additive Explanations |
STEMI | ST-segment elevation myocardial infarction |
TIMI | Thrombolysis In Myocardial Infarction |
WBC | white blood cell |
XGBoost | Extreme Gradient Boosting |
XAI | explainable artificial intelligence |
Edited by T de Azevedo Cardoso; submitted 19.06.23; peer-reviewed by H Sun, L Borges; comments to author 11.01.24; revised version received 25.03.24; accepted 18.06.24; published 30.07.24.
©Jingang Yang, Yingxue Li, Xiang Li, Shuiying Tao, Yuan Zhang, Tiange Chen, Guotong Xie, Haiyan Xu, Xiaojin Gao, Yuejin Yang. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 30.07.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
If you have type 1 diabetes and kidney disease, you may be eligible for a trial investigating how Ozempic affects kidney function and glucose control.
Clinical Trials Identifier: NCT05822609
Trial Name: Trial of Semaglutide for Diabetic Kidney Disease in Type 1 Diabetes (RT1D)
Diabetes Type: Type 1 diabetes
Trial Sponsor: University of Washington
This study is researching how Ozempic (semaglutide) affects kidney function in people who have type 1 diabetes and chronic kidney disease.
Ozempic is a GLP-1 receptor agonist that has been approved for type 2 diabetes and, under the brand name Wegovy, for weight management. It is not currently approved for type 1 diabetes.
Besides kidney function, the trial will also test whether Ozempic is safe and effective at managing blood glucose in type 1 diabetes. Several small studies, such as the STEMT Trial, have shown promising results for Ozempic in type 1 diabetes. Based on these findings, researchers predict that Ozempic will reduce total daily insulin doses and better balance glucose levels.
Researchers are recruiting 60 adults with type 1 diabetes and kidney disease for this trial. Participants will either receive Ozempic (semaglutide 1 mg) or a placebo injection; all participants will wear a CGM.
At the end of the 26-week trial, participants will undergo an MRI scan to measure how much oxygen is delivered to the kidneys and used by kidney cells. Kidney oxygenation is an important indicator of kidney function since both too little and too much oxygen can damage kidney cells.
Researchers will also measure urinary albumin to creatinine ratio (UACR) and estimated glomerular filtration rate (eGFR). UACR measures the amount of protein in the urine; the presence of urine protein can indicate kidney disease. They will also measure eGFR, which is a test of kidney function. Note that the American Diabetes Association recommends annual UACR and eGFR screenings for people who have type 1 diabetes.
The study will also track participants’ time in range , glucose variation, and total daily insulin dose.
Kidney disease is a serious complication of diabetes that affects approximately 30% of people with type 1, according to the National Kidney Foundation . While keeping blood sugar levels in target range has been shown to reduce the risk of developing chronic kidney disease, there are still no medications approved to prevent or treat kidney disease in type 1 diabetes.
Ozempic has already shown impressive results in type 2 diabetes and kidney disease. Recently, the FLOW trial found that Ozempic slowed the progression of kidney disease by 24% and reduced the risk of death from kidney disease and major cardiac events.
If Ozempic is successful at improving kidney function in this study, there will likely be larger trials that could eventually lead to FDA approval for Ozempic in type 1 diabetes.
This study will also help advance research on adjunctive therapies for type 1 diabetes: non-insulin medications that can be used to help manage complications of diabetes. Studies are underway to investigate GLP-1s and SGLT-2s, as well as the kidney drug Kerendia (finerenone), as adjunctive therapies. Currently, Symlin (pramlintide) is the only approved adjunctive therapy, but it is not widely used by people with type 1.
You may be eligible if you:
People who have had recent diabetic ketoacidosis or a history of pancreatitis are not eligible for this study. See a full list of inclusion and exclusion criteria here.
This study is recruiting in Colorado and Washington, as well as Toronto, Canada. To enroll or learn more about this study, contact [email protected] or call 206-897-4728.
Learn more about new research for type 1 diabetes:
This paper showcases the integration of several technologies to develop an Unmanned Traffic Management System that enables the centralized coordination of unmanned ground and aerial vehicles. By addressing the need for safe and efficient autonomous vehicle operations, this system contributes to improved safety and reliability in various applications, from civilian to military contexts. Furthermore, the exploration of dynamic vision-based drone detection methods adds valuable insights into the field of real-time image processing and deep learning. From that perspective, a more in-depth computer vision development is presented. The system's core components include the Swarmie, an unmanned ground vehicle (UGV) guided over a wireless mesh network via radio frequency (RF)-enabled markers. Simultaneously, an unmanned aerial vehicle (UAV) is controlled by an IoT cloud platform that sends coordinates to an embedded system. The integration of wireless communication and navigation markers attests to the importance of circuitry and microcontrollers in developing RF markers that enhance navigation. One of the primary objectives of this research is the development of a dynamic vision-based drone detection system for sense-and-avoid actions. Two different methods are explored for drone detection: the first utilizes the Viola-Jones algorithm; the second involves the You Only Look Once (YOLO) real-time object detection algorithm. The performance of these methods is evaluated, providing insights into the effectiveness of each approach in real-time drone detection.
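Real-time detectors such as YOLO emit many overlapping candidate boxes, which are typically filtered by non-maximum suppression (NMS), a standard post-processing step the abstract does not detail. The sketch below is illustrative, not the paper's implementation; boxes and scores are invented.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep highest-scoring boxes, dropping overlaps above the IoU threshold."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of one drone, plus a distinct detection.
kept = nms([(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)],
           [0.9, 0.8, 0.7])
```

The IoU threshold trades duplicate suppression against the risk of merging two genuinely distinct nearby targets, a relevant concern for sense-and-avoid.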
Polygenic risk scores (PRS) now play an important role in predicting overall breast cancer risk by summing the contributions of independent genetic variants influencing the disease. However, PRS models may work better in some ethnic populations than in others, thus requiring population-specific validation. This study evaluates the performance of 140 previously published PRS models in a Thai population, an underrepresented ethnic group. To rigorously evaluate the 140 breast cancer PRS models, we employed generalized linear models (GLMs) combined with a robust evaluation strategy, including 5-fold cross-validation and bootstrap analysis, in which each model was tested across 1,000 bootstrap iterations to ensure the robustness of our findings and to identify models with consistently strong predictive ability. Among the 140 models evaluated, 38 demonstrated robust predictive ability, identified through >163 bootstrap iterations (95% CI: 163.88). PGS004688 exhibited the highest performance, achieving an AUROC of 0.5930 (95% CI: 0.5903-0.5957) and a McFadden's pseudo-R² of 0.0146 (95% CI: 0.0139-0.0153). Women in the 90th percentile of PRS had a 1.83-fold increased risk of breast cancer compared to those within the 30th to 70th percentiles (95% CI: 1.04-3.18). This study highlights the importance of local validation for PRS models derived from diverse populations, demonstrating their potential for personalized breast cancer risk assessment. Model PGS004688, with its robust performance and significant risk stratification, warrants further investigation for clinical implementation in breast cancer screening and prevention strategies. Our findings emphasize the need for adapting and utilizing PRS in diverse populations to provide more accessible public health solutions.
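The bootstrap evaluation described in the abstract can be sketched with a rank-based AUROC and a percentile confidence interval. The scores, labels, seed, and iteration count below are illustrative, not the study's data or exact procedure.

```python
import random

def auroc(scores, labels):
    """Probability a random case outranks a random control (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(scores, labels, iters=1000, seed=0):
    """Percentile 95% CI for AUROC via resampling with replacement."""
    rng = random.Random(seed)
    n = len(scores)
    stats = []
    for _ in range(iters):
        sample = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in sample]
        if 0 < sum(ys) < len(ys):          # need both classes present
            stats.append(auroc([scores[i] for i in sample], ys))
    stats.sort()
    return stats[int(0.025 * len(stats))], stats[int(0.975 * len(stats))]

# Hypothetical PRS values for cases (1) and controls (0).
lo, hi = bootstrap_ci([0.9, 0.8, 0.7, 0.3, 0.2, 0.1], [1, 1, 1, 0, 0, 0])
```

Repeating the statistic over resamples, as the study does over 1,000 iterations, gives a distribution from which both the CI and the "consistently strong" criterion can be read off.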
The authors have declared no competing interest.
This study was funded by the National Science Research and Innovation Fund.
I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.
The details of the IRB/oversight body that provided approval or exemption for the research described are given below:
The study exclusively utilized human data that were initially published in the article with the DOI 10.1007/s10549-021-06152-4, which were provided upon request.
I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.
I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).
I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.
All data produced in the present study are available upon reasonable request to the authors.
Research helps to reduce risk when making a big investment in creating a new product or offering a new service. Knowing your customers also helps to fine-tune marketing campaigns. This way, you can target customers effectively, really honing in on their pain points and offering your organization as a viable solution.
BUFFALO, N.Y. — As artificial intelligence increasingly impacts our daily lives, researchers in the University at Buffalo School of Management have developed a new framework to transform AI chatbots into more intuitive, human-like conversation partners. Forthcoming in AIS Transactions on Human ...
The Division of Intramural Research Programs (IRP) is the internal research division of the NIMH. Over 40 research groups conduct basic neuroscience research and clinical investigations of mental illnesses, brain function, and behavior at the NIH campus in Bethesda, Maryland. ... Self-care looks different for everyone, and it is important to ...
Background: Machine learning (ML) risk prediction models, although much more accurate than traditional statistical methods, are inconvenient to use in clinical practice due to their nontransparency and requirement of a large number of input variables. Objective: We aimed to develop a precise, explainable, and flexible ML model to predict the risk of in-hospital mortality in patients with ST ...
Journal of Applied Ecology publishes impactful research at the intersection of ecological science and the management of biological resources. Abstract Multi-purpose land use is of great importance for sustainable development, particularly in the context of increasing pressures on land to provide ecosystem services (e.g. food, energy) and ...
This study will also help advance research on adjunctive therapies for type 1 diabetes - non-insulin medications that can be used to help manage complications of diabetes. Studies are underway to investigate GLP-1s and SGLT-2s as well as the kidney drug Kerendia (finerenone) as adjunctive therapies.
The integration of wireless communication and navigation markers is a proof to the importance of circuitry and microcontrollers in developing RF markers to enhance navigation. One of the primary objectives of this research is the development of a dynamic vision-based drone detection system for sense and avoid actions.
Polygenic Risk Scores (PRS) are now playing an important role in predicting overall risk of breast cancer risk by means of adding contribution factors across independent genetic variants influencing the disease. However, PRS models may work better in some ethnic populations compared to others, thus requiring populaion specific validation. This study evaluates the performance of 140 previously ...