
Chapter 3 Solving Problems by Searching 

When the correct action to take is not immediately obvious, an agent may need to plan ahead: to consider a sequence of actions that form a path to a goal state. Such an agent is called a problem-solving agent, and the computational process it undertakes is called search.

Problem-solving agents use atomic representations; that is, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms. Agents that use factored or structured representations of states are called planning agents.

We distinguish between informed algorithms, in which the agent can estimate how far it is from the goal, and uninformed algorithms, where no such estimate is available.

3.1 Problem-Solving Agents 

If the agent has no additional information—that is, if the environment is unknown—then the agent can do no better than to execute one of the actions at random. For now, we assume that our agents always have access to information about the world. With that information, the agent can follow this four-phase problem-solving process:

GOAL FORMULATION: Goals organize behavior by limiting the objectives and hence the actions to be considered.

PROBLEM FORMULATION: The agent devises a description of the states and actions necessary to reach the goal—an abstract model of the relevant part of the world.

SEARCH: Before taking any action in the real world, the agent simulates sequences of actions in its model, searching until it finds a sequence of actions that reaches the goal. Such a sequence is called a solution.

EXECUTION: The agent can now execute the actions in the solution, one at a time.

It is an important property that in a fully observable, deterministic, known environment, the solution to any problem is a fixed sequence of actions. Executing that sequence without monitoring the percepts is an open-loop approach, because ignoring the percepts breaks the loop between agent and environment. If there is a chance that the model is incorrect, or the environment is nondeterministic, then the agent would be safer using a closed-loop approach that monitors the percepts.

In partially observable or nondeterministic environments, a solution would be a branching strategy that recommends different future actions depending on what percepts arrive.

3.1.1 Search problems and solutions 

A search problem can be defined formally as follows:

A set of possible states that the environment can be in. We call this the state space.

The initial state that the agent starts in.

A set of one or more goal states. Sometimes there is a single goal state, sometimes a small set of alternative goal states, and sometimes the goal is defined by a property of states rather than an explicit list; we can account for all three of these possibilities by specifying an \(Is\-Goal\) method for a problem.

The actions available to the agent. Given a state \(s\), \(Actions(s)\) returns a finite set of actions that can be executed in \(s\). We say that each of these actions is applicable in \(s\).

A transition model, which describes what each action does. \(Result(s,a)\) returns the state that results from doing action \(a\) in state \(s\).

An action cost function, denoted by \(Action\-Cost(s,a,s')\) when we are programming or \(c(s,a,s')\) when we are doing math, that gives the numeric cost of applying action \(a\) in state \(s\) to reach state \(s'\).

A sequence of actions forms a path, and a solution is a path from the initial state to a goal state. We assume that action costs are additive; that is, the total cost of a path is the sum of the individual action costs. An optimal solution has the lowest path cost among all solutions.
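As a concrete illustration, here is a minimal Python sketch of this problem interface; the class and method names mirror the components above, but the code itself is an assumption, not the book's implementation:

```python
class Problem:
    """Abstract search problem; subclass and override the methods below."""

    def __init__(self, initial, goal_states):
        self.initial = initial                  # the initial state
        self.goal_states = set(goal_states)     # the set of goal states

    def is_goal(self, state):
        # Is-Goal(s): true when s satisfies the goal
        return state in self.goal_states

    def actions(self, state):
        # Actions(s): the finite set of actions applicable in s
        raise NotImplementedError

    def result(self, state, action):
        # Result(s, a): the transition model
        raise NotImplementedError

    def action_cost(self, s, a, s_prime):
        # c(s, a, s'): numeric cost of applying a in s to reach s'
        return 1
```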

The state space can be represented as a graph in which the vertices are states and the directed edges between them are actions.

3.1.2 Formulating problems 

The process of removing detail from a representation is called abstraction. The abstraction is valid if we can elaborate any abstract solution into a solution in the more detailed world. The abstraction is useful if carrying out each of the actions in the solution is easier than the original problem.

3.2 Example Problems 

A standardized problem is intended to illustrate or exercise various problem-solving methods. It can be given a concise, exact description and hence is suitable as a benchmark for researchers to compare the performance of algorithms. A real-world problem, such as robot navigation, is one whose solutions people actually use, and whose formulation is idiosyncratic, not standardized, because, for example, each robot has different sensors that produce different data.

3.2.1 Standardized problems 

A grid world problem is a two-dimensional rectangular array of square cells in which agents can move from cell to cell.

Vacuum world

Sokoban puzzle

Sliding-tile puzzle

3.2.2 Real-world problems 

Route-finding problem

Touring problems

Traveling salesperson problem (TSP)

VLSI layout problem

Robot navigation

Automatic assembly sequencing

3.3 Search Algorithms 

A search algorithm takes a search problem as input and returns a solution, or an indication of failure. We consider algorithms that superimpose a search tree over the state-space graph, forming various paths from the initial state, trying to find a path that reaches a goal state. Each node in the search tree corresponds to a state in the state space and the edges in the search tree correspond to actions. The root of the tree corresponds to the initial state of the problem.

The state space describes the (possibly infinite) set of states in the world, and the actions that allow transitions from one state to another. The search tree describes paths between these states, reaching towards the goal. The search tree may have multiple paths to (and thus multiple nodes for) any given state, but each node in the tree has a unique path back to the root (as in all trees).

The frontier separates two regions of the state-space graph: an interior region where every state has been expanded, and an exterior region of states that have not yet been reached.

3.3.1 Best-first search 

In best-first search we choose a node, \(n\), with the minimum value of some evaluation function, \(f(n)\).

[Figure 3.7]
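A minimal Python sketch of best-first search in this style is given below. It relies on a `Node` structure and an `expand` helper like those described in Section 3.3.2, and keeps a `reached` table of the best node found so far for each state; the code is an assumed formulation, not the text's own listing:

```python
import heapq
import itertools

def best_first_search(problem, f):
    """Expand the frontier node with the lowest f(n); return a goal Node or None."""
    node = Node(problem.initial)
    counter = itertools.count()            # tie-breaker so heapq never compares Nodes
    frontier = [(f(node), next(counter), node)]
    reached = {problem.initial: node}      # best node found so far for each state
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if problem.is_goal(node.state):
            return node
        for child in expand(problem, node):
            s = child.state
            if s not in reached or child.path_cost < reached[s].path_cost:
                reached[s] = child
                heapq.heappush(frontier, (f(child), next(counter), child))
    return None                            # failure
```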

3.3.2 Search data structures 

A node in the tree is represented by a data structure with four components:

\(node.State\): the state to which the node corresponds;

\(node.Parent\): the node in the tree that generated this node;

\(node.Action\): the action that was applied to the parent’s state to generate this node;

\(node.Path\-Cost\): the total cost of the path from the initial state to this node. In mathematical formulas, we use \(g(node)\) as a synonym for \(Path\-Cost\).

Following the \(Parent\) pointers back from a node allows us to recover the states and actions along the path to that node. Doing this from a goal node gives us the solution.
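A minimal Python sketch of this node structure, together with helpers for expanding a node and recovering the solution path, under the assumption that `problem` follows the interface sketched earlier:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the state this node corresponds to
    parent: Optional["Node"] = None  # the node that generated this one
    action: Any = None               # the action applied to the parent's state
    path_cost: float = 0.0           # g(node): total cost from the initial state

def expand(problem, node):
    """Generate the child nodes of `node`."""
    for action in problem.actions(node.state):
        s_prime = problem.result(node.state, action)
        cost = node.path_cost + problem.action_cost(node.state, action, s_prime)
        yield Node(state=s_prime, parent=node, action=action, path_cost=cost)

def solution(node):
    """Follow parent pointers back to the root and return the list of actions."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```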

We need a data structure to store the frontier. The appropriate choice is a queue of some kind, because the operations on a frontier are:

\(Is\-Empty(frontier)\) returns true only if there are no nodes in the frontier.

\(Pop(frontier)\) removes the top node from the frontier and returns it.

\(Top(frontier)\) returns (but does not remove) the top node of the frontier.

\(Add(node, frontier)\) inserts node into its proper place in the queue.

Three kinds of queues are used in search algorithms:

A priority queue first pops the node with the minimum cost according to some evaluation function, \(f\). It is used in best-first search.

A FIFO queue or first-in-first-out queue first pops the node that was added to the queue first; we shall see it is used in breadth-first search.

A LIFO queue or last-in-first-out queue (also known as a stack) pops first the most recently added node; we shall see it is used in depth-first search.
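A quick sketch of how these three queue disciplines are commonly realized in Python, using the standard `heapq` and `collections.deque` modules (the node names are placeholders):

```python
import heapq
from collections import deque

# Priority queue: pops the entry with the minimum f-value (best-first search).
pq = []
heapq.heappush(pq, (5, "node_a"))   # entries are (f-value, node)
heapq.heappush(pq, (3, "node_b"))
assert heapq.heappop(pq) == (3, "node_b")

# FIFO queue: pops the node that was added first (breadth-first search).
fifo = deque(["node_a", "node_b"])
assert fifo.popleft() == "node_a"

# LIFO queue (stack): pops the most recently added node (depth-first search).
lifo = ["node_a", "node_b"]
assert lifo.pop() == "node_b"
```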

3.3.3 Redundant paths 

A cycle is a special case of a redundant path.

As the saying goes, algorithms that cannot remember the past are doomed to repeat it. There are three approaches to this issue.

First, we can remember all previously reached states (as best-first search does), allowing us to detect all redundant paths, and keep only the best path to each state.

Second, we can decide not to worry about repeating the past. We call a search algorithm a graph search if it checks for redundant paths and a tree-like search if it does not check.

Third, we can compromise and check for cycles, but not for redundant paths in general.

3.3.4 Measuring problem-solving performance 

COMPLETENESS: Is the algorithm guaranteed to find a solution when there is one, and to correctly report failure when there is not?

COST OPTIMALITY: Does it find a solution with the lowest path cost of all solutions?

TIME COMPLEXITY: How long does it take to find a solution?

SPACE COMPLEXITY: How much memory is needed to perform the search?

To be complete, a search algorithm must be systematic in the way it explores an infinite state space, making sure it can eventually reach any state that is connected to the initial state.

In theoretical computer science, the typical measure of time and space complexity is the size of the state-space graph, \(|V|+|E|\) , where \(|V|\) is the number of vertices (state nodes) of the graph and \(|E|\) is the number of edges (distinct state/action pairs). For an implicit state space, complexity can be measured in terms of \(d\) , the depth or number of actions in an optimal solution; \(m\) , the maximum number of actions in any path; and \(b\) , the branching factor or number of successors of a node that need to be considered.

3.4 Uninformed Search Strategies 

3.4.1 Breadth-first search 

When all actions have the same cost, an appropriate strategy is breadth-first search , in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.

../_images/Fig3.9.png

Breadth-first search always finds a solution with a minimal number of actions, because when it is generating nodes at depth \(d\), it has already generated all the nodes at depth \(d-1\), so if one of them were a solution, it would have been found.

All the nodes remain in memory, so both time and space complexity are \(O(b^d)\). The memory requirements are a bigger problem for breadth-first search than the execution time. In general, exponential-complexity search problems cannot be solved by uninformed search for any but the smallest instances.
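A minimal sketch of breadth-first search using the structures above, with a FIFO frontier, a `reached` set, and an early goal test applied when a node is generated; this mirrors the standard formulation but is an assumption, not quoted code:

```python
from collections import deque

def breadth_first_search(problem):
    """Return a goal Node reached with the fewest actions, or None on failure."""
    node = Node(problem.initial)
    if problem.is_goal(node.state):
        return node
    frontier = deque([node])           # FIFO queue
    reached = {problem.initial}        # states already reached
    while frontier:
        node = frontier.popleft()
        for child in expand(problem, node):
            s = child.state
            if problem.is_goal(s):     # early goal test: check when a node is generated
                return child
            if s not in reached:
                reached.add(s)
                frontier.append(child)
    return None
```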

3.4.2 Dijkstra’s algorithm or uniform-cost search 

When actions have different costs, an obvious choice is to use best-first search where the evaluation function is the cost of the path from the root to the current node. This is called Dijkstra’s algorithm by the theoretical computer science community, and uniform-cost search by the AI community.

The complexity of uniform-cost search is characterized in terms of \(C^*\), the cost of the optimal solution, and \(\epsilon\), a lower bound on the cost of each action, with \(\epsilon>0\). Then the algorithm’s worst-case time and space complexity is \(O(b^{1+\lfloor C^*/\epsilon\rfloor})\), which can be much greater than \(b^d\).

When all action costs are equal, \(b^{1+\lfloor C^*/\epsilon\rfloor}\) is just \(b^{d+1}\) , and uniform-cost search is similar to breadth-first search.
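Given the `best_first_search` sketch above, uniform-cost search is simply best-first search ordered by path cost, as in this one-line sketch:

```python
def uniform_cost_search(problem):
    # Order the frontier by path cost g(n) alone; this is Dijkstra's algorithm.
    return best_first_search(problem, f=lambda node: node.path_cost)
```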

3.4.3 Depth-first search and the problem of memory 

Depth-first search always expands the deepest node in the frontier first. It could be implemented as a call to \(Best\-First\-Search\) where the evaluation function \(f\) is the negative of the depth.

For problems where a tree-like search is feasible, depth-first search has much smaller needs for memory. A depth-first tree-like search takes time proportional to the number of states, and has memory complexity of only \(O(bm)\) , where \(b\) is the branching factor and \(m\) is the maximum depth of the tree.

A variant of depth-first search called backtracking search uses even less memory.
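A minimal sketch of tree-like depth-first search with a LIFO frontier and no `reached` table, so memory stays at \(O(bm)\) but states may be revisited (an assumed formulation):

```python
def depth_first_tree_search(problem):
    """Tree-like DFS: small memory footprint, but may loop forever on cyclic state spaces."""
    frontier = [Node(problem.initial)]     # LIFO stack: deepest nodes are popped first
    while frontier:
        node = frontier.pop()
        if problem.is_goal(node.state):
            return node
        frontier.extend(expand(problem, node))
    return None
```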

3.4.4 Depth-limited and iterative deepening search 

To keep depth-first search from wandering down an infinite path, we can use depth-limited search, a version of depth-first search in which we supply a depth limit, \(l\), and treat all nodes at depth \(l\) as if they had no successors. The time complexity is \(O(b^l)\) and the space complexity is \(O(bl)\).

[Figure 3.12]

Iterative deepening search solves the problem of picking a good value for \(l\) by trying all values: first 0, then 1, then 2, and so on—until either a solution is found, or the depth-limited search returns the failure value rather than the cutoff value.

Its memory requirements are modest: \(O(bd)\) when there is a solution, or \(O(bm)\) on finite state spaces with no solution. The time complexity is \(O(b^d)\) when there is a solution, or \(O(b^m)\) when there is none.

In general, iterative deepening is the preferred uninformed search method when the search state space is larger than can fit in memory and the depth of the solution is not known.
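A minimal sketch of depth-limited and iterative deepening search; the use of a "cutoff" marker to distinguish "no solution below this limit" from outright failure follows the description above, while the helper names are assumptions:

```python
import itertools

def depth_limited_search(problem, limit):
    """Recursive DFS that treats nodes at depth `limit` as if they had no successors.
    Returns a goal Node, the string "cutoff", or None (failure)."""
    def recurse(node, depth):
        if problem.is_goal(node.state):
            return node
        if depth == limit:
            return "cutoff"
        cutoff_occurred = False
        for child in expand(problem, node):
            result = recurse(child, depth + 1)
            if result == "cutoff":
                cutoff_occurred = True
            elif result is not None:
                return result
        return "cutoff" if cutoff_occurred else None
    return recurse(Node(problem.initial), 0)

def iterative_deepening_search(problem):
    """Run depth-limited search with limits 0, 1, 2, ... until a solution or sure failure."""
    for limit in itertools.count():
        result = depth_limited_search(problem, limit)
        if result != "cutoff":
            return result              # a goal Node, or None if failure is certain
```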

3.4.5 Bidirectional search 

An alternative approach called bidirectional search simultaneously searches forward from the initial state and backwards from the goal state(s), hoping that the two searches will meet.

[Figure 3.14]

3.4.6 Comparing uninformed search algorithms 

[Figure 3.15]

3.5 Informed (Heuristic) Search Strategies 

An informed search strategy uses domain-specific hints about the location of goals to find solutions more efficiently than an uninformed strategy. The hints come in the form of a heuristic function, denoted \(h(n)\):

\(h(n)\) = estimated cost of the cheapest path from the state at node \(n\) to a goal state.

3.5.1 Greedy best-first search 

Greedy best-first search is a form of best-first search that expands first the node with the lowest \(h(n)\) value—the node that appears to be closest to the goal—on the grounds that this is likely to lead to a solution quickly. So the evaluation function is \(f(n)=h(n)\).
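Using the earlier `best_first_search` sketch, greedy best-first search just orders the frontier by the heuristic alone:

```python
def greedy_best_first_search(problem, h):
    # Order the frontier purely by the heuristic estimate h(n).
    return best_first_search(problem, f=lambda node: h(node))
```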

Search Algorithms


A search algorithm is a type of algorithm used in artificial intelligence to find the best or most optimal solution to a problem by exploring a set of possible solutions, also called a search space. A search algorithm filters through a large number of possibilities to find a solution that works best for a given set of constraints.

Search algorithms typically operate by organizing the search space into a particular type of graph, commonly a tree, and evaluate the best score, or cost, of traversing each branch of the tree. A solution is a path from the start state to the goal state that optimizes the cost given the parameters of the implemented algorithm.

Search algorithms are typically organized into two categories:

Uninformed Search: Algorithms that are general purpose traversals of the state space or search tree without any information about how good a state is. These are also referred to as blind search algorithms.

Informed Search: Algorithms that have information about the goal during the traversal, allowing the search to prioritize its expansion toward the goal state instead of exploring directions that may yield a favorable cost but don’t lead to the goal, or global optimum. By including extra rules that aid in estimating the location of the goal (known as heuristics) informed search algorithms can be more computationally efficient when searching for a path to the goal state.

Types of Search Algorithms

There are many types of search algorithms used in artificial intelligence, each with their own strengths and weaknesses. Some of the most common types of search algorithms include:

Depth-First Search (DFS)

This algorithm explores as far as possible along each branch before backtracking. DFS is often used in combination with other search algorithms, such as iterative deepening search, to find the optimal solution. Think of DFS as a traversal pattern that focuses on digging as deep as possible before exploring other paths.

Breadth-First Search (BFS)

This algorithm explores all the neighbor nodes at the current level before moving on to the nodes at the next level. Think of BFS as a traversal pattern that tries to explore broadly across many different paths at the same time.

Uniform Cost Search (UCS)

This algorithm expands the node with the lowest cumulative cost from the start, continuing to explore all possible paths in order of increasing cost. UCS is guaranteed to find the optimal path between the start and goal nodes, as long as the cost of each edge is non-negative. However, it can be computationally expensive when exploring a large search space, as it explores all possible paths in order of increasing cost.

Heuristic Search

This algorithm uses a heuristic function to guide the search towards the optimal solution. A* search, one of the most popular heuristic search algorithms, uses both the actual cost of getting to a node and an estimate of the cost to reach the goal from that node.

Application of Search Algorithms

Search algorithms are used in various fields of artificial intelligence, including:

Pathfinding

Pathfinding problems involve finding the shortest path between two points in a given graph or network. BFS or A* search can be used to explore a graph and find the optimal path.

Optimization

In optimization problems, the goal is to find the minimum or maximum value of a function, subject to some constraints. Search algorithms such as hill climbing or simulated annealing are often used in optimization cases.

Game Playing

In game playing, search algorithms are used to evaluate all possible moves and choose the one that is most likely to lead to a win, or the best possible outcome. This is done by constructing a search tree where each node represents a game state and the edges represent the moves that can be taken to reach the associated new game state.


Best First Search (Informed Search)

In BFS and DFS, when we are at a node, we can consider any of the adjacent nodes as the next node. So both BFS and DFS blindly explore paths without considering any cost function.

The idea of Best First Search is to use an evaluation function to decide which adjacent node is most promising and then explore it.

Best First Search falls under the category of Heuristic Search or Informed Search.

Implementation of Best First Search:

We use a priority queue (heap) ordered by the evaluation function value, so the node with the lowest value is popped first. The implementation is thus a variation of BFS: we just need to change the FIFO queue to a priority queue.

Illustration:

Let us consider the below example:

We start from source “S” and search for goal “I” using the given costs and Best First Search.

  • pq initially contains S.
  • We remove S from pq and add the unvisited neighbors of S to pq. pq now contains {A, C, B} (C is put before B because C has the lesser cost).
  • We remove A from pq and add the unvisited neighbors of A to pq. pq now contains {C, B, E, D}.
  • We remove C from pq and add the unvisited neighbors of C to pq. pq now contains {B, H, E, D}.
  • We remove B from pq and add the unvisited neighbors of B to pq. pq now contains {H, E, D, F, G}.
  • We remove H from pq. Since our goal “I” is a neighbor of H, we return.

Below is the implementation of the above idea:
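The original code listing did not survive extraction, so here is a minimal Python sketch of the idea; the adjacency-list representation and function names are assumptions for illustration:

```python
import heapq

def best_first_search(graph, start, goal):
    """graph: dict mapping each node to a list of (neighbor, cost) pairs."""
    visited = {start}
    pq = [(0, start)]                       # priority queue ordered by the given cost
    while pq:
        cost, node = heapq.heappop(pq)
        if node == goal:
            return True                     # goal reached
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                heapq.heappush(pq, (edge_cost, neighbor))
    return False                            # goal not reachable

# Hypothetical usage with a graph like the one in the illustration:
# graph = {"S": [("A", 3), ("B", 6), ("C", 5)], ...}
# best_first_search(graph, "S", "I")
```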

Analysis:

  • The worst-case time complexity of Best First Search is O(n * log n), where n is the number of nodes: in the worst case we may have to visit all nodes before we reach the goal, and the priority queue is implemented with a min (or max) heap, whose insert and remove operations take O(log n) time.
  • The performance of the algorithm depends on how well the cost or evaluation function is designed.

Special cases of Best first search:

  • Greedy Best first search algorithm
  • A* search algorithm


Search Algorithms in Artificial Intelligence

Search algorithms in artificial intelligence are significant because they provide solutions to many artificial intelligence-related issues. There are a variety of search algorithms in artificial intelligence. The importance and characteristics of search algorithms in AI are covered in this article. Additionally, it will categorize search algorithms in artificial intelligence and list the most often used ones.

Introduction

Search algorithms in AI are algorithms that help solve search problems. A search problem comprises a search space, a start state, and a goal state. By evaluating scenarios and alternatives, search algorithms in artificial intelligence help AI agents reach the goal state.

The algorithms provide search solutions by transforming the initial state to the desired state. Therefore, AI machines and applications can only perform search functions and discover viable solutions with these algorithms.

AI agents make artificial intelligence easy. These agents carry out tasks to achieve a specific objective and plan actions that can lead to the intended outcome. The combination of these actions completes the given task. The AI agents discover the best solution by considering all alternatives or solutions. Search algorithms in artificial intelligence are used to find the best possible solutions for AI agents.

Problem-solving Agents

Building agents with rational behavior is the subject of artificial intelligence research. These agents often use a search algorithm in the background to complete their job. Search techniques are universal problem-solving approaches in Artificial Intelligence. Rational or problem-solving agents mostly use these search strategies or algorithms in AI to solve a particular problem and provide the best result. The goal-based agents are problem-solving agents that use atomic representation. We will learn about different problem-solving search algorithms in AI in this topic.

Search Algorithm Terminologies

  • Search - Searching is the step-by-step process of solving a search problem in a given search space. A search problem is defined by three main factors.
  • Search Space - A search space is the set of potential solutions a system may have.
  • Start State - The state from which the agent starts the search.
  • Goal test - A function that examines the current state and returns whether or not the goal state has been attained.
  • Search tree - A search tree is a tree representation of a search problem. The node at the root of the search tree corresponds to the initial state.
  • Actions - It describes all the steps, activities, or operations accessible to the agent.
  • Transition model - It describes what each action does.
  • Path Cost - A function that assigns a cost to each path.
  • Solution - An action sequence that connects the start node to the target node.
  • Optimal Solution - The solution with the lowest cost among all solutions.

Properties of Search Algorithms

The four important properties of search algorithms in artificial intelligence for comparing their efficiency are as follows:

  • Completeness - A search algorithm is said to be complete if it guarantees to yield a solution for any input, provided at least one solution exists.
  • Optimality - A solution discovered for an algorithm is considered optimal if it is assumed to be the best solution (lowest path cost) among all other solutions.
  • Time complexity - It measures how long an algorithm takes to complete its job.
  • Space Complexity - The maximum storage space required during the search, as determined by the problem's complexity.

These properties are often used to compare the effectiveness of various search algorithms in artificial intelligence.

Importance of Search Algorithms in Artificial Intelligence

The following points explain how and why the search algorithms in AI are important:

  • Solving problems : Using logical search mechanisms, including problem description, actions, and search space, search algorithms in artificial intelligence improve problem-solving. Applications for route planning, like Google Maps, are one real-world illustration of how search algorithms in AI are utilized to solve problems. These programs employ search algorithms to determine the quickest or shortest path between two locations.
  • Search programming : Many AI activities can be coded in terms of searching, which improves the formulation of a given problem's solution.
  • Goal-based agents : Goal-based agents' efficiency is improved through search algorithms in artificial intelligence. These agents look for the most optimal course of action that can offer the finest resolution to an issue to solve it.
  • Support production systems : Search algorithms in artificial intelligence help production systems run. These systems help AI applications by using rules and methods for putting them into practice. Production systems use search algorithms in artificial intelligence to find the rules that can lead to the required action.
  • Neural network systems : The neural network systems also use these algorithms. These computing systems comprise a hidden layer, an input layer, an output layer, and coupled nodes. Neural networks are used to execute many tasks in artificial intelligence. For example, the search for connection weights that will result in the required input-output mapping is improved by search algorithms in AI.

Types of Search Algorithms in AI

We can divide search algorithms in artificial intelligence into uninformed (Blind search) and informed (Heuristic search) algorithms based on the search issues.


Uninformed/Blind Search

The uninformed search does not use domain information, such as proximity to or the location of the goal. It works by brute force because it only contains information on traversing the tree and identifying leaf and goal nodes.

Uninformed search is a method of searching a search tree without knowledge of the search space, such as initial state operators and tests for the objective, and is also known as blind search. It goes through each tree node until it reaches the target node. These algorithms are limited to producing successors and distinguishing between goal and non-goal states.

  • Breadth-first search - This is a search method for a graph or tree data structure. It starts at the tree root (or a search key) and visits all the adjacent nodes at the current depth level before moving on to the nodes at the next depth level. It uses a queue data structure that works on the first-in, first-out (FIFO) principle. It is a complete algorithm, as it returns a solution whenever one exists.


  • Depth-first search - It is also an algorithm used to explore graph or tree data structures. Unlike breadth-first search, it starts at the root node and goes as deep as possible along each branch before backtracking. It is implemented using a stack data structure that works on the concept of last in, first out (LIFO).


  • Uniform cost search (UCS) - Unlike breadth-first and depth-first search, uniform cost search takes the cost of each action into account. When there are multiple paths to the desired objective, its optimal solution is the one with the lowest total cost: it checks the cost of moving to each next node and, when there are multiple paths, chooses the one with the least cost. UCS is complete only if the state space is finite and there are no loops of zero-cost actions, and it is optimal only when no action has a negative cost. When every transition has the same cost, it behaves like breadth-first search.


  • Iterative deepening depth-first search - It performs a depth-first search to level 1, then restarts, completes a depth-first search to level 2, and so on until the answer is found. It only generates a node once all the lower nodes have been produced. It only stores a node stack. The algorithm terminates at depth d when the goal node is found.


  • Bidirectional search - It searches forward from the initial state and backward from the target state until they meet to identify a common state. The route from the initial state is joined to the path from the goal state. Each search only covers half of the entire path.


Informed Search

Informed search algorithms in AI use domain expertise. Problem information is accessible in an informed search, which can guide the search. As a result, informed search strategies are more likely to discover a solution than uninformed ones.

Heuristic search is another name for informed search. A heuristic is a method that, while not always guaranteed to find the best solution, is guaranteed to find a decent solution in a reasonable amount of time. An informed search can answer many complex problems that would be impossible to handle otherwise.

Greedy Search - In greedy search algorithms in AI, the node closest to the target node is expanded next. A heuristic function h(x) determines the closeness factor: h(x) is an estimate of the distance between a node and the end or target node. The closer the node is to the endpoint, the smaller the h(x) value. When the greedy search looks for the best route to the target node, it selects the nodes with the lowest possible values. This algorithm is implemented using a priority queue. It is not an optimal algorithm and can get stuck in loops.

For example, imagine a simple game where the goal is to reach a specific location on a board. The player can move in any direction but walls are blocking some paths. In a greedy search approach, the player would always choose the direction that brings them closer to the goal, without considering the potential obstacles or the fact that some paths may lead to dead ends.

If the chosen path leads to a dead end or a loop, the algorithm will keep moving back and forth between the same nodes, without ever exploring other options. This can result in an infinite loop where the algorithm keeps repeating the same steps and fails to find a solution.

A* Search - A* search combines the strengths of uniform-cost search and greedy search. To find the best path from the starting state to the desired state, the A* search algorithm investigates the potential paths in a graph or grid. At each stage the algorithm scores each potential move using two quantities:

  • the cost of reaching the present node so far, and
  • the approximate cost from the present node to the goal.

A heuristic function is used to estimate the distance between the current node and the desired state. If this heuristic is admissible, it never overestimates the actual cost of achieving the goal.

The path with the lowest overall cost is chosen after an A* search examines each potential route based on the sum of the actual cost and the estimated cost (i.e., the cost so far and the estimated cost-to-go). By doing this, the algorithm is guaranteed to always investigate the most promising path first, which is most likely to lead to the desired state.
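A minimal Python sketch of A* over a weighted graph may make this concrete; the adjacency-list representation and the callable heuristic are assumptions for illustration:

```python
import heapq

def a_star_search(graph, h, start, goal):
    """graph: node -> list of (neighbor, step_cost) pairs; h(node): estimated cost to goal."""
    frontier = [(h(start), 0, start, [start])]   # entries are (f = g + h, g, node, path)
    best_g = {start: 0}                          # cheapest known cost to reach each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                       # goal reached: return the path and its cost
        for neighbor, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                new_f = new_g + h(neighbor)      # actual cost so far plus estimated cost-to-go
                heapq.heappush(frontier, (new_f, new_g, neighbor, path + [neighbor]))
    return None, float("inf")                    # no path found
```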

  • Search algorithms in AI are algorithms that aid in the resolution of search issues. A search issue comprises the search space, start, and goal state.
  • These algorithms are essential because they aid in solving AI problems and support other systems, such as neural networks and manufacturing systems.
  • Search algorithms in artificial intelligence define the problem (initial state, target state, state space, space cost, etc.) and then perform search operations to find the best solution to the given problem.
  • Search algorithms in artificial intelligence are classified into two types: informed algorithms and uninformed algorithms. Breadth-first, depth-first, and uniform-cost search are examples of uninformed algorithms. Greedy search, A* tree search, and A* graph search are examples of informed algorithms.
  • Vehicle routing, nurse scheduling, record retrieval, and industrial processes are some of AI's most common uses of search algorithms.


Understanding Algorithms: The Key to Problem-Solving Mastery


The world of computer science is a fascinating realm, where intricate concepts and technologies continuously shape the way we interact with machines. Among the vast array of ideas and principles, few are as fundamental and essential as algorithms. These powerful tools serve as the building blocks of computation, enabling computers to solve problems, make decisions, and process vast amounts of data efficiently.

An algorithm can be thought of as a step-by-step procedure or a set of instructions designed to solve a specific problem or accomplish a particular task. It represents a systematic approach to finding solutions and provides a structured way to tackle complex computational challenges. Algorithms are at the heart of various applications, from simple calculations to sophisticated machine learning models and complex data analysis.

Understanding algorithms and their inner workings is crucial for anyone interested in computer science. They serve as the backbone of software development, powering the creation of innovative applications across numerous domains. By comprehending the concept of algorithms, aspiring computer science enthusiasts gain a powerful toolset to approach problem-solving and gain insight into the efficiency and performance of different computational methods.

In this article, we aim to provide a clear and accessible introduction to algorithms, focusing on their importance in problem-solving and exploring common types such as searching, sorting, and recursion. By delving into these topics, readers will gain a solid foundation in algorithmic thinking and discover the underlying principles that drive the functioning of modern computing systems. Whether you’re a beginner in the world of computer science or seeking to deepen your understanding, this article will equip you with the knowledge to navigate the fascinating world of algorithms.

What are Algorithms?

At its core, an algorithm is a systematic, step-by-step procedure or set of rules designed to solve a problem or perform a specific task. It provides clear instructions that, when followed meticulously, lead to the desired outcome.

Consider an algorithm to be akin to a recipe for your favorite dish. When you decide to cook, the recipe is your go-to guide. It lists out the ingredients you need, their exact quantities, and a detailed, step-by-step explanation of the process, from how to prepare the ingredients to how to mix them, and finally, the cooking process. It even provides an order for adding the ingredients and specific times for cooking to ensure the dish turns out perfect.

In the same vein, an algorithm, within the realm of computer science, provides an explicit series of instructions to accomplish a goal. This could be a simple goal like sorting a list of numbers in ascending order, a more complex task such as searching for a specific data point in a massive dataset, or even a highly complicated task like determining the shortest path between two points on a map (think Google Maps). No matter the complexity of the problem at hand, there’s always an algorithm working tirelessly behind the scenes to solve it.

Furthermore, algorithms aren’t limited to specific programming languages. They are universal and can be implemented in any language. This is why understanding the fundamental concept of algorithms can empower you to solve problems across various programming languages.

The Importance of Algorithms

Algorithms are indisputably the backbone of all computational operations. They’re a fundamental part of the digital world that we interact with daily. When you search for something on the web, an algorithm is tirelessly working behind the scenes to sift through millions, possibly billions, of web pages to bring you the most relevant results. When you use a GPS to find the fastest route to a location, an algorithm is computing all possible paths, factoring in variables like traffic and road conditions, to provide you the optimal route.

Consider the world of social media, where algorithms curate personalized feeds based on our previous interactions, or in streaming platforms where they recommend shows and movies based on our viewing habits. Every click, every like, every search, and every interaction is processed by algorithms to serve you a seamless digital experience.

In the realm of computer science and beyond, everything revolves around problem-solving, and algorithms are our most reliable problem-solving tools. They provide a structured approach to problem-solving, breaking down complex problems into manageable steps and ensuring that every eventuality is accounted for.

Moreover, an algorithm’s efficiency is not just a matter of preference but a necessity. Given that computers have finite resources — time, memory, and computational power — the algorithms we use need to be optimized to make the best possible use of these resources. Efficient algorithms are the ones that can perform tasks more quickly, using less memory, and provide solutions to complex problems that might be infeasible with less efficient alternatives.

In the context of massive datasets (the likes of which are common in our data-driven world), the difference between a poorly designed algorithm and an efficient one could be the difference between a solution that takes years to compute and one that takes mere seconds. Therefore, understanding, designing, and implementing efficient algorithms is a critical skill for any computer scientist or software engineer.

Hence, as a computer science beginner, you are starting a journey where algorithms will be your best allies — universal keys capable of unlocking solutions to a myriad of problems, big or small.

Common Types of Algorithms: Searching and Sorting

Two of the most ubiquitous types of algorithms that beginners often encounter are searching and sorting algorithms.

Searching algorithms are designed to retrieve specific information from a data structure, like an array or a database. A simple example is the linear search, which works by checking each element in the array until it finds the one it’s looking for. Although easy to understand, this method isn’t efficient for large datasets, which is where more complex algorithms like binary search come in.

Binary search, on the other hand, is like looking up a word in the dictionary. Instead of checking each word from beginning to end, you open the dictionary in the middle and see if the word you’re looking for should be on the left or right side, thereby reducing the search space by half with each step.
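To make the dictionary analogy concrete, here is a small Python sketch of binary search on a sorted list:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2          # look at the middle element
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1                # target must be in the right half
        else:
            high = mid - 1               # target must be in the left half
    return -1

# Example: find 23 in a sorted list of numbers.
print(binary_search([4, 8, 15, 16, 23, 42], 23))   # -> 4
```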

Sorting algorithms, meanwhile, are designed to arrange elements in a particular order. A simple sorting algorithm is bubble sort, which works by repeatedly swapping adjacent elements if they’re in the wrong order. Again, while straightforward, it’s not efficient for larger datasets. More advanced sorting algorithms, such as quicksort or mergesort, have been designed to sort large data collections more efficiently.

Diving Deeper: Graph and Dynamic Programming Algorithms

Building upon our understanding of searching and sorting algorithms, let’s delve into two other families of algorithms often encountered in computer science: graph algorithms and dynamic programming algorithms.

A graph is a mathematical structure that models the relationship between pairs of objects. Graphs consist of vertices (or nodes) and edges (where each edge connects a pair of vertices). Graphs are commonly used to represent real-world systems such as social networks, web pages, biological networks, and more.

Graph algorithms are designed to solve problems centered around these structures; common examples include graph traversals such as breadth-first and depth-first search and shortest-path algorithms.

Dynamic programming is a powerful method used in optimization problems, where the main problem is broken down into simpler, overlapping subproblems. The solutions to these subproblems are stored and reused to build up the solution to the main problem, saving computational effort.

Many classic optimization problems fit this pattern, because they can be decomposed into overlapping subproblems whose solutions can be cached and reused.

Understanding these algorithm families — searching, sorting, graph, and dynamic programming algorithms — not only equips you with powerful tools to solve a variety of complex problems but also serves as a springboard to dive deeper into the rich ocean of algorithms and computer science.

Recursion: A Powerful Technique

While searching and sorting represent specific problem domains, recursion is a broad technique used in a wide range of algorithms. Recursion involves breaking down a problem into smaller, more manageable parts, and a function calling itself to solve these smaller parts.

To visualize recursion, consider the task of calculating the factorial of a number. The factorial of a number n (denoted n!) is the product of all positive integers less than or equal to n. For instance, the factorial of 5 (5!) is 5 x 4 x 3 x 2 x 1 = 120. A recursive algorithm for finding the factorial of n multiplies n by the factorial of n-1. The function keeps calling itself with a smaller value of n each time until it reaches the point where n is equal to 1, at which point it starts returning values back up the chain.
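A short Python sketch of this recursive factorial:

```python
def factorial(n):
    """Compute n! recursively: n! = n * (n-1)!, with 1 as the base case."""
    if n <= 1:
        return 1                 # base case stops the chain of calls
    return n * factorial(n - 1)  # recursive case: shrink the problem by one

print(factorial(5))   # -> 120, i.e. 5 x 4 x 3 x 2 x 1
```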

Algorithms are truly the heart of computer science, transforming raw data into valuable information and insight. Understanding their functionality and purpose is key to progressing in your computer science journey. As you continue your exploration, remember that each algorithm you encounter, no matter how complex it may seem, is simply a step-by-step procedure to solve a problem.

We’ve just scratched the surface of the fascinating world of algorithms. With time, patience, and practice, you will learn to create your own algorithms and start solving problems with confidence and efficiency.



Ideas Made to Matter

How to use algorithms to solve everyday problems

Kara Baskin

May 8, 2017

How can I navigate the grocery store quickly? Why doesn’t anyone like my Facebook status? How can I alphabetize my bookshelves in a hurry? Apple data visualizer and MIT System Design and Management graduate Ali Almossawi solves these common dilemmas and more in his new book, “Bad Choices: How Algorithms Can Help You Think Smarter and Live Happier,” a quirky, illustrated guide to algorithmic thinking.

For the uninitiated: What is an algorithm? And how can algorithms help us to think smarter?

An algorithm is a process with unambiguous steps that has a beginning and an end, and does something useful.

Algorithmic thinking is taking a step back and asking, “If it’s the case that algorithms are so useful in computing to achieve predictability, might they also be useful in everyday life, when it comes to, say, deciding between alternative ways of solving a problem or completing a task?” In all cases, we optimize for efficiency: We care about time or space.

Note the mention of “deciding between.” Computer scientists do that all the time, and I was convinced that the tools they use to evaluate competing algorithms would be of interest to a broad audience.

Why did you write this book, and who can benefit from it?

All the books I came across that tried to introduce computer science involved coding. My approach to making algorithms compelling was focusing on comparisons. I take algorithms and put them in a scene from everyday life, such as matching socks from a pile, putting books on a shelf, remembering things, driving from one point to another, or cutting an onion. These activities can be mapped to one or more fundamental algorithms, which form the basis for the field of computing and have far-reaching applications and uses.

I wrote the book with two audiences in mind. One, anyone, be it a learner or an educator, who is interested in computer science and wants an engaging and lighthearted, but not a dumbed-down, introduction to the field. Two, anyone who is already familiar with the field and wants to experience a way of explaining some of the fundamental concepts in computer science differently than how they’re taught.

I’m going to the grocery store and only have 15 minutes. What do I do?

Do you know what the grocery store looks like ahead of time? If you know what it looks like, it determines your list. How do you prioritize things on your list? Order the items in a way that allows you to avoid walking down the same aisles twice.

For me, the intriguing thing is that the grocery store is a scene from everyday life that I can use as a launch pad to talk about various related topics, like priority queues and graphs and hashing. For instance, what is the most efficient way for a machine to store a prioritized list, and what happens when the equivalent of you scratching an item from a list happens in the machine’s list? How is a store analogous to a graph (an abstraction in computer science and mathematics that defines how things are connected), and how is navigating the aisles in a store analogous to traversing a graph?

Nobody follows me on Instagram. How do I get more followers?

The concept of links and networks, which I cover in Chapter 6, is relevant here. It’s much easier to get to people whom you might be interested in and who might be interested in you if you can start within the ball of links that connects those people, rather than starting at a random spot.

You mention Instagram: There, the hashtag is one way to enter that ball of links. Tag your photos, engage with users who tag their photos with the same hashtags, and you should be on your way to stardom.

What are the secret ingredients of a successful Facebook post?

I’ve posted things on social media that have died a sad death and then posted the same thing at a later date that somehow did great. Again, if we think of it in terms that are relevant to algorithms, we’d say that the challenge with making something go viral is really getting that first spark. And to get that first spark, a person who is connected to the largest number of people who are likely to engage with that post, needs to share it.

With [my first book], “Bad Arguments,” I spent a month pouring close to $5,000 into advertising for that project with moderate results. And then one science journalist with a large audience wrote about it, and the project took off and hasn’t stopped since.

What problems do you wish you could solve via algorithm but can’t?

When we care about efficiency, thinking in terms of algorithms is useful. There are cases when that’s not the quality we want to optimize for — for instance, learning or love. I walk for several miles every day, all throughout the city, as I find it relaxing. I’ve never asked myself, “What’s the most efficient way I can traverse the streets of San Francisco?” It’s not relevant to my objective.

Algorithms are a great way of thinking about efficiency, but the question has to be, “What approach can you optimize for that objective?” That’s what worries me about self-help: Books give you a silver bullet for doing everything “right” but leave out all the nuances that make us different. What works for you might not work for me.

Which companies use algorithms well?

When you read that the overwhelming majority of the shows that users of, say, Netflix, watch are due to Netflix’s recommendation engine, you know they’re doing something right.

Search algorithms in AI are designed to help an agent find a correct solution. A search problem consists of a search space, a start state, and a goal state. By simulating scenarios and alternatives, search algorithms help AI agents find the optimal state for a task.

These algorithms work by transforming the initial state toward the desired goal state. AI machines and applications that rely on them can therefore only be as effective as the algorithms themselves.

AI agents carry out tasks in pursuit of a goal and plan sequences of actions that, taken together, complete the task. An agent finds the best course of action by evaluating the available alternatives, and search algorithms are the standard means of finding that optimal solution.

In artificial intelligence, search techniques are universal problem-solving methods. Rational or problem-solving agents mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents that use atomic representations. In this topic, we will learn various problem-solving search algorithms.

Searching is a step-by-step procedure for solving a search problem in a given search space. A search problem can have three main factors: the search space, the start state, and the goal test. The key terms are:

  • Search space - The set of possible solutions a system may have.
  • Start state - The state from which the agent begins the search.
  • Goal test - A function that observes the current state and returns whether the goal state has been achieved.
  • Search tree - A tree representation of the search problem; the root node corresponds to the initial state.
  • Actions - A description of all the actions available to the agent.
  • Transition model - A description of what each action does.
  • Path cost - A function that assigns a numeric cost to each path.
  • Solution - An action sequence that leads from the start node to the goal node.
  • Optimal solution - A solution with the lowest cost among all solutions.

Following are the four essential properties of search algorithms to compare the efficiency of these algorithms:

Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for the given input.

Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, it is said to be an optimal solution.

Time complexity: Time complexity is a measure of how long an algorithm takes to complete its task.

Space complexity: Space complexity is the maximum storage space required at any point during the search, determined by the complexity of the problem.

Here are some important roles that search algorithms play in AI:

Solving problems: Logical search mechanisms, including describing the problem, the available actions, and the search space, improve problem-solving in AI. Route-planning applications such as Google Maps, for example, use search algorithms to find the fastest or shortest route between two locations by searching through the available alternatives.

Search programming: Many AI tasks can be formulated as search problems, which clarifies how to express the solution to a given problem.

Goal-based agents: Search algorithms improve the efficiency of goal-based agents, which look for the optimal sequence of actions that best resolves a problem.

Supporting production systems: Production systems, which apply rules and the methods for implementing them, use search algorithms to find the rules that lead to the required action.

Neural network systems: Neural network systems, which consist of an input layer, hidden layers, an output layer, and interconnected nodes, also benefit: search algorithms improve the search for the connection weights that achieve the required input-output mapping.

Uninformed search uses no domain knowledge, such as the closeness or location of the goal. It operates in a brute-force way: it only knows how to traverse the tree and how to identify leaf and goal nodes. Because the search tree is explored with no information beyond what the problem definition provides (the initial state, the operators, and the goal test), uninformed search is also called blind search. It examines nodes of the tree one by one until it reaches a goal node.
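As an illustration, the following sketch implements breadth-first search, a typical blind strategy, against the hypothetical SearchProblem interface sketched earlier. It uses nothing beyond the problem definition itself.

```python
from collections import deque

def breadth_first_search(problem):
    """Blind (uninformed) search sketch: expands states in FIFO order using
    only the problem definition. Returns a list of actions reaching a goal
    state, or None if no goal is reachable. Assumes the hypothetical
    SearchProblem interface sketched earlier."""
    start = problem.initial_state
    if problem.is_goal(start):
        return []
    frontier = deque([start])           # FIFO queue of states awaiting expansion
    parents = {start: (None, None)}     # state -> (predecessor, action used to reach it)
    while frontier:
        state = frontier.popleft()
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child in parents:        # already reached via some earlier path
                continue
            parents[child] = (state, action)
            if problem.is_goal(child):
                # Walk back through the parents map to recover the action sequence.
                path, node = [], child
                while parents[node][0] is not None:
                    prev, act = parents[node]
                    path.append(act)
                    node = prev
                return list(reversed(path))
            frontier.append(child)
    return None                         # the goal is unreachable from the start
```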

Informed search algorithms use domain knowledge. In an informed search, problem-specific information is available that can guide the search, so informed strategies can usually find a solution more efficiently than uninformed ones. Informed search is also called heuristic search.

A heuristic is a technique that is not guaranteed to find the best solution, but can be expected to find a good solution in a reasonable time.

Informed search can solve complex problems that could not otherwise be solved in a practical amount of time.

A classic problem tackled with informed search is the traveling salesman problem.
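For comparison with the blind strategy above, here is a sketch of greedy best-first search, one common informed strategy: it always expands the frontier state with the smallest heuristic value. The function and parameter names are illustrative, and it again assumes the hypothetical SearchProblem interface from earlier; `h` is any heuristic supplied by the caller.

```python
import heapq
from itertools import count

def greedy_best_first_search(problem, h):
    """Informed search sketch (greedy best-first): expand the frontier state
    whose heuristic value h(state) is smallest. Fast, but not guaranteed to
    return an optimal solution."""
    tie = count()                               # breaks ties between equal h-values
    start = problem.initial_state
    frontier = [(h(start), next(tie), start)]   # priority queue ordered by h
    parents = {start: (None, None)}             # state -> (predecessor, action)
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if problem.is_goal(state):
            path, node = [], state
            while parents[node][0] is not None:
                prev, act = parents[node]
                path.append(act)
                node = prev
            return list(reversed(path))
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in parents:
                parents[child] = (state, action)
                heapq.heappush(frontier, (h(child), next(tie), child))
    return None
```

Replacing the priority h(child) with path cost plus h(child) would turn this sketch into A* search, the strategy used for the 8-puzzle example later in this document.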






Data Science Central

Using Uninformed & Informed Search Algorithms to Solve 8-Puzzle (n-Puzzle) in Python

Sandipan Dey

  • July 6, 2017 at 3:30 am

This problem appeared as a project in the edX course ColumbiaX: CSMM.101x Artificial Intelligence (AI). In this assignment, an agent will be implemented to solve the 8-puzzle game (and the game generalized to an n × n array).

The following description of the problem is taken from the course:

I. Introduction

An instance of the n-puzzle game consists of a board holding n^2−1 distinct movable tiles, plus an empty space. The tiles are numbered 1, ..., n^2−1. For any such board, the empty space may be legally swapped with any tile horizontally or vertically adjacent to it. In this assignment, the blank space is represented with the number 0. Given an initial state of the board, the combinatorial search problem is to find a sequence of moves that transitions this state to the goal state, that is, the configuration with all tiles arranged in ascending order 0, 1, ..., n^2−1. The search space is the set of all possible states reachable from the initial state. The blank space may be swapped with an adjacent tile in one of the four directions {'Up', 'Down', 'Left', 'Right'}, one move at a time. The cost of moving from one configuration of the board to another is the same and equal to one. Thus, the total cost of a path is equal to the number of moves made from the initial state to the goal state.
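A compact way to represent such a board is a flat tuple of n*n integers with 0 marking the blank. The following sketch (not the course's reference code) generates the legal moves from a given state; it uses one common convention in which the move name is the direction the blank travels, while the course itself defines the exact meaning of each move name.

```python
def neighbors(state, n):
    """Yield (move, next_state) pairs for an n x n sliding puzzle.
    `state` is a tuple of n*n ints with 0 marking the blank.
    Illustrative sketch only."""
    i = state.index(0)                  # position of the blank in the flat tuple
    row, col = divmod(i, n)
    moves = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)}
    for name, (dr, dc) in moves.items():
        r, c = row + dr, col + dc
        if 0 <= r < n and 0 <= c < n:   # stay on the board
            j = r * n + c
            board = list(state)
            board[i], board[j] = board[j], board[i]   # swap blank with adjacent tile
            yield name, tuple(board)
```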

II. Algorithm Review

The searches begin by visiting the root node of the search tree, given by the initial state. Among other book-keeping details, three major things happen in sequence in order to visit a node:

  • First, we remove a node from the frontier set.
  • Second, we check the state against the goal state to determine if a solution has been found.
  • Finally, if the result of the check is negative, we then expand the node. To expand a given node, we generate successor nodes adjacent to the current node, and add them to the frontier set. Note that if these successor nodes are already in the frontier, or have already been visited, then they should not be added to the frontier again.

This describes the life-cycle of a visit, and it is the basic order of operations for search agents in this assignment: (1) remove, (2) check, and (3) expand. The algorithms in this assignment are implemented following exactly this description; a generic version of the loop is sketched below.
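Here is one illustrative way to write that visit loop in Python, assuming the hypothetical SearchProblem interface sketched earlier in this document. With a FIFO frontier it behaves like breadth-first search; popping from the other end of the deque would give depth-first behaviour. It returns the goal state rather than the full path, just to keep the loop short.

```python
from collections import deque

def graph_search(problem):
    """Generic visit loop following the order of operations described above:
    (1) remove a node from the frontier, (2) goal-check it, (3) expand it,
    skipping successors already in the frontier or already visited.
    Sketch only."""
    frontier = deque([problem.initial_state])
    in_frontier = {problem.initial_state}   # fast membership test for the frontier
    explored = set()
    while frontier:
        state = frontier.popleft()               # 1. remove
        in_frontier.discard(state)
        if problem.is_goal(state):               # 2. check
            return state
        explored.add(state)
        for action in problem.actions(state):    # 3. expand
            child = problem.result(state, action)
            if child not in explored and child not in in_frontier:
                frontier.append(child)
                in_frontier.add(child)
    return None
```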

III. What the Program Needs to Output

Example: breadth-first search.

[Figure: example board used for the breadth-first search run]

The output file should contain exactly the following lines:

path_to_goal: ['Up', 'Left', 'Left']
cost_of_path: 3
nodes_expanded: 10
fringe_size: 11
max_fringe_size: 12
search_depth: 3
max_search_depth: 4
running_time: 0.00188088
max_ram_usage: 0.07812500

The following algorithms, taken from the lecture slides of the same course, are going to be implemented.

[Figure: the search algorithms to be implemented, from the course lecture slides]

The following figures and animations show how the 8-puzzle was solved starting from different initial states with different algorithms. For A* and ID-A* search we are going to use the Manhattan heuristic, which is an admissible heuristic for this problem. The figures also display the search paths from the starting state to the goal node (the states with red text denote the path chosen). Let's start with a very simple example. As can be seen, with this simple example all the algorithms find the same path to the goal node from the initial state.
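For reference, the Manhattan heuristic can be computed as in the sketch below, assuming the course's goal configuration 0, 1, ..., n^2−1 with the blank in the first cell; the function name is illustrative, not taken from the assignment code.

```python
def manhattan_distance(state, n):
    """Sum over tiles of the vertical plus horizontal distance from each
    tile's current cell to its goal cell, ignoring the blank (tile 0).
    Admissible for the n-puzzle because each move shifts one tile by one
    cell, so the estimate never exceeds the true remaining cost."""
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue
        row, col = divmod(index, n)
        goal_row, goal_col = divmod(tile, n)   # goal layout is 0, 1, ..., n*n-1
        total += abs(row - goal_row) + abs(col - goal_col)
    return total
```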

Example 1: Initial State: 1,2,5,3,4,0,6,7,8

[Figure: search path from the initial state to the goal for Example 1]

The nodes expanded by BFS (also the nodes that are in the fringe / frontier of the queue) are shown in the following figure:

[Figure: all nodes expanded by BFS for Example 1]

The path to the goal node (as well as the nodes expanded) with ID-A* is shown in the following figure:

[Figure: path to the goal and nodes expanded by ID-A* for Example 1]

Now let's try slightly more complex examples:

Example 2: Initial State: 1,4,2,6,5,8,7,3,0

The path to the goal node with A* is shown in the following figure:

[Figure: A* path to the goal for Example 2]

All the nodes expanded by A* (also the nodes that are in the fringe / frontier of the queue) are shown in the following figure:

[Figure: all nodes expanded by A* for Example 2]

The path to the goal node with BFS is shown in the following figure:

[Figure: BFS path to the goal for Example 2]

All the nodes expanded by BFS are shown in the following figure:

[Figure: all nodes expanded by BFS for Example 2]

Example 3: Initial State: 1,0,2,7,5,4,8,6,3

The path to the goal node with A* is shown in the following figures:

[Figure: A* path to the goal for Example 3]

The nodes expanded by A* (also the nodes that are in the fringe / frontier of the priority queue) are shown in the following figure (the tree is huge; zoom in to view it properly):

[Figure: all nodes expanded by A* for Example 3]

The nodes expanded by ID-A* are shown in the following figure (again the tree is huge; zoom in to view it properly):

[Figure: all nodes expanded by ID-A* for Example 3]

The same problem (with a little variation) also appeared as a programming exercise in the Coursera course Algorithm-I (by Prof. Robert Sedgewick, Princeton). The description of the problem taken from the assignment is shown below (notice that the goal state is different in this version of the same problem):

Write a program to solve the 8-puzzle problem (and its natural generalizations) using the A* search algorithm.

[Figure: sample 8-puzzle boards from the assignment]

  • Hamming priority function. The number of blocks in the wrong position, plus the number of moves made so far to get to the state. Intuitively, a state with a small number of blocks in the wrong position is close to the goal state, and we prefer a state that has been reached using a small number of moves (see the sketch after this list).
  • Manhattan priority function. The sum of the distances (vertical plus horizontal) from the blocks to their goal positions, plus the number of moves made so far to get to the state.

[Figure: worked example of the Hamming and Manhattan priority functions]
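The two priority functions can be sketched as follows. The names and signatures are illustrative rather than the assignment's required API, and `goal` stands for whichever goal board the assignment specifies.

```python
def hamming_priority(state, goal, moves_so_far):
    """Hamming priority: number of tiles out of place (ignoring the blank)
    plus the number of moves made so far. Illustrative sketch only."""
    out_of_place = sum(1 for tile, target in zip(state, goal)
                       if tile != 0 and tile != target)
    return out_of_place + moves_so_far

def manhattan_priority(state, goal, n, moves_so_far):
    """Manhattan priority: summed vertical plus horizontal distance of every
    tile from its position in `goal`, plus the number of moves made so far."""
    goal_pos = {tile: divmod(i, n) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        row, col = divmod(i, n)
        goal_row, goal_col = goal_pos[tile]
        total += abs(row - goal_row) + abs(col - goal_col)
    return total + moves_so_far
```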

(2) The following 15-puzzle is solvable in 6 steps, as shown below:

[Figure: a 15-puzzle instance solvable in 6 moves]


Title: Solving Stochastic Orienteering Problems with Chance Constraints Using Monte Carlo Tree Search

Abstract: We present a new Monte Carlo Tree Search (MCTS) algorithm to solve the stochastic orienteering problem with chance constraints, i.e., a version of the problem where travel costs are random, and one is assigned a bound on the tolerable probability of exceeding the budget. The algorithm we present is online and anytime, i.e., it alternates planning and execution, and the quality of the solution it produces increases as the allowed computational time increases. Differently from most former MCTS algorithms, for each action available in a state the algorithm maintains estimates of both its value and the probability that its execution will eventually result in a violation of the chance constraint. Then, at action selection time, our proposed solution prunes away trajectories that are estimated to violate the failure probability. Extensive simulation results show that this approach can quickly produce high-quality solutions and is competitive with the optimal but time-consuming solution.
Comments: Paper to appear in the IEEE Transactions on Automation Science and Engineering
Subjects: Robotics (cs.RO)
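The abstract only outlines the action-selection idea, so the following is a rough, hypothetical sketch of how pruning by an estimated failure probability might combine with a UCB1-style selection rule. It is not the authors' algorithm, and every name in it is made up for illustration.

```python
import math

def select_action(children, failure_bound, exploration=1.4):
    """Illustrative sketch only: each child action carries a running value
    estimate, a visit count, and an estimated probability of eventually
    violating the chance constraint. Actions whose failure estimate exceeds
    the tolerated bound are pruned, and a standard UCB1 score picks among
    the rest. `children` maps action -> dict with keys 'value', 'visits',
    'fail_prob'; this is a plausible reading of the abstract, not the
    published method."""
    admissible = {a: s for a, s in children.items()
                  if s['fail_prob'] <= failure_bound}
    if not admissible:
        return None                      # every available action looks too risky
    total_visits = sum(s['visits'] for s in admissible.values()) or 1

    def ucb(stats):
        mean = stats['value'] / max(stats['visits'], 1)
        bonus = exploration * math.sqrt(math.log(total_visits) / max(stats['visits'], 1))
        return mean + bonus

    return max(admissible, key=lambda a: ucb(admissible[a]))
```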


