James Borda – Writer, Game and Interaction Designer

Week Six Update

Continuing with my study of general algorithms:

  • Uninformed Search (These are “dumb” methods – using brute force calculation.)
    • Depth-First Search: Begin at the starting node of a decision tree and follow one branch as deep as it goes. Then back up and follow the next branch, continuing until a solution node is found. Not optimal (does not necessarily find the shortest path to the goal node) and only complete if there are no loops in the graph; otherwise it can get caught in a cycle and never find a solution. Potentially very slow if the graph is very deep.
    • Depth-Limited Search: As above, but the algorithm determines an arbitrary maximum depth to search. Not complete – the goal node may be deeper than the maximum depth, and not optimal – returns the first successful path it finds even if there is a shorter path.
    • Iterative Deepening Search: Performs a depth-limited search of depth 1, then depth 2, etc. until it reaches a solution. Complete and optimal in unweighted graphs. It is the preferred algorithm when the depth of the solution is not known. Same time complexity as DFS and DLS – it can potentially search every node in the graph before finding the solution.
    • Breadth-First Search: Search the graph from the root node in order of distance from the root. Different from IDS in that each node that is searched must be stored – so it has a higher space complexity (higher memory requirements). They also differ in how they handle the queue of discovered but unsearched nodes: IDS (like DFS) uses a LIFO (Last-In-First-Out) stack, which plunges deep down one branch before backing up, while BFS uses a FIFO (First-In-First-Out) queue, which sweeps the graph level by level. BFS is complete, and optimal on unweighted graphs. Preferred if the solution is known to be close to the starting node.
    • Bidirectional Search: Performs two breadth-first searches simultaneously, one from the starting node and one from the goal node. When both searches find a common node, the path from start to goal can be reconstructed. Complete and optimal, but only possible if the goal node is known in advance. The advantage is that each search only needs to go half the depth of the graph, so the time and space complexity drop from O(b^d) to O(b^(d/2)) – an exponential saving, not just a halving.
    • Uniform-Cost Search: A method for finding the path with the lowest cost on a weighted graph. From the starting node, evaluate the path costs of each connected edge. Put them in a priority queue from least cost to highest. Follow the path at the front of the queue and again evaluate the costs from that node, adding to the accumulated cost of the current path. Then reorder the priority queue (least to highest cost), pick the lowest cost path, and repeat. The solution is found when the path at the top of the priority queue contains a solution. Optimal and complete if the path costs are non-negative. Time and space complexity are the same as BFS.
  • Informed Search (These methods are more efficient than uninformed search, because they employ heuristics to evaluate the quality of any state in the search space.)
    • Best-First Search: Keeps lists of ‘closed’ (visited) and ‘open’ (discovered but unexpanded) nodes. Open nodes are arranged in a priority queue according to an evaluation function f(n). In the greedy variant, f(n) = h(n), where h(n) is a heuristic function (an educated guess at the cost remaining to the goal); adding g(n), the cost already accumulated from the start, gives f(n) = g(n) + h(n), which is the A* evaluation function. The greedy version always expands the node that currently looks cheapest, which makes it possible to get stuck at a local minimum and overlook the global minimum. Worst-case complexity in both time and space is O(b^m).
    • A* Search:
    • Hill-Climbing Search
    • Simulated Annealing
    • Tabu Search
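To make the uninformed-search notes above concrete, here is a minimal Java sketch of iterative deepening search – a depth-limited DFS run with limit 0, then 1, then 2, and so on. The adjacency-map representation and all the names are my own choices, not code from any particular source.

```java
import java.util.*;

// Iterative deepening search on an unweighted graph, represented as an
// adjacency map (node -> list of neighbors). Returns the depth at which
// the goal is found, or -1 if it is unreachable within maxDepth.
// Because the depth limit bounds the recursion, cycles cannot trap it.
public class IterativeDeepening {

    // Depth-limited DFS: returns the depth of the goal below 'node',
    // or -1 if the goal is not reachable within 'limit' steps.
    static int depthLimited(Map<Integer, List<Integer>> graph,
                            int node, int goal, int limit) {
        if (node == goal) return 0;
        if (limit == 0) return -1;
        for (int next : graph.getOrDefault(node, List.of())) {
            int d = depthLimited(graph, next, goal, limit - 1);
            if (d >= 0) return d + 1;
        }
        return -1;
    }

    // Run depth-limited search with limit 0, 1, 2, ... The first success
    // is the shallowest solution, which is why IDS is optimal here.
    static int search(Map<Integer, List<Integer>> graph,
                      int start, int goal, int maxDepth) {
        for (int limit = 0; limit <= maxDepth; limit++) {
            int d = depthLimited(graph, start, goal, limit);
            if (d >= 0) return d;
        }
        return -1;
    }
}
```

Re-expanding the shallow levels on every iteration looks wasteful, but since each level of a branching tree is larger than all the levels above it combined, the repeated work only adds a constant factor.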
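And a sketch of uniform-cost search with the priority queue described above; again, the weighted adjacency-map representation and the names are my own assumptions, not anyone's reference implementation.

```java
import java.util.*;

// Uniform-cost search on a weighted graph: always expand the frontier
// path with the lowest accumulated cost, kept in a priority queue
// ordered least-cost-first. Returns the cheapest total cost from start
// to goal, or -1 if the goal is unreachable.
public class UniformCost {

    // graph: node -> (neighbor -> non-negative edge cost)
    static double search(Map<Integer, Map<Integer, Double>> graph,
                         int start, int goal) {
        Comparator<double[]> byCost = Comparator.comparingDouble(e -> e[0]);
        PriorityQueue<double[]> queue = new PriorityQueue<>(byCost); // entries: {cost, node}
        Map<Integer, Double> best = new HashMap<>(); // cheapest cost seen per node
        queue.add(new double[]{0.0, start});
        while (!queue.isEmpty()) {
            double[] entry = queue.poll();
            double cost = entry[0];
            int node = (int) entry[1];
            // When the goal first reaches the front of the queue,
            // its accumulated cost is optimal.
            if (node == goal) return cost;
            if (cost > best.getOrDefault(node, Double.MAX_VALUE)) continue; // stale entry
            for (Map.Entry<Integer, Double> e :
                     graph.getOrDefault(node, Map.of()).entrySet()) {
                double next = cost + e.getValue();
                if (next < best.getOrDefault(e.getKey(), Double.MAX_VALUE)) {
                    best.put(e.getKey(), next);
                    queue.add(new double[]{next, e.getKey()});
                }
            }
        }
        return -1;
    }
}
```

Rather than re-sorting the whole queue after each step, a `PriorityQueue` keeps itself ordered as entries are added, which is the usual way the "reorder the priority queue" step is implemented.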

Also consulted with Patrick Hebron about a particular problem with my previous iteration: when a citizen proposes a change in the “Law String,” the voters are chosen purely at random. This means it’s possible for a single ‘voter’ to be consulted multiple times. So we discussed several strategies for making sure that the ‘voters’ are unique.

  1. The first algorithm he suggested was somewhat akin to a Depth-First Search. The idea is to start from the first position of the ‘voters’ array and pick one citizen at random. Then move to the second position, again choose a random citizen, and then check from the beginning of the array to ensure that this voter is not a repeat of a previous random choice. Iterate until all the voters have been chosen. The big problem with this is that its time complexity is quadratic (O(n²) in O-notation), since each new pick requires a scan back over all the earlier picks. This will be a big problem if I am using a large number of voters, which I would like in principle to be able to do (for statistical comparisons). For example, if I have 10,000 voters, this algorithm could take (10,000 × 10,000) = 100,000,000 iterations to complete.
  2. Patrick then suggested I try using the Java interface called ‘Set’ (e.g. the HashSet class), which, unlike an ArrayList, cannot contain duplicates. It includes a method called “add” which “Adds the specified element to this set if it is not already present (optional operation).” Since the set does the duplicate check internally in roughly constant time, that should be a much more efficient way to go.
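A minimal sketch of the Set-based approach, assuming citizens are indexed 0 to population−1; `pickVoters` and the parameter names are my own placeholders, not part of the actual project code.

```java
import java.util.*;

// Pick a panel of unique voters using a HashSet: Set.add() silently
// rejects an element that is already present, so duplicates cost an
// expected O(1) check instead of a linear scan back over the array.
// Overall this is roughly O(k) for k voters, versus O(k^2) before.
public class VoterPicker {

    static Set<Integer> pickVoters(int population, int panelSize, Random rng) {
        if (panelSize > population)
            throw new IllegalArgumentException("not enough citizens");
        Set<Integer> voters = new HashSet<>();
        while (voters.size() < panelSize) {
            // add() is a no-op (returns false) if this citizen was drawn before,
            // so the loop simply redraws until the panel is full
            voters.add(rng.nextInt(population));
        }
        return voters;
    }
}
```

One caveat: this redraw-on-collision approach slows down when the panel size approaches the whole population (most draws become duplicates); in that regime, shuffling the full citizen list and taking the first k entries would be the safer choice.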

Setting up a spatial version of the ABM

I reviewed Repast, but ultimately decided to program the whole thing myself in Processing – since I am new to programming, I feel that a crucial part of this exercise is the programming experience itself.

This may have been a mistake… last night I tried to add a ‘GridSquare’ class to delineate the field the agents move on, and broke the program. I still can’t figure out how to fix it…

More consideration of the fitness function problem

Thinking about changing this to essentially a Tragedy of the Commons simulator – create a resource that agents can harvest and trade, and let the agents vote on how much each agent can extract. And later include tendency to cheat, penalties, etc.


Posted: March 1, 2012