Mobile Robot Lab Home

HUNT Project

Directions to the Lab

Overview | Biological Basis | Computational Model | Simulation Results | References | In Progress | Paper |

   Deception is a common behavior in both animals and humans. We focus on deceptive behavior in robotics because the appropriate use of deception is beneficial in several domains, ranging from the military to more everyday contexts. In this research, novel algorithms for deceptive robot behavior are developed, inspired by the deceptive cache-protection strategies observed in squirrels, and the results are evaluated via simulation studies.

  Squirrels' Deceptive Behaviors in Food Hoarding

   In this research, we focus on the deceptive behavior of squirrels in terms of their food-hoarding strategies. Food hoarding is an important behavior for many animal species, such as birds and rodents. Food-hoarding strategies comprise two main phases: caching the food and protecting it. The deceptive component falls in the food-protection phase.

Figure 1: Black Eastern gray squirrel moving peanuts [9]

A. Cache Formation

   Food-caching activity ranges widely from highly dispersed (scatter hoards) to highly clumped (larder hoards). Scatter hoarders cache a few items in each of many small, scattered caches; larder hoarders, on the other hand, place most of their food in one or a few central locations referred to as middens. The evolution of a species' particular hoarding strategy depends on the ability of individuals to defend their caches against pilfering [1]. Observations indicate that animals use a larder-hoarding strategy when their competitors are conspecifics or are weaker than themselves; however, when potential competitors are heterospecific or stronger adversaries, animals tend to use a scatter-hoarding strategy [1].

B. Cache Protection

   After hoarding food items, animals begin to protect their resources from pilfering by patrolling the caches. Initially, animals move around the caching areas and check whether the cached food items are safe; however, animals generally change their behavior after they have experienced pilfering. Of particular use in this study is an interesting deceptive behavior observed in the food-protection strategy of certain squirrels [2].
   One common food-protection behavior in animals is relocating cached food items. According to Preston's experiments [3,4], after kangaroo rats experienced pilfering by conspecific or heterospecific competitors, they moved their food items to new locations.
   Social context (i.e., the presence or absence of competitors) also appears to be pivotal to the expression of cache-protection behaviors. Deceptive behavior has been observed in tree squirrels with respect to food protection [2]. While patrolling, tree squirrels visit the cache locations and check on their food. However, if potential competitors are present nearby, tree squirrels visit several empty cache locations. This deceptive behavior attempts to confuse competitors about the food's location, thereby protecting against the loss of the hoarded food. After the potential competitors leave the territory, the tree squirrels relocate their stored food items if pilfering has occurred.

  Computational Model

   A bio-inspired behavior-based model [5] of squirrel caching and protecting behaviors for application to robotic systems is now presented. Simulation studies were performed in MissionLab, a software package developed by the Mobile Robot Laboratory at Georgia Tech [6]. MissionLab provides a graphical user interface that enables users to easily specify behavioral states and the control transitions between them, yielding a finite state acceptor (FSA), which can then be compiled down to executable code for both simulations and robots. The caching behaviors created for this project are combined with pre-existing behaviors such as avoiding obstacles, moving toward an object, and injecting randomness (noise).
   Like the squirrel's behavior, the model consists of two main parts: caching behavior and patrolling (protecting) behavior. The simulation is based on interactions between two robotic agents: a squirrel robot (resource storer) and a competitor robot (resource pilferer). Figure 2(a) illustrates the high-level model.

Figure 2: (a) High-level FSA: caching behaviors of squirrels, (b) sub-FSA: food hoarding, and (c) sub-FSA: food patrolling

A. Caching Strategy

   Many groups, including ours [7], have studied foraging behavior in robotics. In the caching simulation, one robot is required to store the scattered resources in safe locations. The caching sub-FSA (Fig. 2b) consists of several states and triggers. First, the robot wanders around searching for food items. When the robot detects a food item during foraging, it picks the item up. The robot then selects a place to cache the item based on a pre-defined probability distribution. After selecting a specific caching place from among several choices, the robot moves to that location and drops the item there. The robot repeats this strategy until the “enough food cached” trigger is activated.
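As a sketch, the cache-site selection step described above can be implemented as sampling from a pre-defined distribution. The site names and weights below are illustrative assumptions, not values from the model:

```python
import random

# Hypothetical pre-defined distribution over candidate caching places
# (names and weights are illustrative, not taken from the model).
CACHE_SITES = {"site_A": 0.5, "site_B": 0.3, "site_C": 0.2}

def select_cache_site(sites=CACHE_SITES, rng=random):
    """Pick a caching place by sampling the pre-defined distribution."""
    r = rng.random()
    cumulative = 0.0
    for site, p in sites.items():
        cumulative += p
        if r < cumulative:
            return site
    return site  # guard against floating-point rounding
```

Each time the robot picks up a food item, one call to `select_cache_site` chooses the destination cache before the robot moves there and drops the item.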

B. Protecting Strategy

   After caching is complete, the robot begins to move between the caching locations to patrol the resources. The robot's behaviors include goal-oriented movement, selecting places, and waiting (Fig. 2c).
   Initially, the robot employs the true patrolling strategy when the “select true location” trigger is activated. This trigger calculates which of the caching locations the robot should patrol in the current step. The calculation is a random selection based on the transition probabilities among the places. Probabilistic transitions between behavioral states have previously been used to successfully model wolf pack predation [8]. Transition probabilities are determined by the number of cached items: if a place holds more items, the probability of visiting it is higher.
   In each state, the next patrol state is determined from these transition probabilities. The system generates a random number and selects the next location according to the range in which that number falls. When the squirrel robot detects the presence of a competitor, deceptive behavior is triggered and the squirrel robot patrols the false (empty) caching locations to deceive the competitor. The selection of deceptive locations is also calculated from transition probabilities; here, the transition probabilities among the false locations are set to a uniform distribution (fig. 3(d)). These are not based on ethological observations as they were in the wolf pack case [8], as that data is unfortunately not available.
   In each patrolling state, the robot goes to the cache and remains there for a finite amount of time, determined by the number of food items in that place: if a place contains n food items, the robot stays there for n seconds. At the end of the waiting phase, the robot selects the next patrolling location based on the transition probabilities and moves to the next patrolling state.
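The patrolling logic above can be sketched as follows: count-proportional transition probabilities for true patrolling, uniform selection over the false locations in the deceptive mode, and an n-second dwell time per cache. The location names and item counts are illustrative assumptions:

```python
import random

def transition_probs(item_counts):
    """Transition probabilities proportional to cached-item counts."""
    total = sum(item_counts.values())
    return {loc: n / total for loc, n in item_counts.items()}

def next_location(item_counts, deceptive=False, rng=random):
    """Sample the next patrol location.

    True patrolling weights locations by item count; deceptive
    patrolling draws uniformly over the (empty) false locations.
    """
    if deceptive:
        return rng.choice(list(item_counts))
    r, cum = rng.random(), 0.0
    for loc, p in transition_probs(item_counts).items():
        cum += p
        if r < cum:
            return loc
    return loc  # guard against floating-point rounding

def dwell_time(item_counts, loc):
    """A cache holding n items is patrolled for n seconds."""
    return item_counts[loc]
```

A patrolling loop would alternate `next_location` and `dwell_time`, switching `deceptive=True` (with the false locations as keys) whenever a competitor is detected.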

C. Competitor Robot Behavior

   The competitor robot has a simple mechanism in the current scenario: it wanders around the map trying to find the squirrel robot. When it detects the squirrel robot, it determines whether the squirrel is at a potential caching location. To recognize a caching area, the competitor robot observes how long the squirrel robot stays in place. Since the squirrel robot patrols each caching place for a time proportional to the number of food items there, the competitor robot can gather evidence of caching areas from the duration of the squirrel robot's stay. If the duration exceeds a manually set threshold, the “detect caching area” trigger is activated; the competitor robot then goes to that location and remains there until pilfering is complete, with the duration of pilfering determined by the number of cached items. If the duration is below the threshold, the competitor concludes that the squirrel robot's current location is not a true cache, returns to the “wander” state, and repeats the detection process.
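The competitor's decision rule reduces to a simple threshold test on the observed stay duration. The threshold value below is an illustrative assumption; the model only states that it is set manually:

```python
# Stay-duration threshold in seconds; an illustrative assumption,
# since the model sets this value manually.
STAY_THRESHOLD = 5.0

def competitor_step(observed_stay, threshold=STAY_THRESHOLD):
    """Competitor's decision: a stay longer than the threshold is taken
    as evidence of a true cache ('pilfer'); otherwise keep wandering."""
    return "pilfer" if observed_stay > threshold else "wander"
```

Because dwell time equals the number of cached items, this rule effectively classifies any cache holding more than `threshold` items as a pilfering target.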

  Simulation Results

   A simple scenario of the squirrel-like deceptive behavior was simulated in MissionLab. The simulation environment is shown in figure 3. Yellow-colored food items were randomly placed around the map; in this simulation, the robot detects these food items by discriminating colors. Three caching places and three empty places were chosen arbitrarily.
   First, the robot finds a food item and stores it in one of the pre-defined caching places, as shown in figure 3(a). When the number of cached items exceeds a threshold for any of the caches, the robot switches to cache protection. If no competitor is present, it patrols the true caching locations (fig. 3b). Otherwise, the deceptive patrolling strategy is activated, and the robot moves to the empty caching places (fig. 3c).

Figure 3(a): Caching Strategy

Figure 3(b): True Patrolling Strategy

Figure 3(c): Deceptive Patrolling Strategy

   To evaluate the approach, we measured the time until the competitor robot detects the true caching places and begins pilfering. The same scenarios without deceptive behaviors formed the baseline; comparing the baseline results to the measured times when deception is active serves as an evaluation of its effectiveness. The simulation was run 10 times per condition (with and without deceptive behaviors).
   Table 1 and Table 2 show the simulation results for all trials. The average time to successful pilferage when the squirrel robot includes deceptive behavior is 10.5 minutes (std: 2.65), compared to 7.74 minutes (std: 2.85) without deception. Student's t-test yielded a p-value of 0.039 (< 0.05), a significant difference between the two conditions.
   It can therefore be concluded that deceptive behavior significantly affects the robot's performance: with deceptive behaviors, the squirrel robot protects its resources longer and performs significantly better than without them.
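As a consistency check, the t statistic behind the reported p-value can be reconstructed from the summary statistics above, assuming a pooled-variance two-sample Student's t-test over the 10 runs per condition:

```python
import math

def two_sample_t(mean1, sd1, n1, mean2, sd2, n2):
    """Pooled-variance Student's t statistic for two independent samples."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Reported summary statistics: 10.5 min (std 2.65) with deception,
# 7.74 min (std 2.85) without, n = 10 runs per condition.
t = two_sample_t(10.5, 2.65, 10, 7.74, 2.85, 10)  # t ≈ 2.24 with df = 18
```

With 18 degrees of freedom, a t statistic of about 2.24 corresponds to a two-tailed p-value near 0.04, consistent with the reported 0.039.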

Table 1: Time duration until the competitor successfully pilfers resources (a) with deceptive behaviors and (b) without deceptive behaviors. (Measurements given in minutes.)

Table 2: Average time to pilferage with deceptive behaviors and without deceptive behaviors.


[1] F. Gerhardt, "Food Pilfering in Larder-Hoarding Red Squirrels (Tamiasciurus hudsonicus)," Journal of Mammalogy, 2005.
[2] M. A. Steele et al., "Cache Protection Strategies of a Scatter-Hoarding Rodent: Do Tree Squirrels Engage in Behavioural Deception?" Animal Behaviour, 2008.
[3] S. D. Preston and L. F. Jacobs, "Conspecific Pilferage but Not Presence Affects Merriam's Kangaroo Rat Cache Strategy," Behavioral Ecology, 2001.
[4] S. D. Preston and L. F. Jacobs, "Cache Decision Making: The Effects of Competition on Cache Decisions in Merriam's Kangaroo Rat," Journal of Comparative Psychology, 2005.
[5] R. C. Arkin, Behavior-Based Robotics, MIT Press, 1998.
[6] D. MacKenzie, R. Arkin, and J. Cameron, "Multiagent Mission Specification and Execution," Autonomous Robots, 1997.
[7] T. Balch and R. C. Arkin, "Communication in Reactive Multiagent Robotic Systems," Autonomous Robots, 1994.
[8] J. Madden, R. C. Arkin, and D. McNulty, "Multi-robot System Based on Model of Wolf Hunting Behavior to Emulate Wolf and Elk Interactions," Proc. IEEE International Conference on Robotics and Biomimetics, 2010.
[9] Wikipedia.

  In Progress

We are currently working on developing and running real robot experiments.


Several videos of simulation runs:

Simulation: Squirrel Robot's Caching Strategy
Simulation: Squirrel Robot's Patrolling Strategy with/without Deceptive Behaviors

Video of the robot experiment:

Robot Experiment (.m4v format)    (.mov format)


The paper for this project can be found here