Steering Behaviours
Robin Green (rgreen@eaeurope.com)
R&D Programmer, Bullfrog Productions Ltd
This tutorial introduces steering behaviours, a technique for giving moving objects in your game or simulation a degree of autonomy.
Path finding through a 2D grid of different ground types is pretty much a solved problem, but to add more realism in games like Dungeon Keeper 2 and ThemePark World, we wanted to allow characters to be involved in crushes at doorways and for them not to walk through each other. After reading a paper written by Craig Reynolds called "Steering Behaviours for Autonomous Characters" we decided to implement and adapt the system to our own needs. Various other people have used the same techniques in their games, including Jim Van Verth and John O'Brien from Red Storm who extended the techniques for formation movement in their strategy tank game "Force 21".
The behaviours themselves are simple to code but in combination can produce complex AI, and the resulting techniques can be used in both 2D and 3D with equal success. The discussion below first introduces the system, goes into detail of each behaviour, looks at strategies for combining the behaviours successfully to achieve a desired effect and finally covers some of the difficulties of using steering behaviours in a full production game.
Before we start, here are some definitions:
- Vehicle
- The abstract representation of your moving object. It encapsulates information on position, speed, orientation, etc.
- Pilot
- The AI code that steers the vehicle through your world. A way of letting us talk about the decision making process as opposed to the internal representations.
- Behaviour
- A small piece of code that makes some decisions based on the current state and ultimately generates a steering force vector.
First we're going to define a vehicle. Every moving object in the world is given a position in space and a velocity:
```cpp
class Vehicle {
public:
    void update();
public:
    BehaviourList list;
    Vector2d pos;
    Vector2d vel;
};
```

To update the position of an object, we first update the velocity with a steering force and then add the velocity to the position. The idea that new_position = old_position + velocity is a well understood one and comes under the heading of Euler integration. Sure, it's not a very stable way of integrating forces over time, but the aim of the system is merely to generate movement so there's no need for anything more complicated than this:
```cpp
void Vehicle::update() {
    Vector2d force(0,0);            // force accumulator
    BehaviourList::iterator i;
    // iterate through all behaviours
    for(i = list.begin(); i != list.end(); ++i) {
        force += i->calculate();    // sum the forces
    }
    vel += force;                   // update velocity
    pos += vel;                     // update position
}
```

In the above code we've got a list of "behaviour objects" in an STL container, each of which has a calculate() member function. That's just one way of structuring the code and you may wish to structure it differently, but for the purposes of discussion it's a starting point. We can now make modifications to this system that allow more control by constraining the forces. Say we give the Pilot some more parameters - a mass, a maximum force (analogous to an engine size) and a maximum speed. Now we can rewrite the integration to take these into account:
```cpp
class Vehicle {
public:
    void update();
public:
    BehaviourList list;
    float mass;
    Vector2d pos;
    Vector2d vel;
    float max_force;
    float max_speed;
};

void Vehicle::update() {
    Vector2d force(0,0);            // force accumulator
    BehaviourList::iterator i;
    for(i = list.begin(); i != list.end(); ++i) {
        force += i->calculate();    // sum the forces
    }
    Vector2d steering = truncate(force, max_force);  // truncate force to engine size
    Vector2d acc = steering / mass;                  // acceleration = force/mass
    vel = truncate(vel + acc, max_speed);            // update velocity
    pos += vel;                                      // update position
}
```
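To make the integration concrete, here's a minimal, self-contained sketch of the truncated Euler step. The Vec2 struct and the truncate() and euler_step() names here are mine for illustration, standing in for the article's Vector2d class and Vehicle::update():

```cpp
#include <cassert>
#include <cmath>

// Minimal 2D vector for illustration (the article's Vector2d is assumed similar).
struct Vec2 {
    float x, y;
    Vec2 operator+(Vec2 o) const { return {x + o.x, y + o.y}; }
    Vec2 operator*(float s) const { return {x * s, y * s}; }
    float length() const { return std::sqrt(x * x + y * y); }
};

// Clamp a vector's length to max_len, preserving its direction.
Vec2 truncate(Vec2 v, float max_len) {
    float len = v.length();
    if (len > max_len && len > 0.0f)
        return v * (max_len / len);
    return v;
}

// One Euler step: force -> acceleration -> velocity -> position.
void euler_step(Vec2 &pos, Vec2 &vel, Vec2 force,
                float mass, float max_force, float max_speed) {
    Vec2 steering = truncate(force, max_force);  // engine size limit
    Vec2 acc = steering * (1.0f / mass);         // acceleration = force/mass
    vel = truncate(vel + acc, max_speed);        // speed limit
    pos = pos + vel;
}
```

Notice that a huge requested force still only produces max_force worth of acceleration, and the velocity saturates at max_speed no matter how long the force is applied.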
This code above makes use of a C++ 2D vector class that simplifies the calculations by allowing anonymous objects to be used as partial results in complex expressions. This is all basic C++ operator overloading, the efficiency of which in games is a subject in itself (don't start me on it, you won't like it. :-)
This definition is for a 2D vehicle which is the area I will be covering in this tutorial, but extending the technique into 3D is not complicated. Positions and velocities are defined as 3D vectors and updating the system proceeds much the same. The only complications appear when we add a couple of vectors to track the local coordinate space of the vehicle to facilitate conversions to and from local and world spaces. More on this later.
This is just the simplest vehicle you can imagine but more general vehicles can be used. Steering behaviours only describe where a vehicle should be going and how fast. They say nothing about how the vehicle should get there - you're free to use wheeled vehicles (as in Force21), legged models, aerodynamic flight models, sequenced motion capture data or whatever you fancy.
I guess this is what you're here for. In Reynolds' paper he covers a number of basic movement behaviours - seek, pursuit, arrival, obstacle avoidance, etc. - which I'll cover here with added commentary on why each works as well as how to code it efficiently.
Seek and Flee
Two very simple behaviours that move a vehicle towards or away from a target position at maximum speed. In the case of seek, when the vehicle reaches the target it will orbit the target like a moth buzzing around a light bulb (as opposed to a planet revolving around a star). It looks especially good if the target moves intermittently.

```cpp
Vector2d Seek::calculate(const Vector2d &target) {
    desired_velocity = normalise(target - pos) * max_speed;
    return desired_velocity - vel;
}

Vector2d Flee::calculate(const Vector2d &target) {
    desired_velocity = normalise(pos - target) * max_speed;
    return desired_velocity - vel;
}
```

The difference between seek and flee is that seek moves towards a target and flee moves away - the desired_velocity is negated. Also notice that normalise takes a square root - we'll get to efficiency later on.
So what's going on here? Starting with seek, remember that the only mechanism we have for controlling a character is to change the velocity by adding together acceleration forces. First we work out the velocity we would need to reach the target - this is the desired_velocity, which is mis-typed in the paper as pos-target. That would be a vector from the target to the vehicle, not the other way around (remember kids, a vector from somewhere to a target is always to-minus-from). We scale this w.r.t. our maximum speed, obeying the rules we've given ourselves so we can't change direction faster than max_force allows, and we return the difference between our desired velocity and our current velocity. Why the difference? Well, to change the velocity of a character correctly we need to generate a force that, when added to our current velocity, gives us our desired end result. We must calculate an acceleration, and that is a difference between two velocities. Here's a vector diagram:
Figure 1: vector diagram of seek.

Let's look at this a little closer. Initially we have a velocity of zero, so at the first iteration our velocity will be set to max_force, and a few iterations later we'll hit max_speed (given that max_force is less than max_speed). Later on, if (target-pos).length()==0.0 then we've reached our target and need to stop, so the routine will return -vel, giving us a zero velocity at the next iteration. Where does the buzzing-around-the-light behaviour come from? It comes from our settings for max_speed and max_force. If you set max_force to be less than max_speed, it'll take several iterations to actually come to a stop and by that time you'll have overshot your target point. Keep this kind of thing in mind - in this world, reaching targets is a matter of approximation and "close enough" tests. You can never truly test for (pos==target && vel.length()==0.0).
The turning circle is also governed by max_force. The smaller the force is compared to the maximum speed, the larger the turning circle will be, so the ratio of max_force to max_speed controls several things:
- time it takes to get to max_speed.
- turning circle radius (rate of change of direction).
- stopping speed (rate of change of speed).
- accuracy of hitting targets.
Tweaking these values is what gives the system much of its "life".
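To see these ratios in action, here's a small self-contained simulation of seek (the V struct and helper names are stand-ins for the article's Vector2d). Iterating the step drives a vehicle towards the target and then leaves it oscillating in a "close enough" zone whose size depends on max_force versus max_speed:

```cpp
#include <cassert>
#include <cmath>

struct V { float x, y; };
static V add(V a, V b) { return {a.x + b.x, a.y + b.y}; }
static V sub(V a, V b) { return {a.x - b.x, a.y - b.y}; }
static V scale(V a, float s) { return {a.x * s, a.y * s}; }
static float len(V a) { return std::sqrt(a.x * a.x + a.y * a.y); }
static V trunc_to(V v, float m) {
    float l = len(v);
    return (l > m && l > 0.0f) ? scale(v, m / l) : v;
}

// One seek step: desired velocity minus current velocity, clamped by max_force.
void seek_step(V &pos, V &vel, V target, float max_speed, float max_force) {
    V to_target = sub(target, pos);
    float d = len(to_target);
    V desired = (d > 0.0f) ? scale(to_target, max_speed / d) : V{0, 0};
    V force = trunc_to(sub(desired, vel), max_force);
    vel = trunc_to(add(vel, force), max_speed);
    pos = add(pos, vel);
}
```

With max_force well below max_speed the vehicle can't stop dead, so it ends up oscillating (or orbiting) within a few units of the target rather than sitting exactly on it - precisely the moth-around-a-light effect described above.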
Pursuit
This behaviour introduces predictive targets. Given a moving target to intercept, the character first predicts where the target will be and steers towards that position:

```cpp
Vector2d Pursuit::calculate(const Pilot &target) {
    float dist = (target.pos - pos).length();
    Vector2d target_pos = target.predict(dist / max_speed);
    desired_velocity = normalise(target_pos - pos) * max_speed;
    return desired_velocity - vel;
}
```

As usual, this is more subtle than it looks. How do we define predict()? Well, there are many ways, each one being a "pursuit strategy", which is the sort of thing lecturers at robotics conferences agonise over. The simplest strategy to code is to assume that the target vehicle is moving with a constant velocity and isn't going to turn before you get there, so predict() would go something like:
```cpp
Vector2d Pilot::predict(float time) {
    return pos + time * vel;
}
```

Reynolds goes into a fair amount of detail about how to create a good hunting predictor, and describes a routine that takes into account the distance between the two vehicles and their relative headings. If you're in front of your prey and it's facing towards you, head for its exact position (the same as predict(0.0)); otherwise scale your prediction time upwards w.r.t. the time it'll take you to turn. This turning time will be proportional to the dot product between your two facing directions, and your facing direction is normalise(vel) if you've got a non-zero velocity. If you do have a zero velocity then that technique won't work, so we're going to have to add a facing direction to our Pilot class:
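A quick worked sketch of the constant-velocity predictor: with the target 10 units away and a pursuer max_speed of 5, the estimated travel time is 2 time units, so the aim point is the target's position plus two time units of its velocity. The P struct and function names here are illustrative, not the article's:

```cpp
#include <cassert>
#include <cmath>

struct P { float x, y; };

// Linear prediction: assume the target keeps its current velocity.
P predict(P pos, P vel, float time) {
    return {pos.x + vel.x * time, pos.y + vel.y * time};
}

// Pursuit picks its aim point ahead of the target by the estimated travel time,
// which is simply distance-to-target divided by our maximum speed.
P pursuit_aim(P self_pos, float max_speed, P target_pos, P target_vel) {
    float dx = target_pos.x - self_pos.x;
    float dy = target_pos.y - self_pos.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    return predict(target_pos, target_vel, dist / max_speed);
}
```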
```cpp
class Vehicle {
public:
    Vector2d predict();
    void update();
public:
    BehaviourList list;
    float mass;
    Vector2d pos;
    Vector2d vel;
    float max_force;
    float max_speed;
    Vector2d facing;
};
```

If we're adding a facing direction, we may as well go the whole hog and produce a local frame of reference for the object so we can transform objects to and from a local coordinate space. In 2D a local space is defined by a pair of orthogonal unit length vectors, one in the facing direction and one pointing to the left (mathematics rotates anti-clockwise - a right handed coordinate system), in effect a 2x2 matrix defining a transform from local to world space. This will become useful later for optimising some behaviours:
```cpp
    ...
    float max_force;
    float max_speed;
    Vector2d local_xaxis;
    Vector2d local_yaxis;
};
```

Updating the local space axes only takes place when we have a non-zero velocity, which we can test in length-squared space to save a square root. At this point we also define that, in local space, all our Pilots face down the +x axis. In 2D this code looks like:
```cpp
void update_local_space() {
    const float minimum_velocity_sq = 0.001f;
    float lensq = vel.lengthsq();                  // find length squared
    if(lensq >= minimum_velocity_sq) {
        local_xaxis = vel * (1.0f / sqrt(lensq));  // in effect normalise(vel)
        local_yaxis = perp(local_xaxis);           // find perpendicular vector
    }
}
```

We also have to remember to give vehicles an initial facing direction in the constructor and copy it in the copy constructor - never let a value go undefined, that would be Bad. We'll also need some operations using this local space:
```cpp
Vector2d Vehicle::vector_local_to_world(const Vector2d &v);
Vector2d Vehicle::vector_world_to_local(const Vector2d &v);
Vector2d Vehicle::point_local_to_world(const Vector2d &v);
Vector2d Vehicle::point_world_to_local(const Vector2d &v);
```

Extending this system into 3D means we have to take a cross product with an arbitrary "approximate up vector" to get a z-axis, then cross the normalised vel with this z-axis to get the y-axis. But what if you happen to be facing exactly along the approximate up vector? The cross product would return a zero length vector, and we're screwed. If this is a problem for you then you're using a fully orientable 3D system, and you really should be using quaternions to encode your orientation... but that's a different story.
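Here's one possible sketch of those point transforms in plain 2D, building an explicit frame from the velocity as described above. The Frame struct and out-parameter style are my own choices, not the article's API; points get the origin subtracted and are dotted with the axes, while the inverse rebuilds the world position from the axes:

```cpp
#include <cassert>
#include <cmath>

// A 2D local frame: x-axis = facing direction, y-axis = its anticlockwise perpendicular.
struct Frame {
    float ox, oy;   // origin (vehicle position)
    float xx, xy;   // local x-axis expressed in world space (unit length)
    float yx, yy;   // local y-axis expressed in world space (unit length)
};

// perp() rotates 90 degrees anticlockwise: (x,y) -> (-y,x).
Frame make_frame(float px, float py, float vx, float vy) {
    float len = std::sqrt(vx * vx + vy * vy);
    float fx = vx / len, fy = vy / len;
    return {px, py, fx, fy, -fy, fx};
}

// Points subtract the origin, then dot with each axis.
void point_world_to_local(const Frame &f, float wx, float wy, float &lx, float &ly) {
    float dx = wx - f.ox, dy = wy - f.oy;
    lx = dx * f.xx + dy * f.xy;   // dot with x-axis
    ly = dx * f.yx + dy * f.yy;   // dot with y-axis
}

// The inverse: weight the axes by the local coordinates and add the origin.
void point_local_to_world(const Frame &f, float lx, float ly, float &wx, float &wy) {
    wx = f.ox + lx * f.xx + ly * f.yx;
    wy = f.oy + lx * f.xy + ly * f.yy;
}
```

A vehicle at (10,0) travelling straight up sees the world point (10,5) as (5,0) in local space: five units dead ahead along its +x axis, as the convention above demands.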
Arrival
The problem with seek is that you are usually travelling at full speed when you arrive at the position you're travelling towards. Arrival is a behaviour that slows to a stop when reaching a stationary target (if you do an arrival to a moving target then, depending on the relative max_speeds of the vehicles, you may or may not reach the target).

```cpp
Vector2d Arrival::calculate(const Pilot &target) {
    const float decel_time = 10.0f;
    Vector2d target_offset = target.pos - pos;
    float dist = target_offset.length();
    float speed = dist / decel_time;
    float clipped_speed = min(speed, max_speed);
    Vector2d desired_vel = target_offset * (clipped_speed / dist);
    return desired_vel - vel;
}
```

Explanation: this is in effect a seek that has its speed related to the distance from the target. The crucial line is where speed is calculated:
float speed = distance / decel_time;You're basically calculating a speed where, if you wanted to get to the target in decel_time units of time, how fast would you have to go? When the vehicle is far away from the target, distance/decel_time will be a big number because to get to the target in, say, 10 time steps you'd have to go damn fast. The speed is then clipped to max_speed, meaning that deceleration won't take effect until you're pretty close to the target, specifically (max_speed*decel_time) units of distance. So your vehicle will scuttle along at max_speed until you're (max_speed*decel_time) units away, at which time it'll start decelerating to zero speed when distance==0.
In reality your vehicle isn't going to actually stop moving as reaching the target is numerically improbable - the system moves asymptotically towards a stop. The movement will be so small you won't notice unless you are still updating the facing direction or local space from this really small velocity. This is why we needed to add an epsilon test to update_local_space() otherwise your vehicle may suddenly flip directions when it's supposed to be stationary.
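The deceleration profile is easy to verify numerically. This sketch (the function names are mine) reproduces the crucial speed calculation and steps a straight-line arrival: the distance shrinks by a constant amount while at max_speed, then geometrically inside the slowdown radius, approaching but never exactly reaching zero:

```cpp
#include <cassert>
#include <cmath>

// The crucial line from Arrival: speed proportional to distance, clipped to max_speed.
float arrival_speed(float dist, float decel_time, float max_speed) {
    float speed = dist / decel_time;
    return speed < max_speed ? speed : max_speed;
}

// Step a 1D arrival: each step moves arrival_speed() units towards the target.
// Inside the slowdown radius the distance decays geometrically (never exactly zero).
float simulate_arrival(float dist, float decel_time, float max_speed, int steps) {
    for (int i = 0; i < steps; ++i)
        dist -= arrival_speed(dist, decel_time, max_speed);
    return dist;
}
```

With decel_time of 10 and max_speed of 5, deceleration kicks in at 50 units out; after that the remaining distance shrinks by 10% per step, which is the asymptotic crawl the paragraph above warns about.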
Offset Pursuit and Formations
What is Offset Pursuit? Imagine a spaceship docking with a mother ship, a plane doing a strafing run over a gun placement or a missile doing a "near miss". You want a vehicle to travel close to but not through a point. This is where we get to use our local coordinate space to good effect. Take the coordinate of the intercept point we want to go through and localise it (project it into object space). The offset can now be specified relative to the vehicle's origin and axes. When you're done you just globalise your resulting target (project it to world space) and use that position for a seek. Here's a diagram:
Figure 2: conversions to and from local space.

In the diagram above, we localised coordinates w.r.t. the current vehicle and worked out a target position in that space. We can just as easily localise relative to the target vehicle and obtain a position that is relative to its facing direction (assuming it has one) - this way we can not only specify a close pass, but also specify a close pass in front of the target vehicle. It all depends on what effect you're looking for.
```cpp
Vector2d Offset::calculate(const Pilot &target) {
    // where are we aiming for?
    const float offset = 3.0f;
    Vector2d t = point_world_to_local(target.pos);
    Vector2d p(t.x, t.y - (offset * sgn(t.y)));
    Vector2d seekpos = point_local_to_world(p);
    // do a seek to that point (could do arrival too)
    desired_velocity = normalise(seekpos - pos) * max_speed;
    return desired_velocity - vel;
}
```

This same technique also works for moving groups of characters around in formation - one of the characters is the leader of the formation and all formation positions are relative to the coordinate system of that leader. As the leader turns and follows a path, all the relative positions turn too and the other vehicles in the formation bustle to keep up with their positions. This works well in open spaces, but fails to produce natural behaviours in constricted spaces like a maze, as we will see later. To code formation() you'll need a pointer to a leader vehicle and a formation position for each follower assigned in "leader-space":
Figure 3: formation positions in local "leader" and world space.

Local coordinate systems are useful for special effects like force fields and particle systems that move as if they were attached to the vehicle directly or secured by rubber bands (depending on the ratio of max_force to max_speed used by the effect vehicles). It's also possible to do hierarchically jointed systems by passing local spaces down the tree of relationships and transforming back to "relative world space" (that is, world space relative to your parent node) as you ascend the tree, finally arriving at actual world space when you reach the root. Many cool effects for the taking.
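A formation slot can be sketched as a fixed point in leader space that gets re-globalised every frame. This toy function (the name and scalar-parameter style are mine, not the article's formation() code) shows how a slot behind-and-left of the leader swings around as the leader turns:

```cpp
#include <cassert>
#include <cmath>

// A follower's slot is fixed in leader space; as the leader turns,
// the world-space slot position swings around with it.
// leader's y-axis (its left) is perp(facing) = (-fy, fx).
void slot_world_pos(float lx, float ly,   // slot in leader space
                    float px, float py,   // leader position
                    float fx, float fy,   // leader facing (unit vector, local +x axis)
                    float &wx, float &wy) {
    wx = px + lx * fx + ly * -fy;
    wy = py + lx * fy + ly *  fx;
}
```

A slot at (-2,1) in leader space - two units behind, one to the left - sits at world (-2,1) while the leader faces +x; once the leader rotates to face +y, the same slot has swung round to world (-1,-2), still behind and to the left of the new heading.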
Obstacle Avoidance
One of the hallmarks of an intelligent vehicle is that it doesn't blindly bump into its environment - it predicts where it will get into trouble and avoids the problem before it gets there. Again this behaviour uses local space.
Figure 4: obstacle avoidance.

The idea is very simple: steer to keep a box in front of the vehicle free of obstacles. We first need to do a search for candidate obstacles in the local area. Having found this list of candidates we project their origin points into local space. This transforms the problem so the vehicle sits at the origin pointing down, say, the +x axis, simplifying the task.
The first job is to reject objects that are behind our vehicle and, as we're in vehicle space, anything behind us will have a negative x coordinate. Don't forget that some obstacles may be behind us but their radius could reach in front of us (e.g. we're inside an obstacle - not unheard of). I just ignored this case as it doesn't make sense to avoid an obstacle while you're inside it.
The next thing to do is reject obstacles that lie outside of our box. The best way to do this is to expand the radius of each circle by half the width of our box, transforming our problem from box-circle intersections to line-circle ones. If a circle's x-coordinate minus its radius is greater than the length of our box, it's too far forward to have an intersection, so reject it. Next we reject obstacles that are too far from our line - just check their y-coordinates. Finally we have a list of obstacles for ray-circle intersection tests.
To recap:
- If the x-coordinate of an obstacle is negative AND its radius is less than that distance, reject it - it's wholly behind us.
- If the x-coordinate of an obstacle is positive AND greater than the length of the cylinder PLUS the obstacle's radius, reject it because it's too far forward.
- If the distance from the x-axis to the centre of the obstacle is greater than its expanded radius, it doesn't intersect our cylinder, so reject it.
Figure 5: rejections and finding the closest intersection.

As you can see, these tests are trivial to do in 2D. 3D is similar but a little more fiddly as you've got to take 2D distances, but it can be coded efficiently if you work in distance-squared space.
If any obstacles are left, it's time to choose one of them to avoid. The reason we choose only one is that there must be a predictable behaviour when an obstacle is found. Obstacles can easily be merged together into clumps and this algorithm must still work predictably. To choose the closest threat, we intersect the circle with the +x-axis, which will return two hit points. The case where a circle is tangent to the line shouldn't interest us; that would have the box grazing an obstacle, not really a cause for changing our direction of travel. If you compare an optimised version to the general line-circle code you can see how efficient local-space code can become - everything is axis-aligned and all dot and cross products are open for optimisation.
Having found our two intersections we must choose the hit that is positive and closest to x==0, but remember it's quite possible that we are inside an obstacle so you can't just use the -ve root (the root from the line-sphere intersection that is always least). We must generate both roots and choose the least positive intersection. After finding this intersection, we store this value and a reference to the actual obstacle as our current closest threat and carry on doing the line-circle intersections for the rest of the circles.
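The root selection can be sketched like this, using a hypothetical xaxis_circle_intersect signature (the article's version takes a local-space position and radius; this one takes the circle centre explicitly for clarity). Note how the far root is used when the near root is behind us, which is exactly the inside-an-obstacle case:

```cpp
#include <cassert>
#include <cmath>

// Intersect the +x axis with a circle at (cx, cy), radius r.
// On success writes the least positive hit distance to *t.
bool xaxis_circle_intersect(float cx, float cy, float r, float *t) {
    float disc = r * r - cy * cy;
    if (disc < 0.0f) return false;   // line misses the circle entirely
    float root = std::sqrt(disc);
    float t0 = cx - root;            // near hit
    float t1 = cx + root;            // far hit (the one we need when inside)
    if (t0 >= 0.0f)      { *t = t0; return true; }
    else if (t1 >= 0.0f) { *t = t1; return true; }
    return false;                    // both hits behind us - no threat
}
```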
Now we've found our closest threat, we must generate a steering force that will move the box away from the obstacle. The easiest way to do this is to take the vector from the centre of the object to the x-axis and extend it by some factor. The vector you return will be this vector projected into world space using the vector_local_to_world() function. When added to the current velocity, this final vector will cause the vehicle to veer away.
Figure 6: the repelling force.

The resulting code in 2D looks like this:
```cpp
Vector2d Avoid_Obstacle::calculate(Vehicle &c) {
    // initialise the closest hit to a huge value.
    float min_dist = FLT_MAX;
    Vector2d best_steer;
    ObstaclePtrList list;
    // work out the length of the box
    Vector2d diff = c.predict_pos(10.0f) - c.pos;
    float search_dist = diff.length() + c.radius;
    grid.get_obstacle_candidates(c.pos, search_dist, list);
    // walk through the list looking for the closest threat
    ObstaclePtrList::iterator i;
    for(i = list.begin(); i != list.end(); ++i) {
        // get radius of the obstacle
        float column_radius = (*i)->radius;
        // get local-space position of the obstacle.
        Vector2d v = c.point_to_local((*i)->pos);
        // do the rejection tests...
        if(v.x <= 0.0) continue;
        if(v.x - column_radius > search_dist) continue;
        // if there's a definite intersection...
        if(fabs(v.y) < c.radius + column_radius) {
            // find exact line-circle intersection
            float isect = xaxis_circle_intersect(v, column_radius);
            if(isect >= 0.0 && isect < min_dist) {
                min_dist = isect;
                best_steer.y = -v.y;
            }
        }
    }
    if(min_dist == FLT_MAX) return Vector2d();
    return c.vec_to_global(best_steer);
}
```

There are a number of tweaks you can use to get the most out of this behaviour:
- You can make the length of the repelling cylinder proportional to the vehicle's velocity. This means the faster you travel, the more the vehicle will anticipate the obstacles around it.
- You can scale the repelling force w.r.t. the distance away from the vehicle. This makes the vehicle take tiny readjustments if the obstacle is far away and larger steering adjustments if the obstacle is close. The vehicle skims the surfaces of the obstacles more accurately.
- You can add in a braking force: a force vector from the obstacle to the vehicle, proportional in length to the distance from the vehicle to the closest hit, that will cause the vehicle to slow down as it gets close to an obstacle - in effect braking to steer.
Doing obstacle avoidance in 3D is only a little more involved. The centres of the obstacles must be projected onto the YZ plane, resulting in a set of circles. The same tests go on - is the obstacle behind the cylinder, is the obstacle too far forward, and the closest threat is found by finding the least positive hit between the x-axis and all remaining spheres. The final force is calculated from the centre of the closest threat to the x-axis and will be a vector with a zero X component and a 2D vector on the YZ plane, e.g. (0,yf,zf)
Figure 7: avoidance in 3D.

There is one gotcha that happens when you use avoid obstacle with a direction following or seeking algorithm - the forces interact to give you a jittery velocity direction. The effect happens because the vehicle is being forced to face away from the desired path to avoid an obstacle; then, next iteration, the box is empty and the vehicle returns to facing the path, bringing the box back into collision with the obstacle. As far as I can tell this can't be avoided by recoding avoid obstacle or the path following, but it can be avoided by separating the facing direction from the velocity vector. If you take a series of previous velocities and average them, the resulting vector can be used as a filtered facing direction and the jittering averages out. Of course, this costs memory...
Separation
Separation is a behaviour that stops groups of vehicles all sitting on top of each other and causes them to use space more realistically. It's a classic "flocking" behaviour, one of the three behaviours that are used to animate flocking groups (separation, cohesion and alignment), and it's based on electrostatic repulsion.

A vehicle sends a request to the database for a collection of all the other vehicles that lie within a circle around itself. Next it works out the vector from itself to each vehicle, scales this vector by the negative inverse distance (-1.0/(vehicle.pos-pos).length()) to each of these vehicles and sums all of these. The resulting vector is a repelling force away from all neighbours. Additionally, you can add in a constant value to "buffer" the vehicles apart from each other by a few units of distance:
```cpp
Vector2d Separation::calculate(Vehicle &c) {
    // add some pixels separation between all objects.
    const float buffer = 1.0f;
    VehiclePtrList list;
    // get a list of the candidates to repel
    Vector2d steering;
    float search_dist = 2.0f * c.radius;
    grid.get_collision_candidates(c.pos, search_dist, list);
    // walk the list of candidates
    VehiclePtrList::iterator i;
    for(i = list.begin(); i != list.end(); ++i) {
        if(*i == &c) continue;            // if it's me, continue
        Vector2d to = c.pos - (*i)->pos;  // vector FROM repelling TO this - so no negate!
        float dist = to.lengthsq();       // note: no sqrt.
        float test_dist = c.radius + (*i)->radius + buffer;
        if(dist < (test_dist * test_dist))
            steering += to / sqrt(dist);  // finally do the sqrt.
    }
    return steering;
}
```

What does the repelling force actually look like? It's a 1/x^2 curve where x is the combined radii of the vehicles. Here are some plots to help you visualise the force field:
Figure 8: visualisations of repelling force fields.

How does the force field work when the vehicles have different radii? Let's try an example. Say one vehicle has a radius of 2 units and the other has a radius of 4 units. If the distance between the vehicles is more than the sum of their two radii then there is no need to repel each other, so the size of the force is zero. As the vehicles get closer, the line joining their centres reaches the threshold length and a repelling force is generated. This force starts off weak because the distance is large and gets geometrically larger as they get closer. If the centres of the two vehicles coincide the force becomes essentially infinite (as expressed by a "divide by zero" error - geometrically, the force is infinite in all directions!). When the objects are just touching they gently nudge each other apart, but when there's a full-on intersection they kick each other out of the way with considerable force.
Figure 9: the magnitude of a repelling force.

The forces generated can be numerically very large, using the full range of representable values from 0 to +inf, so this is one place in the code where close attention should be paid to numerical stability.
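One defensive way to handle that stability problem is to clamp the force magnitude explicitly. This sketch is not the article's code: it uses a plain inverse-distance falloff inside the threshold (the article discusses steeper 1/x^2 and 1/x^3 curves as well; the exact power is a tuning choice) and caps the result so coincident centres can't produce infinities:

```cpp
#include <cassert>
#include <cmath>

// Repulsion magnitude with a hard threshold at the combined radii plus buffer.
// Clamped to max_mag so coincident centres never divide by zero or overflow.
float repel_magnitude(float dist, float r_a, float r_b, float buffer, float max_mag) {
    float threshold = r_a + r_b + buffer;
    if (dist >= threshold) return 0.0f;   // out of range: no force at all
    if (dist <= 0.0f) return max_mag;     // coincident centres: clamp, don't divide
    float m = 1.0f / dist;                // grows sharply as centres approach
    return m < max_mag ? m : max_mag;
}
```

Worked through the example above (radii 2 and 4, buffer 1): at a distance of 8 the force is zero, at 5 it's a gentle 0.2, and deep intersections saturate at the clamp instead of blowing up.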
How does the database find all collision candidates? This is a classic spatial searching problem, and can be solved in several ways. The easiest way is to maintain a grid of cells (buckets) throughout the navigable space that contain a list of all objects that intersect that cell. Take the candidate circle, approximate it with a box and read off the cells that the box intersects with. Care must be taken that you round values down and up as appropriate to make sure all valid cells are accessed. But hey, that's basic games stuff. You've just got to remember that every time you update the position or radius of a vehicle or obstacle you delete it from the grid and reinsert it at its new position, and that code can go into Vehicle::update().
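A toy version of such a bucket grid might look like this (the Grid class and its method names are illustrative, not the article's database). The query approximates the search circle with a box, rounds the box's extent down and up to cell boundaries, and returns everything in the touched cells as candidates:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Toy bucket grid: world space divided into cell_size squares,
// each cell holding the ids of the objects inside it.
struct Grid {
    float cell_size;
    int   w, h;
    std::vector<std::vector<int> > cells;
    Grid(int w_, int h_, float cs) : cell_size(cs), w(w_), h(h_), cells(w_ * h_) {}
    int clampi(int v, int lo, int hi) const { return v < lo ? lo : (v > hi ? hi : v); }
    void insert(int id, float x, float y) {
        int cx = clampi((int)std::floor(x / cell_size), 0, w - 1);
        int cy = clampi((int)std::floor(y / cell_size), 0, h - 1);
        cells[cy * w + cx].push_back(id);
    }
    // Approximate the search circle with a box; round its extent down and up
    // so no valid cell is missed, then gather every id in the touched cells.
    void query(float x, float y, float radius, std::vector<int> &out) const {
        int x0 = clampi((int)std::floor((x - radius) / cell_size), 0, w - 1);
        int x1 = clampi((int)std::floor((x + radius) / cell_size), 0, w - 1);
        int y0 = clampi((int)std::floor((y - radius) / cell_size), 0, h - 1);
        int y1 = clampi((int)std::floor((y + radius) / cell_size), 0, h - 1);
        for (int cy = y0; cy <= y1; ++cy)
            for (int cx = x0; cx <= x1; ++cx)
                for (size_t k = 0; k < cells[cy * w + cx].size(); ++k)
                    out.push_back(cells[cy * w + cx][k]);
    }
}; 
```

Candidates may still lie outside the actual circle (the box overestimates), which is fine: the behaviours above all do their own exact distance tests anyway.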
Why is this force just added to the velocity? In all the previous behaviours we've calculated desired_velocity-vel, so why the difference? The difference is that in previous bits of code we've been accelerating towards some desired position but a separation force is just that, a force that influences the direction of travel. Sure, you could calculate a position from the force that would be where you want the vehicle to go but that would be a waste of calculation. Forces are penalties for getting in a bad position and the penalties should be proportional to the "badness" of the situation. It's a subtle distinction, but it's important.
There are some gotchas associated with separation. Because there is no upper limit to the number of other vehicles that may be repelling you, it's quite easy to get vehicles into situations where they are being pushed by sheer force of numbers into places you don't want them to go - through walls, through obstacles, etc. Picture the situation where a vehicle is cornered and being pushed by any number of neighbours into the corner, or being pushed into a large obstacle. In the case of an obstacle, the vehicle is trying to avoid it, but that doesn't help if it's facing in the opposite direction and its collision cylinder doesn't hit the obstacle. In the case of the wall (explained later), the sum of the forces must never exceed the force the wall generates to repel vehicles:
Figure 10: problems with separation.

It is possible to enforce an upper limit on the number of vehicles to be repelled by, but then how do you choose which subset of the collision candidates to use? Taking the top N vehicles by size is a good method, so you'll need to sort your candidate list. A different solution, the one I use, is:
- Add a repelling force to the Obstacle so that they push out vehicles that have become intersecting. I call this behaviour RepelObstacle and it works just like Separation.
- Layer your repelling forces. The rules I currently use are:
- Walls must never be intersected, and by that I mean the centre point of a vehicle must never cross a wall boundary. This is enforced at the integrator level using a non-penetration constraint. Basically, don't ever allow the vehicles to get into an invalid state. If they want to intersect a wall, just don't update their position.
- Obstacles come next. They can be intersected if absolutely necessary but they will repel vehicles very quickly if they do become intersected. To produce a huge repelling force I use a 1/x^3 curve instead of the normal 1/x^2. Not a force to be trifled with.
- After all that has been settled, vehicles can repel each other.
This ordering is done when the behaviours are combined together using a weighted mean. More on combining behaviours later.
Follow Path
Follow Path is another basic behaviour that causes vehicles to move along a polyline and stop at the final point. Each point on the path is called a waypoint, the lines between points are path segments, a collection of consecutive path segments is called a subpath, the vehicle is steering towards the current waypoint which increments along the path when we have reached the previous one, and the final point is called the destination point:
Figure 11: naming the parts of a path.

There are three main techniques of path following: dog-on-a-string, reprojection and grappling hooks.
- Dog on a string
Linearly interpolate a target point down the polyline and make the vehicle do a seek to that position. How fast do you move this target point? It depends on what sort of effect you're looking for. If the target moves too slowly then the vehicle will circle the target like a moth around a light. If it moves too fast it's possible that the vehicle will get stuck in the environment. It is possible to work out how "hard" the target point is pulling on your vehicle by using a distance measure from the vehicle to the target, and to use this information to speed up or slow down the moving target.
- Reprojection
Estimate the vehicle's future position on its current heading then project this point onto the path using a "closest point on line to point" method. Use this projected point as your target for a seek. In addition, if the distance between your projected point and the point on path is small then don't return a steering force - this gives you the effect of keeping vehicles within a wide stroke path. This technique has the advantage of always providing a good seek point regardless of your speed of travel. Problems occur only if your future position intersects thin walls or your path self-intersects leading to areas of confusion. Setup must take into account that the vehicle may not be facing in the correct direction initially to travel down the path, so be sure to return seek points that move the vehicle in the right direction. Overall, the most robust technique once you sort these problems out.
Figure 12: examples of bad reprojections.
- Grappling Hooks
Given that your waypoints are line-of-sight preprocessed (most path finders should do this, and if they're not just insert waypoints until they are), you can just seek the next waypoint. As you approach that waypoint within some distance threshold, increment your waypoint counter and head for the next waypoint. When you animate this it looks like the vehicle throws out a grappling hook, Spiderman style, to the next waypoint and drags itself there, hence the name. This was the technique used in DK2 and the one we'll look at in detail below.

The overriding thinking behind path following was that the vehicle must get to the destination if at all possible. If it gets stuck and there's a different route there, take that route instead and keep on going, trying, renavigating and retrying until you finally achieve the goal. (One thing to note is that DK2 only allows for removal of walls and never insertion. This very basic underlying decision affected the design and optimisation of our AI enormously, and our producers had to be made aware that it wasn't something you could change a few weeks before ship!)
The waypoints were generated using Bullfrog's "Ariadne" path finding system. This system takes a rectangular grid of cells and a single rule function that returns true if you can move from tile type A to tile type B (how you choose to encode the tile type for height, danger, impenetrability, etc. is up to you). Having this single rule function allowed us to specify whether the world has infinitely thin walls or thick walls at the flick of a switch, something we used to good effect when porting the nav system to Theme Park World (called SimThemePark in the US).
Ariadne then triangulates the world using an insertion-based Delaunay triangulation algorithm and then works as a database that responds to requests for paths from some point (x,y) to another point (x,y). The system is dynamic, so you can dig new rooms, remove and add walls, and the system will selectively update the triangulation in a pretty optimal way. The path queries are very fast too, and use a modified A* algorithm to find a path through the triangle adjacency graph which is then post-processed to give you an optimal geometric path that is offset out from walls. One point to note is that the path returned is not necessarily the best, but it is one of many good routes, and for tiny differences in your starting position you may get radically different routes to your destination. In practice this is a good feature rather than a problem. With these notes in mind, we've got to "pilot" down the path.
The path structure returned by Ariadne is an ordered list of waypoints plus a count of the number of waypoints. To fill a path structure with data, we call:
struct Path {
    SLONG ax8, ay8;    // Start of route
    SLONG bx8, by8;    // End of route
    SLONG PathLength;
    WayPoint WayPoints[MAX_PATHLEN];
} path;

path_init8(&path, pos.x, pos.y, end_point.x, end_point.y, PATH_MIDDLE);

which fills the path structure with waypoints and sets PathLength. (Note that, due to Ariadne being a cross-platform tool optimised for speed and size, all positions are stored and returned in 8.8 fixed point, just to confuse things!). The point (ax8,ay8) is a copy of the initial position and the point (bx8,by8) is just a copy of the end_point. path.WayPoints[0] is the first waypoint and not the initial position, so you're gonna have to special case the first segment of a path. If Ariadne can't find a path to the desired destination then PathLength==0. If a path has only one waypoint, the last one, then:
PathLength = 1;
WayPoints[0].x = bx8;
WayPoints[0].y = by8;

Otherwise we can loop through waypoints 0..PathLength-1 in order.
Due to the triangulation algorithm, handily every waypoint output by Ariadne is line-of-sight visible to the previous one. This lends itself to the grappling hooks method outlined above. In addition, because the seek behaviour won't slow to a stop when it reaches the final point, the last segment of the path is travelled using an arrival behaviour. Voila, that's it, follow_path.
Except for the gotchas.
Occasionally, due to the interaction of other behaviour forces and the follow path behaviour, the vehicle will reach a waypoint, increment its current waypoint and then get pushed back by a flood of vehicles coming in the opposite direction. This will cause the vehicle, quite legally, to be trying to get to a waypoint by passing through an impassable object. The same effect could happen with any of the other path following techniques, so it's a general problem in complicated mazes and dense worlds.
Figure 13: getting stuck.

To solve this problem, we introduce the concept of pilot metrics. These are informational functions that tell the outside world and the AI itself how the vehicle is doing in its task of following the path. We used three functions:
float is_stuck();
float is_finished_path();
float is_crowded();

Each of these metrics returns a value between 0.0 and 1.0. is_stuck() reports how much progress the vehicle is making. If it's making little or no forward progress the value returned will be high, maxing out when the vehicle is being consistently forced in the opposite direction each iteration. is_finished_path() will vary smoothly from 0.0 to 1.0 as the vehicle follows the current path, ending up at 1.0 when the vehicle stops at the final target position. is_crowded() tells us if the next waypoint is too densely packed with other vehicles (by summing their areas) to actually go there.
We'll start with the definition of is_finished_path (you'll see why in a minute). We need to find the total length of the whole path and store it; this is recalculated every time we want to renavigate to a new point. Next we want to find the ratio of distance travelled over total distance, which is easiest to calculate as:
1.0f - (distance_to_go / total_distance)
To calculate the distance to go, all we need to do is add up the geometric lengths of all the remaining path segments and add in the distance from our current position to the current waypoint. Sounds like a lot of square roots? Well, no, not if you use fast distance approximations. We aren't interested in amazingly accurate distances when all we need is a ratio of distances, so the qlength approximation is perfect for this job.
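As a sketch (with an exact sqrt standing in for the qlength approximation, and the Pt type and names purely illustrative):

```cpp
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Exact distance; the real thing would use the qlength approximation,
// since we only ever need a ratio of distances.
static float dist(Pt a, Pt b)
{
    float dx = a.x - b.x, dy = a.y - b.y;
    return std::sqrt(dx * dx + dy * dy);
}

// Progress ratio: 1 - (distance still to travel / total path length).
float is_finished_path(Pt pos, const std::vector<Pt>& waypoints,
                       int current_waypoint, float total_distance)
{
    // distance from here to the current waypoint...
    float to_go = dist(pos, waypoints[current_waypoint]);
    // ...plus the geometric lengths of all remaining segments.
    for (int i = current_waypoint; i + 1 < (int)waypoints.size(); ++i)
        to_go += dist(waypoints[i], waypoints[i + 1]);
    return 1.0f - (to_go / total_distance);
}
```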
is_stuck() is defined in terms of is_finished_path. What we need is a ratio of "progress" - distance travelled in the right direction over total possible progress. I chose to implement this as a circular list represented by a word filled with bits. Each iteration, the word is shifted left and the LSBit is set to 1 or 0 depending on whether the vehicle has made forward progress or not. To find the ratio of progress all you do is count the number of forward progresses made (set bits) over the total length of the buffer (sizeof(word)*8). This does mean that a tiny glitch backwards will affect is_stuck for longer than one iteration, but the overall effect is good.
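A sketch of that bit-buffer metric, assuming a 32-bit word (the struct and names are illustrative):

```cpp
#include <cstdint>

// The word acts as a circular buffer of the last 32 iterations.
struct ProgressBuffer {
    std::uint32_t bits = 0;

    // Call once per iteration with whether forward progress was made.
    void record(bool made_progress)
    {
        bits = (bits << 1) | (made_progress ? 1u : 0u);
    }

    // Ratio of iterations WITHOUT progress: 0.0 = fine, 1.0 = totally stuck.
    float is_stuck() const
    {
        int set = 0;
        for (std::uint32_t b = bits; b; b &= b - 1) ++set;   // popcount
        return 1.0f - (float(set) / 32.0f);
    }
};
```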
is_crowded() works by using the same techniques as separation. It calls get_collision_candidates() using the next waypoint as the search position to find out if the next waypoint is too crowded. We then sum the areas of each vehicle returned in the list and divide this value by some threshold (e.g. the geometric area of a tile, or some factor based on this vehicle's current radius), saturated at 1.0.
Having got that out of the way, back to the question of a vehicle getting stuck. Now we have a way of finding out if a vehicle is stuck, all we have to do is generate a waypoint that is visible so we can travel towards it which we can do by calling for a renav - regenerate a path from where we are to where we want to go. Similarly, if we can give the navigation system a hint that the current path is impossible to follow, we can mark this path or area as currently unnavigable and renav through a different route. Now we can guarantee that our vehicles will reach their designated position.
follow_path is therefore broken into three sections:
Renav if stuck
Are we stuck? If so, regenerate the path.

Find the current waypoint
Have we finished the path? If so, we're done. Stand still.
Have we reached a waypoint? If so, increment the current waypoint.

Travel to current waypoint
Are we on the last path segment? If so, do an arrival to the current waypoint.
Else seek the current waypoint.

There are a couple of tweakable values in there. The test for being stuck needs a threshold to test against; mine is called is_stuck_threshold. The test for reaching a waypoint needs a distance test, so there'll be a closeness_to_waypoint threshold in there (I use 2.0*radius, which allows big vehicles to be more sloppy about hitting points and tiny vehicles more fiddly. It gives the vehicles character but may not be what you're looking for). You may want vehicles to be especially accurate about positioning themselves at the final point of a path, so a different closeness_to_last_waypoint may be needed for finishing a path, and finally we'll also need a deceleration_time for the arrival.
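The three-section structure above could be sketched like this. Everything here is illustrative: the state struct is stubbed to the minimum needed to show the control flow, and a real version would return steering forces rather than an action tag:

```cpp
// Decision skeleton for follow_path; threshold names match the text.
enum Action { RENAV, STOP, ARRIVE, SEEK };

struct FollowPathState {
    float stuck_metric;      // output of is_stuck(), 0..1
    float dist_to_waypoint;  // distance to the current waypoint
    int   current_waypoint;  // index into the waypoint list
    int   path_length;       // number of waypoints
    float radius;            // vehicle radius
};

Action follow_path_step(FollowPathState& s, float is_stuck_threshold)
{
    // 1. Renav if stuck.
    if (s.stuck_metric > is_stuck_threshold)
        return RENAV;

    // 2. Find the current waypoint (2.0*radius closeness test).
    float closeness_to_waypoint = 2.0f * s.radius;
    if (s.dist_to_waypoint < closeness_to_waypoint) {
        if (s.current_waypoint == s.path_length - 1)
            return STOP;        // finished the path: stand still
        ++s.current_waypoint;   // hit an intermediate waypoint
    }

    // 3. Travel: arrival on the last segment, seek otherwise.
    return (s.current_waypoint == s.path_length - 1) ? ARRIVE : SEEK;
}
```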
Two last tweaks we can add. You might like to scale the speed of a vehicle w.r.t. the density of vehicles around the current waypoint. Why? Well, you may have noticed that a group of people crushing to fit through a doorway moves more slowly than if there was plenty of space. Crowds of people work as an inverse liquid: increase the pressure and the speed of movement decreases. We can find out the density from is_crowded, and we can scale the clamping to max_speed used in seek and arrival with this density value.
The other tweak is to scale the speed of forward motion w.r.t. the angle between our facing direction and the desired direction of travel. This forces vehicles to stay still until they are facing pretty much in the direction they want to go, using a function that looks something like:
Figure 14: constraining movement based on lateral forces.

This makes vehicles accelerate gradually as they turn to face the required direction, then speed away once they face their direction of travel. For some vehicles this kind of behaviour makes no sense (e.g. motion captured data that would have sliding feet as it turns) but for others it looks good. Your mileage may vary.
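One possible shape for that function, purely as an assumption (a clamped, squared cosine of the angle between facing and desired direction; the actual curve used is a tweakable):

```cpp
// Scale forward speed by how well the vehicle faces its desired direction
// of travel. 'facing' and 'desired' are unit vectors.
float forward_speed_scale(float facing_x, float facing_y,
                          float desired_x, float desired_y)
{
    float c = facing_x * desired_x + facing_y * desired_y;  // cos(angle)
    if (c < 0.0f) c = 0.0f;   // facing away: stand still and turn
    return c * c;             // sharpen the falloff (assumed shape)
}
```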
Avoid Walls and Containment
Reynolds calls it containment, I call it avoiding walls. This is another penalty force that causes vehicles to steer away from and be repelled by walls. The basic idea is this: you project a line from your current position to some predicted position. If that line intersects any wall edge you find out how far into the wall it projects, project that vector back through the surface normal, and that is your force vector. Best described with pictures:
Figure 15: steering to avoid walls.

The effect is to push the vehicle away from the wall proportionally to its direction of travel, velocity and angle w.r.t. the wall. All in one simple geometric construction! True, it has little to do with the physical realities of bouncing off walls (normal and tangential forces), but the effect is pretty good. One drawback is that sliding along a wall doesn't really work too well and the walls end up seeming rough and abrasive (no tangential sliding is the same effect as a high coeff. of friction).
As you can see from the above diagram, there is no particular reason to use axis-aligned surfaces other than the optimisations you can make. You could use arbitrarily aligned half-spaces, polylines or whatever you like. Just so long as the surface has a normal and you can find the intersection point. How do you calculate this force? We'll go through the general 2D case for avoiding a non-axis-aligned half space and then go on to discuss optimisations for axis-aligned regular grids afterwards.
To find the force, first calculate the intersection point, if one exists. This is done using the usual raytracing code for a ray/halfspace intersection (equivalent code can be found in Graphics Gems, p11 or in An Introduction to Ray Tracing, p50). Having found the intersection point, find the vector from that point to the predicted position - this is the section of the prediction vector that is intersecting the wall. To calculate the part of that vector that is parallel to the wall normal you project the intersection vector onto the normal vector and negate the result:
force = N * -dot(N, I);

The optimisations for regular, axis-aligned grids are quite interesting. Given that your normal is going to be one of (1,0), (-1,0), (0,1) or (0,-1), and that your walls are regularly spaced, calculating the intersection point is just a simple divide and the dot product collapses into a single line of code. Let's look closer. Your vehicle is going to be sitting inside a tile, facing in a particular direction (first picture):
Figure 16: optimisations for axis-aligned walls.

You'll only need to test the ray against two of the enclosing walls, and a simple sign test will tell you which walls you're going to be interested in:
Vector2d ray;
bool left, right, top, bottom;

if(ray.x >= double2long(0.0)) {   // same as dot(ray,Vector2d(1,0)) >= 0
    left = false; right = true;
} else {
    left = true; right = false;
}

// remember, the y axis goes top to bottom.
if(ray.y >= double2long(0.0)) {   // same as dot(ray,Vector2d(0,1)) >= 0
    bottom = true; top = false;
} else {
    bottom = false; top = true;
}

Next, you find out if there's an intersection point, and generate the force vector from that directly:
// is there a right hand wall in this tile?
if(right) {
    if(cant_navigate(this_tile, map_lookup(mappos.x+1, mappos.y))) {
        // convert wall position to local coords
        Vector2d wall = map_local_to_global(mappos.x+1, 0);
        SLONG wallpos = wall.x - c.radius;
        if(newpos.x > wallpos) {
            // the predicted pos intersects the wall, generate a force.
            return Vector2d(2.0*(wallpos - newpos.x), 0);
        }
    }
}

...and use similar code for left, top and bottom walls. Why do we calculate wallpos as wall.x - c.radius? Well, our vehicles have an extent (see second diagram above), a radius, and we want to stop our vehicles from touching walls, not just passing through them. To do this we extend walls outwards by the radius of our vehicle and then use a point-in-halfspace test. By extending our walls we're losing a dimension, so spheres become points and lines stay as lines. This simplifies our calculations considerably.
As we've extended our walls outwards, what about the corner points? If you want correct collision reaction at external corners, we're going to have to extend the corners with a circular buffer.
Figure 17: avoiding corner points.

You can then do a ray-circle test to get the intersection point, use the normal at that point as the plane of intersection and calculate the intersection vector as normal:
// is there a top right corner?
if(top && right) {
    if( cant_navigate(this_tile, map_lookup(mappos.x+1, mappos.y-1)) &&
       !cant_navigate(this_tile, map_lookup(mappos.x,   mappos.y-1)) &&
       !cant_navigate(this_tile, map_lookup(mappos.x+1, mappos.y  )) ) {
        // generate a circle centered on the corner
        Vector2d centre = map_local_to_global(mappos.x+1, mappos.y);
        SLONG isect = line_circle_intersect(c.pos, ray, centre, c.radius);
        // if hit found inside search area...
        if(isect != SLONG_MAX && isect > 0 && isect <= raylen) {
            // calc hit point
            Vector2d hitpos = c.pos + ray * isect;
            // calc unit normal (vector length = c.radius)
            Vector2d n = (hitpos - centre) / c.radius;
            Vector2d force = n * (2.0 * dot(hitpos - newpos, n));
            return force;
        }
    }
}

That's about it for avoid_walls, and for the behaviours we've used so far. The next section is all on the pitfalls of combining behaviours into a full working system.
The previous section covers the behaviours we used in DK2, so in this section I'll briefly cover some more behaviours that were covered by Reynolds in his GDC paper and hopefully explain them in a little more detail.
Wander
A useful technique for adding a little randomness to a vehicle's motion (or a lot, making it look drunk), or as a default behaviour when you've got nothing better to do. When a vehicle wanders around a space it basically keeps going forwards, but with a little randomness in its facing direction. There are several ways of achieving this, and the obvious one would seem to be adding a random vector to the current velocity each iteration. Sure, it gives you a randomised walk, but it looks a little too much like Brownian motion rather than aimless wandering. You could filter the randomness by using some frequency of Perlin noise keyed on the vehicle's position in space, which gives good, controllable results but at quite some cost.

The "games hack" version works by considering a circle (or in 3D a sphere) projected in front of the vehicle. Initially, in vehicle space, pick a random target point on the circle, project it back into world space and steer towards that. Next iteration, add a small random vector to the target on the circle and reproject it back onto the circle (renormalise the vector), then steer towards that. A really elegant solution that uses only one sqrt.
Figure 18: examples of wander constructions.

The wandering behaviour is easily controllable. The size of the circle controls the range of possible turning angles (a small circle gives you relatively straight wandering, a big circle gives you more turning). The relative position of the circle allows you to control whether or not you generate backward facing vectors for amoeba-like spins, or bias steering to one side for spiralling effects. The size of the random vector controls the randomness of your wandering path.
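A sketch of the circle-projection wander described above; the struct, names and use of rand() are illustrative assumptions:

```cpp
#include <cmath>
#include <cstdlib>

// The wander target lives on a circle of radius 'wander_radius' held
// 'wander_distance' in front of the vehicle, all in vehicle space.
struct Wander {
    float tx = 1.0f, ty = 0.0f;   // current target on the unit circle

    // One iteration: jitter the target, reproject onto the circle (the
    // single sqrt), and output the seek target in vehicle space.
    void step(float jitter, float wander_radius, float wander_distance,
              float& out_x, float& out_y)
    {
        tx += jitter * (2.0f * (std::rand() / (float)RAND_MAX) - 1.0f);
        ty += jitter * (2.0f * (std::rand() / (float)RAND_MAX) - 1.0f);
        float len = std::sqrt(tx * tx + ty * ty);   // renormalise
        tx /= len; ty /= len;
        out_x = wander_distance + tx * wander_radius;
        out_y = ty * wander_radius;
    }
};
```

The final seek target would be rotated from vehicle space into world space before steering towards it.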
Pursue and Evade.
Pursuit and evasion strategies are the sort of problems that researchers have spent a long time working on. Sure, there are optimal strategies for running away just as there are optimal strategies for chasing things, but mostly in games you'll want control over whether or not vehicles catch each other. This "artificial stupidity" is a typical game design trick - make the perfect game player, then make it execute dumb decisions, but controllably dumb so you can set the "intelligence" of your opponent. That's how artificial opponents are usually coded.

As we saw earlier, in order to pursue another vehicle you'll need three pieces of information: your target's current position, its velocity and an estimate of how long it will take you to get there (t). You simply project your target's position t units of time into the future and aim for that. Controllability can come from misjudging how long it will take to get there (undershooting) or by adding a random or deliberately computed component to the estimated position (missing the intercept point and sailing past).
evade is exactly the opposite - take your opponent's position and heading and estimate how long it will take them to get to you. Predict their position and steer away from it (just negate the seek vector). In combination with the pursue behaviour this can lead to really quite boring chases that ultimately rely on raw straight line speed. To get more interesting chases, add a little randomness to either the chaser or quarry, maybe using a small amount of the wander behaviour.
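Both behaviours reduce to picking a predicted point and seeking (or fleeing) it. Here's a sketch using the simple distance/max-speed time estimate; the V2 type and all names are illustrative:

```cpp
#include <cmath>

struct V2 { float x, y; };

// Predict the target's position t seconds ahead and aim for that.
// Mis-estimating t is the "artificial stupidity" knob.
V2 pursue_target(V2 my_pos, float my_max_speed, V2 their_pos, V2 their_vel)
{
    float dx = their_pos.x - my_pos.x, dy = their_pos.y - my_pos.y;
    float t = std::sqrt(dx * dx + dy * dy) / my_max_speed; // time estimate
    return { their_pos.x + their_vel.x * t,                // predicted pos
             their_pos.y + their_vel.y * t };
}

// Evade: steer directly away from the predicted position.
V2 evade_target(V2 my_pos, float my_max_speed, V2 their_pos, V2 their_vel)
{
    V2 p = pursue_target(my_pos, my_max_speed, their_pos, their_vel);
    return { my_pos.x - (p.x - my_pos.x), my_pos.y - (p.y - my_pos.y) };
}
```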
Flow Field Following
Marshalling a group of flighty young vehicles to follow a script is exactly like herding cats. Depending on various random settings you might get the effect you're looking for or you just might not, that's the problem with randomness and this is where flow fields come in. A flow field or vector field is a 2D or 3D grid of (encoded) direction vectors that can be used to specify desired directions based on a vehicle's position. By painting these directions of flow onto a bitmap your art team can gain control of groups of vehicles in an easily controllable manner that doesn't require programmers to bang their heads against solid objects. You don't like the effect, you can just "repaint" it and try again. It's a real timesaver for set pieces, animations and level editors.
Figure 19: following a flow field.

To follow a flow field, project your vehicle's position forwards in time. Read the desired steering direction on the map at that position. The difference between your current velocity and this future velocity is the steering vector (e.g. steering = projected_vel - current_vel). Don't project too far into the future or you'll get bizarre steering effects, similar to integrating a function with too large a timestep.
Interpose and Hide
interpose is where a vehicle attempts to position itself between two targets and these targets can be any combination of fixed points or moving targets. It's an essential tool for sports simulations like soccer or ice hockey. The vehicle registers the positions of two targets (e.g. an attacking player and the hockey puck) and attempts to intercept a point halfway along the line between the two. Additional intelligence can be added by predicting where both ends of the line will be when the vehicle arrives and heading for the halfway point between those two positions. Halfway along the line is not the only point you can choose, you may like to bias the interposition point at some other ratio between the two.
Figure 20: interpose (left) and hide (right).

hide is where a vehicle again registers two targets, but is looking to get one of those targets between itself and the other. The vehicle finds the line between the two targets, extends it some distance beyond the hiding target and seeks that point. If more than one vehicle is looking to hide behind one object, you can use both the hiding target's extent and the distance offset to give you more than one hiding place. Some really cute effects of little groups of vehicles hiding from an enemy behind bits of scenery can easily be set up.
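Both target-selection calculations are tiny; a sketch (the P2 type and names are illustrative, with a tunable ratio for interpose and a distance offset for hide):

```cpp
#include <cmath>

struct P2 { float x, y; };

// Point at ratio t along the line between two targets (t = 0.5 = halfway).
P2 interpose_point(P2 a, P2 b, float t)
{
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}

// Extend the threat->cover line past the cover object's surface by
// 'offset', giving the point to seek.
P2 hide_point(P2 threat, P2 cover, float cover_radius, float offset)
{
    float dx = cover.x - threat.x, dy = cover.y - threat.y;
    float len = std::sqrt(dx * dx + dy * dy);
    float k = (cover_radius + offset) / len;
    return { cover.x + dx * k, cover.y + dy * k };
}
```

Varying t in interpose_point gives the biased interposition ratio mentioned above; varying offset (and which point on the cover's extent you use) gives multiple hiding places behind one object.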
Unaligned Collision Avoidance (UCA)
One of the more impressive steering behaviours and the least well described in previous papers, this one really gives intelligent effects for interacting groups of vehicles. Picture two groups of vehicles flowing through corridors that meet at a crossroads. When the two flows of vehicles meet they must navigate past each other without colliding, planning their progress so as to avoid collision. How do we do this? Expensively is the answer. You can easily get too bogged down with turning this simple behaviour into a fully blown simultaneously solved simulation step, so there are a number of simplifications we'll have to take to make it tractable for the current generation of real-time games.
Figure 21: unaligned collision avoidance.

Each vehicle will check all other vehicles in its vicinity to calculate their nearest approach - the position in the future at which, if they kept going at their current directions and velocities, they would be closest to each other. If this distance is below some threshold they must both take evasive action by turning and slowing down or speeding up. When groups of vehicles approach each other in this way it's possible that many vehicles will have a close approach, but only the closest one is considered. (Note the assumptions: only one closest approach is important and future velocities are constant. These mean the technique will not always work in dense situations, but it's pretty good nevertheless).
So, there are three parts to this behaviour:
- doing a neighbourhood search.
- finding the closest approach from the set of collision candidates.
- taking evasive action.
Neighbourhood search. Because in UCA vehicles probably won't be travelling in straight lines for too long, it's important to put an upper bound on neighbourhood searches. You're looking for pairs of vehicles that have a closest approach lower than the threshold distance, so be sure to check each pair only once, otherwise you'll be wasting resources. You can do this by giving each vehicle a "timestamp" that is updated to the current iteration count if and only if it has already been given a vehicle to avoid this iteration. If the timestamp is less than the current iteration then it's ripe for UCA. (32 bit values will give you approximately 828 days of updates at 60Hz before the numbers wrap!)
Finding the closest approach. You've got two vehicles in space that have constant velocity and you're looking to find the time in the future at which they are closest. Let's transform this problem into a simpler one - how about we say that one of the vehicles is fixed at the origin? To transform the general case into this one, you just subtract the position and velocity of one vehicle from the other. Some basic definitions (vectors in bold, scalars in lower case):
p = vehicle2.pos - vehicle1.pos
v = vehicle2.vel - vehicle1.vel
a = dot(v, p)
b = |v|
time = -a / b²
distance_squared = |p|² + 2·a·time + b²·time²

We're not really interested in finding out if there is a collision, so we don't have to solve the quadratic for the sum of the radii; we're just interested in the distance of closest approach, and every pair of vehicles has a closest distance even if they are diverging (the time of closest approach will be zero, the distance will just be |p|). Again, as we're comparing distances there's no need to find the true distance - we can just as easily compare the distance squared and save ourselves some square roots.
There are only four outcomes from this calculation, and three of those are quick-outs.
1) If a == 0 the vehicle is moving parallel to the origin.
2) If a > 0 the closest approach was in the past.
3) If b == 0 there is no relative motion.
4) If a < 0 && b != 0 we have a closest approach.
Figure 22: closest approach cases.

The diagrams above show what the situations look like, but as with all graphics programming there ain't no such thing as equal-to-zero in floating point, so we have to re-express these calculations as tests against a small epsilon value we'll call e:
1) if(-e < a < e) then return no_collision.
2) if(a > e) then return no_collision.
3) if(-e < b < e) then return no_collision.
4) if(a < -e && !(-e < b < e)) then calculate time and distance.
5) if(time < max_time) we have a collision.
6) else return no_collision.

Evasive action. So we've found the closest vehicle to this one, we've got a time to collision that's within our upper bound and we know the closest approach distance - what do we do with it? There are two things a vehicle can do to avoid a collision: it can speed up/slow down and it can turn.
To decide who accelerates and who decelerates we need to know how much extra speed each vehicle has. If one vehicle is going at full-speed, it can't possibly accelerate. If they're both going at full speed then one of them has to slow down. If they both have speed to spare, one has to speed up and one has to slow down - and you may as well choose arbitrarily.
To calculate the turning force you need the vehicles to turn away from each other. To find an "away" direction, take the mean velocity of the two vehicles, for each vehicle subtract it from the current velocity, scale it and voila, two vectors facing away from each other (with the usual hacks for vectors that cancel each other out).
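A sketch of that turn-away construction (names illustrative, including one possible version of the hack for when the difference vectors vanish):

```cpp
#include <cmath>

struct T2 { float x, y; };

// Subtract the pair's mean velocity from each vehicle's velocity to get
// two opposing "away" directions, then scale.
void turn_away(T2 vel_a, T2 vel_b, T2& away_a, T2& away_b, float scale)
{
    T2 mean = { (vel_a.x + vel_b.x) * 0.5f, (vel_a.y + vel_b.y) * 0.5f };
    away_a = { (vel_a.x - mean.x) * scale, (vel_a.y - mean.y) * scale };
    away_b = { (vel_b.x - mean.x) * scale, (vel_b.y - mean.y) * scale };

    // The usual hack: if both vehicles have the same velocity the away
    // vectors vanish, so pick arbitrary opposing perpendiculars instead.
    if (std::fabs(away_a.x) < 1e-6f && std::fabs(away_a.y) < 1e-6f) {
        away_a = {  vel_a.y * scale, -vel_a.x * scale };
        away_b = { -vel_b.y * scale,  vel_b.x * scale };
    }
}
```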
Flocking (Separation, Cohesion, Alignment)
Ah, the famous Boids simulation. This is basically a set of three behaviours that interact to give you the beautiful flocking behaviour of groups of birds or fishes. All three of these behaviours make use of neighbourhood searches to find the set of other vehicles that are considered close by. For neighbourhood searches in Boids, Reynolds used a "pie chart" style neighbourhood mask. The thinking goes that vehicles shouldn't care about vehicles behind themselves in collision avoidance because they can't "see" them.
Figure 23: the boids neighbourhood mask.

If another vehicle is inside a circle surrounding the vehicle AND the dot product between the vehicle's forward direction and the vector from the vehicle to the collision candidate is above some threshold, the candidate is considered "a neighbour". Note that for simulations that have several groups of vehicles you'll need to include a test to see if the neighbour candidate is part of "your gang", if you don't want vehicles going off and following the wrong flock.
The first behaviour, separation, takes the set of neighbours and finds the vectors from the vehicle to each one of them. The length squared of each vector is calculated, then the vectors are normalised and summed with 1/r² weighting:
Vector2d steering(0,0);
for(i=0; i<number_neighbours; ++i) {
    Vector2d to = neighbour[i].pos - pos;
    float r = 1.0f / to.lensq();
    steering -= r * normalise(to);   // note the minus: the force repels
}
Figure 24: separation.

The idea is that neighbours that are further away will have less of an effect on the repelling force than those closer to your vehicle. The 1/r² weighting isn't intrinsic, and other, maybe table-based, weighting systems could just as easily be used for interesting effects.
The second behaviour, cohesion, keeps the overall group together. Search the local vicinity for neighbours, calculate the average position of your neighbours (not including yourself) and steer towards that position. What happens if your character has become separated from the group and the neighbourhood search returns no neighbours? You're a bit stuck, which is why the original Boids system did a global search of all other boids, at enormous expense. One strategy is to keep an axis aligned bounding box for your whole flock (something you should probably do for clipping anyway) and use the central point of that as an emergency target.
Figure 25: cohesion.

Finally, alignment takes your set of neighbours and steers the vehicle to face in generally the same direction as the flock. Loop through your set of neighbours and calculate the average forward facing vector (that's the normalised vehicle velocity). Add the difference between this and your own facing direction to yourself.
Figure 26: alignment.

In practice these three behaviours are calculated concurrently in a single run through the set of neighbours, as a single weighted sum. This composite behaviour is what people usually mean by "the flocking behaviour".
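A sketch of that single-pass weighted sum; the weights, names and the assumption that the neighbourhood test has already been done are all illustrative:

```cpp
#include <cmath>
#include <vector>

struct B2 { float x, y; };
struct Boid { B2 pos, vel; };

// Separation + cohesion + alignment in one pass over the neighbour set.
B2 flock_steer(const Boid& self, const std::vector<Boid>& neighbours,
               float w_sep, float w_coh, float w_ali)
{
    if (neighbours.empty()) return {0.0f, 0.0f};  // emergency target elsewhere

    B2 sep{0,0}, centre{0,0}, heading{0,0};
    for (const Boid& n : neighbours) {
        float dx = n.pos.x - self.pos.x, dy = n.pos.y - self.pos.y;
        float d2 = dx * dx + dy * dy;
        if (d2 > 1e-6f) {
            float d = std::sqrt(d2);
            sep.x -= (dx / d) / d2;   // unit vector away, 1/r^2 weighted
            sep.y -= (dy / d) / d2;
        }
        centre.x  += n.pos.x; centre.y  += n.pos.y;   // for cohesion
        heading.x += n.vel.x; heading.y += n.vel.y;   // for alignment
    }
    float inv = 1.0f / (float)neighbours.size();
    B2 coh = { centre.x * inv - self.pos.x,  centre.y * inv - self.pos.y };
    B2 ali = { heading.x * inv - self.vel.x, heading.y * inv - self.vel.y };
    return { w_sep * sep.x + w_coh * coh.x + w_ali * ali.x,
             w_sep * sep.y + w_coh * coh.y + w_ali * ali.y };
}
```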
Steering Formations
Red Storm Entertainment have done some sterling work in extending the formation behaviour into a fully working game. In their military simulation Force 21, vehicles are assigned to groups which must manoeuvre around the world intelligently, following paths, avoiding obstacles and carrying out military orders. In contrast to DK2, their maps were not block based or under user control and had large open areas - an excellent starting point for formation movement.

Some interesting results were:
- Wheeling a formation had to be done carefully as the extrema of large formations could end up moving very quickly. Exactly the same problem happens in real life vehicle combat.
- There is one formation that can always get past an obstacle such as a house or depression - the column. If a vehicle following its formation position found itself about to intersect an obstacle in the near future, it would interpolate its formation position with a column position and use the resulting formation position to avoid collision. Great little insight.
Figure 27: example of column interpolation for formations.

DK2 was to have formations as part of the gameplay for battles, but we encountered problems with manoeuvring formations around corners in constrained areas like corridors. How should it work? If you naively move the formation through tightly constrained spaces you'll find a lot of your formation positions intersecting walls or being crushed into very tight areas. The ideal solution would be to warp formation space around the corner using something like a generalised space deformation. The formation positions would be distorted around the corner and individual creatures would then attempt to keep up with the warped version of their assigned position. Groups would also have to travel at the pace of the slowest vehicle, otherwise vehicles would not be able to keep up, breaking the formation. There were many problems to solve:
- We would have to complicate gameplay by introducing a new object late in development - a formation group. Creatures would have to be added to and removed from the group as and when they died, got scared enough to run away or were reassigned to different jobs. This was a complication we didn't need at that time.
- How do we specify that a waypoint has been reached for the group? You could consider a waypoint reached when the leader reaches it or you could consider it reached when any one of the group is within range of it.
- Creatures all had different extents so selecting a good formation would have to be a dynamic operation.
- The user has complete control of the layout of the dungeon and it is hard to describe what a good space deformation looks like in every cornering case, especially as the path finding system doesn't take into account the extent of a formation.
Figure 28: examples of space warps for formation cornering.Plenty of interesting research to be done there.
All in all the problem was a bit too hard for this project under the time constraints we had. It also suited our game to resort to the random group approach - each vehicle had different speeds and extents which led to fun combinations. The emergent behaviour was that large, slow moving creatures would block the way for all but the smallest creatures when going into battle. The small creatures would arrive first, get scared and turn back only to be crushed between the enemy and the big creatures. Mayhem ensued, so we kept it in!
So, you've got a set of interesting movements and you want to blend them together to make your vehicle avoid walls AND avoid other vehicles AND follow the path. Some of these forces are critical and must be applied to the vehicle this iteration; some are merely cosmetic. Some forces will be huge and some will be tiny. A few of the forces will be trying to cancel each other out (e.g. separation and avoid_wall steering), and still others will try to lead the vehicle into an invalid state (like being embedded in an obstacle). Some behaviours may even be turned off by the AI because they're not currently needed, such as hiding or fleeing.
There's one thing to note about the behaviours we've designed so far: they fall into two camps - forces that are always on (constant forces) and forces that occur once in a while (occasional forces). Path following and flocking forces are always on; obstacle avoidance and other avoidance forces only occur when a collision is imminent.

We've got to get our heads around the semantics behind each force - the meaning and relative magnitude of each steering vector - and find a way of merging them together to get a single velocity for the vehicle to follow this iteration. This is the "black art" area of steering behaviours and there are several strategies that seem to work.
In DK2 we had a set of rules that had to be enforced on behaviours:
- Vehicles MUST NOT intersect walls.
- Vehicles SHOULD NOT intersect obstacles.
- Vehicles should TRY NOT to intersect each other.
This hierarchy of rules had to be enforced due to design decisions made early on in the game's evolution, before steering behaviours were introduced. The basic idea was that the world changes constantly and that certain types of creature-obstacle intersection (and some downright cheating) were better than failing to complete a task. This is covered in more detail later in the section on "Difficult Cases".
Weighted Average
The first strategy is a simple weighted average - compute your forces, scale them and sum the result. The benefit of this strategy is that all your forces contribute to the resulting motion all the time and it seems easy to control which forces are dominant at any moment. If a creature gets stuck in a dense situation where there's a lot of separation "buffeting" going on, it will still execute its wall avoidance, which should swamp out the other forces. The overall effect is wall avoidance. There are, however, downsides:
- It's expensive. You're calculating every force every iteration for every vehicle, yet not every behaviour contributes a force each iteration - you can waste map lookups, say, just to find out that there is nothing to return.
- Where do the truncations to max_force happen? Do you sum all the raw forces and truncate at the end? Do you truncate each force before summing them, adding a lot of square roots to the calculation time?
- There are undesirable effects in scaling forces. Scaling your path motion forces means that you'll be moving along the path more slowly than you should, just so that obstacle avoidance can negate this force.
- How big should the scaling factors be? Should they sum to 1.0 or to some other number? If they sum to 1.0, you're forcing path motion to produce slower movement than you would like. If your scaling factors sum to greater than 1.0 so that constant forces like path following give you reasonable values, how big do the occasional forces need to be to swamp them? And what if two occasional forces fire at the same time?
This was the technique we used in DK2, but it was fiddly to work out exactly what the weighting values should be. I wouldn't recommend it to anyone who likes to sleep well at night. Here's a snippet of the resulting code:
Vehicle::Vehicle()
{
    clear_behaviours();
    add_behaviour(1.0f, "avoid_walls");
    add_behaviour(1.4f, "avoid_obstacle");
    add_behaviour(0.8f, "repel_obstacle");
    add_behaviour(0.2f, "follow_path");
    add_behaviour(0.5f, "separation");
    ...
}

void Vehicle::next_step()
{
    Vector2d steering;
    BehaviourList::iterator i;
    for (i = behaviour_list.begin(); i != behaviour_list.end(); ++i) {
        // each behaviour's force is scaled by its weighting before summing
        steering += i->calculate_force(*this) * i->behaviour_scale;
    }

    // step the vehicle forwards.
    Vector2d acc = truncate(steering, max_force) / mass;
    vel = truncate(vel + acc, max_speed);
    pos = pos + vel;

    // recalc local coordinate system.
    local_xaxis = normalise(vel);
    local_yaxis = perp(local_xaxis);
}

Prioritised Dithering
The second strategy is the one used by Craig Reynolds in his demonstrations, which he calls prioritised dithering. The idea is that there is a conceptual tree of operations that can be used this iteration. The algorithm goes like this:
- Generate a random value.
- If this random value passes a threshold test, execute the no.1 behaviour (usually wall avoidance).
- If the previous behaviour returned no value, execute the next behaviour.
- If the previous behaviour returned no value, execute the next behaviour, etc.
- Until done.
This example only randomises the first behaviour, but feel free to randomise others further down the tree if that suits your system. Some code:
void Vehicle::next_step()
{
    Vector2d steering;
    if (random() < 0.65f)
        steering = containment(this);
    if (steering.is_zero())
        steering = avoid_obstacle(this);
    if (steering.is_zero())
        steering = follow_path(this);
    ...
}

In effect you execute behaviours in order until one of them returns a non-zero value, adding in a little bit of randomisation to allow other behaviours to "bleed in". This works very well in systems with a limited number of behaviours running concurrently where most of the behaviours are occasional (as in Craig's examples).
Problems with this technique occur when the world forces occasional behaviours to become continuous. For example, in DK2 it was likely that at certain points in the game the density of creatures would get very high - during a battle, say. This causes the separation behaviour to become continuous and, if you haven't randomised it, it will be executed in preference to path following, causing the creature to fail to follow the path. Creatures will get buffeted around by other creatures and never reach their destinations. In our game it was more important that a creature reached its destination than that it kept its illusion of reality. Such is the dilemma for AI coders in games that allow the user to do almost anything they like. Worst case scenarios do happen - code for them.
Conclusion
We've also tried out a number of other strategies which have proven unsatisfactory - executing only one behaviour per iteration (too choppy), an "energy budget" which executes behaviours until they produce a desired vector magnitude (wasteful - you end up throwing away the last result) and other forms of dithering.

It would seem that the best strategy for blending behaviours is a hybrid of the two strategies above - make yourself a list of decisions where each level of the list may be a weighted average of several behaviours. I've not personally tried this strategy so I'll leave it as an exercise for the reader. (I've always wanted to say that!)
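For the curious, one possible shape for that hybrid - a sketch only, with made-up names - is a prioritised list of levels, where each level is itself a small weighted sum and the first level producing a non-zero blend wins:

```cpp
#include <cassert>
#include <functional>
#include <vector>

struct V2 {
    float x = 0.0f, y = 0.0f;
    bool is_zero() const { return x == 0.0f && y == 0.0f; }
};

// One behaviour plus its weighting within a level.
struct WeightedBehaviour {
    float weight;
    std::function<V2()> behaviour;
};
using Level = std::vector<WeightedBehaviour>;

// Walk the levels in priority order; the first level whose weighted sum
// is non-zero supplies this iteration's steering force.
V2 blend_levels(const std::vector<Level>& levels)
{
    for (const Level& level : levels) {
        V2 sum;
        for (const WeightedBehaviour& wb : level) {
            V2 f = wb.behaviour();
            sum.x += f.x * wb.weight;
            sum.y += f.y * wb.weight;
        }
        if (!sum.is_zero())
            return sum;
    }
    return V2{};   // nothing fired this iteration; keep coasting
}
```

This keeps the cheapness of prioritisation (lower levels aren't evaluated when a higher one fires) while still letting related forces within a level balance each other.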
In coding DK2, we came across a number of problem cases in our changing world which required some pretty nasty hacks to solve. First, here's a recap of the game.
The gamer has a square grid based world called the Dungeon. Each cell of the world has a type - path, solid rock, gold, lava, etc. The gamer also has a number of creatures of different types under his control that he can task to dig new rooms, research spells, mine resources or fight off the incoming heroes. When rooms have been dug the user can magically select their type - training room, torture room, treasury, etc. Each one of these rooms has a different layout of features like columns, chests of gold, torture equipment, etc. The user can also select groups of creatures to pick up and drop anywhere on the map that is under their control (i.e. claimed tiles).
Extent
The first major problem was to do with creatures having an extent. Most navigation techniques in games treat vehicles as points without a size and just extend collidable objects and areas by the maximum radius of the largest vehicle (the Minkowski sum). The problem here is that DK2 had a wide range of creatures with radii of 0.2 to 0.8 of a tile width - some creatures could pass each other with space to spare, others would block the way almost completely (which was the point). This range of extents precluded us from precalculating collision maps - it would have taken too much memory. We just had to make the creatures intelligent enough to navigate paths regardless. Paths would sometimes go through corners and (very occasionally) along the surface of walls, so path following had to take this into account.
Figure 29: examples of bad navigation paths.
- The tile-based world could have obstacles on each and every tile. Some rooms like the torture room and the training room had large obstacles arranged in grid patterns with a small amount of room between each obstacle. Big creatures still had to navigate between the spaces without getting too distracted in avoiding obstacles, hence all the trouble with ordering behaviours and blending them correctly.
- One room caused a lot of heartache - the dormitory. Higher level creatures would enter your dungeon by "choice" and make for the dormitory where they would place down their bed. Each creature had different sleeping arrangements (a coffin, a slimy depression, a nest, etc) which it would place randomly in the dorm. There was no way of controlling the density of these obstacles and obstacles would appear at random times while other creatures were navigating through the room to get elsewhere. A nasty case.
- Interactions between creatures had to take into account their relative sizes. When a huge Bile Demon walks through a crowd of imps, the imps must be scattered and the Bile Demon must remain mostly unaffected. This was done using a table of relative masses, from one to five, showing how much force each creature gets from a separation vector.
            ME
       1    2    3    4    5
     +------------------------
   1 | 1/2  1/3  1/4  1/5  1/6
Y  2 | 2/3  1/2  2/5  2/6  2/7
O  3 | 3/4  3/5  1/2  3/7  3/8
U  4 | 4/5  4/6  4/7  1/2  4/9
   5 | 5/6  5/7  5/8  5/9  1/2
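The table appears to follow a simple rule - reading a row rank r against a column rank c, the listed fraction is r/(r+c), with equal ranks giving the even 1/2 split. If that reading is right (an assumption on my part), the whole table collapses to one function:

```cpp
#include <cassert>

// Fraction of the separation force for a pair of mass ranks
// (1 = lightest, 5 = heaviest). Matches the table: row r, column c -> r/(r+c).
float separation_share(int row_rank, int col_rank)
{
    return float(row_rank) / float(row_rank + col_rank);
}
```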
- There was one obstacle that was bigger than all the others - the Dungeon Heart. In order for obstacle searches to return an obstacle for collision detection, the centre points of all obstacles are checked against the search area. Fine, but if an obstacle covers 3x3 tiles then a naive search won't return it when its centre point lies outside the search area, even though its extent overlaps the search. The side effect of this was creatures walking through the edges of the Dungeon Heart, suddenly realising they were intersecting it and backing off, only to do the same thing a few iterations later. To fix it we special-cased that one object, lamers that we are. A better solution would be to add a reference to the obstacle to each map cell that it intersects, making sure it only gets processed once per map search.
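A sketch of that better solution - assumed names and a stamp-based dedup, not the DK2 code: register each obstacle in every cell its extent overlaps, then mark obstacles with a search id so a multi-cell obstacle like the Dungeon Heart is returned exactly once per query.

```cpp
#include <cassert>
#include <vector>

struct Obstacle {
    float min_x, min_y, max_x, max_y;   // axis-aligned extent in tile units
    int last_search = -1;               // stamp for once-per-search dedup
};

struct ObstacleMap {
    int width, height;
    std::vector<std::vector<Obstacle*>> cells;   // one obstacle list per tile
    int search_id = 0;

    ObstacleMap(int w, int h) : width(w), height(h), cells(w * h) {}

    // Register the obstacle in every cell its extent touches, not just
    // the cell containing its centre point.
    void add(Obstacle* ob) {
        for (int y = int(ob->min_y); y <= int(ob->max_y); ++y)
            for (int x = int(ob->min_x); x <= int(ob->max_x); ++x)
                cells[y * width + x].push_back(ob);
    }

    // Gather every obstacle overlapping the cell rectangle exactly once.
    std::vector<Obstacle*> query(int x0, int y0, int x1, int y1) {
        std::vector<Obstacle*> out;
        ++search_id;
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                for (Obstacle* ob : cells[y * width + x])
                    if (ob->last_search != search_id) {
                        ob->last_search = search_id;   // seen this search
                        out.push_back(ob);
                    }
        return out;
    }
};
```

The stamp avoids clearing any per-search state: bumping search_id implicitly invalidates every obstacle's previous mark.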
Figure 30: example of a bad neighbourhood search.

Changing World
- Imps are the workhorses of your dungeon - they dig out rooms, mine gold, claim land and reinforce walls. There were occasions where an imp was reinforcing a room corner and the user created a type of room that had columns at the corners. The imp would get stuck in the corner behind this immovable column. The fix was to enforce a few iterations of obstacle ignoring if the imp had previously been stuck; this was called escape_stuck_mode. Not a hack I'm proud of.
Figure 31: A stuck imp.
- When the user picked up creatures they were free to drop them wherever they wanted to, even if it was inside obstacles and into stuck situations. Extra code had to be added to post-process drop positions to make sure they were valid - you shouldn't be allowed to drop creatures inside other creatures or obstacles. Bad. Naughty.
- The path generation knows nothing about the outside world and can quite easily give you a waypoint that is inside an obstacle. Check for this situation and take it into account when deciding whether or not you've reached a waypoint.
- Creatures would occasionally come up against the enemy - the Heroes. There was no way a hero would get out of the way for a creature, so heroes were flagged as "blockers" and did not react to separation forces from creatures. In effect they acted as movable obstacles, even though they could still use separation between themselves. Creatures would also give heroes a wide berth by special-casing the separation radius for a creature-hero pair.
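On the earlier point about waypoints landing inside obstacles, one minimal way to take that into account (a sketch with assumed parameters, not the DK2 check) is to widen the acceptance radius when the waypoint is known to sit inside an obstacle:

```cpp
#include <cassert>
#include <cmath>

// Has the vehicle reached a waypoint (dx, dy) away? If the path generator
// placed the waypoint inside an obstacle, accept it from further out with
// some extra slack rather than trying to stand on an unreachable point.
bool waypoint_reached(float dx, float dy, float reach_dist,
                      bool waypoint_in_obstacle, float slack)
{
    float allowed = waypoint_in_obstacle ? reach_dist + slack : reach_dist;
    return std::sqrt(dx * dx + dy * dy) <= allowed;
}
```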
Path Following meets AI
- There's more to path following than simply navigating down a path. The game AI contains information on why the creature is going to that destination that the Pilot shouldn't need to know - was the creature going to a destination to run an animation that needed accurate placing? Was it just going to be redirected when it got there (as in the case of battle)? Was it going to interact with an object that is currently represented as an obstacle? To cover these situations we added a set of flags to the Pilot API that allowed us to specify certain kinds of end conditions on the path following:
ARRIVE_WITH_VELOCITY   - use seek for the last waypoint.
ARRIVE_AND_STOP        - use arrival for the last waypoint.
IGNORE_END_OBSTACLES   - do not use repel_obstacle for the last waypoint.
REPEL_END_OBSTACLES    - use repel_obstacle for the last waypoint.
IGNORE_FINAL_CREATURE  - do not use separation within range of the last waypoint.
REPEL_FINAL_CREATURE   - use separation within range of the last waypoint.
ACCURATE_POSITIONING   - linearly interpolate along the final path section.
LOOSE_POSITIONING      - use "arrive" and a distance threshold for the last waypoint.

Real World Development
- Of course, just to complicate things, DK2 was originally going to be a simultaneous-ship PlayStation and PC product, meaning that all the behaviours and calculations I've been talking about actually took place in 16.16 fixed-point with all the associated overflow/underflow checks.
- To save memory further, all paths were stored as partial lists of at most six waypoints, with the vehicle renavigating as it reached the end of the stored subpath.
Thanks to all at Bullfrog and Electronic Arts for generously giving me the time to write this tutorial, to Ian Shaw for believing that this technique would work, Jarl Ostensen for being the best maths explainer around, the long suffering game teams at Bullfrog for actually using my work in production games and Big Up to the hive mind of Bullfrog R&D for listening to me go on and on and on about steering behaviours without complaining too much (we rock!). Many thanks to Craig Reynolds for answering my constant emails asking how he did things and for graciously providing a pre-print of his original paper. Thanks to my wife Christiane for her constant support and for giving me the quiet time to finish this paper.
Eberly, D., Magic Software library. (http://www.magic-software.com/)
Reynolds, C.W., "Steering Behaviours for Autonomous Characters", Proceedings of the Game Developers Conference, p763-782, 1999. (http://www.red.com/cwr/steer/)
Van Verth, J. et al, "Formation-Based Pathfinding With Real-World Vehicles", Proceedings of the Game Developers Conference, p675-689, 2000.