Fuzzy AGV

In this example, we are going to go into more depth than we did in our first example. Before we go any further, a brief explanation of an inference engine is in order.

Although AForge.NET makes it very easy and transparent for us to create an InferenceSystem object, we should start by saying a little about what such a system is. A fuzzy inference system is a model capable of executing fuzzy computing. This is accomplished using a database, linguistic variables, and a rule base, all of which can be held in memory. The typical operation of a fuzzy inference system is as follows:

  • Get the numeric inputs
  • Use the database of linguistic variables to obtain a linguistic meaning for each numeric input
  • Verify which rules from the rule base are activated by the input
  • Combine the results of the activated rules to obtain a fuzzy output
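
The four steps above can be sketched in miniature before we build the real thing. The following Python snippet is purely illustrative (the toy temperature variable, its membership functions, and the two rules are invented for this sketch, not part of the AForge.NET API): it takes a numeric input, fuzzifies it against two linguistic labels, fires two rules, and combines their consequents with a simple weighted average:

```python
# A toy fuzzy inference pass: fuzzify -> fire rules -> combine.
# Everything here is invented for illustration; AForge.NET does the
# equivalent work for us via Database, LinguisticVariable, and Rule.

def mu_cold(t):
    """Membership of temperature t in 'Cold' (1 at or below 0, 0 at or above 20)."""
    return max(0.0, min(1.0, (20 - t) / 20))

def mu_hot(t):
    """Membership of temperature t in 'Hot' (0 at or below 10, 1 at or above 30)."""
    return max(0.0, min(1.0, (t - 10) / 20))

def infer(t):
    # Step 1: get the numeric input t
    # Step 2: obtain a linguistic meaning for it
    cold, hot = mu_cold(t), mu_hot(t)
    # Step 3: verify which rules fire (IF Cold THEN heater = 80; IF Hot THEN heater = 10)
    fired = [(cold, 80.0), (hot, 10.0)]
    # Step 4: combine the activated rules into one crisp output
    num = sum(strength * value for strength, value in fired)
    den = sum(strength for strength, _ in fired)
    return num / den if den else 0.0
```

For an input of 15 degrees, both labels fire at strength 0.25 and the combined output lands midway at 45.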

For us, the part where most of the work will be performed is in initializing our fuzzy logic system. Let's break this down into our individual steps as previously outlined.

First, we prepare the linguistic labels (fuzzy sets) that compose the distances we will have. They are Near, Medium, and Far:

FuzzySet fsNear = new FuzzySet( "Near", new TrapezoidalFunction( 15, 50, TrapezoidalFunction.EdgeType.Right ) );
FuzzySet fsMedium = new FuzzySet( "Medium", new TrapezoidalFunction( 15, 50, 60, 100 ) );
FuzzySet fsFar = new FuzzySet( "Far", new TrapezoidalFunction( 60, 100, TrapezoidalFunction.EdgeType.Left ) );
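
To make the shapes of these sets concrete, here is a small Python sketch, illustrative only, that mirrors the membership curves defined above (the function names are ours; this is not AForge.NET's TrapezoidalFunction class). The two-argument constructors with EdgeType.Right and EdgeType.Left correspond to one-sided trapezoids that hold a membership of 1 toward one end of the range:

```python
def trapezoid(x, a, b, c, d):
    """Four-point trapezoid: rises from a to b, holds 1 from b to c, falls from c to d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def near(x):
    """Mirrors TrapezoidalFunction(15, 50, EdgeType.Right): full membership below 15."""
    if x <= 15:
        return 1.0
    return 0.0 if x >= 50 else (50.0 - x) / 35.0

def medium(x):
    """Mirrors TrapezoidalFunction(15, 50, 60, 100)."""
    return trapezoid(x, 15, 50, 60, 100)

def far(x):
    """Mirrors TrapezoidalFunction(60, 100, EdgeType.Left): full membership above 100."""
    if x >= 100:
        return 1.0
    return 0.0 if x <= 60 else (x - 60.0) / 40.0
```

A distance of 30, for example, is partly Near and partly Medium, and on that overlapping slope the two memberships sum to 1.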

Next, we initialize the linguistic variables we'll need. The first, lvRight, will be an input variable for the right distance measurement:

LinguisticVariable lvRight = new LinguisticVariable( "RightDistance", 0, 120 );
lvRight.AddLabel( fsNear );
lvRight.AddLabel( fsMedium );
lvRight.AddLabel( fsFar );

Now, we do the same for the left distance input measurement:

LinguisticVariable lvLeft = new LinguisticVariable( "LeftDistance", 0, 120 );
lvLeft.AddLabel( fsNear );
lvLeft.AddLabel( fsMedium );
lvLeft.AddLabel( fsFar );

Our last linguistic variable will be for the front distance measurement:

LinguisticVariable lvFront = new LinguisticVariable( "FrontalDistance", 0, 120 );
lvFront.AddLabel( fsNear );
lvFront.AddLabel( fsMedium );
lvFront.AddLabel( fsFar );

We now focus on the linguistic labels (fuzzy sets) that compose the angle. We need to do this step so that we can create our final linguistic variable:

FuzzySet fsVN = new FuzzySet( "VeryNegative", new TrapezoidalFunction( -40, -35, TrapezoidalFunction.EdgeType.Right ) );
FuzzySet fsN = new FuzzySet( "Negative", new TrapezoidalFunction( -40, -35, -25, -20 ) );
FuzzySet fsLN = new FuzzySet( "LittleNegative", new TrapezoidalFunction( -25, -20, -10, -5 ) );
FuzzySet fsZero = new FuzzySet( "Zero", new TrapezoidalFunction( -10, -5, 5, 10 ) );
FuzzySet fsLP = new FuzzySet( "LittlePositive", new TrapezoidalFunction( 5, 10, 20, 25 ) );
FuzzySet fsP = new FuzzySet( "Positive", new TrapezoidalFunction( 20, 25, 35, 40 ) );
FuzzySet fsVP = new FuzzySet( "VeryPositive", new TrapezoidalFunction( 35, 40, TrapezoidalFunction.EdgeType.Left ) );

Now we can create our final linguistic variable for the angle:

LinguisticVariable lvAngle = new LinguisticVariable( "Angle", -50, 50 );
lvAngle.AddLabel( fsVN );
lvAngle.AddLabel( fsN );
lvAngle.AddLabel( fsLN );
lvAngle.AddLabel( fsZero );
lvAngle.AddLabel( fsLP );
lvAngle.AddLabel( fsP );
lvAngle.AddLabel( fsVP );

We can now move on to creating our fuzzy database. For our application, this is an in-memory dictionary of linguistic variables, but there is no reason you can't implement it as a SQL, NoSQL, or any other type of concrete database, should you so desire:

Database fuzzyDB = new Database( );
fuzzyDB.AddVariable( lvFront );
fuzzyDB.AddVariable( lvLeft );
fuzzyDB.AddVariable( lvRight );
fuzzyDB.AddVariable( lvAngle );

Next, we will create the main inference engine. What is most interesting about this next line of code is the CentroidDefuzzifier. At the end of our inference process, we will need a numeric value to control other parts of the process. To obtain this number, a defuzzification method is performed. Let me explain.

The output of our fuzzy inference system is a set of rules with a firing strength greater than zero. That firing strength applies a constraint to the consequent fuzzy sets of those rules. When we put all of those fuzzy sets together, the result is a shape that represents the linguistic output meaning. The centroid method calculates the center of that shape's area to obtain a numerical representation of the output. It uses numerical approximation, so several intervals are chosen; as the number of intervals increases, so does the precision of the output:
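
Numerically, the centroid over n sampling intervals is just the weighted mean of the sampled points: x* = sum(xi * mu(xi)) / sum(mu(xi)). A minimal Python sketch of that approximation follows (illustrative only; AForge.NET's CentroidDefuzzifier does this for us, and the function name here is our own):

```python
def centroid(mu, lo, hi, intervals=1000):
    """Approximate the centroid of membership function mu over [lo, hi]
    using the midpoints of a fixed number of intervals."""
    step = (hi - lo) / intervals
    num = den = 0.0
    for i in range(intervals):
        x = lo + (i + 0.5) * step  # midpoint of interval i
        m = mu(x)
        num += x * m
        den += m
    return num / den if den else 0.0
```

A symmetric shape defuzzifies to its axis of symmetry, which makes for a quick sanity check: a triangle centered on zero yields roughly 0, and a flat band from 10 to 30 yields roughly 20. More intervals simply tighten the approximation.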

IS = new InferenceSystem(fuzzyDB, new CentroidDefuzzifier(1000));

Next, we add the rules to our inference system. In AForge.NET, each rule is registered with the inference system's NewRule method as a plain-text string, using expressions of the form IF FrontalDistance IS Near THEN Angle IS VeryNegative; the full rule set follows this pattern across the combinations of distance labels.

After all this work, our inference system is ready to go!

The main code loop for our application will look like this. We will describe each function in detail:

if (FirstInference)
    GetMeasures();

try
{
    DoInference();
    MoveAGV();
    GetMeasures();
}
catch (Exception ex)
{
    Debug.WriteLine(ex);
}

Let's take a quick look at the GetMeasures function.

After getting the current bitmap as well as the position of our AGV, we call the HandleAGVOnWall function, which handles the scenario where our AGV is up against a wall and has nowhere to move. After this, DrawAGV handles drawing our AGV within our map. Finally, RefreshTerrain does exactly what its name implies:

private void GetMeasures()
{
    // Getting AGV's position
    pbTerrain.Image = CopyImage(OriginalMap);
    Bitmap b = (Bitmap)pbTerrain.Image;
    Point pPos = new Point(pbRobot.Left - pbTerrain.Left + 5, pbRobot.Top - pbTerrain.Top + 5);

    // AGV on the wall
    HandleAGVOnWall(b, pPos);

    DrawAGV(pPos, b);

    RefreshTerrain();
}

DrawAGV locates any obstacles in front of us, as well as to the left and to the right. If you have the Show Beams checkbox selected, you will see the front, left, and right avoidance-detection beams displayed:

private void DrawAGV(Point pPos, Bitmap b)
{
    Point pFrontObstacle = GetObstacle(pPos, b, -1, 0);
    Point pLeftObstacle = GetObstacle(pPos, b, 1, 90);
    Point pRightObstacle = GetObstacle(pPos, b, 1, -90);

    // Showing beams
    Graphics g = Graphics.FromImage(b);
    if (cbLasers.Checked)
    {
        g.DrawLine(new Pen(Color.Red, 1), pFrontObstacle, pPos);
        g.DrawLine(new Pen(Color.Red, 1), pLeftObstacle, pPos);
        g.DrawLine(new Pen(Color.Red, 1), pRightObstacle, pPos);
    }

    // Drawing AGV
    if (btnRun.Text != RunLabel)
    {
        g.FillEllipse(new SolidBrush(Color.Blue), pPos.X - 5, pPos.Y - 5, 10, 10);
    }

    g.DrawImage(b, 0, 0);
    g.Dispose();

    // Updating distances texts
    txtFront.Text = GetDistance(pPos, pFrontObstacle).ToString();
    txtLeft.Text = GetDistance(pPos, pLeftObstacle).ToString();
    txtRight.Text = GetDistance(pPos, pRightObstacle).ToString();
}
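
GetObstacle itself is not listed here, but conceptually each beam walks the map from the AGV's position along a fixed heading until it hits an obstacle or reaches its maximum range. The following Python sketch shows that idea on a boolean occupancy grid (the grid representation, function name, and parameters are invented for illustration; the book's code works on the bitmap directly):

```python
import math

def cast_beam(grid, x, y, heading_deg, max_range=120):
    """Step outward along a beam until an obstacle cell (True) or the map
    border is reached; return the distance travelled, capped at max_range."""
    dx = math.cos(math.radians(heading_deg))
    dy = math.sin(math.radians(heading_deg))
    for d in range(1, max_range + 1):
        cx = int(round(x + dx * d))
        cy = int(round(y + dy * d))
        if cy < 0 or cy >= len(grid) or cx < 0 or cx >= len(grid[0]):
            return d  # treat the border of the map as a wall
        if grid[cy][cx]:
            return d  # obstacle hit
    return max_range  # nothing within sensor range
```

Casting three such beams at headings offset by 0, +90, and -90 degrees from the AGV's own angle gives the front, left, and right readings that populate txtFront, txtLeft, and txtRight.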

The DoInference function runs one epoch (step) of our fuzzy inference system. Ultimately, it is responsible for determining the next angle for our AGV.

private void DoInference()
{
    // Setting inputs
    IS?.SetInput("RightDistance", Convert.ToSingle(txtRight.Text));
    IS?.SetInput("LeftDistance", Convert.ToSingle(txtLeft.Text));
    IS?.SetInput("FrontalDistance", Convert.ToSingle(txtFront.Text));

    // Setting outputs
    try
    {
        double NewAngle = IS.Evaluate("Angle");
        txtAngle.Text = NewAngle.ToString("##0.#0");
        Angle += NewAngle;
    }
    catch (Exception)
    {
        // Evaluate throws when no rule fires; keep the previous angle
    }
}

The MoveAGV function is responsible for moving our AGV one step. Approximately 50% of the code in this function is dedicated to drawing the historical trajectory of your AGV if you have Track Path checked:

private void MoveAGV()
{
    double rad = ((Angle + 90) * Math.PI) / 180;
    int Offset = 0;
    int Inc = -4;

    Offset += Inc;
    int IncX = Convert.ToInt32(Offset * Math.Cos(rad));
    int IncY = Convert.ToInt32(Offset * Math.Sin(rad));

    // Leaving the track
    if (cbTrajeto.Checked)
    {
        Graphics g = Graphics.FromImage(OriginalMap);
        Point p1 = new Point(pbRobot.Left - pbTerrain.Left + pbRobot.Width / 2, pbRobot.Top - pbTerrain.Top + pbRobot.Height / 2);
        Point p2 = new Point(p1.X + IncX, p1.Y + IncY);
        g.DrawLine(new Pen(new SolidBrush(Color.Green)), p1, p2);
        g.DrawImage(OriginalMap, 0, 0);
        g.Dispose();
    }

    pbRobot.Top = pbRobot.Top + IncY;
    pbRobot.Left = pbRobot.Left + IncX;
}
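
It is worth sanity-checking the trigonometry above. Screen coordinates grow downward, so with Angle at 0 we get rad = 90 degrees, IncX of roughly 0, and IncY = -4: the AGV moves straight up by four pixels per step. This small Python sketch mirrors the C# offset math (the helper name is our own):

```python
import math

def step_offsets(angle_deg, speed=4):
    """Mirror MoveAGV's offset computation: +y is down in screen coordinates."""
    rad = (angle_deg + 90) * math.pi / 180
    offset = -speed  # Offset += Inc, with Inc = -4
    # Python's round(), like Convert.ToInt32, rounds halves to even
    inc_x = int(round(offset * math.cos(rad)))
    inc_y = int(round(offset * math.sin(rad)))
    return inc_x, inc_y
```

An angle of 0 moves the AGV up, 90 moves it right, and 180 moves it down, which matches how pbRobot.Top and pbRobot.Left are updated.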

The main application with Show Beams selected:

With our application running, the AGV successfully navigates the obstacles, and both the path and the beams are displayed. Angle is the angle our AGV is currently facing, and the sensor readings correspond to the front, left, and right beam sensors:

Our AGV is making a successful complete pass through the obstacle course and continuing on:

Track Path and Show Beams can be separately selected:

This shows how, using the left and right mouse buttons, we can add obstacles (which block the AGV) and gateways (which let it pass through):
